<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<!-- saved from url=(0049)http://www.ics.uci.edu/~eppstein/180a/990204.html -->
<HTML><HEAD><TITLE>ICS 180, February 4, 1999</TITLE>
<META content="text/html; charset=gb2312" http-equiv=Content-Type>
<META name=Owner value="eppstein">
<META name=Reply-To value="eppstein@ics.uci.edu">
<META content="MSHTML 5.00.2614.3500" name=GENERATOR></HEAD>
<BODY><IMG alt="" height=72 src="ICS 180, February 4, 1999.files/icslogo2.gif" 
width=472>
<P><A href="http://www.ics.uci.edu/~eppstein/180a/index.html">
<H1>ICS 180, Winter 1999:<BR>Strategy and board game programming</H1></A>
<H2>Lecture notes for February 4, 1999<BR>Which nodes to search? Full-width vs. 
selective search</H2>Alpha-beta tells us how to search, but we still need to 
know when to expand a node (search its children) and when to just stop and call 
the evaluation function. 
<H3>The Horizon Effect</H3>The pseudo-code I've shown you so far searches every 
move out to a given fixed depth (this depth is also known as the <I>horizon</I>). 
Although this can be quite effective at seeing tactical threats that could be 
carried out within the horizon, it (obviously) can't detect threats that would 
take effect past the horizon; for instance, a depth-8 search (that is, a search 
four moves deep) would likely have little or no information about a forced 
checkmate in five moves. What it doesn't know, it can't defend against, so it 
simply ignores those long-term threats. But this sort of fixed-depth search 
can behave even worse when the position contains medium-depth threats in which 
some bad outcome is forced to occur, but where some lines have that outcome 
within the search horizon and some don't. In that case, the program can play 
horrible pointless moves in an attempt to delay the bad outcome long enough that 
it can't be seen. This phenomenon is known as the <I>horizon effect</I>. 
<P>Here's an example. In the following position, black's bishop is trapped by 
the white pawns. No matter what black does, the bishop will be taken in a few 
moves; for instance the white rook could maneuver h2-h1-a1-a2 and capture the 
bishop. That sequence is 8 plies deep, and suppose that the black program is 
searching to a depth of 8 plies. Probably the best move for black in the actual 
position is to trade off the bishop and pawns, e.g. bishop x pawn, pawn x 
bishop. In the remaining endgame, black's three connected passed pawns may be 
enough to win or draw against the rook. But a program searching 8 plies will 
likely instead move the black pawn forwards, checking the white king. White must 
respond (e.g. by taking the pawn with his king), but that forced response delays 
the loss of the bishop long enough that the program can't see it anymore, and 
thinks the bishop is safe. In fact, in this position, a fixed-depth program can 
continue throwing away its pawns, delaying the bishop capture a few more moves 
but probably causing the eventual loss of the game. 
<P>
<CENTER><IMG alt="horizon effect &#10;example" height=292 
src="ICS 180, February 4, 1999.files/990204.gif" width=292></CENTER>One way to 
counter the horizon effect is to add knowledge to your program: if it knows from 
the evaluation that the bishop is trapped, its search won't try to delay the 
capture by throwing away pawns. Another is to make the search faster and deeper: 
the more levels your program searches, the less likely you are to run across a 
situation like this where it is possible to delay the loss of the bishop past 
the horizon. But the most effective general solution is to make the search depth 
more flexible, so that the program searches deeper in the lines where a pawn is 
being given away and less deep in other lines where it doesn't need the depth. 
<H3>Brute Force and Selectivity</H3>
<P>Shannon's original paper on computer chess listed two possible strategies for 
adjusting the search depth of a game program. 
<P>The most obvious is what the pseudo-code I've shown you so far does: a 
full-width, brute force search to a fixed depth. Just pass in a "depth" 
parameter to your program, decrement it by one for each level of search, and 
stop when it hits zero. This has the advantage of seeing even weird-looking 
lines of play, as long as they remain within the search horizon. But the high 
branching factor means that it doesn't search any line very deeply (bachelor's 
degree: knows nothing about everything). And even worse, it falls prey to the 
horizon effect, as in the trapped-bishop example above. 
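<P>For concreteness, here is a minimal C sketch of that fixed-depth, 
full-width search. The game interface (<I>Position</I>, <I>gen_moves</I>, 
<I>make_move</I>, <I>unmake_move</I>, <I>eval</I>) is hypothetical 
scaffolding standing in for an engine's own routines, not code from these 
notes: <PRE>    /* Sketch (not the notes' own code): fixed-depth, full-width
       negamax alpha-beta.  Position, gen_moves, make_move, unmake_move
       and eval are hypothetical, game-specific helpers. */

    typedef struct Position Position;
    extern int  gen_moves(Position *p, int moves[]);  /* fills moves[], returns count */
    extern void make_move(Position *p, int m);
    extern void unmake_move(Position *p, int m);
    extern int  eval(Position *p);

    int alphabeta(Position *p, int depth, int alpha, int beta) {
        if (depth &lt;= 0) return eval(p);    /* hit the horizon: evaluate */
        int moves[256];
        int n = gen_moves(p, moves);
        for (int i = 0; i &lt; n; i++) {      /* full width: search every child */
            make_move(p, moves[i]);
            int score = -alphabeta(p, depth - 1, -beta, -alpha);
            unmake_move(p, moves[i]);
            if (score &gt; alpha) alpha = score;
            if (alpha &gt;= beta) break;      /* safe alpha-beta cutoff */
        }
        return alpha;
    }
</PRE>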
<P>The other method suggested by Shannon was selective pruning: again search to 
some fixed depth, but to keep the branching factor down only search some of the 
children of each node (avoiding the "obviously bad" moves). Such a search can 
go much deeper, but there are lines it doesn't see at all (Ph.D.: knows 
everything about nothing). Shannon thought this was a good idea because it's 
closer to how humans think. Turing used a variant of this idea, only searching 
capturing moves. More typically one might evaluate the children and only expand 
the <I>k</I> best of them where <I>k</I> is some parameter less than the true 
branching factor. 
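<P>As a rough sketch of this selective strategy (reusing the hypothetical 
game interface above; <I>sort_moves_by_static_eval</I> and the cutoff 
<I>K_BEST</I> are illustrative inventions, not from these notes): statically 
order the children, then recurse into only the best few. <PRE>    /* Sketch of selective (Shannon type B) search: expand only the
       K_BEST most promising-looking children. */

    #define K_BEST 8   /* illustrative tuning parameter */

    extern void sort_moves_by_static_eval(Position *p, int moves[], int n);

    int selective(Position *p, int depth, int alpha, int beta) {
        if (depth &lt;= 0) return eval(p);
        int moves[256];
        int n = gen_moves(p, moves);
        sort_moves_by_static_eval(p, moves, n);   /* best-looking moves first */
        if (n &gt; K_BEST) n = K_BEST;               /* prune the "obviously bad" rest */
        for (int i = 0; i &lt; n; i++) {
            make_move(p, moves[i]);
            int score = -selective(p, depth - 1, -beta, -alpha);
            unmake_move(p, moves[i]);
            if (score &gt; alpha) alpha = score;
            if (alpha &gt;= beta) break;
        }
        return alpha;
    }
</PRE>Whatever the static ordering discards here is invisible to the rest of 
the search, which is exactly the weakness discussed next. 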
<P>Unfortunately, "obviously bad" moves are often not bad at all, but are 
brilliant sacrifices that win the game. If you don't find one you should have 
made, you'll have to work harder and find some other way to win. Worse, if you 
don't see that your opponent is about to spring some such move sequence on you, 
you'll fall into the trap and lose. 
<P>Nowadays, neither of these ideas is used in its pure form. Instead, we use a 
synthesis of both: selective extension. We search all lines to some fixed depth, 
but then extend some lines deeper than that horizon. Sometimes we'll also 
do some pruning (beyond the safe pruning done by alpha-beta), but this is 
usually extremely conservative because it's too hard to pick out only the good 
moves; but we can sometimes pick out and ignore really bad moves. For games 
other than chess, with higher branching factors, it may be necessary to use more 
aggressive pruning techniques. 
<H3>When to extend?</H3>What is the point of extending? To get better (more 
accurate) evaluations. So, we should extend 
<OL>
  <LI>when the current evaluation is likely to be inaccurate, or 
  <LI>when the current line of play is a particularly important part of the 
  overall game tree search </LI></OL>(or some combination of both). 
<H3>Quiescence Search</H3>
<P>In chess or other games in which there are both capturing and non-capturing 
moves (checkers, go, fanorona), if there are captures to be made, the evaluation 
will change greatly with each capture. 
<P><I>Quiescence search</I> is the idea of, after reaching the main search 
horizon, running a Turing-like search in which we only expand capturing moves 
(or sometimes, capturing and checking moves). For games other than chess, the 
main idea would be to only include moves which make large changes to the 
evaluation. Such a search must also include "pass" moves in which we decide to 
stop capturing. So, each call to the evaluation function in the main alpha-beta 
search would be replaced by the following, a slimmed down version of alpha-beta 
that only searches capturing moves, and that allows the search to stop if the 
current evaluation is already good enough for a fail high: <PRE>    // quiescence search
    // call this from the main alphabeta search in place of eval()

    quiesce(int alpha, int beta) {
        int best = eval();                  // "stand pat": we may stop capturing here
        if (best &gt;= beta) return best;      // position already fails high
        if (best &gt; alpha) alpha = best;
        for (each capturing move m) {
            make move m;
            int score = -quiesce(-beta, -alpha);
            unmake move m;
            if (score &gt; best) best = score; // remember the best score found
            if (score &gt; alpha) {
                alpha = score;
                if (score &gt;= beta) break;   // beta cutoff
            }
        }
        return best;
    }
</PRE>Some people also include checks to the king inside the quiescence search, 
but you have to be careful: because there is no depth parameter, quiescence can 
search a huge number of nodes. Captures are naturally limited (each capture 
removes a piece, so any line of play contains only a bounded number of captures 
before you've run out of pieces to capture) but checks can go on forever and 
cause an infinite recursion. 
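<P>One common safeguard, sketched below under my own assumptions (the 
<I>check_budget</I> parameter and the <I>gen_captures</I>/<I>gen_checks</I> 
helpers are illustrative), is to give checking moves a small separate budget: 
captures are always tried, but checks only while the budget lasts, so the 
recursion is guaranteed to bottom out. <PRE>    /* Sketch: quiescence search that also tries checks, guarded by a
       small per-line budget so it cannot recurse forever. */

    extern int gen_captures(Position *p, int moves[]);
    extern int gen_checks(Position *p, int moves[]);

    int quiesce_checks(Position *p, int alpha, int beta, int check_budget) {
        int best = eval(p);                    /* stand-pat score */
        if (best &gt;= beta) return best;
        if (best &gt; alpha) alpha = best;
        int moves[256];
        int n = gen_captures(p, moves);        /* captures: always searched */
        if (check_budget &gt; 0)
            n += gen_checks(p, moves + n);     /* checks: only within budget */
        for (int i = 0; i &lt; n; i++) {
            make_move(p, moves[i]);
            int score = -quiesce_checks(p, -beta, -alpha, check_budget - 1);
            unmake_move(p, moves[i]);
            if (score &gt; best) best = score;
            if (score &gt; alpha) alpha = score;
            if (alpha &gt;= beta) break;
        }
        return best;
    }
</PRE>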
<H3>Selective extensions</H3>If the position has been active in the recent past, 
this may be evidence that further tactics are coming up, or that some of the 
previous moves were delaying tactics that prevent us from seeing deeply enough 
to get a good evaluation. So one often increases the search depth if the search 
passes through an "interesting" move such as a capture or a check. In the 
alpha-beta pseudocode, this would be accomplished by replacing the depth-1 
parameter to the recursive call to the search routine by the value 
depth-1+extension. You have to be careful not to do this too often, though, or 
you could end up with a hugely expanded (even possibly infinite!) search tree. 
<P>One trick helps make sure this extension idea terminates: only extend by a 
fraction of a level. Specifically, make the "depth" counter record some multiple 
of the number of levels you really want to search, say depth=levels*24. Then, in 
recursive calls to alpha-beta search, pass a value of depth-24+extension. If the 
extension is always strictly less than 24, the method is guaranteed to 
terminate, and you can choose which situations result in larger or smaller 
extensions. 
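<P>A sketch of that bookkeeping, using the 24-ticks-per-level granularity 
from the text (the particular extension sizes in <I>extension_for</I> are 
illustrative guesses, and <I>move_is_check</I>/<I>move_is_capture</I> are 
hypothetical helpers): <PRE>    /* Sketch of fractional extensions: "depth" counts ticks, with
       24 ticks = one level, so pass depth = levels*24 at the root.
       Every extension is strictly less than 24, so each ply consumes
       at least one tick and the search must terminate. */

    #define TICKS_PER_LEVEL 24

    extern int move_is_check(Position *p, int m);
    extern int move_is_capture(Position *p, int m);

    int extension_for(Position *p, int m) {
        if (move_is_check(p, m))   return 18;  /* 3/4 of a level -- illustrative */
        if (move_is_capture(p, m)) return 12;  /* half a level -- illustrative */
        return 0;                              /* quiet move: no extension */
    }

    int search(Position *p, int depth, int alpha, int beta) {
        if (depth &lt;= 0) return eval(p);        /* or drop into quiescence here */
        int moves[256];
        int n = gen_moves(p, moves);
        for (int i = 0; i &lt; n; i++) {
            make_move(p, moves[i]);
            int ext = extension_for(p, moves[i]);   /* always &lt; TICKS_PER_LEVEL */
            int score = -search(p, depth - TICKS_PER_LEVEL + ext, -beta, -alpha);
            unmake_move(p, moves[i]);
            if (score &gt; alpha) alpha = score;
            if (alpha &gt;= beta) break;
        }
        return alpha;
    }
</PRE>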
<P>It may also be useful to include within the evaluation function knowledge 
about how difficult a position is to evaluate, and extend the search on 
positions that are too difficult. My program does this: the program passes the 
current depth to the evaluation function. If the position is complicated, and 
the depth is close to zero, the evaluation returns a special value telling the 
search to continue. But if the depth reaches a large negative number, the 
evaluation function always returns a real score, so that the search will 
eventually terminate. 
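<P>A sketch of that protocol (the sentinel value, the complication test, and 
the depth floor below are stand-ins of mine, not values from the notes): the 
evaluator is handed the current depth and may return a sentinel asking the 
search to keep going. <PRE>    /* Sketch: difficulty-triggered extension.  eval_with_depth may
       refuse to score a complicated position near the horizon, but must
       answer once depth is very negative, so the search terminates.
       EVAL_TOO_HARD, position_is_complicated, static_score and the -6
       floor are all illustrative. */

    #define EVAL_TOO_HARD 2000000000           /* outside any real score range */

    extern int position_is_complicated(Position *p);
    extern int static_score(Position *p);

    int eval_with_depth(Position *p, int depth) {
        if (position_is_complicated(p) &amp;&amp; depth &gt; -6)
            return EVAL_TOO_HARD;              /* ask the search to continue */
        return static_score(p);                /* deep enough: must answer */
    }

    int search_hard(Position *p, int depth, int alpha, int beta) {
        if (depth &lt;= 0) {
            int v = eval_with_depth(p, depth);
            if (v != EVAL_TOO_HARD) return v;  /* normal horizon stop */
            /* complicated: fall through and search one ply deeper */
        }
        int moves[256];
        int n = gen_moves(p, moves);
        for (int i = 0; i &lt; n; i++) {
            make_move(p, moves[i]);
            int score = -search_hard(p, depth - 1, -beta, -alpha);
            unmake_move(p, moves[i]);
            if (score &gt; alpha) alpha = score;
            if (alpha &gt;= beta) break;
        }
        return alpha;
    }
</PRE>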
<H3>How to combine accuracy with importance?</H3>So far, we've just looked at 
trying to find the points at which the evaluation may be inaccurate. But maybe 
we don't care if it's inaccurate for unimportant parts of the tree, but we 
really do care for nodes on the principal variation. How do we take importance 
into account when performing selective extensions? 
<OL>
  <LI>Don't, let alpha-beta sort out importance and just extend based on 
  accuracy. 
  <P></P>
  <LI>Extend lines that are part of (or near) the principal variation (e.g. 
  singular extensions -- used in Deep Blue and/or its predecessors -- if there 
  is one move much better than others in a position, extend the search on that 
  move). 
  <P></P>
<LI>Moving away from alpha-beta: conspiracy number search -- what is the 
  minimum number of positions whose values would have to change to force the 
  program to make a different move? Search those positions deeper. </LI></OL>
<H3>Null-move search</H3>This idea fits in with the general theme of the 
lecture, adjusting search depth in appropriate circumstances, but it works 
in the opposite direction: instead of extending the search in hard positions, 
we reduce the search in easy positions. 
<P>The idea is based on a piece of knowledge about chess: it's very rare (except 
in the endgame) for it to be a disadvantage to move. Normally, if it's your turn 
to move, there is something you can do to make your position better. Positions 
in which all possible moves make the position worse are called "zugzwang" 
(German for move-compulsion), and normally only happen in endgames. In some 
other games, such as Go-Moku, zugzwang doesn't happen at all. So, if you changed 
the rules of chess to allow a "pass" move, passing would usually be a mistake 
and the game wouldn't change much. 
<P>So, suppose you have a search node that you expect to fail high (i.e., 
alphabeta will return a score of at least beta). The idea of null-move search is 
to search the "pass" move <I>first</I>, even though it's usually <I>not</I> the 
best move. If the pass move fails high, then the true best move is also likely 
to fail high, and you can return beta right away rather than searching it. To 
make this even faster, the depth at which the passing move is searched should be 
shallower than usual. 
<P>You should be careful: this heuristic changes the result of the search, and 
may cause you to miss some important lines of play. You shouldn't use null moves 
twice in a row (because then your search will degenerate to just returning the 
evaluation), and you should be careful to only use it in situations that are 
unlikely to be zugzwang. In chess, that means only positions with many pieces 
left. <PRE>    // alpha-beta search with null-move heuristic
    alphabeta(int depth, int alpha, int beta) {
        if (won game or depth &lt;= 0) return score;
        if (last move wasn't null &amp;&amp; position is unlikely to be zugzwang) {
            make passing move;
            // try the pass first, and shallower than a normal move
            int score = -alphabeta(depth-3, -beta, -beta+1);
            unmake passing move;
            if (score &gt;= beta) return beta;   // pass fails high: prune
        }
        for (each possible move m) {
            make move m;
            alpha = max(alpha, -alphabeta(depth-1, -beta, -alpha));
            unmake move m;
            if (alpha &gt;= beta) break;
        }
        return alpha;
    }
</PRE>
<P>
<HR>
<A href="http://www.ics.uci.edu/~eppstein/">David Eppstein</A>, <A 
href="http://www.ics.uci.edu/">Dept. Information &amp; Computer Science</A>, <A 
href="http://www.uci.edu/">UC Irvine</A>, Monday, 01-Feb-1999 16:58:05 PST. 
</BODY></HTML>
