binary search, we know little about how to organize large state spaces for
effective exhaustive search. Judging from the experience that the resources
needed to complete a given exhaustive search can rarely be predicted, we are
still in the process of learning the basics. And for sharpening our tools, the
well-defined ``micro worlds'' of games and puzzles, with state spaces of any
size we feel challenged to tackle, are well suited. Playful experiments provide
moving targets that let us explore the shifting boundary of feasible brute-force
problem-solving.
<P>It is common knowledge that the raw computing power available has increased
spectacularly over the past three or four decades - easily a factor of 10^6 with
respect to both processing speed and memory capacity. Whereas ``super
computers'' around 1960 offered kiloflops to megaflops and memories of 10
to 100 kilobytes, today's top-of-the-line machines are in the range of gigaflops
to teraflops and 10 to 100 gigabytes. What is less well known, perhaps
controversial but certainly arguable, is that there are reasons to expect
similar rates of growth in raw computing power over the coming few decades. A
number of independent factors can be listed in support of such a prediction. The
physical limits of current technologies have not yet been reached - Moore's
``doubling law'' is still in force; new technologies, such as optical storage
devices, are only in the beginning stages of their potential development; last
but not least, parallel and distributed computing are bound to increase in
popularity with the continuing price drop of hardware. As a portent of trends we
mention that at the 1995 World Computer Chess Championship, the program Star
Socrates participated, running on an Intel Paragon with 1800 processors.
<P>We hasten to emphasize the phrase ``raw computing power''! Whereas the
asymptotic complexity analysis of efficient (polynomial-time) algorithms tells
us with amazing accuracy what we can and cannot compute with given resources of
time and memory, it is far from clear what additional problems can be solved by
brute-force techniques with a millionfold increase of raw computing power. It is
easy to demonstrate a problem where such a boost does not even suffice to go
from a problem of size <EM>n</EM> to the same problem of size <EM>n+1</EM>. But
whereas problems like evaluating Ackermann's function are contrived by
theoreticians, we observe the same phenomenon in empirical settings. To return
to the example of computer chess: the 1800-processor Paragon monster came in second
to the World Champion ``Fritz'', which competed on a personal computer.
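<P>To make the theoretical half of this point concrete, here is a minimal Python sketch of
Ackermann's function; the driver loop and the particular argument values are merely
illustrative, not taken from the text.
<PRE>
import sys

def ackermann(m, n):
    # Ackermann's function, defined by the standard double recursion.
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

if __name__ == "__main__":
    sys.setrecursionlimit(100000)        # the recursion gets deep quickly
    for m in range(4):
        print([ackermann(m, n) for n in range(5)])
    # ackermann(4, 1) equals 65533, but ackermann(4, 2) already has
    # 19729 decimal digits: no millionfold increase in raw computing
    # power lets a naive evaluation advance from (4, 1) to (4, 2).
</PRE>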
<P>One expects a quantitative change of six orders of magnitude to lead to
qualitative changes, but we don't know ahead of time where the latter will take
place. This is particularly true for many problems such as combinatorial
optimization that exhibit no detectable regular structure to be exploited. Their
state spaces appear to be ``chaotic'': they allow no description briefer than an
exhaustive enumeration of all states, and thus do not yield to efficient
algorithms. Whatever progress has been made in this field owes a lot to
tinkerers who apply great ingenuity to attack intriguing toy problems. Let us
describe some of these. <BR><BR><BR><A name=4></A>
<H3>4. State Spaces, their Size and Structure</H3>
<P>Although one often talks about ``the state space of a problem'', this
terminology is misleading. A problem does not come with ``its state space'' -
rather, the problem solver must design a state space that models the problem
adequately. A given problem may have many plausible state spaces that differ
greatly in size, structure, and ease of understanding.
<P>In most cases, designing a state space, understanding its structure, and
finding a good representation for it is by far the most important ingredient of
an efficient search. Properties of the state space decide whether there exist
invariants, bounds, symmetries that serve to ``prune'', i.e. avoid visiting,
large parts of the state space. As a well-known illustration, consider the
problem: is it possible to cover all the squares of an amputated chess board
that is missing two diagonally opposite squares with dominoes, such that each
domino covers two vertically or horizontally adjacent squares? A plausible state
space might consist of all partial covers, with the initial state ``no
dominoes'' and final states that leave no room to place any additional domino.
But a simple parity argument suffices to answer ``no'', even if the problem is
generalized to <EM>n x n</EM> chess boards, avoiding search altogether:
Diagonally opposite corners are of the same color; thus the amputated chess
board has an unequal number of white and black squares, and cannot possibly be
covered by dominoes each of which covers one white and one black square.
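<P>The invariant can also be checked mechanically. The following Python sketch (our own
illustration) counts the two colors on an amputated <EM>n x n</EM> board; a full domino
cover would require the counts to be equal, and they never are.
<PRE>
def color_counts(n):
    # Count white and black squares of an n-by-n board from which the two
    # diagonally opposite corners (0,0) and (n-1,n-1) have been removed.
    removed = {(0, 0), (n - 1, n - 1)}
    white = black = 0
    for r in range(n):
        for c in range(n):
            if (r, c) in removed:
                continue
            if (r + c) % 2 == 0:
                white += 1
            else:
                black += 1
    return white, black

# Each domino covers one white and one black square, so a cover can exist
# only if the two counts are equal.
for n in range(2, 10):
    w, b = color_counts(n)
    print(n, w, b, "conceivable" if w == b else "impossible")
</PRE>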
<P>As an example of how a well-designed state space and representation supports
intuition and human problem solving, consider the following graphical
presentation of an introductory problem discussed in many artificial
intelligence textbooks, called ``cannibals and missionaries''. Three cannibals
and three missionaries wish to cross a river in a two-person boat. The cannibals
insist on never being left in a minority on either river bank, for fear of being
converted by a majority of missionaries. A minority of zero cannibals is ok, of
course - converting the empty set is no threat.
<P>A few sentences should suffice to explain the space depicted below. A problem
state consists primarily of the pair (CL, ML) of cannibals and missionaries on
the left bank, where the team starts. Thus we consider a matrix 0 <= CL <=
3, 0 <= ML <= 3. The constraint ``CL = 0 or CL >= ML'' rules out three
entries. But the same matrix can also be labeled with the complement (CR, MR),
and the constraint ``CR = 0 or CR >= MR'' rules out three more entries. The
problem state contains an additional bit, namely, whether the boat is on the
left or right shore. This bit flips at every move, so we take care of it by
distinguishing odd moves and even moves. Odd moves start from the left bank and
thus decrease (CL, ML) in one of 5 ways shown by arrows. Even moves start from
the right bank and decrease (CR, MR) as shown by arrows in the opposite
direction. Visual inspection quickly reveals that there is only one way to cross
the barrier that separates the initial state from the goal state. Under the
constraint of alternating between moves towards the upper left and moves back
towards the lower right, the ``N-shaped'' pattern shown in bold arrows is
unique. It is easy to extend these critical three steps to a solution consisting
of 11 moves. This concise visual representation of the state space supports
intuition and saves a lot of trial-and-error. <!--Figure: State space and operators of the ``Cannibals and missionaries'' puzzle--><BR><BR><BR>
<CENTER><IMG alt="Figure 1" src="Exhaustive Search.files/figure_1_big.gif">
<BR><SMALL>Figure 1: State space and operators of the ``Cannibals and
missionaries'' puzzle</SMALL> </CENTER><BR><BR>
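<P>For readers who prefer code to diagrams, the following Python sketch searches exactly this
state space; the function and variable names are ours, and breadth-first search is used here
so that the shortest (11-move) solution is returned.
<PRE>
from collections import deque

def safe(c, m):
    # Cannibals must not be in a minority; a minority of zero is fine.
    return c == 0 or c >= m

def solve(total_c=3, total_m=3, capacity=2):
    # State = (CL, ML, boat), boat = 1 if the boat is on the left bank.
    start, goal = (total_c, total_m, 1), (0, 0, 0)
    parent = {start: None}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state == goal:
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        cl, ml, boat = state
        for dc in range(capacity + 1):           # cannibals in the boat
            for dm in range(capacity + 1 - dc):  # missionaries in the boat
                if dc + dm == 0:
                    continue
                if boat:
                    ncl, nml = cl - dc, ml - dm  # crossing left -> right
                else:
                    ncl, nml = cl + dc, ml + dm  # crossing right -> left
                if not (0 <= ncl <= total_c and 0 <= nml <= total_m):
                    continue
                nxt = (ncl, nml, 1 - boat)
                if (safe(ncl, nml) and safe(total_c - ncl, total_m - nml)
                        and nxt not in parent):
                    parent[nxt] = state
                    queue.append(nxt)

solution = solve()
print(len(solution) - 1, "moves:", solution)     # 11 moves
</PRE>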
<P>The two examples of state spaces shown above are not typical. The first one
is an extreme case where an invariant defined on the state space prunes the
entire space, thus making search unnecessary. The second problem can be solved
only by searching, but the state space is so tiny that even unsystematic search
will soon find a solution. The examples do illustrate, however, the two basic
concerns in designing a state space, which become increasingly important the more
challenging the problem: a rigorous, understandable definition of the state
space, and the effectiveness of invariants in pruning the search.
<P>As an example of a state space that taxes the state of the art, consider the
game of Merrils (Nine Men's Morris, Muehle, Moulin) solved in <A
href="http://nobi.ethz.ch/febi/ex_search_paper/paper.html#Gasser95">[Gasser
95]</A>. The board position below gives a hint of the complexity of the game.
Two players White and Black alternate in first placing, then sliding, and
ultimately jumping one stone of their own color. During the 18-ply (ply = a move
by either player) opening, each player places his 9 stones on any unoccupied
point among the 24 playable points of the board. During the midgame, each player
slides one of his stones from its current location to an adjacent point. During
the endgame, when at least one player has exactly three stones left, that player
may jump, i.e. move any one of his stones to any unoccupied point. At all times,
when a player ``closes a mill'', i.e. gets three of his stones in a row along
adjacent points, he may remove an opponent's stone. A player loses when he first
cannot move or has fewer than three stones left. For Merrils players: the
position below is a mutual ``Zugzwang'', i.e. the player to move loses against
optimal play: White to move loses in 74 plies, Black in 60 plies. <!--Figure: State space of Merrils with symmetries--><BR><BR><BR>
<CENTER><IMG alt="Figure 2" src="Exhaustive Search.files/figure_2_big.gif">
<BR><SMALL>Figure 2: State space of Merrils with symmetries</SMALL>
</CENTER><BR><BR>
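<P>To give a flavour of how such rules translate into a position representation, here is a
small Python sketch, entirely our own; the two mill triples shown are hypothetical
placeholders, since the actual point numbering depends on the chosen board layout.
<PRE>
# Board points are numbered 0..23; MILLS would list all 16 point triples
# that form a line on the Merrils board.  Only two illustrative triples
# are shown here.
MILLS = [
    (0, 1, 2),    # e.g. one side of the outer ring
    (1, 9, 17),   # e.g. one cross line through the ring midpoints
    # ... remaining mill lines ...
]

def closes_mill(board, point, colour):
    # Would a stone of `colour` arriving on `point` complete a mill?
    # `board` maps each point to 'W', 'B' or None.
    return any(
        point in triple and
        all(board[p] == colour or p == point for p in triple)
        for triple in MILLS
    )
</PRE>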
<P>The Merrils board possesses the symmetry axes shown, which generate a group
consisting of 16 permutations. Using Polya's theory of counting to eliminate
symmetric duplicates, we obtain a state space with a total of 7'673'759'269
states, as detailed below. The space is partitioned into 28 subspaces according
to the number of stones on the board, ranging from the smallest subspace 3-3 to
the largest subspace 8-7, with over 10^9 positions. Part of the structure of the
space is shown in the diagram at right, which illustrates the information flow
among the subspaces. For example, a position with 9 white and 8 black stones
transfers, after capture of a stone, either into an 8-8 or a 9-7 position. Thus
the value of a 9-8 position is deduced from values of 8-8 and 9-7 positions, as
explained in section 5. <!--Figure: State space of Merrils: size and structure--><BR><BR><BR>
<CENTER><IMG alt="Figure 3" src="Exhaustive Search.files/figure_3_big.gif">
<BR><SMALL>Figure 3: State space of Merrils: size and structure</SMALL>
</CENTER><BR><BR><A name=5></A>
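<P>One standard way to exploit such a symmetry group when building the database is to store
only a canonical representative of each equivalence class. A minimal Python sketch, assuming
the 16 board symmetries are supplied as permutations of the 24 points (we do not reproduce
the permutations themselves):
<PRE>
def canonical(position, symmetries):
    # Return the lexicographically smallest image of `position` under the
    # symmetry group.  `position` is a tuple of 24 entries ('W', 'B', '.'),
    # `symmetries` a list of 16 permutations of range(24).
    return min(
        tuple(position[perm[i]] for i in range(24))
        for perm in symmetries
    )

# Two positions are symmetric duplicates exactly when their canonical
# forms coincide, so only canonical positions need to be stored.
</PRE>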
<H3>5. Exhaustive Search Techniques</H3>
<P>The literature on search uses an abundance of terminology to refer to the
same few basic concepts and to techniques that differ only with respect to
details. At the risk of oversimplification, let us single out a few fundamental
ideas and their relation to each other, and point out modifications that capture
a great variety of exhaustive search techniques.
<P>The state space is a graph, directed or undirected, with possibly a host of
other givens: initial states and goal states, functions defined on vertices
(states) or edges, and constraints of the most varied kind. ``Exhaustive''
refers to the fact that we lack a priori information for excluding any subspace,
hence the first requirement of an exhaustive search is a graph traversal
algorithm capable of visiting all the states. Depth-first search (DFS) is the
prime candidate. Its simple logic: ``keep going as long as you see anything new,
and when that is not possible, back up as far as necessary and proceed in a new
direction'', can be traced to the myth of Theseus traversing the Minotaur's
labyrinth with the help of a thread held by Ariadne. This thread, called a stack
in computer jargon, is obviously a key data structure for backing up. The myth
does not tell us, in detail sufficient for a programmer, how Theseus kept track of
the subspace already visited. It is instructive to investigate whether Theseus
would have needed a chalk to guarantee visiting every cave in the labyrinth, and
if not, what properties of the labyrinth and what algorithm guarantee an
exhaustive traversal.
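<P>In programming terms the thread is a stack and the chalk is a set of visited states. A
minimal Python sketch of depth-first search over an implicit graph, assuming only that a
function neighbors(s) returns the states adjacent to s:
<PRE>
def dfs(start, neighbors):
    # Depth-first traversal of an implicit graph.
    visited = {start}          # the chalk marks
    stack = [start]            # Ariadne's thread
    while stack:
        state = stack.pop()
        # ... process `state` here ...
        for nxt in neighbors(state):
            if nxt not in visited:
                visited.add(nxt)
                stack.append(nxt)
</PRE>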
<P>Whether or not a chalk is needed to mark states already visited is a major
issue for the space complexity of the search. The chalk marks require memory
proportional to the size of the graph, and that is typically much more than the
other memory requirements. In realistic computations, the graph is rarely
present in its entirety. Rather, its required parts are recomputed on demand,
based on the usual assumption that given any state s, we can compute all its
neighbors (adjacent states). But the totality of the chalk marks would
effectively require storing the entire graph. In contrast, Ariadne's thread or
stack can be expected to require only a diminishing fraction of memory -
ideally, storage space proportional to the logarithm of the number of states. An
entry in the stack used in DFS is usually a small amount of data. Since
consecutive states in the stack are neighbors in the graph and differ little,
only the difference is stored, not the entire state. In game terminology, two
consecutive positions differ by one move, and the former is obtained by playing
the reverse move from the latter position.
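<P>In game programs this idea appears as the ``make move / unmake move'' pattern: the stack
holds one move per level, and backing up replays the move in reverse. A sketch with
hypothetical helper functions legal_moves, make and unmake:
<PRE>
def search(position, depth, legal_moves, make, unmake):
    # Recursive DFS in which only moves, never whole positions, are stored.
    if depth == 0:
        return
    for move in legal_moves(position):
        make(position, move)       # push the difference, not the state
        search(position, depth - 1, legal_moves, make, unmake)
        unmake(position, move)     # backing up = playing the reverse move
</PRE>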
<P>Backtrack is one important special case of DFS that does not require a chalk.
Although the terminology of search is not standard, all the conventional
examples of backtrack show a state space given explicitly as a tree. If the
Minotaur's labyrinth is a tree, Theseus will see nodes visited earlier only
along the path from his current position to the root. Since these are
momentarily marked by the thread, they need no chalk mark. The only assumption
Theseus needs is that all the edges that emanate from a node are ordered, from
first to last: when he backs up along edge <EM>i</EM> he will next plunge deeper
into the labyrinth along edge <EM>i+1</EM>, if it exists.
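<P>Because the state space of a typical backtrack problem is a tree, the thread alone
suffices and no chalk is ever consulted. The n-queens problem, our own example here, is the
textbook case:
<PRE>
def queens(n, partial=()):
    # Count placements of n non-attacking queens, one per row.  The tuple
    # `partial` plays the role of Ariadne's thread: the path from the root
    # (the empty board) to the current node.  No chalk marks are needed,
    # since a node of a tree can never be reached twice.
    row = len(partial)
    if row == n:
        return 1
    count = 0
    for col in range(n):
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(partial)):
            count += queens(n, partial + (col,))
    return count

print(queens(8))   # 92 solutions
</PRE>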
<P>Problems may admit additional assumptions that define a subgraph T of the
state space that is a spanning tree, or a spanning forest of trees. Since
``spanning'' means that T reaches every state, a backtrack search on T becomes a
space-saving DFS of the entire state space. An interesting example is ``reverse
search''.
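<P>The idea behind reverse search can be sketched as follows (our own rendering, with assumed
oracles): a ``parent'' rule assigns to every state except the root a unique parent among its
neighbors, thereby defining a spanning tree implicitly; the search descends into a neighbor
exactly when the current state is that neighbor's parent, so the memory needed is
proportional only to the depth of the tree.
<PRE>
def reverse_search(root, neighbors, parent):
    # Enumerate all states of the spanning tree defined by `parent`,
    # without ever storing a set of visited states.
    yield root
    for nxt in neighbors(root):
        if nxt != root and parent(nxt) == root:
            yield from reverse_search(nxt, neighbors, parent)
</PRE>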