Solving Problems by Searching

Introduction

searching is one of the most common solution techniques in AI, and so we will spend some time discussing the basic terminology and approach

we usually imagine that our searching is done, at least conceptually, on a search tree

  • we can also do this on a graph, by using an “explored” list to keep track of already-visited states

a search tree consists of nodes, and edges between nodes that indicate single steps

  • a node contains the state of the environment, plus other information relevant to the search
    • an environment state is a description of the world at one time, i.e. all the relevant facts at one moment in time
  • usually, we will have a lot of nodes in any interesting search tree (e.g. billions and billions is not uncommon), and so in practice node representations should usually be quite efficient, storing only essential information
  • also, such gargantuan search trees often cannot fit entirely into memory at one time; thus, we will usually assume the tree will be expanded incrementally as we go
    • this is a very different assumption than with the sort of trees you probably dealt with in CMPT 225, where it was assumed the entire tree was always in memory

a root is specified, so that we know where the search starts

also, we have some way of recognizing goal nodes, i.e. the nodes that we are searching for that represent a solution to our problem

e.g. the 15-puzzle is a classic toy puzzle

  • it consists of 15 tiles, numbered 1 to 15, that can slide between the grid locations on a 4x4 board of cells

  • since there are 15 tiles and 16 cells, there is always one blank location

  • a tile adjacent to the blank can be slid into the blank (and then the blank is where that tile used to be)

    • a tile can move in only the 4 main directions: up, down, left, right
  • sliding one tile has a cost of 1

  • the goal is to arrange the tiles, one move at a time, until the board looks like this:

     1  2  3  4     Goal State
     5  6  7  8
     9 10 11 12
    13 14 15 *      * is the blank
    
  • usually the board starts in a random position

    • careful: exactly half of all permutations of the tiles are states that cannot reach this goal state, i.e. from half of all starting states the puzzle is unsolvable

    • famously, when the 15-puzzle was first sold in the 1800s, versions were made with the 14 and 15 tiles swapped like this:

       1  2  3  4     An Impossible state
       5  6  7  8
       9 10 11 12
      13 15 14 *      * is the blank
      

      it’s impossible to reach the goal state from this position making just legal tile moves!

  • when people solve the 15-puzzle, they are usually satisfied with any solution

  • but a much more difficult problem is, from any given solvable state, to find the minimum number of moves to the solution state

    • using the algorithms in this chapter, we will be able to find optimal solutions to the 3x3 variant (the 8-puzzle)
    • with an efficient implementation of the same algorithms (probably in a faster language, like C or C++), you could also find optimal solutions to the 15-puzzle
    • it’s possible, but very difficult, to find optimal solutions to random instances of the 5x5 variant (the 25-puzzle)
  • as far as I know, there is no way to find optimal solutions to random instances of the 6x6 puzzle or higher: it seems that some new ideas are needed!
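as an aside, the solvability claim above can be checked with a standard parity test: count the inversions among the tiles (pairs that are out of order when the board is read left-to-right, top-to-bottom) and look at the row of the blank; here is a small Python sketch (the flat-tuple board representation is just an assumed convention):

```python
def is_solvable(board):
    """board: flat tuple of 16 values, 0 for the blank,
    read left-to-right, top-to-bottom"""
    tiles = [t for t in board if t != 0]
    inversions = sum(1 for i in range(len(tiles))
                       for j in range(i + 1, len(tiles))
                       if tiles[i] > tiles[j])
    # row of the blank, counted from the bottom starting at 1
    blank_row_from_bottom = 4 - board.index(0) // 4
    # for a width-4 board: solvable iff inversions + blank row is odd
    return (inversions + blank_row_from_bottom) % 2 == 1

goal = tuple(list(range(1, 16)) + [0])
swapped = (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 15, 14, 0)
print(is_solvable(goal))     # True
print(is_solvable(swapped))  # False: the famous 14-15 swap
```

note that sliding a tile horizontally changes neither the inversion count nor the blank's row, and sliding vertically changes both parities at once, so this combined parity is invariant under legal moves — that is why the 14-15 swap can never reach the goal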

Problems

for this section, we define a problem as an object with at least these parts:

  • an initial state, where the agent starts
  • a set of actions that can be executed in a particular state; if \(s\) is a state, then \(Actions(s)\) is the set of all applicable actions, i.e. the actions that can be executed in \(s\)
  • a transition model that describes how exactly a state changes when an action is applied to it; the notation \(Result(s, a)\) is the state that results from doing action \(a\) in state \(s\)
  • a goal test that determines if a given state is a goal state
  • a path cost that assigns a numeric cost to each path; the cost of a path will be taken to be the sum of the individual costs along it; the notation \(c(s, a, s')\) is the cost of going from state \(s\) to \(s'\) by doing action \(a\)
    • e.g. if the nodes represent locations on a map, the path costs could represent times between cities, or distances between cities
    • e.g. if the nodes represent states of a Rubik’s cube puzzle, then the path cost between any two adjacent states would be 1, i.e. 1 move is needed to go between the states
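the parts above can be collected into a small Python interface; the class and method names here are just one plausible way to organize it, following the notation in the text:

```python
class Problem:
    """abstract search problem: initial state, actions,
    transition model, goal test, path cost"""
    def __init__(self, initial):
        self.initial = initial
    def actions(self, s):        # Actions(s): applicable actions in s
        raise NotImplementedError
    def result(self, s, a):      # Result(s, a): transition model
        raise NotImplementedError
    def is_goal(self, s):        # goal test
        raise NotImplementedError
    def cost(self, s, a, s2):    # c(s, a, s')
        return 1                 # default: every step costs 1

# example: a tiny route-finding problem on a map of cities
class RouteProblem(Problem):
    def __init__(self, initial, goal, roads):
        super().__init__(initial)
        self.goal = goal
        self.roads = roads       # dict: city -> {neighbour: distance}
    def actions(self, s):
        return list(self.roads[s])
    def result(self, s, a):
        return a                 # the action *is* the city driven to
    def is_goal(self, s):
        return s == self.goal
    def cost(self, s, a, s2):
        return self.roads[s][s2]
```

the `cost` default of 1 matches puzzles like the Rubik's cube example, while `RouteProblem` overrides it with road distances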

this model of what a problem is applies just to the kinds of search problems we are describing in this chapter

  • there are other models of problems, e.g. in robotics you often cannot be certain what state you end up in after doing an action, i.e. \(Result(s, a)\) might return a set of states with a probability distribution on them
    • e.g. a robot might try to pick up a glass; after doing that action, it might be holding the glass, or it might have crushed the glass, or the glass might have slipped out of its grasp and still be sitting on the table

Measuring Problem-solving Performance

search algorithm performance is typically evaluated in four ways:

  • completeness: If there is a solution, is the algorithm guaranteed to find it?
  • optimality: Does it find the goal node with the lowest path cost?
  • time complexity: How long does it take to find a solution?
  • space complexity: How much memory is needed to do the search?

A* Search: Minimizing the Total Estimated Solution Cost

if \(n\) is a node, then let \(g(n)\) be the cost of the path (from the root) to reach node \(n\)

  • note that we know \(g(n)\) precisely

and let \(h(n)\) be a heuristic function that estimates the cost of the cheapest path from \(n\) to a goal; then we could define the evaluation function \(f\) like this:

\[f(n) = g(n) + h(n)\]

ordering the nodes in the frontier by increasing \(f\)-value gives us an algorithm known as A*-search

intuitively, we can think of this \(f(n)\) as the estimated cost of the cheapest solution path that goes through node \(n\)

A*-search is important because, if the right conditions for the heuristic function \(h\) hold, then A*-search is both complete (will find a solution if there is one) and optimal

there are two conditions that \(h\) must satisfy:

  • \(h\) must be admissible
  • \(h\) must be consistent (or monotonic)
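putting the pieces together, A*-search itself is short to sketch in Python; the problem object is assumed to offer initial, actions, result, is_goal, and cost, matching the problem definition from earlier:

```python
import heapq
import itertools

def astar(problem, h):
    """A*-search: repeatedly expand the frontier node with the
    smallest f(n) = g(n) + h(n); with an admissible, consistent h,
    the first goal node popped is on an optimal path"""
    counter = itertools.count()  # tie-breaker so states are never compared
    start = problem.initial
    frontier = [(h(start), next(counter), 0, start, [start])]
    best_g = {start: 0}          # cheapest known g(n) for each state
    while frontier:
        f, _, g, s, path = heapq.heappop(frontier)
        if problem.is_goal(s):
            return path, g
        for a in problem.actions(s):
            s2 = problem.result(s, a)
            g2 = g + problem.cost(s, a, s2)
            if g2 < best_g.get(s2, float("inf")):
                best_g[s2] = g2
                heapq.heappush(
                    frontier,
                    (g2 + h(s2), next(counter), g2, s2, path + [s2]))
    return None                  # no goal reachable
```

storing the whole path in each frontier entry is wasteful but keeps the sketch short; a real implementation would store parent pointers instead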

Admissible Heuristics

a heuristic function is admissible if it never over-estimates the cost to reach the goal

  • admissible heuristic functions are optimistic: they report that the cost of achieving a solution is never more than what it actually is
  • an example of an admissible heuristic function is the straight-line distance between two points in a map
    • to get from point A to B, the best path is to follow the straight line that connects them
    • but in reality, you often can’t go straight, e.g. if you are driving then you must follow roads
  • another example is an admissible heuristic for the 8-puzzle: the number of tiles not already in their home position is always less than or equal to the number of moves that must be made to reach the goal state
  • for tree-search problems, admissibility is all we need to ensure A*-search is complete and optimal
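the misplaced-tiles heuristic for the 8-puzzle mentioned above is one line of code (states here are assumed to be flat 9-tuples, with 0 for the blank):

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def misplaced_tiles(state, goal=GOAL):
    """number of non-blank tiles not in their home position;
    admissible, since each misplaced tile needs at least one move"""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

print(misplaced_tiles((1, 2, 3, 4, 5, 6, 7, 0, 8)))  # 1
```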

Consistent Heuristics

a heuristic function \(h\) is consistent if for every node \(n\) and for every successor node \(n'\) generated by an action \(a\), this inequality holds:

\[h(n) \leq c(n, a, n') + h(n')\]

\(c(n, a, n')\) is the cost of going from node \(n\) to node \(n'\) by action \(a\)

A*-Search with a Consistent Heuristic is Optimal

an important property of A*-search is that it guarantees to find the cheapest path from the initial node to the goal, as long as a consistent heuristic is used

this result follows from two basic facts:

  • values of \(f(n)\) along any path are non-decreasing
  • if A* selects a node \(n\) for expansion, the optimal path to \(n\) has been found

thus, A*-search can be useful if you want to find the cheapest cost solution to a problem, e.g. the minimal number of moves to solve the 8-puzzle

  • other algorithms might be faster at finding a solution, but they generally cannot guarantee they’ve found the optimal solution

from a high-level view, the frontier of A*-search looks like a contour, and A*-search expands nodes on this contour in a way that is biased towards the goal

  • the more accurate the heuristic, the bigger the bias toward the goal, and the more likely it is to quickly find the goal

unfortunately, the number of nodes in these contours is typically exponential

  • in the worst case, A* behaves like breadth-first search, keeping huge numbers of frontier nodes in memory

so in practice, the problem with A* search is that for many problems it simply uses more memory than is available

so in practice, memory-efficient variants of A* search, such as IDA* (iterative-deepening A* search), RBFS (recursive best-first search) or SMA* (simplified memory-bounded A* search) must be used for most problems

  • or, as is often the case in practice with hard problems, the requirement for optimality is dropped

Heuristic Functions

the better your heuristic, the quicker A*-search can find a solution

generally, we want heuristics that estimate the true cost of a path to a goal without going over that true cost (i.e. admissibility)

so, suppose you happened to have two admissible heuristics, \(h_1\) and \(h_2\)

you can define a new admissible heuristic \(h_3\) as follows:

\[h_3(n) = max(h_1(n), h_2(n))\]

in general, if you have \(k\) admissible heuristics, then taking the max of them is another admissible heuristic

  • however, it can be difficult to say in general if the algorithm will run faster — it’s possible that the time spent calculating the \(k\) heuristic values outweighs the benefit of the improved heuristic accuracy
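as a sketch, combining admissible heuristics this way is a one-liner in Python (the component heuristics are assumed to already exist as functions on nodes):

```python
def max_heuristic(*hs):
    """combine admissible heuristics by taking their max;
    the result is still admissible (it never exceeds the true
    cost if none of its parts do) and is at least as accurate
    as each part on its own"""
    return lambda n: max(h(n) for h in hs)
```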

another sometimes useful trick for developing admissible heuristics is to consider a relaxed version of the problem being solved, i.e. a version of the problem with fewer constraints on actions

for example, in the 8-puzzle, suppose you could simply move each tile to its home position — this is a relaxed version of the 8-puzzle that corresponds to the misplaced tile heuristic

a different relaxation of the 8-puzzle is this: suppose a tile could slide unimpeded from its current position to its home position, one cell at a time (up, down, left, or right as usual)

  • this corresponds to the Manhattan metric
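the Manhattan heuristic this relaxation yields can be written directly (again assuming flat 9-tuple states with 0 for the blank):

```python
def manhattan(state, goal=(1, 2, 3, 4, 5, 6, 7, 8, 0)):
    """sum, over the non-blank tiles, of the horizontal plus
    vertical distance from each tile's position to its home"""
    total = 0
    for pos, tile in enumerate(state):
        if tile == 0:
            continue
        home = goal.index(tile)
        # positions are 0..8: column is pos % 3, row is pos // 3
        total += abs(pos % 3 - home % 3) + abs(pos // 3 - home // 3)
    return total

print(manhattan((1, 2, 3, 4, 5, 6, 7, 0, 8)))  # 1
```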

relaxation is an interesting idea in part because it is a fairly general-purpose rule for helping to find admissible heuristics

  • it can even be automated, to help discover heuristics automatically

Pattern Databases for Heuristics

at least for problems similar to the 8-puzzle (which includes things like Rubik’s cube), the most successful heuristics have been created using so-called pattern databases

the basic idea is to compute a very accurate heuristic function \(h\) by pre-solving simplified versions of the problem in a way that never over-estimates the true cost

to understand this, let’s re-create the Manhattan metric 8-puzzle heuristic as a small pattern database

recall that in the 8-puzzle, the Manhattan distance between a tile and its home location is the number of moves the tile must make to get to its home position if it were the only tile on the board

the Manhattan heuristic is then the sum of all these costs for each tile

this clearly never over-estimates the true number of moves: each tile must travel at least its Manhattan distance, and in the full 8-puzzle you usually have to make extra moves to get other tiles out of the way

now consider just tile 1, the tile that will be in the upper-left corner of the board in the solved puzzle:

1 2 3    Solution State for 8-puzzle
4 5 6
7 8 *

for each of the 9 positions on the board, we can pre-compute the Manhattan distance from that position to the home position of tile 1:

0 1 2   Pre-computed Manhattan distances to
1 2 3   home position of tile 1 (upper left corner)
2 3 4

so, on a board containing only the tile 1, if tile 1 is in its home position, 0 moves are required to move it home

if, instead, tile 1 was in the upper-right corner of the board, 2 moves would be needed to move tile 1 home (assuming it’s the only tile on the board)

to get the complete Manhattan heuristic estimate for a state of the 8-puzzle, we will need to create 9 of these tables, one for each possible home position:

0  1  2  # 1
1  2  3
2  3  4

1  0  1  # 2
2  1  2
3  2  3

2  1  0  # 3
3  2  1
4  3  2

1  2  3  # 4
0  1  2
1  2  3

2  1  2  # 5
1  0  1
2  1  2

3  2  1  # 6
2  1  0
3  2  1

2  3  4  # 7
1  2  3
0  1  2

3  2  3  # 8
2  1  2
1  0  1

4  3  2  # 9
3  2  1
2  1  0

these tables can be used to implement a very efficient version of the Manhattan heuristic (a good thing to do in heuristic search!)

  • this is essentially a table of 9*9=81 values, indexed by (home position, current location)
  • for example, if tile 5 is at location 3 (numbering locations 0 to 8, location 3 is the first position of the second row), then the table above for tile 5 says that 1 move is needed for 5 to get to its home location from position 3
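building the 9 tables above takes a couple of lines; this sketch indexes them by (home position, current position), with positions numbered 0 to 8, left to right, top to bottom (an assumed convention):

```python
# table[home][pos] = Manhattan distance from position pos to position home
table = [[abs(pos % 3 - home % 3) + abs(pos // 3 - home // 3)
          for pos in range(9)]
         for home in range(9)]

def manhattan_lookup(state, goal=(1, 2, 3, 4, 5, 6, 7, 8, 0)):
    """table-driven Manhattan heuristic: one table lookup per tile"""
    home_of = {tile: pos for pos, tile in enumerate(goal)}
    return sum(table[home_of[tile]][pos]
               for pos, tile in enumerate(state) if tile != 0)

print(table[0])  # the table for home position 0 (tile 1), row by row
print(table[4])  # the table for the centre cell (tile 5)
```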

but this approach also suggests what turns out to be a useful generalization

what if instead of only one tile on the board, we have two tiles on the board?

for example, consider just tile 1 and tile 2

there are 9*8=72 different placements of tile 1 and 2 on the 8-puzzle

here’s one of them:

. . 1
2 . .
. . .

4 moves are needed to get both tile 1 and tile 2 home (assuming no other tiles on the board)

here’s another example:

2 1 .
. . .
. . .

this needs 4 moves to get both tiles to their home position

  • note that the Manhattan heuristic estimates only 2 moves are needed
  • but more are needed because one of the tiles has to “move around” the other

for each of these 72 placements of tiles 1 and 2, we could find an optimal solution and record the number of moves the two tiles made into a table (i.e. a database)

  • the table would be indexed by tile positions so that whatever the positions of 1 and 2 are, we could quickly retrieve the number of moves needed to get them home
  • we could use A*-search to solve these simpler problems

now we could do the same trick for the other pairs of tiles: 3/4, 5/6, and 7/8

for each pair of tiles, we optimally solve the 72 possible tile placements, and store the results in a table indexed by position

overall, this gives us a table with 4*72=288 entries indexed by all possible positions of 2 tiles

we can now use this table to get a good heuristic estimate for the 8-puzzle: look up the positions of tiles 1/2, 3/4, 5/6, and 7/8, and add their corresponding values together
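here is a sketch of building one such pairwise table; the text suggests solving each placement with A*-search, but since every move costs 1, a single breadth-first search backwards from the solved placement fills in the whole table at once (positions again numbered 0 to 8):

```python
from collections import deque

# adjacency of the 9 cells of the 3x3 board
NEIGHBOURS = {0: [1, 3], 1: [0, 2, 4], 2: [1, 5],
              3: [0, 4, 6], 4: [1, 3, 5, 7], 5: [2, 4, 8],
              6: [3, 7], 7: [4, 6, 8], 8: [5, 7]}

def pair_database(home1, home2):
    """db[(p1, p2)] = min moves to bring two tiles from positions
    (p1, p2) to their homes, where either tile may step to any
    adjacent cell not occupied by the other (moves are reversible,
    so BFS backwards from the solved placement gives true distances)"""
    start = (home1, home2)
    db = {start: 0}
    queue = deque([start])
    while queue:
        p1, p2 = queue.popleft()
        d = db[(p1, p2)]
        for q in NEIGHBOURS[p1]:
            if q != p2 and (q, p2) not in db:
                db[(q, p2)] = d + 1
                queue.append((q, p2))
        for q in NEIGHBOURS[p2]:
            if q != p1 and (p1, q) not in db:
                db[(p1, q)] = d + 1
                queue.append((p1, q))
    return db

db12 = pair_database(0, 1)  # tiles 1 and 2, with homes 0 and 1
print(db12[(2, 3)])  # the ". . 1 / 2 . ." example above: 4 moves
print(db12[(1, 0)])  # the "2 1 ." swapped example: also 4 moves
```

note that the swapped placement gives 4 where plain Manhattan gives only 2, which is exactly the improvement the pairwise database buys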

this is an admissible heuristic that is a little better than the Manhattan heuristic: it is never smaller, and in some cases it is a greater value

and that’s the basic idea for creating heuristics using disjoint pattern databases

essentially, we pre-compute solutions to easier subsets of the problem, and use those as estimates for our heuristic

for more complex puzzles, such as the 4x4 15-puzzle or the 5x5 25-puzzle, pattern databases with more tiles are pre-computed, and they result in much more effective heuristics that can be used to solve random instances of those puzzles in a reasonable amount of time when used with a memory-friendly A*-search variant (such as IDA*-search)

  • however, for the 6x6 36-puzzle and beyond, this technique doesn’t seem to help since the pattern databases get too big and the heuristics don’t reduce the search enough
  • it seems like a new idea is needed to find optimal solutions to the 36-puzzle and beyond
    • maybe some variation on pattern database no one has thought of yet
    • or maybe some completely different idea!

Aside: Solving the 15-puzzle by Hand

here is a simple way to solve the 4x4 15-puzzle (and ones like it) without a computer …

start by solving the first row and the first column

  • the tricky part is getting the last two tiles of the row/column in place, but with a bit of experimentation you should be able to figure out the trick

now the first row and first column are done, and they won’t be touched again

the remaining tiles form a 3x3 8-puzzle

solve this by solving the first row (get tiles 6, 7, 8 in the first row) and the first column (6, 10, 14)

then solve the remaining 2x2 puzzle — it’s trivial!

and that’s it!

the resulting solutions are not necessarily optimal, but it is easy to remember and fairly easy for humans to do

which leads one to wonder: could you write a computer program that would come up with a general strategy like this for solving such puzzles, as opposed to just solving particular instances of it the way A*-search does?