Artificial Intelligence – Problem Set 1

1)    Task Environments

In class, we have discussed the following properties of task environments:

i.    fully observable versus partially observable
ii.    deterministic versus stochastic (a third, in-between possibility was strategic)
iii.    episodic versus sequential
iv.    static versus dynamic (a third, in-between possibility was semi-dynamic)
v.    discrete versus continuous
vi.    single agent versus multi-agent

Note that if you have been reading the relevant textbook chapters, the terms may be slightly different, or the in-between possibilities may be left out, depending on which edition you are using. I am using the terms the way we covered them in class, and the way they appear on my slides. We also talked about whether a task environment is known or unknown (which is not technically a property of the task environment, but of our knowledge of it). I am not asking about that in this question.

Consider the task environments for the following potential AI agents. For each task environment, indicate to which of the above categories the task environment belongs. You can, and should, make certain reasonable simplifications for this question; for example, pseudo-randomness counts as randomness, and digital images should be treated as continuous.

Also, for each task environment, state whether a rational agent would base its decisions on only the current percept or the entire percept history. So, for each of the potential agents, you are answering seven things.

Briefly explain any answers that are not obvious. (There may be answers that are debatable, and for those, I'll give you credit when you disagree with me if your explanation is reasonable. For many parts, I think there is just one clear answer.)

(a)    A program that plays Go against the user. (If you need to look up the rules, look them up. Note that the specific algorithm a program uses to play should not affect the answer; this question is about the task environment, not the agent itself.)

(b)    A program controlling a robotic arm that detects and stacks Jenga pieces (one of the senior projects this year!). The goal is not to have the robotic arm play the game; rather, the arm will detect pieces on a surface and build the initial tower.

(c)    A face recognition system that detects faces in an image and looks them up in a database of known faces to find matches (assume the system has already been trained).

2)    Games

Assume a program is playing a deterministic (really strategic), turn-taking, two-player, zero-sum game of perfect information. The game works as follows. A full game takes four moves. MAX takes a turn, then MIN takes a turn, then MAX, then MIN. Then the game is over, scored according to an objective function. Positive values represent wins for MAX (higher is better), whereas negative values represent wins for MIN (lower is better). A game tree for the game is depicted on the following page. MAX nodes are represented as upward pointing triangles, MIN nodes are represented as downward pointing triangles, and terminal nodes are represented as rectangles with the final scores indicated inside the rectangles.
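For reference, below is a minimal sketch of the standard minimax recursion under the scoring convention above (positive favors MAX). The node interface used here (is_terminal, score, children) is an assumed placeholder, not something provided with the assignment.

def minimax(node, is_max_turn):
    # Value of the node assuming both players play perfectly from here on.
    if node.is_terminal():
        return node.score()          # terminal score: positive favors MAX, negative favors MIN
    values = [minimax(child, not is_max_turn) for child in node.children()]
    # MAX picks the largest achievable value, MIN the smallest.
    return max(values) if is_max_turn else min(values)

At the root (a MAX node), the move selected is the one leading to the child with the largest such value.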

(a)    Assuming that no pruning occurs, fill in the minimax values of all the internal nodes (including the root) in the game tree. Also circle the move (Move1 or Move2) that is selected according to the search.

(b)    Now assuming that alpha-beta pruning occurs, draw lines through the edges in the game tree that separate the nodes at which the pruning decision occurs from the nodes which are never considered. (In other words, the node above each line will be a node at which a pruning decision occurs after the evaluation of one or more of its children, and the subtree below each line will contain nodes that are not considered at all due to the pruning.) HINT: You should NOT have to step through the entire alpha-beta search algorithm to do this. That would take longer than necessary. If you understand conceptually what alpha-beta pruning does, you should be able to carefully eyeball the graph and figure this out.

(c)    Answer below: If both players play perfectly, what will be the outcome of the game (i.e., will MAX or MIN win or will it be a draw)? Briefly explain your answer.

 
3)    Search Strategies

Consider an AI software agent (a program) that processes a maze contained in an N x M grid, where N (the number of rows) and M (the number of columns) are guaranteed to be at least 2. The entire maze is available to the agent at once, not as an image, but as an array indicating where the walls are. The agent is tasked with finding a path from the top-left square to the bottom-right square. The path cost is the number of steps (i.e., every move from one square to an adjacent square adds one to the cost). Only horizontal and vertical steps are allowed. Walls will only occur between squares. The maze may contain loops, multiple paths to the solution, or no path to the solution at all. An example of such a maze, where N is 3 and M is 4:
[example maze figure]
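For concreteness, one possible way to represent such a maze and generate legal moves is sketched below. This is purely illustrative: the actual array format will be specified along with the maze, and the names walls and neighbors are hypothetical placeholders (cells are 0-indexed here, and walls[r][c] is assumed to hold the set of blocked directions out of the cell in row r, column c).

DIRS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def neighbors(cell, walls, n_rows, n_cols):
    # Cells reachable from cell in one horizontal or vertical step (cost 1 each).
    r, c = cell
    result = []
    for name, (dr, dc) in DIRS.items():
        nr, nc = r + dr, c + dc
        if 0 <= nr < n_rows and 0 <= nc < n_cols and name not in walls[r][c]:
            result.append((nr, nc))
    return result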
Now answer the following questions. (Do not assume the specific maze shown here; it is just an example. N, M, and the maze will be provided to the agent.) Briefly explain all your answers.
(a)    First consider applying a tree search version of depth-first search (DFS) to this problem. For this question, consider a version that does not remember all reached nodes, but that does remember nodes on the current path to avoid cycles. Is this strategy complete? Is this strategy optimal?

(b)    Now consider applying a graph search version of breadth-first search (BFS) to this problem, as discussed in class. Is such a strategy complete? Is such a strategy optimal?
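As a minimal sketch of what the graph search version of BFS does (building on the hypothetical neighbors helper sketched above), the reached set below is what distinguishes graph search from tree search: no cell is ever placed on the frontier twice.

from collections import deque

def bfs_steps(walls, n_rows, n_cols):
    # Number of steps on a shortest path from the top-left cell to the
    # bottom-right cell, or None if no path exists (0-indexed cells).
    start, goal = (0, 0), (n_rows - 1, n_cols - 1)
    frontier = deque([(start, 0)])
    reached = {start}
    while frontier:
        cell, dist = frontier.popleft()
        if cell == goal:
            return dist
        for nxt in neighbors(cell, walls, n_rows, n_cols):
            if nxt not in reached:
                reached.add(nxt)
                frontier.append((nxt, dist + 1))
    return None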

(c)    Now consider applying the graph search version of A* search, as discussed in class, to the problem. You will consider four heuristics, where x is the current row and y is the current column. Assume rows are labeled from 1 to N and columns are labeled from 1 to M; the top-left cell (the starting position) is (1, 1), and the bottom-right cell (the destination) is (N, M). The four heuristics we are considering are:
•    H1(n) = 0
•    H2(n) = (N – x) + (M – y)
•    H3(n) = [(N – x)² + (M – y)²]^(1/2)
•    H4(n) = (N – x) * (M – y)
Which of the above four heuristics would guarantee that the graph search version of A* search is optimal? Specify all that apply.
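Written out as code (purely illustrative), the four heuristics for a node at row x, column y, with the goal at row N, column M, are:

import math

def h1(x, y, N, M):
    return 0

def h2(x, y, N, M):
    return (N - x) + (M - y)           # sum of the remaining row and column offsets

def h3(x, y, N, M):
    return math.hypot(N - x, M - y)    # straight-line distance to the goal cell

def h4(x, y, N, M):
    return (N - x) * (M - y)           # product of the remaining row and column offsets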

(d)    Which of the above heuristics is the best one to use with the graph search version of A* search? Why?
