CS 188 Introduction to AI
Fall 2002 Stuart Russell Final Solutions
1. (10 pts.) Some Easy Questions to Start With
(a) (2) False; ambiguity resolution often requires commonsense knowledge, context, etc. Dictionaries often
give multiple definitions of a word, which is one source of ambiguity.
(b) (2) True; it’s hard to see directly into the mental state of other humans.
(c) (2) False; feedforward NNs have no internal state, hence cannot track a partially observable environment.
(d) (2) False; obviously it depends on how well it’s programmed, and we don’t know how to do that yet
(despite what Kurzweil, Moravec, Joy, and others suggest).
(e) (2) False; odometry errors accumulate over time; the robot must sense its environment.
2. (13 pts.) Search
(a) (1) The branching factor is 4 (number of neighbors of each location).
(b) (2) The states at depth k form a square rotated at 45 degrees to the grid. Obviously there are a linear
number of states along the boundary of the square, so the answer is 4k.
(c) (2) Without repeated state checking, BFS expands exponentially many nodes; counting precisely, we get
((4^(x+y+1) − 1)/3) − 1 (a short derivation follows this question).
(d) (2) There are quadratically many states within the square for depth x + y, so the answer is
2(x + y)(x + y + 1) − 1.
(e) (2) True; this is the Manhattan distance metric.
(f) (2) False; all nodes in the rectangle defined by (0,0) and (x, y) are candidates for the optimal path, and
there are quadratically many of them, all of which may be expanded in the worst case.
(g) (1) True; removing links may induce detours, which require more steps, so h is an underestimate.
(h) (1) False; nonlocal links can reduce the actual path length below the Manhattan distance.
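For 2(c), the closed form is just the size of a complete 4-ary tree of depth x + y; the geometric-series step below (written in LaTeX) supplies it, and the quoted answer is this total minus one.

    \sum_{d=0}^{x+y} 4^d = \frac{4^{x+y+1} - 1}{3}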
3. (6 pts.) Propositional Logic
(a) (1) False; {A = false, B = false} satisfies A ⇔ B but not A ∨ B.
(b) (1) True; ¬A ∨ B is the same as A ⇒ B, which is entailed by A ⇔ B. (Both claims are checked by the enumeration below.)
(c) (2) True; the first conjunct has models and entails the second.
(d) (2) True; both have four models in A,B,C.
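Both entailment claims in 3(a) and 3(b) can be confirmed by enumerating the four truth assignments to A and B. The Python check below is illustrative only and not part of the original solution:

    from itertools import product

    # Enumerate all models over {A, B} and test the claims in 3(a) and 3(b).
    for A, B in product([False, True], repeat=2):
        iff = (A == B)             # A <=> B
        disj = A or B              # A v B
        impl = (not A) or B        # A => B, i.e. ~A v B
        if iff and not disj:
            print("3(a) counterexample:", A, B)   # A = B = false
        assert (not iff) or impl   # 3(b): every model of A <=> B satisfies A => B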
4. (10 pts.) First-Order Logic
(a) (3) “Every cat loves its mother or father” can be translated as
i. ∀x ¬Cat(x) ∨ Loves(x, Mother(x)) ∨ Loves(x, Father(x)).
(b) (3) “Every dog who loves one of its brothers is happy” can be translated as
i. ∀x Dog(x) ∧ (∃y Brother(y, x) ∧ Loves(x, y)) ⇒ Happy(x).
ii. ∀x, y Dog(x) ∧ Brother(y, x) ∧ Loves(x, y) ⇒ Happy(x).
(c) (4) (a)(ii) and (a)(iii) both contain disjunctions of two positive literals and hence are not
Horn-clause-representable. (b)(i) and (b)(ii) are logically equivalent; (b)(ii) is obviously Horn, so (b)(i) is too.
5. (8 pts.) Logical Inference
(a) (1) False; Q(a) is not entailed.
(b) (2) True; via P(F(b)).
(c) (2) True; breadth-first FC is complete for Horn KBs (a generic sketch of the algorithm follows below).
(d) (2) False; infinite loop applying the first rule repeatedly.
(e) (1) False; P(b) is an example.
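The completeness claim in 5(c) concerns breadth-first forward chaining over Horn clauses. The exam's knowledge base is not reproduced here, so the sketch below runs on a hypothetical KB purely to illustrate the FIFO-agenda procedure the answer refers to:

    from collections import deque

    def fc_entails(rules, facts, query):
        # Breadth-first forward chaining for propositional Horn clauses.
        # rules: list of (premise_set, conclusion); facts: set of known atoms.
        remaining = [len(prem) for prem, _ in rules]  # unproved premises per rule
        agenda = deque(facts)                         # FIFO agenda = breadth-first
        inferred = set()
        while agenda:
            p = agenda.popleft()
            if p == query:
                return True
            if p in inferred:
                continue
            inferred.add(p)
            for i, (prem, concl) in enumerate(rules):
                if p in prem:
                    remaining[i] -= 1
                    if remaining[i] == 0:
                        agenda.append(concl)
        return False

    # Hypothetical Horn KB: A ∧ B ⇒ C, C ⇒ D, with facts {A, B}; query D.
    print(fc_entails([({"A", "B"}, "C"), ({"C"}, "D")], {"A", "B"}, "D"))  # True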

6. (16 pts.) Probability, Bayes Nets, Decision Theory
(a) (3) (ii) and (iii). (iii) is the “correct” model and (ii) is a complete network and can represent any distribution. (i) is incorrect because Wrapper and Shape are in fact dependent.
(b) (iii), because it is correct and minimal.
(c) True; there is no link, so they are asserted to be independent.
(d) 0.59; P(W = Red) = P(W = Red|F = s)P(F = s) + P(W = Red|F = a)P(F = a) = (0.7 × 0.8) + (0.3 × 0.1).
(e) (3) > 0.99; P(F|S = r, W = r) = αP(S = r, W = r|F)P(F) = αP(S = r|F)P(W = r|F)P(F) = α⟨0.8 × 0.8 × 0.7, 0.1 × 0.1 × 0.3⟩ = α⟨0.448, 0.003⟩ (see the numeric check after this question).
(f) (2) Strictly speaking, we have to assume risk neutrality, which is reasonable for one candy: 0.7s + 0.3a.
(g) (3) This is tricky and can be viewed in several ways. Here's one argument: Less than before. An owner of a wrapped candy has now lost a choice (eat or sell, vs. eat, sell, or unwrap), hence the state of owning the candy has lost expected value.
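The figures in 6(d) and 6(e) can be reproduced from the conditional probabilities that appear in the arithmetic above, read as P(F = s) = 0.7, P(W = r|F = s) = P(S = r|F = s) = 0.8 and P(W = r|F = a) = P(S = r|F = a) = 0.1. A small Python check under that reading:

    # Prior over flavor and CPT entries as read off the solution's arithmetic.
    p_f = {"s": 0.7, "a": 0.3}        # P(F)
    p_w_red = {"s": 0.8, "a": 0.1}    # P(W = red | F)
    p_s_round = {"s": 0.8, "a": 0.1}  # P(S = round | F)

    # 6(d): marginal probability of a red wrapper.
    print(sum(p_w_red[f] * p_f[f] for f in p_f))                    # 0.59

    # 6(e): posterior P(F = s | S = round, W = red) by normalization.
    joint = {f: p_s_round[f] * p_w_red[f] * p_f[f] for f in p_f}    # ⟨0.448, 0.003⟩
    print(joint["s"] / sum(joint.values()))                         # ≈ 0.993 > 0.99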

7. (12 pts.) Dynamic Bayes Nets
(a) (3) The only tricky bit is noticing that X and Y evolve independently of each other given the action.

[Figure: DBN over three time slices, with nodes N1, A1, X1, Y1 through N3, A3, X3, Y3.]

(b) (2) After N1 = 1, the agent could be in any of the four edge squares. Of these, only (2,1) and (2,3) are consistent with N2 = 2 after Right.
(c) (2) Could not have been in (1,2) and (2,2).
(d) (2) False; even with no sensors, the agent can execute, e.g., LLDD and know that it is in (1,1).
(e) (3) True; this DBN is not completely connected between layers, so its transition model will increase in size.

8. (15 pts.) Learning
(a) (2) False positives are X3, X4, X6.
(b) (2) No false negatives.
(c) (2) Only Sunny = T has a mixture of positive and negative examples.