Introduction to Artificial Intelligence: Lecture Slides
Tree search algorithms
Basic idea: offline, simulated exploration of state space by generating successors of already-explored states (a.k.a. expanding states)
Tree search example
[Figure: partial search trees rooted at Arad; Arad expands into Sibiu, Timisoara, and Zerind; Sibiu into Arad, Fagaras, Oradea, and Rimnicu Vilcea; Timisoara into Arad and Lugoj; Zerind into Arad and Oradea]
Implementation: general tree search
function Tree-Search(problem, strategy) returns a solution, or failure
    initialize the search tree using the initial state of problem
    loop do
        if there are no candidates for expansion then return failure
        choose a leaf node for expansion according to strategy
        if the node contains a goal state then return the corresponding solution
        else expand the node and add the resulting nodes to the search tree
    end
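The loop above can be sketched directly in Python. This is a minimal illustration, not the slides' implementation: a problem is assumed to be given as an initial state, a successors function, and a goal test, and the strategy is a caller-supplied function that picks which leaf to expand next (all names are illustrative).

    def tree_search(initial_state, successors, is_goal, strategy):
        # Candidate leaves of the search tree, each kept as a path of states.
        frontier = [[initial_state]]
        while frontier:                       # no candidates for expansion -> failure
            path = strategy(frontier)         # choose a leaf according to the strategy
            frontier.remove(path)
            if is_goal(path[-1]):             # the node contains a goal state
                return path                   # the corresponding solution
            for s in successors(path[-1]):    # expand the node and add the
                frontier.append(path + [s])   # resulting nodes to the search tree
        return None                           # failure

For example, strategy=lambda frontier: frontier[0] expands the oldest leaf first (breadth-first order), while strategy=lambda frontier: frontier[-1] expands the newest leaf first (depth-first order).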
Example: vacuum world state space graph
[Figure: vacuum world state space graph; arcs labelled L (Left), R (Right), S (Suck)]
states??: integer dirt and robot locations (ignore dirt amounts etc.)
actions??: Left, Right, Suck, NoOp
goal test??: no dirt
path cost??: 1 per action (0 for NoOp)
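As a concrete, hypothetical encoding of this formulation in Python, suppose a state is a tuple (robot location, dirt in square 0, dirt in square 1); the slide does not fix a representation, so the names below are illustrative.

    # State: (robot, dirt0, dirt1) with robot in {0, 1} and dirt flags True/False.
    INITIAL = (0, True, True)            # robot at the left square, both squares dirty

    ACTIONS = ("Left", "Right", "Suck", "NoOp")

    def result(state, action):
        robot, d0, d1 = state
        if action == "Left":
            return (0, d0, d1)
        if action == "Right":
            return (1, d0, d1)
        if action == "Suck":             # remove dirt at the robot's current square
            return (robot, False if robot == 0 else d0,
                           False if robot == 1 else d1)
        return state                     # NoOp changes nothing

    def goal_test(state):
        return not state[1] and not state[2]    # goal test: no dirt anywhere

    def step_cost(action):
        return 0 if action == "NoOp" else 1     # 1 per action, 0 for NoOp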
3.1 Problem-solving agents
Example: Romania
[Figure: road map of Romania with step costs in km]
3.1 Problem-solving agents
Problem Formulation
States (initial state)
Actions
Goal test
Path cost (path cost function)
Selecting a state space
3.1 Problem-solving agents
Example: Romania
On holiday in Romania; currently in Arad. Flight leaves tomorrow from Bucharest.
Formulate goal: be in Bucharest
Formulate problem:
    states: various cities
    actions: drive between cities
Find solution: sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
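A small Python sketch of this formulation; only the road-map edges needed for the example solution are listed, with step costs read off the map figure, so this is an illustrative fragment rather than a complete map.

    # Partial road map: neighbour -> step cost in km (only the edges used here).
    ROADS = {
        "Arad":    {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
        "Sibiu":   {"Arad": 140, "Fagaras": 99, "Oradea": 151, "Rimnicu Vilcea": 80},
        "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    }

    initial_state = "Arad"                       # currently in Arad

    def goal_test(city):
        return city == "Bucharest"               # formulate goal: be in Bucharest

    solution = ["Arad", "Sibiu", "Fagaras", "Bucharest"]
    path_cost = sum(ROADS[a][b] for a, b in zip(solution, solution[1:]))
    print(path_cost)                             # 140 + 99 + 211 = 450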
Example: vacuum world state space graph
[Figure: vacuum world state space graph; arcs labelled L (Left), R (Right), S (Suck)]
states??: integer dirt and robot locations (ignore dirt amounts etc.)
actions??: Left, Right, Suck, NoOp
goal test??
path cost??
Example: vacuum world state space graph
[Figure: vacuum world state space graph; arcs labelled L (Left), R (Right), S (Suck)]
states??: integer dirt and robot locations (ignore dirt amounts etc.)
actions??
goal test??
path cost??
3.4 Uninformed search strategies
Search strategies
A strategy is defined by picking the order of node expansion.
Strategies are evaluated along the following dimensions:
    completeness: does it always find a solution if one exists?
    time complexity: number of nodes generated/expanded
    space complexity: maximum number of nodes in memory
    optimality: does it always find a least-cost solution?
Time and space complexity are measured in terms of
    b: maximum branching factor of the search tree
    d: depth of the least-cost solution
    m: maximum depth of the state space (may be ∞)
function Tree-Search(problem, fringe) returns a solution, or failure
    fringe ← Insert(Make-Node(Initial-State[problem]), fringe)
    loop do
        if fringe is empty then return failure
        node ← Remove-Front(fringe)
        if Goal-Test(problem, State[node]) then return node
        fringe ← InsertAll(Expand(node, problem), fringe)

function Expand(node, problem) returns a set of nodes
    successors ← the empty set
    for each action, result in Successor-Fn(problem, State[node]) do
        s ← a new Node
        Parent-Node[s] ← node; Action[s] ← action; State[s] ← result
        Path-Cost[s] ← Path-Cost[node] + Step-Cost(node, action, s)
        Depth[s] ← Depth[node] + 1
        add s to successors
    return successors
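A Python rendering of this pseudocode, adding the node bookkeeping (parent, action, path cost, depth) that Expand maintains to the simpler sketch given earlier. Successor-Fn is assumed to return (action, result) pairs, and the fringe ordering is supplied by the caller through insert_all; all names are illustrative, not from the slides.

    from dataclasses import dataclass
    from typing import Any, Optional

    @dataclass
    class Node:
        state: Any
        parent: Optional["Node"] = None   # Parent-Node[s]
        action: Any = None                # Action[s]
        path_cost: float = 0.0            # Path-Cost[s]
        depth: int = 0                    # Depth[s]

    def expand(node, successor_fn, step_cost):
        # Expand(node, problem): one child Node per (action, result) pair.
        successors = []
        for action, result in successor_fn(node.state):
            s = Node(state=result, parent=node, action=action,
                     path_cost=node.path_cost + step_cost(node.state, action, result),
                     depth=node.depth + 1)
            successors.append(s)
        return successors

    def tree_search(initial_state, successor_fn, goal_test, step_cost, insert_all):
        # Tree-Search(problem, fringe): insert_all defines the queueing strategy.
        fringe = [Node(initial_state)]              # Insert(Make-Node(initial state))
        while fringe:                               # empty fringe -> failure
            node = fringe.pop(0)                    # Remove-Front(fringe)
            if goal_test(node.state):
                return node
            insert_all(expand(node, successor_fn, step_cost), fringe)  # InsertAll
        return None

With insert_all = lambda nodes, fringe: fringe.extend(nodes) the fringe behaves as a FIFO queue (breadth-first order); the solution path is recovered by following parent links from the returned node back to the root.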
Problem solving and Search
Yu Jie Oct. 2013
Outline
Problem solving
Search
3.1 Problem-solving agents
3.2 Problem types
3.3 Tree search algorithms
3.4 Uninformed search strategies
3.5 Avoiding repeated states in search
Uninformed strategies use only the information available in the problem definition
Uninformed search strategies
Breadth-first search
Uniform-cost search
Depth-first search
3.3 Tree search algorithms
Tree search example: Romania
[Figure: road map of Romania with step costs in km]
Example: vacuum world state space graph
[Figure: vacuum world state space graph; arcs labelled L (Left), R (Right), S (Suck)]
states??
actions??
goal test??
path cost??
Example: vacuum world state space graph
[Figure: vacuum world state space graph; arcs labelled L (Left), R (Right), S (Suck)]
states??: integer dirt and robot locations (ignore dirt amounts etc.)
actions??: Left, Right, Suck, NoOp
goal test??: no dirt
path cost??
Selecting a state space
Real world is absurdly complex ⇒ state space must be abstracted for problem solving
(Abstract) state = set of real states
(Abstract) action = complex combination of real actions
    e.g., “Arad → Zerind” represents a complex set of possible routes, detours, rest stops, etc.
    For guaranteed realizability, any real state “in Arad” must get to some real state “in Zerind”
(Abstract) solution = set of real paths that are solutions in the real world
Each abstract action should be “easier” than the original problem!
Depth-limited search
Iterative deepening search
Each of these strategies is obtained from the general Tree-Search skeleton by the order in which nodes are inserted into and removed from the fringe; see the sketch below.
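A minimal sketch of the standard queueing disciplines, with illustrative values taken from Arad's outgoing edges on the map. Depth-limited search is depth-first with nodes beyond a fixed depth never added to the fringe, and iterative deepening repeats depth-limited search with limit 0, 1, 2, ... until a solution is found.

    from collections import deque
    import heapq

    # Breadth-first search: FIFO fringe, nodes are expanded in the order generated.
    fifo = deque()
    fifo.extend(["Sibiu", "Timisoara", "Zerind"])   # successors of Arad
    assert fifo.popleft() == "Sibiu"                # oldest (shallowest) node first

    # Depth-first search: LIFO fringe, the most recently generated node comes first.
    lifo = []
    lifo.extend(["Sibiu", "Timisoara", "Zerind"])
    assert lifo.pop() == "Zerind"

    # Uniform-cost search: priority queue ordered by path cost g(n), cheapest first.
    pq = []
    for cost, city in [(140, "Sibiu"), (118, "Timisoara"), (75, "Zerind")]:
        heapq.heappush(pq, (cost, city))
    assert heapq.heappop(pq) == (75, "Zerind")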
Breadth-first search