AI: Anticipation and Inference
This document is intended only as a brief outline for memorization.
Lecture 1: Basic Search
Measures of a Good Search Method
• Completeness: always finds a solution if one exists
• Optimality: always finds the least-cost solution
• Time complexity: number of nodes generated / expanded
• Space complexity: maximum number of nodes stored in memory
• State space: the number of possible states the problem can be in
Uninformed Search Methods
• Breadth First Search (BFS): expand the shallowest unexpanded node first
◦ Complete and optimal, but with both exponential time and space complexity
• Uniform-Cost Search (UCS): expand the cheapest unexpanded node first (uniformly use cost to search)
◦ Complete and optimal; may or may not be better than BFS in complexity
• Depth First Search (DFS): expand the deepest unexpanded node first
◦ Not complete for infinite-depth / looped spaces and not optimal; exponential time $O(b^m)$ but linear space $O(bm)$, where $b$ is the branching factor and $m$ the maximum depth
• Depth Limited Search (DLS): DFS with a depth limit $l$
◦ Also not complete and not optimal; improved time $O(b^l)$, linear space $O(bl)$
• Iterative Deepening Search (IDS): DLS with increasing limits, combining the benefits of BFS and DFS (see the sketch after this list)
◦ Complete and optimal like BFS, with similar time $O(b^d)$ (for solution depth $d$) but lower space $O(bd)$
• Bidirectional Search: search from both directions simultaneously, utilizing two smaller sub-search trees
◦ Complete and optimal when using BFS, with better time and space complexity $O(b^{d/2})$
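To make these concrete, here is a minimal Python sketch of Iterative Deepening Search; the adjacency-list graph, node names, and the `max_depth` cap are hypothetical illustrations, not part of the lecture.

```python
# Minimal sketch of Iterative Deepening Search (IDS): repeated depth-limited DFS.
# The example graph and the max_depth cap are hypothetical.

def depth_limited_search(graph, node, goal, limit, path):
    """DFS that refuses to go deeper than `limit` edges below `node`."""
    if node == goal:
        return path
    if limit == 0:
        return None
    for child in graph.get(node, []):
        if child not in path:  # avoid cycling along the current path
            found = depth_limited_search(graph, child, goal, limit - 1, path + [child])
            if found is not None:
                return found
    return None

def iterative_deepening_search(graph, start, goal, max_depth=50):
    """Run DLS with limits 0, 1, 2, ...: complete like BFS, O(bd) memory like DFS."""
    for limit in range(max_depth + 1):
        found = depth_limited_search(graph, start, goal, limit, [start])
        if found is not None:
            return found
    return None

# Hypothetical unweighted graph as an adjacency list
graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": ["F"], "E": ["F"]}
print(iterative_deepening_search(graph, "A", "F"))  # ['A', 'B', 'D', 'F']
```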
Heuristic Search
• Uses domain knowledge in a heuristic function $h(n)$, which estimates the cheapest future cost from node $n$ to the goal
• Greedy Search: only uses $h(n)$ to pick the next node, and is neither complete nor optimal
• A* Search: uses $f(n) = g(n) + h(n)$, which combines the current cost so far $g(n)$ with the future estimate $h(n)$ (see the sketch after this list)
◦ The node with the lowest overall $f(n)$ is explored first
◦ Both complete (always) and optimal (only when $h$ is admissible: the estimate is never higher than the real cost, i.e. $h(n) \le h^*(n)$)
◦ For two heuristic functions, if $h_2(n) \ge h_1(n)$ for all $n$, then $h_2$ dominates $h_1$ and is more efficient
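Below is a minimal Python sketch of A* using a priority queue ordered by $f(n) = g(n) + h(n)$; the weighted graph and heuristic values are hypothetical and chosen so that $h$ stays admissible.

```python
# Minimal A* sketch: expand the frontier node with the lowest f(n) = g(n) + h(n).
# The weighted graph and heuristic table below are hypothetical.
import heapq

def a_star(graph, h, start, goal):
    """graph: {node: [(neighbor, step_cost), ...]}; h: admissible heuristic h(n)."""
    frontier = [(h(start), 0, start, [start])]  # entries are (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbor, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = new_g
                heapq.heappush(frontier, (new_g + h(neighbor), new_g, neighbor, path + [neighbor]))
    return None, float("inf")

graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)]}
h_values = {"S": 4, "A": 5, "B": 1, "G": 0}  # never higher than the true remaining cost
print(a_star(graph, h_values.get, "S", "G"))  # (['S', 'B', 'G'], 5)
```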
Lecture 2: Advanced Search
• Two basic issues:
◦ Search operator (how to generate a new candidate solution, e.g. via sampling)
◦ Evaluation criterion (or replacement strategy)
Typical Searching Frameworks
• Greedy Local Search: always picks the best solution so far, also known as Hill Climbing
◦ But may be trapped in a local optimum
• Simulated Annealing: when the temperature is high, form the overall shape; when the temperature is low, polish the details (see the sketch after this list)
◦ When the new solution is better than before, we always accept it
◦ When it is worse than before, it can still survive with probability $P$
▪ $P = \exp\left(-\frac{f(x') - f(x)}{T}\right)$ for cost minimization, where $x'$ is the next solution, $f$ is the cost function, and $T$ is the temperature
• Other methods:
◦ Tabu Search: don't visit the same candidate solution twice, via a tabu list
◦ Bayesian Optimization: build a Bayesian model to guess better solutions
• Population-based Search: sample a population, which can smooth the objective function (see the sketch after this list)
◦ First generate some initial solutions, then evaluate them, keep the top k, and re-sample some more
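A minimal simulated-annealing sketch (for cost minimization), assuming a hypothetical 1-D cost function, a uniform neighbor proposal, and a geometric cooling schedule; none of these specific choices come from the lecture.

```python
# Minimal simulated-annealing sketch (cost minimization). The cost function,
# neighbor proposal, and geometric cooling schedule are hypothetical choices.
import math
import random

def simulated_annealing(cost, x0, T0=1.0, cooling=0.95, steps=1000):
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    T = T0
    for _ in range(steps):
        x_new = x + random.uniform(-1.0, 1.0)  # search operator: sample a neighbor
        f_new = cost(x_new)
        delta = f_new - fx
        # Always accept improvements; accept worse moves with probability exp(-delta / T)
        if delta <= 0 or random.random() < math.exp(-delta / T):
            x, fx = x_new, f_new
            if fx < fbest:
                best, fbest = x, fx
        T *= cooling  # high temperature forms the shape, low temperature polishes
    return best

# Hypothetical multi-modal cost with its global minimum near x = 2
cost = lambda x: (x - 2) ** 2 + math.sin(5 * x)
print(simulated_annealing(cost, x0=-5.0))
```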
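And a minimal population-based search sketch in the same spirit: generate initial solutions, keep the top k, and re-sample around the survivors. The objective function, population size, and Gaussian perturbation are hypothetical choices, not the lecture's.

```python
# Minimal population-based search sketch: keep the top k, re-sample the rest.
# The objective, population size, and mutation noise are hypothetical.
import random

def population_search(cost, dim=2, pop_size=20, top_k=5, generations=50, noise=0.3):
    # Initial population: random candidate solutions
    population = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        # Evaluate and keep the top k (lowest cost)
        population.sort(key=cost)
        survivors = population[:top_k]
        # Re-sample new candidates by perturbing the survivors
        children = [[x + random.gauss(0, noise) for x in random.choice(survivors)]
                    for _ in range(pop_size - top_k)]
        population = survivors + children
    return min(population, key=cost)

# Hypothetical objective: a quadratic bowl with its minimum at (1, -2)
cost = lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2
print(population_search(cost))
```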