State Spaces
- Definition
- A state space is the set of all configurations that a given problem
and its environment could achieve.
- Each configuration is called a state, and contains
- Static information. This is often extracted and held
separately, e.g., in the knowledge base of the agent.
- Dynamic information, which changes between states.
This is held in the problem representation of the agent.
- Examples
- Die - 6 states
- Chess - Far too many states to enumerate (≈10^120)
- Missionaries and cannibals - A small, finite number of states
- Google maps route finder - Number of states depends on task
- Problems in terms of state spaces
- Initial states
- Solution detector: Test to recognise a goal state
- State generator: Given certain achieved states in a state space
(starting with the initial states), new states can be achieved.
- Independent generation of states, e.g. roll a die.
- State transformations applied to achieved states, e.g. flip
a die, move in chess, move in M&C, choose a road in maps.
- Described as LHS => RHS, where the LHS determines
applicability and the RHS specifies results.
- Typically there are too many for a ground representation
(e.g. chess, ≈10^120), so variables are used.
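As a concrete sketch of LHS => RHS rules, here is a minimal state generator for missionaries and cannibals (3 of each, boat holds at most 2). The representation and all names are illustrative assumptions, not a fixed convention from these notes: a state is a tuple (missionaries on the left bank, cannibals on the left bank, boat on left?).

```python
# Candidate boat loads (m, c); the LHS of each rule is the safety and
# availability check, the RHS is the resulting state.
MOVES = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]

def safe(m, c):
    # LHS applicability: counts in range, and cannibals never outnumber
    # missionaries on either bank (unless that bank has no missionaries).
    return (0 <= m <= 3 and 0 <= c <= 3 and
            (m == 0 or m >= c) and (m == 3 or 3 - m >= 3 - c))

def successors(state):
    """RHS: all states reachable from `state` by one boat crossing."""
    m, c, boat_left = state
    sign = -1 if boat_left else 1   # the boat carries people away from its bank
    for dm, dc in MOVES:
        nm, nc = m + sign * dm, c + sign * dc
        if safe(nm, nc):
            yield (nm, nc, not boat_left)

print(sorted(successors((3, 3, True))))
```

Note that only the changed counts need checking: the range test doubles as the "enough people on this bank" test because the totals are fixed.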
- Solving Problems
- A problem may be solved by the methods of achieving new states.
- The difficulty of finding goal states:
- A branching factor of B gives rise to a state space of size
B^D at a search depth of D.
- This is typical of problem state spaces, i.e. problems are
typically exponentially hard.
- Only a finite number of states can be computed and considered
in a finite amount of time.
- We don't have exponential resources.
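The B^D growth is easy to make concrete. A modest branching factor and depth (the numbers below are illustrative) already yield a space far beyond exhaustive enumeration:

```python
# Nodes at depth D alone, and cumulative nodes up to depth D,
# for branching factor B.
B, D = 10, 12
frontier = B ** D                           # states at depth D: 10^12
total = sum(B ** d for d in range(D + 1))   # all states to depth D
print(frontier, total)
```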
- State space search using transformations and control strategies
- Search strategies that search the entire state space cannot
be effective in solving problems,
e.g. DFS, BFS, Iterative deepening.
- A control strategy will decide which (finite number of)
transformations should be made at any point in time, from
which achieved states.
- The problem representation has three types of information that
can be used:
- The static information
- The dynamic information in achieved states, considered
statically, state by state.
- The structure of the space of achieved states.
- The knowledge base of the agent provides the knowledge that
is used with the state information, to guide search.
- Common approaches to selecting transformations are:
- For recoverable problems (and thus planning), select
the best of the achieved states and apply all possible
transformations.
- For non-recoverable problems, predict the qualities of the
outcomes of all possible transformations and choose the
one with the best outcome state.
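The two selection policies can be sketched as follows. `expand` and `score` are illustrative assumptions (a toy heuristic over integer states), not definitions from the notes:

```python
def best_first_expand(achieved, expand, score):
    """Recoverable problems: take the best achieved state so far and
    apply all possible transformations to it."""
    best = max(achieved, key=score)
    achieved.extend(expand(best))
    return best

def greedy_step(state, expand, score):
    """Non-recoverable problems: predict the outcomes of all applicable
    transformations and commit irrevocably to the best one."""
    return max(expand(state), key=score)

expand = lambda s: [s + 1, s * 2]   # toy transformations on integers
score = lambda s: -abs(s - 10)      # prefer states close to 10
print(greedy_step(3, expand, score))
```

The key difference: the recoverable policy keeps all achieved states available for later expansion, while the non-recoverable one throws the alternatives away.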
- For a control strategy to succeed, it is necessary that:
- The goal states are not randomly distributed.
- The pattern of distribution is detectable.
- The problem solver be able to react so as to search
according to the detected pattern.
- Solution information
- If the sequence of steps required to get to a solution
configuration is required, the state transformations are
recorded within each newly achieved state.
- If there is more than one solution state (sequence of
transformations that achieves a solution state), and a "best"
solution state (sequence) is required, the search for solutions
(sequences) must continue until it is assured that the best
solution (sequence) has been found.
- Best may be measured by cost of transformation, and quality of
goal state found.
- The difficulty of finding goal states independently
- Cannot use control strategies.
- Cannot provide a sequence of transformations for reuse
- Only useful if there is no sequence of transformations that
leads to a goal state.
- We will only consider searching henceforth.
- Utility
- If a problem is not deterministic, the outcome of a state space
transformation is uncertain.
- The possible outcomes have degrees of probability, which
must sum to 1.
- The utility of a transformation is the sum of the
products of the utilities of the outcome states with their
probabilities.
- If the environment is dynamic or not accessible, then the quality
of a state is uncertain.
- The possible state values have degrees of probability, which
must sum to 1.
- The utility of a state is the sum of the products of
the quality of the state values with their probabilities.
- Decision theory uses the utility of a transformation to decide
what to do next.
- Example with indeterminism
- Example with inaccessible/dynamic environment
- Example with indeterminism and inaccessible/dynamic environment
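The expected-utility calculation above can be written out directly; the probabilities and utilities below are invented purely for illustration:

```python
def expected_utility(outcomes):
    """Utility of a transformation: sum of P(outcome) * U(outcome).
    `outcomes` is a list of (probability, utility) pairs summing to 1."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
    return sum(p * u for p, u in outcomes)

# A non-deterministic move: 70% chance of a good state, 30% of a bad one.
risky = expected_utility([(0.7, 10.0), (0.3, -5.0)])
certain = expected_utility([(1.0, 4.0)])
print(risky, certain)   # decision theory picks the transformation
                        # with the larger expected utility
```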
- Representing States
- Three problems
- How can individual objects be represented.
- How can the individual object representations be combined into
a state.
- How can sequences of states be represented without duplicating
constant information.
- Solutions
- The first two are the problem of knowledge representation.
In terms of physical symbol systems, this requires a
representation of symbols and symbol structures.
What data structures are appropriate?
- Logics (the mathematical approach)
- Non-structured knowledge representations (the computer
science approach)
- The last is called the frame problem, and can be solved by
structure sharing.
- Build the current state by keeping only changes on each
state change and working from the initial state.
- Restart with a complete state every now and again to avoid
continuous rebuilding.
Alternatively, keep the current state complete until a new
branch is explored.
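Structure sharing can be sketched as states that store only a delta from their parent, rebuilding the full state on demand by walking back to the initial state. The class and field names are illustrative assumptions:

```python
class State:
    def __init__(self, parent=None, delta=None, base=None):
        self.parent = parent        # previous state, or None for the root
        self.delta = delta or {}    # only the facts changed by this step
        self.base = base or {}      # full contents, stored at the root only

    def lookup(self):
        """Rebuild the full state dict from the chain of deltas."""
        chain, s = [], self
        while s.parent is not None:
            chain.append(s.delta)
            s = s.parent
        full = dict(s.base)
        for delta in reversed(chain):   # replay oldest changes first
            full.update(delta)
        return full

root = State(base={"boat": "left", "m_left": 3, "c_left": 3})
s1 = State(parent=root, delta={"boat": "right", "c_left": 1})
print(s1.lookup())
```

Each new state costs only the size of its delta; the price is the rebuild walk in `lookup`, which the restart/complete-current-state variants above trade space to avoid.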
- Direction of search
- Forward search
- Backward search - only if goal states are known.
- The better search direction depends on the topology of the
problem space.
- Working in the direction of a lower branching factor is
superior, as the search tree is smaller.
- Number of start states and number of goal states.
- If the branching factor is the same in both directions, the
numbers of start and goal states matter; otherwise the
branching factor is more important.
- It is better to start from a few states and work towards
many states.
- In some cases the start states may not be precisely known,
e.g. theorem proving.
- Whether justification for the reasoning is required.
- Forward reasoning cannot absolutely justify itself.
- Working in both directions is called bi-directional search.
- B^N vs. B_F^(N/2) + B_B^(N/2), where B_F and B_B are the
forward and backward branching factors.
- Bi-directional search has the danger of passing itself in
the dark, but cuts the exponential growths of search space.
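The saving is easy to check numerically. Assuming equal branching factors in both directions (B_F = B_B = B), with illustrative values:

```python
# One search to depth N versus two searches meeting at depth N/2.
B, N = 10, 8
one_way = B ** N                              # single-direction search
two_way = B ** (N // 2) + B ** (N // 2)       # bi-directional search
print(one_way, two_way)
```

The exponential growth is cut in the exponent, which is why the gain is dramatic even though the two frontiers must still be matched against each other.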
- Means-ends analysis
- This strategy does not work forward/backwards from achieved
states, but rather finds transformations that reduce the
differences between the initial and goal states.
- If the transformation cannot be applied, recursively a new
goal is set to get to a state where it can be applied.
- Transformations that cause the biggest reductions in
difference are preferred.
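A toy sketch of means-ends analysis over sets of facts. Operators, preconditions, and effects are all invented for illustration; the sketch assumes subgoals succeed and ignores delete effects:

```python
def mea(state, goal, operators, depth=5):
    """Reduce the difference between `state` and `goal` by picking the
    operator that removes most of it, recursively achieving its
    preconditions first when it is not yet applicable."""
    diff = goal - state
    if not diff or depth == 0:
        return state
    # Prefer the operator whose effects remove the most of the difference.
    op = max(operators, key=lambda o: len(o["adds"] & diff))
    if not op["pre"] <= state:
        # LHS not applicable: set reaching the preconditions as a new goal.
        state = mea(state, op["pre"], operators, depth - 1)
    return mea(state | op["adds"], goal, operators, depth - 1)

ops = [
    {"pre": set(),       "adds": {"at_shop"}},
    {"pre": {"at_shop"}, "adds": {"have_milk"}},
]
print(mea({"at_home"}, {"have_milk"}, ops))
```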
- Avoiding repeated states
- Common with commutative transformations (e.g., dressing in the
morning) and reversible transformations (e.g., M&C).
- In increasing order of expense and effectiveness
- Do not return to where you just came from
- Do not create cycles
- Do not create less general states
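The three levels can be sketched as filters on a successor function over hashable states. The toy `expand` function is an illustrative assumption, and the third filter shows only the simplest case of "no less general states": never regenerating an already-achieved (identical, hence not more general) state.

```python
def successors_no_backtrack(state, parent, expand):
    """Level 1 (cheapest): do not return to where you just came from."""
    return [s for s in expand(state) if s != parent]

def successors_no_cycle(state, path, expand):
    """Level 2: do not revisit any state on the current path."""
    return [s for s in expand(state) if s not in path]

def successors_no_repeat(state, achieved, expand):
    """Level 3 (most expensive): do not regenerate any state achieved
    anywhere so far; requires storing all achieved states."""
    return [s for s in expand(state) if s not in achieved]

expand = lambda n: [n - 1, n + 1]   # toy transformations on integers
print(successors_no_backtrack(3, 2, expand))
print(successors_no_cycle(3, {1, 2, 3}, expand))
print(successors_no_repeat(3, {0, 1, 2, 3, 4}, expand))
```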
- Relationship to Physical Symbol Systems.
- A state space representation of a problem can be implemented
in terms of a physical symbol system.
- States are described by symbol structures.
- State transformations are implemented by processes in the
physical symbol system.
- New states may be generated independently by processes.
Exam Style Questions
- Define a state space.
- What two types of information are held in a state?
- How may a problem be described in terms of a state space?
- What does the control strategy do in state space search?
- What are the three types of information that can be used to guide the
search for a solution in a state space?
- What are the three requirements for a control strategy to succeed?
- In what two ways may one problem solution be considered better than
another?
- Give two reasons why independent generation of states is not suitable
as a mechanism for reaching a goal state in a state space.
- What are the two types of uncertainty that affect the utility of a
state transformation?
- Describe three common approaches to the frame problem (storing the
dynamic information in states).
Indicate the advantages and disadvantages of each.
- Describe the criteria that determine the best direction for search in
a problem space.
- Explain the problems and advantages of bi-directional search.
- Briefly describe three increasingly effective principles for avoiding
loops in state space search.
Mention their relative costs.
- Explain how state space search can be implemented in terms of a
physical symbol system.