Traditionally, planning involves a single agent for which a planner needs to find a sequence of actions that can transform some initial state into some state where a given goal statement is satisfied. A good example of such a problem is how to solve Rubik's cube. The initial state is some configuration of the cube, and we need to find a sequence of rotations such that every tile on each side has the same color. Even though this problem is hard, the planning agent has full control over the situation: the outcome of a rotation action is completely known. Real-world planning problems, however, seldom conform to this simple domain model. There may be uncontrollable actions of other agents in the domain interfering with the actions applied by the planning agent. Such uncertainty can be modeled by nondeterminism, where actions may have several possible outcomes. One approach is to assume that transition probabilities are known and produce plans with a high likelihood of success. The scalability of such planners, however, is limited due to the overhead of reasoning about probabilities. In addition, it may be hard to gather enough statistical data to estimate the transition probabilities. In this chapter, we consider a simpler model of nondeterminism without transition probabilities: the effect of a nondeterministic action is given as a set of possible next states. We specifically examine nondeterministic planning in domains where nondeterminism is caused by uncontrollable agents with specific goals of their own. Such problems have been considered in the AI literature on multiagent systems, in particular concerning cooperation and negotiation. They have also been studied in various forms in game theory and formal verification.
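The action model described above can be illustrated with a minimal sketch: a nondeterministic action maps a state to a *set* of possible successor states, with no probabilities attached. The toy domain, state encoding, and action names below are hypothetical, chosen only to make the idea concrete.

```python
# Toy domain (hypothetical): a robot in a corridor of positions 0..3.
# A nondeterministic action returns the SET of possible next states,
# without transition probabilities.

def move_right(state):
    """Moving right may succeed, or slip and stay put
    (e.g. due to uncontrolled interference by another agent)."""
    if state < 3:
        return {state, state + 1}  # slip (stay) or advance
    return {state}                 # at the wall: no change

def move_left(state):
    """A deterministic action is the special case of a
    singleton outcome set."""
    return {max(0, state - 1)}

print(move_right(1))  # two possible outcomes: {1, 2}
print(move_left(0))   # one possible outcome: {0}
```

A planner for this model must account for every element of the outcome set rather than a single predicted successor, which is what distinguishes nondeterministic planning from the classical, fully controlled setting.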
Title of host publication: Planning in Intelligent Systems: Aspects, Motivations, and Methods
Number of pages: 24
Publication status: Published - 2005