

Belief, desire and intention

When one agent reasons about the plans of other agents, it must evaluate the preconditions of actions in the context of the other agent's beliefs rather than its own. Such beliefs can be about the agent's capabilities with respect to actions, represented as the plan rules that it holds, about their effects, or about the state of the environment. Beliefs are expressed using a modal logic [34], in which a modal operator $ B$ expresses that an agent believes a proposition. The agent is taken to consider a set of possible worlds, each of which corresponds to a consistent set of propositions that hold in that world. The intersection of the proposition sets of all the possible worlds constitutes the propositions that the agent believes. Propositions that occur in only some of the sets are considered possible, while those that occur in none of the worlds are disbelieved. Beliefs can be nested, since propositions to which the belief operator has been applied are themselves propositions. To define the semantics of a modal logic formally, a Kripke model is used [42]: a set of worlds together with an accessibility relation between worlds. A sentence $ B(P)$ holds in a world $ w1$ if and only if $ P$ holds in every world $ w2$ accessible from $ w1$. A number of axioms can be introduced, whose validity depends on constraints placed on the accessibility relation. One useful set of axioms is KD45, given below.

\begin{displaymath}\begin{split}&K:B(A) \land B(A \rightarrow B) \rightarrow B(B) \\ &D:B(A) \rightarrow \lnot B(\lnot A) \\ &4:B(A) \rightarrow B(B(A)) \\ &5:\lnot B(A) \rightarrow B(\lnot B(A)) \\ \end{split}\end{displaymath} (2.1)
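The Kripke-style evaluation rule above can be sketched in a few lines of code. This is an illustrative sketch only, not part of the thesis: the worlds, accessibility relation and propositions are hypothetical, and each proposition is represented as a predicate on worlds.

```python
# Sketch of Kripke-model belief evaluation (illustrative, not from the thesis):
# B(P) holds at a world w iff P holds at every world accessible from w.

def holds_B(P, w, access):
    """Return True iff B(P) holds at world w under the accessibility relation."""
    return all(P(v) for v in access[w])

# Hypothetical model: each world is labelled with the atomic propositions
# that hold there.
worlds = {
    "w1": {"raining"},
    "w2": {"raining", "cold"},
    "w3": {"cold"},
}
access = {"w1": ["w2", "w3"], "w2": ["w2"], "w3": ["w3"]}

P = lambda w: "cold" in worlds[w]      # the proposition "cold"
Q = lambda w: "raining" in worlds[w]   # the proposition "raining"

print(holds_B(P, "w1", access))  # True: "cold" holds in both w2 and w3
print(holds_B(Q, "w1", access))  # False: "raining" fails in w3
```

Note that the agent's beliefs at $ w1$ are determined only by the worlds accessible from $ w1$, matching the intersection construction described above.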

The language of belief can be extended so that an agent may have beliefs about other agents, allowing the agents to reason about one another's plans. For example, the following sentence might be generated by agent $ X$, expressing that agent $ Y$ believes that agent $ Z$ believes $ P$.

$\displaystyle B(Y,B(Z,P))$ (2.2)
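A standard way to model sentence (2.2) is to give each agent its own accessibility relation, so that nested beliefs are evaluated by nesting the quantification over accessible worlds. The following sketch assumes a small hypothetical two-world model; none of the names come from the thesis.

```python
# Sketch (illustrative assumption): multi-agent belief with one accessibility
# relation per agent. B(a, phi) holds at w iff phi holds at every world that
# agent a considers possible from w.

def B(agent, phi, w, access):
    return all(phi(v) for v in access[agent][w])

# Hypothetical model with agents Y and Z over worlds w1 and w2.
facts = {"w1": {"P"}, "w2": {"P"}}
access = {
    "Y": {"w1": ["w1", "w2"], "w2": ["w2"]},
    "Z": {"w1": ["w2"], "w2": ["w2"]},
}

P = lambda w: "P" in facts[w]
Z_believes_P = lambda w: B("Z", P, w, access)

# B(Y, B(Z, P)) at w1: in every world Y considers possible, Z believes P.
print(B("Y", Z_believes_P, "w1", access))  # True
```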

An agent is said to know a proposition if and only if the proposition holds and the agent believes it. Agent A might then claim "agent B knows P", meaning both that agent A believes P and that agent A believes that agent B believes P.

$\displaystyle P \land B(P) \leftrightarrow K(P)$ (2.3)
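Definition (2.3) characterises knowledge as true belief, which can be checked directly. In this sketch the actual world and the agent's beliefs are represented as hypothetical sets of atomic propositions; the names are illustrative only.

```python
# Illustrative sketch of (2.3): K(P) holds iff P holds and B(P) holds.

def believes(agent_beliefs, p):
    return p in agent_beliefs

def knows(world, agent_beliefs, p):
    # K(p): p is actually true AND the agent believes p
    return (p in world) and believes(agent_beliefs, p)

world = {"P", "Q"}       # propositions that actually hold
beliefs = {"P", "R"}     # propositions the agent believes

print(knows(world, beliefs, "P"))  # True: P holds and is believed
print(knows(world, beliefs, "R"))  # False: R is believed but does not hold
print(knows(world, beliefs, "Q"))  # False: Q holds but is not believed
```

The two failing cases illustrate why both conjuncts of (2.3) are needed: belief alone does not yield knowledge, and neither does unbelieved truth.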

A common metaphor for reasoning about agents, for designing agents, and for designing agents that reason about other agents is that of a mental state consisting of beliefs, desires and intentions [17]. This is known as a BDI model or BDI architecture [7]. In Rao and Georgeff's decision model for BDI agents [55], the desires of the agent describe the preferred goal states, in the same sense of a goal as in STRIPS planning. For each of the agent's possible worlds, generated by its beliefs, desires and intentions, there is a time-tree structure that represents the actions of the agent on its edges and the states at its nodes. The agent's goals can be any consistent subset of its desires. A subset of the possible worlds have corresponding time-trees that are consistent with a goal; these are called the goal-accessible worlds. The agent must choose and commit to one goal. The worlds in which the agent is committed to a goal are the intention-accessible worlds, a subset of the goal-accessible worlds. A number of BDI architectures have been proposed, for example IRMA [7] and PRS [24], in which the agent's mental state is composed of beliefs, desires and intentions, and in which the agent executes a cycle of observing the environment, updating beliefs, deliberating over intentions, and executing an intended plan. The planner developed in this thesis will also use a BDI architecture.
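The observe/update/deliberate/execute cycle can be sketched as follows. All class and attribute names, and the trivial deliberation rule (commit to the first unachieved desire), are illustrative assumptions, not the architecture developed in this thesis or the IRMA/PRS designs.

```python
# Minimal sketch of a BDI interpreter cycle (assumed names, not from the thesis).

class BDIAgent:
    def __init__(self, desires):
        self.beliefs = set()
        self.desires = desires      # preferred goal states
        self.intention = None       # the goal the agent has committed to

    def observe(self, percepts):
        self.beliefs |= percepts    # belief update from observation

    def deliberate(self):
        # Commit to the first desire not yet believed achieved (a stand-in
        # for selecting a consistent goal among the goal-accessible worlds).
        for goal in self.desires:
            if goal not in self.beliefs:
                self.intention = goal
                return
        self.intention = None

    def execute(self):
        # Stand-in for plan execution: assume the intended goal is achieved.
        if self.intention is not None:
            self.beliefs.add(self.intention)

    def step(self, percepts):
        self.observe(percepts)
        self.deliberate()
        self.execute()

agent = BDIAgent(desires=["door_open", "light_on"])
agent.step(percepts={"light_on"})
print(agent.intention)               # "door_open": the unachieved desire
print("door_open" in agent.beliefs)  # True after (simulated) execution
```

A real interpreter would replace the `execute` stub with selection and execution of a plan whose preconditions hold under the current beliefs, but the cycle structure is the same.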


bmceleney 2006-12-19