(2.1)  BEL(A, P)
The language of belief can be extended so that an agent may have beliefs about other agents, allowing agents to reason about one another's plans. For example, the following sentence might be generated by agent A, expressing that agent A believes that agent B believes P.
(2.2)  BEL(A, BEL(B, P))
An agent is said to know a proposition only if the proposition can be proved and it can be proved that the agent believes the proposition. Agent A might then claim "agent B knows P", which means both that agent A believes P and that agent A believes that agent B believes P.
(2.3)  KNOW(B, P) ≡ BEL(A, P) ∧ BEL(A, BEL(B, P))
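As a rough illustration (the names Atom, Bel and knows are assumptions for this sketch, not the thesis's notation, and "can be proved" is simplified here to membership in agent A's belief set), nested belief sentences such as (2.2) can be represented as recursive terms, and the knowledge claim in (2.3) tested against what agent A currently believes:

    from dataclasses import dataclass
    from typing import Union

    @dataclass(frozen=True)
    class Atom:
        name: str                    # a primitive proposition, e.g. P

    @dataclass(frozen=True)
    class Bel:
        agent: str                   # the believing agent
        prop: "Formula"              # what is believed, possibly itself a belief

    Formula = Union[Atom, Bel]

    def knows(beliefs_of_a: set, agent_b: str, p: Formula) -> bool:
        """Agent A's claim that agent B knows p: A believes p and A believes
        that B believes p, as in the condition described for (2.3)."""
        return p in beliefs_of_a and Bel(agent_b, p) in beliefs_of_a

    # Agent A believes P and believes that B believes P (sentence 2.2),
    # so A may claim that agent B knows P.
    P = Atom("P")
    beliefs_of_A = {P, Bel("B", P)}
    assert knows(beliefs_of_A, "B", P)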
A common metaphor for reasoning about agents, for designing agents, and for designing agents that reason about other agents is that of a mental state consisting of beliefs, desires and intentions [17]. This is known as a BDI model or BDI architecture [7]. In Rao and Georgeff's decision model for BDI agents [55], the agent's desires describe its preferred goal states, in the same sense of a goal as in STRIPS planning. Each of the agent's possible worlds, generated by its beliefs, desires and intentions, is associated with a time-tree structure whose edges represent the agent's actions and whose nodes represent states. The agent's goals can be any consistent subset of its desires. A subset of the possible worlds will have corresponding time-trees that are consistent with a given goal; these are called the goal-accessible worlds. The agent must choose and commit to one goal. The worlds in which the agent is committed to a goal are the intention-accessible worlds, a subset of the goal-accessible worlds.

A number of BDI architectures have been proposed, for example IRMA [7] and PRS [24], in which the agent's mental state is composed of beliefs, desires and intentions, and in which the agent executes a cycle of observing the environment, updating its beliefs, deliberating over its intentions, and executing an intended plan. The planner developed in this thesis will also use a BDI architecture.
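The observe-update-deliberate-execute cycle common to these architectures can be sketched as follows; the class and method names are illustrative assumptions, not the interfaces of IRMA, PRS, or the planner developed in this thesis:

    from dataclasses import dataclass, field
    from typing import List, Set

    @dataclass
    class BDIAgent:
        """A minimal, illustrative BDI control loop (names are assumptions,
        not the interface of any particular architecture)."""
        beliefs: Set[str] = field(default_factory=set)       # what the agent takes to be true
        desires: Set[str] = field(default_factory=set)       # preferred goal states
        intention: List[str] = field(default_factory=list)   # the plan the agent has committed to

        def observe(self, percepts: Set[str]) -> None:
            # Update beliefs from what is observed in the environment.
            self.beliefs |= percepts

        def deliberate(self) -> None:
            # Choose and commit to one goal drawn from the desires;
            # here, simply the first desire not yet believed to be achieved.
            for goal in sorted(self.desires):
                if goal not in self.beliefs:
                    self.intention = self.plan_for(goal)
                    return
            self.intention = []

        def plan_for(self, goal: str) -> List[str]:
            # Placeholder: a real agent would construct a plan for the goal.
            return [f"achieve({goal})"]

        def step(self, percepts: Set[str]) -> List[str]:
            # One cycle: observe, update beliefs, deliberate, execute the intended plan.
            self.observe(percepts)
            self.deliberate()
            executed, self.intention = self.intention, []
            return executed

    agent = BDIAgent(desires={"door_open"})
    print(agent.step(percepts={"at_door"}))   # -> ['achieve(door_open)']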