

User modelling in dialogue systems

User modelling is the representation, acquisition, and maintenance of a model of those components of the mental state of a user that determine his preferences over different courses of a dialogue with the system. Typically, these are his beliefs about the state of the environment and the system, his capabilities for action within that environment, and the value he associates with the achievement of goals [38]. A dialogue system with a good user modelling component will achieve the user's intentions more quickly, since it can infer those intentions without being told. Such a system would never tell the user the same fact twice, never begin a subplan that the user is incapable of cooperating with, and never ask for too much clarification about the plan that the user is trying to construct. In the context of dialogue planning, the BDI model is the natural starting point for representing a user model. The intentions of the agent are a function of its beliefs and desires. The desires of the agent do not change during the dialogue, since the agent does not choose desires; he only chooses intentions. Intentions are not stored directly in the user model but are determined by the system through the plan recogniser. Beliefs and desires determine many possible intentions, but only some of those possibilities will be consistent with the actions of the agent observed in the dialogue. A BDI user model therefore has three components: beliefs, desires and action history, from which hypotheses about the intended plan structure can be inferred.
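
As an illustration only, the following Python sketch shows one way such a three-component model might be represented. The class name, the plan-library format and the filtering step are assumptions made for the example, not the representation developed here; the filter merely stands in for a real plan recogniser.

    # A minimal sketch of a BDI user model, assuming a propositional encoding
    # of beliefs and desires; the names and the plan-library format are
    # illustrative only.

    class UserModel:
        """The system's model of the user: beliefs, desires and action history."""

        def __init__(self, beliefs=None, desires=None):
            self.beliefs = set(beliefs or [])   # propositions the user is taken to believe
            self.desires = set(desires or [])   # goals the user values; fixed for the dialogue
            self.history = []                   # dialogue acts observed so far

        def observe(self, act):
            """Record an observed dialogue act for later plan recognition."""
            self.history.append(act)

        def candidate_intentions(self, plan_library):
            """Plans consistent with the user's desires and the observed actions.

            plan_library maps each goal to the action sequences that achieve it;
            this simple filter stands in for a full plan recogniser."""
            return [plan
                    for goal in self.desires
                    for plan in plan_library.get(goal, [])
                    if all(act in plan for act in self.history)]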

While much work on dialogue planning and plan recognition makes no distinction between the beliefs of the different agents, it is quite possible, in apprentice-expert dialogues or in dialogues where each agent brings a complementary set of skills, that the agents will have differing beliefs about the state of the environment, and differing beliefs about the plan rules as well [53]. An agent may then construct a single plan based on two different sets of plan rules, its own and those it attributes to the other agent. Plan recognition for dialogues of two or more agents must account for these differences.
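
The sketch below illustrates the point under invented rule names: the recogniser keeps a separate plan rule set for each agent and expands a goal with the rules attributed to the agent doing the planning.

    # Sketch of per-agent plan rules: each agent is assumed to hold its own
    # rule set, and goals are expanded with the rules attributed to the
    # planning agent. The rule names are invented for illustration.

    plan_rules = {
        "system": {"book_flight": ["get_destination", "get_dates", "confirm"]},
        "user":   {"book_flight": ["get_destination", "get_dates", "pay", "confirm"]},
    }

    def expand(goal, agent):
        """Expand a goal using the plan rules attributed to `agent`."""
        return plan_rules[agent].get(goal, [goal])

    print(expand("book_flight", "user"))    # the user's decomposition includes "pay"
    print(expand("book_flight", "system"))  # the system's decomposition does not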

For the model of dialogue planning based on STRIPS schemas, the basic mechanism for acquisition and maintenance of a BDI user model is inference from preconditions and effects. Assuming that each agent is capable of recognising that a dialogue act has occurred, it can then update the propositions that were necessary to satisfy the preconditions of the act, and the propositions that were on its effects list [52]. Another means of maintaining a belief model is to use stereotypes. For example, a dialogue system for flight bookings might see dozens of business users every day, who all have a similar belief state, and dozens of tourists, whose shared belief state is quite different from that of the business users. Finding out whether the user is a business user or a tourist then allows the system to retrieve an appropriate stereotype model, constructed from previous dialogues, that best represents the user's state. Rich's GRUNDY system was the first to use stereotype models [57].
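
The following sketch illustrates both mechanisms under assumed schema and stereotype contents: the belief model is seeded from a stereotype and then updated from the precondition and effect lists of each recognised act. The act names and stereotype entries are made up for the example.

    # Sketch of belief-model maintenance by inference over preconditions and
    # effects, assuming STRIPS-style act schemas, plus stereotype retrieval
    # for the initial model.

    act_schemas = {
        "tell_departure_date": {
            "preconditions": ["user_knows_departure_date"],
            "effects": ["system_knows_departure_date"],
        },
    }

    stereotypes = {
        "business": {"values_flexible_ticket", "knows_frequent_flyer_scheme"},
        "tourist":  {"values_cheap_ticket"},
    }

    def initial_beliefs(user_class):
        """Start the belief model from the stereotype that best fits the user."""
        return set(stereotypes.get(user_class, set()))

    def update_beliefs(beliefs, observed_act):
        """After recognising an act, record that its preconditions held and its effects now hold."""
        schema = act_schemas[observed_act]
        beliefs.update(schema["preconditions"])   # must have been true for the act to be performed
        beliefs.update(schema["effects"])         # made true by performing the act
        return beliefs

    beliefs = initial_beliefs("tourist")
    beliefs = update_beliefs(beliefs, "tell_departure_date")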

Nested belief models are those that represent the system's private beliefs, the system's beliefs about the user, the system's beliefs about the user's beliefs about the system, and so on. The first, the system's private beliefs, is referred to as level one, the second as level two, and so on. Beliefs that occur at all levels are referred to as mutual beliefs. Such beliefs are quite common, since if the agents mutually believe that they are both observing the dialogue, any inferences drawn from the dialogue will be mutual. Many dialogues have this property, and so the system need only maintain a two-level belief model, since every second level is identical [72]. There are exceptions, however. For example, the agents may be talking on a noisy telephone line, and agent A may assume that agent B has not heard what it said, whereas agent B may believe that agent A assumes it has been heard. This would form a discrepancy between level one and level three. Similarly, one agent may leave the room and perform actions that the other cannot observe.

Clark and Schaefer [15] call the process of establishing mutual belief "grounding". Failure in communication in spoken dialogue is common, whether through disagreement about what was actually said or through disagreement about what can be inferred from it. Hearers must therefore follow up what has been said with evidence that allows the speaker to update his level three model, his level five model, and so on. For example, after dictating a telephone number, the speaker cannot be sure that the hearer knows what was uttered. If the hearer repeats the number back to the speaker, however, the speaker can establish this belief. Once the level three belief is established, the speaker can expect that any plan the hearer pursues that involves this number will succeed.

Deeply nested belief models are also necessary when stereotypes are used. For example, one travel agent may offer extra legroom while all the other travel agents in town do not, so the user may believe that the system does not offer extra legroom. This would be a discrepancy between levels one and three of the nested belief model. The system must then refer to level three and make an extra effort to ground the fact that extra legroom is offered. The planner will therefore allow belief models nested to arbitrary depth.
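
A minimal sketch of such a model is given below. The level indexing and the noisy-telephone example follow the discussion above, while the class itself and its storage scheme are illustrative assumptions rather than the planner's actual data structure.

    # Sketch of a nested belief model of arbitrary depth, assuming levels are
    # indexed from the system's point of view: level one is the system's
    # private beliefs, level two its beliefs about the user's beliefs, and so
    # on. Mutual beliefs are stored once and treated as holding at every level.

    class NestedBeliefModel:
        def __init__(self):
            self.levels = {}      # level number -> set of propositions
            self.mutual = set()   # propositions believed at every level

        def add(self, level, proposition):
            self.levels.setdefault(level, set()).add(proposition)

        def add_mutual(self, proposition):
            self.mutual.add(proposition)

        def holds(self, level, proposition):
            """True if the proposition occurs at the given level."""
            return proposition in self.mutual or proposition in self.levels.get(level, set())

    # The noisy-telephone example: the system does not believe the user heard
    # the utterance (nothing at level one), but it does believe that the user
    # believes the system assumes it was heard (level three).
    m = NestedBeliefModel()
    m.add(3, "user_heard_utterance")
    assert m.holds(3, "user_heard_utterance") and not m.holds(1, "user_heard_utterance")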

There is a standard notation, introduced in [40], for describing the occurrence of propositions in a belief model. To say that a proposition P occurs at level one of the belief model, SBP is written, meaning "the system believes P". For level two, SBUBP is written, meaning "the system believes that the user believes P", and for level three, SBUBSBP. The notation MBP says that a proposition P is mutual, that is, it occurs at every level of the belief model. It is often the case that a proposition occurs at every second level of the belief model; the expression SBMBUBP is used in this case to say that "the system believes that it is mutually believed that the user believes P".
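
The short sketch below merely illustrates how the prefix notation lines up with levels of the nested belief model; the string-based parsing is an assumption made for illustration, and the MB operator is not handled.

    # Each leading "SB" or "UB" operator is taken to add one level of nesting:
    # SBP is level one, SBUBP level two, SBUBSBP level three.

    def level_of(expression):
        """Count the belief operators prefixed to a proposition."""
        level = 0
        rest = expression
        while rest.startswith(("SB", "UB")):
            level += 1
            rest = rest[2:]
        return level, rest   # rest is the bare proposition, e.g. "P"

    print(level_of("SBP"))       # (1, 'P')  the system believes P
    print(level_of("SBUBP"))     # (2, 'P')  the system believes the user believes P
    print(level_of("SBUBSBP"))   # (3, 'P')  level three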

