

Belief model acquisition

In this section, user model acquisition is demonstrated by showing how the belief revision component of the system adapts a stereotype model over a sequence of dialogues between the system and a user. Consider again the window seat example of the last section. In figure 4.25 the system must decide whether or not to offer a window seat. This decision depends on its model of the user's intention to have a window seat, and on its model of the user's belief about whether the system has a window seat available. It is assumed in this example that there is a single user and a sequence of dialogues. A stereotype model is employed to capture a user who wants a window seat on some occasions and not on others. Similarly, the user is assumed to treat the system as varying from day to day, with the availability of a window seat randomly determined according to a certain probability. The user therefore also employs a stereotype, and the system must estimate this stereotype. Over the course of a sequence of dialogues, the stereotype model comes to estimate the expected belief state. In this example, stereotypes nested to all levels are used, since the agents mutually believe that each is modelling the other using a stereotype model.

Figure 4.25: Game tree for agent's choice between offer and chat
[figure: figures/window_plantree_branch.eps]
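As an illustration of how this decision is made from the two stereotype probabilities, the following sketch compares the expected utilities of offer and chat. The function names, probabilities and utility values are hypothetical and show only the form of the computation; in the system the values come from evaluating the game tree of figure 4.25.

# Hypothetical sketch of the offer/chat decision from the two stereotype
# probabilities. The utility numbers below are illustrative only; in the
# system they come from evaluating the game tree of figure 4.25.

def expected_utility_offer(p_intends_window: float) -> float:
    # If the user intends a window seat the offer is accepted (utility 1.0),
    # otherwise the offer is rejected and the turn is wasted (utility -0.2).
    return p_intends_window * 1.0 + (1.0 - p_intends_window) * -0.2


def expected_utility_chat(p_intends_window: float,
                          p_believes_seat_available: float) -> float:
    # Chat pays off only if the user both wants a window seat and believes
    # one is available, so that the user takes the initiative and asks.
    p_user_asks = p_intends_window * p_believes_seat_available
    return p_user_asks * 0.8


def choose_act(p_intends_window: float,
               p_believes_seat_available: float) -> str:
    u_offer = expected_utility_offer(p_intends_window)
    u_chat = expected_utility_chat(p_intends_window, p_believes_seat_available)
    return "offer" if u_offer >= u_chat else "chat"


# With these numbers the system offers: the user probably wants a window
# seat but is unlikely to believe one is available and ask unprompted.
print(choose_act(0.7, 0.3))   # -> offer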

The system was set to use a "decaying" average to compute the stereotype. That is, the stereotype was a 90%/10% mix of the previous value and a revised value, where the revised value was obtained by starting with the previous value and performing belief revision on it using the current dialogue; a sketch of this update is given after table 4.1. This ensured that the most recent evidence carried greater weight. Appropriate preconditions were set up for the plan rules to ensure that the system could make all of the necessary inferences about beliefs and intentions. These rules are given in table 4.1.


Table 4.1: Preconditions used by the belief revision mechanism
act        precondition
---------  ---------------------------
offer      have-seat
give       have-seat
dontgive   not(have-seat)

ask        intend(book-flight-window)
accept     intend(book-flight-window)
reject     intend(book-flight-any)
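As a minimal sketch of the decaying-average update described above, the stereotype can be represented as a probability for each proposition and mixed 90%/10% with the value produced by belief revision on the current dialogue. The dictionary representation, the example propositions and the 0.5 default are assumptions made purely for illustration.

DECAY = 0.9   # weight on the previous stereotype (the 90%/10% mix in the text)

def update_stereotype(previous: dict, revised: dict) -> dict:
    """Mix the previous stereotype with the value obtained by belief revision
    on the current dialogue, so recent evidence carries a 10% weight."""
    return {prop: DECAY * previous.get(prop, 0.5) + (1.0 - DECAY) * value
            for prop, value in revised.items()}

# Example: after a dialogue in which the user asked for a window seat,
# belief revision raises intend(book-flight-window) to 1.0.
previous = {"intend(book-flight-window)": 0.4}
revised  = {"intend(book-flight-window)": 1.0}
print(update_stereotype(previous, revised))
# {'intend(book-flight-window)': 0.46}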


The "dry-land" algorithm is particularly helpful in this example, for the dialogue [discuss-details,chat,chat] (see figure 4.15). The final chat has no precondition, so ordinary belief revision does nothing. However, it can be explained both by the user not intending a window seat and by the user not believing that one is available, and the dry-land algorithm adopts the simplest combination of these explanations. The algorithm is also useful at the offer/chat decision. For example, if the system were to choose chat, the system would expect the user to update his model with an explanation that the system believes either that the user does not want a window seat, or that the user believes the system has a window seat available and will therefore take the initiative himself. That the dry-land algorithm is useful twice in this example suggests that it may also be important in other types of dialogue, though these are yet to be investigated.
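The precise definition of the dry-land algorithm is not repeated here; the sketch below only illustrates the idea of adopting the simplest combination of candidate explanations, where "simplest" is assumed, for illustration, to mean the combination that revises the fewest beliefs in the current model.

from itertools import combinations

def dry_land(current_model: dict, candidates: list) -> dict:
    # Try every non-empty combination of candidate explanations and keep the
    # one that would revise the fewest beliefs in the current model.
    best_cost, best_explanation = None, {}
    for size in range(1, len(candidates) + 1):
        for combo in combinations(candidates, size):
            merged = {}
            for explanation in combo:
                merged.update(explanation)
            cost = sum(1 for prop, value in merged.items()
                       if current_model.get(prop) != value)
            if best_cost is None or cost < best_cost:
                best_cost, best_explanation = cost, merged
    return best_explanation

# The final chat in [discuss-details, chat, chat] has two candidate
# explanations: the user does not intend a window seat, or the user does
# not believe one is available.
current = {"intend(book-flight-window)": True, "bel(user, have-seat)": True}
candidates = [{"intend(book-flight-window)": False},
              {"bel(user, have-seat)": False}]
print(dry_land(current, candidates))   # adopts a single, minimal revision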

Three demonstrations are now presented to show how the stereotype model adapts over the course of a sequence of dialogues. Demonstration 1 looks at the updating of the system's model of the user's model in response to the system's act on the first turn. Demonstration 2 looks at the system's revision of its own model in response to the user's act on the second turn. Demonstration 3 looks at the sequence of belief states that results from a sequence of dialogues.


