

Belief revision

Plan recognition is used to infer the agent's intention given the dialogue history, so that possible continuations of a plan can be inferred. To differentiate these continuations, the beliefs of the agent must also be inferred, and the possible continuations of the plan evaluated in the context of those beliefs. The belief revision process updates the agent's beliefs from the evidence of executed acts. It is called after each turn in the dialogue, and it is also used in the evaluation process, where, as the game tree is traversed, the belief set used to evaluate a chance node must be updated to reflect the acts on the path that leads to it.

Belief revision [23] involves the agent adding propositions to its belief set. The new set may be inconsistent, requiring the agent to search for some set that best satisfies the evidence given by the existing set and the new proposition together. In a logical model of belief, one procedure is to drop as few propositions from the set as possible to restore consistency. This works well for a set such as $ \{ A \Rightarrow B, A \}$ with the revision $ \lnot B$: the inconsistent set is $ \{ A , A \Rightarrow B , \lnot B \}$, and a minimal contraction of this set gives $ \{ A , \lnot B \}$. In a probabilistic representation such as a belief network [50], statistical information about the co-occurrence of beliefs in the observed dialogues might be used to infer the causal relationships between beliefs, and thus to construct the causal structure of the belief network. As more data arrive, the causal structure could be recomputed. For a given dialogue, the inferred network could then be evaluated using the observed node values, which would provide updated values for the other beliefs in the network. There is considerable interest in the problem of inferring belief network structure, but it is far from trivial [32], and it will not be tackled here.

One way around the problem is simply to assume that there is no causal relationship between beliefs. Under this assumption, each belief can be updated independently and directly from the dialogue evidence. The assumption works quite well, particularly for the examples presented in this thesis, since for many of the preconditions found in the dialogue plan rules there is no causal relationship. This is therefore the approach that is taken. Scenarios where the assumption does not hold as well are discussed in Section 6.3.3.
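The following is a minimal sketch, using hypothetical names, of what the independence assumption amounts to: with no causal structure between beliefs, a revision simply overwrites the value of the proposition named by the evidence, and no other belief is touched. This contrasts with the logical contraction of $ \{ A , A \Rightarrow B , \lnot B \}$ above, where a supporting proposition may have to be dropped.

    def revise(beliefs, evidence):
        """beliefs: dict mapping proposition -> truth value (or probability).
        evidence: propositions observed to hold or not hold in the turn.
        Each belief is revised independently; nothing else changes."""
        revised = dict(beliefs)
        revised.update(evidence)
        return revised

    # The evidence not-B replaces the old value of B; the beliefs in A and
    # in A => B are left untouched.
    beliefs = {"A": True, "A => B": True, "B": True}
    print(revise(beliefs, {"B": False}))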

The basic revisions made by the agent are those drawn from the preconditions and effects of the observed act. If an act was observed, then its preconditions must have held, and its effects must now hold. The preconditions are therefore added to the observer's beliefs, followed by the effects. The order is important, since effects can undo preconditions. The decomposition rule that the agent used to choose the act should also be added as a capability belief, but since there may be many suitable rules, it is not clear at present how to do this; grammar induction techniques might be useful for the problem. The preconditions and effects of physical acts are added to the belief model at all levels, but for acts whose preconditions refer to the beliefs of the actor they are added only at the levels of the acting agent, to avoid having to resolve conflicts between the agents' beliefs. For example, the physical act of handing over a ticket has the unquestionable effect of giving possession of the ticket to the receiver, whereas the spoken act of claiming that a ticket is available has a precondition that refers to the beliefs of the speaker, and so should be updated only at level 2 of the hearer's belief model. Full observability of all actions by both agents is assumed, which ensures that any revisions made as a consequence of these actions are mutual; each proposition is therefore adopted as a mutual alternating belief. For example, if the user executes an act with a belief precondition, levels 2, 4, 6, 8 and so on of the system's belief model are updated.

As a result of this lazy approach to belief revision, the system cannot cope with misconception dialogues [48], in which the agent must revise its own beliefs. For example, an agent may attempt a plan with a failed belief precondition. This should prompt the hearer to try to convince the speaker that the believed proposition does not hold, so that the speaker could switch to an alternative plan enabled by the revised base beliefs.
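A minimal sketch of this revision step is given below, assuming a hypothetical nested belief model in which level 1 holds the observer's own beliefs, level 2 its beliefs about the other agent, level 3 its beliefs about that agent's beliefs about it, and so on. The act structure, proposition strings and level numbering are illustrative only; the point is the ordering of preconditions before effects and the split between updating all levels and updating only the actor's alternating levels.

    from collections import namedtuple

    # Hypothetical act structure: preconditions and effects are lists of
    # (proposition, refers_to_actor_beliefs) pairs.
    Act = namedtuple("Act", ["preconditions", "effects"])

    class BeliefModel:
        """Nested belief model indexed by level of nesting."""
        def __init__(self, depth):
            self.levels = {i: set() for i in range(1, depth + 1)}

        def add(self, proposition, levels):
            for level in levels:
                self.levels[level].add(proposition)

    def revise_from_act(model, act, actor_level=2):
        """Revise the observer's belief model from an observed act.
        Preconditions are added before effects, since effects can undo
        preconditions.  Physical propositions are adopted at every level
        (full observability makes them mutual), while propositions about
        the actor's beliefs are adopted only at the actor's alternating
        levels (actor_level, actor_level + 2, ...)."""
        all_levels = list(model.levels)
        actor_levels = [l for l in all_levels
                        if l >= actor_level and (l - actor_level) % 2 == 0]
        for phase in (act.preconditions, act.effects):
            for prop, about_actor_beliefs in phase:
                model.add(prop, actor_levels if about_actor_beliefs else all_levels)

    # The user's claim that a ticket is available has a belief precondition,
    # so it is adopted only at levels 2, 4, 6, ...; a physical effect would
    # be adopted at every level.
    model = BeliefModel(depth=6)
    claim = Act(preconditions=[("believes(user, available(ticket))", True)],
                effects=[("said(user, available(ticket))", False)])
    revise_from_act(model, claim, actor_level=2)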

As an example of belief revision, consider the question and answer pair used in Section 3.4.4. In executing this dialogue, the agent should respond to the user's answer by updating the belief model at levels 2, 4, 6 and so on.

As well as revising beliefs, the system should revise the intention rules. Each rule specifies a probability distribution over parent intentions for a plan tree, which should be updated from frequency counts in the dialogue data. This is not done by inference over preconditions or effects, but by counting the occurrences of acts in the plan tree. One difficulty is that several candidate plans may explain an act sequence. If the probability of each plan could be obtained, an appropriate share of the probability mass could be given to each of the parents in the intention rule; this probability might be found by checking for occurrences of each plan over all outcomes in the game tree. Because of the difficulty of implementing this, the revision of intention rules is left to future work. For the moment, the first parse is taken, and every parent in the parse tree is counted when updating the intention rules, as in the sketch below.
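The following is a minimal sketch, with hypothetical plan and rule names, of this simplified counting scheme: the first parse is walked, every parent-child pair increments a frequency count in the child's intention rule, and the rule's distribution is the normalised counts.

    from collections import defaultdict

    class IntentionRule:
        """Frequency counts over the parent intentions of one plan."""
        def __init__(self):
            self.counts = defaultdict(int)

        def observe_parent(self, parent):
            self.counts[parent] += 1

        def distribution(self):
            total = sum(self.counts.values())
            return {parent: count / total for parent, count in self.counts.items()}

    def update_from_parse(rules, parse_tree):
        """Walk the first parse, given as nested (plan_name, children) pairs,
        and count each node's parent in that node's intention rule."""
        name, children = parse_tree
        for child in children:
            rules[child[0]].observe_parent(name)
            update_from_parse(rules, child)

    # Usage with a hypothetical parse of a question-answer exchange.
    rules = defaultdict(IntentionRule)
    parse = ("book_ticket", [("ask_availability", []), ("answer_availability", [])])
    update_from_parse(rules, parse)
    print(rules["ask_availability"].distribution())   # {'book_ticket': 1.0}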


