Next: Multilogue planning Up: Improvements to design features Previous: Beliefs about plan rules   Contents

## Improved belief revision

The current design of the belief revision system is based on lazy revision, that is, revising only the least controversial beliefs. For example, if agent 1 and agent 2 were to disagree about the proposition "blue(sky)", and agent 2 executed the act tell(blue(sky)), agent 1 would come to believe that agent 2 believes that blue(sky), revising at level 2, but would not address his conflicting belief at level 1. Level 1 beliefs are always absolute (true or false), in contrast to beliefs at the other levels, which are continuous estimates of level 1 beliefs. However, these estimates cannot currently be communicated by the planner, which deals only with level 1, absolute beliefs. Supposing, though, that they could be communicated, it would be straightforward to conflate the agents' estimates by summing their frequency counts. For example, agent 1 might estimate that agent 3 believes P at 0.6, since on 3 of 5 occasions an act was observed as evidence of P. On the other hand, agent 2 might estimate 3 of 15. The summed count would then be 6 of 20, giving a probability of 0.3. There is no similar way of conflating level 1 beliefs in the current system.
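The count-summing scheme described above can be sketched as follows (the function name and numbers are illustrative, not part of the implemented system):

```python
def conflate(count_a, total_a, count_b, total_b):
    """Combine two frequency-count estimates of the same level 1
    proposition by summing their counts and their totals."""
    return (count_a + count_b) / (total_a + total_b)

# Agent 1: evidence for P on 3 of 5 occasions (estimate 0.6);
# agent 2: 3 of 15 occasions (estimate 0.2).
# The pooled count is 6 of 20, giving 0.3.
p = conflate(3, 5, 3, 15)
print(p)  # prints 0.3
```

Summing counts rather than averaging the two probabilities weights each agent's estimate by the amount of evidence behind it, so agent 2's better-supported estimate pulls the result below the midpoint of 0.4.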

One good example of where conflicting level 1 beliefs need to be resolved is that of misconception correction. An example given by Pollack is that of a caller to a hospital who asks for Kathy's number. A precondition to asking is that the caller believes that Kathy is still in hospital. The receptionist can choose whether to give Kathy's home number, give Kathy's home number and correct the misconception, or give the hospital number, as appropriate (see figure 6.1). Correcting the misconception has some dialogue cost, but this cost is recouped, since the caller can then visit Kathy at home. This problem was input to the planner. It is hard to say whether the receptionist should revise his beliefs about Kathy being discharged when the caller asks, or whether the caller should revise his beliefs when the receptionist tells him that Kathy has been discharged. The former strategy leads the planner to choose the play shown in figure 6.2. Unfortunately, the receptionist then chooses not to correct the misconception and the plan eventually fails. Under the strategy of the caller revising his beliefs, the best play chosen by the planner is given in figure 6.3. This strategy results in the desired behaviour, but it is hard to make a domain-independent decision about which of the agents is right. Notice that the planner as currently implemented produces the play in figure 6.4, with the receptionist giving the correct number but incorrectly omitting the misconception correction, since the caller will not revise his level 1 beliefs as a result.

To deal properly with problems such as the misconception problem above, agents must reason about the amount of evidence upon which their beliefs are founded, and about the inferences that can be drawn from conditional dependencies between beliefs. For instance, the fact that the receptionist is at the hospital should be supporting evidence for his belief, which can be used to conflate it with the caller's previous belief. The agents might also conduct a dialogue of providing supporting beliefs in a structured argument, which would update the evidence beliefs in the hearer's belief model, thereby reinforcing the conclusion of the argument. For instance, the receptionist might mention that he is in the hospital to support the conclusion. Value of information judgements could be used in the selection of the argument. This would require a belief model that allows for non-independent beliefs, such as a Bayesian network. There is already a system by Horvitz and Paek [35] that makes similar value of information judgements about collecting evidence to support a hypothesis. Coincidentally, it also involves a receptionist problem, where clarifications are planned to disambiguate a user's intention.
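One way such supporting evidence might be weighed is a simple Bayesian update, with the reliability of the receptionist's report raised by the fact that he is at the hospital. The following sketch uses illustrative numbers and a hypothetical `posterior` helper, not anything from the implemented planner:

```python
def posterior(prior, p_report_given_true, p_report_given_false):
    """P(discharged | report of discharge), by Bayes' rule."""
    num = p_report_given_true * prior
    den = num + p_report_given_false * (1 - prior)
    return num / den

# The caller's prior that Kathy has been discharged is low (0.2),
# but the receptionist's report is reliable because he is at the
# hospital (0.95 true-report rate vs 0.05 false-report rate),
# so the caller's posterior belief is high.
print(round(posterior(0.2, 0.95, 0.05), 3))  # prints 0.826
```

The strength of the evidence, rather than a domain-independent rule about which agent is "right", is what decides whose level 1 belief gives way.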

It was argued in Section 3.4.7 that, for the examples in this thesis, it is convenient to treat beliefs as independent. However, there is a style of dialogue that is not focussed on the immediate domain plan, but rather provides value of information over the long term by improving the agent's general knowledge. In such dialogues, the ability to draw inferences from acquired beliefs is far more important. For example, an agent may learn that there is sugar in the cupboard, which supports his immediate plan of making a pavlova, but the inference that the week's shopping has been done, and that there is probably flour as well, provides value of information on many occasions in the distant future. Because of this, and because of the need to look at evidence and at argumentation, future work might focus more on the conditional dependencies between beliefs, and use a Bayesian belief network [50] to represent these.
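The sugar/flour inference can be sketched as a two-step chain of conditional probabilities, the kind of dependency a Bayesian network would capture (all probabilities here are illustrative assumptions, not drawn from the thesis):

```python
def p_flour_given_sugar(p_shop, p_sugar_given_shop, p_sugar_given_no_shop,
                        p_flour_given_shop, p_flour_given_no_shop):
    """P(flour | sugar observed), via the intermediate
    'shopping done' variable."""
    # P(shopping done | sugar observed), by Bayes' rule
    num = p_sugar_given_shop * p_shop
    p_shop_given_sugar = num / (num + p_sugar_given_no_shop * (1 - p_shop))
    # Marginalise over the shopping variable to get P(flour | sugar)
    return (p_flour_given_shop * p_shop_given_sugar
            + p_flour_given_no_shop * (1 - p_shop_given_sugar))

# Observing sugar raises the probability that the week's shopping
# was done, which in turn raises the probability of flour.
print(round(p_flour_given_sugar(0.5, 0.9, 0.2, 0.95, 0.3), 3))
```

Under independent beliefs, observing sugar would tell the agent nothing about flour; the chained update is exactly what a belief network representation would make available.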

The planner is not yet capable of revising the parent intention rules (see Section 3.4.7), since such revision is quite complicated. It is hoped that some form of revision can be implemented in the future. The approach used at the moment is to take the first plan tree that fits an act sequence, and to revise the rules based on it. This only works for examples that have a single parse; the window-seat problem given in Section 4.6, for example, has a unique parse.

bmceleney 2006-12-19