The basic revisions that should be made by the agent are those drawn from the preconditions and effects of the act. If an act was observed, then its preconditions must have held, and its effects must now hold. The preconditions are therefore added to the observer's beliefs, followed by the effects. The order is important, since effects can undo preconditions. The decomposition rule that the agent used to choose the act should also be added as a capability belief, but since there may be many suitable rules, it is not yet clear how to do this; grammar induction techniques might be useful for this problem. The preconditions and effects of physical acts are added at all levels of the belief model, but for acts whose preconditions refer to the beliefs of the actor, the revision is made only at the level of the acting agent, to avoid having to resolve conflicts between the agents' beliefs. For example, the physical act of handing over a ticket has the unquestionable effect of giving possession of the ticket to the receiver, whereas the spoken act of claiming that a ticket is available has a precondition that refers to the beliefs of the speaker, and so should be recorded only at level 2 of the hearer's belief model. Full observability of all actions by both agents is assumed. This ensures that any revisions of beliefs made as a consequence of these actions are mutual, so each proposition is adopted as a mutual alternating belief. For example, if the user executes an act with a belief precondition, levels 2, 4, 6, 8 and so on of the system's belief model are updated. As a result of this lazy approach to belief revision, the system cannot cope with misconception dialogues [48], in which the agent must revise its own beliefs. For example, an agent may attempt a plan with a failed belief precondition; this should prompt the hearer to try to convince the speaker that the believed proposition does not hold, so that the speaker could then try an alternative plan enabled by the revised base beliefs.
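To make the update procedure concrete, the following sketch illustrates one way it could be coded. The names used here (BeliefModel, Act, apply_observed_act) and the proposition encoding are assumptions for illustration, not the implementation described in this work: preconditions are asserted before effects, and the levels touched depend on whether the act's preconditions refer to the actor's beliefs.

```python
# Minimal sketch of the belief-revision step described above.
# All names and data structures here are illustrative assumptions.

from dataclasses import dataclass
from typing import Dict, List, Tuple

Proposition = Tuple[str, bool]          # e.g. ("has_ticket(user)", True)

@dataclass
class Act:
    name: str
    preconditions: List[Proposition]    # must have held before the act
    effects: List[Proposition]          # hold after the act
    belief_precondition: bool           # do the preconditions refer to the actor's beliefs?

class BeliefModel:
    """Nested beliefs: level 1 = the system's own beliefs, level 2 = its
    beliefs about the user's beliefs, level 3 = the next alternation, ..."""
    def __init__(self, depth: int):
        self.levels: Dict[int, Dict[str, bool]] = {i: {} for i in range(1, depth + 1)}

    def assert_at(self, prop: Proposition, levels) -> None:
        name, value = prop
        for level in levels:
            self.levels[level][name] = value   # a later assertion overwrites an earlier one

def apply_observed_act(model: BeliefModel, act: Act, actor_level: int) -> None:
    """Revise the model after observing `act` performed by the agent
    modelled at `actor_level` (e.g. 2 for the user in the system's model)."""
    if act.belief_precondition:
        # Adopted as a mutual alternating belief: only the acting agent's
        # levels (e.g. 2, 4, 6, ...) are updated.
        levels = list(range(actor_level, len(model.levels) + 1, 2))
    else:
        # Physical acts are added at every level of the model.
        levels = list(range(1, len(model.levels) + 1))

    # Preconditions first, then effects: an effect can undo a precondition,
    # so the later assertion must win.
    for p in act.preconditions:
        model.assert_at(p, levels)
    for e in act.effects:
        model.assert_at(e, levels)
```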
As an example of belief revision, consider the question and answer pair used in section 3.4.4. In executing this dialogue, the system should respond to the user's answer by updating its belief model at levels 2, 4, 6 and so on.
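Continuing the sketch above, a hypothetical inform act standing in for the user's answer (the proposition names are invented for illustration) would touch only the even-numbered levels of the system's model:

```python
# Hypothetical usage: the user's answer is an act whose precondition refers
# to the user's beliefs, so only levels 2, 4, 6 of the system's model change.
model = BeliefModel(depth=6)

answer = Act(
    name="inform(available(ticket))",
    preconditions=[("believes(user, available(ticket))", True)],
    effects=[("said(user, available(ticket))", True)],
    belief_precondition=True,
)

apply_observed_act(model, answer, actor_level=2)
print(sorted(model.levels[2]))   # revised propositions at level 2 (and 4, 6)
print(sorted(model.levels[1]))   # level 1 is untouched: []
```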
As well as revising beliefs, the system should revise the intention rules. Each rule specifies a probability distribution over parent intentions in a plan tree, which should be updated from frequency counts over the dialogue data. This is not done by inference of preconditions or effects, but rather by counting the occurrences of acts in the plan tree. One difficulty is that there could be several candidate plans that explain an act sequence. If the probability of each plan could be obtained, an appropriate share of probability mass could be given to each parent in the intention rule; this probability might be found by checking for occurrences of each plan over all outcomes in the game tree. Due to the difficulty of implementing this, a principled revision of the intention rules is left to future work. For the moment, the first parse is taken, and every parent in the parse tree is counted when updating the intention rules.
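The simple counting scheme can be sketched as follows. The tree representation and the names (PlanNode, IntentionRules) are assumptions for illustration: the first parse of the observed act sequence is walked once, every parent node is counted for each of its children, and normalising the counts yields the rule's distribution over parent intentions.

```python
# Minimal sketch of the first-parse counting update described above.
# Names and structures are illustrative assumptions, not the implementation.

from collections import defaultdict
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class PlanNode:
    act: str
    children: List["PlanNode"] = field(default_factory=list)

class IntentionRules:
    """For each act, a frequency count of the parent intentions that
    dominated it in observed parse trees."""
    def __init__(self):
        self.counts: Dict[str, Dict[str, int]] = defaultdict(lambda: defaultdict(int))

    def update_from_parse(self, node: PlanNode) -> None:
        for child in node.children:
            self.counts[child.act][node.act] += 1   # count this parent intention
            self.update_from_parse(child)           # recurse down the plan tree

    def distribution(self, act: str) -> Dict[str, float]:
        parents = self.counts[act]
        total = sum(parents.values())
        return {p: c / total for p, c in parents.items()} if total else {}

# Invented example parse: book_ticket dominates ask_availability and pay.
tree = PlanNode("book_ticket", [PlanNode("ask_availability"), PlanNode("pay")])
rules = IntentionRules()
rules.update_from_parse(tree)
print(rules.distribution("ask_availability"))        # {'book_ticket': 1.0}
```

Because only the first parse is counted, all of the probability mass for an act goes to the parents in that single tree; distributing mass across the parents of every candidate plan is the refinement left to future work.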