
Conclusion

The purpose of this chapter was to evaluate and demonstrate the planner. In Chapter 1, objectives were set to develop a dialogue planner that would be suitable for use in a dialogue management system, that would be easy to use, and that would demonstrate an efficiency advantage over systems that do not exploit a user model. The two examples in this chapter demonstrate ease of use; for further evidence, an example input file for example 1 is given in the appendix. Together these show that dialogue problems can be specified just as readily as for a state-based system or for a system that uses a phrase-structured dialogue grammar.

The question of whether the system is as efficient as other candidates has been partially answered. The two examples showed that a significant efficiency gain can be obtained in some dialogues by exploiting a user model. The question that remains is whether the system is competitive among dialogue systems that exploit a user model. For those user modelling systems that use a logical rather than a probabilistic model ( [5,37,51,10,38,54,69,2,72]), the answer is yes, since a logical model cannot represent the difference in utility between alternatives that lie on either side of a decision surface; the first sketch below illustrates this distinction.

The other category of systems that use an (implicit) user model is that of reinforcement learning systems. While reinforcement learning is very useful for routine dialogues ( [64], [76], [58]), it requires a large amount of training data, and in a routine situation there would be little point in using planned dialogue. A planning system, on the other hand, can make intelligent use of its training data in novel situations: the revision of a belief acquired for one plan can benefit another, novel plan, provided the two plans share that belief as a precondition, as the second sketch below illustrates. In particular, negotiation over complex domain plans may require choices of negotiation acts that depend on a large state space of intentions and beliefs about the domain-level plan. The application of the planner to this sort of problem is discussed in the next chapter. The efficiency of the planner compared with that of a reinforcement learning system has yet to be shown; the future work chapter sets out plans for experiments that compare the two approaches.
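To make the decision-surface point concrete, the following sketch (in Python, with hypothetical dialogue acts and utility figures) shows how a probabilistic user model weighs expected utilities on either side of the surface, whereas a logical model, which can only record a belief as true or false, cannot tell how close an alternative lies to it:

    # Hypothetical sketch: choosing between two dialogue acts under a
    # probabilistic user model. All names and figures are illustrative.

    def expected_utility(p_belief, utility_if_true, utility_if_false):
        """Expected utility of an act given P(user holds the belief)."""
        return p_belief * utility_if_true + (1 - p_belief) * utility_if_false

    # ASK confirms the belief first (robust either way); ASSUME skips the
    # question (fast, but costly if the belief turns out to be wrong).
    ASK    = {"if_true": 8,  "if_false": 6}
    ASSUME = {"if_true": 10, "if_false": 0}

    def choose(p):
        eu_ask    = expected_utility(p, ASK["if_true"], ASK["if_false"])
        eu_assume = expected_utility(p, ASSUME["if_true"], ASSUME["if_false"])
        return "ASSUME" if eu_assume > eu_ask else "ASK"

    # The decision surface lies where the expected utilities cross: here
    # 10p = 6 + 2p, i.e. p = 0.75. A probabilistic model chooses
    # differently on each side of it; a logical model collapses p to
    # true/false and so cannot see the utility difference across it.
    print(choose(0.9))   # ASSUME
    print(choose(0.6))   # ASK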

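The second sketch (again with hypothetical names and numbers) illustrates how revising a belief learned from a routine plan transfers to a novel plan that shares the belief as a precondition, a transfer that a reinforcement learner, lacking an explicit user model, could not make without fresh trials:

    # Hypothetical sketch: belief revision shared across plans through a
    # common precondition. All names and figures are illustrative.

    beliefs = {"user_knows_schedule": 0.5}   # P(proposition holds)

    class Plan:
        def __init__(self, name, preconditions, payoff):
            self.name = name
            self.preconditions = preconditions
            self.payoff = payoff

        def expected_utility(self):
            # The plan succeeds only if every precondition holds;
            # preconditions are assumed independent here.
            p_success = 1.0
            for prop in self.preconditions:
                p_success *= beliefs[prop]
            return p_success * self.payoff

    routine = Plan("confirm_meeting", ["user_knows_schedule"], 10)
    novel   = Plan("reschedule_meeting", ["user_knows_schedule"], 6)

    print(novel.expected_utility())        # 3.0 before any training

    # Training data from the routine plan revises the shared belief ...
    beliefs["user_knows_schedule"] = 0.9

    # ... and the novel plan benefits without ever having been tried.
    print(novel.expected_utility())        # 5.4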
A shortcoming of this chapter is that no evaluation was carried out in a human rather than a simulated setting. While simulation is very flexible and gives detailed results, even a small experiment with human subjects would produce valuable evidence about the planner's expected performance in such a setting. Further discussion of such an evaluation is given in the future work section.

