
The structure of the thesis

The next chapter, chapter 2, introduces the subject of planning in dialogue systems, providing a foundation for the planner's design and relating the planner to planning problems and planning systems that have been discussed in the past. It describes the nature of action, planning, and intentional agents, and the cooperative process of planning and plan recognition. It is shown that the nested beliefs of the agents, as well as what they say, determine the meaning of their dialogue acts. Since this thesis is concerned with efficient dialogue, game theory is introduced to provide a mechanism for choosing between alternatives of differing value.

Chapter 3 describes the planning model and the design of the planner, drawing on the foundations provided in chapter 2. The chapter starts by defining the requirements of the planner and giving a set of assumptions about the problem setting. The design components are then explained, with an example used as an illustration. The agent's state is described in terms of beliefs, desires, and the dialogue history. Then the set of processes that use and update this state is described: the planner, which generates the plan alternatives; the evaluator, which evaluates them so that one can be chosen; and belief revision, which updates the agent's mental state in response to observed actions.

Chapter 4 illustrates how the planner can be used to solve two practical dialogue problems. The utility gains that the planner can obtain by using belief revision to adapt the probabilistic belief model to the user are estimated by simulation. The first example concerns deciding whether to say something that has little dialogue cost but risks plan failure through misinterpretation, or to use an alternative whose meaning is clearer. The second example concerns deciding whether to pursue a goal by introducing it to the dialogue. The agent must decide whether it is better to take the initiative and risk plan failure, or to allow the other agent, who knows whether the plan will fail, to take the initiative instead. Continuing from this planning problem, a demonstration is given of the planner adapting its belief model to a user over the course of several dialogues.

Chapter 5 looks at the use of built-in negotiation acts and how they can be used to pass information about the agents' beliefs efficiently, so that the agents can improve the expected utility of their domain-level plan. Again, a set of demonstrations is given that indicates the utility gain obtained by using a planner that can adapt a probabilistic belief model to its dialogue partner. The demonstrations compare the use of the different negotiation acts on some simple problems in a cookery domain.

Chapter 6 looks at some ideas for future work. Some of these are improvements that could be made to the design and implementation of the planner. Comparisons with other planning algorithms are proposed, and evaluation in a human setting, rather than a simulated one, is discussed.

Chapter 7 concludes the thesis, discussing the objectives and what has been done to achieve them.

Finally, an appendix serves as a guide to the implemented planner, giving a brief description of the code modules, formally defining the input file syntax, and directing the reader to the experiment materials that correspond to the demonstrations.

