Conclusion

This chapter has investigated the planning of negotiation dialogues: meta-level dialogues in which agents exchange information about their beliefs before choosing a domain-level plan. It was argued that such negotiation must take the probability of belief states into account, since there may be many possible candidate plans for negotiation, but few that have a reasonable chance of being chosen, and therefore few that are worth the effort of negotiating over. Meta-level planning with a purely logical belief model, by contrast, considers all plan candidates equally, and may fail to be useful for this reason. A set of negotiation acts was chosen with several desirable properties in mind: that they correspond to the acts seen in human dialogue, and that they are simple, efficient, and fully expressive without redundancy of expression. The chosen repertoire of acts, "pass", "tell", "propose" and "request", was shown to have these properties. These acts could be formally specified as STRIPS rules, but in implementing the negotiation planner a recursive function was written to generate a negotiation game tree, rather than using the STRIPS rules directly. The game tree is then evaluated using the evaluation module of the domain-level planner.
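The recursive generate-then-evaluate structure described above can be sketched as follows. This is a minimal illustration, not the thesis implementation: the successor function, the dictionary-based tree representation, and the toy utility function are all assumptions standing in for the real belief-state update and the domain-level planner's evaluation module.

```python
# Illustrative sketch only: apply_act and the utility function are
# hypothetical stand-ins for the thesis's belief-state machinery.

ACTS = ("pass", "tell", "propose", "request")

def apply_act(state, act):
    # Toy successor: append the act to the dialogue history.
    return state + (act,)

def expand(state, depth):
    """Recursively generate the negotiation game tree to a fixed depth,
    rather than chaining STRIPS rules directly."""
    if depth == 0:
        return {"state": state, "children": []}
    return {"state": state,
            "children": [expand(apply_act(state, act), depth - 1)
                         for act in ACTS]}

def evaluate(node, utility):
    """Back leaf utilities up the tree; in the thesis this role is
    played by the evaluation module of the domain-level planner."""
    if not node["children"]:
        return utility(node["state"])
    return max(evaluate(child, utility) for child in node["children"])

tree = expand((), 2)
# Toy utility: count the informative (non-"pass") acts in the dialogue.
best = evaluate(tree, lambda s: sum(1 for a in s if a != "pass"))
```

With four acts and depth 2 the tree has sixteen leaves, and the best dialogue under this toy utility contains two informative acts.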

The planner has been shown to be able to decide whether a negotiation act is efficient, but so far the examples given have been small. Long negotiations may raise issues of coherence, since the negotiators should move the focus point [28] over the domain-level plan in a regular manner. For long negotiations the game trees grow quickly, both with the number of alternative negotiation acts and with the length of the negotiation. A heuristic search has been suggested to cope with this rapid expansion of the game tree, although no results on the efficacy of this approach are available yet. An example of a larger problem might be one in which a kitchen assistant robot must schedule a large number of meals in cooperation with a human chef. Such a problem would have a belief set large enough for an extended negotiation to occur: there might be many differing beliefs about the set of tasks to be accomplished, the availability of ingredients, and the availability and state of the cooking implements. The problem would be especially interesting in an environment that is not mutually and fully observable, so that information becomes available more through dialogue than through observation of actions taking place in the environment.
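To make the growth concrete: with four acts per turn, a negotiation of depth d yields a tree with 4^d leaves. One form the suggested heuristic search could take is a beam search that keeps only the k most promising successors at each level, bounding the tree to k^d leaves. The sketch below is a hypothetical illustration of that idea; the static preference ordering over acts is a made-up placeholder for a real heuristic, and none of the names come from the thesis.

```python
# Hypothetical sketch of heuristic pruning; the PREFERENCE ordering is
# an invented placeholder, not a heuristic proposed in the thesis.

ACTS = ("pass", "tell", "propose", "request")
PREFERENCE = {"propose": 3, "tell": 2, "request": 1, "pass": 0}

def apply_act(state, act):
    # Toy successor: append the act to the dialogue history.
    return state + (act,)

def beam_expand(state, depth, k):
    """Expand only the k best-rated successors at each level, bounding
    the tree to at most k**depth leaves instead of 4**depth."""
    if depth == 0:
        return {"state": state, "children": []}
    successors = [apply_act(state, act) for act in ACTS]
    kept = sorted(successors,
                  key=lambda s: PREFERENCE[s[-1]],
                  reverse=True)[:k]
    return {"state": state,
            "children": [beam_expand(s, depth - 1, k) for s in kept]}

def count_leaves(node):
    if not node["children"]:
        return 1
    return sum(count_leaves(child) for child in node["children"])

pruned = beam_expand((), 3, 2)  # 2**3 = 8 leaves rather than 4**3 = 64
```

A width-2 beam at depth 3 visits 8 leaves where full expansion would visit 64; whether such pruning preserves the quality of the chosen negotiation act is exactly the open question noted above.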

bmceleney 2006-12-19