
Planning in self-interested settings

All of the examples in this thesis have used a shared utility function, and so have been examples of fully cooperative dialogues. When agents instead have individual utility functions, veracity can no longer be assumed, and an agent can compute the value of misinformation as well as that of information. In performing plan recognition, the hearer would form hypotheses for both the sincere and the insincere forms of the speaker's acts, so that the hearer can estimate whether lying was the rational strategy for the speaker.

One example of this is a "friend or foe" type of game, in which both agents can expect to do better if both cooperate in a venture than if both defect, but if one defects while the other cooperates, the defector is better off than if both had cooperated. For example, a fraudster in a dialogue with a banking dialogue system might insincerely give an account number. It is then up to the dialogue system to weigh the risk of fraud against the expense of a security measure such as asking for the speaker's date of birth. This decision rests on the system's estimate of the probability that the speaker is insincere.

As another example, a negotiation of strategies between two allied parties, in a war or in a competitive marketplace, might result in an insincere proposal to attack a third party, stronger than either of them, at a specific time. By defecting, the proposer can allow his ally and the third party to weaken one another until he can enjoy a victory over both rather than a shared victory. The planning of communications in such a setting must accommodate both the planning and the recognition of insincere acts.
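The banking example amounts to a decision under uncertainty: the system weighs the expected cost of fraud against the cost of the security measure, using its estimate of the probability that the speaker is insincere. The following is a minimal sketch of that calculation; the probability and the utility values are illustrative assumptions, not figures from the thesis.

  # A minimal sketch (hypothetical values) of the system's choice between
  # serving the speaker directly and first applying a security measure.

  # Probability that the speaker's act (giving the account number) is
  # insincere, as estimated from the plan-recognition hypotheses.
  p_insincere = 0.05

  # Illustrative utilities to the system.
  cost_of_fraud = -1000.0        # loss if an insincere speaker is served
  cost_of_security_check = -5.0  # inconvenience of asking for date of birth
  value_of_service = 20.0        # value of serving a sincere customer

  # Expected utility of serving without a check: a sincere speaker yields
  # the service value, an insincere one the fraud loss.
  eu_no_check = ((1 - p_insincere) * value_of_service
                 + p_insincere * cost_of_fraud)

  # Expected utility of asking for the date of birth first, assuming the
  # check screens out insincere speakers but imposes its cost on everyone.
  eu_check = ((1 - p_insincere) * (value_of_service + cost_of_security_check)
              + p_insincere * cost_of_security_check)

  # The system plans the act with the higher expected utility.
  if eu_check > eu_no_check:
      print("ask for date of birth", eu_check, eu_no_check)
  else:
      print("serve without check", eu_check, eu_no_check)

With these assumed values the security check is worthwhile; a lower estimated probability of insincerity, or a costlier check, would tip the decision the other way.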

