

Plan library

The plan library for the problem is described diagrammatically in figure 4.1. The notation used in this diagram represents the decomposition rules that form the capabilities of the agent. The "decomp" diamond shape represents the relation "Act A may be decomposed to acts A1...An". The colour coding indicates which agent executes each of the acts: blue represents agent 1 and pink represents agent 2. To apply these rules, the agent might start with fix-car, for example, and follow a chain of decompositions to find an act to execute. In this example, ask-car-spanner would be chosen first. The agent then has a choice between three decompositions, each resulting in one of the leaf acts ask-car-unambiguous or ask-ambiguous being executed.

Figure 4.1: Plan library for the risk problem
[image: figures/spanner_lib.eps]
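
To make the use of these rules concrete, the following short sketch chains decomposition facts of the form shown in the listing below, starting from a top-level act and returning a leaf act that could be executed first. The predicate first_leaf/3 is a hypothetical helper written for illustration; it is not part of the planner's code.

  % Illustrative sketch only: Lib is the plan library listed below, a
  % list of p(decomp(Act,SubActs),Prob) terms. A leaf is an act for
  % which no decomposition rule exists; otherwise expand the first
  % sub-act of some chosen decomposition.
  :- use_module(library(lists)).

  first_leaf(Lib, Act, Act) :-
      \+ member(p(decomp(Act, _), _), Lib).
  first_leaf(Lib, Act, Leaf) :-
      member(p(decomp(Act, [First|_]), _), Lib),
      first_leaf(Lib, First, Leaf).

Querying first_leaf(Lib, fix-car, Leaf) against the library would bind Leaf to ask-ambiguous or ask-car-unambiguous, matching the choice described above.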

The belief model used for this problem is 5 levels deep, corresponding to planning to a depth of 5 steps. Each level is initialised with the same belief set, since there is no dispute between the agents over the plan rules for the problem. The code used by the planner now follows. Notice that all of the decomposition rules from figure 4.1 are included, as well as some intention rules. Recall that these rules are used for inferring parents in full subtrees during plan recognition. Of particular interest is ask-ambiguous, which, being risky, has two possible parents. Notice the correspondence between this specification language and the definition of the agent state given in Section 3.4.2.

[
  p(decomp(fix-car,
    [ask-car-spanner,lend-car-spanner,use-car-spanner]),1),

  p(decomp(ask-car-spanner,
    [ask-ambiguous,clarify-car]),1),

  p(decomp(ask-car-spanner,
    [ask-car-unambiguous]),1),

  p(decomp(ask-car-spanner,
    [ask-ambiguous]),1),

  p(decomp(clarify-car,
    [ask-clar,answer-car]),1),

  p(decomp(fix-bike,
    [ask-bike-spanner,lend-bike-spanner,use-bike-spanner]),1),

  p(decomp(ask-bike-spanner,
    [ask-ambiguous,clarify-bike]),1),

  p(decomp(ask-bike-spanner,
    [ask-bike-unambiguous]),1),

  p(decomp(ask-bike-spanner,
    [ask-ambiguous]),1),

  p(decomp(clarify-bike,
    [ask-clar,answer-bike]),1),


  p(intend(fix-car,
    [ask-car-spanner]),1),

  p(intend(ask-car-spanner,
    [ask-ambiguous]),0.5),

  p(intend(ask-car-spanner,
    [ask-car-unambiguous]),1),


  p(intend(fix-bike,
    [ask-bike-spanner]),1),

  p(intend(ask-bike-spanner,
    [ask-ambiguous]),0.5),

  p(intend(ask-bike-spanner,
    [ask-bike-unambiguous]),1)
]
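
As a rough sketch of how the 5-level nested belief model described above might be initialised, each level can simply be given a copy of the same belief set, reflecting the agents' agreement on the plan rules. The predicate nested_beliefs/3 and the list-of-levels representation are illustrative assumptions rather than the system's actual code.

  % Illustrative sketch only: build a belief model of the given depth
  % in which every level holds the same belief set.
  nested_beliefs(0, _, []).
  nested_beliefs(Depth, Beliefs, [Beliefs|Deeper]) :-
      Depth > 0,
      D is Depth - 1,
      nested_beliefs(D, Beliefs, Deeper).

For example, nested_beliefs(5, Lib, Model) with the listing above as Lib yields a model containing the same rules at every level of nesting.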

The utility function for this problem is given in the code fragment below. For simplicity, utility functions are implemented as mutual, and so are specified outside the nested belief model. The values chosen are estimates rather than being derived from empirical data. A reward of 100 is given if the correct spanner is passed. This is reduced to 80 if the wrong spanner is passed, since it would cost 20 units to replan and execute a dialogue in which the agent asks again. If the first agent replans the dialogue, then the hearing agent must accommodate the second attempt by discounting the intention state in which the speaker intended the spanner that he was given on the first attempt. This revision uses the dry-land algorithm. As a result of this revision, the speaker need only ask for the spanner again, and the hearer ought to realise that he passed the wrong spanner on the first attempt. Since the dialogue is guaranteed to succeed at the second attempt, its cost is a constant 10 for asking plus 10 for giving. Rather than build a game tree deep enough for both attempts, a reward of 80 rather than 100 is placed at the leaf corresponding to failure of the dialogue at the first attempt.

While the utility values for the acts were estimates, informal checks were made to ensure that reasonable variations of these values did not result in more than proportionate changes to the utility values of the alternatives available to the agent. It was found that the utility functions retained their general shape and that the decision surfaces remained approximately in place. These checks did not vary the utility values systematically, and their results were not recorded, but they provide some evidence that the results are reasonable. Such checks were performed for each of the demonstrations in this thesis.

Notice that the utility function is compositional, in keeping with the discussion in Section 2.13, where it was claimed that the value of a dialogue can be worked out as the sum of the reward for task completion and the costs of the dialogue acts.

  utility(ask-ambiguous,-5).
  utility(ask-car-unambiguous,-10).
  utility(ask-bike-unambiguous,-10).
  utility(lend-car-spanner,-10).
  utility(lend-bike-spanner,-10).
  utility(ask-clar,-3).
  utility(answer-bike,-1).
  utility(answer-car,-1).

  utility(use-car-spanner,0).
  utility(use-bike-spanner,0).

  reward(Plan,100) :- plan_contains(Plan,fix-car),
                      plan_contains(Plan,use-car-spanner), !.

  reward(Plan,100) :- plan_contains(Plan,fix-bike),
                      plan_contains(Plan,use-bike-spanner), !.

  reward(Plan,80).
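
As a concrete illustration of the compositional claim above, the value of a plan can be computed by summing the utilities of its acts and adding the reward. The sketch below assumes that a plan is flattened to the list of acts it contains, so that plan_contains/2 behaves like member/2; both plan_utility/2 and this clause of plan_contains/2 are illustrative helpers rather than the planner's own definitions.

  % Illustrative sketch only, assuming a plan is a flat list of acts;
  % non-leaf acts have no utility fact and so contribute no cost.
  plan_contains(Plan, Act) :- member(Act, Plan).

  plan_utility(Plan, Value) :-
      findall(U, (member(Act, Plan), utility(Act, U)), Costs),
      sum_list(Costs, CostSum),
      reward(Plan, Reward),
      Value is Reward + CostSum.

For the unambiguous plan [fix-car, ask-car-spanner, ask-car-unambiguous, lend-car-spanner, use-car-spanner] this gives 100 - 10 - 10 + 0 = 80, while the clarification plan [fix-car, ask-car-spanner, ask-ambiguous, clarify-car, ask-clar, answer-car, lend-car-spanner, use-car-spanner] gives 100 - 5 - 3 - 1 - 10 + 0 = 81.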

