
Demonstration 4: The complete tree

This section describes the performance of the system using the complete game tree, which is repeated here in figure 4.9. Now that the clarify alternative has been introduced to the tree, the first agent should be more willing to drop the grounding initiative on the first move, expecting the second agent to pick it up at the second move.

Figure 4.9: Complete game tree
\includegraphics[width=0.9\textwidth]{figures/spanner_plantree.eps}

As before, at the first move, the unambiguous alternative yields 80. The risky alternative now has three cases:

if $ p < 0.2$

\begin{displaymath}\begin{split}&u(aa) + u(lbs) + reward \\ = &-5 - 10 + 80 \\ = &65 \\ \end{split}\end{displaymath} (4.10)

if $ p > 0.8$

\begin{displaymath}\begin{split}&u(aa) + u(lcs) + reward \\ = &-5 - 10 + 100 \\ = &85 \end{split}\end{displaymath} (4.11)

if $ 0.2 \leq p \leq 0.8$

\begin{displaymath}\begin{split}&u(aa) + u(clarify) \\ = &-5 + 86 \\ = &81 \end{split}\end{displaymath} (4.12)

Therefore, the planner is expected to produce a curve that is constant at 65 in the left interval, constant at 85 in the right interval, and constant at 81 in the middle interval. The unambiguous strategy yields a constant 80, so for this configuration of the problem the agent should drop the initiative in the middle interval and allow the other agent to pick it up at the next move. It is not hard to alter the utility function so that the agent should instead take the initiative in the middle interval, because the grounding effort is much the same no matter which agent takes the initiative. Figure 4.10 plots the planner's output, clearly showing the three intervals and the close competition for initiative in the middle interval. Spill is once again evident here, with the system dropping the initiative for most of the middle interval at low values of $n$, but taking it up as $n$ becomes high.
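
To make this case analysis concrete, the following sketch (in Python, with hypothetical function names) reproduces equations (4.10)-(4.12) alongside the constant unambiguous alternative. The utility figures and the 0.2 and 0.8 thresholds are exactly those quoted above; everything else is illustrative.

\begin{verbatim}
def risky_utility(p):
    """Utility of the risky (ambiguous) first move as a function of
    p = P(intend(car-spanner)), following equations (4.10)-(4.12)."""
    u_aa = -5                     # cost of the ambiguous act
    if p < 0.2:
        return u_aa - 10 + 80     # listener's best guess, lower reward  -> 65
    elif p > 0.8:
        return u_aa - 10 + 100    # listener's best guess, higher reward -> 85
    else:
        return u_aa + 86          # listener clarifies                   -> 81

def unambiguous_utility(p):
    return 80                     # constant, independent of p

# The risky move wins everywhere except the left interval, where 65 < 80.
for p in (0.1, 0.5, 0.9):
    print(p, risky_utility(p), unambiguous_utility(p))
\end{verbatim}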

Figure 4.10 also illustrates the need for a probabilistic belief model rather than a logical one. A logical model must take one of the two extremes of the probability scale, whereas a probabilistic model can take any value on that scale. Consequently, the decision surface that occurs around the point 0.2 would not be recognised by a logical model, even though the relative utility of the strategies changes greatly across it.
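
As a rough illustration of this point, the sketch below (again in Python, reusing the hypothetical risky_utility function from the previous example) collapses the probability onto one of its two logical extremes. Both p = 0.15 and p = 0.25 map to the same logical belief state, so a logical model cannot see the jump in utility from 65 to 81 across the decision surface.

\begin{verbatim}
def logical_belief(p):
    """Collapse a probability onto the nearest logical extreme."""
    return 1.0 if p >= 0.5 else 0.0

for p in (0.15, 0.25):
    print(p, logical_belief(p), risky_utility(p))
# 0.15 and 0.25 give the same logical belief (0.0), yet the risky strategy
# is worth 65 on one side of the decision surface and 81 on the other.
\end{verbatim}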

Figure 4.10: Utility of strategies against P(intend(car-spanner)) for four levels of error
\includegraphics[width=0.9\textwidth]{figures/e4.eps}

It is interesting to compare the utility of the risky strategy in the complete game tree with its utility in the game tree without the clarification subtree. This comparison is plotted in figure 4.11, for $n=8$. Notice that clarification provides greater utility for the most part, except in the region to the right, where the second agent chooses clarification in error: the first agent intends the car-spanner, so had clarification not occurred, the second agent would have taken its best guess, which would also have turned out to be the car-spanner.

Figure 4.11: Utility of strategies against P(intend(car-spanner))
\includegraphics[width=0.9\textwidth]{figures/e4x2_8.eps}

