The assumption of mutuality may be more controversial once the scope of the planner goes beyond spoken dialogue to encompass planning and recognition of acts in a physical world. Physical acts may not be immediately observed, information about the order of execution may be lost, and the effects of acts may be overwritten by later acts. Agents may disregard turn-taking and act before observing all of the acts of the other agent. Some perceptual model is therefore required. For example, a robot equipped with a sonar device would make a sequence of partial observations over time, since some effects will have been overwritten and others are hidden from immediate view. Where the dialogue planner generates plan hypotheses that are consistent with a history list of dialogue acts, the robot should generate hypotheses that are consistent with its sequence of effect observations. Since observation is part of the agent's cycle of sensing, planning and acting, expectations can be generated about observations as well as about actions. For example, a robot may plan to move into another room and, in extending its plan, form the expectation that another robot will fail to observe the effects of its actions.
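One way to picture the analogy between the dialogue case and the physical case is as a filtering step over candidate plans. The sketch below is illustrative only, assuming simple set-valued effects; the names (Observation, Hypothesis, consistent) are not drawn from the source. It keeps only those plan hypotheses whose predicted effects can explain a sequence of partial observations, allowing earlier effects to be overwritten by later acts and hidden effects to go unseen, just as the dialogue planner keeps hypotheses consistent with the history list of dialogue acts.

```python
# Minimal sketch (illustrative names and types): filter plan hypotheses by
# consistency with a sequence of partial effect observations.

from dataclasses import dataclass
from typing import FrozenSet, List, Tuple


@dataclass(frozen=True)
class Observation:
    visible_effects: FrozenSet[str]   # what the sonar could see at one moment


@dataclass(frozen=True)
class Hypothesis:
    # predicted effects holding after each step of the hypothesised plan
    predicted_effects: Tuple[FrozenSet[str], ...]


def consistent(hyp: Hypothesis, observations: List[Observation]) -> bool:
    """True if every observation is explained by some step of the plan,
    allowing earlier effects to be overwritten and hidden effects to go unseen."""
    step = 0
    for obs in observations:
        while step < len(hyp.predicted_effects):
            if obs.visible_effects <= hyp.predicted_effects[step]:
                break                 # this step accounts for the observation
            step += 1                 # otherwise assume a later act overwrote it
        else:
            return False              # no remaining step explains the observation
    return True


def surviving_hypotheses(hypotheses, observations):
    """Analogue of matching plan hypotheses against the dialogue history list."""
    return [h for h in hypotheses if consistent(h, observations)]
```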
Mutuality has a pleasant effect on the complexity of the evaluation function. If the belief model is identical at every second level, the evaluation function need not be applied from many different perspectives. In general, a call to minimax using levels 1 and 2 results in a recursive call to minimax using levels 2 and 3, and so on; if mutuality holds, each of these calls returns the same best play. If the plan rules are also mutual, only one game tree is required, and that single game tree can be evaluated in one pass rather than many. This may be one reason why dialogue participants rarely make the computational effort to consider more than a two-level belief model [72].
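The computational saving can be sketched as follows. This is a simplified illustration rather than the planner's actual algorithm: the GameTree structure and the per-level evaluators are assumptions introduced here. Without mutuality, each deeper call evaluates the tree from the perspective of the next pair of belief levels; with mutuality, a single shared perspective suffices, so one game tree is swept once with alternating max and min.

```python
# Minimal sketch (illustrative assumptions throughout): minimax over a game
# tree of acts, with and without a shared (mutual) belief model.

from dataclasses import dataclass, field
from typing import Callable, List

Evaluator = Callable[["GameTree"], float]


@dataclass
class GameTree:
    label: str
    children: List["GameTree"] = field(default_factory=list)
    score: float = 0.0


def minimax_nested(node: GameTree, levels: List[Evaluator], i: int) -> float:
    """Without mutuality: a call using levels i and i+1 triggers a recursive
    call from the perspective of levels i+1 and i+2, and so on."""
    if not node.children or i + 1 >= len(levels):
        return levels[i](node)            # evaluate from perspective i
    scores = [minimax_nested(c, levels, i + 1) for c in node.children]
    return max(scores) if i % 2 == 0 else min(scores)


def minimax_mutual(node: GameTree, shared: Evaluator,
                   maximizing: bool = True) -> float:
    """With mutuality the belief model repeats at every second level, so one
    game tree is evaluated in a single alternating max/min pass."""
    if not node.children:
        return shared(node)
    scores = [minimax_mutual(c, shared, not maximizing) for c in node.children]
    return max(scores) if maximizing else min(scores)
```

Under mutuality the recursion over perspectives disappears: every call that would have used a deeper pair of belief levels returns the same best play, which is why the single pass of minimax_mutual is enough.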