1 Introduction

This paper concerns the representations used for planning tasks. Revising plans using traditional representational media such as maps or blueprints is cumbersome at best. Consider the problem of evaluating alternative routes for foot travel (Dubois and Shalin 1995). Multiple, alternative routes can be difficult to depict on the same map, for example when routes cross. Examining alternative assumptions behind a plan is even more difficult. For example, consider the problem of crossing a river whose course and level have changed following a major storm. In this case not only must the route be altered, but the map itself is incorrect and requires alteration, or at least some kind of annotation (Oviatt 1997). When representations for planning are computational (that is, capable of computer-based manipulation), they facilitate such evaluations, and provide the foundation for planning technology in aerospace, military and manufacturing domains.

However, as identified below, the requirements and issues associated with computational representations for planning extend beyond those apparent in real-time control, where a substantial, existing research literature informs designers. Understanding these issues depends fundamentally on an appreciation for the roles of humans and computers in the planning process. Relative to the limited research on this topic that focuses on the need for human–machine collaboration and cooperation (Hoc 2000, 2001), the present paper takes a more basic position: the role of the computer is to represent. Rather than examine the effect of experimenter-designed representations on planning performance as in Layton et al. (1994) and Smith et al. (1997), the present paper examines representations for planning designed by domain specialists, along with the work practices that they have developed around them.

This introduction first reviews a process model for real-time control informed largely by mathematical control theory, and then integrates real-time control with planning. The purpose of this review is to delineate the substantial existing theory that informs representations for real-time control, and demonstrate both its relevance and limitations for the issues that arise in planning. An alternative theoretical foundation for planning representations, drawn from computer science, identifies a set of issues pertinent to representations that support planning and frames the present paper.

1.1 A process model for real-time control of dynamic systems

Figure 1 provides an illustration of the process and representational entities that participate in real-time control, slightly modified from the traditional textbook depiction to emphasize some issues of particular relevance below. The focus of control is an evaluation of the match between the current state and the goal state. If the states match, the comparison will be re-executed after some period of time. If the states do not match, a physical procedure changes the state of the world. The effects of that procedure appear in real-time and in reality.

Fig. 1 A process model for manual control
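The compare-and-act loop of Fig. 1 can be expressed as a minimal sketch. The one-dimensional numeric state, the `World` class, and its gain of 0.5 are illustrative assumptions, not part of the model described above:

```python
class World:
    """Toy one-dimensional world: its state changes only when acted upon."""
    def __init__(self, state):
        self.state = state

    def apply(self, correction):
        # the physical procedure: respond to a commanded correction
        self.state += 0.5 * correction  # illustrative gain of 0.5

def control_loop(world, goal, tolerance=0.01, max_cycles=100):
    """Compare the current state to the goal state; if they differ, execute
    a procedure whose effects appear in the world, then compare again."""
    for cycle in range(max_cycles):
        error = goal - world.state
        if abs(error) <= tolerance:
            return cycle            # states match: control objective met
        world.apply(error)          # states differ: change the world
    return max_cycles

world = World(state=0.0)
cycles = control_loop(world, goal=10.0)  # error halves each cycle until within tolerance
```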

Often operators cannot view the world for the changes they are effecting, or the appearance of the world provides insufficient resolution for the dimension that operators are controlling, such as the airspeed of an aircraft. In such cases a display (or representation) mediates between the execution of a procedure and perception in the world for human controllers (Norman 1991; Woods 1988a). A predictable relationship between conditions in the world and a representation provides it with extensional semantics, or meaning. While games and laboratory studies sometimes ignore the distinction between conditions in the world and the representation of these conditions on a display, in realistic tasks the display is distinct from the world (Vicente and Burns 1996). Experienced operators appreciate the distinction between a representation and the conditions in the world that it represents, because it is the latter that determines the ultimate success of their interventions.

An established set of textbook guidelines concerning control devices, human capability and displays assists designers attempting to create representations for manual control (Wickens 1992). A number of guidelines concern the relationship between inputs to the system and outputs, that is, properties of the transfer function such as gain, lag and order of control. Relevant human properties include limited processing capacity, which interacts with the transfer function to determine the proper level of gain, tolerable lags and appropriate order of control to result in a stable system.
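The interaction of gain and lag with stability can be illustrated with a toy simulation. The `track` function, the pure transport delay, and the specific gain values below are all illustrative assumptions; real transfer functions are continuous and richer than this sketch:

```python
def track(gain, lag=1, steps=30, goal=1.0):
    """Proportional tracking through a delayed channel: the operator issues
    a correction proportional to the displayed error, but each input waits
    `lag` steps before it affects the state."""
    state = 0.0
    pending = [0.0] * lag              # inputs still in transit
    errors = []
    for _ in range(steps):
        error = goal - state
        errors.append(abs(error))
        pending.append(gain * error)   # operator's correction, queued
        state += pending.pop(0)        # the oldest queued input takes effect
    return errors

stable = track(gain=0.4, lag=2)       # corrections decay: error shrinks toward zero
unstable = track(gain=1.5, lag=2)     # too much gain for the lag: oscillation grows
```

With the same lag, the lower gain yields a stable system while the higher gain causes growing oscillation, which is the interaction between processing limits and transfer-function properties that the guidelines address.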

Display requirements generally include representations for the current and goal states. In a spatial tracking task, the target state might be a dot or a solid circle, while a cursor represents the current state. The discrepancy between the two informs operators about the required inputs. In realistic domains the goals change. Because of limitations in human processing time, displays that predict future as well as immediate goal states, such as Wickens' example of a target flight path, help operators prepare for appropriate actions as the goals change. Displays that illustrate the expected results of control inputs further reduce cognitive demand (Wickens et al. 1989). While the novice focuses on compensating for error between current and goal states, when the required inputs can be predicted in advance, experts may acquire the ability to control without feedback on the results of their inputs. Nevertheless, with changing goals, even experts require persisting representations of current and goal states for comparison. For example, in the manual control of airspeed, an adjustable pointer (“bug”) represents the goal state, while the cursor reflects the current value of the dimension in question. Compatibility between changes in the display and the direction of control device movement, as well as co-location of displays and control devices also informs designers.

When a control task incorporates several dimensions and several different types of controls, effective control depends on an understanding of the inter-relationships between dimensions themselves and the controls that affect these dimensions. Human factors researchers typically refer to these inter-related dimensions as an operator’s “mental model”. Through a somewhat laborious process of task analysis, researchers acquire an appreciation for the relevant features of the mental model, and ensure the availability of corresponding information for display and/or training. Because of the multiple, inter-related dimensions and controls in a complex system, control often involves specialized procedures for individual goals (Woods 1988b). Multiple dimensions map to configural displays to help operators detect abnormalities (Bennett et al. 1993). For example, an equal-sided polygon represents the goal state. When the system departs from the goal state, the sides of the polygon are no longer equal in length, alerting the operator to the need for diagnostic and/or corrective action. Bennett has extended this type of display to graphs (Calcaterra and Bennett 2003).

This very brief overview of the representational issues associated with manual control merely echoes textbook resources concerning the general properties of displays and controls for mediating action in a physical world. The next section combines control with planning.

1.2 A process model for control combined with planning

Planning involves the decomposition of a presently unattainable goal into a stack of intermediate sub-goals, each of which may be separately attainable by a procedure that is, at best, only partially specified for emerging, specific circumstances. Because these circumstances differ across repeated events, planning results in a newly created (or at least newly modified) procedure (Clancey 1985). The predicted outcome is not an actual reflection of the current state of the world as it is in real-time control, but instead a reflection of a human’s intended state of the world given certain assumptions. Often a computational simulation supports planning, by combining a representation of initial conditions with rules that compute the state changes resulting from a selected set of actions. Using a simulation, changes to the state representation appear faster than real-time, via a hypothetical procedure that is decoupled from physical reality. The representation resulting from a hypothetical procedure enables evaluation prior to execution, and the avoidance of costly execution errors.

Figure 2 provides a simplified, domain independent illustration for interleaving computational planning (on the left) with control (on the right) for the real-time planning that is the subject of this paper. Focusing on the leftmost portion of the figure, operators execute a hypothetical, parameterized procedure on some sub-state in order to generate a new hypothetical state. If the new state is useful relative to the desired goal, the new state, along with the procedure that generates it, is added as a sub-goal to an existing stack of goals for execution. The new, hypothetical state becomes a point of departure for the next round of hypothetical reasoning that addresses another mission goal. If the new state is not useful, operators will attempt to alter the parameters for the hypothetical procedure until a more useful state arises. The rightmost portion of the figure illustrates a slightly modified control process. As shown at the bottom, a goal stack determines the current sub-goal and the procedure to execute, but the rest of the control process for the current sub-goal is identical to Fig. 1. In this context, when one sub-goal is attained, a new sub-goal and associated, simulated procedure emerges from the goal stack to guide execution.

Fig. 2 A process model for planning integrated with manual control
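The interleaved process of Fig. 2 can be sketched schematically. The numeric state, the step-size parameter, and the tolerance for "usefulness" are invented stand-ins for the domain-specific procedures and evaluations the figure abstracts over:

```python
def simulate(state, step):
    """Hypothetical procedure: advance the state by `step` (a stand-in for,
    e.g., a maneuver evaluated by simulation software)."""
    return state + step

def is_useful(predicted, goal, tol=0.25):
    return abs(predicted - goal) <= tol

def plan(initial, mission_goals, param_candidates):
    """Left side of Fig. 2: for each mission goal, alter the parameter of a
    hypothetical procedure until the predicted state is useful, then add the
    (procedure, state) pair to the goal stack. The accepted hypothetical
    state becomes the point of departure for the next mission goal."""
    goal_stack = []
    state = initial
    for goal in mission_goals:
        for p in param_candidates:
            predicted = simulate(state, p)
            if is_useful(predicted, goal):
                goal_stack.append((p, predicted))
                state = predicted
                break
    return goal_stack

def execute(goal_stack, world_state):
    """Right side of Fig. 2: take sub-goals from the goal stack in order and
    run each previously simulated procedure for real, checking that each
    sub-goal is attained before the next one emerges."""
    for step, subgoal in goal_stack:
        world_state = simulate(world_state, step)  # here execution mirrors simulation
        assert abs(world_state - subgoal) <= 0.25  # sub-goal attained
    return world_state

stack = plan(0.0, mission_goals=[1.0, 2.5, 4.0], param_candidates=[0.5, 1.0, 1.5, 2.0])
final = execute(stack, 0.0)
```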

The claim of the present paper is that neither guidelines nor issues are well understood for the representations that support this kind of planning task. To address this deficiency, this paper provides two different resources: (1) an alternative theoretical foundation drawn from computational planning and (2) illustrations of representations and corresponding work practice for real-time control and planning for the US Shuttle program.

1.3 A computational approach to planning problems

Just as mathematically based control theory has provided a logical foundation for manual control, symbolic computation provides a logical foundation for planning. Below, the literature on computational planning helps to identify issues associated with representations for planning.

Computational planning involves a goal, constraints and operations that may be chained to connect an initial state to a final state (Georgeff 1987). The states correspond to a symbolic representation of the objects, attributes and relations (an ontology) associated with the task, capturing much of what the human factors researcher refers to as a “mental model”. As hypothetical operations execute, they alter the status of the objects, attributes and relations reflected in the representation. Candidate plans are extended and evaluated using this representation, resulting in a set of operations that predict a transformation from an initial state of objects, attributes and relations into the goal state of objects, attributes and relations.
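A minimal sketch in the spirit of the planners this literature describes follows; states are sets of propositions and operators chain an initial state to a goal state. The breadth-first search, the blocks-world-style operators, and all proposition names are invented for illustration:

```python
def forward_search(initial, goal, operators, max_depth=6):
    """Chain operators to connect an initial state to a goal state. States
    are frozensets of propositions (objects, attributes and relations); each
    operator carries preconditions, an add list and a delete list."""
    start = frozenset(initial)
    frontier = [(start, [])]
    seen = {start}
    while frontier:
        state, plan = frontier.pop(0)          # breadth-first: shortest plan first
        if set(goal) <= state:
            return plan
        if len(plan) >= max_depth:
            continue
        for name, pre, add, delete in operators:
            if set(pre) <= state:              # preconditions satisfied
                nxt = frozenset((state - set(delete)) | set(add))
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None

# invented blocks-world-style operators, purely for illustration
ops = [
    ("pickup_a", ["clear_a", "handempty"], ["holding_a"], ["clear_a", "handempty"]),
    ("stack_a_on_b", ["holding_a", "clear_b"], ["a_on_b", "handempty"], ["holding_a", "clear_b"]),
]
plan = forward_search(["clear_a", "clear_b", "handempty"], ["a_on_b"], ops)
```

Each operator application alters the status of the objects, attributes and relations in the state representation, and the returned sequence of operations predicts a transformation from initial state to goal state.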

The above-described process met with enthusiasm and early success on toy problems. But when applied to an actual physical world, as in robotics, this process encountered a number of challenges, generally regarding the symbolic representations employed. Challenges to the representation concern its scope and meaning, and the relationship between the representation and a dynamic world. A final challenge concerns the relevance of computational planning to human planning. The following subsections introduce these issues as a preview to the body of the present paper, which examines the manner in which these issues are addressed in the work practice surrounding the use of computational representations for human planning.

1.3.1 What is the scope of an appropriate representation?

Representations are, by definition, lower-fidelity simplifications of a complex situation. Some researchers have noted the futility of representing all of the relevant dimensions for real-world problems (McCarthy and Hayes 1969). However, a totally automated system that depends on a fixed representation is unable to recognize anything outside the original ontology of the model (Bickhard and Terveen 1995). That is, the original design of the representation limits the success of any reasoning that follows (Winograd and Flores 1986).

1.3.2 How does a representation acquire meaning?

Critics note that the human viewer plays a key role in attributing meaning to a representation (McDermott 1981) while the computer “understands” nothing of this meaning (Searle 1997). If understanding meaning is critical to the appropriate application of the representation, an agent who does not understand the meaning of the representation cannot be said to use it intelligently (Winograd 1990). It follows that human users who exploit computational planning are not interacting with an intelligent agent.

1.3.3 How does a representation reflect a dynamic world?

A planner locked into processing a representation may be developing a flawed plan tied to a web of inappropriate assumptions. If the world changes and violates these assumptions, the success of a nascent plan is potentially compromised. Some process must examine the world for change and prompt re-synchronization of the representation. Perhaps any change in the world should prompt an audit of the dependencies between all of the inferences in the plan and features that have changed. However, this is a computationally costly process (de Kleer 1986). Moreover, not every change in the world is relevant, making an audit for every change wasteful.
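The relevance-filtering idea can be sketched as follows. This is a drastically simplified illustration of dependency-directed invalidation, not de Kleer's algorithm; the fact and conclusion names echo the river-crossing example from the introduction and are invented:

```python
def audit(changed_facts, inferences):
    """Re-examine only the conclusions whose recorded assumptions intersect
    the changed facts, instead of auditing every inference in the plan."""
    return {conclusion
            for conclusion, assumptions in inferences.items()
            if assumptions & changed_facts}

# each conclusion in the plan is recorded with the world facts it assumed
inferences = {
    "route_ok": {"bridge_intact", "river_level_low"},
    "supplies_ok": {"depot_stocked"},
}
invalidated = audit({"river_level_low"}, inferences)  # the storm changed the river
```

Only inferences whose assumptions actually intersect the changed facts are re-examined, which is what makes an audit for every change avoidable.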

1.3.4 What is the relevance of computational planning to human behavior?

Some researchers attempt to equate human problem solving with the processes implemented on the computer. This account of human problem solving, initiated in the ‘70s through the work of Newell and Simon (1972; Newell 1980), remains a focus of lively debate (Fetzer 1997). One implication is that humans manipulate internal (mental) computer-like representations as part of their own cognition. However, robotics researchers note that sequential stages of monitoring, modeling, planning, and execution based on a representation provide an implausible account of movement capability (Agre 1995; Brooks 1991). Computational planning may be more useful as an external representation to support human cognition rather than as a model or substitute for human cognition.

This brief review of computational planning has identified four sets of issues, concerning the scope of a representation, the meaning of the representation, its relationship to a dynamic world, and the relevance of computational planning to human behavior. The following sections address these issues in two different work domains within the United States Space Transportation System, commonly known as NASA’s shuttle program: Instrumentation and Communication, and Flight Dynamics. In both domains, the computer not only represents the subsystem in question, but this representation is manipulated computationally, to enable prediction and exploration of alternative plans. Before turning to a description of the representations and work practices in question, this paper addresses the methods that were used to obtain this description.

2 Methods

The descriptions of the domains have their foundation in more than 2,000 h of observational study of flight controllers working in these domains. Study of the Flight Dynamics Officers (FDOs) occurred over a period of approximately three years. Study of the Instrumentation and Communication Officers (INCOs) occurred over a period of approximately two years, initiated because they serve as customers for FDO. The majority of observations occurred in conjunction with actual shuttle missions. These were supplemented by observations of work performance in occasional simulations, pre- and post-flight meetings, ongoing software development meetings and readily available documentation.

The observational methods used are generally consistent with the conventions of participant observation established within anthropology (Fetterman 1998). During observation, subjects conducted their ordinary work activities. To obtain a record of these observations, cameras already mounted in the front room of mission control provided a video feed for recording. Additional cameras were used in the backroom. Voice loop interactions were simultaneously recorded with the video. An ambient microphone permitted recording of the conversation between the consenting Flight Dynamics Officer and the Trajectory Officer seated adjacent to each other in the Front Room of Mission Control.

The observer was seated at a workstation with the backroom support personnel for Trajectory Operations. She listened to Front Room activity on both the voice loops and the ambient microphone, while viewing the Front Room on a closed circuit broadcast. Most important for the present paper, the observer could also bring up and print displays in use on the workstation. However, the opportunities for true participation were limited to non-technical activities not requiring certification, such as taking phone messages or photocopying. Incoming flight controllers often passed by to chat, offer explanations and occasionally, ask a question about the status of an ongoing flight.

The backroom cameras, ambient and display recordings and the presence of the observer seated in the backroom comprised unusual interventions in this workplace. However, after more than two years and nine flights, the presence of the observer and recording equipment was hardly an exceptional condition; indeed, for a while, the absence of the observer became the exception.

3 Results and discussion

After an overview of the work domains in question, the remaining subsections document properties of the domains suggested by the above analysis of computational planning, focusing first on the challenges articulated in the planning literature and then turning to some additional observations outside of this literature. As the results of observational study illustrate, challenges to computational planning point to the very roles that humans fill when they use computational representations as part of their work. That is, humans determine the scope of the representations, the meaning of the representations, and synchronization with a dynamic world. In addition, the distributed nature of planning in these domains necessitates a role for explicit, exchangeable representations.

3.1 Overview

Flying the Shuttle involves distributed real-time planning, through a coordinated effort of multiple specialists, including the two groups of interest here: The INCOs and the FDOs. INCOs are responsible for the uplink and downlink of information between the orbiter and Mission Control, mediated by ground sites or satellites in the Tracking and Data Relay Satellite System (TDRSS). The FDOs plan changes to the trajectory of the orbiter, often to ensure that it approaches an intended relationship with another object in orbit, that is, a docking, or a separation. INCO relies on FDO to predict the location of the orbiter necessary for the development of a communication plan. FDO relies on INCO to send commands to the orbiter.

FDO and INCO are front room positions, supported by personnel located in an adjacent “backroom” workspace. INCO has a backroom of supporting flight controllers: an RF Communication assistant, an Instrumentation assistant, and a Data Communication assistant. FDO sits at the front of the room, with a Trajectory assistant (Traj). FDO’s primary assistant sits in the front room with FDO, but other assistants are located in an adjacent backroom workspace. During the orbit phases of flight (excluding specialized ascent and entry phases), the backroom personnel for FDO include Navigation, Dynamics and for rendezvous, Profile Support.

Communication between collocated and remote personnel occurs via speech and displays. Voice loops allow an operator to call another operator, while other operators monitor the exchange according to the loop in use. Communication also occurs through shared access to a set of computer and front room displays that indicate current and upcoming events. Because flight controllers determine upcoming events, changes to certain front room displays comprise messages to other flight controllers. For example, FDO may initiate the posting of an engine ignition clock, which reminds all flight controllers of an approaching procedure.

For a naïve observer, the detailed content of each work domain is surprisingly different. Knowledge required in one domain comprises just a fraction of the relevant knowledge in the other. Specialists in trajectory operations typically have a background in aerospace or mechanical engineering. Specialists in communications typically have a background in electrical engineering. Moreover, there is no standard career pathway from one discipline to another. Surface indicators, such as the use of common displays, suggest a limited overlap, primarily regarding a shared interest in the availability of communication satellites and the location of the orbiter. Despite the absence of apparent surface similarity, both domains use the computer to represent the orbiter and examine hypothetical plans for current mission sub-goals. Both domains must address the issues raised above, which transcend superficial differences.

3.2 Scope of representation

Both work domains have designed their own representations. The ontology of the representation differs for each domain, reflecting task specifics and decisions of the team that designed the original computer program. However, both domains must address two issues related to the scope of this representation: (1) the need for a representation that specifies its own properties as well as properties of the orbiter and its environment and (2) adaptations to limitations in the scope of the representations.

3.2.1 Properties of the representation

INCOs use the display represented in the left half of Fig. 3, called “AntMan”, for Antenna Manager. For communication to occur, the orbiter’s antenna must point at a TDRSS satellite or a ground site, handing over as satellite or site availability ends. The AntMan background corresponds to the surface of the orbiter, viewed from the inside and projected onto a rectangle. The quadrants are labeled, to indicate upper, lower, forward and aft portions of the orbiter surface. Labels for communications satellites (E or W) and ground sites indicate the position of those objects relative to the orbiter’s surface. The display can also depict ground sites as they become relevant. Lines projected from satellites or ground sites represent their change in orientation relative to the orbiter as it progresses along its trajectory of maneuvers. Other lines on the representation correspond to surfaces of the orbiter that block transmission between the orbiter and receiving or transmitting objects.

Fig. 3 Re-created AntMan in current and prediction modes, with displays tiled as they would appear on a flight controller’s screen

AntMan exists in two modes, reflecting either current conditions or predicted communication conditions resulting from the expected execution of the overall mission plan. In response to inadequate communication, INCO might intervene, for example by switching antennas earlier than planned, or under exceptional circumstances, arrange for transmission via different ground sites or satellites. Using prediction mode (at the bottom of Fig. 3), INCOs can input changes in the availability of satellites to compute and illustrate the resulting predicted access to communication. In this mode, in addition to properties of the orbiter and its environment, AntMan also indicates properties of the representation subject to intentional modification. For example, in the upper right hand corner is a time field, labeled GMT (for Greenwich Mean Time). In prediction mode operators can scroll ahead to evaluate predicted communication for a particular time interval. In contrast to manual control, display values (such as time) no longer mirror the world, but reflect instead operator intent. Knowing the time span of the prediction and its implications in a broader context is crucial for using the representation. “Ratty comm” resulting from obstructions between the antenna and a satellite might be fine during a time period of crew sleep, but completely unacceptable during Extra-Vehicular Activity.

FDOs use a representation of an object’s trajectory over time, in the form of vectors called an ephemeris. These are associated with the display “Trajectory Profile Status” (TPS) shown in Fig. 4, which functions something like INCO’s AntMan in its predictive mode. TPS provides standard information about seven ephemerides used on the ground. The three panels at the bottom of the screen correspond to TDRS satellites that provide communication between Mission Control and the orbiter. The remaining four panels may correspond to the orbiter, rendezvous targets, deployed satellites or debris.

Fig. 4 Trajectory profile status display

In planning a change to an orbiter trajectory, FDOs explore alternative plans, with the results directed to an ephemeris. They influence the orbiter ephemeris via selection of a number of modifiable parameter values related to an engine burn, including the axes of shuttle attitude, the orientation of the engines to be utilized and the ignition time of the burn. Several parameter values for an ephemeris concern properties of the representation rather than properties of the orbiter: the number of times the representation has been updated (TUP), the number of maneuvers incorporated, constants used in the trajectory-propagating software such as drag and area according to options set (D, V, A and M), the vector that anchors the ephemeris (AVID) and its time (GMTV), the orbit number at the beginning of the ephemeris (ORBB), and the beginning and ending times of the ephemeris in GMT and Mission Elapsed Time (MET).

The space of alternative engine burns is too large to search with optimization algorithms. Hence, FDO is directly involved in the search of the space, by specifying an initial condition, setting some parameters (such as the engines to be used) and allowing maneuver simulation software to determine others (the resulting trajectory). FDO examines the properties of the calculated trajectory and the predicted amount of propellant required. FDO might alter input parameters for several cycles, until the outputs appear acceptable.
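The iterate-until-acceptable cycle just described can be sketched schematically. Everything concrete here is an invented stand-in: the parameter name "tig" (ignition time), the toy simulation's linear trade-off between apogee and propellant, and the acceptance thresholds do not come from FDO software:

```python
def search_burn(simulate, initial_tig, acceptable, adjust, max_cycles=10):
    """Alternate between setting some parameters and letting the simulation
    determine the rest, until the outputs appear acceptable."""
    params = {"tig": initial_tig}       # "tig" (ignition time) is a hypothetical name
    for _ in range(max_cycles):
        trajectory, propellant = simulate(params)
        if acceptable(trajectory, propellant):
            return params, trajectory, propellant
        params = adjust(params, trajectory, propellant)  # operator judgment
    return None

def toy_simulate(params):
    # stand-in for maneuver simulation software: a later ignition time here
    # costs less propellant but yields a lower apogee (purely illustrative)
    apogee = 220.0 - 2.0 * params["tig"]
    propellant = 100.0 - 5.0 * params["tig"]
    return apogee, propellant

def acceptable(apogee, propellant):
    return apogee >= 200.0 and propellant <= 90.0

def adjust(params, apogee, propellant):
    p = dict(params)
    p["tig"] += 1.0 if propellant > 90.0 else -1.0  # nudge the ignition time
    return p

result = search_burn(toy_simulate, initial_tig=0.0, acceptable=acceptable, adjust=adjust)
```

The human supplies the evaluation (`acceptable`) and the parameter changes (`adjust`); the software supplies only the state propagation, mirroring the division of labor described above.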

3.2.2 Adaptations for limitations in scope

Additional task features may extend beyond the ontology anticipated in the initial design of the general representation. This raises concern for how, if at all, these additional features might be represented.

The INCOs have two responses to this concern. First, they may extend the meaning of an existing symbol to cover an unanticipated case. Thus, S, referring to a standard spare TDRS, or even E referring to a standard eastern satellite can be extended by temporary agreement to represent a nonstandard satellite. In addition, INCOs can adjust their display for transient features of a new mission. As shown in Fig. 5, they design masks (visual constraints) to indicate forbidden areas of transmission, for example when the orbiter is docked to an expanding space station. Because these masks do not interact with the computation that predicts the orientation of the orbiter and communication satellites, masks do not require major changes to the software.

Fig. 5 Re-creation of AntMan with a superimposed mask, identifying the area in which radiation is temporarily forbidden. In this example, the East TDRS is in use

The FDO ephemerides include so-called “fudge factors” whose purpose is to account for the effect of context without actually modeling the causal relationship between some feature of the environment and the other properties of the model. Thus, FDOs will adjust two constants (KCON and KVAR) to add drag to a model that deviates from current circumstances. FDOs might also add a perturbation, called a “vent”, to the model, to account for the cumulative effects of otherwise un-modeled changes, due for example to particular engines used in attitude maneuvers. Nevertheless, FDOs’ representations are localized and incomplete, and capture only some dimensions of the overall context. For example, engine burn plans do not represent the assumption that experiments sensitive to acceleration have been completed.

3.3 Model meaning

In general, the representations have an extensional semantics (with respect to the orbiter and its environment) and intensional semantics, relating properties of the representation to other representations. But, the attribution of meaning is a process that rests with the human user—either a planner or a recipient. The use of AntMan invites this problem when standard symbols are extended to non-standard satellites. However, TPS poses an even greater challenge. The same object, such as the orbiter, may be represented in multiple ephemerides. For example, one ephemeris might correspond to the current best guess of the orbiter’s trajectory for the next 48 h. A second ephemeris might contain the trajectory for the following 48 h. A different ephemeris might correspond to planning in progress, as a maneuver is added (or more likely refined) relative to the current best guess. Another ephemeris might correspond to a contingency plan, for example to break away from a rendezvous target, or jettison a payload. The numerous orbiter ephemerides are used in conjunction with ephemerides for targets and, in some cases, debris, which must also be modeled so that FDO can study possible collisions. Different objects have different weights but no explicit identifier. Something that is heavy is likely the orbiter. Something that is heavier is likely the space station. Something that is heavier still is the orbiter docked to the space station. The existence of multiple models creates the risk of not knowing exactly what is being modeled in a given ephemeris.

The predicted trajectory for an object reflects numerous assumptions and computational operations at the discretion of the operator. For example, each of the main panels contains a maneuver counter, referring to maneuvers that are defined elsewhere (in a plan table) and added to the trajectory. The maneuver counter is not necessarily incremented for each added maneuver. Instead, at certain points in the planning process, the counter is actually decremented while the effect of the maneuver is represented implicitly. Weight changes associated with the reduction in propellant remain in the representation as do trajectory consequences, accounting for the effect of the maneuver without explicitly representing the maneuver itself. However, users must remember these implicit properties in order to use the representation for the next steps in the planning process.

Maneuvers have different purposes, and these are not indicated in the traditional TPS display or its maneuver counter. For example, one maneuver might take the orbiter closer to docking, while a different maneuver might take the orbiter away from a target to affect a breakout from rendezvous. These are usually planned in different ephemeris panels. Consequently, the meaning of an ephemeris is a function of which maneuvers have been added, or more generally, a function of the operations performed on the ephemeris. The numerous options for operators to alter the meaning of an ephemeris, coupled with a semantics that is determined by the history of operations performed provide ample opportunity for confusion.

The simulations generate sequences of predicted states but do not provide an interpretation of the resulting states. For example, FDOs must evaluate a calculated trajectory to determine whether it corresponds to the required outcome. INCOs must evaluate AntMan predictions in context to determine the implications of ratty comm. There is no representation of desired states—no aviator’s speed bug—to gauge evaluation, perhaps in part because the desired states are always changing. In both domains, evaluation rests on memory for the prevailing subgoals and mission status, which are outside the framework of the computational representation.

In summary, as the computer science literature correctly anticipates, determining the meaning of the model is precisely the issue in the use of these representations, and remains the responsibility of the human operator.

3.4 Relationship to dynamic world

Both work disciplines distinguish between a representation that predicts future conditions and the current conditions in the world. In both cases, operators must consider whether conditions in the world have changed sufficiently to merit a change in predictions and/or plans. The control of this synchronization is also an issue in the planning literature. Here it is an issue that work practice must address. However, direct inspection of the current circumstances is not possible in low-earth orbit. The solution is to compare two models. One model reflects current conditions with the highest possible fidelity. The other model is the one already in use during the planning process.

The INCOs use the two versions of AntMan to address this issue, with the version capturing current conditions generally adjacent to the version representing predicted conditions. In Fig. 3, current mode is just above prediction mode. Deviations between current and predicted communication for the same time interval might occur because stale models are predicting the orbits of a TDRS satellite, or because the actual attitude of the orbiter differs from its anticipated attitude. By visually comparing AntMan in prediction mode with AntMan in current mode, INCO can readily identify discrepancies, and if necessary, address any newly apparent communication deficiencies. However, INCO might also initiate a change to the predictive representation, for example, if stale TDRS or orbiter trajectories are responsible for a discrepancy between predictions and current conditions. In most cases, INCO need not update the prediction model immediately in order to identify a response that maintains communication.

FDOs’ work requires a more formal process for synchronizing their predictive representation with the world because the success of engine burns is quite dependent on precise ephemerides. The navigators are responsible for tracking the orbiter according to a schedule, or at the direction of FDO, preceding a critical planning activity. Navigators detect changes relative to the existing representation by comparing old and new (actual) vectors. A large deviation is a cue to FDO that the existing representation is stale, and requires resynchronization. FDOs will also update their models following the execution of a planned burn. In this case, the planned velocity changes are replaced with actual velocity changes.
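The navigators' staleness check amounts to comparing a predicted state against a newly tracked one and flagging large deviations. The following is a minimal sketch under invented assumptions: the function name, units, and threshold are hypothetical, not operational NASA values.

```python
import math

# Hypothetical sketch of the staleness check: compare the position
# predicted by the current ephemeris against a newly tracked state
# vector; a large deviation cues FDO to resynchronize the model.
DEVIATION_LIMIT_KM = 1.0  # illustrative threshold only

def needs_resync(predicted_km, tracked_km, limit=DEVIATION_LIMIT_KM):
    """Return True when predicted and tracked positions diverge
    by more than the allowed limit."""
    return math.dist(predicted_km, tracked_km) > limit
```

The point of the sketch is that the comparison, not the prediction, is what detects a stale model: neither vector is "wrong" in isolation.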

In both work domains, the key point is the existence of work practices that acknowledge a distinction between the representations used for planning and the conditions in the world. Rather than altering the world to fit the plan (as in a manual control paradigm), the plan must be altered to synchronize with the world. Table 1 summarizes the challenges drawn from the literature on computational planning, and the manner in which the two domains address these challenges.

Table 1 Representational issues based on computational planning literature

3.5 Issues not anticipated in computational planning literature

The previous discussion drew largely on the computational planning literature, which assumes a single, isolated problem solver or planner. A complete understanding of the role of representations in human planning must step outside of this assumption, and acknowledge that computational representations support multiple human developers and recipients. In failing to acknowledge a role for humans, the computer science literature cannot anticipate an important source of human error in configuring the representation, nor the work practices to counter this source. Similarly, this literature does not anticipate the many issues of distributed planning. Explicit, visible and accessible representations are pivotal in the coordination of multidisciplinary, continuous work (Hutchins 1995; Shehory et al. 1999), for example here, between front and backroom personnel. In addition, the computational planning literature fails to anticipate the problems that can arise when multiple operators modify and distribute representations in their work.

3.5.1 Configuration and troubleshooting

When operators manually alter the parameter values of a representation, the possibility exists for human error, of two types. First, the executed change may not accomplish the intended change, due to either a slip or a misunderstanding. Second, the intended change may not be appropriate for the current circumstances. Thus, model troubleshooting becomes an important part of the human work. To be clear, troubleshooting here does not refer to faults in the orbiter itself, but rather to an erroneous representation of the orbiter.

Troubleshooting begins with the recognition of a problem, either a discrepancy between the world and predictions as discussed above, or a discrepancy between the appearance of the model and expectations. There are just a few parameter settings in the current version of AntMan that could be problematic. However, the numerous operations on an ephemeris make troubleshooting particularly problematic for the FDOs. Consequently, Dynamics maintains a written history of all operations executed on the ephemeris along with a computer log of detailed commands, which are reviewed together in troubleshooting an errant ephemeris.

3.5.2 Informing representation users of parameter value change

In most human factors applications, screen content changes because a feature in the world changes. In rare cases, such as the mode of an autopilot, the screen (and sometimes even the computation) reflects an agent’s change in the configuration of the display. This is not problematic for the operator who personally instigates the change, and understands its consequence. However, when a different agent causes the change, the recipient may become confused (Degani et al. 2000; Sarter and Woods 1995).

AntMan in prediction mode changes under two circumstances. In one circumstance, the operator changes the events or time intervals of interest. Because AntMan in prediction mode is local to a particular workstation, one operator’s change does not affect another operator’s workstation. Different versions of AntMan can and do reside on different operators’ workstations.

The other circumstance of human-initiated change is potentially far more problematic. Personnel working outside of the immediate INCO workgroup (in satellite scheduling or trajectory) can effect changes to AntMan predictions across all operators’ workstations. An unannounced change could surprise or confuse operators. The INCOs compensate for the potential problem with technology, posting automated warnings that must be dismissed. Figure 6 for example shows a warning for a change in the Scheduled Hand Overs between TDRSs. This warning remains until the viewer acknowledges it. Multiple unacknowledged changes generate a stack of warnings that must be dismissed individually.

Fig. 6 Re-creation of an alert on prediction mode of AntMan
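The acknowledgment discipline described above behaves like a stack of alerts, each of which must be individually dismissed. The class below is a hypothetical sketch of that behavior; the names and messages are illustrative, not taken from the INCO software.

```python
# Hypothetical sketch of the INCO warning mechanism: externally
# initiated changes to the shared prediction push alerts onto a
# stack, and each must be individually acknowledged to disappear.
class WarningStack:
    def __init__(self):
        self._alerts = []

    def post(self, message: str) -> None:
        """Record an unannounced external change as a visible alert."""
        self._alerts.append(message)

    def acknowledge(self) -> str:
        """Dismiss and return the topmost alert."""
        return self._alerts.pop()

    def pending(self) -> int:
        return len(self._alerts)
```

The design choice worth noting is that acknowledgment is per-alert: an operator cannot clear the display without attending to each change.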

The FDOs inform others of representation change by exploiting the distribution of tasks. Maintaining the trajectory is the responsibility of FDO, Traj and the Dynamics officers, who are the only specialists allowed to alter its parameters. Until quite recently, only the Dynamics officers had the capability of actually incorporating changes into the ephemeris stored on the vintage MOC. Should FDO decide, for example, that a new vector is required, FDO will use the voice loops to ask Dynamics to TUP the ephemeris for the new vector. TUPping results in a short-term change to the appearance of an ephemeris panel of TPS, indicating that a substantive change is in progress. Dynamics informs the rest of the backroom that TUPping is in progress over the “airwaves”, that is, by ambient voice rather than voice loop. When the TUP is complete, the TUP counter increments. Thus, two human auditory sources inform listeners of a change, along with two changes in the appearance of the display.

The above-described practice is threatened by the recent software and hardware changes that permit changes to an ephemeris without passing through Dynamics. Although announcing an intended change over the voice loops remains clearly good practice, no announcement is required for the change to occur. A viewer who is not focused on the displays at the moment of change, or who is not tracking the TUP counter, may not know that a change has occurred. In addition, the capability to make changes independent of Dynamics threatens the integrity of the log for troubleshooting, noted above.

3.5.3 Managing multiple models

An important distinction between INCO and FDO work is that INCOs manage just one representation for planning, while FDOs manage multiple sub-models within the various locations of TPS. Usually different members of the trajectory team are responsible for refining the contents of the ephemerides, while FDO is responsible overall for their contents. Team members begin their refinement by copying an existing ephemeris into a new ephemeris.

Until recently, FDO had just the four ephemeris slots indicated on TPS to juggle its models, creating a risk of misidentification as contents were swapped in and out of ephemerides. However, the FDOs have recently developed a new workstation application that provides many more ephemeris slots, along with more opportunity for confusion. Some of the ephemerides mirror the MOC, so that when the MOC ephemeris is changed due to trajectory updates, the mirror image changes. This mirroring process ensures that changed circumstances are taken into account uniformly in the planning process. However, mirroring is computationally costly. As a result, other ephemerides are local to an individual’s workstation, originating as copies of the mirrors, but processed and viewed only on an individual workstation for speed. A final type of ephemeris is global, originating as copies of the mirrors or locals, but capable of being shared and operated on by all users. In current practice, the computer provides ease of distribution and modification with little assistance in coordination among models.
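The distinction among mirrored, local, and global ephemerides is essentially a distinction between a live reference and detached snapshots. The sketch below is a deliberately simplified analogy in plain Python; the variable names and the trajectory values are invented, and the real ephemerides are of course far richer objects.

```python
import copy

# Hypothetical analogy for the three ephemeris kinds. A "mirror"
# tracks the authoritative MOC trajectory; "local" and "global"
# copies are snapshots that do NOT change when the MOC does --
# exactly the coordination burden left to the operators.
moc = {"trajectory": [1.0, 2.0, 3.0]}   # authoritative source

mirror = moc                             # same object: updates propagate
local_eph = copy.deepcopy(moc)           # private snapshot, one workstation
global_eph = copy.deepcopy(moc)          # shared snapshot, editable by all

moc["trajectory"].append(4.0)            # a trajectory update arrives
```

After the update, `mirror["trajectory"]` reflects the new state while both snapshots silently retain the stale one; nothing in the data itself marks them as out of date.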

In summary of Sect. 3.5, the distributed nature of planning that includes human configuration of representation parameters raises additional issues not anticipated in the classical computer science literature. These include potential error in parameter configuration leading to later troubleshooting, informing users of parameter value changes, and the management of multiple representations (copied, linked or independent of each other) distributed across multiple users.

4 Conclusions and implications

Illustrated by two work domains in space transportation, criticism within the classical computational planning literature in artificial intelligence guided the identification of the human role in interacting with computational representations in the context of an actual operational work setting. These concern the scope of the representation, attribution of meaning and relationship with a dynamic world. Looking beyond this literature to distributed problem solving, new issues emerged based on the ease of creating, altering and distributing computer-based representations. These concern potential error in model configuration leading to later troubleshooting, informing other operators of representation changes, and the management of multiple related models. While room for improvement is apparent, the domains provided numerous examples of the display requirements and successful work practices for supporting the human role. The examination of any planning domain involving computer-based models for computational purposes (e.g., manufacturing design or assembly) should reveal the same issues, and some kind of work practice or technology feature designed to address these.

Many of these issues arise because of the loose coupling between the representation and a corresponding physical environment, a defining feature of planning tasks, in contrast to manual control. However, the present research echoes Vicente and Burns (1996), who emphasized the distinction between a representation and the world it represents, even in manual control tasks. Unlike the conclusion from laboratory studies of computer-supported planning, the present study provides no evidence for confusion or complacency regarding the limitations of the representations. While operators in the present domains are largely dependent on telemetry to portray the remote physical world, several work practices acknowledge a potential discrepancy between that world and representations used in planning. Displays and procedures promote frequent comparisons between the assumptions represented in plans and current conditions, and mismatches prompt correction to the representation. Over-extended display features and “fudge factors” enable limited representations to accommodate the complexity and novelty of specific situations.

In planning domains, the representation must include features that pertain to the representation operators are viewing (e.g., future time intervals, as well as current time and number of updates). This information assists in the interpretation of the information regarding the world. However, human operators can adjust properties of the representation in a keystroke, with an instantaneous, intentional change in the meaning of the representation. When multiple operators rely on the same representation, unpredictable change has the potential to engender confusion, similar to unexpected mode changes of automation.

In stark contrast to manual control, the goal state (e.g., a target trajectory, or being in a state of good communication) is not computational here. The conditions represented in verbal plans correspond to the predicted results of control inputs as in Wickens et al. (1989), but there is no computational assessment of the quality of the results. That is, while operators know what it means to attain a goal, the simulation software itself does not. This is consistent with other complaints that the evaluation of computational planning results is too dependent on human observers (Layton et al. 1994; Smith et al. 1997). The specific representational deficiency identified here is not just missing context or the inability to make tradeoffs, but a missing representation of goals and sub-goals that a computer might use for evaluating predicted results. Instead, a human specialist is required to interpret overall status and the import of any discrepancies with respect to mission goals expressed only in natural language. The deficiency impacts real-time control as well; a specialist must broadcast discipline-specific conclusions to other disciplines, whose go-ahead on future activities may depend on a successful outcome. Human interpretation, broadcast over the voice loops is the only mechanism for announcing goal attainment. This will become problematic in future space exploration initiatives, as the work becomes distributed across numerous remote control centers.

The prevailing approach to the identification of issues and requirements in the design of computational support is to develop a theory, implement alternative systems based on this theory and examine the effect of these experimenter-generated systems on behavior. An alternative, complementary approach is to examine successful practitioner-generated systems and surrounding work practice for issues that practitioners have addressed intuitively, and search for the theoretical framework to unify these intuitions. The present work exemplifies this complementary approach.