1 Belonging

Theory, terminology and reality often interact in complex relationships, particularly in disciplines where the material base is evolving rapidly, such as ICT-related domains. As a consequence, the nomenclature of research in digital design and development is often confusing and contested – the field of Augmented Reality is no exception. This is confirmed by a recent episode from the everyday life of Academia.

In the context of a well-known conference series on Augmented Reality, the author of this chapter received a rather unexpected comment, stating that the type of mobile Augmented Reality application we have been experimenting with – which we call situated simulations – is in fact: “… a mobile VR application (or a mobile MR application at the most), but definitely not AR according to Ron Azuma's definition.” The utterance was surprising. The project in question has been reported at numerous conferences over the last couple of years, but has never received any comments questioning its affiliation with the domain of mobile augmented reality.

The question of what should be recognized as inside or outside the field of augmented reality had not been given much attention in our research – until now! For the last five years we had basically followed some key interests and ideas, and tried to explore and solve the potential and problems as we encountered them. ‘Augmented Reality’ had always been the obvious candidate for a research tradition it made sense to identify with. The comment, however, made us curious and interested in AR definitions as such, and in how our own experiments with ‘situated simulations’ (sitsims) could in fact be positioned in the context of Augmented and Mixed Reality research and development. A more thorough reflection upon the relationship between sitsims and AR might well benefit our own understanding of what we are doing. Such a comparison is also closely related to the method we are practising.

Our experiments with situated simulations have an origin and background that are contrary to most other augmented reality systems. The purpose of our research has been to explore the rhetorical and narrative potential of emerging technological platforms. Rather than following a technological path, we take available, off-the-shelf hardware configurations – such as iOS and Android OS devices – as given platforms, and then employ the available software and multimodal assets in order to create innovative solutions at the level of textual expression. Thus, our perspective, in general, is positioned in and dominated by humanistic approaches to digital design and composition [1, 2]. More specifically, our inquiry is part of a practice that attempts to develop a method of design with a focus on the potential types and patterns of digital texts. We have tentatively called this ‘genre design’. A situated simulation is an example of such a prototyped potential genre (see: http://www.inventioproject.no/sitsim).

In this chapter we take the opportunity to discuss the pertinence of our work on situated simulations in relation to the tradition of augmented reality research and applications. Given our point of departure in the humanities and the question of where sitsims ‘belong’, we will take a closer look at some of the most central definitions of augmented and mixed reality over recent decades, and see how they are suited to describe our explorations of situated simulations. At the same time, we would also like to allow some room to present one of the main motivations and purposes behind our research: the attempt to design new narrative spaces in order to generate new rhetorical experiences in the dissemination of knowledge and information linked to a specific space and place. Before we determine whether a situated simulation is AR or not, according to Azuma’s taxonomy [3, 4], we need to have a closer look at the defining features and key qualities of sitsims [5, 6].

2 Elucidations

Technically, a situated simulation requires a mobile terminal (smartphone) with broadband networking, a high-resolution graphics display, and orientation/location capabilities. Semantically, a situated simulation exhibits a multimodal (audiovisual) dynamic 3D environment, which the user can observe, access and explore by moving and interacting with the terminal and the application’s interface. The smartphone thus serves as a point of view – a virtual camera – which provides a continually changing perspective into the 3D graphics environment. When using a situated simulation there is then approximate identity between the user’s visual perspective and perception of the real physical environment, and the user’s perspective in(to) the virtual environment as this is audiovisually presented by means of the phone and the sitsim’s interface. The relative congruity between the ‘real’ and the ‘virtual’ is obtained by allowing the camera’s position, movement and orientation in the 3D environment to be constrained by the orientation and location technology of the smartphone: as the user moves the phone in real space, the perspective inside the virtual space changes accordingly.
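The core of this mapping can be illustrated with a short sketch. The following Python fragment is a minimal illustration of our own devising – not the actual sitsim implementation, which runs on iOS and Android – showing how a compass heading and a pitch angle read from the phone’s sensors might be converted into the view direction of the virtual camera in a local east-north-up frame:

```python
import math

def camera_pose(heading_deg, pitch_deg):
    """Map device orientation (compass heading, pitch) to a unit
    view-direction vector in a local east-north-up (ENU) frame.

    heading_deg: 0 = north, 90 = east (as reported by a magnetometer)
    pitch_deg:   0 = horizontal, 90 = pointing straight up
    """
    h = math.radians(heading_deg)
    p = math.radians(pitch_deg)
    east = math.cos(p) * math.sin(h)
    north = math.cos(p) * math.cos(h)
    up = math.sin(p)
    return (east, north, up)
```

In an actual engine this vector, together with the GPS-derived position, would be fed to the renderer’s camera every frame, so that the perspective into the virtual environment tracks the physical movement of the phone.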

Given these qualities, situated simulations have proved suitable for representing, on a given location, an item or topic (knowledge and information) which is relevant to that specific place or site, but which is in some way or another absent or not accessible to the user. These could be objects that have ceased to exist, that are hidden, or that have not yet come into being. In our experimental implementations so far, we have primarily focused on the first mode, trying to design situated simulations that reconstruct, on location, historical objects, actions and events that once took place on a particular site. Figure 14.1 depicts an example of a situated simulation in use, showing a reconstruction of Forum Iulium on location in Rome.

Fig. 14.1
figure 1_14

A situated simulation of Forum Iulium in Rome (see: http://www.inventioproject.no/sitsim)

Azuma’s definition of Augmented Reality is quite straightforward: “What is augmented reality? An AR system supplements the real world with virtual (computer-generated) objects that appear to coexist in the same space as the real world.” [4] More specifically, his definition of AR is threefold: augmented reality

  • combines real and virtual objects in a real environment;

  • runs interactively, and in real time; and

  • registers (aligns) real and virtual objects with each other.

Immediately, it seems that our general description of sitsims above fits Azuma’s definition. (1) In a situated simulation the real and the virtual are combined in a real environment: the actual real environment is combined with the 3D reconstruction of historical objects and events in situ. (2) The sitsim application runs interactively in real time: the perspective changes as the user moves, and he or she may access information and trigger activity inside the virtual environment with instant feedback. Also, (3) the real and the virtual are aligned with each other: the user’s perspective in the real world has approximate identity with the user’s perspective into the virtual environment.

One may now interrupt and counter this reasoning by arguing that in augmented reality the real environment is registered by means of certain technologies of tracking: using a digital camera and software for pattern recognition of fiducial markers, etc. This involves much more advanced computations than the typical location and orientation technologies one finds in current off-the-shelf smartphones. To this objection one may respond that Azuma’s discussion explicitly stresses that definitions of augmented reality should be general and not limited to particular hardware technologies [3, 4]. Other definitions also seem to support the possibility that situated simulations may be considered a form of augmented reality. Feiner [7] defines augmented reality as follows: “Augmented reality refers to computer displays that add virtual information to a user’s sensory perception.” There can be little doubt that this general description is also fully consistent with our depiction of situated simulations, despite the fact that it too was conceived long before current smartphones became available.

3 Sitsims as Augmented Reality: The Temple of Divus Iulius

Since the launch of the second generation iPhone in 2008 we have implemented and tested six situated simulations with historical topics on location in Norway (Oseberg Viking Ship), San Francisco (Mission Dolores & Earthquake 1906), Athens (The Parthenon) and Rome (Forum Iulium & Temple of Divus Iulius). The latter was evaluated by students of classics at The Norwegian Institute in Rome, first in May 2010, and then in a new version in February 2011. The new version of Temple of Divus Iulius runs on iOS 4, and the students tested and evaluated it using both the iPhone 4 and the iPad on location in the Roman Forum. In the following we will take a closer look at this situated simulation by describing its features in accordance with Azuma’s three defining criteria of augmented reality [3, 4].

3.1 Combining Virtual and Real in Real Environments

In a sitsim, the virtual environment is brought into the real environment – a specific space and place – and thus combined with it. Ideally, the real and the virtual occupy the same space; that is, there is a congruity between the real and the virtual environment, and movement and orientation in the one is mapped onto the other. Thus the sitsim displays various interpretations of the surroundings in which the user is positioned. In the case of Divus Iulius the site in question is the Republican Forum in Rome. As the user moves around the Forum he or she can observe – in parallel – the surroundings at various points in time, from 44 BC, just after the Ides of March and Julius Caesar’s murder, to 29 BC, when Octavian consecrated the new Temple to the memory and sacred tribute of his deified granduncle. As the user navigates this historic site and its various stages in time, the real and the virtual are combined in space by means of the position (GPS), orientation (magnetometer) and movement (accelerometer, gyroscope) of the device (Fig. 14.2).
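As a rough illustration of the positional half of this combination, the sketch below – our own simplified example, not the project’s code – converts a GPS fix into metres east and north of a fixed site origin using a local equirectangular approximation, which is adequate over the few hundred metres such a site spans:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def gps_to_local(lat, lon, origin_lat, origin_lon):
    """Approximate (east, north) offset in metres of a GPS fix from a
    fixed site origin, using a local equirectangular projection."""
    d_lat = math.radians(lat - origin_lat)
    d_lon = math.radians(lon - origin_lon)
    north = d_lat * EARTH_RADIUS_M
    # Longitude lines converge towards the poles, so scale by cos(latitude)
    east = d_lon * EARTH_RADIUS_M * math.cos(math.radians(origin_lat))
    return (east, north)
```

The resulting local offset, combined with the orientation reading, is what allows the virtual camera to occupy (approximately) the same position in the 3D reconstruction that the user occupies on the physical site.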

Fig. 14.2
figure 2_14

Students in action with iPhone and iPad in the Roman Forum

3.2 Interaction in Real Time

In general (as described above) the sitsim responds in real time to vertical and horizontal movement as well as change of position. More specifically, the user may engage in real time interactions with the virtual environment by means of spatially positioned hypertext links. These links trigger various types of information: verbal audio narrations; written text; photographs; 3D objects of scanned artefacts for detail view, including rotation and scaling based on the touch interface; semi–transparency of larger objects for visual access to hidden inner spaces (rooms in buildings); flying virtual camera (for transcending the user’s position in order to visit positions which are physically inaccessible, orientation remains relative to the user); access to external online resources via an internal web browser; in situ link–creation where the user may name, position and compose links and nodes, and add comments to each other’s link postings; and switching between temporal episodes containing historical actions and events (Fig. 14.3).
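The activation of such spatially positioned links can be thought of as a simple proximity test against the user’s position in the virtual environment. The following Python sketch uses hypothetical helper names of our own – it is an illustration of the idea, not the sitsim’s actual code:

```python
import math

def links_in_range(user_pos, links, radius=5.0):
    """Return the names of spatially positioned links within `radius`
    metres of the user, i.e. the links whose content could currently
    be triggered. `links` maps a link name to its (east, north)
    position in local site coordinates."""
    hits = []
    for name, (x, y) in links.items():
        if math.hypot(x - user_pos[0], y - user_pos[1]) <= radius:
            hits.append(name)
    return hits
```

In practice, a link found to be in range would be rendered as an interactive element in the 3D scene, and tapping it would trigger the associated audio narration, text, 3D object or temporal transition.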

Fig. 14.3
figure 3_14

Screenshots from a sitsim reconstructing the Roman Forum in various stages and situations: Temple of Julius Caesar (left), and Mark Antony’s eulogy (right)

3.3 Registration and Alignment in 3D

The criterion of registration is naturally focused primarily on mixed reality solutions, but alignment is also a necessity in situated simulations, where the virtual environment is displayed on a full screen. It need not be as exact as in a mixed solution, since the comparison and matching is mediated by the user’s perceptual processes and actions. If the alignment slips, the opportunity to experience the real and the virtual as separate, but parallel, dimensions (interpretations) of the same space disappears, and the virtual environment becomes accidental and irrelevant. In a situated simulation one might say that the digital application and the virtual environment itself is only half of the experience. The real site and surroundings are a necessary context and environment for the system to function according to the design intentions. We strongly believe that it is in this combination of the virtual and the real – that is, in their difference and similarity – that incremental information and added value is generated, for instance in the user’s reflection upon the relationship (double description) between the reconstructed virtual past and the aura of the historic site in the real present.

4 Situated Simulations as a Narrative Space

One of the key motivations for our research and experiments with digital genre design and situated simulations is to explore the narrative and rhetorical potential of emerging digital platforms. Augmented reality and location-based media certainly present interesting challenges to experimentation with new modes of storytelling. Early experiments with wearable augmented reality suggested the notion of situated documentaries [8]: placing multimodal documentation in real locations by means of see-through head-worn displays that overlay 3D graphics, imagery and sound to augment the experience of the real world. Our attempts can be viewed as an extension of these endeavours.

Stories have always travelled across media, from orality to literacy in prehistoric and ancient times. Later, the early 20th century saw the rapid development and dissemination of electro-mechanical material markers for new combinations of audio-visual narrative forms, in media such as cinema and television. With the digitalization of all the traditional textual types (writing, still images, moving images, and various forms of audio), the computer itself has contributed dynamic 3D environments as a genuinely new form of representation. It presents an integrated text type with the fascinating feature of being able to include, audio-visually and spatio-temporally, all the traditional analogue text types, thus generating a complex and integrated form of representation with, in principle, enormous potential for expressivity and meaning making.

The novel is the dominant fictional genre of literature and book technology, just as the feature film is the dominant form of cinema, and the series the dominant fictional genre of television. When it comes to digital media, neither hypertext fiction nor interactive movies managed to achieve any popular success. In digital media, we may state that computer games are the dominant form of fiction, and today this is most significantly expressed in the advanced and innovative uses of the real time 3D environment. How computer games relate to fiction and storytelling has become an extensive and exhausting discussion, but one that we will not address here. In location-based media and mobile augmented reality, fiction and storytelling are still at the experimental level.

In our situated simulation The Temple of Divus Iulius we have included a situated documentary as an interpretation of some of the crucial events that led up to the construction of the Temple of the deified Julius Caesar. When the user is positioned in front of the temple, near its special altar, as it may have looked in the year 29 BC, he or she is offered the opportunity to perform a temporal ellipsis and shift back to the days following the Ides of March in 44 BC. After the temporal transition, and by means of sound, the user’s attention is directed towards the northern side of the Forum, where Mark Antony is about to perform his eulogy. The user may now approach the speaker’s platform and witness (an interpretation of) this significant event in western history. Further, the user can observe the following incidents and changes on the site of the later temple.

As a temporal sequence, this narrative combination of actions and events has a traditional structure, combining one flashback with several flash forwards. This is a familiar structure in literary as well as filmic narratives, whether the mode or topic is factual or fictional. In our context the temporal loop is used to provide a significant narrative and informational context to a digital reconstruction of a historical building. However, what is interesting here is not the fact that a mobile augmented reality application manages to convey a traditional and well-established narrative technique, usually called asynchronous storytelling [9], which was already in advanced use at the time of Homer’s Odyssey. What is more unique to our simple experiment on the Forum is the fact that the movement in time (the flashback–flash forward loop) is performed and repeated in the spatial environment. As the user decides to move back in time, he or she is also triggered to move in space. In this story each point in time has its corresponding point (or position) in space. In this historical case, the effect is given by the circumstances as they are documented, and basically follows from that fact. However, it is obvious that this tempo-spatial parallelism could also serve as an efficient design technique in the further exploration of mobile augmented reality narratives, whether fictive or factive in character.

The multilinear sequencing of links and nodes in hypertext fiction makes possible complex interrelationships between the story level (story told) and the discourse level (telling of the story) in narratives [10]. In 3D-based computer games the gameplay predominantly takes place in a ‘scenic’ mode [9], where the time passing inside the game environment is identical with the time played. A situated simulation exploits the 3D modes of games, but the tempo-spatial parallelism and its double descriptions open new possibilities. We are not limited to the narrative time of the simulation and its relationships to the user: the virtual environment is also always placed in a (real) context of present time and space.

5 So, Where do Situated Simulations Belong?

To answer this question we must now return to the definitions. One of the key aspects of a situated simulation, as mentioned, is not in accordance with most augmented reality systems and applications, although interesting exceptions exist [11]. A situated simulation does not mix the graphics of the virtual 3D environment with a live video feed from the phone’s camera, and thus does not create a Mixed Reality solution at the level of the screen. This makes it difficult to position a situated simulation along Milgram’s well-known Virtuality Continuum [12, 13].

In Milgram’s diagram, Mixed Reality occupies the middle section of the continuum between Real-only and Virtual-only environments. Mixed reality is a gradual movement from augmented reality to augmented virtuality, depending on which of the two mixed elements is dominant. In this taxonomy augmented reality is a subcategory of mixed reality, and there is no other option for combining the real and the virtual. So where does a situated simulation belong on this continuum? Or, as our reviewer suggested, does it belong at all?

By aligning a picture of a situated simulation in use with the graphical presentation of Milgram’s Virtuality Continuum, it is easier to see how a situated simulation relates to the continuum (see Fig. 14.4).

Fig. 14.4
figure 4_14

Situated simulation as a combination of real and virtual outside the virtuality continuum of mixed realities. The Oseberg Viking Ship burial mound

The problem with Milgram’s Virtuality Continuum is that it is one-dimensional. In its given framing, combinations of real and virtual environments are only possible by means of some form of mixed reality, and the mixing may then only take place at the level of the display, whether it is a screen with a live video feed or a head-mounted display with a see-through solution. This rules out solutions that exploit current smartphones and their location and orientation technology, such as the situated simulations in question, where the real environment is combined and aligned with a virtual environment displayed on a full screen without any direct or integrated mixing. In the early 1990s, when Milgram’s continuum was devised, today’s smartphones had not even been conceived. The current mobility of hardware features, twenty years ago merely utopian, now demands a revision or an extension of the continuum. The fact that neither Azuma’s definitions nor Milgram’s continuum account for this form of coupling or combining real and virtual environments does not mean that it is incompatible with the traditional definitions. As we have seen, Azuma’s threefold criteria are, given some clarification and interpretation, fully adequate for describing a situated simulation.

Based on the discussion above, it is tempting to suggest a minor revision of Milgram’s continuum in order to open up a space where we might position the kind of mobile augmented reality that a situated simulation represents. In our revised diagram we have substituted the linear continuum with a two-dimensional field. This makes it possible to distinguish between mixed reality and augmented reality and, at the same time, to open up a space for situated simulations: the variant of mobile augmented reality where the real and the virtual environment are combined independently of mixed reality (Fig. 14.5).

Fig. 14.5
figure 5_14

Milgram’s Virtuality Continuum revised to include situated simulations

6 Closing

Situated simulations, as we have discussed them here, may not be augmented reality according to strictly technical definitions, particularly when one focuses on certain forms of tracking and registration, but there can be little question that by using a situated simulation on a specific location you are in fact augmenting reality.

AR has focused on merging reality with additional information provided by graphical layers, which in turn match the depth, space and texture of reality. Reality is, per se, what is real, and it is the present reality we are concerned with: the reality which is real is the present. Consequently, augmented reality, which seeks to mix the real and the virtual in a combined, integrated spatial representation, is necessarily based on the presence of the present. As long as augmented reality seeks the mixed reality solution it is always partly representing, and anchored in, the present. This limits its potential for both reconstructions of the past and preconstructions of future scenarios. In our work on situated simulations we have focused on simulating past and future versions of a specific space and place. This is where situated simulations, as a variant of mobile augmented reality, might have their advantage.