1 Introduction

Incorporating 3D data-capturing tools has become common practice in archaeological work worldwide, making archaeological data more available for post-processing and presentation, but also far more voluminous than ever before. The array of data formats, the variety of processing tools, and the range of devices for presenting digital and digitized data have also widened considerably. The challenge modern archaeologists face, therefore, is setting proper standards for dataset manipulation, sophisticated visualization, and flexible data application. To meet these rising expectations, cooperation with other researchers and specialists in interdisciplinary frameworks is gaining momentum. This underlines the need to articulate the imperatives for engaging in archaeological research and to carefully employ techniques relevant to the chosen archaeological analyses and work methodologies. The increased incorporation of advanced computer and game technologies not only expands the archaeologist's toolkit, but also encourages re-thinking work procedures and assumptions. In particular, virtual environments (VE) and virtual reality (VR) can be used to experiment with new approaches to interpreting archaeological assets and provide new means, for the public as well as for scholars, to re-engage with cultural heritage knowledge. Contrary to the hyper-realistic nature of 3D simulations common in popular culture, for scientific use it is essential to maintain the correct relations between the different datasets and varying resolutions involved in assembling archaeological information systems.

A recently developed application evaluates the potential of VR platforms and immersive devices as ground-breaking tools in interpretative archaeological research. The application allows real-time interaction with a modeled space and its assets, and supports remote collaborative work within a single simulated space. Demonstrated on a game-engine-generated 3D simulation of the Temple of the Storm God in Aleppo, it explores the application of immersive devices to the field of cultural heritage.

2 Implementing and Manipulating Integrated Data in Virtual Environments

A viable 3D simulation of the Temple of the Storm God and its assets, correctly visualizing the various temple components, was constructed through workflows compiled by members of a dedicated interdisciplinary working group [1]. These workflows describe the steps taken to properly convert the various inputs comprising the virtual scene. The datasets used to compile the virtual scene include data captured in 3D in the temple compound itself in 2010, available satellite imagery, and 2D legacy data. Prior to compiling the virtual scene, these varied inputs were processed for further manipulation along specific pipelines in order to meet the project objectives while taking methodological concerns into account [1]. The resulting product is a detailed, multi-scalar simulation of the temple compound with its architectural elements and finely carved inscriptions. The geometrical relations of the objects were accurately maintained despite rigorous file-size reduction, achieved by applying decimation algorithms [14]. Using capabilities offered by media design and game technology, the 3D simulation of the temple was embedded in a VE, equipped with interactive functionality, and made displayable on desktop as well as immersive devices such as head-mounted displays (HMD).
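As an illustration of the principle behind this file-size reduction, the following self-contained C++ sketch shows one simple decimation strategy, vertex clustering: vertices are snapped to a uniform grid, all vertices in a cell are merged, and triangles that collapse are discarded. This is a generic example under stated assumptions, not the project's actual pipeline, which relies on the algorithms cited in [14].

```cpp
// Generic vertex-clustering decimation sketch (not the project's actual
// pipeline): vertices are snapped to a uniform grid, merged per cell,
// and degenerate triangles are dropped.
#include <array>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Vec3 { float x, y, z; };

struct Mesh {
    std::vector<Vec3> vertices;
    std::vector<std::array<std::uint32_t, 3>> triangles;
};

Mesh Decimate(const Mesh& in, float cellSize) {
    std::unordered_map<std::uint64_t, std::uint32_t> cellToVertex;
    std::vector<std::uint32_t> remap(in.vertices.size());
    Mesh out;

    for (std::size_t i = 0; i < in.vertices.size(); ++i) {
        const Vec3& v = in.vertices[i];
        // Pack the grid cell indices into one 64-bit key (21 bits per axis).
        auto q = [cellSize](float c) {
            return static_cast<std::uint64_t>(
                static_cast<std::int64_t>(c / cellSize) & 0x1FFFFF);
        };
        const std::uint64_t key = (q(v.x) << 42) | (q(v.y) << 21) | q(v.z);

        auto [it, inserted] = cellToVertex.try_emplace(
            key, static_cast<std::uint32_t>(out.vertices.size()));
        if (inserted) out.vertices.push_back(v);  // representative of this cell
        remap[i] = it->second;
    }
    for (const auto& t : in.triangles) {
        const std::array<std::uint32_t, 3> r{remap[t[0]], remap[t[1]], remap[t[2]]};
        if (r[0] != r[1] && r[1] != r[2] && r[0] != r[2])  // skip degenerate faces
            out.triangles.push_back(r);
    }
    return out;
}
```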

Several aspects need to be considered when implementing 3D scientific data in VR platforms. Technical issues might be the most obvious ones. In our experience with several game engines, achieving high representational fidelity is much more a matter of the engine's architecture and the Level of Detail (LoD) it supports than of the quality and accuracy of the input data. As programming a custom game engine was not among the project goals, and in order to keep costs low, a strategic decision was taken to generate the simulation on available gaming platforms (several cost-free platforms available for non-commercial use were tested; this paper refers to the Unreal Engine 4). Formerly, relying on cost-free development kits restricted the range of available game engines to those offering only limited access to code editing and asset manipulation. Such restrictions substantially constrain the ability to simulate complex scenes comprising multiple assets with a wide range of LoDs. Fortunately, game technology is a rapidly developing industry. Newer versions of game engines are regularly released, implementing better rendering algorithms, deeper development kits, and higher representational fidelity. Once the groundwork is laid and the required 3D data is processed, converting the simulated scene to another game engine or a newer version is relatively simple, without apparent loss of detail or data quality. Nonetheless, this step should be supported by hardware components that correspond to the system's requirements.

Technical issues are only the backstage to other fundamental concerns regarding content and functionality. The potential affordances of a given VE are prompted by several factors that determine the user's experience and perception of the environment, for example the scene's design, smoothness of display, choice of interface, or the type and range of actions allocated to the user [5, 6]. In the applications presented here, navigational tools were implemented to enhance an exploratory affordance and help the user develop a better sense of presence and spatial acquaintance with the modeled environment. It is important to note the difference between a sense of presence, or embodiment - the reaction a user might develop while experiencing a VE - and the notion of spatial immersion, which can be described as the constellation of components and characteristics comprising a given VE [6]. While the latter depends on system architecture and hardware configuration, the former has to do with the content and actions embedded in the VE. Whether an immersive system will encourage a user to develop a sense of presence depends on the system's configuration and the representational fidelity of the environment. However, whether a user will be motivated to engage with the VE and the assets in it has much to do with the affordances the VE offers. An effective immersive system should convince our perception of the real-life properties of the virtual simulation we are experiencing.

The challenges in reconciling the different methodologies of game technology and archaeology become clearer when trying to adjust game engines to produce complex, multi-scalar, and visually versatile virtual simulations. By their nature, game engines are built to deliver a first-person experience and are optimal for presenting vivid representations of a (virtual) world from a subjective point of view. Hence, some game engines render better when the simulated scenes are spatially limited. Although it is usually possible to depict endless simulated plains, distant objects will often be simplified into a "backdrop" to the main scene.
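The general mechanism behind this simplification can be sketched as follows: each object carries a chain of increasingly decimated representations, and the renderer picks one by camera distance, degrading distant geometry toward a coarse "backdrop" mesh. The C++ below is an illustrative sketch; the type names and thresholds are assumptions, not Unreal Engine API.

```cpp
// Illustrative distance-based LOD selection: the farther an object is
// from the camera, the coarser the mesh drawn. Placeholder types only.
#include <cstddef>
#include <vector>

struct LodLevel {
    float maxDistance;   // draw this level up to this camera distance (m)
    std::size_t meshId;  // handle to the decimated mesh for this level
};

std::size_t SelectLod(const std::vector<LodLevel>& levels,
                      float distanceToCamera) {
    for (const LodLevel& level : levels)
        if (distanceToCamera <= level.maxDistance)
            return level.meshId;
    return levels.back().meshId;  // beyond all thresholds: coarsest mesh
}
```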

3 Virtually Re-contextualizing the Temple of the Storm God

The created VE depicts a 3D simulation of the Temple of the Storm God in Aleppo, a compound measuring 42 × 42 m with an interior cella of approximately 27 × 17 m, dated to the first half of the 2nd millennium BCE [7]. The temple was modeled with common 3D processing software [1] and rendered in the Unreal Engine. The resulting simulation retains the temple's floor plan at several scales while maintaining the accurate geometric relations of the architectural elements (Fig. 1). A key aspect of modeling the temple is the ability to distinguish various spatial levels, from very fine chisel marks and hieroglyphic inscriptions up to remote mountain ranges.

Fig. 1. The virtual simulation of the Temple of the Storm God compound rendered in the Unreal Engine 4.

Two types of applications were tested in the virtual simulation, both well established in the world of game technology and immersive environments. One of the applications uses an HMD. By nature, such devices are designed to generate a highly immersive environment, which can elicit very vivid reactions and a strong sense of presence in users, mimicking an illusory sense of body ownership [8-10 and references therein].

The other application builds on Multiplayer Computer (Online) Games (MCG/MOG), among the most widely distributed types of popular entertainment. Research on online game addiction [11, 12] indicates the profound immersive quality MOGs can have, even though - contrary to games played with immersive devices such as HMDs - they are for the most part played in front of a computer screen. Research on the educational and sociological aspects of computer and online games [13-15] reveals that such games provide users with valuable opportunities to engage in social interaction. MCGs and MOGs are powerful platforms, motivating participants to fulfill effective learning tasks and encouraging them to explore, communicate, and collaborate.

4 Interactively Exploring the Temple of the Storm God

With user experience in mind, this application integrates an Oculus Rift DK2 HMD into the virtual simulation of the temple compound (Fig. 2), emphasizing free movement in the modeled space and direct access to interaction. Enabling these actions aims to facilitate spatial affordances - a high degree of acquaintance with the temple compound and its layout, and a consistent assessment of scales and measures from different locations within the compound. A learning affordance was also assessed by enabling interaction with archaeological content, in this case the temple's decorated reliefs.

Fig. 2. The hardware setup used in the HMD application implemented in the virtual environment of the temple. The user wears an Oculus Rift DK2 HMD. Movement in the virtual environment is controlled through a hand-held Xbox 360 controller. The user's movements are captured by an external Oculus positional tracking camera positioned in front of the user.

Nonetheless, several limitations and challenges are inevitable when using immersive devices such as HMDs, relating to the representation of the virtual avatar and to navigation in the VE. Motion sickness (or simulator sickness) is the most prominent challenge to be addressed in this context. A variety of factors can trigger this phenomenon. Some relate to the construction of the system, such as frame rate, navigation speed, or display characteristics. Others are rather subjective and depend on individual circumstances, such as age, previous experience with VE systems, or medical condition [16-18]. The most common occurrences of motion sickness are instances of spatial disorientation or general discomfort, triggered by the effects an immersive experience has on the proprioceptive system. Movement is one of the most obvious triggers: while the physical body of a user is stationary in the real world, (s)he experiences its virtual representation, or virtual avatar, as moving or flying without any corresponding vestibular or kinesthetic cues.

A possible way to reduce some of these disorienting effects is to bridge the gap between the physical gestures of a user and those of the virtual counterpart. The solution implemented in the tested HMD application was to let the user control navigation in the modeled space with a hand-held controller (here an Xbox 360 gamepad). To do so, the movements of the user first need to be captured and mapped onto the gestures of the avatar in the VE. A basic HMD configuration positions only a single tracking device. The tracking camera provided with the Oculus Rift, used as the default for the temple's virtual scene (Fig. 2), tracks only the position of the HMD and not that of the full body. However, movement mapping does not need to be perfect; it suffices for gestures of movement or direction to be generally simulated in order for a user to develop a sense of presence and experience embodiment [6].
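A minimal Unreal Engine 4 C++ sketch of this control scheme is given below: gamepad axes steer the pawn, while the tracked HMD pose is applied automatically by the engine's VR camera. The class name ATemplePawn and the axis mappings "MoveForward"/"MoveRight" are project-side assumptions, and a movement component (e.g. UFloatingPawnMovement) is assumed to be attached so that the accumulated input is actually consumed.

```cpp
// Sketch of gamepad navigation in UE4 C++; names are assumptions, not
// the application's actual code. The HMD pose is handled by the engine.
#include "CoreMinimal.h"
#include "Components/InputComponent.h"
#include "GameFramework/Pawn.h"
#include "TemplePawn.generated.h"

UCLASS()
class ATemplePawn : public APawn
{
    GENERATED_BODY()
public:
    virtual void SetupPlayerInputComponent(UInputComponent* PlayerInput) override
    {
        Super::SetupPlayerInputComponent(PlayerInput);
        PlayerInput->BindAxis("MoveForward", this, &ATemplePawn::MoveForward);
        PlayerInput->BindAxis("MoveRight", this, &ATemplePawn::MoveRight);
    }

private:
    void MoveForward(float Value)
    {
        // Joystick input steers the avatar; head rotation from the HMD
        // is layered on top by the VR camera.
        AddMovementInput(GetActorForwardVector(), Value);
    }
    void MoveRight(float Value)
    {
        AddMovementInput(GetActorRightVector(), Value);
    }
};
```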

Stepping out of the range of the tracking device pauses the motion tracking and the display fades to black. This pre-defined behavior can also induce motion sickness. More importantly, it deeply affects the user's movement, in the physical world as well as in the virtual space. Extending the tracking volume (of both the user and the avatar) can be achieved rather easily with additional devices/detectors, or by switching to technologies using motion-capture systems or magnetic tracking (trading off a lightweight single laptop and HMD for a much less portable system). However, even if systems are switched or tracking devices are added, the user will still be literally confined to ground level. Performing a flight, for example in order to observe objects from above, will still in all likelihood induce motion sickness, since it can be simulated virtually but not tracked physically.

In the developed application, system mobility was reconciled with the limited tracking volume by implementing a virtual dynamic platform on which the avatar is placed. In fact, this solution improves the user experience in two ways. On the one hand, the tracking volume is substantially extended: the user is granted an almost unlimited range of movement in the simulated temple, and the risk of stepping out of the scene is reduced. On the other hand, motion sickness is also reduced by overcoming the discrepancy between physical movement and its simulation in the temple.

The virtual platform, designed to resemble a "flying carpet" (Fig. 3a), is visible to the user and semi-transparent so as not to obstruct the view. The platform's position in the VE is manipulated by the user through a hand-held controller, allowing the user to actively explore the entire modeled temple and approach objects from multiple angles.

Fig. 3. A movable dynamic platform and visualization of the tracking volume boundaries. (a) The user's representation in the VE (the avatar) is visualized on the dynamic platform, while the user's posture (here slightly crouching) is approximated from the position of the HMD relative to the tracking device. (b) Schematic depiction of the tracking volume covered by the tracking device positioned in front of the user. (c) The walls of the confining box fade in in front of the user/avatar when approaching the limits of the tracking volume, signaling the need to re-position.

The platform is structured as a confining box with dynamically changing walls. In a static position, only a subset of the platform is visible at the feet of the user (as seen in Fig. 3a). When the limits of the tracking volume are approached (Fig. 3b), a colored wall appears in front of the user, signaling the need to re-position (Fig. 3c). Moving the dynamic platform re-positions the tracking volume within the virtual space without re-positioning the avatar inside it, essentially negating the need to physically move in the real world. The confining box also becomes visible when the platform is shifted in the virtual space. The user thus has a better spatial reference and can apprehend that it is the platform (or avatar) that is being moved rather than his or her own physical body.
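The boundary feedback reduces to a simple proximity computation, sketched below under the assumption that the tracking volume is approximated as an axis-aligned box around the tracking origin; the 0.5 m warning margin and all names are illustrative, not the application's actual values.

```cpp
// Boundary feedback sketch: as the tracked HMD position nears a face of
// the (assumed box-shaped) tracking volume, the corresponding wall of
// the confining box is faded in. The 0.5 m margin is illustrative.
#include <algorithm>

struct BoxVolume { float HalfX, HalfY, HalfZ; };  // half-extents in metres

// Opacity (0 = invisible, 1 = fully opaque) of the wall facing +X;
// the other five walls are handled symmetrically.
float WallOpacityPosX(const BoxVolume& Volume, float HmdX,
                      float WarningMargin = 0.5f)
{
    const float DistanceToFace = Volume.HalfX - HmdX;  // metres to the +X face
    const float T = 1.0f - DistanceToFace / WarningMargin;
    return std::clamp(T, 0.0f, 1.0f);  // ramps from 0 to 1 inside the margin
}
```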

5 Remote Collaboration in a Shared Virtual Space

Another tested application builds on the concept of multi-user systems, or multiplayer gaming platforms. Whether played online or implemented in VEs, multi-user systems demonstrate powerful immersive characteristics. Such systems can support basic features such as location, orientation, agency, and communication [9, 19]. These features are capable of inducing a strong sense of embodiment and participation, despite the fundamental difference in configuration compared to other immersive VEs, such as HMDs or DIVE/CAVE systems [13, 20].

Running on the same virtual simulation of the temple, the system is configured to support a first-person point of view, with the scene displayed on a screen. As in the HMD application, the temple compound can be actively explored and the different assets can be approached from up close. The avatar's movements and parts of the virtual body representation are also visible to the user, encouraging a sense of orientation and direction in the VE. The main difference from the user-oriented HMD application is that this multi-user system is designed to enhance producer expertise. To reach that goal, an innovative approach was taken, maximizing real-time interaction and knowledge sharing between peers using common interfaces, such as a desktop PC.

The application's design follows two key objectives: real-time communication and interactive cooperation, both essential properties for conducting meaningful collaborations in VEs [19, 20]. The platform allows participants to carry out remote work sessions, thus supporting knowledge transfer between professionals and decision makers. Using the application, multiple users can conduct remote work sessions faster and more efficiently while sitting in front of their personal computer screens.

During a work session, participants can perform two main types of user-asset interaction. They can open information pop-ups (Fig. 4) containing information on the characteristics of a given object. Additionally, assets in the virtual scene can be edited and manipulated on the fly (Fig. 5a), while these and other actions are evaluated in real time through verbal communication.

Fig. 4. Informational pop-ups with the properties of specific objects can be activated by the user when approaching an object.
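A minimal Unreal Engine 4 sketch of such a proximity-activated pop-up is shown below, assuming a trigger volume around the object that fires when an avatar's collision component overlaps it; all class names are hypothetical and the widget code is omitted.

```cpp
// Hypothetical proximity-activated pop-up in UE4 C++: a box trigger
// around the object fires an overlap event when an avatar approaches.
#include "CoreMinimal.h"
#include "Components/BoxComponent.h"
#include "GameFramework/Actor.h"
#include "InfoPopupActor.generated.h"

UCLASS()
class AInfoPopupActor : public AActor
{
    GENERATED_BODY()
public:
    AInfoPopupActor()
    {
        Trigger = CreateDefaultSubobject<UBoxComponent>(TEXT("Trigger"));
        RootComponent = Trigger;
    }

    virtual void BeginPlay() override
    {
        Super::BeginPlay();
        Trigger->OnComponentBeginOverlap.AddDynamic(
            this, &AInfoPopupActor::OnApproach);
    }

    UFUNCTION()
    void OnApproach(UPrimitiveComponent* OverlappedComp, AActor* OtherActor,
                    UPrimitiveComponent* OtherComp, int32 OtherBodyIndex,
                    bool bFromSweep, const FHitResult& SweepResult)
    {
        // Show the pop-up with the object's metadata (widget code not shown).
    }

private:
    UPROPERTY()
    UBoxComponent* Trigger;
};
```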

Fig. 5. The virtual simulation of the Temple of the Storm God as seen from the point of view of a single participant in the multi-user application. (a) A modeled wall added to the virtual scene as part of testing a suggested reconstruction of the pedestal wall in the temple compound. (b) The avatar of another participant is seen inspecting the cult images of the Storm God and King Taita of Philistin to his right.

The temple compound can be navigated and explored simultaneously by multiple participants (Fig. 5b). The properties of existing objects (for example, spatial position or physical dimensions) can be edited, and new objects can be added to the scene; however, an object can be manipulated by only one user at a time. While an editing action takes place, the object is highlighted so that peers are aware of changes being made to the scene. Object manipulation in the application is based on an underlying database containing the model and its assets. Establishing a direct link to this database allows assets to be accessed in real time without recompiling the entire application after each editing task.
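A hedged sketch of how such an exclusive edit lock could look in Unreal Engine 4's server-authoritative networking model is given below. The class AEditableAsset, the flag bLocked, and the RPC names are assumptions rather than the application's actual code, and for brevity the server RPC is shown on the asset itself, although in practice it must be routed through a client-owned actor such as the PlayerController.

```cpp
// Hypothetical exclusive edit lock in UE4's server-authoritative model.
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Net/UnrealNetwork.h"
#include "EditableAsset.generated.h"

UCLASS()
class AEditableAsset : public AActor
{
    GENERATED_BODY()
public:
    AEditableAsset() { bReplicates = true; }

    // Replicated lock flag: every client sees that the asset is being
    // edited and can render the highlight accordingly.
    UPROPERTY(ReplicatedUsing = OnRep_Locked)
    bool bLocked = false;

    UFUNCTION(Server, Reliable, WithValidation)
    void ServerRequestEdit();
    bool ServerRequestEdit_Validate() { return true; }
    void ServerRequestEdit_Implementation()
    {
        if (!bLocked)  // grant the lock only if no one else is editing
        {
            bLocked = true;
            OnRep_Locked();  // the server applies the highlight locally too
        }
    }

    UFUNCTION()
    void OnRep_Locked()
    {
        // Toggle the highlight, e.g. via a custom-depth outline (not shown).
    }

    virtual void GetLifetimeReplicatedProps(
        TArray<FLifetimeProperty>& OutLifetimeProps) const override
    {
        Super::GetLifetimeReplicatedProps(OutLifetimeProps);
        DOREPLIFETIME(AEditableAsset, bLocked);
    }
};
```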

Enabling advanced asset interaction is particularly important for organizing and carrying out tasks successfully in a shared virtual space. The developed user-asset interactions specifically aim to open up new approaches to re-contextualizing cultural heritage knowledge, for example when testing assumptions regarding architectural reconstructions with both diachronic and synchronic implications. However, much as in real life, peer communication and collaboration are essential to support a sense of co-presence in a VE, and can be decisive factors in carrying out joint tasks successfully [19, 20]. Hence, both written and verbal communication are offered in the application, as embedded text messaging or via VoIP technology provided by third-party services.
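The embedded text messaging could, for example, be realized with Unreal Engine 4's RPC mechanism as sketched below: a server RPC receives a message from one client and fans it out via per-client RPCs (PlayerControllers exist only on the server and their owning client, so a multicast would not reach all peers). All names are assumptions; this is not the application's actual code.

```cpp
// Hypothetical embedded text messaging via UE4 RPCs.
#include "CoreMinimal.h"
#include "Engine/World.h"
#include "GameFramework/PlayerController.h"
#include "ChatPlayerController.generated.h"

UCLASS()
class AChatPlayerController : public APlayerController
{
    GENERATED_BODY()
public:
    UFUNCTION(Server, Reliable, WithValidation)
    void ServerSendChat(const FString& Text);
    bool ServerSendChat_Validate(const FString& Text) { return true; }
    void ServerSendChat_Implementation(const FString& Text)
    {
        // Executed on the server: relay the message to every participant.
        for (FConstPlayerControllerIterator It =
                 GetWorld()->GetPlayerControllerIterator(); It; ++It)
        {
            if (AChatPlayerController* PC =
                    Cast<AChatPlayerController>(It->Get()))
                PC->ClientReceiveChat(Text);
        }
    }

    UFUNCTION(Client, Reliable)
    void ClientReceiveChat(const FString& Text);
    void ClientReceiveChat_Implementation(const FString& Text)
    {
        // Append Text to the chat widget (UI code not shown).
    }
};
```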

Unlike the HMD application, the multi-user system has some prerequisites. Participating in a session requires internet access in order to connect to the server hosting the VE and the temple's simulation. Hardware configuration (in particular graphics card performance) as well as system and network stability (which depend on the available client/server infrastructure) may also limit the success of a collaborative session. At present, the design of the application requires all participants to load the same model version on their respective PCs in order to share the same simulated space. A peer-to-peer (P2P) configuration connected to a shared asset database could alleviate some of these issues and provide better application flexibility and increased stability. Further aspects that can be improved in future versions of the application relate to physical properties and the sense of embodiment in the VE. These, however, depend largely on the state of game technology available on the market. Developments in the field are anticipated in the near future with the introduction of advanced tactile user interfaces and newer versions of end-user immersive devices.

6 Closing Remarks

Taking advantage of capabilities offered by media design and game technology, dedicated applications were implemented on a 3D simulation of the Temple of the Storm God in Aleppo. The 3D model of the temple compound and its assets was generated in a game engine and based on data integrated from diverse sources at varying scales. The purpose of the presented applications is to allow real-time interaction with the virtual temple space and its assets. While the HMD application emphasizes user experience, the multi-user system addresses scientists and professionals in the fields of archaeology and cultural heritage; in particular, it enables remote collaborative work within a single shared space. Both applications provide interactive functionality and are displayable on desktop PCs as well as immersive devices. In general, virtual environments can be very effective tools for inducing a strong sense of presence and embodiment, and for motivating engagement with individual as well as joint tasks. Both applications implement further advanced functions which allow users to directly manipulate and edit assets in real time. The work of an interdisciplinary group of archaeologists, media designers, and computer scientists presented in this paper lays the ground for meaningful remote collaborative work, where peers can act and communicate in the same virtual space. Such capabilities are of particular interest to professionals seeking to re-think work procedures as well as to re-contextualize cultural heritage knowledge. With future developments in game technology and newer versions of immersive devices released to the market, weak spots such as motion sickness and tactile feedback are expected to improve.