Abstract
Digital and 3D data are common components of current archaeological work, and expectations regarding their utilization in contextualizing archaeological knowledge are steadily rising. Rapid progress in real-time rendering software and more accessible computational power enables integrated data-sets to (re)gain relevance in the process of interpreting archaeological contexts. Retaining a high level of detail and correct geometric relations in a complex scene, while reconciling inherent variations in the scale, format, and resolution of input data (including 2D legacy data and 3D field recordings), has already been successfully achieved in the simulation of the Temple of the Storm God of Aleppo, realized by an interdisciplinary working group at the HTW Berlin. The current paper addresses the modification of virtual and immersive environments within the field of cultural heritage and evaluates their potential as tools in interpretative archaeological processes. Based on widely available game technology, two applications are presented, supporting real-time interaction and collaborative work within a single modeled space.
1 Introduction
Incorporating 3D data capturing tools has become common practice in archaeological work worldwide, making archaeological data more available for post-processing and presentation, but also much more voluminous than ever before. The array of data formats, the variety of processing tools, and the range of devices for presenting digital and digitized data have likewise grown much wider. The challenge modern archaeologists therefore face is setting proper standards for data-set manipulation, sophisticated visualization, and flexible data application. To meet rising expectations, cooperation with other researchers and specialists in interdisciplinary frameworks is gathering momentum. This underlines the need to articulate the imperatives for engaging in archaeological research and to carefully employ techniques relevant to the chosen archaeological analyses and work methodologies. Increased incorporation of advanced computer and game technologies not only expands an archaeologist’s toolkit, but also encourages re-thinking work procedures and assumptions. In particular, virtual environments (VE) and virtual reality (VR) can be utilized to experiment with new approaches to interpreting archaeological assets and to provide new means, for the public as well as scholars, to re-engage with cultural heritage knowledge. Contrary to the hyper-realistic 3D simulations associated with popular culture, for scientific use it is essential to maintain the correct relations between the different data-sets and varying resolutions involved in assembling archaeological information systems.
A recently developed application evaluates the potential of VR platforms and immersive devices as ground-breaking tools in interpretative archaeological research. The application allows for real-time interaction with a modeled space and its assets, and supports remote collaborative work within a single simulated space. Demonstrated on a game-engine-generated 3D simulation of the Temple of the Storm God in Aleppo, the application explores purposing immersive devices for the field of cultural heritage.
2 Implementing and Manipulating Integrated Data in Virtual Environments
A viable 3D simulation of the Temple of the Storm God and its assets, one that correctly visualizes the various temple components, was constructed through workflows compiled by members of a dedicated interdisciplinary working group [1]. These workflows describe the steps taken to properly convert the various inputs comprising the virtual scene. The datasets used to compile the virtual scene include data captured originally in 3D in the temple compound itself in 2010, available satellite imagery, and 2D legacy data. Prior to compiling the virtual scene, these varied inputs were processed for further manipulation following specific pipelines, in order to meet the project objectives while taking methodological concerns into account [1]. The resulting product is a detailed and multi-scalar simulation of the temple compound with its architectural elements and finely carved inscriptions. The geometrical relations of the objects were accurately maintained despite rigorous file size reduction, achieved by implementing decimation algorithms [1–4]. Using capabilities offered by media design and game technology, a 3D simulation of the temple was embedded in a VE, facilitated with interactive functionality and displayable on desktop as well as immersive devices, such as head mounted displays (HMD).
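The paper does not detail the decimation algorithms used (see [1–4]); as one illustrative family, the sketch below implements a simple vertex-clustering decimation, in which vertices falling into the same grid cell are collapsed onto their centroid. The geometric relations of the overall shape are preserved while the vertex count drops sharply; `cell_size` is a hypothetical tuning parameter.

```python
import numpy as np

def cluster_decimate(vertices, cell_size):
    """Collapse all vertices that fall into the same grid cell onto the
    cell centroid: a crude but fast decimation that preserves overall
    geometric relations while sharply reducing vertex count."""
    keys = np.floor(vertices / cell_size).astype(int)
    # Each unique cell becomes one output vertex (the centroid of its members)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)
    n_cells = inverse.max() + 1
    sums = np.zeros((n_cells, 3))
    counts = np.zeros(n_cells)
    np.add.at(sums, inverse, vertices)
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]

# 10,000 points sampled on a unit sphere reduce to a few hundred representatives
rng = np.random.default_rng(0)
pts = rng.normal(size=(10000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
coarse = cluster_decimate(pts, cell_size=0.2)
```

Production pipelines would use error-driven methods (e.g. quadric edge collapse, as in the remeshing literature cited above), which preserve fine features such as carved inscriptions far better than uniform clustering.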
Several aspects need to be considered when implementing 3D scientific data in VR platforms. Technical issues might be the most obvious ones. From our experience with several game engines, achieving high representational fidelity is much more a matter of the engine’s architecture and the Level of Detail (LoD) it supports than of the quality and accuracy of the input data. As programming a custom game engine is not part of the project goals, and in order to keep costs low, a strategic decision was taken to generate the simulation on available gaming platforms (several cost-free platforms available for non-commercial use were tested; in this paper the Unreal Engine 4 is referred to). Formerly, relying on cost-free development kits restricted the range of available game engines to those offering only limited access to code-editing and asset manipulation. This approach substantially constrains the ability to simulate complex scenes comprising multiple assets with a wide range of LoDs. Fortunately, game technology is a rapidly developing industry. Newer versions of game engines are regularly released to the market, implementing better rendering algorithms, deeper development kits, and higher representational fidelity. Once the groundwork is set and the required 3D data is processed, converting the simulated scene to another game engine or a newer version is relatively simple, without apparent loss of detail or data quality. Nonetheless, this step should be supported with hardware components which correspond to the system’s requirements.
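The LoD mechanism mentioned above typically works by swapping an asset between progressively decimated meshes depending on camera distance; game engines such as Unreal handle this internally per screen size. A minimal distance-based sketch, with hypothetical thresholds in meters, conveys the idea:

```python
def select_lod(distance, thresholds=(5.0, 20.0, 100.0)):
    """Pick a Level of Detail index from camera distance.
    0 = full-resolution mesh (fine chisel marks visible); higher indices
    select progressively decimated meshes; beyond the last threshold only
    a simplified backdrop representation is drawn."""
    for lod, limit in enumerate(thresholds):
        if distance < limit:
            return lod
    return len(thresholds)
```

In a multi-scalar scene like the temple compound, choosing these thresholds per asset class (inscriptions vs. walls vs. terrain) is what keeps both fine detail and overall fidelity within the rendering budget.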
Technical issues are only the backstage to other fundamental concerns regarding content and functionality. The potential affordances of a certain VE are prompted by several factors that determine the user’s experience and perception of the environment. Such factors can be, for example, the scene’s design, smoothness of display, choice of interface, or the type and range of actions allocated to the user [5, 6]. In the applications presented here, navigational tools were implemented, meant to enhance an exploratory affordance and help the user develop a better sense of presence and spatial acquaintance with the modeled environment. It is important to note the difference between a sense of presence, or embodiment - the reaction a user might develop while experiencing a VE - and the notion of spatial immersion, which can be described as the constellation of components and characteristics comprising a certain VE [6]. While the latter depends on system architecture and hardware configuration, the former has to do with the content and actions embedded in the VE. Whether an immersive system will encourage a user to develop a sense of presence depends on the system’s configuration and the representational fidelity of the environment. However, whether a user will be motivated to engage with the VE and the assets in it has much to do with the affordances offered by the VE. An effective immersive system should convince our perception of the real-life properties of the virtual simulation we are experiencing.
The challenges in reconciling the different methodologies of game technology and archaeology become clearer when trying to adjust game engines to produce complex multi-scalar and visually versatile virtual simulations. By their nature, game engines are constructed to deliver a first-person experience and are optimal for presenting vivid representations of a (virtual) world from a subjective point of view. Hence, some game engines offer better rendering when the simulated scenes are spatially limited. Although it is usually possible to depict endless simulated plains, distant objects will often be simplified as a “backdrop” to the main scene.
3 Virtually Re-contextualizing the Temple of the Storm God
The created VE depicts a 3D simulation of the Temple of the Storm God in Aleppo, a compound measuring 42 × 42 m with an interior cella of approximately 27 × 17 m, dated to the 1st half of the 2nd millennium BCE [7]. The model of the temple was created with common 3D processing software [1] and rendered in the Unreal Engine. The resulting simulation retains the temple’s floor plan at several scales while maintaining accurate geometric relations of the architectural elements (Fig. 1). A key aspect in modeling the temple is the ability to distinguish various spatial levels, from very fine chisel marks and hieroglyphic inscriptions to remote mountain ranges.
Two types of applications were tested in the virtual simulation, both well established in the world of game technology and immersive environments. One of the applications uses an HMD. By nature, such devices are designed to generate a highly immersive environment, which can induce very vivid reactions in users and a strong sense of presence, evoking an illusory sense of body ownership [8–10 and references therein].
The other application refers to Multiplayer Computer (Online) Games (MCG/MOG), among the most widely distributed types of popular entertainment. Research on online game addiction [11, 12] indicates the profound immersive quality MOGs can have, even though - contrary to games played using immersive devices such as HMDs - they are for the most part played in front of a computer screen. Research on the educational and sociological aspects of computer and online games [13–15] reveals that such games provide users with valuable opportunities to engage in social interactions. MCGs and MOGs are powerful platforms, motivating participants to fulfill effective learning tasks and encouraging them to explore, communicate, and collaborate.
4 Interactively Exploring the Temple of the Storm God
With user experience in mind, this application applies an Oculus Rift DK2 HMD to the virtual simulation of the temple compound (Fig. 2), emphasizing free movement in the modeled space and direct access to interaction. Enabling these actions aims to facilitate spatial affordances - a high degree of acquaintance with the temple compound and its layout, and consistent assessment of scales and measures from different locations within the compound. However, a learning affordance was also assessed by enabling interaction with archaeological content, in this case the temple’s decorated reliefs.
Nonetheless, several limitations and challenges are inevitable when using immersive devices such as HMDs, relating to the representation of the virtual avatar and to navigation in the VE. Motion sickness (or simulator sickness) is the most prominent challenge to be addressed in this context. A variety of factors could trigger this phenomenon. Some relate to the construction of the system, such as frame rate, navigation speed, or display characteristics. Other factors are rather subjective and depend on individual circumstances, such as age, previous experience with VE systems, or medical condition [16–18]. The most common occurrences of motion sickness are instances of spatial disorientation or general discomfort, triggered by the effects that an immersive experience has on the proprioceptive system. Movement is one of the most obvious triggers of motion sickness: while the physical body of a user is stationary in the real world, (s)he experiences its virtual representation, or virtual avatar, as moving or flying without any corresponding vestibular or kinesthetic cues.
A possible way to reduce some of these disorienting effects is to bridge the gap between the physical gestures of a user and those of the virtual counterpart. A solution implemented in the tested HMD application was to allow the user to control navigation in the modeled space with a hand-held controller (here an Xbox 360 controller). However, in order to do so, the movements of the user first need to be captured and mapped onto the gestures of the avatar in the VE. A basic HMD configuration positions only a single tracking device. The tracking camera provided with the Oculus Rift device, which was used as a default for the temple’s virtual scene (Fig. 2), tracks only the position of the HMD and not that of the full body. However, movement mapping does not need to be perfect; it suffices for gestures of movement or direction to be generally simulated in order for a user to develop a sense of presence and experience embodiment [6].
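With only the headset tracked, the avatar's body orientation has to be approximated rather than measured. One common heuristic (a sketch, not the paper's documented implementation; the 30° deadband is an assumed value) lets the body lag behind the head and turn only once the head yaw exceeds a deadband, which is enough to convey direction without full-body capture:

```python
import math

def approximate_body_yaw(head_yaw_deg, body_yaw_deg, deadband=30.0):
    """Approximate the avatar's body orientation from HMD yaw alone.
    The body stays put while the head turns within +/- deadband degrees,
    then follows just enough to bring the head back inside the deadband."""
    # Signed shortest angular difference, in (-180, 180]
    diff = (head_yaw_deg - body_yaw_deg + 180.0) % 360.0 - 180.0
    if abs(diff) <= deadband:
        return body_yaw_deg
    return (body_yaw_deg + diff - math.copysign(deadband, diff)) % 360.0
```

Such approximations are exactly the "generally simulated" gestures referred to above: coarse, but sufficient for a sense of presence.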
Stepping out of the range of the tracking device pauses the motion tracking and the display fades to black. This pre-defined characteristic could also induce motion sickness. More importantly, this action deeply affects the user’s movement, in the physical world as well as in the virtual space. Extending the tracking volume (of both the user and the avatar) can be achieved rather easily with additional devices/detectors, or by switching to technologies using motion capture systems or magnetic-based tracking (trading off a lightweight single laptop and HMD for a much less portable system). However, even if systems are switched or tracking devices are added, the user will still be literally confined to staying on ground level. Performing a flight, for example in order to observe objects from above, will still in all likelihood induce motion sickness, since it can be simulated virtually but not tracked physically.
The tension between system mobility and limited tracking volume was resolved in the developed application by implementing a virtual dynamic platform on which the avatar is placed. This solution improves the user experience in two ways. On the one hand, the tracking volume is substantially extended: the user is granted an almost unlimited range of movement in the simulated temple, and the risk of stepping out of the scene is reduced. On the other hand, motion sickness is also reduced by overcoming the discrepancy between the physical movement and its simulation in the temple.
The virtual platform, designed to resemble a “flying carpet” (Fig. 3a), is visible to the user and semi-transparent so as not to obstruct the view. The platform’s position in the VE is manipulated by the user through a hand-held controller, allowing the entire modeled temple to be actively explored and objects to be approached from multiple angles.
The platform is structured as a confining box with dynamically changing walls. In a static position, only a subset of the platform is visible at the feet of the user (as seen in Fig. 3a). When the limits of the tracking volume are approached (Fig. 3b), a colored wall appears in front of the user, signaling a need to reposition (Fig. 3c). Moving the dynamic platform re-positions the tracking volume within the virtual space and does not require re-positioning the avatar (essentially negating the need to physically move in the real world). The confining box also becomes visible when the platform is shifted in the virtual space. The user thus has a better spatial reference and can apprehend that it is the platform (or avatar) that is being moved rather than his or her own physical body.
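The platform mechanics described above can be sketched as follows (a simplified 2D ground-plane model under assumed dimensions, not the application's actual code): controller input moves the platform origin, the avatar position is the platform origin plus the tracked head offset, and the warning wall appears when the head offset nears the edge of the tracking volume.

```python
class DynamicPlatform:
    """'Flying carpet' platform: the controller moves the platform (and
    with it the tracking volume) through the virtual scene, while the
    user's physical movement stays inside a small tracked area."""

    def __init__(self, half_extent=1.0, warn_margin=0.25):
        self.origin = [0.0, 0.0]        # platform position in virtual space (m)
        self.half_extent = half_extent  # half-width of the tracked area (m)
        self.warn_margin = warn_margin  # distance at which the wall appears

    def move(self, dx, dy):
        # Controller input shifts the platform, not the physical user
        self.origin[0] += dx
        self.origin[1] += dy

    def wall_visible(self, head_xy):
        # head_xy: tracked head offset relative to the platform center
        return any(abs(c) > self.half_extent - self.warn_margin
                   for c in head_xy)

    def avatar_position(self, head_xy):
        # Avatar in the virtual scene = platform origin + tracked offset
        return (self.origin[0] + head_xy[0], self.origin[1] + head_xy[1])
```

Because the controller moves the platform rather than the avatar directly, the visible carpet gives the vestibular system a stable frame of reference, which is what mitigates the motion sickness discussed earlier.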
5 Remote Collaboration in a Shared Virtual Space
Another tested application builds on the concept of multi-user systems, or multiplayer gaming platforms. Whether played online or implemented in VEs, multi-user systems demonstrate powerful immersive characteristics. Such systems can support basic features such as location, orientation, agency, and communication [9, 19]. These features are capable of inducing a strong sense of embodiment and participation, despite the fundamental difference in their configuration compared to other immersive VEs, such as HMD or DIVE/CAVE systems [13, 20].
Running on the same virtual simulation of the temple, the system is configured to support a first-person point of view, and the scene is displayed on a screen. Similar to the HMD application, the temple compound can be actively explored and the different assets can be approached from up close. The avatar’s movements and parts of the virtual body representation are also visible to the user, encouraging a sense of orientation and direction in the VE. The main difference compared to the user-oriented HMD application is the design of this multi-user system, which aims to enhance producer expertise. To reach that goal an innovative approach was taken, maximizing real-time interaction and knowledge sharing between peers using common interfaces, such as a desktop PC.
The concept of the application’s design follows two key objectives: real-time communication and interactive cooperation, both essential properties for conducting meaningful collaborations in VEs [19, 20]. The platform allows participants to carry out remote work sessions, thus supporting knowledge transfer between professionals and decision makers. Using the application, multiple users can conduct remote work sessions in a faster and more efficient way while sitting in front of their personal computer screens.
During a work session, participants can operate two main types of user-asset interaction. They can open information pop-ups (Fig. 4) detailing the characteristics of a certain object. Additionally, assets in the virtual scene can be edited and manipulated on-the-fly (Fig. 5a), while these and other actions are evaluated in real time through verbal communication.
The temple compound can be navigated and explored simultaneously by multiple participants (Fig. 5b). The properties of existing objects (for example, their spatial position or physical measures) can be edited, and new objects can be added to the scene; however, only one user at a time can manipulate a given object. While an editing action takes place, the object is highlighted so that other peers can be aware of changes made to the scene. Object manipulation in the application is based on an underlying database containing the model and its assets. Establishing a direct link to the underlying database allows assets to be accessed in real time without the need to recompile the entire application after each editing task.
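The one-editor-at-a-time rule with peer-visible highlighting amounts to a per-asset lock. The sketch below illustrates the logic under assumed names (`SharedAsset`, `try_lock` are hypothetical, not identifiers from the application):

```python
class SharedAsset:
    """A scene object that only one participant may edit at a time.
    While locked, the asset is flagged as highlighted so every other
    peer can see that changes are being made."""

    def __init__(self, name):
        self.name = name
        self.editor = None      # user currently holding the edit lock
        self.properties = {}    # e.g. spatial position, physical measures

    def try_lock(self, user):
        if self.editor is None:
            self.editor = user
        return self.editor == user

    @property
    def highlighted(self):
        return self.editor is not None

    def edit(self, user, key, value):
        if self.editor != user:
            raise PermissionError(f"{self.name} is locked by {self.editor}")
        self.properties[key] = value

    def release(self, user):
        if self.editor == user:
            self.editor = None
```

In the real application, each committed edit would additionally be written back through the live database link, so peers see updates without recompiling.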
Enabling advanced asset interaction is particularly important for organizing and carrying out tasks successfully in a shared virtual space. The developed user-asset interactions specifically aim to open up new approaches to re-contextualizing cultural heritage knowledge, for example when testing assumptions regarding architectural reconstructions with both diachronic and synchronic implications. However, much as in real life, peer communication and collaboration are essential to support a sense of co-presence in a VE, and can be decisive factors in carrying out joint tasks successfully [19, 20]. Hence, both written and verbal communication options are offered in the application, as embedded text messaging or via VoIP technology offered by third-party services.
Unlike the HMD application, the multi-user system requires some prerequisites to be met. Participating in a session requires internet access in order to connect to the server hosting the VE and the temple’s simulation. Hardware configuration (in particular graphics card performance) as well as system and network stability (which depend on the available client/server infrastructure) might also present limiting factors to carrying out a successful collaborative session. At the moment, the design of the application requires all participants to load the same model version on their respective PCs in order to share the same simulated space. Designing a peer-to-peer (P2P) configuration and connecting to a shared asset database could mitigate some of these issues and provide better application flexibility and increased stability. Further aspects that can be improved in future versions of the application relate to physical properties and the sense of embodiment in the VE. These, however, depend more on the state of game technology available on the market. Developments in the field are anticipated in the near future with the introduction of advanced tactile user interfaces and newer versions of end-user immersive devices.
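The requirement that all participants load the same model version can be enforced with a simple handshake: each client fingerprints its loaded assets and the server rejects mismatches before the session starts. A minimal sketch (the record format `"name:version"` is an assumption for illustration):

```python
import hashlib

def model_fingerprint(asset_records):
    """Hash the sorted asset records so that clients and the server can
    verify they loaded the same model version before a session starts.
    Sorting makes the fingerprint independent of load order."""
    digest = hashlib.sha256()
    for record in sorted(asset_records):
        digest.update(record.encode("utf-8"))
    return digest.hexdigest()[:12]

# Same assets in a different order yield the same fingerprint
server_fp = model_fingerprint(["cella.obj:v3", "relief_07.obj:v1"])
client_fp = model_fingerprint(["relief_07.obj:v1", "cella.obj:v3"])
```

A shared asset database, as suggested above, would make this check unnecessary by construction, since all peers would resolve assets from one authoritative source.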
6 Closing Remarks
Taking advantage of capabilities offered by media design and game technology, dedicated applications were implemented on a 3D simulation of the Temple of the Storm God in Aleppo. The 3D model of the temple compound and its assets was generated in a game engine, based on data integrated from diverse sources with varying scales. The purpose of the presented applications is to allow real-time interaction with the virtual temple space and its assets. While the HMD application emphasizes user experience, the multi-user system addresses scientists and professionals in the fields of archaeology and cultural heritage. In particular, the multi-user system enables remote collaborative work within a single shared space. Both applications facilitate interactive functionality and are displayable on desktop PCs as well as immersive devices. In general, virtual environments can be very effective tools for inducing a strong sense of presence and embodiment, and for motivating engagement with individual as well as joint tasks. Both applications implement further advanced functions which allow assets to be directly manipulated and edited in real time. The work of an interdisciplinary group of archaeologists, media designers, and computer scientists presented in this paper lays the ground for meaningful remote collaborative work, where peers can act and communicate in the same virtual space. Such capabilities are of particular interest to professionals seeking to engage in re-thinking work procedures as well as re-contextualizing cultural heritage knowledge. With future developments in game technology and newer versions of immersive devices released to the market, weak spots such as motion sickness and tactile feedback are expected to improve.
References
Goren, A., Kohlmeyer, K., Bremer, T., Kai-Browne, A., Bebermeier, W., Öztürk, D., Öztürk, S., Müller, T.: The virtual archaeology project – towards an interactive multi-scalar 3D visualisation in computer game engines. In: Traviglia, A. (ed.) Across Space and Time, Selected Papers from the 41st Computer Applications and Quantitative Methods in Archaeology Conference, Perth, WA, 25–28 March 2013, pp. 386–400. Amsterdam University Press (2015)
Alliez, P., Ucelli, G., Gotsman, C., Attene, M.: Recent advances in remeshing surfaces. In: De Floriani, L., Spagnuolo, M. (eds.) Shape Analysis and Structuring, pp. 53–82. Springer, Heidelberg (2008)
Merlo, A., Dalcò, L., Fantini, F.: Game engine for cultural heritage: new opportunities in the relation between simplified models and database. In: Guidi, G., Addison, A.C. (eds.) 18th International Conference on Virtual Systems and Multimedia (VSMM), Proceedings of the VSMM 2012 Virtual Systems in the Information Society, Milan, Italy, 2–5 September 2012, pp. 623–628. Institute of Electrical and Electronics Engineers (IEEE), Piscataway (2012)
Merlo, A., Sánchez Belenguer, C., Vendrell Vidal, E., Fantini, F., Alipetra, A.: 3D model visualization enhancements in real-time game engines. In: Boehm, J., Remondino, F., Kersten, T., Fuse, T., Gonzalez-Aguilera, D. (eds.) 5th International Workshop 3D-Arch 2013: Virtual Reconstruction and Visualization of Complex Architectures, Trento, Italy, 25–26 February 2013, vol. XL-5/W1, pp. 181–188. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences (ISPRS) (2013)
Dalgarno, B., Lee, M.J.: What are the learning affordances of 3-D virtual environments? Br. J. Educ. Technol. 41(1), 10–32 (2010)
Slater, M.: A note on presence terminology. Presence Connect 3(3), 1–5 (2003)
Kohlmeyer, K.: Der Tempel des Wettergottes von Aleppo. Baugeschichte und Bautyp, räumliche Bezüge, Inventar und bildliche Ausstattung. In: Kamlah, J. (ed.) Temple Building and Temple Cult, pp. 55–78. Harrassowitz Verlag, Wiesbaden (2012)
Ehrsson, H.H.: The experimental induction of out-of-body experiences. Science 317(5841), 1048 (2007)
Kilteni, K., Groten, R., Slater, M.: The sense of embodiment in virtual reality. Presence Teleop. Virtual Environ. 21(4), 373–387 (2012)
Slater, M., Perez-Marcos, D., Ehrsson, H.H., Sanchez-Vives, M.V.: Inducing illusory ownership of a virtual body. Front. Neurosci. 3(2), 214–220 (2009)
Kuss, D.J., Griffiths, M.D.: Internet gaming addiction: a systematic review of empirical research. Int. J. Mental Health Addict. 10(2), 278–296 (2012)
Lee, Z.W.Y., Cheung, C.M.K., Chan, T.K.H.: Massively multiplayer online game addiction: instrument development and validation. Inf. Manag. 52(4), 413–430 (2015)
De Freitas, S.: Learning in Immersive Worlds. Joint Information Systems Committee, London (2006)
Steinkuehler, C.A.: Massively multiplayer online games. Paper Presented at the Proceedings of the 6th International Conference on Learning Sciences: ICLS 2004: Embracing Diversity in The Learning Sciences, University of California, Los Angeles, Santa Monica, CA, 22–26 June 2004 (2004)
Steinkuehler, C.A., Williams, D.: Where everybody knows your (screen) name: online games as “Third Places”. J. Comput. Mediated Commun. 11(4), 885–909 (2006)
Kolasinski, E.M.: Simulator Sickness in Virtual Environments. DTIC, U.S. Army Research Unit (1995)
Moss, J.D., Muth, E.R.: Characteristics of head-mounted displays and their effects on simulator sickness. Hum. Factors: J. Hum. Factors Ergon. Soc. 53(3), 308–319 (2011)
So, R.H.Y., Lo, W.T., Ho, A.T.K.: Effects of navigation speed on motion sickness caused by an immersive virtual environment. Hum. Factors: J. Hum. Factors Ergon. Soc. 43(3), 452–461 (2001)
Mortensen, J., Vinayagamoorthy, V., Slater, M., Steed, A., Lok, B., Whitton, M.: Collaboration in tele-immersive environments. In: Stürzlinger, W., Müller, S. (eds.) Proceedings of the Eighth Eurographics Workshop on Virtual Environments 2002. Eurographics Association, Aire-la-Ville (2002)
Benford, S., Bowers, J., Fahlen, L.E., Greenhalgh, C., Snowdon, D.: User embodiment in collaborative virtual environments. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 242–249. Addison-Wesley Publishing Co. (1995)
Acknowledgements
Former sponsors of the project and related fieldwork are: the IFAF Institute Berlin, the World Monuments Fund, the German Research Foundation, the Gerda Henkel Foundation, TOPOI Cluster of Excellence at the Free University of Berlin.
© 2016 Springer International Publishing AG
Goren, A. et al. (2016). Interacting with Simulated Archaeological Assets. In: Ioannides, M., et al. Digital Heritage. Progress in Cultural Heritage: Documentation, Preservation, and Protection. EuroMed 2016. Lecture Notes in Computer Science(), vol 10058. Springer, Cham. https://doi.org/10.1007/978-3-319-48496-9_23