1 Introduction

In recent years, the use of CG and related techniques, e.g. for virtual reconstructions and 3D digital imaging, has become increasingly recognized as having significant potential for the documentation, analysis and presentation (Addison 2000) of all aspects of CH and material culture (Berndt and Carlos 2000), bringing history “alive” and attracting new audiences to museums. The combination of these techniques with new media (the web and computer/video games) and novel interaction techniques has enabled their proliferation in CH, providing an important approach to the preservation and presentation of CH. These technologies have been used by museums, providing new approaches to the curation of artefacts; they have aided archaeologists in the analysis of sites and artefacts; they have been applied in formal and informal education settings; and they have enabled the development of novel presentation formats, such as virtual museums or serious games, for raising public awareness of CH (Anderson et al. 2010). They have also benefited from the development and spread of multi-modal user interfaces and mobile devices, opening up new avenues for the digitization of tangible CH, the authoring and capture of intangible CH, and the representation of CH using augmented reality (AR) and mixed reality (MR), facilitating novel paradigms for interaction with CH.

Serious games and similar interactive virtual environments (VEs) allow audiences to experience tangible and intangible CH—in this context sometimes referred to as Virtual Heritage (VH)—either on-site, often using a kiosk-style presentation system or AR on mobile devices, or on-line. Over the past decade, related MR interfaces have also shown the potential to enhance audiences’ experiences of virtual worlds and VH. Recently it has been demonstrated that the game engines used for interactive VEs can also be used for animation production (Bąk and Wojciechowska 2020) as a more affordable alternative to conventional tools.

Until a few years ago, CG systems for CH were often created with bespoke hardware and software, almost always expensive to acquire and time-consuming to develop. Only fairly recently has there been a trend to replace proprietary systems with off-the-shelf solutions, and once-expensive professional systems are being subsumed by affordable consumer-level systems, reducing up-front costs and greatly simplifying the development of CH applications. In this chapter we first provide an overview of the preservation and presentation of CH, followed by a set of recent case studies illustrating and discussing how affordable off-the-shelf CG systems—developer and artists’ tools from the entertainment industries—can be employed to achieve this.

2 Preserving and Presenting Cultural Heritage Using CG

The potential of CG methods and techniques for the preservation and presentation of CH has been known for decades, but before the advent of consumer-grade high-performance hardware and software their uptake was limited by a lack of tools and visual realism (Addison 2001). Great advances in CG hardware, methods and related techniques, such as recent developments in VR and multi-modal user interfaces, have popularized the use of CG in general and consequently increased its use for CH (Berndt and Carlos 2000; Anderson 2013). This increased use of CG in CH has driven technological, methodological and conceptual developments, ranging from production pipelines that reach from the digitization of CH artefacts to their public presentation (Bruno et al. 2010), to internationally recognized principles concerning the digitization and visualisation of CH as set out in the “London Charter” (Beacham et al. 2009).

There are many CG methods and techniques suitable for application to the preservation and presentation of CH, a detailed taxonomy and discussion of which was presented by Foni et al. (2010). As a first step, most of these CH approaches require the 3D construction or reconstruction of CH objects, which—other than by means of manual modelling or procedural generation (Haegler et al. 2009)—requires some form of 3D acquisition technique (Pavlidis et al. 2007) for digitizing the CH artefact or site. While there are many different methods and techniques to achieve this (Vilbrandt et al. 2004), the best results tend to be achieved when different techniques are combined (Núñez Andrés et al. 2012). Beyond straightforward visualisations, CH applications implemented using interactive VEs, as used in video games, have not only facilitated the virtual (re-)construction of culturally and historically relevant artefacts in serious games for education (Anderson et al. 2010), but have also been used by museums, providing new approaches for the promotion of knowledge transfer, supporting public outreach, and enabling novel presentation formats for raising public awareness of CH. Such VH systems can take the form of physical installations, e.g. kiosk-style systems located in museums, or of web-based on-line applications. Examples of these are “Colonia3D”, a virtual reconstruction of Roman Cologne (Trapp et al. 2012) in the “Romano-Germanic Museum” of Cologne (Germany), and the on-line only “Virtual Viipuri”, a virtual reconstruction of the formerly Finnish city of Vyborg before it was lost to the Soviet Union (Wells 2019).

Interactive VEs and related technologies and techniques are especially interesting for the emerging application area concerned with the preservation, presentation and dissemination of intangible CH—the human and societal aspects of CH—which in recent years has been highlighted by UNESCO (2003). The recording or capturing and subsequent visualisation of intangible CH, as it literally concerns the intangible, presents both conceptual and practical challenges that interactive VEs are ideally suited to address: audiences are immersed in and interact with the VE, allowing them to subjectively experience the heritage artefact. Their employment in the preservation of heritage artefacts such as rituals and stories can be combined with the application of novel interaction methods (Anderson 2013), and MR presentations such as the Egyptian Oracle (Gillam and Jacobson 2015), where a public performance is enhanced with an interactive VE, are another possibility for the presentation of intangible CH.

A large number of CH applications created over the past decades have been developed using proprietary software and hardware systems; however, in many cases the development of such applications can be improved, sped up and/or simplified through the use of appropriate off-the-shelf solutions. In general, among the main benefits of using off-the-shelf hardware and software are often considerable cost savings (Weinberg and Harsham 2009), and this is equally true for any hardware employed in CH applications, starting with the underlying computing platform. For example, in 2000 an interactive visualisation of the Siena Cathedral (Italy) was deployed on a highly specialized 3D graphics platform (SGI ONYX2-InfiniteReality3) (Behr et al. 2001), costing a six-figure sum in USD. Less specialized off-the-shelf hardware, such as a workstation computer with a consumer-grade graphics card, can be just as effective and should be considered for CH applications before a much more expensive specialized solution is used—with the benefit of hindsight, one could argue that higher-end off-the-shelf hardware of the time could have achieved similar results for the Siena Cathedral at a fraction of the cost.

Regarding the creation of interactive VEs and virtual reality systems for CH, Champion (2015) highlighted “infrastructure issues” that result in huge expenses incurred through the development of proprietary systems, which can be much simplified through the use of game engines as used in the entertainment industry. The use of game engines as visualisation infrastructure in projects that do not have entertainment as their primary objective, e.g. in scientific research, is not new (Lewis and Jacobson 2002). In CH contexts, games technology and game engines have—as mentioned above—been used for the creation of CH serious games (Anderson et al. 2010; Mortara et al. 2014). However, the commercial game engines used in popular entertainment games have often been too expensive to be employed for CH purposes, and their use has frequently been limited to the creation of so-called “Mods” (modifications) for existing commercial entertainment games (Kushner 2003), such as the “History Game Canada” (Rockwell and Kee 2011). Recent changes in the licensing terms and costs for game engines, however, have made these software systems considerably more attractive for developers in many different domains, including the preservation and presentation of CH, greatly simplifying the selection of a suitable system infrastructure, for which established methodologies can be employed (Anderson et al. 2013). This can be observed in the development of the Egyptian Oracle mentioned above, itself based on the earlier Virtual Egyptian Temple VH experience (Troche and Jacobson 2010): over several iterations it evolved from a proprietary software system into a Mod of a commercial entertainment game (Unreal Tournament 2004), which was subsequently ported to the low-cost Unity game engine (Jacobson et al. 2019).

3 Case Studies

To demonstrate the effectiveness of using off-the-shelf CG systems for CH, we explore a number of successful case studies employing such visual computing methods, techniques and technologies in CH contexts, i.e. different ways in which CG has been and can be applied to the interpretation of CH objects, aiding their digital and physical preservation as well as their dissemination. These include applications of affordable interactive VE infrastructure (game engines and graphics and visualisation libraries) to support different CH presentation formats, as well as museum exhibits and the digitization of archaeological human remains.

3.1 The “Exercise Smash” Virtual Heritage Experience

VH presentations that allow users to experience the past by directly immersing them in an interactive VE at the time and place of a historical event have the potential to engage audiences in a similar way to computer games. As such, they need to provide an interface and visual quality similar to those of entertainment games (Anderson et al. 2010). Computer game engines, as used in these entertainment games, provide an ideal infrastructure for the interactive VEs implementing VH presentations. In this first case study we present a synthesis of tangible and intangible heritage based on maritime archaeology, implemented using an off-the-shelf game engine and created using affordable 3D content creation programs usually employed in the entertainment industry.

3.1.1 Exercise Smash

“Exercise Smash” was a military exercise conducted in Studland Bay on England’s south coast on the 4th of April 1944 as preparation for the D-Day landings. During this live-fire landing exercise, several amphibious tanks that were launched from landing craft sank before reaching the beach, with the loss of six crew members’ lives. While several of the wrecks were popular targets for sports divers, over time the exact locations of all of the tank wrecks were lost. In 2014, an archaeological search and survey by Maritime Archaeologists from Bournemouth University (BU), UK (Manousos 2014) resulted in their rediscovery, followed by a photogrammetric survey in 2018 in which 3D scans of the tank wrecks were created.

3.1.2 A Snapshot in Time

The aim of the “Exercise Smash” project was to develop a VH experience that provides audiences who may not be able to dive and visit the actual wrecks with a means of experiencing the underwater archaeology that is more engaging than existing “virtual dive trails” (James 2018), which tend to limit their visualisation to little more than an interactive 3D model viewer for 3D scans of the archaeological remains (frequently annotated with additional information), without depicting the surrounding environment. One of the main aims of this project was therefore to provide a more immersive VE that gives “visitors” of the virtual dive trail the impression of viewing the archaeological remains in situ, i.e. under water, within a complete VE that resembles the actual site.

In the “Exercise Smash” VH experience, the archaeological remains are presented in two modes within two fully interactive VEs (Fig. 22.1): firstly, as a snapshot in time that allows audiences to experience the historical event that created the archaeological remains, and secondly as a virtual dive that allows audiences to experience the remains as they exist now (Anderson and Cousins 2019). Specifically, in the first part, audiences take part in the landing exercise, where they have to try to ‘swim’ one of the tanks from a landing craft to the beach. The second part is a virtual diving expedition in which participants can take a boat to the locations of the submerged tank wrecks, dive down to the wrecks and explore them.

Fig. 22.1

The two parts of the VH experience. Left: taking part in the 1944 amphibious landing exercise; Right: exploring the submerged tank wrecks in a present-day virtual dive

To create this VH experience, Unreal Engine 4 (UE4; free for non-commercial projects) was used as the interactive VE infrastructure. The 3D assets (models and textures) resulting from the photogrammetric surveys of the wreck sites were processed, and additional 3D assets were created, using Autodesk Maya and Adobe Photoshop, with additional work on textures performed in Substance Painter. All of these are affordable standard software packages used in game development. An initial version of the VH experience was unveiled to visitors of The Tank Museum in Bovington (Dorset, UK) during the June 2019 “Tankfest” event, where it was well received.

3.2 Creating Visualisations Using Game Engines for the New Forest Heritage Mapping Project

The New Forest is a National Park in the south of England covering 566 square kilometres of pasture and heathland, containing many well-preserved archaeological sites dating from the Neolithic up to the Second World War. In 2011, a Lidar (Light Detection and Ranging) survey of the entire forest was commissioned to produce an accurate ground-level 3D map of the forest, stripped of trees and foliage. This map was examined to search for features in the landscape that may indicate archaeological remains. Before the scheme, 1,000 archaeological sites had been recorded throughout the forest, but after examining the Lidar images, over 3,000 had been identified (New Forest National Park Authority 2019). As part of the effort to engage the public with the heritage of the forest and promote conservation, Games Technology students from BU created interactive virtual reality visualisations, bringing several of the historic sites to life. These environments have been exhibited at various public events, including a four-month exhibition at the New Forest Visitor Centre, Lyndhurst (September 2015–January 2016), that attracted over 30,000 visitors, as well as other shorter events (Shaw et al. 2016; John et al. 2017, 2018).

Visualisations were created for a range of sites covering a wide spread of historic periods, including a Neolithic long barrow, Bronze Age round barrows, Iron Age hillforts, Rockbourne Roman villa, a medieval hunting lodge, Tudor coastal defence forts, Buckler’s Hard Georgian shipbuilding yard, Fritham Victorian gunpowder factory, airfields from the First and Second World Wars, Matley First World War training trenches and Yew Tree Heath Anti-Aircraft Battery. Game engines (Unity and UE4) were used to create the environments in order to make use, at low cost, of their pre-existing functionality and graphical capabilities, which enable the user to explore an environment at human scale from a first-person perspective.

A major consideration for the development of the VEs was the intended platform, ideally including a VR headset and a computer with high graphics capabilities. However, it was not possible to commit expensive hardware to long-running exhibitions, where only iPad tablets with limited storage capacity and restricted processing power were available. It was, however, possible to create flythrough videos of the VEs with full detail and visual effects to display on large TV screens and projections at these events; VR headsets were available only at special events.

The landscapes were the largest features in the VEs and used a large percentage of the available processing resources. There was a trade-off between maximising the size of the landscape and optimising the environment to suit the available resources. Some environment details were lost, but background images could be used to show distant hills, and a fog effect could be applied to reduce the clarity of the horizon and keep the user’s focus on the immediate environment. An exception to this approach was the Beaulieu Second World War airfield environment (Fig. 22.2a), which allowed users to fly and view the terrain from angles at which the landscape’s edges could not be hidden.

Fig. 22.2

Examples of the New Forest virtual visualisations (running left to right, top to bottom); a Beaulieu Airfield, b Hurst Castle, c Buckland Rings, d Rockbourne Roman Villa, e Fritham Gunpowder Factory, f Medieval Hunting Lodge at Telegraph Hill, g Stoney Cross airfield, h East Boldre airfield, i Frankenbury Hillfort

Accurate landscape terrains were generated from heightmaps that were exported from the Lidar data and imported into the game engine. The Lidar survey had a resolution of one sample every 50 cm. UE4 imports such data placing one vertex per metre, so the X and Y axes were scaled by half to obtain the correct dimensions. The height (Z axis) had to be adjusted manually, as UE4 imports height values relative to the minimum and maximum of the range in the heightmap rather than as absolute values. When a map texture showing the height contours is overlaid, it is possible to measure the distance between contours and to adjust the height until the distance between these contours matches the scale.
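The scaling step above can be sketched as a small helper function. The constants assume the convention commonly documented for UE4 landscapes: engine units are centimetres, landscape vertices sit 100 units apart at an X/Y scale of 100, and a Z scale of 100 maps the full 16-bit heightmap range onto 512 m of height. The function name is illustrative, not an engine API.

```javascript
// Sketch: compute UE4 landscape scale values for a Lidar-derived heightmap.
// Assumptions: UE4 units are centimetres; at X/Y scale 100 the engine places
// one landscape vertex per metre; Z scale 100 maps the 16-bit heightmap
// range onto 512 m of total height (the usual UE4 landscape convention).
function landscapeScale(lidarResolutionM, minHeightM, maxHeightM) {
  // One engine vertex per metre vs. one Lidar sample every lidarResolutionM:
  // a 0.5 m Lidar grid therefore needs X and Y scaled by half (50, not 100).
  const xyScale = 100 * lidarResolutionM;
  // Z scale 100 corresponds to 512 m of range, so scale linearly with the
  // actual height range covered by the heightmap.
  const zScale = ((maxHeightM - minHeightM) / 512) * 100;
  return { x: xyScale, y: xyScale, z: zScale };
}

// Example: 0.5 m Lidar grid, terrain heights ranging from 0 m to 64 m.
const s = landscapeScale(0.5, 0, 64);
// s.x === 50, s.y === 50, s.z === 12.5
```

In practice the height range of the exported heightmap may not be known precisely, which is why the project fell back on overlaying contour maps and adjusting the Z scale until measured contour spacing matched.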

Once a heightmap has been imported, the terrain often requires additional clean-up. There may be unwanted terrain features, such as outlines of modern roads and unintended bumps caused by cars or dense bushes. Features such as buildings and taller trees are removed from the heightmap by the Lidar processing software (identified by a sudden change in height values), creating holes in the imported landscape terrain. These unwanted bumps and holes need to be corrected manually using the landscape editing tools. Fixing unwanted features in the terrain can be a long and tedious process, but it only needs to be done in places where the defects are visible in the environment and cannot be covered by meshes or foliage. Other implications for the implementation of landscape textures arise from the choice of target platform: the iPads have a limit on the number of textures that can be applied to landscapes, which affects how different layers are rendered. Single-layer satellite images can be used as textures, but these are unsuitable for viewing from close range.

Different approaches can be taken to creating 3D models for use in the environment. The Hurst Castle environment (Fig. 22.2b) used a single large model for the castle. Similarly, the Buckler’s Hard environment contained one highly detailed ship model surrounded by less detailed buildings. This approach allows the main 3D models and textures to be more complex and natural lighting to be applied; however, the greater the complexity of the main model, the less detail can be added to other objects in the scene, making the environment less believable for users. An alternative approach is to create smaller objects that can be repeated multiple times, such as the roundhouses, wicker fences and palisades in the Rockbourne Villa (Fig. 22.2d) and Iron Age hillfort (Fig. 22.2c) environments. For 3D models that are viewed at a distance, it was more efficient to add detail to the textures and normal maps instead of the 3D geometry. Due to hardware limitations, many effects that make game graphics visually appealing (including foliage, atmospheric fog and post-processing volumes) could not be reproduced accurately by the iPads. For instance, the Medieval Hunting Lodge environment (Fig. 22.2f) contained a large number of trees that could not be rendered on an iPad, and even mid-range game development computers struggled to display the environment at a reasonable frame rate.

An important feature within the VEs was the display of information about the site to the user. When navigation is controlled with mouse and keyboard, it is relatively simple to assign keys or buttons to toggle the display of information on and off (as in the Gunpowder Factory environment—Fig. 22.2e). A keyboard is not easily available for the iPad presentation, and consequently all control has to be provided through the touch screen, using the default navigation controls or on-screen buttons. For the Rockbourne and Neolithic Long Barrow environments, information is displayed on information boards (Fig. 22.2d) that appear when the user overlaps with a box trigger and are hidden when the user leaves the trigger area. It can be difficult to read text written on objects within the environment when the user is oriented at the wrong angle to an information board, and the board itself can potentially obstruct the user’s movement: for example, it was possible for the user to be lifted up by an information board and become stuck inside the Neolithic Barrow. An alternative strategy was to display the information directly on the screen, either above the top of the scene (as in the Gunpowder Factory—Fig. 22.2e—and the Medieval Hunting Lodge—Fig. 22.2f—environments) or covering the scene (as in the East Boldre—Fig. 22.2h—and Stoney Cross—Fig. 22.2g—environments).
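The box-trigger behaviour described above can be sketched engine-agnostically as follows. UE4 provides this through its built-in trigger volumes and overlap events, so the class and method names here are purely illustrative.

```javascript
// Sketch of the box-trigger logic used to show/hide information boards.
// An axis-aligned trigger volume is tested against the player position
// each frame; the board is visible only while the player is inside it.
// (Illustrative names; in UE4 this is handled by trigger volume overlap
// events rather than manual per-frame tests.)
class InfoBoardTrigger {
  constructor(min, max) {
    this.min = min;           // { x, y } corner of the trigger volume
    this.max = max;           // opposite { x, y } corner
    this.visible = false;     // current visibility of the information board
  }
  contains(p) {
    return p.x >= this.min.x && p.x <= this.max.x &&
           p.y >= this.min.y && p.y <= this.max.y;
  }
  update(playerPos) {         // called once per frame with the player position
    this.visible = this.contains(playerPos);
    return this.visible;
  }
}

const trigger = new InfoBoardTrigger({ x: 0, y: 0 }, { x: 10, y: 10 });
trigger.update({ x: 5, y: 5 });   // inside the volume  -> board shown (true)
trigger.update({ x: 20, y: 5 });  // outside the volume -> board hidden (false)
```

Keeping the board as a screen-space overlay rather than a world-space object, as in the Gunpowder Factory environment, avoids the readability and collision problems described above entirely.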

It is important to consider how to help users navigate the environment, especially in light of the conflict between allowing them to freely explore the virtual world and defining a predetermined path around the environment. A total of 20 information boards were hidden throughout the Rockbourne environment; however, it is easy for users to miss the display of important information. An evaluation with 30 participants discovered that while free exploration of the VE was more popular than reading a guide book, it was the readers who absorbed more knowledge (John et al. 2017). In an effort to guide users to areas of interest, the information boards in the Gunpowder Factory environment were left visible at all times. Similarly, floating icons were used to draw attention to areas where information was available in the Medieval Hunting Lodge (Fig. 22.2f) and Stoney Cross Airfield (Fig. 22.2g) environments; these icons were permanently visible in the environment. An objective marker system that displayed diamond shapes was used in the East Boldre environment (Fig. 22.2h) to indicate where areas of interest were located. For the Frankenbury Iron Age Fort environment (Fig. 22.2i), icons were moved from the environment to a permanently displayed overhead mini-map in the bottom left corner of the screen.

The project demonstrated that game engines can be employed successfully as a tool for creating VR environments for use in public engagement activities. There are a number of technical challenges that need to be overcome when using Lidar data for creating landscapes, and different strategies have to be followed for implementing the display of information and other interactive features. Furthermore, the amount of detail and visual effects that can be displayed depends on the intended presentation platform. Other findings of the “New Forest Heritage Mapping Project” are that a presentation using VR headsets rates higher as an immersive experience than one using iPads, but where it is not possible to supervise these activities, iPads can still provide an engaging experience, and visitors did not find an iPad touch screen more difficult to navigate than a keyboard and mouse. There may be differences in the strength of opinion between participants, depending on their previous experience of VR or knowledge of the site being visualised, but overall feedback has been positive.

3.3 Poole Museum—Town Cellars Visualisation

Game engines are not limited to the creation of games or similar artefacts involving interactive VEs; they can also be applied in situations where more traditional solutions are unsuitable, e.g. because of time constraints: the creation of 3D animated films and videos with conventional tools can be very time consuming in terms of 3D rendering (the process of synthesizing animation frames from 3D assets).

In certain situations, the image quality achievable with modern game engines can closely match that of expensive production-rendering systems, making them a viable alternative. An example of a project in which this was successfully done is our third case study, the “Town Cellars Visualisation” project for Poole Museum, a local heritage museum in Poole (Dorset, UK) (Anderson et al. 2018).

This project involved a mixture of architectural and heritage visualisation based on documented archaeological findings as well as photographic references collected on site (using a consumer DSLR—digital single-lens reflex—camera) and manual measurements, resulting in the “town cellars transformation”: a visualisation showing the transition of the Grade I listed town cellars building, which is part of the museum, from its current state to a planned restoration closely matching its original state (Fig. 22.3).

Fig. 22.3

Returning Poole Town Cellars closer to their original state (left to right)

The virtual reconstruction was carried out over a three-week period, mainly within the UE4 game engine environment, with some 3D models created using Maya and texture assets created with Photoshop and Substance Painter. UE4 was then also used to render the animation frames, and the resulting video was edited with Adobe Premiere. The video aims to provide visual aids for stakeholders and funders as well as information about the developments for the general public.

The end results are of a quality comparable to that achievable with common 3D modelling and animation systems employing a built-in or third-party off-line rendering solution, demonstrating both the versatility of modern computer game engines and their suitability for all aspects of cultural heritage visualisation.

3.4 Tidebanan: Visualizing the History of the Stockholm Metro

Transportation is central to urban design. Roads, railways, and metro lines determine a city’s shape and culture. People search for neighbourhoods largely based on accessibility, which predominantly determines property value and area services. This section presents Tidebanan, a project aimed at visualizing the demographic and cultural events in Stockholm during the creation of its metro system.

Tidebanan began as a student project in the Information Visualization course at KTH Royal Institute of Technology, Stockholm. The aim of the project was to provide an interactive visualisation of the metro system across space (the map of Stockholm) and time (the period beginning in 1949, when the system opened its first lines and stations; Fig. 22.4). It is a Geographic Information System (GIS) that includes several layers. The CH explored is the history of the metro system: facts about the design, construction and operation of its 100 stations, images and videos of inaugurations, and historical changes in population density. Users can pan and zoom across a city map and scrub across time (Fig. 22.5). The visualisation shows annual population density since 1950 as a layered choropleth map.

Fig. 22.4

Visitors interacting with Tidebanan, a visualisation of the history of the Stockholm metro system (1949–2016), a permanent exhibit at the Transport Museum of Stockholm

Fig. 22.5

Tidebanan shows data starting from the 1950s. Users navigate the timeline by scrubbing the horizontal bar, pan and zoom the map on the central panel and select information layers with the checkboxes on the left. The right panel presents content about each station

The original student project only had content available online. After a demonstration, the Director of the Transport Museum of Stockholm urged the students to transform the project into an exhibit. The students collaborated with museum staff to curate deeper and novel content: they searched through unpublished museum material and carefully selected, digitized and restored never-before-published photographs and films dating back to 1950 (Fig. 22.6). The students placed the content at the appropriate spatio-temporal locations in the visualisation and provided navigation techniques, including overviewing, zooming, filtering, scrubbing, panning, and clicking for details on points of interest (Shneiderman 2003) (Fig. 22.5).

Fig. 22.6

King Gustaf VI Adolf of Sweden inaugurating T-centralen, 1957, one of the never-before published images digitized from museum archives

The equipment consisted of off-the-shelf hardware: a personal computer, a touchscreen display, and a projector. The touchscreen, computer, cables, projector, and projection screen were mounted on sturdy frames to withstand the extensive usage of museum exhibits; staff jokingly called this “museum-strength” development. The touchscreen was placed horizontally on a tailor-made wooden box to allow one or more visitors to interactively explore the visualisation, while the projector opened the experience to a larger audience. The system used standard touch gesture libraries, supporting one-finger touch and release for clicking, two-finger stretching and pinching for zooming in and out respectively, and one-finger swiping for panning and scrubbing. The software likewise consisted of off-the-shelf open source components: D3.js, JavaScript, HTML 5, jQuery, and CSS. D3 (Data-Driven Documents) is a high-level JavaScript library for interactively visualizing data using web standards; jQuery is a JavaScript library for easily traversing and manipulating trees in the HTML DOM (Document Object Model). The maps, demographic data, and facts about each station were open data sourced from Wikidata and Wikipedia. The media content started as open data in Wikidata and, as stated before, after careful curation it grew to include the never-before-published images and films held in the museum archives.
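The timeline scrubbing described above ultimately reduces to filtering the station data by the scrubbed year and rebinding the result to the map layers (in the exhibit, via D3.js selections). A minimal sketch of that filtering step, with illustrative station records rather than the exhibit's actual dataset:

```javascript
// Sketch: decide which metro stations to draw for the scrubbed year.
// The station records below are illustrative examples, not the museum's
// dataset; in the exhibit the filtered array would be passed to a D3.js
// data join to add or remove station markers on the map layer.
const stations = [
  { name: 'Slussen',         opened: 1950 },
  { name: 'T-centralen',     opened: 1957 },
  { name: 'Kungsträdgården', opened: 1977 },
];

// Return the names of all stations already open in the given year.
function stationsOpenBy(year) {
  return stations.filter(s => s.opened <= year).map(s => s.name);
}

stationsOpenBy(1960); // -> ['Slussen', 'T-centralen']
```

The same year-threshold predicate can drive every layer of the visualisation, so scrubbing the horizontal bar only needs to re-run the filter and rebind the data.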

From this perspective, all the components of the system were either free of charge or very inexpensive. The project’s only costs to the museum were two student internships, the site license and the hardware (combined, approximately 20 thousand euros). For the students, the first compensation was in academic credits and learning; the site license and internships then compensated their efforts beyond the course project. For the course leaders, time spent on the course project was covered by their teaching salaries, while the time spent supporting and researching the work beyond the course project was a contribution financed by open research funding. The rewards include collaborating on a museum exhibit, providing new and engaging experiences to a large audience of museum visitors, increasing the public visibility of the museum and the university as institutions for collaborative innovation, evidence for future external funding, and learning from the on-site longitudinal research of its deployment, which informs the current report, as this work has not been previously published.

The exhibit opened in September 2015 and ran uninterrupted for two years, until the museum closed for relocation in 2017. The museum and exhibit are planned to reopen in 2020. Notably, the system did not crash during that time, nor did it require hardware or software maintenance. While it was on exhibit, we observed and interviewed visitors interacting with the visualisation. The most enthusiastic visitors were children and older adults. The children enjoyed the digital interaction and understood the content and the effects of their actions with relative ease. They learned about the history, planning, and impact of the metro system, and also enjoyed the anecdotal content. Older adults, on the other hand, enjoyed remembering the events depicted in the visualisation; for them, it was a trip through their own history and relationship to the city.

The visualisation includes historical photos and films that had never been publicly seen, as they were held in storage. Each metro station has content associated with it; one example is the 1957 opening of the central station T-centralen, connecting the south and north lines (Fig. 22.6). In the same year, the metro stations also began to be decorated with art.

Each station has images and text describing its history and art. Important events are signalled with exclamation points over the period and place where they happened. The visualisation has two further layers of information: choropleth maps of the population density of Stockholm, divided into its parishes, and of the population change since 1950. These layers, coupled with the temporal slider and the creation of the metro lines, support an informal analysis of the impact the metro system has had on the population density of the city.
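The choropleth layer boils down to classing each parish's population density into a colour band, in the style of D3's quantized scales. The breakpoints and colours below are hypothetical examples chosen for illustration, not the values used in the exhibit:

```javascript
// Minimal choropleth classing, in the style of D3's scaleThreshold /
// scaleQuantize. The density breakpoints (inhabitants per km²) and the
// sequential blue palette are hypothetical, not the exhibit's values.
const breaks = [10, 100, 1000, 5000];
const colours = ['#f7fbff', '#c6dbef', '#6baed6', '#2171b5', '#08306b'];

// Return the fill colour for a parish with the given population density.
function densityColour(density) {
  let i = 0;
  while (i < breaks.length && density >= breaks[i]) i++;
  return colours[i];
}
```

For instance, a parish of 250 inhabitants per km² falls into the middle band and gets `'#6baed6'`; re-evaluating the scale as the temporal slider moves through the years redraws the map for each period.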

3.4.1 Lessons Learned

We conclude this section with a synthesis of the lessons learned while designing the course project and transitioning it into the exhibit. While each museum exhibit is unique, our aim is to provide pointers for future students, educators, researchers, designers and curators for expediting their processes and ensuring a more permanent and engaging experience for visitors.

  1.

    Stay connected: collaborating with local museums and institutions is critical for the serendipitous development of projects like Tidebanan. The course leaders had previously collaborated with the museum director, so it was relatively simple to present the student project and start the collaboration.

  2.

    Give creative freedom: the course leaders allow students to explore and create their own projects. Students are free to choose content and are evaluated on their ability to justify their design choices with solid technical and theoretical frameworks.

  3.

    Do not retain intellectual property: Swedish universities do not make a claim on the intellectual property (IP) of student or research projects. Course leaders chose to not claim IP on the course projects. Thus, all the IP remained with students, allowing for independent negotiations with the museum.

  4.

    Carefully review software licenses: Tidebanan uses free open source licenses. When packaging a software system, it is critical to review what may be sold and what must remain free and open.

  5.

    Distinguish course rewards from project remuneration: students who worked on the course project received academic credit and achieved learning objectives; for the team as a whole, the compensation ends there. Only some members of the team went on to improve the project so that a license could be sold, and only two worked in internships. It is critical to be clear about compensation: if the recompense is not clear, projects like Tidebanan never become exhibits, as students and other stakeholders are not able to reach an agreement.

  6.

    Build for museum strength: successful museum exhibits experience phenomenal usage. Both hardware and software must be robust and durable. All system components must be designed, developed and tested using rigorous engineering practices. Museum staff do not have the resources, time or competence to regularly maintain or repair exhibits. Once an exhibit is down, there is a very high likelihood it will remain down for a long period, particularly for informal projects with thin financing. More damagingly, exhibits that are out of order deface the museum. Exhibits must be built to last. In the words of our museum colleagues, they must have “museum strength”.

  7.

    Let the museum bring out the big guns!: most student projects that reach the level of excellence required for a top grade would be considered merely a curiosity by museum curators. Students lack the competence and content for producing long-lasting experiences that engage museum visitors; more importantly, these goals are beyond course learning objectives. It was only after meticulous collaboration with museum curators that the project reached the level required to become an exhibit. The curators contributed novel and exciting content and subtle storytelling techniques that made all the difference for visitors.

3.5 Virtual Reconstructions of Cranial and Postcranial Fragments Using Photogrammetry

Archaeological human remains, most often encountered in a skeletal state, are a fundamental element of the archaeological record and as such constitute a unique repository of information concerning the individuals and/or group they represent. As part of teaching collections, they perform an essential role in medical and anatomical education; they also contribute to museum displays and general education, and they form an important focus for public engagement. This case study involved the digitization of skull fragments from a single individual found in the "Crusaders' pit" (Haber et al. 2019), a 13th-century mass burial of Crusaders killed in battle that was recovered during archaeological research excavations in Sidon (Lebanon) between 2009 and 2010.

Skeletal remains are often fragile and individual in nature (human skeletal remains derived from archaeological excavations are often recovered in a fragmented state), presenting a finite physical resource requiring careful management and conservation (Caffell et al. 2001). Preserving such remains and fragments, which retain evidence of identity, trauma and pathological processes, is an important on-going concern, as these are typically the elements that garner most interest and consequently are subject to more handling and greater risk of damage and loss. 3D imaging techniques represent important opportunities to create detailed, objective records of such skeletal remains of which, ultimately, only dust will remain.

Within bioanthropology, digital 3D reconstructions have tended to employ computed tomography and laser scanning for data acquisition; however, Katz and Friess (2014) reported minimal deviation between models created by surface laser scanning and by 3D photogrammetry, concluding that the latter afforded reduced costs and greater portability. Our case study formed part of a pilot study to demonstrate how fragmented human skeletal material could be reconstructed to show the nature of skeletal injuries and their location/distribution in a format more readily accessible than traditional 2D images and other formats. Specifically, the study aimed to evaluate the validity and potential usefulness of digital 3D reconstructions of bone fragments that combine digital photogrammetric techniques with the standard manual 3D asset creation techniques more commonly used in the feature film visual effects industry, in order to virtually reconstruct partial cranial and postcranial bone fragments suitable both for public presentation and for academic researchers intending to analyse and interpret these fragments.

For the academic researcher, a useful 3D model requires a high resolution of at least 1 mm, if not 0.5 mm. A lay audience, e.g. a member of the general public at a museum display with no in-depth knowledge or experience, will have quite different needs. In the latter case, a reduced resolution is likely to be sufficient for highlighting major changes such as gross pathological changes or severe trauma. Specific to the two skull fragments that were modelled and refitted virtually here, the cranial remains show multiple sharp force trauma to the head of one individual. A single heavy blade wound has partially penetrated the posterior left parietal bone, whilst a second heavy blade cut has fully penetrated the inferior posterior aspect of the right side of the head, involving multiple bones. Together, these lesions represent evidence for perimortem trauma from interpersonal or inter-group violence.

3.5.1 Model Acquisition and Reconstruction

The first step in the process of creating the virtual version of the cranial fragment is taking a number of photographs of the fragment and processing them with appropriate photogrammetry software. There are a number of constraints to take into consideration when photographing an object for photogrammetry, and failure to adhere to these constraints could result in the software being unable to process the images correctly. The first constraint is that the object must not be moved, i.e. it must remain still, in place, with the camera moving around it, so it is not possible to use a stationary camera with the object on a turntable. Consequently, more physical space is required to allow moving the camera (and possibly a tripod) through 360° around the object. The second constraint is that there needs to be visual overlap between each of the photographs, meaning that it is necessary to move the camera in approximately 5° increments around the object.
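The angular-increment constraint can be made concrete with a small back-of-the-envelope calculation: how many shots a full 360° orbit requires, and roughly how much neighbouring frames overlap. The 40° horizontal field of view below is an assumed example value, not a measured property of the camera used in this project.

```javascript
// Estimate shots per orbit ring and approximate frame-to-frame overlap
// for a camera circling the object. fovDegrees (here assumed, e.g. 40°)
// is the camera's horizontal field of view.
function orbitPlan(stepDegrees, fovDegrees) {
  const shots = Math.ceil(360 / stepDegrees);
  // Rough small-object approximation: adjacent frames differ by
  // stepDegrees out of the fovDegrees each frame covers.
  const overlap = Math.max(0, 1 - stepDegrees / fovDegrees);
  return { shots, overlap };
}
```

For example, `orbitPlan(5, 40)` gives `{ shots: 72, overlap: 0.875 }`, i.e. 72 photographs per ring with roughly 87.5% overlap between neighbours, which is why tighter increments quickly multiply the number of photographs required.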

For this project, 40–50 photographs of the skull fragment were taken from different elevations using a Canon EOS 5D Mk. III DSLR camera. The software used was Autodesk Recap Photo; although this can create a fairly accurate mesh from low-resolution photographs, best results are achieved using photographs of as high a resolution as possible. After the photographs had been taken, they were processed to produce a 3D mesh with a basic UV layout and textures applied.

The next step of the virtual reconstruction of the cranial fragment combined the output from the photogrammetry with manual asset creation methods from the feature film visual effects industry. This first required mesh clean-up and the creation of an improved UV layout, for which Maya (using the Maya LT feature set) was used. For best texture resolution and detail, textures were re-projected onto this UV layout from a sample of the source photographs, using the virtual cameras created by Recap Photo (Fig. 22.7a). Mari (by The Foundry) was used for this projection of the photographs onto the UV layout.

Fig. 22.7

Digitizing cranial and postcranial skull fragments; a (left) texture projection onto the 3D skull fragment in Mari; b (right) integration of digitized bones with a generic skull model to contextualize multiple partial cranial fragments

As it is difficult to achieve even lighting on all sides of the subject when photographing from different angles for photogrammetry, there will usually be slight variations in object colour and lighting. Mari was also used for the final steps of the process: cleaning up and balancing the brightness, contrast, and colour of the projection images to match one another. As a benefit, this method produces far higher-quality results than the output of the photogrammetry process alone (Fig. 22.8).

Fig. 22.8

Results from the photogrammetry process alone (left), compared to the combined process (right)
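The brightness- and colour-balancing performed in Mari amounts, in essence, to matching the statistics of one projection image to another. As an illustration of the idea (not of the Mari workflow itself), a simple per-channel mean and standard-deviation transfer looks like this:

```javascript
// Match a channel's brightness/contrast to a reference channel by
// transferring mean and standard deviation. An illustration of the
// statistics behind colour balancing, not the Mari workflow itself.
function mean(xs) { return xs.reduce((a, b) => a + b, 0) / xs.length; }
function std(xs) {
  const m = mean(xs);
  return Math.sqrt(xs.reduce((a, x) => a + (x - m) ** 2, 0) / xs.length);
}

// Recentre and rescale `source` so it has the same mean (brightness)
// and standard deviation (contrast) as `reference`.
function matchChannel(source, reference) {
  const ms = mean(source), ss = std(source);
  const mr = mean(reference), sr = std(reference);
  return source.map(x => (x - ms) * (sr / (ss || 1)) + mr);
}
```

Applying such a transfer per colour channel brings differently lit photographs into rough agreement before the remaining differences are painted out by hand.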

3.5.2 Visualisation Results

To help viewers quickly identify and locate a bone fragment which they might otherwise be unable to relate to their own body, due to limited anatomical knowledge, the resulting skull fragments were integrated with a generic skull model, showing the skull fragments in their correct position (Fig. 22.7b). This is similar to the common practice employed by museum curators for displaying physical bone fragments: the fragments are placed in the appropriate location on a clay model, demonstrating how the part fits on the whole. The results indicate that challenges remain when attempting to digitally reconstruct smaller-scale bones or bone fragments with specific micromorphological characteristics using the 3D photogrammetric methods described. More specifically, it may be extremely difficult to reconstruct accurate models of smaller-scale features, such as fracture surfaces (often including complex trabecular bone) or cortical fracture surface texture, both of which are important factors when interpreting fractures and how they have propagated or been generated. Despite these limitations, 3D models such as those created using the described processes, employing affordable CG solutions more usually found in the visual effects industry, clearly demonstrate significant potential for use in public engagement activities. The achieved resolution is sufficient for models to clearly represent bone elements and the gross changes and/or severe trauma affecting them.

4 Discussion and Conclusions

The proliferation of CG through new media over the past three decades, e.g. through the use of CG on the World Wide Web, means that CG is ubiquitous and has found its way into many different domains, including CH. There it offers novel approaches to the preservation and presentation of (tangible and intangible) CH, where, in particular, CG applications offer new levels of interaction. While interactive museum exhibits have existed for several decades, the advent of digital technologies provides the potential for fundamentally deeper, richer, more complex and engaging interaction. Ranging from hands-on exploration and discovery to non-linear and non-deterministic storytelling, this creates paradigmatic shifts in active learning, from home, in educational institutions, in museums or at CH sites. This engagement with CH can extend from keyboard and mouse interaction to fully-immersive see-through AR experiences with whole-body interaction. Similarly, such CH applications afford richer and deeper levels of immersion. Studies have correlated immersion with learning, and while well-curated museums, by definition, provide immersive experiences to their visitors, the level of immersive reality and interactive experience afforded by digital technologies does not yet have a clear precedent.

Levels of documentation have also improved far beyond what was considered possible only two to three decades ago. Home photography and film (and later video) have existed for almost a century. The vast pervasiveness of digital tools and devices over recent years has not only led to improvements of existing methods and techniques (e.g. better visual quality), but also opened up new avenues of recording and documenting CH with much greater fidelity than previously possible, allowing, for instance, fully-immersive VR exploration of content.

The same is true for the level of communication facilitated by digital technologies and new media. Museums and CH sites have employed a number of mechanisms to communicate across time and between different kinds of people, such as guest books recording visitors' comments. Digital technologies and new media, however, afford much richer and more nuanced communication platforms. They provide a means through which groups of people across the world can communicate, learn together and share experiences without the necessity of being present in the same space at the same time. These communication mechanisms are unique to the digital infrastructure provided by today's ubiquitous off-the-shelf, mobile-networked and sensor-rich devices.

The majority of the case studies described in this chapter were created with the participation of university students (undergraduate and postgraduate), with most of the development work carried out by those students. The involvement of students in CH projects involving CG is common practice, e.g. students created the majority of the 3D models for the Colonia 3D project mentioned in Sect. 22.2 (Trapp et al. 2012), but it presents a number of challenges (see Sect. 22.3.4.1). For example, the project duration will likely extend beyond the course during which it was created, so students will have to commit their personal time even after their course has concluded. Generated IP is another issue, especially if the project includes commercialisation of the work, as the Swedish university practice of making no claim on IP derived from student or research projects is not ubiquitous. A useful consideration is to keep project briefs involving students fairly open, to allow exploration and experimentation. As digital CH presentations are often created to engage younger audiences, such as the students themselves, giving students the creative freedom to build a CH system that would engage them will most likely benefit the whole project.

A driving factor for the rapid spread of VH applications in recent years has been the considerable improvement of visual quality in real-time CG and interactive techniques. These improvements, themselves driven by major advances in computer games technology, have in turn helped to bring forth the standardisation of methods and techniques, and their subsequent integration with standardised hard- and software systems. This has driven down costs, which, in combination with their widespread acceptance, has resulted in affordable off-the-shelf, consumer-level hard- and software systems.

The intended presentation platform that an interactive CH application is built on directly influences its potential features in terms of graphics capabilities and interaction interfaces. One therefore needs to keep in mind that this presents not only benefits but also potential drawbacks, as CH systems based on off-the-shelf systems will share their limitations. For wider public engagement purposes, whether online, through a mobile app, or within a museum display or educational resource, context is clearly important. Different audiences have differing needs and require varying amounts and/or types of information to interpret digital 3D models; the provision of appropriate contextual information is therefore essential for the success of visualisation as a resource or tool. The application of VH infrastructure and techniques can provide this, and the use of off-the-shelf systems makes the solutions affordable.

To conclude, in this chapter we have presented a set of case studies that demonstrate how off-the-shelf CG systems, such as inexpensive developer and artists’ tools from the entertainment industries and consumer-level hardware, can be used for the creation of CG systems for the preservation and presentation of CH.