
1 Introduction

A visit to a museum is like immersion in an illustrated book, where key concepts have emerged from the textual contents and sprung into vibrant, fully formed shapes. What we read in books will often draw us to a museum, to see in embodied form the ideas and items described, or to experience more of the material culture and cabinet curiosities which provide content and context for our tales. Conversely, the museum will often suggest further reading and follow-up literature to carry forward ideas inspired by and presented in collection displays. This interrelation of the primary 2D and 3D aspects of our communications, culture and collections is perhaps among the earliest and most extensive clues to what we know about our ancient human ancestors and related species. Early humans combined what they crafted with what they collected, and conveyed this through drawing and construction, through various means of story-telling, visual representation, and writing.

The Veholder.org project has been working with institutional groups interested in exploring and addressing some of the challenges of expanding from 2D into 3D imaging, to enable and improve collaborative Augmented Reality (AR) across and between museum collections. The project was introduced at the 3rd International AR and VR Conference held on 23rd February 2017 in Manchester (www.mmu.ac.uk/creativear/conferences/2017-augmented-and-virtual-reality-conference-2017 [June 19, 2018]), as described in “Eye of the Veholder: AR Extending and Blending of Museum Objects and Virtual Collections” (link.springer.com/chapter/10.1007/978-3-319-64027-3_6 [June 19, 2018]). Veholder is a term intended to creatively describe someone who, by virtue of the ability of AR to wed physical and complementary virtual items, becomes a “Virtual Beholder”. Veholder was also formed as an acronym for “Virtual Environment for Holdings and Online Digital Educational Repositories”.

Although some museums may have previously been reluctant to scan and share objects, out of concern that doing so might reduce visits to the museum, the Veholder project represents an AR extension of the idea of special exhibitions and the well-established practice of lending objects from one collection to another. Such special exhibitions regularly increase a museum’s attendance; they also provide the perfect opportunity to scan and share for future agreed use by the already collaborating museums. In a special event for its Members, the Director of the British Museum hosted the Director of the State Hermitage Museum (St. Petersburg) in a discussion about the role of the encyclopaedic (or universal) museum in the 21st century (www.britishmuseum.org/whats_on/events_calendar/event_detail.aspx?eventId=3906 [June 19, 2018]), as well as the future of museums, their roles and options for collaboration. It was noted that one of the Parthenon Sculptures had been lent to the Hermitage for a special exhibition, as part of its 250th anniversary. Although great care was taken during transit to avoid controversy with Greece over the location of the sculptures, such concerns raise questions about combining casting, 3D scanning and printing, and AR imaging options—concerns which the Veholder project aims to help address.

The project has been developing potential collaborations with suitable groups in Cambridge, where there are multiple museums (www.museums.cam.ac.uk [June 19, 2018]) and research collections. A project proposal has been submitted in conjunction with the University of Technology Sydney, for the forthcoming 250th anniversary of Cook’s landing at Botany Bay, proposing 3D images and duplicates of expedition artefacts held in Cambridge and London. There also have been partnership discussions with the Natural History Museum of Denmark (snm.ku.dk/english [June 19, 2018]), University of Copenhagen, and Cambridge’s Museum of Zoology (www.museum.zoo.cam.ac.uk [June 19, 2018]) and Duckworth Collection (www.human-evol.cam.ac.uk/duckworth.html [June 19, 2018])—potentially working together on a pan-primate 3D catalogue.

More recently, we have had initial and promising discussions with a network of developers and implementers of IIIF, the International Image Interoperability Framework (iiif.io [June 19, 2018]), about the interoperability standards they have advanced for the development of digital libraries. We hope that working together, including with the IIIF 3D community group (iiif.io/community/groups/3d [June 19, 2018]), can accelerate the process for a standardised approach to sharing 3D images. This could extend the concept of universal digital library viewers, which they have created and are promulgating, to incorporate and integrate 3D images and, ideally, AR techniques as well.

2 Issues

While the aims of the Veholder.org project include enabling enhanced AR collaboration between museum holdings, archives and collections, the focus has largely been on images, display and visual technologies. For the future and for more immersive experiences, it is important to consider the wider scope of AR developments, which are attempting to engage us via other senses, in particular through smell, touch, and sound. For example, one approach to AR via the sense of smell is Cyrano, by oNotes (onotes.com [June 19, 2018]), which offers programmable scent scenarios that can be shared between units, and which the producers refer to as the “first digital scent player”.

Touch is perhaps even more surprising for AR, and a new prototype tablet called Tanvas (tanvas.co [June 20, 2018]) simulates the feeling of various textures, including choppy, grainy, fine, and wavy (see Fig. 1). Combining the sense of touch with what we are seeing on a screen could be especially powerful for AR, and particularly helpful for interaction with virtual objects, providing a richer experience of the physical characteristics of 3D objects than is available to sight alone.

Fig. 1

Image courtesy of John Biehler

The image (Fig. 1) is of Tanvas, a prototype touchscreen which simulates how things feel, to augment what you see on the screen. Using electrostatic fields to create friction, through touch it can convey textures such as choppy, grainy, fine, and wavy. (johnbiehler.com/2017/01/11/wired-wednesday-favourites-ces-2017 [July 21, 2018])

Smart glasses, like tablets, are being developed with new sensory options for AR, with prototypes available that deliver AR through hearing. In particular, Bose smart glasses (www.theverge.com/2018/3/12/17106688 [June 19, 2018]) look like sporty sunglasses, but can be paired with a smartphone to supply directions and commentary about where you are and, to some extent, what you are seeing. They can also deliver great music, channelling sound into the ears without using in-ear plugs. Other audio equipment companies seem interested in this area, and we can expect combinations of such smart audio with visual technologies for even smarter smart glasses.

Generally, however, there continue to be more developments in the area of sight and displays, including new options for smartphones and smart glasses. And yet, as with other areas of innovation, we face the connected challenge of divergent directions and incompatible systems, as well as a tendency to initially produce technologies focussed almost exclusively on solitary experiences. Thankfully, there are increasing efforts and indications of interest in greater collaboration and commitment to evolving standards.

In the early advances of virtual reality technology in the mid-1800s, with the explosive expansion of stereographs and viewers, what was crucial to their widespread success was the early setting of a standard. In general, any of the many viewers available could be used to view the hundreds of thousands of stereograph cards produced (see Fig. 2). Those inclined could construct their own viewer, as well. Then, as now, a standard ensured that the viewer and cards would work together, no matter their origin. This legacy has endured, and stereographs from the Victorian era still work as designed, and pleasingly well, on modern VR and AR systems.

Fig. 2

Images courtesy of Guild Films/Heiko Noack, Stadtmuseum Berlin

The first image (Fig. 2) shows Sherlock Holmes (Ronald Howard, 1954 TV series, Episode 28) using a stereoscope (archive.org/details/SherlockHolmes1954 [June 19, 2018]). The enormous popularity of stereographs prompted development of options for group sharing. The second image (Fig. 2) is a Kaiserpanorama (www.stadtmuseum.de/ausstellungen/kaiserpanorama [July 22, 2018]), also known as a Fotoplastikon, a Victorian era innovation for communal 3D viewing, a precursor to cinema, and still in selective use.

While smart glasses for AR use continue to evolve, many projects have been developing around Microsoft HoloLens (www.microsoft.com/en-gb/hololens [July 21, 2018]), at the same time that there has been extensive coverage and planning around the well-funded and promising Magic Leap (www.magicleap.com [July 21, 2018]), due for release in 2018. Epson’s Moverio glasses (www.epson.co.uk/moverio [July 21, 2018]) include a third-generation model designed for multi-person use, particularly in museums (BT-350), as well as a newly released version of the multi-person display and frames (BT-35E) which can be used to develop for, or deploy with, other smartphones or computer systems.

Among the concerns expressed by a number of museums about AR equipment and other technologies are questions of affordability, robustness and sustainability. This is why some still prefer to focus on smartphones alone, with the idea that interested visitors can bring their own devices and use a suitably prepared app. Others have been hoping to pair up visitors’ smartphones with the AR equivalent of Google Cardboard (vr.google.com/cardboard [July 21, 2018]). Late 2017 and early 2018 saw some innovators in this area, with two leading examples (see Fig. 3) in the Aryzon AR Headset (www.aryzon.com [June 19, 2018]) and the Haori Mirror (haoritechnology.com/en/col.jsp?id=109 [June 19, 2018])—or Docooler AR Headset, in the UK (www.amazon.co.uk/dp/B07851GG8Q [June 19, 2018]).

Fig. 3

Images courtesy of Shenzhen Haori Technology Co. Ltd./Aryzon B.V.

The first image (Fig. 3) is of the Haori Mirror (marketed in the UK as the Docooler AR Headset), with its Bluetooth controller. The second image (Fig. 3) is the Aryzon headset kit. Each has its own apps, can use a wide range of smartphones for viewing, works with a variety of AR and VR resources, and for AR can use an included target to place and anchor an image in local space. Each retails for under £50.

As a potentially fruitful complement to smart glasses, or smartphones in headset holders or on their own, there are some recent options for transparent touchscreen computer displays which can be built into museum cases. These can achieve some of the AR effects without glasses and are directly shareable with groups, although of course at a fixed location rather than in a portable mode (see Fig. 4).

Fig. 4

Images courtesy of Crystal Display Systems Ltd.

The images (Fig. 4) are from a demonstration of an interactive transparent display, with a computer touchscreen serving as the main window of a cabinet exhibiting a book (youtu.be/OeRpeBchZ0s [June 19, 2018]).

While new VR goggles and AR glasses (and novel displays) continue to emerge, along with new and alluring features, the platforms remain sufficiently diverse and proprietary to make it difficult to develop for the many format and feature differences. There are helpful approaches to these cross-platform challenges, notably the widespread developments using the Unity engine (unity3d.com [July 21, 2018]). Yet there have been understandable concerns about the lack of common ground and of standards, including variations in the use of the terms VR, AR, MR and XR (www.forbes.com/sites/charliefink/2017/10/20/war-of-arvrmrxr-words [June 2018]), which have meant confusion, as well as often higher costs of development, for those who want to publish VR or AR (or related) content across the many available platforms.

In response, there are recent and high-level movements toward open standards, so that development and delivery can be via our web browsers, with Mozilla leading the push for WebXR (blog.mozilla.org/blog/2017/10/20/bringing-mixed-reality-web [July 21, 2018]), built on earlier open standards in order to provide a common programming interface that simplifies development for both AR and VR devices. Mozilla is also providing the Firefox Reality browser (blog.mozilla.org/blog/2018/04/03/mozilla-brings-firefox-augmented-virtual-reality [July 21, 2018]), designed to help ensure these standards are available on stand-alone VR and AR headsets. WebXR is also being supported by Google (www.vrfocus.com/2018/05/google-introduce-webxr-standard-to-chrome [July 21, 2018]) and Amazon (www.zdnet.com/article/aws-sumerian-a-bet-that-enterprise-augmented-and-virtual-reality-will-be-browser-based [July 21, 2018]), and the combined commitments should ensure continued traction for these much-needed development and delivery standards.
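To make this common programming interface more concrete, the following is a minimal sketch, assuming a browser with WebXR support and WebXR type declarations available to TypeScript (e.g. the @types/webxr package); the function name and console messages are illustrative rather than part of any Veholder code, and swapping “immersive-ar” for “immersive-vr” would target VR headsets instead.

```typescript
// Minimal sketch: detect AR support and request a WebXR session in the browser.
// Assumes WebXR type declarations are available; the API surface follows the
// public WebXR Device API.

async function startMuseumARSession(): Promise<XRSession | null> {
  // Feature-detect WebXR and immersive AR support before offering AR to visitors.
  if (!navigator.xr || !(await navigator.xr.isSessionSupported("immersive-ar"))) {
    console.log("Immersive AR is not available in this browser/device.");
    return null;
  }

  // Request an AR session; 'hit-test' lets us anchor virtual objects
  // (e.g. a scanned museum artefact) onto real surfaces such as a tabletop.
  const session = await navigator.xr.requestSession("immersive-ar", {
    requiredFeatures: ["hit-test"],
  });

  session.addEventListener("end", () => console.log("AR session ended."));
  return session;
}
```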

3 Procedures

To demonstrate the great potential for AR delivered via smartphones, and to provide initial samples of what AR options for museum collections might be like, some photographic experiments were carried out to situate and display 3D models in selected physical spaces and produce suitably combined images.

The 3D models were selected from among the vast offerings found on Sketchfab (sketchfab.com [June 19, 2018]). The models were displayed using the Sketchfab app in AR mode on a suitable smartphone, with these experiments carried out using a Samsung Galaxy S8. Appropriate physical settings were selected to highlight the AR possibilities.
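Although these experiments used the Sketchfab mobile app’s AR mode directly, a comparable web-based display can be sketched using Sketchfab’s standard iframe embed pattern; the outline below is only illustrative, with a placeholder container element and model UID rather than the specific models used here.

```typescript
// Illustrative sketch (not the procedure used in these experiments): embedding a
// Sketchfab model on a web page by its model UID, using the standard Sketchfab
// iframe embed URL pattern. The UID passed in is a placeholder.

function embedSketchfabModel(container: HTMLElement, modelUid: string): void {
  const iframe = document.createElement("iframe");
  // Standard Sketchfab embed URL: https://sketchfab.com/models/<uid>/embed
  iframe.src = `https://sketchfab.com/models/${modelUid}/embed`;
  iframe.width = "640";
  iframe.height = "480";
  iframe.allow = "autoplay; fullscreen; xr-spatial-tracking";
  container.appendChild(iframe);
}

// Usage (placeholder UID):
// embedSketchfabModel(document.getElementById("viewer")!, "MODEL_UID_HERE");
```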

Using 3D models of scans taken from Cambridge and British Museum collections, selected representatives of the initial outcomes are shown below (see Fig. 5).

Fig. 5

Images courtesy of Ronald Haynes/3D models courtesy of Sketchfab

The first image (Fig. 5) shows a Neanderthal skull model from Cambridge (skfb.ly/6yTt9 [June 19, 2018]), visually scaled and placed on a tabletop to produce a virtual-physical composite as an AR test. The second image (Fig. 5) shows a cuneiform tablet model from Cambridge (skfb.ly/PySr [June 19, 2018]) above the same tabletop. The third image (Fig. 5) is of the same tabletop with an Easter Island monolith model from the British Museum (skfb.ly/6srQY [June 19, 2018]).

4 Results

Following the promising initial samples, the AR photographic experiments progressed to bring together a pair of similar physical and virtual objects, and Sketchfab was again a good source of models, this time from Oxford and further afield (see Fig. 6).

Fig. 6

Images courtesy of Ronald Haynes/3D model courtesy of Sketchfab

The image (Fig. 6) shows an AR physical-virtual composite, where the skull on the right is a 3D model of Australopithecus afarensis (Lucy), from the Oxford Natural History Museum collection (skfb.ly/HyJs [June 19, 2018]). The 3D model was scaled and positioned on the physical shelf, alongside the physical skull on the left, a modern human skull from the Duckworth Collection in Cambridge (www.human-evol.cam.ac.uk/duckworth.html [June 19, 2018]).

The side-by-side AR skull experiments were well received and encouraging for additional testing. The next logical step was to attempt to virtually place a suitable 3D model among the physical objects within an existing museum case. A suitable case and objects were located, and once again Sketchfab was a good source of a 3D model, this time a computer-generated one from an independent project (see Fig. 7).

Fig. 7

Images courtesy of Ronald Haynes/3D model courtesy of Sketchfab

The first image (Fig. 7) shows a display case before any AR experimentation; in the left-hand corner are a large zoetrope and image strips designed by the physicist James Clerk Maxwell, key apparatus for what is believed to be the first use of a moving picture for scientific demonstration. The case is part of the Cavendish Laboratory museum collection in Cambridge (www.phy.cam.ac.uk/outreach/museum [June 19, 2018]). The second and third images include a 3D model of a praxinoscope (skfb.ly/6pQqK [June 19, 2018]), scaled and positioned to the right and looking nearly at home alongside its relative, the zoetrope.

5 Conclusions

Along with the very promising results of the AR image tests noted above, there remain concerns about scaling and image standards. It has been most encouraging to find the great progress which has been made in the digital library and archive world in dealing with the challenges of 2D imaging, including how to address the many difficulties associated with fragmented manuscripts. After meeting developers and community leaders from IIIF, the International Image Interoperability Framework community (iiif.io [June 19, 2018]), it has been very helpful to learn more about their progress and practices. IIIF has defined a set of programming interfaces, based on open web standards and derived from shared use cases, and is also the community that implements the specifications.
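As a small illustration of the kind of interface involved, the IIIF Image API defines a predictable URI pattern for requesting images; the sketch below simply builds such a request URL, with the base URL and identifier as placeholders rather than real collection endpoints.

```typescript
// Minimal sketch of the IIIF Image API URI pattern:
//   {scheme}://{server}/{prefix}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}
// The baseUrl and identifier below are placeholders.

function iiifImageUrl(
  baseUrl: string,    // e.g. "https://images.example.org/iiif" (placeholder)
  identifier: string, // image identifier on that server
  width: number       // requested width in pixels; height scales to match
): string {
  // "full" region, width-constrained size, no rotation, default quality, JPEG output.
  return `${baseUrl}/${encodeURIComponent(identifier)}/full/${width},/0/default.jpg`;
}

// Example: iiifImageUrl("https://images.example.org/iiif", "ms-page-001", 800)
// -> "https://images.example.org/iiif/ms-page-001/full/800,/0/default.jpg"
```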

IIIF has had a particular concern for the difficulties surrounding the sharing of historic manuscripts and other often disrupted 2D texts and images, whether due to fragmentation of the originals, or to otherwise torn, worn or missing pieces, which at times can be reconstructed by virtually putting the pieces back together. For these challenges, they have created standards for interoperability, and introduced the concept of universal viewers, which can present composite images assembled from local sources and remote links to collaborating systems which follow the IIIF standards and so are compatible. The resulting environment ensures digital library patrons and researchers can reunite or creatively assemble more of the world’s disconnected knowledge (see Fig. 8).

The image (Fig. 8) is an illustration of how IIIF can help reunite the fragmented and distributed parts of a manuscript (resources.digirati.com/iiif/an-introduction-to-iiif [June 19, 2018]), where existing pieces are presented in the interoperability framework.

Fig. 8

Image courtesy of Digirati
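As a rough illustration of how such a universal viewer can aggregate content, the sketch below fetches a IIIF Presentation manifest (a simplified version 2.x structure) and lists its canvases, each of which may reference images hosted by a different institution; the manifest URL and the trimmed-down interfaces shown are illustrative placeholders, not part of the IIIF specifications themselves.

```typescript
// Sketch: fetch a IIIF Presentation manifest and list its canvases, which can
// point at image services anywhere on the web. Structure simplified from the
// Presentation API 2.x; the manifest URL is a placeholder.

interface IIIFCanvas {
  "@id": string;
  label?: string;
  images?: { resource: { "@id": string } }[];
}

interface IIIFManifest {
  label?: string;
  sequences?: { canvases: IIIFCanvas[] }[];
}

async function listManifestCanvases(manifestUrl: string): Promise<void> {
  const manifest = (await (await fetch(manifestUrl)).json()) as IIIFManifest;
  const canvases = manifest.sequences?.[0]?.canvases ?? [];
  for (const canvas of canvases) {
    // Each canvas can reference remotely hosted images, which is what lets
    // fragments held in different collections be virtually reunited.
    console.log(canvas.label ?? canvas["@id"], canvas.images?.[0]?.resource["@id"]);
  }
}

// Usage (placeholder URL):
// listManifestCanvases("https://example.org/iiif/manifest.json");
```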

It has also been very heartening to find the IIIF 3D interest group (iiif.io/community/groups/3d [June 19, 2018]) and the open and collaborative approach they are taking to clarify interoperability and other challenges with 3D imaging. There is hope that some of the great success with interoperability and universality in sharing 2D materials, in particular through a viewer which can connect and integrate disparate resources, will help guide a similar process for 3D imaging. It is worth considering ways to bring together digital texts, images, and 3D models in one environment, for instance enabling, with one viewer, the ability to review da Vinci’s notes, to view his illustrations, and to interact with 3D models based on his designs. Similarly, although Darwin’s publications and manuscripts are online, the specimens from the Beagle expedition are divided between museums. The same integration in a universal viewer would benefit these and any other areas where image and text are meaningfully connected with 3D objects, and vice versa.

The option to further test and potentially incorporate feature-rich smartphones, rather than focus solely on one or more proprietary smart glasses models, opens up many more possibilities for the always budget-sensitive museums. The move toward open web standards and the increasing adoption of WebXR will also pave the way for greater collaboration, as well as more flexible and sustainable projects. It is hoped that the work with IIIF will also help advance efforts toward standards in 3D imaging, scaling and interoperability, to simplify sustainable collaborations between institutions.

The Veholder project may be best developed in optional phases, to start testing the technology as soon as possible and to help build the required collaborations, including:

  • Phase I—360-degree live-streaming guided option, with guides to introduce new technology and help blend collection images and concepts.

  • Phase II—introduction of 3D scanned images, with guides to introduce the technologies and collections, clarifying any oddities.

  • Phase III—initial curated, blended special exhibition, with 3D images suitably scanned and scaled for compatibility across collections.

  • Phase IV—development of a larger catalogue of suitably matched 3D images, available for more extensive combinations across collections.

More details of the Veholder project can be found at www.veholder.org.