Introduction

In-depth understanding of facial anatomy is vital for safe and efficient clinical practice [1, 2]. Learning human anatomy through cadaveric dissection has been considered the foundation of medical education for centuries. In recent years, however, anatomy education has reached a juncture: owing to challenges such as cost, time, religious restrictions and the adoption of the integrated medical curriculum, cadaveric dissection-based anatomy teaching is in decline [3]. More recently, scientific advances in imaging and visualisation technology have given an entirely new dimension to anatomical education. These technologies range from magnetic resonance imaging (MRI) and computed tomography (CT) to newer three-dimensional (3D) modalities such as virtual reality (VR), augmented reality (AR) and mixed reality (MR), and have opened up a new world for anatomy education [4].

The concept of learning and knowledge acquisition in humans was shaped by several events peppered throughout history, and the modalities that support knowledge gain have changed with time. The concept of literacy and writing marked the shift from orality, or oral learning [5]. A recent discovery revealed that the earliest record of writing dates back to 3200 B.C. and is believed to have been used for almost three millennia [6]. Several advances have since occurred in the way knowledge is transferred, and the latest development transforming learning and education is digital technology. The electronic and digital era has influenced learning through the modalities of text, graphics, sound and video, and some of the newer advances promise to virtually influence 'reality' [7]. This turn of phrase, and actuality, has been made possible by AR and VR technologies that are redefining how visual knowledge is presented. VR is entirely artificial and allows the user to immerse fully in a virtual environment, whereas in AR virtual objects are overlaid on the real-world environment, enhancing the experience of the real world with digital objects. MR, in turn, combines the virtual and real worlds and gives the user the freedom to interact with both.

Learning anatomy through cadaveric dissection gives a real appreciation of the structures of the human body. An approach that replicates or approximates the visual appreciation of anatomical structures with a degree of fidelity comparable to that achieved by cadaveric dissection could therefore be a fitting alternative. Being able to appreciate the structures of the face in three dimensions, as with cadaveric dissection, is also required to understand the spatial relationships of structures to one another. With the recent worldwide decline of dedicated cadaver-based dissection, anatomy teaching should embrace technology-enhanced learning in addition to traditional methods in order to evolve and address the needs of the twenty-first-century medical curriculum [8]. While conventional learning of anatomy through cadaveric dissection allows three-dimensional (3D) appreciation, several challenges, ranging from preservation and storage to ethical considerations and public perception, have been identified as contributing to inadequate availability of and access to cadavers for students [9].

Augmented reality, mixed reality and virtual reality modalities offer new avenues for visual representation and the transfer of knowledge through images. These technologies work by integrating computer-generated images into the real world [4]. A characteristic feature of these modalities is that they allow the visual appreciation of images or objects in three dimensions, in addition to either augmenting virtual objects in the real environment or creating an alternate reality [10].

The authors envisioned a virtual cadaver, the 'Virtual Face', which has all the visual characteristics of a real cadaver and exists as a three-dimensional object that retains the spatial arrangement of facial structures. The present study also aimed to investigate the effects of facial anatomy training with an immersive MR system on physicians' learning compared with cadaveric dissection.

Materials and Method

Concept

Photogrammetry (PG) requires a series of photographs of the subject taken from multiple angles, which are subsequently rendered into a 3D model [11]. The authors developed a 3D virtual face from a real cadaver using this technique. PG is based on processing images of an object for mapping or for the development of 3D visualisations, animations and simulations. It is a 3D measurement technique that generates a dense, systematic pattern of points, which are processed to form line patterns, 2D images and 3D models. Several studies have described the development of 3D models based on information from laser scans. While both PG and laser surface scanning can produce a 3D object, each with its own advantages and disadvantages, laser scanning is considered inferior to high-quality optical imagery [12,13,14]. PG has been reported to be the solution where distinct textures, that is, the surface quality of the object, are required. Achieving near-perfect resemblance of the virtual object to the real cadaveric model is essential for a high degree of fidelity, as the relationships between contours and surface characteristics help differentiate intricate anatomical structures. Close-range PG is preferred over laser scanning because it generates high-quality images and textures [12]. The construction of the 'Virtual Face' was carried out in two distinct phases: the dissection and scanning phase, and the development of the 3D model.
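To illustrate the principle behind PG, the minimal sketch below recovers a sparse 3D point cloud from two overlapping photographs using OpenCV. It is a hypothetical example, not the pipeline used in this study (reconstruction here was performed in 3DF Zephyr); the file names and camera intrinsics are placeholder assumptions, and dense reconstruction and meshing would follow in dedicated photogrammetry software.

```python
# Minimal two-view photogrammetry sketch with OpenCV (illustrative only).
import cv2
import numpy as np

# Assumed camera intrinsics (focal length and principal point in pixels)
K = np.array([[3500.0, 0.0, 2736.0],
              [0.0, 3500.0, 1824.0],
              [0.0, 0.0, 1.0]])

# Hypothetical overlapping photographs of the specimen
img1 = cv2.imread("view_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_02.jpg", cv2.IMREAD_GRAYSCALE)

# 1. Detect and describe local features in each photograph
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# 2. Match features between the two overlapping views
matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# 3. Estimate the relative camera pose from the matched points
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# 4. Triangulate a sparse 3D point cloud from the two calibrated views
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
points3d = (pts4d[:3] / pts4d[3]).T
print("reconstructed points:", points3d.shape)
```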

Dissection and Scanning

This study was conducted in accordance with the Declaration of Helsinki. An experienced plastic surgeon with an interest in facial anatomy dissected a fresh-frozen cephalus at The Academia, Singapore. The structures that needed attention were determined based on previously published consensus recommendations of an expert panel [3]. The equipment setup was based on the 'light stage' developed by the University of Southern California (USC) Institute for Creative Technologies, which utilises high-speed cameras and controlled lighting. The light stage is an advanced system used for creating photorealistic virtual humans and lighting them convincingly for use in different environments. The objective was to capture data that would help create a three-dimensional model with high graphic detail that retains the visual characteristics of the cadaveric model. To this end, we set up a stage consisting of diffuse lighting equipment and high-speed cameras.

A designated area, the 'stage', was set up to perform the scans of the cadaveric head (Fig. 1). An evenly distributed lighting system was used to avoid the appearance of shadows on the object from any angle while scanning. Additional lights illuminated the stage every time a scan was obtained and were synchronised with the flash. The stage was set up a short distance from the dissecting surgeon to maintain the same conditions for every scan and allow the dissector to work freely. The dissection was conducted in predefined steps that were planned according to the structures visible at each juncture. After every predefined step, the cadaveric model was moved to the stage for scanning. A neutral position for the specimen on the stage was defined and marked based on the midpoints of the tragi of the ears and the vertex of the skull (Fig. 2). These landmarks were not included in the scheme of the dissection; they were chosen as reference points to maintain a neutral position of the cadaveric specimen every time it was brought back onto the stage after dissection. The dissection was also planned such that only one hemiface was dissected to expose the underlying structures while the other was left intact. This approach was chosen to allow a comparison between the two sides based on surface landmarks and to convey the location and depth of different structures, enabling users to understand the clinical relevance of the structures underlying a given location on the face.

Fig. 1 Stage setup

Fig. 2 Surface markings

Scanning of the cephalus was performed using Canon EOS 7D Mark II cameras, which shoot at about ten frames per second and at a resolution that can discern detail on the order of a tenth of a millimetre. Fourteen cameras were used in setting up the stage. They were positioned such that the dissected hemiface was captured from ten different angles and the non-dissected hemiface from four angles. Another camera was used to capture the entire model from at least twenty different angles. Since the subject of the scan was stationary and was repositioned after every stage of dissection, it was essential to ensure that the focus of every camera lens was checked and adjusted for maximum clarity. The cameras were checked before each scan and triggered manually.
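As a rough illustration of why such a resolution is plausible (the working distance and lens below are assumptions made for the sake of the example, not part of the published setup), the ground sampling distance (GSD) can be estimated from the sensor pixel pitch $p$ (about 4.1 µm for this camera, i.e. a 22.4 mm sensor width over 5472 pixels), the object distance $d$ and the focal length $f$:

$$\mathrm{GSD} = \frac{p\,d}{f} \approx \frac{4.1\ \mu\mathrm{m} \times 500\ \mathrm{mm}}{100\ \mathrm{mm}} \approx 0.02\ \mathrm{mm},$$

so sub-0.1 mm detail is achievable at close range with an appropriately long lens.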

Development of the 3D Model ‘Virtual Face’

The raw pictures were converted into unidentifiable virtual 3D models using 3DF Zephyr 3.503. Retopology of the model, development of partial segments, texturing and final rendering of the 3D model were performed using Autodesk Maya 2018, Adobe Photoshop CC, Blender 2.78 and GIMP 2.8.22. The initial 3D model was intricately detailed, with a dense mesh of several million polygons (Fig. 3a, b, c). However, the software planned for deployment required a less complicated mesh structure, which necessitated rebuilding the 3D model while maintaining the shape and volume of the original. This process, referred to as retopology, was performed on all scans to reduce the number of polygons in the mesh while maintaining the volumetric differences between scans. Maintaining these volumetric differences is critical to ensure that successive scans show a progressive reduction in volume as the dissection progressed deeper into the tissue. A crucial requirement for the 3D model was a high degree of fidelity between its surfaces and those of the real anatomical structures. To achieve this, high-frequency detail, surface texture and colour information were derived from the images of every scan and rendered in high definition as textures for the 3D model of each scan (Fig. 4a, b, c, d).
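For readers who wish to experiment with polygon reduction, the sketch below uses automatic quadric decimation in Open3D. This is not the manual retopology workflow used in Maya and Blender for the present model, but it illustrates the same goal: bringing a dense photogrammetry mesh down toward a fixed polygon budget while checking that the enclosed volume is roughly preserved. The file names and the 10,000-triangle target are illustrative placeholders.

```python
# Illustrative sketch of polygon reduction with Open3D (not the authors' workflow).
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("scan_step_03.obj")  # hypothetical dense scan
mesh.compute_vertex_normals()
print(f"input triangles: {len(mesh.triangles)}")

# Decimate toward the target polygon budget
low_poly = mesh.simplify_quadric_decimation(target_number_of_triangles=10000)
low_poly.compute_vertex_normals()
print(f"output triangles: {len(low_poly.triangles)}")

# Sanity-check volume preservation (only meaningful for watertight meshes)
if mesh.is_watertight() and low_poly.is_watertight():
    change = abs(mesh.get_volume() - low_poly.get_volume()) / mesh.get_volume()
    print(f"relative volume change: {change:.2%}")

o3d.io.write_triangle_mesh("scan_step_03_lowpoly.obj", low_poly)
```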

Fig. 3 (a, b, c) 3D mesh of the cadaveric model

Fig. 4 (a, b, c, d) High-frequency detail, surface texture and colour information were derived from the images of every scan and rendered in high definition as textures for the 3D models of the face

Validation

The authors developed a questionnaire to assess face and content validity, as a literature search did not yield a validated method for evaluating facial anatomy simulations. The questionnaire contained three elements. The first tested validity (content and face) and overall practicality on 5-point Likert scales (5 = strongly agree, 3 = neutral, 1 = strongly disagree). The second asked the experts to compare the 'Virtual Face' with conventional cadaver-based teaching and judge its relative advantages. The third component requested qualitative feedback, with questions about the perceived benefits, drawbacks and other possible applications of the current simulation model.

Twelve experts who are actively involved in teaching facial anatomy were consulted to complete the evaluations. All had over ten years of experience in teaching facial anatomy and practise as either plastic surgeons or dermatologists.

Statistical Analysis

The statistical analysis was undertaken using IBM SPSS version 25 (SPSS Inc., Chicago, IL, USA). The level of significance was set at p < 0.05.

Inter-rater reliability was assessed by computing the intraclass correlation coefficient (ICC) between the ratings of the experts described above.
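As an illustrative aside (the analysis in this study was performed in SPSS), an ICC of this kind can also be computed in Python with the pingouin package. The data frame below contains hypothetical expert ratings in long format, not the study data.

```python
# Illustrative ICC computation with pingouin (hypothetical ratings, not study data).
import pandas as pd
import pingouin as pg

ratings = pd.DataFrame({
    "item":   ["Q1", "Q2", "Q3", "Q4"] * 3,              # questionnaire items (targets)
    "expert": ["E1"] * 4 + ["E2"] * 4 + ["E3"] * 4,       # raters
    "score":  [5, 4, 5, 4, 4, 4, 5, 4, 5, 5, 5, 4],       # Likert ratings (1-5)
})

icc = pg.intraclass_corr(data=ratings, targets="item",
                         raters="expert", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])
```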

Results

Twelve experts (8 plastic surgeons and 4 dermatologists) participated in the validation. There were 8 males and 4 females, with a mean of 14.1 years of practice (range, 12–18 years).

Face Validity

The degree to which the simulator appeared practical was measured on a 5-point Likert scale from 1 (strongly disagree) to 5 (strongly agree). The mean scores for both domains were 4.5, falling between 'agree' and 'strongly agree', indicating an adequate practical portrayal of the anatomy in question. The domains assessed were (1) realistic appearance of the facial anatomical landmarks and (2) appearance of the blood vessels, nerves and ligaments. The mean ICC values between the experts were 0.930 and 0.989 for the two domains, respectively, which is considered acceptable agreement (Table 1).

Table 1 Face and content validity of the ‘Virtual Face’

Content Validity

The same Likert scale was used across two domains: (1) evaluation of the facial anatomy and (2) sensitivity to appropriate variations. The mean scores were again calculated for both domains, and the mean ICC values between the experts were 0.910 and 0.922, respectively (Table 1).

Qualitative Feedback

The qualitative portion of the study offered some helpful suggestions. The most common changes suggested were:

  1. Haptic feedback would make it more realistic.

  2. More emphasis should be given to the importance of the facial vasculature in relation to the use of dermal implants.

  3. More anatomical variations should be included.

Discussion

The 3D model of the face is intended for use in the development of mixed reality, augmented reality or computer-based applications through which learners can conduct virtual dissections of a realistic cadaveric face. These three types of application have different advantages and disadvantages, and the 3D face model can be evaluated objectively for use in anatomy education. Mixed reality and augmented reality applications allow the model to be integrated into head-mounted displays or other portable devices, but they have limitations related to the density of polygons in the models and the quality of the surface textures. Mixed reality applications allow users to interact with the virtual object in their environment; however, since the technology is nascent and still evolving, the degree of fidelity achievable is restricted by the permissible density of the mesh, which also affects the clarity of the textures used on the 3D models. Augmented reality applications are similar to mixed reality applications in that the virtual model is placed in the user's environment, but interaction with the model is limited; on the other hand, the resolution of the virtual model and the detail achievable are better than in mixed reality applications [15, 16]. A computer-based application can use a model with a denser mesh and higher-resolution textures than either of the other applications (AR/MR) owing to superior processing power.

For centuries, anatomy teaching and learning have been anchored in cadaveric dissection [1, 17]. The process of dissection allows students to appreciate a three-dimensional view of structures and helps develop a sense of tactility and an appreciation of structural variation that cannot be replicated in an anatomy atlas or textbook [8, 18]. Experience suggests that students are struck by the differences between the images in atlases and textbooks and the cadaveric specimens themselves [18]. Most clinicians and students feel that dissection also enhances reverence towards the human body [1]. Moreover, cadaveric dissection is a way of familiarising medical students with the ultimate truth of death and their role as physicians; however, many students feel genuine aversion and distress during dissection.

The modalities available for education continue to evolve in tandem with advances in different disciplines of science. Advances in optical and visual technology have been applied in several ways in medical education and training [19]. These technologies have opened up exciting possibilities, particularly for the learning of human anatomy. They have the potential to add a new dimension to conventional methods of learning anatomy that rely on two-dimensional illustrations and to usher in a new paradigm in learning from cadaveric dissection.

The authors developed a three-dimensional virtual cadaveric face model that possesses all the visual anatomical characteristics of a real cadaver using a close-range PG technique. The 3D virtual cadaver is intended for use in augmented reality, mixed reality and computer-based applications that will serve as tools for learning anatomy. The model can be dissected according to a predefined schema and allows the visualisation of high-resolution tissue-level imagery at every level of dissection. The virtual cadaveric model can be used in various visual and imaging applications for learning and teaching the anatomy of the face. The technique for developing the model is reproducible, can be used to create several 3D virtual cadaver specimens and allows the capture of a range of anatomical variations. Applications that use the 3D virtual cadaver can offer an experience of dissection untethered from the usual limitations of facility and cadaver availability, and the virtual specimens can be used repeatedly.

While several 3D resources such as BioDigital Human, Visible Body Human Anatomy Atlas and Primal's 3D Real-time Human Anatomy also allow the study of the anatomy of the face [20,21,22], the key differentiator of the present 3D model is its appearance, which closely resembles the real cadaver with detailed tissue disposition. By comparison, the 3D learning tools mentioned above are illustrations rendered by artists and may not look like real tissue. While such illustrations depict normal structural anatomy, they fail to capture the range of variability in structures, including the origins and insertions of muscles, the locations of blood vessels and nerves, and their patterns of distribution [23].

Limitations

The use of the 3D model in mixed reality applications is limited by the maximum density of the mesh that forms the three-dimensional structure. It is recommended that 3D objects contain no more than 10,000 polygons for use with Microsoft HoloLens, a mixed reality visualisation device. Further limitations exist in relation to the resolution of the textures and the application of the specular maps that define shiny surfaces in the model [24]. This limits the level of detail that can be achieved in small structures. Moreover, the system relies on translucent holograms, which also affects the clarity of the textures used on the 3D models.

Close-range PG generates a 3D model that captures the texture and surface characteristics in detail but fails to capture the volumetric characteristics of the anatomical structures. Consequently, the depiction of volume during the reconstruction of structures in different layers was based on estimates of the distance between surfaces rather than the exact change in volume as tissue was removed at the predetermined levels. While volumetric 3D rendering would allow the actual volume to be captured and preserved as voxels, it requires a significant amount of time, computational power and investment [25].

Another limitation of the approach arises from the position of the model on the stage. The cadaveric model was set up in a supine position, propped and held in place by a block of plasticine. While this was the most practical and technically appropriate way to position the cadaveric model on the stage, the authors acknowledge that it may have introduced some artefacts and tissue realignment, as it does not conform to the anatomically neutral position and may be influenced by changes in tissue characteristics, cytoskeletal changes and gravitational effects. These changes were approximated to the estimated standard location during the development of the virtual model, and complete congruity between the cadaveric model and the 3D model cannot be claimed.

Conclusion

The development of virtual cadavers presents a new opportunity for learning from cadaveric dissection; however, it does not obviate the need for cadaveric dissection or propose to replace it. The 'Virtual Face' attempts to bring advances in visual and imaging technology to anatomy learning for physicians, thereby facilitating greater accessibility. In the present era, in which anatomy teaching faces many challenges, the proposed model can complement the learning process in the absence of cadavers in graduate, postgraduate and continuing medical education programmes. Cadaveric dissection should remain at the forefront of anatomy teaching wherever possible.