Introduction

Human anatomical organs are the fundamental resource used by medical students to learn about the structures of the human body. During past centuries, donated bodies were the primary source of anatomical organs for medical schools. Medical trainees were afforded the opportunity to see and feel real organs in dissection classes [1, 2]. Due to the scarcity of donated bodies, the high cost of maintaining cadavers, and ethical concerns [3–5], fewer trainees now have the opportunity to touch real anatomical organs; new educational resources that are easier to obtain and use are therefore urgently needed.

Thanks to progress in medical imaging technology, computed tomography (CT) and magnetic resonance imaging (MRI) have become widely used tools for producing medical images of living subjects. Medical images, volume rendering images, and digital models are then used to display geometric structures and details on computer screens and tablets; the limitation of these resources is that they can only be viewed, not touched [6–8]. Previous studies have shown that lifelike printed models can serve as a useful educational tool to enhance the understanding of the geometric structure and complex spatial characteristics of anatomical organs [5, 9].

Since Levy et al. used selective laser sintering to produce a physical model of the temporal bone [10], three-dimensional printing (3DP) has been promoted as one of the most important tools for producing physical anatomical models. The principle of 3DP is to produce a physical model from a digital model by additive layer manufacturing. Because it reproduces digital models with high accuracy at low cost, 3DP has been widely applied in surgical planning [11–14], implant design [15, 16], education and training [4, 5, 17], and forensic science [18].

Although previous studies have described the process of converting medical images to printed models [3, 4, 7, 18–20], most concentrate on producing models of bones and only a few soft tissue organs, because soft tissue is challenging to segment and reconstruct. Here, we describe a method for converting multiple modalities of medical images into digital resources (volume rendering images and digital models) and printed models for both hard and soft tissue organs. To evaluate the quality of medical image reconstruction, we present both a visual, subjective qualitative assessment and a numerical, objective quantitative assessment. We integrate open source packages and a low-cost desktop 3D printer to reproduce digital and printed models of the cranial base, tooth and dental pulp cavity, the brain, blood vessels of the brain, parts of the heart chambers and vessel lumen, and pituitary tumors. These resources are an accessible and cost-effective way to enhance the understanding of anatomical organs.

Methods

Medical image acquisition

The acquisition of medical images is the first step in producing digital resources and printed models. To enhance the contrast between the regions of interest (ROI) and surrounding structures, and to reduce the complexity and time cost of ROI extraction, the appropriate modality should be chosen. We used CT to reconstruct the cranial base and cone beam computed tomography (CBCT) images to reconstruct the tooth. We used MRI to reconstruct the brain and pituitary tumors, time-of-flight magnetic resonance angiography (TOF-MRA) to reconstruct the blood vessels of the brain, and computed tomography angiography (CTA) to reconstruct the heart chambers and vessel lumen. Medical images were archived in the standard Digital Imaging and Communications in Medicine (DICOM) format.
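As one possible scripted route for this step, the sketch below loads a DICOM series into a single volume using SimpleITK, a Python interface to ITK; the folder name is a placeholder, not a path from this study.

```python
# Minimal sketch: load a DICOM series as one volume with SimpleITK.
# The directory name is illustrative only.
import SimpleITK as sitk

reader = sitk.ImageSeriesReader()
dicom_files = reader.GetGDCMSeriesFileNames("ct_cranial_base/")  # hypothetical folder
reader.SetFileNames(dicom_files)
volume = reader.Execute()

print("size (voxels):", volume.GetSize())
print("spacing (mm): ", volume.GetSpacing())
```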

Fig. 1

Procedure for converting medical images into digital resources and printed models

Visualization of medical images

Volume rendering is an accessible and effective tool for illustrating both the interior and the boundary of volume data with transparency and various colors [6]. Trainees can translate, rotate, and zoom the rendered 3D images to view the details and anatomic relationships between organs and their surrounding structures without segmentation or reconstruction. However, volume rendering results are only image projections produced by simulating the absorption of light along the ray path. The open source Visualization Toolkit (VTK), which integrates a number of volume rendering algorithms, was chosen to perform volume rendering in real time [21]. We provide both 3D texture-based volume rendering and ray tracing tools.
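A minimal sketch of ray-traced volume rendering with VTK's Python bindings is shown below; the DICOM directory and the opacity and color transfer-function values are illustrative assumptions, not the parameters used in this study.

```python
# Sketch of GPU ray-cast volume rendering of a DICOM series with VTK.
import vtk

reader = vtk.vtkDICOMImageReader()
reader.SetDirectoryName("ct_cranial_base/")     # hypothetical folder
reader.Update()

mapper = vtk.vtkGPUVolumeRayCastMapper()
mapper.SetInputConnection(reader.GetOutputPort())

# Map voxel intensities to opacity and color (values chosen for illustration).
opacity = vtk.vtkPiecewiseFunction()
opacity.AddPoint(0, 0.0)
opacity.AddPoint(300, 0.2)
opacity.AddPoint(1500, 0.9)

color = vtk.vtkColorTransferFunction()
color.AddRGBPoint(0, 0.0, 0.0, 0.0)
color.AddRGBPoint(1500, 1.0, 0.9, 0.8)

prop = vtk.vtkVolumeProperty()
prop.SetScalarOpacity(opacity)
prop.SetColor(color)
prop.SetInterpolationTypeToLinear()
prop.ShadeOn()

volume = vtk.vtkVolume()
volume.SetMapper(mapper)
volume.SetProperty(prop)

renderer = vtk.vtkRenderer()
renderer.AddVolume(volume)
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)

window.Render()
interactor.Start()                              # interactive rotate/zoom
```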

Digital and printed model production

The procedure for generating digital models from medical images essentially consists of three steps: medical image segmentation, reconstruction, and mesh processing. Image segmentation is optional, since digital models can also be generated directly by iso-surface extraction; however, this produces some irrelevant triangle meshes and small holes. Mesh processing is therefore indispensable for obtaining a more accurate digital model. The digital model is then archived as a stereolithography (STL) file and imported into a low-cost desktop printer to produce a printed model. A typical sequence of these procedures is shown in Fig. 1. The steps are discussed in detail below.

Image segmentation

The goal of image segmentation is to extract the ROI from the medical images. The low contrast between the ROI and surrounding structures presented a challenge, so the choice of modality and various segmentation algorithms were investigated. We used the open source National Library of Medicine Insight Segmentation and Registration Toolkit (ITK) to develop segmentation tools that integrate thresholding, level sets, watershed, region growing, and other algorithms [21].

We provided several tools for ROI segmentation of various organs. The automatic threshold tool was a fast and effective way to distinguish the ROI in high-contrast images by setting a range of density values; bone, for example, was easily distinguished in CT images. The ROI was extracted by comparing the density value of each pixel with the given threshold.
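A minimal threshold-segmentation sketch using SimpleITK (a simplified Python layer over ITK) is given below; the file names and the Hounsfield-unit range for bone are assumptions for illustration.

```python
# Sketch: extract bone from a CT volume by density thresholding.
import SimpleITK as sitk

image = sitk.ReadImage("head_ct.nii.gz")        # hypothetical CT volume

# Keep voxels whose density lies in an assumed bone range (Hounsfield units).
bone_mask = sitk.BinaryThreshold(image,
                                 lowerThreshold=300.0,
                                 upperThreshold=3000.0,
                                 insideValue=1,
                                 outsideValue=0)

sitk.WriteImage(bone_mask, "bone_mask.nii.gz")
```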

Automatic extraction of soft tissue organs has always been a challenge in medical image segmentation. The watershed algorithm is a classic region-based segmentation method that partitions an image into regions ("basins") by flooding from local minima of the density value until the water level reaches its maximum. Morphological region growing was then used to merge basins according to connectivity and density similarity [22]. Another available tool for soft tissue organ segmentation was the level set approach, a numerical technique for tracking the evolution of interfaces [23]. With this tool, the operator coarsely draws a closed curve, and the fine boundary of the ROI is detected after several iterations.
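The sketch below illustrates this kind of watershed-plus-merging workflow with SimpleITK; the input file, the flooding level, and the seed indices are assumptions, and in practice the seeds would be chosen interactively.

```python
# Sketch: watershed over-segmentation followed by merging of ROI basins.
import SimpleITK as sitk

image = sitk.ReadImage("brain_mri.nii.gz")       # hypothetical MRI volume
gradient = sitk.GradientMagnitude(image)         # edges form the watershed "ridges"

# Flood from local minima; a higher `level` pre-merges shallow basins.
labels = sitk.MorphologicalWatershed(gradient, level=20.0,
                                     markWatershedLine=False,
                                     fullyConnected=True)

# Merge the basins belonging to the ROI by keeping the labels found at seed
# points placed inside the organ (indices are illustrative).
seeds = [(120, 140, 80), (100, 150, 85)]
roi = sitk.Image(labels.GetSize(), sitk.sitkUInt8)
roi.CopyInformation(labels)
for label in {labels[s] for s in seeds}:
    roi = roi | (labels == label)

sitk.WriteImage(roi, "soft_tissue_mask.nii.gz")
```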

Mesh reconstruction

Once the ROI was extracted, mesh reconstruction was performed to generate 3D meshes. The marching cubes algorithm is one of the most common methods of converting voxel data to a series of triangle meshes by extracting iso-surfaces [24]. The algorithm calculates the triangle vertices by linear interpolation between adjacent images and determines the topology according to a lookup table and the density value of each voxel. The original volume data can also be reconstructed without segmentation using marching cubes, in which case each voxel is labeled by comparing its intensity value with a given range [25]. Alternatively, contour-based triangulation reconstructs the surface from the extracted ROI boundaries and determines the topology by connecting adjacent contours on contiguous images. VTK integrates both methods, and the reconstructed models can be archived in STL format.
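A minimal marching-cubes sketch with VTK is shown below; the input mask, iso-value, and output file name are assumptions (a binary mask such as the one produced by the thresholding sketch above would typically be used).

```python
# Sketch: extract an iso-surface from a binary mask and save it as STL.
import vtk

reader = vtk.vtkNIFTIImageReader()
reader.SetFileName("bone_mask.nii.gz")     # hypothetical binary mask volume
reader.Update()

mc = vtk.vtkMarchingCubes()
mc.SetInputConnection(reader.GetOutputPort())
mc.SetValue(0, 0.5)                        # iso-value between background 0 and label 1
mc.Update()

writer = vtk.vtkSTLWriter()
writer.SetInputConnection(mc.GetOutputPort())
writer.SetFileName("bone_model.stl")
writer.Write()
```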

Fig. 2

Digital resources and printed model of the cranial base. a Axial view; b sagittal view; c coronal view; d bone segmentation; e digital skull model; f digital model displayed with medical images in a common coordinate system; g STL file converted to a series of layers using the Makerware software, each layer including supporting structures and the printed model; h printed model of the cranial base after manual removal of supporting structures

Mesh processing

Small holes and self-intersecting triangle meshes occur during reconstruction. In addition, irrelevant triangle meshes are generated when segmentation is not accurate. Interactive mesh processing is an effective solution to these problems. The open source package MeshLab provides mesh editing, hole filling, smoothing, and self-intersection inspection [26]. Even an operator without extensive 3D modeling skills can select and remove irrelevant meshes, fill gaps and holes, and ensure a high-quality digital model.
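MeshLab is used interactively in this workflow; purely as an illustration of the same kind of cleanup in scripted form, the sketch below chains VTK's own cleanup filters (the hole size and smoothing parameters are assumed values, not those used in the study).

```python
# Sketch: scripted mesh cleanup (duplicate removal, hole filling, smoothing).
import vtk

reader = vtk.vtkSTLReader()
reader.SetFileName("bone_model.stl")            # hypothetical input mesh

clean = vtk.vtkCleanPolyData()                  # merge duplicate points, drop degenerate cells
clean.SetInputConnection(reader.GetOutputPort())

fill = vtk.vtkFillHolesFilter()                 # close small holes in the surface
fill.SetInputConnection(clean.GetOutputPort())
fill.SetHoleSize(10.0)                          # assumed maximum hole size

smooth = vtk.vtkSmoothPolyDataFilter()          # light Laplacian smoothing
smooth.SetInputConnection(fill.GetOutputPort())
smooth.SetNumberOfIterations(20)
smooth.SetRelaxationFactor(0.1)

normals = vtk.vtkPolyDataNormals()              # consistent normals for printing
normals.SetInputConnection(smooth.GetOutputPort())

writer = vtk.vtkSTLWriter()
writer.SetInputConnection(normals.GetOutputPort())
writer.SetFileName("bone_model_clean.stl")
writer.Write()
```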

3D printing

Many types of 3DP technique exist to satisfy various printability requirements [27]. Considering the cost of the printing device and printing materials, we used the MakerBot Replicator 2 (MakerBot, New York, USA) with polylactide (PLA) filament. The MakerBot Replicator is a fused deposition modeling (FDM) printer that uses a high-temperature nozzle to heat the PLA filament to a semi-liquid state and deposits it one layer at a time until the model is complete.

Once an accurate digital model had been generated by mesh processing, it was imported into the Makerware software bundled with the MakerBot Replicator. Translation, rotation, and scaling were applied to ensure that the model sat upright on the build plate and did not exceed the printing volume. Makerware automatically slices the digital model into a series of layers and generates supporting structures on the build plate, since PLA must be deposited onto an underlying layer. The layer resolution of the printer must also be set: the finer the resolution, the longer the printing time, so the time cost must be balanced against the accuracy required for each printed model. We set the resolution to 0.2 mm for bones and 0.1 mm for soft tissue organs. Because the supporting structures are tightly attached to both the printed model and the build plate, they usually had to be removed with pliers; caution should be exercised when removing supporting structures from small, delicate printed models.

Fig. 3

Digital resources and printed model of the tooth. a CBCT image; b volume rendering using ray tracing; c digital model of the tooth and the geometric structure between the tooth and the dental pulp cavity; and d printed model of the tooth

Fig. 4

Digital resources and printed model of the brain. a MRI images; b facial volume rendering; c brain volume rendering; d digital model of the brain; e digital model displayed with medical images in a common coordinate system; and f printed model of the brain

Fig. 5

Digital resources and printed model of parts of the heart chambers and vessel lumen. a Heart CTA image; b coarse digital model of the heart chambers and vessel lumen generated by marching cubes with a given intensity range, without accurate segmentation; c fine digital model after mesh processing (removal of irrelevant triangle meshes and hole filling); d digital model displayed with medical images in a common coordinate system; e 3D printer and printed model; f printed model of part of the heart chambers and vessel lumen after manual removal of supporting structures; and g comparison of the digital and printed models

Fig. 6

Digital resources and printed model of the blood vessels. a TOF-MRA image; b blood vessel volume rendering; c digital model of the blood vessels from automatic image segmentation; d alignment of the digital model from automatic segmentation (red) with the digital model from region growing (yellow); e spatial structure between the digital models of the blood vessels and the head; f digital model displayed with medical images in a common coordinate system; and g printed model of the blood vessels

Fig. 7

Digital resources and printed model of the pituitary tumor. a MRI image; b interactive extraction of the pituitary tumor; c manual segmentation result; d medical images; e volume rendering; f digital model of the tumor; g digital model displayed with medical images in a common coordinate system; and h printed model of the pituitary tumor

Table 1 Details of each case

Accuracy evaluation

To assess the accuracy of medical image reconstruction, the present study employed both qualitative and quantitative assessments. Qualitative assessment was accomplished by visual inspection and subjective evaluation: the medical images and the reconstructed model were aligned in a common coordinate system, and users interactively inspected these resources from different viewpoints. Quantitative assessment is objective, and a numerical tool was used to compute and visualize the geometric deviation. Optical scanning enables highly accurate digitization of real objects, and we used a HandySCAN 700 laser scanner (Creaform, Canada) to acquire a reference digital model. The quantitative comparison of dimensional errors was conducted by comparing the reconstructed model with the corresponding scanned model using the open source package CloudCompare, version 2.6.2.
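In this study the comparison was performed in CloudCompare; purely as an illustration of the kind of surface-deviation statistics involved, the sketch below computes nearest-neighbour distances between two already-aligned STL models with trimesh and SciPy (the file names and sample count are assumptions).

```python
# Sketch: dimensional deviation between a reconstructed and a scanned model.
import numpy as np
import trimesh
from scipy.spatial import cKDTree

reconstructed = trimesh.load("skull_from_ct.stl")    # hypothetical reconstructed model
scanned = trimesh.load("skull_laser_scan.stl")       # hypothetical scanned reference

# Sample points on the scanned surface and measure the distance to the nearest
# vertex of the reconstruction (an approximation of point-to-surface distance).
samples = scanned.sample(50000)
tree = cKDTree(reconstructed.vertices)
distances, _ = tree.query(samples)

print(f"mean deviation: {distances.mean():.3f} mm")
print(f"95th percentile: {np.percentile(distances, 95):.3f} mm")
print(f"max deviation:  {distances.max():.3f} mm")
```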

Results

Multiple digital resources (volume rendering images and digital models) and printed models were produced from multiple modalities of medical images of living subjects (Figs. 2, 3, 4, 5, 6, 7). Details of these cases, including the modality, voxel dimensions, physical resolution, and the number of vertices and triangle meshes in each digital model, are given in Table 1. We provide medical images (coronal, sagittal, and transverse planes), volume rendering images, digital models, and printed models to illustrate the geometric structure and the anatomical relationships with surrounding structures.

CT and CBCT images were the best choices for bone segmentation (Figs. 2, 3). The bone was extracted with the threshold tool by setting a given value (Fig. 2d), and the digital model was generated using marching cubes (Fig. 2e). After the digital model was imported into the 3D printer, each layer of the printed model and the corresponding supporting structures were generated by slicing (Fig. 2g). Supporting structures were removed manually with pliers, as shown in Fig. 2h. Since a high level of accuracy was required for the tooth, CBCT was an effective choice: it is a high-resolution imaging modality with widespread application in oral and maxillofacial radiology. Both the tooth and the dental pulp cavity are shown in Fig. 3.

Fig. 8

Quantitative assessment. a CT images and reconstructed skull model; b skull digitization by laser scanning with the HandySCAN 700; c the two models displayed in a common coordinate system, showing pose inconsistency; d the two models after alignment; and e comparative deviation of the two models

The production of digital and printed models of soft tissue organs has always been a challenge in medical training and in clinics due to the low contrast between the ROI and surrounding structures. We have, however, succeeded in producing digital resources and printed models of soft tissue organs from multiple modalities of medical images (Figs. 4, 5, 6, 7).

The texture-based volume rendering algorithm was used to visualize the brain (Fig. 4c). Our previous study reported the use of the watershed and level set algorithm to extract brain tissues from MRI images [28]. The brain digital model was generated after segmentation (Fig. 4d). The medical images and digital model were located in the common coordinate system (Fig. 4e), and Fig. 4f shows the corresponding printed model.

CTA images were chosen (Fig. 5a) to reproduce the digital resources and printed model of parts of the heart chambers and vessel lumen; each voxel was labeled as 0 or 1 by comparing its density value with the given range. Marching cubes was performed to generate a digital model without accurate segmentation (Fig. 5b). The irrelevant triangle meshes were interactively removed using MeshLab, and only the digital model of the heart chambers and vessel lumen was retained (Fig. 5c). Figure 5e, f, g shows the corresponding printed model.

Ray tracing was used to visualize the structure of the blood vessels from TOF-MRA images by setting the transfer function and color mapping (Fig. 6a, b). A digital model was first generated using an automatic segmentation method (Fig. 6c); however, some tiny structures were missing because of incomplete segmentation. To complete the model, another digital model was generated after region growing segmentation (Fig. 6d): we interactively marked seed points on the blood vessel region, the corresponding model was reconstructed, and the irrelevant triangle meshes were removed using MeshLab. The two digital models were then aligned to generate a finer model (Fig. 6e). The spatial relationships between the medical images and the digital model are presented in Fig. 6f. The printed model of the blood vessels is shown in Fig. 6g; its supporting structures had to be removed carefully, as the thin printed model was fragile.

Although manual segmentation is tedious and requires guidance from someone with anatomical knowledge, it is nevertheless the most accurate method of extracting the ROI from low-contrast images. Before interactive segmentation, we adjusted the brightness and contrast of the images by setting the window level and width. The ROI was interactively selected by pressing and dragging the mouse (Fig. 7b). The pituitary tumor was extracted from MRI images (Fig. 7c, d). The texture-based volume rendering, the digital model, and the printed model are shown in Fig. 7e–h.

Qualitative and quantitative assessments were carried out to determine the accuracy of converting medical images to digital models. As shown in Figs. 2f, 4e, 5d, 6f, and 7g, the medical images and the corresponding reconstructed models were located in a common coordinate system, and discrepancies were inspected empirically and subjectively. To quantitatively assess the accuracy of reconstruction, the deviation between the reconstructed model and the scanned model was calculated. The skull of a cadaver was first reconstructed from CT images, giving 950,426 vertices and 1,828,794 triangle meshes (Fig. 8a). The digital model produced by the laser scanner consisted of 1,551,041 vertices and 3,091,750 triangle meshes (Fig. 8b). Because of pose inconsistencies between the two skull models (Fig. 8c), we aligned them in a common coordinate system (Fig. 8d). The geometric deviations are presented in Fig. 8e. Many of the largest deviations (blue and red regions) were located around the teeth, zygoma, orbit, and the lateral bulge of the chin.

Discussion

Medical schools have to cope with an ongoing shortage of donated bodies for education and research, so trainees have few opportunities to feel real organs. Multiple modalities of medical images are used to acquire data from living subjects, and two-dimensional images and three-dimensional resources (volume rendering images and digital models) are then provided to supplement training resources. Volume rendering is a useful approach for illustrating the geometric structure and anatomic relationships between organs and their surrounding structures without image segmentation, but the results can only be viewed, not printed. The procedures for constructing digital models are complicated by the complexity of ROI segmentation; their advantage, however, is that they can be printed as tangible models.

One limitation in the application of 3DP is the availability of digital models [29]. Because bones are more easily distinguished, most previous studies have produced digital resources and printed models of bones (such as the skull and vertebrae) [5, 10, 18, 30–35]. Few soft tissue organs have been used in clinical case studies or surgical training because of the challenge of segmenting them. This study describes a method for converting multiple modalities of medical images into digital resources. The choice of modality is very important, because resolution and contrast affect the quality of reconstruction; bones, for example, are easily distinguished in CT images but difficult to extract from MRI images. Automatic segmentation approaches are used to reduce the complexity and time cost of image segmentation, but tiny structures and details may be lost through incomplete segmentation. The most promising solution is to use multiple segmentation and reconstruction approaches to generate digital models and then align them; the combined model avoids the limitations of any single approach.

During the past ten years, 3DP has proved invaluable in anatomical training and surgical education, because it both avoids certain ethical concerns and provides more realistic educational resources [3, 5, 9, 17, 18, 36, 37]. 3DP is low in cost, high in accuracy, and involves a short production time. Multiple copies of dissected specimens and clinical cases can be produced to scale in any size. Previous studies have shown that 3D printed models are more intuitive than digital models and are significantly more effective at conveying complex 3D spatial relationships [5, 9, 29, 36]. Our study provides multiple printed models of hard and soft tissue organs to better explain the spatial structure of anatomical organs.

The procedure for producing printed models involves image acquisition, image segmentation, reconstruction, mesh processing, and 3DP. The accuracy of printed models depends mainly on producing a high-quality digital model and on the precision of the 3DP process [29]. Fasel et al. suggested that many structures of printed models were incorrect because of inaccuracies in the digital model and the limited layer thickness of 3DP [32]. To assess the accuracy of the digital model, this study presents both qualitative and quantitative assessments. Assessors can easily use the qualitative methodology to detect discrepancies when digital models and medical images are aligned in a common coordinate system: discrepancies between the pixels of a medical image and the corresponding vertices of a digital model can be observed empirically when some structures are not reproduced because of incomplete segmentation. Subjective visual inspection is important, but it is not sufficient.

Quantitative methodology is a powerful tool for computing such deviations numerically. A common numerical approach is to statistically analyze the dimensional errors between a reconstructed model and a real object. Three-dimensional scanning is a highly accurate method of acquiring the geometric structure of a complex object, and dimensional errors can be calculated accurately by comparing the real object with the reconstructed model. Considering the working principle of the laser scanner and the fact that the geometry of soft tissue organs is easily deformed by gravity, this quantitative method is better suited to the assessment of bones.

Several researchers have expressed concern about the accuracy of models printed using 3DP. Most studies reported that anatomical structures can be printed with sufficient accuracy and that there is no significant difference between printed models and real objects [3–5, 9, 17, 38]. Three-dimensional printing will not result in any loss of accuracy provided that the printing resolution of the 3D printer is finer than the slice thickness of the medical images. Silva et al. asserted that the precision of 3DP was acceptable and may be useful in most maxillofacial surgeries [20]. Wu et al. reported that printed spinal models were accurate enough for surgeons [39].

There are still some limitations in the application of 3DP: (a) the printable volume of the 3D printer is limited, so larger digital models (such as the pelvis, femur, and spine) have to be divided into several smaller components, and the complete printed model is produced by joining the printed components together; (b) low-cost FDM printers and PLA materials have been used successfully for education and training, but more lifelike printed models of soft tissue organs, with full color and elasticity, are needed to satisfy growing educational demands, and the printers and materials capable of producing them are still expensive.

Conclusion

This study presents a method for converting medical images into digital resources and printed models using open source packages and a low-cost desktop 3D printer, well within the budget of medical schools and hospitals. The effectiveness of the method has been demonstrated by producing models of the cranial base, tooth, brain, blood vessels of the brain, parts of the heart chambers and vessel lumen, and pituitary tumors. The techniques involved, including volume rendering, reconstruction, and 3DP, are feasible and enable trainees to better understand the spatial structure of anatomical organs.