Introduction

Hypertensive intracerebral hemorrhage (HICH) is an intracerebral bleeding disease caused by hypertension-induced rupture of intracranial capillaries, arteries, or veins. In recent years, the incidence of HICH has risen rapidly, and the age of onset has steadily decreased. HICH affects approximately 2.5 per 10,000 people each year, and approximately 44% of those affected die within a month, underscoring the acuity and danger of the disease [1].

The basic treatment for this disease involves puncturing the dura with a brain puncture needle and piercing the hematoma with a cored 12 F tube to drain the blood. Traditionally, the position and orientation of the skull drill and brain puncture needle are determined manually, which can result in low precision and neurologic sequelae [2].

Computer-aided minimally invasive surgery (MIS) has been adopted in neurosurgery, replacing manual positioning processes. With the assistance of global-location devices (for example, optical trackers), surgeons can perform punctures more precisely, avoiding large surgical traumas and improving outcomes.

Zhang and Chen studied MIS in HICH patients [3] and found that, with navigation via stereotactic positioning on computed tomography (CT) images, 57 patients had outcomes comparable to those achieved with the traditional manual method. Wang investigated minimally invasive soft-channel surgery in hypertensive cerebral hemorrhage patients [4], using data from 85 HICH patients to validate the effectiveness of MIS. Ramanan studied the use of MIS for intracerebral hemorrhage [5], and Muschelli introduced a new method for treating intracerebral hemorrhage with an optical tracker [6], which offered improved outcomes. In summary, MIS has been widely adopted worldwide to treat HICH.

However, although MIS shows great advantages in HICH treatment, some disadvantages are worth discussing. Traditional MIS is not a direct visualization process; the surgeon must frequently switch views between the patient and the screen, which can be time consuming [7]. Additionally, the registration process is cumbersome: fiducial markers must be placed on the patient's head before computed tomography (CT), which can lengthen the wait before surgery. Recently, mixed reality has been introduced into surgery; like augmented reality, mixed reality technology enables users to engage with digital content and interact with holograms in the real world. Teatini presented a novel mixed reality system for orthopedic surgical navigation [8] in which mixed reality gave surgeons a more direct view of the operative field; orthopedic planning was combined with real patient data to make the surgery faster and simpler. Fida presented research on the application of augmented reality and mixed reality in open surgery [9], and Sakai presented novel research on the use of mixed reality in spinal surgery [10], which significantly improved operative outcomes. The potential of mixed reality as an effective tool for MIS is being investigated worldwide; however, there are few reports on its use in HICH patients. By combining mixed reality with MIS for HICH, surgeons could directly observe the position of a hematoma and perform more accurate, less time-consuming punctures intraoperatively, thus compensating for the shortcomings of traditional HICH surgery.

Mixed reality can offer surgeons a different visual perspective, allowing them to locate a hemorrhage directly and easily, and many studies have focused on mixed reality in surgical navigation. Andrews developed a novel registration method for the HoloLens and optical tracking devices to overlay medical datasets onto patient data and track surgical tools in real time intraoperatively [11]. Rae evaluated HoloLens tracking of the infrared markers widely used in precise surgical navigation [12] and showed that the HoloLens and its depth cameras could track traditional surgical tools. Kuzhagaliyev et al. developed a system for preoperative planning and intraoperative needle insertion tracking [13] in which hand-eye calibration was used to find the transformation between the HoloLens and optical trackers; although their research provided a highly accurate method for optical tracker and HoloLens registration, the process is time consuming and complicated. Frantz et al. investigated a skull anatomy overlay system based on Vuforia tracking [14]; because this system relies on marker tracking, registration can be difficult. Additionally, El-Hariri developed an intraoperative ultrasound system to locate bone structures and used an optical tracker to register the 3D model with real phantom data [15]; infrared markers were attached to Vuforia markers, which made determining the transformation between the two groups of markers difficult and reduced the accuracy of the approach.

Limited by the field of view of the HoloLens cameras, the space available for tool tracking and registration is small and unsuitable for HICH patients, who undergo procedures under local anesthesia and move frequently. To solve this problem, an optical tracker can provide a large tracking space while recording patient movement so that adjustments can be made in real time.

In this paper, we demonstrate a novel method for registering the optical tracker and the HoloLens that achieves a registration accuracy within 2 mm and takes less than 20 min, thus extending mixed reality to accurate surgical navigation. We also introduce a multimodal mixed reality surgical navigation system; with intraoperative surgical tool calibration, the system tracks surgical tools in real time, helping surgeons directly determine the insertion position on the head. A real-time movement adjustment process maintains the alignment between the virtual anatomy and the real patient during movement. Phantom experiments and clinical trials involving ten patients were conducted to validate the accuracy and effectiveness of this approach. The remainder of this paper is organized as follows: the "Methods" section describes the theory of the quick registration method, the "Experiments and Results" section describes the phantom and clinical experiments, and the "Discussion" section outlines our future work. An overview of the HICH surgical navigation system is shown in Fig. 1.

Fig. 1 Overview of the HICH surgical navigation system

Methods

Design Requirements and Basic Workflow

In this section, we briefly outline the design requirements and the basic workflow of the mixed reality-based HICH surgical navigation system. The whole system comprises five parts: preoperative preparation, CT image segmentation, space registration, intraoperative surgical tool calibration, and mixed reality alignment. A Microsoft HoloLens was used as the head-mounted see-through device, HICH treatment-planning devices built at Tianjin University were used for preoperative image processing and planning, and visible-light optical trackers made at Tianjin University were used for spatial localization and surgical tool tracking. The basic workflow is shown in Fig. 2 and described below.

Fig. 2 Basic workflow of the navigation system

Step 1: Preoperative preparation. The surgeon used a CT scanner to obtain medical images (CT images) for subsequent processing. After the scan, the patient was typically sent to the operating room. The surgeon then placed optical tracking markers on the surface of the patient's head so that adjustments could be made for the patient's movements in real time. Finally, the surgeon fixed optical reference markers on the surgical bed to serve as a stable reference in case the optical tracker moved.

Step 2: Preoperative image processing and segmentation. First, the skin of the head and the skull were automatically segmented from the CT images. The hemorrhage model was then extracted from the images with a seed-growing algorithm for further processing [16]. The planning system could automatically locate physiological markers, for example, the eye corners on the skin model, for initial registration, and the facial surface was automatically segmented from the CT images for secondary registration. Finally, the surgeon manually performed preoperative planning to determine the best insertion path.

Step 3: Space registration. This process registered the medical (CT) coordinates to the optical tracker coordinates and the optical tracker coordinates to the holographic coordinates for holographic alignment. First, the surgeon registered the medical coordinates with the optical tracker coordinates in two parts: an initial registration based on the physiological markers, with the marker positions selected manually by the surgeon using an optical tracker calibration needle, and a secondary registration, in which the surgeon scanned the surface of the patient's head with the calibration needle to create a point cloud. The transformation between the optical tracker and medical coordinates was then recorded. Next, the surgeon registered the optical tracker coordinates with the holographic coordinates by wearing the HoloLens and looking at the calibration tools fixed on the operating bed; four corner points were automatically located on the tools, and the system calculated the transformation between the optical tracker and holographic space. The details are given in the following sections. At the end of this step, the virtual anatomy was aligned with the real patient in the HoloLens view.

Step 4: Intraoperative surgical tool calibration. In this step, the surgeon calibrated the surgical tools (usually a skull drill) intraoperatively so that the procedure could be tracked precisely. The surgeon fixed tracking markers to the end of each surgical tool and placed the tip into a calibration cube, making the tool recognizable by the optical tracker; the transformation from the tip to the tracking marker could then be established. Intraoperatively, the optical tracker only needed to follow the tracking markers for the location of the tool tip to be shown in holographic space.

Step 5: Intraoperative holographic alignment. After steps 1–4 were finished, the surgeon could see the patient's virtual anatomy aligned with his or her real anatomy through the HoloLens. Additionally, the virtual needle was fused with the real needle in real time to directly guide the surgeon's handling of the surgical tools. The surgeon could then perform the surgery under the guidance of the HoloLens.

CT Image Segmentation

Two kinds of information were segmented from the CT images preoperatively: the organ model, comprising the skin and skull, and the hemorrhage model, which represents the surgical target. Manual outline segmentation and seed-growing segmentation helped the surgeon delineate the hematoma, and the flying edges algorithm was used to reconstruct the skin and skull surfaces. All models were then exported to the HoloLens for subsequent use.
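As an illustration of the surface reconstruction step, the following is a minimal sketch of a Flying Edges iso-surface extraction using VTK's Python bindings; the input directory, iso-value (a typical bone threshold in Hounsfield units), and output file name are illustrative assumptions, not the exact configuration of our planning system.

```python
import vtk

# Load the CT series (directory path is an illustrative assumption).
reader = vtk.vtkDICOMImageReader()
reader.SetDirectoryName("ct_series/")
reader.Update()

# Extract an iso-surface with the Flying Edges algorithm.
# ~300 HU is a common bone threshold; a skin surface would use a
# much lower value (e.g., around -200 HU).
flying_edges = vtk.vtkFlyingEdges3D()
flying_edges.SetInputConnection(reader.GetOutputPort())
flying_edges.SetValue(0, 300)
flying_edges.Update()

# Export the mesh for later import into the HoloLens application.
writer = vtk.vtkSTLWriter()
writer.SetInputConnection(flying_edges.GetOutputPort())
writer.SetFileName("skull.stl")
writer.Write()
```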

Figure 3 shows the seed-based hemorrhage segmentation process. After obtaining the CT images, the surgeon manually chose seed points located inside the hemorrhage. The seed-growing filter then automatically outlined every image slice containing hemorrhage. Because hemorrhage and normal brain tissue differ in grayscale value, the algorithm automatically reconstructed the hemorrhage model for subsequent import into the see-through device.

Fig. 3 Hemorrhage segmentation
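The following is a minimal sketch of the seed-growing idea on a CT volume, assuming a 6-connected neighborhood and an illustrative intensity band for acute blood; the actual filter used in the planning system may differ in connectivity and thresholds.

```python
from collections import deque
import numpy as np

def region_grow(volume, seed, low=50, high=90):
    """Grow a region from `seed` (z, y, x) over 6-connected voxels whose
    intensity lies in [low, high]; 50-90 HU is an assumed band for
    acute hemorrhage."""
    mask = np.zeros(volume.shape, dtype=bool)
    queue = deque([seed])
    while queue:
        z, y, x = queue.popleft()
        if mask[z, y, x]:
            continue                      # already accepted
        if not (low <= volume[z, y, x] <= high):
            continue                      # outside the hemorrhage band
        mask[z, y, x] = True
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not mask[nz, ny, nx]):
                queue.append((nz, ny, nx))
    return mask
```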

Automatic Physiological Marker Location from CT Images

For the subsequent initial registration between the virtual and real anatomies, we used physiological markers, such as the eye corners, to obtain the initial transformation. To make the system more efficient and reduce time consumption, we designed an algorithm that automatically locates these initial markers, as shown in Fig. 4.

Fig. 4 Automatic physiological marker localization

First, the skin of the patient's head was reconstructed. In the 3D view on the computer screen, we treated the view-projection camera as a virtual camera and used two such cameras to simulate stereo vision. The dlib 68-landmark detector [17], a convolutional neural network (CNN)-based tool [18], automatically detected 68 facial landmark points in each single image. We rendered the reconstructed skin from two different virtual camera positions and orientations, between which the transformation T was known, and performed facial landmark detection in each view. Using the intrinsic parameters of the virtual cameras, T, and the 68 landmarks in the two views, stereo triangulation [19] recovered the 3D positions of the landmarks on the patient's skin. From the 68 landmarks, we retained the five physiological markers required for initial registration in neurosurgery, namely, the left and right medial and lateral canthi and the tip of the nose, and used their automatically obtained 3D positions for the subsequent initial registration.
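As a sketch of the triangulation step, the following linear (DLT) triangulation recovers a landmark's 3D position from its pixel coordinates in the two virtual views; the 3 × 4 projection matrices P1 and P2, which combine the assumed virtual camera intrinsics with the two camera poses related by T, are inputs the caller is assumed to have.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one landmark seen in two views.
    P1, P2: 3x4 projection matrices of the two (virtual) cameras;
    x1, x2: pixel coordinates (u, v) of the same landmark."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The 3D point is the right singular vector with the smallest
    # singular value of A, in homogeneous coordinates.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```

In practice, this function would be called once per detected landmark pair, yielding the five canthus and nose-tip positions used for initial registration.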

Registration from the Medical Coordinates to the Optical Tracker

We defined the medical coordinate system with the upper left corner of the first CT image as the origin and used the HoloLens World Anchor system, which does not move intraoperatively, as the room coordinate system. The CT images, head surface, and hemorrhage were aligned in the same coordinate system, \({O}_{\mathrm{medical}}\). The relationship between the CT images and the HoloLens coordinate system is presented in Fig. 5. The position of the 3D-printed marker in the real world was located automatically by marker position estimation, and its position in the medical coordinate system was located automatically by segmentation.

Fig. 5 Registration process

The registration process consists of two parts. The first is the initial registration, which used the five physiological markers. The second is scan registration, in which the surgeon placed the tip of the calibrated needle on the surface of the patient's head and scanned several hundred points. The Euclidean distance between the collected scan points and the point cloud of the skin model was then minimized.

The coordinate transformation can be described as the computation of the relative positions of reference points. At least three pairs of reference points are required to compute the transformation; for better accuracy, we used four pairs. The transformation matrices were computed using an iterative closest point (ICP) algorithm with the Euclidean distance. The registration principle is as follows:

Suppose that there exists a set of n points whose coordinates are known in \({O}_{\mathrm{medical}}\) as \(\{{P}_{Mi},i=1,2,3,...,n\}\) and in \({O}_{\mathrm{room}}\) as \(\{{P}_{Ri},i=1,2,3,...,n\}\). A transform \({}_{R}{}^{M}T\) can then be sought to achieve a best-fit rigid mapping. We represent \({}_{R}{}^{M}T\) as a rotation matrix \({}_{R}{}^{M}R\) and a translation vector \({}_{R}{}^{M}X\). Consequently, the objective function can be represented as follows:

$$E=\sum_{i=1}^{n}\Vert {P}_{\mathrm{Ri}}-({}_{R}{}^{M}R\cdot {P}_{\mathrm{Mi}}+{}_{R}{}^{M}X)\Vert \to \mathrm{min}$$
(1)

Because conventional algorithms (e.g., the quaternion and ICP algorithms) have distinct weaknesses, such as low accuracy or harsh initial-condition requirements, we developed a combined algorithm that overcomes their individual limitations. The core idea is to use the closed-form quaternion method to provide a good initial alignment, which then serves as the starting condition for the subsequent ICP iterations [20]. Consequently, the ICP algorithm achieves wider applicability while guaranteeing the necessary registration accuracy, allowing the medical data to be aligned with the real patient data.
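The following sketch illustrates this two-stage idea with NumPy and SciPy: a closed-form best-fit rigid transform (the SVD form, which is equivalent to the quaternion solution) seeds a nearest-neighbor ICP loop. The array shapes, iteration count, and the absence of outlier rejection are simplifying assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(A, B):
    """Closed-form rigid transform (R, t) minimizing ||B - (R A + t)||
    for paired point sets A, B of shape (n, 3)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

def icp(source, target, R, t, iters=50):
    """Refine an initial (R, t), e.g., from the five physiological
    markers, by iterating nearest-neighbor matching of the scanned
    points `source` against the skin point cloud `target`."""
    tree = cKDTree(target)
    for _ in range(iters):
        moved = source @ R.T + t
        _, idx = tree.query(moved)           # closest skin points
        R, t = best_fit_transform(source, target[idx])
    return R, t
```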

Epipolar Line Adjustment and 3D Reconstruction

A HoloLens with two visible-light cameras was used for stereo epipolar adjustment and reconstruction and for identifying the different surgical tools. The basic workflow is shown in Fig. 6.

Fig. 6 Workflow of surgical tool position estimation

First, we calibrated the two visible-light cameras on the front of the HoloLens to determine their intrinsic parameters and the transformation between the left and right cameras. After calibration, epipolar geometry based on the pinhole model was established to simplify the search for corresponding points in stereo matching. We then applied a marker detection algorithm to detect black blobs in each camera image.
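As a sketch of the blob detection step, OpenCV's SimpleBlobDetector can find dark circular regions in a camera frame; the parameter values and file name below are illustrative assumptions rather than our exact detector settings.

```python
import cv2

# Configure a detector for dark, roughly circular markers.
params = cv2.SimpleBlobDetector_Params()
params.filterByColor = True
params.blobColor = 0              # dark blobs on a lighter background
params.filterByCircularity = True
params.minCircularity = 0.7       # assumed circularity threshold
detector = cv2.SimpleBlobDetector_create(params)

# Detect blob centers in one (grayscale) camera frame.
gray = cv2.imread("left_frame.png", cv2.IMREAD_GRAYSCALE)
keypoints = detector.detect(gray)
centers = [kp.pt for kp in keypoints]   # (u, v) pixel coordinates
```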

We defined the epipolar constraint as follows:

$${g}_{ep}(i,j) = {e}^{\frac{-{\left|{P}_{iy}-{P}_{jy}\right|}^{2}}{2{\sigma }_{ep}^{2}}}$$
(2)

In this equation, \({P}_{i}\) and \({P}_{j}\) denote the positions after epipolar adjustment from the left and right images, respectively, and \({\sigma }_{ep}^{2}\) denotes the prediction error of the epipolar constraint.

We defined the similarity constraint as follows:

$${d}_{md}({R}_{i},{R}_{j}) = \frac{1}{N}\sum_{{P}_{i}}\Vert {P}_{i}-{C}_{cp}({P}_{i},{R}_{j})\Vert$$
(3)

In this equation, we defined \({C}_{cp}\) as the closest point operator:

$${C}_{cp}(a,\varepsilon ) = \underset{x\in \varepsilon }{\arg \min }\Vert x-a\Vert$$
(4)

To calculate the best correspondence, we defined the Gaussian similarity matrix as follows:

$${G}_{ij}={g}_{ep}(i,j){e}^{\frac{-{d}_{md}{({R}_{i},{R}_{j})}^{2}}{2{\sigma }_{md}^{2}}}$$
(5)

Then, we used singular value decomposition (SVD) to solve \({G}_{ij}\) to obtain the following [21]:

$$G = TD{U}^{T}$$
(6)

We then replaced D with a matrix E in which every singular value is set to 1, obtaining:

$$P = TE{U}^{T}$$
(7)

If \({P}_{ij}\) is the largest value in both its row and its column, then i and j form the best corresponding pair between the left and right images.

Then, the best correspondence between the left and right images was established. For each correspondence, 3D triangle reconstruction was used to determine the real position in left camera coordinates.
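A compact sketch of Eqs. 2–7 follows: the Gaussian similarity matrix is assembled from the epipolar and shape terms, its singular values are flattened to 1, and mutually maximal entries are taken as correspondences. The σ values and the precomputed d_md matrix are assumptions for illustration.

```python
import numpy as np

def match_blobs(yL, yR, d_md, sigma_ep=2.0, sigma_md=1.5):
    """Stereo blob pairing via the SVD correspondence method.
    yL, yR : row coordinates of blobs after epipolar rectification;
    d_md   : (nL x nR) matrix of shape-similarity distances (Eq. 3),
             assumed precomputed here for brevity."""
    # Eq. 2: epipolar proximity term.
    g_ep = np.exp(-(yL[:, None] - yR[None, :])**2 / (2 * sigma_ep**2))
    # Eq. 5: Gaussian similarity matrix.
    G = g_ep * np.exp(-d_md**2 / (2 * sigma_md**2))
    # Eqs. 6-7: flatten the singular values to 1 to amplify
    # mutually exclusive best pairings.
    T, _, Ut = np.linalg.svd(G, full_matrices=False)
    P = T @ Ut
    # A pair (i, j) is accepted when P[i, j] dominates its row and column.
    return [(i, j) for i in range(P.shape[0]) for j in range(P.shape[1])
            if P[i, j] == P[i].max() and P[i, j] == P[:, j].max()]
```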

The stereo reconstruction produced a noisy point cloud. Because we used visible-light cameras rather than infrared cameras, we needed to extract the real surgical tool points from this noisy cloud and identify the individual tools.

Surgical Tool Identification and Pose Estimation

After the marker positions were determined, we identified the different surgical tools by their different numbers of markers. Because we used the visible-light scheme rather than the infrared scheme, the point cloud contained many noise points in addition to the real markers. The task was therefore to separate the real markers from the noise and determine which markers belonged to which surgical tool.

Figure 7 shows that the collected point cloud contains many noise points.

Fig. 7 Basic workflow of surgical tool identification and pose estimation. Panel A shows three different sets of markers as different tools. Panel B shows the point cloud reconstructed from stereo vision, which may contain noise points. Panel C shows the identification of the different tool points and markers. Panel D shows the position estimation for each tool

We define the following:

$${d}_{\mathrm{tools}}=\{{d}_{1},{d}_{2},...,{d}_{N}\},\quad N\ge 3$$
$${p}_{\mathrm{points}}=\{{p}_{1},{p}_{2},...,{p}_{M}\},\quad M\ge 3$$

Each surgical tool was defined by the positions of its N marker points. From the pairwise distances between points, we built the tool identification matrix \({d}_{\mathrm{tools}}\) (N × N) and the point cloud identification matrix \({p}_{\mathrm{points}}\) (M × M). We compared each row of \({d}_{\mathrm{tools}}\) with each row of \({p}_{\mathrm{points}}\) and recorded the number of nearly identical values for each pair, building a new matrix NUM (N × M).

For each value in the NUM matrix, if the value was the largest element in both its row and its column, the corresponding row and column indices were taken as the best match between a tool point and a reconstructed point.

After obtaining the real marker positions for a specific surgical tool, we determined the pose of the tracked tool: the ICP algorithm was used to calculate the real position of each surgical tool from its predefined geometry file, as sketched below.
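The following sketch illustrates the NUM-matrix idea: rows of inter-point distances from a tool's geometry file are compared against rows from the reconstructed cloud, counting nearly identical lengths; the millimeter tolerance is an assumed parameter.

```python
import numpy as np
from scipy.spatial.distance import cdist

def identify_tool(tool_pts, cloud_pts, tol=1.0):
    """Match a tool's known marker layout (N x 3, from its geometry
    file) against a noisy reconstructed cloud (M x 3) by comparing
    sorted inter-point distances; `tol` (mm) is an assumed tolerance."""
    d_tools = np.sort(cdist(tool_pts, tool_pts), axis=1)    # N x N
    d_cloud = np.sort(cdist(cloud_pts, cloud_pts), axis=1)  # M x M
    num = np.zeros((len(tool_pts), len(cloud_pts)), dtype=int)
    for i, row_t in enumerate(d_tools):
        for j, row_c in enumerate(d_cloud):
            # Count nearly identical inter-point lengths (the NUM matrix).
            num[i, j] = sum(np.min(np.abs(row_c - v)) < tol for v in row_t)
    # A pair (i, j) is accepted when it is maximal in its row and column.
    return [(i, j) for i in range(num.shape[0]) for j in range(num.shape[1])
            if num[i, j] == num[i].max() and num[i, j] == num[:, j].max()]
```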

Registration Between the HoloLens and Optical Tracker

The two registration methods discussed in the introduction have certain shortcomings: the registration process is complex for surgeons, and some methods are not sufficiently accurate for MIS. We therefore developed a fast, high-accuracy method for registering the two world coordinate systems, allowing the HoloLens to work in tandem with a high-accuracy optical tracker. Figure 8 shows the transformations among the medical, HoloLens, and optical tracker coordinates.

Fig. 8 Registration between the HoloLens and optical tracker

The reg-tools could be recognized by both the optical tracker and the two visible-light cameras of the HoloLens. We obtained four points in holographic space and the corresponding four points in optical tracker space and then used the ICP algorithm to calculate the transformation between the two point sets, which served as the bridge between the optical tracker and holographic space.

We define the transformation between \({O}_{\mathrm{reg}(\mathrm{hololens})}\) and \({O}_{\mathrm{reg}(\mathrm{optical})}\) as \({T}_{\mathrm{reg}}\); the medical transformation \({}_{R}{}^{M}T\) can then be obtained as follows:

$${}_{R}{}^{M}T={}_{R}{}^{H}T\,{}_{H}{}^{C}T\,{}_{C}{}^{RH}T\,{T}_{\mathrm{reg}}\,{\left({}_{O}{}^{RO}T\right)}^{-1}\,{}_{O}{}^{T}T\,{}_{T}{}^{M}T$$
(8)

With transformation \({}_{R}{}^{M}T\), we can easily transform the virtual medical models into HoloLens data for alignment with real anatomy.

The overall relationship between the HoloLens and the optical tracker is shown in Fig. 8. After obtaining this transformation, we could overlay the positions of the surgical tools and the virtual anatomy in holographic space.

Because HICH is treated under local anesthesia, the patient is not completely still in the operating room. We therefore performed intraoperative movement adjustment in real time so that the virtual anatomy seen through the HoloLens continues to reflect the patient's true anatomy when the patient moves. Because the patient sometimes also moved during the registration process itself, we applied the same real-time adjustment during registration to minimize movement errors.

As shown in Fig. 8, a marker was attached to the head in a nonoperated area to track the patient's movement in real time. When the surgeon performed the initial and secondary registrations, the positions of the chosen points were multiplied by the inverse transform between the tracking marker and the optical tracker so that they were expressed in the stable marker coordinate system. After registration, for each optical tracker frame, the calculated registration matrix was multiplied by the same inverse transformation; because the marker was firmly attached to the patient's head, the adjustment transformation could thus be calculated in real time to account for patient movement.
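A minimal sketch of this adjustment follows, assuming 4 × 4 homogeneous matrices and the transform directions named in the comments; the actual conventions of our tracking software may differ.

```python
import numpy as np

def compensated_registration(T_reg, T_marker_ref, T_marker_now):
    """Update the registration when the head moves.
    T_reg        : optical-tracker -> medical transform found at
                   registration time (4x4);
    T_marker_ref : head-marker pose in the tracker frame at
                   registration time;
    T_marker_now : head-marker pose in the current tracker frame.
    Because the marker is rigidly fixed to the head, the head motion
    in the tracker frame is T_marker_now @ inv(T_marker_ref)."""
    T_motion = T_marker_now @ np.linalg.inv(T_marker_ref)
    # Undo the head motion before applying the stored registration.
    return T_reg @ np.linalg.inv(T_motion)
```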

Intraoperative Surgical Tool Calibration

Intraoperative surgical tool calibration is essential for high-accuracy real-time tool tracking, so we designed calibration tools for quick calibration during surgery. The calibration cube has a tube in its center; because the component is 3D printed to high accuracy (tolerance below 0.02 mm), the position of the inner tube is known precisely in the calibration tool coordinates. The calibration tool carries four black marker points; from the positions of points 1 and 2, we determined the orientation of the tool axis and the offset between the drill tip and the tracking marker at the end of the tool. First, the surgeon fixed the tracking marker to the end of the surgical tool and placed the tool tip into the calibration cube, allowing the tool to be recognized by the optical tracker. The optical tracker then automatically calculated the transformation from the tool tip to the tracking marker, and the direction from the center of the calibration cube to its top indicated the orientation of the tool tip, as shown in Fig. 9.

Fig. 9 Overview of intraoperative surgical tool (skull drill) calibration
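A minimal sketch of the tip calibration arithmetic follows, assuming the tracker reports 4 × 4 marker poses and that the position of the cube's tube bottom is known in tracker coordinates from the cube's own markers.

```python
import numpy as np

def calibrate_tip(T_marker, p_tip_world):
    """Constant tip offset in the tool's tracking-marker frame.
    T_marker    : 4x4 pose of the tool's tracking marker, reported by
                  the optical tracker while the tip rests in the cube;
    p_tip_world : known tip position (tube bottom) in tracker coordinates."""
    p = np.append(p_tip_world, 1.0)           # homogeneous point
    return (np.linalg.inv(T_marker) @ p)[:3]  # tip in marker frame

def tip_position(T_marker_now, tip_offset):
    """Recover the tip in tracker coordinates during surgery from the
    current marker pose and the stored offset."""
    return (T_marker_now @ np.append(tip_offset, 1.0))[:3]
```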

Experiments and Results

Phantom Experiment Design

Prior to the clinical experiment, we designed a series of phantom experiments to test the accuracy and stability of the mixed reality navigation system under ideal conditions. We prepared a head phantom to simulate a real human head and placed a target hemorrhage model inside it to simulate a real HICH. The hemorrhage model measured 30 × 30 × 25 mm, was positioned at the center of the phantom, and was fixed in place so that it would not move during the experiment.

One 3D-printed tracking marker was attached to the phantom surface, at a different location in each trial. We then scanned the phantom with a CT scanner and imported the resulting images into the planning system, which automatically reconstructed the skin and the marker locations. Experienced surgeons performed puncture planning in the planning system, and all medical information was exported to the HoloLens via Wi-Fi. After transmission, the surgeon was asked to recognize the 3D-printed marker's position and complete the space registration process. Once registration was complete, the surgeon checked whether the virtual skin was aligned with the phantom and then performed the puncture under the guidance of real-time surgical tool tracking. Finally, the phantom was returned to the CT scanner to validate the accuracy of the simulated HICH surgery.

To verify the effectiveness of the mixed reality navigation system, we designed a postoperative validation process. The details are described as follows:

1. Registration error. We used a calibrated picker (accuracy within 0.5 mm) to pick 200 points at random locations on the phantom model and obtained their corresponding positions in HoloLens space. We then defined the root-mean-square (RMS) registration error over the n = 200 points as follows (see the computation sketch after this list):

$$\mathrm{RMS} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left[{({P}_{i\_\mathrm{real}\_x}-{P}_{i\_\mathrm{holo}\_x})}^{2}+{({P}_{i\_\mathrm{real}\_y}-{P}_{i\_\mathrm{holo}\_y})}^{2}+{({P}_{i\_\mathrm{real}\_z}-{P}_{i\_\mathrm{holo}\_z})}^{2}\right]}$$
(9)

2. Insertion error. After the simulated surgery, we performed a secondary CT scan to obtain postoperative images. Comparing the preoperative and postoperative images, we defined the insertion error as the distance between the planned insertion tip and the postoperative insertion tip, calculated from the positions of the postoperative and preoperative needle tips.

3. Movement error. The movement error has the same definition as the RMS registration error and is used to verify the movement adjustment accuracy: after moving the phantom model, we recomputed the RMS error.

4. Total registration time consumption. To validate the efficiency of this mixed reality-guided HICH system, we recorded the time from when the virtual model was imported into the HoloLens to when the registration process was completed.
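As a minimal illustration, Eq. 9 over the full set of picked points reduces to a few NumPy lines (the n × 3 array shapes are assumptions):

```python
import numpy as np

def rms_error(p_real, p_holo):
    """RMS registration error of Eq. 9.
    p_real: picker positions (n x 3); p_holo: the corresponding
    positions in HoloLens space (n x 3)."""
    return float(np.sqrt(np.mean(np.sum((p_real - p_holo) ** 2, axis=1))))
```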

Clinical Experiment Design

We included ten patients with hypertensive hemorrhage, with hematoma volumes of 19,111–38,635 mm3. The clinical experiment received ethical approval and was conducted in strict accordance with the Declaration of Helsinki. Because the hemorrhage locations differed across patients, experienced surgeons performed preoperative puncture planning in accordance with their clinical experience and selected the puncture target point to achieve the best result. Each patient received local anesthesia, and the head was permitted to move intraoperatively. Tracking markers were affixed to nonsurgical areas of the face to rigidly track the intraoperative movement of the head, and the surgical instruments, including the electric drill and puncture catheter, were calibrated for intraoperative tracking of their positions and poses. First, the organ model and the planning results from the experienced surgeons were exported into the mixed reality device. Point-to-point and surface matching methods were then used to align the virtual model with the real patient. Next, the surgeon affixed tracking markers to the surgical tools and performed intraoperative calibration. Finally, the surgeon guided the surgical tool's tip to the planned position under the mixed reality surgical navigation system and adjusted the puncture angle in accordance with the preoperative plan. While the surgeon performed the puncture, the location of the tool's tip was displayed in the mixed reality view to help complete the surgery accurately. After the operation, each patient underwent a postoperative CT scan to verify the effectiveness of the surgery. The validation process was the same as in the phantom experiment, including calculation of the registration error and the registration time consumption.

Phantom Experiment Results

The surgeon performed the phantom experiment in the operating room. Table 1 shows the results for each experiment. The average registration error was 1.03 mm, the average registration error after movement was 1.24 mm, and the average insertion error was 1.65 mm. The average time consumption for the complete phantom experiment was 12.9 min.

Table 1 Results of the phantom experiment

Clinical Experiment Results

Surgeons performed mixed reality-based surgical navigation for 10 patients, and after the operation, each patient underwent a postoperative CT scan. We obtained the registration error from the postoperative images, and we continuously tracked the patients' postoperative recovery. Figures 10 and 11 show CT images for 8 patients and pre- and intraoperative photographs for 4 patients, respectively. The mean postoperative registration error was 1.94 mm, and the average total time consumption was 14.2 min. In all patients, the hemorrhage was successfully removed, demonstrating that the system is suitable for clinical use.

Fig. 10 Eight medical images from clinical patients. Red rectangles indicate the hemorrhage region

Fig. 11 Clinical trial for HICH surgical navigation. Four clinical patients are depicted in this figure. Panels A-1, B-1, C-1, and D-1 show preoperative patient alignment, and panels A-2, B-2, C-2, and D-2 show intraoperative patient alignment

The registration error and the total time consumption of the registration process are shown in Table 2.

Table 2 Results for the ten patients in the clinical trial

Discussion

Currently, computer-aided surgical navigation is widely used in neurosurgery to provide intraoperative guidance that minimizes surgical risk and improves accuracy. Several studies have investigated the application of surgical navigation to the treatment of HICH and achieved good outcomes. However, traditional surgical navigation systems still carry certain limitations; for example, the surgeon cannot gain a direct view of the patient's anatomy, it is difficult to appreciate the hemorrhage location and other important structures preoperatively, and the surgeon must frequently switch views between the surgical field and the computer monitor during the operation, which may increase the time consumption and introduce unexpected surgical risks.

Many studies have investigated surgical navigation in neurosurgery. Mongen [22] examined the accuracy of patient-to-image registration in neurosurgical navigation and found a clinically acceptable accuracy of 2.49 mm. Shin et al. [23] investigated a robotic system for surgical navigation in treating HICH and reported a registration accuracy of 2.14 mm. Wang et al. [24] investigated a neurosurgical medical robot for surgical navigation in treating HICH; in seventeen clinical trials, the largest positioning error was 2.12 mm. Additionally, according to research by Fan et al. [25], the accuracy of a mainstream neurosurgical navigation system (Medtronic S7) was 2.51 mm in clinical usage. Meanwhile, many studies have shown the efficacy of surgical navigation for treating HICH. Li et al. [26] investigated a low-cost surgical navigation approach for treating HICH, with a total time consumption of 1.52 h, and Li [27] investigated a 3D printing-based method for neurosurgical navigation in treating HICH, reporting a time consumption of 104.3 min.

Mixed reality technology has great potential for application in computer-aided surgical navigation. Previous research on augmented and mixed reality in neurosurgery has achieved good outcomes. Van Doormaal et al. [28] investigated the clinical usage of holographic navigation and found an accuracy of 7.2 mm, and Soares et al. [29] performed a study of HoloLens-guided surgery and reported an accuracy of approximately 2 cm. However, most of these studies used manual spatial registration methods to align virtual objects with the real anatomy, significantly decreasing the accuracy of surgery. Moreover, real-time tracking of surgical tools to determine their locations relative to the real anatomy has not been addressed in the existing research.

In this paper, we presented a novel multimodal information surgical navigation system for HICH based on mixed reality. A HoloLens was used as the head-mounted device, and 3D-printed markers were used for space registration and real-time movement tracking. The surgeon could finish the whole registration and guidance workflow within 20 min. With direct observation of the real patient anatomy, the surgeon sees a precise alignment between the virtual models and the real patient and can obtain the insertion position directly through the HoloLens. This system helps overcome some disadvantages of traditional surgical navigation systems, as the surgeon no longer needs to switch views between the computer and the operative field. Moreover, real-time tracking of surgical tools was achieved to provide surgeons with direct guidance for insertion. The registration error in clinical usage was 1.94 mm, and the average time consumption was 14.2 min. Compared with mainstream neurosurgical navigation systems, this system provides clinically acceptable accuracy, significantly reduces the time consumption, and can potentially improve efficacy.

With this system, the reconstructed models (for example, of the skin and hemorrhage) and the preoperative insertion plan can be aligned with the patient's actual anatomy in the HoloLens. With SLAM technology [30], the virtual structures remain anchored at the same position intraoperatively to directly guide the surgeon during insertion. In the "Experiments and Results" section, we described phantom and clinical experiments showing that this system is effective for minimally invasive HICH surgery and may significantly decrease the total procedure time. The time consumption and accuracy results indicate that the system is suitable for clinical use and demonstrate its reliability and effectiveness in clinical HICH surgery.

Although the registration error was less than 2 mm in both the phantom and clinical experiments, some challenges remain for further research. First, the 3D-printed marker provides an effective way to quickly register the holographic space to the medical space; however, the marker must be attached to the patient's head before surgery, which may lengthen the preparation time and inconvenience the surgeon. The registration error in the clinical trials was also roughly twice that of the phantom experiment, mainly because the patient may unconsciously touch the marker preoperatively and because the real surgical environment, including the light of shadowless lamps, may disturb the registration procedure and increase the localization error. In the future, this marker-based registration method will be replaced by a point cloud-based, markerless registration method [31] to reduce the preparation time and increase accuracy. Second, the registration method is time consuming, and the scanning process may cause the patient discomfort and deform the skin, which would decrease accuracy. In the future, an automatic point cloud registration method will be investigated to simplify the registration process, which would substantially improve the whole surgical workflow.

Conclusion

In this research, we presented a novel mixed reality-based multimodal information fusion system for HICH surgical navigation. Wearing a head-mounted device, the surgeon can gain a better understanding of the location of the hemorrhage in the patient's head and directly observe the preoperative surgical plan with the patient under local anesthesia. In addition, we achieved intraoperative tracking of surgical tools to provide the surgeon with direct guidance for tool insertion. The phantom and clinical experiments achieved registration errors of 1.03 mm and 1.94 mm, respectively, and the average time consumption for surgery in the clinical experiment was 14.2 min. These results demonstrate the accuracy and effectiveness of the developed system and indicate that it is suitable for clinical use.