Introduction

The last decades have seen considerable advances in the introduction of augmented reality into surgery [15]. In particular, the scientific community and clinicians have been focusing on minimally invasive surgery (MIS). This type of surgery has gained popularity and has become a well-established procedure thanks to its benefits for the patient in terms of reduced infection risk and shortened recovery time. However, it remains complex from a surgical point of view, mainly because of the reduced field of view, which significantly impacts depth perception and surgical navigation.

These limitations are reduced with computer guidance by overlaying a three-dimensional preoperative model of the patient’s anatomy onto the images provided by laparoscopic cameras. This augmentation typically involves two steps: the visualization of the anatomical or pathological structures present in the medical images, and the registration of the preoperative model onto the intra-operative view.

Common registration processes are either interactive or automatic. An interactive registration requires the manipulation of the virtual organ until its projection on the laparoscopic images matches the current shape of the organ. This task is time-consuming, and the approach is limited to rigid registration. An automatic registration requires the selection of landmarks on both the virtual preoperative model and the intra-operative view, whose quality and pairwise matching can hardly be guaranteed.

For these reasons, many works now focus on automatic non-rigid registration [9, 23, 28]. Until now, except for Petit et al. [21] and our previous work [17, 18], none of the proposed methods has considered cuts and, therefore, topological changes during the registration process. We advocate that the ability to detect surgical cuts in the laparoscopic images and to apply them to the preoperative model in real-time is a key feature for augmented reality applications. The present paper addresses the real-time elastic registration of 3D models subject to topological changes, observed from a single view by a monocular camera, extending the seminal work of Paulus et al. [18]. To our knowledge, this is the first work that proposes an automatic update of the topology and placement of the internal structures of the organs in response to topological changes detected in the registered models.

Related works

Augmentation of deformable objects

In the context of non-rigid objects, two main behaviors can be identified: inextensible ones such as cloth and elastic ones such as soft tissues. Inextensible surfaces have first been handled by exploiting distance constraints, with the surface deformation essentially computed through parametric geometrical models [2, 22]. Approaches based on learning methods have also been considered: the solution is estimated from a representative sample of possible shapes using a dimensionality reduction process [24, 25].

As regards elastic behaviors, such geometrical properties cannot be exploited. The closed-form solution constrained by shading information [14] can capture stretching surfaces and yields good results, but the method assumes a Lambertian surface with a single-point light source, which greatly limits the possible applications.

More recently, Agudo et al. [1] describe elastic shapes with physical models and combine the finite element method with an extended Kalman filter. Other approaches try to minimize a stretching energy [13] that unifies geometric and mechanical constraints using a projective camera and locally linear deformations. In a similar way, nonlinear elastic models [7] have been used to augment highly elastic objects. The model is constrained by features extracted from a single-view camera and a set of boundary points.

The aforementioned methods cope with the registration or augmentation of surfaces. Augmenting 3D objects requires a more advanced acquisition process. For instance, a stereoscopic visual tracking [8] can be associated with a physical model to augment a volumetric object. A regularization mechanism makes the method more robust to errors in the tracking process. In a similar way, Leiza et al. [12] exploit a depth camera to capture a 3D deformed shape.

The approach presented by Petit et al. [20] fits the point cloud provided by an RGB-D sensor with a tetrahedral mesh using a geometrical point-to-point coupling. Linear elastic external forces are applied on an adapted corotational finite element model increasing the weight of the forces applied to the contour of the object. The method supports occlusion of the object and strong or highly elastic deformations. It has been extended, but only for non-elastic objects, to cope with fractures [21] that appear when the maximal eigenvalue of the stress tensor exceeds the toughness of the material.

In the context of computer-assisted surgery, patient-specific biomechanical models demonstrated their relevance for volume registration. They take into account anisotropic elastic deformation to infer in-depth structure motion [9, 28]. Pratt et al. [23] use a 4D scan of the heart and a biomechanical model to couple the surface motion with external forces that emanate from camera data.

Nevertheless, cuts and resection are essential in surgical procedures. The virtual model on which the augmented view relies has to be updated to allow a sound localization of the internal structures or tumors. In the context of image-guided neurosurgery, Ferrant et al. [6] handle the registration issues induced by tumor resection with the removal of the elements of the brain model that contains the resected tumor.

Fig. 1

Pipeline of our method: [S] segmentation, [1] identify image features \(p_{{l},\mathrm{f}}^{}\) (see “Identification and tracking of features” section), [2] map image features on model (see “Identification and tracking of features”, “Cut detection” sections), [N] construct feature neighborhood \(C_{\mathrm{F}}^{0}\) (see “Cut detection” section), [3] use initial/updated model \((P_{\mathrm{V}}^{},C_\mathrm{V}^{})\) and compare image \(p_{{l},\mathrm{f}}^{}\) and surface features \(p_{{l},\mathrm{F}}^{}\) by [4] calculating the measure \(\mu _{lm}\) on the neighborhood \(C_{\mathrm{F}}^{\,t}\), [5] detect outliers, insert cut points \(p_C\), expand cut points to cut lines, [6] expand cut line to cut surface \(S\) (see “Cut detection” section), update topology of model and internal structures (see “Simulation of cuts”, “Handling internal structures” sections), [7] solve minimization problem (5) of “Energy minimization problem” section, [8] display model on output frame; the steps marked with dashed arrows are performed only at initialization; one frame with image features can be used several times, repeating the steps [2]–[8] in the blue area

Simulation of cuts, fractures and tears

The simulation of surgical procedures raises specific issues and challenges. The organs undergo elastic—and sometimes plastic—deformations and, beyond that, are subject to cuts, tears or cauterization. The biomechanical models used must therefore support topological changes or, in other words, updates in the connectivity of the underlying meshes. The simulation of cutting, fracture and tearing remains an active research domain in computer graphics. A good overview is provided by Wu et al. [29]. In the following, we focus on the finite element method that is used in this paper.

The composite finite element method (CFEM) [30] uses a fine grid for the visualization and a coarse one for the deformations. The cuts are performed on the fine grid and propagated to the coarse mesh. This approach allows real-time performance but prevents partial cuts of the coarse elements from being visible.

The snapping method [16] consists in moving the vertices of the original mesh toward the separation surface before disconnecting the object along these snapped vertices. The cuts are limited to the topology of the original object, which is often coarse to ensure real-time performance.

Local remeshing overcomes this limitation and allows for partial cuts of elements [11]. It may be combined with snapping to avoid instabilities in the simulation and to limit the increase in the number of degrees of freedom, as proposed by Paulus et al. [19].

Method overview

In the following, we use \(X_\mathrm {E}^\mathrm {S}\) to name a variable X of the entity \(\mathrm {E}\) in the simulation state \(\mathrm {S}\). The variable X can denote either one point p, a set of points P, the connectivity c of one FEM element or the connectivity C of a set of elements. The name \(\mathrm {E}\) denotes the entity the variable belongs to. In the case of a 3D object, we use V for the volume, \(\partial V\) for the surface of the volume, I for the internal structures and F for the surface features. On the other hand, in a 2D image, we use f for the features. Finally, the state \(\mathrm {S}\) can be either the initial state \(0\), the current state \(\,t\) or the target state \(1\). Note that the targets denote the features extracted from the images in the video stream that the virtual model tries to follow; thus, the targets change with each video frame. In addition, the current state is related to the actual position of the virtual model while it moves toward the target.

Before a surgical intervention, the organ and its internal structures are segmented from the preoperative images. Separate meshes are built for each of the considered 3D objects. The virtual organ is discretized in a set of tetrahedra \(C_\mathrm{V}^{0}\) whose elements are denoted \(c_\mathrm{V}^{0}\). Those elements connect the vertices \(P_{\mathrm{V}}^{0}\) of the volume mesh that models the organ. The internal structures are discretized as a surface mesh \(C_{\mathrm{I}}^{0}\) that connects the vertices \(P_{\mathrm{I}}^{0}\). To geometrically bind the two initial sets of points, the \(P_{\mathrm{I}}^{0}\) are expressed as barycentric coordinates of the \((P_{\mathrm{V}}^{0},C_\mathrm{V}^{0})\). Finally, constitutive laws and a set of parameters are chosen to approximate the elastic behavior of the organ and internal structures.
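To make this data organization concrete, the following sketch (Python with NumPy) illustrates how the volume mesh \((P_{\mathrm{V}}^{0},C_\mathrm{V}^{0})\) and the barycentric binding of the internal-structure points \(P_{\mathrm{I}}^{0}\) could be represented; the function names are illustrative and not part of our implementation.

import numpy as np

def tet_barycentric(p, tet):
    # Barycentric coordinates of a point p with respect to a tetrahedron (4x3 array).
    a, b, c, d = tet
    T = np.column_stack((b - a, c - a, d - a))   # 3x3 edge matrix
    w = np.linalg.solve(T, p - a)                # local coordinates (w1, w2, w3)
    return np.array([1.0 - w.sum(), *w])         # (w0, w1, w2, w3), sums to 1

def bind_points(P_I0, P_V0, C_V0):
    # Express each internal-structure point as barycentric coordinates of one tetrahedron.
    binding = []
    for p in P_I0:
        for e, tet in enumerate(C_V0):
            w = tet_barycentric(p, P_V0[tet])
            if np.all(w >= -1e-9):               # the point lies inside this tetrahedron
                binding.append((e, w))
                break
    return binding

def apply_binding(binding, P_Vt, C_Vt):
    # Update the embedded positions from the deformed volume mesh P_V^t (element
    # indices must be kept consistent if the topology is later updated by a cut).
    return np.array([w @ P_Vt[C_Vt[e]] for e, w in binding])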

Feature points of the real organ are identified in the laparoscopic view and are registered to the virtual model in its initial position. We denote this set \(P_{\mathrm{F}}^{0} = \{p_{{l},\mathrm{F}}^{0}\}\). During the surgical intervention, the detected features are tracked in the video stream. They form the target points \(P_{\mathrm{F}}^{1} = \{p_{{l},\mathrm{F}}^{1}\}\). The real and virtual organs are coupled, by means of these two sets of features points: the tracked features \(P_{\mathrm{F}}^{1}\) and their initial registration \(P_{\mathrm{F}}^{0}\) that move according to the deformation of the virtual model and whose current positions are \(P_{\mathrm{F}}^{\,t}\).

Our method captures the deformations and detects cuts and tears, through an analysis of the displacement field of these two point clouds. Detected cuts are reproduced on the virtual organ, and the internal structures are updated providing additional information for the surgeon during the advancement of the surgical procedure.

Elastic registration

Physically based model

The choice of the constitutive law determines the set of deformations that can be represented while discriminating implausible configurations that could be induced by an erroneous target surface feature point cloud \(P_{\mathrm{F}}^{1}\).

The constitutive law of St. Venant–Kirchhoff represents hyperelastic material behavior and is often used to simulate nonlinear deformations in real-time. However, among other unintended side effects, it can break down under extreme compression because its stress response is not monotonic in compression [27]. Thus, we apply corotational linear elasticity, which ensures rotational invariance, and thus nonlinear characteristics, while keeping the simple stress–strain relationship of linear materials [26]. The corotational finite element method uses the polar decomposition of the deformation gradient \(\mathbf {F} = \mathbf {R}\mathbf {U}\) to construct a new strain measure

$$\begin{aligned} \epsilon _c= \mathbf {U} - \mathbf {I} \end{aligned}$$
(1)

The strain energy \(W_c\) of each element \(c\) in the FE model is given by the equation:

$$\begin{aligned} W_c= \frac{\lambda }{2} [{\text {tr}}(\epsilon _c)]^2 + \mu {\text {tr}}(\epsilon _c^2) \end{aligned}$$
(2)

where \(\lambda \) and \(\mu \) are the Lamé coefficients and can be computed from the elastic parameters of the material, E and \(\nu \): E is Young’s modulus, a measure of the stiffness of the material, while \(\nu \) is Poisson’s ratio, which characterizes the compressibility of the material.

We obtain the global internal elastic energy by accumulating the strain energies \(W_c\) of the elements:

$$\begin{aligned} W_\mathrm{I}\left( P_{\mathrm{V}}^{},C_\mathrm{V}^{0}\right) =\sum _{c\in C_\mathrm{V}^{0}} W_c\end{aligned}$$
(3)
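As an illustration only, the per-element corotational strain energy of Eqs. (1)–(3) can be sketched as follows (Python with NumPy/SciPy); the element volume weighting and the force assembly of a full FE code are omitted.

import numpy as np
from scipy.linalg import polar

def lame_parameters(E, nu):
    # Lame coefficients from Young's modulus E and Poisson's ratio nu.
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))
    mu = E / (2 * (1 + nu))
    return lam, mu

def corotational_energy(F, E, nu):
    # F is the 3x3 deformation gradient of one element.
    R, U = polar(F)                       # polar decomposition F = R U
    eps = U - np.eye(3)                   # corotational strain, Eq. (1)
    lam, mu = lame_parameters(E, nu)
    return 0.5 * lam * np.trace(eps) ** 2 + mu * np.trace(eps @ eps)   # Eq. (2)

def internal_energy(deformation_gradients, E, nu):
    # Global internal elastic energy, Eq. (3): sum over the elements.
    return sum(corotational_energy(F, E, nu) for F in deformation_gradients)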

Identification and tracking of features

An initial registration of the biomechanical model to the video stream can be performed manually, with contour-based approaches [5], or with anatomical landmarks.

From the first frame of a monocular video stream in a single-view position, we extract 2D features in their initial positions \(p_{{l},\mathrm{f}}^{0}\in P_{\mathrm{f}}^{0}\) using the Speeded-up Feature Detector. These 2D positions are mapped to the surface of the initially registered model, and the surface feature points \(p_{{l},\mathrm{F}}^{0}\in P_{\mathrm{F}}^{0}\) are expressed as barycentric coordinates of \((P_{\mathrm{V}}^{0},C_\mathrm{V}^{0})\). For the frame-to-frame tracking, the Lucas–Kanade optical flow yields 2D features \(p_{{l},\mathrm{f}}^{1}\in P_{\mathrm{f}}^{1}\) and, with the mapping to the surface, the target surface feature points \(p_{{l},\mathrm{F}}^{1}\in P_{\mathrm{F}}^{1}\).
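A minimal sketch of this 2D pipeline with OpenCV is given below; the corner detector used here is only a stand-in for the speeded-up feature detector mentioned above, and all parameter values are illustrative.

import cv2
import numpy as np

def detect_initial_features(first_frame, max_corners=300):
    # Extract the initial 2D features p_{l,f}^0 from the first frame.
    gray = cv2.cvtColor(first_frame, cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(gray, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=7)
    return pts.astype(np.float32)                       # shape (N, 1, 2)

def track_features(prev_frame, next_frame, prev_pts):
    # Frame-to-frame tracking with the pyramidal Lucas-Kanade optical flow,
    # yielding the target features p_{l,f}^1 and a validity mask.
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, prev_pts, None,
                                                   winSize=(21, 21), maxLevel=3)
    return next_pts, status.ravel() == 1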

Energy minimization problem

In the simulation, the current feature points \(p_\mathrm{F}^{\,t}\) are mapped to the FEM mesh and move according to the mesh nodes \(P_{\mathrm{V}}^{\,t}\), so we can write \(p_{{l},\mathrm{F}}^{\,t} = p_{{l},\mathrm{F}}^{\,t}(P_{\mathrm{V}}^{\,t})\in P_{\mathrm{F}}^{\,t}(P_{\mathrm{V}}^{\,t})\).

The coupling of the real and virtual models is obtained with the introduction of spring forces between each surface feature point \(p_{{l},\mathrm{F}}^{\,t}(P_{\mathrm{V}}^{\,t})\) and target surface feature point \(p_{{l},\mathrm{F}}^{1}\), which accumulate to the stretching energy:

$$\begin{aligned} W_\mathrm{S}\left( P_{\mathrm{F}}^{\,t}\left( P_{\mathrm{V}}^{\,t}\right) ,P_{\mathrm{F}}^{1}\right) = \sum _{l}\frac{1}{2} k_l\left\| p_{{l},\mathrm{F}}^{\,t}\left( P_{\mathrm{V}}^{\,t}\right) - p_{{l},\mathrm{F}}^{1} \right\| ^2 \end{aligned}$$
(4)

The parameters \(k_l\) are chosen experimentally and are of the same order of magnitude as the Young’s modulus of the deformable object. The updated set of vertices \(P_{\mathrm{V}}^{\,t}\) is obtained by solving the minimization problem between internal elastic energy and stretching energy:

$$\begin{aligned} \mathop {\mathrm{argmin}}\limits _{P_{\mathrm{V}}^{\,t}}\left( W_\mathrm{I}\left( P_{\mathrm{V}}^{\,t},C_\mathrm{V}^{0}\right) + W_\mathrm{S}\left( P_{\mathrm{F}}^{\,t}\left( P_{\mathrm{V}}^{\,t}\right) ,P_{\mathrm{F}}^{1}\right) \right) \end{aligned}$$
(5)

The surface feature points \(P_{\mathrm{F}}^{\,t}(P_{\mathrm{V}}^{\,t})\) and the internal structures \(P_{\mathrm{I}}^{\,t}(P_{\mathrm{V}}^{\,t})\) are updated applying their initial barycentric coordinates to the new positions of the \(P_{\mathrm{V}}^{\,t}\).

Since we use a dynamic simulation, there is no need for boundary conditions to obtain a stable problem. The minimization problem is solved at every time step using the conjugate gradient method, and we apply the implicit Euler method for the time integration.
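For illustration, the sketch below shows how the stretching energy of Eq. (4) and the corresponding nodal forces could be accumulated on the volume mesh through the barycentric embedding of the surface feature points; in the actual simulation these forces enter the implicit Euler/conjugate gradient solve of the minimization problem (5). The names and the uniform stiffness k (the paper uses per-spring values \(k_l\)) are simplifying assumptions.

import numpy as np

def stretching_forces(P_Vt, C_V, feature_binding, P_F1, k):
    # feature_binding: one (element index, barycentric weights) pair per feature l,
    # computed at the initial registration.
    forces = np.zeros_like(P_Vt)
    energy = 0.0
    for (e, w), target in zip(feature_binding, P_F1):
        nodes = C_V[e]
        p_lF = w @ P_Vt[nodes]            # current feature position p_{l,F}^t(P_V^t)
        d = p_lF - target                 # residual to the target p_{l,F}^1
        energy += 0.5 * k * d @ d         # contribution to W_S, Eq. (4)
        for wi, n in zip(w, nodes):       # -dW_S/dP_V, spread via barycentric weights
            forces[n] -= k * wi * d
    return energy, forces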

Topological changes

This section presents the core of the proposed method. We first give a mathematical formulation of the problem, stating our assumptions and our goal: provide an updated virtual organ, including its internal structures, even when dealing with large deformations and/or topological changes. Then, we describe the detection of a cut, the way it is processed and the update of the internal structures.

Problem formulation

We assume that the initial positions of the surface feature points \(P_{\mathrm{F}}^{0}\), the virtual organ \(P_{\mathrm{V}}^{0}\), the internal structures \(P_{\mathrm{I}}^{0}\) and the target positions of the surface feature points \(P_{\mathrm{F}}^{1}\) are given. The tetrahedral elements \(C_\mathrm{V}^{0}\) connect \(P_{\mathrm{V}}^{0}\) into the virtual organ and, respectively, the triangular elements \(C_{\mathrm{I}}^{0}\) connect \(P_{\mathrm{I}}^{0}\) into the internal structures.

Our objective is to update the shape (geometry and topology) of the virtual organ \((P_{\mathrm{V}}^{0},C_\mathrm{V}^{0})\), based on the final position of the target surface feature points \(P_{\mathrm{F}}^{1}\). This includes the deformation and update of the topology of the model \((P_{\mathrm{V}}^{\,t},C_\mathrm{V}^{\,t})\) and the internal structures \((P_{\mathrm{I}}^{\,t},C_{\mathrm{I}}^{\,t})\). With the previous notation, the problem can be formulated as:

$$\begin{aligned}&\text {Given } P_{\mathrm{F}}^{0}, \left( P_{\mathrm{V}}^{0}, C_\mathrm{V}^{0}\right) , \left( P_{\mathrm{I}}^{0}, C_{\mathrm{I}}^{0}\right) \text { and } P_{\mathrm{F}}^{1} \end{aligned}$$
(6)
$$\begin{aligned}&\text {Find } \Big (P_{\mathrm{V}}^{\,t},C_\mathrm{V}^{\,t}\Big ) \text { solving }(5) \end{aligned}$$
(7)
$$\begin{aligned}&\text {and minimizing } \left\| P_{\mathrm{F}}^{\,t}\Big (P_{\mathrm{V}}^{\,t}\Big ) - P_{\mathrm{F}}^{1} \right\| _F \end{aligned}$$
(8)

When a ground truth of the surface of the volumetric model \((P_{\partial V}^{1},C_{\partial V}^{1})\) or of the internal structures \((P_{\mathrm{I}}^{1},C_{\mathrm{I}}^{1})\) are given, the problem could be complemented by:

$$\begin{aligned} \text {Minimize }&\left\| \Big (P_{\partial V}^{\,t}\Big (P_{\mathrm{V}}^{\,t}\Big ),C_{\partial V}^{\,t}\Big ) - \Big (P_{\partial V}^{1},C_{\partial V}^{1}\Big ) \right\| _{\partial V} \end{aligned}$$
(9)
$$\begin{aligned} \text {Minimize }&\left\| \Big (P_{\mathrm{I}}^{\,t},C_{\mathrm{I}}^{\,t}\Big ) - \Big (P_{\mathrm{I}}^{1},C_{\mathrm{I}}^{1}\Big ) \right\| _I \end{aligned}$$
(10)

The choice of the norms depends on the application and will be discussed in “Results” section. To solve the problem, the displacement field between the current features \(P_{\mathrm{F}}^{\,t}(P_{\mathrm{V}}^{\,t})\) and target features \(P_{\mathrm{F}}^{1}\) is analyzed. Discontinuities in this field reveal an inconsistency between the biomechanical model \((P_{\mathrm{V}}^{\,t},C_\mathrm{V}^{\,t})\) and the target surface feature points \(P_{\mathrm{F}}^{1}\). Such an inconsistency can be triggered by a cut of the real model that moves the target surface feature points \(P_{\mathrm{F}}^{1}\).

Cut detection

Detection of a cut region

The minimization problem (5) is solved at each time step, resulting in an update of the biomechanical model \((P_{\mathrm{V}}^{\,t},C_\mathrm{V}^{0})\). When a smooth deformation of the organ occurs, the biomechanical model \((P_{\mathrm{V}}^{\,t},C_\mathrm{V}^{0})\), and with it the surface feature points \(P_{\mathrm{F}}^{\,t}(P_{\mathrm{V}}^{\,t})\), can properly follow the target surface feature points \(P_{\mathrm{F}}^{1}\). As soon as a cut occurs, two observations can be made on the surface feature points: first, the uncut model \((P_{\mathrm{V}}^{\,t},C_\mathrm{V}^{0})\) becomes unable to adapt to the motion enforced by the stretching energy. As a result, the vector between the current and the target positions \(d_l(P_{\mathrm{V}}^{\,t},P_{\mathrm{F}}^{1}) = p_{{l},\mathrm{F}}^{\,t}(P_{\mathrm{V}}^{\,t}) - p_{{l},\mathrm{F}}^{1}\) diverges around the cut. Second, the distance between two target feature points \(p_{{l},\mathrm{F}}^{1}\) and \(p_{{m},\mathrm{F}}^{1}\) that lie on two different sides of the cut increases much more than the average distance between feature points.

To exploit these observations, we call two feature points \(p_{{l},\mathrm{F}}^{0}\) and \(p_{{m},\mathrm{F}}^{0}\) neighbors iff their initial Euclidean distance \(\delta _{{{lm}},\mathrm{F}}^{0} = \Vert p_{{l},\mathrm{F}}^{0}-p_{{m},\mathrm{F}}^{0}\Vert \) is smaller than a given radius \(r_{P_{\mathrm{F}}^{0}}\), which is related to the density of the features. The Euclidean distances \(\delta _{{{lm}},\mathrm{F}}^{\,t}\) and \(\delta _{{{lm}},\mathrm{F}}^{1}\) are defined analogously and will be used in the following. The neighborhood information is stored in the graph \(C_{\mathrm{F}}^{0} = \{(l,m)|\delta _{{{lm}},\mathrm{F}}^{0}<r_{P_{\mathrm{F}}^{0}}\}\) (see Fig. 1.[N]).

In the region of a cut, the physical model prevents the surface feature points \(P_{\mathrm{F}}^{\,t}(P_{\mathrm{V}}^{\,t})\) from moving toward their targets \(P_{\mathrm{F}}^{1}\) and the vectors \(d_l(P_{\mathrm{V}}^{\,t},P_{\mathrm{F}}^{1})\) and \(d_m(P_{\mathrm{V}}^{\,t},P_{\mathrm{F}}^{1})\)—simply denoted \(d_l\) and \(d_m\) in the following—point in two different directions. The similarity of the displacement field of neighboring features is evaluated through the Euclidean norm \(\Vert d_l-d_m\Vert \), accounting for the first observation. For the second observation, we evaluate the ratio \(\delta _{{{lm}},\mathrm{F}}^{1}/\delta _{{{lm}},\mathrm{F}}^{0}\) of the final to the initial distance of neighboring surface feature points. An increase indicates either an elongation of the object or a separation due to a cut or a tear. These two cases are distinguished by comparing this ratio with its average in the neighborhood.

Summarizing, we define the measure

$$\begin{aligned} \mu _{lm} = \frac{\delta _{{{lm}},\mathrm{F}}^{1}}{\delta _{{{lm}},\mathrm{F}}^{0}}\Vert d_m-d_l\Vert , \quad \forall (l,m)\in C_{\mathrm{F}}^{0} \end{aligned}$$
(11)

and we denote \(\mu _{{\varnothing }}\) the mean value over \(C_{\mathrm{F}}^{0}\). With a scenario-dependent threshold \(\tau \), we can identify outliers—i.e., divergent neighbor features—with \(\mu _{lm}>\tau \mu _{{\varnothing }}, (l,m)\in C_{\mathrm{F}}^{0}\), that lie in a region where a cut or tear is likely to occur.
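The following sketch evaluates the measure of Eq. (11) on the neighborhood graph and flags the outliers; the variable names mirror the text and the graph construction follows the definition of \(C_{\mathrm{F}}^{0}\) above, but the code itself is only illustrative.

import numpy as np

def build_neighborhood(P_F0, r):
    # C_F^0 = {(l, m) | ||p_{l,F}^0 - p_{m,F}^0|| < r}
    n = len(P_F0)
    return [(l, m) for l in range(n) for m in range(l + 1, n)
            if np.linalg.norm(P_F0[l] - P_F0[m]) < r]

def detect_outliers(P_F0, P_Ft, P_F1, neighbors, tau):
    # Returns the neighbor pairs whose measure mu_lm exceeds tau times its mean.
    d = P_Ft - P_F1                                    # d_l = p_{l,F}^t - p_{l,F}^1
    mu = {}
    for l, m in neighbors:
        delta0 = np.linalg.norm(P_F0[l] - P_F0[m])     # initial distance
        delta1 = np.linalg.norm(P_F1[l] - P_F1[m])     # target distance
        mu[(l, m)] = (delta1 / delta0) * np.linalg.norm(d[m] - d[l])   # Eq. (11)
    mu_mean = np.mean(list(mu.values()))
    outliers = [(l, m) for (l, m), v in mu.items() if v > tau * mu_mean]
    return outliers, mu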

Insertion of cut points

To detect the cuts, we first search for couples of neighboring feature points \((l,m)\) that are outliers for the measure \(\mu _{lm}\). For two outliers \((l_0,m_0)\), \((l_1,m_1)\) with \(\mu _{l_0m_0}>\mu _{l_1m_1}>\tau \mu _{{\varnothing }}\), the cut should be closer to the \((l_0,m_0)\) couple. As soon as the number of outliers exceeds a given threshold n, we insert a cut point \(p_C\) at the averaged barycenter of the outliers weighted with their respective measure.

Then, to avoid the insertion of another cut point in the next time step at the same location, we delete from the set \(C_{\mathrm{F}}^{\,t}\) of neighbors the couples that cross the sphere of radius \(r_{p_C}\) around the inserted cut point (see Fig. 2 left, middle). After the update of the neighbors, the measure defined in (11) is only evaluated on \(C_{\mathrm{F}}^{\,t}\).
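A possible realization of this insertion and pruning step is sketched below; whether the barycenter is computed on the current or on the target feature positions, and the exact pruning test, are assumptions of this illustration.

import numpy as np

def insert_cut_point(outliers, mu, P_F1, n_min):
    # Weighted barycenter of the outlier pairs, inserted once more than n_min
    # outliers have been found; returns None otherwise.
    if len(outliers) <= n_min:
        return None
    weights = np.array([mu[(l, m)] for l, m in outliers])
    centers = np.array([0.5 * (P_F1[l] + P_F1[m]) for l, m in outliers])
    return (weights[:, None] * centers).sum(axis=0) / weights.sum()

def prune_neighbors(neighbors, P_F1, p_C, r_pC):
    # Remove the neighbor pairs whose connecting segment crosses the sphere of
    # radius r_pC around the inserted cut point p_C.
    kept = []
    for l, m in neighbors:
        a, b = P_F1[l], P_F1[m]
        t = np.clip(np.dot(p_C - a, b - a) / np.dot(b - a, b - a), 0.0, 1.0)
        closest = a + t * (b - a)          # closest point of the segment to p_C
        if np.linalg.norm(closest - p_C) >= r_pC:
            kept.append((l, m))
    return kept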

Fig. 2

From left to right: insertion of a cut point based on the barycenters of identified outliers (black crosses); update of the connectivity information (turquoise connections are not used anymore); insertion of a second cut point yields a cut line (polygon); the cut polygon is extended up to the sphere of radius \(r_{p_C}\)

With the first insertion of a cut point, we introduce a sequence of cut points \(\{p_{i_0,C}, \ldots , p_{i_n,C}\}\) (initially with \(i_0 = i_n\)) that continuously extends to a cut polygon, i.e., new cut points can be inserted before the first and after the last extremity. New cut points are inserted before or after the nearest extremity (Fig. 2, middle), and the polygon follows the widening of the cut.

Expansion to cut surface

Using the direction of the camera or a predefined vector, the separation polygon can be extruded to a separation surface \(S\), which can be forwarded to any cutting algorithm [29].
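As a simple illustration of this step, each segment of the cut polygon can be swept along the chosen direction and split into two triangles; the extrusion depth and the handling of the direction below are simplifying assumptions.

import numpy as np

def extrude_cut_polyline(cut_points, direction, depth):
    # Sweep the ordered cut points along 'direction' to build a triangulated
    # separation surface S (vertex array and triangle index list).
    d = depth * direction / np.linalg.norm(direction)
    vertices, triangles = [], []
    for p in cut_points:
        vertices.extend([p, p + d])                     # top and bottom of the sweep
    for i in range(len(cut_points) - 1):
        a, b, c, e = 2 * i, 2 * i + 1, 2 * i + 2, 2 * i + 3
        triangles += [(a, b, c), (b, e, c)]             # each quad split into two triangles
    return np.asarray(vertices), np.asarray(triangles)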

Optimizations

The parameters n and \(r_{p_C}\) allow balancing the precision of the cut against the robustness of the cut detection. Setting these parameters to high values, i.e., favoring robustness over precision, reduces the ability of the detection to react instantly to an emerging cut; in other words, the detection tends to lag behind. To overcome this negative side effect, we insert another cut point located at the intersection between the line connecting the last and the current cut point and the sphere around the current cut point (Fig. 2, right).

The surface feature points do not extend beyond the boundary of the object, i.e., inserted cut points stop before the boundary, which prevents complete cuts from occurring. The anticipation step described in the last paragraph results in a cut that extends over the boundary, which alleviates the problem without adding a parameter.

Simulation of cuts

The presented method to detect topological changes is independent of the separation algorithm [29]. In our work, we use a remeshing approach combined with a node snapping technique [19] which is presented below.

The cutting algorithm updates a FEM mesh at a predefined or detected cut (or separation) surface \(S\) in two steps. First, the cut surface is sampled at the edges of the FEM mesh, which makes it possible to identify the vertices that are close to the cut and need to be snapped.

For the remaining elements, a local remeshing is performed: tetrahedra adjacent to a cut edge are replaced to introduce a vertex inside every tetrahedron traversed by the separation surface. Then, faces which are adjacent to the cut edge are flipped to insert new edges between the points on the separation surface. Finally, the edge that crosses the surface is deleted and triangles inside the mesh interpolate the separation surface.

This method introduces fewer nodes and tetrahedral elements than similar methods [3, 11] and reduces the overhead on some key steps of the finite element method (e.g., solving the linear system at each time step), making it well suited for real-time applications such as augmented reality in surgery.

Handling internal structures

In our work, we use the biomechanical model \((P_{\mathrm{V}}^{\,t},C_\mathrm{V}^{\,t})\) of the virtual organ to update the internal structures given by a preoperative scan. For that, the initial positions \(P_{\mathrm{I}}^{0}\) and the initial triangular surface connectivity \(C_{\mathrm{I}}^{0}\) represent internal structures such as vessels, the urinary system or tumors. At initialization, barycentric coordinates are computed for each point \(p_{{l},\mathrm{I}}^{0}\in P_{\mathrm{I}}^{0}\), depending on the shape functions used in the finite element formulation. Similar to the surface feature points, this allows the current position of the internal structures to be expressed as a function of the biomechanical model, i.e., \(p_{{l},\mathrm{I}}^{\,t}=p_{{l},\mathrm{I}}^{\,t}(P_{\mathrm{V}}^{\,t})\in P_{\mathrm{I}}^{\,t}(P_{\mathrm{V}}^{\,t})\).

When a cut or tear of the organ yields the detection of a separation surface \(S\), as described in the “Cut detection” section, we aim at propagating this change to the internal structures. In order to correctly update the internal structures, we update the connectivity \(C_{\mathrm{I}}^{\,t}\) by deleting the triangles that intersect with the separation surface \(S\), i.e.,

$$\begin{aligned} C_{\mathrm{I}}^{\,t}=\left\{ \left( c_{1,\mathrm I},c_{2,\mathrm I},c_{3,\mathrm I}\right) \Big | \left[ p_{c_{1,\mathrm I},\mathrm I}^{\,t},p_{c_{2,\mathrm I},\mathrm I}^{\,t},p_{c_{3,\mathrm I},\mathrm I}^{\,t}\right] \cap S=\emptyset \right\} \end{aligned}$$
(12)
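A sketch of this connectivity update is given below: every triangle of the internal structures whose edges cross a triangle of \(S\) is discarded. The edge-versus-triangle test (Möller–Trumbore) is a simplified stand-in for a full triangle–triangle intersection test and is not the exact procedure of our implementation.

import numpy as np

def segment_hits_triangle(p, q, tri, eps=1e-9):
    # Moller-Trumbore test for the segment p-q against the triangle tri (3x3 array).
    a, b, c = tri
    d = q - p
    e1, e2 = b - a, c - a
    h = np.cross(d, e2)
    det = np.dot(e1, h)
    if abs(det) < eps:
        return False                      # segment parallel to the triangle plane
    inv = 1.0 / det
    s = p - a
    u = inv * np.dot(s, h)
    if u < 0.0 or u > 1.0:
        return False
    qv = np.cross(s, e1)
    v = inv * np.dot(d, qv)
    if v < 0.0 or u + v > 1.0:
        return False
    t = inv * np.dot(e2, qv)
    return 0.0 <= t <= 1.0                # intersection lies within the segment

def update_internal_connectivity(C_I, P_It, S_vertices, S_triangles):
    # Keep only the internal-structure triangles that do not cross S, cf. Eq. (12).
    kept = []
    for tri in C_I:
        pts = P_It[list(tri)]
        edges = [(pts[0], pts[1]), (pts[1], pts[2]), (pts[2], pts[0])]
        crosses = any(segment_hits_triangle(p, q, S_vertices[list(st)])
                      for p, q in edges for st in S_triangles)
        if not crosses:
            kept.append(tuple(tri))
    return kept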

Results

In this section, we demonstrate the potential of our approach to detect a (surgical) cut and to replicate the corresponding topological changes on a virtual model. Our experiments involve silicone data, in vivo liver data and ex vivo kidney data. All the results are obtained using a single view from a monocular camera. Qualitative and quantitative results are presented with Dice’s coefficient for the first two datasets (Table 1), and more extensive comparisons are reported for the last one.

Table 1 Dice’s coefficient on silicone data and in vivo liver
Fig. 3

Detection and simulation of cuts in silicone bands demonstrating the ability of our approach to distinguish between large strains (186 and 166%) and cuts: left without augmentation, right augmented with an uncut/cut model

Experiments on highly elastic silicone bands

Our algorithm was applied in two scenarios involving highly elastic silicone bands which are cut and then strongly deformed (see Fig. 3). The tracking of the features, the detection of the cut and the update of the deformable model due to topological changes are all performed in real-time on a single CPU computer.

Experiments on an in vivo liver

A second experiment was conducted on a video showing the cutting of a porcine liver lobe (see Fig. 4). A tumor was inserted into the virtual representation of the organ. With no topology update, the augmented view of the tumor is distorted and misplaced, whereas with our cut detection and simulation, the tumor stays undeformed and correctly located below the cut.

Fig. 4

Augmented reality on cut and deformed liver overlaid by the virtual organ and a tumor: with a normal elastic model (a) and with our method (b)

Validation on an ex vivo kidney

The last experiment involves the cutting of an ex vivo kidney. It provides a challenging evaluation of our method, as the internal structures of the kidney (the calyces, a part of the urinary system) have a complex geometry and cover a large part of the organ.

Ground truth

In order to obtain a trustworthy ground truth for the kidney and its internal structures, two CT scans were performed before and after the manipulation. To ensure a good visualization of the internal structures in the CT images and to avoid a loss of volume when the cut occurs, we filled the calyces with a gel that solidifies. The gel contains BaSO\(_4\) microparticles, showing high contrast in CT images [4].

The kidneys are cut perpendicularly to the long axis in the middle of the parenchyma, incising a part of the calyx system. The cut is widened by stretching the kidney along its longitudinal axis. The organs and internal structures have been segmented using active contour techniques (Snakes) [10]. The resulting surface meshes, displayed in the following, have been smoothed using a nearest-neighbor smoothing algorithm.

In order to quantify the obtained results, we measure the (sampled) Hausdorff distance \(H\) between the surface and the internal structures of the kidney comparing the simulated solution—with and without cutting—with the goal positions provided by the final CT scan. For the surface feature points, we use the Euclidean norm:

$$\begin{aligned} \left\| p_{{l},\mathrm{F}}^{\,t}\left( P_{\mathrm{V}}^{\,t}\right) - p_{{l},\mathrm{F}}^{1} \right\| _F = \left\| p_{{l},\mathrm{F}}^{\,t}\left( P_{\mathrm{V}}^{\,t}\right) - p_{{l},\mathrm{F}}^{1} \right\| _2 \end{aligned}$$
(13)

Averages and maximal values for the measures are collected in Table 2.
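For reference, the sampled Hausdorff distance and its two directed parts (visualized in Fig. 6) can be computed as in the following sketch; this illustration relies on a KD-tree for the nearest-neighbor queries and is not our original implementation.

import numpy as np
from scipy.spatial import cKDTree

def sampled_hausdorff(X, Y):
    # X: points sampled on the simulated surface, Y: points sampled on the reference.
    d_xy = cKDTree(Y).query(X)[0]          # distance from every x in X to its closest y
    d_yx = cKDTree(X).query(Y)[0]          # distance from every y in Y to its closest x
    return max(d_xy.max(), d_yx.max()), (d_xy, d_yx)   # H and its two directed parts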

Table 2 Evaluation on ex vivo kidney data
Fig. 5

Augmented reality on cut and deformed kidney 1 (top) and 2 (bottom) overlaid by the virtual organ, the initial registration (left), final registrations: uncut (middle left), cut (middle right) and reference registration (right)

Our results

Kidney 1 measures \(102 \times 27 \times 55\) mm, the internal structures \(58 \times 11 \times 27\) mm. We extracted 225 feature points that are connected using the radius \(r_{P_{\mathrm{F}}^{0}}=15\) mm. We applied our algorithm with a threshold \(\tau =8\) to identify outliers relative to the average of the measure, inserting a new cut point \(p_C\) as soon as we have \(n=8\) outliers and updating the neighborhood information using \(r_{p_C}=7\) mm.

Kidney 2 measures \(96 \times 27 \times 56\) mm, the internal structures \(52 \times 12 \times 32\) mm. We identified and tracked 219 feature points in the video stream and applied our method with \(r_{P_{\mathrm{F}}^{0}}=20\) mm, \(\tau =10, n=3, r_{p_C}=13\) mm.

In both cases, the simulations were performed at a minimum of 25 fps, while on average they run at 35 fps using a single CPU computer. The algorithms proposed in this work add less than 3% computational cost to a normal elastic FEM implementation, which corresponds on average to about 0.15 ms for the detection of a cut and 0.15–0.25 ms for the update of the internal structures at every time step.

For a visual comparison, we refer to the overlaid images in Fig. 5. More details can be retrieved from Fig. 6, which displays the values of the sampled Hausdorff distance overlaid on the meshes. Table 2 summarizes the results by giving average and maximal values for the measures mentioned in the “Ground truth” section.

Fig. 6

Visualization of the two directed parts of the Hausdorff distance between the surface X of the simulated solution and the surface Y of the reference solution; low values in blue, high values in red/orange

Short discussion and additional information

In the presented results, our method shows a clear advantage over existing approaches that do not account for (surgical) cuts. Particularly interesting is the impact of the parameters that have to be adapted for the different scenarios—for example the radius \(r_{p_C}\): for a small \(r_{p_C}\) (e.g., kidney 1), the cut advances in several steps (a progressing cut), while for a greater \(r_{p_C}\) (e.g., kidney 2), cut lines are longer, more robust, but less precise.

The results are highly dependent on the deformation. The “Experiments on highly elastic silicone bands” section and previous work [17] show the potentially positive impact of strongly deformed objects on the measure. Nevertheless, in the medical examples, we refrain from using strong deformations, to stay close to the medical workflow.

Limitations and discussion

Clinical feasibility

In the clinical context, the flow of features from a video stream is difficult to obtain, as surgical tools may block the direct view of the organ. On the other hand, we do not yet exploit information present in the images, such as the current state of the surgical intervention. For instance, combining our approach with the identification of surgical tools has a high potential and will be addressed in the future.

Measure on the surface feature points

Previous works [17, 18] use similar measures. While [17] relies on the first component \(\delta _{{{lm}},\mathrm{F}}^{\,t}/\delta _{{{lm}},\mathrm{F}}^{0}\), we use \(\delta _{{{lm}},\mathrm{F}}^{1}/\delta _{{{lm}},\mathrm{F}}^{0}\). The choice of [17] yields a higher stability for the simulation, as the underlying mechanical model regularizes or smooths strong movements or jumps of the surface feature points. Our measure is closer to what happens with the real object and is independent of the choice of the virtual model. Finally, the choice of the measure depends on the application and on whether the surface feature points are trustworthy. Future measures will contain parts purely dependent on the virtual model, e.g., using the stress or the strain inside the biomechanical model, allowing for a better detection of a fracture.

Partial cuts

For the separation surface \(S\), a fixed cut direction is given and does not change during the simulation. Therefore, the current version of our code cannot handle the detection of partial cuts in the direction of the camera. However, the detection of a partial cut should be possible when the cut volume is surrounded by surface feature points.

Conclusion

In this work, we addressed the coupling of a preoperative biomechanical model with the real organ, in such a way that topological changes such as surgical cuts can be taken into account. As a result, the virtual model with internal structures like vessels, the urinary system or tumors is kept coherent with the real organ even after an incision.

The virtual organ is deformed by taking into account its constitutive law and minimizing a stretching energy based on the tracking of surface feature points. From the motion of the different features, we retrieve information about the occurrence of a cut and the location where it has been performed. We then update the biomechanical model and its internal structures in real-time.

Our evaluation shows the potential of our approach on examples ranging from in vitro to in vivo and ex vivo data, using different measurements to compare our method to the specific ground truths. Our experimental data are made available online in order to allow for a better comparison with future works.