1 Introduction

In modern clinical medicine, the diagnosis of liver diseases still relies on inspecting 2D CT images layer by layer. CT has become increasingly important in disease diagnosis thanks to its convenient data acquisition and high resolution. Manual image processing, however, is inefficient, requires a heavy workload, and compromises accuracy. In 1987, Kass et al. proposed the active contour model, Snake [1], a key step for medical image processing. However, the capture force of the traditional snake model is so small that the contour cannot converge at the edge of depressed lesions. In 1998, Xu et al. [2] proposed the GVF-Snake model, which enhances the capture force of the traditional form. But this model needs many iterations to converge, which is time consuming, so it cannot meet the timeliness requirement of virtual surgery.

Currently, owing to the limitations of 2D medical images, researchers often build 3D visualizations with computer graphics and virtual reality technology and process them interactively as required. This supports diseased tissue analysis and clinical treatment planning in liver virtual surgery, and provides doctors with intuitive and reliable references. Traditional surface reconstruction is limited by the volume resolution and must maintain topological consistency. In recent years, with the development of graphics hardware and software platforms, volume visualization has been moving toward point-based techniques. However, the coordinates of surface point sets are noisy, and the in-slice resolution differs from the inter-slice resolution, so the density of surface data is heterogeneous. Finding a precise and efficient method suitable for medical images has therefore become a research hotspot in 3D reconstruction. Surface reconstruction methods based on point cloud data can be roughly divided into explicit and implicit ones. Explicit methods give the precise surface location directly, mostly by solving for triangle patches via Delaunay triangulation and Voronoi diagrams. Seo et al. [3] made an outstanding contribution here with the β-shape algorithm, but local detail is sometimes lost. Cazals et al. [4] presented the Power Crust algorithm, which reconstructs the model by labeling Delaunay tetrahedra, but it does not cope well with noise. Implicit methods, formed by combinations of basis functions, are applicable to complex models but time consuming. Vaněček [5] compressed massive point cloud data by combining RBFs with a greedy algorithm. Wang et al. [6] made a quadratic approximation of the local surface with the PHT-spline method and realized adaptive reconstruction for massive point clouds. However, the precision of these methods still needs to be improved.

Our main contribution is a new method for lesion volume visualization based on CT images; Fig. 1 shows the whole process in brief. First, this paper puts forward an improved GVF-Snake algorithm based on region force to extract lesion contours in batches. Then, a tensor product B-spline surface reconstruction method [7] with IMN optimization is proposed to obtain a high-quality lesion model. Our method has two advantages. First, the extraction process suits complex lesion areas and large numbers of CT images. Second, the pipeline is simple and interactive with high precision, which meets the requirements of virtual surgery.

Fig. 1
figure 1

The whole framework of this paper

2 Methods

2.1 Batch Extraction of Contour

2.1.1 Setting the Initial Contour

The first step of GVF-Snake extraction is to determine the initial contour. In clinical diagnosis, most CT lesions are massive shadows of small area whose edge gray levels are very similar to those of their neighborhoods, which makes automatic determination difficult. Consequently, we adopt a human interaction technique, similar to part of our previous work [8], with some improvements. As shown at the top of Fig. 1, a doctor first clicks about ten lesion contour points on a CT image with the mouse, and a Bézier curve is fitted to these points. The system then maps each contour curve to the next CT image according to the position coordinates of the points, so an accurate lesion contour is obtained simply by correcting the contour according to the regional difference, as described in the next paragraph. In our data, the number of lesion CT images is typically about twenty.

Here we first contrast the two contour areas before and after mapping, defining the common parts as the "public" area and the differing parts as the "particular" area. We then calculate the grayscale averages \(\alpha\) and \(\alpha_{0}\) of the public and particular areas, respectively, using standard digital image processing. Let the gray levels of the segmented area \(\Omega_{1}\) and the background area \(\Omega_{2}\) be \(\varpi_{1}\) and \(\varpi_{2}\), respectively. If a mild lesion area with gray level \(\varpi\) satisfies \(\left| {\alpha - \alpha_{0} } \right| < \lambda\), where we set \(\lambda = \arg \max \left( {\Omega_{1} \Omega_{2} /\left( {\Omega_{1} + \Omega_{2} } \right)^{2} } \right)\) through experiments, then \(\varpi_{1} < \varpi < \varpi_{2}\). In this case we adopt an object-filling method that converts \(\varpi_{1}\) to \(\varpi_{2}\), which prevents missed segmentation. Likewise, when some background area is wrongly segmented, we fill the background area by converting \(\varpi_{2}\) to the average gray level of that area.

2.1.2 GVF-Snake Model with Region Force

On patients' abdominal CT images, shadow blocks of non-target lesions commonly produce a pseudo-boundary effect, and the traditional GVF model cannot reduce it [9]. Xu et al. [10] proposed an automatic medical image segmentation technique based on the concept of region force in 2012, which improves the accuracy and sensitivity of active contour models. To make our model more adaptive to complex regions such as lesions, we simplify Xu's method and incorporate it into the traditional GVF-Snake model. Let R be a lesion area on a CT image \(I\left( {x,y} \right)\); its gray information is defined as:

$$S_{\text{R}} \left( {x,y} \right) = \begin{cases} I\left( {x,y} \right), & \left( {x,y} \right) \in {\text{R}} \\ 0, & \left( {x,y} \right) \notin {\text{R}} \end{cases}$$
(1)

If the contour and energy of the R area are denoted by \(\varGamma \left( s \right)\) and \(E_{\text{R}}\), respectively, and \({\text{C}}\) is a weighting value, the energy function of the model can be written as:

$$E = \int_{0}^{1} {E\left( {\varGamma \left( s \right)} \right){\text{d}}s} {\text{ + C}}E_{\text{R}}$$
(2)

Let \(H\left( {S_{\text{R}} \left( {x,y} \right)} \right)\) be the conversion factor between image grayscale and active contour fitting. According to the positional relation between the R area and the initial contour, Green's theorem (a consequence of the divergence theorem) converts the region integral into a boundary integral:

$$\begin{aligned} E_{\text{R}} & = \iint\limits_{\text{R}} {H\left( {S_{\text{R}} \left( {x , { }y} \right)} \right)}{\text{d}}x{\text{d}}y \\ & = \frac{1}{2}\oint {P_{\text{R}} } \left( {x , { }y} \right){\text{d}}x + Q_{\text{R}} \left( {x , { }y} \right){\text{d}}y \, \\ & = \frac{1}{2}\oint {\left[ { - \int_{0}^{y} {H\left( {S_{\text{R}} \left( {x , { }z} \right)} \right)} {\text{d}}z} \right]} {\text{d}}x \\ & \quad + \left[ {\int_{0}^{x} {H\left( {S_{\text{R}} \left( {z , { }y} \right)} \right)} {\text{d}}z} \right]{\text{d}}y \\ \end{aligned}$$
(3)

Here we define the region force as follows, which helps the contour fit into deep concave areas of the lesion:

$$F_{\text{R}} = {\text{C}}\left( {P_{\text{R}} \left( {x , { }y} \right) + Q_{\text{R}} \left( {x , { }y} \right)} \right)$$
(4)

The energy reaches its minimum when the contour is in equilibrium, i.e. \(F_{\text{int}} + F_{\text{GVF}} + F_{\text{R}} = 0\), where \(F_{\text{int}}\) is the internal force of the contour and \(F_{\text{GVF}} = F_{\text{ext}} = \left[ {u\left( {x,y} \right),v\left( {x,y} \right)} \right]\) is the external force generated by the gradient vector field.

2.2 Preprocessing of Point Cloud Data

CT images are scanned at a constant slice thickness, so the thickness of each slice can be obtained from the actual performance parameters of the medical equipment, which turns the 2D point sets into 3D ones. Because the sampling resolution within each CT image is higher than the resolution between layers, we need to homogenize the point cloud to meet the requirement of building the distance vector field later. We first adopt the point interpolation method provided in PCL (Point Cloud Library) [11]; its core idea is to compute an MLS (Moving Least Squares) surface and a Voronoi diagram from the point cloud, and to take the farthest Voronoi vertex as the new sampling point.

After interpolation the spatial distribution of the data is fairly dense, and applying it directly to the following steps would cost much time and memory. We therefore simplify the data while keeping the geometric features of the model, so as to improve efficiency. In this paper, we split and merge the spatial point cloud data category by category within the BIRCH algorithm framework [12] to realize adaptive simplification of the point cloud. The basic idea is to regard the point cloud space as one class and split it continuously according to the covariance; finally, the representative points satisfying the MCD (Minimum Covariance Determinant) criterion are chosen as the result.

2.3 Establish the Distance Vector Field

2.3.1 Estimation and Orientation of the Normal Vector

As there is no connectivity information among the spatial points, the normal vector of a point can only be estimated from its k nearest neighbors: the normal is taken as the eigenvector corresponding to the minimum eigenvalue of the neighborhood's covariance matrix.

It is then required to determine the direction of each normal vector and set up consistent interior and exterior orientations. An improved redirection algorithm based on local surface fitting is applied in our method. As shown in Fig. 2, the point with the largest x coordinate is first selected from the sampling points as the initial point, and its normal vector is forced to point in the positive x direction. The direction of the normal vector is then diffused, with a tangent constraint criterion defining the direction of diffusion. Based on the normal vector similarity of adjacent points, the normal most similar to the current one is searched and corrected: if the inner product of the two normal vectors is negative, the most similar normal is flipped; otherwise, no change is required.

Fig. 2
figure 2

A diagram of the normal vector establishment
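The flip rule above can be sketched as follows. This is an illustrative reduction of the redirection step under our own assumption of a fixed propagation chain; the real method walks a k-nearest-neighbour graph ordered by normal similarity, and `orientNormals` is a hypothetical name:

```cpp
#include <vector>
#include <cstddef>

struct Vec3 { double x, y, z; };

inline double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Greedy orientation propagation: the seed normal is forced toward +x,
// then each subsequent normal is flipped whenever its inner product with
// the previously oriented one is negative. Sketch only -- the normals are
// assumed to arrive in neighbour order.
void orientNormals(std::vector<Vec3>& normals) {
    if (normals.empty()) return;
    if (normals[0].x < 0)                         // seed points toward +x
        normals[0] = {-normals[0].x, -normals[0].y, -normals[0].z};
    for (std::size_t i = 1; i < normals.size(); ++i)
        if (dot(normals[i], normals[i - 1]) < 0)
            normals[i] = {-normals[i].x, -normals[i].y, -normals[i].z};
}
```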

2.3.2 Calculate the Distance Value for Grid

Given that the space grid density is directly related to graphic accuracy and computation cost, users first specify the grid density interactively and divide the spatial domain roughly into small voxels according to the complexity of the lesion shape before the space is subdivided. A distance value is then allocated to the grid points of all subdivided voxels, defined as the shortest distance \(d_{\hbox{min} }\) between a grid point and the model. To improve the efficiency of the algorithm without compromising the basic structure of the model, an approximate distance field is used for the calculation, i.e. as shown in Fig. 3, the nearest point \(P_{i} \left( {x_{i} ,y_{i} ,z_{i} } \right)\) to the grid point \(G\left( {x_{0} ,y_{0} ,z_{0} } \right)\) is chosen from the point set and their Euclidean distance approximately replaces \(d_{\hbox{min} }\):

Fig. 3
figure 3

Calculation of the distance value

$$\begin{aligned} d_{ \hbox{min} } & \approx\varvec{\rho}\left( {G,P_{i} } \right) \\ & = \sqrt {\left( {x_{i} - x_{0} } \right)^{2} + \left( {y_{i} - y_{0} } \right)^{2} + \left( {z_{i} - z_{0} } \right)^{2} } \\ \end{aligned}$$
(5)
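Eq. (5) amounts to a nearest-neighbour query per grid node. A minimal brute-force sketch is given below; in practice a k-d tree (e.g. from PCL) would replace the linear scan, and `gridDistance` is a hypothetical name:

```cpp
#include <vector>
#include <cmath>
#include <limits>

struct P3 { double x, y, z; };

// Approximate distance field of Eq. (5): for grid point g, return the
// Euclidean distance to the nearest point of the simplified point cloud.
// Brute force, O(N) per grid node.
double gridDistance(const P3& g, const std::vector<P3>& cloud) {
    double dmin = std::numeric_limits<double>::max();
    for (const P3& p : cloud) {
        double dx = p.x - g.x, dy = p.y - g.y, dz = p.z - g.z;
        double d = std::sqrt(dx * dx + dy * dy + dz * dz);
        if (d < dmin) dmin = d;
    }
    return dmin;
}
```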

2.4 Tensor Product B-Spline Surface Reconstruction

2.4.1 Basic Theory

A tensor product B-spline surface \(f(u,v)\) is expressed by the form of B-spline basis functions [13] and defined by:

$$f(u,v) = \sum\limits_{l} {\sum\limits_{r} {{\mathbf{C}}_{lr} L(u)R(v)} }$$
(6)

where the \({\mathbf{C}}_{lr}\) are fitting coefficients with \(l = 1,2, \ldots ,q\) and \(r = 1,2, \ldots ,h\), and \(L(u)\) and \(R(v)\) are B-spline basis functions corresponding to the model's isometric knot sequences \(\left\{ {\varphi_{l} } \right\}\) and \(\left\{ {\varsigma_{r} } \right\}\), respectively. Given \(\varTheta\) as the closed region of the distance field, the fitting surface, denoted by the zero-set of \(f(u,v)\), is given by \(\left\{ {f(u,v) = 0\left| {(u,v) \in \varTheta } \right.} \right\}\).

2.4.2 Step-by-Step Dis-Field-Based Algorithm for Surface Fitting

Let

$${\mathbf{P}}_{i,j} = \left\{ {\left( {x_{i,j} ,y_{i,j} ,z_{i,j} } \right)\left| {i \in \left[ {1,N_{i} } \right]} \right.,j \in \left[ {1,N_{j} } \right]} \right\}$$

be the point sets from the space division of the distance field. We then use the B-spline surface mentioned above to fit these points, which is expressed as:

$${\mathbf{P}}_{i,j} = \sum\limits_{l = 1}^{q} {\sum\limits_{r = 1}^{h} {{\mathbf{C}}_{lr} } } L(u_{i} )R(v_{j} )$$
(7)

To obtain this fitting surface, we must relate each point to the undetermined coefficients. In the distance field space, let \(d(u_{i} ,v_{j} )\) be the distance value assigned to grid point \(G(u_{i} ,v_{j} )\), \(\omega\) the smoothing factor of the model and \(E_{s}\) the surface energy; the error function between the tensor product B-spline function and the point cloud is then:

$$ERF = \sum\limits_{i = 1}^{{N_{i} }} {\sum\limits_{j = 1}^{{N_{j} }} {\left( {{\mathbf{P}}_{i,j} - d(u_{i} ,v_{j} )} \right)^{2} } } + \omega E_{s}$$
(8)

Our target is to balance the fitting surface by minimizing this error function. Firstly we rewrite the tensor product B-spline surface function as:

$$\begin{aligned} f(u,v) & = \sum\nolimits_{l = 1}^{q} {L(u)\left( {\sum\nolimits_{r = 1}^{h} {{\mathbf{C}}_{lr} R(v)} } \right)} \\ & = \sum\nolimits_{l = 1}^{q} {\hbar_{l} (v)L(u)} ,\quad \hbar_{l} (v) = \sum\nolimits_{r = 1}^{h} {{\mathbf{C}}_{lr} R(v)} \\ \end{aligned}$$
(9)

From this form, ignoring the factor \(\omega\), the tensor product B-spline surface can be computed in two steps. First, each set of grid points sharing the same \(u\) component is fitted by the B-spline function \(\hbar_{i} \left( v \right) = \sum\nolimits_{r = 1}^{h} {\mu_{ir} R(v)}\) with coefficients \(\mu_{ir}\), \(r \in \left[ {1,h} \right]\). Minimizing the fitting error of each row gives:

$$\begin{aligned}\Phi \left( {\mu_{i} } \right) & = \sum\nolimits_{j = 1}^{{N_{j} }} {\left[ {\hbar_{i} \left( {v_{j} } \right) - d(u_{i} ,v_{j} )} \right]^{2} } \\ & = \sum\nolimits_{j = 1}^{{N_{j} }} {\left[ {\sum\nolimits_{r = 1}^{h} {\mu_{ir} R(v_{j} )} - d(u_{i} ,v_{j} )} \right]^{2} } \\ \end{aligned}$$
(10)

Second, the coefficients \(\mu_{ir}\) are themselves fitted in the \(u\) direction by the B-spline function \(\lambda_{r} \left( u \right) = \sum\nolimits_{l = 1}^{q} {\tau_{lr} L(u)}\) with coefficients \(\tau_{lr}\), \(l \in \left[ {1,q} \right]\), which gives:

$$\begin{aligned} \varPsi \left( {\tau_{r} } \right) & = \sum\nolimits_{i = 1}^{{N_{i} }} {\left[ {\lambda_{r} \left( {u_{i} } \right) - \mu_{ir} } \right]^{2} } \\ & = \sum\nolimits_{i = 1}^{{N_{i} }} {\left[ {\sum\nolimits_{l = 1}^{q} {\tau_{lr} L(u_{i} )} - \mu_{ir} } \right]^{2} } \\ \end{aligned}$$
(11)

Setting the partial derivatives to zero,

$$\frac{{\partial \varPhi \left( {\mu_{i} } \right)}}{{\partial \mu_{i} }} = 0,\quad \frac{{\partial \varPsi \left( {\tau_{r} } \right)}}{{\partial \tau_{r} }} = 0,$$

the parameters of the fitting surface are determined by solving two families of linear equations of order \(h\) and \(q\), respectively. Splitting the 2D fitting into two 1D fittings in this way reduces the order of the systems and saves a great amount of calculation.

2.5 Model Optimization and Rendering

In the actual modeling process, the smoothing coefficient \(\omega\) of the reconstructed surface is an important parameter [14] reflecting the smoothness of the model. However, smoothness and accuracy are often contradictory: if \(\omega\) is too large, some details of the model are ignored; if it is too small, the model looks rough and shows obvious seams. Therefore, in virtual surgery we need to optimize the model interactively according to the actual conditions and add realistic texture to strengthen the visual perception, so as to meet the needs of medical diagnosis.

2.5.1 Basic Idea

Assume that \(\left\{ {\varvec{P}_{i} \left| {i = 1,2, \ldots ,N} \right.} \right\}\) is the point cloud on the tensor product B-spline surface and \(\left\{ {\varvec{P}_{i}^{*} \left| {i = 1,2, \ldots ,N} \right.} \right\}\) is the point cloud after IMN optimization. Each node marches along its normal:

$$\varvec{P}^{*} = \varvec{P} + d\varvec{n}_{\varvec{p}}$$

where \(d\) is the point's relative distance before and after optimization and \({\mathbf{n}}_{p} = (n_{x} ,n_{y} ,n_{z} )\) is the marching direction of the node. If we regard \(d\) as a function of \({\mathbf{n}}_{p}\), we can write

$$d\left( {\varvec{n}_{\varvec{p}} } \right) = {{\varvec{q} \cdot \varvec{n}_{\varvec{p}} } \mathord{\left/ {\vphantom {{\varvec{q} \cdot \varvec{n}_{\varvec{p}} } {\left\| {\varvec{n}_{\varvec{p}} } \right\|^{2} }}} \right. \kern-0pt} {\left\| {\varvec{n}_{\varvec{p}} } \right\|^{2} }},\quad \varvec{q} = \left( {q_{1} ,q_{2} ,q_{3} } \right) = {\varvec{W} \mathord{\left/ {\vphantom {\varvec{W} {\sum\nolimits_{i = 1}^{N} {\omega_{i} } }}} \right. \kern-0pt} {\sum\nolimits_{i = 1}^{N} {\omega_{i} } }} - \varvec{P}$$

where \(\varvec{W} = \left( {\sum\nolimits_{i = 1}^{N} {\omega_{i} x_{i} } ,\sum\nolimits_{i = 1}^{N} {\omega_{i} y_{i} } ,\sum\nolimits_{i = 1}^{N} {\omega_{i} z_{i} } } \right)\) is the weight vector and the \(\omega_{i}\) are point weights.

2.5.2 Realization of IMN Algorithm

To simplify the calculation, we convert the optimization into solving for the minimum of \(d\left( {\varvec{n}_{\varvec{p}} } \right)\) by the Lagrange multiplier method; we first solve the following equation:

$$\frac{{\partial d\left( {\varvec{n}_{\varvec{p}} } \right)}}{{\partial \varvec{n}_{\varvec{p}} }} = 0$$
(12)

By setting \(\lambda\) as a Lagrange multiplier, we obtain the following equation under the constraint condition that \(\left\| {\varvec{n}_{\varvec{p}} } \right\|^{2} - 1 = 0\):

$$L = d\left( {\varvec{n}_{\varvec{p}} } \right) + \lambda \left[ {\left\| {\varvec{n}_{\varvec{p}} } \right\|^{2} - 1} \right] = \varvec{q} \cdot \varvec{n}_{\varvec{p}} { + }\lambda \left[ {\left\| {\varvec{n}_{\varvec{p}} } \right\|^{2} - 1} \right]$$
(13)

Setting the partial derivatives of \(L\) with respect to the \(n_{x}\), \(n_{y}\), \(n_{z}\) and \(\lambda\) components to zero, we obtain

$${\mathbf{n}}_{p} = \left( { - \frac{{q_{1} }}{2\lambda }, - \frac{{q_{2} }}{2\lambda }, - \frac{{q_{3} }}{2\lambda }} \right)$$
(14)

According to the actual conditions, we choose that:

$$\omega_{i} = \frac{1}{{\left\| {\varvec{P}_{i} - \varvec{P}} \right\|^{4} + 1}} \in \left( {0,1} \right]$$
(15)
Let

$$\varvec{Q}_{\varvec{P}} = {{\left( {\sum\nolimits_{i = 1}^{n} {{\rm Z}_{i} \varvec{Q}_{i} } } \right)} \mathord{\left/ {\vphantom {{\left( {\sum\nolimits_{i = 1}^{n} {{\rm Z}_{i} \varvec{Q}_{i} } } \right)} {\left( {\sum\nolimits_{i = 1}^{n} {{\rm Z}_{i} } } \right)}}} \right. \kern-0pt} {\left( {\sum\nolimits_{i = 1}^{n} {{\rm Z}_{i} } } \right)}}$$

where \({\rm Z}_{i}\) is the area of the ith patch and \(\varvec{Q}_{i}\) is its normal vector. Thus the optimal marching direction for nodes is:

$$\varvec{n}_{{\varvec{p}\left( {\text{opt}} \right)}} = \frac{{\varvec{Q}_{\varvec{P}} }}{{\left\| {\varvec{Q}_{\varvec{P}} } \right\|}} .$$
(16)
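Eq. (16) is an area-weighted average of patch normals followed by normalisation, which the following sketch computes; `optimalDirection` is a hypothetical name and we assume the areas \(Z_i\) and patch normals \(\varvec{Q}_i\) are already available:

```cpp
#include <vector>
#include <cmath>
#include <cstddef>

struct V3 { double x, y, z; };

// Optimal marching direction of Eq. (16): average the patch normals Q_i
// with their areas Z_i as weights, then normalise the result.
V3 optimalDirection(const std::vector<double>& Z, const std::vector<V3>& Q) {
    V3 s{0, 0, 0};
    double zsum = 0;
    for (std::size_t i = 0; i < Q.size(); ++i) {
        s.x += Z[i] * Q[i].x;
        s.y += Z[i] * Q[i].y;
        s.z += Z[i] * Q[i].z;
        zsum += Z[i];
    }
    s.x /= zsum; s.y /= zsum; s.z /= zsum;            // Q_P
    double n = std::sqrt(s.x * s.x + s.y * s.y + s.z * s.z);
    return {s.x / n, s.y / n, s.z / n};               // Q_P / ||Q_P||
}
```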

2.5.3 Texture Mapping

In computer graphics, texture is mostly two-dimensional and reflects detailed information about the surface structure. In this part, we mainly use the idea of "lapped textures" [15], proposed by Praun et al. in 2000, so we only give a brief introduction. By observing some instances, we find that the texture of the liver lesion model is an unstructured homogeneous texture. We therefore assume that the shape of the texture can be pre-computed, and then blend the created texture blocks with an alpha mask to eliminate the seam effect [16].

Next, we assign a tangential field on the model surface to determine the size and direction of the textures. Because the lesion texture is homogeneous, we only need to specify the global scale field of the space grid interactively, and users can freely control the scope of the space to achieve efficient mapping. In the process of texture mapping, we first determine the transformation between each vertex of the model and the texture space, i.e. a section mapping from 3D to 2D. The texture then spreads over the whole model through added patches, and finally the matching between the surface vector field and the texture coordinates leaves the model fully covered with texture.

3 Results

To verify the feasibility of the method presented above, we applied the algorithm to some examples of liver disease. The algorithm is implemented in VC++ 6.0 and OpenGL 4.1.0 on a Core i7-4700HQ 2.4 GHz CPU with 4 GB RAM and an NVIDIA GeForce GT 745M graphics card.

3.1 Contour Extraction Module

First, we number the slices according to the scanning order of the original CT images and store them in the same folder. When the program runs, the user reads the first slice and interactively clicks a few key points on the lesion contour to serve as the control points of a Bézier curve. Results show that choosing 8–10 points in the first slice achieves a good fitting effect. After mapping the first fitted curve to the second slice, region filling and correction are performed; if the difference of the fitted area exceeds a threshold, the system prompts the user to choose control points again. In this way each preliminary initial contour is determined. Finally, the improved GVF-Snake algorithm is applied, and the accurate fitting results are shown in Fig. 4: the left one is a liver tumor, and the right one is a liver cystic lesion.

Fig. 4
figure 4

Lesion contour in one liver tumor CT image

Here, we use CvPoint, a basic data type of OpenCV, to represent a point's coordinates as a pair of 2D integers; its constructor is inline CvPoint cvPoint(int x, int y). vector is a dynamic array in the C++ standard library that can store and operate on a variety of data types, so we define a vector instance to store the extracted contour points as std::vector<CvPoint> CounterPoints. Table 1 shows some extreme values of these points' 3D coordinates.

Table 1 The extreme value information of point cloud coordinates

3.2 Surface Reconstruction Module

PCL (Point Cloud Library) is a specialized library for processing point cloud data; its web site is http://pointclouds.org/. As shown in Fig. 5, we created a PointCloud object to convert the point cloud format to .PCD as pcl::PointCloud<pcl::PointXYZ> cloud. We then run the algorithms proposed in the preceding sections and render through OpenGL. For texture mapping, thanks to our medical image library for providing precious material, we synthesize 2D textures based on realistic texture samples of the liver lesion surface. Finally we obtain the high-quality lesion models shown in Fig. 6; dragging with the left mouse button lets us observe the spatial structure from every direction.

Fig. 5
figure 5

The surface point cloud distribution map of liver tumor and liver cystic lesion respectively

Fig. 6
figure 6

Liver lesion models diagram from different angles. a Front view, back view and top view of the liver tumor. b Front view, back view and top view of the liver cystic lesion

4 Discussion

First, we verify the accuracy of the algorithm by calculating the average error between the reconstructed model and the original point cloud. We traverse all points in the point cloud space; a point not on the model surface is called an "offset point". Mapping this point to the surface yields a "mapping point"; we then search for its nearest point on the surface and use an orthogonal approximation technique to gradually move the "nearest point" toward the "mapping point". Finally, we obtain a set of minimum distances between the model and the point cloud, denoted \(\left\{ {S_{{\min_{i} }} } \right\}\), so the average error can be written as:

$$\sigma = \frac{1}{\text{M}}\sum\limits_{{i = 1}}^{\text{M}} {S_{{\min_{i} }} } \times 100\%$$
(17)

where M is the number of points. We compared the basic situation before and after optimization, as shown in Table 2, and made a comparison with the standard MC (marching cubes) algorithm. The table shows that MC takes a tremendous amount of time and is very susceptible to the complexity of the lesion model, while our algorithm takes only a few seconds to obtain a high-accuracy model, with the optimization itself finishing within 0.05 s. The average error of the model decreases by roughly a percentage point, which improves the model's precision.

Table 2 Model information before and after IMN optimization compared with MC algorithm

In terms of efficiency, we compared our algorithm with previous point-based CT visualization work [17–19] under the condition that the model's average error is less than 0.20%, as shown in Table 3, where ref. [17] gives a rapid point cloud reconstruction algorithm, ref. [18] presents a tomographic surface reconstruction algorithm from point clouds, and ref. [19] proposes a method to efficiently and accurately reconstruct continuous surfaces from point clouds. Clearly, our method greatly reduces the calculation time of surface reconstruction for medical images and provides a solid foundation for real-time modeling of lesions.

Table 3 Efficiency analysis

5 Conclusion

In this paper, a complete visualization method for lesion tissue was proposed to improve the realism and efficiency of virtual surgery. Our pipeline includes CT image preprocessing, point cloud processing, surface model reconstruction and texturing. Experiments show that our method performs efficiently and interactively in the reconstruction work, and that it maintains good accuracy for complex structures. Our algorithm also has some limitations. For example, there are still a few visual deviations between the built model and the actual one, owing to the low resolution of the texture samples and our neglect of lighting. In addition, Table 3 shows that rendering still takes too much time, because our method must render repeatedly to cover the model completely. How to improve the efficiency of model rendering will therefore be the focus of our next study.