INTRODUCTION

The task of visualization is to make objects consisting of data arrays, or of several connected data sets, visible to an observer. One of the important problems of volume rendering is the visualization of samples of a three-dimensional scalar field obtained from sources such as X-ray computed tomography or ultrasonic scanners. It is also of current interest to visualize structural features, in particular, the boundaries of regions with identical scalar values, as distinct from the scalar values themselves.

Conceptually, the visualization process for different data sets includes three transformation stages that yield the final image:

1. Classification. This stage transforms the original data set into a new set of object classes that explicitly contain the information we want to see.

2. Modeling. Transformation of the extracted feature sets into the objects of the visual model.

3. Rendering. Visualization of the objects in the model.

Depending on the specific implementation, the order of these transformations may change, or some of them may be merged.

At the classification stage, the features of interest are extracted; these can be, for instance, the boundaries of regions with identical values. Classification may be binary or non-binary. On the one hand, binary classification gives all-or-nothing results with regard to the probability of a feature's existence. On the other hand, non-binary classification may give any value in the range from 0 to 1. Both kinds of classification have their advantages and disadvantages.
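
As a minimal illustration (the threshold and ramp width below are hypothetical parameters, not values prescribed by the paper), a binary classifier maps each scalar sample to 0 or 1, while a non-binary one returns a membership value in the range [0, 1]:

```cpp
#include <algorithm>

// Binary classification: all-or-nothing membership of a scalar
// sample in the feature class (e.g., "bone" in a CT scan).
double classifyBinary(double value, double threshold) {
    return value >= threshold ? 1.0 : 0.0;
}

// Non-binary classification: a linear ramp of total width
// 2 * halfWidth around the threshold yields values in [0, 1].
double classifyFuzzy(double value, double threshold, double halfWidth) {
    double t = (value - (threshold - halfWidth)) / (2.0 * halfWidth);
    return std::clamp(t, 0.0, 1.0);
}
```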

At the modeling stage, the extracted set of attributes is put into correspondence with the attributes of visual information carriers in the visual model. The visual model is a modeled 3D scene in which information-carrying visual 3D primitives, the voxels created by non-binary classification, are placed. A voxel may have various attributes, including opacity and emittance, which can have color components.

The third stage of the transformations is object rendering. Ray casting is the most widespread approach to volume rendering due to its flexibility in generating high-quality images [1, 2]. The large amount of homogeneous data contained in a volume model parallelizes well on the software-hardware shaders of the graphics processing unit (GPU). Today, a common GPU is able to render a single high-resolution volume at a high frame rate. However, rendering a single homogeneous volume is not sufficient for many applications.

Combining different volume data sets is becoming popular in medical applications, where both anatomical and functional structures are required. Multivolume rendering is the rendering of several volumes; it is needed when processing several data sets that usually have different orientations and resolutions. In medical visualization, anatomical images are obtained by computed tomography, magnetic resonance imaging (MRI), and ultrasonography, while functional images are obtained by positron emission tomography, single-photon emission computed tomography, and functional MRI [3, 4]. Moreover, models of medical tools and segmentation results are required for computer-assisted surgery. Modern methods have limited capabilities for the interactive study of multivolume data in combination with polygonal models. Simultaneous rendering of several volumes in combination with surfaces is more difficult than visualization of individual volumes because of the need to process volume overlaps and color mixing [5, 6]. Resampling the volumes into a unified coordinate system is disadvantageous because it leads to a loss of quality or to increased memory requirements. The combination of volumes with surfaces is a necessary requirement in virtual surgery. Functional models are well suited for embedding extrinsic objects (tools, implants, etc.) into volume data.

The present work is aimed at creating a method for the visualization of multivolume data and functionally defined surfaces for medical applications.

APPROACH TO DEFINING SURFACES

Visualization technologies for medical applications use a high-resolution description of object surfaces based on second-order functions of deviation from a basic quadric [7]. The function is defined by a second-order algebraic inequality in three unknowns \(x,y,z\) of the form \(F(x,y,z)\geqslant 0\). We consider surfaces to be closed subsets of Euclidean space defined by the describing function \(F(x,y,z)\geqslant 0\), where \(F\) is a continuous real function and \(x,y,z\) are the coordinates of a point in \(\mathbb{E}^{3}\). Surfaces are constructed from quadrics and represented as a composition of the basic quadric and perturbations:

$$F^{\prime}(x,y,z)=F(x,y,z)+\sum\limits_{i=1}^{N}f_{i}R_{i}(x,y,z),$$
(1)

where \(f_{i}\) is the form factor and \(R_{i}(x,y,z)\) is the perturbation:

$$R_{i}(x,y,z)=\begin{cases}Q_{i}^{3}(x,y,z)\quad\text{if}\quad Q_{i}(x,y,z)\geqslant 0;\\ 0\quad\text{if}\quad Q_{i}(x,y,z)<0,\end{cases}$$
(2)

where \(Q_{i}(x,y,z)\) is the perturbation quadric.
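
As a sketch of Eqs. (1) and (2), the function below evaluates a basic quadric perturbed by a list of weighted perturbation quadrics (for brevity, the quadrics are written without cross terms; the coefficients are placeholders, not values from the paper):

```cpp
#include <utility>
#include <vector>

// Quadric without cross terms: Q(x,y,z) = a x^2 + b y^2 + c z^2
//                                        + d x + e y + f z + g.
struct Quadric {
    double a, b, c, d, e, f, g;
    double eval(double x, double y, double z) const {
        return a*x*x + b*y*y + c*z*z + d*x + e*y + f*z + g;
    }
};

// Perturbation R_i of Eq. (2): the cube of the perturbing quadric
// where it is non-negative and zero elsewhere, so the blend is smooth.
double perturbation(const Quadric& q, double x, double y, double z) {
    double v = q.eval(x, y, z);
    return v >= 0.0 ? v * v * v : 0.0;
}

// Perturbed function F' of Eq. (1): the basic quadric plus the
// weighted sum of perturbations; the solid is F'(x,y,z) >= 0.
double evalPerturbed(const Quadric& base,
                     const std::vector<std::pair<double, Quadric>>& terms,
                     double x, double y, double z) {
    double f = base.eval(x, y, z);
    for (const auto& [formFactor, q] : terms)
        f += formFactor * perturbation(q, x, y, z);
    return f;
}
```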

The geometric model provides the means for constructing objects and compositions of varying complexity. For this purpose, we use a set of geometric operations \(\Phi\), defined mathematically as follows [8]:

$$\Phi_{j}{:}\quad M^{1}+M^{2}+\dots+M^{n}\to M.$$
(3)

To build models of complex objects on the basis of perturbation functions, we use set-theoretic operations, namely the Boolean operations of union and intersection. A binary operation (\(n=2\)) in (3) on objects \(G_{1}\) and \(G_{2}\) is the operation \(G_{3}=\Phi_{j}(G_{1},G_{2})\) defined by

$$f_{3}=\psi(f_{1}(\mathbf{X}),f_{2}(\mathbf{X}))\geqslant 0,$$
(4)

where \(\psi\) is a continuous real function of two variables.
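
The paper does not fix a particular \(\psi\); a common minimal choice (one option among several, R-functions being a smoother alternative) uses min and max on the defining functions, since for solids given by \(f(\mathbf{X})\geqslant 0\) the intersection is where both functions are non-negative and the union is where at least one is:

```cpp
#include <algorithm>

// Simplest continuous choices of psi in Eq. (4) for solids f(X) >= 0.
double opUnion(double f1, double f2)        { return std::max(f1, f2); }
double opIntersection(double f1, double f2) { return std::min(f1, f2); }
double opSubtraction(double f1, double f2)  { return std::min(f1, -f2); }
```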

Experimental investigations showed that the compression coefficients relative to polygonal objects vary from 10 to 100 and higher, depending on the degree of refinement of the test models [9]. Thus, the size of an implant database can be reduced (Fig. 1).

Fig. 1. Functionally defined implants.

RENDERING FUNCTIONALLY DEFINED SURFACES

This task is similar to volume data visualization, in which a density function is prescribed. The main difference is that volume data are discrete, whereas in our case the density function is defined analytically, which allows a more efficient search for the intersection points between rays and surfaces. For the program to work efficiently, the peculiarities of the hardware must be taken into account. In recent years, growth in computational capacity has come from parallel computation; therefore, the method is adapted to the computational resources of the graphics processing unit. In [10], a method was described for visualizing functionally defined objects without recursive division of the object space over a quaternary tree; instead, the cube is divided into parts in the \(XY\) plane according to the pixels of the image. This approach was chosen because of the specific architecture of graphics processing units, in which conditional branches are costly operations. However, it requires scanning the entire screen space with rays even if the projection of the object onto the screen is small. In addition to conditional branches, there is a load-balancing problem, and the overhead of solving it exceeds the performance gain from recursive division of the object space over a quaternary tree. Owing to the large number of processing units, several rays are tested simultaneously: most graphics processing units supporting CUDA (Nvidia's software-hardware architecture for parallel computing, which increases computing performance by using Nvidia GPUs) have at least 128 scalar computing cores, so a relatively large part of the cube is culled at once. Nevertheless, for small and moderate-size objects this method is not efficient. Moreover, it is poorly suited to the combined rendering of surfaces and volumes, which will be discussed below.

Hence, another method was developed [11]. The rasterization process is divided into two stages and is distributed between the central processing unit (CPU) and the graphics processing unit. The CPU divides the object space over a quaternary tree down to cells of a certain size. The advantage of this approach is the possibility of removing, at an early stage, large parts of the cube that do not contain the given object. The primitives of the intermediate description are fragments of intersection between geometric objects and cells; by fragments we mean the parts of surfaces (1) and volumes belonging to a particular cell. All remaining computations are executed on the graphics processing unit.
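
A minimal sketch of the CPU stage under stated assumptions: each object is approximated by a screen-space bounding rectangle, and the leaf size matches the \(8\times 8\)-pixel cells mentioned below; the real system builds per-cell fragment lists in the same spirit:

```cpp
#include <utility>
#include <vector>

struct Rect { int x0, y0, x1, y1; };  // half-open screen rectangle

// Recursive quaternary-tree division of screen space. Cells touched
// by no object projection are discarded early; surviving cells of the
// leaf size are handed to the GPU together with the indices of the
// objects (fragments) that intersect them.
void subdivide(const Rect& cell, const std::vector<Rect>& bounds,
               int leafSize,
               std::vector<std::pair<Rect, std::vector<int>>>& leaves) {
    if (cell.x0 >= cell.x1 || cell.y0 >= cell.y1) return;
    std::vector<int> hits;
    for (int i = 0; i < (int)bounds.size(); ++i) {
        const Rect& b = bounds[i];
        if (b.x0 < cell.x1 && cell.x0 < b.x1 &&
            b.y0 < cell.y1 && cell.y0 < b.y1)
            hits.push_back(i);
    }
    if (hits.empty()) return;  // large empty parts removed at once
    int w = cell.x1 - cell.x0, h = cell.y1 - cell.y0;
    if (w <= leafSize && h <= leafSize) {
        leaves.emplace_back(cell, std::move(hits));
        return;
    }
    int mx = cell.x0 + w / 2, my = cell.y0 + h / 2;
    Rect q[4] = {{cell.x0, cell.y0, mx, my}, {mx, cell.y0, cell.x1, my},
                 {cell.x0, my, mx, cell.y1}, {mx, my, cell.x1, cell.y1}};
    for (const Rect& c : q) subdivide(c, bounds, leafSize, leaves);
}
```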

SEMI-TRANSPARENT VOLUMES

In the course of scanning the scene along the \(z\) coordinate (scanning the volume in depth), the algorithm works until the volume is exhausted or until the accumulated opacity exceeds some threshold value. The difference from the ray tracing algorithm is that we do not trace secondary rays, in order to increase the computation speed. In addition, when modeling the passage of light through semi-transparent volumes, we ignore the refraction and attenuation of secondary rays. Only the reflection and attenuation of light on its way from the object to the observer's eye remain in this model.

The formula for computing the pixel color may be expressed as [12]

$$P_{\lambda}=\sum\limits_{n=0}^{N}I_{\lambda n}\Omega_{n}\prod\limits_{m=0}^{n-1}(1-\Omega_{m}),$$
(5)

where \(P_{\lambda}\) is the final color of the pixel, \(\lambda\) is the \(r\), \(g\), or \(b\) color channel (red, green, or blue, respectively), \(I_{\lambda n}\) is the intensity in the \(n\)th voxel computed by the Phong lighting model, \(\Omega_{n}\) is the opacity of the \(n\)th voxel, \(I_{\lambda 0}\) is the light reflected from the first point on the scanning ray, \(I_{\lambda N}\) is the background color, and \(\Omega_{N}=1\). Crossing the threshold may be detected as follows: if at the \(k\)th step the total transparency \((1-\Omega_{0})(1-\Omega_{1})\dots(1-\Omega_{k-1})\) becomes less than some \(\varepsilon\), then the contribution of all voxels following the \(k\)th voxel is small; therefore, scanning may be terminated.
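
A minimal sketch of this accumulation for one color channel per Eq. (5), with the early-termination test just described (the sample list and \(\varepsilon\) are placeholders for what the ray scan would produce):

```cpp
#include <vector>

struct Sample { double intensity, opacity; };  // I_n and Omega_n along a ray

// Front-to-back compositing per Eq. (5). Accumulation stops once the
// transmittance (1 - Omega_0)...(1 - Omega_{k-1}) falls below eps,
// since all remaining voxels together contribute less than eps.
double compositeRay(const std::vector<Sample>& samples, double eps) {
    double color = 0.0, transmittance = 1.0;
    for (const Sample& s : samples) {
        color += s.intensity * s.opacity * transmittance;
        transmittance *= 1.0 - s.opacity;
        if (transmittance < eps) break;  // early ray termination
    }
    return color;
}
```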

There is a problem with intersecting transparent objects: a material must be chosen for the spatial region belonging to both bodies. How can the material of the intersection region be chosen without solving a problem of logical construction? The solution consists in assigning priorities to the objects of the scene (Fig. 2).
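
A sketch of the priority rule, assuming each object carries an integer priority (the paper does not specify how priorities are encoded): a sample point lying inside several intersecting objects takes the material of the highest-priority one, so no Boolean reconstruction of the scene is required.

```cpp
#include <utility>
#include <vector>

// For a sample point inside several objects, pick the material of the
// object with the highest priority; returns -1 outside all objects.
int selectMaterial(const std::vector<std::pair<int, int>>& hits) {
    int bestPriority = -1, material = -1;
    for (auto [priority, materialId] : hits)  // hits: (priority, materialId)
        if (priority > bestPriority) { bestPriority = priority; material = materialId; }
    return material;
}
```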

In this work, we use quadrics and unions of quadrics as bounding shells for volume data (Fig. 3). Automatic construction of such a shell is a task comparable in complexity to segmentation and is outside the scope of this work.

Fig. 2. Examples of volume images with different priorities.

Fig. 3. Three-dimensional texture bounded by a shell in the form of a union of two quadrics.

HYBRID RENDERING OF VOLUMES AND SURFACES

The method allows visualizing several volumes in combination with functional surfaces. As the basis, we apply the visualization method of [11], adapted to volumes. As described above, the rasterization process is divided into two stages. The first stage terminates when the considered part of the space corresponds to a cell of a certain size; the primitives of the intermediate description are fragments of intersection between the geometric objects (the surface equations) and the cells. The second stage processes the list of objects and determines the visibility and color of pixels. Ray casting is applied to images with complex intersections between volume data and surfaces, and a hierarchical traversal of space is used to compute the pixel color. In other words, space is processed as follows: the volume is traversed over a quaternary tree (in screen space, a CPU function) whose leaves are binary subtrees (corresponding to rays directed into the screen, a GPU function).

In the course of the binary traversal, we determine the priority order along the \(z\) coordinate in the case of quadrics and accurately compute the perturbation factor in the case of functionally defined surfaces. During the division, large parts of empty space are efficiently culled at the levels of both the quaternary and the binary division (Fig. 4). After the intersection with the shell is determined, its interior, filled with the 3D texture, is scanned.
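
A minimal sketch of the binary division along a ray, under the assumption that the sign of the defining function \(F'\) of Eq. (1) separates inside from outside; the fixed iteration count is a placeholder:

```cpp
#include <functional>

// Bisection on a ray interval [t0, t1] on which F'(ray(t)) changes
// sign; intervals without a sign change carry no surface crossing and
// are discarded, which prunes empty space along the ray.
double bisectSurface(const std::function<double(double)>& fAlongRay,
                     double t0, double t1, int iterations = 32) {
    double f0 = fAlongRay(t0);
    for (int i = 0; i < iterations; ++i) {
        double tm = 0.5 * (t0 + t1);
        double fm = fAlongRay(tm);
        if ((fm >= 0.0) == (f0 >= 0.0)) { t0 = tm; f0 = fm; }
        else                              t1 = tm;
    }
    return 0.5 * (t0 + t1);  // parameter of the intersection point
}
```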

The volume data are stored as a three-dimensional texture [13]. We use a 3D texture map for the volume data sets: the map is a single large 3D texture assigned to a single texture block. The maximum size of the texture map depends only on the available graphics memory. All computations are executed on the graphics processing unit.
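
On the GPU the fetch is a hardware-filtered 3D texture read; the CPU-side sketch below shows the equivalent trilinear interpolation of a density sample (the flat array layout is an assumption for illustration):

```cpp
#include <cmath>
#include <vector>

struct Volume3D {
    int nx, ny, nz;
    std::vector<float> density;  // nx*ny*nz samples, x varying fastest
    float at(int x, int y, int z) const { return density[(z * ny + y) * nx + x]; }
};

// Trilinear interpolation at a point in voxel coordinates; this is
// what the hardware filtering of a 3D texture fetch computes.
float sampleTrilinear(const Volume3D& v, float x, float y, float z) {
    auto clampI = [](int i, int n) { return i < 0 ? 0 : (i >= n ? n - 1 : i); };
    int x0 = (int)std::floor(x), y0 = (int)std::floor(y), z0 = (int)std::floor(z);
    float fx = x - x0, fy = y - y0, fz = z - z0;
    int x1 = clampI(x0 + 1, v.nx), y1 = clampI(y0 + 1, v.ny), z1 = clampI(z0 + 1, v.nz);
    x0 = clampI(x0, v.nx); y0 = clampI(y0, v.ny); z0 = clampI(z0, v.nz);
    float c00 = v.at(x0, y0, z0) * (1 - fx) + v.at(x1, y0, z0) * fx;
    float c10 = v.at(x0, y1, z0) * (1 - fx) + v.at(x1, y1, z0) * fx;
    float c01 = v.at(x0, y0, z1) * (1 - fx) + v.at(x1, y0, z1) * fx;
    float c11 = v.at(x0, y1, z1) * (1 - fx) + v.at(x1, y1, z1) * fx;
    float c0 = c00 * (1 - fy) + c10 * fy;
    float c1 = c01 * (1 - fy) + c11 * fy;
    return c0 * (1 - fz) + c1 * fz;
}
```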

Fig. 4. Binary division of a ray.

The scene graph includes constructive solid geometry operations and functionally defined surfaces with volume textures and material properties. The inner vertices of the tree correspond to the operations of solid geometry, while the leaves refer to the shell bounding a volume, to the volume texture, and to the material with its properties. Figure 5 shows the union of an opaque object (an implant) with several semi-transparent volumes. Fragments are processed in a pixel-parallel mode for cells of size \(8\times 8\) pixels, tracing the passage of rays through the volumes represented as 3D textures. To increase the computation speed, we exploit the coherence of the texture cache. The rendering system traverses the scene in a single pass, in contrast to multipass methods [14, 15]. In the course of the binary division of a ray, we determine the object nearest to the observer and the nearest point of intersection with a surface, thus sorting the objects by their distance from the observer. As a result, when the objects are semi-transparent, the color in a pixel is accumulated from the near objects to the far ones, that is, in direct priority order. Opaque surfaces occlude objects farther from the observer. The final ray tracing and shadowing require blending several intersecting volume objects and efficiently accumulating the individual volume contributions along the rays.

Fig. 5. Visualization of a combined anatomical 3D model (the tissues and bone structure are given by volume data (a 3D texture); the implant is given by functionally defined surfaces).

Fig. 6. Multivolume data: skull, brain, tumor (in the upper part of the brain), and head tissues.

RESULTS

To estimate the performance, we compare the proposed approach with the known approaches to multivolume visualization [14, 15] on a GTX 470 graphics processing unit. Table 1 compares the frame rates (the number of frames per second) reached for medical scenes with several volumes, obtained in different modes and scans of different body parts. Tests 1 and 2 show the volume resolutions and the frame rates for rendering anatomical scans (Fig. 6). Test 3 is a combination of anatomical scans of high-resolution volumes with isosurfaces for the methods of [14, 15]; in the proposed method we use a combination of a volume and a functional surface. Test 4 is a combination of volumes with a polygonal model of \(3.87\times 10^{3}\) polygons as an analog of the functional surface.

Table 1. Frame rates (frames per second) for the test scenes

Test number | Volumes | Polygons | [14] | [15] | Proposed method
1 | \(1\times 512^{3}\) | – | 14–46 | 9–17 | 30
2 | \(2\times 256^{3}\) | – | 24–38 | 6–12 | 37
3 | \(4\times 256^{3}\) | – | 5–9 | 1–3 | 25
4 | \(10\times 64^{3}\) | \(3.87\times 10^{3}\) | – | – | 16

Note: functionally defined objects are used in the proposed method; there are five such objects in test 4.

Compared to the known approaches, the main advantages of the proposed method are the following:

1. Generality: the method can process not only functionally defined surfaces but also semi-transparent volumes.

2. Complex models: the method provides visualization of a large number of volumes, combinations of semi-transparent and functionally defined objects, and complex semi-transparent volumes, including intersections of volumes.

3. Efficiency: the use of functionally defined surfaces reduces the size of the implant database and decreases the rendering time.

4. Simplicity: the method is relatively simple to implement and has no problems with reliability or degeneration of surfaces.

5. Good approximation: preliminary application of the method to various test samples shows good approximation with a bounded error at the boundaries of intersections between volumes and functionally defined surfaces.

CONCLUSIONS

In this work, we created a new approach to visualizing several semi-transparent volumes together with functionally defined surfaces. The method, including the rasterization stage, was implemented in CUDA, which allows complete control over the memory hierarchy, in particular, access to high-bandwidth, low-latency shared memory. An interactive frame rate is provided for the simultaneous visualization of more than a hundred arbitrarily overlapping volumes. Computations take several tens of milliseconds on an Intel Core2 E8400 3.0 GHz CPU and a GTX 470 GPU.

The method is relevant to a wide spectrum of volume-rendering applications involving complex scenes, including intersecting volumes and surfaces in medical applications. As shown in this work, the method is simple to implement and can be used to solve many problems.