
1 Introduction

The practice of visualizing radioactive sources and characterizing their spatial distribution over large areas is a well-known and widely discussed topic. It has been of interest to many researchers since the development of the Anger camera [1], and has been established in diverse research areas such as nuclear instrumentation, nuclear physics [2], medicine [3], astronomy [4, 5] and non-destructive testing [6, 7]. A large body of research on the proper detection and visualization of such sources has accumulated over the past few decades. In our previous paper, we affirmed that not only the proper detection of radioactive sources is important, but also the accurate estimation of their 3D information in radiation-based applications [2]. Most existing methods have employed mono systems to visualize the behavior of a radioactive source in an area; in contrast, we developed a stereo system to obtain more accurate 3D information.

Hal Anger was the first to develop a gamma camera by utilizing multichannel collimators to modulate incident radiation [8]. Since then, many advanced methods have been proposed for visualizing radioactive sources. Obelleiro et al. introduced a fast 2D reconstruction method for radar imaging based on the generalized multipole technique [9]. Isernia et al. proposed a nonlinear estimation approach to solve the inverse scattering problem and to obtain reliable tomographic reconstructions of 2D profiles [10]. Barkeshli et al. proposed a similar but iterative approach to address the scattering problem of an electromagnetic field [11]. Compton- and coded-aperture-based methods have been used to demonstrate the spatial distributions of cosmic gamma-ray sources [12].

However, the drawback of these methods is that they are limited to 2D imaging. In modern applications, visualizing the activity of radioactive sources in 3D space has become a core requirement. Most modern radiation detection devices are capable of producing proper 3D reconstruction results, but professional-grade devices come at a very high cost. Moreover, most of these high-cost devices use only mono sensor systems to characterize the behavior and spatial distribution of radioactive sources. The most common strategy among these mono sensor approaches is to capture multiple 2D images from different viewpoints and combine them to form a 3D rendering of the sources [13]. Mickisson et al. proposed a single-view 3D imaging method by exploiting the parallax effect [14], which requires a large solid-angle coverage to function.

Some volumetric spatial distribution imaging techniques have been introduced in the recent decade. Raffo et al. proposed an approach similar to ours, in which conventional gamma-ray projection imaging techniques are combined with 3D models of the scene [15]. This approach demonstrates the importance of the context provided by merging with 3D models, but it does not incorporate the 3D model information into the gamma-ray imaging technique itself. Article [16] proposed another 3D reconstruction technique for a series of X-ray pictures, but with restrictions on the range of scan angles that limit the resolution. Article [17] describes a method for visualizing the spatial distribution of gamma-ray sources based on simultaneous localization and mapping (SLAM). The applicability of that gamma-ray imaging approach to a wide variety of radiation search and mapping scenarios is greatly enhanced via a scene data fusion method, providing 3D localization capability with environmental context in real time.

Our proposed method builds on the approaches cited in [15, 17], but in contrast, we exploit the Semi-Global Block Matching (SGBM) algorithm to create disparity images for radioactive sources and the respective vision scanning environment (the background where the radiation source is distributed). We generate 3D reconstructions from these disparity results and follow a color ICP-based registration technique to integrate the individual reconstructions of the vision environment and the radiation sources, separately. We then fuse these separate reconstructions together to visualize the complete spatial distribution of the radioactive sources.

The structure of this paper is as follows: Sect. 2, the preliminary section, gives brief introductions to the stereo gamma radiation detection device we used to capture images and to the enhanced bilinear interpolation method we used to visualize clear 2D radiation images. Section 3 introduces the proposed spatial visualization method. For clarity, we have divided this section into subsections, each giving some insight into disparity image creation, individual 3D reconstructions and color ICP-based segment registration. We further illustrate this section with a real data reconstruction result for an LED light source, which we considered as our radioactive source. Lastly, Sect. 4 summarizes the proposed idea along with some future work.

2 Preliminaries

This section gives brief introductions to our previously published articles, on which the proposed method significantly builds. We roughly describe the structure of our radiation detection device and its calibration, the scanning technique for radioactive sources, and the method of generating 2D stereo images using an enhanced bilinear interpolation method.

Fig. 1.

The front view and the internal design structure of the stereo gamma radiation detection device

2.1 Stereo Gamma Radiation Detection Device

Figure 1 depicts the structure of the radiation detection device we used. The device consists of a single collimator, a radiation shield, and a scintillation photomultiplier tube (SPMT) sensor along with a vision sensor. All these mono devices are mounted on a freely rotating panning/tilting module. The SPMT converts the radiation signal at each pixel location in every scanning direction into a corresponding light signal and generates a 2D gamma radiation image. The vision sensor captures the coinciding scanning environment in which the effect of the radioactive source is distributed. Rotating the device using the panning/tilting module allows stereo images to be generated; a detailed explanation of how stereo images are captured using these mono devices is given in our article [2].

2.2 Enhanced Bilinear Interpolation to Visualize Radioactive Sources

The direct conversion of radiation signals into corresponding light signals can generate 2D images containing a lot of noise. Using noisy images could lead to erroneous position estimations in some applications. Therefore, we applied an enhanced bilinear interpolation method to remove this noise and to generate smooth 2D gamma radiation images. The reprinted Fig. 2 depicts an instance of noisy visualized gamma radiation images, while the reprinted Fig. 3 depicts the corresponding interpolated result. The importance of visualizing smooth 2D images and the full interpolation method are described in detail in our previously published article [18].
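The enhanced method itself is detailed in [18]; as a minimal sketch of the underlying idea, plain bilinear interpolation upsamples the coarse detector grid so that isolated counts blend smoothly into their neighborhood. The grid values and upsampling factor below are purely illustrative:

```python
def bilinear_upsample(grid, factor):
    """Bilinearly upsample a 2D grid of detector counts by an integer factor."""
    h, w = len(grid), len(grid[0])
    out_h, out_w = (h - 1) * factor + 1, (w - 1) * factor + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            y, x = i / factor, j / factor          # position in the coarse grid
            y0, x0 = int(y), int(x)                # top-left neighbor
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            dy, dx = y - y0, x - x0                # fractional offsets
            out[i][j] = (grid[y0][x0] * (1 - dy) * (1 - dx)
                         + grid[y0][x1] * (1 - dy) * dx
                         + grid[y1][x0] * dy * (1 - dx)
                         + grid[y1][x1] * dy * dx)
    return out
```

Intermediate pixels are weighted averages of their four coarse-grid neighbors, which is what removes the hole-like artifacts visible in the raw images.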

Fig. 2.

Original 2D visualized gamma images consist of noise. No interpolation is applied

Fig. 3.

2D gamma images visualized after applying interpolation. Holes and other noisy data are removed

2.3 Stereo Calibration of Radiation Detection Device

After visualizing smooth 2D gamma radiation images, we performed a planar homography-based device calibration to determine the relative pose between the gamma and vision sensors. We performed a photogrammetric calibration, in which multiple images of a planar calibration object (a checkerboard pattern) are observed and captured. The SPMT gamma detector converts radiation signals into light signals only when the planar object emits radiation. We described a detailed calibration technique in article [19], where we generated virtual gamma camera images based on a homography transformation. Figure 4 summarizes the whole process of stereo calibration.
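The full calibration procedure is given in [19]; its core operation is mapping points of the planar calibration object through a 3×3 homography to obtain virtual gamma camera coordinates. A minimal sketch, with an illustrative (not calibrated) homography matrix:

```python
def apply_homography(H, pt):
    """Map a 2D point through a 3x3 planar homography in homogeneous coordinates."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]   # homogeneous scale factor
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Illustrative homography: a pure translation of (5, -3) pixels.
H = [[1.0, 0.0, 5.0],
     [0.0, 1.0, -3.0],
     [0.0, 0.0, 1.0]]
```

Applying this mapping to every checkerboard corner detected in the vision image yields the corresponding pixel locations in the virtual gamma image plane.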

Fig. 4.

(a reprint of [19])

The complete stereo calibration technique

3 3D Spatial Visualization of Radioactive Sources - the Proposed Method

This section describes our new idea of visualizing the spatial distribution of radioactive sources in 3D. In our proposed method, we used the same experiment setup mentioned in article [2] (Fig. 1) and a single LED as the radioactive source. We mounted the LED light source on top of a vertical steel pole and captured 2D images of the source and its scanning environment at three different locations. On one hand, taking multiple 2D images from different viewpoints gives the ability to create a complete 3D model of the source; on the other hand, it solves occlusion problems (Fig. 5). For further illustration of our method, reconstruction results are also stated in the same section.

Fig. 5.

A CAD model describing how multiple images are captured. (a) Capturing 2D stereo images at three different locations using gamma and vision sensors. (b) Captured stereo gamma images. (c) Captured stereo vision images. The red arrow shows the position of the LED, whose shadow can be seen on the wall behind. (Color figure online)

3.1 Generating Disparity Maps and 3D Reconstructions

After capturing 2D images of both the gamma source and the corresponding vision scanning environment, we generated disparity maps based on the well-known SGBM algorithm [20]. However, the SPMT can only detect the intensity of the LED source; it therefore generates 2D images containing white circular dots on a black background without any distinctive features. This could lead to ambiguities in calculating disparity values. As a solution, we applied salt-and-pepper noise before generating disparity images (Fig. 6). After generating disparity maps from both gamma and vision stereo images, we calculated depth values by reprojecting the disparities. We used these depth calculations to generate the respective 3D reconstructions (Fig. 7). We integrated the individual gamma and vision reconstruction results using the color ICP algorithm [21] to create complete 3D models of the gamma source and the vision scanning environment, separately. Figures 8 and 9 depict two instances of the integrated results of the vision scanning environment and the gamma source, respectively.
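The disparity-to-depth step can be illustrated with a simple per-scanline SAD block matcher; this is a local matcher, not the full semi-global cost aggregation of SGBM [20], and the window size, search range and pinhole parameters below are illustrative only:

```python
def match_row(left, right, max_disp):
    """Per-pixel disparity along one rectified scanline via 3-pixel SAD matching."""
    n = len(left)
    disp = [0] * n
    for x in range(n):
        best_cost, best_d = float('inf'), 0
        for d in range(min(max_disp, x) + 1):
            # Sum of absolute differences over a 3-pixel window (clamped at edges).
            cost = sum(abs(left[max(0, min(x + k, n - 1))]
                           - right[max(0, min(x - d + k, n - 1))])
                       for k in (-1, 0, 1))
            if cost < best_cost:
                best_cost, best_d = cost, d
        disp[x] = best_d
    return disp

def disparity_to_depth(d, focal_px, baseline_m):
    """Reproject a disparity value to metric depth: Z = f * B / d."""
    return float('inf') if d == 0 else focal_px * baseline_m / d
```

A bright feature shifted between the left and right rows is recovered as a disparity, which the pinhole relation Z = fB/d then converts to depth; the salt-and-pepper noise added to the gamma images plays exactly the role of such matchable features.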

Fig. 6.

The visualized image of the LED source. Left: the original image with no background patterns. Right: the image after applying salt-and-pepper noise.

Fig. 7.

Two separate 3D reconstruction results of gamma source (right image) and the corresponding vision scanning environment (left image)

Fig. 8.

Full 3D model of vision scanning environment is created by integrating individual reconstruction results

Fig. 9.

Full 3D model of gamma source is created by integrating individual reconstruction results

3.2 Final Spatial Distribution of Radioactive Source

As the final step, we merged the two individual 3D models to create the full spatial distribution of the gamma source. To combine the 3D models of the gamma source and the vision environment, we need to know the transformation relationship between the two coordinate systems. However, due to the symmetric rotation of the device on the pan/tilt module, the coordinate origins of the gamma and vision cameras are the same. Hence, the 3D models of the gamma sensor and the vision camera align with each other. This assumption resolves the problem of calculating the transformation between the two 3D models. Figure 10 shows the final distribution result generated by merging the two 3D models.
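In the general case, the gamma model would first be moved into the vision frame by a rigid transform (R, t); under the shared-origin assumption above that transform is the identity, so merging reduces to concatenating the two point sets. A minimal sketch with illustrative point lists:

```python
def transform_points(points, R, t):
    """Apply a rigid transform p' = R p + t to a list of 3D points."""
    return [tuple(sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3))
            for p in points]

def merge_models(vision_pts, gamma_pts, R, t):
    """Bring the gamma model into the vision frame, then concatenate the point sets."""
    return vision_pts + transform_points(gamma_pts, R, t)

# Shared-origin assumption: identity rotation, zero translation.
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t0 = [0.0, 0.0, 0.0]
```

With a nontrivial (R, t), the same code would handle the general registration step that the shared pan/tilt rotation center makes unnecessary here.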

The algorithm was developed on a 64-bit Intel Core i5-4670 processor. The total execution time of the proposed method is less than one minute (0.6709 min, to be exact), which supports its capability for real-time implementations.

Fig. 10.

Full spatial distribution result of the gamma source within its scanning environment

4 Conclusions

In this paper, we proposed a simple but efficient 3D visualization technique for radioactive sources. Unlike scattered existing methods, we developed this technique by integrating disparity images of radioactive sources and the imaging background in which they are distributed (we call this background the vision scanning environment). We used the well-studied semi-global block matching algorithm to generate disparity images of the radioactive sources and the vision scanning environment. For better representation of the radioactive sources and to see how they are distributed, we captured stereo images from different locations using our previously studied radiation detection device. After generating disparity images of both the radiation and vision environments, we calculated their depth values and created 3D reconstruction results. We used the color ICP algorithm to integrate the individual reconstruction results and generated 3D maps of the radiation sources and the vision scanning environment, separately. We finally merged these separate maps to visualize the complete shape of the radioactive sources and to see how they are distributed in the environment. The method requires only a small amount of computational power and can be easily implemented in many real-time applications with little processing time. As future work, we plan to implement the proposed idea at the GPU level and to perform real-time visualizations of real radioactive sources.