Introduction

In the industrial sector, neutron computerized tomography (NCT) has been used to examine items non-destructively by investigating the interior and exterior of objects. The method is well suited to visualizing hydrogenous materials in three dimensions, including wood, rubber, water, oil, and explosives, even behind thick layers of metal. The fundamental differences between X-rays and neutrons in their interaction with matter give neutron techniques additional advantages, such as heightened sensitivity to light elements and high penetration through dense matter. NCT is therefore beneficial in a wide range of fields, including biology, geology, archeology, cultural heritage, and other scientific and industrial applications [1, 2].

The object is irradiated with a uniform neutron beam, and a radiation detector (a CCD camera) on the opposite side of the object measures the transmitted intensity, forming a neutron image. From the radiation measured at many viewing angles (the projections), 2D cross-sectional images of the object are reconstructed. A 3D visualization of the object can then be created by stacking the reconstructed 2D slices.

NCT reconstruction mathematically creates tomographic images from projection data collected around the sample at different angles: the measured data serve as input, and the output is the density distribution of the desired cross-section of the sample under investigation. In this way, a two-dimensional image of the desired cross-section is obtained. The relevant reconstruction algorithms fall into three groups: back-projection reconstruction, iterative reconstruction (IR), and analytical reconstruction.

Recent advancements in computing power have made software-based methods for IR of CT images feasible [3, 4]. The common technological principle of IR algorithms is the iterative enhancement of measured projection and/or reconstructed image data by applying filters based on statistical data models or on mathematical models of the CT imaging process. This contrasts with well-known analytical image reconstruction algorithms such as filtered back projection (FBP). Compared with FBP, IR algorithms enable a simultaneous improvement of overall image quality and reduction of image noise. Because noise and overall image quality are closely tied to the collected radiation dose, suppressing noise with IR algorithms allows the dose to be reduced [5,6,7]. The goal of this work is to supply the research facility with open-source software and data processing techniques for image reconstruction that are user-friendly and capable of producing the best reconstruction quality.

Materials and methods

The neutron tomography in the ETRR-2

The experimental work was carried out at the ETRR-2 NCT facility; the tomography setup is installed at an ETRR-2 horizontal beam line. Figure 1 shows the detection system of the NCT facility. To shelter it from neutrons and gamma rays, the camera and the electronic parts are shielded by boron carbide sheets. A typical tomography system uses a mirror, a cooled CCD camera, computer support based on LabVIEW, a neutron scintillation screen (ZnS(Ag)–6LiF), a rotatable object stage, and other components to obtain the image. For every projection, the transmitted neutron intensity is received by the scintillator screen, and the created light is reflected by the mirror and recorded by the cooled CCD camera; the mirror bends the light path by 90°. The object is typically rotated in steps over 0–180° or 0–360° to obtain the required number of projections [8, 9]. The camera views the scintillator through the 45° mirror to reduce radiation damage to the CCD chip caused by gamma and neutron radiation.

Fig. 1
figure 1

Neutron imaging system at the ETRR-2 facility

Implementation of CT reconstruction using an iterative algorithm

In the tomographic reconstruction process, a raw dataset passes through several data-processing algorithms before the end product becomes suitable for post-data analysis [10]. The software must meet two fundamental requirements: the calculated values should be as close as possible to the observed ones, and the reconstructed images should exhibit the fewest artifacts. As shown in Fig. 2, the reconstruction procedure has three phases: preprocessing, reconstruction, and post-processing. Numerous data processing algorithms can be applied at each step, operating in various spaces such as the reconstruction, sinogram, and projection spaces. Because the transition between the sinogram and projection spaces necessitates intermediate data storage, preprocessing techniques should be selected to operate in the same space during this stage.

Fig. 2
figure 2

Three steps of the neutron tomographic reconstruction procedure, where many data processing techniques can be added to the software depending on the projection material and the hardware issues of the NCT system

Preprocessing techniques

  • Modulation transfer function (MTF)

Because of scattering in the scintillation screen, each pixel's response is influenced by the light from its surroundings. If the dispersion of photons were uniform, the linearity assumption would remain valid; however, whether this criterion holds depends on the shape of the sample and its absorption properties. The effect of scintillation scattering is clearly visible at the interface between the sample and the open space in a flat-field-corrected projection, where a dark fringe appears around the sample. By determining the scintillator MTF from a calibration image, following the procedure in [11], the loss of resolution caused by scintillation-screen scattering in a scintillator-based detector system can be minimized.
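As a hedged sketch (not the facility's actual code), the MTF correction described above can be approximated by a regularized inverse filter in the Fourier domain. The function name, the 5-point blur kernel, the image size, and the regularization constant below are all illustrative assumptions:

```python
import numpy as np

def mtf_correct(img, mtf, eps=1e-3):
    """Undo scintillator blur by dividing the projection's spectrum by the
    measured MTF; eps regularizes the division to avoid noise blow-up."""
    spec = np.fft.fft2(img)
    return np.real(np.fft.ifft2(spec * np.conj(mtf) / (np.abs(mtf) ** 2 + eps)))

# Demo with a made-up 5-point blur kernel (NOT a measured scintillator MTF):
rng = np.random.default_rng(0)
img = rng.random((16, 16))                      # synthetic projection
psf = np.zeros((16, 16))
psf[0, 0] = 0.6                                 # center of the blur kernel
psf[0, 1] = psf[1, 0] = psf[0, -1] = psf[-1, 0] = 0.1  # nearest neighbors
mtf = np.fft.fft2(psf)                          # MTF = Fourier transform of the PSF
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * mtf))
restored = mtf_correct(blurred, mtf, eps=1e-9)  # near-exact for noise-free data
```

With noisy projections a larger `eps` (a Wiener-style trade-off) would be needed; the tiny value here only works because the demo data are noise-free.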

  • Flat–field correction

The intensity ratio of the image to the incoming beam at each pixel is corrected by dividing the image containing the sample by a flat-field image taken without the sample. A dark-field image is subtracted from both images before the division:

$$\frac{{tomography\;image \left( {x,y} \right) - \overline{{dark\;current\;image\left( {x,y} \right)}} }}{{\overline{{open\;beam\;image\left( {x,y} \right)}} - \overline{{dark\;current\;image\left( {x,y} \right)}} }}$$
(1)
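Equation 1 maps directly onto array arithmetic. A minimal NumPy sketch follows; the function name and the 2 × 2 toy counts are illustrative, not facility code:

```python
import numpy as np

def flat_field_correct(tomo, open_beam, dark, eps=1e-6):
    """Apply Eq. 1: normalize a sample projection by the open-beam image,
    after subtracting the averaged dark-current image from both."""
    num = tomo.astype(float) - dark
    den = open_beam.astype(float) - dark
    return num / np.maximum(den, eps)  # guard against a zero denominator

# Toy 2x2 images (counts):
tomo = np.array([[60.0, 40.0], [30.0, 20.0]])   # projection with sample
open_beam = np.full((2, 2), 110.0)              # mean open-beam image
dark = np.full((2, 2), 10.0)                    # mean dark-current image
corrected = flat_field_correct(tomo, open_beam, dark)
# corrected = [[0.5, 0.3], [0.2, 0.1]] — transmission per pixel
```

In practice the open-beam and dark images are averages over many frames, which is why they carry overbars in Eq. 1.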

Iterative reconstruction (IR) algorithms

The IR algorithm consists of a projection model plus back-projection of the error in the projection domain [12]. Reconstruction algorithms that iteratively reduce image noise and artifacts have several benefits: image quality can be significantly enhanced by using prior knowledge, particularly for sparse or missing data. Given the significant advancements in computer hardware, widespread application of iterative algorithms has become feasible.

While the simultaneous iterative reconstruction technique (SIRT) and the simultaneous algebraic reconstruction technique (SART) update the reconstructed result using the average error of all rays, the algebraic reconstruction technique (ART) [13] updates the reconstructed result ray by ray. The results of the survey of open-source toolkits for the reconstruction of CT images are summarized in Table 1.
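To illustrate the "average error of all rays" idea, here is a hedged SIRT-style sketch for a tiny dense system g = Af. The 3 × 3 system matrix and the scaling by inverse row/column sums are illustrative assumptions, not the toolkit implementations surveyed in Table 1:

```python
import numpy as np

# Tiny hypothetical system: rows of A are rays, columns are pixels.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
f_true = np.array([2.0, 1.0, 3.0])
g = A @ f_true                      # noise-free "measured" projections

R = 1.0 / A.sum(axis=1)             # inverse row sums (per-ray weights)
C = 1.0 / A.sum(axis=0)             # inverse column sums (per-pixel weights)
f = np.zeros(3)                     # initial estimate
for _ in range(200):
    # One SIRT sweep: back-project the weighted error of ALL rays at once.
    f += C * (A.T @ (R * (g - A @ f)))
# For this consistent, full-rank toy system, f converges to f_true.
```

ART would instead loop over the three rays one at a time, applying each ray's correction immediately before moving to the next.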

Table 1 Survey of CT image reconstruction open software

The target we seek is the vector f that solves the equation g = Af. Iterative algorithms find a solution through successive approximations: the measured projections are compared with the projections computed from the current estimate, and the comparison outcome is used to modify the current estimate and produce a new one. The methods differ in how the estimated and measured projections are compared and in the type of adjustment applied to the current estimate. The procedure begins with an arbitrary first estimate, such as a uniform image initialized to 0 or 1 depending on whether the correction is applied additively or multiplicatively. We begin with the additive form of ART [14]. The iterative process is as follows:

$$f_{j}^{{\left( {k + 1} \right)}} = f_{j}^{\left( k \right)} + \frac{{g_{i} - \mathop \sum \nolimits_{j = 1}^{N} f_{j}^{\left( k \right)} }}{N}$$
(2)

where \({f}_{j}^{(k)}\) and \({f}_{j}^{(k+1)}\) are the current and the new estimates, respectively; N is the number of pixels along ray i; \(\sum_{j=1}^{N}{f}_{j}^{(k)}\) is the sum of counts in the N pixels along ray i for the kth iteration; and \({g}_{i}\) is the count measured for ray i. Equation 2 shows that the new estimate is obtained by adding a correction term to the present estimate; the comparison consists of subtracting the estimated projection from the measured one. When the current estimate reproduces the observed projections, the correction term approaches zero. Figure 3 illustrates the algorithm on a 2 × 2 image block.

Fig. 3
figure 3

What do ART algorithms do? A The task is to find values for four pixels given values in six bins. B ART algorithm: the difference between the measured and estimated projections is divided by the number of pixels along the ray, and the outcome is added to the current estimate. C Initial action: apply the ART algorithm, project the original estimate (zeroes) vertically, then update the pixel values. Repeat this procedure for the oblique (D) and horizontal (E) rays. F After one complete loop, the solution is found; however, more iterations are usually needed for larger images [15]
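The procedure sketched in Fig. 3 can be reproduced in a few lines. In this hedged toy example (a 2 × 2 image probed by row and column rays only, with arbitrarily chosen pixel values), the additive ART update of Eq. 2 is applied ray by ray:

```python
import numpy as np

true_img = np.array([[1.0, 2.0],
                     [3.0, 4.0]])   # unknown image the rays "measure"

# Each ray lists the (row, col) pixels it crosses.
rays = [
    [(0, 0), (0, 1)],  # horizontal ray through row 0
    [(1, 0), (1, 1)],  # horizontal ray through row 1
    [(0, 0), (1, 0)],  # vertical ray through column 0
    [(0, 1), (1, 1)],  # vertical ray through column 1
]
g = [sum(true_img[p] for p in ray) for ray in rays]  # measured ray sums

f = np.zeros((2, 2))               # additive form starts from zeros
for _ in range(10):                # a few sweeps over all rays
    for ray, g_i in zip(rays, g):
        N = len(ray)               # number of pixels along ray i
        corr = (g_i - sum(f[p] for p in ray)) / N   # Eq. 2 correction term
        for p in ray:
            f[p] += corr           # update every pixel on the ray
```

As Fig. 3F notes, this small example converges after a single complete loop over the rays; larger images need many more iterations.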

The solution is obtained by applying Eq. 2 to the measured projections one ray at a time. The maximum-likelihood expectation-maximization (MLEM) and conjugate gradient (CG) algorithms [16,17,18] are also covered in this paper. The MLEM criterion is to maximize the likelihood of the reconstructed image, whereas the CG algorithm attempts to minimize the difference between g and Af. Through optimization, both algorithms determine the best estimate that satisfies its particular criterion.
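As a hedged sketch of the multiplicative MLEM update (a toy 3 × 3 system with noise-free data; not the cited implementations), each iteration scales every pixel by the back-projected ratio of measured to estimated projections:

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],      # hypothetical system matrix
              [0.0, 1.0, 1.0],      # (rows = rays, columns = pixels)
              [1.0, 0.0, 1.0]])
f_true = np.array([2.0, 1.0, 3.0])
g = A @ f_true                      # noise-free "measured" counts

f = np.ones(3)                      # multiplicative form starts from ones
sens = A.sum(axis=0)                # sensitivity image: column sums of A
for _ in range(2000):
    est = A @ f                     # forward-project the current estimate
    ratio = g / np.maximum(est, 1e-12)  # measured / estimated projections
    f *= (A.T @ ratio) / sens       # MLEM multiplicative update
# f approaches f_true; the update preserves non-negativity by construction.
```

Because corrections are multiplicative, starting from zeros would leave the estimate stuck at zero, which is why the initial image is all ones here.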

Results

The same raw data were reconstructed for the three studied samples from the neutron imaging facility in the ETRR-2 shown in Fig. 4. The FBP, SIRT, and SART methods were compared, as shown in Figs. 5, 6, and 7.

Fig. 4
figure 4

Studied samples

Fig. 5
figure 5

Contrast CT image reconstruction using different algorithms with line profile for each image

Fig. 6
figure 6

Sea shell CT image reconstruction using different algorithms with line profile for each image

Fig. 7
figure 7

Fire valve CT image reconstruction using different algorithms with line profile for each image

The results of reconstruction with the FBP, SIRT, and SART algorithms were evaluated blindly, using image quality assessment measures that require no reference image: the naturalness image quality evaluator (NIQE) [19], the perception-based image quality evaluator (PIQE) [20, 21], and the blind/reference-less image spatial quality evaluator (BRISQUE) [22, 23]. The results are given in Tables 2, 3, and 4.

Table 2 Comparison between reconstructed images for contrast samples using different algorithms
Table 3 Comparison between reconstructed images for the Sea shell using different algorithms
Table 4 Comparison between reconstructed images for the Fire valve using different algorithms

The images produced by the FBP algorithm are quite grainy, with a significant amount of noise. Employing an IR algorithm significantly improves image smoothness and reduces the noise level.

The reconstructions by the SIRT and SART algorithms indicate that high-quality images can be obtained without loss of information for the three studied samples of different sizes.

Conclusions

This paper presented an approach to image reconstruction by IR algorithms in neutron tomography imaging. Two image reconstruction algorithms, SIRT and SART, were investigated, and we showed that reconstruction of small objects can be performed. With appropriate parameters for 3D image reconstruction, the proposed approach can be applied to practical problems with larger objects. IR algorithms provide several advantages over the FBP algorithm: certain algorithms can even enhance low-contrast detection performance while reducing streak artifacts. Thanks to the combination of acceleration algorithms and faster computers, IR algorithms can soon be used widely.