
1 Introduction

Statistical image reconstruction (SIR) has been increasingly used in computed tomography and positron emission tomography (CT/PET) to substantially improve image quality over the conventional filtered back-projection (FBP) method [1] for various clinical tasks. The SIR-based maximum likelihood expectation maximization (MLEM) algorithm [2] produces images of better quality than analytical techniques, since it can make better use of noise statistics, accurate system modeling, and image prior knowledge. MLEM maximizes an objective function (the log-likelihood), which is equivalent to minimizing the difference between the measured and estimated projections. The SIR framework was further refined with the introduction of ordered subset expectation maximization (OSEM) [3], which uses a subset of the data at each iteration, thereby producing a faster rate of convergence.

Nowadays OSEM has become one of the most important iterative reconstruction techniques for emission computed tomography. Although the likelihood increases with iterations, the images reconstructed by classical OSEM are still very noisy because of the ill-posed nature of iterative reconstruction algorithms. During the reconstruction process, Poisson noise effectively degrades the quality of the reconstructed image. Regularization is therefore required to stabilize the image estimate within the reconstruction framework, to control noise propagation, and to produce a reasonable reconstruction. Generally, the penalty term is chosen as a shift-invariant function that penalizes the differences among local neighboring pixels [4]. The regularization term incorporates prior knowledge or expectations of smoothness or other characteristics in the image, which helps to stabilize the solution and suppress noise and streak artifacts. Various regularizations have been proposed in the past decades based on different assumptions, models, and knowledge. Although some of them were initially proposed for SIR of PET, they can be readily employed for CT. Such regularizers encourage preservation of piecewise-contrast regions while eliminating impulsive noise, but the reconstructed images still suffer from streak artifacts and Poisson noise.

Numerous edge-preserving priors have been proposed in the literature [5–12] to produce sharp edges while suppressing noise within boundaries. Among the wide variety of methods, such as the quadratic membrane (QM) prior [5], Gibbs prior [6], entropy prior [7], Huber prior [8], and total variation (TV) prior [9], priors that smooth both high-frequency noise and edge details tend to produce unfavourable results, while edge-preserving non-quadratic priors [10] tend to produce blocky piecewise regions. In order to suppress noise and preserve edge information simultaneously, image reconstruction based on anisotropic diffusion (AD) has also become an interesting area of research [11].

The main reason for the instability of traditional regularizations is that the image roughness is calculated from intensity differences between neighboring pixels, and these differences may not be reliable in differentiating sharp edges from random fluctuations due to noise. When the intensity values contain noise, the measure of roughness is not robust. To address this issue, [12] proposed patch-based regularizations, which utilize neighborhood patches instead of individual pixels to measure the image roughness. Since they compare the similarity between patches, patch-based regularizations are believed to be more robust in distinguishing real edges from noisy fluctuations.

In this paper, we introduce and evaluate a hybrid approach to regularize the noise that dominates CT/PET images. Our model appears similar to that proposed in [12], but it differs in that we couple the edge-preserving probabilistic patch-based (PPB) regularizer with the accelerated version of MLEM, i.e. OSEM, which produces reconstructed results quickly and efficiently. Moreover, unlike [9, 11], which treat denoising as a post-processing step, our approach is based on a formulation that applies the prior (filter) within the reconstruction process rather than after the reconstructed image is ready.

This paper is organized as follows. Section 2 formulates the background of the reconstruction problem and introduces the notation of the OSEM method. Section 3 describes the proposed hybrid method that fuses the PPB regularization term with OSEM. Section 4 presents the simulation setup and the qualitative and quantitative results, and verifies that the proposed method yields the best results in comparison with other standard methods on simulated data. Section 5 concludes the paper.

2 Background

Ordered subset expectation maximization (OSEM) is one of the most widely used iterative methods for CT/PET reconstruction. Here the standard model of photon emission tomography described in [13] is used, in which the measurements follow independent Poisson distributions:

$$y_{i} \sim \mathrm{Poisson}\left( \bar{y}_{i}\left( f \right) \right), \quad i = 1, \ldots, I$$
(1)

where \(y_i\) is the measured projection datum counted by the ith detector during data collection, f represents the estimated image vector, and each element of f denotes the activity at an image pixel. In iterative methods, the expected measurements are computed from the system matrix during the reconstruction process as follows:

$$\bar{y}_{i}\left( f \right) = \sum\limits_{j = 1}^{J} a_{ij} f_{j}$$
(2)

where \(A = \left\{ a_{ij} \right\} \in \mathbb{R}^{I \times J}\) is the system matrix describing the relationship between the measured projection data and the estimated image vector, with \(a_{ij}\) denoting the probability of an event originating at pixel j being detected by detector pair i. The probability distribution function (pdf) of the Poisson noise reads:

$$P\left( y\left| f \right. \right) = \prod\limits_{i = 1}^{I} \frac{\bar{y}_{i}\left( f \right)^{y_{i}}}{y_{i}!} \exp\left( -\bar{y}_{i}\left( f \right) \right),$$
(3)

and the corresponding log-likelihood can be described as follows:

$$L\left( f \right) = \log P\left( y\left| f \right. \right) = \sum\limits_{i = 1}^{I} \left( y_{i} \log\left( \sum\limits_{j = 1}^{J} a_{ij} f_{j} \right) - \sum\limits_{j = 1}^{J} a_{ij} f_{j} \right)$$
(4)

where I is the number of detector pairs, J is the number of pixels in the objective image, and P(y|f) is the probability of the detected measurement vector y given the image intensity f. The penalized likelihood reconstruction estimates the image by maximizing the following objective function:

$$f^{*} = \mathop{\arg\max}\limits_{f \ge 0} \left( L\left( y\left| f \right. \right) - \beta U\left( f \right) \right)$$
(5)

where U(f) is the image roughness penalty and β ≥ 0 controls its strength.
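
To make the objective concrete, the following minimal Python/NumPy sketch (our illustration, not part of the original formulation) evaluates Eq. (5) for a given image vector, using the forward model of Eq. (2), the log-likelihood of Eq. (4) with the constant \(-\log y_i!\) term dropped, and a simple quadratic roughness penalty over horizontal and vertical neighbor pairs:

```python
import numpy as np

def penalized_objective(A, y, f, beta, shape, eps=1e-12):
    """Phi(f) = L(f) - beta * U(f) of Eq. (5), up to a constant independent of f."""
    ybar = A @ f                                     # forward projection, Eq. (2)
    log_lik = np.sum(y * np.log(ybar + eps) - ybar)  # Poisson log-likelihood, Eq. (4)
    img = f.reshape(shape)                           # image-domain view of f
    # quadratic roughness over horizontal and vertical neighbor pairs
    U = 0.5 * (np.sum((img[:, 1:] - img[:, :-1]) ** 2)
               + np.sum((img[1:, :] - img[:-1, :]) ** 2))
    return log_lik - beta * U
```

Under a Gaussian approximation to the measurement noise, the same estimate can equivalently be written as a penalized weighted least-squares problem: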

$$f^{*} = \mathop{\arg\max}\limits_{f \ge 0} \left[ L\left( y\left| f \right. \right) - \beta U\left( f \right) \right] = \mathop{\arg\min}\limits_{f \ge 0} \left[ \frac{1}{2}\left( y - Af \right)^{T} \varLambda \left( y - Af \right) + \beta U\left( f \right) \right]$$
(6)

where Λ is a diagonal statistical weighting matrix whose entries, under the Gaussian approximation, are the inverse variances of the measurements. Conventionally, the image roughness is measured from the intensity differences between neighboring pixels:

$$U\left( f \right) = \frac{1}{4}\sum\limits_{j = 1}^{J} \sum\limits_{k \in \mathbb{N}_{j}} w_{jk}\, \varphi\left( f_{j} - f_{k} \right)$$
(7)

where \(\varphi(t)\) is the penalty function, \(\mathbb{N}_j\) is the neighborhood of pixel j, and \(w_{jk}\) are weighting coefficients. The regularization parameter β controls the trade-off between data fidelity and spatial smoothness; as β goes to zero, the reconstructed image approaches the ML estimate.

A common choice of \(\varphi \left( t \right)\) in PET image reconstruction is the quadratic function:

$$\varphi \left( t \right) = \frac{1}{2}t^{2}$$
(8)

A disadvantage of the quadratic prior is that it may over-smooth edges and small objects when a large β is used to smooth out noise in large regions. The Huber penalty [12] is an example of a non-quadratic penalty that can preserve edges and small objects in reconstructions. It is defined as:

$$\varphi\left( t \right) = \begin{cases} t^{2}/2, & \left| t \right| \le \delta \\ \delta\left| t \right| - \delta^{2}/2, & \left| t \right| > \delta \end{cases}$$
(9)

where \(\delta\) is the hyper-parameter that controls the shape of the non-quadratic penalty. This parameter delineates the "non-edge" and "edge" regions and is often referred to as the "edge threshold" or "transition point". Other families of convex potential functions are described in [12].
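
To make the two potentials concrete, here is a small vectorized sketch of Eqs. (8) and (9) (our illustration); note that the Huber potential grows only linearly beyond the edge threshold δ, so large neighbor differences at edges are penalized far less than under the quadratic potential:

```python
import numpy as np

def phi_quadratic(t):
    """Quadratic potential of Eq. (8)."""
    return 0.5 * t ** 2

def phi_huber(t, delta):
    """Huber potential of Eq. (9): quadratic for |t| <= delta, linear beyond."""
    a = np.abs(t)
    return np.where(a <= delta, 0.5 * t ** 2, delta * a - 0.5 * delta ** 2)

# For delta = 1, an intensity jump of 10 costs 50.0 under the quadratic
# potential but only 9.5 under the Huber potential.
print(phi_quadratic(10.0), phi_huber(10.0, delta=1.0))
```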

3 Methods and Model

In this paper, a new hybrid framework (here referred to as OSEM+PPB) is proposed to reduce the number of iterations as well as to improve the quality of the reconstructed images, and it is applied to CT/PET tomography for obtaining optimal solutions. Generally, SIR methods can be derived from maximum a posteriori (MAP) estimation, which is typically formulated as an objective function consisting of two terms: a "data-fidelity" term, which models the statistics of the projection measurements, and a "regularization" term, which penalizes the solution. It is an essential criterion of statistical iterative algorithms that the data-fidelity term provides accurate system modelling of the projection data, while the regularization or penalty term plays an important role in successful image reconstruction. The proposed reconstruction is a hybrid combination of an iterative reconstruction part and a prior part, as shown in Fig. 1.

Fig. 1 The proposed hybrid model

Fig. 2 The modified Shepp-Logan phantom reconstructed with different methods from projections including 15 % uniform Poisson distributed background events

The proposed model couples the objective-function update and the prior part into one iterative cycle, which is repeated a number of times until the required result is obtained. The use of prior knowledge within the secondary reconstruction enables us to tackle noise at every step of reconstruction, and hence noise is handled efficiently. Using the probabilistic patch-based (PPB) prior [12] inside the reconstruction loop gives better results than applying it after the reconstruction is over. Patch-based filtering has been widely used for image denoising, image enhancement, and image segmentation [13], and often obtains better quality than other methods.

The patch-based roughness regularization is defined as:

$$f_{j,\mathrm{Smooth}}^{n + 1} = U\left( f \right) = \sum\limits_{j = 1}^{J} \sum\limits_{k \in \mathbb{N}_{j}} \varphi\left( \left\| g_{j}\left( f \right) - g_{k}\left( f \right) \right\|_{w} \right)$$
(10)

where \(g_{j}(f)\) is the feature vector consisting of the intensity values of all pixels in the patch centered at pixel j. The patch-based similarity between pixels j and k is measured by

$$\left\| g_{j}\left( f \right) - g_{k}\left( f \right) \right\|_{w} = \sqrt{\sum\limits_{l = 1}^{n_{l}} w_{l}\left( f_{jl} - f_{kl} \right)^{2}}$$
(11)

where \(f_{jl}\) denotes the lth pixel in the patch of pixel j and \(w_{l}\) is the corresponding weight coefficient, with \(w_{jk} = 1\) or \(w_{jk} = 1/d_{jk}\), where \(d_{jk}\) is the distance between pixels j and k. The weighting coefficient is smaller when the distance between the patch of a neighboring pixel and the patch of the concerned pixel is larger; in this way, the regularization can better preserve edges and boundaries. The basic idea of PPB is to choose a convex function that is unique and stable, so that regions are smoothed out and edges are preserved, in contrast to non-convex functions. A minimal sketch of the patch distance is given below.
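
The following sketch (ours) computes the weighted patch distance of Eq. (11) for two interior pixels; boundary handling is omitted, flat weights \(w_l = 1\) are assumed, and a Gaussian weight profile is a common alternative:

```python
import numpy as np

def patch_distance(img, pj, pk, half=1, w=None):
    """||g_j(f) - g_k(f)||_w of Eq. (11); pj, pk are (row, col) interior pixels."""
    gj = img[pj[0] - half:pj[0] + half + 1, pj[1] - half:pj[1] + half + 1]
    gk = img[pk[0] - half:pk[0] + half + 1, pk[1] - half:pk[1] + half + 1]
    if w is None:
        w = np.ones_like(gj)          # flat weight profile, w_l = 1
    return np.sqrt(np.sum(w * (gj - gk) ** 2))
```

The basic OSEM update is then: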

$$f_{j,OSEM}^{n + 1} = f_{j}^{n}\left( \frac{1}{\sum\limits_{i \in S_{n}} a_{ij}} \sum\limits_{i \in S_{n}} \frac{y_{i}\, a_{ij}}{\sum\limits_{j' = 1}^{J} a_{ij'} f_{j'}^{\left( n \right)}} \right), \quad \text{for pixels } j = 1, 2, \ldots, J.$$
(12)

where \(f_{j}^{(n + 1)}\) is the value of pixel j after the OSEM correction step at iteration n and \(S_{n}\) is the nth ordered subset of detector indices.
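
As an implementation sketch (ours, assuming a dense NumPy system matrix A of shape I × J and a partition of the detector indices into ordered subsets), one full pass of the multiplicative update of Eq. (12) can be written as:

```python
import numpy as np

def osem_pass(f, A, y, subsets, eps=1e-12):
    """One pass over all ordered subsets of the OSEM update, Eq. (12)."""
    for s in subsets:                    # s: array of detector indices in S_n
        As, ys = A[s, :], y[s]
        ratio = ys / (As @ f + eps)      # measured / estimated projections
        f = f * (As.T @ ratio) / (As.sum(axis=0) + eps)  # backproject, normalize
    return f
```

Each subset update touches only the rays in \(S_n\), which is what accelerates convergence over MLEM by roughly the number of subsets.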

Finally the proposed model is given as follows:

$$f_{j}^{(n + 1)} = \mathop{\arg\max}\limits_{f} \left( f_{j,OSEM}^{n + 1} - \beta\, f_{j,\mathrm{Smooth}}^{n + 1} \right)$$
(13)

In the following, we refer to the proposed algorithm as an efficient hybrid approach for CT/PET image reconstruction and outline it as follows.

The Proposed Algorithm:

A. Reconstruction using the OSEM algorithm

The following symbols are used in the algorithm:

\(X\) = true projections, \(a_{ij}\) = system matrix, \(y^{k}\) = updated image after kth iteration, \(x_{j}^{k}\) = calculated projections at kth iteration.

  1.

    Set k = 0 and put:

    $$y^{0} = g_{final}$$
    (14)
  2.

    Repeat until convergence of \(\hat{x}^{m}\)

    (a)

      \(x^{1} = \hat{x}^{m} ,\;m = m + 1 \quad (15)\)

    (b)

      For subsets t = 1, 2,…, n

      Calculate projections: compute the projections after the kth iteration using the updated image

      $$x\left( j \right)^{k} = \sum\limits_{i = 1}^{I} a_{ij}^{t}\, y\left( i \right)^{k}, \quad \text{for detectors } j \in S_{n}$$
      (16)

      Error calculation: find the error in the calculated projections (element-wise division)

      $$x_{error}^{k} = \frac{X}{{x_{j}^{k} }}$$
      (17)

      Back projection: back-project the error onto the image

      $$x\left( i \right)^{k + 1} = x\left( i \right)^{k} \frac{{\sum\limits_{{j \in S_{n} }} {\frac{{y\left( j \right)a_{ij} }}{{\mu \left( j \right)^{t} }}} }}{{\sum\limits_{{j \in S_{n} }} {a_{ij} } }} ,{\text{ for pixels}}\,i \, = \, 1,2, \ldots , \, I.$$
      (18)
    (c)

      \(X_{error}^{k} = A^{T} x_{error}^{k} \quad (19)\)

  3.

    Normalization: normalize the error image (element-wise division)

    $$X_{norm}^{k} = \frac{{X_{error}^{k} }}{{\sum\nolimits_{j} {a_{ij} } }}$$
    (20)
  4.

    Update: update the image (element-wise multiplication):

    $$y_{{}}^{k + 1} = y^{k} .*X_{norm}^{k}$$
    (21)

    B. Prior: Use PPB as prior

  5.

    Set m = 0 and apply the probabilistic patch-based filter:

    $$y_{m + 1}^{k + 1} = PPB\left( {y_{m}^{k + 1} } \right)$$
    (22)

    Put m = m + 1 and repeat until m = 3.

  6.

    Put k = k + 1 and repeat from the OSEM reconstruction step.

In our algorithm, we monitor the SNR during each loop of the secondary reconstruction and stop the processing when the SNR begins to saturate or degrade.
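
A minimal sketch of the overall loop follows, reusing the `osem_pass` sketch above and a placeholder `ppb_filter` (a hypothetical name; any implementation of the probabilistic patch-based filter of [12] may be substituted). Since the SNR-based stopping rule requires a reference image, it is applicable only in simulation studies such as ours:

```python
import numpy as np

def hybrid_osem_ppb(A, y, subsets, shape, f_true, n_outer=500, n_ppb=3):
    """One OSEM pass followed by n_ppb PPB filtering steps per outer iteration;
    stops when the SNR against the reference phantom begins to degrade."""
    f = np.ones(A.shape[1])                  # uniform initial image
    best_snr, best_f = -np.inf, f.copy()
    for k in range(n_outer):
        f = osem_pass(f, A, y, subsets)      # A: OSEM reconstruction step
        img = f.reshape(shape)
        for m in range(n_ppb):               # B: PPB prior, repeated until m = 3
            img = ppb_filter(img)            # hypothetical PPB filter
        f = img.ravel()
        snr = 10 * np.log10(np.sum(f_true ** 2) / np.sum((f - f_true) ** 2))
        if snr < best_snr:                   # SNR begins to degrade: stop
            break
        best_snr, best_f = snr, f.copy()
    return best_f
```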

4 Results and Discussions

In this simulation study, only two-dimensional (2-D) simulated phantoms were considered, because our main aim is to compare the proposed method with other algorithms and to demonstrate that it is applicable to different ECT imaging modalities such as CT/PET; 2-D phantoms are sufficient for this purpose. A comparative analysis of the proposed method is presented against other standard methods available in the literature, namely OSEM [3], OSEM+QM, OSEM+Huber, OSEM+TV, and OSEM+AD. For the simulation study, MATLAB 2013b was used on a PC with an Intel(R) Core(TM) 2 Duo CPU U9600 @ 1.60 GHz, 4.00 GB RAM, and a 64-bit operating system. For quantitative analysis, the performance measures used include the signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), root mean square error (RMSE), and correlation parameter (CP) [14]. The SNR, PSNR, and RMSE quantify the error in the reconstruction process, while the CP measures edge preservation after reconstruction, which is essential for medical images.
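
For reproducibility, the four measures can be computed as in the following sketch (our reading of [14]; in particular, the CP shown correlates Laplacian-filtered, i.e. edge, versions of the reference and reconstructed images, which is one common definition of the edge-preservation parameter):

```python
import numpy as np
from scipy.ndimage import laplace

def snr_db(ref, rec):
    return 10 * np.log10(np.sum(ref ** 2) / np.sum((rec - ref) ** 2))

def psnr_db(ref, rec):
    return 10 * np.log10(ref.max() ** 2 / np.mean((rec - ref) ** 2))

def rmse(ref, rec):
    return np.sqrt(np.mean((rec - ref) ** 2))

def cp(ref, rec):
    """Correlation parameter: correlation of high-pass (edge) images."""
    e1, e2 = laplace(ref), laplace(rec)
    e1, e2 = e1 - e1.mean(), e2 - e2.mean()
    return np.sum(e1 * e2) / np.sqrt(np.sum(e1 ** 2) * np.sum(e2 ** 2))
```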

The parameters used for the generation and reconstruction of the two test cases are briefly as follows. The first test case is a modified Shepp-Logan phantom of size 64 × 64 with 120 projection angles. The simulated data were Poisson distributed, with 128 radial bins and 128 angular views evenly spaced over 180°. The second test case is a gray-scale standard medical thorax image of size 128 × 128, for which the projections were calculated mathematically with a coverage angle ranging from 0° to 360° and a rotational increment of 2° to 10°.

For both test cases, we simulated the sinograms with a total count of 6 × 10^5, and Poisson noise of magnitude 15 % was added to the projections. The proposed algorithm was run for 500 to 1000 iterations, and the convergence trends of the proposed and competing methods were recorded; in practice, the proposed and other algorithms converged in fewer than 500 iterations. Running beyond convergence also ensured that the objective has only a single maximum, so that by stopping at the first instance of stagnation or degradation we do not miss any further maximum that might give better results. The corresponding curves for SNR, PSNR, RMSE, and CP are plotted in Figs. 3 and 6. From these plots, it is clear that the proposed method (OSEM+PPB) gives better results than the other methods by a clear margin. Using OSEM with the PPB prior brings convergence much earlier than the usual algorithm: with the proposed method, the result hardly changes after 300 iterations, whereas the other methods need more than 300 iterations to converge. Thus, using the PPB prior with the accelerated version of EM brings convergence earlier and fetches better results. The visual results of the reconstructed images for both test cases obtained from the different algorithms are shown in Figs. 2 and 5. The experiments reveal that the proposed hybrid framework effectively eliminates Poisson noise, performs better than other standard methods even with a limited number of projections, and yields better reconstruction quality in terms of SNR, PSNR, RMSE, and CP. At the same time, the hybrid cascaded method overcomes the shortcoming of streak artifacts present in other iterative algorithms, and the reconstructed image is more similar to the original phantom (Figs. 2 and 5).

Fig. 3 Plots of SNR, PSNR, RMSE, and CP versus the number of iterations for different algorithms for test case 1

Fig. 4 Line plots of the Shepp-Logan phantom and the standard thorax medical image

Fig. 5 The modified Shepp-Logan phantom reconstructed with different methods from projections including 15 % uniform Poisson distributed background events

Fig. 6 Plots of SNR, PSNR, RMSE, and CP versus the number of iterations for different algorithms for test case 2

Tables 1 and 2 list the SNR, PSNR, RMSE, and CP values for the two test cases, respectively. The comparison indicates that the proposed reconstruction method produces images of better quality than the other reconstruction methods under consideration.

Table 1 Different performance measures for the reconstructed images in Fig. 2
Table 2 Different performance measures for the reconstructed images in Fig. 5

Figure 4 shows the error analysis of the line profile along the middle row for the two test cases. To check the accuracy of the preceding reconstructions, line plots were drawn for both test cases, where the x-axis represents the pixel position and the y-axis the pixel intensity. The line plots along the mid-row of the reconstructions produced by the different methods show that the proposed method recovers the image intensity more effectively than the other methods. Both the visual displays and the line plots suggest that the proposed model is preferable to the existing reconstruction methods. From all the above observations, it may be concluded that the proposed model performs better than its counterparts and provides a better reconstructed image.

5 Conclusion

In this paper, we have demonstrated a hybrid framework for image reconstruction that consists of two stages within the reconstruction process. The reconstruction was performed using ordered subset expectation maximization (OSEM), while the probabilistic patch-based (PPB) prior was used to deal with the ill-posedness. This scheme provides better results than conventional OSEM. The problems of slow convergence, choice of an optimum initial point, and ill-posedness are addressed in this framework. The method performs well at high as well as low noise levels and preserves the intricate details of the image data. The qualitative and quantitative analyses clearly show that this framework can be used for image reconstruction and is a suitable replacement for standard iterative reconstruction algorithms.