1 Introduction

Computed tomography (CT) is a high-resolution imaging modality with fast imaging speed and clear images. Owing to these advantages, it is widely used in medical diagnosis, such as cancer screening [1]. However, high doses of X-rays directly exposed to the human body during the scanning process produce ionizing radiation, which can cause damage to the human body and increase the risk of radiation-related diseases such as cancer [2]. Therefore, low-dose computed tomography (LDCT) imaging technology aims to reduce the radiation dose as much as possible on the premise of ensuring the reconstructed image quality.

One of the simplest ways to reduce the radiation dose of a CT scan is to reduce the X-ray tube current [3]. However, reducing the tube current increases noise and degrades imaging quality. Current LDCT image quality improvement algorithms fall into three categories [4]: sinogram filtering algorithms, iterative reconstruction (IR) algorithms, and image domain post-processing algorithms. Sinogram filtering directly smooths the raw data before reconstruction (e.g., by filtered backprojection [FBP]); traditional denoising methods in the sinogram domain include multiscale Gaussian filtering [5] and nonlinear diffusion filtering [6]. IR algorithms usually integrate prior knowledge into the objective function as a penalty term in order to smooth the noise in the image; common priors include total variation (TV) [7] and dictionary learning [8]. These methods have improved the quality of LDCT images, but they all suffer from limitations such as the difficulty of obtaining projection data, long reconstruction times, and large storage requirements. In contrast, methods that process the reconstructed images are fast and do not depend on the projection data, avoiding these shortcomings. Therefore, this paper studies an image domain post-processing algorithm.

Unlike sinogram filtering and IR methods, image domain post-processing algorithms are directly applied to the reconstructed LDCT images. Li et al. [9] adopted the adaptive Non-Local Means (NLM) filtering algorithm based on the local variation of noise level, which suppressed noise and artifacts, and improved the quality of LDCT images compared to the conventional NLM filtering algorithm. Chen et al. [10] used the improved block-matching and 3D filtering algorithm for LDCT image processing to effectively improve the contrast-to-noise ratio of images by exploiting the similarity of image sequences. Kang et al. [11] presented wavelet domain residual network (WavResNet) to recover LDCT images, which suppressed artifacts and maintained image details.

Methods based on wavelet transform and morphological component analysis (MCA) [12] are also popular in LDCT denoising. Chen et al. [13] proposed a sparse representation algorithm that suppressed artifacts and achieved good results. Yang [14] presented an LDCT artifact suppression algorithm in the wavelet domain based on dictionary learning and the equivalent number of looks to improve image quality. Cui [15] applied MCA to the high-frequency (HF) components of LDCT images to separate the artifact part from the tissue part, thus achieving a denoising effect. Zhang [16] introduced the MCA method into LDCT image denoising and combined it with the wavelet transform to suppress artifacts, achieving desirable results, but did not consider the influence of the choice of wavelet basis.

Therefore, to overcome the shortcomings of existing LDCT image denoising methods, we propose an LDCT image quality improvement algorithm based on an optimal wavelet basis and MCA. The method improves the optimal wavelet basis selection method proposed by Cheng et al. [17] and combines it with the MCA method and related techniques.

The main contributions of this paper are as follows:

  1.

    LDCT image decomposition using stationary wavelet transform (SWT) to obtain HF and low-frequency (LF) components of the same size, ensuring consistent image resolution before and after decomposition.

  2.

    The wavelet basis selection method of [17] is improved by adding the diagonal direction coefficients and weighting the artifact information of the different directional components; the wavelet basis with the smallest selection coefficient is then chosen as the optimal wavelet basis.

  3.

    The MCA artifact separation algorithm is used to compare the processing effects of different wavelet bases.

The rest of this paper is organized as follows. Section 2 introduces the principle of relevant algorithms. Section 3 provides a detailed description of the method implementation. In Sect. 4, the experiments that were conducted to validate the effectiveness of our proposed method are presented. Finally, Sect. 5 provides the conclusion.

2 Principles of algorithms

2.1 SWT

In the classical wavelet transform algorithm, each component of the image becomes half the size of the original image after decomposition, resulting in the loss of HF information, which in turn reduces the denoising effect of the reconstructed image. After the image is decomposed using the SWT with redundancy and translation invariance, the HF and LF components with the same size as the original image are obtained, thus solving the problem of image distortion due to lost information [18].
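As a concrete illustration of this size-preserving property, the following NumPy sketch implements a minimal undecimated Haar transform via the à trous scheme. The random image, the simple normalization, and the circular boundary handling are illustrative assumptions, not the paper's implementation (which may use any wavelet basis):

```python
import numpy as np

def swt2_haar(image, levels=2):
    """Undecimated (stationary) 2-D Haar transform via the a trous scheme.
    Every sub-band keeps the same shape as the input image."""
    a = image.astype(float)
    bands = []
    step = 1  # filter hole spacing doubles at each level
    for _ in range(levels):
        lo = (a + np.roll(a, -step, axis=0)) / 2   # low-pass along rows
        hi = (a - np.roll(a, -step, axis=0)) / 2   # high-pass along rows
        ll = (lo + np.roll(lo, -step, axis=1)) / 2  # LF approximation
        lh = (lo - np.roll(lo, -step, axis=1)) / 2  # detail sub-band 1
        hl = (hi + np.roll(hi, -step, axis=1)) / 2  # detail sub-band 2
        hh = (hi - np.roll(hi, -step, axis=1)) / 2  # detail sub-band 3
        bands.append((lh, hl, hh))
        a = ll
        step *= 2
    return a, bands  # LF component + three HF sub-bands per level

img = np.random.rand(64, 64)
a2, bands = swt2_haar(img, levels=2)
assert a2.shape == img.shape
assert all(b.shape == img.shape for triple in bands for b in triple)

# With this normalization the sub-bands sum back to the original exactly,
# so no information is lost by the decomposition.
recon = a2 + sum(b for triple in bands for b in triple)
print(np.allclose(recon, img))  # True
```

A level-2 decomposition therefore yields one LF array and six HF arrays, all the same size as the input, matching the description above.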

Figure 1 shows schematically how an LDCT image is decomposed by a level-2 stationary wavelet transform. The decomposition yields one LF component and six HF components. Since most of the noise and artifact information exists in the HF components of the image, a quality-improved image is reconstructed after processing the HF components with an image denoising algorithm.

2.2 MCA principles

The MCA image decomposition method was proposed by Starck et al. [19] in 2004 to decompose an image into morphological components such as "geometric structure," "oscillation or texture," and "noise." The main idea is that an image is a linear combination of multiple components with different morphologies, which can be separated by exploiting the differences between them. For each component, there exists a dictionary that sparsely represents that component, while the other components cannot be sparsely represented by that dictionary.

The core of the MCA method lies in the dictionary selection for the different components. Here, an online dictionary learning (ODL) algorithm is chosen for dictionary learning.
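The ODL step can be sketched with scikit-learn's MiniBatchDictionaryLearning, an implementation of online dictionary learning. The random "HF component," the 8×8 patch size, and the 64-atom dictionary are illustrative stand-ins (chosen to keep the example fast), not the paper's 16×16 patches and 900 atoms:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(0)
hf_component = rng.standard_normal((64, 64))  # stand-in for one HF sub-band

# Vectorize overlapping patches; each row is one training sample.
patches = extract_patches_2d(hf_component, (8, 8), max_patches=500,
                             random_state=0)
X = patches.reshape(len(patches), -1)
X -= X.mean(axis=1, keepdims=True)  # remove per-patch DC offset

# Online (mini-batch) dictionary learning on the patch matrix.
odl = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, random_state=0)
odl.fit(X)

D = odl.components_  # learned dictionary: one atom per row
print(D.shape)       # (64, 64): 64 atoms of 8*8 = 64 pixels each
```

In the paper's pipeline, one such dictionary is trained per HF sub-band.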

3 Methods

This section presents the LDCT artifact suppression algorithm based on the optimal wavelet basis and MCA. First, SWT is performed on the image with each wavelet basis in the wavelet basis database, and the wavelet selection coefficients are calculated; the wavelet basis with the smallest coefficient is selected as the optimal wavelet basis. Second, the optimal wavelet basis is used to decompose the image at level 2 to obtain an LF component \({Y}_{a2}\) and six HF components: the level 1 horizontal component \({Y}_{h1}\), vertical component \({Y}_{v1}\), and diagonal component \({Y}_{d1}\), and the level 2 horizontal component \({Y}_{h2}\), vertical component \({Y}_{v2}\), and diagonal component \({Y}_{d2}\). For the six HF components, the ODL-based MCA method is used to separate the structural information from the noise artifacts, yielding the artifact-removed HF components \({\overline{Y} }_{h1}\), \({\overline{Y} }_{v1}\), \({\overline{Y} }_{d1}\), \({\overline{Y} }_{h2}\), \({\overline{Y} }_{v2}\), and \({\overline{Y} }_{d2}\). Third, the LF component and the processed HF components are combined by the inverse stationary wavelet transform (ISWT) to obtain the quality-improved image. Figure 1 shows the denoising flowchart.

Fig. 1
figure 1

Algorithm flowchart

3.1 Wavelet basis selection

Different images have different structural characteristics, and different wavelet bases exhibit different properties when decomposing and reconstructing images. Commonly used wavelet basis families include Haar, Biorthogonal, Coiflet, Daubechies, and Symlet.

In [17], a wavelet basis selection method was proposed that calculates wavelet selection coefficients based on the horizontal and vertical components of the image, with the wavelet basis having the smallest selection coefficient chosen as the optimal wavelet basis. Careful analysis of the characteristics of LDCT images shows that the horizontal and vertical directions contain more information and that some noise artifacts are present in the diagonal direction. In view of the different degrees of artifacts in the different components, the coefficients in the horizontal, vertical, and diagonal directions are weighted when the wavelet selection coefficients are calculated. As Fig. 6a shows, the LDCT image has obvious artifacts in the horizontal direction, so the horizontal component is given the largest weight; the diagonal direction contains relatively less artifact information and is given the second largest weight; and the vertical direction contains the least artifact information and is given the smallest weight.

Based on the above analysis, the wavelet selection coefficient formula proposed in [17] is improved by (1) adding the diagonal direction component to the calculation; and (2) weighting the horizontal, vertical, and diagonal direction coefficients. Applying the wavelets in the wavelet basis database to the images one by one yields the horizontal direction coefficients \(H\left(\left|{\sum }_{x\in H}x\right|\right)\), the vertical direction coefficients \(V\left(\left|{\sum }_{x\in V}x\right|\right)\), and the diagonal direction coefficients \(D\left(\left|{\sum }_{x\in D}x\right|\right)\); the wavelet selection coefficient \(\overline{x }\) is then defined as follows:

$$\overline{x }=\frac{0.4*H\left(\left|{\sum }_{x\in H}x\right|\right)+0.25*V\left(\left|{\sum }_{x\in V}x\right|\right)+0.35*D\left(\left|{\sum }_{x\in D}x\right|\right)}{6*m*n}$$
(1)

where \(\overline{x }\) is the wavelet selection coefficient and \(m*n\) is the size of the horizontal, vertical, and diagonal component images. The values 0.4, 0.25, and 0.35 are the empirically set weighting coefficients of the horizontal, vertical, and diagonal components, respectively.
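Formula (1) can be sketched directly in NumPy. Here each directional term is taken as the absolute value of the coefficient sum, accumulated over the two decomposition levels (our reading of the notation, since six sub-bands account for the factor 6 in the denominator), and the stand-in sub-bands are random arrays rather than actual SWT outputs:

```python
import numpy as np

def selection_coefficient(h_bands, v_bands, d_bands):
    """Eq. (1): weighted, size-normalized directional coefficient.
    Each argument is a list of same-size 2-D sub-bands, one per level."""
    m, n = h_bands[0].shape
    H = sum(abs(b.sum()) for b in h_bands)  # horizontal term
    V = sum(abs(b.sum()) for b in v_bands)  # vertical term
    D = sum(abs(b.sum()) for b in d_bands)  # diagonal term
    return (0.4 * H + 0.25 * V + 0.35 * D) / (6 * m * n)

# Hypothetical candidate bases mapped to stand-in sub-band triples;
# in the real pipeline each triple comes from an SWT with that basis.
rng = np.random.default_rng(1)
candidates = {
    name: tuple([rng.standard_normal((64, 64)) for _ in range(2)]
                for _ in range(3))
    for name in ["haar", "bior2.2", "db17"]
}
scores = {name: selection_coefficient(*b) for name, b in candidates.items()}
best = min(scores, key=scores.get)  # smallest coefficient wins
print(best)
```

A quick sanity check: with all-ones 64×64 sub-bands at both levels, each directional term is 2·4096 = 8192, so \(\overline{x }\) = 8192/24576 = 1/3, since the weights sum to 1.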

3.2 Combined dictionary construction

Following the order of Fig. 1 ("choose the optimal wavelet basis first, then decompose, finally process the image"), for the simulated data we first selected haar as the optimal wavelet basis and then obtained the LF and HF components using SWT: \({Y}_{a2}\) and \({Y}_{h1}\), \({Y}_{v1}\), \({Y}_{d1}\), \({Y}_{h2}\), \({Y}_{v2}\), \({Y}_{d2}\). The decomposed HF components were then trained using the ODL method to obtain the dictionaries \({D}_{h1}\), \({D}_{v1}\), \({D}_{d1}\), \({D}_{h2}\), \({D}_{v2}\), and \({D}_{d2}\). Figure 2 shows the HF component dictionaries.

Fig. 2
figure 2

The HF component dictionaries. ac Level 1 horizontal component dictionary \({D}_{h1}\), vertical component dictionary \({D}_{v1}\), diagonal component dictionary \({D}_{d1}\); df level 2 horizontal component dictionary \({D}_{h2}\), vertical component dictionary \({D}_{v2}\), diagonal component dictionary \({D}_{d2}\)

From Fig. 2, it can be seen that the HF dictionaries contain many detail-feature atomic blocks in addition to a significant number of artifact (pseudo-shadow) atomic blocks, arranged in a disorderly fashion. Since MCA relies on combining dictionaries of different components, the detail-feature atomic blocks in the HF dictionaries must be separated from the pseudo-shadow atomic blocks; i.e., the dictionaries must be partitioned. The dictionaries of detail-feature atomic blocks are designated detail-feature sub-dictionaries \({D}_{h1}^{d}\), \({D}_{v1}^{d}\), \({D}_{d1}^{d}\), \({D}_{h2}^{d}\), \({D}_{v2}^{d}\), \({D}_{d2}^{d}\), and those of pseudo-shadow atomic blocks are designated pseudo-shadow sub-dictionaries \({D}_{h1}^{a}\), \({D}_{v1}^{a}\), \({D}_{d1}^{a}\), \({D}_{h2}^{a}\), \({D}_{v2}^{a}\), \({D}_{d2}^{a}\). Combining the two types of sub-dictionaries gives the combined dictionaries \({D}_{h1}^{MCA}\), \({D}_{v1}^{MCA}\), \({D}_{d1}^{MCA}\), \({D}_{h2}^{MCA}\), \({D}_{v2}^{MCA}\), \({D}_{d2}^{MCA}\), where \({D}^{MCA}=\left\{{D}^{d},{D}^{a}\right\}\). The partitioning uses atomic-block information entropy [16]: detail-feature atomic blocks have a stable structure and small pixel-value variations, and hence small information entropy, while artifact atomic blocks have rich pixel-value variations and a larger information entropy relative to the detail-feature blocks.

Figure 3 shows the information entropy curves of the horizontal, vertical, and diagonal HF components of the simulated pelvic data after the level-2 wavelet transform.

Fig. 3
figure 3

Information entropy curves of HF dictionaries atomic image blocks. a, b Level 1, level 2 decomposition the atomic block information entropy curves of HF dictionaries, respectively

Here, we used the atomic-block information entropy method to partition the dictionaries. The process calculates the information entropy of each atomic block in the dictionary: if the value is greater than the selected threshold, the atomic block is assigned to the pseudo-shadow sub-dictionary; otherwise, it is assigned to the detail-feature sub-dictionary. As Fig. 4 shows, the reorganized dictionary \({D}^{MCA}\) is clearly partitioned into the detail-feature sub-dictionary \({D}^{d}\) and the pseudo-shadow sub-dictionary \({D}^{a}\).
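A minimal sketch of this entropy-based partitioning, assuming the entropy of an atom is the Shannon entropy of its pixel-value histogram (the bin count, the threshold, and the toy dictionary are illustrative choices):

```python
import numpy as np

def block_entropy(atom, bins=16):
    """Shannon entropy (base 2) of an atom's pixel-value histogram."""
    hist, _ = np.histogram(atom, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def partition_dictionary(D, threshold):
    """Split a dictionary (one atom per row) into a detail-feature
    sub-dictionary (low entropy) and a pseudo-shadow sub-dictionary
    (high entropy), per the rule described above."""
    ent = np.array([block_entropy(atom) for atom in D])
    return D[ent <= threshold], D[ent > threshold]

# Flat atoms have near-zero entropy; noisy atoms have high entropy.
flat = np.zeros((10, 64))
noisy = np.random.default_rng(0).standard_normal((10, 64))
D = np.vstack([flat, noisy])
detail, artifact = partition_dictionary(D, threshold=1.0)
print(len(detail), len(artifact))  # 10 10
```

With a learned dictionary, the threshold would be chosen from the entropy curves (cf. Fig. 3 and the values given in Sect. 4.1).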

Fig. 4
figure 4

Level1 vertical dictionary partition diagram. a The detail feature sub-dictionary \({D}_{v1}^{d}\), b the pseudo-shadow sub-dictionaries \({D}_{v1}^{a}\), c Combination Dictionary \({D}_{v1}^{MCA}\)

3.3 Quality improvement of LDCT image reconstruction

The MCA-based artifact suppression algorithm aimed to separate the artifacts from the HF components; hence, each HF image was considered to consist of two morphological components: detail features and artifacts.

Based on the recombined dictionary \({D}^{MCA}\), taking the level 1 vertical combined dictionary \({D}_{v1}^{MCA}=\left[{D}_{v1}^{d},{D}_{v1}^{a}\right]\) as an example, the Orthogonal Matching Pursuit (OMP) algorithm was used to sparsely encode the image blocks over the combined dictionary, obtaining the corresponding sparse coefficients \({\alpha }_{v1}^{MCA}=\left[{\alpha }_{v1}^{d},{\alpha }_{v1}^{a}\right]\). Then \({\overline{Y} }_{v1}^{d}={D}_{v1}^{d}\times {\alpha }_{v1}^{d}\) gave the image blocks after artifact removal, and the detail-feature part of the image was obtained by averaging the pixel values of overlapping blocks. The other HF component images were obtained in the same way. Finally, the unprocessed LF component and the denoised HF components were transformed by the ISWT to obtain the quality-improved image \(\overline{Y }\).
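The separation step can be sketched with scikit-learn's OMP solver: sparse-code a patch over the combined dictionary, then reconstruct using only the coefficients attached to detail atoms. The toy dictionaries and the sparsity level here are illustrative (the paper's sparsity constraint is 10):

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)

# Toy combined dictionary D^MCA = [D^d, D^a]; columns are unit-norm atoms.
D_detail = rng.standard_normal((64, 30))
D_artifact = rng.standard_normal((64, 30))
D_detail /= np.linalg.norm(D_detail, axis=0)
D_artifact /= np.linalg.norm(D_artifact, axis=0)
D = np.hstack([D_detail, D_artifact])

# A patch built from 3 detail atoms plus 2 artifact atoms.
y = (D_detail[:, :3] @ np.array([1.0, -0.5, 0.8])
     + D_artifact[:, :2] @ np.array([0.6, 0.3]))

# Sparse-code the patch over the combined dictionary with OMP.
alpha = orthogonal_mp(D, y, n_nonzero_coefs=5)

# Keep only the detail-atom coefficients: the de-artifacted patch
# Ybar^d = D^d x alpha^d.
alpha_detail = alpha[:30]
y_detail = D_detail @ alpha_detail
print(y_detail.shape)  # (64,)
```

In the full algorithm this is repeated for every overlapping block of each HF component, and the overlapping reconstructions are averaged.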

4 Experiments and analysis

Here, the proposed algorithm was applied to simulated pelvic bone data and clinical human chest data, and the peak signal-to-noise ratio (PSNR), structural similarity (SSIM), feature similarity (FSIM), and visual information fidelity (VIF) were used to evaluate the algorithm's performance.
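Of these metrics, PSNR is straightforward to compute; a minimal NumPy sketch (the data_range convention is an assumption, and SSIM, FSIM, and VIF need full reference implementations that are omitted here):

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB; higher means less distortion."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

ref = np.zeros((8, 8))
noisy = ref + 0.1  # constant error of 0.1 -> MSE = 0.01
print(psnr(ref, noisy))  # ~20.0 dB, since 10*log10(1/0.01) = 20
```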

The algorithms were run in the following hardware and software environment: Matlab 2017b; 64-bit Windows 10 operating system; Intel(R) Core(TM) i7-9700 CPU @ 3.00 GHz; 4 GB RAM.

4.1 Parameter settings

The algorithm mainly comprises SWT, ODL, dictionary partitioning, and HF component image denoising. The parameters were set according to the processing results and experience. In the SWT, we set the decomposition level to 2. In the ODL, we set the image block size to 16*16, the number of dictionary atoms to 900, and the number of iterations to 100. For the dictionary partitioning, the optimal wavelet basis for the simulated data is haar, with level 1 and level 2 information entropy thresholds of 1.89 and 2.13, respectively; the optimal wavelet basis for the clinical data is bior2.2, with level 1 and level 2 thresholds of 2.12 and 2.10, respectively. In the HF image denoising, the sparsity constraint parameter is set to 10 and the error parameter to 0.1.
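For reference, the settings above can be collected into a single configuration mapping (values transcribed from this section; the key names are our own):

```python
# Parameter settings from Sect. 4.1, gathered as one configuration dict.
config = {
    "swt": {"level": 2},
    "odl": {"patch_size": (16, 16), "n_atoms": 900, "n_iterations": 100},
    "partition": {
        "simulated": {"wavelet": "haar", "entropy_thresholds": (1.89, 2.13)},
        "clinical": {"wavelet": "bior2.2", "entropy_thresholds": (2.12, 2.10)},
    },
    "omp": {"sparsity": 10, "error_tol": 0.1},
}
print(config["partition"]["simulated"]["wavelet"])  # haar
```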

According to the algorithm flowchart in Fig. 1 and formula (1), SWT was applied with the wavelet basis database to the simulated pelvic bone data and the clinical human chest data to obtain the horizontal, vertical, and diagonal directional component coefficients; eight wavelet bases were selected for the experiments. Tables 1 and 2 show the wavelet selection coefficients for the two sets of data.

Table 1 Wavelet selection coefficients for simulation data
Table 2 Wavelet selection coefficients for clinical data

4.2 Simulated pelvic data experiment

The simulated pelvic LDCT images were reconstructed from computer-simulated projection data under low-dose scanning using the FBP algorithm [16]. Figure 5 shows (a) the LDCT image and (b) the original image. To verify that the wavelet basis selection is consistent with the final artifact suppression effect, we conducted experiments with the eight wavelet bases selected in Table 1; Fig. 5 shows the experimental results. To show the differences among the resulting images more clearly, the ROI marked with a red block is enlarged at the bottom right of Fig. 5.

Fig. 5
figure 5

Simulated data different wavelet basis artifacts suppression images

Table 3 presents the objective evaluation indexes of the artifact-suppressed images for the different wavelet bases. From Table 3, the PSNR values of the processed images differ across wavelet bases, showing a decreasing trend in the order listed above. Combined with Fig. 5, the PSNR value of the denoised image based on the haar wavelet basis was the largest, 1.74–6.36 dB higher than those of the other wavelet bases, and its SSIM value with respect to the original image in Fig. 5b reached 0.93, indicating high similarity. The PSNR values of the denoising results based on the bior2.2 and db17 wavelet bases were relatively large, representing relatively little distortion in the artifact-suppressed images. From the locally zoomed details, the texture details in Fig. 5c are clearer, while Fig. 5d–j introduce other noise to various degrees, leading to poorer denoising. In conclusion, the artifact suppression algorithm based on the optimal wavelet basis (haar) performed best on the simulated data.

Table 3 Objective evaluation indexes of different wavelet basis suppressed artifacts images

4.3 Clinical human chest experiment

To verify the effectiveness of the algorithm on clinical data, we selected clinical human chest low-dose CT images from the Mayo dataset [20], shown in Fig. 6a, to evaluate the proposed method. Additionally, the proposed algorithm is compared with the classical NLM, K-SVD, and WavResNet methods.

Fig. 6
figure 6

Clinical data different wavelet basis artifacts suppression images

The clinical data contain noise in both the normal-dose CT (NDCT) and LDCT images; however, the streak artifacts in the LDCT images are severe and, to some extent, destroy the display of the thoracic tissue structure. The experiment used the eight wavelet bases selected in Table 2 (bior2.2, haar, db17, db23, db19, db27, sym28, and sym16); Fig. 6 shows the improved results. The red blocks in the figure mark the selected ROIs. Comparing the ROIs in Fig. 6b with those in Fig. 6e–j, the latter still contain obvious noise and unclear edge contours. Comparing Fig. 6b with d, a large amount of noise is removed in Fig. 6d; however, over-smoothing causes varying degrees of detail loss. In contrast, as observed in Fig. 6c, the bior2.2 wavelet-based denoising algorithm preserves the image edges and details to a greater extent while effectively suppressing the noise artifacts; the thoracic tissue structures are also better displayed.

For a more complete comparison, the signal-to-noise ratio (SNR) was used to quantitatively characterize flat areas of the quality-improved CT images, in addition to the PSNR, SSIM, FSIM, and VIF evaluations. The SNR was calculated as the mean of the pixel values in the ROI divided by their standard deviation. The results in Table 4 show that the algorithms based on the different wavelet bases suppressed the image noise artifacts to different degrees. The bior2.2 and haar basis denoising algorithms outperformed the others. The bior2.2-based method had the highest PSNR, SSIM, and FSIM values, and its SNR values in the ROI1 and ROI2 regions were 9.9355 and 17.5437, respectively, approximately 0.2–3.9 higher than those of the db17, db23, db19, db27, sym28, and sym16 based methods. Its objective VIF value was also consistent with the subjective evaluation, giving the best overall visual effect. Combining Tables 2 and 4 with Fig. 6, the proposed algorithm effectively removed the artifacts while maintaining the basic tissue structure, verifying the effectiveness of denoising based on the optimal wavelet basis and the MCA algorithm, and confirming that the wavelet selection coefficient corresponds well to the artifact suppression effect.
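The ROI SNR defined above (mean over standard deviation) is a one-liner; the sketch below uses a synthetic two-band image so the expected value is easy to check by hand:

```python
import numpy as np

def roi_snr(image, rows, cols):
    """SNR of a flat region: mean pixel value divided by the standard
    deviation of the pixels in the ROI (row/col ranges are half-open)."""
    roi = image[rows[0]:rows[1], cols[0]:cols[1]]
    return roi.mean() / roi.std()

# Two flat bands at 100 and 102 -> mean 101, std 1 -> SNR 101.
img = np.full((32, 32), 100.0)
img[:16] = 102.0
print(roi_snr(img, (0, 32), (0, 32)))  # 101.0
```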

Table 4 Objective evaluation indexes of different wavelet basis artifacts suppression images

To further validate the superiority of the algorithm, as Fig. 7 shows, we compared three models: NLM, K-SVD, and WavResNet. The red blocks in Fig. 7a mark the ROIs, and the bottom right corner of Fig. 7 shows the enlarged ROI3. On careful observation, the noise in Fig. 7b–d is suppressed to different degrees compared with Fig. 7a, whereas the image quality in Fig. 7e is better than in Fig. 7b–d. Moreover, after processing by the proposed algorithm, the image structure is clear and most details are retained. In addition to the subjective evaluation, the five metrics above were used to quantitatively analyze the denoising performance of each algorithm. As presented in Table 5, the PSNR, SSIM, and SNR values of the proposed algorithm were better than those of the other algorithms. The FSIM values of all compared algorithms were roughly similar, and the proposed algorithm showed the largest VIF value. Therefore, the proposed algorithm performed better in terms of both visual effect and quantitative indexes.

Fig. 7
figure 7

Clinical data different algorithms artifacts suppression images

Table 5 Objective evaluation indexes of different algorithms artifacts suppression images

5 Conclusion

In this study, we proposed a noise artifact suppression algorithm based on an optimal wavelet basis and MCA to address the noise and severe streak artifacts in LDCT. The novelty of the algorithm lies in improving the wavelet selection formula, selecting the optimal wavelet basis by weighting the horizontal, vertical, and diagonal component coefficients. The algorithm also combines the advantages of SWT and MCA image decomposition to suppress artifacts, avoiding the loss of image information and improving image quality. The denoising effect of the proposed algorithm was evaluated on simulated and clinical data. The quality improvement algorithm based on the optimal wavelet basis and MCA achieved a better denoising effect and effectively retained image edge and detail information, demonstrating better performance than the other wavelet basis-based denoising algorithms in terms of both subjective visual effect and objective evaluation indexes. The proposed algorithm is therefore expected to be of value for the further promotion and practical application of LDCT imaging technology.