Abstract
Artificial intelligence (AI), including machine learning and deep learning methods, has become one of the core technologies in the recent development of medical imaging. Traditionally, quantitative image analysis has been indispensable for investigating the clinical significance of nuclear medicine molecular imaging in patients with various neurodegenerative diseases. Among AI techniques, deep learning consists of artificial neural networks with multiple convolutional layers and nodes. Unlike machine learning, deep learning performs feature extraction and learning through a cascade of multiple layers of nonlinear processing units. High-quality data and labels are the most important factors for improving the performance of deep learning models. In this chapter, we focus on various deep learning methods that have been applied to positron emission tomography/computed tomography (PET/CT) imaging of neurodegenerative diseases, including disease classification, region-of-interest segmentation, image generation, image processing, and low-dose imaging.
Keywords
- Artificial intelligence
- Machine learning
- Deep learning
- Nuclear medicine
- Alzheimer’s disease
- Neurodegenerative disease
1 Introduction
Though the diagnosis of neurodegenerative diseases is mainly based on clinical criteria, neuroimaging in nuclear medicine plays important supportive roles in the diagnosis and differential diagnosis of neurodegenerative diseases and in the prediction of disease progression [1, 2]. Different from magnetic resonance imaging (MRI), which depends on morphological changes of cortical and subcortical structures, positron emission tomography/computed tomography (PET/CT) provides quantitative evaluation of functional or molecular changes related to metabolism, proteinopathy, enzyme expression, transporters, or receptors. In addition to visual analysis, quantitative image analysis is essential to investigate the clinical significance of neuroimaging. Voxel-based analysis and region-of-interest (ROI) or volume-of-interest (VOI) analysis are widely used for comparison between control (or normal) and patient groups. Statistical parametric mapping (SPM) is the most popular voxel-based approach; it demonstrates areas of the brain with a significant difference between normal controls and patients [3, 4]. ROI- or VOI-based image analysis performs calculations over the pixels of each ROI or VOI. Manual, semi-automatic, and automatic methods can be used to draw a region or volume. Although accurate, manual drawing is time-consuming, operator-dependent, and less reproducible. Automatic drawing avoids these drawbacks, but accurate region segmentation must be guaranteed in each patient to ensure robust and reliable data analysis.
Different from traditional image analysis, machine learning, a subset of artificial intelligence, finds patterns in large amounts of data. Based on the training data, it builds a mathematical model to make predictions. A learning method can be unsupervised, semi-supervised, or supervised. Supervised learning requires labeled data to find patterns, whereas unsupervised learning uses unlabeled data, and semi-supervised learning combines a small labeled dataset with a large unlabeled one. A machine learning model is trained on a large number of input data with high reproducibility to extract features of clinical significance. After extraction, feature selection removes unnecessary features to reduce training time, lower the possibility of overfitting, and avoid dimensionality issues. Then, a classifier algorithm such as a support vector machine, random forest, or artificial neural network maps the features for disease classification.
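The feature-selection-then-classifier pipeline described above can be sketched with scikit-learn on synthetic stand-in data; the feature matrix, labels, and the specific choices of `SelectKBest` and an RBF SVM are illustrative assumptions, not taken from any cited study.

```python
# Minimal sketch: feature selection followed by a classifier,
# on synthetic stand-in data (not real ROI features).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# 200 "subjects" x 50 "ROI features"; only the first 5 features carry signal
X = rng.normal(size=(200, 50))
y = (X[:, :5].sum(axis=1) > 0).astype(int)   # hypothetical disease label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# SelectKBest removes uninformative features before the SVM classifier,
# reducing training time and the risk of overfitting
clf = make_pipeline(SelectKBest(f_classif, k=10), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(round(acc, 2))
```

On this toy data the selected features include the five informative ones, so the classifier performs well above chance.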
As a part of machine learning, deep learning consists of artificial neural networks with multiple convolutional layers and nodes. Unlike traditional machine learning, deep learning performs feature extraction and learning by itself. For feature extraction and transformation, deep learning techniques are based on a cascade of multiple layers of nonlinear processing units. High-quality data and labels are the most important factors for training and testing deep learning models. A dataset is typically composed of training, validation, and test sets. The training data are used to train a network: a loss function calculates the loss values in the forward propagation, and learnable parameters are updated via backpropagation. The validation data are used to fine-tune hyperparameters, and the test data to evaluate the performance of the model. This chapter focuses on artificial intelligence applied to neuroimaging in nuclear medicine, including classification of diseases, segmentation of ROIs or VOIs, denoising, image reconstruction, and low-dose imaging.
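The training cycle described above (forward propagation computing a loss value, backpropagation updating the learnable parameters, and a held-out validation set for evaluation) can be illustrated with a toy PyTorch example; the tiny network and synthetic data are assumptions for demonstration only.

```python
# Toy illustration of the train/validation split and the
# forward-loss-backpropagation cycle (PyTorch, synthetic data).
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(120, 8)                      # stand-in inputs (8 features each)
y = (X.sum(dim=1) > 0).long()                # hypothetical binary label
X_tr, y_tr = X[:100], y[:100]                # training set
X_va, y_va = X[100:], y[100:]                # validation set

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X_tr), y_tr)        # forward propagation -> loss value
    loss.backward()                          # backpropagation -> gradients
    opt.step()                               # learnable parameters are updated

with torch.no_grad():                        # held-out data evaluates the model
    val_acc = (model(X_va).argmax(dim=1) == y_va).float().mean().item()
print(round(val_acc, 2))
```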
2 Classification
2.1 Alzheimer’s Disease
Alzheimer’s disease (AD) is a neurodegenerative disease characterized by a decline in cognitive function. It mostly affects older people, and its prevalence is increasing with the growth of the elderly population. Early diagnosis of AD, before symptoms become severe, is of utmost clinical importance since it may provide opportunities for effective treatment. 18F-FDG PET/CT is one of the most useful modalities to support the clinical diagnosis of dementia, including AD. It shows changes in brain glucose metabolism across various dementia-related disease entities with high sensitivity and specificity. In patients with AD, reduced glucose metabolism is expected to progress from the mesial temporal lobe to the posterior cingulate cortex (PCC), lateral temporal, inferior parietal, and prefrontal regions, helping to establish the diagnosis [5].
Deep learning methods have been studied for the evaluation of patients with AD. Several auto-encoders with multi-layered neural networks combining multimodal features have been applied to AD classification [6]. In a study using a stacked auto-encoder to extract high-level features of multimodal ROIs and a support vector machine (SVM) classifier, the proposed method was 95.9%, 85.0%, and 75.8% accurate for the diagnosis of AD, mild cognitive impairment (MCI), and MCI converters, respectively, using the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset [7]. More recently, convolutional neural network (CNN) methods using 2D or 3D volume data from PET/CT or MRI scans have been applied to AD classification [8,9,10,11]. In 2D CNN models, features from specific axial, coronal, and sagittal slices are concatenated and used for classification. Using MRI volume data, skull stripping and gray matter segmentation were performed, and the slices containing gray matter information were used as CNN model input. Compared to 2D CNN models, studies using 3D volume data have shown promising results. Using the ADNI MRI dataset without skull-stripping preprocessing, Hosseini-Asl et al. built a deep 3D-CNN upon a convolutional auto-encoder, which was pre-trained to capture anatomical shape variations in structural brain MRI scans in the source domain [8]. The fully connected upper layers of the 3D-CNN were then fine-tuned for each task-specific AD classification in the target domain. The proposed 3D deeply supervised adaptable CNN outperformed several other approaches, including a plain 3D-CNN model, other CNN-based methods, and conventional classifiers, in accuracy and robustness. Liu et al. used cascaded CNNs to learn multi-level and multimodal features of MRI and PET brain images for AD classification [10]. In their method, multiple deep 3D-CNNs were applied to different local image patches to transform the local brain image into more compact high-level features.
Then, an upper high-level 2D-CNN followed by a softmax layer was cascaded to ensemble the high-level features and generate latent multimodal correlation features for the classification task. Finally, a fully connected layer followed by a softmax layer combined these learned features for AD classification. Without image segmentation or rigid registration, the method could automatically learn generic multi-level and multimodal features from multiple imaging modalities. With an ADNI MRI and PET dataset from 397 subjects, including 93 AD patients, 204 with mild cognitive impairment (MCI; 76 MCI converters and 128 MCI non-converters), and 100 normal controls (NC), the proposed method achieved a promising accuracy of 93.26% for classification of AD vs. NC and 82.95% for classification of MCI converters vs. NC.
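As a rough structural illustration of the 3D-CNN classifiers discussed above, a minimal PyTorch sketch is shown below; the layer sizes and the random input volume are invented for demonstration and do not reproduce any cited architecture.

```python
# Schematic 3D CNN classifier on a random single-channel volume.
import torch
import torch.nn as nn

class Tiny3DCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),      # collapse to one value per channel
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.classifier(h)         # softmax is applied inside the loss

vol = torch.randn(1, 1, 32, 32, 32)       # one single-channel 32^3 volume
logits = Tiny3DCNN()(vol)
print(tuple(logits.shape))                # one score per class
```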
Although studies have shown that various deep learning methods are effective for AD classification, model performance on external validation data, as compared with performance on the training dataset, remains an issue to be resolved. The quality and properties of medical images can be affected by the image-acquisition environment, including the imaging system, acquisition protocol, and reconstruction method. Therefore, a model with enhanced generalization performance is needed to improve the clinical utility of any proposed method. In a recent study using FDG PET/CT, slice-selective learning with a BEGAN-based model, instead of 3D volume data, was constructed to address this issue (Fig. 9.1) [9]. The model was trained with an ADNI dataset, and external validation was then performed with the authors’ own dataset. A range was set to cover the most important AD-related regions, and the most appropriate slices for classification were searched within it. The model learned generalized features of AD and NC for external validation when appropriate slices were selected. The slice range covering the PCC using double slices showed the best performance. The accuracy, sensitivity, and specificity were 94.33%, 91.78%, and 97.06% on their own dataset and 94.82%, 92.11%, and 97.45% on the ADNI dataset. Performance on the two independent datasets showed no statistical difference. The study demonstrated the feasibility of a model with consistent performance when tested on datasets acquired from a variety of image-acquisition environments.
Despite the remarkable diagnostic accuracy of deep learning, the correlation between the features extracted by a deep learning model and disease is hard to explain. Several studies have proposed methods to address this problem by providing the feature maps and input data responsible for the prediction. The class activation map (CAM) has been widely used to understand which regions a deep learning model evaluates for each class and to explain how the model predicts its outputs [12,13,14]. Choi et al. demonstrated the brain regions in which a CNN model identified AD-related decreases in cognitive function using the CAM method, which can generate a heat map with the probability of AD [15]. However, CAM-based interpretation should be treated with caution, because deep learning models may classify diseases based on regions that cannot be explained by established knowledge.
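The CAM computation itself is simple: the feature maps of the last convolutional layer are weighted by the classifier weights of the class of interest and summed into a heat map. A minimal sketch on random arrays (the shapes are illustrative; real models use deep backbones):

```python
# Class activation map: weight the last conv feature maps by the
# classifier weights of the predicted class, then normalize.
import numpy as np

rng = np.random.default_rng(0)
feature_maps = rng.random((16, 8, 8))   # C feature maps from the last conv layer
fc_weights = rng.random((2, 16))        # classifier weights, one row per class

pred_class = 1
cam = np.tensordot(fc_weights[pred_class], feature_maps, axes=1)  # (8, 8) map
cam = np.maximum(cam, 0)                # keep only positive evidence
cam = cam / cam.max()                   # normalize to [0, 1] for a heat map
print(cam.shape)
```

Upsampled to the input resolution, such a map highlights the regions that drove the class score.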
2.2 Parkinson’s Disease
Parkinson’s disease (PD) is the second most common neurodegenerative disease and mainly manifests as a movement disorder with resting tremor, bradykinesia, and rigidity [16, 17]. Alpha-synuclein aggregates, the primary PD pathology, are known to promote dopaminergic loss [18]. Although non-invasive direct PET imaging of alpha-synuclein aggregates in the brain is limited, the quantification of presynaptic transporters of the nigrostriatal dopaminergic neurons can be performed with PET and SPECT using either 18F- or 123I-labeled N-(3-fluoropropyl)-2β-carbomethoxy-3β-(4-iodophenyl)nortropane (FP-CIT) [19, 20]. Dopamine transporter (DAT) imaging has been widely used for the early diagnosis of PD and for discriminating PD from other diseases presenting with parkinsonism.
Machine learning has been applied to the diagnosis of PD using DAT SPECT or PET scans [21,22,23,24,25,26,27]. Features extracted by deep learning methods have yielded outstanding diagnostic results. However, the clinical correlation between disease and deep learning features needs further explanation and verification, since low-level features extracted by deep learning may not reflect the neuropathological heterogeneity of PD. Shiiba et al. used semi-quantitative indicators and shape features acquired on DAT SPECT to train a machine learning model for classification between PD and normal controls (NC) [28]. The striatum binding ratio (SBR) as a semi-quantitative indicator and the circularity index as a shape feature were combined as features for machine learning. Classification performance was significantly better with both SBR and circularity than with either alone (AUC for SBR and circularity: 0.995; AUC for circularity only: 0.990; AUC for SBR only: 0.973).
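The circularity index mentioned above is a standard shape descriptor, 4πA/P², equal to 1 for a perfect circle and smaller for irregular striatal uptake. A minimal sketch, with an arbitrary radius for the check:

```python
# Circularity = 4*pi*area / perimeter^2: 1.0 for a circle, lower otherwise.
import math

def circularity(area, perimeter):
    """1.0 for a perfect circle; lower for irregular uptake shapes."""
    return 4 * math.pi * area / perimeter ** 2

# sanity check on a circle of radius 5: area = pi*r^2, perimeter = 2*pi*r
c = circularity(math.pi * 25, 2 * math.pi * 5)
print(round(c, 3))  # prints 1.0
```

In the cited approach, such a shape value would be concatenated with the SBR into a two-element feature vector for the classifier.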
FDG PET/CT is also actively used for the evaluation of patients with parkinsonism, especially for the differentiation between idiopathic PD and atypical parkinsonism [29]. Wu et al. used a support vector machine to classify PD patients and NC using radiomics features on 18F-FDG PET [21]. The proposed method achieved classification accuracies between PD and NC of 90.97 ± 4.66% and 88.08 ± 5.27% in the Huashan and Wuxi test sets, respectively. In addition, several studies showed that deep learning methods are also effective for classification between PD patients and NC [30, 31]. Zhao et al. developed a 3D deep residual CNN for the automated differential diagnosis of idiopathic PD (IPD) and atypical parkinsonism [30]. With a dataset from 920 patients, including 502 IPD, 239 multiple system atrophy (MSA), and 179 progressive supranuclear palsy (PSP) patients, the proposed method achieved 97.7% sensitivity, 94.1% specificity, 95.5% positive predictive value (PPV), and 97.0% negative predictive value (NPV) for the classification of IPD, versus 96.8%, 99.5%, 98.7%, and 98.7% for MSA, and 83.3%, 98.3%, 90.0%, and 97.8% for PSP, respectively.
3 Segmentation
Although the sensitivity of PET/CT is usually much higher than that of conventional structural imaging such as CT or MRI, extracting anatomical information from PET/CT images is considered difficult because anatomical structures are not well distinguishable on the low-resolution PET images [32]. So far, there have been few studies segmenting anatomical structures on PET images using deep learning methods, especially in diseases of the brain. A 3D U-Net-shaped CNN has been used to segment cerebral gliomas on 18F-fluoroethyltyrosine (18F-FET) PET [33]. Among deep learning methods, the generative adversarial network (GAN) model has received great attention for its ability to generate data without explicitly modeling probability density functions. It has been applied with excellent performance to many tasks, such as image-to-image translation, semantic segmentation, and low-to-high resolution translation [34]. In particular, GAN models have been promising in the field of segmentation. Among PET/CT studies, only one has applied the pix2pix GAN framework to segment normal white matter (WM) on 18F-FDG PET/CT [35]. The Dice similarity coefficient (DSC) for segmenting WM from 18F-FDG PET/CT was 0.82 on average. Despite the low resolution of 18F-FDG PET/CT, the results were comparable to those obtained with MRI [36, 37]. The study showed the feasibility of using 18F-FDG PET/CT to segment WM volumes.
Within the WM, there are foci or areas called white matter hyperintensities (WMH) because they show increased signal intensity on T2-weighted fluid-attenuated inversion recovery (FLAIR) MRI. Although seen in healthy elderly subjects, WMH are associated with greater hippocampal atrophy in the non-demented elderly and with cognitive decline in patients with cognitive impairment (CI) [38,39,40]. Therefore, MRI has been invaluable in the assessment of WMH [41]. As mentioned, 18F-FDG PET/CT is useful in assessing glucose metabolism in cortical and subcortical neurons. However, its low spatial resolution and the low glucose metabolism of WM have limited the evaluation of the WM and WMH on 18F-FDG PET/CT. In our group, we applied a GAN framework to segment WMH on 18F-FDG PET/CT (Fig. 9.2, unpublished data). A dataset of mild, moderate, and severe WMH groups according to the Fazekas scoring system was used to train and test the deep learning model. Using WMH on FLAIR MRI as the gold standard, a GAN method was used to segment WMH on 18F-FDG PET/CT. The Dice similarity coefficient (DSC) values were closely dependent on WMH volumes on MRI. For volumes greater than 60 mL, the DSC values were above 0.7, with a mean of 0.751 ± 0.048. For volumes of 60 mL or less, the mean DSC was only 0.362 ± 0.263. For WMH volume estimation, the GAN showed excellent correlation with WMH volume on MRI (r = 0.998 in the severe group, 0.983 in the moderate group, and 0.908 in the mild group). Although WMH are difficult to evaluate on 18F-FDG PET/CT by visual analysis, they are an important vascular component contributing to dementia. Our GAN method demonstrated the feasibility of automatically segmenting and estimating WMH volumes on 18F-FDG PET/CT, which will increase the value of 18F-FDG PET/CT in evaluating patients with CI.
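The Dice similarity coefficient used above, DSC = 2|A∩B| / (|A| + |B|), can be computed on binary masks as follows (toy 4×4 masks, not real WMH data):

```python
# Dice similarity coefficient between two binary segmentation masks.
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    # convention: two empty masks count as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

pred = np.zeros((4, 4), dtype=int); pred[1:3, 1:3] = 1   # predicted mask
gt = np.zeros((4, 4), dtype=int);   gt[1:3, 1:4] = 1     # "ground-truth" mask
print(round(dice(pred, gt), 2))  # 2*4 / (4+6) -> 0.8
```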
4 Image Generation and Processing
Artificial intelligence in nuclear medicine is also widely used in image processing, such as image reconstruction and attenuation correction. For PET/MRI, attenuation correction by generating pseudo-CT images from MRI has been compared with CT-based methods [42,43,44,45,46]. In a method using the Dixon sequence, PET activity in bone structures is underestimated in the attenuation map [43, 44]. Despite many approaches, MR-based attenuation correction methods are considered to perform worse than the CT-based method used for PET/CT. Recently, deep learning methods have been applied to attenuation correction for PET/MRI. Hwang et al. [47] proposed a deep learning-based whole-body PET/MRI attenuation correction that is more accurate than the Dixon-based four-segment method. The proposed method used activity and attenuation maps estimated by the maximum-likelihood reconstruction of activity and attenuation (MLAA) algorithm as inputs to a CNN trained to learn a CT-derived attenuation map. The attenuation map generated by the CNN showed better bone identification than MLAA, and the average DSC for the bone region was 0.77, significantly higher than that of the MLAA-derived attenuation map (0.36). Liu et al. also demonstrated that a deep learning approach generating pseudo-CT from MR images reduced PET reconstruction error compared with the CT-based method [48]. With retrospective T1-weighted MR images from 40 subjects, a deep convolutional auto-encoder (CAE) network was trained on 30 datasets and then evaluated on 10 datasets by comparing the generated pseudo-CT with ground-truth CT scans. The DSC was 0.97 for the air region, 0.94 for soft tissue, and 0.80 for bone.
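A convolutional auto-encoder of the kind used for MR-to-pseudo-CT translation maps an input image through a downsampling encoder and an upsampling decoder back to an image of the same size. The 2D sketch below is a structural illustration only, with arbitrary layer sizes; the cited models are 3D and far deeper.

```python
# Schematic convolutional auto-encoder: encoder downsamples, decoder
# upsamples, output has the same spatial size as the input.
import torch
import torch.nn as nn

cae = nn.Sequential(
    nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
    nn.ConvTranspose2d(16, 8, 2, stride=2), nn.ReLU(),    # 16 -> 32
    nn.ConvTranspose2d(8, 1, 2, stride=2),                # 32 -> 64
)
mr_slice = torch.randn(1, 1, 64, 64)      # stand-in MR slice
pseudo_ct = cae(mr_slice)                 # pseudo-CT slice, same size
print(tuple(pseudo_ct.shape))
```

Training would minimize a reconstruction loss between `pseudo_ct` and the paired ground-truth CT slice.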
The generation of MR images from CT, and of CT from MRI, has been explored by many researchers, but very few studies have addressed the generation of MR images from PET/CT. Choi et al. [49] built a GAN model, based on image-to-image translation, to generate MR images from florbetapir PET images. The generated MR images were used for quantification of florbetapir PET, and the measured values were highly correlated with the real MR-based quantification. Although the structural similarity between the real and generated MR images was high (0.91 ± 0.04), differentiating between gray and white matter was difficult, and the detailed structures in the generated MR were blurred. In our group, a cycle-GAN-based deep learning method was applied to generate FLAIR images from 18F-FDG PET/CT. As shown in Fig. 9.3 (unpublished data), the FLAIR images generated by our method had excellent visual quality.
5 Low-Dose Imaging
High-quality PET images require a large number of gamma events, obtained either from a high-dose injection or a long scan time. A long scan time can result in patient motion artifacts and inconvenience, while high-dose administration increases radiation exposure to patients. To overcome these issues, technological development has concentrated on increasing PET scanner sensitivity to detect a larger number of coincidence events. A newer PET system with an axial field-of-view covering the whole body in a single bed position has shown a 40-fold improvement in effective sensitivity [50, 51]. In addition, numerous image reconstruction and noise reduction algorithms have improved the spatial resolution and signal-to-noise ratio (SNR) of PET images [52, 53]. Ordered subset expectation maximization (OSEM) with modeling of the point spread function has been used to reconstruct gamma events for high-resolution PET imaging.
Using deep learning, convolutional neural network (CNN) models have been trained to learn the relationship between full-dose and low-dose PET images [54,55,56]. Xu et al. [56] proposed a deep learning method, an encoder-decoder structure with concatenated skip connections and a residual learning framework, to reduce the radiotracer dose in 18F-FDG PET imaging. From 0.005 of the standard dose, it achieved significantly better performance than images reconstructed by denoising algorithms (non-local means, block-matching 3D, and an auto-context network).
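In a residual learning framework like the one above, the network predicts only the difference between the low-dose input and the full-dose target, and a skip connection adds that residual back to the input. A minimal structural sketch with arbitrary layer sizes, not the cited architecture:

```python
# Residual learning for denoising: output = input + predicted residual.
import torch
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1),
        )

    def forward(self, low_dose):
        # the skip connection adds the predicted residual to the input,
        # so the network only needs to learn the correction term
        return low_dose + self.body(low_dose)

x = torch.randn(1, 1, 32, 32)     # stand-in low-dose image
y = ResidualDenoiser()(x)         # denoised estimate, same size
print(tuple(y.shape))
```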
Chen et al. [57] proposed a method to reconstruct full-dose amyloid PET/MR 18F-florbetaben (18F-FBB) images from low-dose images. Compared with the low-dose images, the images synthesized by the CNN model showed marked improvement on all quality metrics, including peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and root mean square error (RMSE). In a visual reading of amyloid burden on the synthesized FBB images from the CNN model, the accuracy for amyloid status was 89%. In addition, the CNN model showed the smallest mean and variance of the standardized uptake value ratio (SUVR) difference relative to the full-dose images. Ouyang et al. [58] also reported a generative adversarial network (GAN) model to reconstruct full-dose PET images from low-dose images, which significantly outperformed Chen et al.’s method with the same input by 1.87 dB in PSNR, 2.04% in SSIM, and 24.75% in RMSE.
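The quality metrics reported above can be computed directly; RMSE and PSNR are shown below on toy arrays (SSIM requires a windowed, patch-wise computation and is omitted here):

```python
# RMSE and PSNR between a reference image and a degraded image.
import numpy as np

def rmse(ref, img):
    return float(np.sqrt(np.mean((ref - img) ** 2)))

def psnr(ref, img, data_range=1.0):
    # PSNR in dB; higher is better, infinite for identical images
    e = rmse(ref, img)
    return float("inf") if e == 0 else 20 * np.log10(data_range / e)

ref = np.ones((8, 8))
img = ref + 0.1                   # uniform error of 0.1
print(round(rmse(ref, img), 3), round(psnr(ref, img), 1))
```

With a uniform error of 0.1 on a unit data range, RMSE is 0.1 and PSNR is 20 dB.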
In our group, a CNN model with a residual learning framework was applied to predict full-time 18F-FBB PET/CT images from short-time scans of 1 to 5 min with excellent image quality (Fig. 9.4, unpublished data). In amyloid imaging, amyloid positivity can be measured by quantitative analysis of the SUVR, normalized to the mean value in the cerebellar cortex. Our receiver operating characteristic (ROC) analyses showed that the cut-off values for amyloid positivity deduced from the images predicted by the CNN models using low-dose images of 1 to 5 min remained unchanged compared with those obtained from the ground-truth images.
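The SUVR described above is a simple ratio of mean uptakes between a target region and the cerebellar reference; the uptake values and the positivity cut-off in the sketch below are illustrative only, not clinical thresholds.

```python
# SUVR: target-region uptake normalized to the mean uptake in a
# reference region (here, hypothetical cerebellar-cortex values).
import numpy as np

cortex_roi = np.array([1.8, 2.0, 2.2])      # made-up target-ROI uptake values
cerebellum_roi = np.array([0.9, 1.0, 1.1])  # made-up reference-ROI values

suvr = cortex_roi.mean() / cerebellum_roi.mean()
print(round(suvr, 2))                       # prints 2.0

is_amyloid_positive = suvr > 1.4            # illustrative cut-off only
print(is_amyloid_positive)                  # prints True
```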
Scan time reduction using low-dose imaging has also been attempted for 18F-FDG PET/CT. Kim et al. [59] proposed a deep learning method with a concatenated connection and a residual learning framework to synthesize PET images with the high SNR of a typical scan duration from short-scan-time PET images with low SNR (Fig. 9.5). The list-mode PET data were reformatted into 10, 30, 60, and 120 s frames to investigate the effect of scan time on the quality of the synthesized PET images. The PSNRs and normalized root mean square errors (NRMSEs) of the synthesized 18F-FDG PET images were significantly superior to those of the short-scan images at all scan times. As the scan time increased from 10 to 120 s, the PSNRs and NRMSEs of the synthesized 18F-FDG PET images improved by an average of 21.6 ± 3.8% and 47.0 ± 5.5%, respectively.
As shown in Fig. 9.6, high-quality PET images generated by a deep learning model from low-count data and/or short scan times can have a practical impact on reducing radiation exposure. This will provide new opportunities for PET/CT in patients such as children, pregnant women, and those prone to motion artifacts.
References
Staffaroni AM, Elahi FM, McDermott D, Marton K, Karageorgiou E, Sacco S, et al. Neuroimaging in dementia. Semin Neurol. 2017;37(5):510–37. https://doi.org/10.1055/s-0037-1608808.
Nasrallah I, Dubroff J. An overview of PET neuroimaging. Semin Nucl Med. 2013;43(6):449–61. https://doi.org/10.1053/j.semnuclmed.2013.06.003.
Friston KJ, Frith CD, Liddle PF, Frackowiak RS. Comparing functional (PET) images: the assessment of significant change. J Cereb Blood Flow Metab. 1991;11(4):690–9. https://doi.org/10.1038/jcbfm.1991.122.
Friston KJ, Frith CD, Liddle PF, Dolan RJ, Lammertsma AA, Frackowiak RS. The relationship between global and local changes in PET scans. J Cereb Blood Flow Metab. 1990;10(4):458–66. https://doi.org/10.1038/jcbfm.1990.88.
Nestor PJ, Altomare D, Festari C, Drzezga A, Rivolta J, Walker Z, et al. Clinical utility of FDG-PET for the differential diagnosis among the main forms of dementia. Eur J Nucl Med Mol Imaging. 2018;45(9):1509–25. https://doi.org/10.1007/s00259-018-4035-y.
Liu S, Liu S, Cai W, Che H, Pujol S, Kikinis R, et al. Multimodal neuroimaging feature learning for multiclass diagnosis of Alzheimer’s disease. IEEE Trans Biomed Eng. 2015;62(4):1132–40. https://doi.org/10.1109/tbme.2014.2372011.
Suk HI, Shen D. Deep learning-based feature representation for AD/MCI classification. Med Image Comput Comput Assist Interv. 2013;16(Pt 2):583–90. https://doi.org/10.1007/978-3-642-40763-5_72.
Hosseini-Asl E, Ghazal M, Mahmoud A, Aslantas A, Shalaby AM, Casanova MF, et al. Alzheimer’s disease diagnostics by a 3D deeply supervised adaptable convolutional network. Front Biosci (Landmark Ed). 2018;23:584–96. https://doi.org/10.2741/4606.
Kim HW, Lee HE, Lee S, Oh KT, Yun M, Yoo SK. Slice-selective learning for Alzheimer’s disease classification using a generative adversarial network: a feasibility study of external validation. Eur J Nucl Med Mol Imaging. 2020;47(9):2197–206. https://doi.org/10.1007/s00259-019-04676-y.
Liu M, Cheng D, Wang K, Wang Y. Multi-modality cascaded convolutional neural networks for Alzheimer’s disease diagnosis. Neuroinformatics. 2018;16(3–4):295–308. https://doi.org/10.1007/s12021-018-9370-4.
Mehmood A, Maqsood M, Bashir M, Shuyuan Y. A deep siamese convolution neural network for multi-class classification of Alzheimer disease. Brain Sci. 2020;10(2) https://doi.org/10.3390/brainsci10020084.
Rajpurkar P, Irvin J, Zhu K, Yang B, Mehta H, Duan T, et al. Chexnet: radiologist-level pneumonia detection on chest x-rays with deep learning. arXiv preprint arXiv:171105225. 2017.
Zhou B, Khosla A, Lapedriza A, Oliva A, Torralba A. Learning deep features for discriminative localization. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2016. p. 2921–9.
Zintgraf LM, Cohen TS, Adel T, Welling M. Visualizing deep neural network decisions: prediction difference analysis. arXiv preprint arXiv:170204595. 2017.
Choi H, Kim YK, Yoon EJ, Lee JY, Lee DS. Cognitive signature of brain FDG PET based on deep learning: domain transfer from Alzheimer’s disease to Parkinson’s disease. Eur J Nucl Med Mol Imaging. 2020;47(2):403–12. https://doi.org/10.1007/s00259-019-04538-7.
Dorsey ER, Constantinescu R, Thompson JP, Biglan KM, Holloway RG, Kieburtz K, et al. Projected number of people with Parkinson disease in the most populous nations, 2005 through 2030. Neurology. 2007;68(5):384–6. https://doi.org/10.1212/01.wnl.0000247740.47667.03.
Jankovic J. Parkinson’s disease: clinical features and diagnosis. J Neurol Neurosurg Psychiatry. 2008;79(4):368–76. https://doi.org/10.1136/jnnp.2007.131045.
Gratwicke J, Jahanshahi M, Foltynie T. Parkinson’s disease dementia: a neural networks perspective. Brain J Neurol. 2015;138(Pt 6):1454–76. https://doi.org/10.1093/brain/awv104.
Brooks DJ. Molecular imaging of dopamine transporters. Ageing Res Rev. 2016;30:114–21. https://doi.org/10.1016/j.arr.2015.12.009.
Brücke T, Djamshidian S, Bencsits G, Pirker W, Asenbaum S, Podreka I. SPECT and PET imaging of the dopaminergic system in Parkinson’s disease. J Neurol. 2000;247(Suppl 4):Iv/2-7. https://doi.org/10.1007/pl00007769.
Wu Y, Jiang JH, Chen L, Lu JY, Ge JJ, Liu FT, et al. Use of radiomic features and support vector machine to distinguish Parkinson’s disease cases from normal controls. Ann Transl Med. 2019;7(23):773. https://doi.org/10.21037/atm.2019.11.26.
Katako A, Shelton P, Goertzen AL, Levin D, Bybel B, Aljuaid M, et al. Machine learning identified an Alzheimer’s disease-related FDG-PET pattern which is also expressed in Lewy body dementia and Parkinson’s disease dementia. Sci Rep. 2018;8(1):13236. https://doi.org/10.1038/s41598-018-31653-6.
Zhang YC, Kagen AC. Machine learning interface for medical image analysis. J Digit Imaging. 2017;30(5):615–21. https://doi.org/10.1007/s10278-016-9910-0.
Augimeri A, Cherubini A, Cascini GL, Galea D, Caligiuri ME, Barbagallo G, et al. CADA-computer-aided DaTSCAN analysis. EJNMMI Phys. 2016;3(1):4. https://doi.org/10.1186/s40658-016-0140-9.
Oliveira FP, Castelo-Branco M. Computer-aided diagnosis of Parkinson’s disease based on [(123)I]FP-CIT SPECT binding potential images, using the voxels-as-features approach and support vector machines. J Neural Eng. 2015;12(2):026008. https://doi.org/10.1088/1741-2560/12/2/026008.
Huertas-Fernández I, García-Gómez FJ, García-Solís D, Benítez-Rivero S, Marín-Oyaga VA, Jesús S, et al. Machine learning models for the differential diagnosis of vascular parkinsonism and Parkinson’s disease using [(123)I]FP-CIT SPECT. Eur J Nucl Med Mol Imaging. 2015;42(1):112–9. https://doi.org/10.1007/s00259-014-2882-8.
Illan IA, Gorriz JM, Ramirez J, Segovia F, Jimenez-Hoyuela JM, Ortega Lozano SJ. Automatic assistance to Parkinson’s disease diagnosis in DaTSCAN SPECT imaging. Med Phys. 2012;39(10):5971–80. https://doi.org/10.1118/1.4742055.
Shiiba T, Arimura Y, Nagano M, Takahashi T, Takaki A. Improvement of classification performance of Parkinson’s disease using shape features for machine learning on dopamine transporter single photon emission computed tomography. PLoS One. 2020;15(1):e0228289. https://doi.org/10.1371/journal.pone.0228289.
Walker Z, Gandolfo F, Orini S, Garibotto V, Agosta F, Arbizu J, et al. Clinical utility of FDG PET in Parkinson’s disease and atypical parkinsonism associated with dementia. Eur J Nucl Med Mol Imaging. 2018;45(9):1534–45. https://doi.org/10.1007/s00259-018-4031-2.
Zhao Y, Cumming P, Rominger A, Zuo C, Shi K, Wu P, et al. A 3D deep residual convolutional neural network for differential diagnosis of parkinsonian syndromes on (18)F-FDG PET images. In: Conference proceedings: annual international conference of the IEEE engineering in medicine and biology society IEEE engineering in medicine and biology society annual conference. 2019; p. 3531–4. https://doi.org/10.1109/embc.2019.8856747.
Shen T, Jiang J, Lin W, Ge J, Wu P, Zhou Y, et al. Use of overlapping group LASSO sparse deep belief network to discriminate Parkinson’s disease and normal control. Front Neurosci. 2019;13:396. https://doi.org/10.3389/fnins.2019.00396.
Bouter C, Henniges P, Franke TN, Irwin C, Sahlmann CO, Sichler ME, et al. (18)F-FDG-PET detects drastic changes in brain metabolism in the Tg4-42 model of Alzheimer’s disease. Front Aging Neurosci. 2018;10:425. https://doi.org/10.3389/fnagi.2018.00425.
Blanc-Durand P, Van Der Gucht A, Schaefer N, Itti E, Prior JO. Automatic lesion detection and segmentation of 18F-FET PET in gliomas: a full 3D U-net convolutional neural network study. PLoS One. 2018;13(4):e0195798. https://doi.org/10.1371/journal.pone.0195798.
Yi X, Walia E, Babyn P. Generative adversarial network in medical imaging: a review. Med Image Anal. 2019;58:101552. https://doi.org/10.1016/j.media.2019.101552.
Oh KT, Lee S, Lee H, Yun M, Yoo SK. Semantic segmentation of white matter in FDG-PET using generative adversarial network. J Digit Imaging. 2020;33(4):816–25. https://doi.org/10.1007/s10278-020-00321-5.
Nie D, Wang L, Gao Y, Shen D. Fully convolutional networks for multi-modality isointense infant brain image segmentation. Proc IEEE Int Symp Biomed Imag. 2016;2016:1342–5. https://doi.org/10.1109/isbi.2016.7493515.
Zhang W, Li R, Deng H, Wang L, Lin W, Ji S, et al. Deep convolutional neural networks for multi-modality isointense infant brain image segmentation. NeuroImage. 2015;108:214–24. https://doi.org/10.1016/j.neuroimage.2014.12.061.
Erten-Lyons D, Woltjer R, Kaye J, Mattek N, Dodge HH, Green S, et al. Neuropathologic basis of white matter hyperintensity accumulation with advanced age. Neurology. 2013;81(11):977–83. https://doi.org/10.1212/WNL.0b013e3182a43e45.
Fiford CM, Manning EN, Bartlett JW, Cash DM, Malone IB, Ridgway GR, et al. White matter hyperintensities are associated with disproportionate progressive hippocampal atrophy. Hippocampus. 2017;27(3):249–62. https://doi.org/10.1002/hipo.22690.
Liu CK, Miller BL, Cummings JL, Mehringer CM, Goldberg MA, Howng SL, et al. A quantitative MRI study of vascular dementia. Neurology. 1992;42(1):138–43. https://doi.org/10.1212/wnl.42.1.138.
Scheltens P, Barkhof F, Leys D, Pruvo JP, Nauta JJ, Vermersch P, et al. A semiquantative rating scale for the assessment of signal hyperintensities on magnetic resonance imaging. J Neurol Sci. 1993;114(1):7–12. https://doi.org/10.1016/0022-510x(93)90041-v.
An HJ, Seo S, Kang H, Choi H, Cheon GJ, Kim HJ, et al. MRI-based attenuation correction for PET/MRI using multiphase level-set method. J Nucl Med. 2016;57(4):587–93. https://doi.org/10.2967/jnumed.115.163550.
Keereman V, Fierens Y, Broux T, De Deene Y, Lonneux M, Vandenberghe S. MRI-based attenuation correction for PET/MRI using ultrashort echo time sequences. J Nucl Med. 2010;51(5):812–8. https://doi.org/10.2967/jnumed.109.065425.
Rausch I, Rust P, DiFranco MD, Lassen M, Stadlbauer A, Mayerhoefer ME, et al. Reproducibility of MRI Dixon-based attenuation correction in combined PET/MR with applications for lean body mass estimation. J Nucl Med. 2016;57(7):1096–101. https://doi.org/10.2967/jnumed.115.168294.
Sekine T, Ter Voert EE, Warnock G, Buck A, Huellner M, Veit-Haibach P, et al. Clinical evaluation of zero-echo-time attenuation correction for brain 18F-FDG PET/MRI: comparison with atlas attenuation correction. J Nucl Med. 2016;57(12):1927–32. https://doi.org/10.2967/jnumed.116.175398.
Vandenberghe S, Marsden PK. PET-MRI: a review of challenges and solutions in the development of integrated multimodality imaging. Phys Med Biol. 2015;60(4):R115–54. https://doi.org/10.1088/0031-9155/60/4/r115.
Hwang D, Kang SK, Kim KY, Seo S, Paeng JC, Lee DS, et al. Generation of PET attenuation map for whole-body time-of-flight (18)F-FDG PET/MRI using a deep neural network trained with simultaneously reconstructed activity and attenuation maps. J Nucl Med. 2019;60(8):1183–9. https://doi.org/10.2967/jnumed.118.219493.
Liu F, Jang H, Kijowski R, Bradshaw T, McMillan AB. Deep learning MR imaging-based attenuation correction for PET/MR imaging. Radiology. 2018;286(2):676–84. https://doi.org/10.1148/radiol.2017170700.
Choi H, Lee DS. Generation of structural MR images from amyloid PET: application to MR-less quantification. J Nucl Med. 2018;59(7):1111–7. https://doi.org/10.2967/jnumed.117.199414.
Badawi RD, Shi H, Hu P, Chen S, Xu T, Price PM, et al. First human imaging studies with the EXPLORER total-body PET scanner. J Nucl Med. 2019;60(3):299–303. https://doi.org/10.2967/jnumed.119.226498.
Cherry SR, Jones T, Karp JS, Qi J, Moses WW, Badawi RD. Total-body PET: maximizing sensitivity to create new opportunities for clinical research and patient care. J Nucl Med. 2018;59(1):3–12. https://doi.org/10.2967/jnumed.116.184028.
Caribe P, Koole M, D’Asseler Y, Van Den Broeck B, Vandenberghe S. Noise reduction using a Bayesian penalized-likelihood reconstruction algorithm on a time-of-flight PET-CT scanner. EJNMMI Phys. 2019;6(1):22. https://doi.org/10.1186/s40658-019-0264-9.
Ashrafinia S, Mohy-Ud-Din H, Karakatsanis NA, Jha AK, Casey ME, Kadrmas DJ, et al. Generalized PSF modeling for optimized quantitation in PET imaging. Phys Med Biol. 2017;62(12):5149–79. https://doi.org/10.1088/1361-6560/aa6911.
Liu CC, Qi J. Higher SNR PET image prediction using a deep learning model and MRI image. Phys Med Biol. 2019;64(11):115004. https://doi.org/10.1088/1361-6560/ab0dc0.
Kaplan S, Zhu YM. Full-dose PET image estimation from low-dose PET image using deep learning: a pilot study. J Digit Imaging. 2019;32(5):773–8. https://doi.org/10.1007/s10278-018-0150-3.
Xu J, Gong E, Pauly J, Zaharchuk G. 200x low-dose PET reconstruction using deep learning. arXiv preprint arXiv:1712.04119. 2017.
Chen KT, Gong E, de Carvalho Macruz FB, Xu J, Boumis A, Khalighi M, et al. Ultra-low-dose (18)F-florbetaben amyloid PET imaging using deep learning with multi-contrast MRI inputs. Radiology. 2019;290(3):649–56. https://doi.org/10.1148/radiol.2018180940.
Ouyang J, Chen KT, Gong E, Pauly J, Zaharchuk G. Ultra-low-dose PET reconstruction using generative adversarial network with feature matching and task-specific perceptual loss. Med Phys. 2019;46(8):3555–64. https://doi.org/10.1002/mp.13626.
Kim J, Kang S, Lee K, Jung JH, Kim G, Lim HK, et al. Effect of scan time on neuro 18F-fluorodeoxyglucose positron emission tomography image generated using deep learning. J Med Imaging Health Inform. 2020;10:1–7. https://doi.org/10.1166/jmihi.2020.3316.
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Lee, S., Oh, K.T., Choi, Y., Yoo, S.K., Yun, M. (2022). Artificial Intelligence/Machine Learning in Nuclear Medicine. In: Veit-Haibach, P., Herrmann, K. (eds) Artificial Intelligence/Machine Learning in Nuclear Medicine and Hybrid Imaging. Springer, Cham. https://doi.org/10.1007/978-3-031-00119-2_9
Print ISBN: 978-3-031-00118-5
Online ISBN: 978-3-031-00119-2