Abstract
Positron Emission Tomography (PET) is now regarded as the gold standard for the diagnosis of Alzheimer’s Disease (AD). However, PET imaging can be prohibitive in terms of cost and planning, and it also delivers one of the highest radiation doses among imaging techniques. Magnetic Resonance Imaging (MRI), in contrast, is more widely available and provides more flexibility in setting the desired image resolution.
Unfortunately, diagnosing AD from MRI is difficult because the physiological differences between healthy and AD subjects visible on MRI are very subtle. As a result, many attempts have been made to synthesize PET images from MR images using generative adversarial networks (GANs), in the interest of enabling AD diagnosis from MRI alone. Existing work on PET synthesis from MRI has largely focused on conditional GANs, where MR images are used to generate PET images that are subsequently used for AD diagnosis; synthesis and diagnosis are trained separately, with no end-to-end training goal.
This paper proposes an alternative approach in which AD diagnosis is incorporated into the GAN training objective to achieve the best AD classification performance. The different GAN losses are fine-tuned based on the discriminator’s performance, which also stabilizes the overall training. The proposed network architecture and training regime show state-of-the-art performance on three- and four-class AD classification tasks.
1 Introduction
1.1 PET Imaging and AD Diagnosis
Alzheimer’s Disease (AD) is the most common cause of dementia, affecting the quality of life of many elderly people and their families. Early diagnosis and intervention can significantly slow the progression of the disease and improve quality of life, making AD an active area of research. Positron Emission Tomography (PET) appears to be a very promising imaging technique for assessing the progression and stage of the disease by monitoring the spread of Tau protein in the form of Neurofibrillary Tangles (NFT) and Amyloid beta (A\(\beta \)) [12, 13, 21]. As a functional imaging technique, PET uses a radioactive tracer injected into the patient and images the distribution of the tracer over the course of minutes or hours. In AD research, PET imaging techniques measure amyloid plaque (AV45) [4, 22] and tau protein aggregates (AV1451) [17, 24], both essential to understanding AD pathology and diagnosis. Compared to AV45-/AV1451-PET, FDG-PET usually helps differentiate AD from other causes of dementia, because it can characterize the patterns of glucose metabolism in the brain that are specific to AD [16]. Example T1-weighted Magnetic Resonance (MR) and AV45-/FDG-PET brain images of CN and AD subjects are shown in Fig. 1.
While PET plays an important role in AD diagnosis, it can be prohibitive in terms of cost and planning: (1) the short half-life of the radioisotopes requires on-site production in remote regions; (2) no-show patients result in radioisotopes being wasted; (3) the length of imaging sessions is determined by the tracer and the use case, so motion artifacts may be unavoidable; and (4) small variations (\(\sim \)5 min) in the acquisition start time may cause over- or under-estimation of quantitative parameters. PET is also not as widely available as Magnetic Resonance Imaging (MRI).
1.2 Synthesizing PET Images from MR for AD Diagnosis
To address the shortcomings of PET for AD diagnosis, a number of studies have attempted AD diagnosis from T1-weighted MR images. While T1-weighted MRI is most suitable for visualizing anatomical structures in the brain, it is not optimal for AD diagnosis because it does not highlight functional or metabolic properties of brain tissues. The question arises as to whether one can leverage existing combined PET-MR image pairs (a combined imaging modality available only to large research institutions) to generate PET images from MR-only acquisitions.
Conditional generative adversarial networks (CGANs) [11] have previously been used to generate images of one modality from a paired input image of a different modality. Frequent examples of such paired images are images and label maps, images and sketches, and pictures of the same scene under different lighting conditions (e.g., day/night). For medical image analysis tasks such as AD diagnosis, a PET image is generated from MRI using a CGAN, and the generated PET image is then used to train an AD classification network.
This work proposes a similar approach, but one where the CGAN is trained end-to-end with the final goal of AD classification. If trained with the classification goal alone, the ability to generate realistic images may be compromised. We overcome this limitation by adaptively fine-tuning the GAN and classification losses, which also stabilizes the overall GAN training. State-of-the-art results on three- and four-class AD classification are achieved with the proposed architecture and training regime.
1.3 Dataset and Classification of Cognitive Decline
We use the publicly available ADNI (Alzheimer’s Disease Neuroimaging Initiative) dataset, comprising F18-AV-45 (florbetapir) and F18-FDG (fluorodeoxyglucose) PET image pairs along with co-registered T1-weighted MRI. The dataset contains six dementia-related conditions: cognitive normal (CN), early mild cognitive impairment (EMCI), late mild cognitive impairment (LMCI), mild cognitive impairment (MCI), subjective memory complaint (SMC), and Alzheimer’s disease (AD). Among these conditions, SMC is difficult to subjectively distinguish from CN, and there may be overlaps between EMCI/LMCI and MCI. Therefore, we test binary classification of AD/CN, and three- and four-class classification of AD/MCI/CN and AD/LMCI/EMCI/CN for early AD diagnosis. Figure 1 shows some examples of CN, EMCI, and AD images in the dataset.
We randomly divide the dataset into 70% training, 10% validation, and 20% testing by patient, resulting in 722/104/207 subjects for the train/validation/test sets. Some subjects have multiple scans (i.e., more than one temporal scan), for a total of 1,525 image triplets (AV45-PET, FDG-PET, T1-MRI). The images are pre-processed using FreeSurfer [1]. The T1-weighted images are skull-stripped, removing non-cerebral matter such as skull and scalp. Registration, re-scaling [9], and partial volume correction [8] are applied to the PET images. The T1-weighted images are re-scaled to \(1\,\text {mm}^{3}\) resolution with \(256^{3}\) voxels, and the PET images are \(2\times 93\times 76\times 76\) voxels, with two temporal frames.
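The patient-level split described above can be sketched as follows; the 70/10/20 ratios match the paper, while the function name, seed, and use of synthetic integer IDs are illustrative:

```python
import random

def split_by_patient(subject_ids, train=0.70, val=0.10, seed=0):
    """Split subject IDs (not individual scans) into train/val/test,
    so that all scans of one patient land in the same partition."""
    ids = sorted(set(subject_ids))
    random.Random(seed).shuffle(ids)
    n_train = round(len(ids) * train)
    n_val = round(len(ids) * val)
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

# 1,033 subjects -> roughly the paper's 722/104/207 partition sizes
train_ids, val_ids, test_ids = split_by_patient(range(1033))
```

Splitting by subject rather than by scan prevents temporal scans of the same patient from leaking across the train/test boundary.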
2 Related Work
Image-based AD diagnosis is regarded as a challenging task. Most prior works use a combination of structural and functional imaging, such as T1-weighted MRI and PET, or T1-weighted MRI and a functional MRI modality such as DTI (diffusion tensor imaging). They also typically focus on binary classification between pairs of disease stages, such as AD vs. NC or AD vs. MCI.
A combination of T1-weighted MRI and AV45-/FDG-PET was used with a multi-feature kernel supervised within-class-similar discriminative dictionary learning algorithm to demonstrate binary classification of AD/NC, MCI/NC, and AD/MCI in [15]. A combination of T1-weighted MRI and FDG-PET with a three-dimensional convolutional neural network (CNN) was used to demonstrate binary classification of CN/AD, CN/pMCI, and sMCI/pMCI in [10]. A GAN was used in [25] to generate additional PET images from T1-weighted MRIs that do not have AV45-PET image pairs. MRI and real-/synthetic-PET image pairs were subsequently used to train a CNN to perform binary classification of stable-MCI/progressive-MCI.
Functional MRI (fMRI) is the MRI technique most similar to PET, in that it can measure brain activity by detecting changes associated with blood flow. A minimum spanning tree (MST) classification framework was proposed in [5] to perform binary classification of MCI/NC, AD/CN, and AD/MCI using fMRI. A combination of T1-weighted MRI and Diffusion Tensor Imaging (DTI) was used with Multiple Kernel Learning to demonstrate binary classification of CN/AD, CN/MCI, and AD/MCI in [2].
More recent work demonstrates diagnosing AD from T1-weighted MRI only. Landmark-based features from longitudinal studies were used with support vector machines to classify CN/AD and CN/MCI in [26]. T1-weighted MRI was used with convolutional-autoencoder-based unsupervised learning for the CN/AD and progressive-MCI/stable-MCI classification tasks in [18]. Other recent works show multi-class classification using T1-weighted MRI. A variant of the DenseNet CNN was used for multi-class classification of AD/MCI/NC using MRI in [23]. T1-weighted MRI was used with a CNN to demonstrate binary classification of NC/AD and three-class classification of NC/AD/MCI in [6].
3 Methods
The pix2pix [11] CGAN architecture is widely adopted in the medical image analysis domain for synthesizing one image modality from another. For instance, Yan et al. [25] use the CGAN to generate AV45-PET from T1-weighted MRI to supplement the training dataset with additional synthetic PET-MRI image pairs. While generating an image of a different modality may be an end goal in the computer vision domain, in the medical domain we often want to diagnose a disease, such as AD, using the generated image. We hypothesize that a GAN designed and trained with this diagnostic end-goal in mind can outperform in AD diagnosis, compared to CGAN applications where synthesis and diagnosis are trained separately.
3.1 Conditional Generative Adversarial Networks
The pix2pix [11] CGAN is trained with the following objective:
$$G^{*} = \arg \min _{G} \max _{D}\; \mathcal {L}_{cGAN}(G,D) + \lambda \mathcal {L}_{L1}(G),$$
where \(\mathcal {L}_{cGAN}(G,D)\) and \(\mathcal {L}_{L1}(G)\) are defined as
$$\mathcal {L}_{cGAN}(G,D) = \mathbb {E}_{x,y}\left[ \log D(x,y)\right] + \mathbb {E}_{x,z}\left[ \log (1 - D(x, G(x,z)))\right] ,$$
$$\mathcal {L}_{L1}(G) = \mathbb {E}_{x,y,z}\left[ \Vert y - G(x,z)\Vert _{1}\right] ,$$
where x, y, and G(x, z) can be regarded as the MRI input, the real PET, and the generated PET, respectively. The CGAN consists of a generator (G) with an encoder-decoder architecture and a discriminator (D) that is a CNN classifier. The U-Net [19] architecture is typically used as G, taking an input image and generating an output image of the same size but of a different modality or characteristics. PET conventionally has lower image resolution than MRI, so we modify the U-Net architecture to take the different resolutions into account - MRI: \(256\times 256\times 256\); PET: \(2\times 93\times 76\times 76\). The encoder has eight layers while the decoder has five. Only the middle five layers of the encoder-decoder have skip-connections, with the last two up-sampling (transpose convolution) layers producing the target PET resolution. The discriminator CNN has three convolutional (conv-) layers that take the MRI input and two conv-layers that take the PET input. The two branches of conv-layers are merged and followed by two additional conv-layers for classification.
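The \(\mathcal {L}_{cGAN}\) and \(\mathcal {L}_{L1}\) terms of this objective can be evaluated numerically for a single sample; a minimal sketch with scalar discriminator outputs standing in for patch predictions, and \(\lambda = 100\) taken from the pix2pix defaults:

```python
import math

def cgan_loss(d_real, d_fake):
    """Value of L_cGAN for one (x, y) pair:
    log D(x, y) + log(1 - D(x, G(x, z)))."""
    return math.log(d_real) + math.log(1.0 - d_fake)

def l1_loss(y, y_gen):
    """L1 term ||y - G(x, z)||_1, averaged over voxels."""
    return sum(abs(a - b) for a, b in zip(y, y_gen)) / len(y)

# D rates the real PET at 0.9 and the generated PET at 0.2
objective = cgan_loss(0.9, 0.2) + 100.0 * l1_loss([1.0, 0.5], [0.8, 0.5])
```

D maximizes \(\mathcal {L}_{cGAN}\) while G minimizes it, and the weighted L1 term keeps the generated PET voxel-wise close to the real one.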
3.2 GAN with Discriminator-Adaptive Loss Fine-Tuning
GAN is trained with a minimax objective [7] in which G and D compete with each other. CGAN is trained with an additional L1 loss for the G and a patch-GAN [11] classifier for the D. The D in our generative network is trained with additional AD classification losses: (1) based on real MRI and generated PET input, multiplied by a hyper-parameter \(\lambda _{\text {GAN}_D}\), and (2) based on real MRI and PET, multiplied by \(\lambda _{\text {CLS}_D}\):
$$\mathcal {L}_{D} = \mathcal {L}_{cGAN}(G,D) + \lambda _{\text {GAN}_D}\, \mathcal {L}_{CLS}\!\left( D(x, G(x,z)), \hat{y}\right) + \lambda _{\text {CLS}_D}\, \mathcal {L}_{CLS}\!\left( D(x, y), \hat{y}\right) ,$$
where \(\hat{y}\) is the AD label.
The G is also trained with an AD classification loss based on real MRI and generated PET input, in addition to the GAN loss and L1 loss. Each loss is multiplied by a hyper-parameter to control its relative importance during training - \(\lambda _{\text {CLS}_G}\), \(\lambda _{\text {GAN}_G}\), and \(\lambda _{L1}\):
$$\mathcal {L}_{G} = \lambda _{\text {GAN}_G}\, \mathcal {L}_{cGAN}(G,D) + \lambda _{L1}\, \mathcal {L}_{L1}(G) + \lambda _{\text {CLS}_G}\, \mathcal {L}_{CLS}\!\left( D(x, G(x,z)), \hat{y}\right) .$$
In the early phase of GAN training, the generated PET images are likely far from the real ones; they progressively become more realistic as training proceeds. Therefore, D is initially trained with a small \(\lambda _{\text {GAN}_D}\) that is gradually increased during training, while \(\lambda _{\text {CLS}_D}\) starts from a larger value and is gradually decreased. This encourages the D to focus on AD classification as G improves at generating realistic PET images. The G is trained with a large \(\lambda _{\text {GAN}_G}\) at first so it can focus on generating realistic PET in the beginning; it is gradually decreased as \(\lambda _{\text {CLS}_G}\) increases from a smaller value, to emphasize AD classification using the generated PET images. We initialize \(\lambda _{\text {GAN}_D}\) and \(\lambda _{\text {CLS}_G}\) to 0.01 and increase them 10-fold per epoch, and initialize \(\lambda _{\text {CLS}_D}\) and \(\lambda _{\text {GAN}_G}\) to 100 and decrease them 10-fold per epoch. We train for 1000 epochs using the ADAM optimizer [14].
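The schedule above can be sketched as a small helper. The clamp to [0.01, 100] is our assumption (the paper does not state a cap), and reading "increase 10 times per epoch" as multiplicative is one interpretation:

```python
def loss_weights(epoch, lo=0.01, hi=100.0, factor=10.0):
    """Per-epoch loss weights: weights starting at `lo` grow 10x per
    epoch, weights starting at `hi` shrink 10x per epoch. The clamp
    to [lo, hi] is an assumption; the paper does not state a cap."""
    up = min(lo * factor ** epoch, hi)      # lambda_GAN_D, lambda_CLS_G
    down = max(hi / factor ** epoch, lo)    # lambda_CLS_D, lambda_GAN_G
    return {"gan_d": up, "cls_g": up, "cls_d": down, "gan_g": down}

# epoch 0: G is dominated by the GAN term, D by AD classification
w0 = loss_weights(0)
```

Under this reading the weights cross over within a few epochs, shifting both networks' emphasis from realistic synthesis to AD classification.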
Stabilizing Training. When D and G are trained independently, the D and G losses can oscillate rather than settle into a stable convergence state [3]. To remedy this problem we continuously monitor the D and G losses, and adjust the hyper-parameters \(\lambda \) of the losses if either worsens compared to the previous epoch. Loss oscillation generally occurs once training is well underway, which is when the AD classification losses carry higher weights. This is similar to the approach of [20], which penalizes D weights with annealing to stabilize GAN training. For example, when the D loss starts to oscillate and becomes higher than in the previous epoch, (1) its previous checkpoint is restored, and (2) \(\lambda _{\text {CLS}_D}\) is decreased. The overall training pipeline is shown in Fig. 2.
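The monitoring-and-rollback loop can be sketched as follows; the 0.5 shrink factor and the dict-based stand-in for model weights are illustrative, as the paper only states that the checkpoint is restored and \(\lambda _{\text {CLS}_D}\) is decreased:

```python
import copy

class StabilizedTrainer:
    """Sketch of the checkpoint-rollback heuristic: if the discriminator
    loss rises versus the previous epoch, restore the last checkpoint
    and shrink lambda_CLS_D. The 0.5 factor is an assumption."""

    def __init__(self, d_state, lambda_cls_d=100.0):
        self.d_state = d_state                    # current D weights
        self.checkpoint = copy.deepcopy(d_state)  # last good weights
        self.lambda_cls_d = lambda_cls_d
        self.prev_d_loss = float("inf")

    def end_of_epoch(self, d_loss):
        if d_loss > self.prev_d_loss:                      # oscillation
            self.d_state = copy.deepcopy(self.checkpoint)  # (1) roll back
            self.lambda_cls_d *= 0.5                       # (2) decrease
        else:                                              # accept epoch
            self.checkpoint = copy.deepcopy(self.d_state)
            self.prev_d_loss = d_loss
        return self.lambda_cls_d
```

A bad epoch therefore never overwrites the last accepted checkpoint, and repeated oscillation keeps shrinking the classification weight until training settles.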
4 Results
We perform two- to four-class AD classification using T1-weighted MRI input. The two-class AD classification results are shown in Table 1. The CNN approach in [6] reports better performance on the two-class AD/CN classification, and GANDALF shows performance similar to the pix2pix + CNN method. We suspect this may be because the AD vs. CN difference is more clearly visible on MRI than AD/MCI/CN or AD/LMCI/EMCI/CN, so a deep CNN with a good hyper-parameter set can provide better results, with PET playing a rather limited role in the diagnosis. We did not conduct a thorough hyper-parameter search for GANDALF in this study.
Results of the three-class AD/MCI/CN classification task are shown in Table 2. We achieve state-of-the-art performance on the three-class classification compared to prior works using T1-weighted MRI input. MCI may show more subtle differences on MRI than AD, as can be seen in Fig. 1. This, together with the consistently better performance of the generative methods compared to prior works, could indicate that the additional training signal from synthesizing PET helps achieve better performance for early AD diagnosis.
Lastly, results for the four-class classification of AD/LMCI/EMCI/CN are shown in Table 3. We show a meaningful first result on distinguishing early and late MCI from CN and AD, a promising first step for early AD diagnosis using T1-weighted MRI. Our proposed GANDALF method also shows improved performance compared to the pix2pix + CNN method. Towards the end of GANDALF training, the entire network acts as a classification network with T1-weighted MRI input. Finding a better or deeper classifier/discriminator architecture could improve the final classification performance; however, this should be balanced against the generator architecture and depth for GAN training with the minimax objective. A thorough hyper-parameter search could also improve the final performance.
5 Conclusion
Early diagnosis and intervention of Alzheimer’s disease (AD) can significantly slow the progression of the disease and improve the quality of life of patients and their caregivers. PET imaging can provide great insight for early diagnosis of AD; however, it is rarely available outside of research environments. Earlier works on MRI-based AD diagnosis use conditional generative adversarial networks (CGANs) to synthesize PET from MRI, and subsequently use the generated PET for AD diagnosis.
We propose a network in which the AD diagnosis end-goal is incorporated into MRI-PET synthesis and trained end-to-end, instead of first synthesizing PET and then using it for AD diagnosis. Furthermore, we suggest a training scheme that stabilizes GAN training. We achieve state-of-the-art MRI-based AD diagnosis for three-class classification of AD/MCI/CN. To the best of our knowledge, we also achieve the first meaningful result on four-class (AD/LMCI/EMCI/CN) classification, which is promising for early diagnosis of AD based on MRI.
References
FreeSurfer software suite. https://surfer.nmr.mgh.harvard.edu
Ahmed, O.B., Benois-Pineau, J., Allard, M., Catheline, G., Amar, C.B., Alzheimer’s Disease Neuroimaging Initiative, et al.: Recognition of Alzheimer’s disease and mild cognitive impairment with multimodal image-derived biomarkers and multiple kernel learning. Neurocomputing 220, 98–110 (2017)
Arjovsky, M., Bottou, L.: Towards principled methods for training generative adversarial networks. In: 5th International Conference on Learning Representations. ICLR 2017, Toulon, France, 24–26 April 2017. Conference Track Proceedings (2017)
Berti, V., Pupi, A., Mosconi, L.: PET/CT in diagnosis of dementia. Ann. N. Y. Acad. Sci. 1228, 81 (2011)
Cui, X., et al.: Classification of Alzheimer’s disease, mild cognitive impairment, and normal controls with subnetwork selection and graph kernel principal component analysis based on minimum spanning tree brain functional network. Front. Comput. Neurosci. 12, 31 (2018)
Esmaeilzadeh, S., Belivanis, D.I., Pohl, K.M., Adeli, E.: End-to-end Alzheimer’s disease diagnosis and biomarker identification. In: Shi, Y., Suk, H.-I., Liu, M. (eds.) MLMI 2018. LNCS, vol. 11046, pp. 337–345. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-00919-9_39
Goodfellow, I., et al.: Generative adversarial nets. In: Advances in Neural Information Processing Systems, pp. 2672–2680 (2014)
Greve, D.N., et al.: Different partial volume correction methods lead to different conclusions: an 18F-FDG-PET study of aging. Neuroimage 132, 334–343 (2016)
Greve, D.N., et al.: Cortical surface-based analysis reduces bias and variance in kinetic modeling of brain PET data. Neuroimage 92, 225–236 (2014)
Huang, Y., Xu, J., Zhou, Y., Tong, T., Zhuang, X., Alzheimer’s Disease Neuroimaging Initiative (ADNI), et al.: Diagnosis of Alzheimer’s disease via multi-modality 3D convolutional neural network. Front. Neurosci. 13, 509 (2019)
Isola, P., Zhu, J.Y., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1125–1134 (2017)
Johnson, K.A., et al.: Tau positron emission tomographic imaging in aging and early Alzheimer disease. Ann. Neurol. 79, 110–119 (2016)
Johnson, K.A., AV45-A11 Study Group, et al.: Florbetapir (F18-AV-45) PET to assess amyloid burden in Alzheimer’s disease dementia, mild cognitive impairment, and normal aging. Alzheimer’s Dement. 9(5), S72–S83 (2013)
Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. In: 3rd International Conference on Learning Representations (ICLR) (2015)
Li, Q., Wu, X., Xu, L., Chen, K., Yao, L., Alzheimer’s Disease Neuroimaging Initiative, et al.: Classification of Alzheimer’s disease, mild cognitive impairment, and cognitively unimpaired individuals using multi-feature kernel discriminant dictionary learning. Front. Comput. Neurosci. 11, 117 (2018)
Marcus, C., Mena, E., Subramaniam, R.M.: Brain PET in the diagnosis of Alzheimer’s disease. Clin. Nucl. Med. 39(10), e413 (2014)
Marquié, M., et al.: Validating novel tau positron emission tomography tracer [F-18]-AV-1451 (T807) on postmortem brain tissue. Ann. Neurol. 78(5), 787–800 (2015)
Oh, K., Chung, Y.C., Kim, K.W., Kim, W.S., Oh, I.S.: Classification and visualization of Alzheimer’s disease using volumetric convolutional neural network and transfer learning. Sci. Rep. 9(1), 1–16 (2019)
Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. In: Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F. (eds.) MICCAI 2015. LNCS, vol. 9351, pp. 234–241. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24574-4_28
Roth, K., Lucchi, A., Nowozin, S., Hofmann, T.: Stabilizing training of generative adversarial networks through regularization. In: Advances in Neural Information Processing Systems, pp. 2018–2028 (2017)
Schwartz, A.J., et al.: Regional profiles of the candidate tau PET ligand 18F-AV-1451 recapitulate key features of Braak histopathological stages. Brain 139, 1539–1550 (2016)
Sevigny, J., et al.: The antibody aducanumab reduces a\(\beta \) plaques in Alzheimer’s disease. Nature 537(7618), 50–56 (2016)
Wu, B., Sun, X., Hu, L., Wang, Y.: Learning with unsure data for medical image diagnosis. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 10590–10599 (2019)
Xia, C.F., et al.: [18F] T807, a novel tau positron emission tomography imaging agent for Alzheimer’s disease. Alzheimer’s Dement. 9(6), 666–676 (2013)
Yan, Yu., Lee, H., Somer, E., Grau, V.: Generation of amyloid PET images via conditional adversarial training for predicting progression to Alzheimer’s disease. In: Rekik, I., Unal, G., Adeli, E., Park, S.H. (eds.) PRIME 2018. LNCS, vol. 11121, pp. 26–33. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-00320-3_4
Zhang, J., Liu, M., An, L., Gao, Y., Shen, D.: Alzheimer’s disease diagnosis using landmark-based features from longitudinal structural MR images. IEEE J. Biomed. Health Inform. 21(6), 1607–1616 (2017)
Acknowledgement
The authors would like to thank Seonjoo Lee of Columbia University Medical Center for the discussion and help in data pre-processing.
© 2020 Springer Nature Switzerland AG
Shin, HC. et al. (2020). GANDALF: Generative Adversarial Networks with Discriminator-Adaptive Loss Fine-Tuning for Alzheimer’s Disease Diagnosis from MRI. In: Martel, A.L., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2020. MICCAI 2020. Lecture Notes in Computer Science(), vol 12262. Springer, Cham. https://doi.org/10.1007/978-3-030-59713-9_66