Abstract
Neuroimaging represents an intriguing target for AI applications for several reasons, including the high morbidity and mortality associated with neurological diseases. Technical challenges remain due to the volumetric and multiparametric nature of neuroradiological imaging; however, advances in GPU power and the development of novel deep learning architectures continue to drive the field forward. AI applications to neuroimaging have shown success at handling a range of tasks spanning all stages from an imaging study’s acquisition through its interpretation. Preprocessing steps commonly performed before utilization in AI applications include skull stripping, normalization, and coregistration. AI has demonstrated the ability to perform study protocoling; shorten image acquisition times of conventional, DTI, and ASL MRI; and generate synthetic images using a different imaging modality. The use of AI to perform tissue and lesion (e.g., tumor and MS plaque) segmentation is an area of active research. Newer applications have shown success at identification and quantification of specific disease processes, including infarcts, tumors, and intracranial hemorrhage, as well as more robust approaches that surveil for multiple acute neurological diseases.
1 Introduction
Neuroradiology has often been at the forefront of radiological imaging advances, such as the advent of diffusion-weighted MRI [1], due to the high stakes associated with diseases of the brain and spine as well as pragmatic factors such as the small field of view required for brain imaging and the sparing of the brain from respiratory motion artifact. With advances in computer vision in recent years, much interest has centered on the application of these technologies to neuroimaging; however, this presents a challenge due to the cross-sectional and, in the case of MRI, multiparametric nature of brain and spine imaging. The hardware demands associated with training deep learning networks using large numbers of three-dimensional image volumes are significant [2], although newer techniques [3] in combination with the availability of increasingly powerful GPU chips are beginning to overcome these challenges. AI applications to neuroimaging involve all aspects of image acquisition and interpretation and include study protocoling, image reconstruction, segmentation, and detection of disease processes (i.e., image classification).
2 Preprocessing of Brain Imaging
When utilizing supervised training for any task, the quality of the labeled training data has a profound impact on the success of the trained network. Accordingly, brain imaging data typically undergo several preprocessing steps before being utilized in AI applications. These steps include brain extraction (i.e., skull stripping), histogram normalization, and coregistration.
For many brain imaging AI applications, the removal of non-brain tissues from imaging data, including the skull, orbital contents, and soft tissues of the head and neck, leads to better performance [4,5,6]. The most commonly used tools for these tasks include the FMRIB Software Library (FSL) Brain Extraction Tool (BET) [7,8,9] and BET 2 [10], Brain Surface Extractor (BSE) [11], FreeSurfer [12], Robust Learning-based Brain Extraction System (ROBEX) [13], and Brain Extraction based on nonlocal Segmentation Technique (BEaST) [14]. For pediatric brain imaging, Learning Algorithm for Brain Extraction and Labeling (LABEL) has shown superior brain extraction performance as compared with several other commonly used tools [15]. Newer approaches for brain extraction that have utilized 3D convolutional neural networks (CNNs) have demonstrated superiority when used specifically for brain tumor studies [16] and have outperformed several older conventional non-CNN approaches [17].
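The production tools above rely on sophisticated models (FSL BET, for instance, fits a deformable surface to the brain boundary), but the core idea of separating brain from non-brain tissue can be conveyed with a toy two-dimensional sketch: threshold the image, then keep only the largest connected foreground component, discarding detached skull and scalp fragments. This is an illustration of the concept only, not any of the published algorithms.

```python
from collections import deque

def skull_strip(image, threshold):
    """Toy brain extraction: threshold a 2D image, then keep only the
    largest connected foreground component (the 'brain'), discarding
    smaller bright fragments such as pieces of skull or scalp."""
    rows, cols = len(image), len(image[0])
    fg = [[image[r][c] >= threshold for c in range(cols)] for r in range(rows)]
    seen = [[False] * cols for _ in range(rows)]
    best = []
    for r in range(rows):
        for c in range(cols):
            if fg[r][c] and not seen[r][c]:
                # breadth-first search to collect one connected component
                comp, q = [], deque([(r, c)])
                seen[r][c] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and fg[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    mask = [[0] * cols for _ in range(rows)]
    for y, x in best:
        mask[y][x] = 1
    return mask
```

Real tools operate on 3D volumes and must cope with the skull being connected to the brain through partial-volume voxels, which is why simple thresholding alone is insufficient in practice.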
Many big data applications utilize MR images acquired from multiple centers and scanners, which introduces challenges related to source heterogeneity. For example, MR imaging is prone to various artifacts that may degrade the performance of AI applications. Variations in image intensity that occur due to inhomogeneity of the scanner’s magnetic field, certain image acquisition artifacts, and patient motion may be addressed with bias field correction [18]. Commonly used tools for bias correction include nonparametric nonuniform intensity normalization (N3) [19] and N4ITK [20]. Another issue unique to MR imaging not encountered when using radiographs or CT is that variations in MRI scanner hardware and sequence designs frequently result in differences in image intensities for a given tissue class. Image histogram normalization is a common technique for standardizing these intensities across a heterogeneously acquired dataset. The most common methods include creating and applying an average histogram for the dataset [21] or matching individual images’ histograms to that of a chosen reference image [22].
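The landmark-matching flavor of histogram normalization can be sketched as a piecewise-linear remapping that aligns each image's intensity percentiles (e.g., deciles) with those of a reference. This minimal pure-Python version is illustrative only and is not the published algorithm of [21] or [22]:

```python
def percentiles(values, qs):
    """Percentile landmarks via linear interpolation over sorted values.
    qs are fractions in [0, 1], e.g. [0.0, 0.5, 1.0]."""
    s = sorted(values)
    out = []
    for q in qs:
        pos = q * (len(s) - 1)
        lo = int(pos)
        hi = min(lo + 1, len(s) - 1)
        out.append(s[lo] + (s[hi] - s[lo]) * (pos - lo))
    return out

def standardize(values, landmarks_src, landmarks_ref):
    """Remap intensities piecewise-linearly so the source image's
    landmarks line up with the reference landmarks."""
    def remap(v):
        # clamp values outside the landmark range
        if v <= landmarks_src[0]:
            return landmarks_ref[0]
        if v >= landmarks_src[-1]:
            return landmarks_ref[-1]
        for i in range(len(landmarks_src) - 1):
            a, b = landmarks_src[i], landmarks_src[i + 1]
            if a <= v <= b:
                t = (v - a) / (b - a) if b > a else 0.0
                return landmarks_ref[i] + t * (landmarks_ref[i + 1] - landmarks_ref[i])
    return [remap(v) for v in values]
```

After standardization, a given tissue class (say, white matter) lands at roughly the same intensity in every image of the dataset, which is the property downstream classifiers depend on.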
For many AI applications, it is desirable to coregister brain images from different patients (and sequence acquisitions, when using MRI) to a standard geometry, commonly the Montreal Neurological Institute (MNI) space. Many software tools exist for coregistration, such as FMRIB’s Linear Image Registration Tool (FLIRT) [23, 24] and Non-linear Image Registration Tool (FNIRT) [25], Advanced Neuroimaging Tools (ANTs) [26], and FreeSurfer. A newer CNN-based approach dubbed Quicksilver has shown promising results and may outperform traditional methods [27].
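At the heart of linear coregistration is resampling one volume into another's space under a rigid or affine transform. A minimal 2D nearest-neighbor sketch follows; tools like FLIRT add higher-order interpolation, multi-resolution search, and a cost function that is optimized to find the transform, none of which is shown here.

```python
import math

def resample(image, angle_deg, tx, ty, fill=0):
    """Nearest-neighbor resampling of a 2D image under a rigid
    (rotation + translation) transform. For each output pixel, the
    inverse transform locates the source pixel in the input image."""
    rows, cols = len(image), len(image[0])
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    out = [[fill] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            # inverse rigid transform: undo translation, then rotation
            xs, ys = x - tx, y - ty
            xi = round(cos_a * xs + sin_a * ys)
            yi = round(-sin_a * xs + cos_a * ys)
            if 0 <= yi < rows and 0 <= xi < cols:
                out[y][x] = image[yi][xi]
    return out
```

Pulling values backward through the inverse transform (rather than pushing input pixels forward) guarantees every output pixel is assigned exactly once, which is the standard design in registration software.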
Data augmentation is a technique for artificially increasing the number of training samples used in situations where large volumes of labeled data are unavailable [28]. Data augmentation has been described for mitigating the risk of overfitting of deep networks and as a method of handling class imbalance by increasing the proportion of the minority (often disease-positive) class. Pereira et al. performed augmentation using image rotation and reported a tumor segmentation mean performance gain of 2.6% [29]. Akkus et al. achieved an 8.8% accuracy gain for classifying 1p/19q mutation status in low-grade gliomas after augmentation by image rotation, translation, and flipping [30].
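The flip-and-rotate augmentations used by Pereira et al. and Akkus et al. can be sketched on a toy 2D image; each labeled sample yields up to eight geometrically transformed variants (the four 90-degree rotations of the image and of its mirror):

```python
def flip_lr(image):
    """Left-right (horizontal) flip of a 2D image."""
    return [row[::-1] for row in image]

def rotate90(image):
    """Rotate a 2D image 90 degrees clockwise."""
    return [list(row) for row in zip(*image[::-1])]

def augment(image):
    """Generate extra training samples from one image: the four
    90-degree rotations of the image and of its horizontal mirror."""
    samples = []
    for base in (image, flip_lr(image)):
        current = base
        for _ in range(4):
            samples.append(current)
            current = rotate90(current)
    return samples
```

Production pipelines typically also apply small arbitrary-angle rotations, translations, elastic deformations, and intensity jitter, and they apply the same transform to the image and its segmentation label so the pair stays consistent.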
3 Applications
Applications of AI to neuroimaging address all stages of image acquisition and interpretation and approach both specific and complex tasks.
3.1 Protocoling, Acquisition, and Image Construction
Once an imaging study is ordered by a referring clinician, an imaging protocol must be assigned that is appropriate for the indication and the patient’s medical history. Given the importance of cross-sectional imaging in neuroradiology, protocoling may be a complicated task (particularly in the case of MRI) and is typically performed by the radiologist, interrupting workflow [31] and in so doing potentially contributing to diagnostic errors [32]. In addition to unburdening the radiologist, automated protocoling has the potential to increase MR scanner throughput by including only the sequences pertinent to the given patient. Expanding on previous work applying AI to radiological protocoling [33], Brown and Marotta used natural language processing (NLP) to extract labeled data from radiology information system records, which were then used to train a gradient boost machine to generate custom MRI brain protocols with high accuracy [34].
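The shape of the protocoling task (free-text indication in, protocol out) can be conveyed with a toy keyword-rule classifier. This is a stand-in only: Brown and Marotta's system learns its mapping from labeled records with NLP features and gradient boosting, and the rule table and protocol names below are hypothetical.

```python
# Hypothetical keyword rules; a real system learns these from labeled data.
RULES = [
    ({"seizure", "epilepsy"}, "MRI brain epilepsy protocol"),
    ({"tumor", "glioma", "metastasis"}, "MRI brain with and without contrast"),
    ({"stroke", "tia", "weakness"}, "MRI brain stroke protocol"),
]

def protocol_for(indication, default="MRI brain without contrast"):
    """Assign a protocol from a free-text indication by counting
    keyword matches, falling back to a default when nothing matches."""
    words = set(indication.lower().replace(",", " ").split())
    best, best_hits = default, 0
    for keywords, protocol in RULES:
        hits = len(words & keywords)
        if hits > best_hits:
            best, best_hits = protocol, hits
    return best
```

A learned model improves on this sketch chiefly by handling synonyms, misspellings, and the patient's prior history, which brittle keyword lists cannot.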
Once MR data is obtained from the scanner, it must first be processed into images for the radiologist to review. This initial raw data is processed by a series of modules that require expert oversight to mitigate image noise and other artifacts, adding time and introducing variance to the image acquisition process. Building on previous deep learning approaches for shortening MR acquisition times through undersampling [35, 36], a network trained on brain MRI called Automated Transform by Manifold Approximation (AUTOMAP) performs image reconstruction rapidly and with less artifact than conventional methods [37] (Fig. 15.1). Since AUTOMAP is implemented as a feed-forward system, it completes image reconstruction almost instantly, enabling acquisition issues to be identified and addressed immediately, potentially reducing the need for patient callbacks.
Deep learning also shows promise for increasing the accessibility of specialized neuroimaging studies by shortening the acquisition time or enabling the generation of entire simulated imaging modalities. For example, diffusion tensor imaging (DTI), which provides information about white matter anatomy in the brain and spine, may be challenging to obtain on young or very sick patients due to the acquisition time and degree of patient cooperation required. Applying deep learning to DTI can achieve a 12-fold reduction in acquisition time by predicting DTI parameters from fewer data points than conventionally utilized [38]. Similarly, a reduction in acquisition time for arterial spin labeling perfusion imaging was achieved using a trained CNN to predict the final perfusion maps from fewer subtraction images [39].
Seven Tesla MR scanners can reveal a level of detail far beyond that of 1.5 or 3 T scanners [40]; however, 7 T magnets are generally confined to academic imaging centers and may be less tolerated by patients due to the high magnetic field strength [41]. By performing canonical correlation analysis on 3 T and 7 T brain MRI from the same patients, Bahrami et al. [42] were able to artificially generate simulated 7 T images using 3 T images for test patients. Furthermore, these simulated 7 T images had superior performance in subsequent segmentation tasks.
Recognizing that at their essence all radiological imaging modalities represent a type of anatomical abstraction, the ability to synthetically generate another MRI sequence, or imaging modality entirely, presents an intriguing target for AI. Using deep learning, brain MRI T1 images can be generated from T2 images and vice versa [43]. PET–MRI, which holds several advantages over PET–CT, including superior soft tissue contrast, has the disadvantage that in the absence of a CT acquisition it does not readily allow for attenuation correction of the PET images. However, supervised training of a deep network has enabled the generation of synthetic CT head images from contrast-enhanced gradient echo brain MRI, and these synthesized images achieve greater accuracy than existing methods when used to perform attenuation correction on the accompanying PET images [44]. A similar approach was used to train a CNN to utilize a single T1 sequence to generate synthetic CT images with greater speed and lower error rates than conventional methods (Fig. 15.2) [45].
3.2 Segmentation
Accurate, fast segmentation of brain imaging, which can be broadly divided into anatomical (e.g., subcortical structure) or lesion (pathology-specific) segmentation, is an important prerequisite for a number of clinical and research tasks, including monitoring progression of white matter [46, 47] and neurodegenerative diseases [48, 49] and assessing tumor treatment response [50]. However, since manual segmentation is tedious, time-consuming, and subject to inter- and intra-observer variance, there is great interest in developing AI solutions. To facilitate comparison of segmentation algorithms, several open competitions exist featuring public datasets and standardized evaluation methodology; several are described in this section.
Anatomical brain imaging segmentation entails the delineation of either basic tissue components (e.g., gray matter, white matter, and cerebrospinal fluid) or atlas-based substructures. For the former, commonly utilized brain tissue segmentation datasets include the Medical Image Computing and Computer Assisted Intervention (MICCAI) 2012 Multi-Atlas Labeling Challenge [51] and the Internet Brain Segmentation Repository (IBSR). Two more specialized MICCAI challenges exist: MRBrainS13 [52], which contains brain MRIs from adults aged 65–80, and NeoBrainS12, which comprises neonatal brain MRIs.
The most common brain lesion segmentation tasks addressed by AI are tumor and multiple sclerosis (MS) lesion segmentation. The MICCAI Brain Tumor Segmentation (BRATS) challenges have occurred annually since 2012, with the datasets growing over the years to include 243 preoperative glioma multimodal brain MRIs in the 2018 challenge [53, 54]. The winner of the BRATS 2017 segmentation challenge, as determined by the best overall Dice scores and Hausdorff distances for complete, core, and enhancing tumor segmentation, employed an ensemble of several existing CNN architectures. The underlying principle is that, through majority voting, the ensemble inherits the strengths of its best-performing individual networks, yielding greater generalizability across tasks [55].
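The evaluation metrics used in BRATS are straightforward to compute: the Dice coefficient measures volume overlap between a predicted and a reference mask, while the Hausdorff distance measures worst-case boundary disagreement. Minimal brute-force versions:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks
    (flattened lists of 0/1): 2*|A intersect B| / (|A| + |B|)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 1.0 if size == 0 else 2.0 * inter / size

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two 2D point sets
    (Euclidean, brute force): the largest distance from any point
    in one set to its nearest neighbor in the other."""
    def directed(ps, qs):
        return max(min(((px - qx) ** 2 + (py - qy) ** 2) ** 0.5
                       for qx, qy in qs)
                   for px, py in ps)
    return max(directed(points_a, points_b), directed(points_b, points_a))
```

Dice rewards bulk overlap and is insensitive to a few stray boundary voxels, whereas Hausdorff is dominated by the single worst outlier; challenges report both because the two metrics penalize different failure modes (in practice a robust 95th-percentile Hausdorff is often used to blunt outlier sensitivity).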
Additional deep learning segmentation applications target stroke (described subsequently), multiple sclerosis [56, 57], and cerebral small vessel disease (leukoaraiosis) [58] lesions. Anatomical Tracings of Lesions After Stroke (ATLAS-1) is a publicly available annotated dataset containing over 300 brain MRIs with acute infarcts [59]. For MS lesion segmentation, the major public datasets are MICCAI 2008 [60], International Symposium on Biomedical Imaging (ISBI) 2015 [61], and MS Lesion Segmentation Challenge (MSSEG) 2016 [62].
Due to the limited numbers of training and test subjects generally available within existing public annotated datasets, several of the best performing networks for various segmentation tasks have pooled multiple public datasets, supplemented with their own data, or employed data augmentation techniques [63,64,65,66]. A study by AlBadawy et al. demonstrated the importance of such measures, finding that the source(s) of tumor segmentation training data held a significant impact on the resulting performance during network validation (Fig. 15.3) [67].
3.3 Stroke
Stroke represents a major cause of morbidity and mortality worldwide. For example, in the United States stroke afflicts an estimated 795,000 people each year [68], accounting for 1 in every 20 deaths [69]. With over 1.9 million neurons lost each minute in the setting of an acute stroke [70], it is critical to quickly diagnose and triage stroke patients.
The Alberta Stroke Program Early Computed Tomography Score (ASPECTS) is a validated and widely used method for triaging patients with suspected anterior circulation acute stroke. ASPECTS divides the middle cerebral artery territories into ten regions of interest bilaterally [71]. The resulting score obtained from a patient’s non-contrast-enhanced CT head correlates with functional outcomes and helps guide management. e-ASPECTS, an ML-based software tool with CE-mark approval for use in Europe, has demonstrated non-inferiority (10% threshold for sensitivity and specificity) for ASPECT scoring as compared with neuroradiologists from multiple stroke centers [72]. Deep learning networks have also achieved high accuracy at quantifying infarct volumes using DWI [73] and FLAIR [74] MR sequences.
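The scoring rule itself is simple: starting from 10, one point is deducted per affected region (caudate, lentiform nucleus, internal capsule, insular ribbon, and cortical regions M1 through M6). What automated tools like e-ASPECTS actually contribute is deciding which regions show early ischemic change; given those decisions, the score reduces to:

```python
# The ten ASPECTS regions of the MCA territory: caudate, internal
# capsule, lentiform nucleus, insular ribbon, and cortical M1-M6.
REGIONS = ["C", "IC", "L", "I", "M1", "M2", "M3", "M4", "M5", "M6"]

def aspects(abnormal_regions):
    """ASPECTS: 10 minus one point for each region showing early
    ischemic change on non-contrast CT (10 = normal, 0 = diffuse
    involvement of the entire MCA territory)."""
    unknown = set(abnormal_regions) - set(REGIONS)
    if unknown:
        raise ValueError(f"unknown region(s): {sorted(unknown)}")
    return 10 - len(set(abnormal_regions))
```
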
Once a patient is diagnosed with an acute stroke, there is a need to quantify the volume of infarcted (unsalvageable) tissue and the ischemic but not yet infarcted (salvageable) tissue. This latter salvageable tissue is referred to as the ischemic penumbra. Quantification of the infarct core and ischemic penumbra is generally performed with either CT or MR brain perfusion. In the latter approach, the diffusion-perfusion mismatch is used to guide thrombolysis and thrombectomy decision-making [75]. Using acute DWI and perfusion imaging in concert with follow-up T2/FLAIR as training data, Nielsen et al. developed a deep CNN to distinguish infarcted tissue from the ischemic penumbra using only acute MR perfusion data. They achieved an AUC of 0.88 for diagnosing the final infarct volume and demonstrated an ability to predict the effect of thrombolysis treatment [76]. Additional studies have investigated the prediction of long-term language [77, 78] and motor [79] outcomes using ML evaluation of stroke territory volumes and locations.
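The mismatch logic such models feed into can be stated compactly: the penumbra is the perfusion lesion minus the diffusion core, and a "target mismatch" profile compares the perfusion-to-core ratio and the absolute core volume against thresholds. The default values below are illustrative, in the spirit of trial criteria such as DEFUSE 3; they are parameters, not fixed clinical constants.

```python
def mismatch(core_ml, perfusion_ml, ratio_threshold=1.8, core_limit_ml=70):
    """Diffusion-perfusion mismatch profile.

    core_ml: infarcted (unsalvageable) tissue volume in mL
    perfusion_ml: total hypoperfused tissue volume in mL
    Returns (penumbra volume, mismatch ratio, target-mismatch flag).
    """
    penumbra_ml = max(perfusion_ml - core_ml, 0)
    ratio = perfusion_ml / core_ml if core_ml > 0 else float("inf")
    target = ratio >= ratio_threshold and core_ml < core_limit_ml
    return penumbra_ml, ratio, target
```

A large penumbra with a small core (high ratio, core under the limit) identifies the patients most likely to benefit from reperfusion therapy, which is precisely why accurate automated core estimation matters.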
3.4 Tumor Classification
The ability to classify brain tumor type and World Health Organization grade using MRI has long been a goal of machine learning research. As early as 1998, Poptani et al. used an artificial neural network to differentiate normal brain MR spectroscopy studies from those with infectious and neoplastic diseases, achieving diagnostic accuracies of 73% and 98% for low- and high-grade gliomas, respectively [80]. More recent work has commonly employed support vector machines (SVMs) for tumor classification tasks, perhaps due to evidence that SVMs may perform better than neural networks with small training datasets [81]. In 2008, Emblem et al. applied a SVM approach to the differentiation of low- and high-grade gliomas using MR perfusion imaging, achieving true positive and true negative rates of 0.76 and 0.82, respectively [82]. Subsequent efforts have shown promising results for differentiating among glioma grades and other tumor classes using SVM analysis of conventional MRI without [83] or with [84, 85] the addition of perfusion MRI. Survival of patients with glioblastoma can also be predicted using SVM analysis of features derived from MR perfusion [86], conventional [87], and combined conventional, DTI, and perfusion [88] imaging features. SVM [88] and other [89] machine learning techniques have also been employed in radiomics research to investigate imaging markers for prediction of tumor molecular subtypes.
Differentiating glioblastoma, primary central nervous system lymphoma, and solitary brain metastasis is a common neuroradiological challenge due to the relatively high prevalence of these tumor classes and the potential for overlapping imaging characteristics. A multilayer perceptron trained using MR perfusion and permeability imaging was able to differentiate these tumor classes with accuracy (AUC 0.77) comparable to that of neuroradiologists [90].
In the setting of chemoradiation therapy for glioblastoma, differentiating viable tumor from treatment-related necrosis (pseudoprogression) on follow-up brain imaging is a common challenge in clinical neuro-oncology [91]. The application of SVMs to differentiating these entities has shown high accuracy using MR conventional imaging in combination with either perfusion [92] or permeability [93] data. A study evaluating the use of only conventional MRI sequences found that the best SVM accuracy was obtained using the FLAIR sequence (AUC 0.79), which achieved better accuracy than the neuroradiologist reviewers involved in the study [94].
3.5 Disease Detection
Applications of AI for neuroimaging disease detection exist within a spectrum of task complexity. On one end, there are applications that perform identification of a specific disease process, which often result in a binary classification (i.e., “normal” vs. “disease”). For example, several applications have been described for differentiating normal brain MRIs from those containing epileptogenic foci [95,96,97]. On the other end of the spectrum are broader surveillance applications designed to diagnose multiple critical pathologies, which one may envision as ultimately integrating within a real-world clinical radiology workflow. This latter, nascent category has been the source of much excitement [98,99,100,101].
In light of the importance and urgency of diagnosing intracranial hemorrhage, a disease process requiring neurosurgical evaluation and representing a contraindication for thrombolysis in the setting of acute stroke, the use of AI for identification of hemorrhage on head CT has been investigated in several studies. Whereas earlier attempts demonstrated promising results employing preprocessing algorithms heavily tailored for isolating hemorrhage [102,103,104], more recent efforts have investigated whether existing deep CNNs that have shown success at identifying everyday (nonmedical) images could be applied to head CTs. Desai et al. [105] compared two existing 2D deep CNNs for the identification of basal ganglia hemorrhage and found that GoogLeNet [106] outperformed AlexNet [28], noting that data augmentation and pre-training with the ImageNet repository [107] of everyday images improved diagnostic performance (AUC 1.0 for the best performing network). Transfer learning was similarly employed by Phong et al. [108], who achieved comparably high accuracies for identifying intracranial hemorrhage.
A study by Arbabshirani et al. [109] using CNNs to diagnose intracranial hemorrhage differed in several important ways. Whereas the above-described studies utilized relatively small datasets (<200 CT head studies), Arbabshirani et al. included over 46,000 CT head studies. To generate labels for this large number of studies, the authors expanded on other work investigating NLP applications to radiology reports [110, 111] and employed NLP to extrapolate a subset of human-annotated labels to generate machine-readable labels for the remainder of the radiology report dataset. The trained image classification model, which achieved an AUC of 0.846 for diagnosing intracranial hemorrhage, was then prospectively validated in a clinical workflow to flag new studies as either “routine” or “stat” in real time depending on the presence of intracranial hemorrhage. During this 3-month validation period, the network reclassified 94 of 347 CT head studies from “routine” to “stat.” Of the 94 studies flagged, 60 were confirmed by the interpreting radiologist as positive for intracranial hemorrhage. An additional four flagged studies were later reevaluated by a blinded overreader and deemed likely to reflect hemorrhage; in other words, the trained network had found hemorrhage that was missed by the interpreting radiologist.
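The NLP labeling step can be illustrated with a toy rule-based classifier: flag a report as hemorrhage-positive if a hemorrhage term survives after simple negation phrases are stripped. Real report-labeling pipelines [110, 111] are considerably more robust; the patterns below are illustrative only.

```python
import re

# Hemorrhage terms and a crude negation pattern; illustrative only.
POSITIVE = re.compile(r"\b(hemorrhage|hematoma|hemorrhagic)\b", re.I)
NEGATION = re.compile(r"\b(no|without|negative for)\s+(acute\s+)?"
                      r"(hemorrhage|hematoma)\b", re.I)

def label_report(report):
    """Weakly label a radiology report for intracranial hemorrhage:
    remove negated mentions first, then flag any surviving mention."""
    text = NEGATION.sub(" ", report)
    return 1 if POSITIVE.search(text) else 0
```

Labels produced this way are noisy (hence "weak supervision"), but at the scale of tens of thousands of reports the noise is tolerable for training an image classifier.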
Seeking to diagnose a broader range of intracranial pathologies, Prevedello et al. [112] trained a pair of CNNs using several hundred labeled head CTs for the purpose of identifying a number of critical findings. A CNN for processing images using brain tissue windows was able to diagnose hemorrhage, mass effect, and hydrocephalus with an AUC of 0.90, while a separately trained CNN evaluating images using a narrower “stroke window” achieved an AUC of 0.81 for the diagnosis of an acute ischemic stroke.
Approaching this challenge of simultaneously surveilling for multiple critical findings, Titano et al. [113] utilized a larger dataset of over 37,000 head CTs, first employing NLP to derive machine-readable labels from the radiology reports. These labels were then used for weakly supervised training of a 3D CNN modeled on ResNet-50 architecture to differentiate head CTs containing one or more critical findings (including acute fracture, intracranial hemorrhage, stroke, mass effect, and hydrocephalus) from those with only noncritical findings, achieving a sensitivity matching that of radiologists (sensitivity 0.79, specificity 0.48, AUC 0.73 for the model). To validate the clinical utility of the trained network, the authors performed a prospective double-blinded randomized controlled trial comparing how quickly the model versus radiologists could evaluate a head CT for critical findings, demonstrating that the model performed this task 150 times faster than the radiologists (mean 1.2 s vs. 177 s). Pending further multicenter prospective validation, such a tool could be used in a clinical radiology workflow to automatically triage head CTs for review.
4 Conclusion
Having already demonstrated success at a diverse range of neuroradiology tasks, artificial intelligence is poised to move beyond the proof-of-concept stage and impact many facets of clinical practice. The continued advancement of AI for neuroradiology depends in part on overcoming hurdles both technical and logistical in nature. The need for large-scale training data can be addressed by the release of more public annotated datasets, through development of applications that facilitate the creation of labels from existing radiology reports and DICOM metadata, crowdsourcing initiatives, and through improving data augmentation methodologies. The high computational costs of applying deep learning to volumetric data may be overcome by advances in GPU hardware and new techniques that better leverage multicore GPU architectures. Several open-source platforms now exist that facilitate deep learning efforts, including Keras, Caffe, and Theano, and the arrival of turnkey AI development applications is likely imminent. Similarly, while deep neural network architectures currently vary widely in design, standards may arise for specific classes of neuroimaging tasks. Finally, once a deep learning application is developed it must undergo validation, which faces its own regulatory and practical hurdles. For example, the opacity of deep networks, which traditionally function as “black boxes,” can make auditing a challenge, although this may be partially addressed through technical means like generating saliency overlays (i.e., “heat maps”). Regulatory bodies are considering new programs that would allow a vendor to make minor modifications to its existing application without requiring a full resubmission for approval [114], potentially enabling AI tools to continue improving during the postmarket phase.
These advancements, coupled with the tremendous interest in AI applications to neuroradiology, ensure that the field’s pace of evolution will continue to hasten. Whether or not we will witness an AI application that is able to pass the neuroradiology equivalent of the Turing Test—that is, AI possessing diagnostic abilities truly comparable to those of a neuroradiologist—remains a point of considerable debate. It is clear, however, that AI will become an increasingly important part of clinical neuroradiology and will carry with it the accompanying benefits to both patients and physicians.
5 Take-Home Points
- Neuroimaging represents an intriguing target for AI applications due to the high morbidity and mortality associated with neurological diseases.
- Technical challenges remain due to the volumetric and multiparametric nature of neuroradiological imaging; however, advances in GPU power and the development of novel deep learning architectures may enable these challenges to be overcome.
- AI applications to neuroimaging have shown success at handling a range of tasks involving all stages from an imaging study’s acquisition through its interpretation, including study protocoling; shortening image acquisition times of conventional, DTI, and ASL MRI; generating synthetic images using a different imaging modality; and lesion segmentation.
- Newer applications successfully identify and quantify specific disease processes including infarcts, tumors, and intracranial hemorrhage, and more robust approaches have shown success in surveilling for multiple acute neurological diseases.
References
Le Bihan D. Diffusion MRI: what water tells us about the brain. EMBO Mol Med. 2014;6(5):569–73.
Prasoon A, Petersen K, Igel C, Lauze F, Dam E, Nielsen M. Deep feature learning for knee cartilage segmentation using a triplanar convolutional neural network. In: Medical image computing and computer-assisted intervention – MICCAI 2013 [Internet]. Berlin: Springer; 2013. p. 246–53. (Lecture Notes in Computer Science). Available from: http://link.springer.com/chapter/10.1007/978-3-642-40763-5_31. Accessed 30 May 2018.
Kamnitsas K, Ledig C, Newcombe VFJ, Simpson JP, Kane AD, Menon DK, et al. Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Med Image Anal. 2017;36:61–78.
Acosta-Cabronero J, Williams GB, Pereira JMS, Pengas G, Nestor PJ. The impact of skull-stripping and radio-frequency bias correction on grey-matter segmentation for voxel-based morphometry. NeuroImage. 2008;39(4):1654–65.
Sadananthan SA, Zheng W, Chee MWL, Zagorodnov V. Skull stripping using graph cuts. NeuroImage. 2010;49(1):225–39.
Popescu V, Battaglini M, Hoogstrate WS, Verfaillie SCJ, Sluimer IC, van Schijndel RA, et al. Optimizing parameter choice for FSL-brain extraction tool (BET) on 3D T1 images in multiple sclerosis. NeuroImage. 2012;61(4):1484–94.
Smith SM. Fast robust automated brain extraction. Hum Brain Mapp. 2002;17(3):143–55.
Jenkinson M, Beckmann CF, Behrens TEJ, Woolrich MW, Smith SM. FSL. NeuroImage. 2012;62(2):782–90.
Woolrich MW, Jbabdi S, Patenaude B, Chappell M, Makni S, Behrens T, et al. Bayesian analysis of neuroimaging data in FSL. NeuroImage. 2009;45(1, Suppl. 1):S173–86.
Jenkinson M, Pechaud M, Smith SM. BET2: MR-based estimation of brain, skull and scalp surfaces. Eleventh annual meeting of the organization for human brain mapping, Toronto, Ontario; 2004. p. 716.
Shattuck DW, Sandor-Leahy SR, Schaper KA, Rottenberg DA, Leahy RM. Magnetic resonance image tissue classification using a partial volume model. NeuroImage. 2001;13(5):856–76.
Ségonne F, Dale AM, Busa E, Glessner M, Salat D, Hahn HK, et al. A hybrid approach to the skull stripping problem in MRI. NeuroImage. 2004;22(3):1060–75.
Iglesias JE, Liu CY, Thompson PM, Tu Z. Robust brain extraction across datasets and comparison with publicly available methods. IEEE Trans Med Imaging. 2011;30(9):1617–34.
Eskildsen SF, Coupé P, Fonov V, Manjón JV, Leung KK, Guizard N, et al. BEaST: brain extraction based on nonlocal segmentation technique. NeuroImage. 2012;59(3):2362–73.
Shi F, Wang L, Dai Y, Gilmore JH, Lin W, Shen D. LABEL: pediatric brain extraction using learning-based meta-algorithm. NeuroImage. 2012;62(3):1975–86.
Kleesiek J, Urban G, Hubert A, Schwarz D, Maier-Hein K, Bendszus M, et al. Deep MRI brain extraction: a 3D convolutional neural network for skull stripping. NeuroImage. 2016;129:460–9.
Duy NHM, Duy NM, Truong MTN, Bao PT, Binh NT. Accurate brain extraction using active shape model and convolutional neural networks. ArXiv180201268 Cs [Internet]. 2018. Available from: http://arxiv.org/abs/1802.01268. Accessed 21 May 2018.
Kahali S, Adhikari SK, Sing JK. On estimation of bias field in MRI images: polynomial vs Gaussian surface fitting method. J Chemom. 2016;30(10):602–20.
Sled JG, Zijdenbos AP, Evans AC. A nonparametric method for automatic correction of intensity nonuniformity in MRI data. IEEE Trans Med Imaging. 1998;17(1):87–97.
Tustison NJ, Avants BB, Cook PA, Zheng Y, Egan A, Yushkevich PA, et al. N4ITK: improved N3 bias correction. IEEE Trans Med Imaging. 2010;29(6):1310–20.
Nyul LG, Udupa JK, Zhang X. New variants of a method of MRI scale standardization. IEEE Trans Med Imaging. 2000;19(2):143–50.
Sun X, Shi L, Luo Y, Yang W, Li H, Liang P, et al. Histogram-based normalization technique on human brain magnetic resonance images from different acquisitions. Biomed Eng Online [Internet]. 2015;14. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4517549/. Accessed 21 May 2018.
Jenkinson M, Smith S. A global optimisation method for robust affine registration of brain images. Med Image Anal. 2001;5(2):143–56.
Jenkinson M, Bannister P, Brady M, Smith S. Improved optimization for the robust and accurate linear registration and motion correction of brain images. NeuroImage. 2002;17(2):825–41.
Andersson JLR, Jenkinson M, Smith S. Non-linear registration, aka spatial normalisation. FMRIB technical report TR07JA2. 2007. Available from: https://www.scienceopen.com/document?vid=13f3b9a9-6e99-4ae7-bea2-c1bf0af8ca6e. Accessed 21 May 2018.
Avants BB, Tustison NJ, Song G, Cook PA, Klein A, Gee JC. A reproducible evaluation of ANTs similarity metric performance in brain image registration. NeuroImage. 2011;54(3):2033–44.
Bernal J, Kushibar K, Asfaw DS, Valverde S, Oliver A, Martí R, et al. Deep convolutional neural networks for brain image analysis on magnetic resonance imaging: a review. ArXiv171203747 Cs [Internet]. 2017. Available from: http://arxiv.org/abs/1712.03747. Accessed 30 April 2018.
Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Commun ACM. 2017;60(6):84–90.
Pereira S, Pinto A, Alves V, Silva CA. Brain tumor segmentation using convolutional neural networks in MRI images. IEEE Trans Med Imaging. 2016;35(5):1240–51.
Akkus Z, Ali I, Sedlář J, et al. Predicting deletion of chromosomal arms 1p/19q in low-grade gliomas from MR images using machine intelligence. J Digit Imaging. 2017;30(4):469–76.
Schemmel A, Lee M, Hanley T, Pooler BD, Kennedy T, Field A, et al. Radiology workflow disruptors: a detailed analysis. J Am Coll Radiol. 2016;13(10):1210–4.
Balint BJ, Steenburg SD, Lin H, Shen C, Steele JL, Gunderman RB. Do telephone call interruptions have an impact on radiology resident diagnostic accuracy? Acad Radiol. 2014;21(12):1623–8.
Bhat A, Shih G, Zabih R. Automatic selection of radiological protocols using machine learning. In: Proceedings of the 2011 workshop on data mining for medicine and healthcare (DMMH ’11) [Internet]. New York: ACM; 2011. p. 52–5. Available from: http://doi.acm.org/10.1145/2023582.2023591. Accessed 22 May 2018.
Brown AD, Marotta TR. Using machine learning for sequence-level automated MRI protocol selection in neuroradiology. J Am Med Inform Assoc. 2018;25(5):568–71.
Wang S, Su Z, Ying L, Peng X, Zhu S, Liang F, et al. Accelerating magnetic resonance imaging via deep learning. In: IEEE; 2016. p. 514–7. Available from: http://ieeexplore.ieee.org/document/7493320/. Accessed 22 May 2018.
Yang Y, Sun J, Li H, Xu Z. Deep ADMM-Net for compressive sensing MRI. In: Lee DD, Sugiyama M, Luxburg UV, Guyon I, Garnett R, editors. Advances in neural information processing systems 29 [Internet]. Curran Associates; 2016. p. 10–8. Available from: http://papers.nips.cc/paper/6406-deep-admm-net-for-compressive-sensing-mri.pdf
Zhu B, Liu JZ, Cauley SF, Rosen BR, Rosen MS. Image reconstruction by domain-transform manifold learning. Nature. 2018;555(7697):487–92.
Golkov V, Dosovitskiy A, Sperl JI, Menzel MI, Czisch M, Sämann P, et al. q-Space deep learning: twelve-fold shorter and model-free diffusion MRI scans. IEEE Trans Med Imaging. 2016;35(5):1344–51.
Kim KH, Choi SH, Park S-H. Improving arterial spin labeling by using deep learning. Radiology. 2017;287(2):658–66.
Law M, Wang R, Liu C-SJ, Shiroishi MS, Carmichael JD, Mack WJ, et al. Value of pituitary gland MRI at 7 T in Cushing’s disease and relationship to inferior petrosal sinus sampling: case report. J Neurosurg. 2018:1–5. https://doi.org/10.3171/2017.9.JNS171969.
Schaap K, Christopher-de Vries Y, Mason CK, de Vocht F, Portengen L, Kromhout H. Occupational exposure of healthcare and research staff to static magnetic stray fields from 1.5–7 Tesla MRI scanners is associated with reporting of transient symptoms. Occup Environ Med. 2014;71(6):423–9.
Bahrami K, Shi F, Zong X, Shin HW, An H, Shen D. Reconstruction of 7T-like images from 3T MRI. IEEE Trans Med Imaging. 2016;35(9):2085–97.
Vemulapalli R, Nguyen HV, Zhou SK. Chapter 16 – deep networks and mutual information maximization for cross-modal medical image synthesis. In: Deep learning for medical image analysis [Internet]. Academic Press; 2017. p. 381–403. Available from: https://www.sciencedirect.com/science/article/pii/B9780128104088000225. Accessed 22 May 2018.
Liu F, Jang H, Kijowski R, Bradshaw T, McMillan AB. Deep learning MR imaging-based attenuation correction for PET/MR imaging. Radiology. 2017;286(2):676–84.
Han X. MR-based synthetic CT generation using a deep convolutional neural network method. Med Phys. 2017;44(4):1408–19.
Kalkers NF, Ameziane N, Bot JCJ, Minneboo A, Polman CH, Barkhof F. Longitudinal brain volume measurement in multiple sclerosis: rate of brain atrophy is independent of the disease subtype. Arch Neurol. 2002;59(10):1572–6.
Mollison D, Sellar R, Bastin M, Mollison D, Chandran S, Wardlaw J, et al. The clinico-radiological paradox of cognitive function and MRI burden of white matter lesions in people with multiple sclerosis: a systematic review and meta-analysis. PLoS One. 2017;12(5):e0177727.
Jack CR, Slomkowski M, Gracon S, Hoover TM, Felmlee JP, Stewart K, et al. MRI as a biomarker of disease progression in a therapeutic trial of milameline for AD. Neurology. 2003;60(2):253–60.
Simmons A, Westman E, Muehlboeck S, Mecocci P, Vellas B, Tsolaki M, et al. MRI measures of Alzheimer’s disease and the AddNeuroMed study. Ann N Y Acad Sci. 2009;1180(1):47–55.
Bauer S, Wiest R, Nolte L-P, Reyes M. A survey of MRI-based medical image analysis for brain tumor studies. Phys Med Biol. 2013;58(13):R97.
Landman BA, Ribbens A, Lucas B, Davatzikos C, Avants B, Ledig C, et al. MICCAI 2012 workshop on multi-atlas labeling. In: Warfield SK, editor. CreateSpace independent publishing platform; 2012. 164 p.
Mendrik AM, Vincken KL, Kuijf HJ, Breeuwer M, Bouvy WH, de Bresser J, et al. MRBrainS challenge: online evaluation framework for brain image segmentation in 3T MRI scans [Internet]. In: Computational intelligence and neuroscience. 2015. Available from: https://www.hindawi.com/journals/cin/2015/813696/. Accessed 23 May 2018.
Menze BH, Jakab A, Bauer S, Kalpathy-Cramer J, Farahani K, Kirby J, et al. The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Trans Med Imaging. 2015;34(10):1993–2024.
Bakas S, Akbari H, Sotiras A, Bilello M, Rozycki M, Kirby JS, et al. Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features. Sci Data [Internet]. 2017;4. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5685212/. Accessed 23 May 2018.
Kamnitsas K, Bai W, Ferrante E, McDonagh S, Sinclair M, Pawlowski N, et al. Ensembles of multiple models and architectures for robust brain tumour segmentation. arXiv:1711.01468 [cs]. 2017. Available from: http://arxiv.org/abs/1711.01468. Accessed 23 May 2018.
Brosch T, Tang LYW, Yoo Y, Li DKB, Traboulsee A, Tam R. Deep 3D convolutional encoder networks with shortcuts for multiscale feature integration applied to multiple sclerosis lesion segmentation. IEEE Trans Med Imaging. 2016;35(5):1229–39.
Valverde S, Cabezas M, Roura E, González-Villà S, Pareto D, Vilanova JC, et al. Improving automated multiple sclerosis lesion segmentation with a cascaded 3D convolutional neural network approach. NeuroImage. 2017;155:159–68.
Chen L, Carlton Jones AL, Mair G, Patel R, Gontsarova A, Ganesalingam J, et al. Rapid automated quantification of cerebral leukoaraiosis on CT images: a multicenter validation study. Radiology. 2018;288(2):573–81. https://doi.org/10.1148/radiol.2018171567.
Liew S-L, Anglin JM, Banks NW, Sondag M, Ito KL, Kim H, et al. A large, open source dataset of stroke anatomical brain images and manual lesion segmentations. Sci Data. 2018;5:180011.
Styner M, Lee J, Chin B, Chin MS, Commowick O, Tran H-H, et al. 3D segmentation in the clinic: a grand challenge II: MS lesion segmentation. MIDAS J. 2008:638.
Carass A, Roy S, Jog A, Cuzzocreo JL, Magrath E, Gherman A, et al. Longitudinal multiple sclerosis lesion segmentation: resource and challenge. NeuroImage. 2017;148:77–102.
Commowick O, Cervenansky F, Ameli R. MSSEG challenge proceedings: multiple sclerosis lesions segmentation challenge using a data management and processing infrastructure [Internet]. 2016. Available from: http://www.hal.inserm.fr/inserm-01397806/document. Accessed 23 May 2018.
Moeskops P, Viergever MA, Mendrik AM, de Vries LS, Benders MJNL, Išgum I. Automatic segmentation of MR brain images with a convolutional neural network. IEEE Trans Med Imaging. 2016;35(5):1252–61.
Chen H, Dou Q, Yu L, Qin J, Heng P-A. VoxResNet: deep voxelwise residual networks for brain segmentation from 3D MR images. NeuroImage. 2018;170:446–55.
Wachinger C, Reuter M, Klein T. DeepNAT: deep convolutional neural network for segmenting neuroanatomy. NeuroImage. 2018;170:434–45.
Chang PD. Fully convolutional deep residual neural networks for brain tumor segmentation. In: Brainlesion: glioma, multiple sclerosis, stroke and traumatic brain injuries [Internet]. Cham: Springer; 2016. p. 108–18. (Lecture Notes in Computer Science). Available from: https://link.springer.com/chapter/10.1007/978-3-319-55524-9_11. Accessed 6 May 2018.
AlBadawy EA, Saha A, Mazurowski MA. Deep learning for segmentation of brain tumors: impact of cross-institutional training and testing. Med Phys. 2018;45(3):1150–8.
Benjamin EJ, Blaha MJ, Chiuve SE, Cushman M, Das SR, Deo R, et al. Heart disease and stroke statistics-2017 update: a report from the American Heart Association. Circulation. 2017;135(10):e146–603.
Yang Q, Tong X, Schieb L, Vaughan A, Gillespie C, Wiltz JL, et al. Vital signs: recent trends in stroke death rates – United States, 2000–2015. MMWR Morb Mortal Wkly Rep. 2017;66(35):933–9.
Saver JL. Time is brain—quantified. Stroke. 2006;37(1):263–6.
Barber PA, Demchuk AM, Zhang J, Buchan AM. Validity and reliability of a quantitative computed tomography score in predicting outcome of hyperacute stroke before thrombolytic therapy. Lancet. 2000;355(9216):1670–4.
Nagel S, Sinha D, Day D, Reith W, Chapot R, Papanagiotou P, et al. e-ASPECTS software is non-inferior to neuroradiologists in applying the ASPECT score to computed tomography scans of acute ischemic stroke patients. Int J Stroke. 2017;12(6):615–22.
Chen L, Bentley P, Rueckert D. Fully automatic acute ischemic lesion segmentation in DWI using convolutional neural networks. NeuroImage Clin. 2017;15:633–43.
Maier O, Schröder C, Forkert ND, Martinetz T, Handels H. Classifiers for ischemic stroke lesion segmentation: a comparison study. PLoS One. 2015;10(12):e0145118.
Albers GW, Marks MP, Kemp S, Christensen S, Tsai JP, Ortega-Gutierrez S, et al. Thrombectomy for stroke at 6 to 16 hours with selection by perfusion imaging. N Engl J Med. 2018;378(8):708–18.
Nielsen A, Hansen MB, Tietze A, Mouridsen K. Prediction of tissue outcome and assessment of treatment effect in acute ischemic stroke using deep learning. Stroke. 2018;49(6):1394–401. https://doi.org/10.1161/STROKEAHA.117.019740.
Hope TMH, Seghier ML, Leff AP, Price CJ. Predicting outcome and recovery after stroke with lesions extracted from MRI images. NeuroImage Clin. 2013;2:424–33.
Hope TMH, Parker Jones Ō, Grogan A, Crinion J, Rae J, Ruffle L, et al. Comparing language outcomes in monolingual and bilingual stroke patients. Brain. 2015;138(4):1070–83.
Rondina JM, Filippone M, Girolami M, Ward NS. Decoding post-stroke motor function from structural brain imaging. NeuroImage Clin. 2016;12:372–80.
Poptani H, Kaartinen J, Gupta RK, Niemitz M, Hiltunen Y, Kauppinen RA. Diagnostic assessment of brain tumours and non-neoplastic brain disorders in vivo using proton nuclear magnetic resonance spectroscopy and artificial neural networks. J Cancer Res Clin Oncol. 1999;125(6):343–9.
Shao Y, Lunetta RS. Comparison of support vector machine, neural network, and CART algorithms for the land-cover classification using limited training data points. ISPRS J Photogramm Remote Sens. 2012;70:78–87.
Emblem KE, Zoellner FG, Tennoe B, Nedregaard B, Nome T, Due-Tonnessen P, et al. Predictive modeling in glioma grading from MR perfusion images using support vector machines. Magn Reson Med. 2008;60(4):945–52.
Alcaide-Leon P, Dufort P, Geraldo AF, Alshafai L, Maralani PJ, Spears J, et al. Differentiation of enhancing glioma and primary central nervous system lymphoma by texture-based machine learning. Am J Neuroradiol. 2017;38(6):1145–50.
Zacharaki EI, Wang S, Chawla S, Yoo DS, Wolf R, Melhem ER, et al. Classification of brain tumor type and grade using MRI texture and shape in a machine learning scheme. Magn Reson Med. 2009;62(6):1609–18.
Zacharaki EI, Kanas VG, Davatzikos C. Investigating machine learning techniques for MRI-based classification of brain neoplasms. Int J Comput Assist Radiol Surg. 2011;6(6):821–8.
Emblem KE, Due-Tonnessen P, Hald JK, Bjornerud A, Pinho MC, Scheie D, et al. Machine learning in preoperative glioma MRI: survival associations by perfusion-based support vector machine outperforms traditional MRI. J Magn Reson Imaging. 2014;40(1):47–54.
Zhou M, Chaudhury B, Hall LO, Goldgof DB, Gillies RJ, Gatenby RA. Identifying spatial imaging biomarkers of glioblastoma multiforme for survival group prediction. J Magn Reson Imaging. 2017;46(1):115–23.
Macyszyn L, Akbari H, Pisapia JM, Da X, Attiah M, Pigrish V, et al. Imaging patterns predict patient survival and molecular subtype in glioblastoma via machine learning techniques. Neuro Oncol. 2016;18(3):417–25.
Kickingereder P, Bonekamp D, Nowosielski M, Kratz A, Sill M, Burth S, et al. Radiogenomics of glioblastoma: machine learning–based classification of molecular characteristics by using multiparametric and multiregional MR imaging features. Radiology. 2016;281(3):907–18.
Swinburne N, Schefflein J, Sakai Y, Oermann E, Titano J, Chen I, et al. Machine learning for semi-automated classification of glioblastoma, brain metastasis and CNS lymphoma using MR advanced imaging. Ann Transl Med. 2018; https://doi.org/10.21037/atm.2018.08.05.
Kim HS, Goh MJ, Kim N, Choi CG, Kim SJ, Kim JH. Which combination of MR imaging modalities is best for predicting recurrent glioblastoma? Study of diagnostic accuracy and reproducibility. Radiology. 2014;273(3):831–43.
Hu X, Wong KK, Young GS, Guo L, Wong ST. Support vector machine multiparametric MRI identification of pseudoprogression from tumor recurrence in patients with resected glioblastoma. J Magn Reson Imaging. 2011;33(2):296–305.
Artzi M, Liberman G, Nadav G, Blumenthal DT, Bokstein F, Aizenstein O, et al. Differentiation between treatment-related changes and progressive disease in patients with high grade brain tumors using support vector machine classification based on DCE MRI. J Neurooncol. 2016;127(3):515–24.
Tiwari P, Prasanna P, Wolansky L, Pinho M, Cohen M, Nayate AP, et al. Computer-extracted texture features to distinguish cerebral radionecrosis from recurrent brain tumors on multiparametric MRI: a feasibility study. Am J Neuroradiol. 2016;37(12):2231–6.
Hong S-J, Kim H, Schrader D, Bernasconi N, Bernhardt BC, Bernasconi A. Automated detection of cortical dysplasia type II in MRI-negative epilepsy. Neurology. 2014;83(1):48–55.
Ahmed B, Brodley CE, Blackmon KE, Kuzniecky R, Barash G, Carlson C, et al. Cortical feature analysis and machine learning improves detection of “MRI-negative” focal cortical dysplasia. Epilepsy Behav. 2015;48:21–8.
Rudie JD, Colby JB, Salamon N. Machine learning classification of mesial temporal sclerosis in epilepsy patients. Epilepsy Res. 2015;117:63–9.
Chockley K, Emanuel E. The end of radiology? Three threats to the future practice of radiology. J Am Coll Radiol. 2016;13(12, Part A):1415–20.
Jha S, Topol EJ. Adapting to artificial intelligence: radiologists and pathologists as information specialists. JAMA. 2016;316(22):2353–4.
Holodny AI. “Am I about to lose my job?!”: a comment on “computer-extracted texture features to distinguish cerebral radiation necrosis from recurrent brain tumors on multiparametric MRI: a feasibility study”. Am J Neuroradiol. 2016;37(12):2237–8.
Davenport TH, Dreyer KJ. AI will change radiology, but it won’t replace radiologists [Internet]. Harvard Business Review. 2018. Available from: https://hbr.org/2018/03/ai-will-change-radiology-but-it-wont-replace-radiologists. Accessed 25 May 2018.
Al-Ayyoub M, Alawad D, Al-Darabsah K, Aljarrah I. Automatic detection and classification of brain hemorrhages. WSEAS Trans Comput. 2013;12:395–405.
Scherer M, Cordes J, Younsi A, Sahin Y-A, Götz M, Möhlenbruch M, et al. Development and validation of an automatic segmentation algorithm for quantification of intracerebral hemorrhage. Stroke. 2016;47(11):2776–82.
Phan A-C, Vo V-Q, Phan T-C. Automatic detection and classification of brain hemorrhages. In: Intelligent information and database systems [Internet]. Cham: Springer; 2018. p. 417–27. (Lecture Notes in Computer Science). Available from: https://link.springer.com/chapter/10.1007/978-3-319-75420-8_40. Accessed 6 May 2018.
Desai V, Flanders AE, Lakhani P. Application of deep learning in neuroradiology: automated detection of basal ganglia hemorrhage using 2D-convolutional neural networks. arXiv:1710.03823 [cs]. 2017. Available from: http://arxiv.org/abs/1710.03823. Accessed 26 April 2018.
Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, et al. Going deeper with convolutions. arXiv:1409.4842 [cs]. 2014. Available from: http://arxiv.org/abs/1409.4842. Accessed 25 May 2018.
Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, et al. ImageNet large scale visual recognition challenge. Int J Comput Vis. 2015;115(3):211–52.
Phong TD, Duong HN, Nguyen HT, Trong NT, Nguyen VH, Van Hoa T, et al. Brain hemorrhage diagnosis by using deep learning. In: Proceedings of the 2017 international conference on machine learning and soft computing (ICMLSC ’17) [Internet]. New York: ACM; 2017. p. 34–9. Available from: http://doi.acm.org/10.1145/3036290.3036326. Accessed 25 May 2018.
Arbabshirani MR, Fornwalt BK, Mongelluzzo GJ, Suever JD, Geise BD, Patel AA, et al. Advanced machine learning in action: identification of intracranial hemorrhage on computed tomography scans of the head with clinical workflow integration. Npj Digit Med. 2018;1(1):9.
Chokshi F, Shin B, Lee T, Lemmon A, Necessary S, Choi J. Natural language processing for classification of acute, communicable findings on unstructured head CT reports: comparison of neural network and non-neural machine learning techniques. bioRxiv. 2017; https://doi.org/10.1101/173310.
Zech J, Pain M, Titano J, Badgeley M, Schefflein J, Su A, et al. Natural language-based machine learning models for the annotation of clinical radiology reports. Radiology. 2018;287(2):570–80.
Prevedello LM, Erdal BS, Ryu JL, Little KJ, Demirer M, Qian S, et al. Automated critical test findings identification and online notification system using artificial intelligence in imaging. Radiology. 2017;285(3):923–31.
Titano J, Badgeley M, Schefflein J, Pain M, Su A, Cai M, et al. Automated deep neural network surveillance of cranial images for acute neurologic events. Nat Med. 2018;24(9):1337–41.
Speeches by FDA officials – transforming FDA’s approach to digital health [Internet]. Available from: https://www.fda.gov/NewsEvents/Speeches/ucm605697.htm. Accessed 30 May 2018.
© 2019 Springer Nature Switzerland AG
Swinburne, N., Holodny, A. (2019). Neurological Diseases. In: Ranschaert, E., Morozov, S., Algra, P. (eds) Artificial Intelligence in Medical Imaging. Springer, Cham. https://doi.org/10.1007/978-3-319-94878-2_15
Print ISBN: 978-3-319-94877-5
Online ISBN: 978-3-319-94878-2