Abstract
Artificial Intelligence (AI) is the predominant term used to describe a rapidly evolving field of automation and innovation that is finding new academic and industrial applications in healthcare, including molecular imaging. The concept of AI has been around since the mid-twentieth century and has evolved over decades to become practical in the present, where computational power allows observational learning of patterns from large amounts of digitized medical data using novel computer algorithms, commonly known as deep learning (DL). Different AI/DL methods have been explored in healthcare applications ranging from drug development and clinical workflow improvement, through the life cycle of structural and molecular images, to treatment planning and prediction of clinical prognosis. As with other advanced statistical methods, the accuracy and generalizability of AI/DL methods are enhanced by large and heterogeneous datasets, which are needed to develop robust AI/DL models and applications that can transform the fields of healthcare and hybrid and molecular imaging.
Artificial Intelligence (AI) is a broad term commonly used to describe one of the fastest-growing industries worldwide, which is quickly finding new and exciting applications in healthcare, particularly in image-based diagnostic workup [1,2,3]. The concept of AI has evolved significantly since early work using model-driven algorithms in the 1940s to its current state, where computational power allows observational learning of patterns by a computer algorithm (i.e., machine learning (ML)) from large amounts of now digitally available medical data, precluding the need for a priori knowledge of an underlying process and thus eliminating the requirement for human-driven modeling and feature engineering [1]. ML, a narrower subfield of AI, develops algorithms that learn to perform tasks or to make decisions or predictions automatically from data, rather than having their behavior explicitly programmed. ML can be broadly subdivided into supervised and unsupervised types of learning. Unsupervised techniques find patterns in data and provide structural learning (e.g., classes within a dataset) without the need for data annotation, making them suitable for learning from large, unlabeled datasets. Clustering analysis and Principal Component Analysis (PCA) are techniques typically used for these tasks. A recent study using clustering identified four new clinical phenotypes in septic patients, each with its corresponding host-response pattern and clinical outcome [2].
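As an illustration of unsupervised structural learning, the minimal sketch below (pure Python, with an invented two-feature "patient" dataset) uses k-means clustering to recover two groups from unlabeled points. It is a didactic toy only, not the method of the cited sepsis phenotyping study.

```python
def sq_dist(p, q):
    """Squared Euclidean distance between two 2-D points."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def kmeans(points, k=2, iters=20):
    """Toy k-means: discover k groups in unlabeled 2-D points."""
    # Naive but deterministic initialization: spread starting centroids over the data.
    centroids = points[::max(1, len(points) // k)][:k]
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: sq_dist(p, centroids[i]))
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

# Six unlabeled "patients" described by two features; no outcome labels are given.
data = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0), (5.1, 5.0), (4.9, 5.2), (5.0, 4.8)]
centroids, clusters = kmeans(data, k=2)
```

The algorithm alternates assignment and centroid-update steps until the grouping stabilizes; no labels are ever consulted, which is the defining property of unsupervised learning described above.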
In contrast, supervised learning techniques are applied in learning tasks using “cleaned” data, that is, data organized and provided in the form of input examples (and their features) paired with those examples’ specific outputs, which constitute the “ground truth” (or labels) to be predicted when building an inference model. Supervised learning techniques include regression and classification algorithms such as linear and logistic regression, discriminant analysis, decision trees, random forests, naïve Bayes, support vector machines (SVM), and neural networks. These types of ML models have been evolving for decades and remain the most commonly used in healthcare. An early example is DXplain, a medical decision support system developed at the Laboratory of Computer Science at Massachusetts General Hospital, which takes a set of clinical findings (signs, symptoms, laboratory data) and produces a ranked list of differential diagnoses [3].
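A minimal sketch of the supervised setting: a single-feature logistic regression fit by gradient descent on a handful of labeled examples. The feature (imagine a lab value) and the binary outcome labels are invented for illustration.

```python
import math

def train_logistic(X, y, lr=0.5, epochs=500):
    """Fit a one-feature logistic regression by stochastic gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # Predicted probability of the positive class for this example.
            p = 1.0 / (1.0 + math.exp(-(w * xi + b)))
            # Gradient of the log-loss; move weights against it.
            w -= lr * (p - yi) * xi
            b -= lr * (p - yi)
    return w, b

def predict(w, b, x):
    """Classify a new example by thresholding the predicted probability."""
    return 1 if 1.0 / (1.0 + math.exp(-(w * x + b))) >= 0.5 else 0

# Labeled examples: each input feature is paired with its "ground truth" label.
X = [0.5, 1.0, 1.5, 3.0, 3.5, 4.0]
y = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(X, y)
```

The pairing of inputs with known outputs is exactly the “cleaned” data described above; the model generalizes by learning a decision boundary between the two labeled groups.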
Deep Learning (DL) started having an impact in the early 2000s, and it took an additional decade for it to reach healthcare. DL also encompasses methods in the ML family and most frequently refers to supervised learning using Artificial Neural Networks (ANN), which use multiple layers of features from inputs to progressively extract higher-level features in order to predict labeled outputs. Examples of ANN include convolutional neural networks (CNN), deep belief networks (DBN), and recurrent neural networks (RNN). DL may also be used in unsupervised or semi-supervised learning and can capture exceedingly complex relationships between features and labels, for which it has shown capabilities similar or superior to humans in solving problems of computer vision (CV) in medicine [4,5,6]. The process of preparing datasets with features and labels can be time consuming; however, once these are available, ML algorithms can be rapidly trained and tested. For instance, during a developing pandemic, when essential swab and serologic testing was lacking in the USA and around the globe, a pre-trained ANN showed high accuracy in diagnosing coronavirus disease 2019 (COVID-19) while differentiating it from other types of viral or bacterial pneumonia using plain chest X-rays; similar ANNs validated chest CT against the viral RNA RT-PCR (reverse transcription polymerase chain reaction) test as a sensitive modality for diagnosis of COVID-19 [7,8,9].
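To illustrate how stacked layers progressively compose features, the toy below hand-sets the weights of a two-layer network so that its hidden units approximate OR and AND of two binary inputs, and the output layer combines them into XOR, a function no single-layer model can represent. In practice such weights are learned from data; this is a didactic sketch, not any of the cited architectures.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sums passed through a nonlinearity."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def tiny_net(x1, x2):
    # Hidden layer: two units that approximate OR and AND of the inputs.
    h = layer([x1, x2], weights=[[10, 10], [10, 10]], biases=[-5, -15])
    # Output layer: fires when OR is on but AND is off, i.e., XOR.
    out = layer(h, weights=[[10, -10]], biases=[-5])
    return round(out[0])
```

The hidden layer extracts intermediate features (OR, AND) from raw inputs, and the next layer builds a more abstract feature (XOR) from them, which is the layered feature extraction described above, in miniature.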
Other computational training tasks use unstructured data to build models that can, for example, make diagnoses from physicians’ notes in medical records (a.k.a. “free text”) in combination with the medical literature, using either supervised or unsupervised natural language processing (NLP) techniques. Recent examples in this domain have shown that complications from spinal surgery can be screened from operative reports, that septic shock can be detected and suicide risk predicted from clinicians’ notes, and that patients who need follow-up imaging can be identified using previous radiology reports [10,11,12,13].
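The cited systems rely on trained NLP models; as the simplest conceivable baseline for the follow-up-imaging task, the sketch below flags free-text reports containing cue phrases. The phrase list and example reports are invented for illustration.

```python
import re

# Hypothetical phrases suggesting a report recommends follow-up imaging.
FOLLOWUP_CUES = {"follow-up", "recommend repeat", "surveillance", "interval growth"}

def needs_followup(report: str) -> bool:
    """Flag a free-text report if any follow-up cue phrase appears."""
    text = re.sub(r"\s+", " ", report.lower())  # normalize case and whitespace
    return any(cue in text for cue in FOLLOWUP_CUES)

reports = [
    "Indeterminate 6 mm pulmonary nodule. Recommend repeat CT in 6 months.",
    "No acute cardiopulmonary abnormality.",
]
flags = [needs_followup(r) for r in reports]
```

A learned model replaces this brittle phrase list with statistical features of the text, which is precisely what makes the NLP approaches in the cited studies generalize beyond hand-picked keywords.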
Each of these ML methods is uniquely suited to certain tasks, but their products—models predicting a class or outcome in a narrow specific area of healthcare or medical imaging—are the current essence of AI in medicine.
AI’s proposed applications in healthcare are diverse, from hospital finance to diagnostic and therapeutic applications, promising to improve efficiency by decreasing both the time that tasks take and the potential for medical error [14]. A key characteristic of AI’s current and proposed applications is that they expand what is possible today rather than merely increasing the speed or accuracy of current processes.
1 AI Applications Support the Infrastructure and Interventions of Healthcare, Including Molecular Imaging
At the time of writing this chapter there were 114,002 results for “Artificial Intelligence” in PubMed and, while these entries date back to 1951, over fifty percent of the results correspond to the last 7 years. This highlights the role of timing in scientific breakthroughs: while the models and theoretical concepts behind what is now called the AI/deep learning revolution were first introduced over half a century ago by Frank Rosenblatt, a psychologist from Cornell University [15], and Marvin Minsky, a cognitive scientist from MIT [16], it was not until recently that these concepts were perfected and implemented at scale in various industries (more recently including healthcare) by computer and data scientists, owing to the explosive growth in the global digital datasphere and in computational power.
1.1 Drug Development
In drug development, AI models predict the chemical and pharmaceutical properties of small-molecule candidates for drug design and development, identify new applications of existing drugs and new patients who can benefit from them, predict bioactivity, and identify patient characteristics for clinical trials [17,18,19,20]. Moreover, AI in combination with sources of large amounts of data in the biosciences has been used to build in silico models of disease processes, such as cancer, to enable computer-aided design and testing of potential therapeutic compounds [21]. In the realm of infection, one ML approach known as computational phenotyping was capable of predicting antibiotic resistance phenotypes in various bacterial pathogens, and another showed it could facilitate rapid drug development against SARS coronavirus 2 (SARS-CoV-2), the causative organism in COVID-19 [22, 23].
1.2 Clinical Workflow
In clinical medicine, some of the tasks most amenable to being performed by a computer, as well as some that can only be performed with levels of computational ability beyond the human brain, reside in the clinical workflow itself. By streamlining patient experience and clinical operations, and improving patient flow through key points in the clinical experience including admissions, discharges, and ICU transfers, AI has the potential to significantly improve the efficiency of clinical care [24]. AI also extends the capability of clinical care implementation, for example, through AI’s incorporation into robotic surgical guidance systems [25, 26]. During the COVID-19 pandemic, AI has also been proposed as a real-time forecasting tool [27], and for early infection identification, monitoring, surveillance, and prevention as well as mitigation of the impact to healthcare indirectly related to COVID-19 [28].
AI is now embedded in various day-to-day operations of many imaging departments, including scheduling, image acquisition, dose reduction, image reconstruction and post-processing, prioritization for reporting, classification of findings for reporting, and the reporting task itself [1]. Beyond plugging AI into various pieces of the existing workflow, molecular imaging workflows will eventually need to be redesigned to take full advantage of AI, for example through merging data sources into a data model to enable easier data exploration and visualization [29].
2 AI’s Clinical Applications with a Focus on Molecular Imaging
From a clinical perspective, models developed using modern DL methods can, in certain circumstances, be generalized across diseases and imaging modalities and are typically less susceptible to prediction errors secondary to noise. The interactions of various systems within a disease, as well as complex dependencies of disease states on each other, can be better understood with AI through DL because of its ability to aggregate multiple data streams from imaging, laboratory, genomics, proteomics, pathology, as well as data from the electronic medical record, social networks, wearable sensors, and other data sources to create integrated diagnostic systems [30]. One could envision multimodality DL modeling approaches integrating multiple data streams not only to compute disease prognosis but at every step of the diagnostic imaging workflow, involving both upstream and downstream applications. Such instances could range from planning (e.g., patient selection and scheduling based on a patient’s disease profile and previous and future interactions with the healthcare system) to scanning (e.g., reducing diagnostic study radiation dose and improving image quality), reading (e.g., automated detection and classification of pathologies), and reporting (e.g., automating reports with reproducible measurements and automating prediction of clinical outcomes) (Fig. 1.1).
2.1 Understanding Disease
More and more, the concept that diseases are a manifestation of interconnected organ systems is gaining traction. Understanding these systems and their sequelae of signs and symptoms is an area where combined molecular and anatomic imaging modalities can contribute. However, the connections within and between these systems are highly complex, and models built on AI can help in pattern elucidation. For example, areas of the brain associated with specific cognitive changes typical of genetic disorders affecting the brain primarily or secondarily have been identified using machine learning approaches [31, 32]. In their study of the ability of 18F FDG PET to predict neuropsychological performance (NPP) in patients with neurofibromatosis Type 1 (NF 1), Schutze and colleagues built on the anatomical findings of MRI studies of NF 1, concluding that the accuracy in predicting NPP based on PET suggested an underlying metabolic pattern of cognitive function [32].
2.2 Diagnosis
AI is used in a plethora of diagnostic tasks using data from traditional and nontraditional sources. In primary care, patients report their symptoms and concerns to chatbots that then route their care to the appropriate channel for further diagnosis or treatment [33]. Difficult diagnoses are aided by piecing together the symptoms of the patient with those of millions of other patients [34]. Given its particularly complicated origins, including genetic and environmental factors interacting with the immune system and other normal tissues in the body, cancer diagnosis is another area where AI is helping in diagnostic tasks using data from general health screening and diagnostic tests, including blood testing, imaging, and pathology [35]. In addition to the field of cancer, numerous AI applications have been developed in neurology and cardiology [36, 37].
In diagnostic imaging tasks, methods of using computational analysis to aid in the detection of lesions have evolved from early work in temporal subtraction methods and artificial neural networks performed in the 1960s–1980s to sophisticated DL methodologies of today [38, 39]. AI can recognize specific diagnoses on imaging, such as pathologic bacteria on microscopy of blood samples or findings on a radiograph [40].
In cancer detection and diagnosis, AI can improve the workflow efficiency and accuracy of imaging clinician expertise through precise determination of tumor volume and its change over time and through tracking of multiple lesions [30]. Automated PET segmentation of nodules based on neural networks trained in the spatial and wavelet domains has been shown to be reproducible and volumetrically accurate and to demonstrate lower absolute relative error than other automated techniques [41]. Other ML approaches have been useful in segmenting larger and more complicated tumors of the head and neck, particularly in the setting of heterogeneous radiopharmaceutical uptake, in segmenting brain tumors, and in classifying brain scans [42,43,44]. By evaluating measures that are not typically detectable by an imaging physician, AI can help further guide additional testing and patient management [45].
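The cited segmentation works use trained neural networks; as a point of contrast, the sketch below implements the classical fixed-threshold baseline (segmenting voxels above a fraction of the maximum uptake) on an invented 2-D uptake map. Such baselines are exactly what learned methods are benchmarked against.

```python
def segment_fraction_of_max(image, fraction=0.4):
    """Segment pixels at or above a fixed fraction of the maximum uptake value
    (a classical baseline, not the neural-network methods cited in the text)."""
    peak = max(v for row in image for v in row)
    threshold = fraction * peak
    return [[1 if v >= threshold else 0 for v in row] for row in image]

# Toy 2-D "uptake map" with a hot lesion in the center (values are invented).
suv = [
    [0.5, 0.6, 0.5, 0.4],
    [0.6, 4.0, 5.0, 0.5],
    [0.5, 4.5, 4.8, 0.6],
    [0.4, 0.5, 0.6, 0.5],
]
mask = segment_fraction_of_max(suv)
area = sum(sum(row) for row in mask)  # segmented lesion area in pixels
```

A fixed fraction of the maximum is brittle under heterogeneous uptake, which is one reason the head-and-neck segmentation work above turns to learned approaches instead.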
The greatest strength of AI may be its ability to integrate far more factors than are possible for a single physician. For example, by analyzing images in tandem with blood and other laboratory testing, genomics, and unstructured data from patient medical records, AI algorithms are being used to make diagnoses in a more holistic manner, reducing the difficulty of integrating disparate results from numerous tests and the medical history [46]. This approach is known as multimodality deep learning. A recent study using this approach showed that combining clinical, pathological, and imaging information increased the predictive power for clinical outcomes in glioblastoma multiforme, where survival is poor and ranges from one to two years in most patients [47]. Another recent promising methodology in this domain could be useful for pancancer prognosis prediction using clinical data, mRNA expression data, microRNA expression data, and whole slide histopathology images [48].
2.3 Radiologic-Pathology Correlation
The power of AI over the experience of any single imaging physician or pathologist is the ability to cross-reference imaging or other data from individual tumors to databases of limitless cases for comparison, rather than limiting comparison to those cases seen over the physician’s career [30]. AI solutions for pathology have been shown to make diagnoses over tenfold faster than pathologists; while having obvious direct clinical applications, AI-based pathology has shown high value in applications in the pharmaceutical industry [49].
2.4 Characterization
Beyond connecting lesions on imaging with specific pathologic correlates, AI can assist with other areas of classification as well. For example, in neurology, Parkinson’s Disease severity can be classified with 99mTc-TRODAT-1 SPECT imaging based on support vector machine models [50]. Several applications of AI have been published in the cancer field, including detection, characterization, and monitoring of response to treatment of various cancer types [30]. In the realm of hybrid imaging, molecular imaging modalities such as PET or SPECT provide molecular characterization of lesions seen on companion anatomic imaging, such as CT. Increasingly, work is focusing on predicting the data that would be produced by the functional modality using the traditional anatomic modality in combination with artificial intelligence. For example, uptake of 68Ga DOTATATE on PET has been used to label bone metastases as active on PET-CT, with subsequent development of AI models to predict activity using only radiomic data from the CT portion of the study [51]. This new paradigm may extend the global reach of the benefits of molecular imaging, allowing even geographies that lack molecular imaging systems to better characterize lesions using widely available and inexpensive modalities.
AI has been key to the proliferation of the field of radiomics. Radiomic approaches aim to identify imaging phenotypes specific to diseases that can be used in their diagnosis, characterization, and treatment management, approaches that some have coined as “radiomic biopsy.” These imaging phenotypes can be defined by characteristics of the images measured or observed by an imaging specialist or by features extracted based upon pre-defined statistical imaging features, of which over 5000 have been described. Such radiomic features can identify key components of tumor phenotype for multiple lesions at multiple time points over the course of treatment, potentially allowing enhanced patient stratification, prognostication, and treatment monitoring for targeted therapies, although care must be taken in evaluating the generalizability of the results of these approaches [52]. Radiogenomics, the translation of intratumoral phenotypic features to genotypes, has been most explored in cancer imaging. Making these types of correlations requires the development of new methodologies to summarize phenotypes of large heterogeneous populations of cells within a single tumor and look for underlying genotypic similarities [53].
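As a concrete illustration of pre-defined statistical imaging features, the sketch below computes a few first-order radiomic features (mean, variance, histogram entropy) over invented intensity values of a segmented region. Real radiomics pipelines standardize binning and normalization and include many more feature families (shape, texture, wavelet), so this covers only the simplest slice of the >5000 described features.

```python
import math
from collections import Counter

def first_order_features(values):
    """Compute a few first-order radiomic features of a region's intensities."""
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / n
    # Shannon entropy of the discrete intensity histogram, in bits.
    counts = Counter(values)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return {"mean": mean, "variance": variance, "entropy": entropy}

# Hypothetical intensity values inside a segmented lesion.
roi = [2, 2, 3, 3, 3, 3, 8, 8]
feats = first_order_features(roi)
```

Vectors of such features, extracted per lesion and per time point, are what downstream ML models consume for the stratification and prognostication tasks described above.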
2.5 Treatment Planning
In the realm of treatment, AI has become a pillar of the concept of personalized healthcare, whereby machine learning from numerous and seemingly endless sources of medical data is anticipated to yield insights into patterns of disease and prognosis [54, 55]. Models predicting response to any given choice of therapy would greatly inform the choices of drug therapy made by patients and their physicians.
In the delivery of radiation, AI has enabled dose distribution prediction for intensity-modulated treatment planning based on patient-specific geometry and prescription dose on CT of cancers of the head and neck [56, 57]. Similarly, AI models can predict radiation dose to normal organs for preemptive adjustment of technique [58]. Ideally, these types of techniques will also inform therapies based on radiopharmaceuticals as the field of theranostics grows.
In treatments involving radiation, AI has the potential to improve safety and quality. For example, in the realm of radiation oncology, DL with convolutional neural networks has been used to identify radiotherapy treatment delivery errors using patient-specific gamma images [59]. However, AI cannot be treated as a black box whose output should be trusted at face value, particularly when this output directly affects therapy. Rather, the fields of molecular theranostics and radiation therapy must recognize that any technology is fallible and can be misused with potentially significant consequences, and that the workforce, including physicians, technologists, and radiation physicists, must become more conversant in various AI approaches and in algorithm development [60].
2.6 Prediction of Response to Treatment
Many current pharmacologic therapies and radiotherapy approaches rely on indirect actions on disease, requiring the decoding of complicated molecular inter-relationships to define response to therapy, a task that is well suited for AI. Such predictive models may require input of clinical factors; for example, one study used pretherapeutic clinical parameters to predict the outcome of 90Y radioembolization in patients with intrahepatic tumors [61]. Alternatively, models may focus on predictions made solely from imaging, such as the prediction of radioresistant primary nasopharyngeal cancers from CT, MR, and PET imaging prior to IMRT using radiomics analysis combined with machine learning to identify the most predictive features [62]. Finally, models may rely on a combination of predictors, such as a study by Jin et al. investigating the ability to predict treatment response with a machine learning model combining computed tomography (CT) radiomic features and dosimetric parameters for patients with esophageal cancer (EC) who underwent concurrent chemoradiation (CRT) [63].
2.7 Overall Prognosis
Evaluations of overall prognosis can be helpful to guide therapeutic choices in highly aggressive diseases or diseases that have a more chronic course with multiple therapeutic options. Just as in the case of models to predict response to therapy, overall prognostic models may incorporate clinical information, imaging data, or a spectrum of data from in vivo molecular imaging to ex vivo tissue analysis and patient characteristics.
For example, the ability to train neural network models on data from a single low-cost, widely available test such as bone scan to predict prognosis in patients with metastatic prostate cancer or breast cancer enables the development of a widely applicable prognostic model [64]. By comparison, models developed from studies such as one using a combination of highly specialized inputs, including 11C methionine (11C MET) PET, tumor grade, histology, and isocitrate dehydrogenase 1 R132H mutational status, to predict survival in glioma patients may only be applied under very specialized circumstances [65].
2.8 Reporting
The ability to provide a timely, accurate, and actionable imaging report is paramount to ensuring quality of care and better clinical outcomes. It is known that medical image reporting errors are not rare [66, 67]. A plausible explanation is the increasing number and complexity of clinical imaging studies while the training of radiology specialists lags behind, leaving attending radiologists overburdened. Hence, solutions that augment imaging specialists to improve and expedite clinical reporting could be helpful (Fig. 1.2). An AI framework has already been proposed that could provide considerable benefits for patient safety and quality of care in busy emergency and trauma imaging services, where radiologists are pressed to meet the demand of increased imaging volume while providing accurate reports [68]. Similarly, another AI framework could potentially increase threefold the measurements of target lesions in oncologic scans and provided faster notification of actionable findings to referring clinicians [69]. Likewise, a recent study by Rao and colleagues demonstrated an AI tool that can serve as a peer reviewer, augmenting radiologists in diagnosing intracranial hemorrhage and reducing error rates [70].
Finally, healthcare records, imaging, medical decision-making, and treatment data are now continuously recorded within the boundaries of healthcare systems in siloed fashion. Partly because of this, most current machine learning efforts in healthcare, including those in hybrid and molecular imaging, are unfortunately based on data from single institutions. AI and machine learning are inherently statistical methodologies and, as such, benefit most from large and heterogeneous datasets, ideally from multiple institutions. Never has the failure to build robust data-sharing systems for large-scale and near real-time analysis in healthcare been more evident than with the outbreak of the COVID-19 pandemic. Nevertheless, an international shared data model exists for information from intensive care units (ICUs): the MIMIC database is one such model; it is publicly available, deidentified, and widely used by investigators and engineers around the world, helping to drive research in clinical informatics, epidemiology, and machine learning [71]. Efforts like this, which enable researchers worldwide to generate AI applications that empower imaging specialists and other healthcare workers to make data-driven decisions, are sadly lacking in the hybrid and molecular imaging community. MIMIC or other established models that enable medical data sharing between institutions may serve as a direction for the molecular imaging and other medical communities. Yet, while such endeavors could boost the task of modeling using retrospective medical data, prospective multicenter validation of developed ML models is warranted before and during clinical deployment.
3 Conclusion
With all these abilities, the boundary between the job of AI and the role of the human imaging specialist will be debated. While some prognosticate an era of medical specialists like radiologists and pathologists being augmented by large-scale computation from AI-based applications, others foresee a future where traditional imaging physicians and pathologists cease to have a role, replaced by a physician “information specialist” trained less in radiology/pathology and more in the data sciences, statistics, and parallel fields that serve as information sources, such as genomics and proteomics [72]. Beyond its influence on medical diagnosis and therapy, AI will have effects throughout the healthcare continuum, including keeping people healthy [73]. As the role and influence of AI in healthcare continues to evolve, real and potential benefits are becoming apparent, and so far they suggest that AI will not replace radiologists and physicians, but radiologists and physicians who use AI will replace those who don’t [74].
References
Nensa F, Demircioglu A, Rischpler C. Artificial intelligence in nuclear medicine. J Nucl Med. 2019;60(2):29S–37S.
Seymour CW, et al. Derivation, validation, and potential treatment implications of novel clinical phenotypes for sepsis. JAMA. 2019;321(20):2003–17.
Barnett GO, et al. DXplain. An evolving diagnostic decision-support system. JAMA. 1987;258(1):67–74.
Jiang F, et al. Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol. 2017;2(4):230–43.
Esteva A, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542(7639):115–8.
Gulshan V, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA. 2016;316(22):2402–10.
Hall LO, et al. Finding Covid-19 from chest X-rays using deep learning on a small dataset. arXiv e-prints. 2020. arXiv:2004.02060.
Gozes O, et al. Rapid AI development cycle for the coronavirus (COVID-19) pandemic: initial results for automated detection and patient monitoring using deep learning CT image analysis. arXiv e-prints. 2020. arXiv:2003.05037.
Ai T, et al. Correlation of chest CT and RT-PCR testing in coronavirus disease 2019 (COVID-19) in China: a report of 1014 cases. Radiology. 2020;2020:200642.
Karhade AV, et al. Natural language processing for automated detection of incidental durotomy. Spine J. 2019;20(5):695–700.
Vermassen J, et al. Automated screening of natural language in electronic health records for the diagnosis septic shock is feasible and outperforms an approach based on explicit administrative codes. J Crit Care. 2020;56:203–7.
Levis M, et al. Natural language processing of clinical mental health notes may add predictive value to existing suicide risk models. Psychol Med. 2020; https://doi.org/10.1017/S0033291720000173.
Lou R, et al. Automated detection of radiology reports that require follow-up imaging using natural language processing feature engineering and machine learning classification. J Digit Imaging. 2020;33(1):131–6.
Balasubramanian R, Libarikian A, McElhaney D. Insurance 2030 - the impact of AI on the future of insurance. 2018.
Rosenblatt F. The perceptron: a probabilistic model for information storage and organization in the brain. Psychol Rev. 1958;65(6):386–408.
Minsky M. Steps toward artificial intelligence. Proc IRE. 1961;49(1):8–30.
Zhavoronkov A, et al. Deep learning enables rapid identification of potent DDR1 kinase inhibitors. Nat Biotechnol. 2019;37(9):1038–40.
Mamoshina P, et al. Machine learning on human muscle transcriptomic data for biomarker discovery and tissue-specific drug target identification. Front Genet. 2018;9:242.
Aliper A, et al. Deep learning applications for predicting pharmacological properties of drugs and drug repurposing using transcriptomic data. Mol Pharm. 2016;13(7):2524–30.
Fleming N. How artificial intelligence is changing drug discovery. Nature. 2018;557(7707):S55–7.
Bhattacharya T, et al. AI meets exascale computing: advancing cancer research with large-scale high performance computing. Front Oncol. 2019;9:984.
Drouin A, et al. Predictive computational phenotyping and biomarker discovery using reference-free genome comparisons. BMC Genomics. 2016;17(1):754.
Stebbing J, et al. COVID-19: combining antiviral and anti-inflammatory treatments. Lancet Infect Dis. 2020;20(4):400–2.
Tsay D, Patterson C. From machine learning to artificial intelligence applications in cardiac care. Circulation. 2018;138(22):2569–75.
Max DT. Paging Dr. Robot: a pathbreaking surgeon prefers to do his cutting by remote control. The New Yorker. 2019.
Gormley B. Impact of Auris Health’s acquisition could be felt across med-tech. In: The wall street journal. New York: Dow Jones & Company; 2020.
Hu Z, et al. Artificial intelligence forecasting of Covid-19 in China. arXiv e-prints. 2020. arXiv:2002.07112.
Ting DSW, et al. Digital technology and COVID-19. Nat Med. 2020;26(4):459–61.
Brodbeck D, et al. Making the radiology workflow visible in order to inform optimization strategies. Stud Health Technol Inform. 2019;259:19–24.
Bi WL, et al. Artificial intelligence in cancer imaging: clinical challenges and applications. CA Cancer J Clin. 2019;69(2):127–57.
Ding Y, et al. A deep learning model to predict a diagnosis of Alzheimer disease by using (18)F-FDG PET of the brain. Radiology. 2019;290(2):456–64.
Schutze M, et al. Use of machine learning to predict cognitive performance based on brain metabolism in Neurofibromatosis type 1. PLoS One. 2018;13(9):e0203520.
Winn AN, et al. Association of use of online symptom checkers with patients’ plans for seeking care. JAMA Netw Open. 2019;2(12):e1918561.
Tomasev N, et al. A clinically applicable approach to continuous prediction of future acute kidney injury. Nature. 2019;572(7767):116–9.
Putcha G. Blood-based detection of early-stage colorectal cancer using multiomics and machine learning. In: American Society of Clinical Oncology Gastrointestinal Cancers Symposium. 2020.
Bouton CE, et al. Restoring cortical control of functional movement in a human with quadriplegia. Nature. 2016;533(7602):247–50.
Mannini A, et al. A machine learning framework for gait classification using inertial sensors: application to elderly, post-stroke and Huntington’s disease patients. Sensors. 2016;16(1):134.
Shiraishi J, et al. Computer-aided diagnosis and artificial intelligence in clinical imaging. Semin Nucl Med. 2011;41(6):449–62.
Shiraishi J, et al. Development of a computer-aided diagnostic scheme for detection of interval changes in successive whole-body bone scans. Med Phys. 2007;34(1):25–36.
Smith KP, Kang AD, Kirby JE. Automated interpretation of blood culture gram stains by use of a deep convolutional neural network. J Clin Microbiol. 2018;56(3):e01521.
Sharif MS, et al. Artificial neural network-based system for PET volume segmentation. Int J Biomed Imaging. 2010;2010:105610.
Belhassen S, Zaidi H. A novel fuzzy C-means algorithm for unsupervised heterogeneous tumor quantification in PET. Med Phys. 2010;37(3):1309–24.
Blanc-Durand P, et al. Automatic lesion detection and segmentation of 18F-FET PET in gliomas: a full 3D U-Net convolutional neural network study. PLoS One. 2018;13(4):e0195798.
Nobashi T, et al. Performance comparison of individual and ensemble CNN models for the classification of brain 18F-FDG-PET scans. J Digit Imaging. 2020;33(2):447–55.
Dagan N, et al. Automated opportunistic osteoporotic fracture risk assessment using computed tomography scans to aid in FRAX underutilization. Nat Med. 2020;26(1):77–82.
Kaplan DA. How radiologists are using machine learning. In: Diagnostic imaging. New York: Springer; 2017.
Peeken JC, et al. Combining multimodal imaging and treatment features improves machine learning-based prognostic assessment in patients with glioblastoma multiforme. Cancer Med. 2019;8(1):128–36.
Cheerla A, Gevaert O. Deep learning with multimodal representation for pancancer prognosis prediction. Bioinformatics. 2019;35(14):i446–54.
Pokkalla H, et al. Machine learning models accurately interpret liver histology in patients with nonalcoholic steatohepatitis (NASH). Hepatology. 2019;70(S1):187.
Hsu SY, et al. Feasible classified models for Parkinson Disease from 99mTc TRODAT-1 SPECT imaging. Sensors. 2019;19:1740.
Acar E, et al. Machine learning for differentiating metastatic and completely responded sclerotic bone lesion in prostate cancer: a retrospective radiomics study. Br J Radiol. 2019;92(1101):20190286.
Morin O, et al. A deep look into the future of quantitative imaging in oncology: a statement of working principles and proposal for change. Int J Radiat Oncol Biol Phys. 2018;102(4):1074–82.
Chidester B, Do MN, Ma J. Discriminative bag-of-cells for imaging-genomics. Pac Symp Biocomput. 2018;23:319–30.
Mamoshina P, et al. Blood biochemistry analysis to detect smoking status and quantify accelerated aging in smokers. Sci Rep. 2019;9(1):142.
Ahmad MA, et al. Death vs. data science: predicting end of life. In: Association for the Advancement of Artificial Intelligence Conference on Artificial Intelligence.
Fan J, et al. Automatic treatment planning based on three-dimensional dose distribution predicted from deep learning technique. Med Phys. 2019;46(1):370–81.
Chen X, et al. A feasibility study on an automated method to generate patient-specific dose distributions for radiotherapy using deep learning. Med Phys. 2019;46(1):56–64.
Avanzo M, et al. Prediction of skin dose in low-kV intraoperative radiotherapy using machine learning models trained on results of in vivo dosimetry. Med Phys. 2019;46(3):1447–54.
Nyflot MJ, et al. Deep learning for patient-specific quality assurance: identifying errors in radiotherapy delivery by radiomic analysis of gamma images with convolutional neural networks. Med Phys. 2019;46(2):456–64.
Kearney V, et al. The application of artificial intelligence in the IMRT planning process for head and neck cancer. Oral Oncol. 2018;87:111–6.
Ingrisch M, et al. Prediction of 90Y radioembolization outcome from pretherapeutic factors with random survival forests. J Nucl Med. 2018;59(5):769–73.
Li S, et al. Use of radiomics combined with machine learning method in the recurrence patterns after intensity-modulated radiotherapy for nasopharyngeal carcinoma: a preliminary study. Front Oncol. 2018;8:648.
Jin X, et al. Prediction of response after chemoradiation for esophageal cancer using a combination of dosimetry and CT radiomics. Eur Radiol. 2019;29(11):6080–8.
Inaki A, et al. Fully automated analysis for bone scintigraphy with artificial neural network: usefulness of bone scan index (BSI) in breast cancer. Ann Nucl Med. 2019;33(10):755–65.
Papp L, et al. Glioma survival prediction with combined analysis of in vivo 11C-MET PET features, ex vivo features, and patient features by supervised machine learning. J Nucl Med. 2018;59(6):892–9.
Waite S, et al. Interpretive error in radiology. AJR Am J Roentgenol. 2017;208(4):739–49.
Sokolovskaya E, et al. The effect of faster reporting speed for imaging studies on the number of misses and interpretation errors: a pilot study. J Am Coll Radiol. 2015;12(7):683–8.
Jalal S, et al. Exploring the role of artificial intelligence in an emergency and trauma radiology department. Can Assoc Radiol J. 2020;72(1):167–74.
Do HM, et al. Augmented radiologist workflow improves report value and saves time: a potential model for implementation of artificial intelligence. Acad Radiol. 2020;27(1):96–105.
Rao B, et al. Utility of artificial intelligence tool as a prospective radiology peer reviewer - detection of unreported intracranial hemorrhage. Acad Radiol. 2020;28(1):85–93.
Cosgriff CVE, Celi LA. Data sharing in the era of COVID-19. Lancet Digital Health. 2020;2(5):E224.
Jha S, Topol EJ. Adapting to artificial intelligence: radiologists and pathologists as information specialists. JAMA. 2016;316(22):2353–4.
Duncan DE. Can AI keep you healthy? In: MIT Technology Review. Boston: MIT; 2017.
Langlotz CP. Will artificial intelligence replace radiologists? Radiol Artif Intell. 2019;1(3):e190058.
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Davidzon, G.A., Franc, B. (2022). Role and Influence of Artificial Intelligence in Healthcare, Hybrid Imaging, and Molecular Imaging. In: Veit-Haibach, P., Herrmann, K. (eds) Artificial Intelligence/Machine Learning in Nuclear Medicine and Hybrid Imaging. Springer, Cham. https://doi.org/10.1007/978-3-031-00119-2_1
Print ISBN: 978-3-031-00118-5
Online ISBN: 978-3-031-00119-2