Abstract
Artificial intelligence (AI) has influenced every field of cardiovascular imaging in all phases from acquisition to reporting. Compared with computed tomography and magnetic resonance imaging, echocardiography suffers from high observer variation in interpretation. AI can therefore help minimize observer variation and provide accurate diagnoses in the field of echocardiography. In this review, we summarize the necessity for automated diagnosis in the echocardiographic field, and discuss the results of applying AI to echocardiography and future perspectives. Currently, there are two roles for AI in cardiovascular imaging. One is the automation of tasks performed by humans, such as image segmentation and measurement of cardiac structural and functional parameters. The other is the discovery of clinically important insights. Most reported applications have focused on the automation of tasks. Moreover, algorithms that can obtain cardiac measurements are also being reported. In the next stage, AI can be expected to expand and enrich existing knowledge. With the continual evolution of technology, cardiologists should become well versed in AI and be able to harness it as a tool. AI can be incorporated into everyday clinical practice and become a valuable aid for many healthcare professionals dealing with cardiovascular diseases.
Introduction
Artificial intelligence (AI) is the process of having a computational program perform tasks of human intelligence (e.g., pattern recognition) by mimicking human thought processes [1]. Such programs have been developed since the 1970s and have long been used in gaming, social media, and robotics. Machine learning, developed since the 1980s, is relevant to medicine as a precursor of modern AI. Conventional machine learning is a method in which a computer makes appropriate judgments from inspection images according to human-defined rules (e.g., hand-crafted features). Since 2012, deep learning has advanced through the automated learning of features extracted by repeated trial and error. Deep learning is a type of machine learning, and the sophistication of its internal structure and learning methods enables more accurate results than conventional machine learning [2]. Deep learning identifies patterns progressively from large databases without being explicitly programmed. In the last 10 years, deep learning has led to new strategies of machine learning and plays a central role in image recognition as a breakthrough method [3].
AI has influenced every field of cardiovascular imaging in all phases from acquisition to reporting [4,5,6]. Examples include AI-based computed tomography and magnetic resonance measurement of lumen diameter, recognition of coronary calcium score, recognition of obstructive coronary disease, and automated acquisition, segmentation, and report generation [7,8,9]. Compared with these modalities, echocardiography has an issue of high observer variation in interpretation. AI can help minimize the observer variation and provide an accurate diagnosis in the field of echocardiography. In this review, we summarize the necessity for automated diagnosis in the echocardiographic field, and discuss results of AI application to echocardiography and future perspectives.
AI in the COVID-19 era
With the development of echocardiography, the number of echocardiographic parameters in routine examinations has increased and examination techniques have become more complex. Commercially available portable devices have expanded the range of physicians who perform echocardiography, not only cardiologists but also physicians in emergency medicine, anesthesiology, and general internal medicine. Thus, there may be a need for these non-specialists to make decisions based on echocardiographic findings.
Moreover, with the outbreak of COVID-19 in 2020 [10], all lifestyles and industries were affected to some extent, with lockdowns on movement and a greater dependence on telecommunications. This wave of infections has also affected how we approach cardiovascular care and echocardiography. As shown in Supplemental Fig. 1, before the COVID-19 era, the patient was moved to the echo room and underwent an echocardiogram performed by an expert sonographer or cardiologist. However, in an environment with a rapidly increasing number of confirmed/suspected COVID-19 patients, non-specialists, such as emergency physicians, are more likely to be required to perform this examination in the infection control room. In the usual settings of echocardiography, the risk of infection is high because practitioners must be in close proximity to the patient. Thus, personal protection is crucial in the COVID-19 era [11,12,13]. Many infected and sick patients need a quick, accurate, and reproducible diagnosis. Importantly, scanning time should be reduced to limit possible exposure. There have been many recommendations to reduce the scanning time by acquiring limited images and measuring off site [12]. The problem with this approach is that we could miss important findings when we are mainly focused on taking many images instead of performing a thorough echocardiographic study. Also, when performing the measurements off site, we might realize that we have not taken the necessary images at the appropriate angles. All of this can lead to poor patient outcomes from missed diagnoses. AI can be an essential aid for solving these problems.
Overview of deep learning
To understand the role of deep learning in echocardiography, it is necessary to clarify the difference between conventional machine learning and deep learning. The biggest difference between deep learning and conventional machine learning is that deep learning uses multilayer neural networks. Using neural networks, the accuracy of models has been dramatically improved, especially in the fields of visual object recognition, object detection, and word recognition.
Learning with artificial intelligence can be classified into three categories: supervised learning, unsupervised learning, and reinforcement learning (Fig. 1). Supervised learning is a method of learning from training data labeled with the correct answers. Many medical image analysis studies have been conducted on images that humans correctly labeled beforehand. In the echocardiographic field, thousands of images with predefined labels should be included when building new models. Notably, a trained model can never present an answer it has not learned. For example, if a model for myocardial infarction is trained with only the options "with myocardial infarction" or "without myocardial infarction", the model cannot determine the culprit coronary artery due to the lack of specific training.
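The labeling constraint described above can be sketched in a few lines. This is a hypothetical illustration, not code from any cited study: a supervised training set is simply a collection of (image, label) pairs drawn from a fixed, predefined label set, and anything outside that set is invisible to the model.

```python
# Hypothetical supervised training data for echocardiography: every image
# carries one of the predefined labels. A model trained on only these two
# labels can never output anything else (e.g., the culprit coronary artery).
LABELS = ["with myocardial infarction", "without myocardial infarction"]

training_data = [            # invented (image_id, label) pairs
    ("echo_001.dcm", "with myocardial infarction"),
    ("echo_002.dcm", "without myocardial infarction"),
    ("echo_003.dcm", "without myocardial infarction"),
]

def invalid_labels(data, allowed):
    """Return every example whose label is outside the predefined set."""
    return [(img, lab) for img, lab in data if lab not in allowed]

assert invalid_labels(training_data, LABELS) == []  # all labels predefined
```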
Unsupervised learning is a method of learning without labels; one famous example is the AI that Google created in 2012, which could recognize a cat without manual training [14]. This AI came to recognize a "cat" autonomously as it read unlabeled web images and videos, a learning process similar to human learning. Reinforcement learning, exemplified by Google's AlphaGo AI beating a professional Go player, is a method for finding ways to "maximize future value" through trial and error [15]. It is similar to supervised learning, but the machine predicts future values in a broader sense (in the case of Go, winning moves). Unsupervised learning and reinforcement learning are not yet generally used in imaging, so this review focuses on supervised learning.
An overview of studies on the diagnostic ability of current deep learning models in the field of echocardiography is shown in Table 1 [16,17,18,19,20,21,22,23,24,25,26,27]. The accuracy of these AI models is around 80–90%. Sample sizes in these studies ranged from hundreds to tens of thousands. Recently, AI methods have become sophisticated enough that accurate models can be created without a large number of images. In the future, many institutions will be able to conduct such research with reasonable sample sizes.
Steps of echocardiographic AI
Echocardiography is a useful tool in the diagnosis and management of cardiovascular diseases. Accurate and reproducible echocardiographic studies are required for good clinical practice. Echocardiography requires not only the adequate acquisition of images but also proper interpretation of images. The same physician may come to different conclusions on the same images. Thus, more objective and reproducible methods are needed. The process of applying AI to echocardiography involves four steps. We humans subconsciously perform these steps during each examination. However, when we develop the application of AI, it is necessary to divide these steps into separate tasks to reach a diagnosis (Fig. 2).
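The four-step decomposition described above can be sketched as a chain of separate tasks. All function names and return values here are hypothetical placeholders, intended only to show how the steps would hand results to one another, not a real implementation:

```python
# Schematic of the four-step echocardiographic AI pipeline (Fig. 2).
# Every function body is a hypothetical placeholder.
def assess_quality(image):       # 1st step: maintaining image quality
    return {"image": image, "quality_ok": True}

def classify_view(record):       # 2nd step: view classification
    record["view"] = "apical 4-chamber"   # placeholder prediction
    return record

def measure(record):             # 3rd step: measurements (e.g., EF)
    record["ef"] = 0.60                    # placeholder value
    return record

def detect_abnormality(record):  # 4th step: detection of abnormality
    record["rwma"] = record["ef"] < 0.50   # simplistic illustrative rule
    return record

result = detect_abnormality(measure(classify_view(assess_quality("echo_001"))))
```

Dividing the workflow this way mirrors how a human examiner proceeds, but makes each task a separately trainable and testable module.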
1st step: maintaining image quality
The first step in the application of AI to echocardiography is the evaluation of image quality. For humans, experience is required to obtain accurate images, and poor-quality images are often acquired by inexperienced examiners. Since image quality is directly linked to diagnostic accuracy, it is important to evaluate image quality correctly [28]. A recent paper showed that the accuracy of AI-based classification of image quality was excellent (score error: 0.11 ± 0.09). This proposed approach could also be generalized to other deep learning applications in the cardiovascular field, where there are frequent gaps in clinical labeling [16].
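A "score error" of the kind quoted above is typically the mean ± SD of the absolute differences between predicted and expert quality scores. The sketch below shows one way such a metric could be computed; the score values are invented and do not come from the cited study:

```python
import statistics

# Hypothetical quality scores on a 0-1 scale: expert annotations vs. a
# model's predictions. The score error is mean +/- SD of absolute error.
expert_scores    = [0.80, 0.60, 0.90, 0.70, 0.50]
predicted_scores = [0.75, 0.65, 0.85, 0.80, 0.45]

errors   = [abs(p - e) for p, e in zip(predicted_scores, expert_scores)]
mean_err = statistics.mean(errors)    # 0.06 for these toy values
sd_err   = statistics.stdev(errors)   # sample standard deviation
```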
In February 2020, the U.S. Food and Drug Administration (FDA) approved the first AI-based echocardiographic device that guides the user to acquire appropriate images with good quality. The system guides the user on how to adjust the probe to acquire a better image, and is designed to be easy to use even for beginners [26]. A condition for use of this device is that a cardiologist must review and approve the acquired images for patient evaluation, rather than leaving everything to the machine. The extent to which the machines can be trusted is an important problem that should be discussed in the future.
2nd step: view classification
The next step is the view classification and segmentation of cardiovascular structures. Echocardiographic images require many types of recordings because of the complexity of the cardiac structure. The view classification of cardiovascular structures can be useful in automated scans or detection of appropriate views. Several investigators have reported good accuracy for view classification models (accuracy: around 91–94% for 15-view classification) [17, 18]. Recently, we reported our newly developed view classification model, based on a convolutional neural network trained with 17,000 images. In this model, 1.9% of images were mislabeled. Clustering analyses showed that the neural network could sort heterogeneous input images into five pre-determined views (Fig. 3). In addition, to determine whether the 98.1% accuracy rate was acceptable for creating a feasible prediction model, we tested the prediction model for ejection fraction (EF) using the learning data with a 1.9% error rate. The accuracy of the prediction model for EF was maintained, even with training data containing 1.9% mislabeled images. Thus, this approach may provide a clinically feasible method of view classification for the analysis of echocardiographic data [27]. Upon inspection of the misclassified images, we found that many of them were difficult even for experts to judge, suggesting that deep learning successfully imitated human recognition (Supplemental Fig. 2).
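The final layer of a view-classification network typically emits one score (logit) per view and converts them into probabilities with a softmax, taking the most probable view as the prediction. The sketch below illustrates only that last step; the view names and logit values are assumptions for illustration, not the actual outputs or labels of the model discussed above:

```python
import math

# Hypothetical five-view label set and network output (logits).
VIEWS  = ["PLAX", "PSAX", "AP2", "AP3", "AP4"]
logits = [0.2, 0.1, 3.5, 0.4, 1.1]     # invented scores, one per view

def softmax(xs):
    """Numerically stable softmax: logits -> probabilities summing to 1."""
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)
predicted_view = VIEWS[probs.index(max(probs))]   # highest-probability view
```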
3rd step: measurements
Once the image quality has been assessed and appropriate views have been determined, the next step is to measure and quantify the morphological structure. For example, the different frameworks of conventional machine learning and deep learning for estimating ejection fraction (EF) are shown in Fig. 4. In conventional machine learning for prediction of EF, after pre-processing of images, a human data scientist manually configures the region on which to focus attention (region of interest: ROI, and segmentation) and extracts features of the target region. Then, the extracted features are used to create a classification model while performing dimension reduction. In deep learning, all feature extraction steps are embedded in the algorithm, allowing end-to-end learning to be performed.
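The contrast between the two frameworks in Fig. 4 can be expressed schematically. Every function here is a hypothetical stub standing in for a real stage; the point is only the shape of the two pipelines, with hand-designed stages on one side and a single end-to-end model on the other:

```python
# --- Conventional machine learning: each stage is human-designed ---
def select_roi(image):            # ROI/segmentation configured manually
    return image
def extract_features(roi):        # hand-crafted features of the target region
    return [len(roi)]
def reduce_dimensions(features):  # dimension reduction
    return features[:1]
def estimate_ef(reduced):         # model built on the extracted features
    return 0.60                   # placeholder EF estimate

def conventional_pipeline(image):
    return estimate_ef(reduce_dimensions(extract_features(select_roi(image))))

# --- Deep learning: feature extraction is embedded in the network ---
def deep_learning_pipeline(image, model):
    return model(image)           # end-to-end: raw image -> EF estimate

ef = conventional_pipeline("echo_image")
```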
Zhang et al. developed a method based on a deep learning approach for a fully automated analysis of echocardiographic data [18]. Their model showed a mean percentage error of approximately 10% for EF from the apical 2-chamber view, and 20% for EF from the apical 4-chamber view. Another paper showed that the EF based on AP2/AP4 views correlated well with reference EF (mean absolute deviation = 2.9%) [20]. Our recent report adds to this by demonstrating better performance of a deep learning algorithm when five views are used. Results of receiver operating characteristic (ROC) analysis used to assess the diagnostic ability for classification of reduced EF (< 50%) are shown in Fig. 5a. For the classification of patients with reduced EF, we compared the areas under the curve (AUCs) from each model, and the model with the largest AUC was based on the averaged 5 views (AUC: 0.99 ± 0.01 on fivefold cross validation). The AUC of estimated LVEF based on the averaged 5 views was larger than that based on the other views (AUC by AP2 and AP4: 0.95 ± 0.01, AUC by AP2: 0.94 ± 0.01, and AUC by AP4: 0.93 ± 0.01 on fivefold cross validation; compared p value < 0.05) [21]. According to these results, it may be more accurate to build a prediction model for LVEF from multilevel images in the clinical setting.
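The AUC compared across view combinations above is the probability that a randomly chosen reduced-EF case receives a higher model score than a randomly chosen preserved-EF case. A minimal pure-Python computation of this rank-based quantity, on invented toy scores (not data from any cited study), looks like this:

```python
# Rank-based AUC (Mann-Whitney statistic) without external libraries.
def auc(scores, labels):
    """AUC for binary labels (1 = reduced EF), handling score ties as 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical model scores for six patients:
labels = [1, 1, 1, 0, 0, 0]              # 1 = reduced EF (< 50%)
scores = [0.9, 0.8, 0.6, 0.7, 0.3, 0.2]  # invented predicted probabilities
value = auc(scores, labels)              # 8/9 ~ 0.89 for these toy values
```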
4th step: detection of abnormality
The last step is the detection of abnormality. One of the most important evaluations in echocardiography is the detection of regional wall motion abnormalities (RWMAs). The presence of wall motion abnormalities is directly linked to treatment decisions. Detection of regional wall motion abnormalities by echocardiography is a class I recommendation in multiple cardiovascular guidelines for patients complaining of chest pain [29, 30]. However, the evaluation of regional wall motion abnormalities has traditionally been subjective, relying on visual judgment of, for example, endocardial motion and myocardial thickness. Thus, an adequate objective method has long been desired.
Machine-learning models have been developed to identify and quantify RWMAs [24, 25]. Recently, our group developed an AI model for automated detection of RWMAs in myocardial infarction, using deep learning algorithms including ResNet, DenseNet, Inception-ResNet, Inception, and Xception for the convolutional neural network [31,32,33]. We compared the AUCs of several deep learning algorithms for detecting wall motion abnormalities. The largest AUC was achieved by ResNet (AUC: 0.97), although there was no significant difference among algorithms except for the Xception model (ResNet: AUC 0.97, DenseNet: AUC 0.95, Inception-ResNet: AUC 0.89, Inception: AUC 0.90, and Xception: AUC 0.85; vs. other algorithms, p < 0.05). For the detection of the presence of RWMAs, the AUC of the deep learning algorithm was similar to that of an experienced cardiologist/sonographer (0.97 vs. 0.95, p = 0.61) and significantly higher than that of resident physicians (0.97 vs. 0.83, p = 0.003) (Fig. 5b). According to our findings, a detection system for RWMAs may be of great value in the clinical setting.
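The architecture comparison above amounts to ranking candidate networks by their cross-validated AUC and selecting the best. Using the AUC values reported in the text (the selection logic itself is a generic sketch, not the study's actual code):

```python
# AUCs by architecture for RWMA detection, as reported above.
aucs = {
    "ResNet": 0.97,
    "DenseNet": 0.95,
    "Inception-ResNet": 0.89,
    "Inception": 0.90,
    "Xception": 0.85,
}

# Pick the architecture with the largest AUC.
best = max(aucs, key=aucs.get)   # "ResNet" for these values
```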
Problems in AI
Building models using deep learning requires a large amount of data. To overcome this problem, the Japan Society of Ultrasonics in Medicine and the Japanese Society of Echocardiography began a joint multicenter study to collect labeled echocardiographic data from 2018 onward. With this dataset, an effective AI model could be developed in the near future. The lack of interpretability of AI solutions is often considered a major disadvantage. Overcoming this limitation should be addressed in future studies.
Conclusions
Currently, there are two roles for AI in cardiovascular imaging. One is the automation of tasks usually performed by humans, such as image segmentation and measurement of cardiac structural and functional parameters. The other is the discovery of clinically important insights. Most reported applications have focused on the automation of tasks. Moreover, algorithms that can obtain cardiac measurements are also being developed. In the next stage, AI can be expected to expand and enrich existing knowledge. With the continual evolution of technology, cardiologists should become well versed in the new knowledge of AI and be able to harness it in the clinical setting. AI can be incorporated into everyday clinical practice and become a valuable aid for many healthcare professionals dealing with cardiovascular diseases.
References
Kulikowski CA. Beginnings of artificial intelligence in medicine (AIM): computational artifice assisting scientific inquiry and clinical art–with reflections on present aim challenges. Yearbook Med Inform. 2019;28:249.
Schmidhuber J. Deep learning in neural networks: an overview. Neural Netw. 2015;61:85–117.
LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521:436–44.
Kusunose K, Haga A, Abe T, et al. Utilization of artificial intelligence in echocardiography. Circ J. 2019;83:1623–9.
Johnson KW, Soto JT, Glicksberg BS, et al. Artificial intelligence in cardiology. J Am Coll Cardiol. 2018;71:2668–79.
Dey D, Slomka PJ, Leeson P, et al. Artificial intelligence in cardiovascular imaging: JACC state-of-the-art review. J Am Coll Cardiol. 2019;73:1317–35.
Lessmann N, van Ginneken B, Zreik M, et al. Automatic calcium scoring in low-dose chest CT using deep neural networks with dilated convolutions. IEEE Trans Med Imaging. 2018;37:615–25.
van Hamersvelt RW, Zreik M, Voskuil M, et al. Deep learning analysis of left ventricular myocardium in CT angiographic intermediate-degree coronary stenosis improves the diagnostic accuracy for identification of functionally significant stenosis. Eur Radiol. 2019;29:2350–9.
Zhang N, Yang G, Gao Z, et al. Deep learning for diagnosis of chronic myocardial infarction on nonenhanced cardiac cine MRI. Radiology. 2019;291:606–17.
Sugimoto T, Mizuno A, Kishi T, et al. Coronavirus disease 2019 (COVID-19) information for cardiologists- systematic literature review and additional analysis. Circ J. 2020;84:1039–43.
Kirkpatrick JN, Mitchell C, Taub C, et al. ASE Statement on protection of patients and echocardiography service providers during the 2019 novel coronavirus outbreak: endorsed by the American college of cardiology. J Am Soc Echocardiogr. 2020;33:648–53.
Seo Y, Daimon M, Yamada H, et al. Review of the efforts of the Japanese Society of Echocardiography for coronavirus disease 2019 (COVID-19) during the initial outbreak in Japan. J Echocardiogr. 2020.
Kusunose K, Matsunaga K, Yamada H, et al. Identifying the extent of oral fluid droplets on echocardiographic machine consoles in COVID-19 era. J Echocardiogr. 2020.
Spratling MW. Unsupervised learning of generative and discriminative weights encoding elementary image components in a predictive coding model of cortical function. Neural Comput. 2012;24:60–103.
Chen JX. The evolution of computing: AlphaGo. Comput Sci Eng. 2016;18:4–7.
Liao Z, Girgis H, Abdi A, et al. On modelling label uncertainty in deep neural networks: automatic estimation of intra-observer variability in 2D echocardiography quality assessment. IEEE Trans Med Imaging. 2019.
Madani A, Arnaout R, Mofrad M, et al. Fast and accurate view classification of echocardiograms using deep learning. NPJ Digital Med. 2018;1:6.
Zhang J, Gajjala S, Agrawal P, et al. Fully automated echocardiogram interpretation in clinical practice. Circulation. 2018;138:1623–35.
Leclerc S, Smistad E, Pedrosa J, et al. Deep learning for segmentation using an open large-scale dataset in 2D echocardiography. IEEE Trans Med Imaging. 2019;38:2198–210.
Asch FM, Poilvert N, Abraham T, et al. Automated echocardiographic quantification of left ventricular ejection fraction without volume measurements using a machine learning algorithm mimicking a human expert. Circ Cardiovasc Imaging. 2019;12:e009303.
Kusunose K, Haga A, Yamaguchi N, et al. Deep learning for assessment of left ventricular ejection fraction from echocardiographic images. J Am Soc Echocardiogr. 2020.
Ghorbani A, Ouyang D, Abid A, et al. Deep learning interpretation of echocardiograms. NPJ Digital Med. 2020;3:1–10.
Kusunose K, Abe T, Haga A, et al. A deep learning approach for assessment of regional wall motion abnormality from echocardiographic images. JACC Cardiovasc Imaging. 2020;13:374–81.
Raghavendra U, Fujita H, Gudigar A, et al. Automated technique for coronary artery disease characterization and classification using DD-DTDWT in ultrasound images. Biomed Signal Process Control. 2018;40:324–34.
Omar HA, Domingos JS, Patra A, et al. Quantification of cardiac bull's-eye map based on principal strain analysis for myocardial wall motion assessment in stress echocardiography. In: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018). IEEE; 2018:1195–8.
Voelker R. Cardiac ultrasound uses artificial intelligence to produce images. JAMA. 2020;323:1034–1034.
Kusunose K, Haga A, Inoue M, et al. Clinically feasible and accurate view classification of echocardiographic images using deep learning. Biomolecules. 2020;10:665.
Abdi AH, Luong C, Tsang T, et al. Automatic quality assessment of echocardiograms using convolutional neural networks: feasibility on the apical four-chamber view. IEEE Trans Med Imaging. 2017;36:1221–30.
Kimura K, Kimura T, Ishihara M, et al. JCS 2018 guideline on diagnosis and treatment of acute coronary syndrome. Circ J. 2019;83:1085–196.
Ponikowski P, Voors AA, Anker SD, et al. 2016 ESC guidelines for the diagnosis and treatment of acute and chronic heart failure: the task force for the diagnosis and treatment of acute and chronic heart failure of the European society of cardiology (ESC) developed with the special contribution of the heart failure association (HFA) of the ESC. Eur Heart J. 2016;37:2129–200.
Chollet F. Xception: Deep learning with depthwise separable convolutions. arXiv preprint 2017:1610.02357.
Szegedy C, Vanhoucke V, Ioffe S et al. Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE conference on computer vision and pattern recognition, 2016:2818–2826.
Deng J, Dong W, Socher R, et al. ImageNet: a large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE; 2009:248–55.
Acknowledgements
The author acknowledges Robert Zheng, MD, for English editing.
Funding
This work was supported by the Takeda Science Foundation, the Japan Agency for Medical Research and Development under Grant Number JP19lk1010035, and the Public Trust Cardiovascular Research Fund.
Ethics declarations
Conflict of interest
Kenya Kusunose declares that he has no conflict of interest.
Human rights statements and informed consent
All procedures followed were in accordance with the ethical standards of the responsible committee on human experimentation (institutional and national) and with the Helsinki Declaration of 1964 and later revisions. Informed consent was obtained from all patients for being included in the study.
Informed consent
Informed consent was not obtained from individual patients because this article is a review and does not report a patient study.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Electronic supplementary material
Supplemental Figure 1. Pre and post COVID-19 era echocardiographic examinations.
Supplemental Figure 2. Correctly classified and misclassified cases.
Cite this article
Kusunose, K. Steps to use artificial intelligence in echocardiography. J Echocardiogr 19, 21–27 (2021). https://doi.org/10.1007/s12574-020-00496-4