Introduction

Today’s health care system, in the US and throughout the world, is still entering the 21st century. Costs remain high, inefficiencies are widespread, and, for a large segment of the global population, access to care is inadequate.

Our health care enterprises tend to focus on treating acute illness rather than improving and maintaining the health and wellness of populations. A powerful catalyst for change in the health care system—digital health—is happening now (Fig. 1).

Fig. 1

Disruptive innovation in health care. (From a US GE industry source)

Health care stakeholders are seeking disruptive innovation to transform the US health care sector in the years ahead. In the New England Journal of Medicine Catalyst’s most recent New Marketplace survey, the Insights Council members, composed of executives, clinical leaders, and clinicians, make it abundantly clear that they believe innovation will come from beyond traditional health care organizations.

A significant difference emerges when respondents consider whether buyers are willing to pay for solutions. Most notably, health care IT (eg, clinical decision support systems [CDSS]) rises to the top of the list, named by half of respondents. Hospitals and health systems are second (46%).

Digital Health

Today, “digital health” means advanced analytics based on multi-modal data. The “Health Care Internet of Things” uses sensors, apps, and remote monitoring to provide continuous clinical information and data in the cloud, enabling clinicians to access the information they need to care for patients in their home, their office, or 300 miles away, and to collaborate with specialists in another country. It means embracing the machine as an integral part of the health care team, automating routine procedures and processes so that clinicians can focus on the most complex and critically ill patients, and using deep learning platforms to provide actionable tools at the point of care so clinicians can more efficiently and effectively diagnose and treat patients. It means automating billing, documentation, and regulatory processes so that clinicians can focus on meeting every patient’s needs. Finally, digital health means caring for one patient at a time while also caring for millions of patients simultaneously.

Electronic Health Record (EHR) and Big Data

The overload of clinical and imaging information in electronic form represents a major problem for physicians. From its origin, the Electronic Medical Record (EMR) has captured all sorts of data about a patient not previously recorded, offering a possible solution. In the coming years, all these data, which include imaging and diagnostic systems data, lab values, waveforms, data automatically downloaded from implantable electrophysiology devices, and hospital admission, discharge, and transfer (ADT) data, will increase significantly and flow bidirectionally, with patients uploading their own data and imaging to their EMRs [1] (Dave Fornell, Feb 24, 2017; https://www.itnonline.com/article/how-artificial-intelligence-will-change-medical-imaging).

The transition to electronic medical records and the availability of patient data have been associated with increases in the volume and complexity of patient information, more medical alerts, and heightened expectations for rapid and accurate diagnosis and treatment [2].

Masafumi Kitakaze published an interesting article on trends in cardiovascular disease in Asia and Japan, highlighting the need for epidemiological studies that permit accurate recognition of risk factors, as well as their distribution and synergistic effects, in order to achieve their short- and medium-term modification and effective prevention of ischemic heart disease and heart failure at the primary, secondary, and tertiary levels. The author suggests that Big Data and data mining may be ways of obtaining such evidence, but is skeptical about the chances of immediate application [3].

There are irreversible technological realities that are essential for every cardiologist to know, such as:

  • High performance computing (HPC) using parallel processing to run advanced applications quickly, efficiently, and reliably

  • The number of supercomputer centers that employ co-processors and accelerators has doubled in the past 2 years

  • HPC resources in the cloud are increasingly available as a consumer service

  • We are reaching a new technology platform, as described by International Data Corporation, that consists of mobile computing, cloud services, Big Data, analytics, and social networks

The Internet of Things (IoT) is an accelerator of innovation and of the growth of the other components, through the development of new solutions based on intelligent embedded devices that extend beyond the telecommunications industry, transforming various economic fields (finance, transportation, healthcare, location-based services, construction, etc.).

The combination of Artificial Intelligence, Big Data, and massively parallel computing offers the potential to create a revolutionary way of practicing evidence-based, personalized medicine.

Health care needs the transformative power of digital health rather than skepticism and hype; what is required is for the medical community to embrace a world where data, machines, and analytics are employed to deliver higher quality, more efficient care.

Artificial Intelligence

Artificial intelligence (AI) has captured the imagination and attention of doctors in recent years as several companies and large research hospitals work to perfect these systems for clinical use [4, 5]. The first concrete examples of how AI (often discussed in terms of deep learning, machine learning, or artificial neural networks [ANNs]) will help clinicians are now being commercialized. These systems may offer a paradigm shift in how clinicians work, significantly boosting workflow efficiency while improving care and patient throughput.

AI will not replace doctors but will significantly increase their ability to apply clinical appropriateness and reduce errors through rapid analysis and clear display of key EMR variables. For example, when a radiologist receives a cardiac CT scan to read, the AI system will analyze the image, rapidly identify warning findings, combine them with clinical information, and suggest further management. In the case of chest pain evaluation, the AI system checks for:

  • Prior exams specific to the patient’s cardiac history

  • Prior imaging tests of the chest

  • Prior reports for that imaging

  • Prior cardiac procedures

  • Recent lab test results

  • Clinical data from the event requiring the scan

Collecting this information manually would take any physician far too long; automated retrieval saves time in the daily workload [1].
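
To make that workflow concrete, here is a minimal Python sketch of the record-gathering step. The `emr` client object, its `query` method, and the category names are hypothetical placeholders invented for illustration, not any vendor’s actual API.

```python
# Hypothetical sketch of the record-gathering step an AI triage system might
# perform before a cardiac CT read. The EMR interface is a placeholder.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ChestPainContext:
    cardiac_history: List[str] = field(default_factory=list)
    prior_chest_imaging: List[str] = field(default_factory=list)
    prior_reports: List[str] = field(default_factory=list)
    prior_procedures: List[str] = field(default_factory=list)
    recent_labs: List[str] = field(default_factory=list)
    event_notes: List[str] = field(default_factory=list)

def collect_chest_pain_context(emr, patient_id: str) -> ChestPainContext:
    """Aggregate the six record categories listed above for one patient."""
    return ChestPainContext(
        cardiac_history=emr.query(patient_id, category="cardiac_history"),
        prior_chest_imaging=emr.query(patient_id, category="chest_imaging"),
        prior_reports=emr.query(patient_id, category="imaging_reports"),
        prior_procedures=emr.query(patient_id, category="cardiac_procedures"),
        recent_labs=emr.query(patient_id, category="labs", days=30),
        event_notes=emr.query(patient_id, category="encounter_notes"),
    )

class FakeEMR:
    """In-memory stand-in so the sketch runs end to end."""
    def query(self, patient_id, category, days=None):
        return [f"{category} record for {patient_id}"]

print(collect_chest_pain_context(FakeEMR(), "pt-001"))
```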

The final diagnostic and management suggestion could then be produced by a Clinical Decision Support System serving as the last application in the chain [6].

Machine Learning

Artificial intelligence (AI) has rapidly gone from science fiction fantasy to trendy buzzword to business application. Let’s take a quick look at the evolution of artificial intelligence, and at what the latest developments mean for health.

“Machine learning” is a fascinating and eminently practical application of artificial intelligence that enables computers to detect patterns and learn new functions without being explicitly programmed. A familiar example is the customer service widget capable of mimicking human interaction [7, 8].

Strictly defined, “artificial intelligence” is exhibited by any device that perceives its environment and takes actions to maximize its chances of achieving a goal. In the case of healthcare, this means analyzing the relationships among prevention, imaging, and treatment techniques to accomplish optimal patient outcomes. In other words, an “intelligent machine” that approximates human cognition to help stakeholders throughout the patient journey.

Recently morphing from hope to hype to hero, AI for healthcare has rapidly exploded across the full spectrum of health system services, with dozens of startups in patient data and risk analytics, medical research, imaging and diagnostics, lifestyle management and marketing, mental health, emergency room and surgery, in-patient care and hospital management, drug discovery, virtual assistants, wearables, clinical decision support software, and numerous other specialties ripe for “intelligent machines” (Fig. 2).

Fig. 2

Core AI companies bring their algorithms to healthcare. (Adapted from: CBInsights; https://www.cbinsights.com/research/artificial-intelligence-startups-healthcare/.)

When compared with the common definition of machine learning – the practice of teaching a computer how to identify patterns and use these patterns to iteratively maximize its chances of success without explicit programming – it is clear that AI and machine learning are, in fact, somewhat different [9].

It may seem like a pedantic semantic argument, but for data scientists and clinical practitioners, the distinction is real and important.

Machine learning is about recognizing patterns. With more data and more opportunities to make increasingly granular distinctions based on the successes and failures of the past, a machine learning tool can improve its accuracy iteration after iteration without being told by a human what to do next.

But while machine learning simply serves up results, artificial intelligence takes pattern recognition one step further: planning a future action based on previous results, calculating the probability of that action producing a positive outcome, and executing the action with the highest likelihood of success based on a wide range of constantly changing and often poorly defined parameters, all distilled into a detailed suggested action [10].
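
As a minimal sketch of the machine-learning half of that distinction, the scikit-learn snippet below trains a linear classifier batch by batch with `partial_fit`, so its accuracy can improve iteration after iteration without any hand-coded rules; the data are synthetic stand-ins, not clinical features.

```python
# Minimal sketch: iterative pattern learning without explicit programming.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
clf = SGDClassifier(random_state=0)

# Feed the data in small batches; the model refines its weights each round.
for start in range(0, len(X), 200):
    batch = slice(start, start + 200)
    clf.partial_fit(X[batch], y[batch], classes=np.array([0, 1]))
    print(f"after {start + 200:4d} samples: accuracy = {clf.score(X, y):.3f}")
```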

Clinical Decision Support Software

Nowadays, any healthcare reform tends to satisfy guidelines and to match appropriate use criteria when imaging and therapy are considered, in order to reduce reimbursement costs. Clinical decision support software (CDSS) implementation seems to contribute favorably to this goal [11,12,13]. Even in the presence of resistance from clinicians who fear that CDSS will substitute for doctors, a CDSS that is well integrated into the clinical workflow and accepted by the territorial health system may help hospitals and medical personnel to manage patients appropriately, to avoid unnecessary tests, and consequently to reduce healthcare costs.

CDSS is supposed to help clinicians do more with less by identifying at-risk patients, eliminating inappropriate procedures, and helping physicians adhere to practice guidelines. It is unreasonable to expect physicians to remember hundreds of pages of ever-changing appropriate use criteria (AUC), which is where CDSS can offer an instant resource. In addition, the software records data that can be mined for information, such as benchmarking to target education and quality improvement, or to reveal patterns of use over time [14, 15].

The CDSS can thus help physicians to identify at-risk patients with as few tests as possible, to eliminate inadequate procedures, and to adhere to published guidelines. It is virtually impossible for clinicians to keep up with all published papers and the hundreds of pages tracking the evolution of a single discipline. Furthermore, the Big Data accumulated by a CDSS can be extracted easily for comparative analyses aimed at improving quality and at following up the models adopted in clinical routine [16].

In cardiology, the main use of CDSS software is to rapidly and automatically identify whether a diagnostic test is useful for the patient at that moment, with that clinical symptom. The software is based on two typical sources: appropriate use criteria and published guidelines, both treated inferentially by a machine learning algorithm [17].
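
A toy sketch of how such an appropriateness check might look in code follows; the rules, weights, and thresholds are invented for illustration, have no clinical validity, and real AUC engines are far richer.

```python
# Toy sketch of a rule-based appropriateness check for a requested test.
# Rules, point values, and thresholds are illustrative only, not real AUC.

APPROPRIATENESS_RULES = [
    # (predicate over the clinical context, score contribution)
    (lambda c: c["symptom"] == "chest pain" and c["prior_cta_months"] is None, +2),
    (lambda c: c["pretest_probability"] == "low", -2),
    (lambda c: c["pretest_probability"] == "intermediate", +2),
    (lambda c: c["ecg_interpretable"] is False, +1),
]

def rate_test_request(context: dict) -> str:
    """Map a summed rule score to an appropriateness category."""
    score = sum(points for rule, points in APPROPRIATENESS_RULES if rule(context))
    if score >= 2:
        return "appropriate"
    if score >= 0:
        return "may be appropriate"
    return "rarely appropriate"

print(rate_test_request({
    "symptom": "chest pain",
    "prior_cta_months": None,
    "pretest_probability": "intermediate",
    "ecg_interpretable": False,
}))  # -> "appropriate"
```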

A significant example of software that helps clinicians remain up to date on the criteria and makes these standards easier to implement is a work in progress, currently undergoing a clinical validation study (the ARTICA project) [18] (Marco Mazzanti et al., personal communication).

The CDSS may be considered a second set of eyes. It can be used at the point of care, in real time during the clinical workflow or while interpreting medical images. The report engine embedded in the software monitors all responses entered into the system and compares them with a clinical database; it also flags possible discrepancies between the clinician’s entries and the same patient’s previous medical record. For example, one CT angiography module assembles more than 160 expert rules derived from quality programs throughout the world. Such suggestions help to shorten the learning curve and keep clinicians abreast of the latest trends [19].

Artificial Intelligence and Cardiac Imaging

Several recently published studies and academic articles offer updates on the use of AI in cardiology. AI techniques have recently been applied in cardiovascular medicine to explore novel genotypes and phenotypes in existing diseases, improve the quality of patient care, enable cost-effectiveness, and reduce readmission and mortality rates. Over the past decade, several machine-learning techniques have been used for cardiovascular disease diagnosis and prediction. Each problem requires some understanding of both cardiovascular medicine and statistics to select the optimal machine-learning algorithm. That is why, in the near future, AI will drive a paradigm shift toward precision cardiovascular medicine.

Dudchenko and coworkers very recently published a systematic review of decision support systems in cardiology. The aim of this work was to identify the most common approaches used in intelligent decision support systems for the diagnosis of cardiovascular diseases and to assess the accuracy of these systems. Forty-one relevant publications were included in the review using Scopus and Web of Science. Knowledge bases, fuzzy logic, and ANNs are the most commonly used approaches to diagnosis and prediction. The accuracy of the considered systems reaches 98% [20••].
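
For orientation, a minimal sketch of the ANN approach follows, using scikit-learn’s MLPClassifier on synthetic stand-in data; the features, architecture, and any resulting accuracy are illustrative, not those of the reviewed systems.

```python
# Minimal ANN sketch for a binary cardiovascular-diagnosis task.
# Synthetic data replaces real patient features; numbers are illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Pretend features: age, blood pressure, cholesterol, ECG-derived values, ...
X, y = make_classification(n_samples=1000, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = make_pipeline(
    StandardScaler(),  # ANNs need scaled inputs
    MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0),
)
model.fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.3f}")
```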

Noninvasive cardiac imaging plays a critical role in the diagnosis, outcome prediction, and management of patients with cardiovascular disease. The quality and amount of imaging data acquired with each scan are continuously increasing in all modalities, including nuclear cardiology, echocardiography, computed tomography (CT), and magnetic resonance imaging (MRI).

To date, the most fully automated approaches have been developed for nuclear cardiology, likely because of the lower image resolution and therefore simpler image analysis. However, advanced methods for other widely used cardiac modalities are being rapidly developed.

Piotr Slomka, in his review, focuses on efforts toward full automation of the widely used clinical imaging techniques and on efforts to derive the final diagnosis or prognosis from such automated techniques [21••].

Myocardial perfusion SPECT and PET imaging (MPI) play a crucial role in the diagnosis and management of coronary artery disease, providing key information concerning myocardial perfusion and ventricular function.

Currently, when automated processing methods are employed, a common workflow is for the physician to perform a final quality control check and overview in the concluding report. With advancing machine intelligence, however, even this final human check may become superfluous. Instead, software tools could provide the final quality check and, in fact, offer a more sophisticated, reproducible conclusion drawn from comparison not just with one physician’s career experience but with massive, ever-increasing training databases. Given the ever-present need for cost-effective diagnostic and treatment algorithms, supplanting the physician to achieve completely automated image processing, data analysis, quality control, and final interpretation is not just an inspiring technical challenge but a valid option for reducing costs, and it could become an imminent reality if this strategy is actively pursued by researchers over the coming years. In one study, the visual analysis was performed in four steps, with the physicians having all clinical information available at the final step; even so, the overall diagnostic accuracies of the physician reading and the computer analysis were similar.

Echocardiography is widely available, does not utilize any ionizing radiation, and can be performed at the bedside. As a result, it is the most widely used noninvasive imaging technique in cardiology.

The acquisition of ultrasound images is still usually performed in 2D using several standardized views, whose correct alignment presents a challenge for sonographers. 3D-mode cardiac echo, which can be obtained with more complex transducers, solves this problem through the acquisition of volumes containing the myocardium. The 3D mode has the potential to derive more accurate and novel quantitative parameters but suffers from reduced temporal resolution and image quality compared to the 2D mode. Ultrasound imaging is constantly being improved by vendors, who introduce new-generation transducers (2D and 3D), faster electronics, and novel signal/image processing methods.

For 3D transthoracic echocardiography, fully automated quantification software has been developed that simultaneously detects the left ventricle (LV) and left atrium (LA) based on the detection of endocardial surfaces. In one approach, a model template describing the initial global shape and the LV and LA chamber orientation is defined based on a large database of prior scans and is then adapted to the specific patient [22]. Once the model adapts to a current dataset, ventricular volumes, ejection fractions (EFs), 2D views, and other parameters are derived from the 3D model and used for cardiac function evaluation. The automatic model shows an excellent correlation with manually derived volumes from single-beat 3D echocardiography in challenging atrial fibrillation patients [23].
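
Once end-diastolic and end-systolic volumes are derived from the adapted 3D model, the ejection fraction follows from its standard definition; a trivial sketch, with invented example volumes:

```python
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """EF (%) = (EDV - ESV) / EDV * 100, from model-derived chamber volumes."""
    if edv_ml <= 0 or esv_ml < 0 or esv_ml > edv_ml:
        raise ValueError("volumes must satisfy 0 <= ESV <= EDV, EDV > 0")
    return (edv_ml - esv_ml) / edv_ml * 100.0

print(f"EF = {ejection_fraction(120.0, 50.0):.1f}%")  # illustrative -> 58.3%
```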

In recent years, quantitative values derived from echo sequences such as strain or strain rate have been shown to provide added diagnostic value. Two general approaches can be used to measure strain; they are based on tissue Doppler imaging (TDI) and speckle-tracking echocardiography (STE).
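
Whatever the measurement technique, (Lagrangian) strain and strain rate are simple kinematic quantities computed from tracked segment lengths; a minimal sketch, with invented sample lengths and an assumed 50-Hz frame rate:

```python
# Sketch: Lagrangian strain and strain rate from tracked segment lengths.
import numpy as np

def lagrangian_strain(lengths_mm: np.ndarray) -> np.ndarray:
    """Strain at each frame relative to the initial length: (L - L0) / L0."""
    l0 = lengths_mm[0]
    return (lengths_mm - l0) / l0

# Invented myocardial segment lengths (mm) over a contraction, sampled at 50 Hz.
lengths = np.array([10.0, 9.6, 9.1, 8.7, 8.5])
strain = lagrangian_strain(lengths)          # negative values = shortening
strain_rate = np.gradient(strain, 1 / 50.0)  # per second
print(strain, strain_rate)
```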

Sengupta et al. published a pilot study on cognitive machine learning for differentiating constrictive pericarditis from restrictive cardiomyopathy [24].

They hypothesized that a similar process using a cognitive computing tool would be well suited for learning and recalling multidimensional attributes of speckle tracking echocardiography data sets derived from patients with known constrictive pericarditis and restrictive cardiomyopathy.

This study demonstrates the feasibility of a cognitive machine-learning approach for learning and recalling patterns observed during echocardiographic evaluations. Incorporation of machine-learning algorithms in cardiac imaging may aid standardized assessments and support the quality of interpretations, particularly for novice readers with limited experience.

Sukrit Narula et al. published an original investigation using machine-learning algorithms to automate morphological and functional assessments in 2D echocardiography [25]. They used supervised machine learning with an ensemble of three different machine-learning algorithms, an approach that creates multiple models whose outputs are combined to produce improved results. Such approaches attempt to decipher clinically useful information from noisy cardiac ultrasound motion and deformation data.
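
A hedged sketch of the ensemble idea follows; the three base learners and the synthetic data are stand-ins chosen for illustration and may differ from the algorithms used in the original study.

```python
# Sketch: combining three learners into one ensemble whose predictions are
# merged to improve results. Base learners and data are illustrative stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=20, random_state=1)

ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC(probability=True, random_state=1)),
        ("rf", RandomForestClassifier(random_state=1)),
        ("ann", MLPClassifier(max_iter=1000, random_state=1)),
    ],
    voting="soft",  # average predicted probabilities across the three models
)
ensemble.fit(X, y)
print(f"training accuracy: {ensemble.score(X, y):.3f}")
```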

Jamil Tajik also wrote an excellent editorial review, in which he recalls that cardiologists of a bygone era always carried calipers in their pockets so they could make painstaking measurements of P-, Q-, R-, S-, and T-wave durations and R-R cycle variability. Now, a half-century later, machine learning for echocardiography image interpretation is on its way. The current workflow of echocardiographic examination (30 to 60 minutes), analysis of images by sonographers (15 to 30 minutes), and final integration and reporting by cardiologists (10 to 20 minutes) is a very time-consuming and inefficient process. To this end, automated computer analysis of echocardiographic images will be a most welcome change. Development of such automated systems will reduce inter-observer variability and cognitive errors, increase efficiency, and further enhance the value of echocardiography [26••].

Coronary CT angiography (CTA) has recently emerged as a useful diagnostic test in selected stable but symptomatic patients needing noninvasive assessment of the coronary arteries.

However, standard visual interpretations have been shown to have a high rate of false-positive findings, which can lead to unnecessary additional testing and increased overall cost. Over the last few years, several automated methods have been developed for standardized, semi-automated quantification of noncalcified and calcified plaques, and lumen measures from coronary CTA [27,28,29], with research studies supporting this approach [30, 31].

The CTA tools are not yet fully automated to the level achieved in nuclear cardiology and still require significant time from a skilled operator for contour adjustments. Nevertheless, it is probably only a matter of time before these methods reach a much higher level of automation, such as that seen in nuclear cardiology studies. Unsupervised methods for the automated detection of subtle and significant coronary lesions from CTA have recently been demonstrated [32].

Furthermore, methods for semi-automated measurement of epicardial coronary fat from non-contrast CT have also been developed and validated (Fig. 3) [33]; development of fully automated methods is currently in progress. These additional CT applications can in the future add to the comprehensive automated image assessment by this modality.

Fig. 3

Semi-automated measurements of epicardial coronary fat from non-contrast CT. (Reprinted from Dey D, et al. Atherosclerosis. 2010;209(1):136-141, with permission from Elsevier)

Several applications of machine learning have been proposed for feature extraction and segmentation of cardiology images. Machine learning techniques can be utilized for automatic identification of lesions on coronary CTA images. For example, improvements in automatic lesion localization have been demonstrated by a support vector machine method that integrated several quantitative geometric and shape features (including stenosis, minimum luminal diameter, circularity, and eccentricity), resulting in high sensitivity, specificity, and accuracy (93%, 95%, and 94%, respectively) (Fig. 4) [34]. Very recently, deep learning techniques have been applied to the identification of calcified plaques on CTA images, demonstrating improved accuracy over existing methods [35].
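
A minimal sketch of such a feature-based lesion classifier is shown below; the four feature columns mirror those named above, but the data, labels, and any resulting performance are synthetic inventions, not the published results.

```python
# Sketch: SVM lesion detection from quantitative geometric features.
# Feature columns mirror those named above; all values are synthetic.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 400
X = np.column_stack([
    rng.uniform(0, 90, n),     # % stenosis
    rng.uniform(0.5, 4.0, n),  # minimum luminal diameter (mm)
    rng.uniform(0.3, 1.0, n),  # circularity
    rng.uniform(0.0, 0.9, n),  # eccentricity
])
y = (X[:, 0] > 50).astype(int)  # toy label: "lesion" if stenosis > 50%

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(f"5-fold CV accuracy: {cross_val_score(clf, X, y, cv=5).mean():.3f}")
```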

Fig. 4

Automatic identification of coronary lesions from coronary CT angiography by an algorithm based on machine learning. An example of lumen segmentation with lesion detection. (With permission from Kang D, et al. J Med Imaging. 2015;2:014003)

A standard clinical cardiac MRI (CMR) scan requires 1 hour of imaging time, followed by significant time for image post-processing. Many automated approaches for CMR heart segmentation have been described [21••]; however, large-scale clinical adoption of fully automated CMR analysis has not yet occurred. Attempts at automation are complicated by the variety of CMR pulse sequences, scanning parameters, and imaging protocols, each of which is tailored to the individual patient and the particular clinical question. Despite these limitations, a vast amount of ongoing work is attempting to overcome these challenges and automate the key steps in the CMR workflow.

There are examples of successful in-house custom software solutions for small uniform datasets. For example, Tarroni et al. successfully demonstrated near-automated evaluation of stress/rest perfusion CMR using image noise density distribution for endocardial and epicardial border detection combined with non-rigid registration (n = 42) [36]. Noise characteristics of the blood pool and myocardium were used to facilitate automated endo-/epicardial contouring. The only manual step was the placement of a seed point inside the LV cavity in a single frame and identification of the anterior RV insertion point. Contrast enhancement time curves were automatically generated and used to calculate perfusion indices. Automated analysis of one sequence required <1 min and resulted in high-quality contrast enhancement curves both at rest and stress, showing expected patterns of the first-pass perfusion – compared with at least 10 minutes, and often 30 minutes, for manual processing.
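
A minimal sketch of deriving one simple perfusion index, the maximal upslope of the contrast enhancement time curve, is shown below; the idealized curve and sampling rate are invented, and published pipelines use more sophisticated indices.

```python
# Sketch: a simple first-pass perfusion index = maximal upslope of the
# contrast enhancement time curve. The sample curve values are invented.
import numpy as np

def max_upslope(signal: np.ndarray, dt_s: float) -> float:
    """Maximum rate of signal increase (a.u./s) over the first pass."""
    return float(np.max(np.diff(signal) / dt_s))

t = np.arange(0, 30, 1.0)                    # one frame per ~1-s heartbeat
curve = 100 / (1 + np.exp(-(t - 12) / 2.0))  # idealized enhancement curve
print(f"perfusion index (max upslope): {max_upslope(curve, 1.0):.2f} a.u./s")
```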

Accordingly, a machine learning solution that facilitates automated segmentation, yields consistent measurements, and saves clinicians’ time is highly desirable. Initial approaches incorporating deep learning strategies have been demonstrated for the segmentation of CMR images [37,38,39].
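
As a hedged illustration of the deep learning strategy, the PyTorch sketch below defines a tiny fully convolutional network for per-pixel myocardium labeling and runs one training step on fake data; real CMR segmentation models (typically U-Net variants) are far larger.

```python
# Minimal fully convolutional network for per-pixel (binary) CMR segmentation.
# Illustrative only: published CMR models are much deeper than this.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),  # per-pixel logit
        )

    def forward(self, x):
        return self.net(x)

model = TinySegNet()
loss_fn = nn.BCEWithLogitsLoss()
img = torch.randn(1, 1, 128, 128)                  # fake short-axis slice
mask = (torch.rand(1, 1, 128, 128) > 0.8).float()  # fake myocardium mask

logits = model(img)
loss = loss_fn(logits, mask)
loss.backward()  # gradients for one training step
print(f"loss: {loss.item():.3f}")
```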

Conclusion

The learning healthcare system uses IT and health Big Data infrastructure to adhere to published scientific evidence at the point of care, while also taking into account feedback and insights from that care to promote innovation in healthcare delivery and to fuel new registries and discoveries.

It is likely that within 5 years the level of automation for analysis and interpretation will be raised significantly compared with what is possible today. Entirely unsupervised extraction of all image parameters will likely become possible for nuclear cardiology, with minimal supervision required for other modalities. Greater standardization of acquisition protocols will be needed to maximize the potential gains from automation and machine learning.

The transition to electronic medical records and the availability of patient data have increased the volume and complexity of patient information and medical alerts, raising expectations for rapid and accurate diagnosis and treatment. The resulting risk of diagnostic and therapeutic errors could be addressed by AI/CDSS/machine learning applications, which will likely assist physicians with timely differential diagnosis, treatment suggestions and recommendations, and, in the case of medical imaging, with cues for image interpretation. This can reduce costs and ultimately improve the quality of healthcare. As of 2017, for instance, referring physicians in the US must use appropriateness criteria when ordering advanced imaging for Medicare patients. CDSS will become a critical part of this process, also contributing to the medical imaging chain from the ordered study to the communication of results, in order to achieve best practices.

This goal will require significant support from vendors and also from medical centers to facilitate data sharing. Fully quantitative diagnostic and risk stratification scores will be developed for clinicians and will become integrated with the imaging software. Risk stratification will transition from oversimplified population-based risk scores to machine learning-based metrics that incorporate, in real time, a large number of clinical and imaging variables beyond the limits of human cognition; this will deliver highly accurate, individually personalized risk assessments and facilitate tailored management plans. However, the clinical translation of these exciting techniques will depend on many factors beyond technological progress, such as logistics, legal issues, standardization, and reimbursement.