
1 The International Landscape of Artificial Intelligence for Healthcare Purposes

Artificial Intelligence (AI), defined as a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation, can be considered one of the major drivers of the digital transformation we are currently facing.

Digital technologies, machine learning algorithms, and AI are transforming medicine, medical research, and Public Health. The United Nations Secretary-General has stated that safe deployment of new technologies, including AI, can help the world to achieve the United Nations Sustainable Development Goals, which would include the health-related objectives under Sustainable Development Goal 3 (United Nations 2022). AI could also be key to fulfilling global commitments to achieve universal health coverage.

The EU’s coordinated approach to make the most out of the opportunities offered by AI and to address the challenges that it brings is based on its Digital Single Market, where rules and regulations on various related topics (data protection, business development, etc.) create an environment for growth without leaving individual countries behind (European Commission 2022). In its seminal White Paper on Artificial Intelligence, the European Commission (2020a) describes the characteristics of the policy framework necessary to develop trustworthy and secure AI applications for all areas, including healthcare. The EU must build:

  • An “ecosystem of excellence”, starting in research and innovation, where the right incentives accelerate the adoption of solutions based on AI, including by small- and medium-sized enterprises.

  • An “ecosystem of trust” in which compliance with EU rules and regulations is enforced, including the rules protecting fundamental rights and consumers’ rights, especially for AI systems operated in the EU which pose a high risk.

The use of AI technologies to improve healthcare systems holds a promising future, with progress already being made in various health-related fields, such as drug discovery, medical imaging, and screening/prevention. AI could be fundamental in assisting healthcare providers, helping to avoid errors, and allowing clinicians to focus on providing care and solving complex cases. However, from a Public Health perspective, to maximize the benefits for society, the legal, ethical, regulatory, economic, and social constraints of AI must be addressed rapidly.

AI technologies are usually designed by companies or through public–private partnerships (PPPs). To strengthen European competitiveness through PPPs on AI and to engage various stakeholders and investors in the technological development of the EU, the European Commission spent between 20 and 50 million € per year from 2014 to 2020 to fund partnerships on AI, Data, and Robotics, with an overall 1.1 billion € spent under the Horizon 2020 research and innovation programme, covering areas that included big data (Pastorino et al. 2019) and healthcare (OECD 2019).

Some of the world’s largest technology companies are developing new applications and services, which they either own or invest in. The potential benefits of these technologies and the economic and commercial potential of AI for health care are driving ever-greater use of AI worldwide (WHO 2021a).

The European Commission categorizes AI applications as “generally high-risk” when they are employed both in sectors where significant risks are expected to occur and in a manner that increases the likelihood of foreseeable harm (these sectors include healthcare, transportation, energy, etc.). This is particularly true for healthcare systems, where applications can have unpredictable and far-reaching consequences, which might affect patient safety, access to care, quality of care, and certain fundamental human rights. The European Commission has continued to build on this work and has adopted a risk stratification, ranging from “Unacceptable Risk” to “Minimal Risk”, in the ongoing discussion on the harmonized rules on Artificial Intelligence, better known as the “AI Act” (European Commission 2021). This legislation is further supported by the work being done on the “Data Governance Act”, the “Open Data Directive”, and the initiatives under the “European strategy for data”. The European strategy for data focuses on the deployment of data infrastructures, tools, and computing for the European Health Data Space, which will determine the availability of the high-quality data essential for training and further developing Artificial Intelligence systems (European Commission 2020b). This space will facilitate non-discriminatory access to health data and the training of artificial intelligence algorithms on those datasets, in a privacy-preserving, secure, timely, transparent, and trustworthy manner supported by institutional governance.

In fact, through this regulatory framework, the European Commission is seeking to ensure common normative standards for all high-risk AI systems. Because the health sector operates in high-stakes situations, sophisticated diagnostic systems and systems supporting human decisions need to be reliable and accurate. Building on this aspect is the important element of “Human Oversight” outlined in Article 14 of the “AI Act”, where human oversight will play an important role in preventing or minimizing the risks to health that may emerge from the implementation of a high-risk AI system.

Different organizations throughout the world are building on this important discussion. The Pan American Health Organization (PAHO) offers its own guiding principles for all AI applications in healthcare or Public Health (PAHO 2021):

  • People-centred: AI technologies must respect the rights of the individual.

  • Ethically grounded: progress and discussion must be made in compliance with the principles of human dignity, non-maleficence, and justice.

  • Transparent: when developing AI algorithms, having clear objectives and goals is mandatory.

  • Data protection: data privacy and security are paramount in AI development.

  • Demonstrates scientific integrity: AI applications must be reliable, reproducible, fair, honest, and accountable, according to the best practices.

  • Open and shareable: openness and shareability must be the founding principles of every AI development process.

  • Non-discriminatory: AI for Public Health must be based on fairness, equality, and inclusiveness.

  • Human-controlled: all automated decisions must be reviewed by human beings.

These guiding principles will be fundamental in developing global cooperation initiatives centred on AI in Public Health and navigating the complex legislative and policy environment that is being developed for the safety of our increasingly interconnected society.

1.1 Artificial Intelligence Systems Suitable for Public Health Applications

AI encompasses many fields of scientific enquiry, and its objective of mimicking human cognitive functions has many facets. Before AI systems can be deployed and implemented in healthcare settings, they need to undergo a process of training, in which various types of input data, depending on the use case, are “fed” to the algorithm, which in turn returns associations or predictions. In this way, clinical data, diagnosis data, or screening results can be used to predict individual or population trends, help diagnose disease, or create associations between various features of care (FDA 2013).

Machine Learning (ML) is based on algorithms that improve automatically through experience, feedback, and use of data. The trained algorithm then generates rules that can be used to classify new data or predict future data; this has many applications from a Public Health perspective, as it can be used to understand the complex connections between genetics, environment, and diseases or to predict illness. ML algorithms can be divided into two major categories: unsupervised learning and supervised learning, with the latter being based on labelled data (the machine has some positive and negative examples of what it should be able to identify). Unsupervised learning is best applied to extract patterns and features from data, whilst supervised learning is more suitable for predictive modelling, given its ability, for example, to build relationships between patient traits (age, gender, etc.) and an outcome of interest (e.g., cardiovascular disease) (Jiang et al. 2017).
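
As a concrete illustration, the minimal sketch below contrasts the two approaches on synthetic data with hypothetical patient traits (age, BMI, systolic blood pressure) and a simulated cardiovascular outcome; the features, thresholds, and scikit-learn models are assumptions made for illustration, not drawn from the studies cited above.

```python
# Minimal sketch: unsupervised vs supervised learning on synthetic patient data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical patient traits: age (years), BMI, systolic blood pressure (mmHg).
X = np.column_stack([
    rng.normal(55, 12, 500),   # age
    rng.normal(27, 4, 500),    # BMI
    rng.normal(130, 15, 500),  # systolic BP
])
# Simulated labelled outcome for the supervised case: cardiovascular disease (0/1).
y = (0.03 * X[:, 0] + 0.05 * X[:, 1] + 0.02 * X[:, 2]
     + rng.normal(0, 1, 500) > 6).astype(int)

# Unsupervised learning: find patterns (clusters) without using the labels.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Supervised learning: learn the trait-outcome relationship from labelled data.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("Cluster sizes:", np.bincount(clusters))
print("Held-out accuracy:", model.score(X_test, y_test))
```

In this toy setting, the clustering step groups patients by trait similarity alone, whereas the logistic regression learns the trait-outcome relationship only because labels are available.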

Deep Learning (DL) leverages algorithms organized as networks of decisions to learn from data. These networks are called neural networks or deep neural networks, depending on the number of layers in the network. DL can identify diseases from imaging and can predict health status from health records (e.g. diabetic retinopathy in retinal fundus photographs). Its main advantage over other learning algorithms is its performance on larger databases: it can draw patterns from an abundance of unlabelled data, which makes it highly scalable.
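
The following sketch shows, in highly simplified form, what a deep neural network for image-based detection might look like; the architecture, image size, class count, and the network name are illustrative assumptions, not the models used in the diabetic retinopathy studies mentioned above.

```python
# Minimal sketch of a convolutional neural network for image classification.
import torch
import torch.nn as nn

class TinyRetinaNet(nn.Module):  # hypothetical name, for illustration only
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Stacked convolutional layers extract visual features layer by layer.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # A fully connected "head" maps the extracted features to class scores.
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)

    def forward(self, x):
        x = self.features(x)      # (batch, 32, 56, 56) for 224x224 input
        x = torch.flatten(x, 1)
        return self.classifier(x)

model = TinyRetinaNet()
dummy_batch = torch.randn(4, 3, 224, 224)  # four synthetic RGB "fundus" images
print(model(dummy_batch).shape)            # torch.Size([4, 2])
```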

Different learning methods can then be used to create disparate types of AI that have applications in Public Health. Natural language-related AI is a subfield of AI that aims to bridge the divide between the languages that humans and computers use to operate. Specifically, Natural Language Processing (NLP) automates the ability to read and understand human language, and by doing so, it enables behaviour and sentiment analysis through social media and consumer-generated data. Natural Language Understanding (NLU) understands human writing using a coded comprehension of grammar, syntax, and semantics; this might be employed, for example, in the identification of loneliness or depression in older adults based on the content and patterns of their text messages. Finally, Natural Language Generation (NLG) transforms structured data into plain language or text, which can be useful to automatically remove identifiers and sensitive information from electronic medical records or to produce automated medical reports given certain exam results as input.
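
As an example of the last of these, the sketch below uses simple templates to turn structured exam results into a plain-language report, a rudimentary form of NLG; the field names and clinical thresholds are illustrative assumptions, not a validated reporting standard.

```python
# Minimal sketch: template-based NLG turning structured exam results into text.
def generate_report(patient_id: str, exam: dict) -> str:
    lines = [f"Automated report for patient {patient_id}."]
    if exam.get("hba1c") is not None:
        # Threshold chosen for illustration only.
        status = "above" if exam["hba1c"] >= 6.5 else "within"
        lines.append(f"HbA1c is {exam['hba1c']}%, {status} the reference range.")
    if exam.get("systolic_bp") is not None:
        status = "elevated" if exam["systolic_bp"] >= 140 else "normal"
        lines.append(f"Systolic blood pressure is {exam['systolic_bp']} mmHg ({status}).")
    return " ".join(lines)

print(generate_report("A-001", {"hba1c": 7.1, "systolic_bp": 128}))
```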

Automated scheduling and automated planning form a branch of AI focused on organizing and prioritizing the activities required to achieve a desired goal, and expert systems (also known as knowledge-based systems) are AI programs that have expert-level competence in solving specific problems. Possible functions of expert systems and management systems include identifying and eliminating fraud or waste, scheduling patients, predicting which patients are unlikely to attend a scheduled appointment, assisting in the identification of staffing requirements, optimizing the allocation of health-system resources by geographical location according to current health challenges, and using administrative data to predict the length of stay of health workers in underserved communities (NHS UK 2019). Digital Decisioning Platforms represent the evolution of automated scheduling and planning expert systems. Generative AI, powered by advanced models like GPTs and Stable Diffusion, is developing rapidly. Public Health has so far engaged with it mainly for tasks such as analyzing patient data, creating informative presentations, and writing public health messages. However, more research is needed to fully understand the potential and limitations of generative AI, including in the Public Health field.
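
A minimal sketch of a knowledge-based system of this kind is shown below: a handful of hand-written rules flag appointments at risk of not being attended. The rules, thresholds, and field names are illustrative assumptions rather than validated criteria.

```python
# Minimal sketch of a rule-based (expert) system for flagging likely no-shows.
RULES = [
    (lambda a: a["previous_no_shows"] >= 2, "history of missed appointments"),
    (lambda a: a["days_since_booking"] > 30, "booked more than 30 days ago"),
    (lambda a: not a["reminder_confirmed"], "reminder not confirmed"),
]

def no_show_risk(appointment: dict) -> tuple[str, list[str]]:
    # Collect the reasons for every rule that fires on this appointment.
    reasons = [why for rule, why in RULES if rule(appointment)]
    level = "high" if len(reasons) >= 2 else "low"
    return level, reasons

appt = {"previous_no_shows": 3, "days_since_booking": 45, "reminder_confirmed": False}
print(no_show_risk(appt))  # ('high', [list of triggered reasons])
```

Real expert systems encode far larger rule bases elicited from domain experts, but the principle, explicit rules applied to structured facts, is the same.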

Other domains of AI can benefit Public Health indirectly. One such example is Cognitive Search, which employs AI systems to merge and understand digital contents from different sources by deriving contextual insights from conceptual data, improving the relevance of the results generated from a user search, for example in a search engine. This could help evaluate the quality and intent of information distributed during health emergencies and sift through emerging information based on source and credibility. During the COVID-19 pandemic, this was applied to the most widely used search engines to give citizens the most up-to-date information on prevention and medication use, filter obsolete information, and reduce confusion (Microsoft New Zealand News Centre 2021).
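
A much simplified stand-in for such a system is sketched below: TF-IDF weighting and cosine similarity rank a handful of invented documents by relevance to a query. Production cognitive search systems add semantic models, source credibility signals, and freshness, none of which are captured here.

```python
# Minimal sketch: ranking health information by relevance to a user query.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Updated COVID-19 prevention guidance: masks and ventilation.",
    "Historical note on the 2009 influenza pandemic response.",
    "Current COVID-19 medication and treatment recommendations.",
]
query = ["latest covid prevention and medication advice"]

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform(query)

# Rank documents by similarity to the query (most relevant first).
scores = cosine_similarity(query_vector, doc_vectors).ravel()
for score, doc in sorted(zip(scores, documents), reverse=True):
    print(f"{score:.2f}  {doc}")
```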

1.2 Improving Healthcare Services Using Artificial Intelligence

One of the main focuses of AI development in healthcare is creating a support system to improve the early diagnosis of various diseases. This is being particularly explored in the field of oncology, where AI is being evaluated for use in radiological diagnoses, such as in whole-body imaging, colonoscopies, and mammograms. AI can also aid in optimizing radiological treatment dosing, recognizing malignant disease in dermatology or clinical pathology, and guiding RNA and DNA sequencing for immunotherapy (Bi et al. 2019).

In general, AI developments in early diagnosis are being studied in most health-related fields, such as in the early diagnosis of diabetic retinopathy, cardiovascular disease, liver disease, and neurological disorders (Kamdar et al. 2020). Currently there are only a handful of prospective clinical trials on the effectiveness of AI in early diagnosis, with some showing promise of detection ability equivalent to that of human professionals in specific tasks, and even fewer focus on the potential benefits of human–machine partnerships. One of the risks of relying excessively on AI and machine learning algorithms is the development of an automation bias, where medical practitioners might not consider other important aspects of patient care and might overlook errors that should have been spotted by human-guided decision-making (The Swedish National Council on Medical Ethics 2020).

AI can also be used to digitalize and store traditional paper medical records and process large amounts of data from images and other types of inputs or signals (such as motion data or sound data). Steps in image and signal processing algorithms typically include signal feature analysis and data classification using tools such as artificial neural networks, which work via complex layers of decision nodes (Wahl et al. 2018). Medical imaging is one of the most rapidly developing areas of AI application in healthcare. Whilst improving automated image interpretation and analysis is a priority, other important aspects of AI application to medical imaging are being explored, such as data security and user privacy solutions for medical image analysis, deep learning algorithms for restoration/reconstruction and segmentation of complex imaging, and the creation of fuzzy sets or rough sets in medical image analysis (Xia 2021).

Furthermore, with health systems, in general, growing more complex every year, administration and management of care are becoming increasingly laborious. AI can be used to assist personnel in complex logistical tasks, such as optimization of the medical supply chain, to take over mundane, repetitive tasks, or to support complex decision-making (Schwalbe and Wahl 2020). This is made possible by a combination of AI advancements in the fields of natural language processing, automated scheduling and planning, and expert systems (PAHO 2021).

Many AI tools can also be used in specific public health programmes or in wide public health approaches to improve wellbeing. AI can be used for health promotion or to identify target populations or locations with “high-risk” behaviour concerning communicable and non-communicable diseases. AI can improve the effectiveness of communication and messaging specifically directed to certain sub-populations, both in terms of its ability to recognize priority groups and in its adaptiveness in creating tailored messages to benefit population health (micro-targeting) (Privacy International 2021). One example of such application is micro-targeting individuals or communities with technological, linguistic, or cultural barriers to better communicate the importance and safety of vaccinations, such as the COVID-19 vaccination (NBC News 2021). AI tools could therefore be adapted to improve access and equity of care, furthering the development of truly personalized medicine.

AI can also have a leading role in performing analyses of patterns of data for health surveillance and disease detection (Russell and Norvig 2010; Alcantara et al. 2017; Morgenstern et al. 2021; CDC Foundation 2022): AI tools can be used to identify bacterial contamination in water treatment plants and foodborne illnesses in restaurants or hospitals, simplifying detection and lowering costs. Sensors can also be used to improve environmental health, such as by analysing air pollution patterns or using machine learning to make inferences between the physical environment and healthy behaviour (Roski et al. 2019).

Another application of AI in public health surveillance is evidence collection and its use to create mathematical models to make decisions. Although many public health institutions are not yet making full use of all possible sources of data, some fields, such as real-time health surveillance, are steadily improving. This has improved the public health outlook on pandemic preparedness and response, though the long-term ramifications of such important changes will only become evident in the future (Whitelaw et al. 2020).

The development of public health policy also proves to be fertile ground for artificial intelligence: for example, AI has been used to analyse stakeholder argumentation on food quality in a public health policy, generating models with new recommendations targeted at specific audiences (Bourguet et al. 2013). Healthcare has always depended in part on predictions, prognoses, and the use of predictive analytics. AI is just one of the more recent tools for this purpose, and many possible benefits of prediction-based health care rely on the use of this technology. For example, AI can be used to assess an individual’s risk of disease, which could be used for the prevention of diseases and major health events (OECD 2019).

Various studies suggest that artificial intelligence may improve the management of several pathologies, such as heart failure, through predictive models and telemonitoring systems for clinical support and patient empowerment. For example, given the expected increase in the number of heart failure patients due to the ageing of the population, predicting a patient’s risk of heart failure could prevent hospitalizations and readmissions, improving both patient care and hospital management, with a high impact on costs and time (Larburu et al. 2018).

Machine learning is also increasingly being applied to make predictions related to population health: using novel big data resources, rich in different data types, may allow for the improvements in prediction algorithms necessary to navigate the complex health data ecosystem successfully (Alcantara et al. 2017). A good example of this is the integration of data types to better understand complex associations between genetics, environment, and disease. A research group at Harvard has been using large administrative datasets to untangle the relationship between genetics and environment in all diseases recorded in health insurance claims data (Lakhani et al. 2019). Using biobanks and their massive datasets allows scientists around the world to discover new genetic variants (e.g. through genome-wide association studies) and novel risk factors associated with disease more efficiently and with higher sensitivity and specificity compared to traditional “one-at-a-time” methods (CDC 2019).
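
To give a sense of the underlying statistics, the sketch below runs a single variant-disease association test of the kind repeated across millions of variants in a genome-wide association study; the counts are invented, and the simple chi-square test omits the covariate adjustment and multiple-testing correction used in practice.

```python
# Minimal sketch: a single variant-disease association test on a 2x2 table.
from scipy.stats import chi2_contingency

#                  disease   no disease
table = [[120,  880],    # variant carriers
         [ 60, 1140]]    # non-carriers

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4g}")
# In a real GWAS this test (or a regression model) is repeated across millions
# of variants, with correction for multiple testing and population structure.
```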

Using electronic medical record data, machine- and deep-learning algorithms have been able to predict many important clinical outcomes, including suicide risk, Alzheimer’s disease, dementia, severe sepsis, septic shock, hospital readmission, all-cause mortality, in-hospital mortality, unplanned readmission, prolonged length of stay, and final discharge diagnosis (Topol 2019).

All in all, predictive models have been used much more widely by clinicians than by public health professionals. However, on closer inspection, any application improving patient care at any level can be considered relevant to the field of public health. The ability of clinicians and healthcare providers to make better informed decisions on patient health will be improved by context-specific algorithms that use massive quantities of clinical, physiological, epidemiological, and genetic data. Precision Medicine will further benefit from these advanced algorithms, as their accuracy, timeliness, and appropriateness in clinical care improve over time, easing reliance on human resources. This advancement, however, still necessitates computer-literate physicians who are up to date with new-generation data-driven approaches. The key to a complete incorporation of AI into clinical care will therefore be the integration of human clinical judgement with advanced clinical machine learning algorithms (Khemasuwan et al. 2020).

Table 1.1 shows the most advanced AI applications in the field of healthcare:

Table 1.1 Most advanced AI applications in the field of healthcare

2 Current Limitations of AI Applications in Public Health

AI poses major technological, ethical, and social challenges, which require competent professionals to address. In fact, beyond the many opportunities, artificial intelligence presents some critical issues that could slow down the adoption of these applications. With Public Health interventions targeting entire populations, the introduction of AI might either improve or worsen health inequities on a large scale (Weiss et al. 2018); as many ethics and policy guidance documents have noted, the promotion of AI centres on personalized recommendations and individual action, but this should not undermine the importance of collective action on the social and structural determinants of health (Panch et al. 2019).

Moreover, like all public health interventions, AI has the potential to create enduring benefits, but it will require not just a broad coalition of support and partnership between the public and private sectors but also the trust and enduring support of patients.

Currently, despite their wide testing, AI-based prediction algorithms that affect patient care have not reached the level of accuracy needed for precise long-term predictions. This poses a serious challenge for healthcare workers, as long-term predictions of limited reliability could affect an individual’s life for years into the future: for example, both false-positive and false-negative predictions on an essential diagnosis could affect the level of risk clinicians are willing to undertake in order to treat a health condition, thus heavily impacting health outcomes. Furthermore, these prediction algorithms could be biased towards or against certain population sub-groups (e.g., ethnic groups, religious groups), both in terms of potentially discriminatory health practices suggested and in terms of individual autonomy over personal data use and informed consent. These potential pitfalls of AI-based algorithms and their long-term health inferences raise essential ethical concerns that have to be addressed by all the stakeholders involved (WHO 2021b).

One of the main and most obvious implications of introducing AI in Public Health is the risk of inequalities in access to technologies, in the opportunity to benefit from them, and in the burdens generated by them (IEEE Standards Association 2019). An example is developing countries that depend on AI-based platforms built in richer countries, a dependence that also imposes a significant financial burden. Most AI developments in healthcare respond to the needs of high-income countries (HICs), where most research is conducted; however, low- and middle-income countries, where workforce shortages and limited resources constrain access to quality care, could also benefit from the implementation of AI in Public Health.

Another element that exacerbates this digital divide is the lack of availability and accessibility of Internet services: Mobile Health apps, which make heavy use of Artificial Intelligence, are of no use for people living in areas without Internet access. This gap could manifest not only between population groups but also between researchers, the public and private sectors, and even health systems (Smith et al. 2020). A similar divide could emerge between those who choose to actively use AI technologies and those who do not. AI systems might be programmed according to certain values and judgements that could create or worsen health inequities, for instance, those related to gender, race, or belonging to minority groups (Norris 2001). Moreover, there could be disagreement about the system of values that informs AI systems (Caliskan et al. 2017).

It is essential, through harmonized standards and requirements at both the research stage and the evaluation phase, to ensure effectiveness for the patient as well as safety in use. The current scientific landscape will see an increase in the number of clinical trials that verify the effectiveness and efficacy of AI in Public Health; proper development of a precautionary, supervised system of trials will ensure that they are ethical, legal, and inclusive in scope. Nevertheless, the next decade will shed light on how the broad political, economic, and cultural global framework in which these technologies are developed will transform public health through the use of AI (Larburu et al. 2018).

The biggest challenge for AI in these healthcare domains will be ensuring its adoption in everyday clinical practice. For widespread adoption, AI systems must be approved by regulators, integrated with digital health platforms through health data pipelines, standardized at a level such that similar products perform similarly, taught to clinicians, accepted by patients, paid for by public or private payer organizations, and updated over time. The change is expected to be based on multilevel collaborations, from local to supranational, and supported by regulatory bodies acting in the interests of Public Health (Davenport and Kalakota 2019).