1.1 Introduction

The world has entered a digital era since the start of the COVID-19 pandemic, with workers from practically every company and health-care system worldwide working remotely. This transition was expected to mirror the advances that accompanied the roll-out of the public Internet in the 1990s and to underscore its crucial role in the digital evolution of global health; it has therefore been dubbed “the fourth industrial revolution” or “industry 4.0”. Its enormous promise for facilitating global health has become progressively apparent in recent years through AI-assisted clinical decision-making. In developed countries, the technology is employed to reduce non-communicable diseases, whereas in low-income countries it is focused on preventing or treating infectious diseases (Panch et al. 2019).

The potential applications of AI in global health include improved health surveillance (including enabling individuals to assess their own health risks), clinical decision-support systems, and tools that equip health workers to deliver personalised interventions, apply diagnostic criteria, and make more accurate referrals, all of which are poised to improve clinical care and strengthen health systems. As in other areas, applications of artificial intelligence ranging from clinical decision-making to supply-chain management are gaining attention during this period of digital growth (Wahl et al. 2018). However, in order to provide high-quality answers to highly specific queries, AI-enabled technology requires enormous data sets to support machine learning algorithms (Paul and Schaefer 2020).

Another area where AI-driven interventions have been evaluated in the global health setting is morbidity and mortality risk assessment. These interventions are mostly based on machine learning classification tools and often compare different machine learning techniques to find the best method for identifying risk. Such approaches have been used in hospitals to predict illness severity in patients with dengue fever and malaria and in children with acute infections, to estimate the likelihood of cognitive sequelae in children following malaria infection, and to calculate the probability of tuberculosis treatment failure (Phakhounthong et al. 2018; Kwizera et al. 2019).

As a result, health systems will play a critical role in driving the development of AI-based solutions and reaping their advantages. Advanced economies have accelerated the deployment of such technologies, but low- and middle-income nations face significant hurdles in developing and deploying these advances (Paul and Schaefer 2020).

Irrespective of the influence of social variables on outcomes, the future of public health will depend in part on the technical aspects of artificial intelligence. However, before AI concepts can be applied to global health at scale, these technical and social issues must be addressed (Khoury et al. 2016).

Bias in the data used to train machines is starting to have an impact on the health industry. Computer vision algorithms, for example, can categorise photographs of skin lesions as cancerous or benign, enabling quick, accurate, and non-invasive diagnosis. Image classifiers, however, may perform differently on darker skin, contributing to health inequalities across populations. Dermatological data sets often include smaller numbers of samples from dark-skinned patients, and dark-skinned individuals often present at later stages of disease, both of which can contribute to more frequent misdiagnosis. This form of bias can be mitigated by ensuring that the training data are representative of the patients who will use the tools, but this is not a simple problem to solve. The conclusions of artificial intelligence-based products become encoded with the particulars of the setting in which the training data were obtained, limiting their capacity to function across varied geographical, ethnic, and economic circumstances (Haenssle et al. 2018).
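
One practical way to surface this kind of bias is to report a classifier’s performance separately for each skin-tone group rather than as a single aggregate figure. The sketch below illustrates the idea on synthetic data; the variable names, the Fitzpatrick-style groupings, and the error rates are illustrative assumptions, not results from any published model.

```python
# Minimal sketch: auditing a lesion classifier's performance by skin-tone subgroup.
# All data here are synthetic; in practice y_true, y_pred, and skin_tone would come
# from a held-out evaluation set with recorded skin-tone labels.
import numpy as np
from sklearn.metrics import recall_score, precision_score

rng = np.random.default_rng(0)
n = 1000
skin_tone = rng.choice(["I-II", "III-IV", "V-VI"], size=n, p=[0.6, 0.3, 0.1])
y_true = rng.integers(0, 2, size=n)            # 1 = malignant, 0 = benign (synthetic)
y_pred = y_true.copy()
flip = rng.random(n) < np.where(skin_tone == "V-VI", 0.25, 0.10)  # simulate worse errors on darker skin
y_pred[flip] = 1 - y_pred[flip]

for group in ["I-II", "III-IV", "V-VI"]:
    mask = skin_tone == group
    sens = recall_score(y_true[mask], y_pred[mask])       # sensitivity for the malignant class
    prec = precision_score(y_true[mask], y_pred[mask])
    print(f"{group}: n={mask.sum():4d}  sensitivity={sens:.2f}  precision={prec:.2f}")
```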

Defects in the quality, completeness, and equality of health data pose particular hazards in the age of artificial intelligence. Errors in data recording can lead to misguided actions and resource allocation in low-resource nations. Even when machine learning techniques deliver accurate findings, their existing capabilities, such as risk screening, diagnosis, and assessment of future risk, frequently yield only potentially actionable data, and information that informs only a few discrete treatments is unlikely to produce positive health-care results.

Machine learning-based systems can fail in several ways, so a systematic approach is needed to advance global health care and create relevant, accurate solutions. Investment in global health requires strategies that work for the populations who use them, prioritising health system investments. Quality can be improved by focusing on the completeness, accuracy, and representativeness of data generated through investments in health management information systems. Equity can be maintained by increasing the representation of poor and marginalised groups in the data used to develop machine learning-based tools. Safeguards can be established by maintaining standards for the representativeness and transparency of training data sets and by instituting processes for examining how automated clinical decision-support tools perform. In addition, investment should be made only where the health-care system is strong enough to support machine learning tools and capable of turning results into action (Kwizera et al. 2019; Haenssle et al. 2018).

Artificial intelligence can be applied differently across populations, and its application throughout the health system should maintain and prioritise equity, whether it is used to improve the efficiency and effectiveness of services, to enable personalised interventions, or to match preventive services to individual needs; in this way the scope of its public health significance can be broadened. Developing these technologies requires large data sets that are representative of the population and that benefit everyone, aligning private-sector profit motives with social responsibility and public-health advances. To create partnerships with the private sector for the application of AI in public health, public health organisations need to take the lead, developing specific contracting instruments that produce tangible benefits and protections for health institutions and keep the focus on the populations they serve (Wahl et al. 2018).

1.2 Role of Artificial Intelligence (AI) in Public Health

AI is important in the context of public health because it has a variety of applications and can accomplish challenging tasks with minimal effort and high accuracy.

A few such applications include:

  • Make appointments

  • Assist doctors with disease diagnosis

  • Assist in surgical procedures

  • Make therapy and drug recommendations

  • Assist people in resolving their problems digitally, and much more (Table 1.1)

Table 1.1 Areas of public health with the use of artificial intelligence

1.2.1 Health Protection

1.2.1.1 Disease Detection

  • Oncology

    Every year, millions of individuals are affected by cancer, and the number continues to rise. Most cancers are discovered only after they have progressed to an advanced stage. As a result, we require more precise and time-efficient approaches that make cancer easier to diagnose.

    Cancer detection has become more accessible thanks to machine learning and deep data mining, which have helped classify various types of breast cancer cells. Convolutional neural networks (CNNs) can take image pixels and disease labels as inputs to detect and categorise malignancies much more quickly (Esteva et al. 2017); a minimal sketch of such a classifier follows this list. CNNs have been used for brain tumour segmentation (Işın et al. 2016), liver tumour segmentation (Vivanti et al. 2015), and optic path gliomas (Weizman et al. 2010). AI has also helped in cancer diagnosis using histopathology, in monitoring tumour growth, and in predicting prognosis (Londhe and Bhasin 2019).

  • Cataract

    Cataract is the leading cause of vision impairment and blindness globally, accounting for approximately 65.2 million cases of vision impairment and blindness (Flaxman et al. 2017). Several AI algorithms based on machine learning or deep learning approaches have been developed for the automated detection and grading of cataracts (Goh et al. 2020).
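
To make the oncology example above concrete, the following sketch shows the kind of convolutional neural network that maps image pixels and disease labels to a benign/malignant prediction. It is a minimal illustration in PyTorch; the architecture, input size, and synthetic data are assumptions for demonstration only, not a reproduction of any cited model.

```python
# Minimal sketch of a CNN lesion classifier: pixels in, benign/malignant label out.
import torch
import torch.nn as nn

class LesionCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 32 * 32, num_classes)  # for 128x128 RGB inputs

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = LesionCNN()
images = torch.randn(8, 3, 128, 128)   # a batch of lesion photographs (synthetic stand-ins)
labels = torch.randint(0, 2, (8,))     # 0 = benign, 1 = malignant (synthetic)
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()                        # one supervised learning step (optimiser omitted)
```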

1.2.1.2 Data Pattern Analysis for Near-Real-Time Surveillance

A machine learning algorithm called FINDER (Foodborne IllNess DEtector in Real Time) can identify eateries with a high risk of foodborne illness in real time. Based on Google searches and location logs, FINDER utilises machine learning to determine whether restaurants have significant food safety violations that may be contributing to foodborne disease (Sadilek et al. 2018).

1.2.2 Health Promotion

  • Cardiovascular diseases

    Cardiovascular diseases (CVDs) are the leading cause of death worldwide. About 17.9 million deaths worldwide in 2019 were attributable to CVDs, or 32% of all deaths, and 85% of these were due to heart attack or stroke (WHO 2021a, b). Most cardiovascular illnesses can be prevented by addressing behavioural risk factors such as tobacco use, poor diet and obesity, physical inactivity, and excessive alcohol use. Cardiovascular disease must be identified as early as feasible so that treatment with counselling and medication can begin.

    Machine learning significantly improves the accuracy of cardiovascular risk prediction, increasing the number of individuals identified who could benefit from preventive treatment while avoiding unnecessary treatment of others (Weng et al. 2017).

  • Diabetes

    The prevalence of diabetes is increasing at an alarming rate, with approximately 422 million people affected. An estimated 1.5 million deaths in 2019 were directly related to diabetes, making it the ninth leading cause of death (WHO 2021a, b). Diabetic retinopathy, kidney failure, heart attacks, strokes, and lower limb amputation are all common complications of diabetes, but diabetes can be kept under control with medications or insulin.

    As a result, treating and controlling diabetes has become a particularly significant task in recent times, and AI systems have been developed to assist in this process by providing information and capabilities that aid in the maintenance, prevention, and control of diabetes. Such systems analyse the patient’s condition and complications and set medicine or insulin administration times.

  • Neurological disorders

    Neurological disorders are among the most harmful conditions, and many people are affected by or suffering from them. Every year, 6.5 million individuals die from stroke (WSO 2022). CT scans, MRIs, and PET scans are among the diagnostic imaging methods used by doctors.

    AI-based CAD (computer-aided diagnosis) systems are widely used in the diagnosis of neurological disorders such as epilepsy, Parkinson’s disease, and Alzheimer’s disease (Raghavendra et al. 2019). Dementia is a neurological disorder that causes memory loss and antisocial behaviour, as well as communication and reasoning difficulties. PARO, for example, is a robot created to deliver therapy to dementia patients (Šabanović et al. 2013). Another example is RP-VITA, which provides a communication interface allowing patients and doctors in hospitals to engage via wireless video teleconferencing (InTouch Health) (Table 1.2).

Table 1.2 Areas of diabetes care that AI/ML can help with

1.2.3 Improving the Efficiency of Healthcare Services

1.2.3.1 Detecting Diabetic Retinopathy

Deep neural networks can be trained to detect diabetic retinopathy or diabetic macular oedema in retinal fundus photographs with good sensitivity and specificity, using large data sets and without specifying lesion-based criteria. This automated technique for detecting diabetic retinopathy has a number of benefits, including consistency of interpretation (since a computer will always make the same prediction on the same image), high sensitivity and specificity, and near-instantaneous reporting of results. Furthermore, because an algorithm can have several operating points, its sensitivity and specificity can be tailored to distinct clinical contexts, such as high sensitivity for screening (Gulshan et al. 2016).
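
The choice of operating point mentioned above amounts to picking a decision threshold on the model’s output score. The sketch below, using synthetic scores and scikit-learn’s ROC utilities, shows how a screening-oriented operating point (high sensitivity) might be selected; the data and the 95% sensitivity target are illustrative assumptions.

```python
# Minimal sketch: choosing a screening operating point on a retinopathy model's ROC curve.
# Scores and labels are synthetic; in practice they would come from a trained network
# evaluated on a held-out validation set.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=2000)                              # 1 = referable retinopathy
scores = np.clip(y_true * 0.3 + rng.normal(0.4, 0.2, 2000), 0, 1)   # model output probabilities

fpr, tpr, thresholds = roc_curve(y_true, scores)
specificity = 1 - fpr

# Screening operating point: highest specificity subject to sensitivity >= 0.95.
ok = tpr >= 0.95
best = np.argmax(specificity[ok])
print("threshold:", thresholds[ok][best],
      "sensitivity:", tpr[ok][best],
      "specificity:", specificity[ok][best])
```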

1.2.3.2 Health Informatics and Electronic Medical Records

Health informatics is the process of gathering, storing, retrieving, and using medical data to enhance patient care across interactions with the healthcare system. Health informatics can assist in the design of public health programmes by ensuring that crucial information is available for making appropriate policy and programme decisions. Electronic medical records (EMRs), which are digital versions of patient and population health data, are an important data source for health informatics (Panda and Bhatia 2018).

1.2.3.3 Booking Appointments

It is challenging to collect accurate data, schedule available days, respond to questions, and remain interactive while doing so. Although making appointments is a difficult task, customer satisfaction requires a well-organised appointment scheduling process. Customer service is critical in settings such as hospitals, where continual scheduling, booking, and reserving are necessary. The use of AI in conjunction with human agents appears to result in a higher degree of customer satisfaction. It also permits clients to use self-service without time or language constraints.

The potential of AI to track and store data while interacting with clients daily is remarkable. When a regular client calls a hospital to arrange an appointment, AI can instantly access their information, recommend specialists based on past bookings, and organise appointments with their regular physicians. By referring to the booking calendar, AI appointment scheduling can also allocate the physician’s time slots. As for availability, conversational AI appointment booking is always on call, available 24 h a day, 7 days a week; to arrange bookings or reservations, one no longer needs to stick to a specific time frame, queue up, or wait on hold. Nor is it constrained to a single global language: thanks to recent technological breakthroughs, AI booking systems can engage in different languages, so regardless of a client’s preferred language, their difficulties and requirements can be understood quickly and the availability they seek can be provided.

1.2.3.4 Surgical Assistance

During an operation, AI can offer surgeons guidance. It can distinguish key view components as the surgery progresses, highlight areas where dissection is safe (the critical view of safety, CVS), and offer warnings or notifications appropriate to the individual phase. It can also point out where surgical steps were carried out properly.

Machine Learning can be used to automate the indexing and bookmarking of operative steps (smart screenshots based on event recognition) and the compilation of surgical reports.

Artificial intelligence has numerous advantages in healthcare, including offering user-centric experiences, increasing operational efficiency, linking disparate healthcare data, and many others.

1.3 Global Health Challenges

Artificial intelligence has enormous potential for facilitating precision global health, and its diverse applications raise several ethical, social, and political issues (Krittanawong and Kaplin 2021). With the aid of big data, supercomputing, sensor networks, brain science, and other technologies, recent advances in artificial intelligence have demonstrated success in a wide range of clinical tasks (Jiang et al. 2021). In low-income countries, technology powered by artificial intelligence (such as cloud computing, mobile health, and drones) can be used to prevent or treat infectious diseases; in developed countries, it can be used to reduce non-communicable, “lifestyle-related” diseases. Before applying AI principles to global health on a large scale, several limitations must be addressed: the difficulties of machine learning, gaps in ethics and law, and society’s lack of acceptance have all hindered AI (Krittanawong and Kaplin 2021).

1.3.1 Impact of Data Biases

Data about personal health are among the most sensitive data an individual may own. The right to privacy is a crucial ethical principle in health care because privacy is a function of patient autonomy, personal identity, and well-being (Weng et al. 2017). Once diverse, good-quality datasets have been generated, it will be necessary to ensure that they are debiased in matching, recognition, and analysis; this will make it easier to create fair and inclusive algorithms (Buolamwini and Gebru 2018).

However, AI has the potential to replicate institutional and cultural biases encoded in the data it learns from (West et al. 2019). The most common ethical issue is biased data sources: almost any data set is biased in one way or another with respect to factors such as gender, sexual orientation, race, sociology, environment, or economics (Geis et al. 2019). Artificial intelligence programs learn and draw conclusions from existing data, and because historical data reveal patterns of inequity in healthcare, machine learning models trained on them may perpetuate those patterns (Tobia et al. 2021). Attention to bias in the data used to train AI models is therefore essential, as is scrutiny of the health data or other information supplied as input. AI models may exhibit bias when the training data are not representative of the target population; alternatively, AI creators who are not sensitive to the social and cultural differences between their society and the target populations may embed bias in their coding, which can negatively affect AI models and their predictions. In such cases, AI’s potential impact could be undermined, resulting in mistrust of AI (Singh et al. 2020).
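
A simple first check against the representativeness problem described above is to compare the composition of the training data with that of the intended target population before any model is fitted. The sketch below does this for a single illustrative attribute; the group names and proportions are assumptions, not real figures.

```python
# Minimal sketch: comparing the demographic mix of a training set against the
# target population before model development. Values here are illustrative only.
import pandas as pd

training = pd.Series(["urban"] * 700 + ["rural"] * 300, name="setting")   # training-set records
target_population = pd.Series({"urban": 0.45, "rural": 0.55})             # assumed population shares

train_share = training.value_counts(normalize=True)
comparison = pd.DataFrame({"training": train_share, "population": target_population})
comparison["gap"] = comparison["training"] - comparison["population"]
print(comparison)   # large gaps flag groups the model may under-serve
```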

Therefore, humans must be sensitised before machines are trained. AI developers may need to collaborate with ethicists, anthropologists, psychologists, and dermatologists (to assist in coding skin colours), and should be informed by high-quality humanities, social science, and behavioural science research (Singh et al. 2020).

1.3.2 Absence or Insufficiency of IT Infrastructure

A significant obstacle to implementing AI is the inadequacy or absence of IT infrastructure (including Wi-Fi, servers, cloud computing, local area networks, and broadband) in the majority of LMICs. Given that the use of AI in medicine is still in its infancy, research laboratories and AI vendors are offering different pipeline strategies; for example, AI can be embedded in hardware such as a CT scanner, an ultrasound unit, or a mammography station (Hosny and Aerts 2019).

AI deployments in LMICs are also hampered by the scarcity of skilled technologists in these countries. A thorough curriculum for clinical radiology education must therefore also address technologists: technologists with good imaging skills are needed if AI is to assist them with quality-control activities (Hosny and Aerts 2019).

1.3.3 Trustworthiness of AI

The trustworthiness of AI is a key barrier to its widespread adoption. Most present AI is criticised for its lack of transparency, generalisability, and explanation of outputs, which is known as the “black-box” phenomenon (Wahl et al. 2018). Integrating artificial intelligence (AI) into healthcare poses new issues for doctors (Tobia et al. 2021). Although a fully rule-oriented robot may appear to be more trustworthy at first glance, an ethical person is more dependable in scenarios requiring complex clinical practice decisions (Gelhaus 2011).

Critics seek pathophysiologic explanations for AI’s results. For AI to function in resource-constrained health institutions in LMICs, staff must be trained to assess AI outputs and apply them therapeutically when necessary. Using AI without proper training, on the other hand, could lead to blind acceptance of outputs without critical evaluation, and regardless of the clinical interaction, AI tends to magnify biased findings (Oliveira et al. 2017).

1.3.4 Importance and Impact of AI Laws

AI’s legal concerns before being hired, healthcare personnel must pass a series of tests, and they must follow a set of rules in the workplace. Currently, there are no globally unified laws or regulations governing AI in medicine to standardise practitioners’ behaviour (Mitchell and Ploem 2018). As a result, the creation of broad and precise AI legislation is critical. However, there are a few issues to consider. To begin with, legal professionals alone will not be able to create such laws. Stakeholders interested in creating or developing AI-based medical solutions should be invited to participate. Second, when confronted with AI-related infringement, determine who is responsible: the AI manufacturer, user, or maintainer (Cath 2018).

1.3.5 Societal Acceptance

These days, AI is implemented through software programs, and engineers working with large amounts of code will inevitably make mistakes. An AI system can be improved through patches and upgrades, but such mistakes could endanger patients’ health when AI programs are used in the medical field. Developers usually evaluate the efficiency of an AI system rather than its security (Belard et al. 2017). Although most patients tend to trust AI-based diagnoses, their trust is tested most when the AI’s diagnosis and the clinician’s diagnosis differ (Ooi et al. 2021).

1.4 Artificial Intelligence and Its Opportunities in Global Health

In many of the world’s poorest areas, AI is a foreign or unintelligible idea, as millions of people in these areas have yet to adopt or pro fit from transformational technologies that emerged during the first, second, and third industrial revolutions (Singh 2019). These issues, however, do not have to be a barrier to such communities reaping the benefits of AI. If AI is appropriately positioned and used, the Sustainable Development Goals (SDGs), which aim to “Ensure healthy lifestyles and promote wellbeing for all at all ages”, can be achieved much more quickly as a result (2019). On the other hand, advances in artificial intelligence-enabled technology depend on large datasets to facilitate machine learning algorithms, which enable artificial-learning tools to deliver high-quality responses to extremely exact questions (Paul and Schaefer 2020). Some potential uses of artificial intelligence in the provision of global health services include enhanced health surveillance, enabling people to assess their own health risks, providing frontline health workers with tools for more precise referrals, individualised interventions, and diagnostic aids, as well as clinical decision support systems (USAID 2019).

1.4.1 AI-Interventions and Application Areas

Health-related AI initiatives in LMICs can be broadly grouped into the following application categories.

Affordable AI-powered mobile or portable device solutions fall under the first group. These are often operated by non-specialist community health workers (CHWs) and focus on treating common diseases in off-site locations including community centres and private residences. CHWs can assess patients using AI recommendations to determine which ones require more intensive follow-up. Additional applications are anticipated with the development of portable diagnostic technologies, such as microscopes and ultrasound probes, including the ability to diagnose skin cancer from photographs and analyse peripheral blood samples to detect malaria (Oliveira et al. 2017). As smartphone adoption increases, patient-facing AI applications may advise on lifestyle and diet, enable self-evaluation of symptoms, and offer guidance throughout pregnancies or recovery periods, empowering individuals to manage their own health and relieving the burden on already overwhelmed healthcare systems.

In order to aid physicians in their clinical decisions, the second area of application concentrates on more advanced medical requirements. Primary care doctors who are not specialists may be able to perform specialised tasks, such as using AI to analyse diagnostic radiology and pathology images, and refer patients to specialists only when necessary. AI tools could help professionals develop expertise across a range of subspecialties. This is particularly true in oncology, where a shortage of subspecialists may require one oncologist to treat tumours at several anatomical sites, resulting in subpar care given the continually changing scope of services. AI can also help sustain long-term operations, detect problems, and reduce delays in parts and consumables by evaluating prior maintenance data (Hosny and Aerts 2019).

The third application area is population health, enabling government agencies to understand cause-and-effect relationships. For example, AI could aid in the upkeep of national cancer registries: by extracting data from, for instance, the contents of radiology and pathology reports, automated registries may help minimise labour costs, which account for 58% of all registry expenditures (Tangka et al. 2016).

Other applications include scheduling and optimising CHWs’ home visits and detecting hotspots for potential disease outbreaks in unmapped rural areas through AI-powered analysis of aerial photography and meteorological trends. It is unclear how these applications will translate into advantageous long-term health policies, even though they may inspire quick practical solutions. The quality and accessibility of healthcare in LMICs has been improved through a number of digital initiatives, including mobile health (mHealth), which employs mobile phones and tablets, alongside health-care practices supported by electronic processes (eHealth) and remote telecommunications (telehealth). Best practices for scaling these programmes in LMICs have been developed from real-world experience, most notably through the World Health Organization’s mHealth Assessment and Planning for Scale (MAPS) Toolkit (Labrique et al. 2018). These efforts may offer lessons for similar digital AI applications.

Another potential junction of cutting-edge technology and medicine is the delivery of medications or vaccines to resource-poor places using medical robots (e.g., doc.ai) and programmable drones. Combining medical robots and drones with the computer vision techniques described above opens new possibilities: for instance, computer vision can be used to detect parasitic infections or tuberculosis remotely from computer-analysed imaging data, and drones can then deliver anthelmintic or anti-tuberculosis medications (Payal and Purva 2018). Finally, AI-assisted clinical decision-making has been employed in the medical field for decades and is now gradually being applied to global health care (Krittanawong and Kaplin 2021).

1.4.2 Artificial Intelligence Opportunities in Global Health

Many AI health interventions have demonstrated promising preliminary outcomes and could be used to supplement traditional health-care delivery systems in LMICs in the near future, especially in disease diagnosis, where AI-assisted interventions could be deployed in nations with a shortage of health experts, and in risk assessment, where machine learning-based technologies could enhance clinical knowledge (Guo and Li 2018).

The so-called robot radiologist is one of the most talked-about AI applications in medicine (Reardon 2019). Early lung cancer detection, automated coronary calcium scoring, and synthetic MRI based on CT are just a few of the radiology AI advances emerging from a rapidly growing sector of research institutions, digital start-ups, and health-care corporations. Using radiomics, AI extracts high-dimensional data from clinical images and can automate complex four-dimensional cardiovascular flow models linked to sizable genomic datasets (Rizzo et al. 2018). Radiotherapy treatment delivery, for instance, might be accelerated, patient intake increased, and more emphasis placed on the clinical specifics of patient management without adding more staff. Although the lack of diagnostic and therapeutic equipment may not be instantly resolved, the incorporation of AI into equipment design may help non-technical operators troubleshoot problems when technicians are scarce (Krittanawong and Kaplin 2021).

Many HICs are rapidly integrating AI into health-care delivery, whereas most resource-constrained health institutions, notably in low- and middle-income countries (LMICs), lack digital infrastructure. Around two-thirds of the globe lacks adequate access to radiography, and inequitable AI use could exacerbate radiology-related health inequities. However, this substantial disparity also shows that, if successfully adopted, AI could have a significant impact on global radiology service delivery and minimise inequities (Mollura et al. 2020).

One of the key uses of AI, according to studies, is the automation or support of the diagnosis of communicable and non-communicable diseases. To automate the diagnosis of infectious diseases, signal processing approaches are frequently combined with machine learning. Signal-processing interventions have used radiological data for tuberculosis (Lopes and Valiati 2017; Aguiar et al. 2016) and drug-resistant tuberculosis (Jaeger et al. 2018), ultrasound data for pneumonia (Correa et al. 2018), microscopy data for malaria (Go et al. 2018; Andrade et al. 2010), and other biological sources of tuberculosis data (Khan et al. 2018). Most AI-assisted diagnostic interventions in LMICs exhibited excellent sensitivity, specificity, or accuracy (>85% for all) and non-inferiority to comparator diagnostic instruments. Expert systems are used to diagnose tuberculosis (Osamor et al. 2014) and malaria, and machine learning assists clinicians in diagnosing tuberculosis (Elveren and Yumuşak 2011). Other AI-driven interventions in LMICs have focused on the diagnosis of non-communicable diseases.

As an AI-powered self-check-up programme, Ada Health GmbH (Berlin, Germany) has added Swahili and Romanian language support to assist people in East Africa and Romania who have limited access to medical resources. In remote places, artificial intelligence-enabled clinical decision-making, such as the virtual doctor, will become more common (Park and Han 2018). One area where it has been implemented is telemonitoring of heart failure patients, where remote monitoring of fluid balance allowed diuretics to be adjusted without an in-person office visit, reducing hospital resource utilisation. The possibilities for a programme like this are endless.

Another area where AI-driven interventions have been evaluated in the global health context is morbidity and mortality risk assessment. These interventions are mostly based on machine learning classification tools and often compare different machine learning approaches to find the best method for identifying risk. Such approaches have been used in hospitals to predict the severity of illness in patients with dengue fever (Phakhounthong et al. 2018) and malaria (Johnston et al. 2019) and in children with acute infections. Researchers have also used them to evaluate the probability of tuberculosis treatment failure (Hussain and Junejo 2019) and to assess the likelihood of cognitive sequelae in children after malaria infection (Veretennikova et al. 2018). Health outcomes of non-infectious illnesses have likewise been estimated using machine learning classification algorithms.
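
The model-comparison step these studies describe can be sketched as a cross-validated benchmark of several candidate classifiers on the same risk-prediction task. The example below uses synthetic data and three common scikit-learn models as stand-ins; it is an illustration of the approach, not a re-implementation of any cited study.

```python
# Minimal sketch: comparing candidate classifiers with cross-validation to choose a risk model.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for clinical features (vitals, labs, history) and an outcome label.
X, y = make_classification(n_samples=500, n_features=12, random_state=0)

candidates = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in candidates.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")   # 5-fold cross-validated AUC
    print(f"{name}: mean AUC = {auc.mean():.3f}")
```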

Robotics could bring new tools to help the elderly and frail. Engineers are investigating the prospect of incorporating artificial intelligence into robotic equipment to provide more intelligent support to these patients, such as reminding them when to take their medication. Furthermore, the “ImPACT” innovation initiative invests a lot of money in high-risk, high-impact research and development. Projects under its umbrella aim to increase nursing care recipients’ independence while reducing caregiver load, speed medical research and development, improve knowledge acquisition, and prevent cognitive decline in the elderly. Intelligent walkers and wheelchairs are examples of devices that can help with safety and freedom (Fenech et al. 2018).

Preventing and managing the evolution of HIV medication resistance is a vital component of a comprehensive and effective HIV strategy (WHO 2016a, b, 2017) if we want to eradicate HIV by 2030. Through the development of AI algorithms that predict HIV medication resistance and disease progression, AI could play an essential role in managing antiretroviral (ARV) therapy (Singh 2017; Hajek and Singh 2011). With the implementation of such a system, doctors may be able to predict how patients would react to various medications throughout various time frames. This could lead to the most effective drug being prescribed depending on individual results. AI has also been shown to be a good support system for detecting stained tuberculosis (TB) bacilli and assisting in clinical decision-making by improving the effectiveness and specificity of TB diagnostic procedures (Dande and Samant 2018; Xiong et al. 2018). Such applications can assist pathologists in coping with their tremendous workload and reduce the risk of misdiagnosis. AI can help us get closer to the SDG target of eradicating AIDS and tuberculosis by 2030. On the other hand, existing health measures will be insufficient to meet the SDGs. Evidence and creative initiatives will be required to drive policy reform towards achieving the SDGs.

Clinical professionals and clinical laboratory infrastructure are in short supply in low-income countries. An ineffective health workforce stifles efficiency in the health system by contributing to inaccurate diagnoses and unnecessary prescriptions, even though cost-efficiency is essential to attaining quality health care. It has been estimated that 20–40% of health spending is lost owing to inefficiencies in the health workforce and inadequacies in human resource management (WHO 2010). According to the WHO, reducing inefficiencies is a critical component of the response required to address critical health worker shortages, as well as a prerequisite for developing a robust investment strategy that will enable the realisation of universal health coverage, a core goal of the SDGs (WHO 2016a, b). With AI, medical professionals may be able to care for a greater number of patients (Meskó et al. 2018). By offering clinical decision assistance to overloaded physicians in LMICs, and thereby boosting efficiency by detecting and evaluating health risks through predictive analytics, AI could play an essential role in mitigating the dire health-care worker shortage in the Global South. For example, a patient’s vital signs, basic history, and notes from a physical examination by a nurse or mid-level practitioner might be entered into the AI framework on arrival at a health facility, allowing the algorithm to generate a predicted diagnosis. These predicted diagnoses could be used to prioritise which patients should be seen by a physician first and which should be referred for routine out-patient follow-up. Such triaging may reduce waiting periods for emergency or urgent care, improving access within a health-care system with limited resources (Singh 2019). A system like this would also be less prone to individual medical biases, allowing clinicians to examine diagnostic alternatives that might not have been obvious at first. Given that LMIC health staff shortages are expected to worsen by 2030 (WHO 2016a, b), universal access to health care will remain a pipe dream unless AI clinical decision assistance is adopted. However, the use of AI in such situations will necessitate careful evaluation.
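
The triage idea outlined above can be sketched as ranking arriving patients by a model’s predicted risk so that the highest-risk cases are seen first. In the toy example below, a hand-written early-warning-style score stands in for the AI model’s output; the thresholds, features, and patient records are illustrative assumptions.

```python
# Minimal sketch: ordering a waiting queue by predicted risk (highest risk seen first).
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    heart_rate: int
    resp_rate: int
    temp_c: float

def predicted_risk(p: Patient) -> float:
    # Placeholder for an AI model's output: a crude early-warning-style score in [0, 1].
    score = 0.0
    score += 1.0 if p.heart_rate > 100 else 0.0
    score += 1.0 if p.resp_rate > 20 else 0.0
    score += 1.0 if p.temp_c >= 38.0 else 0.0
    return score / 3.0

queue = [Patient("A", 95, 18, 37.2), Patient("B", 120, 28, 39.1), Patient("C", 105, 19, 38.0)]
for p in sorted(queue, key=predicted_risk, reverse=True):
    print(p.name, f"risk={predicted_risk(p):.2f}")   # B is seen first, then C, then A
```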

1.5 Demand and Drawbacks of AI in the Context of Global Health

Demand for any commodity depends primarily on its price, and AI technology is no exception. However, the determinants of demand for consumption goods, capital goods, and technological absorption differ because of the differing characteristics of each category of goods. The primary determinant of the absorption of AI is its price, the components of which are the total cost of technology acquisition, maintenance costs, software-updating costs, and training costs for the professionals and personnel who will use it. The demand for AI in healthcare has two components: demand for AI-based wearables and devices (including mobile phones, tablets, laptops, and computers, i.e., Internet of Things (IoT) components), chatbots, and the like, and the need for AI infrastructure for healthcare professionals. AI infrastructure comprises hardware, such as instruments to examine symptoms and diagnostic testing instruments; software to fetch, collect, or record data and to run algorithms that automate diagnosis and prescription; and an Internet connection. The wearables at the patients’ end also require an Internet connection.

Thus, apart from price, the demand determinants for any technology are its positive externalities, its absorptive capability, the readiness of resources to adopt it, and entrepreneurial leadership (Arifin et al. 2015). At a macro level, the influence of entrepreneurial leadership is analogous to a government’s approach towards the use of technology and to policies that enable technological absorption. Laryea (1999) observes, in the context of IT adoption, that government policies can significantly influence the extent of technology adoption. Demand for AI in healthcare can thus be said to be influenced by its positive externalities, government policies, its absorptive capability, and the infrastructure and resources required to adopt it.

The positive externalities of implementing AI in healthcare include an increase in the number of lives saved, a reduction in the time healthcare professionals spend per patient, and a reduction in annual healthcare expenditure (including the opportunity cost of healthcare professionals’ time). An estimate by Deloitte Network (2020) shows that 170.9 to 212.4 billion euros could be saved annually by implementing AI in healthcare (including the opportunity cost of healthcare professionals’ time); wearable AI applications alone could save around 50.6 billion euros and 336.1 million hours. The report also estimates that between 380,000 and 403,000 lives could be saved annually through the appropriate implementation of AI in healthcare. This positive externality is large enough to boost the demand for AI in healthcare.

The remaining demand determinants (the absorptive capability of AI in healthcare, the readiness of resources for its implementation, and government policies) act as both enablers and constraints. Technology absorption, the availability of resources, and government policies go hand in hand, so each is discussed in an interconnected manner in this section.

The absorption of AI technology needs to be examined at two levels: patients (as end-users) and healthcare professionals (as primary users). The use of wearables (an end-user product) would help improve the quality of life of patients with chronic conditions. However, their absorption depends on their acceptability to patients. Mercer et al. (2016) found that the acceptance of wearables for tracking activities was high among people aged 50 and above with chronic conditions; 73% of respondents expressed their desire (and plans) to purchase one. Sun and Rau (2015) show that ease of use and societal perception strongly influence the acceptance of personal health devices, such as wearables, among patients with chronic conditions. Difficulties with ease of use, and the resulting lower acceptance, are greater among older adults. AI-led health chatbots have found only muted acceptance because of hesitancy among prospective users concerning quality, trustworthiness, and accuracy (Nadarzynski et al. 2019). This lowers the absorption of AI-based health chatbots.

The acceptance of AI by healthcare professionals also needs to be examined. The use of AI by healthcare professionals strongly reduces information asymmetry in AI product markets (Cannavale et al. 2022). The reduction in information asymmetry in the product market would give innovators insights to develop more relevant and valuable AI products and applications, and a drop in information asymmetry would also increase demand for AI in healthcare. Shah and Chircu (2018) observe a need for higher acceptance of AI-based technology in healthcare services; this area requires deeper and broader exploration.

As discussed, the absorption of AI in healthcare goes hand in hand with the fiscal strength of an economy, its legal system, and the government’s attitude towards the use of AI in healthcare. He et al. (2019) have examined the constraints associated with the implementation of AI in healthcare. The limitations on AI technology absorption usually take the form of limited budgetary allocation for the healthcare sector or financial constraints on private healthcare service providers (private hospitals). They explain that the optimal performance of AI depends not only on populating it with patients’ data but also on software updates and regular maintenance of the instruments. This involves a large recurring expenditure to sustain the AI systems implemented in healthcare facilities. At this point, government support in the form of fiscal allocations and healthcare policies would play an important role in the absorption of AI in healthcare. A gap in the absorption of AI between high-, middle-, and low-income countries is inevitable, even with AI-friendly policies.

Apart from constraints on the absorption of AI in healthcare, there are other vulnerabilities associated with premature implementation of AI. Shah and Chircu (2018) highlight the issues of data privacy and security hindering the absorption of AI in healthcare. Shaheen (2021) adds that AI is vulnerable to incorrect diagnoses that lead to inappropriate treatments being prescribed. A wrong diagnosis and an inappropriate prescription carry heavy consequences, worsening a patient’s health and, in extreme situations, even causing death. A premature AI system is likely to have noisy data and insufficient testing and validation of its diagnoses and treatment prescriptions. The study further highlights that a tiny flaw in the software could affect the health and lives of many patients. Thus, trust is crucial to the acceptance and subsequent absorption of AI in healthcare.

A strong ethical code of conduct for healthcare professionals, together with severe sanctions for violating it at both the government and organisational levels, is essential to ensure the absorption of AI in healthcare. In its absence, clinical decision support systems would be at risk of being programmed to increase the profits of certain pharmaceutical companies, clinical testing laboratories, or even healthcare professionals. Countries where healthcare services are offered by the private sector are at higher risk of such abuse.

Thus, a plethora of factors has inhibited the absorption of AI in healthcare, despite the advantages, usefulness, and efficacy it offers when implemented with care and caution.

The COVID-19 pandemic has completely transformed ways of working, and healthcare is no exception. This has resulted in increased demand for AI, as it helps not only with early diagnosis but also with contactless and rapid treatment. Applications of AI have been on a rising trend for the past few years, a trend the pandemic appears to have accelerated (Bohr and Memarzadeh 2020; CBInsights 2020).

1.6 Conclusion

Global health leaders therefore need to consider the benefits that widely applied AI in prevention and health maintenance would bring to the sector; realising them will require the commitment of global leaders to the universal implementation and application of AI technology in both developed and developing countries. From a global perspective, the health outcomes of applying AI offset its drawbacks. As with other areas of new technology, there are concerns and controversies around AI (e.g., data bias, data security, and legal questions), and a joint global agreement, legislation, and commitment will pave the way for its universal benefits and improve the efficiency and effectiveness of AI applications in global population health. It is expected that, as the population covered by AI expands globally, trust in AI will rapidly increase through a continuous process of revision and evolution. Finally, the application of AI may improve equity of access not only to basic primary healthcare but also to more advanced, complex, and novel interventions worldwide.