
1 Introduction

Astronomy was probably the first field in which openness in sharing knowledge brought about milestone progress for humankind, as the notes of Chinese astronomers dated to 1054 AD enabled Carl Otto Lampland to discover a remnant of that supernova, the Crab Nebula, in 1921 (Fore 2019). Faith in progress achieved through cooperation and parallel learning helps achieve success, even when the collaboration spans distant generations. Likewise, in high-energy physics, access to open data is essential for further discoveries and scientific progress. Large international scientific organizations, for instance, the European Council for Nuclear Research (CERN), founded in Europe in 1952, have become world-class research bodies by sharing knowledge with the global community. This is a progressive way of practicing science, based on cooperation rather than pure competition, that is poised to bring about great discoveries.

A steady increase in the amount of data to be converted into knowledge necessitates adequate software solutions, workflows, and explanations (Chen et al. 2019). To process petabytes of data, it is imperative to implement artificial intelligence (AI) solutions because the work cannot be performed by humans alone (Lorkowski and Malinowska 2020). The introduction of such solutions is unavoidable, especially during the SARS-CoV-2 pandemic, when time seems increasingly essential; the pandemic has hastened this process. AI and deep neural networks (DNN), designed for the analysis and processing of tremendous volumes of data, are of critical help in drug and vaccine discovery, the observation of clinical effects, and further outcomes (Grzegorowska and Lorkowski 2020).

The 1970s can be considered the onset of AI and knowledge-sharing ideas in the modern world. The oil crisis of the time prompted Jack Nilles to propose the terms “teleworking” and “telecommuting” as a way to reduce fuel consumption in the United States. Based on this innovative concept, Alvin Toffler raised the idea of an electronic village as the main place of production in the future. The idea was broadly discussed in Europe, especially in Scandinavian countries, but the concept of telework was nearly forgotten. It came back to life in the 1980s and has been steadily on the rise since (Dangelmeier et al. 1999). The current SARS-CoV-2 pandemic has made this concept come true, as people have started working from home due mostly to social distancing.

Biomedical science focuses on four interrelated topics: personalized medicine, data-intensive technologies, big data and information technologies, and AI (Schork 2019). This chapter aims to present a short insight into the connections between artificial intelligence, medicine, public health, and the economy. To this end, we screened the PubMed and Google Scholar databases using the queries “artificial intelligence” AND “public health” and “artificial intelligence” AND “medicine”. As of December 1, 2020, the search returned 18,000 and 11,000 entries for the respective term combinations, dating ten years back. From this huge volume of articles, we chose for further analysis those that specifically handled the issue of AI and were published in the following three renowned medical journals: 185 articles in Science, 61 in The Lancet, and 27 in The New England Journal of Medicine.

2 Public Health and Smart Cities

Charles Edward Amory Winslow of Yale University in New Haven, Connecticut, proposed the following definition of public health in 1920: “Public Health is the science and the art of preventing disease, prolonging life, and promoting physical health and efficiency through organized community efforts for the sanitation of the environment, the control of community infections, the education of the individual in principles of personal hygiene, the organization of medical and nursing service for the early diagnosis and preventive treatment of disease, and the development of the social machinery which will ensure to every individual in the community a standard of living adequate for the maintenance of health; organizing these benefits in such fashion as to enable every citizen to realize his birth right of health and longevity” (Winslow 1920). Public health uses specific methods and concepts to measure and describe health conditions. A widely used concept is the quality of life (QoL), defined by the WHO as “individuals’ perception of their position in life in the context of the culture in which they live and in relation to their goals, expectations, standards, and concerns”. The QoL is commonly evaluated in patients with cancer, mental illness, heart disease, gastrointestinal disease, chronic obstructive pulmonary disease, asthma, and in the elderly (Haraldstad et al. 2019). Public health sciences also explore how urbanization affects human life and health and to what extent citizens can influence their living conditions. It is known that city dwelling relates to a better QoL, physical health, and access to the healthcare system, as well as enabling the elderly to engage in more social and physical activities (Zagozdzon et al. 2011; Eggebeen and Lichter 1993). Such factors appear to entice people to dwell in big cities or their neighborhoods, which spurred the development of the 2003 New Charter of Athens, an innovative project connected to urbanization (Stouten 2003). From the healthcare standpoint, the purpose of such an initiative has been to form a national health-promoting living space for residents by the reorganization of urban planning, inclusive of an entire range of public health services based on the use of information and communication technologies (ICT) (Lorkowski and Malinowska 2020). The ICT is involved in the prevention of life-threatening situations like cardiorespiratory arrest, cardiac and brain infarcts, accidental falls suffered by the frail elderly, and others requiring sophisticated and prompt help. These situations shape the idea of a smart city, proposed by the International Telecommunication Union and defined as “an innovative city that uses the ICT and other means to improve QoL, the efficiency of urban operations and services, and competitiveness while ensuring that it meets the needs of present and future generations with respect to economic, social, and environmental aspects” (Toh et al. 2020). It follows that the use of AI is naturally integrated into the concept of a smart city. The United Kingdom, China, and India are countries that have allocated substantial fiscal resources to design and implement smart cities (Toh et al. 2020).

The 2017 Global Smart City Performance Index, evaluating the four most important vectors of city function, namely mobility, healthcare, safety, and productivity, was topped by Singapore, followed by London and New York (Smart Cities Association 2017). These cities ranked best on the number of hospital beds per capita, road safety, air pollution, cycling infrastructure, public transport, telemedicine, digital health portals, virtual medical advisor assistance, elderly care, and campaigns promoting a healthy way of life. There are reports that each citizen “gets back” 15 days’ worth of time every year owing to the ICT. This time can be spent on physical activity, sleep, or the adoption of healthy eating habits, all of which positively affect cardio-cerebrovascular health (Riggs et al. 2018; Seixas et al. 2018). Physical activity reduces weight and glucose levels and improves the quality of sleep, so that patients with chronic obstructive pulmonary disease (COPD) also benefit from it (Lewthwaite et al. 2017). The Global Liveability Index (2021), created by The Economist, ranks 140 major cities in the categories of stability, culture and environment, education, infrastructure, and healthcare. Healthcare is evaluated in the smart city context by assessing both public and private systems. In 2018, the highest scores in the index were achieved by Vienna, Melbourne, Osaka, Calgary, Sydney, Vancouver, Toronto, Tokyo, and Adelaide.

On the other hand, the contemporary SARS-CoV-2 pandemic has pointed out some limitations inherent to the smart city concept. One of them is the problem of communication systems, which are fragmented and often understood only by their service providers. An effort should be made to standardize them and to find the best solutions for cooperating in the case of disasters and for building cities comprehensively with regard to public health, particularly when the situation enforces social distancing or other extraordinary measures (Allam and Jones 2020; Capolongo et al. 2020).

3 Artificial Intelligence (AI)

AI is used in various fields, from public health to biology, and its tasks can often be reduced to predicting an outcome from diverse features or finding repeating patterns in large datasets (Deo 2015). Today, AI is mostly applied through machine learning (ML), also in medicine. There are three kinds of ML: supervised, unsupervised, and reinforcement learning. While the first focuses on the classification of labeled data, the second tries to find previously unknown patterns and assess their viability; such patterns are then evaluated in supervised learning tasks. Reinforcement learning is reward-based learning used mostly in robotic applications (Aktolun 2019). The most widely used approach is supervised learning, which uses the following algorithms: linear and logistic regression, artificial neural networks (ANN), support vector machines (SVM), and tree-based methods; all of these can be combined. A special subset of ML is deep learning, which builds the ANN in a process inspired by neural interconnections in the human brain, consisting of multiple layers made of nodes. The nodes are interconnected with nodes in the preceding and following layers, much as neurons are in the brain. In practice, these methods help analyze huge amounts of data in a chosen context. The SVM builds a model that assigns new items to a category using linear classification; applying a kernel trick allows nonlinear classification by mapping the inputs into high-dimensional feature spaces. Finally, a decision tree is the most common method that builds a tree structure and assigns labels by creating appropriate “splits” (Al’Aref et al. 2019).
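As a concrete illustration of the supervised algorithms named above, the minimal sketch below fits a logistic regression, a kernel SVM, and a decision tree to a synthetic binary classification task. The use of Python with scikit-learn, the synthetic data, and all parameter choices are illustrative assumptions rather than methods taken from the cited studies.

```python
# Minimal sketch: three supervised-learning algorithm families named above
# (logistic regression, kernel SVM, decision tree) on a synthetic task.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for tabular clinical features (e.g. labs, vitals).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "SVM (RBF kernel)": SVC(kernel="rbf"),                  # kernel trick -> nonlinear boundary
    "decision tree": DecisionTreeClassifier(max_depth=5),   # labels assigned via splits
}
for name, model in models.items():
    model.fit(X_train, y_train)                             # supervised: learn from labeled data
    print(name, accuracy_score(y_test, model.predict(X_test)))
```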

AI is a powerful technological toolset providing the means for machine deduction, reasoning, learning, and interaction. It has entered our everyday life by solving business problems sensibly and acceptably, which includes building advanced algorithms, analyzing data, enhancing computational power and storage, and lowering costs (Ergen 2019). Despite these facts, AI has been portrayed by Stephen Hawking as a possible threat to the world economy, as it may one day replace humanity. People should therefore focus on creating beneficial AI rather than just any AI, which is of essential importance if AI is not to remain a digitalized utopia (Hawking 2015).

4 Blockchain Technologies

The WHO has proposed a classification of digital healthcare services into interventions for clients, healthcare providers, health system or resource managers, and data services. We are currently part of a digital transformation in which digital medical networks can be created using two different concepts. The first concept assumes the use of a centralized network, a technology tested during the First Gulf War and then used by corporations such as IBM and Walmart, where it substantially increased profits and helped improve the quality of customer services. In the medical field, a basic issue is to develop a centralized network that would be a fast and effective information exchange platform ensuring the security of data (Lorkowski and Malinowska 2020). The second concept, which is gaining popularity and progressing promptly, relates to the development of cryptocurrencies. It is the blockchain technology that enables transactions between two or more parties without a centralized authority and secures them with cryptographic principles, making them inexpensive and fast. This technology can be defined as a chain of blocks with time stamps connected through cryptographic hashes; the chain can grow all the time as new blocks are added, with each new block containing a cryptographic hash of the content of the preceding block. The blockchain, called the distributed general ledger, is a distributed database, also known as the registry or common register. Each piece of information is stored as an independent copy of the registry on computers and servers around the world, which means that every user has their own copy of all collected data (Kuo et al. 2017).
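The hash-linked structure described above can be summarized in a few lines of code. The following is a minimal sketch, assuming SHA-256 hashing and illustrative field names; it is not taken from any cited implementation.

```python
# Minimal sketch of a hash-linked chain of blocks (illustrative only).
import hashlib
import json
import time

def block_hash(block):
    """Deterministic SHA-256 digest of a block's full content."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def new_block(data, prev_block):
    return {
        "timestamp": time.time(),   # time stamp of the block
        "data": data,               # e.g. a health-record transaction
        # Link to the predecessor: a hash of its entire content ("0" for the first block).
        "prev_hash": block_hash(prev_block) if prev_block else "0",
    }

# The chain grows as new blocks are appended, each committing to the one before it.
chain = [new_block("genesis", None)]
chain.append(new_block("patient consent recorded", chain[-1]))
chain.append(new_block("lab result appended", chain[-1]))

# Immutability in miniature: altering any earlier block would break every later hash link.
for prev, curr in zip(chain, chain[1:]):
    assert curr["prev_hash"] == block_hash(prev)
```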

Key benefits of the blockchain, when compared to traditional databases for biomedical and healthcare applications, include decentralization and cryptography providing security and privacy, in addition to immutability (data once saved cannot be changed, corrupted, or deleted), assignment of health data to the patient, and data robustness, transparency, trust, and verifiability. This technology is used in electronic health records (EHR), remote patient monitoring, and the pharmaceutical industry, where distributed data ledgers may improve the management of medical records and accelerate biomedical research and education. These fields are also vulnerable to potential threats of forgery.

Currently, a more patient-centered connection of different, previously independent and non-communicating systems is needed, followed by their transformation into reliable electronic databases. The development of blockchain networks in medicine is determined by the connections between particular parts of the system. A major drawback is that these networks are created by different suppliers and based on different solutions, which creates compatibility and standardization problems. Another challenge is to adequately protect medical data against potential security breaches. Moreover, the European Union General Data Protection Regulation assumes that the user always has the right to request complete erasure of their data; the blockchain technology, on the other hand, makes this impossible due to immutability. Finally, very large volumes of data may degrade processing performance by lowering the system’s speed (Agbo et al. 2019).

5 Sensitive Data

A basic issue concerning large collections of data and AI in medicine and public health is database safety. As data in medicine contain racial, genetic, biometric, and even religious belief information and evidence of identity, they are regarded as sensitive. That does not exclude the anonymous use of data, as in meta-analyses, which is a very worthwhile cause. Fortunately, the unauthorized use of sensitive data is unlikely when proper security measures are adopted. It happens that some online applications contain racially and socioeconomically denigrating content, which meets with public outcry; such content is usually introduced onto platforms through commonly anonymized, unfiltered data streaming from the Internet. Luckily, there have been no similar incidents in the healthcare system. However, further watchfulness and testing of security measures are required before the full implementation of AI in healthcare (Al’Aref et al. 2019). Researchers argue that the anonymization of medical data, particularly of identity, is so well advanced that a security breach through de-anonymization is less likely than obtaining information from within the system or through the vulnerability of hospital records to external hacking. It means that the weakest link in the system may be the human (Lorkowski and Malinowska 2020; Mearian 2018). The first AI-based and US Food and Drug Administration (FDA)-approved application for facilitating clinical, particularly cardiovascular, diagnoses, the Arterys medical imaging platform, provides a unique solution for the Protected Health Information (PHI) service. The application splits off the data that may constitute evidence of identity and then stores the data in the Arterys cloud and on the hospital’s secure PHI server; these data can later be rebuilt when an accredited user logs into the system (Marr 2017).
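To make the general idea of separating identifying data from clinical content more tangible, the sketch below splits direct identifiers out of a record and re-joins them only for an accredited user. It is a hypothetical illustration of this de-identification pattern, not Arterys’ actual implementation; all field names and the token scheme are assumptions.

```python
# Hypothetical illustration of splitting identifiers from clinical data.
import uuid

PHI_FIELDS = {"name", "date_of_birth", "address"}  # assumed set of direct identifiers

def split_record(record):
    """Separate direct identifiers from the clinical payload, linked by a random token."""
    token = str(uuid.uuid4())
    phi = {k: v for k, v in record.items() if k in PHI_FIELDS}
    clinical = {k: v for k, v in record.items() if k not in PHI_FIELDS}
    clinical["token"] = token
    return token, phi, clinical  # phi -> secure PHI store; clinical -> analytics store

def rebuild(token, phi_store, clinical_store, user_is_accredited):
    """Re-identify a record, but only for an accredited user."""
    if not user_is_accredited:
        raise PermissionError("re-identification requires accreditation")
    return {**clinical_store[token], **phi_store[token]}

record = {"name": "Jane Doe", "date_of_birth": "1970-01-01", "address": "Example Street 1",
          "troponin": 0.02, "diagnosis": "NSTEMI"}
token, phi, clinical = split_record(record)
phi_store, clinical_store = {token: phi}, {token: clinical}
print(rebuild(token, phi_store, clinical_store, user_is_accredited=True))
```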

Medical data are a potential economic resource for individuals, companies, and countries. The issue of sharing medical data remains unresolved, as they should be processed in compliance with the highest ethical standards. Plausibly, a worldwide collection of medical data might be the most valuable and largest body of data humanity possesses. It might also become a global resource of strategic importance for humanity’s development and prosperity. All of this demands special infrastructure, legal conditions, and established ways of collecting, processing, and storing data, bearing in mind that they concern populations rather than individuals.

6 Medical Documentation

The volume of medical documentation increases every year, taking doctors’ time away from the direct care of patients. This takes a toll on both physicians and patients and also increases the costs of medical services (Raulinajtys-Grzybek and Lorkowski 2020). Burnout syndrome, on the increase among physicians, appears connected with the time spent filling in and creating medical documentation. It has been estimated that each hour spent with a patient translates into two hours spent on the EHR, mostly on documentation. In the United States, this “wasted” time has been priced at 90–140 billion dollars in lost physician productivity every year (Lin 2020). There are two possibilities for creating and storing medical documentation: traditional on paper and digital. It is most probable that a mix of the two will persist for a long time to come. An unfortunate underestimation of the advantages of digitalized documentation is felt, which amounts to an attempt to deny the law of large numbers and statistics (Bloom and Cadarette 2019).

It is estimated that about 91 billion dollars (14% of public health spending) were wasted due to flawed administration in 2018. That pointedly shows that the current model of appointing principals and tasks for health bureaucracies simply does not work. Applying AI to ensure the appropriate execution of operations and the optimization of procedures should contribute to reductions in costs and lost opportunities. The excessive paperwork is a consequence of unreasonable and often repeated document circulation. Jonathan Bush has described this phenomenon as the “sewage” of modern medicine; applying AI to the analysis of bureaucratic procedures will reduce them, improve them, and redirect them onto useful operational tracks. A reorganization of the allocation of funds, a reduction of the costs of lost opportunities, and an optimal cash flow are the most expected results of these actions. For many observers, the hitherto existing system of medical documentation in healthcare is a huge, monolithic, difficult-to-use, and reform-inert structure. Paradoxically, the world’s most popular systems of archiving data are often built on old technologies primarily dedicated to building the first databases, which might be a reason they seem more intuitive to an average user (Bush and Baker 2014).

According to an article published in The New Yorker, the implementation and evaluation of new software are time-consuming and require constant improvement. The software system described there was created not for the medical staff but for patients who would like to check their laboratory results, remember their drug list, or track their conditions. Any step toward more convenient, patient-oriented management is beneficial from the medical and economic standpoints, as the chances of health improvement and a prompt return to work increase (Gawande 2018).

Creating the EHR helps streamline the patient visit. However, the EHR affects patient–physician communication in complex ways, enforcing changes in the consultation attitude that would satisfy both sides (Crampton et al. 2016). Medical scribe services are gaining popularity, offering a smooth transition from speech to written text, performing documentation in the EHR, and partnering with the physician to deliver the most efficient patient care (Ash et al. 2020). There also exist government-sponsored EHRs introduced to collect medical data from a wide range of people. Their main purpose is to monitor long-term clinical and public health patterns that might be therapeutically and epidemiologically useful in the future. The UK Biobank and the NIH All of Us Research Program in the United States are changing the practice of personalized medicine by collecting and evaluating data from millions of people to accelerate research, diagnosis, and treatment services (Bycroft et al. 2018; Sankar and Parker 2017).

Medical data are specific and heterogeneous and come in a variety of forms that are essential for the proper identification of patient health status. Machine learning, which is capable of extracting information from data, is of substantial help in medical care. Large amounts of collected data require special processing before they can be introduced into AI algorithms, and the resulting models must be validated before clinical adoption. That is why learning from and making predictions on data requires splitting them into training, validation, and test sets. Test data should be prepared in a specifically objective way, contain data from various hospitals and institutions, and be used as often as possible (Park et al. 2019).
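As a concrete sketch of the split described above, the snippet below carves a dataset into training, validation, and test sets. The 60/20/20 proportions, the synthetic data, and the use of scikit-learn are illustrative assumptions, not recommendations taken from the cited work.

```python
# Illustrative training / validation / test split (proportions and library are assumptions).
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-in for heterogeneous clinical data (rows: patients, columns: features).
X = np.random.rand(1000, 10)
y = np.random.randint(0, 2, size=1000)

# Hold out a test set first (ideally drawn from other hospitals or institutions),
# then split the remainder into training and validation sets for model tuning.
X_trainval, X_test, y_trainval, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_trainval, y_trainval, test_size=0.25, random_state=0)

print(len(X_train), len(X_val), len(X_test))  # 600 200 200
```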

7 Multiomics, Precision, and Personalized Medicine

The biological identity is made up of features characterized by individual epigenomics, genomics, metabolomics, microbiomics, pharmacogenomics, proteomics, and other “omic” data, which together create a “permutome”; the field has been called “multiomics” (Livingstone et al. 2015). The “omics” can be used to create personalized diagnostic and treatment procedures and to increase their efficacy and safety. Precision medicine uses multiomic data to improve human health, better understand human biology, and create a perfect healthcare system that would be predictive, preventive, personalized, and participatory (“P4”) (Ziegelstein 2017a; Guzzi et al. 2016). Most implementations of genomic profiling can be classified into one of two models based on the outcome: (1) research to discover generalized knowledge and (2) research to recover individualized knowledge (Trent 2019). A perfect example of therapy based on recovering individualized knowledge is chimeric antigen receptor T-cell (CAR-T) therapy, which uses T cells grown in the laboratory to treat various types of cancer (Graham et al. 2018).

In the past, medicine was practiced without evidence-based knowledge, but medical care was adjusted to the individual. Current medicine has been dominated by technology, electronic documentation, and a race against time that often pulls the doctor away from full care for patients, which raises the question of effective doctor–patient communication. Ziegelstein (2017b) emphasizes that therapy based on established guidelines is tailored to groups rather than to individual patients, which makes the patient invisible within a group with similar diseases and characteristics. A new term, “personomics”, has been adopted in clinical practice to describe this unique phenomenon. Personalized medicine uses knowledge about the patient’s values, goals, preferences, and financial resources, all of which are included in “personomics”. As Ziegelstein (2017b) states, “The evolution from precision medicine to personalized medicine is the evolution from healthcare to health caring”. On the other hand, Jameson and Longo (2015) state that since medicine has become individualized and personalized, the term precision medicine should also take the patient’s psychosocial status into consideration. AI has strongly influenced personalized medicine. Consequently, the best possible solution for connecting personalized medicine with healthcare is to create a single process in which the diagnosis, treatment, and follow-up of the patient are complementary to each other and enable a fluent transition from one activity to another (Schork 2019). As the primary goal of precision medicine is developing models for predicting health status and preventing disease and disability, its application without using AI to process large data collections is next to impossible.

8 Artificial Intelligence (AI) in Medicine

The AI is being tested in an increasing number of medical fields. The first recognition came with the finding that the computer-aided reading of mammograms is equal to reading by a physician (Gilbert et al. 2008). Since that time, many articles have described the application of AI in radiology and other fields of medical imaging, for example, pulmonology, invasive and noninvasive cardiology, psychiatry, and orthopedics (Kalmet et al. 2020; Refaee et al. 2020; Al’Aref et al. 2019; Davatzikos 2019; Betancur et al. 2018). Precision medicine offers a broad application of deep learning methods in defining features responsible for, or predisposing to, various diseases such as Alzheimer’s disease, schizophrenia, dilated cardiomyopathy, or heart failure (Lin and Tsai 2019; Shah 2017). Genomic studies also make ample use of deep learning methods to name variants of protein sequences, predict mutation effects, or identify binding motifs (Bao et al. 2020). Of notable interest is the “deep patient” presented by Miotto et al. (2016), who used the EHR to further clinical predictive modeling and observed substantial improvements in the diagnosis of testicular and prostate cancers, sickle-cell anemia, attention deficit, and disruptive behavior disorders.

The proposed solutions influence the perception of modern medicine. The AI, whose increasing use is inescapable, substantially reduces healthcare costs. It helps doctors do their work and develop their skills instead of spending time filling out medical records. The acceptance of AI in everyday practice need not adversely affect procedural outcomes and clinical efficacy; on the contrary, it provides computational suggestions and models to help avoid medical complications. The AI also makes it possible for doctors to extend the years of their active work, with accompanying advantages for the gross domestic product.

9 Conclusions

AI has undoubtedly improved medical care and management. Computer-based algorithms are time- and cost-effective measures that show the likely direction of future transformative developments (Anderson 2019). Innovative strategies in healthcare should center on the patient’s well-being. Electronic health records help personalize medical care using technology as a consolidative tool. The AI is fundamental for a medical practice of excellence (Souza Filho et al. 2019). On the other side, there may be limitations to the use of AI in medicine. Some medical professionals harbor a degree of prejudice toward machine learning methods. Where precision counts most, as in a healthcare system, there can be selection pitfalls. Sampling and observer selection bias, inconsequential in most commercial settings, can adversely affect the process of medical modeling. Selection bias can occur when an algorithm draws conclusions from deeply flawed data (Al’Aref et al. 2019). The occurrence of some diseases can be underpredicted due to inadequate screening, epidemiologic, or other poorly controlled factors. Such issues are difficult to identify and remedy in clinical trial datasets analyzed with machine learning methods. Nonetheless, progress in the development of AI will shape humanity’s future in many a field, including resource efficiency, autonomous machines, healthcare, and the like. The AI is not likely to replace humans in many professions, but the expectation is that it will create more jobs than it destroys, increasing productivity at the same time. As Ergen (2019) states, “The industrial era let machines do the physical work, the information era enabled machines to do the computation and storage, now the AI era will let machines make the decisions”.