Abstract
Artificial intelligence (AI) has the potential to make substantial progress toward the goal of making healthcare more personalized, predictive, preventative, and interactive. We believe AI will continue on its present path and ultimately become a mature and effective tool for the healthcare sector. At the same time, AI-based systems raise concerns regarding data security and privacy: because health records are both valuable and vulnerable, hackers often target them during data breaches, so maintaining the confidentiality of medical records is crucial. The absence of standard guidelines for the ethical use of AI and machine learning (ML) in healthcare has worsened the situation, and there is ongoing debate about how far AI may ethically be used in healthcare settings. This study highlights the possible drawbacks of implementing AI in the healthcare sector and solutions for overcoming them.
Introduction
The healthcare industry is in the middle of a transformation. This shift is driven by the growing cost of health care and the resulting scarcity of trained experts. As a result, the healthcare industry is attempting to integrate new IT-based technologies and processes that may cut costs and offer solutions to these growing difficulties [1].
Accessibility, high costs, waste, and an aging population are just a few of the numerous difficulties confronting the world's healthcare systems. During pandemics such as the coronavirus (COVID-19), healthcare systems are stressed, resulting in concerns such as insufficient protective equipment, insufficient or erroneous diagnostic tests [2], overworked physicians, and a lack of information exchange. More crucially, a healthcare tragedy like COVID-19 or the introduction of the human immunodeficiency virus (HIV) in the 1980s exposes the flaws in our healthcare systems. When crises exacerbate existing difficulties [3], such as uneven access to treatment, a lack of on-demand services, unreasonably expensive costs, and a lack of price transparency, we may envision and implement new systems of care and administrative support for healthcare [4].
When tackling these issues, we must keep in mind their interdependence and the belief that accessing healthcare is difficult because it is delivered via complex networks. This is not to say that providing high-quality healthcare is simple, but it does imply that we have some alternatives [5] for creating simpler mechanisms that will offer better care and benefit everyone. ML is a technique used in healthcare systems to assist medical practitioners in patient care and clinical data management. It is an application of artificial intelligence in which computers are programmed to imitate how humans think and learn. AI has the potential to play a critical role in simplifying healthcare systems and advancing medical research, and it is increasingly embedded in care delivery systems. The COVID-19 challenge exemplifies one potential use of AI: diagnostics [6], treatment choices, and communication are just a few of the many applications now adopting AI-powered technologies [7, 8].
Artificial intelligence (AI) has the potential to make substantial progress toward the goal of making healthcare more personalized, predictive, preventative, and interactive [9]. We believe AI will continue on its present path and ultimately become a mature and effective tool for healthcare [10]. The remainder of this paper will concentrate on the most essential applications of AI. There are several obstacles to successfully implementing any information technology in healthcare, let alone AI. These obstacles arise at all levels of AI adoption, including data collection, technological development, clinical application, and ethical and societal concerns. This paper highlights the drawbacks of AI in the healthcare industry alongside its benefits.
Drawbacks
Data Collection Concern
The first problem is the inaccessibility of relevant data. Massive datasets are required for ML and DL models to properly classify or predict across a wide range of tasks. The most significant advances in ML's ability to generate more refined and accurate algorithms have occurred in sectors with easy access to large datasets. The healthcare industry has a complex issue with information accessibility [11]. Because patient records are often regarded as confidential, there is a natural reluctance among institutions to exchange health data. Another difficulty is that data may not be readily available once an algorithm has been initially implemented using it. Ideally, ML-based systems would constantly improve as more data were added to their training set; internal corporate resistance can make this difficult to achieve. It has been stated that the effective application of information technology and artificial intelligence in healthcare requires a paradigm shift from treating patients individually to improving healthcare. Some modern algorithms may be able to operate on unimodal or less extensive data rather than requiring multimodal learning, and the converse problem of storing these ever-expanding datasets may be alleviated by the growing use of cloud computing servers [12].
AI-based systems raise concerns regarding data security and privacy. Because health records are both valuable and vulnerable, hackers often target them during data breaches. Therefore, maintaining the confidentiality of medical records is crucial [13]. Because of the advancement of AI, users may mistake artificial systems for people and consent to more covert data collection, raising serious privacy concerns [11]. Patient consent is a key component of data privacy issues, since healthcare practitioners may allow wide usage of patient information for AI research without requiring specific patient approval. In 2018, Google absorbed DeepMind Health, a leader in healthcare AI. Its app Streams, which uses an algorithm to manage patients with acute kidney injury, came under criticism when it was discovered that the NHS had uploaded data on 1.6 million patients to DeepMind servers without the patients' consent in order to build the algorithm. In the USA, a patient data privacy investigation was carried out on Google's Project Nightingale. Data privacy is now even more of a concern since the app is formally hosted on Google's servers [13, 14].
Europe's General Data Protection Regulation (GDPR) and the Health Research Regulations, both of which came into force in 2018, are recent examples of legislation that may help resolve this problem by restricting the collection, use, and sharing of personal information. However, because differing laws passed by different countries complicate collaboration and cooperative research, data privacy regulations established to solve this issue may restrict the quantity of data available to train AI systems on a national and global scale [15]. More flexible approaches to data security are needed if these restrictions are not to stifle innovation in the industry. One method is to improve client-side data encryption; another is to employ federated learning, which trains models without centralizing the data [12].
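The idea behind federated learning can be sketched in a few lines: each institution trains on its own records, and only model parameters, never raw patient data, travel to a central server for averaging. The following is a minimal, purely illustrative sketch of federated averaging with synthetic data and invented hospital names, not a production implementation.

```python
# Minimal sketch of federated averaging (FedAvg): each hospital trains a
# one-parameter linear model locally; only the fitted weight, never the
# raw patient records, is sent to the server, which averages the weights.
# All data below are synthetic and for illustration only.

def local_update(w, data, lr=0.1, epochs=20):
    """One site's gradient-descent update of a 1-feature linear model y = w*x."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_round(w_global, sites):
    """Server step: average the locally trained weights (no raw data moves)."""
    local_weights = [local_update(w_global, data) for data in sites]
    return sum(local_weights) / len(local_weights)

# Synthetic per-hospital datasets following y ~ 2x, never pooled centrally.
hospital_a = [(1.0, 2.1), (2.0, 3.9)]
hospital_b = [(1.5, 3.0), (3.0, 6.2)]

w = 0.0
for _ in range(5):
    w = federated_round(w, [hospital_a, hospital_b])
print(round(w, 1))  # 2.0, close to the underlying slope
```

Real deployments (e.g., frameworks built for cross-institution training) add secure aggregation and differential privacy on top of this basic weight-averaging loop.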
Analyzing the quality of the data used to develop algorithms is equally challenging. Given that patient data are estimated to have a half-life of around 4 months, certain predictive algorithms may not be as successful at predicting future results as they are at recreating the past. Additionally, medical records are seldom organized neatly, since they are often erroneous and inconsistently stored. Datasets used to develop AI systems will always include unforeseen gaps, despite intensive attempts to clean and analyze the data. Although it is predicted that the broad deployment of electronic medical records will help to solve this issue, the amount of data that can be utilized to develop efficient algorithms is still constrained by issues with regulation and compatibility across institutions [16].
Algorithms Developments Concerns
Potentially distorted outcomes might be the consequence of biases in the data collection processes used to inform model development. For instance, under-representation of minorities as a consequence of racial biases in dataset development might lead to subpar prediction results. Many methods exist for combating this bias, such as the creation of multi-ethnic training sets. It is also possible for AI models to deal with bias on their own, for example through de-biasing neural networks that dampen the effect of such confounding attributes. Time will tell whether these strategies are successful in eliminating bias in the real world [15, 16].
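One of the simplest mitigation steps alluded to above is reweighting: giving an under-represented group enough weight that it contributes equally to the training loss. A minimal sketch, with invented group names and counts:

```python
# Hedged sketch of sample reweighting for an imbalanced training set.
# Group labels and sizes are synthetic; real work would use audited
# demographic attributes and fairness metrics.
from collections import Counter

samples = ["group_a"] * 90 + ["group_b"] * 10  # group_b is under-represented

counts = Counter(samples)
n_groups = len(counts)
# Weight each sample so every group's total weight is equal:
# weight(g) = n_samples / (n_groups * count(g))
weights = {g: len(samples) / (n_groups * c) for g, c in counts.items()}

print(round(weights["group_a"], 2), weights["group_b"])  # 0.56 5.0
```

These per-sample weights would then be passed to the training loss, so 10 minority records count as much as 90 majority records.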
The development of AI technology presents a new challenge after data collection. Overfitting occurs when the algorithm learns unimportant associations between patient features and outcomes; it happens when too many variables influence the model, leading it to make inaccurate predictions. The algorithm may then perform well on the training dataset yet produce inaccurate results when projecting future events. Data leakage is another area of worry: when a covariate inside the training dataset inadvertently encodes the outcome, the algorithm achieves unrealistically high predictive accuracy, but its ability to foretell occurrences beyond the training dataset is diminished. A fresh dataset is therefore required to corroborate the results and detect this issue [17,18,19].
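Overfitting is usually detected by the gap between training error and held-out error. The toy sketch below contrasts a model that memorizes its training records (perfect on them, useless on new patients) with a simple trend fit; all data are synthetic:

```python
# Illustrative sketch: overfitting appears as a large gap between training
# and held-out error. The "memorizer" model and data are synthetic toys.
import random

random.seed(0)
# Synthetic records: (feature, outcome) with a noisy linear relation y ~ x
data = [(x, x + random.gauss(0, 1)) for x in range(40)]
train, test = data[:30], data[30:]

def memorizer(train):
    """Overfit model: looks up the exact training answer, else guesses 0."""
    table = {x: y for x, y in train}
    return lambda x: table.get(x, 0.0)

def linear(train):
    """Least-squares slope through the origin: captures the real trend."""
    slope = sum(x * y for x, y in train) / sum(x * x for x, _ in train)
    return lambda x: slope * x

def mse(model, rows):
    return sum((model(x) - y) ** 2 for x, y in rows) / len(rows)

for fit in (memorizer, linear):
    m = fit(train)
    print(fit.__name__, round(mse(m, train), 2), round(mse(m, test), 2))
```

The memorizer's training error is exactly zero while its held-out error is enormous; the trend model's two errors are comparable. This train-versus-validation comparison on unseen data is precisely the "fresh dataset" check described above.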
One typical criticism leveled toward AI systems is the so-called "black-box" problem. Deep learning algorithms typically lack the ability to provide convincing explanations for their forecasts. If the recommendations are wrong, the system has no way to defend itself legally. It also makes it harder for scientists to understand how the data connect to the predictions. On top of that, the "black box" may cause people to lose faith in the medical system altogether. Although this discussion is ongoing, it is worth noting that the mechanism of action of many commonly prescribed medications, such as Panadol, is poorly understood, and that most doctors have only a basic understanding of diagnostic imaging tools like magnetic resonance imaging and computed tomography. Building AI systems that can be understood by humans is still an active field of study, and Google recently published a tool to help with this [20].
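One common post-hoc way to peek inside a black-box predictor is permutation importance: shuffle one input feature and measure how much accuracy drops. The sketch below uses a synthetic stand-in model and invented features, purely to illustrate the technique:

```python
# Hedged sketch of permutation importance on a toy "black-box" model.
# The model, patients, and features are all synthetic.
import random

random.seed(1)

def model(features):
    """Stand-in black box: a risk flag driven only by the first feature."""
    return 1 if features[0] > 0.5 else 0

# Synthetic patients: feature 0 is informative, feature 1 is pure noise.
patients = [[random.random(), random.random()] for _ in range(200)]
labels = [1 if p[0] > 0.5 else 0 for p in patients]

def accuracy(rows, labels):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

baseline = accuracy(patients, labels)

def permutation_importance(idx):
    """Accuracy drop after shuffling column idx across all patients."""
    col = [p[idx] for p in patients]
    random.shuffle(col)
    rows = [p[:idx] + [v] + p[idx + 1:] for p, v in zip(patients, col)]
    return baseline - accuracy(rows, labels)

print(permutation_importance(0) > permutation_importance(1))  # True
```

Shuffling the informative feature destroys accuracy, while shuffling the noise feature changes nothing, which is exactly the kind of human-readable evidence an explainability tool surfaces.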
Ethical Concerns
Ethical concerns have been raised about artificial intelligence ever since it was first conceived. The main problem is accountability, not the data privacy and security issues previously noted. Because of the gravity of the consequences, the current system requires that someone be held accountable when poor decisions are made, especially in the medical field. Many people see AI as a "black box": researchers worry that it will be difficult to determine how an algorithm arrived at a particular conclusion. Some have suggested that the "black-box" problem is less of a concern for algorithms used in lower-stakes applications, such as those that are not medical and instead prioritize efficiency or operational improvement. Despite this, the issue of responsibility becomes much more important when considering AI applications that attempt to improve medical outcomes, particularly when errors occur. It is not apparent who is to blame in the event of a system failure. It may be hard to pin the blame on the doctor when they had no part in developing or overseeing the algorithm; yet holding the developer at fault may appear unrelated to the clinical setting. The use of artificial intelligence for ethical decision-making in healthcare is prohibited in China and Hong Kong [8,9,10, 21].
The absence of standard guidelines for the ethical use of AI and ML in healthcare has only worsened the situation. There is debate about how far AI may ethically be utilized in healthcare settings, since there are no universal guidelines for its use. In that vein, the first attempt in the USA to establish criteria for evaluating the safety and efficacy of AI systems has been undertaken by the Food and Drug Administration (FDA). To avoid adding unnecessary complexity to innovation and acceptance during the screening process, the NHS is also drafting standards for demonstrating the effectiveness of AI-driven solutions. Both efforts are ongoing, and until they mature it remains difficult for courts and regulatory agencies to approve actions based on AI. Equally important is holding a public conversation about these ethical dilemmas, in the hope of arriving at a universal ethical standard that benefits patients [15, 16, 22].
Social Concerns
Humans have always feared that artificial intelligence (AI) in healthcare might eliminate their jobs. Some people are skeptical about, and even hostile to, AI-based projects because of the threat of being replaced. This perspective, however, is largely based on a misinterpretation of AI in its various manifestations. Even if we ignore the time it will take for AI to evolve to the level where it could successfully replace healthcare personnel, the arrival of AI does not imply that jobs will become obsolete [15], but rather that they will need to be re-engineered. Because of the human element and inherent unpredictability of many medical processes, they will never be as linear or as well ordered as an algorithm would require. Skepticism about AI, although understandable, clearly has a detrimental effect and acts as a barrier to wider acceptance of the technology. When it comes to the consequences and efficacy of AI, though, naivete can lead to unrealistic expectations. The public might become disillusioned with AI if its current capabilities are overestimated. Greater public dialog about AI in health care is essential to address these attitudes among patients and medical professionals [2, 3].
Clinical Implementation Concerns
Lack of empirical data validating the effectiveness of AI-based interventions in prospective clinical trials is the main obstacle to successful deployment. Most research on AI's application has been conducted in the business setting; thus, we lack information on how it affects the final outcomes for patients. So far, the majority of healthcare AI research has been done in non-clinical settings, which makes generalizing research results challenging. Randomized controlled trials, the gold standard in medicine, have yet to demonstrate the benefits of AI in healthcare. Due to the absence of practical data and the uneven quality of research, organizations are hesitant to implement AI-based solutions, and doing so is difficult [22].
For artificial intelligence to be widely accepted, it must be integrated into medical processes for more efficient use. Effective load reduction relies on the usability of information systems: AI-based tools must not slow down clinicians while they examine or explore electronic medical data. The price tag includes the investment of time and resources required to train medical professionals to use the technology effectively. Few instances of successfully incorporating AI into clinical therapy have been demonstrated so far, with most cases remaining in the experimental phase [23]. Lack of stakeholder participation in the development phase has been the key barrier to successful integration in many examples of innovation adoption. Getting input from a wide range of people is crucial to developing a solution that can be seamlessly integrated into clinical practice. Many AI advancements were made in the wake of the SARS and Ebola epidemics with the goal of improving outcomes through means such as more accurate epidemiological forecasting or faster diagnosis. There are limitations to these rapidly evolving advances, however: their usefulness in healthcare depends on seamless incorporation into existing procedures without confusing or slowing down clinicians who lack training in AI. Beyond this, clinical research has also faced issues related to the algorithms themselves [24, 25].
Biased and Discriminatory Algorithms
The issue of "bias" is not limited to the social and cultural domains; it is also present in the technological domain. Biased software and technological artifacts may result from poor design or from incorrect or unbalanced data being fed into algorithms. AI thus replicates the racial, gender, and age prejudice that already exists in our society, widening the gap between the rich and the poor. You have probably heard of Amazon's controversial trial of a nontraditional approach to recruiting from a few years back. The candidate search tool relied on AI to rate applicants on a scale of one to five stars, much the way Amazon customers review products. The computer models Amazon developed to screen job applications were biased in favor of male applicants and against resumes containing the term "women," because they had been trained on a decade of predominantly male application data [26].
The lack of diversity within development teams is a problem, as is the biased nature of the data used to build the product. Due to this lack of diversity, developers' cultural prejudices and misconceptions become embedded in the very fabric of technological development. As a result, businesses that fail to embrace diversity run the danger of creating services or goods that exclude large segments of the population. A study conducted four years ago discovered that certain face recognition algorithms misclassified less than 1% of white men but over 33% of black women. Even though the systems' creators insist the programs are top-notch, the pool of participants they used to gauge effectiveness was over 77% male and 83% white [23,24,25, 28].
Suggested Potential Solutions to the Drawbacks of AI in Healthcare Sector
Ethical Concerns—Possible Solutions
Ethical accountability in relation to AI in the healthcare sector falls mainly into three categories: fairness, accountability, and transparency. This has encouraged investigators to champion these three pillars of AI ethics [26]. Biases can originate from the use of datasets that over-represent or under-represent groups, or that are missing entire attributes carrying information relevant to the task in question. There is also the threat of "automation bias": people begin to depend completely on the machine's output instead of making their own decisions and inspections [27]. Moreover, the practice of AI in the health sector evokes concerns about data security and the privacy of patients' personal information. Since algorithm training requires access to large datasets that ideally characterize different population groups, concerns about consent and about effective de-identification and anonymization of data remain critical [28].
To overcome these hindrances, possible solutions have been suggested that address the issues of fairness, accountability, and transparency through the implementation of ethical governance, model explainability, model interpretability, and ethical auditing [29]. In this way, the development, certification, and application of AI in the healthcare sector make possible biases transparent, leading to better AI-based analysis and decision-making in various medical domains. These approaches also demand enhancements in the training and education of health experts, providing efficient training sessions for medical staff and students on the proper interaction with and management of artificially intelligent equipment [30]. The regulation problem can also be addressed through two major approaches distinguished by one author [31]. The precautionary approach holds that the deployment of AI is not allowed if the practice leads to harm or social inequality, even when evidence of risk is absent; that is, the application of AI is strictly controlled whenever it could increase social inequities, despite the absence of evidence of risk. The second, contrasting approach is known as the permissionless approach: it argues that if there is no evidence of hazard, then technological development is allowed. Broadly, the European approach is more strictly precautionary than that of other countries, because it does not allow the deployment of a technology, even without evidence of harm, until its possible advantages and dangers have been researched in depth.
AI and Education—Possible Solutions
AI education requires improvements in its implementation, from basic-level to high-level knowledge and practical skills [32]. AI education must be designed and developed in a way that enables healthcare professionals to comprehend and work within the AI domain they will encounter in their clinical settings. Moreover, trainees should be given a platform in AI that enables them to contribute to health policy decisions associated with their field of practice [33]. AI will have a great impact on future healthcare practice; therefore, it is crucial to integrate the basics of AI tools, their applications, and their terminology into medical institutes' study programs. In particular, training sessions on the use of AI tools, within the ethical limits of AI systems, should be provided to present and future medical professionals so they can deliver valuable healthcare services [34].
The researchers suggested a stepwise method for providing AI education and its related applications in the healthcare sector to future health professionals, beginning in undergraduate programs and continuing onward into specialization [35]. According to the findings of [36], an ideal model of AI concepts can be categorized into three stages of medical education, with reference to Oxford Medicine: undergraduate, postgraduate, and specialization. In undergraduate medical courses, medical professionals should be introduced to AI terminology; basic knowledge of machine learning, deep learning, and data science; AI proficiencies; and the identification of AI applications in healthcare along with their suitable implementation. In the postgraduate phase, engagement in the validation and evaluation of models and the deployment of technologies should be emphasized, with deep focus on ethical considerations and governance policies. During specialization and continuing professional growth, AI educational training, ethical guidance, social dialog, and up-to-date AI knowledge and skills should be provided consistently [37].
Algorithms Development Concerns—Possible Solutions
Various AI algorithms have been used, and will be used in the future, for clinical interpretation; the question arises whether these algorithms have previously been approved for clinical use. AI-based algorithms designed for clinical interpretation require proper validation, whether hardware based or software based, because they are used by clinical experts for patient treatment and care, such as decision-making in diagnosis and related treatments; for that purpose, approval from regulatory authorities must be mandatory [38,39,40,41]. In clinical trials, it must be verified how accurately an established AI algorithm performs compared with clinical standards such as the sensitivity and specificity of diagnostic tests. However, it is not settled whether good performance of an AI algorithm is satisfactory when the solution is a "black-box" algorithm lacking transparency and logical explainability [40]. In addition, it is also not clear what suitable validation of a continuously learning solution implies. A critical point is that deep learning-based "black-box" algorithms lack transparency, so they cannot be rectified as easily as Bayesian models, which are constructed on a transparent structure [41,42,43].
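The clinical standards named above, sensitivity and specificity, reduce to simple counts over a confusion matrix. A minimal sketch with invented example counts (not from any real trial):

```python
# Sketch of the clinical-standard metrics: sensitivity (true-positive rate)
# and specificity (true-negative rate) of an algorithm's predictions against
# a ground-truth label. The ten patients below are illustrative only.

def sensitivity_specificity(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

# 1 = disease present / flagged by the algorithm, 0 = absent / not flagged
truth = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
preds = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]

sens, spec = sensitivity_specificity(truth, preds)
print(round(sens, 2), round(spec, 2))  # 0.75 0.83
```

A trial would report these with confidence intervals against a pre-registered benchmark; the point here is only that the regulatory comparison is well defined even when the algorithm itself is opaque.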
Various new solutions are capable of, and prepared for, continuous learning [44]. Under present regulations, however, an AI system in a clinical setting must be "frozen," so it cannot learn online and immediately use new knowledge. Instead, offline validation of the "frozen" model is required on an independent series of sample data (a set of patients). In each subsequent continuous-learning phase, the validation procedure must be repeated before the model's new version is deployed. Ideally, new clinically approved pathways should be established to shorten validation trails for digital applications in a patient-safe environment. It is expected that special new processes will enable regulatory acceptance of upgraded algorithms; in this connection, the Food and Drug Administration is actively developing a plan to cope with AI-based solutions [45]. Wherever possible, the use of current knowledge in causal and transparent model algorithms, such as Bayesian models, is intended to assist validation in clinical settings and regulatory acceptance, for both unimodal and multimodal data. It is therefore crucial to obtain regulatory approval and proper validation of algorithms [46, 47].
Appropriate Methods to Apply AI Algorithms in Clinical Systems
Various AI algorithms have been explored for the development of clinical applications [48]. Some algorithms have proved more beneficial, and some have failed in clinical settings, depending on the application type. Research has suggested appropriate algorithms for specific diagnoses: for examining pathology tissue-slide images, deep learning has been verified as a suitable method, while for multimodal problems, such as predicting clinical outcomes and evaluating patients, approaches incorporating domain knowledge are often preferred [49]. Probabilistic techniques such as Bayesian modeling have proved advantageous in dealing with complicated biological problems (e.g., omics data such as proteomics and metabolomics samples) and have also proved useful in diagnostics and drug development [50]. Where domain knowledge is lacking, on the other hand, domain-agnostic generative AI methods are suitable, and the combination of Bayesian reasoning with deep learning networks is considered well suited [51, 52]. It is therefore important to apply the appropriate AI algorithm for each specific clinical application.
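The transparency that makes Bayesian modeling attractive in clinical settings can be seen in its simplest form: updating a disease probability from a test result via Bayes' rule, where every term is auditable. The prevalence, sensitivity, and specificity below are invented for illustration:

```python
# Minimal illustration of transparent Bayesian reasoning: the posterior
# probability of disease given a positive test. All numbers are invented.

def posterior_given_positive(prior, sens, spec):
    """P(disease | positive test) by Bayes' theorem."""
    p_pos = sens * prior + (1 - spec) * (1 - prior)  # total P(positive)
    return sens * prior / p_pos

post = posterior_given_positive(prior=0.01, sens=0.9, spec=0.95)
print(round(post, 3))  # 0.154
```

Even a good test on a rare disease leaves substantial uncertainty (about 15% here), and unlike a black-box score, each factor in the calculation can be inspected and rectified, which is precisely the contrast with opaque deep learning models drawn above.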
Some Crucial Recommendations for AI Approaches in Clinical Systems
Van Hartskamp et al. recommended first identifying the relevant and precise clinical information. Data analytics devoid of domain knowledge can be applied in the medical domain, but it will yield clinically irrelevant results. Every new AI task must begin with explicit clinical questions and discussions with clinical professionals, and the results should then be reviewed in clinical and biological terms [53]. A suitable and accurate dataset is required to answer clinical questions. A dataset with ground truth must be adequately clean and authentic, and awareness of concealed variations that are not visible in the dataset is a must. The dataset must fit the query and represent the population under examination [54].
To obtain appropriate outcomes, it is useful in AI approaches to work with sufficiently large datasets, to reduce the number of variables where possible, and to use domain knowledge to avoid spurious correlations. The association between the given input variables and the expected output variable, as the dependent value, must be as causal and direct as possible, and the ground truth of the data must be related to the clinical question. Under these conditions, discovering new pathological features that sharply differentiate between two distinct pathological diagnoses can be efficacious [55].
From the clinical research perspective, AI, ML, and DL bring innovations for professionals in medicine as well as in related fields: in materials science, drug-delivery vehicles (cyclodextrins [56], Ag nanoparticles [57], nanogels, TMPS [58]) are simulated by creating algorithms [59] to explore their possible benefits. In addition, Miley et al. (2021) reported the current issues, prognosis, and possible solutions for health hazards, clinical testing, approval, and technological uptake by patients and physicians in the domain of smart ingestible electronics. They concluded that endoscopic therapies and diagnostics will become more reliant on AI, ML, and personalized treatments. Eventually, video capsule endoscopy might successfully supplement current surgical and radiologic procedures by enabling safe, high-quality outpatient treatments, fewer medical complications, and faster diagnostics at lower cost [60].
Conclusion
This review of AI in health systems concludes by highlighting several implementation issues with AI both within and outside the health sector. Data privacy, social issues, ethical issues, hacking, and developer issues were among the obstacles to implementing AI successfully in the medical sector. Based on our review, AI's presence today seems unavoidable. Significant technical developments have occurred since the dawn of the modern age, and technologies such as AI will expand swiftly and become a vital requirement throughout the globe. Although AI exists in the present world, it is still narrow AI, and currently weak: for the time being, this technology is employed to accomplish specific jobs by recognizing objects using sensors and then taking appropriate action based on preprogrammed rules.
The primary goal of today's scientists is to develop a complete universal AI with advanced and trustworthy algorithms. The specialized duties of such broad AI are likewise more sophisticated than those of current AI. It is important to see the adoption of AI systems in healthcare as a dynamic learning experience at all levels, calling for a more sophisticated systems-thinking approach in the health sector to overcome these issues.
Data Availability
All data are provided within the paper.
Abbreviations
- AI: Artificial intelligence
- ML: Machine learning
- DL: Deep learning
- HIV: Human immunodeficiency virus
- SARS: Severe acute respiratory syndrome
- NHS: National Health Service
- FDA: Food and Drug Administration
References
H.C.S. Chan, H. Shan, T. Dahoun, H. Vogel, S. Yuan, Advancing drug discovery via artificial intelligence. Trends Pharmacol. Sci. 40(8), 592–604 (2019)
O. Cruciger, T.A. Schildhauer, R.C. Meindl, M. Tegenthoff, P. Schwenkreis, M. Citak, M. Aach, Impact of locomotion training with a neurologic controlled hybrid assistive limb (HAL) exoskeleton on neuropathic pain and health related quality of life (HRQoL) in chronic SCI: a case study. Disabil. Rehabil. Assist. Technol. 11(6), 529–534 (2016)
Ó. Díaz, J.A.R. Dalton, J. Giraldo, Artificial intelligence: a novel approach for drug discovery. Trends Pharmacol. Sci. 40(8), 550–551 (2019)
J. Habermann, Psychological impacts of COVID-19 and preventive strategies: A review. (2021).
S. Harrer, P. Shah, B. Antony, J. Hu, Artificial intelligence for clinical trial design. Trends Pharmacol. Sci. 40(8), 577–591 (2019)
A. Holzinger, C. Biemann, C. S. Pattichis, D. B. Kell, What do we need to build explainable AI systems for the medical domain? (2017). arXiv:1712.09923.
P. Hummel, M. Braun, Just data? Solidarity and justice in data-driven medicine. Life Sci., Soc. Policy 16(1), 1–18 (2020)
U. Schmidt-Erfurth, H. Bogunovic, A. Sadeghipour et al., Machine learning to analyze the prognostic value of current imaging biomarkers in neovascular age-related macular degeneration. Opthamol. Retina 2, 24–30 (2018)
S.I. Lee, S. Celik, B.A. Logsdon et al., A machine learning approach to integrate big data for precision medicine in acute myeloid leukemia. Nat. Commun. 9, 42 (2018)
M. Sordo, Introduction to neural networks in healthcare. OpenClinical (2002)
S. Ji, Q. Gu, H. Weng, Q. Liu, P. Zhou, Q. He, R. Beyah, T. Wang, De-health: all your online health information are belong to us. arXiv preprint. (2019).
B. Lubarsky, Re-identification of “anonymized data.” UCLA L. Rev. 1701, 1754 (2010)
M.K. Baowaly, C.C. Lin, C.L. Liu, K.T. Chen, Synthesizing electronic health records using improved generative adversarial networks. J. Am. Med. Inform. Assoc. 26(3), 228–241 (2019)
S. Hamid, The opportunities and risks of artificial intelligence in medicine and healthcare. CUSPE Commun. (2016).
FDA. FDA permits marketing of artificial intelligence-based device to detect certain diabetes-related eye problems. (2018).
C. Bocchi, G. Olivi, Regulating artificial intelligence in the EU: top 10 issues for businesses to consider. (2021).
D.B. Neill, Using artificial intelligence to improve hospital inpatient care. IEEE Intell. Syst. 28, 92–95 (2013)
M. Fernandes, S.M. Vieira, F. Leite, C. Palos, S. Finkelstein, J.M.C. Sousa, Clinical decision support systems for triage in the emergency department using intelligent systems: a review. Artif. Intell. Med. 102, 101762 (2020)
F. Gama, D. Tyskbo, J. Nygren, J. Barlow, J. Reed, P. Svedberg, Implementation frameworks for artificial intelligence translation into health care practice: scoping review. J. Med. Internet Res. 24(1), e32215 (2022)
J. Wolff, J. Pauling, A. Keck, J. Baumbach, The economic impact of artificial intelligence in health care: systematic review. J. Med. Internet Res. 22(2), e16866 (2020)
J.E. Reed, C. Howe, C. Doyle, D. Bell, Simple rules for evidence translation in complex systems: a qualitative study. BMC Med. 16(1), 92 (2018)
H. Alami, P. Lehoux, J.-L. Denis, A. Motulsky, C. Petitgand, M. Savoldelli et al., Organizational readiness for artificial intelligence in health care: insights for decision-making and practice. J. Health Organ. Manag. 35(1), 106–114 (2021)
L. Denti, S. Hemlin, Leadership and innovation in organizations: a systematic review of factors that mediate or moderate the relationship. Int. J. Innov. Manag. 16(03), 1240007 (2012)
L.J. Damschroder, D.C. Aron, R.E. Keith, S.R. Kirsh, J.A. Alexander, J.C.J.I.S. Lowery, Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement. Sci. (2009). https://doi.org/10.1186/1748-5908-4-50
T. Davenport, R. Kalakota, The potential for artificial intelligence in healthcare. Future Healthc. J. 6(2), 94–98 (2019)
T. Hagendorff, The ethics of AI ethics: an evaluation of guidelines. Mind. Mach. 30, 99–120 (2020)
M. Anderson, S.L. Anderson, How should AI be developed, validated, and implemented in patient care? AMA J. Ethics 21, 125–130 (2019)
D. Schönberger, Artificial intelligence in healthcare: a critical analysis of the legal and ethical implications. Int. J. Law Inf. Technol. 27, 171–203 (2019)
C. Cath, Governing artificial intelligence: ethical, legal and technical opportunities and challenges. Philos. Trans. R. Soc. A 376, 20180080 (2018)
M.J. Rigby, Ethical dimensions of using artificial intelligence in health care. AMA J. Ethics 21, 121–124 (2019)
A.D. Thierer, A. Castillo O’Sullivan, R. Russell, Artificial intelligence and public policy. Mercatus Res. Pap. (2017). https://doi.org/10.2139/ssrn.3021135
D. Wiljer, Z. Hakim, Developing an artificial intelligence–enabled health care practice: rewiring health care professions for better care. J. Med. Imaging Radiat. Sci. 50, S8–S14 (2019)
S.K. Kang, C.I. Lee, P.V. Pandharipande, P.C. Sanelli, M.P. Recht, Residents’ introduction to comparative effectiveness research and big data analytics. J. Am. Coll. Radiol. 14, 534–536 (2017)
L.G. McCoy, S. Nagaraj, F. Morgado, V. Harish, S. Das, L.A. Celi, What do medical students actually need to know about artificial intelligence? NPJ Digit. Med. 3, 1–3 (2020)
K. Paranjape, M. Schinkel, R.N. Panday, J. Car, P. Nanayakkara, Introducing artificial intelligence training in medical education. JMIR Med. Educ. 5, e16048 (2019)
R. Charow, T. Jeyakumar, S. Younus, E. Dolatabadi, M. Salhia, D. Al-Mouaswas et al., Artificial intelligence education programs for health care professionals: scoping review. JMIR Med. Educ. 7, e31043 (2021)
M. Van Hartskamp, S. Consoli, W. Verhaegh, M. Petkovic, A. Van de Stolpe, Artificial intelligence in clinical health care applications. Interact. J. Med. Res. 8, e12100 (2019)
L.M. McShane, M.M. Cavenagh, T.G. Lively, D.A. Eberhard, W.L. Bigbee, P.M. Williams et al., Criteria for the use of omics-based predictors in clinical trials: explanation and elaboration. BMC Med. 11, 1–22 (2013)
J. He, S.L. Baxter, J. Xu, J. Xu, X. Zhou, K. Zhang, The practical implementation of artificial intelligence technologies in medicine. Nat. Med. 25, 30–36 (2019)
D. Sussillo, O. Barak, Opening the black box: low-dimensional dynamics in high-dimensional recurrent neural networks. Neural Comput. 25, 626–649 (2013)
L. Zhu, K. Ikeda, S. Pang, T. Ban, A. Sarrafzadeh, Merging weighted SVMs for parallel incremental learning. Neural Netw. 100, 25–38 (2018)
T.T. Lee, A.S. Kesselheim, US food and drug administration precertification pilot program for digital health software: weighing the benefits and risks. Ann. Intern. Med. 168, 730–732 (2018)
A. Esteva, A. Robicquet, B. Ramsundar, V. Kuleshov, M. DePristo, K. Chou et al., A guide to deep learning in healthcare. Nat. Med. 25, 24–29 (2019)
Z. Ghahramani, Probabilistic machine learning and artificial intelligence. Nature 521, 452–459 (2015)
Y. LeCun, Y. Bengio, G. Hinton, Deep learning. Nature 521(7553), 436–444 (2015)
F. Jiang, Y. Jiang, H. Zhi, Y. Dong, H. Li, S. Ma et al., Artificial intelligence in healthcare: past, present and future. Stroke Vasc. Neurol. (2017). https://doi.org/10.1136/svn-2017-000101
K. Zarringhalam, A. Enayetallah, P. Reddy, D. Ziemek, Robust clinical outcome prediction based on Bayesian analysis of transcriptional profiles and prior causal networks. Bioinformatics 30, i69–i77 (2014)
W. Verhaegh, H. van Ooijen, M.A. Inda, P. Hatzis, R. Versteeg, M. Smid et al., Selection of personalized patient therapy through the use of knowledge-based computational models that identify tumor-driving signal transduction pathways. Can. Res. 74, 2936–2945 (2014)
H. Van Ooijen, M. Hornsveld, C. Dam-de Veen, R. Velter, M. Dou, W. Verhaegh et al., Assessment of functional phosphatidylinositol 3-kinase pathway activity in cancer tissue using forkhead box-O target gene expression in a knowledge-based computational model. Am. J. pathol. 188, 1956–1972 (2018)
A. van de Stolpe, L. Holtzer, H. van Ooijen, M.A. de Inda, W. Verhaegh, Enabling precision medicine by unravelling disease pathophysiology: quantifying signal transduction pathway activity across cell and tissue types. Sci. Rep. 9, 1–15 (2019)
K. Zarringhalam, A. Enayetallah, A. Gutteridge, B. Sidders, D. Ziemek, Molecular causes of transcriptional response: a Bayesian prior knowledge approach. Bioinformatics 29, 3167–3173 (2013)
S.K. Gupta, Use of Bayesian statistics in drug development: advantages and challenges. Int. J. Appl. Basic Med. Res. 2, 3 (2012)
H. Wang, D.-Y. Yeung, Towards Bayesian deep learning: a framework and some existing methods. IEEE Trans. Knowl. Data Eng. 28, 3395–3408 (2016)
A. van de Stolpe, R.H. Kauffmann, Innovative human-specific investigational approaches to autoimmune disease. RSC Adv. 5, 18451–18463 (2015)
B. Khan, S. Kumar, N. Sanbhal et al., Synthesis and characterization of cyclodextrin-based scaffold incorporating ciprofloxacin antibacterial agent for skin infection prevention. Biomed. Mater. Devices (2022). https://doi.org/10.1007/s44174-022-00014-3
B. Khan et al., Synthesis of Mg/Al layered double hydroxide and silver nanoparticle based green nanocomposite for drug delivery applications. (2022)
B. Khan, S. Kumar, Implementation of triply periodic minimal surface (TPMS) structure in mesenchymal stem cell differentiation. Res. Sq. (2022). https://doi.org/10.21203/rs.3.rs-2156625/v1
E.O. Pyzer-Knapp, J.W. Pitera, P.W.J. Staar et al., Accelerating materials discovery using artificial intelligence, high performance computing and robotics. npj Comput. Mater. (2022). https://doi.org/10.1038/s41524-022-00765-z
D. Miley, L.B. Machado, C. Condo, A.E. Jergens, K.-J. Yoon, S. Pandey, Video capsule endoscopy and ingestible electronics: emerging trends in sensors, circuits, materials, telemetry, optics, and rapid reading software. Adv. Devices Instrum. 2021, 1–30 (2021). https://doi.org/10.3433/2021/9854040
Acknowledgements
The authors would like to thank Health @ InnoHK (Hong Kong Centre for Cerebro-Cardiovascular Health Engineering (COCHE)), Shatin, Hong Kong SAR, China, Mälardalen University, Sweden, and Mehran University of Engineering and Technology, Jamshoro, Pakistan, for providing a conducive work environment for the documentation of the data.
Funding
Not applicable.
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of interest
The authors have no conflict of interest.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Khan, B., Fatima, H., Qureshi, A. et al. Drawbacks of Artificial Intelligence and Their Potential Solutions in the Healthcare Sector. Biomedical Materials & Devices 1, 731–738 (2023). https://doi.org/10.1007/s44174-023-00063-2