Introduction

Artificial intelligence (AI) has seen increased public attention and industrial application in recent years [1,2,3,4,5,6,7]. Its ability to analyse complex data makes it particularly well suited to automated interpretation in diagnostic imaging. Several collaborative programmes between medical institutions and data companies exist to establish reliable AI-based diagnostic algorithms [8,9,10].

Reception of such programmes has been mixed. While most believe that AI enhances the accuracy, efficiency, and accessibility of medical imaging, the role of the radiologist in this future remains uncertain [11, 12]. Physicians' attitudes vary, ranging from envisioning an AI-dominated practice to optimism that AI will broaden their scope of practice [13, 14].

AI refers to systems designed to execute tasks that traditionally require a human agent [15, 16]. Machine learning (ML) refers to the computer algorithms used in AI that are capable of learning automatically and extrapolating from data [17,18,19]. ML can incorporate different learning algorithms, of which artificial neural networks (ANN) and deep learning (DL) are the most well known. ANN are collections of artificial neurons that analyse inputs and assign suitable weights to predict an outcome [18,19,20,21]. DL uses multi-level ANN for nonlinear data processing [22]. Experts predict that the coming decades will see DL become mainstream in medical imaging [23, 24]. Although these definitions are understood by AI researchers, how clinicians and patients understand these terms and their implications remains unclear.
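For readers unfamiliar with these terms, the minimal sketch below (illustrative only and not drawn from the included publications; the weights, bias values, and inputs are arbitrary) shows what the cited definitions describe: an artificial neuron applying weights to its inputs, and a small multi-layer network of the kind DL stacks to achieve nonlinear data processing.

```python
import numpy as np

def neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of inputs passed through
    a nonlinear activation (here a sigmoid) to produce a prediction."""
    return 1.0 / (1.0 + np.exp(-(np.dot(weights, inputs) + bias)))

def tiny_network(x):
    """A minimal two-layer network; deep learning stacks many such layers
    so the model can capture nonlinear relationships in the data."""
    hidden = np.array([
        neuron(x, weights=np.array([0.8, -0.4]), bias=0.1),
        neuron(x, weights=np.array([-0.2, 0.9]), bias=-0.3),
    ])
    return neuron(hidden, weights=np.array([1.2, -0.7]), bias=0.0)

# Example input: two normalised image-derived features (values are illustrative).
print(tiny_network(np.array([0.5, 0.2])))
```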

A growing body of publications addresses attitudes toward the impact of AI in diagnostic imaging, but a comprehensive review of stakeholders' perspectives has yet to be performed. We therefore undertook a scoping review to systematically search the literature, identify relevant stakeholders, and categorise their views on the use of AI in radiology.

Methods

Study design

The protocol was registered (DOI: 10.17605/OSF.IO/AXDPE). Based on the methodology outlined by Arksey and O’Malley (2005) and Levac et al (2010), this scoping review consisted of six stages [25, 26].

Stage 1: Formulating the research question

Following a preliminary exploration of published literature, the following research questions were identified:

  • To what extent is AI expected to influence radiology practice?

  • What are stakeholders’ views on the use of AI in radiology?

  • What challenges and advantages arise with AI use in radiology?

Stage 2: Identifying relevant studies

Database selection

Publications were identified from EMBASE, PubMed/MEDLINE, Web of Science, Cochrane Library, and Cumulative Index to Nursing and Allied Health Literature (CINAHL). Grey literature was searched using the Canadian Agency for Drugs and Technologies in Health (CADTH) Grey Literature Checklist, OpenGrey, and Google Scholar [27].

Search strategy

The search strategy was drafted in consultation with a research librarian and covered the period from 1960, when “artificial intelligence” first appeared in the literature, to November 2019 (Supplement S1).

Eligibility criteria

Published studies and grey literature of any design, language, region, and timeframe including commentaries, abstracts, and reviews were eligible.

Stage 3: Study selection

Two levels of screening were conducted: (1) title and abstract review, and (2) full-text review. For level one screening, two reviewers (C.E., L.Y.) independently screened titles and abstracts for full-text review based on the inclusion criteria. In level two screening, full-text studies underwent independent review, including reference list searches. Relevant studies were included if perspectives on AI use in radiology were described. The PRISMA flow diagram tracking progress is shown in Figure 1 [28].

Fig. 1

PRISMA flow diagram

Stage 4: Charting the data

Data from all included articles were extracted independently using a standardised form. The charted characteristics included bibliographical information (i.e. authors, titles, dates, and journals), objective, study type, participant demographics, AI definitions, and attitudes toward AI.

Stages 5 and 6: Collection, summary, and consultation

The PRISMA-ScR guided the collection, interpretation, and communication of results [28]. Following the recommendations by Levac et al (2010), thematic analysis consisted of (1) analysing the data, (2) reporting results, and (3) applying meaning to the results [25]. A spreadsheet was generated with article characteristics and conclusions. Extracted text was grouped by stakeholder, issues discussed, and expressed views. Themes were subsequently identified. Following the initial groupings by C.E. and L.Y., methodological experts (R.A., P.S.) and radiologists (D.K., N.S.) reviewed the data to provide additional input and confirm interpretations. Several iterations were undertaken to ensure accuracy and consistency.

Results

Characteristics of all included publications

Sixty-two publications were included from the 3282 screened (Figure 1). These represented radiologists (n = 52, 7 surveys), medical students (n = 7, 4 surveys), the general public (n = 4, 1 survey), patients (n = 3, 2 surveys), computer scientists (n = 3, 1 survey), and surgeons (n = 1, 1 survey); many studies assessed more than one stakeholder group. Table 1 shows characteristics for surveys and Table S3 (online supplement) shows characteristics of non-surveys. Most publications were commentaries and editorials (n = 39) and a minority (n = 13) were surveys. The majority of publications (n = 50) represented North American and European views (Figure 2). The majority (81%) of eligible publications were dated after 2018, indicating a surge of interest in AI (Figure 4; Table 2).

Table 1 General characteristics of survey studies (n = 13) grouped by stakeholders
Fig. 2

A Study designs. (a) Includes perspective articles, editorials, commentaries, statements, letters to the editor, and essays; (b) includes orations, presentations, lectures, forums, symposiums, and conference summaries; (c) includes surveys, questionnaires, and interviews. B Geographic distribution

Table 2 Included themes in survey studies (n = 13)

Definition of AI in all included publications

A working definition of AI was found in 52% (n = 33) of studies. These definitions fell into one or more of three broad categories, defining AI: (1) through its sub-concepts ML, DL, and ANN [14,15,16, 29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51]; (2) through a technical description of its mechanisms [14,15,16, 33, 35, 37, 38, 40,41,42,43,44,45,46,47, 49,50,51,52,53,54,55,56,57,58]; and (3) through its current and future applications [30,31,32, 34, 39, 59]. Figure 3 is a weighted word map showing the dominant vocabulary used. A greater percentage of commentaries, narrative reviews, and social media studies explicitly defined AI compared with surveys and presentation abstracts.

Fig. 3

Word cloud of the most frequently used words in AI definitions, where size and colour represent frequency of use

Generally, AI was understood as the use of pattern-identifying computational algorithms extrapolating from training sets to make predictions, i.e. machines performing problem-solving tasks typically delegated to humans [14, 24, 33, 35, 37, 38, 40, 42, 49, 50, 52, 54,55,56, 58]. Commonly cited applications in radiology included quantitative feature extraction, computer-aided classification and detection, image reconstruction, segmentation, and natural language processing [30, 32, 34, 52, 59].
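As an illustration of this general understanding (a minimal sketch rather than an implementation from any included study; the synthetic “feature” data and choice of model are assumptions made purely for demonstration), a pattern-identifying algorithm is fitted on a labelled training set and then extrapolates to previously unseen cases:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for quantitative image-derived features with binary
# labels (e.g. finding present/absent); purely illustrative.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "training set": the model identifies patterns linking features to labels...
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# ...and extrapolates to unseen cases to make predictions.
print("Held-out accuracy:", model.score(X_test, y_test))
```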

Analysis of themes expressing views on AI in radiology

Following content analysis, seven themes were identified: (1) predicted impact of AI on radiology, (2) potential for radiologist replacement, (3) trust in AI, (4) knowledge of AI, (5) education on AI, (6) economic considerations, and (7) medicolegal implications. Results were consistent across publications (online Tables S4 and S5) [81,82,83,84,85,86,87,88,89,90]. As the included surveys represent larger sample sizes, we have summarised them separately (Tables 3 and 4) and prioritise them in our results; findings from non-survey studies are explicitly identified. Themes captured in surveys are shown in Table 2.

Table 3 Survey studies’ results part 1
Table 4 Survey studies’ results part 2

(1) Predicted impact of AI on radiology

While this theme did not include surgeons’ views, all remaining stakeholders—radiologists, medical students, patients, computer scientists, and the general public—were generally optimistic. Radiologists and residents consistently expressed that AI will have a significant, positive impact in daily practice [29, 59,60,61]. Most believe that practical changes will occur in the next 10–20 years [30, 60, 61]. The majority would still choose this specialty if revisiting the choice [29, 30, 59, 62], citing interests in advancing technology [29, 62]. Senior radiologists have greater confidence in the future of the specialty than trainees [59]. Radiologists emphasise that avoiding AI is not feasible, and most medical students agree that radiologists should embrace AI and work with the industry; this is reflected in both surveys [31, 72] and commentaries [39, 63]. Computer scientists had a higher estimate of AI’s impact on radiology than radiologists, with half predicting dramatic changes in the next 5–10 years [30]. Medical students believe that the impact of AI will be largely positive [31, 72]. Positions with procedural training are seen as having greater job security compared to diagnostic radiology, as AI is expected to automate image interpretation [62, 73]. Patients believe that AI will positively affect efficiency [58], and no negative impacts were anticipated by the general public in a search of Twitter opinions [54].

(2) Potential for radiologist replacement and (3) Trust

None of the stakeholder groups foresees total replacement of radiologists, and none trusts AI to make decisions independently. Radiologists and medical students expect AI to act as a “co-pilot” [30,31,32, 60]. Surgeons also expressed skepticism that AI can make clinical decisions alone and are ambivalent about whether diagnostic radiology is endangered [62]. Radiologists do not expect their diagnostic roles to be replaced, given AI’s lack of general intelligence and human traits [29, 30, 59, 62]; they do expect that clinicians who embrace AI will replace those who do not. They anticipate that AI will shift their focus from repetitive tasks to activities involving research, teaching, and patient interaction [32]. Some radiologists noted that the use of AI opens the possibility for other specialties to assume radiological tasks, and they anticipate a fall in job demand [32, 60, 62]. Medical students anticipate a similar reduction in the number of radiologists needed, but most believe that such “turf losses” are unlikely [31]. Medical students also expressed both worry about replacement and excitement about the use of AI in radiology [62, 72, 73]. Although uncertain about changes to workload, most radiologists are optimistic about job satisfaction and salary, a sentiment echoed by computer scientists [30, 32, 60]. Computer scientists did not predict replacement of radiologists in the next 5 years, and few predicted obsolescence in the next 10–20 years [30]. Notably, views presented at an international symposium indicated that computer scientists who work in medical imaging are more skeptical about AI replacing radiologists [75].

Similarly, patients expressed a lack of trust in machine diagnoses, a preference for personal interactions, and the expectation that AI cannot provide emotional support [72]. Given equal ability, patients prefer human physicians [72]. However, if computers can perform better and more holistic assessments to predict future diseases, patients prefer AI [72]. An editorial suggests that the public is generally uncomfortable with technology that lacks a human in command [68]. The majority of those surveyed felt that technology could not entirely replace radiologists [76], a view consistent with perspectives expressed in commentary [54].

(4) Knowledge of AI and (5) Education

Radiologists, medical students, and patients in surveys expressed a lack of knowledge on AI. Although most radiologists are aware of the prominence of AI in radiology, they report limited knowledge and training [29, 30, 32, 59]. They expressed interest in ongoing research and felt that AI should be taught in medical training [29, 30, 59, 60]. Radiologists pointed to their role in AI development for medical imaging, especially in task definition, providing labelled images, and developing applications [32]. Residents are especially enthusiastic to learn about technological advancement [29, 32]. Education can increase interest, as tech-savvy respondents are more likely to find AI and ML exciting for radiology [29].

Medical students surveyed overestimated their competency in AI [31]. Only half were aware that AI is a major topic in radiology, and a third had basic knowledge of it [72]. Most agreed that there is a need for training in AI during medical school [31, 72]. A higher year of study, exposure to AI, and obtaining knowledge from the literature and from radiologists were associated with less pessimism about the specialty’s prospects [31].

Although patients believe AI to be a useful checking tool, they report having limited knowledge and express uncertainty about how AI will affect workflow [74].

As expected, computer scientists have the most knowledge of and exposure to AI [30]. Although this survey did not discuss education [30], one commentary emphasises collaboration between radiologists and AI experts to ensure the clinical relevance of AI technologies [68].

There were no surgeon or public views on knowledge or education in the included articles.

(6) Economic considerations

While none of the surveys evaluated radiologists’ views on economic implications, this theme was explored in commentaries with mixed opinions [35, 57, 64,65,66]. Some radiologists cited an increase in costs associated with computer-aided detection systems without a corresponding increase in productivity [57]. Conversely, others suggested that AI may reduce burnout and cost while increasing the quality of care [35, 64]. Another urged that although AI may be more cost-effective, progress must be driven by patient impact rather than financial considerations [55]. One commentary anticipates that hospitals, especially in publicly funded systems, may hesitate to invest in technology that lacks rigorous testing, and may lack the network infrastructure to run these programmes [50].

Surgeons’, medical students’, patients’, and computer scientists’ views on economic considerations were not addressed in any of the included publications.

(7) Medicolegal implications

Both surveys [32, 62] and commentaries indicated that regulation, accountability, and ethical issues present barriers to AI implementation [14,15,16, 38, 40, 41, 44, 46, 47, 50, 55, 57, 65, 67,68,69,70,71]. Most radiologists surveyed believed they would assume responsibility for medical errors made by AI [32, 60]. Included non-survey articles echo similar sentiments [37, 44, 64, 70, 71]. Commentaries emphasised that time is needed to set up regulatory bodies [36, 40]. They suggest that radiologists should help develop assessment processes for AI tools based on evidence, and advocate for patients’ consent, privacy, and data security [14,15,16, 41, 46, 55, 66].

Patients surveyed suggest that it is difficult to address computer errors and assign accountability [58, 74]. Similarly, a social media analysis indicated that legal and regulatory concerns present a challenge to AI implementation, although these issues are not frequently discussed [54].

Surgeons’, medical students’, and computer scientists’ views on medicolegal implications were not addressed in the included publications.

Discussion

This scoping review is a first step in summarising views on AI in medical imaging. Seven themes were identified from the included articles, representing the views of six stakeholder groups, among which radiologists’ views predominated. Only half of the articles provided a definition of AI, and these definitions were inconsistent, which may complicate future comparisons and syntheses. Overall, stakeholders do not trust AI to make independent diagnoses and do not believe that radiologists can be completely replaced. The general public and patients dislike AI because it lacks the “human touch”; however, patients would accept its use if it could provide more insight than a human clinician.

Instead of replacement, stakeholders expect AI to function as a “co-pilot” in reducing error and repetitive tasks. Nevertheless, a decrease in the demand for diagnostic radiologists is anticipated. Radiologists’ responsibilities are expected to shift from image interpretation to patient communication, policy development, and innovation. This is an important consideration for medical students when making residency choices. Radiologists, medical students, and patients indicated a need for education in the clinical use of AI.

There is opportunity for interdisciplinary collaboration between radiologists and AI experts to design technologies that advance the Quadruple Aim: patient outcome, cost-effectiveness, patient experience, and provider experience. The medical community must also work with legislative bodies to ensure that changes are driven by patient outcomes rather than economic considerations [55].

Economic considerations and medicolegal implications were not well addressed. No surveys consulted stakeholders on economic considerations despite coverage in commentaries, indicating a need to clarify the financial implications of this emerging technology. A recent systematic review similarly found a need for economic analyses of AI implementation in healthcare [77]. The values and resources of health systems may constitute an additional consideration. Although the general public and patients do not know how to address potential errors made by computer systems, radiologists believe that they should be “in-the-loop” in terms of responsibility; ethical accountability strategies must be developed across governance levels. In comparison with the existing literature, several commentaries discussed ways to restrict data transmission and protect patient privacy, and suggested review boards to prevent information compromise [64, 78, 79]. In addition to ethical and medicolegal barriers, adoption may be slow if radiology does not identify a need for automatic image interpretation. The European Society of Radiology and the Canadian Association of Radiologists have issued ethics statements on the subject, indicating the beginning of much-needed higher-level regulation [15, 80]. These findings must be accounted for in large-scale decision-making.

Biases and limitations

AI is under investigation in multiple areas of radiology, such as pre- and post-imaging workflow. Since this review focused on AI use in image interpretation, radiologists’ perception of other applications fell outside our scope. Given the methodology of scoping reviews, risk of bias assessment was not performed, and would have been applicable only to the survey studies. Most of the publications represent Western radiologists’ perspectives (Figure 2). This indicates a gap in knowledge from non-Western countries and other stakeholders (e.g. government, insurance providers, radiation technicians, radiographers, or radiation technologists) [91, 92]. Due to an exponential rise in scientific publications on AI (Figure 4), there are undoubtedly new publications on views of stakeholders considered in this study as well as others that were not represented. As AI may increase accessibility of radiological diagnoses in low- and middle-income countries, it is important to include global perspectives in future study and adapt such technologies to different international healthcare contexts. There were limited publications capturing the views of surgeons (n = 1), computer scientists (n = 3), patients (n = 3), and the general public (n = 4). Radiologists’ views were more frequently published or evaluated in formal surveys, likely because of their proximity to these emerging technologies. However, it is important to incorporate other stakeholders’ views into the design of such systems. As patient information will be used in the development of AI technology and patient care will undergo significant changes, patient perspectives need to be prioritized.

Fig. 4

Temporal distribution of included publications

Future directions

AI is innovative and highly applicable to radiology; barriers to entry and drivers of adoption must be considered. This scoping review can encourage a comprehensive plan in adapting current training and practice. There is a need for stakeholders to incorporate the growing body of evidence around AI in radiology in order to guide development, education, regulation, and deployment. Given that AI is currently an intervention of great interest in health contexts, it is beneficial to regularly update reviews capturing perspectives. More formal qualitative studies can further explore elements that facilitate or prevent AI implementation. The present scoping review serves as a first step toward future research and synthesis of such information.

Conclusion

The views of radiologists, medical students, patients, the general public, and computer scientists suggest that replacement of radiologists by AI is considered unlikely; most acknowledge its potential and remain optimistic. Stakeholders identified a need for education and training on AI, and specific efforts are needed to improve its practical integration. Further research is needed to capture perspectives from non-Western countries and from non-radiologist stakeholders, and to address economic considerations and medicolegal implications.