1 Introduction

The field of human healthcare is frequently mentioned by politicians when arguing for the potential societal benefits of AI. Healthcare is both highly dependent on, and a significant producer of, big data, with examples like the analysis of radiography images, the identification of ‘at-risk’ patients requiring additional care, and the everyday management of hospital waiting lists often touted as reasons for the societal roll-out of AI-based applications as technology designed for the mass processing of such data [1]. Belief in the possibilities of AI for human health has been central, for example, in the UK government’s justification of its November 2023 deal with US firm Palantir to manage all national health data. This was despite controversy around one of Palantir’s co-founders, who had both criticized public support of the NHS as a form of ‘Stockholm syndrome’ and played a prominent role in bringing Donald Trump to power in 2016 [2]. In such policy developments, data management and acquisition by firms with access to significant computational resources is often seen as key to the vision for AI in healthcare. Yet, as in the NHS-Palantir deal, there is societal concern that AI’s use in healthcare risks undermining data privacy and the possibility of opting out from data sharing, such that public distrust could also prevent usage of the technology [3]. To help design policies that allow us to use AI appropriately in healthcare, the article provides a model for how to treat hype in AI healthcare – both positive and negative. As argued here, we need to bring the everyday experiences of those directly involved in healthcare – including healthcare professionals, patients, civil society, and others – into the development and critical review of AI. In so doing, we can better remember that AI is always embedded within societal power relations and that, consequently, whilst it is important to focus on building the best AI possible, we can never make it work as desired unless we consider the broader systems within which it operates and how they may need to be adapted by, for example, creating new forms of checks and balances to counter problems when the technology produces sub-optimal outcomes.

2 Hype as disempowering

We see significant AI hype across all sectors – from the boom in firms buying up internet domain names containing ‘AI’ and seeing their stock rise after renaming themselves accordingly, despite no other change [4], to the media’s excessive coverage of Sam Altman’s removal and subsequent return at OpenAI, and even a church dedicated to worshipping AI [5]. Fear that AI might pose an existential threat to humanity’s survival – either deciding, ‘Terminator’ style, to eradicate us, or being used by terrorists and rogue states to commit genocide on a mass scale – has only sometimes given way to more satirical voices asking, ‘Could AI bore us all to death?’ [6]. Kevin LaGrandeur warns that hyping AI is dangerous for two reasons: (a) it encourages firms to rush development and adoption for fear of otherwise being left behind; and (b) it removes nuance from societal debate over how best to use AI [7]. Of note is LaGrandeur’s warning that hype about the capabilities of AI across both higher- and lower-income professions is causing unnecessary levels of worker anxiety, leading to a fear of long-term job losses but also more immediate pressure to accept diminishing labour rights. Yuri Di Liberto [8] refers to hype as a form of ‘induced participation’, creating social cohesion through building a shared sense of time, one that has a systemic place within capitalism as ‘the cyclical frenzy that accompanies the introduction of new products and technologies.’ For the most part, narratives of hype present a welcoming and desirable vision. Yet, equally, such hype is ‘spiced up with plenty of anxiety and unease’, as Bichler and Nitzan observe, such that hype is never purely positive but is often combined with fear [9]. As induced participation, the hope and fear contained within the hype around new technologies sit within an inclusive vision of an imagined future.

Older technologies like Facebook and similar platforms provide both the allure of social connectivity and the guilt of privacy loss, such that it may be best to understand these otherwise seemingly opposite reactions to social media as mutually constitutive. That hype is sometimes negative does not prevent it from playing a role in building the sense of a common ‘we’ that supports the marketisation of new technology. Even though AI, like all technology, is likely to benefit some groups more than others, AI hype downplays those nuances and focuses instead on utopian dreams and existential threats as if they were common to all. Hype is effective as part of capitalism, in Di Liberto’s understanding, through the twin actions of social bonding (i.e. ‘we are in this together’) and distraction from the inequality and inequity that would otherwise call the marketisation of new technology into question and require social reform.

When in May 2023 a long list of CEOs and prominent figures linked to the development and commercialization of AI signed a statement arguing that ‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war’ [10], a subset of researchers were quick to counter-argue that the ‘existential risk’ narrative around AI was itself highly dangerous because it distracts society from looking at the more immediate problems with, for example, data surveillance and automated decision-making as used today, and their particularly negative impact on already marginalized parts of society [11]. Despite that attempt to bring nuance to the debate, when in late October 2023 US President Biden issued an Executive Order intended to regulate AI [12], he is said to have been influenced by watching the latest installment of the Mission Impossible film franchise, in which the hero’s nemesis is an all-powerful AI threatening humanity [13]. The US’ actions pre-empted the UK’s much-heralded ‘AI Summit’, which repeated the ‘AI as an existential threat’ narrative whilst offering very little in the way of regulatory solutions for immediate technological harms [14]. Shortly afterwards, the UK Minister for AI and Intellectual Property, Viscount Jonathan Camrose, declared that the UK would not regulate AI in the short term for fear that doing so would undermine innovation [15]. At the time of writing, the EU’s more socially ambitious AI Act is facing trouble at the trilogue stage, allegedly due to French and German technology firms that publicly claim AI needs regulating but are, nevertheless, opposed to public regulation of generative AI [16].

One of the greatest hurdles to the better design and deployment of AI is a common bias amongst AI engineers, but also politicians, towards technological determinism – that is, a belief in which technology is seen as both autonomous of, and a driver of, human choices [17]. Yet, as many scholars have argued, AI and similar information communication technologies need to be understood as ‘socio-technical systems’ that include not only computational mechanisms but also human behaviour and societal context [18]. Rather than being determined by technology, humans negotiate amongst a multitude of narratives framing its development and usage [19], of which the interaction within everyday life is perhaps the most important [20]. Given the political-economic role of hype in selling technology, the article expands the socio-technical understanding of AI by highlighting its political-economic aspects. For example, the coupling of negative hype with a lack of regulation of already identified harms, such as racial and gender bias in automated welfare assessments [21], seems illogical until understood from Bichler and Nitzan’s perspective, in which it matters less whether the narrative is positive or negative, and more that the hype serves to remove nuance and build a common sense of ‘we’ all being equally affected, such that it seems best to trust the elite actors – governments and firms – to act in the common good. The point is not that such actors are unable to help shape AI for societal good, but that hype undermines the much-needed space for applying democratic critique as a check and balance and, perhaps more worryingly, leaves non-elite actors uncertain and confused as to how they might engage with the technology, such that their potential role in its development is greatly reduced. Ironically, given that hype often uses the language of innovation, by effectively excluding many actors from engaging in its development the hype around AI threatens both its innovative potential and the capacity of society to support innovation in AI. To help explore AI hype in more detail, the article now turns to the example of AI hype as it has been used in healthcare and, in so doing, identifies three types of hype.

3 A brief overview of healthcare

Given the already-stated potential of AI to advance human healthcare, it is no surprise that there is significant hype within the field. It helps to see this in the context of a political-economy understanding in which AI is not a neutral technology but exists as part of a production chain structured by relations of power, including wealth and ownership, shaped by the respective roles of the state and market. Healthcare is a high-value industry, the organization of which varies greatly between countries in terms of state-market relations, made most apparent in whether medical services are provided on a tax-funded, free at the ‘point-of-use’ basis, through private insurance, or on a fee-paying basis. There is also variation in terms of how ‘healthcare’ is defined, with consequences for what services are available, for whom, and from whom. For example, healthcare can be seen as tied exclusively to the clinical context, beginning at the point of a general practitioner who acts as a gatekeeper to the wider system except in the case of emergencies. Or, healthcare can be broadened to include preventative healthcare, which seeks not only to treat health conditions but to stop them before they emerge. Whilst important, we know that access to healthcare accounts for only part of what determines optimal human health – with some studies suggesting that 90% of health outcomes can be explained via socio-economic factors [22]. Research and policies within the ‘social determinants of health’ tradition see human health at the centre of a wide nexus of variables including housing, economic and social protection, education, and many other factors not normally seen as part of the healthcare system itself [23]. In recent years there has also been growing research using the term ‘commercial determinants of health’ that explores the impact corporate behaviour has on human health, including labour rights and environmental protection [24]. Both the social and commercial determinants of health literatures underline the extent to which human healthcare touches upon a very wide range of often highly politicised issues in which relations of power and ownership are unavoidable, and in which the clinical environment plays only a small part.

Seeing that human health depends largely on factors outside the clinic gives particular importance both to how much healthcare is considered throughout society – from employment and housing to immigration – and to who provides healthcare when understood in such a holistic sense. Over the last decade there has been an exponential rise in consumer healthcare products, many of them digital, including apps that encourage fitness, wearable devices that offer self-monitoring of identified conditions, and other services sold on the promise that they can prevent serious ill health [25]. Exposure to digital healthcare increased during the pandemic, with patients recommended to download apps where lockdown restrictions prevented them from visiting their doctor in person [26]. Consequently, there is now much wider social and medical acceptance of such technology, with examples of better self-monitoring in conditions like chronic asthma illustrating ways in which people can be helped to manage their health outside the clinical context [27]. At the same time, healthcare systems globally often lack internal coherence, such that information is not properly shared between specialisms and patients feel disconnected from their own healthcare, with already-marginalised groups most at risk of harm [28]. Overall, as an advanced information communication technology, AI offers a new way by which to better connect the clinical context with the rest of society. Yet, as the impact of factors like housing and economic inequality on human health indicates, for AI to benefit human health it is necessary to see it within what are, ultimately, questions of political-economy. It is from that perspective that the article identifies three types of AI hype in healthcare, all of which in some respect need to be carefully managed to avoid being distracted from a more nuanced understanding of how to utilize AI towards improving human healthcare.

4 A typology of AI hype in healthcare

AI hype in healthcare is common and often well-resourced in terms of the consultancies, conferences, and lobbying services through which it is perpetuated. To be clear, to state that there is hype in the discussions around AI’s role in healthcare does not question its potential to improve human health, but rather calls out the phenomenon whereby rhetoric becomes polarized and erases the kind of nuanced debate needed to understand how any new technology relates to the complex political-social factors involved in human healthcare. How AI hype in healthcare functions depends on the professional identity of the individuals concerned, including the cultural context that shapes the values they attribute to healthcare (e.g. profit-maximisation, care-giving, etc.), and their existing experiences with new technology [29]. In general, though, existing studies speak of a mix of ‘utopian’ and ‘dystopian’ narratives [30]. Building upon that literature, the article identifies three core types of AI hype most frequently observed in healthcare. First, a utopian vision of AI as a neutral technology, the adoption of which will improve human healthcare. Second, a dystopian narrative in which our current healthcare systems – at all levels – are failing, such that any potential risks from AI are, by comparison, seen as minor and a necessary cost. Third, another dystopia in which doctors and other clinicians are side-lined by for-profit, corporate-controlled AI systems, such that the concept of universal health coverage becomes impossible. To be clear, describing these visions as ‘hype’ does not mean that they are without elements of truth, but doing so makes it possible to draw out the normative societal role they play in narrating what is possible in the development and deployment of technology [31].

4.1 Type A

The utopian vision of AI as a neutral remedy for human healthcare focuses exclusively on the imagined potential of the technology. Coupled with a future of unlimited computational power, AI is imagined to revolutionize healthcare by collecting data and utilizing it in new ways across society. Information processing is relevant throughout the healthcare system, with AI software and AI-enabled devices providing both better data and better analysis. Medical research is strengthened through using AI to model a patient’s genome and the effects of treatments, with models updated through much richer data that further empowers new discoveries. In diagnosis, it becomes easier to conduct highly detailed digital scans and identify abnormalities. For everyday clinicians working with patients, AI is promised to provide advance information (e.g. heart rate variation, BMI, etc.) but also helpful live suggestions for what questions to ask and how to interpret patient responses, such that the consultation becomes more efficient. Furthermore, AI is promised to provide faster responses and treatments for patients with simpler conditions by automating the entire process, so that clinician resources can be prioritized for complex cases and for more vulnerable patients unable to use digital tools (e.g. elderly populations). Criticisms are made regarding the technical feasibility of such AI technologies, but the power of this hype is that it rests on a general belief that any current limitations will be washed away by the oncoming wave of technological progress. The limits and dangers of this first type of AI hype in healthcare become apparent when asking who will own the data and the AI systems it is used to train. Like many other international organisations, the World Health Organization has largely shifted from initial concerns over data privacy towards an overwhelmingly positive view of AI’s potential to enhance healthcare systems [32]. Yet, although often overlooked, the WHO’s expert group on the ethics and governance of AI for health warned in 2021 that, if left unchecked, current intellectual property rules risked allowing AI firms to effectively limit access to healthcare. Their report recommended that:

Governments, research institutions and universities involved in the development of AI technologies should maintain an ownership interest in the outcomes so that the benefits are shared and are widely available and accessible, particularly to populations that contributed their data for AI development [33].

The WHO expert group’s guidance on the need to ensure public ownership of the data and algorithms in which public institutions are already invested runs counter to many policy announcements seen globally. Even in the Nordic countries, with their advanced welfare systems and public healthcare, there is a common view at the governmental level that, whilst clinicians must take responsibility for how AI outputs are used, it is the private sector that will exclusively supply the technology through which the future AI version of healthcare will be delivered [34]. The utopian hype distracts from an otherwise much-needed conversation over the respective roles of the state and market, as well as over how to imagine the role of the nation-state as a regulator. For example, it is likely that the importance of foundation models and computational capacity means that firms like OpenAI and its close partner Microsoft will dominate the future of healthcare. Both firms are frequently mentioned as drivers within the utopian narrative around AI healthcare, alongside Apple and Amazon. All of them are based in the United States. In addition, the political-economy of big data means that, for all but the most powerful nation-states, the cost of restricting the transnational flow of sensitive population health data will greatly increase, given the limitations such controls would place on access to AI healthcare technologies. The economic costs of data collection and processing lead to further critical questions, such as whose data will be used to train the AI algorithms used in healthcare, and to what extent access to the technology will be shaped by geopolitical considerations. Current trade conflicts between the US, China, and the EU over access to semiconductors and other material resources essential to AI underline the point: the utopian hype risks accepting major reform of healthcare, including a shift to the private sector and foreign states, without asking essential questions over how to ensure health equity.

4.2 Type B

The techno-utopian vision is frequently coupled with a dystopian narrative in which our healthcare systems are in crisis and at the point of collapse. This is a hugely powerful account since it matches many people’s personal experience as well as being reinforced within mainstream media. It is a view repeated both publicly and in private amongst civil servants, with the prime concern focused on how to tame rapidly rising healthcare costs and prepare for even higher costs as the older ‘baby boomer’ population comes to need intensive healthcare. For example, some public sector organisations are exploring automated decision-making systems in the hope of avoiding the need to employ more human staff to manage the growing number of care assessment decisions in care-of-the-elderly services. This form of dystopian hype featured prominently in the UK government’s justification of its contract with Palantir and, as with the more utopian form of hype, it serves to close the space for otherwise more critical questions. Even where it is far from clear that AI can save costs, the fear of system collapse instils sufficient desperation that it appears foolhardy to stand in the way of adopting the technology. It also distracts attention from asking about the political-economic causes behind current failings within our healthcare systems, which include the sell-off of key assets, new public management approaches that prevent a fairer distribution of resources across the system, and economic structures that work against interdisciplinarity and equity (e.g. intellectual property in pharmaceuticals). The dystopic account is also used to close otherwise very important questions about why, for example, healthcare workers are leaving the profession, as well as about the impact of migration controls that make it harder to employ new staff. The risk of AI hype here is that it obstructs the type of political will needed to bring about more effective change – focusing too much on which oar to use, rather than on the more fundamental question of how to stop the boat from sinking.

4.3 Type C

The third form of AI hype present in healthcare initially seems opposed to the first two but, as elaborated here, nevertheless plays a similar role – the narrative being that AI in healthcare is a US corporate takeover. As explained above, hype’s role in the capitalist marketisation of new technologies rests not in being positive, but rather in removing nuance. Without nuance, it is harder to imagine a viable alternative by which action is possible. In the case of healthcare, the space to imagine a different way of developing and adopting AI closes when debate is lost in hype. The risk of this dystopic hype is that critique of AI’s development in healthcare becomes narrowed down to a labour rights issue, combined with US-phobia and scepticism towards the private sector. In so doing, it leads people to overlook the role of professional practices that undermine the kind of holistic information processes AI is supposed to promise in healthcare, as well as the budgetary practices that prevent clinics from working more with preventative healthcare. It also oversimplifies state-market relations, such that concern with the private ownership of AI data risks being framed as a highly marginalised far-left view and so losing broader political credibility. State-market relations vary greatly between healthcare systems, with countries like Sweden heavily reliant upon quasi-private entities such as civil society organisations and small technology firms in the development of their AI sectors. There is a big difference in procurement, for example, between dealing with a large firm and its off-the-shelf AI that does not fit the use context, and the possibility of working with smaller firms closer to the communities affected by their product. The corporate-takeover dystopia also makes it harder to imagine non-corporate, non-profit forms of AI that might be developed within healthcare. Whilst seemingly a highly critical view of AI healthcare, the vision of it as a US corporate takeover potentially serves to make that vision a reality by removing the space for debate needed to communicate critical concerns within society.

5 Finding healthcare’s grey zones within the AI hype cocktail

The third type of hype is, in substance, contradictory to the first two, but the act of hyping means that in effect they are mutually supportive. Without offering credible solutions to healthcare’s problems, the dystopia of AI in healthcare as a US corporate takeover groups critical voices in one bucket and drains them of energy. In turn, the vision of near system-collapse within healthcare, coupled with the techno-utopia, closes otherwise much-needed space to debate how best to organise the political-economy of AI’s future role in healthcare. If we see AI as a socio-technical system in which the technology is never autonomous from, but rather the product of, human decisions, then we need to find ways to better see those human roots and the underlying struggle [35]. Given that the problem with hype is the loss of nuance, the solution must lie in rediscovering those nuances – the grey zones within the AI hype, as suggested also by Coeckelbergh [36]. Finding healthcare’s grey zones is relatively easy given that, as the social determinants of health show, simple and closed categories of what counts as health have little value. In the political-economy of healthcare there is often extensive reliance upon not-for-profit organisations, charities, and community organisations, as well as support from the for-profit sector. When carried over to AI’s potential role in healthcare, we need to find ways to maintain those grey zones between public and private, state and market. For example, this can include the not-for-profit development of AI health tools, providing new forms of welfare technology such that the future of AI in healthcare does not need to rest entirely on the private sector.

AI needs to be considered in its societal context, asking how it functions not only in an idealised scenario but within political-economic realities of inequality and inequity. Will, for example, AI serve to reinforce or reduce disparity and injustice in healthcare? This requires educating people in government at all levels to step back from the hype and, instead, speak of the technology as it would function within society. If the goal is to create a more holistic and just healthcare system, that future vision should not rest exclusively on AI but should also include policies that repair weaknesses within the non-technological parts of the healthcare system. We know, for example, that some of the biggest cost savings in healthcare are achieved not through purchasing new technology but via the closer involvement of diverse communities in their own healthcare [37]. Overlooking non-technological choices is a common tendency within AI hype overall, in part due to the way AI engineers are trained to solve technical problems (e.g. the reliable automation of resource allocation) without considering the broader societal values associated with the deployment of such systems [38]; a minimal illustration of this point follows below. Some scholars have also argued that techno-deterministic narratives serve to erase important political questions, often being used to promote neoliberal modes of governmentality that privilege individualistic notions of humanity and obscure structural causes of injustice [39]. If goals like universal health coverage and health equity are to be achieved, AI in healthcare needs to be seen in tandem with its societal context, which includes core values over the meaning of ‘health’ and ‘care’, but also the broader political-economy structuring human life.
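To make that point concrete, consider a deliberately minimal sketch (in Python, with entirely hypothetical patients and parameters) of the kind of resource-allocation task AI engineers are trained to solve. Minimising average waiting time by serving short consultations first is a textbook optimisation, yet it quietly encodes a value choice: patients with complex needs are pushed to the back of the queue unless an explicit equity rule is added.

```python
# Illustrative sketch only: a toy clinic scheduler showing how the choice
# of objective function encodes societal values. All data are hypothetical.
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    consult_minutes: int  # expected length of the consultation
    vulnerable: bool      # e.g. unable to use digital self-service tools

def schedule(patients, objective="efficiency"):
    if objective == "efficiency":
        # Shortest-job-first minimises mean waiting time overall,
        # but systematically delays patients with complex (long) needs.
        return sorted(patients, key=lambda p: p.consult_minutes)
    if objective == "equity":
        # The same sorting rule, applied after a prior normative choice:
        # vulnerable patients are served first, within their own tier.
        return sorted(patients, key=lambda p: (not p.vulnerable, p.consult_minutes))
    raise ValueError(f"unknown objective: {objective}")

patients = [
    Patient("A", consult_minutes=10, vulnerable=False),
    Patient("B", consult_minutes=45, vulnerable=True),
    Patient("C", consult_minutes=15, vulnerable=False),
]

for objective in ("efficiency", "equity"):
    print(objective, "->", [p.name for p in schedule(patients, objective)])
# efficiency -> ['A', 'C', 'B']  (the vulnerable patient waits longest)
# equity     -> ['B', 'A', 'C']  (a different value choice, same technology)
```

Neither ordering is technically ‘wrong’; the difference between them is a normative decision about whose time counts – precisely the kind of question that hype removes from view.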

To summarise how we might find healthcare’s grey zones within AI hype, a few more productive paths can be plotted, with relevance both to professional discussions in healthcare and to broader society. First, if the main danger of hype is a loss of nuance, the use of catch-all terms like ‘AI’ needs to be limited in favour of detailing, for example, what tasks normally done by humans are to be automated. That includes not only the end-user stage of the technology but also the automation of data collection and processing, requiring detailed discussion over the categories utilized (e.g. the definition of a healthy body, or of mortality rates), as sketched below. Second, as Aquino et al. show [40], different professions, but also non-clinical actors like patient associations, have diverse experiences of technology, requiring dialogue amongst diverse stakeholders if AI is to be developed and used appropriately in healthcare. Third, dystopic hype regarding ‘system collapse’ or ‘deskilling’ can be tempered by consulting the rich body of research that evidences the role of political-economic decisions in shaping healthcare systems, as well as the broader role played by the social determinants of health. Such research shows the potential of AI interventions that may help preventative healthcare, for example, but also highlights the impact of government policies on healthcare costs, such that it is far from clear that AI is the easiest way to make cost-savings.
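As an illustration of the first path, the following minimal sketch (in Python, with hypothetical thresholds and field names) shows how a contested category can be silently hard-coded during automated data processing. What reads as neutral ‘data collection’ already embeds a definition of a healthy body that, in the terms used above, merits explicit discussion.

```python
# Hypothetical sketch: category definitions embedded in data processing.
# The thresholds below are illustrative, not clinical guidance.

def flag_healthy_weight(bmi: float) -> bool:
    """Label a patient record during ingestion. The cut-offs encode a
    contested definition of a 'healthy body' (BMI bands are known to fit
    some populations poorly), yet downstream AI models will inherit them
    as if they were neutral facts."""
    return 18.5 <= bmi < 25.0

records = [{"id": "p1", "bmi": 24.9}, {"id": "p2", "bmi": 25.1}]
for record in records:
    record["healthy_weight"] = flag_healthy_weight(record["bmi"])
print(records)  # p1 and p2 differ by 0.2 BMI yet receive opposite labels
```

The point is not the code itself but that ‘automating data collection’ is shorthand for a chain of such definitional choices, each of which deserves the nuanced debate that hype erases.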

6 Conclusion

The article builds upon existing perspectives on AI hype in healthcare by exploring hype’s political-economic role, enabling a better understanding of how negative and positive hype are equally problematic in eroding much-needed nuance in healthcare debates. Utopian visions of AI as a ‘magic cure’ for healthcare, combined with a dystopic vision of healthcare systems as on the ‘brink of collapse’, create a strong narrative for its rapid deployment. Yet, from a political-economy perspective, dystopic fears of AI as risking a de-skilling and privatisation of healthcare by US corporations, as in the UK Palantir case, may play a similar role. It is not that such fears lack a basis in fact, but rather that whenever AI is treated as having agency (i.e. as taking over from doctors) or as external (i.e. as foreign to the host society), it distracts attention from the societal and political-economic structures underlying the problems we claim AI can solve, as well as from the normative values contained within its design. If left unchecked, hype then serves not so much to promote the use of AI as to close important political questions over how present – not only future – societies are run.

As suggested paths ahead, the article points to: (a) avoidance of catch-all terms like ‘AI’ in favour of detailing, for example, what tasks are to be automated; (b) fostering diverse dialogues to guide AI development and deployment in healthcare; and (c) situating AI healthcare narratives within the rich body of existing public health research, informing how we understand the meaning of ‘health’ as well as the role of political-economic choices.

Those who hype AI’s role in healthcare, whether as a utopia or not, do so for a wide variety of reasons, many of which are morally good in the sense of aiming to improve healthcare. The danger of hype, though, is that it closes the debate needed if AI really is to benefit healthcare. The technology has potential, but that potential can only be realised if we maintain space for nuance and use our aspirations for AI to rethink our broader aspirations for human healthcare.