1 Introduction

Digital ecosystems in which people spend large portions of their time may be sapping one of our most important but under-valued abilities––sustained, calm attention to a single task or idea over lengthy periods. Twenty-one percent of US consumers report checking their phones more than 50 times a day (Deloitte 2017, 9), 39% check their phones within five minutes of waking up (Deloitte 2017, 8), and 39% of US millennials note that they interact with their smartphones more than anything or anyone else on a given day (Bank of America 2016, 2). And while 60% of surveyed American consumers say they try to limit smartphone use, only 32% report succeeding (Deloitte 2018, 3). Frequent interruptions through sounds, vibrations, notifications, and intrusive thoughts can make it harder to pay “[f]ocused attention,” and distractions can carry high cognitive costs associated with task switching (Wilmer et al. 2017, 4–5). In fact, the mere presence of a mobile phone––even if one does not engage with it––has been found to reduce cognitive capacity and impair cognitive functioning on demanding tasks (Ward et al. 2017; Thornton et al. 2014). Multitasking, meanwhile, is routinely linked to worse academic performance (Chen and Yan 2016; Carrier et al. 2015; Wilmer et al. 2017, 10–11), and some (but not all) studies report that chronically heavy multitaskers exhibit higher task-switching costs and are worse at filtering out irrelevant information (Ophir et al. 2009; but see, e.g., Minear et al. 2013). Granted, some caution is in order because this area of research is relatively new. Still, the existing evidence on the costs of digital distraction is telling and concerning.

How should we theorize the problem of reduced attention spans and digital dependence in light of today’s technological, cultural, and economic realities? Are there grounds for alarm, reasons for optimism, or both? What is to be done, and by whom? Are recent steps taken by technology firms to integrate “ethical design” into their products and services––including easier ways to track use times and temporarily block content and apps––sufficient (e.g., Apple 2018)? Are new laws, regulations, and government policies needed? If so, on what basis might they be justified, what should they look like, and how can they be made a reality?

Securing attentional integrity against a torrent of digital distractions is, in my view, as pressing a challenge as tackling data privacy, algorithmic discrimination, the regulation of smart devices, and other problems posed by new digital technologies. A comprehensive research program in philosophy centered on the value of attention in the digital age has yet to emerge. Here, I will take some steps in this direction by discussing a number of salient topics and their interconnections. Some of these themes are addressed by James Williams in Stand Out of Our Light: Freedom and Resistance in the Attention Economy (2018), which describes the problem of attentional decline, explores its implications, and advances a creative set of solutions. Williams argues that the crisis of shrinking attention spans threatens autonomy, freedom, and the prerequisites of political discourse. While I agree with Williams’s thoughtful account in many respects, I will part ways with him on a number of points.

2 Diagnosing the Problem

An initial challenge is describing the problem as precisely as possible both in conceptual terms and in a broader historical context.

To begin with, let us distinguish three phenomena:

Prolonged Immersion

Individuals spend excessive periods of time (however defined) using digital devices.

Frequent Distraction

Individuals engage with digital content in short bursts of attention because of habituated susceptibility to distractions.

Consumption of Divisive Content

Individuals consume copious digital content characterized by outrage, vitriol, and hyperbole.

These phenomena are distinct though related. For example, a person immersed in a videogame for hour-long stretches is not distracted in an ordinary sense. Indeed, she might admirably sustain attention on a single task for lengthy periods (see Wilmer et al. 2017, 6–7). At the same time, given the fast-paced, even frenetic, nature of some videogames, extended play might accustom gamers to a distracted mindset that, in turn, manifests in other areas of life. Next, a person who is frequently distracted in the ordinary sense might only periodically see inflammatory content online. By the same token, a user who spends much of her online time consuming ideological vitriol may only be an occasional digital user and need not have a short attention span. Yet outrageous and extreme online content might pique users’ interests and, by manipulating their deep-felt emotions, make them more susceptible to new distractions. In short, while the three phenomena identified above interact in interesting ways that merit further reflection, we should not lose sight of their differences.

Turning to distraction proper, we can briefly outline the background social and technological conditions that have engendered today’s crisis of attention, some of which James Williams identifies.Footnote 1 First, most obviously, ownership and use of digital devices are at unprecedented levels. Ninety-four percent of Americans own mobile phones (Pew Research Center 2019, 3), and American teenagers spend an average of 7 hours and 22 minutes a day on entertainment screen media, excluding school-related uses (Common Sense Media 2019, 3).Footnote 2 Second, many of today’s widely used digital devices are not task-limited tools but all-purpose machines for work, leisure, socializing, and family life.Footnote 3 This trait contrasts with the entertainment-centered purposes of many twentieth-century technologies such as radio, television, movie theaters, portable music players, and game consoles. Third, ubiquitous design features such as pull-to-refresh and infinite scrolling function like slot machines, using variable reward mechanisms to incentivize us to check repeatedly whether we have “scored” messages and notifications (Morgans 2017; Williams 2018, 34–35). We thus face an asymmetric matchup between our fallible mechanisms of self-restraint and armies of engineers, programmers, designers, and executives working to extract ever-smaller “slivers” of our focus in a highly competitive attention economy (Wu 2016, 268; Williams 2018, xi–xii, 33). Fourth, Internet connectivity opens the door to a self-reinforcing feedback loop in which troves of individualized data are gathered, analyzed, and deployed to curate ads and experiences that distract us in increasingly personalized ways.
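To make the slot-machine analogy concrete, consider a minimal simulation––my own illustration, not drawn from Williams or the empirical sources cited above––of a variable-ratio reward schedule, in which each check of one’s phone “pays off” with some fixed probability, so that rewards arrive unpredictably. The simulate_checks helper and the particular numbers are hypothetical, chosen only to illustrate the mechanism.

import random

def simulate_checks(num_checks: int, reward_probability: float, seed: int = 0) -> int:
    """Count how many phone checks are 'rewarded' (a new message or
    notification) under a simple variable-ratio schedule, where each
    check pays off unpredictably, like a slot-machine pull."""
    rng = random.Random(seed)
    return sum(1 for _ in range(num_checks) if rng.random() < reward_probability)

if __name__ == "__main__":
    # Illustrative numbers only: 50 checks a day, 30% chance of "scoring" per check.
    print(simulate_checks(num_checks=50, reward_probability=0.3))

Because the payoff of any given check cannot be predicted, intermittent reinforcement of this kind tends to sustain frequent checking more effectively than a fixed, predictable schedule would.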

In addition to these characteristics, there is a related feature that goes hand in hand with the all-purpose nature of digital devices and sets today’s context apart from previous eras. I will call it the Indispensability thesis. A growing array of white- and blue-collar jobs requires use of digital devices, and the same can go for managing one’s personal life. As the US Supreme Court has noted, mobile phones today are “almost a ‘feature of human anatomy,’” and “the services they provide are ‘such a pervasive and insistent part of daily life’ that carrying one is indispensable to participation in modern society” (Carpenter v. United States 2018, quoting Riley v. California 2014). Many users, in other words, often lack genuine alternatives to being plugged into semi-addictive devices and platforms. This fact of relative unavoidability is a defining––and historically distinctive––aspect of the problem of digital distraction. The Indispensability thesis bears centrally on the justifiability of state action and the allocation of responsibility for declining attention spans.

3 “Freedom of Attention”

When we theorize a new social problem and encourage calls for change, a fresh vocabulary can be helpful. How might we translate the stakes in the battle to preserve attentional integrity into language that resonates in moral and political terms?

One possibility, echoing the Anglo-American constitutional tradition, is to speak of the “freedom of attention,” as James Williams proposes (2018, 46, 106, 112). But if “freedom of attention” is to play a constructive role in philosophical theorizing and public discourse, it requires clarification, for it is open to at least three interpretations. One reading picks out physical and psychological states of affairs that form part of a person’s array of negative freedoms. The other two readings focus on deontic states of affairs.

On the first interpretation, the “freedom of attention” refers to a person’s physical and psychological ability to stay focused on an idea or task under certain conditions––for example, with frequent distractors present in one’s environment. On a theory of negative liberty, person P is free to attend for time period T if and only if P is capable of doing so (see, e.g., Kramer 2003, 3). The “freedom of attention” could thus refer to a complex set of discrete freedoms that turn on the robustness of one’s attention span in various situations. Each person’s opportunities and life prospects––and how free she is overall in a negative liberty sense (see Kramer 2003, ch. 5)––will depend in complex ways on how sustainedly she can pay attention.Footnote 4

Turning to the deontic realm, the “freedom of attention” can first be construed as a Hohfeldian liberty-right to attend (at a rudimentary level, advanced level, or anywhere in between). So long as an agent is not morally or legally prohibited from attending, she remains normatively free to do so.Footnote 5 As should be apparent, a liberty-right to attend differs in important ways from familiar concepts like the freedom of religion. As to the latter, it is both coherent––and historically common––to encounter legal prohibitions on various religious practices. The same goes for freedom of assembly, press, and petition. Matters are different when it comes to attending. It would seem rare to encounter legal bans on paying attention––as opposed to, say, possessing or accessing censored materials. (Kurt Vonnegut’s dystopian 1961 short story “Harrison Bergeron” depicts a form of repeated, government-mandated cognitive interruption via ear implants that may illustrate something like a curb on one’s legal liberty to attend-to-whatever-one-wishes, coupled with a duty to attend-to-government-transmitted-gibberish.Footnote 6) Notice, too, that legal liberties to practice one’s religion, exercise freedom of speech, petition the government, and travel arise in the context of state restrictions on conduct. When it comes to declines in our attention, by contrast, the focus is on the actions of non-governmental actors who cannot modify our legal liberties in the same way as the state.

Finally, the “freedom of attention” can be interpreted as a Hohfeldian claim-right, paired with correlative duties borne by some other person(s). Such duties––on the part of technology firms, for example––might include an obligation (i) to refrain from taking certain actions that predictably erode users’ attentional abilities and/or (ii) to pursue affirmative steps that tend to enhance users’ attentive capacities.

Each of the two deontic readings, in turn, can be cast in moral and legal terms:

“Freedom of attention”

Liberty-Right

Moral domain: Everyone typically enjoys this liberty at any level of attentiveness.

Legal domain: Everyone typically enjoys this liberty at any level of attentiveness.

Claim-Right

Moral domain: Depends on a moral analysis of individual and corporate responsibility.

Legal domain: Depends on an analysis of moral claim-rights and tenets of political morality.

Every person is legally and morally free to attend at any level––whether or not she is currently able (and thus negatively free) to do so.Footnote 7 Making meaningful use of our liberties, of course, requires robust physical/psychological attentional capacities. One way to ensure that we have such capacities is to make a case for the existence of claim-rights of types (i) and (ii) above and advocate for their recognition and for effective compliance with them. Such rights can entail obligations on technology firms to (for example) make certain design choices that do not unduly distract users. The devil here will lie in the details. How should “unduly” be cashed out? What other obligations might be at play? May the state help enforce such rights? If so, what are the limits on state action in this new regulatory arena?

4 The Harm Principle and Moral Responsibility

The Indispensability thesis introduced earlier is relevant to analyzing both the justifiability of regulating technology firms and the distribution of culpability for declines in our attention spans between users and corporations. I address each topic in turn.

Governments might impose various requirements on technology companies, including mandating certain design choices for digital environments, as well as regulating the content, targeting, and placement of ads. James Williams’s proposals include outlining an ethically oriented Designer’s Oath (2018, 118–21), regulating advertising “targeted to ‘the child within us’” (2018, 122),Footnote 8 imposing transparency norms for persuasive design tools (2018, 116), levying “attentional [tax] offsets” (2018, 116), and giving users a “real say” in firms’ design processes (2018, 123).

However desirable and effective such measures may be, before endorsing any that call for state intervention, we must address questions of political morality. Is it morally permissible for a liberal state to impose coercive rules on industry with the intent of bolstering users’ attention spans? The answer will turn on one’s commitments in debates between paternalists (or liberal perfectionists) of varying stripes and their anti-paternalist (or anti-perfectionist) critics.

Here I will analyze one significant challenge to coercive state action based on Mill’s Harm Principle, a tenet of political morality endorsed by most political liberals. This Principle sets out a necessary, but not sufficient, condition for exertions of coercive state power. Roughly stated, it is an anti-paternalistic norm that allows the state to coerce only with the aim of preventing a person from inflicting (direct) harms on other agents and their (basic) interests. Insofar as a person’s own good is concerned, by contrast, it is up to her to choose and face the consequences.Footnote 9

We can begin by formulating the anti-paternalist position in its strongest form. The basic move is to assimilate reductions in attention to familiar examples in which state intervention is ruled out on anti-paternalistic grounds––consumption of unhealthy foods, smoking, gambling, use of hallucinogenic drugs, engaging in extreme sports, etc. A user’s penchant for toggling to social media or being distracted by ads while working is a matter of personal responsibility––even for people who find it hard to resist. Just as one can opt not to play videogames or enter a gambling hall or take drugs or go skydiving in the face of cravings––perhaps short of out-and-out addiction (but see Holton 2009, 103–04)––users must make their own choices about how to engage with their digital devices. It is up to sane adults to manage their pastimes and decide which activities––risky, addictive, unhealthy––to pursue, so long as their choices do not redound to others’ (direct) detriment.Footnote 10 Or so an anti-paternalist can contend.

How might this line of argument be rebutted? While the Harm Principle is often articulated using the self-regarding versus other-regarding distinction, another framing is more apt here. Following Ben Saunders, we can formulate the Harm Principle by contrasting consensual harm with non-consensual harm, a distinction that cuts across the self-regarding versus other-regarding divide. On that account, state “intervention may be justified to prevent an agent inflicting non-consensual harms (whether on herself or others)” (Saunders 2016, 1022–23).Footnote 11 When consent of the right sort obtains, state intervention will be off limits. “Consensual harm is never grounds for intervention, while intervention to prevent self-harm can be justified where that harm is non-consensual” (2016, 1017). How is “consent” understood here? Mill speaks of “free, voluntary, and undeceived consent” (Mill 1989, 15; Saunders 2016, 1017–18), and Saunders describes cases where consent is absent due to “voluntariness-defeating factor[s]” such as ignorance, coercion, or temporary incapacity (e.g., drunkenness) (2016, 1014, 1017).

With a consent-based framing of the Harm Principle in mind, the question is whether––given the realities of digital life––agents meaningfully consent to risks of attentional harm when interacting with digital devices, platforms, and environments.

The Indispensability thesis provides the most compelling basis for arguing that consent to the risk of attentional harm online falls short of being fully voluntary, at least in some cases. I’ll develop a hypothetical to illustrate why that may be so. My goal is to draw the sharpest line possible between risks of attentional harm and standard examples in the anti-paternalist literature where coercive state action is considered off-limits, including the regulation of alcohol sales, many forms of narcotics, and unhealthy foods, as well as participation in extreme sports.

Consider, then, the Hypnotizing Bus scenario:

Andy’s workplace is a two-hour walk from her home in an inclement climate. Given Andy’s schedule and stamina, walking to work is technically feasible if she leaves at 6 a.m. each day. A bus stops near Andy’s home that can deliver her to work quickly and at low cost. No other mode of transport is available, and Andy cannot readily change jobs. The bus comes with a catch. Riders are subjected to varying forms of hypnotic suggestion via loudspeakers. It is possible for riders to tune them out and counteract their psychic effects. But this requires significant mental concentration during the ride, as well as throughout the day to neutralize any lingering effects. Not all riders are equally susceptible to hypnotic inducement, and some have greater ability and/or willpower to resist its impacts.

How should we characterize Andy’s “choice” between walking and riding the bus to work? How might we allocate responsibility between Andy and the bus company if her mental well-being and productivity deteriorate noticeably because she must repeatedly fight off mental intrusions from hypnotic suggestion, or perhaps even develops a perverse attraction to it? If Andy’s “consent” to risks of cognitive harm falls short of being fully voluntary given her alternatives, something similar might be said about our engagement with digital ecosystems generally. While users’ experiences with digital devices are less drastic in certain respects, the scenario above dramatizes key features of today’s reality. Stated in general terms, full-fledged consent might be missing where (i) person P has no realistic alternatives to engaging frequently with digital platforms for lengthy periods and (ii) doing so responsibly––i.e., in a way that does not carry with it significant risk of harms––is highly psychologically taxing. Whereas no sane adult must smoke, use drugs, consume sugary foods, or gamble as a precondition to leading a fulfilling life or excelling in a profession, many sane adults have no practical way of avoiding often prolonged entanglement with digital ecosystems in the workplace and their personal lives. This entanglement poses formidable psychological challenges for self-regulation.Footnote 12 In short, the dearth of meaningful alternatives to exposing ourselves to risks of attentional harm online offers the most plausible foundation to contend that the Harm Principle might not foreclose coercive state action to promote people’s freedom of attention.Footnote 13

If an argument along the lines outlined above is unavailing––so that we cannot plausibly distinguish exposure to risk of attentional harms from more familiar anti-paternalist examples––we will have a number of options: eschew coercive state action aimed at safeguarding our attentional resources, reject or qualify the Harm Principle, or maintain that in certain circumstances it is morally optimal, though perhaps still wrong, to violate this Principle.

Whether or not the Harm Principle permits state regulation aimed at protecting our attentional integrity, the Hypnotizing Bus scenario helps make a case for assigning at least some responsibility to technology firms for attentional declines. In this context, James Williams too readily absolves firms and executives of culpability. “[T]here is no one to blame” (2018, 102), Williams concludes. He remarks that he has never met a programmer or engineer who joined the profession to addictFootnote 14 users or make their lives worse (2018, 102), nor do “engineers or product managers . . . want to undermine the assumptions of democracy” (2018, 94). Again: “No one in the digital attention economy wants to be standing in the lights of our attention” (2018, 94). Corporate heads are “well-meaning Alexanders of our time” who “don’t know that they’re standing in our light because it doesn’t occur to them to ask” (2018, 95). Yet other passages in Williams’s book use the language of “goals,” “purposes,” and “designs”––all of which strongly suggest intentional decisions to subvert attentional resources to drive corporate profits. As an example: “Thousands of the world’s brightest psychologists, statisticians, and designers are now spending the majority of their waking lives figuring out how to tear down your willpower” (2018, 101; see also 9, 30, 33, 111–12). And: “[I]t’s a [digital] machine designed to harvest our attention wantonly and in wholesale” (2018, 87, emphasis added). Thus, even if programmers, engineers, and executives did not join the profession to worsen our lives (which is surely right), Williams’s descriptions suggest that employees and managers are fully aware of the attention-undermining aims of the products they design and market once their work gets underway––and so are open to moral criticism.

If we replace “undermining human attention” with “outsourcing manufacturing to sweatshops” or “mining blood diamonds” or “selling AK-47s that predictably end up in the hands of warlords,” one wonders whether we would absolve those actors of responsibility for foreseeable harms as quickly. If not, where does the difference lie? Is it that harms in the attention economy are less apparent? I doubt that is true in a morally significant way. Common sense suggests that hooking and tempting people as in a gambling hall hardly redounds to their benefit. And now that compelling research about heavy digital use is emerging, ignoring the effects of one’s daily work would seem to be willful blindness.Footnote 15 Of course, it may be said that consumers freely opt to use firms’ products and services. But that response returns us to the complex questions raised by the Indispensability thesis.

Even if no individual executive or employee is morally culpable, a corporate entity itself can be held responsible. Williams glosses over this possibility by invoking Steinbeck’s “monster” bank from The Grapes of Wrath. Yet doing so without further argumentation begs the question against those like Ronald Dworkin (1986, 167–71) and Philip Pettit (2007) who have argued that ascriptions of moral responsibility to groups and artificial persons are coherent and defensible. Indeed, where no individual warrants blame, assigning culpability to a corporation can be especially fitting (Pettit 2007, 195–96; Dworkin 1986, 169–70). Otherwise, it becomes less clear why we would have moral standing to demand that technology companies alter their conduct, casting doubt on the “freedom of attention” interpreted as a moral claim-right.Footnote 16

5 Reform Aims and Strategies

What should we do about declining attention spans? Ultimately, the problem is not especially well suited to government-based solutions. Unlike, for example, data privacy––an area where regulatory agencies can promulgate an array of rules specifying how technology firms may collect, store, use, and share personalized data––the challenge of digital distraction is more akin to a public health problem that manifests at the level of psychological traits and habits. The brunt of the effort will have to be borne by people’s own choices (see, e.g., Newport 2019), emerging social norms, educational strategies, parental oversight, and design modifications made by technology firms, likely in response to public pressure.

Even if coercive state regulation of technology firms is off-limits for reasons of political morality such as the Harm Principle, it does not follow that the government’s hands are entirely tied. The state might play a creative role in promoting attentional integrity through a range of non-coercive strategies, including informational initiatives about the deleterious effects of chronic distraction and multitasking, modeled on prominent anti-smoking and anti-drunk-driving campaigns.

Governments should also consider promoting the existence of, and access to, what we might call technology-lite environments. Given how challenging it can be to resist distractions and temptations in real time, designing settings whose architecture, broadly construed, helps reduce distractions can serve as one useful antidote. Governments should explore funding such environments––schools, public spaces, and even residential buildings and neighborhoods. Such settings will include rules that restrict the kinds of technologies that may be used and regulate their manner of use. For example, buildings or neighborhoods might prohibit social robots in homes and limit augmented reality use in public spaces. The low-tech system of Waldorf Schools may be a model for tech-lite public educational streams or schools (Richtel 2011). (While extolling the virtues of digital immersion, tech executives unsurprisingly send their children to the Waldorf School in Silicon Valley.) The overarching objective is to design opt-in environments that can make digital distractions less salient and strengthen healthy digital habits.Footnote 17 Crucially, poor and rich alike should have access to such settings, which may otherwise risk becoming a luxury for the few (cf. Madden et al. 2017).

What about steps taken by technology firms acting on their own initiative or in response to a public outcry? What contributions can we expect from them in tackling the problem of declining attention spans? My expectations here are limited. The overriding objectives of corporations are to maximize return on investments and grow shareholder value. These prerogatives are likely to overshadow limited steps taken in support of our attentional well-being, such as Apple’s Do Not Disturb feature, screen time tracker, and Downtime option (Apple 2018), or a proposed “use and abuse policy” where companies check in with heavy users as to their welfare (Eyal 2019). My modest expectations here contrast with James Williams’s position, which entails a bolder vision for reforms. Williams formulates his preferred outcome by way of an “alignment” between “our goals” of robust attentional capabilities and the goals of technology firms. He says that “it’s [un]acceptable for the technologies that shape our thinking and behavior to be in an adversarial relationship against us in the first place” (2018, 103; see also 97–98). In that case, the aim is “to bring the technologies of attention onto our side” (2018, 106).

The statements above––which seem to entail a substantial shift in how private firms operate––strike me as unrealistic. We should not expect the priorities of major corporations to “align” with “our” priorities in meaningful ways––even if one views such an outcome as normatively desirable. The chief objective of private firms is to maximize long-term profitability, often by cultivating new desires and extracting value from consumers, as Williams himself observes (2018, 92). While regulatory interventions and public pressure may alter corporate behavior in some respects, most of the onus will lie on us as consumers to select products and services aligned with our attentional well-being. When we fall short, corporations will happily accommodate our sub-optimal but profitable choices. Just as Mars, Nabisco, and General Mills will not make truly concerted efforts to battle the childhood obesity epidemic to which their products have contributed (Moss 2013), advertisers and technology firms will never truly “support our intentions” or “advance the pursuit of our reflectively endorsed tasks and goals” rather than “exploit[ing] our mere attention” (Williams 2018, 111).Footnote 18 Such aspirations have utopian undertones that are in tension with the partially adversarial relationship between corporations and consumers in a relatively free market.

The landscape, in sum, is complex. When “freedom of attention” is construed as a moral claim-right, technology firms might owe users certain obligations, especially since Indispensability is a morally salient and historically distinctive feature of today’s social and economic life. Will major technology firms voluntarily commit to respecting our attentional integrity in meaningful ways? I am pessimistic about the prospect of corporations reorienting their business practices in ways that yield genuine alignment with “our” aspirations as human beings. Such pessimism might invite a robust role for government action. The state, however, may be barred from imposing coercive regulations by norms of political morality such as the Harm Principle, though perhaps without ruling out an array of promising non-coercive policies. As calls to regulate technology companies intensify in spheres such as data privacy, consumer protection, and attentional harms, key questions of political morality deserve further study.