1 Introduction

A nudge, according to the term’s inventors Cass Sunstein and Richard Thaler, is an “aspect of the choice architecture that alters people’s behavior in a predictable way without forbidding any options or significantly changing their economic incentives”.Footnote 1 Nudges have been used to positively influence behavior on issues as diverse as healthy food consumption, retirement savings, and urine spillage in public toilets. Typically, nudges are small, even tiny,Footnote 2 interventions, such as setting a default for a choice in a beneficial way. For example, vastly more people in Austria are organ donors than in Germany, simply because the default is set differently in the two countries: Austrians have to opt out of organ donation, whereas Germans have to opt in. For a variety of reasons, people tend to stick with the defaults that are chosen for them. Sunstein and Thaler have given the name libertarian paternalism to the political philosophy that holds that such empirical findings should be exploited to steer citizens toward better choices.

In this essay, we wish to draw attention to the fact that libertarian paternalistic measures, for all the good that they do, can pose serious threats to citizens’ privacy. On the one hand, this might not seem surprising. The ability to process personal data brings great advantages for policy makers. The more one knows about people’s preferences, past choices, social connections, daily routines, and so on, the easier it is to nudge them successfully, since many types of nudges can be more precisely targeted on the basis of such information.

On the other hand, one of the aims of libertarian paternalists is to nudge people to take better care of their personal data. Given this apparent concern for privacy, one might expect them to balk at the idea of harvesting personal data for their ends. However, we will see that this is not always the case. And even if libertarian paternalists did not express an interest in promoting privacy, we argue that to disregard privacy concerns in the design of nudges will lead to regulation that is incompatible with the sort of libertarianism the program propounds.

Our argument will proceed as follows: We start with some preliminaries on paternalism in Section 2. In Section 3, we delineate the aspect of privacy that we focus on, namely informational privacy. In Section 4, we ask what the value of privacy might be, focusing on autonomy and creativity. We argue that, since informational privacy is a precondition of autonomy, invasive measures cannot claim to be in the spirit of any truly libertarian program. In Section 5, we consider nudges that are designed to steer people towards better privacy self-management, and display the incongruity that lurks in the idea of using personal information to achieve this aim. Section 6 takes us to a thought experiment involving an all-knowing government that seeks to avoid a particularly strong form of paternalism. In Section 7, we broaden the scope of the discussion by considering privacy aspects of all kinds of nudges, not just those aimed at better privacy management. Finally, in Section 8, we ask how privacy costs might be weighed against other benefits in assessing particular nudges.

We will for the most part concentrate on the works of Cass Sunstein, though we take our points to be important to the whole of the emergent libertarian-paternalistic industry. Sunstein is a natural focal point for several reasons. Not only is he one of the inventors of libertarian paternalism and the most prolific writer on the subject, he has also had the chance to put his ideas into practice, having served as the head of the U.S. Office of Information and Regulatory Affairs (OIRA) from 2009 to 2012.Footnote 3 His views have already led to considerable political change and will continue to do so, which makes their study especially worthwhile.

2 Libertarian Paternalism

First, some clarification of the terminology that will play a role in what follows, starting with the term libertarian paternalism itself. Sunstein and Thaler describe their measures as libertarian because they allow the nudged citizens to disregard the nudges and follow their original intention, provided that intention is strong enough. Much has been written about whether their program really is libertarian, and some of our comments will aim in a broadly similar direction. In this section, however, we want to concentrate on the term paternalism.

According to most definitions of paternalism, a rule is paternalistic if and only if it overrules the preferences of subjects solely for their own good.Footnote 4 The “solely” in the definition is important. A governmentFootnote 5 that explicitly rejects paternalistic measures might still override citizens’ choices that it feels are bad for them, if some other good is achieved in addition.

For example, German law states that you must wear a seat-belt. According to the legal construction, this is to protect other passengers from getting hurt by your body, which might be flung about uncontrollably after a high-velocity impact.Footnote 6 The rule is therefore not seen to be paternalistic, since it does not only protect your own personal safety.

Note that according to such an understanding of paternalism, many nudges would not count as paternalistic. In early papers, Sunstein and Thaler made a distinction between libertarian paternalism and libertarian benevolence,Footnote 7 the latter applying, for example, to interventions that increase organ donation rates, which cannot be said to primarily benefit the donor. As far as we can tell, this distinction has disappeared from the literature; libertarian paternalism now seems to encompass all regulation based on nudges.

This might suggest that a broader understanding of the term paternalism than the one we are using here has become dominant, one that perhaps equates paternalism with any kind of deviation from Mill’s harm principle. This principle says that the only legitimate reason for government to interfere with individuals’ choices is to avoid harm to others. In their more limited senses, both libertarian paternalism and libertarian benevolence extend that principle. In the first case, the main alteration is from “others” to “others or themselves”; in the second case from “avoid harm to others” to “avoid harm to others or bestow benefits on others.” The new understanding of libertarian paternalism seems to encompass both alterations. In the following, we draw attention to this distinction where it matters.

Another distinction that will be important below is the one Sunstein makes between means paternalism and ends paternalism.Footnote 8 The difference is whether the government accepts citizens’ ends, concerns and goals, and intervenes only when it sees them employing the wrong means to achieve those ends (means paternalism), or whether it wishes to substitute new ends for those of the citizens (ends paternalism). It appears that Sunstein wants to be a means paternalist as far as possible, since ends paternalism is the much more troubling and controversial idea.

However, there is also a problem with means paternalism, and it is an epistemic one: How can a government come to know the real ends of its citizens? People are heterogeneous in their aims and values; in promoting some salient ends, government will make sweeping choices that override or underestimate the ends of minorities. We will see that this problem is the source of one of the greatest temptations for libertarian paternalists when it comes to collecting information about citizens and invading their private spheres.

3 Privacy

Before we can express our worries about endangered privacy, we should also make clear what we mean when we speak of privacy. There is general consensus that attempts to analyze the concept of privacy have so far failed to come up with a conclusive result. The various characterizations that have been suggested, such as privacy as secrecy, as control over personal data, as limited access to persons or simply as intimacy, all leave out important aspects, or are too broad, and in most cases both.Footnote 9

That said, we find the tripartite distinction offered by Beate RösslerFootnote 10 to be useful. She distinguishes between local, informational and decisional privacy. While these areas might not encompass all aspects of privacy, and are not without significant overlap, they help to frame our present concern.

Local privacy Footnote 11 is the realm of privacy associated with the home and other spaces that are experienced as safe havens from unwanted attention. Though nudges that focus on behavior in the home or, for example, public toilets, should be designed with an acute awareness of the private functions of these spaces, we will not expand on this aspect of privacy here.

Instead, our main focus will be on issues of informational privacy. This concerns personal information: information the subject would like to control but often cannot. While dangers to informational privacy are not new, technological progress has hugely exacerbated them. Informational privacy is under threat from new (and old) types of information collection, information processing and information dissemination.Footnote 12

Lastly, we are talking about what Rössler refers to as decisional privacy whenever we say: “This is my private decision.” We do not necessarily mean that no one should know about our decision, but that no one except for ourselves should have a say in it. Decisional privacy is a central issue in the question of abortion, for example. Indeed, the verdict in the seminal case Roe v. Wade was explicitly grounded in a woman’s right to privacy.Footnote 13

Raising the question of decisional privacy in connection with nudges would readily lead us to the famous question of whether “libertarian paternalism” is an oxymoron.Footnote 14 If, as one might well argue, any form of paternalism compromises our decisional privacy, then the clash between nudging and privacy is inevitable. This, however, is not the discussion we wish to take up in this essay. While we are concerned with the loss of autonomy in general and decisional privacy in particular, we are only interested in such a loss insofar as it is the result of a breach of informational privacy. We will expand on this idea in the next section.

4 Privacy, Creativity and Autonomy

Much has been written on the intrinsic and instrumental values of informational privacy. We focus on two issuesFootnote 15 that we think should especially concern libertarians: PrivacyFootnote 16 as an element that is essential to the creative development of new ideas and, as mentioned above, privacy as a precondition for autonomy.

To create genuinely new ideas, we need to be able to consider and reject a variety of possibly unorthodox alternatives, and to change our opinion freely until our thoughts are ready to be shared with the world. Fear of disapproval might mean that new ideas are smothered by conformism and thus trapped by the familiar and the mainstream. Additionally, we might refrain from sharing controversial ideas with trusted persons in order to avoid negative consequences for them. In so doing, we forfeit the chance for these ideas to be improved by constructive criticism. As Neil Richards puts it, the “engine of expression—the imagination of the human mind”Footnote 17 can be endangered by a lack of informational privacy.Footnote 18

Apart from the danger of individual people becoming “boring”, as Richards puts it,Footnote 19 this smothering of new ideas may seriously harm society at large, in that innovative solutions to social and other problems may not be found. A loss of informational privacy will therefore not only affect individuals, but also lead to whole “communities governed by apathy, impulse, or precautionary conformism.”Footnote 20

A related consideration, one that should be even more important for libertarians, is that autonomy itself presupposes privacy. The mere impression of being watched can alter behavior. If we cannot be sure who is watching us and what activities our watchers might perceive as negative, whether today or any time in the future, it is likely that we will limit ourselves to the least-risky behavior. This mechanism has found its definitive description in Foucault’s famous discussion of the panopticon.Footnote 21

One might expect a libertarian paternalist to want to give this idea a different spin: If privacy infringements lead to predictable changes in behavior, then this might be far from being a problem. On the contrary, it might constitute a resource for a whole new series of nudges. But this seems an all too uncharitable suspicion, one that is unwarranted by libertarian paternalist writings. At least, nothing they have said so far advocates instilling and exploiting fear and discomfort.

The arguments for privacy as a source of creativity and autonomy go right back to Mill’s On Liberty, Footnote 22 and have been much developed since.Footnote 23 If they are right, as we believe they are, any movement that has the word “libertarian” in its title should seek to reduce privacy infringements in its policies as far as possible.

5 Nudging towards Privacy

With this sense of the importance of privacy in mind, we shall now turn to a discussion of specific nudges. We begin with those that aim to encourage people to take better care of their personal data. As Lauren Willis writes: “Consumer privacy on the internet appears poised to be the next arena [for libertarian paternalist interventions].” (Willis 2013, p. 1159)

There is ample evidence that most people are concerned about their privacy but actually do little to protect it,Footnote 24 not unlike the many people who are positively disposed towards organ donation but do not actually carry donor cards. This similarity suggests that the kinds of nudges that increase organ donation rates might serve to protect private data as well.

Given this initial diagnosis, what might a libertarian paternalistic strategy for privacy protection look like? An obvious proposal,Footnote 25 directly parallel to the organ donation case, would be to implement mandatory defaults in software services so that no data about users can be passed on to anyone unless a clear and conscious choice is made to allow this.Footnote 26

But what if a government wants to avoid ends paternalism and acknowledges that not all people care about their privacy all that much? In order not to override people’s ends concerning privacy, would it not be better to set the defaults differently for those who would like to receive advertisements that really interest them, and would happily trade their private data for the privilege? That is, would it not be better were the government to set defaults in a way that respects the preferences of individual people, giving the privacy lovers one set of defaults and those who do not care about privacy another?

As Sunstein writes, such an idea presupposes knowledge about privacy preferences:

If enough information is available about someone’s past choices or personal situation, we could design, for that person, default rules with respect to health insurance, privacy, rental car agreements, computer settings, and everything else. (Sunstein 2013b, p. 1871, emphasis added.)

Technically, acquiring such information might well be feasible; if it is not yet, it soon will be. Governments have already collected huge amounts of data about their citizens, and private businesses probably even more (who knows?). It would presumably not be too difficult to analyze such big data in a way that enabled the categorization of people along the required lines. For example, we could create one category for those who value their privacy and the integrity of their private data above most other concerns, another for those who are willing to give away certain bits of their data if they get a good return for it, and yet another for those who enjoy any sort of attention they get and want their profiles to be disseminated as widely as possible.

Government could then set appropriate defaults for information use for each group. For example, for the group that values privacy extremely highly: “Treat my personal data with the utmost confidentiality, even if that means that I will be unable to use certain services and other amenities”; and for the group that gives the least importance to privacy: “Make free use of my personal data”. In an article proposing personalized default rules for, amongst other things, privacy settings,Footnote 27 Ariel Porat and Lior Strahilevitz spell out such possibilities in much more sophisticated ways; our sketch here, however, suffices for our purpose.
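To make the proposal concrete, the following is a minimal sketch, in Python, of the categorize-then-default pipeline just described. Everything in it, the behavioral features, the thresholds, the category labels and the default bundles, is our own illustrative assumption, not anything Sunstein, Porat or Strahilevitz have specified.

```python
# Illustrative sketch only: features, thresholds, categories and defaults
# are hypothetical assumptions of ours, not part of any actual proposal.

from dataclasses import dataclass

@dataclass
class CitizenProfile:
    opted_out_of_tracking: bool     # e.g., has set "Do Not Track" in the past
    social_media_visibility: float  # 0.0 (locked down) to 1.0 (fully public)
    data_sharing_consents: int      # number of services granted data access

def categorize(profile: CitizenProfile) -> str:
    """Assign a coarse privacy-preference category from observed behavior."""
    if profile.opted_out_of_tracking and profile.social_media_visibility < 0.2:
        return "privacy-protective"
    if profile.data_sharing_consents > 20 or profile.social_media_visibility > 0.8:
        return "attention-seeking"
    return "trading"  # willing to trade some data for a sufficiently good return

# One bundle of default settings per category.
DEFAULTS = {
    "privacy-protective": {"share_with_third_parties": False, "targeted_ads": False},
    "trading":            {"share_with_third_parties": False, "targeted_ads": True},
    "attention-seeking":  {"share_with_third_parties": True,  "targeted_ads": True},
}

def personalized_defaults(profile: CitizenProfile) -> dict:
    """Map a citizen to the default bundle for his or her inferred category."""
    return DEFAULTS[categorize(profile)]
```

Note that even in this toy form, the pipeline’s input is precisely the kind of behavioral record whose protection is at stake.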

Now, the potential for conceptual pitfalls is clear: we find out that someone does not want his or her privacy invaded by invading his or her privacy. If this is not yet problematic in itself, it surely becomes so once we insist that we are doing it while honoring his or her ends.

This is not lost on those who make such proposals. Porat and Strahilevitz themselves call their suggestion to invade privacy in order to further privacy protection paradoxical,Footnote 28 though they do not think that the paradox runs too deep. Likewise, Sunstein shows in an unpublished manuscriptFootnote 29 that he is well aware of the problem:

With respect to privacy, there is a great deal of heterogeneity and a risk of self-interested judgments on the part of choice architects, both of which argue for active choosing.Footnote 30

However, he also writes:

[P]erhaps choice architects know that Jones is fiercely protective of her privacy and that in the face of any kind of doubt, she prefers to prevent other people from knowing about her behavior and her choices. If so, that very knowledge can be used to produce privacy protective default rules.Footnote 31

There is a background assumption that Sunstein, Porat and Strahlevitz all seem to be making, though they do not quite articulate it. This assumption is that no one could be so concerned about their privacy that they wouldn’t want government to know about this concern. That strikes us as false. Just consider the many reports of people being placed under surveillance merely because they use services such as the anonymity network TOR.Footnote 32 The unspoken assumption presupposes a level of trust that governments today simply do not enjoy.

One of the many issues on which such trust hinges concerns the sources of information a government accesses to personalize its nudges. How exactly do the choice architects come to know of Jones’s status as “fiercely protective of her privacy”? It is conceivable, of course, that they might limit themselves to data that individual citizens have willingly and explicitly disclosed, a measure that would go some way toward instilling trust in the population. Sunstein’s own sympathy, however, seems to lie with much more sweeping big data analyses, as we will see in the next section.

6 Big Data and Ends Paternalism

In an interesting passage, Sunstein claims that a government that seeks comprehensive knowledge about its citizens can still be libertarian. What is more, he thinks that a highly informed government might not even be paternalistic any more:

Consider a thought experiment or perhaps a little science fiction. We should be able to agree that the government would focus only on means, and would not be paternalistic, if it could have direct access to all of people’s internal concerns and provide them with accurate information about everything that concerns them. And perhaps in the fullness of time, government, or the private sector, will be able to do something like that. (Sunstein 2013b, p. 1857)

This quote occurs, again, in the context of answering the complaint that ends paternalism is inevitable, i.e., that government will unwittingly override the ends of minorities. This problem, so the idea goes, would disappear if nudges could be tailored to the concerns of the individual, given sufficient information about those concerns.

Take Smith, who loves fast food. She also has a terminal illness (unrelated to her preference for unhealthy food) that makes it very unlikely that she will live long enough to suffer from the consequences of her diet. If government had access to Smith’s medical file and could use that information in its design of interventions, it would not have to bother her by nudging her towards soybean products she neither needs nor wants.

This example, of course, is meant to make you feel uncomfortable, but we cannot see how it is precluded by anything Sunstein says about his fantasy. Comfortable or not, we hope you will come to agree that Sunstein’s claim that this fantasy would not constitute a form of paternalism is far from uncontroversial.

In fact, we take that claim to be false, and rather obviously so for those who have privacy concerns on their minds. A government that keeps tabs on its citizens to such a degree is surely paternalistic, because it overrides important concerns of those whose preference it is not to be spied on. We would guess that everyone has some concerns they would not want the government to be informed about.Footnote 33 However, for our argument it is not necessary that all, or even most, people have such concerns. As long as in Sunstein’s fictional future there is at least one such person left (they certainly exist today), their concern for government not knowing certain things will be overridden, maybe for their own good, maybe not. In any case, we argue that this is surely a form of paternalism.Footnote 34 What is more, going down Sunstein’s science fiction route would constitute a perfect example of ends paternalism, not a way to avoid it.

7 Privacy Conscious Nudging

Let us then assume that a libertarian paternalistic administration has abandoned the dream, or nightmare, of a perfectly informed government (and maybe, along with it, the dream of a pure means paternalism). Let us also set aside for a while the topic of pro-privacy nudges: the administration must also make important choices about how far it will respect citizens’ personal information when nudging their behavior on other issues.

For example, governments might start to monitor the calorie intake of their citizens, by tapping into the data streams generated by various self-improvement apps. This would set the scene for some possibly very successful nudges towards better health, but it would entail collecting data that many would consider highly sensitive.

There seem to be two options for those libertarian paternalists who take our concerns seriously. They might abandon privacy-endangering practices as far as possible, even if that means less effective regulation. Or they might choose to openly give up on privacy protection and use all the information they can get their hands on. However, as we argued above, this second option might well imply a loss of entitlement to the label “libertarian”, and thus seems to be out of the question.

While the first option is thus the one we would hope legislators choose, it might turn out to be more cumbersome than expected. Let us pick two prominent nudges discussed in Thaler and Sunstein (2008) to see whether they might be problematic from a privacy perspective, their success in achieving laudable goals notwithstanding.

The first example involves adding an element of competition in order to get consumers to save energy.Footnote 35 The first empirical finding here is that electricity consumption goes down if those who consume above average are informed that they do so. This might already be worrisome from a privacy-conscious perspective. What if someone glances over our shoulder as we open the letter from the company informing us that we are reckless wasters of energy?

But there is another part to the story: those who are informed that they consume less than average actually increase their consumption. It is only when that fact is presented in a positive light, in a direct comparison with the households in the neighborhood, that these electricity users keep striving to lower their consumption. When comparisons are localized in this way, it might not be hard to figure out who the particularly wasteful members of the community are, and, given the right social climate, they might end up being ostracized for their behavior.
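A minimal sketch of the two-part feedback mechanism this nudge relies on might look as follows; the message wording and the smiley convention for the approving signal are our own assumptions, added for illustration. The privacy-relevant point is visible in the function signature itself: generating the message requires household-level consumption data and a neighborhood average computed from it.

```python
# Sketch of norm-based consumption feedback: a descriptive comparison with
# the neighborhood average, plus an approving signal for below-average
# consumers so that the comparison does not invite them to consume more.
# Wording and the ":)" convention are illustrative assumptions.

def consumption_feedback(own_kwh: float, neighborhood_avg_kwh: float) -> str:
    comparison = (f"You used {own_kwh:.0f} kWh this month; the average in "
                  f"your neighborhood was {neighborhood_avg_kwh:.0f} kWh.")
    if own_kwh > neighborhood_avg_kwh:
        # The descriptive norm alone tends to pull high consumers down.
        return comparison + " You consume more than your neighbors."
    # Without the approving signal, below-average consumers drift upward.
    return comparison + " Great job! :)"

print(consumption_feedback(310, 270))  # above-average household
print(consumption_feedback(220, 270))  # below-average household
```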

How about unpersonalized nudges, such as the simple setting of defaults across all users? At first sight, it seems nothing could go wrong here: no data about you is retrieved to set the appropriate default; rather, it is uniformly chosen by the regulators in accordance with their aims. On second thought, however, it might not be so simple.

Let us return to organ donation, one of the most frequently cited examples of the power of “sticky” defaults. As mentioned in the introduction, donor rates are massively higher in countries in which donation is the default. This seems an impeccable success for the libertarian paternalists who implement such a default, for their aim must be to get as many people as possible who have no strong opinion on the subject to donate their organs, while enabling those who want to keep their organs to do just that.

But suppose you do not want to donate your organs after death, for reasons you are unwilling to discuss with anyone. In Germany, all you have to do is not opt in to organ donation; in other words, all you need to do is nothing at all. This is indistinguishable from not signing up out of laziness or out of unawareness of the possibility of giving away one’s organs after death.

In Austria, on the other hand, one has to actively opt out. This means that, should someone find out about your decision and press you on the matter, you will have less room to maneuver. It might mean that you picture the emergency surgeon muttering “What a selfish person!” as he fishes the “No-Donor” card out of your wallet. Maybe you are so concerned about such a possibility that you decide not to opt out after all, even though this is against your aims. Even if opting out does not involve carrying a card, every such mechanism will need to keep some kind of record of individuals’ decisions, a record they might worry about, maybe to the point of not pursuing their true aims.

These examples are not meant to show that concern for privacy poses insurmountable problems for all kinds of nudges. Absolute protection of privacy is impossible in any form of regulation; privacy costs will always have to be balanced against other benefits. Maybe the impact of our privacy concerns will not tip the scales against the measures. Maybe the worst squanderers of energy are apparent simply from their Christmas decorations. Maybe people who can be swayed by the thought of possible social discomfort do not, after all, want to keep their organs enough for that wish to deserve protection.

What the examples are meant to establish, however, is that explicitly adding privacy concerns to the checklist for future nudges will involve some thorny decisions and, in some cases, the abandonment of otherwise promising programs. It will involve answering difficult questions about retention periods for data, levels of data security, impacts on trust in government, and so on.Footnote 36

8 The Weight of Privacy

We want to end by singling out the most fundamental of these problematic questions: How should privacy in practice be weighed against other concerns?

Much of the deliberation about a proposed nudge will involve some form of cost-benefit analysis. However, it is not easy to assign a numerical value to such an intangible good as privacy. Sunstein mentions this difficulty in his recent book Valuing Life,Footnote 37 in which he gives an inside look into the cost-benefit analyses he was engaged in as head of OIRA. In that book, he seems to be thinking of privacy only as it figures on the “benefit” side of the ledger. When assessing the weight of the kinds of risks we point to, of course, the problem of unquantifiability will crop up on the “cost” side as well.

Indeed, we might have privacy featuring both on the “cost” and on the “benefit” side of the calculation, as is the case with the nudges towards better privacy control we discussed earlier. Since such calculations involve similar, though still hard to quantify, goods, they might be amongst the easiest to solve. Let us try our hand at some cases:

  • There are apps and plugins that pose a number of challenging puzzles at critical hours to see whether you are sober enough to be allowed to post on social media sites.Footnote 38 The first step in setting up some of these services is to indicate the hours in the week when you are likely to be too drunk or high to trust your own posting behavior. Taking inspiration from this, governments might force social media providers to hardwire such an app into their services. It might be activated not by user-set default timeslots but, more efficiently, by tacitly tracking users’ GPS signals and activating the inhibiting device after a visit to the bar. Even though this spares users the self-disclosure of sensitive private data, it seems to us that the privacy costs would clearly outweigh the benefits here.

  • Similarly, an algorithm was actually used in a (non-representative) studyFootnote 39 to monitor status updates for overly emotional language as they were being written. Different types of warnings were shown, resulting in a decrease in posts that were later regretted (a minimal sketch of such a check follows this list). Suppose government wants to legally implement such algorithms in social media sites. Whether privacy gains outweigh privacy concerns will, among other things, depend on how plausibly government can claim that the algorithms only display their results to the user and do not feed secondary applications.

  • Consider a different and rather elaborate nudge: An interesting video produced on behalf of the Belgian Financial Sector Federation (Febelfin) shows a fake magician who is able to “mindread” his subjects and find out all sorts of sensitive information, such as alcohol consumption, bank details, and so on. At the end of the video, it is revealed that it was not the magician but a team of hackers that retrieved the information in real time, given apparently nothing more than the subject’s name.Footnote 40 The effect is like that of a benevolent hacker attack: it shows that the possibilities of real harm are not just theoretical, and thus overcomes what behavioral economists call availability bias.Footnote 41 Though the privacy invasion is substantial, the benefits could well be greater still. This is a question that only empirical studies of the victims’ post-assault privacy behavior could answer.Footnote 42

  • Lastly, what about our original problem, the personalized setting of privacy defaults on the basis of the comprehensive data analyses suggested by Sunstein, Porat and Strahilevitz? Even if the government were able to instill warranted trust that it would not collect further information once it learned of Jones’s strong preference for privacy, and that it would delete all earlier records, we believe that this is a case in which the privacy costs are higher than the benefits, if the initial information was obtained against her wish. If, on the other hand, the model were one in which citizens had to actively opt in to government tracking for the purpose of having their defaults set for them, the issue might depend on how successful the ensuing privacy nudges were, as worries about data security and the like will remain on the cost side.
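To make the second bullet concrete: a client-side check of the kind at issue might be sketched as follows. The emotion lexicon and the threshold are invented for illustration (the study cited used a far more sophisticated classifier); the privacy-relevant design choice is that the score is shown only to the author and is neither logged nor transmitted.

```python
# Minimal sketch of a client-side "regret warning": lexicon and threshold
# are hypothetical placeholders, not the classifier used in the cited study.

from typing import Optional

EMOTION_LEXICON = {"hate", "furious", "idiot", "never again", "disgusting"}
WARNING_THRESHOLD = 2  # assumed: warn at two or more emotional cues

def regret_warning(draft_post: str) -> Optional[str]:
    """Return a warning for the author's eyes only, or None."""
    text = draft_post.lower()
    hits = sum(1 for cue in EMOTION_LEXICON if cue in text)
    if hits >= WARNING_THRESHOLD:
        return ("This post contains strongly emotional language. "
                "Do you want to wait a while before publishing it?")
    return None  # in either case, nothing is logged or transmitted
```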

While these cases are not all easy to decide, it gets even harder when privacy has to be balanced with goods of completely other types, such as health, wealth or happiness. We can suggest no metric that would solve this problem; however, that does not mean that nothing can be said on the matter.

Some calculations will be obvious, even if privacy is weighed against a quite different sort of good. Consider improving further on urinal performance by strategically placing surveillance cameras that broadcast to screens out in the hall, or maybe the internet. Quite effective, we imagine. Also, we hope, out of the question.

And even before we engage in any specific comparison, it might be a good idea to draw some red lines at the outset: Which, if any, privacy infringements should be out of bounds for a libertarian paternalist, no matter the benefits they might bring? Imagine, for example, that we were to find strong correlations between certain genes and responsiveness to certain types of nudges. Imagine that if we know that someone has these particular genes, we can design highly effective nudges for them, and maybe even for their close relatives. Should a government seek and employ knowledge about the genetic makeup of its citizens in order to optimize its design of nudges, or should it refrain from such possibilities categorically?

9 Conclusion

We have highlighted the potential dangers to privacy inherent in many nudges, whether already employed, merely proposed or (plausibly, we maintain) extrapolated by us. Further, we have argued that these concerns should not be dismissed lightly, since they strike at the core of the libertarian enterprise. This does not mean that nudges that infringe upon privacy may never be licit. However, the dangers to privacy will have to be considered and balanced against the goals reached by the implementation of each nudge. This calculation will often be difficult to make. Nonetheless, libertarian paternalists should monitor their ideas closely for potential privacy infringements and abandon those that do not pass muster, even if that means abstaining from highly effective measures of regulation.