
High-frequency trading algorithms got a shock in 2013 when an AP tweet claimed that Barack Obama had been hurt in an explosion. Over USD 130 billion in stock value was wiped out in minutes. AP said its Twitter account had been hacked, and stock prices recovered quickly. Social media, clearly, affected the stock market. Bad news, good news, false news, true news, and public opinion have always been part of the mix in markets, in the good old days as much as in the contemporary moment of fake news. However, fake news has introduced a new element in the reign of President Donald Trump, one that affects the checking of facts in the business of media and the business of business. The trusted source is no longer trusted. The rise of active and passive digital personae, swarms, and doxing has amplified gossip, rumor, and populism, and confused the checking of trusted sources.

Of course, businesses can make money by delivering false news, as did William Randolph Hearst when he realized that manufacturing facts about the Spanish-American War increased circulation. Hearst, like Trump, also felt that “other people existed mainly to gratify his own desires” (Proctor 1998, p. 14). Indeed, the expression yellow journalism comes from the Hearst era, the yellow referring to the Yellow Kid, a character in the comic Hogan’s Alley, a favorite reading of Hearst. However, there is a dramatic qualitative difference between the Hearst era and today, because it is now often very difficult to work out whether a human or a nonhuman is responsible for news. President Donald Trump’s Tweets are an interesting exception, like Hearst, because the assumption is, even in satire like Saturday Night Live, that Trump is the genuine author of his own Tweets most of the time and in real time. But how influence works on the Internet is no simple matter. Reddit built its company by seeding the site with fake user accounts in its early days, and President Trump learnt the lesson. Figure 1 is a screenshot of Trump’s Twitter audit, accessed by the author on November 5, 2018, showing 5,450,240 fake followers.

Fig. 1 Twitter audit of Donald Trump’s followers, real and fake
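
How does an audit arrive at a figure like 5,450,240? The exact models are proprietary, but follower-audit tools are generally reported to sample an account’s followers and score each one on weak signals such as tweet count, follower-to-friend ratio, and profile completeness. The sketch below is a hypothetical illustration of that kind of heuristic; the signals and thresholds are assumptions, not TwitterAudit’s actual method.

```python
# A hypothetical sketch of the kind of heuristic a follower-audit tool
# might apply. The signals and thresholds below are illustrative
# assumptions, not TwitterAudit's actual (proprietary) model.

from dataclasses import dataclass

@dataclass
class Follower:
    tweets: int            # lifetime tweet count
    followers: int         # accounts following this follower
    friends: int           # accounts this follower follows
    default_image: bool    # still using the default profile image?

def looks_fake(f: Follower) -> bool:
    """Flag accounts that rarely tweet, follow many more accounts than
    follow them back, and keep the default profile image."""
    ratio = f.followers / max(f.friends, 1)
    signals = [f.tweets < 10, ratio < 0.05, f.default_image]
    return sum(signals) >= 2   # two or more weak signals => likely fake

# Score a small (invented) sample of a follower list.
sample = [Follower(2, 1, 800, True), Follower(5400, 300, 290, False)]
fake_share = sum(looks_fake(f) for f in sample) / len(sample)
print(f"Estimated fake share: {fake_share:.0%}")   # 50% for this sample
```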

What Donald Trump has recognized is that there is no newshole limiting his creation of news. The newshole is the amount of space left for a news organization to devote to news once the advertising for an edition has been laid out. In print newspapers, only a certain amount of physical space can be allocated to news before the edition becomes uneconomical to produce and circulate. Online news, of course, has changed the business models built around newsholes.
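
The arithmetic behind the print constraint is simple enough to state directly; the figures in the sketch below are invented for illustration only.

```python
# Illustrative arithmetic only: the edition size and advertising ratio
# below are hypothetical, not drawn from any actual publication.
total_pages = 64
ad_ratio = 0.60  # share of space already committed to advertising
newshole_pages = total_pages * (1 - ad_ratio)
print(f"Newshole: {newshole_pages:.1f} pages")  # 25.6 pages left for news
```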

But Donald Trump’s fake followers raise the problem of what counts as an “agent” or an “actor” in contemporary online news cycles. An agent in digital media can extend beyond human sentient beings. This is precisely why people get upset when they are misrepresented online while trying to work out whether actions are intentional or not. For instance, when Bettina Wulff, wife of former German president Christian Wulff, found that a Google search of her name came up “prostitute,” she sued Google, successfully. Bettina had a digital persona she did not want, projected into the minds of human agents but passively constructed by nonhuman agents, the Autocomplete feature in Google. In 2012 a Japanese man took Google to court over Autocomplete (Instant), which returned up to 10,000 results implicating him in crimes. A Tokyo court ruled that Google suspend its Autocomplete, and Google replied, “no,” that it would not obey Japanese law (Boxall 2012).

Debates about agency in philosophy, psychology, and sociology are complex. Barry Hindess (1988), in his critique of rational choice theory, advanced a minimal concept of the agent as a site of decision and action, where the action is in some sense a consequence of the agent’s, the actor’s, decision. “Actors do things as a result of their decisions. We call those things actions, and the actor’s decisions play a part in their explanation. Actors may also do things that do not result from their decisions, and their explanation has a different form” (Hindess 1988, pp. 44–45). Hindess argued that a capacity to make decisions is an integral part of anything that might be called an agent. For Hindess, therefore, state agencies, political parties, football clubs, and churches are all examples of actors in his minimal sense. “They all have means of reaching decisions and of acting on at least some of them” (Hindess 1988, p. 46). The actions of Google’s Autocomplete feature, an organizational agent, are, of course, always dependent upon the actions of others, such as managers, elected officers, employees, and other organizations.

Digital personae raise key issues of control and of the extent to which an agent can intentionally interfere with the creation and maintenance of a digital persona. Businesses, governments, and individuals attempt to manipulate digital personae to make them attractive, by mimicking human behavior or providing visual and other cues that enhance the possibility of trust, precisely the concern in the United States about Russian interference in its elections. Never before has this role for nonhuman agents been possible, or their impact so far-reaching.

In the case of normal everyday affairs, of course, there is a range of databases and many organizations involved in collecting information on individuals and creating profiles of them. A number of terms have been proposed to describe these profiles, such as Dividual, epers, shadow order, data double, capta shadow, databased self, and Cyber-I (Clarke 2013). However, digital persona best captures what happened to Bettina Wulff because it focuses clearly on the very idea of the person, of how we present our self and our identity to others. Roger Clarke coined the term digital persona in 1992 and continues to explore it (Clarke 2014). His original motivation was that “we need the construct as an element in our understanding of the emerging network-enhanced world” (Clarke 2014, p. 82). Internet robots, agents that can act and interact with other agents, human and nonhuman, now account for over 60 percent of Internet activity (Madrigal 2013), and understanding the behavior of human and nonhuman agents is now essential in a democratic society that values autonomy.

Clarke (2001) further distinguished between active and passive digital personae. An active persona, in a digital context, is an agent that acts on behalf of the individual and runs in the individual’s workstation or elsewhere on the Internet. A simple implementation of this idea is the vacation feature in email servers, which returns a message such as “I am away on holidays until <date>.” Where the sender is a mailing list, this may result in the broadcast of the message to hundreds or thousands of list members. A passive digital persona does not involve projection of the persona into the online world. Figure 2, adapted from Roger Clarke (2001), shows the operation of a passive digital persona. A Visa card transaction is an obvious example of part of the creation of a passive digital persona within the Visa system.
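
A minimal sketch, in Python, of the logic behind such a vacation responder shows both the projected persona (the automatic reply) and the guards, along the lines of RFC 3834, that well-behaved implementations use to avoid the mailing-list broadcast just described. The header checks reflect real email conventions; the function names and message text are illustrative.

```python
# A minimal sketch of a "vacation" active persona. The guard logic
# follows the conventions codified in RFC 3834; names and message
# text here are illustrative.

from typing import Optional
from email.message import EmailMessage

def should_auto_reply(msg: EmailMessage) -> bool:
    # Never answer another automatic message (prevents reply loops).
    if msg.get("Auto-Submitted", "no").lower() != "no":
        return False
    # Never answer mailing-list traffic: this is the guard that stops
    # the "I am away on holidays" reply being broadcast to hundreds or
    # thousands of list members.
    if msg.get("List-Id") or msg.get("Precedence", "").lower() in ("bulk", "list"):
        return False
    return True

def vacation_reply(msg: EmailMessage, until: str) -> Optional[EmailMessage]:
    if not should_auto_reply(msg):
        return None
    reply = EmailMessage()
    reply["To"] = msg.get("Reply-To", msg.get("From", ""))
    reply["Subject"] = "Auto: " + (msg.get("Subject") or "")
    reply["Auto-Submitted"] = "auto-replied"  # mark our reply as automatic
    reply.set_content(f"I am away on holidays until {until}.")
    return reply
```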

Fig. 2 Passive digital personae

Projected active digital personae include mail filterers, news filterers, and knowbots (intelligent searchers of networks). Active digital personae can be projected by the individual or imposed by others. The difference between active and passive lies in the degree to which control can be exercised over what is happening to the persona. If individuals are projecting their own persona, they may wish to create filters around themselves and restrict the bombardment of information through the networked world. Clarke (2013) distinguishes between passive digital personae as superficial digital representations of a person or people and formal digital personae as structured data representations that arise from actual online data transactions. If someone gains unauthorized access to a person’s computerized health data, for example, then they could construct a persona based on that structured data. Informal digital personae are, by contrast, based on people’s perceptions. Clarke (2013) also distinguishes between a projected persona, the persona that a person wants to project, and an imposed persona, an image created by someone else. In the 15 or so years since Clarke introduced the construct, there has been a range of developments in information technology, not least the increasing sophistication of social media networks and their associated algorithmic nonhuman agents, which can have their own personalities, represent or gather information about real personalities, or impose personalities online. While, at a theoretical level, how “all the layers of digital personae are simultaneously woven in a complex situation” remains obscure, there can be little doubt that the hypergiants, the superaggregators, are well aware of the impact of digital personae (de Kerckhove and de Almeida 2013, p. 277).

The Hypergiants (Super Aggregators), Doxing and Swarms

We’re building toward a web where the default is social. Every application and product will be redesigned from the ground up to use a person’s real identity and friends. Mark Zuckerberg, CEO Facebook (Hongladarom 2011).

There is something innately threatening about a persona, constructed from data and used as a proxy for the real person. It is reminiscent of the popular image of the voodoo doll (Gibson 1984, p. 97).

We now know that Facebook failed to compel the British political consulting firm Cambridge Analytica to delete all traces of data from its servers after it acquired details of 87 million Facebook users. These data enabled the company to retain predictive models drawn from social media profiles during the Clinton-Trump US presidential election (Lewis et al. 2018). A person’s ability to control their own persona is, of course, compromised when their data are sold to others without their knowledge. But third-party sales are not the only problem facing the modern digital persona. Personal dataveillance, decisions based on low-quality data, lack of subject knowledge of, and consent to, data flows, blacklisting, denial of redemption, arbitrariness, acontextual data merger, and the complexity and incomprehensibility of data, among many others, affect the degree of agency of any one of us. It is not within the compass of this chapter to cover all the possible variations in technical control, but the major Internet aggregators are well aware of the impact of various technical matters on business and others. In its 2012 regulatory filing, for example, Facebook identified over 83 million “fake” accounts: impersonations, fake people, real pets, fake pets, “undesirable accounts” (undefined by Facebook), and others. Facebook understands these sites as agents, potentially damaging to other agents and to its own users. A person’s Facebook identity is not simply a page but a network, where people feel more or less in control of what they do on that network. Over 11 million teenagers left Facebook for other platforms, such as Instagram and WeChat, not only because of “cool” but precisely because of issues of control and, importantly, the “right to be forgotten” (Kampmark 2015; Rosen 2012).

While Facebook seeks to control fake personalities, including fake and real pets, this does not extend to every nonhuman. Grumpy Cat (Fig. 3), Boo the Dog, and the popular Hatsune Miku (a Vocaloid) are good examples. In 2013 Grumpy Cat won the Webby Award for Meme of the Year. Grumpy Cat, who has her own “reps,” is a highly successful nonhuman, nonanimal, active digital persona, which at the time of writing had 8.6 million followers on its Facebook page. Grumpy Cat’s owner made 24 million pounds from Grumpy Cat’s first two years as a digital persona (Goldhill 2014). Adweek in 2013 provided a summary of Grumpy Cat’s competition on the Internet. Tabatha Bundesen and her brother first posted pictures of the cat on Reddit, after which the Internet community started to create memes. Grumpy Cat the business has been monetized into fan sites, T-shirts, books, and wall calendars, among much other merchandise.

Fig. 3 Grumpy Cat (the official picture). (Source: Grumpy Cat used under Fair Use)

Facebook calls Grumpy Cat and other digital personae making money on the Internet, like Boo the Dog, “public figures” in order to get around the problem of calling them “fake sites” when reporting to the US government (Sneed 2013). It is not surprising that nonhuman digital personae get status as public figures. Vocaloid 3-D projections came into public physical spaces in 2010 and have become as popular as Facebook-based animal celebrities. Vocaloid is a singing voice synthesizer created by a joint university and Yamaha project. Hatsune Miku has become one of its nonhuman stars. She is a singing digital avatar created by Crypton Future Media. People can buy her and then program her to perform any song on a computer. But in 2010 she also went on tour as a 3-D hologram (http://www.mikufan.com/). Miku would appear to breach Clarke’s definition of active digital personae as “running in” computers. Miku, however, is still tied to computers and networks, even when she is a projection. What is important is that she is a nonhuman active digital persona, not as a “one-off” but, like Grumpy Cat, as a personality in a relationship with fans.

Readers might stop, at this point, and ask what Grumpy Cat has to do with management in crisis or with the title of this chapter. All of the examples so far touch on identity and trusted sources. The “good old days” of yellow journalism have returned, but with a twist. Public opinion and market information can now be manipulated by nonhuman agents. Grumpy Cat is a fairly transparent digital persona, supervised by a human agent. President Trump is an active digital persona, a “doxing” President, with 5 million or more fake followers exaggerating his impact on public opinion.

Communities can also combine to create digital personae. #Gamergate is a good example, where a gaming community used its skills to dox others and, in doing so, created a crowd persona that dramatically affected individuals. In August 2014, Eron Gjoni, a programmer, wrote posts about Zoe Quinn, developer of the controversial game Depression Quest, accusing her of sleeping with a video game journalist, Nathan Grayson (https://thezoepost.wordpress.com/). Gjoni had been in a relationship with Quinn (Dockterman 2014). Anita Sarkeesian, a high-profile critic of sexism in games, became, with others, the target of abuse, with one person even creating a computer game in which players were invited to abuse her (Dockterman 2014).

There were approximately 1.8 million users of the #gamergate hashtag between September 25 and October 25, 2014, according to the analytics group Topsy.com. It is not surprising that terms such as “sealioning” and “swarm” emerged precisely to describe the #gamergate phenomenon. Digital personae were not just individuals; groups formed as crowd digital personae. “Gamers” with doxing skills of a high order demonstrated how quickly personal information that most people could not access can be deployed. Sarkeesian’s own digital persona, @femfreq, Feminist Frequency, was able to rapidly mobilize opposition to those in the gamer community who were using gamergate as a vehicle for misogyny.

Andy Baio (2014) worked with the chief data scientist at Betaworks in the USA and mapped the social graphs of everyone in a dataset of 316,669 tweets associated with #gamergate in order to visualize the different personae, using the open-source package Gephi. Figure 4 shows the hundreds of small communities falling into two major poles, anti-gamergate on the right-hand side and pro-gamergate on the left-hand side, with little to no intersection between them.

Fig. 4 #gamergate Swarm. (Source: Betaworks)
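
As a hedged reconstruction of the kind of pipeline behind Fig. 4, the sketch below builds a directed retweet graph with the open-source networkx library, detects communities, and exports the result for layout in Gephi. The retweet records are invented stand-ins; Baio and Lotan’s actual dataset and Gephi settings are not reproduced here.

```python
# A hedged reconstruction of the kind of pipeline behind Fig. 4. The
# retweet records below are invented stand-ins, not the actual dataset.

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Each pair: (author, account_retweeted), drawn from #gamergate tweets.
retweets = [
    ("user_a", "femfreq"), ("user_b", "femfreq"), ("user_a", "user_b"),
    ("user_c", "progg_hub"), ("user_d", "progg_hub"),
]

G = nx.DiGraph()
G.add_edges_from(retweets)

# Community detection on the undirected projection surfaces the two
# poles, pro- and anti-gamergate, that dominate the visualization.
for i, community in enumerate(greedy_modularity_communities(G.to_undirected())):
    print(f"community {i}: {sorted(community)}")

# Export so the graph can be laid out and colored in Gephi, as in the
# original analysis.
nx.write_gexf(G, "gamergate.gexf")
```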

Trolls and others, of course, unlike the participants in the #gamergate controversy, do not necessarily disclose who they are or, alternatively, they create fake versions of other people. Chuck Norris, the US celebrity, fell victim to sites pretending to be him, which led to the famous Chuck Norris Facts site, providing humorous comments about Norris. Norris decided not to sue, appreciating that he had an expanded set of fans who were not using their sites maliciously. “I know there are a lot of fake Chuck Norris pages on places like Facebook & Myspace. To cut down on confusion, this is the only page I personally have on Facebook & I don’t have a page on Myspace. If you ever want to find out if a page is really mine, you can visit my website link […]. Sincerely your friend, Chuck Norris” (Norris 2011).

The terrorist group Islamic State, or ISIS, was adept at being malicious and, as a comprehensive Brookings Institution study demonstrated, technically proficient at creating Twitter swarms and mass-producing Twitter accounts to influence perceptions of the size of ISIS. Figure 5 is a representation of the different accounts and of the degree of reciprocity obvious among them.

Fig. 5 ISIS Twitter. (Source: Berger and Morgan (2015) The ISIS Twitter Census: Defining and describing the population of ISIS supporters on Twitter)
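
The reciprocity visible in Fig. 5 has a standard graph-theoretic measure: the fraction of directed follow edges that are returned. A minimal sketch, with invented follow edges standing in for the Berger and Morgan census data, might look like this:

```python
# A minimal sketch of the reciprocity measure visible in Fig. 5. The
# follow edges are hypothetical stand-ins for the census data.

import networkx as nx

follows = [("a", "b"), ("b", "a"), ("a", "c"), ("c", "d"), ("d", "c")]
G = nx.DiGraph(follows)

# Fraction of directed edges that are mutual: 1.0 would mean every
# follow is returned, the hallmark of a tightly reciprocal swarm.
print(nx.reciprocity(G))  # 0.8 here: 4 of 5 follows are returned
```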

The sophistication of the online collection of data about individuals is now well known. Acxiom is perhaps the largest of these aggregators. It works on a global scale and on-sells data about individuals to whoever can pay for those data. Many people have not heard of Acxiom. Most people, though, would have flash cookies on their computers that collect data for aggregators like Acxiom (flash cookies are normally undetectable, except now in Firefox browsers). Acxiom has in its database approximately 1,500 facts about half a billion people worldwide. It works behind the scenes for Google and many other major Internet companies (Mason 2009). Acxiom holds passive digital personae.
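
Flash cookies, more formally Local Shared Objects, are ordinary .sol files written to disk outside the browser’s normal cookie store, which is why they survive cookie clearing. A minimal sketch of how one might enumerate them follows; the storage path is the typical Windows location and is an assumption, as macOS and Linux use different directories.

```python
# A minimal sketch of enumerating Flash "Local Shared Objects". The
# path below is the typical Windows location and is an assumption;
# macOS and Linux store .sol files in different directories.

from pathlib import Path

lso_root = (Path.home() / "AppData" / "Roaming" / "Macromedia"
            / "Flash Player" / "#SharedObjects")

if lso_root.exists():
    for sol in lso_root.rglob("*.sol"):
        # The directory structure usually records the domain that set
        # the object, one directory per site.
        print(sol.parent.name, sol.name)
else:
    print("No Flash shared-object store found at the assumed path.")
```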

We are now in a position to provide some clarity on the differences between active and passive digital personae and on the supervised and nonsupervised contexts in which they operate. Table 1 distinguishes between contexts where humans are directly involved in the activities of a persona and those where they are not. Donald Trump’s Tweets have a human agent behind them, as far as we know. They are “supervised” in a general sense. The fake Chuck Norris sites, while generated by humans, are not supervised by him (although they now have his imprimatur). Grumpy Cat, however, is fully nonhuman, although supervised. Data collected by the hypergiants and other major aggregation sources are not supervised by the human agents from whom the data are collected.

Table 1 Human and nonhuman digital personae and human agency

                     Supervised                Nonsupervised
Human persona        Donald Trump’s Tweets     Fake Chuck Norris sites
Nonhuman persona     Grumpy Cat                Hypergiant and aggregator data

“Control,” like constructs such as “power,” involves a range of other constructs in any conceptualization or measurement (Lukes 1974). Table 1, therefore, is not intended as a final clarification of the complexity surrounding contemporary digital personae. However, there can be little doubt that crowd and individual digital personae can and do influence activity on the Internet and human perceptions based on that activity. There is also a level of technical complexity in generating personae that is beyond many people’s competencies, and for many of us it is difficult to know when hypergiants are manipulating our personae.

Individuals wanting to control their own sites have their correlate in super aggregators, like Google and Comcast, wanting to control their spaces. Super aggregators can and do decide to bias one form of traffic over another. This is no minor issue and has led, of course, to the Net Neutrality debates. The aggregators in all their forms have become more concentrated and more influential, as well as more complicated. Google, for instance, purchased the currency platform Jambool to help developers manage and monetize their virtual economies across the globe (Takahashi 2010). Google, like Comcast, has vested interests in where traffic goes, and a banking platform would no doubt affect the nature of traffic flows. Google could, for example, limit an aspiring creative artist’s digital persona by exercising what is called “ramp control” (Leaver et al. 2012, p. 2), slowing down the service in order to stop the artist uploading the bandwidth-heavy materials that exemplify their work, or even potentially affecting banking services. The individual might never know that it was the super aggregator limiting their activities and would probably be left to think that they needed a new service provider to increase bandwidth.
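
The mechanics of this kind of throttling are well understood. The sketch below implements a token-bucket limiter, a standard traffic-shaping technique, to show how a provider could silently cap a sender’s sustained upload rate; the class and its parameters are illustrative, not a description of any provider’s actual system.

```python
# A minimal token-bucket sketch of the throttling that "ramp control"
# implies: the provider silently caps a sender's sustained rate, and
# the user only sees a slow connection. Parameters are illustrative.

import time

class TokenBucket:
    def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def throttle(self, nbytes: int) -> None:
        """Block until nbytes may pass at the configured rate."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return
            time.sleep((nbytes - self.tokens) / self.rate)

# Capping an artist's uploads at 50 kB/s regardless of the line's speed:
bucket = TokenBucket(rate_bytes_per_s=50_000, burst_bytes=100_000)
for chunk_size in [64_000, 64_000, 64_000]:
    bucket.throttle(chunk_size)   # each call may sleep to enforce the cap
    # ... send chunk upstream ...
```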

Social Presence, Social Proof, and the Social Distribution of Knowledge

Which brings us back to President Trump’s Tweets. He is not seen as a fake persona by his followers, and his Tweets are perceived to come directly from him. In the language of theory, Trump has achieved social presence, trust, and social proof. These three dimensions have become important constructs in explaining the behavior of human agents in the study of digital personae. Social presence is conceived as the extent to which “the medium permits users to experience others as being psychologically present” (Fulk et al. 1987, p. 531). Since the construction of a digital persona is necessarily mediated through digital media, social presence plays a key role for the actively created digital personae of human agents. Previous research found that displaying human photos (Cyr et al. 2009) or avatars (Teubner et al. 2014) on a Web site can increase a person’s perceived social presence. Similarly, many agents use human photos and avatars in the process of actively creating their digital persona. Importantly, an increase in social presence positively influences a person’s level of trust towards a platform (Gefen and Straub 2003). Trust is a key construct in technology-mediated transactions. In electronic commerce, it is conceived as “an expectation that others one chooses to trust will not behave opportunistically by taking advantage of the situation” (Gefen et al. 2003, p. 54). In the context of the digital persona, trust has to be understood as a broader concept, because it involves not only possible future decisions but more subtle dimensions of trust, such as engagement and public opinion (Balnaves et al. 2014). This also becomes evident in the terminology of the “follower” employed by the prominent social networks on which digital personae are primarily created.

Another reason why social presence and trust play a critical role for digital persona is that the purpose for actively (and often also for passively) creating a digital persona is – directly and indirectly – related to marketing activities in the broadest possible sense. A key construct in this context is social proof, that is, “the fact that people tend to believe that decisions and actions taken by the majority reflect the correct behaviour in a given situation” (Vastola et al. 2014).

The tendency to see an action as more appropriate when others are doing it normally works quite well. As a rule, we will make fewer mistakes by acting in accord with social evidence than contrary to it. Usually, when a lot of people are doing something, it is the right thing to do. This feature of the principle of social proof is simultaneously its major strength and its major weakness. Like the other weapons of influence, it provides a convenient shortcut for determining how to behave but, at the same time, makes one who uses the shortcut vulnerable to the attacks of profiteers who lie in wait along its path. (Cialdini 1983, 1994).

While social interactions have evolutionarily been dominated by human-human, and for most of human history even face-to-face interactions, advances in digital media have turned the domain of social interaction into a “mixed zone” in which sentient human beings and computerized agents interact (Riedl et al. 2014; Teubner et al. 2014). Agents in the digital society form beliefs about the agency of digital personae they encounter, and these beliefs in turn affect their intentions.

It can be reasonably argued, therefore, that Donald Trump has established social presence and social proof with his digital persona. Trump’s followers, by extension, also see his news as part of what Alfred Schutz (1946) would call their common intrinsic interests. Schutz’s (1946) work on zones of relevance and interest within social phenomenology fits well here. A person’s place in the social distribution of knowledge, for Schutz, is defined by the type of knowledge that they possess and the social role that they have at any particular point in time. There are, for him, four regions, or zones, of decreasing relevance of knowledge. There is knowledge and activity of primary relevance within our reach, which can be immediately observed by us and also at least partially dominated by us. The zone of minor relevance is where individuals may be acquainted with knowledge that contains reference to their chief interests. Zones of knowledge that are relatively irrelevant or absolutely irrelevant are areas of knowledge that people take for granted, where the “that” and the “how” of things are not essential. For example, no car driver is supposed to be familiar with the laws of mechanics, and no radio listener with those of electronics, although there are circumstances, such as for experts or enthusiasts, where such knowledge might be of primary relevance (Balnaves and Willson 2012, p. 69).

There is a point in society where my competencies allow me to operate either badly or well, especially in a modern society that puts a premium on knowledge. My set of competencies in building and managing my digital persona, therefore, is directly related to the amount of control and the degree of agency that I have over my persona. The more others control my digital persona online, the less capacity I have to change any imposed persona and the more an imposed persona is acting on someone else’s behalf. A person might, perhaps, voluntarily surrender their persona to another, and the populism of those like President Trump would fall into that category.

Conclusion

This chapter has not been written as an anti-Donald Trump piece. However, William Randolph Hearst and Donald Trump share similarities not only in their personalities but also in their capacity to generate, purposely, false news. Hearst would never have expected yellow journalism to become standard news, despite his voracious appetite for fake news. Nor would Hearst have expected a President of the United States to occupy the newshole so thoroughly. Hearst, no doubt, would have done exactly what Donald Trump has done: amplify through Twitter.

In this chapter, the author has attempted to show how digital personae, active and passive, have become a permanent part of how knowledge is distributed and acted upon in contemporary society, affecting individual and business decisions alike, as well as public opinion and markets. Jacques Derrida used the expression “democracy to come” to describe his ideal of democracy. Democracy, for Derrida, welcomes strangers, accepts diversity, and enhances participation (Lucy and Mickler 2006). While this chapter is not strictly about the theory or social constitution of democracy, the role of digital personae in democracy, as online citizens-consumers-organizations, is obvious, as is their role in electronic markets. The capacity to swarm has obvious implications for public opinion, the formation of social movements, and markets.

When Michel Foucault (1977) wrote about discursive practices as groups of statements that provide a language for talking about a particular topic at a particular historical moment, he did not have in mind nonhuman Internet robots and algorithmic languages (although mathematics, for him, is a discursive practice). Each discursive practice implies a play of prescriptions that designate exclusions and choices. Humans have developed an ability to make inferences about their counterparts’ mental states (cf. mentalizing or theory of mind, Frith and Frith 2006). This ability enables humans (i) to assess and predict the intentions, beliefs, and behaviors of others in communicative, collaborative, and competitive social interactions and thus (ii) to increase chances of survival and overall human success.

The idea of “mixed zone” fits well into Alfred Schutz’s ideas about our different positions in the social distribution of knowledge. The Internet has provided a system where one’s digital persona might be imposed by others, what Schutz would call “imposed interests” compared with “common intrinsic interests.” A person is, even without the Internet, dependent on the competencies of others, like doctors and accountants. However, the rise of digital personae changes the processes by which trust, social presence, and social proof operate. Islamic State was able to mobilize 47,000 Twitter accounts to project its digital persona – its knowledge of social media networks operates at technical and strategic levels.

The author has kept to the minimal concept of what counts as a social actor, proposed by Hindess (1988), as a site of decision and action, where the action is in some sense a consequence of the agent’s decision. This conception keeps intentionality in the theoretical picture. Other explanations of the effects of complexity in contemporary society have taken different forms. Actor-network theory (ANT, sometimes AT) is a contemporary example of the attempt to analyze the role of human and nonhuman agents and their impact on society. Donald MacKenzie (2006) is a famous example of the application of the ANT concept of performativity to financial markets. He looked at the relationship between financial models and the actual practices of financial traders and firms, how particular financial technologies are created, and how they affect market structures. For MacKenzie (2006), neo-classical economics is not real until it is enacted into being (performativity). Actor-network theory, enrolment theory, or the sociology of translation, created by Bruno Latour, Michel Callon, and John Law, looks at the agency of nonhumans, from animals to machines, and links the concepts of actor and network to bypass the classic distinction between agency and structure. Latour (2006), as a result, discounts traditional models of self and, by extension, intentionality: “there is no model of (human) actor in AT nor any basic list of competences that have to be set at the beginning because the human, the self and the social actor of traditional social theory is not on its agenda” (2006).

In the literature on persona, there is a wide range of definitions, reflecting differences in the idea of agency. Persona Studies, for example, takes a dramaturgical approach, where “persona, in terms of origins, in and of itself implies performance and display. Jung, for instance, calls persona a mask where one is ‘acting a role’ … I have used persona to describe how online culture pushes most people to construct a public identity that resembles what celebrities have had to construct for their livelihood for at least the last century” (Marshall 2014). ANT, on the other hand, emphasizes digital persona as a “hybrid or quasi-object”: “it is a combination of both human creation and technological mediation ... Online persona, in the terminology of ANT, is a constructed, performative presentation of identity. It is constituted through a combination of human action, ongoing mediation of present human action, and the automation, through technological delegation, of previous actions” (Henderson 2014). Sherry Turkle (1997), by contrast, argued that online selves demonstrate a “de-centering” of the very idea of “self”: “What I am saying is that the many manifestations of multiplicity in our culture, including the adoption of multiple on-line personae, are contributing to a general reconsideration of traditional unitary notions of identity.”

However, as argued in this chapter, active and passive digital personae, as originally conceived by Roger Clarke (2001), make a better fit for the highly technical nature of contemporary digital media and for how actions follow from actual activities, whether swarms, doxing, or profiling.
