Introduction: Dupery and Its Intentions

Dupery and deception are generally unpleasant acts, and perhaps the attempt to see a positive side might be regarded as naïve. However, some people are willing to explain why and how we have been duped, and their explanations can be of significant academic interest. Seeing how dupery works may help protect us from it. Not all deception has evil intent: most of us will have been taken in by benign forms at some stage. Young children believe that Father Christmas comes into their home during the night of Christmas Eve and leaves gifts, or that a magician is able to produce a rabbit from a hat. Being duped may be part of the delight of a well-written detective novel with a twist, or of a particularly funny joke. Usually, with benign deception, we can eventually see how we were duped—the intention is not to deceive forever. The magician provides an interesting exception: most magicians will not reveal their methods, but some have (apparently) been willing to do so.

There is no doubt that some people have malicious intentions and will use dupery to achieve them. Such people are not ignored in this chapter. Others, on the other hand, have more noble intentions when they deceive. A satirist may initially dupe audiences into thinking they are seeing a serious event and then shock them with humour that points up unflattering truths about a situation. This showed up in many comic memes at the start of the Covid-19 pandemic, where a video of an attempt to make a mask, or to create an uplifting song, rapidly shifted to a ‘true’ picture of what was happening. Effects of such memes include laughter and community-building, as the shared experience comes to light (Footnote 1). Another good intention might be to prevent further malign deception. For example, a university lecturer may initially use deception to demonstrate the ‘Barnum effect’ to students: showing by example that even well-qualified students can easily be duped by vague statements supposedly about their personalities or life experiences from people trying to manipulate them (Beins 1993). For both the satirist and the teacher—in the situations described above—the important aspect of their dupery is that it is deliberately exposed and explained.

This chapter explores why and how people have been willing to reveal the workings of a deception and what we can learn from those revelations. It draws on the work of satirists, magicians, hoaxers, con artists, hackers and academics. Some have had several such roles: our interest here is their role in exposing a deception. Their relationship with the truth has different nuances: while magicians have a code of keeping their methods secret, satirists depend on audiences recognising what they are doing, which may be a fabrication but can still expose an underlying truth. They also have different intentions: the same technique might be used by the magician to entertain and by the con artist to rob, but both are dependent on deception. Techniques are changing too, with greater opportunities offered by digital technology to discover or amplify deception. Exposure of deception is an explicit intention of some technologies, such as plagiarism detection, though these are also critiqued for their participation in a surveillance culture with its own deceptions (Bayne et al. 2020). The result is a complex situation, as we see from other chapters in this book, but the focus here is on what we can learn from people implicated in deception who have decided to ‘show the workings’. I explore this issue initially through four settings where deception is present, and someone is exposing something about it. These settings show conditions where:

  • We are not supposed to be deceived.

  • We know we are being deceived.

  • We may be deceived, but we are being educated to avoid it.

  • We are exposed to deception that exposes other deceptions.

Some agents in these areas have contributed to the broad academic field of deception detection. Others simply want to help. I look at people who aim to expose the truth in these contexts, before considering two related settings showing the workings of deception on a grand scale, dwarfing (but still including) what is happening in the more subtle situations listed above. The result is an unfolding of human responses to familiar contexts and actions and also to actions that go largely unnoticed… until something happens. Throughout the chapter the effects afforded by technology are highlighted. The role of education is also a significant theme.

Satire: We Are Not Supposed to Be Deceived

Satire does not necessarily depend on deception, and any deception used is likely to be short-lived, as shown in the YouTube meme in the previous section. Indeed, if a satirist has deceived someone, the practice has failed as satire. However, the workings of satire are interesting: if we can explore the ‘success-versus-failure conditions’ of a thick description of what satire is about (Ryle 2009), we can perhaps learn something from its failure in relation to deception. Here we consider satire’s failure with some audiences and then how it has failed with a target of satire in a context where there is deception.

Satire fails if the audience is deceived into believing it is genuine, as frequently happens, for example, when articles from the satirical magazine The Onion are shared as though they are true stories. This has the unfortunate consequence of further spread of falsehoods (Rubin et al. 2016). Rubin et al. see such failure as a good starting point for developing an automated tool to detect deception in news articles. They reason that as satire requires at least some people to detect its nature, there should be linguistic cues. The success of their algorithm points to a way of minimising deception in satirical news. However, we might wonder whether dependence on algorithms to detect satire destroys the nature of satire itself; part of its aim is to collude with its audience at an intellectual level. Perhaps education in linguistic cues—such as notions of the absurd and grammatical constructions—might help to keep satire alive, but delegating this detective work to an algorithm does feel like a further admission of failure.

Satire also fails if it has an inappropriate effect on its target, for example, if it becomes integrated into deceivers’ own repertoires. Armando Iannucci who famously satirised British politics in his popular television programme The Thick of It (2005–2012) explains why he cannot do it any longer:

Satire only works if there’s a series of accepted conventions and then you point out when the politicians or people in the public have departed from those conventions. But if the politicians are saying: ‘There are no more conventions anymore. I can do what the hell I like. I can change the name of my party to Factcheck UK’, it’s very difficult for someone to point out where those conventions have been breached. (Iannucci and Asthana 2020: 2:04-2:28)

Although the Twitter handle of the Tory party at the 2019 UK election sparked this observation, Iannucci has been commenting on satire for some time, tellingly in a New Statesman article where he observes: ‘…politicians no longer act like real versions of themselves. Instead, they come over as replicants of an idealised, fictional version of what they think a politician should be’ (Iannucci 2016: 88).

Idealised fictions are not the same as accepted conventions. Satirising an idealised fiction loses its bite. In this case, deception is still present, but the satire becomes a part of it. In an analysis of the satire in The Thick of It, Basu (2014) points to its targets—politicians and news media combined—as a ‘social apparatus’ (Foucault 1980) which is profoundly rotten. There is no redeeming feature, ‘not one character in the show with any integrity’ (Basu 2014: 92). It is ironic and perhaps depressing that the show itself has been incorporated into this very apparatus, as illustrated by politicians’ and news media’s overuse of an invented catchphrase (‘omnishambles’) designed to ridicule, but losing that effect through its adoption by the people it is satirising. The rotten social apparatus is reproducing itself, feeding on the satire.

Satire is often associated with parody (Sinclair 2020). Basu (2014) claims that The Thick of It is not parody though: ‘Parodies are able to “invade” other texts and “pollute” their meaning-making processes, and intertextuality can therefore be a subversive and liberating force’ (Basu 2014: 97). The opposite has happened with The Thick of It, because its own meaning-making has been polluted. While it undoubtedly succeeded in exposing the parlous state of the UK’s entwined politics and media, it ultimately contributed to that state, possibly leading to the alienation of its creator, Iannucci.

In short, satire’s job is to show the workings of a practice, often to reveal deception. It fails as satire if it deceives us into thinking it is the real thing; it fails in its mission if it is incorporated into the deceptive practices it targets. Satire’s tricks have to work. It needs to hold human failings up to ridicule, not as something to emulate. And if we need an algorithm to expose satire, we should be asking what that implies about education.

Magic: We Know We Are Being Deceived

… a unique form of deception, in which we know that we are being deceived, but still we are deceived. (Lamont and Steinmeyer 2018: 9)

Magic tricks have to work even though we know that they are tricks. While we the audience are looking for the sleight of hand, the piece of thread, or the hiding place, the successful magician has again left us gasping with wonder and possibly laughter too. Even though we know it is not really ‘magic’ in an occult sense, we believe what we see, something that should be impossible. And we are entertained by that belief, even while we wonder how on earth it was done.

Despite the motto of the UK’s Magic Circle being ‘not apt to disclose secrets’ (The Magic Circle 2020), there do seem to be some magicians willing to talk about their craft and even give away some tricks. As breach of the motto can lead to expulsion from the exclusive circle, what the magicians do disclose is perhaps not overly secret, but it may nevertheless be of value to our exploration of deception. In a brief magazine article, Teller, of the famous duo Penn and Teller, reveals the secrets of one card trick in some detail. He also provides his basic principles for altering perceptions:

  1. Exploit pattern recognition.

  2. Make the secret a lot more trouble than the trick seems worth.

  3. It’s hard to think critically when you’re laughing.

  4. Keep the trickery outside the frame.

  5. To fool the mind, combine at least two tricks.

  6. Nothing fools you better than the lie you tell yourself.

  7. If you are given a choice, you believe you have acted freely. (Teller 2012)

This is a valuable list, as we shall see. Penn and Teller are particularly associated with revelations, perhaps because of their popular TV show Penn & Teller: Fool Us, where other professional magicians compete to fool the duo as to how their trick was done. The Penn and Teller analysis entails a post-trick discussion using insider ‘doublespeak’ to explicate the trick’s method. Revealing the trick’s method in this way doesn’t spoil it for the audience—indeed, it is possible to create more amazement (Pang 2015).

In any case, many of the methods of magicians have been known about for centuries, and it still doesn’t matter (Lamont and Steinmeyer 2018). These contemporary historians of magic say about Penn and Teller: ‘The audience is told how it is done, but it nevertheless seems impossible’ (Lamont and Steinmeyer 2018: 312). But are the magicians really giving away secrets, and is this right? Is it not an important aspect of magic that its secrets are always kept? Lamont and Steinmeyer say that we are told enough to satisfy our curiosity, but we should really be asking a different question: why does it work?

The ‘how’ is what bothers us, though, and distracts us from the ‘why’. Audiences have always tried to find out secrets and have been supported by changes in the technology of the era. Improved lighting, photography, film, television and now the Internet, among others, have all played their part in revealing tricks (as well as in performing them). Nowadays, we can easily find the secrets by looking online or using recordings. Technology has clearly aided our ability to keep reviewing tricks, pausing and magnifying the action to spot the moment where we’ve been distracted. This may take many iterations, if indeed it is possible; alternatively, it may be easy to spot when it has actually been done in plain sight, as is often the case. See, for example, the demonstration of a dropped lighter in a TED Talk by magician turned academic, Gustav Kuhn (2013). For those unwilling to put in the effort but unable to stand not knowing how something was done, the Internet provides many explicit demonstrations of magic tricks.

For additional examples of revealed tricks, it has been instructive to look at the website Magic Secrets Explained (Footnote 2). There are many other such sites, but this one contains an interesting question and answer about a young magician, Dynamo, known for stunning feats such as levitation and walking on water. The site asks: ‘Is Dynamo real or fake?’ This is in response to many apparently disappointed social media users who claimed to have seen the workings of Dynamo levitating at the London Shard, though this apparent reveal (of wires) might itself have been a deception.

The idea of a ‘fake’ magician calls into question my subtitle for this section: ‘we know we’re being deceived’. Presumably, the attribution of fakery means that some people see magicians as people with special powers, perhaps like those given to the fictional young wizard, Harry Potter. The ambiguity of the word ‘magic’ relating both to stage performances and to occult or paranormal experiences leads to a potential for complications in determining whether we are deceived knowingly or not. As Dynamo himself says: ‘It’s hard to know what to believe, in this day and age. And whether people want to believe what I’m doing is real magic or skill is up to them’ (Wolfson 2020: 13).

The website Magic Secrets Explained is in no doubt that what Dynamo is doing is skill. Showing the workings of magic to people who know they are being deceived but are still amazed by it has a very different effect from showing the workings to people who believe the magician has supernatural powers. Stories about the two kinds of magic have always co-existed, leading to myths about the gullibility of our ancestors in not being able to tell the difference between a stage magic trick and witchcraft (Lamont and Steinmeyer 2018). In the Victorian era especially, historians sometimes deduced that conjurors of old had been falsely accused of occult forms of magic. The evidence to support such claims was weak, though the inferences were understandable:

So, when we look more closely, we can see that none of these were actually conjurors who were persecuted for performing magic tricks. They may have used trickery to pretend to have genuine magical powers, but that, of course is another kind of deception. (Lamont and Steinmeyer 2018: 36)

The stories we tell about these two understandings of the same techniques persist to this day. And, as our contemporary historians of magic also observe, ‘magicians have not always played by the rules’ (Lamont and Steinmeyer 2018: 28). This is seen particularly strongly in the psychological sphere: for example, while our Victorian ancestors were less likely to believe in witchcraft, they could be taken in by magicians who turned to mesmerism and clairvoyance. These intertwining stories are still evident today. While ‘mind-reading’ that is known to be trickery falls into the category of stage magic, a magician who claims actually to have mind-control is crossing a line. This is what Derren Brown did when he began his mind-reading performance, claiming psychological powers that would have been regarded as paranormal in relation to current scientific knowledge (Lamont 2013).

Nowadays, Derren Brown denies having any ‘special’ psychological powers, and certainly no psychic ones (Brown 2019). In his books, he explains why people can be taken in by those who do claim such powers. He joins a long history of magicians who set out to debunk the activities of charlatans and quacks, as do Penn and Teller. Showing the workings of these particular practices falls into our next category of deception (fake science), and Derren Brown is not quite off the hook yet.

The main message from magic seems to be that knowing how a trick works is less important than understanding why we are likely to be deceived. Then, perhaps, we can be taught how to avoid it. The notion of a ‘fake’ magician—amplified by both contemporary obsessions with fakery and use of social media—suggests an additional need for education about the nature of stage magic and other genres and how they are affected by technology.

Fake Science and Classroom Deception: We May Be Deceived, but We Are Being Educated to Avoid It

Framing a magic performance as a psychological demonstration may …inadvertently help perpetuate false beliefs about psychology. (Lan et al. 2018: 2)

The idea of framing in the above quotation echoes point 4 in Fig. 13.1: ‘keep the trickery outside the frame’. The authors cited are concerned that performers such as Derren Brown use a pseudoscientific ‘story’ containing technical jargon as the background for their tricks: for example, Brown has suggested that he uses ‘unconscious primes’ to support his predictions, when really all he has been doing is conventional conjuring. One of the authors of the paper is Gustav Kuhn, himself a former magician who has become a psychology lecturer. He believes magic can inform the psychology of deception, which is particularly important in the era of fake news. In an interview for the British newspaper, The Observer, he says: ‘It’s not merely enough to tell people it isn’t real, to factcheck—people ignore that information. Only when we are told how a trick is done do we stop believing in it.’ (Wolfson 2020: 13)

Fig. 13.1 Altering perceptions: a list of seven basic principles (Teller 2012)

Kuhn was particularly shocked at undergraduates’ acceptance when a magician claimed to be talking to their dead relatives. Just telling them it was not true was not enough to convince them. Along with his co-authors (Lan et al. 2018), he was involved in an investigation of whether such framing—purportedly by either a stage magician or a psychologist—affected students’ beliefs in mind-reading. Alarmingly, they discovered the beliefs were not affected by whether the performer was seen as a magician or a psychologist. They conclude that realistic but fake evidence can be very powerful in perpetuating misconceptions and this needs to be challenged. All the stages of the trick need to be revealed.

Psychology students are often subjects for such experiments in universities. They may themselves want to use deception in experiments, and this has to be supervised carefully, for ethical reasons. One way of teaching students about deception has been to subject them to it, and a number of researchers/teachers have used deception to demonstrate the Barnum effect and also to discuss the effects of being deceived. The effect is named after the nineteenth-century showman and hoaxer P. T. Barnum, one of whose tricks was to ‘read’ personalities using stereotypical generic descriptions that individuals believe apply uniquely to themselves. The same effect might be seen in fortune telling and horoscopes, or any circumstances where ‘cold reading’ can be deployed—a technique also used by Derren Brown and a variety of con artists. Beins (1993) describes an example where students completed a so-called personality inventory and all received identical feedback. They were then asked how they felt about the deception involved. They felt some distress, but it did not last, and the majority of students thought it was an effective way to teach about deception as well as the Barnum effect.

Efforts to debrief the participants afterwards and explain the true purpose and methods used are crucial in cases where deception has been practised, and academic authors will stress this. In one notable case of classroom deception (Taras and Steel 2007), the academics and students involved continued to discuss the deception 2 years after the event, and the discussions showed that students had indeed consolidated the message. The students involved here were industrial relations students who had been deceived into signing up to organise collectively (i.e. commit to joining a union) to challenge a perceived ‘breach of contract’ with their professor: ‘sometimes the shock of being victimized by a well-planned trick is an opportunity for implanting a valuable life lesson’ (Taras and Steel 2007: 180).

The professor (Taras) told the class that their usual professor (Steel) was suspended, pending an investigation. She demonstrated sympathy towards him and the class and temporarily gained their trust, but then announced that there would be an adjustment resulting in loss of some bonus marks and also changes to the exam. When they complained, she offered them a sign-up sheet for a student association if they wanted to take their complaints further. Then she watched as collective action began to build up. The deception in this case was short-lived but profound: it was run in four classes and lasted only between 6 and 11 minutes. The debriefing and discussion about the deception lasted a lot longer. By ‘showing the workings’ through an escalating example of betrayal of expectations, the teachers were able to make points about injustices, consequences of decision-making, leadership, collective action and union-management relationships, all illuminated by the experience the students had just undergone. Moreover, the discussion about the professors’ tactics allowed them to make points about psychological contracts, violation of trust and comfort zones. These topics had all been difficult to discuss in class, where students had been resistant to the themes and previous attempts to engage them (case study, role playing, etc.) just had not worked. What did work was directly experiencing the emotions and the cognitive dissonance caused by the deception.

The perpetrators of the deception also debriefed themselves with the help of colleagues and through writing about it, paying attention to the ethics of the deception. This was important; they decided to disseminate their findings but would not want to repeat the experiment despite its success. It was too stressful for them. However, their final sentence is: ‘This definitely was the most effective classroom technique we ever used’ (Taras and Steel 2007: 196). Writing about it aimed to help future teachers in deploying deception, or even deciding if they should, and their experiences have led to a call for support for teachers using such challenging experiential techniques (Dean and Forray 2016).

When we add technology to the mix of deception in experiential teaching, the ethical issues become even more complex. For example, technology is widely used in educational simulations. Software could include visual immersive virtual environments and algorithms that underpin their operation. Hardware might provide three-dimensional immersive models of reality or even a simulated person (mannequin). Simulation can itself be regarded as deceptive because so many of its processes are ‘black-boxed’, reducing some aspects of human agency and embodiment and depriving students of the necessary ‘doubt’ about what they might be seeing (Turkle 2009). There are dangers of being seduced by the apparent authenticity of mannequins in nursing education. Although mannequins appear like real patients, their use still cannot take into account complex human responses which could lead to future errors if a student later expects a patient to respond in exactly the same way (Dunnington 2014).

Calhoun et al. (2015) discuss a case where deception added to a simulation led to a mannequin’s ‘death’, raising deep pedagogical and psychological issues for the simulation community, especially concerning the need for debriefing. A senior clinician had entered the simulation of a cardiac arrest and ordered an incorrect medication to be given. This is described as ‘misdirection’. The paper refers to Erving Goffman’s use of ‘frames’ (Goffman 1974) to consider the relationship between the ‘primary frame’ (the real world) and its recreation in a simulation. Framing is about how we understand and act in the world: we’ve already seen it in relation to magic and pseudoscience. Key issues here are the difficulties of working across frames when deception has been involved and an associated loss of trust.

All the people referred to in this section have experienced difficult emotions through practising and revealing deception: magicians and psychologists are shocked by the easily manipulated beliefs of undergraduates; teachers worry about potential effects of deception on their relationship with their students, which should be based on trust. As with satire and magic, one thing is clear: when teaching through deception, it is important to know what one is doing and what its criteria for success and failure are. For example, a deceptive classroom intervention might be deemed to fail (as teaching) when there is a subsequent loss of trust in the teacher. There is also a clear warning that in its mediation of additional ‘frames’ of activity, technology can add complexity to an already difficult situation.

Reactions: We Are Exposed to Deception that Exposes Other Deceptions

Sometimes dupers deliberately deceive in a tit-for-tat way to respond to other dupers, perhaps exposing them or even pre-empting them. Such deception arises when people are dissatisfied with something in their environment and respond by doing something deceptive themselves.

The first example here continues the theme of university teachers and researchers concerned about effects of deception on their students and their practices. It also draws on the role of technology in reframing academic research and teaching as performance indicators through massive datafication (Williamson et al. 2020). In what initially appears to be simply a whistle-blowing account of ‘dishonesty, deception and deceit by universities in the UK in pursuit of quality indicators’ (Rolfe 2016: 173), the startling conclusion is that the most effective strategy to deal with it is often more deception and dishonesty. The paper makes an interesting distinction between deception and deceit: the deception here is of the magician’s kind (‘we know we have been deceived’) and deceit is what commands a negative response if we are trying to be ethical. Dishonesty, Rolfe says, comes in between: telling lies may, in defined circumstances, be morally acceptable, though not all ethicists would agree. In research assessment, deception occurs through overuse of citation practices and skewing submission to only high-status journals, or engagement only in prestigious and profitable projects even though smaller initiatives might be more beneficial to some groups. All of these activities produce higher metrics, suggesting greater quality.

Deception turns to deceit when the tricks and strategies are used to claim improvements in quality without any actual changes to the professional or academic practice having been made. In teaching, deception and then deceit occur through enhancing league table positions through grade inflation, or artificial improvement in completion statistics. Rolfe’s concern is that in professional courses such as nursing, the reduction in standards following such deceit may be life threatening. His response has been to adopt a subversive approach to protect his academic values: ‘Even the most mildly subversive academics will be familiar with the adage that it is easier to obtain forgiveness than permission, which is an incitement to ‘do the right thing’ and apologize afterwards’ (Rolfe 2016: 180). The subversive approach is detailed in an earlier work (Rolfe 2013: 80), which stresses the importance of intentions, advising that we should be ‘good, collegiate and radical’ and explains what this means for his own practice.

A less collegiate approach to challenging practices in academic publication is the hoax submission. A notable example was the Sokal affair in 1996: a physicist submitted a fake paper on ‘quantum gravity’ to the postmodernist academic journal Social Text and subsequently produced an article for another journal, Lingua Franca, that revealed his deception (Sokal 1996). There he gives the reason for his parody as wanting to expose the (mis)appropriation of scientific language by cultural studies writers and the lack of rigour of a review process that accepted nonsense and ‘sloppy thinking’. A huge controversy emerged after the publication of the two articles, which lasted for some years.

The affair revisits some themes seen earlier, especially if we think about the conditions that determine what kind of deception a hoax is. In unpacking its rhetorical dimensions, Secor and Walsh (2004) suggest that the situation could have been prevented if all the players had been more sensitive to text, context and genre. They conclude:

They—we—can certainly try to keep our preconceptions from blinding us, and it is a noble effort. But savvy hoaxers like Sokal know we will fail. A clever magician can hide an elephant right in the middle of the stage by getting people to look in one direction while he blindsides them from another. (Secor and Walsh 2004: 89)

Sokal’s blindsiding involved a parody that was extremely close to the conventions of the genre, mimicking the specific appropriation of scientific language. At the same time, it did include guideposts to its parodic/satiric nature, as parody tends to do.

There is a long history of hoaxes, committed for a variety of reasons. Some are just fraudulent, and others, like the Sokal one, are apparently concerned with exposing the workings of a practice. Some are designed to prevent a practice before it occurs. An example is the mountweazel, a fake word in a dictionary, encyclopaedia or other reference source, inserted in order to detect plagiarism if another work also presents it as a real word. This device was named after a famous entry in the 1975 edition of The New Columbia Encyclopedia. Mrs. Lillian Virginia Mountweazel was apparently born in Bangs, Ohio, and died in an explosion 31 years later after an interesting, if short, life. However, she did not actually exist, as was admitted by an editor some years after it was published.

Fakery in works of reference that are expected to be impeccable sources of information may be disconcerting. Someone will have to show the ‘workings’ if the practice is successful in exposing the plagiarism, though the editors will hope it won’t be required, and the task would be delegated to someone other than the perpetrator. In a fascinating exegesis of the Mountweazel encyclopaedia entry, Williams (2016) recognises it as a piece of metafiction as well as a copyright trap. It has been carefully constructed to deceive readers about its authenticity; yet (because parody is involved) it contains allusions to its own role and construction that give clues to those in the know, for example, the association with the expression ‘weasel words’. Deceivers can be proud of the artistry of their deception.

In this category of contexts—deception that exposes other deceptions—we see a move to more aggressive uses of deception, though still for avowedly good purposes. By subverting the authority of universities, academic disciplines and encyclopaedias, the deceivers expose practices that need to be challenged. These include the datafication that results in black-boxed ways of referring to research and teaching ‘excellence’ (see also Fawns and Sinclair 2021). The interesting example of the mountweazel shows that some dupers, such as effective parodists and perhaps even ethical subversives, are likely to be proud of their skills. But the deception is risky; it results in a lack of trust not only in the institutions but also potentially in the dupers who tell.

Dupers Who Tell: Whistleblowers, Debunkers, Provocateurs and Poachers Turned Gamekeepers

Just as there are different intentions in dupery, there are different reasons for its exposure. The names we call dupers who tell are associated with risk and disapproval from the powers-that-be. Whistleblowers want to provide inside information on practices they are expected to support: Rolfe (2016) comes into this category with his exposure of deception and deceit in UK universities. Magicians and teachers become debunkers when they see practices that threaten to harm. Teachers who provoke their students into a response to injustice, or towards learning that blind obedience can have fatal consequences, might be called provocateurs, as might Sokal (1996) with his hoax against the cultural studies academics. Lexicographers who introduce mountweazels are bordering on the sense of provocateur that leads to entrapment, though their work has been seen by Williams (2016) as a form of whistleblowing too, about the nature of the relationship between authoritative texts, readers and writers. All of our dupers might be loosely regarded as ‘poacher turned gamekeeper’: ‘someone whose occupation or behaviour is the opposite of what it previously was, such as a burglar who now advises on home security’ (Collins English Dictionary 2020).

A satirist comments rather than lampoons; a magician reveals rather than hides mind-reading tricks; teachers deceive students (briefly) rather than elucidate; an academic deceives the employer through good behaviour rather than the promotion of business targets; a writer or lexicographer subverts the integrity of his or her publisher’s publications. But the strongest example of poacher turned gamekeeper is, as the definition above suggests, the criminal who brings knowledge and advice to those trying to prevent crime.

The Extent of the Problem of Dupery: Insider Stories

One way to learn about deception is to have it explained by someone who has been previously jailed for it. I have identified two candidates to guide us; they share some remarkable similarities, but also differ in how they ‘frame’ themselves. Comparing them illuminates a specific contemporary practice: hacking.

Frank Abagnale is one of the world’s most notorious con artists, and Kevin Mitnick is one of its most notorious hackers. Both served a few years in prison after high-profile captures. Both subsequently became legitimate security consultants. Both have written revealing books about fraud, exposing the tricks of fraudsters, including themselves: for example, The Art of the Steal (Abagnale 2001) and The Art of Deception (Mitnick and Simon 2002). Both men are proud of their skills and enjoy showing their workings.

The confidence trickster or con artist is so called because of techniques used to make the victim feel confident and trusting. The two main elements of a con trick are the victim’s trust and an alluring bait (Orbach and Huang 2018). Con artists have an expression for what their expertise comprises; they get their targets ‘under the ether’:

Ether is a condition of trust and even infatuation with what is being presented. Getting a victim under the ether is crucial to all cons, no matter where or how they are perpetrated. This heightened emotional state makes it hard for the victim to think clearly or make rational decisions. To get their victims under the ether, fraudsters hit their fear, panic, and urgency buttons. (Abagnale 2019: 24)

To establish trust, con artists use dress, manner and tone that inspire confidence and sound comforting. It takes skill, then, to hit these other ‘buttons’ as well in order to achieve the scam. Abagnale advises us to know what our own ‘emotional hot buttons’ are, as scammers are expert in finding them (Abagnale 2019: 37); this gives them a way in, distracting our attention while they do something that we cannot discern. Similarly, Mitnick refers to ‘developing a ruse that stimulates emotions, such as fear, excitement or guilt’ (Mitnick and Simon 2002: 105), and several of the scams he uncovers also incorporate a sense of urgency.

Mitnick calls his former self a ‘social engineer’, which he sees as a speciality among con artists. A social engineer uses the con artist’s tricks to influence and persuade people to do things, usually involving giving away information. For Mitnick this meant easier access to information about phone networks and computer security. Like Abagnale, Mitnick posed as a range of different characters, using insider language and knowledge of procedures to convince people he was entitled to do what he was doing. He distinguishes himself from the other type of con artist, the swindler, who likes to find greedy targets who will fall for a con; social engineers are more likely to seek out trusting and good-natured people (Mitnick and Simon 2002: 195). Both Abagnale and Mitnick observe that deceptions are magnified when technology is added: ‘Technology breeds crime and it always has’ (Abagnale 2001: 18); ‘Social engineering attacks may become even more destructive when the attacker adds a technology element’ (Mitnick and Simon 2002: 191).

What is particularly striking about Mitnick’s fictionalised accounts of his activities is the level of detail a social engineer is prepared to invest in setting up tricks, reminiscent of the magician’s second principle in Fig. 13.1 (make it a lot more trouble than the trick seems worth). Indeed, magic was Mitnick’s main fascination as a child, leading him to want to find out how things work, especially phones and computers. He claims that all his ‘misdeeds were motivated by curiosity’ (Mitnick and Simon 2002: xii) and that he is ‘not a malicious hacker’, even though that is what he is famed for being. This observation highlights the ambiguity of the word ‘hacker’: as we saw earlier with magicians, there is more than one type, and we need to know which one we are dealing with before we can consider the nature of any deception involved.

Mitnick prefers to think of a hacker as someone who tinkers with technology out of curiosity and in pursuit of intellectual challenge, and this was certainly the word’s original meaning. Since the 1980s, however, hackers have been more frequently associated with malicious hacking, partly in response to high-profile criminal activity revealed by popular media. Holt (2020: 731) identifies a hacker subculture that values technology, knowledge and secrecy regardless of whether the hacker is operating ethically or maliciously. Hackers might, though, be distinguished by the ‘hat’ they wear: a white-hat hacker is involved in legitimate security support, while one wearing a black hat is engaged in criminal activity. Those who use their skills for either purpose are known as ‘gray-hat hackers, recognizing their ethical flexibility over time’ (Holt 2020: 735). This indicates some of the complexity of the term ‘hacker’, and it makes learning from the workings of the hacker harder than learning from the previous dupers considered here.

I do have concerns about both Abagnale and Mitnick: in exposing their workings so explicitly, they may be ‘providing a great instruction book’ (Abagnale 2001: 25). The notion of the white-hat ‘ethical’ hacker that Mitnick purports to be, however, creates an additional concern about the passing on of knowledge. Security professionals are now being educated at graduate level, and there is a debate about whether students should be taught the same skills as the attackers (Hartley 2015). Perhaps there is a case for teaching technical skills, but should students also be taught the ‘social engineering’ skills that go with them? If higher education teachers struggle when teaching about and through deception, how much worse will it feel for them to teach students how to deceive?

Conclusion: What We Learn from the Dupers

The deceptive practices considered in this chapter have exposed some commonalities and suggested some answers as to why they work. The situations described here all depend on stories: ways of framing a context in which the deception is occurring. The first opportunity for deception arises if we are not aware of the genre of the story. Perhaps we are experiencing satire, news or fake news; magic, psychology or the paranormal; simulation or reality; education or business; parody or the genuine article; a con trick or a business deal. Indeed, there are many possibilities, and the same story may belong to several genres. But often there are linguistic indicators in the telling of the story.

The story and how it is told frames what is happening. While Teller’s guidelines for magicians seeking to alter perceptions (Fig. 13.1) all resonate, number 4 seems especially important: ‘keep the trickery outside the frame’. We might be deceived if the story distracts us from what else is happening in the context: for example, if it leads us to look left when the deception is happening to the right. And we might be deceived if there is more than one story in the context, which is usually the case. If the trickery in the other story is outside our current frame of reference, we are unlikely to spot it, especially if the story we are presented with fits our expectations. The trickery depends on the deceiver’s intentions within the story and our altered perceptions of these intentions. We are led, through language and actions, and our own knowledge of the context, to interpret what is happening in an erroneous way. ‘Every day, we experience the world from a restricted point of view, directed by ways of thinking that we do not realize are there’ (Lamont and Steinmeyer 2018: 316).

This is exploited by magicians to let us experience the impossible and feel a sense of wonder; it is exploited by con artists to give us a bad deal and make us feel fear, panic or urgency, and by social engineers to make us reveal vulnerable information. It was exploited by Taras and Steel to raise students’ awareness and make them feel anger and a sense of injustice. Knowing that deception produces an emotional response is key to understanding why it works. This matters all the more because former criminals have exposed huge levels of criminal activity based on deception.

We have also seen how contemporary technology can be simultaneously useful and deceptive, depending on how it is used, for example:

  • Offering solutions to expose deception (which might be better provided by a teacher).

  • Amplifying messages through social media (including fake ones).

  • Simulating practice (and obscuring aspects of it, adding complexity to any deception used).

  • Reframing practice through datafication (and obscuring key messages, allowing deceit).

  • Creating new subcultures of deceptive practice (which might be framed as criminal even when not).

Technology can thus exacerbate emotional and ethical issues for educators even when supporting exposure of deception. The main question that now calls out for dialogue between educators is: Should we be teaching students how to deceive as well as how to avoid being deceived?