3.1 Introduction

Normative evaluations of artefacts and technologies are commonplace. For example, many people find weapons, nuclear technology and cloning—to name a few—morally or ethically problematic. Indeed, one often hears a particular technology or artefact being declared ‘good’ or ‘bad’. When making these evaluations people mostly have in mind the actual or anticipated consequences of the use of these technologies. They might suggest that technologies are just ‘neutral’ or ‘passive’ possibilities for doing things that only become morally significant when taken up by humans in line with their purposes (as represented in the slogan ‘guns do not kill, it is people that kill’—see Latour (1999)). This would suggest that it is the human purposes and actions that are morally problematic, not the technology as such. Others would claim that technologies or artefacts, in their very design, allow (or prohibit) certain practices (and not others). As such they are morally significant from the start. In other words, the moral question is already present in some way even before they are taken up in social practices. Irrespective of the direction one goes in locating the moral problem (i.e. human or technology), the claim that a particular artefact or technology is morally problematic presumes that it would be desirable for us to intervene in some way or another to address this moral issue or problem. If this is true, then the next question is how, where and when to intervene. In other words, such a possibility for intervention presumes that we can locate the distribution of morally significant agency in a given sociomaterial arrangement in such a way as to effect appropriate change. I would therefore claim that the question of sociomaterial agency is necessarily at the centre of any discussion of the moral status or implications of technology (as it is generally accepted to be at the centre of any discussion about moral issues in society more generally).

Now, most people would agree that artefacts or technologies do things—a kettle boils water, a hammer drives in a nail, a computer sends an e-mail, etc. Thus, it would not be too controversial to claim that the idea that artefacts have, or embody, some level of agency—even if it is very limited or derived in some way—is generally accepted. What is disputed is the nature and origin of that agency. The difficulty with this inability to locate or account for sociomaterial agency in a straightforward manner is that we do not know how to go about addressing the normative and political issues that technologically mediated practices quite evidently raise. If the problem were simply that people tend to use technology in a normatively questionable way, then we plainly have to govern the use of the technology more effectively (laws regulating access, training, etc.). If the problem, on the other hand, is that the particular design of the technology allows for practices that are normatively questionable or undesirable, then we need to regulate the design of technology more effectively (as suggested, for example, in value sensitive design). If, however, sociomaterial agency is constituted in a more complex and subtle way, as I suggest below (following Latour, Barad and Heidegger), then the issue of the politics and ethics of technology is itself constituted in more complex and subtle ways—i.e. it is not open to simple intervention and correction (such as regulating the use or regulating the design). I would claim that without a satisfactory account of the constitutive nature of sociomaterial agency we will not be able to address adequately the normative and political implications of our increasingly technologically mediated sociality. More simply put: if we want to challenge, critique or change sociomaterial practices—normatively that is—then we need to know who (in terms of human and non-human actors) is doing what, when and how, i.e. we need to get a grip on the problem of the ongoing constitution (or constitutive conditions) of sociomaterial agency. This is the aim of this contribution.

In what follows, I would like to explore, in a tentative way, the problem of sociomaterial agency and its moral implications. First, I will outline two different accounts of sociomaterial agency: a human-centred inter-actional account (Johnson and VSD) and a post-human intra-actional account (Latour, Barad and Heidegger). Second, I will use the post-human intra-actional approach to analyse the sociomaterial phenomenon of plagiarism detection. In doing this I will endeavour to show how the social and the technical are a co-constituted, ontologically inseparable reality. Finally, I will propose the framework of disclosive ethics (in particular disclosive archaeology) as a way to deal with the ethical and political questions that our technologies raise.

3.2 Making Sense of Sociomaterial Agency (and Morality)

3.2.1 The Inter-actional Human-Centred Account of Sociomaterial Agency

It seems clear that it is not feasible, given all the work that has emerged from the STS tradition and the philosophy of technology, to maintain a simple dualistic view of agency which claims that agency is located either in the human or in the artefact. It would be reasonable to say that there is a generally accepted view that agency is more distributed than such a dualistic view would suggest. Nevertheless, although there is this understanding that agency is more distributed, there is a group of scholars who believe it is important to locate (or believe we ought to locate) the original and most fundamental source of agency on the side of the human. In this regard I want to refer to two examples: a recent paper by Johnson (2006) on the moral agency of computer systems and the work on value sensitive design by Friedman et al. (2006) and Friedman and Nissenbaum (1996).

In her paper Johnson (2006) argues that computers are moral entities but not moral agents. Her argument is based on the notion that computers do not fulfil the basic criteria for moral agency as traditionally conceived by, for example, Kant. In particular she suggests that the key to moral agency is the ‘intending act’ “because the intending to act arises from the agent’s freedom. Action is an exercise of freedom and freedom is what makes morality possible” (199). She goes on to argue that although computers do not exhibit ‘intending acts’—which would make them moral agents—it does not follow that they do not embody intentionality. According to her, computer systems have intentionality in that they embody the intentionality ‘inserted’ into them by the intentional acts of designers. She suggests that designers design systems to be poised to behave in certain ways. However, as she suggests, this is not the only intentionality at work. There is also the intentionality of the user. Thus, she concludes: “when computer systems behave, there is a triad of intentionality at work, the intentionality of the computer system designer, the intentionality of the system, and the intentionality of the user” (202). She proposes that all three of these intentionalities interact to shape the moral terrain that should become the focus of moral evaluation. Thus, according to her argument it would be a mistake—and misleading—to allocate moral agency to computers independently of human agency. Nevertheless, she proposes that it is ultimately human agency that should be the core focus of moral scrutiny: “when attention is focused on computer systems as human-made, the design of computer systems is more likely to come into the sights of moral scrutiny, and, most importantly, better designs are more likely to be created, designs that constitute a better world” (204, my emphasis). This is exactly what the value sensitive design (VSD) approach advocates.

Value sensitive design (Friedman et al. 2006; Flanagan et al. 2008) accepts the idea, as proposed by Johnson, that technology embodies a certain intentionality. They claim that a particular design renders possible certain behaviours (in support of certain values) and not others. Proponents argue that the moral problem is that most designers work—often uncritically—with a limited set of values that represents the interests and values of a privileged subset of stakeholders—such as economy, efficiency, safety, and so forth. They argue that it is possible to design technologies that embody and render possible a wider, more inclusive, set of behaviours and values. Like Johnson, they accept an inter-actional human-centred view which suggests that: “values are viewed neither as inscribed into technology (an endogenous theory), nor as simply transmitted by social forces (an exogenous theory). Rather, the inter-actional position holds that while the features or properties that people design into technologies more readily support certain values and hinder others, the technology’s actual use depends on the goals of the people interacting with it” (Friedman et al. 2006, p. 361).

Central to the human-centred inter-actional account of sociomaterial agency is the view that all sociomaterial agency is originally human, i.e. that it is humans doing things with or through technology. It is never technology doing things with or through humans as such. Furthermore, even if sociomaterial agency is not originally human in the full sense of the word, we need to, or ought to, be able to trace it back to humans because we can only make humans morally responsible and accountable—i.e. they are the only fully fledged moral agents with the freedom to choose and to act originally. This need to locate moral responsibility in human agents is clearly an important requirement for us to organise and regulate society. However, I will suggest that although we might, for very good reasons, want to locate or allocate responsibility and accountability ultimately in this way, we should not allow this moral (and pragmatic) requirement to lead us unwittingly into accepting a dualistic account of sociomaterial agency. Nor, more fundamentally, should we allow this requirement to lead us to accept an ontology in which we have to posit humans and technical objects as ontologically distinct entities (one intending and free, and the other not) which then interact to make sociomaterial entities possible. Besides the many philosophical controversies that such a view entails, it must be said that the question of accounting for human agency as ‘an exercise of freedom’ is not unproblematic or uncontroversial.Footnote 1 How can we think of it otherwise?

3.2.2 The Intra-actional Post-humanist Account of Sociomaterial Agency

The implied ontological dualism (and substantialism) in the inter-actional approach to sociomaterial agency has traditionally given rise to a number of now well-articulated questions. For example, to what degree can the affordances/prohibitions of technology ‘force’ or make the user do something? What about the intentions of the users? What about the variety of ways that users can interpret these technical affordances (Norman 1988)? What about unintended consequences never anticipated by the designers? More specifically, where are the normatively significant questions ‘located’: in the artefact, in the user or in both? These are all very good questions. However, I would argue that they do not help us to get to grips with the complexity of sociomaterial agency as it happens in our everyday technology-saturated lives. What is needed, I would argue, is a fundamentally different post-human account of sociomaterial agency. I will attempt to give such an account by drawing on the work of Latour, Barad and Heidegger in particular.

Latour and the non-humans. For Latour, as for Barad and Heidegger, any talk of humans and non-humans in ways that suggest that they are, separately, already what they are and then we ‘add’ them together to ‘make’ a sociomaterial world would simply be wrong. He claims: “There exists no relation whatsoever between the ‘material’ and ‘the social world’, because it is this very division which is a complete artefact….” (2005, p. 76). He further suggests that both humans and non-humans share a common history: “Humans and non-humans are engaged in a history that should render their separation impossible” (2003, p. 39). More than that, they do not merely share a common history; they are each other’s common history: “A corporate body is what we and our artefacts have become. We are an object institution” (1999, p. 192). Very significantly to us he claims that in this institution that we are it is not a simple matter to allocate intentionality and properties this way or that way: “Purposeful action and intentionality may not be properties of objects, but they are also not properties of humans either. They are properties of institutions [collectives of humans and non-humans], apparatuses, or what Foucault called dispositifs” (1999, p. 192).

For Latour agency is distributed in such a way as to render it impossible to locate the sources of action in any precise way. He claims that an actor is not a source of action but rather the target of a vast array of entities that surround it. Action, he suggests, is “borrowed, distributed, suggested, influenced, dominated, betrayed and translated. If an actor is said to be an actor-network, it is first of all to underline that it represents the major source of uncertainty about the origin of action…” (2005, p. 46, emphasis added). This distributed, unoriginal, notion of agency should however not be seen as a ‘weak’ form of agency. Latour claims that when non-humans act as mediators they make other actors do things. He defines mediators as actors that associate with other actors in such a way that “they make others do unexpected things.” (2005, p. 106). If agency is unoriginal, distributed and has power to “make others do things”, as Latour suggests, then the issue of accounting for normative agency is indeed very important. In this regard Latour argues that if agency is distributed and not original to humans then so also is morality (i.e. those actions that are normatively significant):

Morality is no more human than technology, in the sense that it would originate from an already constituted human who would be master of itself as well as of the universe…Morality and technology are ontological categories …and the human comes out of these modes, it is not at their origin. (2002, p. 254).

If Latour is right about the distributed and unoriginal agency of actors (or more specifically the normatively relevant agency of actors) then one might conclude that it is ultimately impossible for us to deal with the ethical and political implications of electronically mediated social practices. One might conclude that ‘following the actors’ (as is often suggested by ANT scholars) will only continuously displace agency to somewhere else as we traverse the network of humans and non-humans—i.e. an infinite regress. I would suggest that this is where the work of Barad and Heidegger is important to help us account for sociomaterial agency in a way that may provide a way forward.

Barad, phenomena and agential intra-action. Barad’s work is interesting as it emerges from the physical sciences, in particular her interpretation of the work of the physicist Niels Bohr and his attempt to find a convincing philosophical framework to account for the seemingly contradictory results of quantum physics. For Barad (2003) the observer, her instruments of measurement and the objects observed are an ontologically inseparable unity, what she calls a phenomenon: “phenomena are the ontological inseparability of agentially intra-acting “components.” That is, phenomena are ontologically primitive relations—relations without preexisting relata” (815, emphasis added). Phenomena are constitutive of reality, she argues. Barad (1996) proposes the notion of “intra-action” to deal with the fact that although phenomena are inseparable unities, the two poles of the phenomenon (the measuring apparatus and the object) do not exist as such apart from their ongoing intra-action. In other words, there are not pre-existing entities which then interact. Rather, the entities are the performative outcome of the nexus of intra-acting relations—that is to say, these intra-acting relations are ontologically constitutive. In sociomaterial terms I take this to mean that the user/designer and the technological artefact or system constitute a phenomenon in which the social and the technical do not exist as such apart from their intra-action. In the nexus of intra-activity the phenomena are (re)produced: “phenomena are the place where matter and meaning meet” (Barad 1996, p. 185). Boundaries between the social and the technical are enacted and shaped through practices in intra-action, along with the phenomena. She suggests that “It is through specific agential intra-actions that the boundaries and properties of the ‘components’ of phenomena become determinate and that particular embodied concepts become meaningful. A specific intra-action (involving a specific material configuration of the ‘apparatus of observation’) enacts an agential cut … effecting a separation between ‘subject’ and ‘object’” (Barad 2003, p. 815; italics in original). For our purposes I would rephrase this to mean that it is in specific agential intra-actions between users (and designers) and materiality that the boundaries and properties of the social and the technical become constituted as an ongoing intra-actional performativity (Butler 1993). Barad (2003) summarises her approach as follows:

In summary, the universe is agential intra-activity in its becoming. The primary ontological units are not “things” but phenomena—dynamic topological reconfigurings/entanglements/relationalities/(re)articulations. And the primary semantic units are not “words” but material-discursive practices through which boundaries are constituted. This dynamism is agency. Agency is not an attribute but the ongoing reconfigurings of the world. (p. 818, emphasis added)

But what does this mean for responsibility? During intra-action, “marks are left on bodies. Objectivity means being accountable to marks on bodies” (Barad 2003, p. 824). For Barad the locus of responsibility is “a prosthetically embodied, performatively constituted agency” (Rouse 2004, p. 155) in which “we are responsible for the world in which we live not because it is an arbitrary construction of our choosing, but because agential reality is sedimented out of particular practices that we have a role in shaping” (Barad as quoted in Rouse 2004, p. 155). As Rouse (2004) suggests, agency does not have to be an ‘all-or-nothing’ affair for us to take it seriously. Indeed, it is precisely because it is not an all-or-nothing affair that we need to subject the multiplicity of intra-actions, in concrete and specific practices of use and design, to meticulous analysis and scrutiny.

Heidegger and sociomaterial being-in-the-world. In Being and Time Heidegger argues that we humans (which he calls Dasein) exist in an ongoing structural openness ‘with’ the world in which we and the world are always already a unity, a being-in-the-world (Heidegger 1962, p. 297). We human beings (Dasein) are this unity or rather we have this unity as our ongoing way of being. Whenever we find ourselves or take note of ourselves, we find ourselves already in the world engaged in ongoing everyday activity in which things already and immediately show up as familiar ‘possibilities for’ this or that practical intention—never as mere objects that are just there. One could say their affordances are already immediately apparent to us. Indeed it is this prior apparentness that already makes them stand out as this or that particular thing in the first instance. Its location, arrangement, and all the implied references to a whole array of other things within the horizon of action (the already there referential whole) constitute it as ‘obvious’—so we simply draw upon it in-order-to do what we want or need to do. However, when we take up these tools, as tools, we do not take them up for their own sake; we take them up with an already present reference to our projects or our concerns. As beings that have ‘projectedness’ (being already future oriented) as our way of being we find ourselves already immersed in a nexus of concerns that constitute us as that which we are or want to become. Or rather we have as our way of being a prior immersion in a nexus of concerns. This is why Heidegger (1962) claims the way of being of Dasein is care (care as in ‘mattering’) (p. 236). We encounter things in the world as mattering (being significant) because we matter to ourselves as being or becoming such or such a particular being (father, teacher, etc.).

Thus, we do not simply bang on keys; we use the laptop to type, in-order-to write this text, to do e-mail, to surf the web, etc. Moreover, the writing of this text already refers to the possibility of a presentation. This presentation in its being already refers to an audience, which refers to an institution, which refers to future audiences, which refer to research, which refers to further possibilities, etc. These references ultimately refer back to the being that I am or am becoming, i.e. a very particular being in the world of ‘being an academic’. Heidegger (1962, p. 118) calls this recursively defining and necessary nexus of projects, or for-the-sake-of relations, the involvement whole. The equipment whole (of thing intra-relations) and the involvement whole (of care intra-relations) co-constitute each other—i.e. they are each other’s transcendental condition for being what they are—in Barad’s terms they intra-act each other. They sustain each other’s way of being as an ongoing horizon of meaning. Heidegger calls this horizon of meaning ‘the world’. The meaning (or coming into being) of us and our tools (the social and the technical) can only be understood within this already mutually defining referential whole, the world itself. Thus, as beings-in-the-world, we and our tools always already co-constitute each other’s possibility for being agents—not in some general sense but as exactly that which we are in this or that particular world (of academia, business, and so forth). But this is not all. If it is true that we exist in a co-constitutive relation with technology (also in more general terms), then our technological world is also more than just this or that particular co-constitutive practice (my word-processor and the academic me). In other words, there is a sense in which what it means to be human—and what counts as the real world—emerges from this co-constitutive whole.

In his essay “The Question Concerning Technology” Heidegger (1977) claims that: “Technology is therefore no mere means. Technology is a way of revealing. If we give heed to this, then another whole realm for the essence of technology will open itself up to us. It is the realm of revealing, i.e., of truth” (p. 12). Thus, for Heidegger technology is—in its co-constitutive becoming—the very disclosure of being.Footnote 2 Or as Ihde (1991) expresses it: “Technology, in the deepest Heideggerian sense, is simultaneously material-existential and cultural. …. It is a way of seeing [or being] embodied in a particular form” (Ihde 1991, pp. 56–57). One might say that in its ongoing becoming technology reveals, in a very fundamental manner, ‘a way of being’ in the world. That is why Heidegger (1971) claims in his essay “The Thing” that “the thing things world” (p. 181). Indeed, that is the only way one can make sense of his suggestion that the “jug is not a vessel because it was made; rather, the jug had to be made because it is [already] this holding vessel” (p. 168). What we see is a seeming ‘reversal’ of intentionality. The designer/craftsman did not decide (intend) to make the jug. The possibility of a jug was already suggested (intended) by the ongoing worlding of the world. The world (or referential whole) in which the jug, as a holding vessel, emerges as necessary is prior to this or that entity ‘jug’. Therefore, in making the entity ‘jug’ a world (a way of being), already present, is revealed. As such technology—or precisely the technological way of being—has as its being the revealing of a way of being (an originating intentionality) that is prior to this or that artefact.Footnote 3

Let me summarise what I suggest is Heidegger’s post-humanist account of sociomaterial agency—what I would like to describe as co-constitutive agency (or what Barad would describe as intra-action)—by taking the CCTV camera phenomenon as an example. A CCTV camera mounted on a wall can make humans—who want to see at a distance (or not be seen at a distance)—do what they do—zoom in, take note of suspicious behaviour; or cover their faces, follow other routes, etc.—not because there is a particular cause (or agency) in the artefact as such (or in the human as such) but because CCTV cameras appear in the world of police officers wanting to see at a distance (or humans wanting to avoid being a surveillance target) as already necessary and meaningful in that world of law enforcement. If the possibility of surveying at a distance (or not becoming a surveillance target) does not concern you or me, then the CCTV camera might merely be a decorative object on the wall. Thus, the CCTV camera will only show up or stand out as something potentially relevant and meaningful in a nexus of concerns (and equipmentality) where the possibility of seeing (or not being seen) ‘at a distance’ might be taken as a necessary condition to realise the concerns that constitute the ‘who’ (the identity) that such a CCTV camera assumes or already refers to (the police officer or the person on the street who does or does not want to be targeted). The important point is that the necessary or constitutive relation is not empirical as such, it is ontological—it renders possible the being-in-the-world of all the actors involved (camera, officer, suspect, etc.). It is the necessary ontological co-constitutive intra-relation between cameras, operators and targets that renders sociomaterial agency possible in the empirical world of everyday action—i.e. which makes the actors do the things they do. Artefacts do script our behaviour in our dealings with them, as Latour suggests, but this ‘scripting’ is rendered possible by a prior, but already present, ontological co-constitutive intra-relation. Without such an intra-relation there is no script, no camera, no policeman and no suspect.

The condition of possibility for the agency of all the actors (what we call the co-constitutive agency) is the always and already present horizon of meaningful possibilities to be—that which they suppose themselves to be—in the world. That is, the already present necessary conditions for a being (a CCTV camera, an alert police officer, a surveillance target) to be that which it is already taken to be in the world where it has its being. In saying this we must be careful to note that the constitutive horizon of the CCTV camera constitutes a multiplicity of actors (and identities) in the world in which it operates ‘as a CCTV camera’. For example, it constitutes what it means to be a police officer, what it means to be a ‘suspect’, how an officer relates to a ‘suspect’, what the prevention of crime means, and so forth. Furthermore, in and through the co-constitutive horizon (of CCTV cameras, police officers and surveillance targets) a particular understanding of the world (of crime, crime prevention, safety, security, etc.) is rendered possible and revealed as such. Thus technology, when it functions as such, reveals, in a very fundamental manner, ‘a way of being’ in the world (see also Introna (2009) for a more detailed discussion of the implications of this claim for humans and non-humans).

Having briefly reviewed the post-human intra-actional account of sociomaterial agency, I would now like to consider the phenomenon of plagiarism detection in the world of learning and teaching to demonstrate how such an account might inform our understanding of the ethico-political implications of sociomaterial agency.

3.3 Figuring Intra-actional Agency in the Plagiarism Detection Phenomenon

In order to make the ethico-political implications of phenomena visible we need to do some figuring ‘out’ of the intra-actions. I want to suggest that we need to make some agential cuts to expose some of the ‘components’ or agencies that intra-act to constitute the being-in-the-world of the plagiarism detection phenomenon. I want to propose—although I do not have space to defend this proposal here as such—that the following figuration of agencies might be appropriate:

(a) Affordances/prohibitions—The material affordances and prohibitions that constitute the form, fit and function of the material artefact (the computer algorithm, the word processor, electronic text, etc.), as well as that which constrains and enables the sort of affordances that may be imagined and rendered possible legitimately.

(b) (Cyborg) Identities—The ways of being someone in particular (teacher, student, author, plagiarist, etc.), as well as that which constrains and enables the sort of identities that can be assumed legitimately.

(c) (Cyborg) Practices—The ways of doing something in particular (writing an essay, evaluating an essay, reusing material, etc.), as well as that which constrains and enables that which can be done legitimately.Footnote 4

(d) Discourses—The ways of talking (or making claims) about something in particular (what learning, assessment and academic writing are supposed to be, what plagiarism is, etc.), as well as that which constrains and enables that which can be said legitimately.Footnote 5

These intra-actional agencies are in an ongoing co-constitutive intra-relation with each other to engender the ongoing becoming of the plagiarism detection phenomenon. Let us try and draw some brief and preliminary outlines of this phenomenon using the agencies above to figure it.

3.3.1 ‘Cutting and Pasting’ and the Reconstitution of Writing and Authorship

The automation of the construction of texts through the word processor reconstituted the practice of writing as well as the question of authorship in fundamental ways. For example, Heim (1999) argues that in handwriting one’s thoughts had to be thought through before being committed to the page—in other words, that there is thinking and then writing. In contrast, he argues, writing on the screen loses its reflective, craft-like nature. According to him, words and ideas on the screen become constituted as fragments that can be ‘cut and pasted’ in a more or less thoughtless manner—the electronic text becomes constituted as never being thought as such. In the composition of electronic texts, he proposes, the relation between writing and thinking is reversed: more specifically, there is writing and then thinking. Such an argument suggests that the text manipulation affordances of word processors, such as ‘cutting and pasting’, not only make the manipulation of text possible but also reconstitute the very practice of writing itself.

Moreover, when writing in an electronic medium we find that authors do not just cut and paste within documents; they also cut and paste between documents. As more and more texts become electronically constructed, the idea of writing ‘from scratch’ becomes less and less attractive. In electronically mediated writing practices authors increasingly cut and paste from previously written texts—thus, we see the emergence of the practice of ‘reuse’. This reuse is specifically implemented as the cutting and pasting of text ‘as is’—which is of course different to transcribing. For example, consultants ‘reuse’ parts of client reports, academics reuse written arguments developed in previous papers, lawyers reuse standard formulations in contracts, students reuse parts of earlier assessments, and so forth. In a world where efficiency has become a legitimate way of thinking about work the notion of reuse is enormously attractive (even normatively compelling). As such we find that ‘reuse’ of text by ‘cutting and pasting’ from previous documents emerges as apparent and familiar. Indeed, writing from scratch might even be seen as wasteful. Furthermore, one could argue that the obviousness of textual reuse makes sense in a world where the practice of ‘reuse’ has already become the constitutive basis for many other authoring practices. For example, in software programming code reuse has become the dominant approach. The paradigm of object-oriented programming is based on the notion that certain standardised code (standard routines for doing things), or ‘objects’ as they are known, should be made available in a central repository for reuse. A good programmer is able to use these standard routines or objects to build complex applications. My point is that the seemingly simple affordance of word processors to allow for ‘cutting and pasting’ has not only made text manipulation possible (as may have been intended by the designers) but has intra-acted to reconstitute the whole act of writing through the notion of reuse—especially in a world where reuse has already become a legitimate (even normatively required) practice of ‘being efficient’. Thus, what we increasingly see—especially amongst our students—is a form of writing that one might call patch-writing (Howard 1993, 1995). In patch-writing, texts are constructed by using (or reusing) preformed fragments that can be cut and pasted from elsewhere as the basis from which the text is built—a very different practice of writing, one through which, or from which, thinking emerges rather than the other way around, as suggested by Heim (1999).

With the advent of the Internet (enabled by the search capability of, for example, Google) and electronic publishing, the database of electronic texts available for reuse has exploded. In the context of the availability (now on our desktop) of this massive database of electronic texts, many authors, it seems, are increasingly cutting and pasting not only from their own previously constructed texts but also from texts constructed by other authors. In doing this, not only has the practice of writing become reconstituted but also what it means to be an author. Such a practice of using other authors’ texts seems quite legitimate in a world of efficiency where reuse and outsourcing (ghost-writers, speechwriters, etc.) are increasingly common (as was the case in oral societies, where stories were commonly owned and the notion of original authorship did not exist).Footnote 6 Furthermore, it seems that the question of reuse and outsourcing of textual fragments also makes sense to students in a context where the understanding of what education is (or is supposed to be) has shifted with the increasing commercialisation and commoditisation of education (Saltmarsh 2004, 2005; Vojak 2006). Indeed, it is possible to see why students might think that, if you pay for your courses, why can you not also outsource the writing of your assessments—especially if you also have to hold down a part-time job to pay for your education (which turns out not to be ‘part’ time at all). Nevertheless, this reconstitution of the meaning of writing, authorship and education now emerges—especially in the university context—as the phenomenon of plagiarism—or more precisely the ethics and politics of plagiarism.

3.3.2 The Emergence of the Phenomenon of Plagiarism

In many subjects, assessment of students’ knowledge is understood as the ability to create an original text, in the form of an academic essay, that reflects their own understanding of the ideas. But what if these texts are increasingly the outcome of a reconstituted practice of patch-writing? What is the student who constructs such a text? What is it that they think they are doing? Are they authors or plagiarists? How is plagiarism understood in this intra-action of agencies?

The Oxford English Dictionary Online (OED Online) defines plagiarism as “the wrongful appropriation or purloining, and publication as one’s own, of the ideas, or the expression of the ideas (literary, artistic, musical, mechanical, etc.) of another.” However, if we go back a bit further, Samuel Johnson’s Dictionary of 1755 defines a ‘plagiary’ as “a thief in literature; one who steals the thoughts or writings of another” and “the crime of literary theft” (Lynch 2002). The important difference between these two definitions is the notion of “the expression of the ideas”, which the Oxford dictionary seems to have added to the 1755 meaning. The emphasis on ‘expression’ of ideas emerged later in the eighteenth century (Hesse 2002) as a way to allocate rights to authors (where ‘expressions’ are protected but not ideas). It seems that there has been a shift in focus from ‘thoughts or writings’ (i.e. ideas and works) to the notion of ‘the expression of the ideas’ (exact copies of text). The emergence of this understanding of plagiarism is central to the constitution of the contemporary plagiarism detection phenomenon, as we shall see. It must also be said that there is very limited consensus in practice amongst academics and teachers as to what constitutes plagiarism, as a study by Roig (2001) indicated.

3.3.3 ‘Cutting and Pasting’ and the Constitution of the Plagiarist

Plagiarism has always been an issue for universities. As suggested above, academic writing, the ability to construct an argumentative essay in response to a question that reflects one’s understanding of a subject, has been at the heart of assessment in the humanities and the social sciences for many years. Traditionally it was expected that any plagiarism by students would be picked up by the teachers involved when they tutor students in the writing task and when they mark or grade the essays. However, decreasing staff/student ratios as well as the sheer number of resources available to students have made this extremely difficult to achieve. In practice, what we find is that teachers tend to suspect plagiarism when they notice a sudden change in style (or voice) in the text. This happens most often with non-native speakers who lack the linguistic ability to integrate ‘cut and paste’ fragments into their patch-writing practices. The increased reporting of cases of plagiarism in the press, as well as the availability of essays for sale on the web, has created a situation of panic in which plagiarism detection systems (PDS) emerged as an obvious solution for universities (Lathrop 2000).

The market leader, Turnitin, claims that their system is used by 5,000 institutions in 80 countries worldwide (covering 12 million students and educators) and that 50,000 papers get submitted to their system every day. They also claim that their crawler ‘Turnitinbot’ has downloaded over 9.5 billion Internet pages to their detection database and that it updates itself at a rate of 60 million pages per day (Turnitin website). More recently academic publishers have also turned to Turnitin to help them protect themselves from publishing plagiarised material, which is obviously very damaging to their reputation (and profits, one might add). Nevertheless, one of the most powerful arguments often put forward for adopting it (beyond resource constraints) is that it ‘levels the playing field’; indeed, that it is fairer than the hit-and-miss approach where individual teachers have to spot cases of plagiarism—it is what any fair teacher would do. The argument is made that teacher-based monitoring of plagiarism, as now constituted, tends to pick out weak students or non-native speakers because of the obvious shift in sophistication when a piece of plagiarised text is found embedded in an assessment document such as an essay or dissertation. But is it levelling the playing field, or does it rather reconstitute a playing field that is even more uneven? I would argue that it is the latter. Moreover, this is a much more serious issue since many of the important co-constitutive conditions (affordances) are now embedded in proprietary systems which are not open to scrutiny—an invisible micro-politics, one might say. I would argue that in the phenomenon of plagiarism detection Turnitin does not function merely as a technology to ‘detect’ plagiarists but rather as a phenomenon that co-constitutes plagiarists (and what plagiarism is now seen to be) in morally significant ways. In the co-constitutive horizon of PDS the being-in-the-world of teaching, learning, writing, assessment and what it means to be a ‘plagiarist’ is constituted in such a way that it is difficult to track down and account for very significant “marks left on bodies” (in Barad’s terminology).

If it is true that Turnitin covers almost all (if not all) of the web, then anybody taking something from the web has an equal chance of being detected, and that would most certainly be fair, a level playing field. However, what if Turnitin does not cover the entire web? In such a case the likelihood of somebody being detected would depend on whether they happen to take something from a place that Turnitin did (or did not) cover. If Turnitin’s claim that they cover 9.5 billion pages is true and the estimate that the web consists of 11.5 billion pages is correct (which would give them 82.6 % coverage), then one could argue that there is a relatively high probability that a student will be detected if they take something from the web. However, these figures are misleading because a lot of the content that Turnitin needs to cover is in fact behind passwords (i.e. in the deep web), such as academic journals, for example. In a small-scale experiment we selected 103 fragments from a number of likely sources from which students may take material—in the publicly available web as well as the deep web—and submitted them to Turnitin. Turnitin was only able to detectFootnote 7 47 of these, a detection rate of 45.6 %. This experiment was repeated with a larger data set of 15,308 fragments. Of these Turnitin was only able to detect 48.4 %. If these results are to some extent generalizable (we are not claiming that they are at this stage), then a student taking something from the web has a less than 50 % chance of being detected, which is quite low. My problem is not that some are caught and some get away, as it were. I am rather more concerned with the fact that Turnitin—in its increasingly pervasive status—has become the constitutive condition of what is seen as plagiarism and that most teachers are now beginning to think that a ‘green light’ from Turnitin means that a student has not cheated. In this constitutive horizon they often believe that those who are not detected by Turnitin are innocent and those who are detected are guilty. I would suggest that both of these assumptions are wrong or could be wrong. The first is partly wrong because of the partial coverage of Turnitin as indicated by our experiments. The second one might be wrong for more subtle and complex reasons, related to the operation of the algorithm and its interaction with patch-writing practices, to which I now want to turn.
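For clarity, the arithmetic behind the figures just cited uses only the numbers stated above:

$$\frac{9.5 \times 10^{9}}{11.5 \times 10^{9}} \approx 0.826 \;(82.6\,\%) \qquad \text{and} \qquad \frac{47}{103} \approx 0.456 \;(45.6\,\%).$$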

One must first note that plagiarism detection software—contrary to what its name suggests—detects copies, not plagiarism. How does it detect copies? A simple approach would be to compare documents character by character. However, this approach has a number of problems: (a) it is very time-consuming and resource intensive; (b) it is easily thrown off by changes in white space, formatting and sequencing; and (c) it cannot detect partial copies from multiple sources. To deal with these problems a number of algorithms have been developed. Unfortunately, many of these (such as Turnitin’s) are now proprietary software and therefore not available for analysis and scrutiny. However, we have studied the logic of certain published algorithms, such as winnowing (Schleimer et al. 2003), as well as conducting some preliminary experimental research into the way the Turnitin algorithm seems to behave. From these we are able to draw some important conclusions, which I will discuss below.

All detection algorithms operate on the basis of creating a digital ‘fingerprint’ of a document, which is then used to compare documents against each other. The fingerprint is a small and compact representation (based on statistical sampling) of the content of the document that can serve as a basis for determining correspondence between two documents (or parts of them). In simple terms, the algorithm first removes all white space as well as formatting details from the document to create one long string of characters. This often results in a 70 % reduction in the size of the document. Further processing is done to make sure that sequences of consecutive groups of characters are retained and converted through a hash functionFootnote 8 to produce a unique numerical representation for each sequential group of characters. The algorithm then takes a statistical sample from this set of unique numerical strings (or hashes) in such a way as to ensure that it always covers a certain number of consecutive characters (or words, in our human terms) within a sampling window, and stores this as the document’s fingerprint.Footnote 9 A fingerprint can be as small as 0.54 % of the size of the original document.
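To make this logic concrete, here is a minimal sketch of a winnowing-style fingerprint in the spirit of Schleimer et al. (2003). It is an illustration only, not Turnitin’s actual (and proprietary) algorithm: the k-gram length k, the window size w, the use of MD5 as the hash, and all function names are assumptions made purely for the sake of the example.

```python
import hashlib
import re

def normalize(text: str) -> str:
    """Strip white space, punctuation and case to leave one long string of characters."""
    return re.sub(r"[^a-z0-9]", "", text.lower())

def kgram_hashes(s: str, k: int) -> list:
    """Hash every overlapping substring of length k (real systems use a rolling hash)."""
    return [int(hashlib.md5(s[i:i + k].encode()).hexdigest(), 16) & 0xFFFFFFFF
            for i in range(len(s) - k + 1)]

def winnow(hashes: list, w: int) -> set:
    """Keep the minimum hash in every window of w consecutive k-gram hashes.
    This statistical sample of the hashes serves as the document's fingerprint."""
    if len(hashes) <= w:
        return set(hashes)
    return {min(hashes[i:i + w]) for i in range(len(hashes) - w + 1)}

def fingerprint(text: str, k: int = 25, w: int = 12) -> set:
    """Normalise, hash all k-grams, then winnow. Any copied run of at least
    w + k - 1 consecutive characters is guaranteed to share a selected hash."""
    return winnow(kgram_hashes(normalize(text), k), w)

def overlap(suspect: str, source: str) -> float:
    """Fraction of the suspect document's fingerprint also present in the source's."""
    fs, fo = fingerprint(suspect), fingerprint(source)
    return len(fs & fo) / len(fs) if fs else 0.0
```

The point of the winnowing guarantee (any sufficiently long run of unchanged consecutive characters leaves at least one shared hash) is precisely why the length and position of intact strings matter so much for detection, as the next paragraph discusses.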

From this very limited description of the algorithm it is clear that detection is very dependent on certain characteristics of the copied text remaining intact. In some cases a small change in the right way (or place) will make a copy undetectable, while in other cases a large amount of change will still leave it detectable. One of the key requirements for detection is that a sufficiently long string of consecutive characters from the original is retained in the copied version. The location, within the fragment, of the consecutive string is also important due to the sampling window. For example, in experiments we did with Turnitin it became clear that if one changed one word in a sentence at the right place—often between the 7th and 14th word in the sentence—then Turnitin did not recognise it even if all the rest of the sentence remained exactly the same. Indeed, we were also able to submit a fragment of 300 words where we changed approximately every 7th to 10th word and remain undetected. In contrast, Turnitin detected a small fragment of 26 consecutive unchanged words. Given this behaviour of the algorithm it is possible for a student to incorporate large amounts of copied material, by intentionally or unintentionally changing words in the right places in the submitted text, and remain undetected—see also Heather (2010) for ways in which text can be rendered undetectable. Now my concern here is not to suggest ways that students might cheat. My concern is rather the way this behaviour of the algorithm might constitute an uneven playing field, especially for non-native speakers.
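The general mechanism behind this behaviour can be illustrated with the sketch given earlier (again purely as an illustration under the assumed parameters, not of Turnitin’s actual thresholds). The sample sentence and the substitution pattern below are invented for the example, and the snippet reuses the overlap function defined above.

```python
source = ("Plagiarism detection software, contrary to what its name suggests, detects "
          "copies rather than plagiarism, and it does so by comparing compact "
          "fingerprints of documents rather than the full texts themselves.")

verbatim = source                                      # straight copy
one_word = source.replace("compact", "condensed")      # a single word changed mid-sentence
patched = " ".join("CHANGED" if i % 7 == 0 else word   # roughly every 7th word substituted
                   for i, word in enumerate(source.split()))

for label, text in [("verbatim copy", verbatim),
                    ("one word changed", one_word),
                    ("every ~7th word changed", patched)]:
    print(label, round(overlap(text, source), 2))

# The verbatim copy shares its full fingerprint with the source. Each substitution
# removes only the k-grams that span it, so a single change leaves most of the
# fingerprint intact, while denser substitution patterns remove a much larger share;
# once every unchanged run is shorter than w + k - 1 characters, no shared hash is
# guaranteed at all.
```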

We know that non-native speakers learn to write by using fragments as ‘patches’ to imitate the vocabulary and structure of expressions as part of their transition to becoming competent in academic writing (Howard 1993, 1995; Shi 2004; Leki and Carson 1997). This is true not only of non-native speakers; it is also true of native-speaking academics when paraphrasing a difficult-to-understand text—even material within their own discipline. Roig (2001), in a fascinating study, provided college professors in psychology (all members of the American Psychological Society) with two different texts to paraphrase: the first was a difficult text from a peer-reviewed psychology journal article and the second was an easy-to-read text from an introduction-level psychology textbook. Twenty-six percent (26 %) of the professors appropriated text—strings of five words in length or more without quotation marks—from the difficult text, whereas only three percent (3 %) appropriated text from the piece that was easier to read. If psychology professors—and most probably native-speaking students—feel the need to ‘stay close’ to the text when confronted with difficult material, we can see why students who understand the importance of ‘speaking’ like the teachers and the people they read do the same when it comes to doing their assessments. We also know that it is possible to use phrases and fragments from a text to say something completely different from that which the original author has said. Nevertheless, this is not my concern here; rather, my claim is that non-native speakers (and novices in a discipline) will tend to use larger fragments of consecutive words, for fear of losing the meaning, than native speakers and experts. Furthermore, native speakers (and experts) will tend to have the vocabulary and linguistic skills to make changes to the fragments without a loss of meaning—especially in the middle of sentences where it really matters from a detection point of view. Thus, it is my claim that non-native speakers (and novices) who appropriate fragments as part of their patch-writing practices will be disproportionately detected as opposed to native speakers—see also Pecorari (2003). This becomes even more problematic when administrators (rather than teachers) are used to identify cases of plagiarism using Turnitin’s ‘originality report’ traffic light system.Footnote 10

3.3.4 PDS, Education and the Production of Intellectual Property

There are many more intra-actions and agencies at stake in the phenomenon of plagiarism detection. For example, there is the whole issue of intellectual property rights. When students’ work becomes incorporated into Turnitin’s database, these essays partly enable Turnitin to perform its detection service (i.e. they partly enable Turnitin to provide the service it charges for). In order to prevent legal problems, universities ask students to sign agreements that their work can be submitted to Turnitin for purposes of plagiarism detection—i.e. to sign away any property rights they might claim. Nevertheless, this very act of signing now constitutes the student as the producer and owner of intellectual property. Linked to this new identity is the increased value of ‘original work’ (now defined as that which the Turnitin system cannot detect). In this co-constitutive nexus students come to conceive of themselves as producing property (not doing an assessment) when they write an essay for a course assessment. Thus, in the context of the commodification of education (Vojak 2006), students quite naturally see themselves as producing intellectual property (now given extra value by Turnitin) to be sold in the open market. Hence, we now see students selling their essays and assessments on the internet (for example on eBay). Moreover, in this constitutive context of assessments as ‘property’ and of educational commodity markets, we see the emergence of ghost-writing services which can produce ‘original work’ that is guaranteed not to fall foul of the detection system.

Due to space limitations it is not possible to outline more of the co-constitutive agencies at work in the plagiarism detection phenomenon. Hopefully this brief sketch will at least indicate the potential of taking a different approach to sociomaterial agency. In Table 3.1 I summarise some of the co-constitutive intra-actional agencies at work in constituting the phenomenon of plagiarism detection in the educational context.

Table 3.1 Summary of some of the intra-actional agencies that co-constitute the phenomenon of plagiarism detection

In summary: my suggestion is that the large-scale use of Turnitin may be creating a set of constitutive conditions or intra-actions in which some students are being constituted as ‘plagiarists’, and others not, on an unfair and uneven playing field. Most importantly, and quite ironically, most of the teaching staff who use Turnitin are not aware of this intra-action (and the intra-action of the plagiarism phenomenon more generally) and are contributing to it with the sincere intention of being fair. Moreover, a whole variety of practices, identities and discourses are being co-constituted through the ongoing intra-actional working out of sociomaterial agency in ways not anticipated or intended by any of the agents as such.

3.4 Intra-actional Agency and Disclosive Ethics

From our discussion of the plagiarism detection phenomenon above it is clear that the co-constitutive conditions (or intra-actions) that constitute some students as ‘plagiarists’ (and others not) are not simply properties of software objects, nor are they properties of the humans. Indeed, there is a fundamental co-constitutive agency at work in the nexus of intra-actional relationships. For example, we cannot say that the designers of Turnitin intended to discriminate against non-native speakers. The material agency of their code is but one element in the nexus of constitutive intra-actional relations. There is a multitude of other intentions and intra-actions at work that continues to render possible the ethico-political phenomenon or site in ways that transcend (even pervert) the intentions and affordances of any particular actant (in Latour’s language). What we see in the intra-action is a reversal of intentionality. The teacher, wanting (intending) to be fair, adopts the affordances of the PDS. The affordances of the PDS unfairly constitute some as plagiarists and others not. The outcome of the intra-action is that the agency of the teacher is one of arbitrariness or unfairness. Moreover, we cannot simply say that the software objects are neutral means and that it is the people (teachers and students) who use them that are at fault, or that they simply use them in inappropriate ways. Of course some of that might be true; however, the software objects do embody certain (im)possibilities, (dis)functions and affordances/prohibitions that condition the way they are taken up as part of ongoing social practices (in searching and detecting). Nevertheless, we cannot talk about affordances without already having to invoke all the other intra-actional agencies (identities, practices and discourses).

Does this mean we cannot ‘locate’ sociomaterial agency? We have suggested above that agency is not an all-or-nothing affair. We can make ‘marks on bodies’ visible. We can reveal the way in which these co-constitutive conditions intra-act to constitute some as plagiarists and others not (although our analysis above is incomplete). Nevertheless, through this brief analysis we believe we have shown that the morally significant location of agency is the phenomenon, a ‘way of being in the world’ that acted as the ongoing co-constitutive horizon for the different actors (word processors, authors, plagiarists, teachers, students, etc.) to emerge in the way they did. I want to suggest that we need this type of disclosive analysis to help us make visible the nexus of co-constitutive intra-actions. I will refer to this as a disclosive archaeology of the phenomenon as part of a broader disclosive ethics approach (Introna 2007).

3.4.1 Disclosive Archaeology of Phenomena

Sociomaterial phenomena need to be subjected to ongoing disclosive scrutiny through a process of disclosive archaeology, as was briefly done with the plagiarism detection phenomenon above—and as has been done with others such as search engines (Introna and Nissenbaum 2000), ATMs (Introna and Whittaker 2006), facial recognition systems (Introna and Wood 2004; Brey 2004) and virtual reality computer games (Brey 1999), to name but a few. When I use the term ‘archaeology’ here I am thinking of Foucault’s work—i.e. the (transcendental) co-constitutive conditions that rendered a phenomenon possible. As he explains:

… it is rather an enquiry whose aim is to rediscover on what basis knowledge and theory [sociomaterial agency in our case] became possible; within what space of order knowledge [sociomaterial agency] is constituted… Such an enterprise is not so much a history, in the traditional meaning of the word, as an “archaeology” (Foucault 1994, pp. xxi–xxii)

The purpose of disclosive archaeology is not to focus on material agency or human agency as such but rather to make visible the ongoing conditions of possibility, the way of being in the world, that render the co-constitution of agencies possible as part of the ongoing becoming of the phenomena. It must trace the contingent simultaneity of affordances, identities, practices and discourses to reveal the nexus that co-constitutes the ethico-political phenomenon or site of ongoing sociomaterial action—as was briefly sketched out above. But more than this, it also needs to ask about the constitutive conditions that constrain and enable the sort of agencies (affordances, identities, practices and discourses) that can be imagined or emerge as legitimate in the nexus of co-constitutive intra-actions. In particular, what are the cultural-historical conditions that enable and constrain the sort of affordances it is possible to conceive, the sort of identities it is possible to assume and the sort of practices that are seen as legitimate ways of acting? In our case example: how did it become possible for students to see education as a commodity? Why have academic writing and assessment come to be seen in the way that they have? Why did plagiarism and the need for plagiarism detection emerge? In other words, it is my claim that if we want to address the ethical and political questions that our technologies raise then we do not just need to address the affordances, identities, practices and discourses that constitute a particular sociomaterial phenomenon or site; we also need to ask about the constitutive conditions that enable and constrain the emergence of those particular agencies as legitimate in the first place.

3.4.2 Towards Intra-actional Responsibility

Having accounts of ‘marks on bodies’ is just one side of the equation; ultimately we need to act concretely in particular situations. In doing this we need to ensure that we address all intra-actional agencies in their full simultaneity of intra-activity. For example, we need to address simultaneously the:

  • Affordances/prohibitions—We need to attempt to build values into the design of artefacts (as suggested by VSD) or materialise morality (as suggested by Achterhuis 1995; Latour 1991; Verbeek 2006). We also need to make artefacts more transparent so that the affordances and prohibitions of artefacts are more visible (Introna 2007; Winner 1980). We also need to build more engaging artefacts, as suggested by Verbeek (2005) and Borgmann (1984). But more than this, we also need to question the prevailing technological moods of our day. We must initiate, and participate in, the debates about the sort of technological futures we ought (or ought not) to have.

  • (Cyborg) Identities—When thinking about affordances we should also ask questions as to what sort of cyborgs we are becoming. We must participate in society more generally in developing technologically afforded notions of ‘whole’ identities rather than ‘narrow’ identities (such as gadget people, the Google generation, etc.). We must propose and show that technology can also afford the development of ‘whole’ identities within more mindful practices. In other words, cyborg identities need not necessarily be ‘narrow’. But we also need to attend to the central question of what sort of cyborgs we want to become.

  • (Cyborg) Practices—We need to understand the practices that are emerging around our technological affordances but we should also develop new technologically afforded (or cyborgian) practices that render possible our common human values. It is only in the nexus of practices of care (or mindfulness) that more mindful affordances can emerge as legitimate.

  • Discourses—Most important of all is the development of new discourses that will enable and legitimate the sort of affordances, identities and practices that will intra-enact our common human values. Foucault was right when he said that discourses constitute ‘subject positions’ and naturalise them. I will add to this not just ‘subject positions’ but also, more specifically, technologically afforded identities and practices.

These suggestions are not complete, unproblematic or uncontroversial. Nevertheless, they seem to me to go some way towards taking the ethics and politics of our increasingly sociomaterial existence seriously. More importantly, they attempt to acknowledge that agency is complex, distributed and not amenable to simple interventions (except in isolated and specifically constructed spaces/places). All sociomaterial interventions are mostly, if not always, more or less ontological inasmuch as they can reconstitute the agents (human and non-human) in many unexpected ways, as our archaeology of the plagiarism detection phenomenon above revealed. The decision to take my car, or the bus, or my bicycle to work constitutes me as a being that cares (or not) for the environment—and much, much more. The question of morality in the constitutive nexus of sociomaterial phenomena cannot be resolved once and for all but needs to be worked out in the specifics of each constitutive nexus, again and again. This is indeed what gives ethics its urgency; there is indeed much work left to be done for us cyborgs.