1 Introduction

In recent scholarship, the work of several French thinkers has been mobilized to show its relevance for understanding current technologies (Parrochia, 2009; Loeve, Guchet and Bensaude-Vincent, 2018). This article attempts to do something similar, but with an author who is likewise often overlooked in the canon of philosophy of technology, though typically linked with “French Thought”: Jean-François Lyotard (though see Sebbah, 2018). Often associated with themes in political philosophy and aesthetics, Lyotard is mostly known for the famous definition of the postmodern in his best-known book, La condition postmoderne, “as incredulity towards metanarratives” (Lyotard, 1979, xxiv). Lyotard’s philosophy is thus normally read as a call for the end of the dominant metanarratives, ranging from communism and liberalism to speculative philosophy. For instance, in Le différend, Lyotard writes: “Everything real is rational, everything rational is real: “Auschwitz” refutes speculative doctrine. This crime at least, which is real […], is not rational” (Lyotard, 1983, 179–180).

The thesis of this article is that these famous claims of Lyotard are in fact embedded in a philosophy of technology, one that is, moreover, still relevant for understanding present technoscience. Hence the two ambitions of this article: to sketch Lyotard’s philosophy of technology (Sects. 2 and 3) and to show its contemporary relevance by applying it to a number of recent technosciences, such as synthetic biology and data science (Sect. 4).

To do so, it aims to correct three common misconceptions about the work of Lyotard. The first misconception is that La condition postmoderne (1979) is typically read as if it were only about the famous claim of the end of metanarratives. In fact, the book is mainly about what replaces these metanarratives. Lyotard’s central claim is that whereas science and technology used to be legitimized through metanarratives invoking values such as emancipation or revolution, in our current society knowledge rather follows a logic of performativity: knowledge has to produce results and increase efficiency.

This immediately leads to the second misconception, namely that this notion of performativity can be equated with capitalism. Though this association is not incorrect, it is misleading. Instead, this article will highlight how, according to Lyotard, capitalism is not so much the cause of performativity, but rather the reverse: capitalism is simply the latest form in which the logic of performativity shows itself, a logic that is not restricted to the twentieth century, nor even to human history.

The final misconception is that Lyotard’s reflections on science and technology would be restricted to this book alone and simply disappear in his later work. This is incorrect. In his later work, we find a well-articulated philosophy of technology, centered around the concept of technoscience. Hence, the first part of this article will explore what Lyotard means by this concept, and how it relates to how “technoscience” is used in the literature (Sect. 2). Next, I will explore Lyotard’s philosophy of technology, correcting the three mentioned misconceptions (Sect. 3). Finally, the value of Lyotard’s notion of technoscience is argued for through a brief examination of two contemporary technosciences: synthetic biology and data science (Sect. 4).

2 The Ambiguity of Technoscience

Though often used by philosophers, historians, and sociologists, the concept of technoscience is rarely the object of explicit reflection. To the extent that the concept is analyzed, it is mainly to stress the plurality of different uses linked with the term (e.g., Bensaude-Vincent & Loeve, 2018; Guchet, 2011; Sebbah, 2010). The popularization of the term is typically ascribed to the work of the Belgian philosopher Gilbert Hottois, who started to use it in the 1970s (Hottois, 2018). Soon, the term was picked up by Bruno Latour (1987), Don Ihde (Ihde & Selinger, 2003), and Donna Haraway (1997), but also by Lyotard (1986).

In general, there are two main interpretations of technoscience, highlighting a fundamental ambiguity. A first interpretation is that technoscience mainly refers to an essentialist thesis about the nature of science: science is about more than words, thoughts, or theories; it always implies a constitutive element of technology or materiality. Hence, we should rethink our conception of science and give it a new name: technoscience. This was the project of Hottois, who criticized an “inflation of language in contemporary philosophy” (Hottois, 1979), and it can also be seen at work in, for instance, Latour’s constructivist take on science and technology. Opposed to ready-made science, the notion of technoscience is used “to describe all the elements tied to the scientific contents no matter how dirty, unexpected or foreign they seem” (Latour, 1987, 174).

However, technoscience is often invoked in a second sense, as a historical diagnosis: science is transforming into something new, which we should call technoscience. We see this at work, for example, in the work of Donna Haraway (1997) and in many contributions of historians and sociologists, who started to use the concept of technoscience independently from the 1960s onwards (see Hottois, 2018). This historical diagnosis comes in two varieties, depending on which side of the science–society pair one focuses on. Either the focus is on how social and political relations are shifting in relation to shifts in science and technology (e.g., Channell, 2017; Pickstone, 2001), or it refers to how scientific practices themselves are done differently, linked to shifts in society. The latter claim is more often found in studies concerning specific scientific fields that are labeled technoscience, such as synthetic biology (Schmidt et al., 2009; Simons, 2021a) or nanotechnology (Bensaude-Vincent, 2009; Schmidt, 2011).

Though analytically distinct, the essentialist and historical theses are often mixed. Latour, once again, is a good example: he justifies his Actor-Network Theory (ANT) both by arguing that science has always been technoscience (Latour, 1987) and by simultaneously claiming that ANT becomes a necessity due to shifts in science and society (Latour, 1999). And although both claims seem to be in conflict with one another, we will see that Lyotard’s philosophy of technoscience offers us a possible reconciliation between them. Technoscience is the expression of a logic of performativity that has always been present, but that has recently been released from the societal chains that kept it relatively at bay, due to events such as the end of metanarratives. To understand this position, let us turn to Lyotard’s use of the concept of technoscience.

3 Lyotard and Technoscience

In his own work, Lyotard uses the notion of “technoscience” for the first time in a number of essays from the early 1980s, later collected in Le postmoderne expliqué aux enfants (1986). The book consists of a set of fictitious letters sent to the children of his friends. The theme of technoscience is a recurrent one in a number of these letters. In one of them, for instance, Lyotard states: “One cannot deny the predominance of technoscience as it exists today, that is, the massive subordination of cognitive statements to the finality of the best possible performance – which is a technical criterion” (Lyotard, 1983, 9). Similarly, in a later one, Lyotard speaks of “the fusion of technology and science in the immense technoscientific network” (Lyotard, 1983, 83–84).

Lyotard most likely took the term technoscience from Gilbert Hottois, as Hottois himself suggests (see Hottois, 2018, 124). In Lyotard’s work, we do indeed find references to Hottois’s work (e.g., Lyotard, 1983, 118), although often in an indirect manner. For example, Lyotard describes how he “read a young Belgian philosopher of language complaining that Continental thought […] replaced the paradigm of referentiality with one of adlinguisticity” (Lyotard, 1983, 2). Lyotard is referring here to Hottois’s book, L'inflation du langage dans la philosophie contemporaine (1979), where the concept of technoscience is introduced.

3.1 The Postmodern Condition and the Logic of Performativity

However, very similar ideas are already present in Lyotard’s La condition postmoderne, which was also published in 1979 but does not yet use the term. This brings us to the first misconception, namely that this book deals only with the end of the metanarratives. Instead, as Lyotard makes clear in the book, the aim “of this study is the condition of knowledge in the most highly developed societies” (Lyotard, 1979, xxiii). Central to the book is the question of the legitimation of knowledge. In premodern societies, this question was solved through “narrative knowledge,” where knowledge is framed through narratives that meaningfully situate persons and events in time. Narrative knowledge, for Lyotard, is however incommensurable with scientific knowledge, which arose in modernity (see Burdman, 2020). Whereas a plurality of language games is possible in narrative knowledge, “[s]cientific knowledge requires that one language game, denotation, be retained and all others excluded” (Lyotard, 1979, 25).

But since scientific knowledge boils down to denotative statements, there is no longer any room to draw normative conclusions. Hence, the normative justification of why specific types of knowledge deserve our attention needs to come from somewhere else, namely metanarratives. After the Second World War, however, these metanarratives lost their appeal. In La condition postmoderne (1979), Lyotard gives some suggestions as to the cause of this disappearance, suggesting on the one hand that it was a product of the development of technology and capitalism, and on the other that it was caused by “an internal erosion of the legitimacy principle of knowledge” (Lyotard, 1979, 39). In his later book, Le différend (1983), Lyotard argues that (meta)narratives are always based on a central set of proper names in a culture: the name of the land, the people, important dates and places, and so on. In the twentieth century, however, many of our proper names, such as Auschwitz, have the unique property, according to Lyotard, that they fail to be recuperated in a narrative in a coherent way. As Lyotard states, “they place modern historical or political commentary in abeyance” (Lyotard, 1989, 393). The result is an inability to uphold old metanarratives or create new ones (see Simons, 2022, 118–122).

But, as I said, most of the book focuses on what fills the vacuum left by the disappearance of these metanarratives. Lyotard names this alternative performativity: knowledge is no longer legitimized by a metanarrative, but its expansion and progress are rather “based on its optimizing the system’s performance – efficiency” (Lyotard, 1979, xxiv). Knowledge is thus produced in order to increase the efficiency and performativity of the system of which it is part (e.g., the university, the national economy, etc.). Knowledge therefore does not simply have to report information, but must also immediately highlight how it will result in even more knowledge or applications, or what we nowadays would call “impact.” Knowledge becomes a commodity in a knowledge economy in which countries compete with one another for knowledge and information. Lyotard’s La condition postmoderne mainly concerns the impact of this shift on universities, both on the level of research and of education. Put in simple terms, concerning academic research, we enter a system that is interested in ever-increasing numbers of publications, applications, articles, or PhDs, in order to upscale performativity, but that never has to ask why we need more of these things. Similarly, according to Lyotard, contemporary knowledge economies tend to think about education in terms of its output: the number of degrees delivered or the salaries of its alumni, rather than any intrinsic value of the knowledge taught (as traditional metanarratives tended to do).

For Lyotard, the role of technology in this process is crucial: since the goal is not information per se, but information that has a performative impact, science becomes more and more technically invested, in order to create new scientific instruments, applications, and experimental effects that will highlight this impact. But because of this, Lyotard fears that the alliance between science and economic interests will become ever more dominant, since these sophisticated technological settings in laboratories do not come cheap. “The games of scientific language become the games of the rich, in which whoever is wealthiest has the best chance of being right. An equation between wealth, efficiency, and truth is thus established” (Lyotard, 1979, 45).

The end result is also a rat race between laboratories to ever increase their economic means and technical tools, in order to outperform the competition. Lyotard, in fact, seems to be inspired here by the early work of Bruno Latour, to whom he refers in La condition postmoderne. Together with their mutual friend Paolo Fabbri, Latour published early articles on the rhetoric of scientific texts (Latour & Fabbri, 1977). Latour painted a picture of scientific statements as the product of an intense and costly construction process, in which the statements of the lab that can successfully mobilize the most allies are accepted as true. Latour would flesh out this view of science more fully in Laboratory Life:

the set of statements considered too costly to modify constitute what is referred to as reality. Scientific activity is not ‘about nature,’ it is a fierce fight to construct reality. The laboratory is the workplace and the set of productive forces, which makes construction possible. Every time a statement stabilises, it is reintroduced into the laboratory (in the guise of a machine, inscription device, skill, routine, prejudice, deduction, program, and so on), and it is used to increase the difference between statements. The cost of challenging the reified statement is impossibly high. Reality is secreted. (Latour & Woolgar, 1979, 243)

While Latour keeps a certain sociological distance in this analysis of scientific practices, Lyotard is worried about how the dominance of this logic of performativity threatens to exclude other forms of knowledge and science that seemingly do not fit this demand of performativity. “Research sectors that are unable to argue that they contribute even indirectly to the optimization of the system's performance are abandoned by the flow of capital and doomed to senescence” (Lyotard, 1979, 48). Hence, the logic of performativity implies a form of violence, according to Lyotard, embodied by the demand: “be operational (that is, commensurable) or disappear” (Lyotard, 1979, xxv). And though, for both Lyotard and Latour, this first of all concerns the question of which scientific research gets funded, the repercussions are more far-reaching. Since both would argue that technoscience must be understood first of all as a practice that not just describes but transforms reality, investing in certain forms of knowledge while ignoring others also results in favoring one reality at the expense of others. To opt for technoscientific research is also to opt for a technoscientific society.

3.2 Performativity and Capitalism

Much of the secondary literature on Lyotard interprets this criticism of performativity as a critique of capitalism (e.g., Williams, 1998). Even in the entry for the Stanford Encyclopedia of Philosophy, Peter Gratton (2018) describes La condition postmoderne, together with later books such as L’inhumain (1988), as a critique of “the dehumanizing inhumanism of contemporary capitalism and its reduction of the human to modes of efficiency and the needs of the technocratic order.” This is, of course, not that surprising, since La condition postmoderne itself gives ample reason to believe this is Lyotard’s claim. Lyotard speaks, for instance, of how “both capitalist renewal and prosperity and the disorienting upsurge of technology would have an impact on the status of knowledge” (Lyotard, 1979, 38). Or, in a similar passage, he describes how:

It was more the desire for wealth than the desire for knowledge that initially forced upon technology the imperative of performance improvement and product realization. The ‘organic’ connection between technology and profit preceded its union with science. Technology became important to contemporary knowledge only through the mediation of a generalized spirit of performativity. (Lyotard, 1979, 45)

In contrast, “[r]esearch sectors that are unable to argue that they contribute even indirectly to the optimization of the system's performance are abandoned by the flow of capital and doomed to senescence” (Lyotard, 1979, 47).

Similar messages are found in later works. For instance, in Le postmoderne expliqué aux enfants (1986), Lyotard stresses how he “struggled in different ways against capitalism’s regime of pseudorationality and performativity” (Lyotard, 1986, 73). He also describes this capitalist logic of performativity in terms of a

complete hegemony of the economic genre of discourse. The simple canonical formula of this genre is I will let you have this, if you in return can let me have that. Among its other attributes, this genre always calls for new thises to enter into exchange (today, for example, technoscientific knowledge) and uses payment as a means of neutralizing their power as events. (Lyotard, 1986, 58)

However, I want to argue that Lyotard is claiming something else, which could even be characterized as the reverse: capitalism is not the cause of performativity, but one of its latest effects. Performativity, instead, refers to something that is not limited to twentieth-century capitalism, or even to human history, but is a logic at work as a metaphysical principle throughout the whole history of the universe: “Capital must be seen not only as a major figure in human history, but also as the effect, observable on the earth, of a cosmic process of complexification” (Lyotard, 1988, 67).

One way to understand performativity as a metaphysical principle is to turn to Lyotard’s L’économie libidinale (1974), where performativity is interpreted as the logic that aims to maximize the efficiency of libidinal flows. A very dense book, it was written in close dialogue with Deleuze and Guattari’s Anti-Oedipus (1972), which had a similar ambition. Very simply put, the latter contains a history leading up to capitalism in three stages: nomadism, the despotic state, and capitalism. In nomadic societies, relations are coded by filiation and alliance, e.g., in terms of kinship, which limits the ways in which entities can interact. With the invention of the state, however, the “coded flows of the primitive machine are now forced into a bottleneck where the despotic machine overcodes them” (Deleuze & Guattari, 1972, 199). This new stage puts the State (and its king, for instance) at the center as an obligatory passage point, “overcoding” the existing codes.

Deleuze and Guattari read this history in terms of fear: a fear that without any code an endlessly accelerating flux would destroy all meaningful order, a tendency they associate with capitalism: “In a sense, capitalism has haunted all forms of society, but it haunts them as their terrifying nightmare, it is the dread they feel of a flow that would elude their codes” (Deleuze & Guattari, 1972, 140). Thus, in the third stage, that of capitalism, we witness a process of decoding (or “deterritorialization”), where codes are mobilized, changed, uprooted, and destroyed in the service of performativity. In that sense, although it announces the nightmare of complete decoding, capitalism still keeps certain codes at work to regulate and optimize its flows (a process of “reterritorialization”), rather than abandoning them all: “And capitalism, the relative limit of every society, in as much as it axiomatizes the decoded flows and reterritorializes the deterritorialized flows” (Deleuze & Guattari, 1972, 266).

When Lyotard speaks of performativity, he has something similar to this axiomatization in mind: something that did not arise in the twentieth century but has been there from the start, only temporarily and imperfectly stabilized under codes. Thus, when Lyotard opens La condition postmoderne with his famous claim of the incredulity towards metanarratives, it is this that is at stake: these metanarratives were central codes that kept the “nightmare” of performativity at bay, while in postmodern technoscience these restrictions slowly disappear. Contemporary capitalism in that sense no longer requires a metanarrative to organize society and has in fact often given up on any attempt to justify itself in these terms. Instead, capitalism must be understood in terms of a libidinal economy, where we should not focus on its ideological narratives, but rather on the way it steers streams of attention, information, and desire. In that sense, capitalism aligns itself with a more fundamental force that characterizes the whole history of the universe: a tendency, based not on intention but on probability, towards an increase in complexity. To understand this point, we need to include other parts of Lyotard’s oeuvre.

3.3 Technoscience and the Death of the Sun

In this way, we come to our third misconception, namely that this theme of technoscience and performativity would be restricted to this one book, La condition postmoderne. In reality, performativity is present in both earlier and later books. In later texts such as L’Inhumain (1988) and Moralités postmodernes (1993), Lyotard mobilizes the concept of technoscience to capture how science and technology are currently framed through the logic of performativity. Through these texts, a number of additional characteristics of technoscience come to the foreground.

First of all, technoscience is linked to a certain type of antihumanism, not so much in the sense of putting structure or language at the center, but rather in the line of information theory: the human subject is interpreted as merely one instance of information exchange, similar to how other objects emit and store information. The history of the universe is then read as a history of an ever more complex organization of energy and information, of which the human form and the history of (techno)science are merely the latest historical episodes, reinterpreted and naturalized as part of this cosmic development. For this view, Lyotard seems to be inspired by Serres and Latour (see Simons, 2017, 2022), and by the emerging discipline of STS more generally:

Its science and technoscience also end up being part of nature. There can be a science of science – and there is – just as there is a science of nature. The same goes for technology: the whole field of STS (science-technology-society) appeared within a decade of the discovery of the subject’s immanence in the object it studies and transforms. And vice versa: objects have languages; to know objects you must be able to translate their languages. Intelligence is therefore immanent in things. In these circumstances of the imbrication of subject and object, how could the ideal of mastery persist? It gradually falls out of use in the representations of science made by scientists themselves. Man [sic] is perhaps only a very sophisticated node in the general interaction of emanations constituting the universe. (Lyotard, 1986, 21)

For Lyotard, this antihumanism is also a cause for pessimism, since it announces that the human is not the central reference point, not even for science: “Scientific or technical discovery was never subordinate to demands arising from human needs. It was always driven by a dynamic independent of the things people might judge desirable, profitable, or comfortable” (Lyotard, 1986, 83). Or, as he puts it a few pages further:

The needs for security, identity, and happiness springing from our immediate condition as living beings, as social beings, now seem irrelevant next to this sort of constraint to complexify, mediatize, quantify, synthesize, and modify the size of each and every object. We are like Gullivers in the world of technoscience: sometimes too big, sometimes too small, but never the right size. (Lyotard, 1986, 79)

This brings us to one of Lyotard’s main concerns in his later work, namely that this antihumanist tendency of technoscience will eventually make the human body disappear, because, like everything else, the body is subjected to the logic of performativity, open to improvement and obsolescence. Lyotard symbolizes this moment with the future event of the “death of the sun”:

There will come a time when [the sun] will implode, and then life will become impossible on earth. This means that humanity is now counting down its lifetime, and must prepare for its exodus. It has begun to do so, to give itself the means to exode […]. And there, the question of the body comes up again. The human body must be able to withstand living conditions other than those on the earth and then the problem of the resistance of the body […] will arise for real. (Lyotard, 1985, 72)

The most dramatic form in which Lyotard sketches this history is his “postmodern fable” (Lyotard, 1993). In this text, Lyotard sketches a history of the cosmos, starting with the formation of stars and planets, through the lens of performativity, the increasing process of complexification of energy. This whole history, including the rise of liberal democracy, is interpreted not through the lens of a metanarrative of emancipation or revolution, but as the story of a blind process of complexification of energy, which only temporarily selected the familiar forms of the human body. “Thus after some time (very short on the astronomical clock) the system named Human was selected” (Lyotard, 1993, 83). Again, the fable ends with the impending event of the death of the sun, which will force this story of complexification, were it to continue, to take another form than that of the human body. “Thus the last exodus of the negentropic system was prepared far from the Earth. What the Human and his Brain could look like, or rather the Brain and his Human, when they left the planet forever, that history did not say” (Lyotard, 1993, 86).

In this light, Lyotard also interprets the current technosciences: not as aimed at human emancipation, but as the continuation of this process of complexification, and thus as the end of the human form. “To meet this challenge, the system had already (at the time the fable was told) set out to develop prostheses capable of perpetuating it after the disappearance of the energy resources of solar origin that had contributed to the appearance and survival of living systems and, in particular, humans” (Lyotard, 1993, 85). In that sense, Lyotard sees technoscience not as a modern phenomenon, but rather as its destruction: it does not emancipate humanity but announces its obsolescence. “But the victory of capitalist technoscience over the other candidates for the universal finality of human history is another means of destroying the project of modernity while giving the impression of completing it” (Lyotard, 1986, 18).

Hence, Lyotard is also able to respond to the common criticism that his own story is simply a new metanarrative: “the fable does not have the hallmarks of a modern ‘meta-narrative’” (Lyotard, 1993, 93). It is rather a story in which the human species is not the hero. Instead, energy is the protagonist, and the fable follows the internal struggle of energy in the form of negentropic forces struggling in an entropic universe. But energy is no real subject either, because it lacks the relevant properties, such as consciousness or intention. The temporality involved in this story is nothing but a diachronic time. “This time is not a temporality of consciousness which demands that the past and the future, in their absence, nevertheless be held 'present' at the same time as the present” (Lyotard, 1993, 91). Moreover, there is no horizon of emancipation. Possibly something is saved, though there is no guarantee, but this does not lead to a greater understanding; it is merely “the result of cybernetic feedback with controlled growth” (Lyotard, 1993, 92). In that sense, the logic of performativity that dominates in technoscience is not a new metanarrative either: it is not a story about the emancipation of mankind working towards a clear goal, but rather one of a blind force that works towards complexification based on sheer probability.

3.4 Technoscience and the Thinking Body

Above, we have sketched Lyotard’s philosophical analysis of technoscience; let us now briefly turn to what we can learn from it. First of all, we find a way out of the abovementioned ambiguity between technoscience as an essentialist thesis and as a historical diagnosis. From Lyotard’s perspective, technoscience is something that has been there from the start of the universe, in the form of the logic of performativity, but due to the disappearance of metanarratives it has been able to manifest itself in a more extreme fashion in recent technosciences.

Secondly, we find a critical angle on these technoscientific practices, centering on the question of whether we indeed want to or should give up the human body. Lyotard stresses how the logic of performativity implies a form of injustice, since it erases all heterogeneities between different discourses. This heterogeneity, for Lyotard, must be defended, and his philosophy is an attempt to bear witness to the inevitable conflicts, called “differends,” that arise between these different discourses (see Lyotard, 1983). But one can reframe this point in another idiom, perhaps more familiar nowadays, namely that of ecology: Lyotard resists simply giving up the human body for better-performing alternatives, since he believes it is valuable to preserve an ecological diversity of bodies. It is important, therefore, to understand “body” not just as referring to the physical body of a person, but to the different layers that make living possible: thinking, forms of living, social structures, and so on. For Lyotard, diversity on all these levels is worth preserving.

Thus, when Lyotard wants to preserve the human body against technoscience, it is not so much on the ground of a belief in a well-defined human nature that must be preserved from alienation. Rather, Lyotard believes it is more accurate to think of our contemporary times in terms of the inhuman (Sebbah, 2018). We are already acquainted with a first inhumanity, that of the logic of performativity. But in opposition to this, Lyotard mobilizes another inhumanity, which he illustrates with the example of pedagogy. The paradox of pedagogy is this: human beings are never born as human, in the sense that any human first needs to go through the proper education to become fully human. Human beings can only become human beings. But, then, Lyotard wonders: “What shall we call human in humans, the initial misery of their childhood, or their capacity to acquire a ‘second’ nature which, thanks to language, makes them fit to share in communal life, adult consciousness and reason?” (Lyotard, 1988, 3).

Both options are possible. The traditional option is to call the adult the human, the one who has purified himself or herself from an uncivilized animal nature in order to acquire a second, educated nature. But the alternative is also possible, namely to call the undefined child the real human: “Shorn of speech, incapable of standing upright, hesitating over the objects of its interest, not able to calculate its advantages, not sensitive to common reason, the child is eminently the human because its distress heralds and promises things possible” (Lyotard, 1988, 3–4). Lyotard, however, is skeptical of this option as well: first of all, because it tends to neglect the challenge of inhuman performativity; secondly, because such humanism also fails to acknowledge the heterogeneity of the human, namely elements that can never be fully harmonized in the human. Instead, it is more adequate to speak of an undefined “inhuman,” which can never be fully actualized without creating differends. We thus end up with a vision where our “inhumanity” is threatened either by forcing it into one metanarrative (through education) or by performativity.

So what is this “inhuman” which is threatened by performativity and that we do not want to give up? According to Lyotard: thinking. Proper thinking, according to Lyotard, has this inhuman character, in the sense that it is indeterminate, but also without harmony and control. The thinking that is at work in a philosophical text, or that can be provoked by an artwork, must be understood in terms of an event: we cannot control it, and we do not adequately know what it is, nor where it is going. Lyotard understands thinking rather as a struggle, a struggle with the inhuman present in us, to which we try to give shape and a proper place. “Being prepared to receive what thought is not prepared to think is what deserves the name of thinking” (Lyotard, 1988, 72).

Technoscience would make such thinking impossible, since the latter does not follow the logic of performativity. This is why Lyotard is deeply pessimistic about the future of philosophy, which he considered a format that embodies this inhuman thinking. Though it can be highly creative, it comes in the format of the “event” that escapes our control and thus cannot successfully be incorporated into the streams of the libidinal economy of performativity. “Reflection is not thrust aside today because it is dangerous or upsetting, but simply because it is a waste of time. It is ‘good for nothing’, it is not good for gaining time. For success is gaining time. A book, for example, is a success if its first printing is rapidly sold out” (Lyotard, 1983, xv).

4 Contemporary Technoscience

In this final section, I want to explore how Lyotard’s notion of technoscience is still relevant for analyzing contemporary technoscientific practices. If Lyotard’s framework is of any help in understanding contemporary technoscience, we will have to find in technoscience the following elements. First of all, technoscientific practices must follow the logic of performativity: the knowledge that is produced is mainly judged by what can be done with it, how it will improve the system of which it is part (the university or society), and how it might increase its efficiency and speed. Secondly, technoscientific practices will no longer uphold the human body as a reference point, but rather explore alternative possibilities deemed more efficient. I will restrict myself here to two examples: synthetic biology (Sect. 4.1) and data science (Sect. 4.2). Other disciplines, ranging from robotics to nanotechnology, could serve as examples of technoscience as well. I choose these two cases for the following reasons. First of all, synthetic biology is one of the central examples of technoscience in the contemporary literature (Schmidt et al., 2009; Simons, 2021a), thereby highlighting how Lyotard’s work can be linked to the existing literature. Data science, on the other hand, is less often discussed as a case of technoscience. I nonetheless focus on it to highlight how Lyotard’s work can also extend the concept to new domains.

4.1 Synthetic Biology

Synthetic biology is a field that came into being around 2000 and can be defined as the application of new methodologies, often borrowed from engineering, to biological systems in order to redesign them or even design biological systems de novo. Many commentators have noted that synthetic biology is strongly driven by incentives that can be captured under the banner of performativity: the criterion for good biological research is typically equated with the extent to which it leads to useful applications for the bio-economy, often equating biology with technology (see Carlson, 2011) and knowledge with making (Keller, 2009). Bensaude-Vincent also describes the field as following an “economy of promises,” where making grand promises about future possibilities is often more important than actual, practical results. “Negative results never generate blame or disqualification of colleagues. They are not skeptical scientists. […] Objections are turned into challenges, negative results into new opportunities” (Bensaude-Vincent, 2013, 28).

Others have noted how synthetic biologists display a strong urge to automate, “streamlining, not just the stuff of life, but the ways in which they went about doing so” (Roosth, 2017, 105). In that sense, humans are more and more out of the loop, risking ending up in a “biology without people” (Roosth, 2017, 119). This brings us to the second point, namely that the human body, more broadly understood here as terrestrial life, is slowly starting to lose its status as a reference point in research. Several scholars have noted that synthetic biology can be characterized by a stronger emphasis on exploring “biological possibilities” (Ijäs & Koskinen, 2021; Koskinen, 2017) instead of studying life as it factually and historically exists on earth. In other words, synthetic biologists are more and more interested in the modal properties of life: what life could be, rather than what it actually is. Let me give two more concrete examples from my own work which, though they do not mention Lyotard, align very well with Lyotard’s central claims.

A first example is so-called protocell biology (Simons, 2021a), a field that aims to study the origins of life by studying synthetic “protocells”: cell-like entities in the lab that have at least some of the properties of life. The goal is to reconstruct the way in which life originated out of dead matter. Though this is an old question, most synthetic biologists redefine it from a historical question into a question about biological possibilities. To give just one example, let me quote at length the synthetic biologist Stephen Mann:

is it possible for life to emerge through fundamentally different organizational, operational and evolutionary mechanisms, or are the core criteria of terrestrial biology – membrane-based cellularity, semi-conservative DNA/RNA-mediated self-replication, protein-regulated metabolism, Darwinian evolution, non-equilibrium energization – invariant and axiomatic? This wider perspective necessitates an intellectual shift away from the historical impasse associated with the study of the origin of life specifically on Earth to a broader perspective concerned with the generic transformation of inanimate matter to a life-like state. And by focusing attention towards the possibility of generating alternative models of life in the laboratory that are essentially devoid of historical content – that is, without needing to anticipate too many unknown boundary conditions – it should be possible for chemists to contribute significantly to understanding the origin of life as a general physical phenomenon, even if the actual origin of life as it occurred on the early Earth remains unresolved. (Mann, 2013, 156)

The goal, in other words, is no longer to focus on terrestrial biology, but on what Mann calls a universal biology: life as it could exist, regardless of whether it has ever existed in this way on earth.

This becomes even clearer in the second case, namely research on minimal genomes (Simons, 2021b). Originally, research on minimal genomes was similarly concerned with a historical question: what is the minimal genome, the smallest set of essential genes that a living organism on earth needs to survive? It was initially conceived as a reconstruction of how the simplest form of life originally existed (or even still exists). Synthetic biologists such as J. Craig Venter have, in recent decades, been trying to synthesize such a minimal cell. But the relation between such a synthetic construct and the actual history of life on earth becomes unclear, since, once again, a shift occurs from studying biological actualities to designing biological possibilities, often acknowledging that these might not help us understand the history of life on earth. At best, they can teach us something about the modal properties of life: what is possible or necessary for life to exist. In the case of minimal genome research, the project is then also often reconceived as aiming towards building a “chassis” for synthetic biology: “a basic and predictable set of genes to which one could subsequently add the desired genes for particular purposes” (Simons, 2021b, 131). In that sense, a shift away from terrestrial life also opens up room for a whole new domain of biotechnological applications, not so much in the form of direct applications, but rather by opening up a whole realm of potential ones. Both protocells and minimal genomes are often framed as a kind of platform technology, allowing biologists to create customized cells that produce whatever chemicals are needed in industry or medicine. Or, to quote Venter himself:

we are conducting new research that has the long-term aim of creating a ‘universal recipient cell’ that can take any synthetic DNA software customized to create life and create that designated species. [...] In order to create a universal recipient cell, we are in the process of rewriting the genetic code of the mycoplasma cell to enable it to transcribe and translate any transplanted DNA software. (Venter, 2013, 188)

In the technoscience of synthetic biology, we thus see many of the characteristics that Lyotard’s framework highlights: first of all, scientific research that follows a logic of performativity, redefining itself in terms of its output rather than its insight; and secondly, as a field, synthetic biology seems rather inclined to leave behind the body, in this case the body of terrestrial life, focusing instead on alternative biological possibilities that are explored in terms of efficiency and productivity.

4.2 Data Science

Let me turn to the second case, data science. This field is very broad, encompassing any scientific study of data, ranging from statistics to informatics. Hence, let me narrow it down to one particular topic, namely algorithms, which can be defined as finite sequences of well-defined instructions for solving a (class of) problem(s) and which are nowadays typically embodied in computer programs. Computer algorithms are typically praised as a solution to all kinds of problems (often linked to a data deluge) and thus portrayed as tools that can implement all (or at least some) of our human desires: the best recommendations for the product we want to buy, the perfect job for our profile, and so on.
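To make this definition concrete, the following is a minimal, hypothetical sketch (my own toy example in Python, not drawn from the literature discussed here) of such a finite sequence of well-defined instructions, embodied in a short computer program that turns rating data into product recommendations:

def recommend(products, user_ratings, top_n=3):
    """Rank products by a simple average-rating score and return the top_n best."""
    scores = {}
    for name in products:
        ratings = user_ratings.get(name, [])
        # Step 1 (well-defined): score each product by its mean rating, 0 if unrated.
        scores[name] = sum(ratings) / len(ratings) if ratings else 0.0
    # Step 2 (well-defined): sort by score, highest first, and keep the top_n items.
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Example with invented rating data for three hypothetical products.
print(recommend(["book", "lamp", "chair"], {"book": [5, 4], "lamp": [2], "chair": [4, 4, 5]}))

Trivial as it is, the point of the sketch is only that every step is fully specified in advance; the discussion below concerns what happens when such procedures, scaled up, come to stand in for human judgment and desire.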

Perhaps the clearest articulation of that promise is found in Pedro Domingos’ The Master Algorithm (2015). As the title already indicates, Domingos’ hope is that we will soon be able to design a “master algorithm,” the ultimate algorithm that “can derive all knowledge in the world—past, present, and future—from data. The Master Algorithm is our gateway to solving some of the hardest problems we face, from building domestic robots to curing cancer” (Domingos, 2015, xviii). All of our life decisions will be made by this algorithm: “A model of you will negotiate the world on your behalf, playing elaborate games with other people’s and entities’ models. And as a result of all this, our lives will be longer, happier, and more productive” (Domingos, 2015, 43). Such an algorithm will not only make all meaningful choices for us but also automate scientific research itself, making scientists obsolete: “Give it the results of physics experiments, and it discovers the laws of physics. Give it DNA crystallography data, and it discovers the structure of DNA” (Domingos, 2015, 25–26).

However, in the meantime, it has become clear that algorithms are also linked with a number of problems, ranging from biases (O’Neil, 2012) to the way algorithms are used to influence our economic and political behavior (Zuboff, 2019). Indeed, one major blind spot in Domingos’ book is its assumption that human desires are fixed and given, and that algorithms can fulfill them and always should do so (see Sias, 2021). In reality, what we see is not so much that algorithms describe and fulfill “natural” human preferences, but rather that they highlight how these preferences can be nudged, manipulated, and transformed (Zuboff, 2019). Or, in the terminology of Lyotard: the human body and its desires are slowly disappearing as the reference point, being reshaped in terms of the demands of performativity.

But let us, once again, focus on a more concrete case, namely financial algorithms. Though originally serving the regular economy, financial markets have, throughout the twentieth century, slowly detached themselves from daily economic needs and demands, more and more following their own logic, again echoing the logic of performativity described by Lyotard. We thus end up with a market that functions without human bodies and follows an internal logic of efficiency that is hardly connected with fulfilling human desires and wishes (and would instead rather shape human desires according to its own logic).

Perhaps the clearest example of this is High-Frequency Trading (HFT). Though financial markets had already undergone earlier technological changes, the introduction of computer algorithms to do the trading has had a profound impact on the structure of financial markets (see MacKenzie, 2021). HFT can be defined as “a subset of algorithmic finance in which orders are executed by fully automated algorithms in fractions of a second” (Borch, 2016, 352). Indeed, its first characteristic is the disappearance of human interventions, and thus human limitations, in financial transactions. As Michael Lewis, for example, notes in his book Flash Boys: “Every day, the markets were driven less directly by human beings and more directly by machines. The machines were overseen by people, of course, but few of them knew how the machines worked” (Lewis, 2014, 39). Indeed, in HFT-dominated financial markets, “the question of the human body and its relation to market rhythms appears obsolete” (Borch, Hansen and Lange, 2015, 1090). Or, as they immediately add, inspired by the rhythmanalysis of Henri Lefebvre: we are not faced with the complete absence of the human body, but rather with “the need for a different kind of rhythmanalysis, one in which the human body is not necessarily at the center” (Borch, Hansen and Lange, 2015, 1092).

Not human bodies, but speed becomes the reference point: the main criterion is to be faster than the competitors. MacKenzie therefore speaks of “Einsteinian materiality”: “the materiality of a domain in which, as a result of how the practices of HFT have evolved, the speed of light has become a binding constraint” (MacKenzie, 2018, 1640). Or, similarly, according to Hayles, HFT can be “regarded as an evolutionary milieu in which speed, rather than consciousness, has become a weapon in the nonconscious cognitive arms race – a weapon that threatens to proceed along an autonomous trajectory in a temporal regime inaccessible to direct conscious intervention” (Hayles, 2017, 165). The result is again something that comes close to Lyotard’s framework: the technoscientific field of data science, in the form of the design and implementation of algorithms, reshapes a part of the social body, namely all kinds of economic transactions, through the logic of performativity, embodied by ever faster algorithms that make interventions by human bodies obsolete. And again, these algorithms no longer serve the human body but instead redefine and reshape it. This happens, first of all, in a narrow sense: the bodies of the traders on financial markets have been reshaped to be focused on screens and algorithms. Secondly, the social body is also reshaped, in the sense that financial markets nowadays have far-reaching effects on domestic and foreign policy. As Christiaens suggests, the ultimate citizens are no longer the electorate, but the state’s bondholders: “Whether a government can, for instance, introduce a welfare policy depends largely on its impact on the nation’s creditworthiness and thus on its reputation with investors” (Christiaens, 2019, 108). Simultaneously, “the realm of finance […] tends to exclude the people from its operations when they are no longer valuable to shareholders” (Christiaens, 2019, 96), leading to a process that Christiaens, following Sassen (2014), calls expulsion.

HFT mitigates uncertainty in financial markets, creating new forms of control and efficiency, but at the cost of increasing instability and unpredictability in other parts of the real economy. Think of the financial and algorithmic innovations that led to the 2008 financial crisis, or events such as the “Quant Quake” of 2007, the “Flash Crash” of 2010, or the “Hack Crash” of 2013. Such events are often framed in a terminology of radical unpredictability (see Lewis, 2014; Borch, 2016). However, it would be incorrect to interpret these unpredictable events as something that escapes technoscience, for instance in terms of Lyotard’s notion of the event, as that which can never be fully grasped or always escapes us (Lyotard, 1988, 74–75). Instead, what we see is the opposite. As Louise Amoore describes, “when algorithms appear to cross a threshold into madness, they in fact exhibit significant qualities of their form of rationality” (2020, 110). Algorithms, in HFT and beyond, create future stability based on probabilities, always reducing the undefined event to one probable and well-defined outcome, an outcome that is often acceptable but occasionally “irrational.” Excluding these irrational events, such as the Flash Crash, would entail excluding the very method by which algorithms make the future predictable and controllable. In that sense, algorithms deny the inhuman that Lyotard advocates in favor of the inhuman of technoscience. Or, to quote Amoore again: “among the most significant harms of contemporary decision-making algorithms is that they deny and disavow the madness that haunts all decisions. To be responsible, a decision must be made in recognition that its full effects and consequences cannot be known in advance” (Amoore, 2020, 120).

5 Conclusion

This article has examined the philosophy of technology at work in Lyotard’s notion of technoscience. In doing so, it corrected three common misconceptions about the work of Lyotard: his La condition postmoderne (1979) is not so much about the end of metanarratives as about how a logic of performativity replaces them. This logic of performativity is, moreover, not just a product of capitalism; rather, capitalism is an effect of this logic, which is found throughout the whole of cosmic history and should be interpreted as a metaphysical principle. Hence, Lyotard’s criticism of performativity, and thus his philosophy of technology, is not restricted to one book but remains central in other works as well, embodied by the notion of technoscience.

But the goal was not just to historically reassess Lyotard’s philosophy, but also to show how his analysis of technoscience is still relevant for understanding contemporary technoscientific practices. In this article, I have argued for this on the basis of two case studies: synthetic biology and data science. We saw how these fields can be characterized by a number of traits that Lyotard predicted: a logic of performativity that slowly moves beyond the human body as a reference point. As noted, Lyotard’s framework is probably also relevant for understanding other technosciences, such as nanotechnology or robotics.

Moreover, there are other dimensions of technoscience that have been left out of the picture in this article, for instance the societal and political dimensions of how technoscience affects society. For instance, Lyotard suggests that because of technoscience, humanity is divided into two parts: “One faces the challenge of complexity, the other that ancient and terrible challenge of its own survival” (Lyotard, 1986, 79). We, in fact, already saw something similar in the case of financial algorithms, where groups are excluded if they cannot contribute to the performativity of financial markets. A topic for future inquiry, then, is to what extent Lyotard’s notion of technoscience can help us to understand not only the current transformations of scientific practices, but also those of society as a whole.