
Introduction: Re-visiting the Neoliberal Knowledge Economy

In a world where income is being decoupled from education and work, and neoliberal capitalism has led to an increasing concentration of wealth (Piketty 2014), it is likely that social and educational inequalities will accelerate and proliferate even as equality and excellence dominate Western educational policy agendas. Peter and Bröckling (2017) argue that equality and excellence constitute the hegemonic discourses of ‘economisation’ within the German education system, a thesis that has useful applications to education systems elsewhere. Equality of opportunity is increasingly seen in neoliberal terms as a mechanism designed to exploit the full reserves of human capital. In higher education, ‘excellence’ serves to introduce a logic of competition for educating the elite. Peter and Bröckling (2017) adopt a theoretical approach drawn from Foucault’s governmentality and Luhmann’s systems theory to discuss how mechanisms of exclusion and inclusion operate in the schooling and university sectors. As they suggest:

Equality and excellence appear, at least superficially, as opposing and mutually exclusive orientations; one either supports the promotion of the elite and a competitive understanding of education, or one supports collective learning and the equal right to education for all - tertium non datur. (Peter and Bröckling 2017) (italics from the original)

They trace these antagonisms to a basic model of rationality that drives the global educational discourse, in which discourses of excellence and equality get cashed out in neoliberal market terms and can be understood by reference to neoliberal governmentality. Specifically, embracing political discourse theory, they argue: ‘equality and excellence represent models for two opposing hegemonic projects in the German education system, which nonetheless meet within a transnational framework of economic competition’. They demonstrate how the mobilisation of a concept of excellence is anchored in the qualitative improvement of education seen as human capital and learning for the ‘information age’, and how education has become decisive for competitiveness in a world where ‘productivity is based on the creation, distribution and use of knowledge’.

The economisation of education has taken place through the progressive amalgamation of discourse threads to form the ‘knowledge economy’, based on new endogenous growth theory developed by Romer and others in the 1990s and adopted and popularised by the OECD soon after. The OECD’s formulation became the dominant neoliberal discourse: it blended elements from earlier management, sociological, and economic studies and recast education, effectively, from a welfare right concerned with equality of opportunity into the central element of human capital development. As neoliberal policies pursued school choice, vouchers, and privatisation alongside the marketisation of education, the liberal rhetoric of equality of opportunity faded away, surviving intact only at the primary school level. In the hard-core neoliberal states, educational inequalities soon began to increase under systems that decentralised state control and decision-making to the local level, a form of institutional autonomy that had the effect of benefitting schools from ‘rich’ areas and diminishing those from ‘poor’ areas.

I have been interested in Foucault’s reading of neoliberalism and its application to discourses of the knowledge economy for some years now. In 2001, I published a paper that reviewed and critiqued national education policy constructions of the knowledge economy (Peters 2001). Referencing the post-war consensus that increasingly emerged among economists, futurists, and sociologists, and the different threads of a blended discourse associated with Drucker, Machlup, Porter, and Thurow, I charted the ruling paradigm in which the economics and productivity of knowledge had become the only source of comparative advantage, commenting that many Western governments had begun the process of restructuring their national education systems and redesigning the interface between universities and business on the basis of neoliberal theories of human capital, public choice, and new public management.

In this context, I made reference to the discourse of the ‘future of work’, citing Charles Handy’s work in the 1980s to signal the end of full employment and the redesign of education to cope with increased job mobility and multiple careers. By ‘knowledge economy’, I stressed the main characteristics of the received mainstream discourse, which focused on (1) the economics of abundance; (2) the annihilation of distance; (3) the de-territorialisation of the state; (4) the importance of local knowledge; and (5) investment in human capital. In the following section, I teased out several separate discourses from economics, management theory, futurology, and sociology that can be identified as having contributed to shaping the present policy narrative of the ‘knowledge economy’, including: the economics of information and knowledge (Marschak, Machlup, Becker, Friedman, Buchanan and Tullock, Romer); new management theory and knowledge managerialism; the sociology of the labour process; the sociology of knowledge and education; futurology, futures research, forecasting, and foresight studies; and communications and information technology theory.

I suggested that these are clearly disparate disciplines, ‘fields and discourses that operate with different assumptions, employ different methodologies and reach different and sometimes opposing conclusions. The art of policy scholarship is intended, in part, to gain awareness of these different strands as they influence policy narratives, to disentangle them and comment upon inconsistencies’ (Peters 2001). These discourses came from across the political spectrum, and the blended discourse often represented wholesale conceptual ‘borrowings’ without proper attribution.

In the rest of the paper, I attempted to define the discourse of the knowledge economy by investigating three examples of national policy constructions, in the UK, Scotland, and NZ, all of which were strong examples of policy discourses aimed at the economisation of education. In the final section, and in those early days, I made clear that I was not in principle against the concept of the knowledge economy, at least as it fits within the social democratic tradition that posits the economy as subordinate to the state and the question of sovereignty. I argued that the notion of the ‘knowledge society’ provides grounds for both the reinvention of education as a welfare right and the recognition of knowledge rights as a basis for social inclusion and informed citizenship. This view can be contrasted with that of the ‘knowledge economy’ as simply an ideological extension of the neoliberal paradigm of globalisation, where the term stands for a ‘stripped down’ functionalist view of education in the service of multinational information capital. I was influenced by Stiglitz’s argument for knowledge as a global public good, a discourse that appeared at the end of the 1990s. In my critique, I challenged the easy accommodation between ‘knowledge’ and ‘information’ and returned to the question of employment and ‘knowledge workers’ by reference to Rifkin’s (1998) ‘end of work’ analysis of the US economy and the threat of automation in the shift from industrial to knowledge capitalism, transforming the West into ‘workless worlds’ where only an elite technical labour force will find jobs.

If you remember, Rifkin’s educational solution was to expand education’s role in civil society as an arena for job creation and social-service provision in the coming century. I made reference to André Gorz and indicated that ‘[i]n the Hegelian and Marxist senses, the nature of work is tied up not only with “practico-sensory activity”, but with poiesis and self-creation’ (Peters 2001: 16) (italics from the original). Returning to Foucault, I emphasised how the formation, circulation and utilisation of knowledge in the late twentieth century had become a fundamental problem, and followed Foucault, who compared the accumulation of knowledge to that of capital (in nineteenth-century capitalism). He asserted that at this juncture, in the age of the knowledge economy, it is now impossible to pursue the question of knowledge separately from the question of capital. Surely this early statement by Foucault, made in conversation with the Italian Marxist Duccio Trombadori in 1978, is an instantiation of his concept of power-knowledge (le savoir-pouvoir): modern power is based on knowledge and reproduces it; both share dynamic and unstable systemic characteristics as relational, ubiquitous, and productive (Foucault 1980). The central feature of political economy in the twentieth century was the formation, circulation, and utilisation of knowledge rather than that of capital and its dialectic of capital-labour. Through his later studies of neoliberalism, Foucault foresaw the importance of the centrality of knowledge for radical political economy and indicated a pathway to understanding the further ‘technologicisation’ of an emerging single interconnected planetary system of global knowledge, for the first time in human history.

That paper was published over eighteen years ago and has been well cited (over 250 times). Now it is old hat. Over the intervening years, I have followed through on many of these themes in a variety of papers and books that extend and depart from the original arguments, emphasising my concerns about technological unemployment, especially as regards youth, and searching for viable social democratic alternatives. In subsequent work, I have employed an approach using Foucault’s work on neoliberal governmentality. Foucault gave his famous lectures on neoliberalism as a form of biopolitics in 1979, just as Margaret Thatcher came to power, focusing on Becker and human capital theory as the most advanced form of neoliberalism. Foucault died in 1984 and capitalism has kept on changing, teetering from one crisis to another: the crisis of productivity, the global financial crisis of 2008, the crisis of political legitimation following the socialisation of bank failures and austerity politics. Foucault, it might be observed, did not have much to say about capitalism per se as an international system except through glancing comments and his interpretation of neoliberalism, which was nevertheless a major contribution. Certainly, Foucault did not anticipate the formalisation, mathematicisation, and compression that took place under processes of financialisation in what I have termed ‘algorithmic capitalism’, sometimes also referred to as ‘platform capitalism’ or ‘high-frequency trading’ (Peters 2017a, b), nor did he envisage the development of the concept of ‘cognitive capitalism’, based on his work and Marx’s ‘Fragment on Machines’, that emerged with the autonomist school in Italy under Negri, Virno and Lazzarato. I have attempted to develop this marriage of Marx and Foucault in the field of education by focusing on the question of digital labour (Peters and Bulut 2011).

The Discourse of Cognitive Capitalism

Cognitive capitalism is the culmination and most systematic statement to date of the Italian autonomists’ outline of an economic theory of a form of capitalism superseding industrial capitalism. Boutang (2012), working with his colleagues in Paris around the journal Multitudes, which he founded in 2000, builds on the work of Antonio Negri, Paolo Virno, Christian Marazzi, Andrea Fumagalli, and others in the Italian autonomist school to focus on cognitive and ‘immaterial’ labour (Lazzarato 1996), after Marx’s ‘Fragment on Machines’ (in Marx 1978). Theorists of cognitive capitalism claim that a fundamental shift occurs from a capitalism based on physical resources to one based on knowledge and brain power as both input and output, signalling a break with Fordism and a historically new stage of capitalism. Postoperaismo offers an empirically based understanding of changes in production, and the shift to ‘immaterial labour’ characterises the growing significance of the service sector, creative industries, and the so-called knowledge economy. Pitts paints what now seems a familiar scenario based on this interpretation (in order to contest it, suggesting that it overlooks the persistence of social relations):

The ‘Fragment on Machines’ (1973, 704-706) is a small section of his Grundrisse, the notebooks for what would later become Capital (1976). In it, Marx presents a future scenario where the use of machines and knowledge in production expands. Production revolves more around knowledge than physical effort. Machines liberate humans from labour, and the role of direct labour time in life shrinks to a minimum. Free time proliferates. The divorce of labour time from exchange value sparks capitalist crisis. But this technological leap brings about the possibility of a social development on a massive scale. Freed from physical subordination to the means of production, workers grow intellectually and cooperatively. This freely generated ‘general intellect’ reinserts itself, uncoerced, into production as fixed capital. The worker is incorporated only at a distance, rather than as a constituent part of the capital relation. The potential for an incipient communism arises. (Pitts 2017: 4)

I have no difficulty in holding with advocates of postoperaismo that ‘the technological leap’ may lead to ‘social development’ and even to a kind of ‘socialisation of thought’, but I have difficulty in accepting that this socialisation all points the same way, or that it leads to the potential for communism based on ‘freely-generated general intellect’, especially when, in the face of technological unemployment, the notions of ‘worker’ and ‘labour’ are radically redefined.

If we accept the shift at face value, it highlights significant consequences for education and digital labour (Peters and Bulut 2011) and for the future shape of a sharing and participative economy based on education and learning considered in the broadest sense. At the same time, deep learning has come of age (developing well after postoperaismo). The rapid development of machines that can learn without explicit program instruction has accelerated in the last five years, surpassing technical expectations. In combination with ‘big data’ analytics, deep learning, perhaps best represented by Google’s DeepMind and IBM’s ‘Watson’, has defeated the best international chess and Go players and made an unsurpassed contribution to cancer research (Peters 2017a, b). Deep learning, a branch of artificial intelligence, threatens to accelerate the automation of work and the concomitant process of technological unemployment at a time in contemporary history when youth unemployment has reached record post-war levels. A frequently cited Oxford study drawing upon recent advances in machine learning (ML) and mobile robotics (MR) estimates that 47 per cent of America’s occupations will be automated within the next 20 years (Frey and Osborne 2017). In a world where income is being decoupled from education and work, and neoliberal capitalism has led to an increasing concentration of wealth (Piketty 2014), it is likely that social and educational inequalities will accelerate and proliferate even as equality and excellence dominate Western educational policy agendas.

There have been many attempts in the post-war era to characterise the future of capitalism, from sociological work focused on postindustrialism as both a critique of industrialism and a prediction of economic shifts based on the centrality of theoretical knowledge (e.g. Bell, Touraine, Toffler) to more recent conceptualisations of the ‘information economy’ (e.g. Porat) and the ‘knowledge economy’ (e.g. Drucker, Machlup, Romer). ‘Cognitive capitalism’ (CC) is a Marxist-inspired critique of the knowledge economy that has a debt to endogenous growth theory. This paradigm or hypothesis focuses on how the shift to knowledge as a factor of production, and its characterisation in terms of cognitive activity, transforms the labour/capital relationship. CC draws our attention to labour-process models that technologically extend human communication and realise the creation of value through the production of knowledge and other symbolic goods, increasingly organised in terms of large data networks.

I have pursued this topic in a number of publications on ‘knowledge capitalism’ (Peters and Besley 2006), ‘knowledge socialism’ (Peters 2004), the ‘creative economy’ (Araya and Peters 2010; Peters et al. 2009), ‘cognitive capitalism’ (Peters and Bulut 2011), ‘open knowledge’ and the ‘open science economy’ (Peters 2009), ‘financialisation’ and ‘finance capitalism’ (Peters and Besley 2013), and ‘radical openness’ (Peters 2014a, b; Peters and Jandrić 2018a, b), work that tries to develop an explicit recognition of the ways in which these shifts and forces reconfigure education at all levels as the centre of the knowledge economy and position labour increasingly as the source of creative value. This is the kind of description I offered in my edited book with Ergin Bulut, Cognitive Capitalism, Education and the Question of Digital Labour:

‘Cognitive capitalism’ is a general term that has become significant in the discourse analysing a new form of capitalism sometimes called the third phase of capitalism, after the earlier phases of mercantile and industrial capitalism, where the accumulation process is centred on immaterial assets utilising immaterial or digital labour processes and production of symbolic goods and experiences. It is a term that focuses on the socio-economic changes ushered in with the Internet as platform and new Web 2.0 technologies that have impacted the mode of production and the nature of labour.

The core of cognitive capitalism is centred on digital labour processes that produce digital products cheaply utilising new information and communications technologies that are protected through intellectual property rights regimes which are increasingly subjected to interventions and negotiations of the nation states around the world. (Peters and Bulut 2011)

I am not concerned to defend this notion here, nor to comment on its Marxist origins and its sometimes heavily romantic versions that lay stress on processes of collective intelligence, open science, and social innovation, all of which I have indicated as ways to reclaim the public dimensions of knowledge (e.g. Peters 2013a, b). Nor am I concerned here with the ways in which CC underplayed the cultural dimension or the relational and affective aspects of the new capitalism (Peters 2019). In this paper, I want mainly to comment on the relation of CC to what I call ‘the epoch of digital reason’ (Peters 2016) and, in particular, the critical relationship between ‘deep learning’ and what is called ‘technological unemployment’ (Peters et al. 2019).

If the infinite substitution of labour is the driving motif of the transformation of labour in the shift from industrial to postindustrial forms of capitalism, with its waves of automation based on robotisation, then ‘deep learning’ can be considered the key metaphor in the transformation of knowledge into data and information, and in machine learning that can augment and replace human knowledge-production systems with algorithms and large data sets. We might say that the principle of infinite substitution of labour, first into mechanised assembly plants and later into robot manufacturing, is now duplicated for mental labour, especially in the digital realm. In short, what is the impact of artificial intelligence on employment? The current anxiety seems well placed, and we have been warned about the ‘jobless future’ not just for routine manual and cognitive jobs but also for non-routine ‘creative’ and highly skilled jobs.

The empirical analysis reveals a more complex picture in which AI automation redefines employment and may even create some jobs. Autonomous vehicles or driverless cars may in fact disestablish many job types in transport while creating a few to cope with accidents and emergencies. Certainly, the scale and rate of job creation will be affected. More importantly, automation and the generalised ‘decline of labour’ seem to pose huge questions for education, labour politics, unions, and welfare. Capital no longer needs labour in the way it required the mass of unskilled labour, even at cheap offshore rates, that characterised the early stages of industrial capitalism or its globalisation in the post-war period as jobs migrated East. Even skilled tasks can now be handled by robots at diminishing cost in 24/7 fully automated plants. We saw a second wave of automation in the service sector in the 1980s, when white-collar office jobs began disappearing; the ATM had first been introduced in 1969 as part of the early process of financialisation. The digitisation of finance, which among other things led to the automation of equity markets and the phenomenon of high-frequency trading, represented a third wave of automation associated with global finance capitalism, coming to fruition in the early 2000s. The fourth wave, the automation of knowledge and research, developed quickly with the growth of ‘platform capitalism’ and the rise of algorithm-based knowledge capitalism built on search, big publishing, and the metrics industries. Deep learning, an aspect of AI that has recently experienced a period of accelerated development and breakthrough technologies, is the latest phase of automation, one that has the capacity to automate and augment human cognition.Footnote 1

Deep Learning and the Final Stage of Automation

Goodfellow et al. (2016), who wrote the first textbook on deep learning, comment: ‘The true challenge to artificial intelligence proved to be solving the tasks that are easy for people to perform but hard for people to describe formally—problems that we solve intuitively, that feel automatic, like recognising spoken words or faces in images.’ Their solution is

to allow computers to learn from experience and understand the world in terms of a hierarchy of concepts, with each concept defined in terms of its relation to simpler concepts. By gathering knowledge from experience, this approach avoids the need for human operators to formally specify all of the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones. (Goodfellow et al. 2016)

Goodfellow, Bengio, and Courville identify three waves in the development of deep learning: deep learning known as cybernetics in the 1940s–1960s, which appeared with biological theories of learning; deep learning known as connectionism in the 1980s–1990s, which used ‘back-propagation’ to train neural networks with multiple layers; and the current resurgence under the name deep learning, beginning in 2006 and only appearing in book form in 2016. They argue that the current deep learning approach to AI goes beyond the neuroscientific perspective, applying ‘machine learning frameworks that are not necessarily neurally inspired’. Deep learning, then, is ‘a type of machine learning, a technique that allows computer systems to improve with experience and data’ (Goodfellow et al. 2016).
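To make the idea of a layered ‘hierarchy of concepts’ learned ‘from experience and data’ concrete, the following minimal sketch in Python is my own illustration rather than anything drawn from Goodfellow et al.; the toy XOR problem and all parameter choices are assumptions made purely for demonstration. It trains a small two-layer network with back-propagation, so that hidden-layer features learned from the data are composed into the final answer instead of being specified by a human operator.

import numpy as np

rng = np.random.default_rng(0)

# Toy data (illustrative assumption): XOR cannot be solved by a single layer,
# but a learned hidden layer of simpler features makes it solvable.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two layers of weights: input -> hidden 'simple concepts' -> output.
W1 = rng.normal(scale=1.0, size=(2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(scale=1.0, size=(8, 1))
b2 = np.zeros((1, 1))

lr = 0.5
for step in range(5000):
    # Forward pass: compose simpler features into a more complicated concept.
    h = sigmoid(X @ W1 + b1)      # hidden-layer features
    out = sigmoid(h @ W2 + b2)    # prediction

    # Backward pass ('back-propagation'): push the error back through the layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]

The point of the sketch is simply that nothing about XOR is specified formally in advance: the intermediate representations are learned from examples, which is the sense in which such systems ‘improve with experience and data’.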

Morris et al. (2017) report on the remarkable ‘take-off’ of artificial intelligence and, with this resurgence, the return of the machinery question posed almost 200 years ago in the context of the Industrial Revolution. They note the upbeat analysis of the mainstream press in 2016 and document the publication of several US and UK reports that suggest not only that ‘AI has arrived’ but also that it offers ‘huge potential for more efficient and effective business and government’.Footnote 2 The economists cited welcome AI for its productivity gains. They ask ‘[w]hat triggered this remarkable resurgence of AI?’ and they answer:

All evidence points to an interesting convergence of recent advances in machine learning (ML), big data, and graphics processing units (GPUs). A particular aspect of ML—called deep learning using artificial neural networks—received a hardware boost a few years ago from GPUs, which made the supervised learning from large amounts of visual data practical. (Morris et al. 2017: 407)

The popularity of ML, they note, has been enhanced by machines out-performing humans in areas taken to be prime examples of human intelligence: ‘In 1997, IBM’s Deep Blue beat Garry Kasparov in chess, and in 2011, IBM’s Watson won against two of Jeopardy’s greatest champions. More recently, in March 2016, Google’s AlphaGo defeated Lee Sedol, one of the best players in the game of Go’ (Morris et al. 2017: 407). Following this popular success, as Morris et al. (2017) note, the private sector took up the challenge. They note, in particular, that IBM developed its cognitive computing in the form of its system called Watson, a DeepQA system capable of answering questions in natural language.Footnote 3 The Watson websiteFootnote 4 makes the following claim: ‘Watson can understand all forms of data, interact naturally with people, and learn and reason, at scale.’ It also talks of ‘transforming learning experience with Watson’, taking personalised learning to a new level: ‘We are transforming the learning experience through personalisation. Cognitive solutions that understand, reason and learn help educators gain insights into learning styles, preferences, and aptitude of every student. The results are holistic learning paths, for every learner, through their lifelong learning journey.’

Already firms are talking about transforming the learning experience through personalisation, with purported ‘cognitive solutions’ that understand, reason, and learn, helping educators gain insights into individual students’ learning styles and preferences. IBM’s Watson EnlightFootnote 5 is a planning tool to support teachers with curated, personalised learning content and activities aligned with each student’s needs. The IBM Whitepaper (2016) claims that ‘[d]ata-driven cognitive technologies will enable personalised education and improve outcomes for students, educators and administrators’.Footnote 6 Another prominent example is DeepMind, which advertises itself in terms of artificial intelligence research ‘developing programs that can learn to solve any complex problem without needing to be taught how’.Footnote 7 One of the current developments focuses on the realm beyond automation, exploring the advanced engineering of autonomous systems and how these systems will learn to adapt to new and unforeseen circumstances.

In terms of the labour market, US experts are split over whether AI will displace more jobs than it creates, with evidence suggesting that any jobs created will be those in STEM fields that complement AI. Lee (2016), a top White House science advisor, estimates that automated vehicles could threaten or alter 2.2 million to 3.1 million existing US jobs. As Obama claimed before the recent US election: ‘The next wave of economic dislocations won’t come from overseas. It will come from the relentless pace of automation that makes a lot of good middle-class jobs obsolete.’ (in Rotman 2016: 92) It is clear that the comparative advantage of human forms of labour will be eroded as ML and deep learning systems become more sophisticated and more intelligent, taking over and/or augmenting jobs in libraries, research, teaching, law, and other tertiary-sector and creative forms of employment that require a learning component and have previously been seen to be impervious to automation. In particular, the widespread development of ‘cognitive computing’, deep learning, and autonomous learning systems through applications and by start-ups strikes at the very heart of the so-called ‘knowledge economy’, or ‘cognitive capitalism’, where such systems are already able to augment and, in some cases, replace jobs in the engine room of ‘knowledge capitalism’.

The autonomous learning systems of AI, increasingly referred to as deep learning, theoretically have the capacity to introduce autonomy into machine learning, with the same dramatic impact that mechanisation first had in agriculture, creating an industrial labour force and the massive rural–urban migration that built the mega-cities of today. Fordist automation, which utilised technologies of numerical control (NC), continuous-process production, and production processes using modern information technology (IT), introduced the system of mass production and, later, the ‘flexible system of production’ based on Japanese management principles. When Fordism came to a crisis in the 1960s, with declining productivity levels as Taylorist organisational forms of labour reached their limits, the search for greater flexibility diversified into new forms of automation, especially as financialisation took hold in the 2000s and high-frequency trading ensued on the basis of platforms of mathematical modelling and algorithmic engines (Peters et al. 2011; Peters 2012, 2013a, b, 2017a). These changes had been developing since the 1960s, with the invention of the credit card and the eventual automation of equity markets. This too-simple analysis, which paints a broad picture of the dynamic changes of knowledge capitalism, suggests a sequential or stage-related set of changes in the automation of production, of the economy, and of labour. I do not use the term post-Fordism in this context because of its inherent analytical weakness (Vidal 2011).

In an interesting edited collection, Alleys of Your Mind: Augmented Intelligence and Its Traumas, Matteo Pasquinelli (2016a: 7) foregrounds ‘the reason of trauma’ as a search for positive definitions of ‘error, abnormality, trauma, and catastrophe—a set of concepts that need to be understood in their cognitive, technological and political composition’. Pasquinelli (2016a: 7) goes on to elaborate the philosophical context of augmented intelligence by reference to Foucault, Deleuze, and the Frankfurt School on the instrumentalisation of reason. It may be surprising for some to find out that Foucault’s history of biopower and technologies of the self shares common roots with cybernetics and its early error-friendly universal machines. Or to learn that the desiring machines, which ‘continually break down as they run, and in fact run only when they are not functioning properly’ (Deleuze and Guattari 1983: 8), were in fact echoing research on war traumas and brain plasticity from the First World War. Across the history of computation (from early cybernetics to artificial intelligence and current algorithmic capitalism), both mainstream technology and critical responses to it have shared a common belief in the determinism and positivism of instrumental or technological rationality, to use the formulations of the Frankfurt School (Horkheimer 1947; Marcuse 1964).

Pasquinelli’s ‘Keyword: Augmented Intelligence’, offered as an afterword for the collection, makes clear the connections and synonyms, and the intellectual work that needs to be done in order to get a grip on this concept:

Synonyms include: augmented human intellect, machine augmented intelligence, and intelligence amplification. Specifically, extended mind, extended cognition, externalism, distributed cognition, and the social brain are concepts of cognitive sciences and philosophy of mind that do not necessarily involve technology (Clark and Chalmers 1998). Augmented reality, virtual reality, and teleoperation can be framed as a form of augmented intelligence, moreover, for their novel influence on cognition. Brain-computer interfaces directly record electromagnetic impulses of neural substrates to control, for instance, external devices like a robotic arm, and raise issues of the exo-self and exo-body. (Pasquinelli 2016b: 2003)

I find the theoretical recourse to the history of modern cybernetics that characterises the collection both useful and instructive as a means of viewing ‘algorithmic capitalism’, a term I have used myself (Peters 2012, 2013a, b, 2017a, b). Stiegler’s (2010) For a New Critique of Political Economy is apposite and challenging here, where he argues that machines have confiscated the knowledge and memories of knowledge workers, such that proletarianisation now encompasses not only the muscular system (Marx) but also the nervous system of the so-called creative workers in the knowledge economy. I found the essays by Wheeler (2016), ‘Thinking Beyond the Brain: Educating and Building from the Standpoint of Extended Cognition’, and Luciana Parisi (2016), ‘Instrumental Reason, Algorithmic Capitalism, and the Incomputable’, particularly useful for the purposes of this essay.

My fear, and I think it is well founded, is not only of a ‘final’ stage of automation that takes place with the development of machine and deep learning and that at least theoretically threatens ‘technological unemployment’, but also that an even more savage set of emerging inequalities will ensue. These growing inequalities seem likely to be focused on deepening youth unemployment and on inequalities in educational opportunity that became pronounced under the financialisation of education, the trillion-dollar blow-out in US student loans, and austerity capitalism after the Great Recession, especially in the Mediterranean economies. In this new space of deep learning and its effects on university-based research and knowledge workers, human capital arguments seem old-fashioned and limp, although the innovation side of endogenous growth theory may still hold. ‘End of work’ and ‘future of work’ discourse, as Caffentzis notes, has witnessed a return ‘reminiscent of the mid-1970s, but with a number of twists’:

In the earlier period, books like Where Have All the Robots Gone? False Promises (Aronowitz 1973) and Work in America, and phrases like ‘blue collar blues,’ ‘zerowork’ and ‘the refusal of work’ revealed a crisis of the assembly line worker which expressed itself most dramatically in wildcat strikes in U.S. auto factories in 1973 and 1974 (Linebaugh and Ramirez 1992)…

But in the mid-1990s books like The End of Work (Rifkin 1995), The Labour of Dionysius and The Jobless Future (Aronowitz and De Fazio 1994), and phrases like ‘downsizing’ and ‘worker displacement’ (Moore 1996) have revived themes associated with the crisis of work at a time when the power relation between workers and capital is the inverse of the 1970s. Whereas in the 1970s workers were refusing work, in the 1990s capitalists presumably are refusing workers! (Caffentzis 1999: 20) (italics from the original)

Caffentzis concludes:

Negri and Rifkin are major participants in the ‘end of work’ discourse of the 1990s, although they occupy two ends of the rhetorical spectrum. Rifkin is empirical and pessimistic in his assessment of the ‘end of work’ while Negri is aprioristic and optimistic. However, both seem to invoke technological determinism by claiming that there is only one way for capitalism to develop. (Caffentzis 1999: 35)

A working hypothesis, and a dark scenario, is that in an age of deep learning (the final stage of automation) the welfare state based on full employment might seem to belong to a quaint and romantic past, when labour, the withdrawal of labour, and labour politics went together and had some force in an industrial age. In retrospect, and from the perspective of algorithmic capitalism in full swing, the welfare state and full employment may seem like a mere historical aberration.

Algorithmic Capitalism in the Epoch of Digital Reason

In my work, I have become more interested in the question of education and digital labour in what I call the ‘epoch of digital reason’ in order to explore the basis for knowledge socialism rather than knowledge capitalism. Cognitive capitalism seemed to me to offer an alternative and opposing account of knowledge capitalism, and the notion of ‘creative labour’ provided an interesting alternative description to ‘human capital’. In this connection, I have explored, in particular, the wider philosophy and political economy of openness and open knowledge production, with a strong emphasis on ‘radical openness’ and new forms of ‘co(labor)ation’. After completing my PhD thesis in 1984 on the problem of rationality in Wittgenstein, I was drawn to the work of Lyotard, Foucault, and Derrida and published works that explored a poststructuralist interpretation of Marxism and analysed education as a form of knowledge capitalism (e.g. Peters 2001, 2003). In later work, I was captured by the promise of the ‘paradigm of openness’ and became interested in all forms of openness as representing a moment of collective intelligence in science and education (e.g. Peters and Roberts 2013; Peters 2013a, b). In a paper significant for my own thinking, I outlined three forms and associated discourses of the ‘knowledge economy’: the ‘learning economy’, based on the work of Bengt-Åke Lundvall; the ‘creative economy’, based on the work of Charles Landry, John Howkins, and Richard Florida; and the ‘open knowledge economy’, based on the work of Yochai Benkler and others. I argued that these three forms (and discourses) represented three recent, related but different conceptions of the knowledge economy, each with clear significance and implications for education and education policy, with the last providing a radically non-propertarian model that incorporates both ‘open education’ and ‘open science’ economies (Peters 2010). I have been seeking a social democratic alternative to constructions of the neoliberal knowledge economy that respects the collective and public dimensions of knowledge as a symbolic social good. In retrospect, I understand that I have been trying to subvert the discourse and, in my own way, to expand and experiment with the concept.

I became less satisfied with the concept of the knowledge economy and sought a form of radical political economy in poststructuralist philosophy, which had been a tendency in my early work. A major long-term historical tendency of capitalism not mentioned by Foucault, because it only became evident in the years after his death, is, as I mentioned, the dominance of finance culture and financialisation based on the increasing formalisation, mathematicisation, and automation of finance markets (Peters et al. 2015). This development, which grew out of long-term developments in algebra and the algebrafication of logic, has increased algorithmic governance and the growing prominence of big-data informationalism. It indicated the close connection between information and markets in a pronounced development of ‘knowledge capitalism’ that became increasingly abstract and mathematical. It developed first in capital market applications and in the extension of global finance markets, and then in science and education through the application of search engines and online networks. I used the term ‘bioinformational capitalism’, in an echo of Foucault, to describe and analyse the merging of two broad technological forces of contemporary capitalism: informational capitalism, based on the rise of digital technologies, on the one hand, and the new biology and biotech, on the other, which has created new life and therefore become able to renew its own material base (Peters 2012). These two major forces, the digital and the biological, are now inextricably entwined (the biologisation of information and the informatisation of biology) and represent a vector of critical convergence within the postdigital (Peters and Besley 2019).

Increasingly, I sought to understand the contours of what I called ‘the epoch of digital reason’ in relation to AI, deep learning, and ‘algorithmic capitalism’ (Peters and Jandrić 2015, 2018a, b; Peters 2017a, b; Peters and Besley 2019). In ‘Critical Philosophy of the Postdigital’, written with Tina Besley, we drew on our recent work on cybernetics, complexity theory, quantum computing, artificial intelligence (AI), deep learning, and algorithmic capitalism, bringing these ideas together to develop a critical philosophy of the postdigital based on an understanding of quantum computing (QC), which rests on quantum mechanics and offers a radically different approach from classical computing based on classical mechanics. Cybernetics and complexity theory provide insight into systems that are too complex for their futures to be predicted. Artificial intelligence and deep learning promise the final stage of automation, which is not compatible with a welfare state based on full employment.
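To indicate, in the simplest textbook terms, the sense in which quantum computing departs from classical computing (this illustration is mine rather than drawn from the works cited above): a classical bit is always in exactly one of the states 0 or 1, whereas a single qubit occupies a superposition of both,

$$\lvert\psi\rangle = \alpha\lvert 0\rangle + \beta\lvert 1\rangle, \qquad \alpha,\beta \in \mathbb{C}, \qquad \lvert\alpha\rvert^{2} + \lvert\beta\rvert^{2} = 1,$$

so that a register of n qubits inhabits a 2^n-dimensional state space, while n classical bits specify only one of 2^n discrete configurations at a time.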

We have thus arrived in the age of algorithmic capitalism, and its current phase, the ‘biologization of digital reason’, is a distinct phenomenon, still in an early emergent form, that springs from the application of digital reason to biology and the biologization of digital processes. Rejecting a fully mechanical universe, therefore, a critical pedagogy of the postdigital is closely related to Whitehead’s process philosophy, a form of speculative metaphysics that privileges the event and processes over and above substance. A critical philosophy of the postdigital is dialectically interrelated with theories such as cybernetics and complexity theory, and also with processes such as quantum computing, complexity science, and deep learning. These processes constitute the emerging global techno-science system, perpetuating algorithmic capitalism and the prospect of ‘intelligent publishing’ in knowledge capitalism, where machine learning also means ‘machine writing’ and AI applied to research can operate entirely without human beings to discover deep configurations in big-data science.

This is the fourth knowledge revolution, following Schwab’s (2017) Fourth Industrial Revolution, even though I have misgivings about the ways in which this view is somewhat deterministic and technology-driven. The notion clearly requires more theoretical work. Am I optimistic about the prospects of openness or of ‘digital socialism’? I am warier and more sceptical than I was a decade ago about the opportunities for knowledge socialism, especially in view of algorithmic capitalism, although there are still opportunities for fully public knowledge, learning, and publishing platforms that are, if not owned by the State, at least strongly regulated in the interests of public good science. Such public platforms are not obliged to return big profits from the mass personal data that the soon-to-be trillion-dollar information utilities harvest from their users on a daily basis. Perhaps the concept of collective intelligence will be best developed in the near future in terms of workable models of augmented intelligence (cognitive augmentation), which is a complement to rather than a replacement for human intelligence. When we speak of the fourth knowledge revolution, we are specifying the fifth-generation cybernetic episteme driven by 5G wireless networks, quantum computing, deep learning, and big data, which replaces the Internet with a cyber-infrastructure that includes a range of new converging technologies, including AI, that are fusing the physical, digital, and biological worlds and unifying science at the nano-level. It is on its way; the signs are there, and it will impact all academic disciplines and institutions, creating an unsurpassable horizon in which human beings learn to become truly digital.

This will be the evolutionary cultural and symbolic system within which we experience what it is to know, communicate, and learn to be human. Reminiscent of Foucault’s early structuralist period in the history of systems of thought, it eclipses the individual knowing subject. This ‘disappearance’ or diminution of the knowing subject is not just a result of structures or structuralism and the decentering of the knowing subject within enveloping global networks of power-knowledge (the dynamic flows and circuits of knowledge), but rather a result of the conjunction of the two forces of informationalism and the new biology of genetics that I call ‘bio-informationalism’. When these two primary technological vectors converge with the newest technologies of AI, deep learning, and quantum computing, on the one hand, and nano-scale technology, on the other, then the individual human knowing subject is superseded entirely or its centrality is completely displaced.