Digital data—data and metrics generated by social media and online interactions—are increasingly present in the working lives of journalists, as a result of the development and extension of digital technologies but also as part of a social process of digitalisation. The use of data and analytics in business and organisations is not new; what recent advances in digital technology have provided is an extension of the scale and scope of data available. As Nick Srnicek (2016) argues, data are key to revenues in the digital economy as well as to monopolisation in the emerging platform landscape that seeks to monetise and profit from online activity. This proliferation of data is likely to have implications for the organisation and control of work; in some cases it already has. The research presented in this chapter looks at how digital data are put to use in digital newsrooms and aims to suggest an approach to the study of data and algorithms in the labour process. The chapter builds on discussions within labour process theory (LPT) about technology, particularly as a function of managerial control, where it embeds managerial prerogatives and, in the process, has the potential to objectify and obscure relations of domination and control (Edwards 1979; Callaghan and Thompson 2001). In view of this, it argues that technologies of measurement fit within a broader managerialist agenda linking measurement to performance and that, even where they are not expressly tied to an employee performance agenda, they encourage comparison, which can lead to competition; in the case of the newsroom studied, they reorganise journalists' priorities as well as their work.

An important objective of this chapter is to disentangle human from machine: to examine, with reference to a case study, how metrics and measurement change both how we work and how we think about work, but also to look at the ways humans—journalists and news production workers in this case—react, adapt, accommodate and negotiate the introduction of these data technologies into the labour process. To build on the critical studies in this volume that examine the application of digital data in the workplace, this chapter situates the discussion of data and analytics at work within broader sociological discussions of metrics and algorithms, developed largely in response to the growing use of big data, and considers this in light of labour process approaches to the study and understanding of work, together with empirical findings from a case study of newspaper journalism.

Data, Metrics and Algorithms at Work

The conversion of work processes and outcomes into quantifiable and measurable data that can be used by management to organise and rationalise labour is, of course, not new, but the development of new ICTs brings with it the capacity for new and more precise means of measuring. The quantification and measurement of work can be traced from the scientific–technical approach of time and motion studies (for a good overview see Braverman 1974: Ch. 4) to performance management and KPIs as measures of effective and productive working. Discussions of knowledge work and creative labour have tended to note the difficulty of quantifying—and hence controlling—this work (Smith and McKinlay 2009), yet it is precisely these forms of labour that have been increasingly subject to measurement and targets (Taylor 2013: 17; De Angelis and Harvie 2009; Moore and Robinson 2016). But to trace this history is not the objective of this section. Digital data have been enabled by new and more powerful network, storage and processing technologies, and their entry into the labour process poses the question of whether the extended technological capacity to measure, combined with a new orientation towards data and their predictive capacity, changes the nature of work and managerial control.

Technology and Control from a Labour Process Perspective

One of the key and enduring insights from Labour and Monopoly Capital was Harry Braverman's insistence upon a 'social approach' to technology (1974: 184) and, as all chapters in this volume iterate, technology should be viewed not as a neutral instrument but in its relation to human labour, as a social artefact and, in particular, as an instrument of management organisation and control. For Karl Marx, whose discussion of machinery, large-scale industry and the labour process formed the basis of Braverman's research, the machine in capitalist production is both a means for producing surplus value and, in its organisation within the division of labour, the means by which humans are subordinated to capitalist relations of production, alienated from their labour, and these relations mystified: 'The social forms of their own labour—both subjectively and objectively—are utterly independent of the individual workers. Subsumed under capital, the workers become components of these social formations but these social formations do not belong to them and so rise up against them as the forms of capital itself, as if they belonged to capital' (1976[1890]: 1054).

Labour process studies of technology have tended to pursue ideas about technology and its relationship to control—although without Marx or Braverman's revolutionary outlook—through a set of core propositions (Thompson 1989). These identify the key dynamic of the capitalist labour process as driven by the 'indeterminacy of labour': the issue for capitalists of converting labour power—the capacity to work—into productive labour in order to extract a surplus; a situation of exploitation which labour has an interest in resisting. The 'structural antagonism' (Edwards 1990) built around this uncertainty and the possibility of worker resistance is inherent to the employment relationship and links the management agenda to a need to direct and discipline labour—the 'control imperative'—where this tension sits alongside management's need for cooperation in the labour process. In the case of technology, an LPT approach to understanding how technology will be adopted and put to work in the labour process would suggest that management will develop or adopt technology in line with these underlying structuring forces, which can result in a variety of worker responses, from resilience, through the reworking of technology, to resistance (Katz in Coe 2015), as well as attempts to find cooperation or consent (Burawoy 1979). Indeed, in Chap. 2 of the present volume, Moore et al. look at some examples of resistance to these new initiatives.

LPT has had a long engagement with discussions about control, computerisation and ICTs, particularly with reference to empirical studies of call centres (Taylor and Bain 1999; Callaghan and Thompson 2001). Whereas much of the discussion about power and control in relation to ICTs at work has taken Foucauldian approaches, which highlight the relationship between surveillance and self-discipline (Fernie and Metcalf 1998; Zuboff 1988; Sewell and Wilkinson 1992), the call centre discussions within LPT have made a case for less totalising and more complex understandings of the dynamics of surveillance and control in the labour process (van den Broek 2002; Bain and Taylor 2000; Thompson 2002). More recently, Elliott and Long (2015) have shown how tracking technologies and data enable the automated micro-management of warehouse workers' tasks, which, they demonstrate, leads to the isolation and individualisation of workers. These debates have emphasised the role of worker agency and, in particular, worker resistance in response to the use of technology to control their efforts, as well as recognising that control through surveillance is only effective where it sits within broader structures of dominance—which in the workplace might take the form of direct supervision, bureaucratic control or other more self-regulatory or less coercive means of control. In an overview of LPT's engagement with technology, Hall (2010) has discussed the general tendency in contemporary work towards hybrid forms of control, where technological control most often combines with bureaucratic and normative control.

Measurement and Performance Management

Performance management has been theorised from a human resources management (HRM) approach as the creation of incentives, as well as judgements and potential punishments, for workers through the development of goals and targets that align individual workers' activities with the objectives of the organisation they work for. It tends to be characterised in the HRM literature as the mutual alignment of goals and interests between management and employees, which can secure cooperation and commitment from employees (Thompson and Harley 2007). Quantifiable goals and targets have often been developed for workers who have a high degree of discretion in their jobs, especially in professional, service and managerial work, where work processes can be unpredictable and the product intangible, although they are increasingly extended to other forms of work (Bach 2013; Taylor 2013). An assumption underlying much prescriptive performance management literature is that individual performance is a result of individual behaviours, actions and exertions, and that a clear line can be drawn between input, output and outcomes. While evidence on whether performance management improves organisational performance is scant and inconclusive (Bach 2013; Storey 2007), improved performance is often cited as a justification for performance management regimes.

Critical discussion of performance management and measurement has tended to stress both the problem with its underlying assumption of a mutuality of interest and the challenges to its implementation in a context where competitive market processes continue to place pressure on organisations to minimise costs and secure profits. As discussed above, measurement has long been a preoccupation for management in its attempt to create efficiencies, remove wasted effort and rationalise the production process with the aim of increasing productivity. Critiques of performance measurement have focussed on the difficulty of measuring performance and the subsequent contestation over performance criteria, or on the development of proxy criteria that cannot measure what they are intended to measure (Bach 2013: 228–230). These can over- and undervalue aspects of work, redirecting efforts and potentially distorting the way a job is carried out—a phenomenon particularly studied in relation to public services (see, for example, Carter et al. 2013; McCann et al. 2015). The subjective nature of performance measurement and, in particular, of the appraisal of workers against measures has led to claims of the potential for arbitrary, biased or unfair management practices during appraisal (Grint 1993). Phil Taylor's (2013) extended research report for the Scottish Trades Union Congress examined the effect of performance management regimes in several industries and found that they were linked to work intensification, stress and ill-health for workers and were frequently used to 'manage exits' of workers.

Big Data, Algorithms and the Social Construction of Data

With the advent of social media and the conversion of social interactions, internet searches and other online activities into quantitative data, this use of data, or 'datafication', has become both widespread and normalised (van Dijck 2014). The availability of large amounts of data has led to their growing use in social science and in commercial and marketing research to answer social questions and predict behaviour (boyd and Crawford 2012; Moore and Piwek 2016), despite the fact that data science is at best correlative and rarely explanatory; it tends to tell us what has happened and, as a result of what has happened, what might happen, rather than why social phenomena or events occur. Studies of big data and algorithms in the emerging field of data studies have been preoccupied with examining and revealing the ways in which data are socially constructed, how the rules that inform algorithms contain values and choices that are ultimately social, and how this affects the way they act on the world. This has some parallel with discussions in LPT about the tendency for technology to embed or objectify capitalist social relations while giving the appearance of neutrality (Braverman 1974; Edwards 1979; Callaghan and Thompson 2001), and in particular with Richard Edwards' argument that what he termed 'technical control' could displace relations of direct control and the resulting conflict between workers and supervisors. Critical data scholarship has examined the specific ways and moments at which these processes occur and how data impact on society and our lives.

Data, when conceptualised as data, require the categorising and accounting of some things and not others (Gitelman and Jackson 2013); what counts as data and which data are collected requires, as José van Dijck points out, 'an interpretative frame [which] always prefigures data analysis' (2014: 201). Data are captured because they are required to answer specific questions: for example, data captured via Facebook about a defined demographic would be used differently depending on whether they were put to use by a government for health planning, a marketing company for ad targeting or an insurance agency setting policies and premiums. Similarly with algorithms: they seem to remove subjective and value-laden decision-making and replace it with computational processes that, while giving the appearance of being based on objective, mathematical reasoning, actually work to encode, and therefore reify, subjective decisions about that which they classify and categorise. While data are presented as facts or assumed to be objective, all data must be subject to interpretive processes. And in order for algorithms to make use of these data, complex processes must be simplified (Kitchin 2014) and certain practices and knowledges codified. Wendy Espeland and Mitchell Stevens suggest that in order to determine how data or numbers act on the world, we should look at the work and the conventions that are used to 'make' them (2008: 406).

In her work on data and border security, Louise Amoore (2011) draws a connection between the use of data and the automation of decision-making through rules, what she sees as the removal of meaningful decisions from those workers directly carrying out the work. This is an idea with close parallels to Braverman's (1974) discussion of the managerial prerogative in the capitalist labour process towards the codification of worker knowledge into technology or work processes—the separation of conception from execution—where worker knowledge and control over a process become embedded within machines or technology, automating decisions, removing workers' autonomy over processes and centralising decision-making with the management and technical staff who design and calibrate those machines. Part of the question for LPT in its engagement with data at work must be whether there is a qualitative difference when these practices are codified into data and algorithms rather than into machinery.

Espeland and Stevens (2008) emphasise that numbers are used in two main ways: to mark and to make objects commensurate. It is the latter that is invoked in the conversion of objects, processes or outcomes into quantities or metrics—and, once expressed as such, new relations between things are created, which are expressed as differences of magnitude. This, Espeland and Stevens argue, is 'at the heart of disciplinary power' (2008: 414). But it also opens up the question of whether turning things—the products of work in this case—into quantifiable data and reducing differences to points of magnitude has any identifiable inherent effect.

David Beer (2016) focusses on the relations that exist between measurement, circulation and possibility as he tries to locate and understand what he terms 'metric power'. He argues that metrics—systematic collections of data—are part of a neoliberal project to insert market-like conditions between those things that are subject to measurement, and that metrics enable competition. Other studies have investigated the power and interests that are vested in big data: for example, the power asymmetry and the relationships of ownership and control that define the relations between those who collect, aggregate and analyse data and those whose data are collected (Andrejevic 2014); the way big finance and Internet companies who develop algorithms have vested interests in maintaining secrecy about the data they hold and the way they use them, as well as the potential this has to affect outcomes and opportunities for people—from credit ratings, to insurance premiums, to determining a potentially unreliable employee (Pasquale 2015); or the way algorithms shape knowledge as well as public discourse and cultural forms (Gillespie 2014).

As data have come to define and determine ever-greater aspects of our lives, some work in critical data studies has asked whether there are certain innate or necessary features or effects of datafication, measurement or algorithms, and what the implications of these might be; other work has tried to understand data and algorithms as rooted in, and the result of, the social, political, legal and economic systems in which they have been developed and operate. But overwhelmingly, these enquiries have been directed at the level of society in general, and empirical studies have focussed on the production of knowledge (boyd and Crawford 2012); the public consumption of social media and other data- and algorithm-led platforms (Andrejevic 2014; Bucher 2012); how social and political discourse is enabled and constrained by algorithms (Gillespie 2014); and the use of data by big business and government (Moore and Piwek 2016; Amoore 2011; Pasquale 2015).

These analyses provide a useful starting point for conceptualising data and algorithms and for creating a methodology that can draw attention to the values or relations embedded in data, and to whose interests they serve. While critical data scholarship has tended to focus on the power relations and agendas embedded within the construction of data, LPT, which similarly acknowledges that technology and the technical organisation of the labour process both reflect and embed existing social relations, emphasises that the extent to which technologies realign the frontier of control in the labour process will depend not only on their design but also on the balance of power between management and workers when and where they are applied. Discussions about data, algorithms and metrics and their relationship to power and control need to distinguish between different kinds of metrics, what they are used for and in what context; research on the use of data in the labour process needs to look at the application of data, algorithms and metrics in specific contexts, to see how existing structures, organisational logics and social relations are articulated through code, in order to understand and locate the source of the rules, or of the power, that enables data to act on the world. LPT points to an approach to studies of work that accounts for the way relations are structured through the labour process and establishes a clear relationship between the managerial drive to measure and its control imperative, in the context of both capital accumulation and capitalist competition.

Finally, Rob Kitchin has highlighted that the reality of how algorithms operate may depart from their intended function, both as a result of programming—'a lack of refinement, miscodings, errors and bugs' (2014: 22)—and as a result of the unexpected effects of their interactions in the contexts in which they are placed. In particular, he characterises the subtle forms of interdependence between people and algorithms, where people internalise an algorithm and where the behaviour of the algorithm is 'conditional on the input it receives from the user' (2014: 22). One thing that abstract or logical discussions of measurement and data cannot necessarily determine is the limit to their ability to affect and direct our daily lives and work, or the social structures and forces we operate within.

The following sections describe research conducted in a newsroom to show how journalism has been made measurable through data analytics. The research looks at what is measured—what counts and what does not count—and how this affects the work of journalism. It considers whether the digital data being used in newsrooms, and the analytics derived from them, are not only measuring audience preferences but also acting as performance indicators for the workers writing and producing stories.

Context

Journalism—used here as shorthand for the production of news, involving reporters, editors, subeditors, content editors and moderators—offers a key site from which to study the process of digitalisation and the role that data technologies are playing at work. The digital disruption created by the shift from newspaper production to the production of news, stories and other content for online news sites has reorganised the environment within which news organisations operate, exposing them to greater competition, often from new competitors, including non-news actors in the information sector and social media platforms, as well as non-commercial and citizen journalism. As noted in a Business Insider article on 5 September 2016, Alan Rusbridger, former editor-in-chief at the Guardian, claimed that Facebook had redirected nearly £20 million (US$27 m) of the newspaper's digital advertising revenue in the 2015–2016 financial year. While it is not clear how Rusbridger made this calculation, it is clear that digital ad spending is growing but that it is now technology companies, not journalism organisations, who take the largest portion of it (65%) (PEW Research Center 2016).

Pressures on revenue have driven traditional news outlets to develop and experiment with new revenue streams, business models and ways of organising work, with shrinking newsrooms a long-term trend. Since 1997, newsroom employment at US daily newspapers has declined by 20,000 positions, or 39% (PEW Research Center 2016). In the UK, all national news organisations have had successive rounds of redundancies since the onset of digitalisation; since 2014 the National Union of Journalists (NUJ) has kept a roll call of newspaper closures and job losses to reflect this. 1 These are just some of the results of sectoral shifts in the news media economy prompted by the development of new networked technologies. While digitalisation has had a dramatic influence on the organisation and dynamics of the news sector as a whole, and much has been written about this in academia and the media (for a good overview, see Franklin 2014), it has also affected media workers as producers and the circumstances of production.

Data analytics—tools that track reader behaviour on news sites—have become a standard feature of contemporary digital newsrooms. The digital data captured for use in these analytics are like big data in that they are constituted of voluminous, continuous flows of data, but unlike big data in that they are structured by the technical architecture of the news sites in order to capture particular values. For such a seemingly minor introduction into the changing landscape of digitising newsrooms, these data are having a significant effect on the work of producing news, as well as reflecting a general shift in the decision-making and priorities that direct journalists' work. News organisations have designed technological infrastructure that captures data such as the number of page views an online story receives; attention time; 2 where the site's traffic travels from and to; where viewers are located and what kind of devices they are using. And increasingly, these data are visualised through real-time dashboard displays that are available to journalists and others involved in the news-making process.
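The chapter does not detail any newsroom's technical implementation, but the mechanics implied by such a dashboard can be sketched. The following Python fragment is a minimal, hypothetical illustration (every field and function name is an assumption, not drawn from a real newsroom system) of how raw reader events of the kind described above might be aggregated into per-story dashboard figures:

```python
from collections import Counter, defaultdict
from dataclasses import dataclass
from statistics import median

@dataclass
class PageViewEvent:
    """One captured reader interaction. All field names are illustrative
    and not drawn from any actual newsroom schema."""
    story_id: str
    attention_secs: float  # active time the reader spent on the page
    referrer: str          # e.g. "facebook", "google", "internal"
    country: str
    device: str            # e.g. "mobile", "desktop"

def dashboard_summary(events):
    """Aggregate raw events into the per-story figures a real-time
    dashboard might display: page views, median attention time and
    breakdowns of traffic sources and devices."""
    by_story = defaultdict(list)
    for event in events:
        by_story[event.story_id].append(event)
    return {
        story_id: {
            "page_views": len(evs),
            "median_attention_secs": median(e.attention_secs for e in evs),
            "referrers": Counter(e.referrer for e in evs),
            "devices": Counter(e.device for e in evs),
        }
        for story_id, evs in by_story.items()
    }
```

The point of the sketch is how little the architecture needs to record about the reader, and how much is fixed in advance by the choice of fields: what is not captured here cannot later count.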

Methodology

The research for this chapter is based on a case study that examined the development, use and interpretation of audience data analytics in the digital newsroom of a national newspaper, referred to here as Digital Paper. The study looked at the way audience data analytics are used by journalists at this paper, where all employees have access to a data analytics dashboard. It examined the kinds of data collected and displayed, the way data were circulated within the organisation, how editors and reporters interpreted that information and how it informed their work and decision-making.

Data were collected principally from semi-structured interviews, organised around themes, which were recorded and transcribed, as well as from fieldnotes taken during non-participant observation of union chapel meetings and an NUJ-organised summit on digitalisation. Interviews were conducted with 13 journalists working across Digital Paper: 10 reporters and editors—who are responsible for decision-making about stories but do not have managerial responsibility over other staff—two managing editors and a technical editor. Rather than seeking a representative sample, interviewees were selected in order to examine whether experiences differed according to the perspectives and priorities attached to positions within the division of labour and organisational hierarchy. Themes identified in early interviews were used to inform the lines of questioning in subsequent interviews. Interviewees were asked whether and how data inform their work; whether they were provided with guidance or instructions about the use of data; who determined whether a story was 'performing well', and how; and whether audience data are the subject of formal or informal performance discussions with managers or of disciplinary procedures.

While the number of interviews was limited, it was sufficient to identify common themes and to compare and analyse contrasting accounts. Interview data were combined with an analysis of dashboard charts and organisational reports on audience data. The case study interviews were supplemented with data drawn from five interviews with journalists at five other national newspapers, to determine whether business models or organisational differences might account for the findings at the case study organisation. The research also drew upon interviews with three organisers from the NUJ and a representative from the NUJ's Newspaper and Agency Industrial Committee (NAIC), who were sought for their expertise in representing and negotiating with news organisations over collective issues tied to digitalisation.

Findings

Digitalisation has resulted in extensive and unprecedented change within the newsroom of this formerly print-only newspaper. It has led to the reorganisation of the newsroom, first with the convergence of the digital and print sections of the news organisation, then with the introduction of technical workers focussed on the development of digital technology for news-making and the introduction and expansion of audience, or search engine optimisation (SEO), teams in the newsroom. Journalists have been called upon to develop new skills, and to multi-skill, in a context where there has been an overall decline in the number of editorial staff that has not been matched by a decline in the number of stories produced. For example, a section editor may work alone to compile a story: scanning sources; writing and proofing the story; packaging it with other material, such as photos, graphics or links to other relevant articles; then launching it and publicising it on social media. Digitalisation has reduced or removed some roles—sub-editing and in-house photography have been particularly affected—and made the work of many editorial staff much more desk-based; journalists have increasingly become what ethnographer Dominic Boyer (2013) refers to as 'screen workers'. Considering this scope of change, the development of data analytics would seem a small aspect of digitalisation, yet it has had a powerful effect on the organisation of work in the digital newsroom.

In Digital Paper, gaining audience numbers has become part of the process of production for editorial staff. Every member of editorial staff, regardless of their role, can access the analytics dashboard, and checking it has become part of news workers' daily work practice:

It’s not just chasing traffic. But sometimes you can see that … the headline is wrong because it’s not getting any traffic. And I think, “Well why is it not getting any traffic?” And then you think, “Well yeah, can people actually tell what it’s about from the headline?” And then you can just tweak that [the headline] and see what effect that has. I’m using it all the time and … if you see something that a lot of people are obviously landing on but then not clicking on anything else then you say, “Well is there anything else I can link to or package it with?”.

Online Editor, Interview, 16/09/2015

Stories are monitored by editors and reporters as well as managers and, if they are not performing well against the editor's expectations, will be revised: given a new headline, tagged differently, 're-packaged' with links to different articles or promoted differently on social media. Journalists regularly monitor and update stories, with the effect of extending the process of producing an article. Compared with print, where the intensity of work increased through the day towards a single daily deadline, the continuous monitoring of online stories has removed that rhythm, resulting in a heavier workload and a more constant workflow. Deadlines persist, but rather than the working day building up to an end-of-day deadline, data derived from audience analytics have informed the creation of new deadlines based on 'news spikes'—moments in the day when audience numbers have tended to cluster—and there may be two or three such deadlines throughout the day, their number and timing varying by desk.
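How such spikes might be derived from analytics data is simple to sketch. The following is a hypothetical illustration (the function and its inputs are assumptions, not a description of Digital Paper's system) of extracting 'news spike' hours from page-view timestamps:

```python
from collections import Counter

def news_spikes(view_timestamps, top_n=3):
    """Return the hours of the day in which page views most cluster --
    the 'news spikes' around which a desk might set its online deadlines.
    `view_timestamps` is any iterable of datetime objects."""
    views_per_hour = Counter(ts.hour for ts in view_timestamps)
    return [hour for hour, _ in views_per_hour.most_common(top_n)]

# For example, with views clustered at 8am and 1pm:
# from datetime import datetime
# news_spikes([datetime(2016, 5, 3, 8, 12), datetime(2016, 5, 3, 8, 40),
#              datetime(2016, 5, 3, 13, 5)], top_n=2)  # -> [8, 13]
```

A calculation of roughly this shape is all that is needed for yesterday's audience behaviour to become today's deadline structure.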

Journalists have a keen sense of the tensions between data-driven decisions and editorial decisions and of the temptation to chase traffic:

I do try hard to stop myself from just following the traffic and just from constantly doing things that I think will generate a lot of traffic, because the nature of what I do means that to do my job well I think I have to do some things that are not going to reach massive audiences.

Section Editor, Interview, 10/11/2016

They tended to combine the use of data to inform their work with a continued reliance on their own news values to determine which stories to run. The analytics data were regarded by most journalists interviewed as a useful confirmation of what they felt they already understood about their audiences. One reporter commented:

We wouldn’t commission something if it was a load of total rubbish or we thought it was a load of total rubbish but “Oh it’s just going to get huge traffic”, and we wouldn’t not do something that was of interest to a niche audience—our audience but a niche section of it—if we knew the traffic wasn’t going to be good but it had worth in other ways. So we wouldn’t be dictated in the case of “We’ll just do click bait all the time to get loads of traffic”. But in terms of being aware of how well or not pieces are doing yeah, I mean I guess it must have an influence, in that it’s nice to see when something does well.

Reporter, Interview, 29/04/2016

Journalists are critically engaged in trying to interpret audience data, despite the limits of what such data can reveal about the audience and its interactions with a story, and they are aware of the challenge the data potentially pose to their decision-making and of the effect that 'following traffic' could have on news quality. But what the quote above suggests—more through its vagueness or omissions than anything else—is that, although these measures may not reflect the values or objectives journalists attach to the products of their labour, there is less questioning of the assumptions that underlie the kind of data that are collected and presented, or the way they are interpreted. A publicity comment from Ian Saleh, audience development editor at Guardian US, about the Guardian's in-house analytics programme, Ophan, is characteristic of how technical developers understand data: '[the] analytics dashboard … allows editors and reporters to see the effects of their actions on reader behavior as well as on overall site performance and provide actionable intelligence on deepening engagement with readers, creating new content and building audience.' 3

Here, at least two assumptions are at play: first, the idea that the individual efforts or actions of editorial workers are responsible for different levels of audience traffic—as opposed to factors relating to the story itself or other contextual factors, such as the algorithms operating on the social media where it circulates—and, second and relatedly, that a clear line can be drawn between input, output and outcomes. This 'intelligence', drawn from data analytics, is built on limited information about an aggregated audience: data such as median time spent on a story, geographical location, devices and search terms used. In order to inform future practice, it must then be interpreted with reference to factors such as the content of the story, its release time and how it is tagged and packaged. At Digital Paper, this practice is institutionalised in reports that are circulated around editorial desks via email, either daily or weekly, and which highlight data about top-performing stories or recommend best practice (a sketch of how such a report might be compiled follows the quotes below):

There’s quite a lot of best practice meetings and that kind of thing so people will discuss if something works or not; that kind of thing. So you can see what other people are doing and each week there’s a team email so people can see this has done well.

Content editor, Interview 28/04/15

People used to share…“best practice”…I’m quite a cynic on this kind of thing. You’ll find loads of people who will just preach to you about exactly how to get more Twitter followers and get more hits. I just don’t buy into it personally, but I’m probably in a minority who don’t.

Reporter, Interview, 28/08/15
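To give a concrete sense of what these circulated reports involve, the following hypothetical sketch formats a 'top performers' email of the kind described. It reuses the per-story summary structure from the earlier dashboard example; ranking by page views is an assumption, not Digital Paper's actual criterion:

```python
def weekly_report(summary, top_n=5):
    """Format the kind of 'top performing stories' email circulated to
    desks. `summary` is the per-story dict produced by dashboard_summary()
    in the earlier sketch."""
    ranked = sorted(summary.items(),
                    key=lambda item: item[1]["page_views"], reverse=True)
    lines = ["Top performing stories this week:"]
    for story_id, stats in ranked[:top_n]:
        lines.append(f"- {story_id}: {stats['page_views']} views, "
                     f"median attention {stats['median_attention_secs']:.0f}s")
    return "\n".join(lines)
```

Whatever sorting key is chosen here silently defines what 'doing well' means for the whole newsroom; the contestation described in the quotes above is, in part, contestation over that key.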

In Digital Paper, the use of data analytics has become a common-sense solution to understanding how online journalism should work. But, while it has reorganised journalists’ work and changed work practices, the question to which it is the solution is more opaque.

When The New York Times' Innovation report was leaked in May 2014, it revealed much about the way legacy news organisations were thinking about and attempting to respond to digitalisation (Sulzberger et al. 2014). The report located competitive advantage in digital journalism in a race to develop and adapt the newest technology and recommended business strategies based on the monetisation of audience data, as well as a shift from advertising revenue to a subscriber base. This audience-based business strategy—favoured by many newspapers with a large online audience, such as The New York Times and the Guardian—aligns business and editorial objectives and has resulted in organisational restructuring that draws marketing, digital product development and editorial into closer collaboration (Sulzberger et al. 2014: 57). This closer collaboration is reflected in the design of data-capturing technologies and analytics, which, previously the preserve of commercial or marketing departments, have become central to the work of editorial.

To return to the question of what is and is not counted: the data analytics at Digital Paper—the data which are collected, circulated and meaningful to journalists—are those which capture audience views and the time spent on a story (median time), and what the incorporation of these data into journalists' work practices achieves most unambiguously is the drawing together of commercial objectives with the day-to-day of editorial decision-making—the kind of concerns that formerly would have resided with management. In this way, data can be seen as a key contributing factor in the alignment of goals and interests between the commercial side of the news operation and editorial. They have the effect of shifting journalists' priorities, and potentially their responsibilities, towards the organisation's commercial goals.

Data analytics have also created new visibilities for journalists' work that are more individualised than previous measures of a newspaper's success, which raises the question of whether they are being used as a performance measure for journalists' work. Reflecting on the contrast between the organisation of work before the analytics system was developed and the situation today at Digital Paper, one managing editor commented:

Nobody could measure…We didn’t really know how much stuff we were putting up online. …We just didn’t know and people were just putting stuff up there. And now we absolutely know, not just exactly how many words there were on everything we put online and when it went up, but exactly how many people hit on that story and where they came from.

Managing editor, interview, 15/10/15

This quote makes clear that the internet architecture and the data it captures have a dual function. First, they measure outcomes in ways that are well understood—counting audience views and interactions with the site. But it also confirms that these data explicitly measure journalist outputs: at the least, the number and length of stories and their launch times. The Digital Paper journalists interviewed claimed these data had informed restructuring processes. While there was no confirmation of this from management or the unions, even the perception that data analytics were being used in this way could create a powerful adherence to them, especially where the newsroom had been through three rounds of redundancies in the seven years between 2009 and 2016.
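The dual function is easy to see if one imagines the same publishing data aggregated by author rather than by story. The following is a purely hypothetical sketch (the field names and choice of measures are assumptions) of the kind of per-journalist output report such an architecture makes possible:

```python
from collections import defaultdict

def output_report(stories):
    """Aggregate publishing data by author rather than by story,
    illustrating how an architecture built to measure audiences can
    equally measure journalist outputs. `stories` is an iterable of
    dicts with assumed fields, e.g.:
      {"author": "A. Reporter", "words": 640, "page_views": 12000}
    """
    by_author = defaultdict(lambda: {"stories": 0, "words": 0, "views": 0})
    for story in stories:
        totals = by_author[story["author"]]
        totals["stories"] += 1
        totals["words"] += story["words"]
        totals["views"] += story["page_views"]
    return dict(by_author)
```

Nothing new needs to be collected to produce such a report: the pivot from audience measurement to worker measurement is a one-line change in how existing data are grouped.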

Apart from organisation-wide restructuring, editors also described how columns written by freelancers had been discontinued in response to poor audience traffic. Whether or not rationalisation of the news-making process was the original purpose of the analytics system, data make the relationship between audience numbers and journalists' work more visible and, in the hands of managers under pressure to contain costs in a difficult financial environment, appear to offer an objective basis upon which to reorganise and rationalise.

Although their use was widespread across the organisation's editorial operation, and staff had a clear idea of the audience numbers they should be aiming for (which differed across sections), targets around audience data had not been explicitly integrated into performance appraisal for the journalists interviewed, and none knew of any instance where individuals had been disciplined on the basis of analytics.

There are no guidelines and no one has ever said to me, “Oh yeah you should use it in this way” or “You should look at it and do this.”

Reporter, Interview 17/09/2015

But the setting of web targets is not unknown in journalism; at one English regional news publisher, management talk about 'winning the Internet' and journalists are given web targets and web lists of stories that will go online each day (Fieldnotes, NUJ Newspapers Summit, 25/01/15). Management at the regional news publisher Trinity Mirror tried to introduce web targets at five of its regional daily titles but were forced to back down after journalists voted to strike over the measures. 4 In contrast, at Digital Paper the use of data has been encouraged through mechanisms such as best practice sharing, daily and weekly audience traffic reports to editorial staff and the development of a culture that positions data and technology as the future of journalism. While some journalists have embraced data, others have been more resistant:

To be honest, when I was doing that content coordinator stuff, I think I was meant to use [the analytics dashboard] . I just never actually worked out how to use it. So I just never did.

Reporter, Interview, 28/11/15

One journalist described a section of the news organisation staffed largely by people who had been working on the print paper since before the advent of digital, where she observed a culture of resistance to the changes to work associated with digitalisation:

They have really different attitudes to journalism I think, and it’s quite interesting going there and hanging out with those people who are a bit older and don’t actually care about traffic or couldn’t give a shit what time things go online. (Laughs). I say that about the journalists, obviously the production people are very good at their jobs and very (laughs) —I’m sure they care a lot about when they go online and I’m sure they have things to stick to, but the reporters, I don’t think they’re too bothered about that type of thing.

Commissioning editor, Interview 22/10/2015

Discussion

The research examined the role of data and analytics in journalists' decision-making, not to pursue an argument about the relationship between data and click bait, or the declining quality of news, but to assess whether knowledge, tacit or otherwise, about what makes a story might be being supplanted by machine thinking about what has done well, which is then extrapolated into predictions about what will do well. Audience data analytics, while they provide the basis for this kind of automation, have not been extended into decision-making at Digital Paper. Best practice, derived from data about stories that 'do well', is shared in the newsroom, but how journalists analyse and utilise it is left to their discretion. Instead, what data analytics have done at a generalised level is create a new basis upon which decisions can be made and judged. Audience data have been accepted and legitimised by journalists as a useful and objective measure and, to that extent, journalists have internalised them as a good measure of audience engagement but also of their work. What the focus on data has also done is reinforce the idea, common to big data science, that data can usefully be employed to determine which factors will improve the likelihood that a story performs well, and that journalists can, through their individual actions, affect this. The use of audience data throughout the editorial process instils a much keener audience-centred approach to journalism, much like customer focus in service work. The day-to-day alignment of business objectives with editorial objectives has become embedded technologically and, as a result, has decentralised responsibility for audience numbers into the day-to-day practices and considerations of journalists.

Beer asserts that the visibilities afforded by metrics have a disciplining power, where seeing is the first step to controlling (2016: 214). This line of argument closely echoes those accounts of the surveillance of work that were influenced by Foucault and subsequently dissected by empirical studies of the labour process. But what are the subsequent steps to control, and how necessary are they? Is control part of the logic of visibility or just a possibility afforded by it? There has always been an element of visibility, or the desire for visibility, in journalism, at least for writers and reporters, although data analytics technology has changed this from a focus on the broad measure of appearing in the paper, or on the front page, to a much more careful accounting of the outcome of individuals' efforts or, more accurately, of audience response to journalists' efforts. But in newspapers, surveillance is, if anything, a by-product of visibility.

For Beer, measurement works as a system of governance, and of self-governance through affect: 'The anticipation, the expectation, the worry, the concern, the fear of failure, the insecurity that comes with potential visibility' (2016: 201). But, in order to operate as disciplinary mechanisms, data analytics require engagement and, unless they are coupled with other kinds of disciplinary frames, such as web targets and performance management, engagement cannot be assured. Where analytics and web targets have been devised and linked to performance appraisal, the result has been more direct antagonism between journalists and news managers, rather than indirect control through technology.

In terms of the technological pacing of work and intensification, journalists are increasingly required to ensure the wide distribution and circulation of a story. But journalists' workload becomes intensified not just because they are required to take on these additional tasks—although these play a part in work intensification—but because ensuring a story performs well according to the data analytics involves the constant monitoring of real-time data and the manicuring of stories in response. One key difference between static measures and real-time data is that real-time data have the potential to create a sense of continual movement, if not urgency and stress; they pace work, not in a rigid, mechanical way, but because they ceaselessly produce information that journalists are normatively expected to consider and respond to.

Together with this effect of real-time data, work intensification also arises from what the data do not count; in the case of audience data, what is not visible is the work that lies behind the page views. The process of producing a story is complex and variable, and it can be difficult to predict in advance, and account for, the time and effort it will take to compile a story. As a result, as with other forms of creative and professional work, journalists are given a large degree of autonomy over the way they compile stories; newsrooms or individual journalists may develop practices or routines that help them compile stories quickly, but there remains an element of uncertainty and a reliance on external factors outside the control of journalists in the news-making process. What data analytics do is abstract these complex processes into a quantitative outcome, where the particular concrete practices or labour that journalists must perform to compile a story become effaced. Espeland and Stevens (2008) show that quantification works to make disparate objects or actions commensurate. In the case of data analytics, one set of data about page views is made commensurate with another regardless of the time, effort or skill required to produce it; what matters is how well a story performs with the audience.

When this focus on data analytics is indifferent to, or obscures, the work required in production, and is coupled with journalists being made individually responsible for improving the circulation and readership of their stories, the journalist becomes disciplined by the audience via the audience data, even though her capacity to affect the data may be limited or, worse, illusory. When a story is not performing well, as long as the journalist sees it as both her individual responsibility and something she can change by her actions, there is the potential for workloads to mount. This lessens the need for direct managerial control: issues about workload—which previously were areas of conflict between reporters and editors, or editors and news managers—become subject to the indifference of calculation. The struggle is no longer between management and journalists but between journalists and their ability to improve their data, where poor data can be seen as an individual failing.

But, even in this case, journalists can only be disciplined by the data in this way as long as they accept it as their individual responsibility to maximise the audience and believe that audience growth can be achieved through their own exertion. In the newsroom studied, even the general acceptance of, and heavy reliance on, data analytics has not removed the possibility of their contestation by workers. Journalists' news values and ability to find a good story are still considered at Digital Paper to be key defining features of journalists' work and of their organisation's product. As a result, journalists have been able to exercise discretion over how they interpret and use data in their work and to assert their professional values against the imposition of decisions driven by data.

Conclusion

In many ways, this discussion extends earlier observations and discussions from LPT about technology, control and autonomy into discussions about data analytics at work. The relationship between audience data and productivity is not arbitrary or coincidental; the development and growth of data analytics in newspapers have arisen out of a period of financial and technological uncertainty for newspapers, where managers have been under pressure to determine which aspects of the product can be monetised and which are making returns. The technology, or digital architecture, that has been developed in newsrooms to collect audience data has enabled managers to drill down to obtain detail about journalists' output, including audience response to it, and to assess the work of individual workers. This has been utilised in both the intensification of work and its reorganisation, much as in the situation recounted by the warehouse worker Ingrid to Phoebe Moore in Chap. 2, where data are being used to justify layoffs, or at Tesco, where studies by Moore and Robinson (2016) have found that data from wearable armband devices are used to reduce the need for warehouse workers on the floor.

As for their ability to act as a mechanism of control within the workplace, data are put to use in particular configurations of power; they have no power in their own right. When data and metrics come to circulate in the set of social relations and forces of production within the labour process, it is necessary to examine the specific constellations and contexts in which they operate in order to assess from where—or whom—their power to discipline arises and through what mechanisms. In this case, journalists, as a group of workers with a high degree of autonomy over their work, have been able to maintain discretion over how they interpret and incorporate data into their work and decision-making processes. But this is not the whole story, because the use of data as visible markers of journalists' outputs in the newsroom has also created new responsibilities and accountabilities for journalists and contributed to the intensification of work.

Unlike machine pacing, or other forms of technological control, journalists' subjection to discipline by data relies heavily on a context in which other normative pressures are at work: in this case, on data having been legitimised by journalists as a measure of their work, combined with the idea that journalists can affect audience numbers and interactions through their efforts—an effect that is neither necessary nor guaranteed.

Notes

  1. Full list available at: https://www.nuj.org.uk/news/roll-call-of-newspaper-closures-and-job-losses/ [Accessed 7/02/2017].

  2. How long a reader spends on a page, usually presented as median attention time.

  3. http://mediashift.org/2015/03/7-media-metrics-analytics-and-impact-projects-to-present-at-collabspace-austin/ [Accessed 13/01/2017].

  4. Details of the dispute can be found at Hold the Front Page: http://www.holdthefrontpage.co.uk/2015/news/trinity-mirror-shelves-plans-for-individual-web-targets/ [Accessed 7/01/2017].