This is the rather clear warning uttered by the activist group Anonymous (2012) in a video published in April 2012. The trigger for the drastic choice of words was INDECT (intelligent information system supporting observation, searching and detection for security of citizens in urban environment), a research project funded by the European Commission from January 2009 until June 2014 under the FP7 call “intelligent urban environment observation system” (SEC-2007-1.2-01). The overall goal of the project was “the development of solutions to and tools for automatic threat detection”Footnote 1, pooling data from different sources and seeking to analyze them in an automated fashion. The potential threats to privacy and data protection (as well as to other values and civil liberties) emanating from the project’s work plan caused considerable controversy, as becomes clear from the rhetorical questions posed at the end of Anonymous’ video: “Do you want to be watched 24/7? Do you want your intimity [sic!] to be gone? Do you want your freedom to be taken?” (Anonymous 2012).

The INDECT project provides an apt example of the potential clash between an emerging technology and the ethical concerns that come with it. Science and technology are not neutral tools devoid of normative aspects. Quite the contrary: acknowledging the social construction processes of technologies implies that emerging technologies are subject to human choices and could be developed and designed in distinct ways. Such an angle allows us to study the social, political, economic, and not least normative considerations that they have been subject to (e.g., Pinch and Bijker 1984; Latour 1991; Winner 1980; Bowker and Star 1999). Emerging technologies undergo numerous critical junctures that can nudge their eventual shape in one direction or the other—often with wide-ranging consequences for the capacities of the finished product or tool. The shaping of technologies, from such a perspective, is thus never innocent, but an ethically charged process that hinges on critical decisions and on the social and moral consequences that emerge from those decisions.

Emerging technologies are therefore subject to frameworks of research governance designed to ensure that the end results of development and design eventually comply with human rights and core values. Science and technology, from an ethical perspective, are marked by a responsibility to shape society in a fashion that is morally desirable and that minimizes potential detrimental impacts as far as possible. Emerging technologies are thus rendered explicitly political in the sense that regulatory action must be undertaken to ensure the social and moral acceptability of development and design early on. There is a wide and diverse body of literature that engages with the politics and practices of governing research in accordance with ethical considerations. The problem to be tackled, from a broad regulatory perspective, thereby comes down to the question of how to monitor and regulate emerging technologies such that they adhere to a morally acceptable consensus.

Over the past decade, frameworks of “Responsible Innovation” (e.g., Hellström 2003; Owen et al. 2012, 2013a; Von Schomberg 2011) have in this vein made an effort to streamline the fragmented field of research governance into an integrative approach that fundamentally rests on a notion of inclusiveness and responsiveness in development and design processes. Responsible Innovation notably includes all involved stakeholders, and highlights not only deliberative mechanisms among engineers, designers, and policy-makers, but also participatory fora that seek to take public debate and opinion into account. The problem with such an inclusive approach, however, is that it fundamentally rests on the willingness of the responsible managers to put effective mechanisms for deliberation into place. To a certain extent, Responsible Innovation assumes a given resonance space in which potential moral conflicts can be discussed and eventually mediated or settled. The case of the INDECT project discussed here demonstrates that this is not always the case; on the contrary, such a space must sometimes be claimed and established from the outside.

This paper seeks to address this conceptual gap by highlighting the notion of public accountability (e.g., Bovens 2005; Mulgan 2003) as a complementary tool for the establishment of an ethical resonance space for emerging technologies. Public accountability can render development and design processes of emerging technologies transparent through practices of holding those in charge of research accountable for their actions, thereby fostering ethical engagement with potential negative consequences or side-effects. It builds on fundamental democratic ideas of sovereignty, inclusion, and participation. The argument I put forward here is that through practices such as parliamentary questions, audits, and open letters, emerging technologies can be effectively rendered transparent and opened up to broader levels of scrutiny and debate, thereby contributing to a greater adherence of emerging technologies to ethics and moral consensus. Fundamental democratic practices can thus not only lead to better informed choices in design and development processes, but also contribute to more morally substantive outcomes.

The paper proceeds as follows. First, I will engage with the literature that I seek to contribute to (research ethics and research governance, particularly frameworks of Responsible Innovation). I will then explore the notion of accountability and its potential to create binding democratic oversight over emerging technologies. In order to illustrate the potential of such an approach to the governance of development and design processes, I will engage in some depth with the case of the INDECT project, which, due to serious public concerns about the research activities undertaken during the project, was subject to intense critique and scrutiny. In order to establish transparency with regard to these research activities, practices of democratic accountability were enacted through formalized questioning, auditing, and open letters. Sparked by the project’s rather Orwellian ambitions of surveillance and control, the creation of an account enabled a more transparent and substantive discussion of the ethical implications of the work plan, and thus forced the involved stakeholders to settle controversies and to adhere to the idea of shared moral responsibility within development and design.

Ethics, Governance, and Responsible Innovation

As an academic field, as Irwin (2008, p. 584) summarizes, research governance is “broadly concerned with the relationship between science, technology, and political power—with special emphasis on democratic engagement, the relationship between ‘scientific’ and wider social concerns, and the resolution of political conflict and controversy”. Frameworks of governing research are thereby necessarily ethically charged, as ongoing processes of development and design present an opportunity to intervene and to nudge things in the ‘right’ direction early on. It is an ethics that must not remain abstract, but that needs to engage the concrete contexts of emerging technologies. It is thereby an ethics that is not opposed or hostile to technological change, but one that actively engages the conditions and premises under which technologies come into being (Rainey and Goujon 2011, p. 174). Regulatory choices in research governance must thus seek to render the development and design of emerging technologies desirable in the sense that the ensuing outcomes contribute to desirable societal conditions. These conditions, in turn, are not pre-given or unanimously formulated, and neither are the ways in which emerging technologies could be beneficial or detrimental to them.

In the case of the INDECT project, it could for example be argued that extensive surveillance contributes to enhanced security, as it could reveal potential threats early on. Others would refute such claims, referring to the negative consequences of surveillance and questioning the efficacy and effectiveness of automated threat monitoring. And from a governmental perspective, an appeal to security has been demonstrated to legitimize extraordinary practices and trump other values (e.g., Buzan et al. 1998; Bigo 2002). The bottom line here is that research governance, in order to provide a comprehensive perspective on the emerging technologies to be regulated, must necessarily be open to scientific, political, and public debate (Grunwald 2000, p. 184). The goal of such debate must not be to foreclose results through an a priori determination of what is desirable or undesirable, but to enable an informed discussion about controversial choices.

Within the literature on research governance, several distinct (yet at times overlapping) strategies to render emerging technologies morally acceptable can be identified. One is the regulation of technology through law. As Székely et al. (2011, p. 180) argue, “in modern constitutional democracies the law is not simply a matter of rules but is supposed to invoke moral principles”. A major concern with legal regulation, however, is the substantial time lag between the issues to be regulated and the eventual regulation itself. Technologies tend to fully unfold their social consequences only once they have been rolled out and incorporated into social and organizational routines, thereby revealing their potential to affect and/or change values and moral consensus only after they have left the development and design phase. By that time, however, it is usually too late to modify the technology itself (Collingridge 1981). This fact, catalyzed by the sheer speed of new technology roll-outs, creates more or less constant policy vacuums (Moor 2005). In other words, once new technologies have been fully established, the only legal leverage left is to regulate their use in order to contain their consequences (Székely et al. 2011).

Another route into the regulation of emerging technologies is ethical review prior to the start of the research. Particularly in the context of public research funding, proposals usually have to undergo a formalized board review in order to make sure that certain ‘critical’ tasks can only be performed once their moral harmlessness has been approved a priori by expert boards. Notably, such forms of review seek to address regularly encountered issues such as research on humans or human embryos and foetuses, animals, personal data, dual use, or misuse. For instance, in the European Commission’s Horizon 2020 research funding framework, applicants are expected to perform a self-assessment of their research goals against a pre-defined checklist, and should their research affect one or more checklist items, their proposal is automatically redirected for board review (European Commission 2014). This type of ‘check-list governance’ has been criticized in terms of both effectiveness and normativity. As Stahl (2011, p. 150) argues, it is “mechanistic and relies on an ethical issues list, which can give the misleading impression that ethics can be reduced to a pre-defined set of issues”. Particularly from an ethical angle, it must be regarded as formalistic, bureaucratic, and narrow, as it may not be open and flexible enough to incorporate novel and unprecedented moral challenges that could emerge from radically new technologies or shifting social and institutional contexts (Rainey and Goujon 2011; Owen et al. 2013b; Stahl et al. 2009).

More recently, frameworks of “Responsible Innovation” have gained political traction, not least due to their incorporation within EU research funding (e.g., Hellström 2003; Owen et al. 2012, 2013a; Von Schomberg 2011; Rommetveit 2011). Closely related to methodologies of impact assessment that organize project management along consecutive steps such as the identification of stakeholders, consultation with stakeholders, analysis of compliance with existing legislation, risk management, and communication with the public (e.g., Schot and Rip 1997; Skorupinski and Ott 2002; Palm and Hansson 2006), Responsible Innovation seeks to establish a distinct approach to governing innovation that is to a large extent grounded in the acknowledgment of the benefits of dialogue. Whereas impact assessments are usually conducted with regard to adherence to legal frameworks or to the economic aspects of development and design processes, and thereby adopt a rather managerial angleFootnote 2, Responsible Innovation highlights the ethical stakes and benefits of mutual responsiveness.

Advocates of Responsible Innovation start from the assumption that contemporary innovation processes cannot be conceived of in an isolated fashion that could be retraced to individual persons, research groups, or even institutions, but that they are embedded in wider societal networks comprising research, engineering, design, marketing, policy-making, and implementation. They consequently highlight a shared responsibility for emerging technologies that is distributed across all involved stakeholders. Complex technologies are highly likely to be composed of multiple, interacting elements that emerge in multi-year processes, undergo design and marketing choices, eventually become regulated, and might even unfold unprecedented social implications through the ways they are used on an everyday basis. As Von Schomberg (2013, p. 59; emph. in orig.) summarizes the ethical dimension of such processes, “modern ‘Frankensteins’ are not intentionally created by a single actor, but (if they arise), are more likely to result from the unforeseen side effects of collective action”.

As a direct consequence of such a vision of collective action, and subsequently of shared responsibility among many actors, Von Schomberg (2013, p. 51) claims that development and design processes “should be understood as a strategy of stakeholders to become mutually responsive to each other”. Only when all relevant perspectives are included at the critical moments of long-term processes, so the rationale of Responsible Innovation goes, can the end results be rendered beneficial for society on the broadest scale possible. This in turn means that fundamental ethical and moral questions concerning technical and design choices should be subjected to wider layers of discussion. In the words of Owen et al. (2012, p. 754), Responsible Innovation “asks for inclusive deliberation concerning the direction of travel for science and innovation—from the outset—opening up opportunities for these to be directed towards socially desirable ends”.

A cornerstone of such an agenda must then be to incorporate public debate within deliberative processes of technology-shaping and to take the moral concerns of laypersons seriously. Public participation, as Bucchi and Neresini (2008, p. 449) write, must in this sense be understood as a “diversified set of situations and activities, more or less spontaneous, organized and structured, whereby nonexperts become involved, and provide their own input to, agenda setting, decision-making, policy forming, and knowledge production processes regarding science”. The concept of Responsible Innovation is in this sense an explicitly political one. As Stirling (2007, p. 218) points out in this regard, “the language of ‘inclusion’, ‘engagement’ and ‘deliberation’ is moving into successive political arenas,” and there are arguably multiple reasons for this.

A considerably broadened perspective on ethics within development and design processes that opens up the scope to questions of politics, economics, social justice, or environmental issues could be regarded as an intrinsic value in the governance of innovation per se (Grunwald 2000). It is however specifically the inclusion of the public that holds a tempting promise. Sykes and Macnaghten (2013) in this sense differentiate the added benefits of public debate along the lines of normative, instrumental, and substantive contributions of deliberative approaches. More generally, so they claim, “we need to think more about the governance of science and innovation, and to explore and clarify any role dialogue might have” (Sykes and Macnaghten 2013, p. 104). The emphasis on dialogue brings together the normative assumptions of Responsible Innovation with reflections on how to effectively govern development and design processes. At the same time, through inclusive mechanisms, emerging technologies become politicized in the sense that they are opened up for controversy (Callon et al. 2009). As Owen et al. (2013b, p. 37) point out, questions of bridging the presumed gulf between experts (researchers) and laypersons (the public) “are inherently political discussions, involving considerations of power, democracy, and equity, and suggest that responsible innovation cannot, and should not, be decoupled from its political and economic context”.

Bringing the public into the processes of emerging technologies is not an easy task, and there is a rich body of literature that deals with the question of how best to incorporate civic engagement in development and design (e.g., Wilsdon and Willis 2004; Bucchi and Neresini 2008; Sykes and Macnaghten 2013). And even where deliberations have taken place, “a key unknown remains, concerning the impact of many of these kinds of initiatives on policy-makers and scientists: whether they actually listened or learned from the public” (Sykes and Macnaghten 2013, p. 90). The problem that this paper addresses is however a slightly different one. It is one that precedes the question of how to include. It rather explores the following questions: What if there is no public debate to begin with? What if the idea of mutual responsiveness is not adequately implemented in development and design processes? What if emerging technologies are closed off from inquiry into their specifics and potential impacts?

In the words of Valkenburg (2016, p. 2), “exempting something from politics requires keeping control over agendas, silencing particular voices, and preventing specific harms from becoming visible and raising a concern”. Frameworks of research governance do not provide a clear answer to this problem. As Valkenburg (2016, p. 6) further notes with regard to moral controversy, “it is not self-evident how a citizen, as a member of a political community, is granted political agency”. In an attempt to bridge the gap between deliberative aspirations and binding regulatory force, I thus propose to turn to mechanisms and practices of accountability. By showing how the notion of accountability was put to work in the case of the INDECT project, I develop the idea that it can be conceptualized as a supplementary way to establish a space for debates about technological choices.

Democratic Accountability

Mechanisms of accountability are a basic feature of any (representative) democratic system, and as such an important means of ensuring that those in power (government, public administration, and other state bodies or institutions) have to answer to those from whom that power was initially derived (the citizens). Bovens (2005, p. 182) in this vein argues that “public accountability is the hallmark of modern democratic governance. Democracy remains a paper procedure if those in power cannot be held accountable in public for their acts and omissions, for their decisions, their policies, and their expenditures. Public accountability, as an institution, therefore, is the complement of public management”. In quite simple terms, accountability means the obligation to give an account—of practices, of events, of decisions, of expenditures, and of processes of government and administration more generally—that is supposed to inform the public and empower debate, and that can be rendered actionable in different forms, for instance in court, in inquests, or in truth-finding commissions.

First and foremost, giving an account means to present one’s version of what has happened—to identify personal and institutional roles in the unfolding of events, to render decision-making traceable, or to identify fault or corruption. Notably, practices of holding to account are regularly enacted in the aftermath of failure, crisis, (moral) wrongdoings, or other undesirable events, as holding public representatives to account is a viable and direct way of putting democratic principles to work. It is worth quoting Bovens (2005, p. 192) at length here, as he highlights the importance of accountability for democracy:

Modern representative democracy can be analyzed as a series of principal-agent relations. Citizens, the primary principals in a democracy, transfer their sovereignty to political representatives who, in turn (at least in parliamentary systems) confide their trust in a cabinet. Cabinet ministers delegate or mandate most of their powers to the thousands of civil servants at the ministry, which in its turn, transfers many powers to more or less independent agencies and public bodies. The agencies and civil servants at the end of the line spend billions of taxpayers’ money, use their discretionary powers to grant permits and benefits, they execute public policies, impose fines, and lock people up.

Against the backdrop of globalization and supranational structures of government and regulation, scholars have pointed to the manifold problems that mechanisms of democratic accountability encounter when faced with complex, multi-layered political systems and globalized forms of governance more generally (e.g., Bache and Flinders 2004; Scharpf 1988). Nevertheless, not least in light of debates on its democratic legitimization, the European Union has committed itself to a maximum level of transparency and accountability. In 2001, a white paper on European governance already highlighted that “democratic institutions and the representatives of the people, at both national and European levels, can and must try to connect Europe with its citizens” (European Commission 2001, p. C 287/281). These efforts were followed up in 2006, when the European Commission published a green paper on the “European Transparency Initiative”. The paper indeed makes a strong claim in this vein, as it highlights “the importance of a ‘high level of transparency’ to ensure that the Union is ‘open to public scrutiny and accountable for its work’” (European Commission 2006, p. 2; emph. in orig.).

Through the idea of public scrutiny and debate, accountability links directly back to questions of research governance and the implementation of the mutual responsiveness of stakeholders, as proposed by Responsible Innovation. The ideas of transparency and scrutiny thereby establish a key mechanism for opening up emerging technologies to broader levels of moral concern. Only by rendering development and design open and transparent can discussion take place in a truly informed fashion. In the European Union, this idea has been institutionalized through the so-called “Transparency Portal” websiteFootnote 3, which provides, among other data, information on European legislation, official documents, consultations, and notably a searchable database of beneficiaries of EU funding. Working through the example of the INDECT project, which was funded by the European Commission and therefore bound by the principles of transparency and accountability, the remainder of this paper retraces how, through practices sparked by moral concern, an account of the project’s activities was demanded and eventually produced, thereby holding the project consortium accountable for possible moral harm emerging from the outcomes of its work.

The INDECT Project

INDECT was one of the largest project consortia funded within the European Commission’s FP7 research funding framework. Receiving an EU contribution of close to 11 million EUR towards an overall budget of close to 15 million EUR, INDECT assembled a total of 16 partners from the security industry, academia, and end users (police) in order

to develop a platform for: the registration and exchange of operational data, acquisition of multimedia content, intelligent processing of all information and automatic detection of threats and recognition of abnormal behaviour or violence, to develop the prototype of an integrated, network-centric system supporting the operational activities of police officers, providing techniques and tools for observation of various mobile objects, to develop a new type of search engine combining direct search of images and video based on watermarked contents, and the storage of metadata in the form of digital watermarks, to develop a set of techniques supporting surveillance of internet resources, analysis of the acquired information, and detection of criminal activities and threats.Footnote 4

From this description of work goals, it does not exactly come as a surprise that the project’s objectives stirred considerable critique once they became public. As “STOPP INDECT”, a German initiative against the realization of the project, writes on its web page, “INDECT is the most extensive surveillance project ever planned or established,”Footnote 5 and Nicholas West (2013) adds in an article for the Activist Post that “the race to perfect and implement true pre-crime technology continues to accelerate”. These are but two of many examples of NGOs, civil rights activists, critical lawyers, politicians, and scholars pointing to the detrimental social and societal effects of the INDECT work plan, including potential violations of human rights and civil liberties that could emerge from an approach that seeks to ‘connect the dots’ in order to prevent threat.

The notion of security through surveillance and intelligence at work here is one that comes into being predominantly through technology. The INDECT project provides an apt example of how “technical things bear responsibilities, express commitments, and assume roles as agents in the realm of human relationships” (Winner 2006, p. 278). Distinct technologies are envisioned to be combined into an encompassing assemblage for surveillance and data collection: existing security tools are repurposed and/or expanded; links are established between already existing technologies of the everyday such as peer-to-peer networks, internet forums, and even private computers, coupled with data from CCTV systems and tracking devices; and the data thus linked and rendered accessible across platforms is processed algorithmically. As the European Commission further details the project work plan and its desired outcomes:

The main expected results of the INDECT project are: piloting installation of the monitoring and surveillance system in various points of city agglomeration and demonstration of the prototype of the system with 15 node stations, implementation of a distributed computer system that is capable of acquisition, storage and effective sharing on demand of the data as well as intelligent processing, construction of a family of prototypes of devices used for mobile object tracking, construction of a search engine for fast detection of persons and documents based on watermarking technology and utilising comprehensive research on watermarking technology used for semantic search, construction of agents assigned to continuous and automatic monitoring of public resources such as: web sites, discussion forums, usenet groups, file servers, p2p networks as well as individual computer systems, building an Internet based intelligence gathering system, both active and passive, and demonstrating its efficiency in a measurable way.Footnote 6

The role of technology in the project, then, is anything but neutral: it arguably has considerable negative impacts on values and civil rights such as privacy, intimacy, the freedom of speech and movement, or the presumption of innocence. A surveillance system on such a massive scale would effectively put every individual under general, constant suspicion and thereby fundamentally threaten the core of any liberal, open society (e.g., Lyon 2003; Marx 1998; Ball and Webster 2003). Put differently: what we find here is a case of stark moral controversy in which the public initially had little leverage. The work plan was negotiated between the consortium and the funding body (the European Commission), and eventually approved by the latter. As such, the political moment of an emerging technology had taken place, as is so often the case, without consideration of public opinion. As McCarthy (2013, p. 476, emph. in orig.) points out in this vein, “it is not that technology develops outside of human agency, but that it develops outside of some humans’ agencies. The ability to control technological design and development is a significant facet of social power relations”. This is precisely the point where, in the sense of Responsible Innovation, a re-politicization through the inclusion of all stakeholders, including the public, would be needed in order to re-establish the mutual responsibility of dispersed, multi-faceted development and design processes. Since deliberative fora for such a task were not to be found, ‘the public’ subsequently had to take matters into its own hands: through techniques of creating an account, as the next section details, the project’s inherent ethical stakes were brought into wider discussion.

Creating an Account

As Grant and Keohane (2005, pp. 29–30) write, “the concept of accountability implies that the actors being held accountable have obligations to act in ways that are consistent with accepted standards of behavior and that they will be sanctioned for failures to do so”. The INDECT project, I contend here, violated such accepted standards through its aspiration of designing a platform of far-reaching surveillance and control, thereby either deliberately or involuntarily accepting the moral harm that can be inflicted by security measures. Seemingly, the project’s leadership and the European Commission had all too easily assumed that the moral harm potentially caused by such measures would be outweighed by potentially enhanced security. With regard to this assumption, Valkenburg (2016, p. 20) however rightly points out that “even in face of existential threats, it is possible for security to be compatible with forms of politics that comply with general principles of democracy, engagement, and human freedom”. And even if not all values turn out to be compatible, the assumption of mutual responsiveness and shared responsibility would at least require an informed debate about this incompatibility.

Particularly when it comes to questions of security, the European Union has for quite some time been promoting what De Goede (2011) calls a “European security culture”—an approach that is to a large extent based on the development and implementation of (networked) technologies of surveillance and control. Through its dedicated security research funding lines, both within FP7 and the more recently established Horizon 2020 framework, the EU has made major efforts to institutionalize security research at the European level, thereby investing considerable amounts of (tax-payer) money. Arguably, this very institutionalization of security research within publicly accountable funding frameworks creates an opportunity to render emerging technologies more visible and transparent—and notably in a binding fashion. Demands for transparency of publicly funded research might provide an opening to re-establish public awareness of, and debate about, the role of (security) technologies and their societal and moral consequences.

As public research funding frameworks are financed through tax-payer money, the European Commission is obliged to make as much information about ongoing research as possible publicly available. In other words: European institutions can be held accountable, and this is indeed what happened in the context of the INDECT project. In fact, the project goals have publicly been portrayed as the epitome of dystopian futures such as George Orwell’s novel 1984 (Johnston 2009) or Philip K. Dick’s short story Minority Report (Der Standard 2014). As West (2013) summarizes the anxieties that INDECT’s work plan has sparked:

It is the amalgamation of all of the pieces that have so far been introduced: video surveillance footage, biometric information, web-based data, drones, GPS, police databases and more. […] The result is an all-encompassing attempt to render daily life as part of a terrorist threatscape where all are suspect and thus subjected to being scrutinized by the ‘flawless’ scanning devices and decision making of the computer mind.

Thus, in contrast to most security research at the EU level, which is carried out largely unnoticed by the public, INDECT caused an intense debate about state surveillance and suspicion that was subsequently followed up and reinforced by a series of successful attempts to hold both the European Commission and the INDECT leadership accountable for the project and its aims.

In November of 2010, after having already launched a series of parliamentary questions to the Commission between February and May of 2010, the European Parliament took the initiative with regard to the project. In a written declaration, the Parliament (2010, p. 2) explicitly expressed “concern about function creep, the possible impact on fundamental rights and the danger that researched technologies or collected information are used by public actors or third parties”, thus “strongly [urging] the Commission to immediately make all documents related to INDECT available and to define a clear and strict mandate for the research goal, the application and the end users of INDECT”. Such demands for the public availability of documents and for transparency of research goals and practices are in line with the green paper on the European Union’s Transparency Initiative.

Demands for an account have, however, not been limited to the supranational level. Shortly after the European Parliament had taken action, a different approach to claiming an account emerged in Germany. In December of 2010, after the INDECT project had been discussed in the German parliament, MP Andrej Hunko published an open letter to the INDECT leadership in which he requested that public concerns be addressed by the consortium. The letter includes a list of 43 questions, most notably asking whether, “in view of the numerous concerns expressed publicly by politicians, academics, journalists, students and civil-rights activists […], are the researchers involved in INDECT interested in exposing the project, or the possible future implementation of the project, to a broad public debate?” (Hunko 2010, p. 6; emph. added). In its response letter, the INDECT leadership made the following statement:

Researchers involved in INDECT continuously undertake efforts to inform the public about project objectives and the research done in the project. This comprises: constantly updated web-page; Publicly available project reports; large number of scientific publications (almost 300 by now); participation to events related to security research, privacy and ethical issues; contacts with politicians on national and European level; relatively high number of interviews and appearances in media; etc. (INDECT 2010, p. 10).

Such efforts at public information had hitherto been rather questionable. The lack of information about the project and the initial status of several project reports (‘deliverables’) as not publicly accessible had arguably contributed to the unease about INDECT in the first place. However, as the consortium received a considerable amount of EU money over its funding period, its leadership arguably had little choice but to declare full transparency on its activities and to re-declare several reports as publicly accessible.Footnote 7 In other words: the project leadership, through public debate, parliamentary inquiry, and the open letter, had been held accountable.

The unfolding of political events did not, however, stop with INDECT’s response to public inquiry. The European Parliament’s written declaration eventually made its way into the mid-term review report on FP7 activities in April of 2011, which states that

[the Parliament] stresses that all research conducted within the FP7 must be conducted in accordance with fundamental rights as expressed in the European Charter; therefore, strongly urges the Commission to immediately make all documents related to INDECT (a research project funded by the FP7 aimed at developing an automated observation system that constantly monitors web sites, surveillance cameras and individual computer systems) available and to define a clear and strict mandate for the research goal, the application and the end users of INDECT; stresses that before a thorough investigation on the possible impacts on fundamental rights is made, INDECT should not receive funding from the FP7. (European Parliament 2011, p. 10)

Notably, the mid-term review report goes decisively further than parliamentary questions and open letters. In exposing the ongoing research within INDECT to investigation and debate, it calls for the suspension of public funding until all open questions and doubts around the project had been addressed and resolved through an audit. The audit, through the establishment of an account of the research activities, eventually came to the conclusion that “INDECT did not breach any ethical requirements,”Footnote 8 and project activities could therefore be resumed. This result does not exactly come as a surprise, however, considering the formalistic procedures of checking compliance with legal frameworks (for instance in terms of data protection or non-discrimination) that are applied in such audits (Stahl 2011).

The critical public discourse surrounding INDECT even had notable effects on the positions of security actors unrelated to the project. For instance, by October of 2011, the German Bundeskriminalamt (Federal Criminal Police Office) felt compelled to distance itself from the project after several media outlets had alleged that it was cooperating with the INDECT consortium. In a press release, the agency clarified that it had been offered a partnership with INDECT in 2007, but had refused to cooperate due to concerns about the project’s surveillance implications (Bundeskriminalamt 2011). Overall, I contend that the scrutiny that emerged around the INDECT project was to a considerable extent empowered by practices of holding the project leadership accountable. Both the European Commission and the INDECT leadership responded to the different practices of accountability by clarifying the project goals and making hitherto non-disclosed project documents publicly available, thereby opening up the project for informed debate. Moreover, public funding was suspended until a secondary board review of the ethical implications of the project had been completed.

Responsible and Accountable Innovation?

Attempts to govern emerging technologies, charged as they are with moral questions, face complex and at times contradictory and ambivalent challenges. As Stirling (2007, p. 231) claims, “the only way seriously to address these challenges lies in more direct, systematic and explicit attention to institutions and procedures for opening up, as well as closing down, in social appraisal”. The notion of accountability, so I have argued, indeed provides a viable way of ‘forcing’ such an opening-up of emerging technologies for wider debate that, in the vein of Responsible Innovation frameworks, would eventually lead to morally better and more substantive results. This is particularly the case for research that would otherwise have remained under the radar of public notice, and thus of deliberation, until the eventual implementation of its resulting technologies. To recall: “the first and foremost task for responsible innovations [is] to ask what futures do we collectively want science and innovation to bring about, and on what values are these based?” (Owen et al. 2013b, p. 37).

Discussions about values and ensuing choices should thus, both in terms of Responsible Innovation and of democratic principles, be rendered as open and inclusive as possible, so that claims of the mutual responsiveness of involved stakeholders can gain actual traction. Accountability as a complementary tool for moral inquiry into emerging technologies should thereby not be misunderstood as a solution to controversy. Rather, it should be understood as a ‘lever’ that enables broader debates and that establishes a resonance space for public debate in the first place. Creating an account of development and design processes does not foreclose or even resolve controversy. It can however help to create enhanced visibility of moral concerns, and subsequently provide incentives for those in charge to tackle and resolve them. Callon et al. (2009, p. 28) argue that “controversies enrich democracy”, and this is precisely how the idea of accountability should be understood in this context: as a tool for the democratization of emerging technologies. The creation of an account in this sense pins down development and design processes and the choices they are imbued with not only in moral terms, but notably in democratically binding terms.

Holding those in charge of emerging technologies accountable is an important democratic technique, as it links power back to the democratic sovereign in a direct fashion. Repeatedly claiming accounts from government, public administration, and institutions can thereby arguably contribute to practicing and upholding democratic principles. A particularly strong argument for practices of holding accountable is public expenditure. While this might be an instrumental rather than a moral argument, it aptly illustrates the moral thread that runs throughout the complex assemblage of contemporary development and design processes. As Stahl (2011, p. 143) points out, “governmental and funding bodies have a strong impact on research agendas and can shape future research”. In the case presented here, the European Commission had funded a project whose goals clashed with what the public deemed morally acceptable.

The successful efforts to hold the European Commission and the INDECT project leadership accountable were strongly supported by the European Union’s overall commitment to normativity and transparency. The EU is indeed quite keen on rendering its expenditures transparent to the European public. This is an understandable aim, given that European institutions ultimately spend the tax money of European citizens. At the same time, these institutions thereby deliberately render themselves accountable for money that is spent in a ‘wrong’ way. This mechanism has so far been rather under-acknowledged within the literature on research governance. While critical scholars have pointed to the potential dangers of European security research and of a security culture that envisions technology as a ‘threat-resolution’ in line with neoliberal marketization logics (e.g., Hayes 2010, 2012; Klein 2007; Neocleous 2008), the link between ethics, public funding of research, and ensuing accountability has not received adequate attention as a way to establish transparency and spaces of resonance for wider debates about emerging technologies.

Public expenditure directly links the spending of tax money back to a wider social and political consensus on what is acceptable and what is not—both in form and in aim. It should be kept in mind here, as argued above, that there might be no such consensus in the first place. Controversies are usually an expression of conflicting values, and there might not be an easy solution. However, spending tax money in ways that are perceived as detrimental to society creates the duty to answer to worries and doubts. There is, as Bovens (2005, p. 185) puts it, a “close semantic connection between ‘accountability’ and ‘answerability’” that indicates the obligation to answer questions in order to be held accountable. Questions in this sense are objections to emerging technologies that can be fueled by a lack of information or clarity, by anxiety and unease about what is going on, or by any actual or potential harm to moral standards. Most basically, such objections are sovereign acts, albeit often mediated through the representative layer of parliament, through NGOs, or through civic protest groups (e.g., Irwin 2008; Callon et al. 2009; Cozzens and Woodhouse 1995).

The practices of asking the European Commission and the INDECT project consortium questions about specific work plan goals, about the compliance of the project with ethical, moral, and legal frameworks, and about their definition of security are strong testimony to democratic practices of claiming an account from those in power and those at the receiving end of public funding. As Wilsdon and Willis (2004, p. 40) summarize the normative core here: “practised in a meaningful way, public engagement can lead to better, more robust policy and funding decisions, provided it is used to open up questions, provoke debate, expose differences and interrogate assumptions”. In other words: public inquiry has the potential to create controversy in the best possible sense. Controversy is part and parcel of democratic processes, and it must be exposed to debate in order to be resolved in a meaningful fashion. It is through the settling of controversies that democratic processes obtain and preserve their legitimation.

Demanding and creating an account of emerging technologies that might turn out to have an impact on society must therefore be considered a key democratic principle that, as argued above, leads to more just and substantive development and design processes through the creation of controversy in the first place. The parliamentary questions from the European Parliament to the European Commission, the imposed audit of the INDECT consortium, and the open letter to the INDECT leadership are inherently democratic means that derive from the sovereignty of the citizens. Through practices of holding to account, visibility and transparency expose both the rationales and the problematizations of development and design processes and their ensuing technological solutions to public scrutiny. Most notably, however, they create a framework in which those in charge of emerging technologies are democratically obliged to take concerns into account and to answer to concrete objections. In the vein of Responsible Innovation: they become obliged to act in a mutually responsive manner and to enact their shared responsibility.

Conclusions

This article has argued that the notion of accountability has so far been underappreciated in the literature on research governance. While the moral responsibility of development and design processes is widely acknowledged, and Responsible Innovation frameworks have implemented the idea of mutual responsiveness across a wide-cast network of stakeholders throughout these processes, there is a conceptual gap when it comes to the question of how to establish a space of resonance in which mutual responsiveness can be enacted. Turning to accountability, as I have argued here, helps to address this gap on two levels. First, it ties in with public participation and thereby with basic democratic notions that have been highlighted in the literature. Rendering research more just and substantive through layers of deliberation strikes at the heart of debates on science, research, and ‘the public’. The assumption within Responsible Innovation frameworks is that by overcoming the often presupposed divide between expert knowledge and lay knowledge through openness and debate, emerging technologies would gain enhanced democratic legitimacy. Accountability itself is a fundamentally democratic mechanism that enables delegation and expertise within public administration and societal organization more generally, without losing the idea of citizens’ control that relates domain expertise back to sovereign power. The establishment of an account creates agency to act upon that sovereign power and to debate and inject public opinion into governance processes.

Second, the establishment of an account also serves as a binding mechanism that can ensure that those in charge of emerging technologies do not simply use public debate as a pretense for creating moral legitimacy. Active involvement of the public in this sense “come[s] closer to the ideal of participatory democracy […] than the alternatives: technical guardianship or democracy by opinion poll” (Cozzens and Woodhouse 1995, p. 547). The demand for an account of what is happening in the laboratory must thereby by no means be considered a hostile activity. Although public opinion is at times presented as an enemy of innovation, “lay knowledge is not an impoverished or quantitatively inferior version of expert knowledge; it is qualitatively different” (Bucchi and Neresini 2008, p. 451). Read through that lens, public concern presents an opportunity for emerging technologies rather than a roadblock. From an ethical as well as a democratic perspective, controversy is an inherently good thing, as it forces all involved stakeholders to reflect on their own moral presumptions and choices, as well as on the potential societal impacts that their work could have.

There remain, however, a couple of questions regarding the wider applicability of accountability mechanisms in development and design processes. As Barnett (2016, p. 134) reminds us, “accountability is not a wonder drug and it does not guarantee remission”. In the case presented here, the creation of an account of the INDECT project’s goals and means arguably put the project under a spotlight of scrutiny that it otherwise would not have received. Accordingly, the project leadership had to be quite careful not to add to the already existing moral concerns. It can thus be assumed that being held to account contributed to more morally acceptable work within the consortium. Von Schomberg (2013, p. 71) argues that “public debate, ideally, should have a moderating impact”. It is precisely here that practices of accountability can establish a bridging mechanism between public opinion on moral acceptability and those in charge of research, enforcing an obligation to take such concerns seriously.

The economic link between public expenditure for research funding and the ensuing calls for transparency and openness towards the taxpayer arguably presents a particularly striking opportunity to claim an account. There remains, however, a concern with regard to the applicability of this link beyond the public sector. As Wilsdon and Willis (2004, p. 48) point out, “moving public engagement upstream is hard enough in the context of taxpayer-funded—and publicly-accountable—science. How can it possibly work in the private sector?” This is a problem that lacks a simple solution. A common way to address the issue would be industry self-regulation. Such forms of ‘soft’ governance are however mostly voluntary, and would thus fundamentally undercut the argument that the notion of accountability establishes a binding link between innovation processes and the public.

Another question that calls for further consideration is the notion of democracy itself. While ‘more democracy’ certainly has a positive ring to it, democratic theory is far more diverse and ambivalent than it is often presented in policy and governance discourses. Especially when it comes to the shaping of ‘public opinion’, we must ask ourselves how such public opinion comes about, who is involved in its emergence, based on what power positions and resources, and what conflicting positions within the public might exist that ultimately become subdued under a prevalent position. The agents of the public in the presented case were NGOs, activist groups, and elected officials. They might not have been the only ones, but they certainly were those with the expertise and means to make their voices heard. Irwin (2008, p. 586) reminds us that “claims to ‘democracy’ and to ‘public opinion’ should similarly be viewed in contextual and contingent terms”. Issues that need to be addressed thus include inquiries into the meaning of democracy within research governance, and into its possible limits.

Last but not least, further notice should be given to the role of the ethics experts involved in research consortia. The European Union, through requirements now fully implemented within the Horizon 2020 funding framework, demands that emerging technologies be exposed to ongoing monitoring of their possible legal and ethical impacts, either by the project consortium itself or through external experts. This notably includes questions of privacy and data protection, which are pertinent considering that many new technologies are networked ones that rely on data collection and processing. Ethical expertise and coverage ‘from within’ research, as well as the role of data supervisors, should thus receive attention when further thinking about Responsible Innovation and ways to hold development and design processes accountable.

Future research must certainly take up those concerns, as well as seek ways to more firmly anchor mechanisms of accountability in institutional design. Accountability should, however, be considered a conceptual and practical supplement to the existing literature, particularly the literature on Responsible Innovation, as it potentially fills the enforceability gap in the mutual responsiveness of the dispersed networks of actors and stakeholders involved in emerging technologies. If “participatory experiences highlight, among other things, a growing endeavor to bring back into mainstream democratic politics those transformations driven by science and the economy that modernity sought to exclude from it” (Bucchi and Neresini 2008, p. 466), then it seems timely to open the democratic toolbox and make use of its instruments for purposes of governing research in a morally sound fashion.