Privacy and security have often been framed as conflicting concepts that must be conceived of as incommensurable and thus constitute a trade-off.Footnote 1 And although this notion has been widely criticized for relying on overly simplistic definitions of both privacy and security, as well as for neglecting empirical examples of positive-sum games and questions of whose privacy and whose security are affected,Footnote 2 the trade-off model appears quite persistent. Given the data-driven nature of contemporary security measures, much digital ink has been spilled about the presumably weak standing of privacy in the face of a more or less overwhelming context of (inter-)national security.Footnote 3 This paper analyzes how the relation between privacy and security has been framed and re-framed in the field of European security research, eventually ending up as a question of privacy by design. Privacy by design, so the argument goes, enables new security technologies to be both privacy-preserving and effective and efficient, and would thus ultimately serve as the silver bullet that resolves the conflict/trade-off. However, this paper puts forward the claim that the notion of privacy by design merely puts old wine into new bottles: a closer look reveals that the core problem is not tackled, but only re-framed according to the generally technical scope of security research. Thus, it appears that the new emphasis on privacy and the ensuing argumentative mitigation of the conflict merely serves to comply with the EU’s increased focus on normative security, while at the same time rendering research governance a technological fix for the technological fix that security is conceptualized as in the first place.

The paper proceeds by providing a brief overview of the emergence of security research at the EU level over the last decade and sheds light on its underlying rationalities, en passant retracing how the presumed trade-off between privacy and security was framed and eventually evolved into a privacy by design approach alongside the emergence of a more normatively framed EU ‘security project’. The paper concludes with a critical assessment that questions the suitability of privacy by design as the panacea it is advertised to be.

1 EU Security Research – On the Emergence of a Field and a Conflict

“Security research is the new guy in town.”Footnote 4 As opposed to ‘traditional’ fields of research funded by the European Union, research explicitly dedicated to the security of the EU and its citizens has only been around for the relatively short period of about a decade,Footnote 5 and has at times struggled to find its niche among related fields with a strong ‘security touch’, such as Information and Communication Technologies (ICTs). However, fostered by ‘new’ and global threat scenarios, the quest for appropriate remedies has become an integral part of the realm of fundamental and applied research that is set to produce new tools and technologies, and thus to contribute to effectively establishing security in the European Union – or so the argument goes. Arguably, the need for reinforced security solutions was catalyzed by the debate kindled by the events of 9/11 and their massive aftermath in terms of security policy adjustments.Footnote 6 In the EU, security is now conceived of as a cross-cutting concept that has to tackle widespread areas such as terrorism, serious and organised crime, cybercrime, cross-border crime, violence itself, and natural and man-made disasters.Footnote 7 Thus, security research has eventually been established as a key area within the European funding framework.

This very framework, however, is currently undergoing structural change. In 2014, EU research funding reaches an institutional threshold as the established Framework Programmes (FP) come to an end with FP7 and are replaced by an overhauled, streamlined, and arguably simplified and more efficient programme entitled Horizon 2020.Footnote 8 Official documents promise that this new framework will, among other things, place a clearer focus on societal issues, most notably privacy and data protection.Footnote 9 This structural change thus appears an appropriate juncture at which to analyze how the still emerging field of security research is being (re-)shaped alongside economic rationalities and the emergence of a European ‘security project’ itself, and how the relationship between privacy and security keeps evolving. In order to set out an analytical framework, this paper argues that EU security research funding follows two general trajectories: it is mainly conceived of (1) as a means to foster the European economy, and (2) as a primarily technical framework that aims to produce specific solutions to clearly defined security problems. In recent years, however, a third notion has been added to this dichotomy, as ‘security’ itself is increasingly presented as a normatively embedded concept that needs to comply with human rights and civil liberties. This appears to be a major reason for abandoning the trade-off model and for the search for new and integrative approaches, eventually ending up with privacy by design.

‘Historically’ speaking, EU security research can be framed as a field shaped through an inextricable entanglement with the industrial sector, as has been compellingly shown by Bigo, Jeandesboz, Hayes, and others.Footnote 10 Multiple companies and personalities from the sector have been involved in the setting up of the field and the intensified cooperation between the Commission and the industry, taking off in 2003 with the establishment of the Group of Personalities in the Field of Security Research (GoP)Footnote 11 and the initiation of the Preparatory Action on Security Research (PASR) in 2004. The GoP was eventually followed up by the European Security: High Level Study on Threats, Responses and Relevant Technologies (ESSTRT) in 2006,Footnote 12 as well as by the setting up of the European Security Research Advisory Board (ESRAB)Footnote 13 in 2005 and the European Security Research Innovation Forum (ESRIF)Footnote 14 in 2008, both of which further envisioned the future of security research at the EU level.

Throughout the published reports of the aforementioned fora, privacy and data protection in particular have been framed as disruptive elements for security technologies and thus for the overall goal of a secure European Union. For instance, as Bigo and Jeandesboz have pointed out, the ESSTRT final report frames the conflict such that “the underlying assumption is that intrusiveness is a requirement for efficiency, and that privacy undermines efficiency”,Footnote 15 and the ESRAB report states that “research into ethics and privacy, and the trade-off between improved security and loss of privacy, will influence technology development and in parallel address aspects of how citizens perceive security and insecurity.”Footnote 16 Thus, privacy and security were generally conceived of as incommensurable concepts, and it was very clear where the preferences for effective security research had to be placed – the need for security apparently trumped the need for privacy. Either security measures would work, because they would be based on a sufficiently large database that allowed for glimpses of the future and of the next event that needs to be canceled out – or they would not work, because privacy claims and the restrictions of the data protection framework would thwart their effectiveness. More or less independent of any actual conceptualization of privacy, be it as the classical “right to be left alone”Footnote 17 that entails a “boundary control process”,Footnote 18 as the “claim of an individual to determine what information about himself or herself should be known to others”Footnote 19 which in turn involves “a constraint on the use of power”,Footnote 20 or politically as the foundation of the democratic constitutional stateFootnote 21 – any position that values the (digital) personal sphere would be considered disruptive from an industry point of view. Especially when taking into consideration Helen Nissenbaum’s concept of privacy in context,Footnote 22 one might indeed be inclined to say that threat scenarios were used to create a contextual override for privacy arguments.

As mentioned earlier, such a trade-off model is certainly oversimplified and arguably represents only part of the full story. Why, then, do we find such a striking neglect of privacy arguments in official documents? The next section aims at unpacking the underlying notions of security and security research in the European Union. It will become clear that EU security research unfolds along a clear-cut economic agenda, and thus introduces a very specific and market-driven approach to the relationship between privacy and security.

2 Economics and Technologies

First trajectory. Both FP7 and Horizon 2020 documents acknowledge the economic goals identified by the Europe 2020 strategy,Footnote 23 framing “research and innovation as central to achieving the objectives of smart, sustainable and inclusive growth.”Footnote 24 The underlying rationale, as stated by the Staff Working Paper on Horizon 2020, is that “modern economic theory unanimously recognises that research and innovation are prerequisites for the creation of more and better jobs, for productivity growth and competitiveness, and for structural economic growth.”Footnote 25 To that end, a study on behalf of DG Enterprise and Industry has analyzed the global security market and the position of the European security industry, coming to the conclusion that “it appears vital to stimulate and create a proper innovation framework in the security domain and establish fast track development procedures for new market technology requirements.”Footnote 26 As a consequence of those findings, the European Commission in 2012 adopted an “Action Plan for an innovative and competitive Security Industry”Footnote 27 in order to secure and extend market shares in a rapidly growing global security economy.

In the same year, the Commission published a document on EU security research entitled “Safeguarding Society, Boosting Growth.”Footnote 28 Looking over its content, it quickly becomes clear that the emphasis lies on the latter part, as the document states that

our objective, notably through our Security Industrial Policy initiative, is to improve the global competitiveness of the EU security industry by stimulating its growth, invest in the research and development of future, world-leading security technologies and processes, and launch any effort necessary to overcome the current market fragmentation for security products in the EU and thus establish a true Internal Market.Footnote 29

In fact, the conceptualization of EU research funding as a policy tool for economic growth has always been out in the open. The purpose of security research, in particular, can be inferred from its institutional location. Its housing within DG Enterprise and Industry instead of the arguably more natural fit, DG Research & Innovation, provides a clear statement and has been criticized for its “significant consequences for the way we understand and do research on security as an ethically charged field of research.”Footnote 30 This general economic scope will likely be reinforced with the start of Horizon 2020. As the joint communication on the new framework states, “since the launch of the Seventh Framework Programme (FP7), the economic context has changed dramatically”,Footnote 31 and it now urges the EU to provide even stronger incentives, since “research and innovation help deliver jobs, prosperity, quality of life and global public goods.”Footnote 32

The ECORYS report on the competitiveness of the European security industry bolsters those general assumptions with concrete figures. The global security market is estimated to be worth €100 billion, with the size of the European market in the range of €26 to €36.5 billion.Footnote 33 This translates into roughly 180,000 employees in the European security sector. Accordingly, security research receives a considerable amount of funding: the security theme under FP7 was worth an overall €1.4 billion,Footnote 34 and the budget for the “Secure Societies” action under Horizon 2020 alone has been set at €1.7 billion. Despite those efforts, however, the ECORYS report points out a “low aggregate level of EU funding for security-related research, technology development and innovation.”Footnote 35 From a comparative perspective, EU security research funding still remains “considerably below the efforts made in the USA”, leading to “potential weaknesses in the underlying competitiveness of the EU security sector.”Footnote 36 This could in turn lead to a predicted decline in market share to as low as 20 % by 2020,Footnote 37 particularly with the Asian security industry massively catching up in the high-tech area, but also with considerable competition from Russia and Israel.Footnote 38 The remedy for such a threatening scenario appears quite simple: reinforced market stimulation through enhanced security research funding and faster product cycles.Footnote 39 Thus, one might indeed be inclined to agree with the famous slogan of Bill Clinton’s campaign: “it’s the economy, stupid”. Economic prosperity has been the driving force behind European integration from the beginning, and why should it be any different for security research, of all things?

The Action Plan for the security industry subsequently provides concrete steps of action to reinforce the competitiveness of the European security industry, suggesting the creation of a true Internal Market through favorable conditions, the enhancement of competition and lower production costs, as well as strengthened support for SMEs.Footnote 40 Apart from those issues, however, one of the most pressing concerns still appears to be the potential of privacy and data protection to thwart the effectiveness of security technologies and thus their successful market impact in the first place. The Action Plan subsequently takes up that conflict and states that a major problem arising from the societal dimension of security research is the social acceptance of security technologies – or rather the lack thereof – which could result in a number of negative consequences for the security industry, such as wasted investments.Footnote 41 Most strikingly, privacy requirements are regarded as hurting the security market on both the supply and the demand side. For the supply side (i.e. the European security industry), this would mean that its products might not reach their maximum ‘security potential’ due to constraints in data collection and analysis, and “for the demand side it means being forced to purchase a less controversial product which however does not entirely fulfill the security requirements.”Footnote 42 Thus, from an industry angle, the situation appears quite clear: privacy hampers security. Or rather, it hampers security technologies, as EU security research is indeed primarily focused on the emergence of new technologies.

Second trajectory. The rationale behind this scope becomes clearer when looking at how current security efforts within the EU are conceptualized as data-driven and risk-mitigating measures. As security policies increasingly emphasize the potential of databases, data-sharing and interoperability for gathering knowledge and thus being able to prevent future risks,Footnote 43 Information and Communication Technologies (ICTs) have spilled over into security contexts – and with them issues of privacy (and data protection). Security technologies focus heavily on communication, social networks, and other forms of individual interaction with a digitized everyday environment, such as sensors or biometrics. The massive amount of personal and behavioral data constantly produced then serves as the basis for fighting crime and terrorism through various forms of data exploitation such as algorithmic profiling and probabilistic risk calculations.Footnote 44 Put more simply: security itself has become dominated by the desire to accumulate data in order to predict the future and counteract criminal and terrorist incidents. But when security is supposed to be enacted through the mitigation of future risks, those risks first have to be identified.
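
To make the logic of such probabilistic risk calculation tangible, the following is a deliberately toy sketch in Python. All feature names, weights and thresholds are hypothetical assumptions introduced purely for illustration and are not drawn from any actual system discussed in this paper; the point is merely that the quality of such a score depends directly on how much behavioral data is available.

```python
import math

# Toy sketch of probabilistic risk scoring: hypothetical behavioural features
# are weighted and combined into a single "risk" probability. Names, weights
# and the threshold are invented for illustration only.
WEIGHTS = {
    "cross_border_trips_per_year": 0.08,
    "flagged_network_contacts": 0.9,
    "cash_transactions_over_10k": 0.5,
}
BIAS = -4.0  # keeps the baseline probability low for unremarkable profiles

def risk_score(profile: dict) -> float:
    """Combine weighted behavioural features into a probability via a logistic function."""
    z = BIAS + sum(WEIGHTS[k] * profile.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

profiles = [
    {"cross_border_trips_per_year": 2, "flagged_network_contacts": 0, "cash_transactions_over_10k": 0},
    {"cross_border_trips_per_year": 12, "flagged_network_contacts": 3, "cash_transactions_over_10k": 2},
]

for p in profiles:
    score = risk_score(p)
    print(f"risk={score:.2f}", "-> flag for review" if score > 0.5 else "-> no action")
```

The sketch also makes the paper’s point visible: every additional feature that can be collected and fed into such a score is, from this perspective, a gain in predictive capacity.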

ICTs have emerged as the very tools to do so, and this notion has obviously evoked critical reactions. ICT research ethics has thus been specifically concerned with the implications of the use of personal information in distinct contexts.Footnote 45 Arguably, the increasing spill-over of ICTs into the realm of security is also the reason why privacy and data protection are framed as the predominant ethical concerns of current security research in official EU documents. Whether this limitation of ethical concerns to one clear-cut area is by any means adequate remains questionable. It should clearly be noted that multiple other pending ethical issues, such as autonomy, social inclusion, human dignity, or dual use and function creep/mission creep between the civil and the military realms of security, also require attention.

However, when looking at the political and financial efforts put into security research over the last decade, one might indeed be under the impression that “our political masters, aided and abetted by the security industry, often appear willing to sacrifice some of the citizenry’s privacy in order to better secure society”,Footnote 46 as van Lieshout et al. have provocatively put it. How, then, did the stark contrast of a presumed trade-off come to be transformed into a resolvable privacy by design issue instead of the irreconcilable conflict it was framed as before?

3 A Normative Turn?

The answer arguably lies in the re-framing of the overall European ‘security project’. With the Treaty of Lisbon in 2009 and the ensuing legally binding status of the European Charter of Fundamental Rights,Footnote 47 the EU has – at least on paper – made a clear commitment to human rights and civil liberties. For the (broader) field of security, this commitment is reflected in the European Internal Security StrategyFootnote 48 of 2010 and the Stockholm Programme, which provides the current concrete policy framework (2010–14).Footnote 49 The Internal Security Strategy, for instance, explicitly states that “Europe must consolidate a security model, based on the principles and values of the Union: respect for human rights and fundamental freedoms, the rule of law, democracy, dialogue, tolerance, transparency and solidarity.”Footnote 50 And the Stockholm Programme puts forward a Europe built on human rights, going as far as to claim that when it comes to security measures,

basic principles such as purpose limitation, proportionality, legitimacy of processing, limits on storage time, security and confidentiality as well as respect for the rights of the individual, control by national independent supervisory authorities, and access to effective judicial redress need to be ensured and a comprehensive protection scheme must be established.Footnote 51

This strengthened emphasis on the normative aspects of security can also be found in the FP7 security scheme, which states that “the potential impact of the resulting technologies and activities on Fundamental Rights, ethical principles and societal values should be addressed as part of the proposed research.”Footnote 52 Again, privacy and data protection in particular have thus been officially tagged as norms that may be infringed by security technologies.Footnote 53 Apart from such official statements, the predominantly technological security tools that have emerged from the FP frameworks in recent years have become the target of normative interventions due to their potential negative impact on society.Footnote 54

Third trajectory. Alongside this new focus on the normative dimension of security, research funding – or rather the governance thereof – is also undergoing change. Security research now has to be ‘ethically compliant’ in order to take into account possible negative impacts at the societal level. Security research projects are thus to be accompanied by explicit oversight from ethics boards in order to ensure that research is in line with normative principles. Subsequently, research ethics have come to play a key role in the governance of security research, and are set to establish safeguards against detrimental societal impacts of security technologies at an early stage of research and development. In EU research funding, dedicated ethical coverage of the research process has been in place under the heading of “fundamental ethical principles”Footnote 55 since FP5 (1998–2002). Fields such as medical and biological research in particular have a long history of needing such ethical coverage, as has become apparent with the emerging possibilities of ‘engineering’ human life at the genetic or molecular level. Security research is joining those fields as one of the areas that have to be monitored and advised closely. As Burgess notes, “security comes with its own special ethical baggage”,Footnote 56 since it carries the potential to inflict curtailments on fundamental societal and individual values. In fact, numerous scholars have in recent years engaged with the threatening and negative consequences of new and emerging security technologies.Footnote 57

On the other hand, however, security itself represents an important value, as it “embodies the social and cultural needs of a society, its hopes and fears, its past and its ambitions for the future.”Footnote 58 Read through that lens, security carries an ethics of its own, as an overarching prerequisite for any society. Much has been written on the problems that can arise from over-emphasized security and its ensuing detrimental impacts on human rights and civil liberties.Footnote 59 Adding to that list of potential negative consequences, security research

can include particular measures that have as a secondary effect an increase in insecurity – such as the development of scanning devices that cause unease, weapons systems that provoke fear or insecurity among innocent bystanders, or surveillance systems that are experienced as too invasive.Footnote 60

Thus, security research appears to be a Janus-faced phenomenon with the potential for both detrimental and beneficial outcomes that indeed come as “inseparably intertwined.”Footnote 61 The delicate balance between the ‘goods’ and ‘bads’ of security for society is thus constantly challenged by security research and the technological tools that emerge from it. A close look reveals, as mentioned earlier, that nearly all security-related research projects within FP7 feature a technological scope, as “the Security theme supports R&D actions oriented towards new methodologies and technologies.”Footnote 62 Given the potential detrimental impact of security technologies on societies sketched above, coupled with the financial volume of security research funding, the stakes for a dedicated security research ethics appear exceptionally high.Footnote 63 This constellation is indeed reflected in official documents – and once again it is predominantly framed in terms of privacy. The last call fiche for the security theme of FP7, for instance, states that “if ethical issues, including privacy are raised, they should be addressed in the core of the proposed activity”,Footnote 64 and the EC document on ethical and regulatory issues in research policy dedicates a whole chapter to “New Security Technologies and Privacy.”Footnote 65

This emphasis on privacy arguably stems from the aforementioned data-driven nature of contemporary security technologies, which build on the collection and analysis of large amounts of data, as well as from the well-defined legal applicability of the data protection framework, which gives privacy concerns a ‘procedural advantage’ over other normative concerns when it comes to security technologies. Interestingly, with this ‘new’ focus on morally right security, the original conflict between security and privacy is reinforced rather than mitigated. In other words: with the increased emphasis on the importance of privacy, the privacy side of the original equation has been upgraded and is now far less likely to be overridden by security. And since there no longer seems to be an a priori answer as to which part of the equation should be valued more highly, the decisive question becomes: how can this dilemma possibly be resolved, reconciling privacy and security in a way that complies with the upgraded normative take on security within the EU? The answer appears an intriguing one: if it is not possible to overcome the conflicting positions of the trade-off (however oversimplified they appear), why not abandon the model altogether? The ensuing move beyond it, as enthusiastically announced, has eventually resulted in privacy by design.

4 Privacy by Design: A Technological Fix for a Technological Fix?

In the effort to effectively govern technologies emerging from security research, the Commission has identified three main dimensions of regulatory privacy protection: (1) technical, (2) legal, and (3) self-regulatory.Footnote 66 Characteristic of the legal dimension is its rather territorial scope: based on the European Convention on Human RightsFootnote 67 and the European Charter of Fundamental Rights,Footnote 68 its power is strongly tied to the jurisdiction of the EU. Within this jurisdiction, legal privacy and data protection provisions possess an enforceable status and thus provide strong incentives for any supplier of security technologies to stay within the explicitly formulated boundaries of data collection and processing. In times of global data flows, however, such (supra-)national regulation appears hardly up to the task of effective privacy protection.

The self-regulatory dimension of security research governance, by contrast, is based on voluntary commitments from the private sector. Self-regulation towards technology development that fulfills ethical requirements is then to be achieved through the involvement of stakeholders and the establishment of ‘soft’ regulations.Footnote 69 The focus within self-regulatory governance lies on non-enforceable concepts such as “market self-regulation, corporate social responsibility (CSR), and governmental incentives for research that can drive technology towards more ethical development.”Footnote 70 While acknowledging the potential of voluntary forms of research governance, Székely et al. have pointed out that monitoring and supervising self-regulation in the area of emerging technologies is a highly difficult task.Footnote 71

Thus, the official position of the European Commission with regard to security research governance can be summarized such that “weaknesses in self-regulation and legal governance suggest technological governance as a good site for concrete, operationalized engagement with tensions between the protection of privacy and the pursuit of security.”Footnote 72 One might be inclined to say that this preference in fact amounts to a technological fix to right the technological fix that is security research in the first place. How, then, is such technological reconciliation to be achieved? From the official documents it becomes quite clear that Ann Cavoukian’s concept of privacy by designFootnote 73 is now considered the silver bullet for the old clash between security and privacy. Researchers and developers are thus encouraged to tackle possible privacy and data protection issues pro-actively from the very beginning in order to avoid costly adjustments later on.

In fact, the ESRIF final report of 2009 made an early effort to bridge the gap between privacy and security, stating that “ESRIF advocates implementation of a ‘privacy by design’ data protection approach that should be part of an information system’s architecture from the start.”Footnote 74 How does this work? Privacy by design starts from the assumption that “privacy is good for business”,Footnote 75 and develops the idea that privacy can be conceived of as a positive sum game. This is a crucial notion, as it stands opposed to the postulated zero sum game at the heart of the hitherto dominant trade-off model. Furthermore, privacy safeguards should then be implemented proactively and early in the development and design of information processing technologies, and built in such a way that they last throughout the entire product life cycle.

Central to such a conceptualization of the relationship between technology and privacy/data protection is the assumption that privacy principles should be incorporated early in research and development in order to avoid costly retrofits at later stages.Footnote 76 It is exactly this presupposition that is now mirrored in EU security research. As stated by the Commission, privacy by design “should be recognized as a guiding and technologically neutral principle, suitable for flexible applications, in a general provision mandating that existing privacy and data protection principles be integrated into ICTs.”Footnote 77 Likewise, the Action Plan for the security industry suggests making use of a privacy by design approach.Footnote 78 This also falls well in line with recent discussions about privacy-preserving data mining and privacy-enhancing technologies.Footnote 79

But does it really resolve the original conflict, namely the presumed choice between improved security and the protection of privacy? There are a number of issues in the relationship of ‘security and/vs privacy’ that might not be so elegantly resolved through privacy by design. A key element of privacy by design is the set of Fair Information Principles (FIPs), which are meant “to limit collection, use and disclosure of personal data, to involve individuals in the data lifecycle, and to apply appropriate safeguards in a continuous manner.”Footnote 80 As Schaar argues, this means “the separation of personal identifiers and content data, the use of pseudonyms and the anonymization or deletion of personal data as early as possible.”Footnote 81 Such practices are undeniably suitable for organizational and economic contexts. However, as has been argued throughout this paper, data-driven security technologies derive their added value exactly from the information surplus that is accumulated through the collection and processing of data that could eventually be connected to possible criminals or terrorists in order to cancel out future risks. And we should remember that, by the logic of security experts and policy makers, the more information one can get, the better the prediction of the future and thus the better our overall security will be. In other words: security cannot thrive on informational parsimony. The FIPs, on the contrary, radically curtail the possibilities that come with advanced analytics in security contexts. This stark contrast is strikingly reminiscent of the early days of security research, when the “trade-off between improved security and loss of privacy”Footnote 82 was openly framed as a major obstacle for the field. But how, then, to achieve both effective security and non-intrusive privacy?
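
As a purely illustrative sketch of what such data minimization can look like in practice (assuming a hypothetical record layout and using only Python standard-library functions), the following fragment separates identifiers from content data, replaces them with pseudonyms, and discards the raw identifiers immediately:

```python
import hashlib
import secrets

# Minimal sketch of the practices Schaar describes: separating personal
# identifiers from content data, replacing them with pseudonyms, and deleting
# the raw identifiers as early as possible. Record layout is hypothetical.
SALT = secrets.token_bytes(16)  # kept separate from the content data store

def pseudonymize(identifier: str) -> str:
    """Derive a stable pseudonym; without the salt, the identifier cannot be recovered."""
    return hashlib.sha256(SALT + identifier.encode()).hexdigest()[:12]

raw_records = [
    {"name": "Alice Example", "passport": "X1234567", "event": "border crossing", "date": "2014-03-01"},
    {"name": "Alice Example", "passport": "X1234567", "event": "border crossing", "date": "2014-05-17"},
]

# The content store keeps only a pseudonym plus the event data; names and
# passport numbers are dropped right after pseudonymization.
content_store = [
    {"pseudonym": pseudonymize(r["passport"]), "event": r["event"], "date": r["date"]}
    for r in raw_records
]
del raw_records  # early deletion of directly identifying data

for row in content_store:
    print(row)
```

Precisely this kind of parsimony, however, is what the data-driven security analytics described above seeks to avoid.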

Certainly, there has been considerable progress in the techniques of data analytics. Algorithms that allow for privacy-preserving forms of data mining,Footnote 83 for instance, have been on the rise in recent years. But even with such privacy-friendly methods of data collection and analytics, the tension between privacy and security cannot be fully resolved. The “dimensionality curse”Footnote 84 states that in order to fully preserve privacy, the number of personal attributes would need to be reduced to such an extent that the utility of processing the data is lost. Hence, the conflicting interests of privacy on the one hand and the benefit of being able to process data on the other cannot simply be resolved by technical means. A certain conflict thus remains between efficiency in terms of the generation of security knowledge and the preservation of privacy. In simple terms, the more (individual) attributes are removed from a dataset, the less utility its analysis will yield. Is the turn to privacy by design merely old wine in new bottles, then? Even if it does not convincingly resolve the tension between privacy and security, the transformative framing of the old ‘conflict’ tells us a lot about the current state of affairs with regard to privacy and security.
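
The underlying trade-off can be made concrete with a small, admittedly artificial example. The following Python sketch uses an invented eight-record dataset and progressively suppresses attributes: the smallest group of indistinguishable records (the ‘k’ of k-anonymity) grows, while the number of distinguishable profiles, taken here as a crude proxy for analytic utility, shrinks.

```python
from collections import Counter

# Toy illustration of the privacy/utility tension: the fewer quasi-identifying
# attributes a dataset retains, the larger the anonymity groups become (better
# privacy), but the fewer cases the data can still distinguish (less utility).
# All records and attribute names are invented for this example.
records = [
    ("25-30", "Brussels", "engineer", "frequent flyer"),
    ("25-30", "Brussels", "engineer", "rare flyer"),
    ("25-30", "Brussels", "teacher",  "frequent flyer"),
    ("25-30", "Brussels", "teacher",  "rare flyer"),
    ("30-35", "Antwerp",  "nurse",    "frequent flyer"),
    ("30-35", "Antwerp",  "nurse",    "rare flyer"),
    ("30-35", "Antwerp",  "clerk",    "frequent flyer"),
    ("30-35", "Antwerp",  "clerk",    "rare flyer"),
]

def smallest_group(n_kept: int) -> int:
    """Size of the smallest group of identical records when only the first n_kept columns remain (the k in k-anonymity)."""
    projected = [r[:n_kept] for r in records]
    return min(Counter(projected).values())

def distinct_profiles(n_kept: int) -> int:
    """Number of distinguishable profiles left, a crude proxy for analytic utility."""
    return len({r[:n_kept] for r in records})

for n_kept in range(len(records[0]), 0, -1):
    print(f"{n_kept} attributes kept: k = {smallest_group(n_kept)}, "
          f"distinct profiles = {distinct_profiles(n_kept)}")
```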

5 Conclusions

This paper has shown that the relationship between the concepts of privacy and security has come a long way from an early conceptualization as a sharp trade-off to a contemporary framing as a technological issue that appears resolvable through privacy by design. However, this paper has put forward the claim that the current re-framing is not particularly well suited to actually mitigate or resolve the tension between privacy and security, but rather pays tribute to the technological framing of security, while at the same time acknowledging the increasingly normative take on security within the EU.

The trade-off model has always been troubled by the oversimplified claim that it was possible to put forward two unspecified concepts and weigh them against each other. And while privacy has long been conceived of as “a moving target”,Footnote 85 the conceptualization of security is shifting as well. To stay within the metaphor, the second target is also starting to move quite rapidly, as the notion of security is undergoing deep-seated normative transformations. When thinking about the current relationship between privacy and security, it appears only appropriate to take into consideration the changing state of security between abstract concepts, concrete technological applications, economic desires, and normative prerequisites and implications.

Is security merely a driver of economic growth and prosperity, or does it indeed constitute an intrinsic value that has to be handled with care in order to avoid detrimental effects on societal values? Is privacy a value that is still trumped by the seemingly overarching desire for security, or does it have the capacity to challenge the paradigm of security through the EU’s commitment to security measures based more firmly on human rights and civil liberties and through the further incorporation of ethics into EU funded research? The ensuing constellation appears a puzzling one: depending on the perspective, security (technology) is regarded as either a serious threat to privacy or an opportunity for massive economic revenue – but should security by default not be a value in itself? A basic need for any society to ensure its present and future prosperity, and a safeguard for its individuals to flourish and realize their potential? It remains up for discussion whether privacy by design can provide a true reconciliation of privacy and security, or whether it merely serves as a veil set to obscure major concerns with regard to data-driven security technologies. Such a technological approach to the governance of security research (and subsequently to ‘security’ itself) certainly falls well in line with the general technological scope of EU security research. However, it remains open whether this ‘technological fix for a technological fix’ will strengthen the position of privacy and data protection, or whether security will further trump normative considerations and civil liberties/rights. To end on a critical note: privacy by design might not be the silver bullet it is currently held to be, but rather a concept that at first sight appears easily applicable within the general technological paradigm of security, yet only seemingly soothes the conflict between privacy and security.