1 Introduction

Traditionally, privacy and security have been considered opposing values, constantly set in contrast with each other. The perception of a dichotomous relationship has been further exacerbated by technological development. By permeating every aspect of modern human society, technology has detached both privacy and security from real-world situations, allowing them to find their field of application in the arena of cyberspace. The purpose of this article is to demonstrate how technological development, instead of widening the cleavage between privacy and security, allows the two principles to be seen as inter-related and reciprocally affecting each other. In theorising this relationship, the article will first focus on how the evolution of the concept of privacy, driven by the advancement of technology, has moved the two values closer together. Then, to illustrate how this works in practice, the article will employ the European Union General Data Protection Regulation (henceforth GDPR or Regulation)Footnote 1 as a case-study to show that the security of individuals, software and data assumes a crucial role in the fulfilment of data protection objectives.

2 The evolution of privacy

2.1 Privacy and technology

Since its first codification into written law in the United States, privacy has been intertwined with the development of new technologies. Appearing for the first time in an influential article by Samuel Warren and Louis Brandeis in the Harvard Law Review in 1890, “the right to privacy” was originally invoked as a reaction against the intrusive activities of American journalists who showed no respect for personal feelings and sexual relations. In that context, “recent inventions and business methods”—namely “instantaneous photographs and newspaper enterprise”—contributed to the invasion of private and domestic life, preventing the implementation of a “right to be let alone”.Footnote 2

In accordance with the Fourth Amendment of the Bill of Rights, Warren and Brandeis’ view derived a right to privacy from the entitlement to individual freedom from unwarranted intrusion and focused on the harm caused by physical access to a person and his or her possessions. While subsequent developments in U.S. constitutional law led to an account of privacy based on non-interference with one’s intimate and personal decisions, the case-law based on the Fourth Amendment further expanded the concept defended by Warren and Brandeis.Footnote 3 As the use of wiretapping, bulk surveillance and eavesdropping techniques by law enforcement authorities became more frequent, American jurisprudence had to abandon a view of privacy anchored in the physical world and moved towards the notion of a “reasonable expectation of privacy”, enunciated for the first time in the landmark case of Katz v. United States.Footnote 4

In Europe, the conceptualisation of privacy was likewise substantially defined by its relationship with technology. Originally, Article 8 of the European Convention for the Protection of Human Rights and Fundamental Freedoms (ECHR)Footnote 5 played a crucial role, including the protection of the fundamental right to privacy within its primary purpose of protection against arbitrary interference with private and family life, home and correspondence. Then, in 1981, a significant evolution was led by the Council of Europe’s Convention 108 (CETS No. 108), which conceptualised privacy in terms of “data protection” by taking account of “the increasing flow across frontiers of personal data undergoing automatic processing”.Footnote 6 The notion of data protection has since clarified the relationship between privacy and technology, specifying what the object of the protection is and attributing to individuals a fundamental claim to their data, expanding in this way the writ of habeas corpus to a writ of habeas data.Footnote 7

2.2 Privacy and security

Theorists within the academic field of information and computer ethics have long debated whether privacy is an intrinsic or an instrumental value. While the first view holds that privacy is good in itself and for its own sake,Footnote 8 the second—reductionist—view argues that the importance of privacy derives from—and is reducible to—other values or sources of value.Footnote 9 Because it is difficult to defend the view that privacy is important independently of other considerations, and because history has demonstrated that the right to privacy is susceptible to being defeated in trade-offs with other rights, theorists have put forward alternative proposals to justify privacy. According to these accounts, privacy is conceived as a means for the realisation of—alternatively—property rights, security, autonomy, friendship, democracy, dignity or utility and economic value.

Although it might seem that theoretical speculation has no place in a discussion dominated by law and technology, it is in fact useful to frame our justification of privacy in moral terms before analysing and commenting on the policies in place. Indeed, in trying to answer questions like “what kind of value is privacy?” and “why are personal data worth protecting?”, such accounts have provided useful insights for legislators called upon to regulate the processing of personal data.Footnote 10 Especially when advancements in information and communication technologies (ICTs) have contributed to changing social norms, these theoretical debates can be particularly useful for correctly balancing apparently opposing values.

For the purpose of this article, it is worth considering James H. Moor’s theory establishing privacy’s relation with the core value of security. According to Moor, core values are shared and fundamental to human evolution, essential for the sustainability and flourishing of cultures and societies. Privacy being the expression of the core value of security, Moor justifies its interpretation in terms of protection of personal information in this way:

“Without protection species and cultures don’t survive and flourish. All cultures need security of some kind, but not all need privacy. As societies become larger, highly interactive, but less intimate, privacy becomes a natural expression of the need for security. We seek protection from strangers who may have goals antithetical to our own. In particular, in a large, highly computerised culture in which lots of personal information is greased it is almost inevitable that privacy will emerge as an expression of the core value, security.”Footnote 11

In the context of the cyber-revolution, the increased connectivity deriving from the explosive growth of computer technologies has conditioned, both quantitatively and qualitatively, the dissemination of personal information, making the relationship between privacy and security even more concrete. With our world increasingly relying on data, the risk of unauthorised access to personal information poses great dangers not only to our lives but also to the survival of our societies.

  • On the one hand, a security breach allowing the accidental or intentional destruction, loss, alteration or unauthorised disclosure of personal data can have a range of significant adverse effects, resulting in physical, material or non-material damage to individuals. Indeed, once our personal information ends up in the wrong hands, limitations of individual rights, discrimination, identity theft or fraud, financial loss and reputational damage become concrete challenges to our security.Footnote 12

  • On the other hand, the mass scale of big data analytic techniques has proven to have the potential of weakening the foundation of the democratic governance of our societies. Data mining and the extraction of patterns used to make decisions about users, as well as the possibility of profiling, influencing, nudging and otherwise changing behaviours represent challenges to the collective will that legitimises political power.Footnote 13

While traditional violations of privacy put at risk the security of individuals, new forms of big data interference threaten the security of entire communities. Having to protect against these highly technical challenges, legislators find themselves increasingly in need of juridical instruments oriented towards the security of people, software and data. For this reason, data protection laws should start displaying more characteristics in common with cybersecurity policies, allowing technology to shape not only what privacy ought to protect but also how this protection needs to happen. By taking the GDPR as a case-study, the following paragraphs will show how the concepts of privacy and security are converging under the influence of technological advancements.

3 The revolution of the general data protection regulation (GDPR)

3.1 The fundamental right to data protection

On 25 May 2018, the General Data Protection Regulation became applicable, finally repealing Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data. Although the objective of the Directive had been to harmonise fundamental rights related to data protection and to ensure the free flow of personal data within the internal market, differing national interpretations and applications led to fragmentation and a lack of legal clarity. By applying directly to its addressees and not requiring further implementing measures by EU Member States, the Regulation constitutes a suitable legal instrument for EU citizens willing to enforce their fundamental right to data protection as outlined by Art. 8 of the Charter of Fundamental Rights of the European UnionFootnote 14 and Art. 16 of the Treaty on the Functioning of the European Union (TFEU).Footnote 15

By not mentioning the word “privacy” in any of its eleven chapters and ninety-nine articles, the GDPR marks the definitive emancipation of the right to data protection from the right to privacy, detaching it also from the “right to be let alone” and the protection of secrecy about personal matters. Constituting a direct response to the rapid technological advancements that allow personal data to be used on an unprecedented scale, the GDPR represents an essential step towards the development of a European data-driven economy within the context of the Digital Single Market strategy.Footnote 16 While the Directive considered data to be the property of the data subject and aimed to regulate—statically—its exchange with controllers, the Regulation aims to govern—dynamically—a much more intertwined technological context, with the purpose of neither restricting nor prohibiting the free movement of personal data within the Union.Footnote 17

In this sense, the GDPR assumes the character of a lex generalis and addresses the serious risk of circumvention posed by technological evolution by establishing technologically neutral measures to protect natural persons.Footnote 18 “Taking into account the state of the art” and the “costs of implementation”, the GDPR imposes on controllers the obligation to “implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk”.Footnote 19 In this way, and for the first time in data protection legislation, the GDPR requires controllers to process personal data securely, transforming what once were good and best practices into legal requirements. Introducing a real “security principle” within the context of data protection, Art. 5(1)(f) asserts that personal data are to be “processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organisational measures”.
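As a concrete, if minimal, example of such a technical measure, the sketch below encrypts a personal data record at rest using the Fernet recipe from the widely used Python cryptography library. This is one plausible safeguard of the kind the Regulation contemplates, not a measure the GDPR itself prescribes; the record contents are invented for illustration.

```python
# Minimal sketch of one "appropriate technical measure": encrypting personal
# data at rest with the cryptography library's Fernet recipe (AES-based,
# authenticated encryption). Key storage and rotation are out of scope here
# and would themselves be "appropriate organisational measures".
from cryptography.fernet import Fernet

key = Fernet.generate_key()              # in practice, keep this in a key vault
cipher = Fernet(key)

record = b"alice@example.com,1985-03-14"  # hypothetical personal data
token = cipher.encrypt(record)            # ciphertext safe to store

assert cipher.decrypt(token) == record    # reversible only with the key
```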

3.2 Accountability and a risk-based approach

In order to demonstrate how data protection legislation is highly intertwined with the security of individuals and data, and how the GDPR considers the enforcement of security measures to protect personal data central to its scope, the principle of accountability should be taken as the starting point. As thoroughly defined by Opinion 3/2010 of the Art. 29 Working Party (WP29), “a statutory accountability principle would explicitly require data controllers to implement appropriate and effective measures to put into effect the principles and obligations […] and demonstrate this on request”.Footnote 20 While this formulation is not in itself new in the context of data protection, deriving rather from the OECD privacy guidelines adopted in 1980, the GDPR provides specific legal affirmation of accountability-based mechanisms both in Recital 78 and in Art. 24(1). The latter states:

“taking into account the nature, scope, context and purposes of processing as well as the risks of varying likelihood and severity for the rights and freedoms of natural persons, the controller shall implement appropriate technical and organisational measures and be able to demonstrate that processing is performed in accordance with this Regulation. Those measures shall be reviewed and updated where necessary.”

According to the Regulation, the controller is called upon to identify the security measures suitable to protect the processing of personal data and to monitor continuously whether those measures remain appropriate for the risks highlighted by technological developments. By minimising the risk of unauthorised access, misuse and loss of personal data, the implementation of these measures should foster compliance with the obligations of the Regulation and be a useful tool for data protection authorities in their enforcement tasks. In this sense, the obligations deriving from the Regulation do not operate in an indiscriminate manner but take into consideration the risks that might arise in a specific processing, as well as the nature, scope, context and purposes of that processing. By offering controllers the opportunity to consult data protection authorities where the technical and organisational measures employed would not sufficiently mitigate the risks, the GDPR treats risk as the central parameter for the definition of further obligations.Footnote 21

Therefore, the principle of accountability is strictly connected with a risk-based approach to data protection, consisting of the identification and analysis of risks to the rights and freedoms of data subjects, which precedes the definition and design of appropriate security measures.Footnote 22 Taking state-of-the-art technology into account, the tailoring of both technical and organisational measures should derive from a twofold analysis of risks, composed of their assessment and their management: first, the impact of threats and vulnerabilities should be evaluated; then verification, checking, minimisation or elimination should ensure that the measures taken guarantee appropriate levels of security and confidentiality.
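As a rough sketch of this twofold analysis, the example below scores each threat by likelihood and severity and maps the score to a treatment decision. The scales, thresholds and threat names are invented for illustration; the Regulation deliberately leaves the assessment methodology open.

```python
# Illustrative risk assessment/management sketch. The 1-3 scales, threshold
# values and treatment labels are invented; the GDPR does not mandate any
# particular scoring methodology.
from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    likelihood: int   # 1 = rare, 2 = possible, 3 = likely
    severity: int     # 1 = limited, 2 = significant, 3 = severe

def treat(t: Threat) -> str:
    score = t.likelihood * t.severity
    if score >= 6:
        return f"{t.name}: eliminate or redesign the processing (score {score})"
    if score >= 3:
        return f"{t.name}: minimise with technical/organisational measures (score {score})"
    return f"{t.name}: accept and keep monitoring (score {score})"

for threat in [Threat("credential stuffing on user portal", 3, 2),
               Threat("loss of encrypted backup tape", 1, 1)]:
    print(treat(threat))
```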

3.3 Data protection impact assessment (DPIA)

Permitting the enforcement of accountability, and in line with the risk-based approach of the Regulation, the Data Protection Impact Assessment (DPIA) is one of the most relevant and innovative instruments of the GDPR.Footnote 23 The DPIA is a mechanism for building and demonstrating compliance with the Regulation which aims to describe the envisaged processing and its purpose, to evaluate its necessity and proportionality, to assess the risks deriving from it to the rights and freedoms of data subjects, and to set out the appropriate measures addressing those risks.Footnote 24 This kind of assessment is not required for every form of processing of personal data, but only for those “likely to result in a high risk to the rights and freedoms of natural persons”, particularly when new technologies are employed.Footnote 25 In particular, Art. 35(3) offers useful examples of cases in which the processing might require a DPIA:

  (a) a systematic and extensive evaluation of personal aspects relating to natural persons which is based on automated processing, including profiling, and on which decisions are based that produce legal effects concerning the natural person or similarly significantly affect the natural person;

  (b) processing on a large scale of special categories of data referred to in Article 9(1), or of personal data relating to criminal convictions and offences referred to in Article 10; or

  (c) a systematic monitoring of a publicly accessible area on a large scale.

With the adoption of dedicated guidelines,Footnote 26 the Art. 29 Working Party has further elucidated the notion of processing operations that are “likely to result in a high risk”, providing a more concrete set of criteria to be taken into consideration when determining the necessity of a DPIA:

  1. evaluation, scoring, profiling or predicting techniques requiring the processing of data subjects’ personal information concerning “the performance at work, the economic situation, health, personal preferences or interests, reliability or behaviour, location or movements”;Footnote 27

  2. processing operations aiming to take automated decisions about data subjects which produce legal effects or similarly affect the natural person in a significant way;Footnote 28

  3. processing operations used to monitor, observe or control data subjects in a systematic way, since, especially when information is collected in public (or publicly accessible) spaces, individuals might be unaware of being subjected to such processing and may be unable to avoid it;

  4. operations processing sensitive data or data of a highly personal nature, such as political opinions, biometric data, medical records or data concerning criminal convictions and offences;

  5. data processed on a large scale, where the number of data subjects concerned, the volume of data, the duration or permanence of the processing, and/or its geographical extent are contributing factors in assessing the high risk of the processing operation;

  6. operations processing data by matching or combining datasets in a way that might exceed the reasonable expectations of data subjects;

  7. processing operations concerning vulnerable data subjects, including children, employees, asylum seekers etc.; this kind of processing may increase the power imbalance between controllers and data subjects;

  8. processing operations requiring the innovative use of new technological or organisational solutions, such as the combined use of fingerprints and face recognition techniques for improved physical access control; indeed, the GDPR highlights that “the achieved state of technological knowledge” can lead to new forms of data collection and usage that may generate high risks to the rights and freedoms of individuals; and

  9. processing operations that prevent data subjects “from exercising a right or using a service or a contract”.Footnote 29

While the Art. 29 Working Party considers that, in most cases, a Data Protection Impact Assessment will be mandatory when two of these criteria are met, data controllers can decide whether a processing meeting only one of them requires a DPIA. In both cases, a data controller may still consider the processing not “likely to result in a high risk”, but should be able to justify and document the decision not to carry out a DPIA. A DPIA should be carried out “prior to the processing”; nonetheless, it should represent a continual process that is regularly reviewed and re-assessed. The controller should always seek the advice of the Data Protection Officer (DPO) and, where appropriate, the views of data subjects or their representatives. The controller is also required to consult the supervisory authority whenever sufficient measures to reduce the residual risks of the processing cannot be found. Finally, although this is not legally required by the GDPR, the controller can decide to publish the DPIA, fostering transparency and trust in its processing operations.
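The WP29 rule of thumb lends itself to a simple decision aid. The sketch below is a hypothetical illustration of that logic: the criterion names and the Processing structure are invented for the example and carry no legal weight.

```python
# Hypothetical sketch of the WP29 "two criteria" rule of thumb for deciding
# whether a DPIA is needed. Criterion names and the Processing dataclass are
# illustrative, not part of the GDPR or any library.
from dataclasses import dataclass, field

WP29_CRITERIA = {
    "evaluation_or_scoring", "automated_decision_with_legal_effect",
    "systematic_monitoring", "sensitive_data", "large_scale",
    "matched_or_combined_datasets", "vulnerable_data_subjects",
    "innovative_technology", "blocks_right_or_service",
}

@dataclass
class Processing:
    description: str
    criteria_met: set = field(default_factory=set)

def dpia_recommended(p: Processing) -> str:
    hits = p.criteria_met & WP29_CRITERIA
    if len(hits) >= 2:
        return f"DPIA required in most cases ({len(hits)} criteria met: {sorted(hits)})"
    if len(hits) == 1:
        return "Controller's judgment call: document the decision either way"
    return "DPIA likely unnecessary, but record the reasoning"

print(dpia_recommended(Processing(
    "CCTV with face recognition at a shopping mall",
    {"systematic_monitoring", "innovative_technology", "sensitive_data"},
)))
```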

3.4 Data breach notification

Although the DPIA has the characteristics of a well-consolidated risk management practice,Footnote 30 breaches of the security of personal data may still occur, resulting in a concrete threat to the security of data subjects as well as a practical challenge for data controllers. According to the GDPR, a personal data breach is defined as “a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data transmitted, stored or otherwise processed”.Footnote 31 While this definition implies that all personal data breaches derive from security incidents, the Art. 29 Working Party specifies in its guidelines that not all security incidents involve personal data breaches. According to Opinion 03/2014, personal data breaches can be divided into three—not mutually exclusive—categories:

  (a) “Confidentiality personal data breach”—consisting in the unauthorised or accidental disclosure of, or access to, personal data;

  (b) “Integrity personal data breach”—consisting in the unauthorised or accidental alteration of personal data;

  (c) “Availability personal data breach”—consisting in the unauthorised or accidental destruction of, or loss of access to, personal data.

When a personal data breach occurs, the controller should immediately activate all defensive measures—both operational and organisational—to mitigate and manage the crisis, including a potential notification to the supervisory authority and a communication to data subjects. Supervisory authorities and data subjects are often unaware of the occurrence of a personal data breach, which prevents them from taking action to protect themselves from its detrimental consequences. By imposing a notification requirement on controllers, the GDPR affirms the rights of individuals and limits the negative impact of a personal data breach. According to Art. 33(1) of the GDPR:

“in the case of a personal data breach, the controller shall without undue delay and, where feasible, not later than 72 hours after having become aware of it, notify the personal data breach to the supervisory authority competent in accordance with Article 55, unless the personal data breach is unlikely to result in a risk to the rights and freedoms of natural persons.”

Therefore, the obligation to notify a data breach is not automatic but derives from an analysis of the risks to the rights and freedoms of natural persons. Where the personal data breach is likely to result in a high risk, controllers should—in addition to notifying the supervisory authority—communicate the breach “without undue delay” to the exposed individuals as well. If controllers do not respect the timing of this obligation, supervisory authorities are entitled to apply all the available corrective measures: i.e., issue warnings, reprimands or fines, impose a temporary or definitive limitation on the processing, order the rectification or restriction of processing, withdraw certification or order the suspension of data flows.Footnote 32
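As a rough illustration of how this Art. 33/34 logic might be operationalised in an incident-response tool, the sketch below maps an assessed risk level to the resulting duties. The Breach structure, the risk scale and the field names are illustrative assumptions, not a compliance mechanism prescribed by the Regulation.

```python
# Hedged sketch of the breach-notification decision flow. The risk levels and
# the Breach dataclass are illustrative simplifications, not legal advice.
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum

class Risk(Enum):
    NONE = 0   # unlikely to result in a risk -> no notification duty
    RISK = 1   # notify the supervisory authority (Art. 33)
    HIGH = 2   # also communicate to data subjects (Art. 34)

@dataclass
class Breach:
    detected_at: datetime
    confidentiality: bool   # unauthorised disclosure of, or access to, data
    integrity: bool         # unauthorised or accidental alteration
    availability: bool      # accidental or unlawful destruction or loss
    assessed_risk: Risk

def obligations(b: Breach) -> list[str]:
    duties = []
    if b.assessed_risk in (Risk.RISK, Risk.HIGH):
        deadline = b.detected_at + timedelta(hours=72)
        duties.append(f"Notify supervisory authority by {deadline:%Y-%m-%d %H:%M}")
    if b.assessed_risk is Risk.HIGH:
        duties.append("Communicate the breach to affected data subjects without undue delay")
    duties.append("Document the breach and the reasoning internally (Art. 33(5))")
    return duties

print(obligations(Breach(datetime(2018, 5, 25, 9, 0), True, False, False, Risk.HIGH)))
```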

Further demonstrating how data protection legislation is converging with cybersecurity measures, the obligation to notify a personal data breach under the GDPR resembles—and is sometimes associated with—similar notification obligations under different legal instruments. Although varying between Member States, these requirements inter-relate with the GDPR and include:

  • Regulation (EU) 910/2014 on electronic identification and trust services for electronic transactions in the internal market (the eIDAS Regulation);Footnote 33

  • Directive (EU) 2016/1148 concerning measures for a high common level of security of network and information systems across the Union (the NIS Directive);Footnote 34

  • Directive 2002/58/EC concerning the processing of personal data and the protection of privacy in the electronic communications sector (the ePrivacy Directive);Footnote 35

  • Directive 2009/136/EC (the Citizens’ Rights Directive) and Regulation 611/2013 (the Data Breach Notification Regulation).

3.5 Privacy by design and privacy by default

As argued by Cavoukian, the “privacy by design” approach expresses the idea that privacy concerns have to be kept in mind from the initial design phase of technological systems processing data. In this way, “data protection needs to be viewed in proactive rather than reactive terms”,Footnote 36 making privacy considerations preventive and ex ante rather than remedial and ex post. With privacy as a default setting, data protection requirements are embedded into the architecture of information and communication technology, ensuring that personal data is automatically protected in any given system or business practice.

Although some elements of the “privacy by design” principle could already be found in the Data Protection Directive 95/46/EC,Footnote 37 Art. 25 of the GDPR for the first time codifies “data protection by design and by default” in positive law, including it among the general obligations of controllers and processors. According to the Regulation, controllers shall implement appropriate measures—both technical and organisational—taking data protection concerns into account not only when determining the means for processing and during the processing itself,Footnote 38 but also when developing, designing, selecting and using applications, services and products that are based on the processing of personal data or process personal data to fulfil their task.Footnote 39

With technologies becoming more convoluted and unintelligible, the burden of privacy compliance can hardly be borne by the average user. The “privacy by design” approach implies that a project’s design needs to be carried out with the final recipient of the technology in mind. As a user-centric methodology for data protection compliance, it incorporates the protection of individuals and their personal data into the requirements of the whole project lifecycle. Therefore, the Regulation favours the use of Privacy Enhancing Technologies (PETs)—i.e., ICT systems whose design is based from the outset on minimising the risks deriving from the misuse of personal data. According to this view, the development of ICT systems and services that integrate safeguards and implement data protection principles makes it possible to combat effectively threats to the security of individuals such as identity theft, fraud and discriminatory profiling.Footnote 40
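As a toy illustration of the “by default” idea, the sketch below defines a hypothetical settings object whose most privacy-protective options are the defaults, so that protection requires no action from the user. All field names and values are invented for the example.

```python
# Illustrative privacy-by-default sketch: the most protective options are the
# defaults, so a user who never touches the settings is still protected.
# Every field name here is hypothetical.
from dataclasses import dataclass

@dataclass
class ProfileSettings:
    share_location: bool = False     # off unless the user opts in
    public_profile: bool = False     # private by default
    analytics_opt_in: bool = False   # no tracking without explicit consent
    retention_days: int = 30         # shortest retention the service supports

# A freshly created profile already embodies the protective defaults.
print(ProfileSettings())
```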

3.6 Pseudonymisation

The concept of pseudonymisation is defined in Art. 4(5) of the GDPR as:

“the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately and is subject to technical and organisational measures to ensure that the personal data are not attributed to an identified or identifiable natural person”.

Pseudonymisation represents a well-established data protection and security practice which disassociates the identity of a data subject from the personal data being processed. This de-identification process replaces a particular set of characteristics relating to the data subject with so-called pseudonyms that do not allow the direct derivation of the original personal identifiers—i.e., the information or pieces of information that make identification possible.Footnote 41 Since it allows re-identification in some cases, pseudonymisation maintains an association between personal identifiers and pseudonyms, provided that the “additional information” necessary to reverse the process is secured. The GDPR puts considerable emphasis on securing this additional information, stating that controllers should separate it from the pseudonymised data—either logically or physically—and allowing the destruction of the association when the intention is to make the process irreversible.

The notion of pseudonymisation is often confused with that of anonymisation, leading to the common mistake of considering pseudonymised data as deserving the same level of protection as anonymous data. On the contrary, while anonymisation is a process that irreversibly alters personal data so that it can no longer be reconnected to the data subject, definitively removing the association between the identifying dataset and the identity of the data subject, pseudonymisation is based on the existence of this association.Footnote 42 The GDPR explicitly clears up this misinterpretation in Recital 26, where it states that pseudonymised data continues to be conceived of as personal data because it remains attributable to a natural person with the use of “additional information”. Indeed, pseudonymisation techniques start from a single input (the original dataset) and produce a pair of outputs (the pseudonymised dataset and the additional information) that can together re-establish the original input. While the pseudonymised dataset is a modified version of the original dataset in which the initial identifiers have been substituted with pseudonyms, the additional information provides the link between the pseudonyms and the identities of the data subjects. This decoupling of the original dataset into two parts means that the outputs can be related back to a specific data subject only in combination with each other. Therefore, all the relevant data protection principles foreseen by the Regulation apply both to indirect identifiers related to a data subject and to pseudonymised data.
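To make the decoupling concrete, here is a minimal pseudonymisation sketch in Python: the secret key and the pseudonym-to-identity map together play the role of the “additional information” and must be stored and controlled separately from the pseudonymised dataset. The HMAC-based approach and all names are illustrative assumptions; the GDPR does not mandate any specific technique.

```python
# Minimal pseudonymisation sketch: keyed (HMAC) pseudonyms, with the secret
# key and the pseudonym-to-identity map (the "additional information") kept
# apart from the pseudonymised dataset. Real deployments also need key
# management, access control and a documented re-identification policy.
import hmac, hashlib, secrets

SECRET_KEY = secrets.token_bytes(32)   # store separately from the dataset

def pseudonym(identifier: str) -> str:
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

records = [{"email": "alice@example.com", "diagnosis": "flu"}]  # invented data

additional_info = {}     # pseudonym -> original identifier; keep under separate controls
pseudonymised = []       # safe(r) working dataset
for r in records:
    p = pseudonym(r["email"])
    additional_info[p] = r["email"]
    pseudonymised.append({"subject": p, "diagnosis": r["diagnosis"]})

# Destroying SECRET_KEY and additional_info severs the association,
# moving the data towards anonymisation (irreversibility).
print(pseudonymised)
```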

Recognising the possible benefits of pseudonymisation, the Regulation refers to it fourteen times both as a data-protection-by-design mechanism and as a technique promoting the security of operations processing personal data:

  • Art. 25(1) considers pseudonymisation an appropriate technical and organisational measure for effectively implementing data protection principles and integrating the necessary safeguards into the processing. By favouring respect for the principles of necessity, data minimisation and data accuracy, as well as supporting the data protection goal of unlinkability (the reduction of personal data linked across different data processing domains), pseudonymisation techniques are a paradigmatic example of data protection by design.

  • Art. 32(1) considers pseudonymisation—as well as encryption—an appropriate technical and organisational measure for ensuring appropriate levels of security. By concealing the identity of the data subject and reducing the risks to the rights and freedoms of individuals, pseudonymisation enhances the security and integrity of personal data.

Finally, several commentators have pointed out that, within the provisions of the GDPR, pseudonymisation allows a certain degree of “relaxation” of some of the controller’s obligations.Footnote 43 Indeed, in five different sections of the Regulation it appears that:

  1. pseudonymisation may facilitate the processing of personal data beyond their original collection purpose;Footnote 44

  2. pseudonymisation may reduce the possibility of identifying individuals when a personal data breach occurs, decreasing the risk of harm to data subjects and feeding into the risk evaluation that determines whether the breach must be notified to data protection authorities;Footnote 45

  3. pseudonymisation represents a relevant safeguard for processing personal data for archiving purposes in the public interest, as well as for scientific or historical research purposes or statistical purposes;Footnote 46

  4. pseudonymisation may exempt controllers from the obligations to provide a data subject with access to data, rectification and erasure of data, restriction of processing or data portability;Footnote 47 and

  5. pseudonymisation may be considered a mitigating factor by supervisory authorities when calculating potential fines.Footnote 48

4 Conclusion

In conclusion, the article has demonstrated that the GDPR’s focus on security goes beyond Art. 32 on the “Security of Processing”. Not only does the Regulation include the security of personal data among its principles, but considerations about the security of individuals also underpin the risk evaluation on which the principle of accountability is founded. As concrete expressions of the principle of accountability and the risk-based approach, innovative data protection measures like the DPIA and data breach notifications show how the GDPR places emphasis both on the security of data subjects and on the security of processing operations. Through the preliminary assessment of risks to individuals and the consequent implementation of appropriate security measures, controllers are encouraged to plan in advance—aiming to prevent incidents—and to react in a timely manner whenever a privacy violation occurs. Finally, the employment of ICT solutions that—“by design”—integrate security measures implementing data protection safeguards has been prescribed as a way to ensure compliance with the obligations of the Regulation. By detaching data subjects from personal data and thereby minimising the risks to the rights and freedoms of individuals, the adoption of security measures like pseudonymisation may allow a “relaxation” of some of the data protection requirements of the GDPR.

Albeit theoretically grounded, the above discussion may have important implications for practitioners, which will need to be further analysed in future publications. Acknowledging that the proper implementation of data protection always requires data security measures, professionals should question whether data protection should be considered an area of responsibility and expertise separate from data security. Furthermore, recognising that data protection requires a combination of legal compliance and technical solutions for practical implementation, cooperation between Data Protection Officers and information security officers should be promoted and enhanced.Footnote 49