
1 Introduction

The data generated in connection with emerging technologies is valuable from a business management perspective for several reasons: the sheer amount of data produced, the ease of sharing information, the increasing processing power of available machines, the combinability of information, and the durability of information. As a result, sovereignty over data is accompanied by power and influence.

The mere storage of information enables companies to analyze the behavior of a user and utilize the behavioral data, for example, for cookie-based advertising on the Internet. By harnessing information about a user’s behavior, the advertising displayed on a site is tailored to the interests of that user. This increases the chance that the user feels addressed by the advertisement, which might lead to a business transaction. On the one hand, the evaluation of information by means of self-learning systems generates insights that were previously beyond human capabilities; on the other, it also leads to justified fears about the power that emanates from these decisions. An example of this is AI-controlled personnel selection procedures. Until 2018, Amazon tested a personnel selection tool which disadvantaged women and possibly encouraged other prejudices. The software was therefore never officially introduced (Sackmann 2018). The easy transferability of information between interested parties makes data misuse possible, because data is either passed on illegally or transferred so frequently that its origin can no longer be traced.

In recent years, numerous events have come to light that attracted media attention because data was misused, or its use did not correspond to the original purpose of its collection. This public attention has not always resulted from outrage over violations of the law; it has also reflected moral sympathy for the data transfer (e.g. Edward Snowden and WikiLeaks founder Julian Assange) or for the use of data, as in the case of Cambridge Analytica, which analyzed information about Facebook users and made it available for political advertising (Dachwitz 2020; Information Commissioner’s Office 2020).

The protection of personal data is becoming more important as digitalization and the ability to evaluate information have increased. It is precisely for this reason that data protection regulations have been tightened at international level in recent years. The question therefore arises as to whether these tougher rules slow down digitalization and which challenges data protection laws pose for emerging technologies.

2 Data Protection

Data protection is the definition of principles, conditions and limitations by governments and authorities in order to mandate a higher degree of privacy for personal data management; it includes penalties for violations in data flow and data processing (Jia et al. 2018, p. 3). These principles, conditions and limitations apply to personal data, but the definition of personal data varies across data protection efforts. For this section, personal data is data or information relating to an individual which enables the identification of this individual (Politou et al. 2018, p. 3).

An integral principle of data protection is educating the concerned parties about their rights in order to allow individuals to make informed decisions about the sharing of personal data and to empower them to decide on the usage of personal data by data processing companies. For these companies, data protection is considered a guideline for the planning and design of data processing activities (European Commission 2020). This results in a direct link between data protection and development activities for modern technology, as these technologies serve as instruments for the realization of data processing activities. In terms of processing activities, the concept of Privacy by Design should be utilized, as this modern principle for technology development demands the realization of data minimization, purpose limitation, transparency and control within development practices (Politou et al. 2018, p. 4).

In summary, data protection is a legal construct created by legislators to protect the data of an individual, and Privacy by Design is a design principle for software to foster the conscious handling of personal data. This leads to the question of why data protection and Privacy by Design are necessary.

In a data-driven society, data is perceived as a valuable asset due to the potential to create added value by collecting, analyzing and trading data for its commercial value (Politou et al. 2018, p. 3). In terms of personal data, traits, behaviors, footprints, and work and leisure habits are monetized by producing targeted products, services, financial offers, advertising and healthcare solutions (Jia et al. 2018, p. 2). This development is reinforced by technological progress, for example, Big Data or machine learning, which offer new dimensions of collecting and processing personal data in order to personalize services based on the profiles deduced from the collected data sets (Wachter 2018, p. 3).

The commercial monetization of data by companies exploits the privacy of individuals through increased accessibility, re-identification, secondary use, exclusion and decisional interference. As a result, the individual is both a consumer of the goods, information and services manufactured by these companies and a producer of valuable data for them (Jia et al. 2018, p. 2).

As personal data is a core element of technology-driven innovations from the perspective of companies, the individual as a producer of data tends to lose control and part of their independence and maturity, for example, by distributing personal data across the Internet in a decentralized manner without being aware of the data processing procedures companies use to derive the commercial value of that data. The inferential analysis and linkage of disparate records conducted by these procedures can result in discriminatory treatment of the individual or influence the formation of an opinion (Politou et al. 2018, p. 4; Wachter 2018, p. 3).

In a survey researching the usage and perception of social media, 91% of the respondents reported feeling a loss of control over personal data collected and monetized by social media networks. In the same survey, 61% of the respondents expressed the need for more privacy protection mechanisms (Rainie 2018). Privacy focuses on the protection of the personal space of an individual, and, in broader terms, the principles of data protection can contribute to privacy. However, the principles of data protection focus on the processing of personal data, not on the privacy of the individual itself (Politou et al. 2018, p. 2). This focus is interpreted differently by national legislations: the scope of the concept of personal data, the conditions for legitimate data processing and the extent of penalties differ according to national data protection legislation. Section 3, therefore, offers a review of a selection of data protection regulations.

3 Instances for Data Protection Regulation

Due to globalization and digitalization, approaches to data protection cannot be limited regionally. Nevertheless, there are different legislations which interpret personal data and its protection differently. In order to clarify how data protection affects digitalization, Sect. 3 provides a brief comparison and highlights special features.

3.1 EU—General Data Protection Regulation (GDPR)

The General Data Protection Regulation, also called “EU GDPR” or “EU Data Protection Basic Regulation,” defines the circumstances under which personal data of EU citizens can be collected and processed. This regulation came into force on May 25, 2018 and is binding for all companies and institutions working with personal data of EU citizens. It is important to point out that the regulation applies not only to companies and institutions located within the EU, but also to companies elsewhere that collect or process data of EU citizens (GDPR, General Data Protection Regulation 2016).

The goal of this regulation is to guarantee the protection of personal data for all EU citizens and to ensure a uniform and free movement of data within the EU. To this end, the EU GDPR specifies seven general principles for the processing of personal data (GDPR Article 5, General Data Protection Regulation 2016):

  • Lawfulness, fairness, transparency: Personal data will be processed only in the way and to the extent agreed upon at the initial collection. The responsible party collecting the data must act in full transparency and disclose their identity. The data owner has the right to obtain disclosure of their data at any time.

  • Purpose limitation: The purpose of the data processing must be determined prior to the processing and must be clear and lawful. Use for a different purpose is only permitted if there is a legal basis for it. Consequently, the data may only be disclosed if the data owner has given their consent or if a legal justification applies.

  • Data minimization: Only data that is adequate, relevant and limited to what is necessary for the purpose should be collected.

  • Accuracy: The data collected must be accurate and kept up to date. Incorrect data must be corrected or deleted.

  • Storage limitation: Data must be deleted as soon as the goal of the data collection has been fulfilled; it may be retained only for as long as that purpose requires. Furthermore, everyone can claim the “right to be forgotten,” meaning the data owner can order the data processing company to delete the related personal data.

  • Integrity and confidentiality: All stored data must be treated confidentially and protected against unauthorized processing, for example, by persons or parties not entitled to access it. To this end, the company must implement both technical and organizational measures.

  • Accountability: The data controller must demonstrate to the public and the relevant authorities that it fully complies with all data collection regulations.

In addition to the seven general principles of personal data processing, there are also principles that apply to the consent to data processing. The approval of data processing is a central element of the EU GDPR. A general prohibition with reservation of permission applies here: without the prior consent of the data owner or another legal permission, storage and processing are not allowed. In the event of non-compliance, sanctions of up to 4% of a company’s annual global turnover can be imposed, compliance being reviewed by established authorities within the EU countries (GDPR, General Data Protection Regulation 2016).
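To illustrate how this prohibition principle can be reflected in software, the following minimal Python sketch refuses any processing step that is not covered by a recorded legal basis for the declared purpose. All names and structures are hypothetical and not taken from the GDPR or any real system:

```python
from dataclasses import dataclass

# Hypothetical sketch of the prohibition principle: personal data may not
# be processed unless a recorded legal basis covers the declared purpose.

@dataclass(frozen=True)
class LegalBasis:
    kind: str     # e.g. "consent" or "contract" (cf. Article 6(1) GDPR)
    purpose: str  # the purpose declared when the data was collected

class ProcessingForbidden(Exception):
    """Raised when no recorded legal basis covers the requested purpose."""

def process(record: dict, purpose: str, bases: list) -> dict:
    # Processing is forbidden by default; it proceeds only if some
    # recorded basis explicitly names this purpose.
    if not any(b.purpose == purpose for b in bases):
        raise ProcessingForbidden(f"no legal basis for purpose: {purpose}")
    return record  # actual processing would follow here

# Consent was recorded for order fulfilment only, so reuse of the same
# record for advertising would raise ProcessingForbidden.
bases = [LegalBasis(kind="consent", purpose="order_fulfilment")]
process({"name": "A. Sample"}, "order_fulfilment", bases)   # permitted
# process({"name": "A. Sample"}, "advertising", bases)      # would raise
```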

3.2 USA—California Consumer Privacy Act (CCPA)

In contrast to Germany (“BDSG-neu”) and the EU (GDPR), there is no uniform data protection law in the USA. There are regulations for individual sectors such as trade or healthcare, and these regulations sometimes differ from one state to another. One such state-level law is the California Consumer Privacy Act (CCPA), which came into force in California on January 1, 2020 (CCPA, State of California 2020).

Companies and service providers bound by the CCPA must respect the rights of consumers. The most important point is that the law binds companies to inform consumers about the collection of personal data and their rights and opportunities. The CCPA must be complied with by all companies, service providers and revenue-generating entities that collect personal information and operate in California. The location of the company’s registered office is irrelevant. In addition, one of the following three points must be fulfilled for the law to take effect (CCPA, State of California 2020):

  • The annual revenue (gross) is greater than 25 million US dollars

  • Per year, personal information is gathered from more than 50,000 customers, households or networked devices in California and used for commercial purposes

  • Half or more of the annual revenue comes from the sale of personal information from Californian customers, households or network-enabled devices.

In particular, the CCPA grants consumers three crucial data subject rights:

  • Right of access: Consumers may obtain information about the storage and use of their data by exercising their right of access and information. In doing so, they can find out which data has been collected, what it has been used for, and which third parties have received which data (CCPA Section 1798.100, State of California 2020; CCPA Section 1798.110, State of California 2020; CCPA Section 1798.115, State of California 2020).

  • Right of deletion: The CCPA provides consumers with the “right to be forgotten,” whereby individuals can have all data that has been collected by a company or service provider deleted. There may be certain exceptions to this rule, which are set out in paragraph d (1–9) (CCPA Section 1798.105, State of California 2020).

  • Right of opt-out: Consumers are offered the chance to object to the sale of their personal data to third parties. In addition, the disclosure of the data to third parties can be refused (CCPA Section 1798.120, State of California 2020).

The affected parties include all California residents and customers from around the world, as it also affects California-based companies collecting customer data outside of California.

3.3 Canada—Privacy Act

The Privacy Act in Canada came into effect on July 1, 1983 and governs how Canadian government institutions handle personal information about Canadian citizens. The main points of the Privacy Act are as follows (Privacy Act, Canada 2020):

  • A government institution may only collect personal information if it is necessary for the performance of the institution’s functions; otherwise, no personal information may be retained (Privacy Act—Section 4, Canada 2020).

  • An institution must inform the data owner about the purpose of the data collection; there are sometimes exceptions (Privacy Act—Section 5(2), Canada 2020).

  • Personal data may only be used for the purpose for which it was collected. If a different use is intended, the permission of the data owner must be obtained (Privacy Act—Section 7, Canada 2020).

  • Personal data must be treated confidentially and may not be disclosed without the permission of the data owner (Privacy Act—Section 8, Canada 2020).

  • Every Canadian citizen and permanent resident of Canada has the right to access and, if necessary, correct information under the control of governmental institutions (Privacy Act—Section 12, Canada 2020).

  • In addition, there is a Privacy Commissioner in Canada who deals with complaints from Canadian citizens regarding their data (Privacy Act—Section 29, Canada 2020).

3.4 Japan—Act on the Protection of Personal Information (APPI)

The Act on the Protection of Personal Information (APPI) came into force on April 1, 2005. On May 30, 2017, the APPI was fundamentally revised and reformed in order to adapt Japan’s data protection guidelines to international standards. There are also a number of supplementary guidelines published by the Personal Information Protection Commission (PPC), which affect the general public or specific sectors (finance, medical, employment and telecommunications) (APPI, Japan 2016).

The APPI applies to anyone who controls and processes personal data, both private individuals and legal entities. In this context, the individuals or organizations must use the personal data for activities that serve the purpose of the company; it is not mandatory that the company or organization acts for profit. The press, politics, religion and publishing sectors are partially exempted from the APPI (APPI, Japan 2016).

The PPC is the primary regulatory authority in connection with the APPI. Its tasks include the following (APPI, Japan 2016):

  • It must ensure that privacy and the interests of the individual are protected when personal information and data are handled.

  • It provides information to foreign data protection authorities and, to a limited extent, may provide information for police investigations by foreign government agencies.

  • It has the primary investigative, advisory and instructional authority under the APPI. This includes investigating the activities of personal information controllers (PICs), including their procedures for anonymizing personal data (APPI Section 4), and, if necessary, giving advice or instructions if personal data is not handled in the manner expected under the APPI.

The authority to initiate investigations in the context of data protection may, under certain circumstances, be delegated to competent ministers, with the exception of advisory and executive powers.

3.5 Summary and Brief Comparison

In summary, this section has reviewed the data protection regulations of different nations. The guidelines of the European Union (GDPR), the United States of America (CCPA), Canada (Privacy Act) and Japan (APPI) were selected, examined and compared with each other, and their special features were highlighted.

A brief comparison of the different guidelines reveals several differences. The first point that stands out is that Canada’s Privacy Act already came into force in 1983, whereas other directives, such as the GDPR or the CCPA, are significantly younger or, like the APPI, have been substantially revised only recently. Furthermore, the guidelines differ in jurisdiction and rigor. For example, the Privacy Act protects the data of Canadian citizens, whereas the GDPR and the CCPA reach much further.

In addition to these differences, there are also points on which the various directives are very similar. One example is the “right to be forgotten.” Furthermore, companies and institutions must disclose the personal data collected about individuals, and this data may have to be changed or adjusted if it is incorrect. In some cases, individual guidelines contain further special regulations and case distinctions; a full treatment of these special cases is beyond the scope of this chapter.

The following section focuses on the impact of data protection on the business sector and technological progress. Particular attention is paid to data processing, for example, in relation to software development or the testing of software, as well as to the transfer of data. Furthermore, limitations in connection with emerging technologies and the retention of data are considered in relation to the “right to be forgotten.”

4 Impact of Data Protection Regulations on Businesses and Technological Progress

The data protection regulations of governments and public institutions aim to protect people from having their data made available to unauthorized persons or misused, rather than to protect data itself (as the term would suggest). The regulations focus on protection against unlawful processing of personal data, on promoting transparency in data processing (“Where does the data come from? Who uses it and for what purpose?”) and, due to the sometimes very high penalties (as is the case for the European GDPR), they ensure conduct in accordance with the rule of law. Data protection laws help to enforce the principle of “informational self-determination” by authorizing the data owner to determine the data collector and the scope of the data processing.

Against this background, data protection regulations are of great importance to consumers and the protection of their personal data. Imagine what a world without data protection laws would look like, where everyone is exposed to arbitrariness (even by government authorities), cannot object to the misuse of their data and has no right to have incorrect data deleted. On the other hand, this necessary protection of consumers’ data implies that companies are not allowed to utilize it in the most profitable way. This leads us to the challenges companies are facing with regard to processing data in compliance with the law. The following explanations always refer to the European GDPR, which was briefly presented in Sect. 3.1.

4.1 Challenges for Data Processing due to Purpose Restrictions

4.1.1 Software Development and Testing

As already mentioned, the stored personal data must be used for a specific purpose. The use of personal data for purposes the data owner is not aware of or has not expressly agreed to is not permitted (with the exception of legally based purposes). This has concrete consequences for the use of data for test purposes. For instance, if the proper functioning of a new database for customer relationship management (CRM) or a recruiting application is to be checked during development or test operation, it must not contain any data that can be assigned to specific persons. If data from real persons (i.e. from the real database) is used, this is considered improper use of data (the customers have not explicitly provided their data for testing purposes nor does a legally based purpose apply).

Test scenarios must be as realistic as possible to fulfill their purpose, for example, to determine whether data from one system arrives correctly in another system via a new interface. Sometimes, test databases with pseudo-entries are used for this purpose, but creating them costs considerable time and effort, and they usually do not meet the testers’ flexible requirements. Corresponding data fields such as e-mail addresses must still be recognizable as e-mail addresses in order for the business application to work without entering an error state. Certain information must also be linked to the data, for example, account numbers or customer identification numbers. It is possible to use real data for test purposes under very strict conditions. However, this requires, among other things, that the test data is deleted immediately afterward. The tests would then no longer be traceable, which would contradict recognized auditing principles. A different procedure should therefore be chosen, whereby the data is modified in such a way that the original persons can no longer be traced, but the data remains suitable for the test procedure.

One possibility is the pseudonymization of data, i.e. personal data can only be assigned to the data subject by means of additional information, which is stored separately (see the definition of “pseudonymization” in Article 4(5) GDPR). However, the pseudonymization of data does not prevent this data from being considered personal data according to Article 4(1) GDPR. In this paragraph, personal data is defined as “any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier.” Thus, even indirect identification (such as via a reference table in which a name is assigned to the ID of the first table) does not prevent it from being personal data subject to the GDPR. But pseudonymization reduces the risk of disclosure and misuse of personal data and is a common way to deal with the restrictions implied by the GDPR (European Parliament and Council of the European Union 2017).
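As a minimal sketch of how such pseudonymization might look in test-data preparation, the following Python example replaces direct identifiers with deterministic tokens while keeping e-mail fields syntactically valid, so linked records and format checks keep working. The secret key, field names and `@test.example` domain are invented for illustration; the re-identification information (key and lookup table) would be stored separately from the test system, as Article 4(5) GDPR describes:

```python
import hashlib
import hmac

SECRET_KEY = b"stored-separately-from-the-test-system"  # illustrative only

def pseudonym(value: str) -> str:
    # Deterministic pseudonym: the same input always yields the same token,
    # so links between records (e.g. customer IDs) survive pseudonymization.
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:12]

def pseudonymize_customer(row: dict, lookup: dict) -> dict:
    token = pseudonym(row["customer_id"])
    lookup[token] = row["customer_id"]  # re-identification table, kept apart
    return {
        "customer_id": token,
        # Keep a syntactically valid address so mail-handling code still works.
        "email": f"{pseudonym(row['email'])}@test.example",
        "order_total": row["order_total"],  # non-identifying fields unchanged
    }

lookup = {}
test_row = pseudonymize_customer(
    {"customer_id": "C-10042", "email": "jane@example.com", "order_total": 99.9},
    lookup,
)
print(test_row)  # pseudonymized record, usable in the test database
```

Note that, as explained above, such data remains personal data in the sense of Article 4(1) GDPR as long as the lookup table exists; the sketch reduces risk, it does not anonymize.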

4.1.2 Restrictions for Data Trading

The purpose-limited use of personal data in accordance with Article 6 of the GDPR poses a challenge for emerging technologies in the area of data trading. The data collected about a person and their behavior is, on the one hand, useful to the company collecting the data and, on the other, a valuable resource for other companies. This is because data in itself has a value which can be enhanced by combining it with information collected by third parties. It is an intangible asset that can be sold for a profit.

For example, it can be useful for a company to combine the data collected about its customers with information from third parties, such as social networks. In this way, the interests of the users can be analyzed in detail and utilized for target-group-specific offers. Conversely, the company can sell data about its customers to third parties.

These business interests are clearly in opposition to the regulations of the GDPR. According to Article 6 of the GDPR, it must be proven that the data is processed lawfully. For this purpose, at least one of the conditions below must be met, and compliance with it must be demonstrable.

  • “The data subject has given consent to the processing of his or her personal data for one or more specific purposes;

  • processing is necessary for the performance of a contract to which the data subject is party or in order to take steps at the request of the data subject prior to entering into a contract;

  • processing is necessary for compliance with a legal obligation to which the controller is subject;

  • processing is necessary in order to protect the vital interests of the data subject or of another natural person;

  • processing is necessary for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller;

  • processing is necessary for the purposes of the legitimate interests pursued by the controller or by a third party, except where such interests are overridden by the interests or fundamental rights and freedoms of the data subject which require protection of personal data, in particular where the data subject is a child.”

Article 7(1) adds: “Where processing is based on consent, the controller shall be able to demonstrate that the data subject has consented to processing of his or her personal data.” Due to these strict restrictions, data can only be traded if the person from whom the data originates has expressly consented. A company can hope for the naivety of customers who give their consent without being aware of the scope and consequences. Or it can lure customers with a promise that the data trading is in some way useful or offers additional value (and the customer is convinced that this added value is worth their consent!). However, no matter by what means or under what conditions this consent is obtained, it is always required. This makes data trading considerably more difficult, which limits or in some cases completely excludes certain business models based around Big Data.
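One hedged sketch of how a controller might make consent demonstrable in the sense of Article 7(1) is a consent ledger that records the scope, exact wording and time of every grant and withdrawal; the class and field names below are invented for illustration:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    subject_id: str
    purpose: str                      # e.g. "sale_to_third_parties"
    wording: str                      # the exact text the subject agreed to
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None

class ConsentLedger:
    """Records every grant and withdrawal so consent can be demonstrated."""

    def __init__(self):
        self._records = []

    def grant(self, subject_id, purpose, wording):
        self._records.append(ConsentRecord(
            subject_id, purpose, wording, datetime.now(timezone.utc)))

    def withdraw(self, subject_id, purpose):
        for r in self._records:
            if (r.subject_id, r.purpose) == (subject_id, purpose) \
                    and r.withdrawn_at is None:
                r.withdrawn_at = datetime.now(timezone.utc)

    def may_share(self, subject_id, purpose):
        # Trading the data is lawful only while an unwithdrawn grant exists.
        return any((r.subject_id, r.purpose) == (subject_id, purpose)
                   and r.withdrawn_at is None for r in self._records)

ledger = ConsentLedger()
ledger.grant("C-10042", "sale_to_third_parties", "I agree that ...")
print(ledger.may_share("C-10042", "sale_to_third_parties"))  # True
ledger.withdraw("C-10042", "sale_to_third_parties")
print(ledger.may_share("C-10042", "sale_to_third_parties"))  # False
```

Any data trade would then be gated on `may_share()`, and the stored records double as the evidence the controller must be able to produce.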

4.2 Restrictions for Using Artificial Intelligence (AI) and Algorithm-Based Decision-Making

The progress that systems using algorithm-based decision-making have made in recent years is enormous. This is mainly due to the sharp drop in the cost of computing power. However, it is still not predictable whether and when a complete digitalization of human thinking (so-called “strong artificial intelligence”) will be possible. Nevertheless, so-called “weak artificial intelligence,” i.e. the mapping of individual aspects of human intelligence, has made great progress. This includes, for example, knowledge databases which not only collect and store information but also independently link and interpret it. It also includes the recognition, analysis and prediction of action patterns expressed, for example, in automated recommendations (“Other customers have also expressed interest in the following articles: …”).

However, data protection regulations clearly limit the possibilities for using artificial intelligence for decision-making. Article 22(1) GDPR grants data subjects the right not to be subject to decisions based solely on automated processing, including profiling, where the decision produces legal effects concerning them or similarly significantly affects them. There are some exceptions to this general requirement, such as the explicit consent of the data subject or the fact that automated decision-making is necessary for the conclusion or performance of a contract.

This requirement is accompanied by the provisions of Article 15, which give a data subject the right to request information about the processing of their personal data. The right to information expressly includes information about automated decision-making processes, i.e. an explanation of how the algorithm comes to its result. In doing so, the company must provide information about “the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.” Of particular importance here is the fact that the right of access refers to the processing logic of the data. But this processing logic is precisely the black box: in most cases, it is not comprehensible how the algorithm arrived at a certain decision.

It is obvious that the legislator wants to protect data subjects from becoming victims of unexplainable decisions made by a machine. As long as the decision-making process or the decisive factors influencing it cannot be understood, the right of information according to Article 15 GDPR remains an uncertainty factor that cannot be remedied.
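One way to reduce this uncertainty is to prefer inherently interpretable models where automated decisions affect data subjects. The following Python sketch, with invented weights and features, shows a linear scoring decision that records each factor’s contribution, so that a controller could answer an Article 15 request about “the logic involved”:

```python
# Hypothetical interpretable scoring model: a weighted sum whose individual
# contributions are logged, so the decision can be explained to the data
# subject. The weights, features and threshold are invented.

WEIGHTS = {"income_k_eur": 0.8, "years_at_address": 0.5, "open_loans": -1.2}
THRESHOLD = 40.0

def decide(applicant: dict):
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    # Human-readable explanation, one line per factor.
    explanation = [f"{f}: {applicant[f]} x {WEIGHTS[f]:+.1f} = {c:+.1f}"
                   for f, c in contributions.items()]
    explanation.append(f"total score {score:.1f} vs. threshold {THRESHOLD}")
    return score >= THRESHOLD, explanation

approved, why = decide({"income_k_eur": 55, "years_at_address": 4, "open_loans": 2})
print("approved" if approved else "declined")
print("\n".join(why))
```

A deep neural network trained on the same task might score better, but it could not produce the per-factor explanation above, which is exactly the trade-off such regulation forces companies to weigh.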

4.3 Impacts of the Storage Limitations and the “Right to be Forgotten”

At the beginning of the computer age, data storage was expensive, which resulted in the storage of only necessary data. Today, data can be stored almost indefinitely and retrieved at any time. In order to forecast future developments or track past ones, it is in the interest of companies to store data for as long as possible. But the GDPR puts strict limits on these almost unlimited storage possibilities.

According to Article 5(1) GDPR, data storage is permitted “for no longer than is necessary for the purposes for which the personal data is processed.” There are a few legal exceptions, such as the public interest or data storage for scientific and historical research purposes. In all other cases, this principle of the GDPR strictly regulates the duration of data storage. The requirement is complemented by the provision in Article 13(2) GDPR: “The controller shall, at the time when personal data is obtained, provide the data subject with the following further information necessary to ensure fair and transparent processing: (a) the period for which the personal data will be stored, or if that is not possible, the criteria used to determine that period…”.

Limiting the time during which data may be stored (after which it must be deleted!) is a great challenge. Every piece of information has a certain lifespan; for a customer transaction, for example, there is no longer any need for storage once the business has been fulfilled. However, there are still legal retention periods for invoices, letters and other business documents, and data on business transactions that have already been successfully concluded with the customer can be useful for future business negotiations. So where do you set the limit? The requirement of limiting data storage raises numerous questions during practical implementation, such as: Which criteria are suitable to determine the point in time when the purpose of the data storage is fulfilled and the data must therefore be deleted? How should data backups be handled that are geographically distributed across other countries or continents? How can the data to be deleted be identified if it has become part of risk models (e.g. data on the payment behavior of customer groups used for flat-rate credit assessments)? In order to answer these questions, an acceptable balance must be struck between compliance with the law and the advantage of utilizing personal data during its lifecycle within the company. This can also be perceived as an opportunity for new systems and models that provide retention and deletion features by default.
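As a sketch of how the storage limitation could be operationalized, the following Python example tags every record with the purpose of its collection and derives a deletion deadline from a purpose-specific retention rule. The retention periods and field names are purely illustrative, not legal advice:

```python
from datetime import date, timedelta

# Illustrative retention rules: real periods depend on the purpose and on
# statutory duties (e.g. commercial-law retention periods for invoices).
RETENTION = {
    "invoice": timedelta(days=365 * 10),
    "marketing_profile": timedelta(days=365 * 2),
}

def retention_deadline(purpose: str, collected_on: date) -> date:
    return collected_on + RETENTION[purpose]

def purge_expired(records: list, today: date) -> list:
    # Storage limitation: keep a record only while its deadline lies ahead.
    return [r for r in records
            if retention_deadline(r["purpose"], r["collected_on"]) > today]

records = [
    {"id": 1, "purpose": "invoice", "collected_on": date(2013, 3, 1)},
    {"id": 2, "purpose": "marketing_profile", "collected_on": date(2024, 6, 1)},
]
print(purge_expired(records, date.today()))  # the 2013 invoice is dropped
```

The hard part, as the questions above show, is not the deletion job itself but deciding which retention rule a given record falls under.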

A related requirement concerns the right of a data subject to be forgotten, which is enshrined in Article 17 of the GDPR: “The data subject shall have the right to obtain from the controller the erasure of personal data concerning him or her without undue delay and the controller shall have the obligation to erase personal data without undue delay where one of the following grounds applies: …”. Probably the most common reasons for immediate deletion of data include points b) and c) of Article 17(1) GDPR, which specify the following:

“(b) the data subject withdraws consent on which the processing is based according to point (a) of Article 6(1), or point (a) of Article 9(2), and where there is no other legal ground for the processing;

(c) the data subject objects to the processing pursuant to Article 21(1) and there are no overriding legitimate grounds for the processing, or the data subject objects to the processing pursuant to Article 21(2).”

A simple written declaration is therefore sufficient to object to the data’s further processing or to demand the deletion of data.

The challenges in practical implementation also apply to the “right to be forgotten,” e.g.: How can personal data be removed from backups? Can legally sound reasons be given that allow further, temporary storage of data in backups, e.g. until the backup cycle has expired? How to deal with data that has become part of risk models?

Digitalization must provide legally compliant answers to these questions and the processes must take the requirements of the GDPR into account.
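For the backup question in particular, one technique discussed in practice, although not prescribed by the GDPR, is crypto-shredding: each data subject’s records are encrypted with an individual key that is kept outside the backup set, so destroying that one key renders every copy of the data, including old backups, permanently unreadable. The following minimal sketch uses the Fernet primitive of the Python `cryptography` library; the key store and record layout are invented:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Crypto-shredding sketch: one key per data subject, held outside the
# backup set. Deleting the key makes all encrypted copies unreadable,
# wherever they have been replicated.

key_store = {}  # subject_id -> key; this store is excluded from backups

def store(subject_id: str, payload: bytes) -> bytes:
    key = key_store.setdefault(subject_id, Fernet.generate_key())
    return Fernet(key).encrypt(payload)  # ciphertext may go into backups

def forget(subject_id: str) -> None:
    # "Erasure" of every backup copy at once: destroy the only key.
    key_store.pop(subject_id, None)

blob = store("C-10042", b'{"name": "Jane"}')
forget("C-10042")
# The key is gone: `blob`, and every backup that contains it, can no
# longer be decrypted by anyone.
```

Whether deleting a key legally counts as erasure under Article 17 is still debated, but the approach at least resolves the technical impossibility of rewriting old backup media.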

One way to take these requirements into account is to use the concept of “privacy by design,” which data processing companies are also required to do under Article 25(1) GDPR. This concept comprises seven principles which aim to ensure that data protection regulations are observed in the best possible way as early as the hardware and software development phase.

The first three principles should be mentioned by way of example, as they provide a good impression of the objective of Privacy by Design (Cavoukian 2011):

  • “Principle 1—Proactive not Reactive; Preventative not Remedial: The Privacy by Design approach is characterized by proactive rather than reactive measures. It anticipates and prevents privacy invasive events before they happen. Privacy by Design does not wait for privacy risks to materialize, nor does it offer remedies for resolving privacy infractions once they have occurred − it aims to prevent them from occurring. In short, Privacy by Design comes before-the-fact, not after.

  • Principle 2—Privacy as the Default Setting: We can all be certain of one thing — the default rules! Privacy by Design seeks to deliver the maximum degree of privacy by ensuring that personal data are automatically protected in any given IT system or business practice. If an individual does nothing, their privacy still remains intact. No action is required on the part of the individual to protect their privacy—it is built into the system, by default.

  • Principle 3—Privacy Embedded into Design: Privacy by Design is embedded into the design and architecture of IT systems and business practices. It is not bolted on as an add-on, after the fact. The result is that privacy becomes an essential component of the core functionality being delivered. Privacy is integral to the system, without diminishing functionality.”
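Principle 2 in particular translates directly into code: a newly created account starts with the most protective settings, and any relaxation requires an explicit user action. The following Python sketch illustrates this with invented setting names:

```python
from dataclasses import dataclass

# Privacy as the default setting (Principle 2): a freshly created profile
# shares nothing; every relaxation requires an explicit opt-in call.

@dataclass
class PrivacySettings:
    profile_public: bool = False
    share_with_partners: bool = False
    personalized_ads: bool = False
    analytics_tracking: bool = False

    def opt_in(self, setting: str) -> None:
        if not hasattr(self, setting):
            raise ValueError(f"unknown setting: {setting}")
        setattr(self, setting, True)  # only an explicit call relaxes a default

settings = PrivacySettings()          # doing nothing keeps privacy intact
settings.opt_in("personalized_ads")   # data use begins only after an opt-in
```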

The consideration of the seven principles of Privacy by Design causes additional effort in software development, which of course affects the costs. Whether the consideration of these principles in software development impedes technological progress can be discussed controversially. In any case, these principles restrict companies in the collection of user information and thus also prevent evaluations of this information. But such clear guidelines are in the interest of citizens, who have a right to and need for the protection of their informational self-determination.

5 Summary and Outlook

In general, data protection regulations aim, on the one hand, to protect the informational self-determination of individuals by applying regulations to the collection, processing and storing of personal data and, on the other, to inform individuals about their rights in order to enable them to make informed decisions on sharing personal data. The European Commission, for example, stated in a 2020 review of the effects of the GDPR that the awareness of individuals in the European Union regarding data protection rights has increased since the GDPR took effect (European Commission 2020, p. 8).

For companies, data protection regulations offer stringent guidelines on dealing with personal data. These guidelines entail changes to the business models of companies and require assessments of data processing activities, including utilizing emerging technologies to comply with the applicable data regulation. For emerging technologies like Big Data or artificial intelligence, companies need to invest manpower and resources in order to adjust the procedures, processes and algorithms collecting, processing and storing personal data. These adjustments result in an increase in labor costs for monitoring algorithms or adjusting existing solutions (Li et al. 2019, p. 2ff).

The impact of these implications must be monitored in order to determine whether companies with a business model relying on emerging technologies, and companies situated in nations with strict data protection regulations, face major disadvantages compared with competitors to which such data protection obligations do not apply.

In Singapore, for example, a government subsidy program enables banks to gain full insight into their customers’ financial situation, and the central availability of data enables banks to tap into a previously under-served market. The data is collected via the banks’ mobile applications and requires the annual consent of the customers. Due to the stricter data protection laws in the European Union, a complete analysis of customer data by banks, enabling them to offer additional business, is only possible with the customer’s consent to each individual activity. Although a customer’s principal bank manages and processes their account movements, it has no insight into the reasons for a transaction or into the sender and recipient. For this reason, European banks see companies entering the market from customer-oriented branches of the value chain as serious competition. Through the data generated from non-banking-related business transactions with customers, these companies have a more detailed view of their customers (Hein et al. 2020, p. 23). The GDPR and new market entrants are forcing European banks to rethink their business models in order to gain insight into the available customer data and beyond through new offers. This development shows that not only companies from the technology sector are affected by the regulations, but also companies that want to integrate these technologies into their business models.

For companies servicing markets where different data protection standards apply, the question arises as to which standards a service will be tailored to or whether it is wise to no longer provide a service in specific markets.

In addition to these implications, the current market situation offers a competitive advantage to companies with data-protection-compliant solutions, as individuals and companies relying on the provision of services are willing to spend more on services or solutions complying with data protection standards (Capgemini Research Institute 2018, p. 22).