Precision Medicine

Definition, Delimitation and the Translational Turn

Precision medicine grounds the diagnosis, treatment and prevention of diseases on variability in genes, environment and lifestyle (Jonsson & Stefansdottir, 2019). To achieve this grounding, it aims to obtain and integrate genotypic and phenotypic information at the molecular, physiological, environmental-exposure and behavioural levels (Goetz & Schork, 2018).

The term personalised medicine is often used synonymously, although there is an important difference. Personalised medicine implies inter-individual variation in disease processes and the tailoring of medical interventions to unique characteristics revealed by genomic investigations, clinical information and real-world data at the individual level (Joyner & Paneth, 2019). In contrast to such a granular understanding, the term precision medicine focuses on stratification into subgroups or subpopulations for the purpose of targeted, i.e. precise, interventions (Kao, 2018). While stratification is not a new method of diagnosis, treatment and prediction, the scale and speed of stratified medicine have increased dramatically in recent times (Batten, 2018) due to the amount of available high-resolution and longitudinal data and the transformative technologies for its analysis and interpretation. Stratification through precision relies on an all-inclusive, complex and systemic assessment of health and disease (Auffray et al., 2009), lately further developed into a network-based systems paradigm (Tan et al., 2019). An evidence-based practice of systems medicine has been called for in order to promote the transfer of precision medicine results into healthcare (Beckmann & Lew, 2016). Paired with a stronger focus on information influencing health interrelated with genomic data, such as lifestyle, environments and communities, precision public health expands from individualised treatment to the broadest stratification, supporting health inferences and interventions at the population level (Juengst & Van Rie, 2020; Khoury et al., 2016; Meagher et al., 2017).

Related to precision medicine, the term translation describes the transfer of knowledge (Mandal et al., 2017) about disease mechanisms gained in the laboratory to clinical practice and health-related decision-making, public health and corresponding policies, and vice versa, thereby improving methods of diagnostics, therapeutics and prevention (Seyhan, 2019; Hunt, 2018; Petrini, 2011; Webb & Pass, 2004). The boundary between research on the one hand and clinical treatment and care on the other thus becomes blurred. As a consequence, precision medicine is systemic not only in that it connects individual and public health levels through stratification, but also in that it encompasses ethical constraints and moral issues within both research and healthcare contexts, which have until now been separated in their significance as targets of policy application and in the ways they are handled within the corresponding fields of governance and regulation. Ultimately, this creates the need to respect a translational turn within the focus of the corresponding normative and social sciences. Respecting the translational turn pushes the sciences concerned towards an alignment of their subject matter and a partial approximation of their methods as well as of the aims of their anticipatory guidance beyond their disciplinary particularities, which has been reflected in the development of ELSI (‘ethical, legal, social issues’) as an interdisciplinary research and policy movement (Kaye et al., 2012; Hilgartner et al., 2017).

Setting the Stage for Bioethical Analysis

Characteristics of Data-Driven Precision Medicine

The decisive trigger for the development of precision medicine has been the technology of human genome sequencing (Collins, 1999). Thanks to increasing patient participation and a number of successful application examples in which examined genomic data have demonstrably contributed to improving patient management (Claussnitzer et al., 2020), genomics has been at the forefront of cancer medicine (cf., e.g., Berger & Mardis, 2018; Huntsman & Ladanyi, 2018), followed by the fields of psychiatry (e.g. Carter et al., 2017; Chang et al., 2018), cardiology (cf. Tada et al., 2020), drug research (cf. Haley & Roudnicky, 2020) and diabetes research (e.g. Kwak & Park, 2016), as well as public health (Lacaze & Baynam, 2019; Ray & Srivastava, 2020), to name but a few, with an increasing broadening towards omics (cf. Pirih & Kunej, 2018 for its taxonomy).

Unlike with conventional medical interventions, most investigations ahead of precision medicine interventions do not require any substantial intrusion into the physical integrity of the person. Instead, the main focus of these investigations is data acquisition: it is the informational intervention that stands in the foreground (cf. Heyen, 2012 for an analogy with genomics; Molnár-Gábor & Weiland, 2014). Subsequently, the initial claim to focus on the individual’s health based on a specific medical indication morphs into an individual treatment aim that is preventive in nature, as well as into an interest in using the health information gained to benefit stratified and public health investigations and treatments. With genomic data being extended by further health-related and real-world data, there is an ever-growing data pool at hand, and the research aims related to it change over time (Jonsson & Stefansdottir, 2019). Limiting the analytical approach in advance is either undesirable or not possible. Using a broad bioinformatics filter, additional findings can be generated that provide information about a wide variety of genetic predispositions and possible future health developments (cf. Tabor et al., 2011 for genomics only; Fischer et al., 2016). The interpretation of data further requires molecular biology, bioinformatics and, increasingly, public health expertise, whereby the interpretation can also differ depending on the state of the art in science and technology, or might need clarification in the future.

The amount and diversity of the data to be studied, their pooling and the methodology for their analysis, which is based on high statistical validity (in genomics: Molnár-Gábor & Korbel, 2017; in public health: Benke & Benke, 2018; Prosperi et al., 2018), are decisive for precision medicine. Certain patterns in large data sets are identified and hypotheses are formed from them in order to predict developments, decisions or behaviours and to assign these predictions to specific stratified groups. Further clarification is regularly required to ascertain whether and which correlations and risk statements can later be used to substantiate actual causalities in disease identification, development and treatment. Until then, correlation assumes a role within the translational process that is similarly important to that of causality.
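
To make the stratification step concrete, the following is a minimal, purely illustrative sketch in Python (assuming scikit-learn and entirely synthetic data; none of the variables or figures stem from the studies cited above) of how patterns identified in a pooled data set might define candidate strata whose association with an outcome remains, at first, a correlation awaiting causal clarification.

```python
# Minimal, hypothetical sketch: clustering pooled patient features into candidate
# strata and inspecting outcome rates per stratum. Synthetic data; scikit-learn assumed.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic "high-resolution" features (e.g. molecular and lifestyle measurements)
X = rng.normal(size=(500, 20))
# Synthetic binary outcome loosely tied to one latent pattern
outcome = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)

# Identify patterns: scale features and partition patients into candidate subgroups
X_scaled = StandardScaler().fit_transform(X)
strata = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_scaled)

# Per-stratum outcome rates: correlations that would still need causal validation
for s in range(4):
    rate = outcome[strata == s].mean()
    print(f"stratum {s}: n={np.sum(strata == s)}, outcome rate={rate:.2f}")
```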

The distinctiveness and diversity of many diseases and disease types, including cancer, combined with the small number of patients for many disorders, not only effectively precludes conventional research discovery based on local sample cohorts, but also mandates cross-matching and sharing data between centres to increase cohort size and enable discoveries, replication and the translation of findings into therapies (Molnár-Gábor & Korbel, 2020). Lately, emerging projects have relied on patients’ genomic data, together with other sensitive information, being shared on a large scale across numerous countries (cf. ICGC/TCGA, 2020).

Ultimately, knowledge transfer in precision medicine relies not only on data sharing as such, but also on data transfer in the sense of the transfer of scientific content during the transition between the different phases of the intervention (Hulsen et al., 2019). Data sciences and the development of tools and devices to collect, analyse, interpret and share data hence become the pivotal point in precision medicine.

The Changing Circumstances of Bioethical Issues

Data-driven precision medicine on individual, stratified and public health levels fundamentally changes the situation of patients, affected persons, groups and communities, as well as the related ethical challenges.

The predictive content of health-related results contributes to extending their meaning for the affected persons in time, as the analysis, interpretation and extension of data can be continued in silico after initial collection (Rehmann-Sutter, 2012). The changes in the concrete object of analysis and interpretation as well as in their methods contribute to research and care increasingly being designed and conducted independently of the patients as physical (animate) beings (Molnár-Gábor, 2017). Parallel to this, diseases and disorders can be modelled and examined in the laboratory in such a way that emerging results can readily be integrated into treatment and further research without additional interaction with the patients. Furthermore, patients can also be examined outside of the clinic and in their own individual private context with the help of various technologies and devices, as is the case with telematics and self-directed health apps. Altogether, these developments threaten to turn patients into a “wandering”, mobile database. Integrating real-world data relies both on publicly available data sources that can be consulted (Rosen et al., 2020) and on patients contributing their own input through appropriate devices. The latter option can lead to more involvement related to data provision, but also, possibly, to the increased medicalisation of various life issues of those affected. Parallel to this development in healthcare, commercialised direct-to-consumer offers in precision medicine increasingly come to the fore (Moore, 2020).

Changes in the roles of major actors and the involvement of new affected persons and stratified groups in precision medicine lead to a blurring of the traditional focus on the individual in bilateral, personal relationships of care (cf. Konstantinidou et al., 2017 for the contrary). Besides the relevance of genomics within families and for patients’ relatives (Wolf et al., 2015), stratified and public health outcomes of translational medicine will generally take on community meaning (Juengst & McGowan, 2018), which can be further enhanced through data integration and federation. This contributes to a dissolution of conventional attributions of interests to those involved in precision medicine. Moreover, interests (Schaefer et al., 2019) can increasingly no longer be seen as the condensed positions to which regulations governing data processing and bioethical guidance have until now responded. Conflict lines and overlaps between interests become changeable and blurred in relation to the same individual actors, to affected actors belonging to the same or to a different group, and increasingly between individual and public interests.

Precision medicine thus not only requires new negotiations between individual rights, target group interests, and overall public welfare (Juengst & Van Rie, 2020). In essence, it turns data collection, analysis and the application of interpretation results from a traditionally specific intervention into a dynamic process through which new health information derived from individual patients can be generated and used successively and continuously at stratified levels and for public health measures as well as for the development of corresponding health policies. Accordingly, the need to coordinate and balance the various interests involved in precision medicine also becomes a dynamic demand, contributing to a strong proceduralisation of decision-making exercises.

Overall, from a bioethical perspective, the model of shared decision-making in medicine (MacLean, 2009) encounters an unprecedented expansion in terms of time and space, of the actors, groups and populations involved and affected, of the relevance of results and the causality of decisions, as well as with regard to the normative guidance needed. Commitments of traditional medical ethics to patient autonomy are extended to include concerns for group health interests (Meagher et al., 2017); traditional research ethics principles aimed at protecting individual participants have been supplemented with social obligations (Vos et al., 2017). Questions about individual and community perspectives of control over the generation of, as well as access to and usage of, identifiable health-related information (Juengst & Van Rie, 2020) lend a strong privacy and data protection perspective to the challenges for autonomy. At the same time, individual disposition over health information diminishes as genomic risk stratification occurs, and the disparities raised have effects going beyond the individual level (Meagher et al., 2017). The exact benchmark of the obligation to avoid harm by protecting the privacy of identifiable information and by demonstrating professional transparency about information shifts as health-related data change (Brothers & Rothstein, 2015). The risks of stigmatisation and discrimination (Ferryman & Pitcan, 2018), distraction and disempowerment increasingly need to be addressed by measures of oversight and mechanisms of control (Haga, 2017). The creation of corresponding norms, their design and their structure with regard to the relation between bioethical guidance and binding legal regulation demand conceptual engagement with the governance of precision medicine. Last but not least, while engaging with these challenges, the inherent and created (Minari et al., 2018) tensions among the values that drive and justify precision medicine on individual and public health levels (Rosen et al., 2020) need to be consciously encountered: control, transparency, accountability, justice, social value, harm minimisation, public health benefit and trustworthiness.

Ethical Concerns and Moral Quandaries

Justifying Data Processing

The Changing Role of Patients in Precision Medicine

Patients have different roles in precision medicine in relation to data processing: the justification of data processing, and the overview and control of data processing. The legitimising role of patients is reflected in consent. Their overview of data processing, which enables its monitoring and evaluation, rests on transparency and information obligations as preconditions for their empowerment, in conjunction with their right to access data about themselves. Patients exercise control over data processing through their individual rights, which enable them to actively intervene in processing operations. In this sense, individual rights help to operationalise patients’ self-determination in relation to their data. They are also suitable for bundling the different, often contradictory, positions of patient interest related to the processing of their data in precision medicine contexts. They thereby provide patients with a basis to assert their interests according to their individual preferences in complex weighing situations, the outcome of which is delimited by respect for the most important values and the corresponding ethical obligations intimately linked to autonomy, human dignity and integrity.

Increasing data usage for population health and in the public interest pushes back the role of patients in the process of justifying, assessing and controlling health data processing. Data research that empowers communities but also places burdens on them has lately given rise to calls for a focused discussion of the ethical principles guiding data research and sharing in the public interest, such as proportionality, equity, accountability and trust, as well as their application in practice (Ballantyne, 2019). Public interest in data usage has recently been framed as societal permission and social licence (Muller et al., 2021; Ballantyne & Stewart, 2019), which enables the recognition of broader stakeholder interests in data processing but can only be legitimised by increased patient engagement. While data processing in the public interest must accordingly rely on strong legitimacy in terms of input, procedure and organisation, it can enforce ethical principles such as inclusivity and accountability that are also leading principles of precision medicine at the individual level. The operationalisation of trustworthiness when building such data processing systems and, before that, the identification of the relevant public interest as well as the means of dynamically maintaining and reinforcing the societal permission still need to be defined. Particular attention should be paid to common ethical and legal terms such as public interest that have divergent meanings depending on the exact normative framework, with the result that a “licence” for a certain data processing conduct may oppose the individual interest in protection in the ethical sense and might go beyond the understanding embodied in legal frameworks (e.g. Ford et al., 2019).

Informed Consent

The limitations of the concept of informed and voluntary consent have long been discussed in bioethics. It has been convincingly shown that the classical model of informed consent as a one-time act of approval is based on a truncated understanding of autonomy (cf., e.g., Donchin, 2000; Brownsword, 2004; Manson & O’Neill, 2012; Christman, 2011). Concerns around the voluntary nature of consent have emerged primarily when participants belong to a socio-economically disadvantaged group or are in a situation of institutional or hierarchical dependency (O’Neill, 2003). Such dependencies may already arise among patients in poor health, so that concerns around power imbalance become inherent in the medical context.

Increasing medical data processing, typical of precision medicine, has only further aggravated concerns about the justification of informational intrusion (McGuire & Beskow, 2010). Informational self-endangerment through consent is even being mooted in an increasing number of data processing situations. In addition to uncertain information content and communication deficits, there are other closely related uncertainties concerning the secrecy, permanence, impact and value of information (Hermstrüwer, 2016). The consequences of these uncertainties appear to have serious effects on decision-making in often highly sensitive life situations in medicine.

In order to address the restrictions on consent to data processing, various concepts for its further development have been elaborated. In view of constraints on specific consent, broad consent (Fisher & Layman, 2018) can be used if the concrete design of data processing does not allow the purpose to be comprehensively defined at the time of data collection. In order to avoid blanket or vaguely formulated, and hence invalid, consent and to compensate for the abstract wording of broad consent, corrective measures that enhance transparency and confidence as well as measures implementing data security must be taken (DSK, 2019). Common measures to promote transparency are, for example, the publication of a research plan and the establishment of a website to inform study participants and patients. Additional measures for data security include technical-organisational instruments to minimise risks to privacy, such as special provisions to restrict access to the collected data. Trust can be established, for example, by increasing the involvement of patients in data processing, such as by granting the possibility to object before the data are used for new questions of investigation (DSK, 2019).

It is explicitly the increased involvement of patients that prominently distinguishes dynamic consent from other consent models. With dynamic consent, parallel to the flexible design of the research project, the basis of justification in the form of approval by the patient or participant is broken down in terms of time and content (cf., e.g., Kaye et al., 2015). Based on this concept, general consent is obtained at the beginning of the research, and this can be progressively updated through smaller-scale extensions to additional data processing steps, often combined with tiered (Forgó et al., 2010) or layered (Bunnik et al., 2021; Bunnik et al., 2013) consent. Proponents of dynamic consent emphasise its advantages in fulfilling bioethical requirements, also in relation to data processing. Accordingly, it allows the conditions regarding the expressiveness, specificity, informedness and unambiguousness of consent, its revocability and the clear recording of the patient’s will to be satisfied particularly well (Prictor et al., 2019). Critical voices nevertheless emphasise that dynamic consent offers no advantage in the informational dimension of approval, because it cannot simplify the complexity of the information provided, with detailed and continuous information leading to “information overload” and deterring patients (Sheehan et al., 2019; Steinsbekk et al., 2013).
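
As a purely illustrative aid, the following minimal sketch in Python shows how a tiered, revocable consent record of the kind described above might be represented, with every decision time-stamped so that the patient’s expressed will remains clearly recorded; all field names, scopes and the overall structure are hypothetical assumptions rather than part of any cited consent model.

```python
# Illustrative sketch of a dynamic, tiered consent record: an initial general consent
# that is extended or revoked per processing step, with every decision time-stamped.
# All names are hypothetical. Requires Python 3.9+ for built-in generic annotations.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentDecision:
    scope: str          # e.g. "initial_general" or a later, smaller-scale extension
    granted: bool
    recorded_at: datetime

@dataclass
class DynamicConsentRecord:
    participant_id: str
    decisions: list[ConsentDecision] = field(default_factory=list)

    def record(self, scope: str, granted: bool) -> None:
        self.decisions.append(
            ConsentDecision(scope, granted, datetime.now(timezone.utc))
        )

    def is_permitted(self, scope: str) -> bool:
        # The most recent decision for a scope governs; absence means no consent.
        for decision in reversed(self.decisions):
            if decision.scope == scope:
                return decision.granted
        return False

record = DynamicConsentRecord("participant-001")
record.record("initial_general", granted=True)
record.record("secondary_use_registry_x", granted=True)
record.record("secondary_use_registry_x", granted=False)  # later revocation
print(record.is_permitted("secondary_use_registry_x"))    # -> False
```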

Dynamic consent reflects a phase-oriented justification of data processing; the proceduralisation of the justification accompanies the progress of the research project. It further emphasises the systematic proximity of the justification and the patient’s control of data processing by closely coupling the principle of transparency, via information obligations, with the justification for data processing. Dynamic consent increasingly puts patients in a position to assert their control with regard to the information provided throughout the consent process and thus also to position themselves in relation to their previous decision-making on the approval of single data processing steps. Through this set-up, dynamic consent contributes to the operationalisation of patient autonomy and leads to a merging of the various roles of patients in relation to data processing. In the precision medicine context, dynamic consent has the advantage that it best reflects the structure of a traditional communicative interaction between the actors involved. By giving greater weight to decision-making processes, it not only corresponds conceptually to the shared decision-making model of medical ethics, but also strengthens an understanding of privacy captured as the result of formal and active freedom exercised by patients. With this, it can contribute to gradually smoothing out the imbalance of informational power between data processors and patients that stems from the different nature and level of their health-related knowledge and from natural constraints on the ability to judge each other’s knowledge (for more details, cf. Molnár-Gábor, 2021).

Furthermore, dynamic consent lends itself to a comparison of the information content conveyed in different processing contexts and also of the flow of communication, especially due to the structuring of communication at the digital level. The flow of communication is, moreover, culturally conditioned, so that dynamic consent can serve as an important basis for the emergence of a standardised practice of cross-border consent that seeks common patterns of participation in cross-border translational data sharing programmes that are recognisable for the individual (Molnár-Gábor, 2021).

In practice, consent to data processing is often obtained at the same time as consent to the medical intervention in the course of a study or, more generally, to a treatment that is subject to the medical law standards of the relevant regulatory regime as well as to medical ethics requirements. Increasingly, consent to a treatment that ultimately relies on data processing and complies with ethical principles is considered an appropriate protective measure for the benefit of patients under data protection law, releasing consent from having to justify data processing in a legal sense while upholding its function of empowering patients in compliance with obligations stemming from medical ethics.

With precision medicine increasingly occupying the domain of public health, issues of consent in terms of groups and communities come to the fore. First, justification for data processing related to stratified groups relying on consent is a complicated issue in the absence of a recognised legal standing of affected groups (Weijer et al., 1999). Second, a new kind of trade-off emerges between the imperatives to protect patients and to integrate research and practice for the collective good, which must be guided by the principle of relational autonomy (Lee, 2021). In the course of its implementation, bolstering individual choices presupposes enhanced transparency, with transparency in turn preconditioning public deliberation about fairness and equity in data usage for public health (Lee, 2021).

Particular Issues Related to Privacy, Confidentiality and Disclosure

Data Security as a Reaction to Risks and Balancing Interests

Precision medicine situations are complex due to multipolar interests spread between actors, conflicting interests associated with the same actors, increased vulnerabilities related to data sharing as well as precision medicine’s public health perspectives. This gives rise to complex circumstances that require the concurrent application of relevant ethical principles and values, which often leads to the emergence of competing obligations that need to be carefully weighed and balanced when making research-, health- and care-related decisions.

When framing the major balancing need between public and private interests in a simplified way and weighing these obligations, consideration must be given to the fact that intervening in the privacy interests and protection needs of patients is increasingly justified by the stratified benefits of the intrusion. The advantages of the intrusion at community and population levels can then be seen as benefits, while the intrusion itself and its possible consequences register mainly at the individual level as risks, whereby individual benefits for patients can additionally contribute to individual- and, in particular, privacy-related risks that have to be minimised.

Risks to data protection and privacy can be reduced by data security measures. Reducing the risk means that the privacy interests concerned are less exposed, which in turn influences the weighing of the corresponding obligations to protect against those risks against the obligations to promote the ethical mandate of data sharing and usage in the interest of individuals and stratified groups as well as the public. With these divergent weighing exercises in mind, the primary role of data security can be seen in mirroring the outcome of the trade-off between the different facets of competing interests in weighing processes, to which the balancing of obligations will respond (Molnár-Gábor, 2023). In this way, the establishment of trustworthy, coherent and secure data processing systems emerges as a decisive principle of precision medicine.

Anonymisation

Within precision medicine, genetic data pose particular challenges for data protection, as they contain a large number of genetic markers that enable re-identification and are also regularly processed in a highly contextualised manner and combined with other data relevant in the particular context. Accordingly, the risks to privacy through the re-identification of patients and participants are generally high. Based on an understanding of identifiability according to a contingent (or relative) notion of anonymity (Purtova, 2018), the decision on the ethically justified level of data protection and the corresponding protection obligations can only be made depending on the actual data processing operation, including the actors accessing data and information. The contingent understanding of anonymity also means that, from an ethical perspective, contextually anonymised data cannot be treated arbitrarily.

Altogether, the relative understanding of anonymity has three implications. First, anonymisation is not a technical but primarily an organisational measure to respond to the ethical challenges of data processing in precision medicine. While the boundary between technical and organisational measures is fluid, anonymisation is by no means a measure that takes place only on the technical, computerised level, but requires organisation and personnel. Second, contextual protective measures become ethically imperative, initiating sector-specific professional obligations. These are to be applied not only under the premise of integrating professional knowledge, but can also contribute to simplifying the assessment of privacy challenges through concretised ethical requirements in specific areas of processing. This can simplify proof of compliance with guiding values and ethical principles. Third, contextual processing rules can also help to define the transitions between privacy-relevant and -irrelevant processing operations in a given area by defining ethical privacy mandates in relation to the typified processing operation (in this sense, cf. Mourby, 2020). Besides data security and risk management measures, these may include purpose specifications, access rules, documentation requirements, but also procedural requirements in the case of unintentional identification (Mourby, 2020). Establishing these safeguards will help to further concretise medical privacy ethics obligations as part of broader informational governance within precision medicine.
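
To illustrate the point, the following minimal sketch in Python shows how the contextual safeguards listed above (purpose specification, access rules, documentation requirements and a procedural step for unintentional identification) could be expressed as checkable processing rules; all roles, purposes and rule contents are hypothetical assumptions, not requirements drawn from the cited literature.

```python
# Illustrative sketch of contextual processing rules for pseudonymised data:
# a purpose specification, a role-based access rule, mandatory documentation of
# every access, and a procedural hook for unintentional re-identification.
# All names and rules are hypothetical.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("data_access_audit")

ALLOWED = {
    # (role, purpose) pairs permitted in this processing context
    ("clinical_researcher", "approved_study_42"),
    ("data_steward", "quality_control"),
}

def request_access(role: str, purpose: str, dataset: str) -> bool:
    permitted = (role, purpose) in ALLOWED
    # Documentation requirement: every request is logged, permitted or not.
    log.info("access request: role=%s purpose=%s dataset=%s permitted=%s",
             role, purpose, dataset, permitted)
    return permitted

def report_unintentional_identification(dataset: str, description: str) -> None:
    # Procedural requirement: escalate instead of silently continuing processing.
    log.warning("possible re-identification in %s: %s; halting use and notifying oversight",
                dataset, description)

if request_access("clinical_researcher", "approved_study_42", "cohort_a_pseudonymised"):
    pass  # proceed with analysis within the specified purpose
report_unintentional_identification("cohort_a_pseudonymised",
                                    "rare variant combination matches a known individual")
```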

Return of Results

Genomic analysis regularly yields information that can be used to make statements about disease patterns or health risks that were not primarily intended in the context of diagnosis and treatment (Molnár-Gábor et al., 2014). The combination of genomic data with other health-related data in precision medicine, together with appropriate bioinformatic filters, leads to findings that relate to present and predictive health status and can no longer be considered incidental, but must be expected (Lyon, 2012).

Additional findings from precision medicine contexts place new requirements on the physician’s duties to provide information and on their responsibility for treatment. These requirements should still offer protection against unauthorised treatment and treatment that is not sufficiently justified by information on the validity, utility and actionability of findings, whereby the return of such results is itself subject to separate consideration and has been guided by more than a decade of scholarly discussion. How can additional findings be prevented from introducing insecurities into patients’ perception of their own state of health? Do the principles of autonomy and integrity, which grant patients far-reaching decision-making options related to their health, justify a right to be informed about such findings even if they are not actionable? Questions then arise as to the exact penetrance threshold at which a finding is actionable or needs to be communicated at all, or how to deal with the problem of affected third parties. The prospect of additional findings has implications for the doctor’s duty of care. Are doctors allowed to consider the communication of treatable or curable findings and thus give priority to the duties of care and non-maleficence over the patient’s right not to know? Are they allowed to comply with the right to information of family members at risk and place this above their duty of confidentiality and possibly above the patient’s right not to know? Information about additional findings also imposes responsibilities on patients relating to the communication of such findings to those also affected, to reproductive decisions and to responsibility for their own state of health (Kollek & Lemke, 2008). These questions serve only to outline the types of leading ethical concerns related to the return of results.

On a practical level, it should be noted that if a list of diseases or gene mutations is drawn up for which a search is to be carried out in addition to the diagnostic question, the doctor’s mandate changes: the doctor must not only pursue the initial diagnostic question, but also search for the findings on the list, often described as a “positive list”. Such lists might initiate an extended treatment mandate, linked to the “minimum list” established by various professional societies (cf. Green et al., 2013). Alternatively, some emphasise that a combination of the physician’s assessment prerogative as to whether various categories of findings can be reported back and experience of patients’ decision-making on whether to use their right to know and not to know should play a decisive role in establishing action corridors for the return of results. Such “experience registers” allow a list of significant findings or genes to be compiled, which can be expanded over time and with growing knowledge about their actionability. Expanded by the documentation of notification experiences, such registries can function as forerunners of codified professional standards and allow early respect for patient engagement (Tanner et al., 2016). With the emerging public health relevance of results and findings, new types of ethical weighing lines have opened up that demand respect for additional guiding elements in the balancing of public and private interests, duties of care and practicability (cf. Forsberg et al., 2009).
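
A minimal, hypothetical sketch in Python of how such an “experience register” might be structured follows, combining a positive list of findings with documented notification experiences that can be expanded over time; all entries and field names are illustrative and not taken from any professional society’s list.

```python
# Illustrative sketch of an "experience register" for additional findings: a positive
# list of genes/findings whose actionability assessment and documented notification
# experiences can be extended over time. Entries and fields are hypothetical.
# Requires Python 3.9+ for built-in generic annotations.
from dataclasses import dataclass, field

@dataclass
class RegisterEntry:
    gene: str
    finding: str
    actionable: bool
    notification_experiences: list[str] = field(default_factory=list)

register: dict[str, RegisterEntry] = {}

def add_entry(gene: str, finding: str, actionable: bool) -> None:
    register[gene] = RegisterEntry(gene, finding, actionable)

def document_notification(gene: str, experience: str) -> None:
    # Growing documentation of how patients received the information can later
    # inform professional standards on whether and how to return such findings.
    register[gene].notification_experiences.append(experience)

add_entry("GENE_A", "pathogenic variant, treatable condition", actionable=True)
document_notification("GENE_A", "patient opted to be informed; referred to counselling")
print(register["GENE_A"])
```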

New Tools in Precision Medicine: Emerging Ethical Challenges Through Artificial Intelligence and Neurotechnology

AI tools and neurotechnology can contribute to patient empowerment in health contexts and beyond, and make a significant contribution towards allowing patients to experience a degree of autonomy, freedom of action, integrity and dignity that would be inconceivable without these tools (Ienca & Ignatiadis, 2020).

However, the application of such tools in precision medicine can have restrictive effects on patient autonomy. If artificial intelligence (AI) and machine learning (ML) systems are used to make a diagnosis or a treatment plan, but the physician is unable to explain to the patient how these were arrived at, this could limit the patient’s informational basis for making free, informed decisions about their health (Mittelstadt, 2017). The risk that ML-based systems in medicine might even directly restrict choices related to a patient’s health and in this way manipulate them (Nuffield Council on Bioethics, 2018) must be considered in light of the patient’s self-determination. Besides calculations about risks influenced by an AI system, such concerns may arise in cases where a (semi-)autonomous intelligent system is granted decision-making power based on an evolving and adaptive algorithm, such as when intelligent closed-loop devices actively interfere in the state of the brain (Kellmeyer et al., 2016).

The results of neurodata processing can greatly influence the future behaviour of the person concerned. In addition, it becomes more difficult to position the person affected by a neurotool, for instance a brain-computer interface (BCI), with regard to continuously running information processes and their results as a whole if it is unclear which parts of perception are due to their own brain activity and which parts are the result of brain-stimulating processing by an algorithm (Kellmeyer, 2021). The processing of neurodata could thus ultimately have an effect on the person’s relationship to themselves (abolition of self-authority; Gertler, 2020). Dynamic interactions between a patient and an ‘intelligent’ neurotechnological device may thus have a transformative effect on the sense of agency and the active self, inducing ethical constraints around identity and its connection to decision-making (Sarajlic, 2015). Such constraints on self-determination can serve as an example to demonstrate competing rights and interests of the patient in relation to the same data processing: negative liberty, i.e. the freedom from unwanted interference with one’s mental states and/or cognitive capacities by others, and the positive freedom to fully realise one’s cognitive capacities including through treatment and care (Kellmeyer, 2021, with further references).

Additionally, a third perspective of autonomy may be compromised by bringing diagnosis, treatment and care to the patient. Medical AI systems might limit a patient’s social interactions, where autonomy manifests itself on an interpersonal level, and raise the risk of social isolation in situations of vulnerability (cf. Sharkey & Sharkey, 2012; cf. also the concept of relational autonomy).

ML and neurotechnology tools challenge privacy in a different way to more established instruments in precision medicine. First, the sensitivity of neurodata is currently disputed (Rainey et al., 2020). It is unclear to what extent data on a person’s cognitive system open up access to that person’s mental blueprint. Neurodata also have predictive potential, because the activity pattern of neurons maps structures of thinking that may have significance for the person’s actions as a whole. In terms of predictive potential, however, neurodata differ substantially from genetic data in two respects (Molnár-Gábor & Merk, 2021). First, their predictive potential can be harnessed to a much greater degree. For example, when supplementing human cognitive abilities with BCI technologies, data can be analysed in a very close temporal sequence in a first step and fed back through brain stimulation in a second step. Second, neurodata are more characterised by informational uncertainties than genetic data or other health data with predictive significance due to so-called cognitive biases, for example because of an uncertain information content or an uncertain information effect.
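
To illustrate the two-step closed loop described above in purely schematic terms, the following minimal Python sketch simulates a signal window being analysed in near-real time and a stimulation decision being fed back; the data, decoding step and thresholds are entirely synthetic assumptions and do not represent any real device interface.

```python
# Purely illustrative simulation of a closed loop: a neural signal window is
# analysed (step 1) and a stimulation decision is fed back (step 2).
# Synthetic data and thresholds; no real device interface.
import numpy as np

rng = np.random.default_rng(1)

def decode(window: np.ndarray) -> float:
    # Step 1: derive a simple feature from the most recent signal window.
    return float(np.mean(np.abs(window)))

def stimulate(intensity: float) -> None:
    # Step 2: in a real device this would trigger stimulation; here we only print.
    print(f"stimulation command issued, intensity={intensity:.2f}")

THRESHOLD = 1.0
for step in range(5):
    window = rng.normal(scale=1.2, size=256)   # simulated signal window
    feature = decode(window)
    if feature > THRESHOLD:
        stimulate(min(feature - THRESHOLD, 1.0))
    else:
        print("no stimulation")
```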

While genomic information and information derived from its combination with other health data are difficult to clarify and to explain, thus remaining information that the patient cannot directly experience and reflect upon (Rehmann-Sutter, 2000), brain data increase the difficulties around their perception by the patient, as they are often produced at an unconscious level (Lavazza, 2018). This restriction is particularly evident in the application of the right to be forgotten, which is intended to prevent the permanent persistence of information about a person in order to ensure the possibility of the free development of personality (Mayer-Schönberger, 2009). The concept of forgetting does not necessarily include a third party, but means the disappearance of the information as such (Molnár-Gábor, 2019). In relation to neurodata, the right to one’s own oblivion of data is becoming increasingly important. Due to the close proximity of these data to the patients and participants and to their identity, it is increasingly difficult to distinguish which data served as the basis for decision-making and which data were returned to the patient in some form and were thus included in the structure of their decision-making. The process of one’s own forgetting becomes necessary when information processing detaches itself from the patient or participant and becomes independent, only to be fed back into their own decision-making processes (Molnár-Gábor & Merk, 2021).

Benefits for patients arise through respect for their well-being, whereby the patient’s subjective knowledge and life experience should guide any decision-making process, particularly the evaluation of risk information, false positives and false negatives. This knowledge should also inform measures of explaining and communicating health-related decisions made by involving AI applications.

The safety and reliability of AI systems is crucial to avoid malfunctions and undetected errors that might induce knock-on effects, producing harmful implications for patients. In addition to technical errors in ML-based devices, informational uncertainties associated with neurotechnology could cause physical injuries, for example, if the wrong control commands arrive in the case of digitally controllable prostheses (or other aids), or if there is a time delay in correcting errors in the control system, resulting in harm to the patient’s body or people in the vicinity (Yuste et al., 2017; Nuffield Council on Bioethics, 2018). AI might also be used for malicious purposes such as covert surveillance or the collection of revealing information about a person’s health without their knowledge (Fenech et al., 2018), for example based on an analysis of movement and mobility patterns detected by tracking devices.

Transparency and accountability are cornerstones of the just application of AI in healthcare. Difficulties related to the explainability of AI results create problems for validating the output of AI systems. Although AI applications have the potential to reduce human bias and error, they can also reproduce and reinforce biases in the data used to train them (Courtland, 2018). Concerns have been raised about the potential of AI to lead to discrimination in ways that may be hidden: datasets used to train AI systems are often poorly representative of the wider population, and, as a result, such systems could make unfair decisions that reflect wider prejudices in society and lead to an uneven distribution of the benefits of AI in healthcare (DeCamp & Lindvall, 2020). AI-based systems might work less well where data are scarce or difficult to collect and render digitally, negatively impacting underrepresented communities and individuals, for instance those with rare medical conditions (Fenech et al., 2018). Altogether, data quality and data diversity emerge as values associated with the development of tools for precision medicine. Biases may also be embedded in the algorithms themselves, reflecting the biased assumptions of their developers (House of Lords, 2018; cf. Martinez-Martin et al., 2021 for further types of biases). In this regard, it is vital to guide the implementation of AI by defining clear norms of accountability, as these can contribute to fair compensation in the event of harm. Corresponding professional obligations must encompass training and qualification requirements for medical staff (Nuffield Council on Bioethics, 2018; Brouillette, 2019; cf. Safdar et al., 2020 on insecurities within the medical workforce) and the reservation that ML-based systems may only be used by medical professionals. Maintaining their skills so that they are able to take over if AI systems fail might prove crucial in order to ensure the well-being of patients and to avoid harm to them in a just manner.
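
The following minimal sketch in Python (assuming scikit-learn and entirely synthetic data with hypothetical group labels) illustrates one way a model’s performance could be checked per subgroup, since unrepresentative training data can produce unevenly distributed errors of the kind described above; it is a schematic check, not a validated fairness audit.

```python
# Illustrative sketch: checking an AI model's accuracy separately for each subgroup.
# Synthetic data; scikit-learn assumed; group labels are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(2)
n = 1000
group = rng.choice(["majority", "underrepresented"], size=n, p=[0.9, 0.1])
X = rng.normal(size=(n, 5))
# Synthetic outcome whose relation to the features differs slightly between groups
shift = np.where(group == "underrepresented", 1.0, 0.0)
y = ((X[:, 0] + shift * X[:, 1] + rng.normal(scale=0.5, size=n)) > 0).astype(int)

model = LogisticRegression().fit(X[:800], y[:800])
pred = model.predict(X[800:])

for g in ["majority", "underrepresented"]:
    mask = group[800:] == g
    if mask.any():
        acc = accuracy_score(y[800:][mask], pred[mask])
        print(f"{g}: n={mask.sum()}, accuracy={acc:.2f}")
```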

The challenges of knowledge transfer must be considered on the way to establishing ML-based applications in public health. These can only be addressed to a limited extent by establishing professional duties or obligations for manufacturers. For this reason, measures are also necessary at the governance level that lead to better handling of the risks and that need to be located within the realm of the leading principles of transparency, explainability and plausibility. Their implementation and application can foster an increased understanding of how AI systems function, also at a societal level. Such measures can be realised in many ways, from research funding to training and education (Campbell et al., 2007), as well as in the form of different tools to increase the competence of the actors, operators and manufacturers involved. From the perspective of accountability design, and in order to enhance the competent handling of AI systems, focus should be placed on ethical issues relating to interactions between humans and machines (cf. Reeves et al., 2021).

New precision medicine tools fuel discussion on equality and equity. Both principles relate strongly to the challenges raised by the diverse types of biases associated with these tools, to the connected obligations of non-discrimination and just application, and to the elimination of disparities in health research and care. The use of new tools in public precision medicine is explicitly framed by some as an instrument to combat inequality and disregard for equity (Cooper et al., 2015). As new medical technologies are implemented in care, inequalities and equity challenges regarding benefit-sharing from the application of these technologies, arising from costs (Alami et al., 2020), access burdens and the disease-specific as well as individualised and stratified context-specificity of the technologies and software (WHO, 2019), will play a crucial role and need to be considered when defining obligations related to their development and application. On the other hand, measures of patient empowerment (Schulz & Nakamoto, 2013) and public engagement (Wiens et al., 2019) must increasingly focus on developing and reinforcing the competences of those affected by these technologies as well as on preparing, designing and conducting public involvement and commitment at participatory levels of deliberation and decision-making.

Governance: Ethics and Law

The development, specification, and standardisation of obligations corresponding to values and guided by ethical principles can contribute to building a field of reference for conduct in precision medicine. Reference fields of conduct that are transparently guided by an ethical perspective help to increase individual, stratified and public empowerment in the respective field and can contribute to establishing and enhancing trust in compliant conduct.

The substantive-material standardisation of rules of conduct is inherently limited in areas of high ethico-moral constraint such as precision medicine. This is particularly relevant against the backdrop of the empirical turn in bioethics (Borry et al., 2005; Hurst, 2010), an approach which advocates a greater focus on social context and experience and less focus on basic principles. The incorporation of empirical research into bioethics enables moral guidance to be given for specific situations and helps bioethics to become ethics in action. Ethics in action will usually be framed by guidance relating to the question of how a certain field, i.e. precision medicine, can be practised, and will focus on procedural measures informing translational medicine. Its emphasis on the social context of research and healthcare also makes it a fruitful approach in the context of public precision medicine.

Procedural measures and tools framing the conduct of precision medicine have emerged in recent times, opening up to the integration of values and corresponding obligations in the decision-making processes. The most prominent examples are the establishment of codes of conduct, the broadened involvement of ethics committees, and data stewardship models.

Codes of conduct are collections of sectoral behavioural rules developed by the research community itself. In this sense, such self-regulation is understood as the development of specific self-obligatory norms of behaviour through the setting of professional standards (Molnár-Gábor & Korbel, 2017). Codes of conduct point out routes of decision-making and corridors of action; their standards can be understood as interpretative aids for the implementation of general norms in a specific area.

Input legitimacy is crucial for the development of such codes in order to produce appropriate guidance for conduct. Accordingly, experts must be involved in the establishment of the standards to ensure the disciplinary suitability of the regulations. Beyond subject-matter experts, the inclusion of ethical standards of conduct is a decisive element of the input legitimacy of any rules of conduct, particularly for actors in professions, such as bioinformatics, that cannot rely on an established canon but are increasingly held accountable for respecting various facets of ethical standards in their conduct. Additionally, most legal systems define possibilities for giving binding legal force to self-regulatory measures by private actors, including codes of conduct, by referring to them in binding law or through particular legal instruments, e.g. in labour law, that give them binding force. Ultimately, rules of codes of conduct representing state-of-the-art behaviour in sectoral areas can, over the long term, become the standard for reasonable care (for more detail on input legitimacy, cf. Famenka et al., 2016).

A greater involvement of ethics committees in decisions relating to data processing in precision medicine can now be observed in practice (cf., regarding material transfer agreements, Chalmers et al., 2014; Ferretti et al., 2020). Ethics committees increasingly demand a description and justification of data processing in research study applications, which they tend to examine from a mixed perspective of privacy and data protection ethics combined with the main data protection principles. The ethical review of compliance with these principles is gaining particular relevance in the approval of research projects and precision medicine studies. The significance of this increasing ethical consideration of data processing is twofold. First, many principles related to the ethics of privacy and data protection are also anchored in data protection law (Bygrave, 2014), revealing a concerted action in the normative governance of data protection. Second, some data protection laws explicitly address the binding nature of ethical reviews when regulating particular aspects of data protection, such as scientific research or broad consent. While ethics committees, with a few exceptions, are regularly not commissioned to monitor adherence to data protection law, they are instructed to examine compliance with ethical obligations, including those rooted in the principles of data protection. Accountability, data minimisation, purpose limitation, transparency and lawfulness are also ethical principles of data processing, adherence to which can be an indicator of compliance with the law but is no proof of compliance with legal regulations per se.

Cooperative forms of health data processing must also be designed from the governance perspective. The uncertainty surrounding the disclosure of data to external research actors often significantly contributes to the overall lack of trust in the further use of health and genomic data, even in protected form. To remedy this, data trustees can act as independent entities between the data provider and the data user, mediating data in such a way that its confidentiality and integrity are adequately preserved (Delacroix & Montgomery, 2020). With the help of a trustee, doctors can thus offer their patients the opportunity to make their genetic and health data available to further research in a protected form and to benefit translationally from it, without exposing themselves to the risk of a breach of data protection or losing control over their data. Insights into the delineation of the various purposes of data trustees, their powers and responsibilities, their accountability, and their procedures and modes of operation provide information about how data protection and ethics concerns can be taken into account in their modus operandi, especially when communicating with participants (Rinik, 2020). Data trustees are increasingly defined by law and anchored in the governance of health data sharing. UK Biobank Ltd. is a prominent example of a successful data trustee initiative, with other countries following suit in establishing such entities. UK Biobank Ltd. was established as a not-for-profit limited liability company and enables access, including commercial access, to health data for research purposes (Bell, 2020). Beyond this, the draft Data Governance Act of the European legislator also focuses on specific forms of enhancing trust in data sharing. Data sharing service providers (data intermediaries) are expected to play a key role in, and have the potential to contribute to, the efficient aggregation of data and the facilitation of data sharing (Recital 22 of the draft Act).

The boundary between ethics and law cannot be blurred; ethical principles only become legal principles when they are cast into concrete form in compliance with the formal and material requirements of law-making. This being said, all three measures – the drafting of codes of conduct, the emerging practice of ethics committees and the development of data trustees – contribute to increased coordination between ethical guidance and legal rules in the area of precision medicine. Codes of conduct are developed on a bottom-up basis and by integrating ethical advice, with the possibility of gaining factual and legal binding force. Data protection laws increasingly mandate ethics committees to provide for the justification of the planned research and for patients’ integrity. Data trustees navigate patients’ and participants’ control over their data in different contexts, regularly instructed to adhere to the will and expectations of the patients and participants. At the same time, they are called on to register stratified and public attitudes towards data sharing and different data usages. By establishing their practice of navigating areas that are not precisely defined by the law, with regard to specific data processing situations or their own procedures of conduct, they can contribute to capturing and implementing individual, stratified and long-term, population-level attitudes to precision medicine.

Taken together, these governance measures can contribute to a formalised ethics-by-design in the performance of precision medicine and can reinforce coordinated and referenced conduct between ethical rules and obligations, where applicable, also prescribed by the law.