Four developments illustrating shortcomings of individual role responsibility

There has been a proliferation of roles within which individuals define their responsibilities.

First, as a consequence of the professionalization of many tasks previously carried out in non-technical or private spheres, an enormous differentiation has occurred in the new roles individuals can take on in our society. Engineering itself provides a modest illustration of this: it has broadened its functional specializations from research, development, design, and construction to include production, operation, management, and even sales engineering, while its content specializations have come to include biomechanical engineering, biomedical engineering, biochemical engineering, nanoengineering, and more. Stepping outside the technical fields, the unfortunate reductio ad absurdum of this trend is the professionalization of virtually every work-related activity: janitors become maintenance professionals, friendship becomes professional grief counseling, one hires professional personal trainers to get the exercise right, etc. Although this development manifests itself primarily as a quantitative proliferation of roles, it inevitably has qualitative implications (see, Illich et al. 1977).

Second, and in parallel, the area for which an individual may be held responsible has narrowed. This is aptly illustrated with an example from the sciences that applies equally well to engineering. In the 1700s, there were natural philosophers who pursued natural science. In the 1800s, William Whewell coined the term “scientist,” and initially there were simply scientists as such (separate from philosophers). This was followed by a period in which it was possible to be a physicist, a chemist, or a biologist. Today, however, not even the term “microbiologist” is sufficiently descriptive of a definite scientific role. As a result, some individual scientists may be proficient only in research on one specific microorganism, and perhaps only in relation to a restricted number of biochemical processes in that microorganism. Individual scientists increasingly “know more and more about less and less,” and thus can hardly foresee the consequences of their discoveries for related fields, let alone the possible applications that could result from interactions with other fields. Such an excessive differentiation of roles implies both a formal and a substantive delimitation of individual role responsibility.

Third, the number of roles that any one individual may fill has dramatically increased. Synchronically, one person may well be a structural engineer (that is, a kind of civil engineer) doing research on earthquake remedies, a grant or contract administrator, a professor of engineering, a student advisor or mentor, an academic administrator (as department head or dean), an author—not to mention a spouse, parent, church member, citizen, consumer, and more. Diachronically, the same person may exchange any of these roles for others and/or complement them with literally hundreds of other roles. Moreover, the interchangeability of individuals and roles has expanded along with individual mobility, both temporally and geographically. This means, practically, that responsibility is identified more with a role than with a person, thereby complicating the responsible organization of professional tasks while significantly diminishing technical professional ethical commitments—not to mention loyalty.

Fourth, contemporary society is not only characterized by the differentiation of roles but also by the intensified institutionalization of the social-institutional spheres in which the role differentiation takes place. Science, engineering, economics, education, politics, art, religion, and more have all become so institutionally distinct that they largely determine the conditions for their own functioning. Regulation, insofar as it occurs, must increasingly take place internally within each sphere. Scientists regulate science, engineers engineering, economists the economy, and so on.

As a result of this four-dimensional transformation of the role differentiation space, technical roles may be said to have become increasingly less robust at the same time that opportunities for role conflict have intensified, proliferated, and specialized, with individuals floating more freely between roles even as large role aggregates are more rigidly separated from each other than ever before in history. Arguably, robots will in the future take over many “traditional” roles of human beings, extending what is already practiced in car factories, where robots have by and large replaced the traditional labour force. The philosophical question is, of course, whether robots can in fact take over “roles”, and whether they would then also take over the corresponding “individual” role responsibilities. I would argue that the introduction of robots as such reallocates responsibilities to those who manage the “buttons”, albeit at another level of complexity. Robots as such thus do not seem to introduce additional ethical issues compared with other technologies. They do, however, add further to the complexity of a technological system over which humans may lose oversight.

The result is a multifaceted undermining of that very role responsibility which has been the traditional basis of social order—and for which it is dubious that responsibility based on principles alone is able to compensate.

Although roles are increasingly central to the functioning of technoscientific society, technical responsibility, while continuing to be framed in terms of roles, is progressively weakened in the moral sense. During the second half of the twentieth century, professional roles in technological societies gained such prominence that, together with their associated expectations and codes of conduct, they constitute one of the major foundations of contemporary ethical problems and dilemmas. In particular, the role responsibility of executing tasks assigned by superiors has become, even outside of professional philosophy, an important ethical issue of the twentieth century.

As was most dramatically demonstrated in the 1962 trial of Adolf Eichmann, strict adherence to role responsibility easily leads to an almost banal immorality (see, Arendt 1963). During the trial, Eichmann defended himself by appealing to his role as chief administrator of the mass execution of Jews during World War II, pointing out that his responsibilities were limited to administrative tasks in a hierarchy in which he had to fulfil the orders and follow the instructions given to him by his superiors. Although the Eichmann case is exceptionally horrifying, the kind of appeal he made is not exceptional at all, as Hannah Arendt documented in her famous book on the case: for her it documented the banality of evil, i.e., a type of ethics that we as ordinary citizens all sometimes seem to invoke.Footnote 1

Individuals in technoscientific and contemporary management positions repeatedly find themselves resorting to a line of reasoning to justify their behaviour that is not that dissimilar to Eichmann’s attempt to demonstrate the normality of his behaviour within a hierarchical administrative process. Depending on the role with which they identify, individuals may find themselves (partly) responsible for particular consequences but not for the overall process. Assigning blame to particular individuals (as in the Eichmann case) is far more difficult in complex scientific-technological matters. The widely studied Challenger disaster of 1986, for example, may readily be interpreted as illustrating this phenomenon: the roles and responsibilities of individuals in complex decision-making processes overlap (see, Vaughan 1996).

This infamous example and its not-so-infamous parallels have not, however, led to any wholesale rejection of the ethics of individual role responsibility. Instead, in the first instance it is often used to argue that individuals must simply acknowledge more than an administrative or technical role. Discussion has therefore focused on the ethical dilemmas and conflicts that arise when two or more roles conflict.Footnote 2 This has ranged from an emphasis on conflicts between the roles of family member and professional to the question of the extent to which a technical professional may in certain situations have a responsibility to become a whistleblower. Rather than prompting examination of the ethical foundations of role responsibility itself or of the contemporary pace of role differentiation, the dilemmas of role responsibility have themselves become the focus of discussion. The primary intellectual concern has been to resolve these dilemmas within an occupational role responsibility framework rather than to challenge the ethics of role responsibility itself.

Still a third attempt to address role responsibility problems has involved efforts to develop an “ethics of technology” (see particularly, Jonas 1984) or an “ethics of science,”Footnote 3 as well as a variety of studies that typically build on the phrase “social aspects of” in their titles—e.g., the social aspects of engineering, the social aspects of computing, etc.Footnote 4 Such fields of scholarly activity are, however, more concerned with exploring and cataloguing the phenomena themselves than with the underlying social orders or with developing normative responses to the occupational responsibility problem itself.

Interdisciplinary studies of the ethics of science and technology nevertheless regularly highlight the extent to which people increasingly feel inadequate to deal with the complex moral dilemmas in which role responsibility places them. The more common phenomenon, in the face of Eichmann-like situations, is not Eichmann-like self-justification, but what the Austrian philosopher Günther Anders might associate with the doubts and guilt manifested by the “Hiroshima bomber pilot” Claude Eatherly.Footnote 5 But was Eatherly really responsible? What about J. Robert Oppenheimer, the leader of the scientists and engineers who designed the bomb? Or what about President Harry Truman, who ordered the bomb dropped? Or President Franklin Roosevelt, who established the Manhattan Project? Or even Leo Szilard and Albert Einstein, who wrote the 1939 letter to Roosevelt that called attention to the possibility of an atomic bomb?

The very complexity of the atomic bomb project calls into question any attempt to accept personal responsibility for the results. Yet certainly Oppenheimer and many other atomic scientists experienced some guilt, and their concerns led to the kinds of public activism illustrated by the founding of the Federation of Atomic (later American) Scientists and the creation of the Bulletin of the Atomic Scientists. The paradoxical critique and idealist call of Anders (1980) for expanding human powers of imagination and responsibility is the more philosophical manifestation of that intensification and multiplication of moral dilemmas which has led many people to feel that various issues are at once their responsibility and yet beyond their role competencies. The familiar not-in-my-backyard (NIMBY) syndrome in response to industrial construction or waste disposal and personal refusals to limit the consumption of high-pollution consumer goods such as automobiles are but two sides of the same coin.

What thus emerges from this description of the four-dimensional transformation of the technical role responsibility space, and of the three attempts to respond to it, is the picture of a society in which there is an imbalance between the individual’s responsibility for a particular and temporary role and the collective responsibility represented by the simultaneous fulfilment of a great number of roles over the long term. This is illustrated by the fact that in a growing number of instances it is impossible, even in a hierarchically structured technical professional system, to assign to any one person the responsibility for solving some particular problem. Who or what role is responsible for nuclear weapons proliferation? For stratospheric ozone depletion? For global climate change? Indeed, who or what role is responsible for even such mundane problems as traffic congestion? For the malfunctioning of my computer? For the presence of unlabelled genetically modified foods in grocery stores?

The chance that any one individual can be identified as responsible for the consequences of our collective actions within and between the myriad systems and subsystems of the technoscientific world has become vanishingly small. Instead, in most instances some form of co-responsibility for collective organization and action, and for its consequences (both intended and unintended), is increasingly operative. At the same time, such collective co-responsibility is elusive and difficult to grasp; it often seems equally difficult to pin down any individual, organization, or even single actor that might be held accountable for scientific and engineering developments.

From individual role responsibility to collective co-responsibility

Karl-Otto Apel has tried to develop a philosophical justification for such an ethics [see especially Apel (1988)]. I cannot do justice to the complexity and the problems of such a justification in the context of this document.

I have described, in an admittedly summary manner but with some empirical references, a society in which it is difficult for anyone to be held responsible for the consequences of many technoscientific actions. We rely on a theory of occupational role responsibility that is no longer in harmony with existing social reality, in response to which we commonly propose an alternative and expanded notion of role responsibility. The fact is that the consequences of a wide variety of collective actions cannot be reconstructed from the intentions of responsible individuals, and role responsibility ethics can bear only on the consequences of individually and intentionally planned actions.

Individuals assume responsibility for the consequences of their actions if and only if they can intentionally direct those actions and reasonably assess the consequences, both intended and unintended. (Unintended consequences may on some occasions be effectively covered by insurance, as with automobile insurance.) But the consequences of scientific discovery and engineering design often escape all common or natural means of assessment.

Science and engineering exist, in the first instance, within the scientific and technological systems and are subsequently, by means of a complicated process of transformation and use, transplanted into the system-specific logics of the economy, politics, and law. None of these system logics is traceable to the intentions of individuals, nor are the possible unintended consequences always assessable. Scientists whose knowledge leads to applications that are then criticized by many in society may rightly point out that they anticipated other applications. Engineers who design products, processes, or systems that wind up actually being used in a variety of ways (guns that kill people as well as protect them, for example) make the same argument. Scientists and engineers may even claim that the possible applications and/or uses are not part of their occupational role responsibilities as scientists or engineers. In another sense, the scope of the ethics of engineers differs from responsibility for applications as such: it covers, for instance, responsibility for the specification of particular technical standards for product safety and efficacy rather than for the complete implementation of all possible requirements for a particular end product. What is clearly required is thus some transformed notion of responsibility that goes beyond the simple multiplication of roles or the expansion of occupational role responsibility to encompass public safety, health, and welfare. Indeed, technoscientific applications can remain ethically problematic even where scientists and engineers have the best possible intentions and users have no conscious intention to misuse or abuse them. This situation constitutes the major ethical challenge we face today.

How are we to address the problematic consequences of collective action? Technological risks are a case of special concern. The nature of many technological risks places them far beyond the framework of individual responsibility. Such risks arise, as Charles Perrow has argued, from the interaction of semi-independent systems, many of which may themselves be so complex as to be partly outside direct control (see, Perrow 1984). (Think of the economy or the legal system as well as the various sciences and fields of engineering.) Such risks often cannot even be constrained within the dimensions of a particular time and place, which makes the identification of possible victims impossible. For such risks it is thus not even possible to take out insurance. Many of the technological risks in our society have the same status as natural catastrophes [see, the argument of Beck (1986)].

In response to this problem, we would need an ethics of collective co-responsibility. The itemized inadequacies of occupational role responsibility point precisely in this direction. Such a collective ethics of co-responsibility arises from reflection on the social processes in which technological decision making is embedded. (It may even be interpreted as involving a renewed appreciation of Cicero’s four-fold root of role responsibility.) That is, any new ethics must deal with the same substance as the old role responsibility ethics, namely with values and norms that restrict or delimit human action and thus enable or guide traditional decision making; but in the new ethics these values and norms will not arise simply in relation to occupational roles and their allocation to particular individuals. Here it is appropriate to address at least four general features of and requirements for the implementation of such an ethics, of which I can elaborate only the fourth in more detail here.

1. Public debate:

To be co-responsible involves being personally responsive. It is clear that the norms of specific technical professions are insufficient because they arise from restricted perspectives. A true ethics of co-responsibility must be both interdisciplinary and even inter-cultural, in order to provide a standard of justice for evaluating and balancing conflicting occupational role responsibilities. If we fail to provide such an ethics, we inevitably continue to aggravate the clash of cultures and unarticulated hostile responses to particular (globalized) technologies.

In my view, an ethics of collective co-responsibility is expressed at the level of free (international) public debate in which all should participate. It is unethical and even unreasonable to make any one individual responsible for the consequences and/or (adverse) side effects of our collective (especially technological) actions. It is, however, ethical and reasonable to expect informed and concerned individuals to participate in public debates (subject, of course, to their particular situation), or at least to make this the default position from which persons must give reasons to be excused. On everyone’s shoulders rests a particular moral obligation to engage in the collective debate that shapes the context for collective decision making. It is not just engineers who engage in social experimentation; in some sense all human beings are engineers insofar as they are caught up in and committed to the modern project.

If we trace, for instance, the history of environmental challenges, we see that many issues which depend on the involvement of personally responsible professionals were first identified and articulated within the public sphere. Public deliberation does not primarily aim at creating a reasonable consensus of itself, but serves, among other things, the function of presenting the different relevant issues to the more or less autonomous systems and subsystems of society—that is, to politics, law, science, etc. The typically independent discourses of politics, law, science, etc. are called upon to respond to issues raised in public debate. An appropriate response by the appropriate subsystem to publicly identified and articulated issues constitutes a successful socio-ethical response. Conversely, responsible representatives of the subsystems are drivers of new debates when they publicize particular aspects of an issue that cannot be fruitfully resolved within the limits of some specialized discourse. The continuous interaction between the autonomous subsystem discourses and a critically aware public provides an antidote to frozen societal contradictions between opposing interests, stakeholders, or cultural prejudices.

2. Technology assessment:

To be collectively co-responsible involves developing transpersonal assessment mechanisms. Although the institution of the public realm and interactions with the professionalized subsystems make it possible for individuals to be co-responsive, these deliberations are in many cases insufficiently specific for resolving the challenges with which technological development confronts us—that is, they do not always lead to the implementation of sufficiently robust national or international policies. Therefore all kinds of specific deliberative procedures—for instance deliberative technology assessment procedures—must be established to complement general public debate and to provide an interface between a particular subsystem and the political decision-making process. The widely discussed consensus conferences are one example of an interface between science and politics; see Mayer (1997) and Mayer and Geurts (1998). (Of course, the question remains whether this type of interface is the adequate one.)

The implementation of ethics codes by corporations also constitutes an interface between the economic sector, science, and stakeholder interest groups, while national ethics committees are often meant as intermediaries between the legal and political system. Experiments with such boundary activities or associations have been, depending on the case, more or less successful. They represent important experiments for enabling citizens to act as co-responsible agents in the context of technological decision making. Yet the absence of adequately deliberative forums is certainly one reason why we are not yet able to democratically plan our technological developments.

3. Constitutional change:

Collective co-responsibility may eventually entail constitutional change. The initiation of specific new forms of public debate and the development of transpersonal science and technology assessment processes may eventually require constitutional adjustment. Indeed, the adoption of specific deliberative principles in our constitutions must not be ruled out.

Consider, for instance, the possible implementation of the precautionary principle, which is inscribed in the European Treaty and now also guides important international environmental deliberations (the Kyoto Protocol on Climate Change, the Biosafety Protocol, etc.). This principle lowers the threshold at which governments may take action and possibly intervene in the scientific or technological innovation process. The principle can be invoked when there is reasonable concern about harm to human health and/or the environment in the light of persisting scientific uncertainty or a lack of scientific consensus. The very implementation of such a principle requires new and badly needed intermediate deliberative science-policy structures.Footnote 7 It imposes an obligation to continue to seek scientific evidence and also enables an ongoing interaction with the public on the acceptability of the plausible adverse effects and the chosen level of protection. The principle gives companies an incentive to become more proactive and necessarily shapes their technoscientific research programs in specific ways.

4. Foresight and knowledge assessment:

The issue of unintended consequences can be traced back, among other things, to the (in principle) limited capacity of the scientific system to know in advance the consequences of scientific discoveries and technological actions. Virtually all complex technological innovations from which our societies benefit are surrounded by scientific uncertainties and varying degrees of ignorance. Instead of addressing the ethics of technology, it could therefore be more appropriate to address the “ethics” of knowledge transfer between our societal spheres, such as the knowledge transfer between science and policy, since the “quality of the knowledge” will by and large determine our relative success in using this knowledge in the context of all kinds of possible applications. At the same time, we constantly need a form of foresight (as predictions about our future have been shown to be enormously imperfect) in which we evaluate the quality of our knowledge base and try to identify societal problems and new knowledge needs at an early stage. In the next section I will analyse the normative elements of (foresight) knowledge assessment.Footnote 8

Foresight and knowledge assessment

The challenges that science related to public policy faces today have to do with the increasing recognition of the complexity of socio-environmental problems, which (ideally) require the extended engagement of relevant societal sectors in their framing, assessment, and monitoring, as well as an extended deliberation process.

Foresight aims at providing visions of the future in order to explore effective strategic policy. Envisioning is inherent in any technological, environmental and social activity; it appears explicitly or implicitly in assessment methodologies, policy documents and political discourse. Foresight is naturally bound up with uncertainty, ignorance and multiple values, and it requires a robust knowledge base made up of different types of knowledge as the background to and justification of an exercise’s outcomes.

The threats and opportunities of biotechnology have often been explored on the basis of the experience with nuclear technology. Nanotechnology is now increasingly being explored on the basis of the experience with biotechnology (see for example Grove-White et al. 2004). Analogies or counterfactuals do not allow for predictions but produce prospective plausibility claims, which nevertheless have sufficient power to allow us to explore the future on the basis of consolidated knowledge from known areas. Conflicting plausibility claims articulate and make us aware of uncertain knowledge, whereby equally plausible claims are based on alternative sources of knowledge (most often from different scientific disciplines). However, these plausibility claims mutually lack any falsifying power (see, Schomberg 2003). They either lose substance or become more persuasive once empirical research supports particular paradigms resulting from those plausibility claims. For instance, the argument (an analogy) of a “greenhouse effect” established the plausibility of the occurrence of global warming: an analogy that has been strengthened by actually observed temperature rises over the last decade, although the empirical basis in itself would not be sufficient to establish the “truth” of the thesis of the greenhouse effect. Foresight knowledge distinguishes itself from “normal” scientific knowledge, in the sense of Kuhn’s normal science, and shares many aspects (although it is not identical) with what Ravetz and Funtowicz (1990) have called post-normal science:

Foresight knowledge can be distinguished from knowledge produced by normal science since it has the following features:

  1. Foresight knowledge is non-verifiable Footnote 9 in nature since it does not give a representation of an empirical reality. It can therefore also not be related to the normal use of the term “predictability” of events. The quality of foresight knowledge is discussed in terms of its plausibility rather than in terms of the accuracy with which certain events can be predicted. Foresight exercises are therefore often characterised as “explorative” in nature and are not meant to produce verifiable predictions;

  2. Foresight knowledge has a high degree of uncertainty and complexity, whereby uncertainties exist concerning particular causal relationships and their relevance for the issue of concern;

  3. Foresight knowledge usually thematises a coherent vision, whereby relevant knowledge includes an anticipation of “the unknown”;

  4. Foresight knowledge has an action-oriented perspective (identification of threats/challenges/opportunities and of the relevance of knowledge for a particular issue), whereas normal scientific knowledge lacks such an orientation;

  5. Foresight knowledge shares the typical hermeneutic dimension of the social sciences and the humanities, whereby the available knowledge is subject to continuous interpretation (e.g., visions of “the future”, or of what can count as a “future”, are typical examples of this hermeneutic dimension);

  6. Foresight knowledge is more than future-oriented research: it combines normative targets with socio-economic feasibility and scientific plausibility;

  7. Foresight knowledge is by definition multi-disciplinary in nature and very often combines the insights of the social and natural sciences.

Foresight knowledge can be understood as a form of “strategic knowledge” necessary for agenda setting, opinion formation, vision development and problem solving. In the case of underpinning the objective of sustainable development, Grünwald (2004) has captured the characteristics of “strategic knowledge for sustainable development”, in which many of the above-mentioned general aspects of foresight knowledge reappear, in the following three statements:

  • strategic knowledge, as a scientific contribution to sustainable development, consists of targeted and context-sensitive combinations of explanatory knowledge about observed phenomena, of orientation knowledge with regard to evaluative judgements, and of action-guiding knowledge with regard to strategic decisions (compare aspects 4, 5 and 7 above);

  • this strategic knowledge is necessarily provisional and incomplete in its descriptive aspects, as well as dependent on changing societal normative concepts in its evaluative aspects (compare aspects 2 and 6 above);

  • dealing with strategic knowledge of this sort in societal fields of application leads to a great need for reflection on the premises and uncertainties of knowledge itself. Reflexivity and the learning processes building upon it become decisive features in providing strategic knowledge for sustainable development (relates to aspects 1 and 3 above).

Foresight and deliberation

Foresight activities should be adapted to the processes of deliberative democracy in modern Western societies. Deliberation obviously goes beyond simple discussion of a particular subject matter and in its broadest sense can be understood as “free and public reasoning among equals” (Cohen 2004).

Deliberation takes place at the interface of different spheres, as we will see for example when we deliberate on the basis of foresight knowledge. In this section, I especially explore the deliberations that take place at the policy making level and at the science-policy interface.

The deliberation levels that relate to particular spheres, such as “politics”, “science” or “policy”, can be characterised by specific normative boundaries. The specific outcomes of each deliberation level can be fed into other levels of deliberation, which are constrained by yet another set of distinct normative boundaries. Most often these boundaries are not simply consensual assumptions shared by the actors involved, but may be fundamental policy or constitutional principles which are the result of longer learning processes and which have to be shared in order to achieve particular quality standards for policies and decisions. For instance, deliberation on risks and safety under product authorisation procedures within the European Union is guided by the policy objective, enshrined in the EU Treaty, of aiming at a high level of protection of the European citizen.

Below, I will outline the normative boundaries of the different levels of deliberation (see Table 1) within which foresight activities are invoked, implemented or applied. It should be noted that the different levels of deliberation represent neither a hierarchy nor necessarily a chronological sequence: the deliberation levels mutually inform and refer to each other, and deliberation at any particular level can spark new deliberation at other levels.

Table 1 Deliberation levels involving the progressive invocation, application and implementation of (foresight) knowledge with its normative boundaries

We work here on the basis of examples of a most advanced form of embedded foresight, integrated in a wider policy context. What follows is an ideal-typical description of all the relevant deliberation levels in relation to the use of foresight knowledge (although there are striking similarities with the use of (scientific) knowledge in policy as such). Theorists of deliberative democracy work on the clarification of particular levels of deliberation within particular spheres of society. Neblo (2004) describes levels of public deliberation in terms of “deliberative breakdown”. Fisher (2003) and Dryzek (1990) describe procedures of discursive politics. Grin et al. (2004) define particular deliberations as practices of “reflexive design”. We will here elaborate the levels relevant for deliberating foresight knowledge for public policy.

The very first level concerns broad political deliberation, which, when it engages in foresight exercises, assumes a political consensus on the need for long-term planning.

At this broad political level, foresight will be understood as a form of early anticipation and identification of the threats, challenges and opportunities that lie ahead of us. Foresight exercises are essentially about the identification of such threats/challenges/opportunities. It is thereby important to realise that, for instance, a Technology Foresight exercise identifies technologies or other developments that may have an important impact, rather than assessing those technologies themselves:

The act of identification is an expression of opinion (italics added by the author of this paper), which amounts to a form of implicit, covert assessment; the assessment of the relative importance of the technologies identified must necessarily follow their identification (Loveridge 2004: p. 9).

Those “opinions” are unavoidably normative in nature and do not relate directly to the assessment of the technology but rather to the assessment of its potential with regard to particular perceived or actual threats/challenges and opportunities. A proper foresight exercise should therefore make these dimensions explicit in order to feed the deliberation process on a sound basis before final conclusions are reached. Foresight exercises need to refer to widely shared objectives (for instance those laid down in international treaties and constitutions), such as the objective of sustainable development with its three recognised pillars (social, economic and environmental), in order to be embedded in the broad political context. Foresight exercises can also be built on more controversial assumptions, yet such exercises may then have the function of stimulating and informing a broader public debate rather than aiming at particular policies and/or actions. Foresight exercises can be invoked at this political level of deliberation.

At a second level, one can identify deliberation at the policy level, which builds directly upon the outcomes of political deliberation. It will need to map and identify those challenges/threats and opportunities which are (in)consistent with more particular shared objectives, such as a high level of protection of consumers and the environment, sustainable growth and economic competitiveness. At this level a policy framework needs to be agreed upon for the implementation of foresight in a broad sense, at least by identifying the institutions and actors which will take charge of foresight exercises. A number of countries have institutions in place for those tasks, such as particular councils, committees or assessment institutes. Such institutions can then plan studies as part of the foresight exercise, which can include activities such as (sustainability) impact studies, cost-benefit analysis, SWOT analysis, scenario studies, etc. These studies should outline scenarios, challenges and threats and verify their consistency with relevant drivers for change.

A third deliberation level, the science/policy interface, is of particular interest since it qualifies a diverse range of knowledge inputs, e.g., those of the scientific community, stakeholders and possibly the public at large, by applying foresight (scenario workshops, foresight techniques/studies/panels, etc.).

At the science/policy interface, the state of affairs in science needs to be identified in relation to the identified relevant threats/challenges and opportunities. A particular task lies in the qualification of the available information by formulating statements on that information in terms of sufficiency and adequacy—a preliminary form of knowledge assessment. The identification of knowledge gaps is a particular task in sorting out the state of affairs in science, possibly leading to later recommendations for further scientific studies to close those gaps. Also, depending on the timelines within which decisions should be made, particular decision procedures for situations of uncertainty need to be taken into account. When communicating the results of the science/policy interface to the policy and political levels, the proper handling of uncertainty has to be taken care of; failure to do so has often led to the disqualification of the scientific knowledge used, both at the political level and in public debate. With uncertain knowledge, particular assumptions must be made as to whether particular consequences do in fact pose a threat to us or not. For example: do we see a 1, 2 or 3 degree temperature rise as an unacceptable consequence in terms of climate change? Do we think a 3 percent increase in public and private investment in science and technology by 2010 would make our economy sufficiently competitive? These assumptions represent “transformable norms”, as their acceptability changes in the light of ongoing new scientific findings. For instance, an initially acceptable normative target of a global two-degree temperature rise may turn unacceptable when new scientific findings indicate more serious consequences than previously thought. New knowledge about these issues leads to continuous reframing, making foresight and monitoring practices necessary partners.

Deliberation on robotics and human enhancement

In this final section I will use the example of robotics and human enhancement to illustrate the deliberation levels described above. This can give us an indication of how those deliberation levels should further materialise in the case of robotics in the future.

Among others, the following ethical issues of robotics can be identified:Footnote 10

  • Respect for fundamental ethical principles (EU Charter for Fundamental Rights, etc.)

  • Rights of access to information, protection of personal data (in the context of medical and security applications in combination with other technologies)

  • Dual use of technology (e.g., military use, use by terrorists)

  • Issues of human dignity, including issues of ICT implants in the human body concerning non-therapeutic human enhancement, and the shifting self-images of human beings as the borderline between machine and human biology may fade in future man-machine interactions

  • Surveillance society issues: the balance of privacy, limits to personal freedom, and security

My point on the ethics of knowledge assessment now becomes more concrete when applied to the issue of robotics and converging technologies, including nanotechnology. I do not wish to take a position here on the substance of those issues. The overriding ethical issue is perhaps not the substance behind each of the above-mentioned issues, for instance how we will define the limits of human enhancement; the crucial question is rather: who will decide, under which procedures, on what issue, and within which timeframe? Furthermore: how will the ethical issues be addressed under those procedures and recognized as relevant for the further RTD process? In order to establish the relevance of the ethical issues, however, it is of crucial importance that our (foresight) knowledge concerning the development of the technology is adequate, or is developed more adequately.

It is therefore necessary to have deliberative procedures in place which allow for comprehensive, democratic decision making at the right point in time. I believe that this process has only just started in the field of robotics. Possibly the ethical issues can still be surveyed in full, yet the “feeding” of the different deliberation levels mentioned in the overview still needs to take place with regard to robotics.

Ethical deliberations are underway at various levels, including initiatives taken by ethics councils and by the Governance and Ethics Unit of DG Research of the European Commission, which is funding several research projects on the ethics of robotics and human enhancement. I can mention the consortia ETHICBOTS (see footnote 10) and ENHANCE.Footnote 11

The ETHICBOTS consortium is already contributing to deliberation on standard setting for robotics, for which it must make normative qualifications with regard to the quality and level of knowledge in the field of robotics. Some early results were discussed at an international workshop in Napoli (Tamburrini and Datteri 2006).

Public deliberation on robotics has also started at various conferences at the European and national levels. International dialogues on the responsible development of robotics are underway. In that regard, the announcement by the South Korean authorities that they will develop and adopt a charter for the responsible development of robotics is a significant development (Sim 2007). Possibly, public deliberation should also entail a (knowledge) assessment of the (societal) visions behind technological inventions and their possible applications.

From this sketchy overview of the “status” of the different deliberation levels, it is clear that more work needs to be done as soon as (foresight and technology assessment) studies further clarify the development of robotics and its possible applications.

Concluding remarks

Contemporary Western society is characterized not only by the differentiation of (job) roles within which we have to take up ethical responsibilities but also by the intensified institutionalization of the social-institutional spheres in which further role differentiation takes place, and in which we become responsible for ever less: science, engineering, economics, education, politics and other spheres have all become so institutionally distinct that they largely determine the conditions for their own functioning.

This state of affairs has led me to adopt the contested assumption that current ethical theories cannot adequately capture the ethical and social challenges of scientific and technological development, and that any ethical framework for new technologies should reflect a new ethics of collective co-responsibility. Such an ethics should focus on the ethics of knowledge assessment and knowledge policy within the framework of deliberative procedures, rather than on the ethics of technologies as such. This is also necessary in the case of robotics, and work has to be done to establish new deliberative procedures and processes at the science-society interface in which ethical issues concerning robotics can be discussed.

As one can glean from the above sketchy overview of activities in this emerging field, in which the ethical issues can still be surveyed, we are only in the very beginning phase.