Rationality is a generally acknowledged good, yet one that takes different forms in different contexts–and is sometimes qualified. The primary effort here is to reflect on some tensions between rationality as manifested in technology and in ethics. Initially, however, it is appropriate to venture some general observations about rationality.

Rationality and Its Diverse Contents

That rationality (which must be distinguished from the Enlightenment philosophy of rationalism) takes a variety of forms can easily be indicated. According to its English etymology and the lexical description from the Oxford English Dictionary, the term derives from the post-classical Latin rationalitas, faculty of reasoning. It is an abstract substantive from the adjective “rational,” indicating the exercise or possession of reason; a closely related abstraction is reasonableness. Its deeper root is the Latin ratio (often used in Roman philosophy to render the Greek logos), with meanings that range, according to the Oxford Latin Dictionary, from the act of reckoning or calculating, especially financial accounting, and proportion or relation, to the act or process of reasoning or working out, an explanation or reason, a descriptive account, the exercise of reason, an affair or business, a plan of action, guiding principle or rule, and method or means.

It is important to note, however, that in non-European languages etymology and denotations can be quite different. In classical Chinese philosophy, for instance, there is no word that can be translated as “rationality”; in modern Chinese the term most commonly translated as “rationality” is lixing, which consists of two compound characters. The first, li, is composed of the radical for king or ruler and the character for inside, village, or neighborhood; the second, xing, is constructed from the radical for heart and the character for birth or life.

Few twentieth-century English-language encyclopedias of philosophy include an entry on rationality itself, though many discuss rationality in some context. Neither the influential Encyclopedia of Philosophy (1967) nor the contemporary online Stanford Encyclopedia of Philosophy carries an entry on rationality; the Routledge Encyclopedia of Philosophy (1998) discusses the topic only in distinct entries on “Rationality and Cultural Relativism,” “Rationality of Belief,” and “Rationality, Practical.” The Encyclopedia of Philosophy Supplement (1996) added a short entry by Paul Moser on “Rationality” (Moser 1996; reprinted unchanged in the Encyclopedia of Philosophy, second edition, 2005) that likewise distinguishes epistemic and practical rationality. With regard to practical rationality, Moser further distinguishes instrumental from substantive rationality; the former concerns the selection of effective means to achieve some predetermined end, the latter the identification of proper ends. While acknowledging the modern marginalization of substantive notions of rationality (witness the focus on instrumental notions in decision theory and related research programs), Moser elaborates on the four conceptions of egoistic, perfectionist, utilitarian, and intuitionist rationality distinguished by William Frankena (1983), adding a fifth, relevant-information conception. Moser argues that rationality and morality may or may not conflict, depending on the precise interpretation of each of these five types of ends. As if confirming the primacy of instrumental rationality, the Oxford Handbook of Rationality (2004) discusses substantive rationality at length in only one of 22 chapters.

Related to the instrumental/substantive distinction is another between rationality and reasonableness. According to a brief account by Alan Gewirth in the Encyclopedia of Ethics (1992 and 2001), persons are rational if they choose the most efficient means to their ends, whatever they may be; reasonable if they maintain a certain equitable relationship between themselves and others, that is, consider ends from an impartial perspective. “Thus, reasonableness is directly a moral quality, while rationality is often nonmoral, and may even be immoral if the agent’s ends are exclusively self-interested” (Gewirth 1992, p. 1069). Although the specific terminological distinction is not widely adopted, the distinction itself is real. Purely instrumental rationality can be at odds with moral reasoning.

A parallel trajectory of attention can be found in the social sciences. There are no entries on rationality per se in either the Encyclopedia of the Social Sciences (1935) or the International Encyclopedia of the Social Sciences (1968). Rationality is finally granted thematic treatment in the International Encyclopedia of the Social Sciences, second edition (2008), with an entry by philosopher Paul Weirich. After a brief general introduction, three-quarters of the main body of this short (seven-column) entry are devoted to rationality as some version of instrumental utility maximization. Beyond Weirich, it is possible to distinguish at least four distinctive research programs dealing with rationality. The first is the rational choice and decision theory research of economists and others who focus on instrumental rationality and seek to determine its procedural norms and applications: in the selection of means, what legitimately counts as evidence, and how is the means-ends relationship most effectively internalized in decision making? The second is research by psychologists and others who seek to identify how people actually express and reveal preferences (ends) and make choices and decisions, regardless of what would be the most effective way to do so. The third is research by psychologists and anthropologists on the evolutionary origins of rationality: what are the biological or genetic foundations of reasoning, and how has it evolved in conjunction with other features of human development? The fourth is research that seeks to evaluate the degree to which real-world decision making accords with decision-theoretic norms, along with what might be done to remedy failures to do so. Leading insights of philosophical importance that cross these diverse research programs on instrumental rationality can be found in, e.g., the work of Herbert Simon (1969 and 1983) and Daniel Kahneman (2011).
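
To lend the instrumental notion some concreteness, the decision-theoretic norm these research programs presuppose can be sketched in a few lines of code. The following Python fragment is only an illustration, with hypothetical options, probabilities, and utilities; it contrasts straightforward expected-utility maximization with the kind of bounded, “satisficing” procedure Simon described:

```python
# Illustrative sketch only: instrumental rationality as expected-utility
# maximization, contrasted with a Simon-style "satisficing" procedure.
# All options, probabilities, and utilities are hypothetical.

# Each option (a means to a fixed end) lists possible outcomes as
# (probability, utility) pairs.
options = {
    "option_a": [(0.8, 10), (0.2, -5)],   # likely modest gain, small risk
    "option_b": [(0.5, 30), (0.5, -10)],  # riskier, higher upside
    "option_c": [(1.0, 6)],               # certain small gain
}

def expected_utility(outcomes):
    """The decision-theoretic norm: probability-weighted sum of utilities."""
    return sum(p * u for p, u in outcomes)

# The instrumentally rational choice maximizes expected utility.
best = max(options, key=lambda name: expected_utility(options[name]))
print("maximizing choice:", best)         # -> option_b (EU = 10.0)

# A satisficing agent instead accepts the first option that clears an
# aspiration level, even when a better option exists further down the list.
ASPIRATION = 5
satisficer = next(name for name, outcomes in options.items()
                  if expected_utility(outcomes) >= ASPIRATION)
print("satisficing choice:", satisficer)  # -> option_a (EU = 7.0)
```

The divergence between the two printed choices is, in miniature, the divergence that the second and fourth research programs study.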

In the present context, then, what may be emphasized from the start – on the basis of both philosophical and social science discourse – is the potential for rationality to be in tension with, if not opposed to, other aspects of human experience. Insofar as rationality denotes a dependence on reason, it can be contrasted with or opposed to revelation, necessity, intuition, perception, beauty, emotion, and more. For present purposes the most salient tension is one between rationality and morality or ethics, the two most prominent normative dimensions of human experience. (Qualification: The terms “morality” and “ethics” are sometimes treated as interchangeable, although in technical parlance “morality” refers to behavior and “ethics” to critical reflection on behavior.)

In ordinary language it is not uncommon to hear the rationality/ethics tension expressed in one of the following templates: “X is rational but not ethical” or “Y is ethical but not rational.” Instances of X that fit the first case include the atomic bombing of Hiroshima and Nagasaki, the torturing of terrorists, transferring profits to tax haven jurisdictions, and more. Instances of Y in the second case include turning the other cheek, outlawing the death penalty, not cheating on one’s taxes when one can get away with it, and more. Of course, in each of these cases arguments can be made to harmonize rationality and morality, but the point is that such arguments need to be made. Prima facie, it makes more sense to say there is an opposition between what is rational and what is moral.

In ordinary language, however, it is difficult to think of instances in which rationality is in such obvious tension with or opposed to technology. That is, given the linguistic templates, “X is rational but not technological” or “Y is technological but not rational,” it is more difficult to imagine substitutes for X or Y. In the first case perhaps the best candidates are actions that are often characterized as performances, such as promising or loving; but to describe these as rational also sounds a bit odd. In the second case Y could be an inefficient or bad technology. Example: “That Rube Goldberg machine is technological but not rational.”

At the same time there is a deep sense in which any notion of rationality implicates some notion of the good. Rationality is not self-justifying. Although one can say it is irrational not to be rational, this is simply a tautology. More substantive is an argument that it is immoral or unethical not to be rational. Notice too that it is not equally the case that any commitment to ethics is rational; ethical commitments have been based on appeals to revelation, tradition, experience, and power, although some proponents defend such appeals as exhibiting their own distinctive forms of rationality. Even the Kantian effort to derive morality from rationality requires some prior sense of rationality as good.

One classical way of conceiving rationality as good is to understand it as the perfection or virtue of a faculty of the mind sometimes called the intellect. In Plato and Aristotle the functional perfection of the mind or intellect (nous) is intelligence (noesis), which could also be translated as rationality. Insofar as rationality is a perfection of the intellect, this implies a strong relationship between rationality and intelligence. Even today we often say that an intelligent person is rational or that a rational person is intelligent; so-called intelligence tests commonly measure skills of reasoning. In the analogy of the divided line, however, Plato distinguishes between noesis (intuitive reasoning) and dianoia (discursive reasoning), a distinction that foreshadows the one between substantive and instrumental rationality and thus suggests two types of intelligence. Additionally, contemporary philosophically relevant work by Simon, Kahneman, and others has established that otherwise apparently quite intelligent people can often make irrational choices, thus implicating a need to distinguish intelligence and rationality.

From this brief review of rationality and its multiple manifestations, the points most germane to further reflection on rationality as manifested in technology and in ethics may be summarized as follows: We desire to be rational only insofar as we see rationality as a good, instrumentally or substantively. The basic argument for the pursuit of technological rationality is thus an ethical one. At the same time, arguments exist for making ethics itself more technological. Both arguments deserve explication and examination.

The complexity of the arguments at issue dictates that the present reflection be no more than a preliminary foray. Nevertheless, the ultimate aim is to defend the thesis that ethical rationality trumps technological rationality, both in practice and in theory–and for good reasons, indeed for the good.

The complexity here derives in part from the fact that beyond the contextual differences between technology and ethics, technological rationality and ethical rationality themselves exhibit multiple context-related forms. As a result, what follows will consist primarily of two case studies.

The first case highlights technology by considering how engineering has attempted to incorporate ethics into professional self-understandings. Engineering is thus taken as a central aspect of technology. As has been argued on other occasions, technology is constituted by the systematic making and using of artifacts, including all the artifacts themselves; engineering is the design and construction of artifacts (Mitcham 1994; Mitcham and Schatzberg 2009; see also McCarthy 2009, and Blockley 2012). Engineering ethics thus constitutes a specific effort to build a bridge between technological and ethical rationality, starting from the side of technology.

The second case considers some relations between ethics and policy. The relationship of policy to technology will be developed in due course. But insofar as policy can also be understood as a kind of technologization of politics, the ethics-policy relationship is another instance of bridge building between two rationalities, this time beginning more from the side of ethics.

The two case studies are followed by some general reflections that relate the two cases in the form of comments on technology and democratic society.

Engineering and Ethical Rationality

The first case study considers the role of ethics in engineering. Because it is difficult to think engineering in general, the present historico-philosophical reflection further contextualizes engineering with a focus on how it has developed and been practiced in the United States.

The classic definition of modern or scientific engineering as practiced in North America derives from one formulated in conjunction with the establishment of the British Institution of Civil Engineers (ICE). In 1818, at the first ICE meeting, H. R. Palmer, while lamenting the absence of any organization to serve as a “source of information or instruction for persons following or intending to follow the important profession of a Civil Engineer,” described the engineer as “a mediator between the Philosopher and the working Mechanic,” that is, one who learns the principles of nature from natural philosophy “and adapts them to [human] circumstances” while the “working mechanic … brings [the engineer’s] ideas into reality” (Institution of Civil Engineers, January 2, 1818). Ten years later, in conjunction with the application for a royal charter, ICE president Thomas Tredgold more carefully defined engineering as the “art of directing the great sources of power in nature for the use and convenience of [human beings]” (Tredgold 1828).

This definition significantly leaves out some issues while including others. Tredgold erases any explicit reference to science, although it had been present in Palmer’s description. But then Tredgold underlines Palmer’s notion of adaptation to human circumstances by referencing human “use and convenience” as the ethical end. “Use and convenience” is a semi-technical term associated with the development of utilitarian philosophy during the same period. In his Enquiry Concerning the Principles of Morals (1751), David Hume observed that all art or well-designed making is oriented toward human “use and convenience.” Notions of use and convenience subsequently came to play important roles in classical economics, from Adam Smith on.

However, even within the framework of modern human commitments to this world and material progress, use and convenience are subject to divergent interpretations. The social context in which this-worldly ends are to be pursued remains open and debatable. Although Tredgold and the Institution of Civil Engineers viewed use and convenience as a non-problematic purpose for engineering, it is remarkable that no subsequent engineering ethics code affirms this end. Use and convenience have never functioned in quite the same way in engineering as health in medicine or justice in law; use and convenience tend to function more as what might be termed “ex-forming” than as informing ideals. To suggest the same point in different words, they operate at such a high level of generality as to require interpretation and application.

Ethics will necessarily come into play in any related discussion of the particular meaning of use and convenience or the contexts in which use and convenience are to be pursued. Useful to whom (factory owners, investors, clients, consumers)? Convenient for whom (workers, sellers, purchasers)?

In relation to the interpretation of use and convenience, it is important to draw attention to the relatively recent development of engineering as a profession. Human beings have since antiquity undertaken projects that may, from a modern vantage point, be interpreted as engineering – witness, for example, Egyptian pyramids, Roman aqueducts, the Brihadishwara Temple in India – just as science has projected its history back to the Greeks and beyond. Nonetheless, in the West the first engineers as such did not appear until the Renaissance. It was at this historical juncture that a systematic or scientific approach to questions of what works and why in both structures and machines began to displace the earlier trial-and-error thinking of artisans and architects. Indeed, Galileo Galilei’s Two New Sciences (1638), which adopts a scientific approach to practical problems and structural analysis, is widely regarded as a landmark text in the history of engineering.

If the birth of modern science as an institution can be dated from the founding of the Royal Society in 1660, engineering as a profession is best dated from a century later with the formation of the Society of Civil Engineers in 1771. Since engineering’s distinctly modern emergence, three theoretical ideals have developed in engineering ethics–which in effect constitute three interpretations of use and convenience.

Use and Convenience Through Obedience to Authority and Company Loyalty

The first theory of engineering ethics grants to the market the determination of use and convenience and thus makes engineers subordinate to corporations that employ them. Their fundamental obligation is to obey or to be loyal to organizations or firms in which they work.

Engineering as a profession initially took distinctive form in the military. An “engineer” was originally a soldier who designed military fortifications and/or operated engines of war such as catapults. The first engineering schools were founded by governments and closely linked with the military; one early example was the Academy of Military Engineering at Moscow created by Czar Peter the Great in 1698. What is often taken as the archetypical engineering school is the École Polytechnique, founded at Paris in 1794, which became a military institution under Napoleon Bonaparte. In the United States the first school to offer engineering degrees was the Military Academy at West Point, founded in 1802. Within such contexts, the over-arching duty of engineers, as with soldiers of all types, was to obey orders.

During the same period as the founding of professional engineering schools a few designers of “public works” began to call themselves “civil engineers”–a term that continues in some languages to denote all non-military engineers. The creation of this civilian counterpart to engineering in the armed forces initially gave little reason to alter the fundamental engineering obligation. Civil engineering was simply peacetime military engineering, with use and convenience replacing protect and destroy, and engineers remained duty-bound to obey those for whom they worked, whether some branch of the government or a private corporation.

The late eighteenth and early nineteenth centuries also witnessed formation of the first professional engineering societies as organizations. But none of the original associations included any formal code of ethics. Formal ethics statements had to wait until the early twentieth century. On analogy with physicians and lawyers, whose codes prescribe a fundamental obligation to patients and clients, the early codes of conduct in professional engineering–such as those formulated in 1912 by the American Institute of Electrical Engineers (later to become the Institute of Electrical and Electronics Engineers or IEEE) and in 1914 by the American Society of Civil Engineers (ASCE)–defined the primary duty of the engineer to serve as a “faithful agent or trustee” of an employing company.

The implicit argument behind this view of engineering ethics is that engineering best produces the general good of use and convenience through corporate subservience and the free market. But insofar as economic competition and consumer choice can produce mistreatment of workers, poor quality products for consumers, and environmental degradation, this ethical justification of engineering may be questioned as producing something other than simple use and convenience.

Use and Convenience Through Technocratic Leadership and Efficiency

At odds with both the implicit code of obedience and the explicit code of company loyalty is the ideology of leadership in technological progress through pursuit of the ideals of technical perfection and efficiency. During the first third of the twentieth century in the United States this vision of engineering activity spawned the technocracy movement, the belief that engineers should be given political and economic power. Although never explicitly articulated in the form of a code of conduct, it has influenced how engineers and the public think about the profession. Economist Thorstein Veblen, for example, argued that if engineers were freed from subservience to business interests their own standards of good and bad, right and wrong, would lead to the creation of a more sound economy and better consumer products (Veblen 1921).

The technocracy ideal was formulated during the same historical period that governments in Europe and North America were establishing independent agencies to regulate transport, construction, communications, foods, pharmaceuticals, banking, and more. In all such cases technical experts from science and engineering were given governing responsibilities to oversee the operations of public and private activities, with an aim of increasing perfection, efficiency, and safety. The pursuit of efficiency also spilled over from industrial to social life. Engineering efficiency became a model for enhancing personal use and convenience in managing one’s health, finances, education, and general comportment (Alexander 2008).

There are good reasons to practice some degree of technocratic leadership and efficiency. Certainly the subordination of production to short-term money making with little concern for product quality is not desirable in the long run, and inefficient or wasteful processes can readily be described as wrong. Moreover, in a highly complex technical world it is often difficult for average citizens to know what would be in their own best interests. Efficiency is not always adequately promoted by consumer pull in imperfect markets; it sometimes requires push from technical experts.

Nevertheless, when technical decision making becomes a formalized process, it is easily decoupled from general human welfare. Not only can regulatory agencies be captured by the industries or activities they are supposed to regulate (a version of the principal-agent problem), but the pursuit of efficiency is not always compatible with personal happiness. Concepts of technical perfection and efficiency virtually require the assumption of clearly defined boundary conditions that perforce can exclude important and relevant factors, including legitimate psychological, environmental, and human concerns.

Technical efficiency entails minimizing inputs to achieve desired outputs or maximizing outputs from given inputs – or both. It therefore hinges completely on how inputs and outputs are framed, which is not a strictly technical matter.
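
The point can be made concrete with a deliberately simplified schema; the ratio definition and the numbers below are illustrative assumptions rather than anything drawn from an engineering standard:

\[
\eta = \frac{\text{valued outputs}}{\text{counted inputs}}
\]

If a plant converts 100 units of fuel into 60 units of electricity plus 30 units of usable heat, counting only the electricity gives \(\eta = 60/100 = 0.6\), while counting the captured heat as well gives \(\eta = 90/100 = 0.9\). Nothing about the process has changed; only the framing of what counts as output has, and that framing is an evaluative rather than a strictly technical decision.

In recognition of such objections there has developed a third theory of engineering ethics, that of social responsibility.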

Use and Convenience Through Public Safety, Health, and Welfare

The World War II mobilization of science and engineering for national purpose and the North American post-war economic recovery caused a provisional suspension of the tension between technical and economic ends, efficiency and profit, that had come to light in discourse associated with technocracy. But the anti-nuclear weapons movement of the 1950s and 1960s, in conjunction with the consumer and environmental movements of the 1960s and 1970s, brought tensions again to the fore and provoked some engineers to challenge national and corporate or business direction as well as the technocratic ideal. In conjunction with a renewed concern for democratic values–especially as a result of the civil rights movement–this led to new ideals for engineering.

The emergence of a new theory of social responsibility has a complex history that draws on multiple dissatisfactions with both the first and second theories. In the United States the seeds of transformation were planted immediately after World War II when in 1947 the Engineers’ Council for Professional Development (ECPD, which later became the Accreditation Board for Engineering and Technology or ABET) drew up the first trans-disciplinary engineering ethics code, committing the engineer “to interest himself [or herself] in public welfare.” Revisions in 1963 and 1974 strengthened this commitment to the point where the first of four “fundamental principles” required engineers to use “their knowledge and skill for the enhancement of human welfare,” and the first of seven “fundamental canons” stated that “Engineers shall hold paramount the safety, health and welfare of the public …”

This third theory addresses many problems with the other two, and has been widely adopted by the professional engineering community, in the United States and elsewhere. It also allows for the retention of desirable elements from prior theories. For instance, obedience and loyalty remain, but within a larger or more encompassing framework. Now the primary loyalty is not to some individual or corporation but to the public as a whole. Leadership in technical perfection and efficiency likewise remains, but is explicitly subordinated to the public welfare, especially in regard to health and safety.

There are nevertheless questions to be raised with regard to the theory that use and convenience are to be achieved through engineering responsibility for public safety, health, and welfare. One concerns whether engineers qua engineers really have any privileged knowledge with regard to safety, health, and welfare. This issue will be returned to below.

Ethics, Policy, and Rationality

A second case study considers some relations between ethics and policy. Adapting a distinction prominent in discussions of science policy, it is possible to distinguish two relationships. One focuses on ethics for policy, another on policy for ethics. The former concerns how to bring ethics to bear in arenas of policy formation and decision making, the latter on what policies might best be used to promote or develop ethics.

Policy Itself

Before taking up these two relationships consider the concept of policy itself, increasingly understood (in the U.S. context) as scientifically and technologically based guidance for behavior that achieves rational outcomes. As such, policy constitutes a kind of technological rationality closely related to engineering; it might even be described as decision engineering.

The under-discussed demarcation problem in policy studies concerns distinguishing policies in this sense from other aspects of human affairs, such as politics, law, rules, plans, designs, principles, or ethics. One scholar prominent in developing the notion of policy as counterpoint to politics was the interdisciplinary American social scientist Harold D. Lasswell. In his classic paper, “The Policy Orientation,” Lasswell wrote:

“The word ‘policy’ is commonly used to designate the most important choices made either in organized or in private life. We speak of ‘government policy,’ ‘business policy,’ or ‘my own policy’ regarding investments and other matters. Hence ‘policy’ is free of many of the undesirable connotations clustered around the word political, which is often believed to imply ‘partisanship’ or ‘corruption’” (Lasswell 1951, p. 5).

Lasswell and others developed methods to formulate and assess policy by means of what he termed the “policy sciences,” supporting a systematic, interactive sequence that includes goal clarification; detailed empirical assessment of the situation in which a goal is to be pursued; the careful weighing of alternative courses of action; and the continuous evaluation and selection of optimal means for carrying out a selected course of action. Any science becomes a policy science insofar as it contributes to the policy making process.
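
The sequence lends itself to schematic illustration. As a sketch only (the alternatives, criteria, and weights below are hypothetical, not anything proposed by Lasswell), the central step of weighing alternative courses of action can be rendered in Python as a weighted scoring of options against clarified goals:

```python
# Minimal sketch of one step in a Lasswell-style policy appraisal: the
# weighing of alternative courses of action against clarified goals.
# Goals, weights, alternatives, and scores are all hypothetical.

# Goal clarification: criteria with relative weights (summing to 1.0).
weights = {"public_health": 0.5, "cost": 0.3, "feasibility": 0.2}

# Empirical assessment: each alternative scored 0-10 on each criterion.
alternatives = {
    "filtration_plant": {"public_health": 9, "cost": 4, "feasibility": 7},
    "chlorination":     {"public_health": 7, "cost": 9, "feasibility": 9},
    "new_reservoir":    {"public_health": 8, "cost": 2, "feasibility": 4},
}

def weighted_score(scores):
    """Weigh one alternative: criterion scores times criterion weights."""
    return sum(weights[criterion] * s for criterion, s in scores.items())

# Selection of optimal means: rank the alternatives by weighted score.
ranked = sorted(alternatives,
                key=lambda name: weighted_score(alternatives[name]),
                reverse=True)
for name in ranked:
    print(f"{name}: {weighted_score(alternatives[name]):.2f}")
# Continuous evaluation would feed observed outcomes back into the scores.
```

Everything substantive enters such a computation as a given: the criteria and their weights encode prior judgments about ends, while the machinery itself remains purely instrumental. It is precisely this feature that opens the question of ethics for policy taken up below.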

Policies in this sense are supposed to be based not in politics (with its characteristic appeals to tradition, power, or majority rule) but in science (with its appeals to empirical evidence or theoretical adequacy); when not legally codified, they lack the force of law and the constitutive character of rules while still being able to guide law enactment and rule formulation; they thus serve as plans or designs both for subsequent decision making as well as for particular actions or artifacts; finally, they function at a level of abstraction intermediate between specific decisions and general principles. They can either include ethics or themselves constitute a kind of (primarily consequentialist) ethics.

Genesis of the term “policy” in this distinctive sense can be traced to the late 1800s and proposals for supplementing legislation based on tradition, interest-group power, and common sense politics with the creation of laws and regulations focused on meeting a public interest while appealing to scientific evidence and analysis. For example, neither tradition, nor power politics, nor ethics, nor common knowledge was able in the eighteenth and nineteenth centuries to determine how best to supply safe drinking water to expanding urban centers. Instead, an ethical commitment to public health became increasingly dependent on knowledge supplied by scientists such as Antonie van Leeuwenhoek (1632–1723), who pioneered the microscope and discovered microbial life in ostensibly pure water, along with the technical skill of engineers such as John Gibb (1776–1850) – a founding member of the Institution of Civil Engineers – who designed the first large-scale water filtration system in Paisley, Scotland. Discoveries such as those of epidemiologist John Snow (1813–1858) regarding cholera transmission via water contamination, and the formulation of the germ theory of disease by biologist Louis Pasteur (1822–1895), further enhanced the ability of science and engineering to trump traditional politics and ethics alone in governmental efforts to protect the public from unsafe water, transport, structures, and food.

The typical process is for a legislative body to pass a law mandating the creation of, e.g., safe drinking water standards, then to delegate determination of the standards themselves to the use of science and technology by some specified agency. Furthermore, to support evidence-based policy making, many modern governments have found it necessary to fund scientific and technological research through state research institutions, through grant support to independent scientific institutions or researchers, and/or through tax incentives that encourage private enterprises to undertake relevant research. This is obviously another version of the second-stage engineering ethics ideal of technocracy–or, perhaps more accurately, what should now be termed “scientotechnocracy” or “technoscientocracy.”

Involvement by scientific and technical experts in effective policy making nevertheless creates the previously mentioned principal-agent problem: How can the principal (in this case, the state) be sure that expert agents (scientists and engineers) share the principal’s goals? It is precisely such concerns, especially in democratic contexts, that have promoted discussions that may be classified under the rubric “ethics for policy.”

Ethics for Policy

Ethics for policy is focused on how to bring ethical practices, principles, reasoning, and considerations to bear in areas of public policy deliberation, design and implementation. In practice the focus is largely on morals, or the inculcating of behavioral norms acceptable to decision makers and their constituents, and rarely on ethics, that is, critical reflection on such norms. Ethics for policy typically argues, for instance, in support of loyalty to established authorities, protection of confidential information acquired during the performance of professional duties, and avoiding conflicts of interest. Again, comparisons with first-stage engineering ethics codes should be obvious.

Consider again the case of drinking water. While even preliterate peoples understood the importance of an ample quantity of water to support human habitation, water quality was traditionally assessed simply on the basis of taste (non-salinity) and aesthetic features such as visual appearance (absence of turbidity) or smell. Microbiology made possible scientific and technological efforts to avoid waterborne diseases. In the United States, during the late nineteenth and early twentieth centuries, safe drinking water was progressively recognized as an important public health issue. In 1912 the newly established U.S. Public Health Service (PHS, a reorganization of related agencies with a history traceable to 1798) was mandated to “study and investigate the diseases of men and conditions influencing the propagation and spread thereof, including sanitation and sewage and the pollution … of navigable streams and lakes” (Dupree 1957, p. 270). Two years later the PHS issued the first standards for the bacteriological quality of drinking water, which were revised and expanded in 1925, 1946, and 1962.

By the late 1960s, however, scientific research on drinking water was implicating not just infectious disease-causing pathogens but also industrial chemicals as causal factors in non-infectious birth defects, child developmental disabilities, and illnesses such as cancer. These technoscientific chemicals were increasingly finding their way into water sources. To examine and regulate the associated risks, the U.S. federal government in 1970 established the Environmental Protection Agency (EPA) and followed up with a series of legislative acts explicitly addressing the need for further research and rulemaking: the Clean Water Act (1972) and the Safe Drinking Water Act (1974) with amendments (1986 and 1996). An official EPA twenty-five year history of the results concluded, “To continue learning about the health effects of known and/or regulated contaminants, and to begin studying emerging contaminants (e.g., newly discovered microbes, perchlorate), it will be imperative that the public and private sectors work together to more effectively and efficiently conduct sound scientific research in the future” (Environmental Protection Agency 1999, p. 35).

When, on the basis of its rulemaking authority and scientific evidence, an agency such as the EPA mandates compliance with new technical standards, the principal-agent problematic emerges. Since the rulemaking is dependent on scientific or technological knowledge that is seldom obvious to the general public, and in some instances the evidence itself may be unclear or conflicting, how can the government or the public be sure that the technoscience and the technoscientific experts involved are not compromised by conflicts of interest or personal commitments?

One effort to respond to this question has been to create codes of ethics for technoscientific experts in government service who are involved in policy assessment, implementation, and other forms of public policy making. For instance, in 1958, in an effort in part and in effect to promote ethics for policy analysis and rulemaking, the U.S. Congress set forth a “Code of Ethics for Government Service,” stipulating that any civil servant should, among other things, “Make no private promises of any kind binding upon the duties of office, since a Government employee has no private word which can be binding on public duty.”

Such efforts to create codes of conduct to guide scientists and engineers in relation to their engagements with public policy are clearly related to the ethics codes of professional engineering societies – but with a difference. With engineering ethics codes, the impetus and development were internal to the profession, even if the effect was to adopt external ideals. By contrast, here the efforts themselves are external to the profession and so constitute what may be described as a “policy for ethics.” This is especially the case insofar as such codes of ethics are associated with some systematic study of how best to promote behavior that achieves desired outcomes and is thus instrumentally rational.

Policy for Ethics

Like science for policy, ethics for policy is focused on rationally influencing and shaping public decision making. Like policy for science, policy for ethics explores the need for and the most rational approaches to providing ethical guidance. That is, policy for ethics examines the nature, source, legitimacy, and promotion–including funding support–of such ethics for policy. In the broad sense, given the general role of ethics in society, policy for ethics includes questions concerning the best ways to promote or develop moral behavior and other dimensions of ethics, not just among policy professionals and public servants but among all citizens.

In one sense this issue is as old as Plato’s Republic and Aristotle’s Politics, both of which considered how best to structure the polis so as to cultivate virtue among citizens, although neither philosopher considered civic virtue as subject to modern scientific and technological policy determination. In modern secular societies, increasingly informed by and dependent on technoscience, many traditional institutions for instilling and enforcing norms–such as families, local communities, religious and educational institutions–have been significantly weakened, just at the point at which extended technological powers demand greater conscious reflection than ever before. In consequence, appeals are made to social scientific and other forms of expertise to help determine how to cultivate morality.

From the middle of the twentieth century, as the physical conditions of the human lifeworld were transmogrified through technoscience, a number of assumptions regarding the norms of human conduct underwent inversions. For example, when many children failed to live to adulthood and the human population was mostly stable over long periods of time, unlimited procreation was an obligation reinforced by natural human inclinations. Once advances in public health significantly increased the survival rate of children and lengthened the average human life span, procreation became an action subject to reflective delimitation. Indeed, through the technologies of birth control, what had once been more a behavior than an action has increasingly come to call for conscious decision making and the taking into account of more factors than had ever previously been the case–entailing what has been termed a duty plus respicere (Mitcham 2011).

Another example: As long as farming was done on small scales with technics inherited from tradition, it made sense to cultivate all available land, as had been done for generations–although small scale alone does not ensure against major failures to appreciate the limits of nature (Diamond 2005). As new technologies and scales were introduced, the conscious mediation of the agricultural extension agent became an almost necessary adjunct, and in some cases even the de-cultivation of arable land became a newly appropriate norm.

Insofar as humans experienced few lifeway options, they did not have to worry about which choices might be best; as options proliferated, they were increasingly encouraged to consider economic and psychological factors when making decisions. The pattern of increased need for research and reflection, manifested in the public sphere by the design, construction, and operation of urban water systems, was imported into many spheres of personal decision making.

The situation of medical care provides still another vivid illustration. As medicine has become increasingly empowered by life science engineering, public policy has been stimulated to develop formal ethical engagement mechanisms for decision making regarding utilization of the related therapeutic technologies. The establishment of ethics advisory committees–known in the United States as Institutional Review Boards (IRBs) and elsewhere as Ethical Review Boards or Independent Ethics Committees–has been a widely adopted initiative made on the basis of appeals to social science assessments of public need and to ethical expertise instead of to interest group politics; as such these too may be conceived as a form of policy for ethics. The U.S. National Research Act of 1974 defined the structure of IRBs and required them for all research directly or indirectly funded by the Department of Health and Human Services, with the aim of introducing an ostensibly non-partisan analysis of options along with enhanced and broadened ethical reflection.

During the last third of the twentieth century a host of similar initiatives emerged that, insofar as they have purported to be science- or reason-based programs for action to guide decision making toward the effective development and utilization of ethical expertise in human affairs, could be classified as part of a policy for ethics spectrum. Among such initiatives are the following:

1. Beginning in 1974 in the United States (and subsequently in many other countries), creation of a series of national bioethics commissions and committees to bring ethical expertise to bear at a public level in discussions of challenges emerging from advances in biomedicine (Briggle and Mitcham 2005).

2. Establishment in 1975 of the Ethics and Values in Science and Technology (EVIST) program at the U.S. National Science Foundation (with a related program at the National Endowment for the Humanities), which in the 1980s morphed into the Ethics and Values Studies program, to fund “studies of ethical and value aspects of the interactions between science, technology and society” as a new research area (Hollander and Steneck 1990).

3. Shortly after its formal inception in 1986, the Human Genome Project started to include Ethical, Legal, and Social Implications (ELSI) research into government-funded efforts to map and sequence the human genome (Langfelder and Juengst 1993). (In Europe ELSI research is more commonly termed ELSA or Ethical, Legal, and Social Aspects research.)

4. Following exposure of a series of medical research misconduct cases, in 1989 the U.S. National Institutes of Health began to require that all graduate students on training grants receive education in responsible conduct of research (NIH Guide 1989).

5. In 2003 the U.S. National Nanotechnology Initiative (NNI) began to include a “post-ELSI” program to promote public engagement and the integration of the social sciences into nanoscience and engineering, which led to the establishment of two Centers for Nanotechnology in Society (one at Arizona State University and another at the University of California, Santa Barbara).

6. In January 2010 the National Science Foundation, under mandate from 2007 legislation by the U.S. Congress, began to require that any student or postdoc who receives NSF support have training in responsible conduct of research.

Both ELSI and post-ELSI activities are significant developments of policy for ethics, in terms of scale and the inclusion of the social sciences and even the humanities in the policy process. In notable contrast to the ELSI program, which has been criticized precisely for its failure to inform policy as specified in its mandate (Fisher 2005), post-ELSI programs call for integration of the social sciences into decision making in order to influence the direction of research, development, and commercialization (Fisher and Mahajan 2006).

Policy Limits

Ethics for policy and policy for ethics are complementary efforts to make present something that is absent. In this the notion of ethics policy is continuous with the arguments of great nineteenth century radical philosophers such as Karl Marx (1818–1883) and Friedrich Nietzsche (1844–1900). But whereas Marx sought to reinsert ethics into the lifeworld through political activity and Nietzsche through a great-man transformation of culture, ethics policy proposes the more mundane approach of re-conceiving ethics as the realizing of common democratic aspirations in more effective ways than have previously been the case.

But how rational are these democratic aspirations? Consider an observation from John Dewey (1859–1952), near the end of an extended essay on the need to unite ethics, policy, and rationality. “It is quite true,” he admits, “that science cannot affect moral values, ends, rules, principles as these were once thought of and believed in.” Then he continues:

“to say that there are no such things as moral facts because desires control formation and valuation of ends is in truth but to point to desires and interests as themselves moral facts requiring control by intelligence equipped with knowledge. Science through its physical technological consequences is now determining the relations which human beings, severally and in groups, sustain to one another. If it is incapable of developing moral techniques which will also determine these relations, the split in modern culture goes so deep that not only democracy but all civilized values are doomed… A culture which permits science to destroy traditional values but which distrusts its power to create new ones is a culture which is destroying itself.” (Dewey 1939, pp. 117–118)

In other words, if we take policy thinking as an exemplar of the use of science and engineering in public affairs, we must admit that policy itself offers no insight or vision of the good independent of the needs and desires already manifest in the body politic. Its ability to help clarify, organize, and achieve the needs and desires as they are given deserves to be affirmed as a good. But is such a philosophical commitment to ethics policy sufficient to address the implicit nihilism that Dewey acknowledges? Or does it leave the door open to counter affirmations from alternative sources such as tradition or religion?

Engineering, Technology, and Democratic Society

The brief examination of engineering and ethics in section two concluded with a question concerning whether engineers possess any knowledge that would justify their professional claim to special understandings of public safety, health, and welfare. The examination of ethics and policy in section three concluded with a related question concerning the rationality of democratic knowledge of the good. In more general conclusion, consider some complicating reflections on these two questions–reflections that provide modest support for the thesis that ethical rationality trumps technological rationality.

Engineering Knowledge

First, the issue of engineering knowledge: It is not clear that engineers as engineers know anything special about safety, health, or welfare in anything like the way physicians have special knowledge about the nature of health and lawyers about the structure of justice. The engineering education curriculum includes hefty doses of science, mathematics, engineering sciences (e.g., statics and thermodynamics), and design. But only in restricted ways do engineers learn about safety, health, and welfare (Mitcham 2009). The strongest exception is safety; but, as the existence of a specialized discipline termed “safety engineering” suggests, safety is not as integral to engineering as is sometimes proposed. By contrast, almost all medical school courses necessarily involve learning something about health; anatomy and physiology both include and explicate built-in notions about the proper structure and functioning of the human organism. Surely physicians know more about health–and welfare economists, perhaps, about welfare–than engineers know about these ends or ideals as such.

More specifically, on what possible basis are engineers more qualified than anyone else to understand or determine the safety, health, and welfare that should be associated with engineered structures, products, or processes? For instance, in Walter Vincenti’s lucid analysis of What Engineers Know and How They Know It (1990) there is no indication that engineers qua engineers know anything special about safety, health, or welfare. Safety, health, and welfare are conspicuous by their absence.

Going further, the engineer philosopher Samuel Florman has argued explicitly with regard to safety that it would be crazy for engineers to determine “what criteria of safety should be observed in each problem” encountered. Artifacts can always “be made safer at greater cost, but absolute freedom from risk is an illusion.” Levels of safety are “properly established not by well-intentioned engineers, but by legislators, bureaucrats, judges, and juries [and it] would be a poor policy indeed that relied upon the impulses of individual engineers” (Florman 1981, pp. 171 and 174). The most engineers can do is help clients and the public understand the relevant degrees of safety and then invite them to decide how safe is safe enough. Engineers qua engineers are no more qualified to make such judgments than anyone else; they legitimately participate in making such determinations, but only as users and citizens.

Policy Ends

Florman’s argument implicates a second issue concerning ethics and policy. What is the basis for determining the good (or goods) that should serve as the end (or ends) in policy decision making? According to Dewey, ends should be determined by rational and intelligent democratic decision making. This is a contested proposition, but in the socio-historical period in which we live it remains the predominant view. Within such boundary conditions, then, what might be said about the rationality and ethical character of democratic decision making and participation processes?

The vision of engineering responsibility for more effective formulation and achievement of social policy goals, insofar as it limits citizen participation in decision making, functions as an implicit form of technoscientocracy. An engineer committed to the promotion of public safety, health, and welfare may make decisions about technical issues in an authoritarian manner at odds with democratic ideals, based on a strictly technical analysis and evaluation of the risks associated with some product or process. Recognition that technology often brings with it not only benefits but also costs and risks argues for granting all those affected some input into technical decisions. As a result, at least two scholars have independently argued for a principle of “no innovation without representation” (Winner 1991; Goldman 1992).

One of the more provocative efforts to think through the participation principle in relation to engineering can be found in the collaboration of philosopher Mike Martin and engineer Roland Schinzinger. As their argument is stated in a widely adopted engineering ethics textbook, engineering should be seen as a form of “social experimentation.” Engineering projects, which are increasingly integral to public affairs, are experiments insofar as they are undertaken in partial ignorance, outcomes are uncertain, and future engineering practice is modified by knowledge gained as a result. More crucially, these experiments impact users, consumers, and those societies in which the engineered structures, products, and processes are created and deployed.

Viewing engineering as an experiment on a societal scale places the focus where it should be: on the human beings affected by technology. For the experiment is performed on persons, not on inanimate objects. In this respect, albeit on a much larger scale, engineering closely parallels medical testing of new drugs or procedures on human subjects (Martin and Schinzinger 1983, pp. 59–60).

In consequence, “the problem of informed consent (…) should be the keystone in the interaction between engineers and the public” (Martin and Schinzinger 1983, p. 60). Stimulated by critical reflection on nuclear power and public policy, Kristin Shrader-Frechette has likewise argued that principles of free and informed consent should be extended from biomedicine and applied to the development of technology generally (1991 and 2002).

Just as medical research with human subjects or participants is moral only to the extent it respects the free and informed consent of the persons involved, so must engineering undertake to respect the autonomy of those it affects. Commitment to public safety, health, and welfare as a substantive ideal is replaced by the ideal understood in procedural terms. The basic form of safety, health, and welfare is not to be subjected to risks, deprivations, or harms to which one has not knowingly acceded, so that the practice of free and informed consent becomes the basic form of engineering ethics. Safety, health, and welfare are then publicly determined by those affected through their free and informed participation.

There are nevertheless at least two key differences between informed consent in medicine and in engineering. Physicians and medical researchers possess substantive knowledge about the nature of health with which they can genuinely inform those who are exercising consent. Additionally, they are largely dealing with individuals. Neither of these conditions is met with regard to informed consent related to engineering projects. The character of safety is largely determined by the public involved, and it is a public rather than individuals that is asked to exercise consent.

The difficulties of establishing appropriate protocols for practicing informed consent with regard to medical research and human subject participants are considerable; they can only increase when informed consent is raised to the societal level, as proposed by Martin and Schinzinger. Admitting the problematics of informed consent, they argue that at a minimum engineers have a responsibility to provide users and consumers “information about the practical risks and benefits of the process or product in terms they can understand” (Martin and Schinzinger 1983, p. 60). In a subsequent version of their argument (1989, p. 69) they substitute “valid consent” for the term “free and informed consent” (adapting from discussions in the biomedical ethics literature). Can something more be said about the conditions for constructing valid consent, which could also be thought of as promoting a robust, democratic, rational, and intelligent determination of goods or ends?

Consider the question from two perspectives: one of technoscientists and another of democratic citizens. The first emphasizes the responsibilities of scientists and technologists to enhance (procedural) rationality, the second the responsibilities of democratic citizens to act (procedurally) rationally. Together the ideal would be to construct a common technological and ethical rationality for a democratic establishment of a (substantive) good.

Responsibility of Technoscientists

With regard to the perspective of technoscientists or engineers, one can derive useful suggestions from an analysis by political scientist Roger Pielke Jr. (2007), who distinguishes four idealized roles for a scientist or engineer who engages with publics: pure knowledge exponent, advocate, arbiter, and honest broker. This analysis complements the simpler idea of an ethics code for the ethical exercise of technoscientific advice in the public sphere.

To illustrate the distinctions, imagine a politician or citizen seeking counsel regarding geoengineering responses to climate change. The pure knowledge exponent engineer responds like a detached bystander, spelling out in detail the various chemical and/or mechanical engineering processes that can sequester carbon. Politicians and citizens might well feel like they had inadvertently walked into a technical engineering class.

The issue advocate engineer, by contrast, acts like a salesperson and immediately argues for a bioengineering-related seeding of the ocean with iron to stimulate phytoplankton growth that would consume carbon dioxide. But the argument would be made with a peculiarly technical rhetoric that deploys information about the chemical composition of the iron, transport mechanisms, relation to phytoplankton blooms, and more. Politicians and citizens might well think they were standing at the booth demonstrating a proprietary innovation at an engineering trade show.

The arbiter engineer acts more like a hotel concierge. Steering a course between that of neutral bystander and advocate, such an engineer starts by asking what the politician or citizen wants from a geoengineering response: simplicity, low cost, safety, dramatic results, public acceptability, or what? Once informed that the aim is safety, the arbiter engineer would identify a matrix of options with associated low risk factors. The arbiter engineer engages with the public and communicates knowledge guided strongly by publicly expressed needs or interests. On another occasion the analogy might be a medical doctor or psychologist counseling a patient.

Finally, the honest broker engineer reaffirms some modest distance from the immediate needs or interests of any inquirer in order to offer an expanded matrix of information about multiple geoengineering options and associated assessments in terms of simplicity, cost, safety, predictable outcomes, and more. The effect will often be to stimulate re-thinking on the part of inquirers, perhaps a reconsideration of needs or interests they had been operating with but never paused to articulate. The experience might be more analogous to a career fair than to a single booth at a trade show.

Pielke’s spectrum of alternative engagements between engineering and policy is certainly more adequate than any simply conceived, one-way, univocal engineering-to-policy model. It is also more robust than an ethics code. Although promoting the honest-broker role, Pielke is a pluralist insofar as he admits that any ideal type may be appropriate in the right context. Whatever ideal type is chosen, it needs only to be adopted with conscious recognition and admitted transparently to any interlocutors. The advocate engineer, for instance, should say up front, “Let me tell you the technical reasons for adopting project X,” making it clear that other engineers might well marshal knowledge in support of project Y. What is illegitimate, Pielke argues strongly, is stealth issue advocacy, which occurs when the advocate engineer fails either to recognize or to admit the advocacy. It is even more illegitimate when advocates consciously hide or deny their advocacy.

Pielke and associates (Pielke et al. 2010) argue that a better path is to recognize the limits of engineering and to distance advice from interest-group politics while more robustly connecting it to specific policy alternatives. Research will not settle political and ethical disputes about the kind of world in which we wish to live. But engineers can connect their research with specific policies, once citizens or politicians have decided which outcomes to pursue. In this way, engineers provide an array of options that are clearly related to diverse policy goals. Rather than advocate a particular course of action, either openly or in disguise, engineers should work to help policymakers and the public understand which courses of action are consistent with our current–always fallible–technical knowledge about the world and our current–always revisable–visions of the good.

But there are inadequacies in this account of the engineer-politics interface as well. Although Pielke characterizes his four possible models for science or engineering to inform policy as ideal types, that of the so-called honest broker is impossibly ideal. One of the strongest findings in philosophically oriented science, technology, and society (STS) studies is that all knowledge production has what may be termed an engagement coefficient. Engagement coefficients can be strong or weak, but they cannot be avoided, simply because engineers are embodied, historical, and culturally situated persons. To think otherwise is to imagine engineering or technology as a neutral tool constructed on the basis of a view from nowhere. But engineering is constituted by a unique stance toward the human lifeworld and a particular, culturally influenced set of moral norms. To propose any technical input to policy is, at a minimum, effectively to affirm and promote the value of engineering and technology themselves.

Coefficients of engagement can be further expected to include commitments not just to the value of engineering in general but to the values of particular branches of engineering, research programs, and more. The vicious interpretation of such unavoidable commitments is that all engineering is culturally biased or captive of special interests and no better than any other worldview, a position that promotes skepticism if not relativistic cynicism about appeals to engineering or technology. But one need not go this far, and in fact there are strong arguments for a qualified realist interpretation that grants what might be termed, adapting Philip Kitcher (2001), “well-ordered engineering” an appropriately qualified but nevertheless privileged position in the political realm. No matter how high the wall separating engineering and politics, the relation between the two will involve dialogue and dialectic. It may not be possible to transcend the arbiter or concierge models, which nevertheless can be understood as contributing to enhancing public rationality (see also Collins 2014).

Responsibilities of Democratic Citizens

Turning from the responsibilities of technoscientists to those of democratic citizens, one can hypothesize a companion suite of ideal types. Pielke’s four types are constructed from the point of view of the scientific experts. Another typology may be constructed from the perspective of those seeking scientific expertise, with such non-scientific principals distinguished into those seeking agents, debate coaches, teachers, or proxies.

Again imagine politicians or citizens seeking counsel regarding geoengineering. Principals seeking engineers to act as agents (sometimes called “hired guns”) want someone to scout out and/or help run a gauntlet of ignorance to realize their own goals. They do not want those goals questioned. Having made some decision in favor of or against some geoengineering project, these citizens are seeking expert witnesses to advance their cause.

In like manner, principals seeking engineering debate coaches want help in responding to anyone who might offer intellectual objections to a decision. Technoscientific assistants to politicians and corporate leaders often function in this manner. (“Tell me what science and engineering I can cite if objection X comes up.”)

Principals seeking engineering teachers want to learn the science or engineering. They place themselves under the tutelage of technoscientists not in order to become scientists or engineers themselves but to acquire the knowledge of, for instance, science journalists or simply well-informed citizens.

Finally, at the opposite extreme are principals who want technoscientific proxies to whom they can delegate decision making, trusting that the proxies share or will respect the principal’s basic values and commitments. This readily occurs when medical patients are so overwhelmed with information about an illness that they feel unable to decide between alternative treatments offered by an attending physician. (“Doctor, you decide. You know better than me. I cannot think about it anymore.”) It also occurs when Congress or a public charges technical agencies with delegated rulemaking.

As is obvious, principals have needs and interests that can bias their use of technoscience, just as scientists and engineers have needs and interests that can bias their inputs to policy. In any technoscience for policy, both deserve to be acknowledged and should enter the dialectic or dialogue. Additionally, however, the ideal stance of principals seeking teachers–that is, principals who are in the first instance good listeners and learners–is arguably the one most likely to enhance democratic rationality.

To illustrate this point, it is possible to distinguish first-, second-, and n-order goods in well-designed structures, products, processes, and systems. Especially with technologies, first-order technical goods are constituted by the functioning of the object itself. This functioning can be strongly or weakly coupled to second- and n-order goods of a more public character. But the second-order goods are matters more of public than of engineering judgment. This fact offers another way to interpret the phrase “public safety, health, and welfare.” It is not so much that engineers have a responsibility to determine and protect public safety, health, and welfare as they see it; rather, they have a responsibility to protect what the public determines as its vision of safety, health, or welfare. The more remote the order of goods and the more weakly coupled the relation between first- and n-order goods, the more this will be the case.

Engineered public goods in many instances are easily determined by the public that benefits directly from engineered buildings, consumer products, production processes, and transport or communication systems. Engineers thus have a corresponding duty to take into account how the public may indeed benefit, while being granted a certain degree of autonomy, not so much to converse among themselves (like scientists) as to converse with the public. This should especially include an ability to alert the public to situations that pose dangers or risks to users which may be obscured by short-term convenience or business interest.

Insofar as there is a tight coupling of first- and n-order goods, engineering also properly plays a more robust and direct role in decision making with regard to those projects it is assigned by political or policy decision. When President John F. Kennedy committed in 1961 to a Moon landing in less than ten years, he could do so only after consulting with engineers about the feasibility of such a project. He (or his advisers) had to seek teachers of the technically feasible and be willing to listen and learn.

Similar consultation is required with regard to any technological project to be undertaken as a result of political or commercial decision making. When engineers are instead forced to behave as agents or debate coaches in the service of unrealistic goals, disaster often ensues, as was the case in the Soviet Union when Stalin enrolled engineers in impossible projects (Graham 1993).

Turning back to the responsibilities of engineers: engineers themselves, independent of being consulted about externally conceived projects, regularly put proposals before politicians, the public, and venture capitalists for what they imagine as new, reasonable engineering undertakings. In the process, they have obligations to bracket their technical enthusiasms in favor of thinking beyond the technical to consider as honestly as possible the potential for real public benefit. Indeed, commercial employers often admonish engineers to take on the perspective of management and try to anticipate the economic implications of their work. This is another version of the duty plus respicere, the duty to take more into account: a general, abstract statement of the conditions for procedural rationality under conditions created by technology.

Coda

In an insightful reflection on The Techno-Human Condition (2011), Braden Allenby and Daniel Sarewitz offer another take on this tension between first- and n-order goods. They distinguish three levels of cause-and-effect relationship between engineering and the world. Level I occurs when a specific device is engineered to achieve a clearly defined function: “a vaccine prevents a particular disease, or a well-designed manufacturing process eliminates the use of toxic chemicals.” Here one can be confident of the results because the relation between a policy good (health) and the engineered instrumental means is simple and direct. First- and second-order goods are tightly coupled.

In Level II relationships, however, engineering becomes part of “a networked social and cultural phenomenon [functioning] in a broader context that can be complicated, messy, and far less predictable or understandable.” Examples include transport and communication systems. At Level II, “acting to achieve a particular intended outcome is often difficult because the internal system behavior is too complicated to predict.” First- and n-order goods are less tightly coupled.

The first two levels are relatively familiar, and engineers can make reasonable efforts to assess the outcomes, intended and unintended, of their interventions into systems. At Level III, however, the system has become so large and complex that its boundaries are difficult to determine; it has become a “complex, constantly changing and adapting system in which human, built, and natural elements interact in ways that produce emergent behaviors which may be difficult to perceive, much less understand and manage.” At this level, it is increasingly difficult to be sure about the relation between first- and n-order goods. Moreover, our experience of the techno-lifeworld, according to Allenby and Sarewitz, is that “We inhabit Level III, but we act as if we live on Level II, and we work with Level I tools” (Allenby and Sarewitz 2011, pp. 63 and 161).

What does this recognition imply? The Allenby-Sarewitz insight is an ethical one. To summarize in the briefest possible terms: the rationality of ethics concerns how to allow ends to emerge; the rationality of technology concerns how best to achieve ends once they have been posited. Ethical rationality thus trumps technological rationality. The former emphasizes the ethical responsibilities of citizens to act rationally and with intelligence by incorporating science and engineering into deliberation with regard to ends, the latter the responsibilities of scientists and engineers to act rationally and with intelligence in advising democratic citizens.