
8.1 Branches of Ethics

Nico Formanek, Juan Manuel Durán

8.1.1 Ethics in Technical Disciplines

Ethics has grown into many different branches, treating increasingly specialized but also interdisciplinary problems. There are ethics of engineering, ethics of computers, and ethics of medicine—just to name a few branches. In general, we will call them ethics of X, where X can be basically any field where ethical problems emerge. To call them the ethics of X presupposes that each branch deals with different and disjoint ethical issues. Thus, the ethics of engineering would be concerned with issues alien to the ethics of computer simulations. But in practice, it is quite common to find shared concerns among all these branches of ethics. In this context, there are different connections between, say, the ethics of engineering and the ethics of computer simulation. If one thinks of computer simulation as a sub-discipline of engineering, then the ethics of engineering will treat more general problems arising from engineering practice without reference to computers, while the ethics of computer simulation will be an application of the ethics of engineering to computer science.

A general problem which is treated in the ethics of engineering is the problem of unintended consequences. Every technology is designed with a specific end. To reach this end, desired and undesired impacts are accounted for. It is obvious that not every effect of a technology, desired or undesired, can be predicted in the design process. For example, the use of fossil energy on a vast scale, which emerged during the industrial revolution, has, as we now know, some very undesirable side effects. Those were certainly not intended, nor known, by the early innovators constructing steam engines.

A similar problem will, of course, arise with every technology, and computer simulation is no exception.

Before we consider the special case of the ethics of computer simulation, let us talk briefly about what ethics and ethical problems are in general.

Ethics is the branch of philosophy that studies moral problems, that is, problems of right and wrong action. It is thus closely connected to the philosophy of action. For the purposes of this article, it is sufficient if we remain close to a common-sense concept of what an action is: One does something under self-determining conditions of possibility (e.g., one is not being forced to take some action). Actions and their consequences can be evaluated, and the task of an ethics of X is to give reasons for evaluating the special class of actions picked out by X. The scenario is like this: If you ask yourself “What is the best action in situation A, is it F or G or …?” then the ethics of X will be a reservoir of reasons for picking out the best possible action—or so it is hoped.

For example, the ethics of medicine treats concerns about medical interventions on humans. Questions evaluating the actions of the relevant stakeholders might be: How does a physician weigh the needs of patients suffering from an illness in a randomized controlled study? Should the cheapest but less effective treatment be chosen, or rather the most expensive but also most effective one?

It should be noted that there is a more general classification of ethical theories in philosophy. Standardly, they are classified as consequentialism, deontology, virtue ethics, and pragmatic ethics. An ethics of X will employ one or more of these frameworks to evaluate the problem at hand. A perhaps too simplistic description of these frameworks would be that consequentialism evaluates an action only with respect to its consequences, deontology according to rules for carrying out those actions, virtue ethics according to the virtue of the agent, and, lastly, pragmatic ethics according to the wider context in which the action occurs.

Consider the following example on how an ethical evaluation could work in the different frameworks.

Situation: You are in a hurry to get to a job interview and are stuck in traffic. There is a shortcut, but it requires driving the wrong way down a one-way street.

Question: Should you take the shortcut?

Actions: (A) Take the shortcut. (B) Don’t take the shortcut.

Below are answers that certain ethical frameworks could give.

Consequentialism: Act always to maximize utility. A maximizes utility. Answer: Do A.

Deontology: Act always according to rule R. Action B satisfies rule R. Answer: Do B.

Virtue ethics: Act always to preserve your virtuousness. Action B preserves virtuousness. Answer: Do B.

Pragmatic ethics: The one-way road is rarely used by cars and taking it would reduce your stress levels. Answer: Do A.
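To make the mechanical character of such evaluations vivid, here is a minimal sketch of the shortcut example in Python; the utility numbers, the traffic rule, and the virtue judgments are invented for illustration and are not part of any of the frameworks themselves.

```python
# Illustrative sketch only: the numeric utilities, the rule, and the virtue
# judgments below are invented placeholders for the shortcut example.

actions = ["A: take the shortcut", "B: don't take the shortcut"]

# Consequentialism: pick the action with the highest (assumed) utility.
utility = {"A: take the shortcut": 8, "B: don't take the shortcut": 5}
consequentialist_choice = max(actions, key=lambda a: utility[a])

# Deontology: pick an action that satisfies the rule
# "do not drive the wrong way down a one-way street".
def satisfies_rule(action):
    return "don't" in action  # only B respects the traffic rule

deontological_choice = next(a for a in actions if satisfies_rule(a))

# Virtue ethics: pick an action judged to preserve the agent's virtuousness.
preserves_virtue = {"A: take the shortcut": False, "B: don't take the shortcut": True}
virtue_choice = next(a for a in actions if preserves_virtue[a])

print(consequentialist_choice)  # A
print(deontological_choice)     # B
print(virtue_choice)            # B
```

Note that the “answers” follow entirely from the assumed utilities, rule, and virtue judgments, which anticipates the point made below about frameworks not yielding unique answers.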

Ethical frameworks do not generally provide unique answers about what to do. There are several reasons for this. Firstly, the frameworks themselves are justified by adducing artificial situations (sometimes more, sometimes less artificial—think of trolley problems), which makes picking out the adequate framework dependent on the situation description and the moral intuition underlying said description [1].

Secondly, even for non-pragmatic frameworks, the provided answers depend on the preselected actions. Those actions are generally not picked out according to some specified rule in the ethical framework; rather, they depend on what the person doing the ethical evaluation is willing to admit. The kind of arguments that these frameworks supply for or against an action are at the very least enthymematic; in other words, in most cases it is unclear whether these frameworks can provide deductive certainty at all about which action to choose.

It is the uncertainty in the description of situations which connects ethics to other branches of philosophy like epistemology and philosophy of science. One of the biggest problems in ethics is how to make a morally sound decision under uncertainty.

You will notice that our later examples from the ethics of computer simulation could all be labeled as decisions under uncertainty. While it would be nice if philosophy could reduce the uncertainty in the situation description, this is in many cases not possible. Uncertainty might even be an inherent property of a situation description, e.g., when the limited time and mental capacity of a person prevent an adequate evaluation of all presented options.

8.1.2 The Ethics of Computer Simulation

With this in mind, we now turn to the ethics of computer simulation. First, some foundations have to be laid. We will not say what a computer simulation is, but rather what can be done with it. A computer simulation can be run on a computer to obtain simulation results. These results give an answer to a previously—however vaguely—stated question. The quality of the answer depends on many factors, internal and external to the simulation. One external factor is the specificity of the question according to which the computer simulation was built: more specific questions lead to better models, which in turn lead to better computer simulations. An internal factor would be the quality of the program: does it contain many hacks, kludges, etc., which might affect its representational qualities?

In philosophy of science, models have long been an object of inquiry. Models try to represent a part of the world and might include a number of idealizations, abstractions, and fictionalizations in order to do this. Uncertainty, then, is already introduced at these stages if it is unknown how those idealizations, abstractions, and fictionalizations affect the representational capacity of the model. The same is true for computer simulations, with one addendum: computer simulations typically introduce more and different sources of uncertainty. This will become apparent in later examples. Among other things, epistemology evaluates why some forms of uncertainty are tolerable while others might not be.
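As a minimal sketch of how an idealization can turn into uncertainty about simulation results, consider a toy falling-object simulation in which air resistance is idealized away; the model, the drag-coefficient range, and all numbers below are invented for illustration.

```python
# Minimal sketch (not from the chapter): a toy falling-object simulation in
# which air resistance is idealized away, compared against runs that include
# a drag coefficient whose true value is only known to lie in some range.
import random

def fall_time(height_m=100.0, drag_per_mass=0.0, dt=0.001, g=9.81):
    """Integrate a falling object with linear drag until it hits the ground."""
    h, v, t = height_m, 0.0, 0.0
    while h > 0.0:
        v += (g - drag_per_mass * v) * dt  # drag_per_mass = 0 is the idealization
        h -= v * dt
        t += dt
    return t

idealized = fall_time(drag_per_mass=0.0)

# Uncertainty about the idealized-away parameter: sample plausible drag values.
samples = [fall_time(drag_per_mass=random.uniform(0.0, 0.2)) for _ in range(200)]

print(f"idealized (no drag): {idealized:.2f} s")
print(f"with uncertain drag: {min(samples):.2f} s to {max(samples):.2f} s")
```

If the effect of an idealization on the quantity of interest is not quantified in some such way, it enters the simulation results as unexamined uncertainty.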

Now, computer simulations would not be an interesting case for ethics if they did not figure prominently in ethical questions. And this is where the current literature on the topic takes its starting point. Everyone knows cases where simulation results have been used to justify policy decisions. Examples include the IPCC report on global climate and the simulation of pedestrian traffic preceding the approval of the 2010 Love Parade in Germany.

So, whenever simulation results are used in situation descriptions for ethical questions, the uncertainty of those results is elevated from a “mere” epistemological concern to an ethical issue.

Following [2, Chap. 7] and [3], we will now discuss several frameworks that have been proposed to cope with the uncertainty of simulation results in ethics.

According to Williamson [4], simulation results must be trustworthy if they are used in ethical decisions. Trustworthiness itself depends on several ethical and epistemic factors. For a simulation result to be trustworthy, it has to be credible, transferable, dependable, and confirmable. Williamson takes credibility to be established by inter-subjective methods of verification and validation, but also by expert authority. It is therefore a mixed epistemic and ethical concept aimed at reducing the uncertainty inherent in the simulation results. The remaining concepts are epistemic in nature. Transferability is the possibility of extending simulation results beyond their original context, a situation that often arises in policy decisions that should apply across different situations.

Dependability is the property of simulation results to still apply after a certain amount of time has passed. Uncertainty might arise due to the target system changing in unknown and unaccounted-for ways.

Williamson’s last criterion of confirmability amounts to concerns about different idealizations that were introduced into the computer simulation, for example, idealizations that make the problem computationally tractable in the first place. Such idealizations can introduce uncertainties in the simulation results if they are not properly accounted for.

In the end, if uncertainties are present, they taint the ethical decision that rests on the simulation result, possibly leading to unethical choices of action.

A similar point is made by Brey [5], for whom uncertainty enters through misrepresentations. Computer simulations can represent or fail to represent a phenomenon, depending for example on which idealizations were in place during their implementation. Instances of misrepresentation might be hard to detect because direct comparison to experimental data is impossible. This epistemic concern again threatens ethical decision-making with uncertainty.

As we saw earlier, the authors of the quoted studies on computer simulation ethics are concerned with harm that might arise from ethical decisions which are based on uncertain simulation results. It is very hard to say what could be done to reduce the uncertainties without bordering on platitudes like “improve verification and validation procedures.” The most general kind of advice that is given in the existing literature is contained in codes of conduct.

Ören et al. [6] propose such a code of conduct specifically for computer simulations, which is also described in Sect. 8.3 of this book.

In general, codes of conduct follow the spirit of virtue ethics or deontology. They provide rules for action or guidance on how virtuous conduct can be achieved. Ören’s code is adapted to the needs of simulationists and thus applies only to ethical questions concerning the genesis, running, and use of computer simulations. The justification of the rules of the code depends on more general principles of good scientific conduct, best practices from programming, and previous codes of conduct for the engineering discipline.

8.2 Ethics for Simulationists and Analysts Using Modeling and Simulation

Paul K. Davis, Andreas Tolk

The rationale for addressing ethics in this volume on modeling and simulation has several components. For engineers, the basic rationale is that engineers build things that change the world. In doing so, they assume responsibilities to individual, organizational, and government clients, and to humanity at large. Sometimes, the obligations are in conflict, which creates difficult tensions. Scientists, who often use M&S, have obligations such as the search for truth and the advance of science, but also obligations to the people and even animals who participate in or are subjects of research. A third category of users consists of analysts. These may also be scientists or engineers, but they aid decision-makers and often support activities affecting people and the world. Those who build models have the obligation to make it possible for analysts to use the models well, to inform decision-makers correctly and wisely, to assure fairness, and to minimize harm. This article addresses, in turn: (1) definitions, (2) ethics in the modeling and analysis cycle, (3) why such ethics matter, (4) approaches to ethics, and (5) the role of professional codes.

8.2.1 Definitions

A recent textbook covers definitions, distinctions, and comparisons. It then has a number of concrete examples that illustrate vividly the ethical issues that arise for engineers [7]. Most of its material applies also to those associated with science, technology, and analysis. This article draws also on ideas in other published papers. For example, an early text laid much groundwork that is still very relevant [8] and the need for simulationists to have a code of conduct was discussed in an influential conference paper [9].

The terms “morals” and “ethics” are often used interchangeably. Distinctions are sometimes drawn, but in contradictory ways. Here, we use:

Ethics, also called moral philosophy, is the discipline concerned with what is morally good and bad and morally right and wrong. The term is also applied to any system or theory of moral values or principles. (https://www.britannica.com/topic/ethics-philosophy)

In making distinctions, we use the formula that

Ethics are the science of morals, and morals are the practice of ethics. (Fowler and Crystal [10])

The adjectives “ethical” and “moral” can also be ambiguous, but “moral” usually refers to personal matters whereas “ethical” is favored when referring to matters of, e.g., medicine, law, science, or business.

8.2.2 Ethics in the Cycle of Modeling and Analysis

Why does ethics matter in a volume on modeling and simulation? Adapting a concept laid out in the text mentioned above [7], we note that ethical considerations are or should be important in each stage of the cycle shown in Fig. 8.1. The top line shows the process from problem definition to the delivery of well-articulated evaluation of options. Feedbacks (shown as dashed lines) occur throughout the process. For example, as options emerge, one recognizes the need to consider additional objectives with corresponding metrics. Also, when comparing options, one may be sensitized to uncertainties that should be explicitly addressed in the analytic plan.

Fig. 8.1 Ethics in the cycle of modeling and analysis. (The figure depicts the cycle from problem definition through model and simulation building, setting up the problem analysis, and developing and evaluating options, and indicates ethics-related errors that can arise at each stage.)

As indicated by the italic material at the bottom, numerous ethical issues arise or should arise at every step. These are illustrated by the items shown, which indicate only some of the many ethical errors or lapses that may occur, such as ignoring long-term effects on the environment or the public's interests in privacy [11], doing the analysis with biased data [12], or obfuscating risks and distributional issues (as in dwelling only on average economic effects). Some other examples are (1) omitting key variables, which precludes correctly analyzing their effects (e.g., omitting the possibility of a tax cut's stimulus effect), and (2) explaining results based on concepts not actually in the model (e.g., ascribing an intention to a model object that merely follows some rules, oblivious of intention).
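One of these lapses, omitting a key variable, can be made concrete with a small synthetic example; the data-generating process and coefficients below are invented for illustration, and the point is that the omitted variable's effect cannot be analyzed and is, moreover, wrongly attributed to the included one.

```python
# Illustrative sketch only: synthetic data showing omitted-variable bias.
# The coefficients and the correlation structure below are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

x1 = rng.normal(size=n)             # included variable
x2 = 0.8 * x1 + rng.normal(size=n)  # omitted variable, correlated with x1
y = 1.0 * x1 + 2.0 * x2 + rng.normal(size=n)

# Full model: regress y on both variables (recovers roughly 1.0 and 2.0).
X_full = np.column_stack([x1, x2])
beta_full, *_ = np.linalg.lstsq(X_full, y, rcond=None)

# Misspecified model: x2 omitted; its effect is folded into x1 (roughly 2.6).
X_omit = x1.reshape(-1, 1)
beta_omit, *_ = np.linalg.lstsq(X_omit, y, rcond=None)

print("full model coefficients:", beta_full.round(2))
print("x1 coefficient with x2 omitted:", beta_omit.round(2))
```

With x2 omitted, the analysis both loses the omitted effect entirely and distorts the effect attributed to the included variable, which is exactly the kind of error the figure flags.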

8.2.3 Why Ethical Considerations Matter

Most readers may find the importance of such matters evident, but a few examples may be worthwhile. Consider urban planning that focuses entirely on economic rejuvenation. The results may include destroying neighborhoods and cultural features, depriving people of their life-long homes, forcing such people to move to more hostile but affordable areas, and generating a “sterile” downtown without character. Such obliviousness to the many dimensions of the problem might be seen as incompetence, but not if the only consideration was stimulating economic growth in the downtown area. Or consider developing a simulator for a new aircraft, a simulator that is exceedingly accurate for most conditions but does not address some plausible circumstances that would be expensive to understand and represent well. Pilots trained in such simulators would not be prepared if the troublesome circumstances arose. This occurred in the notorious case of the failures of the Boeing 737-MAX. (A newspaper account touched the high points [13], but more definitive accounts of the fiasco are slowly emerging [14].) The aircraft's failures killed 346 people. Many other examples could be given [7, 15].

One of the earliest discussions of ethics in the context of simulation was a paper by John McLeod, the founder of the Society for Modeling and Simulation [16]. McLeod was commenting on the danger that some use of simulation might be analogous to that of the accountant who “when asked ‘How much is 2 + 2?’ replied ‘How much do you want it to be?’” McLeod went on to provide draft ethical guidelines that emerged from a study by the National Science Foundation. Many other references might be named, each with its own bibliography (e.g., [7, 15]). A recent paper critically reviews the important ethical subtleties that arise when attempting to address social issues with simulation [17].

8.2.4 Approaches to Applying Ethics

It is sometimes useful to distinguish among three different approaches that scholars take in addressing ethical issues. The exact labels vary, but the three approaches are (1) consequentialist (utilitarian); (2) deontological (duty-driven, as with adherence to laws, norms, or principles); and (3) virtue-seeking (seeking good character traits, such as reliability, honesty, …). These are, roughly, associated, respectively, with Jeremy Bentham and John Stuart Mill, Immanuel Kant, and Aristotle. They are discussed and compared, with examples, in Van de Poel and Royakkers [7].

8.2.5 The Role of Professional Codes

Many ways exist for addressing ethical considerations, but in this volume, we address only one: having professional organizations adopt codes of conduct.

Ethical codes can be crafted to be inspirational, advisory, or disciplinary in nature [18, 19]. Numerous examples exist, as well as a corresponding literature. Here, we merely touch upon examples.

An inspirational expression of engineering ideals is the oath taken to join the Order of the Engineer:

  • I am an Engineer; in my profession I take deep pride. To it I owe solemn obligations.

  • Since the Stone Age, human progress has been spurred by the engineering genius. Engineers have made usable Nature's vast resources of material and energy for Humanity's benefit. Engineers have vitalized and turned to practical use the principles of science and the means of technology. Were it not for this heritage of accumulated experience, my efforts would be feeble.

  • As an Engineer, I pledge to practice integrity and fair dealing, tolerance and respect, and to uphold devotion to the standards and the dignity of my profession, conscious always that my skill carries with it the obligation to serve humanity by making the best use of Earth's precious wealth.

  • As an Engineer, I shall participate in none but honest enterprises. When needed, my skill and knowledge shall be given without reservation for the public good. In the performance of duty and in fidelity to my profession, I shall give the utmost.

(the Oath is copyrighted by the Order of the Engineer, Inc.)

To be sure, not all engineers take the oath, and not all that do necessarily live up to it in all respects, but the oath reflects an ideal with which many can resonate and to which many make every effort to adhere.

Advisory professional codes provide guidelines that help the simulationist to make good decisions, often very similar to codes of best practices. Many codes of professional conduct advise society members how to behave professionally. The IEEE Code of Ethics—documented in the IEEE Policies, Sect. 7: Professional Activities (Part A: IEEE Policies)—falls into this category.

We, the members of the IEEE, in recognition of the importance of our technologies in affecting the quality of life throughout the world, and in accepting a personal obligation to our profession, its members and the communities we serve, do hereby commit ourselves to the highest ethical and professional conduct and agree:

  1. to accept responsibility in making decisions consistent with the safety, health, and welfare of the public, and to disclose promptly factors that might endanger the public or the environment;

  2. to avoid real or perceived conflicts of interest whenever possible, and to disclose them to affected parties when they do exist;

  3. to be honest and realistic in stating claims or estimates based on available data;

  4. to reject bribery in all its forms;

  5. to improve the understanding of technology; its appropriate application, and potential consequences;

  6. to maintain and improve our technical competence and to undertake technological tasks for others only if qualified by training or experience, or after full disclosure of pertinent limitations;

  7. to seek, accept, and offer honest criticism of technical work, to acknowledge and correct errors, and to credit properly the contributions of others;

  8. to treat fairly all persons regardless of such factors as race, religion, gender, disability, age, or national origin;

  9. to avoid injuring others, their property, reputation, or employment by false or malicious action;

  10. to assist colleagues and co-workers in their professional development and to support them in following this code of ethics.

Disciplinary codes impose negative consequences for violations of standards. The consequences may include, e.g., paying fees for not disclosing conflicts of interest, exclusion from certain types of contract competition because of past violations, or being removed from the professional society.

Although many professional science and engineering societies have their own code of ethics (e.g., that of the Association for Computing Machinery [20]), certain elements are common to most of them, such as pursuit of truth, protection of community and environment, accountability for actions, mentoring the next generation, and informing and engaging the public. Recently, diversity and the integration of minorities have been recognized as valuable goals for fairness, because they bring new perspectives and ideas that suggest better solutions supporting society. Rigor, respect, responsibility, honesty, and integrity have been identified as the core values for scientists and engineers, including simulationists.

It has been noted that policy analysts do not have an ethical code and that it would be difficult to develop a sensible code. Douglas Amy stated “Ethical inquiry is shunned because it frequently threatens the professional and political interests of both analysts and policymakers. The administrator, the legislator, the bureaucracy, and the profession of policy analysis itself all resist the potential challenges of moral evaluation” [21]. Others have long argued otherwise and have suggested a code of conduct [22, 23]. A book on the subject [24] includes examples and recommendations.

8.2.6 A New Obligation for Those Who Build M&S and Use It for Analysis

It has long been an ethic that analysts identify the assumptions on which their results depend. Much more is necessary. Analysts should routinely discuss how results vary with major assumptions on which there is uncertainty or disagreement. This should reflect exploratory analysis in which assumptions are varied simultaneously, rather than mere variable-at-a-time sensitivity analysis. Further, analysts should demonstrate ways in which clients can hedge against uncertainties, i.e., how to identify strategies that are relatively more Flexible (to changes of mission), Adaptive (to changes of circumstance), and Robust (to adverse shocks). This is sometimes referred to as planning for FARness [25, 26] or as what is becoming widely known as supporting Robust Decision-Making (RDM) under deep uncertainty [27, 28]. Such efforts should become an ethical obligation.

To put the matter differently, the analyst should go well beyond so-called best-estimate calculations (which are often misleading because of uncertainties) and indicate the range of circumstances under which the consequences of the strategies being considered are relatively predictable and favorable, relatively predictable and bad, or very uncertain and therefore risky [26]. M&S should be designed so as to make related analysis easier and routine. Failure to do such analysis may leave decision-makers with inappropriate confidence in best-estimate results, which may lead to seriously harmful decisions.
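The contrast between a best-estimate calculation and exploratory analysis can be sketched as follows; the toy capacity-planning model, parameter ranges, and thresholds are invented for illustration and are not taken from the cited FARness or RDM literature.

```python
# Minimal sketch (invented toy model): compare a best-estimate run against an
# exploratory scan in which uncertain assumptions are varied simultaneously.
from itertools import product

def outcome(strategy_reserve, demand_growth, unit_cost, shock):
    """Toy model: net benefit of a strategy under uncertain assumptions."""
    capacity = 100 + strategy_reserve
    demand = 100 * (1 + demand_growth) * (1 + shock)
    shortfall = max(0.0, demand - capacity)
    return 10 * min(demand, capacity) - unit_cost * strategy_reserve - 50 * shortfall

# Best-estimate calculation: a single point in assumption space.
best_estimate = outcome(strategy_reserve=20, demand_growth=0.10, unit_cost=5, shock=0.0)

# Exploratory analysis: vary the uncertain assumptions simultaneously.
cases = [
    outcome(20, g, c, s)
    for g, c, s in product([0.0, 0.1, 0.3],      # demand growth
                           [2, 5, 12],           # unit cost of reserve capacity
                           [-0.1, 0.0, 0.25])    # adverse or favorable shock
]

favorable = sum(v > 0.8 * best_estimate for v in cases)
bad = sum(v < 0.0 for v in cases)
print(f"best estimate: {best_estimate:.0f}")
print(f"{favorable}/{len(cases)} cases near or above the best estimate, "
      f"{bad}/{len(cases)} cases with negative net benefit")
```

Even in this toy setting, a strategy that looks clearly attractive at the best-estimate point performs badly in a sizable fraction of plausible cases, which is exactly the information a decision-maker needs in order to hedge.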

8.2.7 Final Observation

Today’s simulations are powerful computational tools that can be seen as the third pillar of science [29], along with theory and empirical data. When used with visualization tools and augmented reality, they allow immersion into the problem space and direct interaction with the model. This vividness, however, can deceive a user into seeing the simulations as valid surrogates of the real system when they are not. The ethical responsibilities of simulationists and of those who use simulations are growing in parallel with these technological advances.

8.3 Code of Ethics for Simulationists

Tuncer Ören

The code of ethics for simulationists (as posted at https://scs.org/ethics/) has been developed by the following members of the Ethics committee of the SCS:

  • Prof. Emeritus Tuncer I. Ören (Chair)—SCS AVP Ethics; Founding Director of M&SNet—McLeod Modeling & Simulation Network of SCS

  • Prof. Emeritus Maurice S. Elzas, Wageningen Univ., Wageningen, The Netherlands

  • Prof. Emeritus Louis G. Birta—Ottawa Center of the McLeod Institute of Simulation Sciences

  • Dr. Iva Smit, E&E Consultants, Netterden, The Netherlands.

The rationale for the code is clarified in:

Ören, T. (2002). Rationale for A Code of Professional Ethics for Simulationists. Proceedings of the 2002 Summer Computer Simulation Conference, pp. 428–433. https://www.site.uottawa.ca/~oren/index-pubs/pubs-2000s.pdf

The code is posted in the following languages:

English https://scs.org/wp-content/uploads/2015/12/Simulationist-Code-of-Ethics_English.pdf

Turkish https://scs.org/wp-content/uploads/2015/12/Simulationist-Code-of-Ethics_Turkish.pdf

French https://scs.org/wp-content/uploads/2015/12/Simulationist-Code-of-Ethics_Turkish.pdf

Italian https://scs.org/wp-content/uploads/2015/12/Simulationist-Code-of-Ethics_Italian.pdf

Chinese https://scs.org/wp-content/uploads/2015/12/ZH-20150810-03-Code_0_Chinese_Zhang.pdf

Bulgarian https://scs.org/wp-content/uploads/2020/08/Simulationist-Code-of-Ethics_Bulgarian.pdf

The English version of the Code is provided in the following paragraphs.


Simulationist Code of Ethics

Preamble

Simulationists are professionals involved in one or more of the following areas:

  • Modeling and simulation activities.

  • Providing modeling and simulation products.

  • Providing modeling and simulation services.

1. Personal Development and Profession

As a simulationist I will:

  1.1 Acquire and maintain professional competence and attitude.

  1.2 Treat fairly employees, clients, users, colleagues, and employers.

  1.3 Encourage and support new entrants to the profession.

  1.4 Support fellow practitioners and members of other professions who are engaged in modeling and simulation.

  1.5 Assist colleagues to achieve reliable results.

  1.6 Promote the reliable and credible use of modeling and simulation.

  1.7 Promote the modeling and simulation profession; e.g., advance public knowledge and appreciation of modeling and simulation and clarify and counter false or misleading statements.

2. Professional Competence

As a simulationist I will:

  2.1 Assure product and/or service quality by the use of proper methodologies and technologies.

  2.2 Seek, utilize, and provide critical professional review.

  2.3 Recommend and stipulate proper and achievable goals for any project.

  2.4 Document simulation studies and/or systems comprehensibly and accurately to authorized parties.

  2.5 Provide full disclosure of system design assumptions and known limitations and problems to authorized parties.

  2.6 Be explicit and unequivocal about the conditions of applicability of specific models and associated simulation results.

  2.7 Caution against acceptance of modeling and simulation results when there is insufficient evidence of thorough validation and verification.

  2.8 Assure thorough and unbiased interpretations and evaluations of the results of modeling and simulation studies.

3. Trustworthiness

As a simulationist I will:

  3.1 Be honest about any circumstances that might lead to conflict of interest.

  3.2 Honor contracts, agreements, and assigned responsibilities and accountabilities.

  3.3 Help develop an organizational environment that is supportive of ethical behavior.

  3.4 Support studies which will not harm humans (current and future generations) as well as environment.

4. Property Rights and Due Credit

As a simulationist I will:

  4.1 Give full acknowledgement to the contributions of others.

  4.2 Give proper credit for intellectual property.

  4.3 Honor property rights including copyrights and patents.

  4.4 Honor privacy rights of individuals and organizations as well as confidentiality of the relevant data and knowledge.

5. Compliance with the Code

As a simulationist I will:

  5.1 Adhere to this code and encourage other simulationists to adhere to it.

  5.2 Treat violations of this code as inconsistent with being a simulationist.

  5.3 Seek advice from professional colleagues when faced with an ethical dilemma in modeling and simulation activities.

  5.4 Advise any professional society which supports this code of desirable updates.