
1 Introduction

In this chapter I will argue, along with Paul Farmer, that for disaster victims ‘research [on humans] is always a luxury’ (Farmer 2005, p. 205). This conclusion is reachable by many routes, and I will concentrate on conducting an ethical harm-benefit analysis that treats victims of disasters as persons who are ends in themselves and who cannot be placed at excessive risk solely for the benefit of others. ‘Research’ is a ‘systematic investigation … designed to develop or contribute to generalizable knowledge’ (US Code of Federal Regulations 1976). Of course, there are different kinds of research. Not all research involving human beings is research on human beings, nor is there a “one size fits all” rule that applies to all types of disasters. Considerations in war are different from those in flood, earthquake, or famine. And the research method matters. Epidemiological research, including surveillance data collection and monitoring strategies to guide fair and efficient allocation of limited resources, for example, is reasonable and may even be required in the immediate aftermath of radiation exposure or in an epidemic outbreak of a viral disease (Barzilay 2013). The Presidential Commission for the Study of Bioethical Issues has provided an ethical framework for the conduct of pre- and post-event emergency research on [anthrax vaccine adsorbed post-exposure prophylaxis] in children (Presidential Commission 2013). But biomedical (or clinical) research that involves physical, psychological or dignitary harm to research subjects is difficult to justify in the immediate response to a disaster—although it may be ethically acceptable during the next phase, i.e., the recovery phase of a disaster.

Disasters can claim more or fewer victims based on the magnitude of the damage they cause and the immediate response of humanitarian workers. First responders can be acclaimed as heroic rescuers one moment, and later denounced as self-interested looters. Researchers are not immune: they may be acclaimed as helpful scientists who produce new knowledge for the benefit of society at one moment, and then denounced as exploiters of human tragedy for their personal interests. Researchers may think that ‘without testing the outcomes of a potential new therapy in an actual emergency situation, it is impossible to improve practices in order to limit the damage in future situations. Research is necessary’ (Reed 2002, p. 10). Researchers may even view ‘war as an amazing learning environment, the perfect laboratory for … research … [and argue that war] injuries are too rare to study in peacetime. [And thus] continuous research is not only desirable … [it] ought to be … obligatory’ (Bohannon 2011, pp. 1261–1263). The goals of research and rescue may seem compatible and obligatory, just as the integration of research and care has appeared ethically necessary (Largent et al. 2011). Harvard professor Jennifer Leaning aptly asked:

Are … clinical studies (such as severe malnutrition in children and adults) in which the unavoidable trade-off in risk and benefit is accentuated by the irreducible uncertainty of achieving truly informed and unforced consent, ever justifiable? The [research] field response is yes. Work with populations whose lives are at grave risk and dependent upon the relief community imposes an obligation both to provide help and to learn how to improve the quality of that help. It is hard to think of any other human setting in which the ethical burden of research is as great. (Leaning 2001, p. 1433)

A parallel argument has been made in medical education, for example, when we claim that the only way medical students can learn is to practice on patients, and thus that there is an ethical obligation to use patients to improve care and assist future patients.

2 Disasters and Research Ethics

Less enthusiastic investigators, however, while acknowledging the constraints and difficulties of doing research in time of disaster, recognise that ‘disasters rarely constitute an ideal environment to conduct research … [They] are the most chaotic, stressful and dangerous environments imaginable for doing … clinical research on human beings’ (Bohannon 2011, pp. 1261–1263). Investigators may rush to a disaster site and embark on a research project which has not been fully developed and properly reviewed for its scientific and ethical merit. Procedures may be hastily applied, and protocol transgression may seem justified by the goal of improving future responses and saving lives. Yet the research could add to the load of already burdened humanitarian rescuers, and this added burden could cause additional harm to the research subjects, the very people who need the most immediate help and support, thus endangering rather than saving lives.

Obtaining a more nuanced and objective view of “disaster research” is complicated. Disasters are multiple and unpredictable; they each have their own peculiar dynamics. As the March 2011 earthquake-tsunami disaster in Japan illustrates, disasters are “uncharted territory” and cannot be effectively managed through traditional survival strategies, routine procedures and stockpiled resources. They create formidable challenges for the community of nations and the nongovernmental organisations (NGOs) that come to the rescue. Even countries like Japan, among the best prepared to manage disasters and boasting some of the most towering seawalls and sturdiest buildings in the world, are not immune from the chaos of disasters. Mismanagement in the evacuation process, miscommunication between private and public agencies, and irresponsible conduct of those in charge of the rescue are not uncommon. For example, rescuers in Japan abandoned survivors to die of starvation in the aftermath of the 2011 earthquake and tsunami disaster (Tabuchi 2012), and allegations were made that doctors in New Orleans euthanized patients during Hurricane Katrina in 2005 (Fink 2009).

Facts on the ground (number of deaths, nature of injuries, and the extent of destruction) rarely fall in line with predictions based on theories. Not all health hazards exacerbate human misery or create health problems; not all disasters are catastrophic or equally tragic. And there is the question of logistics, the day-to-day, hour-to-hour, even minute-to-minute disaster relief responses. Choices must be made between competing evils, and they must remain flexible. Disaster victims may qualify as a vulnerable population, although they have not been explicitly recognised as such in any research guidelines or regulations. We must therefore ask whether current research guidelines are applicable to research on victims of disasters.

2.1 Risk-benefit Assessments

Among the key ethical requirements for conducting biomedical research are informed consent of subjects and a favourable harm-benefit profile (Emanuel 2000). Research ethics requires that the informed consent of subjects be obtained and that the anticipated benefits justify the harms or the risks (of harm) done to research subjects. Harms or risks (of harm) must not outweigh the benefits to be expected from the research. Both requirements—informed consent and a favourable harm/benefit ratio—are necessary for the ethical conduct of human research, but neither alone is sufficient. These requirements are mutually complementary and of equal value: they form a “perfect union.” In practice, however, it is a real challenge to give these requirements equal value, particularly in disaster research, because all of the benefits are to people in the future, i.e., the victims of future similar disasters, and all of the harms or “risks” are to current disaster victims who are used as research subjects, and who may not be in a realistic position to give their informed consent to the research.

In exploring these many challenges I will concentrate on biomedical research on humans because it is the kind of research towards which most guidelines and regulations are directed. I will examine how a utilitarian calculus of harm-benefit assessment plays out when research is done in the wake of a disaster and assess whether such a calculus is sufficient to justify using disaster victims as subjects in clinical trials.

It is worth noting here that the assessment of harm and benefit of research is not a purely objective exercise but is rather a “mix” of facts and values, i.e. the fact of the actual harm done to people right now, and the value attached to this fact (e.g., is the harm worth taking in light of the benefit?) by those making the assessment. This harm-benefit assessment becomes ambiguous and confusing when the concepts of “harm” and “risk” (of harm) are used interchangeably, as they are in most existing research guidelines and by many investigators and ethics review board members. This is because the term “harm” is not to be understood in probabilistic terms, while the notion of “risk” (of harm) is. (Conceptually, risk is the product of probability, a scientific notion, and magnitude of harm, a subjective concept.) In making a risk-benefit assessment, investigators and institutional review boards (IRBs) would have to decide whether to “gamble” or “play it safe.” That this is a challenging “gamble” at all times, and particularly in time of disaster, simply underlines the difficulties faced by IRBs (also called research ethics committees) in doing a “risk-benefit” or “harm-benefit” assessment themselves.
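To make the probabilistic point concrete, consider a schematic illustration in which expected harm is modelled as the product of probability and magnitude (the figures are hypothetical and are not drawn from any source cited in this chapter):

\[
  \text{expected harm} \;=\; p \times m, \qquad 0.01 \times 80 \;=\; 0.40 \times 2 \;=\; 0.8 .
\]

On this toy model, a rare but severe harm (a 1 % chance of an adverse event rated 80 on an arbitrary 0–100 severity scale) and a common but minor harm (a 40 % chance of an event rated 2) have the same expected harm, yet an IRB or a prospective subject may weigh them very differently, because the magnitude term is a value judgment rather than a measurement.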

Moreover, to use disaster victims as subjects in clinical research seems ethically suspect because disasters have their greatest impact on already marginalised and impoverished populations. Yet, the increasing number of disasters worldwide, together with the professionalization of disaster responders and humanitarian organisations, has accelerated the attention paid to what has been labelled a new category of research, ‘disaster research’, not dealt with in existing codes of research ethics. The need to articulate an ethical justification for this new category of research has also grown, and has gathered momentum in humanitarian and human rights circles where it has been recognised that existing ethics rules do not take into account the status of disaster victims. This book, and the conference on which it is based, is just one example of this recognition.

It seems uncontroversial to conclude that the first (ethical) duty of all humanitarian responders in the immediate response phase to a disaster is to provide the victims with the services they need to survive and continue to live decently. Doing clinical research on victims in life-endangering situations runs the risk of ignoring the plight of people caught in disasters. This is because once an ethics review panel has judged the benefit-harm ratio favourable, investigators may be encouraged to focus on possible future benefits to future disasters. As a consequence, the goal of meeting the immediate needs of disaster victims could be seen as nonessential, and informed consent itself may be marginalised, because an independent decision has been made that the research is valuable based on a favourable risk-benefit analysis (Roberts 2011; Boylan 2011; Epstein and Wilson 2011).

3 “Big Tent” Research

3.1 Epidemiological Research

Research on human beings commonly falls into two main categories: epidemiological and interventional. Epidemiological research, by and large, applies non-invasive data collection methods, e.g., interviews, surveys, focus groups, questionnaires and other similar methodologies, to activities related to quality improvement, education programs, risk exposure, disease surveillance, and the determinants of health. Although not necessarily risk free (disclosure of confidential or embarrassing information is its main risk), epidemiological research is the least controversial because it is not intrusive and poses no physical harm to those who consent to participate (Miller and Emanuel 2008). This kind of research generally focuses on improving performance and safety and promoting the health of people right now. It constitutes an integral part of public health activities, and thus is best described as public health practice rather than medical research. The most well-known and foundational example is John Snow’s collection of data on the sources of drinking water during a major cholera epidemic in 19th century London. No one was put at risk by his data collection, and interpreting the data allowed him to identify the source of contaminated drinking water and eventually end the cholera disaster. Similar epidemiological research, which included accurate and wide sharing of scientific and medical information with both the public and public health professionals, was used to identify the source of the Severe Acute Respiratory Syndrome (SARS) epidemic (Kahn 2003).

Epidemiological activities are on-going (e.g. global health surveillance, population movement) and often intensify in complex disasters. The complexity may be related both to the multidimensional aspect of a disaster and to the multifaceted emergency responses it may elicit—responses which themselves may be complicated by the precarious situations of disaster victims. For example, internally displaced persons or refugees may be forced to live in temporary camps in foreign countries because they have been victimised by their home country, or by those they are fighting against in their own country, and may be denied the human rights protection generally afforded individuals by treaties, covenants, and international human rights law and ethics.

Documentation of the impact of a disaster on a refugee population has been used to ramp up pressure on the international community to act. In Nowhere to Turn: Failure to Protect, Support and Assure Justice for Darfur Women, Physicians for Human Rights (PHR) documented rape, violence and other atrocities suffered by Darfur women in Chad-Sudan border refugee camps. The data they collected and published led to new strategies to try to prevent further assaults on women, meet women’s immediate needs, and plan for their (and their families’) safe return to Darfur (PHR 2009).

Médecins Sans Frontières (MSF) describes its own data collection practices (e.g., population surveys, incidence/prevalence risk factor studies) as ‘operational research,’ emphasising ‘interventions’ that are readily actionable (and not simply possible or potential). Operational research ‘can be annexed to routine operations … [and] are likely to have a direct impact on policy and practice and the quality of assistance [MSF] renders to populations’ (MSF 2010, p. 9). The more we move towards complex studies (e.g. clinical trials), the greater the likelihood that the research will infringe upon routine operations. As a result, and because of the considerable resources and time required, ‘involvement in clinical trials will likely be an exception for MSF’ (MSF 2010, p. 9). Instead, by focusing on data collection activities, which can be categorised as epidemiological research, MSF believes it can directly contribute to improving program design and practices and the quality of the assistance it provides. This kind of activity is analogous to “health services research,” which collects data for quality improvement in the delivery of healthcare. In epidemiological or public health activities, including health services research, the risk of harm to people is minimal, and the benefit to others may be great and readily actionable.

3.2 Interventional Research

Clinical or interventional biomedical research contrasts sharply with epidemiological data-collection activities in that it uses a protocol and a specific design (e.g., placebo-controlled, randomised) to test a new drug, device or treatment strategy which will predictably cause actual harm to some human beings. Biomedical or behavioural in nature, interventional research is the kind of research at which almost all existing ethical guidelines and regulations are directed, including, of course, the foundational Nuremberg Code and the Declaration of Helsinki.

The ethical requirements for such research are summarised in the 1976 US Code of Federal Regulations (CFR), modelled after the Nuremberg Code and the Declaration of Helsinki, which requires a comprehensive review of the research protocol by an “ethics board” or institutional review board (IRB) before investigators are authorised to approach individuals to ask them whether they want to enrol in the research. IRB approval is based on a number of factors, including a determination that the study is scientifically sound, that the methods employed are appropriate, and, most important for this chapter, that the harm to subjects does not outweigh the anticipated benefits of the research. The study must not begin before individual consent is obtained, even when all other required conditions are met. These requirements are based on the fundamental ethical principles of respect for persons and doing no harm. Another ethical principle, beneficence (or doing good), directs the investigator to maximise benefits and minimise harms, and this is usually translated into the requirement of a favourable risk-benefit ratio, the lack of which would invalidate most research on human beings (Wendler 2005).

Looking at the unique research opportunities presented by disasters, together with the reasonable prediction that disasters will continue to occur and that research may help us gain greater knowledge on how to respond to future disasters, some have even argued that it would be ethically unacceptable not to embrace this opportunity to learn how to improve the care afforded (future) disaster victims and “save lives.” The central question, of course, is whether this type of research should be done at all. As I have already suggested, answering this question requires a determination of whether it is possible to objectively evaluate the risks and benefits of proposed disaster research, and whether it is realistic to expect that disaster victims can provide voluntary informed consent. Before attempting to answer these questions it is useful to provide a brief overview of existing research guidelines and regulations, as this may help to suggest the answers.

4 Making Sense of Guidelines and Regulations

Informed consent has been first, foremost and central to research on human beings at least since World War II, when in 1947 US judges, sitting in judgment of the Nazi doctors at Nuremberg, articulated a set of ten rules which became known as the Nuremberg Code (Shuster 1997). By putting informed consent first, the US judges—who believed they were articulating international law—hoped they could compel future investigators to remain focused on the rights and welfare of their individual subjects, who bear the burden and harm of research and need human rights protection (Shuster 1998).

The first Rule, ‘The voluntary consent of the human subject is absolutely essential,’ insists on obtaining voluntary, competent, informed and understanding consent, and also requires that the research subject have sufficient and sufficiently reliable information about ‘all inconveniences and hazards reasonably to be expected, and [about] the effects upon his health or person which may possibly come from his participation in the experiment.’ Thus, embedded in the informed consent process is a requirement to inform subjects of the risks of harms and potential benefits to be expected from the experiments. The Nuremberg Code has nine additional requirements, the most challenging of which are those that directly relate to the harm-benefit ratio (Nuremberg Code 1949):

Rule 2. The experiment should be such as to yield fruitful results for the good of society, unprocurable by other methods or means of study, and not random and unnecessary in nature;

Rule 4. The experiment should be so conducted as to avoid all unnecessary physical and mental suffering and injury;

Rule 5. No experiment should be conducted where there is an a priori reason to believe that death or disabling injury will occur, except, perhaps, in those experiments where the experimental physicians also serve as subjects;

Rule 6. The degree of risk to be taken should never exceed that determined by the humanitarian importance of the problem to be solved by the experiment.

Rule 7. Proper preparations should be made and adequate facilities provided to protect the experimental subject against even remote possibilities of injury, disability , or death.

Rule 10. During the course of the experiment the scientist in charge must be prepared to terminate the experiment at any stage, if he has probable cause to believe, in the exercise of the good faith , superior skill, and careful judgment required of him, that a continuation of the experiment is likely to result in injury, disability, or death to the experimental subject.

Seven of the ten Rules in the Code are relevant to the harm-benefit ratio of research, the assessment of which is done by the subjects (Rule 1) and the investigators (Rules 2, 4–7, 10). This is important because, taken together, these rules establish a balance between the importance given to informed consent (Rule 1), on the one hand, and the importance given to harm or risk-benefit assessment (Rules 2, 4–7, 10), on the other hand. In short, both informed consent and a favourable risk-benefit ratio are necessary. Recently, international instruments have adopted the harm-benefit rules to supplement the informed consent requirement, e.g., the 1997 UNESCO Universal Declaration on the Human Genome and Human Rights (Article 10), the 1997 European Convention on Human Rights and Biomedicine (Article 2) and the 2005 UNESCO Universal Declaration on Bioethics and Human Rights (Article 3.2).

The 1964 Declaration of Helsinki, Ethical Principles for Medical Research Involving Human Subjects, adopted under the auspices of the World Medical Association (WMA) and revised eight times since, has emphasised the primacy of the human being over society, and has appealed to the notion of respect for persons and human dignity. (A similar statement is found in the Universal Declaration on Bioethics and Human Rights, Article 3.2.) In spite of its multiple revisions, the 2008 Declaration of Helsinki kept intact the language that gives priority to the interests and well-being of individual subjects over all other interests, a language that embodies the core values of all research on human beings (Solbakk 2011). For example, its Introduction reads: ‘In medical research involving human subjects, the well-being of the individual research subject must take precedence over all other interests’ (WMA 2008, A.6).

The Declaration further expanded on a number of relevant ethical principles:

Principle 17. Medical research involving a disadvantaged or vulnerable population or community is only justified if the research is responsive to the health needs and priorities of this population or community and if there is a reasonable likelihood that this population or community stands to benefit from the results of the research.

Principle 18. Every medical research study involving human subjects must be preceded by careful assessment of predictable risks and burdens to the individuals and communities involved in the research in comparison with foreseeable benefits to them and to other individuals or communities affected by the condition under investigation (emphasis added).

Principle 20. Physicians may not participate in a research study involving human subjects unless they are confident that the risks involved have been adequately assessed and can be satisfactorily managed. Physicians must immediately stop a study when the risks are found to outweigh the potential benefits or when there is conclusive proof of positive and beneficial results (emphasis added).

Principle 21. Medical research involving human subjects may only be conducted if the importance of the objective outweighs the inherent risks and burdens to the research subjects.

Principle 24. In medical research involving competent human subjects, each potential subject must be adequately informed of the aims, methods, sources of funding, any possible conflicts of interest, institutional affiliations of the researcher, the anticipated benefits and potential risks of the study and the discomfort it may entail, and any other relevant aspects of the study.

… After ensuring that the potential subject has understood the information, the physician or another appropriately qualified individual must then seek the potential subject’s freely-given informed consent, preferably in writing… (WMA 2008; emphasis added)

Nonetheless, it is difficult to envision how these principles could be effectively applied and be actionable in time of disaster, when people are vulnerable and socioeconomically distressed. More troublesome are on-going attempts to better serve the macro-level interests of science and society at the expense of the micro-level interests of individual subjects and their communities. ‘Consequently, the quest and striving for a universal normative language in international research also seems to be a morally and legally justifiable endeavour’ (Solbakk 2011, p. 342).

The Declaration of Helsinki introduces a distinction between “therapeutic (or clinical) research,” where the risks of harm are assumed by subjects for their own benefit, and “nontherapeutic (or scientific) research,” where the risks of harm are assumed by subjects for the benefit of science and society. As will be discussed later, combining medical care with medical research compromises both the validity of informed consent and the risk-benefit calculation, including the delicate balance between risks taken for one’s own benefit and risks taken for the benefit of others (Shuster 1998).

The World Health Organization (WHO)-sponsored International Ethical Guidelines for Biomedical Research Involving Human Subjects, published in 1993 and revised by the Council for International Organizations of Medical Sciences (CIOMS) in 2002, restated the requirement of a favourable harm-benefit ratio and added two points in the context of research done in developing countries:

  1. The research project must be responsive to the health conditions or needs of vulnerable subjects. Harm is more easily justified when it arises from interventions that hold out the prospect of direct benefit for the subject population. Harm that does not hold out such a prospect must be justified;

  2. In order for the research to be ethical and not exploitive in this setting, it must offer the potential of actual benefit to the people in the developing country in which the research is done.

The WHO Handbook for Good Clinical Research Practice (2002), an adjunct to the WHO Guidelines for Good Clinical Practice (GCP) for Trials on Pharmaceutical Products (1995), reaffirms the importance of evaluating the benefits and risks of research by institutional review boards, in these terms:

Principle 3. Before research involving humans is initiated, foreseeable risks and discomforts and any anticipated benefit(s) for the individual trial subject and society should be identified. Research of investigational products or procedures should be supported by adequate non-clinical and, when applicable, clinical information.

Principle 4. Research involving humans should be initiated only if the anticipated benefits for the individual research subject and society clearly outweigh the risks. Although the benefit of the results of the trial to science and society should be taken into account, the most important considerations are those related to the rights, safety, and well-being of the trial subjects. (emphasis added)

It is worth noting here that Principle 4 is the very principle enunciated by the Declaration of Helsinki (Introduction A.6), which represents the “normative bedrock” of clinical research involving human beings. However, putting IRBs in charge of getting the best deal for consenting research subjects (through harm-benefit evaluation) and casting them as “gatekeepers” of the entire human research enterprise runs the risk of turning the values expressed in this principle upside down to fit the interests of the two most powerful players in the field: science and society (Maschke 2008; Solbakk 2011).

Both the 1947 Nuremberg Code and the 1964 Declaration of Helsinki served as models for the 1976 US Code of Federal Regulations for the Protection of Human Research Subjects (CFR 1976) and the 1991 ‘Common Rule,’ which applies these regulations across US federal agencies. These Regulations (since revised) utilise the concept of “risk” (of harm) rather than “harm,” since the assumption is that almost no research protocol is ethically acceptable if actual harm is to be done to research subjects as part of the protocol. Approval of a research protocol by the Institutional Review Board (IRB) or ethics review panel depends on such an assessment. IRBs must therefore find that:

  1. Risks [of harm] to subjects are minimized;

  2. Risks [of harm] to subjects are reasonable in relation to anticipated benefits, if any, to subjects, and the importance of the knowledge that may reasonably be expected to result. (CFR 1976, emphasis added)

“Minimal risk” means that ‘the probability and magnitude of harm or discomfort anticipated in the research are not greater, in and of themselves, than those ordinarily encountered in daily life, or during the performance of routine physical or psychological examinations or tests’ (CFR 2009). It is also stated that, in evaluating risks and benefits of research, ‘the IRB should consider only those risks and benefits that may result from the research (as distinguished from risks and benefits of therapies subjects would receive even if not participating in the research). The IRB should not consider possible long-range effects of applying knowledge gained in the research (for example, the possible effects of the research on public policy) as among those research risks that fall within the purview of its responsibility’ (CFR 2009).

Excluding the long-term risks of research is troublesome and misguided in any research condition, but it is more so in time of disaster, because the benefits are almost always in the future and valued in probabilistic terms as part of the harm-benefit assessment, while the risks are always actual and immediate. Not considering the long-term effects of research risks on individuals and society probably makes for an easier risk-benefit calculation, but not for an inclusive, complete and meaningful evaluation.

New ethics guidelines for research in disaster conditions have also been suggested (WHO 1999; Maschke 2008). In 1997, WHO and the Macfarlane Burnet Center for Medical Research facilitated an advisory group whose mandate included articulating an ethics framework for research in emergencies, essentially to provide guidance on issues related to risk-benefit and informed consent. This group produced an Ethics Template addressing several areas of ethical concern. The section on risk-benefit reads:

  1. The benefit derived from the research must accrue directly and temporally to the actual subjects of the research.

  2. The research must be directed at questions that could not be answered in a non-emergency or non-refugee setting.

  3. The risks to the individual subjects, their community, and their future security must be kept to an absolute minimum. These risks must be extensively and comprehensively detailed in the research protocol and also embedded in the protocol for obtaining informed consent. All anticipated risks should be identified and mechanisms described to monitor for adverse outcomes. A threshold must be defined for intervention and, if needed, interruption of the study. Mechanisms to treat those in whom adverse outcomes develop must be described. The feedback methods by which this monitoring system will be operationalized must be described. (Advisory Group 1997, emphasis added)

The group recognised that some research addresses questions that cannot be answered in nonemergency settings, and that adhering to research ethics principles may be challenging when research is done on people who may be entirely dependent on external aid for their survival. Nonetheless, the advisory group concluded that these challenges should not discourage conducting research during complex emergencies. It was also argued that it could be unethical to avoid carrying out certain relevant research in complex emergencies (WHO 1997). For example, it was suggested that it would be unethical not to conduct research on children who face malnutrition as a result of a disaster. Disaster conditions provide a unique opportunity to learn and gain knowledge on how to improve nutrition and manage malnutrition in these unique environments. Missing this opportunity, and not doing research that could save the lives of children in future disasters, could be inherently unethical. Of course, nutritional research can be done, but it should be done on non-vulnerable populations who are not dependent on physician-researchers for their very survival.

One important ethical guideline that applies to all research is that vulnerable groups (such as disaster victims) should almost never be enrolled in research that can be done on non-vulnerable populations. This was also the conclusion of the presidential Advisory Committee on Human Radiation Experiments in the United States. The Committee condemned the use of vulnerable, institutionalised children for nutritional studies comparing breakfast cereals—because the study could have been done on non-vulnerable, free-living children, such as the children of the researchers from the Massachusetts Institute of Technology (Advisory Committee on Human Radiation Experiments 1995).

5 Doing Good in Disaster Research

The difficulty, of course, is to develop a research protocol applicable to disaster conditions that not only values disaster victims, but also values both the requirement of informed consent and the requirement of an ethical harm-benefit ratio. To obtain a voluntary informed consent in any circumstance is extraordinarily difficult, time consuming and costly, when done right. This is even more so with disaster research, in which a modified “therapeutic illusion” (also called the ‘philanthropic misconception’) seems inherent. For example, disaster victims will assume, as they are told, that the physician-rescuer is there to help them, and not to use them for research purposes. This belief (illusion) invalidates informed consent. To the extent that it exists, it also makes a harm-benefit assessment by the subject (regardless of the IRB’s assessment) unrealistic—since the subject’s assumption is that the intervention is being done to benefit the disaster victim now, not for the benefit of future disaster victims only.

By introducing a distinction between therapeutic and nontherapeutic research, the Declaration of Helsinki has done a disservice to research subjects, since it reinforces this illusion by obscuring the distinction between treatment and research. It permits a different threshold of acceptability of harm for therapeutic research, and thus a more favourable harm-benefit analysis, which is incorrect (Rid and Wendler 2011). Jay Katz, a renowned psychiatrist and expert on informed consent, insisted on the vital importance of maintaining the distinction between research and therapy, and deplored its blurring in practice. Not making a clear distinction between research and treatment also blurs the distinction between physician and researcher, and between patient and research subject. Without a clear understanding that they are in research, people tend to believe that their personal interests, and not science’s, are being served (Katz 1993). Researchers follow a protocol to gain new knowledge from research subjects; research is not treatment done for the benefit of individual patients. The fiduciary relationship between doctors and patients fundamentally differs from the research relationship between investigators and subjects.

Disaster victims need help and treatment. They do not seek or desire to be subjects in a research project that holds risks but no benefits for them. Moreover, disaster conditions make it unlikely that victims will be able to give voluntary and informed consent or to make an independent harm-benefit assessment of any proposed clinical research. To approve the research protocol, the IRB must have found the harm-benefit assessment favourable. The protocol therefore comes to the field with an embedded presumption that some “good” may come from the research, if not now then in the future. To obtain these benefits, however, the research must be done, which gives investigators a motivation to “sell” the research to the potential subject, a natural tendency that undermines the consent process itself. In this context, the harm-benefit trade-off, which presumably should protect subjects in research, is likely to provide little or no protection.

6 Minimising Risks of Harm

The US Federal Regulations state that risks to subjects must be minimal, “reasonable,” or not “excessive,” compared to potential benefits. Jennifer Leaning specifies that studies should impose ‘the absolute minimum of additional risk’ (Leaning 2001, p. 1432). However, there is no agreed-on definition of what “minimal” risk should be. For example, the Additional Protocol to the Convention on Human Rights and Biomedicine, concerning Biomedical Research, ‘deemed that the research bears a minimal risk if, having regard to the nature and scale of the intervention, it is to be expected that it will result, at the most, in a very slight and temporary negative impact on the health of the person concerned’ (Council of Europe 2005, Article 17). The CIOMS Guidelines hold that ‘the risk from research interventions that do not hold out the prospect of direct benefit for the individual subject [lacking decision making capacity] should be no more likely and not greater than the risk attached to routine medical or psychological examination of such persons’ (CIOMS 2002, Guideline 9). As noted previously, the Federal Research Regulations define “minimal risk” as ‘the amount of risk ordinarily encountered in daily life.’ These definitions, however, are problematic.

Harm may be serious (magnitude of harm), and yet viewed as minimal because the probability of it occurring is extremely low, or conversely, it can be minimal and yet viewed as serious because the probability of it occurring is very high. This assessment may also be affected by its relation to benefits, whether they are present or future. To compare minimal risk to those risks “ordinarily encountered in daily life” is not useful because many daily life activities may include risks that are unacceptably high, particularly in time of disasters, and this could justify doing all kinds of research, even the most bizarre and fanciful.

The term “benefit” is no less problematic and controversial, because the concept carries a positive value attached to future outcomes, such as potentially improving the health and welfare of people in the future. But in Phase I clinical trials there are no expected benefits to subjects—Phase I is, by definition and purpose, a nontherapeutic phase to determine safety. The main question in Phase I trials often boils down to this: ‘what level (or magnitude) of harm should a subject be allowed to withstand in the name of research?’ (Shamoo and Resnik 2003, p. 194). Harm is expected to be immediate (i.e., in drug trials, dosage is increased incrementally until it harms the subject—hopefully minimally). Only a comprehensive informed consent regimen and a rigid monitoring protocol to minimise harm can justify a Phase I trial. But neither informed consent nor strict and careful monitoring of the research protocol seems likely in the immediate aftermath of a disaster. And it is difficult to envision how individuals caught in a disaster may “benefit from the fruits of research” (as suggested by the CIOMS guidelines), either because disaster victims may have moved elsewhere and cannot be located, or have died, or because research results have been too long in coming and are no longer relevant, or, if still relevant, have not been made readily available to them. Requiring that the subjects and the beneficiaries of research be of the same group makes most research in disaster settings ethically untenable.

Research rules in ordinary settings are now being reconsidered in the United States, particularly the cost-benefit assessment of applying the current research regulations themselves. One common belief is that current regulations impose burdensome bureaucratic procedures on researchers that do little to enhance effectiveness, protect research participants or reduce costs (Emanuel 2011). These regulations frustrate researchers, who may be prevented or significantly delayed from doing valuable research that could “save lives” (Whitney and Schneider 2011; Holm 2011). However, improving research efficiency by reducing the cost of research and eliminating burdensome regulations has less to do with ethics than with economics. Efficiency is an economic goal; it is not an ethical principle.

Arguably, cost-benefit analysis is like risk-benefit assessment: it is about process and efficiency, not about actually protecting the rights of subjects in research. It is about a determination by the IRB that the proportionality of foreseeable harm and potential benefit is (or is not) favourable and that risks of harm to subjects are in proportion to (and balanced by) the expected benefits (to subjects or others). Although statistical assessments are empirical or scientific, the evaluation of harms and benefits is a value judgment made by those making the assessment. In clinical drug trials on healthy or diseased subjects, for example, harm to subjects is expected to be immediate and actual—and hopefully minimal—and IRBs must speculate about future benefits to subjects (if any) and to society (usually in non-probabilistic terms, and in a pro forma manner). In this context, IRBs’ pronouncements may not stand serious scrutiny, since the data necessary to arrive at a valid risk-benefit evaluation are usually unavailable—and can emerge only if the research project is in fact done.

7 Prisoners of Disasters

It is unremarkable that would-be researchers largely expect that “good” will come from their work. But this is not evidence-based reasoning which anyone, including IRBs, should take seriously. Investigators and IRB members must do better if the goal is to demonstrate that biomedical research in the immediate aftermath of a disaster can be done ethically, while protecting the rights and welfare of subjects. A comparison with prisoners, a vulnerable population, may be useful. Like prisoners, disaster victims (especially, but not only, those in refugee camps) may have lost their freedom of movement and become dependent on others for everything, including food, medicine and basic essentials. In this regard, because of the fear and danger inherent in disaster settings, victims of disaster are likely to be an even more vulnerable population than prisoners when it comes to proposals to do research on them.

In his Acres of Skin: Human Experimentation at Holmesburg Prison, Allen Hornblum underlines the effects of disasters (specifically, World War II) on human experimentation in America. ‘It was during the war years that the federal prison system swung into action as a major source of human subjects for research experiments … Prisoners were too valuable as research subjects to be jettisoned. They needed to be used, not protected’ (Hornblum 1998, pp. 83, 85). Sociology professor David Rothman, commenting on this time of transformation, explained that ‘a utilitarian ethic continued to govern human experimentation—partly because of the war precedent, partly because the benefits seemed so much greater than the costs, and partly, too, because there were no groups or individuals prominently opposing such an ethic’ (Rothman 1987, p. 1198).

The 1976 Report of the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research permits research on prisoners only in limited circumstances. The Commission focused on two key ethical considerations: ‘(1) whether prisoners bear a fair share of the burdens and receive a fair share of the benefits of research [i.e. harm-benefit assessment]; and (2) whether prisoners are, in the words of the Nuremberg Code, “so situated as to be able to exercise free power of choice”—that is, whether prisoners can give truly voluntary consent to participate in research’ (1976, p. 5, emphasis added). It found that prison was no place to conduct biomedical and behavioural research because prisoners are in a setting that makes it impossible for them to freely and voluntarily consent to participate.

The Institute of Medicine (IOM) was asked to review this position to determine whether it is justified today. Using a utilitarian ethic, the IOM Committee questioned the weight placed on informed consent and on prisoners’ status and concluded that current restrictions on using prisoners in biomedical and behavioural research should be loosened and overall oversight strengthened (IOM 2006). The Committee made a number of recommendations, the most telling of which was advising a shift from a category-based to a risk-benefit approach to research review.

Commenting on this recommendation, Osagie Obasogie, a law professor at the University of California, Los Angeles, perceptively observed that ‘shifting from prisoners’ almost categorical exclusion from research to a more permissive risk-benefit analysis—is where the ethical road meets the legal rubber’ (Obasogie 2010, p. 58). Quoting from the IOM report, which states that ‘[m]ore attention needs to be paid to risks and risk-benefit analysis rather than the formalities of an informed consent document’ (IOM 2006, p. 118), Obasogie contends: ‘This shapes the major recommendation [of the IOM Committee] to stop thinking of prisoners as a category of individuals who, by default, should not be human subjects…. [and instead] recommends looking at each research proposal on a case-by-case basis to assess its potential risks and benefits’ (Obasogie 2010, p. 58). This, Obasogie argues, is a serious ethical mistake. To favour independently weighing risks against benefits to permit more prison research takes the emphasis off the rights and dignity of the individuals who bear the burdens of research, and transfers it to an abstract analysis of harms and benefits by an IRB.

The fatal flaw with the IOM Report is that once the IRB decides that the harm-benefit ratio of the proposed research is favourable, obtaining a valid informed consent from the prisoners can be viewed (and treated) as mechanical and marginal. The fundamental question is no longer whether research in prison conditions should be done (and whether a voluntary informed consent can be obtained), but rather how this particular research should be done, and informed consent obtained. ‘This shift from a substantive approach to justice and respect for persons (emphasizing protection, fairness, and burden-sharing) to a more procedural mechanism (emphasizing representation, along with the noncategorical risk/benefit analysis)’ is where the IOM committee misses the mark (Obasogie 2010, p. 59).

Disaster victims are in many ways like prisoners, and disaster research is, at least in the immediate aftermath of a disaster, like prison research. Although no one has done the research needed to prove the proposition (by examining how IRBs actually review disaster research), I think it makes intuitive sense to believe that just as the IOM envisions harm-benefit analysis taking precedence over informed consent in prison research, this same result is likely in disaster research. This is for at least two reasons. The first is that the benefits are almost always overblown and concern future disaster victims, science and the greater society (Zhang 2013). The second is that the weaker the interest in improving the informed consent process, the stronger the reliance on exhaustive informed consent forms, accompanied by complaints that these forms consume too much IRB time, energy and resources, and are incomprehensible to most research subjects, and therefore useless (Emanuel 2011).

It may be objected that we should not designate “disaster victims” as a new category of vulnerable research subjects, because disaster research is critical for progress and because a utilitarian ethic (harm-benefit calculus) may be sufficient to protect human subjects from harm. Additional research protection for disaster victims may not only be unnecessary, it may also be counterproductive, since it may slow or even stop research pertaining to their situations and could deny future disaster victims the benefits of research. The problem with this stance is that there appears to be no morally defensible reason for thinking that a risk-benefit calculation protects subjects from harm. This is because in this calculation, as previously noted, the benefits of research are always potential and speculative, i.e., for people in the future, and the risk of harm is always definite, actual and immediate, i.e. for people right now. Long-term potential harms to subjects are generally ignored or overlooked. In short, weighing the risk-benefit ratio of research on human beings may not promote subjects’ protection and may add to the pitfalls inherent in the utilitarian calculation noted above.

8 “Saving Lives”

The powerful “saving lives” mantra is the core justification for doing research in disaster conditions (Annas 2010). This rationale offered by investigators is, however, almost entirely self-serving, very much like the argument pharmaceutical companies make that the high price of their products is needed to fund research on new products that might benefit people in the future. But people who are injured and traumatised by a disaster need treatment right now (Zhang 2013). Just as pharmaceutical companies have a moral obligation to sell their products at a price people who need them can afford, so too first responders and humanitarian workers have a moral obligation to provide immediate relief and help to people in distress without adding to their burdens and affliction.

The humanitarian and human rights physician Paul Farmer observed, in the aftermath of the Duvaliers’ violent and dictatorial regime:

In Haiti, research was not a part of what Haitians have hoped for … Research did not figure on the wish list of the people we were trying to serve. Services are what they asked for, and as people who had been displaced by political and economic violence, they regarded these services as rightful remedies for what they had suffered. (Farmer 2005, p. 205)

Disasters create health and human rights problems which investigators, IRBs and the entire human research enterprise are ill equipped to address: ‘in human rights work, research and critical assessment are insufficient—analysis alone cannot curb human rights violations …’ (Farmer 2005, p. 205).

The 2011 earthquake-tsunami disaster in Japan demonstrates that there is plenty of room for disaster-related research activities, but none that I have been able to identify justifies doing biomedical research on human beings in its immediate aftermath. For example, research is justified and even necessary to improve the earthquake proofing of buildings for people living in earthquake prone countries. Such research is on buildings, not on people, and can be an important part of disaster planning (or Emergency Management) in the prevention, preparedness and recovery phases. Likewise, evaluating the benefit of seawalls as a first line of defence against tsunamis is necessary, but this research is not done on human beings; it is done on seawalls. Like scientific investigators, structural engineers may legitimately claim that they are “saving lives” by experimenting on how best to secure buildings immediately after a destructive earthquake (recovery phase). But in the response phase of an emergency, they are obligated to do what they know best to manage damage and contain human casualties. There is no ethically appropriate research that cannot be done before a disaster or after disaster conditions have been addressed and people are out of danger.

Epidemiological research, including monitoring the radiation exposure of disaster victims, and keeping track of them for years, even decades, to see what doses caused what diseases, is perfectly reasonable. This kind of disaster research is not to be compared to the highly controversial “emergency research” which the US Food and Drug Administration (FDA) approved without informed consent on unconscious out-of-hospital heart attack or automobile accident victims with life-threatening injuries for which no good medical treatments existed (FDA 1996).

9 Summary

“Disaster research” is a new category of research that could rapidly expand. In this chapter I have argued that epidemiological research conducted by investigators or public health professionals in time of a disaster (i.e. in the response phase to a disaster) is both reasonable and necessary. Survey, data collection and questionnaire-type activities which do not add burdens onto people affected by a disaster and/or create obstacles to their being helped and treated are reasonable and necessary; so too is it reasonable and necessary to assess the “life safety” or security of those who flee their homes to find refuge in refugee camps. These activities are an integral part of public health practices which aim at helping people right now and improving their security, health and wellbeing. They are done in the interests of disaster victims, directly related to their immediate needs. Benefits are not secured at the cost of inflicting additional harm on disaster victims (though nothing in life is risk free) or at the cost of exploiting people for our purposes.

By contrast, biomedical research conducted in the immediate aftermath of a disaster is not in the interests of disaster victims. Disaster conditions are usually so dire that they create an obligation to categorise disaster victims as “vulnerable” and as a category of individuals who, like prisoners, need more, rather than less, protection from exploitation. Put another way, disasters create no exception to the ethical rules or human rights principles that govern biomedical research on human beings.

Debate, however, will undoubtedly continue, and questions will be raised about whether ethics guidelines and regulations should be relaxed or waived to permit more research in disaster settings to “save lives.” The great American novelist F. Scott Fitzgerald gave an impressive portrayal of Americans and American culture when he wrote at the end of The Great Gatsby, ‘Gatsby believed in the green light, the orgiastic future that year by year recedes before us. It eluded us then, but that’s no matter—tomorrow we will run faster, stretch out our arms farther … And one fine morning—So we beat on, boats against the current, borne back ceaselessly into the past’ (Fitzgerald 1925, p. 189).

The research beat will also go on, faster and undoubtedly louder. And it is probably a good thing to believe, like Gatsby, in the ‘green light’ if we want to improve our lives, make worthwhile discoveries and assert the most basic right to survival. Research is an important human endeavour and may help us achieve these goals, but it is only a means to an end; it cannot (and should not) be an end in itself. This is because the progress that research promises may be too dazzling. It may come at an unacceptably high price, especially once it has been decided that research in the wake of a disaster has enormously vital potential benefits to science and society which could make the actual risks to research subjects marginal, even irrelevant. Hans Jonas, I think, got it right when he wrote about the relationship between progress and research on vulnerable humans: ‘Progress is an optional goal, not an unconditional commitment, and that its tempo in particular, compulsive as it may become, has nothing sacred about it … Too ruthless a pursuit of scientific progress would make its most dazzling triumphs not worth having’ (Jonas 1969, pp. 219–247).