Dear Editors:

The threat of terrorism has alerted many investigators, scholars, policy analysts, and politicians to the fact that the same piece of scientific research can often be used for good or for malevolent purposes. The anthrax attacks in the autumn of 2001, the pursuit of weapons of mass destruction by terrorist organizations, and the publication of several articles on enhancing the virulence of deadly pathogens have created a sense of urgency about developing policies to regulate and control research with the potential for harm. In 2003, the National Research Council coined the term “dual use research” to describe the problem of preventing harmful uses of biotechnology research, and an entire literature has since emerged on this topic (National Research Council 2003).

In an article published in Science and Engineering Ethics in 2007, Miller and Selgelid provide an excellent overview of the ethical and political dilemmas raised by dual use research in biology, the basic policy options for overseeing such research, and the advantages and disadvantages of each option (Miller & Selgelid 2007). They consider key issues such as controlling access to materials and technologies, providing training for investigators, weighing the risks and benefits of publishing dual use research, and safeguarding freedom of inquiry. Though Miller and Selgelid offer many useful observations and insights, they never clearly define the most important term in their article: “dual use.”

A clear definition of “dual use” is necessary for effective oversight of research that can have harmful consequences. The definition should be neither too narrow nor too broad in scope. If it is too narrow, e.g., if it covers only research involving a select class of biological or chemical agents, it may encourage scientists and policymakers to overlook other types of dangerous research. One of the most controversial examples of dual use research involved no dangerous microorganisms or chemicals at all: an article describing a mathematical model of an attack contaminating the U.S. milk supply (Wein & Liu 2005). The Department of Health and Human Services (DHHS) learned of the article prior to its publication in Proceedings of the National Academy of Sciences (PNAS) and asked the journal not to publish it. The PNAS editors met with DHHS officials and heard their concerns, but decided to publish the article because, in their judgment, the benefits of publication far outweighed the national security risks: the article contained information that would help public health agencies prevent or mitigate a terrorist attack (Alberts 2005). Other types of dangerous research might also be overlooked if “dual use” is limited to experiments involving dangerous microbes or chemicals, such as research that could be used to damage buildings or infrastructure, or to disrupt computer networks or electric power grids.

If the definition of “dual use” is too broad in scope, however, it may be needlessly applied to relatively benign areas of science that have only a remote chance of being used by terrorists or others to cause harm. Assigning too much research to the “dual use” category would impose additional administrative burdens on scientists and interfere with progress and innovation. Scientists already face sizeable administrative and regulatory burdens related to research, such as approvals for human subjects or animal research, biosafety requirements, financial audits, and so on (Steneck 2007). Most scientists would not welcome the additional red tape associated with institutional or government oversight of dual use research.

The National Science Advisory Board for Biosecurity has proposed a definition of “dual use research of concern” as “research that, based on current understanding, can be reasonably anticipated to provide knowledge, products, or technologies that could be directly misapplied by others to pose a threat to public health, agriculture, plants, animals, the environment, or materiel” (National Science Advisory Board for Biosecurity 2008). While this definition is helpful, two key terms, “reasonably anticipated” and “threat,” require further clarification. What does it mean to “reasonably anticipate” an outcome? Must one believe the outcome has a non-zero chance of occurring? A 5% chance? A 10% chance? And what counts as a threat? Must it involve the loss of at least one human life? Ten lives? One hundred? A million dollars in economic losses? Ten million? One hundred million? Questions like these must be answered before the definition can be bounded in scope and applied to specific cases.
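To make the missing specifics concrete, the Board’s language could in principle be recast as a decision rule with explicit thresholds. The formalization below is purely illustrative: the probability cutoff p* and the harm cutoff h* are hypothetical placeholders, not values the Board has proposed.

\[
\mathrm{DURC}(r) \iff \Pr(\text{misuse of } r) \ge p^{*} \ \text{and} \ H(r) \ge h^{*}
\]

Here Pr(misuse of r) denotes the estimated probability that the results of research r will be directly misapplied, and H(r) denotes the magnitude of the anticipated harm, whether measured in lives or in economic losses. The force of the questions above is precisely that neither p* nor h* has been specified.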

Developing a clear and coherent definition of “dual use research” is an important task for ethics and public policy, but it is beyond the scope of this letter. Perhaps Miller and Selgelid and other scholars can tackle this problem in the future.