Introduction

Financial conflicts of interest are widely recognized as a crucial source of public distrust in contemporary scientific research and development (see e.g., Kuzma and Besley 2008; McGarity and Wagner 2008; Shrader-Frechette 2007). This creates a dilemma for those who seek to develop governance strategies for emerging fields such as nanotechnology. On the one hand, government agencies are underfunded and have limited resources for performing safety studies themselves (Ramachandran et al. 2011). On the other hand, safety studies performed or directly monitored by chemical manufacturers (or “registrants”) are likely to be met with significant skepticism by the public, especially given evidence that research results tend to be correlated with the interests of study sponsors (Bekelman et al. 2003). Faced with this difficulty, it is at present unclear how to generate credible nanotechnology safety data that can serve as an adequate basis for public trust and effective oversight.

This problem was recently addressed in the April 2011 issue of the Journal of Nanoparticle Research (Vol. 13, No. 4), which featured a symposium on nanotechnology governance. The centerpiece of the symposium was an article offering the concluding recommendations from the National Science Foundation (NSF)-funded project, “Evaluating Oversight Models for Active Nanostructures and Nanosystems: Learning from Past Technologies in a Societal Context,” based at the University of Minnesota (Ramachandran et al. 2011). The authors suggested that the challenges of conflicts of interest in nanotechnology research could be addressed using a two-pronged approach: (1) the development of standardized protocols and procedures that would guide testing by both manufacturers and agencies; and (2) the subsequent validation and internal/external peer review of data by a coordinating agency that handles nanotechnology regulation (Ramachandran et al. 2011, p. 1361).

We support the goal of the Minnesota group to shift the burden of proof so that chemical manufacturers and users have more responsibility to generate safety data than they currently do under the U.S. Toxic Substances Control Act (TSCA) (Ramachandran et al. 2011, p. 1363). Nevertheless, we think that past experiences with standardized protocols developed under the guidance of the Organisation for Economic Co-operation and Development (OECD) and implemented by national regulatory agencies such as the U.S. Environmental Protection Agency (EPA) and Food and Drug Administration (FDA) highlight significant difficulties for the two-pronged strategy suggested by Ramachandran et al. (2011). In response to these difficulties, we suggest an alternative approach under which national regulatory agencies would collect funds from registrants (money that is already spent during late-stage product development) and use these funds to contract directly with Good Laboratory Practice (GLP)-certified academic laboratories or contract research organizations (CROs). Although we argue that this alternative is far preferable, we fully recognize that it may face political difficulties, as it would (1) eliminate registrant oversight or monitoring of safety studies and (2) raise potential confidentiality issues related to new products in development. While the latter problem could be addressed by establishing strong confidentiality agreements, the former would likely be more troublesome, since registrants could no longer proactively address unexpected product-related issues prior to regulatory submission. Therefore, we also suggest three lessons for implementing Ramachandran et al.’s (2011) two-pronged strategy in ways that promote the authors’ goal of a dynamic approach to nanotechnology oversight that enhances public confidence.

Standardized test guidelines: lessons from the past

Standardized protocols and procedures have been used for decades by regulatory agencies in industrialized countries around the world under the guidance of the OECD (Paustenbach 2009). Therefore, the strengths and weaknesses of these approaches are already relatively clear. For our purposes, two features of these previous standardization efforts are particularly relevant. First, they have not eliminated concerns about the influence of financial conflicts of interest on policy-relevant research. Second, they have had significant unintended consequences, including committing regulatory science to relatively dated approaches and precluding consideration of innovative high-throughput toxicity-testing methods such as those being developed under the Tox21 initiative (http://www.epa.gov/ncct/Tox21/).

The first concern is that standardized protocols and procedures, while rigid in certain respects, leave room for flexibility in designing and interpreting studies, and this flexibility can provide opportunities for those with conflicts of interest to exert significant influence. For example, in contrast to most studies performed for human health hazard assessments, regulatory ecotoxicity tests give pesticide registrants significant latitude to choose the experimental design and statistical analyses (Chapman et al. 1996; Isnard et al. 2001). As a result, careful dose or concentration selection likely has a significant impact on subsequent conclusions about the No Observed Adverse Effect Levels (NOAELs) or No Observed Effect Concentrations (NOECs) for the tested pesticides. Manufacturers can also frequently choose among several different species or strains of animals, providing an opportunity to select those that exhibit lower sensitivity to chemical exposure. Finally, manufacturers are required to provide only the minimum information and interpretation specified by standardized test guidelines, which may preclude regulators’ and other stakeholders’ access to additional information that could inform or facilitate more effective regulations.
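To make the dose-selection concern concrete, consider the following minimal sketch. It is a stylized illustration only: the 30 mg/kg effect threshold, the dose sets, and the deterministic treatment of effects are our own hypothetical assumptions, not any guideline’s actual procedure.

```python
# Stylized illustration of how dose selection can shift a reported NOAEL.
# The 30 mg/kg effect threshold and both dose sets are hypothetical.

TRUE_THRESHOLD = 30.0  # dose (mg/kg) above which adverse effects appear

def reported_noael(doses):
    """Return the highest tested dose showing no adverse effect.

    In a real bioassay effects are judged statistically; here we simply
    compare each tested dose to the hypothetical threshold.
    """
    return max(d for d in doses if d < TRUE_THRESHOLD)

# Two designs, each with a control plus three treatment doses, and each
# plausibly compliant with a guideline's minimum requirements:
design_a = [0, 5, 10, 100]   # wide spacing below the threshold
design_b = [0, 10, 29, 100]  # a dose placed just under the threshold

print(reported_noael(design_a))  # 10 -> NOAEL reported as 10 mg/kg
print(reported_noael(design_b))  # 29 -> NOAEL reported as 29 mg/kg
```

Both designs test the same substance with the same underlying dose-response, yet the second reports a NOAEL nearly three times higher. This illustrates how guideline-compliant latitude in dose placement can shape the figure that ultimately enters a risk assessment.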

It is crucial to recognize that these problems are exacerbated by current relationships between registrants and CROs. Under increasing pressure to cut development costs and improve profit margins, most science-based (R&D) manufacturers now contract late-stage development (or “regulatory”) safety studies to CROs, organizations that have significant financial incentives to satisfy the demands of their clients (the registrants). Within the pharmaceutical industry, there is increasing evidence that CROs and medical education and/or communications companies (MECCs) have engaged in activities such as ghostwriting articles that are favorable to study sponsors and that are published under the names of prominent academics (Elliott 2004; McHenry and Jureidini 2008; Moffatt and Elliott 2007). However, even setting aside such egregious strategies, CROs have an incentive to work with manufacturers to generate a package of results and interpretations that “spins” new products in the best possible light.

The second difficulty with standardized study designs is that they can lock regulatory agencies into relatively dated procedures while making it difficult to incorporate innovative scientific approaches and emerging hypotheses related to disease development and ecosystem-level effects. For example, ecotoxicity studies currently required under the U.S. Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) are still based on draft guidelines developed in 1996 (http://www.epa.gov/ocspp/pubs/frs/home/guidelin.htm). While the goal of ecological risk assessment has always been to estimate population-level risks, these guidelines place too much emphasis on direct, high-concentration effects of pesticides (e.g., killing particular species of fish) and have not been updated to consider subtle, indirect ecological effects (e.g., harming critical habitat and/or food supply) that likely occur at environmentally relevant concentrations (Calow and Forbes 2003).

To take another example, the U.S. Congress called on the EPA in 1996 to develop standardized Tier 1 and Tier 2 testing procedures for detecting substances with endocrine-disrupting properties. While the majority of Tier 1 screening assays have been validated and data call-ins (DCIs) are currently being issued to registrants, validation of the Tier 2 testing assays remains incomplete (http://www.epa.gov/endo/pubs/assayvalidation/status.htm). Furthermore, some commentators claim that the standardized protocols used by industry to test substances like bisphenol A (BPA) for endocrine-disrupting properties are based on outdated approaches (Myers et al. 2009). Nevertheless, because they were performed according to approved standards, industry studies have received more weight in U.S. and European regulatory decision-making than new NIH-funded academic studies that are allegedly more sophisticated in identifying hazards (Myers et al. 2009).

Suggestions for nanotechnology governance

Based on the previous oversight experiences of regulatory agencies in OECD-member countries, there appear to be significant dangers in depending on a battery of standardized studies to mitigate conflicts of interest in an emerging field like nanotechnology. As Ramachandran et al. (2011) emphasized, it is important to create a dynamic oversight scheme that can evolve rapidly in response to new information about nanotechnology risks. By contrast, standardized protocols take a long time to develop, are exceedingly difficult to change, and can be wielded by interest groups as a strategic ploy for dismissing cutting-edge studies that reveal new hazards. Moreover, as long as manufacturers continue to contract with CROs to perform safety studies for them, it is not clear that a standardization scheme will generate deep public confidence in the results.

We suggest that an approach to generating safety data previously proposed by Sheldon Krimsky is more likely to provide both public trust and flexibility. In his book Science in the Private Interest (2003, p. 229), Krimsky suggested the creation of a National Institute for Drug Testing (NIDT). Under his proposed scheme, any company that wanted to submit data to the FDA in support of a new drug application would be required to provide the NIDT with funds that the institute could then use to contract studies with academics or research centers. This scheme could also be extended to encompass other regulatory agencies such as the EPA. Under Krimsky’s approach, conflicts of interest are addressed not solely through the standardization of studies but also through the relative independence of the NIDT; therefore, the NIDT could have more leeway to experiment with new study designs that yield innovative information about hazards.

Admittedly, this approach would not eliminate all concerns about conflicts of interest. For example, Krimsky proposed that the NIDT or its equivalent would negotiate the details of study procedures with the companies that provided the funding for the studies. This relationship might still appear to give manufacturers too much power over study designs. Moreover, assuming that the NIDT would contract at least some of its studies out to CROs that previously worked with industry, one might worry that the employees of these CROs would already have pro-industry biases. Nevertheless, the crucial virtue of Krimsky’s proposal is that it breaks the particularly worrisome link between manufacturers and CROs, thereby significantly lessening the incentives for CROs to generate studies that serve the interests of manufacturers.

Perhaps the greatest difficulty with Krimsky’s proposal is a political one: manufacturers are unlikely to be willing to cede control over studies to a separate entity such as Krimsky’s proposed NIDT. Nevertheless, given the virtues of this approach for generating public trust and credible data (especially if the NIDT were given significant autonomy to choose the design of studies independently from manufacturers), we think that it should be considered very seriously. Perhaps the momentum to develop new oversight strategies for nanotechnology would provide an opportunity to experiment with something resembling this approach.

Even if it proves politically impractical to adopt a strategy like Krimsky’s, our analysis of past experiences with standardized studies still suggests at least three other lessons for strengthening the oversight scheme proposed by Ramachandran et al. (2011). First, we have argued that the initial prong of their strategy (i.e., standardization of protocols and procedures) is unlikely to secure trust in safety studies on its own. Therefore, the second prong of their strategy (namely, validation and vetting of the study data) must be developed with extraordinary care if their proposal is to be reasonably effective. To their credit, the authors suggest that the group performing the vetting should include “members from the agencies, various stakeholder groups, and the public” (Ramachandran et al. 2011, p. 1361). In addition to including a carefully selected range of participants, an effective vetting scheme will also need to incorporate a highly transparent and inclusive procedure that generates maximal trust in the process. It may also be helpful for the vetting process to include adversarial forms of deliberation designed to highlight concerns or irregularities associated with safety data (Elliott 2011, p. 105).

A second lesson for strengthening nanotechnology oversight in response to conflicts of interest is to take intentional steps to prevent standardized protocols and procedures from ossifying. A central theme of Ramachandran et al.’s (2011) article is that nanotechnology oversight needs to be “dynamic” and flexible in response to new data. Heavy reliance on standardized protocols to protect against conflicts of interest therefore poses a serious threat to maintaining a dynamic oversight system. Accordingly, the procedures for approving and revising standardized study designs in a developing field like nanotechnology should be as flexible as possible. There should also be procedures for taking account of new information about nanotechnology hazards even when that information is generated using innovative protocols. Nanotechnology oversight must avoid the shortcomings of endocrine-disruption oversight; we cannot afford to spend 15 years trying to agree on standardized protocols while dismissing potentially significant studies. Perhaps recent proposals for developing tiered testing systems and high-throughput screening strategies can contribute to the dynamic assessment scheme that we are recommending (NRC 2007).
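To illustrate the kind of flexibility we have in mind, the following sketch shows a two-tier screening flow in which the battery of first-tier assays is a pluggable input rather than a fixed list, so that newly validated high-throughput assays can be adopted without rebuilding the whole scheme. Everything here is a hypothetical illustration of this design idea: the assay names, the 0.5 activity cutoff, and the two-tier structure are our own assumptions, not the NRC (2007) proposal or any agency’s procedure.

```python
# Schematic two-tier testing flow. The assay names, scores, and the 0.5
# activity cutoff are hypothetical; the sketch only illustrates how a
# pluggable first tier keeps the scheme open to new assays.

def tier1_flagged(material, assays, cutoff=0.5):
    """Run each high-throughput assay and flag the material if any
    activity score exceeds the (hypothetical) cutoff."""
    return any(assay(material) > cutoff for assay in assays.values())

def route(material, assays, tier2_test):
    """Send only flagged materials on to resource-intensive Tier 2 testing."""
    if tier1_flagged(material, assays):
        return tier2_test(material)
    return "no Tier 2 testing triggered"

# Because `assays` is just a mapping, a newly validated assay can be added
# without changing the routing logic:
assays = {
    "oxidative_stress": lambda m: 0.2,  # placeholder scores for illustration
    "membrane_damage": lambda m: 0.7,
}
print(route("nanomaterial X", assays, tier2_test=lambda m: "run in-depth study"))
```

The point of the sketch is architectural: when the first tier is an open, replaceable collection rather than a hard-coded battery, updating the oversight system to reflect new science becomes an incremental change rather than a renegotiation of the entire protocol set.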

A third lesson is that researchers and policy makers should continue to push aggressively for government funding of safety studies of new, current-use products rather than well-studied products that have been phased out and are no longer in commerce. While manufacturers should also be encouraged to do their fair share, it is exceedingly difficult to develop a system that maintains public trust in industry-funded safety data. Therefore, numerous commentators have argued that one of the best responses to conflicts of interest is to generate more independently funded safety studies on new drugs and chemicals (APHA 2003; Elliott 2011; Shrader-Frechette 2007). When these independent studies agree with standardized, industry-funded studies, they increase public trust in the results. When discrepancies arise, independent studies can contribute to a more dynamic oversight system by helping to determine whether current standardized protocols need to be revised.

Conclusion

We have argued that financial conflicts of interest require careful attention when developing an effective oversight scheme for nanotechnology. While Ramachandran et al. (2011) proposed that these conflicts be addressed with standardized study protocols and vetting procedures, we have argued that the past experiences of regulatory agencies raise significant concerns about such a conventional approach. In particular, conflicts of interest still threaten to influence standardized studies, and the process of standardization threatens to lock regulatory agencies into outdated procedures.

We have suggested that Sheldon Krimsky’s proposal to create a separate governmental institute for safety testing holds much more promise for alleviating conflicts of interest and maintaining opportunities for flexibility. Nevertheless, because Ramachandran et al.’s (2011) approach is likely to be more politically feasible in the near term, we have also suggested three lessons for implementing their strategy more effectively. Specifically, the process for vetting safety data should be designed for maximal transparency and effectiveness, standardized procedures should be approved and revised in the most flexible manner possible, and government funding of safety studies should remain a very high priority.