Technologies in the Making

Many technologies have lasting impacts on social and environmental systems, and yet the long term is rarely considered systematically in the choices made about emerging technologies. Such choices are tricky due to the Collingridge (1980) dilemma: outcomes cannot be predicted until a technology is adopted, yet once path dependencies materialize and technologies get “locked in,” control or modulation becomes difficult as markets, cultural values, institutions and policy become rigid. The ability to confront this dilemma and responsibly govern the outcomes of technological endeavors is lacking. How to create space for discerning dialogue, generating options, and setting priorities upstream requires further attention.

Efforts to more conscientiously assess and steer technological development face several problems. Assessments must involve a broad range of stakeholders engaged within different “epistemic cultures” (Knorr-Cetina 1999) or ways of knowing. Such diversity is evident in the natural sciences in fields such as nanotechnology, which comprises materials scientists, biologists, engineers, physicists and chemists. Further, scientific and technological knowledge are inseparable from social knowledge such that values, experience, institutions, discourses and policies are woven tightly together in the design, adoption, implementation and use of technology. Decades of research in science and technology studies (STS) (Hackett et al. 2007) have demonstrated how technology develops in concert with a broad range of actors and agendas through what Sheila Jasanoff (2004) calls the co-production of science, technology and society. However, a much more limited constituency, often with homogeneous interests, frequently makes decisions about technological development. This leads to a narrow awareness of what choices and options are available. Efforts to cultivate society’s ability to better govern emerging technologies must convene disparate groups, with contrary agendas, so as to increase the range of options considered and the sources of wisdom brought to bear on them.

Governance is also complicated by the speed of technological change. Emerging technologies such as nanotechnology are outpacing regulatory structures, political responses, educational systems, and the leveraging of social choice. The disparity between the speed of technological change and the rate at which governance systems respond and cultural understandings develop severely stresses society’s ability to act responsibly in the present on behalf of future generations. As a consequence, the governance of technology is a complicated affair riven by public controversies, accidents, delays, and difficulties in prioritizing investments to produce positive social outcomes. Contemporary debates about genetically modified organisms (GMOs), controversies over nuclear energy, and the politics of stem cell research illustrate this complexity.

Governing emerging technologies involves two main challenges: (1) dealing with the insufficiently diverse and reflexive decision making that marks contemporary governance; and (2) coping with rapid technological change uncoupled from the capacity for socio-political responses. A key feature of these dilemmas is uncertainty, which shows up at multiple levels. Emerging technologies, such as nanotechnology, have uncertain developmental paths. Scholars characterize the new modes of production of scientific and technological knowledge as “post-normal” (Funtowicz and Ravetz 1990), indicating that facts are uncertain, values are in dispute, stakes are high, and decisions are urgent. It is not only technical knowledge that is shrouded in uncertainty, but, crucially, also the social implications of emerging technologies.

Conventional ways of dealing with uncertainty through prediction are insufficient. The linear model of innovation, in which the future flows neatly from the past, is outdated. An accurate prediction of technology and societal relations is not possible. The option to wait and see is neither viable nor responsible, for a variety of reasons, one of which revolves around the hardening of socio-technical pathways: once a pathway develops, it is difficult to change course (Arthur 1989). Guiding emerging technologies towards desirable societal outcomes and ensuring that positive impacts outweigh the negative requires upstream engagement (Macnaghten et al. 2005), which evaluates new technologies at an early stage, before lock-in limits the range of choices available.

Future-oriented tools and dialogues have the potential to build reflexivity into the design and development of emerging technologies, and are a key component of the form of technology assessment described as “anticipatory governance” (Barben et al. 2008): the ability to “collectively imagine, critique, and thereby shape the issues presented by emerging technologies” (p. 992). The challenge is to employ and refine methodologies that seek to understand uncertainty strategically, so as to create the social learning and contextual awareness that can lead to better solutions for complex problems. While grasping future complexities and accounting for ongoing interactions between values, machines and regimes has proven daunting for the social sciences (Williams 2006; Selin 2008), the practice of foresight (Grupp and Linstone 1999; Tsoukas and Shepherd 2004) has long dealt with reflection on alternative, plausible futures. Born from futures studies (Bell 1997), technology assessment (Rip et al. 1996), and strategic planning (Wack 1984; van der Heijden 2005), foresight methodologies are a means to analyze the explicit and implicit stories that are embraced and circulated in coping with futures known and unknown. Scenarios are one time-tested foresight methodology for coping with uncertainties by instigating perceptual change and disestablishing entrenched modes of thought so as to enable public and private organizations and multi-stakeholder groups to act more intelligently and readily.

Future-oriented methods have been used in a national scenarios exercise conducted by the National Science Foundation-funded Center for Nanotechnology in Society at Arizona State University (CNS-ASU) (see Guston and Sarewitz 2002). The key impetus for the NanoFutures project was the tasking of CNS-ASU to investigate, through a suite of methodological tools, the implications of nanotechnology. This mandate was passed down from legislation (Public Law 108–153) which posited that social science should be conducted in cooperation with nanotechnology research in such a way as to influence outcomes in socially robust ways (Fisher and Mahajan 2006).

The question that arose from this mandate relates to methods for studying and supporting future-oriented deliberation. That is, nanotechnology is largely about potential and future deliverables. But, given its inchoate form, there are no completely reliable or grounded ways to talk about implications. This situation poses challenges for the social scientists summoned to go into the lab, talk to policy makers, and engage the public about nanotechnology. They therefore must confront the unknowability of the future.

“The future” is a high stakes conceptual landscape populated by hopes and fears and plans for generations to come. When it comes to technological futures, imaginaries are caught up with notions of progress, innovation and responsibility. Nanotechnology, as this decade’s “revolutionary” technology, is not immune to futured discourse; rather, it seems to thrive on it (Selin 2007). Even after acknowledging such dynamics of expectations, it is unclear how social scientists should conduct research and outreach around plausible futures. How should “credible” “data” about the future be investigated? What is the best way to convene actors in a future-oriented dialogue about the outcomes and embedding of new technologies? How can the ongoing co-production of technology and society be made visible and thus subjected to conscious choice and steering?

In working to meet these challenges, CNS-ASU was forced to deal with a lack of clarity and scholarship around the concept of plausibility. While a full theoretical rendering of plausibility is premature, plausibility can be operationalized in practice. Establishing plausibility requires negotiation and is a component of future-oriented technology assessment. Dealing in the future tense presents theoretical predicaments and epistemological ambiguities. Further, trespassing into the future raises questions of normativity and thus necessitates careful reflection. This paper presents an opportunity to make transparent the decisions and dilemmas posed by NanoFutures in an effort to expose some of the tensions of the future tense and how one social science project managed them.

NanoFutures: A Virtual Experiment

NanoFutures works to bring the future into the present by allowing a broad range of stakeholders to consider plausible futures and to think in advance of the ossification of technologies. If democratic deliberation through early intervention is the objective, then stakeholders must be activated such that they can consider values, politics, and ethics in advance of the solidification of nanotechnologies’ markets, products, policies and practices.

NanoFutures refers to both a research project and a website. As a research project, it utilizes a host of methodological innovations oriented towards capturing the ways in which different professional communities characterize plausibility and imagine the social implications of nanotechnology. The website is a tool for outreach as well as a data collection vehicle that hosts a wiki platform and discussion forum and which presents future technological products for critique by a broad range of stakeholders.

NanoFutures has three components: (1) Development: constructing nano-enabled product scenes; (2) Vetting: establishing technical plausibility through multi-method investigations and interventions; and (3) Deliberation: presenting the scenes to a broad range of stakeholders for critique, expansion and discussion. Future products are co-created in the first instance with nanoscale scientists and engineers through vetting engagements and then opened up to broader scrutiny and collaborative authoring on the website. This intervention is an effort to ground the future and to co-create scenarios of nanotechnological products in an iterative fashion, in order to inspire debate and provide an opportunity for engagement around the social implications of nanotechnology.

Development: Constructing Naïve Product Scenes

The distinguishing characteristics of nanotechnologies emerge more clearly in the context of specific applications, and so the first step of the project was the creation of different naïve product scenes. These scenes are short vignettes that describe in technical detail, much like technical sales literature, a nano-enabled product of the future. One of the hallmarks of nanotechnology is that its products will impact diverse fields, including aerospace, healthcare, electronics, the military, and a wide variety of consumer products, and as such the scenes span a range of different application areas. To begin narrowing down the relevant nanotechnologies, selection criteria were developed around technologies related to “Human identity, enhancement and biology” (a focal theme for CNS-ASU in 2007–2008).

The task of selecting a set of prospective technologies around which to craft the scenes was a daunting one, and required a rubric to lend structure and manageability. The subject of human enhancement helpfully limited the field while also referencing a spectrum of technologies that either constitute enhancements or enable them. NanoBioInfoCogno (NBIC) is by far the most common rubric for discussing converging technologies and provides a further means to parse the human enhancement space. NBIC also makes explicit reference to cognitive science, one of the more controversial aspects of human enhancement.

The scenes were generated from documented claims in the published scientific literature, popular science literature, and science fiction (Bennett 2008). Through the structured deliberations of a multidisciplinary CNS-ASU team, ten technical descriptions were initially created (later reduced to six through vetting, see Box 1), offering a reasonable mix of short-, medium- and long-term developments.

Box 1 NanoFutures scenes

The use of future consumer products poses a tension: they invite broad deliberation about society, values, ethics, and control, yet do so within a limited framework of consumption. Though few would argue that values and consumption are not linked, framing futures in terms of products does impose a framework of capitalism and market forces that may be thought to undermine deeper reflections on, for instance, enhancement, identity and religion. Despite these limitations, focusing on products seems a reasonable way to home in on technology-in-use and put deeper ethics in context.

Evoking naivety in the development phase marks an innovation in traditional scenario methodologies. While scenarios have been used in many different fields, with different purposes and using a variety of methodologies (van Notten et al. 2003), the approach pursued here is novel. The product descriptions are intentionally called “scenes” to distinguish them from “scenarios” since scenarios are usually complete stories with a beginning, middle and end. In contrast, the scenes feature an extreme focus on technology: rather than constructing elaborate worlds that include politics, social movements and economic systems, as scenarios traditionally do, scenes describe a nano-enabled product, unencumbered by explicit illustrations of the social, political, economic and ethical implications of such products. This rendering of scenes as naïve leaves open and ambiguous the social implications of such technologies so as to invite others to frame their own issues and concerns within the deliberative thrust of the project.

Vetting: Establishing Plausibility

The scenes were vetted prior to their dissemination to counter critiques regarding the lack of realism attendant on much popular discourse surrounding nanotechnology. Nanotechnology is a subject matter infused with wildly speculative discourse, and there is a tendency to dismiss future applications as “impossible,” thus cutting discussions about upstream choices short.

The process of vetting followed three main lines: (1) focus groups with scientists with relevant expertise; (2) bibliometric analysis of key terms produced in the focus groups; and (3) research roadmapping.

The vetting workshops aimed to expose the scenes to relevant scientists for their evaluation of plausibility, timeliness, and relevance. The invitation to the workshop explained:

Some of the technologies exist today but are not scalable, while others rely on years of development—whichever the case, we are asking for leaps in imagination with the realism and measured judgment of expertise… Our goal is that you will challenge the scenes by looking for glaring technical reasons that the scene is invalid while hopefully suggesting alternative technologies and pathways to the eventual product. If the technical products look more or less reasonable, it would be helpful for us to know what sort of breakthroughs or technological advances need to take place for this to be realizable.

Participants for the vetting workshops were chosen from the Arizona State University (ASU) community based on how pertinent their scientific or technical expertise was to the technology described in the scene.[Footnote 1] The scenes contain technological products that do not exist and which often rely on the convergence of different disciplines for their manifestation. While the futuristic and interdisciplinary character of the scenes could have been obstacles, my colleagues and I found that interdisciplinary understandings existed (especially with scientists well advanced in their careers) and were brought to bear on analyzing the future-oriented scenes.

During the vetting workshops, the scientists and engineers were asked whether the scene was feasible, and also to comment on the following parameters:

  • Technical validation—Within the realm of current understanding, is this technology possible? Are the descriptions technically complete and accurate?

  • Relevance—Does the scene capture what is interesting about this technological trajectory?

  • Alternatives—Is there a more elegant or effective way of arriving at a similar function?

  • Revisions—What changes should be made to the scene to make it more plausible?

In addition to the vetting criteria, the participants were also asked to generate search terms. The prompt was: “If you were going to begin a research project devoted to this application, what search terms would you use to discover the state of the art?” The search terms developed by participants in the vetting workshops ranged from the specific (e.g., “neuron chip”) to the general (e.g., “bionano”). The terms were then sent to the Georgia Institute of Technology to search 4,700 publications pulled from the Science Citation Index of the Web of Science (see Porter et al. 2007). The search generated reports of top publications, research institutions, lead authors, and countries.

As a growing method of technology assessment, this data mining is another means of establishing that the scenes are relevant: while the products they present are not commercially developed, there is ongoing research indicating that they could be plausible. In this way, in addition to the live vetting in the workshops, the scenes are connected in real time to published research and existing research activities and thus gain another layer of plausibility.
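To make concrete what such a report might look like, the sketch below tallies top authors, institutions and countries from bibliographic records returned for one search term. It is purely illustrative: the file name, field names and record format are assumptions made for the sake of the example, not the actual Web of Science export or the workflow used at the Georgia Institute of Technology.

```python
from collections import Counter
import csv

# Hypothetical sketch of the bibliometric tally described above: given records
# exported from a publication database for one search term, count the most
# frequent authors, institutions and countries. The file name and field names
# are illustrative assumptions, not an actual database export format.

def top_counts(records, field, n=10):
    """Return the n most common values of `field` across all records."""
    counter = Counter()
    for record in records:
        # A single field may hold several values separated by semicolons.
        for value in (record.get(field) or "").split(";"):
            value = value.strip()
            if value:
                counter[value] += 1
    return counter.most_common(n)

with open("neuron_chip_results.csv", newline="", encoding="utf-8") as f:
    records = list(csv.DictReader(f))

for field in ("Author", "Institution", "Country"):
    print(f"Top {field.lower()}s for 'neuron chip':")
    for value, count in top_counts(records, field):
        print(f"  {value}: {count}")
```

The point is simply the shape of the output: ranked counts per search term that can be read alongside the workshop judgments of plausibility.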

The last task of the vetting workshop was to produce a technology roadmap that answers the question: “What kind of research is necessary to realize this product?” A roadmap is an exercise in reverse engineering that:

  • Outlines and references current research;

  • Specifies the direction of research threads (relevant to the product);

  • Notes the technological obstacles that need to be overcome;

  • Estimates the dates of solutions/breakthroughs along the way.

The roadmaps linked current and future research and development and resulted in a chronological list of scientific problems and technical challenges. They serve as another means to move the conversation beyond “Is this possible?” and to ask researchers to specify their views. The effort of sequencing discoveries and developments enables the focus group to explain the technical hurdles in more detail. In some instances, construction of the roadmap led to other strategies for developing the same product more elegantly, thus revising the scene.
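As a rough illustration of the roadmap’s structure, the sketch below records the elements listed above (research threads, current work, obstacles, estimated breakthrough dates) and orders them chronologically. The class and field names are hypothetical; the NanoFutures roadmaps themselves were qualitative workshop documents rather than code.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative data structure only: one way to record a roadmap as a
# chronological list of research threads, obstacles and estimated
# breakthrough dates. Names are hypothetical, not taken from the project.

@dataclass
class RoadmapStep:
    research_thread: str        # direction of research relevant to the product
    current_work: List[str]     # references to current research
    obstacles: List[str]        # technical hurdles to be overcome
    estimated_breakthrough: Optional[int] = None  # rough year, if offered

@dataclass
class Roadmap:
    product: str
    steps: List[RoadmapStep] = field(default_factory=list)

    def chronological(self) -> List[RoadmapStep]:
        """Order steps by estimated breakthrough year; undated steps last."""
        return sorted(
            self.steps,
            key=lambda s: (s.estimated_breakthrough is None,
                           s.estimated_breakthrough or 0),
        )
```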

While there are some limitations to the approach, it is both technically and ethically robust compared to the more common situation in which a technically trained individual offers an utterly ungrounded prediction of what additional funding in his or her area of research will do for society. Testing one’s view of the future against others in the workshop, as well as against the other vetting mechanisms, provided multiple avenues to check plausibility. This does not imply comprehensiveness but rather triangulation—an important concept when validation is not possible.

Vetting the scenes actually comes in two phases: in the first instance with the vetting procedures, and then again with the main deliberative thrust of NanoFutures, which involved broader stakeholders reflecting on technological expectations via the website and other outreach activities of CNS-ASU.

Deliberation: Open Source Scenarios

The deliberative component of NanoFutures is an attempt to discern how different groups of people assess and assign values to the technical scenes generated by the project. While this component is an intervention in its own right, it also serves as a means of data collection to determine how different communities assess plausibility. The hope is that communities working in and around nanotechnology and engaging with NanoFutures will become better equipped to confront technological choices and to understand more clearly the arguments of different stakeholders.

One of the rationales for NanoFutures was the idea that those involved in shaping nanotechnology hold different ideas not only of what nanotechnology is, but also of what it will be. Creating a space for reflection on deeply, but often tacitly, held expectations is meant to serve as a corrective to myopia as well as to force stakeholders to confront their own assumptions about the future.

The website is designed to allow users from different professional communities to see each other’s thoughts and critiques. Users can debate in a discussion forum where they are invited to critique the scenes and encouraged to address issues of governance, control, ethics, religion, and cultural, economic and legal change. Users can also further elaborate on the scenes in a wiki platform so as to add social context and complexity. This elaboration is meant to transform the scenes into scenarios, in which technical descriptions are fleshed out with attention to ethics and social dimensions such that stories have been constructed around the technical descriptions. Users also have the opportunity to write their own scenes or scenarios.

The NanoFutures website is designed so that each participant can see others’ contributions in real time, thus in principle allowing an ongoing, transparent and dialogic assessment of nanotechnology. The goal of the deliberation phase of NanoFutures is to create clear thinking around the social implications of nanotechnology and, as such, to open the future to critical reflection. Participants are explicitly told that the scenes are fictional and not predictions of what nanotechnologies will actually do in the future.

The communities of users invited to participate were:

  • Social scientists (members of the Society for Social Studies of Science);

  • Selected members of the public (National Citizens Technology Forum participants[Footnote 2] or alumni of ASU[Footnote 3]);

  • Individuals with an interest in nanotechnology, in particular Foresight Institute[Footnote 4] members, the Center for Responsible Nanotechnology[Footnote 5] community, and the CNS-ASU network;

  • Members of the Consortium for Science, Policy and Outcomes community engaged in public policy development and analysis;

  • Non-governmental organizations (NGOs) engaged with nanotechnology who were identified through internet research;

  • Scientists and engineers who had been awarded grants through the National Nanotechnology Initiative of the National Science Foundation.

The first round of NanoFutures was largely aimed at US audiences, though scholars in Latin America and Europe have expressed interest in translating and using NanoFutures. Soon after the launch of NanoFutures (in May 2008), a range of individuals and groups, including science teachers, an environmental advocacy group, defense analysts and museum professionals, expressed interest in using the site and the scenes in their respective professional activities. This response evidences the potential utility of the virtual outreach generally, and the value of the scenes for outreach and educational activities specifically.

While there are obvious shortcomings in the selection of these communities, we feel they offer a reasonable range of perspectives.[Footnote 6] One might expect that these different communities maintain different epistemologies and as such will have different standards of plausibility and different ideas about governance, ethics and desirability. The analysis of the website entries shows how societal implications were conceived and debated by the different communities (Selin and Hudson 2010).

Involving a wide range of stakeholders in deliberative technology assessment builds upon lessons from science and technology studies, particularly critical public understanding of science research, which has shown that people immediately outside the realm of technological development make sense of technology in surprising ways—ways that cannot be known by the analyst a priori. Open source scenaric thinking provides a basic scaffolding for participants to elaborate, appraise, hack and customize scenario ingredients into more substantial critiques. Researchers cannot presume to know what different communities make of the implications, so the perspectives of members of these communities should be actively and explicitly solicited. Through employing naïve product scenes, NanoFutures sets the stage for “extended peer review” (Funtowicz and Ravetz 1991).

Negotiating Plausibility

An intervention that focuses on promoting debate about plausible futures raises predicaments. What does plausibility mean for claims that cannot be confirmed? In dealing with anticipatory knowledge such as projections, visions and expectations, what counts as valid and trustworthy knowledge? Anticipatory knowledge is not about facts, historical evidence, or presently observable phenomena. Instead, it consists of speculation and knowledge claims positioned in the future. This trespass of knowledge into the future renders impotent traditional knowledge techniques such as validating facts, confirming history, and observing events. While notions of fact, history and events are regularly subject to interpretation, they are far more justifiable than anticipatory knowledge. What is lost are notions of evidence and proof. The question remains: with what consequences?

In NanoFutures, plausibility was negotiated, quantified, visualized and assessed through the vetting procedures. Plausible was taken to mean such things as feasible, realistic, possible, tenable, credible or defensible. Yet surprisingly, fact and fiction were not heavily contested in the context of the vetting workshops. Caveats were regularly given when the vetting workshop participants were confronted with a scene. The scientists were quick to say, “That work is currently not happening.” The facilitator would then ask, “If one technical hurdle was surpassed, would this device work?” Through an iterative dialogue of such “ifs” and “thens,” the scientists were able to comment specifically on what the technical hurdles were and what new lines of research or discoveries would be necessary. In lieu of proof for such developments, arguments were developed that maintained scientific credibility and conformity to current technical knowledge.

Many of the vetting sessions resulted in minor changes to wording or slight changes in the technology. The scene about ultra-fast sequencing technology used to analyze the DNA in harvested wastewater was approved with a quick “Yes, that is exactly how it would work” by a senior scientist and his lab. Another scene—describing a cranial chip with a data feed that puts information into the brain—was modified from a single brain chip to a network of chips due to the lack of detailed knowledge about the processing of memory in the brain. In this case, uncertainty was figured into the technical description through the choice of a more robust technological pathway.

One scene was removed from the project as a result of its vetting session. This scene showcased adjustable tattoos created by injecting magnetically active ink under the skin, which could then be shifted into a design using small electromagnets. When the CNS-ASU researcher approached the Center for Solid State Sciences to discuss the scene, an engineer explained that applying that strong a magnetic force to a particle would attract the particle directly to the magnet, effectively removing it from the tissue. Whereas the scene relied on the horizontal movement of the ink, the force of the magnet would move the particles along a more vertical vector. The engineer then proceeded to pull up a series of equations about the movement of particles under different magnetic field strengths. In this vetting session, the engineer was so engaged in the project, and so devoted to developing a solution, that after exhaustively explaining why the scene would not work, he spent much effort developing another way for the magnetically based tattoo to function. The problem with the scene as originally proposed was that the particles would be drawn up toward the magnet rather than shifted laterally, making it impossible to adjust the design but enabling a complete removal of the particles. The engineer proposed using the same technology to inject the magnetic ink into a desired form and then using the force of the magnet to remove all the particles, inventing a removable tattoo. Through known calculations, the new scene was deemed plausible.

Plausibility was also negotiated through a system of checks and balances, a sparring of imagination and reason. Ira Bennett of CNS-ASU, who conducted many of the vetting sessions, reflects on one of the vetting workshops held in a laboratory meeting:

Students were eager to show their knowledge and ability to use it creatively while the faculty kept them in check concerning the practicality and feasibility of their ideas. Members of the laboratory group told me they felt as though the group had benefited from the experience as it provided some context to the students on where the technology could go into the future, past the day-to-day tedium of macaque models and algorithm development (Bennett 2008, p. 153).

In a quasi-oral exam, plausibility was established through a balance of open-ended, creative discussion and expert validation. The value to the laboratory group also shows how the vetting sessions were an engagement in their own right, offering concrete outcomes in terms of productive interactions between social and natural scientists.

The vetting exercises established a first layer of plausibility through engagements with local communities of nanotechnology researchers. Yet the ambitions of the NanoFutures research lie beyond “technical” plausibility. The scenes are intentionally vetted on the technical front as a means to establish upfront the basis for a serious conversation about social implications that cannot be rejected out of hand for technical reasons. Establishing technical plausibility can thus be seen as a pre-engagement intervention which is, however, ultimately secondary to the broader plausibilities explored in the deliberation phase of the project.

The key thrust of the project lies in eliciting what broader stakeholders say about plausibility. We believe the open-source approach should liberate more useful information about what is thought to be plausible among a wide, “extended” (Funtowicz and Ravetz 1991) group of technical and lay actors. Deliberations through the web activities are meant to establish community-determined plausibility, to explore economic, social and political plausibility, and presumably to shift into questions of desirability more generally.

Tensions of the Future Tense

Intervening in people’s views of the future intervenes in the future. On the one hand, this is nothing new: interviewing people who have been abused often refigures the past in their minds, and upon further reflection they come to write their history in a new way. Similarly, putting the future in clear view and explicating expectations provides an opportunity for reflection. Those engaged may come to view their ambitions, goals and actions in the world differently. Social science research that focuses on historical events or experiences can reconfigure memories, but social science research that focuses on future events and desires can reconfigure intent and hence action, now and in the future.

The proposition that NanoFutures may affect one’s view of the future is not insignificant. One of the key rationales for the project is that expectations matter. NanoFutures is one way to try to articulate and challenge expectations that operate in a context of consequence. There is a significant, but under-explored, linkage between expectations and consequence. From Robert Merton’s (1948) self-fulfilling prophecies to more recent studies on the performativity of futures (Michael 2000; Brown et al. 2000), scholars understand that such visions are not just rhetorical articulations of the future but are actually constitutive of futures. While the recognition of performativity suggests the need to take expectations seriously, it also draws attention to how interventions on the future attempt, and to some extent inevitably manage, to shape futures by highlighting previously unseen choices.

NanoFutures thus has had an effect by intervening in futures, mainly through the deliberative component that initiates conversations. Stimulating debate always involves structuring, and thus closing down, particular avenues of concern. That is, my colleagues and I recognize that, by seeding the conversation with one scene rather than another, we have already directed the conversation. For this reason, we have been attentive to the balance of the scenes in terms of technology area and timeframe to realization, as well as to a mix of “positive” and “negative” scenes. Clearly we cannot choose all good (or all bad) scenes, for that would be propaganda in its own right. A journalistic approach of balancing one “good” scene with one “bad” is reasonable, but it would also mean imposing our judgment of what is good and what is bad on other actors within what is in fact a more complex system of evaluation. That, in essence, is the purpose of the open source styling of NanoFutures—to investigate different communities’ evaluative schemes by focusing on specific instances of future nanotechnologies.

NanoFutures frames the future, and as such the intervention acts in real time, in the present, and has the potential to reorient attention and modify action. Understanding the import of imagined futures in priority setting, technological design, and public acceptance suggests that creating stories of the future for the purpose of social science research is about intervening in a contentious and influential debate. CNS-ASU recognizes that by presenting scenes, we are also shaping the discourses that surround nanotechnology. Depending on the reach of the website, scientists and policy makers may begin to think about their work in a different way.

Indeed, CNS-ASU means to evoke a particular set of competencies in anticipatory governance (Barben et al. 2008) with the aim of “build[ing] into the [research and development] enterprise itself a reflexive capacity that…allows modulation of innovation paths” such that they are more in line with social values (Guston and Sarewitz 2002, p. 100). Being able to grasp what those values are when it comes to technologies-in-the-making is the challenge and aim of NanoFutures. Yet by pursuing this inquiry, we likely modify what it is we are studying. The work of CNS-ASU is in this sense normative and meant to have consequence.

There are additional risks in employing the future tense in research. For example, there is a risk of avoiding or downgrading the present by centering debate in the future. Many of the societal issues posed by future nanotechnologies, such as toxicology, equity, or access, can often be more meaningfully framed in the present. While the import of acting now should not be underestimated, utilizing the future tense builds upon a central idea captured in anticipatory governance: technologies follow paths characterized by early flexibility and later obduracy, and they can be made more socially robust by instigating such deliberations in advance of potentially entrenched problems. There is also the idea that distancing deliberation from the present by evoking the future provides some psychological comfort, divorced from the immediacy and urgency of quests for funding, agenda building and definitional disagreement. The future arises as common territory, a shared space that on the one hand appears more open-ended, but that also makes more obvious the role of choice and human agency in the development of new technologies.

By creating a space for reflection on plausible futures, CNS-ASU hopes to disrupt well-rehearsed and entrenched notions of progress that typically attend perspectives on new technologies. Without reflection, technological visions tend to overestimate the speed of technological change and underestimate the speed of social adoption (or rejection) and cultural change (Geels and Smit 2000). Without reflection, it is difficult to enable inclusive, reflexive decision making across society on issues of technology. Establishing plausibility appears to be a crucial element of future-oriented deliberative practices. Though not without risks, establishing plausibility seems to enable the conversation to begin. In this project, plausibility was locally defined in the vetting workshops, triangulated with data mining, and then extended broadly to stakeholder communities. This multi-method, real-time approach to plausibility captures the fleetingness of the future. Built into how plausibility was operationalized by CNS-ASU is an understanding that context matters, and that what is plausible now may not be in the future.