It is difficult to pinpoint exactly when America moved from the geographic to the technological frontier. That moment may have been on July 12, 1893, when a young historian named Frederick Jackson Turner declared that the country’s westward expansion – a movement that had shaped the American psyche – was over. Turner’s obituary for the frontier coincided with the Chicago World’s Fair, a six-month love fest with architecture and technology that featured the first glimpse of what electricity might bring to American society, from lighting to motion pictures. Turner noted that, “In this advance, the frontier is the outer edge of the wave… but as a field for the serious study… it has been neglected” (Turner 1893).

Today, the technological frontier remains a backwater to be experienced but seldom studied. Public policy makers operate daily on this frontier, but travel with little guidance and significant conceptual baggage. Like our forefathers on the geographical frontier, those on the technological frontier confront what Peter Bernstein has called the “wildness” – a world of change and uncertainty that confounds easy decisions, undermines predictions, and can often lead to embarrassing miscalculations by decision-makers. As Bernstein noted, “It is in [the] outliers and imperfections that the wildness lurks” (Bernstein 1996). Beyond rampant uncertainty, the technological frontier shares another trait with the old frontier – bad things can and do happen. Accidents are “normal” on the frontier, as Charles Perrow pointed out years ago (Perrow 1984). Despite the uncertainties, the frontier is where the expectations develop that shape business strategies, public opinion, and government actions over time (Bonini et al. 2006).

A host of issues makes navigating the technological frontier difficult for government entities, including: novelty that undermines prediction, cognitive biases that blur our perceptions, framing that distorts emergent debates on public policies, intractable problems with too little funding to solve them, and a long list of known unknowns that go unaddressed. One issue that has begun to attract more interest, and concern, is what some see as a growing mismatch between the rates of innovation in the public and private sectors. The basic argument is that this mismatch presents government with a quandary: either speed up, which could lead to ill-considered actions or poorly conceived policies, or become irrelevant and incapable of influencing the dynamics of technological change (Popper 2003).

How serious this problem is depends on whether one believes there is an expanding gap in innovation rates – a tortoise-and-hare problem. There is evidence that the time it takes to introduce new technologies has been shrinking. Between 1990 and 1995, the time to develop and introduce US products fell from 35.5 to 23 months, and the time needed to introduce high-tech products into the marketplace dropped from 18 months in 1993 to 10 months in 1998 (Griffin 1997; Tassey 1999). Taking a longer historical view, Yale University economist William Nordhaus has estimated that about 70 percent of all goods and services consumed in 1991 were different from those of a century earlier (Nordhaus 2009). In the period from 1972 to 1987, the US government eliminated 50 industries from its Standard Industrial Classification (SIC) codes. In the decade following 1987, it deleted 500 and added, or redefined, almost 1,000.

There is a tendency to invoke Moore’s Law – Gordon Moore’s 1965 prediction that the performance of integrated circuits would double every 18–24 months – as a metric of today’s rapid innovation tempo. However, the distance between computer chips and actual computers is large, and the gap is littered with failed startups and wasted capital. Bhaskar Chakravorti coined the term Demi-Moore’s Law to indicate that technology’s impact on the market moves at only half the speed predicted by Gordon Moore (Chakravorti 2003). As Clayton Christensen at Harvard Business School has noted, technologists have a habit of overestimating consumer demand and often project huge markets that never materialize. It has been jokingly said that computer scientists, looking at new markets, count 1, 2, 3,… a million (Seely Brown and Duguid 2000). Regardless of the absolute rate of change, the relative distance between private sector innovation and public sector response seems to be growing.

In one emerging area – nanotechnology – growth in patents has yielded a corresponding rise in products on the market, with a 10–12 year lag between invention and market penetration. The number of manufacturer-identified, nano-based products on the market rose from around 50 in 2005 to over 1,000 in August 2009, and to 1,300 by the end of 2010. A linear regression model fitted to this trend projected 1,700 products by 2013 (R² = 0.996) (Project on Emerging Technologies 2011).

Figure: Nanotechnology patents (Chen et al. 2008) and nano-based consumer products
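To make the projection logic concrete, here is a minimal sketch of the kind of linear fit described above. It is illustrative only: the published projection (1,700 products by 2013, R² = 0.996) was fitted to the full consumer-products inventory time series, while this sketch uses only the three milestone counts quoted in the text, so its numbers will differ.

```python
# Minimal sketch of a linear-regression projection of product counts.
# Uses only the three milestone counts quoted in the text; the published
# figure was based on the full inventory time series, so results differ.
import numpy as np

years = np.array([2005.0, 2009.6, 2011.0])   # approx. dates of the quoted counts
counts = np.array([50.0, 1000.0, 1300.0])    # manufacturer-identified products

slope, intercept = np.polyfit(years, counts, deg=1)   # ordinary least squares
projection_2013 = slope * 2013 + intercept

# Coefficient of determination for this small fit
residuals = counts - (slope * years + intercept)
r_squared = 1 - residuals.var() / counts.var()

print(f"fitted trend: {slope:.0f} products per year")
print(f"projected inventory size in 2013: {projection_2013:.0f}")
print(f"R^2 on these three points: {r_squared:.3f}")
```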

As nanotechnologies were introduced into the marketplace, a secondary lag occurred between their introduction and an understanding of any risks to human health and the environment – a lag that is likely to grow. A recent study on the potential costs and time required to assess the risks of just 190 nanomaterials now in production indicated a required investment ranging from $249 million (under optimistic assumptions about hazards and with streamlined testing techniques) to almost $1.2 billion for a more comprehensive battery of tests in line with a precautionary approach – an approach that would take between 34 and 53 years to implement (Choi et al. 2009). Keep in mind that the risk assessment challenge is likely to increase in complexity and cost with second and third generation nanotechnology products and materials. A third lag then occurred between the recognition of risks and attempts to manage them, either through voluntary approaches or mandatory reporting requirements and regulations. A new comparative US-EU report calls for mandatory reporting for nanomaterials in commercial use but, to date, only the Canadian government has implemented this type of regulation (Breggin et al. 2009).

The shock of the new is compounded by what English historian David Edgerton called “the shock of the old” (Edgerton 2007). Once introduced, technologies tend to linger, often for decades. Our strategic arsenal still relies on the B-52 bomber (in service since 1955); machetes and small arms kill most people in wars; and our environmental policies still focus on technologies developed during the last industrial revolution, such as the internal combustion engine, steam-powered electricity generation, and bulk chemical synthesis. The organizational challenge is dealing with three types of technologies simultaneously: old technologies from the past, old technologies combined in new ways, and the truly new and novel. So the flood of emerging nanoscale materials, many with highly novel properties, comes on top of 80,000 chemicals already in commerce about whose risks to humans and the environment we know very little (Environmental Defense Fund 1997).Footnote 1

1 Change the Metaphor

The frontier was, and still is, a powerful metaphor. If neuroscientists are right in asserting that metaphors are the foundations of our conceptual thinking, then we need to change the metaphor governing behavior and public policy on the technological frontier (Lakoff and Johnson 1980). The old policies and programs, based largely on an “assessment and regulation” paradigm, need a new operating system, one that moves from Newtonian mechanics to evolutionary biology and shifts the modus operandi from the interminably long process of issue identification, analysis, recommendations, and implementation to an emphasis on learning, adaptation, and co-evolution.Footnote 2

One useful biological metaphor for this new state of affairs is the Red Queen, the character in Lewis Carroll’s Through the Looking-Glass who says to Alice: “Now, here, you see, it takes all the running you can do to keep in the same place” (Carroll 1872). Applying a biological metaphor to technological innovation might seem far-fetched, but the question is what we might learn from such an analogy.Footnote 3 As Stuart Kauffman once noted, “What can biology and technology possibly have in common? Perhaps nothing, perhaps a lot” (Kauffman 1995). Catching up with technological innovation is difficult, and our governance institutions are handicapped by the existing approach to policy design – slow, expensive, and hard to maneuver in the face of change, uncertainty, and constant surprise. In this situation, metaphors matter because they serve as a means of structuring, and potentially changing, how we see, think, and act. Organizations viewed as machines, for instance, will operate very differently from organizations viewed by their members as brains or adaptive organisms (Morgan 1997).

One response to the Red Queen would be a shift from serial to parallel processing, or, to use an approach from the business world, a move towards concurrent engineering where product and process design run simultaneously, achieving time savings without sacrificing quality. Applying the Red Queen metaphor to public policy challenges on the technological frontier has three important implications for the behavior of organizations:

  • First, co-evolution is the only operable strategy. As John Seely Brown, the former head of the Xerox Palo Alto Research Center (PARC), once observed, “The future is not invented; it is co-evolved with a wide class of players.” The players in the policy system become part of a diverse, complex, and dynamic innovation ecosystem, not isolated observers sitting on some external perch. The goal is to prevent risks, not just study them; to encourage innovation, not just write about it; and to accelerate the introduction of sustainable technologies into the marketplace, not to hinder it.

  • Second, time matters. Understanding the pace of change of the actors in the innovation system will define strategic postures (for instance, shaping versus adapting) and actions (such as placing big bets, creating options, or making no-regrets moves), and will determine the nature and ultimate outcomes of co-evolution (Courtney et al. 1997). This sense of time and timing depends on a high degree of situational awareness, or what some term “mindfulness” of the environment, constraints, opportunities, and expectations (Weick and Sutcliffe 2001). One key piece of information is an understanding of the decision cycle of key actors in the ecosystem – from industry to the Congress – and the ability to gain influence or competitive advantage by getting inside that cycle.Footnote 4

  • Finally, change/learn or die. One of the most important implications of the Red Queen metaphor is that previous behaviors and adaptations do not guarantee continued survival in the face of future challenges (Hoffman 1991). One has to learn effectively from the past, but adaptive learning on the fly is also critical, and that implies continual experimentation with innovative methods and organizational structures.

Imagine a new set of functions designed to operate dynamically inside the innovation system in a parallel-processing mode, focused on co-evolution and rapid learning. The list that follows is illustrative rather than exhaustive, and is designed to form the basis of an experimental and empowering niche that could support a broader transition to new policies and organizational strategies (Rotmans and Loorbach 2009).

2 Embed an Early Warning System

Without early warning, early action is difficult and a reactive response is almost preordained. Proponents of reflexive or anticipatory governance have raised the issue of early warning, but little action has been taken on the part of government to institutionalize the function (Guston and Sarewitz 2002).

Here is one example of an early warning failure on the technological frontier. Concerns about the possible inhalation risks of carbon nanotubes first appeared in a 1992 letter to Nature by industrial hygienist Gerald Coles.Footnote 5 In 1998, science journalist Robert Service wrote an article in Science entitled “Nanotubes: The Next Asbestos?”, again raising concerns, which were downplayed by a number of nanoscientists, including Nobel prize winner Richard Smalley (Service 1998). As was recently noted, Smalley “…did not want to draw attention to the hypothetical dangers of nanotechnology in case it would undermine support for the field in the early days” (Toumey 2009). Fast-forward another decade and more evidence has accumulated that carbon nanotubes can cause asbestos-like pathogenicity in the lung and can pass directly through the lung lining (Poland et al. 2008; Sanderson 2009). Recently, the Environmental Protection Agency declared it would finally enforce pre-manufacturing reviews for carbon nanotubes, stating that carbon nanotubes “are not necessarily identical to graphite or other allotropes of carbon.”Footnote 6 This represents a gap of more than 15 years between early warning and regulatory action. During this time little government funding was invested to resolve the initial concerns, and risks were in many cases actively downplayed by researchers and developers. The early warning itself was possible because of a structural analogy to a known and highly toxic material – asbestos. Although the hallmark of innovation in areas like nanotechnology and synthetic biology is the ability to destroy analogy, to create novel materials and organisms with no historical referents to guide prediction, there are nevertheless historical precedents and lessons that can provide valuable warning signals.Footnote 7

In a Red Queen world, warning moves along with the science; it does not come after the fact, especially after materials and products have already been introduced into commerce. One approach would be to establish in all oversight agencies – the Environmental Protection Agency, Food and Drug Administration, Department of Agriculture, and Consumer Product Safety Commission – an early warning officer (EWO) with associated support staff (3–4 full-time equivalents). The EWO would report directly to the head of the agency and provide monthly briefings focused not just on threats but also on opportunities to leverage emerging technologies to improve the agency’s mission. Early warning officers from multiple agencies could also meet regularly to exchange information and build a larger network encompassing state, local, and international members. This type of strategic reconnaissance is fairly common in the business and intelligence sectors, so those models could be readily adapted to oversight organizations.

3 Track the Known Unknowns

When Wired Magazine called the EPA, FDA, and US Patent Office to ask about regulatory approaches to the emerging area of synthetic biology, the agency staff had to ask what synthetic biology was (Keim 2007). As a new scientific field emerges, there is far more that we don’t know about possible risks, unintended consequences, and governance options than we know. As Robert Proctor, an historian of science at Stanford, once noted, “[It] is remarkable how little we know about ignorance” (Proctor and Schiebinger 2008). Ralph Gomory, the former president of the Sloan Foundation, once wrote a provocative essay on the unknown and unknowable, noting that “We are all taught what is known, but we rarely learn about what is not known, and we almost never learn about the unknowable. That bias can lead to misconceptions about the world around us” (Gomory 1995). One approach would be to develop an open-source tool that provides an evolving list of known unknowns for an emerging area of science and technology. As empirical evidence is gathered, issues could be modified, taken off the list, or new areas of inquiry added. For instance, one current unknown in synthetic biology is how best to assess the risks of novel organisms with few or no natural precedents. An evolving list of known unknowns (possibly maintained on a wiki) would also constitute a de facto risk research agenda that could be addressed by national and international funders. Finally, such a list may reduce the potential for surprises, giving policymakers the opportunity to consider various scenarios before they occur.
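As a minimal sketch of what such an open-source tracker might look like, the fragment below models a registry of known unknowns whose entries change status as evidence accumulates. All class, field, and status names are hypothetical illustrations, not a description of any existing tool.

```python
# Hypothetical sketch of a "known unknowns" registry of the kind proposed above.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class Status(Enum):
    OPEN = "open"                # no empirical evidence yet
    UNDER_STUDY = "under_study"  # funded research in progress
    RESOLVED = "resolved"        # enough evidence to retire the question

@dataclass
class KnownUnknown:
    question: str                 # a risk or governance question for an emerging field
    field_of_science: str
    status: Status = Status.OPEN
    evidence: list = field(default_factory=list)          # citations as they accrue
    last_updated: date = field(default_factory=date.today)

    def add_evidence(self, citation: str, resolved: bool = False) -> None:
        """Attach new empirical evidence and update the entry's status."""
        self.evidence.append(citation)
        self.status = Status.RESOLVED if resolved else Status.UNDER_STUDY
        self.last_updated = date.today()

# Example entry drawn from the text: an open question in synthetic biology.
registry = [
    KnownUnknown(
        question="How best to assess the risks of novel organisms with "
                 "few or no natural precedents?",
        field_of_science="synthetic biology",
    )
]

# The open items double as a de facto risk-research agenda for funders.
open_agenda = [ku.question for ku in registry if ku.status is not Status.RESOLVED]
```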

This exercise does not address the unknown unknowns or unknowables, but a continual focus on unknowns may force policymakers and researchers to discriminate more carefully between various classes of unknowns and to pay attention to building more flexible and adaptive organizations that can respond to surprises or events beyond the realm of normal expectations (so-called Black Swans) (Taleb 2007).

4 Focus on Bad Practices

It is common for those operating on the technological frontier to focus on best practices, often singling out particular companies and operations for awards. This is valuable but, paradoxically, one of the most important things to do when confronted with high degrees of technological uncertainty is to focus on the bad practices. Every day, vigilant and intelligent people recognize errors around them and can often come up with ingenious ways to correct problems. Taken one at a time, and if recognized early and addressed, these bad practices seldom lead to disaster. The challenge is to develop ways for this “error correcting knowledge” to be collected, managed effectively, and channeled into solutions. One model for this is the Aviation Safety Reporting System (ASRS), which collects and analyzes voluntarily submitted reports from pilots, air traffic controllers, and others involving safety risks and incidents.Footnote 8 Operated by NASA for the aviation industry, ASRS is described as confidential, voluntary, and non-punitive. The reports are used to remedy problems, better understand emerging safety issues, and generally educate people in the aviation industry about safety. A similar system in the UK, called CHIRP, is designed to promote greater safety in both the aviation and maritime industries and is run by a charitable trust.

One option is to create a Safety Reporting System for emerging areas of science and technology where concerned people working in laboratories, companies, or elsewhere can anonymously share safety issues and concerns. The purpose is not “finger pointing” but encouraging proactive learning before something goes really wrong. Information could be used to design educational materials, better structure technical assistance programs, and provide a heads-up on a host of emerging safety issues.
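The sketch below illustrates one way the intake for such a reporting system could be structured so that reports are usable while nothing about the reporter is recorded, loosely modeled on the confidential, voluntary, non-punitive character of ASRS. The field names and the whole design are hypothetical illustrations, not a description of any existing system.

```python
# Hypothetical sketch of an anonymous, non-punitive safety-report intake.
from dataclasses import dataclass
from datetime import date
import uuid

@dataclass(frozen=True)
class SafetyReport:
    report_id: str          # random identifier, not traceable to the reporter
    date_filed: date
    technology_area: str    # e.g. "nanomaterials", "synthetic biology"
    narrative: str          # free-text description of the near-miss or concern
    suggested_fix: str      # optional "error correcting knowledge" from the reporter

def file_report(technology_area: str, narrative: str, suggested_fix: str = "") -> SafetyReport:
    """Accept a report while deliberately recording nothing about the reporter."""
    return SafetyReport(
        report_id=uuid.uuid4().hex,   # random ID so reports can be referenced,
        date_filed=date.today(),      # but never linked back to an individual
        technology_area=technology_area,
        narrative=narrative,
        suggested_fix=suggested_fix,
    )

# Example: a lab worker flags a handling concern without fear of attribution.
report = file_report(
    technology_area="nanomaterials",
    narrative="Dry carbon nanotube powder handled outside a fume hood.",
    suggested_fix="Require enclosed transfer and update the standard operating procedure.",
)
```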

If these systems fail, there is a final backstop before some disaster hits – internal audits by inspectors general and, finally, whistleblowers.Footnote 9 Whistleblowers are the ones who watch the watchmen, often risking their careers to rise above their bureaucratic brethren. They are the antidote to groupthink, to the perceived invulnerability of the organization, to its rationalizations, and to its insulation from outside opinion (Sonnenfeld 2005). The price is high: one-half to two-thirds of all whistleblowers lose their jobs (Alford 2001). Despite recent efforts to shore up whistleblower protections (in the Consumer Product Safety Improvement Act and the Whistleblower Protection Enhancement Act), one group remains largely unprotected – government employees. Strong whistleblower protection, especially in our regulatory agencies, is absolutely necessary as scientific innovation moves rapidly forward.

5 Get the Right People to the Frontier

One way to provide oversight and governance of science is to have the scientists and engineers provide it themselves – an approach that has been put forth in the areas of nanotechnology and synthetic biology. Whatever historical precedents existed for this type of reflective self-governance are long gone. As Steven Shapin has pointed out in his recent exploration of the moral history of science, there are no real grounds today “to expect expertise in the natural order to translate to virtue in the moral order” (Shapin 2009). Recent survey work with university-based nanoscientists has indicated that researchers working on new technologies tend to view their work as not producing any “new” or substantial risks, while those scientists downstream of development often feel the exact opposite (Powell 2007). In addition, computer simulations of diverse problem solvers indicate that specialists often become trapped in suboptimal solutions to complex problems such as risk assessment (Hong and Page 2004).

Normally, people entering a frontier space are trained. Astronauts receive an average of two years of training, and brain surgeons undertake a six-year residency. This training promotes a professionalism that includes ethical components. But what about scientists and engineers operating on a technological frontier? A survey of over 250 accredited engineering programs in 1996–1997 found that only 1 in 5 offered students any significant exposure to ethics (Stephan 1999). Bill Wulf, who headed the National Academy of Engineering (NAE), said recently that “The complexity of newly engineered systems coupled with their potential impact on lives, the environment, etc., raise a set of ethical issues that engineers have not been thinking about,” and the NAE recently established a new Center for Engineering, Ethics, and Society to meet the challenges (Dean 2008).

As a backup for training approaches, one could also embed social scientists in the research enterprise, an approach some have called “lab-scale intervention,” designed to enhance direct interaction between different social and natural science disciplines during the research phase (Schuurbiers and Fisher 2009). This approach is undoubtedly better than having scientists operate with little or no feedback on the social and ethical impacts of their research. But one problem is that the same organization (such as the National Science Foundation) often funds both the researchers and the social scientists under the same grant, creating a co-dependency that has the potential to compromise the social oversight function. Adding a few bioethicists or nanoethicists to the scientific mix to watch for missteps still leaves open the question: “quis custodiet ipsos custodes?” (who guards the guardians themselves?) or, in its more modern version, “Who watches the watchmen?” (Moore and Gibbons 1987).

6 Develop and Implement a Learning Strategy

A recent article on technological innovation made the point that, “in an era of complex technologies, and that will surely be the dominant characteristic of the early part of the twenty-first century, public policy will need to facilitate learning and be ever more adaptable” (Rycroft 2006). The more experiments one can run, and the more hypotheses one can test, the faster the rate of learning. It sounds paradoxical, but in terms of learning and innovation, “Whoever makes the most mistakes wins” (Farson and Keyes 2002). Over the last few decades, the cost of experimentation has dropped dramatically in the private sector because of advances in computation and rapid prototyping, as well as an increasing willingness to test new organizational and leadership paradigms.

Unfortunately, we seldom crash-test public policies; instead, we wait for them to crash. When EPA launched a voluntary program to gather information on nanomaterials, a number of experts, drawing on years of research on voluntary agreements, warned that the program would be ineffective without stronger incentives for industry participation and the backup of mandatory measures. The EPA program took three years to implement, during which time a similar program, launched in the UK, failed to yield the needed information on emerging nanoscale materials. EPA pressed forward – slowly. Not surprisingly, critics at the end of this tedious experiment noted that, “With hundreds of nano products already on the shelves, EPA has squandered precious time while it slowly developed and pursued a program that informed stakeholders cautioned would not yield what was needed” (NanoWerk News 2009). EPA pursued an internally focused, serial-processing strategy, not a co-evolutionary, time-sensitive approach.

It is not clear that the agency had, or has, a coherent learning strategy, one that can reduce the probability of future errors by learning from past efforts (where applicable analogies hold), from parallel efforts by other credible actors, or from thinking smarter about the future (Garvin 2000). In this regard, it is important to remember that “experiments that result in failure are not failed experiments” (Thomke 2003). The organizational pathologies that undermine learning in organizations are well documented and include: (1) insulation from outside expert opinion, (2) fixation on single paths, (3) no contingency planning, (4) an illusion of invulnerability, (5) collective rationalization, (6) the denigration of outsiders, and (7) coercive pressure on dissenters (Sonnenfeld 2005). The prevailing maxims are also well researched and well known: learn from failure, refuse to simplify reality, commit to resilience and flexibility, don’t overplan (keep options open), and hire generalists (they’ll thrive longer in complex ecosystems) (Weick and Sutcliffe 2001). Given the large and looming retirement bulge in many US regulatory agencies, like EPA, we have an opportunity to restructure the workforce in ways that could address these learning issues.

7 Conclusion

In a recent McKinsey survey on the factors that contribute most to the accelerating pace of change in the global business environment, the top response was “innovation in products, services, and business models” (Becker and Freeman 2006). The actual rate of technological change came in sixth place. The point is that it is not just technology, but technology’s impact on organizational strategy and ways of doing business, that accelerates innovation rates (for instance, the impact of high-speed computing on the entertainment or automobile industries). Charles Fine took an overarching approach in defining what he called organizational “clockspeed” – an evolutionary lifecycle defined by the rate at which a business introduces new products, processes, and organizational structures (Fine 1998).

Ultimately, this means that “pacing” governance to technological change will require focusing on the entire operating environment rather than just the technological components. This larger environment includes organizational structure, leadership, and securing talent as a strategic asset.Footnote 10 Counting bits per minute or product introductions obscures the nature of the challenge that governments face. Viewed through a purely technological lens, the gap in innovation rates seems inevitable and insurmountable. Recognizing and addressing it as a “learning” problem provides some hope.

That does not mean that change in the public sector will be easy or fast. Organizations – in both the public and private sectors – often end up in what has been called a “competency trap,” applying outmoded skills to emerging challenges (Levitt and March 1988). By the time they catch up, competitive forces have created the next competency trap vis-à-vis a new set of actors and technological realities. In this situation, absolute speed becomes less critical than adaptive strategies because, as in evolution, competition and learning reinforce each other (Van Valen 1973). If we view biological and business evolution as complex adaptive systems, then the challenge for governments is to join the co-evolving system (Beinhocker 1999). That means turning a cognitive corner and seeing rapid technological change as a learning and co-evolution challenge rather than simply trying to run faster on the technological treadmill. In the end, disruptive innovation will require the application of disruptive intelligence in our public sector (McGregor 2005).