
Like all sciences, biology is not an abstraction separate from society, impervious to trends or fads. In recent years there has been a marked influx, and sometimes a rapid retreat, of new terms eliciting passionate debate in scientific journals and even, on occasion, in mainstream media. The past 20 years have successively stoked interest in genetic engineering, genomics, systems biology, integrative biology, and (nano)biotechnologies. “Synthetic biology”Footnote 1 (SB) is the newest iteration in the series (Benner and Sismour 2005). Though it is perhaps not yet clearly understood by the public, this UBO (Unidentified Biological Object), a field at the margins of biology, nevertheless raises a previously unheard-of combination of intriguing questions about both the fundamentals of biology and its applications or connections with society. A new contingent of researchers with diverse interests, some well beyond those of biology, has burst onto the scene in the life sciences. They are shifting the issues with methodologies and approaches that often differ significantly from the classical practices of biology, and with objectives that can seem staggering in their ambition and divergence. From one publication to the next, each researcher seems to lay claim to the label of “synthetic biology”, either to understand the fundamental mechanisms of life or to harness these mechanisms for productive tasks never before attempted. All of this is to say that, in the context of this collection of essays, it is more than legitimate to attempt a critical dissection of this new trend from the angle of its complex and often contrarian relationship to Darwinian dynamics, while also attempting to demonstrate that this analysis cannot, at its core, be separated from the study of its impact on society. We hope here to give a broad overview of what SB has to say about life, DNA, and society at large.

As is the case with any developing field of biological research, the basic lexicon of SB is a work in progress. I suggest that here we use the following definition: ‘Synthetic biology is the engineering of biology: the synthesis of complex, biologically based (or inspired) systems, which display functions that do not exist in nature. This engineering perspective may be applied at all levels of the hierarchy of biological structures—from individual molecules to whole cells, tissues and organisms. In essence, synthetic biology will enable the design of ‘biological systems’ in a rational and systematic way’ (NEST 2005). Although SB is neither “a new science nor a clearly defined research program yet” (Moya et al. 2009), some characteristics of the preceding definition are worth highlighting. The first is that SB is an action-oriented practice, strongly influenced by engineering; in fact, “bio-engineering” is often used as a synonym for SB. The definition also suggests an interaction between “nature” and artificial systems, with all the fruitful tension that these two terms imply separately as well as together. It is useful at this point to pause on the notion of “system”, which is intentionally rather vague here (Chopra and Kamma 2006), but which, when used in a more specific way, gives a clearer idea of SB’s thematic subdivisions. The “system” can, in effect, take on different scales. For some research teams, the system will be a group of genes inserted into a bacterium in order to make it accomplish a new function. This branch of SB is thus related to genetic engineering. For other groups, the system may be an entire genome (the complete ensemble of genes that “allow”Footnote 2 an organism to function). For a third category, the system might be an entire cell reconstituted from molecules more or less distant from those that comprise life, assembled to make the cell functional. What are the consequences of these gradations? In certain cases, the aim is to profoundly transform life as it already exists; in others, it is nothing less than a quest to recreate life from scratch. This is why SB is, to borrow Maureen O’Malley’s term, a very large “umbrella” covering very different approaches that nevertheless share a pronounced engineering dimension. O’Malley currently provides a convincing typology for explaining SB. The three types of systems described above correspond respectively to the three categories she has proposed: “the construction of DNA machines”, “cellular engineering on the genomic scale”, and “the creation of protocells”. These three branches are, of course, not absolutely distinct from each other, and it is useful to explore their relationships (O’Malley et al. 2008). Approaching them one at a time, however, will provide a basic understanding of their issues, or at the very least make the links between them more evident. After outlining this foundation, we will follow with a discussion of theoretical challenges leveled at SB and conclude with an overview of SB’s relationship to broader society. To begin with, though, a brief historical background will be helpful.

It is often stated that Eric Kool first uttered the term “synthetic biology” in its contemporary form in 2000 at the annual conference of the American Chemical Society, in the context of a paper he was presenting on DNA analogs and their potential therapeutic effects (Kool 2000). Biochemistry and medical applications were the metaphorical fairy godmothers that allowed SB to blossom. Nevertheless, a paradox arises here: SB, whose most enthusiastic proponents envision it as the key to biology’s bright future, is more often a matter of chemistry than of biology. Moreover, in a sign of the times, SB is linked in a quasi-constitutive manner to the promise of industrial applications. We will repeatedly see that these are not trivial observations.

But every story has its beginnings, and to fully grasp what is at stake with SB, we should look at two key early periods. The first runs from 1970 to 1980, when the term “synthetic biology” first appeared under the visionary pen of the Polish geneticist Waclaw Szybalski: “Up to now we are working on the descriptive phase of molecular biology. (…) But the real challenge will start when we enter the synthetic biology phase of research in our field. We will then devise new control elements and add these new modules to the existing genomes or build up wholly new genomes” (Szybalski 1974). Several years later, Barbara Hobom used the expression again to describe genetically modified bacteria (Hobom 1980). Though sporadic, these early references are nonetheless illustrative, revealing the fantasy of gaining control over living beings that the first genetic manipulations using recombination enzymes immediately raised.

Yet well before this period, at the turn of the twentieth century, we find another important chapter in SB’s (pre-)history. Jacques Loeb laid out a precocious argument for a rational research program based on the recreation of life in his work The Dynamics of Living Matter. In the introduction he states: “We must admit that nothing prevents the possibility that the artificial production of life will one day be achieved” (Loeb 1906). As Ute Deichmann has pointed out, the goal of the German-American researcher was to find the physical-chemical laws that would explain life, while vehemently opposing certain hypotheses of the day proposing that life stemmed from a particular essence that could not be reduced to matter as physicists describe it (Deichmann in Morange 2009). Loeb also criticized the physician Stéphane Leduc, author of the book La Biologie synthétique (1912), which, despite its visionary title, dealt with mineral or chemical forms that imitated biological forms, sometimes quite well, but which were definitely not living. In this early history of SB, when Mendel’s laws had just been rediscovered and with them the hopes of what would soon come to be called “genetics”, dreams of possibly creating life quickly followed. The history of SB is thus one of eternal return, the inevitable side effect of any advance in the understanding or mastery of life. Time will tell if SB’s most recent developments will prove lasting or yet another iteration of its earlier, ephemeral appearances.

1 The Three Schools of Synthetic Biology

1.1 Looking for the Protocell

Following the previously outlined typology, I will begin with perhaps the least-known of the categories that comprise SB, the one that aims to reconstitute living cells from basic components (Robertson et al. 2000; Luisi 2002; Forster and Church 2006, 2007). It would seem that this is the furthest from actual forms of life, and it is the most exploratory and audacious since it maintains a distance from issues of application and industrial possibilities. But the elegance of this so-called “bottom-up” branch (Simpson 2006) is precisely that, in attempting to forge another “life”, it often teaches us more about life than the categories that deal explicitly with the living world, displaying life’s fantastic diversity as well as its unity, from the smallest bacterium to the largest sequoia. Characterizing this branch is not, however, so simple, since one must be clear about what a basic “component” is. The more complex it is (for example a gene, or a group of genes), the smaller the gulf to bridge between inert matter and a living organism. But if the challenge is to start not with genes but with their precursors, nucleotides (which make up genes), or with even smaller molecules, the precursors to nucleotides, then the goal of obtaining a living cell in vitro becomes even more daunting. In sum, one must know the starting and ending points to estimate the scope of the challenge (Channon et al. 2008). Confusion on these premises certainly explains why the news media regularly claim that life has been “recreated” in vitro, referring to scientific publications that “only” describe how some steps, sometimes crucial ones, of this process are achieved. But to be perfectly clear: to date, no living organism has ever been created.

At this point comes the inevitable question: “what is life?”. As paradoxical as it may seem to non-specialists, there is no consensus among biologists as to its definition, despite it being the subject of their studies.Footnote 3 This is undoubtedly where many of the misunderstandings come from in discussions of the frontier between life and non-life. Biologists like to say, and with good reason, that they know about living organisms rather than “life”, and that this is sufficient. This pragmatic approach must not, however, be used to obscure the issue. A quite general definition can be proposed and discussed, like this classical one we will refer to: any system capable of replicating itself, having a metabolism and evolving is living. One point must be stressed here: since it relies on three characteristics, this definition opens the door to different emphases on each of them, and consequently to many debates. Some authors ascribe the utmost importance to replication, so that an entity that replicates itself and evolves but does not have a metabolism, such as a virus, will be considered by some as quasi “living”, which poses fewer problems for them than for those who insist that metabolism, the active maintenance of an interior environment far from thermodynamic equilibrium, is the most important element of the definition. The inverse situation can also arise, as in the 2008 publication of a study demonstrating that a virus could infect another virus (La Scola et al. 2008). The second virus, in other words, was infected, so it was sick. And if it was sick… then this is because it would have to be alive! The debate remains lively on the status of viruses (Moreira and López-García 2009). What is not up for debate, however, is the third characteristic of life: the ability to evolve (which does, however, pose a major epistemological problem, since one could object that a “capacity” might not be a “characteristic”… A single given organism does not evolve individually: its line does. This cardinal characteristic of the Darwinian paradigm could thus, paradoxically, not be that of an individual organism, but that of its lineage). In a sense, these three components of the definition are not equivalent: one could say that the first two criteria condition the third: without replication there is no evolution, and without metabolism there is no phenotypic basis on which natural selection may operate.

Assuming that this three-part definition is convincing, it becomes easier to understand SB research agendas: to find a truly self-contained molecular system that displays all three of these characteristics to some degree. One can also understand how much such research is in dialog with investigations into the origins of life (Maurel 2003). SB can do a lot to address this issue, which otherwise would remain an untestable yet unrefutable speculation, a collection of pre-biotic scenarios that have accumulated since Miller’s famous experiment in 1953,Footnote 4 all more or less intriguing and simple hypotheses, but among which it would remain impossible to decide (with one important caveat, which is the contribution of exobiology: the eventual discovery of life on other planets such as Mars would reveal the resemblances and differences between each type of life and would thus give a fertile comparative basis for the question of life’s origins and the unresolved question of whether or not the appearance of life forms is inevitable when certain conditions exist together. But we are not quite there yet…). When dealing with the origins of autoreplicative systems, working mainly on RNA has become the norm. These molecules are, among other functions, the intermediaries in our cells between DNA, which contains genes, and the proteins that determine cell function, thanks to the genetic code. Why focus on these intermediary molecules? The main reason is that it was demonstrated in the early 1980s that they could play a role that until then had been assigned exclusively to proteins: RNAs can have catalytic activity (in other words, they can act as enzymes). The discovery that some RNAs, named ribozymes, could have this function helped to solve a long-standing conundrum: when life first appeared, how could replicator molecules have worked without catalysts? And inversely, how could a catalyzing molecule have transmitted its function without a replication system? The discovery of ribozymes settled this chicken-and-egg issue via the hypothesis that primordial RNA could have played both roles. This has led to the popular hypothesis of an “RNA world” that would have preceded the living world we know today (Forterre 2005), in which the torch was later passed from RNA to DNA for replication (since DNA is more stable) and to proteins for enzymatic catalysis (since they are more efficient). Since RNAs are simple molecules to synthesize using commercial machines, it became possible to test these molecules in vitro for their catalytic abilities (ribozymes) or binding abilities (as protein antibodies do). These RNAs are referred to as aptamers when they are obtained in vitro and riboswitches when discovered later on in vivo. In a sort of study within a study, this research often uses in vitro techniques of “Darwinian” molecular evolution such as SELEX: one begins with a population of random RNA sequences and, via successive cycles of chance and selection, progressively enriches the pool with molecules that have the desired function, e.g. a strong affinity for a target molecule. Thus, in what is perhaps a far-off echo of what occurred at the birth of life, evolution is both the goal of the study and the technique used to achieve it.
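To make the logic of such chance/selection cycles concrete, here is a minimal sketch in Python of a SELEX-like enrichment loop. It is purely illustrative: the “affinity” function is a toy stand-in (matches against an invented target motif) for a real binding assay, and every parameter is an assumption chosen only to show the mechanism.

```python
import random

BASES = "ACGU"
TARGET = "GGAUCC"                      # hypothetical motif standing in for a binding target
POOL_SIZE, ROUNDS, KEEP = 1000, 10, 0.1
MUTATION_RATE = 0.01                   # copying errors during re-amplification

def affinity(seq):
    """Toy score: best local match of the sequence against the target motif."""
    return max(sum(a == b for a, b in zip(seq[i:], TARGET))
               for i in range(len(seq) - len(TARGET) + 1))

def mutate(seq):
    """Copy a sequence with occasional random point mutations."""
    return "".join(random.choice(BASES) if random.random() < MUTATION_RATE else b
                   for b in seq)

# Start from a fully random RNA pool, as in an actual SELEX experiment.
pool = ["".join(random.choice(BASES) for _ in range(40)) for _ in range(POOL_SIZE)]

for generation in range(ROUNDS):
    pool.sort(key=affinity, reverse=True)                 # "selection": best binders survive
    survivors = pool[:int(KEEP * POOL_SIZE)]
    pool = [mutate(random.choice(survivors))              # "amplification" with errors
            for _ in range(POOL_SIZE)]
    print(generation, affinity(max(pool, key=affinity)))
```

No sequence is designed at any point; the pool is blindly enriched round after round, which is exactly the Darwinian logic described above.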
These techniques can be applied to various goals, but regarding origins-of-life issues, they have helped to overcome a long-elusive challenge: designing a ribozyme capable of catalyzing its own synthesis exponentially. This was a tricky problem for various reasons, one of which is that RNA needs to be linear in order to be duplicated, and folded in three dimensions in order to act as a catalyst. This goal seems to have been recently achieved, using directed evolution and design, thanks to a modular association between two linear sub-units that leads to a three-dimensionally structured ribozyme (Lincoln and Joyce 2009). Have these researchers created a molecular protoform of life? Nothing is less certain, since at this stage it is a matter of replication more than of evolution (one mutation is enough to render the ribozyme non-functional), and metabolism is absent. Nevertheless, these exploratory studies are very stimulating, since they push the thinking further: e.g., how could genetic modules be added to such a molecular scaffold in order to trigger a form of proto-metabolism? Though RNA-based research, and SB research more generally, is riding high (Isaacs et al. 2006; Saito and Inoue 2007), there is also a symmetric situation: some groups of researchers explore, or imagine, protein-based self-replicating systems.Footnote 5 Such work on proteins should not be relegated, as is often the case, to exploring their structural or catalytic roles (Lee et al. 1996). And since we are dealing here with life’s boundaries, work on molecules that are not used by life is worth mentioning as well, such as modified nucleic acids (the molecular family to which DNA and RNA belong) carrying new natural or artificial bases (Benner 2004a, b).Footnote 6 New structures, such as PNAs (peptide nucleic acids), a sort of molecular hybrid between proteins and nucleic acids, multiply the possibilities and play with the stability or versatility of molecular associations. There are also working hypotheses of a radically different life, based not on carbon chemistry but on silicon or sulfur, and evolving in a solvent other than water, such as the methane found on Titan (Benner et al. 2004). Such tests and hypotheses fascinate specialists in the origins of life, leading to the notion of “other life” (sometimes described as “weird life” or a “shadow biosphere”). This field of research is based on the premise that life could have appeared on Earth at multiple periods based on a chemistry different from what currently exists, and that if such life still existed, presumably in microscopic forms, we might not detect it simply because we lack the appropriate tools (Cleland and Copley 2006; Davies and Lineweaver 2005; Davies et al. 2009). Assuming that such “life” exists, many questions arise: would it be totally independent from life as we know it? Could the two exchange all or part of their modules? Today such questions may appear specious at first, since we have never found the smallest trace of a life that is not phylogenetically connected to all other forms. And yet these questions are anything but baseless. To begin with, these inquiries seek to explain why life would have appeared and persisted only once, or to establish methodically how, for instance, it could ruthlessly eradicate any competing attempt at life that appeared at any given point in time.
Moreover, these questions are a formidable call to think about other life forms here and elsewhere, and to ask the inevitable: would these forms be entirely or only partially Darwinian? And in the latter case, at which point would we consider them living if we were to find them in some unlikely buried cave on Earth or even under Martian ice and rock?

A second, complementary and formidable question arises from the previous two: the crucial issue of compartmentalization. We have sometimes slipped into the habit of considering that elementary life is above all molecules that reproduce, setting aside the issue of the membrane that surrounds them. But there is not a single living organism without a plasma membrane, which is therefore as universal as nucleic acids or proteins. Furthermore, without this membrane, primordial molecules would have dissolved in the immense sea of solvent, and there would have been no way to concentrate molecules that confer a selective advantage on the entities that produce them. Compartmentalization is key to the move from a form of molecular competition to a competition between molecular pools, and to linking their fates. This is why the issue of the forms that primordial confinement could have taken is essential, including for protocell studies. As far as the origins of life are concerned, one proposal is that mineral forms of compartmentalization existed initially, a scenario favored by those who argue that life first developed its metabolic component inside stable mineral bubbles irrigated by flows of primordial nutrients (Russell and Martin 2004; Robinson 2005). Autonomy would then have come later, via cellular encapsulation. The case of the alga Bryopsis plumosa is of particular interest here. Its giant cells have multiple nuclei. When its cytoplasmic material is accidentally expelled through a membrane rupture, it still retains its integrity and the cell lives temporarily without a membrane (Kim et al. 2001)! Its organelles band together and secrete a gelatinous envelope within several minutes; a few hours later, a cell membrane is regenerated. Might such transitory mechanisms have existed at life’s origins (in a much more simplified form)? It is an open question that “synthetic” biologists are bound to ask in their quest for the protocell. Some of the most advanced work is coming out of the team led by Jack Szostak, which is moving toward an understanding of the differential permeability mechanisms that such a membrane must have (Mansy et al. 2008). Even so, a system claiming the label of “life” must not only possess replicator molecules, a rudimentary metabolism and a membrane: these different aspects must also be linked together, so that the membrane’s fate is not independent from that of the molecules it houses. The cellular metabolism, for example, consists precisely of regulating the growth and mechanical division of the membrane relative to the internal concentration of replicator molecules (Bartel and Unrau 1999). Only then could we claim to have actually generated a form of life, a fragile and new line of life, for the first time in 3.8 billion years (Szostak et al. 2001; Deamer 2005).

Before moving on from this discussion of future protocells and current efforts to create them, let’s make a last small detour at the interface between “other life” and “mineral compartmentalization” studies. Taken together, the two subjects evoke the brilliant theoretical proposition put forth by Carl Woese and his colleagues: the theory of life’s initial appearance in its current form through a process of “competition between innovation pools” (Vetsigian et al. 2006). They propose that life appeared “in several pieces”, in the form of “other” life(s) more or less foreign to one another. In certain niches, very efficient molecular systems for replication would have appeared; in others, very efficient systems for metabolizing molecules in the existing environment. These systems would have developed in initially closed-off compartments. Hypothesizing that transfers of genetic material could nevertheless occur between systems, the researchers envision life as a system that found an equilibrium between replication efficiency and metabolic efficiency, an equilibrium captured by a “genetic code” that took hold of this solution, became widespread, and marked the dawn of initial molecular creativity. Woese’s hypothesis also has the important merit of historicizing the appearance of life by including it in a temporal process and imagining its appearance within a plausible context rather than as a sort of timeless, unique “big bang” that is consequently more difficult to conceptualize. His idea can also help make the definition of life more precise. It embeds the ability to evolve within the two-part relationship of metabolism and replication, and thus suggests the following theoretical proposition: life is not so much a list of three characteristics as a relative sub-optimization of these three components, historically anchored in the context of the settling of the genetic code.

1.2 Cellular Engineering at the Genome Scale

It is in large part due to the work of Craig Venter that SB finds itself once again in the limelight. The famous American biologist has carved out a specialty in putting technological challenges to the scientific community, often with the help of the media, for better or worse. He is most notably one of the pioneers of the sequencing of the human genome, which he marketed as a race against time, endorsing the role of “private” research in the face of the international “public” consortium that had started the project. He has also pioneered the field of metagenomics, an extension of genomics that aims to sequence the entire DNA content of, for example, a drop of seawater, in order to better discover new genes and, potentially, new species. It is no surprise that the emerging field of SB, with its promises such as the recreation of life and all the fantasies they entail, quickly caught his attention. His approach differs from what was described in the previous section, and more closely adheres to research on “minimal genomes”, which can be summarized by the deceptively simple question: how many genes does an organism need to survive? For a long time the answer was a matter of pure speculation; large-scale sequencing programs have recently begun to suggest the beginnings of an answer. Since the 1990s and the “Human Genome” project, a large number of genomes of differing sizes have been sequenced: we now know the exact sequences of the millions and billions of base pairs that make up their genomic DNA. By 2009, one thousand organisms had been entirely sequenced. Among them, 80 % are prokaryotes,Footnote 7 single-celled organisms (without exception) that lack a nucleus and whose genomes are quantitatively smaller. When the first of these, such as Mycoplasma genitalium, had been sequenced, Venter made his first foray into SB. Although the goal of recreating a living cell remains the same today as it did then, the starting point was quite different. The idea was to analyze existing life, to look into genomes that are the result of 3.8 billion years of life’s history to see what solutions had been selected, and to try to determine from them the minimal functional ensemble. It was thus a “top-down” approach of reduction, the “inverse” of the “bottom-up” approach of trying to create protocells made up of several autocatalytic RNAs: here the aim is to come up with a “minimal” cell that functions with existing genes and their actual rules of use, such as the genetic code.

Venter’s team used the following methodology: it inactivated each of the genes of Mycoplasma genitalium one by one and then observed whether or not the mutated bacterium survived (Hutchison et al. 1999). He thus proposed a minimal set of genes, defined as the ensemble of those whose absence proved lethal, which numbered between 265 and 350 (out of a total of 480 genes). But Venter’s approach, though it yielded results, was quickly criticized. The main conceptual flaw was that it was likely to overestimate the minimum number of genes. We must look at the other methods of minimal genome analysis before going any further with Venter. Although other methods of experimental inactivation exist, comparative methods yield the clearest answers. In the 1990s, the first complete prokaryote sequences became available. It was a small leap from there to think that these “simple” organisms contained the genetic quintessence of what was sufficient and necessary for a living being to function. This line of thinking gave rise to the field of research known as the search for “minimal gene sets” (MGS). Its basic principle is that since all living beings come from a common ancestor, if one compares simple organisms whose ancestors diverged long ago, then the genes they share in common are likely to be the essential ones that evolution has still not eliminated today. In 1996, Eugene Koonin and his team attempted this and compared the genomes of two parasitic bacteria that had recently been sequenced, Mycoplasma genitalium and Haemophilus influenzae, proposing a more restricted MGS of 256 genes than the one offered by Venter (Mushegian and Koonin 1996). But beyond this raw figure, what was most instructive was an unanticipated methodological consideration: reasoning in a purely comparative manner did not yield sufficient results. Analyzing shared genes allowed certain major functions to be reconstituted, but some also remained incomplete. For example, one gene for glycolysis was missing, even though all the steps before and after it were represented in the MGS. A correction “by hand” was necessary, which reintroduced subjectivity into a method that ostensibly existed without any a priori conditions. This problem affected a small number of genes, but it was nevertheless significant, because nature had found, even for universally shared and preserved functions, solutions that diverged over time. Evidently, as genome sequences accumulated, the temptation grew to extend this comparative approach, predicting that the MGS would shrink toward a lower limit. Yet doing so created a paradoxical situation. If this MGS could be whittled down to 208 genes or fewer, would it still be relevant? As genes were removed from the list, subjectivity remained a guiding factor. The notion of the MGS itself was up for debate. There were ongoing redefinitions of the basic premise; what was briefly considered indispensable could prove not to be so several comparisons later. Mainly, though, what was most surprising was that no organism was found containing fewer genes than Mycoplasma genitalium, among the first to be sequenced and long the best known, with its 468 protein-coding genes, roughly twice the MGS. This “rudimentary” organism seemed to suggest in its minimal complexity that life could not be reduced to a precise set of elementary instructions (Heams 2007).
This conclusion echoes recent discoveries demonstrating the fundamentally exploratory rather than programmed nature of cells (Heams 2009; Kupiec, this volume). On the other hand, it also poses a question regarding the history of life. If such complexity is necessary, if these great numbers of genes are indispensable, by what fragile path could primordial life have risen to this level?Footnote 8 One particular discovery raised this very issue. In 2006 researchers discovered an organism with a genome vastly more limited than any previously sequenced genome. Candidatus Carsonella ruddii challenged these hypotheses with its genome of a mere 180 genes (Nakabachi et al. 2006), far fewer than the reigning MGS! However, specialists quickly gave a plausible explanation for this apparent contradiction. C. ruddii is an endosymbiont, an intracellular parasite, and this bacterium has thus undergone a secondary reduction of its genome because a large number of its basic functions are carried out by the host cell. Offloading occurs to such an extent that one can in fact consider that, by losing its autonomy, C. ruddii is actually becoming an intracellular organelle (Tamames et al. 2007), in the manner of mitochondria (cells’ energy production factories), which, according to the endosymbiotic theory, are believed to be the result of the internalization of an α-proteobacterium by a cell two billion years ago. C. ruddii is thus a fascinating case in the world of minimal genomes. It demonstrates the field’s complexity: instead of only locating the MGS’s lower limit (and the set of genes that could be assembled into a minimal cell), studies reveal a more continuous reality, where the transition between (autonomous) life and the margins of life (the parasites that are usually set aside, organelles, viruses) is quite gradual (Rasmussen et al. 2004). It is a world that is “close to” life, that depends on life, but that is also one of reciprocity, where life is allowed to exist. After all, if Mycoplasma genitalium only has 540 genes, the smallest nonpathogenic bacterium has more than 1,300: an awareness of such a progressive definition of life, fascinating as it may be, calls into question the relevance of looking for minimal gene sets.
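Returning to the comparative principle behind MGS research, its core operation is simply the intersection of gene repertoires across distantly related organisms. The sketch below illustrates that logic in Python; the gene labels and the third “genome” are invented placeholders, not real annotation data.

```python
# Toy illustration of the comparative MGS principle: keep only the orthologous
# genes shared by every genome compared. All gene labels below are invented
# placeholders, not actual annotations of these species.
genomes = {
    "M_genitalium":   {"dnaA", "gyrA", "rpoB", "pgk", "ftsZ", "tuf"},
    "H_influenzae":   {"dnaA", "gyrA", "rpoB", "pgk", "ftsZ", "recA", "tuf"},
    "Toy_prokaryote": {"dnaA", "gyrA", "rpoB", "ftsZ", "tuf"},
}

minimal_gene_set = set.intersection(*genomes.values())
print(sorted(minimal_gene_set))   # the candidate "essential" core

# The catch noted above: if two lineages perform the same step (e.g. one
# glycolysis reaction) with non-orthologous genes, that step silently drops out
# of the intersection, which is why curation "by hand" had to be reintroduced.
```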

With the Carsonella case settled, the issue was then to move on from a purely accounting view of the MGS in favor of studying its content, viewing it as a network whose topological analysis could provide a better understanding of what a minimal metabolism might be. Recently published works describing this theoretical network are based on an MGS of 208 genes (Gil et al. 2004), which seems plausible in that its connectivity follows a power law (many metabolites are weakly connected while, inversely, a small number are strongly connected and act as major nodes in the global network), making extrapolation from known natural genomes possible (Gabaldón et al. 2007). Moreover, this network is significantly robust in that it is resistant to random damage; that is, the organism’s resulting viability would not be immediately threatened at the first functional mutation. On average, in simulations that include stoichiometric relationships between gene products, around 20 of these mutational “attacks” would be necessary to cause a “collapse”. This work is extremely rewarding as a research methodology. But as its authors point out, the relationship between these theoretical, potential minimal organisms and their environment (and the latter’s complexity, which is no small matter) will be crucial if it is to lead to the creation of life in the lab, as well as to an understanding of symbiosis and parasitism. Parallel to experimental studies of systematic genomic reduction (Fehér et al. 2007) using “directed evolution”Footnote 9 techniques, some computer simulations yield complementary information (Banzhaf et al. 2006). One of these in particular (Pál et al. 2006) shows that simulating the progressive loss of genes in Escherichia coli leads to several possible “minimized” genomes, differing both in number and in composition. This underlines the important role of contingency in the structure of all current small genomes. These simulations also demonstrate that the MGS is over-represented in the results; it thus has a certain functional plausibility, all the more so given that E. coli is an autonomous bacterium, very different from parasites like M. genitalium from which the MGS was initially derived. Such research ultimately shows that it is possible to model the evolution of certain genomes with up to 80 % accuracy, when we know that a massive reduction in genes occurred, by adequately simulating the environmental conditions present at the time of the reduction.
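The robustness argument above lends itself to a simple simulation. The sketch below is a toy model, not the stoichiometric network of Gabaldón et al.: it builds a scale-free graph (standing in for a power-law-connected minimal metabolism), knocks out random nodes one by one, and counts how many random “attacks” the network tolerates before falling apart. It assumes the networkx library is available, and the collapse criterion is an arbitrary choice made for illustration.

```python
import random
import networkx as nx

def attacks_until_collapse(n_nodes=208, edges_per_node=2, seed=0):
    """Remove random nodes until the largest connected component drops below
    half the original size; return how many removals that took."""
    rng = random.Random(seed)
    g = nx.barabasi_albert_graph(n_nodes, edges_per_node, seed=seed)  # power-law degrees
    threshold = n_nodes / 2
    order = list(g.nodes())
    rng.shuffle(order)
    for removed, node in enumerate(order, start=1):
        g.remove_node(node)                          # one random "mutational attack"
        largest = max((len(c) for c in nx.connected_components(g)), default=0)
        if largest < threshold:
            return removed
    return len(order)

trials = [attacks_until_collapse(seed=s) for s in range(100)]
print(sum(trials) / len(trials))   # mean number of attacks tolerated before collapse
```

A purely topological toy like this ignores stoichiometry and essential single genes, so its numbers differ from those of the published model; it only shows why hub-dominated networks shrug off many random hits.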

Another approach to minimal genomes is the recent work of Antoine Danchin and his team. Following a different path, this group isolates a set of genes that tend to remain grouped together no matter which bacterium (several dozen species have already been tested) they are found in. These genes are thus conserved and topologically near to each other on the genome, and they comprise a fraction of what the team calls the “paleome”. Obtained by a less selective method than that of MGS research, Danchin’s paleome is a group of 500 genes, some of which are “essential” and some of which are not. The first group includes the MGS, but the second is of particular interest here: it does not contain genes that are, strictly speaking, essential (the cell can virtually do without them); rather, it contains genes involved in energy-dependent mechanisms that “make way” for essential functions and that prevent the breakdown of functional entities. The authors describe this fraction of the paleome as the genes without which the hypothetical minimal cell would inexorably age and have to be permanently re-synthesized; in other words, genes that fight aging. In this way, Danchin’s research provides a potential solution to the paradox mentioned earlier: the gap between the theoretical MGS and the actual simplest known genome, that of M. genitalium. In addition, if we view it as a network, the paleome is organized into three sub-groups as a function of the coherent connectivity of some elements: the least clear-cut is a group of genes linked to intermediary metabolism (nucleotides, coenzymes, lipids); then comes a second, better structured group that includes tRNA synthetases (translation enzymes); and finally a group that is closely connected around ribosome function. According to the researchers, these three groups would allow the history of primordial life to be retraced: life first organized around metabolism, of which the first group would be the vestige; then came the appearance and fixation of a genetic code (reflected in the second group), consolidated via the ribosome system contained in the third group (Danchin et al. 2007). The dialogue between the “origin of life” and “synthetic biology” is thus endlessly rich. But this research mainly helps to usher SB into an age of maturity in the search for the synthesis of a living cell, aiming at a minimal genome that is more sophisticated than a simple “shopping list”. Furthermore, it succeeds in taking an initial, albeit timid, step toward the topological aspect of the problem. Indeed, the order of genes on the bacterium’s chromosome (the distance of some in relation to others) is of utmost importance to the organism’s viability, over and above the mere list of genes.

It is at this point that Venter reenters the picture. Though he is mainly relevant to the preceding discussion of cellular engineering, Venter’s high-performing results in synthesizing entire genomes touch upon the issue of the MGS as well. His team first formed, as did others, around the synthesis of viruses (Cello et al. 2002; Tumpey et al. 2005; Smith et al. 2003) before turning its attention to synthesizing bacterial genomes, with a series of publications describing the synthesis and assembly of the entire M. genitalium genome within a host yeast (Lartigue et al. 2007; Gibson et al. 2008a, b). This demonstrates that it is possible to assemble large DNA fragments. But “redoing” M. genitalium is not a conceptual leap, since we know that M. genitalium already exists. Venter’s approach is, however, a technological innovation: for the first time in the history of life, reconstituted and apparently functional genomes have no direct parents, because a machine has synthesized them. The true test still lies in defining the sequence to be assembled: it must be sufficiently new, not a simple “cut and paste” of what life already offers, yet close enough to what we already know in order to be functional. Many challenges remain, such as insertion into a lipid envelope, establishing a correct level of protein expression (which is, as we will see again, an illusion when we consider the random dimension of gene expression), protein solubility, and interactions with the membrane, as the expert Pier Luigi Luisi points out in a prospective review of the numerous obstacles to overcome (Luisi et al. 2006). It is critical to remember here that the “minimal cell” is a different concept from the “minimal genome”. A cell cannot be reduced to its genome, no matter how important the latter is. Besides, some researchers imagine theoretical “lipid-peptide” systems without DNA that could be qualified as living (Ruiz-Mirazo and Mavelli 2007), or a primordial “Vesicle World” (Svetina 2007), an ironic allusion to the “RNA World”.

1.3 The Construction of “DNA Machines”

This final category of SB is perhaps the one least connected with the fundamental question of what life is, but it has the closest link with the actual bio-engineering of life. This category envisions organisms as agents that execute a program, echoing the fundamental notions of genetic engineering, which, for better or for worse, currently produces the genetically modified organisms that we will examine in more depth at the end of the chapter. This type of synthetic biology is based on a representation of the space of genetic interactions that is very similar to a logical electronic circuit, where one gene’s expression causes a subsequent expression, inhibits another, etc., with great precision, following a deterministic view of cell function. Such an analogy appeared quite early in the history of recent genetic engineering, notably in a seminal article by Roger Brent which, without exactly naming the then nascent discipline of SB, impressively described its basic outlines (Brent 2000). This branch of SB claims as its founding principle, often quoted as gospel in publications or conferences on the subject, the observation of the physicist Richard Feynman: “what I cannot create, I do not understand”. Applied to biology, it means that life must be deconstructed piece by piece if we are ever to truly understand how it functions. Yet read closely, the quote can also be interpreted as a step back from the classical aim of biology, namely the desire to understand what exists in nature, in favor of the desire to transform it and to create new functioning systems. It is not necessary to go into great detail again on the tenuous exploratory approaches already described in the other categories of SB. It is enough to point out that the single-celled organisms (bacteria, yeast) instrumental to this third category are not reconfigured from top to bottom; rather, some genes are added and eventually subtracted (Pósfai et al. 2006), so that a limited number of genes may produce a spectacular result. This is why this branch of SB suffers from a relatively ambiguous definition: whether or not a phenotypic effect deserves the label “spectacular” is largely subjective. Thus, for each result of this kind, some will say it is actual SB, while others will judge it to be classical genetic engineering. So much so that, based on certain criteria, some authors already see many achievements in SB, whereas others find them to be quite limited.

Drew Endy (2005), one of the cofounders of the BioBricksFootnote 10 along with Tom Knight and Christopher Voigt, pushed the development of this type of SB. Their main initiative is an accessible online registryFootnote 11 of functions and the genes that carry them out. The project follows the “programmist” view described earlier in this chapter (Knight 2005; Voigt 2006). Inherent in this concept is the idea that by “deconstructing” life, it will be possible to assemble these bricks into a hierarchy and integrate them into a bacterium or yeast in order to make it achieve a function “on demand”. The BioBricks founders’ desire to rationally design organisms or functions marks a radical departure from Darwinian functioning, where lineages acquire characteristics via chance and selection. The new strategy is to adapt the organism to a desired situation or function by the rational engineering of its genes. This raises several epistemological assumptions and implications that we will explore later in this chapter.

What does this type of synthetic biology achieve? Or to put it bluntly, does it actually “work”? Of course, some landmark papers have substantiated these approaches. In 2000, a synthetic cellular oscillator was unveiled in which three genes that inhibited one another in a cycle caused a fluorescent protein to flicker (Elowitz and Leibler 2000; Stricker et al. 2008) inside a bacterium that did not initially have this glowing property. In many respects, this result served as proof that a deep modification of cell function was possible by adding a specific number of adequate genes and promoters. Similarly, the publication immediately following Elowitz and Leibler’s in the same issue of Nature describes a construction that turns the host bacterium into a switch that can be flipped “on” or “off” (Gardner et al. 2000). Such results fall in line with other engineering work on bacteria and yeast (cf. Chang and Keasling 2006): for instance, obtaining bacteria that produce an “indigo” tint via the expression of a naphthalene dioxygenase enzyme, or the production of propanediol (a compound with many uses in the chemical industry). These are promising results for the chemical industry, although the quantities that can currently be obtained by such engineering are infinitesimal. It is one thing to announce the production of an exogenous molecule in a bacterium after years of patient work on its genome, but it is quite another to produce this molecule en masse. Indeed, in many cases these molecules would obviously not be well tolerated by the cellular system, and hijacking all of the cell’s energy for such a “task” remains a technical challenge and perhaps even a biological illusion.
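To give a concrete sense of the “genetic circuit” logic behind such oscillators, here is a minimal sketch of a repressilator-style model: three repressors, each inhibiting the next in a cycle. It is a simplified, protein-only set of equations with purely illustrative parameters, not the exact mRNA-and-protein model of Elowitz and Leibler (2000).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative constants: maximal synthesis, leaky synthesis, Hill coefficient, decay.
alpha, alpha0, n, beta = 100.0, 0.5, 3.0, 1.0

def repressilator(t, p):
    p1, p2, p3 = p
    return [alpha / (1 + p3**n) + alpha0 - beta * p1,   # gene 1 repressed by protein 3
            alpha / (1 + p1**n) + alpha0 - beta * p2,   # gene 2 repressed by protein 1
            alpha / (1 + p2**n) + alpha0 - beta * p3]   # gene 3 repressed by protein 2

sol = solve_ivp(repressilator, (0, 100), [1.0, 1.5, 2.0], dense_output=True)
t = np.linspace(0, 100, 500)
p1 = sol.sol(t)[0]        # oscillations of the first repressor's level:
print(p1.round(1))        # the signal a fluorescent reporter would "flicker" with
```

The point of the sketch is the wiring diagram itself: a closed loop of mutual inhibitions, with sufficient cooperativity, produces oscillations that no single gene could produce on its own.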

This is not, however, the case with “the” great achievement to date in SB, which belongs to Jay Keasling and his team (Ro et al. 2006). It describes a microbial construction that produces artemisinic acid, a precursor to a medicine used mainly in the treatment of malaria. This illness, which affects hundreds of millions of people and kills more than a million each year, is a major global threat; there is no available vaccine, though testing is underway. One treatment known to be effective is artemisinin, obtained from the plant Artemisia annua. Agricultural projects have existed for several years to produce pharmaceutical artemisinin, since the purely chemical synthesis of this complex molecule proved to be a technological challenge whose economic viability was not clear. The idea of using living systems to engineer such a synthesis was tempting, and it is this drug, or rather its immediate precursor, that Keasling obtained using SB methods. Deconstructing the metabolic chain of reactions that leads to its synthesis, his team inserted all the corresponding genes into a yeast and succeeded in obtaining a large quantity of the desired product. In addition, the end result was easy to extract since it was secreted by the yeast. According to the researchers, this method provides an economically viable source for an anti-malarial treatment, one that is “ecologically responsible” and not subject to the whims of “climate or politics”. Keasling was quick to align himself with Amyris, the company supported by the Bill & Melinda Gates Foundation, then linked up with Sanofi Aventis to finalize the industrialization of his discovery (Rodemeyer 2009). Is this the dawn of a new era, or is artemisinin the tree that hides the forest? In reality, very few concrete achievements besides Keasling’s are currently available. Among other projects are attempts to produce “biofuel” (e.g. Gunawardena et al. 2008) in the global context of dwindling supplies of fossil fuels (projects that Amyris is also involved in, as is Synthetic Genomics, Craig Venter’s company); likewise, biological systems that could filter out CO2, produce hydrogen, or produce terpenoids other than artemisinin on a large scale continue to capture researchers’ imaginations. Researchers also imagine the production of biofilms and the synthesis of “biosensor” bacteria that would detect and signal pollution to help reduce it. In a world where scientific announcements and biotech companies’ opportunistic press releases increasingly overlap, it is sometimes difficult to have a clear perspective on which research is coming from which group’s projects. The main conclusion, however, is this: if life can be produced in small batches of promising functions that can be transplanted from one organism to another, it is tempting to start a business around each function that may one day be carried out by a biosynthetic bacterium. The future will quickly tell us whether this rather simplistic approach will lead to a boom in discoveries or to a general hangover in the biotech sector.

This branch of SB can, however, still be part of a rich debate over research fundamentals. Efforts have been made to help SB’s approach mature by introducing an “ecological” component into this type of research. All of Earth’s species (with a few surprising exceptions, cf. Chivian et al. 2008) live in interaction with others, according to varied modalities ranging from parasitism to symbiosis and predator-prey relationships. Where there is life, there is exchange (which makes the definition of the life of “an” isolated organism a bit tenuous). And if we reflect on it a little more, the biosynthetic bacteria described in this section are considered pure systems of production without any interactions among each other, which marks a significant break with the natural, Darwinian world from which they come. This somewhat artificial situation is perhaps at a turning point, since several groups of researchers are now aiming for a concept of “microbial consortia” instead of one exceptional bacterium (Brenner et al. 2008; Purnick and Weiss 2009). These consortia include several species that contribute sub-tasks to the desired function. Despite the many difficulties inherent in this concept (how to manage each species’ proportions? how to make species depend on each other? how to avoid horizontal gene transfers? etc.), it is interesting to see that researchers who point out these conceptual drawbacks do not flatly dismiss the concept, even as they denounce the fantasy of a “super bacterium” that could do everything. The fact that these engineers refer to ecological and evolutionary dynamics and modeling in the hopes of greater precision illustrates just how difficult it is to make life function using laws that do not apply to it.

There is one field where this “engineering” approach does legitimately merit enthusiasm. An offshoot of the BioBricks initiative, the iGEM contest is a competition among teams of students from all over the world. The goal is to evaluate projects that rely on the judicious use of these basic elements to come up with bacteria capable of all sorts of functions, ranging from the less serious to the outright baroque, and to provide either an effective demonstration of these functions or at least a proof of principle using a bibliography, simulations, or preliminary experimental results. Since 2007, the contest has been particularly popular in France due to the dynamism of the Parisian team, which proposed a proof of concept of a “multicellular” bacterium that compartmentalized tasks between “somatic” cells and “germinal” cells. The former would carry out the more “dangerous” functions, like the production of toxic compounds, without jeopardizing the cell line (Bikard et al. 2008). This work involved the students’ rigorous reflection on what compartmentalization is; their results were prospective and careful and provided the pretext for a deeper understanding of certain fundamental characteristics of life. The deconstruction/reconstruction approach taken by BioBricks, with all the reservations about its apparent simplicity, is nothing less than an innovative pedagogical tool in the context of iGEM; the approach is even useful when its own limits are being explored. Beyond the iGEM, it remains to be seen whether flickering bacteria that “take photos” (Levskaya et al. 2005) or draw rainbows will in fact be biology’s next frontier.Footnote 12

2 Some Theoretical Challenges of Synthetic Biology

An emerging “discipline” will, of course, not immediately overcome all its theoretical ambiguities. But since the discipline in question here has rapidly become the focus of fascination, with the capacity to attract human, technical, and financial capital, and, moreover, since it brings together research from the most fundamental to the most applied, often in rather tenuous ways that could ultimately come back to serve as cautionary tales, it is fair to give at least a partial overview of these ambiguities.

There are two main issues that give rise to a range of theoretical weaknesses: the relationship that SB attempts to create with the theory of evolutionFootnote 13 and the relationship that it seeks with life’s complexity, especially in recent demonstrations.

2.1 Synthetic Biology and Evolution

Where the theory of evolution is concerned, it is often stunning to hear about “synthetic” biologists’ projects. Evolutionary dynamics are erratic, random, and subject to contingency,Footnote 14 and according to them, organisms are not optimally adapted. Yet SB would be the opportunity, thanks to our state-of-the-art knowledge, to skip over evolution’s trial-and-error phases and obtain modified organisms via the precise implementation of the modules an organism lacks in order to create new functions. SB would save a considerable amount of time and yield technical advantages in the quest to domesticate life, by logically rewriting viral sequences (Chan et al. 2005) or by “training” bacteria to fight cancer (Anderson et al. 2006). This vision, however, is something that SB shares with the “classical” genetic engineering of GMOs, though the parallel between SB and genetic engineering does have its limits. Despite massive efforts, the actual diversity of GMOs (their technical principle relying almost always on the insertion of a single gene) is quite limited, and without delving too deeply into the polemics surrounding GMOs today, they are the subject of what is at the very least a skeptical evaluation of their utility and of the functions for which they have been modified (Gurian-Sherman 2009), since any addition of a gene to an organism is a fundamentally disruptive action. Genes interact with one another, often so subtly that we can only imperfectly measure these interactions. Indeed, the expected effect of a gene in a genome may be counterbalanced by a thousand small effects that, added up, neutralize the goal and actually jeopardize the GMO’s viability. The genomes of the species living on Earth are the result of a long history that progressively eliminated this type of threatening disruption. This is obviously not to say that nature is “perfect”: evolution’s paths are far from any notion of optimality. They correspond to a chain of DNA-based solutions, over time, to a succession of environmental constraints that are themselves constantly shifting. Lineages that have overcome these obstacles, and whose offspring make up the current biosphere, are those that have consolidated these solutions without invalidating earlier ones, as a result of a sustained equilibrium between robustness and evolvability.Footnote 15 It is this balance that one must keep in mind when attempting to modify a genome by adding in more genes (Koide et al. 2009). This could be a major explanation for the low number of current effective results and a major limit to the future of SB, which will mature if it integrates this parameter into its research agenda. As Michel Morange points out, this situation echoes the fascination with “drug design” in the 1980s (in Morange 2009). At that time, it became possible to know the three-dimensional structure of a given molecule, and researchers hoped to devise a complementary form (to make an antibody out of it, for example) using the power of computers that could integrate the complex rules of macromolecular folding. Today, the most effective techniques for obtaining such molecules are those of directed evolution, where a large variety of potential molecules is blindly produced in vivo or in vitro and then progressively selected for affinity with the target.Footnote 16 It is thus a form of molecular Darwinism that turned the tables on engineers’ “drug design”, or rather provided the tools to complete it (Jäckel et al. 2008).
These techniques of experimental evolution also help establish the idea that Darwinian engineering is possible;Footnote 17 it is therefore not surprising that SB tends to rediscover the virtues of this type of approach when it reaches dead ends, finding help in good old “corny and dusty” blind evolution. Losing not its enthusiasm, but a bit of its cocky adolescence, would not be the worst thing for SB.

2.2 Synthetic Biology and Complexity

The second ambiguity in SB’s theoretical foundation is its shaky relationship with the notion of complexity. We do not have time here to go into an exhaustive exploration of the notion of “complexity” in biology, which is sometimes used rather sloppily. Yet the vast majority of authors will agree that reducing a living organism to its genome, envisioned as a printed circuit, is incredibly simplistic. Indeed, sticking to such a reduction and metaphor would mark a serious regression to the postwar period, when molecular biology borrowed concepts from the nascent field of computer science (cf. Segal 2003: chap. 7) to describe life as a deterministic form, at the heart of which living beings were the result of a “genetic program”. This first approximation of organisms’ function, useful as it may be for teaching the fundamental principles of genes’ molecular mode of action, does not account for the multiple interactions with the environment that any gene or organism has. The predictability of any genetic program constantly runs up against the increasing complexity of constraints varying in time and space, which make the idea of a program (a word whose etymology means “written ahead of time”) much more an exception than a rule. How would SB’s proponents, who see living cells as little tunable machines, reply? Unsurprisingly, they do not support the notion that biological complexity is irreducible. As Bernadette Bensaude-Vincent (in Morange 2009) points out, SB supporters see the deconstruction of this complexity as an “opportunistic antidote” to break with the “chronic vitalism” that may be hidden behind the discussion of complexity.Footnote 18 This is precisely the ambition of Yuri Lazebnik’s iconoclastic article, “Can a Biologist Fix a Radio?” (Lazebnik 2002), in which he defends the idea that with time and method, one can overcome the obstacles complexity causes, and ultimately repair a cell just as an engineer would repair a transistor radio.Footnote 19 Such statements deserve several criticisms. To begin with, the appeal to “rationality” can be countered by asking just how relevant it is to deconstruct a genome into base elements, knowing that these elements have never existed individually, in a catalogue, independently of one another. As appealing as the “modular” view of life is, one must never forget that this is but one way of understanding the living world. All studies on modularity nuance the relevance of this very notion, which is considered more or less “dependent on the (cellular or environmental) context”. “The” modularity upon which the notion of the “living world as a catalog” relies does not really exist: there is only a continuum between sub-groups of genes that almost never interact with the rest of the genome and other genes that are highly connected. This has a major impact on how to “pilot” life via the addition of one of these modules, and injects, at the very least, a bit of modesty into the goals. One responsible way out of this vitalism, or at least out of this “hazy” notion of complexity, relies less on the capacity to cut genomes into slices than on the capacity to invent new types of explanations that would precisely not rely on life seen as a pure deconstruction of systems into genes. Biologists who study complexity cannot yet, perhaps, offer universal methods of understanding or representations along these new lines, but there are signs of change. The recent connection between SB and systems biology (Cuccato et al. 2009; Purnick and Weiss 2009)Footnote 20 is particularly encouraging.

Another major theoretical obstacle is the intrinsically random dimension of cellular function. Unlike printed circuits, cells with the same genome (typically, that of an organism or a clonal bacterial population) are not identical. They have the same genes, but not really the same quantity of each of the proteins those genes produce (often present in low numbers, with significant sampling effects); and these proteins do not move along fixed trajectories but diffuse randomly through a congested intracellular environment, and can thus reach their targets at varying speeds from one cell to the next. In eukaryotes, the relative position of chromosomes and genes inside the nucleus has an impact on their level of expression, and this varies unpredictably from one cell to the next (Heams 2009). All of these recent observations and their impact on cellular processes make up what is now rather humbly referred to as the “cellular context”, a concept that has upset quite a few previously held certainties and takes us still further from the view that cells are predictable “computers”. Nevertheless, bioengineering still has its merits. One of the leading research teams in SB, Michael Elowitz’s lab, is also one of the most dynamic when it comes to tackling questions of stochasticity in gene expression, having reopened the issue in 2002 (Elowitz et al. 2002). His work illustrates how the apparent contradiction between such observations and SB can give way to a fruitful dialog and lead the way to a deeper investigation of the validity of these random dynamics. Ultimately, this revised perspective would help avoid later disillusionment in research programs that neglect the basic flexibility of cellular systems. SB will also have to move beyond the restrictive notion of the catalog and integrate the idea of a gene hierarchy. This concept of hierarchy does require some refinement, but it indicates that all genes and groups of genes do not have the same status and that, evolutionarily, certain ones are linked to differences among species while others are linked to differences among genera (Erwin and Davidson 2009). Following this line of reasoning, certain genes are pure effectors, while others (homeogenesFootnote 21 for example) can regulate many others. For now, we can only guess what impact occasional disturbances will have on these genes’ targets, as SB has only just begun to look at the issue. And finally, SB will also have to deal with the functional impact of DNA topology (the three-dimensional structure of chromosomes, the order of genes, and the number of copies of each). This is critical if one wants to rationalize the eventual insertion of innovative genetic “modules” into bacterial genomes. This dimension is notably missing from the BioBricks initiative, for instance, but it could be a promising path to improvement.
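
To make the “sampling effects” mentioned above a little more concrete, the following toy simulation may help. It is a minimal sketch with hypothetical parameters, drawn neither from Elowitz’s experiments nor from any cited study: each simulated “cell” of a clonal population carries the same gene and the same average rates, yet because transcription occurs in a random number of bursts and each burst yields a random number of proteins, genetically identical cells end up with markedly different protein counts.

```python
# Minimal illustrative sketch of cell-to-cell variability in gene expression.
# All parameter values (mean_bursts, mean_burst_size) are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=0)

def protein_count(mean_bursts=4.0, mean_burst_size=10.0):
    """Protein count of one cell: a random number of bursts, each of random size."""
    n_bursts = rng.poisson(mean_bursts)                # how many transcriptional bursts occurred
    burst_sizes = rng.geometric(1.0 / mean_burst_size, # geometric burst sizes with the given mean
                                size=n_bursts)
    return int(burst_sizes.sum())

# A small clonal "population": same gene, same rates, very different counts.
population = [protein_count() for _ in range(10)]
print(population)
print("mean:", np.mean(population), "std:", np.std(population))
```

The spread of the printed counts relative to their mean is the sampling effect in question: when molecules are few, identical instructions do not yield identical cells.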

These new directions in research are what it will take for SB to emerge from its turbulent adolescence, anchored as it is in the promise of spectacular results and somewhat neglectful of certain increasingly evident biological realities. In addition, it is not mandatory to take Feynman’s above-mentioned mantra for granted, as fruitful as it can be. Building can indeed be useful, but if it were the only mode of accessing knowledge, we would certainly have a hard time understanding historyFootnote 22 or the cosmos (O’Malley et al. 2008). Nevertheless, it is rather intriguing to see an entire community of scientists dream of themselves as “builders” when what they are actually proposing at the moment is a program of deconstruction… Feynman’s maxim does not tell us whether SB’s goal is to understand life or to create its own objects, although the two are not mutually exclusive. The historical example of synthetic chemistry in the nineteenth century, which had an applied goal but whose advances led to an understanding of the fundamental mechanisms of organic chemistry (Yeh and Lim 2007), is perhaps partly analogous to the relationship between biology and SB. But if SB orients itself excessively toward the “creation” of docile, profitable life forms, a restrained collection of bacterial “employees of the month” that are “tamed” and predictably capable of performing skills on command, the field will remain a million miles from life, which is intrinsically rebellious and wild, and whose variety and adaptability in a myriad of forms is a completely different matter. This is an open question that will depend on scientific, social, economic and human forces as it seeks an answer.

3 Synthetic Biology and Society

Several examples of links between SB and social issues have already been underlined in this article, especially in new work regarding “DNA machines” (to which I will be referring exclusively until the end of this chapter). SB is, alternately, a pedagogical object, a regular media darling, a constant fantasy of return on biological investments, a promising solution to current problems in areas such as the environment and health (cf. Khosla and Keasling 2003), and an institutional trend (cf. NEST report 2005); it is impossible to fully understand the fascination with SB if we leave out this dimension, which, far from the lab though it may be, is inseparable from the interest the field arouses. In one sense, SB is “of its time”. It engages with society and highlights some of its modern characteristics, which implies that SB is also a trend, even if it is far more than that. We should, however, keep this trendiness in mind when we look at this “discipline’s” ramifications within society.

As stated earlier in reference to the iGEM competition and BioBricks, SB is also a new way of conceiving of biology, one that relies on the collaborative nature of the Internet and the open access it provides to many data sources. Its lexicon reads like a sort of “wikibiology” that will bring students, researchers, and an entire community of non-biologists to converge on SB and make it more dynamic by drawing on the fields of engineering and computer science. Yet in its appeal to Web 2.0, with all the innovation and, in some ways, conformity that this implies, SB has only imperfectly anticipated its blind spots. For example, the issue of intellectual property in the iGEM competition is not always easy to understand, and it seems, at the very least, clouded by a troubling vagueness. Such equivocation may even lead to setbacks in scientific production, since this competition, which for many media outlets is the “heart” of SB, does not reward discoveries that have been definitively validated in peer-reviewed journals; rather, it rewards intellectual elaborations that seek credibility through a degree of modeling and the feasibility of future cellular constructions. While it would be unfair to overlook the talent and energy these students channel into such intense work (the competition is annual), accepting the “proofs of principle” they provide during the competition as sound scientific results would be a mistake. It seems that at this stage, additional safeguards (which might seem at odds with the appealingly freewheeling nature of the contest) are necessary to protect the students themselves from third-party theft of their intellectual property. In addition, the growing success of the iGEM competition tends to give BioBricks a de facto monopolistic status as an “index of life”, a development that is not automatically a cause for celebration.

The issue also remains of how to reconcile this playful, competitive, open-source version of SB with the other movements in the background that are trying to privatize and profit from its results. One of the reasons behind the enthusiasm for “modular” descriptions in biology is that if life can be reduced to building blocks or bricks, then each block can be the basis for a business. This explains the current flourishing market of start-ups raising money in the hopes of developing a synthetic bacterium that can respond to some need; it is a development that calls to mind the popularity of Internet start-ups in the late 1990s, the great majority of which died looking for markets that simply did not exist. If that bubble keeps growing, the warnings about life’s complexity and the illusion of its modularity will no doubt have a difficult time being heard in the years to come. But the scientific community has a responsibility not to allow financial interests to impose their storytelling on this issue. Economic forecasts simulating the future of SB describe different possible outcomes depending on whether open or proprietary formats are chosen, and depict several types of interaction between start-ups: coexistence, symbiosis, or predation (Henkel and Maurer 2007). This is a direct result of the “brick by brick” view of life; and yet, such reasoning can be turned on its head. Can certain biological realities help point out the basic conceptual fragility of such models of competition? Prudent investors would then be wise to pause before lending their capital in the heady hopes of creating DNA machines, if they have not done so already. The realities of investing in SB have already been made clear in a reference article on “the economy of synthetic biology” (Henkel and Maurer 2007), which reveals that in the case of artemisinin, 95 % of the time has been spent “trying to find and fix unintended interactions between parts”, a detail that biologists themselves sometimes conveniently forget to mention. A lot of money has already been spent, and we are still very far from the creation of simple recipes for life.

Awareness of SB’s shortcomings as a business model is all the more urgent given the damage it could cause to communities. In the case of artemisinin, the anti-malarial agent described earlier, the only valid achievement would be SB at an industrial scale. Despite researchers’ “eco-responsible” promises in the course of their quest, it is not so simple. It must be clear by now that an artemisinin “miracle solution” is quite a stretch. Though it may stand to make billions for industrialists, synthetic artemisinin is also likely (and perhaps already beginning) to disrupt many agrarian communities in Asia and Africa who make their living growing Artemisia annua at a certain price (ETC Group 2007). If the pharmaceutical industry concentrates artemisinin production, a whole host of people will lose their livelihoods. Thus the disruptive action discussed earlier in the cellular context can be, to a certain extent, transposed to the social scale. To be perfectly clear: any promise of a single-handed solution to a problem as serious as a worldwide disease, and all the more so one where small farmers are involved in the supply chain, is evidence of alarming social irresponsibility. This is not to say that all scientific progress should be halted, but only that researchers must be accountable for the human implications of their work as they strive toward the greater good. Past examples in history substantiate this claim. For example, we have already mentioned that bacteria have been modified to produce indigo. One may recall that this dye was first produced chemically in nineteenth-century industrial Germany. At the time, business owners amassed great wealth as a result of this “advance”, while simultaneously dismantling traditional indigo production in their very own colonies (Yeh and Lim 2007). Is this pattern destined to repeat itself whenever a discovery is labeled “decisive technical progress”? Rather, it is time to reflect upon the way SB innovations can affect workers’ lives, and not only those of patients or consumers. If some biologists insist on entering the marvelous world of finance, then they could at least look beyond its cynicism when it comes to the human consequences of economic decisions.

SB biologists will eventually have to deal with a new contingent of NGOs focused on technological innovations. Deeply plugged into the Internet-based culture of transparency and immediacy, these new NGOs are remarkably well informed. The Homeric battle underway against GMOs, for instance, is led by individuals who unite to collectively claim the right to reflect on the social implications of current research. It would be prudent for the scientific community to open up a dialogue and move beyond mistrust. A frank and ongoing conversation in the hope of sharing expertise must take place. This is not to say that the two sides must always be in agreement, but a dialogue between them is crucial for two reasons. The first is to avoid repeating the mistakes that led to the heated debate over GMOs; in many ways, SB products are GMOs version 2.0, even if researchers do not dare say so. Yet by facing this reality, proponents of SB could avoid past mistakes. They could anticipate the public’s initial fears when communicating their intentions. They could, for example, appeal to rational discourse and explain that the modified organisms SB produces are not openly cultivated: they are bacteria or yeasts that remain in fermenters, just as many “genetically modified” bacteria, such as those that produce insulin, have for years without problems. SB researchers must also openly address the important issue of bio-security and the risk of their products’ dissemination and use as biological weapons. Again, rational responses to these concerns exist: these “super-organisms” would be quickly destroyed in the wild because they are so fragile outside the fermenters that provide optimal conditions for their growth. And organisms modified to incorporate bases or amino acids that do not occur naturally would of course have no way of surviving outside the lab (in fact, this creates a sort of built-in safeguard). These are only partial responses to what are truly legitimate concerns. When it comes to issues of patenting these discoveries and their social consequences, the debate between science and society will certainly be more complex; however, there is nothing to be gained by avoiding these inevitable concerns at present (Rai and Boyle 2007). Ignoring them will only push more individuals toward “bio-hacking” or “garage biology”: attempts to individually appropriate the power of current biotechnology as it becomes accessible. Bio-hacking does bring with it potentially credible threats of modifications of life, even if these remain rather far-fetched for now and more theoretical than concrete. The social sciences will play an important role in analyzing these potentially harmful extremes and in encouraging the best of these “non-specialists” to share their scientific knowledge. One of the most positive aspects of SB is that it has, from the beginning, welcomed a variety of sociologists and philosophers of science who can be observers or even, to use the terms of a recent classification of how SB’s social impacts and the social demands placed on researchers are studied, collaborators who contribute to the very definition of SB’s research goals (Calvert and Martin 2009). It is an invaluable perspective that allows diverse experts, rather than only biologists, to contribute to the definition of the field itself while also helping to clarify what is at stake (O’Malley et al. 2008).
Collaboration also provides the means to reflect on the need for new tools in the bioethical debate surrounding SB, as well as on the need for new readings of old issues in order to better compare past and present (Parens et al. 2008). Such perspectives are vital to the internal scientific debate as well as to synthetic biology’s public reception. It seems that, at least in this respect, SB is open to the virtues of cooperation, a notion that is itself profoundly Darwinian.