1 Introduction: Biodegradation Versus Bioremediation

The growing industrialization and urbanization of our societies over the last century has left us a legacy of emissions that, whether involving natural or synthetic molecules, have had an impact on virtually every ecosystem on Earth. Although anthropogenic pollutants have always co-existed with us (e.g., heavy metals), the development of transportation and large-scale manufacturing has not only released large quantities of xenobiotic compounds into the biosphere but has also mobilized carbon species that were otherwise trapped geologically in fossil fuels or forming part of natural compounds (Alexander 1999). Scientific studies on the microbial biodegradation of potentially complex organic molecules can be traced to the work initiated in the 1950s by I.C. Gunsalus on transformations of aromatic compounds (Gunsalus et al. 1953) and terpenoids (Conrad et al. 1965) by environmental bacteria. His pioneering research spread recognition of how microorganisms isolated from nature could carry out remarkable reactions that had until then been thought possible only through organochemical methods. While the industrial applicability of transformations of this sort became clear very early, realization of the immense potential of the same microorganisms for dealing with pollutants had to wait until the conspicuous environmental crises of the 1980s (e.g., the noticeable effects of Agent Orange, the Bhopal accident, the Exxon Valdez spill) to receive considerable public attention, along with the onset of green awareness in Germany and the popularization of recombinant DNA technology in laboratories throughout the world. This created widespread expectations about the capacity of genetic engineering to solve many of humankind's problems, including environmental deterioration (Lindow et al. 1989).

This combination of circumstances set in motion both the science of biodegradation (i.e., understanding – and ultimately refactoring – how microorganisms catabolize otherwise unpalatable environmental chemicals) and the technology of bioremediation (using biological agents to remove or at least alleviate pollution at given sites). Note that the number of variables in each case is quite different. One can address a biodegradation question experimentally by simply setting up a system in which one microbial strain faces one target molecule under the controlled conditions of a Petri dish. In this case, the user can adjust the rest of the environmental conditions at will. In contrast, the physicochemical circumstances of a polluted site are generally fixed in advance and involve a much larger number of biological and abiotic factors. The setups for bioremediation research thus encompass more intricate experimental formats that try to capture the principal components of the target site (e.g., in a microcosm or a mesocosm; Teuben and Verhoef 1992). In any case, the challenge in the field has typically involved going from a certain biodegradative property in a strain or a consortium (whether naturally occurring or genetically enhanced) in the laboratory towards a full-fledged, uncontained process for removal of the pollutant from a specific location. To this end, the generic concept of bioremediation is often broken down into various substrategies: bio-attenuation (basically letting the indigenous microbes of the polluted site do the cleanup with no or minimal intervention), bio-stimulation (addition of limiting nutrients or electron acceptors to foster the emergence and/or activity of naturally occurring degraders), bio-augmentation (inoculation of microbial strains to speed up the rate of degradation of the target contaminant), or bio-adsorption (capture of the contaminant on the microbial biomass in a biologically innocuous form). In every case, interventions may eliminate the problem altogether (e.g., mineralization) or at least alleviate its effects by, e.g., transforming toxic chemicals into less harmful species (mitigation). Note that each of these settings can be further addressed through intensive interventions (at the source, typically dealing with concentrated waste) or extensive actions (low levels of the pollutant spread through a wide-ranging area). Finally, treatments can be engineered ex situ (removal of the contaminated material to a different processing site, e.g., a water or soil treatment plant) or in situ (application of whatever decontaminating agent in the same place where the problem occurs). There are also in-between scenarios, such as soil farming, which may involve partial or total relocation of the polluted layers for a more effective treatment. Ideally, extensive bioremediation with biological agents becomes the method of choice when physical and chemical removal of waste fails to bring toxicity below tolerable concentrations. As a consequence, bioremediation is often pictured as the complete mineralization of a given pollutant that contaminates a large site through in situ bio-augmentation with a degradative strain designed or nurtured in the laboratory for doing the job. Needless to say, such a rosy scenario hardly ever happens in reality.

2 What Is an Environmental Pollutant and What to Do with It

Intuitively, one envisages pollutants as a suite of chemicals of either natural or synthetic origin which, once released into the environment, cause the malfunction of one or more components of an otherwise balanced niche or ecosystem. There are compounds that are completely natural (e.g., oil hydrocarbons) but which have been mobilized by the chemical industry to places where they do not naturally belong. Others are entirely human-made xenobiotics (e.g., most chloro-organic or nitro-organic molecules) that were often synthesized for the very purpose of being extremely stable. They might be released into the environment deliberately (e.g., DDT and other pesticides) or accidentally (oil spills, PCBs) in different amounts. Their detrimental effects can stem from the inherent properties of the product or may reside in the harmful qualities of their synthesis intermediates or the products of their partial metabolism. Finally, their chemical structures might be amenable to total or limited microbial transformation under specific circumstances – or recalcitrant to biodegradation and thus persist in the afflicted sites for long periods of time. The functionalities of these molecules are very diverse (Fig. 1a), but they form part of our modern society and it is hard to imagine living without them. We deliberately leave heavy metals and metalloids out of the picture, as they pose a different type of problem and call for different solutions. Also, whereas one intuitively tends to picture pollutants as structurally intricate toxic compounds (e.g., PCBs or chlorodioxins), other superficially inconspicuous molecules such as CO2, lignin, or plastics become perilous when released at high levels (Eriksen et al. 2014). By the same token, some bioactive molecules (e.g., pharmaceuticals and other so-called micropollutants; Schwarzenbach et al. 2006) that are typically found at low levels may have a devastating impact on the corresponding ecosystems. The status of specific chemical species as environmental pollutants thus goes well beyond their inherent toxicity to biological systems and encompasses at least the six parameters pictured in Fig. 1b.

Fig. 1

Typical anthropogenic emissions. (a) Industrial and urban activities generate molecules that negatively impact the functioning of the biosphere, such as those indicated in the sketch. (b) The profile of an environmental pollutant is defined by the six parameters indicated, the outcome of which frames the bioremediation strategy (de Lorenzo et al. 2018)

The innate toxicity of specific molecular species can in many cases be predicted with various computational platforms that deliver a score on the basis of the molecular structure of the compounds of interest (Mayr et al. 2016). Alas, most of these in silico tools detect neither the effects of micropolymers (e.g., partially degraded plastics) on food and reproductive chains nor possible toxic synergies of chemical mixtures. Another key parameter is recalcitrance or biodegradability – and again users may make preliminary estimations by querying the compound at stake in various biodegradation-prediction platforms (Pazos et al. 2005; Ellis et al. 2006; Hadadi et al. 2016; Wicker et al. 2016; Latino et al. 2017). One study suggested that the ability of a certain chemical to be metabolized through the merged biochemical network of the known microbiota depends on the frequency of chemical triads (also called chemotopes) in the molecule under scrutiny (Gomez et al. 2007). Unfortunately, these predictions say little about the kinetics of such potential degradation, which may vary enormously between sites. Other parameters include gross physical characteristics (abundance, concentration, and mobility). And finally there is the issue of bioavailability: the fraction of the compound that is accessible to the biological agents. This is a somewhat slippery concept that is often associated with aqueous solubility and that dedicated whole-cell biosensors are claimed to measure (Harms et al. 2006; van der Meer and Belkin 2010). Unfortunately, the dearth of recognized standards for quantifying bioavailability makes this parameter difficult to tackle. In any case, Fig. 1b indicates that the qualification of a molecule as an environmental pollutant is related not just to its toxicity and degradability but also to a number of physicochemical circumstances that make it a lesser or greater matter of concern.
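To make the idea of structure-based screening more concrete, the following minimal Python sketch counts connected heavy-atom triads in a molecule given as a SMILES string, a rough, illustrative stand-in for the "chemotope" descriptors mentioned above. It assumes the open-source RDKit toolkit is installed; the triad definition and the example molecules are our own simplifications, not the published method of Gomez et al. (2007).

```python
# Minimal sketch: count connected heavy-atom triads ("chemotope"-like
# fragments) in a molecule given as SMILES. Requires the open-source RDKit
# toolkit (pip install rdkit). The triad definition below is an illustrative
# simplification, not the published descriptor of Gomez et al. (2007).
from collections import Counter
from rdkit import Chem

def atom_triads(smiles: str) -> Counter:
    """Count linear triads A-B-C of covalently bonded heavy atoms."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Could not parse SMILES: {smiles}")
    triads = Counter()
    for atom in mol.GetAtoms():                 # B, the central atom
        nbrs = list(atom.GetNeighbors())
        for i in range(len(nbrs)):
            for j in range(i + 1, len(nbrs)):
                # order the two flanking atoms so that A-B-C equals C-B-A
                a, c = sorted((nbrs[i].GetSymbol(), nbrs[j].GetSymbol()))
                triads[(a, atom.GetSymbol(), c)] += 1
    return triads

# Illustrative comparison: a chloroaromatic herbicide versus a sugar
for name, smi in [("2,4-D", "OC(=O)COc1ccc(Cl)cc1Cl"),
                  ("glucose", "OCC1OC(O)C(O)C(O)C1O")]:
    print(name, dict(atom_triads(smi)))
```

Fragment profiles of this kind are only a first filter: as noted above, they say nothing about degradation kinetics or bioavailability at a given site.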

The fate of the pollutants amenable to bioremediation also deserves a separate comment. As mentioned above, the ideal scenario is complete removal of the harmful compounds from the contaminated sites (e.g., mineralization to CO2 and H2O). However, CO2 is in itself a major issue because of its rising levels in the atmosphere in recent decades. In this case, bioremediation also legitimately includes efforts to capture the surplus of CO2 and convert it into sugars (Antonovsky et al. 2017), polymers, and other useful materials, something within reach of contemporary metabolic engineering. A separate issue is raised by recalcitrant polymeric materials, whether synthetic (e.g., plastics) or of natural origin (lignin). At first sight, it would seem desirable to nurture microorganisms with a superior capacity to degrade them altogether. However, if the catabolism of large amounts of such polymers were complete, we would be converting one pollutant into another (CO2). And if degradation is not complete, we could be aggravating the problem by generating microplastics and other microparticles (Andrady 2011) that can interfere with the trophic and reproductive chains of many organisms. In these cases, the adequate bioremediation strategy might be just the contrary: compaction or disposal of the target materials in a biologically unavailable form, or conversion into an entirely inert material. These considerations highlight the difficulty of engineering interventions that are both efficacious and safe.

The diversity and complexity associated with real polluted scenarios have often created a big gap between the science of biodegradation and the technology of bioremediation. Challenges include not just the genetic stability of the agents and the influence of the extremely variable abiotic factors that prevail in polluted sites, but also the frequent syntrophy among members of the resident microbial community for the catabolism of complex molecules. Often the key environmental catalyst is not a single strain of one species but a consortium of microorganisms with different degrees of compositional complexity. Moreover, it is often the case that the bacteria that are best at degrading target compounds in nature are not the fast growers that are typically isolated in enrichment experiments. Furthermore, key degradative pathways are hosted by bacteria that cannot be cultivated: their presence and activity are revealed only through meta-genomic, meta-proteomic, and meta-transcriptomic approaches. Finally, resistance to colonization by exogenously added bacteria and the foraging of predatory protozoa do the rest to check the action of artificially implanted strains. But obviously, most of these caveats were unknown when the field started. Unsurprisingly, the somewhat naïve propositions of the late 1980s and 1990s to rationally create biodegradative superbugs which, upon release, could mineralize many of the worst environmental pollutants collapsed altogether one decade later (Cases and de Lorenzo 2005). But along the way, a wealth of new knowledge on microbial ecology was unveiled, clear indications of the blockages for effective bioremediation were pinpointed, and much of the fundamental work on unusual catabolic enzymes was repurposed for the sake of industrial biotransformations. In view of all this, where are we now?

3 Towards Bioremediation 3.0

It would not be realistic to deny that the interest in biodegradation and bioremediation that enjoyed a big hype in the late 1980s – mostly due to the work of the Timmis, Knackmuss, and Chakrabarty laboratories – lost much ground shortly after because of the failure to deliver on the promise of environmental cleanup with genetically engineered microorganisms (Cases and de Lorenzo 2005). Several follow-up developments stem from this state of affairs. First, the biodegradation and bioremediation research of the 1980s–1990s suggested that one has to consider the target sites as whole systems, including the physicochemical and geological characteristics of the place, the ecology and dynamics of the native microbial community, and the catalytic potential of biological and non-biological components – including what could be called the post-mortem enzymatic activities of many microorganisms (French et al. 2014). While the complexity of such systems is indeed high, they offer entirely new opportunities to address them with the tools of systems biology and systems science – which were not available before. Second, GM strains with enhanced or entirely new catabolic activities that do well under controlled laboratory conditions generally perform poorly when released into the environment. This is connected to the long-standing biotechnological challenge of how to genetically program microorganisms to stably behave as desired. The issue of retroactivity (Gyorgy and Del Vecchio 2014) and metabolic burden (Ceroni et al. 2015) between implanted genetic constructs and the preexisting biochemical and regulatory network of the bacterial host goes well beyond the mere stabilization of the new genes by having them encoded in the chromosome (Fig. 2). Moreover, the inoculation technologies developed thus far have not been very successful, and strains released as agents to clean up specified spots often succumb to predators and/or to straight competition with other microbial inhabitants of the place. What one could call Environmental Galenic Science (i.e., strategies for maintaining, formulating, and delivering remedies to a sick individual in order to optimize their intake and action) has not really developed much despite the evident bottleneck that this step imposes on bioremediation as an advanced technology. Third, the main beneficiary of most of the advances in the field of biodegradation in the last 20 years has not been the environment but the industrial biotechnology sector. The wealth of information on the genetic and biochemical diversity of the microbiota that thrives in polluted sites has enabled the setup of bio-based alternatives to chemical processes with whole-cell biocatalysts developed through metabolic engineering. Conspicuous examples include degradable polymers, biofuels, and both bulk and fine chemicals, which are bound to take over a large portion of the current market as oil becomes more and more scarce and expensive as the feedstock for their production (Chae et al. 2017). Furthermore, many emissions can be dealt with biologically at the point of manufacture, thereby preventing their eventual release, and live catalysts can be integrated in zero-pollution industrial pipelines. Indirectly, the onset of biological manufacturing of more and more added-value chemicals contributes to sustainability by replacing otherwise polluting chemical processes.
But the direct environmental dividends of two decades of effort on molecular biodegradation and in situ bioremediation research have hardly materialized, and the most popular technologies for in situ cleanup to this day largely involve attenuation and bio-stimulation (e.g., fertilization with N and P, injection of O2, NO3, etc.), with bio-augmentation with microorganisms nurtured in the laboratory still being marginal in comparative terms.

Fig. 2

The interplay between the host's regulatory and metabolic network and the genetic implants. Formalizing context-dependency in the performance of engineered genetic constructs. The preexisting physiological and metabolic network of the host (chassis) and the engineered devices it carries may mutually compete for cell resources (e.g., ribosomes, energy currency, metabolic building blocks, etc.). This creates a perturbation called retroactivity. In the best-case scenario, the implanted genetic modules can be made orthogonal, i.e., have little or no interference with the chassis and with other engineered devices. (Figure from de Lorenzo and Schmidt 2018)

While these developments set the scene for a strong research program that builds on previous successes and failures, the reality is that the current commercial value of environment-oriented biotechnology remains orders of magnitude below that of health-oriented, agricultural, and other demand-driven counterparts. The share of the bioremediation field within the whole landscape of modern, frontline biotechnology is minimal, a typical case of the paradox between research that is highly relevant from a social point of view but of low added value, and highly profitable alternatives in the biomedical and food sectors. Does this mean that the subject has a bleak scientific and technological future?

The limitations of what has been called Bioremediation 1.0 (based on mere trial-and-error) and Bioremediation 2.0 (fostered by recombinant DNA technology) have become clear above. But at the same time, the last decade has witnessed the emergence of novel environmental challenges along with the onset of game-changing technologies in the life sciences realm that were not there before. The excess of greenhouse gases (CO2, CH4, N2O, chlorofluorocarbons, and hydrofluorocarbons) and the overwhelming abundance of microplastics in marine ecosystems have revealed themselves as the most formidable global challenges faced by our generation. Also, lignocellulosic waste, although not toxic by itself, has become a major unwanted residue that impacts the normal functioning of many ecosystems. This is accompanied by the global spread of pharmaceuticals (e.g., antibiotics) and the realization that many known pollutants (dioxin and dioxin-like compounds, polychlorinated biphenyls, DDT) and other pesticides, plasticizers, and flame retardants like bisphenol A behave as endocrine disruptors. Finally, a large number of new-to-nature molecular species with unusual chemical bonds (e.g., ionic liquids) have been synthesized and are expected to find many large-scale applications, but their environmental fate is virtually unknown (Jordan and Gathergood 2015). The emergence of these new environmental concerns has gone in parallel with the onset of systems and synthetic biology, the availability of amazing volumes of omics data, and the growing interface of life sciences research with information technologies. The scientific, methodological, and social background of biodegradation and bioremediation interests has thus changed profoundly since the field was created. But the same circumstances pave the way towards Bioremediation 3.0 (Dvorak et al. 2017).

4 Past and Current Challenges

Thirty years ago, frontline research in the subject of biodegradation/bioremediation involved the cloning, sequencing, characterization, and – whenever possible – genetic enhancement of pathways for catabolism of certain pollutants following one's intuition and limited knowledge, i.e., mostly a trial-and-error endeavor. Now one can accurately predict the metabolic potential of a bacterium, and even of a complete microbial community, by just looking at DNA sequences and their associated transcriptomic, proteomic, and metabolomic data. And all this without having to hold the biological materials in hand. Numerous computational platforms are available to guide the genetic assembly of new pathways (Hadadi et al. 2016; Wicker et al. 2016; Latino and Wicker 2017). The earlier romanticism of microbial ecology about going out to unusual places to collect biodegradative strains has been replaced nearly entirely by hours of bioinformatic analyses of genomes and metagenomes (and other omes) in front of a computer. The focus on the pollutants of interest has also changed, as large emissions of apparently innocuous molecules like CO2 or plastics have revealed themselves as far more dangerous for the planet than intensive contamination of localized sites with very toxic compounds (Eriksen et al. 2014). The most concerning environmental problems are now global and threaten the climate and the biogeological and reproductive cycles that sustain the functioning of Earth. As discussed above, the qualification of a pollutant as such has to be unfolded into a whole suite of parameters beyond its intrinsic toxicity (Fig. 1b): large volumes of apparently safe but highly mobile molecules might be as risky at a global scale as lower concentrations of a site-bound dangerous compound. Once released, pollutants and emissions may propagate globally beyond the point of production and become a planetary problem. The scale of the problem also calls for new ways of delivering possible remedies, e.g., bioremediation strategies aimed at propagating enhanced CO2 capture through the environmental microbiome or spreading plastic-degradation capacity to marine microorganisms (de Lorenzo 2017). The same applies to other globally spread pollutants like pharmaceuticals and endocrine disruptors (de Lorenzo et al. 2018).

The scenario that has developed in recent years is therefore one in which (i) many contaminants have dispersed through virtually every ecosystem, even to supposedly pristine locations; (ii) new molecules now qualify as real or potential environmental pollutants either because of their high concentrations (e.g., CO2 and greenhouse gases) or their unusual chemical structures (e.g., ionic liquids); (iii) the information available on the multiscale responses of the microbiota to environmental stresses (from enzymes to pathways to communities) is unprecedentedly large; and (iv) contemporary systems and synthetic biology allow revisiting traditional pollution problems with conceptual and material tools for handling complex systems that were unheard of until recently. Given this frame, how may the field move further?

5 Biodegradation and Bioremediation in the Times of Systemic Biology

The term systemic attempts to merge the two major interpretative frameworks that contemporary biology has set in motion to address the problem of complexity in living systems. On the one hand, systems biology attempts to describe and understand biological objects quantitatively and in their entirety – in contrast with the extreme reductionism of molecular biology. On the other hand, synthetic biology approaches the same biological objects from the perspective of electrical and mechanical engineering, in pursuit of the relational logic between their components that makes the system work. The tenet of synthetic biology is Feynman's famous statement that "… what I cannot create I do not understand …" In this respect, it has been argued that synthetic biology is the authentic genetic engineering, as the term engineering is no longer a metaphor but a veritable framework to both understand and refactor living entities (de Lorenzo and Schmidt 2018). The onset of systemic biology widens tremendously the scope and possibilities of biodegradation and bioremediation, as it allows addressing questions and entertaining intervention strategies for which there were no tools before. In fact, we can consider Bioremediation 3.0 to be the result of the encounter between the field and systemic biology (Dvorak et al. 2017). There are at least three aspects in which such an encounter may bring about entirely new angles to the subject.

The primary feature is the current ease of analyzing and genetically programming many types of microorganisms of environmental relevance, such as pseudomonads (Nikel et al. 2014, 2016). The last few years have witnessed the emergence of a suite of theoretical and practical tools to rewrite at the user's will the genome of bacteria of interest, including the chemical synthesis of large chromosomal segments or even the complete genetic complement. This facilitates and accelerates studies on the bottlenecks that limit catabolism of the compounds of interest, whether enzymatic, regulatory, or physiological. In some cases, it has been possible to create new enzymes from scratch through a combination of rational protein design and directed evolution approaches (Kan et al. 2016; Arnold 2017). This opens amazing opportunities to engineer agents able to cope with new types of chemicals for which nature has not yet invented a biodegradative solution. Efforts to develop new whole-cell catalysts are also assisted by computational platforms that guide the assembly of new pathways by automatically searching databases for the best enzymes and genes to deliver the activities of interest. The pioneering work of Larry Wackett with the University of Minnesota Biocatalysis/Biodegradation Database (https://goo.gl/3x76uZ; now hosted and upgraded at EAWAG-ETH, https://envipath.org/; Latino and Wicker 2017) has provided a generation of biotechnologists with a user-friendly resource for pathway prediction and assembly. More recently, the ATLAS platform (http://lcsb-databases.epfl.ch/atlas/) and its Pathway Search feature allow the user to search for all the possible routes from any substrate compound to any product (Hadadi et al. 2016). The resulting pathways involve known and novel enzymatic steps that may point to as-yet-unidentified enzymatic activities as well as provide potential targets for protein engineering to alter substrate specificity. The pre-pathways thereby assembled could then be optimized by playing in vivo with the wealth of biological parts available through various repositories of promoters and other regulatory components (e.g., http://parts.igem.org/). The objective of such an optimization is not so much to express the desired catalytic activity at very high levels as to ensure that the genetic implants nest well within the broader regulatory and enzymatic network of the host. As mentioned above, the retroactivity between the genetic constructs and the genomic and biochemical chassis of the carrier bacteria (Fig. 2) is one of the frequent reasons for the instability of designer microbes, and finding ways to minimize it is one of the key challenges of present-day genetic engineering. In sum, we now have a large number of tools for constructing bacteria à la carte – from protein engineering to whole cells. Furthermore, the adoption of CRISPR/Cas9 technology for genome editing (Aparicio et al. 2016, 2017; Choi and Lee 2016; Ricaurte et al. 2018) overcomes many of the concerns traditionally associated with transgenic microorganisms (see below). But the question remains of how to deliver biodegradative activities to a target site and how to scale this up to tackle global pollutants.
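The route-finding logic behind such platforms can be illustrated with a toy breadth-first search over a small reaction network. The compounds, reactions, and enzyme labels below are simplified placeholders loosely inspired by aerobic toluene catabolism; they are not entries retrieved from enviPath or ATLAS, and the code does not call either service.

```python
# Toy illustration of substrate-to-product route search over a reaction
# network, in the spirit of the pathway-prediction platforms cited above.
# The network is a deliberately simplified sketch of aerobic toluene
# catabolism; it is not data taken from enviPath or ATLAS.
from collections import deque

REACTIONS = {  # compound -> list of (product, enzyme activity)
    "toluene":          [("benzyl alcohol", "monooxygenase"),
                         ("3-methylcatechol", "dioxygenase (simplified)")],
    "benzyl alcohol":   [("benzaldehyde", "alcohol dehydrogenase")],
    "benzaldehyde":     [("benzoate", "aldehyde dehydrogenase")],
    "benzoate":         [("catechol", "benzoate dioxygenase")],
    "catechol":         [("TCA intermediates", "ring-cleavage dioxygenase")],
    "3-methylcatechol": [("TCA intermediates", "ring-cleavage dioxygenase")],
}

def find_routes(substrate, product, max_steps=6):
    """Enumerate acyclic routes from substrate to product (breadth-first)."""
    routes, queue = [], deque([[(substrate, None)]])
    while queue:
        path = queue.popleft()
        last_compound = path[-1][0]
        if last_compound == product:
            routes.append(path)
            continue
        if len(path) > max_steps:
            continue
        for nxt, enzyme in REACTIONS.get(last_compound, []):
            if all(nxt != compound for compound, _ in path):  # avoid cycles
                queue.append(path + [(nxt, enzyme)])
    return routes

for route in find_routes("toluene", "TCA intermediates"):
    print(" -> ".join(c + (f" [{e}]" if e else "") for c, e in route))
```

The real platforms operate on networks built from large collections of biotransformation rules and rank the candidate routes before any genetic parts are sourced from repositories such as the one cited above.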

A second characteristic of Bioremediation 3.0 is the recognition of consortia rather than single species as the main agents for the catabolism of most pollutants. Having one sole strain as the recipient of a complete pathway makes regulation of expression of the genes of interest the only possibility for adjusting the right dose of activity required for a specific application. In reality, it is unusual to find naturally occurring strains that can run a complex route by themselves, in particular for very novel compounds. A consortium can not only divide the catabolic labor between its various components but also let new functions emerge that could not appear in single cells (Fig. 3). For instance, one member of the community may not contribute to catabolism of the target molecule, but might eliminate a toxic intermediate side-product. Moreover, the tuning of the optimal doses of biochemical activities for each step of the degradative route can be adjusted not just endogenously (e.g., by transcriptional regulation of the pathway in one strain) but also by altering community composition and the relative share of each species in the community. Having separate catabolic steps in different bacteria also allows optimization of the biochemical background of each of them for the best performance of the individual reactions. Since the enzymes and transcription factors involved in many biodegradative pathways are often promiscuous, the catabolic capacity of a consortium is often higher than the sum of its components, a phenomenon known as ectopic metabolism (de Lorenzo et al. 2010).
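As a numerical illustration of this dose-tuning argument, the toy simulation below (plain Python, forward-Euler integration, invented Michaelis-Menten parameters) splits a two-step route A → B → C between two strains and shows how changing the strain ratio, rather than gene expression, shifts the balance between residual substrate and accumulated intermediate.

```python
# Toy model of a two-step degradation route A -> B -> C distributed between
# two strains: strain 1 carries the enzyme for A -> B, strain 2 the enzyme
# for B -> C. Varying the strain ratio tunes the overall flux without
# touching gene expression. All kinetic parameters are invented.
def simulate(frac_strain1, total_biomass=1.0, hours=48.0, dt=0.01):
    x1 = frac_strain1 * total_biomass            # biomass of strain 1
    x2 = (1.0 - frac_strain1) * total_biomass    # biomass of strain 2
    vmax1, km1 = 1.0, 0.5                        # step 1 kinetics (per unit biomass)
    vmax2, km2 = 1.5, 0.3                        # step 2 kinetics (per unit biomass)
    A, B, C = 10.0, 0.0, 0.0                     # concentrations (arbitrary units)
    for _ in range(int(hours / dt)):             # simple forward-Euler integration
        r1 = x1 * vmax1 * A / (km1 + A)          # Michaelis-Menten rate, step 1
        r2 = x2 * vmax2 * B / (km2 + B)          # Michaelis-Menten rate, step 2
        A, B, C = A - r1 * dt, B + (r1 - r2) * dt, C + r2 * dt
    return A, B, C

for frac in (0.2, 0.5, 0.8):
    A, B, C = simulate(frac)
    print(f"strain 1 fraction {frac:.1f}: residual A = {A:.2f}, "
          f"intermediate B = {B:.2f}, product C = {C:.2f}")
```

In a real consortium the same dial would be turned by inoculation ratios or by the relative fitness of each member, with the additional complication that the strains also compete for carbon and other shared resources.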

Fig. 3

The single-cell catalyst versus the catalytic consortium dilemma. (a) A pathway of interest may appear naturally in a given strain or can be engineered in a single host. However, the background metabolic network of such a single-compartment reactor may not be optimal for each biochemical step of the route. (b) An equivalent pathway could be run by a consortium of strains A, B, and C in which each member contributes a separate enzymatic step to the biochemical itinerary. The advantages of such distributed catalysis versus the single-cell counterpart are discussed in the text

Finally, the three-dimensional structure of microbial consortia makes a difference in the efficiency of the processes that they catalyze. Designing the architecture of a multi-strain agent thus becomes another point of action where systems-guided engineering can be applied for the sake of better remediating activities. A side benefit of utilizing communities rather than single strains is that one can artificially assemble a catalyst by putting together specimens that naturally bear one or more of the steps of a pathway of interest. In this case, when brought together, the consortium displays the desired metabolic capacity (Zhou et al. 2015; Ponomarova et al. 2017). Doing this requires a deep knowledge of the metabolic networks of each of the components of the group, but it has the advantage of resulting in a biological material that is not genetically modified. The new wave of biodegradation could thus expand the former focus on assembling pathways and constructing degradative superbugs towards engineering catalysts in their entirety, including consortia with a desirable physical shape. But still, these developments do not solve the problem of delivering cleanup activities prepared in the laboratory to the target sites.

This takes us to the final feature of systemic bioremediation: the rational spreading of biodegradative activities. As mentioned above, and quite in contrast with the many pathway-assembly methods, there are only a few advanced technologies for releasing catabolic agents – whether natural or GM – into the environment in an efficacious form beyond merely spreading the agents in an aqueous suspension. These include, inter alia, formulation with plant seeds (e.g., for rhizoremediation), adsorption to corncobs (Raina et al. 2008), trapping in silicon tubing (Mertens et al. 2006), dispersal with foam, and packaging in gelatin capsules (de las Heras and de Lorenzo 2011). In many instances, such carriers are combined with additives that prolong the lifespan and shelf life of the biological components. Larger-scale bioremediation interventions may also combine biological activities with some type of civil engineering (e.g., reactive barriers) or with electrostimulation (Mena et al. 2016), in which an electrical current serves as either electron donor or acceptor for the biological process.

While these could be good solutions for specific scenarios, such inoculation methods still need to be designed on a case-by-case basis, and in no instance is bioaugmentation scalable to the very large level required for tackling global emissions. In the meantime, data on massive horizontal gene transfer have revealed how quickly a new trait (e.g., antibiotic resistance) spreads through the entire planet provided that there is enough selective pressure (Loftie-Eaton et al. 2015). Under the right conditions, DNA seems to move easily through the entire complexity pyramid (Fig. 4), and genetic innovations that appear in one genome may disperse through the environmental microbiome in just a few years. The idea of developing DNA super-donors able to implant and propagate particular activities in a community of recipients has been considered at various times throughout the history of bioremediation (Top et al. 1998; Dejonghe et al. 2000). Alas, the genetic tools available to engineer such occurrences were very limited and the concept was never developed to its full potential. But this need not be the end of the notion. There is a suite of plasmid-based DNA transfer systems that are promiscuous and active enough to think along these lines. Furthermore, the role of phages in propagating new genotypes through large bacterial populations is becoming increasingly evident (von Wintersdorff et al. 2016). However, expression signals vary dramatically from one host to another, and a pathway engineered in E. coli may not work at all when passed to another host, whether through plasmid-based or phage-based systems. Synthetic biology allows addressing this caveat through engineering of the 5′ regions of the genes of interest to bear very promiscuous (if not universal) expression signals. The problem remains, however, of how to foster the propagation of preset DNA sequences in the absence of a major selective pressure. In reality, this challenge is by no means exclusive to large-scale bioremediation but a general one: how to stimulate propagation of beneficial traits through a population without an exogenous force to push it. For diploid species, a sophisticated strategy called gene drives has been developed which allows transmission of a given genetic cargo to the progeny at frequencies above the mere Mendelian distribution of inherited traits (Champer et al. 2016). After a few reproductive cycles, this allows the cargo to eventually spread to all members of the target population. The potential power of the technology has raised serious safety concerns, as the method could be used to precisely eliminate particular species (e.g., by propagating infertility traits; Oye et al. 2014). But by the same token, one could think of using the approach for the dissemination of activities that are intrinsically beneficial for the environment, e.g., CO2 capture, biodegradation of plastics, and mineralization of endocrine disruptors. That bacteria are generally haploid prevents doing something similar to what has been successfully attempted in yeasts and animals, but one could entertain scenarios in which cargoes engineered inside promiscuous plasmids or phages are directed to highly conserved chromosomal regions of a suite of species and forced to recombine or else be lost. Whether to adopt this strategy or to formulate others for spreading DNA-borne biodegradative activities (rather than degradative strains) remains one of the key challenges in the field.
In this respect, while Bioremediation 2.0 was much concerned with containment of the agents engineered in the laboratory, Bioremediation 3.0 might walk in the opposite direction and explore possibilities for maximizing horizontal gene transfer. Note that discharges of greenhouse gases and microplastics in the oceans require interventions that go beyond merely decreasing emissions or acting on a limited number of sites (de Lorenzo et al. 2018). The much-debated geoengineering of planet Earth could potentially be complemented or even replaced by large-scale bioremediation strategies to capture CO2 and to improve the capacity of marine microorganisms to act on plastics and other globally widespread micropollutants. As before, there is a question mark on whether or not the public will accept such unprecedented actions for handling emissions, which are reminiscent of Terraforming (de Lorenzo 2017).
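To get a back-of-the-envelope feel for why selective pressure dominates the fate of a disseminated trait, the sketch below implements a frequency version of the classical mass-action picture of plasmid spread (growth selection plus conjugative conversion). All parameters are invented for illustration, and the model ignores space, plasmid loss, and host range, so it should be read as a cartoon rather than a prediction.

```python
# Toy model of how a plasmid-borne trait spreads through a bacterial
# population of constant total density, combining (i) conjugative transfer
# (mass action) and (ii) growth selection for or against plasmid carriers.
# All parameters are invented; this is a cartoon, not a predictive model.
def carrier_frequency(selection_benefit, plasmid_cost=0.05,
                      gamma=1e-11, density=1e9, days=60.0, dt=0.01):
    """Final frequency of plasmid-bearing cells after 'days' days."""
    s = selection_benefit - plasmid_cost   # net growth advantage (per day)
    c = gamma * density                    # effective conjugation rate (per day)
    p = 1e-4                               # initial carrier frequency (inoculum)
    for _ in range(int(days / dt)):
        dp = (s + c) * p * (1.0 - p)       # selection + conjugative conversion
        p = min(1.0, max(0.0, p + dp * dt))
    return p

for benefit in (0.0, 0.1, 0.3):
    print(f"selective benefit {benefit:.1f} per day -> carrier frequency "
          f"after 60 days: {carrier_frequency(benefit):.4f}")
```

Even a generous conjugation rate barely moves the trait unless carrying it pays off for the host, which is precisely the difficulty raised above for traits, such as CO2 capture, that benefit the environment more than the cell that bears them.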

Fig. 4

DNA propagates very quickly through the multiscale complexity pyramid. As studies on the global spread of antibiotic resistance have shown, mutations that emerge in a single genome may spread very rapidly through a suite of horizontal gene transfer mechanisms to eventually reach virtually every ecosystem – provided that there is a strong selective pressure to do so. We argue that bioremediation research could learn from such mechanisms of dispersal in order to engineer the dissemination of beneficial traits (e.g., CO2 capture, degradation of microplastics, removal of micropollutants, etc.) at a global scale

In sum: while the science of biodegradation will remain focused on pathways, hosts, communities, and their interplay with the physicochemical constituents of polluted sites, the technology of bioremediation will move forward by taking stock of past failures and capitalizing on the many opportunities and tools brought about by contemporary systems and synthetic biology. This includes methods that allow the assembly of new pathways and the enhancement of the catabolic capacity of a large community without necessarily relying on the release of GMOs.

6 Overcoming the GMO Controversy?

Although the primary reason for the difficulties of Bioremediation 2.0 in delivering sound intervention strategies was not public acceptance, the mere proposition to release genetically modified agents for environmental cleanup ignited a major argument between pro-GMO and anti-GMO parties that continues to this day. While massive evidence indicates that the impact of recombinant microorganisms released for bioremediation purposes is no worse than that of naturally occurring counterparts, a large part of the public still invokes the Black Swan argument (Taleb 2007) to oppose any purposeful liberation of genetically manipulated agents. The first concern is about the spread of non-natural genetic information (e.g., recombinant DNA) or GM strains into a new niche where the effects might be unknown. Early in the history of Bioremediation 2.0, this question was thoroughly tackled through (i) stabilization of the transgenes in the chromosome of the carrier bacterium (in contrast to the use of transmission-prone plasmids), (ii) active killing of the bacterium at stake once the job has been completed – or when it departs from a specific target scenario, and (iii) vigorous barriers to horizontal gene transfer implemented with conditional suicide genetic circuits (Ramos et al. 1995). While these genetic devices increased containment by various orders of magnitude, the figures never reached a full 100%. This was due to spontaneous mutations and the activity of mobile insertion sequences, an issue hardly tractable at that time. This same question has been picked up again more recently by synthetic biologists in pursuit of certainty of containment (CoC) for highly refactored microorganisms. The favorite approach in this case involves the reassignment of one of the stop codons and its recoding to guide the insertion of non-natural amino acids into the structure of essential proteins. This makes viability of the bacterium entirely dependent on the addition to the medium of the corresponding chemically synthesized amino acid (Rovner et al. 2015). Although the level of containment of such strains is extraordinary compared to previous ones, they still fall short of full CoC. Attempts to push the figure still further involve altering the genetic code, bearing the genetic information in a non-DNA molecule, or replacing one or more of the nucleotides with chemically synthesized alternatives (Schmidt and de Lorenzo 2012, 2016). In this way, the thereby modified genetic information could not be read by any potential capturer, nor could the modified agent interpret standard DNA. Whether or not these approaches will be useful for agents to be released is unclear, because the resulting strains may have lost much of the competitive fitness of naturally occurring bacteria. And in any case, it is unlikely that the anti-GMO community would accept strains that are far more engineered than the first generation of environmental recombinants. Is there a way of producing advanced and efficacious bioremediating agents or strategies that circumvent this problem and thus win public sympathy for the technology?

In the paragraphs above, we have hinted at some approaches to this challenge. At the time of writing this article, biological systems (including microorganisms) whose genomes have been edited with CRISPR/Cas9 technology do not qualify as GMOs proper (Waltz 2016). This opens a window of opportunity for their application to problems that were not approachable before because of the transgenic tag. A second possibility is the systems-guided assembly of naturally occurring strains into a catalytic consortium, with the added advantages, mentioned before, of readjusting expression levels and of designing their 3D architecture. Also, propagating DNA rather than GMOs could be a way to scale up interventions (Fig. 4). Finally, we can entertain the design of cell-free agents (Karig 2017) or DNA-free cells (Rampley et al. 2017) that capture the pathways of interest and deliver their catabolic activities but are unable to spread beyond the site of application. However, the question remains of whether we can develop new catabolic and enzymatic activities without resorting to genetic engineering.

7 Research Needs

To address how biodegradation and bioremediation research may look in the future, it is useful to look back at some of the earlier studies, in particular what was called at the time plasmid-assisted molecular breeding (Kellogg et al. 1981), a technology that became popular before the spread of recombinant DNA methods. The key idea was to start with a complex community of microorganisms retrieved from sites with a history of pollution by the target compound, some of them bearing plasmids with catabolic genes. Progressive selection of the best growers in a chemostat eventually led to the isolation of strains that had incorporated in their genome the complement of genes that could afford biodegradation of an otherwise recalcitrant compound (e.g., 2,4,5-trichlorophenoxyacetic acid). The solution to the metabolic problem was thus the result of horizontal gene transfer and spontaneous mutagenesis. Although the method was well liked for a while, it was soon replaced by more directed approaches in which the user had better control of the changes leading to a degradative phenotype. The positive side of molecular breeding was, however, that the resulting strains were not GMOs and the assembled pathways were non-recombinant, thereby enabling their immediate application if necessary. In today's systems terminology, we could describe the setup as a case in which a given metabolic problem is embodied in a material object (the starter microbial community) and the system is left to fluctuate so as to explore the solution space upon application of a selective force. The result is a physical entity (a strain) that has gone through a multi-objective optimization. For the new strain to grow, the system had to solve not just the assembly of the metabolic route proper but also its adequate nesting in the host's biochemical and regulatory network (Fig. 2). We argue that such a methodology could be revisited in the times of systemic biology for generating new strains and properties that may not be amenable to forward design with the level of knowledge we have today.

While evolutionary optimization is the method of choice for fine-tuning the expression parameters of pathways assembled in given hosts, its utility can be upgraded to generate new biodegradative strains and new consortia that could ultimately do better than any forward-designed alternative. The ease of DNA sequencing available today allows us to determine evolutionary itineraries in single strains and complete communities (Celiker and Gore 2014) and to extract principles to guide further actions. Under the right selective pressure, bacteria seem to be able to invent reactions that would be difficult to implement otherwise (Donnelly et al. 2018). It has recently been reported that metabolic stress resulting from faulty redox reactions generates reactive oxygen species (ROS), which in turn accelerate the diversification and solution-finding of the corresponding bacteria to overcome the metabolic bottleneck (Perez-Pantoja et al. 2013). This phenomenon, which might be at the basis of the rapid evolution of Rieske non-heme iron oxygenases (Pérez-Pantoja et al. 2016), could be exploited in a directed evolution setup to speed up the discovery of enzymes able to cope with new substrates (Fig. 5). The development of microbial activity farms that combine the chemical computation power of naturally occurring bacteria with human-made DNA amendments could in fact result in highly evolvable systems. Such systems could find catabolic solutions to virtually every present or future biodegradation challenge, and materialize the result in the form of single strains or consortia that are not GMOs.
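A crude way to see why coupling mutagenesis to stress can pay off in such a directed-evolution setup is the toy simulation below: a population must accumulate a few beneficial mutations to grow on a new substrate, and cells that cannot yet do so mutate faster (mimicking the ROS loop of Fig. 5). Population size, mutation rates, and the number of required mutations are all invented for illustration.

```python
# Toy simulation of the stress/innovation loop: cells need several beneficial
# mutations to use a new substrate; cells that cannot yet use it experience
# "stress" (e.g., ROS) that raises their mutation rate. Rates and population
# size are invented for illustration only.
import random

def generations_to_solution(stress_coupled, pop_size=1000, needed=3,
                            base_rate=1e-3, stressed_rate=1e-2,
                            max_generations=20000, seed=1):
    random.seed(seed)
    population = [0] * pop_size            # beneficial mutations per cell
    for generation in range(1, max_generations + 1):
        mutated = []
        for muts in population:
            rate = stressed_rate if (stress_coupled and muts < needed) else base_rate
            mutated.append(muts + 1 if random.random() < rate else muts)
        # selection: cells closer to the solution replicate preferentially
        weights = [1 + m for m in mutated]
        population = random.choices(mutated, weights=weights, k=pop_size)
        if max(population) >= needed:
            return generation
    return None

print("stress-coupled mutagenesis :", generations_to_solution(True), "generations")
print("constant mutagenesis       :", generations_to_solution(False), "generations")
```

The particular numbers are irrelevant; the qualitative point is that solutions tend to appear in fewer generations when diversification is boosted only while the metabolic problem persists.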

Fig. 5

The stress-genetic innovation cycle. The figure sketches how metabolic troubles (e.g., faulty oxidation of non-substrates or poor substrates of oxygenases) cause reactive oxygen species (ROS), which mutate DNA (as well as damaging RNA), trigger the SOS response, and bring about genetic diversification, which in turn might find a solution to the metabolic problem that originated the release of the mutagenic agent. While the ROS → DNA loop does not involve transfer of coded information, it does deliver an input that accelerates the rate of novelty production

Figure 6 shows a streamlined roadmap of bio-based approaches for tackling environmental pollution, from prevention to global-scale remediation. The process typically starts with the identification of new catalytic properties in a strain or a community – natural or human-designed – followed by their utilization in prevention, monitoring, or bioremediation interventions. These diversify depending on a large number of parameters and on the dimension of the problem, from local to global.

Fig. 6

Bio-based approaches to tackle environmental pollution, from prevention to global-scale remediation. The direction of the arrow indicates the increasing complexity and diversification of the technologies involved (see de Lorenzo et al. 2018 and text for explanation). Some of the items shown in the flow are addressed separately in other Volumes of this series

Most contributions that follow this introductory chapter to the volume on biodegradation and bioremediation bear witness to the impasse triggered by the transition between stages 2.0 and 3.0 in the field, as discussed at length above. The former emphasis on genetic engineering as the main driver of the field has been largely left aside in favor of less risky and more acceptable approaches based on sound microbial ecology, geobiochemistry, and physicochemical methods. The articles also reflect the effort to understand in detail what is going on in the biological realm during natural attenuation of pollution with or without much human intervention (e.g., biostimulation). A large share of the work reported herein also capitalizes on the suite of omics technologies that allow a detailed follow-up of the responses of individual strains, of community composition, and of activity in very different bioremediation scenarios. In the meantime, new and acute environmental challenges have become noticeable and cannot be ignored (e.g., global greenhouse emissions, plastics, and micropollutants), while novel conceptual and material tools have arrived – in particular those of synthetic biology. A new encounter between the immense possibilities of these new fields and the pursuit of remedies for both chronic and new environmental problems is not only desirable but also unavoidable. Some of the chapters also testify that frontline technologies can open avenues for solving thus far intractable pollution puzzles.