Keywords

1 Introduction

This chapter presents the main methods for studying microorganisms in the environment. Such studies begin with sampling procedures adapted to the different environments explored. The various components of the microbial community can then be characterized in terms of their biomass, their activity, and their diversity. To address these questions, techniques as varied as flow cytometry, molecular biology, biochemistry, molecular isotopic tools, and electrochemistry are used. These different methods are described with their advantages and disadvantages for different types of biotopes (water, soil, sediment, biofilms). A final important point discussed in this chapter concerns the isolation of microorganisms from the environment and their culture in the laboratory. Without being exhaustive, this chapter emphasizes the importance of using appropriate and efficient methodological tools to properly explore the still mysterious compartment of microorganisms in the environment.

2 Sampling Techniques in Microbial Ecology

2.1 Soils Sampling Techniques

Sampling of soil for microbial ecology studies can be achieved with a shovel, but more usually with a hand auger (diameter between 1 and 10 cm) or a vehicle-mounted hydraulic auger, which allows samples to be collected at a constant depth. The depth is important because it influences many parameters (oxygen, temperature, water content, concentration of organic matter, etc.). The depth studied varies according to the questions addressed. It can reach several meters (Box 17.1), but more often it is the layer between 0 and 30 cm that is studied. The techniques described above (shovel or auger) yield soil masses ranging from a few hundred grams to several kilograms.

Box 17.1: Sampling the Deep Biosphere

Michel Magot

Institut des Sciences Analytiques et de Physico-chimie pour l’Environnement et les Matériaux (IPREM), UMR CNRS 5254, Université de Pau et des Pays de l’Adour, B.P. 1155, 64013 Pau Cedex, France

The results of a study published in 1998 sounded like a thunderclap in the sky of microbial ecology. The authors suggested that the amount of biomass hidden in the deep Earth subsurface, down to about 4,000 m deep, would be equivalent to or higher than that of all living organisms on its surface (Whitman et al. 1998)! The total population of prokaryotes, the only denizens of the deep, was estimated at between 25 and 250 × 10²⁸ cells, while about 15 years ago most microbiologists believed that the Earth subsurface was sterile beyond a few tens of meters deep. Why did it take more than a century of microbiological research to estimate such a vast unknown biodiversity? The main reason undoubtedly lies in the difficulty of obtaining uncontaminated geological samples or deep fluids. Three main ways to collect microbiologically representative deep subsurface samples will be briefly described below.

Collecting Fluid Samples from Pre-existing Wells

This is a priori the simplest and cheapest method. This is what the American microbiologist Edson Bastin did during the 1920s to study sulfate-reducing bacteria from American oilfield waters, suggesting for the first time that bacteria could live in the deep subsurface. Whether collecting petroleum fluids or water samples from deep aquifers, the technique is the same: in principle, it just requires a production or monitoring well drilled in the geological formation of interest, from which liquid samples are collected by following some basic rules to maintain aseptic conditions and avoid contact with oxygen. Studies based on such samples, either by conventional microbiology or molecular methods, show that things are really not that easy! Multiple sources of contamination can lead to erroneous conclusions. The two main ones result from (i) the drilling fluids used during the drilling of the well, which will be discussed below, and (ii) the colonization of the entire inner part of the well (the tubing) by a biofilm, which most often has nothing to do with the indigenous subsurface microorganisms (Magot 2005). A solution is nevertheless available to collect representative subsurface samples from wellheads: it requires the sterilization of the tubing throughout its length, i.e., sometimes a few thousand meters! The operation is obviously extremely cumbersome and costly, since it uses mechanical and chemical treatments involving specific equipment and specialized teams and lasts several weeks. But it is efficient, and the relevance of this approach has been demonstrated (Basso et al. 2005).

Deep Drillings Dedicated to Microbiological Studies

Drilling operations that take into account all the constraints of microbiological sampling are the dream of any scientist interested in the deep biosphere. This has been done, albeit with some compromises, in the context of two major research programs (“Subsurface Science Microbiology” of the US Department of Energy – DOE – and the Integrated Ocean Drilling Program – IODP) and some more specific operations. In the DOE program, for which drilling down to a few hundred meters deep was carried out, specific drilling tools were designed and tested. Samples (cores) could be packaged in sterile materials, protecting them from contamination before being brought to the surface (Box Fig. 17.1). More generally, precautions were taken to minimize or accurately assess the contamination introduced by drilling fluids. Indeed, beyond several tens of meters deep, a more or less viscous fluid is needed to lubricate and cool the drill bit and bring the cuttings up to the surface. These fluids may contaminate the porous geological material. As they are virtually impossible to sterilize, it is generally preferable to estimate the level of sample contamination by the use of tracers. For this, either chemical agents such as boron or physical tracers such as fluorescent latex microspheres having the size of bacteria can be used. Quantification of these tracers in the samples later allows the level of potential contamination to be estimated. Finally, estimating contamination ideally also requires the microbiological study of all tools and drilling fluids used: comparison of the bacterial communities of drilling fluid samples with those of collected cores provides information on exogenous contamination. Core sampling for microbiological studies also imposes harsh technical and logistical constraints: collected cores must be rapidly transferred into a sterile anaerobic device, e.g., an anaerobic glove box, and then subsampled or crushed under sterile anaerobic conditions. The implementation of such research programs requires considerable funding that can be obtained only through large national and international collaborative projects.

Box Fig. 17.1
figure 1

Extraction of a core from a geological clay formation (Photograph: Laurent Urios)

Access to Deep Geological Formations Through Excavations

The last and simplest way to access a deep geological formation is simply to go down there with all the needed equipment! Mines are an obvious means of access to deep geological layers, and highly relevant studies were, for instance, conducted in South African gold mines that reach depths of about 5,000 m (Onstott et al. 1997). Underground laboratories represent other privileged sites. Some were developed for the study of underground storage of nuclear waste in several countries, one of the aims being to study the impact of microorganisms on the storage containers. Several sites exist in North America and Europe, in various geological formations such as granites (Äspö in Sweden) (Pedersen 2001) or clays (e.g., Bure in France, Mont Terri in Switzerland, etc.). In all these cases, and whatever the depth, access to the geological site under study is direct, and collecting samples simply requires drilling small-diameter cores, several meters deep, into the wall of the excavation. Precautions to prevent contamination must obviously be taken, but the difficulties and constraints are in no way comparable to those of the deep wells mentioned above.

The extension of research on the deep biosphere through these techniques will make it possible to describe the immense biodiversity of the subsurface of our planet and to try to understand how bacterial communities have persisted in such extremely nutrient-limited environments, isolated from any contribution from the surface for tens of thousands, even hundreds of millions, of years. The adventure is just beginning.

A major element to be taken into account during sampling is the quantity and quality of the organic materials, which will in part determine the functioning of the heterotrophic microflora that represents the majority of soil microorganisms. For this reason, many studies take into account the organization of the soil into horizons. The superficial horizons (0–30 cm) are the most studied, because the most intense microbial activity is observed there. For the same reason, the rhizospheric soil (under the influence of the roots) is often the object of particular attention. Rhizospheric soil is sampled by shaking the roots to remove loosely adhering soil; the soil that remains attached to the roots constitutes the rhizospheric soil.

In all studies, it is important to adapt the size and the number of samples to the spatial heterogeneity existing in the soil (due both to the soil itself and to the presence of vegetation cover), so that this heterogeneity neither masks the effect under study nor is confounded with it. To obtain a representative sample of the study site, it is better to collect several soil cores, pool them, mix them, and sieve them (through a 2-mm mesh sieve) to create a homogeneous sample before starting the analysis. To avoid cross-contamination between sites, the auger must be cleaned between sites by removing as much soil as possible, rinsing it if necessary with ethanol (70 %), and then drying it with clean paper.

It is sometimes necessary to examine soil organization at the microscale, the scale of the soil aggregate. In this case, soil aggregates (of the desired size, often a few millimeters) (Grundmann and Debouzie 2000) are sorted from the soil core under a binocular magnifier with sterile tools before analysis. Some approaches aim to distinguish the microorganisms located inside aggregates from those located outside (and therefore more or less exposed to mobile water with charged ions and soluble elements circulating in the ground). In this case, gentle soil-washing methods can be used to recover the microorganisms localized on the surface of aggregates (Ranjard et al. 1997), the microorganisms remaining with the aggregates being predominantly those located inside.

2.2 Sampling Techniques in Aquatic Environment

Although the development of automatic sensors now permits the in situ acquisition of many hydrological parameters related to the physiology of living organisms, water sampling still represents a primary and obligatory step in the collection of large amounts of data from the marine environment and especially for the collection of microorganisms. Collection can be considered the process of obtaining an aliquot of the studied aquatic environment. Sampling consists of retaining, preserving, and storing a portion of this collected water for analytical purposes. It should be noted that sampling is only relevant if the water collection is conducted correctly and is representative of the studied environment. In some cases, water collection and sampling are merged, especially when contamination absolutely has to be prevented. Collection and pretreatment of water and sediment samples in the field present certain difficulties in comparison to laboratory work. There are also difficulties associated with working on a vessel, where the working conditions are rarely comparable to those in laboratories. This section documents methods and useful equipment for collecting marine samples and processing them for various downstream applications, including analyses of nutrients and gases, biomass concentration, cell abundance, and experimentation, both offshore and near the coast, including in shallow lagoons.

2.2.1 Equipment for Water Collection

2.2.1.1 Collection of Surface Water by Hand

This procedure is only applicable to collection of surface water from the water’s edge (e.g., at a beach or harbor deck) or from a small boat at a short distance from the coast. This type of sampling does not require specific equipment. A surface sample can be collected using a simple bucket, assuming that this simple device is compatible with the subsequent analyses. Nevertheless, if possible, it is better to collect water samples directly in the flasks that will be used to store the samples, to reduce contamination due to intermediate handling. In all cases, the hands of the sampler have to be protected by polyethylene gloves, and sampling is performed by immersing the bucket or bottle under the surface, as far as possible from the boat or deck. A “fishing rod” system can be helpful in collecting water at a distance from a boat or deck.

2.2.1.2 Sampling with Hydrological Bottles

This technique is used to collect water at depths below the surface as well as at the surface when hand sampling is impossible or inappropriate. First, the boat must be equipped with a winch system to unwind a cable on which hydrological bottles are attached. On small boats, hand winches are suitable for this purpose. In all cases, it is necessary to ensure that the chemical quality of the cable is suitable for the chemical analysis and experiments to be conducted on the samples. In addition, it is essential to estimate the depth of sampling accurately. One suitable approach is to equip the winch with a counting pulley; another is to mark some lengths along the cable. However, the best approach is to install a pressure sensor or gauge at the end of the cable.
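
Converting a pressure reading into an approximate sampling depth is simple hydrostatics. The short sketch below (Python) is illustrative only: it assumes a constant mean seawater density and ignores the full oceanographic equations of state used in real CTD processing.

# Illustrative only: estimate sampling depth from a gauge pressure reading.
# Assumes a constant mean seawater density; operational CTD processing uses
# latitude-dependent gravity and the seawater equation of state instead.

RHO_SEAWATER = 1025.0  # kg/m^3, assumed mean density
G = 9.81               # m/s^2

def depth_from_pressure(p_dbar: float) -> float:
    """Approximate depth (m) from gauge pressure in decibars (1 dbar = 1e4 Pa)."""
    return (p_dbar * 1.0e4) / (RHO_SEAWATER * G)

if __name__ == "__main__":
    for p in (10, 100, 1000):  # dbar
        print(f"{p:5d} dbar -> ~{depth_from_pressure(p):7.1f} m")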

The first hydrological bottle was developed in 1910 by the oceanographer Fridtjof Nansen. This bottle was entirely made of brass and remained the typical oceanographic instrument until 1966, when Shale Niskin designed a new bottle from PVC (polyvinyl chloride), a material that was cheaper and easier to use. The Niskin bottle is currently the most commonly used device for all hydrology work. The simplest bottle is a PVC cylinder that can be hermetically sealed at both ends by removable valve systems and that has a system for attaching it to a cable (Fig. 17.1). Each hydrological bottle is equipped with an air inlet screw in the upper part and a draw valve at the bottom that permits sampling of the collected water. These two parts are always closed during water collection at depths below the surface. The bottle is opened before it is hooked onto the cable and positioned under water at the required depth. Once this depth is reached, the bottle is closed with the help of a stainless steel block, called a messenger, launched along the cable from the surface (Fig. 17.2). Automatic closure can also be achieved with either a pressure sensor or by remote control using an electrical signal sent from the surface. Niskin bottles are available in various volumes from 1 to 100 l.

Fig. 17.1
figure 2

A Niskin bottle at sea. (a) Bottle positioned at the sampling depth. Valves are open. (b) The “messenger” triggers the closure. (c) The valves are closed. The water contained in the cylinder is isolated from the outside. The bottle can then be brought on board (Photographs: Courtesy of Centre d’Océanologie de Marseille)

Fig. 17.2
figure 3

How does a Niskin bottle function? The bottle is lowered to the selected depth with valves opened. A messenger made of steel is launched from the surface and glides along the cable. When this messenger hits the bottle support, it releases the two spring-loaded valves. Water is then trapped in the bottle and isolated from the water outside the bottle (Drawing: M.-J. Bodiou)

However, the closure mechanism (elastic, spring) is often located inside the bottle, i.e., in contact with the water sample. Some analyses require equipment made of materials that are even more “clean,” i.e., totally inert and easily decontaminated, such as glass, polycarbonate, or Teflon-coated plastic. In addition, to avoid contamination, some bottles are equipped with Teflon-coated tensioners. In some extreme cases, it is preferable to use bottles with external springs to avoid contact with the sample water. Most of the time, a hydrological bottle is opened when it is attached to the cable. However, this technique may cause some contamination during the period of exposure to the air prior to immersion or during the passage through the surface film of seawater, which is known to be enriched with contaminants (Box 17.2). To avoid this type of problem, especially whenever uncontaminated samples need to be obtained, for instance, for chemical analysis of trace metals in seawater, the use of specific hydrological bottles that are equipped with external elastic closures and that can be closed while immersed (Go-Flo bottle; Fig. 17.3) is recommended. The bottle is then opened only under the surface of the water (usually at a depth of approximately 10 m), just before the vertical cast. In this case, it is imperative that the cable is stainless steel or Kevlar.

Box Fig. 17.2
figure a

Techniques for sampling the surface microlayer of aquatic ecosystems. (a) Membranes; (b) rotating drum; (c) glass plate; (d) metal screen (Photographs: Philippe Lebaron)

Fig. 17.3
figure 4

Go-Flo bottle on the line, ready to be lowered into the water. The Go-Flo has the advantage of passing through the sea surface microlayer closed, thereby avoiding its inner surface being coated with a surface film rich in contaminants (Photograph: Patrick Raimbault)

Box 17.2: Sampling the Surface Microlayer of Aquatic Ecosystems

Philippe Lebaron

The surface microlayer (SML) of aquatic ecosystems has been defined as the top 1–1,000 μm of the water surface, i.e., the interfacial region where many important bio-physicochemical processes and exchanges of gases take place. The SML plays an important role in photochemical and biologically mediated transformations of organic matter at the sea surface. Depending on the type of transformations occurring in these layers, they may have important consequences for the transfer of pollutants to marine food webs. Therefore, the SML has a crucial role in marine environmental protection and global change (Liss and Duce 1997). The SML of natural waters has generally been considered to be enriched, relative to the underlying water, in various chemical and microbiological components. There are more than 20 published techniques to collect the SML, but very few are suitable for simultaneous sampling of chemical and biological parameters (Agogué et al. 2004) (Box Fig. 17.2).

Membranes

To collect bacteria living in the first 10–20 μm of the SML, hydrophobic membranes are very efficient since they collect viable bacteria by electrostatic forces. Hydrophilic filters are much less efficient at collecting these bacteria. Membranes are placed on the surface of the water using a clamp. The hydrophobic Teflon membrane is often used for the isolation of bacteria because the adsorption of bacteria on its surface is lower and it provides higher rates of recovery.

The Glass Plate

Glass plates, which collect a slightly thicker layer (approximately 200 μm), are often preferred for the analysis of organic pollutants and heavy metals. Before use, the glass plate (~0.25 × 0.35 m and 4 mm thick) is cleaned thoroughly. The plate is introduced vertically into the water, is then withdrawn vertically from the water very slowly (0.1 m·s⁻¹), and is wrung between two Teflon plates. The film adhering to the sides of the plate is then transferred to a clean glass bottle. Sampling is very slow (about 1 l collected in 45 min).
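
The slowness quoted above follows directly from the geometry: each dip of the plate recovers only the thin film wetting its two faces. The arithmetic below (Python) is a rough illustration using the plate dimensions and film thickness given in the text; the dip cycle time is an assumed value.

# Rough illustration of why glass-plate sampling of the SML is slow.
# Plate dimensions and film thickness are those quoted in the text;
# the dip cycle time is an assumed value.
plate_area_m2 = 0.25 * 0.35    # ~0.25 x 0.35 m plate
film_thickness_m = 200e-6      # ~200 um film recovered
faces = 2                      # film adheres to both faces of the plate

vol_per_dip_l = faces * plate_area_m2 * film_thickness_m * 1e3  # litres
dips_per_litre = 1.0 / vol_per_dip_l
minutes_per_dip = 1.5          # assumed dip-and-wipe cycle time

print(f"~{vol_per_dip_l * 1e3:.0f} mL per dip")
print(f"~{dips_per_litre:.0f} dips and ~{dips_per_litre * minutes_per_dip:.0f} min per litre")

With these figures, each dip yields roughly 35 ml, so collecting 1 l indeed takes on the order of 45 min, consistent with the rate stated above.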

The Rotating Drum

The SML collector consists of a rotating cylinder, made of glass or of a stainless steel roller coated with a very resistant ceramic material. A large Teflon blade is pressed tightly against the surface of the cylinder to continuously remove the film and adhering water (around 60–100 μm), which are collected in a clean glass bottle. During the collection operation, the apparatus is pushed ahead of the boat at low speed by means of an electric motor. The speed has to be adapted to the currents, wind speed, etc. This equipment is suitable for relatively calm weather but cannot be used effectively in rough water.

The Metal and Nylon Screens

A rectangular (approximately 0.6 m × 0.75 m) framed screen of stainless steel (or aluminum when applicable) is lowered vertically through the water, then oriented horizontally and raised through the SML. An alternative consists in merely lowering the sampler until it touches the surface. This second procedure may be of great interest when the concentration of particles in surface waters is high. In all cases, efficiency depends on the mesh size and open space, which explains why different thicknesses are reported in the literature. This technique is nonselective because it does not depend upon adsorption. Metal can be replaced by nylon when working on the chemistry of the SML.

Sampling Strategies

The time required to collect water from the SML varies considerably depending on the type of sampler. Because the SML is patchy, important variations may therefore exist at both spatial and temporal scales. This is probably the most difficult challenge in analyzing this biotope. One way to integrate this natural variability is to collect the SML over a few hours at the same station and to homogenize the sample. The glass plate technique generally requires the longest sampling time (a few hours to collect a few liters). However, because the techniques do not require the same time to collect a given volume, it is important to subsample at different time intervals in order to allow further comparisons. These intervals should be spaced regularly over the longest sampling period (generally that of the glass plate).

Go-Flo bottles are tubular and are available in various sizes (5, 10, 30 l, and so on). The top and bottom of each tube are equipped with stopper balls, which must be rotated 90° to close them. The bottle goes into the water closed, to prevent any surface-level contaminants or pollutants from entering it. The bottle is lowered into the sea on a steel cable. As the bottle is lowered past the surface, the increased water pressure causes the pressure release valve to pop in and the balls at the top and the bottom of the bottle to roll open by 90°. As a result, the bottle is contaminated neither on deck nor, as it is lowered into the water, by the uppermost layer of seawater, which is contaminated by its interaction with the air. Once the required depth has been reached, a “messenger” weight is slid along the cable to the bottle and drops onto the closing mechanism. The bottle, now hermetically sealed, is then brought back up to the surface.

Originally, to obtain multiple water samples during the same vertical cast, several hydrological bottles had to be suspended one after the other on the cable at certain distances from each other. A messenger was attached to each bottle (except the deepest one) before lowering. Once all the bottles were on the line, the line was lowered to the desired depth and 2 min were allowed to pass. A messenger was sent down and a few minutes were allowed to pass to ensure that all the bottles closed. The line was reeled in and the bottles were unloaded as they came up. Now, water samples are collected more rapidly with the help of a “rosette” system on which a large number of hydrological bottles (12–36) are attached to a circular steel frame (Fig. 17.4). The rosette of hydrological bottles is typically connected to sensors attached to the center of the frame that provide real-time information on the hydrologic characteristics of the water column (e.g., temperature, salinity, oxygen, pH, fluorescence, and optical properties). This information is typically used to select the water collection depths precisely. Closure of the bottles is activated by the operator using an electrical impulse, usually as the rosette is being lifted. Water samples are obtained during the upcast, and the closure of the bottles is tripped while the package is stopped or still moving slowly.

Fig. 17.4
figure 5

Sampling rosette with 24 Niskin bottles. (a, b) Niskin bottles equipped with draw pipes, ready for sampling. (c) Rosette on the deck (Photographs: Patrick Raimbault)

The collected water is then brought on board very quickly and can be immediately used for chemical analysis or for experiments with microorganisms (Fig. 17.5). Nevertheless, the rapid rise from deep waters to the surface may cause large variations of some physical parameters such as pressure. Thus, for physiological studies of microorganisms that are adapted to grow under very high pressures (such as bacteria), microbiologists have developed bottles able to maintain the hydrostatic pressure during water collection and during sampling on board (cf. Sect. 17.3).

Fig. 17.5
figure 6

Teflon pump used for “ultra-clean” sampling (Photograph: Courtesy of Sophie Bonnet)

It is a great challenge to sample seawater across interfaces, such as the halocline or the redoxcline, to investigate trace metal distributions. With 5- to 10-l sampling bottles mounted on a wire or a CTD rosette, a maximum vertical resolution of 5 m can be obtained. For the detection of small structures in the vertical distribution of trace metals across the redoxcline, a CTD bottle rosette is not sufficient. Therefore, a PUMP–CTD system has been developed that permits water sampling at high vertical resolution (down to 1 m) along a vertical profile.

2.2.1.3 Sampling with a Pump

Some analyses or experiments dealing with trace elements (e.g., some metals or organic compounds) require water collection and sampling without any contact with the air. In these cases, a pumping system is better than hydrological bottles (Fig. 17.6). The upstream end of the pump is located below the surface, while the downstream end is located in a clean room to which the seawater is brought by pumping. Thus, the water sample is never in contact with the atmospheric air of the ship. Large quantities of water can be collected in a very short time (10 l in a few minutes), depending on the pump flow and the pressure drop along the pipes. In all cases, the objective is not to contaminate the water sample during the collection step at sea or the sampling operation on board. Thus, the whole system (pump and pipes) is entirely Teflon coated and the environment is protected from external contamination. This type of sampling requires more equipment and a heavier installation on board but allows the collection of large volumes of water and, above all, better protection against contamination. Unfortunately, the maximum sampling depth depends on the suction that can be created at the surface and cannot exceed 140 m. For deeper clean samples, only the use of Go-Flo bottles attached to a cable of inert material is possible.

Fig. 17.6
figure 7

In situ pump on a hydrological cable (Photograph: Patrick Raimbault)

2.2.2 How to Collect Particulate Matter?

To collect and often to concentrate suspended matter from seawater, the most commonly used technique is filtration, which permits the dissolved component to be separated easily from the particulate fraction. The choice of filtration membrane or filter depends on many criteria, such as the size of the particles to be collected, the volume to be filtered, and the analytical methods that will subsequently be used. Glass fiber filters, which have high filtration capacities, are commonly used for analysis of many parameters in the marine environment (e.g., phytoplanktonic pigments and the chemical composition of suspended matter). However, because their nominal porosity is on the order of 0.7 μm, microbiologists often prefer to use membranes that have better retention efficiency, with porosities on the order of 0.2 μm. These membranes are made of various materials (e.g., cellulose, cellulose esters, polycarbonate, Teflon, and metal). The range of membrane filters available is therefore very wide and can satisfy most analytical and experimental purposes concerning particulate matter and organisms collected by filtration.

In the ocean, particles are primarily of biological origin (e.g., phytoplankton, zooplankton, bacteria, and fecal pellets), and a small fraction is of nonbiological origin (e.g., dust, aerosols, suspended sediment, and continental inputs). Biological activity is concentrated in the surface layer, in approximately the first 100 m, where the light intensity is sufficient to permit algae to carry out photosynthesis. Below this photic zone, the particulate matter content decreases exponentially, and as a result, the water volumes needed to collect sufficient quantities of suspended particles for experimentation increase considerably. Hydrological bottles typically have volumes of a few liters, which are often insufficient for measuring the chemical composition of deep particles. Only in situ pumps (ISP) are able to collect large amounts of particulate matter at depth (Fig. 17.6). These autonomous pumps, powered by batteries, are suspended along a cable. Once at the selected depth, the pump is started and filters several hundred to more than 1,000 l of water in a few hours. On the same cable, between 5 and 12 pumps can be deployed simultaneously at different depths. In situ pumps can be submerged in ocean water for up to several hours at a time. All of the water that the pump draws in passes through filters, which retain the very fine particles. Several filters can be stacked to achieve a size-fractionated filtration. When the pumps are brought back to the surface after several hours, the material captured on the filters can be analyzed.
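
As a rough, purely illustrative calculation of this point (the surface concentration, e-folding depth, and target particle mass below are assumed values, not data from the text), one can estimate how the volume that must be filtered grows with depth when particulate matter decreases exponentially:

# Purely illustrative: required filtration volume versus depth, assuming an
# exponential decrease of particulate matter below the photic zone.
# All numerical values here are assumptions chosen for illustration.
import math

c0_ug_per_l = 50.0          # assumed particulate concentration at 100 m (ug/L)
efolding_depth_m = 1000.0   # assumed e-folding scale of the decrease
target_mass_ug = 5_000.0    # assumed particle mass needed for one analysis

def volume_needed_l(depth_m: float) -> float:
    conc = c0_ug_per_l * math.exp(-(depth_m - 100.0) / efolding_depth_m)
    return target_mass_ug / conc

for z in (100, 500, 1000, 3000):
    print(f"{z:5d} m : ~{volume_needed_l(z):7.0f} L to filter")

With these assumed numbers, the required volume rises from about 100 l near the base of the photic zone to well over 1,000 l at 3,000 m, which is why only in situ pumps are practical for deep particulate sampling.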

2.2.3 How to Collect Living Organisms in the Water Column?

The simplest and most commonly used device for collecting living organisms (plankton) floating and drifting in the water column is the plankton net. The amount of water filtered is large, and the gear is suitable for both qualitative and quantitative studies. Plankton nets are of various sizes and types. Sampling success largely depends on the selection of suitable gear, the mesh size of the netting material, the time of collection, the water depth of the study area, and the sampling strategy. With minor variations, the plankton net is conical in shape and consists of a ring (either rigid or flexible and either round or square), a filtering cone made of nylon, and a collecting bucket for the organisms (Fig. 17.7). The collecting bucket can be detached and easily transported to a laboratory. The different nets can be broadly placed in two categories: open nets, used mainly for horizontal and oblique hauls, and closing nets with messengers for collecting vertical samples from desired depths. Horizontal collections are mostly carried out in the surface and subsurface layers. In oblique hauls, the net is usually towed above the bottom. The disadvantage of this method is that the sampling depth may not be accurately known. A vertical haul is made to sample the water column: the net is lowered to the desired depth and hauled slowly upward. The zooplankton sample collected is from the water column traversed by the net.

Fig. 17.7
figure 9

A zooplankton triple net is brought up on board (Photograph: Courtesy of Nicole Garcia)

The standard zooplankton net has a mesh size of 250 μm, a 25-cm frame diameter, and a 50-cm-long net. A 250-μm net retains small animals such as crustaceans while allowing most algae and protozoa to pass through. For unicellular microorganisms (mainly phytoplankton), it is necessary to use a very fine mesh, on the order of 64 μm. For quantitative plankton sampling, it is imperative to know the actual amount of water that passed through the net. For this purpose, an instrument called a flow meter is installed at the mouth of the net.
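
The flow-meter reading is converted to a filtered volume by multiplying the net-mouth area by the distance of water towed through the net; abundances then follow by dividing counts by that volume. The sketch below (Python) uses the 25-cm frame diameter quoted above, but the calibration factor, revolution count, and organism count are made-up example values.

# Illustrative flow-meter calculation for quantitative plankton sampling.
# The calibration factor, revolutions, and counts are made-up example values.
import math

mouth_diameter_m = 0.25              # 25-cm frame diameter (from the text)
calibration_m_per_rev = 0.3          # assumed: metres of tow per flow-meter revolution
revolutions = 500                    # assumed flow-meter reading after the haul

mouth_area_m2 = math.pi * (mouth_diameter_m / 2) ** 2
distance_m = revolutions * calibration_m_per_rev
volume_filtered_m3 = mouth_area_m2 * distance_m

zooplankton_counted = 1200           # assumed count from the collecting bucket
abundance_per_m3 = zooplankton_counted / volume_filtered_m3
print(f"Volume filtered: {volume_filtered_m3:.2f} m^3")
print(f"Abundance: {abundance_per_m3:.0f} individuals per m^3")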

2.2.4 Collection of Particles

A significant fraction of organic matter produced in the oceanic surface layer has negative buoyancy and will tend to settle to the bottom. This sedimentation exports a large amount of organic matter and chemical elements that constitute an uninterrupted flow often called “marine snow.” It is a significant means of exporting energy from the light-rich photic zone to the aphotic zone below. Although most organic components of marine snow are consumed by microorganisms in the first 1,000 m of sedimentation, a fraction reaches the ocean floor and sustains the development of the deep benthic ecosystem. The small percentage of material not consumed in shallower waters becomes incorporated into the ocean floor, where it is further decomposed through biological activity.

The quantification of this vertical flux of matter and microorganisms, including dead or dying animals and plants (plankton), fecal matter, sand, soot, and other inorganic dusts, may be performed with specific devices called sediment traps. A sediment trap normally consists of an upward-facing funnel that directs sinking particulate materials toward a mechanism for collection and preservation. There are many types of sediment traps, ranging from detachable cylindrical sampling tubes mounted with lead weights at the bottom to sophisticated systems for sequential collection over long periods of time, both in the water column and at the bottom of the ocean (Fig. 17.8). Typically, traps operate over an extended period of time (weeks to months), and their collection mechanisms may consist of a series of sampling flasks that are cycled through to allow the trap to record the changes in sinking flux with time. A trap is often moored at a specific depth in the water column (usually below the euphotic zone or mixed layer) in a particular location. However, in some cases, a floating trap, also called a Lagrangian trap, which drifts with the surrounding ocean current, can be used. In any case, the construction of the sediment trap array ensures a permanently vertical position of the sampling tubes during deployment.

Fig. 17.8
figure 10

Launching a model PPS5 sequential sediment trap, consisting of a huge cone with an opening of a square meter (a) and a revolving wheel with sampling flasks in which particles are collected during sequential periods (b) (Photographs: Courtesy of Nicole Garcia)

2.2.5 Sediment Sampling

The main problems posed by the study of marine sediment are associated with its heterogeneity, both horizontal and vertical. In addition, for many workers in the field of oceanography, it is necessary to recover unmixed continuous sediment samples, including sediment/water interfaces, that represent sediment fractions and surface deposits that are as little disturbed as possible. For large samples, bins or grabs that can recover up to 1 m³ of sediment are frequently used to study meio- and macrobenthic communities, for instance. A grab is used to obtain sediment samples from the seafloor. The grab is lowered to the seabed on a steel cable with its “jaws” open. As soon as the jaws touch the bottom, the valve that holds them open is released. As the grab is pulled back up, the jaws close, scooping up sand and sediment from the seabed. However, the use of bins or grabs is not recommended for work on physiology at the water–sediment interface, as these types of equipment permanently mix and disturb sedimentary layers. To avoid mixing and disturbing the materials sampled, it is preferable to use a coring technique.

In areas directly accessible by foot (e.g., in a low-tide zone) or by diving, sampling of small volumes can easily be performed “by hand” using simple tools such as hand corer sediment samplers. In shallow waters, a sampler can be pushed into the sediment using the handles on the head. If the water depth permits, extension handles of 4.5 m or 6 m can be used to sample from boats or docks. In deeper water, the sampler can be dropped by attaching a line to the clevis located on the head assembly between the handles. A simple valve allows water to flow through the sampler during descent and to close tightly upon retrieval, minimizing sample loss. Such a corer typically takes a 50-mm-diameter sample and is 50 cm long. A box corer is a marine geological sampling tool for use in soft sediments in lakes or oceans. It is deployed from a research vessel with a deep-sea wire and is suitable for sampling at any water depth. It is designed to minimize disturbance of the sediment surface by bow wave effects, which is important for quantitative investigations of the benthic micro- to macrofauna, geochemical processes, sampling of bottom water, and sedimentology.

To obtain more accurate and less disturbed samples, it is preferable to use coring “tubes” recommended primarily for sampling the water–sediment interface for biological and physiological studies. Coring tubes can obtain long cores but have small cross sections. A coring tube is a long tube, typically made of metal, with a sealing system at its base. The coring tube is driven into the sediment to collect a truly undisturbed sediment sample from the seabed, including the sediment–water interface and overlying supernatant water. Many models have been developed that differ primarily in their modes of penetration (e.g., gravity, sinking, and vibration), shutter modes, and base types (e.g., diaphragms and pneumatic valves).

Recent developments in coring technology have enabled the use of an assembly of multiple corers for collecting several cores simultaneously in a small area (Fig. 17.9). The Multicorer is designed to obtain multiple samples of sediment from the seabed at great depths. The Multicorer is used for sampling in chemical, geochemical, and biological applications.

Fig. 17.9
figure 11

Octopus-type multitube corer is brought up on board (a). This is a sampler that permits simultaneous collection of 8 small sediment cores (less than 1 m in length) to study biogeochemical processes at the water-sediment interface (b) (Photographs: Courtesy of Nicole Garcia)

The Multicorer is used to obtain undisturbed sediment samples from the surface of the seabed. It consists of a system to which a series of tubes, measuring approximately 4 cm in diameter, are attached. Above the system, a weight is mounted, and this falls onto the assembly system when the Multicorer touches the sediment. The falling weight drives the tubes into the seabed so that when they are raised again, each of them contains a drilling core with sediment from the seafloor.

2.3 Deep-Seawater Sampling

While the dark ocean, below 200 m depth, represents the largest habitat, the largest reservoir of organic carbon, and the largest pool of microbes in the biosphere (Whitman et al. 1998), this realm has been much less studied than the euphotic ocean, certainly because of the difficulty of sampling (particularly maintaining high-pressure conditions) and the time and expense involved.

2.3.1 From the Discovery of Pressure-Adapted Microorganisms to Physiological and Molecular Adaptation to High-Pressure Conditions

The Challenger Expedition (1873–1876) is commonly credited as the historical beginning of deep-sea biology. The finding of live specimens at great depths refuted the azoic-zone theory (no life below 600 m depth) that had been suggested in the 1840s by Edward Forbes. A very good summary of the controversial concept of the azoic zone is provided in Jannasch and Taylor (1984). In 1884, Certes (1884) examined sediment and water collected from depths down to 5,000 m and cultured bacteria from almost every sample. In 1904, Portier used a sealed and autoclaved glass-tube device as a bacteriological sampler and reported counts of colonies from various depths and locations (Jannasch and Wirsen 1984). Research into the effects of high pressure on the physiology of deep-sea bacteria developed during the last century. A synthesis of that work can be found in ZoBell (1970), a pioneer in the study of the effects of hydrostatic pressure on microbial activities (ZoBell and Johnson 1949). ZoBell and Johnson (1949) began studies of the effect of hydrostatic pressure on microbial activity using pure cultures. “Barophilic” was the first term used to denote optimal growth at a pressure higher than 0.1 MPa, or a requirement for increased pressure for growth (ZoBell and Johnson 1949), but it was subsequently replaced following Yayanos (1995), who suggested “piezophilic” (from the Greek “piezo,” meaning pressure). Current terminology (reviewed by Fang et al. 2010 and Kato 2011) defines pressure-adapted microorganisms as either piezotolerant (similar growth rates at atmospheric and high pressure), piezophilic (more rapid growth at high pressure than at atmospheric pressure), or hyperpiezophilic (growth only at high pressure), with pressure maxima increasing in that rank order (highest for hyperpiezophiles). Organisms that grow best at atmospheric pressure, with little to no growth at increased pressure, are termed piezosensitive.
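
In practice this terminology reduces to comparing growth at atmospheric versus elevated pressure. The toy helper below (Python) encodes a simplified reading of these categories; the decision thresholds and function name are ours, not the formal definitions of the cited reviews.

# Toy sketch of the pressure-adaptation terminology summarized above.
# The decision rules are a simplified reading of the text, not the formal
# definitions given by Fang et al. (2010) or Kato (2011).

def classify_pressure_adaptation(growth_atm: float, growth_high_p: float,
                                 tolerance: float = 0.1) -> str:
    """Classify an isolate from its growth rates at atmospheric vs. high pressure."""
    if growth_atm <= 0 and growth_high_p > 0:
        return "hyperpiezophilic"      # grows only at high pressure
    if growth_high_p <= 0 and growth_atm > 0:
        return "piezosensitive"        # little to no growth at high pressure
    if abs(growth_high_p - growth_atm) <= tolerance * max(growth_atm, growth_high_p):
        return "piezotolerant"         # similar growth at both pressures
    return "piezophilic" if growth_high_p > growth_atm else "piezosensitive"

print(classify_pressure_adaptation(0.30, 0.31))  # piezotolerant
print(classify_pressure_adaptation(0.10, 0.40))  # piezophilic
print(classify_pressure_adaptation(0.00, 0.25))  # hyperpiezophilic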

Pressure-adapted microorganisms have been isolated from many deep-sea sites by researchers around the world. Isolates include representatives of the Archaea (both Euryarchaeota and Crenarchaeota), mainly from deep-sea hydrothermal vents, and of the Bacteria, from cold, deep-sea habitats. Most of the bacterial piezophiles have been identified as belonging to the genera Carnobacterium, Colwellia, Desulfovibrio, Marinitoga, Moritella, Photobacterium, Psychromonas, and Shewanella (reviewed by Bartlett et al. 2007). The membrane properties of piezophiles have been described, and other characteristics of piezophiles, including motility, nutrient transport, and DNA replication and translation under elevated hydrostatic pressure, have been explored (Lauro et al. 2008). Protein structural adaptation to high pressure has also been described in comparative studies of piezophilic and piezosensitive microorganisms (Kato et al. 2008). Recent studies have also highlighted that hydrostatic pressure influences not only the unsaturation ratio of membrane fatty acids but also that of intracellular wax esters (storage lipids), which accumulate in the cells of a piezotolerant hydrocarbonoclastic bacterium in the form of individual lipid bodies (Grossi et al. 2010). Finally, piezophilic bioluminescent bacteria may produce more light under high pressure than at atmospheric pressure, conferring on them an ecological advantage (Martini et al. 2013).

To obtain such results, experiments were performed using diverse “classical” high-pressure systems. Figure 17.10 shows the apparatus presently used, for example, in the Douglas H. Bartlett laboratory (Scripps Institution of Oceanography, USA) to perform high-pressure cultures and to study pressure effects on the physiology and metabolism of microbial strains.

Fig. 17.10
figure 12

Hyperbaric apparatus classically used to perform high-pressure cultivation. (1) Hand-operated high-pressure pump; (2) high-pressure vessels. Various systems used to hold culture incubated within high-pressure vessel: (3) sterile bags, (4) transfer pipettes, (5) multi-well plates (Photograph: Courtesy of Douglas H. Bartlett)

2.3.2 Microbial Activities Measured Under In Situ Pressure Conditions

Although the deep ocean supports a diversity of prokaryotes with functional attributes interpreted as adaptations to a pressurized environment (Lauro and Bartlett 2007; Nagata et al. 2010), the contribution of the natural microbial assemblages to the carbon cycle of the biosphere remains poorly understood. Recent reviews (Arístegui et al. 2009; Nagata et al. 2010; Robinson et al. 2010) strongly suggest reconsidering the role of microorganisms in mineralizing organic matter in the deep pelagic ocean. However, the majority of estimates of prokaryotic activity have been made after decompression, under atmospheric pressure conditions. Yet (i) results from experiments mimicking the pressure changes experienced by particle-associated prokaryotes during their descent through the water column show that rates of degradation of organic matter (OM) by surface-originating microorganisms decrease with sinking, and (ii) analysis of a large data set shows that, under hydrologically stratified conditions, deep-sea pelagic communities are adapted to in situ conditions of high pressure, low temperature, and low OM (Tamburini et al. 2013). Measurements made using decompressed samples at atmospheric pressure thus underestimate in situ activity (Tamburini et al. 2013). To obtain such results, specific pressure-retaining samplers have been developed; they are deployed from research vessels, manned submersibles (e.g., Shinkai 6500, JAMSTEC, Japan; Nautile, IFREMER, France; DSV Alvin, WHOI, USA), or remotely operated vehicles (ROVs).

Initial estimates of deep-sea microbial activity under elevated pressure were based on the unintentional experiment involving the “sandwich in the lunchbox” from the sunken research submarine Alvin, “incubated” in situ for more than 10 months at 1,540 m depth in the Atlantic Ocean (Jannasch et al. 1971). According to Jannasch et al. (1971), the crew’s lunch was recovered and “from general appearance, taste, smell, consistency, and preliminary biological and biochemical assays, […] was strikingly well preserved.” Based on subsequent studies carried out under in situ conditions of high pressure and low temperature, the Jannasch team concluded that deep-sea microorganisms were relatively inactive under in situ pressure and not adapted to high pressure and low temperature. However, Jannasch and Taylor (1984) offered the caveat that the type of substrate influenced the results and concluded, from laboratory experiments, that “barophilic growth characteristics have been unequivocally demonstrated.” These early observations of deep-sea microbial activity were accompanied by the development of pressure-retaining water samplers, with the conclusion from results of experiments employing these samplers that “elevated pressure decreases rates of growth and metabolism of natural microbial populations collected from surface waters as well as from the deep sea” (Jannasch and Wirsen 1973). Contrary to this early conclusion, virtually all data subsequently collected from the water column under in situ conditions have shown that the situation is the reverse, namely, that microorganisms autochthonous to depth are adapted to both the high pressure and the low temperature of their environment (Tamburini et al. 2013).

2.3.3 High-Pressure Retaining Deep-Seawater Samplers

A limited number of high-pressure vessels have been constructed during the past 50 years to measure microbial activity in the cold deep ocean and to evaluate the effects of hydrostatic pressure, as well as of decompression, on deep-sea microbial activity. Sterilizable pressure-retaining samplers for retrieving and subsampling undecompressed deep-seawater samples have been developed independently by three laboratories: Jannasch/Wirsen at the Woods Hole Oceanographic Institution (USA), Colwell/Tabor/Deming at the University of Maryland (USA), and Bianchi at Aix-Marseille University (Marseille, France) (Jannasch and Wirsen 1973; Jannasch et al. 1973; Tabor and Colwell 1976; Jannasch and Wirsen 1977; Deming et al. 1980; Bianchi and Garcin 1993; Bianchi et al. 1999; Tholosan et al. 1999; Tamburini et al. 2003). Extensive sampling equipment for cold deep-sea high-pressure work has also been developed by Horikoshi and his team (JAMSTEC, Japan), devoted exclusively to recovering new piezophilic microorganisms and to studying the effect of pressure on those isolates, as described in the Extremophiles Handbook (Horikoshi 2011). At least two other groups are developing pressure-retaining samplers, the Royal Netherlands Institute for Sea Research (NIOZ) and the National University of Ireland (Galway), but their designs or initial results have not yet been published.

Relatively few interactions have occurred between these laboratories; here, only the high-pressure serial sampler (HPSS), still in use today, is briefly presented (Fig. 17.11). More details can be found in Bianchi et al. (1999) and in Tamburini et al. (2003). The HPSS is based on a commercially available multi-sampling device that includes a CTD (Sea-Bird Carousel) and is equipped with 500-ml high-pressure bottles (HPBs) fitted on polypropylene boards, fully adaptable to 12- or 24-bottle Sea-Bird rosettes (Fig. 17.11a). The HPSS allows the collection, during the same hydrocast and down to a depth of 3,500 m, of several water samples from different depths that are maintained at ambient (in situ) pressure.

Fig. 17.11
figure 13

High-pressure serial sampler. (a) Photograph of the high-pressure serial sampler (HPSS) on board the R/V Urania (Italy) during the CIESM-Sub cruise in the Tyrrhenian Sea. Six high-pressure bottles (HPBs) were mounted with 12 Niskin bottles on a Sea-Bird Carousel equipped with a CTD (Conductivity – Temperature – Depth). (b) Diagrammatic representation of the filling of HPBs. For more details see Bianchi et al. (1999) and Tamburini et al. (2003) and see http://www.com.univ-mrs.fr/~tamburini (Photograph: Christian Tamburini)

HPBs are 500-ml APX4 stainless steel cylinders (75-mm OD, 58-mm ID, and 505-mm total length) with a 4-mm-thick polyetheretherketone (PEEK®) coating. The PEEK® floating piston (56-mm total length) is fitted with two O-rings. The screw-top endcap is covered with a sheet of PEEK to avoid contact between the sample and the stainless steel. Viton® O-rings are used to ensure that the system is pressure-tight; Viton® O-rings are chosen instead of nitrile O-rings to eliminate possible carbon contamination of the sample. The screwed bottom endcap is connected, via a stainless steel tube, to the piloted pressure generator. HPBs are autoclaved to ensure sterile sampling. The functioning of the HPBs is depicted in Fig. 17.11b: when the filling valve is opened at depth by the operator via the deck unit connected to the electric wire, the natural hydrostatic pressure moves the floating piston downward and seawater enters the upper chambers of two HPBs. The distilled water is flushed from the lower chambers to the exhaust tanks through a nozzle that acts as a hydraulic brake, preventing decompression during filling. Samples are collected in two identical high-pressure bottles filled at the same time through the same two-way valve (Fig. 17.11b). One of these bottles retains the in situ pressure, while the other, which lacks a check valve, is progressively decompressed as the sampler is brought back up to the surface. The decompressed bottle, like a classical sampling bottle (e.g., a Niskin bottle), is used in comparative fashion to estimate the effect of decompression on measurements of deep-sea microbial activity. Decompressed and high-pressure (ambient) samples are treated in exactly the same way. High-pressure sampling, subsampling, and transfer are performed using hydrostatic pressure (instead of gas pressure), thanks to a piloted pressure generator (described in Tamburini et al. 2009), in order to diminish risk during manipulation. In contrast to classical high-pressure pumps based on the alternating movement of a small piston associated with inlet and exhaust valves, the piloted pressure generator is based on a step-motor-driven syringe. A step motor with a high starting torque is connected to a high-precision worm screw operating the high-pressure syringe. The movement of the syringe is controlled by a digital computer fitted with a high-power processor. Subsampling is performed by applying a counterpressure (a maximum of 0.05 MPa) from the bottom of the HPB; opening the sampling valve allows the counterpressure to move the floating piston upward. The primary sample within the HPB is maintained under in situ pressure while the secondary sample (decompressed) is analyzed directly or else fixed (by adding a preservative solution such as formaldehyde), stored, and analyzed later.

To evaluate the state of the field of piezomicrobiology, Tamburini et al. (2013) compiled data from published studies of deep samples in which prokaryotic activities were measured under in situ pressure conditions and the results compared with those obtained using incubation at atmospheric pressure after decompression. The pressure effect (Pe) was calculated from this compiled data set (n = 52 pairs of samples maintained under in situ pressure versus decompressed and incubated at atmospheric pressure). Pe is defined as the ratio between the activity obtained under HP conditions and that obtained under DEC conditions (Pe = HP/DEC), where a ratio >1 indicates piezophily (adaptation to high pressure) and a ratio <1 indicates piezosensitivity. Calculation of Pe values has proven to be a useful diagnostic tool for evaluating the effect of decompression on metabolic rates in deep-sea samples. A Pe ratio >1 indicates that the deep-sea prokaryotic assemblage is adapted predominantly to the in situ pressure, and prokaryotic activity will be underestimated if the sample is decompressed and incubated at atmospheric pressure. On the other hand, if Pe < 1, inhibition by high pressure is indicated, and metabolic activity will be overestimated if the sample is decompressed (Tamburini et al. 2013). For deep seawater collected during stratified conditions (n = 120), the mean Pe was 4.01 (median 2.11), with 50 % of values between 1.50 and 2.82 and 90 % between 1.12 and 8.17. During stratified conditions, the prokaryotic assemblage was adapted to high pressure (Wilcoxon rank test, Pe > 1, p < 2.2 × 10⁻¹⁶), and metabolic rates have to be determined under in situ pressure conditions to avoid underestimating activity (Tamburini et al. 2013).
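
As a minimal numerical sketch of the Pe diagnostic described above (the paired rates below are invented illustrative values, and a one-sided Wilcoxon signed-rank test on log(Pe) is one reasonable way, not necessarily the exact published procedure, to test Pe > 1):

# Minimal sketch of the pressure-effect (Pe = HP/DEC) calculation described
# above. The paired rates are invented illustrative values, not published data.
import numpy as np
from scipy import stats

hp = np.array([2.4, 1.8, 5.1, 3.0, 0.9, 4.2])   # rates measured at in situ pressure
dec = np.array([1.1, 1.0, 1.6, 1.4, 1.1, 1.5])  # same samples, decompressed

pe = hp / dec
print("median Pe:", np.median(pe))

# One-sided Wilcoxon signed-rank test of log(Pe) against 0, i.e., Pe > 1
# (one reasonable choice; not necessarily the exact published procedure).
stat, p_value = stats.wilcoxon(np.log(pe), alternative="greater")
print(f"Wilcoxon statistic = {stat}, p = {p_value:.3f}")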

2.3.3.1 Hyperbaric System to Simulate Particles Sinking Throughout the Water Column

Biogenic aggregates (>500 μm in diameter), including marine snow and fast-sinking fecal pellets of large migrating macrozooplankton, constitute the majority of vertical particle flux to the deep ocean (Fowler and Knauer 1986; Bochdansky et al. 2010). Enzymatic dissolution and mineralization of particulate organic matter (POM) by attached prokaryotes during descent can provide important carbon sources for free-living prokaryotes, thereby playing important biogeochemical roles in mesopelagic and bathypelagic carbon cycling (Cho and Azam 1988; Smith et al. 1992; Turley and Mackie 1994, 1995). Attached prokaryotes, however, tend to comprise a small fraction (5 %) of the total prokaryotic biomass (Cho and Azam 1988), reaching somewhat higher proportions (10–34 %) only when the concentration of aggregates is high (Turley and Mackie 1995). The extent to which sinking particles contribute to microbial community structure in the deep sea remains an open question.

To better understand the metabolic capacity of prokaryotes of shallow-water origin, carried below the euphotic zone on sinking particles, to degrade organic matter in the deep sea, different approaches have proved informative. For instance, Turley (1993) applied increasing pressure to collections of sinking particles, obtained by trapping for 48 h at 200 m depth and containing microbial assemblages. These samples were placed in sealed bags incubated in pressure vessels at 5 °C. Pressures of 0.1, 10, 20, 30, and 43 MPa were applied stepwise (reached within 30 min, then maintained constant for 4 h) to simulate pressure at the deep water–sediment interface. In early work on heterotrophic microbial activity associated with particulate matter in the deep sea, comparative responses to moderate (surface water) versus extreme (abyssal) temperatures and pressures were used to diagnose prokaryotic origin (Deming 1985). Samples of sinking particulates, fecal pellets, and deposited sediments were collected in bottom-moored sediment traps and box cores at station depths of 1,850, 4,120, and 4,715 m in the North Atlantic and incubated for 2–7 days under both surface water and simulated deep-sea conditions (the latter in sterile syringes in pressure vessels at 3 °C).

To simulate more accurately the increase in pressure (and decrease in temperature) that prokaryotes associated with particles experience while sinking to depth, Tamburini et al. (2009) created the PArticulate Sinking Simulator (PASS) system (Fig. 17.12). The same high-pressure bottles (HPBs) developed for the HPSS were used to incubate samples while pressure was increased continuously (linearly) by means of a piloted pressure generator. The HPBs were rotated (semi-revolution) to maintain particles in suspension during incubation in water baths reproducing the temperature changes with depth. The PASS system can be used in the laboratory or at sea, depending on the samples being analyzed and the objectives of the study. Tamburini et al. (2006, 2009) focused on prokaryotic processes and particle degradation in the mesopelagic zone, i.e., after particles exit the euphotic zone and before they arrive at the deep seafloor, employing a realistic settling velocity.
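
The pressure ramp such a simulator must apply follows from hydrostatics (roughly 1 MPa per 100 m of depth) and the chosen settling velocity. The sketch below (Python) is illustrative only and is not the PASS control software; the settling velocity and depth range are assumed values.

# Illustrative only: the pressure ramp needed to simulate a particle sinking
# at an assumed settling velocity. This is not the actual PASS control code.
RHO_SEAWATER = 1025.0   # kg/m^3, assumed mean seawater density
G = 9.81                # m/s^2

def pressure_mpa(depth_m: float) -> float:
    """Hydrostatic (gauge) pressure in MPa at a given depth."""
    return RHO_SEAWATER * G * depth_m / 1.0e6

settling_velocity_m_per_day = 200.0     # assumed settling speed
start_depth_m, end_depth_m = 200.0, 2000.0

for day in range(0, 10, 3):
    depth = min(start_depth_m + settling_velocity_m_per_day * day, end_depth_m)
    print(f"day {day}: depth ~{depth:6.0f} m -> set ~{pressure_mpa(depth):5.1f} MPa")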

Fig. 17.12

PArticles Sinking Simulator (PASS). (a) Diagrammatic representation of the PASS composed of (1) several water baths where high-pressure bottles are rotated to maintain particles in suspension, (2) a cooler to control the temperature of the water baths, and (3) a programmable, computer-driven (piloted) pressure generator. (b) Photograph of the PASS on board the R/V Endeavor (USA) during the MedFlux cruise, where 3 HPBs were maintained at atmospheric pressure (ATM) while 3 others were connected to the piloted pressure generator, which continuously increased the hydrostatic pressure to simulate a fall through the water column (HP). See details in Tamburini et al. (2009) (Photographs – a3: Metro-Mesures Sarl, France; b: Christian Tamburini)

In summary, the effect of pressure on surface-derived bacteria attached to sinking organic matter is that their contribution to decomposition and dissolution of organic matter decreases with depth. This reinforces the conclusion that rapidly settling particles are less degraded during passage through the mesopelagic water column and, therefore, this phenomenon results in a labile food supply for bathypelagic and epibenthic communities (Honjo et al. 1982; Turley 1993; Wakeham and Lee 1993; Goutx et al. 2007). It also fits the results of in situ experimentation (Witte et al. 2003) and the calculation of recently proposed models (Rowe and Deming 2011) that show effective competition between metazoa and microorganisms for resources reaching the deep seafloor from the sea surface.

2.3.4 Concluding Remarks

Microbial communities found in the deep ocean comprise microorganisms autochthonous to the deep sea, adapted to some degree to the in situ temperature and pressure of the deep-sea environment, and allochthonous microorganisms transported from the sea surface via sedimenting particles, deep-migrating zooplankton, or other mechanisms.

Microbial metabolic rates are best measured under in situ conditions, which in the case of deep-sea microbial populations include high pressure, low temperature, and ambient nutrient concentrations (usually low). The metabolic activity of an allochthonous community decreases with depth, limiting its capacity to degrade organic matter sinking through the water column (Turley 1993; Turley et al. 1995; Tamburini et al. 2006). Such microbial communities may be inactive (but not dead) under the low-temperature, elevated-pressure conditions of the deep sea, yet they can become dominant, i.e., more numerous and metabolically active, when incubated at atmospheric pressure. Thus, community activity measured at atmospheric versus deep-sea pressure can reflect an entirely different mixture of community components.

3 Cytometry Techniques

3.1 Historical Background

Cytometry encompasses a set of techniques used to characterize and measure the physical properties of individual cells and cellular components. The best known and oldest of these techniques is microscopy. The microscope appeared in the seventeenth century and quickly gave rise to an extraordinary period of discoveries in biology, in particular under the leadership of two Dutch scientists, Antonie van Leeuwenhoek (1632–1723) and Jan Swammerdam (1637–1680), and an Italian, Marcello Malpighi (1628–1694).

Progress in the understanding of the fundamental laws of optics and in the construction of instruments in the fields of astronomy and navigation then led to a considerable evolution of microscopes in the eighteenth century. The increasing interest in biology and the discoveries made in the field of medicine in the nineteenth century stimulated the need for observation and for the development of more effective instruments. The microscope allowed Pasteur to undermine the theory of spontaneous generation and Koch to discover several pathogens. In the twentieth century, advances in electronics, informatics, and optics enabled the design of new generations of photonic and electron microscopes with previously unmatched performance. Meanwhile, progress in chemistry and in methods for fluorescence labeling of cellular constituents allowed the development of new applications of cytometry techniques in biology.

The growing need to automate cell counting, especially in the medical field for the analysis of blood cells, led to the emergence of a technique based on the analysis of cells carried in a liquid stream: flow cytometry. Designed by Moldavan in the early twentieth century, it was mainly developed in the 1970s by researchers at Los Alamos and Stanford in the USA, who coupled methods for measuring the volume or fluorescence of individual cells carried in a stream with electrostatic cell sorting under vital (nondestructive) conditions.

This technique helped to make extraordinary discoveries in the field of biological oceanography by detecting phytoplankton cells on the basis of their natural pigments. Thus, the teams of Sallie Chisholm and Robert Olson used it between 1979 and 1986 to discover two new cyanobacterial genera, Synechococcus and Prochlorococcus, and their quantification revealed the essential role played by these cyanobacteria in the functioning of the global ocean and in CO2 fixation.

Flow cytometry was quickly adopted in biological oceanography by phytoplankton specialists under the leadership of Daniel Vaulot in Roscoff (Marine Biological Station, France), who participated in the work carried out in the USA. Later, with the development of fluorescent cellular markers that could be applied to non-photosynthetic organisms, the technique was extended to microbial ecology, and in the 1990s it became a routine technique in many laboratories for the enumeration and characterization of the physiological state of bacterial cells. All these instrumental developments, combined with parallel advances in molecular biology, have revolutionized microbiology in general and environmental microbiology in particular.

3.2 The Different Cytometry Techniques

This chapter does not aim to describe the different microscopy techniques in detail, since books and websites are dedicated to them; the reader should refer to these for further analysis and more technical details on the instruments. Our goal is to provide the reader with a list of the techniques available today and to present their principles and main applications in the field of microbial ecology.

3.2.1 Microscopy Techniques

3.2.1.1 Optical Microscopes

These are the oldest instruments, used since the Renaissance, but they are still present in laboratories. Light microscopes can operate in bright-field, dark-field, polarization, fluorescence, phase contrast, interference, and confocal scanning laser modes.

3.2.1.1.1 Direct-Light Microscopes

Visible light is passed through the specimen and then through glass lenses. The lenses refract the light in such a way that the image of the specimen is magnified as it is projected into the eye. One of the limitations of bright-field microscopy is insufficient contrast. The observed structures can be naturally colored, but dyes can also be used to stain cells and increase their contrast so that they can be seen more easily in the bright-field microscope. Depending on the intensity of the colored parts, the light is differentially absorbed and these parts appear more or less dark. The coloration is due either to a dye that binds preferentially to a particular molecule or family of molecules or to a dark precipitate. This precipitate generally results from the action of an enzyme and forms at the place where the protein is located, which makes it possible to observe the distribution of the enzyme within the biological structure.

In these techniques, the light rays coming from the condenser reach the objective either directly (bright-field) or indirectly via a set of mirrors (dark-field). Dark-field illumination is used for the observation of specimens whose structures show significant variations in refractive index and which, because of a lack of contrast, are only slightly visible or invisible in bright-field microscopy (bacteria, nuclei, vacuoles, flagella, skeletons of diatoms, etc.). Dark-field microscopy is very suitable for fresh (unfixed) specimens, and its resolution is quite high. It is also an excellent way to observe the motility of microorganisms, as bundles of flagella, which are often not resolvable by bright-field or phase contrast microscopy, can be observed with this technique.

In a bright-field microscope, it is possible to insert a polarizer between the light source and the specimen and an analyzer between the specimen and the eyepiece; both are polarizing filters. In this configuration, different areas of the preparation rotate the plane of polarization to different degrees. This "polarizing" microscope highlights fibrous, lamellar, or granular crystalline structures, and specimens are often examined fresh.

3.2.1.1.2 Phase Contrast Microscopy

This is one of the few techniques that can be used to observe living cells. Phase contrast microscopy is based on the principle that cells differ in refractive index from their surroundings and hence bend some of the light rays that pass through them. Light passing through a specimen of refractive index different from that of the surrounding medium is retarded. This effect is amplified by a special ring in the objective lens of a phase contrast microscope, leading to the formation of a dark image on a light background. The phase contrast microscope is widely used in research applications because it can be used to observe wet-mount (living) preparations. It is thus widely used in cellular engineering.

3.2.1.1.3 Differential Interference Contrast Microscope

Differential interference contrast (DIC) microscopy is a form of light microscopy that employs a polarizer to produce polarized light and is used for the observation of transparent or reflective objects. Interference microscopes may use either unpolarized or polarized light; the most interesting configurations use polarized light and produce images whose relief can be surprising and striking. The study of the light interference leading to image formation in the different interference microscopes is beyond the scope of this chapter. DIC is widely used to perform micromanipulation of living cells and also to observe surfaces. Its use in microbial ecology is still quite limited.

3.2.1.1.4 Fluorescence Microscopy

Fluorescence microscopy is widely used in microbial ecology. It exploits the ability of certain molecules, called fluorophores or fluorochromes, to emit fluorescent light upon absorption of photons, i.e., after excitation by a light source. A fluorophore can be excited only by radiation that it can absorb. The absorption spectra are band spectra and therefore include the excitation spectra. The specimen to be studied must obviously be illuminated by a spectral interval that belongs to the excitation spectrum of the fluorophore; generally, a spectral interval centered on the wavelength of maximum excitation of the fluorophore is used. It is therefore important to ensure consistency between the excitation source, whose spectral bands vary qualitatively and quantitatively from one source to another, and the excitation spectrum of the fluorophore.

Light sources can be lamps, lasers, and, increasingly often, diodes. Most lasers emit a single narrow band, which must correspond to a spectral region of strong absorption in order to excite the fluorophore. Conversely, lamps (e.g., mercury vapor lamps) emit a spectrum with many bands of varying intensity. A fluorophore is chosen that absorbs strongly in one of the spectral ranges produced by the light source. Fluorescence intensity is critical when working with very small particles such as viruses or bacteria, and it can be increased by using light sources with a powerful photon flux.

If the fluorophore is a molecule naturally present in the cell of interest, such as a chlorophyll pigment, the source will be selected based on the excitation spectrum of the pigment. Conversely, if the fluorescence is not constitutive and is based on the staining of a cellular component with a specific fluorochrome, in this case one may choose the fluorophore depending on the light source of the cytometer (microscope or flow cytometer). For a given cellular target, such as nucleic acids, there is a wide variety of fluorochromes with very varied peak wavelengths of excitation and emission (see website of Molecular Probes®).

In all cases the wavelength of the fluorescence emitted upon excitation is longer (lower energy) than the wavelength used to excite the molecule. This results from a loss of energy between the excitation and emission phases. The selection of spectral bands is done with optical filters of very different characteristics. A band-pass filter passes frequencies within a certain range and rejects (attenuates) frequencies outside that range; these filters are mainly used as excitation filters. Other filters, long-pass or short-pass, transmit wavelengths above or below a given value and are used on the emission side, that is to say for the fluorescence light that will be visible to the eye (for details, see the many websites that specialize in cytometry instruments). A dichroic mirror reflects the excitation light toward the specimen and transmits the emitted fluorescence toward the eyepiece. The excitation and emission filters and the dichroic mirror are often grouped together in a removable block, and the microscope may be equipped with a set of different blocks in order to combine several fluorochromes within the same preparation. It is possible to use multiple fluorochromes that are excited simultaneously by a single light source, by exploiting one or more spectral bands, and that emit at different wavelengths. This produces multicolor staining. For example, it is common to combine a fluorochrome associated with a physiological function with a second fluorochrome associated with a taxonomic probe.

3.2.1.1.5 The Confocal Scanning Laser Microscopy (CSLM)

CSLM is a computerized microscope that couples a laser light source to a light microscope. A thin laser beam (argon ion) is bounced off a mirror that directs the beam through a scanning device. The laser is then directed through a pinhole that precisely adjusts the plane of focus of the beam to a given vertical layer within the specimen. Because only a single plane of the specimen is precisely illuminated, illumination intensity drops off rapidly above and below the plane of focus, and stray light from other planes of focus is minimized. Therefore, in a relatively thick specimen such as a microbial biofilm, not only the cells on the surface are visible, as is the case with a conventional light microscope, but cells in the deeper layers can also be observed by adjusting the plane of focus of the laser beam. The images are of exceptional clarity. The image-processing software associated with these microscopes makes it possible to reconstruct a complete image from images of different planes and to reconstruct the three-dimensional arrangement of the various components of the preparation. In microbial ecology, this technique is mainly used to study microbial assemblages after staining with fluorescent probes, including oligonucleotide probes to recognize specific cells or physiological probes to target specific functions. An important field of application is the study of biofilms and, more generally, of interactions between organisms and their environment. The resolution is limited to that of a light microscope.

3.2.1.2 Electronic Microscopes

The main limitation of optical microscopy is its resolution, because the smallest details that can be observed correspond in theory to half the wavelength of the light source. In optical microscopy, using the shortest possible wavelengths (UV radiation), the limit is around 0.25 μm, which means that details of the cellular contents are beyond its capabilities. Electron microscopes are widely used for studying the detailed structure of cells. Since the photon cannot go further, the idea was to use another elementary particle, the electron. The wavelength associated with the electron is indeed much shorter than that of an ultraviolet photon, and the final resolution is much higher, in the nanometer range. Electron microscopy essentially makes it possible to observe structures and sometimes macromolecules, such as proteins bound to DNA. Sample preparation is much more complex than for light microscopy.

3.2.1.2.1 The Transmission Electron Microscope (TEM)

By far the most effective, this technique is similar in principle to direct light microscopy. The electron beam is produced by an electron gun, commonly fitted with a tungsten filament cathode as the electron source. The electron beam is accelerated by an anode typically at +100 kV (40–400 kV) with respect to the cathode, focused by electrostatic and electromagnetic lenses, and transmitted through the specimen, which is in part transparent to electrons and in part scatters them out of the beam. When it emerges from the specimen, the electron beam carries information about the structure of the specimen that is magnified by the objective lens system of the microscope. The magnification is much higher than in optical microscopy. This technique requires ultrathin sections with a thickness of about 50 nm or less, made after hardening of the sample by embedding. A staining step is sometimes required depending on the contrast of the preparation. In microbial ecology, this technique is used to describe the cellular structure of microorganisms, to observe viral particles inside the host cell (Fig. 17.13), to observe the organization of the nucleus in eukaryotic cells, etc.

Fig. 17.13

Observation by transmission electron microscopy of Ostreococcus tauri infected by the double-stranded DNA virus OtV5. cy cytoplasm, pm plasma membrane, m mitochondria, c chloroplast, sg starch granule, vp viral particle (Photographs: Courtesy of Marie-Line Escande and Nigel Grimsley)

3.2.1.2.2 The Scanning Electron Microscope (SEM)

Unlike the TEM, where the electrons of the high-voltage beam carry the image of the specimen, the electron beam of the Scanning Electron Microscope (SEM) does not at any time carry a complete image of the specimen. This technique provides absolutely spectacular pseudo-3D images. When the electron beam bombards the preparation, part of the electrons pass through it while others are re-emitted from the surface (secondary and backscattered electrons) and are collected to construct the image using an electron collector. The result is a representation of the surface of the studied object (cell, particle, etc.). The biological sample must be coated with metal (metallization), because the electrons scattered by the metal are those collected to produce the image. A large range of magnifications can be obtained with the SEM. In microbial ecology, this technique is used to describe surfaces, cell morphology, and more generally the interaction between the cell and its environment. A new generation of instrument derived from the SEM is the environmental SEM, in which only a partial vacuum is applied around the sample in order to preserve its "environment"; in this case metallization is no longer necessary.

3.2.2 Flow Cytometry (FCM)

In cell biology, FCM is a laser-based, biophysical technology employed in cell counting, cell sorting, biomarker detection, and protein engineering by suspending cells in a stream of fluid and passing them by an electronic detection apparatus. A beam of light (usually laser light) of a single wavelength is directed onto a hydrodynamically focused stream of liquid. A number of detectors are aimed at the point where the stream passes through the light beam: one in line with the light beam (Forward Scatter or FSC) and several perpendicular to it (Side Scatter or SSC) and one or more fluorescence detectors. Each suspended particle from 0.2 to 150 μm passing through the beam scatters the ray, and fluorescent chemicals found in the particle or attached to the particle may be excited into emitting light at a longer wavelength than the light source. This combination of scattered and fluorescent light is picked up by the detectors, and, by analyzing fluctuations in brightness at each detector (one for each fluorescent emission peak), it is then possible to derive various types of information about the physical and chemical structure of each individual particle.

It differs from microscopy because cells are no longer fixed on a support but driven by a liquid stream, hence the term “flow.” For a more technical description of the instrument, the reader is referred to one of the many websites dedicated to flow cytometry platforms.

The measured signals are essentially:

  1. (i)

    Physical signals which correspond to the properties of light scattering related to the dimensions of the particle, its internal structure, or form

  2. (ii)

    Optical signals which correspond to the constitutive fluorescence properties for cells that are naturally fluorescent (photosynthetic cells) or whose fluorescence is induced by a specific staining on the basis of one or more probes

Each cell is analyzed individually on the basis of several parameters. Multiparametric data are acquired over thousands of events per second, and it is thus possible to obtain statistics on very large populations. These data are shown as monoparametric histograms or cytograms combining two parameters.

Flow cytometry is a very common technique in microbial ecology, not only for the enumeration of bacteria, pico- and micro-phytoplankton, and more recently microzooplankton, but also for viruses (even if this is more difficult). FCM allows the study and quantification of the properties of cells. This technique has contributed to the development of microbial ecology by providing information on the abundance of microorganisms in a variety of environments, making it possible not only to discover many new groups (Synechococcus, Prochlorococcus, Ostreococcus, etc.) but also to reveal their importance in different oceanic regions and in the element cycles in the sea. Flow cytometers are increasingly installed on board oceanographic vessels and provide near real-time data that are essential to improve sampling strategies. Cell sorters are flow cytometers that can sort cells based on their optical properties (Fig. 17.14). It is possible to sort cells on the basis of one or more cellular parameters in order to complete their analysis by other analytical techniques. The sort function can be very useful in microbial ecology to better understand the functional role of targeted cell populations sorted on the basis of taxonomic or physiological properties.

Fig. 17.14

FACSAria™ flow cytometer (Becton Dickinson) equipped with a broadband cell sorter. 1 Optical bench; 2 cell sorting chamber; 3 screen control (Photograph: Courtesy of Jennifer Guarini)

3.2.3 Solid-Phase Cytometry (SPC)

SPC is not very common, but it is a very powerful technique for studying rare events (Fig. 17.15). This technique is at the interface between microscopy and flow cytometry. In this case, the cells are fixed and deposited on a support, as is the case for microscopy. The light source is a laser directed toward a set of oscillating mirrors that can scan the entire surface of the support within a few minutes. Each particle is detected, and fluorescent signals are collected and processed similarly to the processing implemented in a flow cytometer. The collected data are analyzed, and discriminants (which can be changed depending on the application) can eliminate noncellular and/or nonspecific signals. The SPC is connected to a fluorescence microscope whose stage is driven and controlled by the solid-phase cytometer. It is thus possible to perform a visual check and to validate by microscopy all the fluorescent events that were detected.

Fig. 17.15

Solid-phase cytometer ChemScan RDI ® (Chemunex). 1 solid-phase cytometer; 2 screen control to locate the cells detected on the membrane; 3 epifluorescence microscope for eye validation (Photograph: Courtesy of Philippe Catala)

The solid-phase cytometer has been proposed as a tool to enable the very rapid detection of rare events in many different products and as an alternative to traditional plate counts. The advantage of this technique is that it allows the detection of rare events, which is not possible in flow cytometry or microscopy. It is therefore particularly suitable for pathogenic microorganisms when searching for a single cell within a sample.

3.3 Applications Related to Counting

The enumeration of microorganisms by cytometry techniques is most often done by fluorescence microscopy, flow cytometry, and more rarely by solid-phase cytometry. In microbial ecology, cell counting is important for different applications and in particular to assess the biomass of targeted microorganisms, including bacteria.

In fluorescence microscopy, the organisms are often concentrated on a membrane filter. When this membrane is then observed through the eyepieces, only a small fraction can be analyzed, because the surface of a microscopic field (SMF – the surface of the membrane visible through an eyepiece) represents only a small fraction of the total area of the filter. For the enumeration of bacteria, membrane filters (generally 25 mm diameter and 0.2 μm porosity) are often used to concentrate the cells. The surface of the filter on which the bacteria are spread (UFS – usable surface of the filter) represents only a part of the total surface of the filter, and the SMF represents between 1/5,000 and 1/10,000 of the UFS. This means that more than 5,000 microscopic fields would have to be observed to count the bacteria on the whole usable surface of the filter. In practice this is impossible, and the operator randomly selects and counts 30 microscopic fields (SMF) to integrate the spatial heterogeneity in the distribution of bacteria, then extrapolates the average number of bacteria per microscopic field to the UFS. This approach is tedious, time consuming, and often operator dependent. It is often difficult to count more than a few hundred microorganisms. With this approach, it is necessary to concentrate at least 5,000 bacteria on the filter in order to detect on average one cell per SMF, which precludes quantifying populations that are poorly represented in the studied communities (Lemarchand et al. 2001).
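
The extrapolation from counted fields to a cell concentration is simple arithmetic; the sketch below (hypothetical helper function and invented values, consistent with the SMF/UFS ratios quoted above) illustrates it:

```python
# Minimal sketch (hypothetical values): extrapolating an epifluorescence count.
# The mean count per microscopic field (SMF) is scaled to the usable filter
# surface (UFS), then divided by the volume filtered to obtain cells per ml.

def cells_per_ml(counts_per_field, ufs_to_smf_ratio, volume_filtered_ml):
    """counts_per_field: counts from randomly chosen fields (typically 30);
    ufs_to_smf_ratio: UFS/SMF, typically 5,000-10,000 (see text)."""
    mean_per_field = sum(counts_per_field) / len(counts_per_field)
    cells_on_filter = mean_per_field * ufs_to_smf_ratio
    return cells_on_filter / volume_filtered_ml

# Illustrative example: 30 fields averaging about 25 cells, UFS/SMF = 8,000,
# 10 ml of sample filtered.
counts = [25] * 30
print(f"{cells_per_ml(counts, 8000, 10):.2e} cells per ml")
```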

In flow cytometry, it is possible to count a large number of cells in a very short time. The flow rate used for microorganisms with a size close to or less than 1 μm is often about 40 μl·min−1, which means that 0.2 ml can be analyzed in 5 min or 1 ml in 25 min. Moreover, at least a thousand cells must be detected to be able to analyze the cytograms. The microorganisms must therefore be present at a sufficient concentration to be analyzed.

In the aquatic environment, where there are usually between 100,000 and 500,000 bacterial cells per milliliter of water, or thousands of cyanobacteria, it is possible to count thousands of cells in a few minutes when the objective is to enumerate total bacteria or cyanobacteria (Fig. 17.16). Conversely, it is difficult to detect and enumerate poorly represented populations (such as specific taxonomic or functional groups), and it is difficult to concentrate the sample without introducing bias into the quantification.

Fig. 17.16

Example of flow cytometric cytograms displaying the different groups that can be found in a marine microbial community. (a) Different groups of picophytoplankton are visible on a cytogram combining the relative red fluorescence (an index of the chlorophyll content) versus side scatter (SSC, an index of the cell size). PRO Prochlorococcus, SYN Synechococcus, PEUK photosynthetic picoeukaryotes; (b) Synechococcus are discriminated and counted on a cytogram combining the relative orange fluorescence (FL2 – an index of the phycoerythrin content) and the side scatter (SSC); (c) the heterotrophic bacteria are detected after labeling with SYBR Green I on a cytogram combining SSC and green fluorescence (FL1). Two clusters of bacteria with different DNA content (FL1 values) and cell size (SSC values) are shown on a contour plot of relative green fluorescence (an index of the DNA content) and SSC. It is possible to discriminate two groups called HNA ("high nucleic acid content") and LNA ("low nucleic acid content") cells (Lebaron et al. 2001). Fluorescent beads of 1 μm are added to the samples during analysis in order to normalize the cellular characteristics of the microbial populations relative to the beads (Joux et al. 2005) (Copyright: Courtesy of Life and Environment edition)

With solid-phase cytometry (SPC), it is possible to count rare cells after concentration on a membrane. This technique is highly complementary to FCM and microscopy for quantification. A single cell of a pathogenic bacterium can be quantified in a given volume of sample (e.g., 100 ml of water filtered through a membrane filter). The only limitation is the high sensitivity of this instrument to the fluorescence background of the filter or to background introduced by the presence of organic and/or inorganic particles. Many applications are currently under development. Figure 17.17 presents the limits of the different techniques in quantitative terms and an application to the quantification of bacteria (Lemarchand et al. 2001).

Fig. 17.17

Domains of application of epifluorescence microscopy (EFM), flow cytometry (FCM), and solid-phase cytometry (SPC) depending on both the absolute number of labeled Escherichia coli cells and the volume of tap water analyzed. For each volume, the number of non-labeled cells present in the tap water is indicated. Each color represents the domain of application of the different instruments (Modified and redrawn from Lemarchand et al. (2001))

3.4 Activity Measurement at the Cellular Level

3.4.1 Use of Fluorescent Probes

There are many molecular probes that can be used to analyze the physiological state of cells (Fig. 17.18). These probes, conjugated to a fluorophore, often have different targets (Joux and Lebaron 2000), and they provide information about the physiological state of individual cells within a population. They may be combined with each other or with a taxonomic probe, provided that the appropriate excitation wavelengths of the different probes are provided by the light source and that the emission wavelengths of the fluorophores can be easily distinguished. Fluorescent probes specific to nucleic acids are often used in microbial ecology for cell counting but also to analyze the nucleic acid content of individual cells. Within bacterial communities in aquatic environments, it is often possible to distinguish populations having different nucleic acid contents. This discrimination is now often used in microbial ecology because cells that have a high nucleic acid content are considered to be more active than those with a lower nucleic acid content (Lebaron et al. 2001).

Fig. 17.18

Different targets used to analyze the physiological state and identity of individual cells using fluorescent probes (Modified and redrawn from Joux and Lebaron (2000). Drawing: M.-J. Bodiou)

3.4.2 Microautoradiography

It is possible to characterize the metabolic activity of a cell by the use of radioactive substrates of commercial origin or produced by a biological organism (amino acids, carbohydrates, lipids, etc.). The method consists in providing a given substrate to bacterial communities for a period of time that depends on the biological activity. Cells that metabolize the substrate become radioactive, and this activity can be revealed by a photographic emulsion and the precipitation of silver salts around each active cell. The observation is made by light microscopy. It is also possible to combine this approach with a fluorescent nucleic acid probe (see below); the method is then called Micro-FISH.

3.5 A Taxonomic Approach at the Cellular Level

3.5.1 Immunofluorescence

Immunoassay techniques are widely used in microbiology as diagnostic tools. They can be used to detect, identify, and count a pathogen (especially in controlling the quality of water). These techniques are based on the use of antibodies specific to pathogens. There are monoclonal antibodies (which recognize only one type of epitope), which are very specific and reproducible, and polyclonal antibodies (mixtures of antibodies recognizing different epitopes on the same antigen), which are less specific. Among the various immunoassays, immunofluorescence consists of chemically modifying an antibody by adding a fluorochrome without altering the specificity of the antibody (Fig. 17.18). The targeted microorganisms are then detected using an instrument that can detect fluorescent cells (epifluorescence microscopy, flow cytometry, or solid-phase cytometry). These techniques are mainly used in the field of food microbiology, because antibodies are commercially available for most pathogenic microorganisms. For taxonomic purposes, these probes are increasingly being replaced by DNA probes, whose specificity is often better and which can be designed for all microorganisms without the need for prior cultivation.

3.5.2 In Situ Hybridization with Fluorescent Probes

In situ hybridization with fluorescent probes (fluorescence in situ hybridization, FISH) is a cytogenetic approach based on the principle of complementarity between nucleic acid strands; it allows the specific pairing of a labeled oligonucleotide probe to an RNA or DNA sequence, whether ribosomal or messenger. It is thus an approach conceptually close to gene amplification by PCR followed by sequencing, with the advantage of a possible visual check by microscopy; however, because the probe is a short sequence, its resolution is lower.

This technique was originally developed for the detection of mutations in eukaryotic genomes after spreading of the chromosomes on a glass slide. Subsequently adapted to bacterial cells, the technique allows cells to be observed without culturing.

Bacterial cells are surrounded by a lipid membrane and a cell wall consisting of sugars (peptidoglycan) and proteins, which can be made porous to allow the passage of small oligonucleotide sequences but not of the much larger ribosomes, which therefore remain inside the cells. Ribosomes contain RNA sequences. Their number varies from 20,000 per active cell in E. coli to 0 in starved cells. The effectiveness of the FISH technique is proportional to the number of ribosomes and ultimately to the physiological state of the cell.

It is also possible to detect specific mRNAs by in situ hybridization (Pernthaler and Amann 2004). In this case, detection depends on the expression level of the selected gene, since some genes are not expressed strongly enough to be detected by FISH. The probes are oligonucleotide sequences of about twenty bases. They must be short to allow their penetration into the cells and must have a melting temperature close to room temperature to preserve the cell structures. The sequences of the target group are aligned and analyzed, for example, using the ARB software (Ludwig et al. 2004), which makes it possible to design an oligonucleotide sequence whose specificity can be checked against the database. When possible, mismatches are located at the center of the probe to increase the instability of the hybrid and thus to reduce false positives. Cells undergo different treatments before hybridization, starting with diethylpyrocarbonate to inactivate intracellular RNases, then lysozyme (which degrades the peptidoglycan) and proteinase K to permeabilize the cell wall, and then paraformaldehyde to preserve the subcellular structures. The cells are incubated in a hybridization solution containing the labeled oligonucleotides for a few hours and then washed to remove probes that are not bound to their specific target. A second, nonspecific dye (e.g., DAPI) is used to stain all the cells. Finally, the cells are observed under a common fluorescence microscope.

It is possible to use two probes in the same experiment to determine, for example, the proportion of cells belonging to two nested taxa, as in the case of cells belonging to the Enterobacteriaceae and to the genus Erwinia. In this case, two probes labeled with different fluorochromes are used. It is also possible to use two probes, one targeting ribosomes and the other a functional gene in the same cell, to link taxonomic and functional information. Finally, it is possible to use two probes targeting two different taxa, such as one for E. coli and one for Pseudomonas fluorescens, when the objective is to quantify their co-occurrence in a complex environment. One of the most limiting problems in the use of FISH is the electrostatic repulsion due to the negative charge of the phosphodiester groups of the DNA, which makes some ribosome regions inaccessible to the classic FISH method. It is then possible to use, instead of DNA, PNA molecules (peptide nucleic acids), in which the sugar-phosphate backbone is replaced by neutral polyamide links; these probes follow the Watson–Crick pairing rules but hybridize more rapidly and more efficiently (Stender et al. 1999). Finally, it is possible to use molecular beacons, which operate on the basis of FRET (fluorescence resonance energy transfer) and are already used in quantitative PCR to reduce the background noise due to nonspecific hybridization (Lenaerts et al. 2007).

4 Measurements of Microbial Biomass and Activities

4.1 Measurements of Biomass

4.1.1 Biochemical Methods

Biochemical methods for quantifying biomass are based on the measurement of characteristic cellular compounds or intracellular metabolites. To be meaningful, the selected compounds should comply with the following criteria:

  1. (i)

    The selected compounds should be specific, i.e., restricted to a genus, a functional group, etc.

  2. (ii)

    The selected compounds should disappear rapidly upon cellular death and, therefore, should be absent in the surrounding environment

  3. (iii)

    The selected compounds should be present in the cells at a fixed concentration, a level that should be independent of both the physiological status of the cells and the substrates they use as sources of carbon and energy.

4.1.1.1 Lipid Analyses

The biochemical methods are mainly based on the analyses of membrane lipids and of lipid components of the bacterial cell wall. These particularly include the phospholipid fatty acids (PLFA)*. The lipids are extracted using the method of Bligh and Dyer (1959), and the extract is eluted and fractionated on a silica column to separate the phospholipids from apolar compounds such as aliphatic hydrocarbons (Syakti et al. 2006). The PLFA are methylated to produce fatty acid methyl esters. The position and geometry of the double bonds can be determined by forming methyldisulfide-type derivatives (Nichols et al. 1986), and the position of the methyl groups can be determined after their derivatization into n-acyl pyrroline-type compounds according to the method of Andersson and Holman (1974). PLFA are separated, identified, and quantified by gas chromatography coupled to mass spectrometry (GC-MS) (Syakti et al. 2006). The analysis of the PLFA allows the viable bacterial biomass to be estimated. Phospholipids are present in all bacterial membranes, are characterized by a high turnover rate, and are rapidly hydrolyzed into diglycerides by phospholipases after cell death. Hence, the estimation of the bacterial biomass is based on the PLFA content, using a conversion factor that has been determined for E. coli: in this species, PLFA represent 100 μmol·g−1 dry weight, and 1 g of cells is equivalent to 4.3 ± 1.2 × 1012 cells (Balkwill et al. 1988). By knowing the concentration of PLFA in a natural sample, we can thus estimate the viable biomass concentration (a worked conversion sketch is given at the end of this subsection). This method has been applied successfully in different environments, although it may present specific problems, which are related to:

  1. (i)

    Differences in PLFA concentrations between species

  2. (ii)

    Variability of PLFA due to variations of cell size

  3. (iii)

    Variability of PLFA contents related to different environmental conditions, as, e.g., temperature, growth substrate

  4. (iv)

    Variability of PLFA contents related to the physiological status of the microorganisms

Despite these limitations, this method is one of the most efficient methods currently used for estimating bacterial biomass. In certain cases, it yields results comparable to those obtained by other approaches, e.g., the measurement of uraminic acid and of the beta-hydroxyl groups, which are characteristic of the lipopolysaccharides (Balkwill et al. 1988).
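
As announced above, a minimal sketch of the PLFA-to-biomass conversion is given below (hypothetical sample value; the conversion factors are those cited in the text for E. coli):

```python
# Minimal sketch (hypothetical sample value): converting a PLFA concentration into
# viable biomass and cell numbers using the E. coli-based factors cited above
# (100 umol PLFA per g dry cells; 1 g dry cells ~ 4.3 x 10^12 cells).

PLFA_UMOL_PER_G_DRY = 100.0      # umol PLFA per g dry cells (Balkwill et al. 1988)
CELLS_PER_G_DRY = 4.3e12         # cells per g dry cells (+/- 1.2 x 10^12)

def plfa_to_biomass(plfa_umol_per_g_soil):
    """Return (g dry biomass, cell number) per g of soil for a PLFA concentration."""
    dry_biomass = plfa_umol_per_g_soil / PLFA_UMOL_PER_G_DRY
    return dry_biomass, dry_biomass * CELLS_PER_G_DRY

# Illustrative value: 0.05 umol PLFA per g of soil.
biomass, cells = plfa_to_biomass(0.05)
print(f"{biomass:.1e} g dry biomass and {cells:.1e} cells per g of soil")
```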

4.1.1.2 The Other Biochemical Methods

The PLFA are not the only cellular constituents that can be used for estimating bacterial biomass. Muramic acid is specific to bacteria and is thus a good biomarker for estimating bacterial biomass; however, its cellular content is highly variable and differs markedly between the two main groups of Gram-positive and Gram-negative bacteria. ATP can potentially also be used, although its conversion to bacterial biomass is very difficult because its cellular levels are highly variable among species and depend on the physiological status of the cells.

4.1.2 Biovolume and Conversion Factors

Microbial carbon biomass can be estimated by combining cell counts obtained by microscopy or flow cytometry (cf. Sect. 17.2.3) with conversion factors indicating the average carbon content per cell. These conversion factors are logically linked to cell size, so that in the case of bacteria it is necessary to determine their average biovolume by epifluorescence microscopy after cell staining, for example with DAPI. The most commonly used formula for this determination is as follows:

$$ \mathrm{Biovolume}\ \left({\upmu \mathrm{m}}^3\right)=\pi /4\times {D}^2\times \left(L-D/3\right) $$
  • with L: length of the cell (μm)

  • D: width of the cell (μm)

Relationships between biovolume and carbon biomass have been established by different authors, such as Norland (1993):

$$ \mathrm{pg}\ \mathrm{C}\ {\mathrm{cell}}^{-1}=0.12\times {\left({\upmu \mathrm{m}}^3{\mathrm{cell}}^{-1}\right)}^{0.7} $$

In the marine environment, the carbon biomass of bacteria varies from 12 to 30 fg of carbon per cell, depending on the particular environment (Fukuda et al. 1998). Similarly, relationships between biovolume and carbon content exist for other microorganisms (Davidson et al. 2002a). For viruses, a factor of 0.055 fg of carbon per virus has been estimated (Steward et al. 2007).
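
As a worked illustration (hypothetical cell dimensions, not taken from the cited studies), a rod-shaped cell of L = 0.8 μm and D = 0.4 μm would give:

$$ \mathrm{Biovolume}=\pi /4\times {0.4}^2\times \left(0.8-0.4/3\right)\approx 0.084\ {\upmu \mathrm{m}}^3 $$

$$ \mathrm{pg}\ \mathrm{C}\ {\mathrm{cell}}^{-1}=0.12\times {0.084}^{0.7}\approx 0.021\ \mathrm{pg}\ \mathrm{C}\approx 21\ \mathrm{fg}\ \mathrm{C} $$

a value consistent with the range of 12–30 fg of carbon per cell quoted above for marine bacteria.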

4.1.3 Quantification of Phytoplanktonic Biomass

The cycle of organic matter in the marine environment mainly results from the activity of extremely varied and numerous organisms, among which we can distinguish phytoplankton and algae as primary producers, zooplankton and the predatory chain as secondary producers, and heterotrophic bacteria that remineralize organic matter. In general, the organisms in these groups are difficult to isolate and separate from one another in order to observe them and measure their abundance. The exception is phytoplankton, which possess specialized chloroplasts containing pigments that allow them to capture the light energy necessary for photosynthesis. Although a wide variety of pigments exists, chlorophyll a (Chl a) is the main pigment in aerobic photosynthetic organisms (excluding cyanobacteria). The cell content of Chl a is 1–2 % by dry weight. Following collection of phytoplankton by filtration, it is quite easy to quantify this pigment by extraction with a nonpolar solvent. The method is simple: phytoplankton is captured on filters capable of retaining all cells (see Sect. 17.1.2), and chlorophyll is extracted with a nonpolar solvent, with or without mechanical action.

Measurement of extracted chlorophyll is based on two spectroscopic characteristics:

  1. (i)

    Its ability to absorb light at a well-defined wavelength (663 nm), which enables quantification by spectrophotometry

  2. (ii)

    Its ability, after excitation at a wavelength of 430 nm (blue), to rapidly re-emit a portion of the energy in the form of red light at a wavelength of 669 nm, according to the fluorescence process

The spectrophotometric method is the oldest method of measuring extracted chlorophyll (Richard and Thompson 1952), but has been replaced by the fluorimetric method (Yentsch and Menzel 1963) which is much more sensitive. However, it should be noted that spectrophotometric measurement is intrinsically calibrated because the absorption coefficients of the pigments are known precisely, while the fluorimetric method requires regular calibration of the fluorometer.

Many variants of the procedure have been developed, differing in terms of the quality of the solvent (e.g., acetone, ethanol, and methanol), the grinding process, and the duration of extraction. The literature is abundant in this area (cf. Jeffrey et al. 2005), and due to the interest in studying the productivity of aquatic environments, successive working groups have been formed since 1964, with the support of UNESCO (Scor-UNESCO 1966), to identify the best analytical procedures. A methodological synthesis was conducted under the auspices of the International Council of the Exploration of the Sea (Aminot and Rey 2002). Although these fast and simple methods are still widely used for the quantification of phytoplankton biomass, more efficient techniques now make it possible to study the entire pigment spectrum. Chief among these more efficient techniques are high-performance liquid chromatography (HPLC, Jeffrey et al. 1997) and spectrofluorimetry (Neveux and Lantoine 1993) (cf. Sect. 17.6.6).

Unfortunately, the conversion of Chl a concentration into total phytoplankton biomass is difficult and still much debated, because the chlorophyll content of cells varies greatly depending on many factors. This is especially true for the Chl a:carbon ratio, which changes over the life of the cell and depends on the available nutrients and light conditions. It is also well known that the chlorophyll content of phytoplankton cells tends to increase under low light intensity, whereas a nutrient deficiency may cause a destruction of chlorophyll (chlorosis). Chlorophyll:nitrogen and chlorophyll:phosphorus ratios seem to be more consistent (Strickland 1960), and in optimal growing conditions, 1 mg of Chl a is assumed to correspond to 14 mg of nitrogen.

The in vivo fluorescence technique (Lorenzen 1966) permits the direct determination of chlorophyll in seawater samples without the filtration step. However, the large variability often observed in the in vivo fluorescence/Chl ratio greatly limits the value of this parameter as a measure of biomass.

On the other hand, the determination of the optical properties of phytoplankton using satellite imagery has made possible significant advances in the study of the spatial and temporal variability of phytoplankton populations (Fig. 17.19). Indeed, by absorbing light, especially in the blue part of the visible spectrum, Chl a selectively modifies the photon flux that passes through the photic zone of the ocean. This absorption changes the spectrum of the sunlight reflected from the ocean (reflectance). Because the color in the visible light region (wavelengths of 400–700 nm) of most of the world’s oceans varies with the concentration of chlorophyll and other plant pigments present in the water, the more phytoplankton that is present, the greater the concentration of plant pigments and the greener the water. This property has been exploited by space agencies that have launched the so-called “ocean color” satellites to map chlorophyll content at the ocean surface (Gordon et al. 1988). The first demonstration was made during the testing of the Coastal Zone Color Scanner (CZCS), a sensor launched by NASA in 1978. CZCS was able to provide measurements of ocean color over large geographic areas in short periods of time in a way that was not previously possible with other measurement techniques, such as from surface ships, buoys, and aircraft. These measurements allowed oceanographers to infer the global distribution of the standing stock of phytoplankton for the first time. Collection of ocean color data has become very common with the launch of many other sensors: OCTS (NASDA) and POLDER (CNES), launched in 1996, which worked for only 8 months; SeaWiFS, MODIS-T, and MODIS-A (NASA), launched in 1997, 1999, and 2002, respectively; and MERIS (from the European Space Agency), launched in 2002. In addition, continual improvements in both the quality and sensitivity of the sensors and algorithms now make it possible to extract data on other biogeochemical components. Information on three groups of phytoplankton (micro-, nano-, and picoplankton), which differ in the proportions of their accessory pigments (i.e., other than Chl a), can be obtained (Uitz et al. 2006). Similarly, under certain conditions, populations of coccolithophorids (Fig. 17.20) and Phaeocystis, which have a very strong impact on the color of ocean water through the limestone pieces or the mucus they produce, can be identified. Recently developed algorithms make it possible to quantify the amount of organic carbon present from chlorophyll estimation (Stramski et al. 2008). Since Chl a is the main compound involved in the photosynthetic process, it is reasonable to presume that primary production is directly related to the concentration of this pigment (Ryther and Yentsch 1957). The phenomenon is complex, but bio-optical models are constantly being improved, and data on in situ chlorophyll, such as those obtained by “ocean color” satellites, are now commonly used to estimate global primary production.

Fig. 17.19

Map of chlorophyll (phytoplankton) in the ocean based on color sensor data collected by NASA’s SeaWiFS sensor, for the period September 1997–August 2000 (© SeaWiFS Project. NASA/Goddard Space Flight Center. ORBIMAGE – Authorization Gene Feldman)

Fig. 17.20

Significant development, over 150,000 km2, of coccolithophores in the surface waters of the Barents Sea, detected by the MODIS sensor (August 1, 2007) (Authorization Gene Feldman)

4.2 Measurements of Primary Production

Primary production, i.e., the amount of organic matter produced each day by autotrophic microorganisms from atmospheric or aquatic carbon dioxide, is a fundamental parameter in the cycling of matter in aquatic environments. Almost all life on Earth is directly or indirectly reliant on primary production. The organisms responsible for primary production, known as primary producers or autotrophs, form the base of the food chain; in oceanic areas, phytoplankton are the main primary producers. Primary production is distinguished as either net or gross, the former accounting for losses to processes such as cellular respiration and the latter not accounting for these losses. Research on oceanic primary production has increased dramatically since the development of the carbon-14 (14C) tracer method by Steemann Nielsen (1951). 14C is a radioactive isotope that is easy to use because its radiation is of low energy and it has a very long half-life (5,730 years). The experimental procedure is simple. A small known amount (a few ml) of 14C-labeled sodium bicarbonate is introduced into a water sample collected in a transparent flask. After placing the sample under conditions as close as possible to its initial environmental conditions for a period of time, called the incubation period, the particulate material is collected on a filter, and its radioactivity is measured using a scintillation counter. Primary production, or the quantity of carbon assimilated during the incubation period (PP), can then be calculated as follows (a computational sketch is given after the definitions below):

$$ \mathrm{P}\mathrm{P}=\left({}^{14}\mathrm{Cphyto}{/}^{14}\mathrm{Cint}\right)\times \mathrm{C}\mathrm{t} $$

where

  • 14Cphyto = amount of 14C measured in the particulate fraction collected on the filter,

  • 14Cint = quantity of 14C initially added to the sample,

  • Ct = Inorganic carbon concentration in the sample.
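
A minimal computational sketch of this calculation is given below (hypothetical helper function and invented example values; the dark-bottle correction is included as an assumption, in line with the dark bottles described later in this section):

```python
# Minimal sketch (hypothetical values): primary production from a 14C incubation,
# PP = (14Cphyto / 14Cint) x Ct, following the formula above. A dark-bottle
# correction is included as an assumption (dark bottles are discussed below).

def primary_production(dpm_phyto, dpm_added, ct_umol_per_l, incubation_h, dark_dpm=0.0):
    """Return PP in umol C per litre per hour.

    dpm_phyto     -- radioactivity collected on the filter (DPM)
    dpm_added     -- radioactivity initially added to the sample (DPM)
    ct_umol_per_l -- dissolved inorganic carbon concentration (umol C per litre)
    incubation_h  -- duration of the incubation (hours)
    dark_dpm      -- optional dark-bottle radioactivity (DPM), subtracted to correct
                     for dark carbon assimilation
    """
    assimilated = (dpm_phyto - dark_dpm) / dpm_added * ct_umol_per_l
    return assimilated / incubation_h

# Illustrative run: 2,500 DPM on the filter, 1.0e6 DPM added, Ct = 2,100 umol C/l,
# 4 h in situ incubation, 150 DPM in the dark bottle.
print(round(primary_production(2500, 1.0e6, 2100, 4, dark_dpm=150), 2), "umol C l-1 h-1")
```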

The main variations of the method are:

  1. (i)

    The incubation period (from a few minutes to 24 h)

  2. (ii)

    The volume of the sample (a few ml to some liters)

  3. (iii)

    The material of the incubation flasks (glass, polycarbonate, quartz)

  4. (iv)

    The incubation conditions

Incubation can be carried out in incubators under natural or artificial light. It is imperative to strive to maintain a temperature equivalent to the initial temperature of the sample. The method of incubation that best reflects the initial conditions of the sample is the so-called in situ incubation method, in which the sample is returned to its original depth immediately after the addition of tracers. To carry out incubations at different depths, a mooring line is prepared (Fig. 17.21) with mooring points for positioning the incubation flasks at different depths. It should be noted that some bottles that are opaque to light (called dark bottles) are used to account for the biological processes that occur in darkness (dark carbon assimilation).

Fig. 17.21

In situ incubation flasks on a mooring line ready for use (Photographs: Courtesy of Joséphine Ras)

Since the pioneering work of Dugdale and Goering (1967), the isotopic tracer technique has been extended to nitrogen compounds through the use of the stable isotope 15N. Quantifying the rate of nitrogen uptake by microorganisms is necessary because mineral nitrogen is often present at very low concentrations in surface waters and appears to be a factor that limits the growth of primary producers. In contrast, carbon dioxide, the substrate for photosynthesis, is always present in large quantities in natural waters. The incubation technique for quantification of nitrogen fluxes using the 15N tracer is identical to that used for 14C. A sample is enriched with an inorganic or organic nitrogen compound artificially labeled with nitrogen-15, and then the isotopic enrichment of the particulate fraction recovered on a filter after the incubation period is measured with a mass spectrometer to quantify the 15N/14N isotopic ratio (a computational sketch of the uptake calculation is given after the list below). It should be noted that dual-tracer measurements using the stable isotopes 13C and 15N can be used to simultaneously estimate the uptake rates of dissolved inorganic carbon and nitrogen. The inorganic nitrogen available for microorganism growth in aquatic environments is present in several forms (mainly nitrate and ammonium), which allows this tracer method to differentiate between two types of production supported by substrates that differ in their origin (Dugdale and Goering 1967):

  1. (i)

    Regenerated production is the carbon assimilation supported essentially by the assimilation of nitrogen substrates recycled through bacterial and zooplankton excretion (ammonium, nitrite, urea).

  2. (ii)

    New production is the carbon assimilation based on nitrogen inputs from mineral reserves outside the area of primary production. These are mainly nitrate, which is the main reservoir in deep water, and molecular nitrogen from the atmosphere.
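
The text does not spell out the uptake calculation explicitly; the sketch below therefore follows a commonly used formulation (an assumption on our part), in which a specific uptake rate derived from the 15N excess of the particulate fraction is multiplied by the particulate nitrogen concentration. All values are illustrative:

```python
# Minimal sketch of a commonly used 15N uptake calculation (assumed formulation,
# not detailed in the text). All numbers are illustrative.

NATURAL_15N_ATOM_PERCENT = 0.366  # atom % 15N of unlabeled material

def n_uptake(atpct_pn, atpct_substrate, pn_umol_per_l, incubation_h):
    """Return an absolute uptake rate in umol N per litre per hour.

    atpct_pn        -- atom % 15N measured in the particulate fraction after incubation
    atpct_substrate -- atom % 15N enrichment of the dissolved substrate pool
    pn_umol_per_l   -- particulate nitrogen concentration (umol N per litre)
    """
    excess_pn = atpct_pn - NATURAL_15N_ATOM_PERCENT
    excess_substrate = atpct_substrate - NATURAL_15N_ATOM_PERCENT
    specific_uptake = excess_pn / (excess_substrate * incubation_h)  # per hour
    return specific_uptake * pn_umol_per_l

# Illustrative run: particulate fraction enriched to 0.60 atom %, substrate pool at
# 10 atom %, PN = 1.5 umol N/l, 6 h incubation.
print(round(n_uptake(0.60, 10.0, 1.5, 6), 4), "umol N l-1 h-1")
```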

Recent developments in the measurement of the enrichment of the dissolved inorganic and organic fractions have made it possible to complete the study of the nitrogen cycle by estimating the rate of excretion of dissolved organic nitrogen (Bronk and Glibert 1991; Slawyk and Raimbault 1995) and regeneration rates, based on the technique known as isotope dilution (ammonium, nitrate, urea).

4.3 Measurements of Heterotrophic and Chemoautotrophic Bacterial Production

Heterotrophic bacterial production is classically measured by radioisotopic techniques using the incorporation of thymidine or leucine (3H- or 14C-labeled) into DNA and protein, respectively (Fuhrman and Azam 1980; Kirchman et al. 1985). These techniques are particularly suitable for samples from aquatic environments, but they can also be used for sediments and soils. Owing to their high sensitivity, these measurements can be performed in low-productivity environments (e.g., deep seawater). These techniques also offer the advantage of limited sample handling (the measurement can be carried out on the raw sample without prefiltering) and a short incubation time (from 0.5 to 4 h depending on the activity of the bacteria). Samples (1–10 ml) are incubated with a saturating concentration of radioactive tracer (to be determined beforehand by means of a saturation curve covering a range of radioisotope concentrations) in the dark at in situ temperature, and incorporation is stopped by addition of trichloroacetic acid (TCA). Bacteria are then recovered by filtration or centrifugation and cleared of non-incorporated radiolabeled molecules by several washing steps with cold TCA (Smith and Azam 1992). The radioactivity incorporated by the bacteria is measured with a liquid scintillation counter, and the results are expressed as the rate of thymidine or leucine incorporated per unit volume of sample and per unit of time. It is also possible to express these results as bacterial carbon production (e.g., μgC·l−1·d−1) using experimental conversion factors, which consist of following in parallel the rate of tracer incorporation and the rate of increase in cell numbers in natural samples from which grazers have been removed by filtration or dilution. Theoretical factors can also be used for this conversion. Unfortunately, both approaches introduce uncertainty into the results. The common use of incubation in the dark has also been criticized, as sunlight can stimulate or inhibit bacterial activity for various reasons (Gasol et al. 2008). An alternative measurement of bacterial production without radioactive materials has been proposed: it determines the incorporation of bromodeoxyuridine (BrdU) into DNA by means of immunological detection (Nelson and Carlson 2005).
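
A minimal sketch of this conversion is given below (hypothetical values; the conversion factor is supplied by the user, either experimental or theoretical, as discussed above):

```python
# Minimal sketch (hypothetical values and a user-supplied conversion factor): from
# radiotracer incorporation to bacterial carbon production.

def bacterial_production(dpm_sample, dpm_blank, dpm_per_pmol, volume_ml,
                         incubation_h, ugc_per_pmol):
    """Return (pmol tracer l-1 h-1, ugC l-1 d-1).

    dpm_sample, dpm_blank -- radioactivity of the sample and of a TCA-killed blank
    dpm_per_pmol          -- specific activity of the tracer
    ugc_per_pmol          -- empirical or theoretical conversion factor (ugC per pmol)
    """
    pmol = (dpm_sample - dpm_blank) / dpm_per_pmol
    rate = pmol / (volume_ml / 1000.0) / incubation_h   # pmol l-1 h-1
    return rate, rate * 24 * ugc_per_pmol               # ugC l-1 d-1

# Illustrative run: 380 DPM sample, 114 DPM blank, 355 DPM per pmol leucine,
# 1.5 ml incubated for 1 h, and a hypothetical factor of 1.5e-3 ugC per pmol leucine.
leu_rate, carbon_prod = bacterial_production(380, 114, 355, 1.5, 1, 1.5e-3)
print(round(leu_rate, 1), "pmol leu l-1 h-1 ;", round(carbon_prod, 1), "ugC l-1 d-1")
```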

Prokaryotic chemoautotrophic activity can be measured by dark [14C]bicarbonate assimilation (Herndl et al. 2005). After incubation (24–72 h), the prokaryotes are recovered by filtration (0.2 μm). After rinsing, the filters are exposed to concentrated hydrochloric acid fumes to remove unincorporated bicarbonate, and the radioactivity of each filter is then counted with a liquid scintillation counter.

4.4 The Measurement of Bacterial Respiration

4.4.1 Measurement of Respiration in the Aquatic Environment

Heterotrophic bacteria contribute a variable and sometimes major share (e.g., in the oligotrophic ocean) of planktonic community respiration. To determine more specifically the respiration attributable to heterotrophic bacteria, it is necessary to eliminate the other microorganisms by filtration (0.8 or 1 μm). This step can introduce bias because:

  1. (i)

    Size ranges may overlap between heterotrophic bacteria, cyanobacteria, and photosynthetic protists.

  2. (ii)

    Heterotrophic nanoflagellates (2–3 μm) may squeeze through pores smaller than their own size.

  3. (iii)

    Particle-attached bacteria, which are generally the most active, are retained on the filter. Respiration can then be measured in different ways.

The chemical method, known as the Winkler method, is still the reference method for the determination of dissolved O2. This technique consists in distributing the filtrate into glass bottles of calibrated volume (60–200 ml), taking care not to introduce air bubbles. Some of the bottles are fixed at the beginning of the incubation and the others at the end. The change in dissolved O2 measured over the incubation period provides an estimate of bacterial respiration. The incubation time needed to measure a significant O2 difference depends on the activity of the bacteria in the sample. In oligotrophic systems, it is often necessary to incubate for 24 h or more in order to measure a significant O2 variation in the bottles. Such long incubations can introduce bias due to the bottle effect (i.e., changes in bacterial activity and composition due to confinement).
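The respiration rate itself is simply the difference between the mean O2 concentrations of the bottles fixed at the start and at the end of the incubation, divided by the incubation time; the sketch below uses hypothetical Winkler data.

```python
# Illustrative respiration calculation from Winkler O2 data (hypothetical values).
from statistics import mean

o2_start = [245.1, 244.8, 245.4]   # dissolved O2 (umol l-1), bottles fixed at t0
o2_end   = [238.9, 239.4, 239.1]   # dissolved O2 (umol l-1), bottles fixed at t_end
t_h = 24.0                          # incubation time (h)

respiration = (mean(o2_start) - mean(o2_end)) / t_h
print(f"Respiration = {respiration:.3f} umol O2 l-1 h-1")
```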

The principle of dissolved O2 determination involves several chemical reactions: a solution of manganese and an alkaline solution of potassium iodide are added to the sample to bind the O2 to manganese. After acidification, the precipitate is dissolved and the iodide ions are oxidized to iodine. The iodine released is then titrated with thiosulfate. The equivalence point of the iodine–thiosulfate titration is determined by an electrochemical (potentiometric) or photometric approach with an accuracy of the order of 2 μg O2·l−1 (Carrignan et al. 1998). A spectrophotometric approach has also been proposed to measure directly, by absorbance at 288 or 430 nm, the iodine released after acidification (Roland et al. 1999). This method provides a much higher speed of analysis than the potentiometric approach (about 2 min per analysis) but requires a calibration curve to convert the measured absorbance values into O2 concentrations.

Bacterial respiration can also be measured with chemical and optical microsensors (cf. Sect. 17.5). These measurements can be made continuously or semicontinuously in closed bottles or in dedicated microchambers (Briand et al. 2004). Microsensor technology is evolving rapidly, with ever-increasing sensitivity. However, the number of samples that can be monitored simultaneously with chemical and optical microsensors remains lower than with the Winkler chemical approach.

The amount of O2 respired (μmol·l−1·h−1) can subsequently be converted into the amount of CO2 released using a respiratory quotient (RQ = CO2/O2). RQ is generally estimated at between 0.85 and 1, although lower values have also been measured (Robinson and Williams 1999).

When bacterial production (BP) and bacterial respiration (BR) are measured on the same sample and expressed in the same unit (μg C·l−1·h−1), it is possible to calculate the bacterial growth efficiency (BGE), i.e., the efficiency with which heterotrophic bacteria transform assimilated organic carbon into biomass, as well as the bacterial carbon demand (BCD), i.e., the total quantity of organic carbon assimilated by heterotrophic bacteria (del Giorgio and Cole 1998):

$$ \mathrm{BGE}=\mathrm{BP}/\left(\mathrm{BP}+\mathrm{BR}\right) $$
$$ \mathrm{BCD}\ \left(\upmu \mathrm{g}\ \mathrm{C}\cdot {\mathrm{l}}^{-1}\cdot {\mathrm{h}}^{-1}\right)=\mathrm{BP}+\mathrm{BR} $$

The calculation of BGE and BCD is complicated by the fact that BP and BR are measured on different time scales: while the BP measurement is virtually instantaneous, BR sometimes requires more than 24 h of incubation.
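The sketch below applies the two expressions above, after first converting a respiration rate measured as O2 consumption into carbon units with an assumed respiratory quotient; the BP, BR, and RQ values are hypothetical.

```python
# Minimal sketch of the BGE and BCD calculations (hypothetical values).
def o2_to_carbon(resp_o2_umol_l_h, rq=0.9):
    """Convert an O2 consumption rate (umol O2 l-1 h-1) into ug C l-1 h-1."""
    return resp_o2_umol_l_h * rq * 12.0   # 12 ug C per umol of CO2 released

def bge_bcd(bp, br):
    """BGE and BCD from BP and BR expressed in the same units (ug C l-1 h-1)."""
    bcd = bp + br
    return bp / bcd, bcd

br = o2_to_carbon(0.26)               # e.g., 0.26 umol O2 l-1 h-1 -> ~2.8 ug C l-1 h-1
bge, bcd = bge_bcd(bp=0.7, br=br)
print(f"BGE = {bge:.2f}, BCD = {bcd:.2f} ug C l-1 h-1")
```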

4.4.2 Measurements of Respiration in Soils and Sediments

Bacterial respiration in soils and sediments is more complicated to measure than in planktonic environments because of the difficulty of separating the bacterial contribution from the metabolic activities of the other heterotrophic and autotrophic components. For this reason, the global flux of CO2 or O2 is generally measured in these environments. Soil respiration is an important component of the ecosystem carbon balance, with a strong contribution from microorganisms. Soil respiration can be measured by recording the variations in CO2 concentration by means of a respiration chamber coupled to an infrared gas analyzer (Fig. 17.22). Because they measure the accumulation of CO2 released from the soil surface, chambers are unable to provide information about soil profiles. Moreover, these measurements include some methodological biases, largely related to the small but sufficient air pressure changes induced by the chambers, which can modify the CO2 flux coming out of the ground (Davidson et al. 2002b). Soil respiration can also be measured by regularly analyzing the CO2 gradient in the soil using infrared sensors placed at different depths. Transmitters placed on the ground receive the signals from the probes and send them to a datalogger (Tang et al. 2003).

Fig. 17.22
figure 24

Soil respiration measurement in the field with the use of a respiration chamber SRC-1 and a probe CIRAS-2 (PP Systems, International, Inc., Amesbury, USA) (Photograph: Courtesy of M.L. Doyle)

Measurements of respiration and photosynthesis in the interstitial waters of microbial mats and coastal sediments can be performed using chemical microprobes (cf. Sect. 17.5.2). Benthic metabolism can also be measured in intertidal areas using a benthic chamber coupled with a CO2 infrared analyzer (Migné et al. 2002). Dark incubation allows respiration to be measured, while light incubation measures the net community production.

4.5 The Measurement of Bacterial Enzymatic Activities

4.5.1 Profile of Enzyme Activities

The potential for degradation of different carbon substrates by bacterial communities can be measured using the Biolog EcoPlate™ system (Biolog Inc., CA, USA). Each 96-well microplate contains 31 carbon substrates (amino acids, sugars, carboxylic acids, phenolic compounds, polymers) in triplicate, as well as controls (wells without carbon sources). Additionally, each well contains a colorless tetrazolium dye. Environmental samples are inoculated into the microplate directly (water samples) or after suspension (e.g., soil, sediment, sludge) (Zak et al. 1994; Montserrat-Sala et al. 2005). The response of the bacterial community is monitored regularly during incubation over a period of 2–10 days. The use of the substrates is indicated by the development of a purple coloration due to the respiratory activity of cells, which reduces the tetrazolium salt. Color development is measured spectrophotometrically (optical density) at 590 nm with a microplate reader. Various indices of functional diversity can be calculated from the metabolic profiles obtained (Preston-Mafham et al. 2002).
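One common way to summarize such metabolic profiles is the average well colour development (AWCD), i.e., the mean blank-corrected optical density over the 31 substrates; the sketch below assumes hypothetical OD590 readings for a single plate.

```python
# Sketch of an AWCD calculation for a Biolog EcoPlate (hypothetical OD590 data).
import numpy as np

rng = np.random.default_rng(1)
od_substrates = rng.uniform(0.1, 1.5, size=(31, 3))   # 31 substrates x 3 replicates
od_controls = np.array([0.08, 0.10, 0.09])             # control wells, one per replicate

corrected = np.clip(od_substrates - od_controls, 0.0, None)  # blank-correct, no negatives
print(f"AWCD = {corrected.mean():.2f}")
```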

4.5.2 Measurement of Bacterial Exoenzymatic Activity

In aquatic environments, organic matter comes mostly in the form of macromolecules that cannot be directly assimilated by bacteria. Exoenzymatic hydrolysis of the polymeric material into low molecular weight organic matter (<600 Da) is required before its use by the bacteria. Hoppe (1983) showed that bacteria contribute the bulk of extracellular enzyme activities (EEA) (glucosidases, proteases). Bacterial EEA is measured using fluorogenic substrates composed of a fluorescent molecule (methylumbelliferone (MUF) or 4-methylcoumarinyl-7-amide (MCA)) linked to one or several natural molecules (e.g., amino acids, glucose). This complex is devoid of fluorescence. After enzymatic hydrolysis (breakdown of the specific binding), the chromophore (MUF or MCA) is released and the fluorescence appears (Fig. 17.23). During incubation (from 1 to 3 h), the emitted fluorescence is quantified with a spectrofluorometer, and the EEA is expressed in units of fluorescence per unit time. The EEA is then converted to moles of substrate hydrolyzed per unit time and per unit volume through a calibration curve. In general, EEA follows Michaelis–Menten kinetics (Hoppe 1991). The use of a range of substrate concentrations (at least five) allows the kinetic parameters to be determined: the constant Km (which indicates the affinity of the enzyme for the substrate) and the maximum hydrolysis rate (Vm); a fitting sketch is given below. Measurements with a single substrate concentration can also be made, either at a saturating concentration to determine Vm or at a very low concentration to determine a hydrolysis rate close to the in situ rate.

Fig. 17.23
figure 25

Hydrolysis reaction of a fluorogenic substrate. In the presence of ß-glucosidase, the MUF-ß-D-glucopyranoside releases glucose and the MUF (fluorescent molecule)

A wide range of fluorogenic substrate analogues is available commercially to measure the EEA of different enzymes: aminopeptidase, glucosidase, phosphatase, etc. This technique can be applied to fresh and marine waters, as well as to sediment samples (Talbot and Bianchi 1995).
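The Michaelis–Menten parameters mentioned above can be obtained by nonlinear fitting of the hydrolysis rates measured over the range of substrate concentrations; the sketch below uses SciPy's curve_fit on hypothetical data.

```python
# Fitting Michaelis-Menten kinetics to exoenzymatic hydrolysis rates
# measured at several fluorogenic-substrate concentrations (hypothetical data).
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vm, km):
    return vm * s / (km + s)

s = np.array([0.5, 1, 2, 5, 10, 25, 50])                 # substrate (umol l-1)
v = np.array([2.1, 3.8, 6.0, 9.5, 12.1, 14.0, 14.6])      # hydrolysis rate (nmol l-1 h-1)

(vm, km), _ = curve_fit(michaelis_menten, s, v, p0=(v.max(), 5.0))
print(f"Vm = {vm:.1f} nmol l-1 h-1, Km = {km:.1f} umol l-1")
```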

4.6 Determination of the Bacterial Mortality Mediated by Heterotrophic Protozoa and Viruses

4.6.1 Rates of Predation by Heterotrophic Protozoa

Measurements of the rate of protistan grazing may be classified into two types: those that use tracers to track bacteria in predator organisms and those that manipulate the communities in order to alter the encounter rate between predators and bacteria (Strom 2000). These different approaches may yield contrasting estimates of the rate of predation on bacteria, with tracer-based measurements often giving lower values than methods based on manipulation of the whole community (Vaqué et al. 1994).

4.6.1.1 Methods Using Tracers

Fluorescently labeled bacteria are added to the sample. After incubation, the fluorescent bacteria present in the digestive vacuoles of the bacterivores can be visualized by epifluorescence microscopy. The decrease in free labeled bacteria can also be followed by epifluorescence microscopy or flow cytometry. The main limitation of this approach is that the fluorescent bacteria are derived from a culture and are not necessarily representative of the bacteria of the natural community. It is also possible to radioactively label the bacteria naturally present in the sample studied (e.g., by 3H-thymidine incorporation). The amount of ingested bacteria is measured after a short incubation period (<2 h): the protozoa are separated from the bacteria by filtration (~2 μm), and the radioactivity of the ingested bacteria is counted with a liquid scintillation counter. This approach also has drawbacks, such as the release of radioactivity during digestion of bacteria or the imprecision of the size fractionation. Finally, it is possible to use radioactively labeled E. coli minicells as tracers. These minicells approach the size of environmental bacteria and are unable to multiply. The decrease in radioactivity within the bacterial compartment is followed during incubation.

4.6.1.2 Methods Using the Sample Manipulation

Predation rates can be estimated from samples diluted to varying degrees with natural water filtered through 0.2 μm to remove all particles. Bacterial growth is followed during a 12–48 h incubation in order to deduce the rate of predation. The idea is that dilution proportionally reduces the chances of encounter between bacteria and their predators, while the intrinsic rate of bacterial growth is assumed to remain unchanged. The latter assumption is rarely satisfied, since bacterial growth depends largely on other organisms and, in particular, on phytoplankton.
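In practice, the apparent growth rate measured at each dilution level is regressed against the fraction of unfiltered sample: the intercept approximates the intrinsic growth rate and the negative of the slope approximates the grazing rate. The sketch below follows this logic with hypothetical abundances.

```python
# Sketch of the dilution-method calculation (hypothetical bacterial abundances).
import numpy as np

fraction_unfiltered = np.array([1.0, 0.7, 0.4, 0.2])   # dilution levels
n0 = np.array([1.00e6, 0.70e6, 0.40e6, 0.20e6])         # initial abundance (cells ml-1)
nt = np.array([1.35e6, 1.05e6, 0.68e6, 0.37e6])         # abundance after t (cells ml-1)
t_days = 1.0

apparent_mu = np.log(nt / n0) / t_days                   # net growth rate (d-1)
slope, intercept = np.polyfit(fraction_unfiltered, apparent_mu, 1)
print(f"intrinsic growth ~ {intercept:.2f} d-1, grazing ~ {-slope:.2f} d-1")
```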

The predation rate can also be estimated by comparing the bacterial growth in a raw sample with that in a sample from which heterotrophic protozoa have been eliminated by filtration. Despite its simplicity, this approach raises the problem of the choice of membrane porosity for this separation, given that the size ranges of bacteria and protozoa may overlap. Furthermore, the disadvantages described for the dilution method apply here as well (disruption of interactions between bacteria and phytoplankton). Finally, the predation rate can be measured by comparing the bacterial response in a raw sample and in samples treated with inhibitors of prokaryotic cells (penicillin, streptomycin) or of eukaryotic cells (cycloheximide, colchicine). The use of these inhibitors can be problematic because their action may be insufficient (e.g., predation activity is only incompletely removed by the eukaryotic inhibitors) or nonselective (e.g., prokaryotic inhibitors also affect the predation rate of protozoa).

4.6.2 Viral Production and Mortality Induced by the Virus

Different approaches have been proposed to measure viral production and virus-induced mortality, but none of them has attained the rank of a standard method (Fuhrman 2000; Weinbauer 2004). Transmission electron microscopy allows the detection of virus-infected bacteria once they have reached the final stage of infection (i.e., with a sufficient number of viruses within the cell). The total infection rate can be calculated from this count using a simple model that takes into account the number of bacteria that are infected but not visible by microscopy.

Viral production is a function of the number of virus-infected cells and of the burst size (i.e., the number of phages produced per infected bacterium). Viral production can be used to calculate the number of host cells lysed. It can be estimated by measuring the decrease in the number of viruses in a sample in which viral production has been stopped by the addition of cyanide but in which the removal of viruses continues. Viral production can also be measured by adding viruses labeled with SYBR Green I to the sample. During incubation, the number of labeled viruses and the total number of viruses are followed. Viral production adds unlabeled viruses to the sample, thus reducing the proportion of the labeled viruses originally added. The removal of viruses is assumed to be equivalent for labeled and unlabeled viruses. From these measurements, it is possible to determine simultaneously the rates of viral production and removal.

The rate of DNA and RNA viral synthesis can be measured via the incorporation of thymidine-3H or orthophosphate-32P. After incubation with radiotracers, viruses are separated from the bacteria by filtration to measure the radioactivity in the viral fraction. The amount of radioactivity is converted into viral abundance using conversion factors which, unfortunately, vary greatly from one study to another.

A dilution technique similar to that used for the measurement of predation by protozoa has been proposed to measure virus-induced bacterial mortality. In this case, the water sample is diluted in different proportions with virus-free ultrafiltered water to reduce the chances of viral infection.

The proportion of inducible lysogens can be determined with specific treatments (e.g., addition of mitomycin C, exposure to UV-C radiation) that induce a lytic cycle in lysogenic bacteriophages. These treatments result in prophage induction through direct DNA damage. Induction is detected when a significantly lower bacterial abundance and a significantly higher viral abundance are observed simultaneously in the treatments relative to the control.

5 Chemical and Optical Microsensors

5.1 Measuring Principles of Chemical Microsensors

Microelectrodes or electrochemical microsensors have been used in animal physiology since the 1950s but were only introduced into microbial ecology at the end of the 1970s. Their first application in microbial ecology was the measurement of concentration profiles of O2 and S2−, as well as of pH, in the interstitial pore waters of microbial mats and coastal sediments. Thanks to impressive methodological developments, a large number of compounds can now be measured with microelectrodes. The range of compounds has been further enlarged by incorporating immobilized enzymes and cells into the design of electrochemical microelectrodes, which are then known as biosensors. Optical microsensors have been developed and introduced into microbial ecology since the mid-1980s. They comprise two classes, i.e.,

  1. (i)

    Microsensors designed for measuring variables that characterize optical properties, such as the flux of photons in a microhabitat.

  2. (ii)

    Microsensors designed for measuring the concentration of chemical compounds. The latter, also known as optodes, use an indicator pigment that is sensitive to the concentration of the targeted compound.

The use of microsensors allows us to describe the chemistry at the local scale of the microhabitat of the microorganisms with minimal disturbance. In sediments, biofilms, and microbial mats, it is particularly important to measure the local chemistry at submillimetric resolution. For example, the measurement of O2 at high spatial resolution is important for a precise delimitation of the oxic and anoxic zones in microbial habitats, and thus for knowing where aerobic, microaerophilic, and anaerobic microorganisms may proliferate. In addition to this fine-scale description of microhabitats, the use of chemical microsensors also allows us to study the metabolic activities of the microbes and to quantify the production and consumption of some key solutes. For O2, it is thus possible to study the processes of aerobic respiration and oxygenic photosynthesis at high spatial resolution within biofilms.

5.1.1 How Do Microelectrodes Function?

5.1.1.1 Potentiometric Microelectrodes

The surface of the working electrode functions as an ion-selective membrane and a voltage is generated by charge separation. Three different types of membrane are used:

  1. (i)

    A specific type of glass (e.g., pH-sensitive glass for pH electrodes)

  2. (ii)

    A precipitate of an oxide or a sulfide coating the metal surface of the working electrode (e.g., a precipitate of Ag2S on the surface of silver allows measurement of S2−)

  3. (iii)

    A liquid ion exchange (LIX) membrane. This technique is very attractive, because the specificity of the electrode response is determined by the incorporation of a synthetic molecule which is called the ionophore. Most ionophores are highly specific for their target compounds.

Nowadays, many different ionophores are available on the market and novel ionophores are continuously being developed (Table 17.1). The liquid membrane separates the environment from the liquid electrolyte in the interior of the working electrode. In many cases, PVC is incorporated into the liquid membrane to stiffen it, which is particularly important for applications in sediments and other environments where mechanical forces may act on the electrode. After equilibration, the potential across the membrane of the working electrode arising from the charge separation is described by the Nernst equation:

Table 17.1 Chemical microsensors used in microbial ecology
$$ \Delta E=\frac{RT}{zF} \ln \left(\frac{a_{\mathrm{e}}}{a_{\mathrm{i}}}\right) $$

where R is the ideal gas constant, T the absolute temperature, z the ion charge, F the Faraday constant, and a_e and a_i the activities of the ion in the environment and in the interior of the working electrode, respectively. The potential of the working electrode is measured with respect to a reference electrode (Ag/AgCl or calomel electrode). The activity of the ion in the interior of the working electrode (a_i) is constant, while the activity in the environment is directly proportional to its concentration (a_e = f[ion]). The potential between the working and reference electrodes is thus described by:

$$ \Delta E={E}_0+\frac{RT}{zF} \ln \left({a}_{\mathrm{e}}\right)\kern1em \mathrm{or}\kern1em \Delta E={E}_0^{\prime }+\left(\frac{k}{z}\right) \log \left[\mathrm{ion}\right] $$

where k/z corresponds to the slope factor, which is ideally equal to 59.2 mV at 25 °C for monovalent ions and 29.6 mV for bivalent ions (z = 2). However, ideal conditions are hardly ever achieved and, in practice, potentiometric microelectrodes showing values of k between 56 and 59 mV are commonly used. The measured potential is thus proportional to the logarithm of the ion concentration. Detection limits of potentiometric microelectrodes are often between 10−5 and 10−6 M, depending on the targeted ion.

The advantages of potentiometric electrodes are the large spectrum of ions that can be analyzed by this technique (Table 17.1) and their ease of use. The disadvantages are their often slow response times, interfering compounds in specific cases, and the nonlinearity of the response, since the signal depends on the logarithm of the ion concentration. Hence, a geometric series of ion concentrations needs to be prepared for the target ion to obtain a good-quality calibration, as sketched below.
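Calibration therefore amounts to fitting the measured potentials against log10 of the standard concentrations and checking that the recovered slope is close to the theoretical value; the sketch below uses hypothetical readings for a monovalent ion.

```python
# Calibration sketch for a potentiometric microelectrode (hypothetical readings).
import numpy as np

conc = np.array([1e-5, 1e-4, 1e-3, 1e-2])        # geometric series of standards (M)
emf_mv = np.array([12.0, 69.5, 127.8, 185.9])     # measured potentials (mV)

slope, e0 = np.polyfit(np.log10(conc), emf_mv, 1)
print(f"slope = {slope:.1f} mV per decade (theoretical 59.2 mV for z = 1 at 25 C)")

def ion_concentration(emf):
    """Convert a sample potential (mV) back to an ion concentration (M)."""
    return 10 ** ((emf - e0) / slope)

print(f"[ion] = {ion_concentration(100.0):.2e} M")
```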

5.1.1.2 Polarographic Microelectrodes

A voltage is imposed on the working electrode with respect to the reference electrode. The target compound analyzed by a polarographic method undergoes an oxidation or a reduction at the surface of the working electrode. This process generates a current between the working and reference electrodes that is directly proportional to the concentration of the compound in the environment of the working electrode. The voltage used determines which compounds will undergo an oxidoreduction process. The environment between the working and the reference electrode needs to be electrically conductive; hence, spatially separated electrodes can be used in a salty environment. However, the use of combined polarographic electrodes (Fig. 17.24) is preferred. A combined polarographic microelectrode houses the working electrode and the reference electrode within the same micropipette, which is filled with a conducting electrolyte (e.g., 3 M KCl). In many cases, the specificity of the combined electrode can be improved by using selective membranes, as for the O2, H2S, N2O, and Cl2 microelectrodes (Fig. 17.24). The combined polarographic O2 microelectrode has been the most widely used microsensor in microbial ecology. In this case, the gold-plated working electrode is polarized at −0.8 V with respect to the reference electrode. O2 diffuses through a silicone membrane, which is only permeable to small uncharged molecules such as dissolved gases, and reaches the surface of the working electrode, where it is reduced to OH−. A small silicone membrane is also used for the H2S microelectrode. Inside the micropipette, H2S is oxidized to sulfur at the surface of the working electrode (polarized at +85 mV with respect to the reference electrode), and the electrons are transferred through intermediate compounds such as the ferri-/ferrocyanide redox couple. The signal of this combined electrode is directly proportional to the concentration of non-dissociated H2S. Therefore, pH measurements at the same spot are needed to calculate the total dissolved sulfide concentration (sum of H2S, HS−, and S2−).

Fig. 17.24
figure 26

Designs of different microsensors used in microbial ecology. (a) Pair of potentiometric microelectrodes: working electrode with a liquid ion exchange (LIX) membrane; (b) pair of potentiometric microelectrodes for pH measurements, the working electrode having a sensing tip made of pH-sensitive glass; (c) combined polarographic O2 microelectrode. Both the working electrode and the reference electrode are located in the same micropipette filled with an electrolyte. This micropipette also contains a guard electrode, which improves the performance of this sensor; (d) micro-optode; (e) miniaturized biosensor: compound A passes through a membrane and is converted into compound B in a reaction catalyzed by an enzyme or a prokaryotic cell

The response time of combined polarographic microelectrodes can be extremely short (less than 0.2 s). However, this response time depends on their design, particularly the thickness of the membrane, the geometry of the sensing tip, and the insertion of the working electrode in the micropipette. The extremely short response times are a great advantage for studies of metabolic rates, particularly when transition states are analyzed. The perfectly linear relation between the signal of the electrode (often in pA = 10−12 A) and the concentration of the target compound greatly facilitates calibration. The disadvantage is the consumption of the compound by the electrode itself, which can result in an artifact. However, for most microelectrodes this consumption is so low that it can be neglected.

5.1.1.3 Voltammetry with Microelectrodes

A microelectrode measuring system based on voltammetry has been developed by geochemists, which, so far, has rarely been used in microbial ecology. This system allows the simultaneous measurement of a whole suite of compounds and ions. A gold microelectrode plated with mercury (Hg) is polarized with respect to a reference electrode, and the targeted compounds or ions are oxidized or reduced as in polarography. However, this system applies rapidly changing voltages according to a programmed cycle. For example, in the specific case of linear sweep voltammetry, the voltage is varied with time according to a linear function (e.g., from +0.1 V to −2 V), and the instantaneous current is recorded and plotted as a function of the voltage applied. The relationship between the voltage applied and the current measured often shows stepwise increases with several plateaus, and the differences in current between plateaus allow, for example, the concentrations of Mn2+, H2O2, and O2 to be calculated. The advantage of this system is its capacity to measure dissolved Mn2+ and Fe2+, which are important products of anaerobic metabolism and can be used as electron donors for chemolithotrophy in the presence of O2; it is therefore particularly interesting to be able to measure both reduced metals in the presence of O2. However, the signal of Mn2+ is masked by that of Fe2+ when the Fe2+/Mn2+ ratio exceeds 20. Moreover, the measurement of O2 by this voltammetric system has a lower spatial resolution and is less precise than that achieved with the polarographic oxygen microelectrodes described above (Fig. 17.24c).

5.1.1.3.1 Micro-optodes

Optodes are based on an indicator fluorochrome pigment coated on the surface of one end of a fiber optic cable. The fluorescence signal of this pigment depends on the concentration of the target compound. Based on this principle, micro-optodes have been developed for measuring O2, Li+, pH, and temperature. The other end of the fiber optic cable is separated into two different light-transmitting channels. The first channel is coupled to a monochromatic light source whose light is used to excite the fluorochrome. The other channel is coupled to a detector that measures the intensity of the signal and, in some systems, the kinetics of its changes. The intensity of the fluorescence signal is not linearly proportional to the concentration but is instead described by the Stern–Volmer equation for ideal behavior. However, the immobilization of the indicator pigment induces a deviation from ideal behavior, and for the specific case of the O2 optode the relation can be described by the following adaptation of the Stern–Volmer equation:

$$ \frac{I_{\mathrm{c}}}{I_0}=\left(\frac{1-\alpha }{1+{K}_{\mathrm{sv}}\cdot C}\right)+\alpha $$

where C is the concentration of O2, and I_C and I_0 represent the intensity of the fluorescence signal in the presence and absence of O2, respectively. K_SV is a constant characteristic of the indicator pigment, and α represents a so-called non-quenchable fraction, which is often about 0.1.

The advantages of the optodes are their ease of use, their high stability, and the absence of autoconsumption. Some disadvantages are related to a more complex calibration procedure and a slower response time than observed for polarographic oxygen microelectrodes.
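In practice, once K_SV and α have been obtained by calibration, the modified Stern–Volmer relation above is inverted to retrieve the O2 concentration from the measured fluorescence ratio; the constants and the signal used below are purely illustrative.

```python
# Inverting the modified Stern-Volmer relation for an O2 micro-optode
# (Ksv, alpha, and the measured ratio are illustrative values).
def o2_from_fluorescence(ic_over_i0, ksv, alpha=0.1):
    """Solve Ic/I0 = (1 - alpha)/(1 + Ksv*C) + alpha for C (units set by Ksv)."""
    ratio = ic_over_i0 - alpha
    if ratio <= 0:
        raise ValueError("signal at or below the non-quenchable fraction")
    return ((1.0 - alpha) / ratio - 1.0) / ksv

# With Ksv expressed per uM, the result is in uM:
print(f"C = {o2_from_fluorescence(0.55, ksv=0.01):.1f} uM")
```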

5.1.1.4 Miniaturized Biosensors

In a miniaturized biosensor, a very small reaction chamber is combined with a potentiometric or polarographic microelectrode or with an optode (Fig. 17.24). An enzyme or prokaryotic cells catalyze the reaction, and the product formed is detected by the appropriate microsensor. For the CO2 microsensor, a silicone membrane is used to separate the micro-chamber from the exterior environment. The CO2 gas diffuses through this membrane into the chamber, where it is hydrated by the carbonic anhydrase enzyme (CO2 + H2O → H2CO3). The dissociation of this weak acid induces a change of pH that is monitored in the micro-chamber with a pH microelectrode. At steady state, when the system has equilibrated with its environment, the pH measured inside is related to the CO2 concentration outside according to a hyperbolic relationship. This biosensor has a detection limit of about 10−5 M and a response time of about 10 s. The biosensor for measuring NO3− and NO2− represents another example. This microsensor has been developed because the LIX nitrate microelectrode cannot be used in marine environments, as Cl− interferes with the measurements. NO3− and NO2− diffuse across a membrane into the micro reaction chamber, where these compounds are converted into N2O by a specific culture of denitrifying bacteria. The N2O produced diffuses through a membrane to a polarographic N2O microelectrode, and the induced current is proportional to the sum of the NO3− and NO2− concentrations in the immediate environment of this biosensor.

5.2 Applications of Microsensors for Chemical Compounds

5.2.1 Applications in Homogeneous Liquids

The microsensors for chemical compounds can be used in extremely small volumes (e.g., pH or O2 concentrations in a water droplet or in microtiter plate wells). The polarographic oxygen microelectrodes are also used in micro-respiration systems with volumes of 500 μl (e.g., measurement of the respiration of a small aquatic animal).

The polarographic O2 microelectrodes have been used for the measurement of oxygen production by photosynthesis and of its consumption by aerobic respiration in samples of larger volumes, because of the advantages of microelectrodes with respect to macro-electrodes: microelectrodes for O2 have very fast response times and very low O2 autoconsumption. This has allowed aerobic respiration to be monitored at high time resolution in marine waters following a transition from light to dark. During the light phase, net oxygen production is calculated from the net increase of O2 with time. During darkness, the O2 concentration decreases because of aerobic respiration. Nevertheless, the rate of respiration is often very high directly after the transition from light to darkness and decreases thereafter. This important observation shows that we have to reject the classical hypothesis according to which respiration during light and dark periods is equal. Rather, respiration is enhanced in the light, which can be explained by the production of organic excretion products by the photosynthesizers and their subsequent consumption by chemoorganoheterotrophs. During the first minutes after the shift from light to darkness, these prokaryotes continue to benefit from these excretion products, and this effect ebbs away later (Pringault et al. 2007).

5.2.2 Microsensor Measurements at Interfaces and Within Sediments and Biofilms

Chemical microsensors are used to measure solute gradients within sediments and biofilms and across the interface of these systems with their overlying water. To this end, the microsensor is introduced into these systems with a micromanipulator along a specific direction (normally the vertical axis for sediments and benthic biofilms). The microelectrode is moved in this direction and stopped at regular intervals to take measurements; the sensor signal is recorded after it has stabilized, which depends on the response time of the microsensor (cf. Sect. 17.5.1). The spatial resolution is about twice the dimension of the sensing tip. Hence, for an O2 microelectrode with a sensing tip of 40–50 μm, measurements can be taken about every 100 μm, while these intervals can be reduced to 20 μm for an electrode with a sensing tip of 5–10 μm.

The shape of the solute gradients is determined by the spatial separation of the sources and sinks for these solutes and by the mass transfer rate between sources and sinks. For example, for an aerobic heterotrophic prokaryote living in the sediment, the overlying water represents the source of O2, while aerobic respiration, by this organism and by other prokaryotes, represents a sink for oxygen. Nevertheless, during daytime, cyanobacteria or microalgae living close to the sediment surface may represent another source of O2. The solute can be transported from the source to the sink by water currents that may percolate through permeable sediments or through the so-called water channels in some three-dimensionally structured biofilms. However, in many cases the interstitial water does not move, as is the case for non-permeable sediments and dense biofilms. In addition, most sediments and biofilms are covered by a thin water layer, 200–500 μm thick, that does not move and is called the Diffusion Boundary Layer (DBL) or non-stirred layer. In still water, the mass transfer of solutes occurs through molecular diffusion, which is based on the random movement of these molecules in water. Mass transfer by molecular diffusion can be described by Fick’s diffusion laws. Hence, using Fick’s laws, microbial ecologists have developed methods to infer the metabolic rates of microbial populations from solute gradients. This approach represents a particularly interesting extension of the application of microsensor studies in microbial ecology. Nonetheless, before applying Fick’s diffusion laws, microbial ecologists need to check that no water movements occur in their sediment or biofilm samples.

The first law of Fick is used to calculate the diffusive flux, which corresponds to the mass transfer rate per unit surface of the concerned compound. Thus, the following equation describes the molecular diffusion in water for a one-dimensional system:

$$ J(x)=-{D}_0\bullet \frac{\delta C(x)}{\delta x} $$

where J(x) represents the diffusive flux along the x axis, C(x) is the concentration of the solute at position x, and D_0 is a proportionality constant known as the diffusion coefficient. According to this equation, the diffusive flux is directly proportional to the slope δC(x)/δx. This equation needs to be adapted for sediments, where it takes the following expression:

$$ J(x)=-\sigma \bullet {D}_{\mathrm{s}}\bullet \frac{\delta C(x)}{\delta x}\kern1em \mathrm{with}\kern0.5em {D}_{\mathrm{s}}=\frac{D_0}{\theta^2} $$

where σ represents the porosity of the sediment (the fraction of the sediment volume occupied by interstitial water, ranging from 0 to 1) and D_s is the sediment-specific diffusion coefficient, which corresponds to the ratio D_0/tortuosity. The tortuosity, represented by the symbol θ2, corrects for the fact that the shortest diffusion path through the interstitial water of the sediment is longer than the rectilinear distance, since diffusion has to go around the sediment particles. In practice, the porosity (σ) is easily determined (as the water loss of a known sediment volume upon complete drying); in contrast, it is often very difficult to determine the tortuosity (θ2), and thus D_s, experimentally. Therefore, the values used for D_s are often based on approximations. Fick’s second law is used to calculate the metabolic rates of microbial populations living in diffusion gradients, both in sediments and in biofilms. For a one-dimensional diffusion system adapted to sediments and biofilms, Fick’s second law takes the following expression:

$$ \frac{\delta C\left(x,t\right)}{\delta t}=\sigma \bullet {D}_{\mathrm{s}}\bullet \frac{\delta^2C\left(x,t\right)}{\delta {x}^2}+P\left(x,t\right)-K\left(x,t\right) $$

where the metabolic rates are represented by P(x,t), the metabolic production rate of the solute and K(x,t), its metabolic consumption rate. For O2, the terms P(x,t) and K(x,t) represent its oxygenic photosynthetic production rate and its respiration rate, respectively. The term C(x,t) represents the concentration of the solute located at x in space and at time t.

Hence,

$$ \frac{\delta C\left(x,t\right)}{\delta t} $$

represents the change with time of the concentration at position x, and the term

$$ \sigma \bullet {D}_{\mathrm{s}}\bullet \frac{\delta^2C\left(x,t\right)}{\delta {x}^2} $$

describes the mass transfer by molecular diffusion.

Generally, when a biofilm or a sediment is experimentally exposed to constant environmental conditions, the gradients tend to become stable with time and reflect steady-state conditions. Under steady-state conditions, i.e.,

$$ \frac{\delta C\left(x,t\right)}{\delta t}=0 $$

After rearranging this equation, one obtains:

$$ P(x)-K(x)=-\sigma \bullet {D}_{\mathrm{s}}\bullet \frac{\delta^2C(x)}{\delta {x}^2} $$

Accordingly, the net result of the metabolic rates (P(x) − K(x)) is directly proportional to the second derivative of the solute concentration C(x) with respect to x. However, this approach requires that the porosity (σ) be constant over all depth layers of the sediment profile. A mathematical approach for calculating these rates has been proposed by Berg et al. (1998). Another application of Fick’s second law in one dimension is the calculation of the gross photosynthetic oxygen production rate using the light–dark shift method. In this case, the biofilm is exposed to constant light conditions, and the experimenter waits until steady-state conditions are established.

To measure gross photosynthesis at a position x, a particularly fast-responding O2 microelectrode is positioned at this spot. Subsequently, a shift from light to darkness is imposed. During the light phase, steady-state conditions are checked, which means that:

$$ \frac{\delta C\left(x,t\right)}{\delta t}=\sigma \bullet {D}_{\mathrm{s}}\bullet \frac{\delta^2C\left(x,t\right)}{\delta {x}^2}+P\left(x,t\right)-R\left(x,t\right)=0 $$

where C(x,t) represents the O2 concentration at time t and at the spot x, P(x,t) represents the rate of gross photosynthesis and R(x,t) the respiration rate. During the dark phase, a decrease of O2 concentration with time is observed according to the following equation:

$$ \frac{\delta C\left(x,t\right)}{\delta t}=\sigma \bullet {D}_{\mathrm{s}}\bullet \frac{\delta^2C\left(x,t\right)}{\delta {x}^2}-R\left(x,t\right) $$

because P(x,t) = 0 in darkness.

By assuming that the respiration rate remains constant for a couple of seconds after the transition from light to dark, we can deduce that:

$$ P\left(x,t\right)\left(\mathrm{light}\ \mathrm{phase}\right)=-\frac{\delta C\left(x,t\right)}{\delta t}\left(\mathrm{dark}\ \mathrm{phase}\right) $$

This measurement is repeated at different depth horizons to obtain a vertical distribution of gross photosynthesis rates in the biofilm. Nevertheless, for each measurement, steady-state conditions need to be verified during the light phase before imposing the light–dark transition.
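Numerically, the gross photosynthesis rate at a given depth is simply the initial slope of the O2 decrease recorded immediately after darkening, under the assumption stated above that respiration is unchanged during those first seconds; the readings below are hypothetical.

```python
# Light-dark shift sketch: gross photosynthesis from the initial O2 decrease
# after darkening (hypothetical microelectrode readings at one depth).
import numpy as np

t_s   = np.array([0, 1, 2, 3, 4, 5])                            # seconds after darkening
o2_um = np.array([312.0, 309.6, 307.1, 304.8, 302.2, 299.9])    # O2 (umol l-1)

slope = np.polyfit(t_s, o2_um, 1)[0]        # dC/dt during the dark phase
print(f"P(x) ~ {-slope:.2f} umol O2 l-1 s-1")
```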

Figure 17.25 shows an example of gross photosynthesis and O2 concentration profiles measured in a cyanobacterial biofilm on mudflats in French Guiana. During darkness, O2 decreased with depth and the mud was completely anoxic below 1.2 mm depth. During the illumination phase, using an artificial source providing 414 μmol photons·m−2·s−1, O2 accumulated in the surface layer of the mud, with a maximum at 0.3 mm depth. The green bars represent the gross photosynthetic rates determined with the light–dark shift technique.

Fig. 17.25
figure 27

Profiles of oxygen (O2) concentrations and gross oxygenic photosynthesis rates measured in a biofilm sampled from the mudflats in the Kaw estuary in French Guiana

5.3 Optical Microsensors

Different optical microsensors have been developed using coated fiber optic cables (Fig. 17.26). One end corresponds to the sensing part, where the light conditions are probed, while the other end is connected to a spectroradiometer used as the detector. The black coating prevents stray light from entering the detector.

Fig. 17.26
figure 28

Designs for two different optical microsensors used for measuring the flux of photons in different spots of photosynthetic biofilms and sediments

The simplest form of an optical microsensor corresponds to a fiber optic cable of 20–50 μm diameter. This sensor has an acceptance angle of about 30° and is used to measure photons according to the direction of their flux. The direction in three-dimensional space is described by the solid angle sr(θ,Φ). The variable measured with this microsensor is the field radiance, which describes the flux of photons traveling through the probed spot in the experimentally fixed direction and is expressed per unit surface and solid angle, hence in mol photons·m−2·s−1·sr−1. By repeating measurements in different directions, one can obtain a detailed picture of the intensity and directionality of the light fluxes at different spots in photosynthetic biofilms. This field radiance microsensor has also been used to localize photosynthetic bacteria in biofilms using their spectral signatures.

Another optical microsensor consists of such a fiber optic cable fitted with an integrating sphere. This sensor measures the total integrated flux of photons passing through the measurement spot independently of their direction. The measured variable corresponds to the scalar irradiance, expressed in mol photons·m−2·s−1. The scalar irradiance determines the light energy available for photosynthesis at the measurement spot. The use of a spectroradiometer allows both variables to be quantified as a function of wavelength (λ).

6 Stable Isotopes and Lipid Biomarkers

6.1 Concepts and Definitions

Organic biomarkers are compounds that have a biological specificity, in the sense that they are synthesized by a limited number of (micro)organisms or classes of (micro)organisms. In this section, the biomarker concept refers to the lipid components of prokaryotes (i.e., lipid biomarkers) such as phospholipid fatty acids (PLFAs), hopanoids, or some lipids specific to Archaea (Box 14.9).

The combined study of prokaryotic lipids and their natural stable isotopic composition (e.g., 13C/12C; D/H), the so-called compound specific isotope analysis (CSIA), is often used to characterize the carbon cycle and associated biogeochemical processes in recent or ancient ecosystems. CSIA is also useful for linking the structure of communities (phylogeny) with the functions of uncultivable microorganisms. This approach relies on the analysis of the natural stable isotopic composition (usually 13C/12C) of individual biomarkers (Boxes 4.1 and 14.9) or of their isotopic composition following uptake of a labeled substrate (enriched in stable isotopes).Footnote 1 We present below a few examples of both approaches.

6.2 Natural Stable Isotopic Composition of Lipid Biomarkers

The parameters controlling the stable carbon isotopic composition of prokaryotic lipids are varied and sometimes difficult to understand. In particular, they include the origin of the carbon substrate, the mechanism by which this carbon is assimilated, the biosynthetic pathways through which lipids are formed, and environmental and physiological conditions. Although the diversity and the variability of these factors complicate the interpretation of lipid δ13C values (cf. Box 4.1 for δ notation), such data still can provide valuable information about the biology and the chemistry of microorganisms and about the ecosystems in which they thrive.

6.2.1 Origin of the Carbon Assimilated by (Micro)organisms

The isotopic composition of heterotrophic bacterial populations is generally quite similar to that of their nutritional carbon source (in principle, “you are what you eat”) and may (inter alia) be used to characterize carbon cycling (e.g., food chains) in continental or marine sedimentary ecosystems (Boschker and Middelburg 2002; Pancost and Sinninghe Damsté 2003). For example, the analysis of biomarker 13C composition (such as in PLFAs) in some coastal environments has shown that organic matter derived from aquatic higher plants does not always contribute to bacterial growth, which can be supported through other carbon sources such as phytoplankton (Canuel et al. 1997; Boschker et al. 1999).

In studying the carbon cycle using CSIA (organic compounds being considered as representatives of the bacterial biomass), it is important to take into account the isotopic variability that exists between individual compounds, which arises from fractionation that occurs during biosynthesis. Lipids from heterotrophic organisms are generally depleted in 13C by 3–6 ‰ compared to bulk biomass and assimilated carbon, but different isotopic fractionations (in the range of +4 to −9 ‰) have been observed in some heterotrophic organisms metabolizing different substrates. Thus, establishing a link between the isotopic composition of biomarkers and a specific carbon growth substrate requires that the isotopic fractionation occurring during biosynthesis is known as precisely as possible and is relatively constant. Correction factors are often obtained by using appropriate control experiments (Boschker et al. 1999).
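As a simple numerical illustration of such a correction, the δ13C of the assimilated carbon can be estimated from a biomarker δ13C by subtracting an assumed biosynthetic offset taken within the 3–6 ‰ depletion quoted above; both values below are hypothetical.

```python
# Correcting a PLFA delta13C value for an assumed biosynthetic fractionation
# (offset and measured value are illustrative).
def substrate_d13c(lipid_d13c, lipid_offset_permil=-4.0):
    """Estimate substrate/biomass d13C as lipid d13C minus the lipid offset."""
    return lipid_d13c - lipid_offset_permil

print(f"estimated substrate d13C = {substrate_d13c(-28.5):.1f} per mil")
```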

6.2.2 Identification of (Micro)organisms Involved in Biogeochemical Processes

Some prokaryotes use a carbon source with a very specific isotopic signature that is thereafter recorded in their biomarkers (after any additional fractionation related to metabolism). These specific biomarker isotopic signatures can thus be used to identify parent microbial populations in the environment. This is especially the case for microorganisms involved in the methane cycle, since methane is commonly strongly depleted in 13C (due to fractionation occurring during its biological or thermal productionFootnote 2). For example, highly depleted δ13C values (which can be lower than −110 ‰) of certain lipids, such as the hopanes derived from aerobic bacteria, may serve as indicators of aerobic oxidation of methane (i.e., by aerobic methanotrophs). However, synthesis of hopanes by some strict anaerobes warrants caution in this interpretation (Birgel and Peckmann 2008). Similarly, depleted 13C signatures in specific archaeal lipids (e.g., isoprenoid hydrocarbons and glycerol ethers with isoprenoid alkyl chains) may be used as markers of methanogenesis (Freeman et al. 1994). Like methanotrophs, autotrophic and methylotrophic methanogenic communities can exhibit biomass strongly depleted in 13C relative to the carbon growth substrate. It should be noted that the parameters controlling the isotopic composition of methanogens are still poorly constrained and that 13C depletion in their biomass and lipids may not occur systematically (Pancost and Sinninghe Damsté 2003).

The strongly depleted carbon isotopic composition of glycerol ether lipids specifically synthesized by Archaea has also provided the first irrefutable evidence of the involvement of these organisms in the anaerobic oxidation of methane (AOM) in habitats where the process was previously inferred (Hinrichs et al. 1999; Thiel et al. 1999). Since then, many additional isotopic studies coupled with microscopic observations and phylogenetic analyses have helped to refine our knowledge of AOM and to specify that it often involves a syntrophic association between anaerobic methanotrophic archaea (ANME) and sulfate-reducing bacteria (AOM consortia; Box 14.9). Our understanding of AOM, as well as demonstrations of its occurrence in different ecosystems and of its direct or indirect involvement in biogeochemical processes (e.g., precipitation of carbonates or iron sulfide nodules), is steadily increasing and relies largely on the analysis of lipid biomarker 13C composition.

6.2.3 Mechanism of Carbon Assimilation in (Photo)autotrophic (Micro)organisms

The differences in isotopic composition between fixed CO2 and the organic carbon synthesized by (photo)autotrophic bacteria may be characteristic of the modes of carbon fixation and assimilation.

Many autotrophic organisms synthesize biomass using the Calvin cycle, in which the enzyme Rubisco catalyzes the incorporation of 12CO2 preferentially to 13CO2. The cellular material produced by these organisms is consequently depleted in 13C (or isotopically lighter) by ca. 20–25 ‰ relative to CO2. Further isotopic fractionations also occur during the biosynthesis of specific cellular components. This produces distinct isotopic compositions for different compound classes (sugars, lipids, etc.), but also for individual compounds within the same class (Pancost and Sinninghe Damsté 2003). Thus, for the Calvin cycle, lipids with linear carbon chains (also called acetogenic lipids), such as PLFAs, are generally depleted in 13C by ca. 4 ‰ relative to biomass, while isoprenoid lipids are slightly less depleted (thus appearing slightly 13C-enriched relative to acetogenic lipids).

The isotopic relationships between the inorganic carbon source, biomass, and lipids differ in prokaryotes using assimilation pathways other than the Calvin cycle. Studies with pure cultures indicate that the reverse citric acid cycle, found notably in green sulfur bacteria, and the hydroxypropionate cycle, used by the phototrophic bacterium Chloroflexus and some hyperthermophilic Archaea, generate smaller isotopic fractionations during biomass formation than the Calvin cycle (van der Meer et al. 1998, 2001). In addition, acetogenic lipids in (micro)organisms using the reverse citric acid cycle are enriched in 13C relative to biomass and to isoprenoid lipids (van der Meer et al. 1998). For the hydroxypropionate cycle, linear lipids are slightly depleted (by ca. 1–2 ‰) relative to biomass but are enriched relative to isoprenoid lipids (van der Meer et al. 2001).

6.3 Isotopic Labeling

Isotopic labeling is based on the (partial) consumption of a substrate artificially enriched in stable isotopes (e.g., 13C, 2H, or D) by (micro)organisms growing in laboratory micro- or mesocosms, or in the environment. In microbial ecology, this approach is often called the “SIP method” (for stable isotope probing, see Box 17.3).

Box 17.3: “Who Does What?” Isotope Probing

Pierre Peyret

One of the biggest challenges that microbiologists face is to identify which microorganisms are carrying out a specific set of metabolic processes in the natural environment. New approaches of isotope labeling (Box Fig. 17.3) allow a better ecophysiological understanding of microbial communities (Neufeld et al. 2007b). SIP (stable isotope probing) was first applied in the analysis of phospholipid fatty acids (PLFA) that can be extracted from microorganisms and analyzed by isotope-ratio mass spectrometry (IRMS). Although PLFA analysis offers great sensitivity, the use of labeled nucleic acids as biomarkers has the potential to identify a wider range of bacteria with a greater degree of confidence. DNA-based SIP (DNA-SIP) is increasingly being used in attempts to link the identity of microorganisms to their functions (Dumont and Murrell 2005). This approach has been used to characterize bacteria metabolizing C1 compounds such as methane, methanol, and methyl halides in various environments, as well as multicarbon compounds. The incorporation of a high proportion of 13C into DNA greatly enhances the density of labeled DNA compared with unlabeled (12C) DNA. The DNA is isolated and subjected to caesium chloride (CsCl) buoyant density-gradient centrifugation with ethidium bromide. The heavy 13C-DNA can be purified away from the light 12C-DNA by needle collection and used as a template in PCR, with general primer sets that amplify rRNA genes. It is also possible to target “functional” genes, as demonstrated for methanotrophic bacteria (Cebron et al. 2007). FISH-microautoradiography (FISH-MAR) and the isotope array both use radioactive tracers to monitor the incorporation of substrate (Dumont and Murrell 2005). The isotope array involves incubating an environmental sample with a 14C-labeled substrate, after which the RNA is extracted from the sample, labeled with a fluorophore, and analyzed with an oligonucleotide array that targets 16S rRNA. The array is then scanned for fluorescence and incorporation of the radioactive isotope to determine which community members have metabolized the substrate. Alternatively, secondary ion mass spectrometry (SIMS) can be combined with in situ hybridization to reveal the relationship between phylogeny and naturally occurring variation in stable isotope ratios, indicative of particular metabolic processes such as anaerobic methane oxidation (Orphan et al. 2001). SIMS imaging and Raman microspectroscopy are suitable for detecting and quantifying stable isotope labeling of single microbial cells in complex microbial communities and can be combined with in situ hybridization to identify the active cells (Wagner 2009).

Box Fig. 17.3
figure 29

The SIP technique. (a) Stable isotope probing (SIP): the heavy 13C-DNA can be purified away from the light 12C-DNA by caesium chloride (CsCl) buoyant density-gradient centrifugation. (b–e) Fluorescent staining and micro-autoradiography: (b) DAPI staining of prokaryotic cells from a lacustrine ecosystem; (c) fluorescent in situ hybridization (FISH) using the EUB338 probe; (d) DAPI staining and autoradiography after 3H-thymidine labeling; (e) FISH using the EUB338 probe and autoradiography after 3H-thymidine labeling (Photographs b–e: Courtesy of Delphine Boucher)

Depending on the process investigated (autotrophy/heterotrophy, metabolic pathways, etc.), the labeled substrate may be inorganic (e.g., 13CO2, NaH13CO3) or organic (e.g., 13CH3COOH, 13CH4, labeled pollutants, or planktonic cells). Following incubation, the cellular components (including lipids) of bacteria that have metabolized the substrate are enriched in the isotope being considered.

6.3.1 Deciphering Active Populations in Biogeochemical Processes

The use of labeled substrates together with biomarker investigation provides the ability to identify the part of the prokaryotic community involved in a biogeochemical process. To do so, it is necessary to compare the distribution of labeled biomarkers with known lipid compositions of (micro)organisms. For this type of study, the most commonly used biomarkers are phospholipid fatty acids (i.e., PLFAs-SIP method). In addition to identification of active organisms, degradation rates and growth yields can sometimes be estimated since lipid biosynthesis is closely linked to the growth of (micro)organisms.

Sulfate-reducing bacteria metabolizing acetate, one of the main degradation products of organic matter in anoxic environments, were studied by incubating uniformly labeled acetate (13CH3 13COOH) in different sediments (Boschker et al. 1998; Boschker and Middelburg 2002). The study of PLFAs showed that the labeled carbon predominantly occurred in compounds with even-numbered carbon chains (i.e., 16:1ω7, 16:1ω5, 16:0, and 18:1ω7) and only to a limited extent in fatty acids typical of Gram-negative sulfate-reducing bacteria (e.g., i17:1 and 10Me-16:0). In these experiments, the strong resemblance of labeled PLFAs profiles to those of the Gram-positive sulfate-reducers Desulfotomaculum acetoxidans and Desulfofrigus spp. suggests that these genera are involved in acetate mineralization. The same kind of approach with labeled propionate (13CH3CH2COOH) showed that this other ubiquitous intermediate may be mineralized without acetate production by populations of sulfate-reducing bacteria that are distinct from those involved in the oxidation of acetate (Boschker and Middelburg 2002).

The PLFAs-SIP method is often used to highlight the activity of a population composed of a small number of cells or with a low growth rate. A typical example is the consumption of atmospheric methane by soil bacterial communities. Ambient methane concentrations are generally low and soil methanotrophic populations are sparse, making measurement of methane oxidation rates difficult, as well as identification of the populations involved in the process. By continuously supplying portions of soils with small amounts of 13CH4, several studies (Neufeld et al. 2007a) have demonstrated the variable activity of two different methanotrophic bacterial populations (type I and/or type II) depending on ambient methane concentration (both populations having distinct PLFA profiles).

Use of the PLFAs-SIP method is not limited to natural substrates, and it can also be employed to characterize the populations involved in degradation of xenobiotics or of toxic substances such as toluene (Hanson et al. 1999) or phenanthrene (Johnsen et al. 2002).

6.3.2 Primary Production and Food Chains

Labeling of bicarbonate (i.e., NaH13CO3) coupled with PLFA analysis can be used to distinguish the primary producers (bacterial communities vs. phytoplankton) in aquatic ecosystems and to trace carbon transfer between autotrophic and heterotrophic populations (Boschker and Middelburg 2002). For example, microcosm incubations made with sediments from a brackish estuary have shown that, in this ecosystem, primary production may involve different organisms at different times of day. Carbon is essentially fixed by phytoplankton [characterized by polyunsaturated PLFAs (PUFAs), such as 18:3ω3 for green algae and 20:5ω3 for diatoms] under illuminated conditions, while most of the primary production occurring in the dark is due to chemoautotrophic bacteria (which do not produce PUFAs). During an in situ study, monitoring of labeled carbon in the different compartments of a benthic ecosystem suggested that heterotrophic bacterial communities metabolized extracellular polymers originally formed by the phytoplanktonic community that initially fixed the carbon. The role of heterotrophic bacteria in food chains can also be directly inferred using labeled organic substrates (e.g., planktonic cells enriched in 13C and/or 15N). The transfer of carbon to higher trophic levels (meiofauna, macrofauna) can also be estimated in this manner.

6.3.3 Biodegradation Pathways

In addition to characterizing the microbial populations metabolizing specific organic substrates, labeled molecules can further be used to unambiguously determine their biodegradation (or biotransformation) pathways. This is particularly useful for monitoring the fate of pollutants in the environment.

For example, incubation of anaerobic bacteria (pure strains, populations, or communities) using labeled hydrocarbons as the sole source of carbon and energy has helped to elucidate some mechanisms involved in the anaerobic oxidation of aliphatic (Grossi et al. 2008) and aromatic (Foght 2008) compounds, which have long been considered to be refractory in the absence of oxygen. These studies also identified specific metabolites arising from the anaerobic oxidation of non-methane hydrocarbons (i.e., specific degradation intermediates which are not produced from other compounds or by abiotic processes). Some of these metabolites can then be used as indicators of anaerobic hydrocarbon degradation activity during in vitro or in situ investigations (e.g., Gieg and Suflita 2002; Young and Phelps 2005).

6.3.4 Concluding Remarks

Despite the clear promise of combined studies of prokaryotic lipids and of their natural stable isotopic composition, it is also worth keeping in mind the limitations of this approach.

The interpretation of CSIA data inherently depends on knowing the biomarker composition of existing prokaryotic populations, whose taxonomic identification may remain hypothetical and whose involvement in biogeochemical processes may be difficult to define precisely. The identification of novel molecular proxies specific to (micro)organisms and/or certain biochemical processes remains an ongoing objective. The diversity and the variability of factors controlling the natural 13C composition of prokaryotic lipids also require that we improve our understanding of natural isotopic fractionation, notably by performing additional studies based on isolated organisms.

An advantage of the PLFAs-SIP method is that all the biomarkers of an active (micro)organism can be labeled. In this case, biomarker specificity is a priori less essential than for studies based on the natural stable isotopic composition of individual compounds. However, even if it is occasionally possible to identify uncultivable (micro)organisms by comparing their PLFA profiles with those of isolated and identified species, biomarkers generally provide limited information on the phylogeny of most uncultivable (micro)organisms. For such purposes, the DNA-SIP technique is preferred (Box 17.3). Moreover, methods of isotopic labeling are limited to the study of living (micro)organisms and/or of ongoing biogeochemical processes, whereas the natural stable isotopic composition of biomarkers can also be used to study geological samples (up to several million years old). In this latter case, however, the geological and thermal history of the samples may complicate the interpretation of biomarker δ13C values, and thus multidisciplinary (biogeochemistry, microbiology, sedimentology, etc.) or “multi-proxy” (organic/inorganic) approaches may be preferred.

Unlike studies based on the natural abundance of stable isotopes, isotopic labeling does not systematically require the use of an isotope-ratio mass spectrometer, since the identification of labeled lipids may be performed using a conventional mass spectrometer (GC-MS). The lower isotopic sensitivity of a GC-MS relative to a GC-IRMS may, however, require more extensive labeling. It then becomes necessary to consider the commercial availability and the cost of the labeled substrate. If the substrate is not readily available and/or its price is too high, it can be synthesized in the laboratory using cheaper, commercially available labeled precursors.

7 Techniques for Microbial Diversity Studies

7.1 Nucleic Acids Extraction from Environmental Samples

Organisms contain DNA and RNA. DNA is the cell’s long-term biological memory, with a lifetime significantly longer than that of RNA; the latter, the cell’s short-term memory, is a fragile molecule that is regularly recycled to ensure responsiveness to biochemical and physical changes in the biotope. Extraction techniques targeting DNA are much simpler and are based either on direct extraction of bulk DNA by physical processes (sonication, thermal shocks) followed by purification by reverse-phase chromatography on Elutip columns (Picard et al. 1992), or on a prior separation of the cells from the inorganic matrix by Nycodenz gradient centrifugation. The latter approach is particularly appropriate when the matrix contains large amounts of humic acids, which inhibit the polymerases used for RNA retrotranscription or for DNA amplification by PCR (Berry et al. 2003). Approaches targeting RNA include a supplementary step for the inactivation of RNases using guanidinium isothiocyanate, which allows recovery of up to 26 % of the RNA initially present (Ogram et al. 1995). In many ecosystems, part of the microorganisms present die; their nucleic acids are no longer repaired by the enzymes normally present and thus accumulate damage due to ionizing radiation or to the presence of reactive compounds in the environment. It is thus known that as DNA ages, for instance after a few hundred or thousand years, it becomes more and more difficult to analyze (Mitchell et al. 2005). The released nucleic acids can bind to the inorganic matrix, such as the clay layers of the soil, and remain preserved for several years (Frostegård et al. 1999). Nucleic acids have a life expectancy that varies depending on the type of environment, especially the presence of a matrix, of nucleases, of other microorganisms, and of ionizing radiation. Reagent kits comprising a chemical lysis step and column chromatography are now offered for this kind of approach by various companies, such as Mo-Bio™ (www.mobio.com), Bio101™ (www.bio101.com), or the Soil Master™ kit (www.epibio.com), allowing for greater reproducibility.

7.2 The Different PCR Techniques

7.2.1 The Regular PCR

The polymerase chain reaction (PCR) relies on the property of DNA polymerase to extend, in the 5′ to 3′ direction, an incomplete strand of a partially double-stranded DNA using the other strand as template (Saiki et al. 1988). The double-stranded zone corresponds to the sequence on which a small DNA fragment of about 20 nucleotides, called a primer, can bind. This primer serves as a starting point for DNA synthesis. Using two primers, it is possible to copy the two strands of a DNA fragment, one primer binding to the sense strand (for the synthesis of the antisense strand) and the other primer binding to the antisense strand (for the synthesis of the sense strand) (Fig. 17.27). The DNA polymerase used in PCR is thermostable, i.e., it is resistant to high temperatures (around 95 °C). To enable these fragments to be copied numerous times, the DNA must be single-stranded. Consequently, after their synthesis, double-stranded fragments need to be denatured by heating (denaturation step). Then, in order to generate partially double-stranded DNA, the annealing of primers to their complementary sequences must be favored, and the temperature is rapidly reduced to a favorable value (hybridization step). The hybridization temperature depends on the length of the primers and their base composition (roughly, on the number of hydrogen bonds linking the two strands, that is, 3 and 2 for C-G and A-T pairs, respectively). An empirical formula for determining this temperature is to count 4 °C for each G or C and 2 °C for each A or T of the primer; otherwise, a number of software programs calculate a more accurate value, which will depend on the amount of mono- and divalent ions in the PCR buffer. During the PCR reaction, when the temperature drops, the primers being much more numerous than the DNA fragments, their attachment to the DNA strands is favored over the re-pairing of the two complementary DNA strands. The temperature then increases again to reach the optimum temperature of the DNA polymerase (around 72 °C), and complementary strand synthesis occurs by primer extension (elongation step) (Fig. 17.27). The duration of each step is variable, generally about 20–30 s for denaturation and hybridization. The elongation duration depends on the length of the fragment to be synthesized and on the synthesis rate of the chosen polymerase, generally about 1,500 bases synthesized per minute. This cycle of denaturation, annealing, and elongation is repeated 20–40 times and corresponds to three-step PCR. Some enzymes present an elongation activity at lower temperature (68 °C), and two-step PCR can be performed in some cases with only denaturation and combined annealing–elongation steps. Since the complementary strand of each target strand is synthesized at each cycle, the amount of DNA doubles at each round of PCR during an exponential phase. After this phase, one or more components become limiting and a plateau is observed. Although the initial DNA fragment can be very long, the vast majority of amplified fragments have a size corresponding to the distance between the two primers. It is possible to visualize the amplified fragment by electrophoresis and to check its size and the absence of nonspecific fragments. For each PCR, it is necessary to include a positive control (a tube containing the DNA fragment to be amplified) and a negative control (a tube with ultrapure water instead of DNA, to verify the absence of any contamination).
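As a minimal illustration, the sketch below (in Python) applies the empirical “2 °C per A/T, 4 °C per G/C” rule described above; the primer sequence is only an example, and dedicated software should be preferred for real designs.

# Minimal sketch of the empirical "2 + 4" rule mentioned above:
# 2 °C per A or T and 4 °C per G or C of the primer.
def wallace_tm(primer: str) -> int:
    primer = primer.upper()
    at = primer.count("A") + primer.count("T")
    gc = primer.count("G") + primer.count("C")
    return 2 * at + 4 * gc

# Example with a hypothetical 20-mer primer.
print(wallace_tm("ATGGAGAAGTCTTGATCCTG"))  # -> 58 (°C)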
The choice of primers is essential for specific and efficient amplification. If the sequence of the gene is known, it is possible to use software designed to provide compatible primers with close hybridization temperatures, without sequences complementary to each other, and separated by an appropriate distance (the longer the fragment, the lower the PCR efficiency, unless a specific polymerase is used). It is also necessary to verify the specificity of the primers by comparing their sequence against gene banks (using, e.g., the Blastn software at the NCBI) (see Sect. 17.6.5). If the sequence to be amplified is not known for the target organism, one possibility is to compare all of the known gene sequences for different organisms (e.g., by global sequence alignment with the Clustal software) and then select conserved regions as potential PCR primer sequences. The primers can be degenerate, i.e., synthesized with a nucleotide variation at a given position. To increase their specificity and avoid mismatches (hybridization of noncomplementary bases), primers containing modified nucleotides (LNAs, or locked nucleic acids) can be used. In these nucleic acids, the furanose is chemically locked, which increases its affinity for the complementary nucleotide (Koshkin et al. 1998; Vester and Wengel 2004). Only DNA fragments can be amplified by PCR.
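The following sketch, under the assumption of standard IUPAC ambiguity codes, shows how a degenerate primer expands into the set of concrete oligonucleotides actually present in the primer mix; the primer shown is hypothetical.

# Minimal sketch: expand a degenerate primer (IUPAC ambiguity codes) into
# the set of concrete oligonucleotides actually synthesized.
from itertools import product

IUPAC = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "R": "AG", "Y": "CT", "S": "GC", "W": "AT",
    "K": "GT", "M": "AC", "B": "CGT", "D": "AGT",
    "H": "ACT", "V": "ACG", "N": "ACGT",
}

def expand_degenerate(primer: str):
    """Return every concrete sequence encoded by a degenerate primer."""
    return ["".join(p) for p in product(*(IUPAC[b] for b in primer.upper()))]

# Hypothetical degenerate primer: R = A/G and Y = C/T, hence 4 variants.
print(expand_degenerate("ATGRCY"))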

Fig. 17.27
figure 30

Amplification by PCR of a DNA fragment. Each cycle contains a denaturation, a hybridization, and an elongation step (Drawing: M.-J. Bodiou)

7.2.2 RT-PCR

The PCR method is rapid, efficient, and frequently used. However, DNA can come from dead or living organisms and gives no information about the expression level of a given gene. In contrast, due to the instability of RNA, its detection indicates recent synthesis by a living organism; furthermore, its quantity is an indication of the expression level of the corresponding gene. However, PCR cannot be applied to RNA directly. To amplify a fragment of RNA, it is first necessary to synthesize the DNA complementary to the RNA (cDNA) with a reverse transcriptase. This enzyme is used in nature by retroviruses and by mobile elements called retrotransposons to convert their RNA genome into DNA before its insertion into the DNA genome of their host. The starting point of cDNA synthesis is a partly double-stranded nucleic acid composed of RNA on one strand and a primer on the other. The primer must be complementary to the RNA strand. It must also bind far from the 5′ end of the RNA, since the cDNA is synthesized by 3′ extension of the primer toward the 5′ end of the RNA. Different types of primer can be used for cDNA synthesis:

  1. (i)

    For studying eukaryotic mRNAs, which carry a poly(A) tail (about 200 nucleotides long) at their 3′ end, a poly(T) primer can be used.

  2. (ii)

    To study a specific RNA, the reverse PCR primer can be used for the reverse transcription step.

  3. (iii)

    To analyze the whole transcriptome of a sample, random hexanucleotides can be used. One should keep in mind that mRNA constitutes only about 5 % of total RNA; removal of rRNA before reverse transcription might therefore be necessary. After cDNA synthesis, PCR can be applied to the product of the reverse transcription step; depending on the supplier, different types of polymerases are available, and RT-PCR can be performed in one step (same mixture for reverse transcription and PCR) or in two steps (reverse transcription and PCR as two independent reactions). In order to validate the result of RT-PCR, a control checking for DNA contamination of the RNA extract should be included: the reverse transcription step must include a tube containing all constituents of the assay except the reverse transcriptase. After the PCR step, no amplification should be observed for this tube.

7.2.3 Quantitative PCR: Real-Time PCR

Different methods have been proposed to use PCR for gene quantification. The most popular is called quantitative PCR (qPCR) or real-time PCR (note that the abbreviation RT-PCR can lead to confusion with reverse transcription). As seen previously, in the exponential phase of PCR, the number of amplified fragments theoretically doubles after each PCR cycle. After n cycles, the theoretical amplification factor is 2^n. If the number of genes initially present in the PCR tube was X_0, after n cycles the number of genes X_n will theoretically be X_n = X_0 × 2^n. Since X_n is measurable after staining of the DNA, it is theoretically possible to calculate X_0. If this gene is characteristic of a type of organism, and the number of gene copies per organism is known, the concentration of this organism in the sample can be assessed. Experimental data have shown, however, that the number of genes does not double after each PCR cycle, and different inhibitors (humic acids, phenolic compounds, etc.) can alter the activity of the thermostable polymerase. The yield of the PCR can be determined from the relation between X_n and X_0 using a calibration curve constructed with solutions of known concentrations of the gene fragment (e.g., cloned on a plasmid).

The principle is to use a heat-resistant dye (e.g., SYBR Green), which fluoresces only when bound to double-stranded DNA. Fluorescence is proportional to the amount of DNA and is measured in each tube at the end of each elongation step (Fig. 17.28). Quantification is usually made by amplifying relatively small fragments (<500 bp, preferably around 200 bp). This technique requires relatively sophisticated equipment coupled with computer analysis of the data. Special, particularly transparent PCR tubes or plates must be used. The experimenter sets a fluorescence threshold that must lie in the exponential part of the DNA amplification curve. In this part of the curve, amplification follows the equation:

Fig. 17.28
figure 31

Real-time PCR. Fluorescence of SYBR Green depending on its binding to DNA (left); example of results used to construct a standard curve for real-time PCR quantification (right); (a) fluorescence in each PCR tube during each PCR cycle; (b) relation between the number of cycles necessary to reach a given fluorescence and the base-10 logarithm of the initial concentration of the target gene

$$ {X}_n={X}_0{R}^n $$
  • with R, the yield of the PCR,

  • n, the number of cycles,

  • X_0, the initial amount of the gene, and

  • X_n, the amount after n cycles.

The software then calculates the number of cycles necessary to reach the fluorescence threshold. This number of cycles is determined for each point of the standard curve. A relation between the base-10 logarithm of the initial concentration of fragments and the number of cycles required to reach the fluorescence threshold can then be established (Fig. 17.28b). This relationship is a straight line whose slope is −1/log10(R). The determination of these parameters is then used to calculate X_0 for unknown samples.
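A minimal sketch of this calculation is given below, assuming hypothetical Ct values for the standard curve; it fits the line, derives the PCR yield R from the slope, and estimates X_0 for an unknown sample.

# Minimal sketch of real-time PCR quantification from a standard curve
# (hypothetical Ct values, not measured data). The standard curve is
# Ct = slope * log10(X0) + intercept, with slope = -1/log10(R).
import numpy as np

log_x0 = np.log10([1e3, 1e4, 1e5, 1e6, 1e7])       # known gene copy numbers
ct     = np.array([30.1, 26.8, 23.4, 20.1, 16.7])   # threshold cycles (illustrative)

slope, intercept = np.polyfit(log_x0, ct, 1)
R = 10 ** (-1.0 / slope)                             # PCR yield per cycle (~2 if ideal)

ct_unknown = 24.9
x0_unknown = 10 ** ((ct_unknown - intercept) / slope)
print(f"PCR yield R = {R:.2f}, estimated X_0 = {x0_unknown:.2e} copies")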

Fig. 17.29
figure 32

RFLP and T-RFLP techniques. PCR-amplified fragments are hydrolyzed with a restriction enzyme (1). The length of the generated fragments (A, B, C, D) is visualized by electrophoresis (2). For RFLP, all fragments are stained, whereas for T-RFLP only the terminal fragments (A and C) are visualized by fluorescence, since one of the two PCR primers carried a fluorochrome (yellow spot) (Drawing: M.-J. Bodiou)

7.3 The Molecular Fingerprints

Molecular fingerprinting techniques include different ways to quickly view and analyze diversity between gene fragments amplified by PCR. The most common are the RISA, RFLP (T-RFLP), DGGE, and SSCP.

7.3.1 RISA (Ribosomal Intergenic Spacer Analysis)

The method is based on amplification of the intergenic region between two genes, in most cases those encoding the 16S and 23S rRNA (for prokaryotes) or the 18S and 28S rRNA (for eukaryotes). Ribosomal genes form an operon (adjacent genes transcribed together). The beginning and end of these genes are conserved, allowing the binding of PCR primers. The intergenic region sometimes contains genes coding for transfer RNAs and/or noncoding DNA of variable length. After the PCR reaction, the size of the amplified fragments is analyzed by gel electrophoresis (agarose or acrylamide) or by capillary electrophoresis. A single genome contains several ribosomal operons (between 1 and 15 for prokaryotic organisms and hundreds for eukaryotes), and differences between copies may be observed. Consequently, for a given organism, it is possible to obtain several intergenic fragments with different lengths. As mutations accumulate much faster in noncoding regions of the genome, this method can differentiate phylogenetically related organisms; however, as the separation is done only on the criterion of fragment size, its resolution depends on that of the electrophoresis method (from higher to lower resolution: capillary, acrylamide, agarose). In addition, phylogenetically different organisms may yield RISA intergenic fragments of the same size. To overcome this drawback, some authors incubate the PCR fragments with restriction enzymes to increase the resolution of the method. A phylogenetic sequence analysis can be performed on RISA fragments, preferentially using the sequences corresponding to the ends of the ribosomal genes, since the number of intergenic sequences in gene banks is still quite limited. The automated version of the method is called ARISA (Automated Ribosomal Intergenic Spacer Analysis).

7.3.2 RFLP (Restriction Fragment Length Polymorphism)

This method analyzes the sequence diversity of PCR fragments after their hydrolysis with restriction enzymes and determination of the size of the hydrolyzed products by electrophoresis (Fig. 17.29). The chosen enzymes generally recognize a sequence of four nucleotides and, from a statistical point of view, therefore cut DNA frequently. The resolution of the technique depends on the enzyme used, and for a given study different enzymes must initially be tested. The multitude of bands can make analysis difficult, and variants of this technique have been proposed. To limit the number of bands after electrophoresis, only one of the fragments is analyzed in the method called T-RFLP (Terminal Restriction Fragment Length Polymorphism). For this, one of the terminal moieties of the PCR fragment is detected by fluorescence because, during PCR, one of the two primers used carried a fluorochrome group. Whereas RFLP and T-RFLP are applicable to any gene, the ARDRA method (Amplified rDNA Restriction Analysis) focuses on the diversity of ribosomal genes only. The primers and enzymes are standardized, allowing the identification of organisms thanks to a data bank containing the sizes of the fragments generated for different reference bacterial strains.
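As an illustration of the principle only, the sketch below performs a naive in silico digestion to obtain the length of the labeled terminal fragment; the amplicon and the HhaI-like recognition site (GCGC) are illustrative, and real T-RFLP analysis relies on dedicated software and size standards.

# Minimal sketch of in silico T-RFLP: locate the first restriction site in an
# amplicon and report the length of the 5'-terminal (fluorescently labeled)
# fragment. Sequence and enzyme site are illustrative.
def terminal_fragment_length(amplicon: str, recognition_site: str) -> int:
    """Length of the labeled terminal fragment after digestion (cut placed
    at the start of the recognition site for simplicity)."""
    pos = amplicon.upper().find(recognition_site.upper())
    return pos if pos != -1 else len(amplicon)  # uncut amplicon if no site

# HhaI-like 4-base site (GCGC) on a hypothetical amplicon.
amplicon = "AGCTTAGGCGCATTACGAATTCGGATCC"
print(terminal_fragment_length(amplicon, "GCGC"))  # -> 7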

7.3.3 SSCP (Single Strand Conformational Polymorphism)

The sequence diversity of DNA fragments is analyzed by electrophoresis. To generate different electrophoretic profiles from fragments of the same size but of more or less variable sequence, the fragments are first denatured in dilute solution; when the temperature drops, the strands do not rehybridize with their complementary strands, but intramolecular reannealing occurs between small complementary regions (Fig. 17.30). This generates molecules with varying spatial structures that migrate differently during electrophoresis. The migration is performed on an acrylamide gel or in a capillary at low temperature.

Fig. 17.30
figure 33

SSCP. DNA strands are denatured by heating and then cooled suddenly (1). Thanks to the dilution, each single strand hybridizes on itself (2). For a given strand, one or several three-dimensional foldings can occur. The different conformations can be visualized by electrophoresis (3) (Drawing: M.-J. Bodiou)

7.3.4 DGGE (Denaturing Gradient Gel Electrophoresis) and TGGE (Thermal Gradient Gel Electrophoresis)

These are electrophoretic techniques that allow the gradual denaturation of PCR fragments during migration. This denaturation is due either to a gradient of concentration of formamide and urea in the gel (DGGE) or to a temporal increase of temperature during migration (TGGE). Partial denaturation of DNA fragments slows their migration because they become bulkier and progress with more difficulty through the meshes of the gel (Fig. 17.31). To increase the resolution of the migration, a modified primer is used during the PCR. One of the two PCR primers carries an additional stretch of about forty bases composed only of C and G (GC tail, also called GC clamp). This additional sequence is never denatured during migration, so the two strands of a given fragment always remain paired in that region. As a result, fragment migration nearly stops once the fragment is denatured everywhere except at the GC tail. According to their sequence, the different fragments stop migrating sooner or later and are thus spatially separated in the gel. The resolution of this technique is of the order of a single sequence variation in a fragment of about 500 bp.

Fig. 17.31
figure 34

DGGE. The denaturation of double-stranded DNA is gradual during electrophoretic migration due to the presence of a gradient of denaturing compounds. Double-stranded DNA fragments (1); two of them are partially denatured (2); all of them are denatured except at one end due to the presence of a GC tail (red) (3). Migration is slowed down as partial denaturation increases. Example of a DGGE result (4)

7.4 Cloning, Sequencing

Cloning involves integrating a DNA fragment into a cloning vector, i.e., an extrachromosomal element such as a plasmid, cosmid, or BAC (Bacterial Artificial Chromosome), to make it easier to work with. The principle is to linearize the cloning vector using a restriction enzyme and to mix the cloning vector and the fragment to be integrated with a DNA ligase, an enzyme that generates a covalent bond between adjoining linear DNA strands (Fig. 17.32). For successful cloning, the molarities of the cloning vector and the fragment must be close, and the ends of the linearized cloning vector and of the fragment must be compatible in form (blunt ends, with the two strands ending at the same level, or cohesive ends, with one strand protruding relative to the other; in the case of protruding ends, they must be complementary to each other). In order to generate compatible ends, it is possible to linearize the vector and generate the fragment to be cloned with the same restriction enzyme, or with enzymes producing ends of the same type (this information is usually provided in the catalogs of restriction enzyme suppliers). Particular attention must be paid to the ligation of PCR fragments. Indeed, depending on the polymerase, the ends of the fragments are either blunt or carry a protruding 3′ A. In the first case, a linear vector with blunt ends should be used, while in the second case ligation should be performed with a cloning vector carrying a protruding 3′ T (some suppliers offer this type of plasmid already linearized). Another alternative is to blunt the 3′ A-protruding ends of the PCR fragment. If the starting material is a mixture of different DNA fragments (e.g., fragments from the amplification of ribosomal genes of a community), cloning allows them to be separated after transformation into E. coli cells, since each clone harbors a plasmid carrying a single type of fragment. Transformation of E. coli may be carried out either by heat shock of CaCl2-treated cells or by electroporation (electric shock treatment of cells in a solution without ions). If the sequence of the plasmid is known, it is possible to sequence the cloned fragment by the Sanger method with a primer binding to the plasmid (see sequencing technique below). The sequencing technique proposed by Sanger allows the determination of the order of the different nucleotides composing a DNA molecule (Fig. 18.1) (Sanger et al. 1977). It is based on the synthesis by a DNA polymerase of a complementary strand from an oligonucleotide primer. This synthesis proceeds by polymerization of deoxynucleoside triphosphates (reaction between the 3′ hydroxyl of nucleotide n − 1 and the 5′ phosphate of nucleotide n). If the reaction is performed in four tubes and a modified nucleotide (one of the four dideoxynucleotide triphosphates, devoid of a 3′ hydroxyl) is also introduced into each tube, polymerization stops whenever this nucleotide is incorporated. In each tube, the fragments whose synthesis was stopped at random positions have different sizes and all end with the same nucleotide. In the initial method, the various DNA fragments were radiolabeled and subjected to electrophoresis in a polyacrylamide gel. Negatively charged fragments migrate faster the smaller they are. On photographic film, the sequence can then be determined by comparing the sizes of the different fragments and knowing the terminal 3′ dideoxynucleotide of each fragment.
This initial method has since evolved, and different sequencing methods are now available (see Chap. 18).
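The chain-termination principle can be illustrated by the following minimal sketch, which, for a hypothetical newly synthesized strand, lists the fragment lengths expected in each ddNTP tube and reads the sequence back from the sorted fragment sizes.

# Minimal sketch of the chain-termination principle: for each ddNTP "tube",
# list the lengths of the prematurely terminated fragments, then read the
# sequence back by sorting all fragments by size (template is illustrative).
def sanger_fragments(synthesized_strand: str):
    """Map each ddNTP to the lengths of fragments ending with that base."""
    tubes = {"A": [], "C": [], "G": [], "T": []}
    for length, base in enumerate(synthesized_strand.upper(), start=1):
        tubes[base].append(length)  # termination each time this base is added
    return tubes

strand = "ATGCCGTA"   # hypothetical newly synthesized strand
tubes = sanger_fragments(strand)
print(tubes)

# Reading the gel: shortest to longest fragment gives the sequence 5'->3'.
read = "".join(base for length, base in sorted(
    (l, b) for b, lengths in tubes.items() for l in lengths))
print(read)  # -> ATGCCGTA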

Fig. 17.32
figure 35

Cloning and transformation of E. coli cells. The cloning vector (black) and the fragment to be inserted (red) are ligated together with a DNA ligase (1). The cloning vector may also close on itself. The ligation mix is introduced into E. coli cells (2); if the fragment has been inserted into the beta-galactosidase gene, after growth on an agar plate it is possible to differentiate colonies harboring a vector with an inserted fragment (white colonies) from those containing the vector closed on itself (blue colonies). Since the vector also harbors an antibiotic resistance gene, only cells harboring the vector are able to grow on an antibiotic-containing agar medium (Drawing: M.-J. Bodiou)

Initially, cloning and sequencing were used in diversity studies. To compare the diversity between samples, one must be sure to have exhausted their diversity; consequently, the number of sequenced clones must be large. In such studies, it is necessary to analyze the number of different sequences (OTUs, Operational Taxonomic Units) as a function of the number of clones sequenced. This rarefaction curve has an asymptote corresponding to the number of different sequences in the sample (Fig. 17.33). This traditional approach is less and less used, to the benefit of new sequencing techniques (next-generation sequencing, NGS) (cf. Sect. 18.1.2), because they drastically increase the number of sequences processed at lower cost. The number of sequencing techniques and of software tools for sequence processing keeps growing. As an example, a study of human intestinal microbiomes has analyzed nearly two million 16S rRNA sequences (Turnbaugh et al. 2009).
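A minimal sketch of such a rarefaction curve is given below, using a randomly generated clone-to-OTU assignment rather than real data: for increasing numbers of clones, it reports the average number of OTUs observed.

# Minimal sketch of a rarefaction (OTU accumulation) curve: for increasing
# numbers of sequenced clones, count how many distinct OTUs are observed on
# average over random subsamples (clone-to-OTU assignments are hypothetical).
import random

random.seed(0)
# 200 clones drawn from 25 OTUs with uneven abundances (toy community).
otu_ids = random.choices(range(1, 26), weights=range(25, 0, -1), k=200)
clones = [f"OTU{i}" for i in otu_ids]

def rarefaction(clones, step=20, repeats=100):
    points = []
    for n in range(step, len(clones) + 1, step):
        mean_otus = sum(len(set(random.sample(clones, n)))
                        for _ in range(repeats)) / repeats
        points.append((n, mean_otus))
    return points

for n, otus in rarefaction(clones):
    print(f"{n} clones sequenced -> {otus:.1f} OTUs observed")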

Fig. 17.33
figure 36

Example of relation between OTU number and number of sequenced clones

7.5 Bioinformatics Analysis of Sequences

All microorganisms contain nucleic acids, be they genes or intergenic sequences, and these sequences are transmitted vertically with occasional mutation events that make them gradually more and more different in the various lineages. This mechanism of mainly vertical transmission should not obscure the fact that some genes are transmitted laterally; nevertheless, several genes are considered molecular markers of the organism as a whole, first and foremost the ribosomal genes, which were considered by Woese et al. (1990) as a molecular clock. The 16S rRNA gene in particular is now represented in databases by more than 4,000,000 entries and is used as a first approach for the identification of a new organism; it has actually become the gold standard of bacterial taxonomy.

The 16S rRNA gene is obtained by amplification using “universal” primers (cf. Sect. 8.4.2) targeting highly conserved sites at both ends of the gene, for instance, FGPS4-281bis (5′-ATGGAGAAGTCTTGATCCTGGCTCA-3′) and FGPS1509′-153 (5′-AAGGAGGGGATCCAGCCGCA-3′) (Normand 1995), or as a partial sequence of about 500 nt with other primers (com1 and com2; Lane et al. 1985). The sequence can be obtained by direct sequencing with the amplification primers or, if there are copies with slight variations that make direct sequencing impossible, as in Thermomonospora (cf. Sect. 6.1), following cloning into E. coli. Clones or amplicons can be sequenced by any of several private companies such as MWG (http://www.eurofins.com/en.aspx) or Genomex (http://www.genomex.com/) for a few euros per read.

The analysis of a given DNA sequence is done by comparing it to a set of other sequences using a computerized approach called the “Basic Local Alignment Search Tool” (BLAST). This approach is based on a bioinformatics algorithm in which speed is privileged at the expense of accuracy. Speed is essential to compare a new sequence to huge databases containing 130 billion nucleotides in 110 million sequences and still growing at an exponential rate. It is thus not surprising that the paper by Altschul et al. (1990) describing BLAST was the most cited of the 1990s; in January 2012, its total number of citations had reached 31,530. The BLAST approach begins by eliminating low-complexity regions such as repeats; the sequence is then cut into short overlapping sub-sequences, of a length varying depending on the version (16–64 for Blastn as implemented at the NCBI), and these sub-sequences or “words” are compared to the database, receiving a positive score (1–5) for identity or a negative one (−1 to −4) otherwise. “Words” with a low score are eliminated, while the others are retained in the search tree. The neighborhoods upstream and downstream of the “words” with high scores are then examined for identities. These identities are then quantified, and a second score is calculated taking gaps into account. These matches are called “High Scoring Pairs” (HSPs). The last step is the calculation of the BLAST significance score for each HSP, which can be expressed as the probability that a given score is reached by chance, depending on the length of the sequence analyzed and the size of the database. A BLAST analysis thus yields three values: the percentage of similarity, the score, and the likelihood that the match was obtained by chance (E-value). If the sequence to be analyzed is a protein sequence, the steps are roughly similar, with simply a more detailed way to quantify the similarities between amino acids (substitution matrices). Different variants of BLAST exist, for instance, BLASTP to compare protein sequences to a protein database, TBLASTN to compare one or several protein sequences to a nucleic acid database translated into protein sequences in the six reading frames, BLASTX to compare a nucleotide sequence translated in the six reading frames against a database of protein sequences, or finally TBLASTX to compare nucleic acid sequence(s) translated into the six reading frames with a nucleic acid database also translated in the six reading frames.
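The word-seeding step can be illustrated with the following minimal sketch, which only finds exact word matches between a query and a pre-indexed database sequence; scoring, extension, and E-value computation are deliberately omitted, and the sequences and word size are arbitrary.

# Minimal sketch of the seeding step described above: cut the query into
# overlapping "words" and look them up in a pre-indexed database sequence.
# This only illustrates word matching, not scoring, extension, or E-values.
def index_words(sequence: str, word_size: int):
    """Map every word of length word_size to its positions in the sequence."""
    index = {}
    for i in range(len(sequence) - word_size + 1):
        index.setdefault(sequence[i:i + word_size], []).append(i)
    return index

def seed_hits(query: str, db_index: dict, word_size: int):
    """Return (query_position, db_position) pairs of exact word matches."""
    hits = []
    for i in range(len(query) - word_size + 1):
        for j in db_index.get(query[i:i + word_size], []):
            hits.append((i, j))
    return hits

# Hypothetical database sequence and query; a word size of 8 for illustration.
db = "ACGTACGTTAGCTAGCTAGGATCCAGGTACGTT"
query = "TAGCTAGGATCC"
idx = index_words(db, 8)
print(seed_hits(query, idx, 8))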

There are several databases used for this type of analysis. The database of the National Center for Biotechnology Information (NCBI, http://www.ncbi.nlm.nih.gov/blast/) is the best known. It is a database in which new sequences are deposited daily and shared with two other databases that operate in a coordinated manner, the DNA Data Bank of Japan (DDBJ) and the European Molecular Biology Laboratory (EMBL). The three databases established a collaboration 19 years ago, the International Nucleotide Sequence Database Collaboration (http://insdc.org/), to ensure a regular exchange of data and their backups. These sites are sometimes victims of their popularity and can be difficult to access at certain times of the day. This problem is circumvented by the creation of many sites that download the data regularly and allow analyses, such as the PRABI site (www.prabi.fr/) in France. An example of bacterial identification using BLASTN is shown in Fig. 17.34.

Fig. 17.34
figure 37

Identification of a bacterial isolate from its 16S rRNA sequence. Soil bacterial isolates were grown in a Petri dish (Photo: Courtesy of P. Pujic); their DNA, obtained following cell lysis, was amplified with two universal 16S primers (Normand 1995), and the sequence obtained was analyzed either by BLASTN in a general database, the NCBI (right), or in a dedicated database, the BIBI database (left). In both cases, an identification to the genus Terribacillus was obtained

Large numbers of input sequences can be treated in a single step via the command-line “megablast,” which is much faster than running BLAST several times. Many input sequences are concatenated to form a large sequence before searching the BLAST database, then treated to obtain individual alignments and statistical values.

Other possibilities exist for comparing a given sequence to a database. The best known and most widely used by microbial ecologists is the Ribosomal Database Project (RDP, http://rdp.cme.msu.edu/), where new sequences in FASTA format are compared to an alignment of 16S rRNA gene sequences from bacteria and archaea with a Bayesian approach simplified for increased speed and based on a reference hierarchy (Wang et al. 2007). The 16S sequences of the type strains in the database are divided into “words” of eight nucleotides whose frequencies are calculated. When a sequence is submitted for analysis, the joint probability of finding every “word” is calculated for each genus of the database. Subsets of “words” having a high probability are then used to recalculate the probability a hundred times. Each type strain sequence in the database then receives a number that is the sum of the probabilities of co-occurrence of the “words.” For larger taxonomic entities, identification is made by summing the probabilities for each sequence. Other databases have been developed for those who want to work on genes other than the 16S. This is the case of BIBI (http://umr5558-sudstr1.Univ-lyon1.fr/lebibi/lebibi.cgi), which allows identifying bacteria using the sequence of one of the following genes: gyrB, recA, sodA, rpoB, tmRNA, tuf, groES, groEL, dnaK, dnaJ, fusA (bacteria), groel2-hsp65, and beta-lactamase. The result is given in the form of a phylogenetic tree with 30 leaves showing the position of the unknown sequence, together with a sequence alignment (Devulder et al. 2003). It is possible, and even necessary, to automate some of these steps to identify, for instance, 1,000 16S sequences from a metagenomic sequencing project. The sequences must then be formatted (FASTA format: a first comment line starting with “>”, the sequence on the following lines) and the “batch” option of the megablast version used.
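As a minimal illustration of this formatting step, the sketch below writes a set of sequences to a FASTA file suitable for batch submission; the identifiers, sequences, and 70-character line wrapping are only conventions assumed for the example.

# Minimal sketch of writing sequences in the FASTA format described above
# (">" comment line followed by the sequence) for a batch submission.
def write_fasta(records, path):
    """records: iterable of (identifier, sequence) pairs."""
    with open(path, "w") as handle:
        for identifier, sequence in records:
            handle.write(f">{identifier}\n")
            # wrap sequence lines at 70 characters, a common convention
            for i in range(0, len(sequence), 70):
                handle.write(sequence[i:i + 70] + "\n")

# Hypothetical 16S fragments to be identified in batch mode.
records = [("clone_001", "AGAGTTTGATCCTGGCTCAG" * 5),
           ("clone_002", "GTGCCAGCMGCCGCGGTAA" * 5)]
write_fasta(records, "batch_16S.fasta")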

It is generally considered that an identity of >99 % over the entire 16S sequence with a bacterial sequence present in the database is sufficient to consider assigning it to a given species. This is true in general, but some distinct species have the same sequence and, conversely, some strains belonging to a given species have different 16S sequences. In general, the sequence of a single gene cannot as such enable the identification of a species; it must be complemented by the analysis of other genes, ideally covering the whole genome, or by other tests. Other servers do this kind of analysis locally, for example, MultHoSeqI (http://pbil.univ-lyon1.fr/software/HoSeqI/), which also implements a tool to detect the presence of chimeras (Arigon et al. 2008).

Databases should continue to expand in the years ahead with the exploration of complex environments using metagenomic approaches and the characterization of the complete genomes of many organisms. Sequencing capabilities will also increase with new sequencing technologies such as high-throughput pyrosequencing or the Solexa and SOLiD technologies (Chap. 18). New developments in computer science are also underway to expedite data processing and allow the analysis of large data sets.

7.6 DNA Microarrays

DNA microarray (DNA chip, microchip, biochip, gene chip) technology is a powerful, high-throughput experimental system that allows the simultaneous analysis of thousands to hundreds of thousands of genes. Originally developed in 1995 for monitoring whole-genome gene expression (Schena et al. 1995), microarrays were used for the first time in microbial ecology in 1997 (Guschin et al. 1997), when nine probes targeting 16S rRNA genes were used to identify key genera of nitrifying bacteria. The application of microarray technology to microbial ecology is a rapidly developing approach (Dugat-Bony et al. 2012a). After a short description of the principle of the DNA microarray approach, the various platforms and applications in microbial ecology will be described.

7.6.1 DNA Microarrays Principle

DNA microarray technology is based on nucleic acid hybridization (Fig. 17.35). However, contrary to Northern or Southern blotting, the probes are attached to the solid surface and the targets are labeled (Ehrenreich 2006). Under conditions suitable for hybridization, the probes on the chip are exposed to a solution containing a complex sample of fluorescently labeled targets. DNA macroarrays (dot blots on nitrocellulose or nylon membranes) have the disadvantage of moderate throughput and uncontrolled binding of probes. Planar glass microarrays have become the most widely used type of array. DNA microarrays are solid surfaces to which arrays of specific DNA fragments of various lengths have been attached (ex situ) or synthesized in situ (the photolithography technology of Affymetrix or the ink-jet technology of Agilent) at discrete locations. Oligonucleotide arrays are becoming the most widely used type of array owing to the exponential growth in available complete genome sequences and metagenomic data sets and to the low cost of DNA synthesis. Furthermore, with the advancement of microarray technology (in situ synthesis), high-density oligonucleotide microarrays can hold billions of probes on a single microscopic glass slide with multiplexing capacities. These molecular tools can be easily synthesized on demand, in small batches, and at low cost. This flexibility, combined with rapid data acquisition, management, and interpretation, allows oligonucleotide microarrays to continue to challenge next-generation sequencing for various applications. However, careful attention to probe design is needed to develop accurate tools (Rimour et al. 2005; Militon et al. 2007). Cross-hybridization is the major factor limiting the determination of specific probes.
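As a highly simplified illustration of this constraint, the sketch below keeps only candidate probes (k-mers of a target sequence) that do not occur exactly in a set of non-target sequences; real probe design tools also account for near-matches, melting temperature, and secondary structure, and the sequences used here are invented.

# Minimal sketch of the cross-hybridization concern: keep only candidate
# probes (k-mers of the target gene) that do not occur in a set of
# non-target sequences (naive exact-match check on illustrative sequences).
def candidate_probes(target: str, non_targets, k: int = 20):
    background = set()
    for seq in non_targets:
        for i in range(len(seq) - k + 1):
            background.add(seq[i:i + k])
    probes = []
    for i in range(len(target) - k + 1):
        probe = target[i:i + k]
        if probe not in background:     # naive specificity criterion
            probes.append(probe)
    return probes

target = "ATGGCGTTAGCCTTAGGCATCAGGTTACCGTAGGCTA"
non_targets = ["ATGGCGTTAGCCTTAGGCATAAGTTTACCGTAGTCGA"]
print(len(candidate_probes(target, non_targets, k=20)))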

Fig. 17.35
figure 38

Schematic representation of the different steps involved in the DNA microarrays approach (Drawing: Martine Chomard)

7.6.1.1 DNA Microarray Platforms

Owing to advances in microarray fabrication technology, many choices of DNA microarray platforms and physical formats are available (Dharmadi and Gonzalez 2004). Based on the arrayed material, currently there are two different microarray platforms (ex situ and in situ).

7.6.1.1.1 Ex Situ DNA Microarrays

Through surface derivatization, all kinds of nucleic acids (PCR products, cDNA, gDNA, and oligonucleotides) can be arrayed on solid surfaces such as glass slides using robotic pin spotting or ink-jet printing (Fig. 17.36). One disadvantage of ex situ DNA microarrays is that each probe must be synthesized, purified, and stored prior to microarray fabrication (Schena et al. 1998). This can be expensive for high-density DNA microarrays. Furthermore, probe quality is difficult to validate.

Fig. 17.36
figure 39

Schematic representation of the probes spotting on the solid surface of the DNA microarrays. (a) Contact spotting. (b) Ink-jet spotting (Drawing: Martine Chomard)

7.6.1.1.2 In Situ DNA Microarrays

In situ synthesis of oligonucleotide probes allows very flexible DNA microarray fabrication at very high density (several million probes). The Affymetrix GeneChip® array uses a photolithographic method and phosphoramidite chemistry for the in situ synthesis of high-density 25-mer oligonucleotide probes (Fig. 17.37a). Adapting technologies used in the semiconductor industry, manufacturing begins with a quartz wafer as the solid surface (Dalma-Weiszhausz et al. 2006). NimbleGen activity has stopped, so we will not describe this technology, which is close to the Affymetrix approach. A different method for the in situ synthesis of oligonucleotide probes (60-mers), using ink-jet technology (Wolber et al. 2006), is proposed by Agilent (Fig. 17.37b).

Fig. 17.37
figure 40

In situ synthesis of the oligonucleotide probes. (a) Photolithographic method used by Affymetrix. (b) Ink-jet technology used by Agilent (Drawing: Martine Chomard)

7.6.1.2 Probes Design and Targets Preparation

PCR products and cDNA were the first probes used in transcriptomic assays. For comparative genomics or the identification of close genomes in environmental samples, gDNA can also be used. As previously indicated, owing to their ease of synthesis and quality control, oligonucleotides are now largely used as probes, improving detection specificity (Kreil et al. 2006). However, detection sensitivity may decrease with reduced probe length, requiring new design strategies such as the GoArrays algorithm (Rimour et al. 2005). One of the major drawbacks of DNA microarrays lies in their a priori dependence on sequence information, constraining surveys to genes whose sequences are available in public databases. New probe design strategies such as PhylArray (Militon et al. 2007) and KASpOD (Parisot et al. 2012) for phylogenetic microarrays, and MetabolicDesign (Terrat et al. 2010) and HiSpOD (Dugat-Bony et al. 2011) for functional microarrays, can get around the limitation of sequence availability and make possible the detection of uncharacterized sequences (Dugat-Bony et al. 2012a). Gene capture with such explorative probes has recently been applied to environmental samples (Fig. 17.38) (Denonfoux et al. 2013).

Fig. 17.38
figure 41

Schematic representation of the gene capture technique. (a) Classical approach used for large genomic region re-sequencing. (b) Innovative approach for metagenomics targeting (Drawing: Jérémie Denonfoux)

Many different fluorescent dyes and other labeling agents have been described in the literature to label the targets, but the cyanine dyes Cy-3 and Cy-5 are most commonly used, offering strong fluorescence, similar chemical properties, well-separated fluorescence spectra, and little adherence to the chip surface. Hybridization of DNA microarrays is done by placing the labeled, denatured target on the slide. After washing to eliminate non-hybridized material, the microarray is scanned to detect the fluorescence revealing specific interactions between probes and targets. The scanners mostly use lasers to excite the surface of the hybridized microarray. The fluorescence emitted from the dyes linked to the targets is collected and quantified by photomultiplier tubes or charge-coupled device (CCD) cameras. To quantify the fluorescence of the features via image analysis, pixels have to be assigned either to a spot or to the background. PCR amplification of the marker gene(s) is applied to improve the detection sensitivity and to limit nonspecific hybridization. Naturally amplified RNA molecules (rRNAs) offer a potential for PCR-free, direct detection, thus avoiding the inherent bias of consensus PCR (von Wintzingerode et al. 1997).
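A minimal sketch of this pixel assignment is given below on a synthetic image: pixels inside a circular mask are treated as the spot and the rest as local background, and the background-subtracted mean intensity is reported; actual scanner software performs far more elaborate segmentation and quality control.

# Minimal sketch of spot quantification: assign pixels to spot or local
# background with a circular mask and report the background-subtracted mean
# intensity (synthetic image; real analysis software does much more).
import numpy as np

rng = np.random.default_rng(0)
image = rng.normal(100, 5, size=(21, 21))            # background noise
yy, xx = np.mgrid[0:21, 0:21]
spot_mask = (yy - 10) ** 2 + (xx - 10) ** 2 <= 5 ** 2
image[spot_mask] += 400                               # fluorescent spot signal

spot_signal = image[spot_mask].mean()
local_background = image[~spot_mask].mean()
print(f"net signal = {spot_signal - local_background:.1f}")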

7.6.1.3 Hybridization and DNA Microarrays Analysis

Factors affecting duplex formation on DNA microarrays include probe density, microarray surface composition, the stabilities of probe–target duplexes, and intra- and intermolecular self-structures and secondary structures (Pozhitkov et al. 2006). Microarray hybridization has conventionally been conducted manually, by placing the fluorescently labeled target onto the array under a cover-slip and incubating in a humidified chamber overnight. Although conventional hybridizations are used because of their low cost and ease of implementation, the hybridization reaction then relies solely on diffusion for target dissemination, limiting the ability of the targets to react with all the probes bound on the array. Hybridization performance can be improved by automated hybridization (Peeva et al. 2008). Scanning a microarray is a fairly simple task to execute, but it involves the selection of a variety of parameters that can have profound effects on the resulting data (Timlin 2006). Image processing through several steps, including quality controls, allows an efficient representation that facilitates interpretation (Ehrenreich 2006).

7.6.2 DNA Microarrays for Microbial Ecology

DNA microarrays have been primarily developed and used for gene expression profiling of pure cultures of individual organisms, but major advances have been made in their application to environmental samples (Gentry et al. 2006). Different categories of DNA microarrays have been applied to comparative genomics, transcriptomic assays, phylogenetic identification, and functional characterization, in order to precisely describe microbial communities (their structure and function) and their dynamics, but also to discriminate strains (Fig. 17.39).

Fig. 17.39
figure 42

DNA microarrays for microbial ecology (Drawing: Martine Chomard)

7.6.2.1 Genome-Based DNA Microarrays

Community genome arrays (CGAs) contain the whole genomic DNA of cultured organisms and can describe a community based on its relationship to these cultivated organisms (Wu et al. 2004). Metagenomic arrays (MGAs) are a potentially powerful technique because, unlike the other arrays, they contain probes produced directly from environmental DNA itself and can be applied with no prior knowledge of the community (Sebat et al. 2003). Whole-genome open reading frame (ORF) arrays (WGAs) contain probes for all of the ORFs in one or multiple genomes. Dong et al. (2001) used a WGA containing 96 % of the annotated ORFs in E. coli K-12 to comparatively interrogate the genome of the closely related (97 % identity based on the 16S rRNA gene) Klebsiella pneumoniae 342, a maize endophyte. More recently, a “pangenome” probe set was designed to cover core Dehalococcoides genes as well as strain-specific genes while optimizing the potential for hybridization to closely related, previously unknown Dehalococcoides strains (Hug et al. 2011).

7.6.2.2 Transcriptomics DNA Microarrays

DNA microarray studies are usually carried out as a comparison of two samples to identify differentially expressed genes (Schena et al. 1998). Microarrays have been applied with success to analyze the global gene expression of a microbial community in response to oxidative stress (Scholten et al. 2007). DNA microarray studies of biofilm formation have also addressed numerous questions, such as which genes are required for biofilm formation, which environmental signals regulate it, how different biofilm cells are, and whether biofilm formation is a developmental process (Lazazzera 2005). However, it is difficult to evaluate spatial gene expression in such structures. Biofilm microdissection will probably help to overcome this limitation.

7.6.2.3 Functional DNA Microarrays

Functional gene arrays (FGAs) are designed for key functional genes that encode proteins involved in various metabolic processes. Currently, the most comprehensive tool developed is the GeoChip 3.0, with ~28,000 probes covering approximately 57,000 gene variants from 292 functional gene families involved in the carbon, nitrogen, phosphorus, and sulfur cycles, energy metabolism, antibiotic resistance, metal resistance, and organic contaminant degradation (He et al. 2010). Recently, an efficient functional microarray probe design algorithm, called HiSpOD (High Specific Oligo Design), was proposed to detect unknown genes (Dugat-Bony et al. 2012a). A microarray focusing on the genes involved in chloroethene solvent biodegradation was developed as a model system and enabled the identification of active cooperation between Sulfurospirillum and Dehalococcoides populations in the decontamination of a polluted groundwater (Dugat-Bony et al. 2012b). Another software program, called Metabolic Design, ensures the in silico reconstruction of metabolic pathways and the generation of efficient explorative probes through a simple and convenient graphical interface (Terrat et al. 2010).

7.6.2.4 Phylogenetic DNA Microarrays

Phylogenetic oligonucleotide arrays (POAs) are designed based on a conserved marker such as the 16S ribosomal RNA (rRNA) gene, which is used to compare the relatedness of communities in different environments. The most comprehensive POA developed so far is the high-density PhyloChip, with nearly 500,000 oligonucleotide probes targeting almost 9,000 operational taxonomic units (Brodie et al. 2006). Currently, very few software tools dedicated to POAs allow the design of explorative probes. The PhylArray program relies on group-specific alignments before the probe design step to identify conserved probe-length regions (Militon et al. 2007). KASpOD is a web service dedicated to the design of signature sequences using a k-mer-based algorithm. Such highly specific and explorative oligonucleotides are then suitable for various goals, including phylogenetic oligonucleotide arrays (Parisot et al. 2012).

7.7 Pigment Analyses

The main biological functions of pigments in microorganisms are related to light harvesting and processing as well as to photoprotection. Photoprotective pigments absorb and neutralize photons that would otherwise be potentially damaging to cellular structures. The analysis of pigments in microbial communities can thus tell us something about the ecological importance of both functions. The photosynthetic pigments can also be used as taxonomic biomarkers, allowing the estimation of the quantitative importance of anoxygenic phototrophic bacteria, cyanobacteria, and different classes of phototrophic eukaryotes in microbial communities. Pigment analyses of pure cultures are also used for ecophysiological and taxonomic studies. Hence, a detailed description of pigment composition is necessary when describing novel species of photosynthetic bacteria and phototrophic eukaryotes. However, ecophysiological studies have shown that the specific pigment contents of a single species, as well as the relative ratios between its different pigments, may change with environmental conditions.

The color of a pigment is determined by its absorption spectrum, which is a graphical representation of its absorption of photons as a function of their wavelength (λ). The photosynthetic pigments have absorption maxima limited to the visible wavelength range (400–700 nm), with the exception of the bacteriochlorophylls (BChl), which also show maxima in the ultraviolet (BChl a) and in the near infrared (BChl a, BChl b, BChl c, BChl d, BChl e, and BChl g). A first approach for analyzing the photosynthetic pigments of a phototrophic microorganism in liquid culture is to measure an in vivo absorption spectrum using a spectrophotometer equipped with an integrating sphere. When a cuvette with a culture of phototrophic microorganisms is placed in the spectrophotometer, the photon flux density decreases along the optical path both because of light absorption by the pigments and because of diffraction, due to the optical behavior at the interfaces between the cells and their liquid environment. The integrating sphere allows recovery of the photons that have been deviated from their optical path by diffraction and thus a good measurement of photon absorption alone as a function of wavelength (λ). Within the cells, however, the absorption spectra of the pigments can be modified by biochemical and biophysical interactions, and this is particularly the case for the chlorophylls. Therefore, the in vivo absorption spectra of living photosynthetic microorganisms are particularly relevant for the study of biophysical features but often less useful for calculating the specific contents of the different pigments in the cell. The latter can be achieved by extracting the pigments into solution.

No single solvent allows the extraction of all known photosynthetic pigments from phototrophs. Hence, the choice of the extraction solvent determines which pigments are targeted for analysis. An organic solvent such as methanol or acetone is typically used to extract lipophilic pigments such as chlorophylls and carotenoids. The hydrophilic pigments, e.g., the phycobiliproteins and mycosporine-like amino acids of cyanobacteria, are extracted using a buffered aqueous solution. In order to improve extraction efficiency, different treatments can be used, e.g., freeze–thaw cycles or a French press. The pigment extracts are normally centrifuged or filtered to obtain a solution free of cellular debris, to prevent interference from turbidity during spectrophotometric analyses. When the pigment composition of the sample is well known, it is possible to measure the concentrations by using multi-wavelength (multi-λ) spectrometry. Quantification is based on the Lambert-Beer law, using the following equation for a single pigment in solution:

$$ {I}_{\left(\uplambda, x\right)}={I}_{\left(\uplambda, 0\right)}\;{e}^{-KCx} $$

where I (λ,0) is the incident flux of photons of wavelength λ, I (λ,x) is the flux of photons of wavelength λ leaving the cuvette, x is the optical path length in the cuvette, K is the absorption coefficient of the dissolved pigment at wavelength λ, and C is the concentration of the pigment in solution. The absorbance is defined by the following equation:

$$ {A}_{\left(\uplambda, x\right)}=- \log \frac{I_{\left(\uplambda, x\right)}}{I_{\left(\uplambda, 0\right)}}={\varepsilon}_{\uplambda}\bullet C\bullet x $$

where A (λ, x) represents the absorbance at wavelength λ (dimensionless) and ε λ the molar extinction coefficient (in l·mol−1·cm−1). Note the difference in the base of the logarithm; hence ε λ  = K/2.3. After rearrangement, the concentration is directly proportional to the absorbance according to the following equation:

$$ C=\frac{A_{\left(\uplambda, x\right)}}{\varepsilon_{\uplambda}\bullet x} $$

This law is additive and can be adapted to calculate the concentrations of several pigments by using a multi-wavelength approach (the number of different wavelengths must be at least equal to the number of pigments in solution), as illustrated in the sketch below. However, for pigment mixtures it is often preferable to separate the pigments in order to achieve a better quantification.
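As a simple illustration of this multi-wavelength approach, the following sketch (in Python) solves the linear system given by the additive Lambert-Beer law for a hypothetical two-pigment mixture; the extinction coefficients, wavelengths, and absorbance readings are invented placeholders, not measured values.

# Minimal sketch of multi-wavelength pigment quantification (Lambert-Beer law).
# All numerical values are illustrative assumptions, not real measurements.
import numpy as np

# Molar extinction coefficients epsilon[i, j] of pigment j at wavelength i
# (l mol^-1 cm^-1); rows = wavelengths, columns = pigments.
epsilon = np.array([
    [75000.0,   8000.0],   # e.g. 665 nm: chlorophyll-like pigment dominates
    [12000.0, 140000.0],   # e.g. 480 nm: carotenoid-like pigment dominates
])

absorbance = np.array([0.42, 0.31])  # measured A at the two wavelengths
path_length_cm = 1.0                 # optical path x of the cuvette

# A = epsilon . C . x  =>  solve the linear system for the concentrations C
concentrations = np.linalg.solve(epsilon * path_length_cm, absorbance)
for i, c in enumerate(concentrations):
    print(f"pigment {i + 1}: {c * 1e6:.2f} umol/l")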

Liquid chromatography (LC) allows the separation of pigments. LC is often performed at high pressure and is then referred to as High Pressure Liquid Chromatography (HPLC), with pressures ranging from 50 to 200 bars. HPLC is most commonly used to separate lipophilic pigments in organic solvents, although methods for separating hydrophilic pigments have also been developed. The pigment molecules are separated in a column filled with a stationary phase. A solvent flux, or mobile phase, elutes through the column, and the separation relies on the fact that each pigment equilibrates differently between the stationary and mobile phases. Molecules that have a high affinity for the stationary phase are retained on the column for a long time, while molecules with a lower affinity elute faster. The retention time (R t) is therefore defined as the time between the injection of a molecule onto the column and the moment it leaves the column and enters the detector.

In most currently used HPLC protocols, the pigments are mainly separated according to their hydrophobicity, while molecular weight and stereochemistry interfere less strongly. For historical reasons, the term normal phase is used for protocols in which the most hydrophobic compounds have the shortest retention times, and reversed phase for the opposite. Nowadays, most HPLC pigment protocols are based on reversed phase chromatography. An injector is located upstream of the column, and the mobile phase is delivered by pumps or a solvent delivery system. Isocratic conditions imply that the composition of the mobile phase remains constant. However, many solvent delivery systems allow the composition of the mobile phase to be changed according to a programmed solvent gradient. Thus, under reversed phase conditions, the degree of hydrophobicity of the mobile phase is increased during the chromatography in order to optimize pigment separation.

The outflow of the column is connected to a detector. A diode array spectrophotometer is often used for pigment analyses, as it allows the instantaneous measurement of a full absorption spectrum. In this way, a three-dimensional data matrix is generated in which the absorbance (A) is expressed as a function of R t and λ. As in classical spectrometry, the Lambert-Beer law applies, and there is thus a direct proportionality between the response A(R t, λ) and the pigment concentration. The software usually allows the operator to visualize the data as three-dimensional graphs or to choose two-dimensional representations, either chromatograms (λ fixed) or absorption spectra (R t fixed).
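The sketch below (Python, using a synthetic data matrix rather than real instrument output) illustrates how such an A(R t, λ) matrix can be sliced into the two-dimensional views mentioned above: a chromatogram at a fixed wavelength or an absorption spectrum at a fixed retention time. The peak position, width, and wavelength grid are arbitrary assumptions.

# Minimal sketch of slicing a diode-array data matrix A(Rt, lambda).
import numpy as np

rt = np.linspace(0, 30, 301)      # retention times (min)
wl = np.arange(350, 801, 2)       # wavelengths (nm)

# Hypothetical matrix: one Gaussian peak at Rt = 12 min with a pigment-like spectrum
spectrum = np.exp(-((wl - 665.0) ** 2) / (2 * 15.0 ** 2))
elution = np.exp(-((rt - 12.0) ** 2) / (2 * 0.2 ** 2))
A = np.outer(elution, spectrum)   # A[i, j] = absorbance at rt[i], wl[j]

# Two-dimensional views, as described in the text:
chromatogram = A[:, np.argmin(np.abs(wl - 665))]   # lambda fixed (nearest to 665 nm)
spectrum_at_peak = A[np.argmin(np.abs(rt - 12.0)), :]  # Rt fixed (nearest to 12 min)

print("peak height in the chromatogram:", chromatogram.max())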

Other types of detectors can equally be used for the detection of photosynthetic pigments and can often be coupled in series with a diode array detector. A fluorimetric detector may be particularly interesting for the detection and quantification of chlorophylls, as the fluorescence signal is more sensitive than absorption, which lowers their detection limits. However, the non-linear response of the fluorescence signal is a drawback for quantification and requires a more elaborate calibration. The coupling of HPLC with mass spectrometry, known as LC-MS (Liquid Chromatography Mass Spectrometry), has also been developed for pigment analyses. Carotenoids fragment poorly, and the main information obtained thus concerns the total molecular weight of these compounds. In contrast, the chlorophylls fragment very well, and LC-MS thus allows the following information to be deduced:

  1. (i)

    The molecular weight of the chlorophyll.

  2. (ii)

    The molecular weight of the esterified alcohol.

  3. (iii)

    The molecular weight of the macrocycle of the chlorophyll molecule.

  4. (iv)

    The presence of substitutions on the macrocycle and their molecular weight. This type of information makes it possible to determine the exact structure of the different allomers of BChl c and BChl d and can be very useful for discovering novel pigments.

7.8 The Analyses of Phospholipid Fatty Acids

The analysis of phospholipid fatty acids (PLFA)* provides a quick way to study and compare the biodiversity of microbial communities and to reveal the impact of changing environmental conditions on these communities (variations in temperature or oxygen concentration, impact of pollutants such as hydrocarbons and heavy metals). Some PLFA can be used as biomarkers* of certain functional groups and of certain genera within a functional group (Spring et al. 2000) (Table 17.2). For example, some genera of sulfate-reducing bacteria are characterized by specific PLFA; i.e., Desulfobulbus spp. contain the 15:1ω6 and 17:1ω6 fatty acids, Desulfovibrio spp. contain the i17:1ω7c fatty acid, while Desulfobacter spp. contain the 10me16:0 and Cy17:0 fatty acids.

Table 17.2 Compilation of the major microbial lipid biomarkers and their occurrence (modified after Spring et al. 2000)

While other characteristic PLFA have been detected in many different microorganisms, their contents are often highest in bacterial species. This is the case for the highly branched iso and anteiso 15:0 and 17:0 fatty acids. The iso and anteiso fatty acids constitute 75 % of the total lipids of Micrococcus agilis. For the species Micrococcus halobius, the iso and anteiso fatty acids with aliphatic chains of 14 to 17 carbon atoms (14:0 to 17:0) represent almost the total amount of fatty acids, the branched 17:0 fatty acid alone already representing 45 %. Cyclopropane fatty acids are major PLFA of numerous Gram-positive bacteria and Desulfobacteria spp. (Dowling et al. 1986). Other PLFA, e.g., palmitic acid (16:0) and linoleic acid (18:2ω6), are widely distributed among living organisms and thus cannot be used to infer taxonomic affiliation or physiological status in microbial communities.

The following other type of information can be obtained from PLFA analyses:

  1. (i)

    The isomerization of the monounsaturated fatty acids, i.e., the conversion of the cis isomer into the trans isomer, can be used as an indicator of stress in bacteria that have been exposed to toxic organic compounds (phenol, toluene) (Heipieper et al. 1995).

  2. (ii)

    The ratio of vaccenic acid (18:1ω7) to oleic acid (18:1ω9) is very high for bacteria (25) and much lower in diatoms (1) and green algae (0.2).

  3. (iii)

    An index of hydrocarbonoclastic activity has been proposed by Aries and collaborators (2001). This index allows inferring the proportion of bacterial growth sustained by the use of hydrocarbons as a substrate, compared with the proportion of growth sustained by hydrophilic growth substrates such as acetate. The index is calculated by summing (i) the saturated linear fatty acids with an odd number of C atoms, (ii) the saturated and monounsaturated branched fatty acids, and (iii) the other monounsaturated fatty acids with an odd number of C atoms. This sum is divided by the total amount of monounsaturated fatty acids with an even number of C atoms (see the sketch following this list). Values ranging between 0.8 and 1.3 are characteristic of cultures growing on petroleum, while the value is systematically lower than 0.1 for cultures growing on acetate.
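The following sketch (Python) illustrates how such an index could be computed in practice; the PLFA profile and its classification into the groups named above are invented for the example and would need to be replaced by real chromatographic data.

# Minimal sketch of the hydrocarbonoclastic index of Aries et al. (2001),
# computed from a hypothetical PLFA profile (relative abundances are invented).
profile = {          # fatty acid -> relative abundance (% of total PLFA)
    "15:0": 4.0,     # saturated, linear, odd number of C atoms
    "17:0": 3.0,     # saturated, linear, odd number of C atoms
    "i15:0": 5.0,    # branched, saturated
    "a15:0": 2.5,    # branched, saturated
    "i17:1w7c": 1.5, # branched, monounsaturated
    "17:1w8": 2.0,   # monounsaturated, odd number of C atoms
    "16:1w7": 18.0,  # monounsaturated, even number of C atoms
    "18:1w7": 22.0,  # monounsaturated, even number of C atoms
    "16:0": 30.0,    # saturated, even number of C atoms (not used by the index)
}

odd_saturated_linear = profile["15:0"] + profile["17:0"]
branched = profile["i15:0"] + profile["a15:0"] + profile["i17:1w7c"]
odd_monounsaturated = profile["17:1w8"]
even_monounsaturated = profile["16:1w7"] + profile["18:1w7"]

index = (odd_saturated_linear + branched + odd_monounsaturated) / even_monounsaturated
print(f"hydrocarbonoclastic index = {index:.2f}")
# Values around 0.8-1.3 would suggest growth on petroleum, << 0.1 growth on acetate.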

The analysis of PLFA makes it possible to follow the temporal dynamics of microbial communities. For example, the chronology of the different phases that characterize compost formation has been studied by following the PLFA. It has thus been observed that the initial communities are dominated by Gram-positive bacteria and that this community shifts toward an increasing dominance of Gram-negative bacteria, actinobacteria, and fungi (Klamer and Bääth 1998).

The study of PLFA contributes very useful information about microbial communities, although it must be used with caution. Indeed, the specific biomarkers for microorganisms have been identified in axenic culture studies of these species under laboratory conditions, which may be very different from natural conditions in the environment. Moreover, the qualitative and quantitative screening for lipid biomarkers among microorganisms has so far been limited to a restricted number of species, which is probably insufficient. Novel biomarkers will certainly be discovered in the future as the number of studied strains increases. Therefore, interpretations will need to be updated and revised. The polyunsaturated fatty acids provide a good example of such a shift in interpretation. For a long time it was considered that bacteria do not contain polyunsaturated fatty acids, but they have recently been discovered in bacteria living under very high pressure in deep ocean trenches (extremely piezophilic bacteria) (Fang et al. 2002).

Two rather novel approaches in the study of lipids appear particularly promising:

  1. (i)

    Study of the intact phospholipids. For example, Mazella and his collaborators (2005) have shown that the phospholipid composition changes in the presence of hydrocarbons.

  2. (ii)

    Analyses of quinones. An example is provided by Tang and collaborators (2004), who showed how microbial communities changed during the thermophilic phase of compost formation.

8 Methods of Isolation, Culture, and Conservation

8.1 Cultures in Aerobiosis

Cultivating microorganisms means placing them in conditions favorable enough to allow their development. These conditions include the definition of physicochemical and metabolic parameters (temperature, pH, salinity, oxygen): the cells must have access to an energy source and to nutrients. During the culturing of a sample, microorganisms are placed in a new environment. Whether culturing is carried out on a solid or in a liquid medium, the choice of the medium is essential (Fig. 17.40).

Fig. 17.40

Bacterial colonies obtained after spreading a seawater sample on different agar culture media and incubating for a few days. Bacterial colonies obtained from a coastal lagoon after 2 weeks of growth on R2A medium (a) and on Marine Agar 2216 medium (b). The grid on photo (c) represents 1 cm2 (Photographs: L. Intertaglia)

For over a century, the list of culture media, presented as more or less selective and more or less “rich,” has grown. Developed to be specific or nonspecific according to the knowledge available at the time of their formulation, often in a medical or health context, they are still widely used, sometimes simply out of habit. Knowledge and research questions have evolved, and it is therefore now necessary to reassess the characteristics of these media before considering their use.

Like any environment, any culture medium is necessarily selective, because not all microorganisms can develop on it. Similarly, no medium can be considered truly specific, owing to the highly variable capabilities of microorganisms. Consequently, the notions of specificity or “universality” should be considered carefully, especially with regard to the aim of the study for which the medium is to be used.

These “classical” culture media may be of defined composition, when all the components are known, or undefined, when they contain substances of imprecisely known composition (e.g., yeast extract). They must contain at least an energy source and nutrients providing all the elements necessary for growth. In undefined media (e.g., those containing cell homogenates), many elements are present in trace amounts. In defined media, trace elements and micronutrients have to be added (metals, vitamins, growth factors).

Defined media have the advantage of being perfectly controlled, because their composition can be adapted and optimized, a modulation of the proportions of each ingredient being theoretically possible. Mathematical tools exist to facilitate what may otherwise become an extremely long and laborious task. Optimization is obviously more difficult for undefined media, because the composition of their components is only partly known.

Once the medium has been chosen, the physicochemical parameters also need to be determined. The incubation temperature, the salinity of the medium, and the pH have to be defined. It seems reasonable to mimic the conditions of the original environment. Depending on growing conditions, microorganisms will have a tolerance range, with an optimum value, for each of these parameters. This range varies for each microorganism and may be wide or very narrow. To maximize the chances of success, the culture conditions should approach as closely as possible the environmental conditions of the biotope of origin.

In the context of a study aiming at the most exhaustive cultivable biodiversity, it is necessary to choose the best culture conditions. This option requires prior knowledge of the main characteristics of the environment of origin in order to reproduce them, instead of selecting one or more media simply because they have been used for years or even decades.

To limit the selectivity of the chosen medium, it is preferable to work with a set of media. Choices should be guided by the objective of the work, and therefore by the control of known conditions, not by historical practice. Moreover, it is also possible to define culture conditions (medium and parameters) so as no longer to seek to increase the cultivable biodiversity present in a sample, but to target a fraction of this diversity. For example, the search for microorganisms able to resist certain compounds, or to degrade, transform, or use certain molecules, may be oriented by the choice of culture constraints imposed voluntarily.

Despite all the precautions taken to reach a balance between the objective of the work, the environment of origin, and the culture conditions, a selectivity bias will remain. Technical choices cannot be unlimited (combinations of several culture media and different temperatures, pH, etc.). The culturing strategy itself is a factor of selection (solid, liquid, batch). The current development of other practices (alternative techniques, continuous culture) shows how culturing, despite more than a century of microbiology, is still a field of exploration.

8.2 Dioxygen Requirements and Cultures Under Anaerobic Conditions

In the environment, dioxygen concentrations are highly variable. There is a range of intermediates between the microorganisms that grow in the presence of dioxygen and those growing in its total absence (see Sect. 3.3). Thus, it is necessary to distinguish between microorganisms (Fig. 17.41):

Fig. 17.41

Development of microorganisms as a function of dioxygen concentration. Test tubes of small diameter (about 5 mm) are filled to three quarters of their height with a culture medium containing a reducing agent (thioglycolate) and a small amount of agar (7 g l−1; so-called “deep agar” media), supplemented with resazurin as an oxidation-reduction indicator; this indicator, colorless under reducing conditions, becomes pink in the presence of traces of dioxygen. The tubes, completely devoid of dioxygen after sterilization (autoclaving for 20 min at 120 °C), are immediately immersed in cold water. After cooling, the dioxygen in the air dissolves at the agar surface, defining an oxic zone revealed by the pink color of resazurin. Below this zone, the dioxygen concentration decreases, and the bottom of the tube is completely devoid of dioxygen. After inoculation of the tubes over the entire depth of the culture medium with a platinum wire (stab inoculation), the microorganisms develop according to the presence of dioxygen in the tubes. Colonies are represented by black dots, and the pink area corresponds to pink resazurin in the presence of dioxygen (Drawing: M.-J. Bodiou)

  1. (i)

    Obligate aerobes that require dioxygen to grow; their respiration is aerobic.

  2. (ii)

    Microaerophiles, which cannot develop at a dioxygen concentration equivalent to the atmospheric level (20 %), but which still need a dioxygen concentration of between 2 and 10 %; their respiration is also aerobic.

  3. (iii)

    The facultative anaerobes, which are able to live either in the presence or in the absence of dioxygen; for example, denitrifying bacteria grow in the presence of dioxygen, but can grow in the absence of this acceptor of electrons if nitrate is available.

  4. (iv)

    Aerotolerant anaerobes, which tolerate dioxygen and grow in its presence; this is the case of some fermentative bacteria.

  5. (v)

    Obligate anaerobes which are inhibited or killed by dioxygen; these organisms obtain energy by fermentation or anaerobic respiration (Sects 3.3.2 and 3.3.3).

Isolation and growth of aerobic microorganisms take place in the presence of air, and it is sometimes necessary, to ensure maximum growth, to aerate the culture medium by stirring or by sparging with sterile air. For handling anaerobic microorganisms, various techniques for eliminating all traces of dioxygen are implemented (Fig. 17.42). Anaerobiosis can be obtained by:

Fig. 17.42

Devices used for the growth of anaerobic microorganisms. (a) Vacuum chamber, the air being replaced by dinitrogen or a dinitrogen-carbon dioxide mixture; the operation is repeated 3 times. (b) Hungate tube. (c) “Anaerobic” jar (Drawing: M.-J. Bodiou)

  1. (i)

    The elimination of air, which is replaced by dinitrogen, usually enriched with CO2, for the growth of many anaerobic microorganisms. The operation is performed either in a sealed chamber (Fig. 17.42a) or in tubes, particularly the so-called “Hungate tubes,” which are closed by a butyl rubber stopper allowing withdrawals and transfers (Fig. 17.42b). These tubes are used for the enumeration of anaerobic microorganisms by the most probable number (MPN) technique and for the measurement of their activity.

  2. (ii)

    The use of a culture medium containing a reducing agent (thioglycolate, cysteine, etc.).

  3. (iii)

    The removal of dioxygen by catalysis in an “anaerobic” jar containing hydrogen and CO2; in the presence of palladium as a catalyst, hydrogen reacts with the dioxygen present in the jar, which is thus removed (Fig. 17.42c).

  4. (iv)

    The use of an “anaerobic chamber” (or anoxic glove bag) (Fig. 17.43) containing a completely anoxic atmosphere, which allows all the techniques used for aerobic bacteria to be applied (usually the air is replaced by dinitrogen). An airlock attached to the anaerobic chamber, in which anaerobic conditions are established by replacing the air with dinitrogen, allows equipment to be passed in and out. This is the most effective technique; it requires a significant financial investment, but it is indispensable in laboratories fully specialized in the study of anaerobic microorganisms.

    Fig. 17.43

    Anaerobic chamber. On the right of the chamber is the airlock used for the transfer of material between the outside and the inside of the anaerobic chamber (Photograph: Courtesy of Bernard Ollivier)

8.3 Continuous Cultures

Conventional culture methods allow the cultivation of only a fraction, often estimated to be very small, of the total population of microorganisms in a sample. This problem is due to the artificial in vitro culture conditions, which can be very different from those of the environmental context of origin. In addition, apart from the incubation temperature, the other parameters are defined at the beginning of the experiment and are not controlled thereafter. Thus, the development of certain microorganisms will be accompanied by a change in the composition of the medium, due to the consumption of components and the production of compounds. This changing environment can inhibit the growth of other microorganisms, and growth may even stop when the concentration of some compounds becomes limiting.

In an attempt to overcome these problems, techniques of continuous culture have been developed. Their main feature is to monitor a set of conditions so as to keep them constant or to control their variations. Temperature, pH, agitation, and especially renewal of the medium are usually the parameters involved (Fig. 17.44). It then becomes possible to approach the environmental context of origin while limiting the consequences of the depletion of the medium and of the production of potentially inhibitory compounds. This culture method is applicable in aerobiosis and in anaerobiosis. In the latter case, specific adaptations are needed to avoid the presence of oxygen and to evacuate gases that can inhibit growth (Raven et al. 1992). Continuous culture is a useful tool for the production of biomass or of compounds of interest produced by that biomass, as well as for biotransformation processes. It also allows the study of the physiology and metabolic capacities of microorganisms (Godfroy et al. 2000). In all these cases, the optimization of the process is made possible precisely through the control of the culture conditions (Postec et al. 2005a). This ability to control and adjust parameters is also important for the study of microbial populations in microcosms, to improve the characterization of the biodiversity of samples (Postec et al. 2005b), their resistance or adaptation ability, and their relationships in the ecosystem (Postec et al. 2007). In addition, by attempting to recreate conditions close to those of the environment of origin, continuous culture can be used to cultivate microorganisms that have not yet been described (Postec et al. 2005c).
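As an illustration of how the renewal of the medium governs growth in such a system, the following sketch (Python) computes the steady state of a chemostat using the classical Monod model; the model choice and all parameter values (flow rate, volume, kinetic constants, yield) are assumptions made for the example and are not taken from the text.

# Minimal sketch of chemostat (continuous culture) steady state, Monod model.
F = 0.05      # feed flow rate (l/h), hypothetical
V = 1.0       # working volume of the bioreactor (l), hypothetical
D = F / V     # dilution rate (h^-1); at steady state the specific growth rate mu = D

mu_max = 0.4  # maximal specific growth rate (h^-1), hypothetical
Ks = 0.1      # half-saturation constant for the limiting substrate (g/l), hypothetical
S_in = 5.0    # substrate concentration in the feed (g/l), hypothetical
Y = 0.5       # biomass yield (g biomass per g substrate), hypothetical

if D < mu_max:
    # Monod: mu = mu_max * S / (Ks + S) = D  =>  residual substrate S at steady state
    S = Ks * D / (mu_max - D)
    X = Y * (S_in - S)   # steady-state biomass concentration
    print(f"D = {D:.3f} h^-1, residual substrate = {S:.3f} g/l, biomass = {X:.2f} g/l")
else:
    print("D >= mu_max: the culture washes out of the reactor")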

Fig. 17.44

Flow diagram of a bioreactor for continuous culture

However, the development and setting up of this technique have disadvantages. The apparatus may be relatively difficult to implement, and once the culture has been launched, it requires almost daily monitoring for uninterrupted periods of several weeks to several months (supply of fresh medium, sampling for qualitative and quantitative monitoring). It is technically difficult to run many cultures in parallel under different conditions, as is possible with batch culture. Finally, the manipulation and isolation of organisms grown in continuous culture are not always easy during the transition to conventional culturing techniques. Nevertheless, continuous culture is a tool for addressing questions of microbial population ecology that are much more difficult to approach with conventional culture techniques.

8.4 Counting of Cultivable Heterotrophic Bacteria

Enumeration of cultivable heterotrophic bacteria can be achieved using solid or liquid media. In the case of solid culture media, a precise volume of sample (usually 100 μl) is spread on the agar surface of the medium. After incubation under conditions of temperature, oxygenation, and duration defined according to the type of bacteria investigated, the colony-forming units (CFU) observable with the naked eye are counted in order to calculate the number of culturable heterotrophic bacteria present in the sample (Fig. 17.40). It is recommended to retain for counting only the plates containing between 30 and 300 CFU. If the concentration of heterotrophic bacteria in the sample is too high, it can be diluted beforehand using the technique of successive dilutions in a sterile diluent. Conversely, if the concentration is too low, the sample can be filtered through a 0.45 μm membrane, which is then deposited on the surface of the solid culture medium. The diffusion of nutrients through the membrane allows the growth of bacteria and the development of CFU, with lower and upper quantification limits of 10 and approximately 100 CFU, respectively. In the case of liquid culture media, the Most Probable Number (MPN) technique is used to estimate the bacterial concentration via a statistical approach. This count is based on bacterial growth, detected as turbidity in the liquid medium. The number of positive culture tubes in a series of tubes inoculated with different dilutions, with several replicates per dilution (usually between 3 and 5), is then used to calculate the MPN of bacteria present in the original sample with the help of statistical tables. Whatever the growth medium used, a reliable count of cultivable heterotrophic bacteria requires the absence of bacteria attached to particles, which would cause an underestimation of the count. It is possible to use solid or liquid culture media that are selective for the growth of certain bacteria, as is the case for the fecal coliforms or fecal enterococci used in aquatic health standards.
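The two calculations described above can be summarized in a short sketch (Python, with invented counts). The CFU calculation is the standard plate-count arithmetic; the MPN estimate shown is a simple Poisson-based formula for a single dilution with replicate tubes, whereas real multi-dilution series rely on published statistical tables, as noted in the text.

# Minimal sketch of plate-count and single-dilution MPN calculations.
import math

# (1) CFU count: sample diluted 10^-4, 100 ul spread, 87 colonies counted
colonies = 87           # should lie between 30 and 300 to be retained
dilution = 1e-4
volume_plated_ml = 0.1
cfu_per_ml = colonies / (dilution * volume_plated_ml)
print(f"{cfu_per_ml:.2e} CFU/ml in the original sample")

# (2) Single-dilution MPN (Poisson reasoning): n replicate tubes, each inoculated
# with volume v of sample, p of them showing growth (turbidity); requires p < n.
n, p, v_ml = 5, 3, 1.0
mpn_per_ml = -math.log((n - p) / n) / v_ml
print(f"MPN estimate: {mpn_per_ml:.2f} bacteria/ml")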

8.5 Alternative Techniques to the Direct Plating

It is currently estimated that less than 1 % of the total bacterial diversity is culturable (Rappé and Giovannoni 2003). Several reasons can explain this “great plate count anomaly” (Staley and Konopka 1985):

  1. (i)

    The term “isolation” means the complete loss of all interactions with the natural environment and the whole bacterial community (metabolic consortia, Quorum sensing).

  2. (ii)

    The use of inadequate media, which may be too nutrient-rich and which, in any case, cannot completely mimic the targeted environment.

  3. (iii)

    The toxicity of some common products, such as traces of phenol in agar. Overcoming these difficulties is a real challenge, which is why alternative techniques have been developed during the last decades (Alain and Querellou 2009; Pham and Kim 2012). These methods, sometimes cumbersome to implement and/or expensive, have enabled the isolation of strains of ecological (widespread) and/or biotechnological interest (Joint et al. 2010).

8.5.1 Micro-manipulation

These single-cell isolation methods were born in the late 1960s (Johnstone 1969). The first system consisted in the aspiration of a cell through a microcapillary. This technique is suitable for the isolation of large cells (eukaryotes), but for small cells (bacteria, spores) a highly accurate technique was developed: optical tweezers. This method selects a single cell using an infrared laser under a light microscope coupled to a motorized stage. The cell of interest is then moved with the same laser through a glass capillary and transferred into a nutrient medium (Ashkin et al. 1987; Ericsson et al. 2000). The major advantage of this technique is the accuracy with which a single cell can be selected from a complex natural sample. However, the system is very expensive, the choice of the targeted cell is arbitrary, and selection does not guarantee subsequent culturability.

8.5.2 Micro-encapsulation Coupled to Cell Sorting (GMDs)

Zengler and colleagues (2002) described an original approach to isolating bacterial strains. The first step consists in concentrating a natural sample and mixing it with a pre-heated (40 °C) agarose emulsion. After cooling of the mixture, the emulsion statistically generates many more agarose micro-droplets (gel microdroplets or GMDs) than cells. The GMDs containing a single cell are then selected under a light microscope. The second step is the growth of these cells into microcolonies, their selection, and their retrieval. The GMDs are first loaded into chromatography columns supplied with a continuous nutrient flow; each GMD is then deposited into a microplate well containing a rich nutrient medium. To confirm the growth of the encapsulated cells, the GMDs are finally sorted by flow cytometry and double-checked by microscopy. Here, the major advantage is the encapsulation of a single cell, which grows slowly in a nutrient flow alongside the free cells of the sample. Furthermore, this technique allows high-throughput processing (microplates). However, the system is expensive, hard to implement, and does not guarantee the long-term culturability of the selected cells.

8.5.3 Dilution to Extinction

Dilution-to-extinction culture emerged in the mid-1990s (Button et al. 1993) and was further developed in the 2000s (Connon and Giovannoni 2002; Stingl et al. 2007). It consists in serially diluting natural sample water in microplates until each well ultimately contains about one cell. It is also possible to dispense a small number of cells (1–5, for example) directly into each well. The main benefit of this technique is to allow a slow and gradual adaptation (incubation times of several weeks) of the bacterial cells to conditions that mimic the natural environment studied. In addition, the very low number of cells limits the development of opportunistic bacteria that would otherwise overgrow and inhibit the slow growers of interest. Even though it is time-consuming, this technique has proved its worth and continues to be improved. Indeed, many previously uncultured bacteria have been isolated for the first time in this way, such as those that dominate marine ecosystems, e.g., SAR11 or the OMG gammaproteobacteria (Rappé et al. 2002; Cho and Giovannoni 2004; Stingl et al. 2007), or some rare bacteria from the rumen (Kenters et al. 2011).
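The following sketch (Python, with a purely hypothetical cell density and well volume) illustrates the simple arithmetic behind dilution to extinction: the dilution factor needed to reach about one cell per well, and the Poisson probabilities of obtaining empty wells or wells containing exactly one cell.

# Minimal sketch of planning a dilution-to-extinction experiment (values invented).
import math

cells_per_ml = 5.0e5       # cell density of the water sample, e.g. from flow cytometry
well_volume_ml = 0.2       # volume dispensed per microplate well
target_cells_per_well = 1.0

dilution_factor = cells_per_ml * well_volume_ml / target_cells_per_well
print(f"dilute the sample about {dilution_factor:.0f}-fold")

lam = target_cells_per_well              # mean number of cells per well after dilution
p_empty = math.exp(-lam)                 # Poisson P(0 cells)
p_single = lam * math.exp(-lam)          # Poisson P(exactly 1 cell)
print(f"expected empty wells: {p_empty:.0%}, wells with exactly one cell: {p_single:.0%}")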

8.5.4 In Situ Colonizers and Traps

The principle of this method is to use organic or inorganic supports incubated directly in the natural environment, which allows microorganisms to colonize them under natural conditions. Many types of colonizers have been described in the literature, such as diffusion chambers (Kaeberlein et al. 2002; Gavrish et al. 2008) used on marine sediment or soil, polyurethane foams (Yasumoto-Hirose et al. 2006) used on marine samples, or, more recently, steel minitraps used for the isolation of previously uncultured oral bacteria (Sizova et al. 2012).

8.5.5 Modifications of Growth Media and Conditions

Some modifications of the sample and/or of the culture media can increase culturability (Nyonyo et al. 2012):

  1. (i)

    Supplementation: the addition of non-traditional nutrient sources, such as the culture supernatant of one species that stimulates the growth of another (Tanaka et al. 2004), of cell-signaling molecules such as cAMP or acyl homoserine lactones (Bruns et al. 2002) that can act on quorum sensing, or of growth inhibitors (antibiotics).

  2. (ii)

    Filtration: the sample is first filtered through 0.22 μm polycarbonate filters in order to eliminate most of the bacterial cells; work then focuses on the filtrate fraction in order to recover ultramicrobacteria or previously dormant cells (Hahn et al. 2003).

  3. (iii)

    An alternative to conventional gelling agents: classic agar is potentially toxic (phenol) and its physicochemical tolerance is limited. That is why alternatives have been developed, such as the use of floating filters (De Bruyn et al. 1990) or of other gelling agents such as agarose or gellan gum, also called Gelrite or Phytagel (Nyonyo et al. 2012). Gellan gum was first used for the cultivation of thermophilic and acidophilic microorganisms, given its thermal stability and its tolerance to pH variations. More recently, this gelling agent has been used successfully for the isolation of new bacterial diversity (Tamaki et al. 2005).

8.6 Management of Culture Collections

8.6.1 Culture Collections of Microorganisms

Isolating microorganisms from the environment has many interests, such as access to the genotype of a strain, the use of a strain as an experimental model, the production of molecules with high biotechnological potential, and/or the development of scientific collaborations. That is why more and more laboratories around the world develop culture collections, despite the significant human and material resources required. In mid-2013, the World Federation for Culture Collections or WFCC (http://wdcm.nig.ac.jp/wfcc/) referenced 645 culture collections from 70 different countries, with 5,248 people fully dedicated to them (Fig. 17.45). Among the 2,244,376 “microbials” stored in these collections, 977,858 are bacterial strains, 633,901 are fungi, and the rest is shared between viruses and cell lines in similar proportions.

Fig. 17.45

Distribution of the 645 culture collections referenced by the WFCC (May 2013)

Culture collections should ensure the long-term preservation of isolates, their viability and purity, and access to the strains. The storage of the microorganisms is a crucial prerequisite for all culture collections. The organisms should be preserved in conditions allowing their revival for as long as possible. To do this, the metabolic activities have to be blocked in order to reduce the risk of cell damage. The two most commonly used methods are freeze-drying and cryopreservation.

8.6.1.1 Freeze-Drying

This method was born in the 1950s. It consists in dehydrating the cells at low temperature under vacuum, starting from a culture to which a protective agent has been added (e.g., skim milk, sucrose) (Bimet 2007). The freeze-dried powder can be stored at room temperature or at 4–8 °C (Heckly 1978). The method is performed in two steps: freezing and desiccation. For freezing, two methods are used: immersion in a dry ice-alcohol mixture (−78 °C) or centrifugation-freezing at −7 °C. For drying, three factors are important: the vacuum, a very low temperature, and an apparatus for trapping the water removed by sublimation. The success of freeze-drying depends on the following parameters:

  1. (i)

    The number of cells

  2. (ii)

    The strain

  3. (iii)

    The cell size and its complexity

  4. (iv)

    The resuspension medium

  5. (v)

    Maintaining the vacuum over time

This technique allows long-term preservation (decades), owing to the dehydrated state of the cells, and easy storage. Nevertheless, it requires specific equipment (freeze dryer, specific glassware) and cannot guarantee full success. It should be noted that freeze-dried bacteria very often lose their plasmids during the process.

8.6.1.2 Cryopreservation

This method consists in keeping the cells alive at very low temperatures with the addition of cryoprotectants, compounds that limit the adverse effects of freezing. In theory, the lower the temperature, the better the storage. However, biological structures can be strongly disturbed by:

  1. (i)

    Mechanical breakage

  2. (ii)

    Membrane topography changes

  3. (iii)

    Water crystallization (and biochemical changes)

  4. (iv)

    Mechanical injuries (crystals)

  5. (v)

    The increase in electrolyte concentration

The choice of the cryoprotectant is crucial for cryopreservation success. About 50 different kinds of molecules have been tested on cultures (Hubálek 2002), such as:

  1. (i)

    Sulfoxides (e.g., DMSO)

  2. (ii)

    Alcohols (e.g., methanol, glycerol)

  3. (iii)

    Proteins (e.g., BSA)

  4. (iv)

    Polysaccharides (e.g., trehalose)

  5. (v)

    Complex compounds (e.g., yeast extract)

Their actions on the cells are multiple. Being highly hydrophilic, they interact with water molecules and thus protect proteins. The permeable cryoprotectants (glycerol and DMSO) limit the hyperconcentration of salts and prevent the formation of large ice crystals.

They also lower the freezing point of water and of biological fluids. The most effective cryoprotectants are dimethylsulfoxide (DMSO) and glycerol. Storage temperatures around −80 °C are commonly used, but some protocols go down to −130 °C or −196 °C (liquid nitrogen). Maximum survival is observed in a so-called “transition zone” in which the formation of intracellular ice and the hyperconcentration of salts are attenuated. This means that freezing must be slow (1 °C/min) and thawing fast. As with freeze-drying, the success of cryopreservation depends on many parameters:

  • Cell wall (Gram + > Gram −)

  • Cell size and shape

  • Growth phase (stationary)

  • Incubation temperature

  • Culture medium composition

  • pH

  • Osmolarity

  • Cell water content

  • Membrane lipid content

  • Composition of cryoprotectant

  • Cooling rate

  • Storage temperature

  • Storage time

  • Thawing rate

  • Revival medium

Cryopreservation can be applied easily and quickly to many samples and allows long-term preservation (decades). Unfortunately, the success of this method is also highly variable, given the many parameters that influence it. Moreover, successive freezing/thawing steps can be lethal for cells.

9 Conclusion

As shown in this chapter, the range of approaches developed in microbial ecology is extremely broad. Technological developments in this area evolve very rapidly and make it possible to describe ever more finely and precisely the structure and the activities of the communities of microorganisms in their biotopes. It is undeniable that molecular techniques have revolutionized microbial ecology and that they are now an integral part of research and teaching in this discipline. The recent application of high-throughput molecular biology methods to natural microbial communities is profoundly changing our view of the microbial world. By combining these new technologies with ecosystem and biogeochemical measurements, it becomes possible to identify more precisely the environmental controls on microbial processes and the specific roles of microbes in biogeochemical cycles.