
1 Introduction

The cell is the smallest functional and structural unit of all living organisms. Interestingly, the name was coined by the English scientist Robert Hooke in 1665, upon observing cork sections with an early compound microscope (Hooke 1665/2014). The polyhedral structures he observed, nowadays known as plant cells, reminded him of either honeycomb cells or the monastic cells of cloisters.

All living organisms are thus composed of cells, appearing in endless forms that range from unicellular organisms to complex multicellular ones, such as humans (Nelson and Cox 2017). Even in multicellular organisms, individual cells have specific characteristics that are important for their particular functions and for the proper maintenance of the organism as a whole. In fact, cellular malfunction is at the basis of all diseases (Price and Culbertson 2007).

All cells are bounded by the plasma membrane that, as a selectively permeable barrier, separates the extracellular from the intracellular environment (Karp 2010; Alberts et al. 2014). The most important components for cellular structure and function are found in the intracellular compartment. These cellular components may be divided into two groups: the first is composed of supramolecular complexes with specific functions, known as organelles. Mitochondria, the endoplasmic reticulum, the Golgi apparatus, ribosomes, vacuoles, and chloroplasts are examples of cellular organelles (Alberts et al. 2014). The remaining cellular components (proteins, nucleic acids, amino acids, metabolites, coenzymes, and ions) are found in suspension in the cytosol or in the nucleus, which is present only in eukaryotic cells (Alberts et al. 2014).

Most descriptions of cellular structure, particularly those of organelles, came from microscopy and classic functional studies, which consist of isolating the organelles or cytosolic components and evaluating their function by various techniques, including optical and electron microscopy (Nelson and Cox 2017). Indeed, the growth in our understanding of cell biology can be historically correlated with improvements in microscopy technologies and equipment. Microscopy can be defined as a group of techniques that allow the observation of structures that are not visible to the naked eye. Human curiosity about the micro-environment is ancient, and there are descriptions of the use of magnifying lenses by the philosophers Seneca (4 BC–65 AD) and Pliny (23–79 AD). Many improvements have been achieved since Hooke's first observations in 1665 with a compound microscope. As the field evolved, we now have phase-contrast, fluorescence, and electron microscopes with multiple functions that allow the observation of specific structures with high precision, magnification, and resolution (Fig. 1).

Fig. 1

Diagram showing the range of object sizes that can be visualized by different methods. The figure shows selected organisms, organelles, and molecules to exemplify the resolving power of different microscopes. The range for naked-eye resolution is shown in green, for light microscopes in red, for scanning electron microscopy in green, and for transmission electron microscopy in yellow. Some techniques have overlapping ranges, which is indicated in the figure by color overlap (courtesy of Dr. Alexandre Z. Carvalho, UFABC-Brazil)

Although of extreme importance, functional in vitro studies of organelles and biomolecules isolated from cells may not fully reflect how these molecules behave when working within the cell in vivo (Nelson and Cox 2017). Indeed, the ability of cells to respond to the environment relies on complex signaling systems in which many biomolecules may be involved and interacting with one another. Thus, approaches that use intact cells in vivo, such as modern optical microscopy techniques, may allow a better understanding of the behavior of groups of biomolecules, or even groups of organelles, in their own environment (Tinoco and Gonzalez 2011).

Herein we discuss some highly efficient contemporary fluorescence-based microscopy techniques for eukaryotic cell studies. We focus on confocal microscopy, which allows the capture of high-resolution images and 3D reconstructions of intact cells or tissue samples, as well as real-time sequential image capture from living cells. In addition, fluorescence, owing to its high intrinsic sensitivity, has become a great ally in the technological development of other microscopy techniques, such as Förster resonance energy transfer (FRET) and total internal reflection fluorescence (TIRF) microscopy, also discussed here, which are aimed at monitoring intracellular molecules and reach enough sensitivity and specificity to study the behavior of individual molecules in vitro (Toomre and Bewersdorf 2010; Neto et al. 2012).

2 Introduction to Biological Applications of Fluorescence

The absorption of electromagnetic radiation brings a molecule from its ground electronic state (E0) to an excited, higher-energy electronic state (E1). The subsequent loss of this energy may occur through radiative or non-radiative pathways. In the radiative pathway, part of the energy is lost as light, through photon emission, by which the molecule returns to its ground state. If the excited chemical species spontaneously emits radiation while retaining its spin multiplicity, the phenomenon is called fluorescence, and the molecular entity that emits fluorescence is called a fluorophore.

Fluorescence emission usually occurs within nanoseconds after the absorption of light of a shorter wavelength (higher energy) (Clegg 2012; Skoog et al. 2013). Its efficiency, the quantum yield, is defined as the ratio between the number of emitted photons and the total number of absorbed excitation photons. In addition, the difference between the excitation and emission wavelengths, known as the Stokes shift, is what makes fluorescence such a powerful tool, since it allows the specific visualization of only the fluorescent object. Light emission that occurs after a change in electron spin (e.g., from the singlet to the triplet state) has a prolonged duration, in a phenomenon denominated phosphorescence. The processes that occur during molecular excitation and radiative emission are represented in the simplified Perrin-Jablonski diagram (Fig. 2).

Fig. 2

Simplified Perrin-Jablonski diagram illustrating the electronic states of a molecule and the transitions between them. S represents the singlet state and T the triplet state. The number 0 represents the ground electronic state, and 1 and 2 represent the first and second excited electronic states, respectively. Reproduced with permission from Ref. (Clegg 2012)
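As a numerical illustration of the relationship between wavelength and photon energy described above, the short Python sketch below computes the photon energies of a hypothetical excitation/emission pair and the resulting Stokes shift; the 490/520 nm values are illustrative only, roughly in the range of fluorescein-like dyes.

# Minimal sketch: photon energy and Stokes shift for a hypothetical fluorophore.
# The 490/520 nm wavelengths are illustrative values, not data from this chapter.
PLANCK_H = 6.626e-34   # Planck constant, J*s
LIGHT_C = 2.998e8      # speed of light, m/s

def photon_energy_ev(wavelength_nm):
    """Photon energy E = h*c/lambda, converted from joules to electron volts."""
    energy_j = PLANCK_H * LIGHT_C / (wavelength_nm * 1e-9)
    return energy_j / 1.602e-19

excitation_nm, emission_nm = 490.0, 520.0
print(f"Excitation photon energy: {photon_energy_ev(excitation_nm):.2f} eV")
print(f"Emission photon energy:   {photon_energy_ev(emission_nm):.2f} eV")
print(f"Stokes shift:             {emission_nm - excitation_nm:.0f} nm (emission carries less energy)")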

As mentioned earlier, fluorescence has a high intrinsic sensitivity compared to techniques that rely on light absorption measurements. This is because the excitation and emission wavelengths are different, and thus it is feasible to completely filter out the excitation light without blocking the emitted fluorescence. Therefore, only the fluorescent object is imaged (Tinoco and Gonzalez 2011). For this reason, fluorescence has been used in many areas of study in which low detection limits are necessary, such as individual cell analysis (Wu et al. 2004; Huang et al. 2007) and the analysis of biomolecules (Liu et al. 2013; Simplicio et al. 2010; Su et al. 2011), food (Tran et al. 2013; Zhou et al. 2013), and drugs (Kolberg et al. 2009; Chang and Yu 2011), among others.

Molecular fluorescence spectrometry is an analytical technique based on the detection of the light emitted by a molecule during the fluorescence event (Skoog et al. 2013). Likewise, fluorescence microscopes are light microscopes that use the fluorescence or phosphorescence phenomenon to generate an image or a measurable signal. The resulting image can be fairly simple, as in images acquired by epifluorescence microscopes, or more complex, like those generated by confocal microscopes, which use optical sectioning to obtain better-resolved fluorescence images, as discussed further below.

It is important to point out that not all molecules have the capacity to fluoresce, since their structure may favor energy loss through non-radiative pathways after light excitation. Fluorescent molecules possess fluorophore groups in their structure, which are responsible for their light absorption and emission properties. Stronger fluorescence is obtained with compounds that have resonant ring structures, such as aromatic molecules (Skoog et al. 2013). These fluorescing structures can be made available within biological systems either by chemical derivatization of molecular probes with fluorophore moieties or, as is increasingly frequent, through the genetic expression of fluorescent protein sensors. Many detection methods are therefore indirect: a non-fluorescent molecule is analyzed after specifically interacting with a fluorophore dye. Some of these dyes can be used alone because of their affinity for specific molecules; for example, ethidium bromide intercalates into DNA and RNA molecules, generating a measurable signal. Most dyes, however, are conjugated to other molecules, such as antibodies, allowing the detection of the target structure or molecule.

The most common synthetic fluorescent dyes used as conjugates are rhodamine B, the cyanines (Cy3 and Cy5), fluorescein isothiocyanate (FITC), tetramethylrhodamine isothiocyanate (TRITC), and variants of Alexa Fluor® (Liu et al. 2013; Lamichhane et al. 2013) (Table 1). As mentioned above, another important contemporary tool in cell studies is the family of fluorescent proteins, most of them derived from the green fluorescent protein (GFP), which was the first fluorescent protein to be described and isolated. Using biotechnology techniques, we now have many varieties of fluorescent proteins, with emission colors ranging from blue and green to yellow and red (Table 2) (Snapp 2009).

Table 1 Common fluorophores used in fluorescence microscopy
Table 2 Common fluorescent proteins used in cell biology studies

As seen in Tables 1 and 2, each fluorophore and fluorescent protein has specific excitation and emission wavelengths. It is therefore crucial to know this information in advance, both to design the experiments and to make sure that the equipment is able to detect a specific dye. Of note, many microscopy experiments may require the evaluation of different molecules in the same sample, in which case double or triple labeling may be used. In this situation, it is important to choose dyes with distinct excitation and emission wavelengths. Other important characteristics to take into account when choosing a dye are its photostability and phototoxicity (Coelho et al. 2013; Shaner et al. 2005). The first refers to its resistance to photobleaching (i.e., loss of fluorescence) after cycles of excitation and emission. Phototoxicity refers to cell damage caused by the low- or high-wavelength light used to excite fluorophores, which can also generate reactive oxygen species that are potentially harmful to cellular structures and can heat the sample as well (Dailey et al. 2006; Laissue et al. 2017).
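As a simple illustration of this dye-selection step, the sketch below checks whether the emission peaks of candidate dye pairs are separated by at least a chosen margin. The peak wavelengths are approximate catalogue values for the dyes named in Table 1, and the 30 nm margin is an arbitrary illustrative threshold rather than a universal rule; real planning should also consider full spectra, filter sets, and cross-excitation.

# Minimal sketch of a dye-compatibility check for double or triple labeling.
# Peak wavelengths (nm) are approximate catalogue values; the 30 nm margin is illustrative.
dyes = {
    "FITC":  {"ex": 495, "em": 519},
    "TRITC": {"ex": 547, "em": 572},
    "Cy5":   {"ex": 649, "em": 670},
}

def emissions_separated(dye_a, dye_b, min_separation_nm=30.0):
    """Return True if the two dyes' emission peaks differ by at least the margin."""
    gap = abs(dyes[dye_a]["em"] - dyes[dye_b]["em"])
    return gap >= min_separation_nm

for pair in [("FITC", "TRITC"), ("FITC", "Cy5"), ("TRITC", "Cy5")]:
    verdict = "separable" if emissions_separated(*pair) else "too close"
    print(pair, verdict)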

3 High-Resolution Fluorescence Microscopy

The high detection power of fluorescence, combined with the possibility of selectively labeling molecules or specific regions of macromolecules with fluorescent tags, makes fluorescence a prime technique for the optical detection of molecules in biological samples. In fact, multiple variations of fluorescence spectroscopy have been developed to monitor cellular processes in vivo (Tinoco and Gonzalez 2011; Toomre and Bewersdorf 2010; Coelho et al. 2013): wide-field fluorescence microscopy, total internal reflection fluorescence (TIRF) microscopy, highly inclined and laminated optical sheet (HILO) microscopy, selective plane illumination microscopy (SPIM), fluorescent speckle microscopy (FSM), photoactivation and photoconversion (PA and PC) fluorescence, photoactivated localization microscopy (PALM), stochastic optical reconstruction microscopy (STORM), stimulated emission depletion (STED) microscopy, and Förster (or fluorescence) resonance energy transfer (FRET). The spatial resolution of some of these techniques, such as PALM, STORM, and STED, is so high that they are aptly considered super-resolution imaging techniques (Alberts and Davidson 2013). Although they are beyond the scope of this chapter, these techniques are worth mentioning since they are becoming increasingly affordable, despite still requiring, for instance, high laser power, special photo-switchable fluorophores, outstanding photodetection devices, and/or massive digital processing.

Wide-field fluorescence microscopy is the simplest application of fluorescence microscopy to biological samples. It involves illuminating the sample with an excitation wavelength and collecting the emitted light from both in-focus and out-of-focus planes (hence the name wide-field). As the elicited emission occurs in all directions regardless of the path of the excitation light, it can be detected either on the side opposite the excitation source, in what is called transillumination, or by collecting the emitted fluorescence through the same objective lens through which the excitation light was delivered to the sample. This second mode of fluorescence detection, along the same path as the excitation light, is called epifluorescence (Fig. 3a). When the illumination comes from a beam directed upwards through the objective, as in inverted microscopes, the lens can usually be brought much closer to the sample, thereby maximizing the efficiency of illumination.

Fig. 3

Confocal microscopes and confocal images compared with conventional fluorescence microscopes. (a) Basic components of a conventional fluorescence microscope. Note that the emitted fluorescence is collected from both in-focus and out-of-focus depths of the specimen. (b) Illustration of the components of a laser confocal scanning microscope. The light orange spot below the focal plane also emits fluorescence, but this light does not converge at the pinhole aperture and so is not detected by the photomultiplier sensors located behind the pinhole. (c) Example of the blurred image that would be obtained with a conventional fluorescence microscope. (d) The actual image, obtained with a laser confocal scanning microscope (3D culture of HUVEC cells expressing GFP grown in fibrin gel, 10x magnification; courtesy of Dr. Vanessa M. Freitas, University of São Paulo)

In wide-field microscopy, however, the image can be blurred by the collection of fluorescence from both in-focus and out-of-focus planes, so further improvements in microscope design and illumination schemes are desirable. In fact, spatial resolution, i.e., the ability of a microscope to distinguish two close objects in a focal plane, is limited by the diffraction of light coming from the sample at the aperture of the objective lens. Resolution can be increased by the use of oil-immersion lenses, as the oil occupies the space between the sample and the objective that would otherwise be filled by air, whose refractive index is very different from those of both the sample and the objective. This also increases the angular range of light coming from the sample that can be captured by the lens, so that more photons are collected. The efficiency of the lens in collecting light from a single focal point over a range of angles is quantified by its numerical aperture (Alberts and Davidson 2013). Even with high numerical aperture lenses, however, conventional wide-field microscopy cannot achieve single-molecule imaging. Techniques such as TIRF and HILO, in contrast, have dramatically increased the spatial resolution of the images to limits much lower than those found in conventional microscopes; in some cases, resolution can be brought to 25–80 nm (Toomre and Bewersdorf 2010).
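To make the role of the numerical aperture concrete, the following sketch applies the standard Rayleigh estimate of lateral resolution, d ≈ 0.61 λ / NA, to a dry objective and an oil-immersion objective. The NA values and emission wavelength are typical illustrative choices, not specifications from this chapter.

# Minimal sketch: Rayleigh estimate of lateral resolution, d = 0.61 * lambda / NA.
# The NA values below are typical of dry vs. oil-immersion objectives (illustrative).
def lateral_resolution_nm(wavelength_nm, numerical_aperture):
    """Smallest resolvable separation between two points, Rayleigh criterion."""
    return 0.61 * wavelength_nm / numerical_aperture

emission_nm = 520.0  # green emission, illustrative
for label, na in [("dry 40x objective, NA 0.75", 0.75), ("oil-immersion 100x objective, NA 1.40", 1.40)]:
    print(f"{label}: ~{lateral_resolution_nm(emission_nm, na):.0f} nm")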

The various applications of high-resolution fluorescence microscopy demonstrate its potential for the detection of multiple cellular components both in vitro and in vivo (Coelho et al. 2013). Among these approaches, laser scanning confocal microscopy has been widely used to obtain images from very thin optical sections and combine them into highly accurate 3D reconstructions of the sample. Additionally, single-molecule FRET (smFRET) analysis has been praised for its capacity to detect, with high temporal resolution, the stochastic behaviors inherent to the chemical reactions and macromolecular conformational changes that characterize intracellular dynamic processes (Lamichhane et al. 2013; Wang et al. 2013). Likewise, the development of TIRF microscopy has allowed the acquisition of fast and reliable data with high resolution and reduced background fluorescence, which has boosted the applications of smFRET in recent years (Ha 2001; Lin and Hoppe 2013). The following sections discuss these three modes of high-resolution fluorescence microscopy in more detail.

4 Confocal Microscopy

Confocal microscopy, usually in the form of confocal laser scanning microscopy (CLSM), also known as laser confocal scanning microscopy (LCSM), is a powerful technique that revolutionized cell biology studies by providing optimal resolution of three-dimensional images. Confocality can be defined as the capacity to select light coming from a specific depth in a sample by filtering out light emitted by or scattered from other depths. Confocal microscopy allows the spatial localization of structures and molecules within the cell, using living and fixed cells as well as tissue sections. The 3-D images are obtained by computational reconstruction and combination of sectioned 2-D image data from the studied sample; thus, the term optical sectioning microscopy is also used to describe this technique.

Another particularly important feature of confocal microscopes, which makes them much more powerful than conventional wide-field optical and fluorescence microscopes, is their increased optical resolution due to the elimination of background from out-of-focus and scattered light (Conchello and Lichtman 2005; Paddock 2000). This allows the captured sectioned images to be sharp, with great spatial precision. In conventional fluorescence microscopy, for instance, illumination elicits fluorescence throughout the whole depth of the sample, rendering the majority of the cell volume out of focus, regardless of where the focal plane is set vertically within the sample (a cell, for example) (Fig. 3). In addition, this out-of-focus light may suffer diffraction, reflection, and refraction within the sample before being captured by the objective lens of the microscope. Thus, the captured emission will appear to come from the last point at which it was scattered and not from the point where it was actually emitted, an effect that is strongly dependent on sample depth. Therefore, using confocality to capture images of thin slices of thick objects, thereby removing the influence of out-of-focus light in each slice, allows the acquisition of extremely well-defined images (Paddock 2000). There is still a limit to the depth penetration of these microscopes, however, and for studies of thicker samples (e.g., large spheroids, organoids, and small animals), other approaches should be used, such as two-photon microscopy or light sheet fluorescence microscopy (LSFM).

The use of fluorescence is an essential component of image detection in confocal microscopy; thus, the principles of excitation and emission, signal loss, diffraction, autofluorescence, and fluorophore selection must be considered in any application of the technique.

The first known confocal microscope was built by Marvin Minsky in 1955, during his postdoctoral training at Harvard University, to image neural nets in living brain tissue (Minsky 1988). This technology, patented in 1957, is still used in modern confocal microscopes with relatively few modifications. Modern confocal microscopes share the same basic components: (1) a light source projected onto the sample (a laser beam is the ideal light source, since it concentrates all of its energy in a collimated, coherent plane wave); (2) a scanning system, which moves the focused laser beam line by line across the sample so that the image can be acquired in sections along the X-Y (lateral) and Z (depth) axes; (3) a detector, typically a photomultiplier tube (PMT), which detects the photons emitted and reflected by the sample; this type of detector has low noise and a fast response, being able to detect even single photons; (4) the pinhole, which is fundamental for discriminating lateral position and depth, allowing only light from the focused spot to reach the detector; the pinhole, always placed in front of the detector, is crucial for removing the background coming from out-of-focus light; (5) the beam splitter, composed of dichroic mirrors and emission filters; and (6) the objectives, responsible for optical image formation, which determine the image quality as well as the resolution along the X, Y, and Z axes (Fig. 3b–d).

The specific principle of confocal imaging is that the image is built by individual, sequential illumination of specific regions of the sample, while at the same time avoiding the detection of light coming from other regions (out-of-focus light). The resolution quality of the image (owing to the blocking of scattered light) is inversely proportional to the size of the region scanned at each moment. Thus, the laser beam, sharply focused to a single point of small thickness, crosses the sample at each specific position before being deflected by the scanner to the adjacent position. In confocal fluorescence microscopy, the laser promotes fluorophore excitation, and the wavelength of the laser is chosen for the specific fluorophore to be imaged (Pawley 1991; Webb 1999). Depending on the spectral line coverage required for each study, a variety of lasers is available for confocal microscopy; the most common are argon-ion lasers, which can emit from the UV (230 nm) to the green (514 nm). There are also helium-neon (He-Ne), krypton-ion (which can also be combined with argon, Ar-Kr), and helium-cadmium (He-Cd) lasers, which provide lines from the blue to the red, as well as zinc-selenium (Zn-Se) diode lasers (Gratton and vandeVen 2006).

When the emitted light returns from the sample, it passes through the pinhole, which is placed in a plane such that the illumination spot and the pinhole aperture are simultaneously focused at the same point, hence the name confocal scanning. This position allows the pinhole to transmit only the light coming from the focal plane, rejecting the scattered light from around the illuminated point and part of the background light collected by the objective. The diameter of the pinhole is important for its efficiency: although the background is minimal with exceedingly small pinholes, this compromises signal capture, especially when the light level is limited. In practice, the pinhole size should be between 60 and 80% of the diameter of the diffraction-limited spot of the image one intends to acquire (Conchello et al. 1994; Sandison et al. 1995).
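A rough back-of-the-envelope version of this pinhole-sizing rule is sketched below: it approximates the diffraction-limited spot at the pinhole plane as the Airy disk diameter (1.22 λ / NA) projected through the total magnification, and then takes 60–80% of that value. The magnification, NA, and wavelength are illustrative assumptions; real instruments express pinhole size in calibrated "Airy units" specific to each system.

# Minimal sketch of the 60-80% pinhole-sizing rule described above.
# Approximation: diffraction-limited spot at the pinhole plane ~ Airy disk
# diameter (1.22 * lambda / NA) multiplied by the total system magnification.
def airy_spot_at_pinhole_um(wavelength_nm, na, magnification):
    airy_diameter_nm = 1.22 * wavelength_nm / na
    return airy_diameter_nm * magnification / 1000.0  # convert nm to micrometers

spot_um = airy_spot_at_pinhole_um(wavelength_nm=520.0, na=1.4, magnification=60.0)
print(f"Projected Airy spot at the pinhole plane: ~{spot_um:.0f} um")
print(f"Suggested pinhole diameter range: {0.6 * spot_um:.0f}-{0.8 * spot_um:.0f} um")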

The selected light is then detected with the help of the photomultipliers, and the information is processed by computers that reconstruct the several scanned images from the different points of the sample. To produce these images, the biological sample is immobilized and the light beam moves from one point to the next so that scanning can be performed. One common method in cell biology studies uses a laser scanner with two or three oscillating mirrors, which deflect the angle of the light beam going into the sample and of the emitted light coming from the sample (described as scanning and "descanning", respectively) across a fixed pinhole and detector (Conchello and Lichtman 2005). In this way, the mirrors scan the illumination along the two lateral axes (X and Y), creating a 2-D image. By moving the focus vertically (Z axis), it is possible to create image stacks used for computational 3-D reconstruction (Conchello and Lichtman 2005; Paddock 2000), as illustrated in the sketch below. This approach builds the image sequentially, one pixel at a time, so image capture is not as fast as in conventional microscopy. This slow speed can be a problem, since the duration and intensity of sample illumination can increase photobleaching and phototoxicity (described below), the latter being especially important when live cells and organisms are studied. Moreover, although the photomultiplier tube has high sensitivity, being able to capture even single photons, it is not very efficient, detecting only 10% or less of the fluorescence signal coming from the pinhole; thus, increasing the scanning speed can lead to a loss of image quality (Paddock 2000).
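As a minimal illustration of how sectioned 2-D images become a 3-D data set, the sketch below stacks a series of simulated 2-D frames along the Z axis with NumPy and computes a maximum-intensity projection, one of the simplest ways such stacks are visualized. The random frames and their dimensions are placeholders; real confocal software works on the acquired optical sections.

# Minimal sketch: assembling 2-D optical sections into a Z-stack and
# computing a maximum-intensity projection (MIP). Frames are simulated here.
import numpy as np

rng = np.random.default_rng(0)
n_slices, height, width = 25, 256, 256

# Simulated optical sections (in practice: one acquired image per Z position).
sections = [rng.poisson(lam=5, size=(height, width)).astype(np.float32)
            for _ in range(n_slices)]

z_stack = np.stack(sections, axis=0)   # shape: (z, y, x)
mip = z_stack.max(axis=0)              # maximum-intensity projection along Z

print("Stack shape:", z_stack.shape)
print("MIP shape:  ", mip.shape)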

Taking this into account, it is important to increase the scanning speed in a way that preserves both the quality and the accuracy of the image. For studies that involve only 2D images, the use of fast scanning mirrors is acceptable. For 3D images, however, a multiplexed approach is commonly used, in which several pixels are illuminated and their light collected simultaneously. Commonly this is achieved using a disk containing many pinhole apertures that sweep across different regions of the sample as the disk rotates (spinning disk) (Paddock 2000; Xiao et al. 1988; Maddox et al. 2003). One disadvantage of the spinning-disk confocal scanning microscope is a decrease in sensitivity. To address this issue, another design was created by Yokogawa Electric Corporation in 1992, using a dual spinning-disk system (also known as a micro-lens-enhanced confocal scanning microscope). In this system, every pinhole is associated with a micro-lens, through the placement of a second spinning disk containing micro-lenses in front of the pinhole disk (Maddox et al. 2003; Toomre and Pawley 2006). This arrangement significantly increases the amount of light directed to each pinhole, improving light throughput. In both cases, the light that passes through the pinholes is imaged on a detector that is typically a charge-coupled device (CCD) camera (Conchello and Lichtman 2005; Paddock 2000).

Despite being a powerful technique that revolutionized studies involving measurements in cells and tissues, as well as image acquisition with particular accuracy for spatial localization studies, confocal microscopy has some potential problems that can compromise the quality of the information obtained. Most of the limitations of the technique can be avoided by understanding the intended application and thus choosing more suitable equipment, parameter configurations, and biological sample preparation. If the results can be analyzed from a 2-D image, the use of a confocal microscope with scanning mirrors (a confocal laser scanning microscope) is acceptable. However, if a 3-D image is necessary, a spinning-disk confocal microscope is preferable, as it reduces the photobleaching effect.

Photobleaching is the irreversible destruction of a fluorochrome after repeated cycles of excitation and emission, leading to loss of signal (Lichtman and Conchello 2005). Since each fluorochrome can withstand only a limited number of excitation and emission cycles, knowing this limit is important to achieve the maximum efficiency of the specific fluorochrome in each analysis. In addition to using more efficient confocal scanning microscopes, which reduce the time and intensity of light exposure and increase the speed of signal recording (as discussed above), it is also important to choose more stable fluorochromes and to use antifading agents that protect the sample from photobleaching (Dailey et al. 2006). Interestingly, the photobleaching principle enabled the development of important methodologies used in biology, such as fluorescence loss in photobleaching (FLIP) and fluorescence recovery after photobleaching (FRAP), which are used to study molecular mobility and diffusion in cellular systems (Dailey et al. 2006).
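As a sketch of how photobleaching is turned into a measurement in FRAP, the snippet below applies the commonly used estimate of the mobile fraction from pre-bleach, immediately post-bleach, and plateau intensities, together with a simple single-exponential recovery model. The normalized intensities and time constant are invented for illustration; actual FRAP analysis fits the acquired recovery curve and may require more elaborate models.

# Minimal FRAP-analysis sketch (illustrative values, not experimental data).
# Mobile fraction = (F_plateau - F_postbleach) / (F_prebleach - F_postbleach).
import math

f_prebleach, f_postbleach, f_plateau = 1.00, 0.20, 0.75  # normalized intensities (assumed)
tau_s = 12.0                                             # recovery time constant in seconds (assumed)

mobile_fraction = (f_plateau - f_postbleach) / (f_prebleach - f_postbleach)
print(f"Estimated mobile fraction: {mobile_fraction:.2f}")

# Single-exponential recovery model: F(t) = F_post + (F_plateau - F_post) * (1 - exp(-t / tau))
for t in (0, 5, 10, 30, 60):
    f_t = f_postbleach + (f_plateau - f_postbleach) * (1.0 - math.exp(-t / tau_s))
    print(f"t = {t:3d} s -> F = {f_t:.2f}")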

The duration of light exposure is also especially important when live samples are being analyzed, since, in addition to photobleaching, phototoxicity can become prohibitive, as discussed above. These concerns are particularly relevant in time-lapse imaging, a powerful approach in which images of a living cell (or organism) are recorded in sequence over long periods of time, allowing the study of dynamics within a living cell (Dailey et al. 2006; Pawley 1991).

Another important feature to be considered in biological studies is that some structures can produce their own fluorescence, known as autofluorescence. This can lead to confusion between the emission coming from the excited fluorophore and that arising from autofluorescence. The phenomenon cannot be overlooked, since it can negatively affect the interpretation of the results. Thus, it is always important to include unlabeled samples (without fluorophore addition) as negative controls. In addition, special attention is needed when preparing the biological sample for microscopy, since some fixative solutions can induce autofluorescence.

Despite these few shortcomings, which can be easily overcome with prior, careful knowledge of the equipment, the sample, the fluorochrome, and the structures of interest, confocal microscopy is a very powerful technique for obtaining reliable, high-resolution images of cells and tissues. The widespread availability of confocal microscopes in research institutions and their increasing affordability testify to that.

5 TIRF Microscopy as a Technique for Selective Fluorophore Excitation in Cell Studies

TIRF microscopy is based on the principles of light refraction and reflection. When an incident beam of light travels from a material with a higher refractive index (n1) to a material with a lower refractive index (n2), a change in the angle of the light relative to the normal is observed. This refraction is described by Snell's law: n1 sin (θ1) = n2 sin (θ2), where θ1 and θ2 represent the angles of incidence and refraction, respectively. As the angle of incidence increases, the angle of refraction increases even more, until the refracted light travels along the interface between the two media. The angle of incidence at which this condition occurs is known as the critical angle and is given by \( {\theta}_{\mathrm{c}}={\sin}^{-1}\left(\frac{n_2}{n_1}\right) \). At any angle above θc, the incident light is totally reflected back into the material with index n1. These behaviors are depicted in Fig. 4.

Fig. 4

Schematic representation of light refraction in epifluorescence (EPI), highly inclined thin illumination (HILO), and total internal reflection fluorescence (TIRF). In this scheme, n1 > n2, θ1 is the angle of the incident beam, and θc is the critical angle

In TIRF microscopy, the condition of total reflection is leveraged. The sample is illuminated by a light beam at an angle of incidence greater than θc through a glass slide with a refractive index higher than that of the fluorophore solution. When light is totally reflected, a weak local electromagnetic field known as the evanescent field is generated. The evanescent field propagates into the material of lower refractive index and can excite fluorophores of interest present in this medium. The intensity of the evanescent field decays exponentially with distance from the interface between the glass slide and the liquid medium, allowing selective excitation within a small depth range, typically between 70 and 300 nm (Tinoco and Gonzalez 2011; Saffarian and Kirchhausen 2008). Owing to this selective illumination, little background fluorescence is generated by out-of-focus molecules. Thus, high sensitivity and a high signal-to-noise ratio can be obtained, allowing the detection of extremely weak fluorescent signals, such as those from single molecules in vitro and in vivo (Lamichhane et al. 2013; Wang et al. 2011). The instrumentation for TIRF microscopy is fairly simple and usually requires illumination through an oil-immersion objective or through a fused silica prism positioned under the sample compartment. Detection is often performed with a CCD camera, which becomes the limiting factor for the temporal resolution of the technique.
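The two quantities discussed above, the critical angle from Snell's law and the evanescent-field penetration depth, can be estimated with the short sketch below. The refractive indices (glass ≈ 1.52, aqueous sample ≈ 1.33), the 488 nm excitation wavelength, and the 70° incidence angle are typical illustrative values, not parameters taken from this chapter.

# Minimal sketch: TIRF critical angle and evanescent-field penetration depth.
# n1 = glass, n2 = aqueous sample; all values below are illustrative assumptions.
import math

n1, n2 = 1.52, 1.33          # refractive indices (glass, water-like medium)
wavelength_nm = 488.0        # excitation wavelength
theta_deg = 70.0             # angle of incidence, above the critical angle

theta_c = math.degrees(math.asin(n2 / n1))
print(f"Critical angle: {theta_c:.1f} degrees")

# Penetration depth d = lambda / (4*pi*sqrt(n1^2*sin^2(theta) - n2^2))
theta = math.radians(theta_deg)
d_nm = wavelength_nm / (4 * math.pi * math.sqrt(n1**2 * math.sin(theta)**2 - n2**2))
print(f"Penetration depth at {theta_deg:.0f} degrees: ~{d_nm:.0f} nm")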

In an application example, Sgro et al. (2013) coupled TIRF and microfluidics to detect specific proteins present in neurotransmitter vesicles by isolating single vesicles and labeling them with fluorescently labeled antibodies. The detection was possible because the vesicles were immobilized directly onto the surface of the polydimethylsiloxane (PDMS) device. In fact, to ensure that events close to the cell membrane or organelles can be visualized with TIRF, it is necessary to immobilize them on the surface of the microscope slide (or microfluidic device) to be imaged (Roy et al. 2008). The main disadvantage of TIRF when applied to biological systems stems from the shallow depth of the evanescent field: the target molecule must be located in close proximity to the surface and cannot participate in dynamic processes that would cause it to diffuse out of the detection region. The free movement of vesicles in endocytosis is an example of an in vivo process that would hardly be compatible with TIRF. On the other hand, processes that occur with limited diffusion can be monitored. For instance, the spatial source of vesicles for regulated exocytosis (whether from vesicles previously bound to the membrane or recruited from the cytosol) can be studied using TIRF microscopy (Ohara-Imaizumi et al. 2009).

There are, however, methods that can be used to increase the imaging depth. Illumination modes with alternative angles of incidence can be used depending on the application; examples include epifluorescence and highly inclined thin illumination (HILO) microscopy. As discussed before, in epifluorescence the sample is illuminated with the excitation beam essentially perpendicular to the glass/sample interface, generating a theoretically infinite illumination depth (Fig. 4). However, in applications that require the detection of low-abundance molecules (or molecules with low fluorescence intensity in general), this technique is only effective if the background fluorescence and the total amount of fluorescent molecules are low. Unlike TIRF, epifluorescence excites the whole depth of the sample, and molecules outside the focal plane contribute to the background fluorescence, effectively decreasing the signal-to-noise ratio and possibly increasing photobleaching (Coelho et al. 2013).

In HILO, the angle of incidence is set to a value slightly below the critical angle (Fig. 4). The light travels into the solution of lower refractive index and is refracted at a highly inclined angle, increasing the illumination depth to up to 20 μm (Coelho et al. 2013). This greater illumination depth allows the visualization of fluorescent molecules located farther from the glass/sample interface, but it also decreases the signal-to-noise ratio compared to TIRF. Konopka and Bednarek (2008) applied HILO to visualize the cortical cytoskeleton, membrane proteins, and organelles (mitochondria and the Golgi complex close to the plasma membrane) of epidermal plant cells. In that work, the authors evaluated and demonstrated HILO as a technique suitable for fluorescence imaging inside plant cells, whose walls can be on the order of 0.5 μm thick. Although HILO is less sensitive, this work illustrates a case in which it is more suitable than TIRF.

Furthermore, the use of different illumination methods for the detection of biomolecules in cells has been compared in the literature. For instance, Wang et al. (2010) evaluated the effect of extracellular stimuli on protein translocation between the cell membrane and the cytosol using TIRF, epifluorescence, and a microfluidic device (analogous to flow cytometry) coupled with TIRF. In this work, the Syk kinase protein was fluorescently labeled, and its translocation was successfully visualized using TIRF when the molecule reached the evanescent field close to the cell membrane; translocation was monitored through the increase in fluorescence signal observed 10 and 20 min after the stimulus. Epifluorescence generated an image with lower fluorescence intensity, in which translocation became evident only 60 min after the stimulus. The microfluidic flow cytometry device coupled with TIRF was used to monitor single cells passing through the microfluidic channel: each cell was individually compressed under the detection region to ensure proper evanescent-field illumination, and the fluorescence signal was measured per cell at different time points after the extracellular stimulus. These data indicated that protein translocation occurred between 10 and 20 min after the stimulus. The use of the microfluidic flow cytometry device coupled to TIRF has the advantage of being less susceptible to intensity losses from photobleaching under a constant illumination source. Its disadvantage, however, is that real-time fluorescence monitoring is impossible owing to the transient nature of the experiment. The work also highlights that the flow cytometry experiment was favored by the use of a photomultiplier tube rather than a CCD camera because of its higher data acquisition rate. The authors do not, however, comment on the common drawback of cells adhering to the walls of PDMS microfluidic devices (Lee et al. 2004). More importantly, the article represents a good example of how the choice of microscopy technique may affect the interpretation of the results.

As an additional alternative to minimize the specific limitations of TIRF (as well as of HILO and epifluorescence), fluorescently labeled biomolecules can be detected with multi-angular illumination systems, i.e., systems in which the angle of incidence of the light beam is varied over time. Saffarian and Kirchhausen (2008) highlighted this approach by proposing a method for monitoring two sets of fluorophores using sequential TIRF and epifluorescence acquisition. In that work, the authors were able to monitor the axial separation between the clathrin protein and its adaptor complex AP-2 during the formation of vesicles on the membrane of live BSC-1 cells.

In conclusion, TIRF, HILO, and epifluorescence are powerful techniques capable of generating fluorescence images with high resolution and provide the means for several applications in cell studies. Together with other high-resolution fluorescence techniques, they can help the scientific community understand specific interactions in the intracellular medium. TIRF, especially when associated with Förster resonance energy transfer (FRET) microscopy, has provided excellent tools for detecting dynamic cellular processes.

6 Monitoring Intracellular Biochemical Interactions and Macromolecular Conformation Using Förster Resonance Energy Transfer Microscopy

As discussed in detail in Chap. 4, "UV-Vis Absorption and Fluorescence in Bioanalysis," Förster resonance energy transfer (FRET) is useful for evaluating the relative positions of two fluorophore groups. When a molecule containing a fluorophore (the donor) is excited, the absorbed energy can be non-radiatively transferred to a second molecule (the acceptor), depending on the distance between them (Tinoco and Gonzalez 2011). For FRET to occur, the emission band of the donor must overlap the absorption band of the acceptor (Roy et al. 2008). In chemistry and biochemistry, this phenomenon can be exploited to monitor the stochastic nature of a given chemical reaction, in which both reagents are labeled with fluorophores and their interaction generates a measurable signal through FRET. Likewise, macromolecules such as DNA, RNA, and proteins can be labeled at specific positions to detect conformational changes of individual molecules (Sakon and Weninger 2010).

The FRET efficiency (EFRET) is defined as the fraction of donor excitation events that result in energy transfer to the acceptor, and it depends on the sixth power of the donor-acceptor distance (R):

$$ {E}_{\mathrm{FRET}}=\frac{1}{1+{\left(R/{R}_0\right)}^6}=\frac{{I}_{\mathrm{A}}/{Q}_{\mathrm{A}}}{{I}_{\mathrm{A}}/{Q}_{\mathrm{A}}+{I}_{\mathrm{D}}/{Q}_{\mathrm{D}}} $$

where R0 is the Förster distance (the donor-acceptor distance at which EFRET = 0.5), and IA, ID, QA, and QD are the fluorescence intensities and quantum yields of the acceptor and the donor, respectively (Tinoco and Gonzalez 2011).

EFRET ranges from 0 to 1, corresponding to larger and smaller distances, respectively. EFRET is usually plotted as a time-dependent signal or as a histogram (Fig. 5). The distance resolution is on the order of a few nanometers, although absolute distances are difficult to measure.

Fig. 5

(a) FRET efficiency as a function of the distance between the donor and acceptor fluorophores; (b) Example of a histogram of event counts in FRET; (c) Typical EFRET signal monitored as a function of time, containing information on null efficiency (E1), low FRET (E2), and high FRET (E3). Reproduced from Refs. (Roy et al. 2008; Lamboy et al. 2013) with permission
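The steep distance dependence expressed in the equation above, and illustrated in Fig. 5a, can be made concrete with the short sketch below, which evaluates EFRET = 1 / (1 + (R/R0)^6) over a range of donor-acceptor distances. The R0 of 5 nm is only an illustrative order of magnitude for common FRET pairs, not a value taken from this chapter.

# Minimal sketch: FRET efficiency as a function of donor-acceptor distance.
# E_FRET = 1 / (1 + (R / R0)^6); R0 = 5 nm is an assumed, illustrative Forster distance.
def fret_efficiency(r_nm, r0_nm=5.0):
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

for r in (2.0, 4.0, 5.0, 6.0, 8.0, 10.0):
    print(f"R = {r:4.1f} nm -> E_FRET = {fret_efficiency(r):.2f}")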

Either confocal or epifluorescence microscopes can be used to obtain FRET images. These techniques are compatible with a wide range of fluorophores, have high image acquisition rates, and do not require post-processing to obtain real images (Toomre and Bewersdorf 2010). Compared to TIRF, their main disadvantages are lower spatial resolution and stronger photobleaching. The choice of the fluorescent labeling probe is therefore of utmost importance in FRET. Most authors use fluorescent proteins derived from GFP (green fluorescent protein) or synthetic dyes (Tables 1 and 2), although other labeling methods are also employed (Roy et al. 2008; Wegner et al. 2013).

Owing to its characteristic mechanism of fluorescence generation, FRET is broadly applicable to biomolecular interaction studies, providing information not available from other techniques. Lamichhane et al. (2013), using TIRF microscopy, employed single-molecule FRET (smFRET) to understand the mechanisms involved in the 3′-5′ exonuclease activity of DNA polymerase in vitro. Many DNA polymerases possess two spatially separated domains: the polymerase domain (pol), responsible for the addition of nucleotides (complementary to the DNA template) to a primer, and the exonuclease domain (exo), involved in the proofreading of the newly synthesized chain. Using Alexa Fluor-labeled oligonucleotides and genetically modified DNA polymerase, it was possible to investigate the mechanisms and dynamics of DNA substrate switching between the pol and exo sites, since the binding of the primer to the exonuclease domain was expected to reduce EFRET (Fig. 6). In fact, the authors demonstrated that the primer can initially bind to either domain and that the change in domain proceeds through dissociation and diffusion of either the primer or the DNA polymerase. The retention time of each FRET state evidenced the applicability of this technique for studying the binding equilibrium between the primer and the DNA polymerase domains (Fig. 6).

Fig. 6

Dynamic domain change in DNA polymerase. (a) Proposed mechanism of domain change, with values of retention times and kinetic constants; (b) EFRET signal showing the domain change between the 0.80 and 0.65 states, together with the monitored fluorescence intensities of the donor (green) and acceptor (red). Reproduced with permission from Ref. (Lamichhane et al. 2013). Copyright 2013 American Chemical Society

Regarding conformational analyses, Wang et al. (2013) also applied smFRET to study the bending of DNA by the HIV-1 nucleocapsid protein (NC). DNA strands labeled with Cy3 and Cy5 probes (Table 1) were immobilized on glass coverslip surfaces so that bending of the DNA would produce the energy transfer effect (Fig. 7). The results indicate that, in the absence of NC, the DNA strand remains in an extended (unbent) form, given the small values of EFRET. In the presence of NC, EFRET shows a heterogeneous distribution, suggesting dynamic changes in DNA bending, which may be related to the incorporation of the viral DNA into the genetic material of the host observed in vivo.

Fig. 7

Schematic representation of the FRET effect produced by DNA bending induced by the HIV-1 nucleocapsid protein. Reproduced from Ref. (Wang et al. 2013) with permission. Copyright 2013 American Chemical Society

In a broader approach, Su et al. (2013) showed the possibility of simultaneously monitoring two biomolecular interactions using FRET-based biosensors employing two pairs of fluorophores. In this way, the authors monitored the increases in intracellular Ca2+ concentration ([Ca2+]ic) and in Src kinase activity produced in the presence of epidermal growth factor (EGF). Measurements were made in the blue-green region for Src kinase and in the yellow-orange region for Ca2+. As a result, it was possible to determine the kinetics of these two biomolecular events at the same subcellular site (Fig. 8). Upon EGF stimulation, Ca2+ was released into the cytosol and returned to basal levels after 8 min, while Src activity remained high throughout the image acquisition period. Other biosensors have been proposed with a similar approach, for example for cAMP and cGMP monitoring (Sprenger and Nikolaev 2013).

Fig. 8

Dual FRET biosensor. (a) Overlapped fluorescence intensities for the simultaneous monitoring of Src kinase (blue-green, Src) and Ca2+ (yellow-orange, TnC) in HeLa cells; (b) Normalized emission ratios for both pairs of fluorophores. EGF: epidermal growth factor. Reproduced from Ref. (Su et al. 2013) with permission

The comparison of different illumination techniques is important for deciding which methodology to apply. Lin and Hoppe (2013) compared the images and FRET signals obtained in TIRF and epifluorescence modes. HIV Gag proteins were labeled with the fluorescent proteins mCerulean (a CFP) and mCitrine (a YFP), both derived from GFP (Table 2), so as to exhibit FRET when virus particles interact with the plasma membrane of COS-7 cells. Owing to the high resolution of the experiments, it was possible to detect high-EFRET events in TIRF that were not observed in epifluorescence, which were related to the presence of individual virus particles at the cell membrane.

Taken together, these examples show how FRET can be used to effectively obtain unique spatial, conformational, or kinetic information in biological systems that cannot be assessed with other light-based techniques. Moreover, the combination of high-resolution microscopy techniques, such as TIRF, with single-molecule FRET opens up interesting research possibilities.

7 Concluding Remarks

The evolution of high-resolution fluorescence techniques depends on the continuing technological development of their component instruments, such as lasers, detectors, and computational data processing, as well as on the development of more stable and specific fluorescent probes. The detection methods for cellular components discussed in this chapter share an important characteristic: all of them can also be used for in vivo studies.

Laser scanning confocal fluorescence microscopy has been widely used to study cell structure and function with high resolution and, despite its limitations regarding photobleaching and phototoxicity, it still has a place in a wide range of biological studies. TIRF microscopy, in turn, is a valuable tool for visualizing fluorescent molecules in the vicinity of the plasma membrane, as well as for detecting components immobilized at the glass/sample interface of the microscope. Additionally, TIRF is particularly suitable for FRET-based applications, which allow the monitoring of inter- and intramolecular interactions in a time-resolved manner.

Although FRET resolution is much greater with a TIRF microscope, the technique can also be applied using confocal or conventional epifluorescence microscopy, with satisfactory results for biomolecule analyses in vivo. Thus, TIRF allows FRET applications for the analysis of biomolecules close to the plasma membrane, while the other techniques can be used to evaluate other cellular regions. In addition, combining these high-resolution fluorescence techniques with microfluidic devices improves the evaluation of single cells, allowing population heterogeneity to be addressed with high sensitivity.

Therefore, the use of these different high-resolution fluorescence methods, each with specific characteristics that extend the applicability and versatility of fluorescence for biomolecule and organelle detection, combined with other analytical techniques, can increase the quality of the results obtained and improve our understanding of the multiple biological processes that occur in the cellular context.