
13.1 Introduction

Improving the quality of health care has been an eternal pursuit of mankind. Quality health care has two prime objectives: first, to detect disease at an early stage before it becomes difficult to manage, and second, to treat it with high selectivity and precision without any adverse effect on uninvolved tissues. The utilization of continued technological advances toward these objectives has therefore always been a priority research area. The last century witnessed the development of computerized X-ray tomography (CT scan), magnetic resonance imaging (MRI), ultrasonography, etc., which have made it possible to look noninvasively inside the body and thus help detect disease at an early stage. Several new therapeutic modalities and drugs have also been developed to improve the selectivity of treatment. These technological developments have also led to an improved understanding of the causes of disease and the ways it can be treated. Lasers, one of the major inventions of the last century, are also playing a very important role in the pursuit of both objectives. Although light has been used for diagnosis and therapy from time immemorial, the availability of lasers as light sources with remarkable control over their properties, the advances in optics and instrumentation, and the large image-processing capability of computers are making possible a much more comprehensive use of the information content of the light coming from tissue for high-resolution imaging and for quantitative, sensitive, noninvasive diagnosis. Optical techniques are helping image microstructures in living tissue with resolution down to a few micrometers, whereas with current frontline biomedical imaging techniques like MRI, CT scan, and ultrasonography, it is difficult to achieve resolution better than 100 μm.
Optical spectroscopy of elastically and inelastically scattered light from tissue is also facilitating in situ noninvasive diagnosis with none of the potential adverse effects associated with the use of ionizing radiation. Photoactivated drugs that are inert until photoexcited by radiation of the right wavelength are being used to target tissue selectively by exercising control over the light exposure (only the tissue exposed to both drug and light will be affected). A good example is the fast-developing photodynamic therapy of cancer. There are indications that selective photoexcitation of native chromophores in the tissue may also lead to therapeutic effects.

In this chapter we first provide a brief overview of the propagation of light through tissue and then discuss the use of light for biomedical imaging, diagnosis, and therapeutic applications. The use of light to manipulate single cells/subcellular objects and the role it can play in biomedical diagnosis at single cell level are also addressed.

13.2 Laser Tissue Interaction

Light passing through biological tissue is attenuated both by absorption by its constituents and by scattering due to the presence of microscopic inhomogeneities (macromolecules, cell organelles, organized cell structures, interstitial layers, etc.). Since tissue is an inhomogeneous, multicomponent system, its absorption at a given wavelength is a weighted average of the absorption by its constituents. The major contributors to absorption in tissue in the ultraviolet spectral range are DNA and proteins. In the visible and near-infrared (NIR) wavelength range, absorption in tissue is dominated by hemoglobin and melanin. Absorption by water, the main constituent of all tissues, becomes significant beyond ∼1 μm and dominant beyond about 2 μm wavelength [1]. For biomedical imaging applications, one uses wavelengths in the so-called diagnostic window (650 nm to 1.3 μm), where tissue absorption is weakest. This is desirable for two reasons: first, it allows probing of larger depths of the tissue, and, second, it avoids unnecessary deposition of energy in the tissue, which might lead to adverse effects.

Attenuation of light propagating in a non-scattering medium is completely described by the Beer-Lambert law, \( I={I}_0\exp \left(-{\mu}_{\mathrm{a}}z\right) \), where μa is the absorption coefficient. While scattering may remove photons from the beam path and thus contribute to its attenuation, multiple scattering events may also bring photons back into the beam path. These photons, although not part of the collimated beam, also add to the irradiance at a given point along the direction of propagation of the beam, making prediction of the depth profile of the propagating beam more complicated. It should also be noted that the degree of attenuation arising from scattering depends on the angular distribution of the scattered photons, which in turn has a strong dependence on the size of the scatterer. For scatterers with size ≪ wavelength, referred to as Rayleigh scatterers, the phase of the electromagnetic field across the scatterer can be treated as constant. Therefore, the light scattered by all the induced dipoles in the scatterer adds up in phase, resulting in dipole-like scattering. Here the angular distribution of the scattered light, often referred to as the “phase function,” shows no dependence on the angle of scattering in the plane transverse to the electric field of the incident light, but in the plane containing the electric field, it shows a cosine-squared intensity pattern with minima along the dipole axis due to the transverse nature of the electromagnetic wave. Further, as first shown by Rayleigh, for such small scatterers, the scattered intensity is inversely proportional to the fourth power of the wavelength. For larger scatterers (size > λ), light scattered by all the induced dipoles in the scatterer adds up in phase only in the forward direction, making the angular distribution of the scattered light peak in the forward direction.
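The exponential attenuation described by the Beer-Lambert law is easy to sketch numerically. The short Python snippet below is illustrative only; the absorption coefficient used is an assumed example value, not one taken from Table 13.1.

```python
import math

def transmitted_fraction(mu_a_per_cm: float, depth_cm: float) -> float:
    """Beer-Lambert law for a purely absorbing (non-scattering) medium:
    I/I0 = exp(-mu_a * z)."""
    return math.exp(-mu_a_per_cm * depth_cm)

# Assumed example: mu_a = 0.5 cm^-1 over 1 cm of medium
print(transmitted_fraction(0.5, 1.0))  # exp(-0.5) ≈ 0.61
```

For a scattering medium, as discussed above, this simple exponential no longer describes the total irradiance at depth.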

An exact mathematical description of scattering from spherical particles of size > wavelength was provided by Ludwig Valentin Lorenz and Gustav Mie [2]. It is therefore referred to as Lorenz-Mie scattering, or often just Mie scattering. In the Mie regime, the wavelength dependence of the scattering coefficient for different tissues is given as \( {\lambda}^{-k} \), where k typically varies from 1 to 2 [3]. The first moment of the phase function is the average cosine of the scattering angle, denoted by “g.” It is also referred to as the anisotropy parameter. The value of “g” ranges from −1 to +1, where g = 0 corresponds to isotropic scattering (as in Rayleigh scattering), g = +1 corresponds to ideal forward scattering, and g = −1 corresponds to ideal backward scattering. A photon acquires a random direction after about 1/(1 − g) scattering events, which is only five for g = 0.8. Typical values of g for biological tissues vary from 0.7 to 0.99. The parameters used to describe the scattering properties of tissue are the scattering coefficient μs and the reduced scattering coefficient μs′ (= μs(1 − g)). The inverse of the reduced scattering coefficient defines the path length over which the incident light loses its directional information, that is, over which the angular distribution of the scattered light becomes isotropic. In Table 13.1 we show the values of these optical parameters for some biological tissues [4].
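As a quick numerical illustration of these scattering parameters (using the representative values quoted above, not measured data), one can compute the reduced scattering coefficient and the approximate number of scattering events needed for direction randomization:

```python
def reduced_scattering(mu_s_per_cm: float, g: float) -> float:
    """Reduced scattering coefficient mu_s' = mu_s * (1 - g)."""
    return mu_s_per_cm * (1.0 - g)

def randomization_events(g: float) -> float:
    """Approximate number of scattering events after which a photon
    acquires a random direction: ~1 / (1 - g)."""
    return 1.0 / (1.0 - g)

# mu_s = 100 cm^-1 with g = 0.9 gives mu_s' = 10 cm^-1,
# i.e., directional information is lost over ~1 mm
print(reduced_scattering(100.0, 0.9))
print(randomization_events(0.8))  # ~5 scattering events for g = 0.8
```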

Table 13.1 The typical values for absorption coefficient (μ a), scattering coefficient (μ s), and anisotropy parameter (g) for some human tissues at different wavelengths (Adapted from Ref. [3])

13.3 Optical Imaging

A major difficulty in the use of light for biomedical imaging arises because, in contrast to X-rays, visible light photons undergo multiple scattering in tissue, leading to blurring of the image. Therefore, for histopathology one makes use of transverse sections of tissue whose thickness is smaller than the mean free path for scattering. In order to comprehend how light can be used for in situ optical imaging of objects embedded in a turbid medium, let us consider the propagation of a short-duration laser pulse through the turbid medium. The un-scattered (ballistic) photons will emerge first, followed by the predominantly forward scattered (snakelike) component and the multiply scattered diffuse component. To have a perspective on the relative magnitude of these components, let us take the value of the scattering coefficient to be ∼100 cm−1. The number of ballistic photons after propagation through a 1 cm thick tissue will then be of the order of \( {e}^{-100} \) of the incident number of photons, that of the snakelike component of the order of \( {e}^{-10} \) (assuming g ∼ 0.9 for the tissue), and the major fraction will be the diffuse component. Since the coherence of light is lost in a few scattering events, coherence gating can be used to filter out the ballistic photons which, being un-scattered or minimally scattered, carry the highest image information and hence can provide images with the best resolution (down to a few μm). However, the imaging depth will be limited to at best a few mm. Therefore, coherence gating can only be used for imaging of transparent objects (like ocular structures) or thin turbid tissue like the mucosal layers of hollow organs. Optical coherence tomography (OCT), the approach that exploits coherence gating for optical imaging, has emerged as a rapid, noncontact, and noninvasive high-resolution imaging technique and is already being used for clinical applications in ophthalmology, dermatology, etc. [5].
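The relative magnitudes of the ballistic and snakelike components quoted above follow directly from these exponentials. A short Python check, using the same assumed values μs ∼ 100 cm−1 and g ∼ 0.9:

```python
import math

def ballistic_fraction(mu_s_per_cm: float, depth_cm: float) -> float:
    """Un-scattered (ballistic) fraction: exp(-mu_s * z)."""
    return math.exp(-mu_s_per_cm * depth_cm)

def snake_fraction(mu_s_per_cm: float, g: float, depth_cm: float) -> float:
    """Rough estimate of the forward scattered (snakelike) fraction using
    the reduced scattering coefficient: exp(-mu_s * (1 - g) * z)."""
    return math.exp(-mu_s_per_cm * (1.0 - g) * depth_cm)

# mu_s ~ 100 cm^-1, g ~ 0.9, 1 cm of tissue
print(ballistic_fraction(100.0, 1.0))   # ~e^-100, essentially zero
print(snake_fraction(100.0, 0.9, 1.0))  # ~e^-10
```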

Another approach to filter out the multiply scattered light is to make use of the depolarization of the multiply scattered light or the fact that multiply scattered light travels a longer path and hence takes a longer time to reach the detector [6–9]. Since in these approaches both the snakelike and ballistic photons are collected, the sum total of which is orders of magnitude larger than the ballistic component alone, they can provide images through larger depths. Further, with the use of nonlinear optical techniques like stimulated Raman scattering, the image-bearing component of light can be selectively amplified to further enhance the depth of imaging [10]. However, due to the use of predominantly forward scattered photons, which might have undergone several scattering events, the resolution is poorer (of the order of 100 μm).

For imaging through larger depths, as, for example, for imaging the human brain or female breast, one has to necessarily work with diffuse photons. Although the spatial resolution possible in imaging using diffuse photons is rather limited (at best a few mm), there is considerable interest in this approach, referred to as diffuse optical tomography, because it allows imaging through the largest depths of the turbid medium.

13.3.1 Optical Coherence Tomography (OCT)

A schematic of an OCT setup is shown in Fig. 13.1. It comprises a low temporal coherence light source and a fiber-optic Michelson interferometer, one arm of which has the sample and the other a reference mirror. Light reflected from a layer of the sample and from the reference mirror will interfere when the two path lengths are within the coherence length of the source. Axial scanning of the reference mirror helps record interferograms from different depths of the sample. Two-dimensional cross-sectional images and three-dimensional tomograms of the backscattered intensity distribution within the sample can be obtained by recording the interference signals from various axial (A scan) and transverse positions (B scan) on the sample [11–13]. High spatial coherence (i.e., a single transverse mode) is needed since superposition of interference patterns corresponding to multiple spatial modes leads to washout of information. Considering the electric fields in the reference and sample arms to be ER and ES, respectively, the intensity at the detector can be described as

$$ {I}_{\mathrm{D}}\propto {\left|{E}_{\mathrm{R}}+{E}_{\mathrm{S}}\right|}^2 $$
(13.1)
$$ {I}_{\mathrm{D}}\propto {\left|{E}_{\mathrm{R}}\right|}^2+{\left|{E}_{\mathrm{S}}\right|}^2+2\left|{E}_{\mathrm{R}}\right|\left|{E}_{\mathrm{S}}\right|\mathrm{Re}\left[\gamma \left(\tau \right)\right] \cos \left(\frac{2\pi }{\lambda_0}2z\right) $$
(13.2)

where 2z is the round-trip optical path difference between the reference and sample arms, λ0 is the central wavelength of the source, and γ(τ) is the complex degree of coherence of the electric fields. A scan of the optical path in the reference arm with uniform velocity v allows probing of different layers in the sample and generates an amplitude-modulated signal ID(t) at a carrier frequency determined by the velocity of scanning. This can be expressed as

$$ {I}_{\mathrm{D}}(t)\propto {\left|{E}_{\mathrm{R}}\right|}^2+{\left|{E}_{\mathrm{S}}\right|}^2+2\left|{E}_{\mathrm{R}}\right|\left|{E}_{\mathrm{S}}\right|\mathrm{Re}\left[\gamma \left(\tau \right)\right] \cos \left(\frac{2\pi }{\lambda_0}2 vt\right) $$
(13.3)
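The carrier frequency of the signal in Eq. (13.3) is f = 2v/λ0. As a hedged illustration (the scan velocity and wavelength below are assumed example values, not parameters of any particular setup):

```python
def carrier_frequency_hz(scan_velocity_mm_per_s: float, lambda0_nm: float) -> float:
    """Carrier frequency of the time-domain OCT signal, f = 2 v / lambda0,
    with both quantities converted to the same length unit (nm)."""
    return 2.0 * scan_velocity_mm_per_s * 1e6 / lambda0_nm

# Assumed: 10 mm/s reference-arm scan with an 840 nm source
print(carrier_frequency_hz(10.0, 840.0))  # ≈ 23.8 kHz
```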

Unlike in microscopy, the axial and lateral resolutions are decoupled in OCT. The axial resolution is one half of the coherence length (lc) of the source, which is directly proportional to the square of its central wavelength (λ0) and inversely proportional to its spectral bandwidth (Δλ). For a source with a Gaussian spectral distribution, the axial resolution is given by \( \Delta z={l}_c/2=\frac{2\; \ln\;2}{\pi}\left(\frac{\lambda_0^2}{\Delta \lambda}\right). \) The transverse (lateral) resolution is governed by the spot size formed by the focusing optics used in the sample arm and can be expressed as Δx = 4λ0f/πd, where d is the beam diameter on the objective lens and f is its focal length. Although the use of a higher-NA objective lens can provide better lateral resolution, it comes with a reduced imaging depth arising from the reduced depth of focus of a high-NA objective. Therefore, to achieve an imaging depth larger than the depth of focus of the objective lens, it becomes necessary to focus the light beam at different depths and stitch the different images together [14, 15]. Other approaches investigated for ensuring a large depth of focus with good lateral resolution are the use of an axicon lens [16] and a tapered fiber tip probe to illuminate the sample [17].
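These resolution formulas can be evaluated for a typical source. The numbers below (an 840 nm source with 50 nm bandwidth, a 20 mm focal length lens, and a 5 mm beam diameter) are assumed illustrative values, not parameters of the setups described in this chapter:

```python
import math

def axial_resolution_um(lambda0_um: float, bandwidth_um: float) -> float:
    """Axial resolution for a Gaussian-spectrum source:
    dz = l_c/2 = (2 ln 2 / pi) * lambda0^2 / delta_lambda."""
    return (2.0 * math.log(2.0) / math.pi) * lambda0_um**2 / bandwidth_um

def lateral_resolution_um(lambda0_um: float, focal_length_um: float,
                          beam_diameter_um: float) -> float:
    """Lateral resolution set by the focused spot: dx = 4 lambda0 f / (pi d)."""
    return 4.0 * lambda0_um * focal_length_um / (math.pi * beam_diameter_um)

# Assumed: 840 nm source, 50 nm bandwidth; f = 20 mm lens, d = 5 mm beam
print(axial_resolution_um(0.84, 0.05))         # ≈ 6.2 um axial resolution
print(lateral_resolution_um(0.84, 20e3, 5e3))  # ≈ 4.3 um lateral resolution
```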

Fig. 13.1

Schematic of a low coherence interferometry setup. The sample is assumed to comprise three layers, shown by solid, dotted, and dashed lines. The path-matched locations of the reference mirror (a) as well as the interference patterns and their envelopes (b) for the three layers in the sample are shown in solid, dotted, and dashed lines, respectively

In time-domain OCT setups, the reference arm path length is changed either by moving the reference mirror or by using a Fourier optic delay line [18]. Though all depths in the sample are illuminated, data is collected sequentially from only one depth at a time. While this makes a time-domain OCT setup less sensitive to vibrations or to the movement of scatterers, it also leads to the drawback of slow image acquisition speed.

In Fourier domain OCT (FDOCT), the Fourier transform (FT) of the interference spectrum is used to retrieve the axial (depth) information from all depths without the need for scanning the reference arm [19]. This enhances the image acquisition speed. There are two variants of the Fourier domain approach: one, referred to as spectral domain OCT (SDOCT), utilizes a broadband source such as a superluminescent diode (SLD) and a spectrometer in the detection arm; the other, referred to as swept source OCT (SSOCT), makes use of a swept source (a source whose wavelength is tuned as a function of time) and a single photodetector in the detection arm [20].

A schematic of SDOCT is shown in Fig. 13.2. Here the light reflected from the reference mirror and the sample layers is spatially dispersed with the help of a grating onto a linear photodiode array or CCD. Because each pixel of the detector sees only a narrow spectral band (δλ), the resulting coherence length is large and allows interference of the reference light with the light reflected from different depths of the sample. This leads to fringes in k space, with the fringe frequency increasing for signals arising from deeper layers. The highest resolvable spatial frequency is determined by the number of sampling points N, which corresponds to the number of illuminated pixels. Because, as per the Nyquist theorem, the sampling frequency should be at least twice the maximum detectable frequency of the spectrum, the maximum imaging depth is given by

Fig. 13.2

(a) Schematic of an SDOCT system. C collimator lens, FC fiber coupler, G galvo-scanner, L imaging lens, LSC line scan camera, M reference mirror, S sample, SLD superluminescent diode, TG transmission grating. (b) Relation between depth and frequency. The interference patterns shown in red and blue are from the corresponding layers of the sample. The measured interference spectrum is shown in black. Its Fourier transform provides peaks at the depth locations corresponding to the two layers of the sample

$$ {Z}_{\max }=\frac{\lambda_0^2}{2n\;\Delta \lambda }N $$
(13.4)

where N is the number of pixels, Δλ is the width of the spectrum recorded on the detector, and n is the refractive index of the medium. Since the Fourier transform of the real-valued spectrum is Hermitian symmetric, only half of this range can be used effectively for positive and negative distances. Therefore, the maximum imaging depth is given by \( {Z}_{\max }=\pm {\lambda}_0^2/\left(4n\;\delta \lambda \right) \), where δλ = Δλ/N is the spectral width sampled by each pixel [21].
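A sketch of this depth calculation in Python; the source and camera parameters below are assumed for illustration:

```python
def max_imaging_depth_mm(lambda0_nm: float, full_bandwidth_nm: float,
                         n_pixels: int, n_medium: float) -> float:
    """Nyquist-limited SDOCT imaging depth:
    z_max = lambda0^2 / (4 n delta_lambda), with delta_lambda = bandwidth / N."""
    delta_lambda = full_bandwidth_nm / n_pixels        # spectral width per pixel
    z_max_nm = lambda0_nm**2 / (4.0 * n_medium * delta_lambda)
    return z_max_nm * 1e-6                             # nm -> mm

# Assumed: 840 nm source, 50 nm spectrum on a 2048-pixel line camera,
# tissue refractive index n ~ 1.38
print(max_imaging_depth_mm(840.0, 50.0, 2048, 1.38))  # ≈ 5.2 mm
```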

It should be noted that the data collected by the spectrometer is sampled at equal wavelength increments and hence is unequally spaced in the k domain. The FT of unevenly spaced data points in k space results in broadening of the point spread function with increasing optical path delay. Hence, the axial resolution degrades with increasing depth inside the sample. A proper depth profile can be obtained only after the interference pattern is converted from the evenly spaced λ domain to an evenly spaced k domain. This is done by resampling the interference pattern I(λ) to generate equispaced data I(k) in the k domain [22]. Another point to be noted is that the Fourier transform of a real-valued function like I(k) produces a complex conjugate artifact. Therefore, the FT of I(k) leads to two mirror images about the zero delay plane, the plane in the sample for which the path difference between the sample and reference arms is zero. To avoid overlapping of images from different layers, the reference arm path length is kept such that the plane in the sample arm corresponding to zero path difference is on the outermost surface of the sample, or preferably outside the sample by a distance of a few coherence lengths of the source. The complex conjugate artifact can be removed by generating a complex spectral intensity pattern from the measured real-valued function I(k). One approach used for this purpose is to impart a constant modulation frequency to the interferograms by moving the reference mirror with a uniform velocity during the B scan [23]. This not only doubles the imaging range of the FDOCT setup but also provides a way to place the highest-sensitivity region, which is at the zero delay plane, inside the sample.
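The λ-to-k resampling step can be sketched with simple linear interpolation, as below; in practice, spline interpolation or dedicated spectrometer calibration is often used. All signal parameters here are assumed for illustration:

```python
import numpy as np

def resample_to_k(wavelengths_nm, spectrum):
    """Resample an interference spectrum I(lambda), acquired on an evenly
    spaced wavelength grid, onto an evenly spaced k = 2*pi/lambda grid so
    that an FFT yields a depth profile with a depth-independent PSF."""
    k = 2.0 * np.pi / wavelengths_nm                 # unevenly spaced in k
    k_uniform = np.linspace(k.min(), k.max(), k.size)
    # np.interp needs ascending sample points; k decreases with lambda
    return k_uniform, np.interp(k_uniform, k[::-1], spectrum[::-1])

# Assumed: 50 nm span around 840 nm on 2048 pixels, single reflector
lam = np.linspace(815.0, 865.0, 2048)
i_lam = 1.0 + np.cos(2.0 * np.pi / lam * 2.0 * 50e3)  # fringe from ~50 um depth
k_uniform, i_k = resample_to_k(lam, i_lam)
depth_profile = np.abs(np.fft.fft(i_k))
```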

At longer wavelengths (greater than 1,100 nm), where larger imaging depths can be achieved due to reduced scattering, the cheaper silicon-based area detectors cannot be used. Since SSOCT requires only a single point detector, its use at these longer wavelengths becomes even more attractive. A better SNR and dynamic range are generally achieved in SSOCT because it avoids the use of a spectrometer and the resulting losses. Further, a balanced detection mode can be used here to cancel the common mode noise, whereas this is not possible in SDOCT.

Although the Fourier domain approach facilitates rapid imaging, the trade-off between lateral resolution and depth of imaging is a major bottleneck limiting its applicability for in situ cytological analysis. Several approaches, like the use of an axicon lens and binary phase filter, the use of computational methods to digitally focus the probe beam, or the use of multiple probe beams focused at different depths of the sample, are being investigated to enhance the depth of focus without compromising the lateral resolution [16, 24, 25]. FDOCT systems are commercially available from several companies, like Carl Zeiss Meditec (Germany), Optovue (USA), Heidelberg Engineering (Germany), and Topcon (Japan), with scanning speeds of a few tens of kHz that are adequate to provide low-definition 3D images of tissues like the human retina.

Full-field OCT (FFOCT) is another variant of OCT [26]. FFOCT uses wide-field illumination of the sample and a CCD or CMOS camera with a phase-stepping technique for acquiring an en face or C scan image. Full-field imaging avoids the requirement of transverse scanning, thereby allowing capture of a 2D en face image just as in microscopy. Most often FFOCT employs a Linnik interferometer with a spatially incoherent light source to reduce speckle and inter-pixel cross talk. Since FFOCT acquires en face images, high-NA (0.3–0.5) objectives can be used to obtain high transverse resolution. The high (∼1 μm) axial and transverse resolution offered by FFOCT makes it a useful tool for noninvasive histology and real-time cellular imaging, as in embryology and developmental biology.

Apart from high-resolution structural information, OCT, with suitable adaptations, can also provide functional information about the sample. For example, by taking OCT images for the two orthogonal linear polarizations of the scattered light using polarization-sensitive OCT (PSOCT), we can get information about the birefringent properties of the tissue [27]. Since the morphology and even the amount of connective tissue proteins like collagen change during healing of wounds or during the progression of cancer, the measurement of tissue birefringence can provide valuable diagnostic information. A Doppler OCT system can allow measurement of vascular blood flow in the sample and thus significantly add to the diagnostic potential for noninvasive monitoring of wounds.

At RRCAT, OCT setups of varying sophistication have been developed and used for noninvasive, high-resolution (∼10–20 μm) biomedical imaging applications. A typical OCT image of a zebra fish eye recorded in vivo is shown in Fig. 13.3a.

Fig. 13.3

(a) In vivo OCT image of a zebra fish eye, (b) 2D cross-sectional image of zebra fish brain, and (c) 3D isosurface model of the zebra fish brain constructed using cross-sectional images

From the image we could estimate important ocular parameters like the corneal and retinal thickness and the anterior angle of the cornea with the iris [28]. Further, exploiting the fact that OCT measures the optical path length, we measured the gradient refractive index profile of the lens, without excising it, by fitting the measured path length at different lateral positions to the known parabolic gradient profile [29]. The images of zebra fish brain sections recorded in vivo [30] using a real-time OCT setup are shown in Fig. 13.3b. About 90 cross-sectional images (XZ plane) of the brain were taken by moving the sample in the Y direction in steps of 0.05 mm. Internal structures such as the bulbus olfactorius, telencephalon, tectum opticum, cerebellum, frontal bone, and eminentia granularis were clearly distinguishable in these images. The raw images were thresholded to minimize the speckle noise. Using these images, a three-dimensional isosurface model of the zebra fish brain was constructed in the axial plane (Fig. 13.3c). The ability to record images of internal organs noninvasively has also been used to study abnormalities in the development of zebra fish embryos subjected to different toxins like alcohol. The zebra fish embryos were exposed to ethanol at varying concentrations in the range 150–350 mM for 48 h postfertilization, and OCT imaging was performed at regular intervals both on unexposed (control) and ethanol-treated samples. The study showed that, compared to controls, the ethanol-exposed embryos showed shrinkage in the size of their eyes, and the internal structures of the eye in the ethanol-exposed embryos were also less featured. Further, it was observed that while there was no change in the mean retinal thickness of the control larvae from 6 days postfertilization (dpf) to 10 dpf, the retinal thickness of the exposed larvae decreased during 6–10 dpf.
The ethanol-exposed larvae also showed malformations in the spinal cord as evidenced by the distortions in the notochord and bending of tails [31].

The intensity and retardance images of malignant (invasive ductal carcinoma), benign (fibroadenoma), and normal breast tissue samples obtained using a PSOCT system are shown in Fig. 13.4. The resolution of the OCT system used for these measurements was limited to ∼10–15 μm due to the limited bandwidth of the source and was not sufficient to discriminate cytological differences between the normal and abnormal tissues. However, significant structural differences can be seen between the normal and abnormal tissues. While normal breast tissue is composed of large lipid-filled cells and hence has low attenuation, the abnormal tissues exhibit dense scattering. These differences lead to characteristic textures in their OCT images, which can be discriminated using statistical analysis of the OCT images. We have made use of spectral techniques involving a Fourier-based classification method as well as statistical techniques involving texture analysis (TA) for the identification of three different histological tissue types: normal, fibroadenoma (FA), and invasive ductal carcinoma (IDC). Excellent classification results, with specificity and sensitivity of 100 %, could be achieved for binary (normal-abnormal) classification by the use of an algorithm that used Fourier domain analysis (FDA) of the OCT image data set to carry out the feature extraction and TA for classification. The method yielded specificity and sensitivity of 90 % and 85 %, respectively, for the discrimination of FA and IDC [32]. The retardation images also show considerable differences between different pathologies of the breast tissue samples. The birefringence value (4 × 10−4) for the benign tumor (fibroadenoma) was significantly higher than that of the malignant tumor (8 × 10−5) and could be used to differentiate the two [33].

Fig. 13.4

(a) The intensity (I) and retardance (R) images of normal (1st column), malignant (2nd column), and benign (3rd column) human breast tissue samples. (b) Phase retardation depth profiles of benign (top) and malignant (bottom) tumors estimated from the PSOCT images

Polarization-sensitive OCT images and the corresponding histological measurements of the morphology of tissues resected at different time points from bacteria (Staphylococcus aureus) infected and uninfected wounds in mice are shown in Fig. 13.5. These measurements showed that, compared to the uninfected wounds, the infected wounds had prominent edematous regions. Further, a significant delay was seen in the re-epithelization and collagen remodeling phases of wound healing in infected wounds. The OCT measurements were found to be consistent with the corresponding histological measurements, demonstrating the potential of OCT for monitoring the signatures of microbial infection in wounds as well as the progression of wound healing [34]. The results of a recent study by us further show that the phase retardance of wound tissue increases with the healing of the wound, as is the case for wound tensile strength [35]. This indicates that the retardance measured by PSOCT can be a good indicator of tissue tensile strength and wound repair. In contrast, hydroxyproline estimation, which gives an idea of collagen synthesis, does not increase along with wound tensile strength beyond a certain time point. Therefore, a significant difference in wound tensile strength following a therapeutic intervention, compared to untreated wounds, might sometimes be observed even without a difference in hydroxyproline content. This would necessitate repeated histological and biochemical measurements to assess a therapeutic outcome. By using PSOCT, these aspects may be addressed.

Fig. 13.5

Time-dependent structural changes in uninfected wound skin of mice. Left (a, d, g), middle (b, e, h), and right (c, f, i) panels represent backscattered intensity OCT images, PSOCT images, and histological images, respectively. Top (a–c), middle (d–f), and lowermost (g–i) rows represent images of resected wounded skin samples imaged on days 2, 4, and 10 after wounding, respectively. OCT images, image size: 1.5 mm × 3 mm. Histology images, scale bar: 100 μm

13.3.2 Diffuse Optical Tomography (DOT)

For imaging through larger depths, as, for example, for imaging the human brain or female breast, one has to necessarily work with diffuse photons [36]. The basic idea here is to measure the light emerging from the biological object for different source and detector positions around it and to find the three-dimensional distribution of the optical properties of the sample that is able to reproduce the measurements. This distribution corresponds to the image of the sample. This approach, referred to as diffuse optical tomography (DOT), can be used in three modes: continuous wave (CW), time domain (TD), and frequency domain (FD) [37–42]. In continuous wave DOT, constant-intensity light is used to illuminate the object, and the intensity measured for various combinations of source-detector positions is used to reconstruct the image. The technique, though relatively inexpensive and simple, suffers from the drawback that it is not possible to discriminate between absorbing and scattering inhomogeneities, a discrimination which often provides useful clinical information. This is because the intensity of light, which is the only measurable parameter in CW DOT, can be affected by changes in both absorption and scattering. Further, it lacks the temporal information necessary for imaging fast spatiotemporal changes such as those that occur in hemodynamics during brain activity. The time-domain DOT technique measures the delay and spread of an ultrafast (ps-fs) pulse and provides the most complete information about the optical properties of the medium and embedded heterogeneities through the diffuse photons reaching the detectors. Frequency-domain methods, on the other hand, measure the phase shift and demodulation of intensity-modulated (MHz-GHz) waves propagating through tissue.
Although a frequency-domain DOT setup has the limitation that measurements are made at only a few discrete frequencies, it is still more widely used because it is less expensive and more portable than time-domain setups [43]. By exploiting the differences in the absorption spectra of oxy- and deoxy-hemoglobin, DOT can provide useful functional information about blood dynamics and oxygenation levels.

In DOT, large depth of imaging comes at the cost of resolution, which is typically a few mm. Ultrasound-assisted DOT has been investigated as a means to improve the resolution [44]. In this method, ultrasound waves focused in a tissue volume are used to cause localized modulation of the phase of the light scattered from this volume, thereby allowing the measurement of the optical properties of the scattering medium with ultrasound-limited resolution. Because only a small fraction of the light traversing the ultrasound focal region is modulated, highly efficient photon collection and sensitive phase detectors are required [45]. Photoacoustic tomography (PAT) is another approach that makes simultaneous use of optical and ultrasound methods to achieve a large depth of imaging with good resolution [46–48]. Here, acoustic waves generated by absorption of a short laser pulse focused in a small tissue volume are detected by an ultrasonic transducer placed on the surface. By measuring the amplitude of the photoacoustic waves and the time they take to reach the receiver, and knowing the distribution of light inside the tissue volume from a suitable light propagation algorithm, one can determine the optical properties of the imaged region. PAT imaging offers greater tissue specificity and differentiation than ultrasound because the difference in optical absorption between different tissue components is usually much larger than the difference in their acoustic impedance. Hence, features that are not visible with ultrasound can be observed with ease using PAT. Further, because the resolution of a PAT imaging system is determined solely by the transducer geometry and the parameters of the photoacoustic signal, which depend on the energy deposited and the thermal properties of the medium, the diffuse nature of the light does not hamper the resolution.
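The depth localization in PAT follows from the acoustic time of flight. A minimal sketch, assuming a soft tissue speed of sound of ~1.5 mm/μs (an assumed representative value):

```python
def source_depth_mm(arrival_time_us: float,
                    speed_of_sound_mm_per_us: float = 1.5) -> float:
    """Depth of a photoacoustic source from the transducer, estimated from
    the acoustic time of flight: depth = v_sound * t."""
    return arrival_time_us * speed_of_sound_mm_per_us

# A photoacoustic wave arriving 10 us after the laser pulse
# originated ~15 mm below the transducer
print(source_depth_mm(10.0))  # 15.0
```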

13.4 Optical Spectroscopic Diagnosis

While imaging exploits the intensity, coherence, or polarization of the scattered light, other parameters of the scattered light, such as its angular distribution and spectral content, also carry significant diagnostic information. As noted earlier, the angular distribution of the scattered light can provide information about the density and size distribution of scatterers. By making use of the polarized fraction of the scattered light, one can selectively probe epithelial tissue and use the information on the size distribution and density of the nuclei for early-stage diagnosis of cancer [49]. Changes in the polarization parameters of the tissue (retardance, diattenuation, and depolarization) arising from its birefringent (collagen, tendon, etc.) and chiral (glucose) constituents can also be exploited for diagnostics [50, 51].

It is important to note here that the scattered light also has a very weak component that is scattered inelastically, i.e., with a change in frequency, via processes like fluorescence and Raman scattering. The inelastically scattered light is characteristic of the chemical composition and morphology of the tissue and thus can help in monitoring metabolic parameters of the tissue and in discriminating diseased tissue from normal. Since the inelastically scattered light is a very small fraction of the incident light, practical applications require high-brightness sources like lasers and appropriate light delivery and collection systems. Both fluorescence and Raman spectroscopic approaches are being widely investigated for their diagnostic potential. They offer several important advantages for biomedical diagnosis, such as very high intrinsic sensitivity and the use of nonionizing radiation, which make them particularly suited for mass screening and repeated use without adverse effects. Further, the diagnosis can be made in near-real time and in situ, so no tissue needs to be removed. Tissue diagnosis by these techniques can also be easily automated, facilitating use even by less skilled medical personnel. Here we shall restrict ourselves to the use of optical spectroscopy for noninvasive diagnosis of cancer.

13.4.1 Optical Spectroscopy for Cancer Diagnosis

Laser-induced fluorescence (LIF) has been used for diagnosing cancer in two ways. One approach involves systemic administration of a drug like hematoporphyrin derivative (HpD), which is selectively retained by the tumor. When photoexcited with light of appropriate wavelength, the drug localized in the tumor fluoresces, and this fluorescence is used for detection and imaging of the tumor. Photoexcitation also populates the triplet state via intersystem crossing. The molecule in the excited triplet state can react directly with biomolecules or lead to the generation of singlet oxygen, which is toxic to the host tissue. The resulting destruction of the host tissue is exploited for photodynamic therapy of tumors. From the point of view of diagnosis, this approach has two drawbacks: a possible dark toxicity of the drug and the possibility of drug-induced photosensitization. There is therefore interest in developing tumor markers in which the triplet state is rapidly quenched so that photosensitization is avoided. The other approach, the one that has received more attention, does not use any exogenous tumor markers. Instead, it exploits for diagnosis the subtle changes in the parameters of fluorescence (spectra, yield, decay time, and depolarization) from native tissue as it transforms from the normal to the malignant state.

The fluorescence of native tissue originates from a number of endogenous fluorophores that are present in tissue. Table 13.2 lists the prominent fluorophores, along with their excitation and emission bands.

Table 13.2 Excitation and emission spectra of some endogenous tissue fluorophores

Extensive studies carried out on laser-induced fluorescence (LIF) from native tissues resected at surgery or biopsy from patients with cancer of different organs – uterus [52], breast [53–56], and oral cavity [57] – have shown a significant variation in the concentration of the fluorophores in the different tissue types. In particular, it was inferred from these studies that while the concentration of NADH (reduced nicotinamide adenine dinucleotide) should be higher in malignant breast tissues compared to benign tumor and normal breast tissues [53], the reverse should be the case for tissues from the oral cavity, where the NADH concentration was inferred to be higher in normal oral tissues [57]. The differences in fluorophore concentration inferred from these spectroscopic studies could qualitatively account for the observed differences in the yield and spectrum of autofluorescence from normal and diseased oral and breast tissues. Significant differences in the depolarization of fluorescence were also observed in malignant tissues compared to normal. Whereas for thin sections of breast tissue (thickness < optical transport length) the depolarization of fluorescence was observed to be smaller at malignant sites compared to normal (due to changes in the biochemical environment of the fluorophores), the reverse was observed for thicker tissue sections because of the larger scattering coefficient of malignant sites. Because the fluorescence from the superficial layer of tissue is the least depolarized and that originating from deeper layers becomes increasingly more depolarized, the depth dependence of depolarization could also be exploited to make depth-resolved measurements of the concentration of fluorophores in tissue phantoms as well as in tissues [59].

A photograph of the first system developed at RRCAT for the evaluation of the LIF technique for in vivo diagnosis of cancer is shown in Fig. 13.6a. The system comprised a sealed-off N2 laser (pulse duration 7 ns, pulse energy 80 μJ, pulse repetition rate 10 Hz), an optical fiber probe, and a gateable intensified CCD (ICCD) detector. The diagnostic probe was a bifurcated fiber bundle with a central fiber surrounded by an array of six fibers. The central fiber delivers the excitation light to the tissue surface, and the tissue fluorescence is collected by the six surrounding fibers.

Fig. 13.6
figure 6

(a) The first prototype nitrogen laser-based fluorescence spectroscopy system for cancer diagnosis; (b) a more compact version of the fluorescence spectroscopy system

An additional fiber was included in the diagnostic probe to monitor the energy of each nitrogen laser pulse via the luminescence of a phosphor coated on the tip of this fiber. The light coming from the distal ends of the six collection fibers and the reference fiber is imaged on the entrance slit of a spectrograph coupled to the ICCD. One such unit was used at the Government Cancer Hospital, Indore, for a detailed clinical evaluation of the technique after satisfactory results were obtained in a pilot study on 25 patients with histopathologically confirmed squamous cell carcinoma of the oral cavity [58]. The spectral database of in vivo autofluorescence spectra recorded from more than 150 patients enrolled at the outpatient department of the Government Cancer Hospital, Indore, for screening of neoplasm of the oral cavity and from ∼50 healthy volunteers was used to develop algorithms that could efficiently discriminate between the spectral features of malignant and nonmalignant tissue sites. Both linear and nonlinear statistical techniques have been investigated to explore their discrimination efficacy. The diagnostic algorithms developed to quantify the spectral differences in the nitrogen laser-excited fluorescence from malignant, benign tumor, and normal tissue sites provided good discrimination, with sensitivity and specificity toward cancer of ∼90 % in general and up to 100 % in favorable cases [60–62]. Multiclass diagnostic algorithms capable of simultaneously classifying spectral data into several different classes have also been developed using the theory of total principal component regression [63] and by making use of nonlinear maximum representation and discrimination feature (MRDF) extraction combined with sparse multinomial logistic regression (SMLR) for classification [64].
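The flavor of such spectral classification can be conveyed with a deliberately simple nearest-centroid sketch on synthetic spectra; this is only an illustration, not the PCA-based, MRDF, or SMLR algorithms actually used in the studies cited, and all band positions and noise levels below are assumed:

```python
import numpy as np

# Illustrative nearest-centroid classification of synthetic "spectra":
# two classes whose emission band is slightly shifted. Real diagnostic
# algorithms (refs 60-64) are far more sophisticated.

rng = np.random.default_rng(0)
wavelengths = np.linspace(400.0, 700.0, 64)  # nm, hypothetical grid

def make_spectra(peak_nm, n):
    """n noisy Gaussian-band spectra centred at peak_nm (hypothetical)."""
    band = np.exp(-0.5 * ((wavelengths - peak_nm) / 30.0) ** 2)
    return band + 0.05 * rng.standard_normal((n, wavelengths.size))

normal = make_spectra(500.0, 40)     # hypothetical "normal" band
malignant = make_spectra(530.0, 40)  # hypothetical shifted band

centroids = {"normal": normal.mean(axis=0),
             "malignant": malignant.mean(axis=0)}

def classify(spectrum):
    """Assign the class whose mean spectrum is nearest in Euclidean norm."""
    return min(centroids, key=lambda k: np.linalg.norm(spectrum - centroids[k]))

# Classify fresh "malignant" spectra and report the accuracy
test = make_spectra(530.0, 10)
accuracy = np.mean([classify(s) == "malignant" for s in test])
print(accuracy)
```

In practice, dimensionality reduction (e.g., principal components) precedes classification, and performance is validated against histopathology, as described in the text.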

Figure 13.6b shows a photograph of the more compact version of the fluorescence spectroscopy system, incorporating the nitrogen laser, a chip-based miniaturized Ocean Optics spectrograph, and a CMOS detector in a single box. The Raman spectroscopy-based system developed for in vivo screening of cancer of the oral cavity is shown in Fig. 13.7. The Raman setup was housed in a suitcase for ease of portability. The system incorporates a 785 nm diode laser and a fiber-optic Raman probe to excite and collect the Raman-scattered light.

Fig. 13.7
figure 7

A compact Raman spectroscopy system for cancer diagnosis. (a) A schematic of the system and (b) a photograph of the portable unit

A notch filter placed at the distal end of the probe was used to remove the excitation light, and the filtered Raman output was imaged onto a spectrograph equipped with a thermoelectrically cooled, back-illuminated, deep-depletion CCD camera. Good quality tissue Raman spectra could be acquired from oral cavity tissue with an integration time of less than 5 s. Although the use of near-infrared excitation leads to a significant reduction in the background fluorescence, extracting the weak Raman signal from the broad and much stronger background still remains a challenge. We have developed a method that makes use of iterative smoothing of the raw Raman spectrum to extract the Raman features. Compared to the widely used iterative modified polynomial fitting method, our method offers the advantage that the extracted Raman features are not sensitive to the spectral range over which the raw spectrum is fitted [65]. The central idea of this approach is to iteratively smooth the raw Raman spectrum, using a moving average of the spectral data, such that the Raman peaks are automatically eliminated, leaving only the baseline fluorescence to be subtracted from the raw spectrum. The scheme allows retrieval of all Raman peaks and shows good range independence.
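A minimal sketch of such an iterative-smoothing baseline removal on a synthetic spectrum is given below; the lower-envelope clipping step and all parameter values are our assumptions for illustration, not the published recipe of [65]:

```python
import numpy as np

def extract_raman(raw, window=11, iterations=100):
    """Iteratively smooth the spectrum with a moving average, keeping
    at each pass the lower envelope, so that sharp Raman peaks are
    progressively eliminated and only the broad fluorescence baseline
    remains; subtracting it recovers the Raman features.
    (A sketch of the idea only; parameters are assumptions.)"""
    baseline = np.asarray(raw, dtype=float).copy()
    kernel = np.ones(window) / window
    pad = window // 2
    for _ in range(iterations):
        padded = np.pad(baseline, pad, mode="edge")   # avoid edge roll-off
        smoothed = np.convolve(padded, kernel, mode="valid")
        baseline = np.minimum(baseline, smoothed)     # keep lower envelope
    return raw - baseline

# Synthetic test: a broad fluorescence hump plus one narrow Raman line
x = np.linspace(0.0, 1.0, 200)
fluorescence = np.exp(-0.5 * ((x - 0.5) / 0.4) ** 2)
raman_line = 0.5 * np.exp(-0.5 * ((x - 0.6) / 0.005) ** 2)
raw = fluorescence + raman_line

recovered = extract_raman(raw)
print(x[np.argmax(recovered)])  # close to 0.6, the Raman line position
```

Narrow peaks are flattened by the moving average on the first few passes, while the broad baseline is nearly invariant under smoothing, which is what makes the separation work.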

Both the Raman spectroscopy system and the compact version of the fluorescence spectroscopy system have been used at Tata Memorial Hospital (TMH), Mumbai, for screening of neoplasm of the oral cavity. The study involved 28 healthy volunteers and 199 patients undergoing routine medical examination of the oral cavity. The tissue sites investigated belonged to one of four histopathological categories: (1) squamous cell carcinoma (SCC), (2) oral submucosal fibrosis (OSMF), (3) leukoplakia (LP), or (4) normal. Probability-based multivariate statistical algorithms capable of direct multiclass classification were used to analyze the diagnostic content of the measured in vivo fluorescence and Raman spectra of oral tissues [66]. Of the 227 subjects involved in this study, both fluorescence and Raman spectral data were available from the tissue sites of 138 patients and 26 healthy volunteers. The results of a comparative analysis of the diagnostic performance of the two approaches using direct multiclass classification algorithms are shown in Table 13.3. Over this population, an overall classification accuracy of ∼76 % was achieved using the fluorescence spectra, whereas with the Raman data the overall classification accuracy was ∼91 %. For binary classification (normal vs. abnormal), the corresponding classification accuracies were 94 % and 98 %, respectively.

Table 13.3 Classification results for the use of fluorescence and Raman spectroscopy for in vivo diagnosis of cancer of oral cavity

The use of Raman spectroscopy for differential diagnosis over a database of 28 healthy volunteers and 171 patients enrolled for medical examination of lesions of the oral cavity at TMH yielded an accuracy of ∼86 % in classifying the oral tissue spectra into the four histopathological categories. For binary classification, a sensitivity of 94.2 % and a specificity of 94.4 % were achieved in discriminating the normal spectra from all the abnormal oral tissue spectra belonging to SCC, OSMF, and LP [67]. It may be noted that, because of the higher molecular specificity of Raman spectroscopy, the Raman spectra of the different anatomical sites of the oral cavity were found to exhibit significant differences, and based on the similarity of spectral patterns the normal oral tissue sites could be grouped into four major anatomical clusters: (1) outer lip and lip vermillion border, (2) buccal mucosa, (3) hard palate, and (4) dorsal, lateral, and ventral tongue and soft palate. When anatomy-matched data sets were used for classification, the overall classification accuracy improved to 95 %, with the algorithm correctly discriminating the corresponding tissue sites with 94 %, 99 %, and 91 % accuracy, respectively [68]. Another interesting observation made during this work was that if the spectra acquired from healthy volunteers with no clinical symptoms but with a history of tobacco consumption were removed from the "normal" database, a significant improvement in classification accuracy was observed for both fluorescence and Raman spectroscopy-based diagnosis.

A drawback of the nitrogen laser-based fluorescence spectroscopy system is the need for periodic maintenance: cleaning of the spark gap and refilling of the sealed-off nitrogen laser tube. Therefore, with the availability of high-power white and near-UV (365 nm) LEDs, an LED-based combined fluorescence and diffuse reflectance spectroscopic system has been developed (Fig. 13.8). The LED-based system is even more compact, cheaper, and more rugged than the nitrogen laser-based system. Further, the incorporation of diffuse reflectance measurements helps monitor the blood parameters of the tissue, which is expected to further improve the diagnostic efficacy.

Fig. 13.8
figure 8

A compact LED-based fluorescence spectroscopy system for cancer diagnosis

The present point spectroscopy-based systems are better suited for screening of areas already suspected to be abnormal by a doctor. Since qualified doctors may not be available in remote areas, a wide-area imaging system capable of delineating suspect areas has also been developed and is being integrated with the point spectroscopy setup, to make the system better suited for use by rural health workers. The wide-area imaging system will use the differences in the fluorescence intensities of certain bands between normal and abnormal tissue to demarcate abnormal areas, which can then be investigated further with the point spectroscopy system for screening of cancer of the oral cavity.

Apart from cancer diagnosis, Raman spectroscopy is being explored for several other diagnostic applications, such as the measurement of different analytes in whole blood (glucose, cholesterol, urea, albumin, triglycerides, hemoglobin, and bilirubin) [69]. Raman spectroscopy is also being used for monitoring adulteration of food products and the quality of drugs [70, 71].

13.4.2 Diagnostic Studies on Single Optically Trapped Cells

Lasers, because of their ability to be focused to a diffraction-limited spot, are also being used as tweezers to hold and manipulate individual microscopic objects, like a single cell or even intracellular objects [72]. Since light carries momentum, its absorption, scattering, or refraction by an object results in a transfer of momentum and thus a force on the object. While usually this force is in the direction of light propagation, it can be shown that for a tightly focused beam there also exists a gradient force in the direction of the spatial gradient of the light intensity. A simple ray optics description [73], valid when the dimensions of the object are much larger than the wavelength of the laser beam, can be used to explain the existence of the gradient force and its role in stable three-dimensional trapping of the object. Referring to Fig. 13.9, consider two light rays ("a" and "b") situated at equal radial distances from the beam axis. Due to the refraction of rays a and b by the sphere, assumed to have a refractive index higher than the surroundings, there will be forces Fa and Fb, respectively, on it. The net force, denoted F, tends to pull the sphere toward the focal point. When the sphere is at the focal point, there is no refraction and hence no force on it. It can be verified from Fig. 13.9 that whenever the sphere is positioned away from the focal point, the resultant force acts to pull it onto the beam focus (the equilibrium position).

Fig. 13.9
figure 9

Ray diagram explanation of the trapping of a dielectric spherical particle in a focused laser beam. F is the net gradient force on a particle with its geometric centre (a) below, (b) above, (c) left, and (d) right of the focus position of the trapping beam. (Adapted from Ashkin [73])

For stable trapping in all three dimensions, the axial gradient component of the force, which pulls the particle toward the focal region, must exceed the scattering component of the force pushing it away from that region. To achieve this, the trap beam needs to be focused to a diffraction-limited spot using a high numerical aperture (NA) objective lens. For the manipulation of biological objects, a laser at near-infrared wavelengths, where absorption in the object is minimal, is generally used so as to avoid photo-induced damage to the object.

The trapping efficiency (Q) of optical tweezers is usually described as the fraction of the trap beam's momentum transferred to the particle; Q = 1 corresponds to all of the beam's momentum being transferred. In conventional optical tweezers, the trapping efficiency, and hence the trapping force, in the lateral direction is usually an order of magnitude larger than in the axial direction. Typical lateral trapping efficiencies vary in the range of 0.001–0.5, depending on the difference in refractive index between the object and the surrounding medium. This leads to trapping forces in the range of a few piconewtons (pN) to hundreds of pN [74].
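The link between trapping efficiency and force can be made concrete with the standard relation F = QnP/c, where n is the refractive index of the medium and P the laser power at the focus; the numbers below are illustrative:

```python
# Order-of-magnitude trap force from the trapping efficiency,
# F = Q * n * P / c. The Q value, medium, and power below are
# illustrative, not measurements from the text.

C = 3.0e8  # speed of light, m/s

def trap_force_pN(Q, n_medium, power_W):
    """Trapping force in piconewtons for efficiency Q, medium index
    n_medium, and power power_W at the focus."""
    return Q * n_medium * power_W / C * 1e12  # N -> pN

# e.g. Q = 0.1 in water (n = 1.33) with 20 mW at the focus
print(f"{trap_force_pN(0.1, 1.33, 20e-3):.1f} pN")  # ~8.9 pN
```

Varying Q over the quoted 0.001–0.5 range at milliwatt powers reproduces the few-pN to hundreds-of-pN span mentioned in the text.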

Optical tweezers are finding widespread applications in biological research and technology [72, 75] because, unlike mechanical microtools, the optical trap is gentle and absolutely sterile and can be used to capture, move, and position single cells or subcellular particles without direct contact. Since optical tweezers can work as a precise pressure transducer in the piconewton (pN) to several hundred pN range [72, 75, 76], they have been used to apply mechanical forces on single optically trapped cells and thus to measure the viscoelastic parameters of cells and how these are altered under some disease conditions. In particular, there has been considerable work on the use of optical tweezers for measuring the viscoelastic parameters of red blood cells (RBCs), which get altered in certain disease conditions. While silica beads attached to the RBC membrane have been used as handles to stretch the RBC [77, 78], an RBC optically trapped in an aqueous buffer suspension can also be stretched by moving the stage, and thus the fluid, around the cell (Fig. 13.10). Measurements are often made on RBCs suspended in hypotonic buffer (osmolarity ∼150 mOsm/kg), since the higher salt concentration inside the cell then drives fluid into the cell, causing it to swell and become spherical, a shape that is easier to model.

Fig. 13.10
figure 10

Stretching of an optically trapped RBC by the use of viscous drag. RBC (marked by arrow) on a stationary stage (a); stretched RBC on moving the stage at 10 μm/s (b). Scale bar: 5 μm

In Fig. 13.10a we show a trapped normal RBC when the stage was stationary and in Fig. 13.10b when it was moved at ∼100 μm/s. By varying the speed of the stage, the viscous force on the cell was varied; the elongation along the direction of stretching and the compression of the cell in the orthogonal directions were measured, and from these the elastic properties were determined. These measurements showed a significant increase in the shear modulus for aged RBCs and for cells infected with Plasmodium falciparum in comparison to normal RBCs [79]. An interesting consequence of the difference in membrane rigidity of normal and infected RBCs is that in a hypertonic buffer medium (osmolarity >800 mOsm/kg), the shape of a normal RBC gets distorted into a peculiar asymmetric shape, while the shape of an infected RBC does not change because of the larger rigidity of its membrane. Therefore, while the asymmetrically shaped normal RBC rotates by itself when placed in a laser trap, at the same trap beam power RBCs harboring the malaria parasite, due to their larger membrane rigidity, do not rotate [80].
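The magnitude of the viscous force used for such stretching can be estimated from Stokes drag on a sphere, F = 6πηrv. The sketch below uses illustrative values (water-like viscosity, a ∼3 μm sphered RBC), not the calibration actually used in the study:

```python
import math

# Stokes drag on a sphere, F = 6*pi*eta*r*v: a rough estimate of the
# viscous stretching force on a trapped, sphered RBC when the stage
# (and hence the surrounding fluid) moves at speed v.
# All parameter values are illustrative assumptions.

def stokes_drag_pN(radius_m, speed_m_per_s, viscosity_Pa_s=1.0e-3):
    """Drag force in piconewtons; default viscosity is that of water."""
    return 6 * math.pi * viscosity_Pa_s * radius_m * speed_m_per_s * 1e12

# ~3 um radius sphered RBC, stage moving at 100 um/s
print(f"{stokes_drag_pN(3e-6, 100e-6):.1f} pN")
```

The result, a few pN, sits comfortably within the trapping-force range quoted earlier, which is why stage motion suffices to deform the cell without pulling it out of the trap.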

Optical tweezers can trap and immobilize a motile cell in suspension away from a substrate and thus help acquire even weak Raman signals with a good signal-to-noise ratio by allowing an increased time for spectral acquisition. Compared to the use of physical or chemical methods for immobilization of cells on a substrate, as practiced in micro-Raman spectroscopy, optical trapping, being noncontact, helps minimize substrate effects and also the effects of the immobilization method used [81, 82]. Setups facilitating acquisition of Raman spectra from an optically trapped cell, referred to as Raman optical tweezers, are being used to monitor changes induced in isolated single cells by a change in their environment, for example, monitoring the real-time heat denaturation of yeast cells [83]. Since the binding or dissociation of oxygen with heme leads to significant conformational changes of hemoglobin, Raman optical tweezers are particularly suited as a sensitive probe for monitoring the oxygen carrying capacity of RBCs under different physiological or disease conditions [84, 85].

Two- or three-dimensional trap arrays can be conveniently created using a holographic optical tweezers (HOT) setup, which makes use of a spatial light modulator (SLM) on which a computer-generated hologram is imprinted to phase modulate the wave front of the incident 1,064 nm laser beam. This results in a fan-out of beams with suitable angular separations which, when coupled to the microscope objective lens, create multiple traps at the focal plane. The dynamically reconfigurable trap arrays generated in two or three dimensions using HOT can be used to sort colloidal particles or cells of different size or composition by exploiting the difference in the optical forces experienced by these when moving through a periodic array of optical traps [86–88]. Multiple traps can also be used for controlled orientation or translation of the trapped cell with respect to a fixed excitation beam, facilitating Raman spectroscopic measurements from different areas of the cell with a spatial resolution of ∼1 μm.
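One common way to compute such holograms, the "gratings and lenses" approach, assigns each lateral trap position a blazed-grating phase ramp and superposes the ramps into a single phase-only mask. A minimal sketch (the SLM size and spatial frequencies are assumed, and this is one of several hologram-design methods, not necessarily the one used in the setups described):

```python
import numpy as np

# "Gratings and lenses" sketch for a holographic optical tweezers
# phase mask: each trap at a lateral offset in the focal plane
# corresponds to a tilted plane wave (a blazed grating on the SLM);
# superposing them and taking the argument gives a phase-only mask.

N = 256                       # SLM pixels per side (assumed)
coords = np.arange(N) - N / 2
X, Y = np.meshgrid(coords, coords)

def hologram(traps):
    """traps: list of (kx, ky) spatial frequencies, one per trap,
    each setting that trap's lateral position in the focal plane."""
    field = np.zeros((N, N), dtype=complex)
    for kx, ky in traps:
        field += np.exp(1j * (kx * X + ky * Y))  # one grating per trap
    return np.angle(field)    # phase in (-pi, pi] for the SLM

# Three traps at different lateral offsets (illustrative frequencies)
phase = hologram([(0.1, 0.0), (0.0, 0.2), (-0.1, -0.1)])
print(phase.shape)
```

Adding a quadratic (lens) phase term per trap extends the same superposition to axial displacement, which is how three-dimensional arrays are produced.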

A schematic of the first Raman tweezers setup developed at RRCAT is shown in Fig. 13.11. The setup used the same 785 nm laser beam from a Ti:sapphire laser for trapping as well as Raman excitation. One interesting study carried out using this setup was the Raman spectroscopy of optically trapped RBCs obtained from blood sample from malaria patients suffering from P. vivax infection (iRBCs) and healthy volunteers (hRBCs). As compared to hRBCs, significant changes were observed in the oxygenation/deoxygenation marker bands at ∼1,210, 1,223, 1,544, and 1,636 cm−1 in the spectra of a significant fraction (∼30 %) of iRBCs. The observed changes suggest a reduced oxygen affinity of iRBCs as compared to hRBCs [89].

Fig. 13.11
figure 11

A schematic of Raman optical tweezers setup. The solid and dashed lines indicate beam path for trapping laser beam from 785 nm Ti:sapphire laser and backscattered Raman signal from sample, respectively

Integration of Raman tweezers with holographic optical tweezers allows the trapped cell to be translated across a fixed Raman excitation beam to generate spatially resolved (resolution ∼1 μm) Raman spectra. Investigations made with this setup on the oxygenation status of optically trapped red blood cells show that the cellular site where the trap beam is localized is more deoxygenated than the rest of the cell and that the level of deoxygenation increases with trap beam power. Our studies have shown that this deoxygenation arises from the photodissociation of oxygen from hemoglobin at increased trapping power [90]. The use of the surface plasmon resonances of metallic nanoparticles to enhance the Raman spectra from optically trapped cells offers the possibility of selectively acquiring spectra from the cell membrane and may help in understanding the changes occurring in the membrane under some disease conditions.

13.5 Therapeutic Applications

Surgical and therapeutic applications of lasers make use of the energy deposited in the tissue by the absorption of laser light [91, 92]. The absorbed energy can broadly lead to three effects. The most common effect is a rise in tissue temperature (photothermal effect). At the high intensities associated with lasers operating at short pulse durations (nanoseconds to femtoseconds), absorption of laser radiation may lead to the generation of pressure waves or shock waves (photomechanical effects). Short-wavelength lasers can cause electronic excitation of chromophores in the tissue and thus initiate a photochemical reaction (photochemical effect). The relative role played by the three depends primarily on the laser wavelength, irradiance, and pulse duration.

The majority of the surgical applications of light exploit the biological effects arising from the rise in tissue temperature following the absorption of light. The biological effect depends on the level of the rise in tissue temperature, which is determined by two factors: first, the depth of penetration of the laser beam, which determines the volume of tissue in which a given energy is deposited, and, second, the time in which the energy is deposited vis-à-vis the thermal relaxation time (the inverse of which determines the rate of flow of heat from the heated tissue to the surrounding cold tissue). A small rise in temperature (5–10 °C) can influence vessel permeability and blood flow. Tissues heated to a temperature of 45–80 °C may get denatured as a result of breakage of the van der Waals bonds that stabilize the conformation of proteins and other macromolecules. Thermal denaturation is exploited for therapy in several ways. For example, hemostasis occurs because of the increased blood viscosity caused by denaturation of plasma proteins, hemoglobin, and perivascular tissue. When the temperature exceeds 100 °C, water, the main constituent of tissue, boils. Because of the large latent heat of water, energy added to tissue at 100 °C converts water from liquid to steam without a further increase in temperature. A volume expansion of ∼1,670-fold occurs when water is vaporized isobarically. When this large and rapid expansion occurs within tissue, physical separation or "cutting" occurs. Tissue surrounding the region being vaporized is also heated, resulting in coagulation of the tissue at the wound edges and thus preventing blood loss. If the rate of deposition of energy is faster than that required for the boiling of water, the tissue is superheated and can be thermally ablated. Thermal ablation, or explosive boiling, is similar to what happens when cold water is sprinkled on a very hot iron.
In ablation, practically all the energy deposited in the tissue is converted into the kinetic energy of the ablation products leading to minimal thermal damage to the adjoining tissues. It is pertinent to emphasize that by exploiting the wavelength dependence of the absorption by different tissue constituents, it is possible to selectively deposit energy in a target site. Further, by use of laser pulses of duration shorter than the thermal relaxation time, heat can be confined within the target tissue so that it can be vaporized without significant effect on surrounding tissue. Such selective photothermolysis has been exploited for several therapeutic applications, such as laser treatment of port-wine stains. Another approach that is receiving attention for controlled localized heating involves the use of near-infrared light tuned to surface plasmon resonance of metallic nanoparticles. Such heating of metallic nanoparticles that have been selectively deposited in target cells can be used for applications such as hyperthermia for cancer treatment.
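The energy scale of the vaporization process described above can be estimated by treating tissue as water: heat it from body temperature to 100 °C, then supply the latent heat. The sketch below uses textbook values for water; the target volume is an assumption for illustration:

```python
# Energy needed to vaporize a small volume of tissue water: sensible
# heat from 37 C to 100 C plus the latent heat of vaporization.
# Tissue is approximated as pure water; the volume is illustrative.

RHO_WATER = 1000.0   # kg/m^3
C_WATER = 4186.0     # specific heat, J/(kg K)
L_VAP = 2.26e6       # latent heat of vaporization, J/kg

def vaporization_energy_J(volume_m3, t_start_C=37.0):
    """Energy to bring the water in volume_m3 to 100 C and vaporize it."""
    mass = RHO_WATER * volume_m3
    return mass * (C_WATER * (100.0 - t_start_C) + L_VAP)

# 1 mm^3 of tissue water
print(f"{vaporization_energy_J(1e-9):.2f} J")  # ~2.5 J
```

Note that ∼90 % of this budget is the latent heat, which is why the large latent heat of water dominates the energetics of laser cutting.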

At the high intensities typical of short-duration (10⁻⁹–10⁻¹⁴ s) laser pulses, localized absorption of laser radiation can lead to very large temperature gradients, which in turn produce enormous pressure waves causing localized photomechanical disruption. Such disruption is useful, for example, in the laser removal of tattoo marks. Tattoo ink has pigment particles too large for the body's immune system to eliminate. Photodisruption of these into smaller particles enables the body's lymphatic system to dispose of them, resulting in removal of the tattoo mark. At high intensities, the electric field strength of the radiation is also very large (about 3 × 10⁷ V/cm at an intensity of 10¹² W/cm²) and can cause dielectric breakdown in the tissue. The resulting plasma absorbs energy and expands, creating shock waves that can shear off tissue. Since plasma generation can occur not only in pigmented tissue but also in transparent tissue, plasma-mediated absorption and disruption are applicable to all tissues. Plasma-mediated shock waves are used for breaking stones in the kidney or urethra (lithotripsy) and in posterior capsulotomy for the removal of an opacified posterior capsule of the eye lens. Localized deposition of energy by an intense focused laser pulse can also lead to cavitation. The shock waves generated by the collapse of the low-pressure cavitation bubble are also used for photomechanical disruption.
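The field strength quoted above follows from the standard relation between intensity and peak electric field, E = sqrt(2I/ε₀c); a quick numerical check:

```python
import math

# Peak electric field of a light wave of intensity I:
# E = sqrt(2 I / (eps0 * c)). At 10^12 W/cm^2 this reproduces the
# ~3e7 V/cm figure quoted in the text.

EPS0 = 8.854e-12  # vacuum permittivity, F/m
C = 3.0e8         # speed of light, m/s

def peak_field_V_per_cm(intensity_W_per_cm2):
    intensity_si = intensity_W_per_cm2 * 1e4         # W/cm^2 -> W/m^2
    e_si = math.sqrt(2 * intensity_si / (EPS0 * C))  # V/m
    return e_si / 100.0                              # V/m -> V/cm

print(f"{peak_field_V_per_cm(1e12):.2e} V/cm")  # ~2.7e7 V/cm
```

Fields of this magnitude are comparable to the internal fields binding valence electrons, which is why dielectric breakdown occurs even in transparent tissue.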

The photothermal and photomechanical effects depend on the intensity of irradiation and will not be significant if the rate of deposition of energy is so low that there is no significant rise in temperature of the tissue. In such a situation, only photochemical effects can take place provided the energy of the laser photon is adequate to cause electronic excitation of biomolecules, which can be either endogenous or externally injected. The photoexcitation of molecules and the resulting biochemical reactions can lead to either bioactivation, exploited in various phototherapies [93, 94], or generation of some free radicals or toxins, which are harmful for the host tissue. The latter process is used for photodynamic therapy (PDT) of cancer [92, 95]. PDT involves the administration of a photosensitizing agent which over a period of time (typically 48–72 h) is excreted by normal tissue and preferentially retained by tumor. The photosensitizer when excited with light of the appropriate wavelength leads to the generation of singlet oxygen or other reactive oxygen species (ROS) (like superoxides, hydroxyl radicals, hydrogen peroxides) which are toxic to the host tumor tissue, thereby leading to tumor destruction [92]. Because of the preferential localization of the photosensitizer in tumor and the fact that the generation of toxins occurs only in the region exposed to light, photodynamic therapy provides much better selectivity compared to the more established treatment modalities, such as radiation therapy and chemotherapy.

An ideal photosensitizer for PDT should simultaneously satisfy several criteria: suitable photophysical/photochemical characteristics resulting in selective and large uptake in the tumor, a large quantum yield for ROS generation, low dark toxicity, minimal phototransformation under irradiation, and strong absorption in the 650–900 nm spectral region where tissues are relatively transparent. Further, the excited state of the photosensitizer should have sufficient energy to excite the molecular oxygen present in the tissue from its triplet ground state to the singlet state. Since it is difficult to find photosensitizers that satisfy all these criteria well, the quest for better photosensitizers continues. At RRCAT we have focused our attention on the use of chlorophyll derivatives as PDT agents because of their strong absorbance peak in the red region and their economical synthesis. Chlorin p6 (Cp6), one of the water-soluble derivatives of chlorophyll, which has shown good tumor selectivity and localization, has been used for treating tumors induced in hamster cheek pouch animal models by the application of a carcinogen (7,12-dimethylbenz(a)anthracene). Studies showed that for small tumors (size <5 mm), complete tumor necrosis was achieved following PDT at 4 h after intraperitoneal injection of Cp6. The treated tumor became edematous at 24 h after PDT, and a reduction in tumor size was observed over the next 48 h. In the animal kept for follow-up a week after PDT, the tumor regressed completely and only scar tissue was observed (Fig. 13.12). However, for bigger tumors the accumulation of Cp6 was inadequate, which compromised the effectiveness of PDT [96]. To address this issue, a chlorin p6-histamine conjugate was prepared, and enhanced histamine receptor-mediated cellular uptake in oral cancer cell lines was confirmed [97]. With the use of the chlorin p6-histamine conjugate, tumors with volumes of up to ∼1,000 mm3 have been successfully treated [98].
Studies are also being carried out on the interaction of potential photosensitizers with nanoparticles to evaluate and comprehend the photodynamic effects of the photosensitizer-nanoparticle complex since the use of nanoparticles can provide a valuable approach for targeted delivery of drugs [99, 100].

Fig. 13.12
figure 12

Photodynamic treatment of a tumor in an animal model: (a) before treatment, (b) one week after treatment

The use of PDT for antimicrobial applications [101] and for the management of wounds infected with antibiotic-resistant bacteria [102] is also being investigated at RRCAT with some promising results. The advantage of PDT over conventional antimicrobials is that the treatment is localized to light-irradiated regions of the drug-treated area, thereby avoiding adverse systemic effects. Further, the reactive oxygen species generated in PDT react with almost every cellular component, and therefore it is highly unlikely that bacteria can develop resistance to PDT [103].

13.6 Summary

The remarkable properties of lasers as a light source and the significant advancement in the photonic and information processing technology have led to an upsurge of interest in the utilization of optical techniques for noninvasive, near-real-time biomedical imaging and diagnosis and also for therapeutic modalities providing higher precision and selectivity than offered by the current frontline modalities. Rapid advancements being made in the ability to structure materials at nanoscale and tailor their optical properties to suit specific applications are expected to further enhance the efficacy and the range of these applications. Therefore, the use of optical techniques in advancing the quality of health care is expected to grow even more rapidly in the coming decades.