
1 Measurement of Water Currents

πάντα ῥεῖ...

Everything flows...

Heraclitus

\(\sim \)540–475 B.C.

The current or velocity field in a water body is a quantity that can be observed relatively easily. The identification of its structure and variations and the reasons for its formation are central questions for physical limnology, as well as for oceanography, hydrology and hydraulics. Thus, a wide variety of methods, tools and instruments have been and are still being developed to investigate the structure of the current field, and to measure the speed and direction of the current, the velocity fluctuations and other dynamical characteristics.

The variability of the speed of the natural currentFootnote 1 in lakes is not very broad. Horizontal wind-driven surface currents in large lakes may reach 1–2 \({\text {m~s}}^{-1}\), and they decrease rapidly with depth; a typical scale is a few \({\text {cm~s}}^{-1}\). Velocities of up/downwellings are typically in the range of 0.1–10 mm s\(^{-1}\), and horizontal bottom velocities in the hypolimnion are seldom larger than a few mm s\(^{-1}\). However, the current field is a most variable parameter, reflecting almost all the processes taking place in water basins. Wind action, inhomogeneity of the solar heating, and internal or surface waves cause complex current patterns that are variable both in space and time. In lakes, small current speeds occur frequently, particularly in the hypolimnion and the benthic boundary layer.Footnote 2 Continuous recordings of small velocities are therefore important if processes are to be properly understood, but it is exactly these which are difficult to measure.

Various instruments constructed for current-velocity measurements can roughly be divided into two general types: (i) current meters for more or less long-lasting measurement of the speed and direction of the water at a certain position or from a boat, and (ii) various drifting bodies, floats and tracers, which move together with the water and provide information about the water motion through space and time, representing the current velocity averaged along some part of the trajectory during the course of the motion.

1.1 Current Meters

Autonomous current meters are the most commonly used instruments in field practice. They are usually placed at anchored buoy stations (at the surface or submerged), called moorings (see Fig. 29.1 for details); the measuring method is therefore Eulerian. They are constructed so that registration of the water motion over long intervals (days to months) is possible, by saving the collected information on some internal storage medium or transmitting it by cable or radio. Often, in addition to sensors of current speed and direction, modern instruments also have sensors and measuring channels for temperature, electrical conductivity, speed of sound and hydrostatic pressure. Sometimes, for accurate measurements with instruments tied to long chains, the deviation of an instrument from its vertical alignment is also measured by recording its inclinations in two perpendicular vertical planes. Depending on the construction and principle of operation of the sensing device, current meters are usually subdivided into (i) instruments with mechanical converters and (ii) devices which use an interaction of the flow with acoustic, magnetic, light or other fields.

In the simplest case of horizontal current-velocity recordings, two parameters must be measured: the current speed and its direction (an angle \(\phi \) between the plane of the meridian and the direction of the current vector). Traditionally, \(\phi =0\) corresponds to a current towards North, \(90^\circ \)—towards East, \(180^\circ \)—towards South, \(270^\circ \)—towards West, i.e., positive angles are measured in the clockwise direction from geographic North (see Fig. 28.1). Note that, by tradition, the directions of wind and water currents are denoted in different ways: oceanographers say that ‘the current goes into, whilst the wind blows from, the compass’. This way, \(\phi =0^{\circ }\) corresponds to a current towards North but a wind from North, \(\phi =90^\circ \)—a current towards East but a wind from East etc., always counting degrees in a clockwise direction from North.
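To make the convention concrete, here is a minimal sketch (in Python, with illustrative sample values) that converts a reported speed and compass direction into eastward and northward velocity components, treating currents in the ‘goes to’ convention and winds in the ‘blows from’ convention:

```python
import math

def current_to_uv(speed, direction_to_deg):
    """Velocity components from current speed and the compass direction
    the current flows TOWARDS (degrees clockwise from geographic North).
    Returns (u, v) = (eastward, northward) components."""
    phi = math.radians(direction_to_deg)
    return speed * math.sin(phi), speed * math.cos(phi)

def wind_to_uv(speed, direction_from_deg):
    """Wind vector components in the meteorological convention:
    the direction is the compass point the wind blows FROM."""
    # A wind 'from North' (0 deg) moves air towards the South.
    return current_to_uv(speed, direction_from_deg + 180.0)

# A 10 cm/s current towards East vs. a 10 m/s wind from East:
print(current_to_uv(0.10, 90.0))   # ( 0.10, ~0.0): water moving eastward
print(wind_to_uv(10.0, 90.0))      # (-10.0, ~0.0): air moving westward
```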

There are two common schemes for horizontal current measurements: (i) recording the two horizontal components of the velocity vector with a local movable (rotary) device that has a fixed co-ordinate system, together with the angle of orientation of the device relative to the geographical axes, and (ii) recording the velocity modulus and the sine or cosine of the angle between the current direction and the plane of the meridian. Measuring the speed and direction generally requires different techniques, which also exhibit distinct error properties.Footnote 3 This implies that the analysis of the current itself needs some caution, because it is a composition of the two, as the sketch below illustrates.
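This caution can be made quantitative with a simple Monte-Carlo sketch. The Gaussian, mutually independent errors assumed below are illustrative, not the specification of any particular instrument:

```python
import math, random

def stats(xs):
    m = sum(xs) / len(xs)
    return m, (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

def propagate(speed, dir_deg, sigma_speed, sigma_dir_deg, n=20000):
    """Propagate independent speed and direction errors (scheme (ii))
    into the eastward/northward components (u, v)."""
    us, vs = [], []
    for _ in range(n):
        s = random.gauss(speed, sigma_speed)
        d = math.radians(random.gauss(dir_deg, sigma_dir_deg))
        us.append(s * math.sin(d))   # eastward
        vs.append(s * math.cos(d))   # northward
    return stats(us), stats(vs)

# Hypothetical reading: 20 cm/s towards NE, +-1 cm/s speed, +-5 deg direction
(u, du), (v, dv) = propagate(0.20, 45.0, 0.01, 5.0)
print(f"u = {u:.3f} +- {du:.3f} m/s,  v = {v:.3f} +- {dv:.3f} m/s")
```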

Fig. 28.1

Traditionally, directions of wind and water currents are denoted in different ways: looking at the compass, one records the direction to which the current is moving, but from which the wind is blowing. So, \(\phi =0\) corresponds to a current towards North but a wind from North, \(90^\circ \)—current towards East but a wind from East etc., always calculating degrees in a clockwise direction from geographic North

Let us consider the main principles of operation of modern current meters: mechanical, acoustic and electro-magnetic instruments.

Main principles of operation of current meters

Mechanical current meters were and still are the traditional and most widely used velocity instruments, employing impellers, rotors or screws. Their angular velocity is correlated with the strength of the flow and can be recorded (i) instantaneously at a certain time interval or (ii) averaged over the time of measurement. The direction of the flow is determined, as a rule, by recording the deviation of the flow direction from magnetic North. Often the sampling method for the speed is different from that for the direction.

Mechanical primary converters,Footnote 4 i.e. rotors, propellers, impellers, have been known for centuries. An example of a typical propeller is given in Fig. 28.2a. It is a device which uses (two, three or more) profiled blades to convert the forward movement of water into rotation about an axis. The Savonius rotor (Fig. 28.2b) was originally developed in 1922 and patented in 1925 by the Finnish ship officer Sigurd J. Savonius for power generation. It has been extensively used as a sensor in various current meters. Its advantages are that it is rugged, omni-directional and linear in steady flow, but its response to time-varying flow and its susceptibility to contamination by vertical flows make it unsuitable for measurements near the surface, where wave action creates both time-varying and vertical flow fields. The impeller is a type of propeller which produces suction when rotated; one can see it as the part of a pump that moves the water. One of the first documented examples (see Fig. 28.3) is the Archimedes screw, whose original principle of operation is still in use in some of today’s current meters.

Fig. 28.2

Examples of a typical five-blade propeller (a) and of the Savonius rotor (b), which works like a propeller in reverse, spinning when placed in water that moves perpendicular to its axis

Fig. 28.3

Detail of an antique engraved portrait of Archimedes and his screw

The famous ancient Greek mathematician and inventor Archimedes was born ca. 290–280 BC in Syracuse (Sicily). The most commonly told story about Archimedes relates how he invented a method for measuring the volume of an object of irregular shape. According to Vitruvius, a new crown in the shape of a laurel wreath had been made for King Hieron II, and Archimedes was asked to determine whether it was of solid gold, or whether silver had been added by a dishonest goldsmith. Archimedes had to solve the problem without damaging the crown, so he could not melt it down in order to measure its density as a cube, which would have been the simplest solution. While taking a bath, he noticed that the level of the water rose as he got in. He realized that this effect could be used to determine the volume of the crown, and therefore, after weighing it, its density. The density of the crown would be lower if cheaper and less dense metals had been added. He then went out into the streets naked, so excited by his discovery that he had forgotten to dress, exclaiming ‘Eureka!’—‘I have found it!’. The story about the golden crown does not appear in the known works of Archimedes, but in his treatise On Floating Bodies he gives the principle known in hydrostatics as Archimedes’ Principle. This states that a body immersed in a fluid experiences a buoyant force equal to the weight of the displaced fluid.

A large part of Archimedes’ work in engineering arose from fulfilling the needs of his home city of Syracuse. The Greek writer Athenaeus of Naucratis described how King Hieron II commissioned Archimedes to design a huge ship, the Syracusia, which could be used for luxury travel, carrying supplies, and as a naval warship. The Syracusia is said to have been the largest ship built in classical antiquity. According to Athenaeus, it was capable of carrying 600 people and included garden decorations, a gymnasium and a temple dedicated to the goddess Aphrodite among its facilities. Since a ship of this size would leak a considerable amount of water through the hull, the Archimedes screw was purportedly developed in order to remove the bilge water. It was turned by hand, and could also be used to transfer water from a low-lying body of water into irrigation canals. The Archimedes screw described in Roman times by Vitruvius may have been an improvement on a screw pump that was used to irrigate the Hanging Gardens of Babylon.

Text: http://en.wikipedia.org/; http://www.brown.edu/

Mechanical converters of different types may behave differently under the particular conditions of an experiment. To see this, imagine an instrument exposed to a current which flows from West to East, then pauses, then flows from East to West with the same strength as before, so that there is no net flow (the mean current is zero). How will the propeller-type converter, the Savonius rotor, the paddle-wheel rotor and the Archimedes screw respond to such a current (see Fig. 28.4)? It is assumed that the orientation of the speed sensors does not change in space during the experiment.

Fig. 28.4

Different mechanical primary converters behave differently when subjected to alternating currents of equal magnitude. Left to right: propeller (a), Savonius rotor (b), paddle-wheel (c) and Archimedes screw (d) (adapted with changes from http://www.es.flinders.edu.au)

The propeller rotor (Fig. 28.4a) turns as the current flows from West to East, stops, and turns backward as the current flows from East to West. The backward turn, however, is not the same as the forward turn, because the propeller’s response to flow from its back differs from its response to flow from the front. Unless the propeller is specifically designed to have a cosine response to the flow, a propeller-type current meter thus only gives a reliable reading if the propeller always points into the current direction and its orientation is recorded along with the measured current speed.

The Savonius rotor (Fig. 28.4b) turns anti-clockwise, stops, and continues to turn anti-clockwise, effectively integrating over the current speed. It therefore gives a large apparent current speed, and it does this independently of its orientation in space. The Savonius rotor thus gives a reliable reading only in situations where there is no high-frequency alternating flow (such as is produced, for example, close to the lake surface by wind waves, or near boundaries). Its reading has to be supplemented by independent information on the current direction.

The paddle-wheel rotor (Fig. 28.4c) does not rotate at all unless one side of it is sheltered from the current. If it is sheltered, the paddle-wheel turns anti-clockwise first, then stops, then turns clockwise, and the clockwise rotation cancels the anti-clockwise rotation exactly. A paddle-wheel rotor thus produces a reliable reading in this situation. Its reading, too, has to be supplemented by independent information on the current direction.

The response of the Archimedes screw (Fig. 28.4d) combines features of the other converters: it moves forward and backward symmetrically, does not properly measure high-frequency alternating flow, and requires an independent measurement of the current direction.
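The thought experiment can also be imitated numerically. In the toy model below each converter is reduced to an idealized response to the signed along-axis flow (a deliberate simplification of the descriptions above; the 20 % backward-response deficit of the ‘real’ propeller is an assumed number):

```python
# Flow +U, a pause, then -U, so the true mean current is zero.
U = 0.10                                    # flow magnitude, m/s
samples = [U] * 500 + [0.0] * 100 + [-U] * 500

responses = {
    "ideal (cosine) propeller": lambda u: u,              # signed, symmetric
    "real propeller (asym.)":   lambda u: u if u >= 0 else 0.8 * u,
    "Savonius rotor":           lambda u: abs(u),         # rectifies the flow
    "paddle wheel (sheltered)": lambda u: u,              # rotations cancel
    "Archimedes screw":         lambda u: u,              # symmetric fwd/back
}
for name, resp in responses.items():
    mean = sum(resp(u) for u in samples) / len(samples)
    print(f"{name:>26s}: apparent mean = {mean:+.3f} m/s")
# Only the signed, symmetric converters recover the zero mean: the Savonius
# rotor reports ~0.09 m/s and the asymmetric propeller ~+0.01 m/s.
```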

Fig. 28.5

Examples of routinely used current meters: a DISC-1, Ukrainian Marine Hydro-physical Institute, 1980, with 4-vane impeller; b Geodyne-850 (Geodyne Division, USA, 1963) and Vector Averaging Current Meter (VACM, Woods Hole, USA, 1975) are similar in construction and use a Savonius rotor; c RCM-4 (Aanderaa, Norway) with rotary sensor and steering plate for orienting the device along the flow direction; d horizontally oriented MO 21 (Plessey, UK) with impeller, hanging freely on the track cable; e Model-135 of Inter Ocean System (USA) with Savonius rotor; f ACIT (AANII, USSR, 1977) with two impellers at \(90^\circ \) to one another; g CT-3 of Sea-Link System, USA, with electro-magnetic principle of operation; h CROUSET (France), a Doppler acoustic current meter with two gauge lengths at \(90^\circ \) to one another; i acoustic two-component current meter ACM-1 (N. Brown, USA); j POTOK (Institute of Oceanology, USSR) with impeller and steering plate. Adapted from Fomin et al. (1989) [18], with changes. \(\copyright \) Nauka Publishing House, reproduced with permission

Thus, important conclusions emerge from these simple considerations. First, when choosing the device for a particular experiment, one should anticipate the probable field conditions—the current magnitude and regime. Second, one must always take the construction of the current meter into account when analyzing the obtained data. This is why information about the various technical details of a measurement is at least as important as the measurement itself; in other words, measured data are practically useless without information on how they were obtained (Fig. 28.5).

For mechanical current meters, the range of measurable velocities is 0.02–4 \({\text {m}}\,{\text {s}}^{-1}\); their accuracy is usually \(\pm \,3\!\!-\!\!10\) % for the speed and \(\pm 5\,\%\) for the current direction. The lower value of approximately \(0.02\,{\text {m}}\,{\text {s}}^{-1}\) defines the threshold at which the movable parts are set in motion. This threshold value can vary considerably if instruments are exposed for longer periods, over which algae can grow on the sensing parts. Mechanical converters have quite restricted operating characteristics: a high signal-detection threshold, a large time lag, different rates of acceleration and deceleration in time-dependent flows, bio-fouling under long exposure etc. On the other hand, these simple and reliable constructions can be used successfully in flowmeters—instruments for flow measurements in rivers, estuaries, canals, sewage outfalls, and offshore applications. They usually measure water speed, volume, or discharge. An example, the Model 2030 Mechanical Flowmeter produced by General Oceanics,Footnote 5 Canada, is given in Fig. 28.6.

Fig. 28.6

Model 2030 series mechanical flowmeter of General Oceanics, Canada. It has a precision-molded rotor coupled directly to a six-digit counter which registers each revolution of the rotor. The counter is located within the body of the instrument and is displayed through the clear plastic housing. Threshold: approximately \(6\, {\text {cm s}}^{-1}\); range: approximately \(10\, {\text {cm s}}^{-1}\) to \(7.9\,\, {\text {m}}\,{\text {s}}^{-1}\) (photo from http://www.i-ocean.com/gometers.htm)

Fig. 28.7

Current speed converters based on acoustic principles of operation: a pulse acoustic current meter; b Doppler current meter: 1—water flow, 2—sound emitting device, 3—acoustic receiver, 4—volume of fluid dispersing and reflecting a signal

Acoustic current meters. These instruments record phase shift, frequency, travel time variation or amplitude modulation of acoustic signals propagating in moving water and correlate these to the water motion. Two or three components of the current vector are usually measured. Let us describe them in greater detail.

Pulse acoustic current meters. Two sound-speed gauges with opposite directions of signal propagation are used as the sensing device (see Fig. 28.7a). The device is oriented so that the direction of the acoustic pulse transmission is parallel to the direction of the flow under measurement. One gauge measures the sum of the velocities of sound and flow, the other the difference between them. The difference of the reciprocal travel times of the acoustic pulses is then proportional to the speed of the current, irrespective of variations of the sound speed. The current meter GY-3, constructed by the Chr. Michelsen Institute (CMI, Norway), for example, uses this principle and measures the current speed in a range from \(0\, {\text {to}} \, 2.5\, {\text {m s}}^{-1}\) with an error of \(\pm 1\,\%\).
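A back-of-the-envelope sketch of this travel-time principle (path length and sound speed are assumed values, not the GY-3’s actual geometry); forming the difference of the reciprocal travel times eliminates the sound speed exactly:

```python
def travel_time_velocity(t_with, t_against, path_length):
    """Flow speed from the two transit times of a pulse acoustic gauge:
    t_with    = L / (c + v)  (pulse travelling WITH the flow),
    t_against = L / (c - v)  (pulse travelling AGAINST the flow),
    so that v = (L / 2) * (1/t_with - 1/t_against), independent of c."""
    return 0.5 * path_length * (1.0 / t_with - 1.0 / t_against)

# Assumed numbers: L = 0.2 m gauge length, c = 1450 m/s, v = 0.5 m/s
L, c, v = 0.2, 1450.0, 0.5
print(travel_time_velocity(L / (c + v), L / (c - v), L))   # -> 0.5
```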

Acoustic Doppler current meters Footnote 6 (ADCP, ADP).Footnote 7 There are acoustic current meters which radiate a continuous signal. The principle of their operation is based on the measurement of the phase shift between a base sinusoidal signal and a signal that has passed through the volume of water under investigation (see Fig. 28.7b) and exhibits a phase shift due to the Doppler effect. The current meters ACM-1 and ACM-2 of N. Brown (see Fig. 28.5i) measure two velocity-vector components using this principle.

One construction principle of a Doppler acoustic current meter is based on the property of water to disperse and partially reflect an ultrasonic signal due to its inhomogeneity and impurity inclusions. An ultrasonic signal that is reflected by a moving water particle has a frequency which differs from that transmitted (see Fig. 28.5h—the CROUSET acoustic current meter—and Fig. 28.8, the UCM-60 current meter, as examples).

Fig. 28.8

The UCM-60 3 Axis Acoustic Current Meter of General Oceanics, Canada, is a three dimensional, high performance ultrasonic current meter. Acoustic travel time difference is used for current measurements. The current velocity sensors have a resolution of 1–2 \({\text {mm s}}^{-1}\) at the \(6\,{\text {m s}}^{-1}\) range (photo from http://www.i-ocean.com/gometers.htm)

The Doppler sensing device (Fig. 28.7b) includes a sonic beam transmitter 2 and a receiver 3. Since the particles reflecting the signal move together with the flow of the water, the reflected signal carries information about the flow speed. The use of high frequencies makes it possible to enhance the resolution of the device, to form a narrow beam and to use small transmitters; 2–10 MHz is the optimal frequency range for Doppler current meters. Scattering centers comparable in size to, or larger than, the wavelength (\(\lambda = 0.15\,{\text {mm}}\) at these frequencies) are abundant in natural basins: plankton, suspended particles and air bubbles serve as scattering centers.
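A minimal sketch of the Doppler relation for backscatter (the factor 2 accounts for the two-way path); the carrier frequency and frequency shift below are assumed values for illustration:

```python
def doppler_radial_velocity(f_emitted, f_shift, sound_speed=1450.0):
    """Radial water velocity from the backscatter Doppler shift: sound
    scattered back from particles moving with the water returns with a
    frequency shift f_shift ~ 2 * f_emitted * v / c, hence
    v = f_shift * c / (2 * f_emitted)."""
    return f_shift * sound_speed / (2.0 * f_emitted)

f0 = 2.0e6                                        # 2 MHz carrier (assumed)
print(1450.0 / f0 * 1e3, "mm wavelength")         # ~0.7 mm at 2 MHz
print(doppler_radial_velocity(f0, 276.0), "m/s")  # ~0.1 m/s for a 276 Hz shift
```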

The volume of water being sounded is usually more than 30–60 cm away from the transmitter, and this is one of the main advantages of Doppler current meters: the measuring volume is far enough from the device that the instrument is unlikely to distort the flow in the medium. However, Doppler current meters operate poorly in pure water, in which the signal is only weakly reflected and thus comparable to noise. Their high resolution makes Doppler current meters especially useful for turbulence measurements.

As mentioned above, two-component measuring systems use two pairs of transmit–receive cells, usually placed perpendicular to one another. It is still a challenging technical problem for present-day current meters to measure all three components of the velocity vector. In addition, handling ADCP data is rather difficult, needs specific computer tools and requires substantial experience for successful use.

In summary, the acoustic converters described above measure current speeds from 0.01 to \(5\,{\text {m s}}^{-1}\) with an accuracy of \(\pm 1\,\%\) for the speed and \(\pm 2\,\%\) for the current direction. The reaction time is in the milliseconds, and the nonlinearity is small (about 1 %) over the entire range of applicability. The measurement technique is almost inertia-free, so the instruments can be used for measurement of both mean currents and velocity fluctuations. However, measurements near boundaries such as the water surface or the bottom are distorted by the reflected signal, which is the main shortcoming of ADCPs.

Fig. 28.9

Von Kármán vortex street behind a cylinder in uniform flow from the left (http://www.icfd.co.jp/menu1/thermalconvection/therm.html)

Vortex current meter. The principle of operation of a vortex current meter is based on the measurement of the pulsed vortex oscillations that are generated by a cylinder placed in a flow of water. The flow past the cylinder generates a von Kármán vortex street (see Fig. 28.9), in which eddies are shed alternately from the left and the right side. The mechanism works as follows. As a fluid particle flows toward the leading edge of the cylinder, the pressure in the fluid particle rises from the stream pressure to the stagnation pressure. The high fluid pressure near the leading edge impels the flow about the cylinder as boundary layers develop on both sides. At high Reynolds numbers, however, the high pressure is not sufficient to force the flow around the back of the cylinder. Near the widest section of the cylinder, the boundary layers separate from each side of the cylinder surface and form two shear layers that trail aft in the flow and bound the wake. Since the innermost portion of a shear layer, which is in contact with the cylinder, moves much more slowly than its outermost portion, which is in contact with the free flow, the shear layers roll up into the near wake, where they fold on each other and coalesce into discrete swirling vortices. A regular pattern of vortices, called a vortex street, trails aft in the wake, as shown in Fig. 28.9. The vortices interact with the cylinder and are the source of the effect called vortex-induced vibration. The shedding frequency is proportional to the speed of the flow and depends on the diameter of the cylinder. Since an amplitude modulation of an acoustic signal is used for the measurement, this kind of current meter is also counted among the acoustic tools.
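This proportionality is usually written via the Strouhal number, \(f = St\,U/d\), with \(St \approx 0.2\) for a circular cylinder over a wide range of Reynolds numbers (a standard textbook value, assumed here rather than taken from this section). Inverting for the flow speed:

```python
def flow_speed_from_shedding(frequency, diameter, strouhal=0.2):
    """Invert the vortex-shedding relation f = St * U / d for the flow
    speed U; St ~ 0.2 is an assumed textbook value for a circular
    cylinder over a wide range of Reynolds numbers."""
    return frequency * diameter / strouhal

# A 1 cm rod shedding vortices at 4 Hz implies U ~ 0.2 m/s
print(flow_speed_from_shedding(4.0, 0.01))
```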

Fig. 28.10

Vortex current meter: 1—water flow, 2—cylindrical rod, 3—vortices generated in the flow, 4—acoustic wave, 5—sound emitting device, 6—acoustic receiver

A sketch of the converter is shown in Fig. 28.10. In the region where the vortices 3 are generated, a sound emitting device 5 and a sound receiver 6 are placed on opposite sides of the vortex street. They are oriented in such a way that the ultrasonic beam crosses the flow under measurement. As a result of the influence of the vortices on the acoustic signal, the latter undergoes an amplitude modulation with a frequency defined by the vortex oscillations.

The minimum measurable current speed for such converters is 5–10 \({\text {cm\,s}}^{-1}\). The main disadvantage of vortex current meters is their poor signal modulation at speeds of more than 80 \({\text {cm s}}^{-1}\), which leads to an underestimation of the speed. Furthermore, the dependence of the signal upon the orientation of the device is ambiguous: readings increase by up to 2–3 % at angles up to \(10^\circ \), but decrease by 20–25 % at angles of 10–\(30^\circ \)—a non-monotonic response. Overall, these current meters do not have optimal properties for lake applications.

Electro-magnetic current meters. The principle of operation of an electro-magnetic current meter is based on the effect that in a flow of an electrolyte (water) crossing a magnetic field, an electromotive force is induced which is proportional to the speed of the flow and the strength of the magnetic field. In principle, the Earth’s natural magnetic field can be used. However, there are several difficulties here. The very first is a physical one: since the potential difference \(\varepsilon \) is proportional to the sine of the angle between the flow velocity \({{\varvec{V}}}\) and the local magnetic induction \(\varvec{B}\) of the Earth’s magnetic field,

$$\begin{aligned} \varepsilon = \kappa \, |\varvec{V}| \, l \, |\varvec{B}| \sin (\varvec{V}, \varvec{B}), \end{aligned}$$

it becomes very small near the equator. In this formula, \(l\) is the distance between the electrodes and \(\kappa \) is a coefficient of proportionality. In addition, the records are influenced by cosmic noise, electric currents in the ionosphere, telluric currents near the shore etc. So, in addition to the vertical component of the Earth’s magnetic field, special magnetic systems are used which create a permanent or alternating magnetic field in the flow under investigation. As an example, the CT-3 current meter of Sea-Link System, USA, is shown in Fig. 28.5g. This device is inserted in a mooring line in such a way that it is free to pivot in a horizontal plane; the large vertical tail orients the meter in the direction of the current flow. Electrodes are placed at both sides of the hull. A permanent magnet inside the hull generates a magnetic field near the electrodes, so that a potential difference is established which is proportional to the speed of the flow. The range of velocity measurements with this current meter is 0.03–3 \({\text {m s}}^{-1}\), with an accuracy of \(\pm 3\,\%\) for the speed and \(\pm 5\,\%\) for the current direction. One or two components of the current vector can be measured.
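A quick numerical check of the formula above shows why the Earth’s field alone gives an awkwardly small signal; the field strength is a typical mid-latitude value, and the instrument constant \(\kappa \) is set to unity for illustration:

```python
import math

def induced_emf(speed, electrode_distance, B, angle_deg, kappa=1.0):
    """Potential difference across the electrodes of an electromagnetic
    current meter, eps = kappa * |V| * l * |B| * sin(V, B);
    kappa is an instrument constant (assumed 1 here)."""
    return kappa * speed * electrode_distance * B * math.sin(math.radians(angle_deg))

# Earth's field (B ~ 5e-5 T), electrodes 10 cm apart, 0.5 m/s flow:
print(induced_emf(0.5, 0.1, 5e-5, 90.0))   # ~2.5e-6 V -- microvolts only,
# which is why instruments like the CT-3 add their own permanent magnet.
```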

Many other principles of current measurement are used in oceanographic field practice, each having its own merits and demerits. At least the hydrodynamic converters (Prandtl pressure tubes, spherical or cylindrical dynamic pressure sensors) should be mentioned. Well known also are the heat-loss anemometers, known as hot-film anemometers, which make use of the cooling, by the water flow, of a sensor element heated by an electric current. They can also be used for the measurement of velocity fluctuations; examples of such instruments are described in Sect. 28.5 (‘Measurement of turbulence’). Technically difficult but promising is the use of Doppler optical current meters, which are based on laser techniques.

Since currents are three-dimensional vector fields, varying in both time and space, the presentation and handling of current data become a serious and complicated problem. To illustrate this, Fig. 28.11 shows various possible presentations of a simple case: a two-dimensional current, measured with a constant time step at a single point, whose northward component is constant in time while the eastward component varies sinusoidally,

$$\begin{aligned} u = u_m , \quad \quad v = v_m \sin \omega t . \end{aligned}$$
(28.1)
Fig. 28.11

Various kinds of presentation of a two-dimensional water current which varies with time in accordance with equation (28.1): a progressive vector diagram, which clearly displays a generally northward current and the times of certain measurement steps; b two velocity components: eastward and northward components are separated to demonstrate their fundamentally different variations in time; c velocity: speed and direction, both varying periodically; d velocity stick plot—the most natural presentation of a current measured at individual points; e current rose, displaying both predominant current directions and the percentage of different velocity gradations. For convenience, two columns of figures (bottom left) show the data calculated in accordance with equation (28.1): velocity modulus (\(=\)speed), in \({\text {cm s}}^{-1}\), and deviation from the northward direction

Fig. 28.12

Fragment of an ADCP data file. Eastward component of the current (in \({\text {mm~s}}^{-1}\)), measured on the Mainau Sill in Lake Constance in November 2001. The file contains in total 128 columns (depth levels) and 480 lines (time steps) per day (Courtesy Dr. Lorke. Data contributed to ConstanceDataBand 2001)

Every kind of presentation—the \((u, v)\) components, \(({\varvec{ V}}, \phi )\), the velocity stick plot, the progressive vector diagram, the current rose—demonstrates specific features of the current in different ways, and may be suitable or unsuitable for understanding particular features of the process under investigation. They are complementary, so it is often useful to plot the same data in several presentations. One may also find it useful to employ histograms, spectral distributions, \(u(v)\) graphs, statistical plots, and many others; the sketch below generates the data behind several of these presentations.
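As an illustration, the following synthesizes the current of equation (28.1) (amplitudes and sampling step are assumed) and produces the raw material of Fig. 28.11: speed and direction for each sample, and the cumulative displacement of the progressive vector diagram:

```python
import math

# Current of Eq. (28.1): constant northward component u_m, sinusoidal
# eastward component of amplitude v_m (values assumed for illustration).
u_m, v_m = 10.0, 5.0                       # cm/s
omega, n = 2.0 * math.pi / 12.0, 13        # one period per 12 samples

east = north = 0.0                         # progressive-vector coordinates
for k in range(n):
    v_north = u_m
    v_east = v_m * math.sin(omega * k)
    speed = math.hypot(v_east, v_north)                        # panel (c)
    direction = math.degrees(math.atan2(v_east, v_north)) % 360.0
    east += v_east                         # integrate one sampling step:
    north += v_north                       # panel (a), in cm x steps
    print(f"t={k:2d}: speed={speed:5.2f} cm/s, dir={direction:6.1f} deg")
print(f"net displacement: {east:.1f} east, {north:.1f} north (cm x steps)")
```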

Modern current meters (ADCPs, laser current meters) can measure all three components of the current as they vary in space and time. Then the presentation of the measured data itself becomes an important part of the investigation, which requires (i) field experience, (ii) an understanding of the physical processes, and (iii) computing skills. An example of the structure of a data file from a current-velocity measurement on the Mainau Sill (Lake Constance) with an ADCP fixed near the surface is given in Fig. 28.12. The table shows the eastward component of the current only, measured every 50 cm along the vertical (starting from 1.1 m below the water surface) with a time step of 3 min. The northward component was measured simultaneously. It is clear that data presentation and analysis here require specific computer tools and ample experience.

1.2 Buoys, Floats, Tracers, Profilers

Drifting buoys move freely together with the water, i.e., their use implies a Lagrangean technique of measuring the water current. From the ancient bottle mail and the first drifters, made and used for flow investigations by Leonardo da Vinci in the 15th century, to the present day, they remain the most effective tools for the observation of transport and the tracing of currents over large distances. Their shape, size, buoyancy and weight must be selected so as to minimize a possible influence of the wind on the buoy motion under the given wind and current conditions. In early investigations, their positions were registered by an accompanying boat or from a shore-based station by echo-sounders. At present, satellite systems are used for buoy tracking, with GPS or remote position fixing. In oceanic constructions, as a rule, the underwater sail is arranged as a cross of rigid metallic rectangular frames covered with sailcloth. This sail, supported by underwater buoyancy, is attached to a flashing light at the surface. In lakes, lighter and less sophisticated constructions are used; however, it is very important to select an area of the underwater sail sufficiently large that the drag of the approaching flow on the supporting line and its underwater buoyancy, as well as the drag of the wind on the surface buoy, can be ignored. This is usually achieved by using a large, light sail combined with a small weight of the whole construction. Several examples of drifter constructions for lake research are shown in Fig. 28.13. Such drifters are very convenient and often used for local purposes, like estimating the general flow direction or short (up to several hours) tests of the current structure in an area of investigation; usually they are tracked by the observer from the boat while other measurements or sampling are performed.

Fig. 28.13

Simplest constructions of drifters

Floats of neutral buoyancy are used to trace in situ the course of currents at a prescribed depth. Some devices are also constructed for the investigation of the velocities of vertical currents in upwellings. Deep floats are equipped with a hydro-acoustic responder beacon and are constructed so as to accurately maintain a prescribed depth or isopycnic surface over a long time period. Locations of floats are usually fixed by boat, a coastal acoustic system or an autonomous acoustic buoy station. These measurements require specific methods for tracking floats moving in situ. They were of great interest in the mid-20th century, when strides made in acoustics during World War II yielded the necessary technology for acoustic tracking of instruments. Such floats are often called SOFAR floats, named after the coastal network stations for remote tracing of acoustic signal sources (SOund Fixing And Ranging). For further information the reader may consult the exhaustive investigations by John Swallow (1955) [71] and Henry Stommel (1955) [70]. Nowadays, substantial progress in oceanic techniques allows the construction of almost truly Lagrangean floats, which follow all three components of the oceanic velocity on all time scales. As an example of a new float type, the second-generation Mixed Layer Float (MLFII) is presented in Fig. 28.14: it is autonomous, with exposure durations of months; it can alternate between Lagrangean and profiling modes, relay data via satellite, and carry a variety of sensors. A novel hull design is light, strong, and has a compressibility close to that of seawater (D’Asaro (2002) [3]).

Fig. 28.14

Lagrangean float MLFII (Mixed Layer Float, second generation) of the Applied Physics Laboratory and School of Oceanography, University of Washington. Float depth is measured using a pressure sensor. The float’s buoyancy is controlled by a piston that pushes out of the bottom endcap. SeaBird T-C modules on the top and bottom measure temperature and salinity. A Sontek pulse-to-pulse coherent Doppler sonar measures velocity relative to the float. Orbcomm, GPS, and Argos radio systems provide positioning and two-way communications. Downward irradiance, chlorophyll fluorescence, and altitude off the bottom can also be added. The cloth drogue folds downward during profiling and autoballasting. Reproduced from D’Asaro (2003) [4]. \(\copyright \) AMS, J. Atmos. Oceanic Technol., reproduced with permission

Free-fall and attached velocity profilers are used for the measurement of currents as functions of depth and/or horizontal position. Their primary function in modern systems is to allow accurate data collection without the need to stop the vessel. The simplest construction of a free-fall sinking float is the analogue of the meteorological balloon. It is tracked acoustically as it sinks, and the horizontal projection of its path is differentiated with respect to time to yield the velocity as a function of depth. A subset of this class is the transport float, whose position before and after a trip to the bottom is marked by dye patches at the surface. More complicated are free-fall devices which carry a current sensor, including electromagnetic and airfoil lift probes. Finally, there are instruments consisting of a current meter that moves up and down a line attached to a ship, a mooring or a drifting buoy [5].

Various molecular tracers, like spots of fluorescent dyestuff (fluorescein, rhodamine, etc.), dissolved gases (e.g. SF\(_6\)) or radioactive isotopes, are used for the observation of global water transport, as well as for analyses of the current field structure and of processes of turbulent diffusion. Whilst the use of dyestuff is a traditional method, the tracing of radioactive molecules is a rather new kind of measurement technique. It became possible when the accuracy of measurements of weak radioactivity became sufficiently high. At present, it is possible to detect radioactivity at a level of the order of \(10^{-14}\) Curie \(\ell ^{-1}\), i.e. 1–2 decays per minute in a volume of 100 l; this is up to 1,000 times less than the natural radiation of water in the World Ocean (\(10^{-10}\)–\(10^{-15}\) Curie \(\ell ^{-1}\)). Depending on the ratio of the time scale of the processes under investigation to the half-life, four types of nuclear tracers are distinguished: radioactive conservative (\({\text {Sr}}^{90}\), \({\text {Cs}}^{137}\), T, \({\text {Ra}}^{226}\), \({\text {Ra}}^{228}\)) and non-conservative (\({\text {C}}^{14}\), \({\text {Rn}}^{222}\), \({\text {Si}}^{32}\)) tracers, and stable conservative (\({\text {O}}^{18}\), D) and non-conservative (\({\text {C}}^{12}\), \({\text {C}}^{13}\)) tracers. Two methods of analysis are applied: (i) collecting samples, followed by determination of the isotope concentration and further analysis in the laboratory, and (ii) direct measurement of the gamma-radioactivity by submerged gamma-spectrometers. It goes almost without saying that all tracer substances participate in the diffusion processes to which the lake water itself is also subject. So, whilst such methods may give a rough estimate of the velocity of a spot of dyestuff, by tracing the center of the spot with time, this approach is difficult and rather ineffective for the determination of the velocity. The growth of the spot, however, may yield estimates of the (turbulent) diffusivity along the trajectory of the spot.
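The last remark can be turned into numbers with the Fickian estimate \(K \approx \Delta \sigma ^2/(2\,\Delta t)\) for the growth of the spot variance; the survey values below are hypothetical:

```python
def horizontal_diffusivity(sigma1_m, sigma2_m, dt_s):
    """Rough eddy diffusivity from the growth of a dye spot, using the
    Fickian estimate K ~ (sigma2^2 - sigma1^2) / (2 * dt), where sigma
    is the standard deviation ('radius') of the spot."""
    return (sigma2_m ** 2 - sigma1_m ** 2) / (2.0 * dt_s)

# Hypothetical survey: a rhodamine patch grows from 10 m to 25 m
# standard deviation within two hours:
print(horizontal_diffusivity(10.0, 25.0, 7200.0), "m^2/s")   # ~0.036
```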

2 Measurement of Water Temperature, Electrical Conductivity and Density

Whatever you would desire to measure,

your device is noting a temperature

N. V. Vershinsky,

1981

For many physical processes in limnology, the water temperature is definitely the most valuable water parameter. It varies in lakes both in space and time; in the surface layer the maximum range of variation on a seasonal time scale is from \(0\,^\circ \text {C}\) in freezing freshwater lakes to more than \(40\,^\circ \text {C}\) in tropical lakes, and sometimes even up to \(100\,^\circ \text {C}\) and more in boiling mineralized geyser reservoirs. A typical seasonal variation of the vertical temperature profile in a deep lake is displayed in Fig. 1.8 of the Introduction in Vol. 1: it shows a relatively large temperature variation in the upper epilimnion, which is governed by solar radiation and turbulence and follows a seasonal cycle, versus an almost stable deep hypolimnion temperature, more or less close to that of the densest water, at \(4\,^\circ \text {C}\).

This slow and monotonous adjustment to a seasonal cycle is, however, only a rough background for the real ‘life’ of the temperature field of a lake. Powerful internal wave dynamics, wind-induced and gradient currents, wind and wave mixing, turbulence and heat exchange with the atmosphere and land—all these manifest themselves through variations of the temperature field on a multitude of time scales. So it is not surprising that the temperature field in a lake is composed of spots, strips and step-like structures, and that everyone investigating lake physics is doomed to measure the structure of the temperature field and its variations. Its temporal variations due to internal waves have periods from several days to minutes; moreover, the periodicity range from hours to seconds is filled with many other meso- and small-scale processes as well. The natural scale of the spatial variations of the temperature field in lakes ranges from millimeters (fine structure, turbulent cells) to tens and hundreds of meters (convective cells, coastal heating, thermocline etc.). Many figures in this book display various physical phenomena which are clearly observed through temperature-field data: internal waves (e.g., Figs. 13.4, 14.13, 16.12 in Vol. 2), differential coastal heating (Fig. 28.16), day/night variations in the upper layer (Fig. 28.17), penetrative and non-penetrative autumn convection, wind-induced thermocline motions (Fig. 28.18), Langmuir circulation, down-slope gravity currents (Fig. 8.5 in Vol. 1) and many others. Thus, measurement of the temperature field itself and of its variations is extremely informative for the investigation of lake dynamics. Besides, there is one more physical reason for temperature recordings, namely the all-embracing influence of the temperature on many other water parameters. To illustrate this, consider the measurement of the electrical conductivity of water as an example.

Fig. 28.15

Vertical conductivity and temperature profiles measured near the Island Mainau in Lake Constance on 11 November 2001, indicating an intrusion in the upper thermocline, manifested only in the field of conductivity (Courtesy Dr. Bäuerle)

Fig. 28.16

Two temperature profiles in the 5–10 cm sub-surface layer versus bottom profile across Lake Constance on 30.10.02 (12:40–14:20), demonstrating coastal heating of shallow littoral water in day-time (Chubarenko et al. (2003) [11]) \(\copyright \) Limnology, PAGEPress, Pavia, Italy, reproduced with permission

Fig. 28.17

Records of thermistor chains placed in Lake Constance in October, 2001 show daily sun heating and night cooling in the surface layer (http://www.cwr.uwa.edu.au)

Fig. 28.18

Wind-induced inclination of the thermocline surface: temperature depth-time series of two thermistor chains, placed near opposite shores of Lake Constance in November, 2001. Wind speed time variation is shown in the central panel (http://www.cwr.uwa.edu.au, with additions)

The water molecule is very small in comparison with typical inorganic and especially organic molecules. In addition, it has a rather high electrical dipole moment. These two facts lead to an important consequence: pure water is an extremely active natural solvent. As a result, chemically pure water can be maintained, and only in the laboratory, for just a few seconds; the liquid which fills natural basins is actually a solution of gases, liquids and other substances from the surrounding area. So, the electrical conductivity of natural water is the sum of the pure-water conductivity and the conductivity due to the dissolved substances; in most cases its characteristics are manifestations of the chemical composition of the surroundings, and they vary from one lake to another, sometimes to a very large degree. The point to emphasize here is that the conductivity measured in natural basins results from the transport of electrical charges by both water molecules and dissolved substances, whose speeds of motion increase with the temperature. In other words, the electrical conductivity depends physically not only on the chemical composition, but also on the intensity of the movement of the molecules and ions in solution, i.e., on the temperature. Thus, to obtain the amount of dissolved substances (the very reason to measure conductivity!), one should subtract the pure-water conductivity from the measured signal and apply a correction for temperature.

When considered together with the water temperature, conductivity data can provide important information on internal lake dynamics. For instance, it is a very effective instrument for tracking the trajectories of sewage flows or of water from mineralized underwater sources, which may have the same temperature as the surrounding water. Figure 28.15 gives an example of a situation measured in Lake Constance in November 2001: the conductivity data indicate an intrusion flow at a depth of 30–40 m; its different chemical composition can be inferred because the temperature profiles show no accompanying variations.

The electrical conductivity depends on the hydrostatic pressure as well, but not very significantly: for temperatures in the range of 0–28 \(^\circ \text {C}\), the variation of the electrical conductivity with temperature is about 2.2–2.3 % per \(1\,^\circ \text {C}\), whereas a change of the hydrostatic pressure by 1 atmosphere (ca. 10 m of water depth) changes the electrical conductivity by as little as 0.01 %.
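In practice the temperature effect is often removed by a linear compensation to a reference temperature of 25 °C. A minimal sketch, using the roughly 2.2 % per °C coefficient quoted above (the coefficient is solution-specific and should be calibrated for a given lake):

```python
def specific_conductance_25(C_measured, T_celsius, alpha=0.022):
    """Refer a raw conductivity reading to 25 degC with a linear
    compensation; alpha is the fractional change per degC (the ~2.2 %
    per degC figure quoted above, assumed constant here)."""
    return C_measured / (1.0 + alpha * (T_celsius - 25.0))

# 250 uS/cm measured at 8 degC is ~399 uS/cm referred to 25 degC
print(specific_conductance_25(250.0, 8.0))
```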

One more principal aspect of the importance of temperature measurements is reflected in the epigraph to this section: to a certain extent, almost all instruments, as well as the principles of their operation, depend on the temperature. The resistance of wires and electrical networks, the sizes of mechanical parts, the volumes of liquids, the frequencies of eigen-oscillations of membranes in sensors—all these depend on temperature, and as a rule in a nonlinear fashion. So, in order to interpret the fine recordings of an instrument correctly, one must know the circumstances of the measurement: i.e., necessarily the temperature itself, the principle of operation of the instrument and the appropriate temperature-related corrections to the measured data.

2.1 Main Principles of Operation of Temperature Sensors

The following physical phenomena are most widely used in present-day oceanographic temperature sensors:

  • thermal expansion of liquids,

  • dependence on temperature of the resistance of an electrical conductor or semiconductor,

  • generation of thermo-emfFootnote 8 (thermocouple voltage),

  • dependence on temperature of rigid body eigen oscillations,

  • dependence of the speed of sound on temperature.

In correspondence with this list, the following primary temperature converters are used: liquid-filled volumes; wire-wound conducting thermo-resistors or semiconducting thermo-resistors (briefly called thermistors); copper–constantan or semiconducting thermocouples; quartz resonators or thermo-acoustic cells.

Fig. 28.19

The reversing thermometer is a mercury thermometer for measuring the temperature in deeper water layers. The principle is that the indication of the thermometer at a given depth is fixed, after which the thermometer can be brought up through overlying water layers of different temperatures without changing its reading, so that the indication can be read aboard. This fixation of the indication is realized by reversing the thermometer, which is constructed in such a way that the connection between the mercury in the capillary and the reservoir is thereby interrupted. The indication of the reversing thermometer is read aboard along with the temperature of a second—normal—thermometer; the two readings give the temperature of the water at the depth where the thermometer was reversed. The reversing thermometer is carried inside a glass tube which protects it from the pressure. If the thermometer is unprotected, the apparent temperature read aboard depends on both the temperature and the pressure at the depth where the thermometer was reversed. A pair of unprotected and protected thermometers (both are shown in the figure) therefore gives the temperature and pressure of the water at the depth at which the thermometers were reversed. By attaching the thermometer to a NISKIN bottle (see Fig. 29.15) or some other sampler with a reversing frame, a simultaneous observation of temperature and salinity can be obtained. The temperature can be read with this instrument to about \(0.01^\circ \text {C}\). The break-off point appendix on the tube with mercury is indicated by the arrow (from http://oceanworld.tamu.edu, with changes)

Liquid-filled thermometers, the direct descendants of Galilei’s thermometer (Fig. II) constructed in 1592, participated in almost all oceanographic expeditions of the 20th century under the names ‘surface thermometer’, ‘reversing thermometer’Footnote 9 (see Fig. 28.19), ‘bathy-thermograph’, ‘photo-thermograph’ etc. These were simple and reliable instruments, successfully used over the whole temperature range of hydro-physical measurements. However, their analogue-type signal is difficult to transmit and does not easily allow modern electronic data handling, let alone fulfill today’s requirements of data accuracy. So, they are nowadays used only for the simplest surface temperature measurements.

Wire-wound conducting resistance thermometers (copper, platinum, tungsten, nickel, cobalt) make use of the following relationship between electrical resistance and temperature:

$$\begin{aligned} R_t = R_0 (1 + \alpha T + \beta T^2), \end{aligned}$$
(28.2)

where \(T\) is in degrees Celsius, \(R_0\) is the resistance at \(0\,^\circ \text {C}\) and \(\alpha \) and \(\beta \) are thermal coefficients of resistance. For the metals listed above, the value of \(\alpha \) varies between \(3.5\times 10^{-3}\) and \(6.6\times 10^{-3}\,[^{\circ }{\text {C}}^{-1}]\), and that of \(\beta \) lies in the range \(-5.5\times 10^{-7}\) to \(17\times 10^{-7}\,[^{\circ }{\text {C}}^{-2}]\).

Semiconductor thermo-resistors (thermistors) exhibit a non-linear dependence of the resistance on the temperature:

$$\begin{aligned} R_t = R_0 \, e^{\gamma /T}, \end{aligned}$$
(28.3)

where \(T\) is the absolute temperature, \(R_0\) is a reference resistance constant, and the coefficient \(\gamma \) may be negative or positive.

High accuracy and sensitivity, stability and linearity over a wide temperature range, and a frequency-type output signal are the features of quartz resonators. Their principle of operation uses the following phenomenon: the frequency of the eigen-oscillations of a quartz crystal, cut at a certain angle to its optical and mechanical axes, depends on its temperature, with coefficients depending on the angle of the cut; the following polynomial approximation is used:

$$\begin{aligned} F_T = F_0 (1 + \alpha T + \beta T^2 + \gamma T^3) \qquad (T\, {\text {is absolute temperature}}). \end{aligned}$$
(28.4)

Here, \(F_T\) is the frequency of the eigen-oscillations (Hz), \(T\) is the temperature (K) and \(\alpha , \beta , \gamma \) are coefficients which depend on the angle of the cut. Figure 28.20 gives examples of quartz resonator crystals of different shapes used in field instruments. The performance of quartz sensors is excellent: their response time is of the order of milliseconds and their accuracy better than \(0.001\,^\circ {\text {C}}\). They open wide perspectives for future investigations.
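All three calibration laws, (28.2)–(28.4), are inverted numerically in practice. A minimal sketch is given below; the coefficient values in the round-trip examples are assumed, platinum- and NTC-like numbers, not the calibration of any particular instrument:

```python
from math import log, sqrt

def T_from_wire(R_t, R_0, alpha, beta):
    """Invert Eq. (28.2), R_t = R_0(1 + alpha*T + beta*T^2), for T in degC;
    takes the physical root of the quadratic (beta must be nonzero)."""
    disc = alpha ** 2 - 4.0 * beta * (1.0 - R_t / R_0)
    return (-alpha + sqrt(disc)) / (2.0 * beta)

def T_from_thermistor(R_t, R_0, gamma):
    """Invert Eq. (28.3), R_t = R_0 * exp(gamma / T), for absolute T (K)."""
    return gamma / log(R_t / R_0)

def T_from_quartz(F_T, F_0, a, b, c, T=290.0):
    """Invert the cubic calibration (28.4) by Newton iteration (T in K)."""
    for _ in range(30):
        f = F_0 * (1.0 + a * T + b * T ** 2 + c * T ** 3) - F_T
        T -= f / (F_0 * (a + 2.0 * b * T + 3.0 * c * T ** 2))
    return T

# Round trips with assumed coefficients:
print(T_from_wire(100.0 * (1 + 3.9e-3 * 10 - 5.8e-7 * 100),
                  100.0, 3.9e-3, -5.8e-7))                    # -> 10.0 degC
print(T_from_thermistor(10000.0, 0.357, 3000.0))              # -> ~293 K
print(T_from_quartz(5e6 * (1 + 3.5e-5 * 300), 5e6, 3.5e-5, 0.0, 0.0))  # 300 K
```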

Fig. 28.20

Two different constructions of quartz resonators for measurement of water temperature: 1 quartz resonators (thickness 0.2–0.3 mm); 2 electrodes; 3 outlet to generator of external oscillation signal; 4 metallic cap for better heat contact

Actually, the required accuracy of temperature measurements, though depending on the measuring principle, should be defined by the purpose of the investigation. The most accurate measurements are required for calculations of the water density and of acoustical fields; for these, errors must be as small as \(10^{-3}\,^{\circ }{\text {C}}\) or smaller. Instruments routinely used for standard measurements nowadays have an accuracy of 0.1–0.01 \(^{\circ }{\text {C}}\) or somewhat better.Footnote 10

2.2 Salinity Measurements

At the time of reversing thermometers (Fig. 28.19), i.e., before the 1960s–70s, the standard technique for the determination of the salinity was the collection of water samples for later processing in the laboratory by titration or conductivity measurement. Very likely, the first reference to the use of conductivity as a means for measurement of sea-water salinity appeared in Nansen’s report (1902) [45] of the Norwegian Polar Expedition. The instruments were initially called salinometers and were used in Salinity-Temperature-Depth probes (STDs). Later, when conductivity had proved to provide reliable field data on water salinity, such probes were re-named Conductivity-Temperature-Depth probes (CTDs). Since modern probes are equipped with programs recalculating the density, the acronym CTD is often read as ‘Conductivity-Temperature-Density’. These probes have proved to be very reliable instruments and are relatively simple to use, so they are most popular in present-day lake research practice. Figure 28.22 shows a working instant—the beginning of a vertical sounding by the CTD Idronaut during the measurement campaign in Lake Constance in 2001 (Chubarenko et al. (2003) [11]).

One way to express how much salt is dissolved in water is to measure the concentration of salt in the water. Concentration is the amount (by weight) of salt in water and can be expressed in parts per million (ppm). Since the time when titration was the main method of measuring salt content, scientists have been used to reporting salinity in ppt (‘parts per thousand’, equal to the number of grams of salt in one liter of the water sample; also called promille, from the Latin pro mille—per thousand). However, reporting salinity in ppt is obsolete now, as this method of determining ocean salinity is no longer used. Instead, psu (‘practical salinity units’) are used: 1 psu \(=\) 1 ppt \(=\) 1 g \(\ell ^{-1}\). One determines the salinity by measuring the conductivity of the water, as with a CTD instrument.
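For orientation, a minimal sketch of the PSS-78 relation at atmospheric pressure follows. It assumes that the conductivity ratio \(R_t\) (sample conductivity over that of standard seawater at the same temperature) has already been formed from the raw measurement; the polynomial coefficients are the published PSS-78 values, and the scale is valid for salinities of roughly 2–42:

```python
def practical_salinity(R_t, T):
    """PSS-78 practical salinity at atmospheric pressure. R_t is the
    conductivity ratio at temperature T (degC); forming R_t from the
    raw conductivity is omitted in this sketch."""
    a = (0.0080, -0.1692, 25.3851, 14.0941, -7.0261, 2.7081)
    b = (0.0005, -0.0056, -0.0066, -0.0375, 0.0636, -0.0144)
    k = 0.0162
    S = sum(ai * R_t ** (i / 2.0) for i, ai in enumerate(a))
    dS = (T - 15.0) / (1.0 + k * (T - 15.0)) * \
         sum(bi * R_t ** (i / 2.0) for i, bi in enumerate(b))
    return S + dS

print(practical_salinity(1.0, 15.0))   # -> 35.0 psu by construction
```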

The development of the new standard TEOS-10 [29] for the calculation of the properties of seawater, adopted by the Intergovernmental Oceanographic Commission, the International Association for the Physical Sciences of the Oceans and the Scientific Committee on Oceanic Research (see Sect. 28.2.3 on density calculation for more detail), has required the next step in the characterization of water salinity. Different salinity measures were reviewed (such as Practical Salinity, Reference Salinity, Preformed Salinity, Density Salinity, Solution Salinity, Added-Mass Salinity; see, e.g., Wright (2010) [81]), and the Absolute Salinity (‘Density Salinity’), traditionally defined as the mass fraction of dissolved material in seawater, was shown to be the most proper measure for the calculation of the water density via TEOS-10. Absolute Salinity is measured in SI units of kg \({\text {kg}}^{-1}\). The important point is that, in field practice, we continue to measure water conductivity with CTD probes and to report the Practical Salinity in psu to data bases; however, in order to calculate the water density, one should now additionally estimate the composition anomalies of the given water sample.

Fig. 28.21

Density of water as a function of its temperature and salinity, based on the formula by Chen and Millero (1986) [10]. Straight lines show the temperature of maximum density (\(T_{md}\)) and the temperature of freezing (\(T_{fr}\)), both depending on water salinity

Fig. 28.22

Dr. Boris Chubarenko, making a vertical sounding in Lake Constance (2001) by CTD Idronaut Ocean Seven Model 316 of General Oceanics, Canada. The probe is equipped with the standard sensors to measure: pressure, temperature, conductivity, oxygen, pH, oxidation-reduction potential. Salinity is automatically calculated from conductivity, temperature and pressure values

It is often convenient to use also qualitative characteristics of water salinity: ‘fresh’ waters are those with a salinity of less than 1 psu, ‘slightly saline’ waters have 1–3 psu, ‘moderately saline’ waters 3–10 psu, ‘highly saline’ waters 10–35 psu, and ‘hypersaline’ waters more than 35 psu. Ocean water has a salinity of approximately 35 psu, or 3.5 % (‘percent’—one part per 100). Lakes typically have low water salinities (less than 1 psu), i.e. lake water has about 0.1 % salt in it. To denote the small values of lake-water salinity and its different chemical composition, one speaks of water mineralization rather than salinity.

There exists one more classification of natural waters according to their salinity, which is based on a physical argument, see Fig. 28.21. If the freezing point (which depends on salinity) lies above the temperature of maximum density (which is salinity-dependent as well), such waters are considered ‘saline’. In the opposite case, they are called ‘brackish’ waters. The freezing temperature equals the temperature of maximum density at a salinity of 24.695 psu. This way, oceanic waters freeze before reaching the temperature of maximum density, whilst in rivers, lakes and inland seas, where the salinity is less than 24.695 psu, the water body, when being cooled, must pass the temperature of maximum density before it freezes.
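The crossover can be located approximately with commonly quoted fits: a UNESCO-type polynomial for the surface freezing point and a rough linear approximation for the temperature of maximum density (both are assumptions of this sketch, not formulas from this section). Bisection then finds the crossing near the quoted value:

```python
def t_freezing(S):
    """Surface freezing point of saline water, degC (UNESCO-type fit)."""
    return -0.0575 * S + 1.710523e-3 * S ** 1.5 - 2.154996e-4 * S ** 2

def t_max_density(S):
    """Temperature of maximum density, degC (rough linear fit, assumed)."""
    return 3.98 - 0.216 * S

lo, hi = 0.0, 40.0        # T_md > T_f at S = 0 and T_md < T_f at S = 40
for _ in range(60):       # bisection for the salinity where the curves cross
    mid = 0.5 * (lo + hi)
    if t_max_density(mid) > t_freezing(mid):
        lo = mid
    else:
        hi = mid
print(round(lo, 2), "psu")   # ~24.6 with these fits; the exact value is 24.695
```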

Even though the value of the salinity in lakes, as a rule, is very small, it provides an important information on the chemical environment. Furthermore, its influence on the density field may be significant; so, it has become standard now to measure simultaneously the water temperature and the conductivity. In the most popular modern instruments, conductivity sensors of several kinds are used: contact pickups, or so-called conductive sensors, where the salt molecules come in contact with metallic electrodes (and typically destroy them quite soon); non-contacting detectors based on the inductive coupling principle, which avoids the use of metal electrodes; or capacitive pickups. The life time of conductivity sensors is typically much shorter than that of the temperature sensors. This is the reason why almost all long-lasting field measurements are limited to temperature registration only. The accuracy of routinely used field instruments is 0.01–0.001 psu.

2.3 Density Measurements and Calculations

Structure and variability of the density field are of principal physical importance for the water dynamics in lakes. Density variations in natural basins arise for several different physical reasons, and thus the density field provides integrated information on the ongoing physical processes. However, direct density measurement is at present almost never applied in field practice. On the one hand, sufficiently accurate sensors that would allow recording density variations in the most convenient, digital form do not yet exist; in other words, a frequency-type primary converter for density has still not been found. On the other hand, modern indirect methods, which re-calculate the density from other measured parameters, provide quite satisfactory data. Along with the above-mentioned CTD-probes, which re-calculate density from salinity and temperature data, one commonly uses sound-speed and laser-optical measurements. Based on the former, a new and promising field is acoustic tomography, by which the three-dimensional picture of the density field can be determined. The latter are still difficult and not yet sufficiently developed for wide practical use, as they require careful calibration against independent experimental data obtained by other instruments. Several other kinds of density measurement techniques exist as well, both direct and indirect; the need for direct density measurements is obvious nowadays and drives the development of new instruments and the use of other physical principles.

Physically, water density depends on its temperature, on the amounts of minerals in solution (salinity) and in suspension, and on the pressure. The natural variability of water density and its dependence on temperature and salinity are shown in Fig. 28.21. The density of pure water in the range 0–\(40\,^\circ {\text {C}}\) varies between 992.2 and \(1,\!000\,{\text {kg m}}^{-3}\), i.e., by only 0.8 %; so, the density anomaly \(\sigma _t\) is often used as a variable

$$\begin{aligned} \sigma _t = (\rho - \rho _\star ), \qquad (\rho _\star = 1000\, {\mathrm {~kg~m}}^{-3}). \end{aligned}$$
(28.5)

If \(\rho \) is expressed in \({\text {kg~m}}^{-3}\), \(\sigma _t\) shows by how many kilograms one cubic meter of water at a given temperature and salinity is lighter (or heavier) than one cubic meter of pure water at \(4\,^\circ {\text {C}}\). For the temperature range 0–\(40\,^\circ {\text {C}}\) it amounts to up to \(7.8\,{\text {kg m}}^{-3}\).

Temperature. The dependence of the water density on temperature has a specific character. Contrary to almost all other substances (whose density decreases with growing temperature), the density of fresh water between (roughly) \(0\,^\circ \text {C}\) and \(4\,^\circ \text {C}\) increases with increasing temperature by about 0.01 %, has a maximum near \(4\,^\circ \text {C}\) and decreases beyond it, so that water at \(0\,^\circ \text {C}\) and \(8\,^\circ \text {C}\) has almost the same density. The consequences of this anomaly are of global importance for the water dynamics of lakes and lead to phenomena which do not occur in oceans (e.g., an inverse vertical thermal stratification in winter, spring and fall overturns, formation of thermal bars).

Salinity. Even though the majority of lakes have a water salinity of less than 1 ppt, its influence on the water density is significant for many processes. Obviously, the more salt is dissolved in the water, the heavier a cubic meter of it will be; however, simple additivity of masses does not apply here. For example, one cubic meter of pure water at \(0\,^\circ \)C is only 8 kg lighter than 10 ppt-salty waterFootnote 11 at \(0\,^\circ \text {C}\). Figure 28.21 displays the density between \(-1.5\,^\circ \text {C}\) and \(30\,^\circ \text {C}\), parameterized for a number of salinities. It is seen that, besides a considerable change in the absolute value of the density, the non-monotonicity of the temperature dependence is lost when the salinity exceeds 24.695 psu. Thus, while cooling, water with a salinity above 24.695 psu freezes before reaching the temperature of maximum density, whilst so-called brackish water (water with salinity less than 24.695 psu) passes its maximum density in the liquid phase. This has severe consequences, e.g., for upper-layer mixing processes in spring and autumn. Most lakes on Earth consist of water of very low salinity, so that in limnological field practice the density reflects almost exclusively the variations of the temperature field.

Pressure. The influence of the pressure is rather weak, so that for many lakes it can often be neglected. However, for deep lakes with depths larger than 500 m and for thermobaric processes this pressure dependence cannot be ignored. For example, the World’s deepest Lake Baikal, located north of the Mongolian border, has a maximum depth of just over one mile (1637 m); hence, the pressure near the bottom is 164 times the atmospheric pressure! Even a very small compressibility is significant under such conditions. As the water pressure \(p\) rises by an amount \(dp\) at constant temperature, the density of the water increases by \(d\rho _w\) from its original density \(\rho _w\), and a given volume of water \(V_w\) decreases by \(dV_w\) in accordance with

$$\begin{aligned} \beta dp = \frac{d\rho _w}{\rho _w} = - \frac{dV_w}{V_w}, \end{aligned}$$
(28.6)

where \(\beta \) is the isothermal compressibility of water. It varies only slightly within the normal range of lake water temperatures, from \(\beta = 4.9 \times 10^{-10}\) m\(^2\) N\(^{-1}\) at \(0\,^\circ \text {C}\) to \(\beta = 4.5 \times 10^{-10} \,{\text {m}}^2\, {\text {N}}^{-1}\) at \(20\,^\circ \text {C}\).

To illustrate how (in)compressible water is, let us estimate the density difference between water near the surface and at the bottom of a lake, 500 m deep, for the simplest case of winter homothermy, when the temperature is constant through the entire depth, and equals, say, \(4\,^\circ \text {C}\). The density at the surface under atmospheric pressure is \(\rho _{w} = 1000.0{\text { kg}} {\text { m}}^{-3}\). At the bottom, the additional pressure equals the weight of the water column

$$\begin{aligned} p = \rho _w g H = 1000 {\mathrm {~kg~m}}^{-3} \times 9.8 {\mathrm {~m~s}}^{-2} \times 500 {\mathrm {~m}} = 49 \times 10^5 {\mathrm {~Pa}}. \nonumber \end{aligned}$$

Using \(\beta = 4.8\times 10^{-10}\, {\text {m}}^2 \,{\text {N}}^{-1}\), we obtain \(d\rho _w = \beta \,dp\, \rho _w = 2.35\, {\text {kg m}}^{-3}\). Therefore, the water density at the lake bottom is \(1002.35 \,{\text {kg m}}^{-3}\), or 0.2 % larger. It is interesting to note that the density difference between surface and bottom layers due to a typical summer temperature stratification of \((22-4)\,^\circ \text {C} = 18\,^\circ \text {C}\) amounts to almost the same value (see Fig. 28.21), namely 2.2 kg \({\text {m}}^{-3}\)!
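This estimate is easily reproduced numerically; the following minimal sketch (variable names are illustrative) uses the linearized form of Eq. (28.6):

```python
g = 9.8          # gravitational acceleration [m s^-2]
rho0 = 1000.0    # surface density at 4 deg C [kg m^-3]
beta = 4.8e-10   # isothermal compressibility [m^2 N^-1]
H = 500.0        # lake depth [m]

dp = rho0 * g * H             # weight of the water column: 49e5 Pa
drho = beta * dp * rho0       # linearized Eq. (28.6): ~2.35 kg m^-3
print(dp, drho, rho0 + drho)  # bottom density ~1002.35 kg m^-3
```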

Most lakes on Earth are much shallower, so (typically) the dependence of the water density on the temperature is the dominant factor.

Calculating water density. Various reliable methods for the calculation of water density are available nowadays. Today’s CTD-instruments commonly contain sensors for temperature, conductivity and pressure, and are equipped with an electronic module for density recalculation. There are a great number of formulae expressing the density (as well as other water parameters) as functions of temperature, salinity and pressure (see also Chap. 10 of Vol. 1). Today’s almost universally accepted parameterisation of the density function for lake waters is that of Chen and Millero (1986) [10]; its analog for sea water is known as EOS-80 (UNESCO (1981) [74]). Another formula, still popular in the oceanographic literature, see Gill (1983) [23], is

$$\begin{aligned} \rho = \rho _{T} + \varDelta \rho _{S} \end{aligned}$$
(28.7)

where

$$\begin{aligned} \rho _{T}&= 999.842594 + 6.793952 \cdot 10^{-2}T - 9.095290 \cdot 10^{-3} T^2 \nonumber \\&\quad + 1.001685 \cdot 10^{-4} T^3 - 1.120083 \cdot 10^{-6} T^4 + 6.536332 \cdot 10^{-9} T^5 \end{aligned}$$
(28.8)

is the density of pure water in kg m\(^{-3}\) as a function of temperature (in \(^\circ \)C), and

$$\begin{aligned} \varDelta \rho _{S}&= (0.824493 - 4.0899 \cdot 10^{-3} T + 7.6438 \cdot 10^{-5} T^2 \nonumber \\&\quad -\, 8.2467 \cdot 10^{-7} T^3 + 5.3875 \cdot 10^{-9} T^4) S \nonumber \\&\quad +\, (-5.72466 \cdot 10^{-3} + 1.0227 \cdot 10^{-4} T - 1.6546 \cdot 10^{-6} T^2) S^{1.5}\nonumber \\&\quad +\;4.8314 \cdot 10^{-4} S^2 \end{aligned}$$
(28.9)

is the change in density due to the dissolved substances, where \(T\) is the temperature in \(^\circ \text {C}\) and \(S\) is the salinity expressed in g salt per kg water, or ppt\(=\)parts per thousand [23].
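For concreteness, Eqs. (28.7)–(28.9) can be coded in a few lines; the following Python sketch (function names are illustrative, and the formula holds at atmospheric pressure only) reproduces the numbers quoted earlier in this section:

```python
def rho_pure_water(T):
    """Density of pure water [kg m^-3], Eq. (28.8); T in deg C."""
    return (999.842594 + 6.793952e-2*T - 9.095290e-3*T**2
            + 1.001685e-4*T**3 - 1.120083e-6*T**4 + 6.536332e-9*T**5)

def rho(T, S):
    """Water density [kg m^-3], Eqs. (28.7)-(28.9); S in ppt."""
    drho_S = ((0.824493 - 4.0899e-3*T + 7.6438e-5*T**2
               - 8.2467e-7*T**3 + 5.3875e-9*T**4) * S
              + (-5.72466e-3 + 1.0227e-4*T - 1.6546e-6*T**2) * S**1.5
              + 4.8314e-4 * S**2)
    return rho_pure_water(T) + drho_S

print(rho(4.0, 0.0))   # ~999.97: density maximum of pure water
print(rho(0.0, 10.0))  # ~1007.95: cf. the ~8 kg difference quoted above
```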

A great advantage of such formulae, deduced mainly from sets of laboratory measurements, is that they deliver easy-to-use explicit expressions for the dependence of the water density on parameters measured in the field; this is convenient for both calculations and theoretical work. Overall, this approach (the Practical Salinity Scale 1978 and EOS-80), which expresses the density of seawater as a function of Practical Salinity, temperature and pressure, has served the oceanographic community very well for thirty years.

In 2010, the Intergovernmental Oceanographic Commission, the International Association for the Physical Sciences of the Oceans and the Scientific Committee on Oceanic Research jointly adopted a new standard for the calculation of the properties of seawater (IOC, SCOR and IAPSO (2010) [29])—the International Thermodynamic Equation Of Seawater (TEOS10). It uses a principally new approach: the thermodynamic properties of seawater are now calculated on the basis of one of its thermodynamic potentials, the Gibbs free energy, from which thermodynamic properties such as entropy, specific volume, enthalpy and potential enthalpy are directly calculated and thus are fully consistent with each other. The main motivations for an updated description were (i) the partial lack of consistency between several of the polynomial expressions of EOS-80, and (ii) the development since the late 1970s of a more accurate and more broadly applicable thermodynamic description of pure water, together with better measurements of the heat capacity of water, of the sound speed and of the temperature of maximum density. At the beginning of the 21st century, the impact of the composition of the water in different basins on its density became better understood; thus, the need arose for accurate expressions for the entropy, enthalpy and internal energy of seawater (which were not available from EOS-80). Moreover, heat fluxes across the interfaces between the water, the atmosphere and the ice became of primary interest in interpreting the functioning of the global planetary heat engine.

Fundamental to TEOS10 are the concepts of Absolute Salinity and Conservative Temperature. The Gibbs free energy (or free enthalpy, Gibbs function or Gibbs potential) is a function of Absolute Salinity \(S_A\) (rather than of Practical Salinity \(S_P\)), Conservative Temperature and pressure (IOC, SCOR and IAPSO (2010) [29]). Absolute Salinity is traditionally defined as the mass fraction of dissolved material in seawater, and it is preferred over Practical Salinity because the properties of seawater are directly influenced by the mass of dissolved constituents, whilst Practical Salinity depends only on conductivity. With this new approach, the particular water composition becomes important, as do the solubility of the salt constituents, the alkalinity, the pH value, etc. (The variations in the relative concentrations of water constituents caused by biogeochemical processes actually complicate even defining what exactly is meant by “absolute salinity”.) At the same time, the difference between \(S_A\) and \(S_P\), even for oceanic water, is only about 0.47 %. The use of Absolute Salinity is a major departure from previous practice. Absolute Salinity is also the appropriate salinity variable for the calculation of freshwater fluxes and for calculations involving the exchange of freshwater with the atmosphere and with ice, the freezing temperature, and the latent heats of melting and of evaporation.

The new TEOS10 temperature variable, in replacement of the Potential Temperature, is the so-called Conservative Temperature; it is defined to be proportional to the potential enthalpy and is a very accurate measure of the “heat” content per unit mass of seawater.

The definitions of various thermodynamic quantities (which follow directly from the Gibbs function of seawater by simple mathematical processes such as differentiation), the computer software (the Gibbs-SeaWater (GSW) Oceanographic Toolbox) to evaluate these quantities, introductory articles about TEOS10 and user manuals are available from http://www.TEOS-10.org.

Although there are substantial advantages to using TEOS-10, the price to be paid for this is that the mathematical equations making up this standard are rather complex and involve many coefficients specified to 16 significant digits (IOC, SCOR and IAPSO (2010) [29]). It is obviously not recommended to programme these—instead, one is supposed to use the developed software, which is now freely available in FORTRAN, Visual Basic, and MATLAB at http://www.teos-10.org.
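As an illustration of the intended workflow, here is a minimal sketch using the Python port of the GSW toolbox (distributed as the gsw package; the officially listed languages are those above, and the sample values of salinity, temperature, pressure and position below are purely illustrative). Note that the composition-anomaly lookup behind SA_from_SP is built for the ocean, so for lake water it can serve only as an orientation:

```python
import gsw  # Gibbs-SeaWater (GSW) Oceanographic Toolbox of TEOS-10

# archived field data: Practical Salinity, in-situ temperature, pressure
SP, t, p = 35.0, 10.0, 100.0   # [psu], [deg C], [dbar]
lon, lat = -30.0, 45.0         # position, used to estimate composition anomalies

SA = gsw.SA_from_SP(SP, p, lon, lat)  # Absolute Salinity [g kg^-1]
CT = gsw.CT_from_t(SA, t, p)          # Conservative Temperature [deg C]
rho = gsw.rho(SA, CT, p)              # in-situ density [kg m^-3]
print(SA, CT, rho)
```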

It is important to emphasize once again: in the field, we continue measuring Practical Salinity and in-situ temperature with common CTDs, and we archive these data as they are measured. When calculating the density field and writing papers, we now use Absolute Salinity and Conservative Temperature in place of the former Practical Salinity and Potential Temperature.

3 Water Level and Water Depth Measurement

3.1 Water Level Elevation

Water level elevation, or the height of the water surface relative to an established datum, is a vital parameter for almost all hydro-physical processes in lakes, and its measurement is of utmost importance in lake research. Nowadays it is routinely needed as a parameter in water-quality control and environmental monitoring. Furthermore, it is required for lake modeling, both for model calibration and as a boundary condition. So it has become a routine matter to continuously record the stage of the water level.

In the mid 19th century, reports on measurements of water level variations in lakes were devoted to the investigation of surface seiches. For instance, Nöschel (1854) [46] reported water level measurements in Lake Goktscha (Sevan) in the Caucasus. Moreover, the Russian engineer Stabrowski [66] published in 1857 a small paper on the seiches of Lake Onega, in which he used the term seiche to characterize a periodic oscillation of the water level. The term ‘seiche’ was, however, made popular by Forel (Fig. 28.23) in his monumental work ‘Le Léman’ [19], published in 1895, where he devotes more than 150 pages of volume 2 to their manifestation, mostly in Lake Geneva. He states on page 41 of volume 2: ‘Les seiches sont signalisées pour la première fois en 1730, par Fatio de Duillier, ingénieur des fortifications de Genève’ [19]. Forel’s quotation from Fatio ends with the statement: ‘...Cette sorte de flux et reflux s’appelle à Genève des seiches’.Footnote 12 This shows that the term ‘seiche’ was known in the environs of Lake Geneva and had been in use already in the 18th century, see also, e.g., Jallabert (1742) [30]. On more than 20 pages Forel discusses observations and attempts at physical interpretation, mostly for Lake Geneva, but also for other lakes worldwide. We may quote here (i) Vaucher (1833) [76], a ‘theologist’ and ‘naturalist’, whose work Forel judges to be particularly complete for the time it appeared, (ii) Schulthaiss’s (1549) [61] description of the wonder of Constance in 1549 as a ‘curious’ manifestation of a seiche in Lake Constance, likely the first historical description, and (iii) his bibliography of seiches between 1870 and 1892 with 35 entries, many of them his own contributions and those of Sarasin.

Fig. 28.23

Left Portrait of F.-A. Forel. Right Water painting by Ernest Biélier showing Forel in his working chamber

Francois-Alphonse Forel (1841–1912) was born in Morges, a town on Lake Geneva between the two largest cities of the region: Geneva and Lausanne. Although a professor of medicine at the University of Lausanne, Forel devoted his life to the study of the lake. With his integrated investigations of biology, chemistry, circulation and sedimentation in Lake Geneva he laid the foundations of modern lake studies. His book ‘Le Léman’ (Lake Geneva) is the first and most famous treatise dedicated to the new scientific discipline that he called limnology. In the opening chapter of his treatise he apologized for coining this new term: “This book should be called a limnological monograph. But I have called it differently and have to explain the reasons why I created this word, and apologize for it as far as necessary (...). The subject of this book deals with a part of the Earth and, therefore, is geography. The geography of the oceans is in turn called oceanography. But a lake, as big as it may be, is by no means an ocean. Its limited area gives it a special character which is very different from that of the endless ocean. I had to find a more modest word to describe my investigations, such as the word limnography. But, because a limnograph is a device to measure the water level of lakes, I had to coin the new word limnology. Limnology is in fact the oceanography of lakes.”

(After F.-A. Forel, ‘Le Léman’, Editions Rouges & Cie, Lausanne, 1892–1902. Translation D. A.).

Text: Homepage of Institute F.-A. Forel, University of Geneva

Fig. 28.24

a Report on water level measurements in Lake Geneva in 1949–1951 and b the station for the measurements

A typical view of a station for water level measurements in the middle of the 20th century is presented in Fig. 28.24b. The photo is taken from the report on measurements in Lake Geneva in 1949–1951 (for the report’s cover, see panel a). An example of a record of water level variations from a Lake Geneva campaign, and the device used in the 1951 campaign, called a limnigraph, are shown in Fig. 28.25.

Fig. 28.25

An example of a record of water level variations at the permanent station Genève-Sécheron, and the limnigraph, from the report ‘Les dénivellations du lac Léman. Recherches exécutées de 1949 à 1951. Communications du Service Fédéral des Eaux. Département Fédéral des Postes et des Chemins de Fer’, Berne, 1954

In principle, manual reading of a partially submerged graduated staff gage is the simplest measuring method. The staff gage can be placed vertically or at an incline on a boat ramp or other construction, either permanently or temporarily for a particular study. However, to perform long-term measurements comparable with other records, a very accurate initial fixation of the instrument is needed; special surveys are used to tie the staff gage to an elevation datum such as the mean lake level or an arbitrary reference datum. This difficulty, on the one hand, and the routine need for data, on the other, have resulted in the establishment of systems of automatic stationary measurement stations in many lakes, rivers and channels. The most common stage-recorder devices use a float-type primary converter (like that shown in Fig. 28.24b) or a hydrostatic pressure gage (Fig. 28.27) to measure variations in water surface elevation that are then converted to stage readings.

Float-type converters. Routinely, in studies of large water bodies, the stage height is measured by float-type devices, moored to the bottom and enclosed in pipe stilling wells to attenuate short-term fluctuations of the water surface. If the measurement station is located at the shore, the stilling well has water intakes, as illustrated in Fig. 28.26, and the water level within the stilling well changes in response to water level changes in the main basin. A float, connected to a counterweight by a rope, follows these changes. While the float moves up and down, the rope turns a wheel. A pointer connected with the wheel moves simultaneously and indicates the water level on a scale; in many devices, a pen writes a line on a paper roll. The stilling well damps out short-term variations of the water level due to short-period surface waves and serves to protect the float gages and recording devices. The stage is recorded continuously but can also be AD-converted and telemetered directly to a central location, providing real-time monitoring of stage heights for flood warnings or reservoir operations.

Fig. 28.26

Float-type station for water level measurements: stilling well (1), tube-intake (2), limnigraph (3); after Dimaksian (1972) [15]

Devices with remote transmission were developed as an evolution of float-type gages. These instruments convert a non-electric signal, like the turning angle of a wheel or the linear displacement of the counterweight, into an electric signal and transmit it to a distant receiver by cable or by radio. The value of an electric current or voltage in the output circuit, as well as its parameters (most often the resistance, but also the capacitance or inductance), can be used to this end. In modern instruments, a frequency-type signal is often preferred, because it is the most convenient parameter to transmit [15, 39].

Observation accuracy for these instruments is mainly defined by the accuracy of time registration, instrumental error of the floating unit and delay of the pen response to float motion. Typically, the overall accuracy is estimated to be 1–2 cm.

Hydrostatic water gages. Stage gages of this type record the hydrostatic pressure of a water column. In one modification, the water pressure changes some parameter of a sensor placed at a certain level (or at the bottom) of a basin. In another modification, the sensor is placed at the shore (manometer, membrane, silphon gage), whilst the pressure itself is transmitted through a connecting line. This principle has certain advantages over float-type gages: no wells are needed, and in wintertime possible freezing of the float, which may falsify the recording, is avoided.

One well-known example of the second modification is a gas bubbler, which measures the pressure required to produce bubbles and converts this pressure to submergence depth. Gas-bubbler gages, as shown in Fig. 28.27 (after Dimaksian (1972) [15]), have the advantage of allowing the pressure recorder to be located away from the point of measurement in a shelter or building. In this instrument, air from a balloon or compressor (1) is pumped at a pressure of approximately 20 atm through a tube (5) into an air dome (6) placed in the basin. The pressure in the dome is equal to the hydrostatic pressure of the water around it and is measured, in the instrument presented, by balancing it against the pressure of a column of mercury in the manometer chamber (7). A small float on the surface of the mercury (4) is connected with the pen of a recorder (3). This construction was used extensively in the Netherlands in the 1960s–1970s. The underwater air pipe may be as long as 200–300 m; its length is essentially limited only by the resistance of the pipe to airflow, possible moisture condensation, and gas leakage. The observation accuracy is typically 1–2 cm; however, it depends on the surrounding air and water temperature and on the length of the connecting tube, and, under unfavourable conditions, can be as large as 3–5 cm.Footnote 13

Fig. 28.27

Principal scheme of a bubbler gage: 1 balloon/compressor, 2 manometer, 3 recorder, 4 float, 5 tubes, 6 air dome, 7 chamber. After Dimaksian (1972) [15]

Many types of sensors have been developed to measure the water pressure in situ (the first modification mentioned above). They can be placed on the basin floor or attached to some other (temperature, conductivity, etc.) sensor to monitor temporal variations of the depth. Such instruments convert the indications of the sensor to an electric signal and transmit it to a receiver. The most common types of such pressure sensors use mechanical (silphon) and electro-mechanical primary converters (vibrational; membrane; resistive, inductive or capacitive strain-gage; thermo-electrical, quartz, magneto-resistive, etc.). Among them, the membrane, vibrational and quartz sensors are the most promising: they exploit the dependence of the vibration frequency of the sensor on the pressure; thus the sensors themselves produce a frequency-type output signal. Hence, no other converters are needed, and the digitized signal can be transferred directly to a receiver, which ensures high accuracy of the measurement (error less than 0.01 % of the pressure value). A sample construction of a quartz crystal resonator and absolute pressure transducer is presented in Fig. 28.28.

Pressure sensors can be used not only for water level variation and depth measurements, but also for the investigation of turbulent processes, analyses of the microstructure of the current field, surface wave heights and stroke pressure.

3.2 Measurement of the Water Depth

Measurement of the water depth, bottom profile and bathymetry is usually the very first step of the limnological reconnaissance of a water basin. More specifically, accurate monitoring of the depth is required for the everyday practice of navigation and fishing. Knowledge of the bathymetric field is an absolute necessity for any biological, chemical or physical field investigation, as well as for all kinds of numerical modelling.

Fig. 28.28

Schematic of absolute-pressure transducer and quartz-crystal resonator used in the Digiquartz pressure transducer of Paroscientific, Inc. (after Baker (1981) [5]). http://ocw.mit.edu \(\copyright \) Creative Commons BY-NC-SA 396–433, reproduced with permission

Methods of water-depth measurement can be subdivided by the principle of operation into (i) primary mechanical methods, (ii) echo-sounding and (iii) use of pressure sensors.

Primary mechanical methods have been used for ages: these are measurements using the punt-pole and the sounding lead (Fig. 28.29). A punt-pole is a smooth straight wooden pole, 4–6 cm in diameter and typically 4–8 m long, with a steel chock at the lower end and level marks every 5–10 cm. A sounding lead is an iron or lead weight (2–5 kg) attached to a lead-line graduated in decimeters and meters. Typically, sounding leads can be used for depths down to 25 m in rivers and down to 100 m in still water. To increase the accuracy, one can use heavier weights; a winch is then used, and the whole instrument is called a sounding machine. The accuracy of these methods is rather low (with errors of decimeters) and depends on the water current, possible boat drift, the weight and shape of the lead, and other circumstances. However, the tools are so simple, reliable and convenient under field conditions that no vessel puts to sea without them, let alone small boats and crafts.

Fig. 28.29

Primary mechanical tools for water depth measurement: punt-pole, with steel chock (1) or pan (2) for crumbly bottom sediments, and sounding lead (3) with end loop (4) and lead-line (5). Adapted from Dimaksian (1972) [15]

At present, the most common method of depth measurement is echo-sounding. The instruments (called echo-sounders) transmit a burst of sound at 10–30 kHz and listen for the echo from the basin floor. The time interval between the transmission of the pulse and the reception of the echo, multiplied by the velocity of sound, gives twice the depth of the basin. Even though the speed of sound depends on the water density (i.e., on temperature, salinity and depth), the accuracy of these instruments is rather high (relative errors are 1–2 % of the water depth). The method has other valuable advantages as well: the data are provided immediately while the ship continues its motion, and not just a single point but a bottom profile, or even a (more or less) wide bottom swath, is sounded at once.
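The conversion from travel time to depth is a one-liner; in the sketch below, the sound speed of 1450 m s\(^{-1}\) is merely an assumed representative fresh-water value (the actual value varies with temperature, salinity and depth, as noted above):

```python
def echo_depth(two_way_time, sound_speed=1450.0):
    """Depth [m] from the two-way travel time [s] of an echo-sounder pulse:
    the pulse travels down and back, so the one-way path is c*t/2."""
    return 0.5 * sound_speed * two_way_time

print(echo_depth(0.040))  # a 40 ms round trip corresponds to ~29 m depth
```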

Satellite altimetry may also be mentioned here as the most contemporary source of basin-wide data on surface elevation. Satellite altimeters register the shape of the water surface; however, ground-truth observations are still required to convert this information into usable digital form. In the ocean, the water surface elevation has been shown to be very similar to the shape of the floor, so it is used to fill the gaps in the bathymetry field between ship tracks [68]. Such data are not yet in common use, mainly because of the difficulties of data acquisition and handling. Their great advantage, however, is that they allow analysis of the entire basin at once.

4 Optical Measurements

If I could have guessed all that would be deduced from the results of my experiment,

I'm sure,

I'd never have made it!

A. Michelson,

1852–1931

Optical characteristics of the water were most probably the very first physical observations made in lakes. Determinations of the water colour and transparency have been well known and widely applied for centuries. Nowadays, optical investigations have been significantly extended by accurate measurements of the light scattering, attenuation and absorption characteristics, of the turbidity and of the concentration and composition of suspended or dissolved solids, and by the investigation of the current structure and turbulent diffusivity using optically traced dyes. Recently, various oceanographic applications of lasers have complemented traditional water optics and opened new directions, e.g., fine laser tomography of the density field and measurement of the water current and its fluctuations by optical tools. Here, we consider mainly traditional optical applications in lake hydro-physics, which are targeted at (i) the determination of the optical properties of the water and of the distribution of the field of light (mostly daylight) in natural reservoirs and (ii) some applications of optical instruments in hydro-physical research.

4.1 The Optical Properties of Natural Water and the Distribution of the Field of Light

These are physically defined by three optically active components of the water: pure water, dissolved (inorganic and organic) substances and suspended (mineral and organic) matter. In addition, air bubbles and inhomogeneities of the water density caused by turbulence influence the propagation of light in a lake.

Fig. 28.30

Hue scale for waters of low transparency

The most easily determined and most basic optical properties of water in physical limnology were historically, and still are, the water colour according to the hue (colour) scale and the water transparency as determined by the Secchi disc.

The water colour differs from basin to basin and can be used as a qualitative characterisation of natural water. A special scale, now called the Forel–Ule scale, was introduced at the end of the 19th century: a set of numbered tubes with well-defined coloured liquids is compared with the water under observation. The number of the tube corresponding best to the water specimen defines ‘the colour of the water according to the scale of chromaticity’. Later, one more scale—the platinum-cobalt scale—was introduced for transparent water; the Forel–Ule scale is now used for water of low transparency [6, 39], see Fig. 28.30.

Fig. 28.31

Left Portrait of Angelo Secchi (http://www.faculty.fairfield.edu/). Right The Secchi disc is a round white plate (sometimes with alternating white and black arcs) of 25 cm in diameter that is lowered through the water until it is no longer visible. The marks on the rope allow one to estimate this depth

Angelo Secchi, S. J. (1818–1878) was born in Reggio, Italy, and died in Rome. He was a physicist and mathematician of remarkable ability with a passion for astronomy. Father Secchi worked in stellar spectroscopy, made the first systematic spectroscopic survey of the heavens, pioneered the classification of stars by their four spectral types, studied sunspots and solar prominences, photographed the solar corona during the eclipse of 1860, and invented the heliospectroscope, star spectroscope, telespectroscope and meteorograph. He also studied double stars, weather forecasting and terrestrial magnetism. He became director of the Vatican Observatory at the age of 32 and dedicated himself energetically to the task. He acquired a Merz equatorial telescope with an aperture of 24 cm and a focal length of 435 cm, an excellent instrument for those times. Father Secchi decided to transfer the observatory to the top of the Church of St. Ignatius, a perfect foundation for an observatory, because the church had originally been designed to support a dome 80 m high and 17 m wide. This Pontifical Observatory, famous for the discoveries of Father Secchi, was certainly better known to many generations of Romans for the simple, practical, daily service it offered them—it gave them the exact time of day. Angelo Secchi had regular teaching assignments in astronomy and physics at the Gregorian University. He observed double stars, nebulae, planets and comets, and discovered three comets in the years 1852–1853. He studied terrestrial magnetism and meteorology; he was in charge of setting up a new triangulation base on the Via Appia; he went to various cities to repair or install water systems; he established lighthouses in the ports of the Papal States; and he even had to look after the positioning of solar clocks. He invented a new instrument, the so-called Secchi disk, which is widely used for the estimation of water transparency to this day. In addition to his great works on the sun, on the fixed stars, and on the unity of physical forces, he published about 730 short papers in various scientific journals.

The Secchi depth, determined by the Secchi disc, is the characteristic parameter of the water transparency (turbidity). The Secchi disc was devised in the 1860s by the Italian astronomer Angelo Secchi while he worked in the Mediterranean aboard the papal vessel Immacolata Concezione [6]. Its strength is its simplicity and the possibility to compare data collected with the same apparatus for more than a century. Presently, the Secchi disc (Fig. 28.31) is a round white plate (sometimes with alternating white and black arcs) of 25 cm diameter that is lowered through the water until it is no longer visible; and that depth is called the Secchi depth.

Usually, in field practice, the Secchi depth and the water colour are measured simultaneously, as follows. One lowers the Secchi disc down to the depth where it is no longer visible and moves it up and down several times, in order to determine more exactly the depth of appearance/disappearance of the disk. The averaged value, usually taken with an accuracy of \(\pm \)10 cm, is recorded as the Secchi depth. Then the disk is lifted to half of this depth, and the water colour is determined by comparing the colour of the water, as seen against the background of the white disc, with the coloured test-tubes of the scale of chromaticity placed nearby. Even though the determination of both the water colour by the hue scale and the water transparency by the Secchi disc is rather subjective, they are routinely used in practice because they immediately and easily provide reliable data about the water mass near the shipboard; they help to define the general hydro-physical situation and to find the proper place for instrumental measurements.

Data on the attenuation, absorption and scattering, and on the spectral and spatial distributions of the field of light can be used for the analysis of a number of physical processes in lakes. The relative water transparency, determined by the Secchi disc, and the water colour are good indicators of water masses. For example, a water body formed by river run-off can, as a rule, be easily distinguished in lakes by its transparency and water colour. The vertical hydrobiological structure of the water body also manifests itself in the optical characteristics of the water: the relation between the structure of the vertical current and the distribution of the light attenuation is an experimentally established fact (see, e.g., Fedorov and Ginzburg (1988) [16]). In particular, an accumulation of suspended particles very often forms in the vicinity of a density jump. Layers of low transparency may trace processes of horizontal turbulent diffusion during the penetration of a stream current into a region of otherwise calm water of different transparency. Observations of the dynamics of nepheloid layers (layers of different optical qualities) may provide information about the presence of internal waves. For many lakes, the concentration of suspended substances is relatively small, so that suspended particles do not significantly influence the water dynamics. If so, data on the particle concentration and on the distribution pattern of suspended matter allow one to deduce information about turbulent mixing; in particular, they allow estimates of the coefficients of vertical and horizontal turbulent diffusion. The features of light dispersion in natural basins also provide information about the detailed characteristics of the suspended matter: kind of matter, grain size, concentration. An example of an interesting physical process deduced from optical measurements is the daily variation of the water transparency observed in the ocean, where the light attenuation decreased during the day and increased during the night (Fedorov and Ginzburg (1988) [16]). A possible reason is the strengthening of the upper-layer stratification due to solar heating, which entails a reduction of the turbulence intensity and settling of suspended matter.

For quantitative measurements of the light variation, transparency meters and nephelometers are widely and successfully used in all natural reservoirs. The former typically record the light attenuation in the water, whilst the latter measure the light scattered by particles. This division is rather conventional, since both instruments provide physically the same information on the field of light and suspended matter; however, the range of data variation is slightly different. The upper bound of the linear relationship between the output signal and the concentration of suspended matter does not exceed \(5 \times 10^{-4} \,{\text {g cm}}^{-3}\) for most nephelometers, but is \((5-10) \times 10^{-4}\, {\text {g cm}}^{-3}\) for transparency meters. Despite the high processing speed of these instruments, vertical soundings at velocities of more than 1 m s\(^{-1}\) can be distorted, especially in areas with high density gradients. The reason is usually related to design features of the particular instrument: the sampled water volume is exchanged only slowly during the measurement, sometimes at different rates on descent and ascent.

In lakes, transparency meters of light-weight construction are applied. For spectral observations of daily and seasonal variations of the horizontal underwater and under-ice irradiance one uses a device called a photo-pyranometer, or photometer. The same instrument, equipped with a scanner, is applied for measurements of the angular distribution of the brightness. Modern optical field measurements are also supported by equipment which provides information in real time: laser probing, observation of the index of water chromaticity, spectral analysis of the back-radiation of the water, etc. To provide absolute values, these devices require calibration against data obtained in the same water by other methods.

Fig. 28.32

Comparison of the Secchi depth with the data taken by more sophisticated instruments. Adapted from Martyn and McCutcheon (1998) [39], data originally reported by Williams (1980) [79] \(\copyright \) Copyright 1998, reproduced with permission of TAYLOR & FRANCIS GROUP LLC—BOOKS in the format Textbook via Copyright Clearance Center

Photo-pyranometers provide numerical data which are close in physical interpretation to the definitions of the water colour and the Secchi depth. Indeed, both the Secchi depth coupled with the water colour, and data on the variation of light with depth measured by a photometer, can be used for the evaluation of the depth of short-wave radiation penetration into the lake, i.e., the thickness of the photosynthetic layer. The relationship from which the vertical radiation profile is computed is Beer’s law. It states that the decrease of the radiation \(H_{sw}(z)\) per unit length is proportional to the radiation itself. Thus, for steady conditions and a strictly one-dimensional variation,

$$\begin{aligned} \frac{dH_{sw} (z)}{dz} = -\varepsilon H_{sw} (z) , \qquad H_{sw}(0) = H_{sw}^0 , \end{aligned}$$
(28.10)

where \(z\) is the downward distance counted from the water surface, \(H_{sw}^0\) is the value of the radiation at the water surface and the coefficient \(\varepsilon \) [\(\mathrm{{m}}^{-1}\)] is called the light extinction or light attenuation coefficient. If \(\varepsilon \) is constant, integration yields

$$\begin{aligned} H_{sw}(z) = H_{sw}^0 e^{-\varepsilon z} . \end{aligned}$$

In this form \(\varepsilon \) has a simple physical interpretation: at the depth \(z_{e} = 1/{\varepsilon }\) the (short-wave) light intensity is \(e\) times less than at the surface. This depth is also known as the \(e\)-folding depth of light absorption. Obviously, the light absorbed in a particular layer is simply the difference between the light at the upper and lower boundaries of that layer. Typical extinction coefficients for a number of lakes are provided in Table 28.1 (modified from Martyn and McCutcheon (1998) [39]).

When \(\varepsilon \) is not constant, but a function of \(z\), \(\varepsilon = \varepsilon (z)\), then integration of Eq. (28.10) yields

$$\begin{aligned} H_{sw}(z) = H_{sw}^0 \exp \left( - \int \limits _0^z \varepsilon (\xi )\, d\xi \right) \!. \end{aligned}$$

Such a situation may occur, for instance, when a large algal bloom shadows a region and increases the value of the extinction coefficient.
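A minimal numerical sketch of Eq. (28.10) with a depth-dependent \(\varepsilon (z)\) (the grid, the profile values and the ‘bloom layer’ below are purely illustrative):

```python
import numpy as np

def light_profile(H0, eps, z):
    """Short-wave radiation vs. depth, Beer's law with variable extinction:
    H(z) = H0 * exp(-int_0^z eps(xi) d xi), trapezoidal integration on grid z."""
    integral = np.concatenate(
        ([0.0], np.cumsum(0.5 * (eps[1:] + eps[:-1]) * np.diff(z))))
    return H0 * np.exp(-integral)

z = np.linspace(0.0, 20.0, 201)              # depth grid [m]
eps = np.where((z > 3) & (z < 6), 1.5, 0.3)  # bloom layer between 3 and 6 m
H = light_profile(1.0, eps, z)
# for constant eps = 0.3 m^-1 the e-folding depth would be 1/0.3 ~ 3.3 m
```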

Comparison of the Secchi depth with data taken by more sophisticated instruments delivers empirical formulae to estimate the attenuation (extinction) coefficients for the visible part of the spectrum. One example is the relation developed by Williams (1980) [79], shown in Fig. 28.32 (adapted from Martyn and McCutcheon (1998) [39]). This figure justifies a power-law relation between the Secchi-disc depth, \(z_s\) (in m), and the extinction coefficient, \(\varepsilon \),

$$\begin{aligned} \varepsilon = 1.1\, z_s^{-0.73} \qquad ({\text {m}}^{-1}). \end{aligned}$$
(28.11)

The data scatter about this relation with a coefficient of determination \(R^2 = 0.89\) and a root mean square deviation of \(0.081\) [79].
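As a worked example combining Eq. (28.11) with Beer’s law (the Secchi depth chosen below is illustrative; the factor \(\ln 100\) gives the depth of 1 % surface light, anticipating the euphotic-zone rule quoted below):

```python
import math

def extinction_from_secchi(z_s):
    """Extinction coefficient [m^-1] from the Secchi depth z_s [m], Eq. (28.11)."""
    return 1.1 * z_s**(-0.73)

z_s = 5.0                           # measured Secchi depth [m]
eps = extinction_from_secchi(z_s)   # ~0.34 m^-1
z_euphotic = math.log(100.0) / eps  # depth of 1 % surface light
print(eps, z_euphotic)              # ~13.6 m, i.e. roughly 3 Secchi depths
```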

Table 28.1 Examples of light extinction coefficients
Table 28.2 Characteristics of Baklan and Grif

It was also found that the euphotic zone (the surface water layer down to the depth to which only 1 % of the surface illumination penetrates) is roughly three times the Secchi depth (Baretta-Bekker et al. (1998) [6]). As follows from Table 28.1 and relation (28.11), the Secchi depth in pure water should be about 30–35 m, and in highly stained natural waters ca. 20 cm. Field experience shows that in the cleanest and most transparent waters of Lake Baikal the Secchi disk is visible down to 19–21 m (Pokatilova (1984) [54]); in the second largest pre-alpine lake, Lake Constance, with some water of glacial origin, the Secchi depth is at most 15 m (Stabe (1986) [65]; Tilzer (1983) [73]).

Fig. 28.33

a Dependence of the light attenuation coefficient on the wave length for different components of natural water. b Light attenuation in Baltic sea water, with general behaviour in accordance with curve (I) for pure water, but with values showing the presence of dissolved and suspended matter. Adapted from Monin (1978) [41] \(\copyright \) Nauka Publishing House, reproduced with permission

The light attenuation coefficient, \(\varepsilon \), also depends on the wave length, \(\lambda \), of the light. The function \(\varepsilon (\lambda )\) is different for pure water and for water loaded with dissolved and suspended matter. Typical curves \(\varepsilon (\lambda )\) for different components are presented in Fig. 28.33a: curve (I), with a sharp minimum of the light attenuation in the blue part of the spectrum, is for pure water; curve (II), with a significant rise of \(\varepsilon \) in the violet and ultra-violet parts, describes \(\varepsilon (\lambda )\) for water with dissolved organic matter; curve (III) is for water loaded with relatively small mineral particles; finally, curve (IV) is for water loaded with large particles of biological origin (diatom algae, foraminifera, detritus, etc.). The contribution of each of the components (II), (III) or (IV) depends also on its concentration. Figure 28.33b displays the light attenuation in Baltic sea water, with a general behaviour in accordance with curve (I) for pure water, but with values that show the presence of dissolved and suspended matter (after Monin (1978) [41]).

Fig. 28.34

Spectral distribution of the light attenuation index \(\varepsilon ~[{\text {m}}^{-1}]\) in lakes and rivers: 1–3—Southern Baikal, 4—river Irkut, 5—river Chulym (Siberia), 6–7—Sheksna storage pond, 8—Lake Beloe. Numbers near the curves indicate the water transparency by Secchi disc in meters (adapted from Pokatilova (1984) [54]) \(\copyright \) Nauka Publishing House, reproduced with permission

Fig. 28.35

Light absorption spectra at depths of 50 m (a) and 200 m (b), measured in marine water. The solid curve gives the light absorption additional to that of pure water; oblique hatching shows the absorption by suspended matter; double hatching, the absorption by dissolved substances (adapted from Monin (1978) [41]) \(\copyright \) Nauka Publishing House, reproduced with permission

Examples of field measurement data for different lakes and reservoirs are given in Fig. 28.34 showing the range of variability of the light attenuation index in natural waters. These figures demonstrate that, whereas the coefficient of extinction in natural waters depends on wave length, this dependency is not dramatic, and selection of a mean value that is representative for all wave lengths may be justified.

Analysis of the variation of absorption spectra with depth is also a method to obtain information about dissolved and suspended matter. As an example, Fig. 28.35 shows light-absorption spectra for the depths 50 m and 200 m, measured in marine water.

Another optical characteristic of water is the dispersion (scattering) of light. It can arise from molecular dispersion or from dispersion by suspended particles. The former, molecular dispersion, may arise from three kinds of fluctuations: density fluctuations, anisotropic orientation of the water molecules, and fluctuations of the concentration of dissolved substances. The latter, dispersion by suspended particles, depends on the concentration of particles, the grain composition, the shape and orientation of the particles, and their refractive index. From an optical point of view, grains within the size range \(10^{-2}\)–\(10^{1}\,{\mu }{\text {m}}\) are of significance. Smaller particles do not influence the optical characteristics of water; the concentration of very large particles, in turn, is usually insignificant, so their influence on the optical properties can also be neglected.

The index of light dispersion, \(\sigma \), and the angular distribution of the scattered light are the measured parameters. As an example, the index of light dispersion (in arbitrary units) in the near-shore zone of the Bay of Bengal is presented in Fig. 28.36. The sharp increase of the light dispersion near the surface is due to river run-off.

Fig. 28.36

Index of light dispersion (in arbitrary units) in a near-shore zone of the Bay of Bengal (adapted from Monin (1978) [41]) \(\copyright \) Nauka Publishing House, reproduced with permission

4.2 Instruments for Optical Measurements

The field of the variations of the light intensity due to the properties and concentrations of dissolved and suspended matter can be measured by an instrument called a nephelometer, turbidimeter or tyndallometer. Modern instruments combine advanced laser technology and fiber optics. Since they are rather simple to use and offer high accuracy, high sensitivity and the fast response of a particle counter, they are widely used for water quality monitoring and drinking water control. However, nephelometers require verification and calibration on site; thus, to obtain not only relative but also absolute quantitative values, simultaneous classical water sampling is required (Fig. 28.37).

Fig. 28.37

Construction of a photoelectrical meter of suspended matter concentration with a halogen radiation source (a) and its calibration curve (b): 1—source of infrared radiation, 2—window, 3—removable conic nozzles, 4—bearing rod, 5—container of the photodetector (Samoliubov (1999) [60]) \(\copyright \) Publishing House Nauchny Mir, reproduced with permission

Fig. 28.38

Example of a typical nephelometer of backward scattering (a) and its calibration curve (b): 1—head of the sensor, 2—infrared filter, 3—photodetector, 4—infrared radiant diode (Samoliubov (1999) [60]) \(\copyright \) Publishing House Nauchny Mir, reproduced with permission

Fig. 28.39

Principal arrangements for measurements of volume scattering function. Adapted from Haltrin et al. (1996) [26], with changes

Physically, the nephelometer measures the scattering of light. It detects the scattering properties by measuring the light directed into the monitored water and scattered by particles of suspended matter backward or at some angle to the initial direction of propagation (for examples at \(90^{\circ }\) and \(45^{\circ }\), see Figs. 28.38 and 28.39). Some instruments split the scattered light into red (wave length 700 nm), green (550 nm) and blue (450 nm); other instruments use 12 wavelengths in the range 350–700 nm (Kireev et al. (1985) [32]). The detected light intensity is proportional to the turbidity of the water. A second light detector may be used to correct for light intensity variations, colour changes, or lens fouling. The readings are recorded either in so-called nephelometric turbidity units (ntu) or in parts per million (ppm).

Figure 28.37 displays a scheme and calibration curve of a photoelectrical device for the measurement of the concentration of suspended matter. It has a variable distance (1–5 cm) between the halogen radiation source and the removable conic nozzles. The nozzles have a quartz window, through which the beam passes to a photo-detector (Samoliubov (1999) [60]). Figure 28.38 shows an example of a typical nephelometer of backward scattering (Samoliubov (1999) [60]) and its calibration curve.

The most complete understanding of the light-scattering properties of water can be obtained by measurements of the volume scattering function over the full angular range, from several tens of arc minutes to angles close to \(180^{\circ }\). A typical scheme for the measurement of the volume scattering function is illustrated in Fig. 28.39 [26]. Panel (a) presents the principle typically used in such instruments; panel (b) shows a variant in which the light source and the photo-detector are fixed, and the angular scanning is implemented via the rotation of a special periscope prism with three reflecting facets. The shape of the prism and its precisely adjusted dimensions allow detection of the scattered radiance over practically the full angular range, including direct measurement of the attenuation of the beam.

The main problem in the development of the photoelectronic circuit of a polar nephelometer is the very large dynamic range of the scattered light intensity. Due to the elongated shape of the volume scattering function, this range may span seven or more orders of magnitude. To provide accurate measurement of the intensity over such a wide range, light attenuation is generally applied by means of a combination of a diaphragm and standard neutral filters, which reduce the light flux when needed. Because of the complexity of the design, this method of measurement is preferred for instruments measuring the volume scattering function at a few discrete angles. In instruments with continuous angular scanning, photomultipliers are usually employed in a logarithmic mode, with negative feedback through the power supply of the photomultiplier. This photometer scheme, despite its relative simplicity, allows easy expansion of the dynamic range to the required 7 to 8 orders of magnitude; however, the accuracy of the resulting measurements is rather low and the stability is poor (Haltrin et al. (1996) [26]).

Measurement using fluorescent dyes. Intense fluorescence is the principal property that makes commercial dyes suitable for use in water tracer studies. Most commonly, rhodamine (B and WT) and fluorescein are used in field practice. These materials fluoresce, i.e., emit radiation in the form of light upon receiving radiation from an external source; the emission terminates when the radiation source is removed. Since some energy is lost in the process, the light is emitted at a lower frequency and longer wavelength than that absorbed. The more fluorescent a material is, the greater the percentage of the received energy that is re-emitted.

The characteristics of a dye directly affect the amount of light absorbed and emitted at a particular wavelength, as well as the difference between the absorbed and emitted wavelengths. Depending on the molecular characteristics, different dyes have different characteristic wavelengths at which the absorbed and emitted light are at a maximum. For example, rhodamine dyes (such as rhodamine B and WT) have their greatest excitation at a wavelength in the green band at 555 nm and emit light in the yellow-green band near 580 nm. The relative intensity of the emitted light is also a function of the amount of fluorescent material present. Therefore, the combination of wavelengths and intensities of the absorbed and emitted light can be used to measure the amount of material present.

A filter fluorometer is a spectrometer that measures the relative intensity of light emitted by a fluorescent substance. A fluorometer consists of a light source, a primary filter, a sample holder, a secondary filter, a sensing device and a readout device, Fig. 28.40. The light source varies depending on the characteristics of the dye to be measured. Light from the source passes through the primary filter, which limits the light reaching the sample to a narrow band centered about the wavelength of maximum excitation of the dye. For measurements, the filtered light is directed through a sample placed in a holder of known volume and optical properties. The light emitted from the sample passes through the secondary filter, which limits the light reaching the photomultiplier to the narrow range fluoresced by the dye. The photomultiplier detects the incident radiation and produces an electronic signal. The signal intensity is converted to the amount of dye present in the sample. The types of filters, sources and photomultipliers vary with the instrument and the dye that are used [39].

Fig. 28.40

Principal scheme of a simple filter fluorometer (after Martyn and McCutcheon (1998) [39]) \(\copyright \) Copyright 1998, reproduced with permission of TAYLOR & FRANCIS GROUP LLC—BOOKS in the format Textbook via Copyright Clearance Center

Fig. 28.41

Principal scheme of a lidar—a device for remote sensing of atmospheric or water layers

Applications of lasers. Remote laser analyses of water media using light interference, coherent spectroscopy, and Doppler frequency or phase spectroscopy are very refined methods to measure the composition of water and suspended matter, the concentration of small admixtures, water density variations, particle velocities, the spatial configuration of surface waves, etc. They have many advantages in field practice, such as remote operation and on-line data supply, a handy data format, high sensitivity and accuracy, and both universality and high selectivity owing to a wide range of exploitable wavelengths and registration methods.

The principal scheme of a lidar (light detection and ranging), a device for remote sensing of atmospheric or water layers, is presented in Fig. 28.41. It works much like a radar, using light instead of radio waves. The control unit gives, via the power module, a signal to the laser, which emits a frequency- and time-modulated beam. The features of the (complicated) modulation of the emitted signal depend on the goal of the measurement, the distance to the sounded layer and the characteristics of the particular instrument. The beam, reflected from the particles of the sounded layer, is guided through a telescope and spectral filters to the photodetector and processing unit. The direct result of the measurement—the distribution of the number of photons versus their wavelengths—together with the information about the emitted signal, is processed with the aid of a computer and converted into data on particle concentrations, their velocities, the composition of admixtures, etc. In water, the most commonly used measuring methods are based on mechanisms of re-emission of light at a frequency shifted relative to that emitted by the laser. One example (described above) is the use of fluorescence, where the emitted light is red-shifted. When several substances fluoresce, the fluorescence excitation spectrum (i.e., the dependence of the fluorescence intensity on the wavelength of the exciting radiation) is used to identify the admixtures. These methods can detect concentrations of suspended matter as small as \(10^{-10}\) moles per litre (or 2 particles of the admixture per 1 million particles of the medium). The use of Raman (combinational) scattering can provide even higher sensitivity, up to 1 particle of admixture per 1 million particles of the environment.

5 Measurement of Turbulence

Big whorls have little whorls,

Which feed on their velocity;

Little whorls have smaller whorls,

And so on unto viscosity.

Richardson

5.1 Turbulence in Lakes

Direct observations show that flows of a fluid or gas can be of two fundamentally different kinds: quiet and smooth, so-called laminar flows, and their antipode, turbulent flows, in which current speed, pressure, temperature and other hydro-physical parameters fluctuate chaotically, varying seemingly randomly in space and time. The quantitative measure for this distinction is the Reynolds number. Physically, it characterizes the relative balance of inertial and frictional forces and, in order of magnitude, can be expressed as

$$\begin{aligned} Re = \frac{U^2}{L} : \frac{\nu U}{L^2} = \frac{UL}{\nu } , \end{aligned}$$

where \(U\) is a typical scale of the current speed, \(L\) a typical length scale and \(\nu \) the kinematic viscosity.

Fig. 28.42

Typical example of a measured signal: a time variations of the northward component of the current speed over a period of 100 hours, measured by ADCP in the middle of Lake Constance at a depth of 2 m on 24–28 October 2001 (courtesy of A. Lorke, data delivered to ConstanceDataBand); b water temperature in a laboratory flume, measured with a time step of 5 ms

Flows with small Reynolds numbers, for which viscous frictional forces are large compared to the inertial forces, tend to be laminar, whilst large Reynolds numbers correspond to turbulent motions. Typically, motions are laminar for \(Re<{1000}\) and turbulent for \(Re>{1000}\).Footnote 14 It is easy to see that in natural basins like lakes or estuaries, where typical length scales of flows are from meters to kilometers and typical velocities are centimeters to meters per second, the Reynolds number

$$\begin{aligned} Re = \frac{UL}{\nu } \cong \frac{(10{-}100)\,\text {m} \times (0.1{-}1)\,\text {m}\,\text {s}^{-1}}{10^{-6}\,\text {m}^2\,\text {s}^{-1}} \sim 10^{6}{-}10^{8} \end{aligned}$$

is very large. So, in lakes we almost exclusively observe (more or less developed) turbulent processes. A typical example of a measured signal is presented in panel (a) of Fig. 28.42. It shows the time variations of the northward current component as measured by an ADCP probe in Lake Constance. This graph was generated from data collected at time intervals of 10 minutes, but it is clear that on longer time scales, say 1 h, the northward velocity can be seen as a smoother process (shown as the dashed curve) with ‘chaotic’ fluctuations superimposed on it. In panel (b) of Fig. 28.42 a similar curve is shown, but now with a time step of 5 ms. In this case, the process in focus is smooth on time scales of seconds, and fluctuations arise at milliseconds. It transpires that the division of a measured process into smooth and fluctuating contributions is a matter of the choice of the time (or length) scale that one wishes to resolve.
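The scale-dependent splitting just described can be sketched in a few lines of Python; the moving-average window is precisely the resolved time scale one chooses (the synthetic series below is illustrative, not the Lake Constance data):

import numpy as np

def decompose(signal, window):
    # Split a series into a running mean and 'chaotic' fluctuations;
    # the window length (in samples) fixes the resolved time scale.
    kernel = np.ones(window) / window
    mean = np.convolve(signal, kernel, mode="same")
    return mean, signal - mean

# Synthetic record: a slow oscillation plus random noise, sampled
# every 10 minutes (600 samples = 100 h, as in Fig. 28.42a).
rng = np.random.default_rng(0)
t = np.arange(600)
u = 0.05 * np.sin(2 * np.pi * t / 144) + 0.01 * rng.standard_normal(600)
u_mean, u_prime = decompose(u, window=6)   # 6 samples = 1 h averaging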

Fig. 28.43

Spectral distributions of current speed \(S(\omega )\), in doubly logarithmic presentation, for a Lake Ladoga (Filatov (1983) [17]) and b Lake Ontario (Palmer (1973) [53]). \(\copyright \) J. Geophys. Res., reproduced with permission

It is natural, but not necessary, to describe such signals by statistical characteristics, e.g. spectra or probability densities associated with a given variable. On a more mundane level, the process under consideration, when recorded over a sufficiently long time, may be regarded as a (quasi-)periodic function and (after subtracting a linear trend) subjected to Fourier transformation. The time series is then transformed into a spectral distribution in which the amplitudes (or, more commonly, their squares) of the Fourier components are plotted against their frequency, usually in doubly logarithmic representation. Such spectral distributions, calculated for currents measured in lakes Ladoga and Ontario, are presented in Figs. 28.43 and 28.45. The same analysis, performed for time series of the water temperature from two thermistors placed in Lake Zurich at 14 m and 30 m depth, is given in Fig. 28.44 (Hutter and Vischer (1987) [27]).
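In practice, such a spectral distribution can be computed from any sampled record in a few steps: remove the linear trend, apply a Fourier transform, square the amplitudes, and plot on doubly logarithmic axes. A minimal Python recipe (an illustration, not the processing used in [17] or [53]):

import numpy as np
from scipy.signal import detrend

def power_spectrum(series, dt):
    # Frequencies and squared Fourier amplitudes of a series sampled
    # at interval dt; the linear trend is subtracted first.
    x = detrend(series)
    amps = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=dt)
    return freqs[1:], np.abs(amps[1:])**2   # drop the zero frequency

# Usage, e.g. with the series u from the previous sketch:
# f, S = power_spectrum(u, dt=600.0)   # dt = 10 min in seconds
# then plot S versus f on doubly logarithmic axes.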

Fig. 28.44

Spectral distribution of time series of water temperature for two thermistors placed in Lake Zurich at 14 and 30 m depth (Hutter and Vischer (1987) [27]). \(\copyright \) Elsevier, reproduced with permission

The spectral plots constructed from measured time series provide information about the distribution of the periodic processes contained in a given time series. The long-period components usually appear with large amplitudes; as the saying goes, they are the high-energy components. Shorter-period processes contribute with less energy, and the ‘chaotic’ high-frequency components are represented by the ‘tail’ of the spectrum, see Fig. 28.43. To separate the two, one must construct the time series of the mean or averaged process \(\bar{f}(t)\) and thereby automatically decide which time and length scales are significant; thus

$$\begin{aligned} f(t) = \bar{f}(t) + f'(t) , \end{aligned}$$

where \(f(t)\) is the original time series, whilst \(\bar{f}(t)\) and \(f'(t)\) are the mean and the turbulent fluctuations, respectively. The viewpoint now is that \(f(t)\) is a stochastic function.

There are many physically distinguishable ways to calculate the mean value, \(\bar{f}\), of a stochastic function \(f\). However, experience shows that turbulent heterogeneities have time and space scales that are several orders of magnitude larger than those typical of molecular motions. Thus, the smallest scales of the turbulent heterogeneities are seconds and millimeters (see, for example, Monin and Yaglom (1965) [40]).Footnote 15 This fact is of utmost importance for both theoretical and experimental investigations of turbulence. On the one hand, at distances comparable to the dimension of the smallest heterogeneities and for time periods comparable to the minimum periods of turbulent fluctuations, all hydrodynamic fields vary smoothly and can be described by differentiable functions, i.e. the description of turbulent flows by the usual differential equations of hydromechanics is justified. On the other hand, field turbulimeters must resolve fluctuations down to (fractions of) millimeters.

Time and space variations of currents at any scale can be represented in the form \(f = \bar{f} + f'\), in which \(\bar{f}\) explicitly depends on the chosen averaging length/time scales; this decomposition automatically selects the character of the turbulence: large-, meso- or small-scale. In this way, e.g., meanders of lake-scale currents, large gyres and riverine water lenses in a lake may be considered as manifestations of large-scale turbulence. Typically, these fluctuations have a size larger than the thickness of the epilimnion, i.e. they extend over more than a few dozen meters, and they have typical time scales of hours. Spectra of large-scale turbulence observed in Lake Ladoga (Filatov (1983) [17]) are given in Fig. 28.45. They characterize stratified (a) and homothermal (b) conditions, as calculated on the basis of measurements by current meters placed on 16 moorings; the two spectra are very similar.

Fig. 28.45

Spectra of large-scale turbulence observed in Lake Ladoga for stratified (a) and homothermal (b) episodes as calculated on the basis of measurements by current meters placed on 16 moorings (Filatov (1983) [17])

Conventionally, meso-scale lake turbulence covers fluctuations due to Langmuir circulations, surface and internal wave breaking, wind pulsations, current instabilities near coastal or bottom topographic obstacles, etc., which have length scales of tens to hundreds of meters and time periods of up to tens of minutes. Finally, small-scale turbulent processes, often simply called turbulence, have smaller time and space scales, i.e., seconds in time and centimeters to millimeters in space. They are at the very end of the energy cascade,Footnote 16 so that all hydrodynamic processes, dissipating with time, contribute to these micro-fluctuations. Figure 28.43 illustrates small-scale turbulence by spectra derived from measurements of water currents in lakes Ladoga (Filatov (1983) [17]) and Ontario (Palmer (1973) [53]).

An extremely high intermittency in time and three-dimensional space is one of the most characteristic features of turbulence in natural basins. For small-scale turbulence, oceanographers and limnologists usually distinguish between internal and external intermittency. Internal small-scale intermittency is caused by fluctuations at scales related to the inertial-convective and viscous sub-ranges, and refers to the variability of fluctuations within a given turbulent patch. Such intermittency is inherent to any turbulent flow with a high Reynolds number. External (or meso-scale) intermittency refers instead to the intermittency of the occurrence and variability among different turbulent events. Owing to the random breaking of internal waves, sporadic convective processes, Kelvin–Helmholtz instabilities, baroclinic instabilities at fronts, etc., the amplitudes of any turbulent parameter, averaged over \(r > L_0\), where \(L_0\) is an external turbulent scale, are subject to variations with typical scales of tens of meters vertically and hundreds of meters horizontally.

Large- and meso-scale turbulence can be observed by almost any standard instrument, but capturing the fine structures at small scales requires special, very sensitive instruments as well as specific experimental techniques. Small-scale fluctuations are, however, very informative, so that the overwhelming majority of field measurements of turbulence is devoted to the microstructure of hydrodynamic fields. In what follows, we shall also use the term turbulence to denote the small-scale fluctuations of the hydrodynamic fields. For currents this means a typical scale of velocity fluctuations of the order of mm s\(^{-1}\), and a turbulent kinetic energy

$$\begin{aligned} \kappa = \frac{1}{2} (u'^2 + v'^2 + w'^2) \sim 10^{-6}\, {\text {m}}^2 \,{\text {s}}^{-2}. \end{aligned}$$
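A direct estimate of this quantity from measured velocity records can be sketched as follows, assuming the fluctuations have already been separated from the mean, e.g. by the running average shown earlier:

import numpy as np

def turbulent_kinetic_energy(u_p, v_p, w_p):
    # TKE per unit mass from fluctuation records of the three
    # velocity components, averaged over the record.
    u_p, v_p, w_p = map(np.asarray, (u_p, v_p, w_p))
    return 0.5 * (np.mean(u_p**2) + np.mean(v_p**2) + np.mean(w_p**2))

# Fluctuations of order 1 mm/s in each component give
# TKE ~ 0.5 * 3 * (1e-3)**2 = 1.5e-6 m^2 s^-2, as quoted above.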
Fig. 28.46

Turbulence in a medium-sized lake. Panel b shows the spatial structure of the turbulent activity: an energetic surface boundary layer, a slightly less turbulent bottom boundary layer, and a strongly stratified and almost laminar interior. In panel a, the dissipation of the turbulent kinetic energy is presented. Redrawn from Wüest and Lorke (2003) [82]. \(\copyright \) Annual Rev. Fluid Mech., Parabolic Press, reproduced with permission

Wüest and Lorke (2003) [82] used an ADCP probe to investigate turbulent processes in a medium-sized lake. Figure 28.46 (panel (b)) shows the spatial structure of the turbulent activity: an energetic surface boundary layer, a slightly less turbulent bottom boundary layer, and a strongly stratified and almost laminar interior. In panel (a), the dissipation of the turbulent kinetic energy \(\varepsilon \) into heat,

$$\begin{aligned} \varepsilon = 7.5 \nu \left( \frac{\partial u'}{\partial z}\right) ^2 \end{aligned}$$

is presented, which is an important characteristic of the physical processes and the structure of the turbulence field. The boundary layers themselves have their own specific turbulent structure, connected mainly with sediment re-suspension over the bottom and various mixing processes below the surface. Strong stratification suppresses turbulence, so that density gradients produced by suspended solids can make the benthic layers less turbulent. Wind and waves generate turbulence at the lake surface and homogenize the upper layer, so that turbulence is highly developed throughout the mixed surface layer. However, density gradients at seasonal and daily thermoclines, or jumps between a fresh rainwater layer and more mineralized lake water, ‘lock’ the turbulence, preventing its penetration from one mixed layer to another, i.e., from the lake surface or the bottom into the lake body (Fedorov (1988) [16]). This causes, in particular, daily variations due to solar heating: the upper turbulent layer becomes much deeper during the night. Measurements in the equatorial Pacific Ocean, for example, show a daily deepening of the intensively turbulent surface layer from 15–20 m in the early afternoon (1–4 p.m.) to 60–80 m at night (Paka (1982) [51]). Suppression of turbulence by thermal stratification causes interesting effects of ‘clearing’ of the upper warm layer (down to \(\sim \!10\) m depth): a decreasing concentration of suspended matter due to the cessation of turbulence manifests itself in a decreased light attenuation \((\varDelta a/a \sim \!0.004{-}0.2)\), seen immediately after the beginning of the daily heating at weak wind (Vasilkov et al. (1985) [75]). The depth where the turbulence generated at the surface suddenly stops is often called the turbocline, by analogy with the thermocline; however, because of the difference in generation mechanisms, their depths are generally not the same.
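The dissipation rate shown in panel (a) follows from the isotropic relation quoted above, with the squared shear understood as an average over the record. A minimal sketch of the computation from a measured microstructure profile (an illustration only, not the processing chain of [82]):

import numpy as np

NU = 1.0e-6   # kinematic viscosity of water, m^2/s

def dissipation_rate(u_prime, z):
    # eps = 7.5 * nu * <(du'/dz)^2> from one horizontal velocity
    # component u' (m/s) sampled along a vertical profile z (m).
    shear = np.gradient(np.asarray(u_prime), np.asarray(z))
    return 7.5 * NU * np.mean(shear**2)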

Starting in the 1970s, measurements and investigations of small-scale turbulence in lakes were performed by many researchers: by Anisimova and Speranskaya (1977) [1] in Lake Baikal, by Palmer (1973) [53] and Cannon (1971) [8] in the Great Lakes, by Kenney (1979) [31], Thorpe (1977) [72], Dillon and Powell (1976) [14] and many others.

5.2 Instruments

An event is such a little piece of time-and-space

you can mail it through the slotted eye of a cat.

Diane Ackerman

Mystic Communion of Clocks

Fig. 28.47

Vertical profile characteristics of a turbulent patch in Lake Kinneret, showing density \(\rho \) and its fluctuations \(\rho '\), as well as vertical and horizontal velocity fluctuations \(w'\), \(u'\), \(v'\). Their covariances allow determination of the buoyancy flux \({\rho }'w'\), the dissipation \(7.5{\nu }(\partial {u'} / \partial {z})^2\), and the TKE, \(1/2({u'}^2+{v'}^2+{w'}^2)\). Adapted from Saggio and Imberger (2001) [59], after Wüest and Lorke (2003) [82]. \(\copyright \) Annual Rev. Fluid Mech., Parabolic Press, reproduced with permission

Small-scale turbulent processes can be observed through fluctuations not only of currents, but also of temperature, conductivity, pressure, density, transparency and other hydrophysical parameters. The first successful measurements of turbulent fluctuations in a marine environment were made by Stewart and Grant [69]. Instruments that can directly measure small-scale fluctuations of vertical shear, conductivity, and temperature in profiling and towing modes were first developed in the United States (Osborn (1978) [49], Gregg et al. (1982) [24], Dewey et al. (1987) [13]), Canada (Oakey (1982) [47]), Russia (Monin and Ozmidov (1985) [42], Arvan et al. (1985) [2]), Germany (Prandke et al. (1985) [56]) and Australia (Carter and Imberger (1986) [9]). As an example of typical field measurements of the turbulent microstructure, see Fig. 28.47: it shows panels of vertical profile characteristics of a turbulent patch in Lake Kinneret (adapted from Saggio and Imberger (2001) [59], after Wüest and Lorke (2003) [82]); similar measurements in the ocean are reported by Paka et al. (1999) [52]. Microstructure profilers, towed bodies or stationary instruments, as well as various tracers, are commonly used. Much of the progress in measuring microturbulence is linked to the enormous development of instrumental techniques, such as microstructure probes, often called turbulimeters (Gregg (1991) [25], Imberger and Head (1994) [28], Prandke and Stips (1998) [58], Luketina and Imberger (2001) [38], Paka et al. (1999) [52], Osborn and Crawford (1980) [50], Oakey (1982) [47], etc.). Nowadays it is possible not only to measure high-resolution current and shearFootnote 17 profiles with coherent ADCPs, but also to calculate simultaneously, with the same instrument, the TKE (turbulent kinetic energy), the Reynolds stress, the production of turbulence, and the inertial dissipation (Lohrmann et al. (1990) [35], Lhermitte and Lemmin (1994) [33], Lu and Lueck (1999) [36], etc.). Submersible Particle Image Velocimetry (PIV) instruments (Bertuccioli et al. (1999) [7]) allow measurements of two-dimensional velocity vector maps down to the dissipative scales. For measurements closer to the sediment bottom, flow microsensors are now available that measure velocity profiles at a \(50\,\mu \text {m}\) vertical resolution through the viscous boundary layer, the lowest cm of the benthic boundary layer (see Wüest and Lorke (2003) [82]). Such instruments will also make the ultra-weak turbulence often found in lakes better accessible. For the quantification of diffusive boundary-layer fluxes and early diagenesis in the sediment, a large number of in-situ microsensors are now available, which capture concentrations of Fe, Mn, \({\text {O}}_2\), \({\text {CO}}_2\), \({\text {NO}}_3^{-}\), \({\text {NO}}_2^{-}\), \({\text {N}}_2\text {O}\), \({\text {NH}}_4^{+}\), \({\text {H}}_2{\text {S}}\), \({\text {CH}}_4\), \({\text {CO}}_3^{2-}\), among others (Müller (2002) [44]). These sensors are especially suitable in fresh waters, which contain fewer electrolytes but usually more nutrient species than oceanic waters, so that the usual oceanic conductivity sensors cannot be used in lakes.

Fig. 28.48

Measurements of turbulence in the near-shore zone of lakes are usually performed from stationary platforms (I), floating pontoons (II) or boats (III). 1—anemometer for wind measurements, 2—recorders, 3, 4—current meter and turbulimeters, 5—buoy, 6—stabilizer (Filatov (1983) [17])

Measurements of turbulence in the near-shore zone of lakes are usually performed from stationary platforms or floating pontoons (see Fig. 28.48). Surface waves, cable and instrument vibrations and the noise of electric systems influence the measured signal; so, for measurements from pontoons or ships, rocking stabilizers and submerged buoys should preferably be used, and the instruments should be equipped with special accelerometers.

In lakes, small-scale velocity, temperature and conductivity fluctuations associated with turbulence are usually measured by microstructure profilers and towed bodies. The family of profilers consists of free-fall and tethered instruments (Osborn and Crawford (1980) [50]; Oakey (1982) [47]). Free-fall profilers have no firm connection with the ship and are usually recovered by a change of their buoyancy at the end of the cast. Tethered profilers are recovered by means of a flexible cable with near-neutral buoyancy. Towed bodies are distinguished by the method of vibration suppression. ‘Slow’ bodies use a ‘fuzzy’ towing cable (or ‘haired fairing’), which prevents shedding of eddies behind the cable and reduces vibrations (Lueck (1987) [37]). ‘Fast’ bodies use a towing cable with fairing (Lilover et al. (1993) [34]). The main advantage of tethered profilers and fast towed bodies is the large amount of data that can be collected over a relatively short time, in a wide depth range and over long distances. However, the noise levels of the signals are usually higher than with free-falling instruments. One of the main objectives of the measurements is a comprehensive analysis of the interrelations between small-scale thermal processes and meso-scale dynamics.

In large water bodies, tethered profilers and towed bodies are used to carry turbulimeters (as well as other sensors). Different research groups and commercial enterprises continue to improve and develop new microstructure instruments, e.g., Amp and Chameleon (Moum et al. (1995) [43]), Fly (Simpson et al. (1996) [63]), Epsonde (Oakey (1988) [48]), Mss (Prandke and Stips (1996) [57]), Baklan and Grif (Arvan et al. (1985) [2]; Paka et al. (1999) [52]), Turbomap (Wolk et al. (2002) [80]) and Pme (Stevens et al. (1999) [67]). In order to demonstrate some technical details, examples of design solutions and instrument capabilities, consider the instruments Grif and Baklan, developed at the P. P. Shirshov Institute of Oceanology (Kaliningrad, Russia) (Paka et al. (1999) [52]). Baklan is a vertical profiler, whilst Grif is a towed body; they can carry the same set of sensors and electronics, and are used together or separately, depending on the goal of the measurements. Table 28.2 shows their main technical parameters.

The vertical profiler Baklan (Arvan et al. (1985) [2]), which is similar to Cormorant (Gibson et al. (1993) [21]), consists of a streamlined cylindrical pressure case with a sensor array at the nose and a drag tail in the form of a crown made from cylindrical brushes (Fig. 28.49). Data transmission to the ship, and instrument recovery, are carried out with an inelastic, rubber-coated, three-wire cable of negative buoyancy. The maximum depth of the measurements (400 m) is limited only by the cable length. The sensor set can be modified, depending on the scientific objectives. The kinetic energy dissipation rate is usually measured by two perpendicularly placed shear probes in order to obtain the two horizontal components of the vertical small-scale shear. When accurate measurements of salinity and density are desired, one shear probe is replaced by a precise low-frequency conductivity sensor. The temperature and conductivity sensors extend 0.15 m beyond the lower end of the profiler.

A vertical-acceleration sensor and a pressure sensor are installed inside the case at the top cap. All output signals are digitized by a 15-bit AD converter and then transmitted to the onboard computer via the tether cable. The sampling rate of each signal is adapted to the time constant and spatial resolution of the corresponding sensor.

Baklan is launched from the windward side of a ship. The cable is released from a winch that can be turned horizontally, so that its rotation axis is always perpendicular to the tether line. The speed of the cable release exceeds the sum of the fall speed of the profiler and the ship's drift. The cable sinks, with slack loops, slightly slower than the profiler in order to minimize adverse effects of cable movement on the measurements.

Fig. 28.49

Vertical profiler Baklan and towing body Grif: 1—temperature, conductivity and shear sensors, 2—lead ballast, 3—pressure sensor, 4—pressure case, 5—electronics, 6—acceleration sensor, 7—cable connection, 8—drag tail, 9—cable, 10—streamlined body, 11—towing line with fairings (Paka et al. (1999) [52]). \(\copyright \) J. Atmos. Ocean. Tech., reproduced with permission

The towed turbulimeter Grif (Fig. 28.49) is used for long-term horizontal measurements of the microstructure. It consists of a heavy streamlined body carrying a pressure case with sensors and electronics. Three sensors extend 0.15 m ahead of the body to measure the mean temperature, conductivity fluctuations, and the horizontal component of the small-scale vertical shear.

The streamlined body is fabricated from a metal buoy with a tail. A strain-gauge pressure sensor is inserted into the front part of the case, and a lead ballast is affixed to the bottom of the container. Grif is connected to the towing ship by an armored three-wire cable with detachable plastic fairings. Measurements can be carried out down to a maximum depth of 200 m at a towing speed of ca. \(3 \,{\text {m s}}^{-1}\), and down to 300 m if the speed is about \(2 \,{\text {m s}}^{-1}\).

Sensors used in Baklan and Grif

Small-scale shear probe. The micro-scale velocity shear is measured using an airfoil shear probe: a piezo-bimorph beam encased in a polyurethane shell of 6.5 mm diameter (Fig. 28.50). The head of the beam has a rigid tip, which increases the bending moment and maximizes the probe's sensitivity to transverse velocity fluctuations.

Fig. 28.50

Sensors used in Baklan and Grif. a small-scale shear probe: 1—bimorph piezo-element, 2—polyurethane shell, 3—ebonite tip, 4—metal shell, 5—output wires. The main dimensions are the diameter \(d = 0.0065\) m, the length of the front section \(\ell = 0.02\) m, and the total length of the sensor \(L = 0.03\) m. b capillary conductivity probe: 1—inner electrode, 2—outer electrode, 3—open cell, 4—inner cell; \(d = 0.001\) m is the diameter of the capillary and \(\ell = 0.0003\) m the length of the front dielectric section (Paka et al. (1999) [52]). \(\copyright \) J. Atmos. Ocean. Tech., reproduced with permission

For turbulent processes, the question of measurement accuracy is especially important. The accuracy of this sensor depends mainly on the following factors: calibration errors, frequency response limitations, variations of the fall speed, non-linearity due to changes of the angle of attack, and anisotropy of the turbulent velocity field. It has been shown that the calibration-related uncertainty of the turbulent dissipation rate \(\varepsilon \) for in-situ testing in the ocean is at least 30 %; this is about twice as large as for standard laboratory testing of airfoil probes (Gargett et al. (1984) [20]). The error in the fall speed of the probe (usually calculated from the pressure signal) does not exceed 5 %. If the angle of attack is less than \(5^\circ \), which is most commonly the case for Baklan measurements, the non-linearity of the output signal accounts for 3–5 % of the total uncertainty in \(\varepsilon \). Gargett et al. (1984) [20] showed that the error in the response function of an airfoil sensor increases from 10 % at low wavenumbers to about 40 % at 100 cpm (cycles per metre). The upper limit of the frequency response is governed by the spatial resolution of the sensor (if \(V<1 \,\text {m s}^{-1}\)).
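If these error sources are treated as independent, their combined effect on \(\varepsilon \) can be gauged by adding the relative uncertainties in quadrature; the following lines only illustrate this arithmetic and are not the error analysis of Gargett et al. [20]:

import math

calibration = 0.30   # in-situ calibration uncertainty
fall_speed = 0.05    # fall-speed error from the pressure signal
nonlinear = 0.05     # angle-of-attack non-linearity (upper estimate)

total = math.sqrt(calibration**2 + fall_speed**2 + nonlinear**2)
print(f"combined relative uncertainty ~ {total:.0%}")   # about 31 %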

Microstructure conductivity sensor. Small-scale conductivity fluctuations are measured in Baklan and Grif with a capillary conductivity probe (Paka et al. (1999) [52]). It is an advanced modification of the single-electrode probe (Gibson and Swartz (1963) [22]). The central solid circular microelectrode at the tip of the sensor is replaced by a deep capillary channel. The area of the contact surface between the interior electrode and the fluid is thereby increased, which decreases the contact polarization noise. The interior electrode is inserted into the sensor body (Fig. 28.50). The length of the dielectric section (i.e., between the sensor tip and the capillary end) is a fraction of the capillary diameter (\(\ell /d \approx 0.3\), Fig. 28.50). This design minimizes the concentration of the electric current inside the capillary tube electrode. As a result, polarization effects and overheating of the water near the sensor tip are reduced.

The sensor body is made of ebonite, and the electrodes of stainless steel. The internal part of the conic tip adjacent to the interior electrode, including the dielectric capillary, is made of glass. The sensor is driven by an alternating current at a frequency of 10 kHz, and the sensing circuitry converts the water conductivity between the two electrodes into an output voltage. The stability of the sensor was found to be much better than that of a conventional single-electrode micro-conductivity cell.

The sensor's frequency response, and hence its spatial resolution, is complicated by the influence of the dielectric fluid lying between the exposed orifice and the inner electrode within the capillary interior. The total probe resistance is controlled by two fluid regions: the flushed region exterior to the open cell, with resistance \(R_{oc}\), and the unflushed interior dielectric region, with resistance \(R_{ic}\) (Fig. 28.50). In homogeneous water with uniform electrical conductivity \(\sigma _0\) we have

$$\begin{aligned} R_{oc} = \frac{1}{2 d \sigma _0} \quad \mathrm {and} \quad R_{ic} = \frac{4l}{\pi d^2 \sigma _0} , \end{aligned}$$

where \(d\) is the exposed diameter of the capillary section and \(l\) the length of the dielectric region. Here, the exterior resistance of the open cell, \(R_{oc}\), is modeled as the resistance of a flat disk in a semi-infinite conductive fluid. The ratio \(R_{ic}/R_{oc}\) equals \(8l/(\pi d)\) and so depends linearly on the aspect ratio \(l/d\) of the dielectric region. For example, \(R_{ic}/R_{oc} = 0.8\) for \(l/d = 0.3\). Note that the ‘internal resistance’ \(R_{ic}\) is therefore not negligible.
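These relations are easy to evaluate numerically. For the sensor geometry of Fig. 28.50 and an assumed fresh-water conductivity of 0.02 S m\(^{-1}\) (an illustrative value only):

import math

def cell_resistances(d, l, sigma0):
    # Exterior (flushed) and interior (capillary) resistances in ohms
    # for capillary diameter d (m), dielectric length l (m) and water
    # conductivity sigma0 (S/m), per the formulas above.
    r_oc = 1.0 / (2.0 * d * sigma0)
    r_ic = 4.0 * l / (math.pi * d**2 * sigma0)
    return r_oc, r_ic

r_oc, r_ic = cell_resistances(d=0.001, l=0.0003, sigma0=0.02)
print(f"R_oc = {r_oc:.0f} Ohm, R_ic = {r_ic:.0f} Ohm, "
      f"R_ic/R_oc = {r_ic / r_oc:.2f}")   # ratio 8l/(pi d) ~ 0.76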

The interior and exterior resistive regions also have different frequency responses. The attenuation of high-frequency components of the signal due to the open flushed cell exterior is caused only by spatial averaging.

Temperature and pressure sensors, acceleration sensor. Baklan and Grif, like many other turbulimeters, carry resistance-wire temperature sensors. The design of the sensor is as follows: a bundled copper wire, 2 m in length and \(2 \times 10^{-5}\) m in diameter, is placed into a 1-cm spiral nickel capillary tube with an interior diameter of \(6 \times 10^{-4}\) m. The tube is filled with silicone oil to improve heat transfer between the wire and the capillary. The time constant of the sensor is about 0.2 s, and the total resistance is 100 \(\varOmega \). The temperature measurements are highly stable and can be carried out with an accuracy of 0.02 \(^\circ \text {C}\) over a few months. Pressure is measured by a commercially manufactured strain-gauge transducer with thermo-compensation. An electromagnetic sensor for vertical accelerations is needed to indicate the level of instrument vibrations. It is especially useful for flagging instances when Baklan or Grif are affected by cable tension during casts, so that the measured turbulence signal is not reliable.
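With a known time constant, the lag of such a slow sensor can be partially compensated in post-processing by the standard first-order correction \(T \approx T_m + \tau \,\mathrm {d}T_m/\mathrm {d}t\). A sketch of this generic technique (not part of the Baklan/Grif processing described here):

import numpy as np

TAU = 0.2   # sensor time constant, s (from the text)

def correct_time_constant(t_meas, dt):
    # First-order lag correction T_true ~ T_meas + tau * dT_meas/dt
    # for a temperature record t_meas sampled at interval dt (s).
    return np.asarray(t_meas) + TAU * np.gradient(np.asarray(t_meas), dt)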

The last example has brought us close to the next page of an introduction to the science of hydro-physical field measurements: measurement methods and techniques.