Abstract
In this chapter the concept of discrete-time signals and sampled-data systems implemented on digital hardware versus those of continuous-time signals driving analog systems and processes is introduced early on. The processes of signal sampling and analog signal reconstruction are then investigated, and the Nyquist sampling rate to avoid aliasing is explained by means of Fourier series analysis. Then the Z-transform is introduced as the tool of preference for analysis and synthesis of discrete-time linear, time-invariant systems defined by difference equations in the discrete time domain. A detailed account of the most important and practical continuous-time system mapping techniques to discrete-time ones is then presented. A brief account of digital filter structures and types is also given along with a presentation of the fast Fourier transform algorithm for the calculation of the discrete Fourier transform of discrete-time signals with finite duration admitting periodic expansion. The notions of waveform statistics as encountered in random signals and stochastic processes are then given in order to conclude with the effect on a random signal’s statistics due to its propagation through a linear, time-invariant system. Finally, concepts of optimal signal estimation and the Wiener filter are presented, leading to matched filter and zero-forcing equalizer parametric designs in the discrete-time domain.
Keywords
- Discrete-time Signal
- Reconstructed Analog Signal
- Random Signal
- Continuous-time Transfer Function
- Discrete Fourier Transform (DFT)
1 Discrete-Time Systems
The progress in integrated circuit manufacturing (very large scale integration, VLSI), combined with the need for high-complexity, high-accuracy signal analysis and processing algorithms without increases in development cost or time, are the basic factors behind the continuously increasing use of digital systems in industry and technology [9.1, 9.2].

1.1 Discrete-Time Signals and Digital Systems
A digital processing system is a system where the basic mathematical operations and functions, including signal transformations and data series analysis, are implemented as an embedded program on board real-time computer hardware [9.1, 9.2]. Since the computer hardware employed is digital, the signals that can be processed or generated cannot be continuous-time, i.e., defined for any instant in time. Instead, the computer processes and generates tables (i.e., vectors) of values for the input and output signal, respectively. Such tables hold the values of the corresponding signals at discrete time instants and are referred to as discrete-time signals. The time instants for which a discrete-time signal is known or defined are in most practical cases equally spaced; this scheme is known as uniform sampling, and the spacing between two successive samples of the signal is referred to as the sampling interval or period.
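Uniform sampling as described above can be sketched in a few lines of Python. This is a minimal illustration, with the signal x(t) = sin(2πf₀t) and the values of fs and f₀ chosen arbitrarily for the example (they are not taken from the text).

```python
import numpy as np

# Uniform sampling of a continuous-time signal x(t) = sin(2*pi*f0*t).
# The values of fs and f0 are illustrative, not taken from the text.
fs = 100.0              # sampling rate in Hz
Ts = 1.0 / fs           # sampling interval (period)
f0 = 5.0                # signal frequency in Hz

n = np.arange(20)                        # discrete-time index n
x_n = np.sin(2 * np.pi * f0 * n * Ts)    # x(n) = x(n*Ts)
```

The vector `x_n` is exactly the table of values that a digital system stores and processes in place of the continuous waveform.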
Systems that process discrete-time signals, e.g., real-time digital hardware, are known as discrete-time systems. The main advantages of using computers in marine vehicle or process instrumentation, telecommunications, acoustics, etc., instead of analog (continuous-time) systems are [9.1, 9.3, 9.4, 9.5]:
1. (Re)use of general-purpose hardware for implementing a wide variety of algorithms, instead of custom-built, proprietary designs that may become obsolete, causing higher acquisition and development cost.
2. Changes in the algorithm can be easily implemented as changes in the software through reprogramming, without the need to modify the hardware.
3. Easy implementation of adaptive or time-varying algorithms due to programming.
4. Very small to zero sensitivity to environmental conditions, e.g., ambient temperature, in contrast to, e.g., analog electric circuitry.
5. Given, well-defined accuracy in computation, determined by the computer's word length.
6. User-friendly human–machine interface (HMI).
7. Network connectivity.
However, there are some disadvantages in using discrete-time systems, the most important of which is the need for converter or adapter circuits or interface systems at their input and output, in order to be able to interact with the vast majority of real-world physical or engineering systems or processes, which are most often continuous-time. Such interfaces are the analog-to-digital (A/D) and digital-to-analog (D/A) converters.

1.2 Signal Sampling
1.2.1 Modeling of Ideal Sampler
Consider a continuous-time signal x(t), which is driven through a sampling circuit (sampler) operating at a constant sampling rate (frequency) f_s = 1/T_s, as shown in Fig. 9.1.
The sampler's output is a sequence of samples of the input signal, x(n) = x(nT_s), where n is an integer and T_s the sampling period. Sequence x(n) is a discrete-time signal.
We will now represent the discrete-time signal as well as the sampling process in a continuous-time framework. Such a representation can be derived on the basis of Fig. 9.2.
A pulsed continuous-time signal x_p(t), which corresponds to sequence x(n), can be generated from the original continuous-time signal x(t) (dashed line), if the latter is multiplied by the following sampling pulse train

p(t) = Σ_{n=−∞}^{+∞} p_0(t − nT_s) ,

where p_0(t) denotes a single narrow sampling pulse. In effect,

x_p(t) = x(t) p(t) .

However, the following holds as the pulse width tends to zero: p_0(t) → δ(t), with δ(t) Dirac's impulse delta. In the above ideal case, the sampling pulse train is an impulse train or Dirac comb, and the product becomes

x_s(t) = x(t) Σ_{n=−∞}^{+∞} δ(t − nT_s) .  (9.4)

In effect, (9.4) becomes

x_s(t) = Σ_{n=−∞}^{+∞} x(nT_s) δ(t − nT_s) .  (9.5)

Above, the following fact for Dirac's impulse delta has been used: x(t) δ(t − nT_s) = x(nT_s) δ(t − nT_s), which implies the assumption that signal x(t) assumes non-infinite values.
Hence, the ideal sampler operation consists of multiplying the input signal by a Dirac comb with period T_s. We will now determine the spectrum of signal x_s(t) at the output of the sampler by employing the Fourier transform.
1.2.2 Fourier Series Expansion and Fourier Transform
Any periodic signal x(t) with period T can be expanded to a complex Fourier series according to the following

x(t) = Σ_{n=−∞}^{+∞} c_n e^{jnω_0 t} , ω_0 = 2π/T .  (9.6)

Coefficients c_n of the series are calculated according to

c_n = (1/T) ∫_T x(t) e^{−jnω_0 t} dt .  (9.7)
The above can be extended to the case of aperiodic (non-periodic) signals by assuming that period T is infinite. That is how the Fourier transform emerges

X(ω) = ∫_{−∞}^{+∞} x(t) e^{−jωt} dt .  (9.8)

The inverse Fourier transform is defined by

x(t) = (1/2π) ∫_{−∞}^{+∞} X(ω) e^{jωt} dω .  (9.9)
To generalize the formalism we refer to the Fourier transform of any signal, periodic or not. In the case of a periodic signal, the Fourier transform is directly connected to coefficients c_n of its series expansion as follows

X(ω) = 2π Σ_{n=−∞}^{+∞} c_n δ(ω − nω_0) .  (9.10)

The above can be derived directly by plugging it into the inverse transform (9.9) and generating (9.6). The Fourier transform, just like the Laplace transform, from which the former can be defined by setting s = jω, is a linear integral transform. Its properties are identical to those of the Laplace transform and can be sought in the literature. An important property is that of multiplication–convolution duality

F{x_1(t) x_2(t)} = (1/2π) X_1(ω) ∗ X_2(ω) .  (9.11)

Here, ∗ stands for the convolution operator. Note that the above can be directly applied to the Dirac comb describing the function of an ideal sampler, given its periodicity.
Indeed, the Dirac comb in (9.4) is a periodic signal with period T_s. Its Fourier series is given below

δ_{T_s}(t) = (1/T_s) Σ_{n=−∞}^{+∞} e^{jnω_s t} , ω_s = 2π/T_s ,

i.e., all its Fourier series coefficients equal c_n = 1/T_s. Hence, based on (9.10), the spectrum (Fourier transform) of the Dirac comb is given by

Δ(ω) = (2π/T_s) Σ_{n=−∞}^{+∞} δ(ω − nω_s) = ω_s Σ_{n=−∞}^{+∞} δ(ω − nω_s) .
1.2.3 Nyquist Sampling Rate – Aliasing
Combining the relationship above for the Dirac comb spectrum with (9.5), as well as the Fourier transform duality between multiplication and convolution, one can determine the spectrum of signal x_s(t) at the output of an ideal sampler

X_s(ω) = (1/2π) X(ω) ∗ Δ(ω) = (1/T_s) Σ_{n=−∞}^{+∞} X(ω) ∗ δ(ω − nω_s) .

The following property of the Dirac impulse delta can now be employed

X(ω) ∗ δ(ω − nω_s) = X(ω − nω_s) .

Eventually one obtains

X_s(ω) = (1/T_s) Σ_{n=−∞}^{+∞} X(ω − nω_s) .  (9.15)

Using (9.15) one can assess the effect of sampling on the spectrum X(ω) of the analog, continuous-time signal at the input. Sampling introduces, therefore, an infinite number of aliases of the spectrum of the analog signal, centered at integer multiples of the sampling circular frequency ω_s.
To visualize this, consider a real signal x(t) with all its spectral content lying within the low-frequency range (a low-pass or baseband signal), e.g., a signal with spectrum as in Fig. 9.3. Note that the amplitude spectrum of the signal is an even function of circular frequency ω, due to the fact that x(t) is assumed to be purely real in the time domain.
The amplitude spectrum of signal x_s(t) is shown in Fig. 9.4, as generated by sampling the low-pass signal x(t) of Fig. 9.3 at three different sampling rates: ω_s > 2ω_max, ω_s = 2ω_max, and ω_s < 2ω_max, where ω_max is the highest frequency in the signal's spectrum.
Using the plot, one can readily derive the following sampling rate criterion, attributed to Nyquist (or the Shannon sampling theorem), for a low-pass signal like the one in Fig. 9.4

ω_s > 2ω_max .

If the condition above holds, as seen in (9.15) (spectrum periodicity for the sampled signal), as well as in Fig. 9.5, an ideal (brick-wall) low-pass filter with transfer function as follows can be used to reconstruct the original continuous-time signal

H(ω) = T_s for |ω| ≤ ω_s/2 , H(ω) = 0 otherwise .  (9.17)

The original spectrum is ideally reproduced without any distortion since it holds that

X_s(ω) H(ω) = X(ω) .
This is made clearer next when reconstruction of a continuous-time signal based on the values of a discretized version is discussed.
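The aliasing predicted by (9.15) can be demonstrated numerically: once sampled at rate fs, a sinusoid at f₀ and one at f₀ + fs produce identical sample sets. The values of fs and f₀ below are illustrative only.

```python
import numpy as np

# Aliasing sketch: after sampling at rate fs, a sinusoid at f0 and one at
# f0 + fs are indistinguishable, exactly as the spectral aliases predict.
fs = 8.0
Ts = 1.0 / fs
n = np.arange(16)

f0 = 1.0
x_low = np.cos(2 * np.pi * f0 * n * Ts)
x_alias = np.cos(2 * np.pi * (f0 + fs) * n * Ts)   # violates the Nyquist criterion

same = np.allclose(x_low, x_alias)   # the two sample sets coincide
```

Because cos(2π(f₀ + fs)nTs) = cos(2πf₀nTs + 2πn), the extra term is an integer number of full cycles per sample and disappears entirely.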
1.3 Analog Signal Reconstruction Using a Discrete-Time Signal
The simplest process to generate a continuous-time signal, x(t), from a discretized sequence (discrete-time signal) x(n) is to employ the zero-order hold (ZOH) network. This system holds the output constant and equal to the one applied to its input for an entire sampling period T_s. In effect, the output of the ZOH when driven by a discretized signal is as shown in Fig. 9.5.

The mathematical description of a ZOH system is achieved through the following impulse response and transfer function in the complex frequency domain of variable s

h_ZOH(t) = u_s(t) − u_s(t − T_s) , H_ZOH(s) = (1 − e^{−sT_s}) / s ,

where u_s(t) denotes the unit step. To better understand the action of the ZOH system, its transfer function in the circular frequency domain ω (Fourier transform) is given below as well

H_ZOH(ω) = T_s e^{−jωT_s/2} sinc(ωT_s/2) .

In the above, sinc(x) = sin(x)/x. In Fig. 9.6 the amplitude spectra of the transfer functions of the ZOH system (normalized by setting T_s = 1) and of the ideal reconstruction low-pass filter, which was introduced with (9.17), are shown.
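The ZOH amplitude response just given can be evaluated numerically. A minimal sketch, using the normalization T_s = 1 as in the figure:

```python
import numpy as np

# Amplitude response of the ZOH: |H(omega)| = Ts * |sin(omega*Ts/2) / (omega*Ts/2)|,
# evaluated on a frequency grid with the normalization Ts = 1.
Ts = 1.0
w = np.linspace(1e-9, 2 * np.pi / Ts, 500)
H_zoh = Ts * np.abs(np.sin(w * Ts / 2) / (w * Ts / 2))
# The gain approaches Ts at omega -> 0 and vanishes at the sampling frequency
# omega_s = 2*pi/Ts, unlike the flat passband of the ideal brick-wall filter.
```

The droop of this curve inside the passband and its non-zero gain beyond ω_s/2 are exactly the distortions that higher-order hold circuits try to reduce.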
Indeed, in the circuit above, if resistance R is small (theoretically zero), then time constant τ, indicating the time needed for charging the capacitor in the hold circuit, is small when compared to sampling period T_s. The voltage across the terminals of the capacitor stays at the value set in the previous sampling instant, when the switch was momentarily switched on, until the next instant the switch is on, etc. The differences between the analog memory element, as the switched resistor and capacitor (RC) circuit is known, and the model ZOH system are:

a) The ohmic resistance of the circuit is non-zero and, therefore, the time to charge the capacitor is finite and not zero.
b) The duration of the conduction phase of the switch is finite and non-zero.
c) The ohmic resistance applied to the terminals of the capacitor when the switch is off is not infinite (open circuit) but finite, eventually leading to discharge of the capacitor.

The use of the RC hold circuit is widespread due to its simplicity. However, if a more accurate reconstruction with less distortion is required, without increasing the sampling rate, then hold circuits of order higher than zero must be used (e.g., the first-order hold, FOH), so that the transfer function better approximates the ideal brick-wall low-pass filter in (9.17), as shown in Fig. 9.6.

1.4 The Z-Transform
The single-sided (unilateral) Z-transform is a linear transform defined as follows for a discrete-time signal x(n)

X(z) = Σ_{n=0}^{∞} x(n) z^{−n} .

The Z-transform is derived from the Laplace transform of signal x_s(t), which is generated by the ideal sampler shown in Fig. 9.7,

L{x_s(t)} = Σ_{n=0}^{∞} x(nT_s) e^{−snT_s} .

In effect, setting z = e^{sT_s} in the above yields X(z).
Variable z^{−1} is used for describing a signal, or a system as we will see later on. Its significance lies in the fact that multiplication by z^{−1} in the transform domain translates to a time shift by one sampling period, T_s, in the discrete-time domain n. Indeed, for a causal signal,

Z{x(n − 1)} = z^{−1} X(z) .
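The delay interpretation of z⁻¹ can be checked numerically on the unit circle, where z = e^{jΩ}. The short finite sequence below is illustrative; shifting it by one sample multiplies its transform by z⁻¹.

```python
import numpy as np

# Delay property sketch: multiplying X(z) by z^-1 corresponds to shifting
# the sequence one sample later in time. Verified at z = exp(j*Omega).
x = np.array([1.0, 2.0, 3.0, 0.5])
x_delayed = np.concatenate(([0.0], x))    # x(n - 1), one extra leading zero

Omega = 0.7
z = np.exp(1j * Omega)
X = np.sum(x * z ** (-np.arange(len(x))))
X_delayed = np.sum(x_delayed * z ** (-np.arange(len(x_delayed))))
# X_delayed equals z^-1 * X
```

The same identity holds at every point z where both transforms converge, which is what makes z⁻¹ usable as a symbolic delay operator in block diagrams.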
Furthermore, here are some more of the most important properties of the Z-transform:

1. Linearity

Z{a x_1(n) + b x_2(n)} = a X_1(z) + b X_2(z) .  (9.25)

2. Delay or advance by N discrete-time units (for a causal signal)

Z{x(n − N)} = z^{−N} X(z) , Z{x(n + N)} = z^{N} [ X(z) − Σ_{m=0}^{N−1} x(m) z^{−m} ] .  (9.26)

3. Initial and final value theorems

x(0) = lim_{z→∞} X(z) , lim_{n→∞} x(n) = lim_{z→1} (1 − z^{−1}) X(z) .  (9.27)

Note: the final value theorem holds only if (1 − z^{−1}) X(z) is an analytic function of complex variable z on and outside the unit circle.

4. Discrete convolution theorem

Z{x_1(n) ∗ x_2(n)} = X_1(z) X_2(z) .  (9.28)

Note: because x(n) needs to be causal, i.e., it can only depend on values of signals x_1 and x_2 prior or up to instant n, the bounds of the convolutional summation are set as follows,

x_1(n) ∗ x_2(n) = Σ_{m=0}^{n} x_1(m) x_2(n − m) .
It is also pointed out here that the commutative and associative properties hold for the convolution operator applied on two discrete-time signals.
Finally, it is pointed out that the change of variable, z = e^{sT_s}, on the basis of which the Z-transform is obtained from the Laplace transform of signal x_s(t), defines a single-valued mapping of the complex frequency s-plane to the plane of the, also complex, delay operator z. Remember that the stability region on the complex s-plane is the left-hand half-plane defined by inequality Re(s) < 0. In effect, the stability region on the delay operator z-plane is defined by inequality

|z| < 1 ,

since

|z| = |e^{sT_s}| = e^{Re(s) T_s} .

That is, the stability region is the interior of the unit disc. Similarly, the origin of the s-plane is mapped to point z = 1 on the z-plane, while the imaginary axis of the s-plane is wrapped around the unit circle on the z-plane. Finally, the real s-axis is mapped to the positive real z semi-axis. The Z-transforms of several elementary discrete-time signals are shown in Table 9.1. In the interest of completeness, and to allow for comparisons, a Laplace transform table is provided in Table 9.2.
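The geometry of the mapping z = e^{sT_s} is easy to verify numerically. The test points and the value of T_s below are arbitrary illustrations.

```python
import numpy as np

# The mapping z = exp(s*Ts): left-half-plane points land inside the unit
# circle, s = 0 lands on z = 1, and imaginary-axis points land on the
# unit circle itself. Ts and the test points are illustrative.
Ts = 0.1

def z_of(s):
    return np.exp(s * Ts)

z_stable = z_of(-3.0 + 40j)     # Re(s) < 0  ->  inside the unit disc
z_unstable = z_of(0.5 + 40j)    # Re(s) > 0  ->  outside the unit disc
z_origin = z_of(0.0)            # s = 0      ->  z = 1
z_imag = z_of(25j)              # s = j*omega -> |z| = 1
```

Note that the map is single-valued but not one-to-one: s and s + j·2π/T_s produce the same z, which is the frequency-domain aliasing of Sect. 1.2 seen from the z-plane.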
1.5 Discrete-Time LTI Systems
1.5.1 Difference Equations
A special class of dynamic systems, widely used in systems and signal theory, is that of linear time-invariant (LTI) systems [9.1, 9.2, 9.4, 9.5, 9.6]. Such systems are described in continuous time by linear differential equations with constant coefficients. In discrete time, such a differential equation is converted to a difference equation corresponding to a discrete-time system described by a transfer function in variable z, or equivalently z^{−1}. The general form of a linear difference equation with input u(n) and output y(n) is the following

y(n) + Σ_{i=1}^{N} a_i y(n − i) = Σ_{j=0}^{M} b_j u(n − j) .  (9.31)

To get the iterative process started, in order to obtain the samples of discrete-time output signal y(n) for n ≥ 0, N initial conditions are needed: y(−1), …, y(−N). Using (9.31) we can conclude that the n-th sample of the output is determined as a weighted sum of the current plus M past input samples and N past output samples.
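The iterative solution of such a difference equation can be sketched directly. The first-order coefficients below are hypothetical, chosen only for illustration, and the loop is cross-checked against `scipy.signal.lfilter`, which implements the same recursion.

```python
import numpy as np
from scipy.signal import lfilter

# Iterating a first-order instance of the general difference equation:
# y(n) - 0.8*y(n-1) = 0.5*u(n) + 0.5*u(n-1), zero initial conditions.
# Coefficients are hypothetical, for illustration only.
b = np.array([0.5, 0.5])      # b_j: input-side weights
a = np.array([1.0, -0.8])     # a_i: output-side weights (a_0 = 1)

u = np.ones(10)               # unit-step input
y = np.zeros(10)
for n in range(10):
    y[n] = b[0] * u[n]                           # current input sample
    if n >= 1:
        y[n] += b[1] * u[n - 1] - a[1] * y[n - 1]  # past input and output
```

Each output sample is indeed a weighted sum of the current input, one past input, and one past output, exactly as the text describes for M = N = 1.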
1.5.2 Linear Time-Invariant Discrete-Time Systems
The generic difference equation (9.31) in the discrete-time domain defines an LTI discrete-time system. In the Z-transform domain the difference equation becomes the transfer function as follows

H(z) = Y(z)/U(z) = ( Σ_{j=0}^{M} b_j z^{−j} ) / ( 1 + Σ_{i=1}^{N} a_i z^{−i} ) .  (9.32)

A special, yet very important, case of discrete-time systems are finite impulse response (FIR) systems, in contrast to the generic infinite impulse response (IIR) system defined in (9.31). For a FIR system it holds that a_i = 0, i = 1, …, N, in (9.31), hence

y(n) = Σ_{j=0}^{M} b_j u(n − j) .

In effect, the n-th output sample is exclusively determined as a weighted sum of the current plus M past input samples. In effect, the difference equation ceases to be recursive, and that is why in the literature FIR systems are also referred to as MA (moving average) ones.
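Since a FIR output is a non-recursive weighted sum, it is simply a convolution of the input with the finite impulse response. A minimal moving-average example (the tap values and input are illustrative):

```python
import numpy as np

# A 3-tap moving-average (MA) FIR filter: y(n) = (u(n) + u(n-1) + u(n-2)) / 3.
# Being non-recursive, the output is the convolution of the input with the
# finite impulse response b.
b = np.full(3, 1.0 / 3.0)                 # impulse response of the filter
u = np.array([0.0, 3.0, 6.0, 9.0, 12.0])  # illustrative input samples
y = np.convolve(u, b)[:len(u)]            # outputs at the same time instants
```

For instance, y(2) averages the samples u(0), u(1), u(2); no past output ever enters the computation.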
In contrast, discrete-time LTI systems that have exclusively poles, i.e., b_i = 0 for every i > 0 in (9.32), are also known as AR (auto-regressive), while the generic IIR system is also referred to as auto-regressive moving average (ARMA) in this framework.

1.6 Continuous-Time System Mapping
There are many ways (methods) to map a continuous-time system to a discrete-time equivalent [9.1, 9.2, 9.4, 9.5, 9.6]. All of these methods are based on preserving some characteristic of the continuous-time system during this procedure; this is why there is more than one way. However, stability (or instability) of a continuous-time system is preserved by employing any of the methods presented here.
The mapping methods considered here can be classified into the following categories:

1. Approximate differentiation or integration:
   - Backward difference
   - Forward difference
   - Trapezoidal integration (Tustin transformation).

2. Time-domain response matching at sampling instants:
   - Impulse response matching
   - Step response matching.

3. Dynamic matching, i.e., matching of all poles and zeroes appearing in the transfer function.
1.6.1 Approximate Differentiation or Integration
Consider a continuous-time signal x(t) being sampled with sampling interval T_s, so that the discrete-time signal x(n) = x(nT_s) is obtained. Then, the following approximations can be derived.
Approximate derivative with respect to time using the backward difference

dx/dt ≈ [x(n) − x(n − 1)] / T_s , i.e., s ≈ (1 − z^{−1}) / T_s .  (9.34)

Approximate derivative with respect to time using the forward difference

dx/dt ≈ [x(n + 1) − x(n)] / T_s , i.e., s ≈ (z − 1) / T_s .  (9.35)

Approximate integral with respect to time using the trapezoidal rule

y(n) = y(n − 1) + (T_s/2) [x(n) + x(n − 1)] .

However, integration with respect to time corresponds to multiplication by 1/s in the complex frequency domain. The above in effect yields

1/s ≈ (T_s/2) (1 + z^{−1}) / (1 − z^{−1}) , i.e., s ≈ (2/T_s) (1 − z^{−1}) / (1 + z^{−1}) .  (9.38)
Equations (9.34), (9.35), and (9.38) mean that, when presented with a transfer function H(s) in the complex frequency (s) domain describing a continuous-time LTI system, it is possible to derive a discrete-time transfer function H(z) defining a discrete-time LTI system by substituting variable s with the approximation desired.
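The Tustin (trapezoidal) substitution is directly available in SciPy. A sketch for the hypothetical first-order low-pass H(s) = 1/(s + 1), chosen only as an example:

```python
import numpy as np
from scipy.signal import bilinear

# Tustin (trapezoidal) mapping of the hypothetical H(s) = 1/(s + 1) to a
# discrete-time H(z), via scipy.signal.bilinear (fs = 1/Ts).
Ts = 0.1
b_z, a_z = bilinear([1.0], [1.0, 1.0], fs=1.0 / Ts)

# The substitution maps s = 0 to z = 1, so the DC gain is preserved exactly:
dc_gain = np.sum(b_z) / np.sum(a_z)   # H(z) evaluated at z = 1
```

Preservation of the DC gain, and of stability, holds for the Tustin transformation regardless of T_s, which is one reason it is the most popular of the three substitutions.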
1.6.2 Impulse Response Matching
The methods presented previously were based on approximating the integration or differentiation operator appearing in the differential (dynamic) equation of an LTI system. However, an LTI system can, also, be described in full by its response to an arbitrary input signal. Common choices include using either Dirac’s delta (impulse) or the unit step signal (Heaviside signal).
In effect, if the impulse or step response of a continuous-time system is sampled, then a discrete-time description can be derived as follows.
Invariance of impulse response: the impulse delta is defined as follows in discrete time

δ(n) = 1 for n = 0 , δ(n) = 0 for n ≠ 0 .

Given the transfer function H(z) of a discrete-time LTI system, it is easy to prove that, just like in the continuous-time case, H(z) is the Z-transform of the system's impulse response.
Indeed, the output y(n) of a discrete-time LTI system is given by the following convolutional sum

y(n) = h(n) ∗ u(n) = Σ_{m=0}^{n} h(m) u(n − m) .

In effect, for u(n) = δ(n) one obtains y(n) = h(n).
Consider a continuous-time LTI system with impulse response h(t), which is being sampled at rate f_s. This means that continuous-time signal h(t) corresponds to discrete-time signal h(n) = h(nT_s), i.e., a sample set of h(t).
The Z-transform of h(n),

H(z) = Σ_{n=0}^{∞} h(nT_s) z^{−n} ,

defines a discrete-time LTI system, the impulse response of which, by definition, coincides with that of continuous-time transfer function H(s) at all sampling instants.
Assume, now, that for every positive ε there is a positive integer M such that

|h(nT_s)| < ε for every n > M .

Given the above, one can proceed to impulse response truncation with accuracy ε. This procedure consists of approximating the original IIR system with transfer function H(z) by the following FIR one with memory M

H_M(z) = Σ_{n=0}^{M} h(nT_s) z^{−n} .
Approximating a continuous-time system with a discrete-time one, which on top might be FIR, comes with a number of advantages, especially for real-time applications.
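Impulse-response sampling and truncation can be sketched for a concrete case. The system H(s) = 1/(s + a), with impulse response h(t) = e^{−at}, and the values of a, T_s, and M below are hypothetical choices for the example.

```python
import numpy as np

# Impulse-response sampling and truncation, sketched for the hypothetical
# system H(s) = 1/(s + a), whose impulse response is h(t) = exp(-a*t).
a, Ts = 2.0, 0.05
M = 200                                  # chosen FIR memory
n = np.arange(M + 1)
h_fir = np.exp(-a * n * Ts)              # truncated sample set of h(t)

# For a decaying exponential, every neglected sample is below the first one:
eps = np.exp(-a * (M + 1) * Ts)          # truncation accuracy achieved
```

The vector `h_fir` is directly the coefficient set of the FIR approximation H_M(z), ready for real-time convolution.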
Invariance of step response: the unit step signal in discrete time is defined as follows

u_s(n) = 1 for n ≥ 0 , u_s(n) = 0 for n < 0 .

The Z-transform of the unit step signal is derived by applying the lemma for the infinite sum of a geometric progression with a ratio that is less than 1 in magnitude

Z{u_s(n)} = Σ_{n=0}^{∞} z^{−n} = 1 / (1 − z^{−1}) = z / (z − 1) , |z| > 1 .  (9.43)
Consider now a continuous-time LTI system with transfer function H(s) and unit step response y_s(t).
We will now determine a discrete-time transfer function H(z) such that the latter's unit step response coincides with that of the continuous-time system above at the sampling instants. Mathematically, this is defined as follows

H(z) Z{u_s(n)} = Z{y_s(nT_s)} .

Then, by employing (9.43), one obtains that

H(z) = (1 − z^{−1}) Z{y_s(nT_s)} .

Since the continuous-time step response is obtained from the transfer function as

y_s(t) = L^{−1}{H(s)/s} ,

we can finally obtain that

H(z) = (1 − z^{−1}) Z{ L^{−1}{H(s)/s} |_{t=nT_s} } .
1.6.3 Pole and Zero Matching of a Transfer Function
This method derives from equation

z = e^{sT_s} ,

connecting variable z with complex frequency s. Combining the above with the following factorized form of a continuous-time scalar transfer function

H(s) = k_c Π_{i=1}^{M} (s − z_i) / Π_{i=1}^{N} (s − p_i) ,  (9.49)

one can map every pole and every zero of the continuous-time transfer function to a corresponding pole and zero of an equivalent discrete-time transfer function

z_i ↦ e^{z_i T_s} , p_i ↦ e^{p_i T_s} .

In effect, the following discrete-time transfer function is derived in its factorized form

H(z) = k (z + 1)^{N−M} Π_{i=1}^{M} (z − e^{z_i T_s}) / Π_{i=1}^{N} (z − e^{p_i T_s}) .

Factor (z + 1)^{N−M} is introduced in the discrete-time transfer function above in correspondence to the N − M zeros at infinity hidden in the continuous-time transfer function (9.49). The intent is to equate the numerator degree to that of the denominator. This is achieved by introducing an extra zero with multiplicity N − M using factor (z + 1)^{N−M}; the zero is placed at z = −1 since the point ω = ω_s/2 (the highest frequency representable after sampling) on the s-plane is mapped to −1 on the z-plane.
Finally, gain k in the discrete-time transfer function is determined by equating its value at a certain z with that of the continuous-time transfer function at the equivalent point s. For example, for z = 1, the corresponding point is s = 0, and, therefore, k is calculated as follows

k 2^{N−M} Π_{i=1}^{M} (1 − e^{z_i T_s}) / Π_{i=1}^{N} (1 − e^{p_i T_s}) = H(s)|_{s=0} .
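The whole procedure can be sketched numerically. The transfer function H(s) = (s + 1)/((s + 2)(s + 3)) below is a hypothetical example with N − M = 1.

```python
import numpy as np

# Matched pole-zero mapping, sketched for the hypothetical transfer function
# H(s) = (s + 1) / ((s + 2)(s + 3)): each root s_i maps to exp(s_i*Ts), an
# extra zero at z = -1 covers the relative degree N - M = 1, and the gain k
# is matched at s = 0 <-> z = 1.
Ts = 0.1
zeros_s = np.array([-1.0])
poles_s = np.array([-2.0, -3.0])

zeros_z = np.exp(zeros_s * Ts)
poles_z = np.exp(poles_s * Ts)

num = np.poly(np.append(zeros_z, -1.0))   # numerator incl. the zero at z = -1
den = np.poly(poles_z)

H_s0 = (0.0 + 1.0) / ((0.0 + 2.0) * (0.0 + 3.0))         # H(s) at s = 0
k = H_s0 * np.polyval(den, 1.0) / np.polyval(num, 1.0)   # gain matching

H_z1 = k * np.polyval(num, 1.0) / np.polyval(den, 1.0)   # H(z) at z = 1
```

Because the continuous-time poles lie in the left half-plane, their images e^{p_i T_s} land inside the unit circle, confirming that the mapping preserves stability.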
2 Digital Filters
In the previous section we studied different ways of describing discrete-time systems that are linear and time invariant. It was verified that the Z-transform greatly simplifies the analysis of discrete-time systems, especially those initially described by difference equations.
In this section, we study in finer detail several structures used to realize a given transfer function associated with a specific difference equation through the use of the Z-transform. The transfer functions considered here will be of the polynomial form (non-recursive filters, FIR) and of the rational-polynomial form (recursive filters, IIR). In the non-recursive case we emphasize the existence of the important subclass of linear-phase filters. Then we introduce some tools to calculate the digital network transfer function, as well as to analyze its internal behavior. We also discuss some properties of generic digital filter structures associated with practical discrete-time systems.
2.1 Important FIR Filter Structures
Non-recursive filters are characterized by a difference equation of the form

y(n) = Σ_{l=0}^{M} b_l x(n − l) ,  (9.53)

where the b_l coefficients are directly related to the system impulse response; that is, h(l) = b_l for l = 0, 1, …, M. Owing to the finite length of their impulse responses, non-recursive filters are also referred to as finite-duration impulse response (FIR) filters. We can rewrite (9.53) as follows

y(n) = Σ_{l=0}^{M} h(l) x(n − l) = h(n) ∗ x(n) .  (9.54)

Applying the Z-transform to the equation above, we end up with the following input–output relationship

Y(z) = H(z) X(z) , H(z) = Σ_{l=0}^{M} h(l) z^{−l} .  (9.55)
In practical terms, (9.55) can be implemented in several distinct forms, using as basic elements the delay, the multiplier, and the adder blocks. These basic elements of digital filters and their corresponding standard symbols are depicted in Fig. 9.8. An alternative way of representing such elements is the so-called signal flow graph shown in Fig. 9.9.
These two sets of symbols representing the delay, multiplier, and adder elements are used interchangeably throughout this text.
2.1.1 Direct Form
The simplest realization of an FIR digital filter is derived from (9.55). The resulting structure, which can be seen in Fig. 9.10, is called the direct-form realization, as the multiplier coefficients are obtained directly from the filter transfer function. Such a structure is also referred to as the canonic direct form, where we understand canonic form to mean any structure that realizes a given transfer function with the minimum number of delays, multipliers, and adders. More specifically, a structure that utilizes the minimum number of delays is said to be canonical with respect to the delay element, and so on.
An alternative canonical direct form for (9.55) can be derived by expressing H(z) in nested form as follows

H(z) = h(0) + z^{−1} ( h(1) + z^{−1} ( h(2) + ⋯ + z^{−1} h(M) ⋯ ) ) .
The implementation of this form is shown in Fig. 9.11.
2.1.2 Cascade Form
Equation (9.55) can be realized through a series of equivalent structures. However, the coefficients of such distinct realizations may not be explicitly the filter impulse response or the corresponding transfer function. An important example of such a realization is the so-called cascade form, which consists of a series of second-order FIR filters connected in cascade, thus the name of the resulting structure, as seen in Fig. 9.12. The transfer function associated with such a realization is of the form

H(z) = Π_{k=1}^{m} ( b_{0k} + b_{1k} z^{−1} + b_{2k} z^{−2} ) .

In the above, if M is the filter order, then the number of sections is m = M/2 when M is even and m = (M + 1)/2 when M is odd. In the latter case, one of the b_{2k} coefficients vanishes, reducing the corresponding section to first order.
2.1.3 Linear-Phase Form
An important subclass of FIR digital filters is the one that includes linear-phase filters. Such filters are characterized by a constant group delay τ; therefore, they must present a frequency response of the following form

H(e^{jΩ}) = A(Ω) e^{−jτΩ + jφ} .

In the above, A(Ω) is real, and τ and φ are constant. We now proceed to show that linear-phase FIR filters present impulse responses of very particular forms. Specifically, if h(n) is to be causal and of finite duration, vanishing outside 0 ≤ n ≤ M, we must necessarily have that

τ = M/2 , φ = kπ/2 , k = 0, 1 .

Therefore, one obtains that

h(n) = (−1)^k h(M − n) , 0 ≤ n ≤ M .

This is the general equation that the coefficients of a linear-phase FIR filter must satisfy. In the common case where all the filter coefficients are real, one finally obtains that the filter impulse response must be either symmetric (k = 0) or antisymmetric (k = 1). In effect, the frequency response of linear-phase FIR filters with real coefficients becomes

H(e^{jΩ}) = e^{−j(ΩM/2 − kπ/2)} A(Ω) , A(Ω) real.
As a result, only four distinct cases need to be considered, described by the equations above. Their types have been standardized and are referred to in the literature as follows [9.2]:
- Type I: k = 0 and M even
- Type II: k = 0 and M odd
- Type III: k = 1 and M even
- Type IV: k = 1 and M odd.
Typical impulse responses of the four cases of linear-phase FIR digital filters are depicted in Fig. 9.13.
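The linear-phase property of a symmetric impulse response can be verified numerically. A minimal sketch for a Type I case (the coefficient values are illustrative): multiplying the frequency response by e^{jΩM/2} must cancel the phase entirely.

```python
import numpy as np

# A symmetric impulse response (Type I: M even, h(n) = h(M - n)) has linear
# phase: H(e^{jOmega}) * e^{j*Omega*M/2} is purely real at every frequency.
h = np.array([1.0, 2.0, 3.0, 2.0, 1.0])       # symmetric, order M = 4
M = len(h) - 1

Omega = np.linspace(0.0, np.pi, 64)
n = np.arange(M + 1)
H = np.array([np.sum(h * np.exp(-1j * w * n)) for w in Omega])
residual_phase = (H * np.exp(1j * Omega * M / 2)).imag   # ~0 everywhere
```

The cancellation shows a constant group delay of M/2 = 2 samples, independent of frequency, which is what "linear phase" means in practice.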
2.2 Important IIR Filter Structures
2.2.1 Direct Form
Recursive filters have transfer functions of the following form

H(z) = N(z) / D(z) = ( Σ_{l=0}^{M} b_l z^{−l} ) / ( 1 + Σ_{i=1}^{N} a_i z^{−i} ) .

Since, in most cases, such transfer functions give rise to filters with impulse responses having infinite durations, recursive filters are also referred to as infinite-duration impulse response (IIR) filters.
We can consider that H(z) as above results from the cascading of two separate filters with transfer functions N(z) and 1/D(z). The N(z) polynomial can be realized with the FIR direct form, as shown in the previous section. The realization of 1/D(z) can be performed as depicted in Fig. 9.14, where the FIR filter shown in the feedback path is an N-th order filter with transfer function as follows

D(z) − 1 = Σ_{i=1}^{N} a_i z^{−i} .
The direct form of realizing 1/D(z) is shown in Fig. 9.15.
The complete realization of H(z), as a cascade of N(z) and 1/D(z), is shown in Fig. 9.16. Such a structure is not canonic with respect to the delays, since for an N-th order filter this realization requires M + N delays.
Clearly, in the general case we can change the order in which we cascade the two separate filters; that is, H(z) can be realized as

H(z) = N(z) [1/D(z)] = [1/D(z)] N(z) .

In the second option, all delays employed start from the same node, which allows us to eliminate the consequent redundant delays. In that manner, the resulting structure, usually referred to as the Type 1 canonic direct form, is the one depicted in Fig. 9.17 for the special case N = M.
An alternative structure, the so-called Type 2 canonic direct form, is shown in Fig. 9.18. Such a realization is generated from the corresponding non-recursive form.
The majority of IIR filter transfer functions used in practice present a numerator degree M smaller than or equal to the denominator degree N. In general, one can consider, without much loss of generality, that M = N. In the case where M < N, we just make the numerator coefficients b_l, for M < l ≤ N, in Figs. 9.17 and 9.18 equal to zero.
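A canonic direct form of this kind can be sketched with a single shared delay line, one state per delay. The second-order coefficients below are hypothetical, and the loop is cross-checked against `scipy.signal.lfilter`.

```python
import numpy as np
from scipy.signal import lfilter

# Canonic direct form (a sketch): one shared delay line w holds the states,
# so an M = N = 2 filter uses only two delays. Coefficients are hypothetical.
b = np.array([0.2, 0.3, 0.2])
a = np.array([1.0, -0.5, 0.25])

def direct_form_2(b, a, u):
    w = np.zeros(len(a) - 1)                   # shared states w(n-1), w(n-2)
    y = np.zeros_like(u)
    for n, un in enumerate(u):
        w0 = un - np.dot(a[1:], w)             # recursive (denominator) part
        y[n] = b[0] * w0 + np.dot(b[1:], w)    # non-recursive (numerator) part
        w = np.concatenate(([w0], w[:-1]))     # shift the delay line
    return y

u = np.cos(0.3 * np.arange(30))                # illustrative input
y = direct_form_2(b, a, u)
```

The same two stored values feed both the numerator and the denominator computation, which is exactly the delay sharing that makes the structure canonic.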
2.2.2 Cascade Form
In the same way as their FIR counterparts, IIR digital filters present a large variety of possible alternative realizations. An important one, referred to as the cascade realization, is depicted in Fig. 9.19a, where the basic blocks represent simple transfer functions of order 2 or 1. In fact, the cascade form based on second-order blocks is associated with the following transfer function decomposition

H(z) = Π_{k=1}^{m} ( b_{0k} + b_{1k} z^{−1} + b_{2k} z^{−2} ) / ( 1 + a_{1k} z^{−1} + a_{2k} z^{−2} ) .
2.2.3 Parallel Form
Another important realization for recursive digital filters is the parallel form represented in Fig. 9.19b. Using second-order blocks, which are the most commonly used in practice, the parallel realization corresponds to the following transfer function decomposition

H(z) = h_0 + Σ_{k=1}^{m} ( b_{0k} + b_{1k} z^{−1} ) / ( 1 + a_{1k} z^{−1} + a_{2k} z^{−2} ) .
The above is also known as the partial-fraction decomposition. This equation indicates three alternative forms of the parallel realization, where the last two are canonic with respect to the number of multiplier elements.
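The partial-fraction decomposition behind the parallel form is available as `scipy.signal.residuez`. A sketch for a hypothetical second-order H(z) with two real poles, split into first-order parallel branches:

```python
import numpy as np
from scipy.signal import residuez

# Partial-fraction (parallel-form) decomposition of the hypothetical
# H(z) = (1 - 0.4 z^-1) / (1 - 0.9 z^-1 + 0.2 z^-2) into branches
# r_i / (1 - p_i z^-1), using scipy.signal.residuez.
b = [1.0, -0.4]
a = [1.0, -0.9, 0.2]
r, p, k = residuez(b, a)        # residues, poles, direct (FIR) terms

# The sum of the parallel branches must reproduce H(z) at any test point:
z = 1.3
H_direct = np.polyval(b[::-1], 1.0 / z) / np.polyval(a[::-1], 1.0 / z)
H_parallel = np.sum(r / (1.0 - p / z))
```

Each returned pole p_i defines one first-order recursive branch and each residue r_i its gain; pairing complex-conjugate branches yields the real second-order blocks of Fig. 9.19b.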
It should be mentioned that each second-order block in the cascade and parallel forms can be realized by any of the existing distinct structures, as, for instance, one of the direct forms shown in Fig. 9.20.
All these digital filter realizations present different properties when one considers practical finite-precision implementations; that is, the quantization of the coefficients and the finite precision of the arithmetic operations, such as additions and multiplications. In fact, the analysis of the finite-precision effects in the distinct realizations is a fundamental step in the overall process of designing any digital filter [9.2].
3 The Fast Fourier Transform (FFT)
The FFT (fast Fourier transform) algorithm is a faster version of the discrete Fourier transform (DFT). The FFT utilizes some clever algorithms to do the same thing as the DFT, but in much less time [9.2].
The DFT is extremely important in the area of frequency (spectral) analysis because it takes a discrete signal in the time domain and transforms that signal into its discrete frequency-domain representation. Without a discrete-time to discrete-frequency transform we would not be able to compute the Fourier transform with a microprocessor or DSP-based (DSP: digital signal processor) system [9.1, 9.2, 9.4, 9.5, 9.6]. It is the speed and discrete nature of the FFT that allows us to analyze a signal's spectrum, as will soon become evident.
3.1 Review of Integral Transforms
We first give a review of the integral transforms that have been used in the text, possibly in their unilateral (single-sided) versions.
The bilateral Laplace transform

X(s) = ∫_{−∞}^{+∞} x(t) e^{−st} dt .

The continuous-time Fourier transform

X(ω) = ∫_{−∞}^{+∞} x(t) e^{−jωt} dt .

The bilateral Z-transform

X(z) = Σ_{n=−∞}^{+∞} x(n) z^{−n} .
The Laplace transform is used to obtain a pole-zero representation of a continuous-time signal or system, x(t), in the s-plane. Similarly, the Z-transform is used to find a pole-zero representation of a discrete-time signal or system, x(n), in the z-plane.
The continuous-time Fourier transform can be found by evaluating the Laplace transform at s = jω. The picture can be extended by introducing the discrete-time Fourier transform (DTFT). The DTFT can be found by evaluating the Z-transform at z = e^{jΩ}, as follows

X(e^{jΩ}) = Σ_{n=−∞}^{+∞} x(n) e^{−jΩn} .

One needs to point out here that the frequency variable Ω is in normalized units of radians per sample, rather than the absolute units of rad/s which apply to ω appearing in the standard Fourier transform. This can be justified by recollecting that sequences x(n) are generated by sampling a continuous-time signal x(t) at a certain sampling rate f_s. In this respect, the DTFT can be viewed as a discrete approximation of the Fourier transform, with

Ω = ωT_s = ω / f_s .
3.2 The Discrete Fourier Transform (DFT)
First of all, the discrete Fourier transform (DFT) is not the same as the DTFT. Both start with a discrete-time signal, but DFT produces a discrete frequency domain representation while DTFT is continuous in the frequency domain. These two transforms have much in common, however. It is, therefore, helpful to have a basic understanding of the properties of the DTFT.
3.2.1 Periodicity
The DTFT is periodic because the signal is a discrete-time one. Indeed,

X(e^{j(Ω + 2π)}) = Σ_n x(n) e^{−j(Ω + 2π)n} = Σ_n x(n) e^{−jΩn} = X(e^{jΩ}) .

One fundamental period is, therefore, 2π in terms of Ω, i.e., it extends from f = 0 to f_s, where f_s is the sampling frequency. Taking advantage of this redundancy, the DFT is only defined in the region between 0 and f_s in terms of f, or between 0 and 2π in terms of Ω.
3.2.2 Symmetry
When the region between 0 and f_s is examined, it can be seen that, for real signals, there is even symmetry of the amplitude spectrum around the center point f_s/2 (half the sampling rate). Indeed,

X(e^{j(2π − Ω)}) = X*(e^{jΩ}) ,

where * denotes complex conjugation.
This symmetry adds redundant information. Figure 9.21 shows the DFT (implemented with Matlab’s FFT function) of a cosine with a frequency one tenth of the sampling frequency. Note that the data between 0.5fs and fs is a mirror image of the data between 0 and 0.5fs.
Therefore, the discrete Fourier transform (DFT) can be introduced as follows

$$X(k) = \sum_{n=0}^{N-1} x(n)\,\mathrm{e}^{-\mathrm{j}2\pi k n/N}\,,\qquad k = 0, 1, \ldots, N-1\,.$$

Note that the above is actually a transformation between a finite-length real or complex sequence x(n), corresponding to a segment of a real or complex signal sampled at rate $f_{\mathrm{s}} = 1/T$ with actual duration NT, and a generally complex sequence (discrete spectrum) of equal finite length N corresponding to frequency values in the range between 0 and fs with a step equal to fs/N; note that the corresponding range in terms of Ω is 0 to 2π and the step 2π/N.
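The DFT sum can be implemented directly from its definition; the small Python sketch below (sequence length and tone chosen here for illustration) also exhibits the mirror symmetry of a real input's spectrum noted above, with the cosine at fs/10 appearing at bin k = 2 and its mirror at k = N − 2:

```python
import cmath
import math

def dft(x):
    # direct evaluation of X(k) = sum_{n=0}^{N-1} x(n) e^{-j 2 pi k n / N}
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

N = 20
x = [math.cos(2 * math.pi * 2 * n / N) for n in range(N)]  # cosine at one tenth of fs
X = dft(x)
# for real x: X(N - k) is the complex conjugate of X(k)
```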
3.2.3 Fast Fourier Transform (FFT)
FFT is simply an algorithm to speed up the DFT calculation by reducing the number of multiplications and additions required. It was popularized by J.W. Cooley and J.W. Tukey in the 1960s and was actually a rediscovery of an idea of Runge (1903) and of Danielson and Lanczos (1942), first occurring prior to the availability of computers and calculators, when numerical calculation could take many man-hours. In fact, the German mathematician Carl Friedrich Gauss (1777–1855) had used the method more than a century earlier.
In order to understand the basic concepts of FFT and its derivation, note that the DFT expansion shown in Table 9.3 can be greatly simplified by taking advantage of the symmetry and periodicity of the twiddle factors, as shown in Table 9.4. If the equations are rearranged and factored, the result is the fast Fourier transform (FFT), which requires only $(N/2)\log_2 N$ complex multiplications. The computational efficiency of FFT versus DFT becomes highly significant when the FFT point size increases to several thousand, as shown in Table 9.5. However, notice that the FFT computes all the output frequency components (either all or none!). If only a few spectral points need to be calculated, the DFT may actually be more efficient; calculation of a single spectral output using the DFT requires only N complex multiplications.
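A minimal recursive radix-2 decimation-in-time sketch in Python conveys the rearrangement idea (production FFTs use the iterative, in-place form described below; the sample values here are arbitrary):

```python
import cmath

def fft(x):
    # recursive radix-2 decimation-in-time FFT; len(x) must be a power of two
    N = len(x)
    if N == 1:
        return [complex(x[0])]
    even = fft(x[0::2])                  # N/2-point DFT of the even-indexed samples
    odd = fft(x[1::2])                   # N/2-point DFT of the odd-indexed samples
    X = [0j] * N
    for k in range(N // 2):
        t = cmath.exp(-2j * cmath.pi * k / N) * odd[k]  # twiddle factor times odd part
        X[k] = even[k] + t               # butterfly, top output
        X[k + N // 2] = even[k] - t      # butterfly, bottom output
    return X

x = [1.0, 2.0, 0.0, -1.0, 1.5, 0.5, -2.0, 3.0]
X = fft(x)                               # X[0] is the sum, X[4] the alternating sum
```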
The radix-2 FFT algorithm breaks the entire DFT calculation down into a number of two-point DFTs. Each two-point DFT consists of a multiply-and-accumulate operation called a butterfly, as shown in Fig. 9.22. Two representations of the butterfly are shown in the diagram: the top diagram is the actual functional representation of the butterfly showing the digital multipliers and adders. In the simplified bottom diagram, the multiplications are indicated by placing the multiplier over an arrow, and addition is indicated whenever two arrows converge at a dot.
The eight-point decimation-in-time (DIT) FFT algorithm computes the final output in three stages, as shown in Fig. 9.23. The eight input time samples are first divided (or decimated) into four groups of two-point DFTs. The four two-point DFTs are then combined into two four-point DFTs. The two four-point DFTs are then combined to produce the final output X(k). The detailed process is shown in Fig. 9.24, where all the multiplications and additions are shown. Note that the basic two-point DFT butterfly operation forms the basis for all computations. The computation is done in three stages. After the first-stage computation is complete, there is no need to store any previous results: the first-stage outputs can be stored in the same registers that originally held the time samples x(n). Similarly, when the second-stage computation is completed, the results of the first-stage computation can be deleted. In this way, in-place computation proceeds to the final stage. Note that in order for the algorithm to work properly, the input time samples, x(n), must be properly re-ordered using a bit reversal algorithm.
The bit reversal algorithm used to perform this re-ordering is shown in Table 9.6. The decimal index, n, is converted to its binary equivalent. The binary bits are then placed in reverse order and converted back to a decimal number. Bit reversing is often performed in DSP hardware in the data address generator, thereby simplifying the software, reducing overhead, and speeding up the computations.
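The re-ordering can be sketched in Python as follows; for N = 8 it reproduces the familiar order 0, 4, 2, 6, 1, 5, 3, 7:

```python
def bit_reverse_order(x):
    # permute a power-of-two-length list into bit-reversed index order
    N = len(x)
    bits = N.bit_length() - 1            # number of address bits
    def rev(n):
        r = 0
        for _ in range(bits):
            r = (r << 1) | (n & 1)       # shift reversed word left, append LSB of n
            n >>= 1
        return r
    return [x[rev(n)] for n in range(N)]

order = bit_reverse_order(list(range(8)))
```

Since bit reversal is an involution, applying the permutation twice restores the original order.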
The computation of the FFT using decimation-in-frequency (DIF) is shown in Figs. 9.25 and 9.26. This method requires that the bit reversal algorithm be applied to the output X(k). Note that the butterfly for the DIF algorithm differs slightly from the decimation-in-time butterfly, as shown in Fig. 9.27. The use of decimation-in-time versus decimation-in-frequency algorithms is largely a matter of preference, as either yields the same result; system constraints may make one of the two a more optimal solution. It should be noted that the algorithms required to compute the inverse FFT are nearly identical to those required to compute the FFT, assuming complex FFTs are used. In fact, a useful method for verifying a complex FFT algorithm consists of first taking the FFT of the x(n) time samples and then taking the inverse FFT of X(k). At the end of this process, the original time samples, x(n), should be obtained and the imaginary part should be zero (within the limits of mathematical round-off errors).
The FFTs discussed up to this point are radix-2 FFTs, i. e., the computations are based on two-point butterflies. This implies that the number of points in the FFT algorithms must be a power of 2. However, non-radix-2 FFT algorithms have been developed and are available in modern computational software packages like Matlab, Mathematica, Maple, etc., to be used in a variety of applications in ocean engineering as well as beyond [9.1, 9.2, 9.4, 9.5, 9.6].
4 Waveform Analysis
A waveform is one recording of a deterministic signal or that of an instance of a random signal (also referred to in the literature as a stochastic process). Actually, in practice we can make a recording of finite duration, which means that a waveform is the set of signal recordings over possibly non-contiguous time intervals. Typical examples of (lumped) waveforms in ocean engineering are significant wave height at a point or average, oceanic temperature or salinity, acoustic recordings of marine life, sonar or radar signals, etc. [9.3].
4.1 Definitions for Waveforms and Random Signals
The time mean (or average) of a waveform is defined as follows in the continuous time domain

$$\bar{x} = \lim_{T\to\infty} \frac{1}{2T}\int_{-T}^{+T} x(t)\,\mathrm{d}t\,. \quad (9.74)$$
The time-wise cross-correlation of two waveforms is defined as follows

$$R_{yx}(\tau) = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{+T} y(t)\,x(t-\tau)\,\mathrm{d}t\,. \quad (9.75)$$

In the above, the integrand matches that of the convolution $[y \ast x^{-}](\tau)$ between signal y and signal $x^{-}(t) = x(-t)$, which is the mirrored version of signal x about the origin of time (t = 0), evaluated at time shift τ, i.e.,

$$[y \ast x^{-}](\tau) = \int_{-\infty}^{+\infty} y(t)\,x^{-}(\tau - t)\,\mathrm{d}t = \int_{-\infty}^{+\infty} y(t)\,x(t-\tau)\,\mathrm{d}t\,.$$
Also, note that cross-correlation as defined above can be used to introduce (time-wise) autocorrelation of a single waveform if we set y ≡ x.
The limiting process to infinity for time variable T introduced in the equations above is required in the case when the waveform has infinite duration so that the definitions make sense and do not give rise to indeterminate or infinite sums.
On the other hand, if the waveform (or at least one of them in the cross-correlation case) is finite in duration, then the time interval (0, T) needs to be set equal to the full finite, and not infinite, duration of the signal. In this case, the definitions need to look like the following to avoid any issues

$$\bar{x} = \frac{1}{T}\int_{0}^{T} x(t)\,\mathrm{d}t\,,\qquad R_{yx}(\tau) = \frac{1}{T}\int_{0}^{T} y(t)\,x(t-\tau)\,\mathrm{d}t\,.$$
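In discrete time, one possible form of these finite-duration estimates is sketched below in Python (normalization conventions vary; the waveforms are illustrative). The cross-correlation peaks at the lag equal to the delay between the two recordings:

```python
def time_mean(x):
    # finite-duration time mean over the full recording
    return sum(x) / len(x)

def cross_corr(y, x, tau):
    # finite-duration estimate of R_yx(tau): average of y(n) x(n - tau), tau >= 0
    N = min(len(x), len(y))
    return sum(y[n] * x[n - tau] for n in range(tau, N)) / N

x = [1.0, 0.0, -1.0, 0.0] * 8            # zero-mean, period-4 waveform
y = [0.0] + x[:-1]                       # the same waveform delayed by one sample
```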
The above equations are well defined in the case of deterministic signals and waveforms that occur as instances of random signals. Specifically, a random signal, denoted with a capital letter, e.g., X(t), in contrast to the lowercase letters used for deterministic signals, is associated with a probability density function (PDF) as follows

$$f_X(x;t)\,\mathrm{d}x = P[x < X(t) \le x + \mathrm{d}x]\,. \quad (9.78)$$
As well established by probability theory, the probability density function is, in turn, associated with a cumulative distribution function (CDF) for random signal X(t) as follows

$$F_X(x;t) = P[X(t) \le x] = \int_{-\infty}^{x} f_X(u;t)\,\mathrm{d}u\,. \quad (9.79)$$

One can easily verify by combining (9.78) with (9.79) that

$$F_X(-\infty;t) = 0\,,\qquad F_X(+\infty;t) = 1$$

holds. Finally, and assuming certain smoothness conditions hold, the PDF can be calculated as the derivative of the CDF for the same random variable, i.e.,

$$f_X(x;t) = \frac{\partial F_X(x;t)}{\partial x}\,.$$
For a random signal one can, in effect, define a probabilistic mean value as follows

$$\mu_X(t) = \mathrm{E}[X(t)] = \int_{-\infty}^{+\infty} x\,f_X(x;t)\,\mathrm{d}x\,.$$
Along the same lines a probabilistic correlation (cross- and/or autocorrelation) can be introduced

$$R_{XY}(t_1,t_2) = \mathrm{E}[X(t_1)\,Y(t_2)] = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} x\,y\,f_{XY}(x,y;t_1,t_2)\,\mathrm{d}x\,\mathrm{d}y\,.$$
To properly define it, though, the joint probability density function needs to be introduced; as can be seen below, it quantifies the probability of random signal X assuming a value in an infinitesimal vicinity of x at instant t1 and, jointly, random signal Y assuming a value in an infinitesimal vicinity of y at instant t2

$$f_{XY}(x,y;t_1,t_2)\,\mathrm{d}x\,\mathrm{d}y = P[x < X(t_1) \le x+\mathrm{d}x,\; y < Y(t_2) \le y+\mathrm{d}y]\,. \quad (9.84)$$
In effect, the probability on the right-hand side in the above is that of the joint event of finding the values of signals X and Y at certain time instants within an arbitrary infinitesimal rectangle on the XY plane. Applying Bayes' theorem to the joint probability on the right-hand side of (9.84), one can derive the following equations involving conditional probabilities

$$f_{XY}(x,y;t_1,t_2) = f_{X|Y}(x|y;t_1,t_2)\,f_{Y}(y;t_2) = f_{Y|X}(y|x;t_1,t_2)\,f_{X}(x;t_1)\,. \quad (9.85)$$
In the case that random variables X and Y are independent (not to be confused with mutually exclusive variables or events), then using (9.85) for the conditional probabilities one can see that the following holds

$$f_{XY}(x,y;t_1,t_2) = f_{X}(x;t_1)\,f_{Y}(y;t_2)\,.$$
A very important category of random signals is that of wide sense stationary (WSS) signals. A WSS signal is one that has: (a) constant, i.e., time-invariant, probabilistic mean, and (b) autocorrelation solely depending on the time shift (delay) and not on the individual time instants. In effect, for a random signal to be WSS it must hold that

$$\mathrm{E}[X(t)] = \mu_X = \mathrm{const.}\,,\qquad R_{XX}(t_1,t_2) = R_{XX}(t_1 - t_2)\,.$$

In part, WSS signals are important because both the probabilistic mean value and autocorrelation can be obtained as the time-wise mean and autocorrelation of any one of its instances, x(t), introduced for waveforms earlier. For this to be possible, though, the random signal (stochastic process) X needs to be ergodic on top of WSS. Indeed, for an ergodic process, statistical (probabilistic) properties (such as its mean, correlation, and variance) can be deduced from a single, sufficiently long instance (sample realization) of the process. Therefore, for an ergodic WSS process, the following hold

$$\mu_X = \bar{x}\,, \quad (9.89)$$
$$R_{XX}(\tau) = R_{xx}(\tau)\,. \quad (9.90)$$
It should be noted, however, that for the above equations to hold exactly, the integrals over time in (9.74) and (9.75) need to span the entire real line from minus to plus infinity. If not, then (9.89) and (9.90) provide only an estimate of the mean and autocorrelation values of the random signal; the accuracy of the estimate (probabilistically) increases as time parameter T in the time-wise integrals of (9.74) and (9.75) grows larger.
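This statement can be checked numerically: for an ergodic process, the time average of one long realization approaches the probabilistic mean. A Python sketch with an i.i.d. Gaussian process, a trivially ergodic WSS example whose numbers are purely illustrative:

```python
import random

random.seed(1)
mu_true = 2.0                            # probabilistic (ensemble) mean of the process
x = [mu_true + random.gauss(0.0, 1.0) for _ in range(100_000)]  # one realization
mu_hat = sum(x) / len(x)                 # time average of the single realization
# mu_hat approaches mu_true as the record length grows
```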
A final note is made regarding strictly or strongly stationary processes versus WSS ones. A strictly stationary process is a stochastic process whose joint probability distribution does not change when shifted in time. This requirement is much stronger than just mean and autocorrelation and, therefore, much harder to meet or assume in practical situations; that is why in the remaining text we will only employ WSS as well as ergodicity.
4.2 Signal Power and Power Spectral Density
For a waveform x(t), i.e., a deterministic signal or an instance of a random one, the following definitions are given:

1. The time mean $\bar{x}$ is the direct current (DC) offset (component) of the waveform, and $\bar{x}^{2}$ is the DC power of it.

2. Consider

$$P = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{+T} x^{2}(t)\,\mathrm{d}t\,. \quad (9.91)$$

This is the (total averaged) power of the waveform.

3. Then consider

$$\sigma_{x}^{2} = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{+T}\left(x(t)-\bar{x}\right)^{2}\mathrm{d}t = P - \bar{x}^{2}\,. \quad (9.92)$$

This is the covariance or alternating current (AC) power of the waveform. Also, $\sigma_{x}$ is the standard deviation or RMS (root-mean-square) value of the waveform.

The use of the term power here stands for electric power, i.e., the waveform is considered as a voltage across, or electric current through, a reference resistance equal to 1 ohm.
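These definitions can be verified on a sampled waveform; the Python sketch below uses a tone riding on a DC offset (values chosen for illustration) and checks the decomposition total power = DC power + AC power:

```python
import math

# 10 full periods of a unit cosine on a DC offset of 2
x = [2.0 + math.cos(2 * math.pi * n / 16) for n in range(160)]

dc = sum(x) / len(x)                            # time mean: DC offset
P = sum(v * v for v in x) / len(x)              # total averaged power
ac = sum((v - dc) ** 2 for v in x) / len(x)     # AC power (variance)
rms_ac = math.sqrt(ac)                          # RMS value of the AC component
# decomposition: P = dc**2 + ac  (DC power plus AC power)
```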
Finally, the following notation is introduced for the cross-correlation (for two waveforms) or the autocorrelation (for a single waveform) evaluated at zero time shift

$$P_{xy} = R_{xy}(0)\,,\qquad P_{x} = R_{xx}(0)\,.$$

In the case of two distinct waveforms, $R_{xy}(0)$ is also identified as their dot product. In the case of a single waveform, $R_{xx}(0)$ is its power.
In the frequency domain and using the two-sided Fourier transform as introduced in (9.8), the (cross) power spectral density (PSD) can be defined for a couple of waveforms as follows

$$\Phi_{xy}(\omega) = \mathcal{F}\{R_{xy}(\tau)\}\,.$$

Note that for a real waveform it holds that

$$X(-\mathrm{j}\omega) = X^{*}(\mathrm{j}\omega)\,.$$
Furthermore
Therefore, for a real waveform it holds that
Then, using (9.94) one can directly derive the following
Moreover, in the case of autocorrelation, the PSD obtains the following form

$$\Phi_{xx}(\omega) = \mathcal{F}\{R_{xx}(\tau)\} = \int_{-\infty}^{+\infty} R_{xx}(\tau)\,\mathrm{e}^{-\mathrm{j}\omega\tau}\,\mathrm{d}\tau\,.$$

In effect, the PSD of a single signal is an even, non-negative, real function of frequency. Also, the autocorrelation is an even function of the time shift τ, i.e., $R_{xx}(-\tau) = R_{xx}(\tau)$.
A final note has to do with the extension of the PSD concept to random signals (stochastic processes in mathematical terminology). Assuming ergodicity and WSS, the Wiener–Khinchin theorem can be employed to introduce a PSD for a random signal X or at least a power spectral distribution function as follows
For the above to hold, the autocorrelation of X needs to exist and be finite for any value of the time-shift variable τ, but its Fourier transform may not be well defined. Indeed, the Fourier transform of a random signal may not exist in general, because stationary stochastic processes are not generally square or absolutely integrable. Nor does their autocorrelation need to be absolutely integrable, so it need not have a Fourier transform, either. However, in most practical applications the autocorrelation is integrable and satisfies the conditions for a Fourier transform to exist. In this case, the PSD can be introduced through the Fourier transform as follows

$$\Phi_{XX}(\omega) = \int_{-\infty}^{+\infty} R_{XX}(\tau)\,\mathrm{e}^{-\mathrm{j}\omega\tau}\,\mathrm{d}\tau\,.$$
Finally, in this case the PSD is the averaged derivative of the power spectral distribution function introduced previously. This is why the latter is also referred to as the integrated spectrum of the stochastic process.
4.3 Waveform Propagation Through a Linear, Time-Invariant System
Consider a linear, time-invariant (LTI) system with a single input and a single output (SISO), as shown in Fig. 9.28, driven by input waveform x(t) and generating as response output waveform y(t). The system, as well as the waveforms at its input and output, is assumed to be in continuous time. Then, the output can be determined as the convolution of the LTI system's scalar impulse response with the input waveform

$$y(t) = (h \ast x)(t) = \int_{-\infty}^{+\infty} h(\theta)\,x(t-\theta)\,\mathrm{d}\theta\,.$$
In the above, h(t) stands for the system's impulse response, which can be determined on the basis of the system's transfer function H(s) in the complex frequency (Laplace transform) domain

$$h(t) = \mathcal{L}^{-1}\{H(s)\}\,.$$
It is noted here that for a causal system, the convolutional integral must be finitely bounded as follows

$$y(t) = \int_{0}^{+\infty} h(\theta)\,x(t-\theta)\,\mathrm{d}\theta\,.$$
For the means of the input and output waveforms the following can be derived

$$\bar{y} = \bar{x}\int_{-\infty}^{+\infty} h(\theta)\,\mathrm{d}\theta = \bar{x}\,H(0)\,. \quad (9.103)$$
For the various correlations, as well as the PSD of the input and output, the following hold

$$R_{yx}(\tau) = (h \ast R_{xx})(\tau)\,, \quad (9.104)$$
$$\Phi_{yx}(\omega) = H(\mathrm{j}\omega)\,\Phi_{xx}(\omega)\,, \quad (9.105)$$
$$\Phi_{yy}(\omega) = |H(\mathrm{j}\omega)|^{2}\,\Phi_{xx}(\omega)\,. \quad (9.106)$$
All of the above can be derived on the basis of the definitions and properties given previously, as well as the commutativity and associativity of scalar convolution. For example, in the case of (9.104),

$$R_{yx}(\tau) = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{+T} y(t)\,x(t-\tau)\,\mathrm{d}t = \int_{-\infty}^{+\infty} h(\theta)\,R_{xx}(\tau-\theta)\,\mathrm{d}\theta = (h \ast R_{xx})(\tau)\,.$$
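One of these relations is easy to check numerically: the output mean equals the input mean times the integral (here, sum) of the impulse response, i.e., the DC gain H(0). A Python sketch with an illustrative FIR system and a random input:

```python
import random

random.seed(7)
h = [1.0, 0.5, 0.25]                     # FIR impulse response; H(0) = sum(h) = 1.75
x = [1.5 + random.uniform(-1.0, 1.0) for _ in range(50_000)]   # input with mean 1.5

def convolve(h, x):
    # y(n) = sum_k h(k) x(n - k), the discrete counterpart of the convolution of h and x
    return [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
            for n in range(len(x))]

y = convolve(h, x)
mean_x = sum(x) / len(x)
mean_y = sum(y) / len(y)                 # ~ mean_x * sum(h), i.e., mean_x * H(0)
```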
5 Optimal Signal Estimation
5.1 System Identification
One of the most fundamental problems in science and engineering is the determination of a system’s dynamics, preferably in the form of a mathematical model, when presented with the system response(s) to given deterministic or random driving inputs. This is the fundamental system identification problem [9.7, 9.8].
In the case of an LTI system, especially in the case of discrete-time or sampled-data systems, system identification is essentially equivalent to a problem of minimum square error approximation. Consider the problem statement as shown in Fig. 9.29.
A fundamental prerequisite for the system identification problem to have a solution in its basic form is that the additive measurement noise superimposed on the system output is of zero mean and uncorrelated with the driving forcing signal applied as input to the system. Therefore, the following two conditions must hold

$$\mathrm{E}[v(t)] = 0\,, \quad (9.108)$$
$$R_{vx}(\tau) = 0 \quad \forall\,\tau\,. \quad (9.109)$$
At this point it is important to emphasize the importance of the definition of the appropriate time window T in (9.75). Indeed, it is possible that for some (commonly small) value of T condition (9.109) may not be satisfied with sufficient accuracy. However, assuming that condition (9.109) is satisfied for an infinite time window as well as appropriate stationarity and ergodicity assumptions, it is possible to determine a finite value for T such that condition (9.109) is met at arbitrarily small (epsilon) accuracy.
Given the property in (9.109) and by use of the linearity property of both cross-correlation and PSD, as well as (9.105), one can derive the following

$$\Phi_{yx}(\omega) = H(\mathrm{j}\omega)\,\Phi_{xx}(\omega)\,. \quad (9.110)$$

The above holds since $R_{vx}(\tau) = 0$ and, therefore, $\Phi_{vx}(\omega) = 0$. In effect, the unknown transfer function of the LTI system undergoing identification is obtained as follows

$$H(\mathrm{j}\omega) = \frac{\Phi_{yx}(\omega)}{\Phi_{xx}(\omega)}\,. \quad (9.111)$$

In the time domain, the equivalent equation to (9.111) is

$$R_{yx}(\tau) = \int_{0}^{+\infty} h(\theta)\,R_{xx}(\tau-\theta)\,\mathrm{d}\theta\,. \quad (9.112)$$

In the above equation, the unknown function is the impulse response, h(t). The integral equation is known as the Wiener–Hopf equation with infinite time horizon [9.1, 9.2, 9.4, 9.5, 9.6].
What is most important in both (9.111) and (9.112) is that system identification using them can be implemented with either deterministic or random signals. In effect, signals, x(t), v(t) and y(t) may be known only probabilistically (statistically); this means that the individual instances (realizations) of the signals do not need to be known as long as their statistics are, e. g., means, correlations, or covariance coefficients.
Typically in practice, statistical characterization of signals can be effectively, yet approximately, performed by taking advantage of ergodicity and stationarity, if they apply. Assuming that these hold, then a sufficiently long, yet finite, recording of a single realization of a random signal allows for its statistical characterization in terms of its autocorrelation and cross-correlation with other signals. Such a recording, e. g., for v(t) or x(t) in the case of our system identification framework, can be performed once and be used along with measurements for y(t) to identify the system; all due to ergodicity and stationarity.
5.2 Discrete-Time Wiener–Hopf Equation over a Finite-Duration Window
Integral (9.112), despite its immense theoretical value, does not really indicate how it can be applied in practice for system identification. In this passage, we will introduce a methodology allowing identification of an LTI-SISO system when presented only with sampled-data recordings of x(t) and y(t), with a sampling rate meeting the Nyquist requirement for both. Furthermore, it will be assumed that the observation window (time horizon) of the recordings of the forcing and the noisy response is finite in duration instead of infinite.
The outcome of the processing will be the determination of a given finite number of impulse response samples for the unknown system undergoing identification. In effect, it is possible to define a discrete-time FIR (finite impulse response) system either in the time domain, by its very impulse response, or as a transfer function without any poles in the Z-transform domain. In this way, the FIR system will approximate the behavior of the unknown LTI system undergoing identification in the sense of (truncated) impulse response matching.
We start from the integral equation for y(t) given previously

$$y(t) = \int_{-\infty}^{+\infty} h(\theta)\,x(t-\theta)\,\mathrm{d}\theta + v(t)\,.$$

However, to satisfy causality for the unknown system the following needs to be introduced

$$h(t) = 0\,,\quad t < 0\,.$$

Converting the above to discrete time with sampling interval Ts, compliant with the Nyquist rate criterion, one can derive the following

$$y(n) = T_{\mathrm{s}}\sum_{k=0}^{+\infty} h(k)\,x(n-k) + v(n)\,. \quad (9.116)$$

Without loss of generality we can assume that $T_{\mathrm{s}} = 1$, since any other sampling interval value can be absorbed multiplicatively in the impulse response sample values.
An important prerequisite for our identification method to be successful is that the unknown LTI system is stable. If this holds, then for an arbitrarily small ε one can find a positive integer M such that

$$\sum_{k=M+1}^{+\infty} |h(k)| < \varepsilon\,.$$

In effect, with epsilon accuracy it is possible to truncate the impulse response of the unknown system. We note here that impulse response truncation amounts to approximating the system undergoing identification by the following FIR one with memory M

$$H(z) = \sum_{k=0}^{M} h(k)\,z^{-k}\,.$$

Using the approximation above, (9.116) yields

$$y(n) = \sum_{k=0}^{M} h(k)\,x(n-k) + v(n)\,.$$
Furthermore, assuming that the recording for waveform y(t) spans a finite interval $[0, NT_{\mathrm{s}}]$, the identification problem can be reduced to determining the $(M+1)$ samples of the impulse response, h(n), $n = 0, \ldots, M$, of the unknown system so that the following $(N+1)$ equations are satisfied

$$y(n) = \sum_{k=0}^{M} h(k)\,x(n-k) + v(n)\,,\qquad n = 0, 1, \ldots, N\,. \quad (9.120)$$

It is noted here that containing the recording of signal y(t) within the interval $[0, NT_{\mathrm{s}}]$ allows limiting the recording of signal x(t) to the same finite interval.
In effect, the $(N+1)$ algebraic equations in (9.120) can be put in the following matrix form

$$\mathbf{y} = \mathbf{X}\,\mathbf{h} + \mathbf{v}\,. \quad (9.121)$$

In the above, vectors y and v are defined as follows

$$\mathbf{y} = [\,y(0)\;\; y(1)\;\; \cdots\;\; y(N)\,]^{\mathrm{T}}\,,\qquad \mathbf{v} = [\,v(0)\;\; v(1)\;\; \cdots\;\; v(N)\,]^{\mathrm{T}}\,.$$

Vector h is defined as follows

$$\mathbf{h} = [\,h(0)\;\; h(1)\;\; \cdots\;\; h(M)\,]^{\mathrm{T}}\,.$$

Matrix X, known as the observation matrix, is defined by the following

$$\mathbf{X} = \begin{bmatrix} x(0) & 0 & \cdots & 0 \\ x(1) & x(0) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ x(N) & x(N-1) & \cdots & x(N-M) \end{bmatrix}\,.$$
The equation set in (9.121) is not square; therefore, an algebraically exact solution is not feasible. In effect, only a solution can be sought that minimizes the following mean square error (MSE) vector spanning the observation window

$$\mathbf{e} = \mathbf{y} - \mathbf{X}\,\mathbf{h}\,.$$

If the above MSE vector is minimized, then the effect of the noise vector v, i.e., signal v(t), is removed from the measurement vector y, i.e., signal y(t). The Euclidean norm of vector e is defined as follows

$$\|\mathbf{e}\|^{2} = \mathbf{e}^{\mathrm{T}}\mathbf{e} = \sum_{n=0}^{N} e^{2}(n)\,.$$

The solution achieving minimization of $\|\mathbf{e}\|^{2}$ can be proven to be that of the following square $(M+1)\times(M+1)$ set of algebraic canonical equations

$$\mathbf{X}^{\mathrm{T}}\mathbf{X}\,\mathbf{h} = \mathbf{X}^{\mathrm{T}}\mathbf{y}\,. \quad (9.127)$$
It is interesting to observe the equivalence between the time-domain equations (9.127) and (9.112); the equivalence can be straightforwardly derived if one observes that (9.112) needs to hold for any time instant t. This means that if it were converted to discrete time, a (countably infinite) set of algebraic equations would be derived, as in (9.121). Furthermore, the solution of the canonical equations (9.127) is fully equivalent to (9.111), since there exists a one-to-one correspondence between: (a) the square matrix $\mathbf{X}^{\mathrm{T}}\mathbf{X}$ and the autocorrelation $R_{xx}$, and thereof the PSD $\Phi_{xx}$; (b) the vector $\mathbf{X}^{\mathrm{T}}\mathbf{y}$ and the cross-correlation $R_{yx}$, and thereof the PSD $\Phi_{yx}$.
On a technical note, it is pointed out that for (9.127) to hold, the matrix $\mathbf{X}^{\mathrm{T}}\mathbf{X}$ has to be invertible. In the case of zero initial conditions for x, i.e., $x(n) = 0$ for $n < 0$, it must hold that $x(0) \neq 0$ so that $\mathbf{X}^{\mathrm{T}}\mathbf{X}$ is, indeed, invertible.
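The whole identification procedure can be sketched end to end in Python: simulate a known FIR system in noise, build the observation matrix, and solve the canonical (normal) equations by Gaussian elimination. All signal values and the system h_true below are illustrative:

```python
import random

random.seed(3)
h_true = [1.0, -0.5, 0.25]               # "unknown" FIR system to be identified
M = len(h_true) - 1                      # memory of the FIR model

x = [random.gauss(0.0, 1.0) for _ in range(400)]               # forcing signal
y = [sum(h_true[k] * x[n - k] for k in range(M + 1) if n - k >= 0)
     + random.gauss(0.0, 0.05)           # noisy measured response
     for n in range(len(x))]

# observation matrix: row n holds x(n), x(n-1), ..., x(n-M), zero before time 0
Xm = [[x[n - k] if n - k >= 0 else 0.0 for k in range(M + 1)] for n in range(len(x))]

# canonical (normal) equations  (X^T X) h = X^T y
A = [[sum(r[i] * r[j] for r in Xm) for j in range(M + 1)] for i in range(M + 1)]
b = [sum(r[i] * yn for r, yn in zip(Xm, y)) for i in range(M + 1)]

for i in range(M + 1):                   # Gaussian elimination with partial pivoting
    p = max(range(i, M + 1), key=lambda r: abs(A[r][i]))
    A[i], A[p] = A[p], A[i]
    b[i], b[p] = b[p], b[i]
    for r in range(i + 1, M + 1):
        f = A[r][i] / A[i][i]
        for c in range(i, M + 1):
            A[r][c] -= f * A[i][c]
        b[r] -= f * b[i]

h_hat = [0.0] * (M + 1)                  # back substitution
for i in range(M, -1, -1):
    h_hat[i] = (b[i] - sum(A[i][c] * h_hat[c] for c in range(i + 1, M + 1))) / A[i][i]
```

The estimated coefficients h_hat closely match h_true, with the residual error governed by the measurement noise level and the record length.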
5.3 Signal Estimation and the Wiener Filter
We will now look into a closely related problem: that of the estimation of a signal or waveform which is outlined in the block diagram of Fig. 9.30.
As can be seen, a known LTI-SISO system is driven in this case by an unknown (to be estimated) waveform x(t). For the known LTI system, the transfer function G(s) in the complex frequency (Laplace) domain, or $G(\mathrm{j}\omega)$ in the Fourier (frequency) domain, or equivalently its impulse response g(t) in the time domain, is assumed known. Such an LTI system may model a measurement instrument, sensor, transducer, or telecommunications receiver.
Then, a linear filter with transfer function H(s) is sought that will receive as input the noisy instrument response w(t) and output waveform y(t), which is expected to be the optimum, in the mean square sense, estimate of the unknown waveform x(t).
The objective is to determine the transfer function H(s), or equivalently $H(\mathrm{j}\omega)$, of the estimation filter that minimizes the following objective cost function

$$J = \mathrm{E}\left[e^{2}(t)\right]\,. \quad (9.128)$$

In the above, the instantaneous error signal is defined as follows

$$e(t) = y(t) - x(t)\,.$$
For waveform y(t) it holds that

$$y(t) = \int_{-\infty}^{+\infty} h(\theta)\,w(t-\theta)\,\mathrm{d}\theta \approx \sum_{k=-\infty}^{+\infty} h(kT_{\mathrm{s}})\,w(t-kT_{\mathrm{s}})\,T_{\mathrm{s}}\,.$$

In the above, the convolutional integral has been approximated by the convolutional sum, assuming that the sampling interval, Ts, satisfies the Nyquist criterion. Consequently, (9.128) becomes
In effect, the impulse response of the least-squares estimator has to fulfill the following condition, so that cost becomes minimum
However,
Therefore, condition (9.132) finally yields
Setting $\tau = kT_{\mathrm{s}}$ in the above and gradually diminishing Ts, we end up back in the continuous time domain, and (9.134) yields the following orthogonality condition

$$R_{ew}(\tau) = \mathrm{E}[e(t)\,w(t-\tau)] = 0 \quad \forall\,\tau\,. \quad (9.135)$$
Using the above orthogonality condition,

$$R_{xw}(\tau) = R_{yw}(\tau) \quad \forall\,\tau\,.$$

Furthermore, since $y(t) = (h \ast w)(t)$, the following equation can be derived for the transfer function of the estimation filter

$$H(\mathrm{j}\omega) = \frac{\Phi_{xw}(\omega)}{\Phi_{ww}(\omega)}\,. \quad (9.137)$$
The above expression for the estimator is commonly known as the Wiener filter and is in widespread use in many science and engineering application fields.
The signal power of the minimum square error, achieved by the Wiener filter in (9.137), can be determined as follows
For the derivation of the above it is reminded that
Also, using the orthogonality condition (9.135), the following important result can be derived
It is noted here that
An alternative expression for the minimum error PSD can be derived on the basis of the following
Then
Further treatment of (9.137) is possible since the additive measurement noise is uncorrelated with the unknown signal x(t) at the instrument input. In this case,

$$\Phi_{xw}(\omega) = G^{*}(\mathrm{j}\omega)\,\Phi_{xx}(\omega)\,.$$

Also

$$\Phi_{ww}(\omega) = |G(\mathrm{j}\omega)|^{2}\,\Phi_{xx}(\omega) + \Phi_{vv}(\omega)\,.$$

Therefore, the Wiener filter in (9.137) becomes as follows

$$H(\mathrm{j}\omega) = \frac{G^{*}(\mathrm{j}\omega)\,\Phi_{xx}(\omega)}{|G(\mathrm{j}\omega)|^{2}\,\Phi_{xx}(\omega) + \Phi_{vv}(\omega)}\,. \quad (9.142)$$

The above allows us to obtain an estimate of waveform x(t) appearing at the instrument input port. As can be seen, to achieve this the signal's (waveform) statistics need to be known in advance, i.e., the signal PSD $\Phi_{xx}(\omega)$ or autocorrelation $R_{xx}(\tau)$, as well as the dynamics of the measurement instrument, i.e., its transfer function $G(\mathrm{j}\omega)$ or its impulse response g(t); last, but not least, prior information on the additive noise's statistics, i.e., its PSD $\Phi_{vv}(\omega)$ or autocorrelation $R_{vv}(\tau)$.
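In the simplest special case, G(jω) = 1 with white signal and noise, the Wiener filter reduces to a constant gain Φxx/(Φxx + Φvv). The Python sketch below (powers chosen for illustration) shows the resulting drop in mean square error relative to using the raw measurement:

```python
import random

random.seed(5)
sx2, sv2 = 1.0, 4.0                      # signal and noise powers, assumed known
x = [random.gauss(0.0, sx2 ** 0.5) for _ in range(50_000)]
v = [random.gauss(0.0, sv2 ** 0.5) for _ in range(50_000)]
w = [a + b for a, b in zip(x, v)]        # instrument output for G = 1: w = x + v

gain = sx2 / (sx2 + sv2)                 # Wiener gain: Phi_xx / (Phi_xx + Phi_vv)
y = [gain * wn for wn in w]              # Wiener estimate of x

mse_wiener = sum((a - b) ** 2 for a, b in zip(y, x)) / len(x)
mse_raw = sum((a - b) ** 2 for a, b in zip(w, x)) / len(x)
# theory: mse_wiener -> sx2*sv2/(sx2+sv2) = 0.8, while mse_raw -> sv2 = 4.0
```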
Two important specific subcases of (9.137) are given below:

1. Zero-forcing equalization [9.1, 9.5]: If

$$\Phi_{vv}(\omega) \approx 0\,,\quad\text{then}\quad H(\mathrm{j}\omega) = \frac{1}{G(\mathrm{j}\omega)}\,. \quad (9.143)$$

Evidently, in this subcase the statistics of neither the unknown input signal nor the additive noise need to be known in advance. Furthermore, one can easily verify that, if the inverse of the instrument's transfer function does not introduce unstable poles, the estimation error e vanishes.

2. Matched filter estimation [9.1, 9.5]: If

$$|G(\mathrm{j}\omega)|^{2}\,\Phi_{xx}(\omega) \ll \Phi_{vv}(\omega)\,,\quad\text{then}\quad H(\mathrm{j}\omega) \approx \frac{\Phi_{xx}(\omega)}{\Phi_{vv}(\omega)}\,G^{*}(\mathrm{j}\omega)\,. \quad (9.144)$$

In this subcase the major impairment toward obtaining an estimate of x(t) is the additive noise rather than the instrument distortion, as was the case previously. Moreover, note that under this condition the instrument's output w(t) will be noise-like, and yet by use of the Wiener filter it is possible to extract the unknown information signal x(t).
Matched filter estimator design is widely used in communications engineering. At least as a first approximation, the transfer function $G(\mathrm{j}\omega)$ models the telecommunication channel, which introduces attenuation monotonically increasing with distance from the transmitter source. In a typical system, both the information signal x(t) and the disturbance noise v(t) can be considered as white noise, i.e., signals with the following property

$$R(\tau) = P\,\delta(\tau)\,,\qquad \Phi(\omega) = P = \mathrm{const.}$$

As can be seen, a white noise signal demonstrates practically no predictability, since its value at any time instant is entirely uncorrelated with its value at any other time instant in the past or the future. Therefore, if both the information signal and the additive noise are modeled as white noise, (9.144) becomes

$$H(\mathrm{j}\omega) \approx \frac{P_{x}}{P_{v}}\,G^{*}(\mathrm{j}\omega)\,.$$
The ratio with the PSD of the information signal x(t) in the numerator and that of the additive noise in the denominator is known as the signal-to-noise ratio (SNR) and is a very important parameter characterizing a system's sensitivity to external noise, as well as to exogenous disturbance and crosstalk interference [9.1, 9.5].

6 Concluding Remarks
In this chapter, an account of digital signal processing concepts, methods and techniques employed in ocean engineering is given. After looking into the fundamental processes of continuous signal sampling and reconstruction, the Z-transform is introduced as the tool of convenience to analyze difference equations just like the Laplace transform is suited to analyze differential equations. Digital filters are presented then as well as the Fast Fourier Transform algorithm. Fundamentals of waveform analysis and stochastic processes are presented last as employed for system identification and signal estimation.
Abbreviations
- A/D: analog-to-digital
- ARMA: auto-regressive moving average
- CDF: cumulative distribution function
- D/A: digital-to-analog
- DFT: discrete Fourier transform
- DIF: decimation-in-frequency
- DIT: decimation-in-time
- DSP: digital signal processor
- DTFT: discrete-time Fourier transform
- FFT: fast Fourier transform
- FIR: finite-duration impulse response
- FOH: first-order hold
- HMI: human–machine interface
- IIR: infinite-duration impulse response
- LTI: linear time invariant
- MSE: mean squared error
- PDF: probability density function
- PSD: power spectral density
- RC: resistor and capacitor
- RMS: root mean square
- SISO: single-input single-output
- SNR: signal-to-noise ratio
- VLSI: very large scale integration
- WSS: wide sense stationary
- ZOH: zero-order hold
References
9.1 J.G. Proakis: Digital Communications, 4th edn. (McGraw-Hill, New York 2000)
9.2 J.G. Proakis, D. Manolakis: Digital Signal Processing: Principles, Algorithms and Applications, 3rd edn. (Prentice Hall, Upper Saddle River 1995)
9.3 W.A. Kuperman, J.F. Lynch: Shallow-water acoustics, Phys. Today 57(10), 55 (2004)
9.4 L. Ljung: System Identification: Theory for the User, 2nd edn. (Prentice Hall, Upper Saddle River 1999)
9.5 H.V. Poor, G.W. Wornell: Wireless Communications: Signal Processing Perspectives (Prentice Hall, Upper Saddle River 1998)
9.6 P.S.R. Diniz: Adaptive Filtering: Algorithms and Practical Implementation, 2nd edn. (Springer, Berlin, Heidelberg 2002)
9.7 T. Kohonen: Self-Organization and Associative Memory, 3rd edn. (Springer, Berlin, Heidelberg 1989)
9.8 R.E. Schapire: The Design and Analysis of Efficient Learning Algorithms (MIT Press, Cambridge 1992)
© 2016 Springer-Verlag Berlin Heidelberg

Cite this chapter: Xiros, N.I. (2016). Digital Signal Processing. In: Dhanak, M.R., Xiros, N.I. (eds) Springer Handbook of Ocean Engineering. Springer Handbooks. Springer, Cham. https://doi.org/10.1007/978-3-319-16649-0_9

Print ISBN: 978-3-319-16648-3; Online ISBN: 978-3-319-16649-0