1.1 Introduction

The objective of this chapter is to provide the reader with the necessary mathematical basis for understanding communication systems, in conjunction with probability theory and stochastic processes. The reader will become familiar with concepts and equations involving Fourier series, which have significant historical relevance for the theory of communications. Furthermore, both the theory and properties of the Fourier transform will be presented, which constitute powerful tools for spectral analysis (Alencar 1999).

1.2 Fourier Analysis

The basic Fourier theory establishes fundamental conditions for the representation of an arbitrary function in a finite interval as a sum of sinusoids. In fact, this is just an instance of the more general Fourier representation of signals, in which a periodic signal f(t), under fairly general conditions, can be represented by a complete set of orthogonal functions. By a complete set \(\mathcal S\) of orthogonal functions, it is understood that, apart from the functions already in \(\mathcal S\), there is no other function orthogonal to every member of \(\mathcal S\) that would need to be added to the set. It is assumed in the sequel that a periodic signal f(t) satisfies the Dirichlet conditions, i.e., that f(t) is a bounded function which in any one period has at most a finite number of local maxima and minima and a finite number of points of discontinuity (Wylie 1966). The representation of signals by orthogonal functions usually involves an approximation error, which diminishes as the number of component terms in the corresponding series is increased.

The fact that a periodic signal f(t) can, in general, be expanded as a sum of mutually orthogonal functions calls for a closer look at the concepts of periodicity and orthogonality.

Periodicity relates to the repetitive character of the function. A function f(t) is defined to be a periodic function of period T if and only if T is the smallest positive number for which \(f(t + T) = f(t)\). In other words, f(t) is periodic if its domain contains \(t+T\) whenever it contains t, and \(f(t + T)= f(t)\). It follows from the definition of a periodic function that if T represents the period of f(t), then \(f(t) = f(t + nT)\) for \(n = 1,2,\ldots \), i.e., f(t) repeats its values when integer multiples of T are added to its argument (Wozencraft and Jacobs 1965), as illustrated in Fig. 1.1.

Fig. 1.1 Example of a periodic signal

If f(t) and g(t) are two periodic functions with the same period T, then their sum \(f(t) + g(t)\) will also be a periodic function with period T. We prove this result by making \(h(t) = f(t) + g(t)\) and noticing that \(h(t + T) = f(t + T) + g(t + T) = f(t) + g(t) = h(t)\).

We shall now investigate the concept of orthogonality. Orthogonality provides the tool for introducing the concept of a basis, i.e., of a minimum set of functions that can be used to generate other functions. However, orthogonality by itself does not guarantee that a complete vector space is generated.

Two real functions u(t) and v(t), defined in the interval \(\alpha \le t \le \beta \), are orthogonal if their inner product is null, that is, if

$$\begin{aligned} (u(t),v(t)) = \int _{\alpha }^{\beta } u(t) v(t) dt = 0 \end{aligned}$$
(1.1)

The set of functions \(f_n(t)\), as illustrated in Fig. 1.2, can be used for representing signals in the time domain. This set of functions constitutes an orthogonal set in the interval (0, 1).

Fig. 1.2 Set of orthogonal functions

1.2.1 The Trigonometric Fourier Series

The trigonometric Fourier series representation of a signal f(t) can be written as

$$\begin{aligned} f(t) = a_{0} + \sum _{n=1}^{\infty } [a_{n}\cos (n\omega _{0}t) + b_{n} \mathrm{sin} (n\omega _{0}t)], \end{aligned}$$
(1.2)

in which the term \(a_{0}\) (the average value of the function f(t)) indicates whether or not the signal contains a DC value, and the terms \(a_{n}\) and \(b_{n}\) are called the Fourier series coefficients, in which n is a positive integer. The equality sign holds in (1.2) for all values of t only when f(t) is periodic. However, the Fourier series representation is a useful tool for any type of signal, as long as that signal representation is required only in the [0, T] interval. Outside that interval the Fourier series representation will always be periodic, even if the signal f(t) is not periodic (Knopp 1990).

The sine and cosine functions are examples of orthogonal functions because they satisfy the following equations, called the orthogonality relations, for integer values of n and m:

$$\begin{aligned} \int _{0}^{T} \cos (n\omega _{o}t) \, \sin (m \omega _{o} t) dt = 0, \text { for all integers } n,m, \end{aligned}$$
(1.3)
$$\begin{aligned} \int _{0}^{T} \cos (n \omega _{o}t) \cos (m \omega _{o} t) dt = \left\{ \begin{array}{ll} 0 &{} \text { if } n \ne m \\ \frac{T}{2} &{} \text { if } n = m \end{array} \right. \end{aligned}$$
(1.4)
$$\begin{aligned} \int _{0}^{T} \sin (n \omega _{o}t) \, \sin (m \omega _{o} t) dt = \left\{ \begin{array}{ll} 0 &{} \text { if } n \ne m \\ \frac{T}{2} &{} \text { if } n = m \end{array} \right. \end{aligned}$$
(1.5)

in which \(\omega _{0} = 2\pi /T.\)
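The orthogonality relations (1.3)–(1.5) are easy to check numerically. The sketch below is a minimal illustration, assuming Python with NumPy is available; the period T = 2 is an arbitrary choice for the test.

```python
import numpy as np

T = 2.0                             # arbitrary test period
w0 = 2 * np.pi / T                  # fundamental frequency
N = 20000
dt = T / N
t = np.arange(N) * dt               # one period, [0, T)

def inner(u, v):
    """Numerical inner product over one period, as in (1.1)."""
    return np.sum(u * v) * dt

for n in range(1, 4):
    for m in range(1, 4):
        cs = inner(np.cos(n * w0 * t), np.sin(m * w0 * t))   # (1.3): ~ 0 for all n, m
        cc = inner(np.cos(n * w0 * t), np.cos(m * w0 * t))   # (1.4): ~ T/2 if n == m, else ~ 0
        ss = inner(np.sin(n * w0 * t), np.sin(m * w0 * t))   # (1.5): ~ T/2 if n == m, else ~ 0
        print(n, m, round(cs, 4), round(cc, 4), round(ss, 4))
```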

As a consequence of the orthogonality conditions, explicit expressions for the coefficients \(a_{n}\) and \(b_{n}\) of the Fourier trigonometric series can be computed. By integrating both sides in expression (1.2) in the interval [0, T], it follows that (Oberhettinger 1990)

$$ \int _{0}^{T} f(t) dt = \int _{0}^{T} a_{o} dt + \sum _{n=1}^{\infty } \int _{0}^{T} a_{n} \cos (n\omega _{o}t) dt + \sum _{n=1}^{\infty } \int _{0}^{T} b_{n} \sin (n\omega _{o}t) dt $$

and since

$$ \int _{0}^{T} a_{n} \cos (n\omega _{o}t) dt = \int _{0}^{T} b_{n} \sin (n\omega _{o}t) dt = 0, $$

it follows that

$$\begin{aligned} a_{o} = \frac{1}{T} \int _{0}^{T} f(t) dt. \end{aligned}$$
(1.6)

Now, by multiplying both sides in expression (1.2) by \( \cos (m \omega _{o} t)\) and integrating in the interval [0, T], it follows that

$$\begin{aligned} \int _{0}^{T} f(t) \cos (m\omega _{o}t)\, dt&= \int _{0}^{T} a_{o} \cos (m \omega _{o} t)\, dt \\&\quad + \sum _{n=1}^{\infty } \int _{0}^{T} a_{n} \cos (n\omega _{o}t) \cos (m\omega _{o}t)\, dt \\&\quad + \sum _{n=1}^{\infty } \int _{0}^{T} b_{n} \sin (n \omega _{o} t) \cos (m \omega _{o} t)\, dt, \end{aligned}$$
(1.7)

which after simplification produces

$$\begin{aligned} a_{n} = \frac{2}{T} \int _{0}^{T} f(t) \cos (n \omega _{o} t) dt , \text{ for } n = 1,2,3,\ldots \end{aligned}$$
(1.8)

In a similar manner \(b_{n}\) is found by multiplying both sides in expression (1.2) by \(\sin (n\omega _{o}t)\) and integrating in the interval [0, T], i.e.,

$$\begin{aligned} b_{n} = \frac{2}{T} \int _{0}^{T} f(t) \sin (n\omega _{o}t) dt , \end{aligned}$$
(1.9)

for \(n = 1,2,3,\ldots \).

1.2.2 Even Functions and Odd Functions

A function is called an odd function if it is antisymmetric with respect to the ordinate axis, i.e., if \(f(-t) = -f(t)\), in which \(-t\) and t are assumed to belong to the function domain. Examples of odd functions are provided by the functions t, \(t^{3}\), \(\mathrm{sin}\, t\) and \(t^{|2n+1|}\).

Similarly, a function is called an even function if it is symmetric with respect to the ordinate axis, i.e., if \( f(-t) = f(t) \), in which t and \(-t \) are assumed to belong to the function domain. Examples of even functions are provided by the functions 1, \(t^{2}\), \(\cos {t}\), |t|, \(\exp {(-|t|)}\) and \(t^{|2n|}\).

Some Elementary Properties  

  1. (a)

    The sum (difference) and the product (quotient) of two even functions is an even function;

  2. (b)

    The sum (difference) of two odd functions is an odd function;

  3. (c)

    The product (quotient) of two odd functions is an even function;

  4. (d)

    The sum (difference) of an even function and an odd function is neither an even function nor an odd function;

  5. (e)

    The product (quotient) between an even function and an odd function is an odd function.

    Two other important properties are the following.

  6. (f)

    If f(t) is an even periodic function of period T, then

    $$\begin{aligned} \int _{-T/2}^{T/2} f(t) dt = 2 \int _{0}^{T/2} f(t) dt. \end{aligned}$$
    (1.10)
  7. (g)

    If f(t) is an odd periodic function of period T, then

    $$\begin{aligned} \int _{-T/2}^{T/2} f(t) dt = 0. \end{aligned}$$
    (1.11)

    Properties (f) and (g) allow for a considerable simplification when computing coefficients of a trigonometric Fourier series:

  8. (h)

    If f(t) is an even function then \(b_{n} = 0\), and

    $$\begin{aligned} a_{n} = \frac{2}{T} \int _{0}^{T} f(t) \cos (n \omega _{o} t) dt, \, \text{ for } n = \text { 1,2,3, }\ldots . \end{aligned}$$
    (1.12)
  9. (i)

    If f(t) is an odd function then \(a_{n} = 0\) and

    $$\begin{aligned} b_{n} = \frac{2}{T} \int _{0}^{T} f(t) \sin (n\omega _{o}t) dt, \text{ for } n = \text { 1,2,3, }\ldots . \end{aligned}$$
    (1.13)

Example: Compute the coefficients of the trigonometric Fourier series for the waveform \( f(t) = A [ u(t + \tau ) - u(t - \tau ) ] \), which repeats itself with period T, in which u(t) denotes the unit step function and \(2 \tau \le T\).

Solution: Since the given signal is symmetric with respect to the ordinate axis, it follows that \(f(t)=f(-t)\) and the function is even. Therefore \(b_{n}=0\), and all that is left for computing is \(a_{o},\) and \(a_{n}\) for \(n=1,2,\ldots \). The expression for computing the average value \(a_0\) is given by

$$ a_{o}=\frac{1}{T} \int _{-\frac{T}{2}}^{\frac{T}{2}} f(t) dt =\frac{1}{T}\int _{-\tau }^{\tau } A dt= \frac{2A\tau }{T}. $$

In the previous equation the maximum value of \(\tau \) is T/2. The coefficients \(a_n\) for \(n=1,2,\ldots \) are computed as

$$ a_{n}= \frac{2}{T}\int _{0}^{T} f(t) \cos (n \omega _{o} t) dt= \frac{2}{T}\int _{-\tau }^{\tau } A \cos (n \omega _{o} t) dt, $$
$$ a_{n}= \frac{4A}{T} \int _{0}^{\tau } \cos (n \omega _{o} t)\, dt= \frac{4A}{T n \omega _{o}} \left. \sin (n \omega _{o} t) \right| _{0}^{\tau }= \left( \frac{4A\tau }{T}\right) \frac{ \sin (n \omega _{o} \tau )}{n\omega _0\tau } . $$

The signal f(t) is then represented by the following trigonometric Fourier series:

$$ f(t) = \frac{2A\tau }{T} + \left( \frac{4A\tau }{T}\right) \sum _{n=1}^{\infty } \frac{ \sin (n \omega _{o} \tau )}{n\omega _0\tau } \cos (n \omega _{o} t). $$
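As a numerical check of this expansion, the sketch below (a minimal example assuming Python with NumPy; the values A = 1, T = 1 and τ = 0.25 are hypothetical, chosen only for illustration) sums a finite number of terms of the series and compares the result with the original pulse over one period.

```python
import numpy as np

A, T, tau = 1.0, 1.0, 0.25          # illustrative values, with 2*tau <= T
w0 = 2 * np.pi / T
t = np.linspace(-T / 2, T / 2, 1001)

N = 50                              # number of harmonics kept in the partial sum
f_approx = np.full_like(t, 2 * A * tau / T)                  # a_0 term
for n in range(1, N + 1):
    a_n = (4 * A * tau / T) * np.sin(n * w0 * tau) / (n * w0 * tau)
    f_approx += a_n * np.cos(n * w0 * t)

f_exact = np.where(np.abs(t) <= tau, A, 0.0)                 # the original pulse
# The mean square error decreases as N grows; the residual error concentrates
# near the discontinuities at t = -tau and t = tau (Gibbs phenomenon).
print(np.mean((f_approx - f_exact) ** 2))
```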

1.2.3 The Compact Fourier Series

It is also possible to represent the  Fourier series in a form known as the compact Fourier series as follows:

$$\begin{aligned} f(t) = C_{0} + \sum _{n=1}^{\infty } C_{n}\cos (n\omega _{o}t + \theta _{n}). \end{aligned}$$
(1.14)

By expanding the expression \(C_{n}\cos (n\omega _{o}t + \theta _{n})\) as \(C_{n}\cos (n \omega _{o} t) \cos \theta _{n} - C_{n}\sin (n\omega _ot) \sin \theta _n\) and comparing this result with (1.2), it follows that \(a_{o}=C_{o}\), \(a_{n}=C_{n}\cos \theta _{n}\) and \(b_{n}=-C_{n}\sin \theta _{n}\). It is now possible to compute \(C_{n}\) as a function of \(a_{n}\) and \(b_{n}\). For that purpose it is sufficient to square \(a_{n}\) and \(b_{n}\) and add the results, i.e.,

$$\begin{aligned} a_{n}^{2} + b_{n}^{2} = C_{n}^{2}\cos ^{2}{\theta _{n}} + C_{n}^{2}\mathrm{sin^{2}}\theta _{n} = C_n^2. \end{aligned}$$
(1.15)

From Eq. (1.15) the modulus of \(C_n\) can be written as

$$\begin{aligned} C_{n}=\sqrt{a_{n}^{2} + b_{n}^{2}}. \end{aligned}$$
(1.16)

In order to determine \(\theta _n\) it suffices to divide \(b_{n}\) by \(a_{n}\), i.e.,

$$\begin{aligned} \frac{b_{n}}{a_{n}} = -\frac{ \sin \, \theta _{n}}{\cos {\theta _{n}} } = - \tan \theta _n, \end{aligned}$$
(1.17)

which when solved for \(\theta _n\) produces

$$\begin{aligned} \theta _{n} = - \arctan \left( \frac{b_{n}}{a_{n}} \right) . \end{aligned}$$
(1.18)
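In numerical work, the conversion from \((a_n, b_n)\) to \((C_n, \theta _n)\) is conveniently done with the two-argument arctangent, which resolves the quadrant ambiguity left open by (1.18). The following is a minimal sketch assuming Python with NumPy; the function name compact_form is hypothetical.

```python
import numpy as np

def compact_form(a_n, b_n):
    """Convert trigonometric coefficients (a_n, b_n) to compact form (C_n, theta_n).

    Uses C_n = sqrt(a_n^2 + b_n^2), as in (1.16), and theta_n = atan2(-b_n, a_n),
    a quadrant-aware version of (1.18), since a_n = C_n cos(theta_n) and
    b_n = -C_n sin(theta_n).
    """
    a_n = np.asarray(a_n, dtype=float)
    b_n = np.asarray(b_n, dtype=float)
    return np.hypot(a_n, b_n), np.arctan2(-b_n, a_n)

# Example: a_n = 1, b_n = 1  ->  C_n = sqrt(2) and theta_n = -pi/4
print(compact_form(1.0, 1.0))
```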

1.2.4 The Exponential Fourier Series

Since the set of exponential functions \(e^{j n \omega _{o} t}\), \(n=0,\pm 1,\pm 2,\ldots ,\) is a complete set of orthogonal functions in an interval of length T, in which \(T = 2\pi /\omega _{o}\), it is possible to represent a function f(t) by a linear combination of exponential functions in an interval T,

$$\begin{aligned} f(t) = \sum _{n=-\infty }^{\infty } F_{n}e^{j n \omega _{0} t} \end{aligned}$$
(1.19)

in which

$$\begin{aligned} F_{n}= \frac{1}{T} \int _{\frac{-T}{2}}^{\frac{T}{2}} f(t) e^{-jn\omega _{0}t} dt . \end{aligned}$$
(1.20)

Equation (1.19) represents the exponential Fourier series expansion of f(t) and Eq. (1.20) is the expression to compute the associated series coefficients. The exponential Fourier series is also known as the complex Fourier series. It is immediate to show that Eq. (1.19) is just another way of expressing the Fourier series as given in (1.2). Replacing \(\cos (n \omega _{o} t) + j \sin (n \omega _{0} t)\) for \(e^{j n \omega _{0} t}\) (Euler’s identity) in (1.19), it follows that

$$\begin{aligned} f(t)&= F_{o} + \sum _{n=-\infty }^{-1}F_{n}[\cos (n \omega _{o} t)+ j \sin (n \omega _{o} t)] \\&\quad + \sum _{n=1}^{\infty }F_{n}[\cos (n \omega _{o} t) + j \sin (n \omega _{o} t)], \end{aligned}$$

or

$$ f(t) = F_{o} + \sum _{n=1}^{\infty } F_{n} [\cos (n \omega _{o} t) + j \sin (n \omega _{o} t) ] + F_{-n}[\cos (n \omega _{o} t) - j \sin (n \omega _{o} t)]. $$

Grouping the coefficients of the sine and cosine terms, it follows that

$$\begin{aligned} f(t) = F_{o} + \sum _{n=1}^{\infty }(F_{n} + F_{-n})\cos (n \omega _{o} t) + j(F_{n} - F_{-n}) \sin (n \omega _{o} t). \end{aligned}$$
(1.21)

Comparing the above expression with (1.2) it follows that

$$\begin{aligned} a_{o} = F_{o}, \ a_{n} =(F_{n} + F_{-n}) \ \text{ and } \ b_{n} =j(F_{n} - F_{-n}), \end{aligned}$$
(1.22)

and that

$$\begin{aligned} F_{o} = a_{o}, \end{aligned}$$
(1.23)
$$\begin{aligned} F_{n} = \frac{a_{n} - jb_{n}}{2}, \end{aligned}$$
(1.24)

and

$$\begin{aligned} F_{-n} =\frac{a_{n} + jb_{n}}{2}. \end{aligned}$$
(1.25)

In case the function f(t) is even, i.e., if \(b_{n}=0\), then

$$\begin{aligned} a_{o} = F_{o}, \ F_{n} = \frac{a_{n}}{2} \ \mathrm{and} \ F_{-n} = \frac{a_{n}}{2}. \end{aligned}$$
(1.26)

Example: Compute the exponential Fourier series for the train of impulses

$$ \delta _T(t) = \sum _{n = - \infty }^{\infty } \delta (t - n T). $$

Solution: The complex coefficients are given by

$$\begin{aligned} F_{n} = \frac{1}{T}\int _{\frac{-T}{2}}^{\frac{T}{2}} \delta _{T}(t)e^{-jn\omega _{o}t}dt = \frac{1}{T}, \end{aligned}$$
(1.27)

since

$$\begin{aligned} \int _{-\infty }^{\infty }\delta (t - t_{o})f(t)dt = f(t_{o}) \ \text{(Impulse } \text{ filtering) }. \end{aligned}$$
(1.28)

It follows that \(\delta _{T}(t)\) can be written as

$$\begin{aligned} \delta _{T}(t) = \frac{1}{T}\sum _{n=-\infty }^{\infty }e^{jn\omega _{o}t}. \end{aligned}$$
(1.29)

In practice, in order to obtain an impulse train, it is sufficient to pass a binary digital signal through a differentiator circuit and then pass the resulting waveform through a half-wave rectifier.

The Fourier series expansion of a periodic function is equivalent to its decomposition in frequency components. In general, a periodic function with period T has frequency components \(0,\pm \omega _{o},\pm 2\omega _{o}, \pm 3\omega _{o},\ldots ,\pm n\omega _{o}\), in which \(\omega _{o} = 2\pi /T\) is the fundamental frequency and the multiples of \(\omega _0\) are called harmonics. Notice that the spectrum exists only for discrete values of \(\omega \) and that the spectral components are spaced by at least \(\omega _{o}\).
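The coefficients in (1.20) can also be evaluated numerically. The sketch below, a minimal illustration assuming Python with NumPy, computes \(F_n\) for the rectangular pulse of the earlier example and checks the relation \(F_n = a_n/2\), which holds for even functions according to (1.26).

```python
import numpy as np

A, T, tau = 1.0, 1.0, 0.25          # same illustrative pulse as in the earlier example
w0 = 2 * np.pi / T
N = 20000
dt = T / N
t = -T / 2 + np.arange(N) * dt      # one period, [-T/2, T/2)
f = np.where(np.abs(t) <= tau, A, 0.0)

def F(n):
    """Numerical version of (1.20)."""
    return np.sum(f * np.exp(-1j * n * w0 * t)) * dt / T

for n in range(1, 4):
    a_n = (4 * A * tau / T) * np.sin(n * w0 * tau) / (n * w0 * tau)
    # The imaginary part of F(n) is ~ 0, and the real part matches a_n / 2
    print(n, round(float(F(n).real), 5), round(float(a_n) / 2, 5))
```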

1.3 Fourier Transform

It was shown earlier that an arbitrary function can be represented in terms of an exponential (or trigonometric) Fourier series in a finite interval. If such a function is periodic this representation can be extended for the entire interval \((-\infty ,\infty )\). However, it is interesting to observe the spectral behavior of a function in general, periodic or not, in the entire interval \((-\infty ,\infty )\). In order to do that the function f(t) is truncated in the interval \([- T/2, T/2]\), obtaining \(f_{T}(t)\). It is possible then to represent this function as a sum of exponentials in the entire interval \((-\infty ,\infty )\) by making T approach infinity. In other words

$$ \lim _{T \rightarrow \infty } f_{T}(t) = f(t) . $$

The \(f_{T}(t)\) signal can be represented by the exponential Fourier series as

$$\begin{aligned} f_{T}(t) = \sum _{n=-\infty }^{\infty }F_{n}e^{jn\omega _{o}t}, \end{aligned}$$
(1.30)

in which \(\omega _{o} = 2\pi /T\) and

$$\begin{aligned} F_{n} = \frac{1}{T}\int _{-\frac{T}{2}}^{\frac{T}{2}} f_{T}(t)e^{-jn\omega _{o}t} dt. \end{aligned}$$
(1.31)

\(F_{n}\) represents the spectral amplitude associated to each component of frequency \(n\omega _{o}\). As T increases, the amplitudes diminish but the spectrum shape is not altered. This increase in T forces \(\omega _{o}\) to diminish and the spectrum to become denser. In the limit, as \(T \rightarrow \infty \), \(\omega _{o}\) becomes infinitesimally small, being represented by \(d\omega \). On the other hand, there are now infinitely many components and the spectrum is no longer a discrete one, becoming a continuous spectrum in the limit.

For convenience, write \(TF_{n} = F(\omega )\), that is, the product \(TF_{n}\) becomes a function of the variable \(\omega \), since \(n \omega _{o} \rightarrow \omega \). Replacing \(\frac{F(\omega )}{T}\) for \(F_{n}\) in (1.30), one obtains

$$\begin{aligned} f_{T}(t) = \frac{1}{T} \sum _{n=-\infty }^{\infty }F(\omega )e^{j\omega t}. \end{aligned}$$
(1.32)

Replacing \(\omega _{0}/2\pi \) for 1/T it follows that

$$\begin{aligned} f_{T}(t) = \frac{1}{2\pi } \sum _{n=-\infty }^{\infty }F(\omega ) e^{j\omega t}\omega _{0}. \end{aligned}$$
(1.33)

In the limit, as T approaches infinity, one has

$$\begin{aligned} f(t) = \frac{1}{2\pi }\int _{-\infty }^{\infty }F(\omega )e^{j\omega t}d\omega \end{aligned}$$
(1.34)

which is known as the inverse Fourier transform.

Similarly, from (1.31), as T approaches infinity, one obtains

$$\begin{aligned} F(\omega ) = \int _{-\infty }^{\infty }f(t)e^{-j \omega t}dt \end{aligned}$$
(1.35)

which is known as the direct Fourier transform, sometimes denoted in the literature as \( F(\omega ) = \mathcal{F} [ f(t)]\). A Fourier transform pair is often denoted as \( f(t) \longleftrightarrow F(\omega )\).

In the sequel some important Fourier transforms are presented (Haykin 1988).

Bilateral Exponential Signal

If \(f(t) = e^{-a|t|}\) it follows from (1.35) that

$$\begin{aligned} F(\omega )&= \int _{-\infty }^{\infty }e^{-a|t|}e^{-j\omega t}dt \\&= \int _{-\infty }^{0}e^{at}e^{-j\omega t}dt + \int _{0}^{\infty }e^{-at}e^{-j\omega t}dt \\&= \frac{1}{a - j\omega } + \frac{1}{a + j\omega } \\&= \frac{2a}{a^{2} + \omega ^{2}}. \end{aligned}$$
(1.36)

Gate Function

The gate function is defined by the expression \(p_{T} (t) = A [ u(t + T/2 ) - u( t - T/2 ) ]\), or

$$\begin{aligned} p_{T}(t) = \left\{ \begin{array}{ll} A &{} \text{ if }\ |t| \le T/2 \\ 0 &{} \text{ if }\ |t| > T/2 \end{array} \right. \end{aligned}$$
(1.37)

in which u(t) denotes the unit step function, defined as

$$\begin{aligned} u(t) = \left\{ \begin{array}{ll} 1 &{} \text{ if }\ t \ge 0 \\ 0 &{} \text{ if }\ t < 0 \end{array} \right. \end{aligned}$$
(1.38)
Fig. 1.3 Gate function

The gate function is illustrated in Fig. 1.3. The Fourier transform of the gate function can be calculated as

$$\begin{aligned} F(\omega )&= \int _{-\frac{T}{2}}^{\frac{T}{2}} A e^{-j\omega t}dt \\&= \frac{A}{j\omega }\left( e^{j\omega \frac{T}{2}} - e^{-j\omega \frac{T}{2}}\right) \\&= \frac{A}{j\omega }\, 2j\sin (\omega T/2), \end{aligned}$$
(1.39)

which can be rearranged as

$$ F(\omega ) =AT\left( \frac{\mathrm{sin}(\omega T/2)}{\omega T/2}\right) , $$

and finally

$$\begin{aligned} F(\omega ) =AT \mathrm{Sa} \,\left( \frac{\omega T}{2}\right) , \end{aligned}$$
(1.40)

in which \(\mathrm{Sa} \,(x) = \frac{\sin x }{x}\) is the sampling function. This function converges to one, as x goes to zero. The sampling function, the magnitude of which is illustrated in Fig. 1.4, is of great relevance in communication theory.
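The result in (1.40) can be checked numerically by evaluating the defining integral (1.35) for the gate function and comparing it with \(AT\,\mathrm{Sa}(\omega T/2)\). The sketch below is a minimal illustration assuming Python with NumPy; A = 2 and T = 1 are hypothetical test values.

```python
import numpy as np

A, T = 2.0, 1.0                        # illustrative amplitude and width of the gate
N = 20000
dt = T / N
t = -T / 2 + (np.arange(N) + 0.5) * dt # midpoints of N subintervals of [-T/2, T/2]

def F_numeric(w):
    """Direct numerical evaluation of (1.35) for the gate function."""
    return np.sum(A * np.exp(-1j * w * t)) * dt

def Sa(x):
    """Sampling function Sa(x) = sin(x)/x, with Sa(0) = 1."""
    return np.sinc(x / np.pi)          # np.sinc uses the sin(pi x)/(pi x) convention

for w in [0.0, 1.0, 5.0, 20.0]:
    print(w, round(float(F_numeric(w).real), 4), round(float(A * T * Sa(w * T / 2)), 4))
```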

Fig. 1.4 Magnitude plot of the Fourier transform of the gate function

The sampling function obeys the following important relationship:

$$\begin{aligned} \int _{-\infty }^{\infty }\frac{k}{\pi } \mathrm{Sa} \, (kt)dt =1. \end{aligned}$$
(1.41)

The area under this curve is equal to 1. As k increases, the amplitude of the sampling function increases, the spacing between zero crossings diminishes and most of the signal energy concentrates near the origin. For \(k \rightarrow \infty \) the function converges to an impulse function, i.e.,

$$\begin{aligned} \delta (t) = \lim _{k \rightarrow \infty } \frac{k}{\pi } \mathrm{Sa} \, (kt). \end{aligned}$$
(1.42)

In this manner, in the limit it is true that \(\int _{-\infty }^{\infty }\delta (t)dt = 1\). Since the function concentrates its nonzero values near the origin, it follows that \(\delta (t) = 0\) for \(t \ne 0\). Therefore,

$$\begin{aligned} \int _{-\infty }^{\infty }f(t)\delta (t)dt = f(0)\int _{-\infty }^{\infty }\delta (t)dt =f(0). \end{aligned}$$
(1.43)

In general, it is possible to write

$$\begin{aligned} \int _{-\infty }^{\infty }f(t)\delta (t - t_{o})\, dt = f(t_{o}). \end{aligned}$$
(1.44)

This important relationship, mentioned earlier in (1.28), is known as the filtering property of the impulse function.

Impulse Function or Dirac’s Delta Function

By making \(f(t) = \delta (t)\) in (1.35) it follows that

$$\begin{aligned} F(\omega ) = \int _{-\infty }^{\infty }\delta (t)e^{-j \omega t}dt. \end{aligned}$$
(1.45)

Using the impulse   filtering property it follows that \(F(\omega ) = 1\). Therefore, the impulse function contains a continuum of equal amplitude spectral components.

Alternatively, by making \(F(\omega ) = 1\) in (1.34) and simplifying, the impulse function can be written as

$$ \delta (t) = \frac{1}{\pi } \int _{0}^{\infty } \cos (\omega t)\, d \omega . $$

The Constant Function

If f(t) is a constant function then its Fourier transform in principle would not exist since this function does not satisfy the absolute integrability criterion. In general \(F(\omega ),\) the Fourier transform of f(t), is expected to be finite, i.e.,

$$\begin{aligned} | F(\omega ) | \le \int _{-\infty }^{\infty }|f(t)||e^{-j\omega t}|dt < \infty , \end{aligned}$$
(1.46)

since \(|e^{-j\omega t}|=1\), then

$$\begin{aligned} \int _{-\infty }^{\infty }|f(t)|dt < \infty . \end{aligned}$$
(1.47)

However, that is just a sufficient condition, and not a necessary one, for the existence of the Fourier transform, since there exist functions that, although they do not satisfy the condition of absolute integrability, do have a Fourier transform in the limit (Carlson 1975). This is a very important observation, since this approach is often used in the computation of Fourier transforms of many functions. Returning to the constant function, it can be approximated by a gate function with amplitude A and width \(\tau \), letting \(\tau \) then approach infinity,

$$\begin{aligned} \mathcal{F} [A] = \lim _{\tau \rightarrow \infty }A\tau \mathrm{Sa} \, \left( \frac{\omega \tau }{2} \right) \end{aligned}$$
(1.48)
$$ = 2 \pi A \lim _{\tau \rightarrow \infty } \frac{\tau }{2\pi }\mathrm{Sa} \, \left( \frac{\omega \tau }{2} \right) $$
$$\begin{aligned} \mathcal{F} [A] = 2 \pi A \delta (\omega ). \end{aligned}$$
(1.49)

This result is not only interesting but also somewhat intuitive, since a constant function in time represents a DC level and, as was to be expected, contains no spectral component except the one at \(\omega = 0\).

Fig. 1.5 Magnitude plot of the Fourier transform of the sine function

Fourier Transform of the Sine and the Cosine Functions

Since both the sine and the cosine functions are periodic functions, they do not satisfy the condition of absolute integrability. However, their respective Fourier transforms exist in the limit when \(\tau \) goes to infinity. Assuming the function to exist only in the interval \((\frac{-\tau }{2}, \frac{\tau }{2})\) and to be zero outside this interval, and considering the limit of the expression when \(\tau \) goes to infinity,

$$\begin{aligned} \mathcal{F} ( {\sin }\omega _{0} t ) = \lim _{\tau \rightarrow \infty } \int _{\frac{-\tau }{2}}^{\frac{\tau }{2}} {\sin } \omega _{0} t\,\, e^{-j \omega t} dt \end{aligned}$$
(1.50)
$$ =\lim _{\tau \rightarrow \infty } \int _{\frac{-\tau }{2}}^{\frac{\tau }{2}} \frac{e^{-j(\omega - \omega _{0})t}}{2j}- \frac{e^{-j(\omega + \omega _{0})t}}{2j} dt $$
$$ =\lim _{\tau \rightarrow \infty } \left[ \frac{j \tau {\sin } (\omega + \omega _{0})\frac{\tau }{2}}{2(\omega + \omega _{0})\frac{\tau }{2}} - \frac{j \tau {\sin } (\omega - \omega _{0})\frac{\tau }{2}}{2(\omega - \omega _{0})\frac{\tau }{2}} \right] $$
$$ =\lim _{\tau \rightarrow \infty } \left\{ j \frac{\tau }{2} \mathrm{Sa} \, \left[ \frac{\tau ( \omega + \omega _{0} ) }{2} \right] - j \frac{\tau }{2} \mathrm{Sa} \, \left[ \frac{\tau (\omega - \omega _{0})}{2} \right] \right\} . $$

Therefore,

$$ \mathcal{F} ({\sin } \omega _{0} t) = j \pi [\delta (\omega + \omega _{0}) - \delta (\omega - \omega _{0})]. $$

Applying a similar reasoning it follows that

$$\begin{aligned} \mathcal{F} (\cos {\omega _{0} t}) = \pi [ \delta ( \omega - \omega _{0} ) + \delta ( \omega + \omega _{0} ) ]. \end{aligned}$$
(1.51)

The Fourier transform of the function \(x(t)= \sin (\omega _{0} t)\) is illustrated in Fig. 1.5.

The Fourier Transform of \(e^{j \omega _{0} t}\)

Using Euler’s identity, \(e^{j \omega _{0} t} = \cos {\omega _{0} t} + j {\sin }\omega _{0} t\), it follows that

$$\begin{aligned} \mathcal{F} [e^{j \omega _{0} t}] = \mathcal{F} [ \cos {\omega _{0} t} + j {\sin } \omega _{0} t ]. \end{aligned}$$
(1.52)

Substituting in (1.52) the Fourier transforms of the sine and of the cosine functions, respectively, it follows that

$$\begin{aligned} \mathcal{F} [e^{j \omega _{0} t}] = 2 \pi \delta ( \omega - \omega _{0}). \end{aligned}$$
(1.53)

The Fourier Transform of a Periodic Function

We consider next the exponential Fourier series representation of a periodic function \(f_T(t)\) of period T

$$\begin{aligned} f_{T}(t) = \sum _{n = -\infty }^{\infty } F_{n} e^{j n \omega _{0} t}. \end{aligned}$$
(1.54)

Applying the Fourier transform to both sides in (1.54), it follows that

$$\begin{aligned} \mathcal{F} [f_{T}(t)] = \mathcal{F} \left[ \sum _{n=-\infty }^{\infty } F_{n} e^{j n \omega _{0} t} \right] \end{aligned}$$
(1.55)
$$\begin{aligned} = \sum _{n=-\infty }^{\infty }F_{n} \mathcal{F} [e^{j n \omega _{0} t}]. \end{aligned}$$
(1.56)

Now, applying in (1.56) the result from (1.53), it follows that

$$\begin{aligned} F(\omega ) = 2 \pi \sum _{n=-\infty }^{\infty } F_{n} \delta (\omega - n \omega _{0}). \end{aligned}$$
(1.57)

1.4 Some Properties of the Fourier Transform

Linearity

Linearity is an important property when studying communication systems. A system is defined to be a linear system if it satisfies the properties of homogeneity and additivity.

  1. 1

    Homogeneity—If the application of the signal x(t) at the system input produces y(t) at the system output, then the application of the input \(\alpha x(t)\), in which \(\alpha \) is a constant, produces \(\alpha y(t)\) at the output.

  2. 2

    Additivity—If the application of the signals \(x_1(t)\) and \(x_2(t)\) at the system input produces respectively \(y_1(t)\) and \(y_2(t)\) at the system output, then the application of the input \(x_1(t) + x_2(t)\) produces \(y_1(t) + y_2(t)\) at the output.

By applying the tests for homogeneity and additivity, it is immediate to check that the process that generates the signal \(s(t) = A \cos ( \omega _c t + \Delta m(t) + \theta )\) from an input signal m(t) is nonlinear. By applying the same test to the signal \(r(t) = m(t) \cos ( \omega _c t + \theta )\), it is immediate to show that the process generating r(t) is linear.

The Fourier transform is a linear operator, i.e., if a function can be written as a linear combination of other (well behaved) functions, the corresponding Fourier transform will be given by a linear combination of the corresponding Fourier transforms of each one of the functions involved in the linear combination (Gagliardi 1988).

If \(f(t) \longleftrightarrow F(\omega )\) and \(g(t) \longleftrightarrow G(\omega )\) it then follows that

$$\begin{aligned} \alpha f(t) + \beta g(t) \longleftrightarrow \alpha F(\omega ) + \beta G(\omega ). \end{aligned}$$
(1.58)

Proof: Let \( h(t) = \alpha f(t) + \beta g(t) \); then it follows that

$$\begin{aligned} H(\omega )&= \int _{-\infty }^{\infty } h(t) e^{-j \omega t }dt \\&= \alpha \int _{-\infty }^{\infty } f(t) e^{-j \omega t} dt + \beta \int _{-\infty }^{\infty } g(t) e^{-j \omega t} dt, \end{aligned}$$
(1.59)

and finally

$$\begin{aligned} H(\omega ) = \alpha F(\omega ) + \beta G(\omega ). \end{aligned}$$
(1.60)

Scaling

$$\begin{aligned} \mathcal{F} [f(at)] = \int _{-\infty }^{\infty } f(at) e^{-j \omega t}dt. \end{aligned}$$
(1.61)

Initially let us consider \(a > 0\) in (1.61). By letting \(u = at\) it follows that \(dt = (1/a)du\). Replacing u for at in (1.61) it follows that

$$ \mathcal{F} [f(at)] = \int _{-\infty }^{\infty } \frac{f(u)}{a} e^{-j \frac{\omega }{a} u} du $$

which simplifies to

$$\begin{aligned} \mathcal{F} [f(at)] = \frac{1}{a} F\left( \frac{\omega }{a}\right) . \end{aligned}$$
(1.62)

Consider now the case in which \(a<0\). By a similar procedure it follows that

$$\begin{aligned} \mathcal{F} [f(at)] = - \frac{1}{a} F\left( \frac{\omega }{a}\right) . \end{aligned}$$
(1.63)

Finally, Eqs. (1.62) and (1.63) can be combined and written as

$$\begin{aligned} \mathcal{F} [f(at)] = \frac{1}{|a|} F\left( \frac{\omega }{a}\right) . \end{aligned}$$
(1.64)

This result points to the fact that if a signal is compressed in the time domain by a factor a, then its frequency spectrum will expand in the frequency domain by that same factor.
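A quick numerical illustration of (1.64), assuming Python with NumPy, uses the bilateral exponential of (1.36), whose transform is known in closed form; the value a = −3 is a hypothetical choice that also exercises the absolute value.

```python
import numpy as np

a = -3.0                                    # scaling factor (negative on purpose)
t = np.linspace(-20, 20, 400001)
dt = t[1] - t[0]

def ft_numeric(x, w):
    """Direct numerical evaluation of the Fourier transform (1.35)."""
    return np.sum(x * np.exp(-1j * w * t)) * dt

def F(w):
    """Closed-form transform of f(t) = exp(-|t|), from (1.36) with a = 1."""
    return 2.0 / (1.0 + w ** 2)

for w in [0.0, 0.5, 2.0]:
    lhs = ft_numeric(np.exp(-np.abs(a * t)), w).real     # transform of f(at)
    rhs = F(w / a) / abs(a)                              # (1/|a|) F(w/a)
    print(w, round(float(lhs), 5), round(float(rhs), 5))
```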

Symmetry

This is an interesting property which can be fully observed in even functions. The symmetry property states that if

$$\begin{aligned} f(t) \longleftrightarrow F(\omega ), \end{aligned}$$
(1.65)

then it follows that

$$\begin{aligned} F(t) \longleftrightarrow 2 \pi f( - \omega ). \end{aligned}$$
(1.66)

Proof: By definition,

$$ f(t) = \frac{1}{2\pi } \int _{-\infty }^{+\infty } F(\omega ) e^{j\omega t} d\omega , $$

which after multiplication of both sides by \(2\pi \) becomes

$$ 2\pi f(t) = \int _{-\infty }^{+\infty } F(\omega ) e^{j\omega t} d\omega . $$

By letting \(u = -\ t\) it follows that

$$ 2\pi f(-u) = \int _{-\infty }^{+\infty } F(\omega ) e^{-j\omega u} d\omega , $$

and now by making \(t = \omega \), one obtains

$$ 2\pi f(-u) = \int _{-\infty }^{+\infty } F(t) e^{-jtu} dt . $$

Finally, by letting \(u = \omega \) it follows that

$$\begin{aligned} 2\pi f(-\omega ) = \int _{-\infty }^{+\infty } F(t) e^{-j\omega t} dt . \end{aligned}$$
(1.67)

Example: The Fourier transform of a constant function can be easily derived by use of the symmetry property. Since

$$\begin{aligned} A \delta (t) \longleftrightarrow A, \end{aligned}$$

it follows that

$$\begin{aligned} A \longleftrightarrow 2\pi A \delta (-\omega ) = 2\pi A \delta (\omega ). \end{aligned}$$

Time Domain Shift

Given that \(f(t) \longleftrightarrow F(\omega )\), it then follows that \(f( t - t_{0} ) \longleftrightarrow F(\omega ) e^{-j \omega t_0}\). Let \(g(t) = f(t - t_{0})\). In this case it follows that

$$\begin{aligned} G( \omega ) = \mathcal{F} [ g( t ) ] = \int _{-\infty }^{\infty } f ( t - t_{0} ) e^{-j \omega t} dt. \end{aligned}$$
(1.68)

By making \(\tau = t - t_{0}\) it follows that

$$\begin{aligned} G(\omega ) = \int _{-\infty }^{\infty } f(\tau ) e^{-j \omega ( \tau + t_{0} ) } d\tau \end{aligned}$$
(1.69)
$$\begin{aligned} = \int _{-\infty }^{\infty } f(\tau ) e^{-j \omega \tau } e^{-j \omega t_{0} } d\tau , \end{aligned}$$
(1.70)

and finally

$$\begin{aligned} G(\omega ) = e^{-j \omega t_{0} } F(\omega ). \end{aligned}$$
(1.71)

This result shows that whenever a function is shifted in time its frequency domain amplitude spectrum remains unaltered. However, the corresponding phase spectrum experiences a rotation proportional to \(\omega t_{0}\).

Frequency Domain Shift

Given that \(f(t) \longleftrightarrow F(\omega )\) it then follows that \(f(t) e^{j \omega _{0} t} \longleftrightarrow F(\omega - \omega _{0})\).

$$\begin{aligned} \mathcal{F} [f(t) e^{j \omega _{0} t}] = \int _{-\infty }^{\infty } f(t) e^{j \omega _{0} t} e^{-j \omega t} dt \end{aligned}$$
(1.72)
$$ = \int _{-\infty }^{\infty } f(t) e^{-j (\omega - \omega _{0} ) t } dt, $$
$$\begin{aligned} \mathcal{F} [f(t) e^{j \omega _{0} t} ] = F( \omega - \omega _{0} ). \end{aligned}$$
(1.73)

Differentiation in the Time Domain

Given that

$$\begin{aligned} f(t) \longleftrightarrow F(\omega ), \end{aligned}$$
(1.74)

it then follows that

$$\begin{aligned} \frac{d f(t)}{dt} \longleftrightarrow j \omega F(\omega ). \end{aligned}$$
(1.75)

Proof: Let us consider the expression for the inverse Fourier transform

$$\begin{aligned} f(t) = \frac{1}{2\pi } \int _{-\infty }^{\infty } F(\omega ) e^{j \omega t} d \omega . \end{aligned}$$
(1.76)

Differentiating in time it follows that

$$\begin{aligned} \frac{d f(t)}{dt}&= \frac{1}{2\pi } \frac{d}{dt} \int _{-\infty }^{\infty } F(\omega ) e^{j \omega t} d \omega \\&= \frac{1}{2\pi } \int _{-\infty }^{\infty } F(\omega ) \frac{d}{dt}\, e^{j \omega t} d \omega \\&= \frac{1}{2\pi } \int _{-\infty }^{\infty } j \omega F(\omega ) e^{j \omega t} d \omega , \end{aligned}$$

and then

$$\begin{aligned} \frac{d f(t) }{dt} \longleftrightarrow j \omega F(\omega ). \end{aligned}$$
(1.77)

In general it follows that

$$\begin{aligned} \frac{d^{n} f(t) }{dt^{n}} \longleftrightarrow (j \omega )^{n} F(\omega ). \end{aligned}$$
(1.78)

By computing the Fourier transform of the signal \( f(t) = \delta (t) - \alpha e^{ - \alpha t } u(t) \), it is immediate to show that by applying the property of differentiation in time, this signal is the time derivative of the signal \( g(t) = e^{ - \alpha t } u(t) \).

Integration in the Time Domain

Let f(t) be a signal with zero average value, i.e., let \(\int _{-\infty }^{\infty } f(t) dt = 0\). By defining

$$\begin{aligned} g(t) = \int _{-\infty }^{t} f(\tau ) d \tau , \end{aligned}$$
(1.79)

it follows that

$$ \frac{ d g (t) }{dt} = f(t), $$

and since

$$\begin{aligned} g(t) \longleftrightarrow G(\omega ), \end{aligned}$$
(1.80)

then

$$ f(t) \longleftrightarrow j \omega G(\omega ) , $$

and

$$\begin{aligned} G (\omega ) = \frac{F(\omega )}{j \omega }. \end{aligned}$$
(1.81)

In this manner it follows that for a signal with zero average value

$$\begin{aligned} f(t) \longleftrightarrow F(\omega ) \end{aligned}$$
$$\begin{aligned} \int _{-\infty }^{t} f(\tau )d\tau \longleftrightarrow \frac{F(\omega )}{j\omega }. \end{aligned}$$
(1.82)

Generalizing, for the case in which f(t) has a nonzero average value, it follows that

$$\begin{aligned} \int _{-\infty }^{t} f(\tau )d\tau \longleftrightarrow \frac{F(\omega )}{j\omega } + \pi \delta (\omega )F(0). \end{aligned}$$
(1.83)

The Convolution Theorem

The convolution theorem is a powerful tool for analyzing the frequency content of a signal, and it allows many relevant results to be obtained. One instance of the use of the convolution theorem, of fundamental importance in communication theory, is the sampling theorem, which will be the subject of the next section.

The convolution between two time functions f(t) and g(t) is defined by the following integral:

$$\begin{aligned} \int _{-\infty }^{\infty } f(\tau ) g(t - \tau ) d \tau , \end{aligned}$$
(1.84)

which is often denoted as \(f(t) *g(t)\).

Let \(h(t) = f(t) *g(t)\) and let \(h(t) \longleftrightarrow H(\omega )\). It follows that

$$\begin{aligned} H(\omega ) = \int _{-\infty }^{\infty } h(t) e^{-j \omega t} dt = \int _{-\infty }^{\infty } \int _{-\infty }^{\infty } f(\tau ) g(t - \tau ) e^{-j \omega t} dt d \tau . \end{aligned}$$
(1.85)
$$\begin{aligned} H(\omega ) = \int _{-\infty }^{\infty } f(\tau ) \int _{-\infty }^{\infty } g(t - \tau ) e^{-j \omega t} dt d \tau , \end{aligned}$$
(1.86)
$$\begin{aligned} H(\omega ) = \int _{-\infty }^{\infty } f(\tau ) G(\omega ) e^{ -j \omega \tau } d \tau \end{aligned}$$
(1.87)

and finally,

$$\begin{aligned} H(\omega ) = F(\omega ) G(\omega ) . \end{aligned}$$
(1.88)

The convolution of two time functions is equivalent in the frequency domain to the product of their respective Fourier transforms. For the case in which \(h(t) = f(t) \cdot g(t)\), proceeding in a similar manner one obtains

$$\begin{aligned} H(\omega ) = \frac{1}{2\pi } [ F(\omega ) *G(\omega ) ]. \end{aligned}$$
(1.89)

In other words, the product of two time functions has a Fourier transform given by the convolution of their respective Fourier transforms. The convolution operation is often used when computing the response of a linear circuit, given its impulse response and an input signal.

Example: The circuit in Fig. 1.6 has the impulse response h(t) given by

$$ h(t) = \frac{1}{RC} e^{ - \frac{t}{RC} } u(t) . $$

The application of the unit impulse \(x(t) = \delta (t)\) as the input to this circuit causes an output \(y(t) = h(t) *x(t)\). In the frequency domain, by the convolution theorem it follows that \(Y(\omega ) = H(\omega ) X(\omega ) = H(\omega )\), i.e., the Fourier transform of the impulse response of a linear system is the system transfer function.

Fig. 1.6 RC circuit
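The convolution theorem can be illustrated numerically for this RC circuit. The sketch below is a minimal example assuming Python with NumPy; the values RC = 1 s and a rectangular input pulse are hypothetical. It computes the output once by discrete convolution in the time domain and once by multiplying FFTs (zero-padded so that the circular convolution matches the linear one); the two results agree essentially to machine precision.

```python
import numpy as np

RC = 1.0                                     # hypothetical time constant, seconds
dt = 1e-3
t = np.arange(0, 10, dt)

h = (1 / RC) * np.exp(-t / RC)               # impulse response of the RC circuit
x = np.where(t < 2.0, 1.0, 0.0)              # rectangular input pulse, 2 s long

# Time domain: y = h * x, discrete approximation of the convolution integral (1.84)
y_time = np.convolve(h, x)[: len(t)] * dt

# Frequency domain: product of the transforms, then back to time, as in (1.88)
n_fft = 2 * len(t)                           # zero padding avoids wrap-around
y_freq = np.fft.ifft(np.fft.fft(h, n_fft) * np.fft.fft(x, n_fft)).real[: len(t)] * dt

print(np.max(np.abs(y_time - y_freq)))       # negligible: both routes coincide
```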

Using the frequency convolution theorem it can be shown that

$$ \cos ( \omega _c t ) u(t) \longleftrightarrow \frac{\pi }{2} [ \delta ( \omega + \omega _c ) + \delta ( \omega - \omega _c ) ] + j \frac{\omega }{\omega ^2_c - \omega ^2}. $$

1.5 The Sampling Theorem

A bandlimited signal f(t), having no frequency components above \(\omega _{M} = 2\pi f_M\), can be reconstructed from its samples, collected at uniform time intervals \(T_s = 1/f_s\), i.e., at a sampling rate \(f_s\), in which \(f_s \ge 2f_M\).

By a bandlimited signal \(f(t) \longleftrightarrow F(\omega )\) it is understood that there is a frequency \(\omega _M\) above which \(F(\omega ) = 0\), i.e., that \(F(\omega ) = 0\) for \(|\omega | > \omega _{M}\). Nyquist concluded that all the information about f(t), as illustrated in Fig. 1.7, is contained in the samples of this signal, collected at regular time intervals \(T_s\). In this manner the signal can be completely recovered from its samples. For a bandlimited signal f(t), i.e., such that \( F(\omega ) = 0\) for \(| \omega | > \omega _M \), it follows that

$$ f(t) *\frac{ \sin (a t) }{ \pi t } = f(t), \ \mathrm{if} \ a > \omega _M, $$

because in the frequency domain this corresponds to the product of \(F(\omega )\) by a gate function of width greater than \(2 \omega _M\).

The function f(t) is sampled once every \(T_s\) seconds or, equivalently, sampled with a sampling frequency \(f_s\), in which \(f_s = 1/T_s \ge 2 f_{M}\).

Fig. 1.7 Bandlimited signal f(t) and its spectrum \(F(\omega )\)

Fig. 1.8 Impulse train used for sampling

Consider the signal \(f_{s}(t) = f(t) \delta _{T}(t)\), in which

$$\begin{aligned} \delta _T(t) = \sum _{n = - \infty }^{\infty } \delta (t - n T) \longleftrightarrow \omega _o \delta _{\omega _o} = \omega _o \sum _{n = - \infty }^{\infty } \delta (\omega - n \omega _o). \end{aligned}$$
(1.90)

The signal \(\delta _{T}(t)\) is illustrated in Fig. 1.8. The signal \(f_s(t)\) represents f(t) sampled at uniform time intervals \(T_s\) seconds. From the frequency convolution theorem, it follows that the Fourier transform of the product of two functions in the time domain is given by the convolution of their respective Fourier transforms. It now follows that

$$\begin{aligned} f_{s}(t) \longleftrightarrow \frac{1}{2\pi } [ F(\omega ) *\omega _{0} \delta _{\omega _{0}}(\omega ) ] \end{aligned}$$
(1.91)

and thus

$$\begin{aligned} f_{s}(t) \longleftrightarrow \frac{1}{T} [F(\omega ) *\delta _{\omega _{0}}(\omega )] = \frac{1}{T} \sum _{n = -\infty }^{\infty } F(\omega - n \omega _{o}) . \end{aligned}$$
(1.92)
Fig. 1.9 Sampled signal and its spectrum

It can be observed from Fig. 1.9 that if the sampling frequency \(\omega _s\) is less than \(2\omega _{M}\), there will be an overlap of spectral components. This will cause a loss of information because the original signal can no longer be fully recovered from its samples. As \(\omega _s\) becomes smaller than \(2\omega _{M}\), the sampling rate diminishes causing a partial loss of information. Therefore, the minimum sampling frequency that allows perfect recovery of the signal is \(\omega _s = 2 \omega _{M}\), and is known as the Nyquist sampling rate. In order to recover the original spectrum \(F(\omega )\), it is enough to pass the sampled signal through a low-pass filter with cutoff frequency \(\omega _{M}\).

For applications in telephony, the sampling frequency is \(f_S = 8{,}000\) samples per second, or 8 k samples/s. The speech signal is then quantized, as will be discussed later, into 256 distinct levels. Each level corresponds to an 8-bit code (\(2^8 = 256\)).

After encoding, the signal is transmitted at a rate of 8,000 samples/s \(\times \) 8 bits/sample= 64 kbits/s and occupies a bandwidth of approximately 64 kHz.

If the sampling frequency \(\omega _S\) is lower than \(2 \pi B\), in which B denotes the signal bandwidth in hertz, there will be spectral overlap and, as a consequence, information loss. Therefore, the minimum sampling frequency for a baseband signal to be recovered without loss is \(\omega _S = 2 \pi B\), known as the Nyquist sampling frequency.

As just mentioned, if the sampling frequency is lower than the Nyquist frequency, the signal will not be completely recovered, since there will be spectral superposition, leading to distortion in the highest frequencies. This phenomenon is known as aliasing. On the other hand, increasing the sampling frequency beyond the Nyquist frequency leads to a spectral separation larger than the minimum necessary to recover the signal.
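The sketch below, a minimal illustration assuming Python with NumPy, makes these statements concrete: a 10 Hz cosine sampled at 50 Hz (above the Nyquist rate) is recovered by sinc interpolation, while the same signal sampled at 12 Hz (below the Nyquist rate) becomes indistinguishable from a 2 Hz alias. The frequencies and the observation window are hypothetical test values.

```python
import numpy as np

fM = 10.0                                    # signal frequency, Hz
t = np.linspace(0, 1, 2001)                  # dense axis playing the role of continuous time
x = np.cos(2 * np.pi * fM * t)

def reconstruct(fs):
    """Sample cos(2 pi fM t) at rate fs and rebuild it by ideal sinc interpolation."""
    Ts = 1 / fs
    n = np.arange(-1, 2, Ts)                 # sampling instants; extra span reduces edge effects
    samples = np.cos(2 * np.pi * fM * n)
    return np.sum(samples[:, None] * np.sinc((t[None, :] - n[:, None]) / Ts), axis=0)

alias = np.cos(2 * np.pi * 2.0 * t)          # 12 Hz sampling folds 10 Hz onto 12 - 10 = 2 Hz

print(np.max(np.abs(reconstruct(50.0) - x)))       # small: fs > 2 fM, signal recovered
print(np.max(np.abs(reconstruct(12.0) - x)))       # large (order of the amplitude): original lost
print(np.max(np.abs(reconstruct(12.0) - alias)))   # small: what remains is the 2 Hz alias
```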

1.6 Parseval’s Theorem

For a real signal f(t) of finite energy, often called simply a real energy signal, the energy E associated with f(t) is given by

$$ E = \int _{-\infty }^{\infty } f^2(t) dt $$

and can equivalently be calculated by the formula

$$ E = \frac{1}{2 \pi } \int _{-\infty }^{\infty } | F(\omega ) |^2 d \omega $$

It follows that

$$\begin{aligned} \int _{-\infty }^{\infty } f^2(t) dt = \frac{1}{2 \pi } \int _{-\infty }^{\infty } | F(\omega ) |^2 d \omega . \end{aligned}$$
(1.93)

The relationship given in (1.93) is known as Parseval’s theorem or Parseval’s identity. For a real signal x(t) with energy E it can be shown, by using Parseval’s identity, that the signals x(t) and \(y(t) = x(t - \tau )\) have the same energy E.

Another way of expressing Parseval’s identity is as follows:

$$\begin{aligned} \int _{-\infty }^{\infty } f(x) G(x) dx = \int _{-\infty }^{\infty } F(x) g(x) dx . \end{aligned}$$
(1.94)
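Parseval's theorem can be verified numerically for the bilateral exponential of (1.36), whose transform is known in closed form. The sketch below is a minimal illustration assuming Python with NumPy; the integration limits are hypothetical truncation values.

```python
import numpy as np

# Time domain energy of f(t) = exp(-|t|)
t = np.linspace(-20, 20, 200001)
dt = t[1] - t[0]
E_time = np.sum(np.exp(-2 * np.abs(t))) * dt          # integral of f^2(t); exact value is 1

# Frequency domain energy, using F(w) = 2 / (1 + w^2) from (1.36)
w = np.linspace(-200, 200, 400001)
dw = w[1] - w[0]
E_freq = np.sum((2.0 / (1.0 + w ** 2)) ** 2) * dw / (2 * np.pi)

print(round(float(E_time), 4), round(float(E_freq), 4))    # both ~ 1, as (1.93) predicts
```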

1.7 Average, Power, and Autocorrelation

As mentioned earlier, the average value of a real  signal x(t) is given by

$$\begin{aligned} \overline{x}(t) = \lim _{T \rightarrow \infty } \frac{1}{T} \int _{\frac{-T}{2}}^{\frac{T}{2}} x(t) dt. \end{aligned}$$
(1.95)

The instantaneous power of x(t) is given by

$$\begin{aligned} p_{X}(t) = x^{2}(t). \end{aligned}$$
(1.96)

If the signal x(t) exists for the whole interval \((-\infty , +\infty )\), the total power \(\overline{P}_X\) is defined for a real signal x(t) as the power dissipated in a 1 ohm resistor, when a voltage x(t) is applied to this resistor (or a current x(t) flows through the resistor) (Lathi 1989). Thus,

$$\begin{aligned} \overline{P}_X = \lim _{T \rightarrow \infty } \frac{1}{T} \int _{\frac{-T}{2}}^{\frac{T}{2}} x^{2}(t) dt. \end{aligned}$$
(1.97)

From the previous definition, the unit to measure \(\overline{P}_X\) corresponds to the square of the units of the signal x(t) (volt\(^2\), amp\(^2\)). These units are converted to watts only if they are normalized by units of impedance (ohm). It is common to express the power in decibels (dB). The power in decibels is given by the expression (Gagliardi 1988)

$$\begin{aligned} \overline{P}_{X, dB} = 10 \log \overline{P}_X . \end{aligned}$$
(1.98)

The total power (\(\overline{P}_X\)) contains two components: one DC component, due to a nonzero average value of the signal x(t) (\(\overline{P}_{DC}\)), and an AC component (\(\overline{P}_{AC}\)). The DC power of the signal is given by

$$\begin{aligned} \overline{P}_{DC} = (\overline{x}(t))^2. \end{aligned}$$
(1.99)

It follows that the AC power can be determined by removing the DC power from the total power, i.e.,

$$\begin{aligned} \overline{P}_{AC} = \overline{P}_{X} - \overline{P}_{DC}. \end{aligned}$$
(1.100)
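The sketch below, a minimal example assuming Python with NumPy and using a hypothetical test signal with a DC level of 2 and a sinusoidal component of amplitude 3, estimates the quantities defined in (1.95), (1.97), (1.99) and (1.100) over a long observation window.

```python
import numpy as np

T = 1000.0                                  # long window approximating the limit T -> infinity
N = 1_000_000
t = np.linspace(0, T, N, endpoint=False)
x = 2.0 + 3.0 * np.cos(2 * np.pi * t)       # hypothetical signal: DC level 2, AC amplitude 3

x_mean = np.mean(x)                         # time average, (1.95)
P_total = np.mean(x ** 2)                   # total power, (1.97)
P_dc = x_mean ** 2                          # DC power, (1.99)
P_ac = P_total - P_dc                       # AC power, (1.100)

print(round(P_total, 3), round(P_dc, 3), round(P_ac, 3))   # ~ 8.5, 4.0 and 4.5
```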

Time Autocorrelation of Signals

The average time autocorrelation \(\overline{R}_{X}(\tau )\), or simply autocorrelation, of a real signal x(t) is defined as follows

$$\begin{aligned} \overline{R}_{X}(\tau ) = \lim _{T \rightarrow \infty } \frac{1}{T} \int _{\frac{-T}{2}}^{\frac{T}{2}} x(t) x(t + \tau ) dt. \end{aligned}$$
(1.101)

The change of variable \(y = t + \tau \) allows Eq. (1.101) to be written as

$$\begin{aligned} \overline{R}_{X}(\tau ) = \lim _{T \rightarrow \infty } \frac{1}{T} \int _{\frac{-T}{2}}^{\frac{T}{2}} x(t) x(t - \tau ) dt. \end{aligned}$$
(1.102)

From Eqs. (1.101) and (1.102), it follows that \(\overline{R}_X (\tau )\) is an even function of \(\tau \), and thus (Lathi 1989)

$$\begin{aligned} \overline{ R}_X(-\tau ) = \overline{R}_X(\tau ). \end{aligned}$$
(1.103)

From the definition of autocorrelation and power it follows that

$$\begin{aligned} \overline{P}_X = \overline{R}_X(0) \end{aligned}$$
(1.104)

and

$$\begin{aligned} \overline{P}_{DC} = \overline{R}_X (\infty ), \end{aligned}$$
(1.105)

i.e., from its autocorrelation function it is possible to obtain information about the power of a signal. The autocorrelation function can also be considered in the frequency domain by taking its Fourier transform, i.e.,

$$\begin{aligned} \mathcal{F} \{ \overline{R}_{X}(\tau ) \} = \int _{-\infty }^{+\infty } \lim _{T \rightarrow \infty } \frac{1}{T} \int _{\frac{-T}{2}}^{\frac{T}{2}} x(t) x(t + \tau ) e^{-j\omega \tau } dt\, d\tau \end{aligned}$$
(1.106)
$$ = \lim _{T \rightarrow \infty } \frac{1}{T} \int _{\frac{-T}{2}}^{\frac{T}{2}} x(t) \int _{-\infty }^{+\infty } x(t + \tau ) e^{-j\omega \tau } d\tau \, dt $$
$$ = \lim _{T \rightarrow \infty } \frac{1}{T} \int _{\frac{-T}{2}}^{\frac{T}{2}} x(t) X(\omega ) e^{j\omega t} dt $$
$$ = X(\omega ) \lim _{T \rightarrow \infty } \frac{1}{T} \int _{\frac{-T}{2}}^{\frac{T}{2}} x(t) e^{j\omega t } dt $$
$$ = \lim _{T \rightarrow \infty } \frac{X(\omega )X(-\omega )}{T} $$
$$\begin{aligned} = \lim _{T \rightarrow \infty } \frac{{|X(\omega )|}^2}{T}. \end{aligned}$$
(1.107)

The power spectral density \(\overline{S}_X\) of a signal x(t) is defined as the Fourier transform of the autocorrelation function \(\overline{R}_X (\tau )\) of x(t), i.e., as

$$\begin{aligned} \overline{S}_X = \int _{-\infty }^{\infty } \overline{R}_X (\tau ) e^{-j \omega \tau } d\tau . \end{aligned}$$
(1.108)

Example: Find the power spectral density of the sinusoidal signal \(x(t) = A \cos ({\omega }_0 t + \theta )\) illustrated in Fig. 1.10a.

Fig. 1.10 Sinusoidal signal and its autocorrelation and power spectral density

Solution:

$$ \overline{R}_X(\tau ) = \lim _{T \rightarrow \infty } \frac{1}{T} \int _{\frac{-T}{2}}^{\frac{T}{2}} A^2 \cos (\omega _0 t + \theta ) \cos \, [\omega _0(t + \tau )+ \theta ] dt $$
$$ = \frac{A^2}{2} \lim _{T \rightarrow \infty } \frac{1}{T} \left[ \int _{\frac{-T}{2}}^{\frac{T}{2}} \cos \omega _0 \tau dt + \int _{\frac{-T}{2}}^{\frac{T}{2}} \cos \left( 2\omega _0 t + \omega _0 \tau +2\theta \right) dt \right] $$
$$ = \frac{A^2}{2} \cos \omega _0 \tau . $$

Notice that the autocorrelation function (Fig. 1.10b) is independent of the phase \(\theta \). The power spectral density (Fig. 1.10c) is given by

$$ \overline{S}_X(\omega ) = \mathcal{F} \left[ R_X(\tau ) \right] $$
$$ \overline{S}_X ( \omega ) = \frac{\pi A^2}{2} \left[ \delta (\omega + \omega _0) + \delta (\omega - \omega _0) \right] . $$

The power or mean square average of x(t) is given by

$$ \overline{P}_X = \overline{R}_X(0) = \frac{A^2}{2}. $$
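The autocorrelation just derived can be confirmed numerically. The sketch below, a minimal check assuming Python with NumPy and hypothetical values for the amplitude, frequency and phase, estimates \(\overline{R}_X(\tau )\) from (1.101) for a few lags and compares it with \((A^2/2)\cos (\omega _0 \tau )\); the estimate does not depend on the phase \(\theta \), as expected.

```python
import numpy as np

A, f0, theta = 2.0, 5.0, 0.7                # hypothetical amplitude, frequency (Hz) and phase
w0 = 2 * np.pi * f0
T = 100.0                                   # long window approximating the limit in (1.101)
N = 1_000_000
t = np.linspace(0, T, N, endpoint=False)

def x(u):
    return A * np.cos(w0 * u + theta)

for tau in [0.0, 0.05, 0.1]:
    R_est = np.mean(x(t) * x(t + tau))      # time-average autocorrelation estimate
    R_theory = (A ** 2 / 2) * np.cos(w0 * tau)
    print(tau, round(float(R_est), 4), round(float(R_theory), 4))
```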

1.8 Problems

 

  1. (1)

    Consider the signal

    $$ x(t) = \left\{ \begin{array}{rc} 1, &{} \ 0 \le t < \pi \\ -1, &{} \pi \le t \le 2 \pi \end{array} \right. $$

    which is approximated as \( {\tilde{x}}(t) = \frac{4}{\pi } \sin (t)\), in the time interval considered.

    1. (a)

      Show that the error in the approximation is orthogonal to the function \({\tilde{x}}(t)\);

    2. (b)

      Show that the energy of x(t) is the sum of the energy in the error signal with the energy of the signal \({\tilde{x}}(t)\).

  2. (2)

    Calculate the instantaneous power and the average power of the following signals:

    1. (a)

      \( x(t) = \cos (2 \pi t) \)

    2. (b)

      \( y(t) = \sin (2 \pi t) \)

    3. (c)

      \( z(t) = x(t) + y(t) \).

  3. (3)

    Determine the constant A such that \( f_1(t)\) and \(f_2(t)\) are orthogonal for all t, in which \( f_1(t) = e^{ - |t| }\) and \(f_2(t) = 1 - A e^{ - 2 |t| }\).

  4. (4)

    Given the set of functions \(f_n(t)\), as illustrated in Fig. 1.2, show that

    1. (a)

      This set of functions constitutes an orthogonal set in the interval (0, 1). For an orthonormal set, the integral of the product of the functions is one or zero. Is the set orthonormal?

    2. (b)

      Represent a given signal \(f(t) = 2t\) in the interval (0, 1), using this set of orthogonal functions.

    3. (c)

      Plot the function f(t) and its approximate representation \({\tilde{f}}(t)\) in the same graph.

    4. (d)

      Determine the energy of the error signal resulting from the approximation.

  5. (5)

    Represent the gate function, and its complement, using the unit step.

  6. (6)

    Represent analytically the graph of a series of triangular functions, using generalized functions.

  7. (7)

    Calculate the following integrals:

    1. (a)

      \( \int _{-\infty }^{\infty } e^{- \alpha t } u(t) dt\),

    2. (b)

      \( \int _{-\infty }^{\infty } e^{- \alpha t } \delta (t) dt\),

    3. (c)

      \( \int _{-\infty }^{\infty } e^{- \alpha t } r(t) dt\), for \(r(t) = \int _{-\infty }^{t} u( \tau ) d \tau \).

  8. (8)

    Calculate the Fourier transform of the impulse function assuming the Fourier transform of the unit step function is known.

  9. (9)

    Calculate the inverse Fourier transform of the function

    $$ F(\omega ) = A [ u( \omega + \omega _0 ) - u( \omega - \omega _0 ) ]. $$
  10. (10)

    Calculate the Fourier transform of the function \(f(t) = A e^{ - \alpha t} u(t)\), and plot the corresponding magnitude and phase diagrams.

  11. (11)

    Plot the magnitude and phase diagrams of the Fourier transform of the function \(\delta ( t + t_0 )\).

  12. (12)

    For a circuit with impulse response h(t),

    $$ h(t) = \frac{1}{\tau } e^{ - \frac{t}{\tau } } u(t), $$

    find the response for the excitation x(t) given by

    $$ x(t) = t e^{ - \frac{t}{\tau } } u(t). $$
  13. (13)

    Find the Fourier transform of the function \(g(t) = f(t) \cos (\omega _c t)\), given the Fourier transform of f(t).

  14. (14)

    Show that for a function f(t) in general:

    $$ \int _{-\infty }^{t} f(t) dt \longleftrightarrow \frac{F(\omega )}{j\omega } + \pi F(0) \delta (\omega ). $$
  15. (15)

    Prove that, for a real energy signal f(t), the energy associated to f(t),

    $$ \int _{-\infty }^{\infty } f^2(t) dt $$

    can be calculated by the formula

    $$ \frac{1}{2 \pi } \int _{-\infty }^{\infty } | F(\omega ) |^2 d \omega . $$
  16. (16)

    Use the property of the convolution in the frequency domain to show that

    $$ \cos ( \omega _c t ) u(t) \longleftrightarrow \frac{\pi }{2} [ \delta ( \omega + \omega _c ) + \delta ( \omega - \omega _c ) ] + j \frac{\omega }{\omega ^2_c - \omega ^2}. $$
  17. (17)

    A signal x(t) has the exponential Fourier series expansion as given.

    $$ x(t) = - \frac{2 A}{\pi } \sum _{n = - \infty }^{\infty } \frac{1}{4 n^2 - 1} e^{j 2 \pi n t}. $$

    Find its corresponding trigonometric Fourier series expansion.

  18. (18)

    By defining the cutoff frequency as the smallest frequency for which the first spectral zero occurs, determine the cutoff frequency \((\omega _0)\) of the signal x(t) in Fig. 1.11.

  19. (19)

    Calculate the Fourier transform of the signals represented in Figs. 1.12 and 1.13.

  20. (20)

    Calculate the frequency response of a linear system the transfer function of which is given, when the input is the pulse \(x(t) = A[ u(t + T/2) - u(t - T/2) ]\). Plot the corresponding magnitude and phase diagrams of the frequency response.

    $$ H(\omega ) = j u (-\omega ) - j u (\omega ). $$
  21. (21)

    A signal x(t) is given by the expression

    $$ x(t) = \frac{ \sin (At) }{ \pi t }. $$

    Determine the Nyquist frequency for sampling this signal.

  22. (22)

    What is the least sampling rate that is required to sample the signal \( f(t) = \sin ^3 (\omega _0 t)\)? Show graphically the effect caused by a reduction of the sampling rate, falling below the Nyquist rate.

  23. (23)

    Calculate the Fourier transform of the signal

    $$ g(t) = A e^{ - t } u(t) $$

    and then apply the property of integration in the time domain to obtain the Fourier transform of \(f(t) = A (1-e^{-t} ) u(t)\).

  24. (24)

    Determine the magnitude spectrum and phase function of the signal \(f(t) = t e^{ -a t } u(t)\).

  25. (25)

    A voltage signal

    $$ v (t) = V_0 + \sum _{n=1}^{\infty } V_n \cos (n \omega _0 t + \theta _n ) $$

    is applied to the input of a circuit, producing the current

    $$ i (t) = I_0 + \sum _{m=1}^{\infty } I_m \cos (m \omega _0 t + \phi _m ). $$

    Using the orthogonality concept calculate the power (P) absorbed by the circuit, considering

    $$ P = \frac{1}{T} \int _{- T/2}^{T/2} v(t) i(t) dt. $$

    What is the power for the case in which \( \theta _n = \phi _n \)?

  26. (26)

    Define an ideal low-pass filter and explain why it is not physically realizable. Indicate the corresponding filter transfer function and the filter impulse response.

  27. (27)

    Given the linear system shown in Fig. 1.14, in which T represents a constant delay, determine:

    1. (a)

      The system transfer function \(H(\omega )\),

    2. (b)

      The system impulse response h(t).

  28. (28)

    Let f(t) be the signal with spectrum \(F(\omega )\) as follows

$$ F(\omega ) = \frac{AT}{1 + j \omega T}, \ \mathrm{in \, which}: \ T = 0.5\,\upmu \mathrm{s}, \ A = 5\,\mathrm{V}. $$
    1. (a)

      Calculate and plot the magnitude of the Fourier transform, \(|F(\omega )|\).

    2. (b)

      Calculate the frequency for which \(|F(\omega )|\) corresponds to a value 3 dB below the maximum amplitude value in the spectrum.

    3. (c)

      Calculate the energy of the signal in time, f(t).

  29. (29)

    Represent the following signals using the unit step:

    1. (a)

      the ramp function, \(r(t + T)\);

    2. (b)

      the echo function, \(\delta (t - T) + \delta (t + T)\);

    3. (c)

      a periodic sawtooth waveform with period T, and peak amplitude given by A.

    Plot the corresponding graphs.

  30. (30)

    Calculate the Fourier transform of each one of the following signals:

    1. (a)

      \( x(t) = e^{ - t + t_o } u(t - t_o) \);

    2. (b)

      \( y(t) = t u(t) \);

    3. (c)

      \( z(t) = y^{\prime } (t) \).

    Draw the time signals, as well as, the associated magnitude spectra.

  31. (31)

    Verify, by applying the properties of homogeneity and additivity, whether the process generating the signal

    $$ s(t) = A \cos ( \omega _c t + \Delta m(t) + \theta ) $$

    from the input signal m(t) is linear. Perform the same test for the signal

    $$ r(t) = m(t) \cos ( \omega _c t + \theta ). $$
  32. (32)

    A linear system has impulse response \(h(t) = 2 [ u(t) - u(t - T) ]\). Using the convolution theorem, determine the system response to the input signal \(x(t) = u(t) - u(t - T)\).

  33. (33)

    A digital signal x(t) has autocorrelation function

    $$ \bar{R}_X(\tau ) = A^2 \left[ 1 - \frac{|\tau |}{T_b} \right] [ u(\tau + T_b) - u(\tau - T_b)], $$

    in which \(T_b\) is the bit duration. Determine the total power, the AC power and the DC power of the given signal. Calculate the signal power spectral density. Plot the diagrams representing these functions.

  34. (34)

    Calculate the Fourier transform of a periodic signal x(t), represented analytically as

    $$ x(t) = \sum _{-\infty }^{\infty } F_n e^{j n \omega _0 t}. $$
  35. (35)

    Prove the following property of the Fourier transform:

    $$ x( \alpha t ) \longleftrightarrow \frac{1}{|\alpha |} X\left( \frac{\omega }{\alpha } \right) . $$
  36. (36)

    Calculate the average value and the power of the signal

    $$ x(t) = V u( \cos t ). $$
  37. (37)

    For a given real signal x(t), prove the following Parseval identity:

    $$ E = \int _{-\infty }^{\infty } x^2(t) dt = \frac{1}{2\pi } \int _{-\infty }^{\infty } | X(\omega ) |^2 d \omega . $$

    Consider the signals x(t) and \(y(t) = x(t - \tau )\) and show, by using Parseval’s identity, that both signals have the same energy E.

  38. (38)

    A signal y(t) is given by the following expression:

    $$ y(t) = \frac{1}{\pi } \int _{-\infty }^{\infty } \frac{ x(\tau ) }{t - \tau } d \tau , $$

    in which the signal x(t) has a Fourier transform \(X(\omega )\). By using properties of the Fourier transform determine the Fourier transform of y(t).

  39. (39)

Given that the Fourier transform of the signal \( f(t) = \cos (\omega _o t) \) is \( F(\omega ) = \pi [ \delta (\omega + \omega _o) + \delta (\omega - \omega _o)] \), determine the Fourier transform of the signal \( g(t) = \sin (\omega _o t - \phi )\), in which \(\phi \) is a phase constant. Sketch the magnitude and phase graphs of the Fourier transform of this signal.

  40. (40)

    Calculate the Fourier transform of the radio frequency pulse

    $$ f(t) = \cos (\omega _o t) [ u(t + T) - u(t - T) ], $$

    considering that \( \omega _o \gg \frac{2 \pi }{T}\). Sketch the magnitude and phase graphs of the Fourier transform of this signal.

  41. (41)

    Calculate the Fourier transform of the signal \(f(t) = \delta (t) - \alpha e^{ - \alpha t } u(t) \) and show, by using the property of the derivative in the time domain, that this signal is the derivative of the signal \( g(t) = e^{ - \alpha t } u(t) \). Sketch the respective time and frequency domain graphs of the signals, specifying the magnitude and the phase spectra of each signal.

  42. (42)

    By making use of properties of the Fourier transform show that the derivative of the signal \(h(t) = f(t) *g(t)\) can be expressed as

    $$ h^{\prime }(t) = f^{\prime }(t) *g(t), \ \ \mathrm{or} \ \ h^{\prime }(t) = f(t) *g^{\prime }(t). $$
  43. (43)

    Determine the Nyquist frequency for which the following signal can be recovered without distortion.

    $$ f(t) = \frac{ {\sin } \alpha t \cdot {\sin } \beta t }{ t^2 }, \ \ \alpha > \beta . $$

    Sketch the signal spectrum and give a graphical description of the procedure.

  44. (44)

    Using the Fourier transform show that the unit impulse function can be written as

    $$ \frac{1}{\pi } \int _{0}^{\infty } \cos (\omega t) d \omega . $$
  45. (45)

    Using properties of the Fourier transform determine the Fourier transform of the function |t|. Plot the corresponding magnitude and phase spectrum of that transform.

  46. (46)

    Show that \( u(t) *u(t) = r(t) \), in which u(t) represents the unit step function and r(t) denotes the ramp with slope 1.

  47. (47)

    Determine the Fourier transform for each one of the following functions: \(u(t - T)\), t, \(t e^{-a t} u(t)\), \(\frac{1}{t} \), \(\frac{1}{t^2}\). Plot the corresponding time domain diagrams and the respective magnitude and phase spectrum of the associated transforms.

  48. (48)

    Calculate the Fourier transform \(P(\omega )\) of the signal \(p(t) = v^2(t)\) representing the instantaneous power in a 1 \(\Omega \) resistor, as a function of the Fourier transform \(V(\omega )\) of v(t). Using the expression obtained for \(P(\omega )\) plot the instantaneous power spectrum for a sinusoidal input signal \(v(t) = A \cos (\omega _o t)\).

  49. (49)

    Find the complex Fourier series for the signal, its Fourier transform, and plot the corresponding magnitude spectrum.

    $$ f(t) = \cos (\omega _o t) + \sin ^2 (\omega _o t). $$
  50. (50)

    Show that if x(t) is a bandlimited signal, i.e., \( X(\omega ) = 0\) for \(| \omega | > \omega _M \), then

    $$ x(t) *\frac{ \sin (at) }{ \pi t } = x(t), \ \mathrm{if} \ a > \omega _M. $$

    Plot the corresponding graphs to illustrate the proof.

  51. (51)

    Prove the following Parseval equation:

    $$ \int _{-\infty }^{\infty } f(x) G(x) dx = \int _{-\infty }^{\infty } F(x) g(x) dx . $$
  52. (52)

    Find the Fourier transform of the current through a diode, represented by the expression \( i(t) = I_o [ e^{ \alpha v(t)} - 1 ] \), given the voltage v(t) applied to the diode and its Fourier transform \(V(\omega )\), in which \(\alpha \) is a diode parameter and \(I_o\) is the reverse current. Plot the magnitude spectrum of the Fourier transform.

 

Fig. 1.11 Shifted gate function

Fig. 1.12 Triangular waveform

Fig. 1.13 Trapezoidal waveform

Fig. 1.14 Linear system with feedback