4.1 Basic Concepts of Dynamic Nonlinear Networks

Definition 4.1

A network \(\mathscr {D}\), consisting of an arbitrary interconnection of a finite number of the four fundamental circuit elements, is called a dynamic nonlinear network.

Before we begin, we ask the reader to recall from Sect. 3.3 that one should not “lose sight of the forest for its trees.” That is, one should not become so consumed by systematic techniques as to lose all insight into circuit behavior. Also recall that it is often through the introduction of hypothetical, and sometimes pathological, circuits that one gains an in-depth understanding of this subject.

By Definition 4.1, \(\mathscr {D}\) represents the class of all nonlinear networks other than resistive networks [3]. Since this class of dynamic networks is so much larger than the class of resistive networks, it is virtually impossible for us to formulate a general theory that is applicable to the solution of all dynamic networks. After all, it took us an entire chapter just to give an overview of the analysis techniques for resistive nonlinear circuits.

Hence in this chapter, we will primarily use two-terminal dynamic elements and also restrict our discussion to fundamental concepts, starting with the order of complexity.

4.1.1 Order of Complexity

Since the basic problem in dynamic nonlinear networks is to find the solution to a system of nonlinear ordinary differential equations, it is more appropriate to classify dynamic networks according to the “complexity” of their system of differential equations. It is well known that the solution to any system of differential equations can be found only to within a number of arbitrary constants k 1, k 2, ⋯ , k n. In order to determine the n arbitrary constants, we must specify n independent initial conditions .

Definition 4.2

A set of initial conditions is said to be independent if its values can be arbitrarily chosen.

Two systems of differential equations requiring different numbers of initial conditions are usually solved by quite different methods. Hence one meaningful basis for classifying \(\mathscr {D}\) is the number of independent initial conditions that must be specified in order to uniquely determine the solution of the network.

Definition 4.3

The order of complexity of a dynamic network is the minimum number n of independent initial conditions that must be specified in terms of the circuit variables in \(\mathscr {D}\), for completely describing the behavior of the network.

For convenience we shall refer to \(\mathscr {D}\) as a first-order network if n = 1 and a second-order network if n = 2. Since n ≥ 1 for any dynamic network, we might, for the sake of completeness, refer to any resistive network as a zero-order network .

It is important to observe that Definition 4.3 requires the initial conditions to be independent of one another. Definition 4.2 implies that none of the specified initial conditions can be derived from the rest, as Example 4.1.1 illustrates.

Example 4.1.1

Determine the order of the two networks in Fig. 4.1.

Fig. 4.1

Circuits for Example 4.1.1

Solution

Since \(\mathscr {D}_a\) contains only one storage element, we can easily infer that \(\mathscr {D}_a\) is a first-order network. Now, since \(\mathscr {D}_b\) contains two storage elements, it appears at first sight that we can specify two initial conditions, namely, the voltage v 1(t 0) across capacitor C 1 and the voltage v 2(t 0) across capacitor C 2 at some time t 0. However, since by KVL v 2 = v 1 − v DC, the two initial conditions are dependent: once v 1(t 0) is specified, v 2(t 0) is constrained to equal v 1(t 0) − v DC, and hence cannot be chosen arbitrarily. Therefore \(\mathscr {D}_b\) is a first-order network.

From the theory of differential equations in the normal form , it is known that a system of n differential equations requires exactly n initial conditions for its solution. Therefore, it is important that we understand Definition 4.4 for the normal form.

Definition 4.4

The system of n first-order differential equations:

$$\displaystyle \begin{aligned} \frac{dx_1}{dt}&=f_1(x_1,x_2,\cdots,x_n) \\ \frac{dx_2}{dt}&=f_2(x_1,x_2,\cdots,x_n) \\ \cdots&\cdots\cdots \\ \frac{dx_n}{dt}&=f_n(x_1,x_2,\cdots,x_n) {} \end{aligned} $$
(4.1)

is said to be in normal form because:

  1. Only first-order time derivatives appear on the left-hand side of the equations.

  2. No time derivatives appear on the right-hand side of the equations.

  3. The dependent variables coincide with the state variables that appear on the left-hand side.

Since the order of complexity is equal to the number of state variables when the system equations are written in normal form, one approach to determining the order of complexity would be to always write normal form equations for \(\mathscr {D}\), as shown in Example 4.1.2.

Example 4.1.2

Determine the dynamic equations for the network in Fig. 4.2. The characteristics of the various circuit elements are:

$$\displaystyle \begin{aligned}\begin{array}{r*{20}l} &\mathscr{N}_C:\quad && q(v)=2-3v^3+5v^5 \\ &\mathscr{N}_L:\quad && \phi(i)=1+2i-3i^2+i^3 \\ &\mathscr{N}_R:\quad && i_1\quad =1+v_1+3v_1i_2^3-4i_2^5 \\ &\quad && v_2\quad =4-i_2v_1-2i_2^2v_1^5+v_1^3 \end{array}\end{aligned} $$
(4.2)
Fig. 4.2

Circuit for Example 4.1.2

Solution

For this circuit, we can determine the dynamic equations by inspection, without resorting to advanced techniques like MNA that will be discussed later in this chapter. Recall the memory property for inductors and capacitors from Chap. 1: Eq. (1.63) implies that the current i L(t 0) through an inductor is a suitable initial condition. By duality, Eq. (1.71) implies that the voltage v C(t 0) across a capacitor is another suitable initial condition. Let us therefore choose v 3 and i 4 as the state variables, so the order of complexity is 2. Thus we need to obtain the following normal form:

$$\displaystyle \begin{aligned} \frac{dv_3}{dt}&=f_1(v_3,i_4) \\ \frac{di_4}{dt}&=f_2(v_3,i_4) \end{aligned} $$
(4.3)

For \(\mathscr {N}_C\), in terms of circuit variables the q 3 − v 3 characteristic is \(q_3(v_3)=2-3v_3^3+5v_3^5\). Differentiating with respect to time and applying the chain rule we get:

$$\displaystyle \begin{aligned} i_3&=-9v_3^2\frac{dv_3}{dt}+25v_3^4\frac{dv_3}{dt} \end{aligned} $$
(4.4)

Since by KCL i 3 = i 1 and by KVL v 1 = E − v 3, we can simplify the equation above as:

$$\displaystyle \begin{aligned} \frac{dv_3}{dt}&=\frac{i_1}{v_3^2\left(25v_3^2-9\right)} \\ &=\frac{1+v_1+3v_1i_2^3-4i_2^5}{v_3^2\left(25v_3^2-9\right)} \\ &=\frac{1+(E-v_3)+3(E-v_3)i_2^3-4i_2^5}{v_3^2\left(25v_3^2-9\right)} \end{aligned} $$
(4.5)

From KCL at the output port of \(\mathscr {N}_R:i_2=-i_4\). Thus we have the \(\frac {dv_3}{dt}\) equation as:

$$\displaystyle \begin{aligned} \frac{dv_3}{dt}&=\frac{1+(E-v_3)-3(E-v_3)i_4^3+4i_4^5}{v_3^2\left(25v_3^2-9\right)} \end{aligned} $$
(4.6)

With respect to the second state equation, for \(\mathscr {N}_L\) in terms of circuit variables the ϕ 4 − i 4 characteristic is: \(\phi _4(i_4)=1+2i_4-3i_4^2+i_4^3\). Taking the derivative of this characteristic with respect to time and applying the chain rule:

$$\displaystyle \begin{aligned} v_4&=2\frac{di_4}{dt}-6i_4\frac{di_4}{dt}+3i_4^2\frac{di_4}{dt} \end{aligned} $$
(4.7)

Rewriting in terms of \(\frac {di_4}{dt}\) and using the fact that by KVL v 4 = v 2, along with the v 2 definition from \(\mathscr {N}_R\), we get:

$$\displaystyle \begin{aligned} \frac{di_4}{dt}&=\frac{4-i_2v_1-2i_2^2v_1^5+v_1^3}{2-6i_4+3i_4^2} \end{aligned} $$
(4.8)

Applying KCL: i 2 = −i 4, KVL: v 1 = E − v 3 and simplifying we get the second state equation:

$$\displaystyle \begin{aligned} \frac{di_4}{dt}&=\frac{4+i_4(E-v_3)-2i_4^2(E-v_3)^5+(E-v_3)^3}{2-6i_4+3i_4^2} \end{aligned} $$
(4.9)
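Since Eqs. (4.6) and (4.9) are now in normal form, they can be integrated numerically once an initial state is chosen. The sketch below uses a simple forward-Euler step; the values E = 1, v 3(t 0) = 1, i 4(t 0) = 0, the step size, and the horizon are illustrative assumptions, not values from the text (they are chosen to keep both denominators away from zero).

```python
def f1(v3, i4, E):
    # dv3/dt, Eq. (4.6)
    return (1 + (E - v3) - 3*(E - v3)*i4**3 + 4*i4**5) / (v3**2 * (25*v3**2 - 9))

def f2(v3, i4, E):
    # di4/dt, from Eq. (4.8) with the substitutions i2 = -i4 and v1 = E - v3
    return (4 + i4*(E - v3) - 2*i4**2*(E - v3)**5 + (E - v3)**3) / (2 - 6*i4 + 3*i4**2)

def simulate(E=1.0, v3=1.0, i4=0.0, dt=1e-4, t_end=0.05):
    # forward-Euler integration of the normal-form state equations
    t = 0.0
    while t < t_end:
        dv3, di4 = f1(v3, i4, E), f2(v3, i4, E)
        v3, i4 = v3 + dt*dv3, i4 + dt*di4
        t += dt
    return v3, i4
```

Note that the integration must stop before either denominator vanishes; this foreshadows the impasse points discussed in Sect. 4.2.1.6.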

We need to be aware that it may not be possible to write normal form equations for a specific choice of state variables. To demonstrate the difficulty involved, let us examine Eq. (4.4) more closely. Observe that we were able to express i 3 in terms of v 3 and \(\dot {v}_3\) because \(\mathscr {N}_C\) was voltage-controlled. Suppose instead \(\mathscr {N}_C\) were charge-controlled: \(v_3(q_3)=q_3^3-q_3\). We would then need to express q 3 in terms of v 3 before applying the chain rule (to evaluate \(\frac {dq_3}{dv_3}\)). Unfortunately, this is not possible because q 3 is a multivalued function of v 3; equivalently, the inverse function does not exist. Hence the normal form equations cannot be obtained if we insist on v 3 as the state variable.

There is, of course, no reason why we should insist on choosing only voltages and currents as state variables. Any other set of variables x 1, x 2, ⋯ , x n is just as valid, provided Definition 4.4 is satisfied.

Although we could always determine the order of complexity by writing the state equations for \(\mathscr {D}\), we shall now develop a simple technique for determining the order of complexity for a particular classFootnote 1 of \(\mathscr {D}\) by inspection, i.e., without writing down any equation. In order to understand how this method works, it is important for us to obtain a deeper understanding of why initial conditions are necessary from the network’s point of view, and to understand which electrical variables qualify as an appropriate set of initial conditions.

From the mathematical point of view, initial conditions are introduced as a “gimmick” for determining the values of the arbitrary constants associated with the solution to a system of differential equations. From the network’s point of view, initial conditions are introduced because of our ignorance or incomplete knowledge of the past history of excitations that have been applied to the network. In order to understand the above reason, let us consider an arbitrary capacitor C j of an arbitrary network \(\mathscr {D}\). Suppose we want to find the charge q j(t) of this capacitor at time t, namely,

$$\displaystyle \begin{aligned} q_j(t)&=\displaystyle\int\limits_{-\infty}^ti_j(\tau)d\tau \end{aligned} $$
(4.10)

From Eq. (4.10) it is clear that q j(t) can be found only if we know the exact waveform of the capacitor current i j(t) from t = −∞ up to the present time t, that is, from the time the capacitor was manufactured. However, practically speaking, in any physical network excitations are applied at some finite time in the past, say t = t 0. Hence we would usually have information on the excitation waveforms only for t ≥ t 0. This ignorance of the past history of i j(t) prevents us from determining q j(t). However, let us rewrite Eq. (4.10) in the form:

$$\displaystyle \begin{aligned} q_j(t)&=\displaystyle\int\limits_{-\infty}^{t_0}i_j(\tau)d\tau + \displaystyle\int\limits_{t_0}^ti_j(\tau)d\tau \end{aligned} $$
(4.11)

The second integral can be found because we know i j(t) for t ≥ t 0. It is the first integral that is giving us trouble. Observe, however, that at t = t 0 Eq. (4.10) becomes:

$$\displaystyle \begin{aligned} q_j(t_0)&=\displaystyle\int\limits_{-\infty}^{t_0}i_j(\tau)d\tau \end{aligned} $$
(4.12)

Hence Eq. (4.11) becomes:

$$\displaystyle \begin{aligned} q_j(t)&=q_j(t_0) + \displaystyle\int\limits_{t_0}^ti_j(\tau)d\tau \end{aligned} $$
(4.13)

where t ≥ t 0. Equation (4.13) tells us that, provided we are interested only in knowing q j(t) for t ≥ t 0, it is not necessary to know the entire past history of i j(t) for t < t 0. Instead, we need to know only the value of the charge q j in the capacitor at the initial time t 0. This value q j(t 0) is called the initial condition.
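The content of Eq. (4.13) can be checked numerically: splitting the integral at t 0 and replacing the entire past history by the single number q j(t 0) reproduces the full integral of Eq. (4.10). The current waveform i j(τ) = e^(−|τ|) and the finite lower limit standing in for −∞ are illustrative assumptions.

```python
import math

def integral(f, a, b, n=20000):
    # composite trapezoidal rule
    h = (b - a) / n
    return h * (0.5*f(a) + 0.5*f(b) + sum(f(a + k*h) for k in range(1, n)))

i_j = lambda tau: math.exp(-abs(tau))   # assumed capacitor current waveform
t0, t, T = 0.0, 2.0, 40.0               # T stands in for "minus infinity"

q_full  = integral(i_j, -T, t)             # Eq. (4.10): needs the whole history
q_init  = integral(i_j, -T, t0)            # q_j(t0), Eq. (4.12)
q_split = q_init + integral(i_j, t0, t)    # Eq. (4.13): q_j(t0) suffices
```

Only q_init, a single number, summarizes everything that happened before t 0.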

Let us now recall that a capacitor is characterized by a curve in the v − q plane, and if we know v(t) we can find q(t), and vice versa. Since it is necessary to know the initial condition q(t 0) in order to find q(t) for t ≥ t 0, it follows that it is necessary to know v(t 0) in order to find v(t) for t ≥ t 0. Since q(t 0) and v(t 0) determine each other, it is sufficient to specify the initial condition either in terms of the capacitor charge or the capacitor voltage at time t 0. Notice, however, that an examination of Eq. (4.13) shows that specifying the capacitor current i j(t 0) would not do any good, because one cannot determine q j(t 0) from this information alone. We conclude, therefore, that the current in a capacitor is not an appropriate initial condition.

By exact dual arguments, we find that for an inductor:

$$\displaystyle \begin{aligned} \phi_j(t)&=\phi_j(t_0)+\displaystyle\int\limits_{t_0}^tv_j(\tau)d\tau \end{aligned} $$
(4.14)

Thus we can specify either the flux linkage ϕ(t 0) or inductor current i(t 0) as appropriate initial conditions. The voltage across an inductor at t 0 is not an appropriate initial condition.

Let us now explore the concept of independent initial conditions in more detail. We have already seen in Example 4.1.1 that the order of complexity of a dynamic network may not be equal to the number of energy storage elements, because some initial conditions may not be independently specified. In order to diagnose the source of “dependency,” let us consider the more complicated network \(\mathscr {D}\) in Fig. 4.3.

Fig. 4.3

An example of the two possible sources of dependent initial conditions, namely, a loop of capacitors and voltage sources, and a cut set of inductors and current sources

Since \(\mathscr {D}\) contains ten energy-storage elements (six capacitors and four inductors), it appears that we can specify 10 initial conditions, v C1, v C2, v C3, v C4, v C5, v C6, i L1, i L2, i L3, and i L4. However a more careful inspection of the network shows that not all these initial conditions are independent. For example, the loop consisting of capacitors C 1, C 2, C 3 and voltage source E 0 imposes a constraint due to KVL:

$$\displaystyle \begin{aligned} v_{C1}+v_{C2}+v_{C3}&=E_0 \end{aligned} $$
(4.15)

This equation implies that only two of the three initial conditions v C1, v C2, and v C3 can be specified arbitrarily. We conclude that although there are six capacitors, only five capacitor voltages are independent. Similarly, the cut set consisting of inductors L 2, L 3, L 4 and current source I 0 imposes a constraint due to KCL:

$$\displaystyle \begin{aligned} i_{L2}+i_{L3}+i_{L4}&=I_0 \end{aligned} $$
(4.16)

Thus only two of three initial conditions i L2, i L3, i L4 can be specified arbitrarily. Hence we conclude that although there are four inductors, only three inductor currents are independent. The maximum number of initial conditions that can be specified is therefore equal to 5 + 3 = 8.

Based on our discussion above, it is clear that a dependency exists whenever it is possible to write a constraint involving only capacitor voltages and voltage sources; therefore, we must subtract one initial condition from the total number of energy-storage elements. Similarly, it is clear that a dependency exists whenever it is possible to write a constraint involving only inductor currents and current sources; therefore, we must likewise subtract one initial condition from the total number of energy-storage elements. The first constraint involving only capacitor voltages and voltage sources occurs if and only if there exists a loop in the network containing only capacitors and independent voltage sources. A dual argument applies to inductors and current sources: a constraint occurs if and only if there exists a cut set in the network containing only inductors and current sources.

Hence, we have the following theorem [4] for the order of complexity:

Theorem 4.1 (Order of Complexity)

Let \(\mathscr {D}\) be a network containing only two-terminal fundamental circuit elements and independent sources. Then the order of complexity m of \(\mathscr {D}\) is given by:

$$\displaystyle \begin{aligned} m&=(b_L+b_C+b_M)-(n_M+n_{\mathrm{CE}}+n_{\mathrm{LM}})-(\hat{n}_{M}+\hat{n}_{\mathrm{LJ}}+\hat{n}_{\mathrm{CM}}) \end{aligned} $$
(4.17)

where:

  1. \(b_L\) is the total number of inductors

  2. \(b_C\) is the total number of capacitors

  3. \(b_M\) is the total number of memristors

  4. \(n_M\) is the number of independent loops containing only memristors

  5. \(n_{\mathrm{CE}}\) is the number of independent loops containing only capacitors and voltage sources

  6. \(n_{\mathrm{LM}}\) is the number of independent loops containing only inductors and memristors

  7. \(\hat {n}_{M}\) is the number of independent cut sets containing only memristors

  8. \(\hat {n}_{\mathrm{LJ}}\) is the number of independent cut sets containing only inductors and current sources

  9. \(\hat {n}_{\mathrm{CM}}\) is the number of independent cut sets containing only capacitors and memristors

Proof

Footnote 2

We have just discussed the order of complexity for \(\mathscr {D}\) without memristors: \(m=(b_L+b_C)-n_{\mathrm{CE}}-\hat {n}_{\mathrm{LJ}}\).

From the definition of a memristor, for a \(\mathscr {D}\) with \(n_M=n_{\mathrm{LM}}=\hat {n}_M=\hat {n}_{\mathrm{CM}}=0\), each memristor introduces a new state variable and we thus have: \(m=(b_L+b_C+b_M)-n_{\mathrm{CE}}-\hat {n}_{\mathrm{LJ}}\).

Observe next that a constraint among state variables occurs whenever the network contains an independent loop consisting of elements corresponding to those specified in the definitions of n M and n LM. This is because the algebraic sum of the flux linkages around any loop is zero (dually, the algebraic sum of the charges flowing into any cut set is zero; recall the equivalence of KCL for nodes and cut sets, Theorem 3.1). We now have: \(m=(b_L+b_C+b_M)-(n_M+n_{\mathrm{CE}}+n_{\mathrm{LM}})-\hat {n}_{\mathrm{LJ}}\).

Finally, by duality, a constraint among state variables again occurs whenever an independent cut set consisting of elements corresponding to those specified in the definition of \(\hat {n}_{M}\) and \(\hat {n}_{\mathrm{CM}}\) is present in the network. We thus have: \(m=(b_L+b_C+b_M)-(n_M+n_{\mathrm{CE}}+n_{\mathrm{LM}})-(\hat {n}_{M}+\hat {n}_{\mathrm{LJ}}+\hat {n}_{\mathrm{CM}})\). □
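The bookkeeping of Eq. (4.17) is easy to mechanize once the element, loop, and cut-set counts are known; extracting those counts from a netlist requires graph algorithms, so this sketch simply transcribes the formula and applies it to the network of Fig. 4.3.

```python
def order_of_complexity(b_L, b_C, b_M=0,
                        n_M=0, n_CE=0, n_LM=0,
                        n_M_hat=0, n_LJ_hat=0, n_CM_hat=0):
    """Eq. (4.17): m = (b_L + b_C + b_M)
                       - (n_M + n_CE + n_LM)
                       - (n_M_hat + n_LJ_hat + n_CM_hat)."""
    return ((b_L + b_C + b_M)
            - (n_M + n_CE + n_LM)
            - (n_M_hat + n_LJ_hat + n_CM_hat))

# Fig. 4.3: six capacitors, four inductors, one loop of capacitors and a
# voltage source, and one cut set of inductors and a current source.
m = order_of_complexity(b_L=4, b_C=6, n_CE=1, n_LJ_hat=1)   # m = 8, as derived above
```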

4.1.2 Principles of Duality

In light of the enormous solution space of dynamic nonlinear networks, it would be instructive to check if there are techniques that help us reduce this solution space. One such powerful technique is duality (alluded to in earlier chapters), and since duality is particularly useful in the analysis of dynamic networks, we have deferred a rigorous discussion of duality till this chapter.

A significant fact about dual networks is that once we know the solution of one network, the solution of the dual network can be obtained immediately by simply interchanging the symbols. This means that as soon as we know the behavior and properties of one network, we immediately know the behavior and properties of the dual network. Hence a lot of redundancy is avoided if we can recognize dual networks.

Generally speaking, we say two systems or phenomena are duals of each other if we can exhibit some kind of one-to-one correspondence between various quantities or attributes of the two systems. For example, in physics, for each translational system or problem there exists a corresponding rotational system or problem, and they are usually referred to as dual systems. In mathematics, two equations which differ only in symbols but are otherwise identical in form are said to be dual equations. In electrical engineering, besides circuits, duality is widely used in digital design because of dual Boolean relationships. The recognition of dual quantities, attributes, phenomena, properties, or concepts often leads to the discovery and invention of new ideas.

Before we render the concept of duality more precise, it is instructive to consider first the two nonlinear networks shown in Fig. 4.4a and b . The laws of elements and the laws of interconnection for these two networks are readily obtained and tabulated in Table 4.1. A careful comparison of the expressions in the two columns of this table reveals a one-to-one correspondence between the equations. As a matter of fact, except for the symbols, the equations in the two columns are in identical form. Observe that, had we replaced v j by \(i_j^{\prime }\), i j by \(v_j^{\prime }\), ϕ j by \(q_j^{\prime }\), q j by \(\phi _j^{\prime }\) for the variables in the left column, the result would be identical with that in the right column, and therefore the two networks are said to be dual networks. We are now ready to precisely define the concept of duality .

Fig. 4.4

The dual of a series nonlinear network is a parallel nonlinear network

Table 4.1 Circuit equations for the networks in Fig. 4.4

Definition 4.5 (Duality)

Let \(\mathscr {D}\) and \(\mathscr {D}'\) be a pair of networks, each containing b two-terminal network elements which are not controlled sources. Then \(\mathscr {D}\) and \(\mathscr {D}'\) are dual networks if the elements in \(\mathscr {D}\) and \(\mathscr {D}'\) can be labeled, respectively, as b 1, b 2, ⋯ , b b and \(b_1^{\prime }, b_2^{\prime }, \cdots , b_b^{\prime }\) such that the circuit equations for the two networks are identical.

A few points to note from Definition 4.5:

  1. It is possible to generalize the definition of dual networks to include controlled sources. However, the procedure for constructing such networks is much more complicated and will not be discussed in this book.

  2. We have defined duality for dynamic networks, \(\mathscr {D}\), but it should be obvious that the definition is also applicable to (nonlinear) resistive networks \(\mathscr {N}\).

  3. In order to find \(\mathscr {D}'\), we need to uncover the duality relationships that must be satisfied by the laws of elements and the laws of interconnections. Due to space limitations, we will only cover the laws of elements. With respect to duality and the laws of interconnections, we will restrict our discussion to memristive networks. For a general graph theoretic approach to duality relationships from the laws of interconnections, the reader is referred to [3].

Definition 4.6 (Dual Resistor)

If element b j is a two-terminal resistor in \(\mathscr {D}\) characterized by a curve Γ in the v − i plane, then the corresponding dual element \(b_j^{\prime }\) in \(\mathscr {D}'\) must also be a two-terminal resistor characterized by the same curve Γ in the i′− v′ plane.

For example, if element b j of \(\mathscr {D}\) is a resistor characterized by \(i_j=v_j^3-3v_j\), then the dual resistor in \(\mathscr {D}'\) is a resistor characterized by \(v_j^{\prime }=i_j^{\prime 3}-3i_j^{\prime }\). Observe that the dual of a given resistor is a new resistor, which may need a new name and a new symbol. However, there are some two-terminal elements which have the interesting property that the dual of the element is the same element with its two terminals interchanged. For such elements, a new symbol is obviously not needed. The simplest example of this type of element is the ideal diode.

Definition 4.7 (Dual Inductor)

If element b j in \(\mathscr {D}\) is a two-terminal inductor characterized by a curve Γ in the i − ϕ plane, then the corresponding dual element \(b_j^{\prime }\) in \(\mathscr {D}'\) must be a capacitor characterized by the same curve Γ in the v′− q′ plane.

For example, the dual of an inductor characterized by \(\phi =\log i\) is a capacitor characterized by \(q'=\log v'\).

Definition 4.8 (Dual Capacitor)

If element b j in \(\mathscr {D}\) is a two-terminal capacitor characterized by a curve Γ in the v − q plane, then the corresponding dual element \(b_j^{\prime }\) in \(\mathscr {D}'\) must be an inductor characterized by the same curve Γ in the i′− ϕ′ plane.

For example, the dual of a capacitor characterized by \(q=\tanh v\) is an inductor characterized by \(\phi '=\tanh i'\).

Definition 4.9 (Dual Ideal Memristor)

If element b j in \(\mathscr {D}\) is a two-terminal ideal memristor characterized by a curve Γ in the ϕ − q plane, the corresponding dual element \(b_j^{\prime }\) in \(\mathscr {D}'\) must be an ideal memductor characterized by the same curve Γ in the q′− ϕ′ plane.

Note that, mutatis mutandis, we can define the dual of an ideal memductor.

Definition 4.10 (Dual Memristive Device)

If element b j in \(\mathscr {D}\) is a two-terminal current-controlled (voltage-controlled) memristive device, the corresponding dual element \(b_j^{\prime }\) in \(\mathscr {D}'\) must be a voltage-controlled (current-controlled) memristive device.
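Definitions 4.6–4.10 amount to a table lookup: each element keeps its defining curve Γ, which is simply read in the dual plane. A minimal sketch of that mapping follows; the string labels are our own shorthand, not notation from the text.

```python
# Element-duality rules of Definitions 4.6-4.10: the characteristic curve
# is preserved, only the plane in which it is read is swapped.
DUAL = {
    "resistor (v-i plane)":    "resistor (i'-v' plane)",     # Def. 4.6
    "inductor (i-phi plane)":  "capacitor (v'-q' plane)",    # Def. 4.7
    "capacitor (v-q plane)":   "inductor (i'-phi' plane)",   # Def. 4.8
    "memristor (phi-q plane)": "memductor (q'-phi' plane)",  # Def. 4.9
    "current-controlled memristive device":
        "voltage-controlled memristive device",              # Def. 4.10
}

def dual_element(element):
    # duality is an involution: applying it twice returns the original element
    inverse = {v: k for k, v in DUAL.items()}
    return DUAL.get(element) or inverse.get(element)
```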

Example 4.1.3

Determine the dual of the memristive circuit in Fig. 4.5.

Fig. 4.5

Circuit for Example 4.1.3

Solution

Based on Definitions 4.8, 4.7, and 4.10, the dual of the circuit is shown in Fig. 4.6.

Fig. 4.6

Dual network \(\mathscr {D}'\) for the circuit in Fig. 4.5

In other words, the dual of a linear capacitor with capacitance N F (q − v relationship: q = Nv) is a linear inductor with inductance N H (ϕ′− i′ relationship: ϕ′ = Ni′). Analogously, the dual of a linear inductor with inductance K H (ϕ − i relationship: ϕ = Ki) is a linear capacitor with capacitance K F (q′− v′ relationship: q′ = Kv′).

For the memristive device in \(\mathscr {D}\), we have (recall Eq. (1.86)):

$$\displaystyle \begin{aligned} v&=R(\mathbf{x},i)i \\ \dot{\mathbf{x}}&=f(\mathbf{x},i) \end{aligned} $$
(4.18)

Hence the dual voltage-controlled equations are:

$$\displaystyle \begin{aligned} i'&=G(\mathbf{x}',v')v' \\ \frac{d\mathbf{x}'}{dt}&=f(\mathbf{x}',v')\end{aligned} $$
(4.19)

Since we have a series network for \(\mathscr {D}\), simple application of KVL and the element laws gives:

$$\displaystyle \begin{aligned} \frac{dv_N}{dt}&=\frac{i}{N} \\ \frac{di}{dt}&=\frac{1}{K}\left(v_N+R(\mathbf{x},i)i\right) \\ \frac{d\mathbf{x}}{dt}&=f(\mathbf{x},i) \end{aligned} $$
(4.20)

Notice we have normal form equations for \(\mathscr {D}\). Using duality, we get:

$$\displaystyle \begin{aligned} \frac{di_N^{\prime}}{dt}&=\frac{v'}{N} \\ \frac{dv'}{dt}&=\frac{1}{K}\left(i_N^{\prime}+G(\mathbf{x}',v')v'\right) \\ \frac{d\mathbf{x}'}{dt}&=f(\mathbf{x}',v') \end{aligned} $$
(4.21)

Table 4.2 summarizes the dual relationships that we have discussed.

Table 4.2 Common dual quantities

On a brief note, the question of existence and uniqueness theorems for dynamic nonlinear networks does not carry much meaning [6], unlike for linear dynamic networks, for two reasons. First, the solution of the normal form Eq. (4.1) can exhibit many qualitatively different behaviors depending only on the choice of the initial state. Second, some steady-state behavior can be extremely complicated (chaos, Chap. 5), precluding the existence of a closed-form solution.

The correct approach, therefore, is to study the qualitative behavior of dynamic nonlinear networks. A variety of techniques exist; within the scope of this book, we will discuss impasse points in Sect. 4.2.1.6. Other advanced concepts can be found in [6].

4.2 Time Domain Analysis of nth-Order Nonlinear Networks

In this section, we will analyze nth-order dynamic nonlinear networks in the time domain. That is, we will write differential equations as functions of time for the dynamic networks in question. We will start with first-order networks because a variety of important results can be easily understood using first-order networks [12].

Since circuit analysis techniques for memristor networks are still a topic of active research, we will postpone discussion of such networks till Sect. 4.4. Hence until then, our circuits will contain only capacitors and inductors as the dynamic element(s).

4.2.1 First-Order Circuits

Circuits made of one capacitor,Footnote 3 resistors, and independent sources are called first-order circuits [8]. Note that “resistor” is understood in the broad sense: it includes controlled sources, gyrators, ideal transformers, etc.

In this section,Footnote 4 we study first-order circuits made of linear time-invariant elements and independent sources. Any such circuit can be redrawn as shown in Fig. 4.7a, where the one-port N is assumed to include all other elements (e.g., independent sources, resistors, controlled sources, gyrators, ideal transformers, etc.). Applying the Thévenin equivalent one-port Theorem 3.6 from Chap. 3, we can, in most instances, replace N by the equivalent circuit shown in Fig. 4.7b.

Fig. 4.7

(a) First-order RC circuit. (b) Thévenin equivalent

Applying KVL we obtain

$$\displaystyle \begin{aligned} R_{\mathrm{eq}}i_C+v_C&=v_{\mathrm{oc}}(t) \end{aligned} $$
(4.22)

Substituting \(i_C=C\overset {\bullet }{v}_C\) and solving for \(\overset {\bullet }{v}_C\), we obtain:

$$\displaystyle \begin{aligned} \overset{\bullet}{v}_C&=-\frac{v_C}{R_{\mathrm{eq}}C}+\frac{v_{\mathrm{oc}}(t)}{R_{\mathrm{eq}}C} {} \end{aligned} $$
(4.23)

Since the first-order linear differential equation above is in normal form, v C(t) is the state variable. Recall from our discussion of initial conditions in Sect. 4.1.1 that v C(t) depends only on the initial condition v C(t 0) and the waveform v oc(⋅) over [t 0, t].

In Sect. 4.2.1.1 we show that the solution of any first-order linear circuit can be found by inspection, provided N contains only DC sources. By repeated application of this “inspection method,” Sect. 4.2.1.2 shows how the solution can be easily found if N contains only piecewise-constant sources. This method is then applied in Sect. 4.2.1.3 for finding the solution—called the impulse response —when the circuit is driven by an impulse δ(t). Finally, Sect. 4.2.1.4 gives an explicit integration formula for finding solutions under arbitrary excitations, which is then applied in Sects. 4.2.1.5 and 4.2.1.6.

4.2.1.1 Circuits Driven by DC Sources

When N contains only DC sources, v oc(t) = v oc is a constant in Fig. 4.7b and in Eq. (4.23). Let us rewrite the equation as follows:

$$\displaystyle \begin{aligned} \overset{\bullet}{x}&=-\frac{x}{\tau}+\frac{x(t_\infty)}{\tau} {} \end{aligned} $$
(4.24)

where

$$\displaystyle \begin{aligned} x&\overset{\triangle}=v_C \\ x(t_\infty)&\overset{\triangle}= v_{\mathrm{oc}} \\ \tau&\overset{\triangle}=R_{\mathrm{eq}}C \end{aligned} $$
(4.25)

Given any initial condition x = x(t 0) at t = t 0, Eq. (4.24) has the unique solution:

$$\displaystyle \begin{aligned} x(t)=x(t_\infty)+[x(t_0)-x(t_\infty)]e^{\frac{-(t-t_0)}{\tau}} {} \end{aligned} $$
(4.26)

which holds for all times t, i.e., −∞ < t < ∞. To verify that this is indeed the solution, simply substitute Eq. (4.26) into Eq. (4.24) and show that both sides are identical. Observe that at t = t 0, Eq. (4.26) reduces to x(t) = x(t 0), which makes physical sense. Note also that the solution given by Eq. (4.26) is valid whether τ is positive or negative.

The solution in Eq. (4.26) is determined by only three parameters: x(t 0), x(t ∞), and τ. We call them the initial state , equilibrium state, and time constant, respectively. To see why x(t ∞) is called the equilibrium state, note that if x(t 0) = x(t ∞), then Eq. (4.24) gives \(\overset {\bullet }{x}(t_0)=0\) and thus x(t) = x(t ∞) for all t. Hence the circuit remains “motionless,” or in equilibrium .

Since the inspection method to be developed in this section depends crucially on the ability to sketch the exponential waveform quickly, the following properties are extremely useful. These properties depend on whether τ is positive or negative. For τ > 0, the exponential waveform in Eq. (4.26) tends to a constant as t → ∞. For τ < 0, it tends to ±∞ as t → ∞. Hence it is convenient to consider the two cases separately.

Case 1: τ > 0

In this case Eq. (4.26) shows that |x(t) − x(t ∞)|, i.e., the distance between the present state and the equilibrium state x(t ∞), decreases exponentially with time constant τ. Hence, for all initial states, the solution x(t) approaches equilibrium. The solution in Eq. (4.26) for τ > 0 is sketched in Fig. 4.8 for two different initial states x(t 0) and \(\tilde {x}(t_0)\). Observe that because τ is positive, x(t) → x(t ∞) as t → ∞.

Fig. 4.8
figure 8

The solution tends to the equilibrium state x(t ∞) as t → ∞ when the time constant τ is positive. \(\varDelta x_1=0.63[x(t_0)-x(t_\infty )], \varDelta x_2=0.63[x(t_\infty )-\tilde {x}(t_0)]\)

Thus when τ > 0 we say the system in Eq. (4.24) is stable ,Footnote 5 because any initial deviation x(t 0) − x(t ∞) decays exponentially and x(t) → x(t ∞) as t → ∞.

The exponential waveforms in Fig. 4.8 can be accurately sketched using the following observations:

  1. 1.

    After one time constant τ, the distance between x(t) and x(t ∞) decreases by approximately 63% of the initial distance |x(t 0) − x(t ∞)|.

  2. 2.

    After five time constants, x(t) practically attains the equilibrium state (or steady-state ) value x(t ∞) (e^{−5} ≈ 0.007).
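These two sketching rules follow directly from Eq. (4.26) and can be checked numerically. Below is a minimal Python sketch; the helper name `x_exp` and the parameter values are ours, chosen only for illustration:

```python
import math

def x_exp(t, t0, x0, xinf, tau):
    """First-order solution, Eq. (4.26):
    x(t) = x(t_inf) + [x(t0) - x(t_inf)] * exp(-(t - t0)/tau)."""
    return xinf + (x0 - xinf) * math.exp(-(t - t0) / tau)

x0, xinf, tau = 0.0, 10.0, 2.0
d0 = abs(x0 - xinf)                                  # initial distance
d1 = abs(x_exp(tau, 0.0, x0, xinf, tau) - xinf)      # after one tau
d5 = abs(x_exp(5 * tau, 0.0, x0, xinf, tau) - xinf)  # after five tau

print(round(1 - d1 / d0, 2))   # -> 0.63: a ~63% decrease after one tau
print(round(d5 / d0, 4))       # -> 0.0067: essentially at equilibrium
```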

Example 4.2.1

Recall the opamp voltage follower from Example 2.5.3, but now we have a switch closing at t = 0 as shown in Fig. 4.9. Sketch v o(t) for t ≥ 0.

Fig. 4.9
figure 9

Circuit for Example 4.2.1

Solution

The switch shown models the fact that in practice, the output is observed to reach the 10 V solution after a small but finite time. In order to predict this transient behavior before equilibrium is reached, we will use the finite gain opamp model from Exercise 2.5, augmented with a capacitor, to obtain the dynamic model shown in Fig. 4.10a.

Fig. 4.10
figure 10

(a) Dynamic opamp model (b) Thévenin equivalent, notice R eq is positive

To analyze this first-order circuit, we extract the capacitor and replace the remaining circuit by its Thévenin equivalent as shown in Fig. 4.10b, where:

$$\displaystyle \begin{aligned} R_{\mathrm{eq}}&=\frac{R}{A+1}\approx\frac{R}{A}\;\;\text{since }A >> 1 \\ v_{\mathrm{oc}}&=\frac{10A}{A+1}\approx 10\;\;\text{since }A >> 1 \end{aligned} $$
(4.27)

Assuming A = 10^5, R = 100  Ω, C = 3 F, we obtain R eq ≈ 10^{−3}  Ω and v oc ≈ 10 V. Consequently, the time constant and equilibrium state are given, respectively, by τ = R eq C = 3 ms and v o(t ∞) ≈ 10 V. Assuming the capacitor is initially uncharged, the resulting output voltage can be easily sketched as shown in Fig. 4.11. Note that after five time constants or 15 ms, the output is practically equal to 10 V.

Fig. 4.11
figure 11

Exponential voltage waveform for Example 4.2.1
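The numbers in Example 4.2.1 can be reproduced with a short Python sketch using the exact (unapproximated) expressions of Eq. (4.27); the variable names are ours:

```python
import math

# Parameters from Example 4.2.1: opamp gain A, R in ohms, C in farads.
A, R, C = 1e5, 100.0, 3.0
Req = R / (A + 1)            # exact Thevenin resistance, Eq. (4.27)
voc = 10.0 * A / (A + 1)     # exact open-circuit voltage, Eq. (4.27)
tau = Req * C                # time constant

print(round(tau * 1e3, 3))   # -> 3.0 (milliseconds), i.e. tau = 3 ms

# Output for an initially uncharged capacitor, from Eq. (4.26):
vo = lambda t: voc * (1.0 - math.exp(-t / tau))
# After five time constants the output is within ~0.7% of 10 V.
```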

Case 2: τ < 0

In this case Eq. (4.26) shows that the quantity x(t) − x(t ∞) increases exponentially for all initial states, i.e., the solution x(t) diverges from equilibrium and hence the corresponding system is unstable . The solution of Eq. (4.26) is sketched in Fig. 4.12 for two different initial states x(t 0) and \(\tilde {x}(t_0)\). Observe that since the time constant τ is negative, as t → ∞, x(t) → +∞ if x(t 0) > x(t ∞) and x(t) → −∞ if x(t 0) < x(t ∞).

Fig. 4.12
figure 12

The solution tends to the “virtual” equilibrium state x(t ∞) as t → −∞ when the time constant τ is negative. \(\varDelta x_1=1.72[x(t_0)-x(t_\infty )], \varDelta x_2=1.72[x(t_\infty )-\tilde {x}(t_0)]\)

However, if we run time “backward,” then x(t) → x(t ∞) as t → −∞. Consequently, x(t ∞) can be interpreted as a virtual equilibrium state .

Analogous to the stable case, the exponential waveform can be accurately sketched using the observation that at t = t 0 + |τ|, the distance |x(t 0 + |τ|) − x(t ∞)| is approximately 1.72 times the initial distance |x(t 0) − x(t ∞)|.

Example 4.2.2

Consider the positive feedback opamp circuit shown in Fig. 4.13. Determine v o(t) for t ≥ 0.

Fig. 4.13
figure 13

Circuit for Example 4.2.2

Solution

The opamp circuit in Fig. 4.13 is identical to that of Fig. 4.9 except for an interchange between the inverting (−) and noninverting (+) terminals. Using the ideal opamp model in the linear region, we would obtain exactly the same answer as before, namely v o = 10 V for t ≥ 0, provided E sat > 10 V.

But, let us see what happens if the opamp is replaced by the dynamic model adopted earlier, as shown in Fig. 4.14a. Note now the polarity of v d is reversed. The parameters in the Thévenin equivalent circuit now become:

$$\displaystyle \begin{aligned} R_{\mathrm{eq}}&=-\frac{R}{A-1}\approx-\frac{R}{A}\;\;\text{since }A >> 1 \\ v_{\mathrm{oc}}&=\frac{10A}{A-1}\approx 10\;\;\text{since } A >> 1 \end{aligned} $$
(4.28)

Notice R eq is now negative. Assuming the same parameter values as in Example 4.2.1, we obtain R eq = −10^{−3}  Ω and v oc ≈ 10 V. Consequently, the time constant and virtual equilibrium state are now given by τ ≈ −3 ms and v o(t ∞) ≈ 10 V, respectively. Hence the solution drastically differs from that of Example 4.2.1:

$$\displaystyle \begin{aligned} v_o(t)&=10\left(1-e^{\frac{t}{3\,\text{ms}}}\right) \end{aligned} $$
(4.29)

Thus v o(t) → −∞ as t → ∞. Of course, in practice, when v o(t) decreases to − E sat, the opamp saturates and the solution would remain constant at − E sat. The sketch of v o(t) is trivial and is left as an exercise for the reader.

Fig. 4.14
figure 14

(a) Dynamic opamp model (b) Thévenin equivalent, notice R eq is negative
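The divergence and subsequent saturation described above can be illustrated with a short Python sketch of Eq. (4.29). The saturation level `Esat` below is an assumed value (the text only requires E sat > 10 V), used purely to show the clipping:

```python
import math

# Assumed saturation voltage; not specified in the text.
Esat = 13.0

def vo_unclipped(t):
    """Eq. (4.29): v_o(t) = 10 (1 - e^{t / 3 ms}); diverges since tau < 0."""
    return 10.0 * (1.0 - math.exp(t / 3e-3))

def vo(t):
    """In practice the opamp saturates, so the output is clipped at -Esat."""
    return max(vo_unclipped(t), -Esat)

print(vo_unclipped(0.0))   # -> 0.0 (starts at the initial state)
print(vo(10e-3))           # -> -13.0 (saturated well before t = 10 ms)
```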

Example 4.2.2 shows us why the “middle” segments in the positive feedback circuit (Fig. 2.36) and Schmitt trigger VTCs from Sect. 2.5.3 are physically absent. Parasitic elements such as capacitors cause the opamp circuit model corresponding to the “middle” segment to display unstable behavior. In other words, the R eq seen by the parasitic capacitor turns out to be negative. A detailed analysis is given in [20].

We will often need to calculate the time interval between two prescribed points on an exponential waveform. Given any two points [t j, x(t j)] and [t k, x(t k)] on an exponential waveform (see for example Figs. 4.8 and 4.12), the time it takes to go from x(t j) to x(t k) is given by the elapsed time formula :

$$\displaystyle \begin{aligned} t_k-t_j&=\tau\ln\frac{x(t_j)-x(t_\infty)}{x(t_k)-x(t_\infty)} {} \end{aligned} $$
(4.30)

To derive Eq. (4.30), let t = t j and t = t k in Eq. (4.26), respectively:

$$\displaystyle \begin{aligned} x(t_j)-x(t_\infty)&=[x(t_0)-x(t_\infty)]e^{\frac{-(t_j-t_0)}{\tau}} {} \end{aligned} $$
(4.31)
$$\displaystyle \begin{aligned} x(t_k)-x(t_\infty)&=[x(t_0)-x(t_\infty)]e^{\frac{-(t_k-t_0)}{\tau}} {} \end{aligned} $$
(4.32)

Dividing Eq. (4.31) by (4.32) and taking the logarithm on both sides, we obtain Eq. (4.30). Notice the derivation does not depend on whether τ is positive or negative.
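The elapsed-time formula can be checked numerically against the explicit solution of Eq. (4.26); a minimal Python sketch (helper names and parameter values are ours):

```python
import math

def elapsed_time(xj, xk, xinf, tau):
    """Elapsed-time formula, Eq. (4.30):
    t_k - t_j = tau * ln[(x(t_j) - x_inf) / (x(t_k) - x_inf)]."""
    return tau * math.log((xj - xinf) / (xk - xinf))

# Consistency check: sample the explicit solution of Eq. (4.26) at two
# instants and verify that the formula recovers their separation.
x0, xinf, tau = 0.0, 10.0, 1.5
x = lambda t: xinf + (x0 - xinf) * math.exp(-t / tau)
tj, tk = 0.4, 2.3
dt = elapsed_time(x(tj), x(tk), xinf, tau)
print(round(dt, 6))   # -> 1.9, i.e. exactly t_k - t_j
```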

We are now ready to formally state the inspection method. Consider again the first-order RC circuit from Fig. 4.7a where all independent sources inside N are DC sources. Equation (4.26) gives us the voltage across the capacitor:

$$\displaystyle \begin{aligned} v_C(t)&=v_C(t_\infty)+[v_C(t_0)-v_C(t_\infty)]e^{-\frac{(t-t_0)}{\tau}} {} \end{aligned} $$
(4.33)

Suppose we replace the capacitor with a voltage source defined by Eq. (4.33). Let v jk denote the voltage across any pair of nodes j and k in N. Assume that N contains α independent DC voltage sources V s1, V s2, ⋯ , V sα and β independent DC current sources I s1, I s2, ⋯ , I sβ. Applying the superposition theorem 3.5, we know that the solution v jk(t) is given by an expression of the form:

$$\displaystyle \begin{aligned} v_{jk}(t)&=H_0v_C(t)+\displaystyle\sum_{j=1}^\alpha H_jV_{sj} + \displaystyle\sum_{k=1}^\beta K_kI_{sk} {} \end{aligned} $$
(4.34)

where H 0, H j, and K j are constants (which depend on element values and circuit configuration). Substituting for v C(t) in Eq. (4.34) from (4.33) and rearranging the terms, we obtain:

$$\displaystyle \begin{aligned} v_{jk}(t)&=v_{jk}(t_\infty)+[v_{jk}(t_0)-v_{jk}(t_\infty)]e^{-\frac{(t-t_0)}{\tau}} {} \end{aligned} $$
(4.35)

where

$$\displaystyle \begin{aligned} v_{jk}(t_\infty)&\overset{\triangle}=H_0v_C(t_\infty)+\displaystyle\sum_{j=1}^\alpha H_jV_{sj} + \displaystyle\sum_{k=1}^\beta K_kI_{sk} \end{aligned} $$
(4.36)

and

$$\displaystyle \begin{aligned} v_{jk}(t_0)&\overset{\triangle}=H_0v_C(t_0)+\displaystyle\sum_{j=1}^\alpha H_jV_{sj} + \displaystyle\sum_{k=1}^\beta K_kI_{sk} \end{aligned} $$
(4.37)

Since Eq. (4.35) has the exact same form as Eq. (4.26), and since nodes j and k are arbitrary, we conclude that: the voltage v jk(t) across any pair of nodes in a first-order RC circuit driven by DC sources is an exponential waveform having the same time constant τ as v C(t). By the same reasoning, we can also conclude that the current i j(t) in any branch j of a first-order RC circuit driven by DC sources is an exponential waveform having the same time constant τ as that of v C(t).

The above “exponential solution waveform” property of course assumes that the first-order circuit is not degenerate, i.e., that it is uniquely solvable and 0 < |τ| < ∞. Also note that as we approach equilibrium, i.e., as t → +∞ (if τ > 0) or t → −∞ (if τ < 0), the capacitor current tends to zero. This follows from Figs. 4.8 and 4.12, since \(i_C=C\overset {\bullet }{v}_C\).

Since an exponential waveform is uniquely determined by only three parameters (initial state x(t 0), equilibrium state x(t ) and time constant τ), we can now formally state the inspection method for first-order RC circuits driven by DC sources :

First-order Circuit Inspection Method:

  1. 1.

    Replace the capacitor by a DC voltage source with a terminal voltage equal to v C(t 0). Label the voltage across node-pair j, k as v jk(t 0) and the current i j as i j(t 0). Solve the resulting resistive circuit for v jk(t 0) and i j(t 0). In other words, we are solving for the initial state.

  2. 2.

    Replace the capacitor by an open circuit. Label the voltage across node-pair j, k as v jk(t ) and the current i j as i j(t ). Solve the resulting resistive circuit for v jk(t ) and i j(t ). In other words, we are solving for the equilibrium state.

  3. 3.

    Find the Thévenin equivalent circuit of N, so that the time constant can be computed as τ = R eq C.

The reader should use the above three parameters to make a quick sketch of the exponential waveform, as a sanity check.

4.2.1.2 Circuits Driven by Piecewise-Constant Signals

Consider next the case where the independent sources in N of Fig. 4.7a are piecewise-constant for t > t 0. This means that the semi-infinite time interval t 0 ≤ t < ∞ can be partitioned into subintervals [t j, t j+1), j = 1, 2, ⋯ such that all sources assume a constant value during each subinterval. Hence we can analyze the circuit as a sequence of first-order circuits driven by DC sources, each one analyzed separately by the inspection method. Since the circuit remains unchanged except for the sources, the time constant τ remains unchanged throughout the analysis.

The initial state x(t 0) and equilibrium state x(t ∞) will of course vary from one subinterval to another. Although the inspection method holds in the determination of x(t ∞), one must be careful in calculating the initial value at the beginning of each subinterval, because at least one source changes its value discontinuously at each boundary time t j. In general, \(x(t_j^-)\neq x(t_j^+)\), where the − and +  denote the usual limits of x(t) as t → t j from the left and from the right, respectively. The initial value to be used in the calculation during the subinterval \([t_j,t_{j+1})\) is \(x(t_j^+)\).

Although in general both v jk(t) and i j(t) can jump, the continuity property from Sect. 1.9.3 guarantees that in the usual case where the capacitor current (inductor voltage) waveform is bounded, the capacitor voltage (inductor current) waveform is a continuous function of time and therefore cannot jump. This property is the key to finding the solution by inspection, as Example 4.2.3 illustrates.

Example 4.2.3

Find and sketch v C(t), i C(t) and v R(t) in Fig. 4.15 by inspection, for t ≥ 0. Assume v C(0) = 0 V (capacitor is initially discharged).

Fig. 4.15
figure 15

Circuit for Example 4.2.3

Solution

Since v C(0) = 0 and v s(t) = 0 for t ≤ 0, it follows that i C(t) = 0, v C(t) = 0, v R(t) = 0 for t ≤ 0.

The solution waveforms for t > 0 obviously consist of exponentials with time constant τ = RC. At t = 0+, using the continuity property, we have v C(0+) = v C(0) = 0. Therefore, by KVL, v R(0+) = v s(0+) − v C(0+) = E and, by Ohm’s law, i C(0+) = E∕R. To find the equilibrium state, we open the capacitor and hence find that i C(t ∞) = 0, v C(t ∞) = E, v R(t ∞) = 0. We now have enough information to determine the expressions (t ≥ 0) as:

$$\displaystyle \begin{aligned} v_C(t)&=E\left(1-e^{-\frac{t}{RC}}\right) \\ i_C(t)&=\frac{E}{R}\left(e^{-\frac{t}{RC}}\right) \\ v_R(t)&=E\left(e^{-\frac{t}{RC}}\right)\end{aligned} $$
(4.38)

The waveforms are sketched in Fig. 4.16. Note that i C(t) = Cdv C(t)∕dt and v R(t) + v C(t) = E for t ≥ 0, as they should. Also observe that whereas v R(t) and i C(t) are discontinuous at t = 0, v C(t) is continuous for all t as expected.

Fig. 4.16
figure 16

Exponential waveforms for Example 4.2.3

The circuit in Fig. 4.15 is often used to model the situation where a DC voltage source is suddenly connected across a resistive circuit which normally draws zero input current. The linear capacitor in this case models the small parasitic capacitance between the connecting wires. Without this capacitor, the input voltage would be identical to v s(t). In practice, however, a “transient” is always observed, and the circuit in Fig. 4.15 represents a more realistic situation. In this case, the time constant τ gives a measure of how “fast” the circuit can respond to a step input. Such a measure is of crucial importance in the design of high-speed circuits.

Since the term time constant is meaningful only for first-order circuits, a more general measure of such “response speed” called the rise time is used. The rise time t r is defined as the time it takes the output waveform to rise from 10% to 90% of the steady-state value after application of a step input. For first-order circuits, t r is easily calculated from the elapsed time formula in Eq. (4.30):

$$\displaystyle \begin{aligned} t_r&=\tau\ln\frac{0.1E-E}{0.9E-E} \\ &=\tau\ln 9 \\ &\approx 2.2\tau \end{aligned} $$
(4.39)
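The rise-time result can be cross-checked directly against the step response E(1 − e^{−t∕τ}), without using the elapsed-time formula; a small Python sketch (function name ours):

```python
import math

def rise_time(tau):
    """10%-90% rise time from Eq. (4.39): t_r = tau * ln 9 ~ 2.2 tau."""
    return tau * math.log(9.0)

# Cross-check: solve E(1 - e^{-t/tau}) = 0.1E and 0.9E for t directly.
tau = 1.0
t10 = -tau * math.log(0.9)   # time to reach 10% of steady state
t90 = -tau * math.log(0.1)   # time to reach 90% of steady state
print(round(t90 - t10, 4))        # -> 2.1972
print(round(rise_time(tau), 4))   # -> 2.1972, i.e. tau * ln 9
```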

4.2.1.3 Linear Time-Invariant Circuits Driven by an Impulse

Consider the RC circuit shown in Fig. 4.17. Let the input voltage source v s(t) be a square pulse p Δ(t) of width Δ and height 1∕Δ, as shown in Fig. 4.18a. Assuming zero initial state (i.e., v C(0) = 0), the response voltage v C(t) is shown in Fig. 4.18b. We define:

$$\displaystyle \begin{aligned} h_\varDelta(\varDelta)&\overset{\triangle}=\frac{1-e^{\frac{-\varDelta}{\tau}}}{\varDelta}\overset{\triangle}=\frac{f(\varDelta)}{g(\varDelta)} \end{aligned} $$
(4.40)

The input and response corresponding to \(\varDelta =1,\frac {1}{2},\frac {1}{3}\) are shown in Fig. 4.18c and d, respectively. Note that as Δ → 0, p Δ(t) tends to the unit impulse shown in Fig. 4.18e. The unit impulse or the Dirac delta function Footnote 6 tends to infinity at t = 0 and to zero elsewhere, while the area under the pulse is unity. More precisely, the unit impulse is defined such that the following two properties are satisfied:

$$\displaystyle \begin{aligned} \text{1. } \delta(t)&\overset{\triangle}= \begin{cases} \text{singular} & t=0 \\ 0 & t\neq 0 \end{cases} \end{aligned} $$
(4.41)
$$\displaystyle \begin{aligned} \text{2. } \displaystyle\int\limits_{-\epsilon_1}^{\epsilon_2}\delta(t)dt&=1\;\;\text{for any }\epsilon_1>0,\epsilon_2>0\end{aligned} $$
(4.42)

The unit step function, whose derivative “in the distribution sense”Footnote 7 is δ(t), is defined as:

$$\displaystyle \begin{aligned} u(t)&\overset{\triangle}= \begin{cases} 0 & t < 0 \\ 1 & t \geq 0 \end{cases}\end{aligned} $$
(4.43)
Fig. 4.17
figure 17

Various v s(t) inputs are shown in Fig. 4.18

Fig. 4.18
figure 18

As Δ → 0, the square pulse tends to the unit impulse δ(⋅). The corresponding response tends to the impulse response h

Note that the “peak” value h Δ(Δ) of the response waveform in Fig. 4.18b increases as Δ decreases. To obtain the limiting value of h Δ(Δ) as Δ → 0, we apply L’Hospital’s rule:

$$\displaystyle \begin{aligned} \lim_{\varDelta\rightarrow 0} h_\varDelta(\varDelta)&=\lim_{\varDelta\rightarrow 0}\frac{f'(\varDelta)}{g'(\varDelta)} \\ &=\lim_{\varDelta\rightarrow 0}\frac{(1/\tau)e^{(-\varDelta/\tau)}}{1} \\ &=\frac{1}{\tau} \end{aligned} $$
(4.44)

Hence the response waveforms in Fig. 4.18d tend to the exponential waveform for t ≥ 0 shown in Fig. 4.18f, compactly written using the unit step function defined earlier as:

$$\displaystyle \begin{aligned} h(t)&=\frac{1}{\tau}e^{-t/\tau}u(t) {}\end{aligned} $$
(4.45)

Because h(t) is the response of the circuit when driven by a unit impulse under zero initial conditions, it is called the impulse response. In Sect. 4.3.3, we will show that given the impulse response of any linear time-invariant circuit, we can use it to calculate the response when the circuit is driven by any other input waveform.
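The limiting behavior of the peak value h Δ(Δ) in Eq. (4.44) is easy to observe numerically; a short Python sketch with a normalized time constant (names ours):

```python
import math

tau = 1.0   # normalized time constant

def h_peak(delta):
    """h_Delta(Delta) of Eq. (4.40): peak value of the pulse response."""
    return (1.0 - math.exp(-delta / tau)) / delta

# The peak grows toward 1/tau as the pulse narrows (Eq. (4.44)):
for d in (1.0, 0.5, 0.1, 0.001):
    print(round(h_peak(d), 4))   # 0.6321, 0.7869, 0.9516, 0.9995
```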

4.2.1.4 Circuits Driven by Arbitrary Signals

Let us consider now the general case where the one-port N in Fig. 4.7a contains arbitrary independent sources. This means that the Thévenin equivalent voltage source v oc(t) in Fig. 4.7b can be any function of time, say, a PWL function, a sine wave, etc. Our objective is to derive an explicit solution and draw conclusions from our result.

Consider the RC circuit in Fig. 4.7b whose state equation is:

$$\displaystyle \begin{aligned} \overset{\bullet}{v}_C(t)&=-\frac{v_C(t)}{\tau}+\frac{v_{\mathrm{oc}}(t)}{\tau} {}\end{aligned} $$
(4.46)

where \(\tau \overset {\triangle }=R_{\mathrm{eq}}C\).

Theorem 4.2 (Explicit Solution for First-Order Linear Time-Invariant RC Circuits)

Given any prescribed waveform v oc(t), the solution of Eq.(4.46) corresponding to any initial state v C(t 0) at t = t 0 is given by

$$\displaystyle \begin{aligned} v_C(t)&=\underbrace{v_C(t_0)e^{-\frac{(t-t_0)}{\tau}}}_{\mathit{\text{zero-input response}}} + \underbrace{\displaystyle\int\limits_{t_0}^t\frac{1}{\tau}e^{-\frac{(t-t')}{\tau}}v_{\mathrm{oc}}(t')dt'}_{\mathit{\text{zero-state response}}} {}\end{aligned} $$
(4.47)

for all t ≥ t 0 . Here τ = R eq C.

Proof

  1. (a)

    At t = t 0 , Eq. (4.47) reduces to

    $$\displaystyle \begin{aligned} v_C(t)\Big\rvert_{t=t_0}&=v_C(t_0)\end{aligned} $$
    (4.48)

    Hence Eq. (4.47) has the correct initial condition.

  2. (b)

    To prove that Eq. (4.47) is a solution of Eq. (4.46), differentiate both sides of Eq. (4.47) with respect to t. First, we rewrite Eq. (4.47) as:

    $$\displaystyle \begin{aligned} v_C(t)&=v_C(t_0)e^{-\frac{(t-t_0)}{\tau}} + \frac{1}{\tau}e^{-t/\tau}\displaystyle\int\limits_{t_0}^te^{t'/\tau}v_{\mathrm{oc}}(t')dt' \end{aligned} $$
    (4.49)

    Then upon differentiating with respect to t, we obtain for t > t 0:

    $$\displaystyle \begin{aligned} \overset{\bullet}{v}_C(t)&=-\frac{1}{\tau}v_C(t_0)e^{-\frac{(t-t_0)}{\tau}}+\left(-\frac{1}{\tau^2}e^{\frac{-t}{\tau}}\right)\displaystyle\int\limits_{t_0}^te^{t'/\tau}v_{\mathrm{oc}}(t')dt' \\ &+ \left(\frac{1}{\tau}e^{\frac{-t}{\tau}}\right)\left[e^{\frac{t}{\tau}}v_{\mathrm{oc}}(t)\right] {} \end{aligned} $$
    (4.50)

    where we used the second fundamental theorem of calculus [29]:

    $$\displaystyle \begin{aligned} \frac{d}{dt}\displaystyle\int\limits_a^tf(t')dt'&=f(t) \end{aligned} $$
    (4.51)

    Simplifying Eq. (4.50), we obtain:

    $$\displaystyle \begin{aligned} \overset{\bullet}{v}_C(t)&=-\frac{1}{\tau}v_C(t_0)e^{\frac{-(t-t_0)}{\tau}}-\frac{1}{\tau}\left[\displaystyle\int\limits_{t_0}^t\frac{1}{\tau}e^{\frac{-(t-t')}{\tau}}v_{\mathrm{oc}}(t')dt'\right]+\frac{1}{\tau}v_{\mathrm{oc}}(t) \\ &=-\frac{v_C(t)}{\tau}+\frac{v_{\mathrm{oc}}(t)}{\tau} \end{aligned} $$
    (4.52)

    Hence Eq. (4.47) is a solution of Eq. (4.46).

  3. (c)

    From our basic calculus courses, we know that the differential equation (4.46) has a unique solution. Hence Eq. (4.47) is indeed the solution. □

The solution Eq. (4.47) consists of two terms. The first term is called the zero-input response because when all independent sources in N are set to zero, we have v oc(t) = 0 for all times and v C(t) reduces to the first term only. The second term is called the zero-state response because when the initial state v C(t 0) = 0, v C(t) reduces to the second term only.

Example 4.2.4

Find the solution v C(t) in Fig. 4.15, using Eq. (4.47).

Solution

In this case we have: v C(t 0) = 0, t 0 = 0, τ = RC and v oc(t) = E, t ≥ 0. Substituting these parameters in Eq. (4.47) and simplifying, we get:

$$\displaystyle \begin{aligned} v_C(t)&=E\left(1-e^{\frac{-t}{RC}}\right) \end{aligned} $$
(4.53)

which coincides with the solutions in Example 4.2.3, as it should.

Note that in Eq. (4.47), the total response can be interpreted as the superposition of two terms, one due to the initial condition acting alone (with all independent sources set to zero) and the other due to the input acting alone (with the initial condition set to zero). Also, the equation is valid for both τ > 0 and τ < 0. Consider the case τ > 0. For all values t′ such that t − t′ >> τ, the factor \(e^{\frac {-(t-t')}{\tau }}\) is very small; consequently, the values of v oc(t) for such times contribute almost nothing to the integral in Eq. (4.47). In other words, the stable RC circuit has a fading memory. Inputs that have occurred many time constants ago have practically no effect at the present time. Thus we may say that the time constant τ is a measure of the memory time of the circuit.
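The zero-input/zero-state decomposition of Eq. (4.47) can be evaluated numerically for any input waveform; below is a minimal trapezoidal-rule sketch (the function name and tolerances are ours, and this is an illustration, not a production solver):

```python
import math

def vC(t, t0, vC0, voc, tau, n=20000):
    """Numerically evaluate Eq. (4.47): zero-input response plus the
    zero-state convolution integral (trapezoidal rule), for an arbitrary
    input waveform voc(t)."""
    zi = vC0 * math.exp(-(t - t0) / tau)                   # zero-input term
    dt = (t - t0) / n
    ts = [t0 + k * dt for k in range(n + 1)]
    f = [math.exp(-(t - tp) / tau) * voc(tp) / tau for tp in ts]
    zs = dt * (sum(f) - 0.5 * (f[0] + f[-1]))              # zero-state term
    return zi + zs

# Check against the closed-form step response of Example 4.2.4:
E, RC = 5.0, 2.0
num = vC(3.0, 0.0, 0.0, lambda t: E, RC)
exact = E * (1 - math.exp(-3.0 / RC))
print(abs(num - exact) < 1e-6)   # -> True
```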

4.2.1.5 First-Order Linear Switching Circuits

Suppose now that the one-port N in Fig. 4.7a contains one or more switches, where the state (open or closed) of each switch is specified for all t ≥ t 0. Typically, a switch may be open over several disjoint time intervals, and closed during the remaining times. Although a switch is a time-varying linear resistor, such a linear switching circuit may be analyzed as a sequence of first-order linear time-invariant circuits, each one valid over a time interval where all switches remain in a given state. This class of circuits can therefore be analyzed by the procedures given in the previous sections. The only difference here is that the time constant τ will generally vary whenever a switch changes, as demonstrated in Example 4.2.5.

Example 4.2.5

Determine v o(t), t ≥ 0 for the circuit in Fig. 4.19. Assume that the switch S has been open for a long time prior to t = 0.

Fig. 4.19
figure 19

An RC switching circuit, where S is open during t < 1 s and t ≥ 2 s, and closed during 1 ≤ t < 2

Solution

Given that the switch is closed at t = 1 s and then reopened at t = 2 s, our objective is to first find v C(t) (since voltage across a capacitor should be a continuous function of time) and then find v o(t).

Since we are only interested in v C(t) and v o(t), let us replace the remaining part of the circuit by its Thévenin equivalent circuit. The result is shown in Fig. 4.20a and b, corresponding to the case when S is open or closed, respectively. The corresponding τs are τ 2 = 1 s and τ 1 = 0.9 s, respectively.

Fig. 4.20
figure 20

Equivalent circuits from Fig. 4.19 when (a) switch is open, (b) switch is closed

Since the switch is initially open and the capacitor is initially in equilibrium, it follows from Fig. 4.20a that v C(t) = 6 V and v o(t) = 0 V for t ≤ 1 s. At t = 1+, we change the equivalent circuit to Fig. 4.20b. Since by continuity, v C(1+) = v C(1) = 6 V, we have i C(1+) = (10 − 6) V∕(2 + 1.6) k Ω ≈ 1.11 mA and hence v o(1+) = (1.6 k Ω)(1.11 mA) ≈ 1.78 V. Note that we have obviously used the passive sign convention when computing the currents.

To determine v C(t ∞) and v o(t ∞) for the equivalent circuit in Fig. 4.20b, we replace the capacitor with an open circuit and obtain v C(t ∞) = 10 V and v o(t ∞) = 0 V. The waveforms of v C and v o during [1, 2) are drawn as solid lines in Fig. 4.21a and b, respectively. The dashed portion shows the respective waveform if S had been left closed ∀ t ≥ 1 s.

Fig. 4.21
figure 21

(a) v C(t) and (b) v o(t) plots for Example 4.2.5

Since S is reopened at t = 2 s, we must evaluate the expressions for these two waveforms at t = 2 s to calculate v C(2) ≈ 8.68 V and v o(2) ≈ 0.59 V (we leave the verification of these calculations to the reader). At t = 2+, we return to the equivalent circuit in Fig. 4.20a. Since v C(2) = v C(2+) ≈ 8.68 V, we have i C(2+) = (6 − 8.68)∕(2.4 + 1.6) mA ≈−0.67 mA and v o(2+) = (1.6 k Ω)(−0.67 mA) ≈−1.07 V. Note that v o has a discontinuous jump at t = 2 s.

To determine v C(t ∞) and v o(t ∞) for the circuit in Fig. 4.20a, we again replace the capacitor with an open circuit to obtain v C(t ∞) = 6 V and v o(t ∞) = 0 V. This completes the waveform plots in Fig. 4.21.
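The boundary values quoted at t = 2 s can be verified with Eq. (4.26) applied to the closed-switch segment [1 s, 2 s), where τ 1 = 0.9 s; a short Python check (helper name ours):

```python
import math

def x_exp(t, t0, x0, xinf, tau):
    """First-order solution, Eq. (4.26)."""
    return xinf + (x0 - xinf) * math.exp(-(t - t0) / tau)

# Segment [1 s, 2 s): switch closed, tau1 = 0.9 s, vC heads from 6 V to 10 V.
vC2 = x_exp(2.0, 1.0, 6.0, 10.0, 0.9)
print(round(vC2, 2))   # -> 8.68, as computed in Example 4.2.5

# v_o decays from ~1.78 V toward 0 V with the same time constant.
vo2 = x_exp(2.0, 1.0, 1.78, 0.0, 0.9)
print(round(vo2, 2))   # -> 0.59
```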

4.2.1.6 First-Order PWL Circuits: Dynamic Route, Jump Phenomenon, and Relaxation Oscillations

Consider the first-order circuit in Fig. 4.22 where the nonlinear resistive one-port \(\mathscr {N}_R\) may now contain nonlinear resistors (in addition to linear resistors and DC sources). As before, all resistors and the capacitor are time-invariant. This class of circuits includes many important nonlinear electronic circuits such as multivibrators, relaxation oscillators, etc. In this section, we assume that all nonlinear elements inside \(\mathscr {N}_R\) are PWL so that the one-port is described by a PWL DP characteristic.

Fig. 4.22
figure 22

A PWL RC circuit

Our main problem is to find the solution v C(t) for the RC circuit, subject to any given initial state. Since the corresponding port variables of \(\mathscr {N}_R\), namely [v(t), i(t)], must fall on the DP characteristic of \(\mathscr {N}_R\), the evolution of [v(t), i(t)] can be visualized as the motion of a point on the characteristic starting from a given initial point.

Since the DP characteristic is PWL, the solution [v(t), i(t)] can thus be found by first determining the specific “route” and “direction,” henceforth called the dynamic route , along the characteristic where the motion actually takes place. Once this route is identified, we can apply the “inspection method” developed in Sect. 4.2.1.1 to obtain the solution along each segment separately, as illustrated in Example 4.2.6.

Example 4.2.6

Given the circuit in Fig. 4.22 and the associated DP characteristic for \(\mathscr {N}_R\) in Fig. 4.23, determine v C(t) ∀t ≥ 0. Let v C(0) = 2.5 V.

Fig. 4.23
figure 23

DP characteristic of \(\mathscr {N}_R\), with dynamic route (red) indicated, for Example 4.2.6

Solution

Step 1: Identify the initial point. Since v(t) = v C(t), for all t, initially v(0) = v C(0) = 2.5 V. Hence the initial point on the DP characteristic is P 0, as shown in Fig. 4.23.

Step 2: Determine the dynamic route. The dynamic route starting from P 0 contains two pieces of information: (a) the route traversed and (b) the direction of motion. They are determined from the following information:

Key to dynamic route for RC circuit:

  1. 1.

    The DP characteristic of \(\mathscr {N}_R\).

  2. 2.

    \(\overset {\bullet }{v}(t)=-\frac {i(t)}{C}\).

Since \(\overset {\bullet }{v}=-i/C<0\) whenever i > 0, the voltage v(t) decreases as long as the associated current i(t) is positive. Hence for i(t) > 0, the dynamic route starting at P 0 must always move along the v − i curve toward the left, as indicated by the directed (red) line segments P 0 → P 1 and P 1 → P 2 in Fig. 4.23. The dynamic route for this circuit ends at P 2 because at P 2, i = 0, so \(\overset {\bullet }{v}=0\). Hence the capacitor is in equilibrium at P 2.

Step 3: Obtain the solution for each straight line segment. Replace \(\mathscr {N}_R\) by a sequence of Thévenin equivalent circuits corresponding to each line segment in the dynamic route. Using the method from Sect. 4.2.1.1, find a sequence of solutions v C(t). For this example, the dynamic route P 0 → P 1 → P 2 consists of only two segments. The corresponding equivalent circuits are shown in Fig. 4.24a and b, respectively.

Fig. 4.24
figure 24

(a) Equivalent circuit corresponding to P 0 → P 1 (b) Equivalent circuit corresponding to P 1 → P 2

To obtain v C(t) for segment P 0 → P 1, we calculate τ = −62.5 μs, v C(0) = 2.5 V, and v C(t ∞) = 3.25 V. Since the time constant in this case is negative, the corresponding circuit is unstable and hence the exponential is unbounded. We leave it to the reader to verify that v C(t) for P 0 → P 1 is given by:

$$\displaystyle \begin{aligned} v_C(t)&=3.25-0.75e^{\frac{t}{62.5\,\upmu\text{s}}} \end{aligned} $$
(4.54)

Since v C = 2 V at P 1, we can use the expression above to find the time t ≈ 31.9 μs at which v C(t) = 2 V. Restarting time at this instant, we use v C(0) = 2 V for the following bounded exponential from P 1 → P 2:

$$\displaystyle \begin{aligned} v_C(t)&=2e^{\frac{-t}{100\,\upmu\text{s}}} \end{aligned} $$
(4.55)

A plot of v C(t) is given in Fig. 4.25.

Fig. 4.25
figure 25

v C(t) for Example 4.2.6, unbounded (red) and bounded (blue) exponential functions corresponding to the unstable and stable circuits in Fig. 4.24a,b respectively

After some practice, one can obtain the solution in Fig. 4.25 directly from the dynamic route, i.e., without drawing the Thévenin equivalent. Note that in the RC case, the dynamic route always terminates upon intersecting the v axis (i = 0).
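The two-segment solution of Example 4.2.6 can be checked numerically; a minimal Python sketch using the elapsed-time formula, Eq. (4.30) (variable names ours):

```python
import math

# Segment P0 -> P1 (unstable): tau = -62.5 us, virtual equilibrium 3.25 V.
tau1 = -62.5e-6
vC1 = lambda t: 3.25 + (2.5 - 3.25) * math.exp(-t / tau1)   # Eq. (4.54)

# Elapsed-time formula, Eq. (4.30): time for vC to go from 2.5 V to 2 V.
t1 = tau1 * math.log((2.5 - 3.25) / (2.0 - 3.25))
print(round(t1 * 1e6, 1))   # -> 31.9 (microseconds), matching the text

# Segment P1 -> P2 (stable): tau = 100 us, equilibrium 0 V, with time
# restarted at the instant the route reaches P1.
vC2 = lambda t: 2.0 * math.exp(-t / 100e-6)                 # Eq. (4.55)
```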

We will now discuss a very important application of the dynamic route technique—the opamp relaxation oscillator . Oscillation is one of the most important and exciting phenomena that occurs in physical systems (e.g., electronic watch) and in nature (e.g., planetary motions). In this section, we will focus on a particular type of oscillator, the relaxation oscillator.Footnote 8 Section 4.6.3 will further explore the ideas behind nonlinear oscillators.

Consider the RC opamp circuit shown in Fig. 4.26a. The DP characteristic of the resistive one-port \(\mathscr {N}_R\) was derived in Sect. 2.5.3.2, and is reproduced in Fig. 4.26b for convenience.

Fig. 4.26
figure 26

(a) RC opamp circuit. (b) DP characteristic of \(\mathscr {N}_R\). (c) Solution locus of (v(t), i(t)) for the remodeled circuit. (d) Dynamic route for the limiting case

Consider the four different initial points Q 1, Q 2, Q 3, Q 4 (corresponding to four different initial capacitor voltages at t = 0) on this characteristic. Since \(\overset {\bullet }{v}(t)=\overset {\bullet }{v}_C(t)=-i/C\) and C > 0, we have:

$$\displaystyle \begin{aligned} \overset{\bullet}{v}(t)&>0\;\;\text{for all }t\text{ such that } i(t) < 0 \end{aligned} $$
(4.56)

and

$$\displaystyle \begin{aligned} \overset{\bullet}{v}(t)&<0\;\;\text{for all }t\text{ such that } i(t) > 0 \end{aligned} $$
(4.57)

Hence the dynamic route from any initial point must move toward the left in the upper half plane, and towards the right in the lower half plane, as indicated by the arrowheads in Fig. 4.26b.

Since i ≠ 0 at the two breakpoints Q A and Q B, they are not equilibrium points of the circuit. It follows from Eq. (4.30) that the amount of time T it takes to go from any initial point to Q A or Q B is finite because x(t k) ≠ x(t ∞).

Since the arrowheads from Q 1 and Q 2 (or from Q 3 and Q 4) are oppositely directed, it is impossible to continue drawing the dynamic route beyond Q A or Q B. In other words, an impasse is reached whenever the solution reaches Q A or Q B.

Any circuit which exhibits an impasse is the result of poor modeling. For the circuit of Fig. 4.26a, the impasse can be resolved by inserting a small linear inductor in series with the capacitor; this inductor models the inductance L of the connecting wires. As will be shown in Sect. 4.6.3, the remodeled circuit has a well-defined solution ∀ t ≥ 0, so long as L > 0. A typical solution locus of (v(t), i(t)) corresponding to the initial condition at P 0 is shown in Fig. 4.26c. Our analysis in Sect. 4.6.3 will show that the transition time from P 1 to P 2, or from P 3 to P 4, decreases as L decreases. In the limit L → 0, the solution locus tends to the limiting case shown in Fig. 4.26d with a zero transition time. In other words, in the limit where L decreases to zero, the solution jumps from the impasse point P 1 to P 2, and from the impasse point P 3 to P 4. We have used arrows to emphasize the instantaneous transition.

Both analytical and experimental studies [20] support the existence of a jump phenomenon , such as the one depicted in Fig. 4.26d, whenever a solution reaches an impasse point. This observation allows us to state the following rule which greatly simplifies the solution procedure.

Jump Rule

Let Q be an impasse point of any first-order RC circuit (respectively, RL circuit). Upon reaching Q at t = T, the dynamic route can be continued by jumping (instantaneously) to another point Q′ on the DP characteristic of \(\mathscr {N}_R\) such that \(v_C(T_+) = v_C(T_-)\) [respectively, \(i_L(T_+) = i_L(T_-)\)], provided Q′ is the only point satisfying the continuity property.

Note that the jump rule is also consistent with the continuity property of v C or i L. Also, the concepts of an impasse point and the jump rule are applicable regardless of whether the DP characteristic of \(\mathscr {N}_R\) is PWL or not. A first-order RC circuit has at least one impasse point if \(\mathscr {N}_R\) is described by a continuous nonmonotonic current-controlled DP characteristic. The instantaneous transition in this case consists of a vertical jump in the v − i plane, assuming i is the vertical axis. A dual argument is applicable to a first-order RL circuit. Once the dynamic route is determined, with the help of the jump rule, for all t > t 0, the solution waveforms of v(t) and i(t) can be determined by inspection; refer to Exercise 4.7. This exercise should also enlighten the reader as to why the circuit in Fig. 4.26a is a prototypicalFootnote 9 relaxation oscillator.
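The dynamic route, the impasse points, and the effect of the parasitic inductor can all be observed in a short numerical experiment. The sketch below (forward Euler; the cubic characteristic is an assumed smooth stand-in for the N-shaped PWL curve of Fig. 4.26b, with breakpoints at i = ±1, and the values of L and C are illustrative) integrates the remodeled circuit \(C\overset{\bullet}{v}=-i\), \(L\overset{\bullet}{i}=v-\hat{v}(i)\) and exhibits the resulting relaxation oscillation:

```python
# A minimal forward-Euler sketch of the remodeled circuit of Fig. 4.26a:
# C dv/dt = -i, L di/dt = v - v_hat(i), with a small parasitic inductance L.
# The cubic v_hat below is an assumed smooth stand-in for the N-shaped PWL
# characteristic of Fig. 4.26b; L, C, dt are illustrative values.

def v_hat(i):
    """Current-controlled DP characteristic of N_R (assumed cubic)."""
    return i**3 / 3.0 - i

L, C, dt = 1e-2, 1.0, 1e-4
v, i, t = 0.5, 0.0, 0.0
trace = []
while t < 20.0:
    v, i = v + dt * (-i / C), i + dt * (v - v_hat(i)) / L
    t += dt
    if t > 10.0:                       # keep only the steady oscillation
        trace.append(i)

i_max = max(abs(x) for x in trace)
sign_changes = sum(1 for a, b in zip(trace, trace[1:]) if a * b < 0)
print(i_max, sign_changes)             # large swings in i: relaxation oscillation
```

As L is made smaller, the fast transitions through the zero-current region sharpen toward the instantaneous jumps of Fig. 4.26d.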

4.2.2 General Dynamic Circuits

So far we have analyzed first-order capacitor networks. As we transition from first to second (and higher) order nonlinear circuits, the complexity of steady-state behavior increases tremendously. In fact, third (and higher) order nonlinear continuous-time circuits exhibit the fascinating phenomenon of chaos (to be studied in Chap. 5).

Since it is impossible to cover all the techniques for general dynamic circuits in one section, we will instead present techniques that will help the reader formulate the equations governing such circuits. This is tremendously helpful because:

  1.

    The reader will notice that we will extend the primary techniques from Chap. 3, nodal and tableau analysis, to cover dynamic networks.

  2.

    Formulating the dynamic equations is the first (and probably most important) step in using a computer to simulate the associated network. Due to the complex behavior of third (and higher order) networks, computer simulations play an important role in studying such networks. Hence it is vital that the reader understand how to obtain the associated circuit equations.

4.2.2.1 Modified Nodal Analysis (MNA)

In Sect. 3.4, we studied node analysis for resistive circuits. For any resistive circuit made up of voltage-controlled resistors, we can write the node equations by inspection. MNA is based on node analysis, suitably modified so that it can be used on any dynamic circuit. The goal of MNA is to obtain a set of coupled algebraic and differential equations. Consequently, to specify a linear time-invariant inductor we use the differential equation

$$\displaystyle \begin{aligned} v(t)&=L\frac{di}{dt} \end{aligned} $$
(4.58)

rather than the integral equation

$$\displaystyle \begin{aligned} i(t)&=i(t_0)+\frac{1}{L}\displaystyle\int\limits_{t_0}^tv(t')dt' \end{aligned} $$
(4.59)

The underlying ideas of MNA are:

  1.

    Write node equations using node voltages as variables.

  2.

    Whenever an element is encountered that is not voltage-controlled, introduce in the node equation the corresponding branch current as a new variable and add, as a new equation, the branch equation of that element.

The result is a system of equations where the unknowns are node voltages and some selected branch currents.

The equations of MNA can be written down by inspection. The number of equations is always smaller than that of tableau analysis (Sect. 4.2.2.2). But since MNA equations contain information about the interconnection as well as the nature of the branches, they do not have the conceptual clarity of the tableau equations. Many circuit analysis programs, SPICE in particular, use MNA. As in Chap. 3, we will first use examples to illustrate the ideas and then detail the algorithm.
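Before the examples, here is a minimal sketch of the two MNA ideas on a hypothetical linear divider (the 5 V source and the values R1 = 1 kΩ, R2 = 2 kΩ are illustrative assumptions): the source is not voltage-controlled, so its branch current i_s enters as an extra unknown and its branch equation e1 = Vs as an extra row:

```python
import numpy as np

# MNA for a hypothetical linear circuit: a 5 V source at node 1, R1 = 1 kOhm
# from node 1 to node 2, R2 = 2 kOhm from node 2 to ground.  The source is
# not voltage-controlled, so its branch current i_s is added as an extra
# unknown and its branch equation e1 = Vs is appended as an extra equation.
R1, R2, Vs = 1e3, 2e3, 5.0

# Unknown vector x = [e1, e2, i_s]
A = np.array([
    [ 1/R1, -1/R1, 1.0],          # KCL at node 1 (i_s = source branch current leaving node 1)
    [-1/R1, 1/R1 + 1/R2, 0.0],    # KCL at node 2
    [ 1.0,   0.0,        0.0],    # branch equation of the source: e1 = Vs
])
b = np.array([0.0, 0.0, Vs])
e1, e2, i_s = np.linalg.solve(A, b)
print(e1, e2, i_s)                # e2 is the divider voltage Vs*R2/(R1+R2)
```

The result is exactly the pattern described above: two node equations in the node voltages plus one appended branch equation in the selected current.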

Example 4.2.7

Write MNA equations for the circuit in Fig. 4.27.

Fig. 4.27
figure 27

Circuit for Example 4.2.7

Solution

The circuit shown in Fig. 4.27 includes an independent voltage source, a pair of coupled inductors (with mutual inductance M and self-inductances L 11, L 22), two resistors and a capacitor. We have b = 6 and n = 4. In writing the node equation for node 1, since the independent source is not voltage-controlled, we have inserted the branch current i 6. In considering nodes 2 and 3, we introduce inductor currents i 1 and i 2. We append these three suitably modified node equations with the branch equations of the voltage source and of the two (coupled) inductors. The result is:

(4.60)

MNA gives six equations in the node voltages e 1, e 2, and e 3 and in the selected currents i 1, i 2, and i 6. Eq. (4.60) forms the required set of coupled algebraic and differential equations.

Example 4.2.8 shows that the basic idea of MNA works quite easily for nonlinear circuits.

Example 4.2.8

Write MNA equations for the circuit in Fig. 4.28. For the opamp, we will use the finite-gain model from Exercise 2.5, the nonlinear capacitor is specified by its small-signal capacitance C(⋅), the nonlinear inductor by its small-signal inductance L(⋅), and the current-controlled nonlinear resistor is specified by its characteristic \(\hat {v}_6(\cdot )\).

Fig. 4.28
figure 28

Circuit for Example 4.2.8

Solution

Recall the finite-gain model of an opamp:

$$\displaystyle \begin{aligned} v_o(v_d)&=\frac{A}{2}|v_d+\epsilon|-\frac{A}{2}|v_d-\epsilon| \end{aligned} $$
(4.61)

where v o is the output voltage of the opamp and \(v_d\overset {\triangle }=v_+-v_-\). The MNA equations can be easily written by inspection:

(4.62)

Equation (4.62) constitutes a set of nine coupled algebraic and differential equations in nine unknown functions: the four node voltages e 1(⋅), e 2(⋅), e 3(⋅), e 4(⋅) and the five selected currents i 2(⋅), i 4(⋅), i 5(⋅), i 6(⋅), i 7(⋅). Note that the variable i 4, the opamp output current, appears only in the third node equation. This node equation is thus a recipe for calculating i 4, once e 2, e 3, and i 5 are known. If i 4 is not required, the third node equation can be dropped.

Examples 4.2.7 and 4.2.8 have shown how easy it is to write MNA equations for any circuit; the algorithm is summarized below.

MNA Algorithm:

Data:

  • Circuit diagram with assigned node numbers and assigned current reference directions

  • Branch equation(s) for each element of the circuit

Steps:

  1.

    Choose a ground node, say node n, and draw a connected digraph (this may require hinging some nodes).

  2.

    For k = 1, 2, ⋯ , n − 1, write KCL for node k using the node-to-ground voltages as variables, keeping in mind that (a) if one or more inductors are connected to node k, then the branch current of each such inductor is entered in the node equation and the branch equation of the inductor is appended to the n − 1 node equations; (b) if one or more branches which are not voltage-controlled are connected to node k, then the corresponding branch current is entered in the node equation and the corresponding branch equation is appended to the n − 1 node equations.
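The MNA equations produced by this algorithm are coupled algebraic-differential equations. As an illustration of how a circuit simulator advances them in time, the sketch below discretizes the capacitor branch equation with backward Euler for a hypothetical series RC circuit (the discretization scheme and all element values are assumptions made for illustration; this section itself only formulates the continuous equations):

```python
import math
import numpy as np

# Backward-Euler time stepping of the MNA equations for a hypothetical series
# RC circuit: a 5 V step source at node 1, R from node 1 to node 2, C from
# node 2 to ground.  Backward Euler turns the differential branch equation
# i_C = C de2/dt into an algebraic "companion" stamp: a conductance C/h plus
# a history current (C/h)*e2_prev on the right-hand side.
R, C, Vs, h = 1e3, 1e-6, 5.0, 1e-5

G = np.array([
    [ 1/R, -1/R,       1.0],   # KCL at node 1 (i_s = source branch current)
    [-1/R,  1/R + C/h, 0.0],   # KCL at node 2 with the capacitor stamp
    [ 1.0,  0.0,       0.0],   # source branch equation: e1 = Vs
])
e2, t = 0.0, 0.0
while t < 5e-3:                # five time constants (tau = RC = 1 ms)
    b = np.array([0.0, (C/h) * e2, Vs])
    e1, e2, i_s = np.linalg.solve(G, b)
    t += h

exact = Vs * (1.0 - math.exp(-t / (R * C)))
print(e2, exact)               # numerical vs analytic charging curve
```

Shrinking the step h drives the numerical solution toward the analytic charging curve.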

4.2.2.2 Tableau Analysis

Tableau analysis is the second method for writing dynamic circuit equations. The method essentially mirrors the technique in Sect. 3.5, hence we will simply show a nonlinear example (very similar to Example 4.2.8) and then discuss the general technique.

Example 4.2.9

Write tableau equations for the circuit in Fig. 4.29.

Fig. 4.29
figure 29

Circuit for Example 4.2.9

Solution

We will assume that \(\mathscr {N}_R\) is voltage-controlled. We will assume the same characteristics for \(\mathscr {N}_L\) and the opamp as in Example 4.2.8.

By inspection we can write KCL and KVL for the circuit:

$$\displaystyle \begin{aligned} \mathbf{A}\mathbf{i}(t)&=\mathbf{0} \\ \mathbf{v}(t)-{\mathbf{A}}^T\mathbf{e}(t)&=\mathbf{0} {} \end{aligned} $$
(4.63)

using the suitable reduced incidence matrix A. Using the branch equations from the circuit, we get:

$$\displaystyle \begin{aligned} v_1-R_1i_1&=0 \\ i_2&=0 \\ C\overset{\bullet}{v}_3-i_3&=0 \\ v_4-v_o(-v_2)&=0 \\ L(i_5)\overset{\bullet}{i}_5-v_5&=0 \\ i_6-\hat{i}_6(v_6)&=0 \\ v_7&=e_s(t) {} \end{aligned} $$
(4.64)

In Eq. (4.64), we know the constants R 1, C and the functions \(v_o(\cdot ), L(\cdot ), \hat {i}_6(\cdot )\) and e s(⋅). The unknown functions are e(⋅), v(⋅), and i(⋅). Equations (4.63) and (4.64) are the tableau equations for the given circuit.

In general, the tableau equations for a nonlinear dynamic circuit are:

$$\displaystyle \begin{aligned} \text{KCL:}\quad \mathbf{A}\mathbf{i}(t)&=\mathbf{0} \\ \text{KVL:}\quad \mathbf{v}(t)-{\mathbf{A}}^T\mathbf{e}(t)&=\mathbf{0} \\ \text{Branch eqs.:}\quad \mathbf{h}(\mathbf{\overset{\bullet}{v}}(t),\mathbf{v}(t),\mathbf{\overset{\bullet}{i}}(t),\mathbf{i}(t),t)&=\mathbf{0} \end{aligned} $$
(4.65)

Comparing Eqs. (3.121) and (4.65), we see that for the dynamic case we have derivatives in the branch equation.

For a connected digraph of b branches and n nodes, the tableau equations (4.65) constitute a system of 2b + n − 1 scalar equations in 2b + n − 1 unknown functions e j(⋅), j = 1, 2, ⋯ , n − 1, v k(⋅), k = 1, 2, ⋯ , b and i l(⋅), l = 1, 2, ⋯ , b.
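To make the count 2b + n − 1 concrete, the sketch below assembles and solves the tableau system for a hypothetical resistive divider with b = 3 branches and n = 3 nodes, giving eight equations in eight unknowns (all element values and branch orientations are illustrative assumptions):

```python
import numpy as np

# Tableau equations A i = 0, v - A^T e = 0, plus branch equations, for a
# hypothetical 3-branch divider: branch 1 = 5 V source (node 1 to ground),
# branch 2 = R1 (node 1 to node 2), branch 3 = R2 (node 2 to ground).
R1, R2, Vs = 1e3, 2e3, 5.0
A = np.array([[1.0,  1.0, 0.0],      # reduced incidence matrix, rows = nodes 1, 2
              [0.0, -1.0, 1.0]])

# Unknown vector x = [e1, e2, v1, v2, v3, i1, i2, i3]  (2b + n - 1 = 8)
T = np.zeros((8, 8)); rhs = np.zeros(8)
T[0:2, 5:8] = A                      # KCL:  A i = 0
T[2:5, 2:5] = np.eye(3)              # KVL:  v - A^T e = 0
T[2:5, 0:2] = -A.T
T[5, 2] = 1.0; rhs[5] = Vs           # branch 1: v1 = Vs
T[6, 3] = 1.0; T[6, 6] = -R1         # branch 2: v2 - R1 i2 = 0
T[7, 4] = 1.0; T[7, 7] = -R2         # branch 3: v3 - R2 i3 = 0

x = np.linalg.solve(T, rhs)
e1, e2 = x[0], x[1]
print(e1, e2)                        # e2 = Vs*R2/(R1+R2)
```

Note how the interconnection (the two KCL/KVL blocks built from A) is kept cleanly separate from the nature of the branches (the last three rows), which is precisely the conceptual clarity of the tableau form.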

In the derivation of the tableau equations (4.65) we considered only a nonlinear inductor specified by its small-signal inductance L(i). The dual case would be a nonlinear capacitor specified by its small-signal capacitance C(v).

Suppose, however, we have a capacitor that is charge-controlled (\(v_C=\hat {v}(q)\)) and an inductor that is flux-controlled (\(i_L=\hat {i}(\phi )\)). If we use the chain rule as before, we are stuck because q and ϕ appear as arguments in \(\hat {v}'(q)\) and \(\hat {i}'(\phi )\). The remedy is to use q and ϕ as additional variables and to describe the capacitor by:

$$\displaystyle \begin{aligned} v_C&=\hat{v}(q) \\ \overset{\bullet}{q}&=i_C \end{aligned} $$
(4.66)

and the inductor by:

$$\displaystyle \begin{aligned} i_L&=\hat{i}(\phi) \\ \overset{\bullet}{\phi}&=v_L \end{aligned} $$
(4.67)
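The state-variable description (4.66) is easy to integrate numerically. The sketch below does so for a hypothetical charge-controlled capacitor charged through a resistor by a DC source; the characteristic \(\hat{v}(q)\) and all element values are illustrative assumptions:

```python
# State-variable form (4.66) for a charge-controlled capacitor: v_C = v_hat(q),
# q_dot = i_C.  Here the capacitor is charged through a resistor R by a DC
# source E, so i_C = (E - v_hat(q))/R.  The mildly nonlinear characteristic
# v_hat and all element values are illustrative assumptions.
C0, a = 1e-6, 1e17

def v_hat(q):
    return q / C0 + a * q**3

E, R, dt = 1.0, 1e3, 1e-6
q, t = 0.0, 0.0
while t < 10e-3:                     # about 10 linearized time constants
    q += dt * (E - v_hat(q)) / R     # forward Euler on q_dot = i_C
    t += dt
print(q, v_hat(q))                   # settles where v_hat(q*) = E
```

Using q as the state variable avoids ever having to invert \(\hat{v}(\cdot)\), which is exactly the point of the remedy above.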

4.2.2.3 Small Signal Analysis Revisited

We have already encountered the concept of small-signal analysis with respect to \(\mathscr {N}_R\) in Sect. 3.1.1. We will see in this section that the method of small-signal analysis, when applied to dynamic circuitsFootnote 10 reduces the analysis of a nonlinear dynamic circuit to that of a nonlinear resistive circuit (to find the operating point), followed by that of a linear dynamic circuit (for the small deviations about it). The goal of this section is to state and justify the algorithm which delivers the small-signal equivalent circuit of any nonlinear time-invariant dynamic circuit about a fixed operating point.

In order to avoid complicated notations, this section studies the circuit shown in Fig. 4.30. We have chosen this circuit so that it includes most of the analyses required for obtaining a small-signal equivalent circuit. The aim of small-signal analysis is to take advantage of the fact that e s(⋅) is small (in the sense that, for all t ≥ 0, the values of |e s(t)| are small enough that higher-order terms of any nonlinear expansion are negligible). The circuit \(\mathscr {D}\) includes a linear resistor R, a linear capacitor C, a linear inductor L, a nonlinear VCCS specified by its characteristic f 0(⋅), a nonlinear current-controlled inductor specified by \(\hat {\phi }_6(\cdot )\), a nonlinear voltage-controlled capacitor specified by \(\hat {q}_7(\cdot )\), and a nonlinear voltage-controlled resistor specified by \(\hat {i}_2(\cdot )\).

Fig. 4.30
figure 30

Nonlinear time-invariant circuit \(\mathscr {D}\) driven by the DC source E s and the AC source e s(⋅)

The tableau equations of \(\mathscr {D}\) can be written as:

$$\displaystyle \begin{aligned} \text{KCL:}\quad \mathbf{A}\mathbf{i}(t)&=\mathbf{0} \\ \text{KVL:}\quad \mathbf{v}(t)-{\mathbf{A}}^T\mathbf{e}(t)&=\mathbf{0} \\ \text{Branch eqs.:}\quad \mathbf{f}(\mathbf{\overset{\bullet}{v}}(t),\mathbf{v}(t),\mathbf{\overset{\bullet}{i}}(t),\mathbf{i}(t))&={\mathbf{u}}_s(t) \end{aligned} $$
(4.68)

Notice that Eq. (4.68) is slightly different from Eq. (4.65): it emphasizes the fact that f does not depend explicitly on time. The column vector u s(t) keeps track of the contribution of the independent sources: E s + e s(t).

In order to derive approximate equations representing \(\mathscr {D}\) we proceed in three steps:

Step 1. Calculate the DC Operating Point Q, i.e., E Q, V Q, I Q

Set the AC source e s(⋅) to zero, turn on the DC source, and call E Q, V Q, I Q the resulting DC steady-state. The corresponding tableau equations read:

$$\displaystyle \begin{aligned} \text{KCL:}\quad \mathbf{A}\mathbf{I_Q} &=\mathbf{0} \\ \text{KVL:}\quad \mathbf{V_Q}-{\mathbf{A}}^T\mathbf{E_Q}&=\mathbf{0} \\ \text{Branch eqs.:}\quad \mathbf{f}(\mathbf{0},\mathbf{V_Q},\mathbf{0},\mathbf{I_Q})&={\mathbf{U}}_s \end{aligned} $$
(4.69)

where U s denotes the contribution of the DC source E s. Since V Q and I Q are constant vectors, \(\overset {\bullet }{\mathbf {V}}_Q=\mathbf {0}\) and \(\overset {\bullet }{\mathbf {I}}_Q=\mathbf {0}\). For this particular circuit, the branch equations read:

$$\displaystyle \begin{aligned} V_1&=E_s \\ \hat{i}_2(V_2)-I_2&=0 \\ -f_0(V_2)+I_3&=0 \\ V_4&=0\;\;(\text{because }\frac{dI_4}{dt}=0) \\ I_5&=0\;\;(\text{because }\frac{dV_5}{dt}=0) \\ V_6&=0\;\;(\text{because }\frac{dI_6}{dt}=0) \\ I_7&=0\;\;(\text{because }\frac{dV_7}{dt}=0) \\ V_8-RI_8&=0 {} \end{aligned} $$
(4.70)

From Eq. (4.70), we see that to calculate the DC operating point, (a) we replace each inductor by a short circuit, (b) we replace each capacitor by an open circuit, and (c) we solve the resulting nonlinear resistive circuit, shown in Fig. 4.31. In the next step, we assume that E Q, V Q, I Q are known.Footnote 11

Fig. 4.31
figure 31

Nonlinear resistive circuit whose solution V 1, V 2, ⋯ , I 1, I 2, ⋯ specifies the operating point Q. Note the inductors have been replaced by short circuits and the capacitors by open circuits
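In practice the nonlinear resistive circuit of Step 1 is solved numerically. The sketch below applies Newton's method to a hypothetical one-unknown example, a DC source in series with a resistor driving an exponential diode (the diode model and all element values are illustrative assumptions, not taken from Fig. 4.31):

```python
import math

# Step 1 in code: Newton's method for the DC operating point of a
# hypothetical example, a DC source E in series with R driving a diode
# i = Is*(exp(v/VT) - 1).  With capacitors opened and inductors shorted,
# only the algebraic equation
#     f(v) = (E - v)/R - Is*(exp(v/VT) - 1) = 0
# remains to be solved for the operating-point voltage V_Q.
E, R, Is, VT = 5.0, 1e3, 1e-12, 0.025

def f(v):
    return (E - v) / R - Is * (math.exp(v / VT) - 1.0)

def fprime(v):
    return -1.0 / R - (Is / VT) * math.exp(v / VT)

v = 0.5                      # initial guess near a typical diode drop
for _ in range(50):
    v -= f(v) / fprime(v)    # Newton update

I = (E - v) / R
print(v, I)                  # the operating point Q = (V_Q, I_Q)
```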

Step 2. Change of Variables

The idea is to use the fact that the AC source is small, and consequentlyFootnote 12 the actual node voltages e(t) will be close to E Q, v(t) will be close to V Q and i(t) will be close to I Q. So we write:

$$\displaystyle \begin{aligned} \mathbf{e}(t)&=\mathbf{E_Q}+\tilde{\mathbf{e}}(t) \\ \mathbf{v}(t)&=\mathbf{V_Q}+\tilde{\mathbf{v}}(t) \\ \mathbf{i}(t)&=\mathbf{I_Q}+\tilde{\mathbf{i}}(t) \end{aligned} $$
(4.71)

The point is that \(\tilde {\mathbf {e}}(t), \tilde {\mathbf {v}}(t), \tilde {\mathbf {i}}(t)\) are small deviations from the operating point E Q, V Q, I Q, respectively. If we substitute the expressions for e, v, i from Eq. (4.71) into the KCL Eq. (4.68) and the KVL Eq. (4.68) while taking into account the corresponding tableau KCL, KVL Eq. (4.69) about the DC operating point, we obtain:

$$\displaystyle \begin{aligned} \mathbf{A}\tilde{\mathbf{i}}(t)&=0 \\ \tilde{\mathbf{v}}(t)-{\mathbf{A}}^T\tilde{\mathbf{e}}(t)&=0 {} \end{aligned} $$
(4.72)

Note that the equations above are exact; no approximation is involved.

We could perform the same substitution in the branch Eq. (4.68) and use Eq. (4.69) to obtain:

$$\displaystyle \begin{aligned} \mathbf{f}\left(\overset{\bullet}{\tilde{\mathbf{v}}}(t),\mathbf{V_Q}+\tilde{\mathbf{v}}(t),\overset{\bullet}{\tilde{\mathbf{i}}}(t),\mathbf{I_Q}+\tilde{\mathbf{i}}(t)\right)-\mathbf{f}(\mathbf{0},\mathbf{V_Q},\mathbf{0},\mathbf{I_Q})&={\mathbf{u}}_s(t)-{\mathbf{U}}_s \end{aligned} $$
(4.73)

However it is more instructive to proceed by considering one branch at a time, because Eq. (4.73) is still a nonlinear equation and we would like to linearize it by using Taylor series (since \(\tilde {\mathbf {e}}(t), \tilde {\mathbf {v}}(t), \tilde {\mathbf {i}}(t)\) are small).

Step 3. Obtain Approximate Branch Equations

We consider successively resistors, controlled sources, capacitors, and independent sources. Since inductors are the dual of capacitors, the corresponding derivation is trivial and is left as an exercise for the reader.

The final result will be obtained by using a Taylor series expansion and dropping the higher-order terms. The result is a set of approximate linear time-invariant equations relating \(\tilde {\mathbf {v}}(t), \tilde {\mathbf {i}}(t)\) and the AC source. The linear small-signal circuit corresponding to \(\mathscr {D}\) is shown in Fig. 4.32.

Fig. 4.32
figure 32

The small-signal linear time-invariant circuit of \(\mathscr {D}\) about the operating point (V Q, I Q)

For the nonlinear resistor, we have:

$$\displaystyle \begin{aligned} i_2(t)&=\hat{i}_2(v_2(t)) \end{aligned} $$
(4.74)

Substituting for i 2(t) and v 2(t), we get:

$$\displaystyle \begin{aligned} I_2+\tilde{i}_2(t)&=\hat{i}_2(V_2+\tilde{v}_2(t)) \end{aligned} $$
(4.75)

Expanding the RHS using Taylor series, we get:

$$\displaystyle \begin{aligned} I_2+\tilde{i}_2(t)&=\hat{i}_2(V_2) + \frac{d\hat{i}_2}{dv}\Big\rvert_{V_2}\tilde{v}_2(t)+\text{higher order terms} \end{aligned} $$
(4.76)

Now if \(\tilde {v}_2(t)\) is small, we may neglect the higher order terms and since \(I_2=\hat {i}_2(V_2)\), we get:

$$\displaystyle \begin{aligned} \tilde{i}_2(t)&=\frac{d\hat{i}_2}{dv}\Big\rvert_{V_2}\tilde{v}_2(t) {} \end{aligned} $$
(4.77)

Equation (4.77) is the equation of a linear time-invariant resistor with conductance \(\frac {d\hat {i}_2}{dv}\Big \rvert _{V_2}\), the slope of the resistor characteristic at its operating point. Note that for a linear resistor, \(\frac {d\hat {i}_2}{dv}\Big \rvert _{V_2}=\frac {1}{R}\). Hence, comparing Figs. 4.30 and 4.32, we see that the linear resistor remains.Footnote 13
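Equation (4.77) can be checked numerically. For a hypothetical exponential resistor characteristic (an assumption; Fig. 4.30 does not specify \(\hat{i}_2\)), the linearized branch equation reproduces the exact current deviation when \(\tilde{v}_2\) is small:

```python
import math

# Numerical check of Eq. (4.77) for a hypothetical exponential resistor
# characteristic i2_hat(v) = Is*(exp(v/VT) - 1).  For a small deviation
# v_tilde, the linearized branch equation with conductance d(i2_hat)/dv
# evaluated at V2 reproduces the exact current deviation.
Is, VT, V2 = 1e-12, 0.025, 0.55

def i2_hat(v):
    return Is * (math.exp(v / VT) - 1.0)

g = (Is / VT) * math.exp(V2 / VT)   # small-signal conductance at V2
v_tilde = 1e-4                      # a "small" 0.1 mV deviation
exact = i2_hat(V2 + v_tilde) - i2_hat(V2)
linear = g * v_tilde
print(exact, linear)                # nearly identical for small v_tilde
```

Repeating the experiment with a larger v_tilde shows the neglected higher-order terms re-emerging, which is precisely the limit of validity of the small-signal approximation.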

For the controlled source, we have:

$$\displaystyle \begin{aligned} i_3(t)&=f_0(v_2(t)) \end{aligned} $$
(4.78)

Substituting for i 3(t) and v 2(t), we get:

$$\displaystyle \begin{aligned} I_3+\tilde{i}_3(t)&=f_0(V_2+\tilde{v}_2(t)) \end{aligned} $$
(4.79)

Expanding the RHS using Taylor series, we get:

$$\displaystyle \begin{aligned} I_3+\tilde{i}_3(t)&=f_0(V_2) + \frac{df_0}{dv}\Big\rvert_{V_2}\tilde{v}_2(t)+\text{higher order terms} \end{aligned} $$
(4.80)

Now if \(\tilde {v}_2(t)\) is small, we may neglect the higher order terms to get:

$$\displaystyle \begin{aligned} \tilde{i}_3(t)&=\frac{df_0}{dv}\Big\rvert_{V_2}\tilde{v}_2(t) {} \end{aligned} $$
(4.81)

Equation (4.81) is the equation of a linear time-invariant VCCS.

For the nonlinear capacitor, we have:

$$\displaystyle \begin{aligned} q_7(t)&=\hat{q}_7(v_7(t)) \end{aligned} $$
(4.82)

Using the chain rule and substituting for v 7(t), we get:

$$\displaystyle \begin{aligned} \tilde{i}_7(t)&=\hat{q}^{\prime}_7(V_7+\tilde{v}_7(t))\cdot\overset{\bullet}{\tilde{v}}_7(t) \end{aligned} $$
(4.83)

Expanding the RHS using Taylor series, dropping the higher order terms and using the fact that the DC equivalent of a capacitor is an open circuit, we get:

$$\displaystyle \begin{aligned} \tilde{i}_7(t)&=\frac{d\hat{q}_7}{dv}\Big\rvert_{V_7}\cdot\overset{\bullet}{\tilde{v}}_7(t) {} \end{aligned} $$
(4.84)

Equation (4.84) describes a linear time-invariant capacitor whose capacitance \(\frac {d\hat {q}_7}{dv}\Big \rvert _{V_7}\) is the slope of the nonlinear capacitor characteristic at the operating point V 7.

For the independent AC source, we trivially get: \(\tilde {v}_1(t)=e_s(t)\).

Hence the resulting branch equations for the small-signal linear time-invariant circuit in Fig. 4.32 are:

$$\displaystyle \begin{aligned} \tilde{v}_1(t)&=e_s(t) \\ \tilde{i}_2(t)-\hat{i}^{\prime}_2(V_2)\tilde{v}_2(t)&=0 \\ \tilde{i}_3(t)-f^{\prime}_0(V_2)\tilde{v}_2(t)&=0 \\ \tilde{v}_4(t)-L\overset{\bullet}{\tilde{i}_4}(t)&=0 \\ \tilde{i}_5(t)-C\overset{\bullet}{\tilde{v}_5}(t)&=0 \\ \tilde{v}_6(t)-\hat{\phi}^{\prime}_6(I_6)\overset{\bullet}{\tilde{i}_6}(t)&=0 \\ \tilde{i}_7(t)-\hat{q}^{\prime}_7(V_7)\overset{\bullet}{\tilde{v}_7}(t)&=0 \\ \tilde{v}_8(t)-R\tilde{i}_8(t)&=0 {} \end{aligned} $$
(4.85)

Let us abbreviate these equations in the form:

$$\displaystyle \begin{aligned} ({\mathbf{M}}_{0Q}D+{\mathbf{M}}_{1Q})\tilde{\mathbf{v}}(t)+({\mathbf{N}}_{0Q}D+{\mathbf{N}}_{1Q})\tilde{\mathbf{i}}(t)&=\tilde{\mathbf{u}}_s(t) {} \end{aligned} $$
(4.86)

where the constant matrices M 0Q, M 1Q, N 0Q, N 1Q are directly read from Eq. (4.85) and \(\tilde {\mathbf {u}}_s(t)\) is the column vector of AC sources in Eq. (4.85).

Conclusion

If we collect KCL, KVL from Eqs. (4.72) and (4.86), we get the tableau equation of a small-signal equivalent circuit:

$$\displaystyle \begin{aligned} \mathbf{A}\tilde{\mathbf{i}}(t)&=0 \\ \tilde{\mathbf{v}}(t)-{\mathbf{A}}^T\tilde{\mathbf{e}}(t)&=0 \\ ({\mathbf{M}}_{0Q}D+{\mathbf{M}}_{1Q})\tilde{\mathbf{v}}(t)+({\mathbf{N}}_{0Q}D+{\mathbf{N}}_{1Q})\tilde{\mathbf{i}}(t)&=\tilde{\mathbf{u}}_s(t) \end{aligned} $$
(4.87)

We will denote the small-signal equivalent circuit as \(\mathscr {L}_Q\). Since the concept of small-signal equivalent circuits is very important, we summarize the procedure in detail by the following algorithm.

Algorithm to obtain the small-signal equivalent circuit \(\mathscr {L}_Q\) of \(\mathscr {D}\)

Data

  • Circuit diagram of the nonlinear time-invariant circuit \(\mathscr {D}\) driven by DC and AC sources, with nodes numbered and with current reference directions

  • Branch equations for each element in \(\mathscr {D}\)

First we determine the operating point Q

  1.

    In \(\mathscr {D}\), set all AC independent sources to zero.

  2.

    Replace all inductors by short circuits and all capacitors by open circuits.

  3.

    Solve the resulting resistive circuit, which is now driven by DC sources only. Call Q the resulting operating point specified by the solution (V Q, I Q). If there are multiple operating points, we choose the one of interest and study the dynamics of the circuit about that operating point.

Second, we determine \(\mathscr {L}_Q\)

  1.

    In \(\mathscr {D}\), set all DC independent sources to zero.

  2.

    Leave all linear elements unchanged.

  3.

    Replace every nonlinear element by its (linear) small-signal equivalent circuit about the operating point Q determined above. The resulting linear time-invariant circuit is \(\mathscr {L}_Q\), the small-signal equivalent circuit of \(\mathscr {D}\) about the operating point Q.
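The whole procedure can be exercised end-to-end on a hypothetical first-order example (the exponential resistor characteristic and all element values below are illustrative assumptions, not taken from Fig. 4.30): find Q, build \(\mathscr {L}_Q\), and verify that it tracks the exact deviation of the full nonlinear circuit.

```python
import math

# End-to-end sketch of the algorithm on a hypothetical first-order circuit:
# a source E + eps*sin(w*t) feeds, through R, the parallel combination of a
# linear capacitor C and an exponential resistor i_hat(v).
E, R, C = 5.0, 1e3, 1e-6
Is, VT = 1e-12, 0.025
eps, w = 1e-3, 2 * math.pi * 1e3

def i_hat(v):
    return Is * (math.exp(v / VT) - 1.0)

# First part: DC operating point (AC source zeroed, capacitor opened),
# found by Newton's method.
V = 0.5
for _ in range(50):
    fv = (E - V) / R - i_hat(V)
    fp = -1.0 / R - (Is / VT) * math.exp(V / VT)
    V -= fv / fp
g = (Is / VT) * math.exp(V / VT)     # small-signal conductance at Q

# Second part: integrate the full nonlinear circuit and the small-signal
# equivalent (DC source zeroed, nonlinear resistor -> conductance g).
dt, v_full, v_tilde, t = 1e-6, V, 0.0, 0.0
worst = amp = 0.0
while t < 2e-3:
    es = eps * math.sin(w * t)
    v_full += dt * ((E + es - v_full) / R - i_hat(v_full)) / C
    v_tilde += dt * ((es - v_tilde) / R - g * v_tilde) / C
    t += dt
    worst = max(worst, abs((v_full - V) - v_tilde))
    amp = max(amp, abs(v_tilde))
print(worst, amp)   # the linearized circuit tracks the exact deviation
```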

4.3 Frequency Domain Analysis of Linear Time-Invariant Circuits

In this section, we consider exclusively linear time-invariant circuits and we concentrate on their sinusoidal steady-state behavior, that is, their behavior when they are driven by one or more sinusoidal sources at some frequency ω and when, after all “transients” have died down, all currents and voltages are sinusoidal at frequency ω.

This section has a somewhat narrow focus in the sense that we do not discuss nonlinear circuits. However, the concepts and techniques this section covers are fundamental to science, in the sense that frequency domain analysis helps transform the analysis of differential equations in the time domain (Sect. 4.2) into the analysis of algebraic (albeit complex) equations in the frequency domain. Also, we will see later in this chapter that a variety of small-signal AC analysis techniques (for example, with higher-order circuit elements in Sect. 4.6.2) will make use of frequency response concepts.

Moreover, discussing frequency domain techniques for nonlinear circuits is beyond the scope of this book, as we need to develop the mathematical machinery (such as describing functions) first. We plan to add this topic as part of our follow-up advanced volume on nonlinear circuits and networks.

The analysis technique when sinusoidal inputs are applied to linear time-invariant circuits is called AC analysis or sinusoidal steady-state analysis. Our first task is to systematically develop the concept of a phasor : to each sine wave (of voltage or current) we associate a complex number that encodes both the magnitude and the phase.

4.3.1 Complex Numbers and Phasors

We will first discuss some important ideas regarding complex numbers. We would like to emphasize that our approach to deriving the phasor concept from complex numbers is probably unique because we use a historical approach [18], covering important concepts along the way. Hence we encourage readers who are familiar with complex numbers to at least glance through this section to make sure that they do not miss out on some fascinating facts. Many texts seek to introduce complex numbers with a convenient historical fiction based on solving quadratic equationsFootnote 14 [25]:

$$\displaystyle \begin{aligned} x^2&=mx + c \end{aligned} $$
(4.88)

Two thousand years BC, it was already known that such equations could be solved using a method that is equivalent to the modern formula:

$$\displaystyle \begin{aligned} x_{1,2}&=\frac{m\pm \sqrt{m^2+4c}}{2} \end{aligned} $$
(4.89)

But what if m 2 + 4c (discriminant) is negative? This is where many textbooks are historically inaccurate in the sense that they state: the need for Eq. (4.88) to always have a solution forced mathematicians to take complex numbers seriously for negative discriminants.

But that is simply false. For the ancient Greeks mathematics was synonymous with geometry. Thus an algebraic relation such as Eq. (4.88) was not so much thought of as a problem in its own right, but rather as a mere vehicle for solving a genuine problem in geometry. In other words, Eq. (4.88) was simply seen to represent the problem of finding the intersection points of the parabola y = x 2 with the line y = mx + c. Thus, depending on the sign of the discriminant, the equation either had two, one, or no real solutions. So, if the solution was absent, then it was correctly manifested by the occurrence of “impossible” (now known as complex) numbers in the formula.

It was not the quadratic that forced complex numbers to be taken seriously, it was the cubic:

$$\displaystyle \begin{aligned} x^3&=3px+2q \end{aligned} $$
(4.90)

Exercise 4.9 shows that any cubic equation can be reduced to the form above. This equation represents the analogous problem of finding the intersection points of the cubic y = x 3 with the line y = 3px + 2q. Girolamo Cardano in his Ars Magna (which appeared in 1545) showed that this equation could be solved by means of the elegant formula (see Exercise 4.10):

$$\displaystyle \begin{aligned} x&=s+t \\ \text{where: } s^3=q+\sqrt{q^2-p^3}&\quad t^3=q-\sqrt{q^2-p^3} \end{aligned} $$
(4.91)

Some 30 years after this formula appeared, Rafael Bombelli in L’Algebra recognized that there was something strange and paradoxical about it. First note that if the line y = 3px + 2q is such that p 3 > q 2 then the formula involves complex numbers. For example, Bombelli considered x 3 = 15x + 4 which yields as one of the solutions:

$$\displaystyle \begin{aligned} x&=\sqrt[\leftroot{-3}\uproot{3}3]{2+11j}+\sqrt[\leftroot{-3}\uproot{3}3]{2-11j} \end{aligned} $$
(4.92)

In the previous case of the quadratic, this merely signaled that the geometric problem had no solution but in the case of the cubic, the line will always Footnote 15 hit the curve. In fact, we can (graphically) show that Bombelli’s example yields the solution x = 4.

As he struggled to resolve this paradox, Bombelli had what he called a “wild thought”: perhaps the solution x = 4 could be recovered from the above expression if \(\sqrt [\leftroot {-3}\uproot {3}3]{2+11j}=2+nj\) and \(\sqrt [\leftroot {-3}\uproot {3}3]{2-11j}=2-nj\). Of course for this to work he would have to assume that the addition of two complex numbers \(A=a+j\tilde {a}\) and \(B=b+j\tilde {b}\) obeyed the plausible rule,

$$\displaystyle \begin{aligned} A+B&=(a+j\tilde{a})+(b+j\tilde{b}) \\ &=(a+b)+j(\tilde{a}+\tilde{b}) \end{aligned} $$
(4.93)

Next, to see if there was indeed a value of n for which \(\sqrt [\leftroot {-3}\uproot {3}3]{2+11j}=2+nj\), he needed to calculate \((2+nj)^3\). To do so he assumed that he could multiply out the brackets as in ordinary algebra, with \(j^2=-1\):

$$\displaystyle \begin{aligned} (a+j\tilde{a})(b+j\tilde{b})&=ab+j(a\tilde{b}+\tilde{a}b)+j^2\tilde{a}\tilde{b} \\ &=(ab-\tilde{a}\tilde{b})+j(a\tilde{b}+\tilde{a}b) \end{aligned} $$
(4.94)

This rule vindicated his “wild thought,” for he was now able to show that \((2\pm j)^3=2\pm 11j\).

While complex numbers themselves remained mysterious, Bombelli’sFootnote 16 work on cubic equations thus established that perfectly real problems require complex arithmetic for their solution. This justifies our use of complex arithmetic in AC circuit analysis: complex numbers provide an elegant way to encode both the magnitude and phase of a sinusoid. In fact, the subsequent development of the theory of complex numbers was bound up with progress in other areas of physics and mathematics. That discussion is beyond the scope of this book; the interested reader is referred to [25].

We will now introduce the modern terminology and notation for complex numbers. Throughout this discussion, refer to Table 4.3 and Fig. 4.33.

Fig. 4.33
figure 33

Complex numbers terminology (contd.)

Table 4.3 Complex numbers terminology

It is valuable to grasp from the outset that (according to the geometric view) a complex number is a single, indivisible entity—a point in the plane. Only when we choose to describe such a point with numerical coordinates does a complex number appear to be compounded or “complex.” More precisely, \(\mathbb {C}\) is said to be two dimensional, meaning that two real numbers (coordinates) are needed to label a point within it, but exactly how the labeling is done is entirely up to us.

One way is to label the points with Cartesian coordinates (the real part x and imaginary part y), the complex number being written as z = x + jy. This form, called the standard form (encountered earlier via Bombelli’s work), is the “natural” labeling when dealing with addition (or subtraction) of two complex numbers z 1 = a + jb, z 2 = c + jd:

$$\displaystyle \begin{aligned} z_1+z_2&=(a+c)+j(b+d) \end{aligned} $$
(4.95)

We simply add the real parts to get the real part for the sum, and add the imaginary parts to get the imaginary part for the sum.

But, when multiplying (or dividing) two complex numbers, the standard form is cumbersome. To emphasize this point, let us again multiply two complex numbers in standard form:

$$\displaystyle \begin{aligned} z_1*z_2&=(a+jb)*(c+jd) \\ &=(ac-bd)+j(ad+bc)\;\;\;(j^2=-1) \end{aligned} $$
(4.96)

There is a more elegant way to multiply (divide) complex numbers. We will simply state the rule since a detailed explanation is beyond the scope of this book: labeling z with its polar coordinates, \(r=|z|, \theta = \arg (z)\), we can now write \(z=r\angle \theta \) where the symbol \(\angle \) serves to remind us that θ is the angle of z .

The geometric multiplication rule takes the simple form:

$$\displaystyle \begin{aligned} (R\angle\phi)(r\angle\theta)&=(Rr)\angle(\phi+\theta) \end{aligned} $$
(4.97)

In words: The length of z 1 z 2 is the product of the lengths of z 1 and z 2 , and the angle of z 1 z 2 is the sum of the angles of z 1 and z 2.

Complex division can now be defined in a simple manner:

$$\displaystyle \begin{aligned} \frac{R\angle\phi}{r\angle\theta}&=\frac{R}{r}\angle(\phi-\theta) \end{aligned} $$
(4.98)

An important point: in common with the Cartesian label x + jy, a given polar label \(r\angle \theta \) specifies a unique point, but (unlike the Cartesian case) a given point does not have a unique polar label! Since any two angles that differ by a multiple of 2π correspond to the same direction, a given point has infinitely many labels:

$$\displaystyle \begin{aligned}\begin{array}{r*{20}l} \cdots&=r\angle(\theta-4\pi)&=r\angle(\theta-2\pi)&=r\angle\theta&=r\angle(\theta+2\pi)&=r\angle(\theta+4\pi)&=\cdots \vspace{-3pt} \end{array}\end{aligned} $$
(4.99)

This simple fact about angles is one of the most important properties of complex numbers, and it is encountered many times in science and engineering. Before proceeding, you should solve Exercise 4.11 so that you thoroughly understand and are comfortable with the concepts, terminology, and notation for complex numbers.

We are now in a position to look at probably the most elegant formula in mathematics, called Euler’s formula :

$$\displaystyle \begin{aligned} e^{j\theta}&=\cos{}(\theta)+j\sin{}(\theta) \end{aligned} $$
(4.100)

Simply stated, “Euler’s formula relates polar form to standard form.” But this does not help us understand what the formula means; it reduces one of Euler’s greatest achievements to a mere tautology. Perhaps the best approach to understanding Euler’s formula is to visualize \(e^{j\theta }\) in the complex plane, as shown in Fig. 4.34. Writing \(e^{j\theta }\) in standard form as x + jy, we can see from Fig. 4.34 that since the magnitude of \(e^{j\theta }\) is 1, \(x=\cos {}(\theta ),y=\sin {}(\theta )\) by the definition of the trigonometric functions from a right-angled triangle. Obviously, if we scale the magnitude of a complex number by r, Fig. 4.34 shows \(re^{j\theta }=r\cos {}(\theta )+jr\sin {}(\theta )\).

Fig. 4.34

Interpreting Euler’s formula via the complex plane
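Euler's formula can likewise be verified numerically; a minimal sketch (the angle and scale are arbitrary):

```python
import cmath, math

theta = 0.7   # an arbitrary angle in radians

z = cmath.exp(1j * theta)                      # e^{j*theta}
assert abs(abs(z) - 1.0) < 1e-12               # lies on the unit circle
assert abs(z.real - math.cos(theta)) < 1e-12   # x = cos(theta)
assert abs(z.imag - math.sin(theta)) < 1e-12   # y = sin(theta)

# Scaling by r gives the general polar form: r e^{j*theta} = r cos(theta) + j r sin(theta)
r = 3.2
w = r * cmath.exp(1j * theta)
assert cmath.isclose(w, complex(r * math.cos(theta), r * math.sin(theta)))
```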

Now, we are ready to discuss the concept of a phasor.

Definition 4.11

A sinusoid of angular frequency ω (rad/s) is by definition a function of the form \(A_m\cos {}(\omega t + \theta )\) where the amplitude A m, phase θ, and the frequency ω are real constants. The amplitude A m is always taken to be positive. The period T = 2π∕ω is in seconds. Also note that given a frequency f in Hz, ω = 2πf.

Definition 4.12

To the sinusoid in Definition 4.11, we associate a complex number A called the phasor (of that sinusoid) according to the rule: \(A\overset {\triangle }=A_me^{j\theta }\).

It is crucial to note that the phasor does not explicitly involve ωt! The best way to understand this is visually; refer to Fig. 4.35, called the phasor diagram . We plot the phasor A in the complex plane as a vector from the origin to the point \(A = A_me^{j\theta}\). We now imagine the vector rotating counterclockwise at an angular velocity of ω rad/s, namely, we consider Ae jωt as t increases. Whenever we want x(t), we project the tip of the vector orthogonally onto the x-axis.

Fig. 4.35

The sinusoid \(x(t)=A_m\cos {}(\omega t+\theta )\) is viewed as being generated by the projection of the tip of the “rotating phasor” Ae jωt

In other words, knowing the frequency ω, the phasor A specifies uniquely the sinusoid by the formula:

$$\displaystyle \begin{aligned} \text{Re}[Ae^{j\omega t}]&= \text{Re}[A_me^{j(\omega t+\theta)}] \\ &=A_m\cos{}(\omega t + \theta) \end{aligned} $$
(4.101)

In summary, there is a one-to-one correspondence between sinusoids (at frequency ω) and phasors:

$$\displaystyle \begin{aligned} A_m\cos{}(\omega t + \theta)\;\longleftrightarrow\; A=A_me^{j\theta} \end{aligned} $$
(4.102)

Equivalence (4.102) states that

$$\displaystyle \begin{aligned} A_m\cos{}(\omega t + \theta) = \text{Re}[A]\cos\omega t - \text{Im}[A]\sin\omega t \end{aligned} $$
(4.103)

4.3.2 Sinusoidal Steady-State Analysis Using Phasors

The use of phasors in the analysis of linear time-invariant circuits in sinusoidal steady-state becomes completely obvious once the following lemmas are thoroughly understood.

Lemma 4.1 (Uniqueness)

Two sinusoids are equal iff they are represented by the same phasor; symbolically for all t,

$$\displaystyle \begin{aligned} \mathit{\text{Re}}(Ae^{j\omega t})=\mathit{\text{Re}}(Be^{j\omega t}) \Leftrightarrow A=B \end{aligned} $$
(4.104)

Proof

  1. (a)

    Assume A = B. Consequently, for all t,

    $$\displaystyle \begin{aligned} Ae^{j\omega t}=Be^{j\omega t}\quad \text{and}\quad \text{Re}(Ae^{j\omega t})=\text{Re}(Be^{j\omega t}) \end{aligned}$$
  2. (b)

    Assume, for all t:

    $$\displaystyle \begin{aligned} \text{Re}(Ae^{j\omega t})=\text{Re}(Be^{j\omega t}) \end{aligned} $$
    (4.105)

    In particular, for t = 0, we get: Re(A) = Re(B). Similarly for t 0 = π∕(2ω), \(e^{j\omega t_0} = e^{j\pi /2}=j\). Thus \(\text{Re}(jA) = -\text{Im}(A)\) and hence Eq. (4.105) gives Im(A) = Im(B). Therefore:

    $$\displaystyle \begin{aligned} A&=\text{Re}(A)+j\text{Im}(A) \\ &=\text{Re}(B)+j\text{Im}(B) \\ &=B \end{aligned} $$
    (4.106)

Lemma 4.2 (Linearity)

The phasor representing a linear combination of sinusoids (with real coefficients) is equal to the same linear combination of the phasors representing the individual sinusoids. Symbolically, let the sinusoids be

$$\displaystyle \begin{aligned} x_1(t)=\mathit{\text{Re}}[A_1e^{j\omega t}]\quad \mathit{\text{and}}\quad x_2(t)=\mathit{\text{Re}}[A_2e^{j\omega t}] \end{aligned} $$

Thus the phasor A 1 represents sinusoid x 1(t) and the phasor A 2 represents x 2(t). Let \(a_1, a_2\in \Re \) ; then the sinusoid a 1 x 1(t) + a 2 x 2(t) is represented by the phasor a 1 A 1 + a 2 A 2.

Proof

We verify the assertion by computation:

$$\displaystyle \begin{aligned} a_1x_1(t)+a_2x_2(t) = a_1\text{Re}[A_1e^{j\omega t}]+a_2\text{Re}[A_2e^{j\omega t}] \end{aligned} $$
(4.107)

Now a 1 and a 2 are real numbers, hence for any complex numbers z 1 and z 2,

$$\displaystyle \begin{aligned} a_i\text{Re}[z_i]&=\text{Re}[a_iz_i]\quad i=1,2 \\ \text{and}\;a_1\text{Re}[z_1]+a_2\text{Re}[z_2]&=\text{Re}[a_1z_1+a_2z_2] \end{aligned} $$
(4.108)

Now applying this fact to Eq. (4.107) we have:

$$\displaystyle \begin{aligned} a_1\text{Re}[A_1e^{j\omega t}]+a_2\text{Re}[A_2e^{j\omega t}]&=\text{Re}[(a_1A_1+a_2A_2)e^{j\omega t}] \end{aligned} $$
(4.109)

Combining the equation above with Eq. (4.107) we get:

$$\displaystyle \begin{aligned} a_1x_1(t)+a_2x_2(t)&=\text{Re}[(a_1A_1+a_2A_2)e^{j\omega t}] \end{aligned} $$
(4.110)

The proof is easily extended to a linear combination (with real coefficients) of n sinusoids.

Lemma 4.3 (Phasor Differentiation)

A is the phasor of a given sinusoid \(A_m\cos {}(\omega t+\theta )\) iff jωA is the phasor of its derivative, \(\frac {d}{dt}[A_m\cos {}(\omega t+\theta )]\) . Symbolically,

$$\displaystyle \begin{aligned} \mathit{\text{Re}}[j\omega Ae^{j\omega t}]&=\frac{d}{dt}[\mathit{\text{Re}}(Ae^{j\omega t})] \end{aligned} $$
(4.111)

Proof

Note that it is convenient to think of Eq. (4.111) as stating that the linear operators Re and \(\frac {d}{dt}\) commute:

$$\displaystyle \begin{aligned} \text{Re}\left[\frac{d}{dt}(Ae^{j\omega t})\right]=\text{Re}[j\omega Ae^{j\omega t}]&=\frac{d}{dt}[\text{Re}(Ae^{j\omega t})] \end{aligned} $$

Now:

$$\displaystyle \begin{aligned} \frac{d}{dt}[\text{Re}(Ae^{j\omega t})] &= \frac{d}{dt}[\text{Re}(A_me^{j(\omega t+\theta)})] \\ &=\frac{d}{dt}[A_m\cos{}(\omega t+\theta)] \\ &=-A_m\omega\sin{}(\omega t+\theta) \\ &=\text{Re}[j\omega A_me^{j(\omega t+\theta)}] \\ &=\text{Re}[j\omega Ae^{j\omega t}] \end{aligned} $$
(4.112)

Example 4.3.1

Simplify: \(12\cos {}(\omega t+23^\circ ) + 7\cos {}(\omega t-57^\circ )+\frac {d}{dt}(0.2\cos {}(\omega t+71^\circ ))\)

Solution

We could combine all the functions using trigonometric formulae; however, this approach gets very complicated. Instead let us use the phasor rules we just learned, noting that ω is the same for all the functions. Let ω = 377 rad/s. Hence, the phasor formulation for each function is:

$$\displaystyle \begin{aligned} A_1&=12e^{j23^\circ} \\ A_2&=7e^{-j57^\circ} \\ A_3&=j\omega 0.2e^{j71^\circ}=75.4e^{j161^\circ}\end{aligned} $$
(4.113)

For A 3, we used the differentiation rule. Since we are going to be adding complex numbers, let us write each of the phasors in standard form:

$$\displaystyle \begin{aligned} A_1&\approx 11.05+j4.69 \\ A_2&\approx 3.81-j5.87 \\ A_3&\approx -71.29+j24.55 \end{aligned} $$
(4.114)

We will add and convert back to phasor form, so we can interpret the result as a sinusoid:

$$\displaystyle \begin{aligned} A_1+A_2+A_3&=-56.43+j23.37 \\ &=61.08e^{j157.51^\circ} \\ &\overset{\triangle}=A \end{aligned} $$
(4.115)

Thus the resulting sinusoid is: \(\text{Re}[Ae^{j\omega t}]=61.08\cos {}(377t + 157.51^\circ )\).
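The arithmetic in Example 4.3.1 can be reproduced with a few lines of Python:

```python
import cmath, math

deg = math.pi / 180
omega = 377.0   # rad/s, as chosen in the example

# Phasors for the three terms; the differentiation rule multiplies the third by j*omega.
A1 = 12 * cmath.exp(1j * 23 * deg)
A2 = 7 * cmath.exp(-1j * 57 * deg)
A3 = 1j * omega * 0.2 * cmath.exp(1j * 71 * deg)

A = A1 + A2 + A3
Am, theta = cmath.polar(A)
assert abs(Am - 61.08) < 0.01            # magnitude matches the text
assert abs(theta / deg - 157.51) < 0.01  # angle (in degrees) matches the text
```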

We will now solve a differential equation using phasor formulation.

Example 4.3.2

Given the circuit in Fig. 4.36, determine i L(t) for all t. \(i_s(t)=I_{\mathrm{sm}}\cos {}(\omega t +\angle I_s)\). Assume L > 0, R > 0, C > 0.

Fig. 4.36
figure 36

Circuit for Example 4.3.2

Solution

The time domain equation for i L(t) can be easily found via inspection as:

$$\displaystyle \begin{aligned} \frac{d^2}{dt^2}i_L(t)+2\alpha\overset{\bullet}{i}_L(t)+\omega_0^2i_L(t)&=\omega_0^2i_s(t) \end{aligned} $$
(4.116)

where \(\omega_0^2=1/LC\) and \(2\alpha =1/RC\). Let the phasor representation of the sinusoidal current source be \(I_s=I_{\mathrm{sm}}e^{j\angle I_s}\). Since the input is sinusoidal and Eq. (4.116) is a linear ODE with constant coefficients, we try the solution Re(I L e jωt) where the complex number I L is the yet-undetermined phasor which specifies this particular sinusoidal solution. Substituting into Eq. (4.116) we obtain for all t:

$$\displaystyle \begin{aligned} \frac{d^2}{dt^2}[\text{Re}(I_Le^{j\omega t})] + 2\alpha\frac{d}{dt}[\text{Re}(I_Le^{j\omega t})] + \omega_0^2\text{Re}(I_Le^{j\omega t})&=\omega_0^2\text{Re}(I_se^{j\omega t}) \end{aligned} $$
(4.117)
  1. 1.

    Using the differentiation lemma three times, we get:

    $$\displaystyle \begin{aligned} \text{Re}[(j\omega)^2I_Le^{j\omega t}]+2\alpha\text{Re}[(j\omega)I_Le^{j\omega t}]+\omega_0^2\text{Re}(I_Le^{j\omega t})&=\omega_0^2\text{Re}(I_se^{j\omega t}) \end{aligned} $$
  2. 2.

    Using the linearity lemma we obtain (since α and \(\omega _0^2\) are real):

    $$\displaystyle \begin{aligned} \text{Re}\left\{[(j\omega)^2+2\alpha(j\omega)+\omega_0^2]I_Le^{j\omega t}\right\}&=\omega_0^2\text{Re}(I_se^{j\omega t}) \end{aligned} $$
  3. 3.

    Using the uniqueness lemma, we obtain an algebraic equation for I L:

    $$\displaystyle \begin{aligned}{}[(j\omega)^2+2\alpha(j\omega)+\omega_0^2]I_L&=\omega_0^2I_s \end{aligned} $$
    (4.118)

Hence

$$\displaystyle \begin{aligned} I_L&=\frac{\omega_0^2I_s}{(\omega_0^2-\omega^2)+2\alpha j\omega}\overset{\triangle}=I_{\mathrm{Lm}}e^{j(\theta_L+\angle I_s)} \end{aligned} $$
(4.119)

with

$$\displaystyle \begin{aligned} I_{\mathrm{Lm}}&=\frac{\omega_0^2}{\sqrt{(\omega_0^2-\omega^2)^2+(2\alpha\omega)^2}}I_{\mathrm{sm}}\quad \theta_L=-\tan^{-1}\frac{2\alpha\omega}{\omega_0^2-\omega^2} \end{aligned} $$

The sinusoidal solution is then:

$$\displaystyle \begin{aligned} i_{\mathrm{Lp}}(t)&=\frac{\omega_0^2I_{\mathrm{sm}}}{\sqrt{(\omega_0^2-\omega^2)^2+(2\alpha\omega)^2}}\cos{}(\omega t+\angle I_s+\theta_L) \end{aligned} $$
(4.120)

where the subscript p reminds us that i Lp is the sinusoidal particular solution. The physical meaning of this particular solution is the following: since R, L, C are positive constants, it follows that α > 0 and \(\omega _0^2 > 0\). Consequently, the two natural frequencies s 1, s 2 of the circuit, i.e., the zeros of its characteristic polynomial \(C(s)=s^2+2\alpha s + \omega _0^2\) have negative real parts. Therefore, any solution of Eq. (4.116) starting at any t 0 from any initial condition has the form:

$$\displaystyle \begin{aligned} i_L(t)&=k_1e^{s_1(t-t_0)}+k_2e^{s_2(t-t_0)}+i_{\mathrm{Lp}}(t) \end{aligned} $$
(4.121)

Note that we have assumed s 1 ≠ s 2.
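The closed-form phasor (4.119) can be checked against the algebraic equation (4.118); a sketch with hypothetical element and source values (R = 50 Ω, L = 1 mH, C = 1 μF, 1 kHz drive — none of these come from the text):

```python
import cmath, math

# Hypothetical element values for illustration only.
R, L, C = 50.0, 1e-3, 1e-6
w0sq = 1.0 / (L * C)          # omega_0^2 = 1/LC
alpha = 1.0 / (2.0 * R * C)   # 2*alpha = 1/RC

omega = 2 * math.pi * 1e3          # arbitrary drive frequency
Is = 2.0 * cmath.exp(1j * 0.5)     # arbitrary source phasor I_sm e^{j angle(Is)}

# Eq. (4.119): closed-form phasor of the inductor current.
IL = w0sq * Is / ((w0sq - omega**2) + 2j * alpha * omega)

# It satisfies the algebraic equation (4.118) obtained from the ODE:
lhs = ((1j * omega)**2 + 2 * alpha * (1j * omega) + w0sq) * IL
assert abs(lhs - w0sq * Is) < 1e-9 * abs(w0sq * Is)

# Its magnitude matches the closed-form I_Lm given in the text:
ILm = w0sq / math.sqrt((w0sq - omega**2)**2 + (2 * alpha * omega)**2) * abs(Is)
assert abs(abs(IL) - ILm) < 1e-9 * ILm
```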

Example 4.3.2 illustrates the following ideas:

  1. 1.

    Since Re(s 1) < 0, Re(s 2) < 0, as \(t\rightarrow\infty\), i L(t) → i Lp(t). This particular solution is called the sinusoidal steady-state solution of the circuit . The difference between the total response i L(t) given by Eq. (4.121) and the particular solution given by Eq. (4.120) is called the transient response.

  2. 2.

    Note that the frequency of the output is the same as the frequency of the input. This property is true in general for any linear time-invariant circuit: if all its natural frequencies have negative real parts, then for any initial conditions and for any set of independent sources, each one sinusoidal at the same frequency ω, all currents and all voltages will tend exponentially, as \(t\rightarrow\infty\), to sinusoidal waveforms at frequency ω. When that situation occurs the circuit is said to be in the sinusoidal steady-state . Note that the sinusoidal steady-state does not depend on the initial conditions. A general proof is beyond the scope of this book; the interested reader is referred to [12].

  3. 3.

    Comparing Eqs. (4.116) and (4.118) we see that a differential equation in the time domain has been converted to a complex algebraic equation in the phasor domain. So a natural question is: can we obtain the algebraic equation in the phasor domain directly from the circuit, instead of writing the time domain differential equation?

The answer is yes, and simply involves reformulating the laws of interconnections (KCL, KVL) and laws of elements in the phasor domain.

For example, in Fig. 4.36, KCL reads for all t:

$$\displaystyle \begin{aligned} i_L(t)+i_R(t)+i_C(t)&=0 \end{aligned} $$
(4.122)

For k = L, R, C, let I k be the phasor representing the sinusoid i k(t). Thus, Eq. (4.122) gives, for all t:

$$\displaystyle \begin{aligned} \text{Re}(I_Le^{j\omega t})+\text{Re}(I_Re^{j\omega t})+\text{Re}(I_Ce^{j\omega t})&=0 \end{aligned} $$
(4.123)

Using the linearity and uniqueness lemmas, we obtain:

$$\displaystyle \begin{aligned} I_L+I_R+I_C&=0 \end{aligned} $$
(4.124)

Since the reasoning is quite general, we can state the following conclusion.

Theorem 4.3 (KCL in the Phasor Domain)

In the sinusoidal steady-state, for any connected circuit \(\mathscr {D}\) , KCL reads:

$$\displaystyle \begin{aligned} \mathbf{A}\bar{\mathbf{I}}&=0 \end{aligned} $$
(4.125)

where A is the (n − 1) × b reduced incidence matrix of real numbers and \(\bar {\mathbf {I}}\) is a b-vector of branch current phasors. We use \(\bar {\mathbf {I}}\) to avoid confusion with the identity matrix.

We can make a similar argument for KVL and hence we get:

Theorem 4.4 (KVL in the Phasor Domain)

In the sinusoidal steady-state, for any connected circuit \(\mathscr {D}\) , KVL reads:

$$\displaystyle \begin{aligned} \mathbf{V}&={\mathbf{A}}^T\mathbf{E} \end{aligned} $$
(4.126)

where A is the (n − 1) × b reduced incidence matrix of real numbers and E is an (n − 1)-vector of node voltage phasors. Notice that V is a b-vector with complex components.

The laws of elements in the phasor domain can also be derived in a straightforward manner by application of the three lemmas to the time domain element laws. Table 4.4 has the results. The expressions R, jωL and \(\frac {1}{j\omega C}\), are the impedances at frequency ω of the circuit elements R, L, and C, respectively; \(\frac {1}{R}, \frac {1}{j\omega L}, j\omega C\) are the corresponding admittances; μ is a voltage gain; α is a current gain; g m is a transconductance, and r m is a transresistance. The crucial point again is that in terms of phasors, the branch equations become algebraic equations with complex coefficients in the phasor domain.

Table 4.4 Laws of elements in the time domain and phasor domain

Also, as shown in Fig. 4.35, it is common to visualize phasors as rotating counterclockwise. Hence, referring to the phasor domain constitutive relations for the inductor and capacitor, we say the inductor current phasor I L lags the inductor voltage phasor V L by 90° and the capacitor current phasor I C leads the capacitor voltage phasor V C by 90°. We will see in Sect. 4.4.2 that capacitive and inductive parasitic effects in physical memristors lead to “unpinching” of memristor hysteresis loops, due to the leading (or lagging) behavior of current and voltage variables (under sinusoidal excitation).
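These lead/lag relations follow directly from the admittances jωC and 1∕(jωL) in Table 4.4; a minimal numerical check (all values arbitrary):

```python
import cmath, math

omega, L, C = 1000.0, 1e-3, 1e-6
V = 5 * cmath.exp(1j * 0.4)   # an arbitrary voltage phasor applied to each element

IC = 1j * omega * C * V       # capacitor: I_C = (j*omega*C) V_C
IL = V / (1j * omega * L)     # inductor:  I_L = V_L / (j*omega*L)

# I_C leads V_C by 90 degrees; I_L lags V_L by 90 degrees.
assert abs(cmath.phase(IC) - cmath.phase(V) - math.pi / 2) < 1e-12
assert abs(cmath.phase(V) - cmath.phase(IL) - math.pi / 2) < 1e-12
```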

Thus we have in essence “resistive” circuits in the frequency domain, except now our resistances are in the form of frequency-dependent impedances. Therefore, techniques such as tableau analysis are applicable and to avoid repeating the concepts from Chap. 3, we will simply summarize the main ideas, by drawing an analogy with tableau analysis for resistive circuits.

Let \(\mathscr {N}_R\) be a linear time-invariant resistive circuit with a connected graph having n nodes and b branches. Suppose that we first replace a number of resistors of \(\mathscr {N}_R\) by inductors or capacitors, and second, drive the resulting circuit by sinusoidal sources all operating at the same frequency ω. Assume that the resulting circuit is in the sinusoidal steady-state and call the circuit N ω. We have chosen this label to emphasize that we consider its sinusoidal steady-state at frequency ω.

Linear time-invariant resistive circuit N R (see Eq. (3.115))

$$\displaystyle \begin{aligned} \begin{bmatrix}\mathbf{0}&\mathbf{0}&\mathbf{A}\\-{\mathbf{A}}^T&\mathbf{I}&\mathbf{0}\\\mathbf{0}&\mathbf{M}(t)&\mathbf{N}(t)\end{bmatrix}\begin{bmatrix}\mathbf{e}(t)\\\mathbf{v}(t)\\\mathbf{i}(t)\end{bmatrix}&=\begin{bmatrix}\mathbf{0}\\\mathbf{0}\\{\mathbf{u}}_s(t)\end{bmatrix}\end{aligned} $$
(4.127)
  1. 1.

    e(⋅), v(⋅), i(⋅), u s(⋅) are vector-valued functions of time.

  2. 2.

    The tableau matrix T has real entries.

  3. 3.

    N R is completely described by Eq. (4.127), i.e., a set of linear algebraic equations with real coefficients.

Linear time-invariant circuit N ω operating in the sinusoidal steady-state

$$\displaystyle \begin{aligned} \begin{bmatrix}\mathbf{0}&\mathbf{0}&\mathbf{A}\\-{\mathbf{A}}^T&\mathbf{I}&\mathbf{0}\\\mathbf{0}&\mathbf{M}(j\omega)&\mathbf{N}(j\omega)\end{bmatrix}&\begin{bmatrix}\mathbf{E}\\\mathbf{V}\\\bar{\mathbf{I}}\end{bmatrix} \\ &=\begin{bmatrix}\mathbf{0}\\\mathbf{0}\\{\mathbf{U}}_s\end{bmatrix}\end{aligned} $$
(4.128)
  1. 1.

    \(\mathbf {E},\mathbf {V},\bar {\mathbf {I}},{\mathbf {U}}_s\) are vectors whose components are phasors.

  2. 2.

    The tableau matrix T(jω) has complex entries in its bottom b rows.

  3. 3.

    N ω is completely described by Eq. (4.128), i.e., a set of linear algebraic equations with complex coefficients.

Moreover:

  1. 1.

    The superposition theorem holds for N ω: provided det[T(jω)] ≠ 0, the sinusoidal steady-state (at frequency ω) due to several independent sources (at frequency ω) is equal to the sum of the sinusoidal steady-states due to each independent source acting alone (see Sect. 3.6.1).

  2. 2.

    Thévenin-Norton equivalent: For example, if the DP characteristic of N ω at a pair of terminals 1,1′ is current-controlled, then the resulting one-port may be replaced by a Thévenin equivalent, but with a V oc that is the phasor representing the open-circuit voltage at 1,1′ and Z eq the impedance of N ω0 seen at 1,1′, where ω 0 is the particular forcing frequency at which the impedance is determined (see Sect. 3.6.2).

4.3.3 Laplace Transforms

In the preceding section, we studied linear time invariant circuits in the sinusoidal steady-state, and our main tool was phasor analysis. In this section, we continue to study linear time-invariant circuits, but we do it now under general excitation. We will again encounter a number of basic concepts and properties that are indispensable to the solution of many scientific problems.

Since the Laplace transform is a generalization of the phasor concept, we will avoid repetition and discuss in this section the main differences between the Laplace transform and phasors, through examples. In particular:

  1. 1.

    The Laplace transform can be utilized to obtain both the transient and steady-state response.

  2. 2.

    Inverse Laplace transforms (usually by partial fraction expansion) are needed to obtain the corresponding time-response.

Throughout this section, the variable s will be a complex variable expressed in standard form: \(s=\sigma +j \omega ,\sigma ,\omega \in \Re \). We view s as a point in the complex plane: σ is its abscissa and ω is its ordinate. The (one-sided) Laplace transform of a time domain function f(t) is defined as:

$$\displaystyle \begin{aligned}F(s)&\overset{\triangle}=\displaystyle\int\limits_{0^-}^\infty f(t)e^{-st}dt \end{aligned} $$
(4.129)

In the integral above, t is the integration variable and hence the integral depends only on the time function f(⋅) and on a particular value of s, the complex frequency. A few remarks:

  1. 1.

    The lower limit of integration is chosen to be 0 − so that whenever f(t) includes an impulse at the origin, it is included in the interval of integration (see Example 4.3.5).

  2. 2.

    The operation of taking the Laplace transform is denoted by \(\mathscr {L}\), thus we write: \(F(s)=\mathscr {L}\{f\}(s)\).

  3. 3.

    The operation of taking the inverse Laplace transform is denoted by \(\mathscr {L}^{-1}\): \(f(t)=\mathscr {L}^{-1}\{F\}(t)\).

  4. 4.

    If we take the Laplace transform of a voltage v(t) or current i(t), we denote them by V (s) and I(s). Thus we use uppercase letters to denote Laplace transforms.

Example 4.3.3

Show that the Laplace transform of the impulse function δ(t) is \(\mathscr {L}(\delta )=1\).

Solution

Let us approximate δ(t) by using the procedure from Sect. 4.2.1.3. Consider the unit area rectangular pulse p Δ(t):

$$\displaystyle \begin{aligned} p_\varDelta(t)&= \begin{cases} \frac{1}{\varDelta}& \text{for }0\leq t\leq\varDelta \\ 0& \text{elsewhere} \end{cases} \end{aligned} $$

Using p Δ in the definition of the Laplace transform in Eq. (4.129) and simplifying:

$$\displaystyle \begin{aligned} \displaystyle\int\limits_{0}^\infty p_\varDelta(t)e^{-st}dt&=\displaystyle\int\limits_{0}^\varDelta\frac{1}{\varDelta}e^{-st}dt \\ &=\frac{e^{-st}}{-s\varDelta}\Big\rvert_0^\varDelta \\ &=\frac{1-e^{-s\varDelta}}{s\varDelta} \end{aligned} $$

Now let Δ → 0, then p Δ(t) → δ(t) and \(\mathscr {L}\{p_\varDelta \}\rightarrow \mathscr {L}\{\delta \}\). Thus we have:

$$\displaystyle \begin{aligned} \mathscr{L}\{\delta\}&=\displaystyle\lim\limits_{\varDelta\rightarrow 0}\frac{1-e^{-s\varDelta}}{s\varDelta} \\ &=\displaystyle\lim\limits_{\varDelta\rightarrow 0}\frac{1-(1-s\varDelta+s^2\varDelta^2/2-\cdots)}{s\varDelta} \\ &=1 \end{aligned} $$

Example 4.3.3 shows the significance of the impulse response: since the Laplace transform of δ is unity, from a (complex) frequency standpoint, we say δ(t) contains “all frequencies.” Hence the impulse response of a linear time-invariant circuit (system) contains all information about the system.
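The limit computed in Example 4.3.3 is easy to verify numerically at any test point s; a sketch (the value of s is arbitrary):

```python
import cmath

s = 2.0 + 3.0j   # an arbitrary test point in the complex plane

# L{p_Delta} = (1 - e^{-s*Delta}) / (s*Delta)  ->  1  as Delta -> 0
for delta in (1e-1, 1e-3, 1e-6):
    val = (1 - cmath.exp(-s * delta)) / (s * delta)
    # the error shrinks roughly like |s|*delta/2, per the series expansion in the text

assert abs(val - 1) < 1e-5   # at delta = 1e-6, essentially 1 for every s
```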

There are also a variety of properties of Laplace transforms that parallel those of phasors: linearity, etc. But the uniqueness property of Laplace transforms is general in the sense that Eq. (4.129) establishes a one-to-one correspondence between f and F. This is a deep theorem of mathematical analysis, whose proof is beyond the scope of this text. But it is extremely useful and justifies the fact that we can transform a time-domain problem into a frequency-domain problem, solve it in the frequency domain, and then go back to the time-domain solution. The uniqueness of Laplace transforms guarantees that the procedure gives the solution of the original problem.

The important difference of Laplace transforms being able to “handle” initial conditions (as opposed to phasors) is illustrated by Example 4.3.4.

Example 4.3.4

Show that: \(\mathscr {L}\{\frac {d}{dt}f(t)\}=sF(s)-f(0^-)\).

Solution

Using integration by parts in the definition of the Laplace transform:

$$\displaystyle \begin{aligned} \displaystyle\int\limits_{0^-}^\infty \underbrace{e^{-st}}_{u}\underbrace{\overset{\bullet}{f}(t)\,dt}_{dv}&=\underbrace{e^{-st}}_{u}\underbrace{f(t)}_{v}\Big\rvert_{0^-}^\infty - \displaystyle\int\limits_{0^-}^\infty\underbrace{f(t)}_{v}\underbrace{(-se^{-st})\,dt}_{du} \\ &=-f(0^-)+s\displaystyle\int\limits_{0^-}^\infty f(t)e^{-st}dt \\ &= sF(s)-f(0^-) \end{aligned} $$
(4.130)

To obtain the final result, note that we have used the fact that Re(s) is sufficiently large so that \(f(t)e^{-st}\rightarrow 0\) as \(t\rightarrow\infty\). This is true for all non-pathological physical functions f(t).

Exercise 4.14 generalizes Example 4.3.4 to nth-order.

The analysis of a circuit by Laplace transforms yields the transform of the output variable. The next step is to go from the Laplace transform back to the time function, or as engineers say, from the frequency domain to the time domain. An extremely useful technique is the partial fraction expansion.

Suppose we are given a Laplace transform F 0(s) which is a rational function n 0(s)∕d 0(s), where n 0(s) and d 0(s) are polynomials with real coefficients. We further assume that n 0(s) and d 0(s) are coprime, that is, any nontrivial common factor has been canceled out.

If the degree of n 0 is greater than or equal to the degree of d 0, we first divide the polynomial n 0(s) by d 0(s) to obtain the quotient polynomial q(s) and the remainder polynomial r(s). For example:

$$\displaystyle \begin{aligned} \frac{2s^2+8s+7}{(s+1)(s+3)}&=2+\frac{1}{(s+1)(s+3)} \end{aligned} $$

with q(s) = 2, r(s) = 1. Since the property of linearity carries over to the Laplace transform from phasors:

$$\displaystyle \begin{aligned} \mathscr{L}^{-1}\left(\frac{2s^2+8s+7}{(s+1)(s+3)}\right)&=\mathscr{L}^{-1}(2)+\mathscr{L}^{-1}\left(\frac{1}{(s+1)(s+3)}\right) \end{aligned} $$

The inverse Laplace transform can be looked up from tables, but we know from Example 4.3.3 that:

$$\displaystyle \begin{aligned} \mathscr{L}^{-1}(2)&=2\delta(t) \end{aligned} $$

To determine the inverse Laplace transform of \(\frac {1}{(s+1)(s+3)}\), we know from basic algebra that:

$$\displaystyle \begin{aligned} \frac{1}{(s+1)(s+3)}&=\frac{A}{s+1}+\frac{B}{s+3} \end{aligned} $$

We can solve for A and B by any convenient technique. We thus have:

$$\displaystyle \begin{aligned} \frac{1}{(s+1)(s+3)}&=\frac{0.5}{s+1}-\frac{0.5}{s+3} \end{aligned} $$

From Laplace transform tables (or the reader can easily derive the expression below from the Laplace transform definition), we get:

$$\displaystyle \begin{aligned} \mathscr{L}^{-1}\left(\frac{k}{s+a}\right)&=ke^{-at}u(t) \end{aligned} $$

We insert the unit step function to remind ourselves that f(t) = 0 for t < 0 (we have not defined the double-sided Laplace transform). Thus:

$$\displaystyle \begin{aligned} \mathscr{L}^{-1}\left(\frac{2s^2+8s+7}{(s+1)(s+3)}\right)&=2\delta(t)+(0.5e^{-t}-0.5e^{-3t})u(t) \end{aligned} $$
(4.131)
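The expansion can be sanity-checked numerically: the two rational forms agree pointwise, and a direct numerical Laplace integral of the recovered time function reproduces the strictly proper part. A sketch (the test points, step size, and horizon are arbitrary):

```python
import math

F    = lambda s: 1.0 / ((s + 1) * (s + 3))       # strictly proper part of the transform
F_pf = lambda s: 0.5 / (s + 1) - 0.5 / (s + 3)   # its partial-fraction expansion

for s in (0.0, 1.0, 2.5, 10.0):
    assert abs(F(s) - F_pf(s)) < 1e-12

# Numerically evaluate L{0.5 e^{-t} - 0.5 e^{-3t}} at s = 2 by a Riemann sum.
s, dt, T = 2.0, 1e-4, 40.0
f = lambda t: 0.5 * math.exp(-t) - 0.5 * math.exp(-3 * t)
num = sum(f(k * dt) * math.exp(-s * k * dt) * dt for k in range(int(T / dt)))
assert abs(num - F(2.0)) < 1e-4   # F(2) = 1/15
```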

The subject of partial fraction expansion as applied to Laplace transforms can be found in any text on electrical engineering. Hence, we will not discuss the topic further and instead we will now illustrate how to reformulate a linear time-invariant circuit in the frequency domain using Laplace transforms, with Example 4.3.5.

Example 4.3.5

Reconsider the series RC circuit from Sect. 4.2.1.3. Derive the impulse response.

Solution

Consider the element law (following the passive sign convention) for the linear capacitor:

$$\displaystyle \begin{aligned} i&=C\frac{dv}{dt} \end{aligned} $$
(4.132)

Assuming zero initial conditions, taking Laplace transforms on both sides and using the differentiation rule, we get:

$$\displaystyle \begin{aligned} I(s)&=sCV(s) \end{aligned} $$
(4.133)

For a linear resistor, the V (s) − I(s) relationship is trivial: V (s) = RI(s). Therefore, the circuit in Sect. 4.2.1.3 can be transformed to the Laplace domain as shown in Fig. 4.37. As stated earlier, since the Laplace transform is a generalization of the phasor technique, KCL, KVL, etc. are all valid in the Laplace domain. Therefore, using the voltage divider (the impulse input has transform \(\mathscr {L}\{\delta \}=1\)) and simplifying:

$$\displaystyle \begin{aligned} V_C(s)&=\frac{1/sC}{R+1/sC} \\ &=\frac{1}{1+sRC} \\ &=\frac{1/RC}{s+1/RC} \end{aligned} $$
(4.134)

Using inverse Laplace transform:

$$\displaystyle \begin{aligned} v_C(t)&=\frac{1}{RC}e^{-t/RC}u(t) \end{aligned} $$
(4.135)

which is exactly Eq. (4.45), since τ = RC.

Fig. 4.37

Circuit for Example 4.3.5

Note Example 4.3.5 shows the Laplace transform is applicable even when the input is nonsinusoidal. Example 4.3.5 also shows that provided all time functions are 0 at t = 0 (equivalently, all initial conditions are zero at t = 0) the rules for manipulating phasors and the rules for manipulating Laplace transforms are identical, except for replacing jω by s. Example 4.3.6 further illustrates this point.

Example 4.3.6

Reconsider the RLC circuit from Example 4.3.2. Determine I L(s).

Solution

We can redraw the RLC circuit in the Laplace domain and solve for I L(s). But, let us simply take the differential equation from Example 4.3.2:

$$\displaystyle \begin{aligned} \frac{d^2}{dt^2}i_L(t)+2\alpha\overset{\bullet}{i}_L(t)+\omega_0^2i_L(t)&=\omega_0^2i_s(t) \end{aligned} $$

and take its Laplace transform (assuming zero initial conditions):

$$\displaystyle \begin{aligned} (s^2+2\alpha s+\omega_0^2)I_L(s)&=\omega_0^2I_s(s) \end{aligned} $$
(4.136)

We have used Exercise 4.14 for the Laplace transform of the second derivative. Simplifying:

$$\displaystyle \begin{aligned} I_L(s)&=\frac{\omega_0^2I_s(s)}{(s^2+2\alpha s+\omega_0^2)} \end{aligned} $$
(4.137)

The phasor Eq. (4.118) and the Laplace Eq. (4.136) have the exact same form except for jω being replaced by s. But, we would like to again emphasize that the two equations have different meanings: Eq. (4.118) is only valid for sinusoidal inputs at steady-state. The Laplace Eq. (4.136) is valid for arbitrary inputs. Moreover, Exercise 4.15 generalizes Example 4.3.6 to the case when the initial conditions are not zero.

Example 4.3.6 also shows an example of a network function . A detailed discussion is beyond the scope of this book but can be found in excellent references such as [12].

However, one can understand the concept by considering \(H(s)\overset {\triangle }=I_L(s)/I_s(s)\) in Eq. (4.137). Notice that H(s) (the current transfer function) depends only on the circuit parameters; it does not depend on I s(s) (the input). Thus, we will adopt the following general definition of a network function, which basically describes the properties of the circuit:

$$\displaystyle \begin{aligned} \text{Network Function}&\overset{\triangle}=\frac{\mathscr{L}(\text{zero-state response})}{\mathscr{L}(\text{input})} \end{aligned} $$
(4.138)

For example, Exercise 4.16 asks you to derive the input impedance of a gyrator, which is a network function.

4.4 Memristive Networks

We will now discuss memristive networks. We will split the discussion into two parts—discussion of ideal memristors and memristive devices. For the ideal memristors, we will introduce the Flux-Charge Analysis Method (FCAM) developed by Fernando Corinto and Mauro Forti [13], which helps us write a minimal number of ODEs for ideal memristive networks. For memristive devices, we will study some very fundamental properties related to sinusoidal excitation. We will also use only linear L, R, C in memristor networks.

4.4.1 Flux-Charge Analysis Method (FCAM)

Example 4.4.1 illustrates the main concept behind FCAM: the idea of an incremental flux (charge).

Example 4.4.1

Derive circuit equations for the L − M network in Fig. 4.38. Notice that we have a charge-controlled memristor.

Fig. 4.38

Circuit for Example 4.4.1

Solution

We can easily derive the normal form circuit equations by inspection:

$$\displaystyle \begin{aligned} \frac{di_L}{dt}&=\frac{-R(q_M)i_L}{L} {} \end{aligned} $$
(4.139)
$$\displaystyle \begin{aligned} \frac{dq_M}{dt}&\overset{\triangle}=-i_L {} \end{aligned} $$
(4.140)

with the given initial conditions. Notice, however, that Eq. (4.139) can be rewritten using the fact that ϕ = s(q M) (Sect. 1.9.4) for \(\mathscr {N}_M\):

$$\displaystyle \begin{aligned} \frac{di_L}{dt}&=\frac{1}{L}\frac{d}{dt}s(q_M(t)) \end{aligned} $$
(4.141)

Note from the passive sign convention: \(\frac {dq_M}{dt}=-i_L\). Integrating both sides from t 0 to t and applying the first fundamental theorem of calculus, we get:

$$\displaystyle \begin{aligned} i_L(t)-i_L(t_0)&=\frac{1}{L}\left(s(q_M(t))-s(q_M(t_0))\right) \end{aligned} $$
(4.142)

In other words, we have the following first-order ODE (with two initial conditions):

$$\displaystyle \begin{aligned} \frac{dq_M(t)}{dt}&=-\frac{s(q_M(t))}{L}+\frac{s(q_{M_0})}{L}-i_{L_0} \\ q_M(t_0)&=q_{M_0} \end{aligned} $$
(4.143)

Example 4.4.1 shows that for ideal memristor networks, an nth-order ODE in the (v, i) domain can be reduced to an (n − 1)th-order ODE in the (ϕ, q) domain. But, the order of complexity is still n, because we still need n initial conditions.
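The reduction in Example 4.4.1 can be checked numerically. The sketch below uses assumed values (L = 1 H, i L(t 0) = 1 A, q M(t 0) = 0) and a hypothetical constitutive relation ϕ = s(q) = q + q³, so that R(q) = ds∕dq = 1 + 3q² ≥ 0. It integrates the second-order system of Eqs. (4.139) and (4.140) and the reduced first-order ODE of Eq. (4.143) with forward Euler, and also verifies the flux balance of Eq. (4.142) along the trajectory:

```python
# Hypothetical memristor: phi = s(q) = q + q**3, so R(q) = ds/dq = 1 + 3*q**2.
def s(q):
    return q + q**3

def R(q):
    return 1.0 + 3.0*q**2

L, iL0, qM0 = 1.0, 1.0, 0.0      # assumed element value and initial conditions
dt, n = 1e-4, 10000              # integrate over t in [0, 1]

# Second-order system, Eqs. (4.139)-(4.140), in the (v, i) domain.
i, q = iL0, qM0
for _ in range(n):
    i, q = i + (-R(q)*i/L)*dt, q + (-i)*dt   # simultaneous Euler update

# Flux balance, Eq. (4.142): L*(i_L(t) - i_L(t0)) = s(q_M(t)) - s(q_M(t0)).
flux_mismatch = abs(L*(i - iL0) - (s(q) - s(qM0)))

# Reduced first-order ODE, Eq. (4.143), in the (phi, q) domain.
q2 = qM0
for _ in range(n):
    q2 += (-s(q2)/L + s(qM0)/L - iL0)*dt

charge_mismatch = abs(q - q2)    # both integrations should track the same q_M(t)
```

Both mismatches are of the order of the Euler step size, confirming that the reduced equation carries the same information as the original pair.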

The example also shows that the fundamental step in reducing the number of ODEs by one is integrating KVL over (t 0, t), referred to as KϕL [13]. Formally:

Definition 4.13 (KϕL)

The algebraic sum of incremental fluxes around any closed circuit is zero.

With respect to Example 4.4.1, Eq. (4.142) can be written as:

$$\displaystyle \begin{aligned} Li_L(t)-Li_L(t_0)-\left[s(q_M(t))-s(q_M(t_0))\right]&=0 \\ \phi_L(t;t_0)-\phi_M(t;t_0)&=0 \end{aligned} $$
(4.144)

where we have used the notation: \(\phi _L(t;t_0)\overset {\triangle }=Li_L(t)-Li_L(t_0)\) (similar notation for ϕ M(t;t 0)). Notice as expected KϕL is simply the equivalent of KVL in the flux domain: there is only one flux in the circuit of Fig. 4.38 since the voltage across both elements is equal. By duality, we have KqL:

Definition 4.14 (KqL)

The algebraic sum of incremental charges leaving any closed surface is zero.

Now that we have the laws of interconnections for ideal memristor networks, we can easily reformulate the fundamental circuit elements in the (ϕ, q) domain [13] as shown in Fig. 4.39. In Fig. 4.39, we have:

  (a) Ideal voltage source: ϕ(t;t 0) = ϕ e(t;t 0), ∀q e(t;t 0)

  (b) Ideal current source: q(t;t 0) = q a(t;t 0), ∀ϕ a(t;t 0)

  (c) R: ϕ R(t;t 0) = Rq R(t;t 0)

  (d) L: \(\phi _L(t)=L\frac {d}{dt}(q_L(t))\quad \phi _L(t;t_0)=-\phi _{L_0}+L\frac {d}{dt}(q_L(t;t_0))\)

  (e) C: \(q_C(t)=C\frac {d}{dt}(\phi _C(t))\quad q_C(t;t_0)=-q_{C_0}+C\frac {d}{dt}(\phi _C(t;t_0))\)

  (f) Flux-controlled \(\mathscr {N}_M\): \(q_M(t;t_0)=f(\phi _M(t;t_0)+\phi _{M_0})-q_{M_0}\)

  (g) Charge-controlled \(\mathscr {N}_M\): \(\phi _M(t;t_0)=h(q_M(t;t_0)+q_{M_0})-\phi _{M_0}\)

Although we could have reduced the number of relationships above by invoking duality, we would like the reader to have a complete reference for FCAM.

Fig. 4.39
figure 39

The various two-terminal circuit element equivalents in the (ϕ, q) domain

4.4.2 Memristive Devices

It is possible to systematically derive differential-algebraic equations for memristive devices based on tableau analysis; see [27]. But since this topic is beyond the scope of this book, we will simply obtain the circuit equations for networks with memristive devices in the (v, i) domain by inspection.

We will also focus on passivity and frequency-characteristics Footnote 19 theorems for the memristor. These theorems will help us identify physical memristors. We will not rigorously prove these theorems, as all the proofs can be found in [9]. Rather, we will give examples from physical memristors. We will state all theorems for current-controlled (recall Eq. (1.86)) memristive devices:

$$\displaystyle \begin{aligned} \dot{\mathbf{x}}&=f(\mathbf{x},i,t) \\ v&=R(\mathbf{x},i,t)i {} \end{aligned} $$
(4.145)

The theorems are valid for voltage-controlled memristive devices, by duality.

Theorem 4.5 (Passivity Criterion)

Let a current-controlled memristive one-port be time-invariant and let its nonlinear memristance function R(⋅) satisfy the constraint R(x, i) = 0 only if i = 0. Then the one-port is passive iff R(x, i) ≥ 0 for any admissible input current i(t), for all t ≥ t 0 , where t 0 is chosen such that \(x(t_0)=x^*\) , where \(x^*\) is the state of minimum energy storage.

This theorem essentially says that for a memristor to be passive, its (v, i) characteristic must lie in the first and third quadrants. For example, consider the discharge tube v − i characteristic from Chap. 1, reproduced in Fig. 4.40. Notice how the Lissajous figure is confined to the first and third quadrants; hence the discharge tube is a passive memristor. However, in each quadrant, the curve is passive but not strictly passive.
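As a numerical illustration of the passivity criterion, consider a hypothetical current-controlled memristive device of the form of Eq. (4.145), with state equation ẋ = −x + i and memristance R(x) = 1 + x² ≥ 0 (this model and its element values are our own choice, not a fit to the discharge tube). The sketch below drives it with i(t) = cos t and confirms that the instantaneous power v⋅i is never negative, and that v ≈ 0 whenever i ≈ 0, i.e., the Lissajous figure is pinched and confined to quadrants I and III:

```python
import math

dt, n = 1e-3, 40000              # roughly six periods of i(t) = cos(t)
x = 0.0
min_power, pinch = float("inf"), 0.0
for k in range(n):
    t = k*dt
    i = math.cos(t)
    v = (1.0 + x*x)*i            # v = R(x, i)*i with R = 1 + x**2 >= 0
    min_power = min(min_power, v*i)
    if abs(i) < 1e-3:            # near a zero crossing of the input current...
        pinch = max(pinch, abs(v))   # ...the voltage must also be near zero
    x += (-x + i)*dt             # state equation x' = f(x, i) = -x + i (Euler)
```

Since v⋅i = R(x)i² with R ≥ 0, passivity holds pointwise, which is exactly what the sampled minimum power confirms.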

Fig. 4.40
figure 40

Measured discharge tube characteristics

Theorem 4.6 (DC Characteristics)

A time-invariant current-controlled memristive one-port under DC operation is equivalent to a time-invariant current-controlled nonlinear resistor if f(x, I) = 0 has a unique solution x = X(I) for each value of \(I\in \Re \) , such that the equilibrium point x = X(I) is globally asymptotically stable.

An example of DC characteristics is shown in Fig. 4.41 for an emulated memristor [23] that is used in the Muthuswamy-Chua (Sect. 5.4.1) chaotic circuit.

Fig. 4.41
figure 41

Measured DC memristor characteristics. Experimental oscilloscope picture has been offset for clarity, the x axis is current mapped to voltage. We have marked axes in blue

Theorem 4.7 (Double-Valued Lissajous Figure)

A current-controlled memristive one-port under periodic operation (i.e., the response is periodic with the same period as the input) with \(i(t)=I\cos {}(\omega t)\) always gives rise to a v − i Lissajous figure whose voltage v is at most a double-valued function of i.

Figure 4.40 shows the classic pinched-hysteresis fingerprint of a memristor.

Theorem 4.8 (Limiting Linear Characteristics)

If a time-invariant current-controlled memristive one-port described by Eq.(4.145) is BIBO stable, then under periodic operation it degenerates into a linear time-invariant resistor as the excitation frequency increases towards infinity.

The effect of limiting linear characteristics is shown in Fig. 4.42. Notice from Fig. 4.42 that we have “lost” the limiting linear characteristics as the frequency increases. This does not imply Theorem 4.8 is invalid. Rather, a physical memristor is not exactly modeled by Eq. (4.145).
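Theorem 4.8 can be illustrated with the same kind of hypothetical device model used earlier (ẋ = −x + i, R(x) = 1 + x², i(t) = cos ωt; all values are our own choice, not a thermistor fit). As ω grows, the state ripple shrinks like 1∕ω, so the steady-state loop collapses toward a straight line through the origin:

```python
import math

def loop_residual(omega, n_per=2000, periods=20):
    """Relative deviation of the steady-state v-i loop from the best straight line."""
    dt = 2*math.pi/(omega*n_per)
    x = 0.0
    vs, cur = [], []
    for k in range(periods*n_per):
        i = math.cos(omega*k*dt)
        v = (1.0 + x*x)*i                 # hypothetical memristive device
        if k >= (periods - 1)*n_per:      # keep the last period only
            vs.append(v)
            cur.append(i)
        x += (-x + i)*dt                  # forward Euler on x' = -x + i
    # Least-squares slope of the loop, then worst-case deviation from it.
    Rfit = sum(v*i for v, i in zip(vs, cur))/sum(i*i for i in cur)
    return max(abs(v - Rfit*i) for v, i in zip(vs, cur))/max(abs(v) for v in vs)

residuals = [loop_residual(w) for w in (1.0, 10.0, 100.0)]
```

The residuals shrink monotonically with frequency, i.e., the loop degenerates into a linear resistor, exactly as the theorem predicts for the idealized model.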

Fig. 4.42
figure 42

Experimental measurements and corresponding simulated results (red) [28] of Thermometric’s NTC diode thermistors (NTC-3.896KGJG), illustrating Theorem 4.8. The input is a sinusoidal source with amplitude A = 5 V. The experiment on the NTC thermistor was conducted at room temperature. The parameters used for simulations of the generic memristor device model of the NTC thermistor are: T 0N = 300 K, R 0N = 3.89 K Ω, H CN = 0.14 J/K, δ N = 0.1 W/K, β N = 5 × 105 K. For parasitic effects (Fig. 4.43), C P = 5 nF, L p = 2 mH, E P = 0 V, I P = 0 A

Recall from Sect. 1.7.1 about the essence of modeling: we extract the essential factors of the device based on the circuit in question. In the case of physical memristive devices, we need a generic device model since measured pinched-hysteresis loops need not pass through the origin due to parasitics. This generic device model is shown in Fig. 4.43 [28] and has been used in the simulation results for Fig. 4.42. The NTC thermistor model used in the circuit of Fig. 4.43 for obtaining Fig. 4.42 is given by Eqs. (4.146) and (4.147).

$$\displaystyle \begin{aligned} W(T_N)&\overset{\triangle}=\left(R_{0N}e^{-\beta_N\left(\frac{1}{T_N}-\frac{1}{T_{0N}}\right)}\right)^{-1} \\ i_N&=W(T_N)v_N \end{aligned} $$
(4.146)
$$\displaystyle \begin{aligned} \frac{dT_N}{dt}&=\frac{\delta_N}{H_{\mathrm{CN}}}(T_{0N}-T_N)+\frac{W(T_N)}{H_{\mathrm{CN}}}v_N^2 {} \end{aligned} $$
(4.147)
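A forward-Euler sketch of Eqs. (4.146) and (4.147), using the parameter values from the Fig. 4.42 caption, shows the thermistor settling into a small self-heating oscillation just above the ambient temperature T 0N. The 5 V, 0.5 Hz sinusoidal drive below is our own test choice, not one of the measured operating points:

```python
import math

# Parameter values from the Fig. 4.42 caption.
T0N, R0N = 300.0, 3890.0             # K; ohms (R_0N = 3.89 kOhm)
HCN, deltaN, betaN = 0.14, 0.1, 5e5  # J/K, W/K, K

def W(T):                            # memductance, Eq. (4.146)
    return 1.0/(R0N*math.exp(-betaN*(1.0/T - 1.0/T0N)))

A, f = 5.0, 0.5                      # assumed sinusoidal drive v_N(t)
dt, n = 1e-3, 60000                  # 60 s >> thermal time constant H_CN/delta_N
TN = T0N
for k in range(n):
    vN = A*math.sin(2*math.pi*f*k*dt)
    # Eq. (4.147): Newton cooling plus Joule self-heating.
    TN += ((deltaN/HCN)*(T0N - TN) + (W(TN)/HCN)*vN*vN)*dt
iN = W(TN)*vN                        # i_N = W(T_N) v_N vanishes whenever v_N = 0
```

Because i N = W(T N)v N, the loop is pinched at the origin by construction; the state T N carries the memory.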

We will now examine another example of Fig. 4.43; a detailed analysis can be found in [28]. Consider two simulated pinched-hysteresis loops for the discharge tube memristor, shown in Fig. 4.44a and b [24]. Experimental confirmation can be found in [24].

Fig. 4.43
figure 43

Generic memristor device model

Fig. 4.44
figure 44

Simulation parameters for the discharge tube model in Eqs. (4.148) and (4.149) are: β = 0.1, α = 0.1, F = 1, ω = 0.063. (a) Simulated i M − v M curve for an inductor L p = 5 H in series with a memristive discharge tube. The arrows indicate the trajectory of (i M(t), v M(t)) as t →∞. We have assumed E p = 0, I p = 0 and \(|Z_{C_p}(j\omega )|\rightarrow \infty \). (b) Simulated i M − v M curve for a capacitor C p = 1 F in parallel with a memristive discharge tube. The arrows indicate the trajectory of (i M(t), v M(t)) as t →∞. We have assumed E p = 0, I p = 0 and \(|Z_{L_p}|\rightarrow 0\)

If we have a parasitic inductor in series with a memristor, as in Fig. 4.44a, we know that an inductor causes current to start lagging voltage. Hence when i M = 0, if v M > 0, then v M should be increasing because current is lagging voltage. Thus \(\overset {\bullet }{v}_M>0\). Similarly, when i M = 0 and v M < 0, then v M should continue to decrease and thus \(\overset {\bullet }{v}_M<0\). Hence the parasitic pinched-hysteresis loop ends up having no “crossings.” A dual argument applies to Fig. 4.44b but in this case we get two “crossings.”

Another very important point about modeling: it is irrelevant with respect to terminal behavior how the internal state of a memristor is represented. For example, there are two known internal state variables for the memristive model of a discharge tube: the number of conduction electrons n [11]:

$$\displaystyle \begin{aligned} v&=M(n)i {} \end{aligned} $$
(4.148)
$$\displaystyle \begin{aligned} \frac{dn}{dt}&=-\beta n+\alpha M(n)i^2 {} \end{aligned} $$
(4.149)

or tube temperature T [21]:

$$\displaystyle \begin{aligned} R(T)&\overset{\triangle}=a_5T^{-3/4}\exp(ea_6/2kT) \end{aligned} $$
(4.150)
$$\displaystyle \begin{aligned} v(t)&=R(T)i(t) {} \end{aligned} $$
(4.151)
$$\displaystyle \begin{aligned} \frac{dT}{dt}&=a_1[i^2R(T)-a_2\exp(-ea_3/kT)-a_4\exp(T-T_0)] {} \end{aligned} $$
(4.152)

The values of the constants and the physical meaning of the variables in Eqs. (4.151) and (4.152) depend on whether the discharge tube being modeled is a high-pressure or a low-pressure lamp. For instance, for high-pressure lamps, T is the gas temperature T g and T 0 is the tube-wall temperature, with a 1 = 20976.1, a 2 = 54350.4, a 3 = 0.986, a 4 = 0.128, a 5 = 2012.0, a 6 = 0.375, T 0 = 1000 K. Here e = 1.6 × 10−19 C is the electron charge and k = 1.38 × 10−23 J/K is Boltzmann’s constant.
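Theorem 4.6 can be exercised directly on this high-pressure lamp model: setting dT∕dt = 0 in Eq. (4.152) and solving for the equilibrium temperature at a chosen DC current gives one point V = R(X(I))⋅I of the equivalent DC resistor characteristic. In the sketch below, the 0.1 A operating point and the bisection bracket are our own choices:

```python
import math

# High-pressure lamp constants for Eqs. (4.150)-(4.152).
a1, a2, a3 = 20976.1, 54350.4, 0.986
a4, a5, a6 = 0.128, 2012.0, 0.375
T0 = 1000.0
e, k = 1.6e-19, 1.38e-23

def R(T):                        # Eq. (4.150)
    return a5*T**(-0.75)*math.exp(e*a6/(2*k*T))

def f(T, I):                     # bracketed term of Eq. (4.152): dT/dt = a1*f
    return I*I*R(T) - a2*math.exp(-e*a3/(k*T)) - a4*math.exp(T - T0)

def dc_point(I, lo=T0, hi=T0 + 100.0):
    # f(lo) > 0 (net heating) and f(hi) < 0 (net cooling): bisect for dT/dt = 0.
    for _ in range(80):
        mid = 0.5*(lo + hi)
        if f(mid, I) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)

I = 0.1                          # assumed 100 mA DC operating point
Tstar = dc_point(I)              # equilibrium temperature X(I)
V = R(Tstar)*I                   # one point of the DC characteristic V = R(X(I))*I
```

Sweeping I and repeating the bisection traces out the full DC V–I curve of the equivalent nonlinear resistor.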

Irrespective of the choice of the internal state variable for the memristive model of the discharge tube, the v − i terminal behavior still shows pinched-hysteresis. For investigating the parasitic behavior, we chose the simpler of the two models: the internal state being a function of the number of conduction electrons [24]. This point bolsters our theme of modeling throughout the book, which is summarized by a quote from Einstein: “It can scarcely be denied that the supreme goal of theory is to make irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience” [2].

Theorem 4.9 (Small-Signal AC Characteristics)

If a time-invariant current-controlled memristive one-port is globally asymptotically stable for all DC input current I, then its small-signal equivalent circuit about the DC operating point is shown in Fig. 4.45 , with a small-signal impedance given by:

$$\displaystyle \begin{aligned} Z_Q(s)&\overset{\triangle}=\frac{\varDelta V(s)}{\varDelta I(s)}=\frac{\partial R(X,I)I}{\partial i} + \frac{\beta_1 s^{n-1}+\beta_2 s^{n-2}+\cdots+\beta_{n-1}s+\beta_n}{s^{n}+\alpha_1 s^{n-1}+\cdots+\alpha_{n-1}s+\alpha_n} \end{aligned} $$
(4.153)
Fig. 4.45
figure 45

The small-signal AC equivalent circuit for Eq. (4.145)

A small-signal equivalent for the thermistor is shown in Fig. 4.46, where:

$$\displaystyle \begin{aligned} C_1&=\frac{C}{2\alpha PR(T)}\overset{\triangle}=\hat{C}_1(T,I) \\ R_1&=\frac{2\alpha PR(T)}{\delta_N-\alpha P}\overset{\triangle}=\hat{R}_1(T,I) \\ \alpha&\overset{\triangle}=-\frac{\beta_N}{T^2}<0,\quad P\overset{\triangle}=VI=R(T)I^2 \end{aligned} $$

Since C 1 is negative, the thermistor is inductive under small-signal operation.

Fig. 4.46
figure 46

The small-signal AC equivalent circuit of a thermistor [9]

The reader should hence realize from this section that a memristor is described by two concepts: memory and resistance. Memory occurs in the form of hysteresis in a v − i plot, resistance in the form of pinching behavior at the origin in the v − i plot. Note that memory need not imply “storage” in the sense of a capacitor or inductor. Rather, a memristor’s resistance (conductance) depends on past history of a particular state variable.

Therefore, in conclusion to this section, we have the following working hypothesis for memristors:

Since a memristor is described by two concepts: memory and resistance, memristor physics cannot be fully explained by electromagnetic field theory. Specifically, the memristor state equation requires another branch of science. This is in sharp contrast to resistors, capacitors, and inductors, whose material behavior is the subject of electromagnetic fields in matter (conductive, dielectric, and ferromagnetic media respectively).

For example:

  1. The Josephson junction's ideal menductance is described using superconductivity (and hence quantum mechanics).

  2. The discharge tube's state equation is described using plasma physics.

  3. The pn-junction diode's memristance requires junction physics. In fact, the memristance arises because the semiconductor bulk resistance is not a constant, but a function of the charge flowing through it [11, 26].

We encourage readers to rigorously investigate and prove or disprove the hypothesis above.

4.5 Energy Approach: Lagrangian and Hamiltonian

In this section,Footnote 20 we will start out by discussing energy expressions for two-terminal resistors, capacitors, and inductors.Footnote 21 As examples, we will obtain system equations for a circuit using the Lagrangian and Hamiltonian. The purpose of doing so is to provide the reader with a third approach (in addition to time- and frequency-domain methods) to writing circuit equations.

A key aspect of the Lagrangian and Hamiltonian frameworks is that they bring to the forefront one of the most fundamental concepts in physics: energy. A second motivation is that the energy-based approach helps us view a circuit as a (usually simpler) set of subsystems that exchange energy among themselves and with the environment. Unfortunately, we can only scratch the surface of this fascinating topic in this section. The interested reader is referred to [19] and [17] as starting points.

We know from basic physics that energy is defined as the integral of power:

$$\displaystyle \begin{aligned} w(t_1,t_2)&=\displaystyle\int\limits_{t_1}^{t_2}p(\tau)d\tau \end{aligned} $$
(4.154)

From the definition of power for a two-terminal element, we get:

$$\displaystyle \begin{aligned} w(t_1,t_2)&=\displaystyle\int\limits_{t_1}^{t_2}v(\tau)i(\tau)d\tau \end{aligned} $$
(4.155)

With respect to a resistor, Eq. (4.155) would imply that no energy is stored. For example, for a linear resistor, we get:

$$\displaystyle \begin{aligned} w_R(t_1,t_2)&=\displaystyle\int\limits_{t_1}^{t_2}[i(\tau)R]i(\tau)d\tau \\ &=R\displaystyle\int\limits_{t_1}^{t_2}i^2(\tau)d\tau \\ &=\frac{1}{R}\displaystyle\int\limits_{t_1}^{t_2}v^2(\tau)d\tau \end{aligned} $$
(4.156)

If R > 0, the energy is dissipated usually in the form of heat and is lost as far as the circuit is concerned. Such an element is therefore said to be lossy.

In contrast, capacitors and inductors store energy. The energy w c(t 1, t 2) entering a charge-controlled capacitor during any time interval [t 1, t 2] is independent of the capacitor voltage or current waveforms: It is uniquely determined by the capacitor charge at the end points, namely, q(t 1) and q(t 2):

$$\displaystyle \begin{aligned} w_c(t_1,t_2)&=\displaystyle\int\limits_{t_1}^{t_2}\hat{v}(q(t))\frac{dq}{dt}dt \\ &=\displaystyle\int\limits_{q(t_1)}^{q(t_2)}\hat{v}(q)dq \end{aligned} $$
(4.157)

Suppose we have a linear capacitor of C farads with an initial voltage v(t 1) = V  and initial charge q(t 1) = Q = CV  at t = t 1. Let the capacitor be connected to an external circuit at t = t 1. The energy entering the capacitor during [t 1, t 2] is given by Eq. (4.157):

$$\displaystyle \begin{aligned} w_C(t_1,t_2)&=\frac{1}{2C}[q^2(t_2)-Q^2] \\ &=\frac{1}{2}C[v^2(t_2)-V^2] \end{aligned} $$
(4.158)

Note that whenever q(t 2) < Q or v(t 2) < V , then w C(t 1, t 2) < 0. This means energy is actually being sent out of the capacitor and returned to the external circuit. It follows from Eq. (4.158) that w C(t 1, t 2) is most negative when q(t 2) = v(t 2) = 0, whereupon \(w_c(t_1,t_2)=-\frac {Q^2}{2C}=-\frac {1}{2}CV^2\). Since this represents the maximum amount of energy that could be extracted from the capacitor, it is natural to say that an energy equal to

$$\displaystyle \begin{aligned} \mathscr{E}_C(Q)&=\frac{Q^2}{2C} \\ &=\frac{1}{2}CV^2 \end{aligned} $$
(4.159)

is stored in a linear capacitor C having an initial voltage v(t 1) = V  or initial charge q(t 1) = Q = CV .
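The path-independence claim behind Eq. (4.157) is easy to check numerically. The sketch below uses a hypothetical nonlinear capacitor \(\hat{v}(q)=q+q^3\) (our own choice of constitutive relation and endpoints) and moves the charge from q = 0.2 C to q = 0.8 C along two different waveforms; the trapezoidal estimate of \(\int \hat{v}(q)\,dq\) agrees for both paths and matches the closed-form integral:

```python
import math

def vhat(q):                     # hypothetical nonlinear capacitor v = vhat(q)
    return q + q**3

def energy(path):
    """Trapezoidal estimate of w_c = integral of vhat(q) dq along a sampled path q(t)."""
    return sum(0.5*(vhat(a) + vhat(b))*(b - a) for a, b in zip(path, path[1:]))

n = 20000
ts = [k/n for k in range(n + 1)]
path1 = [0.2 + 0.6*t for t in ts]                              # monotone ramp
path2 = [0.2 + 0.6*t + 0.1*math.sin(2*math.pi*t) for t in ts]  # wiggly detour, same endpoints

w1, w2 = energy(path1), energy(path2)
exact = (0.8**2 - 0.2**2)/2 + (0.8**4 - 0.2**4)/4              # closed-form integral
```

The two waveforms deliver exactly the same energy because only the endpoint charges q(t 1) and q(t 2) matter.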

By duality, an energy equal to:

$$\displaystyle \begin{aligned} \mathscr{E}_L(\phi)&=\frac{1}{2L}\phi^2 \\ &=\frac{1}{2}LI^2 \end{aligned} $$
(4.160)

is stored in a linear inductor L having an initial current i(t 1) = I or initial flux \(\phi (t_1)=\varPhi =LI\).

Now that we have expressions for the energy stored in a (linear) capacitor or inductor, we need to understand the meaning of “kinetic” and “potential” energy in electric circuits, before we can discuss how to obtain circuit equations via the Lagrangian and the Hamiltonian. To do this, we will appeal to the reader’s “natural intelligence” with respect to (translational) mechanical systems. Consider the following (we will again assume all mechanical elements are linear and we will not worry about relativistic effects):

  • m (mass)—Characteristic Equation: p = mv, p: linear momentum, v: velocity

  • k (Spring constant)—Characteristic Equation: F = kx, F :  force, x: displacement

We know the energy expressions for a mass and spring as:

  • m: \(\mathscr {E}_m=\frac {1}{2}mv^2\)

  • k: \(\mathscr {E}_k=\frac {1}{2}kx^2\)

Obviously, a moving mass has a kinetic energy \(\mathscr {E}_m\) and a compressed spring has a potential energy \(\mathscr {E}_k\). Now consider the energy expressions for L and C:

  • L: \(\mathscr {E}_L=\frac {1}{2}LI^2\)

  • C: \(\mathscr {E}_C=\frac {1}{2}CV^2\)

It should now be clear that since an inductor’s stored energy is due to current or moving charge, our mechanical analog of an inductor is the mass. Since a capacitor’s stored energy is due to electrostatic potential, our mechanical analog of a capacitor is the spring. Hence (↔ stands for analog):

$$\displaystyle \begin{aligned} p\text{ (momentum)} &\longleftrightarrow \phi \text{ (flux)} \end{aligned} $$
(4.161)
$$\displaystyle \begin{aligned} v\text{ (velocity)} &\longleftrightarrow i \text{ (current)} \end{aligned} $$
(4.162)
$$\displaystyle \begin{aligned} x\text{ (displacement)} &\longleftrightarrow q \text{ (charge)} \end{aligned} $$
(4.163)
$$\displaystyle \begin{aligned} F\text{ (force)} &\longleftrightarrow v \text{ (voltage)} \end{aligned} $$
(4.164)

Next, let x = [x 1, ⋯ , x n]T denote a column vector and V (x) denote a scalar function \(V:\Re ^n\rightarrow \Re \). The gradient of V (x) with respect to x is denoted by:

$$\displaystyle \begin{aligned}\nabla V_x(\mathbf{x})&\overset{\triangle}=\begin{bmatrix} \frac{\partial V}{\partial x_1} \\ \frac{\partial V}{\partial x_2} \\ \vdots \\ \frac{\partial V}{\partial x_n} \end{bmatrix} \end{aligned} $$
(4.165)

Mechanically, the Lagrangian and Hamiltonian are described in terms of generalized coordinates. In the case of electric circuits, we will use q (capacitor charge(s)) and consequently \(\overset {\bullet }{\mathbf {q}}\) as our generalized coordinates. The formalism requires that the Lagrangian L be expressed in terms of q and \(\overset {\bullet }{\mathbf {q}}\):

$$\displaystyle \begin{aligned} L(\mathbf{q},\overset{\bullet}{\mathbf{q}})&=\mathscr{E}_L-\mathscr{E}_C \end{aligned} $$
(4.166)

where \(\mathscr {E}_L\) represents the total energy stored in inductor(s) and \(\mathscr {E}_C\) represents the total energy stored in capacitor(s). Notice this is equivalent to the definition from mechanics, if we consider energy stored in inductor(s) as “kinetic” energy and energy stored in capacitor(s) as “potential” energy.

The total energy of the capacitors \(\mathscr {E}_C\) can be readily expressed in terms of charge:

$$\displaystyle \begin{aligned} \mathscr{E}_C(\mathbf{q})&=\displaystyle\sum_{n=1}^{N_C}\frac{q_n^2}{2C_n} \end{aligned} $$
(4.167)

where N C is the total number of capacitors in the circuit. However, the total energy of the inductors \(\mathscr {E}_L\) is usually expressed in terms of inductor currents i:

$$\displaystyle \begin{aligned} \mathscr{E}_L(\mathbf{i})&=\displaystyle\sum_{n=1}^{N_L}\frac{1}{2}L_ni_n^2 \end{aligned} $$
(4.168)

where N L is the total number of inductors in the circuit. We must therefore first express the inductor currents i in terms of \(\overset {\bullet }{\mathbf {q}}\):

$$\displaystyle \begin{aligned} \mathbf{i}&=\mathbf{A}\overset{\bullet}{\mathbf{q}} \end{aligned} $$
(4.169)

where A is an N L × N C matrix. This can be done using KCL (as shown in Example 4.5.1). Now, we can write Lagrange’s equations in terms of the Lagrangian:

$$\displaystyle \begin{aligned}\frac{d}{dt}\nabla L_{\overset{\bullet}{q}}(\mathbf{q},\overset{\bullet}{\mathbf{q}})-\nabla L_q(\mathbf{q},\overset{\bullet}{\mathbf{q}})&=\mathbf{0} \end{aligned} $$
(4.170)

where \(L(\mathbf {q},\overset {\bullet }{\mathbf {q}})=\mathscr {E}_L(\mathbf {A}\overset {\bullet }{\mathbf {q}})-\mathscr {E}_C(\mathbf {q})\).

Example 4.5.1

Write system equations for the circuit in Fig. 4.47 using the Lagrangian, for t ≥ 0. Assume the inductors have initial current i 1(0), i 2(0) and capacitors have initial charge q 1(0), q 2(0) at t = 0.

Fig. 4.47
figure 47

Circuit for Example 4.5.1

Solution

For the circuit, we have:

$$\displaystyle \begin{aligned} \mathscr{E}_L-\mathscr{E}_C&=\frac{1}{2}L_1i_1^2+\frac{1}{2}L_2i_2^2-\frac{q_1^2}{2C_1}-\frac{q_2^2}{2C_2}\end{aligned} $$
(4.171)

We need to rewrite the energy expression above in terms of \((\mathbf {q},\overset {\bullet }{\mathbf {q}})\) for the Lagrangian, where \(\mathbf {q}=\begin {bmatrix}q_1\\q_2\end {bmatrix}\). From KCL:

$$\displaystyle \begin{aligned} i_1&=-\overset{\bullet}{q}_1-\overset{\bullet}{q}_2 \\ i_2&=\overset{\bullet}{q}_2 \end{aligned} $$
(4.172)

Thus the Lagrangian for the circuit is:

$$\displaystyle \begin{aligned} L(\mathbf{q},\overset{\bullet}{\mathbf{q}})&=\frac{1}{2}L_1(\overset{\bullet}{q}_1+\overset{\bullet}{q}_2)^2+\frac{1}{2}L_2\overset{\bullet}{q}_2^2-\frac{q_1^2}{2C_1}-\frac{q_2^2}{2C_2} \end{aligned} $$
(4.173)

Hence we have:

$$\displaystyle \begin{aligned} \frac{d}{dt}\nabla L_{\overset{\bullet}{q}}(\mathbf{q},\overset{\bullet}{\mathbf{q}})-\nabla L_q(\mathbf{q},\overset{\bullet}{\mathbf{q}})&=\frac{d}{dt}\begin{bmatrix}\frac{\partial L}{\partial \overset{\bullet}{q}_1} \\ \frac{\partial L}{\partial \overset{\bullet}{q}_2}\end{bmatrix} - \begin{bmatrix}\frac{\partial L}{\partial q_1} \\ \frac{\partial L}{\partial q_2}\end{bmatrix} \\ &=\frac{d}{dt}\begin{bmatrix}L_1(\overset{\bullet}{q}_1+\overset{\bullet}{q}_2) \\ L_1(\overset{\bullet}{q}_1+\overset{\bullet}{q}_2)+L_2\overset{\bullet}{q}_2\end{bmatrix} - \begin{bmatrix}-\frac{q_1}{C_1} \\-\frac{q_2}{C_2}\end{bmatrix} \\ &=\begin{bmatrix}-L_1\overset{\bullet}{i}_1+\frac{q_1}{C_1} \\-L_1\overset{\bullet}{i}_1+L_2\overset{\bullet}{i}_2+\frac{q_2}{C_2}\end{bmatrix} \end{aligned} $$
(4.174)

Therefore Lagrange’s equations give us:

$$\displaystyle \begin{aligned} -L_1\overset{\bullet}{i}_1+\frac{q_1}{C_1} &= 0 \\ -L_1\overset{\bullet}{i}_1+L_2\overset{\bullet}{i}_2+\frac{q_2}{C_2}&=0 \end{aligned} $$
(4.175)

Notice Lagrange’s equations give rise to the KVL equations for the circuit.

By applying a (Legendre) transformation to Lagrange's Eq. (4.170), we get the Hamiltonian. We first define ψ as the conjugate momenta to q:

$$\displaystyle \begin{aligned} \boldsymbol{\psi}&\overset{\triangle}=\nabla_{\overset{\bullet}{q}}L(\mathbf{q},\overset{\bullet}{\mathbf{q}}) \end{aligned} $$
(4.176)

Notice that ψ has the same number of components as q, i.e., N C components while there are N L inductor fluxes: \(\boldsymbol {\phi }=\nabla _i\mathscr {E}_L(\mathbf {i})\). In general, ψ n ≠ ϕ n. Specifically, using the chain rule:

$$\displaystyle \begin{aligned} \boldsymbol{\psi}&=\nabla_{\overset{\bullet}{q}}L(\mathbf{q},\overset{\bullet}{\mathbf{q}}) \\ &=\nabla_{\overset{\bullet}{q}}\mathscr{E}_L(\mathbf{A}\overset{\bullet}{\mathbf{q}}) \\ &=\nabla_{\overset{\bullet}{q}}(\mathbf{A}\overset{\bullet}{\mathbf{q}})\nabla_{\mathbf{A}\overset{\bullet}{\mathbf{q}}}\mathscr{E}_L(\mathbf{A}\overset{\bullet}{\mathbf{q}}) \\ &={\mathbf{A}}^T\nabla_i\mathscr{E}_L(\mathbf{i}) \end{aligned} $$
(4.177)

Thus: ψ = A T ϕ. The Hamiltonian formalism requires that the Hamiltonian function H be expressed in terms of the generalized coordinates q (capacitor charges) and their conjugate momenta ψ:

$$\displaystyle \begin{aligned} H(\mathbf{q},\boldsymbol{\psi})&=\mathscr{E}_L(\boldsymbol{\psi})+\mathscr{E}_C(\mathbf{q}) \end{aligned} $$
(4.178)

Hamilton’s equations are hence given by:

$$\displaystyle \begin{aligned}\overset{\bullet}{\mathbf{q}}&=\nabla_\psi H(\mathbf{q},\boldsymbol{\psi}) \end{aligned} $$
(4.179)
$$\displaystyle \begin{aligned} \overset{\bullet}{\boldsymbol{\psi}}&=-\nabla_q H(\mathbf{q},\boldsymbol{\psi}) \end{aligned} $$
(4.180)

Example 4.5.2

Write system equations for the circuit in Fig. 4.47 using the Hamiltonian, for t ≥ 0. Assume again the inductors have initial current i 1(0), i 2(0) and capacitors have initial charge q 1(0), q 2(0) at t = 0.

Solution

In Example 4.5.1, we derived the Lagrangian as:

$$\displaystyle \begin{aligned} L(\mathbf{q},\overset{\bullet}{\mathbf{q}})&=\frac{1}{2}L_1(\overset{\bullet}{q}_1+\overset{\bullet}{q}_2)^2+\frac{1}{2}L_2\overset{\bullet}{q}_2^2-\frac{q_1^2}{2C_1}-\frac{q_2^2}{2C_2} \end{aligned} $$
(4.181)

We can now find the conjugate momenta:

$$\displaystyle \begin{aligned} \psi_1&=\frac{\partial L}{\partial\overset{\bullet}{q}_1} \\ &=L_1(\overset{\bullet}{q}_1+\overset{\bullet}{q}_2) \\ &=-L_1i_1 \\ &=-\phi_1 \end{aligned} $$
(4.182)
$$\displaystyle \begin{aligned} ~ &~\\ \psi_2&=\frac{\partial L}{\partial\overset{\bullet}{q}_2} \\ &=L_1(\overset{\bullet}{q}_1+\overset{\bullet}{q}_2) + L_2\overset{\bullet}{q}_2 \\ &=-L_1i_1+L_2i_2 \\ &=-\phi_1+\phi_2 \end{aligned} $$
(4.183)

The total energy is:

$$\displaystyle \begin{aligned} \mathscr{E}_L+\mathscr{E}_C&=\frac{\phi_1^2}{2L_1}+\frac{\phi_2^2}{2L_2}+\frac{q_1^2}{2C_1}+\frac{q_2^2}{2C_2} \end{aligned} $$
(4.184)

The Hamiltonian is:

$$\displaystyle \begin{aligned} H(\mathbf{q},\boldsymbol{\psi})&=\frac{\psi_1^2}{2L_1}+\frac{\left(\psi_2-\psi_1\right)^2}{2L_2}+\frac{q_1^2}{2C_1}+\frac{q_2^2}{2C_2} \end{aligned} $$
(4.185)

Hamilton’s equations give:

$$\displaystyle \begin{aligned} \overset{\bullet}{q}_1&=\frac{\partial H}{\partial \psi_1} \\ &=\frac{\psi_1}{L_1}-\frac{\psi_2-\psi_1}{L_2} {} \end{aligned} $$
(4.186)
$$\displaystyle \begin{aligned} \overset{\bullet}{q}_2&=\frac{\partial H}{\partial \psi_2} \\ &=\frac{\psi_2-\psi_1}{L_2} {} \end{aligned} $$
(4.187)
$$\displaystyle \begin{aligned} \overset{\bullet}{\psi}_1&=-\frac{\partial H}{\partial q_1} \\ &=-\frac{q_1}{C_1} {} \end{aligned} $$
(4.188)
$$\displaystyle \begin{aligned} \overset{\bullet}{\psi}_2&=-\frac{\partial H}{\partial q_2} \\ &=-\frac{q_2}{C_2} {} \end{aligned} $$
(4.189)

It is trivial to verify that Eqs. (4.186) through (4.189) give rise to KCL and KVL.
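Since the circuit of Fig. 4.47 is lossless, the Hamiltonian itself must be a constant of the motion. The sketch below integrates Eqs. (4.186) through (4.189) with a standard RK4 step (the element values and initial state are our own choices, since the example leaves them symbolic) and checks that H is conserved:

```python
# Assumed element values and initial (q1, q2, psi1, psi2) for Fig. 4.47.
L1, L2, C1, C2 = 1.0, 2.0, 1.0, 0.5

def H(q1, q2, p1, p2):
    # Eq. (4.185) with psi written as p.
    return p1*p1/(2*L1) + (p2 - p1)**2/(2*L2) + q1*q1/(2*C1) + q2*q2/(2*C2)

def rhs(q1, q2, p1, p2):
    dq1 = p1/L1 - (p2 - p1)/L2        # Eq. (4.186)
    dq2 = (p2 - p1)/L2                # Eq. (4.187)
    dp1 = -q1/C1                      # Eq. (4.188)
    dp2 = -q2/C2                      # Eq. (4.189)
    return dq1, dq2, dp1, dp2

def rk4_step(state, dt):
    def add(s, k, h):
        return tuple(si + h*ki for si, ki in zip(s, k))
    k1 = rhs(*state)
    k2 = rhs(*add(state, k1, dt/2))
    k3 = rhs(*add(state, k2, dt/2))
    k4 = rhs(*add(state, k3, dt))
    return tuple(s + dt/6*(a + 2*b + 2*c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

state = (1.0, 0.5, 0.2, -0.3)         # arbitrary initial conditions
E0 = H(*state)
dt = 1e-3
for _ in range(20000):                # 20 s of simulated time
    state = rk4_step(state, dt)
drift = abs(H(*state) - E0)           # should stay essentially zero
```

The vanishing energy drift is the numerical counterpart of the statement that Hamilton's equations reproduce KCL and KVL for this lossless LC network.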

4.6 Miscellaneous Topics

We would like to wrap up this chapter by discussing three important and fundamental concepts.

4.6.1 Reciprocity

The reciprocity theorem appears in various fields of science and engineering: physics, mechanics, acoustics, electromagnetic waves, and electric circuits [12]. Roughly speaking, it deals with the symmetric role played by the input and output of a physical system. In electric circuits, reciprocity holds for a subclass of linear and nonlinear circuits. In this section, we will only focus on linear time-invariant circuits. Reciprocity with respect to memristors is an active area of research, see [16]. We will simply give three statements of the theorem and illustrate an application of one of the statements with an example [14].

Consider a linear time-invariant network \(\mathscr {N}\) which consists of resistors, inductors, mutual inductors, capacitors, and transformers only. \(\mathscr {N}\) is in the zero state and is not degenerate. Connect four wires to \(\mathscr {N}\), obtaining two pairs of terminals 1, 1′ and 2, 2′.

Theorem 4.10 (Reciprocity Theorem Statement 1)

Connect a voltage source e 0(⋅) to terminals 1, 1′ and observe the zero-state current response j 2(⋅) in a short circuit connected to 2, 2′ (see Fig. 4.48 a). Next, connect the same voltage source e 0(⋅) to terminals 2, 2′ and observe the zero-state current response \(\hat {j}_1(\cdot )\) in a short circuit connected to 1, 1′ (see Fig. 4.48 b). The reciprocity theorem asserts that whatever the topology and element values of \(\mathscr {N}\) and whatever the waveform e 0(⋅), \(j_2(t)=\hat {j}_1(t)\;\forall t\).

Fig. 4.48
figure 48

(a), (b): Reciprocity theorem statement 1, (c), (d): Reciprocity theorem statement 2, (e), (f): Reciprocity theorem statement 3

In the statement above, we are essentially saying that if the voltage source is interchanged with a zero-impedance ammeter, the reading of the ammeter will not change.

Theorem 4.11 (Reciprocity Theorem Statement 2)

Connect a current source i 0(⋅) to terminals 1, 1′ and observe the zero-state voltage response v 2(⋅) in an open circuit connected to 2, 2′ (see Fig. 4.48 c). Next, connect the same current source i 0(⋅) to terminals 2, 2′ and observe the zero-state voltage response \(\hat {v}_1(\cdot )\) in an open circuit connected to 1, 1′ (see Fig. 4.48 d). The reciprocity theorem asserts that whatever the topology and element values of \(\mathscr {N}\) and whatever the waveform i 0(⋅), \(v_2(t)=\hat {v}_1(t)\;\forall t\).

In the statement above, we are observing open circuit voltages.

Theorem 4.12 (Reciprocity Theorem Statement 3)

Connect a current source i 0(⋅) to terminals 1, 1′ and observe the zero-state current response j 2(⋅) in a short circuit connected to 2, 2′ (see Fig. 4.48 e). Next, connect a voltage source e 0(⋅) to terminals 2, 2′ and observe the zero-state voltage response \(\hat {v}_1(\cdot )\) in an open circuit connected to 1, 1′ (see Fig. 4.48 f). The reciprocity theorem asserts that whatever the topology and element values of \(\mathscr {N}\) , whenever i 0(t) = e 0(t), \(\hat {v}_1(t)=j_2(t)\;\forall t\).

In the statement above, for both measurements, there is an “infinite impedance” connected to 1, 1′ and a “zero impedance” connected to 2, 2′. The reader should have noticed that since the reciprocity theorem deals exclusively with the zero-state response (including the steady-state response as t →∞) of a linear time-invariant network, it is convenient to describe it in terms of network functions. We will illustrate the idea in Example 4.6.1 for statement 3 from Theorem 4.12.

Example 4.6.1

Confirm if statement 3 of the reciprocity theorem is true for the circuit shown in Fig. 4.49.

Fig. 4.49
figure 49

Circuit(s) for Example 4.6.1

Solution

We have defined the ports 1, 1′ and 2, 2′ as shown in Fig. 4.49b and c, respectively. We are going to find the impulse response, and since we do not have any other source, we know that in steady state all voltages and currents must tend to zero (this will serve as a “sanity check”). By node analysis and the Laplace transform in Fig. 4.49b we obtain:

$$\displaystyle \begin{aligned} \begin{bmatrix} 0.2+s+\frac{1}{s} & -\frac{1}{s} \\ -\frac{1}{s} & 1+\frac{1}{s} \end{bmatrix} \begin{bmatrix} V_1(s) \\ V_2(s) \end{bmatrix} &= \begin{bmatrix} 1 \\ 0 \end{bmatrix} \end{aligned} $$
(4.190)

Hence:

$$\displaystyle \begin{aligned} V_2(s)&=\frac{1/s}{(0.2+s+1/s)(1+1/s)-(1/s)^2}=\frac{1}{s^2+1.2s+1.2} \end{aligned} $$
(4.191)

Taking the inverse Laplace transform (using reliable online tables) and noting that \(j_2(t)=v_2(t)/(1\,\Omega )\), we obtain:

$$\displaystyle \begin{aligned} j_2(t)=1.09e^{-0.6t}\sin{}(0.916t),\;t\geq 0 \end{aligned} $$
(4.192)
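As a quick numerical sanity check on Eq. (4.192), the pole locations of \(1/(s^2+1.2s+1.2)\) give the damping, the damped frequency, and the amplitude of the time response directly; a minimal sketch:

```python
import numpy as np

# Poles of 1/(s^2 + 1.2 s + 1.2): s = -0.6 +/- j*sqrt(0.84)
poles = np.roots([1.0, 1.2, 1.2])
sigma = poles[0].real           # damping: -0.6
omega_d = abs(poles[0].imag)    # damped frequency: sqrt(0.84) ~ 0.916

# For F(s) = 1/((s + a)^2 + w^2), f(t) = (1/w) e^{-a t} sin(w t)
amplitude = 1.0 / omega_d       # ~ 1.09, matching Eq. (4.192)
print(sigma, omega_d, amplitude)
```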

For the network in Fig. 4.49c, we will set up circuit equations in terms of \(\hat {I}_1(s), \hat {I}_2(s)\) (this is called mesh analysis ). The matrix equations are:

$$\displaystyle \begin{aligned} \begin{bmatrix} 5+\frac{1}{s}&-\frac{1}{s} \\ -\frac{1}{s}&1+s+\frac{1}{s} \end{bmatrix} \begin{bmatrix} \hat{I}_1(s) \\ \hat{I}_2(s) \end{bmatrix} &= \begin{bmatrix} 0 \\ 1 \end{bmatrix} \end{aligned} $$
(4.193)

Thus:

$$\displaystyle \begin{aligned} \hat{I}_1(s)&=\frac{1/s}{(5+1/s)(s+1+1/s)-(1/s)^2} \end{aligned} $$
(4.194)

Since \(\hat {v}_1(t)=5\hat {i}_1(t)\), we have:

$$\displaystyle \begin{aligned} \hat{V}_1(s)&=\frac{5}{(5s+1)(s+1+1/s)-1/s} \\ &=\frac{5}{5s^2+6s+6} \\ &=\frac{1}{s^2+1.2s+1.2} \end{aligned} $$
(4.195)

Recognizing this function of s to be the transform of j 2(t), we use previous calculations and conclude that:

$$\displaystyle \begin{aligned} \hat{v}_1(t)&=1.09e^{-0.6t}\sin{}(0.916t),\;t\geq 0 \end{aligned} $$
(4.196)

Thus, the two responses are equal, as required by the reciprocity theorem.
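The equality of the two network functions can also be checked symbolically; a minimal sketch using sympy (the variable names are ours):

```python
import sympy as sp

s = sp.symbols('s')

# Forward measurement, Eq. (4.191): V2(s) from the node equations
V2 = (1/s) / ((sp.Rational(1, 5) + s + 1/s) * (1 + 1/s) - (1/s)**2)

# Reverse measurement, Eqs. (4.194)-(4.195): V1hat(s) = 5*I1hat(s)
V1hat = 5 * (1/s) / ((5 + 1/s) * (s + 1 + 1/s) - (1/s)**2)

# Reciprocity: the two transfer functions must be identical
diff = sp.simplify(V2 - V1hat)
print(diff)  # 0
```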

4.6.2 Synthesis of Higher-Order Circuit Elements

Recall from Chap. 1 that we defined (α, β) circuit elements as a “natural extension” of the four fundamental circuit elements. We have reproduced Fig. 1.40 in Fig. 4.50 for ease of discussion.

Fig. 4.50
figure 50

The “periodic table” of all two-terminal (α, β) elements, with a frequency based interpretation

In order to give some physical meaning to each higher order element \(\mathscr {E}\), it is convenient [5] to examine its small-signal behavior about an operating point Q on the associated v (α) − i (β) curve. Assuming that \(\mathscr {E}\) is characterized by v (α) = f(i (β)), the small-signal behavior of \(\mathscr {E}\) about Q is described by:

$$\displaystyle \begin{aligned} \delta v^{(\alpha)}(t)&=m_Q\delta i^{(\beta)}(t) \end{aligned} $$
(4.197)

where m Q denotes the slope f′(i (β)) at Q. We can define the AC small-signal impedance Z(jω) associated with Eq. (4.197) by taking the Laplace transform of Eq. (4.197) and letting s = jω:

$$\displaystyle \begin{aligned} \mathscr{L}\{\delta v\}&=Z(s)\mathscr{L}\{\delta i\} \end{aligned} $$
(4.198)

where:

$$\displaystyle \begin{aligned} Z(j\omega)&=(j\omega)^{\beta-\alpha}m_Q \end{aligned} $$
(4.199)

Notice that we obtained Eq. (4.199) simply by recognizing that each time derivative contributes one factor of jω. We can interpret Eq. (4.199) as the impedance of an associated linearized element \(\mathscr {E}_Q\). Since (β − α) can be any positive, zero, or negative integer, there are four interesting cases to considerFootnote 22:

  • Case 1: β − α = ±2n, n = even integer

    In this case, \(Z(j\omega )=\omega ^{\beta -\alpha }m_Q\overset {\triangle }=R(\omega )\) is a real positive function and hence \(\mathscr {E}_Q\) is purely resistive. We can interpret, therefore, \(\mathscr {E}_Q\) as a frequency-dependent resistor (red) in Fig. 4.50.

  • Case 2: β − α = ±2n, n = odd integer

    In this case, \(Z(j\omega )=-\omega ^{\beta -\alpha }m_Q\overset {\triangle }=R^*(\omega )\) is a real negative function and hence \(\mathscr {E}_Q\) is purely resistive. We can interpret, therefore, \(\mathscr {E}_Q\) as a frequency-dependent negative resistor (orange) in Fig. 4.50.

  • Case 3: β − α = (−1)n(2n + 1), n = 0, 1, 2, ⋯

    In this case, Z(jω) = jωL(ω) is an imaginary number where

    $$\displaystyle \begin{aligned} L(\omega)&\overset{\triangle}= \begin{cases} \omega^{2n}m_Q,&\text{when }n\ \text{ even} \\ \omega^{-2(n+1)}m_Q,&\text{when }n\ \text{ odd} \end{cases} \end{aligned} $$
    (4.200)

    and hence \(\mathscr {E}_Q\) is purely inductive, provided m Q > 0. We can interpret, therefore, \(\mathscr {E}_Q\) as a frequency-dependent inductor (BlueGreen) in Fig. 4.50.

  • Case 4: β − α = (−1)n+1(2n + 1), n = 0, 1, 2, ⋯

    In this case, \(Z(j\omega )=-j\left (\frac {1}{\omega C(\omega )}\right )\) is an imaginary number where

    $$\displaystyle \begin{aligned} C(\omega)&\overset{\triangle}= \begin{cases} \frac{\omega^{2n}}{m_Q},&\text{when }n\ \text{ even} \\ \frac{\omega^{-2(n+1)}}{m_Q},&\text{when }n\ \text{ odd} \end{cases} \end{aligned} $$
    (4.201)

    and hence \(\mathscr {E}_Q\) is purely capacitive, provided m Q > 0. We can interpret, therefore, \(\mathscr {E}_Q\) as a frequency-dependent capacitor (OliveGreen) in Fig. 4.50.

Two applications of the interpretation above: a memristor \(\mathscr {M}\) characterized as a (−1, −1) element is classified as a frequency-dependent resistor (red) because the area of its pinched-hysteresis v − i loop is a function of frequency [7]. A second application is interpreting the (0, −2) element as a frequency-dependent negative resistor (orange), or FDNR; see [15].
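The four cases above reduce to the value of \(j^{\beta-\alpha}\), which cycles with period four. A minimal sketch of the classification, with our own illustrative function name:

```python
def classify_element(alpha: int, beta: int) -> str:
    """Classify the linearized (alpha, beta) element from Z(jw) = (jw)**(beta - alpha) * m_Q.

    Only j**(beta - alpha) matters, and it cycles with period 4.
    """
    return {
        0: "frequency-dependent resistor",           # j**0 = +1: Z real, positive
        1: "frequency-dependent inductor",           # j**1 = +j: Z imaginary (m_Q > 0)
        2: "frequency-dependent negative resistor",  # j**2 = -1: Z real, negative
        3: "frequency-dependent capacitor",          # j**3 = -j: Z imaginary (m_Q > 0)
    }[(beta - alpha) % 4]

print(classify_element(-1, -1))  # memristor: frequency-dependent resistor
print(classify_element(0, -2))   # FDNR: frequency-dependent negative resistor
print(classify_element(-1, 0))   # inductor
print(classify_element(0, -1))   # capacitor
```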

We will however use the time domain to synthesize the particular higher-order element (0, −2), motivated by the fact that we need \(i=\ddot {v}\) for the Duffing oscillator implementation in Sect. 5.5, not \(i=-\ddot {v}\) as given by an FDNR. Consider the schematic in Fig. 4.51. The concept behind Fig. 4.51 is rooted in Sect. 2.5.4, specifically Fig. 2.42. Instead of connecting a nonlinear resistor, we have used a linear capacitor C 2. Based on Sect. 2.5.4, we want the following two-port relationship:

$$\displaystyle \begin{aligned} v_1&=v_2 \\ i_1&=k\frac{d^2}{dt^2}v_2 \end{aligned} $$
(4.202)

By the VCVS at port 2, we trivially obtain v 2 = v 1. From the VCCS across C, we have:

$$\displaystyle \begin{aligned} i&=-\alpha C\frac{di_2}{dt} \end{aligned} $$
(4.203)

Dimensionally, [α] = Ω. By the CCCS at port 1, we trivially obtain i 1 = i. Using the expression for i above and the fact that v 1 = v 2, we get the desired relationship at port 1:

$$\displaystyle \begin{aligned} i_1&=-\alpha C\frac{di_2}{dt} \\ &=\alpha CC_2\frac{d^2}{dt^2}v_2 \\ &=\alpha CC_2\frac{d^2}{dt^2}v_1 \end{aligned} $$
(4.204)
Fig. 4.51
figure 51

A mutator for synthesizing (0, −2) from a (0, −1) element (capacitor C 2 at port 2)

We will synthesize the mutator in Fig. 4.51 by using two opamps and one CFOA in Sect. 5.5. For the general concept for synthesizing higher-order nonlinear elements, the interested reader can refer to [10].
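The two-port relation of Eq. (4.204) can be verified symbolically. The sketch below assumes the port-2 reference direction gives \(i_2=-C_2\,dv_2/dt\), consistent with the derivation above:

```python
import sympy as sp

t, alpha, C, C2 = sp.symbols('t alpha C C_2', positive=True)
v1 = sp.Function('v_1')(t)

# VCVS at port 2: v2 = v1; capacitor C2 sets the port-2 current
v2 = v1
i2 = -C2 * sp.diff(v2, t)            # assumed reference direction of Fig. 4.51

# VCCS across C, then CCCS into port 1 (Eq. 4.203): i1 = -alpha*C*di2/dt
i1 = -alpha * C * sp.diff(i2, t)

# Desired relation, Eq. (4.204): i1 = alpha*C*C2 * d^2 v1 / dt^2
assert sp.simplify(i1 - alpha * C * C2 * sp.diff(v1, t, 2)) == 0
print("mutator relation of Eq. (4.204) verified")
```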

4.6.3 Limit Cycles

In Sect. 4.2.1.6, we have already seen how a simple first-order opamp circuit could burst into a relaxation oscillation. Our analysis of this phenomenon depends on a key assumption, namely, the jump rule. Our objective in this final section is to justify this rule.

Every electronic oscillator requires at least two energy-storage elements and at least one nonlinear element [12]. We will therefore begin with the simplest nonlinear oscillator circuit, analyze its qualitative behavior, and then examine how the oscillation waveform varies as we tune a parameter, say the inductance.Footnote 23 We will then show that as the inductance decreases, the oscillation changes from a nearly “sinusoidal” waveform into a nearly “discontinuous” waveform. In the limit, when the inductance tends to zero, the waveform becomes discontinuous and we obtain the jump rule. Fig. 4.52 shows the basic structure of an important class of electronic oscillators. Since both the inductor and capacitor are linear and passive (i.e., L > 0, C > 0), we claim that the resistive one-port \(\mathscr {N}_R\) must be active (i.e., the DP characteristic contains at least some points in the second and/or fourth quadrant of the v − i plane) in order for oscillation to be possible.

Fig. 4.52
figure 52

Basic oscillator circuit

To see why \(\mathscr {N}_R\) must be active, suppose it is strictly passive so that v(t)i(t) > 0 for all t; then energy will continually enter \(\mathscr {N}_R\), only to be dissipated in the form of heat.Footnote 24 This dissipated energy must of course come from the initial energy stored in the capacitor and inductor. Hence, as t →∞, the total energy stored in the capacitor and inductor will decrease continuously till it becomes completely dissipated. Since the instantaneous energy stored in the capacitor and inductor is \(\mathscr {E}_C(t)=\frac {1}{2}Cv_C^2(t), \mathscr {E}_L(t)=\frac {1}{2}Li_L^2(t)\) (recall Sect. 4.5), it follows that:

$$\displaystyle \begin{aligned} \text{Total energy }&=\frac{1}{2}Cv_C^2(t)+\frac{1}{2}Li_L^2(t)\rightarrow 0 \;\;\;\text{as }t\rightarrow \infty \end{aligned} $$
(4.205)

Hence both v C(t) and i L(t) must eventually tend to zero and no sustained oscillation is possible.
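The energy argument above is easy to check numerically. The sketch below replaces \(\mathscr {N}_R\) in the series loop of Fig. 4.52 with a strictly passive linear resistor \(\hat{v}(i)=Ri\); the element values and step size are our own choices:

```python
import numpy as np

# Series N_R-L-C loop of Fig. 4.52 with a strictly passive resistor vhat(i) = R*i:
#   vC' = -iL/C,   iL' = (vC - R*iL)/L
R, L, C = 1.0, 1.0, 1.0
dt, steps = 1e-3, 20000

x = np.array([1.0, 0.0])             # all initial energy in the capacitor

def energy(x):
    return 0.5 * C * x[0]**2 + 0.5 * L * x[1]**2

E0 = energy(x)
for _ in range(steps):               # forward Euler with a small step
    vC, iL = x
    x = x + dt * np.array([-iL / C, (vC - R * iL) / L])

E_final = energy(x)
print(E0, E_final)                   # stored energy decays toward zero
```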

A typical active resistive one-port has already been described by the three-segment PWL negative resistance characteristic in Fig. 4.26b. In general, any continuous nonmonotonic current-controlled v − i characteristic described by \(v=\hat {v}(i)\) satisfying the conditions:

$$\displaystyle \begin{aligned} \hat{v}(0)&=0 \\ \hat{v}'(0)&<0 \\ \hat{v}(i)&\rightarrow\infty \;\;\text{as }i\rightarrow\infty \\ \hat{v}(i)&\rightarrow -\infty \;\;\text{as }i\rightarrow -\infty \end{aligned} $$
(4.206)

would cause the circuit in Fig. 4.52 to oscillate. This statement can be proved rigorously, see [12].

Indeed, the conditions in Eq. (4.206) are satisfied by many electronic circuits. For example, the DP characteristic in Fig. 4.54 of the twin-tunnel-diode circuit in Fig. 4.53 clearly satisfies the conditions in Eq. (4.206).

Fig. 4.53
figure 53

A negative resistance twin-tunnel-diode circuit

Fig. 4.54
figure 54

Typical DP characteristic of the circuit in Fig. 4.53

We will now consider the physical mechanisms of oscillation in the simple series \(\mathscr {N}_RLC\) circuit from Fig. 4.52. We can write the normal form equations for the circuit by inspection (Fig. 4.54):

$$\displaystyle \begin{aligned} \overset{\bullet}{v}_C&=\frac{-i_L}{C}\overset{\triangle}=f_1(v_C,i_L) \\ \overset{\bullet}{i}_L&=\frac{v_C-\hat{v}(i_L)}{L}\overset{\triangle}=f_2(v_C,i_L) \end{aligned} $$
(4.207)

Assuming only that \(\hat {v}(i_L)\) satisfies the conditions in Eq. (4.206), it is possible to derive the general qualitative behavior of this circuit. Indeed, equating f 1(⋅) and f 2(⋅) to zero in Eq. (4.207), we get the unique equilibrium point located at the origin: v CQ = 0, i LQ = 0.

Now, in order to determine if (0, 0) is a stable or unstable equilibrium point, we can perform a small-signal analysis of the circuit about the DC operating point Q (in this case, (0, 0)). But, we will now take the opportunity to introduce the concept of the Jacobian matrix : if we linearize the RHS of Eq. (4.207) (or any nth-order normal form equations) and ignore the quadratic and other higher order terms, we would get the following result (given for a 2nd-order system such as Eq. (4.207)):

$$\displaystyle \begin{aligned} \begin{bmatrix} \overset{\bullet}{\bar{x}}_1 \\ \overset{\bullet}{\bar{x}}_2 \end{bmatrix} &= \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} \bar{x}_1 \\ \bar{x}_2 \end{bmatrix} \end{aligned} $$
(4.208)

where \(\bar {x}_1\overset {\triangle }=x_1-x_{1Q}\) and \(\bar {x}_2\overset {\triangle }=x_2-x_{2Q}\) represent the small-signal deviation from the operating point. From Taylor series, we know:

$$\displaystyle \begin{aligned} \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} &= \begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} \\ \quad & \quad \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} \end{bmatrix}_{\mathbf{x}={\mathbf{x}}_Q} \end{aligned} $$
(4.209)

The matrix on the RHS of Eq. (4.209) is the Jacobian matrix J. For any nth-order system in normal form, we can generalize J to:

$$\displaystyle \begin{aligned} \mathbf{J}&= \begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \quad & \quad & \quad & \quad \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} & \cdots & \frac{\partial f_2}{\partial x_n} \\ \cdot & \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot & \cdot \\ \frac{\partial f_n}{\partial x_1} & \frac{\partial f_n}{\partial x_2} & \cdots & \frac{\partial f_n}{\partial x_n} \end{bmatrix}_{\mathbf{x}={\mathbf{x}}_Q} \end{aligned} $$
(4.210)

For the second order Jacobian in Eq. (4.209), we gather from the Hartman-Grobman theorem [12], that the qualitative behavior (stable, unstable) of the associated nonlinear system will be “similar” to the linearized system about an equilibrium point.
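A numerical Jacobian, computed by central differences, is a handy check on hand-linearization. A minimal sketch (function names are ours), applied to the oscillator of Eq. (4.207) with the cubic \(\hat{v}(i)=\frac{i^3}{3}-i\) and L = C = 1:

```python
import numpy as np

def jacobian(f, x, h=1e-6):
    """Jacobian of f: R^n -> R^n at x, by central differences (cf. Eq. 4.210)."""
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2 * h)
    return J

# Oscillator of Eq. (4.207), cubic vhat(i) = i**3/3 - i, with L = C = 1
def f(x):
    vC, iL = x
    return np.array([-iL, vC - (iL**3 / 3 - iL)])

J = jacobian(f, np.array([0.0, 0.0]))
print(J)  # [[0, -1], [1, 1]], matching Eq. (4.211) with vhat'(0) = -1
```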

For the oscillator described by Eq. (4.207), the Jacobian matrix evaluates to:

$$\displaystyle \begin{aligned} \mathbf{J}&= \begin{bmatrix} 0 & \frac{-1}{C} \\ \quad & \quad \\ \frac{1}{L} & \frac{-\hat{v}'(0)}{L} \end{bmatrix} \end{aligned} $$
(4.211)

We know from our basic calculus courses that the general solution of the linear ODE in Eq. (4.208) is given by:

$$\displaystyle \begin{aligned} \bar{\mathbf{x}}(t)&=(k_1e^{\lambda_1t})\boldsymbol{\eta}_1+(k_2e^{\lambda_2t})\boldsymbol{\eta}_2 \end{aligned} $$
(4.212)

where λ 1, λ 2 are the eigenvalues of J and η 1, η 2 are the associated eigenvectors. Also from our basic calculus courses, we know that if Re(λ 1) < 0, Re(λ 2) < 0, the system associated with Eq. (4.208) is stable, etc. Instead of using the eigenvalues directly, we can utilize the trace and determinant of J:

$$\displaystyle \begin{aligned} T&=a_{11}+a_{22}=\frac{-\hat{v}'(0)}{L} \\ \varDelta&=a_{11}a_{22}-a_{12}a_{21}=\frac{1}{LC} \end{aligned} $$
(4.213)

Since Δ > 0 and, by the second condition in Eq. (4.206), T > 0, the following relation ensures that the equilibrium point (origin) of the oscillator is an unstable focus:

$$\displaystyle \begin{aligned} \frac{1}{LC}&>\frac{1}{4}\left[\frac{-\hat{v}'(0)}{L}\right]^2 \end{aligned} $$
(4.214)

or equivalently:

$$\displaystyle \begin{aligned} |\hat{v}'(0)|<2\sqrt{\frac{L}{C}} \end{aligned} $$
(4.215)
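As a numerical check, for the cubic characteristic \(\hat{v}(i)=\frac{i^3}{3}-i\) with L = C = 1 (our choice of values), the trace-determinant test of Eqs. (4.213) and (4.214) can be evaluated directly:

```python
import numpy as np

# Jacobian of Eq. (4.211) with vhat'(0) = -1 (cubic case) and L = C = 1
L, C, vp0 = 1.0, 1.0, -1.0
J = np.array([[0.0, -1.0 / C],
              [1.0 / L, -vp0 / L]])

T = np.trace(J)                   # -vhat'(0)/L = 1 > 0
Delta = np.linalg.det(J)          # 1/(LC) = 1 > 0
spiral = Delta > T**2 / 4         # Eq. (4.214): eigenvalues are complex

eigvals = np.linalg.eigvals(J)
print(T, Delta, spiral, eigvals)  # Re(lambda) = T/2 > 0: origin is unstable
```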

So all trajectories starting near the origin would diverge from it and head toward infinity. But, just like the relaxation oscillator we studied earlier, \(\mathscr {N}_R\) is eventually passive (i.e. the v − i characteristic must lie in the first and third quadrants beyond a certain finite distance from the origin). Thus, in view of conditions 3 and 4 in Eq. (4.206), \(\mathscr {N}_R\) will start absorbing energy from the external world—the capacitor and inductor in this case.

Consequently, the energy initially supplied by the “active” \(\mathscr {N}_R\) (when the (v C, i L) is near the “unstable” origin) to propel the trajectory toward infinity eventually fizzles as \(\mathscr {N}_R\) becomes passive and begins to absorb energy instead. Therefore, the initial outward motion of the trajectory will be damped out by losses due to power dissipated inside \(\mathscr {N}_R\) when the trajectory is sufficiently far out. Soon, the trajectory must “grind to a halt” and start “falling” back toward the origin.

The above scenario is depicted in Fig. 4.55, where we have used the cubic \(\hat{v}(i)=\frac {i^3}{3}-i\). The parametric plot of (i(t), v(t)) is called a phase portrait , so named because the x − y plane is historically called the phase plane.

Fig. 4.55
figure 55

Physical mechanism for oscillation

Observe that since the circuit has only one equilibrium state, and since it is unstable, there is no point where any trajectory could come to rest. Therefore all trajectories must continue to move at all times. Since they cannot stray too far beyond the active region and since no trajectory of any autonomous state equation can intersect itself,Footnote 25 except at equilibrium points, each trajectory must eventually tend toward some limiting orbit,Footnote 26 henceforth called a limit cycle . Note that a limit cycle is a periodic trajectory that is unique to a nonlinear system. By definition, linear oscillations are not limit cycles, because linear oscillations are a continuum of orbits. A limit cycle Γ must contain no other closed trajectories in a small band around Γ.

Specifically, let us now discuss the phase portrait of the typical Van der Pol oscillator , which helps us derive the generic jump rule. Suppose we choose \(\hat {v}(i)=\frac {i^3}{3}-i\). Then Eq. (4.207) reads:

$$\displaystyle \begin{aligned} \overset{\bullet}{v}_C&=\frac{-i_L}{C} \\ \overset{\bullet}{i}_L&=\frac{v_C-\left(\frac{1}{3}i_L^3-i_L\right)}{L} \end{aligned} $$
(4.216)

For fixed values of L and C, we could use a computer to generate the phase portrait of Eq. (4.216). One such phase portrait is shown in Fig. 4.56. But how does the phase portrait change (or bifurcate) as we vary the parameters L and C? In more complicated state equations, this question can only be answered in general by a brute-force computer simulation method. But we can often reduce the number of parameters without loss of generality by writing the equations in terms of dimensionless variables . For the Van der Pol oscillator, let us introduce the following “scaled” time variable:

$$\displaystyle \begin{aligned} \tau&\overset{\triangle}=\frac{1}{\sqrt{LC}}t \end{aligned} $$
(4.217)

Note that since \(\sqrt {LC}\) has the dimensions of time, τ is dimensionless and will henceforth be called “dimensionless time.” Note that this τ is unrelated to the time constant that we had defined earlier.

Fig. 4.56
figure 56

Simulated (ProcessBlue) and physical (Red) limit cycles from an implementation (to be discussed in Sect. 5.1) of the Van der Pol oscillator

Observe that:

$$\displaystyle \begin{aligned} \overset{\bullet}{v}_C&=\frac{dv_C}{d\tau}\frac{d\tau}{dt}=\frac{1}{\sqrt{LC}}\frac{dv_C}{d\tau} \\ \overset{\bullet}{i}_L&=\frac{di_L}{d\tau}\frac{d\tau}{dt}=\frac{1}{\sqrt{LC}}\frac{di_L}{d\tau} \end{aligned} $$
(4.218)

Substituting Eq. (4.216) into Eq. (4.218), we obtain the following equivalent state equation in terms of dimensionless time variable τ:

$$\displaystyle \begin{aligned} \frac{dv_C}{d\tau}&=-\frac{1}{\epsilon}i_L \\ \frac{di_L}{d\tau}&=\epsilon\left[v_C-\left(\frac{1}{3}i_L^3-i_L\right)\right] \end{aligned} $$
(4.219)

where

$$\displaystyle \begin{aligned} \epsilon&\overset{\triangle}=\sqrt{\frac{C}{L}} \end{aligned} $$
(4.220)

Observe that Eq. (4.219) now contains only one parameter, 𝜖, as defined by Eq. (4.220). In fact, Fig. 4.56 uses the dimensionless time form of the Van der Pol equation. The dimensionless form not only reduces the number of parameters, but also has the added advantage for computer simulation of scaling. In the case of time for instance, by going from say μs to s, we can use a more realistic time step for the numerical algorithm to avoid convergence issues. We will further explore dimensionless normal form in Chap. 5.
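A minimal numerical sketch of Eq. (4.219) (fixed-step RK4 with our own step size, 𝜖 = 1) shows a trajectory starting near the unstable origin settling onto the limit cycle of Fig. 4.56; for 𝜖 = 1 the limit cycle amplitude in i L is close to the classic Van der Pol value of 2:

```python
import numpy as np

# Dimensionless Van der Pol equations, Eq. (4.219), with eps = 1
eps = 1.0

def f(x):
    vC, iL = x
    return np.array([-iL / eps, eps * (vC - (iL**3 / 3 - iL))])

x, dt = np.array([0.01, 0.0]), 0.01   # start near the unstable origin
trace = []
for k in range(12000):                # classic fixed-step RK4
    k1 = f(x); k2 = f(x + dt/2*k1); k3 = f(x + dt/2*k2); k4 = f(x + dt*k3)
    x = x + dt/6 * (k1 + 2*k2 + 2*k3 + k4)
    if k >= 10000:                    # discard the transient
        trace.append(x[1])

amp = max(abs(iL) for iL in trace)
print(amp)  # close to 2, the classic Van der Pol limit cycle amplitude
```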

Suppose 𝜖 →∞; then Eq. (4.220) implies L → 0. But, from the physical Van der Pol Eq. (4.216), we see that L → 0 implies \(\frac {di_L}{dt}\rightarrow \infty \). In other words, we will have a vertical jump in the v − i plane, assuming i is the vertical axis, just as we discussed in Sect. 4.2.1.6.

Let us now consider the general Eq. (4.207) of a series oscillator:

$$\displaystyle \begin{aligned} \overset{\bullet}{v}_C&=-\frac{i_L}{C} {} \end{aligned} $$
(4.221)
$$\displaystyle \begin{aligned} \overset{\bullet}{i}_L&=\frac{1}{L}[v_C-\hat{v}(i_L)] {} \end{aligned} $$
(4.222)

The function \(\hat {v}(\cdot )\) representing the nonlinear resistor characteristic can be quite arbitrary except that it satisfies the conditions in Eq. (4.206). This class, as discussed earlier, includes the negative resistance opamp relaxation oscillator.

Dividing Eq. (4.222) by Eq. (4.221), we obtain the slope:

$$\displaystyle \begin{aligned} m(P)&\overset{\triangle}=\frac{di_L}{dv_C}=-\frac{C}{L}\left[\frac{v_C-\hat{v}(i_L)}{i_L}\right] \end{aligned} $$
(4.223)

of the tangent vector at any point \(P\overset {\triangle }=(v_C,i_L)\) on a trajectory in the v C − i L plane. Thus, we have:

  1. 1.

    As L → 0 in Eq. (4.223), the limiting slope |m(P)|→∞, as long as \(v_C\neq \hat {v}(i_L)\). Thus all trajectories, except on the DP characteristic, will tend to vertical line segments as L → 0. In particular, at the impasse points, we will have a jump discontinuity.

  2. 2.

    Note that from Eq. (4.222), \(\overset {\bullet }{i}_L > 0,\;\text{if }v_C > \hat {v}(i_L)\) and vice versa. In other words, this gives us the condition for the dynamic route derived in Sect. 4.2.1.6.

  3. 3.

    To complete our analysis of the jump phenomenon, we must estimate the amount of time it takes a trajectory line segment to go from one branch of the DP plot to another. This is easily found from the velocity along the vertical direction as specified by Eq. (4.222), namely, \(\displaystyle \lim \limits _{L\rightarrow 0}\left |\frac {di_L}{dt}\right |\rightarrow \infty \), provided again we are not on the DP characteristic. In particular, the trajectory through each impasse point must execute a vertical instantaneous jump as L → 0.

We have thus formally justified the introduction of the jump rule in Sect. 4.2.1.6.
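Point 1 above is easy to visualize numerically: evaluating the slope of Eq. (4.223) at a fixed point off the DP characteristic, its magnitude grows without bound as L → 0 (the test point and element values below are our own):

```python
def slope(vC, iL, L, C=1.0):
    """Trajectory slope di_L/dv_C of Eq. (4.223), with vhat(i) = i**3/3 - i."""
    vhat = iL**3 / 3 - iL
    return -(C / L) * (vC - vhat) / iL

# Fixed point off the DP characteristic: vC = 1, iL = 1 (vhat(1) = -2/3)
slopes = [abs(slope(1.0, 1.0, L)) for L in (1.0, 0.1, 0.01, 0.001)]
print(slopes)  # |m(P)| = 5/(3L): grows tenfold as L shrinks tenfold
```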

4.7 Conclusion

As a concluding note to this chapter, let us recall that the overall idea behind this chapter was to analyze dynamic nonlinear networks. We essentially had three approaches: time domain, frequency domain, and energy. We restricted our discussion of frequency domain techniques to linear time-invariant circuits but learned about the powerful concepts of phasors and Laplace transforms. The mindful reader would have noticed that many of the ideas involved studying an associated linear system about a particular operating point. Although much insight can be gained for first and second order systems via the linearization technique, third and higher order systems exhibit extremely complicated nonperiodic phenomena, generally known as chaos. Thus, chaos is a phenomenon that cannot be fully studied by linearization and is hence a property unique to nonlinear circuits. Therefore, Chap. 5 appropriately concludes the book by incorporating a plethora of ideas encountered throughout the book.

Because of the large body of material in this chapter, we have summarized concepts below, instead of specific formulae:

  1. 1.

    The order of complexity of a dynamic network is the minimum number of initial conditions that must be specified in terms of circuit variables, in order to determine the full behavior of the network.

  2. 2.

    When possible, the dynamic nonlinear network equations should be expressed in normal form.

  3. 3.

    Dual circuits help us reduce the enormous solution space of dynamic nonlinear networks.

  4. 4.

    We learned the following from time domain analysis of nth-order nonlinear networks:

    1. a.

      Current through a linear inductor and voltage across a linear capacitor cannot change instantaneously across discontinuities.

    2. b.

      Discontinuities in other circuit variables in the network occur because of the constraint in (a) above.

    3. c.

      Circuits exhibiting impasse points indicate that we need to augment the circuit model, most likely with parasitics.

    4. d.

      MNA, Tableau and Small Signal analysis can be easily extended to include dynamic networks.

  5. 5.

    An alternative to time domain analysis is frequency domain analysis. The advantage of this approach when applied to linear time-invariant circuits is that time domain differential equations are mapped to algebraic equations involving complex variables in the frequency domain. The main ideas discussed were:

    1. a.

      We use complex numbers to define a phasor, in order to obtain the steady-state response when the network is excited by a sinusoid of a particular frequency ω.

    2. b.

      Differential equations in the time domain can be converted to algebraic equations in the phasor domain, and hence techniques covered in Chap. 3 such as nodal analysis, tableau analysis, superposition, and Thévenin-Norton theorems are applicable to circuits in the phasor domain.

    3. c.

      For general excitation, we use the Laplace transform.

    4. d.

      To calculate the time response, we need to use partial fraction expansion and then use a table of inverse Laplace transforms.

    5. e.

      Laplace transforms can be used to find both the transient and steady-state responses.

  6. 6.

    For memristor networks:

    1. a.

      We discussed the Flux-Charge Analysis Method (FCAM). The advantage of this method is a reduction in the number of ODEs for the associated memristive network.

    2. b.

      Memristors display a distinct pinched-hysteresis v − i characteristic under sinusoidal excitation.

    3. c.

      Due to physical parasitics, a memristor’s v − i characteristic may become unpinched at the origin.

    4. d.

      We described small-signal AC characteristics of memristive devices.

  7. 7.

    A third approach to studying (dynamic nonlinear) networks is energy. We discussed formulation of system equations from both the Lagrangian and Hamiltonian. The main ideas discussed are:

    1. a.

      Inductors store the mechanical equivalent of “kinetic energy” via the current flowing through them (or the flux-linkage across them). Capacitors store the mechanical equivalent of “potential energy” via the voltage across them (or the charge stored in them).

    2. b.

      Lagrangian formalism is in terms of the difference between kinetic and potential energies. Hamiltonian formalism is in terms of the sum of kinetic and potential energies.

  8. 8.

    Reciprocity helps us understand the symmetric role played by the input and output of a physical system.

  9. 9.

    Higher-order circuit elements in general can be synthesized using higher order mutators. We showed how to synthesize a particular type of higher order mutator for \(i=\ddot {v}\).

  10. 10.

    Limit cycles are an exclusive steady-state behavior of nonlinear oscillators, that usually arise due to unstable equilibrium points.

Lab 4: Relaxation Oscillator (Transient Simulation) and High-Pass filter (AC Simulation)

As in Lab 3, we encourage the reader to perform the simulations in QUCS first, so that they can verify their answers to the appropriate problems via simulation.

Objective

To understand time domain (transient) simulation and frequency response (AC simulation) in QUCS

Theory

There are two steps to this lab: in the first step, you construct a relaxation oscillator. In the second step, you go through the QUCS online workbook to simulate a high-pass filter. For the relaxation oscillator, you will be performing a transient analysis or a time domain simulation. Please do not confuse transient analysis, as defined by circuit simulators, with the concept of transient response discussed in the text!

To perform sinusoidal steady-state analysis, the terminology used by circuit simulators is AC simulation. We will use a simple RC circuit to illustrate the idea of filtering signals. A discussion of filtering is beyond the scope of this book, but the reader is encouraged to go through the appropriate material in an excellent reference such as [12]. Moreover, as the reader simulates the high-pass filter, they are encouraged to modify the circuit to understand its functionality.

Lab Exercise

  1. 1.

    For this step, construct the circuit shown in Fig. 4.57.

    Fig. 4.57
    figure 57

    An opamp based relaxation oscillator

  2. 2.

    Once you enter the appropriate parameters, simulating the circuit should result in Fig. 4.58. Compare your result with the discussion of relaxation oscillators in this chapter (see also Exercise 4.7).

    Fig. 4.58
    figure 58

    Steady-state v C(t) and v out(t) for the circuit in Fig. 4.57

  3. 3.

    For this step, simulate the circuit under “AC simulation - A simple RC highpass” in the QUCS online workbook. Make sure you understand the results. If necessary, construct the circuit physically.

Exercises

4.1

Consider the memristor circuits in Figs. 4.5 and 4.6 from Example 4.1.3. What is the order of complexity for the two circuits if the memristive devices are replaced with ideal memristors?

4.2

For the circuit shown in Fig. 4.59, calculate v 0(t) for t ≥ 0, given i L(0) = 2 A.

Fig. 4.59
figure 59

Circuit for Exercise 4.2

4.3

Consider the circuit shown in Fig. 4.60a where the inductor is nonlinear and is given by the i − ϕ characteristic shown.

  1. 1.

    Let i s(t) = 3u(t) and i(0) = −1 A. Determine the current i(t) for t ≥ 0.

  2. 2.

    What is the amount of energy delivered to the inductor for t ≥ 0?

Fig. 4.60
figure 60

(a) Circuit and (b) nonlinear characteristic for Exercise 4.3

4.4

For the circuit shown in Fig. 4.61, calculate and sketch v C(t) for t > 0. Assume v C(0) = 0 V.

Fig. 4.61
figure 61

Circuit for Exercise 4.4

4.5

Consider the circuit shown in Fig. 4.62a, where N is described by the v − i characteristic shown in Fig. 4.62b.

  1. 1.

    Indicate the dynamic route. Label all equilibrium points and state whether they are stable or unstable.

  2. 2.

    Suppose v C(0) = 15 V. Find and sketch v C(t) and i C(t) for t ≥ 0. Indicate all pertinent information on the sketches.

Fig. 4.62
figure 62

(a) Circuit and (b) DP characteristic for Exercise 4.5

4.6

Consider the circuit shown in Fig. 4.63a, where N is described by the v − i characteristic shown in Fig. 4.63b.

  1. 1.

    Indicate the dynamic route. Label all equilibrium points and state whether they are stable or unstable.

  2. 2.

    Suppose i L(0) = 20 mA. Find and sketch v(t) and i(t) for t ≥ 0. Indicate all pertinent information on the sketches.

Fig. 4.63
figure 63

(a) Circuit and (b) DP characteristic for Exercise 4.6

4.7

Determine closed form expressions and sketch v C(t) and v o(t) waveforms for the relaxation oscillator in Fig. 4.26a.

4.8

Write the modified node equations for the circuit shown in Fig. 4.64.

Fig. 4.64
figure 64

Circuit for Exercise 4.8

4.9

The roots of a general cubic equation in X may be viewed (in the X − Y  plane) as the intersections of the X-axis with the graph of a cubic of the form:

$$\displaystyle \begin{aligned} Y&=X^3+AX^2+BX+C\end{aligned} $$
(4.224)
  1. 1.

    Show that the point of inflection of the graph occurs at \(X=-\frac {A}{3}\).

  2. 2.

    Deduce (algebraically and geometrically) that the substitution \(X=\left (x-\frac {A}{3}\right )\) will reduce the above equation to the form Y = x 3 + bx + c.

4.10

Reconsider the cubic: x 3 = 3px + 2q. To derive the general formula for the cubic:

  1. 1.

    Make the inspired substitution x = s + t and deduce that x solves the cubic if st = p, s 3 + t 3 = 2q.

  2. 2.

    Eliminate t between the two equations above, thereby obtaining a quadratic in s 3.

  3. 3.

    Solve the quadratic to obtain two possible values of s 3. By symmetry, what are the possible values of t 3?

  4. 4.

    Given that we know s 3 + t 3 = 2q, deduce the formula for x in Eq. (4.91).

4.11

Algebraically (and/or geometrically) prove the following:

  1. 1.

    \(|z|=\sqrt {x^2+y^2}\)

  2. 2.

    \(z\bar {z}=|z|{ }^2\)

  3. 3.

    \(\frac {1}{x+jy}=\frac {x}{x^2+y^2}-j\frac {y}{x^2+y^2}\)

  4. 4.

    (1 + j)4 = −4

  5. 5.

    (1 + j)13 = −26(1 + j)

  6. 6.

    \((1+j\sqrt {3})^6=2^6\)

4.12

Write nodal equations in the phasor domain for the circuit shown in Fig. 4.65. Use the nodal equations to find the ratio V o()∕V s.

Fig. 4.65
figure 65

Circuit for Exercise 4.12

4.13

Reconsider the system S from Exercise 1.9. If the input to S is a sinusoidal signal of frequency ω, is the frequency of the output signal still ω?

4.14

Prove the differentiation property of Laplace transforms for nth-order derivatives:

$$\displaystyle \begin{aligned} \mathscr{L}\{\frac{d^n}{dt^n}f(t)\}&=s^nF(s)-s^{n-1}f(0^-)-s^{n-2}f^\prime(0^-)\cdots -f^{n-1}(0^-) \end{aligned} $$
(4.225)

4.15

Show that if the initial conditions were not zero in Example 4.3.6, then we would have obtained:

$$\displaystyle \begin{aligned} I_L(s)&=\frac{\omega_0^2}{s^2+2\alpha s+\omega_0^2}I_s(s)+\frac{(s+2\alpha)i_L(0^-)+\overset{\bullet}{i}_L(0^-)}{s^2+2\alpha s+\omega_0^2} \end{aligned} $$
(4.226)

4.16

Determine Z in(s) for the circuit in Fig. 4.66. Show that the circuit functions as a physical implementation of a gyrator.

Fig. 4.66
figure 66

Circuit for Exercise 4.16

4.17

Derive the small-signal model for the thermistor from Sect. 4.4.2.

4.18

The circuit shown in Fig. 4.67 is made of linear time-invariant elements. Prior to time 0, the left capacitor is charged to V 0 volts, and the right capacitor is uncharged. The switch is closed at time 0. Calculate the following:

  1.

    The current i for t ≥ 0.

  2.

    The energy dissipated in the interval (0, T).

  3.

    The limiting values as t →∞ of (a) the capacitor voltages v 1 and v 2, (b) the current i, and (c) the energy stored in the capacitors and the energy dissipated in the resistor.

  4.

    Is there any relation between these energies? If so, state what it is.

  5.

    What happens when R → 0?

Fig. 4.67
figure 67

Circuit for Exercise 4.18
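The analytical answers can be cross-checked by simulation. The element values below (C1 = C2 = 1 μF, V0 = 10 V) are assumptions for illustration, not values taken from Fig. 4.67; the simulation integrates the single loop (charged capacitor, resistor, uncharged capacitor) by forward Euler and tallies the resistor dissipation.

```python
# Exploratory simulation of Exercise 4.18 with assumed element values.
C1 = C2 = 1e-6     # farads (illustrative)
V0 = 10.0          # initial voltage on the left capacitor (illustrative)

def simulate(R, n_steps=20000):
    """Forward-Euler integration of the charge-sharing transient."""
    tau = R * (C1*C2)/(C1 + C2)   # loop time constant R*Ceq
    dt = tau / 1000.0
    v1, v2 = V0, 0.0
    dissipated = 0.0
    for _ in range(n_steps):      # integrate over 20 time constants
        i = (v1 - v2) / R
        dissipated += i*i*R*dt    # accumulate resistor energy
        v1 -= i*dt/C1
        v2 += i*dt/C2
    return v1, v2, dissipated

for R in (10.0, 1000.0):
    v1, v2, E_R = simulate(R)
    # Charge conservation: both caps settle to V0*C1/(C1+C2) = 5 V.
    assert abs(v1 - 5.0) < 0.05 and abs(v2 - 5.0) < 0.05
    # Dissipated energy = V0^2 * Ceq / 2 = 25 uJ, independent of R.
    assert abs(E_R - 25e-6) < 1e-6
print("half the initial stored energy is dissipated, for any R > 0")
```

Note that the dissipated energy comes out the same for both values of R, which is a strong hint toward the answers of parts 4 and 5.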

4.19

We have encountered many resistive circuits having multiple equilibrium points. For example, the tunnel-diode circuit in Fig. 3.4a from Chap. 3 has three operating points. This result seems to contradict the fact that a single laboratory measurement on the corresponding physical circuit can yield only one operating point.

We are now in a position to resolve the so-called operating point paradox . Basically, the tunnel-diode circuit in Fig. 3.4a is not a realistic model of the physical circuit. In any physical circuit, as we have discussed numerous times, there always exist parasitic effects. In circuits having a unique solution, these effects can often be neglected without discernible error. In circuits exhibiting multiple solutions, however, some of these parasitic elements cannot be neglected.

Consider the realistic tunnel-diode circuit shown in Fig. 4.68. The three operating points in the resistive circuit can now be interpreted as equilibrium points in the remodeled dynamic circuit. Show that in Fig. 3.4b:

  1.

    Q 2 is an unstable equilibrium point.

  2.

    Use numerical simulation and phase portraits to show that different initial conditions give rise to either Q 1 or Q 3 as the operating point.

Fig. 4.68
figure 68

Realistic model for a biased tunnel-diode circuit
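As a starting point for the numerical part of this exercise, the sketch below simulates a second-order circuit of the required form, C dv∕dt = i − g(v), L di∕dt = E − v − Ri. The N-shaped characteristic g(v) and all element values are hypothetical, constructed so that the load line intersects g(v) at exactly v = 0.2, 0.5, 0.8 (playing the roles of Q 1, Q 2, Q 3); they are not taken from Fig. 4.68 or the tunnel-diode data.

```python
# Exploratory basin-of-attraction computation for Exercise 4.19.
# All element values and the diode characteristic are hypothetical.
E, R, L, C = 1.0, 1.0, 0.1, 0.1

def g(v):
    # g(v) = (E - v)/R + 10 (v-0.2)(v-0.5)(v-0.8): by construction the
    # equilibria (intersections with the load line) sit at 0.2, 0.5, 0.8.
    return (E - v)/R + 10.0*(v - 0.2)*(v - 0.5)*(v - 0.8)

def run(v0, i0, dt=1e-4, n=200000):
    """Forward-Euler integration of C dv/dt = i - g(v), L di/dt = E - v - Ri."""
    v, i = v0, i0
    for _ in range(n):
        dv = (i - g(v))/C
        di = (E - v - R*i)/L
        v, i = v + dv*dt, i + di*dt
    return v

# Two initial conditions on the load line, straddling the middle
# equilibrium Q2 at v = 0.5:
v_left  = run(0.45, (E - 0.45)/R)
v_right = run(0.55, (E - 0.55)/R)
assert abs(v_left  - 0.2) < 0.01   # settles at Q1
assert abs(v_right - 0.8) < 0.01   # settles at Q3
print("Q2 repels; trajectories separate toward Q1 or Q3")
```

Linearizing at v = 0.5 gives a Jacobian with negative determinant, i.e., a saddle, which is why arbitrarily close initial conditions end up at different operating points.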

4.20

NOTE: This is an open-ended problem

Going through this chapter, the reader should have realized that there are three main approaches to studying circuits: the time domain, the frequency domain, and the energy approach. Circuit simulation programs such as QUCS readily implement the time-domain and frequency-domain approaches.

So a natural question is: what about energy-based approaches? That is, can we supplement QUCS to compute the Lagrangian and the Hamiltonian for a specified circuit? And how would one go about interpreting the results?

We would recommend investigating the questions above as a capstone project.
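As a minimal sketch of what such a tool might compute, consider a lossless LC loop with state (q, φ), where q is the capacitor charge and φ the inductor flux. Its Hamiltonian is H(q, φ) = q²∕(2C) + φ²∕(2L), and Hamilton's equations give dq∕dt = φ∕L, dφ∕dt = −q∕C. The hard-coded circuit below is an assumption standing in for whatever a QUCS netlist parser would supply.

```python
# Minimal energy-approach sketch: Hamiltonian of a lossless LC loop.
# Element values are illustrative; a netlist parser would supply them.
L_, C_ = 1.0, 1.0
q, phi = 1.0, 0.0                 # initial charge and flux

def H(q, phi):
    # Circuit Hamiltonian: electric + magnetic stored energy.
    return q*q/(2*C_) + phi*phi/(2*L_)

H0 = H(q, phi)
dt = 1e-3
for _ in range(10000):            # semi-implicit (symplectic) Euler
    phi -= (q/C_)*dt              # dphi/dt = -q/C
    q   += (phi/L_)*dt            # dq/dt   =  phi/L

# H is conserved along the exact trajectory; a symplectic integrator
# keeps the numerical energy drift bounded rather than growing.
drift = abs(H(q, phi) - H0)/H0
assert drift < 1e-2
print("relative energy drift:", drift)
```

Interpreting conserved (or slowly drifting) energy for lossy or nonlinear circuits is exactly the kind of question the capstone project should explore.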