
1 Introduction

The Finite Difference Time Domain (FDTD) method has been one of the most widely used computational techniques in the analysis of electromagnetic phenomena since the early 1990s. Although the first application of this method to electromagnetic wave problems dates back to 1966 [1], its prevalence has increased with the development of computer technologies for numerical calculation [2]. The method, used in the solution of partial differential equations, is based on discretizing Maxwell’s curl equations in time and space. At this stage, derivatives are approximated using finite difference equations.

The FDTD algorithm is very popular for its simple implementation and its robust and accurate results in the analysis of electromagnetic fields [3]. Analytical methods, which can be used in simple engineering applications, cannot calculate transient and steady-state responses of electric and magnetic fields in complex structures with sufficient accuracy [4].

In the application stage of this method, the boundaries of the region to be analyzed are determined first. These boundaries should be wide enough to cover all of the objects examined. In cases where suitable boundary conditions cannot be determined, the calculation can be made by introducing artificial boundaries through which the calculation region is extended toward infinity [5]. The region whose boundaries are defined is then divided into cells according to the step sizes in space and time. By solving finite difference equations in these time and space variables, electric and magnetic field quantities can be calculated at a sufficient number of points in the solution region [6]. In this calculation stage, the dielectric and magnetic material parameters of the design must be defined for each discrete region of the solution domain [5].

FDTD, which is a very simple and efficient way to solve Maxwell’s equations, can produce solutions for many electrical engineering problems [7]. Since spatial and temporal discretization is used, it allows the modeling of three-dimensional inhomogeneous materials, the analysis of designs involving planar and non-planar volumes with multiple dielectric planes and ground layers, and the examination of non-ideal conductors and insulators [3]. This method, in which designs containing passive loads or active elements can also be modeled by adding them to the initial equations, is used in the simulation of many engineering applications such as antennas, high voltage (HV) insulation systems, partial discharge (PD) imaging techniques and grounding systems [2, 7, 8].

In addition to these application areas, the high frequency responses frequently encountered in high voltage and power system equipment can also be examined with some modifications of the model [2, 8]. The propagation of electromagnetic waves in transient and non-transient regimes can be examined over a wide frequency range. The FDTD method is widely used for transient analysis [9].

In engineering applications, the experimental measurement of electric and magnetic fields, which depend on many variables such as the size of the design, the variety of materials, and surges caused by switching, is not always possible due to their complex nature and economic constraints [5]. In order to overcome these limitations and to understand the electromagnetic behavior of engineering designs, finite difference methods are used.

This chapter primarily examines, in a conceptual framework, the finite difference methods used in the solution of engineering problems defined by differential equations. Following this section, application examples and results of the FDTD method, which stands out among these methods for its widespread use in calculating electric and magnetic fields, are discussed. Research on the limitations of the method and alternative modifications is also examined in that section. In the final part of the chapter, the advantages and disadvantages of FDTD, the basic modifications proposed for eliminating these disadvantages, and alternative application areas are explained.

2 Finite Difference Methods for Time-Dependent Problems

2.1 Basic Concepts

A general initial problem for linear partial differential equations:

$$\begin{aligned} & u_{t} \left( {x,t} \right) = P\left( {x,t,\frac{\partial }{\partial x}} \right)u\left( {x,t} \right) \\ & u\left( {x,0} \right) = f(x) \\ \end{aligned}$$
(1)

where x is a vector of s components, \(x = \left( {x_{1} , \ldots ,x_{s} } \right)\), u is a vector of p components, \(u\left( {x,t} \right) = \left( {u_{1} \left( {x,t} \right), \ldots ,u_{p} \left( {x,t} \right)} \right)\), and P is a polynomial in \(\frac{\partial }{\partial x}\).

For computational convenience, the domain of the solution \(u\left( {x,t} \right)\) is restricted to a bounded region. On this bounded region, a grid of points is constructed by discretizing both space and time. The step sizes are \(\Delta t\) and \(\Delta x_{i}\), and the grid points are

$$\begin{aligned} & t_{n} = n\Delta t \\ & x_{j_{i}} = j_{i} \Delta x_{i} ,\quad j_{i} = 0, \ldots ,N_{i} ,\quad i = 1, \ldots ,s \\ \end{aligned}$$
(2)

Consider a two-dimensional problem:

$$u_{t} = u_{x} + u_{y}$$
(3)

where \(u\left( {x,y,t} \right)\) is a real valued function and \(\Delta x\), \(\Delta y\) and \(\Delta t\) are positive, fixed quantities. A finite difference scheme is

$$\begin{aligned} U_{i,j}^{n + 1} & = 0.25\left( {U_{i + 1,j + 1}^{n} + U_{i - 1,j + 1}^{n} + U_{i + 1,j - 1}^{n} + U_{i - 1,j - 1}^{n} } \right) \\ & \quad + \frac{\Delta t}{2\Delta x}\left( {U_{i + 1,j}^{n} - U_{i - 1,j}^{n} } \right) + \frac{\Delta t}{2\Delta y}\left( {U_{i,j + 1}^{n} - U_{i,j - 1}^{n} } \right) \\ \end{aligned}$$
(4)

In terms of the shift operators \(E_{1}\) and \(E_{2}\), this can be written as

$$\begin{aligned} U_{i,j}^{n + 1} & = \left( {0.25\left( {E_{1} + E_{1}^{ - 1} } \right)\left( {E_{2} + E_{2}^{ - 1} } \right) + \frac{\Delta t}{2\Delta x}\left( {E_{1} - E_{1}^{ - 1} } \right)} \right. \\ & \left. {\quad + \frac{\Delta t}{2\Delta y}\left( {E_{2} - E_{2}^{ - 1} } \right)} \right)U_{i,j}^{n} \\ \end{aligned}$$
(5)
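
As an illustration, the following minimal NumPy sketch performs one update of the scheme in Eq. (4) on a periodic grid, using `np.roll` to realize the shift operators \(E_1\) and \(E_2\); the function name and grid layout are illustrative assumptions, not part of the original text.

```python
import numpy as np

def step_2d(U, dt, dx, dy):
    """One step of the scheme in Eq. (4) for u_t = u_x + u_y on a periodic grid."""
    # Average of the four diagonal neighbours: 0.25 (E1 + E1^-1)(E2 + E2^-1) U
    diag = 0.25 * (np.roll(np.roll(U, -1, 0), -1, 1) + np.roll(np.roll(U, 1, 0), -1, 1)
                   + np.roll(np.roll(U, -1, 0), 1, 1) + np.roll(np.roll(U, 1, 0), 1, 1))
    # Centered differences in x (axis 0) and y (axis 1)
    dUdx = (np.roll(U, -1, 0) - np.roll(U, 1, 0)) / (2.0 * dx)
    dUdy = (np.roll(U, -1, 1) - np.roll(U, 1, 1)) / (2.0 * dy)
    return diag + dt * dUdx + dt * dUdy
```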

2.2 Properties of Finite Difference Schemes

Let \(u\left( {x,t} \right)\) be the solution of the initial value problem and \(S\left( {t,t_{0} } \right)\) the solution operator, so that

$$u\left( {x,t} \right) = S\left( {t,t_{0} } \right)u\left( {x,t_{0} } \right)$$
(6)

thus in particular

$$u\left( {x,\left( {n + 1} \right)\Delta t} \right) = S\left( {\left( {n + 1} \right)\Delta t,n\Delta t} \right)u\left( {x,n\Delta t} \right)$$
(7)

If the problem is autonomous, i.e. the operator P in Eq. 1 is independent of time, then S is a function only of the elapsed time \(\left( {t - t_{0} } \right)\).

Scheme 1

$$U_{j}^{n + 1} = U_{j}^{n} + \frac{\Delta t}{2\Delta x}\left( {U_{j + 1}^{n} - U_{j - 1}^{n} } \right)$$
(8)

This scheme is useless since it is never stable. To investigate its accuracy, let

$$u_{j}^{n} = u\left( {j\Delta x,n\Delta t} \right)$$
(9)

For the equation \(u_{t} = u_{x}\), the scheme can be rewritten in the form

$$\frac{{U_{j}^{n + 1} - U_{j}^{n} }}{\Delta t} = \frac{{U_{j + 1}^{n} - U_{j - 1}^{n} }}{2\Delta x}$$
(10)

Local truncation error is

$$\begin{aligned} \tau_{j}^{n} & = \frac{{u_{j}^{n + 1} - u_{j}^{n} }}{\Delta t} - \frac{{u_{j + 1}^{n} - u_{j - 1}^{n} }}{2\Delta x} \\ & = u_{t} \left( {x_{j} ,t_{n} } \right) + O\left( {\Delta t} \right) - u_{x} \left( {x_{j} ,t_{n} } \right) + O\left( {\Delta x^{2} } \right) \\ \end{aligned}$$
(11)

This scheme is second-order accurate in space and first-order accurate in time.

Scheme 2: Lax-Friedrichs Scheme

$$U_{j}^{n + 1} = \frac{1}{2}\left( {U_{j + 1}^{n} + U_{j - 1}^{n} } \right) + \frac{\Delta t}{2\Delta x}\left( {U_{j + 1}^{n} - U_{j - 1}^{n} } \right)$$
(12)

This is a first-order accurate scheme, obtained from the forward-time centered-space (FTCS) form by replacing \(U_{j}^{n}\) with the average of its two neighbors [10].

The Lax-Friedrichs scheme has second-order precision in space and first-order precision in time [11].

Scheme 3: Upwind Scheme

Consider one sided difference for the spatial derivative:

$$U_{j}^{n + 1} = U_{j}^{n} + \frac{\Delta t}{\Delta x}\left( {U_{j + 1}^{n} - U_{j}^{n} } \right)$$
(13)

This is a first order accurate scheme. The upwind differencing scheme is conservative [12].

Scheme 4: Downwind Scheme

Consider the one-sided difference for the spatial derivative:

$$U_{j}^{n + 1} = U_{j}^{n} + \frac{\Delta t}{\Delta x}\left( {U_{j}^{n} - U_{j - 1}^{n} } \right)$$
(14)

This is a first order accurate scheme. However, this scheme is also useless: the domain of dependence is not included in the scheme stencil, and therefore the scheme is unstable [13].

Scheme 5: Leapfrog Scheme:

If the center difference is used for both time and spatial derivatives,

$$U_{j}^{n + 1} = U_{j}^{n - 1} + \frac{\Delta t}{\Delta x}\left( {U_{j + 1}^{n} - U_{j - 1}^{n} } \right)$$
(15)

To find its accuracy, it is rewritten as

$$\frac{{U_{j}^{n + 1} - U_{j}^{n - 1} }}{2\Delta t} = \frac{{U_{j + 1}^{n} - U_{j - 1}^{n} }}{2\Delta x}$$
(16)
$$\tau_{j}^{n} = \frac{{u_{j}^{n + 1} - u_{j}^{n - 1} }}{2\Delta t} - \frac{{u_{j + 1}^{n} - u_{j - 1}^{n} }}{2\Delta x} = u_{t} + O\left( {\Delta t^{2} } \right) - u_{x} + O\left( {\Delta x^{2} } \right)$$
(17)

This is a second order accurate scheme. The leapfrog method has good stability when solving partial differential equations with oscillatory solutions [14].

Scheme 6: Lax-Wendroff Scheme

It is based on the Taylor series expansion of \(u\left( {x,t} \right)\) given by

$$u\left( {x,t + \Delta t} \right) = u\left( {x,t} \right) + \Delta t\,u_{t} \left( {x,t} \right) + \frac{1}{2}\Delta t^{2} u_{tt} \left( {x,t} \right) + O\left( {\Delta t^{3} } \right)$$
(18)

which, using \(u_{t} = u_{x}\), reduces to

$$u\left( {x,t + \Delta t} \right) = u\left( {x,t} \right) + \Delta t\,u_{x} \left( {x,t} \right) + \frac{1}{2}\Delta t^{2} u_{xx} \left( {x,t} \right) + O\left( {\Delta t^{3} } \right)$$
(19)

Using centered differences, a scheme with second-order accuracy in both time and space is obtained:

$$U_{j}^{n + 1} = U_{j}^{n} + \frac{\Delta t}{2\Delta x}\left( {U_{j + 1}^{n} - U_{j - 1}^{n} } \right) + \frac{{\Delta t^{2} }}{{2\Delta x^{2} }}\left( {U_{j + 1}^{n} - 2U_{j}^{n} + U_{j - 1}^{n} } \right)$$
(20)

The Lax-Wendroff scheme has second-order precision in both space and time. It therefore gives a more accurate solution than the Lax-Friedrichs scheme, which has only first-order precision in time.

On the other hand, the Lax-Wendroff scheme needs more computational time than the Lax-Friedrichs scheme, since it requires derivatives up to fourth order, while the Lax-Friedrichs scheme requires derivatives only up to second order [11].
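
A brief numerical comparison of the two schemes for \(u_t = u_x\) illustrates this accuracy difference. The sketch below assumes a periodic grid and the exact solution \(u(x,t) = \sin(2\pi(x+t))\); grid sizes and variable names are illustrative only.

```python
import numpy as np

N, T = 200, 0.5
dx = 1.0 / N
dt = 0.5 * dx
lam = dt / dx
x = np.arange(N) * dx

def lax_friedrichs(U):
    # Eq. (12): average of neighbours plus centered difference
    return 0.5 * (np.roll(U, -1) + np.roll(U, 1)) + 0.5 * lam * (np.roll(U, -1) - np.roll(U, 1))

def lax_wendroff(U):
    # Eq. (20): centered difference plus second-difference correction
    return (U + 0.5 * lam * (np.roll(U, -1) - np.roll(U, 1))
            + 0.5 * lam**2 * (np.roll(U, -1) - 2 * U + np.roll(U, 1)))

U_lf = np.sin(2 * np.pi * x)
U_lw = U_lf.copy()
steps = int(round(T / dt))
for _ in range(steps):
    U_lf, U_lw = lax_friedrichs(U_lf), lax_wendroff(U_lw)

exact = np.sin(2 * np.pi * (x + steps * dt))
# The Lax-Wendroff error is much smaller than the Lax-Friedrichs error
print(np.max(np.abs(U_lf - exact)), np.max(np.abs(U_lw - exact)))
```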

Scheme 7: Crank-Nicolson Scheme

This is a second order accurate implicit scheme

$$U_{j}^{n + 1} = U_{j}^{n} + \frac{\Delta t}{4\Delta x}\left( {U_{j + 1}^{n + 1} - U_{j - 1}^{n + 1} + U_{j + 1}^{n} - U_{j - 1}^{n} } \right)$$
(21)

The Crank-Nicolson method is an implicit scheme with second-order accuracy in both time and space, and it is unconditionally stable [15].
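
A minimal sketch of one Crank-Nicolson step for \(u_t = u_x\) with periodic boundaries is shown below, solving the implicit system with a dense matrix for clarity (a practical implementation would use a banded or circulant solver); all names are illustrative assumptions.

```python
import numpy as np

def crank_nicolson_step(U, dt, dx):
    """One Crank-Nicolson step (Eq. 21) for u_t = u_x on a periodic grid."""
    N = U.size
    lam = dt / (4.0 * dx)
    I = np.eye(N)
    # D implements the periodic centered difference: (D @ U)[j] = U[j+1] - U[j-1]
    D = np.roll(I, 1, axis=1) - np.roll(I, -1, axis=1)
    A = I - lam * D        # implicit side (time level n+1)
    B = I + lam * D        # explicit side (time level n)
    return np.linalg.solve(A, B @ U)
```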

2.3 Von Neumann Stability

Stability of the scheme \(V^{n + 1} = C\left( {\Delta t} \right)V^{n}\) can be expressed in terms of the amplification matrix \(G\left( {\Delta t,k} \right)\) as the condition that, for all \(t = n\Delta t > 0\),

$$\left\| {G\left( {\Delta t,k} \right)^{n} } \right\| \le Ke^{\alpha t}$$
(22)

The condition must be satisfied for all multi-indices \(k\) in order to establish stability of the scheme.

The Von Neumann Condition

The amplification matrix of a stable scheme satisfies the condition

$$\rho \left[ {G\left( {\Delta t,k} \right)} \right] \le e^{\gamma \Delta t} = 1 + O\left( {\Delta t} \right)$$
(23)

where \(\rho \left[ {G\left( {\Delta t,k} \right)} \right]\) denotes the spectral radius (largest magnitude of the eigenvalues) of the matrix \(G\left( {\Delta t,k} \right)\).

The Von Neumann stability condition is necessary but not sufficient for stability. In most practical applications, it is easy to check whether this condition holds [16].

2.4 The Leapfrog Scheme

2.4.1 The One Way Wave Equation

The one-way wave equation offers significant computational efficiency for a range of transmitted-wave applications at three-dimensional global, exploration, and engineering scales [17]. The leapfrog scheme is

$$U_{j}^{n + 1} = U_{j}^{n - 1} + \frac{\Delta t}{\Delta x}\left( {U_{j + 1}^{n} - U_{j - 1}^{n} } \right)$$
(24)

Periodic boundary conditions are imposed through the usual periodicity requirement \(U_{ - 1}^{n} = U_{N - 1}^{n}\), \(U_{N}^{n} = U_{0}^{n}\).

The vector can be defined as

$$V_{j}^{n} = \left( {\begin{array}{*{20}l} {U_{j}^{n} } \hfill \\ {U_{j}^{n - 1} } \hfill \\ \end{array} } \right)$$
(25)
$$V^{n + 1} = C\left( {\Delta t} \right)V^{n}$$
(26)

and \(\lambda = \frac{\Delta t}{\Delta x}\) then

$$V_{j}^{n + 1} = \left( {\begin{array}{*{20}c} {\lambda \left( {E - E^{ - 1} } \right)} & 1 \\ 1 & 0 \\ \end{array} } \right)V_{j}^{n}$$
(27)

where E and \(E^{ - 1}\) are the shift operators. Writing \(V_{j}^{n} = \widehat{V}_{k}^{n} e^{ikj\Delta x}\) for the discrete Fourier mode of \(V^{n}\),

$$\widehat{V}_{k}^{n + 1} e^{ikj\Delta x} = \left( {\begin{array}{*{20}c} {\lambda \left( {E - E^{ - 1} } \right)} & 1 \\ 1 & 0 \\ \end{array} } \right)\widehat{V}_{k}^{n} e^{ikj\Delta x}$$
(28)

and \(x_{j} = j\Delta x\) so

$$E\,\widehat{V}_{k}^{n} e^{ikj\Delta x} = e^{ik\Delta x} e^{ikj\Delta x} \widehat{V}_{k}^{n}$$
(29)
$$E^{ - 1} \widehat{V}_{k}^{n} e^{ikj\Delta x} = e^{ - ik\Delta x} e^{ikj\Delta x} \widehat{V}_{k}^{n}$$
(30)

thus

$$\begin{aligned} \widehat{V}_{k}^{n + 1} & = e^{ - ikj\Delta x} \left( {\begin{array}{*{20}c} {\lambda \left( {E - E^{ - 1} } \right)} & 1 \\ 1 & 0 \\ \end{array} } \right)e^{ikj\Delta x} \widehat{V}_{k}^{n} \\ & = \left( {\begin{array}{*{20}c} {2i\lambda \sin \left( {k\Delta x} \right)} & 1 \\ 1 & 0 \\ \end{array} } \right)\widehat{V}_{k}^{n} \\ \end{aligned}$$
(31)

The explicit expression for the amplification matrix is

$$G\left( {\Delta x,k} \right) = \left( {\begin{array}{*{20}c} {2i\lambda \sin \left( {k\Delta x} \right)} & 1 \\ 1 & 0 \\ \end{array} } \right)$$
(32)

The variable \(\xi = k\Delta x\) is restricted to \(0 \le \xi \le 2\pi\). The eigenvalues of the amplification matrix \(G\left( {\Delta x,k} \right)\) are

$$\mu_{1,2} (\xi ) = i\lambda \sin (\xi ) \pm \sqrt {1 - \lambda^{2} \sin^{2} (\xi )}$$
(33)

Case 1. If \(\lambda^{2} > 1\), then for those values of k such that \(\xi = k\Delta x = \frac{\pi }{2}\)

$$\mu_{1} \left( {{\pi \mathord{\left/ {\vphantom {\pi 2}} \right. \kern-0pt} 2}} \right) = i\left( {\lambda + \sqrt {\lambda^{2} - 1} } \right)$$
(34)

so \(\left| {\mu_{1} \left( {{\pi \mathord{\left/ {\vphantom {\pi 2}} \right. \kern-0pt} 2}} \right)} \right| > 1\), showing that the Von Neumann stability condition is not satisfied by the amplification matrix. The leapfrog scheme is therefore unstable when \(\lambda > 1\).

Case 2. If \(\lambda^{2} \le 1\), then

$$\left| {\mu_{1} (\xi )} \right|^{2} = \lambda^{2} \sin^{2} (\xi ) + 1 - \lambda^{2} \sin^{2} (\xi ) = 1$$
(35)

Then \(\rho [G] = 1\) and the Von Neumann condition is satisfied. Nonetheless, this does not imply that the scheme is stable for all \(\lambda \le 1\); in fact it is unstable for \(\lambda = 1\).

The leapfrog scheme for \(u_{t} = u_{x}\) is stable for \(\lambda < 1\).
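
The two cases can be checked numerically by evaluating the spectral radius of the amplification matrix in Eq. (32) over a sweep of \(\xi = k\Delta x\); the short sketch below is illustrative only.

```python
import numpy as np

def max_spectral_radius(lam, n_xi=1000):
    """Largest spectral radius of the leapfrog amplification matrix (Eq. 32) over xi."""
    radii = []
    for xi in np.linspace(0.0, 2.0 * np.pi, n_xi):
        G = np.array([[2j * lam * np.sin(xi), 1.0],
                      [1.0, 0.0]])
        radii.append(np.abs(np.linalg.eigvals(G)).max())
    return max(radii)

print(max_spectral_radius(0.8))   # approximately 1: Von Neumann condition satisfied (Case 2)
print(max_spectral_radius(1.2))   # greater than 1: condition violated (Case 1)
```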

2.4.2 The Two Way Wave Equation

Comparison of migration results for one-way and two-way wave-equation migration shows that the two-way wave equation provides superior results [18]. The leapfrog method (second-order centered differences for the time and space derivatives) for the two-way wave equation \(u_{tt} = u_{xx}\) is

$$\frac{{U_{j}^{n + 1} - 2U_{j}^{n} + U_{j}^{n - 1} }}{{\Delta t^{2} }} = \frac{{U_{j + 1}^{n} - 2U_{j}^{n} + U_{j - 1}^{n} }}{{\Delta x^{2} }}$$
(36)

The simplified 1D Maxwell’s equations can be written as \(E_{t} = H_{x}\), \(H_{t} = E_{x}\), which is equivalent to \(E_{tt} = E_{xx}\) or \(H_{tt} = H_{xx}\).

The FDTD method (second-order centered differences for the time and space derivatives) is defined on a staggered grid for H:

$$\frac{{E_{j}^{n + 1} - E_{j}^{n} }}{\Delta t} = \frac{{H_{{j + \frac{1}{2}}}^{{n + \frac{1}{2}}} - H_{{j - \frac{1}{2}}}^{{n + \frac{1}{2}}} }}{\Delta x}$$
(37)
$$\frac{{H_{{j + \frac{1}{2}}}^{{n + \frac{1}{2}}} - H_{{j + \frac{1}{2}}}^{{n - \frac{1}{2}}} }}{\Delta t} = \frac{{E_{j + 1}^{n} - E_{j}^{n} }}{\Delta x}$$
(38)
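
A minimal 1D sketch of the staggered update in Eqs. (37)–(38) is given below, assuming normalized units, a periodic grid, and a Gaussian initial pulse; the array layout (H[j] holding \(H_{j+1/2}\)) and all parameter values are illustrative assumptions.

```python
import numpy as np

N = 200
dx = 1.0 / N
dt = 0.9 * dx                 # Courant number dt/dx < 1 for stability
x = np.arange(N) * dx

E = np.exp(-300.0 * (x - 0.5) ** 2)   # initial electric field pulse at integer points j
H = np.zeros(N)                        # magnetic field at staggered points j + 1/2

for n in range(500):
    # Eq. (38): advance H by the centered difference of E
    H += dt / dx * (np.roll(E, -1) - E)
    # Eq. (37): advance E by the centered difference of the staggered H
    E += dt / dx * (H - np.roll(H, 1))
```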

With \(\lambda = \Delta t/\Delta x\), Eq. (36) can be written as

$$U_{j}^{n + 1} = 2U_{j}^{n} + \lambda^{2} \left( {E - 2 + E^{ - 1} } \right)U_{j}^{n} - U_{j}^{n - 1}$$
(39)

where E is the shift operator.

$$V_{j}^{n} = \left( {\begin{array}{*{20}c} {U_{j}^{n} } \\ {U_{j}^{n - 1} } \\ \end{array} } \right)$$
(40)
$$V_{j}^{n + 1} = \left( {\begin{array}{*{20}c} {2 + \lambda^{2} \left( {E - 2 + E^{ - 1} } \right)} & { - 1} \\ 1 & 0 \\ \end{array} } \right)V_{j}^{n}$$
(41)

\(V_{j}^{n} = \widehat{V}_{k}^{n} e^{ikj\Delta x}\) so

$$\widehat{V}_{k}^{n + 1} = \left( {\begin{array}{*{20}c} {2 + \lambda^{2} \left( {e^{ik\Delta x} - 2 + e^{ - ik\Delta x} } \right)} & { - 1} \\ 1 & 0 \\ \end{array} } \right)\widehat{V}_{k}^{n}$$
(42)

Thus

$$G = \left( {\begin{array}{*{20}c} {2 + \lambda^{2} \left( {2\cos (\xi ) - 2} \right)} & { - 1} \\ 1 & 0 \\ \end{array} } \right)$$
(43)

The eigenvalues of G are \(\mu_{1} = a + \sqrt {a^{2} - 1}\) and \(\mu_{2} = a - \sqrt {a^{2} - 1}\) with \(a = 1 + \lambda^{2} \left( {\cos (\xi ) - 1} \right)\).

If \(\lambda > 1\), there exist values \(\xi_{0}\) such that \(\cos \left( {\xi_{0} } \right) < 1 - \frac{2}{{\lambda^{2} }}\). Then \(a\left( {\xi_{0} } \right) < - 1\) and \(\left| {\mu_{2} \left( {\xi_{0} } \right)} \right| = \left| {a - \sqrt {a^{2} - 1} } \right| > 1\). The Von Neumann condition is violated, so the scheme is not stable.

If \(\lambda \le 1\), then \(a^{2} - 1 \le 0\) thus \(\mu_{1} = a + i\sqrt {1 - a^{2} }\) and \(\mu_{2} = a - i\sqrt {1 - a^{2} }\). So \(\left| {\mu_{i} } \right| = 1\) and the Von Neumann stability is satisfied. On the other hand, G is not a normal matrix and \(\left\| G \right\| > 1\).

2.4.3 Convergence for the Two Way Wave Equation

Replacing \(U_{j}^{n}\) by \(u\left( {x_{j} ,t^{n} } \right)\) in Eq. (36), the residue is the local truncation error

$$\tau^{n} = O\left( {\Delta t^{2} } \right) + O\left( {\Delta x^{2} } \right)$$
(44)

Second, replacing \(U_{j}^{n}\) by \(u\left( {x_{j} ,t^{n} } \right)\) in Eq. (41), the residue is

$$\Delta t^{2} \tau^{n} = \Delta t^{2} \left[ {O\left( {\Delta t^{2} } \right) + O\left( {\Delta x^{2} } \right)} \right]$$
(45)

Let \(V^{n + 1} = C\left( {\Delta t} \right)V^{n}\) denote the leapfrog scheme. Suppose

$$V^{n + 1} = \left( {\begin{array}{*{20}c} {U_{0}^{n + 1} } \\ {U_{0}^{n} } \\ {U_{1}^{n + 1} } \\ {U_{1}^{n} } \\ \vdots \\ {U_{N - 1}^{n + 1} } \\ {U_{N - 1}^{n} } \\ \end{array} } \right)$$
(46)

\(Q_{\Delta x}\) is the sampling operator at the spatial grid points and two time steps.

$$Q_{\Delta x} u\left( {x,t} \right) = \left( {\begin{array}{*{20}c} {u\left( {x_{0} ,t} \right)} \\ {u\left( {x_{0} ,t - \Delta t} \right)} \\ {u\left( {x_{1} ,t} \right)} \\ {u\left( {x_{1} ,t - \Delta t} \right)} \\ \vdots \\ {u\left( {x_{N - 1} ,t} \right)} \\ {u\left( {x_{N - 1} ,t - \Delta t} \right)} \\ \end{array} } \right)$$
(47)

2.5 Dissipative Schemes

A finite difference scheme \(V^{n + 1} = C\left( {\Delta t} \right)V^{n}\) is called dissipative of order \(2\tau\) if the amplification matrix satisfies

$$\rho \left[ {G\left( {\Delta t,k} \right)} \right] \le 1 - \delta \left| \xi \right|^{2\tau }$$
(48)

where \(\xi = k\Delta x\), for all \(\Delta t\) and k, and where \(\delta > 0\) is independent of k and \(\Delta t\).

2.6 Difference Schemes for Hyperbolic Systems in One Dimension

$$u\left( {x,t} \right) = \left( {u_{1} \left( {x,t} \right), \ldots ,u_{p} \left( {x,t} \right)} \right)^{T}$$
(49)
$$u_{t} \left( {x,t} \right) = \frac{{\partial F\left( {u\left( {x,t} \right)} \right)}}{\partial x}$$
(50)

where \(F(u)\) is a vector-valued function, \(F\left( {u_{1} , \ldots ,u_{p} } \right) =\) \(\left( {F_{1} \left( {u_{1} , \ldots ,u_{p} } \right), \ldots ,F_{p} \left( {u_{1} , \ldots ,u_{p} } \right)} \right)^{T}\). By the chain rule,

$$\frac{{\partial F\left( {u\left( {x,t} \right)} \right)}}{\partial x} = \frac{\partial F\left( u \right)}{\partial u}\frac{{\partial u\left( {x,t} \right)}}{\partial x}$$
(51)

where \(\frac{\partial F(u)}{\partial u}\) denotes the gradient matrix \(A(u)\) with components \(a_{ij} (u) = \frac{{\partial F_{i} (u)}}{{\partial u_{j} }}\) so that the nonlinear system can be written in the form

$$u_{t} = A(u)u_{x}$$
(52)

The above nonlinear equation is called weakly, strongly, symmetric or strictly hyperbolic if, for every fixed \(u_{0}\), the corresponding linearized system

$$u_{t} = A\left( {u_{0} } \right)u_{x}$$

is weakly, strongly, symmetric or strictly hyperbolic, respectively.

The Lax equivalence theorem states that, for a consistent scheme applied to a strongly well-posed linear problem, stability is equivalent to convergence. Weak well-posedness may give rise to instabilities.

2.6.1 First Order Schemes

Consider the Friedrichs scheme:

$$U_{j}^{n + 1} = \frac{1}{2}\left( {U_{j + 1}^{n} + U_{j - 1}^{n} } \right) + \frac{\Delta t}{2\Delta x}\left( {F_{j + 1}^{n} - F_{j - 1}^{n} } \right)$$
(53)

where \(F_{j + 1}^{n} = F\left( {U_{j + 1}^{n} } \right)\). This scheme is based on a first order approximation of the derivatives using a Taylor expansion, and it can easily be shown that it is first order accurate. Linearizing the function \(F(u)\) around some arbitrary value \(u_{0}\), \(A(u)\) is replaced by a constant matrix A, so that the linearized problem is equivalent to the original problem with \(F(u) = Au\). Substituting into the Friedrichs scheme, the linearized form is obtained:

$$U_{j}^{n + 1} = \frac{1}{2}\left( {U_{j + 1}^{n} + U_{j - 1}^{n} } \right) + \frac{\Delta t}{2\Delta x}A\left( {U_{j + 1}^{n} - U_{j - 1}^{n} } \right)$$
(54)

The corresponding amplification matrix is given by

$$G(\xi ) = I\cos (\xi ) + i\lambda A\sin (\xi )$$
(55)

where \(\xi = k\Delta x\) and I is the \(p \times p\) identity matrix. If the original problem is strongly or strictly hyperbolic, then the matrix \(A = A\left( {u_{0} } \right)\) is diagonalizable, i.e. there exists a matrix T such that

$$T^{ - 1} AT = \left( {\begin{array}{*{20}c} {a_{1} } & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & {a_{p} } \\ \end{array} } \right)$$
(56)

where \(a_{1} , \ldots ,a_{p}\) are the real eigenvalues of A. Therefore:

$$T^{ - 1} G(\xi )T = I\cos (\xi ) + i\lambda \left( {\begin{array}{*{20}c} {a_{1} } & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & {a_{p} } \\ \end{array} } \right)\sin (\xi )$$
(57)

and the eigenvalues are

$$\mu_{k} (\xi ) = \cos (\xi ) + i\lambda a_{k} \sin (\xi )$$
(58)

which implies that

$$\begin{aligned} \left| {\mu_{k} (\xi )} \right|^{2} & = \cos^{2} (\xi ) + \lambda^{2} a_{k}^{2} \sin^{2} (\xi ) \\ & = 1 - \left( {1 - \lambda^{2} a_{k}^{2} } \right)\sin^{2} (\xi ) \\ \end{aligned}$$
(59)

Therefore, if \(\rho (A) = \max_{k} \left| {a_{k} } \right|\) satisfies the inequality \(\frac{\Delta t}{\Delta x}\rho (A) \le 1\), then the Von Neumann stability condition holds and \(\left| {\mu_{k} (\xi )} \right| \le 1\) for all k and \(\xi\). It is an exercise to prove that, under strict inequality in the Von Neumann condition, the scheme is dissipative of order 2.
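
As a sketch of the linearized Friedrichs scheme in Eq. (54), the code below applies one step to a two-component system \(u_t = Au_x\), taking the symmetric matrix of the 1D Maxwell system of Sect. 2.4.2 as an example (so that \(\rho(A) = 1\)); the matrix choice and names are illustrative assumptions.

```python
import numpy as np

# Example system matrix: E_t = H_x, H_t = E_x in the form u_t = A u_x
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

def friedrichs_step(U, dt, dx):
    """One step of Eq. (54) for u_t = A u_x; U has shape (2, N), periodic in space."""
    lam = dt / dx
    Up, Um = np.roll(U, -1, axis=1), np.roll(U, 1, axis=1)
    return 0.5 * (Up + Um) + 0.5 * lam * (A @ (Up - Um))
```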

Upwind schemes are motivated by the scalar equation \(u_{t} = au_{x}\) with \(p = 1\). If \(a > 0\), the characteristics are straight lines moving to the left, and the scheme constructed in order to “follow” the physical characteristics is:

$$U_{j}^{n + 1} = U_{j}^{n} + \frac{\Delta t}{\Delta x}a\left( {U_{j + 1}^{n} - U_{j}^{n} } \right),\quad a > 0$$
(60)

This scheme is accurate and stable for \(0 < a\lambda \le 1\), where \(\lambda = \frac{\Delta t}{\Delta x}\). On the other hand, if \(a < 0\), then the characteristics point to the right and

$$U_{j}^{n + 1} = U_{j}^{n} + \frac{\Delta t}{\Delta x}a\left( {U_{j}^{n} - U_{j - 1}^{n} } \right),\quad a < 0$$
(61)

In this case, stability follows from the condition \(- 1 \le \lambda a < 0\).
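
A small sketch combining Eqs. (60) and (61), selecting the one-sided difference according to the sign of a, is given below; periodic boundaries via `np.roll` are an assumption made for brevity.

```python
import numpy as np

def upwind_step(U, a, dt, dx):
    """One upwind step for u_t = a u_x on a periodic grid (Eqs. 60-61)."""
    lam = dt / dx
    if a > 0:
        return U + lam * a * (np.roll(U, -1) - U)   # Eq. (60): forward difference
    return U + lam * a * (U - np.roll(U, 1))         # Eq. (61): backward difference
```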

2.6.2 Second Order Schemes

A scheme for approximating the solution of \(u_{t} = A(u)u_{x}\) is called a Lax-Wendroff scheme if under the assumption \(A(u) = A\) (or \(F(u) = A{\kern 1pt} u\) is linear) the scheme reduces to

$$U_{j}^{n + 1} = U_{j}^{n} + \frac{\Delta t}{2\Delta x}A\left( {U_{j + 1}^{n} - U_{j - 1}^{n} } \right) + \frac{1}{2}\left( {\frac{\Delta t}{\Delta x}A} \right)^{2} \left( {U_{j + 1}^{n} - 2U_{j}^{n} + U_{j - 1}^{n} } \right)$$
(62)

The above scheme is in fact the only second order two-level scheme on this three-point stencil for the linear problem.

Lax-Wendroff schemes arise from the idea of replacing time derivatives by space derivatives, using the equation \(u_{t} = F(u)_{x}\) and approximating the latter by finite differences. Using a Taylor expansion for u,

$$u\left( {x,t + \Delta t} \right) = u\left( {x,t} \right) + \Delta tu_{t} \left( {x,t} \right) + \frac{{\Delta t^{2} }}{2}u_{tt} \left( {x,t} \right) + O\left( {\Delta t^{3} } \right)$$
(63)

Since \(u_{t} \left( {x,t} \right) = F\left( {u\left( {x,t} \right)} \right)_{x}\), in the linear case where \(F(u) = Au\)

$$u_{t} \left( {x,t} \right) = Au_{x} \left( {x,t} \right)$$
(64)
$$u_{tt} \left( {x,t} \right) = A^{2} u_{xx} \left( {x,t} \right)$$
(65)

The amplification matrix of the linear form of the Lax-Wendroff scheme is

$$G(\xi ) = I + i\lambda A\sin (\xi ) + \lambda {}^{2}A^{2} \left( {\cos (\xi ) - 1} \right)$$
(66)

With \(\xi = k\Delta x\), \(\lambda = \frac{\Delta t}{\Delta x}\) and \(\eta = \sin \left( {\frac{\xi }{2}} \right)\),

$$G(\xi ) = I + 2i\lambda A\eta \sqrt {1 - \eta^{2} } - 2\lambda^{2} A^{2} \eta^{2}$$
(67)

Any eigenvalue \(\mu (\eta )\) of the amplification matrix satisfies

$$\mu (\eta ) = 1 + 2i\lambda \mu (A)\eta \sqrt {1 - \eta^{2} } - 2\lambda^{2} \mu (A)^{2} \eta^{2}$$
(68)

The modulus of the eigenvalues \(\mu (\eta )\) of the amplification matrix therefore satisfies

$$\left| {\mu (\eta )} \right|^{2} = 1 - 4\lambda^{2} \mu (A)^{2} \eta^{4} \left( {1 - \lambda^{2} \mu (A)^{2} } \right)$$
(69)

which holds for every eigenvalue of \(G(\xi )\). The spectral radius of \(G(\xi )\) is the maximum value of \(\left| {\mu (\eta )} \right|\). If \(\mu_{*}\) is the eigenvalue of A which maximizes the above expression, then

$$\left| {\rho (G)} \right|^{2} = 1 - 4\lambda^{2} \mu_{*}^{2} \eta^{4} \left( {1 - \lambda^{2} \mu_{*}^{2} } \right)$$
(70)

The Von Neumann condition will be satisfied if

$$\lambda \rho (A) \le 1$$
(71)

which implies \(\lambda \left| {\mu (A)} \right| \le 1\) for all eigenvalues of A. Furthermore, if \(\lambda \left| {\mu_{*} } \right| < 1\), then the scheme given by Eq. 62 is dissipative of order 4.

For the nonlinear case,

$$u_{tt} = \left[ {F(u)} \right]_{xt} = \left[ {F(u)_{t} } \right]_{x} = \left[ {A(u)u_{t} } \right]_{x} = \left[ {A(u)F(u)_{x} } \right]_{x}$$
(72)

Substituting \(u_{t} = F(u)_{x}\) and using Taylor expansion,

$$u\left( {x,t + \Delta t} \right) = u\left( {x,t} \right) + \Delta tF(u)_{x} + \frac{{\Delta t^{2} }}{2}\left[ {A(u)F(u)_{x} } \right]_{x} + O\left( {\Delta t^{3} } \right)$$
(73)
$$\begin{aligned} U_{j}^{n + 1} & = U_{j}^{n} + \frac{\Delta t}{2\Delta x}\left( {F_{j + 1}^{n} - F_{j - 1}^{n} } \right) \\ & \quad + \frac{1}{2}\left( {\frac{\Delta t}{\Delta x}} \right)^{2} \left( {A_{{j + \frac{1}{2}}}^{n} \left( {F_{j + 1}^{n} - F_{j}^{n} } \right) - A_{{j - \frac{1}{2}}}^{n} \left( {F_{j}^{n} - F_{j - 1}^{n} } \right)} \right) \\ \end{aligned}$$
(74)

where \(F_{j}^{n} = F\left( {U_{j}^{n} } \right)\) and

$$A_{{j + \frac{1}{2}}}^{n} = A\left( {\frac{{U_{j + 1}^{n} + U_{j}^{n} }}{2}} \right)$$
(75)

The scheme of Eq. 74 becomes rather inefficient in practical applications due to the many computations needed at each time step to evaluate A and F. A very popular modification of this scheme approximates derivatives at “half stages” of the iteration,

$$u\left( {x,t + \Delta t} \right) = u\left( {x,t} \right) + \Delta tu_{t} \left( {x,t + \frac{1}{2}\Delta t} \right) + O\left( {\Delta t^{2} } \right)$$
(76)

and it is known as the MacCormack scheme. Each iteration has two steps corresponding to first order approximations of the solution at half steps.

The scheme is given by:

$$U_{j}^{*} = U_{j}^{n} + \frac{\Delta t}{\Delta x}\left( {F_{j + 1}^{n} - F_{j}^{n} } \right)$$
(77)
$$U_{j}^{n + 1} = \frac{1}{2}\left( {U_{j}^{n} + U_{j}^{*} + \frac{\Delta t}{\Delta x}\left( {F_{j}^{*} - F_{j - 1}^{*} } \right)} \right)$$
(78)

where \(F_{j}^{n} = F\left( {U_{j}^{n} } \right)\), \(F_{j}^{*} = F\left( {U_{j}^{*} } \right)\).

This is a two-stage scheme which evaluates a “predictor” \(U_{j}^{*}\) and a “corrector” \(U_{j}^{**} = U_{j}^{*} + \frac{\Delta t}{\Delta x}\left( {F_{j}^{*} - F_{j - 1}^{*} } \right)\), and then forms \(U_{j}^{n + 1}\) as the average \(\left( {U_{j}^{**} + U_{j}^{n} } \right)/2\).

It is clear that, in order to evaluate \(U_{j}^{n + 1}\), the scheme uses the same grid points at time n as the Lax-Wendroff scheme. The “efficiency” of a scheme is often related to the cost in computer time of each iteration, and in these terms one can compare different schemes. The Lax-Wendroff scheme must evaluate \(F_{j + 1}^{n}\), \(F_{j}^{n}\), \(F_{j - 1}^{n}\), \(A_{{j + \frac{1}{2}}}^{n}\) and \(A_{{j - \frac{1}{2}}}^{n}\) and perform matrix multiplications in each iteration, whereas the MacCormack scheme requires only the evaluation of \(F_{j + 1}^{n}\), \(F_{j}^{n}\), \(F_{j}^{*}\) and \(F_{j - 1}^{*}\).

It only remains to establish the order of accuracy of the MacCormack scheme. Its local truncation error is \(O\left( {\Delta t^{2} } \right) + O\left( {\Delta x^{2} } \right) + O\left( {\Delta t\Delta x} \right)\), where \(\Delta t = O\left( {\Delta x} \right)\), so the scheme has second order accuracy in space and time.

The MacCormack scheme uses a forward difference for the predictor and a backward difference for the corrector step. It has second order accuracy, like the Lax-Wendroff method, but it is much easier to apply since there is no need to evaluate second time derivatives [19].
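
The two-stage structure of Eqs. (77)–(78) translates directly into code; the sketch below assumes a periodic grid and uses the linear flux \(F(u) = au\) purely as a placeholder example, with all names illustrative.

```python
import numpy as np

def maccormack_step(U, dt, dx, F):
    """One MacCormack step for u_t = F(u)_x on a periodic grid (Eqs. 77-78)."""
    lam = dt / dx
    Fn = F(U)
    # Predictor (Eq. 77): forward difference of the flux
    U_star = U + lam * (np.roll(Fn, -1) - Fn)
    F_star = F(U_star)
    # Corrector (Eq. 78): backward difference of the predicted flux, then average
    return 0.5 * (U + U_star + lam * (F_star - np.roll(F_star, 1)))

# Example usage with the placeholder flux F(u) = a*u
a = 1.0
x = np.linspace(0.0, 1.0, 200, endpoint=False)
dx = x[1] - x[0]
U = np.sin(2 * np.pi * x)
U = maccormack_step(U, dt=0.4 * dx, dx=dx, F=lambda u: a * u)
```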

Among the class of second order non-dissipative schemes is the leapfrog scheme. For the general non-linear equation, the scheme is given by:

$$U_{j}^{n + 1} = U_{j}^{n - 1} + \frac{\Delta t}{\Delta x}\left( {F_{j + 1}^{n} - F_{j - 1}^{n} } \right)$$
(79)

This scheme has been analyzed in detail for the linear case: it is not dissipative but it is stable, provided that \(\frac{\Delta t}{\Delta x}\rho (A) < 1\). The fact that Eq. 79 is second order accurate follows from a straightforward calculation. This scheme is generally more efficient than Lax-Wendroff schemes, although it needs roughly twice as much memory because two previous time levels are required to evaluate \(U^{n + 1}\); in practice one must therefore face a trade-off between efficiency and storage requirements. Since this is a non-dissipative scheme, it will not give good approximations for nonlinear equations, and a dissipative term is introduced into Eq. 79 to deal with this problem. When adding a dissipative term in the form of a small perturbation, care must be taken so that the resulting linear scheme retains stability. Recall that in the linear case \(F(u) = Au\), the amplification matrix \(G(\xi )\) is a \(2p \times 2p\) matrix (A itself is a \(p \times p\) matrix)

$$G(\xi ) = \left( {\begin{array}{*{20}c} {2i\lambda A\sin (\xi )} & I \\ I & 0 \\ \end{array} } \right)$$
(80)

where now each of the entries is itself a \(p \times p\) matrix. In order to express the eigenvalues \(\mu (\xi )\) of G in terms of those of A, note that if A is diagonalizable by a matrix T, then the transformed matrix \(\widehat{G}\) below has the same eigenvalues as G.

$$\begin{aligned} \widehat{G}\left( \xi \right) & = \left( {\begin{array}{*{20}c} {T^{ - 1} } & 0 \\ 0 & I \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {2i\lambda A\sin (\xi )} & I \\ I & 0 \\ \end{array} } \right)\left( {\begin{array}{*{20}c} T & 0 \\ 0 & I \\ \end{array} } \right) \\ & = \left( {\begin{array}{*{20}c} {2{\kern 1pt} i{\kern 1pt} \lambda {\kern 1pt} T^{ - 1} A{\kern 1pt} T\sin (\xi )} & I \\ I & 0 \\ \end{array} } \right) \\ \end{aligned}$$
(81)

Recall that \(T^{ - 1} A{\kern 1pt} T\) is a diagonal matrix with diagonal entries \(a_{1} , \ldots ,a_{p}\). From this expression, it follows that any eigenvalue \(\mu (\xi )\) of the amplification matrix satisfies:

$$\mu^{2} (\xi ) = 1 + 2i\lambda a_{j} \sin (\xi )\mu (\xi ),\quad j = 1,2, \ldots ,p$$
(82)

Adding a dissipative term to the leapfrog scheme at time level n gives rise to instabilities. Suppose the term

$$\varepsilon \left( {U_{j + 1}^{n} - 2U_{j}^{n} + U_{j - 1}^{n} } \right)$$
(83)

is added to the scheme (Eq. 79), where \(\varepsilon\) is a small perturbation. Notice that any modification at time level n will affect the first block in the amplification matrix. The modified amplification matrix will be of the form:

$$G(\xi ) = \left( {\begin{array}{*{20}c} {2i\lambda A\sin (\xi ) + \varepsilon \sin^{2} \left( {\xi /2} \right)I} & I \\ I & 0 \\ \end{array} } \right)$$
(84)

and therefore the eigenvalues will now satisfy:

$$\mu^{2} (\xi ) = 1 + \left( {2i\lambda a_{j} \sin (\xi ) + \varepsilon \sin^{2} \left( {\xi /2} \right)} \right)\mu (\xi )$$
(85)

If E denotes the shift operator \(E{\kern 1pt} U_{j}^{n} = U_{j + 1}^{n}\), adding a dissipative term at time level n amounts to modifying Eq. 79, yielding the scheme:

$$U_{j}^{n + 1} = U_{j}^{n - 1} + \frac{\Delta t}{\Delta x}A\left( {U_{j + 1}^{n} - U_{j - 1}^{n} } \right) + \varepsilon P(E)U_{j}^{n}$$
(86)

where \(P(E)\) is a function of the shift operator \(\left( {P(E) = E - 2I + E^{ - 1} } \right)\). Since \(P(E)\) approximates a second order derivative, its Fourier transform \(\widehat{P}(\xi )\) will be a real function of \(\xi\) and thus the modified eigenvalues will in general satisfy:

$$\mu^{2} (\xi ) = 1 + \left( {2i\lambda a_{j} \sin (\xi ) + \varepsilon \widehat{P}(\xi )} \right)\mu (\xi )$$
(87)

for some eigenvalue \(a_{j}\) of A.

Let \(x_{1}\) and \(x_{2}\) be the solutions of the equation \(x^{2} - \alpha x - 1 = 0\). If both \(\left| {x_{1} } \right| \le 1\) and \(\left| {x_{2} } \right| \le 1\), then necessarily the coefficient \(\alpha\) is purely imaginary.

Using exactly the same analysis, the leapfrog scheme gives rise to instabilities when it is used to approximate parabolic equations. For the heat equation, this can also be explained by the stability region of the leapfrog method, which lies only on the imaginary axis, while the centered finite difference used to approximate the second order derivative gives real eigenvalues.

In order to introduce the correct amount of dissipation, the dissipative term should be added at time level \(n - 1\). With the half-shift operator defined by \(E^{1/2} U_{j}^{n} = U_{j + 1/2}^{n}\), the leapfrog scheme Eq. 79 can be rewritten in the form:

$$U_{j}^{n + 1} = U_{j}^{n - 1} + \frac{\Delta t}{\Delta x}\left( {E^{1/2} - E^{ - 1/2} } \right)\left( {E^{1/2} + E^{ - 1/2} } \right)F_{j}^{n}$$
(88)

in general form:

$$\begin{aligned} U_{j}^{n + 1} & = U_{j}^{n - 1} + \frac{\Delta t}{\Delta x}\left( {E^{1/2} - E^{ - 1/2} } \right)\left( {E^{1/2} + E^{ - 1/2} } \right)F_{j}^{n} \\ & \quad - \frac{\varepsilon }{16}\left( {E^{1/2} - E^{ - 1/2} } \right)^{4} U_{j}^{n - 1} \\ \end{aligned}$$
(89)

With \(\eta = \sin \left( {\xi /2} \right)\), the amplification matrix of the linearized scheme is

$$G(\xi ) = \left( {\begin{array}{*{20}c} {2i\lambda A\sin (\xi )} & {\left( {1 - \varepsilon \eta^{4} } \right)I} \\ I & 0 \\ \end{array} } \right)$$
(90)

and the eigenvalues hold the relations:

$$\mu^{2} (\xi ) = 1 - \varepsilon \eta^{4} + 2i\lambda \mu (A)\sin (\xi )\mu (\xi )$$
(91)

for some eigenvalue \(\mu (A)\) of A. Therefore:

$$\mu (\xi ) = i\lambda \mu (A)\sin (\xi ) \pm \sqrt {1 - \left| {\lambda \mu (A)} \right|^{2} \sin^{2} (\xi ) - \varepsilon \eta^{4} }$$
(92)

And \(\left| {\mu (\xi )} \right|^{2} = 1 - \varepsilon \eta^{4}\) provided that

$$1 - \left| {\lambda \mu (A)} \right|^{2} \sin^{2} (\xi ) - \varepsilon \eta^{4} > 0$$
(93)

for all eigenvalues of A and all \(\xi\). Under this condition, the modified scheme Eq. 89 is stable and dissipative. Note, however, that in order for Eq. 93 to hold, whenever dissipation is added \((\varepsilon > 0)\) the value of \(\lambda = \frac{\Delta t}{\Delta x}\) must also be decreased. This means that, for a fixed space grid, a larger number of time steps must be evaluated to obtain an approximate solution at some given time t.
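
A minimal sketch of one step of the dissipative leapfrog scheme in Eq. (89) is shown below, assuming a periodic grid and the linear flux \(F(u) = au\); the fourth-difference operator \((E^{1/2}-E^{-1/2})^4\) expands to the five-point stencil used in the code, and all names are illustrative assumptions.

```python
import numpy as np

def dissipative_leapfrog_step(U_prev, U_now, dt, dx, a, eps):
    """One step of Eq. (89) for u_t = F(u)_x with F(u) = a*u, periodic grid."""
    lam = dt / dx
    F = a * U_now
    # (E^{1/2} - E^{-1/2})(E^{1/2} + E^{-1/2}) F = F_{j+1} - F_{j-1}
    leap = U_prev + lam * (np.roll(F, -1) - np.roll(F, 1))
    # (E^{1/2} - E^{-1/2})^4 U = U_{j+2} - 4U_{j+1} + 6U_j - 4U_{j-1} + U_{j-2}, at level n-1
    d4 = (np.roll(U_prev, -2) - 4 * np.roll(U_prev, -1) + 6 * U_prev
          - 4 * np.roll(U_prev, 1) + np.roll(U_prev, 2))
    return leap - eps / 16.0 * d4
```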

3 Finite Difference Time Domain Applications in Electrical Engineering

The FDTD method used in the solution of Maxwell’s equations allows the analysis of electric and magnetic fields and their interactions with the medium. Maxwell’s equations, which contain derivatives in time and space, are solved by using the future and past values of the fields in time and space [20]. In the solution of this discrete set in space and time, the electric and magnetic fields are computed in an interleaved manner, and the value obtained at each step becomes the initial value for the next step [21]. The relationship between these two quantities is illustrated in Fig. 1.

Fig. 1 Time and spatial discretization in the FDTD method [21]

This method is used in many different applications since it is a very powerful tool for solving partial differential equations. These applications include engineering problems such as percussion instrument models, where different sampling frequencies are used to reduce the simulation time, the grounding characteristics of wind turbines in low-resistivity soil, the propagation of partial discharge signals in HV current transformers, and the analysis of electromagnetic interaction currents flowing in the power cables of DC-DC converters [20, 22, 23]. Since the examination of all these application areas is beyond the scope of this chapter, the applications of the FDTD method in the power system and high voltage industry are discussed.

Electromagnetic transient and non-transient simulations have become an important tool for planning, operation and fault analysis in electrical power systems [8]. These simulations concentrate on transient-state analyses of power system equipment such as circuit breakers (CBs), lightning arresters, overhead and underground cables, ultra-high frequency (UHF) sensors and power transformers (PTs) [8, 9, 24, 25].

Determination of the voltages and currents induced by lightning discharges, one of the major sources of faults in power systems, is critical for the protection of power system equipment [26]. Aodsup and Kulworawanichpong [24] examined the propagation and reflection of lightning strikes in lightning arresters with silicon carbide (SiC) and metal oxide varistor (MOV) elements by adapting the Telegraphist’s equations to the FDTD method. According to the simulation results, the MOV arrester reflects and transmits the impulse surge more smoothly than the SiC arrester [24]. Nagarjuna and Chandrasekaran [21] adapted the transmission line approach to FDTD equations to examine the current and voltage characteristics of a horizontal ground electrode under high impulse currents. Izadi et al. [26] calculated the electric and magnetic fields at different times and positions during the advance of the lightning channel using Maxwell’s equations and a 2nd order FDTD scheme, and the proposed algorithm showed good agreement with the measurement results.

Analyzing the electromagnetic behavior of overhead and underground cables, which are important parts of power transmission and distribution, improves system design. These cables, consisting of multiple layers with different characteristics between the cable core and the sheath, can be successfully modeled by the FDTD method with a high spatial discretization [27]. However, frequency-dependent FDTD models are numerically unstable or their computational time is excessive, and underground cable applications of these models are very limited. To overcome these limitations, the FDTD method can be extended by taking into account distributed constant parameters such as the skin effect and the imperfect earth in overhead lines [27]. Analyzing the transient responses of electromagnetic fields in underground cables, which are used more and more for environmental, political and technical reasons in high voltage applications, is an important problem. Barakou et al. [28] used the universal line model (ULM) and the FDTD method to model these lines. While the FDTD method provides very high accuracy for slow-front surges, the results are distorted by temporary fluctuations for fast-front surges. In either case, simulation times are almost six times those of the ULM and are quite slow [28].

As a result of the operation of power system equipment such as disconnectors or CBs in gas-insulated substations (GISs), switching pulses called very fast transients (VFTs) may occur in the frequency range from several MHz to more than 100 MHz [9]. The transient electromagnetic disturbances caused by these frequencies have been calculated using FDTD and EMTP, and the results obtained by the FDTD were found to be less oscillatory [9]. Shakeri et al. [25] examined the effect of VFTs on power transformers, one of the most important pieces of equipment in power systems, by adapting multi-conductor transmission line theory to the FDTD method and observed the effect of the electromagnetic waves. In order to increase the accuracy of this model, the winding capacitance matrix was calculated by FEM analysis, and the simulation results confirmed the experimental results [25].

The FDTD method is also used in the electromagnetic modeling stage to understand the behavior and improve the performance of UHF sensors used to detect partial discharges that can be dangerous for power systems and transformers [6]. Ishak et al. [22], using the FDTD-integrated UHF sensor developed for this purpose, achieved results in agreement with experiments over the wide frequency range of 500–1500 MHz. Another proposed approach for determining the behavior of a UHF-based test system used as a PD sensor in high voltage cables, and for investigating the PD coupling process, is the combination of the FDTD method with transfer function theory [2]. This approach was applied to an 11 kV XLPE cable by Hu et al. [2]. In another application, in which the amplitude and charge of the partial discharge current in gas-insulated switchgear were examined, the data obtained with a voltage probe placed on the outer surface of a three-phase 84 kV-class gas-insulated switch were verified by simulation results obtained with the three-dimensional FDTD method [29].

Busbar structures commonly used in high voltage transmission are also affected by electromagnetic fields radiated from switching operations. These analyses become even more important for high voltage equipment located close to switching equipment and for its electronic circuits [4]. The FDTD method is successfully used for modeling busbar structures in high voltage air-insulated substations [5]. Musa et al. [4] modeled the transient electromagnetic fields resulting from switching operations in a 400 kV air-insulated substation with this method by simply specifying the constitutive parameters of the structures.

Grounding behavior, which is one of the important parameters for ensuring system reliability in transmission and distribution systems, can also be examined with the FDTD method. In this context, the soil ionization phenomenon, which reduces the ground electrode resistance, has been investigated by the FDTD method and applied to a typical 500/220 kV high voltage substation [7].

Finite element and finite difference methods are also widely used in high voltage engineering to calculate the breakdown characteristics of gaseous dielectrics [30]. The electron drift velocity, mean energy, ionization and attachment coefficients of dielectric gases such as SF6, CF4, CHF3 and argon, which are frequently used in the gas insulation industry, can be calculated by using the finite difference method to solve the Boltzmann equation [31,32,33].

Despite this widespread use of the FDTD method, it also has limitations, such as the difficulty of defining the uncertainties inherent in electromagnetic fields and the excessive computation time needed in the analysis of large objects [9, 34].

In the calculation of electromagnetic fields, properties of the object such as its geometry, electrical parameters, material characteristics and input sources can increase randomness and thus uncertainty [35]. The uncertainty in these input parameters is reflected in the electromagnetic fields, which are the output parameters, and a parametric uncertainty appears in the resulting components [36]. Identifying these uncertainties, which are very important in some engineering problems, is also a major research topic in the analysis of electromagnetic fields [35]. As an alternative to the Monte Carlo method used to identify parametric uncertainties, there are many methods and approaches combined with FDTD. In this context, methods such as stochastic approaches, polynomial chaos, control variates, and the method of moments have been combined with the FDTD method to define the uncertainties in the calculation of electromagnetic fields [35,36,37,38].

Chen [34] used a hybrid implicit-explicit approach in combination with the FDTD method to overcome the problem of electromagnetic modeling of very fine structures. This proposed method is applicable to many boundary conditions, including connecting boundaries, absorbing boundaries and periodic boundaries. In order to overcome the computation time problems encountered in the electromagnetic modeling of electrically large objects, Shi et al. [39] proposed an FDTD method combined with the Internet of Things, in which multiple processors are connected in parallel.

Another important problem of the FDTD method is the increased response time at high frequencies and the decrease in the accuracy of the analysis results [27]. To overcome this disadvantage, alternative models and software combined with the FDTD method are used in the analysis of the high frequency transients frequently encountered due to switching and lightning in high voltage equipment, especially cables [2, 27].

4 Conclusions

In this chapter, the time-dependent finite difference method, which is widely used in the solution of engineering problems defined by differential equations, has been examined in a theoretical framework and through application examples. In order to limit the scope of the engineering applications, the examination concentrated on the use of the finite difference method in the analysis of electric and magnetic fields in power systems and high voltage equipment. The limitations of the finite difference method were defined, and how these limitations can be overcome by combining it with different methods and approaches was discussed. The finite difference method, which is an important tool for the robust and accurate calculation of electromagnetic fields, is being used ever more widely in transient analysis as well as in steady-state analysis.