
2.1 Background

We commence with a story motivated by an observation of Professor KM Tamizhmani concerning modelling. It concerns a beautiful young woman and two suitors, both of them handsome young men named Krishnakumar and Sinuvasan. The young woman takes a pile of pebbles and puts one to the left for each ewe owned by Krishnakumar and one to the right for each ewe owned by Sinuvasan. When all of the ewes have been counted, there are 15 pebbles to the left and 10 to the right. Which suitor will the woman accept?

The modelling behind this question is to use a pebble to represent a ewe, but there is a further consideration. What criterion does the woman use to reach her decision? Is it the number of ewes or is it the social convention as to who does the milking?

Here we have a very simple instance of mathematical modelling in application. All that was required for such a model to be contemplated was the social value of the ewes. A more sophisticated application of mathematics can be found in the times of the Babylonians, who used linear interpolation to approximate the sine function as an aid to the determination of the position of the Moon, an important matter for a society which used the lunar month. A millennium or so later Pythagoras travelled to the Land of Egypt and observed that the surveyors were making use of the 3-4-5 rule to ensure that the farmers’ fields were quite rectangular. He returned to his native Samos and did what philosophers do best, which is to take something practical and construct a Theorem from it.

Another example of applied mathematics from ancient times is found in the work of Claudios Ptolemaios who devised a method for the calculation of orbits by the simple expedient of having circles move on circles. Ptolemaios made a mistake in his modelling in two respects. Firstly he assumed that the Earth was the centre of the Universe and secondly that the basic component of an orbit was a circle. Nevertheless his method provided accurate predictions for roughly one and a half millennia until after the physically acceptable model was introduced in the seventeenth century. This illustrates an interesting point about modelling and applied mathematics. The model may be physically incorrect and yet provide correct answers.

With the advent of the seventeenth century mathematical modelling developed remarkably well in the various branches of Mechanics. As the years turned into centuries, Mechanics became subdivided into specific areas—Classical, Continuum, Quantum, Relativistic—to such an extent that Applied Mathematics became synonymous with Mechanics. When other areas of application and modelling were developed, it was fashionable to call these areas Applicable Mathematics to avoid the obvious taint of Mechanics. Fortunately such a distinction appears to have faded in recent decades.

One of the reasons for the loss of the distinction can be found in the universality of differential equations. The same equation appears in various diverse applications. An example of the proliferation of models can be seen in a recent paper [17] devoted to solutions of the Fisher Equation and some of its generalisations. Some of the fields of application mentioned are logistic models of population growth, flame propagation, neurophysiology, autocatalytic chemical reactions and branching processes based on Brownian motion. According to [26], the original problem modelled the propagation of a gene in a population. The classical Fisher Equation

$$\begin{aligned} u_t = b u_{xx} + a u(1-u), \quad ab \not = 0, \end{aligned}$$
(2.1)

first appeared seventy-five years ago in [12]. It is remarkable how disparate processes can be modelled by what is essentially the same equation. Only the labels and possibly the boundary/initial conditions change.

In 1828, Robert Brown [4] reported his observations of 1827 concerning the motion of particles, such as small pieces of broken pollen, suspended in a fluid. The irregular motion subsequently became known as Brownian motion and has become an important concept in the modelling of a wide variety of phenomena, ranging from Statistical Physics to Financial Mathematics with quite a few stops in between. The essential point is that the mechanisms in each of these phenomena are based on some form of random, or stochastic, motion. Our particular interest today comprises some equations which have arisen in the field of Financial Mathematics. The literature is quite vast, but the seminal papers can be counted on the fingers of one hand.

2.2 An Algebraic Diversion

Before we begin our examination of some of the equations which arise in finance, we should recall a little of the algebraic theory of differential equations.

A differential equation,

$$\begin{aligned} E \left( x,\, u,\,u_x,\,u_{xx},\,\ldots \right) = 0, \end{aligned}$$
(2.2)

in which all symbols can be multisymbols, is invariant under the infinitesimal transformation generated by the operator

$$\begin{aligned} \varGamma = \xi \partial _x + \eta \partial _u \end{aligned}$$
(2.3)

if

$$\begin{aligned} \varGamma ^ {[n]}E_{\left| E = 0\right. } = 0, \end{aligned}$$
(2.4)

where \(\varGamma ^ {[n]} \) is the extension of \(\varGamma \) to account for all of the derivatives occurring in E. The invariance expressed in (2.4) is required to hold only when (2.2) itself is taken into account, i.e., on the solutions of (2.2). Usually the coefficient functions, \(\xi \) and \(\eta \), are taken to be functions of x and u only, i.e., the infinitesimal transformation is a point transformation, but it is also possible to include derivatives.
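
For the convenience of the reader we recall the explicit form of the extension in the case of a single independent variable; these are the standard prolongation formulae of Lie's theory and are not specific to any of the equations treated below. Writing \(D_x = \partial _x + u_x\partial _u + u_{xx}\partial _{u_x} + \cdots \) for the total derivative, the coefficients of \(\partial _{u_x} \) and \(\partial _{u_{xx}} \) in \(\varGamma ^ {[2]} \) are

$$\begin{aligned} \eta ^ {[x]} = D_x\eta - u_xD_x\xi , \qquad \eta ^ {[xx]} = D_x\eta ^ {[x]} - u_{xx}D_x\xi , \end{aligned}$$

and the pattern continues to whatever order is required by E.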

The number of symmetries of the form (2.3) can range from zero to infinity, depending upon the equation being studied. Under the operation of taking the Lie Bracket

$$\begin{aligned} \left[ \varGamma _i,\,\varGamma _j\right] _{LB} = \varGamma _i\varGamma _j - \varGamma _j\varGamma _i \end{aligned}$$
(2.5)

one obtains a Lie algebra. Different equations can have the same algebra even if their provenances are quite disparate.
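
The computation of (2.5) is easily mechanised. The following sketch (in Python with sympy; it is offered purely as an illustration and is not part of the package mentioned below) represents an operator of the form (2.3) by the pair of its coefficient functions and evaluates the Lie Bracket componentwise.

```python
# A minimal sketch (Python/sympy): an operator xi*d_x + eta*d_u is stored as the
# pair (xi, eta) and the Lie Bracket (2.5) is computed componentwise.
import sympy as sp

x, u = sp.symbols('x u')

def lie_bracket(X, Y, coords=(x, u)):
    """Return [X, Y]_LB for vector fields given as tuples of coefficients."""
    def apply_field(Z, f):
        return sum(zi*sp.diff(f, ci) for zi, ci in zip(Z, coords))
    return tuple(sp.simplify(apply_field(X, Y[k]) - apply_field(Y, X[k]))
                 for k in range(len(coords)))

Ga = (sp.Integer(1), sp.Integer(0))   # d_x
Gb = (x, u)                           # x d_x + u d_u

print(lie_bracket(Ga, Gb))            # (1, 0), i.e. [Ga, Gb]_LB = d_x
```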

The symmetries, if sufficient in number and type, can be used to reduce the equation and even to find its solution. The calculation of the symmetries is usually an exercise in advanced tedium and is best left to a computer algebra code. Various packages are available and they are of variable quality. We use Sym [1, 7–9], which operates in Mathematica. For the identification of the algebra, we make use of the classification scheme of Mubarakzyanov [20–23].

2.3 The Black–Scholes Equation

The Black–Scholes–Merton equation [2, 3, 19],

$$\begin{aligned} u_t+ \frac{1}{2}\sigma ^ 2 x ^ 2u_{xx} + rxu_x - ru = 0, \end{aligned}$$
(2.6)

is the precursor of the many evolution partial differential equations which have been derived in the modelling of various financial processes. Basically it has to do with the pricing of options, but anything vaguely connected, such as corporate debt, is equally grist for its mill. The symmetry analysis of (2.6) was first undertaken by Gazizov and Ibragimov [13]. After determining the symmetries they obtained the solution for an initial condition in the form of a delta function, which is a typical initial condition for the heat equation. A more typical problem in finance is the solution of (2.6) subject to what is known as a terminal condition, i.e. \(u (T,x) = U \) when \( t = T \), and it is this problem which we solve to give a demonstration of the methodology.

The Lie point symmetries of (2.6) are

$$\begin{aligned} \varGamma _1= & {} x\partial _x \\ \varGamma _2= & {} 2tx\partial _x+ \left\{ t - \displaystyle {\frac{2}{\sigma ^ 2}}\left( rt -\log x\right) \right\} u\partial _u \\ \varGamma _3= & {} u\partial _u \\ \varGamma _4= & {} \partial _t \\ \varGamma _5= & {} 8t\partial _t+4x\log x\partial _x+ \left\{ 4tr + \sigma ^ 2t+2\log x+\displaystyle {\frac{4r}{\sigma ^ 2}}\left( rt - \log x\right) \right\} u\partial _u \\ \varGamma _6= & {} 8t ^ 2\partial _t+ 8t x\log x\partial _x+ \left\{ -4t+ 4t ^ 2 r + \sigma ^ 2t ^ 2 + 4t\log x+\displaystyle {\frac{4}{\sigma ^ 2}}\left( rt - \log x\right) ^ 2\right\} u\partial _u \\ \varGamma _{\infty }= & {} f (t,x)\partial _u, \end{aligned}$$

where \(\varGamma _{\infty } \) represents the infinite set of solution symmetries, \(f (t,x) \) being any solution of (2.6). The algebra of the finite subset is \(sl (2,R)\oplus _sW_3 \), where \(W_3 \) is the three-dimensional Heisenberg–Weyl algebra.

To solve the problem of the terminal condition we take a linear combination of the finite set of symmetries, \(\varGamma = \sum _{i = 1} ^ {6}\alpha _i\varGamma _i \), and apply it to the two conditions given above. In the case of \(t = T \) we obtain

$$\begin{aligned} \alpha _4+ 8T\alpha _5+ 8T ^ 2\alpha _6 = 0 \end{aligned}$$
(2.7)

in which we have replaced t by its specified value. When we turn to the condition \(u (T,x) = U \) and make the appropriate substitutions, we obtain

$$\begin{aligned}&\alpha _2 \left\{ T - \displaystyle {\frac{2}{\sigma ^ 2}}\left( rT -\log x\right) \right\} U+\alpha _3 U \nonumber \\&+\,\alpha _5 \left\{ 4Tr + \sigma ^ 2T+2\log x+\displaystyle {\frac{4r}{\sigma ^ 2}}\left( Tr - \log x\right) \right\} U \nonumber \\&+\,\alpha _6 \left\{ -4T+ 4T ^ 2 r + \sigma ^ 2T ^ 2 + 4T\log x+\displaystyle {\frac{4}{\sigma ^ 2}}\left( rT - \log x\right) ^ 2\right\} U = 0. \end{aligned}$$
(2.8)

The coefficient of \((\log x) ^2 \) in (2.8) means that \(\alpha _6 = 0 \) and hence from (2.7) that \(\alpha _4 = -8T\alpha _5 \). Returning to (2.8), the coefficient of \(\log x \) leads to \(2\alpha _2 U/\sigma ^ 2 + (2 - 4r/\sigma ^ 2)\alpha _5 U = 0 \) and the remaining terms give \(\alpha _2 T (1 - 2r/\sigma ^ 2)U+\alpha _3U+\alpha _5 T (4r +\sigma ^ 2 + 4r ^ 2/\sigma ^ 2)U = 0 \). Consequently we have

$$\begin{aligned}&\alpha _1\,\,\text{ is } \text{ arbitrary } \nonumber \\&\alpha _2 = (2r-\sigma ^2)\alpha _5 \nonumber \\&\alpha _3 = -8rT\alpha _5 \nonumber \\&\alpha _4 = -8T\alpha _5. \end{aligned}$$
(2.9)
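
The passage from (2.7) and (2.8) to (2.9) is a routine linear calculation which is easily checked by machine. The sketch below (Python with sympy, given purely as a check) treats \(\log x \) as the independent symbol L for the purpose of collecting coefficients; the common factor U has been cancelled and \(\alpha _1 \), which enters neither condition, remains arbitrary.

```python
# A sketch (Python/sympy): collect (2.8) in powers of log x (here the symbol L),
# adjoin the condition (2.7) and solve for the alpha_i in terms of alpha_5.
import sympy as sp

r, sigma, T, L = sp.symbols('r sigma T L')
a2, a3, a4, a5, a6 = sp.symbols('alpha2:7')

expr = sp.expand(
    a2*(T - 2/sigma**2*(r*T - L)) + a3
    + a5*(4*T*r + sigma**2*T + 2*L + 4*r/sigma**2*(T*r - L))
    + a6*(-4*T + 4*T**2*r + sigma**2*T**2 + 4*T*L + 4/sigma**2*(r*T - L)**2))

eqs = [sp.Eq(expr.coeff(L, n), 0) for n in range(3)]   # coefficients of L^0, L^1, L^2
eqs.append(sp.Eq(a4 + 8*T*a5 + 8*T**2*a6, 0))          # condition (2.7)

print(sp.solve(eqs, [a2, a3, a4, a6]))
# expected: alpha2 = (2r - sigma^2) alpha5, alpha3 = -8 r T alpha5,
#           alpha4 = -8 T alpha5, alpha6 = 0, in agreement with (2.9)
```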

As is common with \((1+1) \) evolution partial differential equations of maximal symmetry, there are two symmetries which are compatible with the terminal condition. They are

$$\begin{aligned}&\varLambda _1 = x\partial _x \quad \text{ and } \\&\varLambda _2 = 8(t-T)\partial _t + (4rt-2\sigma ^2 t + 4\log x)x\partial _x + 8r(t-T)u\partial _u \end{aligned}$$

with the Lie Bracket \(\left[ \varLambda _1,\,\varLambda _2\right] _{LB} = 4\varLambda _1\) so that reduction by the normal subgroup, represented by \(\varLambda _1 \), is to be preferred. The invariants of the associated Lagrange’s system,

$$ \frac{\mathrm{{d}}t}{0} = \frac{\mathrm{{d}}x}{x} = \frac{\mathrm{{d}}u}{0}, $$

are t and u so that we introduce the change of variables \(y = t\) and \(v = u \) into (2.6) to obtain the ordinary differential equation

$$ v'-rv = 0 $$

with solution

$$ v = K\mathrm{{e}}^{ry}. $$

In terms of the original variables the solution obtained using \(\varLambda _1 \) is

$$ u = K\mathrm{{e}}^{rt} $$

and on the substitution of the terminal conditions to evaluate the constant of integration, we find that the solution of the terminal problem for (2.6) is

$$\begin{aligned} u (t,x) = U\exp [r(t-T)]. \end{aligned}$$
(2.10)

As the solution of this problem is unique, there is no need to make use of the second symmetry.
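
The result (2.10) is readily confirmed by direct substitution, for example with the following sketch in Python with sympy.

```python
# A sketch (Python/sympy): verify that (2.10) satisfies the Black-Scholes
# equation (2.6) and the terminal condition u(T,x) = U.
import sympy as sp

t, x, r, sigma, T, U = sp.symbols('t x r sigma T U', positive=True)
u = U*sp.exp(r*(t - T))

lhs = (sp.diff(u, t) + sp.Rational(1, 2)*sigma**2*x**2*sp.diff(u, x, 2)
       + r*x*sp.diff(u, x) - r*u)

print(sp.simplify(lhs))           # 0
print(u.subs(t, T))               # U
```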

2.4 The Cox–Ingersoll–Ross Equation

The Cox–Ingersoll–Ross equation [6] (see also [5, 11, 14, 25] for studies of similar equations),

$$\begin{aligned} u_t+ \frac{1}{2}\sigma ^ 2xu_{xx} - (\kappa - \lambda x)u_x - xu = 0, \end{aligned}$$
(2.11)

is an example of an equation for which the number of Lie point symmetries depends upon a relationship between the parameters in the equation.

For unconstrained values of the parameters (2.11) possesses the symmetries [10]

$$\begin{aligned} \varGamma _1&= u\partial _u \\ \varGamma _{2\pm }&= \exp [\pm \beta t ]\left\{ \pm \partial _t+\beta x\partial _x - \displaystyle {\frac{1}{\sigma ^ 2}} (-\beta \pm \lambda ) (\kappa \pm \beta x)u\partial _u\right\} \\ \varGamma _3&= \partial _t \\ \varGamma _{\infty }&= f (t,x)\partial _u, \end{aligned}$$

where, as above, \(\varGamma _{\infty } \) represents the solution symmetries of the linear evolution partial differential equation and \(\beta \) is a constant determined by the parameters of (2.11) [10]. The finite subalgebra is \(sl (2,R)\oplus A_1 \). Although there does not exist a point transformation which takes (2.11) to the classical heat equation, the algebraic structure is that of a heat equation with a source/sink term proportional to \(U/X ^ 2 \) in the transformed variables [10, 18].

Despite the diminution in the number of symmetries compared to (2.6), we can still investigate whether there are sufficient symmetries to solve the problem with a terminal condition. As we did above, we take a linear combination of the elements of the finite subalgebra and apply it to the two conditions, \(u (T,x) = U \) and \( t = T \). The latter gives

$$ \alpha _{2+}\exp [\beta T ] - \alpha _{2 -}\exp [ -\beta T ] + \alpha _3 = 0 $$

and the former

$$\begin{aligned} \alpha _1 U&- \alpha _{2+}\exp [\beta T ]\displaystyle {\frac{1}{\sigma ^ 2}} (-\beta +\lambda ) (\kappa + \beta x)U\\ {}&- \alpha _{2 -}\exp [ -\beta T ]\displaystyle {\frac{1}{\sigma ^ 2}} (-\beta - \lambda ) (\kappa - \beta x)U = 0. \end{aligned}$$

It is necessary to separate the coefficient of x from the constant term. This gives a relationship between \(\alpha _{2+} \) and \(\alpha _{2 -} \). When this is substituted into the remaining terms, we obtain the relationships

$$\begin{aligned} \alpha _1= & {} -\displaystyle {\frac{2\kappa (\beta -\lambda )}{\sigma ^ 2}}\exp [\beta T ]\alpha _{2+}, \nonumber \\ \alpha _{2 -}= & {} \displaystyle {\frac{\beta -\lambda }{\beta +\lambda }}\exp [2\beta T ]\alpha _{2+}, \nonumber \\ \alpha _3= & {} - \displaystyle {\frac{2\lambda }{\beta +\lambda }}\exp [\beta T ]\alpha _{2+}. \end{aligned}$$
(2.12)

Even with the reduced number of symmetries we have been able to obtain a symmetry which is compatible with the terminal condition and this may be used to reduce (2.11) to an ordinary differential equation to be solved.
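
As with the Black–Scholes calculation, the relations (2.12) may be checked mechanically. In the sketch below (Python with sympy) \(\beta \) is simply carried as a parameter, the common factor U has been cancelled and \(\alpha _{2+} \) remains free.

```python
# A sketch (Python/sympy): impose the two terminal conditions for (2.11),
# separate the coefficient of x from the constant term and recover (2.12).
import sympy as sp

x, T, sigma, kappa, lam, beta = sp.symbols('x T sigma kappa lambda beta')
a1, a2p, a2m, a3 = sp.symbols('alpha1 alpha2p alpha2m alpha3')

cond_t = a2p*sp.exp(beta*T) - a2m*sp.exp(-beta*T) + a3            # from t = T

cond_u = sp.expand(                                               # from u(T,x) = U
    a1 - a2p*sp.exp(beta*T)*(-beta + lam)*(kappa + beta*x)/sigma**2
       - a2m*sp.exp(-beta*T)*(-beta - lam)*(kappa - beta*x)/sigma**2)

eqs = [sp.Eq(cond_t, 0),
       sp.Eq(cond_u.coeff(x, 1), 0),
       sp.Eq(cond_u.coeff(x, 0), 0)]

sol = sp.solve(eqs, [a1, a2m, a3])
for key in (a1, a2m, a3):
    print(key, '=', sp.simplify(sol[key]))
# expected, in agreement with (2.12):
#   alpha1  = -2*kappa*(beta - lambda)*exp(beta*T)*alpha2p/sigma**2
#   alpha2m =  (beta - lambda)*exp(2*beta*T)*alpha2p/(beta + lambda)
#   alpha3  = -2*lambda*exp(beta*T)*alpha2p/(beta + lambda)
```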

2.5 The Heath Equation

The evolution partial differential equations which arise in Financial Mathematics are not confined to linear equations. As a simple example we consider the equation treated in Heath [15], namely

$$\begin{aligned} 2 u_t+2 au_x+ b ^ 2u_{xx} - u_x ^ 2+ 2\nu (x) = 0. \end{aligned}$$
(2.13)

For a general function \(\nu (x) \) (2.13) possesses the Lie point symmetries [24]

$$\begin{aligned}&\varGamma _1 = \partial _t, \\&\varGamma _2 = \partial _u, \\&\varGamma _{\infty } = b ^ 2f (t,x)\exp [u/b ^ 2]\partial _u, \end{aligned}$$

where \(f (t,x) \) is any solution of the linear equation

$$\begin{aligned} 2 u_t+2 au_x+ b ^ 2u_{xx} - \frac{2}{b ^ 2}\nu (x)u = 0. \end{aligned}$$
(2.14)

Due to the presence of the arbitrary function \(\nu (x) \) in (2.13) one would not expect any symmetries apart from the obvious \(\varGamma _1 \) and \(\varGamma _2 \). Due to the nonlinearity of (2.13) one would certainly not expect the presence of \(\varGamma _{\infty } \), as this is a characteristic of linear equations. As \(\varGamma _{\infty } \) is present, it is evident that a linearising transformation exists and it is easily inferred from the form of \(\varGamma _{\infty } \) to be given by \(U (t,x,u) = -\exp [ -u/b ^ 2] \). The transformation is of the same form as the Cole–Hopf transformation, which is well known for its linearising effect upon the Burgers equation.
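
The linearisation is easily verified by direct computation. The sketch below (Python with sympy; w denotes the new dependent variable, the U of the transformation above) substitutes \(u = -b ^ 2\log (-w) \) into the left side of (2.13) and confirms that the result vanishes modulo the linear equation (2.14).

```python
# A sketch (Python/sympy): the substitution u = -b^2 log(-w), i.e. w = -exp(-u/b^2),
# carries the nonlinear equation (2.13) into the linear equation (2.14) for w.
import sympy as sp

t, x, a, b = sp.symbols('t x a b', positive=True)
w = sp.Function('w')(t, x)
nu = sp.Function('nu')(x)

u = -b**2*sp.log(-w)

lhs13 = (2*sp.diff(u, t) + 2*a*sp.diff(u, x) + b**2*sp.diff(u, x, 2)
         - sp.diff(u, x)**2 + 2*nu)

lhs14 = 2*sp.diff(w, t) + 2*a*sp.diff(w, x) + b**2*sp.diff(w, x, 2) - 2*nu*w/b**2

print(sp.simplify(sp.expand(lhs13 + b**2/w*lhs14)))   # 0
```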

It is known (cf. [24]) that (2.13) possesses additional symmetries if \(\nu (x) \) has certain specific forms. In the symmetry analysis of the equivalent equation, (2.14), using Sym two special cases arise naturally. They are

$$\begin{aligned}&\nu _1 (x) = a_1+ a_2x+ a_3x ^ 2 \quad \text{ and } \\&\nu _2 (x) = a_1+ a_2x+ a_3x ^ 2+ \displaystyle {\frac{a_4}{(a_2+2 a_3x) ^ 2}}. \end{aligned}$$

In the case of \(\nu _1 (x) \) the number of symmetries and their algebra are the same as for the classical heat equation and consequently there exists a point transformation connecting (2.14), hence (2.13), to the heat equation. This is not the case with \(\nu _2 (x) \). The number of symmetries corresponds to the heat equation with a source/sink term proportional to \(u/x ^ 2 \). Obviously the algebra is \(\{sl (2,R)\oplus A_1\}\oplus _s\,\infty \) which is characteristic of evolution equations derived from the Ermakov–Pinney equation [18].

2.6 A Really Nonlinear One!

What is essentially a variant of the Black–Scholes equation

$$ 2 V_t+2 (r-q)SV_S+ \varSigma ^ 2S ^ 2V_{SS} -2 rV = 0 $$

and readily reducible to the heat equation is rendered more than moderately nonlinear if \(\varSigma \) is assumed to be proportional to \(V_{SS}\) to become the differential equation,

$$\begin{aligned} 2 V_t+2 (r-q)SV_S+ \sigma ^ 2S ^ 2\left( V_{SS}\right) ^ 3 -2 rV = 0, \end{aligned}$$
(2.15)

which possesses five Lie point symmetries, namely

$$\begin{aligned}&\varGamma _1 = \exp [rt ]\partial _V, \\&\varGamma _2 = S\exp [qt ]\partial _V, \\&\varGamma _3 = \partial _t, \\&\varGamma _4 = \exp [ (2r -4q)t ]\left\{ \partial _t+ (r -q)S\partial _S + rV\partial _V\right\} , \\&\varGamma _5 = S\partial _S + 2V\partial _V. \end{aligned}$$

The five-dimensional algebra is \(\{A_1\oplus A_2\}\oplus _s 2A_1\).

The coefficient functions of \(\varGamma _1 \) and \(\varGamma _2 \) are solutions of (2.15) and so, as solution symmetries, they are of no use in providing a symmetry which is compatible with any other conditions. Fortunately the remaining three symmetries are sufficient for the purpose of satisfying the requirement that \(V (T,S) = G(S) \) when \(t = T \) provided that G(S) takes a specific form. The application of \(\varGamma = \alpha _3\varGamma _3 + \alpha _4\varGamma _4 + \alpha _5\varGamma _5\) to the terminal condition above leads to the conditions

$$\begin{aligned}&\alpha _3 = - \alpha _4\exp [ (2r -4q)T ] \quad \text{ and } \\&\alpha _4\exp [ (2r -4q)T ](r G(S) - (r-q)SG'(S)) + \alpha _5 (2G (S) - SG' (S)) = 0. \end{aligned}$$

One possibility for the second condition is that \(r = 2q \) in which case the conditions become

$$\begin{aligned}&\alpha _3 = -\alpha _4 \quad \text{ and } \\&(q\alpha _4 + \alpha _5) (2G - SG') = 0 \end{aligned}$$

so that either \(\alpha _5 = - q\alpha _4 \) or \(G (S) = KS ^ 2 \) for some constant, K. In the case of the former possibility \(\varGamma \) is zero. In the case of the latter \(\alpha _4 \) and \(\alpha _5 \) are arbitrary, but, since \(\varGamma _4 - \varGamma _3 = q\varGamma _5 \) when \(r = 2q \), we have only the single symmetry

$$\begin{aligned} \varGamma = S\partial _S+ 2V\partial _V \end{aligned}$$
(2.16)

for which the invariants are t and \(VS ^ {-2} \). We substitute \(V = S ^ 2f (t) \) into (2.15) and easily find that

$$\begin{aligned} V = \frac{S ^ 2}{\sqrt{8\sigma ^ 2 (t+c)}}, \end{aligned}$$
(2.17)

where c is the constant of integration. The value of this constant is determined by imposing the terminal condition which gives

$$ c = \frac{1}{8\sigma ^ 2K ^ 2} - T. $$
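
The solution may be checked directly. The following sketch (Python with sympy) confirms that (2.17), with the constant c given above, satisfies (2.15) when \(r = 2q \) and reproduces the terminal data \(G (S) = KS ^ 2 \).

```python
# A sketch (Python/sympy): check that (2.17) satisfies (2.15) when r = 2q and that
# the constant c given above reproduces the terminal condition V(T,S) = K S^2.
import sympy as sp

S, t, T, q, sigma, K = sp.symbols('S t T q sigma K', positive=True)
r = 2*q
c = 1/(8*sigma**2*K**2) - T

V = S**2/sp.sqrt(8*sigma**2*(t + c))

pde = (2*sp.diff(V, t) + 2*(r - q)*S*sp.diff(V, S)
       + sigma**2*S**2*sp.diff(V, S, 2)**3 - 2*r*V)

print(sp.simplify(pde))            # 0
print(sp.simplify(V.subs(t, T)))   # K*S**2
```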

If \(r\not = 2q \), the second condition gives two possibilities. If G (S) is still given by \(KS ^ 2 \), \(\alpha _4 = 0 \) and so \(\alpha _3 \) is also zero. The reduction generated by (2.16) still applies, but the equation for \(f (t) \) acquires a linear term to become \(2 f'+ (2r - 4q)f + 8\sigma ^ 2f ^ 3 = 0 \), which is of Bernoulli type, so that (2.17) itself no longer applies. On the other hand \(\alpha _4 \) is arbitrary and \(\alpha _5 = 0 \) if \(G (S) = KS ^ {r/(r -q)} \). Now

$$\begin{aligned} \varGamma =\left\{ \exp [ (2r -4q)t ] - \exp [ (2r -4q)T ]\right\} \partial _t + \exp [ (2r -4q)t ]\left\{ (r -q)S\partial _S + rV\partial _V\right\} . \end{aligned}$$
(2.18)

For other functions \(G (S) \) all the \(\alpha _i\) are zero and so there is no symmetry compatible with the terminal condition.

2.7 Conclusion

Given the constraints of time we have been able only to explore one aspect of Applied Mathematics. This is the quite recent application of Lie’s Theory of Continuous Groups to problems which arise in Financial Mathematics. We noted a recent paper which mentioned a few applications of the Fisher Equation—originally formulated in a biological context—to divers fields. The mechanisms of the various problems have a certain similarity and so we find the same equations, maybe mutatis mutandis, recurring. One of the important features is that methods developed in one field can find application in many other fields.

As the decades progress, the phenomena subjected to quantification increase in both number and diversity. Such quantification is the gist of Applied Mathematics and so we have Globalisation.