
With n = 0, ±1, ±2, ⋯ this chapter concerns the function (bx + c)^n and its special b = 1, c = 0 case. The powers 1, x, x^2, ⋯ and the reciprocal powers 1, x^−1, x^−2, ⋯ are the units from which power series are built. Such expansions and their applications are addressed in Section 10:13. Section 10:14 provides a brief exposition on the intriguing and useful lozenge diagrams.

10.1 Notation

The two formulas (bx + c)^−n and 1/(bx + c)^n are equivalent in all respects. The powers x^2 and x^3 are termed the square and the cube of x respectively, and the special properties of functions containing these units are addressed in Chapters 15 and 16.

In the general notation β^α, β is known as the base and α as the power or exponent. In this chapter [and in Chapter 12] the family of functions in which the base is the primary variable is treated, with the exponent held constant. In contrast, Chapter 26 is concerned with functions in which the exponent varies and the base is held constant. The instance in which both the base and the power are the same variable is touched on briefly in Sections 26:2 and 26:13.

10.2 Behavior

The power function is defined for all values of x and for all integer n except that x^n is undefined when both x and n are zero. Figures 10-1 and 10-2 illustrate the behavior of x^n for n = 0, ±1, ±2, ±3, ±4, ±7 and ±12. Notice the contrasting behavior of the positive and negative powers. Note also how the reflection properties depend on whether n is even or odd.

Figure 10-1

Figure 10-2

If n is positive, (bx + c)^n has a zero of multiplicity n [Section 0:7] at x = −c/b. This value of x is the site of an infinite discontinuity in (bx + c)^n when n is negative.

10.3 Definitions

The function (bx + c)^n is defined by:

$$(bx + c)^n = \left \{ {\begin{array}{ll} \prod\limits_{j = 1}^{ - n} {\frac{1}{{bx + c}}} & n = - 1, - 2, - 3, \cdots \\ 1 & n = 0 \\ \prod\limits_{j = 1}^n {(bx + c)} & n = 1,2,3, \cdots \\ \end{array}} \right.$$
(10:3:1)

10.4 Special Cases

When n = 0

$$(bx + c)^n = 1 \quad n = 0$$
(10:4:1)

and when n = ±1, reduction occurs to the functions treated in Chapter 7. 0^0 is generally undefined, though it may be ascribed a value of either 0 or 1 in certain contexts.

When b = 0, (bx + c)^n reduces to a constant [Chapter 1] for all values of c and n.

10.5 Intrarelationships

The function x n obeys the simple reflection formula

$$( - x)^n = \left\{ {\begin{array}{ll} {x^n } & {n = 0, \pm 2, \pm 4, \cdots } \\ { - x^n } & {n = \pm 1, \pm 3, \pm 5, \cdots } \\ \end{array}} \right.$$
(10:5:1)

For the (bx + c)^n functions, reflection occurs about x = −c/b

$$\left[ {b\left( {\frac{{ - c}}{b} - x} \right) + c} \right]^n = ( - )^n \left[ {b\left( {\frac{{ - c}}{b} + x} \right) + c} \right]^n $$
(10:5:2)

The recurrences

$$(bx + c)^n = (bx + c)(bx + c)^{n - 1} $$
(10:5:3)

and

$$(bx + c)^{ - n} = \frac{{(bx + c)^{ - n + 1} }}{{bx + c}}$$
(10:5:4)

apply, as do the laws of exponents

$$(bx + c)^n (bx + c)^m = (bx + c)^{n + m} $$
(10:5:5)
$$\frac{{(bx + c)^n }}{{(bx + c)^m }} = (bx + c)^{n - m} $$
(10:5:6)

and

$$\left[ {(bx + c)^n } \right]^m = (bx + c)^{nm} $$
(10:5:7)

The simplest instances

$$x^2 - y^2 = (x - y)(x + y)$$
(10:5:8)
$$x^3 \pm y^3 = (x \pm y)\left( {x^2 \mp xy + y^2 } \right)$$
(10:5:9)
$$x^4 - y^4 = (x - y)(x + y)\left( {x^2 + y^2 } \right)$$
(10:5:10)
$$x^4 + y^4 = \left( {x^2 - \sqrt 2 xy + y^2 } \right)\left( {x^2 + \sqrt 2 xy + y^2 } \right)$$
(10:5:11)

of function-subtraction and function-addition formulas for integer powers generalize to formulas involving the cosine function [Chapter 32]:

$$x^n \pm y^n = (x \pm y)\prod\limits_{j = 1}^{(n - 1)/2} {\left[ {x^2 \pm 2xy\cos \left( {\frac{{2j\pi }}{n}} \right) + y^2 } \right]} \quad \quad \quad n = 1,3,5, \cdots $$
(10:5:12)
$$x^n + y^n = \prod\limits_{j = 1}^{n/2} {\left[ {x^2 + 2xy\cos \left( {\frac{{2j\pi - \pi }}{n}} \right) + y^2 } \right]} \quad \quad \quad n = 2,4,6, \cdots $$
(10:5:13)
$$x^n - y^n = (x + y)(x - y)\prod\limits_{j = 1}^{(n - 2)/2} {\left[ {x^2 - 2xy\cos \left( {\frac{{2j\pi }}{n}} \right) + y^2 } \right]} \quad \quad \quad n = 2,4,6, \cdots $$
(10:5:14)

As elaborated in Section 17:7, x n ± y n may always be expressed as the product of n factors, possibly complex; for example, 10:5:11 becomes the product of four factors, each of which is subsumed in \(x \pm (1 \pm i)y/\sqrt 2\).
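These cosine factorizations lend themselves to direct numerical verification. The following Python sketch (an illustrative addition; the sample arguments are arbitrary choices) implements 10:5:12 through 10:5:14:

```python
import math

def power_sum_odd(x, y, n):
    """x**n + y**n via 10:5:12 (upper signs), n odd."""
    p = x + y
    for j in range(1, (n - 1) // 2 + 1):
        p *= x*x + 2*x*y*math.cos(2*j*math.pi/n) + y*y
    return p

def power_diff_odd(x, y, n):
    """x**n - y**n via 10:5:12 (lower signs), n odd."""
    p = x - y
    for j in range(1, (n - 1) // 2 + 1):
        p *= x*x - 2*x*y*math.cos(2*j*math.pi/n) + y*y
    return p

def power_sum_even(x, y, n):
    """x**n + y**n via 10:5:13, n even."""
    p = 1.0
    for j in range(1, n//2 + 1):
        p *= x*x + 2*x*y*math.cos((2*j - 1)*math.pi/n) + y*y
    return p

def power_diff_even(x, y, n):
    """x**n - y**n via 10:5:14, n even."""
    p = (x + y) * (x - y)
    for j in range(1, (n - 2)//2 + 1):
        p *= x*x - 2*x*y*math.cos(2*j*math.pi/n) + y*y
    return p

assert abs(power_sum_odd(1.7, 0.6, 5) - (1.7**5 + 0.6**5)) < 1e-9
assert abs(power_diff_even(1.7, 0.6, 6) - (1.7**6 - 0.6**6)) < 1e-9
```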

Finite series of positive or negative integer powers may be summed as geometric series:

$$1 + x + x^2 + \cdots + x^{n - 1} + x^n = \frac{{1 - x^{n + 1} }}{{1 - x}}\quad \quad \quad n = 1,2,3, \cdots $$
(10:5:15)
$$1 + x^{ - 1} + x^{ - 2} + \cdots + x^{1 - n} + x^{ - n} = \frac{{x - x^{ - n} }}{{x - 1}}\quad \quad \quad n = 1,2,3, \cdots $$
(10:5:16)

The corresponding infinite series are summable only for restricted ranges of x, as discussed in Section 10:13.
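The two closed forms are easily checked numerically; a minimal Python sketch (the values of x and n are arbitrary choices):

```python
def geometric_sum(x, n):
    """Left side of 10:5:15: 1 + x + ... + x**n."""
    return sum(x**j for j in range(n + 1))

x, n = 0.7, 9
closed = (1 - x**(n + 1)) / (1 - x)           # right side of 10:5:15
assert abs(geometric_sum(x, n) - closed) < 1e-9

recip = sum(x**(-j) for j in range(n + 1))    # left side of 10:5:16
closed2 = (x - x**(-n)) / (x - 1)             # right side of 10:5:16
assert abs(recip - closed2) < 1e-9
```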

10.6 Expansions

If n is positive, (bx + c)^n may be expanded binomially as the finite series

$$(bx + c)^n = c^n + nc^{n - 1} bx + \frac{{n(n - 1)}}{{2!}}c^{n - 2} b^2 x^2 + \cdots + ncb^{n - 1} x^{n - 1} + b^n x^n = c^n \sum\limits_{j = 0}^n {\left( {\begin{array}{*{20}c} n \\ j \\ \end{array}} \right)\left( {\frac{{bx}}{c}} \right)^j \quad \quad n = 0,1,2, \cdots } $$
(10:6:1)

for all x, where \(\left( {\begin{array}{*{20}c} n \\ j \\ \end{array}} \right)\) is the binomial coefficient of Chapter 6. If n is negative, the series is infinite and takes the form

$$(bx + c)^n = c^n + nc^{n - 1} bx + \frac{{n(n - 1)}}{{2!}}c^{n - 2} b^2 x^2 + \cdots = c^n \sum\limits_{j = 0}^\infty {\left( {\begin{array}{*{20}c} {j - n - 1} \\ j \\ \end{array}} \right)\left( {\frac{{ - bx}}{c}} \right)^j \quad n = - 1, - 2, - 3, \cdots } \quad |x| < \left| {\frac{c}{b}} \right|$$
(10:6:2)

or

$$(bx + c)^n = b^n x^n + ncb^{n - 1} x^{n - 1} + \cdots = b^n x^n \sum\limits_{j = 0}^\infty {\left( {\begin{array}{*{20}c} {j - n - 1} \\ j \\ \end{array}} \right)\left( {\frac{{ - c}}{{bx}}} \right)^j } \quad \quad \quad n = - 1, - 2, - 3, \cdots \quad \quad |x| > \left| {\frac{c}{b}} \right|$$
(10:6:3)

depending on the magnitude of x. Section 6:14 presents some specific examples.
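Both expansions are readily exercised numerically. In this illustrative Python sketch, the 80-term truncation of the infinite series 10:6:2 is an arbitrary choice, ample for the sample arguments (which respect the |x| < |c/b| condition):

```python
from math import comb, isclose

def power_series_pos(b, c, n, x):
    """(bx+c)**n by the finite binomial series 10:6:1, n = 0, 1, 2, ..."""
    return c**n * sum(comb(n, j) * (b*x/c)**j for j in range(n + 1))

def power_series_neg(b, c, n, x, terms=80):
    """(bx+c)**n by 10:6:2, n = -1, -2, ..., valid for |x| < |c/b|."""
    return c**n * sum(comb(j - n - 1, j) * (-b*x/c)**j for j in range(terms))

assert isclose(power_series_pos(2.0, 3.0, 4, 0.7), (2.0*0.7 + 3.0)**4)
assert isclose(power_series_neg(2.0, 3.0, -2, 0.7), (2.0*0.7 + 3.0)**-2)
```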

Positive integer powers may be expanded in terms of Pochhammer polynomials [Chapter 18]

$$x^n = \sum\limits_{j = 0}^n {\sigma _n^{(j)} (x - j + 1)_j } = \sum\limits_{j = 0}^n {( - )^j \sigma _n^{(j)} ( - x)_j } \quad \quad \quad n = 0,1,2, \cdots $$
(10:6:4)

where \(\sigma _n^{(j)}\) is a Stirling number of the second kind [Section 2:14], or in terms of Chebyshev polynomials of the first kind [Chapter 22]:

$$x^n = \gamma _n^{(n)} {\rm{T}}_n (x) + \gamma _{n - 2}^{(n)} {\rm{T}}_{n - 2} (x) + \gamma _{n - 4}^{(n)} {\rm{T}}_{n - 4} (x) + \cdots + \left\{ \begin{array}{ll} \gamma _0^{(n)} {\rm{T}}_0 (x) & n = 0,2,4, \cdots \\ \gamma _1^{(n)} {\rm{T}}_1 (x) & n = 1,3,5, \cdots \\ \end{array} \right.$$
(10:6:5)

For example, \(x^5 = {\textstyle{5 \over 8}}{\rm{T}}_1 (x) + {\textstyle{5 \over {16}}}{\rm{T}}_3 (x) + {\textstyle{1 \over {16}}}{\rm{T}}_5 (x)\). The coefficients \(\gamma _j^{(n)}\) are zero whenever n and j are of unlike parity or j exceeds n. \(\gamma _0^{(0)} = \gamma _1^{(1)} = 1\). Other \(\gamma _j^{(n)}\) are positive rational numbers that are calculable by sufficient applications of the recursion formulas \(\gamma _0^{(n)} = {\textstyle{1 \over 2}}\gamma _1^{(n - 1)}\), \(\gamma _1^{(n)} = \gamma _0^{(n - 1)} + {\textstyle{1 \over 2}}\gamma _2^{(n - 1)}\), and, for j ≥ 2, \(\gamma _j^{(n)} = {\textstyle{1 \over 2}}\left[ {\gamma _{j - 1}^{(n - 1)} + \gamma _{j + 1}^{(n - 1)} } \right]\). Expansions similar to 10:6:5 exist for each orthogonal polynomial family [Chapters 21−24].
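Because multiplication by x maps T_j(x) into ½[T_{j+1}(x) + T_{j−1}(x)] (and T_0 into T_1), the γ coefficients can be generated iteratively. A Python sketch in exact rational arithmetic (an illustrative addition), checked against the x^5 example:

```python
from fractions import Fraction

def cheb_gammas(n):
    """gamma_j^(n) such that x**n = sum over j of gamma_j^(n) * T_j(x).
    Built from x*T_0 = T_1 and x*T_j = (T_{j+1} + T_{j-1})/2."""
    g = {0: Fraction(1)}                       # x**0 = T_0
    for _ in range(n):
        h = {}
        for j in range(max(g) + 2):
            if j == 0:
                h[j] = Fraction(1, 2) * g.get(1, Fraction(0))
            elif j == 1:
                h[j] = g.get(0, Fraction(0)) + Fraction(1, 2) * g.get(2, Fraction(0))
            else:
                h[j] = Fraction(1, 2) * (g.get(j - 1, Fraction(0)) + g.get(j + 1, Fraction(0)))
        g = {j: v for j, v in h.items() if v}  # drop the zero coefficients
    return g

# x^5 = (5/8) T_1 + (5/16) T_3 + (1/16) T_5
assert cheb_gammas(5) == {1: Fraction(5, 8), 3: Fraction(5, 16), 5: Fraction(1, 16)}
```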

10.7 Particular Values

For b not equal to zero, (bx + c)^n adopts the particular values:

Table 1

10.8 Numerical Values

Equator’s power function routine (keyword power) can calculate integer (or non-integer) powers. Additionally, the “variable construction” feature of Equator [Appendix C] allows t^p, or wt^p + k, to be used as the argument of another function.

10.9 Limits And Approximations

The limiting behavior of x^n is evident from Figures 10-1 and 10-2. Note that the discontinuity suffered by x^n at x = 0 is of the +∞|+∞ variety when n = −2, −4, −6, ⋯ but is −∞|+∞ for n = −1, −3, −5, ⋯.

10.10 Operations Of The Calculus

The rule for differentiation

$$\frac{{\rm{d}}}{{{\rm{d}}x}}(bx + c)^n = nb(bx + c)^{n - 1} $$
(10:10:1)

is the m = 1 case of the multiple-differentiation formula

$$\frac{{{\mathop{\rm d}\nolimits} ^m }}{{{\rm{d}}x^m }}(bx + c)^n = \left\{ {\begin{array}{ll} \left( {1 + n - m} \right)_m b^m (bx + c)^{n - m} & n < 0\;{\rm{or}}\;m \le n \\ 0 & m > n \ge 0 \\ \end{array}} \right.$$
(10:10:2)

that employs the Pochhammer notation [Chapter 18]. General formulas for indefinite and definite integration are:

$$\int\limits_{ - c/b}^x {(bt + c)^n {\rm{d}}t} = \left\{ \begin{array}{ll} \frac{{(bx + c)^{n + 1} }}{{(n + 1)b}} & n = 0,1,2, \cdots \\ \infty & n = - 1, - 2, - 3, \cdots \\ \end{array} \right.$$
(10:10:3)
$$\int\limits_x^\infty {(bt + c)^n {\rm{d}}t} = \left\{ \begin{array}{ll} \infty & n = - 1,0,1,2, \cdots \\ \frac{{(bx + c)^{n + 1} }}{{( - n - 1)b}} & n = - 2, - 3, - 4, \cdots \\ \end{array} \right.$$
(10:10:4)
$$\int\limits_{x_0 }^{x_1 } {(bt + c)^n {\rm{d}}t} = \left\{ \begin{array}{ll} \frac{{(bx_1 + c)^{n + 1} - (bx_0 + c)^{n + 1} }}{{(n + 1)b}} & n = 0,1, \pm 2, \pm 3, \cdots \\ \frac{1}{b}\ln \left( {\frac{{bx_1 + c}}{{bx_0 + c}}} \right) & n = - 1 \\ \end{array} \right.$$
(10:10:5)

Differintegrals [Section 12:14] of the nonnegative power x^n are given by the formula

$$\frac{{{\rm{d}}^\upmu  x^n }}{{{\rm{d}}x^\upmu  }} = \frac{{n!x^{n - \upmu } }}{{\Gamma (n - \upmu  + 1)}}\quad \quad \quad n = 0,1,2, \cdots \quad \quad x > 0$$
(10:10:6)

where Γ is the gamma function [Chapter 43].
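Formula 10:10:6 is easily exercised with standard library routines; the μ = 1 case must reproduce the ordinary first derivative, and the semiderivative of x is 2√(x/π). An illustrative Python sketch (the sample arguments are arbitrary):

```python
from math import gamma, factorial, isclose, pi, sqrt

def differint_power(n, mu, x):
    """Differintegral of x**n to order mu, from 10:10:6 (x > 0)."""
    return factorial(n) * x**(n - mu) / gamma(n - mu + 1)

# mu = 1 reproduces the ordinary derivative of x**2, namely 2x:
assert isclose(differint_power(2, 1.0, 0.9), 2 * 0.9)
# the semiderivative (mu = 1/2) of x is 2*sqrt(x/pi):
assert isclose(differint_power(1, 0.5, 0.9), 2 * sqrt(0.9 / pi))
```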

On Laplace transformation, a nonnegative integer power obeys the simple formula

$$\int\limits_0^\infty  {t^n \exp ( - st){\rm{d}}t}  = \mathcal{L}\left\{ {t^n } \right\} = \frac{{n!}}{{s^{n + 1} }}\quad \quad \quad n = 0,1,2, \cdots $$
(10:10:7)

Negative integer powers cannot be transformed, but powers of the reciprocal linear function transform as follows, provided c ≠ 0:

$$\int\limits_0^\infty  {(bt + c)^n {\rm{exp( - }}st{\rm{)}}\;{\rm{d}}t}  = \mathcal{L}\{ (bt + c)^n \}  = \frac{{b^n }}{{( - n - 1)!( - s)^{n + 1} }}\left[ { - \exp \left( {\frac{{cs}}{b}} \right){\rm{Ei}}\left( {\frac{{ - cs}}{b}} \right) + \sum\limits_{j = 1}^{ - n - 1} {(j - 1)!\left( {\frac{{ - b}}{{cs}}} \right)^j } } \right]$$
(10:10:8)

for n = −1, −2, −3, ⋯. The transform generates a product of the exponential [Chapter 26] and exponential integral [Chapter 37] functions.

10.11 Complex Argument

Via a binomial expansion [Section 6:14], integer powers of the complex variable z may be expressed as a pair of power series. For example, if n is positive

$$z^n = (x + iy)^n = x^n \sum\limits_{j = 0}^{{\rm{Int}}(n/2)} {\left( {\begin{array}{*{20}c} n \\ {2j} \\ \end{array}} \right)} \left( {\frac{{ - y^2 }}{{x^2 }}} \right)^j + ix^{n - 1} y\sum\limits_{j = 0}^{{\rm{Int}}\{ (n - 1)/2\} } {\left( {\begin{array}{*{20}c} n \\ {2j + 1} \\ \end{array}} \right)} \left( {\frac{{ - y^2 }}{{x^2 }}} \right)^j $$
(10:11:1)

while, for a negative power, one can split the function into real and imaginary components as

$$\frac{1}{{z^n }} = \frac{{(x - iy)^n }}{{(x^2 + y^2 )^n }} = \left( {\frac{x}{{x^2 + y^2 }}} \right)^n \sum\limits_{j{\rm{ = 0}}}^{{\rm{Int}}(n/2)} {\left( {\begin{array}{*{20}c} n \\ {2j} \\ \end{array}} \right)\left( {\frac{{ - y^2 }}{{x^2 }}} \right)^j - \frac{{ix^{n - 1} y}}{{(x^2 + y^2 )^n }}\sum\limits_{j = 0}^{{\rm{Int}}[(n - 1)/2]} {\left( {\begin{array}{*{20}c} n \\ {2j + 1} \\ \end{array}} \right)} \left( {\frac{{ - y^2 }}{{x^2 }}} \right)^j } $$
(10:11:2)

In these two equations the rectangular representation z = x + iy of a complex variable has served as the vehicle for expressing the properties of the power function when its argument is complex. For this function, however, it is more rewarding to use the polar representation z = ρ exp(i θ) of a complex number. With this approach, de Moivre’s theorem [Section 32:11] leads to

$$z^n = \rho ^n \exp \left( {ni\theta } \right) = \rho ^n \left[ {\cos (n\theta ) + i\sin (n\theta )} \right]$$
(10:11:3)

irrespective of the sign of n (there is a pole at the origin when n is negative). The real part is

$${\mathop{\rm Re}\nolimits} [z^n ] = \rho ^n \cos (n\theta )\quad \quad \quad {\rm{where}}\quad \rho  = (x^2  + y^2 )^{1/2} \quad {\rm{and}}\quad \theta  = \arctan (y/x) + \pi \left[ {1 - {\mathop{\rm sgn}} (x)} \right]/2$$
(10:11:4)

with the expression for Im[z^n] having sin replace cos, but being otherwise similar. Figure 10-3 is a polar graph illustrating equation 10:11:4 and its imaginary counterpart for the case n = 5. In this representation the parts (real or imaginary) are zero at the center, with the red and blue “petals” respectively representing positive and negative excursions. At the edges of their petals, the parts adopt the value ±ρ^n. Equator’s complex number raised to a real power routine (keyword compower) uses equation 10:11:4, and its congener Im[z^n] = ρ^n sin(nθ), to compute the real and imaginary parts of (x + iy)^n.

Figure 10-3
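The polar recipe 10:11:3/10:11:4 can be sketched in a few lines of Python (an illustration, not Equator's implementation; math.atan2 plays the role of the quadrant-corrected arctangent in 10:11:4):

```python
import math

def compower(x, y, n):
    """Real and imaginary parts of (x + iy)**n via the polar form 10:11:3."""
    rho = math.hypot(x, y)                 # (x^2 + y^2)^(1/2)
    theta = math.atan2(y, x)               # quadrant-corrected arctan(y/x)
    return rho ** n * math.cos(n * theta), rho ** n * math.sin(n * theta)

re, im = compower(1.2, -0.8, 5)
z = (1.2 - 0.8j) ** 5                      # direct complex exponentiation
assert math.isclose(re, z.real, abs_tol=1e-9)
assert math.isclose(im, z.imag, abs_tol=1e-9)
```

The same routine serves negative n, away from the pole at the origin.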

The reciprocal power \(s^{ - n}\) undergoes Laplace inversion to give \(t^{n - 1} /(n - 1)!\) and this generalizes to

$$\int\limits_{\alpha - i\infty }^{\alpha + i\infty } {(bs + c)^n \;\frac{{\exp (ts)}}{{2\pi i}}\;{\rm{d}}s = } \;I\left\{ {(bs + c)^n } \right\} = \frac{{b^n t^{ - n - 1} }}{{( - n - 1)!}}\exp \left( {\frac{{ - ct}}{b}} \right)\quad \quad \quad n = - 1, - 2, - 3, \cdots $$
(10:11:5)

10.12 Generalizations

The restriction that the power be an integer is removed in Chapter 12. Quadratic functions [Chapter 15], cubic functions [Chapter 16], and polynomials [Chapters 17−24] are weighted finite sums of nonnegative integer powers.

10.13 Cognate Functions: Power Series

An infinite sum of weighted positive powers of a variable is a power series

$$a_0 + a_1 x + a_2 x^2 + \cdots + a_j x^j + \cdots = \sum\limits_{j = 0}^\infty {a_j x^j } $$
(10:13:1)

The a’s, which are generally functions of j, but not of x, are the coefficients of the series, while \(a_j x^j\) is the typical term. A similar series in which the general term is \(a_j x^{\alpha  + \beta j}\) is a Frobenius series; by redefining the variable to be \(x^\beta\) and by withdrawing a factor of \(x^\alpha\), such a series can be converted to a power series.

Some special cases of power series in which the coefficients are drawn from the set (1,0,−1) include:

$$1 \pm x + x^2  \pm x^3  + x^4  \pm  \cdots  = \frac{1}{{1 \mp x}}\quad \quad \quad  - 1 < x < 1$$
(10:13:2)
$$x + x^3 + x^5 + x^7 + x^9 + \cdots = \frac{1}{2}\left[ {\frac{1}{{1 - x}} - \frac{1}{{1 + x}}} \right] = \frac{x}{{1 - x^2 }}\quad \quad \quad - 1 < x < 1$$
(10:13:3)

to which many others could be appended. These are summable series of integer powers whose exponents increase linearly, but one may also sum similar series in which the exponents increase quadratically, as follows

$$1 - x + x^4 - x^9 + x^{16} - \cdots = \frac{1}{2}\left[ {\theta _4 \left( {0,\frac{{ - \ln (x)}}{{\pi ^2 }}} \right) + 1} \right]\quad \quad \quad 0 < x < 1$$
(10:13:4)
$$1 + x + x^4 + x^9 + x^{16} + \cdots = \frac{1}{2}\left[ {\theta _3 \left( {0,\frac{{ - \ln (x)}}{{\pi ^2 }}} \right) + 1} \right]\quad \quad \quad 0 < x < 1$$
(10:13:5)
$$x + x^9 + x^{25} + x^{49} + x^{81} + \cdots = \frac{1}{2}\theta _2 \left( {0,\frac{{ - 4\ln (x)}}{{\pi ^2 }}} \right)\quad \quad \quad 0 < x < 1$$
(10:13:6)

in terms of exponential theta functions [Section 27:13] of zero parameter. The quantity {−ln(x)}/π2 that appears in these formulas is closely related to the nome function discussed in Section 61:15.

Addition and subtraction of power series is straightforward. Thus if A, B and C are the power series \(\sum {a_j x^j }\), \(\sum {b_j x^j }\) and \(\sum {c_j x^j }\), then if A ± B = C one has \(c_j = a_j \pm b_j\). The rules for exponentiation and multiplication are

$$A^n  = C\quad {\rm{where}}\quad c_0  = a_0^n \quad {\mathop{\rm and}\nolimits} \quad c_j  = \frac{1}{{ja_0 }}\sum\limits_{k = 1}^j {\left[ {k(n + 1) - j} \right]a_k c_{j - k} } \quad \quad \quad {\rm{for }}j = 1,2,3, \cdots $$
(10:13:7)

and

$$AB = C\quad {\rm{where}}\quad c_j = \sum\limits_{k = 0}^j {a_k b_{j - k} } $$
(10:13:8)

but when C = A/B, the expression for \(c_j\) is too elaborate to be generally useful. In an operation known as reversion of series, a power series A in the variable x is converted into a power series for x, with a normalized A as the variable:

$$\;x = a_1 \sum\limits_{k = 1}^\infty {d_k } \left( {\frac{{A - a_0 }}{{a_1^2 }}} \right)^k \quad {\rm{where}}\quad \begin{array}{l} {d_1 = 1,\;d_2 = - a_2 ,\;d_3 = 2a_2^2 - a_1 a_3 ,\;d_4 = 5a_2 (a_1 a_3 - a_2^2 ) - a_1^2 a_4 ,\;} \hfill \\ {d_5 = 7a_2^2 (2a_2^2 - 3a_1 a_3 ) + 3a_1^2 (a_3^2 + 2a_2 a_4 ) - a_1^3 a_5 ,\;} \hfill \\ {d_6 = 7[6a_2^3 (2a_1 a_3 - a_2^2 ) + a_1^3 (a_3 a_4 + a_2 a_5 ) - 4a_1^2 a_2 (a_3^2 + a_2 a_4 )] - a_1^4 a_6 \;} \hfill \\ \end{array}$$
(10:13:9)

There is no general formula for the d coefficients. The operations of differentiation and integration may be carried out term-by-term and generate other power series. Differintegration generally produces a Frobenius series. Operations on convergent power series do not necessarily preserve convergence.
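Of these operations, multiplication (equation 10:13:8) is the most readily verified; a minimal Python sketch, using the series for 1/(1 − x) as an arbitrarily chosen example:

```python
def series_product(a, b):
    """Cauchy product 10:13:8: the first coefficients of C = A*B."""
    return [sum(a[k] * b[j - k] for k in range(j + 1))
            for j in range(min(len(a), len(b)))]

# (1/(1-x)) * (1/(1-x)) = 1/(1-x)^2 has coefficients 1, 2, 3, 4, ...
a = [1] * 6
assert series_product(a, a) == [1, 2, 3, 4, 5, 6]
```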

As Sections 6 of most of the chapters in this Atlas will attest, almost all mathematical functions may be expanded via the Maclaurin series (Colin Maclaurin, 1698 −1746, a Scottish mathematical prodigy, who defended his master’s thesis at the age of 14)

$${\mathop{\rm f}\nolimits} (x) = {\mathop{\rm f}\nolimits} (0) + x\frac{{{\rm{df}}}}{{{\rm{d}}x}}(0) + \frac{{x^2 }}{2}\frac{{{\mathop{\rm d}\nolimits} ^2 {\rm{f}}}}{{{\rm{d}}x^2 }}(0) + \frac{{x^3 }}{6}\frac{{{\mathop{\rm d}\nolimits} ^3 {\rm{f}}}}{{{\rm{d}}x^3 }}(0) + \cdots = \sum\limits_{j = 0}^\infty {\frac{{x^j }}{{j!}}} \frac{{{\mathop{\rm d}\nolimits} ^j {\rm{f}}}}{{{\rm{d}}x^j }}(0)$$
(10:13:10)

This is the special y = 0 case of the Taylor series 0:5:1. This formula permits most functions that can be repeatedly differentiated to be expressed as power series. Accordingly, a truncated version of such a power series is commonly used as a source of the (approximate) numerical value of a function for some specified value of the argument x:

$${\rm{f}}(x) \approx \sum\limits_{j = 0}^J {a_j x^j \quad \quad \quad a_j = \frac{1}{{j!}}} \frac{{{\mathop{\rm d}\nolimits} ^j {\rm{f}}}}{{{\rm{d}}x^j }}(0)\quad \quad \quad {\rm{large}}\;J$$
(10:13:11)

One may steadily increase J, calculating these partial sums the while. Equator frequently employs this tactic, ceasing the incrementation when three consecutive partial sums are identical (to the precision of the computation). Unfortunately, many series are not sufficiently convergent to yield adequate numerical approximations even when J has the large values accessible with speedy computers. Other numerical problems, in the form of rounding errors and precision loss, arise from the finite number of significant digits carried by most computer programs. These difficulties are mostly encountered when the terms in the Maclaurin series alternate in sign. Alternative methods are then sought, or a careful check is kept of the significance lost, the precision of the final answer being adjusted accordingly.

One simple remedy that is often effective is to convert the truncated power series into a concatenation (or “nested sum”) that may be summed “backwards”

$${\rm{f}}(x) \approx \left( {\left( {\left( { \cdots \left( {\left( {{\textstyle{1 \over 2}}b_J x + 1} \right)b_{J - 1} x + 1} \right)b_{J - 2} x + \cdots + 1} \right)b_2 x + 1} \right)b_1 x + 1} \right)a_0 \quad \quad \quad b_j = a_j /a_{j - 1} $$
(10:13:12)

Notice that this formula incorporates the ruse, useful only when the series alternates in sign, of halving the final summed term. A similar, and often helpful, stratagem is to convert the series to the continued fraction [see 0:6:12]

$${\rm{f}}(x) \approx \frac{{a_0 }}{{1 - }}\;\frac{{b_1 x}}{{b_1 x - }}\;\frac{{b_2 x}}{{b_2 x - }} \cdots \frac{{b_{J - 2} x}}{{b_{J - 2} x - }}\;\frac{{b_{J - 1} x}}{{b_{J - 1} x - }}\;\frac{{b_J x}}{2}$$
(10:13:13)

but there are no guarantees in this field, which is as much art as science.

There exist more radical techniques for finding numerical values of f(x). Though often classified under the “summation of series” rubric, these approaches actually abandon the 10:13:11 expansion of f(x) in favor of some other representation, such as a rational function [Section 14:13], a standard continued fraction [Section 0:6] or a non-Maclaurin series. In Section 10:14, some of these techniques will be discussed in the context of lozenge diagrams, but a simpler transformation, due to Euler, will be exposed here.

The Euler transformation replaces the power series f(x) = \(\sum {a_j x^j }\) by

$${\rm{f}}(x) = \frac{1}{x}\left[ {e_1 \;\frac{x}{{1 + x}} + e_2 \left( {\frac{x}{{1 + x}}} \right)^2 + e_3 \left( {\frac{x}{{1 + x}}} \right)^3 + \cdots } \right] = \frac{1}{x}\sum\limits_{k = 1}^\infty {e_k } \left( {\frac{x}{{1 + x}}} \right)^k $$
(10:13:14)

By equating coefficients, one easily finds that \(e_1 = a_0\), \(e_2 = a_0 + a_1\), \(e_3 = a_0 + 2a_1 + a_2\), and generally

$$e_k = \sum\limits_{j = 0}^{k - 1} {\left( {\begin{array}{*{20}c} {k - 1} \\ j \\ \end{array}} \right)} \;a_j $$
(10:13:15)

The transformed series frequently has much improved convergence, so that a truncated version of 10:13:14 may provide an acceptable numerical approximation.
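As an illustration (the slowly convergent series for ln(1 + x)/x at x = 1 is an arbitrarily chosen example), the coefficients of 10:13:15 can be computed and the transformed sum 10:13:14 accumulated in a few lines of Python:

```python
from math import comb, log

def euler_coeffs(a, K):
    """The e_k of 10:13:15, for k = 1 .. K."""
    return [sum(comb(k - 1, j) * a[j] for j in range(k)) for k in range(1, K + 1)]

# ln(1+x)/x = 1 - x/2 + x^2/3 - ...  summed at x = 1
a = [(-1) ** j / (j + 1) for j in range(25)]
x = 1.0
e = euler_coeffs(a, 25)
transformed = sum(ek * (x / (1 + x)) ** k for k, ek in enumerate(e, start=1)) / x
# 25 transformed terms already give ln 2 to better than 1e-7
assert abs(transformed - log(2.0)) < 1e-7
```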

For very large values of the argument x, most of these summation strategies fail to deliver useful numerical values. Fortunately, for most functions, f, there exist power series expansions of f(1/x). These are generally asymptotic series, so that increasing the number of summed terms improves the approximation only up to a certain (x-dependent) point. An asymptotic relationship is indicated by the symbol “˜” replacing the usual “=”. For example, the asymptotic expansion [see 41:6:6]

$${\sqrt {\frac{\pi }{x}} \exp \left( {\frac{1}{x}} \right){\mathop{\rm erfc}\nolimits} \left( {\frac{1}{{\sqrt x }}} \right)} \sim 1 - \frac{x}{2} + \frac{{3x^2 }}{4} - \frac{{15x^3 }}{8} +  \cdots  = \sum\limits_{j = 0}^\infty  {(2j - 1)!!} \left( {\frac{{ - x}}{2}} \right)^j $$
(10:13:16)

yields a power series, values of early partial sums of which, for x = 0.15, are shown in Figure 10-4 as black dots. These initially converge towards the correct result (0.937597⋯), shown by the blue line, but then wander away. The ruses and transformations discussed above and in the next section remain useful, and are doubly necessary because wantonly increasing the number of terms is not an option with asymptotic series. Thus, halving the last term in a partial sum leads to the red points in Figure 10-4. Clearly these ameliorate the difficulty without overcoming the asymptoticity.

Figure 10-4
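The behavior plotted in Figure 10-4 is easily reproduced; in this illustrative Python sketch, math.erfc furnishes the left-hand side of 10:13:16 as the reference value:

```python
from math import erfc, exp, pi, sqrt

x = 0.15
exact = sqrt(pi / x) * exp(1 / x) * erfc(1 / sqrt(x))   # left side of 10:13:16

# partial sums of the asymptotic series: sum over j of (2j-1)!! (-x/2)^j
sums, term, total = [], 1.0, 0.0
for j in range(30):
    total += term
    sums.append(total)
    term *= (2 * j + 1) * (-x / 2)   # since (2j+1)!! = (2j+1) * (2j-1)!!

errors = [abs(s - exact) for s in sums]
assert min(errors) < 5e-3    # the partial sums first approach the true value
assert errors[-1] > 1.0      # but the asymptotic series eventually diverges
```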

10.14 Related Topic: Lozenge Diagrams

A lozenge diagram is a two-dimensional array of numbers (or symbols representing numbers) arranged in a fashion that aids the conceptualization of certain useful operations performed on power series. Each element of the lozenge diagram is characterized by two integer indices, n and m, each index taking nonnegative integer values. The element itself is denoted \(\left[ {_m^n } \right]\), but only those elements in which the indices have like parities appear. The arrangement of the elements, as shown below, is such that most occupy a vertex of at least one rhombus, and as many as four.

[Diagram: the arrangement of the elements \(\left[ {_m^n } \right]\) in a lozenge diagram]

The lozenge diagram extends indefinitely to the right and downwards. In most applications, numbers or symbols are entered into the “northernmost”, m = 0, row. A specific propagation rule is then applied to create new entries in the diagram. The sequencing of propagation may proceed row by row downwards or, often more conveniently, by creating new entries in the order \(\left[ {_1^1 } \right],\;\left[ {_1^3 } \right],\;\left[ {_2^2 } \right],\;\left[ {_1^5 } \right],\;\left[ {_2^4 } \right],\;\left[ {_3^3 } \right],\;\left[ {_1^7 } \right]\), and so on. The propagation rules vary according to the operation for which the lozenge diagram is being used, but in all cases the element \(\left[ {_m^n } \right]\) is computed, by arithmetic operations, from the adjacent elements \(\left[ {_{m - 2}^{\;n} } \right],\;\left[ {_{m - 1}^{n - 1} } \right]\), and\(\left[ {_{m - 1}^{n\; + 1} } \right]\;.\) Often, the final output information appears in the “southwestern” diagonal, that is, in the elements in which n and m are equal. Four applications of the lozenge diagram will be elaborated in this section [see Wimp for details]. In the first, a Padé table [Section 17:12] is created from a power series. In the second and third, power series are transformed and thereby summed numerically. In the fourth, a power series is converted into a continued fraction.

Successive partial sums of the standard power series \(\sum {a_j x^j }\) may be fed into the northernmost row of the lozenge diagram, so that

$$\left[ {_{\;0}^{2n} } \right] = a_0  + a_1 x + a_2 x^2  +  \cdots  + a_n x^n \quad \quad \quad n = 0,1,2, \cdots $$
(10:14:1)

In this application, the propagation rule used to form successive elements in the lozenge diagram is the Schmidt-Wynn transformation, or the ε-algorithm. It is

$$\left[ {_m^n } \right] = {\rm{ }}\left\{ {\begin{array}{ll} \frac{1}{{\left[ {_{m - 1}^{n + 1} } \right] - \left[ {_{m - 1}^{n - 1} } \right]}} & {\rm{when}} \quad m = 1 \\ \left[ {_{m - 2}^{n} } \right] + \frac{1}{{\left[ {_{m - 1}^{n + 1} } \right] - \left[ {_{m - 1}^{n - 1} } \right]}} & {\rm{otherwise}} \\ \end{array}} \right.\quad \quad {\rm{or}}\quad \quad S = \left\{ {\begin{array}{ll} \frac{1}{{E - W}} & {\text{first row}} \\ N + \frac{1}{{E - W}} & {\text{other rows}}\\ \end{array}} \right.$$
(10:14:2)

The second alternative in 10:14:2 provides a convenient mnemonic based on the points of the compass viewed from the center of each rhombus. A portion of the lozenge diagram derived in this way from the power series for the exponential function exp(x) follows:

[Lozenge diagram generated from the power series for exp(x); the entries with m and n even, shown in red, are members of the Padé table]

As a comparison with the table in Section 17:12 shows, not all the entries in the lozenge diagram are members of the Padé table, but those for which m and n are even, shown in red, are. And not all members of the Padé table can be generated by this propagation rule; the others, however, can be found similarly by starting with the reciprocal of the power series for the reciprocal of the function, in this case 1/exp(−x), as the input. All the expressions shown in red are valid approximations to exp(x). Among these rational functions, the most useful often are those that lie on the southwestern diagonal, which are 1, \((1 + {\textstyle{1 \over 2}}x)/(1 - {\textstyle{1 \over 2}}x)\), \((1 + {\textstyle{1 \over 2}}x + {\textstyle{1 \over {12}}}x^2 )/(1 - {\textstyle{1 \over 2}}x + {\textstyle{1 \over {12}}}x^2 ),\; \cdots\) in this case. For most functions these diagonal Padé approximants are better approximations, and in some cases phenomenally better approximations, to the function, than are the partial sums of the truncated power series.

It is evident from the burgeoning complexity of the scheme that as a means of constructing, algebraically by hand, diagonal approximants of ever-larger order, the Schmidt-Wynn procedure soon becomes prohibitively tedious. However, it is simple to program the propagation rule to process numbers rather than symbols. In this way, a sequence of numerical values of the partial sums of power series may be converted arithmetically into a sequence of numerical values of the diagonal rational functions. For the case of the exponential function, the diagonal approximants (equal to 1, 3, 2.71429, 2.71831, ⋯ when x = 1) do not converge to the true value (2.71828 to six digits) very much faster than do the partial sums (1, 2, 2.5, 2.66667, 2.70833, 2.71667, ⋯) themselves. Consider, however, the x = 1 instance of the function [Section 37:6] that has the asymptotic power series expansion

$${\rm{f}}(x) \sim \sum\limits_{j = 0}^\infty  {j!( - x)^j  = 1 - x + 2!x^2  - 3!x^3  + 4!x^4  - 5!x^5 }  +  \cdots $$
(10:14:3)

The sequence of partial sums is shown in red as the northernmost row in the lozenge diagram below, together with early results of Schmidt-Wynn transformation

[Lozenge diagram: the partial sums of 10:14:3 at x = 1 (in red, the northernmost row) and their Schmidt-Wynn transforms]

In this application, it is again only every second element in the southwestern diagonal that provides a useful output. Even though the series 10:14:3 is atrociously divergent when x = 1, the red diagonal sequence takes the values 1.0000, 0.6667, 0.6154, 0.6027, ⋯, which soon converge towards the “correct” value 0.5963 [equation 2:5:7] of f(1). Equator frequently uses this so-called ε-transformation to evaluate function values from poorly convergent power series. Note that, in this particular application of the lozenge diagram, it is advantageous to enter the reciprocal of each successive term of the original power series directly into the second (m = 1) row, rather than calculating these entries from the northernmost (m = 0) row. By so doing, one avoids the significance loss that comes from subtracting two partial sums that may be nearly equal.
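The ε-algorithm is short to program. The following column-by-column Python sketch (an illustration, not Equator's implementation) applies rule 10:14:2 to the partial sums of 10:14:3 at x = 1; feeding it an odd number of partial sums leaves the highest even-order diagonal estimate as the sole survivor:

```python
from math import factorial

def wynn_epsilon(s):
    """Wynn's epsilon-algorithm (rule 10:14:2) applied to partial sums s.
    With an odd number of inputs, the single surviving element is the
    highest even-order diagonal estimate."""
    prev = [0.0] * (len(s) + 1)      # the artificial eps_{-1} column of zeros
    cur = [float(v) for v in s]      # the eps_0 column: the partial sums
    while len(cur) > 1:
        nxt = [prev[i + 1] + 1.0 / (cur[i + 1] - cur[i])
               for i in range(len(cur) - 1)]
        prev, cur = cur, nxt
    return cur[0]

# partial sums of the wildly divergent series 10:14:3 at x = 1
terms = [factorial(j) * (-1) ** j for j in range(15)]
partial = [sum(terms[:k + 1]) for k in range(15)]
estimate = wynn_epsilon(partial)
# the diagonal converges towards f(1) = 0.5963...
assert abs(estimate - 0.596347) < 0.005
```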

Another procedure, named the η-transformation, is a somewhat similar application of a lozenge diagram. Again the northernmost row represents power series 10:14:3, but with the difference that each element is now the numerical value of a term in the series, rather than being its partial sum. Keeping with the x = 1 instance of function 10:14:3 as our example, the lozenge diagram for the η-algorithm is

[Lozenge diagram for the η-algorithm applied to the x = 1 instance of 10:14:3]

In compass-point format, the propagation rule used by the η-algorithm is

$$S = \left\{ {\begin{array}{lll} \frac{1}{{(1/E) - (1/W)}} & {\rm{when}} & S = \left[ {_1^n } \right]\\ N + E - W & {\rm{when}} & S = \left[ {_2^n } \right],\left[ {_{\rm{4}}^n } \right],\left[ {_{\rm{6}}^n } \right] \cdots \\ \frac{1}{{(1/N) + (1/E) - (1/W)}} & {\rm{when}} & S = \left[ {_3^n } \right],\left[ {_5^n } \right],\left[ {_{\rm{7}}^n } \right] \cdots \\ \end{array}\;} \right.$$
(10:14:4)

For the η-algorithm, all the elements in the southwestern diagonal are useful: they represent successive terms in a numerical series corresponding to the x = 1 version of 10:14:3. From entries as far as n = m = 6, one has

$${\rm{f}}(1) \approx 1 - {\raise0.5ex\hbox{$\scriptstyle 1$} \kern-0.1em/\kern-0.15em \lower0.25ex\hbox{$\scriptstyle 2$}} + {\raise0.5ex\hbox{$\scriptstyle 1$} \kern-0.1em/\kern-0.15em \lower0.25ex\hbox{$\scriptstyle 6$}} - {\raise0.5ex\hbox{$\scriptstyle 2$} \kern-0.1em/\kern-0.15em \lower0.25ex\hbox{$\scriptstyle {21}$}} + {\raise0.5ex\hbox{$\scriptstyle 4$} \kern-0.1em/\kern-0.15em \lower0.25ex\hbox{$\scriptstyle {91}$}} - {\raise0.5ex\hbox{$\scriptstyle 6$} \kern-0.1em/\kern-0.15em \lower0.25ex\hbox{$\scriptstyle {221}$}} + {\raise0.5ex\hbox{$\scriptstyle {18}$} \kern-0.1em/\kern-0.15em \lower0.25ex\hbox{$\scriptstyle {1241}$}} = {\raise0.5ex\hbox{$\scriptstyle {44}$} \kern-0.1em/\kern-0.15em \lower0.25ex\hbox{$\scriptstyle {73}$}}$$
(10:14:5)

Note the identity of this result with that obtained from the \(\left[ {_6^6 } \right]\) result of the ε-transformation.
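The η-algorithm is equally easy to program. In the following Python sketch (an illustration; the bookkeeping of elements by (n, m) index pairs is an implementation choice), exact rational arithmetic reproduces the fractions of 10:14:5:

```python
from fractions import Fraction
from math import factorial

def eta_transform(terms, depth):
    """The eta-algorithm lozenge, rule 10:14:4. terms feed the m = 0 row
    (element [2n, 0] = term n); the diagonal elements [m, m] are the
    terms of the transformed series."""
    E = {(2 * n, 0): Fraction(t) for n, t in enumerate(terms)}
    top = 2 * len(terms) - 2
    for m in range(1, depth + 1):
        for n in range(m, top - m + 1, 2):
            W, Ee = E[(n - 1, m - 1)], E[(n + 1, m - 1)]
            if m == 1:
                E[(n, m)] = 1 / (1 / Ee - 1 / W)
            elif m % 2 == 0:
                E[(n, m)] = E[(n, m - 2)] + Ee - W
            else:
                E[(n, m)] = 1 / (1 / E[(n, m - 2)] + 1 / Ee - 1 / W)
    return [E[(m, m)] for m in range(depth + 1)]

# terms of the divergent series 10:14:3 at x = 1
eta = eta_transform([factorial(j) * (-1) ** j for j in range(7)], 6)
assert eta == [Fraction(1), Fraction(-1, 2), Fraction(1, 6), Fraction(-2, 21),
               Fraction(4, 91), Fraction(-6, 221), Fraction(18, 1241)]
assert sum(eta) == Fraction(44, 73)   # agrees with 10:14:5
```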

Our final application of lozenge diagrams is that developed by Heinz Rutishauser (Swiss mathematician, 1918 −1970). Any power series may be written as a concatenation (or nested sum):

$${\rm{f}}(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_j x^j + \cdots = a_0 + b_1 x\left( {1 + b_2 x\left( {1 + b_3 x\left( {1 + ... + b_j x\left( {1 + \cdots } \right)} \right)} \right)} \right)$$
(10:14:6)

where \(b_j = a_j /a_{j - 1}\). Note that for an alternating series, all b’s are negative. It is the \(b_j x\) terms that are fed into the northernmost row of a lozenge diagram in the Rutishauser transformation [see Acton]. Notice that \(b_1 x\) enters \(\left[ {_0^0 } \right]\), \(b_2 x\) enters \(\left[ {_0^2 } \right]\), and generally \(\left[ {_{\;0}^{2j - 2} } \right] = b_j x\). The propagation rule for this algorithm is simple but bizarre:

$$S = \left\{ {\begin{array}{lll} - E + W & {\rm{when}} & S = \left[ {_1^n } \right] \\ NE/W & {\rm{when}} & S = \left[ {_2^n } \right],\left[ {_{\rm{4}}^n } \right],\left[ {_{\rm{6}}^n } \right] \cdots \\ N - E + W & {\rm{when}} & S = \left[ {_3^n } \right],\left[ {_5^n } \right],\left[ {_{\rm{7}}^n } \right] \cdots\\ \end{array}} \right.$$
(10:14:7)

Note that the \(a_0\) term, which leads the series in 10:14:6, plays no part in the algorithm. The output from the transformation does not relate directly to the power series that was input, but to an equivalent continued fraction. If, for the elements on the southwestern diagonal, we adopt the nomenclature \(c_1 x = \left[ {_0^0 } \right] = b_1 x\), \(c_2 x = \left[ {_1^1 } \right]\), \(c_3 x = \left[ {_2^2 } \right]\) and generally \(c_j x = \left[ {_{j - 1}^{j - 1} } \right]\), then the continued fraction in question is

$${\rm{f}}(x) = \frac{{a_0 }}{{1 - }}\frac{{c_1 x}}{{1 + }}{\kern 1pt} \;\frac{{c_2 x}}{{1 - }}\;\frac{{c_3 x}}{{1 + }}\;\frac{{c_4 x}}{{1 - }}\frac{{c_5 x}}{{1 + }}\;\frac{{c_6 x}}{{1 - \cdots }}$$
(10:14:8)

Thus the a coefficients of the original series have been converted, via the concatenation b coefficients, to the continued fraction c constants. The Rutishauser algorithm fails when applied to the x = 1 case of 10:14:3, the example treated previously. As an alternative, we reconsider the exponential series

$${\rm{f}}(x) = \exp (x) = \sum\limits_{j = 0}^\infty {\frac{{x^j }}{{j!}}\quad \quad \quad a_j = \frac{1}{{j!}}\quad \quad b_j x = \frac{x}{j}} $$
(10:14:9)

this time with x unspecified, and construct the following lozenge diagram:

[Lozenge diagram constructed by the Rutishauser rule from the entries b_j x = x/j]

It follows that

$$\exp (x) = \frac{1}{{1 - }}\frac{x}{{1 + }}\;\frac{{{\textstyle{1 \over 2}}x}}{{1 - }}\;\frac{{{\textstyle{1 \over 6}}x}}{{1 + }}\;\frac{{{\textstyle{1 \over 6}}x}}{{1 - }}\;\frac{{{\textstyle{1 \over {10}}}x}}{{1 + }}\;\frac{{{\textstyle{1 \over {10}}}x}}{{1 - }}\;\frac{{{\textstyle{1 \over {14}}}x}}{{1 + \cdots }}$$
(10:14:10)
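The Rutishauser rule 10:14:7 can likewise be programmed. The following Python sketch (an illustration) works with the coefficients b_j and c_j, i.e. with the factors multiplying x, and recovers the constants of 10:14:10 in exact rational arithmetic:

```python
from fractions import Fraction

def rutishauser(b, depth):
    """Rutishauser's lozenge, rule 10:14:7. b[j-1] = b_j feeds element
    [2j-2, 0] of the northernmost row; the diagonal elements give the
    continued-fraction constants c_j of 10:14:8."""
    E = {(2 * j, 0): Fraction(b[j]) for j in range(len(b))}
    top = 2 * len(b) - 2
    for m in range(1, depth + 1):
        for n in range(m, top - m + 1, 2):
            W, Ee = E[(n - 1, m - 1)], E[(n + 1, m - 1)]
            if m == 1:
                E[(n, m)] = -Ee + W
            elif m % 2 == 0:
                E[(n, m)] = E[(n, m - 2)] * Ee / W
            else:
                E[(n, m)] = E[(n, m - 2)] - Ee + W
    return [E[(m, m)] for m in range(depth + 1)]

# exp(x): a_j = 1/j!, so b_j = a_j/a_{j-1} = 1/j
b = [Fraction(1, j) for j in range(1, 9)]
c = rutishauser(b, 4)
# the continued-fraction constants of 10:14:10: 1, 1/2, 1/6, 1/6, 1/10
assert c == [Fraction(1), Fraction(1, 2), Fraction(1, 6),
             Fraction(1, 6), Fraction(1, 10)]
```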

The occurrence of zeros or infinities in lozenge propagation calculations can disable the procedure. Sometimes it is possible to proceed without penalty by replacing the infinity or zero, respectively, by a very large or a very small number, such as the 10±99 used by Equator for this purpose during ε-transformations.