Many relationships in science and engineering take the form f(x) = bx + c, and many others can be cast into this form by a redefinition of the variables. The analysis of experimental results can thereby often be reduced to the determination of the coefficients b and c from paired x, f(x) data. The linear regression or least squares method of performing this analysis is exposed in Section 7:14.

7.1 Notation

The linear function bx + c, or its special case 1 + x, is sometimes referred to as a “binomial function”, but confusingly it is also termed a “monomial function”. Neither name is used in this Atlas.

The constant c is termed the intercept, whereas b is known as the slope or gradient. These names derive, of course, from the observation that, if the linear function bx + c is plotted versus x, its graph is a straight line that intersects the vertical axis at altitude c and whose inclination from the horizontal is characterized by the number b, which measures the rate at which the function’s value increases with x. The b coefficient is also the tangent [Chapter 34] of the angle θ shown in Figure 7-1. The term “slope” is occasionally associated with the angle θ itself, but it more usually means tan(θ), being negative if θ is an obtuse angle (90° < θ < 180°). The letter m often replaces b as a symbol for the slope. In the figures, but not elsewhere in the chapter, b and c are assumed positive.

Figure 7-1: the straight line f(x) = bx + c, its intercept c, and the angle θ whose tangent is the slope b

The name “inverse linear function” is sometimes given to the function 1/(bx + c). Throughout this Atlas, however, we reserve the phrase “inverse function” for the relationship described in equation 0:3:3. The unambiguous name reciprocal linear function is used here.

The name rectangular hyperbola may also be associated with the function 1/(bx + c). However, this name is also applicable to other functions, described in Section 7:13, that share the same shape as, but possess a different orientation to, the reciprocal linear function illustrated in Figure 7-2.

Figure 7-2: the reciprocal linear function 1/(bx + c), with its infinite discontinuity at x = −c/b

7.2 Behavior

The linear function bx + c is defined for all values of the argument x and (unless b = 0) itself assumes all values. The same is true of the reciprocal linear function which, however, displays an infinite discontinuity at x = −c/b, as illustrated in Figure 7-2. A graph of the reciprocal linear function f(x) = 1/(bx + c) has a high degree of symmetry [Section 14:15], being inversion symmetric about the point x = −c/b, f = 0 and displaying mirror symmetry under reflection in the lines f = [x + (c/b)]sgn(b) and f = −[x + (c/b)]sgn(b). Here sgn is the signum function [Chapter 8].

7.3 Definitions

The arithmetic operations of multiplication by b and addition of c fully define f(x) = bx + c. The same operations, followed by division into unity, define the reciprocal linear function.

The linear function is completely characterized when its values, f₁ and f₂, are known at two distinct arguments, x₁ and x₂. The slope and intercept may then be found from the formula

$$bx + c = \left( {\frac{{f_2 - f_1 }}{{x_2 - x_1 }}} \right)x + \frac{{x_2 f_1 - x_1 f_2 }}{{x_2 - x_1 }}$$
(7:3:1)
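
As a concrete illustration of equation 7:3:1, the short Python sketch below (the function name line_through is ours, not the Atlas’s) recovers the slope and intercept from two sampled points of the line f = 3x + 2.

```python
def line_through(x1, f1, x2, f2):
    """Slope b and intercept c of the straight line through the points
    (x1, f1) and (x2, f2), via equation 7:3:1; requires x1 != x2."""
    b = (f2 - f1) / (x2 - x1)
    c = (x2 * f1 - x1 * f2) / (x2 - x1)
    return b, c

b, c = line_through(1.0, 5.0, 3.0, 11.0)
print(b, c)   # 3.0 2.0, since both points lie on f = 3x + 2
```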

7.4 Special Cases

When b = 0, the linear function and its reciprocal reduce to a constant [Chapter 1]. When c = 0, the linear function is proportional to its argument x.

7.5 Intrarelationships

Both the linear function and its reciprocal obey the reflection formula

$${\rm{f}}\left( { - x - \frac{c}{b}} \right) = - {\rm{f}}\left( {x - \frac{c}{b}} \right)\quad \quad \quad {\rm{f}}(x) = bx + c\quad {\rm{or}}\quad \frac{1}{{bx + c}}$$
(7:5:1)

Two linear functions that share the same b parameter represent parallel straight lines; the distance separating these lines is \(\left| {c_2 - c_1 } \right|/\sqrt {1 + b^2 }\). If the slopes of two straight lines satisfy the relation b₁b₂ = −1, the lines are mutually perpendicular, intersecting at the point x = (c₂ − c₁)/(b₁ − b₂). The inverse of the linear function f(x) = bx + c, defined by F{f(x)} = x, is another linear function, F(x) = (x/b) − (c/b); graphically, these two straight lines usually cross at x = c/(1 − b), a point lying on the line f = x; exceptionally, the lines are parallel when b = 1 and coincident when b = −1.

The sum or difference of two linear functions is a third linear function, (b₁ ± b₂)x + c₁ ± c₂, and this property extends to any number of components. The product of two, three, or many linear functions is a quadratic function [Chapter 15], a cubic function [Chapter 16], or a higher polynomial function [Chapter 17]. Unless b₂c₁ = b₁c₂, the quotient of two linear functions may be expanded as the infinite power series

$$\frac{{b_1 x + c_1 }}{{b_2 x + c_2 }} = \left\{ \begin{array}{ll} \frac{{c_1 }}{{c_2 }} + \left( {\frac{{c_1 }}{{c_2 }} - \frac{{b_1 }}{{b_2 }}} \right)\sum\limits_{j = 1}^\infty {\left( {\frac{{ - b_2 x}}{{c_2 }}} \right)}^j & \left| x \right| < \left| {\frac{{c_2 }}{{b_2 }}} \right| \\ \frac{{b_1 }}{{b_2 }} - \left( {\frac{{c_1 }}{{c_2 }} - \frac{{b_1 }}{{b_2 }}} \right)\sum\limits_{j = 1}^\infty {\left( {\frac{{ - c_2 }}{{b_2 x}}} \right)}^j & \left| x \right| > \left| {\frac{{c_2 }}{{b_2 }}} \right| \\ \end{array} \right.$$
(7:5:2)

The sum or difference of two reciprocal linear functions is a rational function [Section 17:13] of numeratorial and denominatorial degrees of 1 and 2 respectively. Finite series of certain reciprocal linear functions may be summed in terms of the digamma function [Chapter 44]

$$\frac{1}{c} + \frac{1}{{x + c}} + \frac{1}{{2x + c}} + \cdots + \frac{1}{{Jx + c}} = \sum\limits_{j = 0}^J {\frac{1}{{jx + c}} = \frac{1}{x}\left[ {\psi \left( {J + 1 + \frac{c}{x}} \right) - \psi \left( {\frac{c}{x}} \right)} \right]} $$
(7:5:3)

or in terms of Bateman’s G function [Section 44:13]

$$\frac{1}{c} - \frac{1}{{x + c}} + \frac{1}{{2x + c}} - \cdots \pm \frac{1}{{Jx + c}} = \sum\limits_{j = 0}^J {\frac{{( - 1)^j }}{{jx + c}} = \frac{1}{{2x}}\left[ {{\rm{G}}\left( {\frac{c}{x}} \right) \pm {\rm{G}}\left( {J + 1 + \frac{c}{x}} \right)} \right]} $$
(7:5:4)

In formula 7:5:4, the upper/lower signs are taken depending on whether J is even or odd. The corresponding infinite sum is

$$\frac{1}{c} - \frac{1}{{x + c}} + \frac{1}{{2x + c}} - \cdots = \sum\limits_{j = 0}^\infty {\frac{{( - 1)^j }}{{jx + c}} = \frac{1}{{2x}}{\rm{G}}\left( {\frac{c}{x}} \right)} $$
(7:5:5)

See Section 44:14 for further information on this topic.
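
Equations 7:5:3 and 7:5:5 are easy to confirm numerically; the sketch below (parameter values chosen arbitrarily) checks the digamma form 7:5:3 using scipy, whose psi routine is the digamma function of Chapter 44.

```python
from scipy.special import psi   # psi is scipy's name for the digamma function

x, c, J = 0.7, 1.3, 25
lhs = sum(1.0 / (j * x + c) for j in range(J + 1))    # the finite series
rhs = (psi(J + 1 + c / x) - psi(c / x)) / x           # equation 7:5:3
print(lhs, rhs)   # the two values agree to machine precision
```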

7.6 Expansions

The linear function may be expanded as an infinite series of Bessel functions [Chapter 52]

$$bx + c = c + 2\;{\rm{J}}_1 (bx) + 6\;{\rm{J}}_3 (bx) + 10\;{\rm{J}}_5 (bx) + \cdots = c + 2\sum\limits_{j = 1}^\infty {(2j - 1)} \;{\rm{J}}_{2j - 1} (bx)$$
(7:6:1)

though this representation is seldom employed.

The reciprocal linear function is expansible as a geometric series in alternative forms

$$\frac{1}{{bx + c}} = \left\{ \begin{array}{ll} \frac{1}{c} - \frac{{bx}}{{c^2 }} + \frac{{b^2 x^2 }}{{c^3 }} - \frac{{b^3 x^3 }}{{c^4 }} + \cdots = \frac{1}{c}\sum\limits_{j = 0}^\infty {\left( {\frac{{ - bx}}{c}} \right)^j } & \left| x \right| < \left| {\frac{c}{b}} \right| \\ \frac{1}{{bx}} - \frac{c}{{b^2 x^2 }} + \frac{{c^2 }}{{b^3 x^3 }} - \frac{{c^3 }}{{b^4 x^4 }} + \cdots = \frac{1}{{bx}}\sum\limits_{j = 0}^\infty {\left( {\frac{{ - c}}{{bx}}} \right)^j } & \left| x \right| > \left| {\frac{c}{b}} \right| \\ \end{array} \right.$$
(7:6:2)

according to the magnitude of the argument x compared to that of the ratio c/b. Likewise there are two alternatives when the reciprocal linear function is expanded as an infinite product

$$\frac{1}{{bx + c}} = \left\{ \begin{array}{ll} \frac{{c - bx}}{{c^2 }}\prod\limits_{j = 1}^\infty {\left[ {1 + \left( {\frac{{bx}}{c}} \right)^{2^j } } \right]} & - 1 < \frac{{bx}}{c} < 1 \\ \frac{{bx - c}}{{b^2 x^2 }}\prod\limits_{j = 1}^\infty {\left[ {1 + \left( {\frac{c}{{bx}}} \right)^{2^j } } \right]} & \left| {\frac{{bx}}{c}} \right| > 1 \\ \end{array} \right.$$
(7:6:3)

For example, if |x| < 1

$$\frac{1}{{1 \pm x}} = \left( {1 \mp x} \right)\left( {1 + x^2 } \right)\left( {1 + x^4 } \right)\left( {1 + x^8 } \right)\left( {1 + x^{16} } \right) \cdots $$
(7:6:4)
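
Because the exponents 2, 4, 8, ··· grow geometrically, very few factors of 7:6:4 are needed in practice. A minimal sketch, with x = 0.6 chosen arbitrarily:

```python
x = 0.6
product = 1.0 - x                   # leading factor of the 1/(1 + x) case
for j in range(1, 6):               # factors (1 + x**2) through (1 + x**32)
    product *= 1.0 + x ** (2 ** j)
print(product, 1.0 / (1.0 + x))     # both print 0.625 (to about 14 digits)
```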

7.7 Particular Values

As Figure 7-1 shows, the linear function bx + c equals c when x = 0 and equals zero when x = −c/b. The reciprocal linear function has neither an extremum nor a zero, but it incurs a discontinuity at x = −c/b [Figure 7-2].

7.8 Numerical Values

These are easily calculated by direct substitution. The construction feature of Equator enables a linear function to be used as the argument of another function.

7.9 Limits And Approximations

The reciprocal linear function approaches zero asymptotically as x → ±∞.

7.10 Operations Of The Calculus

The rules for differentiation of the linear function and its reciprocal are

$$\frac{{\mathop{\rm d}\nolimits} }{{{\rm{d}}x}}(bx + c) = b$$
(7:10:1)

and

$$\frac{{\mathop{\rm d}\nolimits} }{{{\rm{d}}x}}\left( {\frac{1}{{bx + c}}} \right) = \frac{{ - b}}{{(bx + c)^2 }}$$
(7:10:2)

while those for indefinite integration are

$$\int\limits_{\rm{0}}^x {\;(bt + c){\rm{d}}t} = \frac{{bx^2 }}{2} + cx$$
(7:10:3)

and

$$\int\limits_{\rm{0}}^x {\;\frac{1}{{bt + c}}\;{\rm{d}}t} = \frac{1}{b}\ln \left( {\left| {\frac{{bx + c}}{c}} \right|} \right)$$
(7:10:4)

If 0 < −c/b < x, the integrand in 7:10:4 encounters an infinity; in this event, the integral is to be interpreted as a Cauchy limit. This means that the ordinary definition of the integral is replaced by

$$\mathop {\lim }\limits_{\varepsilon \to 0} \left\{ {\int\limits_0^{ - (c/b) - \varepsilon } {\frac{1}{{bt + c}}\;{\rm{d}}t\; + } \int\limits_{ - (c/b) + \varepsilon }^x {\frac{1}{{bt + c}}\;{\rm{d}}t} } \right\}$$
(7:10:5)
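
The Cauchy limit 7:10:5 can be evaluated with scipy, whose quad routine computes the principal value of g(t)/(t − wvar) when given weight='cauchy'. A sketch with b = 1, c = −1, x = 3, so that the infinity at t = 1 lies inside the integration interval:

```python
from math import log
from scipy.integrate import quad

b, c, x = 1.0, -1.0, 3.0
t0 = -c / b    # the integrand 1/(bt + c) = (1/b)/(t - t0) blows up here
pv, _ = quad(lambda t: 1.0 / b, 0.0, x, weight='cauchy', wvar=t0)
print(pv, log(abs((b * x + c) / c)) / b)   # both give ln(2) = 0.6931...
```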

Several other integrals in this section, including those that follow immediately, may also require interpretation as Cauchy limits, but mention of this will not always be made.

$$\int\limits_0^x {\frac{1}{{(Bt + C)(bt + c)}}{\rm{d}}t = \frac{1}{{bC - Bc}}\ln \left( {\left| {\frac{{C(bx + c)}}{{c(Bx + C)}}} \right|} \right)} \quad \quad \quad Bc \ne bC$$
(7:10:6)
$$\int\limits_0^x {\frac{{Bt + C}}{{bt + c}}{\rm{d}}t = \frac{{Bx}}{b} + \frac{{bC - Bc}}{{b^2 }}\ln \left( {\left| {\frac{{bx + c}}{c}} \right|} \right)} $$
(7:10:7)
$$\int\limits_0^x {\frac{{t^n }}{{bt + c}}\,{\rm{d}}t} = \frac{{\left( { - c} \right)^n }}{{b^{n + 1} }}\left[ {\ln \left( {\left| {\frac{{bx + c}}{c}} \right|} \right) + \sum\limits_{j = 1}^n {\frac{1}{j}\left( {\frac{{ - bx}}{c}} \right)^j } } \right]\quad \quad \quad n = 0,1,2, \cdots $$
(7:10:8)

Formulas for the semidifferentiation and semiintegration [Section 12:14] of the linear function are

$$\frac{{{\mathop{\rm d}\nolimits} ^{1/2} }}{{{\rm{d}}x^{1/2} }}(bx + c) = \frac{{2bx + c}}{{\sqrt {\pi x} }}$$
(7:10:9)

and

$$\frac{{{\mathop{\rm d}\nolimits} ^{ - 1/2} }}{{{\rm{d}}x^{ - 1/2} }}(bx + c) = \sqrt {\frac{x}{\pi }} \left[ {\frac{{4bx}}{3} + 2c} \right]$$
(7:10:10)

when the lower limit is zero. The table below shows the semiderivatives and semiintegrals of the reciprocal linear functions 1/(bx+c) and 1/(bxc), when the lower limit is zero, and when it is −∞. In this table, but not necessarily elsewhere in the chapter, b and c are positive.

 

$$\begin{array}{c|c|c}
 & {\rm{f}}(x) = \dfrac{1}{bx + c} & {\rm{f}}(x) = \dfrac{1}{bx - c} \\ \hline
\dfrac{{\rm{d}}^{1/2}\,{\rm{f}}}{{\rm{d}}x^{1/2}} & \dfrac{\sqrt{bx + c} - \sqrt{bx}\;{\rm{arsinh}}\left(\sqrt{bx/c}\,\right)}{\sqrt{\pi x (bx + c)^3}} \quad x > 0 & \dfrac{-\sqrt{c - bx} - \sqrt{bx}\;\arcsin\left(\sqrt{bx/c}\,\right)}{\sqrt{\pi x (c - bx)^3}} \quad 0 < x < \dfrac{c}{b} \\
\dfrac{{\rm{d}}^{-1/2}\,{\rm{f}}}{{\rm{d}}x^{-1/2}} & \dfrac{2\,{\rm{arsinh}}\left(\sqrt{bx/c}\,\right)}{\sqrt{\pi b (bx + c)}} \quad x > 0 & \dfrac{-2\arcsin\left(\sqrt{bx/c}\,\right)}{\sqrt{\pi b (c - bx)}} \quad 0 < x < \dfrac{c}{b} \\
\left. \dfrac{{\rm{d}}^{1/2}}{{\rm{d}}t^{1/2}}\,{\rm{f}}(t) \right|_{-\infty}^{x} & \dfrac{1}{2}\sqrt{\dfrac{\pi b}{(-bx - c)^3}} \quad x < \dfrac{-c}{b} & \dfrac{-1}{2}\sqrt{\dfrac{\pi b}{(c - bx)^3}} \quad x < \dfrac{c}{b} \\
\left. \dfrac{{\rm{d}}^{-1/2}}{{\rm{d}}t^{-1/2}}\,{\rm{f}}(t) \right|_{-\infty}^{x} & -\sqrt{\dfrac{\pi}{b(-bx - c)}} \quad x < \dfrac{-c}{b} & -\sqrt{\dfrac{\pi}{b(c - bx)}} \quad x < \dfrac{c}{b}
\end{array}$$

See Sections 12:14 and 64:14 for the definitions and symbolism of semidifferintegrals with various lower limits.

The Laplace transforms of the linear and reciprocal linear functions are

$$\int\limits_0^\infty {(bt + c)\exp ( - st)\;{\rm{d}}t} = \mathcal{L}\left\{ {bt + c} \right\} = \frac{{cs + b}}{{s^2 }}$$
(7:10:11)

and

$$\int\limits_0^\infty {\frac{1}{{bt + c}}\exp ( - st)\,{\rm{d}}t} = \mathcal{L}\left\{ {\frac{1}{{bt + c}}} \right\} = \frac{{ - 1}}{b}\exp \left( {\frac{{cs}}{b}} \right){\rm{Ei}}\left( {\frac{{ - cs}}{b}} \right)$$
(7:10:12)

the latter involving an exponential integral [Chapter 37]. A general rule for the Laplace transformation of the product of any transformable function f(t) and a linear function is

$$\int\limits_0^\infty {(bt + c)\,{\rm{f}}(t)\exp ( - st)\,{\rm{d}}t} = \mathcal{L}\left\{ {(bt + c)\,{\rm{f}}(t)} \right\} = c\,\mathcal{L}\left\{ {{\rm{f}}(t)} \right\} - b\frac{{\rm{d}}}{{{\rm{d}}s}}\mathcal{L}\left\{ {{\rm{f}}(t)} \right\}$$
(7:10:13)
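
Transform 7:10:12 may be verified numerically; in the sketch below (b, c and s are arbitrary positive values) scipy’s expi supplies the exponential integral Ei of Chapter 37.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import expi     # the exponential integral Ei

b, c, s = 2.0, 3.0, 1.5
numeric, _ = quad(lambda t: np.exp(-s * t) / (b * t + c), 0.0, np.inf)
closed = -np.exp(c * s / b) * expi(-c * s / b) / b     # equation 7:10:12
print(numeric, closed)             # agree to quadrature tolerance
```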

Requiring a Cauchy-limit interpretation, the integral transform

$$\frac{1}{\pi }\int\limits_{ - \infty }^\infty {{\rm{f}}(t)\frac{{{\rm{d}}t}}{{t - y}}} $$
(7:10:14)

is called a Hilbert transform (David Hilbert, German mathematician, 1862−1943). The Hilbert transforms of many functions, mostly piecewise-defined functions [Section 8:4], are tabulated by Erdélyi, Magnus, Oberhettinger and Tricomi [Tables of Integral Transforms, Volume 2, Chapter 15]. For example, the Hilbert transform of the pulse function [Section 1:13] is

$$\frac{1}{\pi }\int\limits_{ - \infty }^\infty {c\left[ {u\left( {t - a + \frac{h}{2}} \right) - u\left( {t - a - \frac{h}{2}} \right)} \right]\frac{{{\rm{d}}t}}{{t - y}}} = \frac{c}{\pi }\ln \left( {\frac{{2(a - y) + h}}{{2(a - y) - h}}} \right)$$
(7:10:15)
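
The same principal-value machinery used for 7:10:5 verifies this Hilbert transform; in the sketch below a pulse of height c = 1 and width h = 2, centered at a = 0, is transformed at the interior argument y = 0.5.

```python
from math import pi, log
from scipy.integrate import quad

c, a, h, y = 1.0, 0.0, 2.0, 0.5
# principal value of (c/pi) * integral of dt/(t - y) over the pulse's support
pv, _ = quad(lambda t: c / pi, a - h / 2, a + h / 2, weight='cauchy', wvar=y)
closed = (c / pi) * log(abs((2 * (a - y) + h) / (2 * (a - y) - h)))
print(pv, closed)   # both give -ln(3)/pi = -0.3496...
```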

A valuable feature of Hilbert transformation is that the inverse transform is identical in form to the forward transformation, apart from a sign change.

7.11 Complex Argument

The linear function of complex argument, and its reciprocal, split into the real and imaginary parts

$$bz + c = (bx + c) + iby$$
(7:11:1)

and

$$\frac{1}{{bz + c}} = \frac{{bx + c}}{{(bx + c)^2 + b^2 y^2 }} - i\frac{{by}}{{(bx + c)^2 + b^2 y^2 }}$$
(7:11:2)

if b and c are real.

The inverse Laplace transformation of the linear and reciprocal linear functions leads to functions from Chapters 10 and 26

$$\int\limits_{\alpha - i\infty }^{\alpha + i\infty } {(bs + c)\frac{{\exp (ts)}}{{2\pi i}}\;{\rm{d}}s} = \mathcal{I}\left\{ {bs + c} \right\} = b\,\delta^{\prime}(t) + c\,\delta (t)$$
(7:11:3)

and

$$\int\limits_{\alpha - i\infty }^{\alpha + i\infty } {\frac{1}{{bs + c}}\frac{{\exp (ts)}}{{2\pi i}}\;{\rm{d}}s} = \mathcal{I}\left\{ {\frac{1}{{bs + c}}} \right\} = \frac{1}{b}\exp \left( {\frac{{ - ct}}{b}} \right)$$
(7:11:4)
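
Equation 7:11:4 can be reproduced symbolically; a sketch using sympy, with b and c declared positive so that the inversion proceeds cleanly:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
b, c = sp.symbols('b c', positive=True)
f = sp.inverse_laplace_transform(1 / (b * s + c), s, t)
print(sp.simplify(f))   # exp(-c*t/b)/b, as equation 7:11:4 asserts
```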

The Laplace inversion of a function of bs + c is related to the inverse Laplace transform of the function itself through the general formula

$$\int\limits_{\alpha - i\infty }^{\alpha + i\infty } {{\rm{f}}(bs + c)\frac{{\exp (ts)}}{{2\pi i}}\;{\rm{d}}s} = \mathcal{I}\left\{ {{\rm{f}}(bs + c)} \right\} = \frac{1}{b}\exp \left( {\frac{{ - ct}}{b}} \right)\mathcal{I}\left\{ {{\rm{\bar f}}\,(s)} \right\}$$
(7:11:5)

7.12 Generalizations

The linear and reciprocal linear functions are the v = ±1 cases of the more general function \((bx + c)^v\), two other instances of which are addressed in Chapter 11. More broadly, all the functions of Chapters 10−14 are particular examples of a wide class of algebraic functions generalized by the formula \((bx^n + c)^v\).

The linear function is an early member of a hierarchy in which the quadratic and cubic functions [Chapters 15 and 16] are higher members and which generalizes to the polynomial functions discussed in Chapter 17. All the “named” polynomials [Chapters 18−24] have a linear function as one member of their families.

7.13 Cognate Functions

A frequent need in science and engineering is to approximate a function whose values, f₀, f₁, f₂, ···, fₙ, are known only at a limited number of arguments x₀, x₁, x₂, ···, xₙ, the so-called data points. The simplest way of constructing a function that fits all the known data, though one that is adequate in many applications, is to use a piecewise-linear function. In graphical terms, this implies simply “connecting the dots”. For any argument lying between two adjacent data points, the interpolation

$${\rm{f}}(x) = \frac{{\left( {x_{j + 1} - x} \right)f_j + \left( {x - x_j } \right)f_{j + 1} }}{{x_{j + 1} - x_j }}\quad \quad \quad x_j \le x \le x_{j + 1} $$
(7:13:1)

applies. Usually a piecewise-linear function has slope discontinuities at all the interior data points. This defect is overcome by the “sliding cubic” and “cubic spline” interpolations exposed in Section 17:14.
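
Interpolation 7:13:1 is precisely what numpy’s interp routine computes, as the following sketch (with invented data points) confirms for an argument lying between x₁ and x₂.

```python
import numpy as np

xd = np.array([0.0, 1.0, 2.5, 4.0])    # data-point arguments x_0 ... x_3
fd = np.array([1.0, 3.0, 2.0, 5.0])    # corresponding values f_0 ... f_3
x, j = 1.75, 1                         # x lies between x_1 and x_2
formula = ((xd[j + 1] - x) * fd[j] + (x - xd[j]) * fd[j + 1]) / (xd[j + 1] - xd[j])
print(formula, np.interp(x, xd, fd))   # identical: 2.5 2.5
```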

The simplest reciprocal linear functions 1/(1 ± x) serve as prototypes, or basis functions, for all L = K hypergeometric functions [Section 18:14]. All the functions in Tables 18-1 and 18-2, as well as many others, may be “synthesized” [Section 43:14] from 1/(1 + x) or 1/(1 − x).

The reciprocal linear function is related to the function addressed in Section 15:4 because the shape of each is a rectangular hyperbola. Thus, clockwise rotation [Section 14:15] through an angle of π/4 of the curve 1/(bx + c) about the point x = −c/b on the x-axis produces a new function,

$$\pm \sqrt {\left( {x + \frac{c}{b}} \right)^2 + \frac{2}{b}} $$
(7:13:2)

that is a rectangular hyperbola of the class discussed in Section 15:4.

7.14 RELATED TOPIC: linear regression

Frequently experimenters collect data that are known, or believed, to obey the equation f = f(x) = bx + c but which incorporate errors. From the data, which consist of the n pairs of numbers (x₁, f₁), (x₂, f₂), (x₃, f₃), ···, (xₙ, fₙ), the scientist needs to find the b and c coefficients of the best straight line through the data, as in Figure 7-3. If the errors obey, or are assumed to obey, a Gaussian distribution [Section 27:14] and are entirely associated with the measurement of f (that is, the x values are exact), then the adjective “best” implies minimizing the sum of the squared deviations, \(\sum {(bx + c - f)^2 }\). The procedure for finding the coefficients that achieve this minimization is known as linear regression or least squares and leads to the formulas

$$b = \frac{{n\sum {xf} - \sum x \sum f }}{{n\sum {x^2 } - \left( {\sum x } \right)^2 }} = \frac{{6\left[ {2\sum {jf - (n + 1)\sum f } } \right]}}{{n(n^2 - 1)h}}$$
(7:14:1)

and

$$c = \frac{{\sum {x^2 } \sum f - \sum x \sum {xf} }}{{n\sum {x^2 } - \left( {\sum x } \right)^2 }} = \frac{{\sum f - b\sum x }}{n} = \frac{{\sum f }}{n} - \frac{{b(x_1 + x_n )}}{2}\;$$
(7:14:2)

Figure 7-3: the best (least-squares) straight line drawn through scattered (x, f) data points

An abbreviated notation, exemplified by

$$\sum {xf = \sum\limits_{j = 1}^n {x_j f_j } } $$
(7:14:3)

is used in the formulas of this section. Evaluation of these formulas simplifies considerably in the common circumstance in which data are gathered with equal spacing, that is, when x₂ − x₁ = x₃ − x₂ = ··· = xₙ − xₙ₋₁ = h. The simplified formulas appear as the final members of equations 7:14:1, 7:14:2, 7:14:4 and 7:14:8.

A measure of how well the data obey the linear relationship is provided by the correlation coefficient, given by

$$r = \frac{{n\sum {xf} - \sum x \sum f }}{{\sqrt {\left[ {n\sum {x^2 } - \left( {\sum x } \right)^2 } \right]\left[ {n\sum {f^2 } - \left( {\sum f } \right)^2 } \right]} }} = b\sqrt {\frac{{n\sum {x^2 } - \left( {\sum x } \right)^2 }}{{n\sum {f^2 } - \left( {\sum f } \right)^2 }}} = \frac{{nhb}}{6}\sqrt {\frac{{3(n^2 - 1)}}{{n\sum {f^2 } - \left( {\sum f } \right)^2 }}} $$
(7:14:4)

Values close to ±1 imply a good fit of the data to the linear function, whereas r will be close to zero if there is little or no correlation between f and x. Sometimes r² is cited instead of r.
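
Equations 7:14:1, 7:14:2 and 7:14:4 translate directly into a few lines of Python; the sketch below (the helper name lsq_line is ours, not the Atlas’s) fits noisy data lying close to f = 2x + 1.

```python
import numpy as np

def lsq_line(x, f):
    """Least-squares slope b, intercept c and correlation coefficient r,
    by equations 7:14:1, 7:14:2 and 7:14:4."""
    n = len(x)
    d = n * np.sum(x * x) - np.sum(x) ** 2
    b = (n * np.sum(x * f) - np.sum(x) * np.sum(f)) / d
    c = (np.sum(x * x) * np.sum(f) - np.sum(x) * np.sum(x * f)) / d
    r = (n * np.sum(x * f) - np.sum(x) * np.sum(f)) / np.sqrt(
        d * (n * np.sum(f * f) - np.sum(f) ** 2))
    return b, c, r

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
f = np.array([2.9, 5.1, 7.0, 9.2, 10.8])
print(lsq_line(x, f))   # roughly (1.99, 1.03, 0.9988)
```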

Commonly there is a need to know not only what the best values are of the slope b and the intercept c but also what uncertainties attach to these best values. Quoting their standard errors [Section 40:14] in the format

$${\rm{slope}} = b \pm \Delta b\quad \quad \quad {\rm{where}}\quad \Delta b = {\text{standard error in }}b$$
(7:14:5)

and

$${\rm{intercept}} = c \pm \Delta c\quad \quad \quad {\rm{where}}\quad \Delta c = {\text{standard error in }}c$$
(7:14:6)

is a succinct way of reporting the uncertainties associated with least squares determinations. The significance to be attached to these statements is that the probability is approximately 68% that the true slope lies between b − Δb and b + Δb. Similarly, there is a 68% probability that the true intercept lies in the range c ± Δc. The formulas giving these standard errors are

$$\Delta b = \sqrt {\;\frac{{n\sum {f^2 } - \left( {\sum f } \right)^2 }}{{n\sum {x^2 } - \left( {\sum x } \right)^2 }} - \left[ {\frac{{n\sum {xf} - \sum {x\sum f } }}{{n\sum {x^2 } - \left( {\sum x } \right)^2 }}} \right]^2 } = b\sqrt {\frac{1}{{r^2 }} - 1} $$
(7:14:7)

and

$$\Delta c = \Delta b\sqrt {\frac{{\sum {x^2 } }}{n}} = \frac{{\Delta b}}{2}\sqrt {\left( {x_1 + x_n } \right)^2 + \frac{{(n^2 - 1)h^2 }}{3}} $$
(7:14:8)
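
Continuing the lsq_line sketch above, the standard errors follow from the final member of 7:14:7 together with 7:14:8:

```python
import numpy as np

def lsq_errors(x, b, r):
    """Standard errors of equations 7:14:7 (final member) and 7:14:8."""
    db = b * np.sqrt(1.0 / r ** 2 - 1.0)
    dc = db * np.sqrt(np.sum(x * x) / len(x))
    return db, dc

# b, c, r = lsq_line(x, f)       # from the earlier sketch
# db, dc = lsq_errors(x, b, r)   # quote the results as b ± db and c ± dc
```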

A related, but simpler, problem is the construction of the best straight line through the points (x₁, f₁), (x₂, f₂), (x₃, f₃), ···, (xₙ, fₙ), with the added constraint that the line must pass through the point (x₀, f₀). In practical problems this obligatory point is often the x = 0, f = 0 origin. Equations 7:14:1 and 7:14:2 should not be used in these circumstances, though they often are. The appropriate replacements are

$$b = \frac{{\sum {(x - x_0 )(f - f_0 )} }}{{\sum {(x - x_0 )^2 } }}$$
(7:14:9)

and

$$c = f_0 - bx_0 $$
(7:14:10)
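
A sketch of the constrained fit of equations 7:14:9 and 7:14:10; with the default x₀ = f₀ = 0 it forces the line through the origin.

```python
import numpy as np

def lsq_line_through(x, f, x0=0.0, f0=0.0):
    """Best line constrained through (x0, f0): equations 7:14:9 and 7:14:10."""
    b = np.sum((x - x0) * (f - f0)) / np.sum((x - x0) ** 2)
    c = f0 - b * x0
    return b, c
```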

Equations 7:14:1−7:14:8 are based on the assumption that all data points are known with equal reliability, a condition that is not always valid. Variable reliability can be treated by assigning different weights to the points. If the value f₁ is more reliable than f₂, then a larger weight w₁ is assigned to the data pair (x₁, f₁) than the weight w₂ assigned to the pair (x₂, f₂). The weights then appear as multipliers in all summations, leading to the formulas

$$b = \frac{{\sum w \sum {wxf - \sum {wx} \sum {wf} } }}{{\sum w \sum {wx} ^2 - \left( {\sum {wx} } \right)^2 }}$$
(7:14:11)

and

$$c = \frac{{\sum {wf - b\sum {wx} } }}{{\sum w }}$$
(7:14:12)

for the slope and intercept. Only the relative weights are of import; the absolute values of w₁, w₂, w₃, ···, wₙ have no significance beyond this. In practice, one attempts to assign a weight wⱼ to the jth point that is inversely proportional to the square of the uncertainty in fⱼ. Notice that formulas 7:14:1 and 7:14:2 are the special cases of 7:14:11 and 7:14:12 in which all w’s are equal. Similarly, the formulas in 7:14:9 and 7:14:10 result from setting the weight of one point, (x₀, f₀), to be overwhelmingly greater than all the other weights, which are uniform.
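
Finally, a sketch of the weighted formulas 7:14:11 and 7:14:12; as noted above, each weight would typically be chosen inversely proportional to the square of the uncertainty in the corresponding f value.

```python
import numpy as np

def lsq_line_weighted(x, f, w):
    """Weighted least-squares slope and intercept: equations 7:14:11 and 7:14:12."""
    d = np.sum(w) * np.sum(w * x * x) - np.sum(w * x) ** 2
    b = (np.sum(w) * np.sum(w * x * f) - np.sum(w * x) * np.sum(w * f)) / d
    c = (np.sum(w * f) - b * np.sum(w * x)) / np.sum(w)
    return b, c

# With equal weights this reproduces lsq_line; with one overwhelming weight it
# approaches the constrained fit of equations 7:14:9 and 7:14:10.
```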