
14.1 Appendix 1: Least Squares Straight Line \( y = \alpha + \lambda x \). The Errors in \( \alpha \) and \( \lambda \)

Let us assume that we have performed a large number M of series of measurements, each consisting of N measurements, in which for various values of \( x_{i} \) we have determined the corresponding values of \( y_{i} \). The N values of x \( [x_{i}, (i = 1, 2, \ldots , N)] \) are kept the same in each series. Thus, for every value of \( x_{i} \) we have M values of \( y_{i} \). Assume that the errors are significant only in the values of \( y_{i} \) and that these values are mutually independent. We therefore have, for every value of \( x_{i} \), a distribution, which we assume to be normal, of M values of \( y_{i} \), with the same standard deviation \( \sigma \) about the real value of y which corresponds to the particular value \( x_{i} \) and is symbolized by \( Y_{i} \) (see Fig. 14.1).

Fig. 14.1 The real linear relation between x and y and the distributions of the results of the measurements \( y_{i} \) about the real values \( Y_{i} \) (of y) which correspond to the values \( x_{i} \)

If the real relationship between x and y is

$$ y = A +{\varLambda} x, $$
(14.1)

then

$$ Y_{i} = A +{\varLambda} x_{i} . $$
(14.2)

For each of the M series of N measurements \( (x_{i} ,y_{i} ) \) we may find, using the method of least squares, the values of \( \alpha \) and \( {\lambda} \) in the relation \( y = \alpha + \lambda {\kern 1pt} x \), which we assume to connect x and y. The mean value of \( \lambda \) over all the M series of measurements is \( \varLambda \) and the standard deviation of \( \lambda \) or the standard error in \( \lambda \) is \( \updelta\lambda \), where

$$ (\updelta\lambda )^{2} = \frac{1}{M}\sum\limits_{r = 1}^{M} {(\lambda_{r} -{\varLambda} )^{2} } . $$
(14.3)

Similarly, the mean value of the \( \alpha \)’s is A and the standard deviation of \( \alpha \) or the standard error in \( \alpha \) is \( \updelta\alpha \), where

$$ (\updelta\alpha )^{2} = \frac{1}{M}\sum\limits_{r = 1}^{M} {(\alpha_{r} - A)^{2} } . $$
(14.4)

The quantities \( \updelta\alpha \) and \( \updelta\lambda \) are to be determined.

Naturally, having performed only one experiment (consisting of only one series of N measurements), we have only one value of \( \alpha \) and one of \( \lambda \), which we consider to be the best estimates we have for A and \( \varLambda \), respectively. Let us define the new variable

$$ \chi \equiv x - \bar{x}, $$
(14.5)

where

$$ \bar{x} = \frac{1}{N}\sum\limits_{i = 1}^{N} {x_{i} } $$
(14.6)

is the mean of the values of x for our series of measurements. It is

$$ \sum\limits_{i = 1}^{N} {\chi_{i} } = \sum\limits_{i = 1}^{N} {(x_{i} - \bar{x}) = N\,\bar{x} - N\,\bar{x} = 0} . $$
(14.7)

Also, the quantity defined as

$$ D \equiv \sum\limits_{i = 1}^{N} {\chi_{i}^{2} } $$
(14.8)

is equal to

$$ D = \sum\limits_{i = 1}^{N} {(x_{i} - \bar{x})^{2} } = \sum\limits_{i = 1}^{N} {x_{i}^{2} - 2\bar{x}} \;\sum\limits_{i = 1}^{N} {x_{i} + } \bar{x}^{2} \;\sum\limits_{i = 1}^{N} 1 = \sum\limits_{i = 1}^{N} {x_{i}^{2} - N\,\bar{x}^{2} } . $$
(14.9)
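As a quick numerical check of Eqs. (14.7) and (14.9), the following Python sketch (an illustrative aid with arbitrarily chosen \( x_{i} \), not part of the derivation) evaluates both sides of the two identities:

```python
import numpy as np

# Arbitrary abscissae x_i (kept the same in every series of measurements)
x = np.array([1.0, 2.5, 3.0, 4.5, 7.0])
N = len(x)

x_bar = x.mean()          # Eq. (14.6)
chi = x - x_bar           # Eq. (14.5): chi_i = x_i - x_bar

# Eq. (14.7): the chi_i sum to zero
sum_chi = chi.sum()

# Eq. (14.8) and the identity (14.9): D = sum(chi_i^2) = sum(x_i^2) - N*x_bar^2
D_direct = (chi**2).sum()
D_identity = (x**2).sum() - N * x_bar**2

print(sum_chi, D_direct, D_identity)
```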

The straight line \( y = \alpha + \lambda {\kern 1pt} x \) may now be expressed as

$$ y = \beta + \lambda {\kern 1pt} \chi , $$
(14.10)

where

$$ \beta \equiv \alpha + \lambda {\kern 1pt} \bar{x}. $$
(14.11)

The best values we have for \( \beta \) and \( \lambda \) are derived from the application of the method of least squares to the values \( (\chi_{i} , y_{i} ) \). From Eq. (11.15) we have

$$ (\beta - \lambda \,\bar{x}){\kern 1pt} N + \lambda {\kern 1pt} {\kern 1pt} \sum\limits_{i = 1}^{N} {(\chi_{i} + \bar{x})} = \sum\limits_{i = 1}^{N} {y_{i} } ,\quad \beta {\kern 1pt} N + \lambda \,\sum\limits_{i = 1}^{N} {\chi_{i} } = \sum\limits_{i = 1}^{N} {y_{i} } . $$
(14.12)

However, \( {\kern 1pt} {\kern 1pt} \sum\limits_{i = 1}^{N} {\chi_{i} } = 0 \) and so we have, finally,

$$ \beta = \frac{1}{N}\sum\limits_{i = 1}^{N} {y_{i} } = \bar{y}. $$
(14.13)

where \( \bar{y} \) is the mean value of the \( y_{i}. \)

From Eq. (11.16),

$$ (\beta - \lambda {\kern 1pt} \bar{x})\sum\limits_{i = 1}^{N} {(\chi_{i} + \bar{x})} + \lambda \,\sum\limits_{i = 1}^{N} {(\chi_{i} + \bar{x})^{2} } = \sum\limits_{i = 1}^{N} {(\chi_{i} + \bar{x})} y_{i} $$
(14.14)

Expanding and taking into account that \( {\kern 1pt} {\kern 1pt} \sum\limits_{i = 1}^{N} {\chi_{i} } = 0 \), we have

$$ N{\kern 1pt} \bar{x}\,(\beta - \bar{y}) + \lambda \,\sum\limits_{i = 1}^{N} {\chi_{i}^{2} } = \sum\limits_{i = 1}^{N} {\chi_{i} y_{i} } . $$
(14.15)

However, \( \beta - \bar{y} = 0 \) and so

$$ \lambda = \frac{1}{D}\sum\limits_{i = 1}^{N} {\chi_{i} y_{i} } , $$
(14.16)

with D defined by Eq. (14.8).

For each series of measurements, it is

$$ \lambda = \frac{{\chi_{1} }}{D}y_{1} + \frac{{\chi_{2} }}{D}y_{2} + \ldots + \frac{{\chi_{N} }}{D}y_{N} , $$
(14.17)

where the coefficients of \( y_{i} \) are the same for all series. Since the values \( y_{i} \) are mutually independent, we may write for the error in \( \lambda \),

$$ (\updelta\,\lambda )^{2} = \left( {\frac{{\chi_{1} }}{D}} \right)^{{{\kern 1pt} 2}} (\updelta\,y_{1} )^{2} + \left( {\frac{{\chi_{2} }}{D}} \right)^{{{\kern 1pt} 2}} (\updelta\,y_{2} )^{2} + \ldots + \left( {\frac{{\chi_{N} }}{D}} \right)^{{{\kern 1pt} 2}} (\updelta\,y_{N} )^{2} . $$
(14.18)

We have assumed, however, that the standard deviations of the \( y_{i} \) values are the same. Thus,

$$ (\updelta\,y_{1} )^{2} = (\updelta\,y_{2} )^{2} = \ldots = \sigma^{2} $$
(14.19)

and, therefore,

$$ (\updelta\,\lambda )^{2} = \frac{{\sigma^{2} }}{{D^{2} }}\sum\limits_{i = 1}^{N} {\chi_{i}^{2} } = \frac{{\sigma^{2} }}{{D^{2} }}D = \frac{{\sigma^{2} }}{D}. $$
(14.20)

For \( \beta \) we have \( \beta = \frac{1}{N}\sum\limits_{i = 1}^{N} {y_{i} } \) and so

$$ (\updelta\,\beta )^{2} = \frac{1}{{N^{2} }}(\updelta\,y_{1} )^{2} + \frac{1}{{N^{2} }}(\updelta\,y_{2} )^{2} + \ldots + \frac{1}{{N^{2} }}(\updelta\,y_{N} )^{2} = N\frac{1}{{N^{2} }}\sigma^{2} = \frac{{\sigma^{2} }}{N}. $$
(14.21)

What we want to find is the error \( \updelta\,\alpha \) in \( \alpha = \beta - \lambda {\kern 1pt} \bar{x} \), which is given by the relation

$$ (\updelta\,\alpha )^{2} = (\updelta\,\beta )^{2} + (\bar{x})^{2} (\updelta\,\lambda )^{2} $$
(14.22)

or

$$ (\updelta\,\alpha )^{2} = \left( {\frac{1}{N} + \frac{{(\bar{x})^{2} }}{D}} \right)\,{\kern 1pt} \sigma^{2} . $$
(14.23)
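The results (14.20), (14.21) and (14.23) may be verified by simulating the M series of measurements described at the beginning of the appendix. The following Python sketch (the values of A, \( \varLambda \), \( \sigma \) and the sample sizes are arbitrary choices for the illustration) fits each simulated series using Eqs. (14.13) and (14.16) and compares the scatter of the estimates over the series with the theoretical standard errors:

```python
import numpy as np

rng = np.random.default_rng(0)

# True line y = A + Lambda*x and the (common) standard deviation sigma of the y_i
A_true, Lambda_true, sigma = 2.0, 0.5, 0.3
x = np.arange(10.0)                  # N = 10 abscissae, the same in every series
N = len(x)
M = 40000                            # number of simulated series

x_bar = x.mean()
chi = x - x_bar
D = (chi**2).sum()

# Simulate M series: y_ri = Y_i + e_ri, with e_ri normally distributed
y = A_true + Lambda_true * x + rng.normal(0.0, sigma, size=(M, N))

# Least-squares estimates for each series, Eqs. (14.13) and (14.16)
beta = y.mean(axis=1)                # beta_r = mean of the y_i
lam = (y * chi).sum(axis=1) / D      # lambda_r

# Scatter over the M series versus the theoretical standard errors
print(lam.std(), np.sqrt(sigma**2 / D))                      # Eq. (14.20)
print(beta.std(), np.sqrt(sigma**2 / N))                     # Eq. (14.21)
alpha = beta - lam * x_bar
print(alpha.std(), np.sqrt((1/N + x_bar**2/D) * sigma**2))   # Eq. (14.23)
```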

We will now evaluate \( \sigma \). If the real value of \( \beta \) is B, then

$$ Y_{i} = {\varLambda} \chi_{i} + B. $$
(14.24)

Summing all the \( Y_{i} \), and since it is \( {\kern 1pt} {\kern 1pt} \sum\limits_{i = 1}^{N} {\chi_{i} } = 0 \), we have

$$ B = \frac{1}{N}\sum\limits_{i = 1}^{N} {Y_{i} } . $$
(14.25)

Also, multiplying Eq. (14.24) by \( \chi_{i} \) and summing, we get

$$ {\varLambda} = \frac{1}{D}\sum\limits_{i = 1}^{N} {\chi_{i} Y_{i} } . $$
(14.26)

Figure 14.2 shows the various quantities which will be used in the analysis of the problem.

Fig. 14.2 The various quantities used in the evaluation of the errors \( \updelta\alpha \) and \( \updelta\lambda \)

The error in the ith measurement is

$$ e_{i} = y_{i} - Y_{i} = y_{i} - ({\varLambda} \chi_{i} + B). $$
(14.27)

The least squares straight line gives for \( x_{i} \) the value of \( \lambda {\kern 1pt} \chi_{i} + \beta \) for \( y \). The residual is, therefore,

$$ d_{i} = y_{i} - (\lambda \,\chi_{i} + \beta ), $$
(14.28)

as shown in Fig. 14.2. The values of \( e_{i} \) are not known but those of \( d_{i} \) are. We define the quantity

$$ s^{2} \equiv \sum\limits_{i = 1}^{N} {d_{i}^{2} } = \sum\limits_{i = 1}^{N} {[y_{i} - (\lambda \,\chi_{i} + \beta )]^{2} } . $$
(14.29)

From Eqs. (14.27) and (14.28) we have

$$ d_{i} = e_{i} - \left[ {(\lambda -{\varLambda} )\chi_{i} + (\beta - B)} \right]{\kern 1pt} . $$
(14.30)

From Eqs. (14.13) and (14.25),

$$ \beta - B = \frac{1}{N}\sum\limits_{i = 1}^{N} {e_{i} } $$
(14.31)

and from Eqs. (14.16) and (14.26),

$$ \lambda -{\varLambda} = \frac{1}{D}\sum\limits_{i = 1}^{N} {\chi_{i} (y_{i} - Y_{i} )} = \frac{1}{D}\sum\limits_{i = 1}^{N} {\chi_{i} e_{i} } . $$
(14.32)

Substituting from Eqs. (14.31) and (14.32) in Eq. (14.30),

$$ d_{i} = e_{i} - \chi_{i} \frac{1}{D}\sum\limits_{i = 1}^{N} {\chi_{i} e_{i} } - \frac{1}{N}\sum\limits_{i = 1}^{N} {e_{i} } . $$
(14.33)

We verify, in passing, that it is

$$ \begin{aligned} \sum\limits_{i = 1}^{N} {d_{i} } & = \sum\limits_{i = 1}^{N} {e_{i} } - \frac{1}{D}\left( {\sum\limits_{i = 1}^{N} {\chi_{i} } } \right)\;\left( {\sum\limits_{i = 1}^{N} {\chi_{i} e_{i} } } \right) - \frac{1}{N}\sum\limits_{i = 1}^{N} {\left( {\sum\limits_{i = 1}^{N} {e_{i} } } \right)} \\ & = \sum\limits_{i = 1}^{N} {e_{i} } - \frac{1}{D}\left( 0 \right)\left( {\sum\limits_{i = 1}^{N} {\chi_{i} e_{i} } } \right) - \frac{1}{N}N\,\left( {\sum\limits_{i = 1}^{N} {e_{i} } } \right) = 0. \\ \end{aligned} $$
(14.34)

Squaring \( d_{i} \),

$$ \begin{aligned} d_{i}^{2} & = e_{i}^{2} + \frac{{\chi_{i}^{2} }}{{D^{2} }}\left( {\sum\limits_{i = 1}^{N} {\chi_{i} e_{i} } } \right)^{2} + \frac{1}{{N^{2} }}\left( {\sum\limits_{i = 1}^{N} {e_{i} } } \right)^{2} \\ & \quad - 2\frac{{e_{i} \chi_{i} }}{D}\sum\limits_{i = 1}^{N} {\chi_{i} e_{i} } - 2\frac{{e_{i} }}{N}\sum\limits_{i = 1}^{N} {e_{i} } + 2\frac{{\chi_{i} }}{DN}\left( {\sum\limits_{i = 1}^{N} {\chi_{i} e_{i} } } \right)\;\left( {\sum\limits_{i = 1}^{N} {e_{i} } } \right) \\ \end{aligned} $$
(14.35)

and summing over all i’s

$$ \begin{aligned} \sum\limits_{i = 1}^{N} {d_{i}^{2} } & = \sum\limits_{i = 1}^{N} {e_{i}^{2} } + \frac{1}{{D^{2} }}\left( {\sum\limits_{i = 1}^{N} {\chi_{i}^{2} } } \right)\;\left( {\sum\limits_{i = 1}^{N} {\chi_{i} e_{i} } } \right)^{2} + \frac{1}{{N^{2} }}\sum\limits_{i = 1}^{N} {\left( {\sum\limits_{i = 1}^{N} {e_{i} } } \right)^{2} } \\ & \quad - \frac{2}{D}\left( {\sum\limits_{i = 1}^{N} {\chi_{i} e_{i} } } \right)\;\left( {\sum\limits_{i = 1}^{N} {\chi_{i} e_{i} } } \right) - \frac{2}{N}\left( {\sum\limits_{i = 1}^{N} {e_{i} } } \right)\;{\kern 1pt} \left( {\sum\limits_{i = 1}^{N} {e_{i} } } \right) \\&\quad + \frac{2}{DN}\left( {\sum\limits_{i = 1}^{N} {\chi_{i} } } \right)\;{\kern 1pt} \left( {\sum\limits_{i = 1}^{N} {\chi_{i} e_{i} } } \right)\;\left( {\sum\limits_{i = 1}^{N} {e_{i} } } \right) \\ \end{aligned} $$
(14.36)

Since it is \( \sum\limits_{i = 1}^{N} {\chi_{i}^{2} } = D \) and \( \sum\limits_{i = 1}^{N} {\chi_{i} } = 0 \), the last relation simplifies to

$$ \sum\limits_{i = 1}^{N} {d_{i}^{2} } = \sum\limits_{i = 1}^{N} {e_{i}^{2} } + \frac{1}{D}\,{\kern 1pt} \left( {\sum\limits_{i = 1}^{N} {\chi_{i} e_{i} } } \right)^{2} + \frac{1}{N}\left( {\sum\limits_{i = 1}^{N} {e_{i} } } \right)^{2} - \frac{2}{D}\left( {\sum\limits_{i = 1}^{N} {\chi_{i} e_{i} } } \right)^{2} - \frac{2}{N}\left( {\sum\limits_{i = 1}^{N} {e_{i} } } \right)^{2} + 0 $$

or

$$ \sum\limits_{i = 1}^{N} {d_{i}^{2} } = \sum\limits_{i = 1}^{N} {e_{i}^{2} } - \frac{1}{D}\left( {\sum\limits_{i = 1}^{N} {\chi_{i} e_{i} } } \right)^{2} - \frac{1}{N}\left( {\sum\limits_{i = 1}^{N} {e_{i} } } \right)^{2} . $$
(14.37)

This relation holds for every one of the M series of measurements. Using indices on the quantities of Eq. (14.37), we have, for \( r = 1, 2, \ldots , M \), the relations

$$ \sum\limits_{i = 1}^{N} {d_{ri}^{2} } = \sum\limits_{i = 1}^{N} {e_{ri}^{2} } - \frac{1}{D}\;\left( {\sum\limits_{i = 1}^{N} {\chi_{i} e_{ri} } } \right)^{2} - \frac{1}{N}\left( {\sum\limits_{i = 1}^{N} {e_{ri} } } \right)^{2} , $$
(14.38)

where we have left D and \( \chi_{i} \) without r indices since, as mentioned at the start, the same values of \( x_{i} \) are used in all M series of measurements. We will now sum the M Eqs. (14.38). The sums of the various terms are:

$$ \begin{aligned} \sum\limits_{r = 1}^{M} {\sum\limits_{i = 1}^{N} {d_{ri}^{2} } } & = (d_{11}^{2} + d_{12}^{2} + \ldots + d_{1N}^{2} ) + (d_{21}^{2} + d_{22}^{2} + \ldots + d_{2N}^{2} ) \\ &\quad + \ldots + (d_{M1}^{2} + d_{M2}^{2} + \ldots + d_{MN}^{2} ) \\ & = (d_{11}^{2} + d_{21}^{2} + \ldots + d_{M1}^{2} ) + (d_{12}^{2} + d_{22}^{2} + \ldots + d_{M2}^{2} ) \\ &\quad + \ldots + (d_{1N}^{2} + d_{2N}^{2} + \ldots + d_{MN}^{2} ) \\ \end{aligned} $$
(14.39)
$$ \sum\limits_{r = 1}^{M} {\left( {\sum\limits_{i = 1}^{N} {e_{ri}^{2} } } \right) = MN\,\sigma^{2} } $$
(14.40)
$$ \sum\limits_{r = 1}^{M} {\left( {\sum\limits_{i = 1}^{N} {\chi_{i} e_{ri} } } \right)^{2} = } \sum\limits_{r = 1}^{M} {\left( {\sum\limits_{i = 1}^{N} {\chi_{i}^{2} e_{ri}^{2} } + 2\sum\limits_{\begin{subarray}{l} i,j \\ i \ne j \end{subarray} }^{N} {\chi_{i} \chi_{j} e_{ri} e_{rj} } } \right)} $$
(14.41)

For given i and j (and, therefore, also given \( \chi_{i} \,\chi_{j} \)), the sum of the terms \( \chi_{i} \,\chi_{j} \,e_{ri} \,e_{rj} \) over all values of r tends to zero, since the quantities \( e_{rj} \) and \( e_{ri} \) are mutually independent. Thus,

$$ \sum\limits_{r = 1}^{M} {\left( {\sum\limits_{i = 1}^{N} {\chi_{i} e_{ri} } } \right)^{2} = } \,\chi_{1}^{2} \sum\limits_{r = 1}^{M} {e_{r1}^{2} } + \chi_{2}^{2} \sum\limits_{r = 1}^{M} {e_{r2}^{2} } + \ldots = (\chi_{1}^{2} + \chi_{2}^{2} + \ldots )M\sigma^{2} = DM\sigma^{2} . $$
(14.42)

Also,

$$ \sum\limits_{r = 1}^{M} {\left( {\sum\limits_{i = 1}^{N} {e_{ri} } } \right)^{2} = } \sum\limits_{r = 1}^{M} {\left( {\sum\limits_{i = 1}^{N} {e_{ri}^{2} } + 2\sum\limits_{\begin{subarray}{l} i,j \\ i \ne j \end{subarray} }^{N} {e_{ri} e_{rj} } } \right)} = \sum\limits_{r = 1}^{M} {\left( {\sum\limits_{i = 1}^{N} {e_{ri}^{2} } + 2 \times 0} \right)} = \sum\limits_{r = 1}^{M} {\left( {N\sigma^{2} } \right)} = MN\sigma^{2} . $$
(14.43)

Substituting from Eqs. (14.39) to (14.43) in the sum of Eqs. (14.38) over r, we find that it is

$$ \sum\limits_{r = 1}^{M} {\sum\limits_{i = 1}^{N} {d_{ri}^{2} = MN\,\sigma^{2} - M\,\sigma^{2} - M\,\sigma^{2} } .} $$
(14.44)

We do not know the M sums \( \sum\limits_{i = 1}^{N} {d_{ri}^{2} } \) exactly. We have only one estimate, the sum \( \sum\limits_{i = 1}^{N} {d_{i}^{2} } \), which is derived from our N measurements. The best estimate we can make is, therefore,

$$ \sum\limits_{r = 1}^{M} {\sum\limits_{i = 1}^{N} {d_{ri}^{2} } } \approx M\sum\limits_{i = 1}^{N} {d_{i}^{2} } $$
(14.45)

and Eq. (14.44) finally gives

$$ M\sum\limits_{i = 1}^{N} {d_{i}^{2} } = MN\,\sigma^{2} - 2M\,\sigma^{2} $$
(14.46)

or

$$ \sigma^{2} = \frac{1}{N - 2}{\kern 1pt} {\kern 1pt} \sum\limits_{i = 1}^{N} {d_{i}^{2} } , $$
(14.47)
$$ \sigma_{y} \equiv \sigma = \sqrt {\frac{1}{N - 2}\sum\limits_{i = 1}^{N} {(y_{i} - \alpha - \lambda x_{i} )^{2} } } . $$
(14.48)

Substituting in Eqs. (14.23) and (14.20), respectively, we have for the errors in \( \alpha \) and \( \lambda \),

$$ (\updelta\alpha )^{2} = \frac{1}{N - 2}\left( {\frac{1}{N} + \frac{{(\bar{x})^{2} }}{D}} \right)\sum\limits_{i = 1}^{N} {d_{i}^{2} } $$
(14.49)
$$ (\updelta\lambda )^{2} = \frac{1}{D}\frac{1}{N - 2}\sum\limits_{i = 1}^{N} {d_{i}^{2} } . $$
(14.50)

We adopt the notation

$$ \sum\limits_{i = 1}^{N} {x_{i} \equiv [x]} \quad \sum\limits_{i = 1}^{N} {y_{i} \equiv [y]} \quad \sum\limits_{i = 1}^{N} {x_{i}^{2} \equiv [x^{2} ]} \quad \sum\limits_{i = 1}^{N} {x_{i} y_{i} \equiv [xy]} \quad \sum\limits_{i = 1}^{N} {d_{i}^{2} \equiv [d]} . $$
(14.51)

Taking into account the fact that \( D = \sum\limits_{i = 1}^{N} {x_{i}^{2} - N\,\bar{x}^{2} } = [x^{2} ] - \frac{1}{N}[x]^{2} \) and \( (\bar{x})^{2} = \left( {\frac{1}{N}\sum\limits_{i = 1}^{N} {x_{i} } } \right)^{2} = \frac{{[x]^{2} }}{{N^{2} }} \), we have

$$ \sigma_{y}^{2} = \frac{[d]}{N - 2}{\kern 1pt} {\kern 1pt} , $$
(14.52)
$$ \updelta\alpha = \sigma_{y} {\kern 1pt} \sqrt {\frac{{[x^{2} ]}}{{N{\kern 1pt} \,[x^{2} ] - [x]^{2} }}} = \sqrt {\frac{[d]}{N - 2} \cdot \frac{{[x^{2} ]}}{{N\,[x^{2} ] - [x]^{2} }}} $$
(14.53)

and

$$ \updelta\lambda = \sigma_{y} {\kern 1pt} \sqrt {\frac{N}{{N\,[x^{2} ] - [x]^{2} }}} = \sqrt {\frac{[d]}{N - 2} \cdot {\kern 1pt} \frac{N}{{N\,[x^{2} ] - [x]^{2} }}{\kern 1pt} } . $$
(14.54)
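Equations (14.48), (14.53) and (14.54) translate directly into a short program. The Python sketch below (the function name straight_line_fit is our own) returns \( \alpha \), \( \lambda \), \( \updelta\alpha \), \( \updelta\lambda \) and \( \sigma_{y} \); for points lying exactly on a straight line the residuals, and hence the errors, vanish:

```python
import numpy as np

def straight_line_fit(x, y):
    """Least-squares fit y = alpha + lambda*x with standard errors.

    Implements Eqs. (14.48), (14.53) and (14.54); the name and interface
    are our own. Returns (alpha, lam, d_alpha, d_lam, sigma_y).
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    N = len(x)
    Sx, Sy = x.sum(), y.sum()              # [x], [y]
    Sxx, Sxy = (x**2).sum(), (x*y).sum()   # [x^2], [xy]
    Delta = N * Sxx - Sx**2                # N[x^2] - [x]^2

    lam = (N * Sxy - Sx * Sy) / Delta      # slope, equivalent to Eq. (14.16)
    alpha = (Sy - lam * Sx) / N            # intercept, alpha = beta - lam*x_bar

    d = y - alpha - lam * x                # residuals d_i
    sigma_y = np.sqrt((d**2).sum() / (N - 2))   # Eq. (14.48)

    d_alpha = sigma_y * np.sqrt(Sxx / Delta)    # Eq. (14.53)
    d_lam = sigma_y * np.sqrt(N / Delta)        # Eq. (14.54)
    return alpha, lam, d_alpha, d_lam, sigma_y

# Points lying exactly on y = 2 + 3x give the exact line and zero errors
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
print(straight_line_fit(x, 2 + 3*x))
```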

The question to be answered now is this: Given the relation \( y = \alpha + \lambda {\kern 1pt} x \) and the errors \( \updelta\alpha \) and \( \updelta\lambda \), what is the error in y, as this is evaluated for a given value of x? The first thought, which is to write the equation with the errors in its coefficients as

$$ y = (\alpha \pm\updelta\alpha ) + (\lambda \pm\updelta\lambda )\,x $$
(14.55)

and the error in y as

$$ \updelta{\kern 1pt} y = \sqrt {(\updelta\alpha )^{2} + (x\,\updelta\lambda )^{2} } , $$
(14.56)

would be wrong, because the magnitudes \( \alpha \) and \( \lambda \) are not mutually independent. This can be verified by the fact that

$$ \frac{1}{M}\sum\limits_{r = 1}^{M} {(\lambda_{r} -\varLambda )(\alpha_{r} - A)} = - \bar{x}(\updelta\,\lambda )^{2} , $$
(14.57)

whereas it would have to be equal to zero for mutually independent \( \alpha \) and \( \lambda \). On the other hand, \( \beta \) and \( \lambda \) are mutually independent magnitudes, as may be verified in a similar way. The correct equation for the straight line with errors is, therefore,

$$ y = (\beta \pm\updelta\beta ) + (\lambda \pm\updelta\lambda ){\kern 1pt} {\kern 1pt} (x - \bar{x}), $$
(14.58)

from which the error in y is found to be

$$ \updelta{\kern 1pt} y = \sqrt {(\updelta\beta )^{2} + [(x - \bar{x})\,\updelta\lambda ]^{2} } . $$
(14.59)

Since, according to Eq. (14.21), it is \( \updelta\beta = \sigma /\sqrt N \), and since the best estimate we have for \( \sigma \) is \( \sigma_{y} \), we take \( \updelta\beta = \sigma_{y} /\sqrt N \), and the error in y is:

$$ \updelta\,y = \frac{{\sigma_{y} }}{\sqrt N }\sqrt {1 + \frac{{N^{2} }}{{N\,[x^{2} ] - [x]^{2} }}(x - \bar{x})^{2} } . $$
(14.60)

The errors which are mutually independent and contribute to the error in y are: the error in the value of \( \beta = \bar{y} \), which determines the center K of the straight line (Fig. 14.2), and the error in the slope of the straight line, which determines its orientation as the line is considered to rotate about the center K. The erroneous Eq. (14.56) would mean that the error in y is determined by the error in the ordinate \( \alpha \) of the point of intersection T of the y-axis with the least squares straight line and the error \( \updelta\,\lambda \) in the slope of the line, as this would now be considered to rotate about the point T.
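A sketch of Eq. (14.60) in Python (the data values are our own illustrative choice) shows that the error \( \updelta y \) attains its minimum value \( \sigma_{y} /\sqrt N \) at \( x = \bar{x} \) and grows as x moves away from the center of the measurements:

```python
import numpy as np

# Illustrative data, roughly on a straight line (our own choice)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1, 10.9])
N = len(x)

x_bar = x.mean()
chi = x - x_bar
D = (chi**2).sum()
lam = (chi * y).sum() / D            # Eq. (14.16)
beta = y.mean()                      # Eq. (14.13)
alpha = beta - lam * x_bar

d = y - alpha - lam * x              # residuals
sigma_y = np.sqrt((d**2).sum() / (N - 2))      # Eq. (14.48)
Delta = N * (x**2).sum() - x.sum()**2          # N[x^2] - [x]^2

def delta_y(xv):
    """Error in y predicted at xv, Eq. (14.60)."""
    return sigma_y / np.sqrt(N) * np.sqrt(1 + N**2 * (xv - x_bar)**2 / Delta)

# Smallest at the center x_bar of the data, larger away from it
print(delta_y(x_bar), delta_y(x_bar + 3))
```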

14.2 Appendix 2: Dimensional Analysis

We know that a mathematical equation describing the relationship between physical quantities must not only lead to numerical equality when we substitute the numerical values of the quantities for their symbols. The equation must also be homogeneous regarding the units and, in general, the dimensions of its two sides. Dimensions are attributed to all physical quantities, starting from those considered to be the fundamental ones. In Mechanics, the fundamental dimensions are considered to be those of mass (M), length (L) and time (T). It is found that the definitions of other quantities, as well as the physical laws, can be expressed in terms of these three quantities. All masses, lengths and time intervals are measured by comparison with the prototypes defined for these three fundamental quantities. If we include electromagnetic phenomena, no additional dimension is required when using the c.g.s. system of units (electrostatic and electromagnetic systems of units), since both the electric charge and the magnetic force are defined in terms of the three fundamental quantities of Mechanics. In the S.I. system of units the ‘electric charge’ is introduced via the definition of the electric current, and so four fundamental dimensions are required: mass, length, time and electric charge (Q). We should mention that pure numbers (0, 1, π, e etc.) have no dimensions.

We may use the requirement that a mathematical relation between physical quantities should be homogeneous as regards dimensions, in order to extract conclusions regarding this relation, even if we do not know its form. This process is called dimensional analysis. Naturally, dimensional analysis is also used in order to check whether a relation between physical quantities is or is not the right one. For example, if a term of a sum has dimensions of L/T, i.e. (length)/(time), then, for the equation to be correct, all the other terms must also have these dimensions. Another very useful application of dimensional analysis is the derivation of functional relations connecting the physical quantities describing a certain phenomenon. The description of this method is the main purpose of this appendix [1].

14.2.1 The Dimensions of Physical Quantities

We denote by [X] the dimensions of a physical quantity X. Thus, according to what has been said for the three fundamental quantities of Mechanics, we have for mass m, length l and time t,

$$ [m] = {{\rm M,}}\quad [l] = {{\rm L,}}\quad [t] = {{\rm T}}. $$
(14.61)

For consistency of the relations connecting the dimensions of physical quantities, we must accept that for a pure number A it is

$$ [A] = 1. $$
(14.62)

Thus, for example, we have \( [ 1] = 1,[\sqrt 2 ] = 1,[\uppi] = 1 \) etc. Pure numbers are dimensionless.

The dimensions of a derivative can be found if we remember that

$$ \frac{{{\text{d}}^{n} y}}{{{\text{d}}x^{n} }} = \frac{\text{d}}{{{\text{d}}x}}\frac{\text{d}}{{{\text{d}}x}} \ldots \frac{\text{d}}{{{\text{d}}x}}(n\;{\text{times}}) \cdot y. $$
(14.63)

The operator \( \frac{\text{d}}{{{\text{d}}x}} \) means that we take a difference of y and divide it by a difference of x. Its effect on the dimensions is, therefore, to divide the dimensions of y by those of x, i.e. by [x]. Thus

$$ \left[ {\frac{{{\text{d}}^{n} y}}{{{\text{d}}x^{n} }}} \right] = \left[ {\frac{\text{d}}{{{\text{d}}x}}} \right]^{n} [y] = \frac{{ [ {\textit{y}]}}}{{[x]^{n} }}. $$
(14.64)

In short, we find the dimensions of a derivative by erasing the operators d or \( \partial \). For example, the dimensions of speed or velocity are

$$ [\upsilon ] = \left[ {\frac{{{\text{d}}{\kern 1pt} x}}{{{\text{d}}t}}} \right] = \frac{[x]}{[t]} = \frac{{\rm L}}{{\rm T}} = {{\rm LT}}^{ - 1} . $$
(14.65)

The dimensions of acceleration are

$$ [a] = \left[ {\frac{{{\text{d}}{\kern 1pt}^{2} x}}{{{\text{d}}t^{2} }}} \right] = \frac{[x]}{{[t^{2} ]}} = \frac{[x]}{{[t]^{2} }} = {{\rm LT}}^{ - 2} . $$
(14.66)

The dimensions of other quantities may be found either using the equations of their definition or through some physical law. For example,

$$ {\text{the}}\;{\text{dimensions}}\;{\text{of}}\;{\text{force}}\;{\text{are}}\;[F] = [ma] = \left[ {m\frac{{{\text{d}}^{2} x}}{{{\text{d}}t^{2} }}} \right] = [m]\frac{[x]}{{[t^{2} ]}} = {{\rm M}}\frac{{\rm L}}{{{{\rm T}}^{ 2} }} = {{\rm MLT}}^{ - 2} $$
(14.67)
$$ {\text{the}}\;{\text{dimensions}}\;{\text{of}}\;{\text{energy}}\;{\text{are}}\;[E] = [Fx] = [F][x] = ({{\rm MLT}}^{ - 2} )({{\rm L}}) = {{\rm ML}}^{ 2} {{\rm T}}^{ - 2} $$
(14.68)

and so on. Similarly, the dimensions of the Newtonian constant of gravitation may be found using Newton’s law of gravity

$$ F = G\frac{{m_{1} m_{2} }}{{r^{2} }}. $$
(14.69)

Thus, we have \( G = \frac{{Fr^{2} }}{{m_{1} m_{2} }} \) and

$$ [G] = \left[ {\frac{{Fr^{2} }}{{m_{1} m_{2} }}} \right] = \frac{{[F][r^{2} ]}}{{[m^{2} ]}} = \frac{{({{\rm MLT}}^{ - 2} )({{\rm L}}^{ 2} )}}{{{{\rm M}}^{ 2} }} = {{\rm M}}^{ - 1} {{\rm L}}^{ 3} {{\rm T}}^{ - 2} . $$
(14.70)

Continuing in this way we may create a table of the dimensions of all the physical quantities.
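This bookkeeping is easily mechanized. In the Python sketch below (our own construction, not part of the text), a dimension is represented by the triple of its exponents of M, L and T; multiplying quantities adds the triples and raising to a power scales them, reproducing Eqs. (14.66), (14.67), (14.68) and (14.70):

```python
import numpy as np

# A dimension is the triple of exponents of (M, L, T)
MASS, LENGTH, TIME = np.array([1, 0, 0]), np.array([0, 1, 0]), np.array([0, 0, 1])

def mul(*dims):
    # Multiplying physical quantities adds the exponents of their dimensions
    return sum(dims)

def power(dim, p):
    # Raising a quantity to the power p multiplies all exponents by p
    return p * dim

acceleration = mul(LENGTH, power(TIME, -2))            # [a] = LT^-2, Eq. (14.66)
force = mul(MASS, acceleration)                        # [F] = MLT^-2, Eq. (14.67)
energy = mul(force, LENGTH)                            # [E] = ML^2 T^-2, Eq. (14.68)
G_dim = mul(force, power(LENGTH, 2), power(MASS, -2))  # [G] = [F r^2 / m^2], Eq. (14.70)

print(force, energy, G_dim)
```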

14.2.2 The Dimensional Homogeneity of Equations

As we have already mentioned, the mathematical relations connecting physical quantities must be homogeneous regarding dimensions. As an example, we check the relation

$$ s = s_{0} + \upsilon_{0} t + \tfrac{1}{2}a \,t^{2} . $$
(14.71)

We have

$$ \begin{aligned} [s] & = [s_{0} ] + [\upsilon_{0} t] + [\tfrac{1}{2}a \,t^{2} ] \\ [s] & = [s_{0} ] + [\upsilon_{0} ][t] + [\frac{1}{2}][a ][t^{2} ] \\ \end{aligned} $$
$$ {{\rm L}} = {{\rm L}} + ({{\rm LT}}^{ - 1} )\,({{\rm T}}) + (1)\,({{\rm LT}}^{ - 2} )\,({{\rm T}}^{ 2} ) $$
(14.72)

or, finally,

$$ {{\rm L}} = {{\rm L}} + {{\rm L}} + {{\rm L}} = {{\rm L,}} $$
(14.73)

which of course must not be interpreted algebraically but must be considered as stating that ‘the length on the left is the sum of the three lengths on the right’, i.e. it is equal to a length, as it should. All equations involving physical quantities must pass this test if they are to be correct.

In many equations of Physics there appear functions such as the trigonometric, exponential, logarithmic etc. We have to examine the question of the dimensions of these functions. From the Maclaurin series for these functions,

$$ \sin z = z - \frac{1}{{3{\kern 1pt} !}}z^{3} + \frac{1}{5!}z^{5} - \ldots \quad \cos z = 1 - \frac{1}{{2{\kern 1pt} !}}z^{2} + \frac{1}{{4{\kern 1pt} !}}z^{4} - \ldots \quad {\text{e}}^{z} = 1 + \frac{1}{{1{\kern 1pt} !}}z + \frac{1}{2!}z^{2} + \frac{1}{{3{\kern 1pt} !}}z^{3} + \ldots , $$
(14.74)

it is obvious that both their arguments and the functions themselves must be dimensionless quantities. Otherwise, it would not be possible for different powers of the arguments to have the same dimensions and have dimensional homogeneity in the equation. The same is true for the logarithmic function. We can prove this in various ways. For example, if it is \( y = \ln x \), then \( x = {\text{e}}^{y} \), and, from what we have said about the exponential function, both x and y must be dimensionless quantities. Besides, if the arguments of these functions were not dimensionless numbers, the values of the functions would be different for different systems of units we might use.

Angles, defined as the ratio of two lengths (arc length divided by radius), are also dimensionless. They must, of course, be expressed in radians (rad).

As a consequence, in the expression for the potential energy of a simple harmonic oscillator consisting of a mass m at the free end of a spring with constant k oscillating with an angular frequency ω and an amplitude a,

$$ U(t) = \frac{1}{2}ka^{2} \,\sin^{2} (\omega t + \phi ), $$
(14.75)

it is immediately obvious that the sine is a pure number and so are \( \omega t \) and \( \phi \). If this is not the case, then we have made a mistake in the derivation of the equation. We must, however, be careful in those cases in which some symbols have been replaced by their numerical values. For example, if in the previous equation we substitute \( \omega = 1\;{\text{rad/s}} \) and \( k = 10^{6} \;{\text{N/m}} \), we will have the equation \( U(t) = \frac{1}{2}10^{6} a^{2} \sin^{2} ({\kern 1pt} t + \phi ) \), which would appear to us to be dimensionally wrong. If, however, we are informed of the substitutions, we realize that the equation is dimensionally correct and also that we must use S.I. units when substituting for t and a.

14.2.3 The Derivation of Relations Between Physical Quantities Using Dimensional Analysis

We will demonstrate the power of dimensional analysis, as well as its limitations, with certain examples of the derivation of relations between the physical quantities involved in some phenomena.

Example 14.1

A simple harmonic oscillator consists of a mass m fixed at the free end of a spring, whose constant is k. We wish to find the functional relation for the period T of the oscillator.

The first thing we must decide is which quantities are involved in the determination of the quantity we wish to find. We must be very careful since, if we include irrelevant quantities, we will not obtain a unique relation. On the other hand, if we omit some relevant quantity, we will derive a relation which will be wrong.

The quantities on which the oscillator’s period may depend are its mass m, the spring’s constant k and the amplitude of the oscillation, a. We assume that the oscillations take place on a smooth horizontal plane, so that gravity does not affect the period. We will investigate this matter later.

We assume that the period of the oscillator is given by a relation of the form

$$ T = A{\kern 1pt} a^{\alpha } m^{\beta } k^{\gamma } $$

where A is a numerical coefficient and the exponents \( \alpha ,\beta ,\gamma \) are to be determined.

We write the equation of the dimensions of the two sides of the relation. The spring constant, being a force per unit length, has dimensions \( [k] = {\rm{MT}}^{ - 2} \). Therefore

$$ [T] = [A][{\kern 1pt} a^{\alpha } ][m^{\beta } ][k^{\gamma } ] = [A][a]^{\alpha } [m]^{\beta } [k]^{\gamma } $$

or

$$ {\text{T}} = (1)({\text{L}})^{\alpha } ({\text{M}})^{\beta } ({\text{MT}}^{ - 2} )^{\gamma } \quad {\text{T}} = {\text{L}}^{\alpha } {\text{M}}^{\beta + \gamma } {\text{T}}^{ - 2\gamma } . $$

Since L, M and T are dimensionally independent of each other, we may equate their exponents, respectively, on the left and on the right. We thus find that, for the assumed relation to be valid, it must be:

$$ 1 = - 2\gamma ,\quad 0 = \alpha ,\quad 0 = \beta + \gamma $$

from which it follows that

$$ \alpha = 0,\quad \beta = \frac{1}{2},\quad \gamma = - \frac{1}{2}. $$

The period of the oscillator is, therefore,

$$ T = A\,{\kern 1pt} \sqrt {\frac{m}{k}} . $$

The method does not tell us what the value of A is. (From theory we know that \( A = 2\uppi \).)
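Equating the exponents amounts to solving a small linear system, which may also be done numerically. The Python sketch below (an illustrative aid) solves the system found above for \( \alpha ,\beta ,\gamma \):

```python
import numpy as np

# Equating the exponents of L, M and T in T = A a^alpha m^beta k^gamma
# gives a linear system for (alpha, beta, gamma):
#   L:  alpha            = 0
#   M:  beta + gamma     = 0
#   T:        -2*gamma   = 1
coeffs = np.array([[1.0, 0.0,  0.0],
                   [0.0, 1.0,  1.0],
                   [0.0, 0.0, -2.0]])
rhs = np.array([0.0, 0.0, 1.0])

alpha, beta, gamma = np.linalg.solve(coeffs, rhs)
print(alpha, beta, gamma)   # expect 0, 1/2, -1/2, giving T = A*sqrt(m/k)
```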

The procedure we followed cannot be considered to be a proof of the relation derived. Nor is the derived relation necessarily correct. If someone includes the mass of the Earth M in the relevant variables, the ratio \( m/M \), which is dimensionless, could appear in any of an infinite number of forms, without violating the dimensional homogeneity of the equation. For example, we could have, among others, the expressions

$$ T = A\,{\kern 1pt} \sqrt {\frac{m}{k}\left( {\frac{m}{M}} \right)} ,\quad T = A\,\sqrt {\frac{m}{k}\left( {1 + \frac{m}{M}} \right)} ,\quad T = A\,\sqrt {\frac{m}{k}} \left( {\frac{m}{M}} \right)^{{7\uppi}} ,\quad T = A\,{\kern 1pt} \sqrt {\frac{m}{k}} \,{\kern 1pt} {\text{e}}^{m/M} . $$

Some of these equations may appear highly improbable or even ugly but this is no reason for us to reject them (Dirac’s advice that ‘it is more important to have beauty in one’s equations than to have them fit experiment’ is not always easy to apply in practice!). In the final analysis, if theory cannot help us, experiment will show us which of these equations is the acceptable one.

If we include the acceleration of gravity in the magnitudes determining T, and we set \( T = A{\kern 1pt} a^{\alpha } m^{\beta } k^{\gamma } g^{\delta } \), it follows that \( [T] = [A][{\kern 1pt} a]^{\alpha } [m]^{\beta } [k]^{\gamma } [g]^{\delta } \) or \( {\text{T}} = (1)({\text{L}})^{\alpha } ({\text{M}})^{\beta } ({\text{MT}}^{ - 2} )^{\gamma } ({\text{LT}}^{ - 2} )^{\delta } \). Finally, \( {\text{T}} = {\text{L}}^{\alpha + \delta } {\text{M}}^{\beta + \gamma } {\text{T}}^{ - 2\gamma - 2\delta } \). Equating the exponents of L, M and T, respectively, left and right, we find that, for the assumed relation to be valid, it must be:

$$ 1 = - 2\gamma - 2\delta ,\quad 0 = \alpha + \delta ,\quad 0 = \beta + \gamma . $$

We now have three equations with four unknowns and, therefore, no unique solution. We may express the exponents \( \beta , \gamma \) and \( \delta \) in terms of \( \alpha \):

$$ \beta = \frac{1}{2} - \alpha ,\quad \gamma = \alpha - \frac{1}{2},\quad \delta = - \alpha . $$

For each value of \( \alpha \) we have a different solution. If by experiment we prove that the period of the oscillator does not depend on the amplitude of the oscillations, then it is \( \alpha = 0 \) and, therefore, \( \beta = \frac{1}{2},\gamma = - \frac{1}{2},\delta = 0 \). The independence of the period of the oscillation from the amplitude of the oscillations leads to the conclusion that the period does not depend on the acceleration of gravity. All these, of course, with the reservations already expressed.

Example 14.2

The Cepheid variables are stars whose luminosities vary periodically due to the expansion and contraction of their radii. There is a relation between the period of the pulsations of such a star and the absolute luminosity of the star and this makes the Cepheid variables useful in the determination of distances. What is the relation for the period of the variation of the luminosity of the Cepheid variables?

We suspect that the period depends on the radius and the mass of the star. Since, obviously, the gravitational forces must play an important role in the whole process, supplying the restoring forces during the oscillations, we must also include the Newtonian constant of gravitation in the quantities to be taken into account in the dimensional analysis. Let the period of the oscillations be given by a relation of the form

$$ T = A\,G^{\alpha } R^{\beta } M^{\gamma } . $$

The dimensions are

$$ [T] = [A]\,[G]^{\alpha } [R]^{\beta } [M]^{\gamma } \Rightarrow {\text{T}} = \,[{\text{M}}^{ - 1} {\text{L}}^{ 3} {\text{T}}^{ - 2} ]^{\alpha } [{\text{L}}]^{\beta } [{\text{M}}]^{\gamma } = {\text{M}}^{ - \alpha + \gamma } {\text{L}}^{ 3\alpha + \beta } {\text{T}}^{ - 2\alpha } . $$

Therefore, \( 1 = - 2\alpha ,0 = 3\alpha + \beta ,0 = - \alpha + \gamma \) and \( \alpha = - \frac{1}{2},\beta = \frac{3}{2},\gamma = - \frac{1}{2} \).

Thus, \( T = A\,\sqrt {\frac{{R^{3} }}{GM}} \). In terms of the star’s density \( \rho \), it is \( T = A^{{\prime }} /\sqrt {G\rho } \) where \( A^{{\prime }} \) is a new constant. The theoretical relation found by Sterne is \( T = \sqrt {6\pi \beta } /\sqrt {G\rho } \), where \( \beta \) is a numerical parameter which depends on the mode of oscillation and on the ratio of the specific heats of the stellar material, i.e. two dimensionless quantities. The agreement with observation is very good.
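As a rough numerical illustration (not from the text), we may evaluate \( \sqrt{R^{3}/GM} \) for the Sun; the round values of G, the solar mass and the solar radius used below are assumptions of this sketch, and the dimensionless constant A is of order unity:

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2, Newtonian constant of gravitation
M_sun = 1.989e30   # kg, solar mass (rounded)
R_sun = 6.957e8    # m, solar radius (rounded)

# Characteristic pulsation timescale sqrt(R^3 / (G M)), with A ~ 1
T = math.sqrt(R_sun**3 / (G * M_sun))
print(f"T ~ {T:.0f} s")  # of the order of half an hour
```

This is the dynamical timescale of the star; denser stars pulsate faster, as the \( 1/\sqrt{G\rho } \) dependence requires.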

In those cases in which the absolute temperature Τ appears in the assumed expression, it is always taken in the product kT with Boltzmann’s constant k. This converts the temperature into an energy, whose dimensions are known.

Example 14.3

Using dimensional analysis, derive Stefan’s law, which gives the amount of energy emitted per unit time and per unit area, Φ, from a body at absolute temperature Τ.

We assume that, apart from the temperature, universal constants related to electromagnetic radiation may appear in the relation, i.e. the speed of light c and Planck’s constant h. Boltzmann’s constant will be taken together with the absolute temperature in the product kT.

The amount of energy emitted per unit time per unit area has dimensions

$$ [\Phi ] = [{\text{energy}}]/([{\text{time}}][{\text{area}}]) = ( {\text{ML}}^{ 2} {\text{T}}^{ - 2} )/({\text{T}})({\text{L}}^{ 2} ) = {\text{MT}}^{ - 3} . $$

We assume that the required relation is of the form

$$ \Phi = A\,c^{\alpha } h^{\beta } (kT)^{\gamma } . $$

The dimensions give

$$ [\Phi ] = [A][c]^{\alpha } [h]^{\beta } [kT]^{\gamma } ,\quad {\text{MT}}^{ - 3} = ({\text{LT}}^{ - 1} )^{\alpha } ({\text{ML}}^{ 2} {\text{T}}^{ - 1} )^{\beta } ({\text{ML}}^{ 2} {\text{T}}^{ - 2} )^{\gamma } , $$
$$ 1 = \beta + \gamma ,\quad 0 = \alpha + 2\beta + 2\gamma ,\quad - 3 = - \alpha - \beta - 2\gamma \quad {\text{or}}\quad \alpha = - 2,\quad \beta = - 3,\quad \gamma = 4 $$

from which we have

$$ \Phi = A\,{\kern 1pt} (kT)^{4} /c^{2} h^{3} . $$

Theory gives \( A = (2\uppi^{5} \,/\,15). \)
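The coefficient can be checked numerically; a minimal sketch, using the CODATA 2010 values of k, h and c quoted in Appendix 4 of this chapter:

```python
import math

k = 1.380_648_8e-23   # J/K, Boltzmann constant (CODATA 2010)
h = 6.626_069_57e-34  # J s, Planck constant (CODATA 2010)
c = 2.997_924_58e8    # m/s, speed of light (exact)

A = 2 * math.pi**5 / 15            # dimensionless constant from theory
sigma = A * k**4 / (c**2 * h**3)   # Stefan-Boltzmann constant
print(f"{sigma:.3e} W m^-2 K^-4")  # ~5.670e-08
```

The result agrees with the tabulated value of the Stefan-Boltzmann constant, \( \sigma = 5.670\,373(21) \times 10^{-8}\;{\text{W}}\,{\text{m}}^{-2}\,{\text{K}}^{-4} \).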

Example 14.4

  1. (a)

    Find, using the method of dimensional analysis, a possible relation giving the period of a mathematical pendulum.

  2. (b)

    In an experiment, a mass m, having very small dimensions, was tied at the end of a very thin string, thus forming a pendulum. The measurements of the period Τ of the pendulum as a function of its length l gave the following results, for oscillations of small amplitude:

    l (m)  0.10  0.20  0.30  0.40  0.50  0.60  0.70  0.80  0.90
    Τ (s)  0.60  0.87  1.15  1.26  1.41  1.52  1.67  1.83  1.90

    Use these measurements in order to find the numerical coefficient in the mathematical relation found by dimensional analysis.

  1. (a)

    We assume a relation of the form \( T = A\,g^{\alpha } m^{\beta } l^{\gamma } \). Substituting the dimensions of the quantities, we have

    $$ \begin{aligned} {[}T{]} & = [A][g]^{\alpha } [m]^{\beta } [l]^{\gamma } \quad \Rightarrow \quad [{\text{T}}] = [{\text{LT}}^{ - 2} ]^{\alpha } [{\text{M}}]^{\beta } [{\text{L}}]^{\gamma } ,\quad {\text{T}} = {\text{L}}^{\alpha + \gamma } {\text{T}}^{{{ - 2}\alpha }} {\text{M}}^{\beta } , \\ 1 & = - 2\alpha ,\quad 0 = \alpha + \gamma ,\quad \;0 = \beta ,\quad \Rightarrow \quad \alpha = - \frac{1}{2},\quad \beta = 0,\quad \gamma = \frac{1}{2}. \\ \end{aligned} $$

    Therefore,

    $$ T = A\,{\kern 1pt} \sqrt {\frac{l}{g}} . $$
  2. (b)

    We will use the experimental results in order to determine the constant Α. If the relation found is correct, then plotting \( gT^{2} \) as a function of l must result in a straight line passing through the origin and having a slope equal to \( A^{2} \).

We see that this is indeed the case. The value of \( gT^{2} \) on the fitted straight line at \( l = 1.0\;{\text{m}} \) is \( 39 \pm 1\;{\text{m}} \), where the error \( \pm 1\;{\text{m}} \) is estimated from the straight lines of maximum and minimum slope allowed by the experimental points. Therefore, \( A^{2} = 39 \pm 1 \). The fractional error in \( A^{2} \) is 1/39 = 0.026; the fractional error in Α is half this value. Thus, \( A^{2} = 39 \times (1 \pm 0.026) \) and \( A = \sqrt {39} \times (1 \pm 0.013) \). Finally, \( A = 6.24 \pm 0.08 \). The real value, derived from theory, is \( A = 2\uppi = 6.283 \ldots \)
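The slope can also be obtained directly from the tabulated measurements by a least-squares fit of the line \( gT^{2} = A^{2} l \) through the origin; a sketch, assuming \( g = 9.81\;{\text{m/s}}^{2} \):

```python
import math

# Measurements from the table (l in m, T in s)
l = [0.10, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80, 0.90]
T = [0.60, 0.87, 1.15, 1.26, 1.41, 1.52, 1.67, 1.83, 1.90]
g = 9.81  # m/s^2, assumed

# Least-squares slope through the origin: A^2 = sum(l_i * g*T_i^2) / sum(l_i^2)
y = [g * t**2 for t in T]
A2 = sum(li * yi for li, yi in zip(l, y)) / sum(li**2 for li in l)
A = math.sqrt(A2)
print(f"A^2 = {A2:.1f}, A = {A:.2f}")  # A comes out close to 2*pi
```

The numerical fit agrees with the graphical estimate within its stated error.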

We have here an example in which the combination of dimensional analysis and experiment helped us discover a relation describing the behavior of a physical system.

In fact, if the mass is not a point mass but has some dimensions (i.e. we have a physical and not a mathematical pendulum), the relation for the period of the pendulum is shown by theory to be

$$ T = 2\uppi\sqrt {\frac{l}{g}\left( {1 + \frac{{I_{\text{C}} }}{{Ml^{2} }}} \right)} , $$

where l is the distance of the body’s center of mass from the fixed end of the string, Μ is the mass of the body and \( I_{\text{C}} \) is its moment of inertia with respect to the axis that passes through its center of mass and is normal to the plane of oscillation. Dimensional analysis does not give the dimensionless quantity \( \left( {1 + \frac{{I_{\text{C}} }}{{Ml^{2} }}} \right) \). Its existence can, however, be detected by experiment. Plotting the variation of the quantity \( \left( {\frac{T}{2\pi }} \right)^{2} \frac{g}{l} - 1 \) as a function of l, we will have the curve

$$ \left( {\frac{T}{2\pi }} \right)^{2} \frac{g}{l} - 1 = \frac{{a^{2} }}{{l^{2} }}, $$

where a is a length characteristic of the body (its radius of gyration), which is equal to zero for a mathematical pendulum. If at the end of the string we have a solid sphere of radius R, then it is \( a = \frac{2}{5}R. \)

Plotting the quantity \( y = \left( {\frac{T}{{2\uppi}}} \right)^{2} \frac{g}{l} - 1 \) as a function of \( x = \frac{1}{{l^{2} }} \), we will have the straight line \( y = \left( {\frac{2}{5}R} \right)^{2} x \), from which we can determine, experimentally, the dependence on R. Plotting y as a function of x in fact brings out the correction (residue) to the result of dimensional analysis. The deviation of y from zero shows up at large values of x, i.e. at small values of l. If in the experiment \( R = 1\;{\text{cm}} \), we expect the straight line \( y = \frac{4}{25}x \) (with x given in \( {\text{cm}}^{ - 2} \)).

l (cm)                               2      3      4       5       6       10
\( x\;({\text{cm}}^{ - 2} ) \)       0.250  0.111  0.0625  0.040   0.0278  0.010
y                                    0.040  0.018  0.010   0.0064  0.0044  0.0016

It appears that, in order to detect experimentally the presence of the term \( \left( {1 + \frac{{I_{\text{C}} }}{{Ml^{2} }}} \right) \) with measurements at small values of l, an accuracy of the order of 1% is necessary in the measurements of Τ and l.
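The slope of y against x can likewise be extracted from the tabulated values by a least-squares fit through the origin; a sketch (the slope should come out close to the 4/25 = 0.16 expected from the text):

```python
# Tabulated values: x = 1/l^2 in cm^-2, y = (T/2pi)^2 * g/l - 1
x = [0.250, 0.111, 0.0625, 0.040, 0.0278, 0.010]
y = [0.040, 0.018, 0.010, 0.0064, 0.0044, 0.0016]

# Least-squares slope of a straight line through the origin
slope = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi**2 for xi in x)
print(f"slope = {slope:.3f} cm^2")  # close to 0.16
```

The fitted slope gives \( a^{2} \) directly, in cm².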

If there is a dependence of the period of the pendulum on the amplitude of the oscillations, this will not be disclosed by dimensional analysis. The amplitude of the oscillations, \( \theta_{0} \), is an angle, which is a dimensionless quantity. If it is

$$ T = 2\uppi\sqrt {\frac{l}{g}}\,\,\, f(\theta_{0} ), $$

where \( f(\theta_{0} ) \) is a function of the amplitude of the oscillations, the existence of this function may be detected by measuring Τ for different amplitudes of oscillation. Theory gives

$$ f(\theta_{0} ) = 1 + \frac{1}{4}{ \sin }^{2} \frac{{\theta_{0} }}{2} + \frac{9}{64}{ \sin }^{4} \frac{{\theta_{0} }}{2} + \ldots . $$
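The size of this correction is easy to evaluate; a minimal sketch keeping the three terms of the series quoted above:

```python
import math

def f(theta0):
    """First three terms of the amplitude correction series (theta0 in rad)."""
    s = math.sin(theta0 / 2)
    return 1 + s**2 / 4 + 9 * s**4 / 64

# At an amplitude of 30 degrees the period is longer by about 1.7 %
print(f"{f(math.radians(30)):.4f}")
```

The effect is small, which explains why measurements of moderate accuracy fail to reveal it.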

In this example we demonstrated the capabilities as well as the limitations of the method of dimensional analysis.

Problems

  1. 14.1

    Using dimensional analysis, derive Kepler’s third law, which relates the period of revolution of a planet about the Sun, Τ, with its mean distance from the Sun, R. The force of gravity (i.e. the constant G) and, therefore, the mass of the Sun, Μ, must also be taken into account.

  2. 14.2

Find, using dimensional analysis, the ideal-gas law, which relates the pressure Ρ with the absolute temperature Τ of a quantity of gas with \( n_{0} \) mol per unit volume. It should be noted that the number of moles, being a pure number, has no dimensions and, therefore, the dimensions of \( n_{0} \) are those of \( ({\text{volume}})^{ - 1} \).

  3. 14.3

    Find, using dimensional analysis, the expression giving the volumetric flow rate Q (= volume per unit time) of a fluid through a pipe. Assume that the flow rate possibly depends on the radius a of the pipe, the density ρ and the viscosity η of the fluid and on the pressure gradient, \( \Delta p/\Delta x \), between the ends of the pipe. (The dimensions of η are \( [\eta ] = {\text{ML}}^{ - 1} {\text{T}}^{ - 1} \)).

  4. 14.4

    A string having a linear density \( \lambda \) (mass per unit length) is stretched by a tension (force) F. Using dimensional analysis, find the form of the relation giving the speed \( \upsilon \) of transverse waves along the string.

    If Young’s modulus E of the string’s material is included in the quantities on which the speed \( \upsilon \) depends, does the solution change? (The dimensions of Ε are [Ε] = [Force] /[Area]).

  5. 14.5

    An electric charge Q is uniformly distributed in a sphere of radius R. Using dimensional analysis, find the form of the relation giving the electrostatic energy W of the charge distribution in S.I. units. The dimensions of the electric constant \( \varepsilon_{0} \) involved in electrostatic phenomena are \( [\varepsilon_{0} ] = {\text{M}}^{ - 1} {\text{L}}^{ - 3} {\text{T}}^{2} {\text{Q}}^{2} \), where Q is the dimension of electric charge.

    Would your result change if the charge is not uniformly distributed inside the sphere but is spread on its surface?

14.3 Appendix 3: The Use of Random Numbers in Finding Values x of a Variable x Which Are Distributed According to a Given Probability Density Function f(x)

We will discuss the following problem:

We have random numbers uniformly distributed in the interval [0, 1). How can they be made to correspond to the values x of the variable x in such a way that these have a distribution described by a given probability density function \( f(x) \)?

For example, if we have at our disposal Ν random numbers between 0 and 1, how can we obtain an equal number of x values which are normally distributed about a certain mean value with a given standard deviation?

Problems of this kind have to be solved in applications of the Monte Carlo method, either in Statistics, as was done in Chap. 3 and elsewhere in this book, or in many other applications of the method, e.g. in Physics.

Before we present the general theory for the problem, we should mention that the generation of random numbers with a given distribution may be achieved using various computer software programs. Using Microsoft Excel®, for example, random numbers may be produced which are distributed according to 7 different probability density functions (normal, Poisson, binomial etc.).

14.3.1 The Use of Random Numbers in Finding Values x of a Variable x Which Are Distributed According to a Given Probability Density Function f(x)

We assume that we are given the probability density function \( f(x) \) and the corresponding distribution function \( F(x) \) of a random variable x. These have been drawn in Fig. 14.3, using a common x-axis. We know that, by definition, \( f(x) \) takes non-negative values in the interval \( - \infty < x < \infty \) and that \( F(x) \) increases monotonically with x, assuming values between 0 and 1. The two functions are related via the expressions

$$ f(x) = \frac{{{\text{d}}F}}{{{\text{d}}x}}\quad {\text{and}}\quad F(x) = \int\nolimits_{{{\kern 1pt} {\kern 1pt} - \infty }}^{{{\kern 1pt} {\kern 1pt} {\kern 1pt} x}} {f(t)\,{\text{d}}t} . $$
(14.76)
Fig. 14.3
figure 3

The probability density function \( f(x) \) and the distribution function \( F(x) \) of the random variable x, plotted with a common x-axis. The method of making random numbers R to correspond to values of x is described in the text

With reference to Fig. 14.3, we divide the range of values of \( F(x) \) (0 to 1) into Ν equal sections, which we denote by the increasing number n, starting with \( n = 0 \) for \( F(x) = 0 \). We draw Ν straight lines parallel to the x-axis and at equal distances from each other. From the points of intersection of these with the \( F(x) \) curve we draw straight lines normal to the x-axis, which intersect this axis at the Ν points \( x_{n} \). From the relation connecting the functions \( f(x) \) and \( F(x) \), it follows that these lines divide the area under the curve \( f(x) \) into N equal parts. The points of intersection with the x-axis define the N intervals \( ( - \infty ,x_{1} ],(x_{1} ,x_{2} ], \ldots ,(x_{N - 1} ,\infty ) \). Since to these intervals there correspond equal areas between the x-axis and the curve \( f(x) \), a value x of the random variable x is equally probable to lie in any one of these intervals.

We see that, using this method of projection, points uniformly distributed on the n axis [or, equivalently, on the \( F(x) \) axis] are distributed on the x-axis according to the probability density \( f(x) \). The greater the number Ν, the better defined the position of each point on the x-axis. If we have many different random numbers R in the interval [0, 1), putting for each one of them

$$ F(x) = R $$
(14.77)

and finding the value of x using the method described above, we have an equal number of points on the x-axis which are distributed according to the probability density function \( f(x) \).

The geometrical procedure followed for finding the value of x for which \( F(x) = R \) is equivalent to the solution of this equation by the inversion of the function \( F(x) \), i.e.

$$ x(R) = F^{ - 1} (R). $$
(14.78)

For a given random number R in the interval [0, 1), in order to find the corresponding x-value we must solve Eq. (14.77) for x. We will demonstrate the method with some examples. In all examples, the random numbers used are the decimal digits of π (the first 50 000 digits).

Example 14.5

Use 10 000 five-digit random numbers in the interval [0, 1) in order to produce an equal number of values of x, which are uniformly distributed in the interval \( a \le x \le b \).

We are given that \( f(x) = \frac{1}{b - a} \) for \( a \le x \le b \) and \( f(x) = 0 \) outside this interval.

Obviously, \( f(x) \) is normalized. We find the distribution function

$$ F(x) = \int\nolimits_{a}^{x} {f(x)\,{\text{d}}x} = \int\nolimits_{a}^{x} {\frac{{{\text{d}}x}}{b - a}} = \frac{x - a}{b - a}, $$

which may be solved for x,

$$ x = a + (b - a)F(x). $$

Putting \( F(x) = R \), where R is a random number in the interval [0, 1), we have the corresponding value of x,

$$ x = a + (b - a)R. $$

We find the values of x which correspond to 10 000 random numbers, for the values \( a = 1, b = 3 \). The histogram below shows the results using a bin width of 0.1.
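The same transformation is immediate in code; a minimal sketch using Python’s `random` module (with a fixed seed) in place of the digits of π:

```python
import random

random.seed(0)
a, b = 1.0, 3.0

# x = a + (b - a)*R maps R, uniform on [0, 1), to x, uniform on [a, b)
xs = [a + (b - a) * random.random() for _ in range(10_000)]
print(f"min = {min(xs):.3f}, max = {max(xs):.3f}, mean = {sum(xs)/len(xs):.3f}")
```

All values lie in [a, b), and the sample mean is close to \( (a + b)/2 = 2 \), as expected for a uniform distribution.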

Example 14.6

Use 10 000 five-digit random numbers in the interval [0, 1) in order to produce an equal number of values of x, which are distributed according to the probability density function \( f(x) = \alpha \,{\text{e}}^{{ - \alpha {\kern 1pt} x}} \) in the interval \( 0 \le x < \infty \).

The function \( f(x) \) is normalized. We find the distribution function

$$ F(x) = \int\nolimits_{0}^{x} {f(x)\,{\kern 1pt} {\text{d}}x} = \int\nolimits_{0}^{x} {\alpha \,{\text{e}}^{{ - \alpha {\kern 1pt} x}} \,{\kern 1pt} {\text{d}}x} = \left[ { - {\text{e}}^{{ - \alpha {\kern 1pt} x}} } \right]_{{{\kern 1pt} {\kern 1pt} 0}}^{{{\kern 1pt} {\kern 1pt} x}} = 1 - {\text{e}}^{{ - \alpha {\kern 1pt} x}} . $$

The relation between \( f(x) \) and \( F(x) \), as well as the geometrical procedure used to make values of x correspond to the random numbers, are shown in the following figure.

Setting \( F(x) = R \) and solving for x, we have \( x = - \frac{1}{\alpha }\,\ln {\kern 1pt} (1 - R) \).

As an example, for the case of \( \alpha = 1 \), we find the values of x which correspond to 10 000 random numbers. Their histogram is shown in the figure below, with a bin width of 0.1.
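The inverse-transform step can be sketched as follows, again substituting Python’s uniform generator (with a fixed seed) for the digits of π:

```python
import math
import random

random.seed(1)
alpha = 1.0

# Inverse-transform sampling: x = -(1/alpha)*ln(1 - R), R uniform on [0, 1)
xs = [-math.log(1 - random.random()) / alpha for _ in range(10_000)]
print(f"sample mean = {sum(xs)/len(xs):.3f}")  # should be close to 1/alpha = 1
```

For an exponential distribution the mean is \( 1/\alpha \), so the sample mean provides a quick check of the sampler.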

Example 14.7

Use 10 000 random numbers in order to produce an equal number of values of x, which are normally distributed about a mean of \( \mu_{x} = 0 \) with a standard deviation of \( \sigma_{x} = 1 \).

The given probability density function is \( f(x) = \frac{1}{{\sqrt {2\uppi} }}{\text{e}}^{{ - x^{2} /2}} \).

The corresponding distribution function is \( F(x) = \frac{1}{{\sqrt {2\uppi} }}\int_{ - \infty }^{x} {{\text{e}}^{{ - t^{2} /2}} \,} {\text{d}}t = \frac{1}{2}\left[ {1 + {\text{erf}}\left( {{x \mathord{\left/ {\vphantom {x {\sqrt 2 }}} \right. \kern-0pt} {\sqrt 2 }}} \right)} \right] \).

If R is a random number in the range \( 0 \le R < 1 \), the corresponding value of x is found by solving the equation \( \frac{1}{{\sqrt {2\uppi} }}\int_{ - \infty }^{x} {{\text{e}}^{{ - t^{2} /2}} \,} {\text{d}}t = R \) for x.

The inversion of \( F(x) \) will be done in this example using an approximation. If we define the function \( Q(x) \equiv \frac{1}{{\sqrt {2\uppi} }}\int_{x}^{\infty } {{\text{e}}^{{ - t^{2} /2}} \,} {\text{d}}t \), then it is \( F(x) = 1 - Q(x) \) and \( Q(x) = 1 - F(x) = 1 - R \). Since it is \( 0 \le R < 1 \), we may consider the number \( 1 - R \) as the random number and set \( Q(x) = R \) for convenience. For positive values of x, \( Q(x) \) takes values between ½ and 0.

For \( Q(x) \) the following approximate method exists [2]:

If \( Q(x_{p} ) = p \), where \( 0 < p \le 0.5 \), and \( t \equiv \sqrt { - 2\, \ln p} \), then

$$ x_{p} = t - \frac{{2.515517 + 0.802853\,t + 0.010328\,t^{2} }}{{1 + 1.432788\,t + 0.189269\,t^{2} + 0.001308\,t^{3} }} + \varepsilon (p)\quad {\text{where}}\;\left| {{\kern 1pt} \varepsilon (p){\kern 1pt} } \right| < 4.5 \times 10^{ - 4} . $$
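The approximation is straightforward to implement; a minimal sketch, with the coefficients quoted above:

```python
import math

def x_p(p):
    """Rational approximation to the inverse of Q(x), valid for 0 < p <= 0.5."""
    t = math.sqrt(-2 * math.log(p))
    num = 2.515517 + 0.802853 * t + 0.010328 * t**2
    den = 1 + 1.432788 * t + 0.189269 * t**2 + 0.001308 * t**3
    return t - num / den

# Q(1.2816) = 0.1 for the standard normal, so x_p(0.1) should be close to 1.2816
print(f"{x_p(0.1):.4f}")
```

For \( p = 0.5 \) the function returns a value very close to zero, as it should, since \( Q(0) = \tfrac{1}{2} \).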

The method described gives only the positive values of x. We may cover negative values of x as well by using one of the following two methods:

  1. 1.

    We put \( p = \left| {{\kern 1pt} R - 0.5{\kern 1pt} } \right| \). We find \( x_{p} \). If it is \( R - 0.5 < 0 \), we consider \( x_{p} \) to be negative. If it is \( R - 0.5 > 0 \), we consider \( x_{p} \) to be positive. This is equivalent to multiplying the value of \( x_{p} \) found by \( \frac{R - 0.5}{{\,\left| {R - 0.5{\kern 1pt} } \right|}} \).

  2. 2.

    We put \( p = R/2 \). We find \( x_{p} \). We decide whether \( x_{p} \) is to be considered to be positive or negative using another unbiased method, e.g. by whether the last digit of R is odd or even.

Using the first method and 10 000 random numbers, we find the corresponding values of x, the histogram of which is given below with a bin width of 0.2.

If we wish the distribution to have a standard deviation \( \sigma_{x} \) instead of unity, we multiply the values of x, which were found as described above, by the numerical value of \( \sigma_{x} \).

If we wish the distribution to have a standard deviation \( \sigma_{x} \) instead of unity and a mean equal to \( \mu_{x} \) instead of 0, having multiplied the values of x by the numerical value of \( \sigma_{x} \), we add to the results the numerical value of \( \mu_{x}. \)
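Putting the pieces together, the first sign method plus the scaling and shifting described above give a complete normal sampler; a sketch using Python’s uniform generator (with a fixed seed) in place of the digits of π:

```python
import math
import random

def q_inv(p):
    """Rational approximation to the inverse of Q(x), 0 < p <= 0.5."""
    t = math.sqrt(-2 * math.log(p))
    num = 2.515517 + 0.802853 * t + 0.010328 * t**2
    den = 1 + 1.432788 * t + 0.189269 * t**2 + 0.001308 * t**3
    return t - num / den

def normal_from_uniform(R, mu=0.0, sigma=1.0):
    """Method 1 of the text: p = |R - 0.5|, sign taken from R - 0.5,
    then scale by sigma and shift by mu."""
    p = abs(R - 0.5)
    if p == 0.0:
        return mu
    x = q_inv(p)
    if R < 0.5:
        x = -x
    return mu + sigma * x

random.seed(3)
xs = [normal_from_uniform(random.random(), mu=5, sigma=2) for _ in range(10_000)]
mean = sum(xs) / len(xs)
sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / len(xs))
print(f"mean = {mean:.2f}, sd = {sd:.2f}")  # close to 5 and 2
```

The sample mean and standard deviation come out close to the requested \( \mu_{x} = 5 \) and \( \sigma_{x} = 2 \), within the statistical fluctuations expected for 10 000 values.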

Example 14.8 [E]

Using Excel©, produce 1000 numbers which are derived from a normal population with \( \mu = 5 \) and \( \sigma = 2. \)

Excel 2016 creates random numbers using the add-in Data Analysis, found in the Data tab if it is installed.

Highlight cell 14. In Data Analysis choose Random Number Generation. In the window that opens, set Number of Variables 1, Number of Random Numbers 1000, Distribution Normal, Mean = 5, Standard Deviation = 2. Pressing OK fills column A with 1000 random numbers, normally distributed, with mean = 5 and standard deviation = 2.

Highlight column A. In Insert > Recommended Chart we choose Histogram. This produces a histogram of the random numbers.

Excel sets the bin width using the lowest and highest of the random numbers. This has the effect that the bin limits are numbers with many digits. To avoid this, we first sort the numbers in increasing order. In this case, the smallest number is −1.363261491. We change this to −1.4. The largest number is 11.75441697. We change this to 11.8. These changes will not affect the histogram. They will, however, simplify the bin widths and limits. Double-clicking on the X-axis, we open the Format Axis window. In Axis Options, we choose Bin Width 0.5. With these changes, the histogram has the form of the figure shown here.

Example 14.9 [O]

Using Origin©, produce 1000 numbers which are derived from a normal population with \( \mu = 5 \) and \( \sigma = 2. \)

We highlight the column in which we want the numbers to be entered, say A. Then

$$ {\mathbf{Column}} > {\mathbf{Set}}\;{\mathbf{Column}}\;{\mathbf{Values}} $$

In the window that opens, we select:

$$ {\mathbf{Function}} > {\mathbf{Data}}\;{\mathbf{Generation}} > {\mathbf{normal}}\left( {{\mathbf{nps}}\left[ {,\;{\mathbf{seed}}} \right]} \right) $$

To obtain numbers distributed normally with standard deviation σ and mean μ, we must give the instruction: normal([ n ])*[ σ ] + [ μ ], where [ n ] is the number of numbers we wish to obtain, [ σ ] is the numerical value of σ and [ μ ] is the numerical value of μ.

Here, we type the instruction normal(1000)*2 + 5 and press OK. 1000 numbers normally distributed with \( \mu = 5 \) and \( \sigma = 2 \) are entered in the selected column. The histogram of these is shown in the figure.

Example 14.10 [R]

Produce 1000 numbers which are derived from a normal population with \( \mu = 5 \) and \( \sigma = 2. \)

The function producing normally distributed random numbers is rnorm(n, μ, σ). It produces n random numbers from a parent distribution with mean μ and standard deviation σ. Using the function with \( \mu = 5 \) and \( \sigma = 2 \) produces the numbers whose histogram is shown:

    x <- rnorm(1000, 5, 2)
    hist(x)

Programs

Excel

Origin

Ch. 14. Origin—Random Number Generation

Python

R

Ch. 14. R—Random Number Generation

Problems

  1. 14.6

    Find the relation which transforms random numbers R, distributed uniformly in the range [0, 1), to values of x which are distributed according to the probability density function \( f(x) = 1.5 \times 10^{ - 3} \sqrt x \) in the range [0, 100).

  2. 14.7

    Find the relation which transforms random numbers R, distributed uniformly in the range [0, 1), to values of x which are distributed according to the probability density function \( f(x) = \sin x \) in the range \( [0,\uppi/2) \).

  3. 14.8

    The probability density function for the displacement of the simple harmonic oscillator is \( f(x) = \frac{{2/\uppi}}{{\sqrt {a^{2} - x^{2} } }} \), for values of x between 0 and a. Find the relation which transforms random numbers R, uniformly distributed in the range [0, 1), to values of x which are distributed according to this density function in the range [0, a).

  4. 14.9

    [O] Produce 1000 numbers which are derived from a Poisson population with \( \mu = 5 \). Hint: Follow the procedure of Example 14.9 [O], with the final entry in the window that opens being poisson(n, μ).

14.4 Appendix 4: The Values of the Fundamental Physical Constants

The values of the fundamental physical constants given below are those internationally recommended in 2010 by CODATA, the Committee on Data for Science and Technology.

The uncertainty in a value (standard deviation of the mean) is given in parentheses at the end of the numerical value and refers to the corresponding last digits. For example, the value \( G = 6.673\;84(80) \times 10^{ - 11} \;{\text{m}}^{ 3} \;{\text{kg}}^{ - 1} \;{\text{s}}^{ - 2} \) should be interpreted to mean \( G = (6.673\;84 \pm 0.000\;80) \times 10^{ - 11} \;{\text{m}}^{ 3} \;{\text{kg}}^{ - 1} \;{\text{s}}^{ - 2} \).

The most recently revised values of the fundamental physical constants may be found at the web page of the Fundamental Constants Data Center of the National Institute of Standards and Technology (NIST) of the USA: http://physics.nist.gov/cuu/Constants/.

Recommended values of the fundamental physical constants. CODATA 2010

Universal constants

Quantity

Symbol

Value

Units

Speed of light in vacuum

c

299 792 458 (by definition)

\( {\text{m}}\;{\text{s}}^{-1} \)

Magnetic constant

\( \mu_{0} \)

\( 4\uppi \times 10^{ - 7} \) (exact)

\( 12.566\;370\;614 \ldots \times 10^{ - 7} \)

\( {\text{N}}\;{\text{A}}^{-2} \)

\( {\text{N}}\;{\text{A}}^{ - 2} \)

Electric constant

\( \varepsilon_{0} \)

\( 1/\mu_{0} c^{2} \) (exact)

\( 8.854\;187\;817 \ldots \times 10^{ - 12} \)

\( {\text{F}}\;{\text{m}}^{ - 1} \)

\( {\text{F}}\;{\text{m}}^{ - 1} \)

Newtonian constant of gravitation

G

\( 6.673\,\,84(80) \times 10^{ - 11} \)

\( {\text{m}}^{ 3} \;{\text{kg}}^{ - 1} \;{\text{s}}^{ - 2} \)

Planck constant

h

\( 6.626\,\,069\,\,57(29) \times 10^{ - 34} \)

J s

\( 4.135\;667\;516(91) \times 10^{ - 15} \)

eV s

Electromagnetic constants

Quantity

Symbol

Value

Units

Elementary charge

e

\( 1. 60 2\; 1 7 6\; 5 6 5\left( { 3 5} \right) \times 10^{ - 19} \)

C

Magnetic flux quantum \( h/2e \)

\( \varPhi_{0} \)

\( 2.0 6 7\; 8 3 3\; 7 5 8\left( { 4 6} \right) \times 10^{ - 15} \)

Wb

Conductance quantum \( 2e^{2} /h \)

\( G_{0} \)

\( 7. 7 4 8\;0 9 1\; 7 3 4 6\left( { 2 5} \right) \times 10^{ - 5} \)

S

Bohr magneton \( e\hbar /2m_{\text{e}} \)

\( \mu_{\text{B}} \)

\( 9. 2 7 4\;00 9\; 6 8\left( { 20} \right) \times 10^{ - 24} \)

\( {\text{J}}\;{\text{T}}^{ - 1} \)

Nuclear magneton \( e\hbar /2m_{\text{p}} \)

\( \mu_{\text{N}} \)

\( 5.0 50\; 7 8 3\; 5 3\left( { 1 1} \right) \times 10^{ - 27} \)

\( {\text{J}}\;{\text{T}}^{ - 1} \)

Atomic and nuclear constants

Quantity

Symbol

Value

Units

Fine-structure constant \( e^{2} /4\uppi\varepsilon_{0} \hbar c \)

\( \alpha \)

\( 7. 2 9 7\; 3 5 2\; 5 6 9 8\left( { 2 4} \right) \times 10^{ - 3} \)

Inverse of fine-structure constant

\( 1/\alpha \)

\( 1 3 7.0 3 5\; 9 9 9\;0 7 4\left( { 4 4} \right) \)

Rydberg constant \( \alpha^{2} m_{\text{e}} c/2h \)

\( R_{\infty } \)

\( 10\;973\;731.568\;539(55) \)

\( {\text{m}}^{ - 1} \)

\( R_{\infty } hc \)

\( 1 3. 60 5\; 6 9 2\; 5 3\left( { 30} \right) \)

eV

Bohr radius \( \alpha /4\uppi\,R_{\infty } = 4\uppi{\kern 1pt} \varepsilon_{0} \hbar^{2} /m_{\text{e}} e^{2} \)

\( a_{0} \)

\( 0. 5 2 9\; 1 7 7\; 2 10\; 9 2\left( { 1 7} \right) \times 10^{ - 10} \)

m

Hartree energy \( e^{2} /4\uppi\,\varepsilon_{0} a_{0} = 2R_{\infty } hc = \alpha^{2} m_{\text{e}} c^{2} \)

\( E_{\text{h}} \)

\( 4. 3 5 9\; 7 4 4\; 3 4\left( { 1 9} \right) \times 10^{ - 18} \)

J

\( 2 7. 2 1 1\; 3 8 5\;0 5\left( { 60} \right) \)

eV

Electron mass

\( m_{\text{e}} \)

\( 9. 10 9\; 3 8 2\; 9 1\left( { 40} \right) \times 10^{ - 31} \)

kg

\( 5. 4 8 5\; 7 9 9\;0 9 4 6\left( { 2 2} \right) \times 10^{ - 4} \)

u

Proton mass

\( m_{\text{p}} \)

\( 1. 6 7 2\; 6 2 1\; 7 7 7\left( { 7 4} \right) \times 10^{ - 27} \)

kg

\( 1.00 7\; 2 7 6\; 4 6 6\; 8 1 2\left( { 90} \right) \)

u

Neutron mass

\( m_{\text{n}} \)

\( 1. 6 7 4\; 9 2 7\; 3 5 1\left( { 7 4} \right) \times 10^{ - 27} \)

kg

\( 1.00 8\; 6 6 4\; 9 1 6\;00\left( { 4 3} \right) \)

\( {\text{u}} \)

Electron mass energy equivalent

\( m_{\text{e}} c^{2} \)

\( 8. 1 8 7\; 10 5\;0 6\left( { 3 6} \right) \times 10^{ - 14} \)

J

\( 0. 5 10\; 9 9 8\; 9 2 8\left( { 1 1} \right) \)

MeV

Proton mass energy equivalent

\( m_{\text{p}} c^{2} \)

\( 1. 50 3\; 2 7 7\; 4 8 4\left( { 6 6} \right) \times 10^{ - 10} \)

J

\( 9 3 8. 2 7 2\;0 4 6\left( { 2 1} \right) \)

MeV

Neutron mass energy equivalent

\( m_{\text{n}} c^{2} \)

\( 1. 50 5\; 3 4 9\; 6 3 1\left( { 6 6} \right) \times 10^{ - 10} \)

J

\( 9 3 9. 5 6 5\; 3 7 9\left( { 2 1} \right) \)

MeV

Proton mass/electron mass

\( m_{\text{p}} /m_{\text{e}} \)

\( 1 8 3 6. 1 5 2\; 6 7 2\; 4 5\left( { 7 5} \right) \)

Neutron mass /proton mass

\( m_{\text{n}} /m_{\text{p}} \)

\( 1.00 1\; 3 7 8\; 4 1 9\; 1 7\left( { 4 5} \right) \)

Electron charge to mass quotient

\( - e/m_{\text{e}} \)

\( - 1. 7 5 8\; 8 20\;0 8 8\left( { 3 9} \right) \times 10^{11} \)

\( {\text{C}}\;{\text{kg}}^{ - 1} \)

Proton charge to mass quotient

\( e/m_{\text{p}} \)

\( 9. 5 7 8\; 8 3 3\; 5 8\left( { 2 1} \right) \times 10^{7} \)

\( {\text{C}}\;{\text{kg}}^{ - 1} \)

Electron Compton wavelength \( h/m_{\text{e}} c \)

\( \lambda_{\text{C}} \)

\( 2. 4 2 6\; 3 10\; 2 3 8 9\left( { 1 6} \right) \times 10^{ - 12} \)

m

Proton Compton wavelength \( h/m_{\text{p}} c \)

\( \lambda_{\text{C,p}} \)

\( 1. 3 2 1\; 40 9\; 8 5 6\; 2 3\left( { 9 4} \right) \times 10^{ - 15} \)

m

Neutron Compton wavelength \( h/m_{\text{n}} c \)

\( \lambda_{\text{C,n}} \)

\( 1. 3 1 9\; 5 90\; 90 6 8\left( { 1 1} \right) \times 10^{ - 15} \)

m

Classical electron radius \( \alpha^{2} a_{0} \)

\( r_{\text{e}} \)

\( 2. 8 1 7\; 9 40\; 3 2 6 7\left( { 2 7} \right) \times 10^{ - 15} \)

m

Electron magnetic moment

in Bohr magnetons

\( \mu_{\text{e}} \)

\( - 9. 2 8 4\; 7 6 4\; 30\left( { 2 1} \right) \times 10^{ - 24} \)

\( {\text{J}}\;{\text{T}}^{ - 1} \)

\( \mu_{\text{e}} /\mu_{\text{B}} \)

\( - 1.00 1\; 1 5 9\; 6 5 2\; 1 80\; 7 6\left( { 2 7} \right) \)

Proton magnetic moment

in Bohr magnetons

\( \mu_{\text{p}} \)

\( 1. 4 10\; 60 6\; 7 4 3\left( { 3 3} \right) \times 10^{ - 26} \)

\( {\text{J}}\;{\text{T}}^{ - 1} \)

\( \mu_{\text{p}} /\mu_{\text{B}} \)

\( 1. 5 2 1\;0 3 2\; 2 10\left( { 1 2} \right) \times 10^{ - 3} \)

Neutron magnetic moment

in Bohr magnetons

\( \mu_{\text{n}} \)

\( - 0. 9 6 6\; 2 3 6\; 4 7\left( { 2 3} \right) \times 10^{ - 26} \)

\( {\text{J}}\;{\text{T}}^{ - 1} \)

\( \mu_{\text{n}} /\mu_{\text{B}} \)

\( - 1.0 4 1\; 8 7 5\; 6 3\left( { 2 5} \right) \times 10^{ - 3} \)

Physico-chemical constants

Quantity

Symbol

Value

Units

Avogadro constant

\( N_{\text{A}} \)

\( 6.0 2 2\; 1 4 1\; 2 9\left( { 2 7} \right) \times 10^{23} \)

\( {\text{mol}}^{-1} \)

Atomic mass constant

\( m_{\text{u}} = \frac{1}{12}m\,({}^{12}{\text{C}}) = 1\;{\text{u}} = 10^{ - 3} \;{\text{kg}}\;{\text{mol}}^{ - 1} /N_{\text{A}} \)

\( m_{\text{u}} \)

\( 1. 6 60\; 5 3 8\; 9 2 1\left( { 7 3} \right) \times 10^{ - 27} \)

kg

Atomic mass constant energy equivalent

\( m_{\text{u}} c^{2} \)

\( 1. 4 9 2\; 4 1 7\; 9 5 4\left( { 6 6} \right) \times 10^{ - 10} \)

J

\( 9 3 1. 4 9 4\;0 6 1\left( { 2 1} \right) \)

MeV

Faraday constant \( N_{\text{A}} e \)

F

\( 9 6\; 4 8 5. 3 3 6 5\left( { 2 1} \right) \)

\( {\text{C}}\;{\text{mol}}^{-1} \)

Molar gas constant

R

\( 8. 3 1 4\; 4 6 2 1\left( { 7 5} \right) \)

\( {\text{J}}\;{\text{mol}}^{ - 1} \;{\text{K}}^{ - 1} \)

Boltzmann constant \( R/N_{\text{A}} \)

k

\( 1. 3 80\; 6 4 8 8\left( { 1 3} \right) \times 10^{ - 23} \)

\( {\text{J}}\;{\text{K}}^{ - 1} \)

\( 8. 6 1 7\; 3 3 2 4\left( { 7 8} \right) \times 10^{ - 5} \)

\( {\text{eV}}\;{\text{K}}^{ - 1} \)

Inverse of Boltzmann’s constant in \( {\text{K}}\;{\text{eV}}^{ - 1} \)

\( 1/k \)

\( 11\,\,604.519(11) \)

\( {\text{K}}\;{\text{eV}}^{ - 1} \)

Molar volume of ideal gas \( RT/p \)

(Τ = 273.15 Κ, p = 101.325 kPa)

\( V_{\text{m}} \)

\( 2 2. 4 1 3\; 9 6 8\left( { 20} \right) \times 10^{ - 3} \)

\( {\text{m}}^{ 3} \;{\text{mol}}^{ - 1} \)

Stefan-Boltzmann constant \( (\uppi^{2} / 60)\,k^{4} /(\hbar^{3} c^{2}) \)

\( \sigma \)

\( 5. 6 70\; 3 7 3\left( { 2 1} \right) \times 10^{ - 8} \)

\( {\text{W}}\;{\text{m}}^{ - 2} \;{\text{K}}^{ - 4} \)

Wien wavelength displacement constant \( b = \lambda_{ \hbox{max} } T = (hc/k)/4.965\;114\;231 \ldots \)

b

\( 2. 8 9 7\; 7 7 2 1\left( { 2 6} \right) \times 10^{ - 3} \)

m K

Values which are internationally accepted:

Standard atmosphere = 101 325 Pa.

Standard acceleration of gravity \( g_{\text{n}} = 9.806\;65\;{\text{m/s}}^{ 2} \).