Abstract
In this chapter, we describe the mathematical knowledge necessary for understanding this book. First, we discuss matrices, open sets, closed sets, compact sets, the Mean Value Theorem, and Taylor expansions. All of these are topics covered in the first year of college. Next, we discuss absolute convergence and analytic functions. Then, we discuss the Law of Large Numbers and the Central Limit Theorem, as well as defining the symbols \(O_P(\cdot )\) and \(o_P(\cdot )\) used in subsequent chapters. Finally, we define the Fisher information matrix and discuss the properties of regular and realizable cases. For algebraic geometry and related topics, please refer to Chapter 6. Readers who already understand the content of this chapter may skip it as appropriate. At the end of the chapter, we provide the proof of Proposition 2, which was postponed in Chapter 1; it can be understood with the preliminary knowledge of this chapter.
4.1 Elementary Mathematics
Here we discuss matrices and eigenvalues, open sets, closed sets, compact sets, the Mean Value Theorem, and Taylor expansions.
4.1.1 Matrices and Eigenvalues
A matrix \(A\in {\mathbb R}^{n\times n}\) (\(n\ge 1\)) with the same number of rows and columns is called a square matrix. A diagonal matrix is a matrix whose off-diagonal elements are all zero. A diagonal matrix with all diagonal elements equal to 1 is called an identity matrix, denoted as \(I_n\in {\mathbb R}^{n\times n}\). The sum of the diagonal elements of a square matrix is called the trace. A matrix with zero elements in the (i, j) (\(i<j\)) positions is called a lower triangular matrix.
In the following, for a square matrix \(A\in {\mathbb R}^{n\times n}\), we assume that there exists an \(X\in {\mathbb R}^{n\times n}\) such that \(AX=I_n\), and we try to find it. To do this, we perform two types of operations on the matrix \([A\mid I_n]\in {\mathbb R}^{n\times 2n}\), which consists of A and \(I_n\) arranged side by side:
1. Subtract a multiple of one row from another row.
2. Swap two rows.
We obtain a matrix such that the left half becomes a lower triangular matrix. Assuming that we performed operation 2 a total of m times, the product of the diagonal elements of the left half of the matrix at this point, multiplied by \((-1)^m\), is called the determinant of the matrix A. In the following, we write the determinant of the matrix A as \(\det A\).
After performing these operations, initially with \(B=I_n\), \([A\mid B]\) is transformed into \([A'\mid B']\), but X satisfying \(AX=B\) also satisfies \(A'X=B'\). If the determinant of A is not zero, we perform the above two operations further to make the left half a diagonal matrix. Finally, by
3. Dividing each row by the value of the diagonal element
we make the left half the identity matrix \(I_n\). If \(A''=I_n\), then for \([A''\mid B'']\), \(A''X=B''\), so the right half \(B''\) at that time is X. Conversely, if the determinant of A is zero, such a matrix X does not exist. When a matrix X exists such that \(AX=I_n\) (when the determinant of A is not zero), X is called the inverse matrix of A, denoted as \(X=A^{-1}\).
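The procedure above can be traced in code. The following sketch (our own illustration using NumPy; the function name `det_and_inverse` is not from the text) row-reduces \([A\mid I_n]\), counts the row swaps for the factor \((-1)^m\), and returns the inverse when the determinant is non-zero:

```python
import numpy as np

def det_and_inverse(A, tol=1e-12):
    """Compute det(A) and, if it exists, A^{-1} by row reduction on [A | I_n]."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])          # [A | I_n]
    swaps = 0
    for j in range(n):
        # Operation 2: swap rows so the pivot in column j is non-zero
        p = j + np.argmax(np.abs(M[j:, j]))
        if abs(M[p, j]) < tol:
            return 0.0, None               # determinant zero: no inverse
        if p != j:
            M[[j, p]] = M[[p, j]]
            swaps += 1
        # Operation 1: subtract multiples of row j from the other rows
        for i in range(n):
            if i != j:
                M[i] -= (M[i, j] / M[j, j]) * M[j]
    # Product of the diagonal elements, times (-1)^m, is the determinant
    det = (-1) ** swaps * np.prod(np.diag(M[:, :n]))
    # Operation 3: divide each row by its diagonal element
    M /= np.diag(M[:, :n])[:, None]
    return det, M[:, n:]
```

Here the largest available pivot is always swapped in, purely for numerical stability; mathematically any non-zero pivot works, and the extra swaps only enter through the sign factor \((-1)^m\).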
Example 24
In each of the cases \(d\not =0\) and \(d=0\), the reduction to a lower triangular matrix
\(\displaystyle \left[ \begin{array}{cc} a&{}b\\ c&{}d \end{array} \right] \rightarrow \left[ \begin{array}{cc} a-bc/d&{}0\\ c&{}d \end{array} \right] \quad (d\not =0), \qquad \left[ \begin{array}{cc} a&{}b\\ c&{}0 \end{array} \right] \rightarrow \left[ \begin{array}{cc} c&{}0\\ a&{}b \end{array} \right] \quad (d=0)\)
can be done. The determinant is \((a-bc/d) \cdot d\cdot (-1)^0\) for \(d\not =0\), and \(cb\cdot (-1)^1\) for \(d=0\), both of which can be seen to be \(ad-bc\).\(\blacksquare \)
Example 25
When the determinant of A, \(\Delta =ad-bc\), is not zero, in particular when \(d\not =0\), the reduction
\(\displaystyle [A\mid I_2] \rightarrow \left[ I_2 \,\Bigm |\, \frac{1}{\Delta } \left[ \begin{array}{cc} d&{}-b\\ -c&{}a \end{array} \right] \right] \)
can be done, and
\(\displaystyle \left[ \begin{array}{cc} a&{}b\\ c&{}d \end{array} \right] \cdot \frac{1}{\Delta } \left[ \begin{array}{cc} d&{}-b\\ -c&{}a \end{array} \right] =I_2\)
holds.
holds. That is, \(\displaystyle \frac{1}{\Delta } \left[ \begin{array}{cc} d&{}-b\\ -c&{}a \end{array} \right] \) becomes the inverse matrix of \(\displaystyle \left[ \begin{array}{cc} a&{}b\\ c&{}d \end{array} \right] \). \(\blacksquare \)
Moreover, when a constant \(\lambda \in {\mathbb C}\) and a vector \(u\in {\mathbb C}^n\) (\(u\not =0\)) exist such that \(Au=\lambda u\), \(\lambda \) is called an eigenvalue, and u is called an eigenvector. If the matrix \(A-\lambda I_n\) has an inverse, that is, if the determinant of \(A-\lambda I_n\) is not zero, then from \(u=(A-\lambda I_n)^{-1}0=0\), the u that satisfies \(Au=\lambda u\) is limited to \(u=0\). Eigenvalues are determined as solutions to the equation concerning \(\lambda \) (eigenvalue equation) stating that the determinant of \(A-\lambda I_n\) is zero. In other words,
\(\det (A-\lambda I_n)=0\)
holds.
Example 26
For \(n=2\), if we set \(\displaystyle A=\left[ \begin{array}{cc} a&{}b\\ c&{}d \end{array} \right] \), then \(\det (A-\lambda I_2)=(a-\lambda )(d-\lambda )-bc=0\) holds. Therefore, the solutions of the quadratic equation \(\lambda ^2-(a+d)\lambda +ad-bc=0\) are the eigenvalues. \(\blacksquare \)
A matrix \(A\in {\mathbb R}^{n\times n}\) for which all the (i, j) components \(A_{i,j}\) and (j, i) components \(A_{j,i}\) are equal is called a symmetric matrix. In general, eigenvalues \(\lambda \) are not necessarily real numbers, but when the matrix \(A\in {\mathbb R}^{n\times n}\) is symmetric, \(\lambda \) is a real number. In fact, for \(\lambda \in {\mathbb C}\) and \(u\in {\mathbb C}^n\) (\(u\not =0\)) with \(Au=\lambda u\), since \(A\overline{u}=\overline{Au}=\overline{\lambda u}=\overline{\lambda }\overline{u}\), we have
\(\overline{u}^\top Au=\overline{u}^\top (\lambda u)=\lambda \,\overline{u}^\top u\)
and
\(\overline{u}^\top Au=(A\overline{u})^\top u=\overline{\lambda }\,\overline{u}^\top u,\)
so that \(\lambda \,\overline{u}^\top u=\overline{\lambda }\,\overline{u}^\top u\), and since \(\overline{u}^\top u>0\), \(\lambda =\overline{\lambda }\),
where \(\overline{z}\) denotes the complex conjugate of \(z\in {\mathbb C}\), and for \(a,b\in {\mathbb R}\), we set \(\overline{a+ib}=a-ib\).
Example 27
For the matrix \(\left[ \begin{array}{cc} a&{}b\\ c&{}d \end{array} \right] \), if we set \(b=c\), the eigenvalue equation becomes \(\lambda ^2-(a+d)\lambda +ad-b^2=0\), and its discriminant is \((a+d)^2-4(ad-b^2)=(a-d)^2+4b^2\ge 0\). Indeed, the eigenvalues are real numbers. \(\blacksquare \)
For a symmetric matrix A, it is called non-negative definite when all eigenvalues are non-negative, and positive definite when all eigenvalues are positive.
Example 28
For the matrix \(\left[ \begin{array}{cc} a&{}b\\ c&{}d \end{array} \right] \), if we set \(b=c\), then when \(a+d\ge 0\) and \(ad\ge b^2\), the two solutions of the eigenvalue equation are non-negative, and the matrix is non-negative definite. Furthermore, if both eigenvalues are positive, that is, if \(a+d> 0\) and \(ad> b^2\), it is positive definite. \(\blacksquare \)
Moreover, for a symmetric matrix \(A\in {\mathbb R}^{n\times n}\), \(z^\top Az\), \(z\in {\mathbb R}^n\) is called the quadratic form of A.
Proposition 4
A symmetric matrix \(A\in {\mathbb R}^{n\times n}\) being non-negative definite is equivalent to the quadratic form \(z^\top Az\) being non-negative for any \(z\in {\mathbb R}^n\). Furthermore, A being positive definite is equivalent to the quadratic form \(z^\top Az\) being positive for any \(0\not =z\in {\mathbb R}^{n}\).
For the proof, please refer to the appendix at the end of the chapter.
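Proposition 4 can be probed numerically. In the sketch below (our own illustration, assuming NumPy; the matrices and function names are chosen for illustration), the eigenvalue test and the quadratic-form test agree on a non-negative definite matrix and on an indefinite one:

```python
import numpy as np

def is_nonneg_definite(A, tol=1e-10):
    """Non-negative definite: all eigenvalues of the symmetric matrix are >= 0."""
    return bool(np.all(np.linalg.eigvalsh(A) >= -tol))

def quad_form_nonneg(A, trials=1000, seed=0):
    """Probe Proposition 4: is z^T A z >= 0 for randomly drawn z?"""
    rng = np.random.default_rng(seed)
    return all(z @ A @ z >= 0 for z in rng.standard_normal((trials, A.shape[0])))

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # eigenvalues 1 and 3: non-negative definite
B = np.array([[1.0, 2.0], [2.0, 1.0]])   # eigenvalues 3 and -1: indefinite
```

Random probing of the quadratic form can only refute non-negative definiteness, never prove it; the eigenvalue criterion is the decisive test.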
4.1.2 Open Sets, Closed Sets, and Compact Sets
Let the Euclidean distance between each \(x, y \in {\mathbb R}^d\) be denoted as dist(x, y). For a subset M of \({\mathbb R}^d\), let us denote the open ball (excluding the boundary) of radius \(\epsilon > 0\) centered at \(z \in M\) as \(B(z, \epsilon ) := \{y \in {\mathbb R}^d \mid dist(z, y) < \epsilon \}\). If, for each \(z \in M\), there exists a radius \(\epsilon > 0\) such that \(B(z, \epsilon ) \subseteq M\), then M is called an open set. On the other hand, if for any \(\epsilon > 0\) the intersection of \(B(z, \epsilon )\) and M is non-empty, \(z \in {\mathbb R}^d\) is called an adherent point of \(M\subseteq {\mathbb R}^d\). If M contains all its adherent points as its elements, M is called a closed set (see Fig. 4.1). Generally, the complement of a closed set is an open set, and the complement of an open set is a closed set.
In fact, if M is a closed set, its adherent points are not included in the complement \({M}^C\), so when the radius of the open ball for each \(z \in {M}^C\) is chosen to be small, the open ball will not intersect M. Conversely, if M is an open set, when the radius of the open ball for each \(z \in {M}\) is chosen to be small, the open ball will not intersect \({M}^C\). Therefore, the adherent points of \({M}^C\) are not in M.
Example 29
Assume that \(d=1\) and \(d=3\) for items 1–4 and item 5, respectively.
1. The open interval (a, b) is an open set, and the closed interval [a, b] is a closed set.
2. The set of all real numbers \(\mathbb R\) and the set of all integers \(\mathbb Z\) are closed sets (\(\mathbb R\) is also an open set).
3. The set \({\mathbb R} \cap {\mathbb Z}^C\), which is the set of all real numbers \(\mathbb R\) excluding the set of all integers \(\mathbb Z\), is an open set.
4. The set of all rational numbers \(\mathbb Q\) is neither an open set nor a closed set.
5. The region \(\{(x, y, z) \in {\mathbb R}^3 \mid x^2 + y^2 + z^2 < 1, z \ge 0\}\) is neither an open set nor a closed set.
\(\blacksquare \)
In addition, there is a concept of compact sets related to closed sets. When, for any mapping \(M \ni x \mapsto \epsilon (x) \in {\mathbb R}_{>0}\), a finite number of points \(z_1, \ldots , z_m \in M\) can be chosen such that the union of open balls \(\cup _{i=1}^m B(z_i, \epsilon (z_i))\) contains M as a subset, M is called compact. In this book, we only deal with subsets of \({\mathbb R}^d\) as the universal set and the Euclidean distance as the distance. In this case, it is known that compact sets are equivalent to closed sets that are bounded (bounded closed sets), where we say a set M is bounded when there exists a positive constant \(L>0\) such that \(dist(x, y) < L\) for any \(x, y \in M\). Among the closed sets in Example 29, [a, b] is compact, but \(\mathbb R\) and \(\mathbb Z\) are not.
Although the proof is omitted, if a set M is compact, a continuous function with domain M has maximum and minimum values.
Example 30
Neither \(M=(0,1]\) nor \(M=[1, \infty )\) is compact. The continuous function \(f(x)=1/x\) does not have a maximum value on \(M=(0,1]\) and does not have a minimum value on \(M=[1, \infty )\). \(\blacksquare \)
Also, let dist(x, a) be the distance between \(x,a \in M\) (for \(M = {\mathbb R}\), for example, \(dist(x, a) = |x-a|\)). A function f with domain M is said to be continuous at \(x=a\) if, for any \(\epsilon > 0\), there exists a \(\delta = \delta (\epsilon , a)>0\) such that
\(dist(x,a)<\delta \ \Longrightarrow \ |f(x)-f(a)|<\epsilon .\)
If the function is continuous for all \(a \in M\), then f is continuous.
The continuity of functions can be defined not only for \(f: {\mathbb R}\rightarrow {\mathbb R}\). From Chap. 4 onwards, we will examine the set of continuous functions C(K) defined on a compact set K. The distance between elements \(\phi \) and \(\phi '\) in C(K) is defined by the sup-norm (uniform norm):
\(\displaystyle \Vert \phi -\phi '\Vert :=\max _{x\in K}|\phi (x)-\phi '(x)|.\)
Then, the continuity of a function \(f: C(K)\rightarrow {\mathbb R}\) at \(\phi = \phi _a \in C(K)\) is defined by the existence, for any \(\epsilon > 0\), of a \(\delta = \delta (\epsilon , \phi _a)>0\) such that
\(\Vert \phi -\phi _a\Vert <\delta \ \Longrightarrow \ |f(\phi )-f(\phi _a)|<\epsilon .\)
4.1.3 Mean Value Theorem and Taylor Expansion
Here we discuss the Mean Value Theorem and the Taylor expansion, which will be used several times in the following chapters. They are particularly necessary for the mathematical analysis when the sample size n is large.
The Mean Value Theorem asserts that for a differentiable function \(f: {\mathbb R}\rightarrow {\mathbb R}\), if \(a < b\), then there exists a c such that \(a < c < b\) satisfying
\(\displaystyle \frac{f(b)-f(a)}{b-a}=f'(c). \qquad (4.2)\)
Example 31
For \(f(x) = x^2 - 3x + 2\), \(a = 2\), and \(b = 4\), we have
\(\displaystyle \frac{f(b)-f(a)}{b-a}=\frac{6-0}{2}=3 \quad \text{and}\quad f'(x)=2x-3.\)
So, \(c = 3\) satisfies the condition. \(\blacksquare \)
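The computation in Example 31 can be checked directly; a small sketch (ours, not from the text):

```python
# A direct check of Example 31: on [a, b] = [2, 4], the mean slope of
# f(x) = x^2 - 3x + 2 equals f'(c) at the intermediate point c = 3.
def f(x):
    return x**2 - 3*x + 2

def fprime(x):
    return 2*x - 3

a, b = 2.0, 4.0
slope = (f(b) - f(a)) / (b - a)   # (6 - 0) / 2 = 3
c = 3.0                           # solves f'(c) = slope, with a < c < b
```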
Equation (4.2) can be written as \(f(b) = f(a) + f'(c)(b - a)\), which extends to Taylor’s theorem.
Namely, if f is continuous up to the \((n-1)\)-th derivative and is n times differentiable, there exists a c with \(a<c<b\) such that
\(\displaystyle f(b)=f(a)+f'(a)(b-a)+\cdots +\frac{f^{(n-1)}(a)}{(n-1)!}(b-a)^{n-1}+R_n \qquad (4.3)\)
with
\(\displaystyle R_n=\frac{f^{(n)}(c)}{n!}(b-a)^n.\)
If \(n=1\), it becomes the Mean Value Theorem. Sometimes \(\theta a +(1-\theta )b\) is written instead of c, and there exists such a \(0<\theta <1\).
Setting \(b=x\) in (4.3), we get
\(\displaystyle f(x)=f(a)+f'(a)(x-a)+\cdots +\frac{f^{(n-1)}(a)}{(n-1)!}(x-a)^{n-1}+\frac{f^{(n)}(c)}{n!}(x-a)^n,\)
which is called the Taylor expansion of the function f at \(x=a\). Furthermore, setting \(a=0\), we get
\(\displaystyle f(x)=f(0)+f'(0)x+\cdots +\frac{f^{(n-1)}(0)}{(n-1)!}x^{n-1}+\frac{f^{(n)}(c)}{n!}x^n,\)
which is called the Maclaurin expansion.
Example 32
When \(e^x\) and \(\log (1+x)\) are Maclaurin-expanded, there exist \(0<\theta <1\) for each of
\(\displaystyle e^x=1+x+\frac{x^2}{2!}+\cdots +\frac{x^{n-1}}{(n-1)!}+\frac{e^{\theta x}}{n!}x^n\)
and
\(\displaystyle \log (1+x)=x-\frac{x^2}{2}+\frac{x^3}{3}-\cdots +\frac{(-1)^{n-2}}{n-1}x^{n-1}+\frac{(-1)^{n-1}}{n(1+\theta x)^n}x^n.\)
\(\blacksquare \)
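The two expansions can be checked numerically by comparing truncated partial sums with the exact values; a small sketch (ours), where `n` is the truncation order:

```python
import math

def exp_maclaurin(x, n):
    """Partial sum of the Maclaurin expansion of e^x up to degree n-1."""
    return sum(x**k / math.factorial(k) for k in range(n))

def log1p_maclaurin(x, n):
    """Partial sum of the Maclaurin expansion of log(1+x) up to degree n-1."""
    return sum((-1)**(k - 1) * x**k / k for k in range(1, n))

x = 0.5
err_exp = abs(exp_maclaurin(x, 10) - math.exp(x))        # remainder ~ x^10/10!
err_log = abs(log1p_maclaurin(x, 20) - math.log(1 + x))  # remainder ~ x^20/20
```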
For the case of two variables, a function \(f: {\mathbb R}^2\rightarrow {\mathbb R}\) that is continuous up to the \((n-1)\)-th derivative and differentiable n times, the Taylor expansion at \((x,y)=(a,b)\) can be written as
\(\displaystyle f(x,y)=\sum _{k=0}^{n-1}\frac{1}{k!}\left\{ (x-a)\frac{\partial }{\partial x}+(y-b)\frac{\partial }{\partial y}\right\} ^k f(a,b)+\frac{1}{n!}\left\{ (x-a)\frac{\partial }{\partial x}+(y-b)\frac{\partial }{\partial y}\right\} ^n f(a+\theta (x-a),b+\theta (y-b))\)
for some \(0<\theta <1\).
In the case of \(n=2\) for d variables, the Taylor expansion around \(x=(x_1,\ldots ,\) \(x_d)^\top =(a_1,\ldots ,a_d)^\top =a\) is, when f has a continuous first derivative and is twice differentiable, written as
\(\displaystyle f(x)=f(a)+\nabla f(a)^\top (x-a)+\frac{1}{2}(x-a)^\top \nabla ^2 f(a+\theta (x-a))(x-a)\)
for some \(0<\theta <1\),
where \(\nabla f: {\mathbb R}^d\rightarrow {\mathbb R}^d\) is a vector consisting of the d partial derivatives of f, \(\displaystyle \frac{\partial f}{\partial x_i}\), and \(\nabla ^2 f: {\mathbb R}^d\rightarrow {\mathbb R}^{d\times d}\) is a matrix (the Hessian matrix) consisting of the second partial derivatives of f, \(\displaystyle \frac{\partial ^2 f}{\partial x_i\partial x_j}\).
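The \(n=2\), d-variable expansion can be verified numerically for a concrete function. In the sketch below (ours, assuming NumPy), \(f(x)=e^{x_1+2x_2}\), whose gradient and Hessian at the origin are written out by hand, and the second-order approximation error is third order in the step size:

```python
import numpy as np

# Check the d-variable, n = 2 Taylor expansion for f(x) = exp(x1 + 2*x2)
# around a = (0, 0): f(a+h) = f(a) + grad f(a)^T h + (1/2) h^T H h + O(|h|^3).
def f(x):
    return np.exp(x[0] + 2 * x[1])

a = np.zeros(2)
grad = np.array([1.0, 2.0])              # nabla f at a (computed by hand)
H = np.array([[1.0, 2.0], [2.0, 4.0]])   # Hessian (nabla^2 f) at a, by hand
h = np.array([1e-2, -2e-2])
approx = f(a) + grad @ h + 0.5 * h @ H @ h
err = abs(f(a + h) - approx)             # should be third-order small
```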
4.2 Analytic Functions
In the following, we will denote the set of non-negative integers by \(\mathbb N\). Firstly, for \(r=(r_1,\dots ,r_d)\in {\mathbb N}^d\), \(x=(x_1,\dots ,x_d),b=(b_1,\dots ,b_d)\in {\mathbb R}^d\), \(a_r=a_{r_1,\dots ,r_d}\in {\mathbb R}\), we define
\((x-b)^r:=(x_1-b_1)^{r_1}\cdots (x_d-b_d)^{r_d}.\)
A sum of such terms
\(\displaystyle f(x)=\sum _{r\in {\mathbb N}^d}a_r(x-b)^r \qquad (4.6)\)
is called a power series. When there are a finite number of non-zero terms, we call f(x) a polynomial with real coefficients in terms of \(x_1,\dots ,x_d\), and we denote the set of such polynomials as \({\mathbb R}[x]\) or \({\mathbb R}[x_1,\dots ,x_d]\). Furthermore, when there exists an open set U (\(b\in U\subseteq {\mathbb R}^d\)) such that for any \(x\in U\), \(\sum _r|a_r|\,|(x-b)^r|<\infty \), we say that f(x) converges absolutely. In this case, the infinite series (4.6) is independent of the order of the sums \(\sum _{r_1},\dots ,\sum _{r_d}\), and its value is unique. We call such a function \(f: U\rightarrow {\mathbb R}\) a (real) analytic function.
Example 33
For the infinite series \(\sum _{n=0}^\infty a_n\) with \(a_n=(-1)^n\), we can write it in two ways:
\((1-1)+(1-1)+\cdots =0\)
and
\(1-(1-1)-(1-1)-\cdots =1,\)
which is due to the fact that \(\sum _{n=0}^\infty |a_n|=1+1+\cdots =\infty \). However, in the case of \(a_n=(-\frac{1}{2})^n\), we have \(\sum _{n=0}^\infty |a_n|=1+\frac{1}{2}+\cdots =2\), and hence the series converges absolutely (to \(\frac{2}{3}\), regardless of the order of summation).
\(\blacksquare \)
Let \(\{a_n\}\) be a sequence of real numbers and \(c\in {\mathbb R}\). When the power series \(\sum _{n=0}^\infty a_n(x-c)^n\) converges absolutely if \(|x-c|<R\) and diverges if \(|x-c|>R\), we call R the radius of convergence (the case where \(|x-c|\) equals the radius of convergence needs to be investigated separately). If \(a_n\not =0\) except for a finite number of terms, \(R:=\lim _{n\rightarrow \infty } \left| \frac{a_{n}}{a_{n+1}}\right| \) will be the radius of convergence, provided the limit exists. In fact, if the absolute ratio of adjacent terms
\(\displaystyle r:=\lim _{n\rightarrow \infty }\left| \frac{a_{n+1}(x-c)^{n+1}}{a_n(x-c)^n}\right| =\frac{|x-c|}{R}\)
is \(0\le r<1\), it converges, and if \(1<r\le \infty \), it diverges.
Example 34
For
\(\displaystyle f(x)=\sum _{n=1}^\infty \frac{x^n}{n},\)
the absolute ratio of adjacent terms is
\(\displaystyle \left| \frac{x^{n+1}/(n+1)}{x^n/n}\right| =\frac{n}{n+1}|x|\approx |x|\)
for sufficiently large n. Therefore, it converges absolutely if \(|x|<1\). When investigating the case of \(|x|=1\), it becomes
\(\displaystyle \sum _{n=1}^\infty \left| \frac{x^n}{n}\right| =\sum _{n=1}^\infty \frac{1}{n}=\infty ,\)
so it does not converge absolutely when \(|x|=1\). Therefore, we can set the open set of the domain of f to be \(U=(-1,1)\). \(\blacksquare \)
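The ratio \(R=\lim |a_n/a_{n+1}|\) can be estimated numerically; a sketch (ours, with illustrative coefficient sequences):

```python
import math

def radius_estimate(a, n):
    """Ratio |a_n / a_{n+1}| at a (large) index n, approximating R."""
    return abs(a(n) / a(n + 1))

# a_n = 2^n gives R = 1/2; a_n = 1/n gives R = 1; a_n = 1/n! gives R = infinity
r_half = radius_estimate(lambda n: 2.0**n, 10)                 # -> 0.5
r_one = radius_estimate(lambda n: 1 / n, 10**6)                # -> about 1
r_big = radius_estimate(lambda n: 1 / math.factorial(n), 50)   # about 51, grows with n
```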
Since taking the absolute value of each term makes it non-negative, absolute convergence is a mode of convergence that does not depend on the order of summation. However, what problems can arise with convergence that does depend on the order of summation (conditional convergence)?
Example 35
If the series \(\displaystyle \sum _{n=1}^\infty (-1)^{n-1}\frac{1}{n}\) is summed in the order of
\(\displaystyle 1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\cdots , \qquad (4.7)\)
it becomes \(\log 2\). In fact, if we denote the n-th partial sum of (4.7) as \(S_n\), the equations
\(\displaystyle S_{2n}=\sum _{k=1}^{2n}\frac{(-1)^{k-1}}{k}=\sum _{k=n+1}^{2n}\frac{1}{k}\)
and
\(\displaystyle \lim _{n\rightarrow \infty }\sum _{k=n+1}^{2n}\frac{1}{k}=\lim _{n\rightarrow \infty }\frac{1}{n}\sum _{j=1}^{n}\frac{1}{1+j/n}=\int _0^1\frac{dx}{1+x}=\log 2\)
hold. On the other hand, if we first add the terms for \(n=1,2,4\), then the ones for odd numbers greater than or equal to 3, even numbers not divisible by 4 and greater than or equal to 6, and finally multiples of 4 greater than or equal to 8, (4.7) can be calculated to take a value different from \(\log 2\).
\(\blacksquare \)
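Example 35 can be illustrated numerically. The sketch below (ours) sums the series in the natural order, and then under a standard rearrangement (one positive term followed by two negative terms, which differs from the grouping described above) obtains a different limit, \(\frac{1}{2}\log 2\):

```python
import math

# Partial sums of the alternating harmonic series in the natural order -> log 2
N = 10**6
s = sum((-1)**(k - 1) / k for k in range(1, N + 1))

# Rearrangement: one positive term, then two negative terms -> (1/2) log 2
pos = (1 / k for k in range(1, 10**6, 2))        # 1, 1/3, 1/5, ...
neg = (1 / k for k in range(2, 2 * 10**6, 2))    # 1/2, 1/4, 1/6, ...
t = 0.0
for _ in range(200000):
    t += next(pos) - next(neg) - next(neg)
```

The same terms are used in both sums; only the order differs, yet the limits differ, which is exactly the danger of conditional convergence.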
Although the proof is omitted, it is known that any series that converges conditionally can be made to converge to any real number by changing the order of its sum (Riemann’s rearrangement theorem).
Example 36
In \(U:={\mathbb R}^2\), the series
\(\displaystyle \sum _{m=0}^\infty \sum _{n=0}^\infty \frac{x^m y^n}{m!\,n!}\)
converges absolutely. In fact, for fixed (x, y), the ratio of the absolute values of any two adjacent terms converges to 0. Therefore, we can rearrange the order of the terms, and we obtain
\(\displaystyle \sum _{m=0}^\infty \frac{x^m}{m!}\sum _{n=0}^\infty \frac{y^n}{n!}=e^x e^y=e^{x+y},\)
so the function \(f: U\rightarrow {\mathbb R}\), \(f(x,y)=e^{x+y}\), is an analytic function. \(\blacksquare \)
Here, if a function can be differentiated r times (\(r\ge 0\)) and its r-th derivative is continuous, the function is said to be of class \(C^r\). Analytic functions, on the other hand, can be differentiated term by term without changing the radius of convergence, so the derivative of an analytic function is again an analytic function. That is, analytic functions are of class \(C^\infty \). Moreover, an analytic function can be expanded into a power series in only one way, namely its Taylor series.
Note, however, that a function being of class \(C^\infty \) does not necessarily mean that it is an analytic function.
Example 37
The function
\(\displaystyle f(x)=\left\{ \begin{array}{ll} e^{-1/x^2},&{}x\not =0\\ 0,&{}x=0 \end{array} \right. \)
is of class \(C^\infty \) but not analytic. In fact, the Taylor expansion at \(x=0\) results in \(a_r=0\) for all \(r\in {\mathbb N}\) (Exercise 31), so if f were analytic, the uniqueness of the Taylor expansion would force \(f=0\) near \(x=0\), a contradiction. \(\blacksquare \)
In Chap. 8, we will assume that the average likelihood ratio \(K(\theta )={\mathbb E}_X[\log \frac{p(X|\theta _*)}{p(X|\theta )}]\) and the prior distribution \(\varphi (\theta )\) are analytic functions on \(\theta \in \Theta \) and proceed with the discussion. In this case, for the power series with real coefficients \(a_r\)
\(\displaystyle \sum _{r\in {\mathbb N}^d}a_r(x-b)^r,\)
we considered whether \(\sum _{r\in {\mathbb N}^d}|a_r|\ |(x-b)^r|\) is finite. In this book, we further assume that the likelihood ratio \(f(x,\theta )=\log \frac{p(x|\theta _*)}{p(x|\theta )}\) is also an analytic function of \(\theta \in \Theta \) for each x. However, extending the notion of analyticity to this case requires some preparation: we now regard the coefficients as functions \(a_r: \mathcal{X}\rightarrow {\mathbb R}\) and use a norm of \(a_r\).
The set V with the properties
\(f,g\in V\ \Longrightarrow \ f+g\in V\)
and
\(\alpha \in {\mathbb R},\ f\in V\ \Longrightarrow \ \alpha f\in V\)
is called a linear space. In a linear space, we call \(\Vert \cdot \Vert : V\rightarrow {\mathbb R}\) that satisfies the following conditions a norm of V: for \(\alpha \in {\mathbb R}\), \(f,g\in V\),
1. \(\Vert f\Vert \ge 0\), and \(\Vert f\Vert =0\Longleftrightarrow f=0\);
2. \(\Vert \alpha f\Vert =|\alpha |\,\Vert f\Vert \);
3. \(\Vert f+g\Vert \le \Vert f\Vert +\Vert g\Vert \).
In this book, we denote the set of \(f: \mathcal{X}\rightarrow {\mathbb R}\) for which
\(\displaystyle \Vert f\Vert _2:=\left( \int _{\mathcal{X}} f(x)^2\, q(x)\,dx\right) ^{1/2}\)
is finite as \(L^2(q)\), where the true distribution q is used.
Here, the absolute value \(|\cdot |\) is the norm of the one-dimensional Euclidean space \(\mathbb R\), and the norm \(\Vert \cdot \Vert _2\) is a norm of the linear space \(L^2(q)\) (Exercise 38). We call a power series an analytic function taking real values when \(a_r\in {\mathbb R}\) and an analytic function taking values in \(L^2(q)\) when \(a_r\in L^2(q)\); in this book, we simply call the former an analytic function. Also, writing either norm as \(\Vert \cdot \Vert \), the set of x for which \(\sum _{r\in {\mathbb N}^d}\Vert a_r\Vert \ |(x-b)^r|\) is finite becomes the domain.
For example, if the log-likelihood ratio \(f(x,\theta )\),
\(\displaystyle f(x,\theta )=\sum _{r\in {\mathbb N}^d}a_r(x)(\theta -\theta _*)^r,\)
is an analytic function taking values in \(L^2(q)\), it means that there exists a convergence domain (the radius of convergence is non-zero) such that
\(\displaystyle \sum _{r\in {\mathbb N}^d}\Vert a_r\Vert _2\ |(\theta -\theta _*)^r|<\infty .\)
4.3 Law of Large Numbers and Central Limit Theorem
4.3.1 Random Variables
By preparing a universal set \(\Omega \) and a set of its events in advance, when
\(\{\omega \in \Omega \mid X(\omega )\in O\}\)
becomes an event for any open set O of \(\mathbb R\), we say that \(X: \Omega \ni \omega \mapsto X(\omega )\in {\mathbb R}\) is measurable. Also, X is called a random variable that takes values in \(\mathbb R\). However, the way to determine the probability needs to be defined separately.
Example 38
When \(\Omega =\{1,2,3,4,5,6\}\) and \(X(\omega )=(-1)^\omega \), it is necessary that at least \(\{1,3,5\}\) and \(\{2,4,6\}\) are events. Indeed, starting from the empty set \(\{\}\), the universal set \(\Omega \), and these two sets, the operations of union, intersection, and complement generate no sets other than these four. Also, by calculating the set of \(\omega \in \Omega \) such that \(X(\omega )\in (0,1)\), the set of \(\omega \in \Omega \) such that \(X(\omega )\in (-2,1)\), etc., we can see that for any open set O, the set of \(\omega \in \Omega \) with \(X(\omega )\in O\) is one of those four. The random variable X only determines the events; the probability needs to be specified separately according to the axioms. \(\blacksquare \)
Random variables can be defined not only as \(\Omega \rightarrow {\mathbb R}\). If \(\eta : \Omega \rightarrow C(K)\) is measurable, where C(K) is the set of continuous functions defined on a compact set K, then \(\eta \) is said to be a random variable that takes values in C(K). Rather than a random variable, it can also be seen as a random function. In defining measurability, open sets of C(K) are defined using the distance given by the uniform norm.
4.3.2 Order Notation
First, we shall define the limit of a sequence of real numbers, which appears frequently in this book.
An infinitely long sequence of real numbers \(\{a_n\}\) is said to converge to \(\alpha \) as \(n\rightarrow \infty \), written \(\lim _{n\rightarrow \infty }a_n=\alpha \), if for any \(\epsilon >0\), \(|a_n-\alpha |<\epsilon \) holds except for a finite number of n.
Also, for a function g(n) of positive integer n such as \(g(n)=1,n,n^2\), if \(|g(n)a_n|<\epsilon \) holds for any \(\epsilon >0\) except for a finite number of n, i.e., if \(g(n)a_n\) converges to 0 as \(n\rightarrow \infty \), we write \(a_n=o(\frac{1}{g(n)})\). On the other hand, if there exists an \(M>0\) such that \(|g(n)a_n|<M\) holds except for a finite number of n, i.e., if \(g(n)a_n\) is bounded, we write \(a_n=O(\frac{1}{g(n)})\). For example, if it is O(1/n), it is also o(1).
4.3.3 Law of Large Numbers
Next, we will examine whether the sequence of probabilities \(\{P(A_n)\}\) for a sequence of events \(\{A_n\}\) converges to 1. When the probability \(P(|X_n-\alpha |<\epsilon )\) converges to 1 as \(n\rightarrow \infty \) for any \(\epsilon >0\), the sequence of random variables \(\{X_n\}\) is said to stochastically converge to \(\alpha \), and we write it as \(X_n\xrightarrow {P} \alpha \).
The Weak Law of Large Numbers is one of the most important theorems regarding stochastic convergence. Before introducing it, we shall show an important inequality.
Proposition 5
(Chebyshev’s Inequality) For a random variable X with mean \(\mu \) and variance \(\sigma ^2>0\), for any constant \(k>0\), the inequality
\(\displaystyle P(|X-\mu |\ge k\sigma )\le \frac{1}{k^2}\)
holds.
Proof
Define I so that \(I(A)=1\) when event A occurs and \(I(A)=0\) otherwise. Then, the following inequality holds:
\(\sigma ^2={\mathbb E}[(X-\mu )^2]\ge {\mathbb E}[(X-\mu )^2 I(|X-\mu |\ge k\sigma )]\ge k^2\sigma ^2\,{\mathbb E}[I(|X-\mu |\ge k\sigma )]=k^2\sigma ^2\, P(|X-\mu |\ge k\sigma ).\)
Dividing both ends by \(k^2\sigma ^2\) yields the claim.
\(\blacksquare \)
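Chebyshev’s inequality can be checked empirically for a non-normal distribution; a sketch (ours), using the exponential distribution with \(\lambda =1\) (mean 1, variance 1):

```python
import random
import statistics

# Empirical check of Proposition 5 for a skewed (non-normal) distribution:
# X ~ exponential(1), so mu = 1 and sigma = 1.
random.seed(0)
xs = [random.expovariate(1.0) for _ in range(100000)]
mu = statistics.fmean(xs)
sigma = statistics.pstdev(xs)
# Fraction of draws at least k standard deviations from the mean
tail = {k: sum(abs(x - mu) >= k * sigma for x in xs) / len(xs) for k in (2.0, 3.0)}
# Chebyshev guarantees tail[k] <= 1/k^2; the actual tails are much smaller
```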
Here, consider \(\left\{ X_{n}\right\} ^{\infty }_{n=1}\) and \(\left\{ \epsilon _{n}\right\} _{n=1}^{\infty }\) as sequences of random variables. When
\(\displaystyle \frac{X_n}{\epsilon _n} {\mathop {\longrightarrow }\limits ^{P}} 0\)
holds as \(n \rightarrow \infty \), we write \(X_{n}=o_{P}\left( \epsilon _{n}\right) \). Especially, when \(X_{n} {\mathop {\longrightarrow }\limits ^{P}} 0\) holds, we write \(X_{n}=o_{P}(1)\).
Moreover, when for any \(\delta >0\) there exists an \(M>0\) (which can depend on \(\delta \)) such that, except for a finite number of n,
\(P\left( \left| X_{n}\right| \le M\epsilon _{n}\right) \ge 1-\delta ,\)
we write
\(X_{n}=O_{P}\left( \epsilon _{n}\right) .\)
Especially, if \(P\left( \left| X_{n}\right| \le M\right) \ge 1-\delta \), we write \(X_{n}=O_{P}(1)\).
\(o_{P}\) and \(O_{P}\) have the following properties for sequences of random variables \(\left\{ \epsilon _{n}\right\} _{n=1}^{\infty }\) and \(\left\{ \delta _{n}\right\} _{n=1}^{\infty }\):
In particular, (4.9) implies (4.10).
Moreover, for \(a\in {\mathbb R}\) and a continuous function \(g: {\mathbb R}\rightarrow {\mathbb R}\), we have
\(X_n {\mathop {\longrightarrow }\limits ^{P}} a\ \Longrightarrow \ g(X_n) {\mathop {\longrightarrow }\limits ^{P}} g(a). \qquad (4.11)\)
Equations (4.8)–(4.10) are known as Slutsky’s theorem and Eq. (4.11) is known as the Continuous Mapping Theorem. For proofs, see [16], for example. The notation \(O_P, o_P\) is not commonly used in general statistics, but it is frequently used in Watanabe’s Bayesian theory, so it is necessary to understand it well.
Example 39
An independent sequence of random variables \(X_1,X_2,\dots \) such that \(X_n\sim N(0,1)\) is \(O_P(1)\). Also, a sequence of random variables \(X_1,X_2,\dots \) such that \(X_n\sim N(0,1/n)\) stochastically converges to 0, hence \(X_n=o_P(1)\). \(\blacksquare \)
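The second half of Example 39 can be visualized numerically; in the sketch below (ours, standard library only), the fraction of draws of \(X_n\sim N(0,1/n)\) falling inside \((-0.1,0.1)\) tends to 1 as n grows:

```python
import math
import random

# X_n ~ N(0, 1/n): as n grows, P(|X_n| < 0.1) -> 1, illustrating X_n = o_P(1).
random.seed(0)
fracs = []
for n in (10, 1000, 100000):
    xs = [random.gauss(0, 1 / math.sqrt(n)) for _ in range(10000)]
    fracs.append(sum(abs(x) < 0.1 for x in xs) / len(xs))
# fracs increases toward 1
```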
Proposition 6
(Weak Law of Large Numbers) For a sequence of independent and identically distributed random variables \(\{X_n\}\), the average \(\displaystyle Z_n:=\frac{{X_1+\cdots +X_n}}{n}\) stochastically converges to its expected value \(\mu \).
Proof
First, the mean and variance of \(Z_n\) are, respectively, \(\displaystyle {\mathbb E}{[}Z_{n}{]}= {\mathbb E}[\frac{X_1+\cdots +X_n}{n}]={\mathbb E}{[}X_{1}{]}=\mu \) and \(\displaystyle {\mathbb V}{[}Z_{n}{]}={\mathbb V}{[}\frac{X_1+\cdots +X_n}{n}{]}={\mathbb V}{[}X_{1}{]}/n= \sigma ^2/n\). Applying these to Proposition 5 with \(k\sigma /\sqrt{n}=\epsilon \), we obtain
\(\displaystyle P(|Z_n-\mu |\ge \epsilon )\le \frac{\sigma ^2}{n\epsilon ^2}\)
for any \(\epsilon >0\). Therefore, as \(n\rightarrow \infty \), the probability of the event \(|Z_n-\mu |\ge \epsilon \) approaches 0. \(\blacksquare \)
Example 40
We generated random numbers following a binomial distribution 200 times (\(n=200\)), calculated \(Z_i\) for each point up to \(i=1,\ldots ,n\), and checked the degree of convergence (see Fig. 4.2). We generated \(Z_{n}\) 8 times each for \(p=0.5\) and \(p=0.1\).
\(\blacksquare \)
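A sketch in the spirit of Example 40 (ours; Fig. 4.2 itself is not reproduced), checking that after \(n=200\) Bernoulli draws each of the 8 paths ends near p:

```python
import random

# Running averages Z_n of Bernoulli(p) draws approach p (Weak Law of Large
# Numbers).  We only inspect the endpoint Z_200 of each of the 8 paths.
random.seed(1)
n = 200
finals = {}
for p in (0.5, 0.1):
    finals[p] = []
    for _ in range(8):                   # 8 independent paths, as in the text
        z = sum(random.random() < p for _ in range(n)) / n
        finals[p].append(z)
```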
4.3.4 Central Limit Theorem
In the following, we denote the mean and variance of the true distribution q as \(\mu \) and \(\sigma ^2\), respectively. The Central Limit Theorem is, alongside the Law of Large Numbers, an important asymptotic property of a sequence of random variables \(\{X_n\}\).
Proposition 7
(Central Limit Theorem) For a sequence of independent random variables \(\{X_n\}\) each following the same distribution with mean \(\mu \) and variance \(\sigma ^2\), the distribution of
\(\displaystyle Y_n:=\frac{X_1+\cdots +X_n-n\mu }{\sqrt{n}\,\sigma } \qquad (4.12)\)
approaches the standard normal distribution as \(n\rightarrow \infty \).
This book will not prove this theorem, but we will confirm its meaning by giving examples of this theorem and its extensions. First, it should be noted that each of the random variables in the sequence \(\{X_n\}\) does not necessarily need to follow a normal distribution.
Example 41
(Application of the Central Limit Theorem) Setting \(n=100\), for each distribution q below, we generated \(m=500\) random samples of (4.12) and plotted the distribution of \(Y_n\) (see Fig. 4.3).
1. Standard normal distribution
2. Exponential distribution with \(\lambda =1\)
3. Binomial distribution with \(p=0.1\)
4. Poisson distribution with \(\lambda =1\)
Note that the exponential distribution is a distribution with a probability density function that is 0 for \(x< 0\) and
\(q(x)=\lambda e^{-\lambda x}\)
for \(x\ge 0\). The Poisson distribution takes values \(x=0,1,2,\ldots \), with probabilities \(q(x)=e^{-\lambda }\lambda ^x/x!\). The experiment was run using the following code:
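A minimal sketch of one of the four cases (ours, standard library only): the exponential distribution with \(\lambda =1\), for which \(\mu =\sigma =1\):

```python
import math
import random

# m = 500 draws of Y_n in (4.12) with n = 100, where q is the exponential
# distribution with lambda = 1 (mu = sigma = 1).  The other three cases of
# Example 41 only change how the X_i are sampled and the values of mu, sigma.
random.seed(0)
n, m = 100, 500
ys = []
for _ in range(m):
    xs = [random.expovariate(1.0) for _ in range(n)]
    ys.append((sum(xs) - n * 1.0) / (math.sqrt(n) * 1.0))   # (4.12)
mean = sum(ys) / m
var = sum((y - mean) ** 2 for y in ys) / m
# mean should be near 0 and var near 1, as for N(0, 1)
```

Plotting a histogram of `ys` (e.g. with Matplotlib) reproduces the qualitative content of Fig. 4.3.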
It can be seen that regardless of the shape of distribution q, even with \(n=100\), the shape is close to the standard normal distribution. \(\blacksquare \)
The above Central Limit Theorem assumed that \(X_1,\ldots ,X_n\) were each real numbers (one-dimensional), and assumed \(\mu \in {\mathbb R}\), \(\sigma ^2>0\). Similar assertions hold for two-dimensional and, in general, d-dimensional (\(d\ge 1\)) cases. Hereinafter, \(N(\mu ,\Sigma )\) denotes a d-dimensional normal distribution with mean \(\mu \in {\mathbb R}^d\) and covariance matrix \(\Sigma \in {\mathbb R}^{d\times d}\). The probability density function of \(X\sim N(\mu ,\Sigma )\) is as follows:
\(\displaystyle f(x)=\frac{1}{(2\pi )^{d/2}(\det \Sigma )^{1/2}}\exp \left\{ -\frac{1}{2}(x-\mu )^\top \Sigma ^{-1}(x-\mu )\right\} .\)
In general, when the distribution function \(F_n(x)\) of a sequence of real-valued random variables \(\{X_n\}\) converges to the distribution function \(F_X(x):=\int _{-\infty }^xq(t)dt\) of a random variable X at each continuity point x as \(n\rightarrow \infty \), i.e.,
\(\displaystyle \lim _{n\rightarrow \infty }F_n(x)=F_X(x), \qquad (4.13)\)
we say that \(\{X_n\}\) converges in distribution to X, and write this as \(X_n\xrightarrow {d} X\). If the probability density function followed by X is q, we sometimes write this as \(X_n\xrightarrow {d} q\). For example, the Central Limit Theorem can be written as \(Y_n\xrightarrow {d} N(0,1)\). And, it is known that (4.13) is equivalent to
\(\displaystyle \lim _{n\rightarrow \infty }{\mathbb E}_n[g(X_n)]={\mathbb E}_X[g(X)]\)
for any bounded and continuous function \(g: {\mathbb R}\rightarrow {\mathbb R}\) (Exercise 38), where \({\mathbb E}_n[\cdot ]\), \({\mathbb E}_X[\cdot ]\) are the operations of the mean with respect to the distribution functions \(F_n, F_X\) respectively.
Proposition 8
Consider independent random variables \(X_1,\ldots ,X_n\) with mean \(\mu \in {\mathbb R}^d\) and covariance matrix \(\Sigma \in {\mathbb R}^{d\times d}\) (they do not necessarily follow a normal distribution). Then, we have
\(\displaystyle \sqrt{n}\left( \frac{X_1+\cdots +X_n}{n}-\mu \right) \xrightarrow {d} N(0,\Sigma ).\)
On the other hand, for a random variable \(\eta _n: \Omega \rightarrow C(K)\) that takes values in C(K), the concept of a distribution function does not exist because C(K) is not a Euclidean space. Therefore, for any bounded and continuous function \(g: C(K) \rightarrow {\mathbb R}\), we define the convergence in distribution of the sequence \(\eta _1,\eta _2,\dots \) to a random variable \(\eta \) taking values in C(K) (\(\eta _n\xrightarrow {d} \eta \)) as
\(\displaystyle \lim _{n\rightarrow \infty }{\mathbb E}[g(\eta _n)]={\mathbb E}[g(\eta )].\)
4.4 Fisher Information Matrix
The Fisher information matrix represents the smoothness of the log-likelihood \(\log p(X|\theta )\) at each \(\theta \in \Theta \), and is an important measure for analyzing the relationship between the true distribution and the statistical model.
In this book, we assume the following conditions.
Assumption 2
1. The order of integration in \(\mathcal X\) and differentiation with respect to \(\theta \in \Theta \) in \(p(\cdot |\theta )\) can be exchanged.
2. For each \((x,\theta )\in \mathcal{X}\times \Theta \), the partial derivatives \(\displaystyle \frac{\partial ^2 \log p(x|\theta )}{\partial \theta _i\partial \theta _j}\), \(i,j=1,\ldots ,d\), exist.
The Fisher information matrix \(I(\theta )\) is defined as the covariance matrix of
\(\nabla \log p(X|\theta ),\)
and we denote \(I:=I(\theta _*)\in {\mathbb R}^{d\times d}\) for \(\theta _*\in \Theta _*\). Also, we define the matrix \(J:=J(\theta _*)\in {\mathbb R}^{d\times d}\) using
\(\displaystyle J(\theta ):={\mathbb E}_X[-\nabla ^2 \log p(X|\theta )].\)
Assuming regularity, there exists a unique \(\theta =\theta _*\) that minimizes \(D(q\Vert p(\cdot |\theta ))\), that is, minimizes \({\mathbb E}_X[-\log p(X|\theta )]\); there exists an open set containing \(\theta _*\) that is included in \(\Theta \); and \(\nabla ^2 {\mathbb E}_X[-\log p(X|\theta )]\) is positive definite at \(\theta =\theta _*\). Since \(\theta _*\) is a minimizer interior to \(\Theta \), \(\nabla {\mathbb E}_X[\log p(X|\theta )]\) is 0 at \(\theta =\theta _*\). We write this as
Therefore, if it is regular, the following holds from (4.16).
Example 42
Assume that the mean and variance of the true distribution q are \(\mu _{**}\) and \(\sigma ^2_{**}\), respectively (q is not necessarily a normal distribution). For the probability density function (normal distribution) with parameter \(\theta =(\mu ,\sigma ^2)\)
we shall calculate the matrices I, J. From
we obtain
Let \(A:={\mathbb E}_X[(X-\mu _{**})^3]\) and \(B:={\mathbb E}_X[(X-\mu _{**})^4]\), then the (1,1), (1,2), and (2,2) elements of (4.24) are respectively
and
Furthermore, by substituting \(\theta =\theta _*=(\mu _*,\sigma _*^2)\), (4.24) becomes as follows.
On the other hand, from (4.21), we obtain
and
Moreover, if it is regular, from (4.19), (4.23) becomes 0, so \((\mu _*,\sigma ^2_*)=(\mu _{**},\sigma ^2_{**})\). Therefore, we obtain
and
Furthermore, if it is realizable, the true distribution q is also normal, and since \(A=0\) and \(B=3(\sigma ^2_{**})^2\) (as per Example 2), (4.27) coincides with (4.28). \(\blacksquare \)
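The conclusion of Example 42 can be checked numerically. The following Python sketch (illustrative; the variable names and sample size are ours, not from the text) estimates I as the sample covariance of the score and J as the sample mean of the negative Hessian for a realizable normal model, where the two should approximately agree:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, s2 = 1.0, 4.0              # true mean and variance (realizable case)
n = 200_000
x = rng.normal(mu, np.sqrt(s2), size=n)

# Score vector: derivatives of log p(x | mu, sigma^2) w.r.t. (mu, sigma^2)
score = np.stack([(x - mu) / s2,
                  -0.5 / s2 + (x - mu) ** 2 / (2 * s2 ** 2)], axis=1)

# I: covariance matrix of the score
I_hat = np.cov(score.T)

# J: mean of the negative Hessian of log p(x | mu, sigma^2)
neg_hess = np.empty((n, 2, 2))
neg_hess[:, 0, 0] = 1.0 / s2
neg_hess[:, 0, 1] = neg_hess[:, 1, 0] = (x - mu) / s2 ** 2
neg_hess[:, 1, 1] = -0.5 / s2 ** 2 + (x - mu) ** 2 / s2 ** 3
J_hat = neg_hess.mean(axis=0)

print(I_hat)   # approx [[1/s2, 0], [0, 1/(2 s2^2)]]
print(J_hat)   # approx the same matrix in the realizable case
```

Replacing the normal sampler with a non-normal true distribution of the same mean and variance should make the estimates drift apart, mirroring the difference between (4.27) and (4.28).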
Proposition 9
When the true distribution q is realizable for the statistical model \(\{p(\cdot |\theta )\}_{\theta \in \Theta }\) and is regular, \(I=J\) holds.
Proof
Since it is realizable, \(q=p(\cdot |\theta _*)\), and we can write
Furthermore, from the first condition of Assumption 2, we have
and from the equation where we substitute \(\theta =\theta _*\) into (4.16), we can write
Furthermore, since it is regular, we can apply (4.18), and the proposition follows. \(\blacksquare \)
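The information equality at the heart of this proof can be written out step by step; the following is a standard derivation sketch for the realizable case \(q=p(\cdot |\theta _*)\), using the exchange of integration and differentiation from Assumption 2. Differentiating the normalization \(\int _\mathcal{X} p(x|\theta )dx=1\) once gives
$$\begin{aligned} \int _\mathcal{X} \nabla p(x|\theta _*)dx=0 \;\Longrightarrow \; {\mathbb E}_X[\nabla \log p(X|\theta _*)]=0, \end{aligned}$$
and differentiating once more, using \(\nabla ^2 p/p=\nabla ^2\log p+\nabla \log p\,(\nabla \log p)^\top \),
$$\begin{aligned} 0=\int _\mathcal{X} \nabla ^2 p(x|\theta _*)dx ={\mathbb E}_X\!\left[ \nabla ^2\log p(X|\theta _*)+\nabla \log p(X|\theta _*)\,\nabla \log p(X|\theta _*)^\top \right] , \end{aligned}$$
so the covariance of the score equals \(-{\mathbb E}_X[\nabla ^2\log p(X|\theta _*)]\), that is, \(I=J\).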
Example 43
In Example 42, if we change the parameter from \(\sigma ^2>0\) to \(\sigma \not =0\), the same distribution is obtained at \(\theta =(\mu ,\sigma )\) and \(\theta =(\mu ,-\sigma )\) (the two parameters are homogeneous), but
and
follow, and the values of \(J(\theta )\) at the two parameters do not coincide in general; only when \(\mu =\mu _{**}\) do they coincide at \(\pm \sigma \). \(\blacksquare \)
When limited to the exponential family, using the notation of Sect. 2.4, given that \(p(x|\theta )=u(x)\exp \{v(\theta )^\top w(x)\}\) and \(\nabla \log p(x|\theta )=\nabla v(\theta )^\top w(x)\), the Fisher information matrix can be written as
and
Example 44
In Example 42, the normal model can be written as an exponential family with four components: \(\displaystyle u(x)=\frac{1}{\sqrt{2\pi }}\), \(\displaystyle v(\theta )=[\frac{1}{\sigma ^2},\frac{\mu }{\sigma ^2},\frac{\mu ^2}{\sigma ^2},\log \sigma ^2]^\top \), and \(\displaystyle w(x)=[-\frac{x^2}{2},{x}, -\frac{1}{2},-\frac{1}{2}]^\top \) (see Example 16). Hence,
can be obtained. We can calculate \({\mathbb E}_X[\cdot ]\) and \({\mathbb V}_{X}[\cdot ]\) using these and the true model. \(\blacksquare \)
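As a sanity check, the exponential family form \(\log u(x)+v(\theta )^\top w(x)\) from Example 44 should reproduce the normal log-density. A short Python sketch (illustrative; the function names are ours):

```python
import math

def log_p_expfam(x, mu, s2):
    """Normal log-density via log u(x) + v(theta)^T w(x) (Example 44 components)."""
    log_u = -0.5 * math.log(2 * math.pi)
    v = [1 / s2, mu / s2, mu ** 2 / s2, math.log(s2)]
    w = [-x ** 2 / 2, x, -0.5, -0.5]
    return log_u + sum(vi * wi for vi, wi in zip(v, w))

def log_p_direct(x, mu, s2):
    """Normal log-density written directly."""
    return -0.5 * math.log(2 * math.pi * s2) - (x - mu) ** 2 / (2 * s2)

print(abs(log_p_expfam(1.3, 0.5, 2.0) - log_p_direct(1.3, 0.5, 2.0)))  # ~0
```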
Notes
- 1.
The method of obtaining the inverse matrix by operations 1, 2, and 3 is called Gaussian elimination.
- 2.
For \(u,v\in {\mathbb C}\), \(\overline{uv}=\overline{u}\cdot \overline{v}\) holds.
- 3.
It can be understood as a random variable.
- 4.
It is equivalent to the existence of some \(N(\epsilon )\) for any \(\epsilon \) such that \(|a_n-\alpha |<\epsilon \) for \(n\ge N(\epsilon )\).
- 5.
There is also a Strong Law of Large Numbers, which states that under the same conditions, there is almost sure convergence, not just convergence in probability, but this book does not deal with almost sure convergence.
- 6.
For \(a,b\in {\mathbb R}\) and a random variable X, we have \({\mathbb E}{[}aX+b{]}=a{\mathbb E}{[}X{]}+b\), \({\mathbb V}{[}aX+b{]}=a^2{\mathbb V}{[}X{]}\).
- 7.
The concept of distribution function applies not only to the one-dimensional case \(F: {\mathbb R}\ni x\mapsto \int _{-\infty }^x q(t)dt\), but also to the two-dimensional case \(F: {\mathbb R}^2\ni (x,y)\mapsto \int _{-\infty }^x\int _{-\infty }^y q(s,t)dsdt\).
Appendix: Proofs of Propositions
Proof of Proposition 4
When matrix A is symmetric, the eigenvectors corresponding to different eigenvalues are orthogonal (their inner product is 0). Indeed, if \(Au=\lambda u\), \(Au'=\lambda ' u'\), and \(\lambda \not =\lambda '\), we have
$$\begin{aligned} \lambda \langle u, u'\rangle =\langle Au, u'\rangle =\langle u, Au'\rangle =\lambda '\langle u, u'\rangle , \end{aligned}$$
which results in \(\langle u, {u'}\rangle =0\). If an eigenvalue is repeated, there exist as many linearly independent eigenvectors as its multiplicity, and we choose them to be orthogonal. Moreover, we normalize all eigenvectors to have magnitude 1. Assume that we have obtained eigenvalues \(\lambda _1,\dots ,\lambda _n\) and eigenvectors \(u_1,\dots ,u_n\) in this way, so that \(Au_i=\lambda _i u_i\) holds. Let \(U\in {\mathbb R}^{n\times n}\) be the matrix with columns \(u_1,\ldots ,u_n\), and let \(D\in {\mathbb R}^{n\times n}\) be the diagonal matrix with diagonal entries \(\lambda _1,\ldots ,\lambda _n\). We then have \(AU=UD\). Since U is an orthogonal matrix by construction (\(U^\top U=UU^\top =I_n\)), we can multiply by \(U^\top \) from the right to get \(A=UDU^\top \).
First, if \(z^\top Az\ge 0\) for any \(z\in {\mathbb R}^n\), then taking \(z=u_i\) gives \(\lambda _i u_i^\top u_i\ge 0\) and hence \(\lambda _i\ge 0\). Conversely, if \(\lambda _i\ge 0\) for all i, by denoting the matrix obtained by replacing each diagonal component of D with its square root as \(\sqrt{D}\), we have \(A=U\sqrt{D}\sqrt{D}U^\top =(\sqrt{D}U^\top )^\top \sqrt{D}U^\top \), and \(z^\top Az= (\sqrt{D}U^\top z)^\top \sqrt{D}U^\top z \ge 0\) holds for any \(z\in {\mathbb R}^n\).
Furthermore, if \(z^\top Az> 0\) for any \(z\not =0\), then \(\lambda _i u_i^\top u_i> 0\) and, since \(u_i^\top u_i\not =0\), we have \(\lambda _i>0\). Conversely, if \(\lambda _i>0\) for all i, then \(\sqrt{D}U^\top \) is nonsingular, so for any \(z\not =0\) we have \(\sqrt{D}U^\top z\not =0\) and \(z^\top Az=(\sqrt{D}U^\top z)^\top \sqrt{D}U^\top z >0\). \(\blacksquare \)
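The constructions in this proof (the eigendecomposition \(A=UDU^\top \) and the factorization through \(\sqrt{D}U^\top \)) can be mirrored numerically. A Python sketch using NumPy (illustrative; the random positive semidefinite A is our choice):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
A = B @ B.T                      # symmetric positive semidefinite by construction

lam, U = np.linalg.eigh(A)       # A = U D U^T with orthonormal columns of U
assert np.all(lam >= -1e-10)     # z^T A z >= 0 for all z <=> all eigenvalues >= 0

# R = sqrt(D) U^T, so that A = R^T R as in the proof
R = np.diag(np.sqrt(np.clip(lam, 0, None))) @ U.T
assert np.allclose(R.T @ R, A)
```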
Proof of Proposition 2
In the following, let \(f(x,\theta _*,\theta ):=\log \frac{p(x|\theta _*)}{p(x|\theta )}\).
1. Given any \(\theta _1,\theta _2\in \Theta _*\), relatively finite variance gives a constant \(c>0\) with \({\mathbb E}_X[f(X,\theta _1,\theta _2)^2]\le c\,{\mathbb E}_X[f(X,\theta _1,\theta _2)]\). Since \(\theta _1,\theta _2\in \Theta _*\), the right-hand side is 0, so \(f(\cdot ,\theta _1,\theta _2)\) is zero as a function, and \(\theta _1,\theta _2\) give the same distribution.
2. As q is realizable, we can set \(f(x,\theta _*,\theta )=\log \frac{q(x)}{p(x|\theta )}\). Arbitrarily choose \(\theta _*\in \Theta _*\) from the homogeneous \(\Theta _*\) and consider the limit of \({\mathbb E}_X[f(X,\theta _*,\theta )]=D(q|p(\cdot |\theta ))\) as \(\theta \rightarrow \theta _*\) in (2.22). For \(F(t):=t+e^{-t}-1\), \(t\in {\mathbb R}\), there exists \(|t^*|\le |t|\) such that \(F(t)=\frac{t^2}{2}e^{-t^*}\) from the Taylor expansion of F at \(t=0\). Here, note that
Then, we can see that
holds, where \(t_\theta (x)\) is a value satisfying \(|t_\theta (x)|\le |\log \frac{q(x)}{p(x|\theta )}|\). Also, from continuity (Assumption 1), as \(\theta \rightarrow \theta _*\), we have \(p(x|\theta )\rightarrow q(x)=p(x|\theta _*)\). Therefore, for any \(\epsilon >0\), there exists \(\delta >0\) such that
Hence, if \(|\theta -\theta _*|<\delta \), we can have
and the constant c in (2.22) can be bounded by 2.
3. Let \(g(\theta ):={\mathbb E}_X[f(X,\theta _*,\theta )]\) and \(h(\theta ):={\mathbb E}_X[f^2(X,\theta _*,\theta )]\). Both \(g(\theta )\) and \(h(\theta )\) take their minimum values \(g(\theta _*)=h(\theta _*)=0\) at \(\theta _*\in \Theta _*\). Hence, \(\nabla g(\theta _*)=\nabla h(\theta _*)=0\). Therefore, by the Taylor expansion, writing \(\nabla g(\theta _*)=\left( \frac{\partial g(\theta )}{\partial \theta _1}\big |_{\theta =\theta _*},\dots ,\frac{\partial g(\theta )}{\partial \theta _d}\big |_{\theta =\theta _*}\right) \), we have
and
for some \(\theta _1,\theta _2\) that exist between \(\theta \) and \(\theta _*\). Moreover, as \(\theta \rightarrow \theta _*\), we have \(\theta _1,\theta _2\rightarrow \theta _*\), and both can be approximated in the neighborhood of \(\theta _*\) by
On the other hand, since g is regular, the \(\theta _*\) that minimizes g is unique, and all eigenvalues of \(\nabla ^2 g(\theta _*)=\nabla ^2 D(q||p(\cdot |\theta ))|_{\theta =\theta _*}\) are positive. Also, by Proposition 4, \(\nabla ^2 h\) is non-negative definite in a neighborhood of \(\theta =\theta _*\). If we denote the smallest eigenvalue of the former by \(\lambda _{min}>0\) and the largest eigenvalue of the latter by \(\lambda _{max}\ge 0\), then we can write
Exercises 27–41
-
27.
For \(a, b, c, d \in {\mathbb R}\), prove that \(\overline{(a+bi)(c+di)}=\overline{a+bi}\cdot \overline{c+di}\). Also, for the eigenvalue \(\lambda \in {\mathbb C}\) and eigenvector \(u \in {\mathbb C}^n\) of matrix \(A\in {\mathbb R}^{n\times n}\), prove that \(A\overline{u}=\overline{Au}=\overline{\lambda u}=\overline{\lambda }\overline{u}\).
-
28.
For a matrix \(U=[u_1,\dots ,u_n]\in {\mathbb R}^{n\times n}\), where the inner product \(\langle u_i,u_j \rangle \) of each column is 1 when \(i=j\) and 0 otherwise, we call U an orthogonal matrix. Show that \(U^\top U=UU^\top =I_n\).
-
29.
Prove the following.
-
(a)
An open interval (a, b) is an open set, and a closed interval [a, b] is a closed set.
-
(b)
The set of all real numbers \(\mathbb R\) and the set of all integers \(\mathbb Z\) are closed sets.
-
(c)
The set \({\mathbb R}\cap {\mathbb Z}^C\), which is the set of all real numbers \(\mathbb R\) excluding the set of all integers \(\mathbb Z\), is an open set.
-
(d)
The set of all rational numbers \(\mathbb Q\) is neither an open set nor a closed set.
-
(e)
The region \(\{(x,y,z)\in {\mathbb R}^3\mid x^2+y^2+z^2< 1, z\ge 0\}\) is neither an open set nor a closed set.
-
30.
Based on the definition of the Maclaurin expansion, prove (4.4) and (4.5).
-
31.
Show that the function f in Example 37 is \(C^\infty \).
-
32.
Show that the absolute value \(|\cdot |\) is a norm on \(\mathbb R\). Also, show that \(L^2(q)\) is a linear space and that \(\Vert \cdot \Vert _2\) is a norm on it. [Hint] Work with the equivalence relation \(\sim \) defined by \(f\sim g \Longleftrightarrow \int _\mathcal{X}(f(x)-g(x))^2q(x)dx=0\).
-
33.
When tossing a coin with equal probability of heads or tails, what kind of event set should be prepared for the variable X to become a random variable, with \(X=1\) if heads appear and \(X=0\) if tails appear?
-
34.
Prove the two inequalities
$$\begin{aligned} \displaystyle {\mathbb E}[(X-\mu )^2]\ge {\mathbb E}[(X-\mu )^2I(|X-\mu |\ge k)]\ge k^2\cdot P(|X-\mu |\ge k). \end{aligned}$$ -
35.
Toss a coin with equal probability of heads or tails n times, and let \(a_i\) be the relative frequency of heads occurring up to the ith time, for \(1\le i\le n\). Write an R program that takes n as input, generates n random numbers following a binomial distribution, and outputs the sequence \(a_1,\ldots ,a_n\).
-
36.
When executing the following program with different values of m and n such as \(m=10,100\) and \(n=10,100\), different graphs are obtained. What kind of graphs can be generally obtained?
-
37.
Following the application example of Example 41, generate \(n=500\) random numbers approximately following the standard normal distribution from \(m=100\) sets of random numbers following the \(\chi ^2\) distribution with 2 degrees of freedom, and draw a graph similar to Fig. 4.3.
-
38.
Prove the following two propositions.
-
(a)
If \(X_{n} \xrightarrow {d} X\) and \(g: {\mathbb R} \rightarrow {\mathbb R}\) is bounded (there exists \(M>0\) such that \(|g(x)|<M\) for all \(x\in {\mathbb R}\)) and continuous, then \({\mathbb E}[g(X_{n})] \rightarrow {\mathbb E}[g(X)]\). The following fact may be used without proof: for arbitrarily fixed \(\epsilon >0\), we can choose continuity points \(a_{0}<a_{1}<\cdots <a_{k}\) of the distribution function of X that satisfy the following conditions:
$$\begin{aligned} P\left( X \le a_{0}\right) <\epsilon \ ,\ P\left( X>a_{k}\right) <\epsilon \end{aligned}$$and
$$\begin{aligned} \left| g(x)-g(a_{i})\right| <\epsilon \ ,\ x \in \left[ a_{i-1}, a_{i}\right] \ ,\ i=1,\dots ,k. \end{aligned}$$[Hint] Apply the function \(h: {\mathbb R} \rightarrow {\mathbb R}\), which takes the value 0 outside \((a_{0}, a_{k}]\) and a constant value within each \((a_{i-1}, a_{i})\), to the following inequality.
$$\begin{aligned} \left| {\mathbb E}[g(X_{n})]-{\mathbb E}[g(X)]\right| \le \left| {\mathbb E}[g(X_{n})]-{\mathbb E}[h(X_{n})]\right| +\left| {\mathbb E}[h(X_{n})]-{\mathbb E}[h(X)]\right| +\left| {\mathbb E}[h(X)]-{\mathbb E}[g(X)]\right| . \end{aligned}$$ -
(b)
For any bounded and continuous \(g: {\mathbb R} \rightarrow {\mathbb R}\), if \({\mathbb E}[g(X_{n})] \rightarrow {\mathbb E}[g(X)]\), then \(X_{n} \xrightarrow {d} X\). [Hint] It suffices to test suitable bounded continuous functions: let \(a\in {\mathbb R}\) be a continuity point of the distribution function of X, let \(m\ge 1\), and define
$$\begin{aligned} g_{a,m}(x)=\left\{ \begin{array}{ll} 1,&{}x\le a\\ -m(x-a)+1,&{}a< x <a+1/m\\ 0,&{}x\ge a+1/m. \end{array} \right. \end{aligned}$$For the function \(g_{a,m}: {\mathbb R}\rightarrow {\mathbb R}\), we have \({\mathbb E}[g_{a,m}(X_{n})] \rightarrow {\mathbb E}[g_{a,m}(X)]\) (\(n\rightarrow \infty \)), and \(F_X(a)\le {\mathbb E}[g_{a,m}(X)]\le F_X(a+\frac{1}{m})\) holds. Finally, use the fact that \(a\in {\mathbb R}\) is a continuity point of the distribution function.
-
39.
When the true distribution is regular with respect to the statistical model, show that for \(\theta _*\in \Theta _*\),
$$\begin{aligned} I(\theta _*)= {\mathbb E}_X[\nabla \log p(X|\theta _*)(\nabla \log p(X|\theta _*))^\top ] \end{aligned}$$holds.
-
40.
Under regularity, (4.23) becomes 0. Assuming \(\theta =(\mu ,\sigma ^2)=(\mu _*,\sigma ^2_*)=\theta _*\), show that (4.25) reduces to (4.27). Also, explain why (4.27) coincides with (4.28) in the realizable case.
-
41.
Perform the same derivation as in Example 44 for Example 17.
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Suzuki, J. (2023). Mathematical Preparation. In: WAIC and WBIC with R Stan. Springer, Singapore. https://doi.org/10.1007/978-981-99-3838-4_4
Print ISBN: 978-981-99-3837-7. Online ISBN: 978-981-99-3838-4.