Abstract
As a model for an on-line classification setting we consider a stochastic process \((X_{-n},Y_{-n})_{n}\), the present time-point being denoted by 0, with observables \(\ldots ,X_{-n},X_{-n+1}, \ldots , X_{-1}, X_0\) from which the pattern \(Y_0\) is to be inferred. In this classification setting, a number l of preceding observations may thus be used in addition to the present observation \(X_0\), taking into account a possible dependence structure as it occurs, e.g., in the ongoing classification of handwritten characters. We address the question of how the performance of classifiers improves when such additional information is used. For our analysis, a hidden Markov model is used. Letting \(R_l\) denote the minimal risk of misclassification using l preceding observations, we show that the difference \(\sup _k |R_l - R_{l+k}|\) decreases exponentially fast as l increases. This suggests that a small l might already lead to a noticeable improvement. To pursue this point we look at the use of past observations in kernel classification rules. Our practical findings in simulated hidden Markov models and in the classification of handwritten characters indicate that using \(l=1\), i.e. just the last preceding observation in addition to \(X_0\), can lead to a substantial reduction of the risk of misclassification. So, in the presence of stochastic dependencies, we advocate using \( X_{-1},X_0\) for finding the pattern \(Y_0\) instead of only \(X_0\), as one would in the independent situation.
1 Introduction
In pattern recognition, the following basic situation is considered: A random variable (X, Y) consists of an observed pattern \(X\in {\mathcal {X}} \), typically \( {\mathcal {X}} = {\mathbb {R}}^d\), from which we want to infer the unobservable class Y, which belongs to a given finite set M of classes. Consider the case that the distribution \(P^{(X,Y)}\) is known. Then the classification rule which chooses the class having maximum a posteriori probability given the observed pattern x has minimal risk of misclassification. This optimal rule is given by
$$\begin{aligned} g^*(x) = \mathop {\mathrm{argmax}}_{y \in M} P(Y=y|X=x), \end{aligned}$$
where \({{\mathrm{argmax}}}\) takes, in a measurable way, some value \(y^*\) with \(P(Y=y^*|X=x)= \max _{y}P(Y=y|X=x)\). The minimal risk of misclassification, often termed the Bayes risk, is given by
$$\begin{aligned} R = P\big (g^*(X) \ne Y\big ) = 1 - E\Big (\max _{y \in M} P(Y=y|X)\Big ). \end{aligned}$$
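As a concrete illustration of this rule, the a posteriori probabilities and the Bayes risk can be computed directly once \(P^{(X,Y)}\) is known. The following sketch is our own toy example, not from the paper: a hypothetical discrete joint distribution with three observation values and two classes; the names `joint`, `bayes_rule` and `bayes_risk` are illustrative choices.

```python
import numpy as np

# Hypothetical toy setup (not from the paper): two classes, three
# discrete observation values, known joint distribution P(X=x, Y=y).
# Rows: x in {0, 1, 2}; columns: y in {0, 1}.
joint = np.array([[0.30, 0.05],
                  [0.10, 0.15],
                  [0.05, 0.35]])

def bayes_rule(x):
    # argmax_y P(Y=y | X=x); the denominator P(X=x) cancels,
    # so we can maximize the joint probability directly.
    return int(np.argmax(joint[x]))

# Bayes risk: R = sum_x [ P(X=x) - max_y P(X=x, Y=y) ]
bayes_risk = float(np.sum(joint) - np.sum(joint.max(axis=1)))

print([bayes_rule(x) for x in range(3)])  # class chosen for each x
print(round(bayes_risk, 3))
```

The rule picks, for each observed x, the class with the largest joint probability; the Bayes risk is the total mass lost by these maxima.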
Even though in many problems of pattern recognition the distribution of \(P^{(X,Y)}\) will not be known, the Bayes risk is a quantity of major importance as it provides the benchmark behaviour against which any other procedure is judged.
Let us briefly recall the i.i.d. model of supervised learning which has provided a main direction of research, see, e.g., the monograph (Devroye et al. 1996). There, in addition to (X, Y), we have a learning sequence \((X_1',Y_1'),(X_2',Y_2'),\ldots ,(X_n',Y_n')\) of independent copies of (X, Y), i.e. having the same distribution. This sequence is sampled independently of (X, Y) and is used for learning purposes, in a statistical sense for the estimation of unknown distributions in order to construct the classification procedure.
In this paper we take a different approach which is motivated by an on-line classification setting which we model in the following way: There is given a stochastic process \((X_{-n},Y_{-n})_{n}\), the present time-point being denoted by 0, with observables in temporal order
from which the pattern \(Y_0\) is to be inferred. The time parameter n belongs to some set of the form \(\{0,1,\ldots ,m\}\) or, for mathematical purposes, to \({\mathbb {N}}_0= \{0,1,\ldots \}\). So in this classification setting, previous observations may be used to classify the present observation. If \((X_0,Y_0)\) is independent of the past \((X_{-n},Y_{-n})_{n \ge 1 }\) then clearly previous observations carry no information on \(Y_0\) and our optimal classification would be given by \({{\mathrm{argmax}}}_{y} P(Y_0=y|X_0=x_0)\).
But in a variety of classification problems we encounter dependence. Looking e.g. at the on-line classification of handwritten characters, the dependence structure in natural language could be taken into account. In this situation, \(X_0\) would be the current handwritten character to be classified, \(Y_0\) the unknown true character, the foregoing handwritten character would be \(X_{-1}\) and the unknown true character \(Y_{-1}\), and in general \(X_{-n}\) would be the n-th character preceding \(X_0\), with unknown \(Y_{-n}\). So, there is a well-known dependence between the \(Y_{-n}\)’s, described by linguists using Markov models (see e.g., Institute for Defense Analyses 1980 for early discussions), and this dependence is of course inherited by the \(X_{-n}\)’s. A popular model for this situation is given by a hidden Markov model, which we shall also use in this paper.
Coming back to the general model, we prescribe the use of the present and, in addition, the last l preceding observables. A classification rule with memory l then takes the form
$$\begin{aligned} g(x_{-l},\ldots ,x_0) = \delta (x_{-l},\ldots ,x_0) \end{aligned}$$
for some measurable \(\delta :{\mathcal {X}}^{l+1} \rightarrow M\). The optimal rule is given by
$$\begin{aligned} g_l(x_{-l},\ldots ,x_0) = \mathop {\mathrm{argmax}}_{y \in M} P(Y_0=y|X_{-l}=x_{-l},\ldots ,X_0=x_0) \end{aligned}$$
with Bayes risk
$$\begin{aligned} R_l = P\big (g_l(X_{-l},\ldots ,X_0) \ne Y_0\big ). \end{aligned}$$
Obviously
$$\begin{aligned} R_l \ge R_{l+1} \end{aligned}$$
when both risks are computed within the same model.
Assume that we have such a process \((X_{-n},Y_{-n})_{n \in {\mathbb {N}}_0}\) with full past for our mathematical model. By martingale convergence it follows that for \(l \rightarrow \infty \)
$$\begin{aligned} P(Y_0=y\,|\,X_{-l},\ldots ,X_0) \rightarrow P(Y_0=y\,|\,X_{-n}, n \in {\mathbb {N}}_0) \quad \text {a.s. for each } y, \end{aligned}$$
so that the risks \(R_l\) converge.
Here it is important to point out that this paper centers around the behaviour of the optimal classification procedure in dependence on l, the number of past observations used. This differs markedly from one of the main lines of research in the i.i.d. model of supervised learning where the focus is on the behaviour of classification procedures in dependence on n, the size of the training sequence.
To investigate the behaviour of procedures which incorporate preceding information into classification rules we will use the setting of hidden Markov models. This class of models has been of considerable interest in the theory and applications of pattern recognition, see the monographs by MacDonald and Zucchini (1997) and by Huang et al. (1990) from a more practical viewpoint. It provides a class which allows good modelling for various problems with dependence and still may be handled well from the analytical, the algorithmic, and the statistical point of view, see the monograph by Cappé et al. (2005). The applications range from biology to speech recognition to finance; the above monographs contain a wealth of such examples.
A theoretical contribution to pattern recognition for such models was given by Holst and Irle (2001) where the asymptotic risk of misclassification for nearest neighbor rules in dependent models including hidden Markov models was derived. Similar models were treated in Ryabko (2006) to obtain consistency for certain classes of procedures, i.e. convergence of the risk of misclassification to the Bayes risk. As consistency for classification follows from consistency in the corresponding regression problem, see e.g. (Devroye et al. 1996, 6.7), any result on regression consistency yields a result on classification consistency, and a wealth of such results is available, e.g. under mixing conditions. All these results invoke the convergence of the size n of a training sequence to infinity and do not cover the topic of this paper. Closer to our paper is the problem of predicting \(Y_0\) from \((X_0,Y_{-1},\ldots ,Y_{-l})\) for stationary and ergodic time series, see e.g. (Györfi et al. 2002, Chap. 27). Our treatment differs since in on-line pattern recognition we do not know \((Y_{-1},\ldots ,Y_{-l})\) (we only have guesses of it); we only observe \((X_0,X_{-1},\ldots ,X_{-l})\).
The hidden Markov model as it will be used in this paper takes the following form. We assume that for each m we have, written in their temporal order, observables \(X_{-m},X_{-m+1},\ldots , X_{-1}, X_0\) and unobservables \(Y_{-m},Y_{-m+1},\ldots , Y_{-1}, Y_0\). The unobservables form a Markov chain. The observables are conditionally independent given the unobservables in the form of
$$\begin{aligned} P(X_{-m}\in B_{-m},\ldots ,X_{0}\in B_{0}\,|\,Y_{-m},\ldots ,Y_{0}) = \prod _{n=0}^{m} Q(B_{-n},Y_{-n}) \end{aligned}$$
for some stochastic kernel Q and are not Markovian in general. This stochastic kernel and the transition matrix of the chain are assumed to be the same for each m. But we allow for the flexibility that, for each m, a different initial distribution, i.e. a different distribution of \(Y_{-m}\), may occur. Note that m stands for the time point in the past where our model would be started, and the distribution of \(Y_{-m}\) would not be known.
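To make the model concrete, the following sketch samples from such a hidden Markov model. The two-state transition matrix and the Gaussian emission densities are illustrative assumptions of ours, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not taken from the paper): a two-state
# chain and Gaussian emission densities, i.e. Q(., y) = N(mu_y, 1).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])   # transition matrix p_ij
mus = np.array([-1.0, 1.0])  # emission mean per hidden class

def sample_hmm(m, init, rng):
    """Sample (X_{-m}, Y_{-m}), ..., (X_0, Y_0): the unobservables Y
    form a Markov chain, and each X is drawn conditionally on its Y."""
    ys = np.empty(m + 1, dtype=int)
    ys[0] = rng.choice(2, p=init)          # initial distribution of Y_{-m}
    for t in range(1, m + 1):
        ys[t] = rng.choice(2, p=P[ys[t - 1]])
    xs = rng.normal(mus[ys], 1.0)          # conditionally independent given ys
    return xs, ys

xs, ys = sample_hmm(200, init=np.array([0.5, 0.5]), rng=rng)
print(len(xs), set(ys.tolist()) <= {0, 1})
```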
To be completely precise we would have to use the notation \(Y^{(m)}_{-m},\ldots , Y^{(m)}_0\) since, due to our flexibility in the initial distribution, the distributions of \(Y^{(m)}_{-m},\ldots , Y^{(m)}_0\) and \(Y^{(m+1)}_{-m},\ldots , Y^{(m+1)}_0\) need not coincide. Hence also \(R_l \ge R_{l+1}\) does not hold in general where \(R_l\) is computed in a model started at some time \(-m,m\ge l\), and \(R_{l+1}\) in a model with a possibly different initial distribution. But all our bounds will only involve the transition matrix and the stochastic kernel, which do not depend on the index m. So we shall omit this upper index in order not to overburden our notations.
We assume that the transition matrix of the chain is such that there exists a unique stationary probability distribution \(\pi \), characterized by the property that if \(Y_{-m}\) has the distribution \(\pi \) then all later \(Y_{-m+k},~k\ge 0,\) have the same distribution \(\pi \). For our chain with full past \((X_{-n},Y_{-n})_{n \in {\mathbb {N}}_0}\) we consider the stationary setting where each \(Y_{-n}\) has the same distribution \(\pi \). Then of course \(R_l^* \ge R_{l+1}^*\) and \(\lim _{l}R_l^* = R^*\), where risks in the stationary case are denoted with an additional \(^*\).
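The stationary distribution \(\pi \) can be computed numerically from the transition matrix by solving \(\pi P = \pi \) together with the normalization \(\sum _i \pi _i = 1\). A minimal sketch with an illustrative two-state matrix of our own choosing:

```python
import numpy as np

# Illustrative transition matrix (all entries > 0, so a unique
# stationary distribution pi with pi @ P = pi exists).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Solve pi @ P = pi, sum(pi) = 1 as a linear system:
# (P.T - I) pi = 0 with the normalization row appended.
A = np.vstack([P.T - np.eye(2), np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)  # stationary distribution, here (2/3, 1/3)
```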
Without loss of generality we let the probability measures \(Q(\cdot ,y)\) be given by densities \(f_y\) with respect to some \(\sigma \)-finite measure \(\mu \) on \({\mathcal {X}}\). So we have for all n
$$\begin{aligned} P(X_{-n}\in B\,|\,Y_{-n}=y) = Q(B,y) = \int _B f_y \,\mathrm{d}\mu \end{aligned}$$
for measurable \(B \subseteq {\mathcal {X}}\).
This provides a unified treatment for the case of discrete \({\mathcal {X}}\) where \(\mu \) might be the counting measure, and for the case of Lebesgue densities where \({\mathcal {X}} = {\mathbb {R}}^d\) and \(\mu \) might be d-dimensional Lebesgue measure.
In Sect. 2 we shall show under a suitable assumption that \(\lim _l R_l\) exists and is independent of the particular sequence of initial distributions, hence \(\lim _l R_l = R^*\). Furthermore this convergence is exponentially fast and we provide a bound for \(\sup _k |R_l - R_{l+k}|\) in this respect. Let us remark that, as we are looking backwards in time, the usual geometric ergodicity forward in time does not seem to yield an immediate proof. In Sect. 3 we introduce kernel classification rules with memory and discuss their theoretical and practical performance. Our findings indicate that it might be useful to include a small number l of preceding observations, starting with \(l=1\), to increase the performance of classification rules with an acceptable increase in computational complexity. Various technical proofs are given in Sect. 4.
2 Exponential convergence
We consider a hidden Markov model as described in the Introduction. For this model we make the following assumption:
(A) All entries \(p_{ij}, i,j \in M\), in the transition matrix are \( >0\). All densities are \( >0\) on \({\mathcal {X}}\).
(A) will be assumed to hold throughout Sects. 2 and 4. It implies the finiteness of the following quantities which will be used in our bounds.
Remark 2.1
Set
$$\begin{aligned} \alpha = \max _{i,\iota ,\kappa \in M} \frac{p_{i\iota }}{p_{i\kappa }} \quad \mathrm{and} \quad \alpha (x) = \max _{i,\iota ,\kappa \in M} \frac{p_{i\iota }f_{\iota }(x)}{p_{i\kappa }f_{\kappa }(x)}\,. \end{aligned}$$
Then \(\; 1 \le \alpha , \alpha (x) < \infty \).
Furthermore, with |M| denoting the number of classes, let
$$\begin{aligned} \eta = \frac{1}{1 + \big (|M|-1\big )\alpha } \quad \mathrm{and} \quad \eta (x) = \frac{1}{1 + \big (|M|-1\big )\alpha (x)}\,. \end{aligned}$$
Then \(\; 0 < \eta , \eta (x) \le 1/2\).
The following result provides the main technical tool. Its proof will be given in Sect. 4. We use the notation \( x_{-n}^{0}\) for \( (x_{0},\ldots ,x_{-n}) \) and in the same manner we use \( X_{-n}^{0}\).
Theorem 2.2
Let \(l,n \in {\mathbb {N}}\). Consider a hidden Markov model which starts in some time point \(-m < \min \{-l, -n\}\). Let \( A\subseteq M \) and fix \( x_{0},\ldots ,x_{-n}\in {\mathcal {X}} \). Set
$$\begin{aligned} m_{l}^{+} = \max _{j \in M} P\big (Y_{0}\in A\,|\,Y_{-l-1}=j, X_{-n}^{0}=x_{-n}^{0}\big ) \end{aligned}$$
and
$$\begin{aligned} m_{l}^{-} = \min _{j \in M} P\big (Y_{0}\in A\,|\,Y_{-l-1}=j, X_{-n}^{0}=x_{-n}^{0}\big )\,. \end{aligned}$$
Then for all \(l\in {\mathbb {N}}\)
$$\begin{aligned} m_{l}^{+}-m_{l}^{-} \le \prod _{j=-l}^{0}\big (1-2\hat{\eta }_{j}\big ), \end{aligned}$$
where \( \hat{\eta }_{j} = \eta (x_{j}) \) for \( j\in \{-n,\ldots ,0\}\) and \( \hat{\eta }_{j} = \eta \) for \( j\notin \{-n,\ldots ,0\}\).
In the following corollary the probabilities \(P\big (Y_{0}\in A|X_{-l}^{0}=x_{-l}^{0}\big )\) and \(P\big (Y_{0}\in A|X_{-l-k}^{0}=x_{-l-k}^{0}\big )\) are treated. The first will pertain to a hidden Markov model which starts in some time point \(-m_1 < -l\), the second to one which starts in some time point \(-m_2 < -l-k\). Note that terms of the form \(P\big (Y_{0}\in A\,|\,Y_{-l-1}=j, X_{-l}^{0}=x_{-l}^{0}\big )\) are identical in both models due to the identical transition matrix and the identical kernel Q.
Corollary 2.3
Let \( l,k\in {\mathbb {N}}\), \( A\subseteq M \) and \( x_{0},x_{-1},\ldots , x_{-l-k} \in {\mathcal {X}} \). Then
$$\begin{aligned} \big |P\big (Y_{0}\in A|X_{-l}^{0}=x_{-l}^{0}\big ) - P\big (Y_{0}\in A|X_{-l-k}^{0}=x_{-l-k}^{0}\big )\big | \le \prod _{j=-l}^{0}\big (1-2\eta (x_{j})\big )\,. \end{aligned}$$
Proof
We obtain
using Theorem 2.2. \(\square \)
We now introduce the constants used for the exponential bound.
Remark 2.4
(i) Set
$$\begin{aligned} \beta =\min _{k \in M} \int \frac{1}{ 1 + \big (|M|-1\big )\alpha (x)} f_{k}(x)\mu (\mathrm{d}x) \quad \mathrm{and} \quad \gamma =1-2\beta \,. \end{aligned}$$
Then
$$\begin{aligned} 0 < \beta \le \frac{1}{2} \quad \mathrm{and} \quad 0 \le \gamma < 1. \end{aligned}$$
For all \(k \in M\)
$$\begin{aligned} \int \big (1 - 2\eta (x)\big ) f_{k}(x)\mu (\mathrm{d}x) \le \gamma . \end{aligned}$$
(ii) For the following result we need additional constants which arise from basic Markov process theory. A transition matrix is called uniformly ergodic if there exists a unique stationary probability distribution \(\pi \) and there exist constants \(a>0,~0<b<1,\) such that for any Markov chain \((Y_n)_{n\ge t}\) with this transition matrix and any initial distribution at time t
$$\begin{aligned} \Vert P^{Y_{t+k}}-\pi \Vert \le a\cdot b^k, \quad k\in {\mathbb {N}}, \end{aligned}$$
in total variation norm \(\Vert \cdot \Vert \). With the same meaning, also the process \((Y_n)_{n\ge t}\) is called uniformly ergodic. Assumption (A) above implies uniform ergodicity, so that we have for each Markov chain constants a, b as above; see (Meyn and Tweedie 2012, Chap. 16) for a general treatment.
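The geometric decay in total variation can be observed numerically. In the sketch below (illustrative two-state parameters of our own choosing, not from the paper), a chain with positive entries contracts toward its stationary distribution at a rate given by its second eigenvalue:

```python
import numpy as np

# Illustrative positive transition matrix; (A)-style positivity
# gives uniform ergodicity with a geometric rate.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi = np.array([2/3, 1/3])            # its stationary distribution

def tv(p, q):
    # total variation distance between two distributions on a finite set
    return 0.5 * np.abs(p - q).sum()

mu = np.array([1.0, 0.0])            # a point-mass initial distribution
dists = [tv(mu @ np.linalg.matrix_power(P, k), pi) for k in range(1, 11)]
ratios = [dists[k + 1] / dists[k] for k in range(9)]
print(dists[0], ratios[0])  # TV shrinks by the second-eigenvalue factor 0.7
```

For a two-state chain the error vector \(\mu - \pi \) lies along the second eigenvector, so the contraction factor per step is exactly the second eigenvalue, here \(0.9 + 0.8 - 1 = 0.7\).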
Theorem 2.5
There exist constants \(a>0,~0<b,\gamma <1\) such that for all \( l,k\in {\mathbb {N}}\)
if \(R_l\) and \(R_{l+k}\) come from the same model started at some time point \(-m < -l-k\),
in the general case that \(R_l\) and \(R_{l+k}\) come from possibly different models, the first started in some time point \(-m_1 < -l\), the second in some time point \(-m_2 < -l-k\).
Proof
The constants \(a,b,\gamma \) will be those introduced in Remark 2.4.
Let us firstly consider the case that \(R_l\) and \(R_{l+k}\) stem from the same model. Using the generic symbol f to denote densities in this model we write the joint density as \(f(x_{-l-k}^{0})\) and the joint conditional density as \(f_{y_{-l-k}^{0}}(x_{-l-k}^{0})\). With this notation we have
furthermore
We obtain from Corollary 2.3 and conditional independence
Let us now look at the general case with models 1 and 2, \(R_l=R_l^1\) stemming from model 1, \(R_{l+k}= R_{l+k}^2\) from model 2 respectively. Then
From the first part of the assertion
To treat \(R^1_{\lfloor l/2 \rfloor }\) and \( R^2_{\lfloor l/2 \rfloor }\) we note that the conditional Bayes risks for time lag \(\lfloor l/2 \rfloor \) given \(Y_{-\lfloor l/2 \rfloor }\) are the same in both models, hence the unconditional risks differ by at most the total variation distance between the two distributions of \(Y_{-\lfloor l/2 \rfloor }\) in the two models. This quantity is \(\le 2ab^{l/2}\) since both models have been running for at least \(l - \lfloor l/2 \rfloor \ge l/2\) time points, hence
\(\square \)
From this we easily obtain our main result.
Theorem 2.6
There exist constants \(a>0,~0<b,\gamma <1\) such that for all \(l \in {\mathbb {N}}\)
in particular for \(l \rightarrow \infty \)
Proof
As in Theorem 2.5, the constants \(a,b,\gamma \) are those of Remark 2.4. Recall that \(R^{*}_l\) is the Bayes risk in the stationary case. As already stated earlier, \(\lim _{l} R^{*}_l = R^{*}\) by martingale convergence. Theorem 2.5 shows
for all k, proving the assertion. \(\square \)
3 Kernel classification with memory
Optimal classification procedures provide benchmarks for the actual behaviour of data driven classification procedures which do not require knowledge of the underlying distribution. A general principle from statistical classification involves the availability of a training sequence \((x'_1,y'_1,\ldots ,x'_n,y'_n)\) where the \(x'_i\) have been recorded together with the \(y'_i\). This training sequence is used for the construction of a regression estimator
which leads to the classification rule
When we choose a kernel \(K:{\mathbb {R}}^{d}\rightarrow [0,\infty )\)
and use the common kernel regression estimate we arrive at the kernel classification rule
$$\begin{aligned} g_n(x) = \mathop {\mathrm{argmax}}_{y \in M} \sum _{i=1}^{n} 1_{\{y_i'=y\}} K\Big (\frac{x-x_i'}{h}\Big ) \end{aligned}$$
with bandwidth \(h>0\).
The asymptotic behaviour, as the size of the training sequence tends to infinity, has been thoroughly investigated for such classification rules and in particular for kernel classification rules. In the i.i.d. case or more generally under suitable mixing conditions, such procedures are risk consistent in the following sense: Kernel classification rules asymptotically achieve the minimal risk of misclassification for \({\mathcal {X}}= {\mathbb {R}}^d\) if the size n of the training sequence tends to \(\infty \) and \(h=h(n)\) satisfies \(h(n) \rightarrow 0\) and \(nh(n)^{d} \rightarrow \infty \). As remarked in the Introduction, this type of consistency follows from the consistency of the corresponding regression estimator, hence any result on regression consistency translates into a result on risk consistency.
It is the aim of this section to discuss the applicability of kernel classification with memory in hidden Markov models. Assume that the training sequence \(X'_{1},Y'_{1},\ldots ,X'_{n},Y'_{n}\) is generated according to a hidden Markov model and that there is a sequence of observations
which stems from the stationary hidden Markov model with the same transition matrix and the same kernel and is stochastically independent of the training sequence. For the classification of \(X_{0} = x_{0}\) the usual kernel classification rule as described above would classify \(x_{0}\) as belonging to the class
$$\begin{aligned} \mathop {\mathrm{argmax}}_{y \in M} \sum _{i=1}^{n} 1_{\{Y_{i}'=y\}} K\Big (\frac{x_{0}-X_{i}'}{h}\Big )\,. \end{aligned}$$
This ignores the Markovian structure, so we want to use memory as in the optimal classification of the preceding Sect. 2. We propose the following procedure. Fix some memory \(l=1,2,\ldots \) prescribing the number of preceding observations used in the classification of the current one. Use a kernel \(K:{\mathbb {R}}^{d(l+1)}\rightarrow [0,\infty )\)
and, assuming a training sequence of size \(n+l\), classify observation \(x_{0}\) as originating from the class
$$\begin{aligned} \mathop {\mathrm{argmax}}_{y \in M} \sum _{i=1}^{n} 1_{\{Y_{i+l}'=y\}} K\Big (\frac{(x_{-l},\ldots ,x_{0})-(X_{i}',\ldots ,X_{i+l}')}{h}\Big )\,. \end{aligned}$$
Compared to rules without memory the role of \(x_{0}\) is taken by \((x_{-l},\ldots ,x_{0})\) whereas the role of \((X'_{i},Y'_{i})\) is taken by \((X'_{i},Y'_{i},\ldots ,X'_{i+l},Y'_{i+l})\).
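A minimal sketch of this kernel rule with memory, under illustrative assumptions of ours: a Gaussian product kernel on the \((l+1)\)-dimensional window, a sticky two-state chain with overlapping Gaussian emissions, and hypothetical parameter values; the function `kernel_classify` and all constants are our own choices, not taken from the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(1)

def kernel_classify(x_window, train_x, train_y, l, h, classes):
    """Kernel rule with memory l: score class y by summing
    K((window - training window) / h) over training windows whose
    final label Y'_{i+l} equals y, and take the argmax."""
    n = len(train_x) - l
    scores = {y: 0.0 for y in classes}
    for i in range(n):
        window = train_x[i:i + l + 1]
        # product Gaussian kernel on the (l+1)-dimensional window
        k = np.exp(-0.5 * np.sum((x_window - window) ** 2) / h ** 2)
        scores[train_y[i + l]] += k
    return max(scores, key=scores.get)

# Hypothetical training data from a sticky 2-state chain with
# overlapping Gaussian emissions (means -0.3 and 0.3).
P, mus = np.array([[0.95, 0.05], [0.05, 0.95]]), np.array([-0.3, 0.3])
ys = [0]
for _ in range(500):
    ys.append(rng.choice(2, p=P[ys[-1]]))
ys = np.array(ys)
xs = rng.normal(mus[ys], 1.0)

# classify using the last observation as context (l = 1)
label = kernel_classify(xs[10:12], xs, ys, l=1, h=0.5, classes=[0, 1])
print(label in (0, 1))
```

Setting \(l=0\) recovers the memoryless kernel rule, so the same function can be used to compare both rules on the same data.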
The approach we propose here leads to a risk consistent procedure for hidden Markov models, i.e. the risk converges to the corresponding Bayes risk when, for fixed l, the size of the training sample n tends to \(\infty \). The proof of this risk consistency adapts the methods of proof for the i.i.d. case to the Markov model we have here. We present the basic facts here and refer to Irle (1997) for a detailed treatment; see also (Györfi et al. 1989, Chap. 13).
The kernel K has to satisfy that for any y
for a.a. \((x_{-l},\ldots ,x_{0})\) as \(k\rightarrow 0\). Any kernel K such that \(K\ge 0\), K is bounded with bounded support, and there exist \(t_0,c>0\) such that \(K(z)\ge c\) for \(\Vert z\Vert \le t_0\), fulfills the above condition, see, e.g., (Devroye et al. 1996, 10.1). We call such a kernel regular.
Next note that we consider a uniformly ergodic transition matrix for our Markov chain. Looking at the hidden Markov model forward in time, the process \((X_n',Y_n')_{n\in {\mathbb {N}}}\) forms a Markov chain with state space \(\mathcal {X}\times M\) in discrete time. The stationary distribution for this process is given by
$$\begin{aligned} \pi '\big (B\times \{y\}\big ) = \pi (y)\,Q(B,y), \quad B\subseteq {\mathcal {X}} \text{ measurable},\ y\in M. \end{aligned}$$
It follows immediately that this process is again uniformly ergodic such that for \(a>0,~0<b<1\), the constants for the Y-process, it holds that for all n
since
In exactly the same manner, the process \((Z_n)_{n\in {\mathbb {N}}}=(X_n',Y_n',\ldots ,X_{n+l}',Y_{n+l}')\) is a Markov chain with state space \(\mathcal {Z}=(\mathcal {X}\times M)^{l+1}\) and stationary probability distribution \(\pi ^{(l)}\) given by
$$\begin{aligned} \pi ^{(l)}\big (B_0\times \{y_0\}\times \cdots \times B_l\times \{y_l\}\big ) = \pi (y_0)\,Q(B_0,y_0)\prod _{j=1}^{l} p_{y_{j-1}y_j}\,Q(B_j,y_j)\,, \end{aligned}$$
the p’s denoting the transition probabilities for the original chain.
the p’s denoting the transition probabilities for the original chain. It is easily seen that this process is again uniformly ergodic where, with the same constants a, b, we have for all n
Finally we note that any uniformly ergodic process is geometrically mixing in the sense that there exist \(\alpha >0,~0<\beta <1\) such that for all n
for any measurable \(f,g:\mathcal {Z}\rightarrow {\mathbb {R}},~|f|,~|g|\le 1,\) and any initial distribution, see (Meyn and Tweedie 2012, Theorem 16.1.5).
Using the foregoing notations we can obtain the following result.
Theorem 3.1
Let K be regular and let \(h(n)>0, n=1,2,\ldots \) be such that \(h(n) \rightarrow 0\) and \(nh(n)^{d(l+1)} \rightarrow \infty \). Denote the risk of the kernel classification rule by \(L_{n}^{(l)}\). Then as \(n\rightarrow \infty \)
Proof
The proof is based on the observation that classification is easier than regression function estimation, see (Devroye et al. 1996, 6.7). To adapt this to our setting fix \(y\in M\). Let for \((x_{-l},\ldots ,x_0)\in {\mathcal {X}}^{l+1}\)
This is the kernel regression function estimator of size n for
$$\begin{aligned} p(x_{-l},\ldots ,x_0) = P\big (Y_{0}=y\,|\,X_{-l}=x_{-l},\ldots ,X_{0}=x_0\big ) \end{aligned}$$
with corresponding kernel classification rule
Now to show \(L_n^{(l)}\rightarrow R^*_l\) it is enough to show that as \(n\rightarrow \infty \)
$$\begin{aligned} \hat{p}_n(x_{-l},\ldots ,x_0)\rightarrow p(x_{-l},\ldots ,x_0) \qquad \mathrm{(1)} \end{aligned}$$
in probability for almost all \((x_{-l},\ldots ,x_0)\), see (Devroye et al. 1996, Theorem 6.5). For this we may apply (Irle 1997, Theorem 1) and the application to kernel regression estimators in ibid., part 3, in particular the representation for \(\hat{p}_n\), p. 138. We then use uniform ergodicity, geometric mixing, regularity of the kernel, together with \(nh(n)^{d(l+1)} \rightarrow \infty \) to infer that the conditions to apply (Irle 1997, Theorem 1 (i)) are fulfilled. This then shows the assertion.
\(\square \)
Remark 3.2
(i) The more complicated result of almost sure convergence \(\hat{p}_n(x_{-l},\ldots ,x_0)\rightarrow p(x_{-l},\ldots ,x_0)\) in (1) needs additional conditions; see (Irle 1997, Corollary 4) and (Györfi et al. 1989, Chap. 13). But here, we only need convergence in probability.
(ii) This method of using past information to construct a classification procedure seems generally applicable. We simply have to replace \(x_0\) by \((x_{-l},\ldots ,x_0)\) and the learning sequence \((X_n',Y_n')_n\) by \((X_n',Y_n',\ldots ,X_{n+l}',Y_{n+l}')_n\). E.g. for nearest neighbor classification we would look for the nearest neighbor among the \((X_n',Y_n',\ldots ,X_{n+l}',Y_{n+l}')\) and use the resulting \(Y_{n+l}'\) for classification. To show consistency, we can proceed in the same way as for kernel classification using the nearest neighbor regression estimate, compare (Irle 1997, Part 4).
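The nearest neighbor variant of Remark (ii) can be sketched as follows; the function name and the tiny training data are illustrative assumptions of ours, not from the paper:

```python
import numpy as np

def nn_classify_with_memory(x_window, train_x, train_y, l):
    """1-nearest-neighbor rule with memory l: compare the observed
    window (x_{-l},...,x_0) with all training windows
    (X'_i,...,X'_{i+l}) and return the label Y'_{i+l} of the closest."""
    n = len(train_x) - l
    windows = np.stack([train_x[i:i + l + 1] for i in range(n)])
    dists = np.linalg.norm(windows - x_window, axis=1)
    return train_y[int(np.argmin(dists)) + l]

# Tiny hypothetical check: with l = 1 the pair (0.9, 1.1) is closest
# to the training window (1.0, 1.0), whose final label is 1.
tx = np.array([0.0, 0.1, 1.0, 1.0, 0.0])
ty = np.array([0, 0, 1, 1, 0])
print(nn_classify_with_memory(np.array([0.9, 1.1]), tx, ty, l=1))
```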
So asymptotically as n tends to infinity, the kernel classification rule performs as the optimal rule of Sect. 2 and may be used as a typical nonparametric rule to test the usefulness of invoking preceding information.
From a practical point of view we now comment on the performance in simulations and in recognition problems for isolated handwritten letters which points to a saving in misclassifications.
3.1 Performance studies
In the following we report on some typical results in our studies of the actual behaviour of the kernel classification rule as proposed in this paper. As a general experience we point out that memory \(l>1\) did not lead to significant improvement over \(l=1\) so that we only compare the cases \(l=0,l=1\).
(i) In simulations 1 and 2 we choose \(\ldots Y_{-1}, Y_{0},Y_{1},\ldots \) as a Markov chain with four states and transition probabilities
with stationary distribution (0.25, 0.25, 0.25, 0.25).
In simulation 3 we choose \(Y_{1},Y_{2},\ldots \) i.i.d. following the stationary distribution. The \(X_{i}\)’s have a three-dimensional normal distribution with identical covariance matrix and mean vectors
So there is good distinction between all classes in simulation 1 with easy classification, there is poor distinction between classes 3 and 4 in simulations 2 and 3. The following table gives the error rate for classification with size n of the training sequence in the first row. A normal kernel is used.
This shows that use of the Markov structure in simulation 2 through \(l=1\) leads to the possibility of distinguishing between classes 3 and 4. In the i.i.d. case of simulation 3 an appeal to memory of course does not help.
(ii) The classification of handwritten isolated capital letters was performed using kernel methods. Features were obtained by transforming handwritten letters into \(16 \times 16\) grey-value matrices. The learning sequence was obtained by merging samples from seven different persons.
The following typical error rates resulted from the classification of the word SAITE (German for ’string’), where error rates are writer dependent. A normal kernel was used with \(h=1.0\) and \(h=0.25\).
Use of the Markov structure through \(l=1\) seems to lead to improved performance. Of course, the incorporation of memory can be applied to any procedure of pattern recognition. In particular we have also looked into nearest neighbor rules with memory l. Our findings have been similar to those for the kernel rule as discussed above and also advocate the use of memory \(l=1\).
4 Proofs for Section 2
Lemma 4.1
Let \(l \in {\mathbb {N}}_0\). Consider a hidden Markov model which starts in some time point \(-m < -l\) and let \(T \subseteq \{-m,-m+1,\ldots ,0\}\).
For any \( x_{t}\in {\mathcal {X}} ,\, t\in T, \) and \( i,\iota ,\kappa \in M \) we have
(i) for \( -l\in T \)
$$\begin{aligned} \frac{P(Y_{-l}=\iota |Y_{-l-1}=i, X_{t}=x_{t},t\in T)}{P(Y_{-l}=\kappa |Y_{-l-1}=i, X_{t}=x_{t},t\in T)}\le \alpha (x_{-l})\, , \end{aligned}$$
(ii) for \( -l\notin T \)
$$\begin{aligned} \frac{P(Y_{-l}=\iota |Y_{-l-1}=i, X_{t}=x_{t},t\in T)}{P(Y_{-l}=\kappa |Y_{-l-1}=i, X_{t}=x_{t},t\in T)}\le \alpha \, . \end{aligned}$$
Proof
We shall use the symbol f in a generic way to denote joint and conditional joint densities; also we use \( T_{\ge s}= \{t\in T:t\ge s\} \). From the properties of a hidden Markov model we obtain
Note that the term \( f_{j}(x_{-l+1}) \) in line \( (*) \) only appears for \( -l+1\in T \). In the second part of the assertion we have \( -l\notin T \) hence the terms \( f_{\iota }(x_{-l}) \) and \( f_{\kappa }(x_{-l}) \) do not appear in line \( (*) \). Thus the dependence on \( x_{-l} \) disappears. \(\square \)
As a consequence we obtain:
Corollary 4.2
Consider the situation of Lemma 4.1. Then
(i) for \( -l\in T \)
$$\begin{aligned} P\big (Y_{-l}=j|Y_{-l-1}=i,X_{t}=x_{t},t\in T \big )\ge \eta (x_{-l})>0\, , \end{aligned}$$
(ii) for \( -l\notin T \)
$$\begin{aligned} P\big (Y_{-l}=j|Y_{-l-1}=i,X_{t}=x_{t},t\in T \big )\ge \eta >0\, . \end{aligned}$$
Proof
Set \( \hat{\alpha }= \alpha (x_{-l})\) for \( -l\in T \) and \( \hat{\alpha }= \alpha \) for \( -l\notin T\). Lemma 4.1 implies
hence
\(\square \)
We now give the proof of Theorem 2.2.
Proof
For \( A=M \) we have \( m_{l}^{+}-m_{l}^{-}=1-1=0 \). So assume \( A\ne M \). We use induction in l and shall apply Corollary 4.2.
Let \( l=1 \): Choose \( j\in A^{c} \).
The inductive step:
Assume that the assertion is true for \( l\in {\mathbb {N}}\). Let \(j^+\) and \(j^-\) denote values at which the maximum \(m_{l}^{+}\) and the minimum \(m_{l}^{-}\), respectively, are attained. Then
Similarly
This implies
\(\square \)
References
Cappé O, Moulines E, Rydén T (2005) Inference in hidden Markov models. Springer, New York
Devroye L, Györfi L, Lugosi G (1996) A probabilistic theory of pattern recognition. Applications of Mathematics, vol 31. Springer-Verlag, New York
Györfi L, Härdle W, Sarda P, Vieu P (1989) Nonparametric curve estimation from time series. Lecture Notes in Statistics, vol 60. Springer-Verlag, Berlin
Györfi L, Kohler M, Krzyżak A, Walk H (2002) A distribution-free theory of nonparametric regression. Springer Series in Statistics. Springer-Verlag, New York
Holst M, Irle A (2001) Nearest neighbor classification with dependent training sequences. Ann. Statist. 29(5):1424–1442
Huang XD, Ariki Y, Jack MA (1990) Hidden Markov models for speech recognition. Edinburgh University Press, Edinburgh
Ferguson JD (ed) (1980) Symposium on the application of hidden Markov models to text and speech. Institute for Defense Analyses, Communications Research Division
Irle A (1997) On consistency in nonparametric estimation under mixing conditions. J Multivar Anal 60(1):123–147
MacDonald IL, Zucchini W (1997) Hidden Markov and other models for discrete-valued time series. Monographs on Statistics and Applied Probability, vol 70. Chapman & Hall, London
Meyn SP, Tweedie RL (2012) Markov chains and stochastic stability. Springer Science & Business Media, New York
Ryabko D (2006) Pattern recognition for conditionally independent data. J Mach Learn Res 7:645–664
Christensen, S., Irle, A. & Willert, L. Classification error in multiclass discrimination from Markov data. Stat Inference Stoch Process 19, 321–336 (2016). https://doi.org/10.1007/s11203-015-9129-6