Abstract
This paper describes a moments estimator for a standard state-space model with coefficients generated by a random walk. The method calculates the conditional expectations of the coefficients, given the observations. A penalized least squares estimation is linked to the GLS (Aitken) estimates of the corresponding linear model with time-invariant parameters. The estimates are moments estimates. They do not require the disturbances to be Gaussian, but if they are, the estimates are asymptotically equivalent to maximum likelihood estimates. In contrast to Kalman filtering, no specification of an initial state or an initial covariance matrix is required. While the Kalman filter is one-sided, the filter proposed here is two-sided and therefore uses more of the available information for estimating intermediate states. Further, the proposed filter has a clear descriptive interpretation.
1 Introduction
This paper describes and discusses an estimator for a linear time series model with time-varying coefficients. Such a model, the varying coefficients model, or “VC model” for short, generalizes the standard linear model. The standard model assumes that the coefficients giving the influence of the independent variables on the dependent variable remain constant. In the VC model, these coefficients are permitted to change over time.
The VC model poses the statistical problem of determining the smoothing parameters, variances, or splines, that are needed to model the movements of the coefficients over time. Schlicht (1989) and Schlicht and Ludsteck (2006) have proposed an estimation method—the VC method—that is specifically tailored to the case that the time-varying coefficients are generated by a random walk with normal disturbances. The present paper generalizes this approach to the non-Gaussian case.
The paper is organized as follows. In Sect. 2, the model is described and some motivation is provided by drawing on a discussion in economics. The “criteria” or “penalty” approach is explained, which makes it possible to estimate the time-paths of the coefficients in a purely descriptive way. A stochastic formulation is given in Sect. 3, which presupposes knowledge of the variances of the disturbances. It is shown that the descriptive approach outlined in Sect. 2 can be justified statistically by estimating the time averages of the coefficients in the corresponding linear GLS model with constant coefficients, again for given variances of the disturbances. Section 4 turns to the estimation of these variances. Two closely related estimators are explained: a moments estimator and a likelihood estimator. The likelihood estimator presupposes Gaussian disturbances while the moments estimator does not require this assumption. The relationship between these estimators is discussed. Section 5 provides some illustration of the way the model works and ponders some methodological issues. A conclusion follows.
2 The varying coefficients model in descriptive mode
This section describes the varying coefficients model and discusses the motivation for using it, as it emerged in economics (Sect. 2.1). Some features of the proposed method are previewed in Sect. 2.2. The notation is introduced in Sect. 2.3, and the “criteria” or “penalty” approach is described, which makes it possible to estimate the development of the relations between the independent variables and the dependent variable over time in a purely descriptive way (Sect. 2.4).
2.1 The linear theoretical model and its empirical application
Consider a theory stating the dependent variable y as a linear function of some independent variables \(x_{1},x_{2},\ldots ,x_{n}\): \(y=a_{1}x_{1}+a_{2}x_{2}+\cdots +a_{n}x_{n}\).
The coefficients \(a_{1},a_{2},\ldots ,a_{n}\) give the influence of the independent variables.
If we have T observations \(y_{t}\), \(x_{1,t}\), \(x_{2,t}\),...\(,x_{n,t}\) with \(t=1,2,\ldots ,T\) denoting the time of an observation, we can try to estimate the theoretical coefficients \(a_{1},a_{2},\ldots ,a_{n}\) by a standard linear regression. In order to do that, we have to add an error term \(u_{t}\) to capture discrepancies between the empirical and the theoretical regularity due to measurement errors etc., and obtain \(y_{t}=a_{1}x_{1,t}+a_{2}x_{2,t}+\cdots +a_{n}x_{n,t}+u_{t}\).
In many cases it appears improbable, however, that outside influences not captured in the theoretical model affect only the disturbance term, and not the coefficients themselves. In the case of economics, we may think of changes in technology, preferences, market structure, and the composition of aggregates. All change over time and may affect the coefficients themselves.
In economics, the problem of possibly time-varying coefficients was the subject of the famous Keynes–Tinbergen controversy around 1940. While Tinbergen (1940, 153) defended the use of regression analysis with the argument that in “many cases only small changes in structure will occur in the near future”, Keynes (1973, 294) objected that “the method requires not too short a series whereas it is only in a short series, in most cases, that there is a reasonable expectation that the coefficients will be fairly constant.”
It appears that both arguments are correct. The VC model takes care of both by assuming that the coefficients change slowly over time: they are highly auto-correlated. This is formalized by a random walk (Athans 1974; Cooley and Prescott 1973; Schlicht 1973). If \(a_{i,t}\) denotes the state of coefficient \(a_{i}\) at time t, it is assumed that \(a_{i,t}=a_{i,t-1}+v_{i,t}\)
with the disturbance term \(v_{i,t}\) of expectation zero and with variance \(\sigma _{i}^{2}\). The assumption of expectation zero formalizes the idea that “the coefficients will be fairly constant” in the short run, while the variance \(\sigma _{i}^{2}\) is a measure of the stability of coefficient i and is to be estimated. For \(\sigma _{i}^{2}=0\) for some i, the case of a constant (time-invariant) coefficient is covered as well. As a consequence, the standard linear model is replaced by \(y_{t}=a_{1,t}x_{1,t}+a_{2,t}x_{2,t}+\cdots +a_{n,t}x_{n,t}+u_{t}\).
This is the VC model that is presupposed in the following.
2.2 Properties of the VC method
The VC method that will be developed in this paper estimates the expected time-paths of the coefficients \(a_{i,t}\) for given observations \(x_{i,t}\) and \(y_{t}\) with \(i=1,2,\ldots ,n\) and \(t=1,2,\ldots ,T\). It can be viewed as a straightforward generalization of the method of least squares.
-
While the method of ordinary least squares selects estimates that minimize the sum of squared disturbances \(\sum _{t=1}^{T}u_{t}^{2}\) in the equation, VC selects estimates that minimize the sum of squared disturbances in the equation and a weighted sum of squared disturbances in the coefficients, i.e. \(\sum _{t=1}^{T}u_{t}^{2}+\gamma _{1}\sum _{t=2}^{T}v_{1,t}^{2}+\gamma _{2}\sum _{t=2}^{T}v_{2,t}^{2}+\cdots +\gamma _{n}\sum _{t=2}^{T}v_{n,t}^{2}\), where the weights for the changes in the coefficients \(\gamma _{1},\gamma _{2},\ldots ,\gamma _{n}\) are determined by the inverse variance ratios, i.e. \(\gamma _{i}=\sigma ^{2}/\sigma _{i}^{2}\). In other words, it balances the desiderata of a good fit and parameter stability over time.
-
Estimation can proceed by focusing on some selected coefficients and keeping the remaining coefficients constant over time. This is done by keeping the corresponding variances \(\sigma _{i}^{2}\) close to zero, rather than estimating them. (If all coefficients are frozen in this manner, the OLS result is obtained.)
-
The time-averages of the regression coefficients \(\frac{1}{T}\sum _{t}a_{t}\) are GLS estimates of the corresponding regression with fixed coefficients.
-
The VC method does not require initial values for the state or initial variances. Rather, all states and variances are estimated in one integrated procedure. This is an advantage over Kalman filtering, which is typically quite sensitive to the choice of initial values, especially when dealing with shorter time series.
-
The VC method links the purely descriptive method of employing non-parametric splines through penalized least squares with an explicit statistical model with random-walk coefficients. This offers the possibility of model-based estimation.
-
All estimates are moments estimates. It is not necessary to presuppose Gaussian disturbances.
-
For increasing sample sizes T and under the assumption that all disturbances are normally distributed, the moments estimates approach the maximum likelihood estimates.
2.3 Notation and basic assumptions
All vectors are conceived as column vectors, and their transposes are indicated by an apostrophe. The observations at time t are \(x'_{t}=\) \(\left( x_{1,t},x_{2,t},\ldots ,x_{n,t}\right)\) and \(y_{t}\) for \(t=1,2,\ldots ,T\). We write
We write further
and define
with \(I_{n}\) denoting the identity matrix of order n and \(\otimes\) indicating the Kronecker product operator. Note that p and P are of full rank.
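The construction of p and P can be made concrete in a short numerical sketch. The specific form of p is not reproduced in this excerpt; the \(\left( T-1\right) \times T\) first-difference matrix used below is an assumption, chosen so that \(Pa=v\) stacks the coefficient changes \(a_{t}-a_{t-1}\):

```python
import numpy as np

T, n = 5, 2  # illustrative sizes

# assumed (T-1) x T first-difference matrix: (p a)_t = a_{t+1} - a_t
p = np.diff(np.eye(T), axis=0)

# P = p (Kronecker) I_n stacks the differences of all n coefficient paths
P = np.kron(p, np.eye(n))

assert np.linalg.matrix_rank(p) == T - 1          # p has full row rank
assert np.linalg.matrix_rank(P) == (T - 1) * n    # and so does P
```

With these definitions P has full row rank \(\left( T-1\right) n\), in line with the full-rank remark above.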
The model is obtained by writing Eqs. (1.4) and (1.5) in matrix form:
Note that the explanatory variables X are taken as predetermined, rather than stochastic.
Regarding the observations X and y we assume that a perfect fit of the model to the data is not possible:
Assumption 1
\(Pa=0\,\,{\mathrm {implies}}\,\,y\ne Xa\).
This assumption rules out the (trivial) case that the standard linear model (1.2) fits the empirical data perfectly, a case that cannot reasonably be expected to occur in practical applications. Further, the assumption implies that the number of observations exceeds the number of coefficients to be estimated: \(T>n\).
2.4 Least squares
In a descriptive spirit, the time-paths of the coefficients can be determined by following the penalized least squares approach, where criteria are employed that formalize certain descriptive desiderata. In the case at hand, the desiderata are that the model fits the data well and that the coefficients change only slowly over time—u and v ought to be as small as possible. The sum of the squared errors \(u'u\) is taken as a criterion for the goodness of fit of Eq. (1.7), and the weighted sums of the squared changes of the coefficients \(v_{i}'v_{i}\) over time give criteria for the stability of the coefficients. The combination of all these criteria gives an overall criterion that combines the desiderata of a good fit and stability of the coefficients over time. The weights \(\left( \gamma _{1},\gamma _{2},\ldots ,\gamma _{n}\right)\) give the relative importance of the stability of the coefficients, where weight \(\gamma _{i}\) relates to coefficient \(a_{i}\). For the time being, these weights are taken as given but will later be estimated, too.
Write
and
Adding the sum of squares \(u'u\) and the weighted sum of squares \(v'Gv\) gives the overall criterion
This expression is to be minimized under the constraints given by the model (1.7), (1.8) with the observations X and y :
This determines the time-paths of the coefficients a that optimize this criterion. Hence we can write
The weighted sum of squares Q is the sum of two positive semi-definite quadratic forms. Assumption 1 rules out the case that Q can be zero. Hence Q is positive definite and of full rank. The first-order condition for the minimizing a is
and the second-order condition is that the Jacobian
is positive definite, which is the case. Solving (1.16) for a and plugging this into (1.13) and (1.14) gives the estimates
where the subscript LS stands for “least squares”.
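For a minimal numerical illustration, the first-order condition can be solved directly. The sketch below uses an assumed local-level special case (\(n=1\) and \(x_{t}\equiv 1\), so that \(X=I_{T}\)) together with an assumed first-difference matrix P; the variable names, sample size, and weight values are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 40

# toy local-level model y_t = a_t + u_t: n = 1 and x_t = 1, so X = I_T
a_true = 1.0 + np.concatenate(([0.0], np.cumsum(rng.normal(0.0, 0.3, T - 1))))
y = a_true + rng.normal(0.0, 1.0, T)

X = np.eye(T)                         # design matrix for the intercept path
P = np.diff(np.eye(T), axis=0)        # assumed first-difference matrix, Pa = v

def vc_path(y, X, P, gamma):
    """Solve the first-order condition (X'X + gamma P'P) a = X'y."""
    M = X.T @ X + gamma * P.T @ P
    return np.linalg.solve(M, X.T @ y)

a_smooth = vc_path(y, X, P, gamma=10.0)   # moderate smoothing weight
a_frozen = vc_path(y, X, P, gamma=1e8)    # freezing the coefficient

# a very large weight reproduces the OLS fit, here the sample mean of y
assert np.allclose(a_frozen, y.mean(), atol=1e-4)
```

The last line illustrates the remark in Sect. 2.2: a sufficiently large weight \(\gamma _{i}\) keeps the corresponding coefficient constant over time and recovers the OLS result.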
3 The varying coefficients model in stochastic mode
This section considers the statistical treatment of the VC model under the assumption that the variances of the disturbances are known. With the parametrization outlined in Sect. 3.1, the VC model gives rise to a GLS (Aitken) model that makes it possible to estimate the time-averages of the coefficients. With these estimates, the conditional expectations of the coefficients \(a_{i,t}\) for given observations X and y can be determined (Sect. 3.2). If the weights chosen for the descriptive estimation outlined in Sect. 2.4 are equal to the inverse variance ratios, the descriptive estimation and the conditional expectation coincide (Sect. 3.3).
3.1 Orthogonal parametrization
For purposes of estimation we need a model that explains the observation y as a function of the observations X and the random variables u and v. This would permit calculating the probability distribution of the observations y contingent on the parameters of the distributions of u and v, viz \(\sigma ^{2}\) and \(\varSigma\). The true model does not permit such an inference, though, because the matrix P is of rank \(\left( T-1\right) n\) rather than of rank Tn and cannot be inverted. Hence v does not determine a unique a but rather the set of solutions
with \(\beta\) as a shift parameter,
as the right-hand pseudo-inverse of P given in (1.6) of order \(Tn\times \left( T-1\right) n\), and the matrix
of order \(Tn\times n\). It is orthogonal to P:
with the square matrix \(\left( P',Z\right)\) of full rank. For any v we have \(a\in A\Leftrightarrow Pa=v\). Hence Eq. (1.7) and the set (2.1) give equivalent descriptions of the relationship between a and v.
Note that
Regarding the matrices P, \(\tilde{P}\), and Z we have
In view of (2.1), any solution a to \(Pa=v\) can be written as
for some \(\beta \in \mathbb {R}^{n}\). Equation (1.7) can be re-written as
The model (2.6), (2.7) will be referred to as the equivalent orthogonally parameterized model. It implies the true model (1.7), (1.8). It implies, in particular, that \(a_{t}\) is a random walk even though \(a_{t}\) depends, according to (2.6), on past and future realizations of \(v_{t}\).
The formal parameter \(\beta\) has a straightforward interpretation. Pre-multiplying (2.6) by \(Z'\) gives
and therefore
Hence \(\beta\) gives the averages of the coefficients \(a_{i,t}\) over time.
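The relations among P, \(\tilde{P}\), and Z can be checked numerically. The sketch below assumes \(\tilde{P}=P'\left( PP'\right) ^{-1}\) for the right pseudo-inverse and \(Z=\left( \iota _{T}\otimes I_{n}\right) /\sqrt{T}\) as an orthonormal basis of the null space of P; both are constructions consistent with the text, not definitions quoted from it:

```python
import numpy as np

T, n = 6, 2  # illustrative sizes

p = np.diff(np.eye(T), axis=0)          # assumed (T-1) x T difference matrix
P = np.kron(p, np.eye(n))               # (T-1)n x Tn

# right pseudo-inverse of P (an assumed but consistent construction)
P_tilde = P.T @ np.linalg.inv(P @ P.T)

# Z spans the null space of P, normalized so that Z'Z = I_n
Z = np.kron(np.ones((T, 1)), np.eye(n)) / np.sqrt(T)

assert np.allclose(P @ Z, 0)                              # Z is orthogonal to P
assert np.allclose(P @ P_tilde, np.eye((T - 1) * n))      # P P~ = I
assert np.allclose(P_tilde @ P, np.eye(T * n) - Z @ Z.T)  # P~ P = I - ZZ'
```

Under this normalization \(Z'a\) equals \(\sqrt{T}\) times the time averages of the coefficients, so \(\beta\) recovers the averages up to the chosen scaling of Z.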
Equation (2.7) permits calculating the density of y dependent upon the parameters of the distributions of u and v and the formal parameters \(\beta\). In a second step, all these parameters—\(\sigma ^{2}\), \(\varSigma\), and \(\beta\)—can be determined by moments estimators that will be derived in Sect. 4.1.
The orthogonal parametrization (proposed in Schlicht 1985, Sec. 4.3.3 in another context) entails some advantages with respect to symmetry and mathematical transparency, as compared to more usual procedures such as parametrization by initial values. It makes it possible to derive our moments estimator, which does not require normally distributed disturbances, and to write down an explicit likelihood function for the case of normally distributed disturbances that permits estimation of all relevant parameters in a unified one-shot procedure.
The formal parameter vector \(\beta\) relates directly to the coefficient estimates of a standard generalized least squares (GLS, Aitken) regression. Equation (2.7) can be interpreted as a standard regression for this parameter vector with the matrix \(x=XZ\) giving the explanatory variables:
and the disturbance
It has expectation zero
and covariance
The Aitken estimate \(\beta _{A}\) satisfies
or
where the subscript A stands for “Aitken”. As \(x=XZ\) and \(W=X\tilde{P}V\tilde{P'}X'+\sigma ^{2}I_{T}\), Eqs. (2.13) and (2.14) can be written as
and Eq. (2.14) gives rise to
3.2 The filter
This section derives the VC filter which gives the expectation of the coefficients a for given observations X and y, a given shift parameter \(\beta\), and given variances \(\sigma ^{2}\) and \(\varSigma\).
For given \(\beta\) and X, the vectors y and a can be viewed as realizations of random variables determined jointly by the system (2.6), (2.9) as brought about by the disturbances u and v:
The covariance is
The marginal distribution of y is as given by (2.9) and (2.12). On this basis, we take our estimate of a as
which is the expectation of a for the case that u and v are Gaussian and y, \(\beta\), \(\sigma ^{2}\), and \(\varSigma\) are given. (It will turn out later on that \(a_{A}\) is the expectation of a for non-Gaussian disturbances as well, see Eq. (2.27) below.)
Note that the variance-covariance matrix of w, as given in Eq. (2.12), tends to \(\sigma ^{2}I_{T}\) if the variances \(\sigma _{i}^{2}\) go to zero, and Eq. (2.7) approaches the standard unweighted linear regression. In this sense, the OLS regression model is covered as a special limiting case by the model discussed here.
3.3 Least squares and Aitken
The following theorem states that the least squares estimator \(a_{LS}\) and the Aitken estimator \(a_{A}\) coincide if the weights are given by the variance ratios.
Claim 1
\(G=\sigma ^{2}V^{-1}\) implies \(a_{LS}=a_{A}\).
Proof
Consider first the necessary conditions for a minimum of (1.12). The first-order condition (1.16) defines \(a_{LS}\) with weights \(G=\sigma ^{2}V^{-1}\) uniquely and can be written as
It will be shown that (2.17) implies
which will establish the proposition.
Pre-multiplication of (2.17) by \(\left( X'X+\sigma ^{2}P'V^{-1}P\right)\) gives
Because of \(PZ=0\) this can be written as
Adding and subtracting \(\sigma ^{2}X'\left( X\tilde{P}V\tilde{P}'X'+\sigma ^{2}I_{T}\right) ^{-1}\left( y-XZ\beta _{A}\right)\) and using \(P'\tilde{P}'=\left( I_{Tn}-ZZ'\right)\) results in
which reduces to
According to (2.15), the last term is zero and we obtain
This shows that the least squares estimator \(a_{LS}\) and the Aitken estimator \(a_{A}\) coincide. \(\square\)
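Claim 1 can also be verified numerically. The sketch below (for \(n=1\)) uses assumed constructions consistent with Sect. 3.1, namely \(\tilde{P}=P'\left( PP'\right) ^{-1}\), Z an orthonormal basis of the null space of P, and \(W=X\tilde{P}V\tilde{P}'X'+\sigma ^{2}I_{T}\); the Aitken path \(a_{A}=Z\beta _{A}+\tilde{P}V\tilde{P}'X'W^{-1}\left( y-XZ\beta _{A}\right)\) is a reading of (2.20), which is not reproduced in this excerpt:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 30
sigma2, sigma2_1 = 1.0, 0.25          # Var(u_t) and Var(v_t), illustrative

x = rng.normal(1.0, 1.0, T)
X = np.diag(x)                         # n = 1: row t carries x_t for a_t
P = np.diff(np.eye(T), axis=0)         # assumed first-difference matrix, Pa = v
a = 2.0 + np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(sigma2_1), T - 1))))
y = X @ a + rng.normal(0.0, np.sqrt(sigma2), T)

# penalized least squares path with weights G = sigma^2 V^{-1}
gamma = sigma2 / sigma2_1
a_ls = np.linalg.solve(X.T @ X + gamma * P.T @ P, X.T @ y)

# Aitken side, with the assumed constructions described above
P_tilde = P.T @ np.linalg.inv(P @ P.T)         # right pseudo-inverse of P
Z = np.ones((T, 1)) / np.sqrt(T)               # orthonormal null space of P
W = sigma2_1 * X @ P_tilde @ P_tilde.T @ X.T + sigma2 * np.eye(T)
xz = X @ Z
Wi = np.linalg.inv(W)
beta_A = np.linalg.solve(xz.T @ Wi @ xz, xz.T @ Wi @ y)
a_A = (Z @ beta_A).ravel() + sigma2_1 * (P_tilde @ P_tilde.T @ X.T @ Wi @ (y - (xz @ beta_A).ravel()))

assert np.allclose(a_ls, a_A)            # Claim 1: a_LS = a_A
assert np.allclose(Z.T @ a_ls, beta_A)   # time-average link to the GLS estimate
```

The second assertion also illustrates the link stated in Sect. 2.2: the time averages of the penalized least squares path reproduce the GLS estimate \(\beta _{A}\).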
As a consequence of Claim 1, the least-squares estimates for u, v, and w and their Aitken counterparts coincide for \(G=\sigma ^{2}V^{-1}\). We need not distinguish them and denote all our estimates by circumflex:
For the sake of completeness and later use, the following observation is added:
Claim 2
\(G=\sigma ^{2}V^{-1}\) implies \(\hat{Q} =\sigma ^{2}\hat{w}'W^{-1}\hat{w}.\) In other words: the sum of squared deviations weighted by the variance ratios \(\frac{\sigma ^{2}}{\sigma _{1}^{2}},\frac{\sigma ^{2}}{\sigma _{2}^{2}},\ldots ,\frac{\sigma ^{2}}{\sigma _{n}^{2}}\) equals the weighted sum of squares (the squared Mahalanobis distance) in the Aitken regression.
Proof
As \(\hat{w}=X\tilde{P}\hat{v}+\hat{u}\), we have
With (2.5), (2.9), (2.12), and (2.20) this gives
and finally
Hence the weighted sum of squares Q equals the squared Mahalanobis distance. \(\square\)
Consider now the distribution of \(\hat{a}\). The matrix \(\left( X'X+\sigma ^{2}P'V^{-1}P\right)\), henceforth referred to as the “system matrix”, will be denoted by M:
With this, the normal equation (2.19), which defines the solution for the vector of coefficients \(\hat{a}\), can be written as
With (1.7) and (2.24) we obtain
Given a realization of the time-path of the coefficients a, the estimator \(\hat{a}\) is distributed with mean
and covariance
which reduces to
and finally to
The system matrix (2.24) is determined by the observations X, the variance \(\sigma ^{2}\) and the variances \(\varSigma\). Equation (2.28) gives the precision of our estimate which is directly related to the system matrix M. The next step is to determine the variance \(\sigma ^{2}\) and the variances \(\varSigma\).
4 Variance estimation
This section turns to estimating the variances. In Sect. 4.1 the proposed moments estimators will be derived, and in Sect. 4.2 a maximum likelihood criterion \({\mathcal {C}}_{L}\) will be given that is based on the parameterized model described in Sect. 3. In Sect. 4.3 a moments criterion \({\mathcal {C}}_{M}\) will be given that generates, upon minimization, the moments estimators, and in Sect. 4.4 it will be argued that, for large T, both criteria approach each other. As a consequence, the theoretical appeal of the likelihood estimator for large samples carries over to the moments estimator in the Gaussian case.
4.1 Moments estimation
The moments estimator that will be developed in this section has, for any sample size, a straightforward interpretation: it is defined by the property that the variances of the disturbances in the estimated coefficients equal their expectations. It thus remains meaningful even in shorter time series and does not presuppose that the perturbations u and v are normally distributed. It will be shown later that the moments estimators approach the respective maximum likelihood estimators in large samples if the disturbances are normally distributed.
In the following we denote the estimated coefficients by \(\hat{a}\) and the estimated perturbations by \(\hat{u}\) and \(\hat{v}\). For some variances \(\sigma ^{2}\) and \(\varSigma ={\mathrm {diag}}\left( \sigma _{1}^{2},\sigma _{2}^{2},\ldots ,\sigma _{n}^{2}\right)\), the estimated coefficients \(\hat{a}\) along with the estimated disturbances \(\hat{u}\) and \(\hat{v}\) are random variables brought about by realizations of the random variables u and v. Consider \(\hat{u}=y-X\hat{a}=X\left( a-\hat{a}\right) +u\) first. With (2.26) we obtain
Regarding \(\hat{v}\), consider the vectors \(\hat{v}_{i}'=\left( \hat{v}_{i,2},\hat{v}_{i,3},\ldots ,\hat{v}_{i,T}\right)\) for \(i=1,2,\ldots ,n\), that is, the disturbances in the coefficients \(\hat{a}_{i}\) for each coefficient separately. These are obtained as follows.
Denote by \(e_{i}\in \mathbb {R}^{n}\) the i-th column of the \(n\times n\) identity matrix and define the \(\left( T-1\right) \times \left( T-1\right) n\) matrix
that picks the time-path of the i-th disturbance \(v_{i}=\left( v_{i,2},v_{i,3},\ldots ,v_{i,T}\right) '\) from the disturbance vector v:
Note that, from (1.8),
Pre-multiplying (2.26) with the matrices \(E_{i}\) yields
Thus \(\hat{u}\) and \(\hat{v_{i}}\) are linear functions of the random variables u and v, and their expected squared errors can be calculated.
Claim 3
For given observations X and y and given variances \(\sigma ^{2}\)and \(\varSigma\), the expected squared deviations of \(\hat{u}\) and \(\hat{v}_{i}\), \(i=1,2,\ldots ,n\) are
This implies that the expected sum of squares is
Proof
The expectation of the squared estimated error \(\hat{u}\) is
and hence
In a similar way, the expectation of the squared estimated disturbance in the i-th coefficient \(\hat{v}_{i}\) is evaluated as
and hence
Regarding \(\hat{Q}\) we note that
and obtain
and hence
\(\square\)
The moments estimators are obtained by selecting variances \(\sigma ^{2}\) and \(\sigma _{i}^{2},i=1,2,\ldots ,n\) such that the expected moments \(E\left\{ \hat{u}'\hat{u}\right\}\) and \(E\left\{ \hat{v}_{i}'\hat{v}_{i}\right\} ,i=1,2,\ldots ,n\) given in (3.6) and (3.7) are set equal to the estimated moments \(\hat{u}'\hat{u}\) and \(\hat{v}_{i}'\hat{v}_{i},\,i=1,2,\ldots ,n\). As both the expected moments and the estimated moments are functions of the variances, the moments estimators, denoted by \(\hat{\sigma }^{2}\) and \(\hat{\sigma }_{i}^{2},\,i=1,2,\ldots ,n\), respectively, are defined as a fixed point of the system
Alternatively, the moments estimators can be defined as a fixed point of the system:
The implementations by Schlicht (2005b, 2021) use the latter alternative and employ a gradient process to find the solution of the equation system
This can be written as
Iteration starts with some variance ratios \(\gamma _{i}=\frac{\sigma ^{2}}{\sigma _{i}^{2}}\). This makes it possible to determine the right-hand sides of Eqs. (3.8) and (3.9). The variance ratios on the left-hand side of (3.8) and the variance on the left-hand side of (3.9) are used for a new iteration, and this continues until convergence is reached, delivering the fixed-point values \(\hat{\gamma }_{i}=\frac{\hat{\sigma }^{2}}{\hat{\sigma }_{i}^{2}}\) and \(\hat{\sigma }^{2}\) and the corresponding variances \(\hat{\sigma }_{i}^{2}=\frac{\hat{\sigma }^{2}}{\hat{\gamma }_{i}}\). (If this process does not converge, another solution procedure is available that will be discussed in Sect. 4.3 below.)
4.2 Likelihood estimation
This section derives a maximum-likelihood estimator for the variances under the additional assumption that the disturbances u and v are normally distributed.
Using Eqs. (1.8) and (2.10)–(2.14) together with the identity \(x=XZ\), the concentrated log-likelihood function for the Aitken regression (2.9) can be written as
with
By maximizing (3.10) with respect to \(\beta\), \(\sigma ^{2}\), and \(\varSigma\), the maximum likelihood estimates for the variances are obtained, and the corresponding expectation for the parameter a is given, in analogy to (2.20), as
with a caron denoting the maximum likelihood estimates and \(\check{V}=\left( I_{T-1}\otimes \check{\varSigma }\right)\).
The maximum likelihood estimator can be characterized in another way, as explained in the following. To do so, a lemma is needed.
Claim 4
Proof
Hence the result
is obtained. \(\square\)
Claim 5
Minimizing the criterion
is equivalent to maximizing the likelihood function (3.10).
Proof
With (3.11) we have
As, according to Claim 2, \(w'W^{-1}w=\left( y-XZ\beta \right) 'W^{-1}\left( y-XZ\beta \right)\) equals \(\frac{1}{\sigma ^{2}}u'u+v'V^{-1}v\) and \(\log \det \left( PP'\right)\) and \(T\left( \log 2+\log \pi \right)\) are independent of the variances, we can write
where “\({\mathrm {constant}}\)” is independent of the variances and maximization of \({\mathscr {L}}\) with regard to the variances is equivalent to minimization of \({\mathcal {C}}_{L}.\) \(\square\)
4.3 Another representation of the moments estimators
The relationship between the likelihood estimator and the moments estimator can be elucidated with the aid of a criterion that is very similar to the likelihood criterion (3.12). This criterion function is
Claim 6
Minimization of the criterion function (3.13) with respect to the disturbances u and v and the variances \(\sigma ^{2}\) and \(\varSigma\) yields the moments estimators as defined in (3.3) and (3.4).
Proof
Note that the envelope theorem together with (3.2) implies
In view of (3.2) we obtain further
By definition (2.24) we have
and hence
With this, Eq. (3.16) can be written as
and we find
which gives
These first-order conditions are equivalent to Eqs. (3.3), (3.4) that define the moments estimator. \(\square\)
Johannes Ludsteck’s (2004, 2018) Mathematica packages for VC proceed by minimizing the criterion function (3.13). This permits very clean and transparent programming. As Claim 6 is confined to moments and does not require any assumption about the normality of the disturbances, Ludsteck’s estimators are moments estimators as well.
4.4 The relationship between the likelihood and the moments estimator
The likelihood estimates minimize, according to Claim 5, the criterion \({\mathcal {C}}_{L}\) and the moments estimates minimize, according to Claim 6, the criterion \({\mathcal {C}}_{M}\). It is claimed in the following that, for increasing T and bounded X, both estimates tend to coincide. To show that, the following lemma is needed.
Claim 7
For sufficiently large T and bounded explanatory variables X, the following holds true approximately:
Proof
Define the \(Tn\times Tn\) matrix
and consider the matrix \(\mathbb {P}M\mathbb {P}'.\) One way to calculate it is as follows:
This implies
For increasing T and bounded x, \(\frac{1}{T}xx'\) tends to zero and \(\left( I_{T}-\frac{1}{T}xx'\right)\) tends to \(I_{T}\). Hence \(\det \mathbb {P}M\mathbb {P}'\) tends to \(\det PMP'\) and we can write
for large T. Another way to evaluate \(\det \left( \mathbb {P}M\mathbb {P}\right)\) is the following:
As
is obtained. Combining (3.21) and (3.22) gives the result. \(\square\)
Claim 8
For increasing T and with bounded explanatory variables X, the moments criterion and the likelihood criterion coincide.
Proof
For increasing T and in view of Claim 7, \({\mathcal {C}}_{M}\) tends to \({\mathcal {C}}_{L}\). \(\square\)
Hence the minimization of both criteria with respect to the variances will generate in the limit the same result. In consequence, the descriptive appeal of the moments estimator carries over to the likelihood estimator, and the theoretical appeal of the likelihood estimator carries over to the moments estimator.
5 Miscellaneous notes
The following offers remarks on computation (Sect. 5.1) and comments on some applications of the VC method in economics that illustrate aspects of potential interest in other fields (Sect. 5.2). Some illustration provided by simulation studies is given in Sect. 5.3. Section 5.4 discusses the problem of artifacts. Some methodological concerns are raised in Sect. 5.5.
5.1 Notes on computation
The VC method has been implemented in some freely available software packages (Ludsteck 2004, 2018; Schlicht 2005b, 2021). Although these have been developed under the assumption that all disturbances are Gaussian, the numerical routines, briefly sketched at the end of Sects. 4.1 and 4.3, remain appropriate for the non-Gaussian case.
Schlicht and Ludsteck (2006, Sec. 11) have compared the performance of the moments estimator with that of the Kalman filter in the EViews (2005) implementation for the Gaussian case and conclude that “both estimators perform very similar—with the caveat that the Eviews estimates have been calculated by using the theoretical values as starting values. ...The distributions of the estimates for the weights are practically indistinguishable.” Given that true variances are unavailable in practical applications and that the Kalman results appear to be quite sensitive to the choice of initial values, this speaks for the VC method in the case that the coefficients follow a random walk. Further, the VC method dispenses with the necessity of specifying initial values and offers additional descriptive features, as indicated by Claim 1 and Eq. (2.8).
5.2 Notes on applications
In spite of its so far insufficient documentation, VC has found quite a number of applications in various settings, mainly dealing with structural change. As any of the authors of these studies will be a better judge of the practical performance of the VC method than this author (who is neither an applied economist, nor an econometrician, nor a statistician), any comments in this regard from my side appear unwarranted. Yet it may be appropriate to illustrate possible uses of the VC method by means of some examples taken from my field, economics.
In the wake of the financial crisis of 2008, it has been observed that “monetary policy rules change gradually, pointing to the importance of applying a time-varying estimation framework” (Baxa et al. 2014) and that, “by applying the time-varying coefficients method ...it was clear that the past financial crisis caused the central bank to be more expansionary in its policy than usual towards financial stress” (Madsen 2012). Further, analyses of inflation targeting (IT) in “a time-varying coefficients methodology ...show a clear picture of credibility gains from the adoption of IT” (Nogueira 2009). Another application dealt with the recent decoupling of greenhouse gas emissions and gross domestic product in the wake of global warming where it has been found that “the evidence for decoupling among the richer countries gets weaker.” (Cohen et al. 2017). Regarding the relationship between unemployment and economic growth, known in economics as “Okun’s Law”, it has been contested that the relationship has been static over time (Jalles 2018) and that, actually, “deregulation in labor and product markets and recessions have strengthened the response of unemployment to the business cycle” (Furceri et al. 2019).
Such applications suggest to me that the VC method may offer an additional useful way for dealing with linear models with coefficients that follow a random walk, and I hope that similar applications will be found in other fields.
5.3 Some illustration
To illustrate the practical workings of VC, assume a model with an intercept term \(a_{t}\) and a single explanatory variable \(x_{t}\) with coefficient \(b_{t}\):
Using the simulation tool from Ludsteck (2004, 2018), a time series for the explanatory variable was generated with \(x_{t}\sim {\mathcal {N}}\left( 0,100\right)\), \(t=1,2,\ldots ,50\). Further it was assumed that \(u_{t}\sim {\mathcal {N}}\left( 0,0.1\right)\), \(\left( a_{t}-a_{t-1}\right) \sim {\mathcal {N}}\left( 0,0.01\right)\), and \(\left( b_{t}-b_{t-1}\right) \sim {\mathcal {N}}\left( 0,0.001\right)\). Typically, the optimally computed expectations of the time paths (calculated using the true variances) and the VC estimates lie very close together. Figure 1 illustrates a somewhat atypical run in which the estimated smoothing weights deviate from the true smoothing weights by a factor of about five. Even so, the optimally estimated time-paths of the coefficients (based on the true variances) and the estimated time-paths (based on the estimated variances) move together. This illustrates the general impression that the filtering results, especially the qualitative time-patterns, are not very sensitive to the weights used for filtering.
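The simulation setup just described is easy to reproduce (a sketch: the variances follow the text, while the seed and the initial coefficient levels are arbitrary choices, and the implementation is of course not Ludsteck's tool):

```python
import numpy as np

rng = np.random.default_rng(42)   # seed is an arbitrary choice
T = 50

# explanatory variable and disturbances as described in the text
x = rng.normal(0.0, np.sqrt(100.0), T)      # x_t ~ N(0, 100)
u = rng.normal(0.0, np.sqrt(0.1), T)        # u_t ~ N(0, 0.1)
da = rng.normal(0.0, np.sqrt(0.01), T - 1)  # a_t - a_{t-1} ~ N(0, 0.01)
db = rng.normal(0.0, np.sqrt(0.001), T - 1) # b_t - b_{t-1} ~ N(0, 0.001)

# random-walk coefficient paths (initial levels chosen arbitrarily)
a = 1.0 + np.concatenate(([0.0], np.cumsum(da)))
b = 2.0 + np.concatenate(([0.0], np.cumsum(db)))

y = a + b * x + u                           # y_t = a_t + b_t x_t + u_t
```

With these variances the true smoothing weights are \(\gamma _{a}=\sigma ^{2}/\sigma _{a}^{2}=10\) and \(\gamma _{b}=\sigma ^{2}/\sigma _{b}^{2}=100\).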
It is, obviously, never possible to extract the movement of the true coefficients from the data, irrespective of how long the time series is. (Only the estimation of the weights will improve with the length of the time series.) The best that can be done is to estimate the expectations of the coefficients. Given the variances, the VC estimate (which is the mean of a random vector) is optimal and cannot be improved upon, and the standard of comparison must be the estimates obtained with optimal weights, as in Fig. 1.
The distribution of the weights in the above setting is illustrated in Fig. 2. The time series for x, u, and v were generated as described above, and the VC moments estimation was applied 5000 times. The histogram in Fig. 2 illustrates that the estimates cluster around their theoretical values.
5.4 Artifacts
Suppose that the data of a particular problem have been generated by the standard linear model (1.2). If this is the case, the VC model is misspecified, because a correct estimation would require that the variances \(\sigma _{1}^{2},\sigma _{2}^{2},\ldots ,\sigma _{n}^{2}\) of the coefficients are zero and the weights \(\gamma _{1},\gamma _{2},\ldots ,\gamma _{n}\)—the inverse variance ratios—are infinite, whereas VC implicitly assumes that the weights are finite. As the VC estimates with sufficiently large weights \(\gamma _{i}\) are indistinguishable from the OLS estimates, the VC estimation would nevertheless be approximately correct if the estimated weights are sufficiently large.Footnote 6
As VC estimates involve nearly twice as many parameters as OLS, there is more room for artifacts in VC. From this point of view, VC ought to be used with caution, especially if all parameters are permitted to vary over time, rather than just a selected few. To illustrate, consider a linear model \(y_{t}=a+bx_{t}+u_{t}\) with \(a=1\), \(b=2\), \(x_{t}\) drawn from a normal distribution with mean zero and variance 5, and \(u_{t}\) normally distributed with mean zero and variance \(\sigma _{u}^{2}=0.1\). The histogram of the lowest estimated weights is given in Fig. 3. In 99% of the cases, the minimum weight is above 7.97, and in 95% of the cases, the minimum weight is above 34.6. The corresponding VC estimates are given in Fig. 4. In the 1% case, the estimates of the time paths involve severe artifacts. In the 5% case, artifacts are still present, but in the majority of cases, VC estimates conform to OLS estimates. Further, VC does not reject the hypothesis of time-invariant parameters in 99% of the cases. This observation suggests that VC may be used to check the linear specification of a time-series model.
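The claim that VC estimates with large weights are indistinguishable from OLS can be checked directly with the penalized least squares formulation: for given weights, the coefficient paths minimize the sum of squared residuals plus weighted sums of squared first differences. The following Python sketch (NumPy in place of the author's Mathematica package; seed and sample size are arbitrary assumptions) shows the paths collapsing to the OLS line when the weights are very large.

```python
import numpy as np

def vc_fit(y, x, gamma_a, gamma_b):
    """Penalized least squares for y_t = a_t + b_t x_t + u_t:
    minimize ||y - a - b*x||^2 + gamma_a ||D a||^2 + gamma_b ||D b||^2,
    where D is the first-difference operator."""
    T = len(y)
    D = np.diff(np.eye(T), axis=0)           # (T-1) x T first-difference matrix
    Z = np.hstack([np.eye(T), np.diag(x)])   # y = Z @ (a_1..a_T, b_1..b_T)
    P = np.zeros((2 * T, 2 * T))
    P[:T, :T] = gamma_a * (D.T @ D)
    P[T:, T:] = gamma_b * (D.T @ D)
    theta = np.linalg.solve(Z.T @ Z + P, Z.T @ y)
    return theta[:T], theta[T:]

# Data from the constant-coefficient model y_t = 1 + 2 x_t + u_t of the text
rng = np.random.default_rng(0)
T = 100                                      # sample size is an assumption
x = rng.normal(0.0, np.sqrt(5.0), T)         # variance 5, as in the text
u = rng.normal(0.0, np.sqrt(0.1), T)         # variance 0.1
y = 1.0 + 2.0 * x + u

# With very large weights the fitted paths are flat and match OLS
a_hat, b_hat = vc_fit(y, x, 1e8, 1e8)
ols, *_ = np.linalg.lstsq(np.column_stack([np.ones(T), x]), y, rcond=None)
```

With smaller weights the same routine produces time-varying paths, which is where the artifacts discussed above can arise.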
With higher/lower noise, the problem of artifacts becomes more/less severe.Footnote 7 Still, the problem has to be kept in mind when interpreting VC results.
5.5 Aggregate data, Pyrrho’s lemma, and the VC philosophy
Almost all economic models deal with aggregate data. Employment comprises women and men, different age groups, and various occupations in sundry industries scattered over many regions. The wage level summarizes the earnings of all these people. Similarly, production comprises a multitude of goods and services, and the price level is just an index of thousands of the attached prices. The structures of these aggregates are not rigid but change over time in response to changing technologies, shifting tastes, and volatile business conditions. To assume that time-invariant laws govern the interaction of time series of such aggregates seems preposterous to me. Some researchers have tried to cope with the problem by using weighted regression—giving higher weights to more recent observations (Gilchrist 1967; Rouhiainen 1978). This seems to me an inferior alternative to VC.
The reason for developing VC was my desire to show that a Marshallian view of economics, that involves time-varying structures, does not render quantitative economics impossible. Estimation can be done by using Kalman filtering, or the VC method described in this paper, or perhaps other methods. I advocated estimating time-varying structures with Kalman filtering in Schlicht (1977, Appendix B), but without any resonance. This puzzled me. Was this really such a bad idea?
Maybe it wasn’t, but the puzzle remains. What were the reasons for the decade-long resistance to dealing with time-varying coefficients? And why has this somewhat changed over the past fifteen years?
One reason may have been that structures changing over time cannot represent the ’true model’ economists were chasing during the heyday of ’dynamic stochastic general equilibrium’ macroeconomics. The existence of such a ’true model’ was simply postulated (Lucas 1976, 24). I think that this is, in the context of aggregate models dealing with long-run time series, a red herring, distracting from considering seriously what aggregate models represent.Footnote 8
Another reason, I submit, was the reductionist bent of economists. If a structure changes over time, this warrants explanation. Hence there was a tendency to add additional explanatory variables as ’controls’ in order to explain the change. While this may be sensible in certain cases, it is unnecessary and even obfuscating if the changes brought about by such outside forces are slow and independent of the relationships under study.Footnote 9 Further, the introduction of such controls seems, statistically speaking, problematic because of the following theorem that has been provided by Theo Dijkstra (1995, 122).
Pyrrho’s Lemma: For every collection of vectors, consisting of observations on a regressand and regressors, it is possible to get any set of coefficients as well as any set of predictions with variances as small as one desires, just by adding one additional vector from a continuum of vectors.
In other words: There exists a time series \(x_{n+1}\) that, if added to the explanatory variables \(x_{1},x_{2},\ldots ,x_{n}\) in the standard linear model (1.2), will deliver arbitrarily predetermined coefficients and variances as estimates. This should make us reluctant to seek to explain too much by inserting additional controls which, taken together, span an entire set of such additional time series. Further, the procedure can generate the mirage of a ’true model’ in cases when such a model actually does not exist. Using VC reduces the necessity for adding further controls and mitigates, therefore, Pyrrho’s problem.
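Dijkstra's construction is elementary enough to verify numerically: for any target coefficient vector \(\beta ^{*}\) and any \(c\ne 0\), the added regressor \(z=(y-X\beta ^{*})/c\) makes \(y=X\beta ^{*}+cz\) hold exactly, so OLS on the augmented regressor set returns \((\beta ^{*},c)\) with zero residuals. The data, target coefficients, and seed in the following sketch are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 40, 3
X = rng.normal(size=(n, k))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n)  # some ordinary data

beta_target = np.array([10.0, 20.0, 30.0])  # any coefficients we want to see
c = 1.0                                     # any nonzero coefficient for the added regressor
z = (y - X @ beta_target) / c               # the crafted additional regressor

# OLS on the augmented regressor set [X, z]
X_aug = np.hstack([X, z[:, None]])
coef, *_ = np.linalg.lstsq(X_aug, y, rcond=None)
resid = y - X_aug @ coef
# coef recovers (beta_target, c), with a perfect fit
```

Since the fit is exact, the estimated residual variance is zero, matching the "variances as small as one desires" part of the lemma.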
Let me add another remark. The VC model (1.4), (1.5) can easily be generalized in many ways. A possibility would be, for instance, to replace \(a_{i,t+1}=a_{i,t}+v_{i,t}\) by \(a_{i,t+1}=\theta _{i}\left( a_{i,t}-\bar{a}_{i}\right) +v_{i,t}\). Such generalizations (and many more) can be handled by Kalman filtering. So why not allow for more general specifications?
My objection would be that such generalizations would impinge on the descriptive transparency of the VC method which is, to me, a major concern—trumping more technical statistical considerations.
An estimation method, such as VC, can be viewed as a filter that seeks to identify certain patterns in clouds of data. In doing so, such a filter gives preference to certain patterns rather than others. The patterns preferred by the VC method conform to the desiderata underlying the descriptive account (Sect. 1.4): that the coefficients remain as time-invariant as possible and that a good fit is obtained. This makes sure that all estimated variability over time is driven by the data, rather than by a built-in preference of the model, as would be the case in auto-regressive specifications.
Unfortunately, the determination of the weights used in VC is descriptively less transparent than the desiderata of stable coefficients and a good fit, but it nevertheless carries some descriptive meaning; in this regard, at least, there is room for improvement.
6 Conclusion
The VC method outlined in this paper addresses linear models with coefficients that are generated by a random walk. This is a rather special case, as it does not cover models whose coefficients follow more general stochastic processes. Yet it is a case that has received special attention in the literature, at least in economics. Focussing on this somewhat narrow class of models offers some benefits, however. Non-Gaussian disturbances can be admitted. The researcher is not required to postulate initial values that are typically unknown. Further, the time-averages of the estimated coefficients can be linked to the coefficients of an associated linear GLS (Aitken) model with constant coefficients. Moreover, the VC method permits an easy treatment of the case that only a subset of the coefficients varies over time while the rest of them remains constant. In all this, it allows for a more satisfactory treatment than would be possible within more general approaches, such as Kalman filtering.
Regarding further developments, and being confined to the rather narrow perspective of an economist, I would find it important to devise an appropriate way of dealing with missing observations.Footnote 10 Further, moments estimators for simultaneous equations, again with a clear descriptive interpretation, appear desirable. Yet any attempt in this direction definitely exceeds my capabilities.
Notes
The calculations in this paper follow largely the calculations given in two unrefereed discussion papers (Schlicht 1989; Schlicht and Ludsteck 2006), for which this author was responsible—including some errors. In this paper the calculations have been rearranged, corrected, improved and adapted to cover the case that all disturbances are non-Gaussian.
For the penalized least squares approach, see Green and Silverman (2000). The approach was introduced by Whittaker (1923), Henderson (1924) and Leser (1961). It has been used also by Hodrick and Prescott (1997), and has been further developed by Leser (1963), Schlicht (1981), Schlicht and Pauly (1983), Schlicht (1984) and Schlicht (2005a). Other approaches such as Hastie and Tibshirani (1993) and Fan and Zhang (1999) make use of splines. This line of argument will not be pursued here.
A referee rightly pointed out that, in general, the convergence of functions does not necessarily imply the convergence of their maximizers. In this case, this criticism does not seem to apply, because both \({\mathcal {C}}_{M}\) and \({\mathcal {C}}_{L}\) are smoothly differentiable functions. The minima are characterized by the gradient equations (3.17), (3.18), i.e. \(\frac{\partial {\mathcal {C}}_{M}}{\partial \sigma ^{2}}=0\), \(\frac{\partial {\mathcal {C}}_{M}}{\partial \sigma _{i}^{2}}=0\) and the corresponding equations for the likelihood criterion \(\frac{\partial {\mathcal {C}}_{L}}{\partial \sigma ^{2}}=0\), \(\frac{\partial {\mathcal {C}}_{L}}{\partial \sigma _{i}^{2}}=0\). These conditions converge, too.
The following is taken from Schlicht and Ludsteck (2006, Sec. 10).
Even in the rather ill-conditioned case of \(\sigma _{u}^{2}=1\), VC does not reject the hypothesis that the coefficients may be time-invariant in 90% of the cases. The interested reader may explore the programming underlying Figs. 3 and 4 by consulting the Mathematica notebook given in the accompanying material; other cases of interest may be explored by running the notebook with alternative parameter settings.
Regarding the HP filter, I have proposed a method along with a software package to cope with missing data and structural breaks (Schlicht 2008, 2011). As the HP filter covers just a special case of VC—VC applied to a time series with a single explanatory variable of unity and a time-varying coefficient for this explanatory variable—a generalization of this method to VC may be feasible.
References
Athans, M. (1974). The importance of Kalman filtering methods for economic systems. In Annals of economic and social measurement (Vol. 3, No. 1, pp. 49–64). NBER Chapters. National Bureau of Economic Research, Inc. https://ideas.repec.org/h/nbr/nberch/9994.html
Baxa, J., Horváth, R., & Vašíček, B. (2014). How does monetary policy change? Evidence on inflation-targeting countries. Macroeconomic Dynamics, 18(3), 593–630. https://doi.org/10.1017/S1365100512000545
Cohen, G., Jalles, J., Loungani, P., & Marto, R. (2017). Emissions and growth: Trends and cycles in a globalized world. IMF Working Paper (WP17/191). https://www.imf.org/~/media/Files/Publications/WP/2017/wp17191.ashx
Cooley, T. F. & Prescott, E. C. (1973). An adaptive regression model. International Economic Review, 14(2), 364–371. https://ideas.repec.org/a/ier/iecrev/v14y1973i2p364-71.html
Dijkstra, T. (1995). Pyrrho’s lemma, or have it your way. Metrika: International Journal for Theoretical and Applied Statistics 42(1), 119–125. https://ideas.repec.org/a/spr/metrik/v42y1995i1p119-125.html
EViews (2005). Eviews 5.1 standard edition. http://eviews.com/home.html
Fan, J., & Zhang, W. (1999). Statistical estimation in varying coefficient models. Annals of Statistics, 27, 1491–1518. https://doi.org/10.1214/aos/1017939139
Furceri, D., Jalles, J., & Loungani, P. (2019). On the determinants of the Okun’s Law: New evidence from time-varying estimates. Comparative Economic Studies. https://doi.org/10.1057/s41294-019-00111-1
Gilchrist, W. G. (1967). Methods of estimation involving discounting. Journal of the Royal Statistical Society. Series B (Methodological), 29(2), 355–369. http://www.jstor.org/stable/2984595
Green, P. & Silverman, B. (2000). Nonparametric regression and generalized linear models. A roughness penalty approach. Boca Raton: Chapman & Hall. https://www.amazon.de/Nonparametric-Regression-Generalized-Linear-Models/dp/0412300400
Hastie, T. & Tibshirani, R. (1993). Varying-coefficient models. Journal of the Royal Statistical Society. Series B (Methodological), 55(4), 757–796. http://www.jstor.org/stable/2345993
Henderson, R. (1924). A new method of graduation. Transactions of the Actuarial Society of America, 25, 29–40.
Hodrick, R. J., & Prescott, E. C. (1997). Postwar U.S. business cycles: An empirical investigation. Journal of Money, Credit and Banking, 29(1), 1–16. https://ideas.repec.org/a/mcb/jmoncb/v29y1997i1p1-16.html
Jalles, J. T. (2018). On the time-varying relationship between unemployment and output: What shapes it? Scottish Journal of Political Economy, 66, 605–630.
Keynes, J. M. (1939). Professor Tinbergen’s method. Economic Journal, 49(195), 558–568. https://www.jstor.org/stable/2224838
Keynes, J. M. (1973). The general theory and after, part II: Defense and development. The collected works of John Maynard Keynes (Vol. XIV). London: Macmillan.
Leser, C. E. V. (1961). A simple method of trend construction. Journal of the Royal Statistical Society. Series B (Methodological), 23, 91–107. http://www.jstor.org/stable/2983845
Leser, C. E. V. (1963). Estimation of quasi-linear trend and seasonal variation. Journal of the American Statistical Association, 58, 1033–1043. http://www.jstor.org/stable/2283329
Lucas, R. (1976). The econometric policy evaluation: a critique. In K. Brunner & A. H. Meltzer (Eds.), Phillips curve and labor markets (pp. 19–46). Amsterdam: North Holland. https://EconPapers.repec.org/RePEc:eee:crcspp:v:1:y:1976:i::p:19-46
Ludsteck, J. (2004). VC Package for Mathematica. http://library.wolfram.com/infocenter/MathSource/5195/
Ludsteck, J. (2018). VC packages for estimating time-varying coefficients with mathematica, pp. 8–11. https://epub.ub.uni-muenchen.de/59479/
Madsen, M. W. (2012). Does financial stress have an impact on monetary policy? An econometric analysis using Norwegian data, Ph.D. thesis. Department of Economics, University of Oslo. https://www.duo.uio.no/bitstream/handle/10852/17126/Madsen_Michael_Masteroppgave.pdf
Nogueira, R. P. (2009). Testing credibility with time-varying coefficients. Applied Economics Letters, 16(18), 1813–1817. https://doi.org/10.1080/13504850701719611
Rouhiainen, J. (1978). The problem of changing parameters in demand analysis and forecasting. European Review of Agricultural Economics, 5(3–4), 349–359. https://ideas.repec.org/a/oup/erevae/v5y1978i3-4p349-359..html
Schlicht, E. (1973). Forecasting Markov chains. A theoretical foundation for exponential smoothing, Working paper B 13. Department of Economics, University of Regensburg. http://www.semverteilung.vwl.uni-muenchen.de/mitarbeiter/es/paper/schlicht-exponential_smoothing.pdf
Schlicht, E. (1977). Grundlagen der ökonomischen Analyse. Reinbek: Rowohlt. https://epub.ub.uni-muenchen.de/25821/
Schlicht, E. (1978). Die Methode der Gleichgewichtsbewegung als Approximationsverfahren. Chapters in Economics (pp. 293–305). Berlin: Duncker und Humblot. https://ideas.repec.org/h/lmu/muench/3149.html
Schlicht, E. (1981). A seasonal adjustment principle and a seasonal adjustment method derived from this principle. Journal of the American Statistical Association, 76(374), 374–378. Paper presented at the Econometric Society European Meeting Helsinki 1976. http://www.semverteilung.vwl.uni-muenchen.de/mitarbeiter/es/paper/schlicht-seasonal_adjustment.pdf
Schlicht, E. (1984). Seasonal adjustment in a stochastic model. Statistical Papers, 25, 1–12. http://www.semverteilung.vwl.uni-muenchen.de/mitarbeiter/es/paper/schlicht_seasonal-adjustment-in-a-stochastic-model.pdf
Schlicht, E. (1985). Isolation and aggregation in economics, annotated electronic reprint 2017 Edition. Berlin: Springer. https://ideas.repec.org/p/lmu/muenec/38821.html
Schlicht, E. (1989). Variance estimation in a random coefficients model, Munich Discussion Paper. Paper presented at the Econometric Society European Meeting, Munich. https://epub.ub.uni-muenchen.de/59143/
Schlicht, E. (1990). Local aggregation in a dynamic setting. Journal of Economics, 51(3), 287–305. https://ideas.repec.org/a/kap/jeczfn/v51y1990i3p287-305.html
Schlicht, E. (1997). The moving equilibrium theorem again. Economic Modelling, 14(2), 271–278. https://ideas.repec.org/a/eee/ecmode/v14y1997i2p271-278.html
Schlicht, E. (2005a). Estimating the smoothing parameter in the so-called Hodrick–Prescott filter. Journal of the Japan Statistical Society, 35, 99–119. https://doi.org/10.14490/jjss.35.99
Schlicht, E. (2005b). VCC—A program for estimating time-varying coefficients. Console Version With Source Code in C. http://epub.ub.uni-muenchen.de/archive/00000719/
Schlicht, E. (2008). Trend extraction with missing observations and structural breaks. Journal of the Japan Statistical Society. https://www.jstage.jst.go.jp/article/jjss/38/2/38_2_285/_pdf
Schlicht, E. (2011). Mend-A mathematica package for mending time series with missing observations and structural breaks. https://epub.ub.uni-muenchen.de/12227/
Schlicht, E. (2021). VC—A program for estimating time-varying coefficients. Version, 6. https://doi.org/10.5282/ubm/epub.684
Schlicht, E. & Ludsteck, J. (2006). Variance estimation in a random coefficients model. Munich Discussion Paper (2006–2012). https://ideas.repec.org/p/lmu/muenec/904.html
Schlicht, E. & Pauly, R. (1983). Descriptive seasonal adjustment by minimizing perturbations. Empirica. http://www.semverteilung.vwl.uni-muenchen.de/mitarbeiter/es/paper/schlicht-pauly-perturbations.pdf
Tinbergen, J. (1940). On a method of statistical business-cycle research. A reply. Economic Journal 50(197), 141–154. http://www.jstor.org/stable/2225763
Whittaker, E. T. (1923). On a new method of graduation. Proceedings of the Edinburgh Mathematical Society, 41, 63–75.
Acknowledgements
I wish to thank two referees of this journal who provided extremely competent, detailed, timely and very helpful comments that led to many improvements. Special thanks go to Johannes Ludsteck, who helped me over the years with expert advice. I wish further to thank Daniela Diekmann, Theo Dijkstra, João Tovar Jalles, Walter Krämer, José Alberto Bravo Lopez, and Ralf Pauly for comments on earlier attempts. Not least, I wish to thank all the authors who have used the VC method. Without the encouragement that this conveyed, I would not have been motivated to write this documentation.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Schlicht, E. VC: a method for estimating time-varying coefficients in linear models. J. Korean Stat. Soc. 50, 1164–1196 (2021). https://doi.org/10.1007/s42952-021-00110-y
Keywords
- Time-series analysis
- Linear model
- State-space estimation
- Time-varying coefficients
- Moments estimation
- Kalman filtering
- Penalized least squares
- HP-Filter