1 Introduction

The subject of this paper is random coefficients regression (RCR) models. These models were initially introduced in the biosciences (see e. g. Henderson 1975) and are now popular in many other fields of statistical application. Besides the estimation of the population (fixed) parameters, the prediction of the individual random effects in RCR models is often of primary interest. Locally optimal designs for the prediction have been discussed in Prus and Schwabe (2016a, b). However, these designs depend on the covariance matrix of the random effects. Therefore, robust criteria like minimax (or maximin), which minimize the largest value of the criterion or maximize the smallest efficiency with respect to the unknown variance parameters, are to be considered. For fixed effects models, such robust design criteria have been well discussed in the literature (see e. g. Müller and Pázman 1998; Dette et al. 1995; Schwabe 1997). For optimal designs in nonlinear models see e. g. Pázman and Pronzato (2007), Pronzato and Walter (1988) and Fackle-Fornius et al. (2015).

Here we focus on the minimax-criterion for the prediction in RCR models, which minimizes the “worst case” of the basic criterion with respect to the variance parameters. We choose the integrated mean squared error (IMSE) as the basic criterion and consider particular linear and quadratic regression models in detail.

The structure of this paper is the following: The second part specifies the RCR models and presents the best linear unbiased prediction of the individual random parameters. The third part provides the minimax-optimal designs for the prediction. The paper concludes with a short discussion in the last part.

2 RCR model

We consider RCR models in which observation j of individual i is given by the following formula:

$$\begin{aligned} {Y}_{ij}=\mathbf {f}(x_{j})^\top {\varvec{\beta }}_i+ \varepsilon _{ij},\quad j=1, \dots , m, \quad i=1, \dots , n, \quad x_j\in \mathcal {X}, \end{aligned}$$
(1)

where m is the number of observations per individual, n is the number of individuals, and \(\mathbf {f} =(f_1, \dots ,f_p)^\top \) is a vector of known regression functions. The experimental settings \(x_j\) come from an experimental region \(\mathcal {X}\). The observational errors \(\varepsilon _{ij}\) are assumed to have zero mean and common variance \(\sigma ^2>0\). The individual parameters \({\varvec{\beta }}_i=( \beta _{i1}, \dots , \beta _{ip})^\top \) have unknown expected value (population mean) \(\mathrm {E}\,({\varvec{\beta }}_i)={{\varvec{\beta }}}\) and known covariance matrix \(\mathrm {Cov}\,({\varvec{\beta }}_i)=\sigma ^2\mathbf {D}\). All individual parameters \({\varvec{\beta }}_{i}\) and all observational errors \(\varepsilon _{ij}\) are assumed to be uncorrelated.

The best linear unbiased predictor for the individual parameter \({\varvec{\beta }}_i\) is given by

$$\begin{aligned} \hat{{\varvec{\beta }}}_i=\mathbf {D}((\mathbf {F}^\top \mathbf {F})^{-1}+\mathbf {D})^{-1}\hat{{\varvec{\beta }}}_{i;\mathrm{ind}}+(\mathbf {F}^\top \mathbf {F})^{-1}((\mathbf {F}^\top \mathbf {F})^{-1}+\mathbf {D})^{-1}\hat{{\varvec{\beta }}}, \end{aligned}$$
(2)

where \(\hat{{\varvec{\beta }}}_{i;{\mathrm {ind}}}=(\mathbf {F}^\top \mathbf {F})^{-1}\mathbf {F}^\top \mathbf {Y}_i\) is the individualized estimator based only on observations at individual i, \(\hat{{\varvec{\beta }}}=(\mathbf {F}^\top \mathbf {F})^{-1}\mathbf {F}^\top \bar{\mathbf {Y}} \) is the best linear unbiased estimator for the population mean parameter, \(\mathbf {Y}_i=(Y_{i1}, \dots ,Y_{im})^\top \) is the individual vector of observations, \(\bar{\mathbf {Y}} =\frac{1}{n}\sum _{i=1}^n{\mathbf {Y}_i}\) is the mean observational vector and \(\mathbf {F}=(\mathbf {f}(x_1), \dots , \mathbf {f}(x_m))^\top \) is the design matrix, which is assumed to be of full column rank. If the dispersion matrix \(\mathbf {D}\) of individual random effects is non-singular, the best linear unbiased predictor (2) simplifies to

$$\begin{aligned} \hat{{\varvec{\beta }}}_i=(\mathbf {F}^\top \mathbf {F}+\mathbf {D}^{-1})^{-1}(\mathbf {F}^\top \mathbf {F}\,\hat{{\varvec{\beta }}}_{i;\mathrm{ind}}+\mathbf {D}^{-1}\hat{{\varvec{\beta }}}) \end{aligned}$$
(3)

and may be recognized as a weighted average of the individualized estimator \(\hat{{\varvec{\beta }}}_{i;{\mathrm {ind}}}\) and the estimator \(\hat{{\varvec{\beta }}}\) for the population parameter.
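
As a quick numerical illustration, the following sketch simulates data from a straight-line RCR model and computes the predictors (2) and (3); all model sizes and parameter values are hypothetical, and the equality of the two forms serves as a consistency check.

```python
# A minimal sketch of predictors (2) and (3); sizes and values hypothetical.
import numpy as np

rng = np.random.default_rng(0)
m, n, p = 4, 8, 2
F = np.column_stack([np.ones(m), np.linspace(0, 1, m)])  # design matrix F
D = np.diag([0.5, 1.0])                                   # Cov(beta_i)/sigma^2
B = np.array([1.0, 2.0]) + rng.multivariate_normal(np.zeros(p), D, size=n)
Y = B @ F.T + rng.normal(size=(n, m))                     # sigma^2 = 1

FtF = F.T @ F
FtF_inv = np.linalg.inv(FtF)
beta_ind = Y @ F @ FtF_inv                 # individualized estimators (rows)
beta_pop = FtF_inv @ F.T @ Y.mean(axis=0)  # estimator of the population mean

# weighted-average form (3), valid for non-singular D
blup3 = np.linalg.solve(FtF + np.linalg.inv(D),
                        FtF @ beta_ind.T
                        + np.linalg.inv(D) @ beta_pop[:, None]).T
# general form (2) yields the same predictions
blup2 = (D @ np.linalg.solve(FtF_inv + D, beta_ind.T)
         + FtF_inv @ np.linalg.solve(FtF_inv + D, np.tile(beta_pop, (n, 1)).T)).T
assert np.allclose(blup2, blup3)
```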

The mean squared error matrix of the vector \(\hat{\mathbf {B}}=(\hat{{\varvec{\beta }}}_1^\top , \dots ,\hat{{\varvec{\beta }}}_n^\top )^\top \) of all predictors of all individual parameters is given by the following formula (see e. g. Prus and Schwabe 2016b):

$$\begin{aligned}&\mathrm {MSE}\nonumber \\&\quad = \sigma ^2\left( \frac{1}{n}\mathbf {J}_{n} \otimes (\mathbf {F}^\top \mathbf {F})^{-1}+\left( \mathbf {I}_n-\frac{1}{n}\mathbf {J}_{n}\right) \otimes \left( \mathbf {D}-\mathbf {D}((\mathbf {F}^\top \mathbf {F})^{-1}+\mathbf {D})^{-1}\mathbf {D}\right) \right) ,\nonumber \\ \end{aligned}$$
(4)

where \(\mathbf {I}_n\) denotes the identity matrix, \(\mathbf {J}_n\) is the square matrix of order n with all entries equal to 1 and \(\otimes \) denotes the Kronecker product. For a non-singular covariance matrix of the random effects, the mean squared error matrix (4) simplifies to

$$\begin{aligned} \mathrm {MSE}= \sigma ^2\left( \frac{1}{n}\mathbf {J}_n\otimes \left( \mathbf {F}^\top \mathbf {F}\right) ^{-1} + \left( \mathbf {I}_n-\frac{1}{n}\mathbf {J}_n \right) \otimes \left( \mathbf {F}^\top \mathbf {F}+\mathbf {D}^{-1}\right) ^{-1} \right) . \end{aligned}$$
(5)

3 Optimal designs

In this paper we define exact designs as follows:

$$\begin{aligned} \xi = \left( \begin{array}{ccc} x_1 &{} , \dots , &{} x_k \\ m_1 &{}, \dots ,&{} m_k \end{array} \right) , \end{aligned}$$
(6)

where \(x_1, \dots , x_k\) are the distinct experimental settings (support points), \(k\le m\), and \(m_1, \dots , m_k\) are the corresponding numbers of replications. For analytical purposes we will focus on approximate designs, which we define as

$$\begin{aligned} \xi = \left( \begin{array}{ccc} x_1 &{} , \dots , &{} x_k \\ w_1 &{}, \dots ,&{} w_k \end{array} \right) , \end{aligned}$$
(7)

where \(w_j=m_j/m\) and only the conditions \(w_j\ge 0\) and \(\sum _{j=1}^{k}w_j=1\) have to be satisfied (integer numbers of replications are not required). Further we will use the notation

$$\begin{aligned} \mathbf {M}(\xi )=\frac{1}{m}\sum _{j=1}^k m_j\mathbf {f}(x_j)\mathbf {f}(x_j)^\top \end{aligned}$$
(8)

for the standardized information matrix from the fixed effects model and \(\varvec{\varDelta }=m\, \mathbf {D}\) for the adjusted dispersion matrix of the random effects. We assume the matrix \(\mathbf {M}(\xi )\) to be non-singular. With this notation the definition of the mean squared error matrix of the prediction [formulas (4) and (5)] can be extended for approximate designs to

$$\begin{aligned}&\mathrm {MSE}(\xi )= \nonumber \\&{\frac{1}{n}}\mathbf {J}_n \otimes \mathbf {M}(\xi )^{-1} + \left( \mathbf {I}_n-{\frac{1}{n}}\mathbf {J}_n \right) \otimes \left( {{\varvec{\Delta }}}-{{\varvec{\Delta }}}\left( \mathbf {M}(\xi )^{-1}+{{\varvec{\Delta }}}\right) ^{-1}{{{\varvec{\Delta }}}}\right) , \end{aligned}$$
(9)

for general dispersion matrix \(\mathbf {D}\), and

$$\begin{aligned} \mathrm {MSE}(\xi )= {\frac{1}{n}}\mathbf {J}_n\otimes \mathbf {M}(\xi )^{-1} + \left( \mathbf {I}_n-{\frac{1}{n}}\mathbf {J}_n \right) \otimes \left( \mathbf {M}(\xi )+{{\varvec{\Delta }}}^{-1}\right) ^{-1}, \end{aligned}$$
(10)

for non-singular \(\mathbf {D}\), when we neglect the constant factor \(\frac{\sigma ^2}{m}\).
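
The standardized quantities above translate directly into code. The following sketch (numpy, with a hypothetical straight-line design) implements \(\mathbf {M}(\xi )\) from (8) and the MSE matrix (9), and checks it against the simplified form (10) for non-singular \(\mathbf {D}\).

```python
import numpy as np

def info_matrix(f, xs, ws):
    """Standardized information matrix (8): M(xi) = sum_j w_j f(x_j)f(x_j)^T."""
    return sum(w * np.outer(f(x), f(x)) for x, w in zip(xs, ws))

def mse_pred(f, xs, ws, Delta, n):
    """Standardized MSE matrix (9) of the prediction (factor sigma^2/m dropped)."""
    M_inv = np.linalg.inv(info_matrix(f, xs, ws))
    inner = Delta - Delta @ np.linalg.inv(M_inv + Delta) @ Delta
    J = np.ones((n, n))
    return np.kron(J / n, M_inv) + np.kron(np.eye(n) - J / n, inner)

# consistency check of (9) against (10) for non-singular D (hypothetical values)
f_lin = lambda x: np.array([1.0, x])
xs, ws, n = [0.0, 1.0], [0.4, 0.6], 3
Delta = 4 * np.diag([0.5, 1.0])                       # Delta = m D with m = 4
M = info_matrix(f_lin, xs, ws)
inner10 = np.linalg.inv(M + np.linalg.inv(Delta))
J = np.ones((n, n))
mse10 = np.kron(J / n, np.linalg.inv(M)) + np.kron(np.eye(n) - J / n, inner10)
assert np.allclose(mse_pred(f_lin, xs, ws, Delta, n), mse10)
```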

3.1 IMSE-criterion

In this work our main interest is in the prediction of the individual response curves \(\mu _i=\mathbf {f}^\top {\varvec{\beta }}_i\). Therefore, we focus on the integrated mean squared error (IMSE) criterion. The IMSE-criterion for the prediction can be defined (see also Prus and Schwabe 2016b) as the sum over all individuals

$$\begin{aligned} \mathrm {IMSE}_{pred} =\sum _{i=1}^{n}\mathrm {E}\,\left( \int _{\mathcal {X}} (\hat{\mu }_i(x) - \mu _i (x))^2 \nu (\mathrm {d}x)\right) \end{aligned}$$
(11)

of the expected integrated squared distances between the predicted and the true response, \(\hat{\mu }_i=\mathbf {f}^\top \hat{{\varvec{\beta }}}_i\) and \(\mu _i\), with respect to a suitable measure \(\nu \) on the experimental region \(\mathcal {X}\), which is typically chosen to be uniform on \(\mathcal {X}\) with \(\nu (\mathcal {X})=1\). This criterion may also be represented as the following function of the mean squared error matrix \(\mathrm {MSE}\):

$$\begin{aligned} \mathrm {IMSE}_{pred} = \mathrm {tr}\,(\mathrm {MSE}\cdot (\mathbf {I}_n\otimes \mathbf {V})), \end{aligned}$$
(12)

where \(\mathbf {V}=\int _{\mathcal {X}} \mathbf {f}(x)\mathbf {f}(x)^\top \nu (\mathrm {d}x)\), which may be recognized as the information matrix for the weight distribution \(\nu \) in the fixed effects model. For an approximate design \(\xi \) the IMSE-criterion has the form

$$\begin{aligned}&\mathrm {IMSE}_{pred}(\xi ) \nonumber \\&\quad =\mathrm {tr}\left( \mathbf {M}(\xi )^{-1}\mathbf {V}\right) +(n-1)\,\mathrm {tr}\left( \left( {{\varvec{\Delta }}}-{{\varvec{\Delta }}}\left( \mathbf {M}(\xi )^{-1}+{{\varvec{\Delta }}}\right) ^{-1}{{{\varvec{\Delta }}}}\right) \mathbf {V}\right) , \end{aligned}$$
(13)

which, for a non-singular covariance matrix of the individual random parameters, simplifies to a weighted sum of the IMSE-criterion for fixed effects models and the Bayesian IMSE-criterion:

$$\begin{aligned} \mathrm {IMSE}_{pred}(\xi ) = \mathrm {tr}\left( \mathbf {M}(\xi )^{-1}\mathbf {V}\right) +(n-1)\,\mathrm {tr}\left( \left( \mathbf {M}(\xi )+{{\varvec{\Delta }}}^{-1}\right) ^{-1}\mathbf {V}\right) . \end{aligned}$$
(14)
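
For later use, criterion (13) can be evaluated directly; a minimal sketch, reusing info_matrix (and numpy) from the sketch above:

```python
def imse_pred(f, xs, ws, Delta, n, V):
    """IMSE-criterion (13) for the prediction under an approximate design."""
    M = info_matrix(f, xs, ws)
    bayes = Delta - Delta @ np.linalg.inv(np.linalg.inv(M) + Delta) @ Delta
    return np.trace(np.linalg.inv(M) @ V) + (n - 1) * np.trace(bayes @ V)
```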

3.2 Minimax-criteria

In this section we consider optimal designs for the prediction in particular RCR models: straight line and quadratic regression. We define the minimax-criterion as the worst case of the IMSE-criterion with respect to the unknown variance parameters.

We additionally assume a diagonal covariance structure of the random effects. Then the IMSE-criterion [(13) and (14)] increases with increasing values of the variance parameters. However, if all these parameters are large, the criterion function tends to the IMSE-criterion in the fixed effects model (multiplied by the number of individuals n). Therefore, we fix some of the variances and consider the behavior of minimax-optimal designs in the resulting particular cases.

Note that for the special RCR model where only the intercept is random, optimal designs for fixed effects models retain their optimality (see Prus and Schwabe 2016b).

3.2.1 Straight line regression

We consider the linear regression model

$$\begin{aligned} Y_{ij}= \beta _{i1}+\beta _{i2}x_j+\varepsilon _{ij} \end{aligned}$$
(15)

on the experimental region \(\mathcal {X}=[0,1]\) with a diagonal covariance structure of the random effects: \(\mathbf {D}=\text {diag} (d_1, d_2)\). For the IMSE-criterion we choose the uniform weighting \(\nu =\lambda _{[0,1]}\), which leads to

$$\begin{aligned} \mathbf {V}=\int _0^1\mathbf {f}(x)\mathbf {f}(x)^\top \mathrm {d}x=\left( \begin{array}{cc} 1 &{} \frac{1}{2}\\ \frac{1}{2} &{} \frac{1}{3} \end{array} \right) . \end{aligned}$$
(16)

As proved in Prus (2015, ch. 5), IMSE-optimal designs for the prediction in model (15) are of the form

$$\begin{aligned} \xi = \left( \begin{array}{cc} 0 &{} 1 \\ 1-w &{} w \end{array} \right) , \end{aligned}$$
(17)

where w denotes the optimal weight of observations at the support point \(x=1\). Consequently, the standardized information matrix (8) is given by

$$\begin{aligned} \mathbf {M}(\xi )=\left( \begin{array}{cc} 1 &{} w\\ w &{} w \end{array} \right) . \end{aligned}$$
(18)

Using formula (14), we obtain the following form of the IMSE-criterion:

$$\begin{aligned} \mathrm {IMSE}_{pred}(\xi ) = \frac{m}{3}\left( \varPhi _1(\xi )+(n-1)\varPhi _2(\xi , d_1, d_2)\right) , \end{aligned}$$
(19)

where

$$\begin{aligned} \varPhi _1(\xi ) = \frac{1}{mw(1-w)}, \end{aligned}$$
(20)

which is independent of the variance parameters and coincides (up to a constant factor) with the IMSE-criterion for the linear regression model without random effects, and

$$\begin{aligned} \varPhi _2(\xi , d_1, d_2)=\frac{3d_1+d_2+md_1d_2}{1+m(d_1+wd_2)+m^2w(1-w)d_1d_2}, \end{aligned}$$
(21)

which depends on both the weight w of the observations at the support point \(x=1\) and the variance parameters \(d_1\) and \(d_2\).
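
The closed forms (19)–(21) can be cross-checked numerically against the general criterion (13), reusing the imse_pred sketch from Sect. 3.1; the numeric values below are hypothetical.

```python
m_, n_, w_, d1, d2 = 6, 10, 0.6, 0.4, 1.5
f_lin = lambda x: np.array([1.0, x])
V_lin = np.array([[1.0, 0.5], [0.5, 1/3]])            # matrix (16)
lhs = imse_pred(f_lin, [0.0, 1.0], [1 - w_, w_],
                m_ * np.diag([d1, d2]), n_, V_lin)
phi1 = 1 / (m_ * w_ * (1 - w_))
phi2 = ((3*d1 + d2 + m_*d1*d2)
        / (1 + m_*(d1 + w_*d2) + m_**2 * w_*(1 - w_)*d1*d2))
assert np.isclose(lhs, (m_ / 3) * (phi1 + (n_ - 1) * phi2))
```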

Further we focus on the situation of small intercept dispersion \(d_1\) or, equivalently, small intercept variance \(\sigma ^2d_1\). In this case, the IMSE-criterion may be computed as the limit of criterion (19) for \(d_1\rightarrow 0\):

$$\begin{aligned} \mathrm {IMSE}_{pred}(\xi ) = \frac{m}{3}\left( \frac{1}{mw(1-w)}+(n-1)\frac{d_2}{1+mwd_2}\right) . \end{aligned}$$
(22)

Note that for very large values of the observational error variance \(\sigma ^2\), the assumption \(d_1\rightarrow 0\) and \(d_2>0\) may be interpreted in the following way: the intercept variance \(\sigma ^2d_1\) remains positive while the slope variance \(\sigma ^2d_2\) tends to infinity.

Note also that for a fixed intercept (\(d_1= 0\)) the IMSE-criterion may be determined using formula (13), which leads to the same result (22).

It is easy to see that criterion (22) increases with increasing slope dispersion \(d_2\). This property allows us to define the minimax-criterion as follows:

$$\begin{aligned} \text {IMSE}_{max}(\xi ) := \text {lim}_{d_2\rightarrow \infty }\text {IMSE}_{pred}(\xi ), \end{aligned}$$
(23)

which results in

$$\begin{aligned} \text {IMSE}_{max}(\xi ) = \frac{1}{3}\left( \frac{1}{w(1-w)}+(n-1)\frac{1}{w}\right) \end{aligned}$$
(24)

and leads to the following optimal weight:

$$\begin{aligned} w^*_{max} = \frac{n-\sqrt{n}}{n-1}. \end{aligned}$$
(25)
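
A simple grid search confirms that the closed-form weight (25) indeed minimizes criterion (24); the value of n is arbitrary here.

```python
import numpy as np

n = 50
w = np.linspace(1e-3, 1 - 1e-3, 100_000)
crit = (1 / (w * (1 - w)) + (n - 1) / w) / 3          # criterion (24)
w_star = (n - np.sqrt(n)) / (n - 1)                   # closed form (25)
assert abs(w[crit.argmin()] - w_star) < 1e-3
```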

Figure 1 illustrates the behavior of the optimal design with respect to the number of individuals n for all integer values in the interval [2, 500]. As we can see in Fig. 1, the optimal weight increases with increasing number of individuals. Figure 2 presents the efficiency of the minimax-optimal design \(w^*_{max}\) with respect to the locally optimal designs as a function of the rescaled slope variance \(\rho ={d_2}/{(1+d_2)}\) for fixed numbers of individuals \(n=10\), \(n=50\) and \(n=500\). For all these numbers of individuals the efficiency is high and increases with increasing slope variance.
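
The efficiencies in Fig. 2 can be reproduced qualitatively along the following lines, where the locally optimal weight minimizes criterion (22) for the given \(d_2\); the choice m = 10 is hypothetical, since the efficiency under (22) depends on m.

```python
import numpy as np

def imse22(w, d2, m, n):                               # criterion (22)
    return (m / 3) * (1 / (m * w * (1 - w)) + (n - 1) * d2 / (1 + m * w * d2))

m, n = 10, 50                                          # m is hypothetical
w = np.linspace(1e-3, 1 - 1e-3, 100_000)
w_star = (n - np.sqrt(n)) / (n - 1)                    # minimax weight (25)
for rho in (0.1, 0.5, 0.9):
    d2 = rho / (1 - rho)                               # invert the rescaling
    w_local = w[imse22(w, d2, m, n).argmin()]          # locally optimal weight
    eff = imse22(w_local, d2, m, n) / imse22(w_star, d2, m, n)
    print(f"rho = {rho:.1f}: efficiency {eff:.3f}")
```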

Fig. 1 Minimax-optimal weight \(w^*_{max}\) as a function of the number of individuals n for linear regression

Fig. 2 Efficiency of minimax-optimal designs for linear regression for \(n=10\) (solid line), \(n=50\) (dashed line), \(n=500\) (dotted line)

3.2.2 Quadratic regression

We investigate the quadratic regression model

$$\begin{aligned} Y_{ij}= \beta _{i1}+\beta _{i2}x_j+\beta _{i3}x_j^2+\varepsilon _{ij} \end{aligned}$$
(26)

on the standard symmetric design region \(\mathcal {X}=[-1,1]\) with a diagonal covariance matrix of random effects: \(\mathbf {D}=\text {diag} (d_1, d_2, d_3)\). For the IMSE-criterion we apply the uniform weighting \(\nu =\frac{1}{2}\lambda _{[-1,1]}\); omitting the constant factor \(\frac{1}{2}\), which does not affect the optimal designs, this results in

$$\begin{aligned} \mathbf {V}=\left( \begin{array}{ccc} 2 &{} 0 &{} \frac{2}{3} \\ 0 &{} \frac{2}{3} &{} 0 \\ \frac{2}{3} &{} 0 &{} \frac{2}{5} \end{array}\right) . \end{aligned}$$
(27)
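
A midpoint-rule quadrature reproduces the entries of (27); a short sketch:

```python
import numpy as np

N = 400_000
dx = 2 / N
xs = np.linspace(-1 + dx / 2, 1 - dx / 2, N)           # midpoint rule on [-1,1]
Fx = np.vstack([np.ones_like(xs), xs, xs**2])          # f(x) = (1, x, x^2)^T
V = (Fx[:, None, :] * Fx[None, :, :]).sum(axis=2) * dx
assert np.allclose(V, [[2, 0, 2/3], [0, 2/3, 0], [2/3, 0, 2/5]], atol=1e-6)
```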

In Prus (2015, ch. 5) it has been established that IMSE-optimal designs for the prediction in model (26) are of the form

$$\begin{aligned} \xi = \left( \begin{array}{ccc} -1 &{} 0 &{} 1 \\ w &{} 1-2w &{} w \end{array} \right) , \end{aligned}$$
(28)

where w denotes the optimal weight of observations at each of the support points \(x=-1\) and \(x=1\), and the standardized information matrix (8) is of the form

$$\begin{aligned} \mathbf {M}(\xi )=\left( \begin{array}{ccc} 1 &{} 0 &{} 2w \\ 0 &{} 2w &{} 0 \\ 2w &{} 0 &{} 2w \end{array}\right) . \end{aligned}$$
(29)

Then, using formula (14), we obtain the IMSE-criterion

$$\begin{aligned} \mathrm {IMSE}_{pred}(\xi ) =\frac{1}{15} \left( \varPhi _1(\xi )+(n-1)\left( \varPhi _2(\xi ,d_1,d_3) + \varPhi _3(\xi ,d_2)\right) \right) , \end{aligned}$$
(30)

where

$$\begin{aligned} \varPhi _1(\xi )=\frac{8}{w(1-2w)}, \end{aligned}$$
(31)

which is independent of the variance parameters and coincides (up to a constant factor) with the IMSE-criterion for the fixed effects model,

$$\begin{aligned} \varPhi _2(\xi ,d_1,d_3)=\frac{2m\left( m(10w+3)d_1d_3+15d_1+3d_3\right) }{2m^2w(1-2w)d_1d_3+m(d_1+2wd_3)+1}, \end{aligned}$$
(32)

which depends on the variance parameters \(d_1\) and \(d_3\) and is independent of \(d_2\), and

$$\begin{aligned} \varPhi _3(\xi ,d_2)=\frac{10md_2}{2mwd_2+1}, \end{aligned}$$
(33)

which depends on the slope dispersion \(d_2\) and is independent of the other variance parameters.

Further we fix some of the variance parameters and consider minimax-criteria for the resulting particular cases.

Case 1 \(d_1\rightarrow 0\) and \(d_2\rightarrow 0\)

If both the intercept dispersion \(d_1\) and the slope dispersion \(d_2\) are very small, IMSE-criterion (30) simplifies to

$$\begin{aligned} \mathrm {IMSE}_{pred}(\xi ) =\frac{1}{15} \left( \varPhi _1(\xi )+(n-1)\varPhi _2(\xi ,0,d_3)\right) , \end{aligned}$$
(34)

where

$$\begin{aligned} \varPhi _2(\xi ,0,d_3)=\frac{6md_3}{2mwd_3+1}. \end{aligned}$$
(35)

Note that for fixed intercept and fixed slope (\(d_1 = 0\) and \(d_2 = 0\)) the IMSE-criterion may be computed using formula (13), which leads to the same result (34).

IMSE-criterion (34) increases with increasing variance parameter \(d_3\). Therefore, we define the minimax-criterion as

$$\begin{aligned} \text {IMSE}_{max}(\xi ) := \text {lim}_{d_3\rightarrow \infty }\text {IMSE}_{pred}(\xi ), \end{aligned}$$
(36)

which results in

$$\begin{aligned} \text {IMSE}_{max}(\xi ) =\frac{6w(n-1)-3n-5}{15w(2w-1)}. \end{aligned}$$
(37)

We minimize criterion (37) directly and obtain the following optimal weight:

$$\begin{aligned} w^*_{max} = \frac{3n+5-2\sqrt{6n+10}}{6(n-1)}. \end{aligned}$$
(38)
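
As in the linear case, a grid search confirms that (38) minimizes criterion (37); n is arbitrary here.

```python
import numpy as np

n = 50
w = np.linspace(1e-3, 0.5 - 1e-3, 100_000)
crit = (6*w*(n - 1) - 3*n - 5) / (15*w*(2*w - 1))       # criterion (37)
w_star = (3*n + 5 - 2*np.sqrt(6*n + 10)) / (6*(n - 1))  # closed form (38)
assert abs(w[crit.argmin()] - w_star) < 1e-3
```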

Figure 3 illustrates the behavior of the optimal design with respect to the number of individuals n.

Note that for a given observational error variance \(\sigma ^2\), the assumption of very small intercept and slope dispersions (\(d_1\rightarrow 0\) and \(d_2\rightarrow 0\)) is equivalent to the assumption of very small variances (\(\sigma ^2d_1\rightarrow 0\) and \(\sigma ^2d_2\rightarrow 0\)). However, if \(\sigma ^2\) becomes very large, the assumption may be interpreted as follows: the intercept and slope variances \(\sigma ^2d_1\) and \(\sigma ^2d_2\) remain positive while the variance \(\sigma ^2d_3\) of the coefficient of the quadratic term tends to infinity. The assumptions in cases 2–5 may be interpreted in a similar way.

Case 2 \(d_1\rightarrow 0\) and \(d_3\rightarrow 0\)

For small dispersions \(d_1\) and \(d_3\) of the intercept and of the coefficient of the quadratic term, IMSE-criterion (30) simplifies to

$$\begin{aligned} \mathrm {IMSE}_{pred}(\xi ) =\frac{1}{15} \left( \varPhi _1(\xi )+(n-1)\varPhi _3(\xi ,d_2)\right) , \end{aligned}$$
(39)

which is increasing in \(d_2\). Hence, we define the minimax-criterion as limiting criterion (39) for \(d_2\rightarrow \infty \):

$$\begin{aligned} \text {IMSE}_{max}(\xi ) := \text {lim}_{d_2\rightarrow \infty }\text {IMSE}_{pred}(\xi ) \end{aligned}$$
(40)

and obtain

$$\begin{aligned} \text {IMSE}_{max}(\xi )=\frac{10w(n-1)-5n-3}{15w(2w-1)}, \end{aligned}$$
(41)

which leads to the following optimal design:

$$\begin{aligned} w^*_{max} = \frac{5n+3-2\sqrt{10n+6}}{10(n-1)}. \end{aligned}$$
(42)

The behavior of the design is illustrated by Fig. 4.

Fig. 3 Minimax-optimal weight \(w^*_{max}\) as a function of the number of individuals n for quadratic regression, case 1

Fig. 4 Minimax-optimal weight \(w^*_{max}\) as a function of the number of individuals n for quadratic regression, cases 2 and 3

Note that for fixed intercept and fixed coefficient of the quadratic term (\(d_1 = 0\) and \(d_3 = 0\)), using formula (13) we would also obtain criterion (39).

Case 3 \(d_3\rightarrow 0\)

If only the dispersion \(d_3\) of the coefficient of the quadratic term is small, the IMSE-criterion is given by

$$\begin{aligned} \mathrm {IMSE}_{pred}(\xi ) =\frac{1}{15} \left( \varPhi _1(\xi )+(n-1)\left( \varPhi _2(\xi ,d_1,0) + \varPhi _3(\xi ,d_2)\right) \right) , \end{aligned}$$
(43)

where

$$\begin{aligned} \varPhi _2(\xi ,d_1,0)=\frac{30md_1}{md_1+1}, \end{aligned}$$
(44)

which is independent of w and \(d_2\), is increasing in \(d_1\) and converges to 30 for \(d_1 \rightarrow \infty \). The term \(\varPhi _3(\xi ,d_2)\) increases with increasing values of \(d_2\). Then we define the minimax-criterion as

$$\begin{aligned} \text {IMSE}_{max}(\xi ) := \text {lim}_{d_1, d_2\rightarrow \infty }\text {IMSE}_{pred}(\xi ), \end{aligned}$$
(45)

which equals

$$\begin{aligned} \text {IMSE}_{max}(\xi )=\frac{10w(n-1)-5n-3}{15w(2w-1)}+2(n-1) \end{aligned}$$
(46)

and coincides with minimax-criterion (41) for case 2 up to the additive constant \(2(n-1)\). Consequently, the minimax-optimal weight is again given by (42).

Case 4 \(d_2\rightarrow 0\)

If only the slope dispersion is small, the IMSE-criterion has the form

$$\begin{aligned} \mathrm {IMSE}_{pred}(\xi ) =\frac{1}{15} \left( \varPhi _1(\xi )+(n-1)\varPhi _2(\xi ,d_1,d_3) \right) , \end{aligned}$$
(47)

which is increasing in \(d_1\) and \(d_3\). Then we define the minimax-criterion as

$$\begin{aligned} \text {IMSE}_{max}(\xi ) := \text {lim}_{d_1, d_3\rightarrow \infty }\text {IMSE}_{pred}(\xi ) \end{aligned}$$
(48)

and obtain

$$\begin{aligned} \text {IMSE}_{max}(\xi )=\frac{10w(1-n)-3n-5}{15w(2w-1)}, \end{aligned}$$
(49)

which results in

$$\begin{aligned} w^*_{max} = \frac{-3n-5+2\sqrt{6n^2+10n}}{10(n-1)}. \end{aligned}$$
(50)

Case 5 \(d_1\rightarrow 0\)

For small intercept dispersion \(d_1\), the IMSE-criterion simplifies to

$$\begin{aligned} \mathrm {IMSE}_{pred}(\xi ) =\frac{1}{15} \left( \varPhi _1(\xi )+(n-1)\left( \varPhi _2(\xi ,0,d_3) + \varPhi _3(\xi ,d_2)\right) \right) . \end{aligned}$$
(51)

The criterion increases with both variance parameters \(d_2\) and \(d_3\). Therefore, we define the minimax-criterion as the limit of IMSE-criterion (51):

$$\begin{aligned} \text {IMSE}_{max}(\xi ) := \text {lim}_{d_2, d_3\rightarrow \infty }\text {IMSE}_{pred}(\xi ), \end{aligned}$$
(52)

which results in

$$\begin{aligned} \text {IMSE}_{max}(\xi )=\frac{8(2w(n-1)-n)}{15w(2w-1)}, \end{aligned}$$
(53)

and leads to the minimax-optimal weight

$$\begin{aligned} w^*_{max} = \frac{n-\sqrt{n}}{2(n-1)}. \end{aligned}$$
(54)
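
Analogous grid checks confirm the closed-form weights (42), (50) and (54) against criteria (41), (49) and (53); n is again arbitrary.

```python
import numpy as np

n = 50
w = np.linspace(1e-3, 0.5 - 1e-3, 100_000)
checks = [
    ((10*w*(n - 1) - 5*n - 3) / (15*w*(2*w - 1)),            # criterion (41)
     (5*n + 3 - 2*np.sqrt(10*n + 6)) / (10*(n - 1))),        # weight (42)
    ((10*w*(1 - n) - 3*n - 5) / (15*w*(2*w - 1)),            # criterion (49)
     (-3*n - 5 + 2*np.sqrt(6*n**2 + 10*n)) / (10*(n - 1))),  # weight (50)
    (8*(2*w*(n - 1) - n) / (15*w*(2*w - 1)),                 # criterion (53)
     (n - np.sqrt(n)) / (2*(n - 1))),                        # weight (54)
]
for crit, w_star in checks:
    assert abs(w[crit.argmin()] - w_star) < 1e-3
```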

The behaviors of the optimal designs in cases 4 and 5 are illustrated by Figs. 5 and 6.

Fig. 5 Minimax-optimal weight \(w^*_{max}\) as a function of the number of individuals n for quadratic regression, case 4

Fig. 6 Minimax-optimal weight \(w^*_{max}\) as a function of the number of individuals n for quadratic regression, case 5

Note that for \(d_3 = 0\), \(d_2 = 0\) or \(d_1 = 0\) we would obtain the same results, (43), (47) or (51), respectively, using formula (13).

As we can see in the figures, the optimal weights increase with increasing number of individuals n in cases 1, 2, 3 and 5 and decrease in case 4. The specific behavior in case 4 is caused by the joint influence of the intercept dispersion \(d_1\) and the dispersion \(d_3\) of the coefficient of the quadratic term, which enter the part \(\varPhi _2(\xi ,d_1,d_3)\) of the IMSE-criterion. If at least one of these variance parameters is zero (cases 1, 2, 3 and 5), \(\varPhi _2(\xi ,d_1,d_3)\) simplifies considerably, the interaction of \(d_1\) and \(d_3\) is lost, and the optimal designs behave completely differently.

For cases 1 and 2 we consider the efficiency of the minimax-optimal designs with respect to the locally optimal designs as a function of the rescaled variances \(\rho ={d_3}/{(1+d_3)}\) and \(\rho ={d_2}/{(1+d_2)}\), respectively, for fixed numbers of individuals (Figs. 7 and 8). The efficiency turns out to be high and to increase with increasing variance parameters in both cases and for all considered numbers of individuals (\(n=10\), \(n=50\), \(n=500\)).

Fig. 7 Efficiency of minimax-optimal designs for quadratic regression, case 1, for \(n=10\) (solid line), \(n=50\) (dashed line), \(n=500\) (dotted line)

Fig. 8 Efficiency of minimax-optimal designs for quadratic regression, case 2, for \(n=10\) (solid line), \(n=50\) (dashed line), \(n=500\) (dotted line)

4 Discussion

In this paper we have considered minimax-optimal designs under the IMSE-criterion for the prediction in particular RCR models: linear and quadratic regression. We have assumed a diagonal structure of the covariance matrix of random effects. In this case the IMSE-criterion increases with increasing values of all variance parameters. If all variances tend to infinity, the limiting criterion coincides with the IMSE-criterion in fixed effects models and, consequently, the optimal designs for fixed effects models retain their optimality for the prediction. If some of the variance parameters are small, the minimax-optimal designs in RCR models depend on the number of individuals and differ from the optimal designs in fixed effects models. For some particular cases we have considered the efficiency of the minimax-optimal designs with respect to the locally optimal designs. The efficiency turns out to be high and to increase with increasing variance parameters.