Abstract
This chapter describes a modification of the nested error regression model that allows for random regression coefficients. We can intuitively expect that the slope parameters of some explanatory variables are not constant and should therefore take different values in different domains. The random regression coefficient model gives a practical solution to this problem by assuming that the beta parameters are random, which yields a more flexible way of modeling. The chapter gives fitting algorithms, derives the empirical best predictors of domain means, and approximates the mean squared errors. The last section provides R codes.
13.1 Introduction
Coefficients of auxiliary variables in the standard nested error regression (NER) model are not allowed to vary across sampling units or domains. This assumption is too rigid in many practical situations. In some small area estimation (SAE) problems, we can intuitively expect that the slope parameters of some explanatory variables are not constant and should therefore take different values in different domains. The random regression coefficient (RRC) model gives a practical solution to this problem by assuming that the beta parameters are random, which yields a more flexible way of modeling.
This chapter describes a modification of the nested error regression (NER) model having random regression coefficients. In the framework of SAE, Prasad and Rao (1990) derived empirical best linear unbiased predictors (EBLUP) of domain linear parameters under a unit-level RRC model. They also derived a second-order approximation of the mean squared error (MSE) of the EBLUP and gave an estimator of the MSE approximation. They considered a special case of the RRC model proposed by Dempster et al. (1981), with a single concomitant variable x and a null intercept parameter (regression through the origin).
Moura and Holt (1999) used a class of models allowing for variation between areas because of: (1) differences in the distribution of unit-level or area-level variables between areas, and (2) area-specific components of variance which cannot be explained by covariates. Their family of models contains RRC models as particular cases. These authors derived EBLUPs of linear parameters, gave an approximation to the MSE of the EBLUP, and proposed MSE estimators.
Hobza and Morales (2013) applied a flexible class of RRC models to the prediction of domain linear parameters. They gave a Fisher-scoring algorithm to calculate the residual maximum likelihood estimators of the model parameters and derived EBLUPs and MSE estimators. They applied the introduced methodology to the estimation of household normalized net annual incomes in the Spanish Living Conditions Survey.
This chapter extends the results of Hobza and Morales (2013) by considering a model where the set of random effects has a multivariate normal distribution that includes all variances and covariances as unknown parameters. It also studies the simpler model without covariances and gives some R codes for the latter model.
13.2 The RRC Model with Covariance Parameters
We start with a description of the more general model.
13.2.1 The Model
Let us consider the model

\(y_{dj}=\sum _{k=1}^{p}x_{kdj}(\beta _{k}+u_{kd})+e_{dj},\quad d=1,\ldots ,D,\; j=1,\ldots ,n_{d},\qquad (13.1)\)

where
-
\(\boldsymbol {u}_d^\ast =(u_{1d},\ldots ,u_{pd})^\prime \overset {iid}{\sim }N_{p}(\boldsymbol {0},\boldsymbol {V}_{\boldsymbol {u}_d^\ast })\) and \(e_{dj}\overset {ind}{\sim }N(0,w_{dj}^{-1}\sigma _{e}^2)\) are independent, d = 1, …, D, j = 1, …, n d,
-
\(\boldsymbol {V}_{\boldsymbol {u}_d^\ast }\) is a covariance matrix with components \(\mbox{cov}(u_{k_1d},u_{k_2d})=\sigma _{k_1k_2}\), k 1, k 2 = 1, …, p.
In models with intercept, the first auxiliary variable is equal to one. The model (13.1) has p regression parameters and \(1+p+\frac 12p(p-1)= 1+\frac 12p(p+1)\) variance component parameters. They are β k, \(\sigma _e^2\), \(\sigma _{k_1k_2}\), with \(\sigma _{k_1k_2}=\sigma _{k_2k_1}\), k, k 1, k 2 = 1, …, p. For example, for p = 2 there are \(1+\frac 12\cdot 2\cdot 3=4\) variance component parameters: \(\sigma _e^2\), σ 11, σ 22, and σ 12.
An example of model (13.1) with intercept, D = 2, n 1 = n 2 = 2, n = 4, and p = 2 is
In matrix notation model (13.1) is
where \(n=\sum _{d=1}^Dn_d\), β = (β 1, …, β p)′ is the p × 1 vector of regression parameters, y, e, and u k = (u k1, …, u kD)′ are the stacked vectors of observations, errors, and random coefficients, X and Z k are the corresponding design matrices, and W = diag(w dj) with w dj > 0 known, d = 1, …, D, j = 1, …, n d. Covariance matrices are \(\boldsymbol {V}_e=\mbox{var}(\boldsymbol {e})=\sigma _e^2\boldsymbol {W}^{-1}\), \(\boldsymbol {V}_{k_1k_2}=\mbox{cov}(\boldsymbol {u}_{k_1},\boldsymbol {u}_{k_2})=\sigma _{k_1k_2}\boldsymbol {I}_D\), k 1, k 2 = 1, …, p, and
where
Let and . Under this notation, the variance of u is
and the model (13.2) can be written in the general form \(\boldsymbol {y}=\boldsymbol {X}\boldsymbol {\beta }+\boldsymbol {Z}\boldsymbol {u}+\boldsymbol {e}\).
If the variance components are known, then the BLUE of β = (β 1, …, β p)′ is (cf. (6.12)) \(\tilde {\boldsymbol {\beta }}=(\boldsymbol {X}^\prime \boldsymbol {V}^{-1}\boldsymbol {X})^{-1}\boldsymbol {X}^\prime \boldsymbol {V}^{-1}\boldsymbol {y}\),
and the BLUP of u is \(\tilde {{\boldsymbol {u}}}=\boldsymbol {V}_{u}\boldsymbol {Z}^\prime \boldsymbol {V}^{-1} (\boldsymbol {y}-\boldsymbol {X}\tilde {\boldsymbol {\beta }})\).
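As a numerical illustration of these two formulas, the following sketch (in Python/NumPy, with simulated data and hypothetical parameter values; it is not part of the chapter's R code) builds \(\boldsymbol V=\boldsymbol Z\boldsymbol V_u\boldsymbol Z^\prime +\boldsymbol V_e\) and evaluates the BLUE \(\tilde{\boldsymbol\beta}=(\boldsymbol X^\prime \boldsymbol V^{-1}\boldsymbol X)^{-1}\boldsymbol X^\prime \boldsymbol V^{-1}\boldsymbol y\) and the BLUP \(\tilde{\boldsymbol u}=\boldsymbol V_u\boldsymbol Z^\prime \boldsymbol V^{-1}(\boldsymbol y-\boldsymbol X\tilde{\boldsymbol\beta})\):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical small example: D = 3 domains, n_d = 4 units each, p = 2
# (intercept and one covariate, both with domain-level random coefficients).
D, nd, p = 3, 4, 2
n = D * nd
X = np.column_stack([np.ones(n), rng.normal(size=n)])

# Z loads, for unit j of domain d, its own row of X onto the p random
# effects of domain d (block-diagonal structure over domains).
Z = np.zeros((n, D * p))
for d in range(D):
    rows = slice(d * nd, (d + 1) * nd)
    Z[rows, d * p:(d + 1) * p] = X[rows, :]

# Assumed (known) variance parameters for the sketch.
Vu = np.kron(np.eye(D), np.array([[1.0, 0.3], [0.3, 0.5]]))  # var(u)
Ve = 0.8 * np.eye(n)                                         # sigma_e^2 W^{-1}, W = I
V = Z @ Vu @ Z.T + Ve

beta_true = np.array([2.0, -1.0])
u = rng.multivariate_normal(np.zeros(D * p), Vu)
y = X @ beta_true + Z @ u + rng.multivariate_normal(np.zeros(n), Ve)

# BLUE of beta: (X' V^{-1} X)^{-1} X' V^{-1} y
Vinv = np.linalg.inv(V)
beta_blue = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)

# BLUP of u: V_u Z' V^{-1} (y - X beta_blue)
u_blup = Vu @ Z.T @ Vinv @ (y - X @ beta_blue)
```

By construction, the BLUE satisfies the generalized normal equations, i.e. \(\boldsymbol X^\prime \boldsymbol V^{-1}(\boldsymbol y-\boldsymbol X\tilde{\boldsymbol\beta})=\boldsymbol 0\), which the sketch can be used to verify.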
13.2.2 REML Estimators
In order to derive formulas for calculating the REML estimates of the unknown variance parameters, we consider the alternative parametrization \(\sigma ^2=\sigma _e^2\), \(\varphi _{k_1k_2}=\sigma _{k_1k_2}/\sigma _e^2\), k 1, k 2 = 1, …, p, and we define \(\boldsymbol {\sigma }=(\sigma ^2,\varphi _{k_1k_2},\,k_1,k_2=1,\ldots ,p)\) and \(\boldsymbol {\Sigma }=\mbox{diag}(\boldsymbol {\Sigma }_1,\ldots ,\boldsymbol {\Sigma }_D)\), where Σ d = σ −2 V d.
The REML log-likelihood is (cf. (6.32))
where P = K(K ′ Σ K)−1 K ′ = Σ −1 − Σ −1 X(X ′ Σ −1 X)−1 X ′ Σ −1 and K = W −WX(X ′ WX)−1 X ′ W are such that PX = 0 and P Σ P = P. The matrix Σ can be written in the form
where \(\boldsymbol {A}_{k_1k_2}=\frac {\partial \boldsymbol {\Sigma }}{\partial \varphi _{k_1k_2}}\), k 1, k 2 = 1, …, p. In the same way as in (6.43) we obtain \(\frac {\partial \boldsymbol {P}}{\partial \varphi _{k_1k_2}}=-\boldsymbol {P}\boldsymbol {A}_{k_1k_2}\boldsymbol {P}\). Thus, by taking partial derivatives of the REML log-likelihood with respect to σ 2 and \(\varphi _{k_1k_2}\), k 1, k 2 = 1, …, p, one gets the scores
and the second partial derivatives
where k 1, k 2, i 1, i 2 = 1, …, p. By taking expectations and multiplying by − 1, we obtain the components of the Fisher information matrix. For k 1, k 2, i 1, i 2 = 1, …, p, we have
To calculate the REML estimates, the Fisher-scoring updating formula, at iteration i, is
where S(σ) and F(σ) are the vector of scores and the Fisher information matrix evaluated at σ. The following seeds can be used as starting values in the Fisher-scoring algorithm:
where \(\delta _{k_1,k_2}=1\) if k 1 = k 2, \(\delta _{k_1,k_2}=0\) if k 1 ≠ k 2, \(S^2=\frac {1}{n-p}(\boldsymbol {y}-\boldsymbol {X}\hat {\boldsymbol {\beta }}_0)^\prime \boldsymbol {W}(\boldsymbol {y}-\boldsymbol {X}\hat {\boldsymbol {\beta }}_0)\) and \(\hat {\boldsymbol {\beta }}_0=(\boldsymbol {X}^\prime \boldsymbol {W}\boldsymbol {X})^{-1}\boldsymbol {X}^\prime \boldsymbol {W}\boldsymbol {y}\).
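The Fisher-scoring iteration has the generic form σ(i+1) = σ(i) + F−1(σ(i))S(σ(i)). The following Python sketch (not the chapter's R code) illustrates the structure of the loop on the simplest one-parameter stand-in, the ML estimation of σ2 from y i ∼ N(0, σ2), where the score and Fisher information have closed forms; the REML scores and information of the RRC model plug into the same loop:

```python
import numpy as np

# Schematic Fisher-scoring update  theta_{i+1} = theta_i + F^{-1} S,
# shown on a one-parameter toy problem: y_i ~ N(0, s2), estimate s2.
rng = np.random.default_rng(7)
y = rng.normal(scale=2.0, size=500)
n = len(y)

def score(s2):
    # dl/ds2 for y_i ~ N(0, s2): -n/(2 s2) + sum(y^2)/(2 s2^2)
    return -n / (2 * s2) + np.sum(y**2) / (2 * s2**2)

def fisher(s2):
    # Fisher information E[-d^2 l/ds2^2] = n/(2 s2^2)
    return n / (2 * s2**2)

s2 = np.var(y)                # seed (starting value)
for _ in range(50):
    step = score(s2) / fisher(s2)   # F^{-1} S
    s2 = s2 + step
    if abs(step) < 1e-10:           # stop when the update is negligible
        break

# The fixed point of the iteration is the ML estimate mean(y^2).
```

In this toy case a single step already lands on the fixed point; in the RRC model the scores are nonlinear in σ and several iterations are needed.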
13.2.3 EBLUP of the Domain Mean
Let us now consider a finite population U partitioned into D domains U d, i.e. \(U=\cup _{d=1}^DU_d\). Let N and N d be the sizes of U and U d, so that \(N=\sum _{d=1}^DN_d\). We assume that the population target vector y = y N×1 follows the RRC model (13.2) with the obvious size changes, i.e. with N and N d in the place of n and n d, respectively.
Let s ⊂ U be a sample of n ≤ N units and let r = U − s be the set of non-sampled units. The domain subsets of s and r are denoted by s d and r d, respectively. The subindexes s and r in vectors or matrices denote their sampled and non-sampled parts. Without loss of generality, we renumber the population units and we write
and
Using Theorem 4.1, the EBLUP of the linear parameter \(\eta =\boldsymbol {a}^\prime \boldsymbol {y}=\boldsymbol {a}_s^\prime \boldsymbol {y}_s+\boldsymbol {a}_r^\prime \boldsymbol {y}_r\) is
where
Since V ers = 0, we have \(\boldsymbol {V}_{rs}=\boldsymbol {Z}_r\boldsymbol {V}_u\boldsymbol {Z}_s^\prime +\boldsymbol {V}_{ers}=\boldsymbol {Z}_r\boldsymbol {V}_u\boldsymbol {Z}_s^\prime \) and \(\hat {\boldsymbol {u}}=\hat {\boldsymbol {V}}_u\boldsymbol {Z}_s^\prime \hat {\boldsymbol {V}}_{s}^{-1}(\boldsymbol {y}_s-\boldsymbol {X}_s\hat {\boldsymbol {\beta }})\), so that
The domain mean is \(\overline {Y}_{d}=\frac {1}{N_{d}}\sum _{j=1}^{N_{d}}y_{dj}=\mu _d+\overline {e}_{d}\), where \(\overline {e}_{d}=\frac {1}{N_{d}}\sum _{j=1}^{N_{d}}e_{dj}\) and
The domain mean \(\overline {Y}_{d}\) can be written in the form η = a ′ y, where
δ ab = 1 if a = b and δ ab = 0 if a ≠ b. It holds that \(\boldsymbol {a}^\prime \boldsymbol {X}=\overline {\boldsymbol {X}}_{d}=(\overline X_{1d},\ldots ,\overline X_{pd})\),
If n d > 0, then the EBLUP of \(\overline {Y}_{d}\) is
where \(\overline {y}_{s,d}=\frac {1}{n_{d}}\sum _{j=1}^{n_{d}}y_{dj}\), \(\overline {X}_{s,kd}=\frac {1}{n_{d}}\sum _{j=1}^{n_{d}}x_{kdj}\) and \(f_{d}=\frac {n_{d}}{N_{d}}\). If f d ≈ 0, then the EBLUP of \(\overline {Y}_{d}\) is approximately equal to
The MSE of the EBLUP can be estimated by adapting the steps 1–6 of the parametric bootstrap procedure described in Sect. 8.5.
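The parametric bootstrap follows a generic schema: generate bootstrap data from the fitted model, recompute the target and its predictor on each replicate, and average the squared differences. The following Python sketch illustrates that schema on a toy random-intercept model with assumed fitted parameter values; unlike the full Sect. 8.5 procedure, it does not refit the model at each replicate, so it only demonstrates the loop structure:

```python
import numpy as np

# Parametric-bootstrap MSE sketch for a shrinkage predictor of domain
# means under the toy model y_dj = mu + u_d + e_dj (assumed fitted values).
rng = np.random.default_rng(3)
D, nd = 20, 10
su2, se2, mu = 1.0, 2.0, 5.0     # treated as the fitted parameters
gamma = su2 / (su2 + se2 / nd)   # shrinkage factor of the predictor

B = 2000                         # number of bootstrap replicates
sq_err = np.zeros(D)
for b in range(B):
    u = rng.normal(0.0, np.sqrt(su2), size=D)         # bootstrap effects
    e = rng.normal(0.0, np.sqrt(se2), size=(D, nd))   # bootstrap errors
    ybar = mu + u + e.mean(axis=1)                    # domain sample means
    target = mu + u                                   # bootstrap true means
    pred = mu + gamma * (ybar - mu)                   # predictor recomputed
    sq_err += (pred - target) ** 2
mse_boot = sq_err / B            # bootstrap MSE estimate per domain

# For this predictor the exact MSE is gamma * se2 / nd, which the
# bootstrap average should approach as B grows.
mse_theory = gamma * se2 / nd
```

With parameter re-estimation inside the loop, the same schema captures in addition the uncertainty of the fitted variance components.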
13.3 The RRC Model Without Covariance Parameters
For ease of exposition, we now consider a slightly simpler model under which we derive more detailed formulas for the REML estimators and for the analytic approximation of the MSE of the EBLUPs.
13.3.1 The Model
Let us consider the RRC model

\(y_{dj}=\sum _{k=1}^{p}x_{kdj}(\beta _{k}+u_{kd})+e_{dj},\quad d=1,\ldots ,D,\; j=1,\ldots ,n_{d},\qquad (13.3)\)
where \(u_{kd}\overset {iid}{\sim }N(0,\sigma _{k}^2)\) and \(e_{dj}\overset {iid}{\sim }N(0,w_{dj}^{-1}\sigma _{e}^2)\) are independent, d = 1, …, D, j = 1, …, n d, k = 1, …, p. The model variance and covariance parameters are \(\sigma _e^2\), \(\sigma _k^2\), k = 1, …, p (p + 1 parameters in total). In matrix notation model (13.3) is
where the symbols have the same meaning as those given below formula (13.2). The only difference with respect to model (13.2) lies in the variance matrices, which are now simpler and have the form \(\boldsymbol {V}_e=\mbox{var}(\boldsymbol {e})=\sigma _e^2\boldsymbol {W}^{-1}\), \(\boldsymbol {V}_{\boldsymbol {u}_k}=\mbox{var}(\boldsymbol {u}_k)=\sigma _k^2\boldsymbol {I}_D\), k = 1, …, p, and
where
We consider the alternative parameters \(\sigma ^2=\sigma _e^2\), \(\varphi _k=\sigma _k^2/\sigma _e^2\), k = 1, …, p, in such a way that V = σ 2 Σ and V d = σ 2 Σ d, where \(\boldsymbol {\Sigma }_d=\boldsymbol {W}_d^{-1}+\sum _{k=1}^p\varphi _k\boldsymbol {x}_{k,n_d}\boldsymbol {x}_{k,n_d}^\prime \).
Let θ = (σ 2, φ 1, …, φ p)′ be the vector of variance components, with σ 2 > 0, φ 1 > 0, …, φ p > 0. Let and . The variance of u is
Using this notation, the model (13.4) can be written in the general form \(\boldsymbol {y}=\boldsymbol {X}\boldsymbol {\beta }+\boldsymbol {Z}\boldsymbol {u}+\boldsymbol {e}\).
If θ is known, then the BLUE of β = (β 1, …, β p)′ is (cf. (6.12)) \(\tilde {\boldsymbol {\beta }}=(\boldsymbol {X}^\prime \boldsymbol {V}^{-1}\boldsymbol {X})^{-1}\boldsymbol {X}^\prime \boldsymbol {V}^{-1}\boldsymbol {y}\),
and the BLUP of u is \(\tilde {{\boldsymbol {u}}}=\boldsymbol {V}_{u}\boldsymbol {Z}^\prime \boldsymbol {V}^{-1} (\boldsymbol {y}-\boldsymbol {X}\tilde {\boldsymbol {\beta }})\), i.e.
13.3.2 REML Estimators
In this section we follow the same steps as in Sect. 13.2.2 and derive the REML estimators under the model (13.4), with more details concerning the matrix calculations. The REML log-likelihood is
where P = K(K ′ Σ K)−1 K ′ = Σ −1 − Σ −1 X(X ′ Σ −1 X)−1 X ′ Σ −1 and K = W −WX(X ′ WX)−1 X ′ W are such that PX = 0 and P Σ P = P. The matrix Σ can be written in the form
where \(\boldsymbol {A}_k=\frac {\partial \boldsymbol {\Sigma }}{\partial \varphi _k}=\mbox{diag}(\boldsymbol {x}_{k,n_1}\boldsymbol {x}_{k,n_1}^\prime ,\ldots ,\boldsymbol {x}_{k,n_D}\boldsymbol {x}_{k,n_D}^\prime )\), k = 1, …, p. As \(\frac {\partial \boldsymbol {P}}{\partial \varphi _k}=-\boldsymbol {P}\boldsymbol {A}_k\boldsymbol {P}\), by taking partial derivatives with respect to σ 2 and φ k, k = 1, …, p, one gets
The second partial derivatives are
By taking expectations and multiplying by − 1, we obtain the components of the Fisher information matrix
To calculate the REML estimates, the Fisher-scoring updating formula, at iteration i, is
The following seeds can be used as starting values in the Fisher-scoring algorithm:
where \(S^2=\frac {1}{n-p}(\boldsymbol {y}-\boldsymbol {X}\hat {\boldsymbol {\beta }}_0)^\prime \boldsymbol {W}(\boldsymbol {y}-\boldsymbol {X}\hat {\boldsymbol {\beta }}_0)\) and \(\hat {\boldsymbol {\beta }}_0=(\boldsymbol {X}^\prime \boldsymbol {W}\boldsymbol {X})^{-1}\boldsymbol {X}^\prime \boldsymbol {W}\boldsymbol {y}\).
13.3.2.1 Matrix Calculations for the RRC Model
In this section we show how to do the matrix calculations in the Fisher-scoring algorithm. We define
such that
For k = 1, …, p, the components of the vector of scores are
For k, k 1, k 2 = 1, …, p, the components of the REML Fisher information matrix are
The inverse of the matrix Σ d can be calculated by applying iteratively the inversion formula \((\boldsymbol {A}+\boldsymbol {C}\boldsymbol {B}\boldsymbol {D})^{-1}=\boldsymbol {A}^{-1}-\boldsymbol {A}^{-1}\boldsymbol {C}(\boldsymbol {B}^{-1}+\boldsymbol {D}\boldsymbol {A}^{-1}\boldsymbol {C})^{-1}\boldsymbol {D}\boldsymbol {A}^{-1}\) as follows.
Step 1: Define \(\boldsymbol {L}_{1d}=\boldsymbol {W}_d^{-1}+\varphi _1\boldsymbol {x}_{1,n_{d}}\boldsymbol {x}_{1,n_{d}}^\prime \). Take \(A=\boldsymbol {W}_d^{-1}\), \(C=\varphi _1\boldsymbol {x}_{1,n_{d}}\), B = I 1, and \(D=\boldsymbol {x}_{1,n_{d}}^\prime \), so that
Step 2: Define \(\boldsymbol {L}_{2d}=\boldsymbol {L}_{1d}+\varphi _2\boldsymbol {x}_{2,n_d}\boldsymbol {x}_{2,n_d}^\prime \). Take A = L 1d, \(C=\varphi _2\boldsymbol {x}_{2,n_d}\), B = I 1 and \(D=\boldsymbol {x}_{2,n_d}^\prime \), so that
…
Step p: Finally, \(\boldsymbol {L}_{pd}=\boldsymbol {L}_{p-1d}+\varphi _p\,\boldsymbol {x}_{p,n_d}\boldsymbol {x}_{p,n_d}^\prime \). Take A = L p−1d, \(C=\varphi _p\,\boldsymbol {x}_{p,n_d}\), B = I 1, and \(D=\boldsymbol {x}_{p,n_d}^\prime \), so that
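Each of Steps 1 to p is a rank-one update, so it reduces to the Sherman–Morrison identity \((\boldsymbol A+c\,\boldsymbol x\boldsymbol x^\prime )^{-1}=\boldsymbol A^{-1}-c\,\boldsymbol A^{-1}\boldsymbol x\boldsymbol x^\prime \boldsymbol A^{-1}/(1+c\,\boldsymbol x^\prime \boldsymbol A^{-1}\boldsymbol x)\). The following Python sketch (simulated values for W d, φ k, and x k,nd; not part of the chapter's R code) runs the p updates and checks that the result equals the directly computed \(\boldsymbol {\Sigma }_d^{-1}\):

```python
import numpy as np

# Iterative rank-one inversion of Sigma_d = W_d^{-1} + sum_k phi_k x_k x_k'
# via Sherman-Morrison, one random-coefficient term per step (Steps 1..p).
rng = np.random.default_rng(11)
nd, p = 6, 3
w = rng.uniform(0.5, 2.0, size=nd)     # heteroscedasticity weights w_dj
phi = np.array([0.4, 0.7, 1.1])        # variance ratios phi_k
Xd = rng.normal(size=(nd, p))          # columns are the vectors x_{k,n_d}

Linv = np.diag(w)                      # inverse of L_{0d} = W_d^{-1} is W_d
for k in range(p):
    x = Xd[:, k]
    Lx = Linv @ x
    # (L + phi x x')^{-1} = L^{-1} - phi L^{-1}x x'L^{-1} / (1 + phi x'L^{-1}x)
    Linv = Linv - phi[k] * np.outer(Lx, Lx) / (1.0 + phi[k] * (x @ Lx))

# Direct construction of Sigma_d for comparison.
Sigma_d = np.diag(1.0 / w) + Xd @ np.diag(phi) @ Xd.T
```

Each update costs O(n d2), so the full inversion is O(p n d2) instead of the O(n d3) of a direct inverse, which matters when the algorithm loops over many domains.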
13.3.3 EBLUP of a Domain Mean
The formulas for the EBLUP of the linear parameter \(\overline {Y}_d\) under model (13.4) have exactly the same form as the ones derived under model (13.2) in Sect. 13.2.3. Namely, if n d > 0, then the EBLUP of \(\overline {Y}_{d}\) is
where \(\overline {X}_{kd}=\frac {1}{N_{d}}\sum _{j=1}^{N_d}x_{kdj}\), \(\overline {y}_{s,d}=\frac {1}{n_{d}}\sum _{j=1}^{n_{d}}y_{dj}\), \(\overline {X}_{s,kd}=\frac {1}{n_{d}}\sum _{j=1}^{n_{d}}x_{kdj}\) and \(f_{d}=\frac {n_{d}}{N_{d}}\). If n d = 0, then the EBLUP of \(\overline {Y}_{d}\) is the synthetic part
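Numerically, the EBLUP combines the projective part with a sampling-fraction correction, and it can equivalently be written as the sample part plus the predicted non-sampled part. The following Python sketch (hypothetical fitted values; the random effects multiply the same covariates as the fixed effects, as in the RRC model) checks that the two expressions coincide and shows the synthetic predictor used when n d = 0:

```python
import numpy as np

# Assembling the EBLUP of the domain mean from fitted pieces (assumed values):
#   eblup_d = muhat_d + f_d * (ybar_sd - muhat_sd),
# where muhat_d uses population covariate means and muhat_sd sample means.
beta_hat = np.array([2.0, 0.5])   # assumed fitted fixed effects (intercept, x)
u_hat = np.array([0.3, -0.1])     # assumed predicted random effects, domain d
Xbar_d = np.array([1.0, 4.0])     # population covariate means of domain d
xbar_sd = np.array([1.0, 3.6])    # sample covariate means of domain d
ybar_sd = 4.1                     # sample mean of y in domain d
nd, Nd = 50, 1000
fd = nd / Nd                      # sampling fraction f_d

muhat_d = Xbar_d @ (beta_hat + u_hat)
muhat_sd = xbar_sd @ (beta_hat + u_hat)
eblup_d = muhat_d + fd * (ybar_sd - muhat_sd)

# Equivalent decomposition into sampled and non-sampled parts, with
# Xbar*_d = (Xbar_d - f_d xbar_sd) / (1 - f_d):
Xbar_star = (Xbar_d - fd * xbar_sd) / (1 - fd)
eblup_alt = fd * ybar_sd + (1 - fd) * (Xbar_star @ (beta_hat + u_hat))

# Synthetic predictor for a non-sampled domain (n_d = 0, so u_hat = 0):
synthetic_d = Xbar_d @ beta_hat
```

The equivalence follows by expanding \(\overline {X}^*_d\): the non-sampled part plus f d times the sample mean reproduces the correction form used in the R code of Sect. 13.4.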
13.3.4 MSE of the EBLUP
Let θ = (σ 2, φ 1, …, φ p)′ be the vector of variance components and \(\hat {\boldsymbol {\theta }}\) be the corresponding REML estimate. The MSEs of the EBLUPs of \(\overline {Y}_{d}\) and μ d are (cf. pp. 12 and 9, respectively)
where
and the definitions of the symbols T s, Q s, and b ′ are given in Sect. 9.2 and are recalled below. The Prasad–Rao (PR) estimator of \(MSE(\hat {\overline {Y}}_{d}^{eblup})\) is
We now present a detailed description of the calculation of the terms g i(θ), i = 1, …, 4, under the present model.
Calculation of g 1(θ)
To calculate \(g_1(\boldsymbol {\theta })=\boldsymbol {a}_r^\prime \boldsymbol {Z}_r\boldsymbol {T}_s\boldsymbol {Z}_r^\prime \boldsymbol {a}_r\), basic elements are
where
and \(\delta _{k_1k_2}=0\) if k 1 ≠ k 2, \(\delta _{k_1k_2}=1\) if k 1 = k 2. Therefore
where f d = n d∕N d and \(\overline {X}^{*}_{kd}=\frac {1}{N_d-n_d}\sum _{j\in r_d}x_{kdj}=(1-f_d)^{-1}(\overline {X}_{kd}-f_d\overline {x}_{kd})\).
Calculation of g 2(θ)
We recall that
where \(\boldsymbol {Q}_s=(\boldsymbol {X}_s^\prime \boldsymbol {V}^{-1}\boldsymbol {X}_s)^{-1}=\sigma ^2\left (\sum _{d=1}^D\boldsymbol {X}_{sd}^\prime \boldsymbol {\Sigma }_{sd}^{-1}\boldsymbol {X}_{sd}\right )^{-1}\) and \(\boldsymbol {V}_{es}^{-1}=\sigma ^{-2}\boldsymbol {W}_s\). The second vector is
The first vector is
Calculation of g 3(θ)
We recall that
where
The first derivative is \(\frac {\partial \boldsymbol {b}^{\prime }}{\partial \sigma ^2}=0\). As \(\frac {\partial \boldsymbol {\Sigma }_{s\ell }}{\partial \varphi _k}=\boldsymbol {x}_{k,n_\ell }\boldsymbol {x}_{k,n_\ell }^\prime \), the remaining derivatives are
k = 1, …, p, where the formula for derivative of an inverse matrix given in Appendix A was used. As , then
Let us define \(\boldsymbol {Q}(\boldsymbol {\theta })=(q_{k_1,k_2})_{k_1,k_2=0,1,\ldots ,p}\) , where q 0,k = q k,0 = 0, k = 0, 1, …, p and
for any k 1, k 2 = 1, …, p. After further simplifications we obtain
Then
where F(θ) is the REML Fisher information matrix.
Calculation of g 4(θ)
We recall that \(g_4(\boldsymbol {\theta })=\boldsymbol {a}_r^\prime \boldsymbol {V}_{er}\boldsymbol {a}_r\), where
Therefore
13.4 R Codes for EBLUPs
This section gives R codes for fitting the RRC model to the survey data file LFS20.txt. The target variable y is the variable income. As auxiliary variables, we take registered and education. The function dir2 is employed to calculate direct estimators. The domains are defined by the variable area crossed by sex. The parameters of interest are the income means by domain.
We install and/or load some R packages: Matrix, lme4, and sae.
if(!require(Matrix)){
  install.packages("Matrix")
  library(Matrix)
}
if(!require(lme4)){
  install.packages("lme4")
  library(lme4)
}
if(!require(sae)){
  install.packages("sae")
  library(sae)
}
The following code reads the data files and calculates some variables:
# Read unit-level data
dat <- read.table("LFS20.txt", header=TRUE, sep="\t", dec=".")
# Education level 2
edu2 <- as.numeric(dat$EDUCATION==2)
# Education level 3
edu3 <- as.numeric(dat$EDUCATION==3)
# Read domain-level data
aux <- read.table("Nds20.txt", header=TRUE, sep="\t", dec=".")
# Prop. of registered people
aux$mreg <- aux$reg/aux$N
# Proportion of edu2 people
aux$medu2 <- aux$edu2/aux$N
# Proportion of edu3 people
aux$medu3 <- aux$edu3/aux$N
We calculate direct estimators of domain average incomes and the population sizes by domain, by using the dir2 function described in Sect. 2.8.4. We also define some new variables.
income.dir <- dir2(data=dat$INCOME, w=dat$WEIGHT,
                   domain=list(sex=dat$SEX, area=dat$AREA))
diry <- income.dir$mean   # Direct estimates of domain means
hatNd <- income.dir$Nd    # Direct estimates of population sizes
nd <- income.dir$nd       # Sample sizes
fd <- nd/aux$N            # Sample fractions
The following code calculates sample means by domains:
dat2 <- data.frame(income=dat$INCOME, edu2, edu3, reg=dat$REGISTERED)
smeans <- aggregate(dat2, by=list(sex=dat$SEX, area=dat$AREA), mean)
meany <- smeans$income      # Sample means of income
meanedu2 <- smeans$edu2     # Sample means of edu2
meanedu3 <- smeans$edu3     # Sample means of edu3
meanreg <- smeans$reg       # Sample means of registered
We fit a random regression coefficient model with income as dependent variable and registered and education as explanatory variables. The fitted model has a random intercept and random slopes for the categories edu2 and edu3 of the variable education. The employed R code is
dat$EDUCATION <- as.factor(dat$EDUCATION)
rrc <- lmer(formula=INCOME ~ REGISTERED + EDUCATION + (EDUCATION|AREA:SEX),
            data=dat, REML=FALSE)
summary(rrc)                        # Summary of the fitting procedure
anova(rrc)                          # Analysis of variance table
beta <- fixef(rrc); beta            # Regression parameters
var <- as.data.frame(VarCorr(rrc))  # Variance parameters
ref <- ranef(rrc)[[1]]              # Modes of the random effects
head(fitted(rrc))                   # Predicted values
residuals <- resid(rrc)             # Residuals
p.values <- 2*pnorm(abs(coef(summary(rrc))[,3]), lower.tail=FALSE)
p.values                            # p-values
Table 13.1 presents the estimated regression parameters and p-values.
We calculate the EBLUPs of income means by domain.
Xbeta <- beta[1] + beta[2]*aux$mreg + beta[3]*aux$medu2 + beta[4]*aux$medu3
Xubeta <- ref[,1] + aux$medu2*ref[,2] + aux$medu3*ref[,3]
mu <- Xbeta + Xubeta        # Projective estimates of income means
xbeta <- beta[1] + beta[2]*meanreg + beta[3]*meanedu2 + beta[4]*meanedu3
xubeta <- ref[,1] + meanedu2*ref[,2] + meanedu3*ref[,3]
mu.s <- meany - xbeta - xubeta
eb <- mu + fd*mu.s          # EBLUPs of income means
The following code produces a summary of the results (the filter c(T,F) selects the rows corresponding to men):
output <- data.frame(Nd=aux$N[c(T,F)], hatNd=hatNd[c(T,F)], nd=nd[c(T,F)],
                     meany=round(meany[c(T,F)],0), dir=round(diry[c(T,F)],0),
                     hatmu=round(mu[c(T,F)],0), eblup=round(eb[c(T,F)],0))
head(output, 10)
Figure 13.1 (left) plots the RRC model residuals \(\hat {e}_{dj}=y_{dj}-\hat {y}_{dj}\). The residuals are distributed symmetrically around 0. Figure 13.1 (right) plots the EBLUPs and direct estimates of income means for men by area. The EBLUPs behave more smoothly than the direct estimators.
For the first ten areas, Table 13.2 gives a summary of results for men. The population sizes, the estimated population sizes, and the sample sizes are denoted by N d, \(\hat {N}_d\), and n d, respectively. The columns meany and dir contain the sample means and the direct estimates of the population mean of the variable income. The projective predictors and the EBLUPs of the population means are labelled hatmu and eblup, respectively.
References
Dempster, A.P., Rubin, D.B., Tsutakawa, R.K.: Estimation in covariance component models. J. Am. Stat. Assoc. 76, 341–353 (1981)
Hobza, T., Morales, D.: Small area estimation under random regression coefficient models. J. Stat. Comput. Simul. 83(11), 2160–2177 (2013)
Moura, F.A.S., Holt, D.: Small area estimation using multilevel models. Surv. Methodol. 25(1), 73–80 (1999)
Prasad, N.G.N., Rao, J.N.K.: The estimation of the mean squared error of small-area estimators. J. Am. Stat. Assoc. 85, 163–171 (1990)
© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG
Morales, D., Esteban, M.D., Pérez, A., Hobza, T. (2021). Random Regression Coefficient Models. In: A Course on Small Area Estimation and Mixed Models. Statistics for Social and Behavioral Sciences. Springer, Cham. https://doi.org/10.1007/978-3-030-63757-6_13
Print ISBN: 978-3-030-63756-9
Online ISBN: 978-3-030-63757-6