
Kick-One-Out-Based Variable Selection Method Using Ridge-Type \(C_{p}\) Criterion in High-Dimensional Multi-response Linear Regression Models

Conference paper in Intelligent Decision Technologies (KESIDT 2023)

Part of the book series: Smart Innovation, Systems and Technologies (SIST, volume 352)


Abstract

In this paper, the kick-one-out method using a ridge-type \(C_{p}\) criterion is proposed for variable selection in multi-response linear regression models. Sufficient conditions for the consistency of this method are obtained under a high-dimensional asymptotic framework in which the numbers of explanatory and response variables, k and p, may tend to infinity with the sample size n; the dimension p may exceed n, but k must remain smaller than n. A method satisfying these sufficient conditions is expected to select the true model with high probability, even when \(p>n\).
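The kick-one-out structure can be illustrated in code. The sketch below is a minimal Python illustration, not the paper's implementation: the function name `koo_select`, the exact form of the ridge-type estimator \(S_{\lambda}\), and the penalty value \(\alpha\) are assumptions (the paper defines \(S_{\lambda}\) and \(\alpha\) precisely in the body); only the overall rule — compare the full model against each one-variable-deleted model and keep variable \(\ell\) when \(\mathcal{D}_{\ell} \ge 0\) — follows the paper.

```python
import numpy as np

def koo_select(Y, X, lam=1.0, alpha=2.0):
    """Kick-one-out selection sketch (illustrative).

    Y : (n, p) response matrix, X : (n, k) design matrix.
    Column ell of X is selected when D_ell >= 0, where D_ell compares
    the full model with the model omitting column ell.
    """
    n, k = X.shape
    p = Y.shape[1]
    P = X @ np.linalg.pinv(X)                  # projection onto full design
    W = Y.T @ (np.eye(n) - P) @ Y              # residual cross-product matrix
    # One plausible ridge-type estimator: the ridge term keeps S_lam
    # invertible even when p > n (the paper's exact form may differ).
    S_lam = (W + np.trace(W) / lam * np.eye(p)) / (n - k)
    S_inv = np.linalg.inv(S_lam)
    selected = []
    for ell in range(k):
        X_drop = np.delete(X, ell, axis=1)
        P_drop = X_drop @ np.linalg.pinv(X_drop)
        V = Y.T @ (P - P_drop) @ Y             # fit gained from column ell
        D_ell = np.trace(V @ S_inv) - p * alpha
        if D_ell >= 0:
            selected.append(ell)
    return selected
```

Because each variable is judged by a single full-versus-reduced comparison, the method needs only k criterion evaluations rather than a search over all \(2^{k}\) subsets.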


References

  1. Bai, Z.D., Fujikoshi, Y., Hu, J.: Strong consistency of the AIC, BIC, \(C_{p}\) and KOO methods in high-dimensional multivariate linear regression. TR No. 18-9, Statistical Research Group, Hiroshima University (2018)


  2. Bernstein, D.H.: Matrix Mathematics: Theory, Facts, and Formulas. Princeton University Press, Princeton (2009)


  3. Fujikoshi, Y.: High-dimensional consistencies of KOO methods in multivariate regression model and discriminant analysis. J. Multivariate Anal. 188, 104860 (2022). https://doi.org/10.1016/j.jmva.2021.104860


  4. Fujikoshi, Y., Sakurai, T., Yanagihara, H.: Consistency of high-dimensional AIC-type and \(C_{p}\)-type criteria in multivariate linear regression. J. Multivariate Anal. 123, 184–200 (2014). https://doi.org/10.1016/j.jmva.2013.09.006


  5. Fujikoshi, Y., Ulyanov, V.V., Shimizu, R.: Multivariate Statistics: High-Dimensional and Large-Sample Approximations. Wiley, Hoboken (2010)


  6. Katayama, S., Imori, S.: Lasso penalized model selection criteria for high-dimensional multivariate linear regression analysis. J. Multivariate Anal. 132, 138–150 (2014). https://doi.org/10.1016/j.jmva.2014.08.002


  7. Kubokawa, T., Srivastava, M.S.: Selection of variables in multivariate regression models for large dimensions. Comm. Statist. A-Theory Methods 41, 2465–2489 (2012). https://doi.org/10.1080/03610926.2011.624242


  8. Nishii, R., Bai, Z.D., Krishnaiah, P.R.: Strong consistency of the information criterion for model selection in multivariate analysis. Hiroshima Math. J. 18, 451–462 (1988). https://doi.org/10.32917/hmj/1206129611


  9. Oda, R.: Consistent variable selection criteria in multivariate linear regression even when dimension exceeds sample size. Hiroshima Math. J. 50, 339–374 (2020). https://doi.org/10.32917/hmj/1607396493


  10. Oda, R., Yanagihara, H.: A fast and consistent variable selection method for high-dimensional multivariate linear regression with a large number of explanatory variables. Electron. J. Statist. 14, 1386–1412 (2020). https://doi.org/10.1214/20-EJS1701


  11. Srivastava, M.S.: Methods of Multivariate Statistics. Wiley, New York (2002)


  12. Srivastava, M.S.: Multivariate theory for analyzing high dimensional data. J. Japan Statist. Soc. 37, 53–86 (2007). https://doi.org/10.14490/jjss.37.53


  13. Yamamura, M., Yanagihara, H., Srivastava, M. S.: Variable selection by \(C_{p}\) statistic in multiple responses regression with fewer sample size than the dimension. In: Setchi, R., et al. (eds.) Knowledge-Based and Intelligent Information and Engineering Systems, vol. 6278, pp. 7–14 (2010). https://doi.org/10.1007/978-3-642-15393-8_2

  14. Yanagihara, H.: A high-dimensionality-adjusted consistent \(C_{p}\)-type statistic for selecting variables in a normality-assumed linear regression with multiple responses. Procedia Comput. Sci. 96, 1096–1105 (2016). https://doi.org/10.1016/j.procs.2016.08.151


  15. Zhao, L.C., Krishnaiah, P.R., Bai, Z.D.: On detection of the number of signals in presence of white noise. J. Multivariate Anal. 20, 1–25 (1986). https://doi.org/10.1016/0047-259X(86)90017-5



Acknowledgments

The author would like to thank two reviewers for valuable comments. This work was supported by funding from JSPS KAKENHI grant numbers JP20K14363, JP20H04151, and JP19K21672.

Author information

Correspondence to Ryoya Oda.


Appendices

Appendix 1: Proof of Theorem 1

First, we consider the case of \(\ell \notin j_{*}\). Note that \((\boldsymbol{P}_{\omega }-\boldsymbol{P}_{\omega _{\ell }})\boldsymbol{X}_{j_{*}}=\boldsymbol{O}_{n,k_{j_{*}}}\) holds. Let \(\boldsymbol{W}=\boldsymbol{\mathcal {E}}'(\boldsymbol{I}_{n}-\boldsymbol{P}_{\omega })\boldsymbol{\mathcal {E}}\) and \(\boldsymbol{V}_{\omega ,\omega _{\ell }}=\boldsymbol{\mathcal {E}}'(\boldsymbol{P}_{\omega }-\boldsymbol{P}_{\omega _{\ell }})\boldsymbol{\mathcal {E}}\). Then, an upper bound of \(\mathcal {D}_{\ell }\) is obtained as

$$\begin{aligned} \mathcal {D}_{\ell }&= \mathrm{{tr}}(\boldsymbol{V}_{\omega ,\omega _{\ell }}\boldsymbol{S}^{-1}_{\lambda })-p\alpha \le \lambda (n-k)\frac{\mathrm{{tr}}(\boldsymbol{V}_{\omega ,\omega _{\ell }})}{\mathrm{{tr}}(\boldsymbol{W})}-p\alpha . \end{aligned}$$
(10)

Let \(E_{1}\) be the event defined by \(E_{1}=\{(n-k)^{-1}\mathrm{{tr}}(\boldsymbol{W}) \ge \tau \mathrm{{tr}}(\boldsymbol{\varSigma }_{*})\}\). Using (10) and the event \(E_{1}\), we have

$$\begin{aligned} P\left( \cup _{\ell \notin j_{*}}\{\mathcal {D}_{\ell }\ge 0\}\right) \le \sum _{\ell \in j^{c}_{*}}P\left( \mathrm{{tr}}(\boldsymbol{V}_{\omega ,\omega _{\ell }}) \ge \lambda ^{-1}\tau p\alpha \mathrm{{tr}}(\boldsymbol{\varSigma }_{*})\right) +P(E^{c}_{1}). \end{aligned}$$
(11)

Applying (i) and (iii) of Lemma 1 in [9] to (11), the following bound is obtained:

$$\begin{aligned}&P\left( \cup _{\ell \notin j_{*}}\{\mathcal {D}_{\ell }\ge 0\}\right) \nonumber \\&\le O\left( k\xi ^{2}\mathrm{{tr}}(\boldsymbol{\varSigma }_{*})^{-2}(\lambda ^{-1}\tau p\alpha -1)^{-2}\right) +O\left( \xi ^{2}\mathrm{{tr}}(\boldsymbol{\varSigma }_{*})^{-2}n^{-1}(\tau -1)^{-2}\right) . \end{aligned}$$
(12)

Next, we consider the case of \(\ell \in j_{*}\). Since \((\boldsymbol{I}_{n}-\boldsymbol{P}_{\omega })\boldsymbol{X}_{j_{*}}=\boldsymbol{O}_{n,k_{j_{*}}}\) holds, notice that

$$\begin{aligned} \mathrm{{tr}}\{\boldsymbol{Y}'(\boldsymbol{P}_{\omega }-\boldsymbol{P}_{\omega _{\ell }})\boldsymbol{Y}\}=\mathrm{{tr}}(\boldsymbol{V}_{\omega ,\omega _{\ell }}) + 2\mathrm{{tr}}(\boldsymbol{U}_{\omega _{\ell }}) + np\delta ^{2}_{\ell }, \end{aligned}$$

where \(\delta ^{2}_{\ell }=\mathrm{{tr}}(\boldsymbol{\varDelta }_{\ell })\) and \(\boldsymbol{U}_{\omega _{\ell }}=\boldsymbol{\varTheta }'_{j_{*}}\boldsymbol{X}'_{j_{*}}(\boldsymbol{I}_{n}-\boldsymbol{P}_{\omega _{\ell }})\boldsymbol{\mathcal {E}}\). Using this notation, a lower bound of \(\mathcal {D}_{\ell }\) is obtained as

$$\begin{aligned} \mathcal {D}_{\ell } \ge (1+\lambda ^{-1})^{-1}(n-k)\mathrm{{tr}}(\boldsymbol{W})^{-1}\left\{ \mathrm{{tr}}(\boldsymbol{V}_{\omega ,\omega _{\ell }}) + 2\mathrm{{tr}}(\boldsymbol{U}_{\omega _{\ell }}) + np\delta ^{2}_{\ell }\right\} -p\alpha . \end{aligned}$$
(13)

Let \(E_{2}\) and \(E_{3,\ell }\) be the events defined by \(E_{2}=\{(n-k)^{-1}\mathrm{{tr}}(\boldsymbol{W}) \le 3\mathrm{{tr}}(\boldsymbol{\varSigma }_{*})/2\}\) and \(E_{3,\ell }=\{\mathrm{{tr}}(\boldsymbol{U}_{\omega _{\ell }})\ge -np\delta ^{2}_{\ell }/4\}\). Using this notation and (13), we have

$$\begin{aligned}&P\left( \cup _{\ell \in j_{*}}\{\mathcal {D}_{\ell }\le 0\}\right) \nonumber \\&\le \sum _{\ell \in j_{*}}P\left( \mathrm{{tr}}(\boldsymbol{V}_{\omega ,\omega _{\ell }}) + np\min _{\ell \in j_{*}}\delta ^{2}_{\ell }/2 \le 3(1+\lambda ^{-1})p\alpha \mathrm{{tr}}(\boldsymbol{\varSigma }_{*})/2\right) \nonumber \\&\quad +P(E^{c}_{2})+\sum _{\ell \in j_{*}}P(E^{c}_{3,\ell })\nonumber \\&\le \sum _{\ell \in j_{*}}P\left( \mathrm{{tr}}(\boldsymbol{V}_{\omega ,\omega _{\ell }})-\mathrm{{tr}}(\boldsymbol{\varSigma }_{*})+ npc/2 \le \mathrm{{tr}}(\boldsymbol{\varSigma }_{*})\left\{ 3(1+\lambda ^{-1})p\alpha /2-1\right\} \right) \nonumber \\&\quad +P(E^{c}_{2})+\sum _{\ell \in j_{*}}P(E^{c}_{3,\ell }). \end{aligned}$$
(14)

Applying (i), (ii), and (iv) of Lemma 1 in [9] to (14), the following bound is obtained:

$$\begin{aligned} P\left( \cup _{\ell \in j_{*}}\{\mathcal {D}_{\ell }\le 0\}\right)&\le O \left( k_{j_{*}}\xi ^{2}n^{-2}p^{-2}\right) + O\left( \xi ^{2}\mathrm{{tr}}(\boldsymbol{\varSigma }_{*})^{-2}n^{-1}\right) \nonumber \\&\quad + O\left( k_{j_{*}}E[||\boldsymbol{\varepsilon }||^{4}]n^{-2}p^{-2}\right) + O\left( k_{j_{*}}\lambda _{\mathrm{{max}}}(\boldsymbol{\varSigma }_{*})^{2}n^{-2}p^{-2}\right) . \end{aligned}$$
(15)

Therefore, (12) and (15) complete the proof of Theorem 1.   \(\square \)

Appendix 2: Proof of Theorem 2

To prove Theorem 2, we need the following lemma concerning the 8-th moment of \(\boldsymbol{\varepsilon }\) (the proof is given in Appendix 3):

Lemma 1

Let \(\boldsymbol{A}\) be an \(n \times n\) symmetric idempotent matrix satisfying \(\mathrm{{rank}}(\boldsymbol{A})=m<n\). Then, \(E[\mathrm{{tr}}(\boldsymbol{\mathcal {E}}'\boldsymbol{A}\boldsymbol{\mathcal {E}})^{4}]\le \phi m^{3}E[||\boldsymbol{\varepsilon }||^{8}]\) holds, where \(\phi \) is a constant not depending on n, p, and m.
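As a quick numerical illustration of the lemma (not part of the proof), the following Monte Carlo sketch estimates \(E[\mathrm{tr}(\boldsymbol{\mathcal{E}}'\boldsymbol{A}\boldsymbol{\mathcal{E}})^{4}]\) and \(m^{3}E[||\boldsymbol{\varepsilon }||^{8}]\) for a random rank-\(m\) projection. The Gaussian error distribution and all names here are illustrative assumptions; since \(\phi\) is not specified by the lemma, only boundedness of the ratio is checked.

```python
import numpy as np

rng = rng = np.random.default_rng(0)
n, m, p, reps = 20, 5, 3, 5000
Xa = rng.standard_normal((n, m))
A = Xa @ np.linalg.solve(Xa.T @ Xa, Xa.T)   # symmetric idempotent, rank m

tr1 = np.empty(reps)    # tr(E'AE)
tr4 = np.empty(reps)    # tr(E'AE)^4
eps8 = np.empty(reps)   # ||eps||^8 for a single error vector
for r in range(reps):
    E = rng.standard_normal((n, p))         # rows play the role of eps_i
    t = np.trace(E.T @ A @ E)
    tr1[r], tr4[r] = t, t ** 4
    eps8[r] = np.sum(E[0] ** 2) ** 4

# For identity error covariance, E[tr(E'AE)] = m * p.
ratio = tr4.mean() / (m ** 3 * eps8.mean())
```

For this Gaussian setting, \(\mathrm{tr}(\boldsymbol{\mathcal{E}}'\boldsymbol{A}\boldsymbol{\mathcal{E}})\) is \(\chi^{2}_{mp}\)-distributed, so the estimated ratio stays well below a modest constant, consistent with a \(\phi\) not depending on n, p, or m.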

Using Lemma 1, for \(\ell \notin j_{*}\) we have

$$\begin{aligned} P\left( \mathrm{{tr}}(\boldsymbol{V}_{\omega ,\omega _{\ell }}) \ge \lambda ^{-1}\tau p\alpha \mathrm{{tr}}(\boldsymbol{\varSigma }_{*})\right)&\le E[\mathrm{{tr}}(\boldsymbol{V}_{\omega ,\omega _{\ell }})^4]/\{\lambda ^{-1}\tau p\alpha \mathrm{{tr}}(\boldsymbol{\varSigma }_{*})\}^4\nonumber \\&\le \phi E[||\boldsymbol{\varepsilon }||^{8}]/\{\lambda ^{-1}\tau p\alpha \mathrm{{tr}}(\boldsymbol{\varSigma }_{*})\}^4\nonumber \\&=O\left( \lambda ^4 p^{-4}\alpha ^{-4}\right) . \end{aligned}$$
(16)

Notice that \(\kappa _{4}\le E[||\boldsymbol{\varepsilon }||^{4}] \le E[||\boldsymbol{\varepsilon }||^{8}]^{1/2}\) holds. Therefore, (11), (12), (15), and (16) complete the proof of Theorem 2.   \(\square \)

Appendix 3: Proof of Lemma 1

Denote the i-th row vector of \(\boldsymbol{\mathcal {E}}\) as \(\boldsymbol{\varepsilon }_{i}\). The expectation \(E[\mathrm{{tr}}(\boldsymbol{\mathcal {E}}'\boldsymbol{A}\boldsymbol{\mathcal {E}})^{4}]\) can be expressed as follows:

$$\begin{aligned}&E[\mathrm{{tr}}(\boldsymbol{\mathcal {E}}'\boldsymbol{A}\boldsymbol{\mathcal {E}})^{4}]\\&=\sum ^{n}_{i=1}\{(\boldsymbol{A})_{ii}\}^{4}E[||\boldsymbol{\varepsilon }_{i}||^{8}]+\phi _{1} \sum ^{n}_{i \ne j}\{(\boldsymbol{A})_{ii}\}^{3}(\boldsymbol{A})_{jj}E[||\boldsymbol{\varepsilon }_{i}||^{6}||\boldsymbol{\varepsilon }_{j}||^{2}]\nonumber \\&\quad + \phi _{2} \sum ^{n}_{i \ne j}\{(\boldsymbol{A})_{ii}\}^{2}\{(\boldsymbol{A})_{ij}\}^{2}E[||\boldsymbol{\varepsilon }_{i}||^{4}(\boldsymbol{\varepsilon }'_{i}\boldsymbol{\varepsilon }_{j})^{2}]\nonumber \\&\quad +\phi _{3} \sum ^{n}_{i \ne j}\{(\boldsymbol{A})_{ii}\}^{2}\{(\boldsymbol{A})_{jj}\}^{2}E[||\boldsymbol{\varepsilon }_{i}||^{4}||\boldsymbol{\varepsilon }_{j}||^{4}] + \phi _{4} \sum ^{n}_{i \ne j}\{(\boldsymbol{A})_{ij}\}^{4}E[(\boldsymbol{\varepsilon }'_{i}\boldsymbol{\varepsilon }_{j})^{4}]\nonumber \\&\quad +\phi _{5} \sum ^{n}_{i \ne j}(\boldsymbol{A})_{ii}(\boldsymbol{A})_{jj}\{(\boldsymbol{A})_{ij}\}^{2}E[||\boldsymbol{\varepsilon }_{i}||^{2}||\boldsymbol{\varepsilon }_{j}||^{2}(\boldsymbol{\varepsilon }'_{i}\boldsymbol{\varepsilon }_{j})^{2}]\nonumber \\&\quad +\phi _{6} \sum ^{n}_{i \ne j \ne k}(\boldsymbol{A})_{ii}(\boldsymbol{A})_{jj}(\boldsymbol{A})_{kk}(\boldsymbol{A})_{ij}E[||\boldsymbol{\varepsilon }_{i}||^{2}||\boldsymbol{\varepsilon }_{j}||^{2}||\boldsymbol{\varepsilon }_{k}||^{2}(\boldsymbol{\varepsilon }'_{i}\boldsymbol{\varepsilon }_{j})]\nonumber \\&\quad +\phi _{7} \sum ^{n}_{i \ne j \ne k}(\boldsymbol{A})_{ii}\{(\boldsymbol{A})_{jk}\}^{3}E[||\boldsymbol{\varepsilon }_{i}||^{2}(\boldsymbol{\varepsilon }'_{j}\boldsymbol{\varepsilon }_{k})^{3}]\nonumber \\&\quad +\phi _{8} \sum ^{n}_{i \ne j \ne k}(\boldsymbol{A})_{ii}(\boldsymbol{A})_{jj}(\boldsymbol{A})_{ik}(\boldsymbol{A})_{jk}E[||\boldsymbol{\varepsilon }_{i}||^{2}||\boldsymbol{\varepsilon }_{j}||^{2}(\boldsymbol{\varepsilon }'_{i}\boldsymbol{\varepsilon }_{k})(\boldsymbol{\varepsilon }'_{j}\boldsymbol{\varepsilon }_{k})]\nonumber \\&\quad +\phi _{9} \sum ^{n}_{i \ne j \ne k}\{(\boldsymbol{A})_{ij}\}^{2}(\boldsymbol{A})_{jk}(\boldsymbol{A})_{ki}E[(\boldsymbol{\varepsilon }'_{i}\boldsymbol{\varepsilon }_{j})^{2}(\boldsymbol{\varepsilon }'_{j}\boldsymbol{\varepsilon }_{k})(\boldsymbol{\varepsilon }'_{k}\boldsymbol{\varepsilon }_{i})]\nonumber \\&\quad +\phi _{10} \sum ^{n}_{i \ne j \ne k}(\boldsymbol{A})_{ii}(\boldsymbol{A})_{ij}\{(\boldsymbol{A})_{jk}\}^{2}E[||\boldsymbol{\varepsilon }_{i}||^{2}(\boldsymbol{\varepsilon }'_{i}\boldsymbol{\varepsilon }_{j})(\boldsymbol{\varepsilon }'_{j}\boldsymbol{\varepsilon }_{k})^{2}], \end{aligned}$$

where the summation \(\sum ^{n}_{i \ne j \ne k}\) is defined by \(\sum ^{n}_{i \ne j}\sum ^{n}_{k:k\ne i,j}\), and \(\phi _{1},\ldots ,\phi _{10}\) are natural numbers not depending on n, p, and m. Let \(\boldsymbol{D}_{\boldsymbol{A}}=\mathrm{{diag}}\{(\boldsymbol{A})_{11},\ldots ,(\boldsymbol{A})_{nn}\}\). Since \(\boldsymbol{A}\) is symmetric idempotent (and hence positive semi-definite), \(0 \le (\boldsymbol{A})_{ii}\le 1\), \(\{(\boldsymbol{A})_{ij}\}^{2} \le (\boldsymbol{A})_{ii}(\boldsymbol{A})_{jj}\) (e.g., [2], Fact 8.9.9), \(\boldsymbol{1}'_{n}\boldsymbol{D}_{\boldsymbol{A}}\boldsymbol{A}\boldsymbol{D}_{\boldsymbol{A}}\boldsymbol{1}_{n}\le \mathrm{{tr}}(\boldsymbol{A})\), and \(\boldsymbol{1}'_{n}\boldsymbol{D}^{2}_{\boldsymbol{A}}\boldsymbol{A}\boldsymbol{D}_{\boldsymbol{A}}\boldsymbol{1}_{n}\le \mathrm{{tr}}(\boldsymbol{A})\) hold. Using these facts, we have

$$\begin{aligned}&\sum ^{n}_{i=1}\{(\boldsymbol{A})_{ii}\}^{4} \le m, \ \sum ^{n}_{i \ne j}\{(\boldsymbol{A})_{ii}\}^{3}(\boldsymbol{A})_{jj}\le m^{2}, \ \sum ^{n}_{i \ne j}\{(\boldsymbol{A})_{ii}\}^{2}\{(\boldsymbol{A})_{ij}\}^{2}\le m^{2},\\&\sum ^{n}_{i \ne j}\{(\boldsymbol{A})_{ii}\}^{2}\{(\boldsymbol{A})_{jj}\}^{2}\le m^{2}, \ \sum ^{n}_{i \ne j}\{(\boldsymbol{A})_{ij}\}^{4}\le m^{2}, \ \sum ^{n}_{i \ne j}(\boldsymbol{A})_{ii}(\boldsymbol{A})_{jj}\{(\boldsymbol{A})_{ij}\}^{2}\le m^{2},\\&\sum ^{n}_{i \ne j \ne k}(\boldsymbol{A})_{ii}(\boldsymbol{A})_{jj}(\boldsymbol{A})_{kk}(\boldsymbol{A})_{ij}\le m(m+4), \ \sum ^{n}_{i \ne j \ne k}(\boldsymbol{A})_{ii}\{(\boldsymbol{A})_{jk}\}^{3}\le m(m^{2}+m+1),\\&\sum ^{n}_{i \ne j \ne k}(\boldsymbol{A})_{ii}(\boldsymbol{A})_{jj}(\boldsymbol{A})_{ik}(\boldsymbol{A})_{jk}\le 5m,\ \sum ^{n}_{i \ne j \ne k}\{(\boldsymbol{A})_{ij}\}^{2}(\boldsymbol{A})_{jk}(\boldsymbol{A})_{ki}\le m(3m+2),\\&\sum ^{n}_{i \ne j \ne k}(\boldsymbol{A})_{ii}(\boldsymbol{A})_{ij}\{(\boldsymbol{A})_{jk}\}^{2}\le m(m+6). \end{aligned}$$

Moreover, for any \(a,b,c,d,e,f,g,h \in \mathbb {N}\), \(E[(\boldsymbol{\varepsilon }'_{a}\boldsymbol{\varepsilon }_{b})(\boldsymbol{\varepsilon }'_{c}\boldsymbol{\varepsilon }_{d})(\boldsymbol{\varepsilon }'_{e}\boldsymbol{\varepsilon }_{f})(\boldsymbol{\varepsilon }'_{g}\boldsymbol{\varepsilon }_{h})] \le E[||\boldsymbol{\varepsilon }_{a}||^{8}]\) holds by the Cauchy–Schwarz and Hölder inequalities, since the error vectors are identically distributed. Therefore, the proof of Lemma 1 is completed.   \(\square \)
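The elementwise facts used above follow from \(\boldsymbol{A}\) being symmetric idempotent with \(\mathrm{tr}(\boldsymbol{A})=m\). As a quick numerical illustration (not part of the proof), the sketch below checks a few of them on a randomly generated rank-m projection; all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 12, 4
Xa = rng.standard_normal((n, m))
A = Xa @ np.linalg.solve(Xa.T @ Xa, Xa.T)   # symmetric idempotent, rank m
d = np.diag(A)
tol = 1e-10

ok_idempotent = np.allclose(A @ A, A)
ok_diag = bool(np.all(d >= -tol) and np.all(d <= 1 + tol))   # 0 <= (A)_ii <= 1
# (A)_ij^2 <= (A)_ii (A)_jj for positive semi-definite A ([2], Fact 8.9.9)
ok_fact = bool(np.all(A ** 2 <= np.outer(d, d) + tol))
# sum_i d_i = tr(A) = m and 0 <= d_i <= 1 give sum_i d_i^4 <= m
ok_sum4 = bool(np.sum(d ** 4) <= m + tol)
# sum_{i != j} d_i^3 d_j <= (sum_i d_i)^2 = m^2
ok_cross = bool(np.sum(np.outer(d ** 3, d)) - np.sum(d ** 4) <= m ** 2 + tol)
```

The same pattern extends to the remaining trace-sum bounds listed above, each reducing to \(\mathrm{tr}(\boldsymbol{A})=m\) together with the two elementwise facts.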


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Oda, R. (2023). Kick-One-Out-Based Variable Selection Method Using Ridge-Type \(C_{p}\) Criterion in High-Dimensional Multi-response Linear Regression Models. In: Czarnowski, I., Howlett, R., Jain, L.C. (eds) Intelligent Decision Technologies. KESIDT 2023. Smart Innovation, Systems and Technologies, vol 352. Springer, Singapore. https://doi.org/10.1007/978-981-99-2969-6_17

