
How to estimate the standard error for the coefficients in the ridge regression approach

21 views (last 30 days)
Hi, I would like to know how to estimate the standard error for the coefficients in the ridge regression approach.

3 Answers

Richard Willey, 23 August 2011
If you're working with a ridge regression model (as opposed to lasso or elastic net), then it's relatively easy to code up a paired bootstrap. That said, the standard error for the coefficients from a ridge model is (arguably) not particularly useful. Here's a relevant quote from Goeman:
"It is a very natural question to ask for standard errors of regression coefficients or other estimated quantities. In principle such standard errors can easily be calculated, e.g. using the bootstrap. Still, this package deliberately does not provide them. The reason for this is that standard errors are not very meaningful for strongly biased estimates such as arise from penalized estimation methods. Penalized estimation is a procedure that reduces the variance of estimators by introducing substantial bias. The bias of each estimator is therefore a major component of its mean squared error, whereas its variance may contribute only a small part. Unfortunately, in most applications of penalized regression it is impossible to obtain a sufficiently precise estimate of the bias. Any bootstrap-based calculations can only give an assessment of the variance of the estimates. Reliable estimates of the bias are only available if reliable unbiased estimates are available, which is typically not the case in situations in which penalized estimates are used.
Reporting a standard error of a penalized estimate therefore tells only part of the story. It can give a mistaken impression of great precision, completely ignoring the inaccuracy caused by the bias. It is certainly a mistake to make confidence statements that are only based on an assessment of the variance of the estimates, such as bootstrap-based confidence intervals do."
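The paired bootstrap mentioned above can be sketched as follows. This is a minimal illustration in Python/NumPy rather than MATLAB; the function names, the fixed penalty lam=10.0, and the simulated data are assumptions for the example. Per Goeman's caveat, it estimates only the variance of the ridge coefficients, not their bias.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Ridge solution (X'X + lam*I)^{-1} X'y for a fixed penalty lam."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def paired_bootstrap_se(X, y, lam, n_boot=1000, seed=0):
    """Paired bootstrap: resample (x_i, y_i) rows with replacement and
    refit the ridge model each time. Returns one bootstrap SE per
    coefficient. This captures variance only, not the shrinkage bias."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    boots = np.empty((n_boot, X.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)      # resample row indices
        boots[b] = ridge_fit(X[idx], y[idx], lam)
    return boots.std(axis=0, ddof=1)

# Simulated example (illustrative data, not from the thread)
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
y = X @ np.array([1.0, 0.5, 0.0, -0.5, 2.0]) + rng.standard_normal(200)
se = paired_bootstrap_se(X, y, lam=10.0)
print(se)  # one bootstrap SE per coefficient
```

Note that the penalty is held fixed across bootstrap replicates here; if you re-select it on each resample (see Tom Lane's point below about the uncertainty from choosing the ridge parameter), the SEs will generally be larger.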

Marco Sandri, 1 September 2011
Here are some useful references.
Bootstrap for ridge regression:
Vinod, H.D. (1995). Double bootstrap for shrinkage estimators. Journal of Econometrics, 68: 287–302.
Bootstrap for l1-penalized linear regression:
Chatterjee, A. and Lahiri, S.N. (2011). Bootstrapping Lasso estimators. Journal of the American Statistical Association, 106(494): 608–625.
(One-step) bootstrap for l1-penalized GLMs:
Sartori, S. (2011). Penalized regression: bootstrap confidence intervals and variable selection for high dimensional data sets. PhD thesis; see Chapter 3, Section 3.6. http://air.unimi.it/bitstream/2434/153099/6/phd_unimi_R07738.pdf

Tom Lane, 5 May 2011
You could consider bootstrapping.
If there's a standard way to do this, I'm not aware of it. You probably realize that as the ridge parameter gets larger, the coefficients approach zero. You could just pick a ridge parameter value and compute the standard error that the ridge formula provides, but I'd find that hard to interpret.
If you have some technique for selecting the ridge parameter, I'd imagine you'd want to include the uncertainty in that in your bootstrapping or in any other standard error calculation.
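For reference, the "standard error that the ridge formula provides" at a fixed ridge parameter comes from the sandwich form Var(beta_hat) = sigma^2 * (X'X + lam*I)^{-1} X'X (X'X + lam*I)^{-1}. Below is a minimal Python/NumPy sketch of that formula; the plug-in sigma^2 estimate and its degrees-of-freedom choice are simplifying assumptions (a trace-based effective df would be more careful). It also shows why these SEs are hard to interpret: they shrink toward zero along with the coefficients as the penalty grows.

```python
import numpy as np

def ridge_formula_se(X, y, lam):
    """Closed-form SEs for ridge regression at a fixed penalty lam:
    Var(beta_hat) = sigma^2 * W @ (X'X) @ W, with W = (X'X + lam*I)^{-1}.
    sigma^2 is a crude plug-in from the ridge residuals (an assumption
    for illustration; it ignores the effective df of the penalized fit)."""
    n, p = X.shape
    W = np.linalg.inv(X.T @ X + lam * np.eye(p))
    beta_hat = W @ (X.T @ y)
    resid = y - X @ beta_hat
    sigma2 = resid @ resid / (n - p)          # naive residual variance
    cov = sigma2 * W @ (X.T @ X) @ W          # sandwich covariance
    return np.sqrt(np.diag(cov))

# Simulated example (illustrative data): SEs collapse as lam grows
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
y = X @ np.array([1.0, 0.5, 0.0, -0.5, 2.0]) + rng.standard_normal(200)
se_small = ridge_formula_se(X, y, lam=10.0)
se_large = ridge_formula_se(X, y, lam=1e5)
print(se_small)
print(se_large)  # much smaller: both coefficients and SEs go to zero
```

Because both the coefficients and their formula-based SEs go to zero as the ridge parameter increases, a small SE here says nothing about how close the (biased) estimate is to the true coefficient.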
