
Nonlinear regression + Cross Validation = possible?

4 views (last 30 days)
wesleynotwise on 16 Jun 2017
Edited: wesleynotwise on 22 Jun 2017
Hello, world. Is it possible to perform cross-validation on a nonlinear regression model?

Accepted Answer

Star Strider on 16 Jun 2017
Cross-validation is used to assess the performance of classifiers.
Nonlinear regression does curve fitting (objective function parameter estimation).
These are two entirely different statistical techniques. What are you trying to do? How would you use cross-validation with your nonlinear regression?
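To make the distinction concrete: nonlinear regression estimates the parameters of a curve by minimising a fitting objective. A minimal sketch in Python (the thread is about MATLAB, but no code appears in it; the exponential model, the data, and the brute-force grid minimiser below are all illustrative assumptions, standing in for a proper solver):

```python
import numpy as np

# Hypothetical nonlinear model: y = a * exp(b * x).
def model(x, a, b):
    return a * np.exp(b * x)

# Synthetic data generated from known parameters (a=2.0, b=1.5) plus noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0, 50)
y = model(x, 2.0, 1.5) + rng.normal(0.0, 0.05, x.size)

# Parameter estimation: minimise the sum of squared residuals over a
# coarse grid -- a stand-in for a real nonlinear least-squares solver.
a_grid = np.linspace(0.5, 4.0, 71)
b_grid = np.linspace(0.5, 3.0, 51)
best = min(
    ((a, b) for a in a_grid for b in b_grid),
    key=lambda p: np.sum((y - model(x, *p)) ** 2),
)
```

`best` recovers parameters close to the true (2.0, 1.5) -- the output of the fit is a parameter vector, not a classification, which is Star Strider's point.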
19 Comments
Star Strider on 21 Jun 2017
I’m here occasionally these days.
I looked at the subplot problem when you posted it. I would not use subplot in that situation, instead just plotting all the data on one set of axes and using a legend call.
wesleynotwise on 21 Jun 2017
Edited: wesleynotwise on 21 Jun 2017
Ah. I still need subplot in my case because of overlapping data points, and it makes the analysis easier for me. I think I have an idea now of how to crack it. Thanks.


More Answers (1)

Greg Heath on 22 Jun 2017
Edited: Greg Heath on 22 Jun 2017
I am surprised to hear that SS thinks that cross-validation is not used for regression.
Maybe it is just a misunderstanding of terminology, but I have used cross-validation in regression many times.
Typically it is used when there are mounds of data:
1. Randomly divide the data into k subsets.
2. Design a neural network model with two subsets: one for training and one for validation.
3. Test the net on the remaining k-2 subsets.
4. If the performance of one net is poor, the same data can be used several (say 10) times with different random initial weights. Then choose the best of the 10.
5. Finally, you can choose the best of the k nets or combine m (<= k) of them.
Hope this helps.
Thank you for formally accepting my answer
Greg
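A sketch of the core idea in Greg's steps -- cross-validating a regression model on k subsets -- in Python (the data, the model, and k are illustrative assumptions; this uses the common "train on k-1 folds, test on 1" form rather than Greg's two-subset training variant, and fits the exponential model by linear least squares on log(y), which is valid here because y > 0):

```python
import numpy as np

# Synthetic regression data from a hypothetical model y = 2*exp(1.5*x).
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 2.0, 60)
y = 2.0 * np.exp(1.5 * x) + rng.normal(0.0, 0.1, x.size)

# Step 1: randomly divide the data into k distinct subsets.
k = 5
folds = np.array_split(rng.permutation(x.size), k)

# Steps 2-3: fit on the training folds, score on the held-out fold.
mse = []
for i in range(k):
    test = folds[i]
    train = np.concatenate([folds[j] for j in range(k) if j != i])
    # Fit log(y) = log(a) + b*x by linear least squares.
    b, log_a = np.polyfit(x[train], np.log(y[train]), 1)
    pred = np.exp(log_a) * np.exp(b * x[test])
    mse.append(np.mean((y[test] - pred) ** 2))

# Average held-out error across the k folds.
cv_mse = np.mean(mse)
```

The per-fold errors in `mse` are what you would compare when choosing the best of the k fits (Greg's step 5).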
4 Comments
Greg Heath on 22 Jun 2017
Edited: Greg Heath on 22 Jun 2017
It doesn't matter what your model is; you can still use
1. k-fold cross-validation, where there are k distinct subsets, or
2. k-fold bootstrapping, where there are k non-distinct random subsets.
A driving factor is the ratio of the number of fitting equations to the number of parameters that have to be estimated.
Hope this helps.
Greg
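The difference between the two subset schemes Greg names can be shown in a few lines of Python (n and k are illustrative): k-fold partitions the indices into disjoint subsets, while bootstrapping draws with replacement, so its subsets can overlap and repeat points.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 12, 3

# 1. k-fold: shuffle once, then split into k distinct (disjoint) subsets.
folds = np.array_split(rng.permutation(n), k)

# 2. Bootstrap: k subsets drawn with replacement -- not distinct.
boot = [rng.integers(0, n, size=n // k) for _ in range(k)]
```

Every index appears exactly once across `folds`, whereas an index may appear in several (or none) of the `boot` subsets.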
wesleynotwise on 22 Jun 2017
Edited: wesleynotwise on 22 Jun 2017
Yes. Star Strider did point out that I was actually looking for bootstrap sampling techniques. My tiny wee brain cannot cope with that at the moment, which is why I used the alternative: data splitting.
Thanks :)

