How to compare two nested models when they have a very small R-squared difference?
I have two models, i.e. v1 = a1 + a2*f + a3*f^2 and v2 = k*(a1 + a2*f + a3*f^2).
Answers (1)
Rohit
20 Mar 2023
When comparing two nested models with very small differences in R-squared, it is important to consider other metrics and factors to determine which model is better.
Here are some suggestions:
- Consider the complexity of the models. A smaller model that explains the data just as well as a larger one is generally preferred, as it is simpler and easier to interpret.
- Look at other goodness-of-fit metrics, such as the adjusted R-squared, AIC (Akaike Information Criterion), or BIC (Bayesian Information Criterion). These metrics penalize more complex models, so the smaller model may come out ahead. An F-test can also be used to check whether the larger model (v2) fits significantly better than the smaller model (v1); see the first sketch after this list.
- Conduct cross-validation or use a hold-out dataset to test the performance of the models on new data. The model with better out-of-sample performance is generally preferred; see the second sketch after this list.
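As a rough illustration of the second suggestion, here is a minimal MATLAB sketch. It assumes the Statistics and Machine Learning Toolbox and uses made-up data in vectors f and v; a straight-line fit stands in for the smaller model and a quadratic fit for the larger one, since the same calls apply to any pair of nested fits.
% Hypothetical example data; replace f and v with your own vectors.
rng(0);
f = linspace(0, 10, 50)';
v = 2 + 0.5*f + 0.03*f.^2 + 0.2*randn(size(f));

% Smaller model: v ~ 1 + f.  Larger model: v ~ 1 + f + f^2.
mdlSmall = fitlm(f, v);            % straight-line fit
mdlLarge = fitlm(f, v, 'poly2');   % quadratic fit

% Penalized goodness-of-fit metrics.
fprintf('Adjusted R^2: %.4f vs %.4f\n', ...
    mdlSmall.Rsquared.Adjusted, mdlLarge.Rsquared.Adjusted);
fprintf('AIC: %.2f vs %.2f\n', ...
    mdlSmall.ModelCriterion.AIC, mdlLarge.ModelCriterion.AIC);
fprintf('BIC: %.2f vs %.2f\n', ...
    mdlSmall.ModelCriterion.BIC, mdlLarge.ModelCriterion.BIC);

% Partial F-test for nested models: is the drop in SSE from adding the
% extra term larger than expected by chance?
dfDiff = mdlSmall.DFE - mdlLarge.DFE;   % number of extra parameters
F = ((mdlSmall.SSE - mdlLarge.SSE)/dfDiff) / (mdlLarge.SSE/mdlLarge.DFE);
p = 1 - fcdf(F, dfDiff, mdlLarge.DFE);
fprintf('F(%d,%d) = %.3f, p = %.4g\n', dfDiff, mdlLarge.DFE, F, p);
A small p-value suggests the extra term in the larger model is doing real work; otherwise the simpler model is usually the safer choice.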
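And a minimal sketch of the third suggestion, under the same assumptions (the hypothetical f, v, and linear-vs-quadratic stand-in pair from the sketch above): compare average prediction error on held-out folds rather than in-sample R-squared.
% Hypothetical 5-fold cross-validation comparing out-of-sample error.
k  = 5;
cv = cvpartition(numel(v), 'KFold', k);
rmseSmall = zeros(k, 1);
rmseLarge = zeros(k, 1);
for i = 1:k
    tr = training(cv, i);   % logical index of the training fold
    te = test(cv, i);       % logical index of the held-out fold
    mS = fitlm(f(tr), v(tr));
    mL = fitlm(f(tr), v(tr), 'poly2');
    rmseSmall(i) = sqrt(mean((v(te) - predict(mS, f(te))).^2));
    rmseLarge(i) = sqrt(mean((v(te) - predict(mL, f(te))).^2));
end
fprintf('Mean CV RMSE: smaller model %.4f, larger model %.4f\n', ...
    mean(rmseSmall), mean(rmseLarge));
The model with the lower average held-out error is generally the one to prefer, even if its in-sample R-squared is marginally smaller.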