How can I assess the reliability of my machine learning model on unseen data?
I have a model of a system that can detect some abnormalities and then react accordingly.
Now, I want to analyze how reliable our model is at predicting these abnormalities.
So far, I have manually analysed certain situations and assessed whether the system reacted correctly or incorrectly. This is very time-consuming, and I would like to know how we could adopt supervised machine learning to train a neural network to make this assessment automatically.
Accepted Answer
MathWorks Support Team
14 Jun 2018
In general, to create a machine learning model, you would:
1. Collect data.
2. Split the data into training, test and validation sets.
3. Train a machine learning model using both the training and test sets.
4. Validate your trained model on the validation set to verify that it can still reliably predict "unseen" data.
5. Use the model to predict real world data.
From the workflow above, you can see that we can only assess the accuracy of the model (before actually using it in the real world) by evaluating the predictions it outputs on the validation set.
If the predictions on the validation set reach whatever level of accuracy you desire, then you can use the model to predict real-world data, under the assumption that it will predict these new data with the same level of accuracy.
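As a minimal sketch of steps 2-4 in MATLAB (assuming a feature matrix X with one observation per row and a numeric or categorical label vector Y; fitctree here is only a placeholder classifier, and the split fractions are just one reasonable choice):
% Step 2: shuffle and split into 70% training, 15% test, 15% validation.
n      = numel(Y);
idx    = randperm(n);
nTrn   = round(0.70*n);
nTst   = round(0.15*n);
trnIdx = idx(1:nTrn);
tstIdx = idx(nTrn+1 : nTrn+nTst);
valIdx = idx(nTrn+nTst+1 : end);
% Step 3: fit on the training set; the test set would be used to
% compare candidate models or parameter settings (not shown here).
mdl = fitctree(X(trnIdx,:), Y(trnIdx));
% Step 4: final accuracy check on the untouched validation set.
valAcc = mean(predict(mdl, X(valIdx,:)) == Y(valIdx));
fprintf('Validation accuracy: %.1f%%\n', 100*valAcc);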
Yet, the validation set itself had to first be manually collected and labeled.
Furthermore, it is counter-intuitive (if not impossible) to *automatically* assess the accuracy of your model on new, unseen (and unlabeled) data. If you had another model that could assess whether your existing model is predicting new data correctly or incorrectly, you would certainly use that model instead.
More Answers (1)
Greg Heath
22 Jun 2018
THE ABOVE IS INCORRECT FOR NEURAL NETWORKS. FOR NNs:
DESIGN = TRAIN + VALIDATE
1. Collect data.
2. a. Split the data into DESIGN and TEST subsets.
   b. Split the design data into TRAINING and VALIDATION subsets.
      i. Weight values are calculated from the TRAINING subset.
      ii. The VALIDATION subset is used to verify good performance on NONTRAINING data via "EARLY STOPPING": if, DURING TRAINING, VALIDATION subset performance decreases for 6 (default) CONSECUTIVE EPOCHS, TRAINING IS STOPPED! For obvious reasons, I prefer the term "VALIDATION STOPPING"!
3. UNBIASED ESTIMATES of performance are obtained using the TEST subset, which, of course, was not used in any way for design.
4. MATLAB default values for the trn/val/tst split are 0.7/0.15/0.15.
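A sketch of this design workflow with patternnet (assuming an input matrix x, features-by-samples, and a target matrix t, classes-by-samples; the hidden layer size of 10 is arbitrary):
net = patternnet(10);
% Default split: 70% TRAINING, 15% VALIDATION, 15% TEST.
net.divideParam.trainRatio = 0.70;
net.divideParam.valRatio   = 0.15;
net.divideParam.testRatio  = 0.15;
% VALIDATION STOPPING: training halts after 6 (default) consecutive
% epochs of worsening validation-subset performance.
net.trainParam.max_fail = 6;
[net, tr] = train(net, x, t);   % tr records the subset indices
% UNBIASED performance estimate from the TEST subset only.
y       = net(x);
tstPerf = perform(net, t(:,tr.testInd), y(:,tr.testInd));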
Hope this helps
Thank you for formally accepting my answer
Greg