When using k-fold cross-validation to classify images, is the dataset divided into training and testing sets, or into training and validation sets?

4 views (last 30 days)
When using k-fold cross-validation with a pretrained CNN model to classify images, the dataset is split into k folds, for example k=5, which means that in each fold 80% of the dataset is used to train the model.
My question is about the remaining 20% of the dataset in each fold. Is this held-out part used for testing (to evaluate the classifier) in each fold, or is it used to validate the algorithm in each fold?
I ask because some researchers use k-fold cross-validation to train on (k-1) parts of the dataset and use the remaining part to validate the algorithm in each fold, while other researchers use the remaining part of the dataset in each fold for testing.
I need to use k-fold cross-validation in my research, but I am confused about this part of the dataset. Should I use it for testing in each fold, or to validate the algorithm?
As you know, there is a big difference between a validation set and a test set.
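To make the split itself concrete, here is a minimal sketch in plain Python of how k-fold cross-validation partitions a dataset (the helper names `k_fold_indices` and `k_fold_split` are illustrative, not from any library). The mechanics are the same either way; whether the held-out part is called a validation set (used for tuning) or a test set (used only for final evaluation) is a question of protocol, not of the split:

```python
def k_fold_indices(n_samples, k):
    """Split indices 0..n_samples-1 into k near-equal contiguous folds."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def k_fold_split(n_samples, k):
    """Yield (train_idx, heldout_idx) pairs, one per fold.

    In each fold, k-1 parts form the training set and the remaining
    part is held out; with k=5 that is an 80/20 split per fold.
    """
    folds = k_fold_indices(n_samples, k)
    for i in range(k):
        heldout = folds[i]
        train = [idx for j, f in enumerate(folds) if j != i for idx in f]
        yield train, heldout

# With k=5 and 10 samples, each fold trains on 8 samples and holds out 2.
for train, heldout in k_fold_split(10, 5):
    print(len(train), len(heldout))
```

Note that across all k folds every sample is held out exactly once, which is why averaging the per-fold scores gives an estimate over the whole dataset.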
Thank you very much.
Your rapid response is highly appreciated.

Answers (0)
