DNN training 3D Parameters

14 views (last 30 days)
Sia Sharma on 11 April 2024 at 17:30
Edited: Matt J on 11 April 2024 at 17:40
I am training a DNN on a small dataset of 3-D MRI images with a network I built from scratch: 4 blocks of convolutional layer + batch normalization + ReLU + max pooling, followed by global average pooling and 2 fully connected layers with a dropout layer between them. I am getting low accuracy on both my training and validation curves, and my loss curve does not decay; it stays roughly flat around 1. I have tried L2 regularization, changing the momentum, and adding a learn-rate drop factor, but accuracy does not improve. The same model worked well with 2-D images, but I cannot get the 3-D network above 60% accuracy. It would be helpful to receive some suggestions on which parameters I could try changing.
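For reference, a minimal sketch of the architecture described above. The input size [64 64 64 1], the filter counts, and numClasses = 2 are assumptions, not values from the post; replace them with the dimensions of your MRI volumes and labels.
inputSize  = [64 64 64 1];   % assumed volume size (H x W x D x channels)
numClasses = 2;              % assumed number of classes
layers = [
    image3dInputLayer(inputSize)
    % block 1
    convolution3dLayer(3,16,'Padding','same')
    batchNormalizationLayer
    reluLayer
    maxPooling3dLayer(2,'Stride',2)
    % block 2
    convolution3dLayer(3,32,'Padding','same')
    batchNormalizationLayer
    reluLayer
    maxPooling3dLayer(2,'Stride',2)
    % block 3
    convolution3dLayer(3,64,'Padding','same')
    batchNormalizationLayer
    reluLayer
    maxPooling3dLayer(2,'Stride',2)
    % block 4
    convolution3dLayer(3,128,'Padding','same')
    batchNormalizationLayer
    reluLayer
    maxPooling3dLayer(2,'Stride',2)
    % global pooling and classifier head
    globalAveragePooling3dLayer
    fullyConnectedLayer(64)
    dropoutLayer(0.5)
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];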

Answer (1)

Matt J on 11 April 2024 at 17:39
Edited: Matt J on 11 April 2024 at 17:40
The parameters you mention experimenting with do not cover all of the available training options (see below for a more complete list). You could also try a different training algorithm, e.g., adam. Because the 3-D input has a much larger dimension, you may also need to change the network architecture so that it has more weights (for example, more filters per convolutional layer) to work with.
ilr = 1e-3;   % initial learning rate (tune as needed)
options = trainingOptions('adam', ...
    'MiniBatchSize',5, ...
    'MaxEpochs',100, ...
    'InitialLearnRate',ilr, ...
    'L2Regularization',1e-4, ...
    'LearnRateSchedule','piecewise', ...
    'LearnRateDropFactor',0.8, ...
    'LearnRateDropPeriod',5);
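For completeness, a minimal sketch of how these options would be passed to training. volsTrain, labelsTrain, and layers are placeholders for your own training volumes, labels, and 3-D layer array, not names from the post.
% Hypothetical training call; substitute your own data and network.
net = trainNetwork(volsTrain,labelsTrain,layers,options);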
