Error using trainNetwork Invalid training data. For classification tasks, responses must be a vector of categorical responses. For regression tasks, responses must be a vector
2 views (last 30 days)
Error using trainNetwork
Invalid training data. For classification tasks, responses must be a vector of categorical responses. For regression tasks, responses must be a vector, a matrix, or a 4-D array of numeric responses which must not contain NaNs.
%Data_Pipeline
clc, clear, close all
% Specify the directory containing the .raw files
b_dir = 'binary_image_245 voxels';
g_dir = 'grey_image_245 voxels';
% Specify the size of the 3D images
imageSize = [245, 245, 245];
% Specify the datatype of the raw file
% dataType_b = 'uint8';
% dataType_g = 'uint16';
% Read all .raw files in the directory and get the 4D array of 3D images
b_images_245 = read3DImagesFromRaw(b_dir, imageSize, 'uint8');
g_images_245 = read3DImagesFromRaw(g_dir, imageSize, 'uint16');
% Initialize a new array to hold the resized data
b_images = zeros(128, 128, 128, 10, 'like', b_images_245); % Preallocate the resized array
g_images = zeros(128, 128, 128, 10, 'like', g_images_245);
% Loop over each volume in the fourth dimension (number of volumes)
for i = 1:size(b_images_245, 4)
    % Resize each 3D volume to [128, 128, 128]
    b_images(:,:,:,i) = imresize3(b_images_245(:,:,:,i), [128 128 128]);
    g_images(:,:,:,i) = imresize3(g_images_245(:,:,:,i), [128 128 128]);
end
% binary images
mask_images = logical(b_images);
% grey images
input_images = mat2gray(g_images);
save("allDataSet.mat", "mask_images", "input_images","-v7.3");
%Machine_Learning pipeline
clc; clear; close all;
% Load preprocessed dataset
load('allDataSet.mat');
% Reshape input data
X = reshape(input_images, [128, 128, 128, 1, 10]);
Y = reshape(mask_images, [128, 128, 128, 1, 10]);
% Define 3D U-Net architecture
layers = [
    image3dInputLayer([128 128 128 1], 'Name', 'input')
    % Encoder
    convolution3dLayer(3, 16, 'Padding', 'same', 'Name', 'conv1')
    batchNormalizationLayer('Name', 'bn1')
    reluLayer('Name', 'relu1')
    maxPooling3dLayer(2, 'Stride', 2, 'Name', 'pool1')
    convolution3dLayer(3, 32, 'Padding', 'same', 'Name', 'conv2')
    batchNormalizationLayer('Name', 'bn2')
    reluLayer('Name', 'relu2')
    maxPooling3dLayer(2, 'Stride', 2, 'Name', 'pool2')
    % Bottleneck
    convolution3dLayer(3, 64, 'Padding', 'same', 'Name', 'conv3')
    batchNormalizationLayer('Name', 'bn3')
    reluLayer('Name', 'relu3')
    % Decoder
    transposedConv3dLayer(2, 32, 'Stride', 2, 'Name', 'upconv1')
    concatenationLayer(4, 2, 'Name', 'concat1')
    convolution3dLayer(3, 32, 'Padding', 'same', 'Name', 'conv4')
    batchNormalizationLayer('Name', 'bn4')
    reluLayer('Name', 'relu4')
    transposedConv3dLayer(2, 16, 'Stride', 2, 'Name', 'upconv2')
    concatenationLayer(4, 2, 'Name', 'concat2')
    convolution3dLayer(3, 16, 'Padding', 'same', 'Name', 'conv5')
    batchNormalizationLayer('Name', 'bn5')
    reluLayer('Name', 'relu5')
    convolution3dLayer(1, 1, 'Name', 'finalConv')
    sigmoidLayer('Name', 'sigmoid')
    dicePixelClassificationLayer('Name', 'output')
    ];
% Connect skip connections
lgraph = layerGraph(layers);
lgraph = connectLayers(lgraph, 'relu2', 'concat1/in2');
lgraph = connectLayers(lgraph, 'relu1', 'concat2/in2');
% Training options
options = trainingOptions('adam', ...
    'InitialLearnRate', 1e-4, ...
    'MaxEpochs', 50, ...
    'MiniBatchSize', 2, ...
    'Shuffle', 'every-epoch', ...
    'Plots', 'training-progress');
% Train the network
net = trainNetwork(X, Y, lgraph, options);
% After running this, I get the following error:
% Error using trainNetwork
% Invalid training data. For classification tasks, responses must be a vector of categorical responses. For regression tasks, responses must be a vector, a matrix, or a 4-D array of numeric responses which must not contain NaNs.
3 Comments
Cris LaPierre on 2 Mar 2025
Edited: Cris LaPierre on 2 Mar 2025
I believe you need to use an imageDatastore and a pixelLabelDatastore in order to classify pixels. Consider these examples that use dicePixelClassificationLayer:
- 3-D Brain Tumor Segmentation Using Deep Learning
- Cardiac Left Ventricle Segmentation from Cine-MRI Images Using U-Net Network
Note that the recommended syntax changed in R2024a to use trainnet and dlnetwork objects instead, so you have to go back to older versions of the doc to find examples that use trainNetwork.
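For instance, following the pattern in those examples, a rough sketch could look like the following (imageDir, labelDir, and matRead are placeholders, and it assumes each volume and each mask has been written to its own .mat file, as the brain tumor example does):
classNames    = ["background", "foreground"];
pixelLabelIDs = [0 1];
% Datastores over the saved volumes and masks (matRead loads the variable from a .mat file)
volds = imageDatastore(imageDir, 'FileExtensions', '.mat', 'ReadFcn', @matRead);
pxds  = pixelLabelDatastore(labelDir, classNames, pixelLabelIDs, ...
    'FileExtensions', '.mat', 'ReadFcn', @matRead);
dsTrain = combine(volds, pxds);               % each read returns {volume, categorical mask}
net = trainNetwork(dsTrain, lgraph, options); % pre-R2024a syntax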
Answers (2)
Walter Roberson on 1 Mar 2025
For regression tasks, responses must be a vector, a matrix, or a 4-D array of numeric responses which must not contain NaNs.
Y = reshape(mask_images, [128, 128, 128, 1, 10]);
That is a 5D array, not a 4D array.
net = trainNetwork(X, Y, lgraph, options);
3 Comments
Walter Roberson on 1 Mar 2025
Hmmm, if it is a classification problem, then: "For classification tasks, responses must be a vector of categorical responses."
Matt J on 2 Mar 2025
Edited: Matt J on 2 Mar 2025
Instead of using MATLAB's dicePixelClassificationLayer, you can try the following custom output layer:
classdef GeneralizedDiceLossLayer < nnet.layer.RegressionLayer
    % Custom output layer for Generalized Dice Loss (GDL) in 3D binary segmentation
    methods
        function layer = GeneralizedDiceLossLayer(name)
            % Constructor function
            layer.Name = name;
            layer.Description = "Generalized Dice Loss for 3D binary segmentation";
        end

        function loss = forwardLoss(layer, Y, T)
            % Compute Generalized Dice Loss between predictions Y and ground truth T
            % Y: Predictions (Sx, Sy, Sz, 1, B) from the network
            % T: Ground Truth (Sx, Sy, Sz, 1, B)

            % Ensure Y and T are in the same range (sigmoid output expected)
            Y = squeeze(Y); % Remove singleton dimension -> (Sx, Sy, Sz, B)
            T = squeeze(T); % Same as Y

            % Flatten spatial dimensions (N voxels, B batches)
            numBatches = size(Y, 4);
            Y = reshape(Y, [], numBatches); % (N, B) where N = Sx*Sy*Sz
            T = reshape(T, [], numBatches); % (N, B)

            % Compute class weights w = 1 / (sum of ground truth per batch)^2
            sumT = sum(T, 1) + eps; % Avoid division by zero
            weights = 1 ./ (sumT.^2);

            % Compute Generalized Dice Numerator (2 * weighted intersection)
            numerator = 2 * sum(weights .* sum(Y .* T, 1));

            % Compute Generalized Dice Denominator (weighted sum of both masks)
            denominator = sum(weights .* (sum(Y, 1) + sumT));

            % Compute final Generalized Dice Loss
            loss = 1 - (numerator / (denominator + eps));
        end

        function dLdY = backwardLoss(layer, Y, T)
            % Compute gradient of Generalized Dice Loss w.r.t. Y (predictions)

            % Ensure correct dimensions (remove singleton channels)
            Y = squeeze(Y); % (Sx, Sy, Sz, B)
            T = squeeze(T); % Same as Y

            % Get spatial size
            [Sx, Sy, Sz, B] = size(Y);

            % Flatten spatial dimensions into (N, B)
            N = Sx * Sy * Sz;
            Y = reshape(Y, N, B);
            T = reshape(T, N, B);

            % Compute class weights
            sumT = sum(T, 1) + eps;
            weights = 1 ./ (sumT.^2);

            % Compute numerator and denominator of Generalized Dice coefficient
            intersection = sum(Y .* T, 1);
            sumY = sum(Y, 1);
            numerator = 2 * weights .* intersection;
            denominator = weights .* (sumY + sumT);

            % Compute gradient dL/dY
            dLdY_flat = (2 * weights .* T .* denominator - numerator .* (1 + Y)) ./ (denominator.^2 + eps);

            % Reshape back to original 5D size: (Sx, Sy, Sz, 1, B)
            dLdY = reshape(dLdY_flat, Sx, Sy, Sz, 1, B);
        end
    end
end
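A possible way to wire this in (an untested sketch; it assumes the masks are supplied as numeric values in [0, 1] and, per Walter's point above, fed through datastores rather than as a 5-D in-memory response array):
lgraph = replaceLayer(lgraph, 'output', GeneralizedDiceLossLayer('output')); % swap out the dice layer
dsX = arrayDatastore(single(X), 'IterationDimension', 5);  % grey volumes
dsY = arrayDatastore(single(Y), 'IterationDimension', 5);  % binary masks as numeric targets
net = trainNetwork(combine(dsX, dsY), lgraph, options);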
0 Comments