Deep learning network for custom training loops

A `dlnetwork` object enables support for custom training loops using automatic differentiation.

For most deep learning tasks, you can use a pretrained network and adapt it to your own data. For an example showing how to use transfer learning to retrain a convolutional neural network to classify a new set of images, see Train Deep Learning Network to Classify New Images. Alternatively, you can create and train networks from scratch using `layerGraph` objects with the `trainNetwork` and `trainingOptions` functions.

If the `trainingOptions` function does not provide the training options that you need for your task, then you can create a custom training loop using automatic differentiation. To learn more, see Define Custom Training Loops.

`lgraph` — Network architecture
`layerGraph` object

Network architecture, specified as a layer graph.

The layer graph must not contain output layers. When training the network, calculate the loss separately.

For a list of layers supported by `dlnetwork`, see Supported Layers.
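For example, a minimal sketch of a valid input layer graph with no output layers (the architecture here is illustrative, not prescribed by `dlnetwork`):

```matlab
% Layer graph with no output layers; compute the loss in the training loop instead.
layers = [
    imageInputLayer([28 28 1],'Name','in','Normalization','none')
    convolution2dLayer(3,16,'Padding',1,'Name','conv')
    reluLayer('Name','relu')
    fullyConnectedLayer(10,'Name','fc')
    softmaxLayer('Name','sm')];

dlnet = dlnetwork(layerGraph(layers));
```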

`Layers` — Network layers
`Layer` array

Network layers, specified as a `Layer` array.

`Connections` — Layer connections
table

Layer connections, specified as a table with two columns.

Each table row represents a connection in the layer graph. The first column, `Source`, specifies the source of each connection. The second column, `Destination`, specifies the destination of each connection. The connection sources and destinations are either layer names or have the form `'layerName/IOName'`, where `'IOName'` is the name of the layer input or output.

**Data Types:** `table`
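For example, a short sketch of inspecting this table, assuming a `dlnetwork` object named `dlnet` in the workspace:

```matlab
% View the first few connections; each row has a Source and a Destination.
head(dlnet.Connections)
```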

`Learnables` — Network learnable parameters
table

Network learnable parameters, specified as a table with three columns:

- `Layer` – Layer name, specified as a string scalar.
- `Parameter` – Parameter name, specified as a string scalar.
- `Value` – Value of the parameter, specified as a `dlarray`.

The network learnable parameters contain the features learned by the network, for example, the weights of convolution and fully connected layers.

**Data Types:** `table`
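For example, a short sketch of inspecting the learnable parameters, again assuming a `dlnetwork` object named `dlnet`:

```matlab
% Each row pairs a layer name and parameter name with its value.
head(dlnet.Learnables)

% The Value column is a cell array of dlarray objects.
W = dlnet.Learnables.Value{1};
```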

`State` — Network state
table

Network state, specified as a table with three columns:

- `Layer` – Layer name, specified as a string scalar.
- `Parameter` – Parameter name, specified as a string scalar.
- `Value` – Value of the parameter, specified as a numeric array.

The network state contains information remembered by the network between iterations, for example, the state of LSTM and batch normalization layers.

During training or inference, you can update the network state using the output of the `forward` and `predict` functions.

**Data Types:** `table`
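For example, a minimal sketch of the state update during training, assuming a `dlnetwork` object `dlnet` and a formatted mini-batch `dlX`:

```matlab
% forward returns the updated network state alongside the predictions.
[dlY,state] = forward(dlnet,dlX);
dlnet.State = state;
```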

`InputNames` — Network input layer names
cell array

Network input layer names, specified as a cell array of character vectors.

**Data Types:** `cell`

`OutputNames` — Network output layer names
cell array

Network output layer names, specified as a cell array of character vectors. This property includes all layers with disconnected outputs. If a layer has multiple outputs, then the disconnected outputs are specified as `'layerName/outputName'`.

**Data Types:** `cell`

| Function | Description |
|---|---|
| `forward` | Compute deep learning network output for training |
| `predict` | Compute deep learning network output for inference |
| `layerGraph` | Graph of network layers for deep learning |

**Create `dlnetwork` Object**

To implement a custom training loop for your network, first convert it to a `dlnetwork` object. Do not include output layers in a `dlnetwork` object. Instead, you must specify the loss function in the custom training loop.

Load a pretrained GoogLeNet model using the `googlenet` function. This function requires the Deep Learning Toolbox™ Model for GoogLeNet Network support package. If this support package is not installed, then the function provides a download link.

`net = googlenet;`

Convert the network to a layer graph and remove the layers used for classification using `removeLayers`.

```matlab
lgraph = layerGraph(net);
lgraph = removeLayers(lgraph,["prob" "output"]);
```

Convert the network to a `dlnetwork` object.

```matlab
dlnet = dlnetwork(lgraph)
```

```
dlnet = 
  dlnetwork with properties:

         Layers: [142x1 nnet.cnn.layer.Layer]
    Connections: [168x2 table]
     Learnables: [116x3 table]
          State: [0x3 table]
     InputNames: {'data'}
    OutputNames: {'loss3-classifier'}
```
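You can then make predictions with the converted network. A minimal sketch, assuming an image `X` of size 224-by-224-by-3 (the GoogLeNet input size); the scores come from the `'loss3-classifier'` layer because the softmax and output layers were removed:

```matlab
% Format the input as a dlarray with 'SSCB' dimension labels.
dlX = dlarray(single(X),'SSCB');

% Predict unnormalized class scores from the 'loss3-classifier' layer.
dlY = predict(dlnet,dlX);
```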

**Train Network Using Custom Training Loop**

This example shows how to train a network that classifies handwritten digits with a custom learning rate schedule.

If `trainingOptions` does not provide the options you need (for example, a custom learning rate schedule), then you can define your own custom training loop using automatic differentiation.

This example trains a network to classify handwritten digits with the *time-based decay* learning rate schedule: for each iteration, the solver uses the learning rate given by $\rho_t = \frac{\rho_0}{1 + kt}$, where $t$ is the iteration number, $\rho_0$ is the initial learning rate, and $k$ is the decay.
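For illustration, this short sketch evaluates the schedule for the settings used later in this example ($\rho_0 = 0.01$, $k = 0.01$); the iteration range is arbitrary:

```matlab
rho0 = 0.01;   % initial learning rate
k = 0.01;      % decay
t = 1:100;     % iteration numbers

% Time-based decay: the learning rate shrinks as iterations accumulate.
learnRate = rho0 ./ (1 + k*t);
```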

**Load Training Data**

Load the digits data.

```matlab
[XTrain,YTrain] = digitTrain4DArrayData;
classes = categories(YTrain);
numClasses = numel(classes);
```

**Define Network**

Define the network and specify the average image using the `'Mean'` option in the image input layer.

```matlab
layers = [
    imageInputLayer([28 28 1],'Name','input','Mean',mean(XTrain,4))
    convolution2dLayer(5,20,'Name','conv1')
    batchNormalizationLayer('Name','bn1')
    reluLayer('Name','relu1')
    convolution2dLayer(3,20,'Padding',1,'Name','conv2')
    batchNormalizationLayer('Name','bn2')
    reluLayer('Name','relu2')
    convolution2dLayer(3,20,'Padding',1,'Name','conv3')
    batchNormalizationLayer('Name','bn3')
    reluLayer('Name','relu3')
    fullyConnectedLayer(numClasses,'Name','fc')
    softmaxLayer('Name','softmax')];

lgraph = layerGraph(layers);
```

Create a `dlnetwork` object from the layer graph.

```matlab
dlnet = dlnetwork(lgraph)
```

```
dlnet = 
  dlnetwork with properties:

         Layers: [12×1 nnet.cnn.layer.Layer]
    Connections: [11×2 table]
     Learnables: [14×3 table]
          State: [6×3 table]
     InputNames: {'input'}
    OutputNames: {'softmax'}
```

**Define Model Gradients Function**

Create the function `modelGradients`, listed at the end of the example, that takes a `dlnetwork` object `dlnet` and a mini-batch of input data `dlX` with corresponding labels `Y`, and returns the gradients of the loss with respect to the learnable parameters in `dlnet` and the corresponding loss.

**Specify Training Options**

Train with a mini-batch size of 128 for 5 epochs.

```matlab
numEpochs = 5;
miniBatchSize = 128;
```

Specify the options for SGDM optimization. Specify an initial learning rate of 0.01, a decay of 0.01, and a momentum of 0.9.

```matlab
initialLearnRate = 0.01;
decay = 0.01;
momentum = 0.9;
```

Visualize the training progress in a plot.

`plots = "training-progress";`

Train on a GPU if one is available. Using a GPU requires Parallel Computing Toolbox™ and a CUDA®-enabled NVIDIA® GPU with compute capability 3.0 or higher.

`executionEnvironment = "auto";`

**Train Model**

Train the model using a custom training loop.

For each epoch, shuffle the data and loop over mini-batches of data. At the end of each epoch, display the training progress.

For each mini-batch:

- Convert the labels to dummy variables.
- Convert the data to `dlarray` objects with underlying type single and specify the dimension labels `'SSCB'` (spatial, spatial, channel, batch).
- For GPU training, convert to `gpuArray` objects.
- Evaluate the model gradients, state, and loss using `dlfeval` and the `modelGradients` function, and update the network state.
- Determine the learning rate for the time-based decay learning rate schedule.
- Update the network parameters using the `sgdmupdate` function.

Initialize the training progress plot.

```matlab
if plots == "training-progress"
    figure
    lineLossTrain = animatedline('Color',[0.85 0.325 0.098]);
    ylim([0 inf])
    xlabel("Iteration")
    ylabel("Loss")
    grid on
end
```

Initialize the velocity parameter for the SGDM solver.

`velocity = [];`

Train the network.

```matlab
numObservations = numel(YTrain);
numIterationsPerEpoch = floor(numObservations./miniBatchSize);

iteration = 0;
start = tic;

% Loop over epochs.
for epoch = 1:numEpochs
    % Shuffle data.
    idx = randperm(numel(YTrain));
    XTrain = XTrain(:,:,:,idx);
    YTrain = YTrain(idx);
    
    % Loop over mini-batches.
    for i = 1:numIterationsPerEpoch
        iteration = iteration + 1;
        
        % Read mini-batch of data and convert the labels to dummy
        % variables.
        idx = (i-1)*miniBatchSize+1:i*miniBatchSize;
        X = XTrain(:,:,:,idx);
        
        Y = zeros(numClasses, miniBatchSize, 'single');
        for c = 1:numClasses
            Y(c,YTrain(idx)==classes(c)) = 1;
        end
        
        % Convert mini-batch of data to dlarray.
        dlX = dlarray(single(X),'SSCB');
        
        % If training on a GPU, then convert data to gpuArray.
        if (executionEnvironment == "auto" && canUseGPU) || executionEnvironment == "gpu"
            dlX = gpuArray(dlX);
        end
        
        % Evaluate the model gradients, state, and loss using dlfeval and the
        % modelGradients function and update the network state.
        [gradients,state,loss] = dlfeval(@modelGradients,dlnet,dlX,Y);
        dlnet.State = state;
        
        % Determine learning rate for time-based decay learning rate schedule.
        learnRate = initialLearnRate/(1 + decay*iteration);
        
        % Update the network parameters using the SGDM optimizer.
        [dlnet,velocity] = sgdmupdate(dlnet,gradients,velocity,learnRate,momentum);
        
        % Display the training progress.
        if plots == "training-progress"
            D = duration(0,0,toc(start),'Format','hh:mm:ss');
            addpoints(lineLossTrain,iteration,double(gather(extractdata(loss))))
            title("Epoch: " + epoch + ", Elapsed: " + string(D))
            drawnow
        end
    end
end
```

**Test Model**

Test the classification accuracy of the model by comparing the predictions on a test set with the true labels.

`[XTest, YTest] = digitTest4DArrayData;`

Convert the data to a `dlarray` object with dimension format `'SSCB'`. For GPU prediction, also convert the data to `gpuArray`.

```matlab
dlXTest = dlarray(XTest,'SSCB');
if (executionEnvironment == "auto" && canUseGPU) || executionEnvironment == "gpu"
    dlXTest = gpuArray(dlXTest);
end
```

Classify the images using the `modelPredictions` function, listed at the end of the example, and find the classes with the highest scores.

```matlab
dlYPred = modelPredictions(dlnet,dlXTest,miniBatchSize);
[~,idx] = max(extractdata(dlYPred),[],1);
YPred = classes(idx);
```

Evaluate the classification accuracy.

```matlab
accuracy = mean(YPred == YTest)
```

```
accuracy = 0.9910
```

**Model Gradients Function**

The `modelGradients` function takes a `dlnetwork` object `dlnet` and a mini-batch of input data `dlX` with corresponding labels `Y`, and returns the gradients of the loss with respect to the learnable parameters in `dlnet`, the network state, and the loss. To compute the gradients automatically, use the `dlgradient` function.

```matlab
function [gradients,state,loss] = modelGradients(dlnet,dlX,Y)

% Forward pass returns the predictions and the updated network state.
[dlYPred,state] = forward(dlnet,dlX);

% Cross-entropy loss between the predictions and the dummy labels.
loss = crossentropy(dlYPred,Y);
gradients = dlgradient(loss,dlnet.Learnables);

end
```

**Model Predictions Function**

The `modelPredictions` function takes a `dlnetwork` object `dlnet`, an array of input data `dlX`, and a mini-batch size, and outputs the model predictions by iterating over mini-batches of the specified size.

```matlab
function dlYPred = modelPredictions(dlnet,dlX,miniBatchSize)

numObservations = size(dlX,4);
numIterations = ceil(numObservations / miniBatchSize);

% Preallocate the predictions using the output size of the 'fc' layer.
numClasses = dlnet.Layers(11).OutputSize;
dlYPred = zeros(numClasses,numObservations,'like',dlX);

% Predict over mini-batches of the specified size.
for i = 1:numIterations
    idx = (i-1)*miniBatchSize+1:min(i*miniBatchSize,numObservations);
    dlYPred(:,idx) = predict(dlnet,dlX(:,:,:,idx));
end

end
```

**Supported Layers**

The `dlnetwork` function supports the layers listed below, as well as custom layers whose forward functions do not return a nonempty memory value.

**Input Layers**

| Layer | Description |
|---|---|
| `imageInputLayer` | An image input layer inputs 2-D images to a network and applies data normalization. |
| `image3dInputLayer` | A 3-D image input layer inputs 3-D images or volumes to a network and applies data normalization. |
| `sequenceInputLayer` | A sequence input layer inputs sequence data to a network. |

**Convolution and Fully Connected Layers**

| Layer | Description |
|---|---|
| `convolution2dLayer` | A 2-D convolutional layer applies sliding convolutional filters to the input. |
| `convolution3dLayer` | A 3-D convolutional layer applies sliding cuboidal convolution filters to three-dimensional input. |
| `groupedConvolution2dLayer` | A 2-D grouped convolutional layer separates the input channels into groups and applies sliding convolutional filters. Use grouped convolutional layers for channel-wise separable (also known as depth-wise separable) convolution. |
| `transposedConv2dLayer` | A transposed 2-D convolution layer upsamples feature maps. |
| `transposedConv3dLayer` | A transposed 3-D convolution layer upsamples three-dimensional feature maps. |
| `fullyConnectedLayer` | A fully connected layer multiplies the input by a weight matrix and then adds a bias vector. |

**Sequence Layers**

| Layer | Description |
|---|---|
| `sequenceInputLayer` | A sequence input layer inputs sequence data to a network. |
| `lstmLayer` | An LSTM layer learns long-term dependencies between time steps in time series and sequence data. |
| `gruLayer` | A GRU layer learns dependencies between time steps in time series and sequence data. |

**Activation Layers**

| Layer | Description |
|---|---|
| `reluLayer` | A ReLU layer performs a threshold operation to each element of the input, where any value less than zero is set to zero. |
| `leakyReluLayer` | A leaky ReLU layer performs a threshold operation, where any input value less than zero is multiplied by a fixed scalar. |
| `clippedReluLayer` | A clipped ReLU layer performs a threshold operation, where any input value less than zero is set to zero and any value above the clipping ceiling is set to that clipping ceiling. |
| `eluLayer` | An ELU activation layer performs the identity operation on positive inputs and an exponential nonlinearity on negative inputs. |
| `tanhLayer` | A hyperbolic tangent (tanh) activation layer applies the tanh function on the layer inputs. |
| `softmaxLayer` | A softmax layer applies a softmax function to the input. |

**Normalization, Dropout, and Cropping Layers**

| Layer | Description |
|---|---|
| `batchNormalizationLayer` | A batch normalization layer normalizes each input channel across a mini-batch. To speed up training of convolutional neural networks and reduce the sensitivity to network initialization, use batch normalization layers between convolutional layers and nonlinearities, such as ReLU layers. |
| `crossChannelNormalizationLayer` | A channel-wise local response (cross-channel) normalization layer carries out channel-wise normalization. |
| `dropoutLayer` | A dropout layer randomly sets input elements to zero with a given probability. |
| `crop2dLayer` | A 2-D crop layer applies 2-D cropping to the input. |

**Pooling and Unpooling Layers**

| Layer | Description |
|---|---|
| `averagePooling2dLayer` | An average pooling layer performs down-sampling by dividing the input into rectangular pooling regions and computing the average values of each region. |
| `averagePooling3dLayer` | A 3-D average pooling layer performs down-sampling by dividing three-dimensional input into cuboidal pooling regions and computing the average values of each region. |
| `globalAveragePooling2dLayer` | A global average pooling layer performs down-sampling by computing the mean of the height and width dimensions of the input. |
| `globalAveragePooling3dLayer` | A 3-D global average pooling layer performs down-sampling by computing the mean of the height, width, and depth dimensions of the input. |
| `maxPooling2dLayer` | A max pooling layer performs down-sampling by dividing the input into rectangular pooling regions, and computing the maximum of each region. |
| `maxPooling3dLayer` | A 3-D max pooling layer performs down-sampling by dividing three-dimensional input into cuboidal pooling regions, and computing the maximum of each region. |
| `globalMaxPooling2dLayer` | A global max pooling layer performs down-sampling by computing the maximum of the height and width dimensions of the input. |
| `globalMaxPooling3dLayer` | A 3-D global max pooling layer performs down-sampling by computing the maximum of the height, width, and depth dimensions of the input. |
| `maxUnpooling2dLayer` | A max unpooling layer unpools the output of a max pooling layer. |

**Combination Layers**

| Layer | Description |
|---|---|
| `additionLayer` | An addition layer adds inputs from multiple neural network layers element-wise. |
| `depthConcatenationLayer` | A depth concatenation layer takes inputs that have the same height and width and concatenates them along the third dimension (the channel dimension). |
| `concatenationLayer` | A concatenation layer takes inputs and concatenates them along a specified dimension. The inputs must have the same size in all dimensions except the concatenation dimension. |

**See Also**

`dlarray` | `dlfeval` | `dlgradient` | `forward` | `layerGraph` | `predict`
