Check Custom Layer Validity
If you create a custom deep learning layer, then you can use the `checkLayer` function to check that the layer is valid. The function checks layers for validity, GPU compatibility, correctly defined gradients, and code generation compatibility. To check that a layer is valid, run the following command:

```matlab
checkLayer(layer,layout)
```
`layer` is an instance of the layer and `layout` is a `networkDataLayout` object specifying the valid sizes and data formats for inputs to the layer. To check with multiple observations, use the `ObservationDimension` option. To run the check for code generation compatibility, set the `CheckCodegenCompatibility` option to `1` (`true`). For large input sizes, the gradient checks take longer to run. To speed up the check, specify a smaller valid input size.

Check Custom Layer Validity
Check the validity of the example custom layer `sreluLayer`.

The custom layer `sreluLayer`, attached to this example as a supporting file, applies the SReLU operation to the input data. To access this layer, open this example as a live script.
Create an instance of the layer.

```matlab
layer = sreluLayer;
```
Create a `networkDataLayout` object that specifies the expected input size and format of a single observation of typical input to the layer. Specify a valid input size of `[24 24 20 128]`, where the dimensions correspond to the height, width, number of channels, and number of observations of the previous layer output. Specify that the data has format `"SSCB"` (spatial, spatial, channel, batch).
```matlab
validInputSize = [24 24 20 128];
layout = networkDataLayout(validInputSize,"SSCB");
```
Check the layer validity using `checkLayer`. When you pass data through the network, the layer expects 4-D array inputs, where the first three dimensions correspond to the height, width, and number of channels of the previous layer output, and the fourth dimension corresponds to the observations.
```matlab
checkLayer(layer,layout)
```

```
Skipping GPU tests. No compatible GPU device found.

Skipping code generation compatibility tests. To check validity of the layer
for code generation, specify the CheckCodegenCompatibility and
ObservationDimension options.

Running nnet.checklayer.TestLayerWithoutBackward
.......... ..........
Done nnet.checklayer.TestLayerWithoutBackward
__________

Test Summary:
	 20 Passed, 0 Failed, 0 Incomplete, 14 Skipped.
	 Time elapsed: 0.1629 seconds.
```
The results show the number of passed, failed, and skipped tests. If you do not have a GPU, then the function skips the corresponding tests.
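To run more of the checks in one call, you can combine the name-value options described above. For example, assuming the batch dimension of the input is the fourth dimension (as in the `"SSCB"` layout used here), a call such as the following also enables the multi-observation and code generation checks:

```matlab
% Check with mini-batches of size 1 and 2 (dimension 4 is the batch
% dimension) and also run the code generation compatibility tests.
checkLayer(layer,layout,ObservationDimension=4,CheckCodegenCompatibility=true)
```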
List of Tests
The `checkLayer` function uses these tests to check the validity of custom layers.
Test | Description |
---|---|
`functionSyntaxesAreCorrect` | The syntaxes of the layer functions are correctly defined. |
`predictDoesNotError` | The `predict` function does not error. |
`forwardDoesNotError` | When specified, the `forward` function does not error. |
`forwardPredictAreConsistentInSize` | When `forward` is specified, `forward` and `predict` output values of the same size. |
`backwardDoesNotError` | When specified, `backward` does not error. |
`backwardIsConsistentInSize` | When `backward` is specified, the derivatives that `backward` outputs are consistent in size: the derivative with respect to each input is the same size as that input, and the derivative with respect to each learnable parameter is the same size as that parameter. |
`predictIsConsistentInType` | The outputs of `predict` are consistent in type with the inputs. |
`forwardIsConsistentInType` | When `forward` is specified, its outputs are consistent in type with the inputs. |
`backwardIsConsistentInType` | When `backward` is specified, its outputs are consistent in type with the inputs. |
`gradientsAreNumericallyCorrect` | When `backward` is specified, the gradients computed in `backward` are consistent with the numerical gradients. |
`backwardPropagationDoesNotError` | When `backward` is not specified, the derivatives can be computed using automatic differentiation. |
`predictReturnsValidStates` | For layers with state properties, the `predict` function returns valid states. |
`forwardReturnsValidStates` | For layers with state properties, the `forward` function, if specified, returns valid states. |
`resetStateDoesNotError` | For layers with state properties, the `resetState` function, if specified, does not error and resets the states to valid states. |
`formattableLayerPredictIsFormatted` | For layers that inherit from the `nnet.layer.Formattable` class, the `predict` function returns a formatted `dlarray` with a channel dimension. |
`formattableLayerForwardIsFormatted` | For layers that inherit from the `nnet.layer.Formattable` class, the `forward` function, if specified, returns a formatted `dlarray` with a channel dimension. |
`initializeDoesNotChangeLearnableParametersWhenTheyAreNotEmpty` | When you specify one or more `networkDataLayout` objects, the learnable parameters of the layer do not change after repeated initialization with the same `networkDataLayout` objects as input. |
`initializeDoesNotChangeStatefulParametersWhenTheyAreNotEmpty` | When you specify one or more `networkDataLayout` objects, the state parameters of the layer do not change after repeated initialization with the same `networkDataLayout` objects as input. |
`codegenPragmaDefinedInClassDef` | The pragma `%#codegen` for code generation is specified in the class file. |
`layerPropertiesSupportCodegen` | The layer properties support code generation. |
`predictSupportsCodegen` | `predict` is valid for code generation. |
`doesNotHaveStateProperties` | For code generation, the layer does not have state properties. |
`functionLayerSupportsCodegen` | For code generation, the layer function must be a named function on the path and the `Formattable` property must be `0` (`false`). |
Some tests run multiple times. These tests also check different data types and GPU compatibility:

- `predictIsConsistentInType`
- `forwardIsConsistentInType`
- `backwardIsConsistentInType`
To execute the layer functions on a GPU, the functions must support inputs and outputs of type `gpuArray` with the underlying data type `single`.
Generated Data
To check the layer validity, the `checkLayer` function generates data with values in the range [-1, 1]. To check for multiple observations, either specify a `layout` with a batch (`"B"`) dimension or specify the observation dimension using the `ObservationDimension` option. If you specify the observation dimension, then the `checkLayer` function checks that the layer functions are valid using generated data with mini-batches of size 1 and 2. If you do not specify this name-value argument, then the function skips the tests that check that the layer functions are valid for multiple observations.
Diagnostics
If a test fails when you use `checkLayer`, then the function provides a test diagnostic and a framework diagnostic. The test diagnostic highlights any issues found with the layer. The framework diagnostic provides more detailed information.
Function Syntaxes
The test `functionSyntaxesAreCorrect` checks that the layer functions have correctly defined syntaxes.
Test Diagnostic | Description | Possible Solution |
---|---|---|
Incorrect number of input arguments for 'predict' in Layer. | The syntax for the `predict` function is not consistent with the number of layer inputs. | Specify the correct number of input and output arguments in `predict`. You can adjust the syntaxes for layers with multiple inputs, multiple outputs, or multiple state parameters. If the number of inputs to the layer can vary, then use `varargin` instead of `X1,...,XN`. If the number of outputs can vary, then use `varargout` instead of `Y1,...,YM`. If the custom layer has a `dlnetwork` object for a learnable parameter, then in the `predict` function of the custom layer, use the `predict` function of the `dlnetwork`. |
Incorrect number of output arguments for 'predict' in Layer | The syntax for the `predict` function is not consistent with the number of layer outputs. | |
Incorrect number of input arguments for 'forward' in Layer | The syntax for the optional `forward` function is not consistent with the number of layer inputs. | Specify the correct number of input and output arguments in `forward`. You can adjust the syntaxes for layers with multiple inputs, multiple outputs, or multiple state parameters. If the number of inputs to the layer can vary, then use `varargin` instead of `X1,...,XN`. If the number of outputs can vary, then use `varargout` instead of `Y1,...,YM`. If the custom layer has a `dlnetwork` object for a learnable parameter, then in the `forward` function of the custom layer, use the `forward` function of the `dlnetwork`. |
Incorrect number of output arguments for 'forward' in Layer | The syntax for the optional `forward` function is not consistent with the number of layer outputs. | |
Incorrect number of input arguments for 'backward' in Layer | The syntax for the optional `backward` function is not consistent with the number of layer inputs and outputs. | Specify the correct number of input and output arguments in `backward`. You can adjust the syntaxes for layers with multiple inputs, multiple outputs, multiple learnable parameters, or multiple state parameters. To reduce memory usage by preventing unused variables being saved between the forward and backward pass, replace the corresponding input arguments with `~`. If the number of inputs to `backward` can vary, then use `varargin` instead of the input arguments after `layer`. If the number of outputs can vary, then use `varargout` instead of the output arguments. If the layer forward functions support `dlarray` objects, then the software automatically determines the backward function and you do not need to specify the `backward` function. |
Incorrect number of output arguments for 'backward' in Layer | The syntax for the optional `backward` function is not consistent with the number of layer outputs. | |
For layers with multiple inputs or outputs, you must set the values of the layer properties `NumInputs` (or alternatively, `InputNames`) and `NumOutputs` (or alternatively, `OutputNames`), respectively, in the layer constructor function.
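For example, a minimal constructor sketch for a hypothetical layer with a variable number of inputs (the class name and the `Description` text are illustrative):

```matlab
function layer = weightedAdditionLayer(numInputs,name)
    % Set NumInputs so that the framework (and checkLayer) knows how
    % many inputs the layer expects.
    layer.NumInputs = numInputs;
    layer.Name = name;
    layer.Description = "Weighted addition of " + numInputs + " inputs";
end
```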
Multiple Observations
The `checkLayer` function checks that the layer functions are valid for single and multiple observations. To check for multiple observations, either specify a `layout` with a batch (`"B"`) dimension or specify the observation dimension using the `ObservationDimension` option. If you specify the observation dimension, then the `checkLayer` function checks that the layer functions are valid using generated data with mini-batches of size 1 and 2. If you do not specify this name-value argument, then the function skips the tests that check that the layer functions are valid for multiple observations.
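For example, both of the following calls enable the multi-observation checks (the sizes and formats are illustrative):

```matlab
% Option 1: the layout includes a batch ("B") dimension.
layout = networkDataLayout([24 24 20 128],"SSCB");
checkLayer(layer,layout)

% Option 2: explicitly name the observation dimension of the input data.
checkLayer(layer,layout,ObservationDimension=4)
```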
Test Diagnostic | Description | Possible Solution |
---|---|---|
Skipping multi-observation tests. To enable checks with multiple observations, specify the 'ObservationDimension' parameter in checkLayer. | If you do not specify the `ObservationDimension` option in `checkLayer`, then the function skips the tests that check data with multiple observations. | Use the command `checkLayer(layer,layout,ObservationDimension=dim)`, where `dim` is the dimension of the layer input data that corresponds to observations. For more information, see Layer Input Sizes. |
Functions Do Not Error
The tests `predictDoesNotError`, `forwardDoesNotError`, and `backwardDoesNotError` check that the layer functions do not error when passed inputs of valid size. If you specify an observation dimension, then the function checks the layer for both a single observation and multiple observations.
Test Diagnostic | Description | Possible Solution |
---|---|---|
The function 'predict' threw an error: | The `predict` function errors when passed data of the size defined by `layout`. | Address the error described in the framework diagnostic. Tip: If the layer forward functions support `dlarray` objects, then the software automatically determines the backward function and you do not need to specify the `backward` function. |
The function 'forward' threw an error: | The optional `forward` function errors when passed data of the size defined by `layout`. | |
The function 'backward' threw an error: | The optional `backward` function errors when passed the output of `predict`. | |
Outputs Are Consistent in Size
The test `backwardIsConsistentInSize` checks that the `backward` function outputs derivatives of the correct size.
The `backward` function syntax depends on the type of layer.

- `dLdX = backward(layer,X,Y,dLdY,memory)` returns the derivatives `dLdX` of the loss with respect to the layer input, where `layer` has a single input and a single output. `Y` corresponds to the forward function output and `dLdY` corresponds to the derivative of the loss with respect to `Y`. The function input `memory` corresponds to the memory output of the forward function.
- `[dLdX,dLdW] = backward(layer,X,Y,dLdY,memory)` also returns the derivative `dLdW` of the loss with respect to the learnable parameter, where `layer` has a single learnable parameter.
- `[dLdX,dLdSin] = backward(layer,X,Y,dLdY,dLdSout,memory)` also returns the derivative `dLdSin` of the loss with respect to the state input, where `layer` has a single state parameter and `dLdSout` corresponds to the derivative of the loss with respect to the layer state output.
- `[dLdX,dLdW,dLdSin] = backward(layer,X,Y,dLdY,dLdSout,memory)` also returns the derivative `dLdW` of the loss with respect to the learnable parameter and returns the derivative `dLdSin` of the loss with respect to the layer state input, where `layer` has a single state parameter and a single learnable parameter.
You can adjust the syntaxes for layers with multiple inputs, multiple outputs, multiple learnable parameters, or multiple state parameters:

- For layers with multiple inputs, replace `X` and `dLdX` with `X1,...,XN` and `dLdX1,...,dLdXN`, respectively, where `N` is the number of inputs.
- For layers with multiple outputs, replace `Y` and `dLdY` with `Y1,...,YM` and `dLdY1,...,dLdYM`, respectively, where `M` is the number of outputs.
- For layers with multiple learnable parameters, replace `dLdW` with `dLdW1,...,dLdWP`, where `P` is the number of learnable parameters.
- For layers with multiple state parameters, replace `dLdSin` and `dLdSout` with `dLdSin1,...,dLdSinK` and `dLdSout1,...,dLdSoutK`, respectively, where `K` is the number of state parameters.
To reduce memory usage by preventing unused variables being saved between the forward and backward pass, replace the corresponding input arguments with `~`.
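For example, a minimal `backward` sketch for a hypothetical elementwise layer that computes `Y = tanh(X)`, with no learnable or state parameters. The unused `X` and `memory` inputs are replaced with `~` as described above:

```matlab
function dLdX = backward(layer,~,Y,dLdY,~)
    % For Y = tanh(X), dY/dX = 1 - tanh(X).^2 = 1 - Y.^2, so by the
    % chain rule dLdX = dLdY .* (1 - Y.^2). The output dLdX has the same
    % size as the layer input, as backwardIsConsistentInSize requires.
    dLdX = dLdY .* (1 - Y.^2);
end
```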
Tip

If the number of inputs to `backward` can vary, then use `varargin` instead of the input arguments after `layer`. In this case, `varargin` is a cell array of the inputs, where the first `N` elements correspond to the `N` layer inputs, the next `M` elements correspond to the `M` layer outputs, the next `M` elements correspond to the derivatives of the loss with respect to the `M` layer outputs, the next `K` elements correspond to the `K` derivatives of the loss with respect to the `K` state outputs, and the last element corresponds to `memory`.

If the number of outputs can vary, then use `varargout` instead of the output arguments. In this case, `varargout` is a cell array of the outputs, where the first `N` elements correspond to the derivatives of the loss with respect to the `N` layer inputs, the next `P` elements correspond to the derivatives of the loss with respect to the `P` learnable parameters, and the next `K` elements correspond to the derivatives of the loss with respect to the `K` state inputs.
Test Diagnostic | Description | Possible Solution |
---|---|---|
Incorrect size of 'dLdX' for 'backward'. | The derivatives of the loss with respect to the layer inputs must be the same size as the corresponding layer inputs. | Return the derivatives `dLdX1,...,dLdXN` with the same size as the corresponding layer inputs `X1,...,XN`. |
Incorrect size of the derivative of the loss with respect to the input 'in1' for 'backward' | | |
The size of 'Y' returned from 'forward' must be the same as for 'predict'. | The outputs of `predict` must be the same size as the corresponding outputs of `forward`. | Return the outputs `Y1,...,YM` of `predict` with the same size as the corresponding outputs of `forward`. |
Incorrect size of the derivative of the loss with respect to 'W' for 'backward'. | The derivatives of the loss with respect to the learnable parameters must be the same size as the corresponding learnable parameters. | Return the derivatives `dLdW1,...,dLdWP` with the same size as the corresponding learnable parameters `W1,...,WP`. |
Tip

If the layer forward functions support `dlarray` objects, then the software automatically determines the backward function and you do not need to specify the `backward` function. For a list of functions that support `dlarray` objects, see List of Functions with dlarray Support.
Outputs Are Formatted
Since R2023b
The tests `formattableLayerPredictIsFormatted` and `formattableLayerForwardIsFormatted` check that the outputs of the layer functions are `dlarray` objects with a channel dimension.
Test Diagnostic | Description | Possible Solution |
---|---|---|
The layer output returned from 'predict' must be a formatted dlarray. | The `predict` function does not return a formatted `dlarray`. | Return the output of the `predict` function as a formatted `dlarray` with a channel (`"C"`) dimension. |
The layer output returned from 'forward' must be a formatted dlarray. | The optional `forward` function does not return a formatted `dlarray`. | Return the output of the optional `forward` function as a formatted `dlarray` with a channel (`"C"`) dimension. |
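For example, a sketch of a formattable layer whose `predict` output keeps the channel dimension (the class name is illustrative; elementwise functions such as `relu` preserve the `dlarray` format of their input):

```matlab
classdef myFormattableLayer < nnet.layer.Layer & nnet.layer.Formattable
    methods
        function Y = predict(layer,X)
            % X is a formatted dlarray. Elementwise operations preserve
            % the format, so Y remains a formatted dlarray with a
            % channel ("C") dimension.
            Y = relu(X);
        end
    end
end
```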
Initialization
Since R2023b
These tests check that initialization does not overwrite nonempty parameters. The tests `initializeDoesNotChangeLearnableParametersWhenTheyAreNotEmpty` and `initializeDoesNotChangeStatefulParametersWhenTheyAreNotEmpty` check that the custom `initialize` function does not overwrite nonempty learnable and stateful parameters.
Test Diagnostic | Description | Possible Solution |
---|---|---|
The initialize function overwrites existing layer learnable parameters. | The `initialize` function overwrites nonempty learnable parameters. | Initialize only empty parameters. To check whether a parameter is empty, use the `isempty` function. |
The initialize function overwrites existing layer state parameters. | The `initialize` function overwrites nonempty state parameters. | |
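For example, an `initialize` sketch that only sets parameters that are still empty, so repeated initialization with the same layout leaves them unchanged (the `Weights` property and its size are illustrative):

```matlab
function layer = initialize(layer,layout)
    % Initialize the learnable parameter only if it is empty so that
    % repeated initialization does not overwrite it.
    if isempty(layer.Weights)
        numChannels = layout.Size(finddim(layout,"C"));
        layer.Weights = randn([1 1 numChannels]);
    end
end
```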
Consistent Data Types and GPU Compatibility
The tests `predictIsConsistentInType`, `forwardIsConsistentInType`, and `backwardIsConsistentInType` check that the layer functions output variables of the correct data type. The tests check that the layer functions return consistent data types when given inputs of the data types `single`, `double`, and `gpuArray` with the underlying types `single` or `double`.
If the layer forward functions fully support `dlarray` objects, then the layer is GPU compatible. Otherwise, to be GPU compatible, the layer functions must support inputs and return outputs of type `gpuArray` (Parallel Computing Toolbox).

Many MATLAB® built-in functions support `gpuArray` (Parallel Computing Toolbox) and `dlarray` input arguments. For a list of functions that support `dlarray` objects, see List of Functions with dlarray Support. For a list of functions that execute on a GPU, see Run MATLAB Functions on a GPU (Parallel Computing Toolbox). To use a GPU for deep learning, you must also have a supported GPU device. For information on supported devices, see GPU Computing Requirements (Parallel Computing Toolbox). For more information on working with GPUs in MATLAB, see GPU Computing in MATLAB (Parallel Computing Toolbox).
Tip

If you preallocate arrays using functions such as `zeros`, then you must ensure that the data types of these arrays are consistent with the layer function inputs. To create an array of zeros of the same data type as another array, use the `"like"` option of `zeros`. For example, to initialize an array of zeros of size `sz` with the same data type as the array `X`, use `Y = zeros(sz,"like",X)`.
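For example, a `predict` sketch that preallocates its output with the `"like"` option so that `single`, `double`, and `gpuArray` inputs all produce outputs of matching type (the elementwise operation is illustrative):

```matlab
function Y = predict(layer,X)
    % Preallocate Y with the same class and device (CPU or GPU) as X so
    % the output type stays consistent with the input type.
    Y = zeros(size(X),"like",X);
    Y(:) = max(X,0); % illustrative elementwise operation
end
```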
Test Diagnostic | Description | Possible Solution |
---|---|---|
Incorrect type of 'Y' for 'predict'. | The types of the outputs `Y1,…,Ym` of the `predict` function must be consistent with the inputs `X1,…,Xn`. | Return the outputs `Y1,…,Ym` with the same type as the inputs `X1,…,Xn`. |
Incorrect type of output 'out1' for 'predict'. | | |
Incorrect type of 'Y' for 'forward'. | The types of the outputs `Y1,…,Ym` of the optional `forward` function must be consistent with the inputs `X1,…,Xn`. | |
Incorrect type of output 'out1' for 'forward'. | | |
Incorrect type of 'dLdX' for 'backward'. | The types of the derivatives `dLdX1,…,dLdXn` of the optional `backward` function must be consistent with the inputs `X1,…,Xn`. | Return the derivatives `dLdX1,…,dLdXn` with the same type as the inputs `X1,…,Xn`. |
Incorrect type of the derivative of the loss with respect to the input 'in1' for 'backward'. | | |
Incorrect type of the derivative of loss with respect to 'W' for 'backward'. | The type of the derivative of the loss with respect to each learnable parameter must be consistent with the corresponding learnable parameter. | For each learnable parameter, return the derivative with the same type as the corresponding learnable parameter. |
Tip

If the layer forward functions support `dlarray` objects, then the software automatically determines the backward function and you do not need to specify the `backward` function. For a list of functions that support `dlarray` objects, see List of Functions with dlarray Support.
Correct Gradients
When the optional `backward` function is not specified, the test `backwardPropagationDoesNotError` checks that the derivatives can be computed using automatic differentiation. When the optional `backward` function is specified, the test `gradientsAreNumericallyCorrect` checks that the gradients computed in `backward` are consistent with the numerical gradients.
Test Diagnostic | Description | Possible Solution |
---|---|---|
Expected a dlarray with no dimension labels, but instead found labels. | When the optional `backward` function is not specified, the layer forward functions must output `dlarray` objects without dimension labels. | Ensure that any `dlarray` objects created in the layer forward functions do not contain dimension labels. |
Unable to backward propagate through the layer. Check that the 'forward' function fully supports automatic differentiation. Alternatively, implement the 'backward' function manually. | One or more of the following: the forward functions do not support `dlarray` objects, or the derivatives of the input `dlarray` objects are not traced through the forward functions. | Check that the forward functions support `dlarray` objects. For a list of functions that support `dlarray` objects, see List of Functions with dlarray Support. Check that the derivatives of the input `dlarray` objects can be traced. Alternatively, define a custom backward function by creating a function named `backward`. |
Unable to backward propagate through the layer. Check that the 'predict' function fully supports automatic differentiation. Alternatively, implement the 'backward' function manually. | | |
The derivative 'dLdX' for 'backward' is inconsistent with the numerical gradient. | One or more of the following: the derivatives computed in `backward` are incorrect, or the numerical error between the computed and numerical gradients exceeds the test tolerance. | If the layer forward functions support `dlarray` objects, then you can omit the `backward` function and let the software determine the gradients using automatic differentiation. Otherwise, check that the derivatives in `backward` are correctly computed. If the derivatives are correctly computed, then in the framework diagnostic, manually check the absolute and relative errors between the computed and numerical gradients. If the absolute and relative errors are within an acceptable margin of the tolerance, then you can ignore this test diagnostic. |
The derivative of the loss with respect to the input 'in1' for 'backward' is inconsistent with the numerical gradient. | | |
The derivative of loss with respect to 'W' for 'backward' is inconsistent with the numerical gradient. | | |
Tip

If the layer forward functions support `dlarray` objects, then the software automatically determines the backward function and you do not need to specify the `backward` function. For a list of functions that support `dlarray` objects, see List of Functions with dlarray Support.
Valid States
For layers with state properties, the test `predictReturnsValidStates` checks that the `predict` function returns valid states. When `forward` is specified, the test `forwardReturnsValidStates` checks that the `forward` function returns valid states. The test `resetStateDoesNotError` checks that the `resetState` function returns a layer with valid state properties.
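For example, a `predict` sketch for a hypothetical layer with one state parameter; the returned state is a real-valued array (the `RunningMean` property and the smoothing weights are illustrative):

```matlab
function [Y,state] = predict(layer,X)
    % The state output must be a real-valued numeric array or an
    % unformatted dlarray. Here the state tracks a running mean of the
    % input.
    Y = X;
    state = 0.9*layer.RunningMean + 0.1*mean(X,"all");
end
```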
Test Diagnostic | Description | Possible Solution |
---|---|---|
Error using 'predict' in Layer. 'State' must be real-values numeric array or unformatted dlarray object. | State outputs must be real-valued numeric arrays or unformatted `dlarray` objects. | Ensure that the states identified in the framework diagnostic are real-valued numeric arrays or unformatted `dlarray` objects. |
Error using 'resetState' in Layer. 'State' must be real-values numeric array or unformatted dlarray object | State properties of the returned layer must be real-valued numeric arrays or unformatted `dlarray` objects. | |
Code Generation Compatibility
If you set the `CheckCodegenCompatibility` option to `1` (`true`), then the `checkLayer` function checks the layer for code generation compatibility.
The test `codegenPragmaDefinedInClassDef` checks that the layer definition contains the code generation pragma `%#codegen`. The test `layerPropertiesSupportCodegen` checks that the layer properties support code generation. The test `predictSupportsCodegen` checks that the outputs of `predict` are consistent in dimension and batch size.
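For example, the pragma belongs directly inside the class definition (the class name and the operation are illustrative):

```matlab
classdef myCodegenLayer < nnet.layer.Layer
    %#codegen  % declare that this layer is intended for code generation
    methods
        function Y = predict(layer,X)
            % The output has the same number of dimensions and the same
            % batch size as the input, as the code generation checks
            % require.
            Y = max(X,0);
        end
    end
end
```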
In addition, when generating code that uses third-party libraries:

- Code generation supports custom layers with 2-D image or feature input only.
- The inputs and outputs of the layer forward functions must have the same batch size.
- Nonscalar properties must be of type single, double, or character array.
- Scalar properties must be numeric or of type logical or string.
The `checkLayer` function does not check that functions used by the layer are compatible with code generation. To check that functions used by the custom layer also support code generation, first use the Code Generation Readiness app. For more information, see Check Code by Using the Code Generation Readiness Tool (MATLAB Coder).
Test Diagnostic | Description | Possible Solution |
---|---|---|
Specify '%#codegen' in the class definition of custom layer | The layer definition does not include the pragma `%#codegen` for code generation. | Add the `%#codegen` directive to the layer class definition to indicate that you intend to generate code for the layer. |
Nonscalar layer properties must be type single or double or character array for custom layer | The layer contains nonscalar properties of a type other than single, double, or character array. | Convert nonscalar properties to use a representation of type single, double, or character array. For example, convert a categorical array to an array of integers of type `double` representing each categorical value. |
Scalar layer properties must be numeric, logical, or string for custom layer | The layer contains scalar properties of a type other than numeric, logical, or string. | Convert scalar properties to use a numeric representation, or a representation of type logical or string. For example, convert a categorical scalar to an integer of type `double` representing the categorical value. |
For code generation, 'Y' must have the same number of dimensions as the layer input. | The number of dimensions of the output `Y` of `predict` does not match the number of dimensions of the layer inputs. | In the `predict` function, return the outputs with the same number of dimensions as the layer inputs. |
For code generation, 'Y' must have the same batch size as the layer input. | The batch size of the output `Y` of `predict` does not match the batch size of the layer inputs. | In the `predict` function, return the outputs with the same batch size as the layer inputs. |
See Also
Related Topics
- Define Custom Deep Learning Layers
- Define Custom Deep Learning Layer with Learnable Parameters
- Define Custom Deep Learning Layer with Multiple Inputs
- Define Custom Deep Learning Layer with Formatted Inputs
- Define Custom Recurrent Deep Learning Layer
- Define Custom Deep Learning Layer for Code Generation
- Define Nested Deep Learning Layer Using Network Composition