
neuralODELayer

Neural ODE layer

Since R2023b

    Description

    A neural ODE layer outputs the solution of an ODE.

    Creation

    Description

    layer = neuralODELayer(net,tspan) creates a neural ODE layer and sets the Network and TimeInterval properties.


    layer = neuralODELayer(net,tspan,Name=Value) specifies additional properties using one or more name-value arguments.
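
    For example, a minimal sketch (the small feature network here is purely illustrative) that creates a layer using the adjoint gradient method:

    netODE = dlnetwork([
        featureInputLayer(3)
        fullyConnectedLayer(3)
        tanhLayer]);

    layer = neuralODELayer(netODE,[0 1],GradientMode="adjoint");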


    Properties


    Network

    Neural network characterizing the neural ODE function, specified as a dlnetwork object.

    If Network has one input, then predict(net,Y) defines the ODE system, where net is the network. If Network has two inputs, then predict(net,T,Y) defines the ODE system, where T is a time step repeated over the batch dimension.

    The size and format of the network inputs and outputs must match.

    When GradientMode is "adjoint", the network State property must be empty. To use a network with a nonempty State property, set GradientMode to "direct".
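
    The ODE system that the layer integrates is the forward pass of Network. As an illustrative sketch with a small, hypothetical single-input network:

    netODE = dlnetwork([
        featureInputLayer(2)
        fullyConnectedLayer(2)
        tanhLayer]);

    Y0 = dlarray(rand(2,8,"single"),"CB");  % hypothetical batch of initial states
    dYdt = predict(netODE,Y0);              % dY/dt at Y0, same size and format as Y0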

    TimeInterval

    Interval of integration, specified as a numeric vector with two or more elements. The elements in TimeInterval must be all increasing or all decreasing.

    The solver imposes the initial conditions given by the layer input at the initial time TimeInterval(1), then integrates the ODE function from TimeInterval(1) to TimeInterval(end).

    • If TimeInterval has two elements, [t0 tf], then the solver returns the solution evaluated at point tf.

    • If TimeInterval has more than two elements, [t0 t1 ... tf], then the solver returns the solution evaluated at the given points [t1 ... tf]. The solver does not step precisely to each point specified in TimeInterval. Instead, the solver uses its own internal steps to compute the solution, then evaluates the solution at the points specified in TimeInterval. The solutions produced at the specified points are of the same order of accuracy as the solutions computed at each internal step.

      Specifying several intermediate points has little effect on the efficiency of computation, but for large systems it can negatively affect memory management.
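
    A minimal sketch contrasting the two cases (netODE denotes an ODE network such as the one in the examples below):

    layerFinal = neuralODELayer(netODE,[0 1]);            % output is the solution at t = 1 only
    layerGrid  = neuralODELayer(netODE,linspace(0,1,5));  % output is the solution at t = 0.25, 0.5, 0.75, and 1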

    GradientMode

    Method to compute gradients with respect to the initial conditions and parameters when using the dlgradient function, specified as one of these values:

    • "direct" — Compute gradients by backpropagating through the operations undertaken by the numerical solver. This option best suits large mini-batch sizes or when TimeInterval contains many values.

    • "adjoint" — Compute gradients by solving the associated adjoint ODE system. This option best suits small mini-batch sizes or when TimeInterval contains a small number of values.

    When GradientMode is "adjoint", the network State property must be empty. To use a network with a nonempty State property, set GradientMode to "direct".

    The dlaccelerate function does not support accelerating networks that contain NeuralODELayer objects when the GradientMode option is "direct". To accelerate networks that contain NeuralODELayer objects, set the GradientMode option to "adjoint".

    Warning

    When GradientMode is "adjoint", all layers in the network must support acceleration. Otherwise, the software can return unexpected results.

    When GradientMode is "adjoint", the software traces the ODE function input to determine the computation graph used for automatic differentiation. This tracing process can take some time and can end up recomputing the same trace. By optimizing, caching, and reusing the traces, the software can speed up the gradient computation.

    For more information on deep learning function acceleration, see Deep Learning Function Acceleration for Custom Training Loops.

    The NeuralODELayer object stores this property as a character vector.
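
    As a brief sketch (netODE and tspan stand for an ODE network and interval of integration as elsewhere on this page), the gradient method is fixed when you create the layer:

    layerDirect  = neuralODELayer(netODE,tspan,GradientMode="direct");   % backpropagate through the solver steps
    layerAdjoint = neuralODELayer(netODE,tspan,GradientMode="adjoint");  % solve the adjoint ODE system for gradients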

    RelativeTolerance

    Relative error tolerance, specified as a positive scalar. The relative tolerance applies to all components of the solution.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

    AbsoluteTolerance

    Absolute error tolerance, specified as a positive scalar. The absolute tolerance applies to all components of the solution.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
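
    For example, a brief sketch tightening both tolerances (netODE and tspan stand for an ODE network and interval of integration as elsewhere on this page):

    layer = neuralODELayer(netODE,tspan, ...
        RelativeTolerance=1e-4,AbsoluteTolerance=1e-7);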

    Examples


    Create a neural ODE layer. Specify an ODE network containing a convolution layer followed by a tanh layer. Specify a time interval of [0, 1].

    inputSize = [14 14 8];
    
    layersODE = [
        imageInputLayer(inputSize)
        convolution2dLayer(3,8,Padding="same")
        tanhLayer];
    
    netODE = dlnetwork(layersODE);
    
    tspan = [0 1];
    layer = neuralODELayer(netODE,tspan)
    layer = 
      NeuralODELayer with properties:
    
                     Name: ''
             TimeInterval: [0 1]
             GradientMode: 'direct'
        RelativeTolerance: 1.0000e-03
        AbsoluteTolerance: 1.0000e-06
    
       Learnable Parameters
                  Network: [1x1 dlnetwork]
    
       State Parameters
        No properties.
    
    Use properties method to see a list of all properties.
    
    

    Create a neural network containing a neural ODE layer.

    layers = [
        imageInputLayer([28 28 1])
        convolution2dLayer([3 3],8,Padding="same",Stride=2)
        reluLayer
        neuralODELayer(netODE,tspan)
        fullyConnectedLayer(10)
        softmaxLayer];
    
    net = dlnetwork(layers)
    net = 
      dlnetwork with properties:
    
             Layers: [6x1 nnet.cnn.layer.Layer]
        Connections: [5x2 table]
         Learnables: [6x3 table]
              State: [0x3 table]
         InputNames: {'imageinput'}
        OutputNames: {'softmax'}
        Initialized: 1
    
      View summary with summary.
    
    

    Tips

    • To apply the neural ODE operation in deep learning models defined as functions or in custom layer functions, use dlode45.
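
      A minimal sketch of that functional form, assuming a hypothetical model function whose learnable parameters are a structure of dlarray objects with Weights and Bias fields:

    function Y = model(parameters,Y0)
        % Y0 is an unformatted C-by-B dlarray of initial states (hypothetical layout).
        % The ODE right-hand side is parameterized by the learnable parameters;
        % Weights is C-by-C so that the output has the same size as y.
        odefun = @(t,y,theta) tanh(theta.Weights*y + theta.Bias);
        Y = dlode45(odefun,[0 1],Y0,parameters,DataFormat="CB");
    end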


    Version History

    Introduced in R2023b