How to calculate the gradient of a neural network's output with respect to its parameters?
17 views (last 30 days)
I am new to the Deep Learning Toolbox. I am working on a reinforcement learning problem in which I need to calculate the derivative of the output of a neural network with respect to its parameters.
More specifically, let the I/O relation of the neural network be y = f(x; θ), where x is the input, y is the output, and θ contains the weights and biases of the neural network. For a specific input x₀, I am interested in calculating ∂y/∂θ evaluated at x = x₀. Any idea how I should go about this with the Deep Learning Toolbox?
0 comments
Accepted Answer
Jon Cherrie
27 April 2021
You can use dlgradient and dlfeval to take derivatives with respect to the input or to the parameters.
Here's an example:
I'm going to let this function play the role of the neural network. You should use dlnetwork to create a more complex network (see the sketch at the end of this answer).
function y = f(x,theta)
y = sum(sin(theta(1)*x+theta(2)));
end
Then I need to scope the computation of the function so that dlfeval knows where to apply auto-diff. I do that by defining a function that evaluates the network and computes the gradient of interest. In this case:
function [y, dy] = fun_and_deriv(x,theta)
y = f(x,theta);
dy = dlgradient(y,theta);
end
Then I can compute the derivative:
x = dlarray(0:10);
theta = dlarray([1 2]);
[y, dy] = dlfeval(@fun_and_deriv,x,theta)
which gives:
y =
1×1 dlarray
-0.9668
dy =
1×2 dlarray
0.6788 -1.1095
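If you are working with an actual dlnetwork rather than a hand-written function, the pattern is the same: evaluate the network inside a function called through dlfeval and pass net.Learnables to dlgradient. Below is a minimal sketch of that route; the layer sizes, input format, and the helper name modelGradient are arbitrary assumptions for illustration, not part of the original answer.
% Sketch: gradient of a dlnetwork output with respect to all learnable parameters.
% Layer sizes here are assumptions; use your own architecture.
layers = [
    featureInputLayer(1)
    fullyConnectedLayer(8)
    tanhLayer
    fullyConnectedLayer(1)];
net = dlnetwork(layers);
x0 = dlarray(0.5,"CB");    % one scalar observation, "CB" = channel-by-batch format
[y, grad] = dlfeval(@modelGradient, net, x0)
function [y, grad] = modelGradient(net, x)
    y = forward(net, x);                    % network output for this input
    y = sum(y, "all");                      % dlgradient needs a scalar
    grad = dlgradient(y, net.Learnables);   % gradients w.r.t. weights and biases
end
The result grad is a table with the same rows as net.Learnables, so each gradient lines up with the weight or bias it belongs to.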
0 comments
More Answers (0)
See Also
Categories
Find more on Custom Training Loops in Help Center and File Exchange