Training a PINN network using custom training loop and weighted loss function
Views: 9 (last 30 days)
Hello MATLAB Community,
I have a question regarding the use of minimax optimization for weighted loss minimization in the PINN framework. Suppose I try to solve Burgers' equation with a PINN, as outlined in this example: https://in.mathworks.com/help/deeplearning/ug/solve-partial-differential-equations-with-lbfgs-method-and-deep-learning.html. How can I modify that code to incorporate a weighted loss minimization framework using minimax optimization?
Any help from the community on this would be much appreciated!
A brief outline of the weighted loss minimization strategy is given in the attached images for reference.
Answers (1)
Yash
30 August 2023
Edited: Yash, 1 September 2023
To incorporate a custom loss function in the PINN, modify the model loss function as specified in this example: https://www.mathworks.com/help/deeplearning/ug/solve-partial-differential-equations-with-lbfgs-method-and-deep-learning.html#:~:text=function%20%5Bloss%2Cgradients%5D%20%3D%20modelLoss(net%2CX%2CT%2CX0%2CT0%2CU0).
Refer to the function "dlgradient" (https://www.mathworks.com/help/deeplearning/ref/dlarray.dlgradient.html) to compute the derivatives u_t, u_x, and u_xx using automatic differentiation.
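As a sketch, the derivatives can be obtained with "dlgradient" inside a function evaluated by "dlfeval", following the pattern of the referenced example (the function and variable names here are illustrative; net is assumed to be a dlnetwork and X, T traced dlarray inputs):

```matlab
function [Ut,Ux,Uxx] = burgersDerivatives(net,X,T)
    % Network prediction u(x,t) for the stacked input [x; t]
    U = forward(net,cat(1,X,T));

    % First-order derivatives du/dx and du/dt via automatic differentiation;
    % EnableHigherDerivatives is needed so Ux can be differentiated again
    gradientsU = dlgradient(sum(U,"all"),{X,T},EnableHigherDerivatives=true);
    Ux = gradientsU{1};
    Ut = gradientsU{2};

    % Second-order derivative d2u/dx2
    Uxx = dlgradient(sum(Ux,"all"),X);
end
```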
Define the objective function as given by the equation below to enforce Burgers' equation and ensure that the numerical solution satisfies the conservation of momentum and conservation of mass:
J = Wu*MSEu + W0*MSE0 + Wb*MSEb + Wf*MSEf
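One way to implement this, sketched below under the assumption that the loss weights are carried as a structure of learnable dlarray scalars, is a model loss that assembles the weighted objective and returns gradients for both the network parameters and the weights in a single "dlgradient" call (only the MSE0 and MSEf terms of the example are shown; add Wu*MSEu and Wb*MSEb analogously if those losses are tracked separately):

```matlab
function [J,gradTheta,gradW] = weightedModelLoss(net,weights,X,T,X0,T0,U0)
    % PDE residual term: enforce u_t + u*u_x - (0.01/pi)*u_xx = 0
    U = forward(net,cat(1,X,T));
    g = dlgradient(sum(U,"all"),{X,T},EnableHigherDerivatives=true);
    Ux = g{1};  Ut = g{2};
    Uxx = dlgradient(sum(Ux,"all"),X,EnableHigherDerivatives=true);
    f = Ut + U.*Ux - (0.01/pi).*Uxx;
    MSEf = mean(f.^2,"all");

    % Initial and boundary condition term: match targets U0 at points (X0,T0)
    U0Pred = forward(net,cat(1,X0,T0));
    MSE0 = mean((U0Pred - U0).^2,"all");

    % Weighted objective
    J = weights.W0*MSE0 + weights.Wf*MSEf;

    % Gradients with respect to the network learnables (minimized over)
    % and the loss weights (maximized over)
    [gradTheta,gradW] = dlgradient(J,net.Learnables,weights);
end
```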
Again, use the "dlgradient" function to calculate the gradients with respect to the learnable parameters ("theta", "Wu", "W0", "Wb", "Wf"). To update the learnable parameters as per the minimax optimization, employ the gradient descent/ascent procedure with the iterative updates given by:
theta_k+1 = theta_k - eta_k*gradient_J_theta
Wu_k+1 = Wu_k + eta_k_w*gradient_J_Wu
W0_k+1 = W0_k + eta_k_w*gradient_J_W0
Wb_k+1 = Wb_k + eta_k_w*gradient_J_Wb
Wf_k+1 = Wf_k + eta_k_w*gradient_J_Wf
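One iteration of these updates can be sketched with "dlupdate" as below (assumptions: weightedModelLoss is a user-written loss returning the weighted objective J together with its gradients with respect to net.Learnables and the weight structure; eta and etaW are scalar learning rates):

```matlab
% Evaluate the weighted loss and both gradient sets
[J,gradTheta,gradW] = dlfeval(@weightedModelLoss,net,weights,X,T,X0,T0,U0);

% Gradient descent on the network parameters theta
net.Learnables = dlupdate(@(p,g) p - eta*g, net.Learnables, gradTheta);

% Gradient ascent on the loss weights (the inner maximization of the minimax)
weights = dlupdate(@(w,g) w + etaW*g, weights, gradW);
```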
Alternatively, refer to the function "fminimax" (https://www.mathworks.com/help/optim/ug/fminimax.html) to solve the minimax optimization at hand.
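For the "fminimax" route, the function minimizes the maximum of a vector of objectives; in the PINN setting that vector would be the individual loss terms evaluated from a flattened parameter vector. A minimal, self-contained toy illustration of the call signature (not the PINN losses):

```matlab
% Minimize the maximum of two competing quadratic objectives
fun = @(x) [ (x(1)-1)^2 + x(2)^2 ;    % objective 1
             (x(1)+1)^2 + x(2)^2 ];   % objective 2
x0 = [0.5; 0.5];
x = fminimax(fun,x0);   % by symmetry the minimax point is x = [0; 0]
```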
Hope this helps!