ANN and bvp5c merged

Views: 2 (last 30 days)
MINATI PATRA on 23 February 2025
Commented: MINATI PATRA on 23 February 2025
%% Clear workspace
clc; clear; close all;
%% Define BVP: Blasius Equation
blasiusODE = @(eta, F) [F(2); F(3); -0.5 * F(1) * F(3)];
bc = @(Fa, Fb) [Fa(1); Fa(2); Fb(2) - 1]; % Boundary Conditions
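% For reference, this encodes the Blasius problem f''' + (1/2)*f*f'' = 0
% with f(0) = 0, f'(0) = 0, and f'(eta) -> 1 as eta -> inf (the far-field
% condition is imposed at the right endpoint). State vector: F = [f; f'; f''].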
% Solve using bvp5c
eta = linspace(0, 5, 100);
solinit = bvpinit(eta, [0 0 0.3]); % Constant initial guess for [f, f', f'']
options = bvpset('RelTol', 1e-6, 'AbsTol', 1e-6);
sol = bvp5c(blasiusODE, bc, solinit, options);
% Extract solution
eta = sol.x';
F = sol.y(1, :)'; % f
dF = sol.y(2, :)'; % f'
ddF = sol.y(3, :)'; % f''
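% The bvp5c solution can also be evaluated on an arbitrary grid via deval, e.g.:
%   Ffine = deval(sol, linspace(0, 5, 400)); % rows: f, f', f''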
%% Check for NaN or Inf
if any(isnan(F)) || any(isinf(F))
error('Solution contains NaN or Inf values.');
end
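% Sanity check against the classical Blasius value f''(0) ~ 0.33206
% (the far-field condition sits at eta = 5 rather than infinity, so
% expect close but not exact agreement):
fprintf('f''''(0) = %.5f (classical value ~0.33206)\n', ddF(1));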
%% ANN Training Setup
x = linspace(0, 5, 100)'; % Inputs
rng(0); % Seed for reproducibility
IN = 1; HN = 100; ON = 1; % Layer sizes (1 input, 100 hidden, 1 output)
% He initialization for weights (scale sqrt(2/fan-in))
W1 = randn(HN, IN) * sqrt(2 / IN);
b1 = zeros(HN, 1);
W2 = randn(ON, HN) * sqrt(2 / HN);
b2 = zeros(ON, 1);
% Activation Functions
Af = @(x) sqrt(1 + x.^2) - 1; % Smooth activation (note: not Swish, which is x.*sigmoid(x))
dAf = @(x) x ./ sqrt(1 + x.^2); % Exact derivative of Af
% Training Parameters
iter = 50000; % Number of training iterations
momentum = 0.9;
VdW1 = zeros(size(W1)); VdW2 = zeros(size(W2));
Vdb1 = zeros(size(b1)); Vdb2 = zeros(size(b2));
% Training Loop
for epoch = 1:iter
% Forward Propagation
Z1 = W1 * x' + b1;
A1 = Af(Z1);
Z2 = W2 * A1 + b2;
y_pred = Z2'; % Network prediction (column vector)
y_exact_values = interp1(eta, F, x, 'spline'); % bvp5c solution interpolated onto x
% Compute error (difference between exact solution and predicted solution)
residual = y_exact_values - y_pred;
% Approximate dy/dx of the prediction
dydx = diff(y_pred) ./ diff(x); % First-order forward differences
dydx = [dydx(1); dydx]; % Pad to keep length(x) entries
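% Alternative: dydx = gradient(y_pred, x) gives second-order central
% differences in the interior and may be smoother than the one-sided diff.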
% Loss Function
lambda = 0.001; % Regularization weight
% Caution: residual(1) = F(1) - y_pred(1) = -y_pred(1) because f(0) = 0, so
% this penalty drives y_pred(1) toward -1 rather than enforcing f(0) = 0.
loss = mean((dydx + residual).^2) + lambda * (residual(1) - 1).^2;
% Backpropagation
% Caution: this is not the gradient of the loss above (which involves
% residual = y_exact_values - y_pred, not y_pred itself), so these
% updates do not descend the printed loss.
dZ2 = 2 * (dydx + y_pred);
dW2 = (dZ2' * A1') / length(x);
db2 = mean(dZ2, 1);
dA1 = W2' * dZ2';
dZ1 = dA1 .* dAf(Z1);
dW1 = (dZ1 * x) / length(x);
db1 = mean(dZ1, 2);
% Gradient clipping (guards against exploding, not vanishing, gradients)
clip_value = 5;
dW1 = max(min(dW1, clip_value), -clip_value);
dW2 = max(min(dW2, clip_value), -clip_value);
% Learning Rate Decay
lr = 0.005 / (1 + 0.0001 * epoch);
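% With this schedule, lr decays from about 5.0e-3 at the first epoch to
% 0.005 / (1 + 0.0001 * 50000) = 8.3e-4 at the final epoch.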
% Momentum-based Gradient Descent
VdW1 = momentum * VdW1 + lr * dW1;
W1 = W1 - VdW1;
Vdb1 = momentum * Vdb1 + lr * db1;
b1 = b1 - Vdb1;
VdW2 = momentum * VdW2 + lr * dW2;
W2 = W2 - VdW2;
Vdb2 = momentum * Vdb2 + lr * db2;
b2 = b2 - Vdb2;
% Print progress
if mod(epoch, 10000) == 0
fprintf('Epoch %d: Loss = %.6f\n', epoch, loss);
end
end
Epoch 10000: Loss = 2.567515
Epoch 20000: Loss = 2.569349
Epoch 30000: Loss = 2.571416
Epoch 40000: Loss = 2.572941
Epoch 50000: Loss = 2.574102
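Note that the printed loss is essentially flat and even creeps upward over 50,000 epochs, i.e. the optimizer is not reducing the objective; this is consistent with the loss/gradient mismatch flagged in the comments above.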
%% Plot Results
figure;
plot(x, y_pred, 'm--', 'LineWidth', 2);
hold on;
plot(eta, F, 'k-', 'LineWidth', 2);
% Customize axes
ax = gca;
ax.XColor = 'blue';
ax.YColor = 'blue';
ax.XAxis.FontSize = 12;
ax.YAxis.FontSize = 12;
ax.FontWeight = 'bold';
% Labels
xlabel('\bf\itx', 'Color', 'red', 'FontName', 'Times New Roman', 'FontSize', 16);
ylabel('\bf\itf(x)', 'Color', 'red', 'FontName', 'Times New Roman', 'FontSize', 16);
% Legend
legend({'\bf\color{magenta}ANN Solution', '\bf\color{black}BVP Solution'}, ...
'Location', 'northeast', 'Box', 'off');
title('Blasius Flow: ANN vs BVP Solution');
grid on;
%%% I want to choose an activation function so that the two plots match each other. If any other figures can be drawn with this model, please suggest them.
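A minimal sketch along those lines, assuming the variables from the script above (x, y_pred, eta, dF): the tanh pair below is one common smooth activation to try (a hypothetical substitution, not guaranteed to make the curves match), and the extra figure compares the velocity profile f'(eta) from both methods.
% Hypothetical drop-in replacement: tanh activation and its exact derivative
Af  = @(x) tanh(x);
dAf = @(x) 1 - tanh(x).^2;
% Additional figure: f'(eta) from the ANN (finite differences of y_pred)
% versus dF from bvp5c
dy_ann = gradient(y_pred, x); % second-order central differences
figure;
plot(x, dy_ann, 'm--', 'LineWidth', 2); hold on;
plot(eta, dF, 'k-', 'LineWidth', 2);
xlabel('\eta'); ylabel('f''(\eta)');
legend({'ANN f''', 'bvp5c f'''}, 'Location', 'southeast', 'Box', 'off');
title('Blasius Flow: velocity profile f''(\eta)');
grid on;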
4 Comments
Torsten on 23 February 2025
Is there any reference where this method is explained?


Answers (0)
