
steepest descent algorithm in MATLAB

98 views (last 30 days)
HAT on 30 Mar 2021
Answered: HAT on 12 Mar 2023
I would like to solve the following constrained minimization problem:
min f(x1,x2) = x1.^2 + x1.*x2 + 3*x2.^2;
subject to: x1,x2 in [3,9]
using the steepest descent method.
In the case of unconstrained nonlinear optimization, we can directly apply the following MATLAB code. But I don't have any idea how to handle the constrained case with this method. I was wondering if I could get some help?
Thanks.
function [xopt,fopt,niter,gnorm,dx] = grad_descent(varargin)
% grad_descent.m demonstrates how the gradient descent method can be used
% to solve a simple unconstrained optimization problem. Taking large step
% sizes can lead to algorithm instability. The variable alpha below
% specifies the fixed step size. Increasing alpha above 0.32 results in
% instability of the algorithm. An alternative approach would involve a
% variable step size determined through line search.
%
% This example was used originally for an optimization demonstration in ME
% 149, Engineering System Design Optimization, a graduate course taught at
% Tufts University in the Mechanical Engineering Department. A
% corresponding video is available at:
%
% http://www.youtube.com/watch?v=cY1YGQQbrpQ
%
% Author: James T. Allison, Assistant Professor, University of Illinois at
% Urbana-Champaign
% Date: 3/4/12
if nargin==0
    % define starting point
    x0 = [3 3]';
elseif nargin==1
    % if a single input argument is provided, it is a user-defined starting
    % point.
    x0 = varargin{1};
else
    error('Incorrect number of input arguments.')
end
% termination tolerance
tol = 1e-6;
% maximum number of allowed iterations
maxiter = 1000;
% minimum allowed perturbation
dxmin = 1e-6;
% step size ( 0.33 causes instability, 0.2 quite accurate)
alpha = 0.1;
% initialize gradient norm, optimization vector, iteration counter, perturbation
gnorm = inf; x = x0; niter = 0; dx = inf;
% define the objective function:
f = @(x1,x2) x1.^2 + x1.*x2 + 3*x2.^2;
% plot objective function contours for visualization:
figure(1); clf; ezcontour(f,[-5 5 -5 5]); axis equal; hold on
% redefine objective function syntax for use with optimization:
f2 = @(x) f(x(1),x(2));
% gradient descent algorithm:
while gnorm >= tol && niter <= maxiter && dx >= dxmin
    % calculate gradient:
    g = grad(x);
    gnorm = norm(g);
    % take step:
    xnew = x - alpha*g;
    % check step (any(...) is needed because xnew is a vector)
    if any(~isfinite(xnew))
        disp(['Number of iterations: ' num2str(niter)])
        error('x is inf or NaN')
    end
    % plot current point
    plot([x(1) xnew(1)],[x(2) xnew(2)],'ko-')
    refresh
    % update termination metrics
    niter = niter + 1;
    dx = norm(xnew-x);
    x = xnew;
end
xopt = x;
fopt = f2(xopt);
niter = niter - 1;
% define the gradient of the objective
function g = grad(x)
g = [2*x(1) + x(2);
     x(1) + 6*x(2)];
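For reference, the function can be run with no arguments (it then uses the built-in starting point [3 3]') or with a user-supplied starting point; the point below is just an illustrative choice:
[xopt, fopt, niter, gnorm, dx] = grad_descent([1 -1]');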
1 Comment
KUDZAISHE CHAMATUMBA on 25 Oct 2022
Moved: Matt J on 25 Oct 2022
When I actually try to run the code it gives me an error; it doesn't run. I also think that when code gets this long it tends to accumulate a lot of bugs. Is there any way we can simplify it and keep it neat, clean, and short?


Accepted Answer

HAT on 12 Mar 2023
clear; clc
% Define the interval constraints
bounds = [3, 9; 3, 9];
% Solve the optimization problem
x_opt = steepest_descent(@obj_func, @grad_func, bounds, 1000, 0.1, 1e-6);
% Print the optimal solution
disp(['Optimal solution: [', num2str(x_opt(1)), ', ', num2str(x_opt(2)), ']']);
Optimal solution: [3, 3]
% Define the steepest descent method with projection onto the box constraints.
% Note: func is accepted for interface completeness, but the fixed-step
% update below only needs the gradient.
function x = steepest_descent(func, grad, bounds, max_iter, alpha, tol)
x0 = zeros(size(bounds, 1), 1);   % start at the origin; the first projection moves it into the box
for i = 1:max_iter
    x_prev = x0;
    grad_prev = grad(x_prev);
    x0 = x_prev - alpha * grad_prev;
    % Project onto the feasible set by clamping each coordinate to its bounds
    for j = 1:length(x0)
        x0(j) = max(bounds(j, 1), min(x0(j), bounds(j, 2)));
    end
    if norm(x0 - x_prev) < tol
        break;
    end
end
x = x0;
end
% Define the objective function to minimize
function f = obj_func(x)
f = x(1).^2 + x(1).*x(2) + 3*x(2).^2;
end
% Define the gradient of the objective function
% (returned as a column vector, matching the column iterate x0)
function g = grad_func(x)
g = [2*x(1)+x(2); x(1)+6*x(2)];
end
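As a cross-check (assuming the Optimization Toolbox is available), the same box-constrained problem can be handed to fmincon with lower and upper bounds, which should likewise return [3, 3]:
x_check = fmincon(@obj_func, [6; 6], [], [], [], [], [3; 3], [9; 9]);
(In a script, this call would go with the other top-level statements, above the local function definitions.)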

More Answers (1)

Matt J on 30 Mar 2021
Edited: Matt J on 30 Mar 2021
One way would be to transform the problem into an unconstrained one via the change of variables,
x1=6+3*sin(z1);
x2=6+3*sin(z2);
Then, you could apply the unconstrained steepest descent method to the modified problem.
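A minimal sketch of this idea (the step size, iteration cap, and tolerance below are illustrative choices, not part of the original answer): since x = 6 + 3*sin(z) maps every z into [3, 9], plain gradient descent can run unconstrained in z, with the chain rule supplying the factor dx/dz = 3*cos(z).
z = [0; 0];                              % start at x = [6; 6], the center of the box
alpha = 0.005;                           % fixed step size (assumed)
for k = 1:5000
    x  = 6 + 3*sin(z);                   % map back to x-space
    gx = [2*x(1) + x(2); x(1) + 6*x(2)]; % gradient of f with respect to x
    gz = 3*cos(z) .* gx;                 % chain rule: gradient with respect to z
    znew = z - alpha*gz;
    if norm(znew - z) < 1e-10
        z = znew;
        break
    end
    z = znew;
end
x_opt = 6 + 3*sin(z)                     % approaches [3; 3] for this problem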
2 Comments
Matt J on 1 Apr 2021
In
function [x,fval,niter,gnorm,dx] = grad_descent(varargin)
the varargin are never used. Also, I don't see where you transform your cost function and gradient.
Matt J on 1 Apr 2021
Well, your code is long and involved, so it's hard for me to know what precisely needs to be fixed. For starters, I think you should get rid of all the global variables -- they are making the code hard to read and probably introducing bugs. Also, your gradient descent engine still looks like it searches in the space of x. After you make the transformation of variables,
x1=6+3*sin(z1);
x2=6+3*sin(z2);
you should be searching in the space of z, because it is with respect to z that the objective is unconstrained. That means, in particular, that your cost and gradient evaluations should be made with respect to z.
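Concretely, this amounts to wrapping the original cost and gradient in the sine mapping before handing them to the descent loop (a sketch; the handle names are illustrative):
f_x = @(x) x(1).^2 + x(1).*x(2) + 3*x(2).^2;   % original cost in x
g_x = @(x) [2*x(1) + x(2); x(1) + 6*x(2)];     % original gradient in x, as a column
x_of_z = @(z) 6 + 3*sin(z);                    % maps any z into the box [3, 9]
f_z = @(z) f_x(x_of_z(z));                     % cost with respect to z
g_z = @(z) 3*cos(z) .* g_x(x_of_z(z));         % chain-rule gradient with respect to z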

