Updating constraints in Fmincon: clarification

Views: 2 (last 30 days)
Dat Tran, 17 Feb 2016
Commented: Dat Tran, 18 Feb 2016
Dear all,
Could you please help me with this? I want to update pr, Aeq, and beq at every iteration: on each step, fmincon gets a new Aeq and beq, and all of the Aeq/beq pairs are independent of one another. For example, I have two Aeq matrices and two beq vectors (see the attached file), and I want to run the code with N = 2. On the 1st iteration, fmincon solves for p using Aeq1, beq1, and pr. On the 2nd iteration, fmincon solves for a new p using Aeq2, beq2, and pr = p (the p solved on the 1st iteration). Thanks a lot for your help!
Dat
function [p, fval] = MC_NT_try33(p0, Aeq, beq, N, opts)
if nargin < 5
    opts = optimoptions('fmincon', 'Algorithm', 'interior-point', ...
        'GradObj', 'on', 'DerivativeCheck', 'on');
end
M = length(p0);
p = nan(N, M);
fval = nan(N, 1);
lb = zeros(1, 64);
ub = ones(1, 64);
pr1 = [0.2 0.442 0.0001 0.0001 0.343 0.0001 1.000 1.000 ...
       0.0001 0.536 0.0001 0.0001 0.455 0.021 0.0001 0.0001];
pr = horzcat(pr1, (1/3) * ones(1, 64-16));
p0 = p0(:);
pr = pr(:);
for i = 1:N
    [pr, fval(i)] = fmincon(@(p) fun(p, pr), p0, [], [], ...
        Aeq, beq, lb, ub, [], opts);
    p(i,:) = pr;
end
end

function [f, gradf] = fun(p, pr)
% objective function
f = sum( p .* log(p) - p .* log(pr) );
% gradient of objective function
if nargout > 1
    gradf = log(p) + 1 - log(pr);
end
end
2 Comments
Matt J, 17 Feb 2016
The question seems to have changed very little since the last 2 times you posted it. What exactly is new here, if I may ask?
Dat Tran, 17 Feb 2016
On every step, a new Aeq and beq have to be passed in, and the Aeq/beq pairs are independent of one another. For example, I have two Aeq matrices and two beq vectors (see the attached file), and I want to run the code with N = 2. On the 1st iteration, fmincon solves for p using Aeq1, beq1, and pr. On the 2nd iteration, fmincon solves for a new p using Aeq2, beq2, and pr = p (the p solved on the 1st iteration). Thanks a lot for your help!


Answers (1)

Walter Roberson, 17 Feb 2016
function [p, fval] = MC_NT_try33(p0, Aeq, beq, N, opts)
if nargin < 5
    opts = optimoptions('fmincon', 'Algorithm', 'interior-point', ...
        'GradObj', 'on', 'DerivativeCheck', 'on');
end
M = length(p0);
p = nan(N, M);
fval = nan(N, 1);
lb = zeros(1, 64);
ub = ones(1, 64);
pr1 = [0.2 0.442 0.0001 0.0001 0.343 0.0001 1.000 1.000 ...
       0.0001 0.536 0.0001 0.0001 0.455 0.021 0.0001 0.0001];
pr = horzcat(pr1, (1/3) * ones(1, 64-16));
p0 = p0(:);
pr = pr(:);
for i = 1:N
    [pr, fval(i)] = fmincon(@(p) fun(p, pr), p0, [], [], ...
        Aeq, beq, lb, ub, [], opts);
    p(i,:) = pr;
    % now update your pr, Aeq, beq in some way
    pr  = pr  + randn(size(pr))  / 1000;
    Aeq = Aeq + randn(size(Aeq)) / 10000;
    beq = beq + randn(size(beq)) / 10000;
    % these changed values will be used in the next iteration of fmincon
end
end

function [f, gradf] = fun(p, pr)
% objective function
f = sum( p .* log(p) - p .* log(pr) );
% gradient of objective function
if nargout > 1
    gradf = log(p) + 1 - log(pr);
end
end
If adding random values to pr, Aeq, beq was not what you had in mind for "updating" pr, Aeq and beq for every iteration, then you should have been more specific.
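As an aside, the objective f = sum( p .* log(p) - p .* log(pr) ) is the (unnormalized) Kullback-Leibler divergence of p from pr, and its analytic gradient log(p) + 1 - log(pr) can be sanity-checked against forward finite differences. A minimal sketch (random test vectors, tolerance on the order of the step size h):

```matlab
% Sketch: finite-difference check of the analytic gradient used above.
p  = rand(8,1) * 0.9 + 0.05;   % keep entries away from 0 (log is undefined at 0)
pr = rand(8,1) * 0.9 + 0.05;
f = sum( p .* log(p) - p .* log(pr) );   % objective value at p
g = log(p) + 1 - log(pr);                % analytic gradient at p
h = 1e-6;
g_fd = zeros(size(p));
for k = 1:numel(p)
    e = zeros(size(p)); e(k) = h;        % perturb one coordinate at a time
    g_fd(k) = (sum((p+e).*log(p+e) - (p+e).*log(pr)) - f) / h;
end
max(abs(g - g_fd))                       % should be small, on the order of h
```

This is essentially what the 'DerivativeCheck' option already does inside fmincon, but an external check is handy when debugging the objective in isolation.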
3 Comments
Walter Roberson, 18 Feb 2016
We went over this before. Store the individual possibilities in cell arrays. Pass them in to the routine. Index them in the fmincon call.
[pr, fval(i)] = fmincon(@(p) fun(p, pr), p0, [], [], ...
    Aeq{i}, beq{i}, lb, ub, [], opts);
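Putting that together, a minimal sketch of the whole loop with per-iteration constraints in cell arrays (the names Aeq_all and beq_all are illustrative, standing for e.g. {Aeq1, Aeq2} and {beq1, beq2}; pr starts from the initial reference vector):

```matlab
% Sketch: one fmincon solve per constraint pair; each solution becomes
% the reference vector pr for the next iteration, as described in the question.
function [p, fval] = solve_sequence(p0, Aeq_all, beq_all, pr, opts)
N = numel(Aeq_all);            % one iteration per constraint pair
p = nan(N, numel(p0));
fval = nan(N, 1);
lb = zeros(size(p0));
ub = ones(size(p0));
for i = 1:N
    % i-th equality constraints; pr is the previous iteration's solution
    [pr, fval(i)] = fmincon(@(x) fun(x, pr), p0, [], [], ...
        Aeq_all{i}, beq_all{i}, lb, ub, [], opts);
    p(i,:) = pr;
end
end
```

With N = 2, the first call uses Aeq_all{1}, beq_all{1} with the initial pr, and the second call uses Aeq_all{2}, beq_all{2} with pr set to the first solution, which matches the scheme described in the question.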
Dat Tran, 18 Feb 2016
Dear Roberson,
Thanks so much for helping me with this! I've just understood your instruction :) Best, Dat

