# How is it possible that gamultiobj gives a worse solution when MaxGenerations is raised?

Steffen Kuehl on 14 Nov 2016
Commented: Steffen Kuehl on 16 Nov 2016
Hi,
My multiobjective optimization model (gamultiobj) does not get continuously better when I raise MaxGenerations. Sometimes a solution is worse than one generated with the same parameters but a smaller value of MaxGenerations. Since gamultiobj is a variant of NSGA-II (I can't find anything more specific in the MATLAB documentation), shouldn't the algorithm's elitism ensure that good solutions found in early generations are kept until the end, so that it is impossible to get a worse solution when more generations are calculated?
I use the same starting/initial population for all my runs, and I created my own creation, crossover and mutation functions. Since the elitism is supposed to be located in the selection function, I don't think that creating my own functions should interfere with it.
I appreciate any ideas and help. Thanks, Steffen
And to clarify, here is some of my code.
These are the options for the algorithm.
options = optimoptions('gamultiobj', ...
    'UseParallel', true, ...
    'UseVectorized', false, ...
    'CreationFcn', @popFun, ...
    'CrossoverFcn', @crossoverBinary, ...
    'MutationFcn', @mutFun, ...
    'PopulationSize', InitPop, ...
    'MaxStallGenerations', MaxStallG, ...
    'MaxGenerations', MaxGamultiobj);
This is the function call. A and B are the linear inequality constraints.
[X,fval,exitflag] = gamultiobj(objFun,n,A,B,[],[],[],[],[],options);
This is my creation function.
function Population = popFun(GenomeLength,~,options)
%popFun  Custom creation function.
% Every individual must satisfy the linear constraints, so each one is
% generated by solving a random binary program with intlinprog.
global Zusatz
Population = zeros(options.PopulationSize,GenomeLength);
rng('default');
rng(Zusatz);    % fixed seed so the initial population is reproducible
intcon = 1:GenomeLength;
B = ones(85,1);
B(36,1) = -1;
lb = zeros(1,GenomeLength);
ub = ones(1,GenomeLength);
opts = optimoptions('intlinprog','IntegerTolerance',1e-06,'Display','off');
for a = 1:options.PopulationSize
    f = randn(GenomeLength,1);    % random objective gives a different feasible point each time
    % Note: A (the inequality constraint matrix) must be available in this
    % workspace, e.g. as a global variable.
    [x,fval,exitflag] = intlinprog(f,intcon,A,B,[],[],lb,ub,opts);
    Population(a,:) = x;
end
end
This is the crossover function.
function xoverKids = crossoverBinary(parents,options,GenomeLength,~,~,thisPopulation)
global Zusatz
rng('default');
rng(Zusatz);    % fixed seed so crossover is reproducible
% Extract information about the linear constraints, if any
linCon = options.LinearConstr;
nKids = length(parents)/2;
index = 1;
xoverKids = nan(nKids,GenomeLength);
for k = 1:nKids
    % Get the two parents from the population
    parent1 = thisPopulation(parents(index),:);
    index = index + 1;
    parent2 = thisPopulation(parents(index),:);
    index = index + 1;
    % Single-point crossover at a random position
    idx = randi(GenomeLength,1);
    xoverKids(k,1:idx) = parent1(1:idx);
    xoverKids(k,idx+1:end) = parent2(idx+1:end);
    % Ensure that the kid satisfies the linear inequality constraints
    flag = any(linCon.Aineq*(xoverKids(k,:)') > linCon.bineq,1);
    while flag    % constraints violated: try another crossover point
        idx = randi(GenomeLength,1);
        xoverKids(k,1:idx) = parent1(1:idx);
        xoverKids(k,idx+1:end) = parent2(idx+1:end);
        flag = any(linCon.Aineq*(xoverKids(k,:)') > linCon.bineq,1);
    end
end
end
And the Mutation Function.
function mutationChildren = mutFun(parents,options,GenomeLength, ...
    ~,~,~,thisPopulation)
global Zusatz
linCon = options.LinearConstr;
rng('default');
rng(Zusatz);    % fixed seed so mutation is reproducible
% Initialize the output
mutationChildren = nan(length(parents),GenomeLength);
for k = 1:length(parents)
    mut = thisPopulation(parents(k),:)';
    % Flip one randomly chosen bit
    idx = randi(GenomeLength,1);
    mutated = mut;
    if mutated(idx)==1
        mutated(idx) = 0;
    else
        mutated(idx) = 1;
    end
    % Check that the linear inequality constraints are satisfied
    flag = any(linCon.Aineq*mutated > linCon.bineq,1);
    while flag    % constraints violated: flip a different bit instead
        idx = randi(GenomeLength,1);
        mutated = mut;
        if mutated(idx)==1
            mutated(idx) = 0;
        else
            mutated(idx) = 1;
        end
        flag = any(linCon.Aineq*mutated > linCon.bineq,1);
    end
    mutationChildren(k,:) = mutated';
end
end
##### 5 Comments
Steffen Kuehl on 15 Nov 2016
Edited: Steffen Kuehl on 15 Nov 2016
Actually, the code runs without parallel processing. I accidentally set 'UseVectorized'=true instead of using serial evaluation. My bad.
But the problem has nothing to do with parallel processing: I get the same result with parallel processing turned on and off.


### Accepted Answer

Brendan Hamm on 15 Nov 2016
OK. So there is an additional change you make here that was not mentioned and that is affecting the output: you also change MaxStallGenerations, which lets one of the runs perform more iterations and produce "better" results. While those results may be better with respect to some of the objective functions, they are not necessarily better with respect to the others.
The basic idea is that termination occurs when the average change in the best objective evaluations between the current generation and the generation that occurred MaxStallGenerations ago is less than FunctionTolerance (1e-4 by default).
So, I have a few suggestions:
1. Keep MaxStallGenerations the same between different runs.
2. There is no need to reseed the rng in each function; in fact, I would discourage it. Instead, you can request the fourth output of gamultiobj, which contains the state of the random number generator at the start of the algorithm's run. See the link for information on this, and the sketch after this list.
3. Increase the population size. For a problem with this many variables I would consider something in the range of 500 individuals in the initial population; this may provide you with a large enough "elite" population to carry on to the next generation.
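A minimal sketch of how suggestions 1 and 2 fit together (objFun, n, A, B and the option variables are taken from the code above; the population size of 500 is just an assumed value):
options = optimoptions('gamultiobj', ...
    'PopulationSize', 500, ...              % suggestion 3: larger population (assumed value)
    'MaxStallGenerations', MaxStallG, ...   % suggestion 1: identical for every run
    'MaxGenerations', MaxGamultiobj);
% Request the fourth output; its rngstate field holds the state of the
% random number generator just before the algorithm started.
[X,fval,exitflag,output] = gamultiobj(objFun,n,A,B,[],[],[],[],[],options);
% To reproduce a run later, restore that state instead of reseeding
% inside the custom creation/crossover/mutation functions.
stream = RandStream.getGlobalStream;
stream.State = output.rngstate.State;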
##### 4 Comments
Steffen Kuehl on 16 Nov 2016
Thank you for the answers.
Brendan, I thought you were referring to the possibility of using more than one (i.e. 500) initial population, not the size of a single population.
Thank you for all the work and thought you put into your answers :)


### More Answers (2)

John D'Errico on 14 Nov 2016
This is a stochastic solver: it generates points using random methods. Moreover, ANY numerical optimization tool can only find an approximate solution, not an exact one.
So there is no presumption that one run of a stochastic optimizer will always give as good a solution as another call to the same optimizer. Yes, by allowing it more iterations you increase the chance that it will be able to improve, but that is only a better chance, not an assurance of success.
##### 1 Comment
Steffen Kuehl on 14 Nov 2016
Thank you for your answer, John. I set all parameters that depend on random numbers so that I get reproducible results; if I run the exact same calculation twice, I get exactly the same results. So shouldn't there be a better (or at least equally good) solution when MaxGenerations is raised, even though gamultiobj is a stochastic solver?


Walter Roberson on 14 Nov 2016
One aspect that is not always obvious is that a later configuration with a larger (less optimal) objective function value might have a smaller constraint violation. The magnitude of the constraint violation is compared as well as the objective function value.
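As a rough illustration of this point (x1 and x2 are hypothetical candidate rows, A and B the linear inequality constraints from the question), the size of the violation of A*x <= B can differ even when the objective values suggest the opposite ranking:
% Hypothetical comparison: a point with worse objective values can still be
% preferred if its constraint violation is smaller (or zero).
viol1 = max(A*x1' - B, 0);   % elementwise violation of A*x <= B for candidate x1
viol2 = max(A*x2' - B, 0);   % the same for candidate x2
fprintf('maximum violation: %g vs %g\n', max(viol1), max(viol2));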
