How can I find the values that minimize the objective function using the genetic algorithm?

I have written the following script, which computes the value of the objective function between experimental and model data. I want to minimize the objective function using the genetic algorithm, but it always gets stuck on the bounds that I set. Any suggestions on how to fix this?
clear all
close all
clc
% Define the experimental data
exp_t = [0, 5, 15, 30, 45, 60]; % time in mins
exp_y = [100, 0, 0, 0, 0, 0; % Cop yield
0, 32.3, 31.9, 31, 29.7, 28; % Bio-oil yield
0, 33.3, 33, 29.9, 23.5, 13.8; % Biochar yield
0, 24.8, 25.8, 26.8, 27.1, 26.7; % AP yield
0, 9.7, 9.2, 12.2, 19.6, 31.5]; % Gas yield
% Define the constant temperature
T = 300 + 273.15;
R = 8.314;
lb = [10 2];
ub = [100 25000];
guess = [8 10000 8 10000 8 10000 8 10000 8 10000 8 10000 8 10000];
% Preallocate k
k = zeros(1, numel(guess)/2);
for i = 1:2:numel(guess)
k((i+1)/2) = guess(i) * exp(-(guess(i+1)/(R*T)));
end
% Initialize concentrations
x0= [100, 0, 0, 0, 0]; % Initial concentrations
ode = @(t, x) [-(k(1) + k(2) + k(3) + k(4)) * x(1); % dMcop/dt
k(1) * x(1) - (k(5) + k(6)) * x(2); % dMbiooil/dt
k(2) * x(1); % dMBiochar/dt
k(3) * x(1) + k(5) * x(2) - k(7) * x(4); % dMaqueous/dt
k(4) * x(1) + k(6) * x(2) + k(7) * x(4)]; % dMgas/dt
[t, x] = ode45(ode, exp_t, x0); % solve numerically
% Calculate the squared differences
sq_diff = (exp_y - x').^2;
% Sum of squared differences
objective_function = sum(sq_diff(:));
disp(['Objective Function Value: ' num2str(objective_function)]);
% Define the objective function as a function of A and E
objfun = @(AE) objfun_internal(AE, exp_t, exp_y, T, R);
% Define the anonymous function that calls objfun
objfun2 = @(AE) objfun_internal([AE(1) AE(2) 8 10000 8 10000 8 10000 8 10000 8 10000 8 10000 8 10000], exp_t, exp_y, T, R);
% Define the internal objfun function
function result = objfun_internal(AE, exp_t, exp_y, T, R)
% Place your existing objfun logic here
k = zeros(1, numel(AE)/2);
for i = 1:2:numel(AE)
k((i+1)/2) = AE(i) * exp(-(AE(i+1)/(R*T)));
end
% Initialize concentrations
x0 = [100, 0, 0, 0, 0]; % Initial concentrations
ode = @(t, x) [-(k(1) + k(2) + k(3) + k(4)) * x(1); % dMcop/dt
k(1) * x(1) - (k(5) + k(6)) * x(2); % dMbiooil/dt
k(2) * x(1); % dMBiochar/dt
k(3) * x(1) + k(5) * x(2) - k(7) * x(4); % dMaqueous/dt
k(4) * x(1) + k(6) * x(2) + k(7) * x(4)]; % dMgas/dt
[t, x] = ode45(ode, exp_t, x0); % solve numerically
% Calculate the squared differences
sq_diff = (exp_y - x').^2;
% Sum of squared differences
result = sum(sq_diff(:));
end

6 comments

Ciao Georgios, I don't find the genetic algorithm solver ga() in your code. Can you include it?
Sure. Here is the full code:
clear all
close all
clc
% Define the experimental data
exp_t = [0, 5, 15, 30, 45, 60]; % time in mins
exp_y = [1, 0, 0, 0, 0, 0; % Cop yield
0, 32.3, 31.9, 31, 29.7, 28; % Bio-oil yield
0, 33.3, 33, 29.9, 23.5, 13.8; % Biochar yield
0, 24.8, 25.8, 26.8, 27.1, 26.7; % AP yield
0, 9.7, 9.2, 12.2, 19.6, 31.5]; % Gas yield
% Define the constant temperature
T = 300 + 273.15;
R = 8.314;
guess = [50 100 50 100 50 100 50 100 50 100 50 100 50 100];
% Define the optimization options
options = optimoptions('ga');
options.Display = 'iter'; % Display optimization information during iterations
options.MaxGenerations = 500; % Maximum number of generations
options.PopulationSize = 50; % Population size
options.ConstraintTolerance = 0; % Adjust constraint tolerance
% Lower bounds for A and E
lb = [1 5 1 5 1 5 1 5 1 5 1 5 1 5];
% Perform optimization using genetic algorithm
[optimal_params, min_objective] = ga(@(params) objective_function_wrapper(params, exp_t, exp_y, R, T), numel(guess), [], [], [], [], lb, [], [], options);
Single objective optimization: 14 variables
Options: CreationFcn: @gacreationuniform, CrossoverFcn: @crossoverscattered, SelectionFcn: @selectionstochunif, MutationFcn: @mutationadaptfeasible
[iteration display abridged: over 265 generations (12508 function evaluations) the best f(x) decreased only from 1.345e+04 to 1.341e+04]
ga stopped because the average change in the fitness value is less than options.FunctionTolerance.
% Extract optimized A and E values
optimized_A = optimal_params(1:2:end);
optimized_E = optimal_params(2:2:end);
% Display results
disp(['Optimal A values: ' num2str(optimized_A)]);
Optimal A values: 1.2753 90.7399 1.24527 1.04332 35.0202 31.5114 29.2699
disp(['Optimal E values: ' num2str(optimized_E)]);
Optimal E values: 56.2218 5 36.7583 52.1932 20.8904 30.4414 30.842
disp(['Minimum Objective Function Value: ' num2str(min_objective)]);
Minimum Objective Function Value: 13414.2631
% Function wrapper for the objective function
function obj = objective_function_wrapper(params, exp_t, exp_y, R, T)
k = zeros(1, numel(params)/2);
for i = 1:2:numel(params)
k((i+1)/2) = params(i) * exp(-(params(i+1)/(R*T)));
end
x0 = [1, 0, 0, 0, 0]; % Initial concentrations
ode = @(t, x) [-(k(1) + k(2) + k(3) + k(4)) * x(1);
k(1) * x(1) - (k(5) + k(6)) * x(2);
k(2) * x(1);
k(3) * x(1) + k(5) * x(2) - k(7) * x(4);
k(4) * x(1) + k(6) * x(2) + k(7) * x(4)];
[t, x] = ode45(ode, exp_t, x0);
sq_diff = (exp_y - x').^2;
obj = sum(sq_diff(:));
end
I see that you're attempting to determine the parameters for the biochemical system of differential equations. I've conducted my run, and the objective value remains relatively high at 13K. Is it possible that there are dynamics not accounted for in your suggested ordinary differential equations (ODEs)?
% Define the experimental data
exp_t = [0, 5, 15, 30, 45, 60]; % time in mins
exp_y = [1, 0, 0, 0, 0, 0; % Cop yield
0, 32.3, 31.9, 31, 29.7, 28; % Bio-oil yield
0, 33.3, 33, 29.9, 23.5, 13.8; % Biochar yield
0, 24.8, 25.8, 26.8, 27.1, 26.7; % AP yield
0, 9.7, 9.2, 12.2, 19.6, 31.5]; % Gas yield
% Define the constant temperature
T = 300 + 273.15;
R = 8.314;
guess = [50 100 50 100 50 100 50 100 50 100 50 100 50 100];
% Define the optimization options
options = optimoptions('ga');
% options.Display = 'iter'; % Display optimization information during iterations
options.MaxGenerations = 500; % Maximum number of generations
options.PopulationSize = 50; % Population size
options.ConstraintTolerance = 0; % Adjust constraint tolerance
% Lower bounds for A and E
lb = [1 5 1 5 1 5 1 5 1 5 1 5 1 5];
% Perform optimization using genetic algorithm
[optimal_params, min_objective] = ga(@(params) objective_function_wrapper(params, exp_t, exp_y, R, T), numel(guess), [], [], [], [], lb, [], [], options)
ga stopped because the average change in the fitness value is less than options.FunctionTolerance.
optimal_params = 1×14
1.1979 52.4322 84.3010 5.1761 1.0660 37.2499 1.0862 35.8966 27.5214 27.7001 11.1239 21.4973 1.2338 34.1919
min_objective = 1.3414e+04
% Extract optimized A and E values
optimized_A = optimal_params(1:2:end);
optimized_E = optimal_params(2:2:end);
% Display results
disp(['Optimal A values: ' num2str(optimized_A)]);
Optimal A values: 1.19789 84.301 1.06599 1.08618 27.5214 11.1239 1.23381
disp(['Optimal E values: ' num2str(optimized_E)]);
Optimal E values: 52.4322 5.17606 37.2499 35.8966 27.7001 21.4973 34.1919
disp(['Minimum Objective Function Value: ' num2str(min_objective)]);
Minimum Objective Function Value: 13414.3056
% Function wrapper for the objective function
function obj = objective_function_wrapper(params, exp_t, exp_y, R, T)
k = zeros(1, numel(params)/2);
for i = 1:2:numel(params)
k((i+1)/2) = params(i) * exp(-(params(i+1)/(R*T)));
end
x0 = [1, 0, 0, 0, 0]; % Initial concentrations
ode = @(t, x) [-(k(1) + k(2) + k(3) + k(4)) * x(1);
k(1) * x(1) - (k(5) + k(6)) * x(2);
k(2) * x(1);
k(3) * x(1) + k(5) * x(2) - k(7) * x(4);
k(4) * x(1) + k(6) * x(2) + k(7) * x(4)];
[t, x] = ode45(ode, exp_t, x0);
sq_diff = (exp_y - x').^2;
obj = sum(sq_diff(:));
end
This schematic represents the process that happens, assuming all reactions are first order:
The yield data that I have are these:
I have adjusted them and added an extra row for t = 0 as initial conditions. I am not exactly sure whether this reasoning was correct, though.
Your measurement data are in wt %. So there are two things you should ask yourself:
  • Usually, kinetic models work in mol %, not in wt %. Is this different in your case ?
  • Why do the sums of each row not add up to 100?
And a final remark:
My guess is that you will have to divide your experimental data by a factor of 100 (except for the first row). Your model works in fractions, not in percent.
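One minimal way to do that rescaling (a sketch, assuming the first row is left as the fraction 1 that the model's initial condition uses):

```matlab
% Convert the wt % measurements to mass fractions so they are on the
% same scale as the model, which starts from x0 = [1, 0, 0, 0, 0].
exp_y_frac = exp_y / 100;        % rows in wt % -> fractions
exp_y_frac(1, :) = exp_y(1, :);  % first row is already a fraction
```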
The yields are given in weight percentage; the sums should indeed be 100, but they are slightly less (99.9). I don't know if this causes a problem. I divided the data by 100 as you said; the objective value decreases significantly, but the resulting activation energies and pre-exponential factors are still wrong.


Answers (2)

Hassaan, 12 January 2024
  1. Adjust Bounds: Review the lower (lb) and upper (ub) bounds that you've set for the optimization variables (A and E). Make sure that the bounds are reasonable and wide enough to allow the optimization algorithm to explore the search space effectively. Tight bounds can cause the optimization to get stuck.
  2. Population Size and Generations: Experiment with the population size and the number of generations in the genetic algorithm. Increasing the population size and/or the number of generations can help the algorithm explore the search space more thoroughly.
  3. Mutation and Crossover Rates: Adjust the mutation rate and crossover rate parameters of the genetic algorithm. These parameters control the exploration-exploitation trade-off. Higher mutation rates can help escape local minima, while lower mutation rates can help convergence. Experiment with different values to find a balance.
  4. Objective Function Scaling: Consider scaling your objective function. Sometimes, optimization algorithms work better when the objective function values are within a certain range. Scaling can help in achieving this.
  5. Initial Guess: Experiment with different initial guesses for the optimization variables. The algorithm's convergence can depend on the starting point, and different initial points can lead to different local minima.
  6. Convergence Criteria: Check the convergence criteria of the genetic algorithm. Ensure that the algorithm is set to terminate properly when it converges or when a maximum number of function evaluations is reached.
  7. Parallel Computing: If you have access to multiple CPU cores, consider enabling parallel computing for the genetic algorithm. This can speed up the optimization process.
  8. Optimization Toolbox Functions: MATLAB's Optimization Toolbox provides various optimization functions that are tailored for different optimization problems. You might explore other optimization techniques such as fmincon, fminunc, or patternsearch in addition to the genetic algorithm.
  9. Output Verbosity: Increase the verbosity of the optimization output. This can provide insights into what's happening during optimization and help you identify potential issues.
  10. Plot the Optimization Progress: Visualize the optimization progress by plotting the objective function values over iterations or generations. This can help you understand the optimization dynamics.
  11. Advanced Techniques: If the optimization problem is complex and standard optimization methods do not work well, you might explore more advanced techniques like surrogate modeling or global optimization algorithms.
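Several of the points above (population size, generations, crossover/mutation balance, parallel evaluation, progress plotting) can be set in one `optimoptions` call. The values below are illustrative starting points, not recommendations:

```matlab
% Illustrative ga options touching population size, generations,
% crossover behaviour, parallel evaluation, and a progress plot.
options = optimoptions('ga', ...
    'PopulationSize', 200, ...      % larger population explores more widely
    'MaxGenerations', 1000, ...     % allow more generations before stopping
    'CrossoverFraction', 0.7, ...   % balance crossover vs. mutation
    'FunctionTolerance', 1e-8, ...  % stricter convergence criterion
    'UseParallel', true, ...        % evaluate the population in parallel
    'PlotFcn', @gaplotbestf);       % visualise the best fitness per generation
```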
I used some standardised code that I wrote for these sorts of problems, adapted to your data —
clear all
close all
clc
% Define the experimental data
exp_t = [0, 5, 15, 30, 45, 60]; % time in mins
exp_y = [1, 0, 0, 0, 0, 0; % Cop yield
0, 32.3, 31.9, 31, 29.7, 28; % Bio-oil yield
0, 33.3, 33, 29.9, 23.5, 13.8; % Biochar yield
0, 24.8, 25.8, 26.8, 27.1, 26.7; % AP yield
0, 9.7, 9.2, 12.2, 19.6, 31.5]; % Gas yield
t = exp_t.';
c = exp_y.';
% % Define the constant temperature
% T = 300 + 273.15;
% R = 8.314;
% lb = [10 2];
% ub = [100 25000];
% guess = [8 10000 8 10000 8 10000 8 10000 8 10000 8 10000 8 10000];
% % Preallocate k
% k = zeros(1, numel(guess)/2);
% for i = 1:2:numel(guess)
% k((i+1)/2) = guess(i) * exp(-(guess(i+1)/(R*T)));
% end
% % Initialize concentrations
% x0= [1, 0, 0, 0, 0]; % Initial concentrations
% ode = @(t, x) [-(k(1) + k(2) + k(3) + k(4)) * x(1); % dMcop/dt
% k(1) * x(1) - (k(5) + k(6)) * x(2); % dMbiooil/dt
% k(2) * x(1); % dMBiochar/dt
% k(3) * x(1) + k(5) * x(2) - k(7) * x(4); % dMaqueous/dt
% k(4) * x(1) + k(6) * x(2) + k(7) * x(4)]; % dMgas/dt
% [t, x] = ode45(ode, exp_t, x0); % solve numerically
% % Calculate the squared differences
% sq_diff = (exp_y - x').^2;
% % Sum of squared differences
% objective_function = sum(sq_diff(:));
% disp(['Objective Function Value: ' num2str(objective_function)]);
% % Define the objective function as a function of A and E
% objfun = @(AE) objfun_internal(AE, exp_t, exp_y, T, R);
% % Define the anonymous function that calls objfun
% objfun2 = @(AE) objfun_internal([AE(1) AE(2) 8 10000 8 10000 8 10000 8 10000 8 10000 8 10000 8 10000], exp_t, exp_y, T, R);
% % Define the internal objfun function
% function result = objfun_internal(AE, exp_t, exp_y, T, R)
% % Place your existing objfun logic here
% k = zeros(1, numel(AE)/2);
% for i = 1:2:numel(AE)
% k((i+1)/2) = AE(i) * exp(-(AE(i+1)/(R*T)));
% end
% % Initialize concentrations
% x0 = [1, 0, 0, 0, 0]; % Initial concentrations
% ode = @(t, x) [-(k(1) + k(2) + k(3) + k(4)) * x(1); % dMcop/dt
% k(1) * x(1) - (k(5) + k(6)) * x(2); % dMbiooil/dt
% k(2) * x(1); % dMBiochar/dt
% k(3) * x(1) + k(5) * x(2) - k(7) * x(4); % dMaqueous/dt
% k(4) * x(1) + k(6) * x(2) + k(7) * x(4)]; % dMgas/dt
% [t, x] = ode45(ode, exp_t, x0); % solve numerically
% % Calculate the squared differences
% sq_diff = (exp_y - x').^2;
% % Sum of squared differences
% result = sum(sq_diff(:));
% end
ftns = @(theta) norm(c-kinetics(theta,t));
PopSz = 500;
Parms = 12;
optsAns = optimoptions('ga', 'PopulationSize',PopSz, 'InitialPopulationMatrix',randi(1E+4,PopSz,Parms)*1E-3, 'MaxGenerations',5E3, 'FunctionTolerance',1E-10); % Options Structure For 'Answers' Problems
opts = optimoptions('ga', 'PopulationSize',PopSz, 'InitialPopulationMatrix',randi(1E+4,PopSz,Parms)*1E-3, 'MaxGenerations',5E3, 'FunctionTolerance',1E-10, 'PlotFcn',@gaplotbestf, 'PlotInterval',1);
optshf = optimoptions('ga', 'PopulationSize',PopSz, 'InitialPopulationMatrix',randi(1E+4,PopSz,Parms)*1E-3, 'MaxGenerations',5E3, 'FunctionTolerance',1E-10, 'HybridFcn',@fmincon, 'PlotFcn',@gaplotbestf, 'PlotInterval',1); % With 'HybridFcn'
t0 = clock;
fprintf('\nStart Time: %4d-%02d-%02d %02d:%02d:%07.4f\n', t0)
[theta,fval,exitflag,output,population,scores] = ga(ftns, Parms, [],[],[],[],zeros(Parms,1),Inf(Parms,1),[],[],opts);
t1 = clock;
fprintf('\nStop Time: %4d-%02d-%02d %02d:%02d:%07.4f\n', t1)
GA_Time = etime(t1,t0)
DT_GA_Time = datetime([0 0 0 0 0 GA_Time], 'Format','HH:mm:ss.SSS');
fprintf('\nElapsed Time: %23.15E\t\t%s\n\n', GA_Time, DT_GA_Time)
fprintf('Fitness value at convergence = %.4f\nGenerations \t\t\t\t = %d\n\n',fval,output.generations)
fprintf(1,'\tRate Constants:\n')
for k1 = 1:length(theta)
fprintf(1, '\t\tTheta(%2d) = %8.5f\n', k1, theta(k1))
end
tv = linspace(min(t), max(t));
Cfit = kinetics(theta, tv);
figure
hd = plot(t, c, 'p');
for k1 = 1:size(c,2)
CV(k1,:) = hd(k1).Color;
hd(k1).MarkerFaceColor = CV(k1,:);
end
hold on
hlp = plot(tv, Cfit);
for k1 = 1:size(c,2)
hlp(k1).Color = CV(k1,:);
end
hold off
grid
xlabel('Time')
ylabel('Concentration')
legend(hlp, compose('C_%d',1:size(c,2)), 'Location','N')
function C=kinetics(k,t)
c0 = k(8:12);
% c0=[1;0;0;0];
[T,Cv]=ode45(@DifEq,t,c0);
%
function dC=DifEq(t,x)
dC = [-(k(1) + k(2) + k(3) + k(4)) * x(1); % dMcop/dt
k(1) * x(1) - (k(5) + k(6)) * x(2); % dMbiooil/dt
k(2) * x(1); % dMBiochar/dt
k(3) * x(1) + k(5) * x(2) - k(7) * x(4); % dMaqueous/dt
k(4) * x(1) + k(6) * x(2) + k(7) * x(4)]; % dMgas/dt
end
C=Cv;
end
This requires more than the 55 seconds allowed here, so you will need to run it offline.
This code runs and gives a reasonable result; however, it does not give a good fit to the data, even over several runs. It may be necessary to review your system of differential equations to be sure that they correctly describe the problem you want to solve. (With other data and other problems, my code here generally gives quite good results.) Note that it also estimates the initial conditions of the differential equations, and reports those as the last 5 elements of the 12-element ‘theta’ parameter vector.
I do not completely understand what you are doing, so it will be necessary for you to add the calculations and such involving temperature to get the result you want.
I have this running in the background (so far for 20 minutes). When it converges, I will post its results.
.

7 comments

Thanks for your answer. I cannot exactly understand what your code is doing. I want to calculate the activation energies and the pre-exponential factors of the Arrhenius equation, which you completely commented out, and which would then be used to solve the equations. Furthermore, do you use the temperature value somewhere in this code? Last, I do not believe this problem is so complex that it should run for hours, since it is for a master's, and the professor hasn't mentioned it being so complex or requiring so much time to run.
The following picture shows a good representation of what I want my code to do:
My pleasure.
My objective was to get the ga part of the code working. My code uses ga rather than any genetic algorithm code I wrote.
The essence of it is to initialise the fitness function and the initial population matrix:
t = exp_t;
c = exp_y;
ftns = @(theta) norm(c-kinetics(theta,t));
PopSz = 500;
Parms = 12;
then call ga:
opts = optimoptions('ga', 'PopulationSize',PopSz, 'InitialPopulationMatrix',randi(1E+4,PopSz,Parms)*1E-3, 'MaxGenerations',5E3, 'FunctionTolerance',1E-10, 'PlotFcn',@gaplotbestf, 'PlotInterval',1);
[theta,fval,exitflag,output,population,scores] = ga(ftns, Parms, [],[],[],[],zeros(Parms,1),Inf(Parms,1),[],[],opts);
however first set up the objective function ‘kinetics’ using the integrated differential equations:
function C=kinetics(k,t)
c0 = k(8:12);
% c0=[1;0;0;0];
[T,Cv]=ode45(@DifEq,t,c0);
%
function dC=DifEq(t,x)
dC = [-(k(1) + k(2) + k(3) + k(4)) * x(1); % dMcop/dt
k(1) * x(1) - (k(5) + k(6)) * x(2); % dMbiooil/dt
k(2) * x(1); % dMBiochar/dt
k(3) * x(1) + k(5) * x(2) - k(7) * x(4); % dMaqueous/dt
k(4) * x(1) + k(6) * x(2) + k(7) * x(4)]; % dMgas/dt
end
C=Cv;
end
The rest of it just reports and plots the data.
I did not use any of the conversions because I do not understand what you are doing. If those need to be part of the objective function (‘kinetics’), then add them as appropriate to it so that it reports out the correct matrix as ‘C’. If you want to add temperature and anything else to ‘kinetics’ declare them as additional arguments:
function C=kinetics(theta,t,T)
...
end
Since ‘C’ (the output of ‘kinetics’) is what gets compared to your data, it needs to match as closely as possible to them. If that requires additional calculations, do them in ‘kinetics’.
The rest of the code does not need to be changed even if ‘kinetics’ is changed internally to add more calculations to the result.
After letting it run for nearly 40 minutes, it is still not converging on the correct solution. It stopped when it thought it had converged.
Thank you for your response.
So I changed my experimental data because they were wrong:
exp_y = [1 0 0 0 0;
0 0.299 0.361 0.21 0.13;
0 0.299 0.344 0.224 0.133;
0 0.295 0.289 0.239 0.177;
0 0.288 0.202 0.247 0.263;
0 0.276 0.082 0.248 0.394];
Each column is one product going down the rows, so for bio-oil, say, it would be the second column.
The thing is that I want to set upper bounds of 0.5 for the rate constants and upper bounds of 1 for x1, x2, x3, x4, x5 (concentration). Can I set different bounds for different parameters in your code?
I also wrote this simpler code, but the fit isn't good:
clear all
close all
clc
% Define the experimental data for different temperatures
t = [0, 5, 15, 30, 45, 60]; % time in mins
% Example: Create cell arrays for exp_y and T for three different temperatures
exp_y= [1 0 0 0 0;
0 0.299 0.361 0.21 0.13;
0 0.299 0.344 0.224 0.133;
0 0.295 0.289 0.239 0.177;
0 0.288 0.202 0.247 0.263;
0 0.276 0.082 0.248 0.394];
c1 = exp_y(:,1);
c1_0 = 1;
c2 = exp_y(:,2);
c2_0 = 0;
c3 = exp_y(:,3);
c3_0 = 0;
c4 = exp_y(:,4);
c4_0 = 0;
c5 = exp_y(:,5);
c5_0 = 0;
tin = t';
yin = [c1,c2,c3,c4,c5];
p0 = [0.027 0.0016 0.00023 0.0013 0.004 0.013 0.061 0.080 c1(1) c2(1) c3(1) c4(1) c5(1)]; % initial guess
lb = [0 0 0 0 0 0 0 0 0 0 0 0 0];
ub = [0.48 0.48 0.48 0.48 0.48 0.48 0.48 0.48 1 1 1 1 1];
%Lsqcurvefit
options = optimoptions('lsqcurvefit','StepTolerance',1e-100,'MaxFunctionEvaluations',10000,'MaxIterations',10000);
[p,Rsdnrm,Rsd,ExFlg,OptmInfo,Lmda,Jmat] = lsqcurvefit(@(p,time) diff1(p,time,yin),p0,tin,yin,lb,ub,options);
Local minimum possible. lsqcurvefit stopped because the final change in the sum of squares relative to its initial value is less than the value of the function tolerance.
FitData = diff1(p,t);
c1_out = FitData(:,1);
c2_out = FitData(:,2);
c3_out = FitData(:,3);
c4_out = FitData(:,4);
c5_out = FitData(:,5);
time_plot = [0,t];
c1_plot = [c1_0,c1'];
c2_plot = [c2_0,c2'];
c3_plot = [c3_0,c3'];
c4_plot = [c4_0,c4'];
c5_plot = [c5_0,c5'];
%Plot
c1_out_plot = [c1_0,c1_out'];
c2_out_plot = [c2_0,c2_out'];
c3_out_plot = [c3_0,c3_out'];
figure(2)
plot(time_plot,c1_plot,'bo')
grid on
hold on
plot(time_plot,c2_plot,'ro')
plot(time_plot,c3_plot, 'go')
plot(time_plot,c1_out_plot,'b')
plot(time_plot,c2_out_plot,'r')
plot(time_plot,c3_out_plot,'g')
hold off
xlabel('Time [min]')
ylabel('Concentration (wt.%)')
legend('c1','c2', 'c3')
SStotC1 = sum((c1 - mean(c1)).^2);
SSresC1 = sum((c1(:) - c1_out).^2);
R2C1 = 1 - SSresC1/SStotC1
R2C1 = 0.9971
SStotC2 = sum((c2 - mean(c2)).^2);
SSresC2 = sum((c2(:) - c2_out).^2);
R2C2 = 1 - SSresC2/SStotC2
R2C2 = 0.8499
SStotC3 = sum((c3 - mean(c3)).^2);
SSresC3 = sum((c3(:) - c3_out).^2);
R2C3 = 1 - SSresC3/SStotC3
R2C3 = 0.4955
SStotC4 = sum((c4 - mean(c4)).^2);
SSresC4 = sum((c4(:) - c4_out).^2);
R2C4 = 1 - SSresC4/SStotC4
R2C4 = 0.7801
SStotC5 = sum((c5 - mean(c5)).^2);
SSresC5 = sum((c5(:) - c5_out).^2);
R2C5 = 1 - SSresC5/SStotC5
R2C5 = 0.8643
p(1:8)
ans = 1×8
0.2666 0.3565 0.2408 0.0065 0.0000 0.0000 0.0000 0.0146
function C =diff1(p,time,~)
x0 = p(9:13);
[t,Cout] = ode45(@DifEq,time,x0);
function dC = DifEq(t,x)
xdot = zeros(5,1);
xdot(1) = -(p(1) + p(2) + p(3)) * x(1);
xdot(2) = p(2) * x(1) + p(4) * x(4) - (p(5) + p(6) + p(8)) * x(2);
xdot(3) = p(3) * x(1) + p(6) * x(2);
xdot(4) = p(1) * x(1) + p(5) * x(5) - (p(4) + p(7)) * x(4);
xdot(5) = p(8) * x(2) + p(7) * x(4);
dC = xdot;
end
C = Cout;
end
My pleasure!
I will re-run my code with your new data later.
‘... can i set different bounds for parameters in your code?’
Yes. See the ga documentation for details.
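For example, with the 12-parameter ‘theta’ used above (7 rate constants followed by 5 initial concentrations), one possible lb/ub pair matching the bounds you asked for would be (a sketch, assuming that parameter layout):

```matlab
% theta(1:7) are the rate constants, theta(8:12) the initial
% concentrations, matching the 12-parameter 'kinetics' function.
lb = zeros(12, 1);                  % everything non-negative
ub = [0.5*ones(7, 1); ones(5, 1)];  % k <= 0.5, c0 <= 1
[theta, fval] = ga(ftns, 12, [], [], [], [], lb, ub, [], [], opts);
```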
I will look at the rest of that (especially your latest comment) later.
I do not understand what you are doing with this (and similar sets of statements):
SStotC1 = sum((c1 - mean(c1)).^2);
SSresC1 = sum((c1(:) - c1_out).^2);
R2C1 = 1 - SSresC1/SStotC1
My ‘ftns’ function can be made to calculate those if that is what you want to do.
.
Thank you, I will be waiting for your response.
I've just put those in the script to see the fit for each dataset.
What I basically want to do is calculate the rate constants k1, k2, ... that make the model fit the experimental data better. The thing is that, based on references, these values should increase as I increase the temperature (other experimental data for 315 and 300 Celsius can be found below). With every method I couldn't find a result where they increase.
[1 0 0 0 0; %T=300
0 0.323 0.333 0.248 0.096;
0 0.319 0.33 0.258 0.093;
0 0.31 0.299 0.268 0.123;
0 0.297 0.235 0.271 0.197;
0 0.28 0.138 0.267 0.315],
[1 0 0 0 0 ; %T=315
0 0.32 0.33 0.23 0.12;
0 0.318 0.32 0.242 0.12;
0 0.311 0.278 0.254 0.157;
0 0.301 0.202 0.26 0.237;
0 0.287 0.093 0.258 0.362],
[1 0 0 0 0; %T=330
0 0.299 0.361 0.21 0.13;
0 0.299 0.344 0.224 0.133;
0 0.295 0.289 0.239 0.177;
0 0.288 0.202 0.247 0.263;
0 0.276 0.082 0.248 0.394]
As always, you have far too few data points for the number of coefficients to be estimated. Mathematics cannot substitute for a lack of data.
And therefore the temperature influence on your conversion is far too small to be seen in the parameter trends.
If you implicitly want to enforce that the rate constants increase with increasing temperature, fit all of the data simultaneously and use the Arrhenius law for the temperature-dependent rate expressions (although this again introduces the activation energies as new unknown parameters).
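A sketch of that simultaneous fit, pooling the three temperatures and parameterizing each rate constant by the Arrhenius law. The variable names exp_y300, exp_y315, exp_y330 are hypothetical stand-ins for the three yield tables above, and ‘kinetics’ is assumed to be modified to take just the seven rate constants with fixed initial conditions x0 = [1 0 0 0 0]:

```matlab
% Estimate A(1:7) and E(1:7) once, against all three datasets at once.
% AE(1:7) are the pre-exponential factors, AE(8:14) the activation energies.
R  = 8.314;
Ts = [300 315 330] + 273.15;
Ys = {exp_y300, exp_y315, exp_y330};
pooled = @(AE) ...
    norm(Ys{1} - kinetics(AE(1:7).*exp(-AE(8:14)/(R*Ts(1))), t)) + ...
    norm(Ys{2} - kinetics(AE(1:7).*exp(-AE(8:14)/(R*Ts(2))), t)) + ...
    norm(Ys{3} - kinetics(AE(1:7).*exp(-AE(8:14)/(R*Ts(3))), t));
[AE, fval] = ga(pooled, 14, [], [], [], [], zeros(14,1), [], [], opts);
```

Because every temperature shares the same A and E, the fitted rate constants are then guaranteed to increase with temperature by construction.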


Asked: 12 January 2024
Edited: 22 January 2024
