Need help with curve fitting

86 views (last 30 days)
tuhin on 9 Nov 2024
Edited: Image Analyst on 10 Nov 2024
% Given data
P = [0.1, 0.22, 0.4, 0.65, 1.36, 1.95, 2.63, 7.8, 11.3];
k = [1.11, 1.17, 1.3, 1.38, 1.45, 1.76, 2.28, 2.82, 3.17];
% Select only the first 6 data points
P_fit_data = P(1:6);
k_fit_data = k(1:6);
% Ensure k values are positive for log transformation
if any(k_fit_data <= 0)
error('All k values must be positive for logarithmic transformation.');
end
% Define the nonlinear model for fitting: ln(k) = -mu * ln(Pc - P)
model = @(params, P) log(params(1) - P).^(-params(2));
% Initial guess for parameters: Pc and mu
initial_guess = [max(P_fit_data) + 1, 1]; % Ensure initial guess for Pc is greater than max(P)
% Define bounds for parameters: Pc must be greater than max(P) and mu positive
lb = [max(P_fit_data) + 0.1, 0]; % Lower bounds: Pc > max(P), mu > 0
ub = [10, 5]; % Upper bounds for reasonable parameter range
% Small offset to avoid log of zero or negative numbers
offset = 1e-10;
% Custom cost function for lsqnonlin: residuals between model and data
cost_function = @(params) sum((log(params(1) - P_fit_data + offset).^(-params(2)) - k_fit_data).^2);
% Options for lsqnonlin to control optimization
options = optimset('Display', 'iter', 'TolFun', 1e-6, 'TolX', 1e-6, 'MaxIter', 1000);
% Perform the fitting using lsqnonlin
[params_opt, resnorm, residual, exitflag] = lsqnonlin(cost_function, initial_guess, lb, ub, options);
Warning: Trust-region-reflective algorithm requires at least as many equations as variables; using Levenberg-Marquardt algorithm instead.
                                         First-order              Norm of
 Iteration  Func-count     Resnorm       optimality    Lambda     step
     0           3       9.99991e+39      2.18e-46      0.01
Initial point is a local minimum. Optimization completed because the size of the gradient at the initial point is less than 1e-4 times the value of the function tolerance.
% Extract optimized parameters
Pc_opt = params_opt(1);
mu_opt = params_opt(2);
% Ensure Pc is greater than the maximum value of P to avoid non-positive values of (Pc - P)
if Pc_opt <= max(P_fit_data)
Pc_opt = max(P_fit_data) + 1e-5; % Adjust Pc to be slightly larger than max(P)
end
% Generate fitted values using the optimized parameters
P_fit = linspace(min(P_fit_data), max(P_fit_data), 100); % Create a finer grid for smooth curve
k_fit = log(Pc_opt - P_fit + offset).^(-mu_opt);
% Log-Log transformation for fitting
Pc_minus_P = Pc_opt - P_fit_data;
if any(Pc_minus_P <= 0)
error('Some values of (Pc - P) are non-positive, leading to invalid log transformation.');
end
ln_P_diff = log(Pc_minus_P + offset); % ln(Pc - P) with an offset
ln_k = log(k_fit_data);
% Perform linear fitting in log-log scale
coeffs = polyfit(ln_P_diff, ln_k, 1); % Fit ln(k) = -mu * ln(Pc - P)
mu_fit = -coeffs(1); % Extract mu from slope
intercept = coeffs(2);
% Generate fitted values for log-log plot
ln_k_fit = polyval(coeffs, log(Pc_minus_P + offset));
% Print the raw and fitted data in log-log scale
fprintf('Raw Data (log-log scale):\n');
Raw Data (log-log scale):
for i = 1:length(P_fit_data)
fprintf('ln(Pc - P) = %.4f, ln(k) = %.4f\n', ln_P_diff(i), ln_k(i));
end
ln(Pc - P) = 1.0473, ln(k) = 0.1044
ln(Pc - P) = 1.0043, ln(k) = 0.1570
ln(Pc - P) = 0.9361, ln(k) = 0.2624
ln(Pc - P) = 0.8329, ln(k) = 0.3221
ln(Pc - P) = 0.4637, ln(k) = 0.3716
ln(Pc - P) = 0.0000, ln(k) = 0.5653
fprintf('\nFitted Data (log-log scale):\n');
Fitted Data (log-log scale):
for i = 1:length(P_fit_data)
fprintf('ln(Pc - P) = %.4f, ln(k) = %.4f\n', log(Pc_opt - P_fit_data(i) + offset), ln_k_fit(i));
end
ln(Pc - P) = 1.0473, ln(k) = 0.1689
ln(Pc - P) = 1.0043, ln(k) = 0.1855
ln(Pc - P) = 0.9361, ln(k) = 0.2117
ln(Pc - P) = 0.8329, ln(k) = 0.2514
ln(Pc - P) = 0.4637, ln(k) = 0.3934
ln(Pc - P) = 0.0000, ln(k) = 0.5718
% Display the fitting parameters
fprintf('\nOptimized parameters using lsqnonlin:\n');
Optimized parameters using lsqnonlin:
fprintf('Pc = %.4f\n', Pc_opt);
Pc = 2.9500
fprintf('mu = %.4f\n', mu_opt);
mu = 1.0000
% Save the data to a file
fileID = fopen('log_log_data.txt', 'w'); % Open a file for writing
% Write the raw data in log-log scale
fprintf(fileID, 'Raw Data (log-log scale):\n');
for i = 1:length(P_fit_data)
fprintf(fileID, 'ln(Pc - P) = %.4f, ln(k) = %.4f\n', ln_P_diff(i), ln_k(i));
end
% Write the fitted data in log-log scale
fprintf(fileID, '\nFitted Data (log-log scale):\n');
for i = 1:length(P_fit_data)
fprintf(fileID, 'ln(Pc - P) = %.4f, ln(k) = %.4f\n', log(Pc_opt - P_fit_data(i) + offset), ln_k_fit(i));
end
% Close the file
fclose(fileID);
% Plot in log-log scale
figure;
plot(ln_P_diff, ln_k, 'bo', 'MarkerFaceColor', 'b'); % Transformed data points
hold on;
plot(log(Pc_minus_P + offset), ln_k_fit, 'r-', 'LineWidth', 1.5); % Fitted line
xlabel('ln(Pc - P)');
ylabel('ln(k)');
title('Log-Log Fit');
legend('Transformed Data', 'Linear Fit');
grid on;
......I am trying to fit the raw data with two fitting parameters, Pc and mu. The experimental Pc values lie between 2 and 3. Because of the constraint in the fitting form, we always need Pc > P so that log(Pc - P) stays positive; that is why I select only the first 6 values from the raw data set, to keep Pc within the experimental range. However, the raw data is not fitted well. Rescaling of the raw data is allowed (using Pc or any other rescaling factor) to match the data set with the fitted curve. Also, the optimizer is not finding the best parameters; it always returns the same fixed values for both. Please help me with the rescaling and data fitting.
7 comments
Star Strider on 9 Nov 2024
I do not understand this:
\( k \sim (P_c - P)^{-\mu} \), with \( P_c \) and \( \mu \)
and especially ‘sim’ so I will leave it at that.
Good luck!
Image Analyst on 9 Nov 2024
Edited: Image Analyst on 10 Nov 2024
Well okay, but I was just as confused as @Star Strider. Personally I'd use fitnlm but I don't know what your model is. Is it something like
y = a * log(x) + b
??? Anyway, your data doesn't look like it follows a log formula curve very much, especially for the points on the right. What is the theoretical model that your physical process is expected to obey? Is it known for a fact to be log, or is that just what you think or want it to follow?
markerSize = 30;
% Given data
P = [0.1, 0.22, 0.4, 0.65, 1.36, 1.95, 2.63, 7.8, 11.3];
k = [1.11, 1.17, 1.3, 1.38, 1.45, 1.76, 2.28, 2.82, 3.17];
% Select only the first 6 data points
P_fit_data = P(1:6);
k_fit_data = k(1:6);
subplot(3, 1, 1);
plot(P_fit_data, k_fit_data, 'b.-', 'LineWidth', 2, 'MarkerSize', markerSize);
grid on;
xlabel('P_fit_data', 'Interpreter','none');
ylabel('k_fit_data', 'Interpreter','none')
title('Original Data', 'Interpreter','none')
% Plot it on a log-log plot
subplot(3, 1, 2);
loglog(P_fit_data, k_fit_data, 'b.-', 'LineWidth', 2, 'MarkerSize', markerSize);
grid on;
xlabel('P_fit_data', 'Interpreter','none');
ylabel('k_fit_data', 'Interpreter','none')
title('Original Data on a log-log plot', 'Interpreter','none')
grid on;
% Plot it on a log-log plot
subplot(3, 1, 3);
plot(P_fit_data, log(P_fit_data), 'b.-', 'LineWidth', 2, 'MarkerSize', markerSize);
grid on;
xlabel('P_fit_data', 'Interpreter','none');
ylabel('log(P_fit_data)', 'Interpreter','none')
title('Pure log : log(P_fit_data)', 'Interpreter','none')
grid on;

Sign in to comment.

Answers (1)

John D'Errico on 9 Nov 2024
Um. NO. Yes, there is a language problem, and what you write is confusing. But you are not doing what you claim to want to do. We see that in these two consecutive lines:
% Define the nonlinear model for fitting: ln(k) = -mu * ln(Pc - P)
model = @(params, P) log(params(1) - P).^(-params(2));
So params(1) is Pc.
But why are you trying to raise it to -params(2)?
If the model is: ln(k) = -mu * ln(Pc - P), then you need to do this:
model = @(params, P) -params(2)*log(params(1) - P);
And since params(2) is a SCALAR, you need not use .*, just the simple * is ok.
I think, your actual, unlogged model would have raised it to the -mu power, then you got confused.
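To make the correction concrete, here is a minimal sketch of the corrected residual for lsqnonlin, reusing the data, bounds, and initial guess from the question. Note that lsqnonlin expects a vector of residuals, not a summed scalar cost (it squares and sums the vector itself):

```matlab
% Data from the question (first 6 points)
P_fit_data = [0.1, 0.22, 0.4, 0.65, 1.36, 1.95];
k_fit_data = [1.11, 1.17, 1.3, 1.38, 1.45, 1.76];

% Corrected model: ln(k) = -mu * ln(Pc - P), with params = [Pc, mu]
residuals = @(params) -params(2) * log(params(1) - P_fit_data) - log(k_fit_data);

lb = [max(P_fit_data) + 0.1, 0];   % Pc > max(P), mu >= 0, as in the question
ub = [10, 5];
initial_guess = [max(P_fit_data) + 1, 1];
params_opt = lsqnonlin(residuals, initial_guess, lb, ub);
```

Whether this model actually suits the data is a separate question, taken up in the comments below.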
2 comments
tuhin on 9 Nov 2024
Edited: tuhin on 9 Nov 2024
% Given data
P = [0.1, 0.22, 0.4, 0.65, 1.36, 1.95, 2.63, 7.8, 11.3];
k = [0.9009, 0.8547, 0.7692, 0.7246, 0.6897, 0.5682, 0.4386, 0.3546, 0.3155];
% Select only the first 6 data points
P_fit_data = P(1:6);
k_fit_data = k(1:6);
% Offset to avoid log(0) issues
offset = 1e-10;
% Define the nonlinear model as per the suggestion: ln(k) = -mu * ln(Pc - P)
model = @(params, P) -params(2) * log(params(1) - P + offset);
% Initial guesses for parameters [Pc, mu]
initial_guess = [max(P_fit_data) + 1, 1]; % Ensure Pc > max(P)
% Bounds for parameters: ensure Pc > max(P) and mu > 0
lb = [max(P_fit_data) + 0.1, 0]; % Lower bounds: Pc > max(P), mu > 0
ub = [10, 5]; % Upper bounds for reasonable parameter range
% Define the cost function for lsqnonlin
cost_function = @(params) -params(2) * log(params(1) - P_fit_data + offset) - k_fit_data;
% Perform the fitting using lsqnonlin
options = optimset('Display', 'iter', 'TolFun', 1e-6, 'TolX', 1e-6, 'MaxIter', 1000);
[params_opt, resnorm, residual, exitflag] = lsqnonlin(cost_function, initial_guess, lb, ub, options);
                                         Norm of      First-order
 Iteration  Func-count     Resnorm       step         optimality
     0           3         14.2386                    7.34
     1           6         5.79778      0.758776      1.41
     2           9         3.70564      0.688972      0.149
     3          12         3.47781      0.393133      0.0133
     4          15         3.46228      0.134621      0.00286
     5          18         3.45842      0.0803679     0.00054
     6          21         3.45746      0.0476622     4.05e-05
     7          24         3.45738      0.0146961     3.38e-07
Local minimum found. Optimization completed because the size of the gradient is less than the value of the optimality tolerance.
% Extract optimized parameters
Pc_opt = params_opt(1);
mu_opt = params_opt(2);
% Ensure Pc is greater than max(P) to avoid non-positive values of (Pc - P)
if Pc_opt <= max(P_fit_data)
Pc_opt = max(P_fit_data) + 1e-5; % Adjust Pc slightly larger than max(P)
end
% Generate fitted values using the optimized parameters
P_fit = linspace(min(P_fit_data), max(P_fit_data), 100); % Create a finer grid for smooth curve
k_fit = exp(-mu_opt * log(Pc_opt - P_fit + offset)); % Back-transform to get k
% Log-Log transformation for linear fit
ln_P_diff = log(Pc_opt - P_fit_data + offset); % ln(Pc - P) with offset
ln_k = log(k_fit_data);
% Use `fitnlm` to confirm and check the linear relationship in log-log scale
T = table(ln_P_diff', ln_k', 'VariableNames', {'ln_P_diff', 'ln_k'});
linearModel = fitnlm(T, 'ln_k ~ -mu_opt* ln_P_diff', 'PredictorNames', {'ln_P_diff'});
Error using internal.stats.parseArgs (line 43)
Wrong number of arguments.

Error in NonLinearModel.fit (line 1439)
internal.stats.parseArgs(paramNames, paramDflts, otherArgs{:});

Error in fitnlm (line 99)
model = NonLinearModel.fit(X,varargin{:});
% Extract slope (should correspond to -mu) and intercept
coeffs = linearModel.Coefficients.Estimate;
mu_fit = -coeffs(1); % Extract mu from slope
% Print the raw and fitted data in log-log scale
fprintf('Raw Data (log-log scale):\n');
for i = 1:length(P_fit_data)
fprintf('ln(Pc - P) = %.4f, ln(k) = %.4f\n', ln_P_diff(i), ln_k(i));
end
fprintf('\nFitted Data (log-log scale):\n');
for i = 1:length(P_fit_data)
fprintf('ln(Pc - P) = %.4f, ln(k) = %.4f\n', log(Pc_opt - P_fit_data(i) + offset), ...
predict(linearModel, table(log(Pc_opt - P_fit_data(i) + offset))));
end
% Display optimized parameters
fprintf('\nOptimized parameters using lsqnonlin:\n');
fprintf('Pc = %.4f\n', Pc_opt);
fprintf('mu = %.4f\n', mu_opt);
% Plot in log-log scale
figure;
plot(ln_P_diff, ln_k, 'bo', 'MarkerFaceColor', 'b'); % Transformed data points
hold on;
plot(log(Pc_opt - P_fit + offset), log(k_fit), 'r-', 'LineWidth', 1.5); % Fitted curve
xlabel('ln(Pc - P)');
ylabel('ln(k)');
title('Log-Log Fit with Optimized Parameters');
legend('Transformed Data', 'Fit using lsqnonlin');
grid on;
...............I followed the suggestions, but I am getting an error.
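The fitnlm error above comes from the formula string: fitnlm cannot evaluate workspace variables such as mu_opt inside a model formula, and a nonlinear model is normally supplied as a function of the coefficient vector plus starting values. A minimal sketch of a valid call for the linear log-log relationship, assuming ln_P_diff and ln_k have been computed as in the code above:

```matlab
% Fit ln(k) = b(1) + b(2)*ln(Pc - P) with explicit coefficients and guesses
modelFun = @(b, x) b(1) + b(2)*x;
beta0    = [0; -1];                              % starting [intercept; slope]
linearModel = fitnlm(ln_P_diff(:), ln_k(:), modelFun, beta0);
mu_fit = -linearModel.Coefficients.Estimate(2);  % mu is minus the fitted slope
```

For a purely linear relation like this, polyfit or fitlm would also work and is simpler; fitnlm is only needed when the model is genuinely nonlinear in its coefficients.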
John D'Errico on 9 Nov 2024
Edited: John D'Errico on 9 Nov 2024
P = [0.1, 0.22, 0.4, 0.65, 1.36, 1.95, 2.63, 7.8, 11.3];
k = [0.9009, 0.8547, 0.7692, 0.7246, 0.6897, 0.5682, 0.4386, 0.3546, 0.3155];
% Select only the first 6 data points
P_fit_data = P(1:6);
k_fit_data = k(1:6);
First, let's look at your model and your data. Always apply common sense to anything you do, or else you will get common nonsense out.
Your model is
log(k) = -mu*log(Pc - P)
Suppose we pick some value for Pc and plot what remains. The result should be at least consistent with a vaguely straight line.
Pctest = 2.5;
plot(log(Pctest - P_fit_data),log(k_fit_data),'o')
Even if that was not the correct value for Pc, it will be reasonable. The problem is, the resulting curve has POSITIVE SLOPE. Do you understand that?
What model are you trying to fit?
log(k) = -mu*log(Pc - P)
A NEGATIVE SLOPE. If mu is constrained to be positive, then you are trying to fit a curve that has a positive slope, with a function that ALWAYS has a negative slope.
USE COMMON SENSE! Your model is inconsistent with the data!!!!!!!!
Now, look at what happened to you when you tried to fit that model.
% Offset to avoid log(0) issues
offset = 1e-10;
% Define the nonlinear model as per the suggestion: ln(k) = -mu * ln(Pc - P)
model = @(params, P) -params(2) * log(params(1) - P + offset);
% Initial guesses for parameters [Pc, mu]
initial_guess = [max(P_fit_data) + 1, 1]; % Ensure Pc > max(P)
% Bounds for parameters: ensure Pc > max(P) and mu > 0
lb = [max(P_fit_data) + 0.1, 0]; % Lower bounds: Pc > max(P), mu > 0
ub = [10, 5]; % Upper bounds for reasonable parameter range
% Define the cost function for lsqnonlin
WAIT.
cost_function = @(params) -params(2) * log(params(1) - P_fit_data + offset) - k_fit_data;
Why are you subtracting k_fit_data in the cost function, and not log(k_fit_data)? Is not the model
log(k) = -mu*log(Pc - P)
There are just so many inconsistencies in what you are doing that I cannot know what you need to do, or even if the model you have chosen makes any sense at all.
Just take a step back. Go SLOWLY. Check your code carefully. Don't just randomly take the log of some things some of the time.
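Pulling the two corrections together (residuals in log space, and a sign for mu that can match the positive slope of the transformed data), a minimal sketch follows. The relaxed lower bound on mu is an assumption added here for illustration only; it is not part of the original model:

```matlab
% Second data set from the thread (first 6 points)
P = [0.1, 0.22, 0.4, 0.65, 1.36, 1.95];
k = [0.9009, 0.8547, 0.7692, 0.7246, 0.6897, 0.5682];

% Residuals compare log(k) with the model for log(k), as the model requires
residuals = @(params) -params(2) * log(params(1) - P) - log(k);

lb = [max(P) + 0.1, -5];   % assumption: allow mu < 0 so the slope can be positive
ub = [10, 5];
[params_opt, resnorm] = lsqnonlin(residuals, [max(P) + 1, 1], lb, ub);
```

If mu must be positive for physical reasons, then as noted above the model itself is inconsistent with this data, and no bound adjustment will rescue the fit.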

Sign in to comment.

Categories

Find more on Curve Fitting Toolbox in Help Center and File Exchange

Release

R2024b
