Problem with fitting a loglog plot

Views: 7 (last 30 days)
Wissem-Eddine KHATLA on 17 Jan 2024
Commented: Star Strider on 23 Jan 2024
Hello everyone,
I am having trouble obtaining the best possible straight line to fit a log-log data set. I keep getting a warning from polyfit:
Warning: Polynomial is badly conditioned. Add points with distinct X values, reduce the degree of the polynomial, or try centering
and scaling as described in HELP POLYFIT.
I tried to find some alternatives but am stuck for a reliable solution. Any clue how I can solve this?
The data used by the script is attached to my post.
Thanks for your help,
SCRIPT:
data_exp_high = load("data_Q_1e-8.txt");
t_exp_high = data_exp_high(:,1);
R_exp_high = data_exp_high(:,2);
% Linear fit of the logarithms
log_t_exp = log(t_exp_high);
log_R_exp = log(R_exp_high);
% Linear fit using polyfit
coefficients = polyfit(log_t_exp, log_R_exp, 1);
% Recover the alpha and beta parameters
alpha_estime = exp(coefficients(2));
beta_estime = coefficients(1);
% Display the results
disp(['Estimated alpha parameter: ', num2str(alpha_estime)]);
disp(['Estimated beta parameter: ', num2str(beta_estime)]);
% Plot the curves
figure;
loglog(t_exp_high, R_exp_high, 'o', 'DisplayName', 'Experimental data');
hold on;
loglog(t_exp_high, alpha_estime * t_exp_high.^beta_estime, 'r-', 'DisplayName', 'Log-log linear fit');

Accepted Answer

Mathieu NOE on 17 Jan 2024
Hello,
I knew even before opening your data that there would be a zero somewhere... bingo!
It works better once you have removed the zeros (or negative values, should those ever occur):
data_exp_high = load("data_Q_1e-8.txt");
t_exp_high = data_exp_high(:,1);
R_exp_high = data_exp_high(:,2);
% Remove zero values
ind = (t_exp_high>0 & R_exp_high>0);
t_exp_high = t_exp_high(ind);
R_exp_high = R_exp_high(ind);
% Linear fit of the logarithms
log_t_exp = log(t_exp_high);
log_R_exp = log(R_exp_high);
% Linear fit using polyfit
coefficients = polyfit(log_t_exp, log_R_exp, 1);
% Recover the alpha and beta parameters
alpha_estime = exp(coefficients(2));
beta_estime = coefficients(1);
% Display the results
disp(['Estimated alpha parameter: ', num2str(alpha_estime)]);
disp(['Estimated beta parameter: ', num2str(beta_estime)]);
% Plot the curves
figure;
loglog(t_exp_high, R_exp_high, 'o', 'DisplayName', 'Experimental data');
hold on;
loglog(t_exp_high, alpha_estime * t_exp_high.^beta_estime, 'r-', 'DisplayName', 'Log-log linear fit');
1 comment
Wissem-Eddine KHATLA on 17 Jan 2024
Thank you @Mathieu NOE : Your intuition was indeed correct ! Thanks again


More Answers (2)

Alan Stevens on 17 Jan 2024
polyfit doesn't like the log of zero. One option is to remove the first row of the data:
data_exp_high = load("data_Q_1e-8.txt");
data_exp_high(1,:) = [];
t_exp_high = data_exp_high(:,1);
R_exp_high = data_exp_high(:,2);
% Linear fit of the logarithms
log_t_exp = log(t_exp_high);
log_R_exp = log(R_exp_high);
% Linear fit using polyfit
coefficients = polyfit(log_t_exp, log_R_exp, 1);
% Recover the alpha and beta parameters
alpha_estime = exp(coefficients(2));
beta_estime = coefficients(1);
% Display the results
disp(['Estimated alpha parameter: ', num2str(alpha_estime)]);
disp(['Estimated beta parameter: ', num2str(beta_estime)]);
% Plot the curves
figure;
loglog(t_exp_high, R_exp_high, 'o', 'DisplayName', 'Experimental data');
hold on;
loglog(t_exp_high, alpha_estime * t_exp_high.^beta_estime, 'r-', 'DisplayName', 'Log-log linear fit');

Star Strider on 17 Jan 2024
Fitting a linear regression to logarithmically transformed data produces multiplicative errors, not the additive errors that least-squares approaches require (and assume).
Use a nonlinear approach instead: no data editing is required, and the results are more accurate —
M1 = readmatrix('data_Q_1e-8.txt')
M1 = 256×2
         0    0.0010
    0.5500    0.0019
    0.6000    0.0020
    0.6500    0.0021
    0.7000    0.0022
    0.7500    0.0022
    0.8000    0.0023
    0.8500    0.0024
    0.9000    0.0024
    0.9500    0.0025
x = M1(:,1);
y = M1(:,2);
loglogfcn = @(b,x) x.^b(1) .* exp(b(2));
[B,fv] = fminsearch(@(b) norm(y - loglogfcn(b,x)), randn(2,1))
B = 2×1
    0.4954
   -5.9328
fv = 0.0020
figure
plot(x, y, 'pb', 'MarkerFaceColor','b')
hold on
plot(x, loglogfcn(B,x), '-r', 'LineWidth',2)
hold off
grid
xlabel('x')
ylabel('y')
title('Linear Scale')
text(25, 0.0325, sprintf('$f(x) = x^{%.3f}\\cdot%.5f$', B(1),exp(B(2))), 'Interpreter','latex')
Results = table(x, y, loglogfcn(B,x), y-loglogfcn(B,x), 'VariableNames',{'X','Y','Fitted Data','Error'})
Results = 256×4 table
      X         Y        Fitted Data      Error
    ____    _________    ___________    ___________
       0    0.0010486              0      0.0010486
    0.55    0.0019463      0.0019715    -2.5153e-05
    0.6     0.0020274      0.0020583    -3.0892e-05
    0.65    0.0021143      0.0021416    -2.7268e-05
    0.7     0.0021764      0.0022217    -4.5241e-05
    0.75    0.0022354      0.0022989    -6.3545e-05
    0.8     0.0022997      0.0023736    -7.3903e-05
    0.85     0.002352       0.002446    -9.4027e-05
    0.9     0.0024134      0.0025162    -0.00010281
    0.95    0.0024749      0.0025846    -0.00010963
       1    0.0025367      0.0026511    -0.00011442
    1.05     0.002602      0.0027159    -0.00011394
    1.1     0.0026641      0.0027793    -0.00011516
    1.15    0.0027337      0.0028411    -0.00010744
    1.2     0.0028054      0.0029017    -9.6235e-05
    1.25    0.0028697       0.002961    -9.1278e-05
RMS_Error = rmse(Results.Y, Results.('Fitted Data'))
RMS_Error = 1.2739e-04
figure
loglog(x, y, 'pb', 'MarkerFaceColor','b')
hold on
plot(x, loglogfcn(B,x), '-r', 'LineWidth',2)
hold off
grid
xlabel('x')
ylabel('y')
title('‘loglog’ Scale')
text(2.5, 0.02, sprintf('$f(x) = %.3f\\cdot log(x) %+.3f$', B), 'Interpreter','latex')
Lv = x>0;
B = polyfit(log(x(Lv)), log(y(Lv)), 1)
B = 1×2
0.5008 -5.9488
yfit = exp(polyval(B, log(x(Lv))));
figure
loglog(x, y, 'pb', 'MarkerFaceColor','b')
hold on
plot(x(Lv), yfit, '-r', 'LineWidth',2)
hold off
grid
xlabel('x')
ylabel('y')
title('‘loglog’ Scale')
text(2.5, 0.02, sprintf('$f(x) = %.3f\\cdot log(x) %+.3f$', B), 'Interpreter','latex')
Results = table(x(Lv), y(Lv), yfit, y(Lv)-yfit, 'VariableNames',{'X','Y','Fitted Data','Error'})
Results = 255×4 table
      X         Y        Fitted Data      Error
    ____    _________    ___________    ___________
    0.55    0.0019463      0.0019339      1.245e-05
    0.6     0.0020274        0.00202     7.4213e-06
    0.65    0.0021143      0.0021026     1.1689e-05
    0.7     0.0021764      0.0021821    -5.6984e-06
    0.75    0.0022354      0.0022588    -2.3466e-05
    0.8     0.0022997       0.002333    -3.3333e-05
    0.85     0.002352       0.002405    -5.3005e-05
    0.9     0.0024134      0.0024748    -6.1371e-05
    0.95    0.0024749      0.0025427     -6.781e-05
       1    0.0025367      0.0026089    -7.2241e-05
    1.05     0.002602      0.0026734    -7.1427e-05
    1.1     0.0026641      0.0027364    -7.2349e-05
    1.15    0.0027337       0.002798    -6.4345e-05
    1.2     0.0028054      0.0028583    -5.2873e-05
    1.25    0.0028697      0.0029174    -4.7672e-05
    1.3     0.0029326      0.0029752    -4.2609e-05
RMS_Error = rmse(Results.Y, Results.('Fitted Data'))
RMS_Error = 1.3317e-04
2 comments
Wissem-Eddine KHATLA on 23 Jan 2024
@Star Strider Thank you for your contribution: your approach is more rigorous and indeed much more reliable for future datasets. It's true that performing a linear regression on log data can induce much larger errors, but I have a question regarding the norm that you use in the line
[B,fv] = fminsearch(@(b) norm(y - loglogfcn(b,x)), randn(2,1))
What do you think about the choice of norm type? Do you think it could be sensitive if we use another dataset, for example?
Thanks,
Star Strider on 23 Jan 2024
My pleasure!
I should have named ‘loglogfcn’ as ‘powerfcn’, since that would be more accurate.
Using nonlinear techniques to fit a power function (or a log-log plot) is correct, and produces the most robust parameter estimates. (Here the errors are relatively small, but even so, the errors of the linear log-transformed fit are about 4½% larger than those of the nonlinear fit.) The choice of regression model depends entirely on the data and the process that created them; linearising data, however, is definitely not something I encourage. (That might once have been acceptable, before computers came into wide use, but it no longer is.)
Also, fminsearch is reasonably robust here, since it uses a derivative-free optimisation method. For more difficult problems, a global optimisation method such as ga would be more likely to succeed in initially estimating the best parameter set; the parameters can then be ‘tuned’ using a hybrid approach with a gradient-descent optimisation technique. It depends on the data and the complexity of the problem.
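As a minimal sketch of that hybrid idea (assuming the Global Optimization Toolbox is available for ga, and reusing the x, y data loaded above):

```matlab
% Sketch: global search with ga, then local refinement with fminsearch.
% Requires the Global Optimization Toolbox for ga; x and y are the data
% columns loaded earlier in this answer.
objfcn = @(b) norm(y - x.^b(1) .* exp(b(2)));  % same power-function objective
nvars  = 2;                                    % two parameters: exponent, log-scale
B0 = ga(objfcn, nvars);                        % derivative-free global estimate
[B, fv] = fminsearch(objfcn, B0);              % 'tune' the ga estimate locally
```

For a problem this small the hybrid step changes little, but on harder objectives the ga stage reduces the risk of fminsearch stalling in a poor local minimum.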
So to answer your question:
Do you think that it can be sensitive if we use another dataset for example ?
Yes, if the data are similar to those presented here, and the power function model is appropriate to the data.
err_powerfcn = 1.2739e-04;
err_logloglinear = 1.3317e-04;
rel_err = err_logloglinear / err_powerfcn
rel_err = 1.0454
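To see the sensitivity to the norm choice concretely, one can swap the default 2-norm objective for a 1-norm. This is only a sketch, reusing x, y, and loglogfcn from the answer above; the 1-norm down-weights outliers, so on clean data like this the two fits should agree closely, while on outlier-contaminated data they can diverge:

```matlab
% Sketch: comparing objective norms (reuses x, y, loglogfcn defined above).
% norm(r) is the 2-norm (least squares); sum(abs(r)) is the 1-norm, which is
% less sensitive to outliers in the data.
B_L2 = fminsearch(@(b) norm(y - loglogfcn(b,x)),     randn(2,1));
B_L1 = fminsearch(@(b) sum(abs(y - loglogfcn(b,x))), randn(2,1));
```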

