Hello everyone!
I have a problem with fsolve, which probably stems from my misunderstanding of how anonymous functions work.
The code is as follows:
JordanNormalForm=[0 0;0 -1];
Bp=[1;-1];
MatrixExp=@(t) expm(JordanNormalForm.*t);
MatrixUnderIntegral=@(t) expm(-JordanNormalForm.*t)*Bp;
IntExp1=@(t1) integral( MatrixUnderIntegral,0,t1,'ArrayValued',true);
X01=@(t1) feval(MatrixExp, t1)*(feval(IntExp1,t1)+x0');
IntExp2=@(t2) integral( MatrixUnderIntegral,0,t2,'ArrayValued',true);
IntPlusX01=@(t1,t2) -feval(IntExp2,t2)+feval(X01,t1);
X02=@(varargin) feval(MatrixExp, varargin{1})*feval(IntPlusX01,varargin{1:2});
fsolve(X02,[1,0.5])
So, when I run this code I get this error:
Not enough input arguments.
Error in MatrixExponentOfLinearParallelSys>@(t1,t2)feval(MatrixExp,t2)*feval(IntPlusX01,t1,t2) (line 287)
X02=@(t1,t2) feval(MatrixExp, t2)*feval(IntPlusX01,t1,t2);
Error in fsolve (line 242)
fuser = feval(funfcn{3},x,varargin{:});
Error in MatrixExponentOfLinearParallelSys (line 289)
fsolve(X02,[1,0.5])
Caused by:
Failure in initial objective function evaluation. FSOLVE cannot continue.
I just don't understand what causes this problem, because other operations like feval work correctly.
And even more than that, I don't understand why this version works:
MatrixExp=@(t) expm(JordanNormalForm.*t);
MatrixUnderIntegral=@(t) expm(-JordanNormalForm.*t)*Bp;
IntExp1=@(t1) integral( MatrixUnderIntegral,0,t1,'ArrayValued',true);
X01=@(t1) feval(MatrixExp, t1)*(feval(IntExp1,t1)+x0');
IntExp2=@(t2) integral( MatrixUnderIntegral,0,t2,'ArrayValued',true);
IntPlusX01=@(t1,t2) -feval(IntExp2,t2)+feval(X01,t1);
X02=@(t) feval(MatrixExp, t(2))*feval(IntPlusX01,t(1),t(2));
fsolve(X02,[1,0.5])
Of course in this case I could use the second variant, but I need to handle a more general case, so the varargin variant is preferable.
Thanks for any help.

2 Comments

I recommend against using feval() when it can be avoided.
MatrixExp = @(t) expm(JordanNormalForm.*t);
MatrixUnderIntegral = @(t) expm(-JordanNormalForm.*t)*Bp;
IntExp1 = @(t1) integral( MatrixUnderIntegral,0,t1,'ArrayValued',true);
X01 = @(t1) MatrixExp(t1) * (IntExp1(t1)+x0');
IntExp2 = @(t2) integral( MatrixUnderIntegral,0,t2,'ArrayValued',true);
IntPlusX01 = @(t1,t2) -IntExp2(t2) + X01(t1);
X02 = @(t) MatrixExp(t(2)) * IntPlusX01(t(1),t(2));
fsolve(X02, [1,0.5])
Also I recommend that you recheck whether you truly do want the * matrix multiplication operator instead of the .* element-by-element multiplication.
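To illustrate that remark with a minimal sketch: since t is a scalar here, JordanNormalForm.*t and JordanNormalForm*t coincide, so the operator choice only matters where matrices multiply matrices (or vectors), and the two operators give very different results there:

```matlab
A = [1 2; 3 4]; B = [5 6; 7 8];
A*B    % matrix product:        [19 22; 43 50]
A.*B   % element-by-element:    [ 5 12; 21 32]
```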
Ivan Khomich, 9 Oct 2020
Thanks for your answer, Walter. I don't think the problem is in the functions themselves, but in how I pass the variables to them, because in the second case everything works correctly.
The second variant is OK, but I'd like to know whether there is a way to use fsolve with varargin, because in the general case I'll have more variables in the equations.


Accepted Answer

Walter Roberson, 9 Oct 2020


IntPlusX01=@(t1,t2) -feval(IntExp2,t2)+feval(X01,t1);
X02=@(varargin) feval(MatrixExp, varargin{1})*feval(IntPlusX01,varargin{1:2});
fsolve(X02,[1,0.5])
fsolve() invokes the given handle with a row vector of values. A row vector of values occupies one argument no matter how large the vector is.
So X02 is going to be invoked with a single argument -- varargin is going to be a cell with one element that is the vector.
Inside X02 you have feval(MatrixExp, varargin{1}), so the complete vector is going to be passed as the (only) argument to MatrixExp. MatrixExp is @(t) expm(JordanNormalForm.*t), so that would potentially be valid, provided that JordanNormalForm has compatible dimensions for multiplying by the two-element vector in t and the result is a square matrix.
(If t is expected to be a scalar there, then you could save computation by computing the eigendecomposition ahead of time and using V*diag(exp(t.*diag(D)))/V.)
IntPlusX01 would then be invoked with the single argument that is the vector of values, but IntPlusX01 expects to be invoked with two arguments, so you have a problem.
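As a sketch of the general pattern being asked about (the helper name here is hypothetical, not a built-in): since fsolve always hands the objective one row vector, a multi-argument objective can be adapted with a small wrapper that splits the vector into separate arguments via num2cell and comma-separated list expansion:

```matlab
% Hypothetical helper: adapt a multi-argument objective to fsolve's
% single-vector calling convention.
function y = splitArgs(f, t)
    args = num2cell(t);   % [t1 t2 ... tn] -> {t1, t2, ..., tn}
    y = f(args{:});       % expand the cell array into separate arguments
end
```

Usage sketch, assuming X02multi = @(t1,t2) ... takes separate scalars:

```matlab
fsolve(@(t) splitArgs(X02multi, t), [1, 0.5])
```

This scales to any number of unknowns without rewriting the objective.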

2 Comments

Walter, I'm very grateful for your answer!
With your help I managed to figure out the logic of how the variables are passed, so I've solved my problem, though in a different way than I intended at the beginning.
Here is my code:
J = cell(SysOrder,1);
B = cell(SysOrder,1);
X0 = cell(SysOrder,1);
XF = cell(SysOrder,1);
for i = 1:SysOrder
    J{i} = JordanNormalForm(1:i,1:i);
    B{i} = Bp(1:i);
    X0{i} = x0p(1:i);
    XF{i} = xfp(1:i);
end
IndUmaxZero = 0;
X = cell(SysOrder);
Y = cell(SysOrder);
Z = cell(SysOrder);
SetOfLinearEquations = cell(SysOrder,1);
for i = 1:SysOrder
    for j = 1:SysOrder
        X{i,j} = @(t) expm(J{j}.*t(i));
        Y{i,j} = @(t) integral(@(t) expm(-J{j}.*t)*B{j}, 0, t(i), 'ArrayValued', true);
        if i > 1
            Z{i,j} = @(t,U0) X{i,j}(t)*(U0*U(IndUmaxZero+1+mod(i,2)).*Y{i,j}(t)+Z{i-1,j}(t,U0));
        else
            Z{i,j} = @(t,U0) X{i,j}(t)*(U0*U(IndUmaxZero+1+mod(i,2)).*Y{i,j}(t)+X0{j});
        end
        if i == j
            SetOfLinearEquations{j} = @(t,U0) Z{i,j}(t,U0)-XF{j};
        end
    end
end
for m = 1:SysOrder
    NumOfEq = m;
    options = optimset('disp','off','MaxFunEvals',100,'OutputFcn',...
        @(x,optimValues,state) StopFunction(x,optimValues,state,NumOfEq),...
        'Algorithm','trust-region');
    TIME0(1:m) = fsolve(@(t) SetOfLinearEquations{m}(t,U0(k)), T0(1:m), options);
end
It works very well for me, so thank you again!
If it will not bother you, could you explain what you meant by "If t is expected to be a scalar there, then you could save computation by computing the eigendecomposition ahead of time and using V*diag(exp(t.*diag(D)))/V"?
My t is just time, so it is a scalar. My matrix J has Jordan normal form when there is more than one integrator in the system, and is just diagonal otherwise.
If you examine doc expm, it says:
Y = expm(X) computes the matrix exponential of X. Although it is not computed this way, if X has a full set of eigenvectors V with corresponding eigenvalues D, then [V,D] = eig(X) and expm(X) = V*diag(exp(diag(D)))/V
And since you are wanting to take expm(JordanNormalForm.*t), potentially we could compute
%{
[V, D] = eig(JordanNormalForm);
eJNFt = V*diag(exp(t.*diag(D)))/V;
%}
But does that actually work? Let us test:
format short g
rng(42); A = randi([-9 9],3,3)
A = 3×3
    -2     2    -8
     9    -7     7
     4    -7     2
expm(A)
ans = 3×3
    6.994     7.195   -7.9778
   4.3071    4.4201   -4.8929
  -3.0287   -3.1272     3.464
e2A = expm(2*A)
e2A = 3×3
   104.07   107.07  -118.64
   63.981   65.828  -72.938
  -45.143  -46.446   51.463
[V, D] = eig(A);
V*diag(exp(diag(D)))/V %mathematical equivalent to expm(A)
ans = 3×3
    6.994 + 2.0694e-16i    7.195 - 4.835e-16i   -7.9778 - 8.8818e-16i
   4.3071 + 1.2808e-16i   4.4201 - 2.9811e-16i  -4.8929 - 4.4409e-16i
  -3.0287 - 9.0124e-17i  -3.1272 + 2.0938e-16i   3.464 + 0i
e2Aalt = V*diag(exp(2*diag(D)))/V %is this equivalent to expm(2*A) ?
e2Aalt = 3×3
   104.07 + 3.0812e-15i   107.07 - 7.1959e-15i  -118.64 + 0i
   63.981 + 1.8943e-15i   65.828 - 4.424e-15i   -72.938 - 7.1054e-15i
  -45.143 - 1.3366e-15i  -46.446 + 3.1215e-15i   51.463 + 0i
e2A - e2Aalt
ans = 3×3
   7.1054e-14 - 3.0812e-15i   1.9895e-13 + 7.1959e-15i  -2.4158e-13 + 0i
   3.5527e-14 - 1.8943e-15i   1.279e-13 + 4.424e-15i    -1.4211e-13 + 7.1054e-15i
  -2.8422e-14 + 1.3366e-15i  -8.5265e-14 - 3.1215e-15i   1.0658e-13 + 0i
So, yes, to within round-off the two are equivalent. If you are working with real-valued matrices, take real() of the result to remove the noise in the complex parts.
Why this matters: you are doing the expm() step many times, so if you pre-decompose the JordanNormalForm matrix, then V*diag(exp(t*diag(D)))/V might be faster than expm(t*JordanNormalForm). Something worth timing at some point.
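A minimal sketch of how that timing comparison might look. One caveat: this assumes JordanNormalForm is diagonalizable (true for the diagonal matrix in this question); a genuine Jordan block with repeated eigenvalues is defective, eig then returns a singular V, and this shortcut breaks:

```matlab
JordanNormalForm = [0 0; 0 -1];      % diagonal here, so eig is safe
[V, D] = eig(JordanNormalForm);      % decompose once, up front
d = diag(D);
fastExpm = @(t) real(V * diag(exp(t.*d)) / V);

tic; for k = 1:1e4, expm(JordanNormalForm.*rand); end; toc
tic; for k = 1:1e4, fastExpm(rand); end; toc
```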


More Answers (0)


Asked: 9 Oct 2020
Last comment: 10 Oct 2020
