
How to use fminbnd correctly when a function has a function "inside"

3 views (last 30 days)
I am trying to find the minimum of the following function:
@(x) (x + (1+omega)*mean(min(max(L-x,0), VaR-x)))
subject to 0<=x<=VaR
where omega is a constant, L is a random number, VaR is a constant, and x is the priority of a limited stop-loss reinsurance (the variable I am looking for).
Since I have to set a value for L, and it is a random number drawn from a given distribution, I set L = datasample(e,1), where e is a vector of numbers.
My problem is that I get different results depending on where I define L. Here are the two cases that lead to different results:
Case 1:
e = random('gamma',45,8,1,1000); % random sample from a gamma distribution
L = sort(e);                     % sort the sample to find VaR, a quantile
VaR = quantile(L,0.995);
omega = [0.05 0.3 0.5 0.8 0.9];
% OPTIMIZATION for each value of omega
Priority = zeros(1,length(omega));
for k = 1:length(omega)
    D = zeros(1,20000);
    for j = 1:20000
        ob = @(x) (x + (1+omega(k))*(mean(min(max(datasample(e,1)-x,0), VaR-x))));
        d_star = fminbnd(ob,0,VaR);
        D(j) = d_star;
    end
    Priority(k) = mean(D);
end
Case 2:
e = random('gamma',45,8,1,1000); % random sample from a gamma distribution
L = sort(e);                     % sort the sample to find VaR, a quantile
VaR = quantile(L,0.995);
omega = [0.05 0.3 0.5 0.8 0.9];
% OPTIMIZATION for each value of omega
Priority = zeros(1,length(omega));
for k = 1:length(omega)
    D = zeros(1,20000);
    for j = 1:20000
        X = datasample(e,1);
        ob = @(x) (x + (1+omega(k))*(mean(min(max(X-x,0), VaR-x))));
        d_star = fminbnd(ob,0,VaR);
        D(j) = d_star;
    end
    Priority(k) = mean(D);
end
Why do the results of the two simulations differ? Is one of the two cases correct? Thank you in advance.

Answers (1)

Walter Roberson on 25 September 2017
Differences are expected there. When you do
ob=@(x)(x+(1+omega(k))*(mean(min(max(datasample(e,1)-x,0),VaR-x))));
then the datasample(e,1) is executed every time ob is executed, giving you different results even for the same x. The only way you can meaningfully minimize on such a function would be if you executed it long enough for every possible e to have been generated.
When you do
X=datasample(e,1);
ob=@(x)(x+(1+omega(k))*(mean(min(max(X-x,0),VaR-x))));
then you are picking one particular random sample first, and then all of the calls to ob will be minimizing with respect to that one random sample.
Now, fminbnd uses a deterministic search, so for any given omega value and any given e value, the resulting x is going to be consistent. With there being only 1000 different e values, it does not seem to make sense to re-compute the fminbnd for a given omega and e combination. If you did need to randomly sample from those 1000 possible outcomes per omega value, then it would make more sense to do the fminbnd once for each e value, and then to randomly oversample from the resulting 1000 x values.
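That restructuring could be sketched roughly as follows. This is only a sketch of the suggestion above, not tested code: it reuses e, VaR, and omega from the question, runs fminbnd once per distinct sample value, and then oversamples the 1000 resulting minimizers with datasample instead of re-running the optimizer 20000 times.

```matlab
% Sketch (assumes e, VaR, omega as defined in the question):
% solve once per sample value, then resample the deterministic results.
x_star = zeros(1, numel(e));
Priority = zeros(1, numel(omega));
for k = 1:numel(omega)
    for i = 1:numel(e)
        % One fixed realization e(i) per objective, so ob is deterministic.
        ob = @(x) x + (1+omega(k)) * min(max(e(i) - x, 0), VaR - x);
        x_star(i) = fminbnd(ob, 0, VaR);
    end
    % Oversample the 1000 minimizers rather than calling fminbnd 20000 times.
    D = datasample(x_star, 20000);
    Priority(k) = mean(D);
end
```

Since fminbnd returns the same x for the same e(i) and omega(k) every time, the inner loop does each optimization exactly once; the resampling step only affects the weighting in the final mean.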
3 Comments
Walter Roberson on 25 September 2017
Consider:
>> ob2 = @(x) (x-rand()).^2;
>> fminbnd(ob2, 0, 10)
ans =
0.978784590457837
>> fminbnd(ob2, 0, 10)
ans =
0.499502320632666
>> fminbnd(ob2, 0, 10)
ans =
0.827153998914576
Notice the results are different every time, because rand() is being called inside of ob2.
Now,
>> R = rand()
R =
0.27054908332043
>> ob1 = @(x) (x-R).^2;
>> fminbnd(ob1,0,10)
ans =
0.270549083320429
>> fminbnd(ob1,0,10)
ans =
0.270549083320429
Notice the results are the same each time, because the randomness is outside of ob1.
You can only meaningfully minimize a function if it gives a consistent result each time it is called with the same input.
Have another look at your function,
ob=@(x)(x+(1+omega(k))*(mean(min(max(datasample(e,1)-x,0),VaR-x))));
datasample(e,1) is going to be a scalar. x is a scalar. a scalar minus a scalar is a scalar. max(a_scalar, 0) is a scalar. VaR is a scalar, and since x is a scalar, VaR-x is a scalar. min(a_scalar, another_scalar) is a scalar. So now you have mean(a_scalar) which is going to be the scalar itself, with it not being worth having called mean(). But since you did call mean() there, that suggests that you thought you were dealing with a vector rather than a scalar, so what is it you thought was being calculated?
Mattia Michael Margoni Bastian on 25 September 2017
Basically, I have the following basic optimal reinsurance model:
min( x + P(min(max(L-x,0), VaR-x)) )
subject to 0<=x<=VaR
where P(.) is a function that calculates the premium (a so called premium principle). Now, I wish to apply the expected value premium principle, which is defined as:
P(X)=(1+omega)*mean(X)
where omega is the safety loading (a scalar) and X is a random variable.
So, if I apply this premium principle to the optimization problem I come up with:
(x + (1+omega)*mean(min(max(L-x,0), VaR-x)))
I agree with you that the mean of a scalar is nonsense, but I cannot subtract x (a scalar) from the whole vector of the distribution L; that is not mathematically correct. How can I implement this random variable L so that I get the optimal x as a scalar? Thank you very much again. I hope it is clear and not confusing.
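One possible reading of the earlier hint about mean() expecting a vector: let the empirical sample e stand in for the distribution of L, so the objective averages the payoff over all 1000 realizations at once. This is a sketch under that assumption, not a confirmed solution; note that MATLAB happily subtracts the scalar x from the vector e elementwise, and min/max then operate elementwise against the scalars 0 and VaR-x.

```matlab
% Sketch: treat the whole sample e as the random variable L, so that
% mean(...) averages over 1000 realizations instead of a single scalar.
Priority = zeros(1, numel(omega));
for k = 1:numel(omega)
    ob = @(x) x + (1+omega(k)) * mean(min(max(e - x, 0), VaR - x));
    Priority(k) = fminbnd(ob, 0, VaR);  % deterministic: no inner loop needed
end
```

With the objective deterministic in x, a single fminbnd call per omega replaces the 20000-iteration inner loop entirely.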

