Constrained Weighted Least Squares (Image Interpolation)
Hello,
I'd like your assistance with solving a constrained weighted least squares problem.
I can't use `lsqlin`, since the size of the images creates matrices beyond the abilities of my computer (or any PC, for that matter).
We're given an M-by-N matrix I. On this matrix, a set S of L pixels is designated as anchors. All the other pixels should be recalculated from those anchor pixels using the following constrained problem:
min_E sum_rs( w_rs*(E(r)-E(s))^2 )  s.t.  E(p) = I(p) for every anchor pixel p in S
which is equivalent to:
min_E norm(E - W*E)^2  s.t.  A*E = I_S
where the matrix A just "selects" the anchor pixels from the interpolated vector.
Now, the trick is that for most pixels, those outside the relevant pixel's neighborhood, the weights are zero. Namely, the objective above can be written as a sum over each pixel and its neighborhood only:
min_E sum_r sum_{s in N(r)}( w_rs*(E(r)-E(s))^2 )
Now, assume the neighborhood is something like a K x K neighborhood. I think this should allow moving to a sparse formulation.
Assuming we have all the W_rs needed, how can I build this problem in a form MATLAB can solve?
Thank You.
To give an example for w_rs:
w_rs = exp( -(I(r)-I(s))^2 / (2*sig^2) )
where I(r) and I(s) are the original values of the pixels at r and s, respectively. Of course, it is normalized per pixel (the weights of each pixel sum to 1).
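For concreteness, the weight rule can be sketched in Python/NumPy (an illustration of the formula above; the function name, the sigma default, and the toy data are mine, not from the thread):

```python
import numpy as np

def neighbor_weights(I, r, neighbors, sigma=1.0):
    """Normalized Gaussian affinity weights between pixel I[r] and its neighbors."""
    w = np.exp(-(I[r] - I[neighbors]) ** 2 / (2.0 * sigma ** 2))
    return w / w.sum()  # normalize per pixel so the weights sum to 1

I = np.array([0.2, 0.25, 0.9, 0.21])          # tiny 1-D "image"
w = neighbor_weights(I, 0, np.array([1, 2, 3]))
# similar-valued neighbors (indices 1 and 3) receive most of the weight
```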
4 comments
Matt J
16 Apr 2014
OK. Just to be clear, I didn't ask if W_rs=W_sr. I asked if it has the separable form
W_rs = f_s * f_r
Your recently posted example for W_rs does not have this form.
Accepted Answer
Matt J
16 Apr 2014
Edited: Matt J, 16 Apr 2014
Here's what I'd propose. Note that your constraints are trivial and shouldn't require an iterative solver: just treat the E(p)=I(p) as knowns (which is what they are) and apply the solver exclusively to the unknowns E(~p).
%%Fake initial data
M=200;
N=M;
K=3;
I=rand(M,N);
sig=1;
p=false(1,M*N);
p(1:10:end)=1;
tic; %%Set up equation coefficient matrix
xx=1:M;
yy=1:N;
delta=-(K-1)/2:(K-1)/2; %neighborhood offsets (assume K odd)
[x,xdelta,y,ydelta]=ndgrid(xx,delta,yy,delta);
xd=x+xdelta;
yd=y+ydelta;
idx=(xd>=1 & xd<=M & yd>=1 & yd<=N); %legitimate indices
x=x(idx); xd=xd(idx);
y=y(idx); yd=yd(idx);
r=sub2ind([M,N],x,y); r=r(:);
s=sub2ind([M,N],xd,yd); s=s(:);
W=spdiags(exp(-(I(r)-I(s)).^2/2/sig^2),0,length(r),length(r)); %Gaussian affinity weights
A=W*(sparse(1:length(r),r,1,length(r),M*N)-sparse(1:length(s),s,1,length(s),M*N)); %weighted difference operator
toc;
tic; %%Solve equations
E=zeros(M*N,1);
E(p)=I(p);
E(~p) = -A(:,~p)\(A*E); %solve
toc;
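For readers who want to sanity-check this approach outside MATLAB, here is a rough equivalent in Python/SciPy (a sketch under the same fake-data assumptions; it mirrors the MATLAB variables but is not part of the original answer):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
M = N = 40
K = 3                               # K x K neighborhood, K odd
sig = 1.0
I = rng.random((M, N))
p = np.zeros(M * N, dtype=bool)
p[::10] = True                      # anchor pixels

# enumerate (r, s) pairs for the neighborhood, discarding out-of-bounds offsets
half = (K - 1) // 2
x, dx, y, dy = np.meshgrid(np.arange(M), np.arange(-half, half + 1),
                           np.arange(N), np.arange(-half, half + 1),
                           indexing='ij')
xd, yd = x + dx, y + dy
ok = (xd >= 0) & (xd < M) & (yd >= 0) & (yd < N)
r = np.ravel_multi_index((x[ok], y[ok]), (M, N))
s = np.ravel_multi_index((xd[ok], yd[ok]), (M, N))

# weighted difference operator: row k encodes w_k * (E[r_k] - E[s_k])
w = np.exp(-(I.ravel()[r] - I.ravel()[s]) ** 2 / (2 * sig ** 2))
n = len(r)
rows = np.arange(n)
A = sp.diags(w) @ (sp.csr_matrix((np.ones(n), (rows, r)), shape=(n, M * N))
                   - sp.csr_matrix((np.ones(n), (rows, s)), shape=(n, M * N)))

# pin the anchors, then least-squares solve for the unknowns only
E = np.zeros(M * N)
E[p] = I.ravel()[p]
E[~p] = lsqr(A[:, ~p], -(A @ E))[0]
```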
9 comments
Matt J
17 Apr 2014
Edited: Matt J, 17 Apr 2014
I think the best thing would be to compute W_rs as a sparse matrix:
W=sparse(r,s,exp(-(I(r)-I(s)).^2/2/sig^2),N*M,N*M);
Now you can just normalize the rows to sum to 1:
sW=sum(W,2);
nzRows=sW>0;
W(nzRows,:)=bsxfun(@rdivide,W(nzRows,:),sW(nzRows));
and now
[r,s,w]=find(W);
Although, I think that normally all(nzRows) will be true.
Note that once you have the data triples r,s,w and/or the matrix W, then either of your objective functions can be expressed almost as written and fed to fmincon, e.g.,
function f=objfun1(E,r,s,w)
f=sum(w.*(E(r)-E(s)).^2);
end
function f=objfun2(E,W)
f=norm(E-W*E)^2;
end
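For comparison outside MATLAB, the same two objectives can be written in Python/SciPy on a toy example (an illustration only; the triples and pixel values below are made up, and W is already row-normalized):

```python
import numpy as np
import scipy.sparse as sp

def objfun1(E, r, s, w):
    # sum_rs w_rs * (E[r] - E[s])^2, evaluated directly from the triples
    return np.sum(w * (E[r] - E[s]) ** 2)

def objfun2(E, W):
    # || E - W E ||^2, evaluated from the sparse matrix form
    d = E - W @ E
    return float(d @ d)

n = 3
r = np.array([0, 0, 1, 1, 2, 2])
s = np.array([1, 2, 0, 2, 0, 1])
w = np.array([0.7, 0.3, 0.6, 0.4, 0.5, 0.5])   # each pixel's weights sum to 1
W = sp.csr_matrix((w, (r, s)), shape=(n, n))
E = np.array([1.0, 2.0, 4.0])
```

Note the two objectives are related but not identical functions of E, which is why the Hessian discussion in the answer below distinguishes them.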
More Answers (1)
Matt J
17 Apr 2014
Maybe a more transparent way to see the problem is to notice that it can be expressed in the equivalent matrix/vector form,
min E.'*(I-W)*E (*)
ignoring your constraints for a moment. Here I denotes the identity matrix (overloading the image's name), and W is the sparse matrix with components W(r,s). To see the above, take the given form in your post (divided by two) and do the following massaging.
f/2 = sum_rs( w_rs*(E(r)-E(s))^2 )/2
    = sum_rs( w_rs*E(r)^2 )/2 + sum_rs( w_rs*E(s)^2 )/2 - sum_rs( w_rs*E(r)*E(s) )
    = norm(E)^2 - E.'*W*E      (using that the rows and columns of W each sum to 1)
    = E.'*(I-W)*E
which is exactly (*). Thus, once you've generated W using the sparse command, everything is expressible as a simple quadratic program.
min. f=norm(E-W*E)^2
or equivalently,
min E.'*((I-W)^2)*E
So, the only difference between the two problems is that the Hessian 2*(I-W) in the first gets squared in the second, becoming 2*(I-W)^2.
To deal with "constraints" in either problem, write the problem generically as
min f(E) = E.'*Q*E/2 (**)
and rewrite E in the partitioned form E=[Eu;Ep], where Eu contains the unknown entries and Ep the known E(p). Similarly, partition Q as
Q = [Quu, Qup; Qup.', Qpp]
Substituting into (**) leads to an unconstrained quadratic program in Eu,
min f(Eu) = Eu.'*Quu*Eu/2 + (Qup*Ep).'*Eu
which can be minimized analytically:
Eu = -Quu\(Qup*Ep)
How computationally hard this all is depends on the sparsity of Q, and hence the sparsity of W. The number of bytes needed to hold the sparse matrix W will be approximately 24*N*M*K^2.
For N=M=4000 and K=7, this comes out to about 17 GB. So, you'll need to cut back somewhere, I think.
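The elimination step can be checked on a tiny dense example in Python/NumPy (a sketch under the assumption Q = 2*(I-W) with a symmetric, row-normalized W; the numbers below are made up):

```python
import numpy as np

n = 4
W = np.array([[0.0, 0.6, 0.4, 0.0],     # symmetric, each row sums to 1
              [0.6, 0.0, 0.2, 0.2],
              [0.4, 0.2, 0.0, 0.4],
              [0.0, 0.2, 0.4, 0.4]])
Q = 2 * (np.eye(n) - W)                 # Hessian of f(E) = E' (I-W) E

p = np.array([False, False, True, True])  # last two pixels are anchors
Ep = np.array([1.0, 3.0])                 # known anchor values

# partition Q and eliminate the knowns:
# f(Eu) = Eu' Quu Eu / 2 + (Qup Ep)' Eu + const  =>  Eu = -Quu \ (Qup Ep)
Quu = Q[np.ix_(~p, ~p)]
Qup = Q[np.ix_(~p, p)]
Eu = -np.linalg.solve(Quu, Qup @ Ep)

E = np.empty(n)
E[~p], E[p] = Eu, Ep                    # interpolated values stay between anchors
```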
0 comments