Find Image SVD without using SVD command

Views: 10 (last 30 days)
umair on 1 May 2014
Edited: John D'Errico on 4 Sep 2021
My question is pretty simple, but I am new to SVD analysis. My final goal is to denoise an image using the SVD, but for the moment I am trying to understand the concept of singular value decomposition.
As the title suggests, I want to decompose the image into its component matrices, but I want to avoid using the svd command so I can understand what is actually going on in the process.
The code:
a = double(rgb2gray(imread('Lenna.png')));
a_tp = a';
Z2 = a*a_tp;
Z1 = a_tp*a;
[U,U_val] = eig(Z1);
[V,V_val] = eig(Z2);
Sig = sqrt(U_val+V_val);
figure(1)
Img_new = imshow(U*Sig*V');
I thought U, V, and Sigma were my components, with U the eigenvectors of a'*a, V the eigenvectors of a*a', and Sigma the corresponding eigenvalues, but this isn't right. There is some conceptual mistake somewhere; please help.
P.S. This was the reference tutorial: http://www.youtube.com/watch?v=BmuRJ5J-cwE
2 Comments
umair on 2 May 2014
Edited: Walter Roberson on 4 Sep 2021
clear all; clc;
a = double(rgb2gray(imread('Lenna.png')));
[q d r] = svd(a);               % reference SVD for comparison
a_tp = a';
Z1 = a_tp*a;
[Z1_vec, Z1_val] = eig(Z1);     % eig returns eigenvalues in ascending order
[k p] = size(a);
[m n] = size(Z1_vec);
[o p] = size(Z1_val);
U = zeros(p,m);                 % size of U
for i = 1:1:m
    U(:,i) = (a*Z1_vec(:,n))/sqrt(Z1_val(o,p)); % U in SVD, largest first
    o = o-1; p = p-1;
    n = n-1;
end
[o p] = size(Z1_val);
Sigma = sqrt(Z1_val);
Sig = zeros(o,p);
for i = 1:1:p
    Sig(i,i) = Sigma(o-i+1,p-i+1); % diagonal matrix of singular values
end
V = fliplr(Z1_vec);             % r in SVD
figure(1)
Img_new = imshow(mat2gray(U*Sig*V'));
figure(2)
Img_svd = imshow(mat2gray(q*d*r'));
santhosh kumar buddepu on 4 Sep 2021
It works only for square matrices, not for all matrices.


Answers (1)

John D'Errico on 4 Sep 2021
Edited: John D'Errico on 4 Sep 2021
Too late for an answer to the question posed, but since the question is getting comments asking for more help, I'll post an answer.
The simple answer is to not worry about images at all. Since the question is how to compute the SVD, effectively by use of eig, just start with any array.
A = rand(5,3)
A = 5×3
    0.1243    0.0906    0.5962
    0.2261    0.0861    0.6465
    0.2251    0.3320    0.4505
    0.4900    0.4923    0.2527
    0.1966    0.9218    0.1907
So A is just a small random matrix. Small, so we can see what is happening easily.
[U,S,V] = svd(A)
U = 5×5
   -0.3335   -0.5023    0.2846   -0.5175   -0.5364
   -0.3816   -0.5577    0.0040    0.0260    0.7366
   -0.4158   -0.1516    0.0580    0.8195   -0.3594
   -0.4832    0.1739   -0.8272   -0.2013   -0.1071
   -0.5804    0.6192    0.4810   -0.1392    0.1705
S = 5×3
    1.4231         0         0
         0    0.7640         0
         0         0    0.2855
         0         0         0
         0         0         0
V = 3×3
   -0.4021   -0.0206   -0.9154
   -0.6844    0.6709    0.2855
   -0.6082   -0.7413    0.2838
Can we find the same solution using eig?
[V1,D1] = eig(A'*A)
V1 = 3×3
    0.9154   -0.0206    0.4021
   -0.2855    0.6709    0.6844
   -0.2838   -0.7413    0.6082
D1 = 3×3
    0.0815         0         0
         0    0.5837         0
         0         0    2.0253
This should compare to the diagonal elements of S, but since we formed A'*A, the eigenvalues are the SQUARE of the singular values. And, oh, by the way, eig does not really worry about the order.
sqrt(diag(D1))
ans = 3×1
    0.2855
    0.7640
    1.4231
So now the eigenvalues and the singular values match up. But how about the eigenvectors of A'*A, and the corresponding singular vectors found in V?
The columns of V1 and V are essentially the same, but for the ordering issue, and oh, by the way, you do know that eigenvectors and singular vectors can be arbitrarily multiplied by -1 and still be completely valid? So some of the vectors can have their signs flipped. Still as good as the same.
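For example, here is a quick way (continuing the session above; the sign-fixing step is my own bookkeeping, not anything svd does) to reorder and re-sign the columns of V1 so they line up with V:
[~, idx] = sort(diag(D1), 'descend');  % put the eigenpairs in the descending order svd uses
V1s = V1(:, idx);
s = sign(sum(V1s .* V, 1));            % +1 or -1 per column; assumes no zero dot products
V1s = V1s .* s;                        % flip the arbitrary signs
norm(V1s - V)                          % should be down at rounding-error level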
Next, consider A*A'.
[V2,D2] = eig(A*A')
V2 = 5×5
   -0.4813   -0.5691   -0.2846    0.5023    0.3335
   -0.0222    0.7368   -0.0040    0.5577    0.3816
    0.8413   -0.3050   -0.0580    0.1516    0.4158
   -0.1939   -0.1200    0.8272   -0.1739    0.4832
   -0.1501    0.1610   -0.4810   -0.6192    0.5804
D2 = 5×5
   -0.0000         0         0         0         0
         0    0.0000         0         0         0
         0         0    0.0815         0         0
         0         0         0    0.5837         0
         0         0         0         0    2.0253
Are these the same as the singular values we found above?
First, notice there are now 5 eigenvectors, and 5 eigenvalues. That is because A had more rows than columns, so A*A' is a 5x5 matrix.
So you need to understand the linear algebra now. What is the rank of the matrix A? A has rank 3. So if I multiply A*A', that product matrix cannot have a higher rank than either of the factors in the product. The result will still have rank 3. So the product matrix must now have two ZERO eigenvalues. And of course, the non-zero singular values will be squared.
sqrt(diag(D2))
ans =
   0.0000 + 0.0000i
   0.0000 + 0.0000i
   0.2855 + 0.0000i
   0.7640 + 0.0000i
   1.4231 + 0.0000i
The zero singular values that we found will generally only be approximately zero, so on the order of eps.
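If you want to count the numerically nonzero eigenvalues, a reasonable (though here assumed) tolerance test is:
tol = max(size(A)) * eps(max(abs(diag(D2))));
nnz(abs(diag(D2)) > tol)   % returns 3, the rank of A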
And again, when we look at the eigenvectors as columns of V2, we see the three vectors that correspond to the non-zero eigenvalues may arbitrarily have their signs flipped. Again, that is completely arbitrary and irrelevant. But how about the eigenvectors for the zero eigenvalues? Why are they not the same as those last two columns of U?
This really needs an understanding of linear algebra. Some reading would help, or better yet, a course on linear algebra.
But the two nullspace singular vectors are merely one pair of vectors that span the nullspace of the columns of A.
nullvecs = null(A.')
nullvecs = 5×2
   -0.5175   -0.5364
    0.0260    0.7366
    0.8195   -0.3594
   -0.2013   -0.1071
   -0.1392    0.1705
V2(:,1:2)
ans = 5×2
   -0.4813   -0.5691
   -0.0222    0.7368
    0.8413   -0.3050
   -0.1939   -0.1200
   -0.1501    0.1610
Because that subspace is 2-dimensional, we can find infinitely many linear combinations of those two vectors that span the same subspace. eig(A*A') arbitrarily finds two other such vectors. As such, we can find the linear transformation matrix that will exactly rotate one of those pairs of vectors into the other.
nullvecs\V2(:,1:2)
ans = 2×2
    0.9979    0.0654
   -0.0654    0.9979
Again, the two sets of vectors are just a different (but equally valid) way to define a basis for the nullspace of the columns of A.
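A quick check (continuing the session above) confirms that this transformation is orthogonal, so it really does just rotate one basis into the other:
T = nullvecs \ V2(:,1:2);
norm(nullvecs*T - V2(:,1:2))   % ~0: both pairs span the same subspace
norm(T'*T - eye(2))            % ~0: T is orthogonal (a rotation/reflection)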
If you want to apply this to an image, an image is just an array of numbers, just as I did. And of course, it will be a larger array than my simple test case, but what I did will still be equally valid.
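As a minimal sketch of that (assuming the 'Lenna.png' file from the question, a full-rank image, and an arbitrarily chosen rank k; the truncation at the end is the usual low-rank denoising idea):
A = double(rgb2gray(imread('Lenna.png')));  % any 2-D array of doubles works here
[V, D] = eig(A'*A);
[d, idx] = sort(diag(D), 'descend');        % descending order, to match svd
V = V(:, idx);
s = sqrt(max(d, 0));                        % singular values; clip tiny negatives
U = (A*V) ./ s';                            % left singular vectors, one column per s(j)
k = 50;                                     % keep only the k largest singular values
Ak = U(:,1:k) * diag(s(1:k)) * V(:,1:k)';
imshow(mat2gray(Ak))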
Finally, there may be some issues in the tiny singular values, because when you form A*A' and A'*A, you are effectively squaring things. And that can cause problems because you are only working in double precision arithmetic. That means the SVD will be far more capable of accurately expressing those small singular values and the corresponding singular vectors, because the SVD does not form those product matrices. Using EIG is NOT how you normally want to compute the SVD, even if only for this reason.
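A tiny experiment (the construction here is mine) makes the point: give a matrix a singular value near sqrt(eps), and the eig route destroys it while svd recovers it:
[Q1, ~] = qr(randn(4,2), 0);     % random matrix with orthonormal columns
[Q2, ~] = qr(randn(2));          % random 2x2 orthogonal matrix
B = Q1 * diag([1, 1e-9]) * Q2';  % exact singular values: 1 and 1e-9
svd(B)                           % recovers both accurately
sqrt(abs(eig(B'*B)))             % the 1e-9 value is lost: 1e-18 is below eps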
2 Comments
santhosh kumar buddepu on 4 Sep 2021
Thank you very much, sir, for your elaborate explanation. Is it possible to do this without using eig as well? I want to implement the SVD on a hardware platform such as an FPGA, so I need that.
John D'Errico on 4 Sep 2021
Edited: John D'Errico on 4 Sep 2021
Is it possible? Yes. Of course it is. I wrote an SVD myself long ago, though in APL, before APL had an SVD. No, I cannot (actually, I will not) teach you in detail how to do that here. The SVD can be computed using a sequence of Householder reflections to first bidiagonalize the matrix, and then the algorithm I used followed up with a sequence of Givens rotations to kill off the off-diagonal elements. You can find it described in the old LINPACK manual (accompanied by pseudo-code) as I recall. If you really need it, then you will need to write it.
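Since FPGAs were mentioned: a one-sided Jacobi SVD, which is a different algorithm from the Householder/Givens route described above, is popular in hardware precisely because it needs nothing but plane rotations. Here is a rough sketch, assuming full column rank and m >= n, as an illustration rather than production code:
function [U, S, V] = jacobi_svd(A, tol)
% One-sided Jacobi SVD sketch: rotate column pairs of U until all columns
% are mutually orthogonal, accumulating the rotations in V, so A*V == U
% holds throughout and A == U*S*V' after normalization.
if nargin < 2, tol = 1e-12; end
n = size(A, 2);
U = A;
V = eye(n);
done = false;
while ~done
    done = true;
    for p = 1:n-1
        for q = p+1:n
            alpha = U(:,p)'*U(:,p);
            beta  = U(:,q)'*U(:,q);
            gamma = U(:,p)'*U(:,q);
            if abs(gamma) > tol*sqrt(alpha*beta)
                done = false;
                zeta = (beta - alpha)/(2*gamma);
                if zeta == 0
                    t = 1;   % 45-degree rotation when the column norms are equal
                else
                    t = sign(zeta)/(abs(zeta) + sqrt(1 + zeta^2));
                end
                c = 1/sqrt(1 + t^2);
                s = c*t;
                G = [c s; -s c];            % a plane rotation, as in Givens
                U(:,[p q]) = U(:,[p q])*G;
                V(:,[p q]) = V(:,[p q])*G;
            end
        end
    end
end
sv = sqrt(sum(U.^2, 1));          % column norms are the singular values
[sv, idx] = sort(sv, 'descend');
U = U(:, idx) ./ sv;              % normalize; fails if A is rank deficient
V = V(:, idx);
S = diag(sv);
end
A sanity check: with A = rand(5,3) and [U,S,V] = jacobi_svd(A), norm(A - U*S*V') should come out at rounding-error level.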

