Point registration - How to get the eigenvectors to have a consistent direction?

Hello,
I'm trying to replicate the algorithm of the attached paper. It's about feature-based correspondence based on the eigenvectors. I'll summarize:
We have a point cloud/shape (as in Figure 2, which I'm trying to replicate) and build a matrix H (an adjacency/proximity matrix of the points) that describes the intra-distances (not inter-distances) within one image. From this matrix we compute the eigenvectors and eigenvalues. The eigenvalues have to be reordered from largest to smallest, and the signs of the eigenvectors adapted so that they have a consistent direction. The rows of the eigenvector matrix are the feature vectors. Finally, we compute the squared Euclidean distance between the features to get the matrix Z. The smallest entry in each row tells us which point corresponds to which.
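To fix ideas, here is a rough NumPy sketch of that pipeline as I understand it. The Gaussian weighting in H and all the names here are my own assumptions, not necessarily the paper's exact formulation:

```python
import numpy as np

def proximity_matrix(P, sigma=1.0):
    # H(i,j) built from the pairwise intra-distances of one point set;
    # a Gaussian weighting is assumed here, the paper may differ
    d2 = ((P[:, None, :] - P[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def sorted_modes(H):
    # eigen-decomposition with eigenvalues reordered from big to small
    w, U = np.linalg.eigh(H)
    order = np.argsort(w)[::-1]
    return w[order], U[:, order]

# toy example: the second "body" is a shuffled copy of the first
P1 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
perm = [2, 0, 3, 1]
P2 = P1[perm]

w1, U1 = sorted_modes(proximity_matrix(P1))
w2, U2 = sorted_modes(proximity_matrix(P2))

# sign adaptation of paired eigenvectors (the step under discussion)
for j in range(U1.shape[1]):
    if np.linalg.norm(U1[:, j] - U2[:, j]) > np.linalg.norm(U1[:, j] + U2[:, j]):
        U2[:, j] = -U2[:, j]

# Z(i,j) = squared Euclidean distance between feature rows
Z = ((U1[:, None, :] - U2[None, :, :]) ** 2).sum(axis=-1)
match = Z.argmin(axis=1)  # row i of body 1 pairs with point match[i] of body 2
```

The sign-adaptation loop is exactly the step that fails for me, so this sketch reproduces the problem rather than solving it.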
Now I get the same numbers for the eigenvectors, but not always the right sign. For the matrix U1 of the eigenvectors of the first body, the second column should be multiplied by -1. For the matrix U2 of the eigenvectors of the second body, the second and fourth column should be multiplied by -1.
I tried to find out whether this is somehow apparent from the angles between the eigenvectors (see the code), but the result (the matrix below the code) doesn't tell me anything. Maybe you have a better eye.
for i = 1:minNofP
    for j = 1:minNofP
        % clamp to [-1,1] before acosd to avoid complex results
        ThetaInDegrees.CosTheta(i,j) = max(min(dot(U1(:,i),U2(:,j))/(norm(U1(:,i))*norm(U2(:,j))),1),-1);
        ThetaInDegrees.Angle(i,j) = real(acosd(ThetaInDegrees.CosTheta(i,j)));
    end
end
The problem is that without the sign adaptation, the algorithm doesn't work. This may be "just" a mathematics problem, but I am not able to solve it, and I know there are a lot of people here much more knowledgeable (and smarter) than me.
There are many things attached:
1) The pdf of the article, in case you want to have a look at it. The consistent-direction thing is explained on the third page, right column, just before the equation.
2) Adjacency_V2 is an adapted version of the adjacency function from https://www.mathworks.com/matlabcentral/fileexchange/29227-eigen-function-of-the-laplacian. I adapted it to use my Euclidean distance function. It should not be a problem; it has worked fine for me.
3) TruncateToMinNumber is a function to truncate a matrix to the smaller number of eigenvectors. It has no effect on this example, as both matrices have the same number of eigenvectors and so they stay the same.
4) Correct EV contains the correct eigenvectors given in the paper. The algorithm works with them; I've used them to compare.
5) DistanceBetweenF is my function for the euclidean distance between the features.
6) Features_EV_EW_v2 is my code. I've written %IGNORE on the lines you don't need but that I didn't want to erase.
I tried to make the question as complete as possible, but concise enough that someone actually reads it. I'm very sorry if I forgot some function or anything relevant for you to be able to help me; I will edit it in as soon as I know about it.
As always thank you very much.

Answers (1)

Dana on 25 September 2020
This isn't my area of expertise, but one simple "orientation" test is as follows. Consider two scalars a and b. One way to check whether these scalars are of the same sign (i.e., "oriented in the same direction") is to compare |a + b| with |a - b|. If the former is larger they are of the same sign, while if the latter is larger they are of opposite signs. We can extend this principle to the case of vectors a and b by saying that a and b are oriented in the same direction if and only if ||a + b|| > ||a - b||. Otherwise, they are oriented in opposite directions (in which case we would want to flip one of them).
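Equivalently, since ||a - b||^2 - ||a + b||^2 = -4 a·b, the norm test is the same as checking the sign of the dot product. A tiny NumPy illustration of the equivalence (the vectors are arbitrary, chosen roughly opposite in direction):

```python
import numpy as np

a = np.array([1.0, 2.0, -0.5])
b = -0.9 * a + np.array([0.05, -0.02, 0.01])  # roughly opposite to a

# the norm test: flip b when the difference is longer than the sum ...
flip_by_norm = np.linalg.norm(a - b) > np.linalg.norm(a + b)
# ... which is the same as a negative dot product
flip_by_dot = np.dot(a, b) < 0

assert flip_by_norm == flip_by_dot  # both tests agree: flip b
```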

5 comments

Hello Dana, thank you for your answer.
I have tried this but I'm not able to get the solution from it.
I've made a function (see below) to do this. The matrix comparisson shows, row-wise, the difference between the norms of the sum and the difference of the column vectors of U1 and U2.
function [sumof, substractionof, comparisson] = whichislonger(U1, U2)
    N = size(U1, 2); % number of columns (eigenvectors) of U1
    M = size(U2, 2); % number of columns of U2
    sumof = zeros(N, M);
    substractionof = zeros(N, M);
    for i = 1:N
        for j = 1:M
            sumof(i,j) = norm(U1(:,i) + U2(:,j));
            substractionof(i,j) = norm(U1(:,i) - U2(:,j));
        end
    end
    % positive entry: same orientation; negative: opposite orientation
    comparisson = sumof - substractionof;
end
The matrix comparisson should tell me that the vectors U1(:,2), U2(:,2) and U2(:,4) should be oriented the other way.
This is what comparisson shows me (see the attached output). If I do the same thing for just U1 (that is, replacing U2 with U1 in whichislonger), everything is zero or positive.
Can you tell me what to look for or if I am missing something?
First, as I understand it, the eigenvalues of each of your two original matrices H1 and H2 are all positive and should first have been sorted in descending order. Did you do this? For H1, something like:
[U1,D1] = eig(H1); % get eigenvector/eigenvalue matrices
[d1,srt1] = sort(diag(D1),'descend'); % descending sort of diag. of D1
D1 = diag(d1); % new D1 with sorted eigenvalues
U1 = U1(:,srt1); % corresponding re-arrangement of columns of U1
and then the same for H2.
With this sort in place, I could be wrong, but as I understand it the method you're using implicitly "pairs" the eigenvalues of H1 and H2 according to their relative size. So the largest eigenvalue of H1 is paired to the largest eigenvalue of H2, the second-largest of H1 to the second-largest of H2, etc. This pairing then extends to the eigenvectors (e.g., the eigenvector corresponding to the largest eigenvalue of H1 is paired to the eigenvector corresponding to the largest eigenvalue of H2, etc.). As a result, you only need to compare ||U1(:,j) - U2(:,j)|| and ||U1(:,j) + U2(:,j)|| for each of these pairs of eigenvectors:
N = size(H1,1);
for j = 1:N
    if norm(U1(:,j)-U2(:,j)) > norm(U1(:,j)+U2(:,j))
        U2(:,j) = -U2(:,j); % flip to match the orientation of U1(:,j)
    end
end
Yes, I did sort them the way you said, so that I get the exact same values. Both H1 and H2 are correct.
The result stays the same with the code you suggested, though. There is no change in the orientation of the vectors.
You can try it out yourself if you want to; I've attached the variables. Maybe I am doing something wrong, I don't know. It's frustrating.
V11 and V22 are the solutions (what I want to achieve with U1 and U2), H11 and H22 are H1 and H2 respectively, and U1 and U2 are my matrices, where U1(:,2), U2(:,2) and U2(:,4) are wrongly oriented.
Thank you again!
First, it's worth pointing out that having U1(:,2) and U2(:,2) both in the opposite direction doesn't matter. All that matters is that corresponding eigenvectors are oriented in the same direction as each other; which direction that is is irrelevant. You can check this by noting that if you multiply both those columns by -1 you'll get the same eventual results when you compute your association matrix.
The fact that U2(:,4) is switched but not U1(:,4), though, indicates that they must be using some other orientation method. Looking at the paper you linked, they allude to a process where they "orient the axes in V2 one at a time, choosing for each that direction which maximally aligns the two sets of feature vectors". They're not clear on what this is, but my guess is it's some kind of greedy search algorithm, where they test all the possible orientations (e.g., flip only column 1 of V2, flip columns 1 & 2, flip only 2, flip 1 & 3 & 4, etc.), and then choose the one that aligns the feature vectors as closely as possible according to some metric. For example, it could be something like: compute the association matrix Z for each possible orientation, find the smallest elements in each row and add them up, and then choose the orientation that gives the smallest sum.
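For what it's worth, a brute-force version of that guess could be sketched like this (NumPy for brevity; the cost metric, summing the row-wise minima of Z, is only my assumption about what "maximally aligns" means):

```python
import numpy as np
from itertools import product

def association_cost(U1, U2):
    # sum of the row-wise minima of the squared-distance matrix Z
    Z = ((U1[:, None, :] - U2[None, :, :]) ** 2).sum(axis=-1)
    return Z.min(axis=1).sum()

def best_orientation(U1, U2):
    # try every sign flip of the columns of U2 (2^n patterns)
    # and keep the one with the smallest association cost
    n = U2.shape[1]
    best_signs, best_cost = None, np.inf
    for signs in product([1.0, -1.0], repeat=n):
        cost = association_cost(U1, U2 * np.array(signs))
        if cost < best_cost:
            best_signs, best_cost = signs, cost
    return U2 * np.array(best_signs), best_signs
```

This is exponential in the number of eigenvectors, so it is only practical for small matrices like the ones here; a greedy column-by-column variant would scale better but can get stuck.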
Unfortunately, I can't be much help beyond that.
Thank you very much Dana, for taking the time to help me, answering my questions and even reading the paper. That's very generous.
I will try out your suggestion at the end. I'll tell you if it works. If it doesn't, it's fine :)
Edit: In the end I posted a question about it here, in case someone has the same question someday. As it is, I wasn't so keen on letting this method go.



Asked: 25 September 2020
Edited: 2 October 2020
