# nonlinear 2d calibration of line-camera using known pattern - how to align world coordinates to points?

8 views (last 30 days)
arnold 2 Jul 2016
Comment: praveen rai 28 Dec 2017
I rephrased the question, see last comment
--------------------------------------------
Hi all,
I'm currently working with a high-resolution line-camera scanner and we want to use it for measurements, so we need to be able to calibrate it properly in 2D.
We have an accurate test-pattern made by photolithography. We can assume it to be 'exact'. I have managed to come up with a very robust subpixel circle recognition to find the points of interest so I have no question regarding image recognition.
What is unclear to me at this point is how to use the found image coordinates together with the known spatial correlation in order to come up with a distortion map which ultimately would be used to correct the distortion.
The example shows a cropped subset of a typical image of circles with 400 µm x/y spacing. The recognition is already done (red circles with a blue center cross), leaving out structures that are too close to the edge. Now, how do I calculate a distortion map from that?
I played around with some algorithms like knnsearch in order to look at the local distances, and I can see that they vary significantly. I don't want to simply create a coordinate system using arbitrary points and then measure relative distances to that system, because I fear that might induce large errors over big distances.
My guess is that a combined approach using the local information (distances) together with the 'far field' (straight lines, orthogonality, etc.) should give the most robust outcome, but I'm stuck on how to continue. In the end we want to do many measurements in order to determine the measurement errors caused by e.g. vibration and thermal influences.
I'm fairly certain this has been solved (many) times, but as a non-expert in the field I just don't know what to search for exactly. I would appreciate any help!
regards
Arnold

#### 6 comments

arnold 3 Jul 2016
Rephrased question:
I don't want to reinvent the wheel. I want to determine the camera distortion so I can use it to unwarp this image (and other images). I use an image of a calibration pattern. The pattern is larger than the field of view in order to cover the entire area. This pattern has a known grid distance (real world). The locations of the pattern marks are determined accurately, giving image coordinates. From here on out I have the following question(s):
1. The final goal would be a function that takes the image coordinates and the known grid spacing and returns the image distortion vector field.
I'm at a loss. I'm sure, though, that I must be looking at it the wrong way. Anyway, people must have solved this many times over, but apparently I can't find an implementation that's general enough.
Any help greatly appreciated.
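(For what it's worth, once each detected center has been paired with its ideal grid position, point 1 can be sketched with fitgeotrans from the Image Processing Toolbox. The variable names are illustrative: imgPts are the detected centers in pixels, idealPts the corresponding ideal grid positions converted to nominal pixels.)

```matlab
% Sketch: distortion vector field + unwarping from matched point pairs.
% imgPts   - N-by-2 detected circle centers (pixels)
% idealPts - N-by-2 corresponding ideal grid positions in nominal pixels
%            (grid index * 400 µm * nominal px-per-µm scale)

% Residual field: detected position minus best similarity-fit prediction.
simT   = fitgeotrans(idealPts, imgPts, 'nonreflectivesimilarity');
pred   = transformPointsForward(simT, idealPts);
dField = imgPts - pred;            % per-point (dx,dy) distortion, pixels
quiver(imgPts(:,1), imgPts(:,2), dField(:,1), dField(:,2));

% Unwarping: fit a smooth image->ideal mapping and resample the image.
tform     = fitgeotrans(imgPts, idealPts, 'polynomial', 3);
corrected = imwarp(I, tform);
```

(If a global polynomial is too stiff, fitgeotrans also offers locally varying mappings via 'lwm' or 'pwl'.)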
arnold 5 Jul 2016
no idea, anyone?
praveen rai 28 Dec 2017
@arnold Can you provide the code for your robust subpixel circle recognition? I want to do the same to find squares.


### Answers (1)

Image Analyst 2 Jul 2016
I'd look at the average inter-row spacing as a function of row. It should be uniform. If there's a bias, you can determine that and use it to correct your real, non-calibration images. If you scan the calibration target multiple times and find that the inter-row distance is not deterministic and basically random, then there's not much you can do.
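A sketch of that check, assuming pts holds the detected centers (N-by-2, [x y] in pixels) with the scan rows roughly horizontal and a nominal row pitch of rowPitchPx pixels (both names are assumptions):

```matlab
% Group centers into grid rows by rounding y to the nearest nominal row,
% then look at the spacing between consecutive row means.
rowId  = round(pts(:,2) / rowPitchPx);
rowY   = accumarray(findgroups(rowId), pts(:,2), [], @mean);
rowGap = diff(sort(rowY));               % inter-row spacing, pixels
fprintf('mean gap: %.3f px, std: %.3f px\n', mean(rowGap), std(rowGap));
% Repeat over several scans: a reproducible bias can be corrected,
% random row-to-row jitter cannot.
```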

#### 13 comments

arnold 3 Jul 2016
Maybe I need to rephrase the question and re-post it under a new title? I think I have most of the pieces; the one missing piece is the allocation of image points to world points.
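For that allocation, if the rotation between grid and sensor is small, a sketch along these lines may already do (pitchPx, the approximate grid pitch in pixels, is an assumed name):

```matlab
% Assign each detected center an integer grid index by rounding its
% offset from an arbitrary reference center in units of the grid pitch.
p0       = pts(1,:);                                 % reference center
ij       = round(bsxfun(@minus, pts, p0) / pitchPx); % integer grid indices
worldPts = ij * 0.4;                                 % world coords in mm (400 µm pitch)
% With noticeable rotation, first fit a coarse similarity transform from
% a few hand-matched pairs, map pts into grid space, then round.
```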
Walter Roberson 3 Jul 2016
I recommend posting your rephrased question here as a comment on your original question.
arnold 3 Jul 2016
done, see above.

