How to combine three colour channels into the 'best' sRGB representation using a colour checker chart?

Views: 17 (last 30 days)
There is probably no 'right' answer for this but I would appreciate some advice from image experts!
I have a 16-bit 1280 x 1024 monochrome camera. I want to use it to create some realistic colour pictures (or as close as possible to a true RGB representation). To do this, I am using three colour bandpass filters and a white light source. My filters are narrower than the Bayer RGB filters on a professional camera.
For interest, the filters are:
  • Bandpass λ = 593–666 nm ("red filter")
  • Bandpass λ = 540–577 nm ("green filter")
  • Bandpass λ = 430–490 nm ("blue filter")
My workflow:
  1. I shine the white light source on an X-Rite ColorChecker, a chart of 24 colour squares that includes red, green, and blue squares typical of the primaries used in colour photographic processing.
  2. I take a monochrome image of the colour checker squares through each of the three filters, giving me three greyscale images of the squares. I can then import the greyscale values into MATLAB to create a 1280 x 1024 x 3 array (see the sketch after this list).
  3. I scale and bias the array values, and then export an RGB image that is as close as possible to the 'true' colours. I know it can't be perfect.
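For concreteness, step 2 might look something like this in MATLAB (the file names are placeholders for my actual captures):

% Step 2 sketch - load the three filtered captures and stack them.
% File names are placeholders.
imR = im2double(imread('red_filter.tif'));    % 16-bit mono -> double in [0,1]
imG = im2double(imread('green_filter.tif'));
imB = im2double(imread('blue_filter.tif'));
rgbStack = cat(3, imR, imG, imB);             % 1280 x 1024 x 3 array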
How do I do step 3?
The X-Rite ColorChecker squares have known sRGB values (0-255). For example: red square [180 49 57], green square [67 149 74], blue square [35 63 147] (Source - page 5).
For my three monochrome images, when I examine the greyscale values of the red, green and blue squares under the three filters, I get the following mean greyscale values.
  • Image 1 - Red filter: Red square 28540, green square 8507, blue square 4077
  • Image 2 - Green filter: Red square 6636, green square 35605, blue square 8917
  • Image 3 - Blue filter: Red square 11129, green square 26291, blue square 51405
Proportionately, these raw figures don't look far off the colour checker's sRGB values, which is a good start. Perhaps I just need a linear combination that minimises the difference.
(It looks as though the blue content of the white light was higher than the red/green content, because the greyscale values are generally higher through the blue filter. So some normalisation of images 1-3 may be required.)
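For the normalisation, I imagine something along these lines, using the chart's white square (whiteRows and whiteCols are placeholder indices for that patch in my images):

% One possible normalisation: scale each filtered image so that the chart's
% white patch comes out equal in all three channels.
wR = mean(imR(whiteRows, whiteCols), 'all');
wG = mean(imG(whiteRows, whiteCols), 'all');
wB = mean(imB(whiteRows, whiteCols), 'all');
rgbStack = cat(3, imR/wR, imG/wG, imB/wB);   % white patch maps to about [1 1 1]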
Anyone have experience with this?

Accepted Answer

Image Analyst on 28 Nov 2023
Shouldn't be too hard. You have three grayscale images: one taken through a red filter, one through a green filter, and one through a blue filter, so you know the actual (measured) R, G, and B values for each of the 24 chips. And by looking at the spec for the chips' sRGB values, you can choose a model and do a least squares fit.
So basically
R_estimated = a0 + a1*R_actual + a2*G_actual + a3*B_actual + a4*R_actual*G_actual + etc.
and you have 24 of those equations, one for each chip. Then you do a least squares fit to get the a0 - a9 coefficients, or however many terms you decided to use. Then you do it again to get G_estimated and the 10 "b" coefficients, and again for the blue channel to get the 10 "c" coefficients. Now you can create red, green, and blue images from the coefficients you just found and your three actual images. See my attached tutorial (PPTX presentation) and code.
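Roughly, the fitting step could look like this in MATLAB - measured and target are placeholder 24-by-3 matrices of your mean chip values and the published sRGB values, both scaled to [0,1]:

% Minimal sketch of the fit. Placeholder inputs:
%   measured - 24x3 mean camera values per chip (columns = R/G/B filter)
%   target   - 24x3 published sRGB values for the chart
R = measured(:,1);  G = measured(:,2);  B = measured(:,3);
% Design matrix: constant, 3 linear, 3 quadratic, 3 cross terms = 10 columns
X = [ones(24,1), R, G, B, R.^2, G.^2, B.^2, R.*G, R.*B, G.*B];
% One least squares solve per output channel (24 equations, 10 unknowns each)
a = X \ target(:,1);   % coefficients for estimated red
b = X \ target(:,2);   % coefficients for estimated green
c = X \ target(:,3);   % coefficients for estimated blue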
  Comments: 7
Image Analyst on 29 Nov 2023
The images do not need to be white balanced or to have any particular gamma set, because the transform will handle that - basically undo or ignore it - and the correction should be about as good as it would have been with intermediate transforms to linearize gamma and color correct the image, maybe better.
The built-in MATLAB functions are not accurate enough for us, given the very small color differences we need to measure. The color correction matrix they expect has only 4 terms in the model: in other words, estimated R = a constant, plus a coefficient times actual R, plus another coefficient times actual G, plus another coefficient times actual B. That does not do much warping of the color space, so the color correction can't be super accurate: there are no square terms (R^2, etc.) and no cross terms (R*G, R*B, and G*B). So I use a model that has a constant, three linear terms, three quadratic terms, and three cross terms, for 10 terms total. The built-in functions can't handle a model like that.
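Applying the fitted coefficients to the whole image might look roughly like this, reusing the placeholder names from the sketch above (rgbStack is the normalised camera array in [0,1]; a, b, c are the 10-element coefficient vectors):

% Sketch of applying the 10-term model to the full image.
[rows, cols, ~] = size(rgbStack);
P = reshape(rgbStack, [], 3);                  % one row per pixel
R = P(:,1);  G = P(:,2);  B = P(:,3);
Xp = [ones(size(R)), R, G, B, R.^2, G.^2, B.^2, R.*G, R.*B, G.*B];
corrected = [Xp*a, Xp*b, Xp*c];                % estimated sRGB per pixel
corrected = reshape(min(max(corrected, 0), 1), rows, cols, 3);
imshow(corrected)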
Most people are probably oblivious: to compare images from two different cameras, taken at different times with different lighting and exposures, they just run the images through rgb2lab and deltaE. But you and I know that will give garbage, arbitrary results.
The way I explain it intuitively is to imagine you took two snapshots of the same color checker chart, one at normal exposure and one at half the exposure time, and then computed the delta E between chips using that simplistic method. It will show huge delta E color differences even though the object itself (the chart) didn't change color at all. There is a delta E, but it is due to the exposure change, not to the subject changing - and the subject is usually what you're interested in if you're doing research on stain removal, fading, color shift, etc.
rgb2lab() does not give correct LAB values unless you have standard conditions - for example, pure white having exactly the sRGB values the theory says it should. But of course it never will. So essentially you can get whatever LAB values you want for your object just by varying things like the color temperature of the lighting, the illuminance on your sample, etc. And of course that is not what you want. What you want is to get the same LAB value for a patch in your scene (a color checker chip) as you would have gotten had you slapped it on top of a spectrophotometer, which gives you the "true" LAB intrinsic to the sample and the chosen illuminant.
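Here's a quick numeric sketch of that exposure trap - the chip value is made up purely for illustration:

% Same surface at half the exposure shows a large "colour difference",
% even though nothing about the object changed.
chip     = [0.60 0.40 0.30];                  % hypothetical sRGB of one chip
chipHalf = 0.5 * chip;                        % same chip, half the exposure
dE = norm(rgb2lab(chip) - rgb2lab(chipHalf))  % large CIE76 delta E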
That said, with the imaging systems I design we always white balance the camera before snapping pictures. And to account for changes in camera temperature (which can affect the pixel values even if the exposure and lens iris are the same), and for small changes in our lighting (even though we have optical feedback and the user double-checks the light level with a lux meter), we take several snapshots at different exposures to "home in" on the optimal exposure and make sure the digital values are as close to the target values as possible. Few people in industry do, or even know about, the lengths we go to to make sure the values are as accurate as possible, both before and after image capture.
For the 4 big color measurement companies (X-Rite, DataColor, HunterLab, and Konica Minolta), I believe most or all have some kind of colorimeter that you can use with your monitor and your sample to try to make sure the image on screen or in print matches the actual object. Of course it could be inherently different, because a reflective object like a color checker chart will never look exactly like a light-emitting object like your computer monitor, no matter how closely you match the colors - the optics are just different. But it can be made good enough for all intents and purposes. Personally, I've never much cared what it looks like on screen, so I've never calibrated my monitor, though for others in my company it matters and they have. So I don't know whether the color calibration solutions those companies provide "fix" the monitor using just the crude 3-term ICC model, or whether they have some proprietary way to correct it better with a higher order model.
Sorry if your head is spinning by now. Color science is a very tricky topic, even to me, and I got into it about 40 years ago. I've talked to world experts in color at conferences, and even they admit they don't know everything - that's why it's an ongoing topic of research. Much of the trickiness comes in because we're not just talking about physics, radiometry, and optics; it's confounded by the weirdness of human vision, psychometrics, and physiology.
Steve Francis on 30 Nov 2023
@Image Analyst, you are a gentleman. Thank you for your patience and for your generosity.


More Answers (0)

Release: R2022b
