Compare color of images
I have two standard shades of green.


I also have a test image (a leaf with a black background).

Can you show me how to compare the test image with the two standard images and determine which of the two standards the test image is closer to, in terms of color?
Thank you.
Accepted Answer
Additional Answers (2)
Walter Roberson
19 Dec 2013
leaf_mask = LeafImage(:,:,2) > 32;   % green channel above threshold; adjust the 32 if you want
inleaf_pixels = reshape( LeafImage(leaf_mask(:,:,[1 1 1])), [], 3);   % N x 3 list of in-leaf [R G B] values
leaf_mean = mean(inleaf_pixels, 1);   % 1 x 3 mean color
Now likewise you can find the mean colors for the two standard shades. With those in place, you can norm() the difference in mean colors (i.e., take the Euclidean distance).
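Putting the pieces together, a minimal end-to-end sketch (the file names are placeholders, and the standards are assumed to be solid-color swatch images):

```matlab
LeafImage = imread('test_leaf.png');        % leaf on a black background
Std1      = imread('standard_green_1.png'); % first standard shade
Std2      = imread('standard_green_2.png'); % second standard shade

% Mask out the black background, then average the in-leaf pixels
leaf_mask     = LeafImage(:,:,2) > 32;      % adjust the 32 if needed
inleaf_pixels = reshape(LeafImage(leaf_mask(:,:,[1 1 1])), [], 3);
leaf_mean     = mean(double(inleaf_pixels), 1);   % 1 x 3: [R G B]

% Mean color of each standard swatch
std1_mean = mean(reshape(double(Std1), [], 3), 1);
std2_mean = mean(reshape(double(Std2), [], 3), 1);

% Euclidean distance in RGB: the smaller distance is the closer shade
if norm(leaf_mean - std1_mean) < norm(leaf_mean - std2_mean)
    disp('Test image is closer to standard shade 1')
else
    disp('Test image is closer to standard shade 2')
end
```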
6 comments
Walter Roberson
19 Dec 2013
leaf_mask is created as a binary array, true only in the places where the green channel is sufficiently bright. This will include the white edges around the leaf, but will exclude the black area.
leaf_mask will be 2D. indexing it at (:,:,[1 1 1]) is short-hand for either cat(3,leaf_mask,leaf_mask,leaf_mask) or repmat(leaf_mask,1,1,3) . All three expressions extend leaf_mask into 3D by copying it into three stacked planes.
Indexing the image at the 3D mask will extract the content of the pixels where the mask is true.
It is a peculiarity of MATLAB that when you use logical indexing to extract from an array that is 2D or higher, the result will be stored as a column vector rather than as an array. This makes sense when you consider that when I did the extraction, the 3D mask could have specified that I wanted to extract (for example) the Red and Green of one pixel, but the Green only of a different pixel: logical masking is not necessarily going to pull out nice rectangular areas, so MATLAB returns a vector of the information that was requested.
As I know that I did extract all 3 panes of every pixel that I extracted at all, I can then reshape() the column vector. The result will be an N x 3 matrix where the second dimension is R, G, or B.
mean(X, 1) applied to an N x 3 matrix X will give a 1 x 3 result -- the mean R, mean G, mean B.
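The extract / reshape / mean steps can be traced on a tiny made-up example:

```matlab
% 2 x 2 RGB image with made-up values, one 2 x 2 slice per channel
img  = cat(3, [10 20; 30 40], [50 60; 70 80], [90 100; 110 120]);
mask = logical([1 0; 0 1]);        % keep pixels (1,1) and (2,2)

v = img(mask(:,:,[1 1 1]));        % column vector: [10;40;50;80;90;120]
p = reshape(v, [], 3);             % 2 x 3, one row per kept pixel
m = mean(p, 1);                    % 1 x 3 mean: [25 65 105]
```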
Whether you use srgb2lab() before finding the Euclidean distance depends on whether srgb to rgb correction has already been done in reading the images (TIFF in particular can do it as they are read). It also depends on what you mean by "falls closer, in terms of color". If you are referring to Hue, then consider HSV. It seems to me that Saturation is also important for natural shades of green.
LAB has the difficulty that Lab values do not define absolute colors unless the white point is also specified.
My thought would be that if the difference between Euclidean distance for sRGB vs LAB is going to be significant when you only have two shades to compare against, then you should probably optimize by choosing different shades.
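The same comparison can be done in L*a*b* instead of RGB. A minimal sketch, assuming the Image Processing Toolbox (rgb2lab() appeared in R2014b; on older releases, makecform('srgb2lab') with applycform() is the equivalent route) and that leaf_mean, std1_mean, and std2_mean are the 1 x 3 RGB means rescaled to [0,1]:

```matlab
% rgb2lab accepts a c x 3 colormap of RGB values in [0,1]
leaf_lab = rgb2lab(leaf_mean);
std1_lab = rgb2lab(std1_mean);
std2_lab = rgb2lab(std2_mean);

d1 = norm(leaf_lab - std1_lab);   % approximately perceptual (delta E) distance
d2 = norm(leaf_lab - std2_lab);
```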
Walter Roberson
19 Dec 2013
1) No, it will not become [G,G,G]. The mask is derived from the G plane in this case, but it might have been derived more complexly in the more general case. For example if the primary color was yellow, in order to figure out where the yellow was it would be necessary to look at both the G and B planes.
The two-dimensional mask tells you which locations in the X/Y plane are "of interest". You want to extract the pixel data at those locations. But the pixel data is three-dimensional, not two-dimensional, and MATLAB does not have any direct syntax for saying "Take this 2D mask and extract all three channels at each location indicated in the 2D mask". You need a 3D mask to extract from a 3D array.
Suppose you had a color image that was 2 x 4. But that's really 2 x 4 x 3, one 2 x 4 slice for the R pane, one 2 x 4 slice for the G pane, one 2 x 4 slice for the B pane. Now suppose you want to extract the bottom-right pixel, the one at image location (2,4). You want all three panes, so that would be (2,4,:) to extract, if written in subscript form. If you were extracting an irregular area then you would not be able to use subscript form to extract everything at once.
The bottom-right location of the image as seen by the user can be represented by the logical mask
F F F F
F F F T
but MATLAB has no direct way to say extract that mask "in all three panes". So you need to extend it to a 3D mask. In each pane of the 3D mask you want exactly the same pixels selected, so the 3D mask is 3 copies of the 2D mask, but stored in a 3D array. You could do
mask3d(:,:,1) = mask;
mask3d(:,:,2) = mask;
mask3d(:,:,3) = mask;
or you could do the equivalent
mask3d = cat(3,mask,mask,mask)
or
mask3d = repmat(mask, 1, 1, 3)
or
mask3d = mask(:,:,[1 1 1])
This does not mean that the layers will all be copies of the G pane of the image, it means that the layers will all be copies of some information we derived that happened in this case to only require the G pane.
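The equivalence of the four constructions can be checked directly; a small self-contained demo using the 2 x 4 mask from the example above:

```matlab
mask = logical([0 0 0 0; 0 0 0 1]);   % bottom-right pixel selected

m1 = false(2, 4, 3);                  % explicit plane-by-plane assignment
m1(:,:,1) = mask;  m1(:,:,2) = mask;  m1(:,:,3) = mask;
m2 = cat(3, mask, mask, mask);
m3 = repmat(mask, 1, 1, 3);
m4 = mask(:,:,[1 1 1]);

isequal(m1, m2, m3, m4)               % all four are the same 2 x 4 x 3 mask
```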
2) No, as explained above, inleaf_pixels will have all three panes. leaf_mean(1,1) will be the mean of the red, leaf_mean(1,2) will be the mean of the green, leaf_mean(1,3) will be the mean of the blue.
3) "shade/color" still is not enough to determine which color system you need to use to get the best answer.
My experience with leaves lacking in nitrogen is that their Saturation decreases (they get more white light mixed in) and their Value increases (they get brighter), but I could not say with any certainty that the Hue changes much; perhaps a little, but the leaves are still manifestly green. You might find yourself wanting to weigh differences in Hue more than differences in Saturation, for example.
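If Hue matters more than Saturation or Value for your shades, one option is a weighted distance in HSV. A minimal sketch, assuming leaf_mean and std1_mean are 1 x 3 RGB means rescaled to [0,1]; the weights are illustrative, not calibrated:

```matlab
leaf_hsv = rgb2hsv(leaf_mean);    % rgb2hsv accepts a c x 3 colormap in [0,1]
std1_hsv = rgb2hsv(std1_mean);

w  = [3 1 1];                     % weigh Hue more heavily than S and V
dh = leaf_hsv - std1_hsv;
dh(1) = min(abs(dh(1)), 1 - abs(dh(1)));   % Hue is circular: wrap the difference
d  = norm(w .* dh);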
Wikipedia indicates for L*a*b* that
The nonlinear relations for L*, a*, and b* are intended to mimic the nonlinear response of the eye. Furthermore, uniform changes of components in the L*a*b* color space aim to correspond to uniform changes in perceived color, so the relative perceptual differences between any two colors in L*a*b* can be approximated by treating each color as a point in a three-dimensional space (with three components: L*, a*, b*) and taking the Euclidean distance between them
I would suggest to you that the human eye's response to color changes in the leaf might potentially be sufficient to distinguish between "low nitrogen" and "nitrogen okay", but that the measurement by that distance is not necessarily the best for analyzing how much nitrogen is present. If that is an aim, then it would be better to calibrate, taking samples of leaves, taking their pictures (under consistent illumination) and putting the leaf through a spectrometer to measure nitrogen. The "consistent illumination" bit reminds me: your automated system will need to impose a consistent illumination on the samples, so it will not be as easy as having the farmer put a sample in the device and having the device examine the color under whatever the ambient light conditions happen to be.
Have you examined the response of leaf nitrogen under various wavelengths, in particular infrareds?
Elvin
19 Dec 2013
Image Analyst
19 Dec 2013
You can probably find out which color it is closest to in any color space, but the more professional way is to use the LAB or HSV color space, since those are more relevant to human vision than RGB space -- in fact, that is the whole reason they were invented.
Ashish Dahal
24 Nov 2020
0 votes

Hello Everyone,
I hope everybody is doing well. I want to know the color difference when shifting from Yellow A to Yellow B to Yellow C, and the same for the other colours when shifting from A to C. In other words, what is the relative change in going from Yellow A to Yellow C, and likewise for the other colours?
Best Regards
Ashish
1 comment
Image Analyst
24 Nov 2020
Edited: Image Analyst, 24 Nov 2020
This is not an Answer. You should have started your own question. Anyway...
You can use delta E but first you have to get control over your image capture situation. Right now I can immediately see that it's a total mess.
First of all, the items are not even in the same location. Look how we're looking at the right side of the "A" objects but the left side of the "C" objects. They should all be centered in the field of view.
Secondly, you don't have control over your lighting. Just look how the background brightness varies. Presumably it's the same white background but it's not the same from snapshot to snapshot so how can we know if the color difference is due to illumination change, camera exposure change, or due to the object changing color?
Third, what about your camera? Are you using a flash? Hopefully not because flashes are not consistent in their light output. Are you using the camera in manual mode, or is it fully automatic? It should be in manual mode since you don't want the camera automatically changing anything or else you won't know if the color change is true to the object or just a side effect of the camera changing something.
Fourth, you need a color standard in there, like the X-rite Color Checker Chart. It should be in the field of view next to the disc. Then you can do a true calibration from RGB to CIELAB and that will help alleviate any of the exposure differences that might still remain. See attached tutorial for how to color calibrate a digital imaging system.
If you get control over those then you still should do a background correction to correct for the lens shading and illumination non-uniformities.
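Once the Lab values are calibrated, the simplest delta E (CIE76) is just the Euclidean distance between the two Lab triplets; a minimal sketch with made-up numbers (newer MATLAB releases also provide deltaE() and imcolordiff() in the Image Processing Toolbox):

```matlab
labA = [75  5 60];            % hypothetical calibrated L*a*b* for Yellow A
labC = [70 10 55];            % hypothetical calibrated L*a*b* for Yellow C
dE76 = norm(labA - labC);     % CIE76 color difference
```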