I have two standard shades of green.
I also have a test image ( a leaf with black background).
Can you show me how to compare the test image with the 2 standard images and determine where the test image falls closer, in terms of color, among the 2 standard images.
thank you.

Accepted Answer

Image Analyst on 19 Dec 2013

2 votes

I know we've talked about this before, a few months ago or so. First segment out the green leaf - get a binary image that is true where the leaf is and false where the leaf isn't. For example use the green mask like Walter showed you. Then the most widely used method in the color industry is to calculate the "Delta E" (which is the color difference). You convert RGB into LAB and then calculate the Euclidean distance between the two points in LAB color space. At least that's the simplest which is probably okay for you. Remember we talked about using a color checker passport to calibrate your images. Otherwise you're just using "book formulas" - which might be okay if you just want to find out which standard is closest. But if you ever want to graduate to a fully calibrated system, you'll have to use a standard such as the Color Checker Passport or the Munsell soil chart or the X-rite Color checker.
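A minimal sketch of that workflow (assuming the leaf can be segmented by thresholding the green channel at 32, as in the comments below, and using the uint8-scaled LAB values that applycform returns, which is fine for a relative comparison):

```matlab
% Segment the leaf, convert to LAB, and get the leaf's mean LAB color.
rgbImage = imread('test1.jpg');                 % leaf on a black background
leaf_mask = rgbImage(:,:,2) > 32;               % threshold is an assumption

cform = makecform('srgb2lab');
lab = double(applycform(rgbImage, cform));      % uint8-scaled L*a*b*

mask3d = repmat(leaf_mask, [1 1 3]);            % extend the 2D mask to 3 channels
leafLab = reshape(lab(mask3d), [], 3);          % N x 3: [L a b] per leaf pixel
meanLab = mean(leafLab, 1);

% Delta E against a standard's mean LAB color (meanLabStandard computed the
% same way from the standard image, no mask needed):
% deltaE = norm(meanLab - meanLabStandard);
```

Whichever standard gives the smaller delta E is the closer one.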

8 Comments

Elvin on 19 Dec 2013
Edited: Elvin on 19 Dec 2013
Sorry if I still don't get this right after a month of asking you questions. :( I'm confused by this idea: if I take only the G channel of the leaf image, can I still convert it to LAB space (using srgb2lab) and still get the L, A, B channels?
By the way, my only aim so far is to find out which standard is closest to the test image. I'm not yet aiming "to graduate to a fully calibrated system". I will only aim for that once I understand what I'm doing and have answered all the questions in my mind.
Elvin on 19 Dec 2013
Can you show me how to use the mask so that the only thing left in the test image is the leaf (the green part only), so that I can then convert it to LAB? Thanks
You can get to lab color space like this (from your code already):
cform = makecform('srgb2lab');
Transform_Leaf = applycform(LeafImage,cform);
It's probably easiest to get delta E from lab color space. HSV color space is kind of like lab color space except that it uses polar coordinates instead of Cartesian coordinates.
You need all 3 color channels to convert between color spaces. If you take only the G channel, that's just a monochrome grayscale image with no color information, so it can't be converted into a color space - you need the R and B as well.
Elvin on 19 Dec 2013
So this code is correct?
LeafImage = imread('test1.jpg');
leaf_mask = LeafImage(:,:,2) > 32;
inleaf_pixels = reshape(LeafImage(leaf_mask(:,:,[1 1 1])), [], 3);
Transform_Leaf = applycform(LeafImage,cform);
What's the use of leaf_mask and inleaf_pixels if I could already convert LeafImage directly into LAB space? By the way, will the black background in the test image give different values for the L, A, B channels? Is there a way to convert only the leaf part (the green part only) of the test image into LAB space? I don't know if I'm right, but I have this idea in mind that the result would be more accurate if I excluded the black background first and then converted only the green part (the leaf only) into LAB. If there's a way to do it, can I ask for the code for it? Thanks
Image Analyst on 19 Dec 2013
The leaf_mask (a 2D logical array) and inleaf_pixels (an N x 3 list of pixel values) are there so you can extract the color of only the leaf and not of the background. You don't want all that black surrounding background to be included when you compute the mean color, do you? Of course not. You do want to convert the leaf part into LAB, but the leaf being green does not mean you convert just the green channel image into LAB. What I would do is convert the whole image into LAB, then extract each channel separately:
lChannel = lab(:,:,1);
aChannel = lab(:,:,2);
bChannel = lab(:,:,3);
Then get the mean color in the mask
meanL = mean(lChannel(leaf_mask));
meanA = mean(aChannel(leaf_mask));
meanB = mean(bChannel(leaf_mask));
then do the same for the standard image, which needs no mask
lChannelStandard = labStandard(:,:,1);
aChannelStandard = labStandard(:,:,2);
bChannelStandard = labStandard(:,:,3);
meanLStandard = mean2(lChannelStandard);
meanAStandard = mean2(aChannelStandard);
meanBStandard = mean2(bChannelStandard);
Then calculate delta E
deltaL = meanL - meanLStandard ;
deltaA = meanA - meanAStandard ;
deltaB = meanB - meanBStandard ;
deltaE = sqrt(deltaL^2+deltaA^2+deltaB^2);
Elvin on 19 Dec 2013
I see. Thank you very much for that. :)
By the way, can you verify my code:
clear
% FOR STANDARD IMAGES
LCC = imread('LCC1.jpg');
cform = makecform('srgb2lab');
Transform_LCC1 = applycform(LCC,cform);
LChannel_LCC = Transform_LCC1(:, :, 1);
AChannel_LCC = Transform_LCC1(:, :, 2);
BChannel_LCC = Transform_LCC1(:, :, 3);
meanLStandard = mean2(LChannel_LCC);
meanAStandard = mean2(AChannel_LCC);
meanBStandard = mean2(BChannel_LCC);
%FOR TEST IMAGE
LeafImage = imread('test1.jpg');
leaf_mask = LeafImage(:,:,2) > 32;
inleaf_pixels = reshape(LeafImage(leaf_mask(:,:,[1 1 1])), [], 3);
Transform_Leaf = applycform(LeafImage,cform);
LChannel_Leaf = Transform_Leaf(:, :, 1);
AChannel_Leaf = Transform_Leaf(:, :, 2);
BChannel_Leaf = Transform_Leaf(:, :, 3);
meanLLeaf = mean(LChannel_Leaf(leaf_mask));
meanALeaf = mean(AChannel_Leaf(leaf_mask));
meanBLeaf = mean(BChannel_Leaf(leaf_mask));
% FOR DELTAE
deltaL = meanLStandard - meanLLeaf;
deltaA = meanAStandard - meanALeaf;
deltaB = meanBStandard - meanBLeaf;
deltaE = sqrt((deltaL)^2 + (deltaA)^2 + (deltaB)^2);
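Putting it together for the stated goal (two standards), the whole comparison can be wrapped into a small function; the filenames and the green threshold of 32 are taken from the code above, and everything else is a sketch rather than tested code:

```matlab
function closest = closerStandard(testFile, stdFile1, stdFile2)
% Return 1 or 2: which standard image the leaf in testFile is closer to,
% measured as the delta E between mean LAB colors.
cform = makecform('srgb2lab');

test = imread(testFile);
leaf_mask = test(:,:,2) > 32;                         % leaf vs. black background
meanTest = meanLabColor(test, cform, leaf_mask);
meanStd1 = meanLabColor(imread(stdFile1), cform, []); % standards need no mask
meanStd2 = meanLabColor(imread(stdFile2), cform, []);

dE = [norm(meanTest - meanStd1), norm(meanTest - meanStd2)];
[~, closest] = min(dE);
end

function m = meanLabColor(rgb, cform, mask)
% Mean [L a b] of the masked region (or of the whole image if mask is empty).
lab = double(applycform(rgb, cform));
if isempty(mask)
    mask = true(size(rgb,1), size(rgb,2));
end
L = lab(:,:,1);  a = lab(:,:,2);  b = lab(:,:,3);
m = [mean(L(mask)), mean(a(mask)), mean(b(mask))];
end
```

closerStandard('test1.jpg', 'LCC1.jpg', 'LCC4.jpg') would then return 1 or 2.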
Image Analyst on 19 Dec 2013
Looks right, except that you don't need or use inleaf_pixels.
Elvin on 19 Dec 2013
Thank you very much for the help sir. :)


More Answers (2)

Walter Roberson on 19 Dec 2013

0 votes

leaf_mask = LeafImage(:,:,2) > 32; %adjust the 32 if you want
inleaf_pixels = reshape( LeafImage(leaf_mask(:,:,[1 1 1])), [], 3);
leaf_mean = mean(inleaf_pixels, 1);
Now likewise you can find the mean colors for the two shades. With those in place, you can norm() the difference in mean colors (i.e., take the Euclidean distance)
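Continuing from the snippet above, a sketch of that comparison (the standard filenames 'LCC1.jpg' and 'LCC4.jpg' are taken from the later comments; the standards are assumed to be solid swatches that need no mask):

```matlab
% Mean RGB of each standard image, as a 1 x 3 [R G B] row vector.
std1_mean = squeeze(mean(mean(double(imread('LCC1.jpg')), 1), 2))';
std2_mean = squeeze(mean(mean(double(imread('LCC4.jpg')), 1), 2))';

% Euclidean distance from the leaf's mean color to each standard.
d1 = norm(double(leaf_mean) - std1_mean);
d2 = norm(double(leaf_mean) - std2_mean);
if d1 < d2
    disp('The test leaf is closer to standard 1.')
else
    disp('The test leaf is closer to standard 2.')
end
```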

6 Comments

Elvin on 19 Dec 2013
Edited: Elvin on 19 Dec 2013
May I ask what the code you've posted does? Sorry, I'm just new to image processing.
When using the Euclidean distance, do I need to transform the images using the srgb2lab function?
Walter Roberson on 19 Dec 2013
leaf_mask is created as a binary array, true only in the places where the green channel is sufficiently bright. This will include the white edges around the leaf, but will exclude the black area.
leaf_mask will be 2D. indexing it at (:,:,[1 1 1]) is short-hand for either cat(3,leaf_mask,leaf_mask,leaf_mask) or repmat(leaf_mask,1,1,3) . All three expressions extend leaf_mask into 3D by copying it into three stacked planes.
Indexing the image at the 3D mask will extract the content of the pixels where the mask is true.
It is a peculiarity of MATLAB that when you use logical indexing to extract from an array that is 2D or higher, the result is returned as a column vector rather than as an array. This makes sense when you consider that the 3D mask could have specified that I wanted to extract (for example) the Red and Green of one pixel, but the Green only of a different pixel: logical masking is not necessarily going to pull out nice rectangular areas, so MATLAB returns a vector of the information that was requested.
As I know that I did extract all 3 panes of every pixel that I extracted at all, I can then reshape() the column vector. The result will be an N x 3 matrix where the second dimension is R, G, or B.
mean(,1) applied to an N x 3 matrix will give a 1 x 3 result -- the mean R, mean G, mean B.
Whether you use srgb2lab() before finding the Euclidean distance depends on whether srgb to rgb correction has already been done in reading the images (TIFF in particular can do it as they are read). It also depends on what you mean by "falls closer, in terms of color". If you are referring to Hue, then consider HSV. It seems to me that Saturation is also important for natural shades of green.
LAB has the difficulty that Lab values do not define absolute colors unless the white point is also specified.
My thought would be that if the difference between Euclidean distance for sRGB vs LAB is going to be significant when you only have two shades to compare against, then you should probably optimize by choosing different shades.
Elvin on 19 Dec 2013
Edited: Elvin on 19 Dec 2013
Can I ask the following questions for better understanding?
1. All three expressions extend leaf_mask into 3D by copying it into three stacked planes.
- Does this mean that the inleaf_pixels will become G,G,G instead of R,G,B?
2. mean(,1) applied to an N x 3 matrix will give a 1 x 3 result -- the mean R, mean G, mean B
- Does the code leaf_mean = mean(inleaf_pixels, 1); gets the mean of the G only?
3. It also depends on what you mean by "falls closer, in terms of color"
- What I'm trying to do is determine whether the test image is closer in shade/color to the 1st standard image or to the 2nd. By the way, the aim of this project is to develop an automated system. I mean, nowadays, farmers only use their eyes to compare the leaf (test image) with the standard colors (like the 2 standard images above) to determine whether the plant lacks nitrogen or not. So what I'm trying to do is do that comparison via computer.
4. My thought would be that if the difference between Euclidean distance for sRGB vs LAB is going to be significant when you only have two shades to compare against, then you should probably optimize by choosing different shades.
- So what do you think would be the best method for me to compare the test image with the standard images?
Can you check if I'm doing this right?
% FOR STANDARD IMAGES
LCC1 = imread('LCC1.jpg');
LCC4 = imread('LCC4.jpg');
cform = makecform('srgb2lab');
Transform_LCC1 = applycform(LCC1,cform);
Transform_LCC4 = applycform(LCC4,cform);
LChannel_LCC1 = mean(mean(Transform_LCC1(:, :, 1)));
AChannel_LCC1 = mean(mean(Transform_LCC1(:, :, 2)));
BChannel_LCC1 = mean(mean(Transform_LCC1(:, :, 3)));
LChannel_LCC4 = mean(mean(Transform_LCC4(:, :, 1)));
AChannel_LCC4 = mean(mean(Transform_LCC4(:, :, 2)));
BChannel_LCC4 = mean(mean(Transform_LCC4(:, :, 3)));
%FOR TEST IMAGE
LeafImage = imread('test1.jpg');
leaf_mask = LeafImage(:,:,2) > 32;
inleaf_pixels = reshape( LeafImage(leaf_mask(:,:,[1 1 1])), [], 3);
Transform_Leaf = applycform(LeafImage,cform);
LChannel_Leaf = mean(mean(Transform_Leaf(:, :, 1)));
AChannel_Leaf = mean(mean(Transform_Leaf(:, :, 2)));
BChannel_Leaf = mean(mean(Transform_Leaf(:, :, 3)));
Should I now use the Euclidean distance? How should I do it? I'm confused by this idea: if I only take the G channel of the leaf image, can I still convert it to LAB space (using srgb2lab) and still get the L, A, B channels?
Thanks
Walter Roberson on 19 Dec 2013
1) No, it will not become [G,G,G]. The mask is derived from the G plane in this case, but it might have been derived more complexly in the more general case. For example if the primary color was yellow, in order to figure out where the yellow was it would be necessary to look at both the G and B planes.
The two-dimensional mask tells you which locations in the X/Y plane are "of interest". You want to extract the pixel data at those locations. But the pixel data is three-dimensional, not two-dimensional, and MATLAB does not have any direct syntax for saying "Take this 2D mask and extract all three channels at each location indicated in the 2D mask". You need a 3D mask to extract from a 3D array.
Suppose you had a color image that was 2 x 4. But that's really 2 x 4 x 3, one 2 x 4 slice for the R pane, one 2 x 4 slice for the G pane, one 2 x 4 slice for the B pane. Now suppose you want to extract the bottom-right pixel, the one at image location (2,4). You want all three panes, so that would be (2,4,:) to extract, if written in subscript form. If you were extracting an irregular area then you would not be able to use subscript form to extract everything at once.
The bottom-right location of the image as seen by the user can be represented by the logical mask
F F F F
F F F T
but MATLAB has no direct way to say extract that mask "in all three panes". So you need to extend it to a 3D mask. In each pane of the 3D mask you want exactly the same pixels selected, so the 3D mask is 3 copies of the 2D mask, but stored in a 3D array. You could do
mask3d(:,:,1) = mask;
mask3d(:,:,2) = mask;
mask3d(:,:,3) = mask;
or you could do the equivalent
mask3d = cat(3,mask,mask,mask)
or
mask3d = repmat(mask, 1, 1, 3)
or
mask3d = mask(:,:,[1 1 1])
This does not mean that the layers will all be copies of the G pane of the image, it means that the layers will all be copies of some information we derived that happened in this case to only require the G pane.
2) No, as explained above, inleaf_pixels will have all three panes. leaf_mean(1,1) will be the mean of the red, leaf_mean(1,2) will be the mean of the green, leaf_mean(1,3) will be the mean of the blue.
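The mechanics above can be checked on a tiny 2-by-4 "image" (the values are arbitrary, chosen only to make the indexing visible):

```matlab
% A 2 x 4 x 3 array standing in for an RGB image.
img = reshape(1:24, [2 4 3]);     % img(:,:,1) = R, (:,:,2) = G, (:,:,3) = B

% Select only the bottom-right pixel with a 2D logical mask.
mask = false(2, 4);
mask(2, 4) = true;

% All of the constructions above produce the same 3D mask.
mask3d = mask(:,:,[1 1 1]);
isequal(mask3d, cat(3, mask, mask, mask))     % returns true (logical 1)
isequal(mask3d, repmat(mask, [1 1 3]))        % returns true (logical 1)

% Logical indexing gives a column vector; reshape it into N x 3 = [R G B].
pixels = reshape(img(mask3d), [], 3)          % [8 16 24]
```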
3) "shade/color" is still not enough to determine which color system you need to use to get the best answer.
My experience with leaves lacking in nitrogen is that their Saturation decreases (they get more white light mixed in) and their Value increases (they get brighter), but I could not say with any certainty that the Hue changes much; perhaps a little, but the leaves are still manifestly green. You might find yourself wanting to weigh differences in Hue more than differences in Saturation, for example.
Wikipedia indicates for L*a*b* that
The nonlinear relations for L*, a*, and b* are intended to mimic the nonlinear response of the eye. Furthermore, uniform changes of components in the L*a*b* color space aim to correspond to uniform changes in perceived color, so the relative perceptual differences between any two colors in L*a*b* can be approximated by treating each color as a point in a three-dimensional space (with three components: L*, a*, b*) and taking the Euclidean distance between them
I would suggest to you that the human eye's response to color changes in the leaf might potentially be sufficient to distinguish between "low nitrogen" and "nitrogen okay", but that the measurement by that distance is not necessarily the best for analyzing how much nitrogen is present. If that is an aim, then it would be better to calibrate, taking samples of leaves, taking their pictures (under consistent illumination) and putting the leaf through a spectrometer to measure nitrogen. The "consistent illumination" bit reminds me: your automated system will need to impose a consistent illumination on the samples, so it will not be as easy as having the farmer put a sample in the device and having the device examine the color under whatever the ambient light conditions happen to be.
Have you examined the response of leaf nitrogen under various wavelengths, in particular infrareds?
Elvin on 19 Dec 2013
1. By the way, may I ask what is the use of masking the leaf part in the image?
2. I see. Thanks for that. So, I need to use the leaf_mean(1,2) to get the mean of the G only.
3. So you're saying that I should use HSV instead of LAB or RGB?
Also, I'm now confused again about which color space to use. You said before that I should compare the mean colors, so I should use RGB, right? But in your last comment you said that nitrogen deficiency in the leaf shows up more in HSV terms. And if you read Image Analyst's answer, he told me to use LAB space. Which one should I use: RGB, HSV, or LAB?
By the way, I'm not after measuring how much nitrogen is in the leaf. I'm only after which of the standard images the test image is closer to in terms of green color.
May I ask for code on what to do next after masking the leaf? You already gave me this code for masking the leaf, right?
LeafImage = imread('test1.jpg');
leaf_mask = LeafImage(:,:,2) > 32;
inleaf_pixels = reshape( LeafImage(leaf_mask(:,:,[1 1 1])), [], 3);
If I'm going to use the LAB space and use the Euclidean Distance, may I know what's the next code after the code above to convert the leaf only part into LAB, so I can get the L,A,B channels?
Thanks
Image Analyst on 19 Dec 2013
You can probably find out which color it is closest to in any color space, but the more professional way that the pros use is LAB or HSV color space since it's more relevant to human vision than RGB space - in fact that's the whole reason why they were invented.


Ashish Dahal on 24 Nov 2020

0 votes

Hello Everyone,
I hope everybody is doing well. I want to know the color difference when shifting from Yellow A to Yellow B to Yellow C, and the same for the other colours when shifting from A to C. Or, what is the relative change in going from Yellow A to Yellow C? And the same for the other colours?
Best Regards
Ashish

1 Comment

Image Analyst on 24 Nov 2020
Edited: Image Analyst on 24 Nov 2020
This is not an Answer. You should have started your own question. Anyway...
You can use delta E but first you have to get control over your image capture situation. Right now I can immediately see that it's a total mess.
First of all, the items are not even in the same location. Look how we're looking at the right side of the "A" objects but the left side of the "C" objects. They should all be centered in the field of view.
Secondly, you don't have control over your lighting. Just look how the background brightness varies. Presumably it's the same white background but it's not the same from snapshot to snapshot so how can we know if the color difference is due to illumination change, camera exposure change, or due to the object changing color?
Third, what about your camera? Are you using a flash? Hopefully not because flashes are not consistent in their light output. Are you using the camera in manual mode, or is it fully automatic? It should be in manual mode since you don't want the camera automatically changing anything or else you won't know if the color change is true to the object or just a side effect of the camera changing something.
Fourth, you need a color standard in there, like the X-rite Color Checker Chart. It should be in the field of view next to the disc. Then you can do a true calibration from RGB to CIELAB and that will help alleviate any of the exposure differences that might still remain. See attached tutorial for how to color calibrate a digital imaging system.
If you get control over those then you still should do a background correction to correct for the lens shading and illumination non-uniformities.


Asked: 19 Dec 2013
Edited: 24 Nov 2020
