I have written this image convolution script, but I get this error:
Error using .*
Integers can only be combined with integers of the same class, or scalar doubles.
Error in imF = A0.*double(H)
I am stuck, can someone please help?
H = [1 0 1; 0 1 0; 1 0 1];
for i = 1:x-1
    for j = 1:y-1
        A0 = A(i:i+1, j:j+1);
        imF = A0.*H;
        S(i,j) = sum(sum(imF));
    end
end
imshow(S)

Accepted Answer

DGM, 5 Nov 2022
Edited: DGM, 5 Nov 2022

1 vote
Here's a start.
% this image is class 'uint8'
A = imread('cameraman.tif');
% for the math to work, you need it to be floating-point class
A = im2double(A);
[y x] = size(A);
H = [1 0 1;0 1 0;1 0 1];
% filter is sum-normalized if the goal is to find the local mean
%H = H/sum(H(:));
% preallocate output
S = zeros(y,x);
% image geometry is [y x], not [x y]
% treating edges by avoidance requires indexing to be offset
% A0 needs to be the same size as H
for i = 2:y-1
    for j = 2:x-1
        A0 = A(i-1:i+1, j-1:j+1);
        imF = A0.*H;
        S(i,j) = sum(sum(imF));
    end
end
imshow(S)
Why is it blown out? That's because the filter kernel is not sum-normalized. As a result, the brightness of the image is increased proportional to the sum of H. If you do want the sum, then you're set. So long as we stay in 'double', the supramaximal image content is still there, but it can't be rendered as anything brighter than white. If we cast back to an integer class, that information will be lost.
If you want an averaging filter instead, normalizing the kernel is cheaper than dividing the result.
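As a minimal sketch of what that normalization looks like (using the same H as above), divide the kernel by its own sum once, before the loop:

```matlab
H = [1 0 1; 0 1 0; 1 0 1];
H = H/sum(H(:));   % kernel now sums to 1
% each output pixel is now a weighted local mean, so the
% filtered image keeps the brightness range of the input
```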
Why are the edges black? Because they aren't processed. There are various ways of handling the edges. One common way is to simply pad the edges.
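A sketch of the padding approach, assuming the same A, H, y, and x as above (padarray() would also work, but it needs the Image Processing Toolbox; the manual version does not):

```matlab
% pad the image by 1px on all sides so the 3x3 window
% can be centered on every original pixel
Ap = zeros(y+2, x+2);
Ap(2:end-1, 2:end-1) = A;   % zero padding; padarray(A,[1 1]) does the same

S = zeros(y, x);
for k = 1:y
    for j = 1:x
        A0 = Ap(k:k+2, j:j+2);       % 3x3 neighborhood in the padded image
        S(k,j) = sum(sum(A0.*H));
    end
end
```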
There are numerous examples of 2D filters. The following answer is one and links to several others.

8 Comments

Blob, 5 Nov 2022
Thank you very much. I want to tell you that I changed the variable 'i' to 'k', because I had an error; MATLAB treated 'i' as the imaginary unit!

Blob, 5 Nov 2022
Edited: Blob, 5 Nov 2022
@DGM sorry to insist, but I keep getting this error:
Error using .*
Integers can only be combined with integers of the same class, or scalar doubles.
imF = A0.*H;
What is the problem?
Not sure which one is the integer, A0 or H. You need to cast the integer variable to double; casting both is harmless:
imF = double(A0) .* double(H);
Yes, I think it's best to avoid using i and j as indices for that reason. It doesn't necessarily cause any problems, but it can.
Given your prior code, H should be of class 'double'. A, and therefore A0, will be of whatever class was returned from imread(). Most images will be 8-bit unsigned integer. The smart way is to cast A once, prior to the loop; casting segments inside the loop will be slower. Similarly, if you're writing a function and want to sanitize a user-defined H, you would cast it to 'double' outside the loop instead of casting it thousands of times inside it.
As in the example I gave, if you use
A = im2double(A);
A will be cast as 'double' and rescaled to [0 1], which is the expected data range for the class.
If you use
A = double(A);
A will be cast as 'double', but not rescaled. It will still be [0 255]. This is marginally faster, but you'll have to deal with the fact that the array is not scaled correctly for its class. Tools like imshow() and imwrite() won't know what to do with it.
The example I gave runs without error. If you've adapted it and are having issues with your new code, let us know what changes you've made so that we're on the same page.
Blob, 13 Nov 2022
Sorry to ask again @DGM, but why do the results of the manual convolution differ from the one that is already implemented in MATLAB (conv2)?
You'll have to be specific about which examples you're comparing.
One difference I mentioned above is that the edges aren't being handled. If the window is centered on the first column of the image, it will be overhanging the edge with nothing to sample from. There are a few things you could do.
  • you could pad the image temporarily
  • you could conditionally truncate the window
  • you could just avoid getting close to the edges
The last option is the one that's being used in the above example. Since the window is 3x3, there will be a 1px annulus around the output image that was never processed and still contains zeros from when the output array was preallocated.
for i = 2:y-1
    for j = 2:x-1
        % ...
    end
end
The way conv2 and imfilter() do it is by padding the array so that the window can reach the image edges without needing to resort to conditional behaviors. Assuming again a 3x3 window, there will be a 1px annulus on the output where the image is filtered with its neighbors, but also with 3 (or 5 if it's a corner pixel) padding pixels. Typical default padding is zero, so that would mean that those edge pixels will be slightly darkened.
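If that darkened rim matters, imfilter() lets you choose the padding behavior; a sketch:

```matlab
S0 = imfilter(A, H);               % default: zero padding (dark rim for a normalized kernel)
Sr = imfilter(A, H, 'replicate');  % repeat the border pixels outward
Ss = imfilter(A, H, 'symmetric');  % mirror the image across its border
% conv2() only does zero padding; pad the array yourself if you
% want other boundary behaviors with it
```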
Blob, 13 Nov 2022
Edited: Blob, 13 Nov 2022
I am trying to convolve the image 'Lena' with the filter Hy = [0 -1 0; 0 0 0; 0 1 0];
The difference is in the eyes, not the edges.
DGM, 13 Nov 2022
Edited: DGM, 13 Nov 2022
This may be what you're talking about.
A = imread('cameraman.tif');
% for the math to work, you need it to be floating-point class
A = im2double(A);
[y x] = size(A);
H = [0 -1 0; 0 0 0; 0 1 0];
% preallocate output
S = zeros(y,x);
% image geometry is [y x], not [x y]
% treating edges by avoidance requires indexing to be offset
% A0 needs to be the same size as H
for i = 2:y-1
    for j = 2:x-1
        A0 = A(i-1:i+1, j-1:j+1);
        imF = A0.*H;
        S(i,j) = sum(sum(imF));
    end
end
% use conv2()
Sc2 = conv2(A,H,'same');
% use imfilter()
Sif = imfilter(A,H);
% compare the results
% since these are all in the range [-1 1],
% rescale for viewing
outpict = [S; Sc2; Sif];
outpict = (1+outpict)/2;
imshow(outpict)
Unless you've changed something, the prior example behaves generally like imfilter() (except at the edges). Note that the example with conv2() gives the opposite slope: transitions from dark to light have negative slope.
In order to get the same behavior out of conv2(), rotate the filter by 180 degrees.
H = rot90(H,2); % correlation vs convolution
Note that imfilter() supports both, but defaults to correlation.
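As a sketch of the equivalences (same A and H as above):

```matlab
% these two should match: both amount to correlation with zero padding
S1 = imfilter(A, H);                  % correlation (imfilter's default)
S2 = conv2(A, rot90(H,2), 'same');    % convolution of the pre-rotated kernel

% and these two should match: both are true convolution
S3 = imfilter(A, H, 'conv');          % ask imfilter() for convolution explicitly
S4 = conv2(A, H, 'same');
```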
There are other differences in behavior between the three that may influence how the results are displayed by image()/imshow(), but knowing if that's the case would require an example of how exactly you're creating the two images.
