creating an image convolution code

75 views (last 30 days)
Blob on 5 Nov 2022
Edited: DGM on 13 Nov 2022
So I have coded this image convolution script, but I get this error:
Error using .*
Integers can only be combined with integers of the same class, or scalar doubles.
Error in imF = A0.*double(H)
I am stuck, can someone please help?
H = [1 0 1; 0 1 0; 1 0 1];
for i = 1:x-1
    for j = 1:y-1
        A0 = A(i:i+1, j:j+1);
        imF = A0.*H;
        S(i,j) = sum(sum(imF));
    end
end
imshow(S)

Accepted Answer

DGM on 5 Nov 2022
Edited: DGM on 5 Nov 2022
Here's a start.
% this image is class 'uint8'
A = imread('cameraman.tif');
% for the math to work, you need it to be floating-point class
A = im2double(A);
[y x] = size(A);

H = [1 0 1; 0 1 0; 1 0 1];
% filter is sum-normalized if the goal is to find the local mean
%H = H/sum(H(:));

% preallocate output
S = zeros(y,x);

% image geometry is [y x], not [x y]
% treating edges by avoidance requires indexing to be offset
% A0 needs to be the same size as H
for i = 2:y-1
    for j = 2:x-1
        A0 = A(i-1:i+1, j-1:j+1);
        imF = A0.*H;
        S(i,j) = sum(sum(imF));
    end
end
imshow(S)
Why is it blown out? That's because the filter kernel is not sum-normalized. As a result, the brightness of the image is increased in proportion to the sum of H. If you do want the sum, then you're set. So long as we stay in 'double', the supramaximal image content is still there, but it can't be rendered as anything brighter than white. If we cast back to an integer class, that information will be lost.
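For instance, a quick sketch of that last point:
% in 'double', the out-of-range values are still present
max(S(:))          % can be well above 1
% an integer cast saturates them, so that information is gone
Su = im2uint8(S);  % everything above 1 becomes 255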
If you want an averaging filter instead, normalizing the kernel is cheaper than dividing the result.
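For example, the same normalization that's shown commented out in the code above:
% normalize the kernel once ...
H = [1 0 1; 0 1 0; 1 0 1];
H = H/sum(H(:));     % sum(H(:)) is 5 here
% ... rather than dividing every output pixel afterward:
% S = S/5;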
Why are the edges black? Because they aren't processed. There are various ways of handling the edges. One common way is to simply pad the edges.
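For example, a minimal sketch of the padding approach, assuming padarray() from the Image Processing Toolbox is available and reusing A, H, x, y from above:
% replicate-pad by one pixel so every original pixel gets a full 3x3 window
Apad = padarray(A, [1 1], 'replicate');
S = zeros(y,x);
for i = 1:y
    for j = 1:x
        A0 = Apad(i:i+2, j:j+2);   % window centered on original pixel (i,j)
        S(i,j) = sum(sum(A0.*H));
    end
end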
There are numerous examples of 2D filters. The following answer is one and links to several others.
8 Comments
Blob on 13 Nov 2022
Edited: Blob on 13 Nov 2022
I am trying to convolve the image 'Lena' with the filter Hy = [0 -1 0; 0 0 0; 0 1 0];
The difference is in the eyes, not the edges
DGM on 13 Nov 2022
Edited: DGM on 13 Nov 2022
This may be what you're talking about.
A = imread('cameraman.tif');
% for the math to work, you need it to be floating-point class
A = im2double(A);
[y x] = size(A);

H = [0 -1 0; 0 0 0; 0 1 0];

% preallocate output
S = zeros(y,x);

% image geometry is [y x], not [x y]
% treating edges by avoidance requires indexing to be offset
% A0 needs to be the same size as H
for i = 2:y-1
    for j = 2:x-1
        A0 = A(i-1:i+1, j-1:j+1);
        imF = A0.*H;
        S(i,j) = sum(sum(imF));
    end
end

% use conv2()
Sc2 = conv2(A,H,'same');

% use imfilter()
Sif = imfilter(A,H);

% compare the results
% since these are all in the range [-1 1],
% rescale for viewing
outpict = [S; Sc2; Sif];
outpict = (1+outpict)/2;
imshow(outpict)
Unless you've changed something, the prior example behaves generally like imfilter() (except at the edges). Note that the example with conv2() gives the opposite slope: transitions from dark to light have negative slope.
In order to get the same behavior out of conv2(), rotate the filter by 180 degrees.
H = rot90(H,2); % correlation vs convolution
Note that imfilter() supports both, but defaults to correlation.
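For example (a small sketch; Sc2b and Sifc are just illustrative names):
Sc2b = conv2(A, rot90(H,2), 'same');   % flipped kernel: conv2() now matches imfilter(A,H)
Sifc = imfilter(A, H, 'conv');         % or ask imfilter() for convolution rather than correlation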
There are other differences in behavior between the three that may influence how the results are displayed by image()/imshow(), but knowing whether that's the case would require an example of exactly how you're creating the two images.


More Answers (0)

