How to fix a particular connected component in multiple frames
Views: 5 (last 30 days)
Hi everyone.
I have a binary image consisting of 8 to 10 connected components. I want to check whether the centre of each of these connected components changes position in the next frame. To do that, I first have to fix/label the connected components in all frames, so that I can be sure the component in the first frame is the same one as in the next frame (like tracking, but not proper tracking, because I only want to find the change in position of multiple connected components across multiple frames).
How can I fix the particular connected components across multiple frames, either by assigning an ID or by labelling them? I don't know how to code this in MATLAB, and since I'm a beginner I'd appreciate someone's help with the coding.
Thanks in advance.
14 Comments
jonas
23 Aug 2020
If you post the actual images and not screenshots, then people are more likely to write code for you... I don't want to crop your desktop and binarize it before even getting started.
Accepted Answer
jonas
25 Aug 2020
Here's something I stitched together that works OK for the images you provided. I had to start by cropping them to the right size. Note that this method will fail as soon as two blobs merge, disappear, or move too much between frames. All in all, not very robust.
files = dir('C:\Users\pics\*.png');
rect = [360 130 700 520];
for i = 1:numel(files)
    % read, binarize and crop each frame
    img = imread(fullfile(files(i).folder, files(i).name));
    BW{i} = imcrop(imbinarize(rgb2gray(img)), rect);
end
%% real action starts here
% build a colormap
cmap = repmat([perms([1 0 0]); perms([1 1 0]); [0 0 0]], 2, 1);
% start with a black image
im_old = zeros(size(BW{1}(:,:,1)));
k = 1;
for j = 1:numel(files)
    % empty new image
    im_new = zeros(size(BW{1}(:,:,1)));
    % load next frame
    im_j = BW{j};
    % remove small components
    im_j = bwareaopen(im_j, 20);
    % find all blobs
    S = bwconncomp(im_j, 4);
    % check if empty and continue to next frame if so
    if S.NumObjects < 1
        continue
    end
    % loop over objects
    for i = 1:S.NumObjects
        % apply mask of object i on previous image
        overlapping = im_old(S.PixelIdxList{i});
        % remove any non-overlapping pixels
        overlapping(overlapping == 0) = [];
        if isempty(overlapping)  % if no overlap, assign a new label
            im_new(S.PixelIdxList{i}) = k;
            k = k + 1;
        else
            % if overlapping, assign the most common label
            im_new(S.PixelIdxList{i}) = mode(overlapping);
        end
    end
    % replace old with new
    im_old = im_new;
    % store image as RGB
    out{j} = label2rgb(im_new, cmap);
end
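Since the original question was about whether the centre of each component changes position, here is a minimal follow-on sketch (not part of the answer above). It assumes you also save the label matrix each iteration, e.g. by adding lbl{j} = im_new; inside the loop, and the names lbl and centroids are made up for illustration:
% Sketch only: read off the centroid of every label in every stored frame.
numLabels = k - 1;                            % highest label handed out in the loop above
centroids = nan(numLabels, 2, numel(files));  % [x y] per label per frame
for j = 1:numel(lbl)
    if isempty(lbl{j})                        % frame was skipped by the "continue" above
        continue
    end
    stats = regionprops(lbl{j}, 'Centroid');  % one entry per label value
    for n = 1:numel(stats)
        c = stats(n).Centroid;
        if ~any(isnan(c))                     % label actually present in this frame
            centroids(n, :, j) = c;
        end
    end
end
% e.g. how far label 1 moved between frame 1 and frame 2:
d = norm(centroids(1, :, 2) - centroids(1, :, 1));
Any NaN rows simply mean that a given label was not seen in that frame.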
9 Comments
Image Analyst
26 Sep 2020
Edited: Image Analyst on 26 Sep 2020
Come on, make it easy for us to help you. You forgot to attach angle_all.m with the changes I suggested. I'll check back later for it.
By the way, this code I suggested works just fine:
folder = pwd
files = dir(fullfile(folder, 'frame_*.jpg'));
rect = [360 130 700 520];
for k = 1:numel(files)
    fullFileName = fullfile(files(k).folder, files(k).name);
    grayImage = imread(fullFileName);
    [rows, columns, numberOfColorChannels] = size(grayImage);
    fprintf('%s is %d rows by %d columns by %d color channels.\n', ...
        files(k).name, rows, columns, numberOfColorChannels);
    % Make gray scale if it's not already.
    if numberOfColorChannels == 3
        grayImage = rgb2gray(grayImage);
    end
    if islogical(grayImage)
        grayImage = uint8(255 .* grayImage);
    end
    BW{k} = imcrop(imbinarize(grayImage), rect);
end
And I can't write a complete turnkey tracking app for you that keeps unique colors like you want. I've described many times in many posts how difficult it is, with all the ambiguities, etc., like:
- objects occluding each other or merging together or splitting apart
- objects leaving the field of view, perhaps reappearing later
- objects entering the field of view
- objects changing shape from frame to frame
- objects changing size, color, or some other unique identifying attribute from frame to frame
One thing you'd have to do is make a feature vector with all the attributes needed to distinguish one blob from another, like size, shape, color, speed, etc., and then try to match up feature vectors for the blobs from one frame to the next to decide whether each blob was present in the prior frame and, if it was, which one it was. This is a lot of work that I just can't donate to you. It would take months or probably years. I'm sure there are companies out there that have teams of people who have spent years on tracking. MathWorks is one, but their tracking stuff is just part of the low-level stuff you'd need. There is still a whole bunch of stuff you'd need to add beyond what they did, and I'm sure there are companies that have done all that. You just need to find them and buy their software. That will be MUCH, MUCH easier and faster than writing it from scratch on your own.
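As a rough illustration of that feature-vector idea (the choice of features and the variable names here are just examples, not a recommendation), a few regionprops measurements per blob in one frame could be stacked into a matrix:
% Sketch only: one row of features per blob in a single binary frame BW{1}.
% In practice you'd pick and scale features that actually separate your blobs.
CC = bwconncomp(BW{1}, 4);
S  = regionprops(CC, 'Area', 'Centroid', 'Eccentricity', 'Perimeter');
features1 = [[S.Area]', vertcat(S.Centroid), [S.Eccentricity]', [S.Perimeter]'];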
More Answers (2)
Image Analyst
23 Aug 2020
The Computer Vision Toolbox has lots of tracking capabilities.
It could save you a lot of time if you bought that toolbox. Do you already have it? If not, buy it.
0 Comments
Image Analyst
23 Aug 2020
See the MathWorks ball tracking demo: https://www.mathworks.com/matlabcentral/fileexchange/39851-algorithm-development-with-matlab?s_tid=prof_contriblnk
6 Comments
Image Analyst
24 Aug 2020
You'd have to make a list of features of all the blobs. Then I'd be tempted to try K-Nearest Neighbors, knnsearch(), to match blobs from one feature set to the prior feature set. Sorry, I don't have a demo for this already, but it's simple, so I'd think you can try it yourself easily. You need the Statistics and Machine Learning Toolbox to use that function. However, it will match each blob in set 2 with a blob in set 1 regardless of whether that match is correct or not. I mean the blob in set 1 might have left the scene and the blob in set 2 may have just entered the scene, but it will still say that some blob in set 1 matches the blob in set 2. In that case, perhaps (but maybe not) some blob in set 1 may have two blobs that match it in set 2. Or it may have none. So you'll need to inspect the number of matches, and if there are zero, or two or more, you're going to have to make some decisions about what to do. But if all goes well and you have the same blobs in both sets (none left and none entered), and the features didn't change much from set 1 to set 2, then each blob will have exactly one matching blob in the other set.
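For what it's worth, a minimal sketch of that knnsearch() step might look like the following, assuming features1 and features2 are per-blob feature matrices (one row per blob) built as in the earlier sketch; the names and the duplicate check are only illustrative, not a ready-made solution:
% Sketch only: for each blob in set 2, find the nearest feature vector in set 1.
% Requires the Statistics and Machine Learning Toolbox.
[idx, dist] = knnsearch(features1, features2);   % idx(i) = set-1 blob closest to blob i of set 2
% dist holds the feature-space distance, useful as a second sanity check.
% A set-1 blob matched zero times or more than once suggests a blob entered,
% left, or changed too much, and needs a manual decision.
counts  = accumarray(idx, 1, [size(features1, 1), 1]);
suspect = find(counts ~= 1);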
Image Analyst
25 Aug 2020
You may want to watch this webinar tomorrow:
Join us for the fourth of seven bi-weekly free seminars focusing on the best paper research presented at EI 2020. Talks will be followed by live discussion; seminars will be recorded.
Best Paper Imaging and Multimedia Analytics in a Web and Mobile World 2020 Conference
Bryan Blakeslee & Andreas Savakis (Rochester Institute of Technology)
Change detection in image pairs has traditionally been a binary process, reporting either "Change" or "No Change." In this paper, we present LambdaNet, a novel deep architecture for performing pixel-level directional change detection based on a four-class classification scheme. LambdaNet successfully incorporates the notion of "directional change" and identifies differences between two images as "Additive Change" when a new object appears, "Subtractive Change" when an object is removed, "Exchange" when different objects are present in the same location, and "No Change".
About the Speaker
Bryan Blakeslee is a recent graduate of the Rochester Institute of Technology’s computer engineering program. His areas of interest are deep learning and embedded systems.
Join us on Wednesday, 26 August 2020
10:00 - 10:45 EDT / 15:00 - 15:45 BST / 16:00 - 16:45 CET