HELP: How do I do real-time face tracking in MATLAB?

Hi, can someone please tell me how to track a face in real time using a webcam in MATLAB? I'm using the sample code below, which outputs a video file with my face being tracked. All I want it to do is track my face live from the webcam. It's probably a small change in the code, but I have no idea how to do it.
Can anybody please help me with this?
Detect the face
% Create a cascade detector object.
faceDetector = vision.CascadeObjectDetector();
% Read a video frame and run the detector.
videoFileReader = vision.VideoFileReader('facevid.WMV');
videoFrame = step(videoFileReader);
bbox = step(faceDetector, videoFrame);
% Draw the returned bounding box around the detected face.
videoOut = insertObjectAnnotation(videoFrame,'rectangle',bbox,'Face');
figure, imshow(videoOut), title('Detected face');
% Get the skin tone information by extracting the Hue from the video frame
% converted to the HSV color space.
[hueChannel,~,~] = rgb2hsv(videoFrame);
% Display the Hue Channel data and draw the bounding box around the face.
figure, imshow(hueChannel), title('Hue channel data');
rectangle('Position',bbox(1,:),'LineWidth',2,'EdgeColor',[1 1 0])
Track the face
% Detect the nose within the face region. The nose provides a more accurate
% measure of the skin tone because it does not contain any background
% pixels.
noseDetector = vision.CascadeObjectDetector('Nose');
faceImage = imcrop(videoFrame,bbox(1,:));
noseBBox = step(noseDetector,faceImage);
% The nose bounding box is defined relative to the cropped face image.
% Adjust the nose bounding box so that it is relative to the original video
% frame.
noseBBox(1,1:2) = noseBBox(1,1:2) + bbox(1,1:2);
% Create a tracker object.
tracker = vision.HistogramBasedTracker;
% Initialize the tracker histogram using the Hue channel pixels from the
% nose.
initializeObject(tracker, hueChannel, noseBBox(1,:));
% Create a video player object for displaying video frames.
videoInfo = info(videoFileReader);
videoPlayer = vision.VideoPlayer('Position',[300 300 videoInfo.VideoSize+30]);
% Track the face over successive video frames until the video is finished.
while ~isDone(videoFileReader)
    % Extract the next video frame
    videoFrame = step(videoFileReader);
    % RGB -> HSV
    [hueChannel,~,~] = rgb2hsv(videoFrame);
    % Track using the Hue channel data
    bbox = step(tracker, hueChannel);
    % Insert a bounding box around the object being tracked
    videoOut = insertObjectAnnotation(videoFrame,'rectangle',bbox,'Face');
    % Display the annotated video frame using the video player object
    step(videoPlayer, videoOut);
end
% Release resources
release(videoFileReader);
release(videoPlayer);

Accepted Answer

Anand, 21 February 2014


You can use imaqhwinfo to see which cameras are connected to your machine and then accordingly use the videoinput command to connect to the camera of your choice. Within the loop, use the getsnapshot command to capture an image.
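A minimal sketch of that approach, combining getsnapshot with the tracker from the question's code. This assumes the Image Acquisition Toolbox is installed, that the first 'winvideo' device is your webcam (check the output of imaqhwinfo and adjust the adaptor name and device ID), and that a face is visible in the first frame:

```matlab
% Create a face detector and a histogram-based tracker as in the example.
faceDetector = vision.CascadeObjectDetector();
tracker = vision.HistogramBasedTracker;

% Connect to the webcam -- adaptor name and device ID are assumptions,
% use whatever imaqhwinfo reports on your machine.
vid = videoinput('winvideo', 1);
set(vid, 'ReturnedColorSpace', 'rgb');

% Grab one frame and initialize the tracker from the detected face.
videoFrame = getsnapshot(vid);
bbox = step(faceDetector, videoFrame);
[hueChannel,~,~] = rgb2hsv(videoFrame);
initializeObject(tracker, hueChannel, bbox(1,:));

% Track for a fixed number of frames (there is no "end of file" for a
% live camera, so pick your own stopping condition).
videoPlayer = vision.VideoPlayer;
for k = 1:200
    videoFrame = getsnapshot(vid);
    [hueChannel,~,~] = rgb2hsv(videoFrame);
    bbox = step(tracker, hueChannel);
    videoOut = insertObjectAnnotation(videoFrame,'rectangle',bbox,'Face');
    step(videoPlayer, videoOut);
end

% Release resources.
delete(vid);
release(videoPlayer);
```

The key change from the file-based example is replacing vision.VideoFileReader / step with videoinput / getsnapshot, and replacing the isDone loop condition with an explicit frame count or other stopping rule.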
This video should be helpful:

8 comments

Thank you for your reply. I have managed to get this working. Do you know which function I would need to use to detect emotions in the video? Basically, I'm trying to do emotion recognition, and I want emotions to be detected in the video I record. Would you know anything about this?
Thank you.
If I'm not mistaken, I would need to calculate the distance between the eyes and the corner points of the mouth to determine which expression it is, but how would I go about doing this?
First find the feature points (eyes, nose and mouth) using something like vision.CascadeObjectDetector. Use the ClassificationModel property to control which feature point you're looking for. There are classification models for eye pairs, nose and mouth. Note that this will not always find the feature points, so you may need to play with the parameters or bolster the approach with some post-processing.
Once you've found the features, find the left-most point in the left eye and the right-most point in the right eye. The Euclidean distance between them is what you're looking for. Use some basic image processing to locate the corner points of the mouth and then do the same thing.
Note this approach will need tweaking and work on your part.
Hope this helps.
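A rough sketch of that idea, using the cropped faceImage from the question's code. The 'EyePairBig' and 'Mouth' classification models are real options for vision.CascadeObjectDetector, but approximating the outer eye corners with the left and right edges of the eye-pair bounding box is my assumption, and either detector may return no box or several, so the indexing into row 1 is a simplification:

```matlab
% Detect the eye pair and the mouth within the cropped face image.
eyeDetector   = vision.CascadeObjectDetector('EyePairBig');
mouthDetector = vision.CascadeObjectDetector('Mouth');

eyeBBox   = step(eyeDetector, faceImage);
mouthBBox = step(mouthDetector, faceImage);

% Approximate the outer eye corners with the left and right edges of the
% eye-pair bounding box [x y w h], taken at its vertical midline.
leftEye  = [eyeBBox(1,1),                eyeBBox(1,2) + eyeBBox(1,4)/2];
rightEye = [eyeBBox(1,1) + eyeBBox(1,3), eyeBBox(1,2) + eyeBBox(1,4)/2];
eyeDist  = norm(rightEye - leftEye);    % Euclidean distance

% Same approximation for the mouth corners.
leftMouth  = [mouthBBox(1,1),                  mouthBBox(1,2) + mouthBBox(1,4)/2];
rightMouth = [mouthBBox(1,1) + mouthBBox(1,3), mouthBBox(1,2) + mouthBBox(1,4)/2];
mouthWidth = norm(rightMouth - leftMouth);
```

For better corner localization you would refine these box-edge estimates with image processing inside each box, as the answer suggests.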
OK, I have used vision.CascadeObjectDetector to find specific features on the face (e.g. mouth, eyes, and nose), and then I used the ClassificationModel property to detect specific feature points. I have already computed the distances, using sqrt on the left-most and right-most points of the eyes and on both corner points of the mouth.
Now how do I actually detect my emotion in the video I record? I want it to pick up the emotion I show in the video; for example, if I smile, it should detect that based on the distance I calculated and let me know. This is the main bit I really don't understand. It's called emotion recognition, and I've searched everywhere on the internet but can't find anything. I had already done everything you mentioned in your previous post.
Can you help me with this? Thanks.
Wow, all these Anands - can get confusing. Anyway, I answered this in another post that had essentially the same question: http://www.mathworks.com/matlabcentral/answers/119221#answer_126259
I have managed to read the video frames, but how do I save them out as individual images?
videoFReader = vision.VideoFileReader('face.avi');
videoPlayer = vision.VideoPlayer;
while ~isDone(videoFReader)
    videoFrame = step(videoFReader);
    step(videoPlayer, videoFrame);
end
release(videoPlayer);
release(videoFReader);
You can use this code to save the first frames of the video as JPEG images:
obj = VideoReader('cute.mp4');
vid = read(obj);                 % read all frames into memory
frames = obj.NumberOfFrames;
for x = 1:min(frames, 25)        % save at most 25 frames
    imwrite(vid(:,:,:,x), [num2str(x) '.jpg']);
end


More Answers (0)

Asked: 20 February 2014
Last comment: 21 August 2017
