How can I blur the background?

2 views (last 30 days)
Gf on 3 May 2024
Answered: Image Analyst on 3 May 2024
Hello, I want to blur the background, not the man. Please help as soon as possible.
1 Comment
Walter Roberson on 3 May 2024
Note that in the general case, it is impossible for a program to automatically select which part of the image is "background" and which part is "foreground".
Consider the images produced by SOHO, the solar observatory. The images are primarily intended for observing the Sun, in which case you ignore everything outside of the solar disk. But if you take the same images and mask out the solar disk then you can observe objects moving close to the solar disk -- comets!
So in one case you ignore everything outside the solar disk, and in the other case you ignore the disk itself: the same image serves two different purposes, and the foreground for one purpose is the background for the other.
Therefore it is impossible to automatically select foreground versus background: the "noise" for one purpose might easily be the "signal" for a different purpose.
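As a toy illustration of this point (a sketch only, using a synthetic stand-in image and an arbitrary disk position and radius), the same circular mask and its complement pick out two opposite "foregrounds" from one image:
img = rand(400, 400);                              % stand-in for a SOHO frame
[x, y] = meshgrid(1:400, 1:400);
diskMask = (x - 200).^2 + (y - 200).^2 <= 120^2;   % circular "solar disk"
sunOnly   = img .* diskMask;                       % purpose 1: study the Sun, ignore the rest
cometHunt = img .* ~diskMask;                      % purpose 2: mask the disk, watch what moves near it
montage({sunOnly, cometHunt});                     % same image, two opposite foregrounds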


Answers (2)

DGM on 3 May 2024
There are several answers already. Here's the latest one. See the links therein.

Image Analyst on 3 May 2024
You can try the foreground detector in the Computer Vision Toolbox.
help foregrounddetector
--- foregrounddetector not found. Showing help for vision.ForegroundDetector instead. ---

 ForegroundDetector Detect foreground using Gaussian Mixture Models
    detector = vision.ForegroundDetector returns a foreground detector System
    object, detector, that computes foreground mask using Gaussian Mixture
    Models (GMM) given a series of either grayscale or color video frames.

    detector = vision.ForegroundDetector('PropertyName', PropertyValue, ...)
    returns a foreground detector System object, H, with each specified
    property set to the specified value.

    Step method syntax:

    foregroundMask = step(detector, I) computes the foreground mask for input
    image I, and returns a logical mask where true represents foreground
    pixels. Image I can be grayscale or color. This form of the step function
    call is allowed when AdaptLearningRate is true (default).

    foregroundMask = step(detector, I, learningRate) computes the foreground
    mask for input image I using the LearningRate provided by the user. This
    form of the step function call is allowed when AdaptLearningRate is false.

    System objects may be called directly like a function instead of using
    the step method. For example, y = step(obj, x) and y = obj(x) are
    equivalent.

    ForegroundDetector methods:

    step     - See above description for use of this method
    reset    - Resets the GMM model to its initial state
    release  - Allow property value and input characteristics changes
    clone    - Create foreground detection object with same property values
    isLocked - Locked status (logical)

    ForegroundDetector properties:

    AdaptLearningRate      - Enables the adapting of LearningRate as
                             1/(current frame number) during the training
                             period specified by NumTrainingFrames
    NumTrainingFrames      - Number of initial video frames used for training
                             the background model
    LearningRate           - Learning rate used for parameter updates
    MinimumBackgroundRatio - Threshold to determine the gaussian modes in the
                             mixture model that constitute the background
                             process
    NumGaussians           - Number of distributions that make up the
                             foreground-background mixture model
    InitialVariance        - Initial variance to initialize all distributions
                             that compose the foreground-background mixture
                             model

    Example: Detect moving cars in video
    ------------------------------------
    reader = VideoReader('visiontraffic.avi');
    detector = vision.ForegroundDetector();
    blobAnalyzer = vision.BlobAnalysis(...
        'CentroidOutputPort', false, 'AreaOutputPort', false, ...
        'BoundingBoxOutputPort', true, 'MinimumBlobArea', 250);
    player = vision.DeployableVideoPlayer();

    while hasFrame(reader)
        frame  = readFrame(reader);
        fgMask = step(detector, frame);
        bbox   = step(blobAnalyzer, fgMask);
        % draw bounding boxes around cars
        out = insertShape(frame, 'Rectangle', bbox, 'ShapeColor', 'Yellow');
        step(player, out);   % view results in the video player
    end
    release(player);

    See also: VideoReader, vision.BlobAnalysis, regionprops,
              vision.KalmanFilter, imopen, imclose

    Documentation for vision.ForegroundDetector
       doc vision.ForegroundDetector
I have no idea how well its algorithm will work for your particular image(s).
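If your input really is a video, one possible way to tie the detector to your goal is to blur each frame and then restore the detected foreground pixels. This is only a sketch under that assumption: 'man.avi', the NumTrainingFrames value, and the Gaussian sigma are all placeholders you would tune.
% Hypothetical sketch: blur the background of each video frame using the
% GMM-based detector's foreground mask.
reader   = VideoReader('man.avi');                         % placeholder file name
detector = vision.ForegroundDetector('NumTrainingFrames', 50);
player   = vision.DeployableVideoPlayer();
while hasFrame(reader)
    frame   = readFrame(reader);
    fgMask  = detector(frame);              % true where the detector thinks "foreground"
    blurred = imgaussfilt(frame, 8);        % blur the whole frame; sigma = 8 is arbitrary
    mask3   = repmat(fgMask, [1 1 3]);      % one mask plane per color channel
    out = blurred;
    out(mask3) = frame(mask3);              % keep the detected foreground sharp
    step(player, out);
end
release(player);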
Otherwise you can manually trace the foreground with drawfreehand to make a mask. Invert it so the mask covers the background, blur the entire image, and then, inside the background mask, replace the original pixels with the blurred ones. For example, once you have the background mask:
h = ones(15) / 15^2;               % normalized averaging kernel (unnormalized ones(15) would saturate uint8 output)
filteredRGB = imfilter(originalRGB, h);
[r, g, b]    = imsplit(originalRGB);
[rf, gf, bf] = imsplit(filteredRGB);
% Replace the pixels inside the background mask with their blurred versions.
r(mask) = rf(mask);
g(mask) = gf(mask);
b(mask) = bf(mask);
outputImage = cat(3, r, g, b);
See the attached drawing demos for creating masks and masking images.
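For the drawfreehand step mentioned above (the snippet starts from an existing mask), a minimal sketch might look like the following; 'man.jpg' is a placeholder file name, and it assumes the Image Processing Toolbox ROI functions drawfreehand and createMask.
% Hypothetical sketch of the mask-making step.
originalRGB = imread('man.jpg');    % placeholder file name
imshow(originalRGB);
roi  = drawfreehand;                % trace the man's outline interactively
mask = ~createMask(roi);            % invert: true over the background, false over the man
% Now run the snippet above; the blurred pixels replace only the background.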
