estimateFlow

Estimate optical flow between two frames

Since R2024b

    Description

    flow = estimateFlow(flowModel,I) estimates the optical flow between the current frame I and the previous frame using the recurrent all-pairs field transforms (RAFT) deep learning algorithm.

    RAFT optical flow estimation outperforms methods such as Farneback by delivering higher accuracy, particularly in regions with minimal texture and under challenging camera motion.

    flow = estimateFlow(flowModel,I,Name=Value) specifies options using one or more name-value arguments in addition to the previous syntax. For example, MaxIterations=10 sets the number of refinement iterations to 10.

    Examples


    Create a RAFT optical flow object.

    flowModel = opticalFlowRAFT;

    Create an object to read the input video file.

    vidReader = VideoReader("visiontraffic.avi",CurrentTime=11);

    Create a custom figure window to visualize the optical flow vectors.

    h = figure;
    movegui(h);
    hViewPanel = uipanel(h, Position=[0 0 1 1], Title="Plot of Optical Flow Vectors");
    hPlot = axes(hViewPanel);

    Read consecutive image frames to estimate optical flow. Display the current frame and overlay the optical flow vectors using a quiver plot. The estimateFlow function calculates the optical flow between two consecutive frames.

    Note that the function internally stores the previous frame and uses it implicitly for optical flow estimation. Consequently, the first call on a sequence of frames returns zero flow: with no genuine previous frame available, the initial frame is treated as both the current and previous frame, so no motion is detected between the two. This behavior is consistent with the argument structure and behavior of established optical flow estimation methods, such as opticalFlowFarneback.

    while hasFrame(vidReader)
        frame = readFrame(vidReader);
        flow = estimateFlow(flowModel,frame);
    
        imshow(frame)
        hold on
        plot(flow,DecimationFactor=[10 10],ScaleFactor=0.45,Parent=hPlot,Color="g");
        hold off
        pause(10^-3)
    end

    Reset the opticalFlowRAFT object after the video processing has completed. This clears the internal state of the object, including the saved previous frame.

    reset(flowModel);

    Input Arguments


    Optical flow object, specified as an opticalFlowRAFT object.

    Current video frame, specified as a 2-D grayscale or RGB image. When the input image is of type uint8 or int16, the pixel values must be in the range [0,255]. When the input image is of type single or double, the pixel values must be in the range [0,1]. The RAFT model requires the shortest side of the input image to be greater than 57 pixels.
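    The range requirements above can be met by rescaling the frame before passing it in. A minimal sketch, assuming the Computer Vision Toolbox is installed; im2single rescales uint8 pixel values from [0,255] to [0,1]:

    ```matlab
    % Read a sample RGB uint8 image that ships with MATLAB.
    frame = imread("peppers.png");

    % Convert to single precision; pixel values are rescaled to [0,1],
    % which satisfies the range requirement for single/double input.
    frameSingle = im2single(frame);

    % The first call on a sequence returns zero flow, because the frame
    % is treated as both the current and previous frame.
    flowModel = opticalFlowRAFT;
    flow = estimateFlow(flowModel,frameSingle);
    ```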

    When initializing motion estimation in a video, the first frame is duplicated as its own "previous frame" to ensure the calculated motion (optical flow) starts from zero, as there is no actual prior frame to compare with.

    Data Types: single | double | int16 | uint8

    Name-Value Arguments

    Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

    Example: estimateFlow(flowModel,I,MaxIterations=10) sets the number of refinement iterations to 10.

    Number of refinement iterations, specified as an integer. Increasing the number of refinement iterations enhances the precision of the optical flow estimation from its initial prediction by iteratively refining the calculation. Decreasing this value accelerates the algorithm's execution but compromises on accuracy. A typical value is 12.
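    For example, to trade execution speed for accuracy, you can raise the iteration count above its typical value. A minimal sketch, assuming I is a frame that satisfies the input requirements:

    ```matlab
    flowModel = opticalFlowRAFT;

    % More refinement iterations improve accuracy at the cost of speed;
    % fewer iterations run faster but are less precise.
    flow = estimateFlow(flowModel,I,MaxIterations=20);
    ```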

    Acceptable difference between consecutive refinement iterations, specified as a scalar. Starting from its initial prediction, the algorithm refines the optical flow estimate iteratively. If the change in optical flow between two successive updates falls below this tolerance, the refinement stops. Larger tolerance values speed up execution at the expense of accuracy.
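    A loose tolerance can be used to stop refinement early. A minimal sketch, assuming the name-value argument is called IterationTolerance (the section above does not state the argument name explicitly) and that I is a valid input frame:

    ```matlab
    flowModel = opticalFlowRAFT;

    % A larger tolerance halts refinement sooner: faster, less accurate.
    % The argument name IterationTolerance is an assumption here.
    flow = estimateFlow(flowModel,I,IterationTolerance=1e-2);
    ```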

    Execution environment in which to run the model's inference, specified as "CPU", "GPU", or "auto".

    • "auto" — Use a local GPU if one is available. Otherwise, use the local CPU.

    • "CPU" — Use the local CPU.

    • "GPU" — Use the local GPU.

    Acceleration, specified as "auto" or "none". The "auto" acceleration setting increases memory usage but also speeds up execution.
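    The two arguments above can be combined in a single call. A minimal sketch, assuming the arguments are named ExecutionEnvironment and Acceleration (the argument names are not stated explicitly in the section above) and that I is a valid input frame:

    ```matlab
    flowModel = opticalFlowRAFT;

    % Run inference on the GPU if requested, with automatic acceleration,
    % which trades extra memory for faster execution.
    % Both argument names are assumptions here.
    flow = estimateFlow(flowModel,I, ...
        ExecutionEnvironment="GPU",Acceleration="auto");
    ```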

    Output Arguments


    Optical flow velocity matrices, returned as an opticalFlow object.

    Tips

    • Using RAFT for optical flow estimation on a GPU requires a minimum of 12 GB of memory.

    • The RAFT model, being fully convolutional, can process images of any size in theory, with the only limitation being the available GPU memory.

    Extended Capabilities

    Version History

    Introduced in R2024b