Extended Object Tracking

This example shows you how to track extended objects. Extended objects are objects whose dimensions span multiple sensor resolution cells. As a result, the sensors report multiple detections of the extended objects in a single scan. In this example, you compare the results of tracking extended objects with a point object tracker and an extended object tracker.

Introduction

In traditional tracking, objects are treated as points, meaning that each sensor that detects an object returns a single detection per object per scan. With the development of sensors that have better resolution, such as a high-resolution radar or a lidar, such sensors can return more than one detection per object. In such cases, the object is considered to be extended. For example, the figure depicts multiple detections for a single vehicle that spans multiple radar resolution cells.

The key benefit of using a high-resolution sensor is getting more information about the object, such as its dimensions and orientation. This additional information can improve the probability of detection and reduce the false alarm rate.

Extended objects present new challenges to traditional trackers, because these trackers assume a single detection per object per sensor. In some cases, you can segment the sensor data to provide the point object tracker with a single detection per object. However, by doing so, the benefit of using a high-resolution sensor may be lost.

In contrast, extended object trackers can handle multiple detections per object. In addition, these trackers estimate not only the dynamics of the object, but also its dimensions and orientation. In this example, you use a point object tracker and an extended object tracker to track vehicles around the ego vehicle and you compare the tracking results of both trackers.

Scenario

This example recreates the scenario of the example Sensor Fusion Using Synthetic Radar and Vision Data from Automated Driving System Toolbox™. The scenario contains an ego vehicle and three other vehicles: a vehicle ahead of the ego vehicle in the right lane, a vehicle behind the ego vehicle in the right lane, and an overtaking vehicle. The overtaking vehicle starts behind the other vehicles, moves to the left lane to pass them, and ends up in the right lane ahead of them.

In this example, you simulate an ego vehicle that has 6 radar sensors and 2 vision sensors covering a 360-degree field of view. The sensor coverage areas have some overlap and some gaps. The ego vehicle is equipped with a long-range radar sensor and a vision sensor on both the front and the back of the vehicle. Each side of the vehicle has two short-range radar sensors, each covering 90 degrees. One sensor on each side covers from the middle of the vehicle to the back. The other sensor on each side covers from the middle of the vehicle forward.
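
For reference, a single front long-range radar from such a configuration could be defined as in the following sketch. The property values shown here are illustrative assumptions, not the values used by this example; the actual sensor definitions are created by the helperCreateScenario function.

% Illustrative sketch only: approximate front long-range radar definition.
% The values below are assumptions; see helperCreateScenario for the
% configuration actually used in this example.
frontRadar = radarDetectionGenerator( ...
    'SensorIndex', 1, ...             % Unique index among the 8 sensors
    'SensorLocation', [3.7 0], ...    % Mounted at the front bumper, [x y] in meters
    'Height', 0.2, ...                % Mounting height in meters
    'Yaw', 0, ...                     % Facing forward
    'FieldOfView', [20 5], ...        % [azimuth elevation] coverage in degrees
    'MaxRange', 174);                 % Long-range radar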

% Create the scenario
path = fullfile(matlabroot,'examples','driving_fusion','main');
addpath(path)
[scenario, egoVehicle, sensors] = helperCreateScenario;

Point Object Tracker

The multiObjectTracker System object™ is a point object tracker. It assumes that every track can be detected at most once by a sensor in a scan. In this case, the simulated radar sensors have a high enough resolution to generate multiple detections per object. If these detections are not clustered, the tracker generates multiple tracks per object. Clustering returns one detection per cluster, at the cost of having a larger uncertainty covariance and losing information about the true object dimensions. It also makes it hard to distinguish between two objects when they are close to each other, for example, when one vehicle passes another vehicle.
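
In this example, the detections are clustered by the helperClusterDetections function before they are passed to the point tracker. The following sketch shows the general idea of such distance-based clustering. It is a hypothetical simplification, not the implementation of helperClusterDetections: detections whose positions lie within a given radius of one another are merged into a single detection that carries the mean measurement of the cluster.

function clusters = clusterByDistanceSketch(detections, clusterRadius)
% Hypothetical, simplified distance-based clustering (not the shipped
% helperClusterDetections). Detections whose positions lie within
% clusterRadius of one another are merged into one detection that carries
% the mean measurement of the cluster.
N = numel(detections);
pos = zeros(2, N);
for i = 1:N
    pos(:, i) = detections{i}.Measurement(1:2); % [x; y] position components
end
assigned = false(1, N);
clusters = {};
for i = 1:N
    if assigned(i)
        continue
    end
    % Find all unassigned detections within clusterRadius of detection i
    d = vecnorm(pos - pos(:, i));
    members = find(~assigned & d < clusterRadius);
    assigned(members) = true;
    % Represent the cluster by one detection with the mean measurement
    clusterDet = detections{members(1)};
    meas = zeros(size(clusterDet.Measurement));
    for j = members
        meas = meas + detections{j}.Measurement;
    end
    clusterDet.Measurement = meas / numel(members);
    clusters{end + 1} = clusterDet; %#ok<AGROW>
end
end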

% Create a multiObjectTracker
tracker = multiObjectTracker('FilterInitializationFcn', @initSimDemoFilter, ...
    'AssignmentThreshold', 30, 'ConfirmationParameters', [4 5], ...
    'NumCoastingUpdates', 3);
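
The FilterInitializationFcn property points to a function that creates a tracking filter from the first detection assigned to a new track. The following is a minimal sketch of what such a function can look like, assuming that each clustered detection carries an [x;y;vx;vy] measurement in ego Cartesian coordinates; the initSimDemoFilter function used by this example may differ in its details.

function filter = initSimDemoFilterSketch(detection)
% Minimal sketch of a filter initialization function. It assumes a clustered
% [x;y;vx;vy] measurement; the actual initSimDemoFilter may differ.
% Use a 2-D constant velocity model; the trackingKF state is [x;vx;y;vy].
H = [1 0 0 0; 0 0 1 0; 0 1 0 0; 0 0 0 1];   % Maps state to measurement
filter = trackingKF('MotionModel', '2D Constant Velocity', ...
    'State', H' * detection.Measurement, ...
    'MeasurementModel', H, ...
    'StateCovariance', H' * detection.MeasurementNoise * H, ...
    'MeasurementNoise', detection.MeasurementNoise);
end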

% Use 'Ego Cartesian' as the sensor coordinate frame for the point tracker
for k = 1:8
    sensors{k}.DetectionCoordinates = 'Ego Cartesian';
end

% Reset the random number generator for repeatable results
S = rng;
rng(2018)
snapTimes = [2.6,5.0,7.2,9.1,inf];
snapIndex = 1;

% Create the display and return a handle to the bird's-eye plot panels
BEPS = helperCreateDemoDisplay(egoVehicle, sensors);

Run the scenario

while advance(scenario) && ishghandle(BEPS{1}.Parent)
    % Get the scenario time
    time = scenario.SimulationTime;

    % Get the poses of the other vehicles in ego vehicle coordinates
    ta = targetPoses(egoVehicle);

    % Collect detections from the ego vehicle sensors
    [detections,isValidTime] = helperDetect(sensors, ta, time);

    % Update the tracker if there are new detections
    if any(isValidTime)
        % Detections must be clustered first for the point tracker
        detectionClusters = helperClusterDetections(detections, egoVehicle.Length);

        % Update the tracker
        confirmedTracks = updateTracks(tracker, detectionClusters, time);

        % Update bird's-eye plot
        helperUpdateDisplayPoint(BEPS, egoVehicle, sensors, detections, confirmedTracks);
    end

    % Snap a figure every time the overtaking vehicle passes another vehicle
    if time > snapTimes(snapIndex)
        snapIndex = snapIndex + 1;
        snapnow
    end
end

These results show that, with clustering, the point tracker can keep track of the objects in the scene. However, they also show that the track associated with the overtaking vehicle (yellow) moves from the front of the vehicle at the beginning of the scenario to its back at the end. At the beginning of the scenario, the overtaking vehicle is behind the ego vehicle (blue), so radar and vision detections are made of its front. As the overtaking vehicle passes the ego vehicle, radar detections are made of its side and then of its back, and the track moves to the back of the vehicle.

You can also see that the clustering is not perfect. When the overtaking vehicle passes the vehicle that is behind the ego vehicle (purple), both tracks are slightly shifted to the left due to the imperfect clustering. Similarly, throughout the scenario, the clustering sometimes fails to group all the radar detections from the same object into a single cluster. As a result, the point tracker generates two tracks for the overtaking vehicle.

Extended Object Tracker

To create an extended object tracker, you first have to specify a model for the extended object. The following model defines the extended objects as rectangular, with similar dimensions to the ego vehicle. Each extended object is assumed to be making a coordinated turn about its pivot, located at the center of the rear axle.

model = extendedCoordinatedTurn( ...
    'Pivot', 0.787, ...
    'StdYawAcceleration', 0.1, ...
    'MeanLength', egoVehicle.Length, ...
    'MeanWidth', egoVehicle.Width, ...
    'StdLength', 0.02, ...
    'StdWidth', 0.01, ...
    'CorrLengthWidth', 0.5);

Create the extended object tracker by defining the model used for tracking, the number of particles used for tracked objects and for undetected objects, and the sampling algorithm. In this example, you use Gibbs sampling [1] to associate the detections with the extended object tracks.
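
Gibbs sampling draws the detection-to-track assignments one detection at a time, each time sampling from the conditional distribution of that detection's assignment given the current assignments of all other detections. The following sketch illustrates this idea for a generic, user-supplied conditional likelihood function. It is a simplified, hypothetical illustration and not the sampler implemented by helperExtendedObjectTracker.

function assignment = gibbsAssignmentSketch(condLik, numDets, numCells, numIters)
% Hypothetical sketch of Gibbs sampling over detection-to-cell assignments.
% condLik(i, c, assignment) must return a value proportional to the
% likelihood of assigning detection i to cell c (a track or clutter), given
% the assignments of all other detections (it must ignore assignment(i)).
assignment = randi(numCells, numDets, 1);        % Random initial assignment
for iter = 1:numIters
    for i = 1:numDets
        % Evaluate the conditional distribution of detection i's assignment
        w = zeros(1, numCells);
        for c = 1:numCells
            w(c) = condLik(i, c, assignment);
        end
        w = w / sum(w);
        % Sample the new cell for detection i from this distribution
        assignment(i) = find(rand <= cumsum(w), 1, 'first');
    end
end
end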

sampler = gibbs; % Creates a Gibbs sampler
tracker = helperExtendedObjectTracker( ...
    'Model', model, ...
    'NumTrackingParticles', 5000, ...
    'MaxNumUndetectedParticles', 1e4, ...
    'SamplingMethod', sampler);
% Reset the scenario, the display, and the random number generator
restart(scenario);
for i = 1:numel(BEPS)
    clearPlotterData(BEPS{i});
end
rng(2018)
snapIndex = 1;

% For this tracker, the sensors report in sensor coordinates
for k = 1:6
    release(sensors{k});
    sensors{k}.DetectionCoordinates = 'Sensor Spherical';
end

for k = 7:8
    release(sensors{k});
    sensors{k}.DetectionCoordinates = 'Sensor Cartesian';
end

Run the scenario

while advance(scenario) && ishghandle(BEPS{1}.Parent)
    % Get the scenario time
    time = scenario.SimulationTime;

    % Get the poses of the other vehicles in ego vehicle coordinates
    ta = targetPoses(egoVehicle);

    % Collect detections from the ego vehicle sensors
    [detections,isValidTime] = helperDetect(sensors, ta, time);

    % Update the tracker if there are new detections
    if any(isValidTime)
        % Update the extended tracker with all the detections. Note that
        % there is no need to cluster the detections before passing them to
        % the tracker.
        confirmedTracks = updateTracks(tracker, detections, time, egoVehicle, sensors);

        % Coordinate transform the tracks output from scenario frame to ego
        % vehicle frame
        confirmedTracks = toEgoCoordinates(egoVehicle, confirmedTracks);

        % Update the bird's-eye plot
        helperUpdateDisplayExtended(BEPS, egoVehicle, sensors, detections, confirmedTracks);
    end

    % Snap a figure every time the overtaking vehicle passes another vehicle
    if time > snapTimes(snapIndex)
        snapIndex = snapIndex + 1;
        snapnow
    end
end
% Return the random number generator to its previous state
rng(S)
rmpath(path)

These results show that the extended object tracker can handle multiple detections per object per sensor, without the need to cluster these detections first. Moreover, by using the multiple detections, the tracker estimates the position, velocity, dimensions and orientation of each object.

Notice that the estimated tracks, depicted by their outlines, fit the simulated ground truth objects, depicted by the solid color patches, well. In particular, the track associated with the overtaking vehicle coincides with the ground truth of that vehicle throughout the first two-thirds of the scenario. Only when the ground truth vehicle gets far from the ego vehicle, which reduces the number of radar detections, does the estimated shape stop coinciding with the ground truth. At closer ranges, when the overtaking vehicle generates many detections, the extended object tracker can exploit the rich sensor data to improve its estimate.

Summary

This example showed how to use an extended object tracker to track objects that return multiple detections in a single sensor scan. You can use the extended object tracker with high-resolution sensors like a radar or a lidar.

References

[1] Granström, Karl, Lennart Svensson, Stephan Reuter, Yuxuan Xia, and Maryam Fatemi. "Likelihood-Based Data Association for Extended Object Tracking Using Sampling Methods," IEEE Transactions on Intelligent Vehicles. Vol. 3, No. 1, March 2018.

[2] Granström, Karl, Marcus Baum, and Stephan Reuter. "Extended Object Tracking: Introduction, Overview and Applications," Journal of Advances in Information Fusion. Vol. 12, No. 2, December 2017.