Build Occupancy Map from 3-D Lidar Data using SLAM

This example demonstrates how to build a 2-D occupancy map from 3-D Lidar data using a simultaneous localization and mapping (SLAM) algorithm. This occupancy map is useful for localization and path planning for vehicle navigation. You can also use this map as a prebuilt map to incorporate sensor information.

In this example, you process point clouds incrementally to estimate the trajectory of a vehicle with a mounted lidar sensor. These estimates are prone to accumulating drift over time, which reduces the accuracy of the built map. To correct drift, use loop closure detection and pose graph optimization. Finally, use the optimized poses to build an occupancy map. The lidar point cloud data in this example has been collected from a scene in a simulation environment built using the Unreal Engine® from Epic Games®.

In this example you learn how to:

  1. Set up a scenario in the simulation environment with different scenes, vehicles, and sensor configurations, and collect data.

  2. Estimate the trajectory of a vehicle using the phase correlation registration technique.

  3. Use a SLAM algorithm to perform loop closure detection and pose graph optimization.

  4. Use the estimated poses to build and visualize an occupancy map.

Set Up Scenario in Simulation Environment

Load the prebuilt Large Parking Lot scene. The Select Waypoints for Unreal Engine Simulation example describes how to interactively select a sequence of waypoints from a scene and how to generate a reference vehicle trajectory. Use this approach to select a sequence of waypoints and generate a reference trajectory for the ego vehicle. Add additional vehicles by specifying the parking poses of the vehicles. Visualize the reference path and the parked vehicles on a 2-D bird's-eye view of the scene.

% Load reference path
data = load('parkingLotReferenceData.mat');

% Set reference trajectory of the ego vehicle
refPosesX = data.refPosesX;
refPosesY = data.refPosesY;
refPosesT = data.refPosesT;

% Set poses of the parked vehicles
parkedPoses = data.parkedPoses([18 21],:);

% Display the reference path and the parked vehicle locations
sceneName = 'LargeParkingLot';
hScene = figure('Name', 'Large Parking Lot','NumberTitle','off');
helperShowSceneImage(sceneName);
hold on
plot(refPosesX(:,2),refPosesY(:,2),'LineWidth',2,'DisplayName','Reference Path');
scatter(parkedPoses(:,1),parkedPoses(:,2),[],'filled','DisplayName','Parked Vehicles');
xlim([-60 40])
ylim([10 60])
hScene.Position = [100, 100, 1000, 500]; % Resize figure
legend
hold off

Open the model and add the parked vehicles by using the helperAddParkedVehicles function.

modelName = 'GenerateLidarDataOfParkingLot';
open_system(modelName);
snapnow;

helperAddParkedVehicles(modelName,parkedPoses);

Record and Visualize Data

Set up a simple model with a hatchback vehicle moving along the specified reference path by using the Simulation 3D Vehicle with Ground Following block. Mount a lidar sensor on the roof center of the vehicle using the Simulation 3D Lidar block. Record and visualize the sensor data. The recorded data is used to develop the localization algorithm.

close(hScene)

if ~ispc
    error("Unreal Engine Simulation is supported only on Microsoft" + char(174) + " Windows" + char(174) + ".");
end

% Run simulation
simOut = sim(modelName);

close_system(modelName,0);

% Extract lidar data
ptCloudArr = helperGetPointClouds(simOut);

% Extract ground truth for the lidar data as an array of rigidtform3d objects
groundTruthPosesLidar = helperGetLidarGroundTruth(simOut);

Vehicle Odometry Using Phase Correlation Algorithm

To build a map using the collected point clouds, you must estimate the relative poses between successive point clouds. From these relative poses, you determine the estimated trajectory of the vehicle. This approach of incrementally estimating the trajectory is called odometry.

To build a 2-D occupancy grid map, 2-D pose estimates are sufficient. Hence, convert the point clouds into 2-D occupancy grid images by projecting the points onto the ground plane. Use a phase correlation registration algorithm to calculate the 2-D relative transformation between two images. By successively composing these transformations, you transform each point cloud back to the reference frame of the first point cloud. This technique is also used in pcregistercorr [1].
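
To see what phase correlation recovers in the simplest case, the following minimal sketch (not part of this example's data; it uses the standard cameraman.tif demo image and a synthetic shift) uses imregcorr, the same function used in the registration loop later, to register an image against a translated copy of itself:

% Minimal sketch: recover a known translation between two images with imregcorr
fixed  = imread('cameraman.tif');
moving = imtranslate(fixed,[12 -7]);              % apply a known pixel shift
tformEst = imregcorr(moving,fixed,'translation')  % recovered translation approximately undoes the shift

The registration loop later in this example applies the same idea to occupancy grid images, with a spatial referencing object so that translations are reported about the sensor location.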

In summary, these are the steps used to calculate the vehicle odometry:

  1. Preprocess the point cloud.

  2. Create an occupancy grid image of the point cloud by determining the occupancy based on the height (Z-axis) value of the points.

  3. Register two occupancy grid images created from point clouds that correspond to the same scene. Use the imregcorr function to register the grid images and estimate the pose.

  4. Repeat these steps on successive point clouds to estimate the relative poses between them (a short sketch after this list previews how the relative poses chain into a trajectory).
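
The following is a minimal sketch (hypothetical values, not from this data set) of that chaining: each relative [x y yaw] pose added with addRelativePose builds on the previous node, and the nodes method returns every pose expressed in the frame of the first node.

% Minimal sketch: accumulate relative poses into a trajectory with poseGraph
pg = poseGraph;
pg.addRelativePose([1 0 pi/18]);    % move 1 m forward, then turn 10 degrees
pg.addRelativePose([1 0 pi/18]);    % repeat the motion
absolutePoses = pg.nodes()          % each row is an [x y theta] pose relative to the first node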

Preprocess Point Cloud

The preprocessing step involves the following operations:

  1. Remove the outliers.

  2. Clip the point cloud to concentrate on the area of interest.

  3. Segment and remove the ego vehicle.

ptCloud = ptCloudArr(1);

% Remove outliers in the point cloud
ptCloudDenoised = pcdenoise(ptCloud);

% Clip the point cloud. This is done to improve the processing performance
% and also to include only the areas of interest

% Set the limits to select the point cloud
selectLimitX = [-40 40];
selectLimitY = [-40 40];

% To exclude the ground plane from processing, add a buffer value to
% the lower Z limit of the point cloud. Clip anything above 5 m.
minZValue = ptCloudDenoised.ZLimits(1) + 0.2;
maxZValue = 5;
selectLimitZ = [minZValue maxZValue];

roi = [selectLimitX selectLimitY selectLimitZ];
indices = findPointsInROI(ptCloudDenoised,roi);
ptCloudClipped = select(ptCloudDenoised,indices);

% Segment and remove ego vehicle

% Set location of the sensor and vehicle radius
sensorLocation = [0 0 0];
vehicleRadius = 3.5;

% Find the points that belong to the ego vehicle and create a point cloud
% that excludes them
egoIndices = findNeighborsInRadius(ptCloudClipped,sensorLocation,vehicleRadius);

egoFixed = false(ptCloudClipped.Count,1);
egoFixed(egoIndices) = true;
ptCloudProcessed = select(ptCloudClipped,~egoFixed);

% Visualize and compare the point cloud before and after preprocessing.
figure('Name','Processed Point Clouds','NumberTitle','off');
pcViewAxes = pcshowpair(ptCloud, ptCloudProcessed);
title('Point cloud before and after preprocessing');
xlabel(pcViewAxes,'X (m)');
ylabel(pcViewAxes,'Y (m)');
zlabel(pcViewAxes,'Z (m)');

Create Occupancy Grid Image

Follow the steps below to create the occupancy grid images from the point clouds by projecting the points onto the ground plane.

  1. Define a grid on the X-Y plane with appropriate resolution

  2. Calculate a probability value for each point in the point cloud based on its Z-value. This probability is determined by a lower and an upper limit, with all Z-values between the limits scaled into the range [0, 1]. Z-values below the lower limit map to 0, and Z-values above the upper limit map to 1 (see the sketch after this list).

  3. Accumulate the probabilities in each grid cell corresponding to the points and compute the average. Use pcbin to discretize the grid and obtain the indices of the grid cells for processing.

  4. Visualize the occupancy grid image that represents a top view of the scene.
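
As a quick illustration of the clamping and scaling described in step 2, this minimal sketch (hypothetical Z-values, with the same 1 m and 5 m limits used below) shows how rescale maps heights to occupancy probabilities:

% Minimal sketch: Z-values are clamped to the limits and then scaled into [0 1]
zSample = [-0.5 1 2 3 5 7];
probSample = rescale(zSample,'InputMin',1,'InputMax',5)
% probSample is [0 0 0.25 0.5 1 1]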

% Set the occupancy grid size to 100 m with a resolution of 0.2 m
gridSize = 100;
gridStep = 0.2;

% The occupancy grid is created by scaling the points from 1 m to 5 m in
% height to probability values in the range [0 1]
zLimits = [1 5];

% Because the Z-values determine the occupancy in the grid image, shift the
% point cloud up by the sensor height so that more points are included in
% the occupancy calculation

% Set the sensor height. This information can be obtained from the lidar
% point cloud
sensorHeight = groundTruthPosesLidar(1).Translation(3);

locationPts = ptCloudProcessed.Location;
locationPts(:,3) = locationPts(:,3) + sensorHeight;
ptCloudHeightAdjusted = pointCloud(locationPts);

% Calculate the number of bins
spatialLimits = [-gridSize/2 gridSize/2; -gridSize/2 gridSize/2; ptCloudHeightAdjusted.ZLimits];

gridXBinSize = round(abs(spatialLimits(1,2) - spatialLimits(1,1)) / gridStep);
gridYBinSize = round(abs(spatialLimits(2,2) - spatialLimits(2,1)) / gridStep);

numBins = [gridXBinSize gridYBinSize 1];

% Find bin indices
binIndices = pcbin(ptCloudHeightAdjusted,numBins,spatialLimits,'BinOutput',false);

% Preallocate occupancy grid
occupancyGrid = zeros(gridXBinSize,gridYBinSize,'like',ptCloudHeightAdjusted.Location);
gridCount = zeros(gridXBinSize,gridYBinSize);

% Scale the Z values of the points to the probability range [0 1]
zValues = rescale(ptCloudHeightAdjusted.Location(:,3),'InputMin', zLimits(1),'InputMax', zLimits(2));

for idx = 1:numel(binIndices)
    binIdx = binIndices(idx);
    if ~isnan(binIdx)
        occupancyGrid(binIdx) = occupancyGrid(binIdx) + zValues(idx);
        gridCount(binIdx) = gridCount(binIdx) + 1;
    end
end

gridCount(gridCount == 0) = 1;

occupancyGrid = occupancyGrid ./ gridCount;

% Visualize the created occupancy grid
figure;
subplot(1,2,1);
pcshow(ptCloudProcessed); view(2);
title('Point cloud bird''s-eye view')
subplot(1,2,2);
imshow(imrotate(occupancyGrid, 90)); 
tHandle = title('Occupancy grid image');
tHandle.Color = [1 1 1];

Projecting the points onto the ground plane works well if the ground plane is flat. The data collected in this example has a relatively flat ground plane. If the ground plane is not flat, transform the ground plane so that it is parallel to the X-Y plane. For more information, see the Register Lidar Moving Point Cloud to Fixed Point Cloud example and the Tips section of pcregistercorr.
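
If your data does have a tilted ground plane, one possible way to level it (a minimal sketch, not part of this example; ptCloudTilted is a hypothetical point cloud variable, and axang2rotm requires Navigation Toolbox or Robotics System Toolbox) is to fit the plane with pcfitplane and rotate its normal onto the Z-axis:

% Minimal sketch: estimate the ground plane and rotate it parallel to the X-Y plane
maxDistance = 0.2;                                      % plane inlier tolerance in meters
groundPlane = pcfitplane(ptCloudTilted,maxDistance,[0 0 1]);
n = groundPlane.Normal / norm(groundPlane.Normal);      % unit normal of the fitted plane
rotAxis  = cross(n,[0 0 1]);
rotAngle = acos(max(min(dot(n,[0 0 1]),1),-1));
if norm(rotAxis) > eps
    R = axang2rotm([rotAxis/norm(rotAxis) rotAngle]);   % rotation taking the normal to +Z
else
    R = eye(3);                                         % ground plane is already level
end
ptCloudLevel = pctransform(ptCloudTilted,rigidtform3d(R,[0 0 0]));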

Register and Estimate Poses

Use poseGraph (Navigation Toolbox) to store the estimated poses.

% Pose graph for registration poses
regEstPoseGraph = poseGraph;
relTrans = groundTruthPosesLidar(1).Translation(1:2);
relYaw = atan2(groundTruthPosesLidar(1).R(2,1),groundTruthPosesLidar(1).R(1,1));

Then use imregcorr to estimate the relative transformation. Add the estimated relative poses to the pose graph using the addRelativePose (Navigation Toolbox) method. Visualize the poses stored in the pose graph as the algorithm progresses.

% Get ground truth pose graph
gTPoseGraph = helperGetGTPoseGraph(groundTruthPosesLidar);

% Obtain the nodes from the pose graphs
gTNodes = gTPoseGraph.nodes();

% Plot the ground truth trajectory
figure('Name','Vehicle Trajectory','NumberTitle',"off"); 
compareTrajAxes = axes;
compareTrajAxes.XLim = [-10 60];
compareTrajAxes.YLim = [-20 5];
compareTrajAxes.XGrid = 'on';
compareTrajAxes.YGrid = 'on';
title('Compare Trajectories')

hold on; 
plot(gTNodes(:,1),gTNodes(:,2),'Color','blue','LineWidth',3,'DisplayName','Ground Truth Trajectory'); 

% Visualize the estimated trajectory and the location of the vehicle when
% running the algorithm
estimatedTraj = plot(gTNodes(1,1),gTNodes(1,2),'Color','green','LineWidth',3 ,'DisplayName','Estimated Trajectory');
currentLoc = scatter(gTNodes(1,1),gTNodes(1,2),50,'red','filled','DisplayName','Vehicle Location');

legend;

% Process every third frame (set by skipFrames) to speed up the processing
% without significantly affecting the accuracy.
skipFrames = 3;

numFrames = numel(groundTruthPosesLidar);

occGridFixed = occupancyGrid';

for frameIdx = 1+skipFrames:skipFrames:numFrames
    
    ptCloud = ptCloudArr(frameIdx);
    
    ptCloudProcessed = helperPreprocessPtCloud(ptCloud);
    
    occGridMoving = helperCreateOccupancyGrid(ptCloudProcessed,gridSize,gridStep,zLimits, sensorHeight);
    
    % Registration
    
    % imregcorr reports the transformation from the origin. Hence, provide a
    % spatial referencing object so that the sensor is at the center of the images
    Rgrid = imref2d(size(occGridMoving));
    offsetX = mean(Rgrid.XWorldLimits);
    Rgrid.XWorldLimits = Rgrid.XWorldLimits - offsetX;
    offsetY = mean(Rgrid.YWorldLimits);
    Rgrid.YWorldLimits = Rgrid.YWorldLimits - offsetY;    
    
    % Find relative pose in image coordinates
    relPoseImage = imregcorr(occGridMoving,Rgrid,occGridFixed,Rgrid);
    
    % Convert translation to grid coordinates
    transInGrid = relPoseImage.T(3,1:2) .* gridStep;
    
    % The transformation is a rigid transformation. Since the relative pose is
    % an affine2d object, convert it to a rigid2d object
    rotations = relPoseImage.T(1:2,1:2);
    [u,~,v] = svd(rotations);
    relPose = rigid2d(u*v', transInGrid);
    
    % Add pose to registration estimate pose graph
    relTrans = relPose.Translation(1:2);
    relYaw = -atan2(relPose.T(2,1), relPose.T(1,1));     
    regEstPoseGraph.addRelativePose([relTrans relYaw]);  
    
    occGridFixed = occGridMoving;
    
    % Update the estimated trajectory and vehicle location
    estimatedNode = regEstPoseGraph.nodes(regEstPoseGraph.NumNodes);
    estimatedTraj.XData(end+1) = estimatedNode(1);
    estimatedTraj.YData(end+1) = estimatedNode(2);
    set(currentLoc,'XData',estimatedNode(1),'YData',estimatedNode(2));
    drawnow;
end
hold off;

Notice that drift has accumulated over time, which makes the trajectory deviate from the ground truth and terminate at a different end point. This drift occurs due to the errors accumulated in the estimation of poses.

Detect Loops and Correct Drift

A graph SLAM technique can be used to correct the accumulated drift and obtain an accurate map. Use the lidarSLAM (Navigation Toolbox) object and its methods to detect loops and correct the drift in the trajectory through pose graph optimization. When a new scan is added to the lidarSLAM (Navigation Toolbox) object, the following steps occur:

  1. Pose Estimation: Uses imregcorr for registration.

  2. Build pose graph: Stores poses in a poseGraph (Navigation Toolbox) object.

  3. Loop closure detection: Loops are places that have been previously visited. The lidarSLAM (Navigation Toolbox) object uses the scan registration methods matchScansGrid (Navigation Toolbox) and matchScans (Navigation Toolbox) to detect loops. This process is referred to as loop closure detection. The object performs loop closure detection by matching the current scan against previous scans within a small radius around the current vehicle location. Detected loops are added to the pose graph.

  4. Pose graph optimization: Based on the detected loops, the object optimizes the pose graph to find a new set of vehicle poses with reduced drift.

  5. Build a map.

The lidarSLAM parameters are set empirically based on the data.
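
The loop closure threshold is compared against the match score returned by the scan registration. The following minimal sketch (a synthetic scan, not data from this example) shows how matchScansGrid reports a relative pose and a score for a pair of scans:

% Minimal sketch: register a scan against a slightly shifted copy of itself
angles   = linspace(-pi/2,pi/2,180);
ranges   = 10 + 2*cos(4*angles);                  % synthetic structured scan
refScan  = lidarScan(ranges,angles);
currScan = transformScan(refScan,[0.1 0.05 0]);   % perturb the scan slightly
[relPose,stats] = matchScansGrid(currScan,refScan);
disp(relPose)      % estimated [x y theta] pose of currScan relative to refScan
disp(stats.Score)  % higher scores indicate a more confident match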

% Set lidarSLAM Parameters
mapResolution = 2;
maxLidarRange = 50;
loopClosureThreshold = 2450;
loopClosureRadius = 3;

% Initialize lidarSLAM object
slamAlg = lidarSLAM(mapResolution,maxLidarRange);
slamAlg.ScanRegistrationMethod = 'PhaseCorrelation';
slamAlg.LoopClosureThreshold = loopClosureThreshold;
slamAlg.LoopClosureSearchRadius = loopClosureRadius;

Use the addScan (Navigation Toolbox) method of the lidarSLAM (Navigation Toolbox) object to incrementally provide the data for registration and loop closure detection. Because the addScan (Navigation Toolbox) method only accepts a lidarScan (Navigation Toolbox) object, you must convert the grid image to a lidarScan (Navigation Toolbox) object.

occGridFixed = occupancyGrid';

% Create a lidarScan object from occupancy grid image
[rows,cols,values] = find(occGridFixed);
xLocs = -gridSize/2 + rows * gridStep;
yLocs = -gridSize/2 + cols * gridStep;

scan = lidarScan([yLocs xLocs]);

% Visualize the grid image and lidar scan
figure('Name','Occupancy image and lidar scan','NumberTitle','off');
subplot(1,2,1);
imshow(imrotate(occupancyGrid, -180));
title('Occupancy grid image')
subplot(1,2,2)
plot(scan)

Incrementally add the lidar scans to the lidarSLAM (Navigation Toolbox) object.

% Plot the ground truth trajectory
figure('Name','Trajectory traversal','NumberTitle','off'); 
slamAlgAxes = axes;
plot(gTNodes(:,1), gTNodes(:,2),'Color','blue','LineWidth',3,'DisplayName','Ground Truth Trajectory'); 
hold on; 
slamAlgAxes.XLim = [-10 60];
slamAlgAxes.YLim = [-20 5];
slamAlgAxes.XGrid = 'on';
slamAlgAxes.YGrid = 'on';
title('Trajectory traversal and loop closure detections')
legend

% Plot the current location of the vehicle as the algorithm progresses.
currentLoc = scatter(gTNodes(1,1), gTNodes(1,2), 50,'red','filled','DisplayName','Vehicle Location');

% Also plot any location where a loop closure was detected.
loopLocation = scatter([],[], 50,'black','filled','DisplayName','Loop Detected');

for frameIdx = 1:skipFrames:numFrames
    
    ptCloud = ptCloudArr(frameIdx);
    
    ptCloudProcessed = helperPreprocessPtCloud(ptCloud);
    
    occGridMoving = helperCreateOccupancyGrid(ptCloudProcessed, gridSize, gridStep, zLimits, sensorHeight);
    
    [rows,cols,values] = find(occGridMoving);
    xLocs = -gridSize/2 + rows * gridStep;
    yLocs = -gridSize/2 + cols * gridStep;
    
    scan = lidarScan([yLocs xLocs]);  
    
    % Add Lidar scan to algorithm
    [isScanAccepted,loopClosureInfo] = addScan(slamAlg, scan); 
    
    % Update the loop closure detected location
    if isScanAccepted && size(loopClosureInfo.EdgeIDs,1) > 0
        loopLocation.XData(end+1) = gTNodes(frameIdx,1); 
        loopLocation.YData(end+1) = gTNodes(frameIdx,2);
    end
    
    % Update the vehicle location
    set(currentLoc,'XData',gTNodes(frameIdx,1),'YData',gTNodes(frameIdx,2));
    drawnow;
end
hold off;

Visualize the trajectory obtained by using pose graph optimization. Note that the drift has been reduced. For details on how to calculate the errors for pose estimates, see the Lidar Localization with Unreal Engine Simulation example.

% Obtain the poses from the pose graph
slamAlgPoses = slamAlg.PoseGraph.nodes();
estNodes = regEstPoseGraph.nodes();

% Plot the trajectories from the nodes
figure('Name','Vehicle Trajectory','NumberTitle',"off"); 
slamTrajAxes = axes;
slamTrajAxes.XLim = [-10 60];
slamTrajAxes.YLim = [-20 5];
slamTrajAxes.XGrid = 'on';
slamTrajAxes.YGrid = 'on';
title('Compare Trajectories')
legend

hold on; 

plot(gTNodes(:,1),gTNodes(:,2),'Color','blue','LineWidth',3,'DisplayName','Ground Truth Trajectory'); 
plot(estNodes(:,1),estNodes(:,2),'Color','green','LineWidth',3 ,'DisplayName','Estimated Trajectory');
plot(slamAlgPoses(:,1),slamAlgPoses(:,2),'Color','magenta','LineWidth',3,'DisplayName','Optimized Trajectory');

hold off;
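
As a rough check of how much the optimization reduces drift, you can compare the translation error of each trajectory against the ground truth. This minimal sketch assumes the optimized poses and the subsampled ground truth nodes correspond one-to-one, which is only approximately true here; the Lidar Localization with Unreal Engine Simulation example describes a more careful evaluation.

% Minimal sketch: root-mean-square translation error against the ground truth
% (assumes corresponding rows of the two trajectories refer to the same frames)
gtSub   = gTNodes(1:skipFrames:end,1:2);
numCmp  = min(size(gtSub,1),size(slamAlgPoses,1));
errOpt  = vecnorm(gtSub(1:numCmp,:) - slamAlgPoses(1:numCmp,1:2),2,2);
rmseOpt = sqrt(mean(errOpt.^2))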

View the IDs of the detected loop closure edges added to the slamAlg object, and visualize the loop closure edges on the pose graph.

slamAlg.PoseGraph.LoopClosureEdgeIDs
ans = 1×10

   189   193   195   197   199   201   203   205   207   209

figure('Name','Loop Closure Results','NumberTitle',"off"); 
pGraphAxes = slamAlg.PoseGraph.show('IDs','off');
pGraphAxes.XLim = [0  10];
pGraphAxes.YLim = [-5 3];
title('Loop closure edges')

Build and Visualize Map

Use the buildMap (Navigation Toolbox) function to build the occupancy grid map. Obtain the required scans and poses using the scansAndPoses (Navigation Toolbox) method of the lidarSLAM (Navigation Toolbox) object.

[scans, optimizedPoses]  = scansAndPoses(slamAlg);

map = buildMap(scans, optimizedPoses, 1, 50);
figure('Name', 'Occupancy Map', 'NumberTitle',"off"); 
show(map);
hold on
show(slamAlg.PoseGraph, 'IDs', 'off');
xlim([-20 100]);
ylim([-50 50]);
hold off;

Stitch the point clouds together using the information from the pose graph to create a point cloud map.

sensorHeight = groundTruthPosesLidar(1).Translation(3);
ptCloudAccum = ptCloud.empty;

% Configure display
xlimits = [ -30  100];
ylimits = [-50  60];
zlimits = [-3  30];
player = pcplayer(xlimits, ylimits, zlimits);

% Define rigid transformation between the lidar sensor mounting position
% and the vehicle reference point. 
lidarToVehicleTform = rigidtform3d([0 0 0],[0 0 -1.57]);

% Specify vehicle dimensions
centerToFront = 1.104;
centerToRear  = 1.343;
frontOverhang = 0.828;
rearOverhang  = 0.589;
vehicleWidth  = 1.653;
vehicleHeight = 1.513;
vehicleLength = centerToFront + centerToRear + frontOverhang + rearOverhang;
hatchbackDims = vehicleDimensions(vehicleLength,vehicleWidth,vehicleHeight, ...
'FrontOverhang',frontOverhang,'RearOverhang',rearOverhang);

vehicleDims   = [hatchbackDims.Length,hatchbackDims.Width,hatchbackDims.Height];
vehicleColor  = [0.85 0.325 0.098];

% Estimated trajectory handle
markerSize = 3;
hold(player.Axes,'on');
estTrajHandle   = scatter3(player.Axes,NaN,NaN,NaN,markerSize,vehicleColor,'filled');
hold(player.Axes,'off');

for frameIdx = 1:skipFrames:numFrames
    
    % Obtain the point clouds
    ptCloud = ptCloudArr(frameIdx);
    ptCloudProcessed = helperPreprocessPtCloud(ptCloud);
    
    % Obtain the pose
    poseIdx = floor((frameIdx-1)/skipFrames) + 1;
    pose = optimizedPoses(poseIdx, :);
    
    % Construct the rigidtform3d object required to concatenate point clouds.
    % This object holds the rigid transformation
    yaw = pose(3);
    rot = [cos(yaw) -sin(yaw) 0;...
       sin(yaw) cos(yaw) 0;...
       0 0 1];
    trans = [pose(1:2) sensorHeight];
    
    absPose = rigidtform3d(rot,trans);
    
    % Accumulated point cloud map
    ptCloudAccum = pccat([ptCloudAccum,pctransform(ptCloudProcessed,absPose)]);
    
    % Update accumulated point cloud map
    view(player,ptCloudAccum);
    
    % Set viewing angle to top view
    view(player.Axes,2);
    
    % Convert current absolute pose of sensor to vehicle frame 
    absVehiclePose = rigidtform3d(  absPose.A * lidarToVehicleTform.A );
    
    % Draw vehicle at current absolute pose
    helperDrawVehicle(player.Axes,absVehiclePose,vehicleDims,'Color',vehicleColor);
    
    % Update the estimated trajectory display
    estTrajHandle.XData(end+1) = trans(1);
    estTrajHandle.YData(end+1) = trans(2);
    estTrajHandle.ZData(end+1) = trans(3);
    
end

hold(player.Axes, 'off');


References

[1] Dimitrievski, Martin, David Van Hamme, Peter Veelaert, and Wilfried Philips. “Robust Matching of Occupancy Maps for Odometry in Autonomous Vehicles.” In Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, 626–33. Rome, Italy: SCITEPRESS - Science and Technology Publications, 2016.

Supporting Functions

helperShowSceneImage displays scene image.

helperAddParkedVehicles adds parked vehicles to the parking lot scene.

helperPreprocessPtCloud preprocesses the point cloud.

helperCreateOccupancyGrid creates an occupancy grid image from point cloud.

helperDrawVehicle draws a vehicle in the map.

helperGetSceneImage retrieves scene image and spatial reference.

helperGetPointClouds extracts an array of pointCloud objects that contain lidar sensor data.

function ptCloudArr = helperGetPointClouds(simOut)

% Extract signal
ptCloudData = simOut.ptCloudData.signals.values;

% Create a pointCloud array
ptCloudArr = pointCloud(ptCloudData(:,:,:,2)); % Ignore first frame
for n = 3 : size(ptCloudData,4)
    ptCloudArr(end+1) = pointCloud(ptCloudData(:,:,:,n));  %#ok<AGROW>
end

end

helperGetLidarGroundTruth extracts the ground truth location and orientation.

function gTruth = helperGetLidarGroundTruth(simOut)
numFrames = size(simOut.lidarLocation.time,1);

gTruth = repmat(rigidtform3d, numFrames-1, 1);

for i = 2:numFrames
    gTruth(i-1).Translation = squeeze(simOut.lidarLocation.signals.values(:, :, i));
    % Ignore the roll and the pitch rotations since the ground is flat
    yaw = double(simOut.lidarOrientation.signals.values(:, 3, i));
    gTruth(i-1).R = [cos(yaw), -sin(yaw),  0; ...
        sin(yaw), cos(yaw), 0; ...
        0,         0,    1];
end
end

helperGetGTPoseGraph obtains a pose graph from the lidar ground truth.

function gTPoseGraph = helperGetGTPoseGraph(gTruthLidar)

gTPoseGraph = poseGraph;
gTPoseGraph.addRelativePose([0 0 0]);

for idx = 3:numel(gTruthLidar)
    % Relative transform of pose idx expressed in the frame of pose idx-1
    relTform = gTruthLidar(idx-1).A \ gTruthLidar(idx).A;
    relTrans = relTform(1:2,4)';
    relYaw = atan2(relTform(2,1),relTform(1,1));
    
    gTPoseGraph.addRelativePose([relTrans relYaw]);
    
end

end
