
detect

Detect lane boundaries in images

Since R2023a

Description

Detect Lane Boundaries in Image Coordinate System

lanePoints = detect(detector,I) detects lane boundary points within a single image, I, using a laneBoundaryDetector object, detector. The function returns the lane boundary points detected in the input image as a set of pixel coordinates, lanePoints.

lanePoints = detect(detector,batch) detects lane boundary points for the batch of images, batch.

lanePoints = detect(detector,imds) detects lane boundary points for a series of images associated with an ImageDatastore object imds.
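For instance, a minimal sketch of datastore-based detection might look like this. The image folder name is hypothetical.

% Create a datastore of road images. The folder name is illustrative only.
imds = imageDatastore("laneImages",FileExtensions=[".png" ".jpg"]);

% Detect lane boundary points in every image of the datastore.
detector = laneBoundaryDetector;
lanePoints = detect(detector,imds);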


lanePoints = detect(___,Name=Value) specifies options using one or more name-value arguments in addition to any combination of arguments from previous syntaxes. For example, DetectionThreshold=0.2 sets the lane detection score threshold to 0.2.

[lanePoints,scores] = detect(___) additionally returns confidence scores, scores, for detected lanes in images.
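For example, this sketch returns the scores along with the points and raises the detection threshold; the threshold value is illustrative only.

% Detect lane boundary points and their confidence scores in image I,
% keeping only detections with a score of at least 0.5 (illustrative value).
[lanePoints,scores] = detect(detector,I,DetectionThreshold=0.5);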

Detect Lane Boundaries in Vehicle Coordinate System

[lanePointsVehicle,laneBoundaries] = detect(detector,I,sensor) detects and returns the lane boundary points lanePointsVehicle in the vehicle coordinate system by using the monoCamera object sensor. This object function also returns lane boundaries, laneBoundaries.

[lanePointsVehicle,laneBoundaries] = detect(detector,batch,sensor) detects lane boundary points for the batch of images batch.
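For example, this minimal sketch forms a batch along the fourth dimension, assuming I1 and I2 are same-size RGB images and sensor is a monoCamera object.

% Concatenate two same-size RGB images into an M-by-N-by-3-by-2 batch.
batch = cat(4,I1,I2);

% Detect lane boundary points in vehicle coordinates for both images.
[lanePointsVehicle,laneBoundaries] = detect(detector,batch,sensor);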

[lanePointsVehicle,laneBoundaries] = detect(detector,imds,sensor) detects lane boundary points for a series of images associated with an ImageDatastore object, imds.


[lanePointsVehicle,laneBoundaries] = detect(___,Name=Value) specifies options using one or more name-value arguments in addition to any combination of arguments from the previous three syntaxes. For example, ExecutionEnvironment="cpu" specifies the CPU as the hardware resource on which to execute the function.

Note

This function requires the Scenario Builder for Automated Driving Toolbox™ support package, Deep Learning Toolbox™, and the Deep Learning Toolbox Converter for ONNX™ Model Format support package. You can install the Scenario Builder for Automated Driving Toolbox and Deep Learning Toolbox Converter for ONNX Model Format support packages from the Add-On Explorer. For more information about installing add-ons, see Get and Manage Add-Ons.

Examples


Detect lane boundary points in an RGB image by using the laneBoundaryDetector object.

Read an image into the workspace.

I = imread("highway.png");

Initialize the laneBoundaryDetector object.

detector = laneBoundaryDetector;

Detect the boundary points of the lanes in the image by using the detect object function of the laneBoundaryDetector object.

lanes = detect(detector,I,ROI=120,ExecutionEnvironment="cpu");

Insert the detected lane boundary points into the image, as markers, by using the insertMarker function.

for i = 1:size(lanes{1},2)
    if ~isempty(lanes{1}{i})
        I = insertMarker(I,lanes{1}{i},"o",Size=3);
    end
end

Display the image, annotated with the detected lane boundary points.

imshow(I)

Detect lane boundaries in an RGB image by using the laneBoundaryDetector object and camera sensor parameters.

Note: You must have the relevant camera parameters for the sensor with which your RGB image was captured.

Read an image from the PandaSet data set into the workspace.

I = imread("PandaSetImage.jpg");

Specify your camera sensor parameters as a monoCamera object.

focalLength = [1970.0131 1970.0091]; % Units are in pixels
principalPoint = [970.0002 483.2988]; % Units are in pixels
imageSize = [1080 1920]; % Units are in pixels
intrinsics = cameraIntrinsics(focalLength,principalPoint,imageSize);
height = 1.8;
sensorloc = [0 0];
PandaSetMonoCam = monoCamera(intrinsics,height,SensorLocation=sensorloc);

Initialize the laneBoundaryDetector object.

detector = laneBoundaryDetector;

Crop the top 400 rows from the image, and perform lane boundary detection in the image by using the detect object function of the laneBoundaryDetector object.

[lanePointsVehicle,laneBoundaries] = detect(detector,I,PandaSetMonoCam,ROI=400,ExecutionEnvironment="cpu");

Convert vehicle coordinates to image coordinates by using the vehicleToImage function. Insert the detected lane boundary points into the image, as markers, by using the insertMarker function.

for i = 1:size(lanePointsVehicle{1},2)
    if ~isempty(lanePointsVehicle{1}{i})
        lanePointsImage = vehicleToImage(PandaSetMonoCam,lanePointsVehicle{1}{i});
        I = insertMarker(I,lanePointsImage,"circle",Size=5);
    end
end

Display the input image, annotated with the detected lane boundary points.

imshow(I)

Display the extracted lane boundaries.

bep = birdsEyePlot(XLim=[0 30],YLim=[-20 20]);
lbPlotter = laneBoundaryPlotter(bep,DisplayName="Lane boundaries");
plotLaneBoundary(lbPlotter,laneBoundaries{1})

Input Arguments


Lane boundary detector, specified as a laneBoundaryDetector object.

Input RGB image, specified as an M-by-N-by-3 numeric array.

Batch of input RGB images, specified as an M-by-N-by-3-by-B numeric array. B is the number of images.

Series of input RGB images, specified as an ImageDatastore object containing the full filenames of the test images.

Camera sensor parameters, specified as a monoCamera object. The function uses these parameters to convert lane boundary points from the image coordinate system to the vehicle coordinate system.

Name-Value Arguments

Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Example: detect(detector,I,ROI=100) crops the top 100 rows from the image, and performs lane boundary detection.

Region of interest to search for lane boundaries, specified as a positive integer or four-element numeric row vector.

If you specify a positive integer, C, the function crops the image by removing the top C rows from the image, and performs lane detection.

If you specify a four-element numeric row vector of the form [x y width height], the function crops the image by using the position and size of the crop rectangle, and performs lane detection. The vector specifies the position in the image of the upper-left corner of the crop rectangle, [x y], and the size of the crop rectangle, [width height], in pixels.
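For example, this sketch shows both forms; the crop values are illustrative only.

% Remove the top 120 rows of the image before detecting lanes.
lanes = detect(detector,I,ROI=120);

% Detect lanes only inside the crop rectangle [x y width height], in pixels.
lanes = detect(detector,I,ROI=[1 300 1280 420]);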

Lane detection threshold, specified as a scalar in the range [0, 1]. The function removes lane detections with scores less than the threshold. Increase this value to reduce the number of false positive detections.

Lane detection overlap threshold, specified as a scalar in the range [0, 1]. When the normalized Euclidean distance between lanes is below the overlap threshold, the function removes the lane boundary points around the reference lane. Increase this value to reduce the number of detected lane boundary points.

Maximum number of expected lane boundaries across images, specified as a positive integer.

Hardware resource, specified as "auto", "gpu", or "cpu".

  • "auto" — Use a GPU if one is available. Otherwise, use the CPU. The use of a GPU requires Parallel Computing Toolbox™ and a CUDA® enabled NVIDIA® GPU. For information about the supported compute capabilities, see GPU Computing Requirements (Parallel Computing Toolbox).

  • "gpu" — Use the GPU. If a suitable GPU is not available, the function returns an error message.

  • "cpu" — Use the CPU.

Size of image groups, specified as a positive integer. The function groups and processes images together in batches of this size. Use batches to process a large collection of images with improved computational efficiency. Increasing the MiniBatchSize value can increase your processing speed, but also consumes more memory.
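For instance, this sketch processes a datastore in groups of eight images; the folder name and batch size are illustrative only.

% Detect lane boundaries across a datastore, eight images at a time.
imds = imageDatastore("laneImages");
lanePoints = detect(detector,imds,MiniBatchSize=8);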

Lane boundary model, specified as "Parabolic" or "Cubic".

Note

The LaneBoundaryModel argument applies only if you specify the sensor input argument and laneBoundaries output argument.

Approximate lane boundary width, specified as a positive scalar in meters.

Note

The BoundaryWidth argument applies only if you specify the sensor input argument and laneBoundaries output argument.

Display progress bar, specified as a logical 1 (true) or 0 (false). Specify true to display a progress bar of lane detections per frame, and false otherwise.

Output Arguments


Lane boundary points in the image coordinate system, returned as a cell array. The returned lane boundary point pixel coordinates are in the form [x y]. The sketch after the following list shows how to index the returned cell array.

  • If you specify a single input image, I, then the function returns lanePoints as a 1-by-L cell array containing the detected lane boundary points. L is the number of detected lanes. Each cell is a P-by-2 matrix containing the [x y] pixel coordinates of P boundary points for the corresponding lane.

  • If you specify a batch of input images, batch, or an ImageDatastore object, imds, then the function returns lanePoints as a B-by-1 cell array containing lane boundaries for B images. Each cell contains a 1-by-L cell array, where L is the number of lanes detected in the corresponding image, and each cell is a P-by-2 matrix that contains the [x y] pixel coordinates of P boundary points for that lane.
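For example, assuming lanePoints was returned for a batch or datastore, this sketch indexes the nested cell arrays.

for b = 1:numel(lanePoints)           % loop over images
    for l = 1:numel(lanePoints{b})    % loop over lanes detected in image b
        pts = lanePoints{b}{l};       % P-by-2 matrix of [x y] pixel coordinates
    end
end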

Lane detection scores, returned as a cell array. Each detection score is in the range [0, 1]. The closer a score is to 1, the higher the confidence of the corresponding lane detection.

  • If you specify a single input image, I, then the function returns scores as a cell containing an L-element row vector that specifies the scores of L detected lanes. L is the number of detected lanes in the image.

  • If you specify a batch of input images, batch, or an ImageDatastore object, imds, then the function returns scores as a B-by-1 cell array containing the scores of the detected lanes for B images. Each cell contains an L-element row vector that specifies the scores of the L detected lanes for the corresponding image.

Note

The scores output argument applies only to the syntaxes that detect lane boundaries in the image coordinate system.

Lane boundary points in the vehicle coordinate system, returned as a cell array. You must specify the camera sensor parameters sensor to obtain lane boundary points in the vehicle coordinate system. For more information, see Coordinate Systems in Automated Driving Toolbox.

  • If you specify a single input image, I, then the function returns lanePointsVehicle as a 1-by-L cell array containing the detected lane boundary points. L is the number of detected lanes. Each cell is a P-by-2 matrix containing the [x y] vehicle coordinates of P boundary points for the corresponding lane.

  • If you specify a batch of input images, batch, or an ImageDatastore object, imds, then the function returns lanePointsVehicle as a 1-by-B cell array containing lane boundaries for B images. Each cell contains a 1-by-L cell array, where L is the number of lanes detected in the corresponding image, and each cell is a P-by-2 matrix that contains the [x y] vehicle coordinates of P boundary points for that lane.

Lane boundaries, returned as a cell array of parabolicLaneBoundary or cubicLaneBoundary objects. The LaneBoundaryModel argument defines the type of lane boundary model to return.
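For instance, this sketch requests cubic boundary models, assuming I and sensor are defined as in the earlier example; the boundary width is illustrative only.

% Fit cubic lane boundary models with an approximate marking width of 0.15 m.
[lanePointsVehicle,laneBoundaries] = detect(detector,I,sensor, ...
    LaneBoundaryModel="Cubic",BoundaryWidth=0.15);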

Version History

Introduced in R2023a
