Perception with Computer Vision and Lidar

Camera sensor configuration, object and lane boundary detections using machine learning and deep learning, lidar processing

Automated Driving Toolbox™ perception algorithms use data from cameras and lidar scans to detect and track objects of interest in a driving scenario. These algorithms are ideal for ADAS and autonomous driving applications, such as automatic braking and steering. Detect vehicles, pedestrians, and lane markers using detectors pretrained with machine learning and deep learning techniques. You can also train custom detectors.
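
As a concrete illustration of these pretrained detectors, the following sketch loads the ACF vehicle detector and runs it on a single image; the image file name is an assumed example, not part of this page.

    % Load a pretrained ACF vehicle detector and detect vehicles in one image.
    detector = vehicleDetectorACF();                    % pretrained vehicle detector
    I = imread('highway.png');                          % assumed example image
    [bboxes, scores] = detect(detector, I);             % bounding boxes and confidence scores
    I = insertObjectAnnotation(I, 'rectangle', bboxes, scores);
    imshow(I)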

  • Camera Sensor Configuration
    Monocular camera sensor calibration, image-to-vehicle coordinate system transforms, bird’s-eye-view image transforms (a configuration sketch follows this list)
  • Visual Perception
    Lane boundary, pedestrian, vehicle, and other object detections using machine learning and deep learning (a lane-marker segmentation sketch follows this list)
  • Lidar Processing
    Velodyne® file import, segmentation, downsampling, transformations, visualization, and 3-D point cloud registration from lidar (a processing sketch follows this list)
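
A minimal sketch of a monocular camera configuration, assuming illustrative intrinsics, mounting height, and pitch; it maps an image point into vehicle coordinates and transforms a camera image into a bird's-eye view. The image file name is an assumed example.

    % Define camera intrinsics and mounting geometry (all values illustrative).
    focalLength    = [309.4 344.2];                     % [fx fy] in pixels
    principalPoint = [318.9 257.5];                     % [cx cy] in pixels
    imageSize      = [480 640];                         % [rows cols]
    intrinsics = cameraIntrinsics(focalLength, principalPoint, imageSize);
    sensor = monoCamera(intrinsics, 2.18, 'Pitch', 14); % height (m) and pitch (deg)

    % Image-to-vehicle coordinate transform and bird's-eye-view image transform.
    vehiclePoint = imageToVehicle(sensor, [120 400]);   % pixel location -> road location (m)
    outView  = [3 30 -6 6];                             % [xmin xmax ymin ymax] in meters
    birdsEye = birdsEyeView(sensor, outView, [NaN 250]);
    I = imread('road.png');                             % assumed example image
    bevImage = transformImage(birdsEye, I);             % bird's-eye-view image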
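
For lane boundary detection, one possible follow-on step (reusing birdsEye and bevImage from the camera sketch above; the marker width is an assumed value) segments candidate lane-marker pixels in the bird's-eye-view image.

    % Segment candidate lane-marker pixels in the bird's-eye-view image.
    bevGray = rgb2gray(bevImage);                        % grayscale bird's-eye-view image
    approxMarkerWidth = 0.25;                            % assumed lane-marker width, in meters
    laneMask = segmentLaneMarkerRidge(bevGray, birdsEye, approxMarkerWidth);
    imshow(laneMask)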
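
A lidar processing sketch, assuming a Velodyne HDL-32E PCAP recording named lidarData.pcap; it reads a frame, downsamples it, segments it into clusters, visualizes the result, and registers a second frame with ICP.

    % Read point clouds from a Velodyne PCAP file (file name and device model assumed).
    veloReader = velodyneFileReader('lidarData.pcap', 'HDL32E');
    ptCloud = readFrame(veloReader, 1);                         % first lidar frame

    % Downsample, then cluster points by Euclidean distance.
    ptCloudDown = pcdownsample(ptCloud, 'gridAverage', 0.2);    % 0.2 m grid step
    labels = pcsegdist(ptCloudDown, 0.5);                       % clusters separated by > 0.5 m
    pcshow(ptCloudDown.Location, labels)                        % color points by cluster label

    % Register the next frame against this one with ICP.
    ptCloudNext = pcdownsample(readFrame(veloReader, 2), 'gridAverage', 0.2);
    tform = pcregistericp(ptCloudNext, ptCloudDown);            % rigid transform: frame 2 -> frame 1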

Featured Examples