Use Lidar Toolbox™ functionalities to build a system that can issue collision warnings based on 2D lidar scans in a simulated warehouse arena.
Learn how to simulate a robot workspace with obstacles, generate 2D lidar data, detect obstacles based on 2D lidar scans, and provide a timely warning before an impending collision.
2D lidars are widely used in autonomous navigation applications like 2D SLAM, obstacle detection, and collision warning in robotics and autonomous driving. In this video, I'll demonstrate how to build a collision warning system with a 2D lidar using an example from the Lidar Toolbox documentation.
The workflow involves four main steps: first, simulate a robot workspace with obstacles; then, generate 2D lidar data; next, detect obstacles; and finally, provide a timely warning before any impending collision.
We'll start by loading a synthetic map of our warehouse arena with obstacles, in which our virtual robot will operate. We'll create an occupancy map to represent our arena, which is very common in robotics workflows. Each cell in the occupancy grid has a value representing the occupancy status of that cell: an occupied location is represented as 1, and a free location is represented as 0. The triangle in the map represents our robot.
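The occupancy-map idea can be sketched as follows. This is a minimal illustration, not the exact code from the example: the 10-by-10 grid and the obstacle placement are assumptions, and `binaryOccupancyMap` ships with Navigation Toolbox.

```matlab
% Minimal sketch of building an occupancy map:
% 1 marks an occupied cell, 0 a free cell.
grid = zeros(10, 10);
grid(4:6, 4:6) = 1;             % hypothetical square obstacle in the arena
map = binaryOccupancyMap(grid); % requires Navigation Toolbox
show(map)                       % visualize the arena
```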
We can now add a simulated 2D lidar sensor to our robot using the rangeSensor object. The rangeSensor System object is a range-bearing sensor that outputs range and angle measurements based on the given sensor pose and the obstacles in the occupancy map.
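In code, simulating one scan looks roughly like this. The pose and range limits below are illustrative assumptions; `map` is the occupancy map built earlier.

```matlab
% Hedged sketch: create a range-bearing sensor and simulate one scan.
sensor = rangeSensor;
sensor.Range = [0 20];                % assumed min/max detection range, meters
pose = [5 5 pi/4];                    % hypothetical robot pose [x y theta]
[ranges, angles] = sensor(pose, map); % range/angle readings for this pose
```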
Now that we have our arena and robot, we will define the detection area of the lidar sensor using an interactive interface. The detection area represents the area around the 2D lidar in which the system will provide collision warnings if an obstacle is detected. We can use this interface to draw different shapes and modify detection areas around the lidar to suit any 2D lidar of our choice.
The detection area is divided into three levels. The black region is the danger region, where collision is imminent. The red region represents a high chance of collision. And the yellow region represents the area where cautionary measures can be applied to avoid any collision.
The robot will now traverse through the viewpoints, and for each viewpoint, we'll loop through the following steps to provide collision warnings. First, we stream data from the lidar sensor as a lidarScan object. The lidarScan object contains the range and angle data for a single 2D lidar scan. We then segment the point cloud data into clusters of obstacles using the pcsegdist function. This function segments point clouds into clusters based on the Euclidean distance between the points.
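The per-viewpoint processing described above can be sketched like this. The `ranges` and `angles` inputs come from the simulated sensor, and the `minDistance` clustering threshold is an assumed value, not taken from the example.

```matlab
% Sketch of one loop iteration: wrap the readings and cluster them.
scan = lidarScan(ranges, angles);               % raw readings as a scan object
xy = scan.Cartesian;                            % N-by-2 Cartesian points
ptCloud = pointCloud([xy zeros(size(xy,1),1)]); % lift to 3D for pcsegdist
minDistance = 0.5;                              % assumed cluster gap, meters
labels = pcsegdist(ptCloud, minDistance);       % Euclidean-distance clustering
```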
Now, we loop over each obstacle cluster to check for any possible collision. We can find the danger level based on where the obstacle lies in the detection area. Finally, we issue a warning based on the danger level. A yellow warning circle denotes that there is an obstacle in the detection region, and action can be taken to avoid collision with it. A red warning circle denotes a possible collision, and a black circle denotes an imminent collision. We can further use these collision warnings for safe autonomous navigation of our robot in the warehouse arena.
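An illustrative sketch of the zone check, under stated assumptions: `zoneX` and `zoneY` are hypothetical polygon vertices for one detection zone, and `labels`/`xy` come from the clustering step. This is not the example's exact logic, just the idea of testing each cluster against a zone.

```matlab
% Hedged sketch: flag any cluster whose points fall inside one zone polygon.
numClusters = max(labels);
for k = 1:numClusters
    pts = xy(labels == k, :);  % points belonging to this obstacle cluster
    if any(inpolygon(pts(:,1), pts(:,2), zoneX, zoneY))
        disp("Obstacle detected in zone: issue collision warning")
    end
end
```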
Please follow the links below to learn more. If you have any questions or comments, please let us know.