In the context of drone delivery, ensuring precise and obstacle-free navigation remains a critical challenge. Cameras, though effective in specific scenarios, encounter drawbacks such as reduced visibility during bad weather or in low-light conditions. Unlike stereo cameras, monocular cameras face challenges when it comes to accurately perceiving complex 3D environments. Additionally, the challenge lies in classifying all potential obstacles within the environment—ranging from trees and power lines to streetlight poles and other drones. The finite dataset upon which obstacle detection operates cannot encompass every object present. However, 3D data circumvents the need to recognize individual objects, focusing instead on their positions and velocities relative to the drone. This information empowers an unsupervised optimization of the obstacle avoidance system, enhancing operational efficiency.
This project dives into practical point cloud analysis using the KITTI dataset. We first use Open3D for visualization and employ a Voxel Grid for downsampling. We then apply the RANSAC algorithm to segment obstacles from the road surface, enhancing scene understanding. Using DBSCAN clustering, we group similar obstacles for more accurate spatial insight. To enable tracking, 3D bounding boxes are created around each obstacle. Finally, we perform surface reconstruction on a custom point cloud generated by an iPhone's LiDAR. The objective of this project is to demonstrate the application of Machine Learning algorithms for processing point cloud data in 3D space.
- Understanding LiDARs
- Point Cloud Processing
- Downsampling with Voxel Grid
- Segmentation with RANSAC
- Clustering with DBSCAN
- 3D Bounding Box with PCA
- Surface Reconstruction
LiDAR stands for Light Detection And Ranging. It is an active remote sensing technology that uses laser pulses to measure distances and create precise 3D representations of objects and environments. Note that an active system generates its own energy, in this case light, to measure objects or features on the ground. By emitting laser beams and measuring their reflections, LiDAR captures detailed spatial data, making it valuable in various fields like mapping, environmental monitoring, forestry, and autonomous vehicles.
LiDAR technology operates by emitting short laser pulses toward the target area. These pulses hit surrounding objects and return to the LiDAR sensor. By calculating the time it takes for the pulses to return (time of flight), the system determines the distance to each object with high accuracy. Additionally, LiDAR can measure the intensity of the returned laser light, providing information about the objects' reflectivity or composition.
Distance = (Speed of Light × Time of Flight) / 2

Note that we divide by 2 in the equation because the pulse travels to the object and back, and we only want the one-way distance from the object to the LiDAR.
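As a quick sanity check, the time-of-flight equation above can be computed directly (a minimal sketch; the function name is my own):

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance(time_of_flight):
    """Distance = (speed of light x time of flight) / 2."""
    # Divide by 2 because the pulse travels to the object and back
    return SPEED_OF_LIGHT * time_of_flight / 2.0

# A pulse that returns after 100 nanoseconds hit an object ~15 m away
print(round(tof_distance(100e-9), 2))  # 14.99
```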
- Waveform: Distribution of energy that returns to the sensor
- Intensity: Amount of energy that returned to the sensor
- Peaks: Areas where more photons or more light energy returns to the sensor
1. Mechanical LiDAR: Employs moving parts, typically rotating mirrors or oscillating prisms, to steer the laser beam and scan the surroundings, achieving a 360-degree field of view.
2. Solid-State LiDAR: Uses semiconductor components to generate and manipulate laser beams, eliminating the need for moving parts. This design enhances reliability and reduces size and weight.
- MEMS: Micro-Electro-Mechanical Systems (MEMS) LiDAR leverages tiny, microfabricated mirrors to direct laser beams. MEMS LiDARs offer high-speed scanning capabilities and compact form factors.
- OPA: Optical Phased Array (OPA) LiDAR utilizes electronically controlled optical elements to steer laser beams without moving parts. OPA LiDARs provide rapid beam steering, allowing for real-time data acquisition and high-resolution mapping.
3. Flash LiDAR: Also known as single-shot or non-scanning LiDAR, it illuminates the entire field of view simultaneously with a single pulse of laser light. This type of LiDAR offers high data acquisition rates, making it suitable for real-time applications.
Image Source: Lidar for Autonomous Driving: The Principles, Challenges, and Trends for Automotive Lidar and Perception Systems
Video source: How Ouster Digital Lidar works
1. 2D: Measures distances in 2D space, typically along a single plane. 2D LiDAR provides information about the distances and angles of objects in that plane but does not capture information about objects in the vertical direction.
2. 3D: Measures distances in 3D space, providing information about objects in both the horizontal and vertical directions. Each point in the point cloud contains X, Y, and Z coordinates. By calculating the distance and angle of the laser beams, 3D LiDAR generates a 3D point cloud that represents the surveyed scene.
3. 4D: In addition to measuring distances in 3D space (X, Y, and Z), 4D LiDAR also captures a time dimension for each laser pulse. This additional temporal data allows us to measure the speed and movement of objects over time.
1. Discrete: Operates by emitting individual laser pulses to measure distances, where each pulse illuminates a single point on the target surface. By calculating the time-of-flight, the distance to the object can be determined.
2. Continuous: Amplitude-Modulated Continuous Wave (AMCW): Emits a continuous laser beam with a modulated intensity. The modulation allows the LiDAR system to differentiate between the emitted light and the light reflected from the target surface. By analyzing the modulation phase shift, the system can calculate the distance to the object.
3. Continuous: Frequency-Modulated Continuous Wave (FMCW): Emits a continuous laser beam with a varying frequency. The frequency shift of the reflected light is used to determine the distance to the target. FMCW LiDAR is particularly useful for velocity measurements, as it can also capture the Doppler shift caused by moving objects.
Video source: Aeva’s FMCW technology is resistant to interference from sunlight and other LiDAR sensors
LiDARs operate using laser light to measure distances and create 3D representations of environments. The choice of working wavelength, such as 905 nm or 1550 nm, has implications for the system's performance, safety, and interaction with the environment. Here's an overview of these wavelengths as illustrated by Velodyne:
Shorter wavelengths like 905 nm are favored for high-precision applications where detailed and accurate measurements are crucial. Longer wavelengths like 1550 nm are chosen for their potential to provide extended range and better performance in challenging environments while maintaining a higher level of eye safety.
The number of lasers stacked in the vertical direction is called the number of channels of a LiDAR. In general, a greater number of channels corresponds to more lasers, leading to better overall performance of the LiDAR.
The number of points per second that a LiDAR sensor can generate depends on several factors, including the laser pulse repetition rate, the field of view (FoV), the scanning pattern, and the scanning speed.
Number of Points per Second = Laser Pulse Repetition Rate × Number of Laser Beams × Scanning Density
Where:
- Laser Pulse Repetition Rate: The rate at which laser pulses are emitted by the LiDAR system (in pulses per second).
- Number of Laser Beams: The total number of individual laser beams or channels in the LiDAR system.
- Scanning Density: The density of measurements in the scanned area, often measured in points per square meter or points per degree.
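Plugging hypothetical numbers into the formula above (the values below are illustrative, not taken from any specific sensor):

```python
def points_per_second(pulse_rate, num_beams, scanning_density=1.0):
    # Number of Points per Second =
    #   Laser Pulse Repetition Rate x Number of Laser Beams x Scanning Density
    return pulse_rate * num_beams * scanning_density

# e.g., a hypothetical 64-channel sensor firing 20,000 pulses per second per beam
print(points_per_second(pulse_rate=20_000, num_beams=64))  # 1280000.0
```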
Below is an explanation of the different channels of LiDARs with different FoVs from RoboSense:
Image source: Harness distribution comparison
Below is a video of me testing a 2D RPLiDAR:
Dual return in LiDAR refers to the capability of a LiDAR sensor to detect and differentiate between multiple returns from a single laser pulse. This is particularly useful in scenarios where the laser pulse hits objects that have multiple surfaces or reflective materials. The dual return system records both the first and the last returns.
In drone delivery with LiDAR, dual return plays a crucial role in obstacle detection and safe navigation. As the drone descends to make a delivery, the LiDAR sensor emits laser pulses that bounce back from various surfaces. The first return captures data from obstacles closer to the drone, like buildings or trees. The last return records reflections from farther objects such as power lines or tall structures. By analyzing these dual returns, the drone's navigation system can accurately identify potential obstacles and their distances, allowing it to adjust its flight path in real-time to avoid collisions.
Image source: An Introduction to LiDAR
In our analysis, we will employ the KITTI Dataset, which includes synchronous stereo images and LiDAR data. This dataset captures two pairs of images, taken at regular time intervals, using their stereo system. Additionally, their Velodyne LiDAR generates a corresponding point cloud. As a result, we can visualize the scene in 2D using the images and in 3D using the point cloud, as demonstrated below:
In this project, there are several libraries and software packages available for point cloud processing. Among them, two commonly used libraries are Open3D and PCL. For this project, we will exclusively use Open3D, chosen for its user-friendly nature and the abundant literature available on it. Our point cloud has already been converted into the .ply format, allowing us to use the read_point_cloud function from Open3D as follows:
import open3d as o3d
point_cloud = o3d.io.read_point_cloud(point_cloud_path)
After reading the file, there are numerous ways to visualize the point cloud with Open3D:
o3d.visualization.draw_geometries([point_cloud])
Or:
vis = o3d.visualization.Visualizer()
vis.create_window()
vis.add_geometry(point_cloud)
vis.get_render_option().background_color = np.asarray([0, 0, 0])
vis.run()
vis.destroy_window()
Or with get_plotly_fig from plotly. (However, I was getting an error using get_plotly_fig, so I created my own custom function.) The function below lets you encode either the distance or the reflectance data from the point cloud itself as color.
visualize_reflectance_distance(point_cloud, mode='distance', save=False, show=True) #mode = reflectance or distance
If we want to visualize the point cloud with the distance encoded with color:
Notice that points further away are in shades of yellow, as shown by the color bar on the right. Now, if we want to visualize the point cloud with the reflectance encoded as color:
Observe that the billboard on the right, the license plate of the car, and even the lane lines have a higher reflectance and a darker shade of red. We will work more on this later.
Let's build a lane line detector using the reflectance data from our point cloud. From the image above, when we look closely, we can see traces of the lane lines in darker shades of red. We will now filter the point cloud for higher reflectance values. The steps:
- Get the reflectivity values
- Create a mask of points that have reflectivity above the threshold
- Filter points and reflectivities using the mask
- Create a new point cloud with the filtered points and colors
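The steps above can be sketched with NumPy. Note that the project's reflectivity_threshold works on an Open3D point cloud; this standalone version (the `_np` naming is mine) assumes the points are an (N, 3) array with a matching (N,) reflectivity array:

```python
import numpy as np

def reflectivity_threshold_np(points, reflectivity, threshold=0.5):
    # Create a mask of points whose reflectivity is above the threshold
    mask = reflectivity > threshold
    # Filter points and reflectivities using the mask
    return points[mask], reflectivity[mask]

points = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
reflectivity = np.array([0.2, 0.7, 0.9])
filtered_points, filtered_refl = reflectivity_threshold_np(points, reflectivity)
print(len(filtered_points))  # 2
```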
# Thresholding
filtered_point_cloud = reflectivity_threshold(point_cloud, threshold=0.5)
With the thresholding technique, we filtered out all points with reflectivity values below 0.5. Observe how the lane lines are more visible, but so are the license plate and the guardrails on the side. We are interested only in the lane lines, hence we need to filter the rest out.
Now that we have the filtered reflectance values, we need to manually create an ROI that will encompass the lane lines. We need to set the min and max values for the ROI as such:
# Region of Interest
roi_point_cloud = roi_filter(point_cloud, roi_min=(0, -3, -2), roi_max=(20, 3, 0))
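A possible implementation of the ROI filtering, again as a NumPy sketch over an (N, 3) array (the project's roi_filter operates on the Open3D cloud itself):

```python
import numpy as np

def roi_filter_np(points, roi_min, roi_max):
    # Keep only points whose x, y, z all fall inside the box [roi_min, roi_max]
    mask = np.all((points >= np.asarray(roi_min)) & (points <= np.asarray(roi_max)), axis=1)
    return points[mask]

points = np.array([[5.0, 0.0, -1.0],    # inside the ROI
                   [30.0, 0.0, -1.0],   # too far ahead
                   [5.0, -5.0, -1.0]])  # too far to the side
roi = roi_filter_np(points, roi_min=(0, -3, -2), roi_max=(20, 3, 0))
print(len(roi))  # 1
```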
When we are satisfied with our selected ROI, we then create a pipeline using the reflectivity_threshold function and the roi_filter function.
# Lane Line Detection
def lane_line_detection(point_cloud):
    # Make a copy so the original cloud is left untouched
    point_cloud_copy = copy.deepcopy(point_cloud)
    # Thresholding
    filtered_point_cloud = reflectivity_threshold(point_cloud_copy, threshold=0.45)
    # Region of Interest
    roi_point_cloud = roi_filter(filtered_point_cloud, roi_min=(0, -3, -2), roi_max=(20, 3, 0))
    return roi_point_cloud
Below is a video of the output:
Notice that when processing our point cloud, it is not necessary to keep all the points. In fact, it can be a better idea to downsample our point cloud to remove potential noise, or for faster processing and visualization. The steps for Voxel Grid downsampling are as follows:
- Define the Voxel Size: Smaller voxel sizes will result in a more detailed point cloud, while larger voxel sizes will lead to more aggressive downsampling.
- Divide the Space: The 3D space of the point cloud is divided into voxels by creating a 3D grid. Each point in the point cloud is assigned to the voxel it falls into based on its 3D coordinates.
- Subsampling: For each voxel, one point is retained. This can be achieved by selecting the centroid of the points within the voxel or by using other sampling methods.
- Remove Redundant Points: Points that fall into the same voxel are reduced to one point, effectively downsampling the point cloud.
| Voxel=0.1 | Voxel=0.5 | Voxel=1.0 |
|---|---|---|
| #points = 45803 | #points = 13811 | #points = 5922 |
# Voxel Grid
point_cloud_downsampled = point_cloud.voxel_down_sample(voxel_size=1)
The downsampling process results in a point cloud with a reduced number of points while still retaining its main features.
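To make the steps concrete, here is a hand-rolled centroid-based voxel grid in NumPy (Open3D's voxel_down_sample does the equivalent internally; this sketch and its naming are mine):

```python
import numpy as np

def voxel_downsample_np(points, voxel_size):
    # Divide the space: assign each point to an integer voxel index
    voxel_indices = np.floor(points / voxel_size).astype(np.int64)
    # Group points that share a voxel
    unique_voxels, inverse = np.unique(voxel_indices, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)
    # Subsampling: replace each voxel's points with their centroid
    sums = np.zeros((len(unique_voxels), 3))
    np.add.at(sums, inverse, points)
    counts = np.bincount(inverse).astype(float).reshape(-1, 1)
    return sums / counts

points = np.array([[0.1, 0.1, 0.1], [0.2, 0.2, 0.2], [5.0, 5.0, 5.0]])
down = voxel_downsample_np(points, voxel_size=1.0)
print(len(down))  # 2
```

The first two points share a voxel and collapse to their centroid (0.15, 0.15, 0.15), while the third survives on its own.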
RANSAC (Random Sample Consensus) was invented by Fischler and Bolles in 1981 as a solution to the problem of fitting models (such as lines, planes, circles, etc.) to noisy data with outliers. The algorithm is widely used in computer vision, image processing, and other fields where robust estimation of model parameters is required.
- Randomly choose s samples, where s is the minimum number of samples required to fit the model.
- Fit the model to the randomly chosen samples.
- Count the number M of data points that fit the model within a measure of error e.
- Repeat steps 1-3 N times.
- Choose the model with the highest number M of inliers.
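The loop above can be written out for plane fitting in a few lines of NumPy (a from-scratch sketch for intuition; in the project we rely on Open3D's segment_plane instead):

```python
import numpy as np

def ransac_plane(points, distance_threshold=0.1, ransac_n=3, num_iterations=200, seed=0):
    rng = np.random.default_rng(seed)
    best_inliers = np.array([], dtype=int)
    for _ in range(num_iterations):
        # 1. Randomly choose ransac_n samples (3 points define a plane)
        sample = points[rng.choice(len(points), ransac_n, replace=False)]
        # 2. Fit the model: plane normal from two edge vectors
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:  # degenerate (collinear) sample, skip
            continue
        normal /= norm
        d = -normal @ sample[0]
        # 3. Count inliers within the error threshold
        distances = np.abs(points @ normal + d)
        inliers = np.flatnonzero(distances < distance_threshold)
        # 5. Keep the model with the most inliers
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

# Toy scene: 100 "road" points on z = 0 plus 20 elevated "obstacle" outliers
rng = np.random.default_rng(42)
road = np.column_stack([rng.uniform(0, 10, 100), rng.uniform(0, 10, 100), rng.normal(0, 0.01, 100)])
obstacles = np.column_stack([rng.uniform(0, 10, 20), rng.uniform(0, 10, 20), rng.uniform(1, 3, 20)])
points = np.vstack([road, obstacles])
inliers = ransac_plane(points)
print(len(inliers))  # close to 100: the road points
```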
In our scenario, we choose ransac_n equal to 3, which is the number of points to randomly sample for each iteration of RANSAC, and num_iterations equal to 2000, which is the number of RANSAC iterations to perform.
# Perform plane segmentation using RANSAC
plane_model, inliers = point_cloud.segment_plane(distance_threshold=distance_threshold, ransac_n=ransac_n, num_iterations=num_iterations)
# Extract inlier and outlier point clouds
inlier_cloud = point_cloud.select_by_index(inliers)
outlier_cloud = point_cloud.select_by_index(inliers, invert=True)
Observe how the plane which is the road is segmented from all other vertical objects such as cars, trees, billboards, and so on. The inliers are the road segment and the outliers are the rest.
DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is a density-based clustering algorithm proposed by Martin Ester, Hans-Peter Kriegel, Jörg Sander, and Xiaowei Xu in 1996. It is particularly suitable for datasets with complex structures and a need for automatic cluster detection.
- Choose parameters ε and MinPts, which define the neighborhood of a data point and the minimum number of points for forming core points, respectively.
- Mark data points as core points if they have at least MinPts points within their ε-neighborhood.
- A point A is directly density-reachable from B if B is a core point and A is within its ε-neighborhood.
- Density-reachability is transitive, forming density-based clusters by linking mutually density-reachable points.
- Noise points are outliers: not core points and not density-reachable from any core point.
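For intuition, the procedure above can be implemented naively in NumPy (brute-force distance matrix, fine for toy data; Open3D's cluster_dbscan is what we actually use on the real cloud):

```python
import numpy as np

def dbscan_np(points, eps, min_points):
    n = len(points)
    labels = np.full(n, -1)  # -1 marks noise / unassigned
    # Pairwise distances (brute force, O(n^2) memory)
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    neighbors = [np.flatnonzero(dists[i] <= eps) for i in range(n)]
    # Core points have at least min_points neighbors (including themselves)
    core = np.array([len(nb) >= min_points for nb in neighbors])
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or not core[i]:
            continue
        # Expand a new cluster from this core point
        labels[i] = cluster
        stack = list(neighbors[i])
        while stack:
            j = stack.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if core[j]:  # only core points propagate the cluster
                    stack.extend(neighbors[j])
        cluster += 1
    return labels

# Two dense blobs and one lonely noise point
blob_a = np.array([[0.0, i * 0.1] for i in range(6)])
blob_b = np.array([[10.0, i * 0.1] for i in range(6)])
noise = np.array([[5.0, 50.0]])
labels = dbscan_np(np.vstack([blob_a, blob_b, noise]), eps=0.5, min_points=4)
print(labels)  # blob_a -> cluster 0, blob_b -> cluster 1, lone point -> -1 (noise)
```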
In our scenario, we will use an eps value of 1.5, which is the maximum distance between two points for one to be considered in the neighborhood of the other, and a min_points value of 50, which signifies the minimum number of points required to form a dense region (core points).
labels = np.array(outlier_cloud.cluster_dbscan(eps=eps, min_points=min_points, print_progress=True))
Notice that the road is one cluster and all the other obstacles are separate clusters represented in different colors.
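One way to assign those colors is to map each cluster label to a palette entry (the palette below is an arbitrary choice of mine); the result can then be attached to the cloud with outlier_cloud.colors = o3d.utility.Vector3dVector(colors):

```python
import numpy as np

def labels_to_colors(labels):
    # DBSCAN marks noise with label -1; draw it in black
    palette = np.array([
        [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0],
        [1.0, 1.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 1.0],
    ])
    colors = np.zeros((len(labels), 3))
    clustered = labels >= 0
    # Cycle through the palette so any number of clusters gets a color
    colors[clustered] = palette[labels[clustered] % len(palette)]
    return colors

labels = np.array([0, 0, 1, -1, 2])
colors = labels_to_colors(labels)
print(colors[3])  # noise point stays black: [0. 0. 0.]
```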
After successfully clustering our obstacles, the next step is to enclose them within 3D bounding boxes. Given that we are working in a 3D space, the bounding boxes we create are also in 3D. However, it's essential to recognize that in addition to generating bounding boxes, we require information about their orientation. As some obstacles might be positioned at various angles relative to the ego vehicle, we aim to construct bounding boxes that precisely encompass each obstacle. We will use PCA for that:
- Compute the centroid of the point cloud subset, which is its center of mass.
- Calculate the covariance matrix of the subset, summarizing the relationships between dimensions.
- Apply PCA to the covariance matrix, finding eigenvectors and eigenvalues.
- The eigenvector with the largest eigenvalue signifies the longest axis and primary spread direction.
- The eigenvector with the smallest eigenvalue indicates the shortest axis and compression direction.
- Use these eigenvectors to define the orientation (rotation) of the Oriented Bounding Box (OBB). Aligning the OBB with the eigenvectors makes the box represent the point cloud's principal directions.
- The OBB size along each axis reflects the spread of the points along the corresponding eigenvector.
obs = []
# Group points by cluster label
indexes = pd.Series(range(len(labels))).groupby(labels, sort=False).apply(list).tolist()
# Iterate over clusters and compute an oriented bounding box for each
for i in range(len(indexes)):
    nb_points = len(outlier_cloud.select_by_index(indexes[i]).points)
    if MIN_POINTS < nb_points < MAX_POINTS:
        sub_cloud = outlier_cloud.select_by_index(indexes[i])
        obb = sub_cloud.get_oriented_bounding_box()
        obs.append(obb)
Note that the get_oriented_bounding_box function from Open3D performs the PCA for us behind the scenes. We use the outlier point cloud subset, as these points correspond to the cars and other objects, whereas the inliers represent the road.
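For reference, here is roughly what happens behind the scenes, sketched with NumPy (variable names are mine, and Open3D's implementation details may differ):

```python
import numpy as np

def pca_obb(points):
    # 1. Centroid (center of mass) of the subset
    centroid = points.mean(axis=0)
    centered = points - centroid
    # 2. Covariance matrix of the subset
    cov = np.cov(centered.T)
    # 3. Eigen-decomposition: eigenvectors are the principal directions
    eigenvalues, eigenvectors = np.linalg.eigh(cov)
    # 4-6. The eigenvectors define the box rotation; project the points
    # onto them to measure the extent along each principal axis
    local = centered @ eigenvectors
    extent = local.max(axis=0) - local.min(axis=0)
    return centroid, eigenvectors, extent

# Corners of an axis-aligned 2 x 1 x 0.5 box: the extents recover its dimensions
corners = np.array([[x, y, z] for x in (0, 2) for y in (0, 1) for z in (0, 0.5)], dtype=float)
_, _, extent = pca_obb(corners)
print(np.sort(extent))  # sorted extents: 0.5, 1.0, 2.0
```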
For the last part, we will see how we can do surface reconstruction given a point cloud. To do that, we first need to compute the normals of each point in the point cloud. Normals are direction vectors perpendicular to the surfaces at the points. For each point in the point cloud, a normal vector is calculated to indicate the direction of the surface at that point. Normals are used to estimate the orientation of surfaces and to reconstruct smooth surfaces from a point cloud. For this scenario, I chose to capture the point cloud of my jug of water using the LiDAR of an iPhone, as shown below:
We can then visualize the point cloud and the normals with the code below:
point_cloud_downsampled.estimate_normals(search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=100, max_nn=30))
o3d.visualization.draw_geometries([point_cloud_downsampled], point_show_normal=True)
There are several methods we can use for surface reconstruction:

1. Alpha Shapes:
- Constructs a simplicial complex from a set of points in 2D or 3D space.
- The complex includes vertices, edges, and faces, which together form a piecewise linear approximation of the point cloud.
- The parameter "alpha" determines which points are considered part of the shape and which are considered outliers.

2. Poisson Surface Reconstruction:
- Used to create a smooth and continuous surface from an unorganized point cloud.
- Based on solving a partial differential equation (PDE) over the volumetric representation of the point cloud.
- It is particularly effective for capturing fine geometric details in the reconstructed surface.

3. Ball Pivoting Algorithm:
- Works by "pivoting" a ball of a certain radius around each point in the point cloud.
- Constructs a triangle whenever the ball rests on three points without containing any other point.
- It iteratively adds triangles to the mesh until a closed surface is formed.
- It is suitable for reconstructing complex and irregular shapes, and it often produces watertight surfaces.
Below is an example using Alpha Shapes (left) and using MeshLab (right):
[1] Generation Robots. (n.d.). What is LiDAR Technology? Scanning technology. Webpage. https://www.generationrobots.com/blog/en/what-is-lidar-technology/
[2] Velodyne LiDAR. (n.d.). A Guide to LiDAR Wavelengths. Webpage. https://velodynelidar.com/blog/guide-to-lidar-wavelengths/
[3] NEON Science. (n.d.). LiDAR Basics. Webpage. https://www.neonscience.org/resources/learning-hub/tutorials/lidar-basics
[4] NEON Science. (n.d.). Plasio: View Point Clouds. Webpage. https://www.neonscience.org/resources/learning-hub/tutorials/plasio-view-pointclouds
[5] Towards Data Science. (n.d.). Random Sample Consensus Helps You Filter Those Pesky Outliers. Webpage. https://towardsdatascience.com/random-sample-consensus-helps-you-filter-those-pesky-outliers-9dbfbb8b668f
[6] First Principles of Computer Vision. (n.d.). Dealing with Outliers: RANSAC Image Stitching. YouTube. https://www.youtube.com/watch?v=EkYXjmiolBg&ab_channel=FirstPrinciplesofComputerVision
[7] AEVA. (n.d.). Aeries II. [Website]. https://www.aeva.com/aeries-ii/
[8] Open3D. (n.d.). [Website]. http://www.open3d.org/
[9] CVlibs. (n.d.). KITTI Dataset. [Website]. https://www.cvlibs.net/datasets/kitti/
[10] AEVA. (n.d.). [Website]. https://www.aeva.com/
[11] Ouster. (n.d.). [Website]. https://ouster.com/