
Lidar vs. Radar in Autonomous Driving: Which Sensor is Best?


Your car already includes an array of sensors, ECUs, and other electronics to ensure it functions properly. If you’ve purchased a new vehicle in the last 5 years or so, some of those sensors have become more obvious. A great example is your car’s backup sensor, which uses a radar module for nearby object detection.

The number of sensors, the wireless and in-vehicle networking systems, and the overall computing power in vehicles are all expected to increase as vehicles become more autonomous. For the current class of vehicles on the market, lidar is less important, as drivers are still fully in control of all but the most advanced vehicles. If you’re innovating safety systems or computer vision systems for new vehicles, it helps to know the differences between lidar vs. radar for autonomous vehicles. You may find that both technologies could be useful in current human-controlled vehicles.

Lidar vs. Radar for Autonomous Driving

Let’s jump right into the role of these two types of systems in human-controlled and autonomous vehicles. As we’ll see, they play different roles alongside many other sensors (e.g., cameras and sonar), and they will become an integral part of connected cars through V2X networks.

Radar Systems

The primary uses for radar in automotive systems are rangefinding (position detection) and heading determination (velocity detection). The current best-in-class solutions for new vehicles use FMCW radar at 77 GHz (linear chirp from 76 to 81 GHz) for range finding and heading determination. These modules are normally built using a stack of two boards; one board contains the RF components with series-fed patch array antennas (4 Tx, 3 Rx), and the other contains the processor and power circuitry. The two boards are separated by a ground plane, which nicely isolates the digital and analog sections of the system.
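To make the ranging and velocity math concrete, here’s a minimal Python sketch of how an FMCW module converts a measured beat frequency into a range and a Doppler shift into a velocity. The chirp bandwidth and duration below are illustrative assumptions, not the specifications of any particular module.

```python
import numpy as np

# Assumed chirp parameters for a 77 GHz FMCW automotive radar;
# these specific values are illustrative, not from a real module.
c = 3.0e8            # speed of light (m/s)
f_carrier = 77e9     # carrier frequency (Hz)
bandwidth = 4e9      # chirp bandwidth within the 76-81 GHz band (Hz)
t_chirp = 40e-6      # chirp duration (s)

slope = bandwidth / t_chirp  # chirp slope (Hz/s)

def beat_to_range(f_beat):
    """Convert a measured beat frequency to target range.

    For a linear chirp, the round-trip delay shifts the received
    chirp by f_beat = slope * (2R / c), so R = c * f_beat / (2 * slope).
    """
    return c * f_beat / (2 * slope)

def doppler_to_velocity(f_doppler):
    """Convert a Doppler shift to radial (closing) velocity.

    f_doppler = 2 * v * f_carrier / c  =>  v = c * f_doppler / (2 * f_carrier)
    """
    return c * f_doppler / (2 * f_carrier)

# Example: a 2 MHz beat frequency and a 5 kHz Doppler shift
print(f"Range:    {beat_to_range(2e6):.1f} m")         # ~3.0 m
print(f"Velocity: {doppler_to_velocity(5e3):.2f} m/s") # ~9.7 m/s
```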

Radar systems will have to confront an ever-increasing array of EMC problems, but these might be solvable through the use of fiber for radar signal routing and networking. In addition, they only provide a narrow field of view due to beam divergence, despite the use of beamforming. To learn more about some of these challenges in automotive EMC with chirped radar, read my recent article in Interference Technology’s 2020 Automotive EMC Guide.

Lidar Systems

Lidar systems currently use 905 nm infrared laser pulses to create a raster-scanned map of the environment surrounding the vehicle. Note that these systems may move to 1550 nm in the name of safety, although some doubts have been raised regarding this point; you can read more about this potential safety issue in a recent LaserFocusWorld article. Coherent lidar uses continuous-wave lasers for velocity measurements, while pulsed lasers are used for raster-scanned depth mapping.

Lidar systems generate a 3D point cloud of the surrounding environment by scanning the beam across the region around the vehicle. Each point in the cloud is created by steering the beam to a specific angle and firing a laser pulse. The reflected signal is timed with a time-to-digital converter (TDC), and the round-trip time of flight is converted to a distance, giving the depth measurement at that point. The 3D depth map generated in this process can provide a very high resolution image when the reflection occurs very close to the vehicle.
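As a sketch of how those TDC measurements become points in the cloud, the following Python snippet converts a round-trip time of flight and the beam steering angles into Cartesian coordinates in the vehicle frame. The angle conventions here are assumptions for illustration.

```python
import numpy as np

C = 3.0e8  # speed of light (m/s)

def tof_to_distance(t_flight):
    """Round-trip time of flight to one-way distance: d = c * t / 2."""
    return C * t_flight / 2

def scan_to_point(t_flight, azimuth_deg, elevation_deg):
    """Convert one raster-scan sample (ToF + beam angles) to an
    (x, y, z) point in the vehicle frame. The axis conventions
    below (x forward, y left, z up) are illustrative assumptions."""
    d = tof_to_distance(t_flight)
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    x = d * np.cos(el) * np.cos(az)   # forward
    y = d * np.cos(el) * np.sin(az)   # left
    z = d * np.sin(el)                # up
    return np.array([x, y, z])

# Example: a return arriving 200 ns after the pulse fired,
# with the beam steered 10 degrees right and 2 degrees down;
# d = 3e8 * 200e-9 / 2 = 30 m
print(scan_to_point(200e-9, azimuth_deg=-10.0, elevation_deg=-2.0))
```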

3D depth map created using an experimental lidar system.

Such an image can then be used for object identification and tracking alongside radar modules. Lidar is not part of today’s ADAS packages, but it may play a huge role in future automobiles, including driverless cars. These systems require impeccable timing with very low jitter at the driver output (less than 100 ps), which is a difficult task considering the high power delivered to the driven laser diode in a lidar system. Ensuring stable power integrity in your PCB design is one of the challenges in creating high quality lidar images.
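To see why sub-100 ps jitter matters, note that any timing uncertainty in the driver or TDC translates directly into depth uncertainty, just as time of flight translates into distance. A quick back-of-the-envelope calculation in Python:

```python
C = 3.0e8  # speed of light (m/s)

def jitter_to_range_error(t_jitter):
    """Timing uncertainty maps to depth uncertainty the same way
    time of flight maps to distance: dR = c * dt / 2."""
    return C * t_jitter / 2

for jitter_ps in (10, 100, 500):
    err_cm = jitter_to_range_error(jitter_ps * 1e-12) * 100
    print(f"{jitter_ps:>4} ps of jitter -> {err_cm:.2f} cm of range error")

# 100 ps of driver/TDC jitter already costs ~1.5 cm of depth resolution
```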

Which is Best for New Vehicles?

In my opinion, the jury is still out on which system will be ideal for newer vehicles, including autonomous vehicles. For ADAS, radar is still king; newer cars with advanced ADAS packages use an array of sensors for object identification and tracking as well as heading and velocity estimation. In current vehicles, short-range (24 GHz) and long-range (77 GHz) radar modules are being used for ADAS functions, including object tracking, hazard detection, and adaptive cruise control. These all play a role in making human-controlled automobiles safer and enabling smarter autonomous vehicles.

To say that one of these options is ‘better’ than the other misses the entire point of having multiple sensors in a vehicle. The two types of systems can reinforce each other to provide a complete view of the surrounding environment, including the location and heading of nearby objects. 24 and 77 GHz radar modules can already be added to a vehicle alongside a lidar system, and the two types of sensors can be used together. Automotive companies are already exploring the use of both technologies to identify targets, map the surrounding environment with tagged targets, and construct depth images of nearby objects. Once you bring computer vision and image classification algorithms into the mix, you have a complete system to identify, track, and distinguish different objects.
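As a toy example of that reinforcement, here is a one-dimensional inverse-variance fusion of a lidar range estimate and a radar range estimate for the same target. Production systems use full tracking filters (e.g., Kalman filters) over many states; this sketch, with made-up measurement variances, only shows the core idea that fusing two imperfect sensors yields an estimate better than either alone.

```python
import numpy as np

def fuse_estimates(x_lidar, var_lidar, x_radar, var_radar):
    """Inverse-variance weighted fusion of two independent
    measurements of the same quantity (here, range to a target).
    The fused variance is always <= the smaller input variance."""
    w_lidar = 1.0 / var_lidar
    w_radar = 1.0 / var_radar
    x_fused = (w_lidar * x_lidar + w_radar * x_radar) / (w_lidar + w_radar)
    var_fused = 1.0 / (w_lidar + w_radar)
    return x_fused, var_fused

# Example: lidar gives a precise range, radar a coarser one
# (variances are illustrative assumptions)
x, var = fuse_estimates(x_lidar=30.02, var_lidar=0.01**2,
                        x_radar=29.80, var_radar=0.25**2)
print(f"Fused range: {x:.3f} m (sigma = {np.sqrt(var)*100:.1f} cm)")
```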

 

Comparison of depth map images created using lidar vs. radar in autonomous driving. Source: fierceelectronics.com

In short, the two technologies can reinforce each other, but they are not perfect replacements for each other. Lidar is advantageous as it does not suffer the same RF routing and signal integrity problems as a radar module. In contrast, radar is much easier to design, build, and integrate; there are no optical components involved beyond a simple radome. There are also some tasks that neither technology can perform reliably, and cameras are still needed as part of the sensor suite in autonomous vehicles.

Don’t Forget About Cameras

There are other tasks that neither system can handle. Something like lane assist requires a high contrast image, which must be gathered with a camera and processed to pick out lanes in the image. Machine learning algorithms are very useful for image classification and object segmentation in still camera images, but cameras have a limited field of view and require particular optics to work properly on a vehicle. The current class of cameras resembles smartphone cameras with CCD or CMOS sensors; the images they gather are sharp enough to be analyzed, and autofocusing can be applied during motion, but they are limited to specific tasks around the vehicle.
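For a sense of what ‘processed to pick out lanes’ can mean in the classical pipeline, here is a minimal OpenCV sketch using Canny edge detection and a probabilistic Hough transform. The file names, region of interest, and thresholds are illustrative assumptions; real lane-keeping stacks add perspective correction, temporal filtering, and usually learned models.

```python
import cv2
import numpy as np

frame = cv2.imread("dashcam_frame.jpg")          # hypothetical input image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)      # suppress sensor noise
edges = cv2.Canny(blurred, 50, 150)              # high-contrast edge map

# Keep only a trapezoidal region ahead of the vehicle
h, w = edges.shape
mask = np.zeros_like(edges)
roi = np.array([[(0, h), (w // 2 - 50, h // 2),
                 (w // 2 + 50, h // 2), (w, h)]], dtype=np.int32)
cv2.fillPoly(mask, roi, 255)
edges = cv2.bitwise_and(edges, mask)

# Fit line segments to the remaining edges
lines = cv2.HoughLinesP(edges, rho=2, theta=np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=20)
for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
    cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 3)
cv2.imwrite("lanes_overlay.jpg", frame)
```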

As much as we would like to do everything needed for computer vision around a car with a single panoramic camera, it simply isn’t possible. The datasets used to train image classification and object segmentation models contain flat images, and I have not seen a report on flat-image training sets being used for classification, segmentation, or identification in panoramic images.

At NWES, we’ve worked with everything from DC power systems to experimental RF products for mil-aero systems. If you still can’t decide between lidar vs. radar for autonomous driving, we can help you analyze the tradeoffs and create high quality, fully manufacturable PCB layouts for your system. We're here to help electronics companies design modern PCBs and create cutting-edge technology. We've also partnered directly with EDA companies and advanced PCB manufacturers, and we'll make sure your next layout is fully manufacturable at scale. Contact NWES for a consultation.