Radar has been one of the most significant additions to vehicles in the past two decades. It enables advanced driver assistance system (ADAS) features like adaptive cruise control (ACC), as well as critical safety features like automatic emergency braking and blind spot detection. Over that time, radar has grown from an expensive option on upmarket cars to a standard feature across the market.
IDTechEx's research in "Automotive Radar 2024-2044: Forecasts, Technologies, Applications" shows that, on average, 70% of new cars shipped in 2022 had a front-facing radar, while 30% had side radars. However, with ADAS becoming more sophisticated and level 3 autonomous systems entering the market for the first time, radar technology needs to improve to meet the performance demands these systems place on it. As such, the industry is now seeing the first generations of "4D imaging" radars come to market and be deployed on vehicles. With that in mind, IDTechEx explores what a 4D imaging radar is, why it is needed, and what new technologies these radars use.
What is a 4D imaging radar?
Firstly, a 4D radar is not automatically an imaging radar. The two terms are sometimes used interchangeably; however, IDTechEx believes it is important to distinguish between them. In the past, most radars have been limited to three dimensions: azimuth (horizontal angle), distance, and velocity. A 4D radar simply adds some resolving ability in the fourth dimension, elevation (vertical angle).
A classic example that highlights the need for this fourth dimension is the scenario of detecting a parked car at the entrance of a tunnel. A 3D radar will return the same results whether there is a car at the entrance or not. Normally, the vehicle will assume that the large reflection is a tunnel, and the adaptive cruise control system will continue. This behavior is perfectly acceptable if a human is behind the wheel and can override the ACC system accordingly, but it becomes an issue for vehicles operating at SAE level 3 and above, which have become a real-world reality over the past couple of years.
In theory, a 4D radar will overcome this problem. The addition of vertical resolution means the radar should be able to separate the stopped vehicle at ground level from the tunnel a few meters above the road surface. However, if the vertical resolution is so poor that the tunnel and car still fall within the same "pixel", then the situation has not improved. This is where the distinction between 4D radar and 4D imaging radar comes into play. An imaging radar should have sufficient angular resolution to distinguish the tunnel and vehicle even at very long distances. In fact, IDTechEx thinks that an imaging radar should have sufficient resolution to distinguish much smaller obstacles at long distances, for instance, a person on the road at 100 m. But what resolution is needed to do this? Assuming that the person is 5–6 feet tall, a resolution of around 1° would be needed to separate the person from the road. In this scenario, the system would have enough time to activate the brakes and bring the vehicle to a stop, avoiding a collision, even at highway speeds.
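The ~1° figure follows from simple small-angle geometry. A minimal sketch, assuming a 1.7 m (roughly 5.6 ft) person, a value chosen here for illustration:

```python
import math

def required_resolution_deg(target_height_m: float, range_m: float) -> float:
    """Angle (in degrees) subtended by a target of the given height at the given range.

    A radar needs elevation resolution at least this fine to separate
    the target from the road surface at that distance.
    """
    return math.degrees(math.atan(target_height_m / range_m))

# A ~1.7 m person at 100 m subtends just under 1 degree.
print(round(required_resolution_deg(1.7, 100.0), 2))  # 0.97
```

At 200 m the same person subtends only about 0.5°, which is why resolution requirements tighten quickly as detection range grows.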
However, being able to detect the person is one thing; correctly classifying them as a person with the radar data alone is another. This is why radar is usually supported with front-facing cameras for applications like automatic emergency braking. At night, in foggy conditions, or in heavy rain, the camera might not be able to see either. In these situations, there are a few options: add short- or long-wave infrared detection to the vehicle, providing camera-like resolution with robustness to poor visibility conditions; add LiDAR to the vehicle, with radar-like ranging abilities but at a large cost; or improve the resolution of radar further.
Getting to 1° resolution and beyond
Radar has a natural physical limit to its resolving performance known as the Rayleigh criterion: angular resolution is inversely proportional to the product of operating frequency and aperture size (θ ≈ 1.22λ/D, i.e. θ ∝ 1/(fD)). In short, a normal automotive radar operating at 77 GHz with an antenna array 10 cm wide should be able to reach a resolution of roughly 2.8°. For context, a typical human eye can resolve around 0.005-0.01°, enough to see a 1 cm object at 100 m. To improve radar's resolution, its operating frequency could be increased; after all, humans use visible light, which is in the hundreds-of-terahertz range. However, the frequency of radar is limited by regulations and is not something that is easily changed.
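The quoted figure can be checked directly from the diffraction limit. A minimal sketch (the 1.22 Rayleigh factor for a circular aperture is an assumption consistent with the numbers in the text):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def rayleigh_resolution_deg(freq_hz: float, aperture_m: float) -> float:
    """Diffraction-limited angular resolution: theta ~ 1.22 * lambda / D."""
    wavelength_m = C / freq_hz
    return math.degrees(1.22 * wavelength_m / aperture_m)

# 77 GHz carrier -> wavelength of about 3.9 mm. With a 10 cm aperture
# this gives roughly 2.7 degrees, matching the ~2.8 degrees cited.
print(round(rayleigh_resolution_deg(77e9, 0.10), 1))  # 2.7
```

The same function makes the regulatory constraint concrete: doubling the frequency would halve the resolution figure, but the 76-81 GHz band is fixed, so the aperture is the only free variable.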
The next option is to increase the size of the aperture. While this is technically possible, doing so runs into practical challenges. To get from 2.8° to 1°, the aperture needs to grow from 10 cm to 28 cm. To get this resolution in both azimuth and elevation, the radar is now 28 cm x 28 cm, which will be challenging to integrate into the front bumper. It will likely restrict airflow to the radiator, could be difficult to protect from damage, and will cause the OEM's aesthetics teams a bit of a headache. IDTechEx has seen radars getting larger, with examples like Continental's ARS540, Bosch's FR5+, and Arbe's Phoenix all exceeding 10 cm, but the largest of these, the Phoenix, is still only 12.7 cm by 14.3 cm.
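The 28 cm figure comes from inverting the same diffraction limit. A minimal sketch (again assuming the 1.22 Rayleigh factor and the regulated 77 GHz carrier):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def aperture_for_resolution_m(freq_hz: float, target_deg: float) -> float:
    """Aperture D (in metres) needed to hit a diffraction-limited resolution."""
    wavelength_m = C / freq_hz
    return 1.22 * wavelength_m / math.radians(target_deg)

# Reaching 1 degree at 77 GHz requires roughly a 27 cm aperture,
# in line with the ~28 cm figure quoted above.
print(round(aperture_for_resolution_m(77e9, 1.0) * 100, 1))  # ~27.2 cm
```

Pushing further to 0.5° would demand over half a metre of aperture per axis, which explains why the industry is looking at the distributed and software approaches described below.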
Another issue with creating larger radars is filling them with channels. Creating a 28 cm x 28 cm radar without the supporting semiconductor technology would be like building a $10,000 DSLR camera lens and pairing it with a 1 MP camera phone sensor from 2001. Here, the analog to pixels is virtual channels: the product of a radar's transmitting and receiving channel counts (Tx and Rx). In the past, a 3D radar might have had one transmitting channel and three receiving channels (1Tx/3Rx). A basic 4D radar would likely use a radar transceiver with a 3Tx/4Rx arrangement, while some leading radars combine four of these chips to get a 12Tx/16Rx arrangement with 192 virtual channels. Arbe has developed a chipset that scales to 48Tx/48Rx in a single radar, giving 2,304 virtual channels. This helps Arbe achieve 1° resolution in azimuth and 1.7° in elevation.
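The virtual channel counts above are simply the Tx and Rx counts multiplied together, since in a MIMO radar each transmitter/receiver pair contributes one element of the virtual array:

```python
def virtual_channels(tx: int, rx: int) -> int:
    """Each Tx/Rx pair in a MIMO radar forms one virtual channel."""
    return tx * rx

# The progressions mentioned in the text:
print(virtual_channels(1, 3))    # legacy 3D radar: 3
print(virtual_channels(3, 4))    # basic 4D transceiver: 12
print(virtual_channels(12, 16))  # four cascaded 3Tx/4Rx chips: 192
print(virtual_channels(48, 48))  # Arbe-class chipset: 2304
```

The multiplicative scaling is the key attraction of MIMO: channel count grows with the product of Tx and Rx, not their sum, so adding silicon pays off quadratically.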
One way of combating the challenges of building a very large radar is to distribute it. IDTechEx has seen a couple of approaches to this. One, from Zendar, involves using two lower-performance radars placed at opposite ends of the bumper and working together, increasing the effective aperture from less than 10 cm to 1.5–2 m. As such, the resolution of these two radars working together is just over 0.1° in azimuth. The other approach that IDTechEx has seen is to build separate antenna boards for each channel (on a 3Tx/4Rx radar) and spread them across the bumper. This is the development route being explored by Plastic Omnium and Greener Wave.
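The same diffraction limit shows where the distributed figure comes from, treating the pair as one coherent aperture. This is an idealisation: a sparse two-element aperture brings grating-lobe and calibration challenges that the simple formula ignores, and the 1.75 m baseline is an assumed midpoint of the 1.5–2 m range:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def rayleigh_resolution_deg(freq_hz: float, aperture_m: float) -> float:
    """Diffraction-limited angular resolution: theta ~ 1.22 * lambda / D."""
    return math.degrees(1.22 * (C / freq_hz) / aperture_m)

# Two 77 GHz radars 1.75 m apart, idealised as a single coherent
# aperture, give on the order of 0.15 degrees -- consistent with
# the "just over 0.1 degrees" quoted in the text.
print(round(rayleigh_resolution_deg(77e9, 1.75), 2))  # 0.16
```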
Software is another key aspect of this discussion, and nearly all the companies mentioned here will be using some kind of super-resolution software to improve their performance. Returning to the camera analogy, modern DSLR cameras come with powerful processors that can make the most of an image, while the cameras in modern phones have had years of software development to produce the sharpest, most natural-looking results. In radar, a few start-ups are producing exemplary algorithms that improve resolution without any physical changes. Zadar Labs uses technologies like machine learning, AI, and encoded transmission signals to improve radar performance. Spartan, on the other hand, uses an algorithm based on research for F-18 and F-35 fighter jet applications. Super-resolution software can improve angular resolution by a factor of around four, taking a standard 2.8° radar down to 0.5–1°, or lower if it already employs some of the other techniques discussed here.
Scanning is another promising option for radar. Metawave's metamaterial antenna uses beamforming and beam steering technologies to focus the radar's performance into a thin fan. This fan is then scanned through the desired elevation field of view, providing upgraded resolution performance in both elevation and azimuth. The other bonus of this technology is that the emission energy is spread over a far smaller area than a flash-type approach, meaning it can achieve longer ranges than a non-scanning equivalent.
With all these options for building the next generation of radar, how can one know which is best? IDTechEx's "Automotive Radar 2024-2044: Forecasts, Technologies, Applications" report goes into even more detail about how these technologies work and improve radar performance. The report also presents benchmarking analysis to pit these technologies against each other over key performance indicators such as angular resolution, range, field of view, and frame rate, identifying which technology produces the best overall package, with some unexpected results.
IDTechEx's latest report, "Automotive Radar 2024-2044: Forecasts, Technologies, Applications," provides a holistic overview of the automotive radar industry, including tier one and tier 2 products, market shares, start-ups, new technologies, material requirements, autonomous and ADAS markets, and 20-year forecasts.
Click here for the full report and downloadable sample pages.
Click here for more information on the full portfolio of IDTechEx research and how to access the expert-led data and analysis as part of a market intelligence subscription.