Intelligent Driving Fusion Algorithm Research: sparse algorithms, temporal fusion and enhanced planning and control become the trend.
China Intelligent Driving Fusion Algorithm Research Report, 2024 released by ResearchInChina analyzes the status quo and trends of intelligent driving fusion algorithms (including perception, positioning, prediction, planning, decision, etc.), sorts out algorithm solutions and cases of chip vendors, OEMs, Tier1 & Tier2 suppliers and L4 algorithm providers, and summarizes the development trends of intelligent driving algorithms.
In the eight months from Musk's live test drive of FSD V12 Beta in August 2023 to the 30-day free trial of FSD V12 Supervised in March 2024, advanced intelligent driving functions such as urban NOA have become the arena of major OEMs, and application cases of end-to-end algorithms, BEV Transformer algorithms, and AI foundation models have kept multiplying.
1. Sparse algorithms improve efficiency and reduce intelligent driving cost.
At present, most BEV algorithms are dense, consuming considerable computing power and storage. Maintaining a smooth frame rate of more than 30 frames per second requires expensive computing resources such as an NVIDIA A100; even then, only five or six 2MP cameras can be supported. For 8MP cameras, far more expensive resources, such as multiple H100 GPUs, are needed.
The real world has sparse features. Sparsification helps sensors reduce noise and improves robustness. Moreover, as distance increases, grids inevitably become sparse: a dense network can only be maintained within about 50 meters. By reducing queries and feature interactions, sparse perception algorithms speed up computation and lower storage requirements, greatly improving the computing efficiency and performance of the perception model, shortening system latency, extending the range over which perception stays accurate, and easing the impact of vehicle speed.
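The scaling argument above can be made concrete with a back-of-the-envelope comparison (the grid resolution and query budget below are illustrative assumptions, not figures from the report): a dense BEV head must process every grid cell, so its cost grows quadratically with range, while a sparse head's cost is fixed by its object-query budget.

```python
# Illustrative cost comparison between dense grid-based and sparse
# query-based perception (all numbers are assumed for illustration).

def dense_bev_queries(range_m: float, resolution_m: float) -> int:
    """Number of cells a dense BEV head must process for a square grid
    spanning [-range_m, +range_m] in both x and y."""
    cells_per_side = int(2 * range_m / resolution_m)
    return cells_per_side * cells_per_side

def sparse_queries(num_object_queries: int) -> int:
    """A sparse head's cost is fixed by its query budget, independent of range."""
    return num_object_queries

dense_50m = dense_bev_queries(50, 0.5)    # 200 x 200 = 40,000 cells
dense_100m = dense_bev_queries(100, 0.5)  # 400 x 400 = 160,000 cells (4x)
sparse = sparse_queries(900)              # constant regardless of range
```

Doubling the perception range quadruples the dense workload but leaves the sparse workload unchanged, which is why sparse designs break the computing-power limit on perception range.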

Therefore, academia has been shifting from dense grid-based algorithms to sparse target-level algorithms since 2021. After long-term efforts, sparse target-level algorithms can now perform almost as well as dense grid-based ones, and industry keeps iterating on sparse algorithms as well. Recently, Horizon Robotics open-sourced Sparse4D, its vision-only algorithm that ranks first on both the nuScenes vision-only 3D detection and 3D tracking leaderboards.
Sparse4D is a series of algorithms for long-temporal-sequence sparse 3D object detection, within the scope of multi-view temporal fusion perception. In line with the industry trend toward sparse perception, Sparse4D builds a purely sparse fusion perception framework that makes perception algorithms more efficient and accurate while simplifying the perception system. Compared with dense BEV algorithms, Sparse4D reduces computational complexity, breaks the computing-power limit on the perception range, and outperforms dense BEV algorithms in both perception quality and inference speed.
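One ingredient of long-temporal-sequence sparse detection is carrying object anchors from frame to frame. The toy sketch below is our own simplification of that idea, not Sparse4D's implementation: last frame's 3D anchors are expressed in the current ego frame using ego motion (rotation is ignored here for brevity), then would be refined against current-frame features.

```python
import numpy as np

# Toy sketch of temporal anchor propagation (our simplification, not
# Sparse4D's code): 3D object anchors from the previous frame are moved
# into the current ego frame using ego translation, then refined.
def propagate_anchors(anchors_xyz: np.ndarray,
                      ego_translation: np.ndarray) -> np.ndarray:
    """Express last frame's anchors in the current ego frame
    (ego rotation ignored for simplicity)."""
    return anchors_xyz - ego_translation

prev_anchors = np.array([[10.0, 0.0, 0.0],    # object 10 m ahead
                         [20.0, 3.0, 0.0]])   # object 20 m ahead, 3 m left
ego_motion = np.array([1.0, 0.0, 0.0])        # ego moved 1 m forward
curr_anchors = propagate_anchors(prev_anchors, ego_motion)
# objects now appear 1 m closer: [[9, 0, 0], [19, 3, 0]]
```

Because only a small set of anchors is propagated and refined, the per-frame cost stays bounded no matter how long the temporal sequence grows.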
Another significant advantage of sparse algorithms is that they cut the cost of intelligent driving solutions by depending less on sensors and consuming less computing power. Megvii Technology, for example, stated that by optimizing its BEV algorithm, reducing computing power consumption, removing HD maps, RTK and LiDAR, unifying the algorithm framework, and automating annotation, it has lowered the cost of its intelligent driving solutions based on the PETR series of sparse algorithms by 20%-30% compared with conventional solutions on the market.
2. 4D algorithms offer higher accuracy and make intelligent driving more reliable.
As seen from OEMs' sensor configurations, ever more sensors have been installed over the past three years, supporting more intelligent driving functions and application scenarios. Most urban NOA solutions carry 10-12 cameras, 3-5 radars, 12 ultrasonic radars, and 1-3 LiDARs.

With the increasing number of sensors, ever more perception data are generated, and how to improve the utilization of these data is now on the agenda of OEMs and algorithm providers. Although algorithm details differ slightly from company to company, the general idea behind current mainstream BEV Transformer solutions is basically the same: convert from 2D to 3D, and then to 4D.
Temporal fusion can greatly improve algorithm continuity. A memory of obstacles helps handle occlusion and allows better perception of speed information, while a memory of road signs improves driving safety and the accuracy of vehicle behavior prediction. Fusing information from historical frames improves the perception accuracy of the current object, and fusing information from future frames can verify it, thereby enhancing the reliability and accuracy of the algorithm.
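A minimal way to see how a temporal memory rides through occlusion is an exponential moving average over per-frame features. The sketch below is our own toy illustration (no vendor's actual fusion scheme): when the current frame loses an object, the fused state still carries it.

```python
import numpy as np

# Minimal temporal-fusion sketch (our own toy, not any vendor's code):
# blend per-frame feature maps with an exponential moving average so
# that briefly occluded objects persist in the fused state.
class TemporalFuser:
    def __init__(self, decay: float = 0.8):
        self.decay = decay
        self.state = None  # fused feature map carried across frames

    def update(self, frame_features: np.ndarray) -> np.ndarray:
        if self.state is None:
            self.state = frame_features.copy()
        else:
            # History dominates; changes across frames also make
            # velocity estimation from positions possible.
            self.state = self.decay * self.state + (1 - self.decay) * frame_features
        return self.state

fuser = TemporalFuser(decay=0.8)
f1 = fuser.update(np.ones((4, 4)))   # frame 1: object visible everywhere
f2 = fuser.update(np.zeros((4, 4)))  # frame 2: occlusion, nothing detected
# History keeps the feature alive: f2 holds 0.8, not 0.
```

Real systems fuse learned features with attention rather than a fixed average, but the principle is the same: the current frame is interpreted jointly with a decaying memory of past frames.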
Tesla's Occupancy Network algorithm is a typical 4D algorithm.

Tesla adds height information to the vector space of 2D BEV + temporal information output by the original Transformer algorithm, building a 4D representation of 3D BEV + temporal information. On FSD hardware the network runs every 10 ms, i.e., at 100 FPS, which greatly improves detection speed.
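The "3D BEV + time" representation can be pictured as a time sequence of voxel grids. The dimensions below are assumptions for illustration, not Tesla's actual resolutions:

```python
import numpy as np

# Toy illustration of a 4D occupancy representation (dimensions are
# assumed, not Tesla's): a time axis over 3D voxel grids, i.e.
# BEV (x, y) plus height (z) plus time (t).
T, X, Y, Z = 4, 64, 64, 8          # 4 frames, 64x64 BEV cells, 8 height bins
occupancy = np.zeros((T, X, Y, Z), dtype=np.float32)

# Mark one voxel occupied in the latest frame, e.g. an obstacle in
# the third height bin at BEV cell (32, 40).
occupancy[-1, 32, 40, 2] = 1.0

# A plain 2D BEV map is recovered by collapsing the height axis of
# one time slice.
bev_latest = occupancy[-1].max(axis=-1)   # shape (64, 64)
```

Collapsing the height axis shows what a 2D BEV loses: overhanging structures and low obstacles at the same (x, y) cell become indistinguishable, which is exactly what the added height dimension resolves.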

3. End-to-end algorithms integrating perception, planning and control enable more anthropomorphic intelligent driving.
Mainstream intelligent driving algorithms have adopted the "BEV + Transformer" architecture, and many innovative perception algorithms have emerged. In planning and control, however, rule-based algorithms still prevail. At some OEMs the perception and planning & control systems remain in a "split" state: in complex scenarios the perception module may fail to accurately recognize or understand the environment, and the decision module may make wrong driving decisions because it mishandles the perception results or runs into algorithm limitations. This restricts the development of advanced intelligent driving to some extent.
UniAD, an end-to-end intelligent driving algorithm jointly released by SenseTime, OpenDriveLab and Horizon Robotics, won the Best Paper Award at CVPR 2023. For the first time, UniAD integrates three main tasks (perception, prediction and planning) and six sub-tasks (target detection, target tracking, scene mapping, trajectory prediction, grid prediction and path planning) into a unified, Transformer-based end-to-end network, forming a general full-stack driving model. On the nuScenes real-scene dataset, UniAD achieves the best performance in the field on all tasks; its prediction and planning results in particular far exceed the previous best solutions.
A basic end-to-end algorithm maps sensor inputs directly to planning and control outputs, but it is difficult to optimize: there is no effective feature communication between network modules, no effective interaction between tasks, and results must be output in stages. The decision-oriented, perception-decision integrated design proposed by UniAD instead deeply fuses token features along the perception-prediction-decision pipeline, so that the metrics of all decision-oriented tasks improve consistently.
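The token-passing idea can be sketched schematically. The code below is our own simplification with made-up shapes and stand-in transforms, not UniAD's actual architecture: each stage consumes the previous stage's tokens, so in a differentiable implementation a planning loss could drive all upstream tasks consistently.

```python
import numpy as np

# Schematic sketch of a perception -> prediction -> planning token
# pipeline (our own simplification, not UniAD's code).
rng = np.random.default_rng(0)
DIM = 16
W_pred = rng.standard_normal((DIM, DIM)) / DIM ** 0.5  # stand-in weights

def perception(sensor_feats: np.ndarray) -> np.ndarray:
    """Sensor features -> track tokens (one vector per tracked agent)."""
    return np.tanh(sensor_feats)

def prediction(track_tokens: np.ndarray) -> np.ndarray:
    """Track tokens -> motion tokens conditioned on perception output."""
    return track_tokens @ W_pred

def planning(motion_tokens: np.ndarray) -> np.ndarray:
    """Motion tokens -> a single ego plan token summarizing interactions."""
    return motion_tokens.mean(axis=0)

sensor_feats = rng.standard_normal((3, DIM))  # 3 agents' features (fake)
plan_token = planning(prediction(perception(sensor_feats)))
```

The design choice this illustrates is that downstream modules see upstream *features* (tokens), not hard, phase-wise outputs, which is what enables end-to-end optimization toward the planning objective.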

In planning and control, Tesla adopts an interactive search + evaluation model approach, combining conventional search algorithms with neural networks to produce comfortable and effective trajectories:
Firstly, candidate objects are obtained from lane lines, occupancy networks and obstacles, and decision trees and candidate object sequences are generated.
Next, trajectories reaching those objects are constructed in parallel using conventional search and neural networks.
Finally, the interactions between the ego vehicle and other participants in the scene are predicted to form new trajectories. After multiple rounds of evaluation, the final trajectory is selected: each generated trajectory is scored on collision checks, comfort, the likelihood of driver takeover, and similarity to human driving, and the execution strategy is decided accordingly.
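The generate-then-score loop above can be sketched in a few lines. The candidate set, cost terms, and weights below are illustrative assumptions, not Tesla's actual criteria:

```python
# Minimal "generate candidates, then score" sketch (cost terms and
# weights are illustrative assumptions, not Tesla's actual model).
candidates = [
    {"name": "keep_lane",  "collision_risk": 0.0, "jerk": 0.2,
     "takeover_prob": 0.1, "human_likeness": 0.9},
    {"name": "nudge_left", "collision_risk": 0.1, "jerk": 0.5,
     "takeover_prob": 0.3, "human_likeness": 0.7},
    {"name": "hard_brake", "collision_risk": 0.0, "jerk": 0.9,
     "takeover_prob": 0.6, "human_likeness": 0.3},
]

def score(traj: dict) -> float:
    """Lower cost is better: penalize collision risk, discomfort (jerk)
    and likely driver takeover; reward similarity to human driving."""
    return (10.0 * traj["collision_risk"]
            + 1.0 * traj["jerk"]
            + 2.0 * traj["takeover_prob"]
            - 1.0 * traj["human_likeness"])

best = min(candidates, key=score)   # "keep_lane" under these weights
```

In a real stack the human-likeness term would itself come from a learned model, which is how conventional search and neural evaluation are combined in one loop.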

XBrain, the ultimate architecture of Xpeng's all-scenario intelligent driving, is composed of XNet 2.0, a deep vision neural network, and XPlanner, a neural-network-based planning and control module with the following features:
Rule algorithm
Long time sequence (minute-level)
Multi-object (multi-agent decision, gaming capability)
Strong reasoning
Previous advanced driving and ADAS functional architectures were separate, consisting of many small logic-based planning and control algorithms for individual sub-scenarios, whereas XPlanner uses a unified planning and control architecture. XPlanner is backed by a foundation model and trained in simulation on a large number of extreme driving scenarios, ensuring that it can cope with various complex situations.
