Recent work in autonomous driving increasingly targets the robustness and adaptability of driving systems in complex environments. Researchers are exploring frameworks that integrate diverse data sources and improve decision-making. New methods address the limitations of traditional imitation learning with context-aware planning and adaptive strategies that improve spatial awareness and trajectory generation, while frameworks that unify vision and motion representation are gaining traction, enabling real-time planning grounded in rich scene understanding. In parallel, vision-language models are being integrated into reinforcement learning to improve safety and contextual awareness during vehicle operation. Together, these developments address practical deployment challenges, such as maintaining safety in unpredictable traffic and improving system efficiency, and move the field toward more reliable, deployable driving solutions in real-world applications.
Existing end-to-end autonomous driving models rely heavily on purely data-driven inductive reasoning. This "black-box" nature leaves them without interpretability or absolute safety guarantees in comp...
Ensuring safe decision-making in autonomous vehicles remains a fundamental challenge despite rapid advances in end-to-end learning approaches. Traditional reinforcement learning (RL) methods rely on m...
Vision-based autonomous driving has gained much attention due to its low cost and excellent performance. Compared with dense BEV (Bird's Eye View) or sparse query models, the Gaussian-centric method is a...
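As a rough illustration of the Gaussian-centric representation mentioned above (the abstract is truncated, so this is a generic sketch rather than the paper's actual design), each scene primitive typically carries a mean, an anisotropic covariance factored into scale and rotation, an opacity, and a semantic feature vector. The class and function names below are placeholders.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SemanticGaussian:
    """One primitive in a Gaussian-centric scene representation (illustrative only)."""
    mean: np.ndarray      # (3,) centre of the Gaussian in ego/world coordinates
    scale: np.ndarray     # (3,) per-axis extent; covariance = R diag(scale^2) R^T
    rotation: np.ndarray  # (4,) unit quaternion giving the orientation R
    opacity: float        # contribution weight when splatting/voxelising
    features: np.ndarray  # (C,) semantic or occupancy logits carried by the primitive

def covariance(g: SemanticGaussian) -> np.ndarray:
    """Recover the full 3x3 covariance from the scale/rotation factorisation."""
    w, x, y, z = g.rotation
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    return R @ np.diag(g.scale ** 2) @ R.T
```

The usual argument for such representations is that a dense BEV grid spends capacity on every cell, whereas a set of Gaussians only allocates primitives where geometry and semantics actually exist.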
End-to-end autonomous driving aims to generate safe and plausible planning policies from raw sensor input. Driving world models have shown great potential in learning rich representations by predictin...
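Since the abstract above is cut off, the following is only a minimal sketch of the generic driving-world-model pattern it alludes to: encode sensor features into a latent state, roll the latent forward under candidate actions, and decode imagined futures that a planner can score. Module names and dimensions are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TinyDrivingWorldModel(nn.Module):
    """Minimal latent world model: encode an observation, roll the latent forward under actions."""
    def __init__(self, obs_dim=512, act_dim=2, latent_dim=128):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, latent_dim)    # sensor features -> latent state
        self.dynamics = nn.GRUCell(act_dim, latent_dim)  # latent transition conditioned on action
        self.decoder = nn.Linear(latent_dim, obs_dim)    # predict next observation features

    def rollout(self, obs_feat, actions):
        """Predict future observation features for a sequence of candidate actions."""
        z = torch.tanh(self.encoder(obs_feat))           # (B, latent_dim)
        preds = []
        for a in actions.unbind(dim=1):                  # actions: (B, T, act_dim)
            z = self.dynamics(a, z)                      # advance the latent one step
            preds.append(self.decoder(z))                # imagined future features
        return torch.stack(preds, dim=1)                 # (B, T, obs_dim)

# A planner could score candidate action sequences by how safe or plausible the
# imagined futures look, e.g. via a learned reward head (not shown).
```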
Dynamic maps (DM) serve as the fundamental information infrastructure for vehicle-road-cloud (VRC) cooperative autonomous driving in China and Japan. By providing comprehensive traffic scene represent...
Imitation learning (IL) is widely used for motion planning in autonomous driving due to its data efficiency and access to real-world driving data. For safe and robust real-world driving, IL-based plan...
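For context on the IL-based planners discussed above, the snippet below sketches the standard behaviour-cloning setup: regress the expert's future waypoints from scene features with an L1 loss. It is a generic illustration; `BCPlanner`, the feature dimension, and the horizon are assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class BCPlanner(nn.Module):
    """Behaviour-cloning planner: map scene features to a future ego trajectory."""
    def __init__(self, feat_dim=256, horizon=8):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, horizon * 2),   # (x, y) waypoint per future step
        )
        self.horizon = horizon

    def forward(self, scene_feat):
        return self.head(scene_feat).view(-1, self.horizon, 2)

def bc_loss(planner, scene_feat, expert_traj):
    """Imitation (behaviour-cloning) loss: L1 distance to the expert's waypoints."""
    pred = planner(scene_feat)
    return nn.functional.l1_loss(pred, expert_traj)
```

The appeal is exactly what the abstract states: the objective needs only logged expert trajectories, so any real-world driving data can be used directly as supervision.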
Vision Language Models (VLMs) bridge visual perception and linguistic reasoning. In Autonomous Driving (AD), this synergy has enabled Vision Language Action (VLA) models, which translate high-level mu...
Recent advances in Vision-Language-Action (VLA) models have shown promising capabilities in autonomous driving by leveraging the understanding and reasoning strengths of Large Language Models (LLMs). Ho...
End-to-end autonomous driving policies based on Imitation Learning (IL) often struggle in closed-loop execution due to the misalignment between inadequate open-loop training objectives and real drivin...
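The open-loop/closed-loop mismatch mentioned above can be made concrete with a toy sketch: the open-loop objective only scores the policy on states the expert visited, while closed-loop execution feeds the policy the states produced by its own past actions, so small errors compound. The helper functions below are hypothetical and purely illustrative.

```python
import numpy as np

def open_loop_error(policy, expert_states, expert_actions):
    """Open-loop objective: score the policy only on states the expert visited."""
    preds = np.array([policy(s) for s in expert_states])
    return np.mean(np.abs(preds - expert_actions))

def closed_loop_rollout(policy, dynamics, s0, steps):
    """Closed-loop execution: the policy acts on states produced by its own past
    actions, so small open-loop errors can compound into large trajectory drift."""
    s, traj = s0, [s0]
    for _ in range(steps):
        s = dynamics(s, policy(s))
        traj.append(s)
    return traj
```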
High infraction rates remain the primary bottleneck for end-to-end (E2E) autonomous driving, as evidenced by the low driving scores on the CARLA Leaderboard. Despite collision-related infractions bein...
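To see why infractions dominate the outcome, recall that the CARLA Leaderboard driving score multiplies route completion by an infraction penalty that shrinks with every violation. The sketch below follows that structure; the penalty coefficients should be treated as illustrative rather than authoritative.

```python
def driving_score(route_completion: float, infractions: dict) -> float:
    """CARLA-Leaderboard-style driving score: route completion scaled by a
    multiplicative infraction penalty (coefficients here are illustrative)."""
    penalty_per_event = {
        "collision_pedestrian": 0.50,
        "collision_vehicle": 0.60,
        "collision_static": 0.65,
        "red_light": 0.70,
        "stop_sign": 0.80,
    }
    penalty = 1.0
    for kind, count in infractions.items():
        penalty *= penalty_per_event.get(kind, 1.0) ** count
    return route_completion * penalty

# Even a single collision cuts the score by roughly 40-50%, which is why the
# infraction rate, rather than route completion, usually dominates the ranking.
print(driving_score(0.9, {"collision_vehicle": 1}))  # 0.54
```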