Recent advances in autonomous driving perception focus on improving the robustness and accuracy of 3D object detection and lane segmentation, particularly in challenging environments. New frameworks integrate dynamic calibration to address the unique challenges posed by articulated vehicles, improving perception under rapid motion and occlusion. Explicit 3D geometric understanding is increasingly incorporated into Bird's-Eye-View (BEV) representations, enabling more precise feature extraction and better performance across downstream tasks. Mixed camera configurations, including fisheye lenses, are also being explored to extend full-view perception beyond the limitations of traditional pinhole camera systems. In addition, new metrics for evaluating perception errors are emerging, enabling a more nuanced understanding of safety-critical failures in real-time driving scenarios. Together, these developments aim to refine autonomous systems, paving the way for safer and more reliable deployment in commercial applications.
Autonomous trucking poses unique challenges due to articulated tractor-trailer geometry and to time-varying sensor poses caused by the fifth-wheel joint and trailer flex. Existing perception and calibra...
Accurate shape and trajectory estimation of dynamic objects is essential for reliable automated driving. Classical Bayesian extended-object models offer theoretical robustness and efficiency but depen...
Road construction sites create major challenges for both autonomous vehicles and human drivers due to their highly dynamic and heterogeneous nature. This paper presents a real-time system that detects...
Bird's-Eye-View (BEV) perception serves as a cornerstone for autonomous driving, offering a unified spatial representation that fuses surrounding-view images to enable reasoning for various downstream...
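The core geometric step behind BEV perception is relating ground-plane grid cells to pixels in the surrounding cameras. A minimal sketch of that projection, assuming a standard pinhole model with hypothetical intrinsics `K` and extrinsics `T_cam_from_ego` (none of these specifics come from the abstract):

```python
import numpy as np

def project_bev_to_image(xy_ground, K, T_cam_from_ego):
    """Project ego-frame ground-plane BEV points (z = 0) into pixel coords.

    xy_ground: (N, 2) ego-frame coordinates in metres.
    K: (3, 3) camera intrinsics; T_cam_from_ego: (4, 4) extrinsics.
    Returns (N, 2) pixel coordinates and a mask of points in front of the camera.
    """
    n = xy_ground.shape[0]
    # Homogeneous ego-frame points on the ground plane (z = 0).
    pts = np.concatenate([xy_ground, np.zeros((n, 1)), np.ones((n, 1))], axis=1)
    cam = (T_cam_from_ego @ pts.T)[:3]      # ego frame -> camera frame
    valid = cam[2] > 1e-6                   # keep points in front of the camera
    pix = K @ cam
    pix = pix[:2] / np.where(valid, pix[2], 1.0)  # perspective divide
    return pix.T, valid
```

BEV methods typically use this correspondence in reverse (sampling image features for each BEV cell), but the underlying camera geometry is the same.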
Accurate 3D lane segment detection and topology reasoning are critical for structured online map construction in autonomous driving. Recent transformer-based approaches formulate this task as query-ba...
In autonomous driving, relying solely on frame-based cameras can lead to inaccuracies caused by factors like long exposure times, high-speed motion, and challenging lighting conditions. To address the...
Modern autonomous driving systems increasingly rely on mixed camera configurations with pinhole and fisheye cameras for full-view perception. However, Bird's-Eye View (BEV) 3D object detection models ...
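Why mixed configurations are hard to unify comes down to the projection models themselves. A sketch of the standard contrast (textbook formulas, not the paper's method): a pinhole camera maps incidence angle θ to image radius r = f·tan(θ), which diverges near 90°, while an equidistant fisheye model uses r = f·θ and remains finite over a wide field of view:

```python
import math

def pinhole_radius(theta_rad, f):
    """Pinhole projection: r = f * tan(theta); blows up as theta -> 90 deg."""
    return f * math.tan(theta_rad)

def equidistant_radius(theta_rad, f):
    """Equidistant fisheye projection: r = f * theta; finite across a wide FOV."""
    return f * theta_rad

# At 80 degrees off-axis the two models disagree drastically,
# which is why a single BEV detector cannot treat them interchangeably.
theta = math.radians(80)
print(pinhole_radius(theta, 1.0), equidistant_radius(theta, 1.0))
```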
3D semantic occupancy prediction is central to autonomous driving, yet current methods are vulnerable to long-tailed class bias and out-of-distribution (OOD) inputs, often overconfidently assigning an...
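The overconfidence problem mentioned here is usually quantified with simple scores over the classifier's softmax output. A minimal sketch using two common baselines, maximum softmax probability and predictive entropy (generic illustrations, not the abstract's proposed method):

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def msp_score(logits):
    """Maximum softmax probability: high even for OOD inputs if logits are peaked."""
    return float(softmax(logits).max())

def predictive_entropy(logits):
    """Entropy of the softmax: low = confident, high = uncertain."""
    p = softmax(logits)
    return float(-(p * np.log(p + 1e-12)).sum())

# A peaked logit vector looks "confident" regardless of whether the
# input was in-distribution -- exactly the failure mode described above.
peaked = np.array([10.0, 0.0, 0.0])
flat = np.array([1.0, 1.0, 1.0])
```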
Criticality metrics such as time-to-collision (TTC) quantify collision urgency but conflate the consequences of false-positive (FP) and false-negative (FN) perception errors. We propose two novel effo...
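The conflation the abstract describes is easy to see from the standard closing-speed definition of TTC, sketched below (the function name is illustrative, not from the paper):

```python
def time_to_collision(gap_m, ego_speed_mps, lead_speed_mps):
    """TTC = longitudinal gap / closing speed; infinite when not closing."""
    closing = ego_speed_mps - lead_speed_mps
    if closing <= 0.0:
        return float("inf")
    return gap_m / closing

# A false negative (missed lead vehicle) yields no TTC at all for the planner,
# while a false positive produces a low TTC for a phantom object --
# the metric itself cannot distinguish the two error types.
print(time_to_collision(30.0, 25.0, 15.0))  # 3.0 s
```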