Recent advancements in robotics control are increasingly focused on enhancing efficiency and responsiveness in real-time applications. Techniques like the One-Step Flow Policy and ProbeFlow are addressing the latency issues associated with traditional action generation methods, enabling robots to execute high-precision tasks with significantly reduced inference times. Meanwhile, frameworks inspired by biological systems, such as Neuromorphic Vision-Language-Action, are introducing adaptive control mechanisms that mimic human reflexes, enhancing stability and energy efficiency. The integration of reinforcement learning with model predictive control is proving effective for locomotion tasks, allowing robots to adapt to complex environments without extensive prior training. Additionally, demonstration-free approaches using large language models are streamlining the manipulation process, enabling robots to autonomously explore and learn from their environments. Collectively, these developments are paving the way for more agile, intelligent robotic systems capable of tackling a broader range of commercial challenges, from logistics to personal assistance.
We propose a contact-explicit hierarchical architecture coupling Reinforcement Learning (RL) and Model Predictive Control (MPC), where a high-level RL agent provides gait and navigation commands to a ...
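The RL-over-MPC coupling can be illustrated with a toy scalar example: a stand-in for the learned high-level agent issues a velocity command toward a goal, and a low-level finite-horizon LQR (the unconstrained special case of MPC) tracks it by replanning every step. The dynamics, costs, gains, and the high-level rule here are illustrative assumptions, not the paper's architecture.

```python
def mpc_control(v_err, dt=0.05, q=1.0, r=0.01, horizon=20):
    # Finite-horizon LQR for the scalar velocity-error dynamics
    # e_{k+1} = e_k + dt * a_k with stage cost q*e^2 + r*a^2.
    # Backward Riccati recursion, then return the first action
    # (replanned every step, MPC-style).
    P = q
    for _ in range(horizon):
        P = q + P - (dt * P) ** 2 / (r + dt * dt * P)
    K = dt * P / (r + dt * dt * P)
    return -K * v_err

def high_level_command(p, p_goal, v_max=1.0):
    # Stand-in for the high-level RL agent: command a velocity toward
    # the goal, saturated at v_max. (A real agent would also choose
    # gait parameters and navigation waypoints.)
    err = p_goal - p
    return max(-v_max, min(v_max, 2.0 * err))

# Closed loop: high level replans the command, low level tracks it.
p, v, dt = 0.0, 0.0, 0.05
for _ in range(400):
    v_cmd = high_level_command(p, p_goal=3.0)
    a = mpc_control(v - v_cmd)
    v += dt * a
    p += dt * v
```

The division of labor mirrors the abstract's hierarchy: the slow outer policy reasons about *where* to go, while the fast inner optimizer handles *how* to move there under the plant dynamics.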
Recent Vision-Language-Action (VLA) models equipped with Flow Matching (FM) action heads achieve state-of-the-art performance in complex robot manipulation. However, the multi-step iterative ODE solving...
Generative flow and diffusion models provide the continuous, multimodal action distributions needed for high-precision robotic policies. However, their reliance on iterative sampling introduces severe...
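The latency issue can be made concrete with a toy flow-matching sampler. The analytic velocity field below is an illustrative stand-in for the learned network; it is the conditional field of a straight (optimal-transport) path, so Euler integration is exact at any step count — the geometric intuition one-step flow policies exploit.

```python
def velocity(x, t, x1):
    # Conditional flow-matching velocity field for a straight path
    # from noise x0 to action x1: v_t(x) = (x1 - x) / (1 - t).
    # In a real policy this is a neural network conditioned on
    # observations, and each call is a full forward pass.
    return (x1 - x) / (1.0 - t)

def sample(x0, x1, n_steps):
    """Euler-integrate dx/dt = v_t(x) from t=0 to t=1 to 'decode' an action."""
    x, dt = x0, 1.0 / n_steps
    for k in range(n_steps):
        t = k * dt  # t <= 1 - dt, so (1 - t) never vanishes
        x += dt * velocity(x, t, x1)
    return x
```

Because a learned field is only approximately straight, standard samplers need several network evaluations per action chunk; collapsing them to one evaluation is exactly the inference-time saving these papers target.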
Recent advances in embodied intelligence have leveraged massive scaling of data and model parameters to master natural-language command following and multi-task control. In contrast, biological system...
We propose a fully data-driven, Koopman-based framework for statistically robust control of discrete-time nonlinear systems with linear embeddings. Establishing a connection between the Koopman operator...
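The data-driven linear-embedding idea can be sketched with Extended Dynamic Mode Decomposition (EDMD): lift states through a dictionary of observables and fit a linear operator K by least squares so that lift(x_next) ≈ K·lift(x). The toy system and dictionary below are chosen so the Koopman dynamics are exactly finite-dimensional; they are illustrative, not the paper's method.

```python
import random

LAM, MU = 0.9, 0.5

def step(x1, x2):
    # Toy nonlinear system (a classic Koopman benchmark form):
    # x1+ = LAM*x1,  x2+ = MU*x2 + (LAM^2 - MU)*x1^2
    return LAM * x1, MU * x2 + (LAM**2 - MU) * x1**2

def lift(x1, x2):
    # Observable dictionary in which the dynamics are exactly linear.
    return [x1, x2, x1**2]

def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting (small systems).
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * m for a, m in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def edmd(snapshots):
    # Fit K row by row via the normal equations:
    # (Psi^T Psi) K_row^T = Psi^T Psi_plus[:, row]
    Psi = [lift(*x) for x, _ in snapshots]
    Psip = [lift(*y) for _, y in snapshots]
    d = len(Psi[0])
    G = [[sum(p[i] * p[j] for p in Psi) for j in range(d)] for i in range(d)]
    K = []
    for row in range(d):
        b = [sum(Psi[m][i] * Psip[m][row] for m in range(len(Psi)))
             for i in range(d)]
        K.append(solve(G, b))
    return K  # K[i][j]: weight of observable j when predicting observable i

random.seed(0)
data = []
for _ in range(200):
    x = (random.uniform(-1, 1), random.uniform(-1, 1))
    data.append((x, step(*x)))
K = edmd(data)
```

With an exact embedding, EDMD recovers the true linear operator from data alone; in the statistically robust setting of the abstract, the same regression would additionally carry uncertainty bounds on K.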
Despite recent advances in control, reinforcement learning, and imitation learning, developing a unified framework that can achieve agile, precise, and robust whole-body behaviors, particularly in long...
Humanoid robots require diverse motor skills to integrate into complex environments, but bridging the kinematic and dynamic embodiment gap from human data remains a major bottleneck. We demonstrate th...
Humanoid robots often need to balance competing objectives, such as maximizing speed while minimizing energy consumption. While current reinforcement learning (RL) methods can master complex skills li...
Vision-Language-Action (VLA) models aim to control robots for manipulation from visual observations and natural-language instructions. However, existing hierarchical and autoregressive paradigms often...
Humanoid robots have the promise of locomoting like humans, including fast and dynamic running. Recently, reinforcement learning (RL) controllers that can mimic human motions have become popular as th...