State of Robotic Manipulation | Report | ScienceToStartup
State of Robotic Manipulation
41 papers · avg viability 6.8
Top papers
DreamPlan: Efficient Reinforcement Fine-Tuning of Vision-Language Planners via Video World Models (8.0)
TiPToP: A Modular Open-Vocabulary Planning System for Robotic Manipulation (8.0)
End-to-End Dexterous Grasp Learning from Single-View Point Clouds via a Multi-Object Scene Dataset (8.0)
Characterization, Analytical Planning, and Hybrid Force Control for the Inspire RH56DFX Hand (8.0)
FG-CLTP: Fine-Grained Contrastive Language Tactile Pretraining for Robotic Manipulation (8.0)
ForceVLA2: Unleashing Hybrid Force-Position Control with Force Awareness for Contact-Rich Manipulation (8.0)
RoCo Challenge at AAAI 2026: Benchmarking Robotic Collaborative Manipulation for Assembly Towards Industrial Automation (8.0)
From Passive Observer to Active Critic: Reinforcement Learning Elicits Process Reasoning for Robotic Manipulation (8.0)
COAD: Constant-Time Planning for Continuous Goal Manipulation with Compressed Library and Online Adaptation (8.0)
VolumeDP: Modeling Volumetric Representation for Manipulation Policy Learning (8.0)
NovaPlan: Zero-Shot Long-Horizon Manipulation via Closed-Loop Video Language Planning (8.0)
MoE-ACT: Scaling Multi-Task Bimanual Manipulation with Sparse Language-Conditioned Mixture-of-Experts Transformers (8.0)
AnchorVLA4D: An Anchor-Based Spatial-Temporal Vision-Language-Action Model for Robotic Manipulation (8.0)
DexHiL: A Human-in-the-Loop Framework for Vision-Language-Action Model Post-Training in Dexterous Manipulation (8.0)
Stein Variational Ergodic Surface Coverage with SE(3) Constraints (7.0)
AnoleVLA: Lightweight Vision-Language-Action Model with Deep State Space Models for Mobile Manipulation (7.0)
Ada3Drift: Adaptive Training-Time Drifting for One-Step 3D Visuomotor Robotic Manipulation (7.0)
Beyond Dense Futures: World Models as Structured Planners for Robotic Manipulation (7.0)
Beyond Short-Horizon: VQ-Memory for Robust Long-Horizon Manipulation in Non-Markovian Simulation Benchmarks (7.0)
Concurrent Prehensile and Nonprehensile Manipulation: A Practical Approach to Multi-Stage Dexterous Tasks (7.0)
Coordinated Manipulation of Hybrid Deformable-Rigid Objects in Constrained Environments (7.0)
Enabling Dynamic Tracking in Vision-Language-Action Models via Time-Discrete and Time-Continuous Velocity Feedforward (7.0)
FAR-Dex: Few-shot Data Augmentation and Adaptive Residual Policy Refinement for Dexterous Manipulation (7.0)
Language-Grounded Decoupled Action Representation for Robotic Manipulation (7.0)
Large Reward Models: Generalizable Online Robot Reward Generation with Vision-Language Models (7.0)
Learning Bimanual Cloth Manipulation with Vision-based Tactile Sensing via Single Robotic Arm (7.0)
MALLVI: a multi agent framework for integrated generalized robotics manipulation (7.0)
Master Micro Residual Correction with Adaptive Tactile Fusion and Force-Mixed Control for Contact-Rich Manipulation (7.0)
NS-VLA: Towards Neuro-Symbolic Vision-Language-Action Models (7.0)
PPGuide: Steering Diffusion Policies with Performance Predictive Guidance (7.0)
See, Plan, Rewind: Progress-Aware Vision-Language-Action Models for Robust Robotic Manipulation (7.0)
TacLoc: Global Tactile Localization on Objects from a Registration Perspective (7.0)
TacVLA: Contact-Aware Tactile Fusion for Robust Vision-Language-Action Manipulation (7.0)
Vision-Based Hand Shadowing for Robotic Manipulation via Inverse Kinematics (7.0)
From Flow to One Step: Real-Time Multi-Modal Trajectory Policies via Implicit Maximum Likelihood Estimation-based Distribution Distillation (6.0)
Push, Press, Slide: Mode-Aware Planar Contact Manipulation via Reduced-Order Models (4.0)
Robotic Scene Cloning: Advancing Zero-Shot Robotic Scene Adaptation in Manipulation via Visual Prompt Editing (4.0)
Altered Thoughts, Altered Actions: Probing Chain-of-Thought Vulnerabilities in VLA Robotic Manipulation (3.0)
HapticVLA: Contact-Rich Manipulation via Vision-Language-Action Model without Inference-Time Tactile Sensing (3.0)
Learning Adaptive Force Control for Contact-Rich Sample Scraping with Heterogeneous Materials (3.0)
Confusion-Aware In-Context-Learning for Vision-Language Models in Robotic Manipulation (3.0)