Recent advances in human-robot interaction focus on personalization and adaptability, particularly in multi-user environments. New frameworks leverage large language models to create more engaging, contextually aware interactions, as seen in systems that maintain long-term user profiles and dynamically adjust responses. Intuitive communication is another emerging focus, for example gloss-free sign language frameworks that map gestures directly to commands, reducing reliance on complex annotations. Researchers are also quantifying the influence of robot actions on human behavior with statistical methods, identifying behaviors that improve collaborative systems, and refining gaze-based intent recognition to support users with limited motor capabilities. Together, these developments reflect a concerted effort to make robots more responsive and reliable, addressing needs in sectors such as healthcare and assistive technology, where effective communication and trust are paramount.
Existing human-robot interaction systems often lack mechanisms for sustained personalization and dynamic adaptation in multi-user environments, limiting their effectiveness in real-world deployments. ...
Long-range Human-Robot Interaction (HRI) remains underexplored. Within it, Command Source Identification (CSI) - determining who issued a command - is especially challenging due to multi-user and dist...
Gaze is a valuable means of communication for impaired people with extremely limited motor capabilities. However, robust gaze-based intent recognition in multi-object environments is challenging due t...
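A common baseline for gaze-based intent recognition in multi-object scenes is dwell-time scoring. The sketch below is a minimal illustration of that idea, not the abstract's method; the object names, coordinates, and radius threshold are all hypothetical. Each gaze sample is credited to the nearest object within a fixed radius, and the intended object is the one with the most accumulated samples:

```python
import math

def infer_target(gaze_points, objects, radius=0.05):
    """Dwell-time intent scoring: credit each gaze sample (x, y) to the
    nearest object center within `radius`; the inferred target is the
    object with the most accumulated samples. Returns (name, count),
    or None if no sample fell near any object."""
    dwell = {name: 0 for name in objects}
    for gx, gy in gaze_points:
        best, best_d = None, radius
        for name, (ox, oy) in objects.items():
            d = math.hypot(gx - ox, gy - oy)
            if d <= best_d:
                best, best_d = name, d
        if best is not None:
            dwell[best] += 1
    name = max(dwell, key=dwell.get)
    return (name, dwell[name]) if dwell[name] > 0 else None
```

Robust systems would add temporal smoothing and reject ambiguous cases where two objects receive similar dwell, which is exactly where multi-object environments become challenging.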
In human-robot collaboration, a robot's expression of hesitancy is a critical factor that shapes human coordination strategies, attention allocation, and safety-related judgments. However, designing h...
In human-robot interaction (HRI), detecting a human's gaze helps robots interpret user attention and intent. However, most gaze detection approaches rely on specialized eye-tracking hardware, limiting...
Human-robot interaction combines robotics, cognitive science, and human factors to study collaborative systems. This paper introduces a method for identifying influential robot actions using transfer ...
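The transfer-entropy idea behind such influence analysis can be sketched compactly. The function below is a minimal plug-in estimator for discrete time series with history length 1, not the paper's implementation; here `source` might encode a binary robot-action signal and `target` a binary human-behavior signal, and a positive value indicates the robot signal helps predict the human signal beyond its own past:

```python
from collections import Counter
import math

def transfer_entropy(source, target):
    """Plug-in transfer entropy (bits) from `source` to `target`,
    estimated from joint frequencies with history length 1:
    TE = sum p(y',y,x) * log2[ p(y'|y,x) / p(y'|y) ]."""
    n = len(target) - 1
    triples = Counter()   # (y_next, y_prev, x_prev)
    pairs_yx = Counter()  # (y_prev, x_prev)
    pairs_yy = Counter()  # (y_next, y_prev)
    singles = Counter()   # y_prev
    for t in range(n):
        y_next, y_prev, x_prev = target[t + 1], target[t], source[t]
        triples[(y_next, y_prev, x_prev)] += 1
        pairs_yx[(y_prev, x_prev)] += 1
        pairs_yy[(y_next, y_prev)] += 1
        singles[y_prev] += 1
    te = 0.0
    for (y_next, y_prev, x_prev), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y_prev, x_prev)]
        p_cond_hist = pairs_yy[(y_next, y_prev)] / singles[y_prev]
        te += p_joint * math.log2(p_cond_full / p_cond_hist)
    return te
```

If the human signal simply copies the robot signal with a one-step lag, the estimate approaches 1 bit in the forward direction and stays near 0 in the reverse, which is the asymmetry that lets transfer entropy flag influential actions.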
In this paper, we present a novel probabilistic safe control framework for human-robot interaction that combines control barrier functions (CBFs) with conformal risk control to provide formal safety g...
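The core CBF mechanism can be illustrated on a scalar toy system. The sketch below is a closed-form 1-D safety filter only; it omits the probabilistic and conformal-risk components the paper adds, and the dynamics, bound, and gain are illustrative assumptions:

```python
def cbf_safe_input(x, u_nom, x_max=1.0, alpha=2.0):
    """Closed-form CBF filter for a 1-D single integrator x' = u.
    Barrier h(x) = x_max - x; forward invariance requires
    h' >= -alpha * h, i.e. u <= alpha * (x_max - x), so the
    nominal input is clamped to that bound."""
    return min(u_nom, alpha * (x_max - x))
```

In higher dimensions this clamp becomes a quadratic program that finds the input closest to the nominal one subject to the same linear CBF constraint; in the scalar case the QP solution is exactly the `min` above. Simulating with a constant nominal input drives the state toward, but never past, the boundary `x_max`.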
We present, to our knowledge, the first sign language-driven Vision-Language-Action (VLA) framework for intuitive and inclusive human-robot interaction. Unlike conventional approaches that rely on glo...
This paper presents a comprehensive pipeline for recognizing objects targeted by human pointing gestures using RGB images. As human-robot interaction moves toward more intuitive interfaces, the abilit...
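A standard geometric baseline for pointing-target recognition is ray casting. The sketch below is a simplification, not the paper's RGB pipeline: it assumes 3-D keypoints (e.g. wrist and fingertip) are already available, extends a ray through them, and selects the object with the smallest perpendicular distance to that ray:

```python
import math

def point_target(origin, tip, objects, max_dist=0.3):
    """Pick the object closest to the pointing ray that starts at
    `origin` (e.g. wrist) and passes through `tip` (fingertip).
    Points are (x, y, z); objects behind the hand are ignored."""
    dx = [t - o for t, o in zip(tip, origin)]
    norm = math.sqrt(sum(c * c for c in dx))
    d = [c / norm for c in dx]  # unit pointing direction
    best, best_off = None, max_dist
    for name, p in objects.items():
        v = [pc - oc for pc, oc in zip(p, origin)]
        along = sum(vc * dc for vc, dc in zip(v, d))  # projection onto ray
        if along <= 0:  # object is behind the hand
            continue
        off2 = sum(vc * vc for vc in v) - along * along
        off = math.sqrt(max(off2, 0.0))  # perpendicular distance to ray
        if off < best_off:
            best, best_off = name, off
    return best
```

A pure RGB pipeline would first have to recover these keypoints and object positions from images, which is where most of the difficulty lies; the geometry itself stays this simple.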
We introduce MERGE, a system for situational grounding of actors, objects, and events in dynamic human-robot group interactions. Effective collaboration in such settings requires consistent situationa...