Recent research in human-AI interaction highlights the nuanced dynamics of collaboration and trust, offering insight into how users engage with AI systems across contexts. Studies document the prevalence of invisible failures, where AI errors go unnoticed by users, pointing to a need for better monitoring and transparency in AI design. Meanwhile, advances in non-verbal communication frameworks are enabling more natural human-AI interaction, particularly in robotic settings, and work on multi-AI advice systems shows that panel size and consensus can significantly affect decision-making accuracy. Research on privacy in human-AI romantic relationships underscores the complexity of agency and boundary negotiation, while studies of disempowerment patterns reveal that AI interactions can distort user perceptions and values. Collectively, this body of work signals a shift toward understanding the interplay of trust, agency, and user experience in human-AI collaboration, with implications for product development and the ethics of AI deployment.
AI systems fail silently far more often than they fail visibly. In a large-scale quantitative analysis of human-AI interactions from the WildChat dataset, we find that 78% of AI failures are invisible...
We study the ongoing debate regarding the statistical fidelity of AI-generated data compared to human-generated data in the context of non-verbal communication using full body motion. Concretely, we a...
Just as people improve decision-making by consulting diverse human advisors, they can now also consult with multiple AI systems. Prior work on group decision-making shows that advice aggregation creat...
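To illustrate why panel size matters when aggregating advice, a minimal sketch of the classic majority-vote argument (a Condorcet-jury-theorem style calculation, not the method of any specific paper above): assuming each of `n` independent advisors answers a binary question correctly with probability `p`, the hypothetical helper `majority_accuracy` computes the probability that the majority is correct.

```python
from math import comb

def majority_accuracy(p: float, n: int) -> float:
    """Probability that a majority of n independent advisors, each correct
    with probability p, gives the right answer (n assumed odd)."""
    return sum(
        comb(n, k) * p**k * (1 - p) ** (n - k)
        for k in range(n // 2 + 1, n + 1)
    )

# When individual advisors beat chance, accuracy rises with panel size:
for n in (1, 3, 5, 9):
    print(n, round(majority_accuracy(0.7, n), 3))
```

This is only a sketch under a strong independence assumption; correlated advisors (e.g. AI systems trained on similar data) weaken the benefit, which is one reason consensus among AI panels can mislead rather than inform.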
Generative AI (GenAI) has rapidly entered education, yet its user experience is often explained through adoption-oriented constructs such as usefulness, ease of use, and engagement. We argue that thes...
AI-based tools that mediate, enhance or generate parts of video communication may interfere with how people evaluate trustworthiness and credibility. In two preregistered online experiments (N = 2,000...
For generative AI agents to partner effectively with human users, accurately predicting human intent is critical. But this collaborative ability remains limited by a key deficit: an ...
There is no 'ordinary' when it comes to AI. The human-AI experience is extraordinarily complex and specific to each person, yet dominant measures such as usability scales and engagement metrics flatte...
An increasing number of LLM-based applications are being developed to facilitate romantic relationships with AI partners, yet the safety and privacy risks in these partnerships remain largely underexp...
Although AI assistants are now deeply embedded in society, there has been limited empirical study of how their usage affects human empowerment. We present the first large-scale empirical analysis of d...
LLMs are increasingly supporting decision-making across high-stakes domains, requiring critical reflection on the socio-technical factors that shape how humans and LLMs are assigned roles and interact...