Recent advancements in conversational AI increasingly focus on user safety, satisfaction, and engagement across diverse contexts. Frameworks like SafeCRS address the critical need for personalized safety alignment in conversational recommender systems, significantly reducing safety violations while maintaining recommendation quality. Meanwhile, BoRP introduces a scalable method for evaluating user satisfaction, improving the accuracy of the feedback mechanisms essential for iterative development. Tools such as Lexara streamline the evaluation of large language models in conversational visual analytics, making such evaluation accessible to developers without programming expertise. Additionally, research into cognitive biases reveals that LLMs can emulate human decision-making patterns, providing insights for designing adaptive conversational agents. Systems like GCAgent enhance group chat dynamics by integrating dialogue agents that boost engagement. Collectively, these efforts reflect a shift toward more responsible, user-centered conversational AI that prioritizes safety, interpretability, and effective interaction across various applications.
Current LLM-based conversational recommender systems (CRS) primarily optimize recommendation accuracy and user satisfaction. We identify an underexplored vulnerability in which recommendation outputs ...
We examine whether large language models (LLMs) can predict biased decision-making in conversational settings, and whether their predictions capture not only human cognitive biases but also how those ...
Conversational agents powered by large language models (LLMs) with tool integration achieve strong performance on fixed task-oriented dialogue datasets but remain vulnerable to unanticipated, user-ind...
Recent advances in Large Language Models (LLMs) have enabled conversational AI agents to engage in extended multi-turn interactions spanning weeks or months. However, existing memory systems struggle ...
Recent digitisation efforts in natural history museums have produced large volumes of collection data, yet their scale and scientific complexity often hinder public access and understanding. Conventio...
Large Language Models (LLMs) are transforming Conversational Visual Analytics (CVA) by enabling data analysis through natural language. However, evaluating LLMs for CVA remains a challenge: requiring ...
Accurate evaluation of user satisfaction is critical for iterative development of conversational AI. However, for open-ended assistants, traditional A/B testing lacks reliable metrics: explicit feedba...
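To make the sparse-feedback problem concrete, here is a minimal generic sketch (not BoRP's method) of a standard two-proportion z-test on explicit satisfaction ratings. All counts are hypothetical; the point is that the same satisfaction gap between two A/B arms is statistically invisible when only a small fraction of sessions receive explicit feedback.

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-proportion z-test statistic for an A/B comparison of rating rates."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical scenario: 10,000 sessions per arm, but only ~1% of users
# leave an explicit thumbs-up/down, so the effective sample is tiny.
z_sparse = two_proportion_z(55, 100, 45, 100)          # 100 ratings per arm
z_dense  = two_proportion_z(5500, 10000, 4500, 10000)  # every session rated

print(round(z_sparse, 2), round(z_dense, 2))  # → 1.41 14.14
```

The identical 10-point gap in satisfaction rate clears the conventional 1.96 significance threshold only in the dense case, which illustrates why explicit feedback alone makes traditional A/B testing unreliable for open-ended assistants.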
Human conversation is organized by an implicit chain of thoughts that manifests as timed speech acts. Capturing this perceptual pathway is key to building natural full-duplex interactive systems. We i...
Cognitive biases often shape human decisions. While large language models (LLMs) have been shown to reproduce well-known biases, a more critical question is whether LLMs can predict biases at the indi...
As a key mode of interaction on online social platforms, group chat is a popular space for exchanging interests and solving problems, but its effectiveness is often hindered by inactivity and management challenges. While...