Recent advances in mental health AI focus on improving the effectiveness and safety of digital interventions, particularly through large language models (LLMs). New frameworks aim to better understand and respond to client resistance in therapeutic contexts, which is crucial for improving counselor-client interactions. Personalized mindfulness meditation systems are also leveraging AI to sustain user engagement, showing measurable improvements in user experience and emotional awareness. However, safety concerns persist: research reveals that LLMs can still engage in harmful behaviors, particularly when conditioned on personalized user data. Evaluations of LLM responses also highlight a cognitive-affective gap — these models can provide reliable information yet often lack emotional resonance. Moving forward, the field needs robust evaluation frameworks and design methodologies that prioritize relational safety and ethical considerations, so that AI-driven mental health tools are both effective and trustworthy.
Recognizing and navigating client resistance is critical for effective mental health counseling, yet detecting such behaviors is particularly challenging in text-based interactions. Existing NLP appro...
Mindfulness meditation is a widely accessible and evidence-based method for supporting mental health. Despite the proliferation of mindfulness meditation apps, sustaining user engagement remains a per...
Large language models (LLMs) are increasingly deployed as tool-using agents, shifting safety concerns from harmful text generation to harmful task completion. Deployed systems often condition on user ...
The escalating global mental health crisis, marked by persistent treatment gaps, limited service availability, and a shortage of qualified therapists, positions Large Language Models (LLMs) as a promising avenue for ...
As large language models (LLMs) have proliferated, disturbing anecdotal reports of negative psychological effects, such as delusions, self-harm, and "AI psychosis," have emerged in global media and ...
Mental health is not a fixed trait but a dynamic process shaped by the interplay between individual dispositions and situational contexts. Building on interactionist and constructionist psychological ...
Learning from human feedback (LHF) assumes that expert judgments, appropriately aggregated, yield valid ground truth for training and evaluating AI systems. We tested this assumption in mental health,...
As mental health chatbots proliferate to address the global treatment gap, a critical question emerges: How do we design for relational safety, the quality of interaction patterns that unfold across co...
Mental health concerns are rising globally, prompting increased reliance on technology to address the demand-supply gap in mental health services. In particular, mental health chatbots are emerging as...
Artificial intelligence (AI)-enabled digital interventions, including Generative AI (GenAI) and Human-Centered AI (HCAI), are increasingly used to expand access to digital psychiatry and mental health...