Human-AI Collaboration Enables Empathic Conversations in Text-Based Mental Health Support


Access to mental health care is a global challenge that online peer support platforms can help mitigate. Millions of people seek and provide support online, but they are often untrained in, and unaware of, key supportive skills and strategies such as empathy.

The Behavioral Data Science Group at the Allen School, in collaboration with clinical psychologists from the UW and Stanford Medical Schools, is developing computational methods that help peer supporters express empathy more effectively in conversations. We are building AI tools that identify and improve empathy in conversations and give users intelligent, actionable, real-time feedback.

Publications

Human-AI Collaboration Enables More Empathic Conversations in Text-based Peer-to-Peer Mental Health Support

We develop HAILEY, an AI-in-the-loop agent that provides just-in-time feedback to help participants who provide support (peer supporters) respond more empathically to those seeking help (support seekers). We evaluate HAILEY in a randomized controlled trial with real-world peer supporters on TalkLife (N = 300) and show that our human-AI collaboration approach leads to a 19.60% increase in conversational empathy between peers overall.

[Paper]
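As a rough illustration of the AI-in-the-loop idea described above (not HAILEY's actual model), just-in-time feedback can be sketched as: a supporter drafts a response, an empathy scorer evaluates it, and a suggestion is surfaced only when the score falls below a threshold. The keyword scorer, threshold, and suggestion text below are all hypothetical stand-ins for learned components.

```python
from typing import Optional


def keyword_empathy_score(response: str) -> float:
    """Toy stand-in for a learned empathy classifier.

    Counts simple empathic markers; a real system would use a trained
    model rather than keyword matching.
    """
    markers = ["understand", "sounds", "feel", "here for you", "?"]
    text = response.lower()
    return min(1.0, sum(m in text for m in markers) / 3)


def just_in_time_feedback(draft: str, threshold: float = 0.5) -> Optional[str]:
    """Return a feedback prompt when the draft scores low, else None."""
    if keyword_empathy_score(draft) < threshold:
        return "Consider acknowledging how the seeker might be feeling."
    return None


# A terse, dismissive draft triggers feedback; an empathic draft does not.
just_in_time_feedback("Just get over it.")
just_in_time_feedback("That sounds really hard. I understand how you feel.")
```

The design point this sketch captures is that feedback is conditional and actionable: the supporter stays in control of the final message, and the system only intervenes when its scorer flags a draft.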


Towards Facilitating Empathic Conversations in Online Mental Health Support: A Reinforcement Learning Approach

Best Paper Award (TheWebConf/WWW 2021)

We introduce empathic rewriting, a new task of transforming low-empathy conversational posts into higher-empathy ones. We propose PARTNER, a deep reinforcement learning agent that learns to make sentence-level edits to conversations, increasing the expressed level of empathy while maintaining conversation quality in terms of specificity, fluency, and diversity.

[Paper] [Code] [Slides] [TheWebConf Talk]
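The objective described above — raise empathy without sacrificing conversation quality — can be sketched as a weighted reward. This is purely illustrative: the weights, the averaging scheme, and the idea of passing in precomputed scores are assumptions for exposition, not PARTNER's learned reward.

```python
def rewrite_reward(empathy_gain: float, fluency: float,
                   specificity: float, diversity: float,
                   w_empathy: float = 1.0, w_quality: float = 0.5) -> float:
    """Toy reward for an empathic-rewriting agent.

    Rewards empathy improvement (empathy_gain) while also rewarding
    conversation quality, summarized here as the mean of fluency,
    specificity, and diversity scores in [0, 1]. The weights are
    illustrative assumptions.
    """
    quality = (fluency + specificity + diversity) / 3
    return w_empathy * empathy_gain + w_quality * quality


# A rewrite that raises empathy and keeps quality high scores better than
# one with the same empathy gain but degraded fluency.
good = rewrite_reward(empathy_gain=0.6, fluency=0.9, specificity=0.8, diversity=0.7)
bad = rewrite_reward(empathy_gain=0.6, fluency=0.2, specificity=0.8, diversity=0.7)
```

Coupling the empathy term to quality terms like this is what keeps a rewriting agent from gaming the objective by inserting generic, disfluent empathic boilerplate.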


A Computational Approach to Understanding Empathy Expressed in Text-Based Mental Health Support

EMNLP 2020

We develop a novel, unifying, theoretically grounded framework for characterizing the communication of empathy in text-based conversations. We collect and share a corpus of 10k (post, response) pairs annotated with this empathy framework, along with supporting evidence for the annotations (rationales). We develop a multi-task RoBERTa-based bi-encoder model that identifies empathy in conversations and extracts the rationales underlying its predictions.

[Paper] [Code and Data] [Slides] [EMNLP Talk]
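To make the annotation scheme concrete, here is a minimal data structure for (post, response) pairs annotated with empathy mechanisms, ordinal levels, and rationale spans. The field names, mechanism labels, and the worked example are illustrative assumptions, not the released corpus format.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class EmpathyAnnotation:
    mechanism: str                 # e.g. "emotional_reactions" (assumed label)
    level: int                     # ordinal empathy level, e.g. 0, 1, or 2
    rationales: List[str] = field(default_factory=list)  # supporting text spans


@dataclass
class AnnotatedPair:
    post: str                      # support seeker's post
    response: str                  # peer supporter's response
    annotations: List[EmpathyAnnotation] = field(default_factory=list)


# Hypothetical example: the rationale spans point at the exact text that
# justifies each mechanism-level label, mirroring the paper's use of
# supporting evidence for annotations.
pair = AnnotatedPair(
    post="I can't stop worrying about my exams.",
    response="That sounds really stressful. What worries you most?",
    annotations=[
        EmpathyAnnotation("emotional_reactions", 2, ["That sounds really stressful."]),
        EmpathyAnnotation("explorations", 2, ["What worries you most?"]),
    ],
)
```

Structuring rationales as spans alongside labels is what lets a model be trained multi-task, as in the paper: one head predicts the level, another extracts the evidence.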

Team


Acknowledgements

We would like to thank TalkLife and Jamie Druitt for their support and for providing us access to a TalkLife dataset. We also thank the members of the UW Behavioral Data Science group, the UW NLP group, Zac E. Imel, and the anonymous reviewers for their feedback on this work. T.A., A.S., and I.L. were supported in part by NSF grant IIS-1901386, NSF grant CNS-2025022, NIH grant R01MH125179, the Bill & Melinda Gates Foundation (INV-004841), the Office of Naval Research (#N00014-21-1-2154), a Microsoft AI for Accessibility grant, a Garvey Institute Innovation grant, an Adobe Data Science Research Award, and the Allen Institute for Artificial Intelligence. A.S.M. was supported by grants from the National Institutes of Health, National Center for Advancing Translational Science, Clinical and Translational Science Award (KL2TR001083 and UL1TR001085) and the Stanford Human-Centered AI Institute. D.C.A. was supported in part by an NIAAA K award (K02AA023814).

Conflict of Interest Disclosure: Dr. Atkins is a co-founder with an equity stake in Lyssn.io, a technology company focused on tools to support training, supervision, and quality assurance of psychotherapy and counseling.