Selected Projects

Here is a selection of research projects I have worked on over the years.

Aligning LLMs with domain expertise in delivering psychotherapy for health intervention

Chatbots or conversational agents (CAs) are increasingly used to improve access to digital psychotherapy. Many current systems rely on rigid, rule-based designs that depend heavily on expert-crafted dialogue scripts to guide therapeutic conversations. Although recent advances in large language models (LLMs) offer the potential for more flexible interactions, their lack of controllability and transparency poses significant challenges in sensitive areas like psychotherapy. In this work, we explored how aligning LLMs with expert-crafted scripts can enhance psychotherapeutic chatbot performance. Our comparative study showed that LLMs aligned with expert-crafted scripts through prompting and fine-tuning significantly outperformed both pure LLMs and rule-based chatbots, achieving a more effective balance between dialogue flexibility and adherence to therapeutic principles. Building on these findings, we proposed "Script-Strategy Aligned Generation (SSAG)", a flexible alignment approach that reduces reliance on fully scripted content while enhancing LLMs' therapeutic adherence and controllability. In a 10-day field study, SSAG demonstrated performance comparable to full script alignment and outperformed rule-based chatbots, empirically supporting SSAG as an efficient approach for aligning LLMs with domain expertise. Our work advances LLM applications in psychotherapy by providing a controllable, adaptable, and scalable solution for digital interventions that reduces reliance on expert effort. It also provides a collaborative framework for domain experts and developers to efficiently build expertise-aligned chatbots, broadening access to psychotherapy and behavioral interventions.
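
The core idea of prompt-based alignment can be illustrated with a minimal sketch: instead of fully scripted utterances, an expert-crafted strategy for the current conversation stage is injected into the model's system prompt, constraining generation to the relevant therapeutic principle. The stage names, strategy texts, and the call_llm helper below are hypothetical placeholders for illustration only; they do not reproduce the actual SSAG implementation.

```python
# Minimal, illustrative sketch of script-strategy-aligned prompting.
# The strategies, stage names, and call_llm() helper are hypothetical
# placeholders -- not the actual SSAG implementation.

# Expert-crafted strategies: short therapeutic principles per dialogue stage,
# rather than fully scripted utterances.
EXPERT_STRATEGIES = {
    "rapport": "Open with a warm greeting, invite the user to share how they feel, "
               "and reflect their feelings back without giving advice yet.",
    "assessment": "Ask one open-ended question at a time about the target behaviour; "
                  "avoid judgement and do not diagnose.",
    "goal_setting": "Help the user formulate one small, specific, achievable goal, "
                    "and confirm it in their own words.",
}

def build_system_prompt(stage: str) -> str:
    """Compose a system prompt that binds the LLM to the expert strategy
    for the current conversation stage."""
    strategy = EXPERT_STRATEGIES[stage]
    return (
        "You are a psychotherapeutic chatbot for a health intervention.\n"
        f"Current stage: {stage}.\n"
        f"Follow this expert strategy strictly: {strategy}\n"
        "Stay within this stage; do not introduce content from other stages."
    )

def call_llm(system_prompt: str, user_message: str) -> str:
    """Placeholder for any chat-completion API; swap in a real client here."""
    return f"[LLM response conditioned on: {system_prompt[:40]}...]"

if __name__ == "__main__":
    reply = call_llm(build_system_prompt("rapport"),
                     "I've been feeling really low lately.")
    print(reply)
```

In this sketch, controllability comes from the stage-specific strategy in the system prompt; moving from full scripts to such compact strategies is what reduces the expert authoring effort.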

Trust perception in multimodal user interfaces

The deployment of Conversational User Interfaces (CUIs) powered by advanced Large Language Models (LLMs) has significantly transformed health information seeking and dissemination, enabling immediate and interactive communication between users and digital health resources. However, while trust is crucial for adopting online health advice, it remains unclear how the dissemination interface influences people's perceived trust in health information provided by LLMs. To address this, we conducted a mixed-methods, within-subjects lab study (N=20) investigating how different CUIs (a text-based, a speech-based, and an embodied interface) affect user-perceived trust when delivering health information from an identical LLM source. Our key findings showed that: (a) participants' trust in the delivered health information varied significantly across interfaces; (b) trust in health-related information correlated significantly with both trust in the delivering interface and its usability; (c) the type of health question did not affect participants' perceived trust; and (d) participants' prior experience with the interfaces, their approaches to processing information in different modalities, and presentation styles were key determinants of trust in health-related information. Our study sheds light on how trust in LLM-provided health information depends on the interface through which it is disseminated. We highlight the potential of various LLM-powered CUIs in health-related information-seeking contexts and contribute key factors and considerations for ensuring effective and reliable personal health information seeking in the age of LLM-powered CUIs and multimodal information dissemination.