Researching trustworthy human-LLM interaction across alignment, interfaces, and human trust perception.
Interests
  • NLP / NLG / LLMs
  • Dialogue Systems
  • LLM Alignment and Evaluation
  • User Interfaces
  • Cognition and Physiology
Education
  • PhD in NLP, HCI, and Psychology

    University of Amsterdam, the Netherlands

  • MSc in Optimization and AI (NLP)

    Heidelberg University, Germany

  • BSc in Computational Mathematics

    Xidian University, China

Research overview

Trustworthy LLM behavior

Building LLM systems that align with expert strategies, domain requirements, and the goals of sensitive interactions.

Interfaces that shape trust

Examining how text, voice, embodiment, and interaction framing change user confidence and interpretation.

Human-centered measurement

Using behavioral and physiological sensing to study how people perceive, rely on, and question AI output.

I work at the intersection of natural language generation, human-computer interaction, and cognitive psychology. The goal is to make large language models not only technically strong, but also perceptually clear, socially appropriate, and usable in high-stakes settings such as psychotherapy and health support.

My current direction focuses on adaptive LLM systems that respond to human behavior and multimodal signals, while also uncovering how trust emerges through both model behavior and interface design.

Recent highlights
January 2026

A full paper and a poster were accepted to CHI 2026 in Barcelona.

December 2025

New work was accepted by IJHCS and CSCW 2026.

August 2025

A VR and AI project was selected for first-round incubation funding from the Wellcome Trust AI Accelerator.

March 2025

A full paper and an LBW contribution were accepted to CHI 2025 in Yokohama.

2024 to 2025

Ongoing publications and presentations across COLING, CSCW, ICMI, and leading HCI venues.