Xin Sun

Postdoctoral Researcher

About Me

I am a postdoctoral researcher at the University of Amsterdam (UvA), the Netherlands, and a researcher at the National Institute of Informatics (NII), Japan, working on trustworthy human–LLM interaction. I received my PhD from the University of Amsterdam, where I was supervised by Prof. Dr. Jos A. Bosch, Dr. Abdallah El Ali, and Dr. Jan de Wit. Prior to my PhD, I obtained a BSc in Mathematics from Xidian University and an MSc in Computer Science from Heidelberg University under the supervision of Prof. Dr. Artur Andrzejak.

My research lies at the intersection of natural language generation and human–computer interaction, focusing on how large language models can be made trustworthy and explainable in sensitive contexts such as healthcare and mental support. Specifically, I study how LLMs interact with humans in psychological and health-related scenarios, from generating explainable and domain-aligned therapeutic dialogue, to designing transparent and trustworthy interfaces, to modeling human trust perception through behavioral and physiological signals.

Download CV
Interests
  • NLP / NLG / LLMs
  • Dialogue Systems
  • LLM Alignment & Evaluation
  • User Interfaces
  • Cognition & Physiology
Education
  • PhD in NLP, HCI, and Psychology

    University of Amsterdam, the Netherlands

  • MSc in Optimization and AI (NLP)

    Heidelberg University, Germany

  • BSc in Computational Mathematics

    Xidian University, China

My Research

I am a researcher in human–computer interaction (HCI) and natural language generation (NLG), focusing on building trustworthy, controllable, and explainable interactions with large language models (LLMs) in sensitive contexts such as psychotherapy, health interventions, and mental support. With a background spanning mathematics, computer science, and cognitive psychology, I strive to bridge how AI thinks with how humans feel and trust.

My research takes a three-dimensional perspective on trust in human–LLM interaction, integrating model alignment, interface transparency, and human trust perception. By combining generative AI, NLG, HCI, and multimodal human sensing, my work aims to make LLMs not only technically reliable but also perceptually and socially trustworthy.

Moving forward, my research focuses on making LLMs more adaptive to human behavior and physiological signals, enabling them to align dynamically with users’ cognitive and emotional states. In parallel, I aim to explore the implicit interpretability mechanisms of LLMs, uncovering how these models internally represent, infer, and explain human behaviors in interaction. I am also interested in advancing multimodal interfaces and embodied intelligence powered by generative AI, delivering innovative and impactful solutions for health, learning, and psychotherapy.

Please reach out to collaborate 😃

Recent Activity

September 2025 - A new paper has been accepted at the Conference on Empirical Methods in Natural Language Processing (EMNLP 2025)!

August 2025 - Our project has been selected for first-round incubation funding from the Wellcome AI Accelerator!

August 2025 - A new paper has been accepted at the ACM SIGCHI Conference on Computer-Supported Cooperative Work & Social Computing (CSCW 2025)!

July 2025 - A new paper has been accepted at the ACM International Conference on Multimodal Interaction (ICMI 2025)!

April 2025 - A new paper has been accepted by the International Journal of Human-Computer Studies!

March 2025 - A new full paper and a Late-Breaking Work (LBW) have been accepted at the ACM Conference on Human Factors in Computing Systems (CHI 2025)!

January 2025 - Two papers have been accepted for oral presentation at the International Conference on Computational Linguistics (COLING 2025)!

October 2024 - A new paper has been accepted at the ACM SIGCHI Conference on Computer-Supported Cooperative Work & Social Computing (CSCW 2024)!