Persona-Driven Bilingual LLM Dialogues: Cultural Metaphors, Linguistic Divergence, and Emotional Disclosure

Authors

  • Chenlisha Sun

DOI:

https://doi.org/10.54097/sjbj4g80

Keywords:

Large Language Models, Persona-driven Dialogue, Cross-cultural Communication, Mental Health Support, Computational Linguistics

Abstract

Large Language Models (LLMs) can adopt personas that give advisor-client conversations a culturally informed character for mental health support. This paper presents a comparative study of bilingual, persona-driven therapeutic dialogues generated by LLMs playing either the client or the counselor. We compare an English-speaking big-brother persona (Raoul) with a Chinese auntie persona ("Aunt Mei", Mei Yi) in conversations about depression. Using quantitative linguistic analysis, including sentiment-trajectory tracking, an exploratory self-disclosure scoring scheme, and a lexicon of culturally grounded metaphors, we examine how these personas shape conversational dynamics. The results show that the two personas diverge linguistically: Aunt Mei draws heavily on proverbs and social-relational framing, while Raoul makes greater use of inclusive first-person-plural ("we") language. These differences are reflected in the levels of self-disclosure we register. Clients conversing in English use first-person-singular pronouns ("I") and negative-emotion words at much higher rates from the outset, whereas Chinese clients disclose their feelings more indirectly and gradually. The sentiment trajectories likewise differ: the sentiment of Chinese (ZH) clients rises steeply and early, while that of English (EN) clients improves more gradually. Our central finding is that culturally grounded personas shape the therapeutic process beyond mere linguistic flavor: they frame the trajectories of self-disclosure and emotional improvement themselves. We discuss the implications for designing culturally informed bilingual therapeutic AI that respects local narrative forms while facilitating self-disclosure.
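To make the abstract's measures concrete, the sketch below shows one minimal way such per-turn statistics could be computed in Python: a lexicon-based sentiment score per client turn (the "sentiment trajectory"), a first-person-singular pronoun rate as a rough self-disclosure proxy, and phrase matches against a culturally grounded metaphor lexicon. The word lists and phrase lexicon here are tiny hypothetical placeholders, not the study's actual instruments; a real analysis would use validated resources such as LIWC categories and a curated proverb/metaphor list.

# Illustrative sketch only (not the paper's released code): per-turn
# measures of the kind the abstract describes.
import re
from dataclasses import dataclass

# Toy lexicons for illustration; a real study would substitute
# validated resources (e.g., LIWC categories, a curated metaphor list).
POSITIVE = {"hope", "better", "calm", "grateful"}
NEGATIVE = {"sad", "tired", "alone", "worthless"}
METAPHOR_LEXICON = {"heavy heart", "dark cloud", "bitter before sweet"}
FIRST_PERSON = {"i", "i'm", "i've", "me", "my"}

@dataclass
class TurnMeasures:
    sentiment: float    # (pos - neg) / tokens, in [-1, 1]
    i_rate: float       # share of tokens that are first-person singular
    metaphor_hits: int  # lexicon phrases found in the turn

def measure_turn(text: str) -> TurnMeasures:
    """Score a single client turn with simple lexicon lookups."""
    lowered = text.lower()
    tokens = re.findall(r"[a-z']+", lowered)
    n = max(len(tokens), 1)
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    i_count = sum(t in FIRST_PERSON for t in tokens)
    hits = sum(phrase in lowered for phrase in METAPHOR_LEXICON)
    return TurnMeasures((pos - neg) / n, i_count / n, hits)

def trajectory(client_turns: list[str]) -> list[TurnMeasures]:
    """Sentiment/disclosure trajectory across a dialogue's client turns."""
    return [measure_turn(t) for t in client_turns]

if __name__ == "__main__":
    turns = [
        "I'm so tired and alone, like a dark cloud follows me.",
        "Talking helps. I feel a little calm today.",
        "I'm grateful things look better, there is some hope.",
    ]
    for i, m in enumerate(trajectory(turns), 1):
        print(f"turn {i}: sentiment={m.sentiment:+.2f} "
              f"i_rate={m.i_rate:.2f} metaphors={m.metaphor_hits}")

Under these assumptions, plotting sentiment against turn index for ZH and EN dialogues would yield the steep-versus-gradual trajectories the abstract reports, and the i_rate column corresponds to the higher early "I"-pronoun use observed in English-language clients.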

Published

31-10-2025

Issue

Section

Articles