A Trust Construction Model in Human-AI Social Interactions: A Dual-Path Analysis Based on Algorithmic Transparency and Emojis

Yunjun Han

Keywords

social AI, user trust, AI trust, social presence, algorithmic transparency, emojis

Abstract

Amid the rapid development of social AI applications, user trust has become a crucial prerequisite for their long-term adoption, yet the mechanisms underlying its formation still lack systematic research from an emotional perspective. To address this gap, this study proposes a dual-path model based on algorithmic transparency and emojis, introducing social presence as a mediating variable to examine how these two factors shape user trust. Empirical analysis shows that both algorithmic transparency and emoji usage significantly enhance users’ trust in social AI, and that social presence mediates the effects of algorithmic transparency and emoji usage on user trust. These findings extend theoretical perspectives on trust construction in social AI and offer practical suggestions for algorithmic transparency design and interaction optimization.

References

  • Al-Natour, S., Benbasat, I. and Cenfetelli, R., (2021). Designing Online Virtual Advisors to Encourage Customer Self-disclosure: A Theoretical Model and an Empirical Test. Journal of Management Information Systems, vol. 38, no. 3, pp. 798-827.
  • Al-Oraini, B. S., (2025). Chatbot dynamics: trust, social presence and customer satisfaction in AI-driven services. Journal of Innovative Digital Transformation, vol. 2, no. 2, pp. 109-130.
  • Aldunate, N. and Gonzalez-Ibanez, R., (2016). An Integrated Review of Emoticons in Computer-Mediated Communication. Frontiers in Psychology, vol. 7, p. 2061.
  • Aquilino, L., Di Dio, C., Manzi, F., Massaro, D., Bisconti, P. and Marchetti, A., (2025). Decoding trust in artificial intelligence: A systematic review of quantitative measures and related variables. Informatics, vol. 12, no. 3, p. 70.
  • Becker, C. and Fischer, M., (2024). Factors of Trust Building in Conversational AI Systems: A Literature Review. Artificial Intelligence in HCI. HCII 2024, Cham: Springer Nature Switzerland, pp. 27-44.
  • Casaló, L. V., Millastre-Valencia, P., Belanche, D. and Flavián, C., (2025). Intelligence and humanness as key drivers of service value in Generative AI chatbots. International Journal of Hospitality Management, vol. 128, p. 104130.
  • China Academy of Information and Communications Technology, (2021). Trustworthy Artificial Intelligence White Paper [Online]. Available: https://www.caict.ac.cn/kxyj/qwfb/bps/202107/P020210709319866413974.pdf [Accessed December 11, 2025].
  • Choudhury, A. and Shamszare, H., (2023). Investigating the Impact of User Trust on the Adoption and Use of ChatGPT: Survey Analysis. Journal of Medical Internet Research, vol. 25, p. e47184.
  • Cobb, S. C., (2009). Social presence and online learning: A current view from a research perspective. Journal of Interactive Online Learning, vol. 8, no. 3, pp. 241-254.
  • Daft, R. L. and Lengel, R. H., (1986). Organizational information requirements, media richness and structural design. Management Science, vol. 32, no. 5, pp. 554-571.
  • Derks, D., Fischer, A. H. and Bos, A. E. R., (2008). The role of emotion in computer-mediated communication: A review. Computers in Human Behavior, vol. 24, no. 3, pp. 766-785.
  • Deryugina, O. V., (2010). Chatterbots. Scientific and Technical Information Processing, vol. 37, no. 2, pp. 143-147.
  • Fogg, B. J. and Tseng, H., (1999). The elements of computer credibility. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Pittsburgh, Pennsylvania, USA. Association for Computing Machinery, pp. 80-87.
  • Garfinkle, A., (2023). ChatGPT on track to surpass 100 million users faster than TikTok or Instagram: UBS [Online]. Yahoo Finance. Available: https://finance.yahoo.com/news/chatgpt-on-track-to-surpass-100-million-users-faster-than-tiktok-or-instagram-ubs-214423357.html [Accessed December 11, 2025].
  • Haenlein, M. and Kaplan, A., (2019). A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence. California Management Review, vol. 61, no. 4, pp. 5-14.
  • Haleem, A., Javaid, M. and Singh, R. P., (2022). An era of ChatGPT as a significant futuristic support tool: A study on features, abilities, and challenges. BenchCouncil Transactions on Benchmarks, Standards and Evaluations, vol. 2, no. 4, p. 100089.
  • Hu, P., Zeng, Y., Wang, D. and Teng, H., (2024). Too much light blinds: The transparency-resistance paradox in algorithmic management. Computers in Human Behavior, vol. 161, p. 108403.
  • Huynh, M.-T. and Aichner, T., (2025). In generative artificial intelligence we trust: unpacking determinants and outcomes for cognitive trust. AI & Society, vol. 40, no. 8, pp. 5849-5869.
  • Jin, S. V. and Youn, S., (2022). Social Presence and Imagery Processing as Predictors of Chatbot Continuance Intention in Human-AI-Interaction. International Journal of Human–Computer Interaction, vol. 39, no. 9, pp. 1874-1886.
  • Koonchanok, R., Pan, Y. and Jang, H., (2024). Public attitudes toward ChatGPT on Twitter: sentiments, topics, and occupations. Social Network Analysis and Mining, vol. 14, no. 1, p. 106.
  • Lee, C. and Cha, K., (2024). Toward the Dynamic Relationship Between AI Transparency and Trust in AI: A Case Study on ChatGPT. International Journal of Human–Computer Interaction, vol. 41, no. 13, pp. 8086-8103.
  • Lee, M. K., Jain, A., Cha, H. J., Ojha, S. and Kusbit, D., (2019). Procedural Justice in Algorithmic Fairness. Proceedings of the ACM on Human-Computer Interaction, vol. 3, no. CSCW, pp. 1-26.
  • Li, Y., Jiang, Y., Tian, D., Hu, L., Lu, H. and Yuan, Z., (2019). AI-Enabled Emotion Communication. IEEE Network, vol. 33, no. 6, pp. 15-21.
  • Liefooghe, B., Min, E. and Aarts, H., (2023). The effects of social presence on cooperative trust with algorithms. Scientific Reports, vol. 13, no. 1, p. 17463.
  • Lv, X., Luo, J., Liang, Y., Liu, Y. and Li, C., (2022). Is cuteness irresistible? The impact of cuteness on customers’ intentions to use AI applications. Tourism Management, vol. 90, p. 104472.
  • Meng, H., Lu, X. and Xu, J., (2025). The Impact of Chatbot Response Strategies and Emojis Usage on Customers' Purchase Intention: The Mediating Roles of Psychological Distance and Performance Expectancy. Behavioral Sciences, vol. 15, no. 2, p. 117.
  • Ngo, V. M., (2025a). Humanizing AI for trust: the critical role of social presence in adoption. AI & Society, pp. 1-17.
  • Ngo, V. M., (2025b). Balancing AI transparency: Trust, Certainty, and Adoption. Information Development, p. 02666669251346124.
  • Ochmann, J., Michels, L., Tiefenbeck, V., Maier, C. and Laumer, S., (2024). Perceived algorithmic fairness: An empirical study of transparency and anthropomorphism in algorithmic recruiting. Information Systems Journal, vol. 34, no. 2, pp. 384-414.
  • Oh, C. S., Bailenson, J. N. and Welch, G. F., (2018). A Systematic Review of Social Presence: Definition, Antecedents, and Implications. Frontiers in Robotics and AI, vol. 5, p. 114.
  • Papenmeier, A., Englebienne, G. and Seifert, C., (2019). How model accuracy and explanation fidelity influence user trust. Proceedings of the IJCAI 2019 Workshop on Explainable Artificial Intelligence (XAI), Macau, China. IJCAI, pp. 94-100.
  • Park, K. and Yoon, H. Y., (2024). Beyond the code: The impact of AI algorithm transparency signaling on user trust and relational satisfaction. Public Relations Review, vol. 50, no. 5, p. 102507.
  • Park, K. and Yoon, H. Y., (2025). AI algorithm transparency, pipelines for trust not prisms: mitigating general negative attitudes and enhancing trust toward AI. Humanities and Social Sciences Communications, vol. 12, no. 1, p. 1160.
  • Riley, B. K. and Dixon, A., (2024). Emotional and cognitive trust in artificial intelligence: A framework for identifying research opportunities. Current Opinion in Psychology, vol. 58, p. 101833.
  • Seeger, A.-M., Pfeiffer, J. and Heinzl, A., (2021). Texting with Humanlike Conversational Agents: Designing for Anthropomorphism. Journal of the Association for Information Systems, vol. 22, no. 4, pp. 931-967.
  • Sfar, N., Sboui, M. and Baati, O., (2025). The impact of chatbot anthropomorphism on customer experience and chatbot usage intention: a technology acceptance approach. International Journal of Quality and Service Sciences, vol. 17, no. 2, pp. 168-194.
  • Shahzad, M. F., Xu, S. and Javed, I., (2024). ChatGPT awareness, acceptance, and adoption in higher education: the role of trust as a cornerstone. International Journal of Educational Technology in Higher Education, vol. 21, no. 1, p. 46.
  • Shen, W. and Li, S., (2025). Influence of the Use of Emojis by Chatbots on Interaction Satisfaction. Journal of Marketing Development & Competitiveness, vol. 19, no. 2, pp. 17-30.
  • Swan, K. and Shih, L. F., (2005). On the nature and development of social presence in online course discussions. Journal of Asynchronous Learning Networks, vol. 9, no. 3, pp. 115-136.
  • Toader, D.-C., Boca, G., Toader, R., Măcelaru, M., Toader, C., Ighian, D. and Rădulescu, A. T., (2019). The effect of social presence and chatbot errors on trust. Sustainability, vol. 12, no. 1, p. 256.
  • Tu, C.-H. and McIsaac, M., (2002). The relationship of social presence and interaction in online classes. American Journal of Distance Education, vol. 16, no. 3, pp. 131-150.
  • Von Eschenbach, W. J., (2021). Transparency and the black box problem: Why we do not trust AI. Philosophy & Technology, vol. 34, no. 4, pp. 1607-1622.
  • Wang, Q., Ji, D. and Li, B., (2024). The influence of anthropomorphic chatbot design on consumer tolerance of service failures: The mediating roles of attachment and cognitive dissonance. SIGHCI 2023 Proceedings, Houston, Texas, USA. Association for Information Systems (AIS), p. 17.
  • Wang, X., Nie, B. and Cai, B., (2025). The impact of social presence on consumer purchase behavior in e-commerce live streaming contexts. Business and Economic Research, pp. 71-74.
  • Xu, Y., Bradford, N. and Garg, R., (2023). Transparency enhances positive perceptions of social artificial intelligence. Human Behavior and Emerging Technologies, vol. 2023, no. 1, p. 5550418.
  • Yu, S. and Zhao, L., (2024). Emojifying chatbot interactions: An exploration of emoji utilization in human-chatbot communications. Telematics and Informatics, vol. 86, p. 102071.
  • Zerilli, J., Bhatt, U. and Weller, A., (2022). How transparency modulates trust in artificial intelligence. Patterns, vol. 3, no. 4, p. 100455.
  • Zhang, J., Wang, X., Lu, J., Liu, L. and Feng, Y., (2024). The impact of emotional expression by artificial intelligence recommendation chatbots on perceived humanness and social interactivity. Decision Support Systems, vol. 187, p. 114347.
  • Zhang, M., Ding, S., Liu, Y., Li, H., Zhu, Y. and Qin, C., (2021a). Influence of emojis on online trust among college students. Frontiers in Psychology, vol. 12, p. 747925.
  • Zhang, S., Meng, Z., Chen, B., Yang, X. and Zhao, X., (2021b). Motivation, social emotion, and the acceptance of artificial intelligence virtual assistants—Trust-based mediating effects. Frontiers in Psychology, vol. 12, p. 728495.
