Construction of User-Trusted Explainable Models for Academic Information Recommendation
Chen Yunyi1, Wu Dan1,2, Xia Zishuo1
1. School of Information Management, Wuhan University, Wuhan 430072
2. Center for Studies of Human-Computer Interaction and User Behavior, Wuhan University, Wuhan 430072
Abstract With the advancement of artificial intelligence (AI), the unexplainable "black box" nature of AI systems hinders people's understanding of and trust in them, limiting cooperation between humans and AI. To improve the interpretability of AI, this study proposes and validates a staged, problem-oriented, user-trusted explainable model for the application scenario of academic information recommendation. Starting from the process of human-computer interaction, the model divides the interaction into three stages: initial contact, beginning interaction, and in-depth collaboration, and specifies the questions that need to be explained at each stage, so as to enhance user trust and promote human-computer interaction. The effectiveness and rationality of the model were then verified through experiments. Based on grounded theory, the interview content was coded at three levels: open coding, axial coding, and selective coding. The coding results were used to interpret and refine the model and to propose practical optimization strategies for explainability. The staged, problem-oriented explainable model proposed in this study can enhance user trust at multiple levels, and a mapping exists between its explanatory content, presentation form, and language style on the one hand and users' system cognition, usage intention, and usage behavior on the other. On this basis, we propose specific guidance strategies for each interaction stage from the perspectives of explanatory content and presentation form, with the aim of contributing positively to the construction of explainable artificial intelligence.
Received: 12 December 2024