Interpretable Graph Neural Network for Social Media Rumor Detection
Wang Zihang, Yan Pengwei, Jiang Zhuoren
Department of Information Resources Management, School of Public Affairs, Zhejiang University, Hangzhou 310058
Abstract The increasing scale and heterogeneity of social media data pose new challenges for rumor detection: the complex structural features of rumor propagation networks are difficult to exploit, and the interpretability of deep neural network-based rumor detection models requires further investigation. In this study, we design and implement an interpretable graph neural network model for the rumor detection task. Specifically, on top of a residual graph neural network model, we train a graph neural network interpreter based on mask learning. This framework incorporates the structural features of rumor propagation and automatically interprets the graph neural network model in terms of both network structure and node features. Experiments are conducted on two online rumor datasets sourced from Sina Weibo (Chinese) and Twitter (English), and interpretive analyses are performed at both the global and case levels. The experimental results show that the proposed graph neural network model effectively exploits propagation structure features and outperforms a series of baseline models on the rumor detection task. Using the trained interpreter, we find that long propagation chains are the key network topology for rumors in larger-scale rumor propagation trees, whereas text features are the key node attributes in smaller-scale rumor propagation trees.
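To make the two components described in the abstract concrete, the following is a minimal, illustrative sketch (not the authors' released implementation) in PyTorch with PyTorch Geometric: a residual graph convolutional classifier for a rumor propagation tree, and a mask-learning interpreter in the spirit of GNNExplainer that learns soft edge and node-feature masks to explain a prediction. All class names, hyperparameters, and the loss weighting here are assumptions for illustration only.

```python
# Illustrative sketch only: a residual GCN graph classifier plus a
# mask-learning explainer that learns which edges (propagation structure)
# and which node features matter for a given prediction.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool


class ResidualGCN(torch.nn.Module):
    """Graph classifier with residual (skip) connections between GCN layers."""

    def __init__(self, in_dim, hid_dim, num_classes, num_layers=3):
        super().__init__()
        self.input_proj = torch.nn.Linear(in_dim, hid_dim)
        self.convs = torch.nn.ModuleList(
            [GCNConv(hid_dim, hid_dim) for _ in range(num_layers)]
        )
        self.classifier = torch.nn.Linear(hid_dim, num_classes)

    def forward(self, x, edge_index, batch, edge_weight=None):
        h = self.input_proj(x)
        for conv in self.convs:
            # Residual connection: add the layer input back to its output.
            h = h + F.relu(conv(h, edge_index, edge_weight=edge_weight))
        return self.classifier(global_mean_pool(h, batch))


def explain_graph(model, x, edge_index, batch, epochs=200, lr=0.01,
                  size_coeff=0.005):
    """Learn soft edge and feature masks that preserve the model's prediction."""
    model.eval()
    with torch.no_grad():
        target = model(x, edge_index, batch).argmax(dim=-1)

    edge_mask = torch.nn.Parameter(torch.randn(edge_index.size(1)) * 0.1)
    feat_mask = torch.nn.Parameter(torch.randn(x.size(1)) * 0.1)
    optimizer = torch.optim.Adam([edge_mask, feat_mask], lr=lr)

    for _ in range(epochs):
        optimizer.zero_grad()
        masked_x = x * torch.sigmoid(feat_mask)        # mask node features
        masked_edges = torch.sigmoid(edge_mask)        # mask (re-weight) edges
        logits = model(masked_x, edge_index, batch, edge_weight=masked_edges)
        # Keep the original prediction while encouraging sparse masks.
        loss = (F.cross_entropy(logits, target)
                + size_coeff * masked_edges.sum()
                + size_coeff * torch.sigmoid(feat_mask).sum())
        loss.backward()
        optimizer.step()

    return torch.sigmoid(edge_mask).detach(), torch.sigmoid(feat_mask).detach()


# Hypothetical usage on a single propagation tree (batch vector of all zeros):
# edge_imp, feat_imp = explain_graph(
#     model, data.x, data.edge_index,
#     torch.zeros(data.num_nodes, dtype=torch.long))
```

High-scoring entries of `edge_imp` would correspond to the propagation edges (e.g., long retweet chains) that drive the classification, while `feat_imp` ranks the node attributes (e.g., text features); this mirrors the structure-level and feature-level interpretations reported in the abstract.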
Received: 29 August 2022