Multimodal Negative Sentiment Recognition in Online Public Opinion during Public Health Emergencies Based on Fusion Strategy
Zeng Ziming1,2, Sun Shouqiang1, Li Qingqing1
1. School of Information Management, Wuhan University, Wuhan 430072
2. Center for Studies of Information Resources, Wuhan University, Wuhan 430072
|
|
Abstract During public health emergencies, social media serves as the online mirror of offline public opinion, and multimodal image-text posts have become a primary vehicle for expressing public sentiment. To fully exploit the correlation and complementarity among modalities and improve the accuracy of multimodal negative sentiment recognition in online public opinion during such emergencies, this study constructs a two-stage, hybrid fusion strategy-driven multimodal fine-grained negative sentiment recognition model (THFMFNSR) comprising four components: multimodal feature representation, feature fusion, classification, and decision fusion. Using image-text data related to COVID-19 collected from Sina Weibo, the study verifies the effectiveness of the model and identifies the best sentiment decision-fusion rules and classifier configurations. Compared with the best-performing text-only, image-only, and image-text feature-fusion models, the proposed model improves sentiment recognition precision by 14.48%, 12.92%, and 2.24%, respectively; for fine-grained negative sentiment recognition, precision improves by 22.73%, 10.85%, and 3.34%, respectively. The model can therefore sense public opinion situations and support decision making by public health departments and public opinion management departments.
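The two-stage hybrid fusion described above can be illustrated with a minimal sketch: stage one fuses unimodal feature vectors at the feature level (here by simple concatenation), and stage two fuses the probability outputs of several classifiers at the decision level (here by weighted averaging). The function names, the concatenation choice, and the averaging rule are illustrative assumptions for exposition, not the paper's exact design.

```python
def feature_fusion(text_vec, image_vec):
    """Stage 1: concatenate unimodal feature vectors into one multimodal vector.

    Concatenation is only one of several common feature-fusion choices.
    """
    return text_vec + image_vec  # list concatenation

def decision_fusion(prob_dists, weights=None):
    """Stage 2: weighted average of class-probability distributions
    produced by several classifiers (uniform weights by default)."""
    n = len(prob_dists)
    weights = weights or [1.0 / n] * n
    num_classes = len(prob_dists[0])
    return [
        sum(w * p[c] for w, p in zip(weights, prob_dists))
        for c in range(num_classes)
    ]

# Toy example with three fine-grained negative classes (e.g. fear / anger / sadness).
text_probs  = [0.6, 0.3, 0.1]   # text-only classifier output
image_probs = [0.4, 0.4, 0.2]   # image-only classifier output
mm_probs    = [0.7, 0.2, 0.1]   # classifier on the fused image-text features
fused = decision_fusion([text_probs, image_probs, mm_probs])
label = max(range(len(fused)), key=fused.__getitem__)  # index of the fused argmax
```

In this toy run the three distributions average to roughly [0.57, 0.30, 0.13], so the fused decision selects class 0; with non-uniform weights the decision stage can favor the modality or classifier that performs best on validation data.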
|
Received: 03 June 2022
|
|
|
|
|
|
|