Objective Consistency of Funded Projects and Papers Based on Discipline and Text
Huang Ying1,2,3, Yu Yifei1,2, Zheng Yinxin1,2, Zhu Yunchang1, Zhang Lin1,2,3 |
1. School of Information Management, Wuhan University, Wuhan 430072; 2. Center for Science, Technology & Education Assessment (CSTEA), Wuhan University, Wuhan 430072; 3. Centre for R&D Monitoring (ECOOM) and Department of MSI, KU Leuven, Leuven B-3000
Abstract Exploring the content relevance between funded projects and their output papers can broaden the analytical dimensions of funding performance evaluation, which is important for improving funding management. Taking this content relevance into account, this study proposes a definition of objective consistency and develops a method to identify such consistency between funded projects and papers based on discipline and text. Projects funded by the National Natural Science Foundation of China (NSFC) were used to verify the effectiveness of the method. A large number of NSFC projects are found to have low objective consistency with their output papers, and projects with different objective consistency characteristics differ significantly in how they are distributed across several dimensions. The proportion of “low discipline similarity-low text similarity” projects is higher among the more influential projects of the Key Program and the National Science Fund for Distinguished Young Scholars. The results indicate that the objective consistency identification method effectively compensates for the limitations of any single perspective and achieves a comprehensive, multidimensional identification of objective consistency between funded projects and papers, thus providing a new perspective and approach for the performance evaluation of funded projects.
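The quadrant labels used above (e.g., “low discipline similarity-low text similarity”) suggest that each project-paper pair is scored on two axes and then binned. The sketch below is a minimal illustration of that idea only; the Jaccard overlap of discipline labels, the TF-IDF cosine similarity of abstracts, and the 0.5 thresholds are assumptions made for illustration, not the procedure reported in the paper.

    # Illustrative sketch: combine discipline similarity and text similarity
    # into a 2x2 "objective consistency" label. The similarity measures and
    # thresholds below are assumptions, not the paper's method.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def text_similarity(project_abstract: str, paper_abstract: str) -> float:
        # Cosine similarity between TF-IDF vectors of the two abstracts.
        tfidf = TfidfVectorizer().fit_transform([project_abstract, paper_abstract])
        return float(cosine_similarity(tfidf[0], tfidf[1])[0, 0])

    def discipline_similarity(project_fields: set, paper_fields: set) -> float:
        # Jaccard overlap of the discipline labels assigned to project and paper.
        if not project_fields or not paper_fields:
            return 0.0
        return len(project_fields & paper_fields) / len(project_fields | paper_fields)

    def consistency_label(disc_sim: float, text_sim: float,
                          disc_threshold: float = 0.5, text_threshold: float = 0.5) -> str:
        # Map the two similarity scores to one of four quadrants.
        disc = "high" if disc_sim >= disc_threshold else "low"
        text = "high" if text_sim >= text_threshold else "low"
        return f"{disc} discipline similarity-{text} text similarity"

    if __name__ == "__main__":
        d = discipline_similarity({"Information Science"},
                                  {"Information Science", "Computer Science"})
        t = text_similarity("topic modeling of funded project abstracts",
                            "identifying topics in funded research abstracts")
        print(consistency_label(d, t))

A pair scoring low on both axes would be labeled “low discipline similarity-low text similarity”, the category the abstract highlights for the more influential funding programs.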
Received: 05 September 2022