Analysis of the Relevance Evaluation of Scientific-Technological Projects and Achievements
Liang Jiwen1,2, Yang Jianlin1,2, Wang Wei1,2, Wang Fei3
1. School of Information Management, Nanjing University, Nanjing 210023
2. Jiangsu Key Laboratory of Data Engineering and Knowledge Service, Nanjing 210023
3. Jiangsu Institute of Science and Technology Information, Nanjing 210042
|
|
Abstract The post-evaluation of scientific-technological (S&T) projects is a core link in the whole chain of S&T management: it judges the completion quality and benefits of a project. Existing research concentrates on building evaluation systems and quantitative evaluation indicators, while research on determining the relevance of projects and their achievements at the content level remains scarce. This paper uses S&T reports as supplementary information for S&T projects, builds a similarity calculation model based on the Bidirectional Encoder Representations from Transformers (BERT) architecture, and explores the relevance evaluation of S&T projects and their achievements. The results show that the fused model can effectively evaluate the relevance between S&T projects and their achievements. This study also analyzes cases of low relevance between projects and achievements, as well as problems existing in the S&T reports, aiming to comprehensively improve the efficiency of S&T information services.
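The core idea of the similarity model described above can be illustrated with a minimal sketch: encode the project description and the achievement text into vectors, then score their relevance by cosine similarity. The sketch below is an assumption-laden toy, not the paper's method — `toy_encode` is a deterministic stand-in for a real BERT encoder (which would produce contextual token embeddings), and mean pooling is just one common way to collapse token vectors into a document vector.

```python
import numpy as np

def toy_encode(text: str, dim: int = 8) -> np.ndarray:
    """Stand-in for a BERT encoder: one deterministic pseudo-embedding per token.
    A real pipeline would run the text through a pretrained BERT model instead."""
    vectors = []
    for token in text.split():
        seed = sum(ord(c) for c in token)          # deterministic per-token seed
        rng = np.random.default_rng(seed)
        vectors.append(rng.standard_normal(dim))
    return np.stack(vectors)                       # shape: (num_tokens, dim)

def mean_pool(token_embeddings: np.ndarray) -> np.ndarray:
    """Average token vectors into a single document vector (a common pooling choice)."""
    return token_embeddings.mean(axis=0)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity in [-1, 1]; higher scores indicate more related content."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Score a hypothetical project description against an achievement report summary.
project = mean_pool(toy_encode("intelligent fault detection for power grids"))
report = mean_pool(toy_encode("fault detection methods applied to power grids"))
score = cosine_similarity(project, report)
```

In the actual model, a fine-tuned BERT would replace `toy_encode`, and the relevance judgment would come from a learned classification or regression head rather than raw cosine similarity; the pooling-plus-similarity pattern shown here is only the simplest baseline form of the idea.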
|
Received: 21 October 2020
|
|
|
|
|
|
|