|
|
Promoting the Evaluation of Representative Works: Challenges and Recommendations |
Yu Liping1, Zhang Kuangwei2, Jiang Changbing2 |
1. School of Statistics and Mathematics, Zhejiang Gongshang University, Hangzhou 310018; 2. School of Management and E-business, Zhejiang Gongshang University, Hangzhou 310018 |
|
|
Abstract The representative-works system has recently been implemented in science and technology evaluation. Analyzing and reflecting on this system helps to perfect it and to improve the quality of such evaluations. To this end, this article first analyzes the evaluation object. Next, the challenges of representative-works evaluation are examined from the perspectives of evaluation purpose, evaluation object, peer review, and evaluation technology. Finally, the reliability and pass rate of representative-works evaluation are derived mathematically. The study finds that representative-works evaluation faces the following challenges: it adopts only a macroscopic perspective; interdisciplinary comparison is difficult; authors must choose among different types of work when selecting a representative piece; disciplinary heterogeneity affects assessment; applicability differs across academic evaluation problems; the peer review system is imperfect; the judgments of evaluation experts may not match the evaluation objective; different groups are hard to compare; costs are high; and the reliability of assessment is low. Accordingly, the following policy recommendations are made. First, the scope of representative-works review should not be expanded. Second, the requirements for selecting representative works should be clarified, classified evaluation should be refined, and traditional methods should be retained for junior and intermediate scholars. Finally, a review system should be established that combines quantitative assessment with representative works, improves the quality of peer review, grants universities autonomy in science and technology review, promotes bibliometric indicators, and improves the evaluation mechanism.
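The abstract does not reproduce the paper's derivation of the pass rate, but the basic quantity can be sketched under a simple, hypothetical independence assumption: each of m reviewers approves a representative work independently with probability p, and the work passes if at least k reviewers approve. The pass rate is then a binomial tail probability. The parameters m, k, and p below are illustrative, not taken from the paper.

```python
from math import comb

def pass_rate(m: int, k: int, p: float) -> float:
    """Probability that at least k of m independent reviewers approve,
    when each reviewer approves with probability p (binomial tail)."""
    return sum(comb(m, j) * p**j * (1 - p) ** (m - j) for j in range(k, m + 1))

# Example: 5 reviewers, majority (3) needed, each approving with probability 0.6.
print(pass_rate(5, 3, 0.6))
```

Under this toy model, low inter-reviewer agreement (p near 0.5) pushes the pass rate toward a coin flip, which illustrates the low-reliability concern the abstract raises.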
|
Received: 22 April 2020
|
|
|
|
|
|
|