Intelligence Theories and Methods

Data Fairness: A New Social Issue in the Era of Digital Economy

Huo Chaoguang, Zhao Dongxiang

DOI: 10.3772/j.issn.1000-0135.2026.02.001

With the rapid advancement of the digital economy and the ongoing evolution of data intelligence technologies, data fairness is emerging as a critical social issue that society cannot avoid, joining long-standing concerns over fairness in income, education, and healthcare. Unlike traditional production factors (e.g., labor, capital, land, and technology), data raises complex and thorny social equity issues. Given that theoretical research on data fairness in Chinese academia lags behind the practices driven by digital intelligence development and the evolving digital economy, this paper systematically reviews relevant multidisciplinary research worldwide, proposes a comprehensive data fairness framework, and analyzes six basic principles of data fairness. The paper discusses five main dimensions and four edge dimensions of data fairness, as well as the fairness issues involved in the three main stages of the data life cycle: collection and acquisition, processing and analysis, and sharing and utilization. This paper is the first to systematically propose a content framework for data fairness, enriching existing theoretical research on data fairness, expanding the connotation of social equity in the era of the digital economy, and providing theoretical support for data governance.

2026 Vol. 45 (2): 165-179


Data Ethics in China's Public Data Openness: Generative Logic, Problem Identification, and Agile Governance

Zhang Chuhui, Pei Lei, Li Zhuozhuo

DOI: 10.3772/j.issn.1000-0135.2026.02.002

With the advancement of market-oriented reforms of data factors, the breadth and depth of public data openness continue to expand, precipitating numerous ethical issues. However, within the specific context of public data openness, there is currently a dearth of practical tools for holistically assessing these ethical issues, as well as a lack of methodological frameworks for their governance. It is therefore crucial to investigate the definitional characteristics and evaluative criteria of data ethical issues in public data openness, and to thoroughly grasp the underlying patterns of data ethics. Combining qualitative analysis and content analysis, this study proceeds along the logical framework of "generative logic—problem identification—governance mode." Guided by an "ethics-first" philosophy, it systematically reveals latent data ethical issues in China's public data openness, establishes a comprehensive typology of these issues, and constructs an agile governance network for data ethics. The findings indicate that data ethical issues in public data openness exhibit an evolutionary pattern of "ethical ecology—ethical relationship—ethical representation." Furthermore, effective data ethical governance requires the integration of three mutually supportive and organically linked dimensions: vertical hierarchical coordination, horizontal procedural prevention and control mechanisms, and a responsibility network among multiple stakeholders. This approach aims to form a governance landscape characterized by clear rights and responsibilities, multi-stakeholder co-governance, and strong ethical resilience.

2026 Vol. 45 (2): 180-192


TRIZ Theory Combined with the Hidden Markov Model for Technology Life Cycle Assessment and Technology Evolution Pattern Discovery

Yu Yan, Wang Ruishan, Ma Xinyuan, Liu Pan

DOI: 10.3772/j.issn.1000-0135.2026.02.003

In light of the shortcomings of current technology life cycle assessment methods, such as strong subjectivity and insufficient theoretical support, as well as the reliance on manual discovery of technology evolution patterns, this study proposes a technology life cycle assessment method combining TRIZ theory with the hidden Markov model, along with an automated method for discovering technology evolution patterns based on technology life cycle clustering. The method first constructs technology life cycle assessment indicators based on TRIZ theory, then uses the hidden Markov model to integrate these indicators and evaluate the technology life cycle. Subsequently, technology evolution patterns are identified through technology life cycle clustering grounded in TRIZ theory. Using the field of lithium-ion battery electrolytes as empirical data, the accuracy and effectiveness of the method are verified.
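The hidden-Markov step described in this abstract lends itself to a compact illustration. The sketch below decodes a series of discretized indicator levels into life-cycle stages with the Viterbi algorithm; the stage names, transition matrix, and emission probabilities are illustrative assumptions, not values from the paper.

```python
STAGES = ["emergence", "growth", "maturity", "decline"]
OBS = {"low": 0, "medium": 1, "high": 2}  # discretized indicator level

# Left-to-right transitions: a stage either persists or advances (assumed).
TRANS = [
    [0.7, 0.3, 0.0, 0.0],
    [0.0, 0.7, 0.3, 0.0],
    [0.0, 0.0, 0.7, 0.3],
    [0.0, 0.0, 0.0, 1.0],
]
# P(indicator level | stage), e.g. growth tends to emit "high" activity.
EMIT = [
    [0.6, 0.3, 0.1],
    [0.1, 0.3, 0.6],
    [0.2, 0.5, 0.3],
    [0.7, 0.2, 0.1],
]
START = [0.9, 0.1, 0.0, 0.0]  # technologies assumed to start in emergence

def viterbi(levels):
    """Return the most likely stage sequence for a series of indicator levels."""
    obs = [OBS[x] for x in levels]
    n = len(STAGES)
    # dp[s]: best probability of a path ending in stage s at the current year
    dp = [START[s] * EMIT[s][obs[0]] for s in range(n)]
    back = []
    for o in obs[1:]:
        ptr, nxt = [], []
        for s in range(n):
            best_prev = max(range(n), key=lambda p: dp[p] * TRANS[p][s])
            ptr.append(best_prev)
            nxt.append(dp[best_prev] * TRANS[best_prev][s] * EMIT[s][o])
        dp, back = nxt, back + [ptr]
    # backtrack from the best final stage
    s = max(range(n), key=lambda i: dp[i])
    path = [s]
    for ptr in reversed(back):
        s = ptr[s]
        path.append(s)
    return [STAGES[i] for i in reversed(path)]
```

For example, `viterbi(["low", "high", "high"])` decodes to an emergence year followed by two growth years under the assumed parameters.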

2026 Vol. 45 (2): 193-213


Diffusion Cycles and Structural Characteristics of Scientific Knowledge Across Disciplines: A Perspective of Theoretical Terminology Analysis

Zhang Weichong, Wang Fang, Zhao Hong

DOI: 10.3772/j.issn.1000-0135.2026.02.004

Unlike citations or keywords, theories, as fundamental knowledge units, better reveal the structural connections and knowledge transfer pathways across disciplines. To uncover the diffusion dynamics and structural characteristics of theories within the scientific knowledge system, this study analyzes approximately 2.25 million articles published between 1985 and 2019 in the Web of Science Core Collection and 2,833 distinct theoretical terms extracted from them. A theory diffusion dataset was constructed through five steps: theory classification, term lexicon building, literature collection, automatic term extraction, and data integration. An S-shaped diffusion model was applied to characterize the cross-disciplinary diffusion cycle of theories, and a novel dual-indicator framework of "disciplinary exclusivity-disciplinary dispersion" was proposed to capture their interdisciplinary diffusion patterns. Furthermore, a multi-disciplinary relational map was generated from theory co-occurrence networks to reveal the structural features of disciplinary interactions. The results show that: (1) the diffusion of theories across disciplines generally follows a logistic curve, with half of the diffusion process completed in about 12 years on average, reflecting distinct evolutionary stages; (2) a significant differentiation exists between disciplinary specificity and generality, with the applied sciences serving as a critical bridge that facilitates the translation and integration of theoretical knowledge; and (3) the scientific system as a whole exhibits a highly interconnected network composed of five tightly interwoven disciplinary clusters, indicating an overall trend toward integrated knowledge evolution.
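The S-shaped diffusion model above can be made concrete with the standard logistic form. In this hypothetical sketch, `years_between` inverts the logistic to report how long a theory takes to move between two fractions of its saturation level; all parameter values in the example are illustrative, not estimates from the study.

```python
import math

def logistic(t, K, r, t_mid):
    """Cumulative cross-disciplinary adoption at time t under an S-shaped
    (logistic) diffusion model: K = saturation level, r = growth rate,
    t_mid = inflection year at which half of the diffusion is complete."""
    return K / (1.0 + math.exp(-r * (t - t_mid)))

def years_between(frac_a, frac_b, r):
    """Years needed to move from fraction frac_a to frac_b of saturation,
    obtained by inverting the logistic: t(f) = t_mid + ln(f/(1-f))/r."""
    def t_offset(f):
        return math.log(f / (1.0 - f)) / r
    return t_offset(frac_b) - t_offset(frac_a)
```

At the inflection year `t_mid` the curve sits at exactly half of `K`, which is the sense in which "half of the diffusion process" is completed there.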

2026 Vol. 45 (2): 214-229


Technology Opportunity Identification Based on Technology Correlation—Taking the Field of High-Temperature Superconductivity as an Example

Zhu Xiangli, Li Qianzhi, Liu Xiaoping

DOI: 10.3772/j.issn.1000-0135.2026.02.005

To improve the accuracy of identifying potential technology opportunities from scientific research, this study proposes a framework for identifying science-technology linkages that integrates semantic analysis and topological modelling. Combining BERTopic topic modelling, explicit co-occurrence analysis, and implicit link prediction, we construct multidimensional science-technology correlation indicators and adopt the TOPSIS-CRITIC model to assess the innovativeness of scientific topics and the constraints on technological development, so as to identify technological directions with high development potential. Using high-temperature superconductivity as a case study, the empirical analysis identifies six potential technology opportunities that are strongly consistent with frontier development trends in the field, verifying the validity and foresight of the methodology. The study innovatively proposes a term-level link prediction method to address the terminology gap and explores a pathway for identifying promising technological opportunities by integrating semantic and structural features.
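The TOPSIS half of the TOPSIS-CRITIC model is mechanical enough to sketch. The function below ranks alternatives by relative closeness to an ideal solution; in the full method the criterion weights would come from the CRITIC procedure (contrast intensity and inter-criteria correlation), but here they are passed in directly, and all numbers are illustrative.

```python
import math

def topsis(matrix, weights, benefit):
    """Score alternatives with TOPSIS (higher score = closer to ideal).
    matrix:  rows = alternatives, cols = criteria (raw, nonzero columns)
    weights: criterion weights summing to 1 (e.g. from CRITIC)
    benefit: True where higher is better for that criterion."""
    n_crit = len(matrix[0])
    # vector-normalize each column, then apply the criterion weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n_crit)] for row in matrix]
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col)
            for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - i) ** 2 for x, i in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - a) ** 2 for x, a in zip(row, anti)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores
```

An alternative that dominates on every benefit criterion coincides with the ideal point and scores 1.0; one that coincides with the anti-ideal scores 0.0.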

2026 Vol. 45 (2): 230-242


Deep Learning-Driven Multilabel Classification and Semantic Relationship Mapping of Disaster Entities in Chinese Meteorological Historical Materials

Hu Zewen, Xie Shaoke

DOI: 10.3772/j.issn.1000-0135.2026.02.006

This study addressed the automatic identification, multilabel classification, knowledge graph construction, and cyclical dynamic evolution of disaster entities and their semantic relationships in Chinese meteorological historical materials. Meteorological historical data from the early Ming Dynasty, drawn from the Chinese Three Thousand Years of Meteorological Records Collection, were used as the study sample. The BERT-RCNN (bidirectional encoder representations from transformers - recurrent convolutional neural networks) deep learning model was applied to extract and classify disaster entities and to perform multilabel classification of disaster records in the historical data. Based on this process, structured data were generated and used to construct the schema layer of the semantic relationship mapping of historical meteorological hazard entities. The integrated use of the Neo4j graph database and Gephi visualization tools for map visualization and cyclical evolution analysis revealed the temporal changes and spatial distribution characteristics of meteorological hazards in the early Ming Dynasty. The results showed that the BERT-RCNN model demonstrated significant performance advantages in the automatic identification and multilabel classification of meteorological disaster entities, achieving mean micro-F1 scores of up to 96% for classification precision and disaster record recall. Differences in recognition and multilabel classification performance were observed among disaster entities: the model performed best on entities such as floods and droughts, which occurred frequently in the early Ming Dynasty, while performance was poorer for a small number of disaster types with lower occurrence frequencies.

The semantic relationship mapping of historical meteorological disaster entities, constructed from the structured data obtained after multilabel classification, clearly revealed the temporal and spatial variation characteristics of different types of disaster entities. From a temporal perspective, the frequency of meteorological disasters in the early Ming Dynasty exhibited an overall trend of a slight decrease, followed by a gradual increase, and then a sharp increase in the final stage. The main disaster types affecting societal development in this period were floods, droughts, and locusts. With respect to spatial distribution, droughts and floods exhibited higher frequencies and greater spatial concentration in the middle and lower reaches of the Yangtze River and the lower reaches of the Yellow River.
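The micro-F1 metric reported above pools true positives, false positives, and false negatives over all labels before computing F1, which is what makes it suitable for multilabel disaster records. A minimal sketch (the label sets are invented examples, not the paper's data):

```python
def micro_f1(y_true, y_pred):
    """Micro-averaged F1 over multilabel predictions.
    y_true, y_pred: lists of label sets, one per record."""
    tp = sum(len(t & p) for t, p in zip(y_true, y_pred))  # labels hit
    fp = sum(len(p - t) for t, p in zip(y_true, y_pred))  # labels over-predicted
    fn = sum(len(t - p) for t, p in zip(y_true, y_pred))  # labels missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Because counts are pooled globally, frequent labels (here, floods and droughts) dominate the score, matching the paper's observation that rare disaster types are recognized less reliably without dragging the headline figure down.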

2026 Vol. 45 (2): 243-260


Cross-Social Media Platform Emergency Knowledge Collaboration Based on Multimodal Heterogeneous Information Networks

Zhou Wei, An Lu, Wu Xuan, Han Ruilian, Li Gang

DOI: 10.3772/j.issn.1000-0135.2026.02.007

Effective collaboration of multimodal emergency knowledge across social media platforms is critical for improving information integration and intelligent response during public emergencies. To address the limitations of current multimodal heterogeneous information networks in network modeling and representation accuracy, as well as the lack of collaboration pathways, this study proposes a cross-platform emergency knowledge collaboration method. First, multimodal data (text, images, and videos) from multiple platforms are collected to construct heterogeneous information networks with a unified structure and semantic alignment. Second, an enhanced heterogeneous graph convolutional network (HGCN) model is proposed and combined with a multi-head attention fusion mechanism to improve semantic representation. Finally, node- and group-level collaboration strategies are introduced to support structural linkage and semantic coordination across platforms. In the experiments, the proposed Enhanced-HGCN achieved the best clustering performance, with a normalized mutual information (NMI) score of 0.5660, significantly outperforming the baseline models. The cross-platform network achieved a clustering coefficient of 0.451 and reduced the betweenness centrality to 0.83×10⁻⁴, confirming the overall effectiveness of the proposed method for semantic fusion and structural optimization.
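Normalized mutual information, the clustering metric quoted above, compares a clustering against reference labels independently of how cluster IDs are numbered. A self-contained sketch (geometric-mean normalization is assumed; the example label vectors are invented):

```python
import math
from collections import Counter

def nmi(labels_a, labels_b):
    """Normalized mutual information between two cluster assignments,
    normalized by the geometric mean of the two entropies."""
    n = len(labels_a)
    ca, cb = Counter(labels_a), Counter(labels_b)
    joint = Counter(zip(labels_a, labels_b))
    # mutual information from the joint and marginal label distributions
    mi = sum(c / n * math.log((c / n) / ((ca[a] / n) * (cb[b] / n)))
             for (a, b), c in joint.items())

    def entropy(counts):
        return -sum(c / n * math.log(c / n) for c in counts.values())

    denom = math.sqrt(entropy(ca) * entropy(cb))
    return mi / denom if denom else 0.0  # degenerate single-cluster case
```

Identical clusterings score 1.0 even when the cluster IDs are permuted, and statistically independent clusterings score 0.0, so the paper's 0.5660 indicates a clustering substantially, but not perfectly, aligned with the reference structure.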

2026 Vol. 45 (2): 261-276


Intelligence Technology and Application


Automatic Extraction of Grant Elements for Fine-Grained Matching with Outcomes

Lu Wei, Yi Fan, Huang Yong, Jiang Yi, Liu Yinpeng, Cheng Qikai

DOI: 10.3772/j.issn.1000-0135.2026.02.008

Scientific funding plays a vital role in advancing scientific progress. However, in current fund management practice, a mismatch remains between grant outcomes and grant content, highlighting the urgent need for a fine-grained evaluation mechanism to ensure funding effectiveness. A prerequisite for fine-grained matching between grant outcomes and content is the extraction of key grant elements. Existing research primarily focuses on sentence-level rhetorical classification, which lacks the required granularity, whereas problem and method extraction in scientific papers often targets a single research issue and fails to accommodate grants involving multiple sub-problems. To address these limitations, this study focused on extracting key elements from research grant proposals. We defined five core grant elements: background, questions, methods, objectives, and significance. Three strategies—zero-shot learning, one-shot learning, and fine-tuning—were employed in conjunction with large language models to extract these grant elements. The fine-tuning strategy yielded the best performance, achieving a ROUGE-L score of 0.849, which demonstrates the effectiveness and practical applicability of the fine-tuned model for extracting grant elements. This work lays a methodological foundation for subsequent downstream tasks and provides valuable tools for managing and evaluating scientific research projects and their outcomes.
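ROUGE-L, the score used above to evaluate extracted grant elements, is based on the longest common subsequence (LCS) between reference and generated text. A minimal token-level sketch (the F-measure weighting and whitespace tokenization are simplifying assumptions):

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of token lists a and b,
    computed with a rolling one-row dynamic-programming table."""
    dp = [0] * (len(b) + 1)
    for x in a:
        prev = 0  # dp value from the previous row, one column left
        for j, y in enumerate(b, 1):
            cur = dp[j]
            dp[j] = prev + 1 if x == y else max(dp[j], dp[j - 1])
            prev = cur
    return dp[-1]

def rouge_l(reference, candidate, beta=1.0):
    """ROUGE-L F-measure between two token lists (beta balances recall
    against precision; beta=1 gives the harmonic mean)."""
    lcs = lcs_len(reference, candidate)
    if lcs == 0:
        return 0.0
    r = lcs / len(reference)
    p = lcs / len(candidate)
    return (1 + beta ** 2) * p * r / (r + beta ** 2 * p)
```

Unlike n-gram overlap, LCS rewards in-order matches without requiring them to be contiguous, which suits free-form element summaries extracted from proposals.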

2026 Vol. 45 (2): 277-290


Technology Opportunity Identification Based on Dynamic Graph Neural Networks

Du Xianjin, Xu Yuxiang, Che Zifan, Fu Hong, Wu Gen

DOI: 10.3772/j.issn.1000-0135.2026.02.009

Technological opportunity identification is a critical driver of innovation. This study proposes a dynamic graph neural network (DGNN)-based identification method to improve the accuracy and timeliness of technological opportunity identification. The approach constructs annual International Patent Classification (IPC) co-occurrence networks and uses feature learning to obtain node topological, textual, and hierarchical semantic attributes. These features are then weighted and fused using multimodal fusion methods and attention mechanisms. Within the trained DGNN model, a long short-term memory (LSTM) network models the evolution of the network structure and node attributes, enabling link prediction for potential future IPC combinations. Technological opportunities are evaluated by combining centrality metrics with the Louvain algorithm. In the task of identifying technological opportunities in the field of new energy vehicle manufacturing, all indicators of the proposed model are significantly better than those of the baseline models; notably, the AUC reaches 0.875 and the F1 score reaches 0.823, which are 6.45% and 6.74% higher, respectively, than those of the second-best model, EvolveGCN. The results reveal technological hotspot trends and development directions and provide actionable references and guidance for innovation practice.
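The AUC reported for the link-prediction task has a direct pairwise reading: it is the probability that a randomly chosen true IPC link is scored above a randomly chosen non-link, with ties counted as half. A brute-force sketch of that interpretation (the scores are invented, not the paper's outputs):

```python
def auc(pos_scores, neg_scores):
    """AUC for link prediction from raw scores: the fraction of
    (positive, negative) pairs where the positive link scores higher,
    counting ties as half a win."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```

This quadratic-time version is fine for illustration; production evaluations sort the scores once instead of comparing every pair.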

2026 Vol. 45 (2): 291-302


Long-Text Relation Recognition Based on LLM-BERT Collaborative Framework

Wu Shuai, He Lin, Lyu Xingyue, Lu Yingjie, Wu Can, Wang Xinzhe

DOI: 10.3772/j.issn.1000-0135.2026.02.010

Long-text relation recognition plays an important role in scientific and technological intelligence and the digital humanities and is key to the transformation from knowledge reorganization to knowledge discovery. However, owing to the characteristics of long texts, such as large context spans, scattered semantic clues, and complex entity references, a traditional large language model (LLM) is prone to insufficient contextual understanding, semantic shifts, and hallucinations when dealing with such text. As a result, the value-added potential of long texts has yet to be fully realized in practical applications in scientific and technological intelligence, humanities computing, and other fields. To solve these problems, we first constructed an entity relationship system based on the clustering results of relationship trigger words. Second, targeting long-text features, a long-text relation recognition algorithm based on the LLM-BERT collaborative framework was designed to improve semantic relevance. Third, the advantages of the pretrained model, deep learning networks, and the attention mechanism for processing text features were integrated to construct the BERT-CNN-BiLSTM-MHA (BCBM) model to deeply mine text semantics. Finally, a summary quality assessment mechanism combining model confidence and text similarity was designed to mitigate LLM hallucination. The experimental results show that the proposed algorithm outperforms traditional models and, to a certain extent, alleviates the insufficient contextual understanding, semantic shifts, and hallucinations that LLMs are prone to when processing long text.
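The summary quality assessment mechanism described above combines model confidence with text similarity. The gate below is a hypothetical reconstruction of that idea: `accept_summary`, its thresholds, and the blend weight `alpha` are assumptions for illustration, with cosine similarity over embeddings standing in for the paper's text-similarity measure.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def accept_summary(confidence, source_vec, summary_vec,
                   conf_min=0.7, sim_min=0.8, alpha=0.5):
    """Hypothetical quality gate: accept a generated summary only when a
    weighted blend of model confidence and source-summary similarity
    clears the blended threshold, flagging likely hallucinations otherwise."""
    score = alpha * confidence + (1 - alpha) * cosine(source_vec, summary_vec)
    return score >= alpha * conf_min + (1 - alpha) * sim_min
```

A high-confidence summary whose embedding drifts far from the source text is rejected, which is one simple way a confidence-plus-similarity check can catch semantic shift.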

2026 Vol. 45 (2): 303-318