Version 1: Received: 18 July 2024 / Approved: 19 July 2024 / Online: 19 July 2024 (09:49:47 CEST)
How to cite:
Ma, N.; Di, W. Research on Machine Reading Comprehension Integrating Long-distance Semantic Relations and Linguistic Features. Preprints 2024, 2024071590. https://doi.org/10.20944/preprints202407.1590.v1
APA Style
Ma, N., & Di, W. (2024). Research on Machine Reading Comprehension Integrating Long-distance Semantic Relations and Linguistic Features. Preprints. https://doi.org/10.20944/preprints202407.1590.v1
Chicago/Turabian Style
Ma, N., and Wu Di. 2024. "Research on Machine Reading Comprehension Integrating Long-distance Semantic Relations and Linguistic Features." Preprints. https://doi.org/10.20944/preprints202407.1590.v1
Abstract
Machine reading comprehension is a crucial area of research in natural language processing (NLP), aiming to enable machines to read and understand text as humans do and to answer questions about its content. Pre-trained language models, represented by BERT, have outperformed traditional models on many NLP tasks, establishing the pre-trained paradigm in the field. This paper addresses the inability of pre-trained language models to capture long-distance semantic relations and to make efficient use of linguistic features. First, recent developments in pre-trained language models are reviewed. Then, two feature graphs are constructed to express structured long-distance semantic correlations explicitly, and traditional sequence-structure features are integrated with them; the influence of different graph-construction methods on machine reading comprehension is compared. Finally, the application of pre-trained language models to machine reading comprehension is summarized and future directions are discussed. By fusing graph-structure features, the model learns richer linguistic knowledge and thereby further improves its reasoning ability.
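The fusion of graph-structure features with sequence features described in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' actual model: the function name, the mean aggregation over graph neighbours, and the residual fusion step are all assumptions standing in for whatever graph encoder and fusion mechanism the paper uses.

```python
import numpy as np

def fuse_graph_features(token_states, adjacency):
    """Fuse sequence features with graph-structure features (illustrative).

    token_states: (n, d) contextual token vectors, a stand-in for the
                  output of a pre-trained encoder such as BERT.
    adjacency:    (n, n) 0/1 matrix of long-distance semantic edges,
                  e.g. dependency or coreference links between tokens.
    Returns (n, d) fused representations: each token's vector is averaged
    with those of its graph neighbours, then added back residually.
    """
    # Add self-loops so tokens without edges keep their own features.
    adj = adjacency + np.eye(adjacency.shape[0])
    # Row-normalise: simple mean aggregation over graph neighbours.
    adj = adj / adj.sum(axis=1, keepdims=True)
    graph_states = adj @ token_states
    # Residual fusion of the sequence and graph views.
    return token_states + graph_states

# Toy example: 4 tokens with 3-dim features; one edge links tokens 0 and 3,
# modelling a long-distance relation that sequence order alone would miss.
states = np.arange(12, dtype=float).reshape(4, 3)
adj = np.zeros((4, 4))
adj[0, 3] = adj[3, 0] = 1.0
fused = fuse_graph_features(states, adj)
print(fused.shape)  # (4, 3)
```

Under this sketch, a token with no graph neighbours is simply doubled by the residual step, while linked tokens mix in each other's features; a real model would replace the fixed mean with learned (e.g. graph-attention) weights.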
Keywords
Machine reading comprehension; Long-distance semantic relations; Linguistic features; Pre-trained language models
Subject
Computer Science and Mathematics, Artificial Intelligence and Machine Learning
Copyright:
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.