Article
Preserved in Portico. This version is not peer-reviewed.
Automatic Voice Query Service for Multi-Accented Mandarin Speech
Version 1
Received: 20 March 2021 / Approved: 22 March 2021 / Online: 22 March 2021 (10:55:53 CET)
How to cite: Xiao, K.; Qian, Z. Automatic Voice Query Service for Multi-Accented Mandarin Speech. Preprints 2021, 2021030513. https://doi.org/10.20944/preprints202103.0513.v1
Abstract
An Automatic Voice Query Service (AVQS) can greatly reduce labor costs and improve response efficiency for users. Automatic speech recognition (ASR) is one of the key components of an AVQS. However, because China has many dialect regions, an AVQS must serve multi-accented Mandarin users with a single acoustic model in its ASR system. This severely limits the ASR accuracy for multi-accented speech in the AVQS. In this paper, a new AVQS framework is proposed to improve the accuracy of responses. First, a fusion feature combining iVector and filterbank acoustic features is used to train a Transformer-CTC model. Second, the Transformer-CTC model is used to construct an end-to-end ASR system. Finally, a keyword matching algorithm for the AVQS based on fuzzy mathematical theory is proposed to further improve response accuracy. The results show that the proposed AVQS framework achieves a final accuracy of 91.5%. The proposed framework can satisfy the service requirements of different regions in mainland China. This research is significant for exploring the application value of artificial intelligence in real-world scenarios.
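The abstract's final stage, keyword matching based on fuzzy mathematical theory, can be illustrated with a minimal sketch. This is not the paper's exact algorithm: the membership function here (a normalized edit-distance similarity in [0, 1]), the 0.5 threshold, and the example keywords are all assumptions made for illustration. The idea is that each service keyword receives a fuzzy membership degree relative to the ASR hypothesis, and the query is routed to the keyword with the highest degree, tolerating recognition errors caused by accented speech.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming (single rolling row)."""
    dp = list(range(len(b) + 1))
    for i in range(1, len(a) + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, len(b) + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                          # deletion
                        dp[j - 1] + 1,                      # insertion
                        prev + (a[i - 1] != b[j - 1]))      # substitution
            prev = cur
    return dp[len(b)]


def membership(hypothesis: str, keyword: str) -> float:
    """Fuzzy membership degree: 1.0 for an exact match, decaying toward 0."""
    if not hypothesis and not keyword:
        return 1.0
    d = edit_distance(hypothesis, keyword)
    return 1.0 - d / max(len(hypothesis), len(keyword))


def match_keyword(hypothesis, keywords, threshold=0.5):
    """Return the keyword with the highest membership degree for the ASR
    hypothesis, or None if no keyword clears the threshold."""
    best = max(keywords, key=lambda k: membership(hypothesis, k))
    return best if membership(hypothesis, best) >= threshold else None
```

For example, a hypothesis with one or two character-level recognition errors (e.g. "帐单查洵" for the keyword "账单查询") still clears the threshold and is routed correctly, whereas an unrelated utterance matches nothing and can be escalated to a human operator.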
Keywords
Automatic Voice Query Service; Automatic Speech Recognition; Multi-Accented Mandarin Speech Recognition
Subject
Engineering, Automotive Engineering
Copyright: This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.