Article
Version 1
Preserved in Portico. This version is not peer-reviewed.
CheapSE: Improving Magnitude-Based Speech Enhancement Using Self-Reference
Received: 18 March 2024 / Approved: 19 March 2024 / Online: 19 March 2024 (11:20:26 CET)
How to cite: Dai, B.; Tan, K.; Xue, H.; Lu, H. CheapSE: Improving Magnitude-Based Speech Enhancement Using Self-Reference. Preprints 2024, 2024031140. https://doi.org/10.20944/preprints202403.1140.v1
Abstract
This study addresses the critical challenge of Speech Enhancement (SE) in noisy environments, where deploying Deep Neural Network (DNN) solutions on microcontroller units (MCUs) is hindered by their extensive computational demands. To close this gap, we introduce a novel SE method optimized for MCUs, employing a 2-layer GRU model that exploits perceptual speech properties and new training methodologies. By incorporating self-reference signals and a dual strategy of compression and recovery based on the Mel scale, we develop an efficient model tailored for low-latency applications. Our GRU-2L-128 model achieves a significant reduction in size and computational cost, with a 14.2× decrease in model size and a 409.1× reduction in operations compared to conventional DNN methods such as DCCRN, without compromising performance. This advancement offers a promising solution for improving speech intelligibility on resource-constrained devices, marking a pivotal step in SE research and application.
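The abstract's "compression and recovery based on the Mel scale" refers to mapping linear-frequency magnitude bins onto a perceptually spaced Mel axis before the GRU and back afterwards. As a minimal sketch, the standard HTK-style Mel formulas can be used to place band edges; the band count (64) and sample rate (16 kHz) below are illustrative assumptions, not values taken from the paper.

```python
# Sketch: Mel-scale band placement for magnitude compression/recovery.
# The hz_to_mel/mel_to_hz formulas are the standard HTK definitions;
# n_bands=64 and sr=16000 are assumed example values.
import math

def hz_to_mel(f_hz: float) -> float:
    """Standard (HTK-style) Hz -> Mel conversion."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m: float) -> float:
    """Inverse mapping, Mel -> Hz."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_band_edges(n_bands: int, sr: int = 16000) -> list:
    """Equally spaced Mel band edges covering 0 .. sr/2 (n_bands + 2 points)."""
    m_max = hz_to_mel(sr / 2.0)
    return [mel_to_hz(m_max * i / (n_bands + 1)) for i in range(n_bands + 2)]

edges = mel_band_edges(64)  # hypothetical 64-band compression at 16 kHz
```

Averaging magnitude bins within each band compresses the spectrum; recovery interpolates the enhanced band values back to the linear-frequency grid, which is what keeps the GRU's input dimension (and hence its cost) small.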
Keywords
GRU; Self-Reference; speech enhancement
Subject
Computer Science and Mathematics, Security Systems
Copyright: This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.