Article
Version 1
This version is not peer-reviewed
Tiny Residual Networks
Version 1: Received: 23 July 2024 / Approved: 24 July 2024 / Online: 24 July 2024 (17:50:03 CEST)
How to cite: Singh, S.; Khatri, I.; Zhou, X. Tiny Residual Networks. Preprints 2024, 2024071953. https://doi.org/10.20944/preprints202407.1953.v1
Abstract
This study presents the Distilled MixUp Squeeze ResNet (DMSResNet), an optimized variant of the Residual Network architecture designed for high accuracy and computational efficiency. By integrating Squeeze-and-Excitation blocks, MixUp data augmentation, and knowledge distillation, the model achieves competitive performance under strict parameter constraints and limited training resources. Evaluated on the CIFAR-10 dataset, it attains 96.56% accuracy with only 4.3 million total parameters. These results demonstrate that careful optimization of established architectures can yield performance comparable to newer models, especially in resource-limited scenarios. This work contributes to the development of efficient deep learning models for applications in constrained computational environments.
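To make the three named ingredients concrete, the sketch below illustrates, in PyTorch, a generic Squeeze-and-Excitation block, a MixUp batch transform, and a soft-target distillation loss as they are typically combined with a CIFAR-10 ResNet. This is not the authors' implementation: the reduction ratio, the Beta(alpha, alpha) parameter, the temperature T, and all function names are illustrative assumptions.

```python
# Illustrative sketch only (not the paper's code): SE block, MixUp, and a
# distillation loss in the generic forms commonly paired with a small ResNet.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SEBlock(nn.Module):
    """Squeeze-and-Excitation: re-weight channels using globally pooled statistics."""
    def __init__(self, channels: int, reduction: int = 16):  # reduction ratio is an assumption
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s = x.mean(dim=(2, 3))                                # squeeze: global average pool
        g = torch.sigmoid(self.fc2(F.relu(self.fc1(s))))      # excitation: per-channel gates in (0, 1)
        return x * g.view(x.size(0), x.size(1), 1, 1)         # re-scale feature maps channel-wise


def mixup(x: torch.Tensor, y: torch.Tensor, alpha: float = 0.2):
    """MixUp: convex combination of two examples; labels are mixed in the loss."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mixed = lam * x + (1.0 - lam) * x[perm]
    # Training loss: lam * CE(logits, y) + (1 - lam) * CE(logits, y[perm])
    return x_mixed, y, y[perm], lam


def distillation_loss(student_logits: torch.Tensor, teacher_logits: torch.Tensor, T: float = 4.0):
    """Knowledge distillation: match softened teacher and student distributions (T is an assumption)."""
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)
```

In a typical training loop, MixUp is applied to each batch before the forward pass, the SE blocks sit inside the residual blocks after the second convolution, and the distillation term is added to the (mixed) cross-entropy loss with a weighting factor; the specific placement and weights used by DMSResNet are described in the paper itself.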
Keywords
Residual Neural Networks; Deep Learning Optimization
Subject
Computer Science and Mathematics, Artificial Intelligence and Machine Learning
Copyright: This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.