Preprint | Article | Version 1 | This version is not peer-reviewed
Simulating Fractional Stride Decrease in Binary Artificial Neural Networks for Autonomous Spacecraft Systems Using Exponential and Logarithmic Decay: A Comparative Experiment in Energy-Efficient ANN Design
Version 1: Received: 29 October 2024 / Approved: 30 October 2024 / Online: 30 October 2024 (13:13:02 CET)
How to cite:
MDPI and ACS Style
Lancelot, J. F. Simulating Fractional Stride Decrease in Binary Artificial Neural Networks for Autonomous Spacecraft Systems Using Exponential and Logarithmic Decay: A Comparative Experiment in Energy-Efficient ANN Design. Preprints 2024, 2024102426. https://doi.org/10.20944/preprints202410.2426.v1
APA Style
Lancelot, J. F. (2024). Simulating Fractional Stride Decrease in Binary Artificial Neural Networks for Autonomous Spacecraft Systems Using Exponential and Logarithmic Decay: A Comparative Experiment in Energy-Efficient ANN Design. Preprints. https://doi.org/10.20944/preprints202410.2426.v1
Chicago/Turabian Style
Lancelot, J. F. 2024. "Simulating Fractional Stride Decrease in Binary Artificial Neural Networks for Autonomous Spacecraft Systems Using Exponential and Logarithmic Decay: A Comparative Experiment in Energy-Efficient ANN Design." Preprints. https://doi.org/10.20944/preprints202410.2426.v1
Abstract
This paper presents a novel approach to simulating fractional stride decreases in binary artificial neural networks (ANNs), addressing the limitations of the fixed integer strides typically used in traditional convolutional neural networks (CNNs). By introducing a dynamic stride adjustment mechanism that reduces stride over time using either an exponential or a logarithmic decay function, the network can simulate fractional strides, capturing increasingly fine-grained spatial detail at deeper layers. This method improves output resolution and feature extraction without increasing network complexity, making it well suited to binary ANN architectures where computational efficiency is critical. Using synthetic data, we analyze how the two decay functions affect output resolution in a comparative experiment. Exponential decay produces a rapid initial stride decrease, yielding significant early resolution gains, while logarithmic decay offers a more gradual and controlled resolution increase. Both methods are augmented by an attention mechanism that selectively emphasizes important regions in the feature maps, improving the efficiency of spatial feature extraction. Our findings demonstrate that dynamic, asymptotic stride adjustment is an effective way to simulate fractional strides in binary ANNs, with potential applications in object detection, segmentation, super-resolution, and autonomous systems, including spacecraft navigation and hazard detection. We also examine the trade-offs between the two decay methods in terms of resolution growth and computational overhead, contributing to ongoing research in adaptive ANN architectures and binary neural network design.
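The abstract describes the two stride schedules only qualitatively. The minimal Python sketch below shows one plausible form of each schedule and how their behaviors differ across layers; the decay formulas, parameter values, and function names are illustrative assumptions, not the paper's implementation.

```python
import math

# Minimal sketch of the two stride schedules described in the abstract.
# The decay forms, parameter values, and function names are illustrative
# assumptions, not the paper's exact parameterization.

def exponential_stride(s0: float, k: float, layer: int) -> float:
    """Exponential decay: stride falls quickly in early layers, then
    asymptotically approaches the minimum stride of 1."""
    return max(1.0, s0 * math.exp(-k * layer))

def logarithmic_stride(s0: float, c: float, layer: int) -> float:
    """Logarithmic decay: stride falls gradually and predictably
    as depth increases."""
    return max(1.0, s0 - c * math.log1p(layer))

if __name__ == "__main__":
    # Compare the two schedules over six layers, starting from stride 4.
    for layer in range(6):
        e = exponential_stride(4.0, 0.5, layer)
        g = logarithmic_stride(4.0, 1.2, layer)
        print(f"layer {layer}: exponential={e:.2f}  logarithmic={g:.2f}")
```

In this sketch the floor enforced by max(..., 1.0) models the asymptotic behavior the abstract attributes to both schedules: the exponential form reaches the floor within a few layers (rapid early resolution gains), while the logarithmic form approaches it slowly (gradual, controlled resolution growth).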
Subject: Computer Science and Mathematics, Artificial Intelligence and Machine Learning
Copyright:
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.