Preprint Article, Version 1. This version is not peer-reviewed.

Simulating Fractional Stride Decrease in Binary Artificial Neural Networks for Autonomous Spacecraft Systems Using Exponential and Logarithmic Decay: A Comparative Experiment in Energy-Efficient ANN Design

Version 1: Received: 29 October 2024 / Approved: 30 October 2024 / Online: 30 October 2024 (13:13:02 CET)

How to cite: Lancelot, J. F. Simulating Fractional Stride Decrease in Binary Artificial Neural Networks for Autonomous Spacecraft Systems Using Exponential and Logarithmic Decay: A Comparative Experiment in Energy-Efficient ANN Design. Preprints 2024, 2024102426. https://doi.org/10.20944/preprints202410.2426.v1

Abstract

This paper takes a novel approach to simulating fractional stride decreases in binary artificial neural networks (ANNs), addressing the limitations of the fixed integer strides typically used in traditional convolutional neural networks (CNNs). By introducing a dynamic stride adjustment mechanism that reduces stride over time using either exponential or logarithmic decay functions, the network can simulate fractional strides, enabling the capture of increasingly fine-grained spatial details at deeper layers. This method improves output resolution and feature extraction without increasing network complexity, and offers a practical option for binary ANN architectures where computational efficiency is critical. Using synthetic data, we analyze how both decay functions affect output resolution in a comparative experiment. Exponential decay produces a rapid initial stride decrease, leading to significant early resolution gains, while logarithmic decay offers a more gradual and controlled resolution increase. Both methods are enhanced by an attention mechanism that selectively emphasizes important regions in feature maps, improving the efficiency of spatial feature extraction. Our findings demonstrate that dynamic, asymptotic stride adjustment provides an effective method for simulating fractional strides in binary ANNs, with potential applications in areas such as object detection, segmentation, super-resolution, and autonomous systems, including spacecraft navigation and hazard detection. We also examine the trade-offs between the two decay methods in terms of resolution growth and computational overhead, contributing to ongoing research in adaptive ANN architectures and binary neural network design.
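For illustration, the following minimal Python sketch shows one way such a decaying stride schedule could be simulated. The exponential form s0 * exp(-k * t) and the logarithmic form s0 / (1 + k * ln(1 + t)), along with the parameter names s0, k, and the minimum-stride floor s_min, are assumptions chosen to match the qualitative behavior described in the abstract, not the paper's exact formulation.

```python
import math

def exponential_stride(s0, k, t, s_min=0.1):
    """Assumed exponential schedule: stride decays as s0 * exp(-k * t),
    floored at s_min. Gives a rapid initial stride decrease."""
    return max(s_min, s0 * math.exp(-k * t))

def logarithmic_stride(s0, k, t, s_min=0.1):
    """Assumed logarithmic schedule: stride decays as s0 / (1 + k * ln(1 + t)),
    floored at s_min. Gives a slower, more controlled decrease."""
    return max(s_min, s0 / (1.0 + k * math.log1p(t)))

def output_resolution(input_size, stride):
    """Effective output size of a conceptual convolution with a fractional
    stride: smaller strides yield larger (finer-grained) feature maps."""
    return int(round(input_size / stride))

if __name__ == "__main__":
    s0, k, input_size = 2.0, 0.5, 64  # illustrative values only
    for layer in range(6):
        se = exponential_stride(s0, k, layer)
        sl = logarithmic_stride(s0, k, layer)
        print(f"layer {layer}: exp stride {se:.2f} -> {output_resolution(input_size, se)} px, "
              f"log stride {sl:.2f} -> {output_resolution(input_size, sl)} px")
```

Running the sketch shows the trade-off noted in the abstract: the exponential schedule reaches fine strides (and hence higher effective resolution) within the first few layers, while the logarithmic schedule approaches them gradually.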

Keywords

artificial neural network; convolutional neural network; artificial intelligence

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
