Article
Version 1
Preserved in Portico. This version is not peer-reviewed.
Accelerated Gradient Descent Using Instance Eliminating Back Propagation
Received: 6 August 2020 / Approved: 7 August 2020 / Online: 7 August 2020 (09:29:54 CEST)
How to cite: Hosseinali, F. Accelerated Gradient Descent Using Instance Eliminating Back Propagation. Preprints 2020, 2020080181. https://doi.org/10.20944/preprints202008.0181.v1
Abstract
Artificial Intelligence is dominated by Artificial Neural Networks (ANNs). Batch Gradient Descent (BGD) is currently the standard method for training ANN weights on large datasets. This article proposes a modification to BGD that significantly reduces training time and improves convergence. The modification, called Instance Eliminating Back Propagation (IEBP), excludes correctly predicted instances from the Back Propagation pass. The speedup comes from eliminating unnecessary matrix multiplication operations in the backward pass. The proposed modification adds no training hyperparameters to the existing ones and reduces memory consumption during training.
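The core idea in the abstract can be sketched in a few lines of NumPy. The following is a minimal illustration, not the paper's implementation: it assumes a hypothetical single-layer softmax classifier, and the variable names (X, y, W, wrong) are the author's own for this sketch. After the forward pass, only the misclassified instances are retained for the gradient computation, so the backward matrix multiplications operate on fewer rows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: one mini-batch for a single-layer softmax classifier.
n, d, k = 8, 4, 3            # batch size, feature count, class count
X = rng.normal(size=(n, d))
y = rng.integers(0, k, size=n)
W = np.zeros((d, k))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Forward pass over the full batch.
probs = softmax(X @ W)
pred = probs.argmax(axis=1)

# Instance elimination, as described in the abstract: keep only the
# incorrectly predicted instances for the backward pass.
wrong = pred != y
Xw, yw, pw = X[wrong], y[wrong], probs[wrong]

# Standard cross-entropy gradient, computed only on the retained
# instances -- fewer rows means fewer multiply-add operations.
grad_out = pw.copy()
grad_out[np.arange(len(yw)), yw] -= 1.0
grad_W = Xw.T @ grad_out / max(len(yw), 1)

W -= 0.1 * grad_W            # plain gradient step on the reduced batch
```

Because the eliminated rows are simply dropped before the backward matrix products, no extra hyperparameter is introduced and the intermediate gradient buffers shrink with the number of correctly predicted instances, consistent with the memory claim in the abstract.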
Keywords
Artificial Neural Networks; Gradient Descent; Back Propagation; Instance Elimination; Speedup; Batch Size
Subject
Computer Science and Mathematics, Computer Science
Copyright: This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.