Preprint
Article

Accelerated Gradient Descent Using Instance Eliminating Back Propagation


This version is not peer-reviewed

Submitted: 06 August 2020

Posted: 07 August 2020

Abstract
Artificial Intelligence is dominated by Artificial Neural Networks (ANNs). Currently, Batch Gradient Descent (BGD) is the only solution for training ANN weights on large datasets. In this article, a modification to BGD is proposed that significantly reduces training time and improves convergence. The modification, called Instance Eliminating Back Propagation (IEBP), eliminates correctly predicted instances from back propagation. The speedup comes from removing unnecessary matrix multiplication operations from the backward pass. The proposed modification adds no training hyperparameters to the existing ones and reduces memory consumption during training.
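
Below is a minimal sketch of the idea described in the abstract, assuming a toy one-hidden-layer NumPy classifier (layer sizes, data, and variable names are illustrative assumptions, not the authors' implementation): the forward pass runs on the full batch, a mask selects the misclassified instances, and the backward pass multiplies only the reduced matrices, which is where the saved matrix multiplications and memory come from.

    # Sketch of instance-eliminating back propagation on a toy network.
    # Forward pass uses the whole batch; backward pass keeps only the
    # instances whose class was predicted incorrectly in this epoch.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: 512 instances, 20 features, 3 classes (assumed for illustration).
    X = rng.normal(size=(512, 20))
    y = rng.integers(0, 3, size=512)
    Y = np.eye(3)[y]                      # one-hot targets

    W1 = rng.normal(scale=0.1, size=(20, 32))
    W2 = rng.normal(scale=0.1, size=(32, 3))
    lr = 0.1

    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    for epoch in range(20):
        # Forward pass on the full batch.
        H = np.tanh(X @ W1)
        P = softmax(H @ W2)

        # Instance elimination: keep only misclassified instances for backprop.
        wrong = P.argmax(axis=1) != y
        if not wrong.any():
            break
        Xw, Hw, Pw, Yw = X[wrong], H[wrong], P[wrong], Y[wrong]

        # Backward pass restricted to the remaining instances
        # (smaller matrices, hence fewer multiply-accumulate operations).
        dZ2 = (Pw - Yw) / len(Xw)
        dW2 = Hw.T @ dZ2
        dH = dZ2 @ W2.T
        dZ1 = dH * (1.0 - Hw ** 2)
        dW1 = Xw.T @ dZ1

        W2 -= lr * dW2
        W1 -= lr * dW1

As the network improves, the mask retains fewer instances, so the backward-pass matrices shrink over training; this sketch only illustrates the mechanism and leaves the convergence behaviour to the article itself.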
Keywords: 
Subject: Computer Science and Mathematics - Computer Science
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.