Article
Version 1
Preserved in Portico. This version is not peer-reviewed.
Online Sequential Extreme Learning Machine: A New Training Scheme for Restricted Boltzmann Machines
Received: 25 May 2020 / Approved: 27 May 2020 / Online: 27 May 2020 (08:18:39 CEST)
How to cite: Tarek, B. Online Sequential Extreme Learning Machine: A New Training Scheme for Restricted Boltzmann Machines. Preprints 2020, 2020050444. https://doi.org/10.20944/preprints202005.0444.v1
Abstract
The main contribution of this paper is a new iterative training algorithm for restricted Boltzmann machines. The proposed learning scheme is inspired by the online sequential extreme learning machine (OS-ELM), a variant of the extreme learning machine that handles sequences of data accumulated over time in chunks of fixed or varying size. Recursive least-squares rules are integrated for weight adaptation, avoiding learning-rate tuning and local-minimum issues. The proposed approach is compared with contrastive divergence, one of the best-known training algorithms for Boltzmann machines, in terms of time, accuracy, and algorithmic complexity under the same conditions. The results strongly favor the proposed update rules for data reconstruction.
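The OS-ELM update the abstract refers to can be sketched as follows. This is a minimal illustration of the generic OS-ELM recursive least-squares recursion on random data, not the paper's RBM integration; the sigmoid feature map, chunk sizes, and the small ridge term `lam` (added here for a stable initial inverse) are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden(X, W, b):
    """Fixed random hidden layer with sigmoid activation (ELM feature map)."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

n_in, n_hidden, n_out = 4, 20, 2
W = rng.standard_normal((n_in, n_hidden))  # input weights: random, never trained
b = rng.standard_normal(n_hidden)
lam = 1e-3  # assumed small ridge term so the initial inverse is well defined

# Initialization phase: least squares on the first chunk of data.
X0, T0 = rng.standard_normal((30, n_in)), rng.standard_normal((30, n_out))
H0 = hidden(X0, W, b)
P = np.linalg.inv(lam * np.eye(n_hidden) + H0.T @ H0)  # inverse correlation matrix
beta = P @ H0.T @ T0                                   # initial output weights

# Sequential phase: one RLS update per incoming chunk --
# no learning rate to tune, no gradient descent toward a local minimum.
chunks = [(rng.standard_normal((10, n_in)), rng.standard_normal((10, n_out)))
          for _ in range(5)]
for X, T in chunks:
    H = hidden(X, W, b)
    # Woodbury identity keeps P equal to the inverse of the accumulated correlation.
    P = P - P @ H.T @ np.linalg.inv(np.eye(len(H)) + H @ P @ H.T) @ H @ P
    beta = beta + P @ H.T @ (T - H @ beta)
```

After processing all chunks, `beta` equals the (regularized) batch least-squares solution over the full data seen so far, which is what makes the sequential scheme attractive: each chunk is visited once and then discarded.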
Keywords
restricted Boltzmann machine; contrastive divergence; extreme learning machine; online sequential extreme learning machine; autoencoders; deep belief network; deep learning
Subject
Computer Science and Mathematics, Computational Mathematics
Copyright: This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.