Restricted Boltzmann machines (RBMs) are the building blocks of some deep learning networks. However, despite their importance, we find that several important derivations concerning the RBM are missing from the literature, and a beginner may find the RBM very hard to understand. We provide these missing derivations here. We cover the classic Bernoulli-Bernoulli RBM and the Gaussian-Bernoulli RBM, but leave out the ``continuous'' RBM, as it is believed to be less mature than the former two. This tutorial can be used as a companion or complement to the well-known RBM paper ``Training restricted Boltzmann machines: An introduction'' by Fischer and Igel.
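To give a concrete picture of the Bernoulli-Bernoulli RBM discussed above, here is a minimal NumPy sketch of its energy function and conditional sampling rules. This is an illustrative assumption on our part, not code from the tutorial; the names W, a, and b (weights, visible biases, hidden biases) follow common convention.

```python
import numpy as np

# Minimal sketch of a Bernoulli-Bernoulli RBM (illustrative; W, a, b
# are conventional names, not taken from the tutorial itself).
rng = np.random.default_rng(0)

n_visible, n_hidden = 6, 3
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))  # weight matrix
a = np.zeros(n_visible)  # visible biases
b = np.zeros(n_hidden)   # hidden biases

def energy(v, h):
    """Energy of a joint configuration: E(v, h) = -a^T v - b^T h - v^T W h."""
    return -a @ v - b @ h - v @ W @ h

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_h_given_v(v):
    """P(h_j = 1 | v) = sigmoid(b_j + sum_i v_i W_ij)."""
    p = sigmoid(b + v @ W)
    return (rng.random(n_hidden) < p).astype(float), p

def sample_v_given_h(h):
    """P(v_i = 1 | h) = sigmoid(a_i + sum_j W_ij h_j)."""
    p = sigmoid(a + W @ h)
    return (rng.random(n_visible) < p).astype(float), p

# One Gibbs step v0 -> h0 -> v1, the core of contrastive divergence (CD-1).
v0 = rng.integers(0, 2, size=n_visible).astype(float)
h0, _ = sample_h_given_v(v0)
v1, _ = sample_v_given_h(h0)
```

The Gaussian-Bernoulli variant would replace the binary visible units with real-valued ones, changing the energy and the form of P(v | h); the tutorial derives both cases.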
Keywords:
Subject: Computer Science and Mathematics - Artificial Intelligence and Machine Learning
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.
Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.