Article
Version 1
This version is not peer-reviewed
Statistical Signatures of Abstraction in Deep Neural Networks
Version 1: Received: 2 July 2024 / Approved: 2 July 2024 / Online: 3 July 2024 (05:32:10 CEST)
How to cite: Caputo, C. O.; Marsili, M. Statistical Signatures of Abstraction in Deep Neural Networks. Preprints 2024, 2024070253. https://doi.org/10.20944/preprints202407.0253.v1
Abstract
We study how abstract representations emerge in a Deep Belief Network (DBN) trained on benchmark datasets. Our analysis targets the principles of learning in the early stages of information processing, starting from the “primordial soup” of the under-sampling regime. As the data is processed by deeper and deeper layers, features are detected and removed, transferring more and more “context-invariant” information to deeper layers. We show that the representation approaches a universal model – the Hierarchical Feature Model (HFM) – determined by the principle of maximal relevance. Relevance quantifies the uncertainty on the model of the data, suggesting that “meaning” – i.e. syntactic information – is the part of the data that is not yet captured by a model. Our analysis shows that shallow layers are well described by pairwise Ising models, which provide a representation of the data in terms of generic, low-order features. We also show that plasticity increases with depth, much as it does in the brain. These findings suggest that DBNs extract a hierarchy of features from the data that is consistent with the principle of maximal relevance.
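The relevance mentioned in the abstract can be made concrete. A minimal sketch, assuming the standard definition from the maximal-relevance literature: relevance is the entropy of the empirical frequency distribution, i.e. of how many times each distinct state occurs in a sample. The function name `relevance` and the toy sample are illustrative, not from the paper.

```python
from collections import Counter
from math import log2

def relevance(samples):
    """Entropy of the empirical frequency distribution (in bits).

    Each distinct state s is observed k_s times; m_k counts the states
    observed exactly k times. Relevance is
        H[K] = -sum_k (k * m_k / N) * log2(k * m_k / N),
    the entropy of the random variable K = "frequency of the sampled state".
    """
    N = len(samples)
    counts = Counter(samples)       # k_s: frequency of each distinct state
    m = Counter(counts.values())    # m_k: number of states with frequency k
    return -sum((k * mk / N) * log2(k * mk / N) for k, mk in m.items())

# Toy sample: one state seen twice, two states seen once each.
# K takes the values 2 (with prob. 1/2) and 1 (with prob. 1/2).
print(relevance(["a", "a", "b", "c"]))
```

In the under-sampling regime most states are seen only a few times, so this frequency entropy, rather than the (saturated) state entropy, is the informative quantity.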
Keywords
Machine Learning; Statistical Physics; Learning Theory
Subject
Computer Science and Mathematics, Artificial Intelligence and Machine Learning
Copyright: This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.