Preprint Article, Version 1 (this version is not peer-reviewed)

Statistical Signatures of Abstraction in Deep Neural Networks

Version 1 : Received: 2 July 2024 / Approved: 2 July 2024 / Online: 3 July 2024 (05:32:10 CEST)

How to cite: Caputo, C. O.; Marsili, M. Statistical Signatures of Abstraction in Deep Neural Networks. Preprints 2024, 2024070253. https://doi.org/10.20944/preprints202407.0253.v1

Abstract

We study how abstract representations emerge in a Deep Belief Network (DBN) trained on benchmark datasets. Our analysis targets the principles of learning in the early stages of information processing, starting from the “primordial soup” of the under-sampling regime. As the data is processed by deeper and deeper layers, features are detected and removed, transferring more and more “context-invariant” information to deeper layers. We show that the representation approaches a universal model – the Hierarchical Feature Model (HFM) – determined by the principle of maximal relevance. Relevance quantifies the uncertainty on the model of the data, thus suggesting that “meaning” – i.e. syntactic information – is that part of the data which is not yet captured by a model. Our analysis shows that shallow layers are well described by pairwise Ising models, which provide a representation of the data in terms of generic, low-order features. We also show that plasticity increases with depth, much as it does in the brain. These findings suggest that DBNs are capable of extracting a hierarchy of features from the data which is consistent with the principle of maximal relevance.
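The abstract invokes the relevance of a representation as a measure of the uncertainty remaining about the model of the data. As a rough illustration (not taken from the paper's code), the Python sketch below estimates relevance as the entropy of the frequency-of-frequencies distribution of sampled hidden states, one common definition in the maximal-relevance literature; the function name, the toy binary activations, and all parameters are placeholders chosen for this example.

    import numpy as np
    from collections import Counter

    def relevance(samples):
        """Estimate the relevance H[K] of a set of discrete states.

        Here relevance is taken as the entropy of the frequency-of-frequencies
        distribution: H[K] = -sum_k (k * m_k / M) * log(k * m_k / M), where
        m_k is the number of distinct states observed exactly k times and
        M is the total number of samples.
        """
        M = len(samples)
        state_counts = Counter(samples)                # k_s for each observed state s
        freq_of_freq = Counter(state_counts.values())  # m_k for each frequency k
        H_K = 0.0
        for k, m_k in freq_of_freq.items():
            p = k * m_k / M
            H_K -= p * np.log(p)
        return H_K

    # Toy usage: binary hidden-layer activations, one tuple per sample
    rng = np.random.default_rng(0)
    hidden_states = [tuple(row) for row in (rng.random((1000, 10)) < 0.3).astype(int)]
    print(f"Relevance H[K] = {relevance(hidden_states):.3f} nats")

Under this definition, a representation whose states are sampled with many distinct frequencies (rather than all equally often or all only once) yields a higher H[K]; the paper's claim is that deeper layers move toward the maximally relevant form.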

Keywords

Machine Learning; Statistical Physics; Learning Theory

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
