Preprint Article (Version 1) · Preserved in Portico · This version is not peer-reviewed

Epitopological Sparse Deep Learning via Network Link Prediction: A Brain-Inspired Training for Artificial Neural Networks

Version 1 : Received: 7 July 2022 / Approved: 8 July 2022 / Online: 8 July 2022 (10:40:18 CEST)
Version 2 : Received: 8 June 2023 / Approved: 9 June 2023 / Online: 9 June 2023 (08:57:55 CEST)
Version 3 : Received: 26 October 2023 / Approved: 27 October 2023 / Online: 30 October 2023 (05:54:50 CET)

How to cite: Zhang, Y.; Muscoloni, A.; Cannistraci, C. V. Epitopological Sparse Deep Learning via Network Link Prediction: A Brain-Inspired Training for Artificial Neural Networks. Preprints 2022, 2022070139. https://doi.org/10.20944/preprints202207.0139.v1

Abstract

Sparse training (ST) aims to improve deep learning by replacing fully connected artificial neural networks (ANNs) with sparse ones. ST is promising but still at an early stage, and it might therefore benefit from borrowing brain-inspired learning paradigms, such as epitopological learning (EL), from complex network intelligence theory. EL is a field of network science that studies how to implement learning on networks by changing the shape of their connectivity structure (epitopological plasticity). EL was conceived together with the Cannistraci-Hebb (CH) learning theory, according to which the sparse local-community organization of many complex networks (such as brain networks) is coupled to a dynamic local Hebbian learning process and already contains, in its mere structure, enough information to partially predict how the connectivity will evolve during learning. One way to implement EL is via link prediction: estimating the existence likelihood of each non-observed link in a network. CH theory inspired a network automaton rule for link prediction, called CH3-L3, that was recently proven to be very effective for general-purpose link prediction. Here, starting from CH3-L3, we propose a CH training (CHT) approach to implement epitopological sparse deep learning in ANNs. CHT consists of three parts: kick-start pruning, to provide a hint to the link predictor; epitopological prediction, to shape the ANN topology; and weight refinement, to tune the synaptic weight values. Experiments on the MNIST and CIFAR10 datasets compare the efficiency of CHT and other ST algorithms in speeding up ANN training across epochs. While SET leverages random evolution and RigL adopts gradient information, CHT is the first ST algorithm that learns to shape sparsity by using the sparse topological organization of the ANN itself.
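The abstract names two algorithmic ingredients: the CH3-L3 link-prediction rule and its use inside CHT's epitopological-prediction phase. The sketch below illustrates both on a networkx graph. It is a minimal, hypothetical reading, assuming the CH3-L3 form score(u, v) = sum over length-3 paths u-z1-z2-v of 1/sqrt((1 + de_z1)(1 + de_z2)), where de is the number of links an intermediate node has outside the local community of the candidate interaction; the local-community definition is approximated here, and names such as `regrow_links` are illustrative, not the authors' code.

```python
# Hypothetical sketch of a CH3-L3-style link predictor and a regrowth step,
# under the assumptions stated in the lead-in above.
import networkx as nx


def ch3_l3_score(G: nx.Graph, u, v) -> float:
    """CH3-L3-style score for the non-observed link (u, v): sum over
    length-3 paths u-z1-z2-v, penalizing intermediate nodes whose links
    point outside the local community of the candidate interaction."""
    nu, nv = set(G[u]) - {v}, set(G[v]) - {u}
    # Approximation: take the local community as the seed nodes plus the
    # neighbourhoods that supply the intermediate nodes of the L3 paths.
    community = nu | nv | {u, v}
    score = 0.0
    for z1 in nu:
        for z2 in set(G[z1]) & nv:
            if z1 == z2:
                continue
            # External degrees of the two intermediate nodes.
            de1 = sum(1 for w in G[z1] if w not in community)
            de2 = sum(1 for w in G[z2] if w not in community)
            score += 1.0 / ((1 + de1) * (1 + de2)) ** 0.5
    return score


def regrow_links(G: nx.Graph, k: int):
    """Epitopological-prediction step (illustrative name): rank all
    non-observed links by CH3-L3 score and return the k most likely."""
    candidates = [(ch3_l3_score(G, u, v), u, v) for u, v in nx.non_edges(G)]
    candidates.sort(key=lambda t: t[0], reverse=True)
    return [(u, v) for _, u, v in candidates[:k]]
```

In CHT, a step like `regrow_links` would sit in the epitopological-prediction phase: after kick-start pruning removes an initial fraction of connections, the predicted links are added to the sparse topology, and ordinary gradient descent then refines the weights of the surviving connections.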

Keywords

sparse training; neural networks; link prediction; network automata; Cannistraci-Hebb; epitopological learning

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
