Preprint Article, Version 1. Preserved in Portico. This version is not peer-reviewed.

Kolmogorov-Arnold network for word-level explainable meaning representation

Version 1 : Received: 28 May 2024 / Approved: 29 May 2024 / Online: 29 May 2024 (15:00:04 CEST)

How to cite: Galitsky, B. A. Kolmogorov-Arnold network for word-level explainable meaning representation. Preprints 2024, 2024051981. https://doi.org/10.20944/preprints202405.1981.v1

Abstract

We leverage the explainability of Kolmogorov-Arnold networks (KANs) to build an explainable language model in which certain neurons encode individual words and every neuron activation is fully interpretable with respect to a basis of words. To this end, we propose a continuous word2vec model in which the meaning of a word is expressed as a continuous profile obtained by interpolating the distances from this word to the words of the basis. As a result, the whole KAN can be interpreted as a sequential procedure over word expressions. We then proceed from words to logic programs and develop a KAN-based clause learning technique, so that the construction of a logic program from facts becomes fully interpretable. Following the differentiable inductive logic programming technique, we represent a logic program as a matrix learned by the KAN. Hence, we obtain an efficient and fully interpretable rule learning approach.
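The continuous word2vec representation described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the basis words, the toy random embeddings, and the helper names (`cosine_distance`, `continuous_profile`) are all hypothetical stand-ins; in practice one would use real word2vec vectors and the interpolation scheme chosen by the authors.

```python
import numpy as np

# Hypothetical toy setup: a small ordered basis of words and random
# 8-dimensional embeddings standing in for real word2vec vectors.
rng = np.random.default_rng(0)
basis_words = ["animal", "food", "tool", "place"]
embeddings = {w: rng.normal(size=8) for w in basis_words + ["dog"]}

def cosine_distance(u, v):
    """Cosine distance between two embedding vectors."""
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def continuous_profile(word, n_points=100):
    """Express the meaning of `word` as a continuous profile:
    its distances to the basis words, placed at evenly spaced knots
    on [0, 1] and linearly interpolated into a curve that a KAN's
    univariate activation functions can consume."""
    knots = np.linspace(0.0, 1.0, len(basis_words))
    dists = np.array([cosine_distance(embeddings[word], embeddings[b])
                      for b in basis_words])
    xs = np.linspace(0.0, 1.0, n_points)
    return xs, np.interp(xs, knots, dists)

xs, profile = continuous_profile("dog")
```

The profile passes through the exact distance to each basis word at its knot, so reading the curve back at the knots recovers the discrete word2vec distances, which is what makes a neuron activation over this input interpretable "in terms of the basis of words".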

Keywords

Kolmogorov-Arnold network, explainability, symbolic regression, continuous word2vec model, inductive logic programming

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning


