Preprint Article, Version 1 (this version is not peer-reviewed)

MMSE Estimation and Mutual Information Gain

Version 1: Received: 31 July 2024 / Approved: 1 August 2024 / Online: 5 August 2024 (15:38:46 CEST)

How to cite: Gibson, J. MMSE Estimation and Mutual Information Gain. Preprints 2024, 2024080096. https://doi.org/10.20944/preprints202408.0096.v1

Abstract

Information-theoretic quantities such as entropy, entropy rate, information gain, and relative entropy are often used to characterize the performance of intelligent agents in learning applications. Mean squared error has not played a role in these analyses, primarily because it has not been considered a viable performance indicator in such scenarios. We build on a new quantity, the log ratio of entropy powers, to establish that minimum mean squared error (MMSE) estimation, prediction, and smoothing are directly connected to mutual information gain or loss in an agent learning system modeled by a Markov chain, for many probability distributions of interest. Expressions for mutual information gain or loss are developed for MMSE estimation, prediction, and smoothing, and an example of fixed-lag smoothing is presented.
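As a concrete illustration of the connection the abstract describes (a sketch, not taken from the preprint itself), consider the jointly Gaussian case, where the conditional entropy power of X given an observation equals the MMSE of the conditional-mean estimator. The minimal Python check below verifies numerically that the mutual information gain from a finer observation equals half the log ratio of the two MMSEs; the parameters sigma_x, rho1, and rho2 are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: for jointly Gaussian (X, Y), the conditional entropy power
# of X given Y equals the MMSE of the conditional-mean estimator E[X | Y],
# so the mutual information gain between two observation channels reduces
# to half the log ratio of their MMSEs.
# sigma_x, rho1, rho2 are illustrative parameters, not values from the paper.

sigma_x = 2.0          # standard deviation of X
rho1, rho2 = 0.6, 0.9  # correlation of X with a coarse (Y1) and a fine (Y2) observation

# MMSE of E[X | Y] for a jointly Gaussian pair with correlation rho
mmse1 = sigma_x**2 * (1 - rho1**2)
mmse2 = sigma_x**2 * (1 - rho2**2)

# Mutual information I(X; Y) = -0.5 * ln(1 - rho^2) for jointly Gaussian pairs
I1 = -0.5 * np.log(1 - rho1**2)
I2 = -0.5 * np.log(1 - rho2**2)

# Mutual information gain from the finer observation ...
gain_info = I2 - I1
# ... equals half the log ratio of the MMSEs (log ratio of entropy powers)
gain_mmse = 0.5 * np.log(mmse1 / mmse2)

print(f"I2 - I1             = {gain_info:.6f} nats")
print(f"0.5*ln(mmse1/mmse2) = {gain_mmse:.6f} nats")
assert np.isclose(gain_info, gain_mmse)
```

In this Gaussian setting the identity follows because h(X|Y) = 0.5 ln(2*pi*e*mmse), so differences of conditional entropies, and hence of mutual informations, become log ratios of MMSEs; the preprint's contribution is extending this kind of link beyond the Gaussian case.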

Keywords

Mutual information gain; entropy power; minimum mean squared error estimation

Subject

Engineering, Electrical and Electronic Engineering
