Background
I am an undergraduate from Indiana University despite being the age of a grad student. I should have graduated by now, but my obsession with research prevents me from moving forward. There is a chance that I might have a learning disability since writing isn’t very easy for me.
Because I have been in and out of college, I never got the chance to rigorously learn the subjects I’m researching. Most of what I learned was from Wikipedia, blogs, and random research articles. I understand little of what I read, but I learn what I can from questions on Math Stack Exchange.
What I truly want, however, is for someone to take my ideas and publish them.
I warn that the definitions may not be rigorous, so try to go easy on me. (I recommend using a programming language such as Mathematica, Python, JavaScript, or Matlab to understand Section 3 and Section 4.)
1. Preliminaries
Suppose A is a set measurable in the Carathéodory sense [1], such that A ⊆ ℝⁿ for some n ∈ ℕ, and consider a function f : A → ℝ.
1.1. Motivation
It seems the set of measurable functions with infinite or undefined expected values (def. 1), using the uniform measure [2] (pp. 32–37), may be a prevalent subset [3,4] of the set of all measurable functions, meaning "almost every" measurable function has an infinite or undefined expected value. Furthermore, when the Lebesgue measure of A (measurable in the Carathéodory sense) is zero or infinite, or the measure is undefined, there may be multiple, conflicting ways of defining a "natural" uniform measure on A.
Below I will attempt to define a question regarding an extension of the expected value (when it is undefined or infinite) which allows finite values instead.
Note the question will be long because there are plenty of "meaningless" extensions of the expected value (e.g., if the expected value is infinite or undefined we could simply replace it with zero).
Therefore, we must be more specific about what is meant by a "meaningful" extension, but first there are some preliminary definitions we must clarify.
1.2. Preliminary Definitions
Definition 1
(Expected value w.r.t. the Uniform Probability Measure). Following an answer to a question on Cross Validated (a website for statistical questions) [5], let X denote a uniform random variable on a set A measurable in the Carathéodory sense, and let the probability density function come from the Radon–Nikodym derivative [6] of the uniform probability measure U on A. If 1_A denotes the indicator function on A:
1_A(x) = 1 if x ∈ A and 0 otherwise,
then the Radon–Nikodym derivative of the uniform probability measure must have the form dU/dλ = 1_A/λ(A), where λ is the Lebesgue measure. (Note λ(A) is not the derivative of U in the sense of calculus, but rather the denominator of the probability density function derived from the uniform probability measure U.) Therefore, using the law of the unconscious statistician, we should get
E[f(X)] = ∫_A f dU = (1/λ(A)) ∫_A f(x) dλ,
such that the expected value is undefined when A does not admit a uniform probability distribution or f is not integrable w.r.t. the measure U.
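To make def. 1 concrete, here is a minimal Python sketch (using a hypothetical example of my own: A = [0, 3] and f(x) = x², neither of which appears in the source) that approximates the expected value w.r.t. the uniform probability measure by averaging f over uniform samples from A:

```python
import random

# Hypothetical example: A = [0, 3] with Lebesgue measure lambda(A) = 3,
# f(x) = x^2, so E[f(X)] = (1/3) * integral of x^2 over [0, 3] = 3.
def f(x):
    return x ** 2

a, b = 0.0, 3.0          # A = [a, b]
M = 100_000              # number of uniform samples

samples = [random.uniform(a, b) for _ in range(M)]

# Law of the unconscious statistician: average f over uniform draws from A.
expected_value = sum(f(x) for x in samples) / M
print(expected_value)    # should be close to 3
```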
Definition 2
(Defining the pre-structure). Since there is a chance that E[f(X)] does not exist, or that f is not integrable w.r.t. U, using def. 1 we define a sequence of sets (F_r), r ∈ ℕ, with each F_r ⊂ A, satisfying the following:
For all r ∈ ℕ, the uniform measure of F_r exists (when A is countably infinite, every F_r must be a finite set, since the uniform measure on F_r would be a discrete uniform distribution; otherwise, when A is uncountable, the uniform measure is the normalized Lebesgue measure or some other uniform measure on F_r (e.g. [7]), such that for every r ∈ ℕ the Lebesgue measure or other uniform measure of F_r exists and is finite).
For all r ∈ ℕ, the measure of F_r is positive and finite, such that the measure is intrinsic. (For countably infinite A, this would be the counting measure, where the measure of F_r is positive and finite since F_r is finite. For uncountable A, it would be either the Lebesgue measure or the Radon–Nikodym derivative of some other uniform measure on F_r (e.g. [7]), where either of these measures on F_r is positive and finite.)
(F_r) is then a pre-structure of A, since for every r ∈ ℕ the term F_r does not equal A but "approaches" A as r increases.
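As an illustration of def. 2, here is a small Python sketch using hypothetical examples of my own (not from the source): for countably infinite A = ℕ we can take F_r = {0, 1, …, r} with the counting measure, and for uncountable A = ℝ we can take F_r = [-r, r] with the Lebesgue measure; each term has positive, finite measure and "approaches" A as r increases.

```python
# Hypothetical pre-structures (my own examples, not the author's):
# countable case:   A = N, F_r = {0, ..., r} with the counting measure;
# uncountable case: A = R, F_r = [-r, r] with the Lebesgue measure.

def F_countable(r):
    """Finite term of a pre-structure of the natural numbers."""
    return set(range(r + 1))

def counting_measure(F):
    return len(F)            # positive and finite for each finite F_r

def F_uncountable(r):
    """Interval term of a pre-structure of the real line."""
    return (-float(r), float(r))

def lebesgue_measure(interval):
    lo, hi = interval
    return hi - lo           # positive and finite for each r >= 1

for r in (1, 10, 100):
    print(r, counting_measure(F_countable(r)), lebesgue_measure(F_uncountable(r)))
```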
Definition 3
(Expected value of Pre-Structure). If (F_r) is a pre-structure of A (def. 2), then, for r ∈ ℕ, if the expected value of f on each F_r (def. 1) exists and is finite, we then have that the expected value of the pre-structure could be described as the limit of these expected values as r increases, where:
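Although the displayed formula of def. 3 is not reproduced above, here is a rough numerical Python sketch of the idea as I read it (assuming the expected value of the pre-structure is the limit of the per-term expected values, and reusing the hypothetical pre-structure F_r = [-r, r] with a bounded f of my own choosing):

```python
# Per-term expected value of f on F_r = [-r, r] (normalized Lebesgue measure),
# approximated with a midpoint Riemann sum; the expected value of the
# pre-structure is then the limit of these values as r grows.
import math

def f(x):
    return 2.0 + math.sin(x) / (1.0 + x * x)

def expected_on_term(r, n=200_000):
    width = 2.0 * r / n
    total = sum(f(-r + (i + 0.5) * width) for i in range(n)) * width
    return total / (2.0 * r)          # divide by the measure of F_r

for r in (1, 10, 100, 1000):
    print(r, expected_on_term(r))     # settles toward 2 as r increases
```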
Definition 4
(Uniform coverings of each term of the pre-structure). We define the uniform ε coverings of each term of the pre-structure (i.e., F_r) as a group of pair-wise disjoint sets that cover F_r for every r ∈ ℕ, such that each of the sets that cover F_r has the same measure ε, where ε > 0 and the total sum of the measures of the covering sets is minimized. In shorter notation, if the set Ω is defined as:
then for every r ∈ ℕ, the set of uniform ε coverings of F_r is defined using ω ∈ Ω, where ω "enumerates" all possible uniform ε coverings of F_r for every r ∈ ℕ.
Definition 5
(Sample of the uniform coverings of each term of the pre-structure). The sample of uniform ε coverings of each term of the pre-structure is the set of points such that, for every r ∈ ℕ and every covering, we take one point from each pair-wise disjoint set in the uniform ε coverings of F_r (def. 4). In shorter notation, if the set is defined as:
then for every r ∈ ℕ, the set of all samples of the set of uniform ε coverings is defined using ψ, where ψ "enumerates" all possible samples of the uniform ε coverings of F_r.
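Since the displayed formulas for defs. 4 and 5 are not reproduced above, here is a small Python sketch of one possible reading (a simplification of my own): a term F_r = [0, r] is covered by pair-wise disjoint intervals of equal measure ε, and a sample takes one point from each covering set.

```python
import math
import random

# One possible reading of defs. 4 and 5 (a simplification of my own):
# cover F_r = [0, r] with pair-wise disjoint intervals of measure eps,
# then take one point from each interval to form a sample of the covering.

def uniform_cover(r, eps):
    """Pair-wise disjoint intervals of measure eps covering [0, r]."""
    n = math.ceil(r / eps)                       # fewest eps-sets that cover [0, r]
    return [(i * eps, (i + 1) * eps) for i in range(n)]

def sample_of_cover(cover):
    """One point chosen from each set of the covering (def. 5)."""
    return [random.uniform(lo, hi) for lo, hi in cover]

cover = uniform_cover(r=3.0, eps=0.5)
print(cover)                   # six disjoint intervals of measure 0.5
print(sample_of_cover(cover))  # one sample point per covering set
```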
Definition 6
(Entropy on the sample of uniform coverings of each term of the pre-structure). Since there are finitely many points in the sample of the uniform ε coverings of each term of the pre-structure (def. 5), we:
1. Arrange the x-values of the points in the sample of uniform ε coverings from least to greatest. This is defined as:
2. Take the multi-set of the absolute differences between all consecutive pairs of elements in (1). This is defined as:
3. Normalize (2) into a probability distribution, where, for a multi-set X, the cardinality counts all elements of the multi-set, including repeated ones. This is defined as:
4. Take the entropy of (3) (for further reading, see [8] (pp. 61–95)). This is defined as:
where (4) is the entropy on the sample of uniform coverings of F_r.
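The four steps of def. 6 translate directly into code. The Python sketch below (continuing the hypothetical sample from defs. 4 and 5) sorts the sample, takes consecutive absolute differences, normalizes the differences by their sum (one natural reading of step (3), which is my assumption), and computes the Shannon entropy:

```python
import math

def entropy_of_sample(points):
    """Entropy on a sample of uniform coverings, following steps (1)-(4) of def. 6."""
    xs = sorted(points)                                  # (1) arrange x-values
    gaps = [abs(b - a) for a, b in zip(xs, xs[1:])]      # (2) consecutive differences
    total = sum(gaps)
    probs = [g / total for g in gaps if g > 0]           # (3) normalize to a distribution
    return -sum(p * math.log(p) for p in probs)          # (4) Shannon entropy (nats)

sample = [0.12, 0.48, 0.41, 0.93, 0.77, 0.31]            # hypothetical sample points
print(entropy_of_sample(sample))
```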
Definition 7
(Pre-Structure Converging Uniformly to A). For every ε > 0 (using defs. 4, 5, and 6), if set A is finite:
and if set A is non-finite:
we say the pre-structure converges uniformly to A (or, in shorter notation):
(Note we wish to define a uniform convergence of a sequence of sets to A since the definition is analogous to a uniform measure.)
Definition 8
(Equivalent Pre-Structures).
The pre-structures (F_r) and (F′_j) of A are equivalent if, from def. 3, where r, j ∈ ℕ:
Definition 9
(Non-Equivalent Pre-Structures).
The pre-structures (F_r) and (F′_j) of A are non-equivalent if, from def. 3, where r, j ∈ ℕ:
Definition 10
(Pre-Structures Converging Sublinearly, Linearly, or Superlinearly to A Compared to that of Another Sequence). Suppose the pre-structures (F_r) and (F′_j) are non-equivalent and converge uniformly to A; and suppose for every ε > 0, where r, j ∈ ℕ:
- (a)
We take the cardinality of the sample of the uniform ε coverings of divided by the smallest cardinality of the sample of the uniform ε coverings of (def. 5), where the entropy on the sample of uniform coverings on is larger than the entropy on the sample of uniform coverings on (def. 6). In other words, if:
then the ratio described at the beginning of (a) is defined (using 1.2.8) as
- (b)
We take the cardinality of the sample of the uniform ε covering of divided by the largest cardinality of the sample of the uniform ε covering of (def. 5), where the entropy on the sample of uniform coverings on is smaller than the entropy on the sample of uniform coverings on (def. 6). In other words, if:
then the ratio described at the start of (b) is defined (using 1.2.10) as
-
If using equations 1.2.9 and 1.2.11 we have that:
we say (F_r) converges uniformly to A at a superlinear rate to that of (F′_j).
-
If using equations 1.2.9 and 1.2.11 we have that:
we say (F_r) converges uniformly to A at a linear rate to that of (F′_j).
-
If using equations 1.2.9 and 1.2.11 we have that:
we say (F_r) converges uniformly to A at a sublinear rate to that of (F′_j).
I assume the ratios defined in equations 1.2.9 and 1.2.11 are always equal, but I’m not sure how to prove this.
1.3. Question on Preliminary Definitions
2. Main Question
Does there exist a unique extension (or a method that constructively defines a unique extension) of the expected value of f, when the value is finite, using the uniform probability measure [2] (pp. 32–37) on sets measurable in the Carathéodory sense, such that we replace f, when it has an infinite or undefined expected value, with f defined on a chosen pre-structure which depends on A, where:
1. The expected value of f on each term of the pre-structure is finite.
2. The pre-structure converges uniformly to A.
3. The pre-structure converges uniformly to A at a linear or superlinear rate compared to that of other non-equivalent pre-structures of A which satisfy (1) and (2).
4. The generalized expected value of f on a pre-structure (i.e., an extension of def. 3 that answers the full question) has a unique and finite value, such that the pre-structure satisfies (1), (2), and (3).
5. A choice function is defined which chooses a pre-structure from A satisfying (1), (2), (3), and (4) for the largest possible subset of ℝ^A.
6. If there is more than one choice function that satisfies (1), (2), (3), (4), and (5), we choose the choice function with the "simplest form", meaning that for a general pre-structure of A, when each choice function is fully expanded, we take the choice function with the fewest variables/numbers (excluding those with quantifiers).
3. Informal Attempt to Answer Main Question
(I advise using a programming language such as Mathematica, Python, JavaScript, or Matlab to understand the definitions in the answer below.)
3.1. Generalized Expected Values
If the image of f under A is f[A], then, using defs. 2 and 7, we take a pre-structure of f[A] where:
and take the pre-image under f of each term of this pre-structure (defined accordingly) such that:
However, note the expected value on this pre-structure (def. 3) may be infinite (e.g., for unbounded f). Hence, for every r ∈ ℕ, we take the following, where:
Thus, the generalized expected value is:
and we describe the process of extending the expected value of def. 3 to the generalized expected value in this way.
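Since the displayed equations of Section 3.1 are not reproduced above, the Python sketch below shows only the general idea as I read it (a hypothetical simplification of my own, not the author's exact construction): restrict F_r = [-r, r] to the pre-image of a bounded range [-c, c] of f, average f there, and let both c and r grow.

```python
import math

# Hypothetical simplification of the generalized expected value:
# restrict to the pre-image of [-c, c] under f inside F_r = [-r, r],
# average f over that restriction, then let c and r grow together.

def f(x):
    return x * math.sin(x)            # unbounded, oscillating example of my own

def restricted_average(r, c, n=200_000):
    width = 2.0 * r / n
    total, mass = 0.0, 0.0
    for i in range(n):
        x = -r + (i + 0.5) * width
        y = f(x)
        if abs(y) <= c:               # keep only the pre-image of [-c, c]
            total += y * width
            mass += width
    return total / mass if mass > 0 else float("nan")

# Prints the restricted averages for growing r and c; the mechanics are the
# point here, since the values need not converge for every f.
for r, c in [(10, 10), (100, 100), (1000, 1000)]:
    print(r, c, restricted_average(r, c))
```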
3.2. Choice Function
Suppose we have the set of all pre-structures of A which satisfy criteria (1) and (2) of the main question, where the generalized expected value of the pre-structures, as they converge uniformly to A, is unique and finite, and where the chosen pre-structure should be a sequence of sets that satisfies criteria (1), (2), (3), and (4) of the main question where:
and the chosen pre-structure is an element of this set such that:
but is not an element of the set of pre-structures equivalent to it (i.e., def. 8).
Further note from (a), with equation 1.2.8 in def. 10, if we take:
and from (b), with equation 1.2.10 in def. 10, we take:
then, using def. 5 with equations 3.2.3 and 3.2.4, if:
where, using the absolute value function, we have:
such that:
and, using equations 3.2.5, 3.2.6, 3.2.7, 3.2.8, and 3.2.9 with the nearest integer function, we want:
such that, using equation 3.2.10, if we take the power set of the resulting set, then the required set is the largest element of:
w.r.t. inclusion, such that the choice function is C if the following contains just one element.
Otherwise, for k ∈ ℕ, suppose C^k(A) represents the k-th iteration of the choice function of A (e.g., C²(A) = C(C(A))), where the infinite iteration of C (if it exists) is C^∞(A). Therefore, when taking the following:
we say this is the choice function C, and the expected value, using equation 3.2.1, follows.
3.3. Questions on Choice Function
3.4. Increasing Chances of a Unique and Finite Expected Value
If there exists a unique and finite generalized expected value (see Section 3.1) where:
then it is the generalized expected value w.r.t. the choice function C, which answers criteria (1), (2), (3), (4), and perhaps (5) of the question in Section 2; however, there is still a chance that equation 3.4.1 fails to give a unique value. Hence, if it does fail, we take the k-th iteration of the choice function C in 3.2.11, such that there exists a k where, for all further iterations, if the value is unique and finite, then the following is the generalized expected value w.r.t. a finitely iterated C.
In other words, if the k-th iteration of C is represented as C^k (where, e.g., C²(A) = C(C(A))), we want a unique and finite value where:
If this still does not give a unique and finite expected value, we then take the most generalized expected value w.r.t. an infinitely iterated C, where if the infinite iteration of C is stated as C^∞, we then want a unique value where:
However, in such cases, this should only be used for functions where the expected value is infinite or undefined, or for worst-case functions: badly behaved f (where, for n ∈ ℕ, A ⊆ ℝⁿ and f : A → ℝ) defined on infinitely many points covering an infinite expanse of space. For example:
For a worst-case f defined on countably infinite A (e.g., countably infinite "pseudo-random points" non-uniformly scattered across the real plane), one may need just one iteration of C (since most functions on countable sets need just one iteration of C for the expected value to be unique); otherwise, one may use equation 3.4.2 for finitely many iterations of C.
For a worst-case f defined on uncountable A, we might have to use equation 3.4.3, as averaging such a function might be nearly impossible. We can imagine this function as an uncountable number of "pseudo-random" points non-uniformly generated on a subset of the real plane (see Section 4.1 for a visualization).
Note, however, that no matter how generalized and “meaningful" the extension of an expected value is, there will always be an f where the expected value does not exist.
3.5. Questions Regarding The Answer
4. Glossary
4.1. Example of Case (2) of Worst Case Functions
We wish to create a function that appears to be "pseudo-randomly" distributed but has infinitely many points that are non-uniform (i.e., do not have complete spatial randomness [16]) in a sub-space of ℝ², where the expected value or integral of the function w.r.t. the uniform probability measure [2] (pp. 32–37) is non-obvious (i.e., neither the center of the space the function covers nor the area of that space).
Suppose for real numbers and , we generate an uncountable number of "nearly pseudo-random" points that are non-uniform in the subspace .
We therefore define the function as .
Now suppose where the base-b expansion of real numbers, in interval , have infinite decimals that approach x from the right side so when we get .
Furthermore, for , if and is a function where takes the digit in the -th decimal fraction of the base-b expansion of x (e.g., ), then is a sequence of functions such that is defined to be:
then for some large and , the intermediate function (before f), or , is defined to be where the points in are "almost pseudo-randomly" and non-uniformly distributed on . What we did was convert every digit of the base-b expansion of x to a pseudo-random number that is non-equally likely to be an integer, including, and in between, 0 and . Furthermore, we also make the function appear truly "pseudo-random" by adding the -th decimal fraction to the next k decimal fractions; however, we want to control the end-points of , such that if , we convert to by manipulating equation 4.1.2 to get:
such that the larger k is, the more pseudo-random the distribution of points of f in the space , but unlike most distributions of such points, f is uncountable.
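Because equations 4.1.1–4.1.3 are not reproduced above, the following Python sketch is only a loose illustration of the described idea (read off digits of the base-b expansion of x, combine each digit with the next k digits, and rescale the result into a non-uniform "pseudo-random" value); every parameter name here is a placeholder of my own, not the author's exact formula:

```python
# Loose illustration of the Section 4.1 idea (not the author's exact formulas):
# read digits of the base-b expansion of x, combine each digit with the next
# k digits, and use the result as a "pseudo-random", non-uniform value.

def digit(x, i, b):
    """The i-th digit (i >= 1) of the base-b expansion of x in [0, 1)."""
    return int(x * b ** i) % b

def scrambled_value(x, i, k, b, height):
    """Combine digit i with the next k digits and rescale into [0, height)."""
    combined = sum(digit(x, i + j, b) / b ** (j + 1) for j in range(k + 1))
    return height * combined

b, k, height = 10, 3, 2.0                 # placeholder parameters of my own choosing
xs = [n / 997.0 for n in range(1, 997)]   # sample points in [0, 1)
points = [(x, scrambled_value(x, i=1, k=k, b=b, height=height)) for x in xs]
print(points[:5])                         # scattered, non-uniform looking values
```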
4.2. Question Regarding Section 4.1
Let us give a specific example: suppose for the function in equation 4.1.3 of Section 4.1 we have:
(one can try simpler parameters); what is the expected value, using either equation 3.4.2 or 3.4.3 (or a more complete solution to Section 2), if the answer is finite and unique? What about for f in general (i.e., in terms of b, , , , and k)?
(Note if and , then the function is an explicit example of the function that Wood [2] describes in Quanta Magazine.)
4.3. Approximating the Expected Value
Definition 11
(Approximating the Expected Value).
In practice, the computation of this expected value may be complicated if the set A is complicated. If analytic integration does not give a closed-form solution, then a general and relatively simple way to compute the expected value (up to high accuracy) is with importance sampling. To do this, we produce values x₁, …, x_M from some density function g whose support covers A (hopefully with support fairly close to A) and we use the estimator:
Ê[f(X)] = [ Σᵢ f(xᵢ)·1_A(xᵢ)/g(xᵢ) ] / [ Σᵢ 1_A(xᵢ)/g(xᵢ) ], summing over i = 1, …, M.
From the law of large numbers, we can establish that Ê[f(X)] → E[f(X)] as M → ∞, so if we take M to be large then we should get a reasonably good computation of the expected value of interest.
Note importance sampling requires three things (illustrated in the sketch below):
1. We need to know when a point x is in the set A or not.
2. We need to be able to generate points from a density g whose support covers A but is not too much bigger than A.
3. We have to be able to compute f(x) and g(x) for each point.
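Here is a minimal Python sketch of the importance-sampling estimator described above, using a hypothetical example of my own (A is the unit disk, f(x, y) = x² + y², and g is the uniform density on the bounding square [-1, 1]²); the three requirements correspond to the membership test, the sampler for g, and the evaluation of f and g:

```python
import random

# Hypothetical example: A = unit disk, f(x, y) = x^2 + y^2,
# g = uniform density on the bounding square [-1, 1]^2 (so g = 1/4 on its support).

def in_A(x, y):
    return x * x + y * y <= 1.0          # requirement 1: membership test for A

def sample_g():
    return random.uniform(-1, 1), random.uniform(-1, 1)   # requirement 2: draw from g

def g(x, y):
    return 0.25                          # requirement 3: density of g on its support

def f(x, y):
    return x * x + y * y

M = 200_000
num = den = 0.0
for _ in range(M):
    x, y = sample_g()
    if in_A(x, y):
        w = 1.0 / g(x, y)                # importance weight for points landing in A
        num += w * f(x, y)
        den += w

print(num / den)   # expected value of f w.r.t. the uniform measure on A; close to 0.5
```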
References
1. Greinecker, M. (https://mathoverflow.net/users/35357/michael-greinecker). Demystifying the Carathéodory Approach to Measurability. MathOverflow, https://mathoverflow.net/q/34007 (version: 2010-07-31).
2. Leinster, T.; Roff, E. The maximum entropy of a metric space. https://arxiv.org/pdf/1908.11184.pdf.
3. Ott, W.; Yorke, J.A. Prevalence. Bulletin of the American Mathematical Society 2005, 42, 263–290. https://www.ams.org/journals/bull/2005-42-03/S0273-0979-05-01060-8/S0273-0979-05-01060-8.pdf.
4. Hunt, B.R. Prevalence: a translation-invariant "almost every" on infinite-dimensional spaces, 1992. https://arxiv.org/abs/math/9210220.
5. Ben (https://stats.stackexchange.com/users/173082/ben). In statistics how does one find the mean of a function w.r.t. the uniform probability measure? Cross Validated, https://stats.stackexchange.com/q/602939 (version: 2023-01-24).
6. Billingsley, P. Probability and Measure, 3rd ed.; John Wiley & Sons: New York, 1995; pp. 419–427. https://www.colorado.edu/amath/sites/default/files/attached-files/billingsley.pdf.
7. McClure, M. (https://mathoverflow.net/users/46214/mark-mcclure). Integral over the Cantor set Hausdorff dimension. MathOverflow, https://mathoverflow.net/q/235609 (version: 2016-04-07).
8. Gray, R.M. Entropy and Information Theory, 2nd ed.; Springer: New York, 2011; pp. 61–95. https://ee.stanford.edu/~gray/it.pdf.
9. Renfro, D.L. (https://math.stackexchange.com/users/13130/dave-l-renfro). Proof that neither "almost none" nor "almost all" functions which are Lebesgue measurable are non-integrable. Mathematics Stack Exchange, https://math.stackexchange.com/q/4623168 (version: 2023-01-21).
10. Grabowski, A.; Kornilowicz, A. Introduction to Liouville Numbers. Formalized Mathematics 2017, 25.
11. Wood, C. Mathematicians Prove 2D Version of Quantum Gravity Really Works. Quanta Magazine, https://www.quantamagazine.org/mathematicians-prove-2d-version-of-quantum-gravity-really-works-20210617.
12. Bottazzi, E.; Eskew, M. Integration with Filters. https://arxiv.org/pdf/2004.09103.pdf.
13. Shackel, N. Bertrand’s Paradox and the Principle of Indifference. Philosophy of Science 2007, 74, 150–175. https://orca.cardiff.ac.uk/id/eprint/3803/1/Shackel%20Bertrand’s%20paradox%205.pdf.
14. Drory, A. Failure and Uses of Jaynes’ Principle of Transformation Groups. Foundations of Physics 2015, 45, 439–460. https://arxiv.org/pdf/1503.09072.pdf.
15. B., K. Visualization of an Uncountable Number of Pseudo-random Points Generated on a Subset of the Real Plane, 2023. https://www.wolframcloud.com/obj/4e78f594-1578-402a-a163-ebb16319ada2.
16. Maimon, O.; Rokach, L. Data Mining and Knowledge Discovery Handbook, 2nd ed.; Springer: New York, 2010; pp. 851–852.
[1] Meaningful Average: the average answers the main question in Section 2.
[2] Wood wrote on Feynman Path Integrals: "No known mathematical procedure can meaningfully average [1] an infinite number of objects covering an infinite expanse of space in general."