Preprint
Concept Paper

Defining the Most Generalized, Natural Extension of the Expected Value on Measurable Functions

This version is not peer-reviewed.
Submitted: 06 April 2023; Posted: 07 April 2023

Abstract
In this paper, we extend the expected value of a function w.r.t. the uniform probability measure on sets measurable in the Carathéodory sense, so that it is finite for a larger class of functions. This is needed because the set of all measurable functions with infinite or undefined expected values forms a prevalent subset of the set of all measurable functions; in other words, "almost all" measurable functions have infinite or undefined expected values. Before defining the specific problem in Section 2, we outline some preliminary definitions. We then state the specific problem (along with a partial solution in Section 3) to motivate the complete solution. Along the way, we pose a series of questions to clarify our understanding of the paper.
Subject: Computer Science and Mathematics  -   Probability and Statistics

0. Background

I am an undergraduate from Indiana University despite being the age of a grad student. I should have graduated by now, but my obsession with research prevents me from moving forward. There is a chance that I might have a learning disability since writing isn’t very easy for me.
As I’ve been in and out of college, I never got the chance to rigorously learn the subjects I’m researching. Most of what I learned was from Wikipedia, blogs and random research articles. I know little of what I read but learn what I can from asking questions on math stack exchange.
What I truly want, however, is for someone to take my ideas and publish them.
I warn that the definitions may not be rigorous, so try to go easy on me. (I recommend using a programming language such as Mathematica, Python, JavaScript or Matlab to understand later sections.)

1. Preliminaries

Suppose $A$ is a set measurable in the Carathéodory sense [1], such that for some $n\in\mathbb{N}$, $A\subseteq\mathbb{R}^n$, and suppose $f:A\to\mathbb{R}$ is a function.

1.1. Motivation

It seems the set of measurable functions with infinite or undefined expected values (def. 1), using the uniform measure [[2], p.32-37], may be a prevalent subset [3,4] of the set of all measurable functions, meaning "almost every" measurable function has an infinite or undefined expected value. Furthermore, when the Lebesgue measure of $A$ (measurable in the Carathéodory sense) is zero or infinite (or undefined), there may be multiple, conflicting ways of defining a "natural" uniform measure on $A$.
Below I will attempt to define a question regarding an extension of the expected value (when it’s undefined or infinite) which allows for finite values instead.
Note the reason the question will be so long is there are plenty of “meaningless” extensions of the expected value (e.g. if the expected value is infinite or undefined we can just replace it with zero).
Therefore we must be more specific about what is meant by “meaningful” extension but there are some preliminary definitions we must clarify.

1.2. Preliminary Definitions

Definition 1 (Expected value w.r.t the Uniform Probability Measure). 
From an answer to a question on Cross Validated (a website for statistics questions) [5], let $X\sim\mathrm{Uniform}(A)$ denote a uniform random variable on the set $A\subseteq\mathbb{R}^n$, and let $p_X$ denote the probability density function obtained from the Radon-Nikodym derivative [[6], p.419-427] of the uniform probability measure on $A$, measurable in the Carathéodory sense. If $\mathbb{I}(x\in A)$ denotes the indicator function on $x\in A$:
$$\mathbb{I}(x\in A)=\begin{cases}1 & x\in A\\ 0 & x\notin A\end{cases}$$
then the Radon-Nikodym derivative of the uniform probability measure must have the form $\mathbb{I}(x\in A)/U(A)$. (Note $U$ is not a derivative in the sense of calculus, but rather the denominator of the probability density function derived from the uniform probability measure $U$.)
Therefore, by the law of the unconscious statistician, we should get
$$\mathbb{E}[f(X)]=\int_{\mathbb{R}^n}f(x)\cdot p_X(x)\,dx=\int_{\mathbb{R}^n}f(x)\cdot\frac{\mathbb{I}(x\in A)}{U(A)}\,dx=\frac{1}{U(A)}\int_{A}f(x)\,dx=\mathbb{E}_{U}[f(X)]$$
such that the expected value is undefined when $A$ does not admit a uniform probability distribution or $f$ is not integrable w.r.t. the measure $U$.
Definition 2 (Defining the pre-structure). 
Since there is a chance that $X\sim\mathrm{Uniform}(A)$ does not exist, or that $f$ is not integrable w.r.t. $U$, using def. 1 we define a sequence of sets $(F_r)_{r\in\mathbb{N}}$ where, if:
(a) $\liminf_{r\to\infty}F_r=\bigcup_{r\ge 1}\bigcap_{q\ge r}F_q$
(b) $\limsup_{r\to\infty}F_r=\bigcap_{r\ge 1}\bigcup_{q\ge r}F_q$
then we require:
1. $\liminf_{r\to\infty}F_r=\limsup_{r\to\infty}F_r=A$
2. For all $r\in\mathbb{N}$, $X_r\sim\mathrm{Uniform}(F_r)$ exists. (When $A$ is countably infinite, every $F_r$ must be finite, since $X_r$ would be a discrete uniform distribution on $F_r$; when $A$ is uncountable, $X_r$ follows the normalized Lebesgue measure or some other uniform measure on $F_r$ (e.g. [7]), where for every $r\in\mathbb{N}$ that measure on $F_r$ exists and is finite.)
3. For all $r\in\mathbb{N}$, $U(F_r)$ is positive and finite, such that $U$ is intrinsic. (For countably infinite $A$, $U$ is the counting measure, where $U(F_r)$ is positive and finite since $F_r$ is finite. For uncountable $A$, $U$ is either the Lebesgue measure or the Radon-Nikodym derivative of some other uniform measure on $F_r$ (e.g. [7]), where either measure on $F_r$ is positive and finite.)
We call $(F_r)_{r\in\mathbb{N}}$ a pre-structure of $A$, since for every $r\in\mathbb{N}$ the term $F_r$ does not equal $A$, but the sequence "converges" to $A$ as $r$ increases (see (a) & (b) of this definition).
Example 1. 
Suppose $A=\mathbb{Q}$. One pre-structure of $\mathbb{Q}$ is $(F_r)_{r\in\mathbb{N}}=\left(\{c/r! : c\in\mathbb{Z},\ -r\cdot r!\le c\le r\cdot r!\}\right)_{r\in\mathbb{N}}$, since:
1. $\liminf_{r\to\infty}F_r=\limsup_{r\to\infty}F_r=A$:
$$\bigcup_{r\ge1}\bigcap_{q\ge r}\{c/q! : c\in\mathbb{Z},\ -q\cdot q!\le c\le q\cdot q!\}=\bigcap_{r\ge1}\bigcup_{q\ge r}\{c/q! : c\in\mathbb{Z},\ -q\cdot q!\le c\le q\cdot q!\}=\mathbb{Q}$$
2. For every $r\in\mathbb{N}$, the set $F_r=\{c/r! : c\in\mathbb{Z},\ -r\cdot r!\le c\le r\cdot r!\}$ is finite, meaning each term of the pre-structure has a discrete uniform distribution. Therefore, $X_r\sim\mathrm{Uniform}(F_r)$ exists.
3. For every $r\in\mathbb{N}$, $F_r$ is finite, meaning $U$ is the counting measure. Furthermore, since $U(F_r)=2r\cdot r!+1$, and for all $r\in\mathbb{N}$, $2r\cdot r!+1$ is positive and finite, criterion (3) of def. 2 is satisfied.
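The claims in this example can be checked directly; below is a minimal Python sketch (using exact rational arithmetic) that verifies the cardinality $U(F_r)=2r\cdot r!+1$, the nesting of the terms, and that an arbitrary rational eventually appears:

```python
from fractions import Fraction
from math import factorial

def F(r):
    """r-th term of the pre-structure: {c/r! : c in Z, -r*r! <= c <= r*r!}."""
    fact = factorial(r)
    return {Fraction(c, fact) for c in range(-r * fact, r * fact + 1)}

for r in range(1, 5):
    # Criterion (3): the counting measure U(F_r) = 2*r*r! + 1 is positive and finite.
    assert len(F(r)) == 2 * r * factorial(r) + 1
    # The terms are nested, so lim inf F_r = lim sup F_r = the union of all F_r.
    assert F(r) <= F(r + 1)

# Any rational eventually appears, e.g. 3/7 = (3 * 7!/7) / 7! lies in F_7.
assert Fraction(3, 7) in F(7)
```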
Example 2. 
Suppose $A=\mathbb{Q}$. Another pre-structure of $\mathbb{Q}$ is
$$(F_r)_{r\in\mathbb{N}}=\left(\{c/d : c\in\mathbb{Z},\ d\in\mathbb{N},\ d\le r,\ -dr\le c\le dr\}\right)_{r\in\mathbb{N}}$$
where we note the following:
1. $\liminf_{r\to\infty}F_r=\limsup_{r\to\infty}F_r=A$:
$$\bigcup_{r\ge1}\bigcap_{q\ge r}\{c/d : c\in\mathbb{Z},\ d\in\mathbb{N},\ d\le q,\ -dq\le c\le dq\}=\bigcap_{r\ge1}\bigcup_{q\ge r}\{c/d : c\in\mathbb{Z},\ d\in\mathbb{N},\ d\le q,\ -dq\le c\le dq\}=\mathbb{Q}$$
2. For every $r\in\mathbb{N}$, the set $F_r=\{c/d : c\in\mathbb{Z},\ d\in\mathbb{N},\ d\le r,\ -dr\le c\le dr\}$ is finite, meaning each term of the pre-structure has a discrete uniform distribution. Therefore, $X_r\sim\mathrm{Uniform}(F_r)$ exists.
3. For every $r\in\mathbb{N}$, $F_r$ is finite, meaning $U$ is the counting measure. If $\phi(\cdot)$ is Euler's totient function [[8], p.239-249], then counting the distinct rationals in $F_r$ gives $U(F_r)=\left|\{c/d : c\in\mathbb{Z},\ d\in\mathbb{N},\ d\le r,\ -dr\le c\le dr\}\right|=1+2r\sum_{d=1}^{r}\phi(d)$, which is positive and finite for all $r\in\mathbb{N}$. Therefore, criterion (3) of def. 2 is satisfied.
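The original manuscript flags its closed form for $U(F_r)$ as unverified; a brute-force count (sketch below, with a naive totient function) supports the cardinality $1+2r\sum_{d=1}^{r}\phi(d)$: the $2r+1$ integers come from denominator $1$, and each reduced denominator $d\ge 2$ contributes $2r\,\phi(d)$ new points in $[-r,r]$.

```python
from fractions import Fraction
from math import gcd

def F(r):
    """{c/d : c in Z, d in N, d <= r, -d*r <= c <= d*r} as a set of rationals."""
    return {Fraction(c, d) for d in range(1, r + 1) for c in range(-d * r, d * r + 1)}

def phi(d):
    """Naive Euler totient, adequate for small d."""
    return sum(1 for k in range(1, d + 1) if gcd(k, d) == 1)

for r in range(1, 8):
    # 2r+1 integers (denominator 1) plus 2r*phi(d) new reduced fractions in
    # [-r, r] for each denominator d >= 2 gives 1 + 2r * sum of phi(d).
    assert len(F(r)) == 1 + 2 * r * sum(phi(d) for d in range(1, r + 1))
```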
There are plenty of pre-structures of $\mathbb{Q}$. In fact, there may be countably infinitely many of them.
Example 3. 
We need additional examples where $U$ is not the counting measure. Perhaps one example of $(F_r)_{r\in\mathbb{N}}$ (where $A=\mathbb{R}$) is:
$$(F_r)_{r\in\mathbb{N}}=\left([-r,r]\right)_{r\in\mathbb{N}}$$
It is clear that:
$$\liminf_{r\to\infty}F_r=\limsup_{r\to\infty}F_r=A:\qquad\bigcup_{r\ge1}\bigcap_{q\ge r}[-q,q]=\bigcap_{r\ge1}\bigcup_{q\ge r}[-q,q]=\mathbb{R}$$
Note that a uniform random variable on $A=\mathbb{R}$ does not exist, but for every $r\in\mathbb{N}$, the uniform density on $F_r$ is $\mathbb{I}(x\in[-r,r])/(2r)$.
Furthermore, for every $r\in\mathbb{N}$, $U$ is the 1-d Lebesgue measure, where $U(F_r)=2r$ is positive and finite (since $r>0$).
Definition 3 (Expected value of f on a Pre-Structure).
If $(F_r)_{r\in\mathbb{N}}$ is a pre-structure of $A$ (def. 2), then for $r\in\mathbb{N}$, if
$$\mathbb{E}_{U}[f(X_r)]=\frac{1}{U(F_r)}\int_{F_r}f\,dx$$
then the expected value of $f$ on the pre-structure can be described as $\mathbb{E}_{U}[f(X_r)]\to\mathbb{E}_{U}[f]$, where:
$$(\forall\epsilon>0)(\exists N\in\mathbb{N})(\forall r\in\mathbb{N})\left(r\ge N\Rightarrow\left|\mathbb{E}_{U}[f(X_r)]-\mathbb{E}_{U}[f]\right|<\epsilon\right)$$
i.e.
$$(\forall\epsilon>0)(\exists N\in\mathbb{N})(\forall r\in\mathbb{N})\left(r\ge N\Rightarrow\left|\frac{1}{U(F_r)}\int_{F_r}f\,dx-\mathbb{E}_{U}[f]\right|<\epsilon\right)$$
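As a quick numerical illustration of def. 3 (with the hypothetical choices $A=\mathbb{Q}\cap[0,1]$, $F_r=\{c/r! : 0\le c\le r!\}$, and $f(x)=x^2$, none of which appear in the paper), the finite averages approach $\int_0^1 x^2\,dx=1/3$:

```python
from math import factorial

def E_U(f, Fr):
    """Expected value of f on one term of a pre-structure, w.r.t. the
    counting measure: (1/U(F_r)) * sum of f over F_r (cf. def. 3)."""
    return sum(f(x) for x in Fr) / len(Fr)

f = lambda x: x * x  # hypothetical test function on A = Q intersect [0, 1]

for r in range(2, 7):
    N = factorial(r)
    Fr = [c / N for c in range(N + 1)]  # term {c/r! : 0 <= c <= r!}
    print(r, E_U(f, Fr))  # the averages approach 1/3 as r grows
```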
Example 4. 
Suppose $A=\mathbb{Q}$ and $f:A\to\mathbb{R}$ is such that:
$$f(x)=\begin{cases}1 & x\in\{(2n+1)/2^m : n\in\mathbb{Z},\ m\in\mathbb{N}\}\\ 0 & x\notin\{(2n+1)/2^m : n\in\mathbb{Z},\ m\in\mathbb{N}\}\end{cases}$$
Using the pre-structure in example 1, i.e. $(F_r)_{r\in\mathbb{N}}=(\{c/r! : c\in\mathbb{Z},\ -r\cdot r!\le c\le r\cdot r!\})_{r\in\mathbb{N}}$, we presume (and must prove) that $\mathbb{E}_{U}[f]$ using def. 3 is $1$.
And using the pre-structure in example 2, i.e.
$$(F'_j)_{j\in\mathbb{N}}=\left(\{c/d : c\in\mathbb{Z},\ d\in\mathbb{N},\ d\le j,\ -dj\le c\le dj\}\right)_{j\in\mathbb{N}}$$
we presume (but must prove) that $\mathbb{E}_{U'}[f]$ using def. 3 is $1/3$.
This shows that different pre-structures can give different expected values; therefore, we must choose a unique set of equivalent pre-structures (def. 8) which gives the same and finite expected value.
Definition 4 (Uniform ε coverings of each term of the pre-structure). 
We define the uniform ε-coverings of each term of the pre-structure $(F_r)_{r\in\mathbb{N}}$ (i.e. of $F_r$) as a collection of pair-wise disjoint sets that cover $F_r$ for every $r\in\mathbb{N}$, such that the measure $U$ of each covering set equals the same value $\varepsilon\in\mathrm{range}(U)$ with $\varepsilon>0$, and the total sum of $U$ over the covering sets is minimized. In shorter notation, if
  • the element $t\in\mathbb{N}$, and
  • the set $T$ is an arbitrary uncountable set,
and the set $\Omega$ is defined as:
$$\Omega=\begin{cases}\{1,\cdots,t\} & \text{if there are } t \text{ ways of writing the uniform }\varepsilon\text{-coverings of } F_r\\ \mathbb{N} & \text{if there are countably infinitely many ways of writing the uniform }\varepsilon\text{-coverings of } F_r\\ T & \text{if there are uncountably many ways of writing the uniform }\varepsilon\text{-coverings of } F_r\end{cases}$$
then for every $\omega\in\Omega$, the set of uniform ε-coverings is denoted $\mathcal{U}(\varepsilon,F_r,\omega)$, where $\omega$ "enumerates" all possible uniform ε-coverings of $F_r$ for every $r\in\mathbb{N}$.
Example 5. 
Suppose
1. $A=\mathbb{Q}\cap[0,1]$
2. $(F_r)_{r\in\mathbb{N}}=(\{c/d : c\in\mathbb{Z},\ d\in\mathbb{N},\ d\le r,\ 0\le c\le d\})_{r\in\mathbb{N}}$
In order to calculate $\mathcal{U}(2,F_4,1)$, note that:
$$F_4=\{0,1\}\cup\{0,1/2,1\}\cup\{0,1/3,2/3,1\}\cup\{0,1/4,2/4,3/4,1\}\cup\{0/5,1/5,2/5,3/5,4/5,5/5\}$$
$$=\{0,1,1/2,1/3,2/3,1/4,3/4,1/5,2/5,3/5,4/5\}$$
and, since $\varepsilon=2$ and $U$ is the counting measure, one example of $\mathcal{U}(2,F_4,1)$ is
$$\{\{0,1\},\{1/2,1/3\},\{2/3,1/4\},\{3/4,1/5\},\{2/5,3/5\},\{4/5,6/5\}\}$$
Note that $U$ (in this case the counting measure) of each set in the uniform ε-covering is $2$, and we are "over-covering" $F_4$ by one element (i.e. $6/5$) while minimizing the total sum of $U$ over the coverings (which for $\mathcal{U}(2,F_4,1)$ is $6\cdot 2=12$). If $\mathcal{U}(2,F_4,1)=\{\{0,1\},\{1/2,1/3\},\{2/3,1/4\},\{3/4,1/5\},\{2/5,3/5\},\{4/5,6/5\}\}$, then, e.g.,
$$\mathcal{U}(2,F_4,2)=\{\{0,1/2\},\{1/3,1\},\{2/3,1/4\},\{3/4,1/5\},\{2/5,3/5\},\{4/5,6/5\}\}$$
and
$$\mathcal{U}(2,F_4,3)=\{\{0,1/3\},\{1/2,1\},\{2/3,1/4\},\{3/4,1/5\},\{2/5,3/5\},\{4/5,6/5\}\}$$
Also note, for the counting measure $U$, where $\varepsilon>0$ and $\varepsilon\in\mathrm{range}(U)$ (i.e. $\varepsilon\in\mathbb{N}$), the smallest admissible value of $\varepsilon$ is $1$.
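A small sketch of def. 4 for this example; pairing the listed points in order (one of many possible groupings) reproduces the covering $\mathcal{U}(2,F_4,1)$ above:

```python
from fractions import Fraction as Q

F4 = [Q(0), Q(1), Q(1, 2), Q(1, 3), Q(2, 3), Q(1, 4), Q(3, 4),
      Q(1, 5), Q(2, 5), Q(3, 5), Q(4, 5)]  # the 11 elements of F_4

# A uniform 2-covering under the counting measure: pair-wise disjoint blocks of
# size exactly 2, with one extra point (6/5, outside F_4) "over-covering" the
# odd element out.
points = F4 + [Q(6, 5)]
covering = [set(points[i:i + 2]) for i in range(0, len(points), 2)]

assert all(len(block) == 2 for block in covering)  # U of each block is eps = 2
assert set.union(*covering) >= set(F4)             # the blocks cover F_4
assert sum(len(b) for b in covering) == 12         # total sum of U is 6 * 2 = 12
```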
Definition 5 (Sample of the uniform ε coverings of each term of the pre-structure). 
The sample of the uniform ε-coverings of each term of the pre-structure $(F_r)_{r\in\mathbb{N}}$ (i.e. of $F_r$) is a set of points such that, for every $\varepsilon\in\mathrm{range}(U)$ and $r\in\mathbb{N}$, we take one point from each pair-wise disjoint set in the uniform ε-coverings of $F_r$ (def. 4). In shorter notation, if
  • the element $k\in\mathbb{N}$, and
  • the set $K$ is an arbitrary uncountable set,
and the set $\Psi_\omega$ is defined as:
$$\Psi_\omega=\begin{cases}\{1,\cdots,k\} & \text{if there are } k \text{ ways of writing the sample of the uniform }\varepsilon\text{-coverings of } F_r\\ \mathbb{N} & \text{if there are countably infinitely many ways of writing the sample of the uniform }\varepsilon\text{-coverings of } F_r\\ K & \text{if there are uncountably many ways of writing the sample of the uniform }\varepsilon\text{-coverings of } F_r\end{cases}$$
then for every $\psi\in\Psi_\omega$, the set of all samples of the set of uniform ε-coverings is denoted $\mathcal{S}(\mathcal{U}(\varepsilon,F_r,\omega),\psi)$, where $\psi$ "enumerates" all possible samples of $\mathcal{U}(\varepsilon,F_r,\omega)$.
Example 6. 
From example 5, where:
1. $A=\mathbb{Q}\cap[0,1]$
2. $(F_r)_{r\in\mathbb{N}}=(\{c/d : c\in\mathbb{Z},\ d\in\mathbb{N},\ d\le r,\ 0\le c\le d\})_{r\in\mathbb{N}}$
3. $\mathcal{U}(2,F_4,1)=\{\{0,1\},\{1/2,1/3\},\{2/3,1/4\},\{3/4,1/5\},\{2/5,3/5\},\{4/5,6/5\}\}$
one sample of $\mathcal{U}(2,F_4,1)$ is:
$$\mathcal{S}(\mathcal{U}(2,F_4,1),1)=\{0,1/3,1/4,1/5,3/5,6/5\}$$
and another sample of $\mathcal{U}(2,F_4,1)$ is:
$$\mathcal{S}(\mathcal{U}(2,F_4,1),2)=\{0,1/2,1/4,3/4,2/5,4/5\}$$
Definition 6 (Entropy on the sample of uniform coverings of each term of the pre-structure) 
Since there are finitely many points in the sample of the uniform ε-coverings of each term of the pre-structure $(F_r)_{r\in\mathbb{N}}$ (def. 5), we:
1. Arrange the points in the sample of the uniform ε-coverings from least to greatest. This is denoted:
$$\mathrm{Ord}(\mathcal{S}(\mathcal{U}(\varepsilon,F_r,\omega),\psi))$$
2. Take the multi-set of absolute differences between all consecutive pairs of elements in (1). This is denoted:
$$\Delta\mathrm{Ord}(\mathcal{S}(\mathcal{U}(\varepsilon,F_r,\omega),\psi))$$
3. Normalize (2) into a probability distribution. This is defined as:
$$\mathbb{P}(\mathcal{S}(\mathcal{U}(\varepsilon,F_r,\omega),\psi))=\left\{y\Big/\textstyle\sum_{z\in\Delta\mathrm{Ord}(\mathcal{S}(\mathcal{U}(\varepsilon,F_r,\omega),\psi))}z \;:\; y\in\Delta\mathrm{Ord}(\mathcal{S}(\mathcal{U}(\varepsilon,F_r,\omega),\psi))\right\}$$
4. Take the Shannon entropy of (3) (for further reading, see [9]). This is defined as:
$$\mathbb{E}(\mathcal{S}(\mathcal{U}(\varepsilon,F_r,\omega),\psi))=-\sum_{x\in\mathbb{P}(\mathcal{S}(\mathcal{U}(\varepsilon,F_r,\omega),\psi))}x\log_2 x$$
where (4) is the entropy on the sample of the uniform ε-coverings of $F_r$.
Example 7. 
From example 6:
1. $A=\mathbb{Q}\cap[0,1]$
2. $(F_r)_{r\in\mathbb{N}}=(\{c/d : c\in\mathbb{Z},\ d\in\mathbb{N},\ d\le r,\ 0\le c\le d\})_{r\in\mathbb{N}}$
3. $\mathcal{U}(2,F_4,1)=\{\{0,1\},\{1/2,1/3\},\{2/3,1/4\},\{3/4,1/5\},\{2/5,3/5\},\{4/5,6/5\}\}$
4. $\mathcal{S}(\mathcal{U}(2,F_4,1),1)=\{0,1/3,1/4,1/5,3/5,6/5\}$
Then:
1. $\mathrm{Ord}(\mathcal{S}(\mathcal{U}(2,F_4,1),1))=(0,1/5,1/4,1/3,3/5,6/5)$, which arranges the elements of $\mathcal{S}(\mathcal{U}(2,F_4,1),1)$ from least to greatest.
2. $\Delta\mathrm{Ord}(\mathcal{S}(\mathcal{U}(2,F_4,1),1))=\{|0-1/5|,|1/5-1/4|,|1/4-1/3|,|1/3-3/5|,|3/5-6/5|\}=\{1/5,1/20,1/12,4/15,3/5\}$
3. Since $\sum_{z\in\Delta\mathrm{Ord}(\mathcal{S}(\mathcal{U}(2,F_4,1),1))}z=1/5+1/20+1/12+4/15+3/5=6/5$, we use this to normalize (2) into a probability distribution:
$$\mathbb{P}(\mathcal{S}(\mathcal{U}(2,F_4,1),1))=\{y/(6/5) : y\in\Delta\mathrm{Ord}(\mathcal{S}(\mathcal{U}(2,F_4,1),1))\}=\{(5/6)y : y\in\{1/5,1/20,1/12,4/15,3/5\}\}=\{1/6,1/24,5/72,2/9,1/2\}$$
4. Hence we take the entropy of $\{1/6,1/24,5/72,2/9,1/2\}$:
$$\mathbb{E}(\mathcal{S}(\mathcal{U}(2,F_4,1),1))=-\sum_{x\in\mathbb{P}(\mathcal{S}(\mathcal{U}(2,F_4,1),1))}x\log_2 x=-\big((1/6)\log_2(1/6)+(1/24)\log_2(1/24)+(5/72)\log_2(5/72)+(2/9)\log_2(2/9)+(1/2)\log_2(1/2)\big)\approx 1.8713$$
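Steps (1)-(4) can be reproduced in a few lines of Python; the sketch below implements the ordering, consecutive differences, normalization, and entropy for the sample in this example:

```python
from fractions import Fraction as Q
from math import log2

def entropy_of_sample(sample):
    """Def. 6: order the sample, take consecutive gaps, normalize the gaps
    into a probability distribution, and return its base-2 Shannon entropy."""
    ordered = sorted(sample)
    gaps = [b - a for a, b in zip(ordered, ordered[1:])]
    total = sum(gaps)
    dist = [g / total for g in gaps]
    return -sum(float(p) * log2(float(p)) for p in dist)

sample = [Q(0), Q(1, 3), Q(1, 4), Q(1, 5), Q(3, 5), Q(6, 5)]
print(round(entropy_of_sample(sample), 4))  # 1.8713, matching the example
```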
Definition 7 (Pre-Structure Converging Uniformly to A).
Using def. 4, 5, and 6, for every $r\in\mathbb{N}$ and every $\varepsilon\in\mathrm{range}(U)$ with $\varepsilon>0$: if the set $A$ is finite, we want
$$\lim_{\varepsilon\to 0}\,\sup_{r\in\mathbb{N}}\,\sup_{\omega\in\Omega}\,\sup_{\psi\in\Psi_\omega}\mathbb{E}(\mathcal{S}(\mathcal{U}(\varepsilon,F_r,\omega),\psi))=\mathbb{E}(F_r)$$
and if the set $A$ is non-finite:
$$\lim_{\varepsilon\to 0}\,\sup_{r\in\mathbb{N}}\,\sup_{\omega\in\Omega}\,\sup_{\psi\in\Psi_\omega}\mathbb{E}(\mathcal{S}(\mathcal{U}(\varepsilon,F_r,\omega),\psi))=+\infty$$
In either case we say the pre-structure $(F_r)_{r\in\mathbb{N}}$ converges uniformly to $A$ (in shorter notation):
$$(F_r)_{r\in\mathbb{N}}\to A$$
(Note we wish to define a uniform convergence of a sequence of sets to $A$, since the definition is analogous to a uniform measure.)
Theorem 1. 
Show every pre-structure of A converges uniformly to A.
Example 8. 
I assume, using example 4, that if
1. $A=\mathbb{Q}\cap[0,1]$
2. $(F_r)_{r\in\mathbb{N}}=(\{c/d : c\in\mathbb{Z},\ d\in\mathbb{N},\ d\le r,\ 0\le c\le d\})_{r\in\mathbb{N}}$
then $(F_r)_{r\in\mathbb{N}}\to A$. This still needs a proof.
Definition 8 (Equivalent Pre-Structures).
The pre-structures $(F_r)_{r\in\mathbb{N}}$ and $(F'_j)_{j\in\mathbb{N}}$ of $A$ are equivalent if, for all $f\in\mathbb{R}^A$ where (from def. 3) $\mathbb{E}_{U}[f(X_r)]\to\mathbb{E}_{U}[f]$ and $\mathbb{E}_{U'}[f(X'_j)]\to\mathbb{E}_{U'}[f]$, we have:
$$\mathbb{E}_{U}[f]=\mathbb{E}_{U'}[f]$$
Definition 9 (Equivalent Pre-Structures (Alternate Def.)).
The pre-structures $(F_r)_{r\in\mathbb{N}}$ and $(F'_j)_{j\in\mathbb{N}}$ of $A$ are equivalent if, when
$$r_j=\underset{r\in\mathbb{N}}{\operatorname{arg\,min}}\left\{U(F_r\setminus F'_j) : F'_j\subseteq F_r\right\}$$
is the $r$-value (for every $j\in\mathbb{N}$) where $U(F_r\setminus F'_j)$ is minimized,
$$\overline{r}_j=\underset{r\in\mathbb{N}}{\operatorname{arg\,max}}\left\{U(F'_j\setminus F_r) : F_r\subseteq F'_j\right\}$$
is the $r$-value (for every $j\in\mathbb{N}$) where $U(F'_j\setminus F_r)$ is maximized,
$$j_r=\underset{j\in\mathbb{N}}{\operatorname{arg\,min}}\left\{U(F'_j\setminus F_r) : F_r\subseteq F'_j\right\}$$
is the $j$-value (for every $r\in\mathbb{N}$) where $U(F'_j\setminus F_r)$ is minimized, and
$$\overline{j}_r=\underset{j\in\mathbb{N}}{\operatorname{arg\,max}}\left\{U(F_r\setminus F'_j) : F'_j\subseteq F_r\right\}$$
is the $j$-value (for every $r\in\mathbb{N}$) where $U(F_r\setminus F'_j)$ is maximized, we have:
$$\sup\left\{\inf\left\{U\!\left(\bigcup_{j=1}^{\infty}F_{r_j}\setminus F'_j\right),\,U\!\left(\bigcup_{j=1}^{\infty}F'_j\setminus F_{\overline{r}_j}\right)\right\},\ \inf\left\{U\!\left(\bigcup_{r=1}^{\infty}F'_{j_r}\setminus F_r\right),\,U\!\left(\bigcup_{r=1}^{\infty}F_r\setminus F'_{\overline{j}_r}\right)\right\}\right\}<+\infty$$
which means the pre-structures $(F_r)_{r\in\mathbb{N}}$ and $(F'_j)_{j\in\mathbb{N}}$ are equivalent.
Example 9. 
From example 3: let $A=\mathbb{R}$ with $(F_r)_{r\in\mathbb{N}}=([-r,r])_{r\in\mathbb{N}}$, let $\mathcal{C}$ be the Cantor set, and let $(F'_j)_{j\in\mathbb{N}}=([-j,j]\setminus\{x+j : x\in\mathcal{C}\})_{j\in\mathbb{N}}$. With either pre-structure, $U$ is the 1-d Lebesgue measure, and (using equation 1.2.12) we get:
$$\sup\{\inf\{+\infty,0\},\inf\{0,+\infty\}\}=\sup\{0,0\}=0<+\infty$$
hence the two pre-structures are equivalent.
Definition 10 (Non-Equivalent Pre-Structures). 
The pre-structures $(F_r)_{r\in\mathbb{N}}$ and $(F'_j)_{j\in\mathbb{N}}$ of $A$ are non-equivalent if there exists an $f\in\mathbb{R}^A$ where (from def. 3) $\mathbb{E}_{U}[f(X_r)]\to\mathbb{E}_{U}[f]$ and $\mathbb{E}_{U'}[f(X'_j)]\to\mathbb{E}_{U'}[f]$ such that:
$$\mathbb{E}_{U}[f]\neq\mathbb{E}_{U'}[f]$$
Definition 11 (Non-Equivalent Pre-Structures (Alternate Def.)).
The pre-structures $(F_r)_{r\in\mathbb{N}}$ and $(F'_j)_{j\in\mathbb{N}}$ of $A$ are non-equivalent if, when
$$r_j=\underset{r\in\mathbb{N}}{\operatorname{arg\,min}}\left\{U(F_r\setminus F'_j) : F'_j\subseteq F_r\right\}$$
is the $r$-value (for every $j\in\mathbb{N}$) where $U(F_r\setminus F'_j)$ is minimized,
$$\overline{r}_j=\underset{r\in\mathbb{N}}{\operatorname{arg\,max}}\left\{U(F'_j\setminus F_r) : F_r\subseteq F'_j\right\}$$
is the $r$-value (for every $j\in\mathbb{N}$) where $U(F'_j\setminus F_r)$ is maximized,
$$j_r=\underset{j\in\mathbb{N}}{\operatorname{arg\,min}}\left\{U(F'_j\setminus F_r) : F_r\subseteq F'_j\right\}$$
is the $j$-value (for every $r\in\mathbb{N}$) where $U(F'_j\setminus F_r)$ is minimized, and
$$\overline{j}_r=\underset{j\in\mathbb{N}}{\operatorname{arg\,max}}\left\{U(F_r\setminus F'_j) : F'_j\subseteq F_r\right\}$$
is the $j$-value (for every $r\in\mathbb{N}$) where $U(F_r\setminus F'_j)$ is maximized, we have:
$$\sup\left\{\inf\left\{U\!\left(\bigcup_{j=1}^{\infty}F_{r_j}\setminus F'_j\right),\,U\!\left(\bigcup_{j=1}^{\infty}F'_j\setminus F_{\overline{r}_j}\right)\right\},\ \inf\left\{U\!\left(\bigcup_{r=1}^{\infty}F'_{j_r}\setminus F_r\right),\,U\!\left(\bigcup_{r=1}^{\infty}F_r\setminus F'_{\overline{j}_r}\right)\right\}\right\}=+\infty$$
which means the pre-structures $(F_r)_{r\in\mathbb{N}}$ and $(F'_j)_{j\in\mathbb{N}}$ are non-equivalent.
Example 10. 
From example 4: if $A=\mathbb{Q}$, the pre-structures $(F_r)_{r\in\mathbb{N}}=(\{c/r! : c\in\mathbb{Z},\ -r\cdot r!\le c\le r\cdot r!\})_{r\in\mathbb{N}}$ and $(F'_j)_{j\in\mathbb{N}}=(\{c/d : c\in\mathbb{Z},\ d\in\mathbb{N},\ d\le j,\ -dj\le c\le dj\})_{j\in\mathbb{N}}$ are non-equivalent, since for $f:\mathbb{Q}\to\mathbb{R}$ where:
$$f(x)=\begin{cases}1 & x\in\{(2n+1)/2^m : n\in\mathbb{Z},\ m\in\mathbb{N}\}\\ 0 & x\notin\{(2n+1)/2^m : n\in\mathbb{Z},\ m\in\mathbb{N}\}\end{cases}$$
we have $\mathbb{E}_{U}[f]=1$ (i.e. the expected value of $f$ on $(F_r)_{r\in\mathbb{N}}$) and $\mathbb{E}_{U'}[f]=1/3$ (i.e. the expected value of $f$ on $(F'_j)_{j\in\mathbb{N}}$), which means
$$\mathbb{E}_{U}[f]\neq\mathbb{E}_{U'}[f]$$
hence, from def. 10, the pre-structures $(F_r)_{r\in\mathbb{N}}$ and $(F'_j)_{j\in\mathbb{N}}$ are non-equivalent.
Example 11. 
Suppose $A=\mathbb{Z}$, where $(F_r)_{r\in\mathbb{N}}=(\{s\in\mathbb{Z} : -r\le s\le r\})_{r\in\mathbb{N}}$, $(F'_j)_{j\in\mathbb{N}}=(\{s\in\mathbb{Z} : -2j\le s\le 2j\})_{j\in\mathbb{N}}$, and
$$f(x)=\begin{cases}-2x+1 & x=r,\ r\text{ is odd},\ r>0\\ 0 & x=r,\ r\text{ is even},\ r<0\\ 2x+1 & x=r,\ r\text{ is even},\ r\ge 0\\ 0 & x=r,\ r\text{ is odd},\ r\le 0\end{cases}$$
Then $\mathbb{E}_{U}[f]$ (the expected value of $f$ on $(F_r)_{r\in\mathbb{N}}$) is undefined, and $\mathbb{E}_{U'}[f]=1$ (the expected value of $f$ on $(F'_j)_{j\in\mathbb{N}}$). Since at least one of the pre-structures, i.e. $(F'_j)_{j\in\mathbb{N}}$, has a defined expected value and $\mathbb{E}_{U}[f]\neq\mathbb{E}_{U'}[f]$ (an undefined value does not equal $1$), we can say that $(F_r)_{r\in\mathbb{N}}$ and $(F'_j)_{j\in\mathbb{N}}$ are non-equivalent.
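A quick numerical check of this example (assuming sign conventions partly reconstructed from context, since the extraction appears to have dropped a minus sign): the averages over $\{-2j,\dots,2j\}$ equal $1$ exactly, while the averages over $\{-r,\dots,r\}$ oscillate with the parity of $r$:

```python
def f(x):
    """Reconstructed test function on Z; this sign choice reproduces the
    stated expected values (an assumption, not verbatim from the paper)."""
    if x % 2 == 0 and x >= 0:
        return 2 * x + 1   # even, non-negative
    if x % 2 != 0 and x > 0:
        return -2 * x + 1  # odd, positive
    return 0               # everything else

def avg(lo, hi):
    """Discrete uniform average of f over {lo, ..., hi} (counting measure)."""
    xs = list(range(lo, hi + 1))
    return sum(f(x) for x in xs) / len(xs)

# On F'_j = {-2j, ..., 2j} the average is exactly 1 for every j:
assert all(avg(-2 * j, 2 * j) == 1 for j in range(1, 50))

# On F_r = {-r, ..., r} the averages oscillate with the parity of r
# (1 for even r, 0 for odd r), so the limit in def. 3 does not exist:
print(avg(-10, 10), avg(-11, 11))
```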
Definition 12 (Pre-Structures Converging Sublinearly, Linearly, or Superlinearly to A Compared to Another Pre-Structure).
Suppose the pre-structures $(F_r)_{r\in\mathbb{N}}$ and $(F'_j)_{j\in\mathbb{N}}$ are non-equivalent and converge uniformly to $A$, and suppose that for every $\varepsilon\in\mathrm{range}(U)$ with $\varepsilon>0$ and every $r\in\mathbb{N}$:
(a) From def. 5 and 6, we set:
$$\overline{|\mathcal{S}(\mathcal{U}(\varepsilon,F_r,\omega),\psi)|}=\inf\big\{|\mathcal{S}(\mathcal{U}(\varepsilon,F'_j,\omega),\psi)| : j\in\mathbb{N},\ \omega\in\Omega,\ \psi\in\Psi_\omega,\ \mathbb{E}(\mathcal{S}(\mathcal{U}(\varepsilon,F'_j,\omega),\psi))\ge\mathbb{E}(\mathcal{S}(\mathcal{U}(\varepsilon,F_r,\omega),\psi))\big\}$$
so that (using 1.2.14) we have:
$$\overline{\alpha}(\varepsilon,r,\omega,\psi)=|\mathcal{S}(\mathcal{U}(\varepsilon,F_r,\omega),\psi)|\,\big/\,\overline{|\mathcal{S}(\mathcal{U}(\varepsilon,F_r,\omega),\psi)|}$$
(b) From def. 5 and 6, we set:
$$\underline{|\mathcal{S}(\mathcal{U}(\varepsilon,F_r,\omega),\psi)|}=\sup\big\{|\mathcal{S}(\mathcal{U}(\varepsilon,F'_j,\omega),\psi)| : j\in\mathbb{N},\ \omega\in\Omega,\ \psi\in\Psi_\omega,\ \mathbb{E}(\mathcal{S}(\mathcal{U}(\varepsilon,F'_j,\omega),\psi))\le\mathbb{E}(\mathcal{S}(\mathcal{U}(\varepsilon,F_r,\omega),\psi))\big\}$$
so that (using 1.2.16) we get:
$$\underline{\alpha}(\varepsilon,r,\omega,\psi)=|\mathcal{S}(\mathcal{U}(\varepsilon,F_r,\omega),\psi)|\,\big/\,\underline{|\mathcal{S}(\mathcal{U}(\varepsilon,F_r,\omega),\psi)|}$$
1. If, using equations 1.2.15 and 1.2.17, we have:
$$\limsup_{\varepsilon\to0}\,\limsup_{r\to\infty}\,\sup_{\omega\in\Omega}\,\sup_{\psi\in\Psi_\omega}\overline{\alpha}(\varepsilon,r,\omega,\psi)=\liminf_{\varepsilon\to0}\,\liminf_{r\to\infty}\,\sup_{\omega\in\Omega}\,\sup_{\psi\in\Psi_\omega}\underline{\alpha}(\varepsilon,r,\omega,\psi)=0$$
we say $(F_r)_{r\in\mathbb{N}}$ converges uniformly to $A$ at a superlinear rate compared to $(F'_j)_{j\in\mathbb{N}}$.
2. If, using equations 1.2.15 and 1.2.17, we have any of:
(a) $0\le\liminf_{\varepsilon\to0}\liminf_{r\to\infty}\sup_{\omega\in\Omega}\sup_{\psi\in\Psi_\omega}\overline{\alpha}(\varepsilon,r,\omega,\psi)<+\infty$ and $0<\limsup_{\varepsilon\to0}\limsup_{r\to\infty}\sup_{\omega\in\Omega}\sup_{\psi\in\Psi_\omega}\underline{\alpha}(\varepsilon,r,\omega,\psi)\le+\infty$
(b) $0<\liminf_{\varepsilon\to0}\liminf_{r\to\infty}\sup_{\omega\in\Omega}\sup_{\psi\in\Psi_\omega}\overline{\alpha}(\varepsilon,r,\omega,\psi)\le+\infty$ and $0\le\limsup_{\varepsilon\to0}\limsup_{r\to\infty}\sup_{\omega\in\Omega}\sup_{\psi\in\Psi_\omega}\underline{\alpha}(\varepsilon,r,\omega,\psi)<+\infty$
(c) $0\le\limsup_{\varepsilon\to0}\limsup_{r\to\infty}\sup_{\omega\in\Omega}\sup_{\psi\in\Psi_\omega}\overline{\alpha}(\varepsilon,r,\omega,\psi)<+\infty$ and $0<\liminf_{\varepsilon\to0}\liminf_{r\to\infty}\sup_{\omega\in\Omega}\sup_{\psi\in\Psi_\omega}\underline{\alpha}(\varepsilon,r,\omega,\psi)\le+\infty$
(d) $0<\limsup_{\varepsilon\to0}\limsup_{r\to\infty}\sup_{\omega\in\Omega}\sup_{\psi\in\Psi_\omega}\overline{\alpha}(\varepsilon,r,\omega,\psi)\le+\infty$ and $0\le\liminf_{\varepsilon\to0}\liminf_{r\to\infty}\sup_{\omega\in\Omega}\sup_{\psi\in\Psi_\omega}\underline{\alpha}(\varepsilon,r,\omega,\psi)<+\infty$
we then say $(F_r)_{r\in\mathbb{N}}$ converges uniformly to $A$ at a linear rate compared to $(F'_j)_{j\in\mathbb{N}}$.
3. If, using equations 1.2.15 and 1.2.17, we have:
$$\liminf_{\varepsilon\to0}\,\liminf_{r\to\infty}\,\sup_{\omega\in\Omega}\,\sup_{\psi\in\Psi_\omega}\overline{\alpha}(\varepsilon,r,\omega,\psi)=\limsup_{\varepsilon\to0}\,\limsup_{r\to\infty}\,\sup_{\omega\in\Omega}\,\sup_{\psi\in\Psi_\omega}\underline{\alpha}(\varepsilon,r,\omega,\psi)=+\infty$$
we say $(F_r)_{r\in\mathbb{N}}$ converges uniformly to $A$ at a sublinear rate compared to $(F'_j)_{j\in\mathbb{N}}$.
Note 1. 
Since def. 12 is difficult to apply, we make assumptions (without proofs) for the examples below:
Example 12 (Example of pre-structure converging super-linearly to A compared to that of another pre-structure). 
From example 4:
1. $A=\mathbb{Q}\cap[0,1]$
2. $(F_r)_{r\in\mathbb{N}}=(\{s/r! : 0\le s\le r!\})_{r\in\mathbb{N}}$
3. $(F'_j)_{j\in\mathbb{N}}=(\{c/d : c\in\mathbb{Z},\ d\in\mathbb{N},\ d\le j,\ 0\le c\le d\})_{j\in\mathbb{N}}$
We assume that $(F_r)_{r\in\mathbb{N}}$ converges to $A$ at a superlinear rate compared to $(F'_j)_{j\in\mathbb{N}}$.
Example 13 (Obvious Example of pre-structure converging linearly to A compared to that of another pre-structure). 
Consider the following:
1. $A=\mathbb{Q}\cap[0,1]$
2. $(F_r)_{r\in\mathbb{N}}=(\{s/r! : 0\le s\le r!\})_{r\in\mathbb{N}}$
3. $(F'_j)_{j\in\mathbb{N}}=(\{w/(2j)! : w\in\mathbb{Z},\ 0\le w\le (2j)!\})_{j\in\mathbb{N}}$
We assume that $(F_r)_{r\in\mathbb{N}}$ converges to $A$ at a linear rate compared to $(F'_j)_{j\in\mathbb{N}}$, since, using programming, we assume:
$$0<\liminf_{\varepsilon\to0}\,\liminf_{r\to\infty}\,\sup_{\omega\in\Omega}\,\sup_{\psi\in\Psi_\omega}\overline{\alpha}(\varepsilon,r,\omega,\psi)=\limsup_{\varepsilon\to0}\,\limsup_{r\to\infty}\,\sup_{\omega\in\Omega}\,\sup_{\psi\in\Psi_\omega}\underline{\alpha}(\varepsilon,r,\omega,\psi)<+\infty$$
Example 14 (Non-Obvious Example of pre-structure converging linearly to A compared to another pre-structure). 
If $\lfloor\cdot\rceil$ is the nearest integer function and $\lfloor\cdot\rfloor$ is the floor function, consider the following:
1. $A=\{a : a\in\mathbb{Q}\cap[0,1]\}$
2. $(F_r)_{r\in\mathbb{N}}=(\{s/r! : 0\le s\le r!\})_{r\in\mathbb{N}}$
3. $(F'_j)_{j\in\mathbb{N}}=\big(\{\lfloor(s/2^z)^2\rceil/j! : 0\le s\le(j!)^{1/(7z)},\ 0\le z\le\lfloor\log_2((j+1)/3)\rfloor\}\cap[0,1]\big)_{j\in\mathbb{N}}$ (we choose this pre-structure since, if $\log_2(|F_j|)$ is the highest entropy (def. 6) that $\mathbb{E}(F_j)$ could attain for every $j\in\mathbb{N}$, we say $(F'_j)_{j\in\mathbb{N}}$ has a higher entropy per element than $(F_r)_{r\in\mathbb{N}}$ if there exists a $k\in\mathbb{N}$ such that for all $j,r\ge k$, $\mathbb{E}(F'_j)/\log_2(|F'_j|)>\mathbb{E}(F_r)/\log_2(|F_r|)$).
Despite $(F'_j)_{j\in\mathbb{N}}$ having a higher entropy per element, $(F_r)_{r\in\mathbb{N}}$ converges to $A$ at a linear rate compared to $(F'_j)_{j\in\mathbb{N}}$, since, using programming, we assume:
$$\liminf_{\varepsilon\to0}\,\liminf_{r\to\infty}\,\sup_{\omega\in\Omega}\,\sup_{\psi\in\Psi_\omega}\overline{\alpha}(\varepsilon,r,\omega,\psi)=0$$
$$\limsup_{\varepsilon\to0}\,\limsup_{r\to\infty}\,\sup_{\omega\in\Omega}\,\sup_{\psi\in\Psi_\omega}\underline{\alpha}(\varepsilon,r,\omega,\psi)=+\infty$$
which should satisfy criterion (2a) in def. 12.
Theorem 2. 
If $(F_r)_{r\in\mathbb{N}}$ converges superlinearly to $A$ compared to $(F'_j)_{j\in\mathbb{N}}$, then $(F'_j)_{j\in\mathbb{N}}$ converges sublinearly to $A$ compared to $(F_r)_{r\in\mathbb{N}}$.
Example 15 (Example of pre-structure converging sub-linearly compared to another pre-structure). 
In example 12, if we swap $(F_r)_{r\in\mathbb{N}}$ with $(F'_j)_{j\in\mathbb{N}}$, so that:
1. $A=\mathbb{Q}\cap[0,1]$
2. $(F_r)_{r\in\mathbb{N}}=(\{c/d : c\in\mathbb{Z},\ d\in\mathbb{N},\ d\le r,\ 0\le c\le d\})_{r\in\mathbb{N}}$
3. $(F'_j)_{j\in\mathbb{N}}=(\{s/j! : 0\le s\le j!\})_{j\in\mathbb{N}}$
then we assume (by thm. 2) that $(F_r)_{r\in\mathbb{N}}$ converges to $A$ at a sublinear rate compared to $(F'_j)_{j\in\mathbb{N}}$.

1.3. Question on Preliminary Definitions

1.
Are there “simpler" alternatives to either of the preliminary definitions? (Keep this in mind as we continue reading).

2. Main Question

Does there exist a unique extension (or a method that constructively defines a unique extension) of the expected value of $f$, when the value is finite, using the uniform probability measure [[2], p.32-37] on sets measurable in the Carathéodory sense, such that we replace an $f$ with infinite or undefined expected value by $f$ defined on a chosen pre-structure depending on $A$, where:
1. The expected value of $f$ on each term of the pre-structure is finite,
2. The pre-structure converges uniformly to $A$,
3. The pre-structure converges uniformly to $A$ at a linear or superlinear rate compared to other non-equivalent pre-structures of $A$ which satisfy (1) and (2),
4. The generalized expected value of $f$ on a pre-structure (i.e. an extension of def. 3 to answer the full question) has a unique and finite value, such that the pre-structure satisfies (1), (2), and (3),
5. A choice function is defined which chooses a pre-structure from $A$ satisfying (1), (2), (3), and (4) for the largest possible subset of $\mathbb{R}^A$,
6. If there is more than one choice function satisfying (1)-(5), we choose the choice function with the "simplest form", meaning that, for a general pre-structure of $A$, when each choice function is fully expanded, we take the one with the fewest variables/numbers (excluding those with quantifiers)?
How do we answer this question? (See §3.1, §3.2 & §3.4 for a partial answer.)

3. Informal Attempt to Answer Main Question

(I advise using a programming language such as Mathematica, Python, JavaScript, or Matlab to understand the definitions of the answer below.)

3.1. Generalized Expected Values

If the image of $f$ under $A$ is $f[A]:=\{f(x) : x\in A\}$, then from def. 2 and 7 we take a pre-structure $(F_r)_{r\in\mathbb{N}}$ of $f[A]$, where:
$$(F_r)_{r\in\mathbb{N}}\to f[A]$$
and take the pre-image under $f$ of each $F_r$ (defined as $f^{-1}[F_r]:=\{x\in A : f(x)\in F_r\}$), such that:
$$(f^{-1}[F_r])_{r\in\mathbb{N}}\to A$$
However, note that the expected value of $f$ on $(f^{-1}[F_r])_{r\in\mathbb{N}}$ (def. 3) may still be infinite (e.g. for unbounded $f$). Hence, for every $r\in\mathbb{N}$, we take a pre-structure $(F_{r,t_r})_{t_r\in\mathbb{N}}$ of $f^{-1}[F_r]$, where:
$$(\forall r\in\mathbb{N})\left((F_{r,t_r})_{t_r\in\mathbb{N}}\to f^{-1}[F_r]\right)$$
Thus, the generalized expected value $\ddot{\mathbb{E}}_{U}[f]$ satisfies:
$$(\forall\epsilon>0)(\exists N\in\mathbb{N})(\forall r\in\mathbb{N})(\forall t_r\in\mathbb{N})\left(r,t_r\ge N\Rightarrow\left|\frac{1}{U(F_{r,t_r})}\int_{F_{r,t_r}}f\,dx-\ddot{\mathbb{E}}_{U}[f]\right|<\epsilon\right)$$
and (similar to def. 2 & 3), if
$$\mathbb{E}_{U}[f(X_{r,t_r})]=\frac{1}{U(F_{r,t_r})}\int_{F_{r,t_r}}f\,dx$$
we describe the process of the generalized expected value as $\mathbb{E}_{U}[f(X_{r,t_r})]\to\ddot{\mathbb{E}}_{U}[f]$.
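A sketch of the first stage of this construction for a hypothetical unbounded function (not from the paper): take $f(x)=1/x$ on $A=(0,1]$, so $f[A]=[1,+\infty)$ with pre-structure $F_r=[1,r]$ and pre-image $f^{-1}[F_r]=[1/r,1]$. The first-level averages still diverge, which is exactly why the second level of pre-structures $(F_{r,t_r})_{t_r\in\mathbb{N}}$ is needed:

```python
from math import log

# Hypothetical unbounded f(x) = 1/x on A = (0, 1], so f[A] = [1, +inf).
# With the pre-structure F_r = [1, r] of f[A], the pre-image under f is
# f^{-1}[F_r] = [1/r, 1], and U is the 1-d Lebesgue measure on it.

def average_on_preimage(r):
    """(1/U(f^{-1}[F_r])) * integral of 1/x over [1/r, 1] = ln(r) / (1 - 1/r)."""
    return log(r) / (1 - 1 / r)

for r in [10, 100, 1000]:
    print(r, average_on_preimage(r))

# The first-level averages grow without bound, which is why each f^{-1}[F_r]
# must itself receive a second-level pre-structure (F_{r, t_r}) before the
# double limit defining the generalized expected value is taken.
```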

3.2. Choice Function

Suppose $\mathbb{S}(A)$ is the set of all pre-structures of $A$ satisfying criteria (1) and (2) of the main question, where the generalized expected value of each pre-structure, as it converges uniformly to $A$, is unique and finite. The pre-structure $((F_{r,t_r})_{t_r\in\mathbb{N}})_{r\in\mathbb{N}}\in\mathbb{S}(A)$ should be a sequence of sets satisfying criteria (1), (2), (3), and (4) of the main question, where (using the end of §3.1):
$$\mathbb{E}_{U}[f(X_{r,t_r})]\to\ddot{\mathbb{E}}_{U}[f]$$
and the pre-structure $((F'_{j,t_j})_{t_j\in\mathbb{N}})_{j\in\mathbb{N}}$ is an element of $\mathbb{S}(A)$ such that (again using the end of §3.1):
$$\mathbb{E}_{U'}[f(X'_{j,t_j})]\to\ddot{\mathbb{E}}_{U'}[f]$$
but is not an element of the set of pre-structures equivalent to $(F_{r,t_r})_{t_r\in\mathbb{N}}$ (def. 8). Further note, from (a) with equation 1.2.14 in def. 12, that we take:
$$\overline{|\mathcal{S}(\mathcal{U}(\varepsilon,F_{r,t_r},\omega),\psi)|}=\inf\big\{|\mathcal{S}(\mathcal{U}(\varepsilon,F'_{j,t_j},\omega),\psi)| : j\in\mathbb{N},\ t_j\in\mathbb{N},\ \omega\in\Omega,\ \psi\in\Psi_\omega,\ \mathbb{E}(\mathcal{S}(\mathcal{U}(\varepsilon,F'_{j,t_j},\omega),\psi))\ge\mathbb{E}(\mathcal{S}(\mathcal{U}(\varepsilon,F_{r,t_r},\omega),\psi))\big\}$$
and, from (b) with equation 1.2.16 in def. 12, we take:
$$\underline{|\mathcal{S}(\mathcal{U}(\varepsilon,F_{r,t_r},\omega),\psi)|}=\sup\big\{|\mathcal{S}(\mathcal{U}(\varepsilon,F'_{j,t_j},\omega),\psi)| : j\in\mathbb{N},\ t_j\in\mathbb{N},\ \omega\in\Omega,\ \psi\in\Psi_\omega,\ \mathbb{E}(\mathcal{S}(\mathcal{U}(\varepsilon,F'_{j,t_j},\omega),\psi))\le\mathbb{E}(\mathcal{S}(\mathcal{U}(\varepsilon,F_{r,t_r},\omega),\psi))\big\}$$
Then, using def. 5 with equations 3.2.3 and 3.2.4, we abbreviate:
$$\sup_{\omega\in\Omega}\,\sup_{\psi\in\Psi_\omega}|\mathcal{S}(\mathcal{U}(\varepsilon,F_{r,t_r},\omega),\psi)|=|\mathcal{S}(\varepsilon,F_{r,t_r})|=|\mathcal{S}|$$
$$\sup_{\omega\in\Omega}\,\sup_{\psi\in\Psi_\omega}\overline{|\mathcal{S}(\mathcal{U}(\varepsilon,F_{r,t_r},\omega),\psi)|}=\overline{|\mathcal{S}(\varepsilon,F_{r,t_r})|}=\overline{|\mathcal{S}|}$$
$$\sup_{\omega\in\Omega}\,\sup_{\psi\in\Psi_\omega}\underline{|\mathcal{S}(\mathcal{U}(\varepsilon,F_{r,t_r},\omega),\psi)|}=\underline{|\mathcal{S}(\varepsilon,F_{r,t_r})|}=\underline{|\mathcal{S}|}$$
where, using the absolute value function $||\cdot||$, we have:
S ( r ) = sup ( F r , t r + 1 ) sup F r , t r inf ( F r , t r ) inf F r , t r + 1 | | inf ( F r , t r ) inf F r , t r + 1 sup ( F r , t r + 1 ) sup F r , t r 1 | |
such that
T ( r ) = sup F r , t r + 1 inf F r , t r sup F r , t r inf F r , t r + 1 inf F r , t r inf F r , t r + 1 sup F r , t r + 1 sup F r , t r 1 inf F r , t r inf F r , t r + 1 sup F r , t r + 1 sup F r , t r
and, using equations 3.2.5, 3.2.6, 3.2.7, 3.2.8, 3.2.9 with the nearest integer function · , we want
K ( ε , F r ) = | | 1 S ( r ) | | 5 ( 5 | 5 | | S | 1 + | S | | S | ̲ + 2 | S | | S | ̲ + | S | | S | ̲ + | S | + | S | ¯ 1 + | S | ̲ / | S | 1 + | S | / | S | ¯ 1 + | S | ̲ / | S | ¯ | S | 5 | 5 | + | S | 5 ) T ( r )
such that, using equation 3.2.10, if the set $\mathbb{S}'(A)\subseteq\mathbb{S}(A)$ and $\mathcal{P}(\cdot)$ is the power-set, then the set $\mathrm{C}(A)$ is the largest element of:
$$\Big\{\mathbb{S}'(A)\subseteq\mathbb{S}(A) : (\forall\epsilon_1>0)(\exists M\in\mathbb{N})(\forall\varepsilon\in\mathrm{range}(U))(\forall k\in\mathbb{N})(\forall r\in\mathbb{N})(\forall t_r\in\mathbb{N})\big(F_{r,t_r}\in\mathbb{S}'(A)\ \wedge$$
$$0<\varepsilon\le M\ \wedge\ r,t_r\ge k\Rightarrow\Big|\,|\mathcal{S}(\varepsilon,F_{r,t_r})|\,\mathrm{K}(\varepsilon,F_{r,t_r})-\inf_{F_{g,t_g}\in\mathbb{S}'(A)}|\mathcal{S}(\varepsilon,F_{g,t_g})|\,\mathrm{K}(\varepsilon,F_{g,t_g})\Big|<\epsilon_1\big)\Big\}\subseteq\mathcal{P}(\mathbb{S}(A))$$
w.r.t. inclusion; the choice function is $\mathrm{C}(A)$ if it contains just one element.
Otherwise, for $k\in\mathbb{N}$, suppose $\mathrm{C}^k(A)$ represents the $k$-th iteration of the choice function applied to $A$ (e.g. $\mathrm{C}^3(A)=\mathrm{C}(\mathrm{C}(\mathrm{C}(A)))$), where the infinite iteration of $\mathrm{C}(A)$ (if it exists) is $\lim_{k\to\infty}\mathrm{C}^k(A)=\mathrm{C}^{\infty}(A)$. Therefore, taking:
$$\mathrm{C}'(A)=\begin{cases}\mathrm{C}(A) & \text{if }\mathrm{C}(A)\text{ contains one element}\\ \mathrm{C}^j(A) & \text{if }\exists j\in\mathbb{N}\text{ such that, for all }k\ge j,\ \mathrm{C}^k(A)\text{ contains one element}\\ \mathrm{C}^{\infty}(A) & \text{if it exists and }\mathrm{C}^{\infty}(A)\text{ contains one element}\end{cases}$$
we say $\mathrm{C}'(A)$ is the choice function, and the expected value, using equation 3.2.1, is $\ddot{\mathbb{E}}_{U}[f]$.

3.3. Questions on Choice Function

1. Suppose we define the function $f:A\to\mathbb{R}$. What unique pre-structure would $\mathrm{C}(A)$ contain (if it exists) for:
(a) $A=\mathbb{Z}$, where if $((F_{r,t_r})_{t_r\in\mathbb{N}})_{r\in\mathbb{N}}\in\mathrm{C}(\mathbb{Z})$ and $f=\mathrm{id}_{\mathbb{Z}}$, we want
$$((F_{r,t_r})_{t_r\in\mathbb{N}})_{r\in\mathbb{N}}=\left((\{m\in\mathbb{Z} : -rt_r\le m\le rt_r\})_{t_r\in\mathbb{N}}\right)_{r\in\mathbb{N}}$$
(b) $A=\mathbb{Q}$, where if $((F_{r,t_r})_{t_r\in\mathbb{N}})_{r\in\mathbb{N}}\in\mathrm{C}(\mathbb{Q})$ and $f=\mathrm{id}_{\mathbb{Q}}$, we want
$$((F_{r,t_r})_{t_r\in\mathbb{N}})_{r\in\mathbb{N}}=\left((\{s/r! : s\in\mathbb{Z},\ -r\cdot r!\cdot t_r\le s\le t_r\cdot r\cdot r!\})_{t_r\in\mathbb{N}}\right)_{r\in\mathbb{N}}$$
(c) $A=\mathbb{R}$, where we are not sure what $((F_{r,t_r})_{t_r\in\mathbb{N}})_{r\in\mathbb{N}}\in\mathrm{C}(\mathbb{R})$ would be if $f=\mathrm{id}_{\mathbb{R}}$. What would $((F_{r,t_r})_{t_r\in\mathbb{N}})_{r\in\mathbb{N}}$ be, if it is unique?

3.4. Increasing Chances of an Unique and Finite Expected Value

In case $\mathrm{C}'(A)$ in equation 3.2.12 does not exist, suppose there exists a unique and finite $\ddot{\mathbb{E}}_{U}[f]$ (see §3.1) where:
$$((F_{r,t_r})_{t_r\in\mathbb{N}})_{r\in\mathbb{N}}\in\mathrm{C}(A)\Rightarrow\ddot{\mathbb{E}}_{U}[f]\text{ is unique and finite}$$
Then $\ddot{\mathbb{E}}_{U}[f]$ is the generalized expected value w.r.t. the choice function $\mathrm{C}$, which answers criteria (1), (2), (3), (4), and perhaps (5) of the question in §2; however, there is still a chance that equation 3.4.1 fails to give a unique $\ddot{\mathbb{E}}_{U}[f]$. Hence, for $k\in\mathbb{N}$, we take the $k$-th iteration of the choice function $\mathrm{C}$ in 3.2.11, such that there exists a $j\in\mathbb{N}$ where, for all $k\ge j$, if $\ddot{\mathbb{E}}_{U}[f]$ is unique and finite, then the following is the generalized expected value w.r.t. a finitely iterated $\mathrm{C}$.
In other words, if the $k$-th iteration of $\mathrm{C}$ is represented as $\mathrm{C}^{[k]}$ (where, e.g., $\mathrm{C}^{[3]}(A)=\mathrm{C}(\mathrm{C}(\mathrm{C}(A)))$), we want a unique and finite $\ddot{\mathbb{E}}_{U}[f]$ where:
$$(\exists j\in\mathbb{N})(\forall k\in\mathbb{N})\Big(k\ge j\Rightarrow\big(((F_{r,t_r})_{t_r\in\mathbb{N}})_{r\in\mathbb{N}}\in\mathrm{C}^{[k]}(A)\Rightarrow\ddot{\mathbb{E}}_{U}[f]\text{ is unique and finite}\big)\Big)$$
If this still does not give a unique and finite expected value, we then take the most generalized expected value w.r.t. an infinitely iterated $\mathrm{C}$, where if the infinite iteration of $\mathrm{C}$ is written $\lim_{k\to\infty}\mathrm{C}^{[k]}(f[A])=\mathrm{C}^{[\infty]}(f[A])$, we want a unique $\ddot{\mathbb{E}}_{U}[f]$ where:
$$((F_{r,t_r})_{t_r\in\mathbb{N}})_{r\in\mathbb{N}}\in\mathrm{C}^{[\infty]}(A)\Rightarrow\ddot{\mathbb{E}}_{U}[f]\text{ is unique and finite}$$
However, in such cases, $\ddot{\mathbb{E}}_{U}[f]$ should only be used for functions whose expected value is infinite or undefined, or for worst-case functions: badly behaved $f:A\to\mathbb{R}$ (where for $n\in\mathbb{N}$, $A\subseteq\mathbb{R}^n$) defined on infinitely many points covering an infinite expanse of space. For example:
1. For a worst-case $f$ defined on a countably infinite $A$ (e.g. countably infinite "pseudo-random" points non-uniformly scattered across the real plane), one may need just one iteration of $\mathrm{C}$ (since most functions on countable sets need just one iteration of $\mathrm{C}$ for $\ddot{\mathbb{E}}_{U}[f]$ to be unique); otherwise, one may use equation 3.4.2 for finitely many iterations of $\mathrm{C}$.
2. For a worst-case $f$ defined on an uncountable $A$, we might have to use equation 3.4.3, as averaging such a function might otherwise be nearly impossible. We can imagine this function as an uncountable number of "pseudo-random" points non-uniformly generated on a subset of the real plane (see §4.1 for a visualization).
Note, however, that no matter how generalized and "meaningful" the extension of an expected value is, there will always be an $f$ for which the expected value does not exist.

3.5. Questions Regarding The Answer

1.
Using prevalence and shyness [3,4], can we say the set of f for which at least one of equations 3.4.1, 3.4.2 and 3.4.3 gives a unique and finite Ë_U[f] forms either a prevalent or a neither-prevalent-nor-shy subset of ℝ^A? If the subset is prevalent, this implies one of the generalized expected values is unique and finite for a "large" subset of ℝ^A; however, if the subset is neither prevalent nor shy, we need a more precise definition of "size" which gives "an exact probability that the expected values are unique & finite". Some examples (shown in the answer [10]) are:
(a)
Fractal Dimension notions
(b)
Kolmogorov Entropy
(c)
Baire Category and Porosity
2.
There may be a total of 292 variables in the choice function C (excluding quantifiers). Is there a choice function (ignoring quantifiers) which answers criteria (1), (2), (3) & (4) of the main question in §2 for a "larger" subset of ℝ^A? (This might be impossible to show with prevalence or shyness [3,4]; therefore, we again need a more precise notion of "size", with examples given in [10].)
3.
If the answer to question (2) is yes, what is the choice function C for which equations 3.4.1, 3.4.2 or 3.4.3 fully answer the question in §2?
4.
Can any of equations 3.4.1, 3.4.2 and 3.4.3 (when A is the set of all Liouville numbers [11] and f = id_A) give a finite value? What would that value be?
5.
Similar to how definition 13 in §4 approximates the expected value in definition 1, how do we approximate the expected values in equations 3.4.1, 3.4.2 and 3.4.3?
6.
Can programming be used to estimate equations 3.4.1, 3.4.2 and 3.4.3 (when a unique and finite result for one of the expected values exists)?

3.6. Applications

1.
In Quanta Magazine [12], Wood writes on Feynman path integrals: "No known mathematical procedure can meaningfully average¹ an infinite number of objects covering an infinite expanse of space in general. The path integral is more of a physics philosophy than an exact mathematical recipe." Despite Wood's statement, mathematicians Bottazzi E. and Eskew M. [13] found a constructive solution using integrals defined on filters over families of finite sets; however, the solution is not unique, as one has to choose a value in a partially ordered ring of infinite and infinitesimal elements.
(a)
Perhaps, if Bottazzi's and Eskew's filter integral [13] is not enough to resolve Wood's statement, could we replace the path integral with the expected values of equations 3.4.1, 3.4.2 and 3.4.3 (or a complete solution to §2)? (See, again, §4.1 for a visualization of Wood's statement.)
2.
As stated in §1.1, "when the Lebesgue measure of A, measurable in the Caratheodory sense, has zero or infinite volume (or undefined measure), there may be multiple, conflicting ways of defining a 'natural' uniform measure on A." This is an instance of Bertrand's Paradox, which shows that "the principle of indifference (that allows equal probability among all possible outcomes when no other information is given) may not produce definite, well-defined results for probabilities if applied uncritically, when the domain of possibilities is infinite" [14]. Using §3.1, perhaps if we take (from def. 3.2.12):
C′(A) =
  • C(A), if C(A) contains one element;
  • C^[j](A), if there exists a j ∈ ℕ such that, for all k ≥ j, C^[k](A) contains one element;
  • C^∞(A), if C^∞(A) exists and contains one element;
then, for ((F_{r,t_r})_{t_r∈ℕ})_{r∈ℕ} ∈ C′(A) and S ⊆ A, suppose we get the following:
( ∃ U(S) ∈ ℝ )( ∀ε > 0 )( ∃N ∈ ℕ )( ∀r ∈ ℕ )( ∀t_r ∈ ℕ )( r ≥ N, t_r ≥ N ⇒ | U(S ∩ F_{r,t_r}) / U(F_{r,t_r}) − U(S) | < ε )
Then U(S) might serve as a solution to Bertrand's Paradox (unless there is a better C′(A) and ((F_{r,t_r})_{t_r∈ℕ})_{r∈ℕ} ∈ C′(A) which completely solves the main question in §2).
Now consider the following:
(a)
How do we apply U(S) (or a better solution) to the usual example demonstrating Bertrand's Paradox: for an equilateral triangle inscribed in a circle, suppose a chord of the circle is chosen at random; what is the probability that the chord is longer than a side of the triangle? [15] According to Bertrand's Paradox, there are three arguments which correctly use the principle of indifference yet give different solutions to this problem [15]:
i.
The “random endpoints" method: Choose two random points on the circumference of the circle and draw the chord joining them. To calculate the probability in question imagine the triangle rotated so its vertex coincides with one of the chord endpoints. Observe that if the other chord endpoint lies on the arc between the endpoints of the triangle side opposite the first point, the chord is longer than a side of the triangle. The length of the arc is one-third of the circumference of the circle, therefore the probability that a random chord is longer than a side of the inscribed triangle is 1 / 3 .
ii.
The "random radial point" method: Choose a radius of the circle, choose a point on the radius, and construct the chord through this point and perpendicular to the radius. To calculate the probability in question imagine the triangle rotated so a side is perpendicular to the radius. The chord is longer than a side of the triangle if the chosen point is nearer the center of the circle than the point where the side of the triangle intersects the radius. The side of the triangle bisects the radius, therefore the probability a random chord is longer than a side of the inscribed triangle is 1 / 2 .
iii.
The "random midpoint" method: Choose a point anywhere within the circle and construct a chord with the chosen point as its midpoint. The chord is longer than a side of the inscribed triangle if the chosen point falls within a concentric circle of radius 1 / 2 the radius of the larger circle. The area of the smaller circle is one-fourth the area of the larger circle, therefore the probability a random chord is longer than a side of the inscribed triangle is 1 / 4 .
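The three methods above can be checked numerically. The sketch below (the helper name bertrand_probabilities is ours; the unit circle's inscribed equilateral triangle has side √3) estimates all three probabilities by Monte Carlo:

```python
import math
import random

def bertrand_probabilities(n=200_000, seed=0):
    """Estimate P(chord > triangle side) under the three chord-selection methods."""
    rng = random.Random(seed)
    side = math.sqrt(3.0)  # side of an equilateral triangle inscribed in the unit circle
    hits = [0, 0, 0]
    for _ in range(n):
        # (i) random endpoints: two uniform angles on the circumference
        a = rng.uniform(0.0, 2.0 * math.pi)
        c = rng.uniform(0.0, 2.0 * math.pi)
        chords = [2.0 * abs(math.sin((a - c) / 2.0))]
        # (ii) random radial point: uniform distance of the chord from the center
        d = rng.uniform(0.0, 1.0)
        chords.append(2.0 * math.sqrt(1.0 - d * d))
        # (iii) random midpoint: uniform point in the disk (sqrt gives uniform area)
        m = math.sqrt(rng.uniform(0.0, 1.0))
        chords.append(2.0 * math.sqrt(1.0 - m * m))
        for i, ch in enumerate(chords):
            if ch > side:
                hits[i] += 1
    return [h / n for h in hits]  # approaches [1/3, 1/2, 1/4]
```

With a few hundred thousand samples the three estimates settle near 1/3, 1/2 and 1/4, which is exactly the disagreement Bertrand's Paradox highlights.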

4. Glossary

4.1. Example of Case (2) of Worst Case Functions

(If the explanation below is difficult to understand, see the visualization [16] to accompany it; after changing the sliders, wait a couple of seconds for the graph to load.)
We wish to create a function that appears "pseudo-randomly" distributed but has infinitely many non-uniform points (i.e., points lacking complete spatial randomness [17]) in a subspace of ℝ², where the expected value or integral of the function w.r.t. the uniform probability measure [2, pp. 32–37] is non-obvious (i.e., neither the center of the space the function covers nor the area of that space).
Suppose, for real numbers x₁, x₂, y₁ and y₂, we generate an uncountable number of "nearly pseudo-random" points that are non-uniform in the subspace [x₁, x₂] × [y₁, y₂] ⊆ ℝ².
We therefore define the function as f : [x₁, x₂] → [y₁, y₂].
Now suppose b ∈ {2, 3, ⋯, 10}, where the base-b expansions of the real numbers in the interval [x₁, x₂] have infinitely many digits that approach x from the right side, so that when x₁ = x₂ we get f(x₁) = f(x₂).
Furthermore, for ℕ₀ = ℕ ∪ {0}, if r ∈ ℕ₀ and digit_b : ℝ × ℤ → {0, 1, ⋯, b − 1} is the function where digit_b(x, r) takes the digit in the b^r-th decimal fraction (i.e., the r-th digit after the radix point) of the base-b expansion of x (e.g., digit₁₀(1.789, 2) = 8), then (g_r)_{r∈ℕ₀} is a sequence of functions such that g_r : ℕ₀ → ℕ₀ is defined as:
g_r(x) = ⌊(10/b) sin(rx) + 10/b⌋   (4.1.1)
Then, for some large k ∈ ℕ and x₁, x₂ ∈ ℝ, the intermediate function (before f), f₁ : [x₁, x₂] → ℝ, is defined as
f₁(x) = Σ_{r=0}^{∞} g_{r+1}( Σ_{p=r}^{r+k} digit_b(x, p) ) / b^r = Σ_{r=0}^{∞} ⌊(10/b) sin( (r+1) Σ_{p=r}^{r+k} digit_b(x, p) ) + 10/b⌋ / b^r   (4.1.2)
where the points of f₁ are "almost pseudo-randomly" and non-uniformly distributed on [x₁, x₂] × [0, 10]. What we did was convert every digit of the base-b expansion of x to a pseudo-random number that is non-equally likely to be an integer between 0 and (10 · 10^s)/b, inclusive. Furthermore, we make the function appear truly "pseudo-random" by adding to the b^r-th decimal fraction the next k decimal fractions; however, we want to control the end-points of [0, 10^{s+1}], such that if y₁, y₂ ∈ ℝ, we convert [x₁, x₂] × [0, 10] to [x₁, x₂] × [y₁, y₂] by manipulating equation 4.1.2 to get:
f(x) = y₂ − ((y₂ − y₁)/10) f₁(x) = y₂ − ((y₂ − y₁)/10) Σ_{r=0}^{∞} ⌊(10/b) sin( (r+1) Σ_{p=r}^{r+k} digit_b(x, p) ) + 10/b⌋ / b^r   (4.1.3)
such that the larger k is, the more pseudo-random the distribution of points of f in the space [x₁, x₂] × [y₁, y₂]; but unlike most distributions of such points, the graph of f is uncountable.
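A brief numerical sketch may help: the series above can be truncated after finitely many terms and sampled at rational x. The floor in g_r (so that g_r maps into ℕ₀), the 1/b^r weighting, and the helper names (digit_b, f1, f) are our own reading of equations 4.1.1–4.1.3, so treat this as an illustration rather than a definitive implementation.

```python
from fractions import Fraction
from math import floor, sin

def digit_b(x, p, b):
    """p-th digit after the radix point of the base-b expansion of x
    (pass x as a Fraction for exact digit extraction)."""
    return floor(x * b**p) % b

def f1(x, b, k, terms=30):
    """Truncation of the series defining the intermediate function f1 (assumed form)."""
    total = 0.0
    for r in range(terms):
        s = sum(digit_b(x, p, b) for p in range(r, r + k + 1))  # k+1 consecutive digits
        g = floor((10 / b) * sin((r + 1) * s) + 10 / b)          # assumed floor, so g is in N0
        total += g / b**r
    return total

def f(x, b, k, y1, y2):
    """Rescale f1 from [0, 10] onto [y1, y2], as in equation 4.1.3."""
    return y2 - (y2 - y1) / 10 * f1(x, b, k)

# Sample the construction with b = 3 on [0, 1] x [0, 1] and a small k
# (k = 100, as in section 4.2, also works but is slower).
pts = [(i / 97, f(Fraction(i, 97), b=3, k=8, y1=0.0, y2=1.0)) for i in range(1, 97)]
```

Exact rationals (fractions.Fraction) avoid floating-point error when extracting base-b digits; with b = 3, the sampled values stay inside [y₁, y₂] = [0, 1].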

4.2. Question Regarding 4.1

Let us give a specific example. Suppose, for the function in equation 4.1.3 of §4.1, we have:
  • b = 3
  • [ x 1 , x 2 ] × [ y 1 , y 2 ] = [ 0 , 1 ] × [ 0 , 1 ]
  • k = 100
(one can also try simpler parameters). What is the expected value using equations 3.4.2 or 3.4.3 (or a more complete solution to §2), if the answer is finite and unique?
What about for f in general (i.e. in terms of b, x 1 , x 2 , y 1 , y 2 and k)?
(Note that if x₁, y₁ → −∞ and x₂, y₂ → +∞, then the function is an explicit example of the function that Wood² describes in Quanta Magazine.)

4.3. Approximating the Expected Value

Definition 13 (Approximating the Expected Value). 
In practice, the computation of this expected value may be complicated if the set A is complicated. If analytic integration does not give a closed-form solution, then a general and relatively simple way to compute the expected value (up to high accuracy) is importance sampling. To do this, we produce values X₁, X₂, …, X_M ~ IID from some density function g with support A ⊆ support(g) ⊆ ℝⁿ (hopefully with support fairly close to A), and we use the estimator:
μ̂_M ≡ ( Σ_{i=1}^{M} I(X_i ∈ A) · f(X_i) / g(X_i) ) / ( Σ_{i=1}^{M} I(X_i ∈ A) / g(X_i) )
From the law of large numbers, we can establish that E[f(X)] = lim_{M→∞} μ̂_M, so if we take M to be large then we should get a reasonably good approximation of the expected value of interest.
Note importance sampling requires three things:
1. 
We need to know whether a point x is in the set A or not;
2. 
We need to be able to generate points from a density g whose support covers A but is not too much bigger than A;
3. 
We need to be able to compute f(x) and g(x) for each point x ∈ A.
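The estimator μ̂_M above can be sketched in a few lines. The target f(x) = x² on A = [0, 1] with proposal g = Uniform(−0.5, 1.5) is a made-up example (the true mean w.r.t. the uniform measure on A is then 1/3), and the function name importance_sampling_mean is ours:

```python
import random

def importance_sampling_mean(f, in_A, sample_g, density_g, M=100_000, seed=0):
    """Self-normalized importance-sampling estimate of the mean of f w.r.t.
    the uniform probability measure on A (the estimator mu-hat_M above)."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(M):
        x = sample_g(rng)            # X_i ~ g, IID
        if in_A(x):                  # indicator I(X_i in A)
            w = 1.0 / density_g(x)   # importance weight 1 / g(X_i)
            num += w * f(x)
            den += w
    return num / den

# Made-up example: A = [0, 1], f(x) = x^2, g = Uniform(-0.5, 1.5) (density 1/2 on its support).
est = importance_sampling_mean(
    f=lambda x: x * x,
    in_A=lambda x: 0.0 <= x <= 1.0,
    sample_g=lambda rng: rng.uniform(-0.5, 1.5),
    density_g=lambda x: 0.5,
)
# est should be close to 1/3
```

Requirement (1) is the in_A predicate, requirement (2) is the sample_g/density_g pair, and requirement (3) is met because both f and the density are cheap to evaluate pointwise.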

References

  1. Greinecker, M. Demystifying the Caratheodory Approach to Measurability. MathOverflow. https://mathoverflow.net/q/34007 (version: 2010-07-31).
  2. Leinster, T.; Roff, E. The maximum entropy of a metric space. https://arxiv.org/pdf/1908.11184.pdf.
  3. Ott, W.; Yorke, J.A. Prevalence. Bulletin of the American Mathematical Society 2005, 42, 263–290. https://www.ams.org/journals/bull/2005-42-03/S0273-0979-05-01060-8/S0273-0979-05-01060-8.pdf.
  4. Hunt, B.R. Prevalence: a translation-invariant "almost every" on infinite-dimensional spaces, 1992. https://arxiv.org/abs/math/9210220.
  5. Ben. In statistics how does one find the mean of a function w.r.t the uniform probability measure? Cross Validated. https://stats.stackexchange.com/q/602939 (version: 2023-01-24).
  6. Billingsley, P. Probability and Measure, 3rd ed.; John Wiley & Sons: New York, 1995; pp. 419–427. https://www.colorado.edu/amath/sites/default/files/attached-files/billingsley.pdf.
  7. McClure, M. Integral over the Cantor set Hausdorff dimension. MathOverflow. https://mathoverflow.net/q/235609 (version: 2016-04-07).
  8. Rosen, K.H. Elementary Number Theory and Its Applications, 6th ed.; Addison-Wesley, 1993; pp. I–XV, 1–544.
  9. Gray, R.M. Entropy and Information Theory, 2nd ed.; Springer: New York, 2011; pp. 61–95. https://ee.stanford.edu/~gray/it.pdf.
  10. Renfro, D.L. Proof that neither "almost none" nor "almost all" functions which are Lebesgue measurable are non-integrable. Mathematics Stack Exchange. https://math.stackexchange.com/q/4623168 (version: 2023-01-21).
  11. Grabowski, A.; Kornilowicz, A. Introduction to Liouville Numbers. Formalized Mathematics 2017, 25.
  12. Wood, C. Mathematicians Prove 2D Version of Quantum Gravity Really Works. Quanta Magazine. https://www.quantamagazine.org/mathematicians-prove-2d-version-of-quantum-gravity-really-works-20210617.
  13. Bottazzi, E.; Eskew, M. Integration with Filters. https://arxiv.org/pdf/2004.09103.pdf.
  14. Shackel, N. Bertrand's Paradox and the Principle of Indifference. Philosophy of Science 2007, 74, 150–175. https://orca.cardiff.ac.uk/id/eprint/3803/1/Shackel%20Bertrand's%20paradox%205.pdf.
  15. Drory, A. Failure and Uses of Jaynes' Principle of Transformation Groups. Foundations of Physics 2015, 45, 439–460. https://arxiv.org/pdf/1503.09072.pdf.
  16. B., K. Visualization of Uncountable Number of Pseudo-random Points Generated on a Subset of the Real Plane, 2023. https://www.wolframcloud.com/obj/4e78f594-1578-402a-a163-ebb16319ada2.
  17. Maimon, O.; Rokach, L. Data Mining and Knowledge Discovery Handbook, 2nd ed.; Springer: New York, 2010; pp. 851–852.
1
Meaningful average: the average answers the main question in §2.
2
Wood wrote on Feynman path integrals: "No known mathematical procedure can meaningfully average¹ an infinite number of objects covering an infinite expanse of space in general."