Preprint
Concept Paper

This version is not peer-reviewed.

Finding a Published Research Paper Which Meaningfully Averages the Most Pathological Functions (v3)

Submitted:

18 January 2025

Posted:

21 January 2025


Abstract

I want to meaningfully average a pathological function (i.e., an everywhere surjective function whose graph has zero Hausdorff measure in its dimension). In case this is impossible, we wish to average a nowhere continuous function defined on the rationals. We do this by taking satisfying expected values of chosen sequences of bounded functions converging to f, where these expected values are both equivalent and finite. As of now, I'm unable to solve this due to limited knowledge of advanced math, and most people are too busy to help. Therefore, I'm wondering if anyone knows a research paper which resolves my doubts. Unlike the previous paper, "Finding a Research Paper Which Meaningfully Averages Pathological Functions (V2)" [1], we modified the equations, the theorems, the approach to solving the theorems in the blockquote, and the leading question which solves the blockquote. We also deleted unnecessary definitions to make the paper less complicated.


1. Intro

Let $n \in \mathbb{N}$ and suppose $f : A \subseteq \mathbb{R}^n \to \mathbb{R}$, where $A$ and $f$ are Borel. Let $\dim_{\text{H}}(\cdot)$ denote the Hausdorff dimension, where $\mathcal{H}^{\dim_{\text{H}}(\cdot)}(\cdot)$ is the Hausdorff measure in its dimension on the Borel $\sigma$-algebra.

1.1. First Special Case of f

If the graph of $f$ is $G$, is there an explicit $f$ where:
(1)
The function $f$ is everywhere surjective [2] (i.e., a function defined on a topological space whose restriction to any non-empty open subset is surjective).
(2)
$\mathcal{H}^{\dim_{\text{H}}(G)}(G) = 0$

1.1.1. Potential Answer

If $A = \mathbb{R}$, then using this MathOverflow post [3], define $f$ as follows:
Consider a Cantor set $C \subseteq [0,1]$ with Hausdorff dimension 0 [4]. Now consider a countable disjoint union $\bigcup_{m \in \mathbb{N}} C_m$ such that each $C_m$ is the image of $C$ by some affine map and every open set $O \subseteq [0,1]$ contains $C_m$ for some $m$. Such a countable collection can be obtained, e.g., by letting $C_m$ be contained in the biggest connected component of $[0,1] \setminus (C_1 \cup \cdots \cup C_{m-1})$ (with the center of $C_m$ being the middle point of the component).
Note that $\bigcup_m C_m$ has Hausdorff dimension 0, so $\left(\bigcup_m C_m\right) \times [0,1] \subseteq \mathbb{R}^2$ has Hausdorff dimension one [5].
Now, let $g : [0,1] \to \mathbb{R}$ be such that $g|_{C_m}$ is a bijection $C_m \to \mathbb{R}$ for all $m$ (all of them can be constructed from a single bijection $C \to \mathbb{R}$, which can be obtained without choice, although it may be ugly to define), and outside $\bigcup_m C_m$ let $g$ be defined by $g(x) = h(x)$, where $h : [0,1] \to \mathbb{R}$ has a graph with Hausdorff dimension 2 [6] (this doesn't require choice either).
Then the function $g$ has a graph with Hausdorff dimension 2 and is everywhere surjective, but its graph has Lebesgue measure 0 because it is a graph (so it admits uncountably many disjoint vertical translates).
Note, we can make the construction of the union of $C_m$ rather explicit as follows. Split the binary expansion of $x$ into strings whose sizes are powers of two, so that, say, $x = 0.1101000010\ldots$ becomes $(s_0, s_1, s_2, \ldots) = (1, 10, 1000, \ldots)$. If this sequence eventually contains only strings of the form $0\cdots 0$ or $1\cdots 1$, say after $s_k$, then send $x$ to $y = \sum_{i > 0} \epsilon_i 2^{-i}$, where $s_{k+i} = \epsilon_i \cdots \epsilon_i$. Otherwise, send $x$ to the value of the explicit continuous function $h$ given by the linked article [6]. This gives a map from $[0,1)$ to $[0,1)$.
Finally, compose with an explicit (reasonable) bijection from $[0,1)$ to $\mathbb{R}$. In your case, the construction can be easily adapted so that the $[0,1]$ or $[0,1)$ target space is actually $(0,1)$, then compose with $x \mapsto (1-2x)/(x^2 - x)$.
In case this function is impossible to average, consider the following example:

1.2. Second Special Case of f

Suppose we define $A = \mathbb{Q}$, where $f : A \to \mathbb{R}$ is such that:
$$f(x) = \begin{cases} 1 & x \in \left\{ (2s+1)/(2t) : s \in \mathbb{Z},\, t \in \mathbb{N},\, t \neq 0 \right\} \\ 0 & x \notin \left\{ (2s+1)/(2t) : s \in \mathbb{Z},\, t \in \mathbb{N},\, t \neq 0 \right\} \end{cases}$$
In the following sections, we shall see why we chose §1.1 and this function.
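As a quick illustration (a minimal sketch; the helper name fDyadic is ours), a rational has $f(x) = 1$ exactly when its denominator in lowest terms is even:
• fDyadic[x_] := Boole[EvenQ[Denominator[x]]]   (*1 iff the reduced denominator of x is even*)
• fDyadic /@ {1/2, 3/4, 1/3, 2/5, 7/6, 0}       (*{1, 1, 0, 0, 1, 0}*)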

1.3. Attempting to Analyze/Average f

Suppose the expected value of $f$ is:
$$\mathbb{E}[f] = \frac{1}{\mathcal{H}^{\dim_{\text{H}}(A)}(A)} \int_A f \, d\mathcal{H}^{\dim_{\text{H}}(A)}$$
Note, using §1.1.1, the explicit $f$ is pathological since it's everywhere surjective and difficult to meaningfully average (i.e., the most generalized, satisfying (§2) extension of $\mathbb{E}[f]$ is non-finite).
Thus, we want the most generalized, satisfying extension of $\mathbb{E}[f]$ on bounded $f$, where the extension takes finite values for all $f$ defined within §1.1. Moreover, suppose:
(1)
The sequence of bounded functions is $f_s = (f_{r_s}^{(s)})_{r_s \in \mathbb{N}}$
(2)
The sequence of bounded functions converges to $f$: i.e., $f_{r_s}^{(s)} \to f$
(3)
The generalized, satisfying extension of $\mathbb{E}[f]$ is $\mathbb{E}[f_{r_s}^{(s)}]$: i.e., there exists an $s \in \mathbb{N}$ where $\mathbb{E}[f_{r_s}^{(s)}]$ is finite
(4)
There exist $k, v \in \mathbb{N}$ where the expected values of $f_k$ and $f_v$ are finite and non-equivalent: i.e.,
$$-\infty < \mathbb{E}[f_{r_k}^{(k)}] \neq \mathbb{E}[f_{r_v}^{(v)}] < +\infty$$
(Whenever (4) is true, (3) is non-unique.)

1.3.1. Example Proving § 1.3 (1)-(4) Correct

Using the second case of $f : A \to \mathbb{R}$ in §1.2, where $A = \mathbb{Q}$ and:
$$f(x) = \begin{cases} 1 & x \in \left\{ (2s+1)/(2t) : s \in \mathbb{Z},\, t \in \mathbb{N},\, t \neq 0 \right\} \\ 0 & x \notin \left\{ (2s+1)/(2t) : s \in \mathbb{Z},\, t \in \mathbb{N},\, t \neq 0 \right\} \end{cases}$$
suppose:
$$(A_r)_{r \in \mathbb{N}} = \left( \left\{ c/r! : c \in \mathbb{Z},\ -r \cdot r! \le c \le r \cdot r! \right\} \right)_{r \in \mathbb{N}}$$
and
$$(A_j)_{j \in \mathbb{N}} = \left( \left\{ c/d : c \in \mathbb{Z},\ d \in \mathbb{N},\ d \le j,\ -dj \le c \le dj \right\} \right)_{j \in \mathbb{N}}$$
where for $f_r : A_r \to \mathbb{R}$,
$$f_r(x) = f(x) \text{ for all } x \in A_r$$
and for $f_j : A_j \to \mathbb{R}$,
$$f_j(x) = f(x) \text{ for all } x \in A_j$$
Note, $f_r$ and $f_j$ are bounded since $f$ is bounded (i.e., criterion 1 of §1.3 is satisfied). Also, the set-theoretic limit of $(A_r)_{r \in \mathbb{N}}$ and of $(A_j)_{j \in \mathbb{N}}$ is $A = \mathbb{Q}$: i.e., with
$$\limsup_{r \to \infty} A_r = \bigcap_{r \ge 1} \bigcup_{q \ge r} A_q, \qquad \liminf_{r \to \infty} A_r = \bigcup_{r \ge 1} \bigcap_{q \ge r} A_q$$
we have:
$$\limsup_{r \to \infty} A_r = \liminf_{r \to \infty} A_r = A = \mathbb{Q}$$
(We are not sure how to prove this; however, a mathematician who specializes in rational numbers and set-theoretic limits should be able to verify the former.)
Hence, one can see that $f_r : A_r \to \mathbb{R}$ and $f_j : A_j \to \mathbb{R}$ converge to $f : A \to \mathbb{R}$ (i.e., criterion 2 of §1.3 is satisfied).
Now, suppose we want to average $f_r$ and $f_j$, which we denote $\mathbb{E}[f_r]$ and $\mathbb{E}[f_j]$. Note, this is the same as computing the following (i.e., the cardinality is $|\cdot|$ and the absolute value is $\|\cdot\|$):
$$(\forall \epsilon > 0)(\exists N \in \mathbb{N})(\forall r \in \mathbb{N})\left( r \ge N \Rightarrow \left\| \frac{1}{|A_r|} \int_{A_r} f(x) \, d\mathcal{H}^0 - \mathbb{E}[f_r] \right\| < \epsilon \right)$$
$$(\forall \epsilon > 0)(\exists N \in \mathbb{N})(\forall j \in \mathbb{N})\left( j \ge N \Rightarrow \left\| \frac{1}{|A_j|} \int_{A_j} f(x) \, d\mathcal{H}^0 - \mathbb{E}[f_j] \right\| < \epsilon \right)$$
If we assume $\mathbb{E}[f_r] = 1$ in eq. 6, then [7]:
The integral $\int_A f(x)\,dx$ counts the number of fractions in the set $A$ which, after canceling all possible factors of 2, have an even denominator and an odd numerator. Let us consider the first case. We can write $\left\| 1 - \frac{1}{|A_r|} \int_{A_r} f \right\| = \left( |A_r| - \int_{A_r} f \right)/|A_r| = H(r)/|A_r|$, where $H(r)$ counts the fractions $x = c/r!$ in $A_r$ that are not counted in $\int_{A_r} f$, i.e., for which $f(x) = 0$. This is the case when the denominator is odd after the cancellation of the factors of 2, i.e., when the numerator $c$ has a number of factors of 2 greater than or equal to that of $r!$, which we will denote by $V(r) := v_2(r!)$, a.k.a. the 2-adic valuation of $r!$ (OEIS A011371), where $V(r) = r - O(\ln(r))$ [8]. That means $c$ must be a multiple of $2^{V(r)}$. The number of such $c$ with $-r \cdot r! \le c \le r \cdot r!$ is simply the length of that interval, equal to $|A_r| = 2r(r!) + 1$, divided by $2^{V(r)}$. Thus, $\left\| 1 - \frac{1}{|A_r|} \int_{A_r} f \right\| \approx \left[ |A_r|/2^{V(r)} \right]/|A_r| \approx 1/2^{V(r)} = 1/2^{r - O(\log r)}$. This obviously tends to zero, proving $\mathbb{E}[f_r] = 1$.
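The quantity $V(r) = v_2(r!)$ in the quoted argument can be checked directly (a small sketch; IntegerExponent returns the exact 2-adic valuation):
• V[r_] := IntegerExponent[r!, 2]             (*2-adic valuation of r!, i.e., OEIS A011371(r)*)
• Table[{r, V[r], N[1/2^V[r]]}, {r, 4, 12}]   (*the bound 1/2^V(r) above shrinks rapidly*)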
Since $\mathbb{E}[f_r] = 1$ is finite, this proves §1.3 criterion 3.
Last, we need to show $\mathbb{E}[f_j] = 1/3$, where $\mathbb{E}[f_r] \neq \mathbb{E}[f_j]$, proving §1.3 criterion 4.
Concerning the second case [7], it is again simpler to consider the complementary set of $x \in A_j$ such that the denominator is odd when all possible factors of 2 are canceled. We can see that new such elements appear for $j = 2p - 1$, and these obviously include all those we had for smaller $j$. The "new" elements in $A_j$ with $j = 2p - 1$ are those that have the denominator $d = 2p - 1$ when written in lowest terms. Their number is equal to the number of $\kappa < d$ with $\gcd(\kappa, d) = 1$, which is given by Euler's $\phi$ function. Since we also consider negative fractions, we have to multiply this by 2. Including $x = 0$, we have $G(j) = |\{ x \in A_j \mid f(x) = 0 \}| = 1 + 2\sum_{0 \le \kappa \le j/2} \phi(2\kappa + 1)$. There is no simple explicit expression for this (cf. OEIS A099957 [9]), but we know that $G(j) = 1 + 2 \cdot \mathrm{A099957}(\lfloor j/2 \rfloor) \approx 2 \cdot 8(j/2)^2/\pi^2 = 4j^2/\pi^2$ [9]. On the other hand, the total number of elements of $A_j$ is $|A_j| = 1 + 2\sum_{1 \le \kappa \le j} \phi(\kappa)$, since each time we increase $j$ by 1, we have the additional fractions whose new denominator is $d = j$ and whose numerators are coprime with $d$, again with the sign + or −. From OEIS A002088 [10] we know that $\sum_{1 \le \kappa \le j} \phi(\kappa) = 3j^2/\pi^2 + O(j \log j)$, so $|A_j| \approx 6j^2/\pi^2$, which finally gives $\frac{1}{|A_j|} \int_{A_j} f = (|A_j| - G(j))/|A_j| \to (6-4)/6 = 1/3$ as desired.
Hence, $\mathbb{E}[f_j] = 1/3$ and $\mathbb{E}[f_r] \neq \mathbb{E}[f_j]$, proving §1.3 criterion 4.
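A quick numerical sanity check of the two limits (a sketch; the sets follow §1.3.1 and fDyadic is the indicator of an even reduced denominator):
• fDyadic[x_] := Boole[EvenQ[Denominator[x]]]
• Ar[r_] := Range[-r r!, r r!]/r!                                  (*A_r = {c/r! : -r*r! <= c <= r*r!}*)
• Aj[j_] := Union[Flatten[Table[Range[-d j, d j]/d, {d, 1, j}]]]   (*A_j = {c/d : d <= j, -dj <= c <= dj}*)
• N[Mean[fDyadic /@ Ar[7]]]    (*~0.937; tends to 1 since the exceptional proportion is about 1/2^V(r)*)
• N[Mean[fDyadic /@ Aj[40]]]   (*~0.35; tends to 1/3 as j grows*)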
Therefore, in § 1.3, since:
Theorem 1. 
The set of all Borel $f$ where $\mathbb{E}[f]$ is finite forms a shy [11] subset of the set of all Borel measurable functions in $\mathbb{R}^A$.
Note 2 
(Proof that Theorem 1 is true). We follow the argument presented in Example 3.6 of this paper [11]: take $X := L^0(A)$ (measurable functions over $A$), let $P$ denote the one-dimensional subspace of $X$ consisting of constant functions (assuming the Lebesgue measure on $A$), and let $F := L^0(A) \setminus L^1(A)$ (measurable functions over $A$ without a finite integral). Let $\lambda_P$ denote the Lebesgue measure over $P$. For any fixed $f \in F$:
$$\lambda_P\left( \left\{ \alpha \in \mathbb{R} \ \middle|\ \int_A (f + \alpha) \, d\mu < \infty \right\} \right) = 0$$
Since $P$ is one-dimensional, $F$ is a 1-prevalent set.
Theorem 3. 
The set of all Borel $f$, where the expected values of different sequences of bounded functions that converge to $f$ (§5.1) are not equivalent, forms a prevalent [11] subset of $\mathbb{R}^A$.
Note 4 
(Possible method for proving Theorem 3 true). Note, we first must prove that only those $f$ whose lines of symmetry intersect at one point have the same expected value for every bounded sequence of functions converging to $f$. Notice, the set of all symmetric Borel functions forms a shy subset of the set of all Borel measurable functions in $\mathbb{R}^A$. Hence, the set of all functions whose lines of symmetry intersect at one point also forms a shy subset of the set of all Borel measurable functions in $\mathbb{R}^A$ (i.e., a subset of a shy set is also shy).
Since thm. 1 and 3 are true, we need to fix the problems the theorems address with the following:

1.3.2. Blockquote

We want the set of all Borel $f$, where satisfying expected values of chosen sequences of bounded functions converging to $f$ (§5.1) are equivalent and finite, to form:
(1)
a prevalent [11] subset of $\mathbb{R}^A$
(2)
If not prevalent, then a subset of $\mathbb{R}^A$ that is neither prevalent [11] nor shy [11].
For the sake of clarity & precision, we describe examples of "extending $\mathbb{E}[f]$ to all $A$ with positive & finite Hausdorff measure" (§2) and use the examples to define the terms "unique & satisfying" (§3) in the blockquote of this section.

2. Extending the Expected Value w.r.t the Hausdorff Measure

The following are two methods for determining the most generalized, satisfying extension of $\mathbb{E}[f]$ on all $A$ with a positive and finite Hausdorff measure:
(1)
One way is defining a generalized, satisfying extension of the Hausdorff measure on all A with positive & finite measure which takes positive, finite values for all Borel A. This can theoretically be done in the paper “A Multi-Fractal Formalism for New General Fractal Measures"[12] by taking the expected value of f w.r.t the extended Hausdorff measure.
(2)
Another way is finding a generalized, satisfying average of all A in the fractal setting. This can be done with the papers “Analogues of the Lebesgue Density Theorem for Fractal Sets of Reals and Integers" [13] and “Ratio Geometry, Rigidity and the Scenery Process for Hyperbolic Cantor Sets" [14] where we take the expected value of f w.r.t the densities in [13,14].
(Note, the methods in these papers could be used in §3.1 to answer the blockquote of §1.3.2.)

3. Attempt to Define “Unique and Satisfying" in The Blockquote of §1.3

3.1. Leading Question

To define unique and satisfying in the blockquote of §1.3.2, we take the expected value of a sequence of bounded functions chosen by a choice function. To find the choice function, we ask the leading question...
If we make sure to:
(A)
Define $C$ to be the chosen center point of $\mathbb{R}^{n+1}$ (e.g., the origin)
(B)
Define $E$ to be the fixed, expected rate of expansion for all chosen sequences of each bounded function's graph (e.g., $E = 1$)
(C)
Define $\mathcal{E}$ to be the actual rate of expansion for each chosen sequence of each bounded function's graph (§5.4)
Does there exist a unique choice function which chooses a unique set of equivalent sequences of bounded functions where:
(1)
The chosen sequences of bounded functions converge to $f$ (§5.1)
(2)
The "measure" (§5.3.1, §5.3.2) of the graphs of all chosen sequences of bounded functions should increase at a rate linear or superlinear to that of all sequences of bounded functions converging to $f$ (§5.1)
(3)
The expected values, defined in the papers of §2, for all chosen sequences of bounded functions are equivalent and finite
(4)
For the chosen sequences of bounded functions satisfying (1), (2) and (3), when $f$ is unbounded (i.e., skip (4) when $f$ is bounded):
  • The absolute difference between the expected value of (3) and the $(n+1)$-th coordinate of $C$ is less than or equal to that of all sequences of bounded functions satisfying (1), (2), and (3)
  • For all $\mathcal{E} \neq E$ (§5.4), when the absolute value is $\|\cdot\|$, the "rate of divergence" [15] of $\|\mathcal{E} - E\|$ is less than or equal to that of all chosen sequences of bounded functions satisfying (1), (2), and (3)
(5)
When the set $Q \subseteq \mathbb{R}^A$ is the set of all $f \in \mathbb{R}^A$ where the choice function chooses all sequences of bounded functions satisfying (1), (2), (3) and (4), then $Q$ is
(a)
a prevalent [11] subset of $\mathbb{R}^A$
(b)
if not (5a), then neither a prevalent [11] nor shy [11] subset of $\mathbb{R}^A$
(6)
Out of all choice functions which satisfy (1), (2), (3), (4) and (5), we choose the one with the simplest form, meaning that with each choice function fully expanded, we take the one with the fewest variables/numbers?
(In case this is unclear, see §5.)

3.1.1. Explaining Motivation Behind § 3.1

(1)
When defining "the measure" (§5.3.1, §5.3.2) of a function, we want a bounded sequence of functions with a "high" entropic density (i.e., we aren't sure if this is in fact what the "measure" measures). For example, when $A = \mathbb{R}$ and $f$ is everywhere surjective [2], the "measure" chooses bounded sequences of functions whose lines of symmetry intersect at one point rather than non-symmetrical functions (§5.3.1, §8).
(2)
Note, the latter criterion doesn't apply to bounded $f$: e.g., using §1.3.1, when $A = \mathbb{Q}$ and $f : A \to \mathbb{R}$, where:
$$f(x) = \begin{cases} 1 & x \in \left\{ (2s+1)/(2t) : s \in \mathbb{Z},\, t \in \mathbb{N},\, t \neq 0 \right\} \\ 0 & x \notin \left\{ (2s+1)/(2t) : s \in \mathbb{Z},\, t \in \mathbb{N},\, t \neq 0 \right\} \end{cases}$$
depending on the sequence of bounded functions $(f_r)$ chosen which converges to $f$, $\mathbb{E}[f_r]$ can be any number in $[0,1]$. Since $[0,1]$ is bounded, we need only §3.1.1 crit. 1 to solve the blockquote of §1.3.2.
(3)
Using an unbounded function from §1.1 (i.e., an everywhere surjective $f$ whose graph has zero Hausdorff measure in its dimension), depending on the sequence of bounded functions $(f_r)_{r \in \mathbb{N}}$ chosen which converges to $f$, $\mathbb{E}[f_r]$ can be any real number (when it exists). To fix this, take all $(f_r)_{r \in \mathbb{N}}$ where $\mathbb{E}[f_r]$ has the smallest absolute difference from the $(n+1)$-th coordinate of a reference point (i.e., the center point $C \in \mathbb{R}^{n+1}$). The problem is that there exists $f$ where sequences of bounded functions with expected values non-equivalent to that of the chosen sequence have the same minimum absolute difference from the $(n+1)$-th coordinate of $C$.
(4)
Thus, we take the sequence of functions whose actual rate of expansion $\mathcal{E}$ from $C$ (§5.4) "diverges" [15, p. 275-322] at the smallest rate from the expected, fixed rate of expansion $E$ from $C$ (i.e., the "rate of divergence" of $\|\mathcal{E} - E\|$, using the absolute value $\|\cdot\|$, is less than or equal to that of all the non-equivalent sequences of bounded functions which satisfy §3.1 criteria (1), (2), and (3)).
(5)
Finally, since there might still be sequences of bounded functions which satisfy §3.1 criteria (1), (3) and (4) but whose graphs are congruent with different $\mathbb{E}[f_r]$, we use the function $T$ in §6.3.1 eq. 125 to choose sequences of bounded functions with the same expected value.
I'm not convinced the expected values of the sequences of bounded functions chosen by a choice function which answers the leading question are unique or satisfying enough to answer the blockquote of §1.3.2. Still, adjustments are possible by changing the criteria or by adding new criteria to the question.

4. Question Regarding My Work

Most don’t have time to address everything in my research, hence I ask the following:
Is there a research paper which already solves the ideas I’m working on? (Non-published papers, such as mine [16], don’t count.)
Using AI, papers that might answer this question are “Prediction of dynamical systems from time-delayed measurements with self-intersections" [17] and “A Hausdorff measure boundary element method for acoustic scattering by fractal screens" [18].
Does either of these papers solve the blockquote of § 1.3.2?

5. Clarifying §3

Suppose $(f_r)_{r \in \mathbb{N}}$ is a sequence of bounded functions converging to $f$, and $(G_r)_{r \in \mathbb{N}}$ is the sequence of graphs of each $f_r$. Let $\dim_{\text{H}}(\cdot)$ be the Hausdorff dimension and $\mathcal{H}^{\dim_{\text{H}}(\cdot)}(\cdot)$ be the Hausdorff measure in its dimension on the Borel $\sigma$-algebra.
Return to §3.1 after reading §5, and consider the following:
Is there a simpler version of the definitions below?

5.1. Defining Sequences of Bounded Functions Converging to f

The sequence of bounded functions $(f_r)_{r \in \mathbb{N}}$, where $f_r : A_r \to \mathbb{R}$ and $(A_r)_{r \in \mathbb{N}}$ is a sequence of bounded sets, converges to the function $f : A \to \mathbb{R}$ when:
for any $(x_1, \cdots, x_n) \in A$ there exists a sequence $x_r \in A_r$ such that $x_r \to (x_1, \cdots, x_n)$ and $f_r(x_r) \to f(x_1, \cdots, x_n)$ (see [19] for info).
Example 0.1 
(Example of §5.1). If $A = \mathbb{R}$ and $f : A \to \mathbb{R}$, where $f(x) = 1/x$, then an example of $(f_r)_{r \in \mathbb{N}}$ with $f_r : A_r \to \mathbb{R}$ is:
(1)
$(A_r)_{r \in \mathbb{N}} = ([-r, -1/r] \cup [1/r, r])_{r \in \mathbb{N}}$
(2)
$f_r(x) = 1/x$ for $x \in A_r$
Example 0.2 
(More Complex Example). If $A = \mathbb{R}$ and $f : A \to \mathbb{R}$, where $f(x) = x$, then an example of $(f_r)_{r \in \mathbb{N}}$ with $f_r : A_r \to \mathbb{R}$ is:
(1)
$(A_r)_{r \in \mathbb{N}} = ([-r, r])_{r \in \mathbb{N}}$
(2)
$f_r(x) = x + (1/r)\sin(x)$ for $x \in A_r$
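A small sanity check of Example 0.2 (a sketch; fr is our name for the approximating functions): the perturbation $(1/r)\sin(x)$ vanishes as $r \to \infty$, so $f_r \to f$ pointwise.
• fr[r_][x_] := x + Sin[x]/r       (*f_r from Example 0.2, defined on A_r = [-r, r]*)
• Limit[fr[r][x], r -> Infinity]   (*returns x, i.e., f_r converges to f(x) = x*)
• N[fr[100][2] - 2]                (*0.00909..., the error at x = 2 when r = 100*)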

5.2. Expected Value of Bounded Sequence of Functions

If $(f_r)$ converges to $f$ (§5.1), the expected value of $f$ w.r.t. $(f_r)_{r \in \mathbb{N}}$ is $\mathbb{E}[f_r]$ (when it exists), where the absolute value is $\|\cdot\|$, such that $\mathbb{E}[f_r]$ satisfies:
$$(\forall \epsilon > 0)(\exists N \in \mathbb{N})(\forall r \in \mathbb{N})\left( r \ge N \Rightarrow \left\| \frac{1}{\mathcal{H}^{\dim_{\text{H}}(A_r)}(A_r)} \int_{A_r} f_r \, d\mathcal{H}^{\dim_{\text{H}}(A_r)} - \mathbb{E}[f_r] \right\| < \epsilon \right)$$
Note, $\mathbb{E}[f_r]$ can be extended by using §2.

5.2.1. Example

Using Example 0.1, where:
(1)
$(A_r)_{r \in \mathbb{N}} = ([-r, -1/r] \cup [1/r, r])_{r \in \mathbb{N}}$
(2)
$f_r(x) = 1/x$ for $x \in A_r$
if we assume $\mathbb{E}[f_r] = 0$:
$$(\forall \epsilon > 0)(\exists N \in \mathbb{N})(\forall r \in \mathbb{N})\left( r \ge N \Rightarrow \left\| \frac{1}{\mathcal{H}^{\dim_{\text{H}}(A_r)}(A_r)} \int_{A_r} f_r \, d\mathcal{H}^{\dim_{\text{H}}(A_r)} - \mathbb{E}[f_r] \right\| < \epsilon \right) =$$
$$(\forall \epsilon > 0)(\exists N \in \mathbb{N})(\forall r \in \mathbb{N})\left( r \ge N \Rightarrow \left\| \frac{1}{\mathcal{H}^{1}([-r,-1/r] \cup [1/r,r])} \int_{[-r,-1/r] \cup [1/r,r]} \frac{1}{x} \, d\mathcal{H}^{1} - 0 \right\| < \epsilon \right) =$$
$$(\forall \epsilon > 0)(\exists N \in \mathbb{N})(\forall r \in \mathbb{N})\left( r \ge N \Rightarrow \left\| \frac{1}{(-1/r - (-r)) + (r - 1/r)} \left( \int_{-r}^{-1/r} \frac{1}{x} \, dx + \int_{1/r}^{r} \frac{1}{x} \, dx \right) \right\| < \epsilon \right) =$$
$$(\forall \epsilon > 0)(\exists N \in \mathbb{N})(\forall r \in \mathbb{N})\left( r \ge N \Rightarrow \left\| \frac{1}{(r - 1/r) + (r - 1/r)} \left( \ln(\|x\|)\Big|_{-r}^{-1/r} + \ln(\|x\|)\Big|_{1/r}^{r} \right) \right\| < \epsilon \right) =$$
$$(\forall \epsilon > 0)(\exists N \in \mathbb{N})(\forall r \in \mathbb{N})\left( r \ge N \Rightarrow \left\| \frac{\ln(\|-1/r\|) - \ln(\|-r\|) + \ln(\|r\|) - \ln(\|1/r\|)}{2r - 2/r} \right\| < \epsilon \right) \Leftarrow$$
$$(\forall \epsilon > 0)(\exists N \in \mathbb{N})(\forall r \in \mathbb{N})\left( r \ge N \Rightarrow \frac{4\ln(r)}{2r - 2/r} < \epsilon \right)$$
(the last implication holds since, by the triangle inequality, the numerator of the previous line is at most $4\ln(r)$ in absolute value).
To prove eq. 17 is true, recall that for all $r \ge e$:
$$\ln(r) \le \frac{r}{2} - \frac{1}{2r},$$
equivalently
$$r \le e^{r/2 - 1/(2r)} = e^{r/2}/e^{1/(2r)}, \qquad r\,e^{1/(2r)} \le e^{r/2},$$
so that
$$4\ln(r) \le 2r - 2/r.$$
Hence, since $\ln(r)/r \to 0$, for all $\varepsilon > 0$ there exists an $N \in \mathbb{N}$ such that for all $r \ge N$:
$$4\ln(r) < \varepsilon\,(2r - 2/r)$$
$$\frac{4\ln(r)}{2r - 2/r} < \varepsilon$$
Since eq. 17 is true, $\mathbb{E}[f_r] = 0$. Note, if we simply took the average of $f$ over $(-\infty, \infty)$ using the improper integral, the expected value
$$\lim_{(x_1, x_2, x_3, x_4) \to (-\infty, 0^-, 0^+, +\infty)} \frac{1}{(x_4 - x_3) + (x_2 - x_1)} \left( \int_{x_1}^{x_2} \frac{1}{x} \, dx + \int_{x_3}^{x_4} \frac{1}{x} \, dx \right) =$$
$$\lim_{(x_1, x_2, x_3, x_4) \to (-\infty, 0^-, 0^+, +\infty)} \frac{1}{(x_4 - x_3) + (x_2 - x_1)} \left( \ln(\|x\|)\Big|_{x_1}^{x_2} + \ln(\|x\|)\Big|_{x_3}^{x_4} \right) =$$
$$\lim_{(x_1, x_2, x_3, x_4) \to (-\infty, 0^-, 0^+, +\infty)} \frac{1}{(x_4 - x_3) + (x_2 - x_1)} \left( \ln(\|x_2\|) - \ln(\|x_1\|) + \ln(\|x_4\|) - \ln(\|x_3\|) \right)$$
is $+\infty$ (when $x_2 = 1/x_1$, $x_3 = 1/x_4$, and $x_1 = -\exp(x_4^2)$) or $-\infty$ (when $x_2 = 1/x_1$, $x_3 = 1/x_4$, and $x_4 = \exp(x_1^2)$), making $\mathbb{E}[f]$ undefined. (However, using eq. 10-17, we get $\mathbb{E}[f_r] = 0$ instead of an undefined value.)
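A numerical check of this example (a sketch; symAvg is our helper name): the symmetric average from eq. 10-17 stays at zero, and the bound from eq. 17 tends to zero.
• symAvg[r_?NumericQ] := (NIntegrate[1/x, {x, -r, -1/r}] + NIntegrate[1/x, {x, 1/r, r}])/(2 r - 2/r)
• symAvg[50.]                                  (*~0, consistent with E[f_r] = 0*)
• Limit[4 Log[r]/(2 r - 2/r), r -> Infinity]   (*0, the bound used in eq. 17*)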

5.3. Defining the “Measure"

5.3.1. Preliminaries

We define the “measure" of ( f r ) r N in § 5.3.2, where ( G r ) r N is a sequence of the graph of each f r . To understand this “measure", continue reading.
(1)
For every $r \in \mathbb{N}$, "over-cover" $G_r$ with minimal, pairwise disjoint sets of equal $\mathcal{H}^{\dim_{\text{H}}(G_r)}$ measure. (We denote the equal measure of each set by $\varepsilon$ and denote such a cover by $C(\varepsilon, G_r, \omega)$, where $\omega \in \Omega_{\varepsilon, r}$ enumerates all collections of these sets covering $G_r$. In case this step is unclear, see §8.1.)
(2)
For every $\varepsilon$, $r$ and $\omega$, take a sample point from each set in $C(\varepsilon, G_r, \omega)$. The set of these points is "the sample", which we define $S(C(\varepsilon, G_r, \omega), \psi)$, where $\psi \in \Psi_{\varepsilon, r, \omega}$ enumerates all possible samples of $C(\varepsilon, G_r, \omega)$. (If this is unclear, see §8.2.)
(3)
For every $\varepsilon$, $r$, $\omega$ and $\psi$,
(a)
Take a "pathway" of line segments: we start with a line segment from an arbitrary point $x_0$ of $S(C(\varepsilon, G_r, \omega), \psi)$ to the sample point with the smallest $(n+1)$-dimensional Euclidean distance to $x_0$ (i.e., when more than one sample point has the smallest $(n+1)$-dimensional Euclidean distance to $x_0$, take either of those points). Next, repeat this process until the "pathway" intersects every sample point once. (In case this is unclear, see §8.3.1.)
(b)
Take the set of the lengths of all segments in (a), except for lengths that are outliers (i.e., for any constant $C > 0$, the outliers are more than $C$ times the interquartile range of the lengths of all line segments as $r \to \infty$). Define this $L(x_0, S(C(\varepsilon, G_r, \omega), \psi))$. (If this is unclear, see §8.3.2.)
(c)
Multiply the remaining lengths in the pathway by a constant so they add up to one (i.e., a probability distribution). This will be denoted $P(L(x_0, S(C(\varepsilon, G_r, \omega), \psi)))$. (In case this is unclear, see §8.3.3.)
(d)
Take the Shannon entropy [20, p. 61-95] of step (c). We define this:
$$\mathbb{E}(P(L(x_0, S(C(\varepsilon, G_r, \omega), \psi)))) = -\sum_{x \in P(L(x_0, S(C(\varepsilon, G_r, \omega), \psi)))} x \log_2 x$$
which will be shortened to $\mathbb{E}(L(x_0, S(C(\varepsilon, G_r, \omega), \psi)))$. (If this is unclear, see §8.3.4.)
(e)
Maximize the entropy w.r.t. all "pathways". This we will denote:
$$\mathbb{E}(L(S(C(\varepsilon, G_r, \omega), \psi))) = \sup_{x_0 \in S(C(\varepsilon, G_r, \omega), \psi)} \mathbb{E}(L(x_0, S(C(\varepsilon, G_r, \omega), \psi)))$$
(In case this is unclear, see §8.3.5.)
(4)
Therefore, the maximum entropy, using (1) and (2), is:
$$\mathbb{E}_{\max}(\varepsilon, r) = \sup_{\omega \in \Omega_{\varepsilon, r}} \sup_{\psi \in \Psi_{\varepsilon, r, \omega}} \mathbb{E}(L(S(C(\varepsilon, G_r, \omega), \psi)))$$
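The pathway-entropy steps (3a), (3c) and (3d) can be sketched in Mathematica for a finite sample of points (our helper names; outlier removal and the suprema over $\omega$, $\psi$ and $x_0$ are omitted for brevity):
• (*nearest-neighbor "pathway" through a finite sample, then the Shannon entropy of its normalized segment lengths*)
• nearestPath[pts_List] := Module[{path = {First[pts]}, remaining = Rest[pts], next},
•   While[remaining =!= {},
•     next = First[MinimalBy[remaining, EuclideanDistance[Last[path], #] &]];
•     AppendTo[path, next];
•     remaining = DeleteCases[remaining, next, 1, 1]];
•   path]
• pathLengths[pts_List] := EuclideanDistance @@@ Partition[nearestPath[pts], 2, 1]
• pathEntropy[pts_List] := With[{p = #/Total[#] &[pathLengths[pts]]}, Total[-p Log[2, p]]]
• pathEntropy[{{0, 0}, {1, 0}, {1, 1}, {3, 1}}]   (*3/2: the normalized lengths are {1/4, 1/4, 1/2}*)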

5.3.2. What Am I Measuring?

Suppose we define two sequences of graphs of bounded functions converging to the graph of $f$, e.g., $(G_r)_{r \in \mathbb{N}}$ and $(G_j)_{j \in \mathbb{N}}$, where for a constant $\varepsilon$ and cardinality $|\cdot|$:
• (a)
Using (2) and (3e) of §5.3.1, suppose:
$$\underline{S(C(\varepsilon, G_r, \omega), \psi)} = \sup\left\{ |S(C(\varepsilon, G_j, \omega), \psi)| : j \in \mathbb{N},\ \omega \in \Omega_{\varepsilon, j},\ \psi \in \Psi_{\varepsilon, j, \omega},\ \mathbb{E}(L(S(C(\varepsilon, G_j, \omega), \psi))) \le \mathbb{E}(L(S(C(\varepsilon, G_r, \omega), \psi))) \right\}$$
then (using $\underline{S(C(\varepsilon, G_r, \omega), \psi)}$) we get
$$\underline{\alpha}_{\varepsilon, r, \omega, \psi} = \underline{S(C(\varepsilon, G_r, \omega), \psi)} \big/ |S(C(\varepsilon, G_r, \omega), \psi)|$$
(b)
Also, using (2) and (3e) of §5.3.1, suppose:
$$\overline{S(C(\varepsilon, G_r, \omega), \psi)} = \inf\left\{ |S(C(\varepsilon, G_j, \omega), \psi)| : j \in \mathbb{N},\ \omega \in \Omega_{\varepsilon, j},\ \psi \in \Psi_{\varepsilon, j, \omega},\ \mathbb{E}(L(S(C(\varepsilon, G_j, \omega), \psi))) \ge \mathbb{E}(L(S(C(\varepsilon, G_r, \omega), \psi))) \right\}$$
then (using $\overline{S(C(\varepsilon, G_r, \omega), \psi)}$) we also get:
$$\overline{\alpha}_{\varepsilon, r, \omega, \psi} = \overline{S(C(\varepsilon, G_r, \omega), \psi)} \big/ |S(C(\varepsilon, G_r, \omega), \psi)|$$
• (1)
If, using $\overline{\alpha}_{\varepsilon, r, \omega, \psi}$ and $\underline{\alpha}_{\varepsilon, r, \omega, \psi}$, we have:
$$1 < \limsup_{\varepsilon \to 0} \limsup_{r \to \infty} \sup_{\omega \in \Omega_{\varepsilon, r}} \sup_{\psi \in \Psi_{\varepsilon, r, \omega}} \overline{\alpha}_{\varepsilon, r, \omega, \psi},\ \liminf_{\varepsilon \to 0} \liminf_{r \to \infty} \inf_{\omega \in \Omega_{\varepsilon, r}} \inf_{\psi \in \Psi_{\varepsilon, r, \omega}} \underline{\alpha}_{\varepsilon, r, \omega, \psi} < +\infty$$
then what I'm measuring from $(G_r)_{r \in \mathbb{N}}$ increases at a rate superlinear to that of $(G_j)_{j \in \mathbb{N}}$.
(2)
If, using the equations $\overline{\alpha}_{\varepsilon, j, \omega, \psi}$ and $\underline{\alpha}_{\varepsilon, j, \omega, \psi}$ (swapping $r \in \mathbb{N}$ and $(G_r)_{r \in \mathbb{N}}$, in $\overline{\alpha}_{\varepsilon, r, \omega, \psi}$ and $\underline{\alpha}_{\varepsilon, r, \omega, \psi}$, with $j \in \mathbb{N}$ and $(G_j)_{j \in \mathbb{N}}$), we get:
$$1 < \limsup_{\varepsilon \to 0} \limsup_{j \to \infty} \sup_{\omega \in \Omega_{\varepsilon, j}} \sup_{\psi \in \Psi_{\varepsilon, j, \omega}} \overline{\alpha}_{\varepsilon, j, \omega, \psi},\ \liminf_{\varepsilon \to 0} \liminf_{j \to \infty} \inf_{\omega \in \Omega_{\varepsilon, j}} \inf_{\psi \in \Psi_{\varepsilon, j, \omega}} \underline{\alpha}_{\varepsilon, j, \omega, \psi} < +\infty$$
then what I'm measuring from $(G_r)_{r \in \mathbb{N}}$ increases at a rate sublinear to that of $(G_j)_{j \in \mathbb{N}}$.
(3)
If, using the equations $\overline{\alpha}_{\varepsilon, r, \omega, \psi}$, $\underline{\alpha}_{\varepsilon, r, \omega, \psi}$, $\overline{\alpha}_{\varepsilon, j, \omega, \psi}$, and $\underline{\alpha}_{\varepsilon, j, \omega, \psi}$, we both have:
(a)
$\limsup_{\varepsilon \to 0} \limsup_{r \to \infty} \sup_{\omega \in \Omega_{\varepsilon, r}} \sup_{\psi \in \Psi_{\varepsilon, r, \omega}} \overline{\alpha}_{\varepsilon, r, \omega, \psi}$ or $\liminf_{\varepsilon \to 0} \liminf_{r \to \infty} \inf_{\omega \in \Omega_{\varepsilon, r}} \inf_{\psi \in \Psi_{\varepsilon, r, \omega}} \underline{\alpha}_{\varepsilon, r, \omega, \psi}$ equal to zero, one or $+\infty$
(b)
$\limsup_{\varepsilon \to 0} \limsup_{j \to \infty} \sup_{\omega \in \Omega_{\varepsilon, j}} \sup_{\psi \in \Psi_{\varepsilon, j, \omega}} \overline{\alpha}_{\varepsilon, j, \omega, \psi}$ or $\liminf_{\varepsilon \to 0} \liminf_{j \to \infty} \inf_{\omega \in \Omega_{\varepsilon, j}} \inf_{\psi \in \Psi_{\varepsilon, j, \omega}} \underline{\alpha}_{\varepsilon, j, \omega, \psi}$ equal to zero, one or $+\infty$
then what I'm measuring from $(G_r)_{r \in \mathbb{N}}$ increases at a rate linear to that of $(G_j)_{j \in \mathbb{N}}$.

    5.3.3. Example of The “Measure" of ( G r ) Increasing at Rate Super-Linear to That of ( G j )

Suppose we have the function $f : A \to \mathbb{R}$, where $A = \mathbb{Q} \cap [0,1]$, and:
$$f(x) = \begin{cases} 1 & x \in \left\{ (2s+1)/(2t) : s \in \mathbb{Z},\, t \in \mathbb{N},\, t \neq 0 \right\} \cap [0,1] \\ 0 & x \notin \left\{ (2s+1)/(2t) : s \in \mathbb{Z},\, t \in \mathbb{N},\, t \neq 0 \right\} \cap [0,1] \end{cases}$$
such that:
$$(A_r)_{r \in \mathbb{N}} = \left( \left\{ c/r! : c \in \mathbb{Z},\ 0 \le c \le r! \right\} \right)_{r \in \mathbb{N}}$$
and
$$(A_j)_{j \in \mathbb{N}} = \left( \left\{ c/d : c \in \mathbb{Z},\ d \in \mathbb{N},\ d \le j,\ 0 \le c \le d \right\} \right)_{j \in \mathbb{N}}$$
where for $f_r : A_r \to \mathbb{R}$,
$$f_r(x) = f(x) \text{ for all } x \in A_r$$
and for $f_j : A_j \to \mathbb{R}$,
$$f_j(x) = f(x) \text{ for all } x \in A_j$$
Hence, $(G_r)_{r \in \mathbb{N}}$ is:
$$(G_r)_{r \in \mathbb{N}} = \left( \left\{ (x, f(x)) : x \in \{ c/r! : c \in \mathbb{Z},\ 0 \le c \le r! \} \right\} \right)_{r \in \mathbb{N}}$$
and $(G_j)_{j \in \mathbb{N}}$ is:
$$(G_j)_{j \in \mathbb{N}} = \left( \left\{ (x, f(x)) : x \in \{ c/d : c \in \mathbb{Z},\ d \in \mathbb{N},\ d \le j,\ 0 \le c \le d \} \right\} \right)_{j \in \mathbb{N}}$$
Note the following:
Since $\varepsilon > 0$ and $A = \mathbb{Q} \cap [0,1]$ is countably infinite, there exists a minimum $\varepsilon$, which is 1. Therefore, we don't need $\varepsilon \to 0$. Also, we maximize $\mathbb{E}(L(S(C(\varepsilon, G_r, \omega), \psi)))$ by the following procedure:
    (1)
For every $r \in \mathbb{N}$, group the $x \in G_r$ whose $x$-coordinate has an even denominator when simplified, i.e.,
$$x \in \left\{ (2s+1)/(2t) : s \in \mathbb{Z},\, t \in \mathbb{N},\, t \neq 0 \right\},$$
which we call $S_{1,r}$, and group the $x \in G_r$ whose $x$-coordinate has an odd denominator when simplified, i.e.,
$$x \in \mathbb{Q} \setminus \left\{ (2s+1)/(2t) : s \in \mathbb{Z},\, t \in \mathbb{N},\, t \neq 0 \right\},$$
which we call $S_{2,r}$.
    (2)
Arrange the points in $S_{1,r}$ from least to greatest and take the 2-d Euclidean distance between each pair of consecutive points in $S_{1,r}$. In this case, since all these points lie on $y = 1$, take the absolute differences between the $x$-coordinates of consecutive points of $S_{1,r}$, and call this $D_{1,r}$. (Note, this is similar to §5.3.1 step 3a.)
    (3)
Repeat step (2) for $S_{2,r}$, then call this $D_{2,r}$. (Note, all points of $S_{2,r}$ lie on $y = 0$.)
    (4)
Remove any outliers from $D_r = D_{1,r} \cup D_{2,r} \cup \{ d((\tfrac{r!-1}{r!}, 1), (1, 0)) \}$ (i.e., $d$ is the 2-d Euclidean distance between the points $(\tfrac{r!-1}{r!}, 1)$ and $(1, 0)$). Note, in this case, $D_{2,r}$ and $\{ d((\tfrac{r!-1}{r!}, 1), (1, 0)) \}$ should be outliers (i.e., for any $C > 0$, the lengths in $D_{2,r}$ are more than $C$ times the interquartile range of the lengths of $D_r$), leaving us with $D_{1,r}$.
    (5)
    Multiply the remaining lengths in the pathway by a constant so they add up to one. (See P[r] of code 1 for an example)
    (6)
    Take the entropy of the probability distribution. (See entropy[r] of code 1 for an example.)
    We can illustrate this process with the following code:
    Code 1: Illustration of step (1)-(6)
    • Clear["*Global`*"]
    • A[r_] := A[r] = Range[0, r!]/(r!)
    • (*Below is step 1*)
    • S1[r_] :=
    •  S1[r] = Sort[Select[A[r], Boole[IntegerQ[Denominator[#]/2]] == 1 &]]
    • S2[r_] :=
    •  S2[r] = Sort[Select[A[r], Boole[IntegerQ[Denominator[#]/2]] == 0 &]]
    • (*Below is step 2*)
    • Dist1[r_] := Dist1[r] = Differences[S1[r]]
    • (*Below is step 3*)
    • Dist2[r_] := Dist2[r] = Differences[S2[r]]
    • (*Below is step 4*)
    • NonOutliers[r_] :=
    •  NonOutliers[r] = Dist1[r] (*We exclude Dist2[r] since it’s an outlier*)
    • (*Below is step 5*)
    • P[r_] := P[r] = NonOutliers[r]/Total[NonOutliers[r]]
    • (*Below is step 6*)
    • entropy[r_] := entropy[r] = Total[-P[r] Log[2, P[r]]]
    Taking Table[{r,entropy[r]},{r,3,8}], we get:
    Code 2: Output of Table[{r,entropy[r]},{r,3,8}]
    • Clear["*Global`*"]
    • {{{3,1}, {4,(2 Log[11])/(11 Log[2]) + (9 Log[22])/(11 Log[2])},
    •  {5,(14 Log[59])/(59 Log[2]) + (45 Log[118])/(59 Log[2])},
    •  {6,(44 Log[359])/(359 Log[2]) + (315 Log[718])/(359 Log[2])},
    •  {7,(314 Log[2519])/(2519 Log[2]) + (2205 Log[5038])/(2519 Log[2])},
    •  {8,(314 Log[20159])/(20159 Log[2]) + (19845 Log[40318])/(20159 Log[2])}}}
and notice that when:
(1)
$c(r) = (r!)/2 - 1$
(2)
$b(4) = 9,\ b(5) = 45,\ b(6) = 315,\ b(7) = 2205,\ b(8) = 19845$
(3)
$a(r) + b(r) = c(r)$
the output of the code can be written:
$$\frac{a(r)\log_2(c(r))}{c(r)} + \frac{b(r)\log_2(2c(r))}{c(r)} = \frac{a(r)\log_2(c(r)) + b(r)\log_2(2c(r))}{c(r)}$$
Hence, since $a(r) = c(r) - b(r) = (r!)/2 - 1 - b(r)$:
$$\frac{a(r)\log_2(c(r)) + b(r)\log_2(2c(r))}{c(r)} =$$
$$\frac{(r!/2 - 1 - b(r))\log_2(c(r)) + b(r)\log_2(2c(r))}{c(r)} =$$
$$\frac{(r!/2)\log_2(c(r)) - \log_2(c(r)) - b(r)\log_2(c(r)) + b(r)\log_2(c(r)) + b(r)\log_2(2)}{c(r)} =$$
$$\frac{(r!/2)\log_2(c(r)) - \log_2(c(r)) + b(r)}{c(r)} =$$
$$\frac{(r!/2 - 1)\log_2(c(r)) + b(r)}{c(r)} =$$
$$\frac{(r!/2 - 1)\log_2(r!/2 - 1) + b(r)}{r!/2 - 1} =$$
$$\log_2(r!/2 - 1) + \frac{b(r)}{r!/2 - 1}$$
and since $\lim_{r \to \infty} b(r)/c(r) = 1$ (I need help proving this):
$$\log_2(r!/2 - 1) + \frac{b(r)}{r!/2 - 1} \approx \log_2(r!/2 - 1) + 1$$
$$= \log_2(r!/2 - 1) + \log_2(2)$$
$$= \log_2(2(r!/2 - 1))$$
$$= \log_2(r! - 2) \approx \log_2(r!)$$
Hence, entropy[r] is the same as:
$$\mathbb{E}(L(S(C(1, G_r, \omega), \psi))) \approx \log_2(r!)$$
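As a quick check of this asymptotic (a sketch assuming Code 1 has already been evaluated in the same session):
• Table[{r, N[entropy[r]], N[Log[2, r!]]}, {r, 4, 8}]   (*e.g., for r = 8: entropy ~ 15.28 vs Log2[8!] ~ 15.30*)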
Now, repeat the code with:
$$(A_j)_{j \in \mathbb{N}} = \left( \left\{ c/d : c \in \mathbb{Z},\ d \in \mathbb{N},\ d \le j,\ 0 \le c \le d \right\} \right)_{j \in \mathbb{N}}$$
    Code 3: Illustration of step (1)-(6) on ( A j )
    • Clear["*Global`*"]
    • A[j_] := A[j] =
    •   DeleteDuplicates[Flatten[Table[Range[0, t]/t, {t, 1, j}]]]
    • (*Below is step 1*)
    • S1[j_] :=
    •  S1[j] = Sort[Select[A[j], Boole[IntegerQ[Denominator[#]/2]] == 1 &]]
    • S2[j_] :=
    •  S2[j] = Sort[Select[A[j], Boole[IntegerQ[Denominator[#]/2]] == 0 &]]
    • (*Below is step 2*)
    • Dist1[j_] := Dist1[j] = Differences[S1[j]]
    • (*Below is step 3*)
    • Dist2[j_] := Dist2[j] = Differences[S2[j]]
    • (*Below is step 4*)
    • NonOutliers[j_] :=
    •  NonOutliers[j] = Join[Dist1[j], Dist2[j]] (*There are no outliers*)
    • (*Below is step 5*)
    • P[j_] := P[j] = NonOutliers[j]/Total[NonOutliers[j]]
    • (*Below is step 6*)
    • entropy[j_] := entropy[j] = N[Total[-P[j] Log[2, P[j]]]]
Using this post [21], we assume that an approximation of Table[entropy[j],{j,3,Infinity}], i.e., of $\mathbb{E}(L(S(C(1, G_j, \omega), \psi)))$, is:
$$\mathbb{E}(L(S(C(1, G_j, \omega), \psi))) \approx 2\log_2(j) + 1 - \log_2(3\pi)$$
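A rough numerical comparison (a sketch assuming Code 3 has been evaluated; we only tabulate both sides of the assumed approximation):
• Table[{j, entropy[j], N[2 Log[2, j] + 1 - Log[2, 3 Pi]]}, {j, 10, 40, 10}]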
Hence, using §5.3.2 (a) and §5.3.2 (1), take $|S(C(\varepsilon, G_j, \omega), \psi)| = \sum_{M=1}^{j} \phi(M) \approx \frac{3}{\pi^2} j^2$ (where $\phi$ is Euler's totient function) and compute the following:
$$\underline{S(C(\varepsilon, G_r, \omega), \psi)} = \sup\left\{ |S(C(\varepsilon, G_j, \omega), \psi)| : j \in \mathbb{N},\ \omega \in \Omega_{\varepsilon, j},\ \psi \in \Psi_{\varepsilon, j, \omega},\ \mathbb{E}(L(S(C(\varepsilon, G_j, \omega), \psi))) \le \mathbb{E}(L(S(C(\varepsilon, G_r, \omega), \psi))) \right\} = \sup\left\{ \tfrac{3}{\pi^2} j^2 : j \in \mathbb{N},\ \omega \in \Omega_{\varepsilon, j},\ \psi \in \Psi_{\varepsilon, j, \omega},\ 2\log_2(j) + 1 - \log_2(3\pi) \le \log_2(r!) \right\} =$$
    where:
    (1)
For every $r \in \mathbb{N}$, we find a $j \in \mathbb{N}$ where $2\log_2(j) + 1 - \log_2(3\pi) \le \log_2(r!)$, but the absolute value of $\log_2(r!) - (2\log_2(j) + 1 - \log_2(3\pi))$ is minimized. In other words, for every $r \in \mathbb{N}$, we want the $j \in \mathbb{N}$ where:
$$2\log_2(j) + 1 - \log_2(3\pi) \le \log_2(r!)$$
$$2\log_2(j) \le \log_2(r!) - 1 + \log_2(3\pi)$$
$$2^{2\log_2(j)} \le 2^{\log_2(r!) - 1 + \log_2(3\pi)}$$
$$j^2 \le 3\pi\, r!/2$$
$$j \le \sqrt{3\pi\, r!/2}$$
$$j = \left\lfloor \sqrt{3\pi\, r!/2} \right\rfloor$$
$$\frac{3}{\pi^2} j^2 = \frac{3}{\pi^2} \left\lfloor \sqrt{\frac{3\pi\, r!}{2}} \right\rfloor^2 \approx \underline{S(C(1, G_r, \omega), \psi)}$$
Finally, since $|S(C(\varepsilon, G_r, \omega), \psi)| = r!$, we wish to prove
$$1 < \liminf_{\varepsilon \to 0} \liminf_{r \to \infty} \inf_{\omega \in \Omega_{\varepsilon, r}} \inf_{\psi \in \Psi_{\varepsilon, r, \omega}} \underline{\alpha}_{\varepsilon, r, \omega, \psi} < +\infty$$
within §5.3.2 crit. 1:
$$\liminf_{\varepsilon \to 0} \liminf_{r \to \infty} \inf_{\omega \in \Omega_{\varepsilon, r}} \inf_{\psi \in \Psi_{\varepsilon, r, \omega}} \underline{\alpha}_{\varepsilon, r, \omega, \psi} = \liminf_{r \to \infty} \inf_{\omega \in \Omega_{1, r}} \inf_{\psi \in \Psi_{1, r, \omega}} \frac{\underline{S(C(1, G_r, \omega), \psi)}}{|S(C(1, G_r, \omega), \psi)|}$$
$$= \lim_{r \to \infty} \frac{\frac{3}{\pi^2} \left\lfloor \sqrt{\frac{3\pi\, r!}{2}} \right\rfloor^2}{r!}$$
    where using mathematica, we get the limit is greater than one:
Code 4: Limit of eq. 59
    • Clear["*Global`*"]
    • N[Limit[((3/Pi^2) (Floor[Sqrt[(3 Pi r!)/2]])^2)/(r!), r -> Infinity]]
    •  (*Output is 1.43239*)
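For reference, ignoring the floor, the limit in Code 4 has the closed form
$$\lim_{r \to \infty} \frac{3}{\pi^2} \cdot \frac{3\pi\, r!/2}{r!} = \frac{9}{2\pi} \approx 1.43239,$$
which matches the printed output.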
Also, using §5.3.2 (b) and §5.3.2 (1), take $|S(C(\varepsilon, G_j, \omega), \psi)| = \sum_{M=1}^{j} \phi(M) \approx \frac{3}{\pi^2} j^2$ (where $\phi$ is Euler's totient function) to compute the following:
$$\overline{S(C(\varepsilon, G_r, \omega), \psi)} = \inf\left\{ |S(C(\varepsilon, G_j, \omega), \psi)| : j \in \mathbb{N},\ \omega \in \Omega_{\varepsilon, j},\ \psi \in \Psi_{\varepsilon, j, \omega},\ \mathbb{E}(L(S(C(\varepsilon, G_j, \omega), \psi))) \ge \mathbb{E}(L(S(C(\varepsilon, G_r, \omega), \psi))) \right\} = \inf\left\{ \tfrac{3}{\pi^2} j^2 : j \in \mathbb{N},\ \omega \in \Omega_{\varepsilon, j},\ \psi \in \Psi_{\varepsilon, j, \omega},\ 2\log_2(j) + 1 - \log_2(3\pi) \ge \log_2(r!) \right\} =$$
    where:
    (1)
For every $r \in \mathbb{N}$, we find a $j \in \mathbb{N}$ where $2\log_2(j) + 1 - \log_2(3\pi) \ge \log_2(r!)$, but the absolute value of $2\log_2(j) + 1 - \log_2(3\pi) - \log_2(r!)$ is minimized. In other words,
for every $r \in \mathbb{N}$, we want the $j \in \mathbb{N}$ where:
$$2\log_2(j) + 1 - \log_2(3\pi) \ge \log_2(r!)$$
$$2\log_2(j) \ge \log_2(r!) - 1 + \log_2(3\pi)$$
$$2^{2\log_2(j)} \ge 2^{\log_2(r!) - 1 + \log_2(3\pi)}$$
$$j^2 \ge 3\pi\, r!/2$$
$$j \ge \sqrt{3\pi\, r!/2}$$
$$j = \left\lceil \sqrt{3\pi\, r!/2} \right\rceil$$
$$\frac{3}{\pi^2} j^2 = \frac{3}{\pi^2} \left\lceil \sqrt{\frac{3\pi\, r!}{2}} \right\rceil^2 \approx \overline{S(C(1, G_r, \omega), \psi)}$$
Finally, since $|S(C(1, G_r, \omega), \psi)| = r!$, we wish to prove
$$1 < \limsup_{\varepsilon \to 0} \limsup_{r \to \infty} \sup_{\omega \in \Omega_{\varepsilon, r}} \sup_{\psi \in \Psi_{\varepsilon, r, \omega}} \overline{\alpha}_{\varepsilon, r, \omega, \psi} < +\infty$$
within §5.3.2 crit. 1:
$$\limsup_{\varepsilon \to 0} \limsup_{r \to \infty} \sup_{\omega \in \Omega_{\varepsilon, r}} \sup_{\psi \in \Psi_{\varepsilon, r, \omega}} \overline{\alpha}_{\varepsilon, r, \omega, \psi} = \limsup_{r \to \infty} \sup_{\omega \in \Omega_{1, r}} \sup_{\psi \in \Psi_{1, r, \omega}} \frac{\overline{S(C(1, G_r, \omega), \psi)}}{|S(C(1, G_r, \omega), \psi)|}$$
$$= \lim_{r \to \infty} \frac{\frac{3}{\pi^2} \left\lceil \sqrt{\frac{3\pi\, r!}{2}} \right\rceil^2}{r!}$$
    where using mathematica, we get the limit is greater than one:
Code 5: Limit of eq. 69
    • N[Limit[((3/Pi^2) (Ceiling[Sqrt[(3 Pi r!)/2]])^2)/(r!), r -> Infinity]]
    •  (*The output is 1.43239*)
Hence, since the limits in eq. 58 and eq. 68 are greater than one and less than $+\infty$, i.e.,
$$1 < \liminf_{\varepsilon \to 0} \liminf_{r \to \infty} \inf_{\omega \in \Omega_{\varepsilon, r}} \inf_{\psi \in \Psi_{\varepsilon, r, \omega}} \underline{\alpha}_{\varepsilon, r, \omega, \psi} = \limsup_{\varepsilon \to 0} \limsup_{r \to \infty} \sup_{\omega \in \Omega_{\varepsilon, r}} \sup_{\psi \in \Psi_{\varepsilon, r, \omega}} \overline{\alpha}_{\varepsilon, r, \omega, \psi} < +\infty$$
what we're measuring from $(G_r)_{r \in \mathbb{N}}$ increases at a rate superlinear to that of $(G_j)_{j \in \mathbb{N}}$ (i.e., §5.3.2 crit. 1).

    5.3.4. Example of The “Measure" from ( G r ) r N Increasing at a Rate Sub-Linear to That of ( G j ) j N

Using our previous example, we can use the following theorem:
Theorem 5. 
If what we're measuring from $(G_r)_{r \in \mathbb{N}}$ increases at a rate superlinear to that of $(G_j)_{j \in \mathbb{N}}$, then what we're measuring from $(G_j)_{j \in \mathbb{N}}$ increases at a rate sublinear to that of $(G_r)_{r \in \mathbb{N}}$.
Hence, in our definition of superlinear (§5.3.2 crit. 2), swap $(G_r)$ and $r \in \mathbb{N}$ for $(G_j)$ and $j \in \mathbb{N}$ in $\overline{\alpha}_{\varepsilon, r, \omega, \psi}$ and $\underline{\alpha}_{\varepsilon, r, \omega, \psi}$ (i.e., use $\overline{\alpha}_{\varepsilon, j, \omega, \psi}$ and $\underline{\alpha}_{\varepsilon, j, \omega, \psi}$) and notice that Theorem 5 holds when:
$$1 < \limsup_{\varepsilon \to 0} \limsup_{j \to \infty} \sup_{\omega \in \Omega_{\varepsilon, j}} \sup_{\psi \in \Psi_{\varepsilon, j, \omega}} \overline{\alpha}_{\varepsilon, j, \omega, \psi},\ \liminf_{\varepsilon \to 0} \liminf_{j \to \infty} \inf_{\omega \in \Omega_{\varepsilon, j}} \inf_{\psi \in \Psi_{\varepsilon, j, \omega}} \underline{\alpha}_{\varepsilon, j, \omega, \psi} < +\infty$$

    5.3.5. Example of The “Measure" from ( G r ) r N Increasing at a Rate Linear to That of ( G j ) j N

Suppose we have the function $f : A \to \mathbb{R}$, where $A = \mathbb{Q} \cap [0,1]$, and:
$$f(x) = \begin{cases} 1 & x \in \left\{ (2s+1)/(2t) : s \in \mathbb{Z},\, t \in \mathbb{N},\, t \neq 0 \right\} \cap [0,1] \\ 0 & x \notin \left\{ (2s+1)/(2t) : s \in \mathbb{Z},\, t \in \mathbb{N},\, t \neq 0 \right\} \cap [0,1] \end{cases}$$
such that:
$$(A_r)_{r \in \mathbb{N}} = \left( \left\{ c/r! : c \in \mathbb{Z},\ 0 \le c \le r! \right\} \right)_{r \in \mathbb{N}}$$
and
$$(A_j)_{j \in \mathbb{N}} = \left( \left\{ c/(j!)^2 : c \in \mathbb{Z},\ 0 \le c \le (j!)^2 \right\} \right)_{j \in \mathbb{N}}$$
where for $f_r : A_r \to \mathbb{R}$,
$$f_r(x) = f(x) \text{ for all } x \in A_r$$
and for $f_j : A_j \to \mathbb{R}$,
$$f_j(x) = f(x) \text{ for all } x \in A_j$$
Hence, $(G_r)_{r \in \mathbb{N}}$ is:
$$(G_r)_{r \in \mathbb{N}} = \left( \left\{ (x, f(x)) : x \in \{ c/r! : c \in \mathbb{Z},\ 0 \le c \le r! \} \right\} \right)_{r \in \mathbb{N}}$$
and $(G_j)_{j \in \mathbb{N}}$ is:
$$(G_j)_{j \in \mathbb{N}} = \left( \left\{ (x, f(x)) : x \in \{ c/(j!)^2 : c \in \mathbb{Z},\ 0 \le c \le (j!)^2 \} \right\} \right)_{j \in \mathbb{N}}$$
We already know, using §5.3.3:
$$\mathbb{E}(L(S(C(1, G_r, \omega), \psi))) \approx \log_2(r! - 2) \approx \log_2(r!)$$
    Also, using §5.3.3 steps 1-6 on ( A j ) j N :
    Code 6: Illustration of step (1)-(6) on ( A j )
    • Clear["*Global`*"]
• A[j_] := A[j] = Range[0, (j!)^2]/((j!)^2)   (*A_j = {c/(j!)^2 : 0 <= c <= (j!)^2}*)
    • (*Below is step 1*)
    • S1[j_] :=
    •  S1[j] = Sort[Select[A[j], Boole[IntegerQ[Denominator[#]/2]] == 1 &]]
    • S2[j_] :=
    •  S2[j] = Sort[Select[A[j], Boole[IntegerQ[Denominator[#]/2]] == 0 &]]
    • (*Below is step 2*)
    • Dist1[j_] := Dist1[j] = Differences[S1[j]]
    • (*Below is step 3*)
    • Dist2[j_] := Dist2[j] = Differences[S2[j]]
    • (*Below is step 4*)
    • NonOutliers[j_] :=
    •  NonOutliers[j] = Dist1[j] (*Dist2[j] is an outlier*)
    • (*Below is step 5*)
    • P[j_] := P[j] = NonOutliers[j]/Total[NonOutliers[j]]
    • (*Below is step 6*)
    • entropy[j_] := entropy[j] = N[Total[-P[j] Log[2, P[j]]]]
    • T = Table[{j,entropy[j]},{j,3,6}]
    where the output is
    Code 7: Output of Code 6
    • {{3,(8 Log[17])/(17 Log[2]) + (9 Log[34])/(17 Log[2])},
    •      {4,(8 Log[287])/(287 Log[2]) + (279 Log[574])/(287 Log[2])},
    •      {5,(224 Log[7199])/(7199 Log[2]) + (6975 Log[14398])/(7199 Log[2])},
    •      {6,(2024 Log[259199])/(259199 Log[2]) + (257175 Log[518398])/(259199 Log[2])}}
Notice that when:
(1)
$c(j) = (j!)^2/2 - 1$
(2)
$b(3) = 9,\ b(4) = 279,\ b(5) = 6975,\ b(6) = 257175$
(3)
$a(j) + b(j) = c(j)$
the output of the code can be written:
$$\frac{a(j)\log_2(c(j))}{c(j)} + \frac{b(j)\log_2(2c(j))}{c(j)} = \frac{a(j)\log_2(c(j)) + b(j)\log_2(2c(j))}{c(j)}$$
Hence, since $a(j) = c(j) - b(j) = (j!)^2/2 - 1 - b(j)$:
$$\frac{a(j)\log_2(c(j)) + b(j)\log_2(2c(j))}{c(j)} =$$
$$\frac{((j!)^2/2 - 1 - b(j))\log_2(c(j)) + b(j)\log_2(2c(j))}{c(j)} =$$
$$\frac{((j!)^2/2)\log_2(c(j)) - \log_2(c(j)) - b(j)\log_2(c(j)) + b(j)\log_2(c(j)) + b(j)\log_2(2)}{c(j)} =$$
$$\frac{((j!)^2/2)\log_2(c(j)) - \log_2(c(j)) + b(j)}{c(j)} =$$
$$\frac{((j!)^2/2 - 1)\log_2(c(j)) + b(j)}{c(j)} =$$
$$\frac{((j!)^2/2 - 1)\log_2((j!)^2/2 - 1) + b(j)}{(j!)^2/2 - 1} =$$
$$\log_2((j!)^2/2 - 1) + \frac{b(j)}{(j!)^2/2 - 1}$$
Since $\lim_{j \to \infty} b(j)/c(j) = 1$ (this is proven in [22]):
$$\log_2((j!)^2/2 - 1) + \frac{b(j)}{(j!)^2/2 - 1} \approx \log_2((j!)^2/2 - 1) + \log_2(2)$$
$$= \log_2((j!)^2 - 2)$$
$$\approx \log_2((j!)^2)$$
$$= 2\log_2(j!)$$
Hence, entropy[j] is the same as:
$$\mathbb{E}(L(S(C(1, G_j, \omega), \psi))) \approx 2\log_2(j!)$$
Therefore, using §5.3.2 (a) and §5.3.2 (3a), take $|S(C(\varepsilon, G_j, \omega), \psi)| = (j!)^2$ to compute the following:
$$\underline{S(C(\varepsilon, G_r, \omega), \psi)} = \sup\left\{ |S(C(\varepsilon, G_j, \omega), \psi)| : j \in \mathbb{N},\ \omega \in \Omega_{\varepsilon, j},\ \psi \in \Psi_{\varepsilon, j, \omega},\ \mathbb{E}(L(S(C(\varepsilon, G_j, \omega), \psi))) \le \mathbb{E}(L(S(C(\varepsilon, G_r, \omega), \psi))) \right\} = \sup\left\{ (j!)^2 : j \in \mathbb{N},\ \omega \in \Omega_{\varepsilon, j},\ \psi \in \Psi_{\varepsilon, j, \omega},\ 2\log_2(j!) \le \log_2(r!) \right\} =$$
    where:
    (1)
For every $r \in \mathbb{N}$, we find a $j \in \mathbb{N}$ where $2\log_2(j!) \le \log_2(r!)$, but the absolute value of $\log_2(r!) - 2\log_2(j!)$ is minimized. In other words, for every $r \in \mathbb{N}$, we want the $j \in \mathbb{N}$ where:
$$2\log_2(j!) \le \log_2(r!)$$
$$2^{2\log_2(j!)} \le 2^{\log_2(r!)}$$
$$\left(2^{\log_2(j!)}\right)^2 \le r!$$
$$(j!)^2 \le r!$$
$$(j!)^2 = r!$$
    To solve for j, we try the following code:
    Code 8: Code for j in eq. 97
    • Clear["Global`*"]
    • T1 = Table[
    •    {sol[r_] := sol[r] = Reduce[j > 0 && ((j!)^2) <= r!, j, Integers],
    •     jsolve = Max[j /. Solve[sol[r], {j}, Integers]],
    •     (* Largest j that solves inequality (j!)^2<=r for every r *)
    •     , N[(jsolve!)^2/(r!)]}, {r, 3, 40}];
    • Tablejsolve =
    •  Table[{T1[[r - 3 + 1, 2]], r}, {r, 3,
    •    40}] (*Takes largest j-values for every r in r!*)
    • loweralphr =
    •  Table[{r, T1[[r - 3 + 1, 4]]}, {r, 3,
    •    40}] (* Takes largest largest j values and corresponding r value*)
    • ListPlot[loweralphr] (*Graph points of upperalph. Notice, the graph has
    • a lower bound of zero.*)
    Note, the output is:
    Code 9: Output for code 8
    • Clear["Global`*"]
    • (* Output of Tablejsolve*)
    • {{2, 3}, {2, 4}, {3, 5}, {4, 6}, {4, 7}, {5, 8}, {5, 9}, {6, 10}, {7, 11}, {7, 12},
    •  {8, 13}, {8, 14}, {9, 15}, {10, 16}, {10, 17}, {11, 18}, {11, 19}, {12, 20}, {13, 21},
    •  {13, 22}, {14, 23}, {14, 24}, {15, 25}, {15, 26}, {16, 27}, {17, 28}, {17, 29},
    •  {18, 30}, {18, 31}, {19, 32}, {20, 33}, {20, 34}, {21, 35}, {21, 36}, {22, 37}, {22, 38},
    •  {23, 39}, {24, 40}}
    • (*Output of loweralphr*)
    • {{3, 0.666667}, {4, 0.166667}, {5, 0.3}, {6, 0.8}, {7, 0.114286}, {8, 0.357143}, {9, 0.0396825},
    •  {10, 0.142857}, {11, 0.636364}, {12, 0.0530303}, {13, 0.261072}, {14, 0.018648}, {15, 0.100699},
    •  {16, 0.629371}, {17, 0.0370218}, {18, 0.248869}, {19, 0.0130984}, {20, 0.0943082}, {21, 0.758956},
    •  {22, 0.034498}, {23, 0.293983}, {24, 0.0122493}, {25, 0.110244}, {26, 0.00424014}, {27,0.0402028},
    •  {28, 0.41495}, {29, 0.0143086}, {30, 0.154533}, {31, 0.00498494}, {32, 0.0562364}, {33, 0.681653},
    •  {34, 0.0200486}, {35, 0.252613}, {36, 0.00701702}, {37, 0.0917902}, {38, 0.00241553},
    •  {39, 0.0327645}, {40, 0.471809}}
Figure 1. Plot of loweralphr.
Finally, since the lower bound of loweralphr is zero, we have shown:
$$\liminf_{\varepsilon \to 0} \liminf_{r \to \infty} \inf_{\omega \in \Omega_{\varepsilon, r}} \inf_{\psi \in \Psi_{\varepsilon, r, \omega}} \underline{\alpha}_{\varepsilon, r, \omega, \psi} = 0$$
    Next, using §5.3.2 (b) and §5.3.2 (3b), take S ( C ( ε , G r , ω ) , ψ ) = r ! and swap r N and ( G r ) r N with j N and ( G j ) j N , to compute the following:
    S ( C ( ε , G j , ω ) , ψ ) ̲ = inf S ( C ( ε , G r , ω ) , ψ ) : r N , ω Ω ε , r , ψ Ψ ε , r , ω , E ( L ( S ( C ( ε , G r , ω ) , ψ ) ) ) E ( L ( S ( C ( ε , G j , ω ) , ψ ) ) ) = inf r ! : r N , ω Ω ε , r , ψ Ψ ε , j , ω , log 2 ( r ! ) 2 log 2 ( j ! ) =
    where:
    (1)
    For every j N , we find a r N , where log 2 ( r ! ) 2 log 2 ( j ! ) , but the absolute value of 2 log 2 ( j ! ) log 2 ( r ! ) is minimized. In other words, for every j N , we want r N where:
    log 2 ( r ! ) 2 log 2 ( j ! )
    2 log 2 ( r ! ) 2 2 log 2 ( j ! )
    r ! ( 2 log 2 ( j ! ) ) 2
    r ! ( j ! ) 2
    r ! = ( j ! ) 2
    To solve r, we try the following code:
    Code 10: Code for r in eq. 104
    • Clear["Global`*"]
    • T2 = Table[
    •    {sol[j_] := sol[j] = Reduce[j > 0 && r! <= (j!)^2, r, Integers],
    •     rsolve = Max[r /. Solve[sol[j], {r}, Integers]],
    •     (* Largest r that solves inequality (r!)<=(j!)^2 for every r *)
    •     , N[(rsolve!)/((j!)^2)]}, {j, 3, 40}];
    • Tablersolve =
    •  Table[{T2[[j - 3 + 1, 2]], j}, {j, 3,
    •    40}] (*Takes largest r-values for every j in j!*)
    • loweralphj =
    •  Table[{j, T2[[j - 3 + 1, 4]]}, {j, 3,
    •    40}] (* Takes largest largest r values and corresponding j value*)
    • ListPlot[loweralphj](*Graph points of upperalph. Notice, the graph
    • has a upperbound of Infinity *)
    Note, the output is:
    Code 11: Output for code 10
    • Clear["Global`*"]
    • (* Output of Tablersolve*)
    • {{4, 3}, {5, 4}, {7, 5}, {9, 6}, {10, 7}, {12, 8}, {14, 9}, {15, 10}, {17, 11}, {19, 12},
    •  {20, 13}, {22, 14}, {24, 15}, {26, 16}, {27, 17}, {29, 18}, {31, 19}, {32, 20}, {34, 21},
    •  {36, 22}, {38, 23}, {39, 24}, {41, 25}, {43, 26}, {44, 27}, {46, 28}, {48, 29}, {50, 30},
    •  {51, 31}, {53, 32}, {55, 33}, {57, 34}, {58, 35}, {60, 36}, {62, 37}, {64, 38}, {65, 39},
    •  {67, 40}}
    • (*Output of loweralphj*)
    • {{3, 0.666667}, {4, 0.166667}, {5, 0.3}, {6, 0.8}, {7, 0.114286}, {8, 0.357143}, {9, 0.0396825},
    •  {10, 0.142857}, {11, 0.636364}, {12, 0.0530303}, {13, 0.261072}, {14, 0.018648}, {15, 0.100699},
    •  {16, 0.629371}, {17, 0.0370218}, {18, 0.248869}, {19, 0.0130984}, {20, 0.0943082}, {21, 0.758956},
    •  {22, 0.034498}, {23, 0.293983}, {24, 0.0122493}, {25, 0.110244}, {26, 0.00424014}, {27,0.0402028},
    •  {28, 0.41495}, {29, 0.0143086}, {30, 0.154533}, {31, 0.00498494}, {32, 0.0562364}, {33, 0.681653},
    •  {34, 0.0200486}, {35, 0.252613}, {36, 0.00701702}, {37, 0.0917902}, {38, 0.00241553},
    •  {39, 0.0327645}, {40, 0.471809}}
Figure 2. Plot of loweralphj.
Since the lower bound of loweralphj is zero, we have shown:
$$\liminf_{\varepsilon \to 0} \liminf_{j \to \infty} \inf_{\omega \in \Omega_{\varepsilon, j}} \inf_{\psi \in \Psi_{\varepsilon, j, \omega}} \underline{\alpha}_{\varepsilon, j, \omega, \psi} = 0$$
    Hence, using eq. 98 and 105, since both:  
    (1)
    lim sup ε 0 lim sup r sup ω Ω ε , r sup ψ Ψ ε , r , ω α ¯ ε , r , ω , ψ or lim inf ε 0 lim inf r inf ω Ω ε , r inf ψ Ψ ε , r , ω α ̲ ε , r , ω , ψ are equal to zero, one or +
    (2)
    lim sup ε 0 lim sup j sup ω Ω ε , j sup ψ Ψ ε , j , ω α ¯ ε , j , ω , ψ or lim inf ε 0 lim inf j inf ω Ω ε , j inf ψ Ψ ε , j , ω α ̲ ε , j , ω , ψ are equal to zero, one or +
    then what I’m measuring from  ( G r ) r N increases at a rate linear to that of ( G j ) j N .

    5.4. Defining the Actual Rate of Expansion of Sequence of Bounded Sets

    5.4.1. Definition of Actual Rate of Expansion of Sequence of Bounded Sets

Suppose $(f_r)_{r \in \mathbb{N}}$ is a sequence of bounded functions converging to $f$, where $(G_r)_{r \in \mathbb{N}}$ is the sequence of graphs of each $f_r$, and $d(Q, R)$ is the Euclidean distance between points $Q, R \in \mathbb{R}^{n+1}$. Therefore, using the "chosen" center point $C \in \mathbb{R}^{n+1}$, when:
$$\mathcal{G}(C, G_r) = \sup\left\{ d(C, y) : y \in G_r \right\}$$
the actual rate of expansion is:
$$\mathcal{E}(C, G_r) = \mathcal{G}(C, G_{r+1}) - \mathcal{G}(C, G_r)$$
Note, there are cases of $(G_r)_{r \in \mathbb{N}}$ where $\mathcal{E}$ isn't fixed and $\mathcal{E} \neq E$ (i.e., the chosen, fixed rate of expansion).
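A small sketch of these two definitions in Mathematica (our helper names farthest and expansionRate), treating each $G_r$ as a finite list of points:
• (*G(C, Gr): distance from the center point C to the farthest point of Gr; E(C, Gr): the actual rate of expansion*)
• farthest[c_, gr_List] := Max[EuclideanDistance[c, #] & /@ gr]
• expansionRate[c_, grNext_List, gr_List] := farthest[c, grNext] - farthest[c, gr]
• (*example matching Section 5.4.2: the graph of f(x) = x on [-r, r], discretized*)
• expansionRate[{0, 0}, Table[{x, x}, {x, -2., 2., .01}], Table[{x, x}, {x, -1., 1., .01}]]   (*~Sqrt[2]*)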

    5.4.2. Example

Suppose we have $f : A \to \mathbb{R}$, where $A = \mathbb{R}$ and $f(x) = x$, such that $(A_r)_{r \in \mathbb{N}} = ([-r, r])_{r \in \mathbb{N}}$ and, for $f_r : A_r \to \mathbb{R}$:
$$f_r(x) = f(x) \text{ for all } x \in A_r$$
Hence, when $(G_r)_{r \in \mathbb{N}}$ is:
$$(G_r)_{r \in \mathbb{N}} = \left( \left\{ (x, x) : x \in [-r, r] \right\} \right)_{r \in \mathbb{N}}$$
such that $C = (0, 0)$, note the farthest point of $G_r$ from $C$ is either $(r, r)$ or $(-r, -r)$. Hence, to compute $\mathcal{G}(C, G_r)$, we can take $d((0,0), (r, r))$ or $d((0,0), (-r, -r))$:
$$\mathcal{G}(C, G_r) = \sup\left\{ d(C, y) : y \in G_r \right\} =$$
$$d((0,0), (r, r)) =$$
$$\sqrt{(0 - r)^2 + (0 - r)^2} =$$
$$\sqrt{r^2 + r^2} =$$
$$\sqrt{2r^2} =$$
$$\sqrt{2}\,|r| =$$
$$\sqrt{2}\,r \quad (\text{since } r > 0)$$
and the actual rate of expansion is:
$$\mathcal{E}(C, G_r) = \mathcal{G}(C, G_{r+1}) - \mathcal{G}(C, G_r) =$$
$$\sqrt{2}(r + 1) - \sqrt{2}\,r =$$
$$\sqrt{2}\,r + \sqrt{2} - \sqrt{2}\,r =$$
$$\sqrt{2}$$

    5.5. Reminder

Revisit §3.1 and see if it is now easier to understand.

    6. My Attempt at Answering the Blockquote of §1.3.2

    6.1. Choice Function

    Suppose we define the following:
    (1)
    ( f k ) k N is the sequence of bounded functions which satisfies (1), (2), (3), (4) and (5) of the leading question in § 3.1
    (2)
    S ( f ) is all sequences of bounded functions satisfying (1) of the leading question where the expected values, defined in the papers of § 2, is finite.
    (3)
    ( f j ) j N is an element S ( f ) but not an element in the set of sequences of bounded functions to that of ( f k ) k N which have equal expected values: i.e., ( f k ) . We represent this criteria as:
    ( f j ) j N S ( f ) ( f k ) k N
    Further note, from §5.3.2 (b), if we take:
    S ( C ( ε , G k , ω ) , ψ ) ¯ = inf S ( C ( ε , G j , ω ) , ψ ) : j N , ω Ω ε , j , ψ Ψ ε , j , ω , E ( L ( S ( C ( ε , G j , ω ) , ψ ) ) ) E ( L ( S ( C ( ε , G k , ω ) , ψ ) ) )
    and from §5.3.2 (a), we take:
    S ( C ( ε , G k , ω ) , ψ ) ̲ = sup S ( C ( ε , G j , ω ) , ψ ) : j N , ω Ω ε , j , ψ Ψ ε , j , ω , E ( L ( S ( C ( ε , G j , ω ) , ψ ) ) ) E ( L ( S ( C ( ε , G k , ω ) , ψ ) ) )
    Then, §5.3.1 (2), eq. 119, and eq. 120 is:
    sup ω Ω ε , k sup ψ Ψ ε , k , ω S ( C ( ε , G k , ω ) , ψ ) = S ( ε , G k ) = S
    sup ω Ω ε , k sup ψ Ψ ε , k , ω S ( C ( ε , G k , ω ) , ψ ) ¯ = S ( ε , G k ) ¯ = S ¯
    sup ω Ω ε , k sup ψ Ψ ε , k , ω S ( C ( ε , G k , ω ) , ψ ) ̲ = S ( ε , G k ) ̲ = S ̲

    6.2. Approach

    We manipulate the definitions of §5.3.2 (a) and §5.3.2 (b) to solve (1), (2), (3), (4) and (5) of the leading question in § 3.1

    6.3. Potential Answer

    6.3.1. Preliminaries (Definition of T in Case of §3.1.1 (5))

    When the difference of point X = ( x 1 , · · · , x n ) and Y = ( y 1 , · · · , y n ) is:
    X Y = ( x 1 y 1 , x 2 y 2 , · · · , x n y n )
    the average of G r for every r N is:
    Avg ( G r ) = 1 H dim H ( G r ) ( G r ) G r ( x 1 , · · · , x n ) d H dim H ( G r )
    and d ( P , Q ) is the n-d Euclidean distance between points P , Q R n , we define an explicit injective F : R n R such that:
    (1)
    If d ( Avg ( G r ) , C ) < d ( Avg ( G j ) , C ) , then F ( Avg ( G r ) C ) < F ( Avg ( G j ) C )
    (2)
    If d ( Avg ( G r ) , C ) > d ( Avg ( G j ) , C ) , then F ( Avg ( G r ) C ) > F ( Avg ( G j ) C )
    (3)
    If d ( Avg ( G r ) , C ) = d ( Avg ( F j ) , C ) , then F ( Avg ( G r ) C ) F ( Avg ( G j ) C )
    where using “chosen" center point C R n :
    T ( C , G r ) = F ( Avg ( G r ) C )

    6.3.2. Question

    Does T exist? If so, how do we define it?
    Hence, using S , S ¯ , S ̲ , E, E ( C , G k ) 5.4), and T ( C , F k ) , such that with the absolute value function | | · | | , ceiling function · , and nearest integer function · , we define:
    K ( ε , G k ) = 1 + E E ( C , G k ) 5 ( 5 | 5 | S 1 + S S ̲ + 2 S S ̲ + S S ̲ + S + S ¯ 1 + S ̲ / S 1 + S / S ¯ 1 + S ̲ / S ¯ S 5 | 5 | + S 5 ) T ( C , G k ) E ( C , G k )
    where E , E, and T are “removed" when E , E = 0 , the choice function which answers the leading question in § 3.1 could be the following, s.t.we explain the reason behind choosing the choice function in §6.4:
    Theorem 6. 
    If we define:
    M ( ε , G k ) = | S ( ε , G k ) | K ( ε , G k ) | S ( ε , G k ) |
    M ( ε , G j ) = | S ( ε , G j ) | K ( ε , G j ) | S ( ε , G j ) |
    where for M ( ε , G k ) , we define M ( ε , G k ) to be the same as M ( ε , G j ) when swapping “ j N " with “ k N " (for eq. 119 & 120) and sets G k with G j (for eq. 119126), then for constant v > 0 and variable v * > 0 , if:
    S ¯ ( ε , k , v * , G j ) = inf | S ( ε , G j ) | : j N , M ( ε , G j ) M ( ε , G k ) v * { v * } + v
    and:
    S ̲ ( ε , k , v * , G j ) = sup | S ( ε , G j ) | : j N , v * M ( ε , G j ) M ( ε , G k ) { v * } + v
    then for all ( f j ) j N S ( f ) ( f k ) k N ( § crit. 3), if:
    inf | | 1 c | | : ( ϵ > 0 ) ( c > 0 ) ( k N ) ( j N ) S ( ε , G k ) S ( ε , G j ) c < ε
    where · is the ceiling function, E is the fixed rate of expansion, Γ is the gamma function, n is the dimension of R n , dim H ( G k ) is the Hausdorff dimension of set G k R n + 1 , and A k is area of the smallest ( n + 1 ) -dimensional box that contains A k , then:
    V ( ε , G k , n ) = ( A k 1 sign ( E ) ( E sign ( E ) + 1 ) exp n ln ( π ) / 2 Γ ( n / 2 + 1 ) k ! ( n dim H ( G k ) ) k sign ( E ) dim H ( G k ) sign dim H ( G k ) + 1 + ( 1 sign ( dim H ( G k ) ) ) ) / ε / | S ( ε , G k ) |
    the choice function is:
    lim sup ε 0 lim v * lim sup k sign ( M ( ε , G k ) ) S ¯ ( ε , k , v * , G j ) | S ( ε , G k ) | + v c V ( ε , G k , n ) sign ( M ( ε , G k ) ) S ̲ ( ε , k , v * , G j ) | S ( ε , G k ) | + v c V ( ε , G k , n ) =
    lim inf ε 0 lim v * lim inf k sign ( M ( ε , G k ) ) S ¯ ( ε , k , v * , G j ) | S ( ε , G k ) | + v c V ( ε , G k , n ) sign ( M ( ε , G k ) ) S ̲ ( ε , k , v * , G j ) | S ( ε , G k ) | + v c V ( ε , G k , n ) = 0
    such that ( G k ) k N satisfies eq. 131 & eq. 132. (Note, we want sup = , inf = + , and ( f k ) k N to answer the leading question of § 3.1) where the answer to the blockquote of § 1.3.2 is E [ f k ] (when it exists).

    6.4. Explaining the Choice Function and Evidence the Choice Function Is Credible

    Notice, before reading the programming in code 12, without the “c"-terms in eq. 131 and eq. 132:
    (1)
    The choice function in eq. 131 and eq. 132 is zero, when what I’m measuring from G k k N 5.3.2 criteria 1) increases at a rate superlinear to that of ( G j ) j N , where sign ( M ( ε , G k ) ) = 0 .
    (2)
    The choice function in eq. 131 and eq. 132 is zero, when for a given ( G k ) k N and ( G j ) j N there doesn’t exist c where eq. 129 is satisfied or c = 0 .
    (3)
    When c does exist, suppose:
    J ( k ) : k N , | S ( ε , G k ) | | S ( ε , G J ( k ) ) | c
    (a)
    When | S ( ε , G k ) | < | S ( ε , G J ( k ) ) | , then:
    lim sup ε 0 lim v * lim sup k sign ( M ( ε , G k ) ) S ¯ ( ε , k , v * , G j ) | S ( ε , G k ) | + v = c
    lim inf ε 0 lim v * lim inf k sign ( M ( ε , G k ) ) S ̲ ( ε , k , v * , G j ) | S ( ε , G k ) | + v = 0
    (b)
    When | S ( ε , G k ) | > | S ( ε , G J ( k ) ) | , then:
    lim sup ε 0 lim v * lim sup k sign ( M ( ε , G k ) ) S ¯ ( ε , k , v * , G j ) | S ( ε , G k ) | + v = +
    lim inf ε 0 lim v * lim inf k sign ( M ( ε , G k ) ) S ̲ ( ε , k , v * , G j ) | S ( ε , G k ) | + v = 1 / c
    Hence, for each sub-criteria under crit. (3), if we subtract one of their limits by their limit value, then eq. 131 and eq. is zero. (We do this using the “c"-term in eq. 131 and 132). However, when the exponents of the “c"-terms aren’t equal to 1 , the limits of eq. 131 and 132 aren’t equal to zero. We want this, infact, whenever we swap S ( ε , G k ) with S ( ε , G j ) . Moreover, we define function V ( ε , G k , n ) (i.e., eq. 130), where:
    (3)
    (i)
    When S ( ε , G k ) Numerator V ( ε , G k , n ) , then eq. 131 and 132 without the “c"-terms are zero. (The “c"-terms approach zero and still allow eq. 131 and 132 to equal zero.)
    (ii)
    When S ( ε , G k ) Numerator V ( ε , G k , n ) , then sign ( M ( ε , G k ) ) is zero which makes eq. 131 and 132 equal zero.
    (iii)
    Here are some examples of the numerator of V ( ε , G k , n ) (eq. 130):
    A.
    When E = 0 , n = 1 , and dim H ( A ) = 0 , the numerator of V ( ε , G k , n ) is A k ! + 1 / ε
    B.
    When E = z , n = 1 , and dim H ( A ) = 0 , the numerator of V ( ε , G k , n ) is 2 z k · k ! + 1 / ε
    C.
    When E = 0 , n = z 2 , and dim H ( A ) = z 2 , the numerator of V ( ε , G k , n ) is ceiling of constant A times the volume of an n-dimensional ball with finite radius: i.e.,
    A z 1 exp z 2 ln ( π ) / 2 Γ ( z 2 / 2 + 1 ) / ε
    D.
    When E = z 1 , n = z 2 , and dim H ( A ) = z 2 , the numerator of V ( ε , G k , n ) is ceiling of the volume of the n-dimensional ball: i.e.,
    z 1 exp z 2 ln ( π ) / 2 Γ ( z 2 / 2 + 1 ) k z 2 / ε
Now, consider the code for eq. 131 and eq. 132. (Note, the set-theoretic limit of $G_k$ is the graph of the function $f : A \to \mathbb{R}$.) In this example, $A = \mathbb{Q} \cap [0,1]$, and:
$$f(x) = \begin{cases} 1 & x \in \left\{ (2s+1)/(2t) : s \in \mathbb{Z},\, t \in \mathbb{N},\, t \neq 0 \right\} \cap [0,1] \\ 0 & x \notin \left\{ (2s+1)/(2t) : s \in \mathbb{Z},\, t \in \mathbb{N},\, t \neq 0 \right\} \cap [0,1] \end{cases}$$
such that:
$$(A_k)_{k \in \mathbb{N}} = \left( \left\{ c/k! : c \in \mathbb{Z},\ 0 \le c \le k! \right\} \right)_{k \in \mathbb{N}}$$
the ceiling function is $\lceil \cdot \rceil$, and:
$$(A_j)_{j \in \mathbb{N}} = \left( \left\{ c/\lceil j!/3 \rceil : c \in \mathbb{Z},\ 0 \le c \le \lceil j!/3 \rceil \right\} \right)_{j \in \mathbb{N}}$$
such that for $f_k : A_k \to \mathbb{R}$,
$$f_k(x) = f(x) \text{ for all } x \in A_k$$
and for $f_j : A_j \to \mathbb{R}$,
$$f_j(x) = f(x) \text{ for all } x \in A_j$$
Hence, when $(G_k)_{k \in \mathbb{N}}$ is:
$$(G_k)_{k \in \mathbb{N}} = \left( \left\{ (x, f(x)) : x \in \{ c/k! : c \in \mathbb{Z},\ 0 \le c \le k! \} \right\} \right)_{k \in \mathbb{N}}$$
and $(G_j)_{j \in \mathbb{N}}$ is:
$$(G_j)_{j \in \mathbb{N}} = \left( \left\{ (x, f(x)) : x \in \{ c/\lceil j!/3 \rceil : c \in \mathbb{Z},\ 0 \le c \le \lceil j!/3 \rceil \} \right\} \right)_{j \in \mathbb{N}}$$
Note the following (we leave it to mathematicians to figure out LengthS1, LengthS2, Entropy1 and Entropy2 for other $A$ and $f$ in Code 12).

    6.4.1. Evidence With Programming

Code 12: Code for eq. 131 and 132 to eq. 141 and eq. 142 (the listing appears as images in the original preprint).

    7. Questions

    (1)
Does § 6 answer the leading question in § 3.1?
    (2)
    Using thm. 6, when f is defined in § 1.1, does E [ f k ] have a finite value?
    (3)
    Using thm. 6, when f is defined in § 1.2, does E [ f k ] have a finite value?
    (4)
    If there’s no time to check questions 1, 2 and 3, see § 4.

    8. Appendix of §5.3.1

    8.1. Example of §5.3.1, Step 1

    Suppose
    (1)
    A = R
    (2)
    When defining f : A R :
$$f(x)=\begin{cases}-1 & x<0\\ 1 & 0\le x<0.5\\ -0.5 & 0.5\le x\end{cases}$$
    (3)
$$(G_r)_{r\in\mathbb{N}}=\big(\{(x,f(x)):-r\le x\le r\}\big)_{r\in\mathbb{N}}$$
Then one example of $C(\sqrt{2}/6,G_1,1)$, using §5.3.1 step 1 (where $G_1=\{(x,f(x)):-1\le x\le 1\}$), is:
$$\Big\{\{(x,f(x)):-1\le x\le\tfrac{\sqrt{2}-6}{6}\},\ \{(x,f(x)):\tfrac{\sqrt{2}-6}{6}\le x\le\tfrac{2\sqrt{2}-6}{6}\},\ \{(x,f(x)):\tfrac{2\sqrt{2}-6}{6}\le x\le\tfrac{3\sqrt{2}-6}{6}\},\ \{(x,f(x)):\tfrac{3\sqrt{2}-6}{6}\le x\le\tfrac{4\sqrt{2}-6}{6}\},\ \{(x,f(x)):\tfrac{4\sqrt{2}-6}{6}\le x\le\tfrac{5\sqrt{2}-6}{6}\},\ \{(x,f(x)):\tfrac{5\sqrt{2}-6}{6}\le x\le\tfrac{6\sqrt{2}-6}{6}\},\ \{(x,f(x)):\tfrac{6\sqrt{2}-6}{6}\le x\le\tfrac{7\sqrt{2}-6}{6}\},\ \{(x,f(x)):\tfrac{7\sqrt{2}-6}{6}\le x\le\tfrac{8\sqrt{2}-6}{6}\},\ \{(x,f(x)):\tfrac{8\sqrt{2}-6}{6}\le x\le\tfrac{9\sqrt{2}-6}{6}\}\Big\}$$
Note, the length of each cover is $\sqrt{2}/6$, and the borders can be approximated as:
$$\{\{(x,f(x)):-1\le x\le-.764\},\ \{(x,f(x)):-.764\le x\le-.528\},\ \{(x,f(x)):-.528\le x\le-.293\},\ \{(x,f(x)):-.293\le x\le-.057\},\ \{(x,f(x)):-.057\le x\le.178\},\ \{(x,f(x)):.178\le x\le.414\},\ \{(x,f(x)):.414\le x\le.65\},\ \{(x,f(x)):.65\le x\le.886\},\ \{(x,f(x)):.886\le x\le 1.121\}\}$$
This is illustrated using alternating orange/black lines of equal length covering $G_1$ (the black vertical lines are the smallest and largest x-coordinates of $G_1$). (Note, the alternating covers in Figure 3 satisfy step (1) of §5.3.1, because the Hausdorff measure in its dimension of each cover is $\sqrt{2}/6$ and there are 9 covers over-covering $G_1$; see Figure 3.)
Figure 3. The alternating orange & black lines are the “covers” and the vertical lines are the boundaries of $G_1$.
Definition 1 (Minimum Covers of Measure $\varepsilon=\sqrt{2}/6$ covering $G_1$). We can compute the minimum number of covers in $C(\sqrt{2}/6,G_1,1)$ using the formula:
$$\left\lceil\mathcal{H}^{\dim_{\mathrm{H}}(G_1)}(G_1)\big/(\sqrt{2}/6)\right\rceil$$
where $\left\lceil\mathcal{H}^{\dim_{\mathrm{H}}(G_1)}(G_1)/(\sqrt{2}/6)\right\rceil=\left\lceil\mathrm{Length}([-1,1])/(\sqrt{2}/6)\right\rceil=\left\lceil 2/(\sqrt{2}/6)\right\rceil=\lceil 6\sqrt{2}\rceil=\lceil 6(1.41\ldots)\rceil=\lceil 8.4\ldots\rceil=9$.
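As a quick sanity check of Definition 1, here is a small sketch (assuming $\varepsilon=\sqrt{2}/6$ and that $G_1$ spans $-1\le x\le 1$) which recomputes the minimum number of covers and the cover endpoints approximated above.

```python
# A small sanity check of Definition 1 (assumption: epsilon = sqrt(2)/6, G_1 spans [-1, 1]).
import math

eps = math.sqrt(2) / 6
length = 2.0                              # H^1(G_1) = Length([-1, 1]) = 2
num_covers = math.ceil(length / eps)
print(num_covers)                         # 9

# Endpoints of the 9 covers starting at x = -1 (compare the approximate borders above).
endpoints = [-1 + k * eps for k in range(num_covers + 1)]
print([round(x, 3) for x in endpoints])   # [-1.0, -0.764, -0.528, ..., 0.886, 1.121]
```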
Note, there are other examples of $C(\sqrt{2}/6,G_1,\omega)$ for different $\omega$. Here is another case, which can be defined as follows (see eq. 144 for comparison):
$$\Big\{\{(x,f(x)):\tfrac{6-9\sqrt{2}}{6}\le x\le\tfrac{6-8\sqrt{2}}{6}\},\ \{(x,f(x)):\tfrac{6-8\sqrt{2}}{6}\le x\le\tfrac{6-7\sqrt{2}}{6}\},\ \{(x,f(x)):\tfrac{6-7\sqrt{2}}{6}\le x\le\tfrac{6-6\sqrt{2}}{6}\},\ \{(x,f(x)):\tfrac{6-6\sqrt{2}}{6}\le x\le\tfrac{6-5\sqrt{2}}{6}\},\ \{(x,f(x)):\tfrac{6-5\sqrt{2}}{6}\le x\le\tfrac{6-4\sqrt{2}}{6}\},\ \{(x,f(x)):\tfrac{6-4\sqrt{2}}{6}\le x\le\tfrac{6-3\sqrt{2}}{6}\},\ \{(x,f(x)):\tfrac{6-3\sqrt{2}}{6}\le x\le\tfrac{6-2\sqrt{2}}{6}\},\ \{(x,f(x)):\tfrac{6-2\sqrt{2}}{6}\le x\le\tfrac{6-\sqrt{2}}{6}\},\ \{(x,f(x)):\tfrac{6-\sqrt{2}}{6}\le x\le 1\}\Big\}$$
In the case of $G_1$, there are uncountably many different covers $C(\sqrt{2}/6,G_1,\omega)$ which can be used. For instance, when $\alpha$ is between $0$ and $(12-9\sqrt{2})/6$ (i.e., $\omega=\alpha-(12-9\sqrt{2})/6+1$), consider:
$$\Big\{\{(x,f(x)):-1+\alpha\le x\le\alpha+\tfrac{\sqrt{2}-6}{6}\},\ \{(x,f(x)):\alpha+\tfrac{\sqrt{2}-6}{6}\le x\le\alpha+\tfrac{2\sqrt{2}-6}{6}\},\ \{(x,f(x)):\alpha+\tfrac{2\sqrt{2}-6}{6}\le x\le\alpha+\tfrac{3\sqrt{2}-6}{6}\},\ \{(x,f(x)):\alpha+\tfrac{3\sqrt{2}-6}{6}\le x\le\alpha+\tfrac{4\sqrt{2}-6}{6}\},\ \{(x,f(x)):\alpha+\tfrac{4\sqrt{2}-6}{6}\le x\le\alpha+\tfrac{5\sqrt{2}-6}{6}\},\ \{(x,f(x)):\alpha+\tfrac{5\sqrt{2}-6}{6}\le x\le\alpha+\tfrac{6\sqrt{2}-6}{6}\},\ \{(x,f(x)):\alpha+\tfrac{6\sqrt{2}-6}{6}\le x\le\alpha+\tfrac{7\sqrt{2}-6}{6}\},\ \{(x,f(x)):\alpha+\tfrac{7\sqrt{2}-6}{6}\le x\le\alpha+\tfrac{8\sqrt{2}-6}{6}\},\ \{(x,f(x)):\alpha+\tfrac{8\sqrt{2}-6}{6}\le x\le\alpha+\tfrac{9\sqrt{2}-6}{6}\}\Big\}$$
When $\alpha=0$ and $\omega=(9\sqrt{2}-6)/6$, we get Figure 4, and when $\alpha=(12-9\sqrt{2})/6$ and $\omega=1$, we get Figure 3.
Figure 4. This is similar to Figure 3, except the start-points of the covers are shifted all the way to the left.

    8.2. Example of §5.3.1, Step 2

Suppose:
    (1)
    A = R
    (2)
When defining $f:A\to\mathbb{R}$:
$$f(x)=\begin{cases}-1 & x<0\\ 1 & 0\le x<0.5\\ -0.5 & 0.5\le x\end{cases}$$
    (3)
$$(G_r)_{r\in\mathbb{N}}=\big(\{(x,f(x)):-r\le x\le r\}\big)_{r\in\mathbb{N}}$$
    (4)
$$G_1=\{(x,f(x)):-1\le x\le 1\}$$
    (5)
$C(\sqrt{2}/6,G_1,1)$, using eq. 145 and Figure 3, which is approximately:
$$\{\{(x,f(x)):-1\le x\le-.764\},\ \{(x,f(x)):-.764\le x\le-.528\},\ \{(x,f(x)):-.528\le x\le-.293\},\ \{(x,f(x)):-.293\le x\le-.057\},\ \{(x,f(x)):-.057\le x\le.178\},\ \{(x,f(x)):.178\le x\le.414\},\ \{(x,f(x)):.414\le x\le.65\},\ \{(x,f(x)):.65\le x\le.886\},\ \{(x,f(x)):.886\le x\le 1.121\}\}$$
Then, an example of $S(C(\sqrt{2}/6,G_1,1),1)$ is:
$$\{(-.9,-1),\,(-.65,-1),\,(-.4,-1),\,(-.2,-1),\,(.1,1),\,(.3,1),\,(.55,-.5),\,(.75,-.5),\,(1,-.5)\}$$
Below, we illustrate the sample: i.e., the set of all blue points, one in each orange and black line of $C(\sqrt{2}/6,G_1,1)$ covering $G_1$:
Figure 5. The blue points are the “sample points”, the alternating black and orange lines are the “covers”, and the red lines are the smallest & largest x-coordinates of $G_1$.
Note, there are multiple samples that can be taken, as long as one sample point is taken from each cover in $C(\sqrt{2}/6,G_1,1)$.
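For illustration, here is a minimal sketch of this sampling step. The covers are taken to be the nine x-intervals from Figure 3, $f$ is the step function above, and the sample point of each cover is its midpoint (clamped to $[-1,1]$); these choices are assumptions for the sketch only, since any one point per cover is allowed.

```python
# A minimal sketch of step 2 (assumptions: covers are the x-intervals of Figure 3,
# f is the three-piece step function above, and we pick each cover's midpoint,
# clamped to [-1, 1] so the point stays on G_1; any one point per cover would do).
import math

def f(x: float) -> float:
    if x < 0:
        return -1.0
    if x < 0.5:
        return 1.0
    return -0.5

eps = math.sqrt(2) / 6
covers = [(-1 + k * eps, -1 + (k + 1) * eps) for k in range(9)]

sample = []
for a, b in covers:
    x = min(max((a + b) / 2, -1.0), 1.0)   # midpoint, clamped to the domain of G_1
    sample.append((x, f(x)))

print([(round(x, 3), y) for x, y in sample])
# e.g. [(-0.882, -1.0), (-0.646, -1.0), ..., (0.768, -0.5), (1.0, -0.5)]
```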

    8.3. Example of §5.3.1, Step 3

    Suppose
    (1)
    A = R
    (2)
    When defining f : A R :
$$f(x)=\begin{cases}-1 & x<0\\ 1 & 0\le x<0.5\\ -0.5 & 0.5\le x\end{cases}$$
    (3)
$$(G_r)_{r\in\mathbb{N}}=\big(\{(x,f(x)):-r\le x\le r\}\big)_{r\in\mathbb{N}}$$
    (4)
$$G_1=\{(x,f(x)):-1\le x\le 1\}$$
    (5)
$C(\sqrt{2}/6,G_1,1)$, using eq. 145 and Figure 3, is approximately:
$$\{\{(x,f(x)):-1\le x\le-.764\},\ \{(x,f(x)):-.764\le x\le-.528\},\ \{(x,f(x)):-.528\le x\le-.293\},\ \{(x,f(x)):-.293\le x\le-.057\},\ \{(x,f(x)):-.057\le x\le.178\},\ \{(x,f(x)):.178\le x\le.414\},\ \{(x,f(x)):.414\le x\le.65\},\ \{(x,f(x)):.65\le x\le.886\},\ \{(x,f(x)):.886\le x\le 1.121\}\}$$
    (6)
$S(C(\sqrt{2}/6,G_1,1),1)$, using eq. 150, is:
$$\{(-.9,-1),\,(-.65,-1),\,(-.4,-1),\,(-.2,-1),\,(.1,1),\,(.3,1),\,(.55,-.5),\,(.75,-.5),\,(1,-.5)\}$$
    Therefore, consider the following process:

    8.3.1. Step 3a

If $S(C(\sqrt{2}/6,G_1,1),1)$ is:
$$\{(-.9,-1),\,(-.65,-1),\,(-.4,-1),\,(-.2,-1),\,(.1,1),\,(.3,1),\,(.55,-.5),\,(.75,-.5),\,(1,-.5)\}$$
suppose $x_0=(-.9,-1)$. Note the following:
    (1)
$x_1=(-.65,-1)$ is the next point in the “pathway” since it is the point in $S(C(\sqrt{2}/6,G_1,1),1)$ with the smallest 2-d Euclidean distance to $x_0$, other than $x_0$ itself.
    (2)
$x_2=(-.4,-1)$ is the third point since it is the point in $S(C(\sqrt{2}/6,G_1,1),1)$ with the smallest 2-d Euclidean distance to $x_1$, other than $x_0$ and $x_1$.
    (3)
$x_3=(-.2,-1)$ is the fourth point since it is the point in $S(C(\sqrt{2}/6,G_1,1),1)$ with the smallest 2-d Euclidean distance to $x_2$, other than $x_0$, $x_1$, and $x_2$.
    (4)
we continue this process, so the “pathway” of $S(C(\sqrt{2}/6,G_1,1),1)$ is:
$$(-.9,-1)\to(-.65,-1)\to(-.4,-1)\to(-.2,-1)\to(.55,-.5)\to(.75,-.5)\to(1,-.5)\to(.3,1)\to(.1,1)$$
    Note 7. 
If more than one point has the minimum 2-d Euclidean distance from $x_0$, $x_1$, $x_2$, etc., take all potential pathways: e.g., using the sample in eq. 154, if $x_0=(-.65,-1)$, then since $(-.9,-1)$ and $(-.4,-1)$ both have the smallest Euclidean distance to $(-.65,-1)$, take two pathways:
$$(-.65,-1)\to(-.9,-1)\to(-.4,-1)\to(-.2,-1)\to(.55,-.5)\to(.75,-.5)\to(1,-.5)\to(.3,1)\to(.1,1)$$
and also:
$$(-.65,-1)\to(-.4,-1)\to(-.2,-1)\to(-.9,-1)\to(.55,-.5)\to(.75,-.5)\to(1,-.5)\to(.3,1)\to(.1,1)$$
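A minimal sketch of this nearest-neighbour construction, including the branching rule of Note 7, is given below. The sample is the nine points recovered above; the helper names (`dist`, `pathways`) and the tie tolerance are assumptions of the sketch.

```python
# A minimal sketch of step 3a: build every "pathway" by repeatedly moving to the
# nearest unvisited sample point, branching whenever two or more unvisited points
# are tied for the minimum distance (Note 7). The tolerance handles float round-off.
import math

def dist(p, q):
    return math.dist(p, q)  # 2-d Euclidean distance

def pathways(start, points, tol=1e-12):
    """Return all nearest-neighbour pathways through `points` beginning at `start`."""
    paths = [[start]]
    for _ in range(len(points) - 1):
        new_paths = []
        for path in paths:
            remaining = [p for p in points if p not in path]
            d_min = min(dist(path[-1], p) for p in remaining)
            for p in remaining:                       # branch on ties
                if abs(dist(path[-1], p) - d_min) <= tol:
                    new_paths.append(path + [p])
        paths = new_paths
    return paths

sample = [(-0.9, -1), (-0.65, -1), (-0.4, -1), (-0.2, -1), (0.1, 1),
          (0.3, 1), (0.55, -0.5), (0.75, -0.5), (1, -0.5)]

for path in pathways((-0.9, -1), sample):
    print(path)                                # the single pathway in eq. 155
print(len(pathways((-0.65, -1), sample)))      # 2, the two pathways of Note 7
```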

    8.3.2. Step 3b

Next, take the length of all line segments in each pathway. In other words, suppose $d(P,Q)$ is the $n$-dimensional Euclidean distance between points $P,Q\in\mathbb{R}^n$. Using the pathway in eq. 155, we want:
$$\{d((-.9,-1),(-.65,-1)),\ d((-.65,-1),(-.4,-1)),\ d((-.4,-1),(-.2,-1)),\ d((-.2,-1),(.55,-.5)),\ d((.55,-.5),(.75,-.5)),\ d((.75,-.5),(1,-.5)),\ d((1,-.5),(.3,1)),\ d((.3,1),(.1,1))\}$$
These distances can be approximated as:
    { . 25 , . 25 , . 2 , . 901389 , . 2 , . 25 , 1.655295 , . 2 }
Also, we see the outliers [23] are $.901389$ and $1.655295$ (notice that the outliers become more prominent for $\varepsilon\le\sqrt{2}/6$). Therefore, remove $.901389$ and $1.655295$ from our set of lengths:
    { . 25 , . 25 , . 2 , . 2 , . 25 , . 2 }
This is illustrated in Figure 6:
Figure 6. The black arrows are the segments of the “pathway” whose lengths aren’t outliers; the lengths of the red arrows are outliers.
Hence, when $x_0=(-.9,-1)$, using §5.3.1 step 3b & eq. 154, we note:
$$L((-.9,-1),S(C(\sqrt{2}/6,G_1,1),1))=\{.25,.25,.2,.2,.25,.2\}$$
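Here is a minimal sketch of step 3b. The paper does not fix a specific outlier test (it only cites [23]), so the sketch assumes the common 1.5 × IQR rule; under that assumption it flags exactly the two lengths removed above.

```python
# A minimal sketch of step 3b (assumption: outliers are flagged with the usual
# 1.5 * IQR rule; the paper only cites a generic notion of outlier [23]).
import math
import statistics

pathway = [(-0.9, -1), (-0.65, -1), (-0.4, -1), (-0.2, -1), (0.55, -0.5),
           (0.75, -0.5), (1, -0.5), (0.3, 1), (0.1, 1)]

lengths = [math.dist(p, q) for p, q in zip(pathway, pathway[1:])]
q1, _, q3 = statistics.quantiles(lengths, n=4, method="inclusive")
iqr = q3 - q1
kept = [d for d in lengths if q1 - 1.5 * iqr <= d <= q3 + 1.5 * iqr]

print([round(d, 4) for d in lengths])  # [0.25, 0.25, 0.2, 0.9014, 0.2, 0.25, 1.6553, 0.2]
print([round(d, 2) for d in kept])     # [0.25, 0.25, 0.2, 0.2, 0.25, 0.2]
```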

    8.3.3. Step 3c

    To convert the set of distances in eq. 157 into a probability distribution, we take:
$$\sum_{x\in\{.25,.25,.2,.2,.25,.2\}}x=.25+.25+.2+.2+.25+.2=1.35$$
Then divide each element in $\{.25,.25,.2,.2,.25,.2\}$ by $1.35$:
$$\{.25/1.35,\ .25/1.35,\ .2/1.35,\ .2/1.35,\ .25/1.35,\ .2/1.35\}$$
    which gives us the probability distribution:
    { 5 / 27 , 5 / 27 , 4 / 27 , 4 / 27 , 5 / 27 , 4 / 27 }
    Hence,
    P ( L ( ( . 9 , 1 ) , S ( C ( 2 / 6 , G 1 , 1 ) , 1 ) ) ) = { 5 / 27 , 5 / 27 , 4 / 27 , 4 / 27 , 5 / 27 , 4 / 27 }
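A minimal sketch of this normalisation, using exact fractions so the probabilities come out as the 5/27’s and 4/27’s quoted above:

```python
# A minimal sketch of step 3c: normalise the retained lengths (eq. 157) into a
# probability distribution by dividing each length by their total, using exact fractions.
from fractions import Fraction

lengths = [Fraction(1, 4), Fraction(1, 4), Fraction(1, 5),
           Fraction(1, 5), Fraction(1, 4), Fraction(1, 5)]   # .25, .25, .2, .2, .25, .2
total = sum(lengths)                     # 27/20 = 1.35
probabilities = [d / total for d in lengths]
print(total, probabilities)              # 27/20 and the fractions 5/27, 5/27, 4/27, 4/27, 5/27, 4/27
assert sum(probabilities) == 1
```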

    8.3.4. Step 3d

Take the Shannon entropy of eq. 159:
$$\begin{aligned}E(P(L((-.9,-1),S(C(\sqrt{2}/6,G_1,1),1))))&=-\sum_{x\in P(L((-.9,-1),S(C(\sqrt{2}/6,G_1,1),1)))}x\log_2 x\\&=-\sum_{x\in\{5/27,5/27,4/27,4/27,5/27,4/27\}}x\log_2 x\\&=-(5/27)\log_2(5/27)-(5/27)\log_2(5/27)-(4/27)\log_2(4/27)\\&\qquad-(4/27)\log_2(4/27)-(5/27)\log_2(5/27)-(4/27)\log_2(4/27)\\&=-(15/27)\log_2(5/27)-(12/27)\log_2(4/27)\\&\approx 2.57604\end{aligned}$$
We shorten $E(P(L((-.9,-1),S(C(\sqrt{2}/6,G_1,1),1))))$ to $E(L((-.9,-1),S(C(\sqrt{2}/6,G_1,1),1)))$, giving us:
$$E(L((-.9,-1),S(C(\sqrt{2}/6,G_1,1),1)))\approx 2.57604$$
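A minimal sketch of step 3d, recomputing the Shannon entropy of the distribution in eq. 159:

```python
# A minimal sketch of step 3d: base-2 Shannon entropy of the distribution in eq. 159.
import math
from fractions import Fraction

probabilities = [Fraction(5, 27), Fraction(5, 27), Fraction(4, 27),
                 Fraction(4, 27), Fraction(5, 27), Fraction(4, 27)]
entropy = -sum(p * math.log2(p) for p in probabilities)
print(entropy)   # approximately 2.57604
```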

    8.3.5. Step 3e

Take the entropy, w.r.t. all pathways, of the sample:
$$\{(-.9,-1),\,(-.65,-1),\,(-.4,-1),\,(-.2,-1),\,(.1,1),\,(.3,1),\,(.55,-.5),\,(.75,-.5),\,(1,-.5)\}$$
    In other words, we’ll compute:
$$E(L(S(C(\sqrt{2}/6,G_1,1),1)))=\sup_{x_0\in S(C(\sqrt{2}/6,G_1,1),1)}E(L(x_0,S(C(\sqrt{2}/6,G_1,1),1)))$$
We do this by repeating §8.3.1–§8.3.4 for different $x_0\in S(C(\sqrt{2}/6,G_1,1),1)$ (for the equation with multiple values, see note 7):
$$E(L((-.9,-1),S(C(\sqrt{2}/6,G_1,1),1)))\approx 2.57604$$
$$E(L((-.65,-1),S(C(\sqrt{2}/6,G_1,1),1)))\approx 2.3131,\ 2.377604$$
$$E(L((-.4,-1),S(C(\sqrt{2}/6,G_1,1),1)))\approx 2.3131$$
$$E(L((-.2,-1),S(C(\sqrt{2}/6,G_1,1),1)))\approx 2.57604$$
$$E(L((.1,1),S(C(\sqrt{2}/6,G_1,1),1)))\approx 1.86094$$
$$E(L((.3,1),S(C(\sqrt{2}/6,G_1,1),1)))\approx 1.85289$$
$$E(L((.55,-.5),S(C(\sqrt{2}/6,G_1,1),1)))\approx 2.08327$$
$$E(L((.75,-.5),S(C(\sqrt{2}/6,G_1,1),1)))\approx 2.31185$$
$$E(L((1,-.5),S(C(\sqrt{2}/6,G_1,1),1)))\approx 2.2622$$
Hence, since the largest value out of eqs. 162–170 is $2.57604$:
$$E(L(S(C(\sqrt{2}/6,G_1,1),1)))=\sup_{x_0\in S(C(\sqrt{2}/6,G_1,1),1)}E(L(x_0,S(C(\sqrt{2}/6,G_1,1),1)))\approx 2.57604$$

    References

    1. Krishnan, B. Finding a Published Paper Which Meaningfully Averages The Most Pathalogical Functions (V2), 2024. https://www.researchgate.net/publication/384467315_Finding_a_Published_Research_Paper_Which_Meaningfully_Averages_The_Most_Pathalogical_Functions_v2. [CrossRef]
    2. Bernardi, C.; Rainaldi, C. Everywhere surjections and related topics: Examples and counterexamples. Le Matematiche 2018, 73, 71–88. Available online: https://www.researchgate.net/publication/325625887_Everywhere_surjections_and_related_topics_Examples_and_counterexamples.
    3. (https://mathoverflow.net/users/87856/arbuja), A. Is there an explicit, everywhere surjective f:R→R whose graph has zero Hausdorff measure in its dimension? MathOverflow, [https://mathoverflow.net/q/476471]. https://mathoverflow.net/q/476471.
    4. (https://math.stackexchange.com/users/413/jdh), J. Uncountable sets of Hausdorff dimension zero. Mathematics Stack Exchange, [https://math.stackexchange.com/q/73551]. https://math.stackexchange.com/q/73551.
    5. (https://mathoverflow.net/users/11009/pablo shmerkin), P.S. Hausdorff dimension of R x X. MathOverflow, [https://mathoverflow.net/q/189274]. https://mathoverflow.net/q/189274.
    6. Xie, T.; Zhou, S. On a class of fractal functions with graph Hausdorff dimension 2. Chaos, Solitons & Fractals 2007, 32, 1625–1630. Available online: https://www.sciencedirect.com/science/article/pii/S0960077906000129. [CrossRef]
7. MFH. Prove the following limits of a sequence of sets? Matchmaticians, 2023. https://matchmaticians.com/questions/hinaeh.
    8. OEIS Foundation Inc.. A011371. The On-Line Encyclopedia of Integer Sequences, 1999. https://oeis.org/A011371.
    9. OEIS Foundation Inc.. A099957. The On-Line Encyclopedia of Integer Sequences, 2005. https://oeis.org/A099957.
    10. OEIS Foundation Inc.. A002088. The On-Line Encyclopedia of Integer Sequences, 1991. https://oeis.org/A002088.
11. Ott, W.; Yorke, J.A. Prevalence. Bulletin of the American Mathematical Society 2005, 42, 263–290. Available online: https://www.ams.org/journals/bull/2005-42-03/S0273-0979-05-01060-8/S0273-0979-05-01060-8.pdf. [CrossRef]
    12. Achour, R.; Li, Z.; Selmi, B.; Wang, T. A multifractal formalism for new general fractal measures. Chaos, Solitons & Fractals 2024, 181, 114655. Available online: https://www.sciencedirect.com/science/article/abs/pii/S0960077924002066.
    13. Bedford, T.; Fisher, A.M. Analogues of the Lebesgue density theorem for fractal sets of reals and integers. Proceedings of the London Mathematical Society 1992, 3, 95–124. Available online: https://www.ime.usp.br/~afisher/ps/Analogues.pdf. [CrossRef]
    14. Bedford, T.; Fisher, A.M. Ratio geometry, rigidity and the scenery process for hyperbolic Cantor sets. Ergodic Theory and Dynamical Systems 1997, 17, 531–564. Available online: https://arxiv.org/pdf/math/9405217. [CrossRef]
    15. Sipser, M. Introduction to the Theory of Computation, 3 ed.; Cengage Learning, 2012; pp. 275–322.
    16. Krishnan, B. Bharath Krishnan’s ResearchGate Profile. https://www.researchgate.net/profile/Bharath-Krishnan-4.
    17. Barański, K.; Gutman, Y.; Śpiewak, A. Prediction of dynamical systems from time-delayed measurements with self-intersections. Journal de Mathématiques Pures et Appliquées 2024, 186, 103–149. Available online: https://www.sciencedirect.com/science/article/pii/S0021782424000345. [CrossRef]
18. Caetano, A.M.; Chandler-Wilde, S.N.; Gibbs, A.; Hewett, D.P.; Moiola, A. A Hausdorff-measure boundary element method for acoustic scattering by fractal screens. Numerische Mathematik 2024, 156, 463–532. [CrossRef]
    19. (https://math.stackexchange.com/users/5887/sbf), S. Convergence of functions with different domain. Mathematics Stack Exchange, [https://math.stackexchange.com/q/1063261]. https://math.stackexchange.com/q/1063261.
20. Gray, R.M. Entropy and Information Theory, 2nd ed.; Springer: New York, NY, USA, 2011; pp. 61–95. https://ee.stanford.edu/~gray/it.pdf. [CrossRef]
    21. ydd. Finding the asymptotic rate of growth of a table of value? Mathematica Stack Exchange, [https://mathematica.stackexchange.com/a/307050/34171]. https://mathematica.stackexchange.com/a/307050/34171.
    22. ydd. How to find a closed form for this pattern (if it exists)? Mathematica Stack Exchange, [https://mathematica.stackexchange.com/a/306951/34171]. https://mathematica.stackexchange.com/a/306951/34171.
    23. John, R. Outlier. https://en.m.wikipedia.org/wiki/Outlier.