Preprint
Article

All Physical Information Is Discretely Connected From the Beginning and All Geometrical Appearance Is a Delayed Statistical Consequence

A peer-reviewed article of this preprint also exists. This version is not peer-reviewed.

Submitted: 23 June 2023; Posted: 25 June 2023
Abstract
Information is physically measurable as a selection from a set of possibilities, the domain of information. This defines the term "information". The domain of the information must be reproducibly known to all parties beforehand. As a practical consequence, digital information exchange can be made globally efficient, interoperable and searchable to a large extent by the online definition of application-optimized domains of information. There are even more far-reaching consequences for physics. The purpose of this article is to present prerequisites and possibilities for a physical approach that is consistent with the precise definition of information. This concerns not only the discretization of the sets of possible experimental results, but also the order of their definition along time. The earlier a domain of information was defined, the more frequently it is accessed or compared with. The geometrical appearance of our space is apparently a delayed statistical consequence of a very frequent connection with the common primary domain of information.
Subject: Physical Sciences - Theoretical Physics

1. Introduction: The domain of information

Information is a fundamental part of our existence; it shapes our lives. Scientific disciplines such as information science and computer science deal with information processing. Nevertheless, the term "information" has not yet been precisely defined (which is also addressed on the Internet [1] and in the literature [2]). Strictly speaking, the lack of an exact definition of information is a shortcoming, because elementary information is by no means fuzzy: physically, the transport of elementary information is done by energy quanta (this will be discussed further in Section 3.7) or technically by "bits". Each bit means a selection from a set of 2 possibilities. Bits are used to encode numbers, which select from ordered sets that can also be very large. Thus the conditions are given to start as sharply as is usual in mathematics: with set theory. We know from mathematics that it is possible to build very complex and well-founded approaches starting from ordered sets. A systematic and efficient approach is thus made possible.
Digital and all other physical information is exchanged as the reproducible result of a (physical) measurement and represents no more and no less than a reproducible selection from the ordered set of all possible measurement results. This set of possibilities is the "domain" of information. The common order is necessary for reproducible selection. This basic principle is in short:
Information is a reproducible selection from its domain.   (1)
The definition and order of all elements selectable from the domain (i.e. the domain of the information) must be reproducibly known by all those exchanging the information, i.e. after a reproducible sequence of elementary steps, the information exchanged must be identical for all. From this follows also the finiteness of the domain (within finite time).
In short: the domain of information must be uniformly defined for all and is inseparable from the information. This principle is pervasive and substantially underestimated.
The relevance of the exact definition (1) of information is actually obvious in computer science. We could apply it systematically and efficiently by using the Internet to define the domains of digital information globally in a uniform way for all users. Digital information is always a sequence of numbers, i.e. a selection from ordered sets. For example, we can start with sequences of "quantitative data" and optimize them for the application of interest. This has already been explained and described in detail in several publications [3,4,5,6,7,8,9,10,11,12,13,14,15]. The resulting exchanged binary information is called "domain vector" or "DV" [3,4,5,6,7,8,9,10] and has the structure:
DV: UL plus number sequence   (2)
UL" means "Uniform Locator" and is an efficient global pointer to the machine-readable online definition of the number sequence. The domain of the number sequence is thus automatically defined uniformly worldwide (as the domain of the transported information). In this way, interoperable and searchable digital information could be uniformly defined online worldwide. It would be optimizable for the respective application and precisely comparable and searchable. This is also demonstrated online. [16]. With respect to the digital application, essential conclusions are thus obvious: The data structure (2) can efficiently encode all possible types of digital information and make them globally exchangeable, comparable and searchable in an energy-saving manner.
Despite considerable technical, scientific and economic importance, the (uniform) online definition of digital information (number sequences) has not yet been introduced. The alternative is the repeated non-uniform local definition in all possible formats. This is much less efficient, mostly not comparable and thus incompatible ("non-interoperable"), and it causes an enormous amount of unnecessary redundant work in programming and especially in application. Therefore, it would be appropriate to teach and also systematically deepen the much more efficient uniform global (online) definition of digital information.
The systematic digital utilization of definition (1) is only one of many possible applications. A consistent analysis of (1) really goes to the heart of the matter and can influence the worldview. Not only does it show that our frame of reference must be completely connected a priori, it also shows combinatorial details. In [3] it was already briefly mentioned that (1) holds without exception. Since "information" is of central importance to us, so is its definition (1). Information, and with (1) also its domain, even shape our consciousness. Here, the access to (comparison with) the domain is mostly fast and unconscious. Otherwise it would not be usable. For example, language vocabulary, as a common domain of linguistic information exchange, must be quickly available or "familiar" to all communicators.
In general, it turns out that domains learned earlier usually allow faster access. Our brain learned the vocabulary of our body's nerve impulses early on and now uses this knowledge unconsciously and quickly. Later, it has used it to learn the signals of our environment. This also opens up possible applications in psychology - that would be another area that could be systematically expanded.
The basic principle (1) is superordinate. Wherever information plays a role (and that covers a great many subject areas), applications can arise.
Strictly speaking, fundamental physics is the original science of information, because the result of every physical experiment is information and likewise means a selection from the set of possible experimental results. This set is just the domain of the resulting information according to (1). In Section 4.7 of [3] it was already mentioned that the (implicit) knowledge of the domain of information (1) and the resulting combinatorics is a far-reaching topic for research in physics.
This article shall therefore show prerequisites and first possibilities towards a physical approach, which is compatible with the exact definition (1) of information.

2. Material and method: mathematical tools for the analysis of the domain of information

Until today, the exact definition (1) of information has not been a focus of attention in physics. It has not been systematically explored why (together with relativity) there is always a common order of "time" for the exchange of information. This topic can therefore be extended and can lead to surprising conclusions.
Naturally, the definition (1) of information first concerns theoretical physics, especially quantum mechanics. The consistent application of (1) is very far-reaching, unusual and new. Therefore, there is a danger that the arguments put forward here will not be taken seriously. On the other hand, it quickly becomes clear that silence does not provide a solution either, because a convincing concept (even the definition of time) must be in accordance with (1). However, fundamental concepts commonly used in theoretical physics start from other foundations than (1). Starting from the usual geometrical view, classical theoretical physics was introduced first. When this was no longer sufficient, quantum mechanics was developed. Here, interesting and important relations were uncovered, and it was analyzed more exactly how measured information influences the entire future (world-wide) reality. Hilbert spaces were also introduced, which are complete and separable [17], i.e. they contain a countable dense set and have a countable orthonormal basis. The term "countable" already implies a systematic constructability of the set. But a priori (time-independent) infinite sets are still used, for example for the description of location and momentum, as was common in classical geometrical concepts. These compromises were chosen because they were helpful in practice for a quick explanation of experimental results. The geometric view still plays an essential role, despite the quantization of the exchanged energy (ultimately information) that is detectable in experiments. Note that complex, hierarchical combinatorics and multidimensional, ever-increasing sets are not the problem. The problem lies in concepts (e.g. sets) which allow a time-independent "existing" infinity. Continuous sets allow far too much freedom here, without an equivalent in reality. From the point of view of information theory, a priori infinite sets and thus also the "real numbers" (routinely used in geometry) are completely incompatible with (1). Every finite interval of a continuous set with length different from 0 contains infinitely many elements, independent of time. To "know" them, one would need an infinite amount of (ultimately undefined) information; such a thing is not real. The real numbers are helpful for many practical argumentations and are also needed here for bridging to existing constructs with analytic functions. But we must not forget: a priori infinite sets are only constructs and are not bijectively mappable to something real. If we want to get much further in understanding reality, it is not enough that our mathematical model approximates experimental results. It must be discrete, and it must be guaranteed that the size of the domains of the measurable information is finite within finite measuring time. It is possible (and in view of the enormous size of the visible universe also plausible) that the size of the domains of information grows without limits, but only together with the time. (Despite relativistic time dilation, macroscopic time reversal is not measurable, i.e. there is a common increase of time.)
We therefore need a common concept of time increment that connects us for the transport and exchange of information, also because of (1). Here a concept published years ago [18] can help, which, starting from the (measurable) "time dilation", provides a finite and quantitative approach to proper time as a "sum of return probabilities". There are several mathematical and structural connections and peculiarities here which offer a starting point for the information-theoretical approach (1) but are still unconsidered in present approaches of theoretical physics. Bridging to common concepts of theoretical physics is possible by assuming that the common "set of possibilities" or domain of information per proper time grows only together with the (common) time. The elementary steps for this can occur with unboundedly increasing frequency (in particular the access to the basic primary domain, cf. Section 3.7); this requires a new combinatorial approach to the concept of time. This is possible, and it can be shown that essential current computational models (e.g. geometric functions like sin(x) and cos(x) and also their generalization by the (matrix) exponential function, cf. Section 3.6) can be derived from it. In the following it will be introduced and discussed step by step.
The present time separates between future and past time. Future means "no previous information". We first consider a basal experiment as simple as possible, which delivers exactly 1 bit of information without prior information, i.e. a selection from one of 2 possibilities, e.g. drawing one of two equally probable balls "1" or "-1" from an urn. For possibility "1" we move one step to the right, for possibility "-1" to the left. Let k be the position to the right of the origin after n ≥ 0 steps. The integer k thus increases by 1 when drawing ball "1" and decreases by 1 when drawing ball "-1". This results in a so-called Bernoulli Random Walk or BRW with binomial distribution of the path possibilities.
Thus, n and k are integers with n≥0 and -n ≤ k ≤ n. The well-known Pascal's triangle or binomial distribution (Table 1) shows, as a function of the number of steps n, the numbers of path possibilities towards position k. These correspond to the binomial coefficients:
$$\binom{n}{\frac{n+k}{2}} = \frac{n!}{\left(\frac{n+k}{2}\right)!\,\left(\frac{n-k}{2}\right)!} \qquad (3)$$
The column k=0 plays a special role. It represents the numbers of return possibilities to the origin k=0. Without prior information (unknown future), steps to the right and to the left have equal probability p=1/2. The column k=0 is also the center of symmetry. Since this choice of "coordinate system" (n, k) has many advantages and reveals interconnections, we define the resulting symmetric probability distribution as a function:
$$Q_0(n,k) = \frac{n!}{\left(\frac{n+k}{2}\right)!\,\left(\frac{n-k}{2}\right)!}\left(\frac{1}{2}\right)^{n} \qquad (4)$$
The function Q0 represents probabilities in the symmetric BRW. It holds:
$$Q_0(n+1,k) = \frac{Q_0(n,k-1) + Q_0(n,k+1)}{2} \qquad (5)$$
Equation ( 5 ) shows the algorithm of the symmetric BRW. We can also define the general function Q with the general basic algorithm:
$$Q(n+1,k) = a_{nk}\,Q(n,k-1) + b_{nk}\,Q(n,k+1) \qquad (6)$$
Here Q(n, k)=0 for |k|>n. In the case $0 \le a_{nk} \le 1$ and $b_{nk} = 1 - a_{nk}$ we can consider $a_{nk}$ and $b_{nk}$ as probabilities. In the general case, $a_{nk}$ and $b_{nk}$ are constant factors, which can be called probability amplitudes for steps to the right and left. These can also be complex numbers or, more generally, matrices or linear operators, as is common in quantum mechanics. Obviously, ( 6 ) then also applies to all linear combinations or superpositions:
$$\sum_{n=0}^{n_{max}}\;\sum_{k=-\frac{n+1}{2}}^{\frac{n+1}{2}} Q(n+1,2k) \;=\; \sum_{n=0}^{n_{max}}\;\sum_{k=-\frac{n+1}{2}}^{\frac{n+1}{2}} \Big[\, a_{nk}\,Q(n,2k-1) + b_{nk}\,Q(n,2k+1) \,\Big] \qquad (7)$$
The summation can also be performed only over a partial range, e.g. only over one step (i.e. fixed n, e.g. n=nmax) or over step sequences with different frequencies, depending on the proper time ( 10 ). Essential is the uniform definition of n and k and their basal synchronization, so that $a_{nk}$ and $b_{nk}$ have constant meaning until the summation is finished. The general function Q(n, k) from ( 6 ) is defined only together with the factors $a_{nk}$ and $b_{nk}$; these must be fixed beforehand. The laws which follow from the algorithm ( 6 ) are valid for all linear combinations of Q(n, k) and Q0(n, k). Certain combinations ( 6 ) and ( 7 ) in analytic models with continuous sets of numbers result in derivatives and integrals. In the following, for the sake of clarity, simple definitions will be chosen, e.g. $a_{nk} = b_{nk} = 1/2$. In this constant symmetric case Q(n, k) = Q0(n, k) holds. The function Q0(n, k) already has many interesting properties and is broadly combinable ( 7 ). This may be a reason for the universality of the Schrödinger equation, cf. Section 3.5.
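As a numerical illustration (not part of the original derivation), the following Python sketch builds the probability triangle with the algorithm ( 5 ) and checks it against the closed form ( 4 ) and the induction result ( 8 ) below; all helper names are illustrative.

```python
# Numerical sketch of the symmetric Bernoulli random walk (BRW): Q0(n, k) from
# the closed form (4) and from the recursion (5).
from math import comb

def Q0(n: int, k: int) -> float:
    """Closed form (4): probability of being at position k after n steps."""
    if abs(k) > n or (n + k) % 2 != 0:
        return 0.0
    return comb(n, (n + k) // 2) / 2 ** n

def Q0_recursive(n_max: int):
    """Build the probability triangle row by row with the algorithm (5)."""
    rows = {0: {0: 1.0}}
    for n in range(n_max):
        prev, new = rows[n], {}
        for k in range(-(n + 1), n + 2):
            new[k] = (prev.get(k - 1, 0.0) + prev.get(k + 1, 0.0)) / 2
        rows[n + 1] = new
    return rows

rows = Q0_recursive(20)
# Recursion (5) reproduces the closed form (4) ...
assert all(abs(rows[n][k] - Q0(n, k)) < 1e-12 for n in range(21) for k in rows[n])
# ... and the return probabilities satisfy the induction result (8):
n = 8
assert abs(sum(Q0(2 * m, 0) for m in range(n + 1)) - (2 * n + 1) * Q0(2 * n, 0)) < 1e-12
print("checks for (4), (5) and (8) passed")
```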
By induction we obtain [18]:
$$\sum_{m=0}^{n} Q_0(2m,0) = (2n+1)\,Q_0(2n,0) \qquad (8)$$
The physical relevance is first shown in a context which has already been described and derived in detail [18,19]: we do an experiment in an inertial frame of reference. Let c be the velocity of light, v the velocity of a clock relative to the observer in an inertial frame, and x=v/c the ratio to the velocity of light. According to experimentally measurable relativity, the function f0(x) ( 9 ) gives the time dilation:
$$f_0(x) = \frac{1}{\sqrt{1-x^2}} \qquad (9)$$
That means, the moving clock goes slower than the observer's clock by the factor f0(x). The Taylor series expansion of this function f0(x) is:
$$f_0(x) = \frac{1}{\sqrt{1-x^2}} = \sum_{n=0}^{\infty}\binom{2n}{n}\left(\frac{x}{2}\right)^{2n} = \sum_{n=0}^{\infty}\frac{(2n)!}{n!\,n!}\left(\frac{x}{2}\right)^{2n} = 1 + \frac{1}{2}x^{2} + \frac{6}{16}x^{4} + \frac{20}{64}x^{6} + \dots = \sum_{n=0}^{\infty} Q_0(2n,0)\,x^{2n} \qquad (10)$$
The last expression in ( 10 ) illustrates the relationship of time dilation with the return probabilities, as a comparison with the column k=0 of Table 1 shows. Time dilation corresponds to the sum of the return probabilities of a BRW with probability p=(1-√(1-x²))/2 for a step to one side and p=(1+√(1-x²))/2 for a step to the other side. "Time" thus shows itself to be proportional to the number of return events. The symmetric case p=1/2 for both sides is particularly interesting. This case occurs for x=1 or v=c and is thus the rule for electromagnetic interaction. Electromagnetic interaction transports information, and here we need an exact information-theoretic calculation; but precisely in this typical case the analytic expression ( 9 ) for the time dilation becomes infinite and is not usable (an a priori infinity, thus not conforming to reality). On the other hand, the sum ( 10 ) can also be taken for x=1 over a finite (increasing) number of steps and can thus remain finite within finite time. Therefore, in the following we assume that the approach of a BRW (progressing with each increase of time) actually provides deeper insights into the combinatorics of reality, and we will get confirmation for this.
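The following short Python sketch (added here only as an illustration, with illustrative names) compares the analytic factor ( 9 ) with the partial sums of ( 10 ): for x<1 the partial sums converge to f0(x), while for x=1 the partial sum over finitely many steps stays finite although ( 9 ) diverges.

```python
# Partial sums of (10) vs. the analytic time-dilation factor f0(x) of (9).
from math import comb, sqrt

def Q0_return(n: int) -> float:
    """Return probability Q0(2n, 0) of a symmetric BRW."""
    return comb(2 * n, n) / 4 ** n

def partial_sum(x: float, n_terms: int) -> float:
    """Partial sum of (10): sum over Q0(2n, 0) * x^(2n)."""
    return sum(Q0_return(n) * x ** (2 * n) for n in range(n_terms))

x = 0.6
print(1 / sqrt(1 - x ** 2))   # analytic f0(0.6) = 1.25
print(partial_sum(x, 200))    # partial sum approaches 1.25
print(partial_sum(1.0, 200))  # finite value (~15.96) although f0(1) is infinite
```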
Also important are linear superpositions or combinations of BRWs and their derivatives: as shown in ( 7 ), BRWs can also be superimposed with different (linear) prefactors, for example with different signs (due to a conservation law). Table 2 shows a simple example of a superposition of two BRWs with opposite signs, which starts in n=1 with the value 1 in k=-1 and with -1 in k=1. As in Table 1, the value in column k of row n+1 results from the addition of the values in columns k-1 and k+1 of row n. Because of the antisymmetric start and ( 5 ), the value 0 results in column k=0 for all n, and antisymmetric values with opposite signs result for k<0 and k>0. The cancellation in column k=0 means that the total sum of the absolute values is no longer 1 but becomes smaller and smaller, which has several effects, also on the renormalization.
We can also consider such a superposition as in Table 2 as the first discrete derivative along k. We define the discrete derivative QD(d, n, k) of degree d by QD(0, n, k) = Q0(n, k) and for n≥d≥1
$$Q_D(d,n,k) = \frac{Q_D(d-1,\,n-1,\,k+1) - Q_D(d-1,\,n-1,\,k-1)}{2} \qquad (11)$$
Here n ≥ d is necessary to have enough values at all to form finite differences of d-th order. This only becomes apparent in the discrete approach. For abbreviation let Q1(n, k)=QD(1,n,k) and Q2(n, k)=QD(2,n,k). The discrete derivatives ( 11 ) defined in this way can be calculated. In particular
$$Q_1(n,k) = \frac{Q_0(n-1,k+1) - Q_0(n-1,k-1)}{2} = -\frac{k}{n}\,Q_0(n,k) \qquad (12)$$
$$Q_2(n,k) = \frac{Q_1(n-1,k+1) - Q_1(n-1,k-1)}{2} = \frac{k^{2}-n}{n(n-1)}\,Q_0(n,k) \qquad (13)$$
The derivatives, such as ( 12 ) and ( 13 ), yield polynomials as prefactors. Similarly, Hermite polynomials result when using the exponential function as generating function [20].
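The prefactor identities ( 12 ) and ( 13 ), with the sign conventions as reconstructed above, can be checked numerically; the following Python sketch is nothing more than such a consistency check.

```python
# Discrete derivatives (11) of Q0 compared with the closed-form prefactors
# -k/n and (k^2 - n)/(n(n-1)) of (12) and (13).
from math import comb

def Q0(n, k):
    if abs(k) > n or (n + k) % 2:
        return 0.0
    return comb(n, (n + k) // 2) / 2 ** n

def QD(d, n, k):
    """Discrete derivative (11) of degree d (requires n >= d)."""
    if d == 0:
        return Q0(n, k)
    return (QD(d - 1, n - 1, k + 1) - QD(d - 1, n - 1, k - 1)) / 2

for n in range(2, 12):
    for k in range(-n, n + 1, 2):
        assert abs(QD(1, n, k) - (-k / n) * Q0(n, k)) < 1e-12                      # (12)
        assert abs(QD(2, n, k) - (k * k - n) / (n * (n - 1)) * Q0(n, k)) < 1e-12   # (13)
print("prefactor identities (12) and (13) verified")
```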
Table 2 shows, up to n=6, the values of the first discrete derivative with respect to k, i.e. the values of Q1(n, k). Because of the exact antisymmetry around the middle column k=0, the amounts "flowing out" in rows 2n-1 at k=±1 correspond, according to ( 13 ), just to the 2nd derivative Q2(2n, 0). These show up as coefficients of the Taylor series expansion ( 14 ) of the function 1/f0(x):
$$\frac{1}{f_0(x)} = \sqrt{1-x^2} = 1 - \sum_{n=1}^{\infty}\frac{1}{2n-1}\binom{2n}{n}\left(\frac{x}{2}\right)^{2n} = 1 + \sum_{n=1}^{\infty} Q_2(2n,0)\,x^{2n} \qquad (14)$$
According to algorithm ( 5 ), a new row n+1 is created in each step (as in Table 1) and each value in position k is created by adding the neighboring values of the previous row n from position k-1 and k+1. Starting from this elementary simple algorithm, deeper insights are possible, also regarding a growth of the domain of information together with time. Therefore, we will now address some possibilities for this and show first results.

3. Results: New approaches and implications

3.1. Eigentime as finite sum

In the case of equal probability p=1/2 for steps to the right and to the left, the sum ( 10 ) of the return probabilities in k=0 grows without limit with the number of steps n. With the help of Stirling's formula $n! \approx \sqrt{2\pi n}\,\left(\frac{n}{e}\right)^{n}$ and ( 8 ), it holds for large n [18]:
$$Q_0(2n,0) \approx \frac{\sqrt{4\pi n}\,\left(\frac{2n}{e}\right)^{2n}}{\left[\sqrt{2\pi n}\,\left(\frac{n}{e}\right)^{n}\right]^{2}}\,\frac{1}{2^{2n}} = \frac{\sqrt{4\pi n}\,\left(\frac{2n}{e}\right)^{2n}}{2\pi n\,\left(\frac{n}{e}\right)^{2n}\,2^{2n}} = \frac{\sqrt{4\pi n}}{2\pi n} = \frac{1}{\sqrt{\pi n}} \qquad (15)$$
With ( 8 ) it follows that
$$\sum_{m=0}^{n} Q_0(2m,0) \approx 2\sqrt{\frac{n}{\pi}} \qquad (16)$$
Thus we have for the time dilation at v=c the expression ( 16 ), which depends only on the number of steps of a symmetric BRW.
The function f0(x) ( 9 ), its expansion ( 10 ) and its reciprocal ( 14 ) occur often in geometry and physics. With ( 16 ) we have a closed form also for the frequent case x=1 (resp. v=c). This is compatible with the definition of information (1) if we assume that the sum of the return probabilities ( 16 ) (and thus also the possible number of return events) at a given time is finite, because it grows only together with the time.
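A short numerical check of the asymptotic behavior ( 16 ) can be done in a few lines of Python; the sketch below is purely illustrative.

```python
# The sum of return probabilities of a symmetric BRW grows like 2*sqrt(n/pi), cf. (16).
from math import comb, sqrt, pi

def Q0_return(n):
    return comb(2 * n, n) / 4 ** n

for n in (10, 100, 1000):
    exact = sum(Q0_return(m) for m in range(n + 1))
    print(n, exact, 2 * sqrt(n / pi))   # the two columns approach each other
```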

3.2. Estimation of the basal number of steps with access to the primary domain in this universe

The earlier a domain is defined, the more frequently it is accessed. This can be explained by the fact that domains defined earlier are also used for the definition of domains defined later. The access to the "primary domain", defined earliest at the time of origin of our universe, therefore occurs maximally frequently and fast in (the part of the totality that we call) this universe, i.e. at every progress of our measurable time (this progress occurs for all of us at every exchange of energy quanta or photons, see also Section 3.7). If we assume that this common maximum measurable time progress also occurs proportionally to the sum of the return probabilities of a primary BRW (which started in the symmetry center at the time of origin of our universe), we can attempt a first estimate of the "maximum" number of steps nmax that have occurred so far. For this we assume that the maximum measurable expansion of the universe is proportional to the total expansion of the primary central BRW. Because of its large step number, the maximum probabilities of this BRW are meanwhile concentrated in only a small "pointed" range, since the standard deviation of a BRW grows only proportionally to the square root of the step number n. What physical interactions might give a clue to this?
Interactions with limited ranges, such as the weak and strong interactions, come into question. The strong interaction is the strongest fundamental force in nature. In this rough estimate, let us first assume that the range of the strong interaction of about $10^{-15}$ m [21] corresponds to the standard deviation $\sqrt{n_{max}}$ of the primary BRW (connecting in our observable universe) and that the estimated "diameter" of about $8.8\times 10^{26}$ m of this universe [22] corresponds to the total extent or step number of the primary BRW. Then we get
$$\frac{n_{max}}{\sqrt{n_{max}}} = \sqrt{n_{max}} = \frac{8.8\times 10^{26}\,\mathrm{m}}{10^{-15}\,\mathrm{m}} = 8.8\times 10^{41}\,; \qquad n_{max} = \left(8.8\times 10^{41}\right)^{2} \approx 7.744\times 10^{83} \qquad (17)$$
In view of this rough estimate it is remarkable for comparison that the number of photons of the extragalactic background light (EBL) [22] has been estimated at $4\times 10^{84}$, thus of a similar order of magnitude as nmax ( 17 ). If we (roughly) distribute the number nmax over the estimated age of this universe of $13.8\times 10^{9}$ years or $4.35\times 10^{17}$ seconds, we get the following step frequency fmax:
$$f_{max} = \frac{n_{max}}{4.35\times 10^{17}\,\mathrm{s}} \approx 1.78\times 10^{66}\ \mathrm{s}^{-1} \qquad (18)$$
This frequency is very high. In comparison, the speed of light c is slow. From one step to the next, the light covers only the following distance smin:
$$s_{min} = \frac{c}{f_{max}} = \frac{3\times 10^{8}\,\mathrm{m/s}}{1.78\times 10^{66}\,\mathrm{s}^{-1}} \approx 1.68\times 10^{-58}\,\mathrm{m} \qquad (19)$$
Thus, the range of the strong interaction or the diameter of an atomic nucleus of about $10^{-15}$ m is about $10^{43}$ times larger than smin. Obviously, the gradation smin is much too fine to be measurable, giving the impression of a "continuum".
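The arithmetic of this rough estimate can be reproduced in a few lines; the following Python sketch only repeats the numbers quoted in the text for ( 17 )-( 19 ).

```python
# Rough estimate of section 3.2. The input values (range of the strong
# interaction, diameter and age of the observable universe) are the rounded
# literature values quoted in the text.
diameter_universe = 8.8e26   # m
range_strong      = 1e-15    # m, taken as the standard deviation sqrt(n_max)
age_universe      = 4.35e17  # s  (about 13.8 * 10^9 years)
c                 = 3e8      # m/s

sqrt_n_max = diameter_universe / range_strong   # 8.8e41
n_max      = sqrt_n_max ** 2                    # (17)  ~7.7e83
f_max      = n_max / age_universe               # (18)  ~1.8e66 steps per second
s_min      = c / f_max                          # (19)  ~1.7e-58 m per step
print(f"n_max = {n_max:.3e}, f_max = {f_max:.2e}/s, s_min = {s_min:.2e} m")
print(f"strong-interaction range / s_min = {range_strong / s_min:.1e}")  # ~10^43
```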
So even the rough calculation ( 17 ) leads to very high frequencies, compared with which our perceptible time, the speed of light and therefore also our maximum information speed are only slow. This means that the information of all physical (external) measurements arrives clearly delayed and therefore more or less from the past, depending on the proper time ( 16 ). The number of possibilities can grow extremely fast along a comparatively rare proper time (one starting later, under additional preconditions), because of the fast increase fmax ( 18 ) of the global number of steps ( 17 ).
It should be noted at this point that the standard deviation of a simple BRW is a simplification. For example, the superposition of two BRWs with opposite signs, starting from an original center k=0 as in Table 2, is plausible due to conservation of energy. But also in this case (which also requires other renormalization) the resulting expansion and the distance of the extrema have similar magnitudes.
How can this now be fitted into the framework of quantum mechanics?

3.3. First bridge to quantum mechanics

Especially in quantum mechanics, the "information" of measurement results is shown to be crucial for future measurement results - thus ultimately for physics. The experimental results thus prove that we need a precise information-theoretical approach. Also, quantum physical experiments are particularly suitable for analyzing the combinatorics of information, because the set of possible measurement results of quantum physical experiments and thus the domain of generated information are clearly defined and manageable.
In quantum mechanics, physical states are described by complex-valued vectors. The column vectors are called Kets and the corresponding complex conjugate row vectors Bras. Eigenstates of the system are basis states. These are each described by orthonormal basis vectors. This can result in high-dimensional state vectors already in the microscopic quantum physical domain. In current approaches, continuous result sets are assumed for, among others, location and momentum, resulting in infinite-dimensional state spaces containing all possible state vectors. In an (exact) information-theoretic approach, this must be replaced by finite-dimensional spaces. However, their dimensionality, i.e. the number of possibilities, can grow extremely fast along proper time and can be synchronized for information exchange within the global step number ( 17 ). Even the rough calculation ( 17 ) leads to very high frequencies, compared with which our perceptible time, the speed of light and therefore also our information speed are very slow. This means that the information of physical measurements mostly comes clearly from the past. This illustrates that a lot can happen during the measurement.
In ( 10 ), proper time was represented as the sum of return probabilities of a BRW. Each BRW up to the return in k=0 in line 2n can be decomposed into 2 BRWs in succession, each up to line n. For such "outward and return paths" there are several possibilities per return. Thus we get
$$\sum_{k=-n/2}^{n/2} Q_0(n,2k)^{2} = Q_0(2n,0) \qquad (20)$$
Every progress of time is coupled with such return events according to ( 10 ). Progress of time is also coupled with every physical measurement. From this point of view it is less surprising that the probability of every quantum mechanical measurement results from the product of a probability amplitude ("way there") with its complex conjugate ("way back") like in ( 20 ). Both are prerequisite for complete measurement and also for time progress.
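Identity ( 20 ) can be verified directly; in the Python sketch below the sum runs over all positions reachable in n steps (positions of the wrong parity contribute zero), which corresponds to the sum over 2k in ( 20 ).

```python
# Decomposing a return path of 2n steps into an "outward" and a "return" part
# of n steps each gives the sum over squared probabilities, cf. (20).
from math import comb

def Q0(n, k):
    if abs(k) > n or (n + k) % 2:
        return 0.0
    return comb(n, (n + k) // 2) / 2 ** n

for n in range(1, 15):
    lhs = sum(Q0(n, j) ** 2 for j in range(-n, n + 1))  # unreachable j contribute 0
    assert abs(lhs - Q0(2 * n, 0)) < 1e-12
print("identity (20) verified")
```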
This view can also show a first bridge to geometry as a statistical consequence:

3.4. About statistics and geometry

We could consider a BRW from start to return (over positive k) as coupled with a mirrored BRW on the opposite side (over negative k) for reasons of symmetry. However, the exact information about the coupling is not necessarily available, perhaps only as an average value resulting from a conservation law. This may give the impression of a modified probability distribution with 2 independent BRWs, one over k<0 to Q0(n-1,-1) and one over k>0 to Q0(n-1,1). From there, the two seemingly independent BRWs each go to k=0 with probability 1/2. The probability Q0AND(n, 0) for this is then:
$$Q_{0\mathrm{AND}}(n,0) = \frac{Q_0(n-1,-1)}{2}\cdot\frac{Q_0(n-1,+1)}{2} = \left(\frac{Q_0(n,0)}{2}\right)^{2} \approx \left(\frac{1}{2}\sqrt{\frac{2}{\pi n}}\right)^{2} = \frac{1}{2\pi n} \qquad (21)$$
The meeting probability Q0AND(n, 0) of two simultaneously starting independent BRWs after n steps at their common starting point approaches 1/(2πn) for large n, which corresponds to the reciprocal of the circumference of a circle with radius n. This can show a relatively simple connection between statistics and geometry. If both BRWs start at the same time and are exactly mirrored (due to a conservation law, for symmetry reasons), the probability of return is simply that of a single BRW, i.e. given by Q0(n, 0). If, on the other hand, the BRWs start later and decide seemingly independently of each other ("AND" conjunction, ( 21 )), the probability that they meet after n steps at the starting point k=0 is the geometric probability Q0AND(n, 0) and thus corresponds to the probability of meeting a segment of length 1 on a circle with radius n.
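The limit behavior of ( 21 ) can again be checked numerically; the following sketch is only an illustration.

```python
# Meeting probability Q0AND(n, 0) of (21) vs. 1/(2*pi*n), the reciprocal of the
# circumference of a circle with radius n.
from math import comb, pi

def Q0(n, k):
    if abs(k) > n or (n + k) % 2:
        return 0.0
    return comb(n, (n + k) // 2) / 2 ** n

for n in (10, 100, 1000, 10000):          # even n, so that Q0(n-1, +-1) != 0
    q_and = (Q0(n - 1, -1) / 2) * (Q0(n - 1, 1) / 2)
    print(n, q_and, 1 / (2 * pi * n))     # the two columns converge
```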
To get an idea of the order of magnitude in metric units, we recall that, because of the slow speed of light compared to the global step frequency fmax ( 18 ), a lot can happen ($10^{43}$ steps of length smin ( 19 ) just to cross an atomic nucleus) until geometric (macroscopic) distances are measurable. There is always a delay in the perception of the geometric appearance, e.g. the circumference of the circle. We can then say: "The larger the macroscopic radius or distance n·smin of the circle, the more delayed is our perception of the circle, and the more possibilities 2πn·smin (for positioning) on the circle there are."
Even if this reasoning starts as a two-dimensional approach at first (since a circle is a two-dimensional object), it fits to the stepwise propagation of electromagnetic fields, which transport information, and in further propagation steps include the 3rd dimension (cf. section 3.8).
Because of the uniform algorithm ( 5 ), ( 20 ) can be written analogously also for superpositions. There are further relationships, e.g.
$$\sum_{k=-n/2}^{n/2} Q_1(n,2k)^{2} = -Q_1(2n-1,1) = -Q_2(2n,0) \qquad (22)$$
The squares on the left side of equation ( 22 ) show no direct linear superposition. This equation holds because BRWs can be chained and because of ( 12 ) and ( 13 ). Further chaining is possible also along time steps ( 10 ).
Several opportunities for further research arise. For example, equations ( 20 ) and ( 22 ) have analogies to quantum mechanical calculations of integrals and sums over squares of probability amplitudes, respectively.

3.5. Bridges to quantum mechanics: Schrödinger equation

For clarity, we assume here a non-relativistic particle in one dimension. The Schrödinger equation for this is [23]
$$i\hbar\,\frac{\partial}{\partial t}\Psi(t,x) = \left[-\frac{\hbar^{2}}{2m}\frac{\partial^{2}}{\partial x^{2}} + V(t,x)\right]\Psi(t,x) \qquad (23)$$
Here Ψ(t, x) denotes the wave function or the quantum mechanical state, and t and x are the variables for time and location.
A function that yields a valid quantum mechanical probability amplitude as a function of location and time must also satisfy the Schrödinger equation. Therefore, the Schrödinger equation is a central tool of quantum mechanics.
However, this equation is also a differential equation on continuous sets of numbers and as such cannot be directly adopted in an information-theoretic approach. It must, in order to be compatible with (1), be translated into an equation with finite differences. This is indeed possible. To this end, according to ( 10 ), we identify the increase (by 1) in the number of steps n of the primary BRW (cf. Section 3.2) as the minimal condition for increase ∂t in time, and the change (by ±1) in the location coordinate k of the BRW as the minimal condition for a change ∂x in location coordinate. Application of the algorithm ( 5 ) yields
$$Q_0(n+2,k) = \frac{Q_0(n+1,k-1) + Q_0(n+1,k+1)}{2}$$
$$Q_0(n+2,k) = \frac{Q_0(n,k+2) + Q_0(n,k) + Q_0(n,k) + Q_0(n,k-2)}{2\cdot 2}$$
$$Q_0(n+2,k) - Q_0(n,k) = \frac{\big[Q_0(n,k+2) - Q_0(n,k)\big] - \big[Q_0(n,k) - Q_0(n,k-2)\big]}{2\cdot 2} \qquad (24)$$
The left-hand side of ( 24 ) represents a finite difference along the number of steps n, corresponding to the time derivative ∂/∂t on the left-hand side of the Schrödinger equation ( 23 ), and the right-hand side represents a finite difference of 2nd order along the location coordinate k, corresponding to the 2nd derivative ∂²/∂x² on the right-hand side of the Schrödinger equation ( 23 ).
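The finite-difference relation ( 24 ) (as reconstructed above) can be verified directly from the closed form ( 4 ); the following Python sketch does exactly that and nothing more.

```python
# For the symmetric BRW, the difference along the step number n equals one
# quarter of the second-order difference along the position coordinate k,
# in analogy to the first time derivative vs. second space derivative in (23).
from math import comb

def Q0(n, k):
    if abs(k) > n or (n + k) % 2:
        return 0.0
    return comb(n, (n + k) // 2) / 2 ** n

for n in range(0, 20):
    for k in range(-n, n + 1):
        lhs = Q0(n + 2, k) - Q0(n, k)
        rhs = ((Q0(n, k + 2) - Q0(n, k)) - (Q0(n, k) - Q0(n, k - 2))) / 4
        assert abs(lhs - rhs) < 1e-12
print("finite-difference relation (24) verified")
```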
Since the derivation ( 24 ) uses only the algorithm of the BRW ( 5 ), the same argumentation works also for all superpositions or linear combinations ( 7 ), of course also for superpositions with prefactors with opposite sign (due to physical conservation laws, see Q1-triangle, Table 2). Prefactors analogous to the Schrödinger equation are also necessary for finite differences when embedded in a larger multidimensional system. An analogy to the potential term V(t, x) in ( 23 ) becomes also necessary for finite differences ( 24 ) when embedded in a larger system. Thereby symmetries can become more apparent. For example, the potential of gravity may be a consequence of a conservation law, i.e., a global symmetry.
Validity at all superpositions can explain the universal validity of the Schrödinger equation, but ultimately also requires the synchronization of finite differences via a primary domain (cf. Section 3.7), which will be addressed also in the discussion.

3.6. Bridges to quantum mechanics: (Matrix) Exponential Function as Binomial Expansion

The complex exponential function is used as an algebraic tool in all areas of quantum mechanics. By its Taylor series expansion we have also immediately a reference to familiar functions of geometry like sin(x) and cos(x). But this infinite time independent series expansion does not fit to an (exact) information theoretic approach. For this we need an algebraic approach, whose branching depth and complexity increases discretely together with the physical time.
We have this approach in the steps of a BRW ( 10 ). The binomial coefficients (Table 1), which reflect the number of path possibilities in the BRW, can also be used to approximate the exponential function. In fact, the exponential function can be replaced by a finite binomial expansion of arbitrary precision. Let $k, n \in \mathbb{N}_0$, k ≤ n, $x \in \mathbb{C}$. We define:
$$bnk(n,k,x) := \frac{n!}{k!\,(n-k)!}\left(\frac{x}{n}\right)^{k} \qquad\text{and}\qquad bn(n,x) := \left(1+\frac{x}{n}\right)^{n} = \sum_{k=0}^{n} bnk(n,k,x) \qquad (25)$$
Due to $\lim_{n\to\infty}\frac{n!}{n^{k}\,(n-k)!} = 1$ we get
$$\lim_{n\to\infty} bn(n,x) = \lim_{n\to\infty}\left(1+\frac{x}{n}\right)^{n} = \lim_{n\to\infty}\sum_{k=0}^{n}\frac{x^{k}}{k!}\,\frac{n!}{n^{k}\,(n-k)!} = 1 + x + \frac{x^{2}}{2!} + \frac{x^{3}}{3!} + \dots \qquad (26)$$
The right hand side of ( 26 ) corresponds to the series expansion of the exponential function. So we get
$$\lim_{n\to\infty} bn(n,x) = \lim_{n\to\infty}\left(1+\frac{x}{n}\right)^{n} = 1 + x + \frac{x^{2}}{2!} + \frac{x^{3}}{3!} + \dots = \sum_{k=0}^{\infty}\frac{x^{k}}{k!} = e^{x} \qquad (27)$$
The right-hand expressions in ( 26 ) and ( 27 ) serve as a bridge to frequently used limits of calculus and, because of $e^{ix} = \cos(x) + i\,\sin(x)$, also to geometric functions. The binomial expansion ( 25 ) can approximate these with arbitrary accuracy, if "only" n becomes arbitrarily large. However, this can be done in a time-conformal and thus reality-conformal way if we assume that the increase of time is proportional to the sum of the probabilities of the return events of a BRW ( 10 ), which are proportional to binomial coefficients. Therefore, the expression ( 25 ) can better show the real combinatorics. The considerations for calculating nmax in Section 3.2 and the estimate ( 17 ) show that such reality-conforming n can become extremely large.
We initially assumed $x \in \mathbb{C}$. However, this can be replaced and extended by matrices. Indeed, for illustrating important combinatorics, in particular multidimensional time-conformal combinatorics (cf. Section 3.8), matrices are more suitable than the commonly used complex numbers. In accordance with a time-conformal development starting from ( 10 ), ( 25 ) can also be defined for matrices:
$$bnk(n,k,A) = \frac{n!}{k!\,(n-k)!}\left(\frac{A}{n}\right)^{k} \qquad (28)$$
$$bn(n,A) = \left(I+\frac{A}{n}\right)^{n} = \sum_{k=0}^{n} bnk(n,k,A) \qquad (29)$$
Here A is a square matrix and I is the unit matrix with the same dimensionality as A. Since the unit matrix I commutes with A, the series expansion (29) is uniquely defined.
The exponential function is also defined for matrices, but is approximated by its Taylor series in a different order. Here it is recalled that the complex exponential function and also the matrix exponential function can be replaced by a finite binomial expansion (29) and that this can provide completely new insights into the time conformal combinatorial nature of physical processes.
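As an illustration of the matrix version ( 29 ), the following Python sketch (using an arbitrarily chosen rotation generator, not a matrix taken from the article) shows how (I + A/n)^n approaches the matrix exponential for large n.

```python
# Finite binomial expansion (29) for matrices: (I + A/n)^n approaches expm(A).
import numpy as np

def bn(n: int, A: np.ndarray) -> np.ndarray:
    """Finite binomial expansion (29): (I + A/n)^n."""
    return np.linalg.matrix_power(np.eye(A.shape[0]) + A / n, n)

A = np.array([[0.0, -1.0],
              [1.0,  0.0]]) * (np.pi / 2)   # generator of a 90-degree rotation (illustrative choice)

for n in (10, 1000, 100000):
    print(n, np.round(bn(n, A), 4))
# For large n the result approaches the rotation matrix [[0, -1], [1, 0]],
# i.e. the matrix exponential of A with cos/sin entries, illustrating
# e^{ix} = cos x + i sin x in matrix form.
```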
It should be noted here that in the definition of the functions bn(n, x) and bn(n, A) in ( 25 ) and ( 29 ), another subdivision can also be chosen, such as:
$$bn(n,x) := \left(1+\frac{x}{n}\right)^{n} = \left(\frac{1}{2} + \left(\frac{1}{2}+\frac{x}{n}\right)\right)^{n} \qquad (30)$$
$$bn(n,A) := \left(I+\frac{A}{n}\right)^{n} = \left(\frac{1}{2}\,I + \left(\frac{1}{2}\,I+\frac{A}{n}\right)\right)^{n} \qquad (31)$$
We can consider the left summand 1/2 (resp. ½ I) inside the parentheses of ( 30 ) and ( 31 ) as the decided (past) part of a BRW step, and the right summand as the undecided (future) part.
For large n, such as nmax, the right-hand sides of ( 30 ) and ( 31 ) result approximately in a symmetric distribution of the binomial coefficients as in a symmetric BRW ( 4 ). To replace the "approximately" with "exactly", there are even more combinatorial details to consider. Instead of e.g. ( 31 ) we could write for the exact consideration of a conservation law
$$I = I^{\,n} = \left(I + \frac{A}{n} - \frac{A}{n}\right)^{n} = \left(\left(\frac{1}{2}\,I + \frac{A}{n}\right) + \left(\frac{1}{2}\,I - \frac{A}{n}\right)\right)^{n}$$
Due to the quantization and conservation of angular momentum, it seems interesting to use for A, for example, a 3D rotation matrix (π/2 rotation about one of the 3 spatial directions, see also Table 3) and to investigate the combinatorics in more detail.
Table 3 (caption fragment): discretization of the Maxwell vacuum equations ( 36 ) and ( 37 ) with time reference; the derivatives d/dx, d/dy, d/dz are abbreviated as dx, dy, dz and shown simplified as steps ordered along one dimension k.

3.7. The primary domain is prerequisite for the time-ordered exchange of energy and information and for maintaining the conservation laws.

The term "primary domain" introduced above denotes the most upstream minimal common set of possibilities in this (perceptible or measurable) universe. Since a set of possibilities (domain) can only be defined under access to existing information, i.e. information from the past, the access to domains of information occurs the more frequently, the further upstream (in the past) these were defined - according to (1) as selection from a (further upstream) domain. Thus, the most upstream "primary domain" is maximally frequently used, but its size is minimal. It must be sufficient only for the determinability of an order.
An approach for further consideration can be given: Progressive time implies energy flow and measurable change (of information), which in turn requires access to the primary domain. The access or reference to the primary domain (ordered or "synchronized" along the progressing time) is thus a precondition for our common ordered time and necessary at every energy flow. Thus, the primary domain can be described in more detail.
For this, we examine the basics of the exchange of information between distinguishable localizations. We exchange information "outside" as free energy. Free energy "expresses" itself per proper time by momentum to the "outside". A precise information-theoretical consideration shows that the sign of this momentum requires a synchronization along the increase of time. This concerns the electromagnetic quanta (photons), which are our elementary information carriers. Their propagation direction decides the direction of the information transport. There is actually a decisive degree of freedom for this in the definition of the propagation vector of the energy, i.e. in the definition of the Poynting vector Se [19]:
$$\vec{S}_e = \frac{1}{\mu_0}\,\vec{E}\times\vec{B} \qquad (32)$$
Here B denotes the magnetic flux density, μ0 the magnetic field constant and E the electric field strength. The cross product E × B defines a vector perpendicular to E and B, whose direction or sign depends on the order of the components E1, E2, E3 of E and B1, B2, B3 of B according to
$$\vec{S}_e\,\mu_0 = \vec{E}\times\vec{B} = \sum_{i,j,k=1}^{3}\varepsilon_{ijk}\,E_i\,B_j\,\vec{e}_k \qquad (33)$$
where $\vec{e}_1, \vec{e}_2, \vec{e}_3$ denote the basis unit vectors of a right-handed Cartesian coordinate system and εijk denotes the Levi-Civita symbol:
$$\varepsilon_{ijk} = \begin{cases} +1, & \text{if } (i,j,k) \text{ is an even permutation of } (1,2,3)\\ -1, & \text{if } (i,j,k) \text{ is an odd permutation of } (1,2,3)\\ \;\;\,0, & \text{if at least two of the indices } i,j,k \text{ are equal} \end{cases} \qquad (34)$$
Each of the 3 indices i, j, k represents one of the 3 orthogonal directions $\vec{e}_1, \vec{e}_2, \vec{e}_3$. How should physical systems localized at different locations immediately and reproducibly "know" (as a common past) whether a permutation of the 3 orthogonal directions is "even" or "odd"? This question decides the propagation direction of the energy and is therefore also decisive for all information which we exchange!
In ( 33 ) and ( 34 ) the sign of εijk determines the sign of the propagation direction of every energy exchange. This suggests that the selection of one of the 2 possible orders of a common set of 3 possibilities takes place at each access to the primary domain of our universe.
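The following Python sketch only illustrates this degree of freedom: it evaluates ( 33 ) with the Levi-Civita symbol ( 34 ) and shows that reversing the agreed order of the axes flips the sign of the resulting propagation direction. It is an illustration, not a claim about the underlying physical mechanism.

```python
# Sign of E x B via the Levi-Civita symbol (34), cf. (33).
import numpy as np
from itertools import permutations

def levi_civita():
    """Levi-Civita symbol (34) as a 3x3x3 array."""
    eps = np.zeros((3, 3, 3))
    for i, j, k in permutations(range(3)):
        sign, p = 1, [i, j, k]
        for a in range(3):                 # parity via inversion count
            for b in range(a + 1, 3):
                if p[a] > p[b]:
                    sign = -sign
        eps[i, j, k] = sign
    return eps

eps = levi_civita()
E = np.array([1.0, 0.0, 0.0])              # field along e1
B = np.array([0.0, 1.0, 0.0])              # flux density along e2

S = np.einsum('ijk,i,j->k', eps, E, B)     # components of E x B as in (33)
print(S)                                   # [0. 0. 1.]
print(np.cross(E, B))                      # same result with numpy's cross product
# Reversing the agreed order of two axes turns even permutations into odd ones,
# which is equivalent to negating eps, and flips the sign of every component of S:
print(np.einsum('ijk,i,j->k', -eps, E, B)) # [0. 0. -1.]
```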
It is obvious that the sign of the energy flow is important to guarantee the (basic) conservation law of energy. There are further conservation laws in physics, which must be considered. For the guarantee of the conservation laws, information from the respective past must be available. How can this be stored, and how can it be guaranteed that the access can take place sufficiently fast?
One possibility is the exactly antisymmetric subsequent start of a new BRW within previously started BRWs, as illustrated in Table 2. This respects the principle of "neutrality of subsequent changes", since the total sum (from the previous point of view) is preserved. This means that after each step in the primary BRW, the total sum over the conserved quantity must be equal to 0. A BRW with an antisymmetric start satisfies this condition. The example in Table 2 illustrates this; it holds $\sum_{k=-n/2}^{n/2} Q_1(n,2k) = 0$. According to this, the information is most quickly retrievable in the center (middle) between the starting points, i.e. in k=0 in Table 2. In k≠0 it appears as asymmetry.
For the consistent information-theoretical approach we need such pre-information for the exact synchronization at every exchange of energy. This must be guaranteed from the beginning of time. In the framework of a primary conservation law of energy, it is plausible that the global total sum of probability amplitudes over each row n is equal to 0, as in Table 2 for Q1(n, k). Then we could "simply" assume that one of the two sides k>0 or k<0 was chosen in an initial decision. This initial decision has maximum priority, because starting from "rest mass" it defines the propagation direction of "energy" per time progress resp. increasing number of steps nmax. The "probability" or access frequency in the context of a global calculation is therefore maximal. (This fits with the finding that the earlier the domain of information is learned, the faster it is available on average later.)
Approaches to further research:
Since it is about the earliest decision or symmetry breaking for our universe, this could be decisive in the context of a maximum measurable symmetry. In the context of the CPT symmetry this could decide about the sign of the charge (and therefore predominance of matter over antimatter) at usual time progress resp. enlargement of the maximum number of steps.
The choice of a side like e.g. k>0 and the prefactor ( 12 ) in the Q1 distribution could cause geometrical asymmetries and have further effects. It could be expressed as potential, also macroscopically, for example as gravitational potential.

3.8. Bridge to electromagnetism, Maxwell's equations, perspective

In section 3.7 we already noticed that the electromagnetic laws play a decisive role for the propagation direction of our basal information carriers resp. photons. To make these laws compatible with the basal definition of information (1), it is first necessary to discretize them. We start with the Maxwell Vacuum Equations with time reference. It holds [26]
$$\nabla\times\vec{B} = \frac{1}{c}\,\frac{\partial\vec{E}}{\partial t} \qquad\text{and}\qquad \nabla\times\vec{E} = -\frac{1}{c}\,\frac{\partial\vec{B}}{\partial t} \qquad (35)$$
Here E denotes the electric field vector with components Ex, Ey, Ez, B the magnetic field vector with components Bx, By, Bz, c the speed of light and t the time. For clarification of the combinatorics, we use a notation without units. Written out in components, we get from ( 35 ) with c=1:
$$\frac{\partial\vec{E}}{\partial t} = \vec{e}_x\left(\frac{\partial B_z}{\partial y} - \frac{\partial B_y}{\partial z}\right) + \vec{e}_y\left(\frac{\partial B_x}{\partial z} - \frac{\partial B_z}{\partial x}\right) + \vec{e}_z\left(\frac{\partial B_y}{\partial x} - \frac{\partial B_x}{\partial y}\right) \qquad (36)$$
$$\frac{\partial\vec{B}}{\partial t} = \vec{e}_x\left(-\frac{\partial E_z}{\partial y} + \frac{\partial E_y}{\partial z}\right) + \vec{e}_y\left(-\frac{\partial E_x}{\partial z} + \frac{\partial E_z}{\partial x}\right) + \vec{e}_z\left(-\frac{\partial E_y}{\partial x} + \frac{\partial E_x}{\partial y}\right) \qquad (37)$$
Under suitable conditions, we can consider the expressions in the parentheses each as the 2 alternatives of a BRW step and thus discretize them. This becomes clearer in the form of a table (Table 3).
As with a BRW, the increase in time dt is associated with the increase in the number of steps n. The derivatives d/dx, d/dy, d/dz are linear operators. This can be transferred to the basic algorithm of the general BRW. We already noticed that $a_{nk}$ and $b_{nk}$ in ( 6 ) can be matrices or linear operators. If conditions are given that allow ordering along one dimension, a clear transfer to a BRW along one dimension k is possible, as in Table 3. Different initial values lead to different further development, also with different effects of renormalization. This and further multidimensional considerations could be the subject of further research. Computer simulations [25] could also help here, including considerations of the energy propagation resp. the Poynting vector (Section 3.7).
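As a familiar reference point for such a discretization (not the BRW-ordered arrangement of Table 3 itself), the following Python sketch steps the one-dimensional vacuum curl equations ( 36 ) and ( 37 ) with simple finite differences; grid size, step count and the initial pulse are arbitrary illustrative choices.

```python
# Minimal one-dimensional finite-difference (leapfrog) update of the vacuum
# curl equations with c = 1 and unit steps, for a plane wave with components
# Ey and Bz depending only on x and t.
import numpy as np

nx, nt = 200, 80
Ey = np.zeros(nx)
Bz = np.zeros(nx)
Ey[100] = 1.0                        # initial localized excitation

for t in range(nt):
    # dBz/dt = -dEy/dx   (component form of (37) with c = 1)
    Bz[:-1] -= Ey[1:] - Ey[:-1]
    # dEy/dt = -dBz/dx   (component form of (36) with c = 1)
    Ey[1:] -= Bz[1:] - Bz[:-1]

print(np.argmax(np.abs(Ey)))         # the pulse has propagated away from x = 100
```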

4. Discussion

The introduction first points out the need for a precise definition of information and then introduces definition (1), which is the focus of this article. It is mentioned that the digital application (2) of (1) has great potential, as this enables the systematic implementation of more and more precisely comparable and globally searchable digital information. This has been addressed in previous publications [3,4,5,6,7,8,9,10]. In this context it was mentioned [3] that the definition (1) also has fundamental consequences for physics.
The preparation of a physical experiment determines the set (domain) of its possible results, and the result of any physical experiment is information, i.e. a selection from the previously determined set of possibilities or domain. This just corresponds to the definition (1) of information. Thus, fundamental physics is actually the first science about information and should consistently apply definition (1) of information.
There is a lot of literature on information-theoretic approaches, also in physics. However, apart from the author's own publications [3,4,5,6,7,8,9,10], there seem to be no other publications with an (exact) information-theoretic approach resulting from definition (1). In the last publication [3], which delves into the application of (1) in computer science, it was already pointed out in Section 4.7 that the application of (1) in physics would also be an important topic for further research. This article is intended to provide suggestions in this regard.
The domain of information presupposed in (1) must always be (ordered and) reproducibly known before the information exchange. That means it is finite, because after a reproducible (thus also finite) sequence of elementary steps each element of the domain must have the same meaning for all (represent identical information). Each element of the domain can only be defined with the help of information, which means selection from a previously defined domain. So we also need a discrete (and in the direction of the past even finite) concept of time and proper time.
In earlier publications [18,19] such a concept was already presented, starting from the relativistic time dilation, which can be represented as the sum ( 10 ) of the return probabilities of a Bernoulli Random Walk or "BRW". From this we can conclude that in the steps of a (modified, superimposed) random walk, current information and thereby also the domain of later information is defined. However, this still needs to be connected (step by step) with current approaches, and bridges need to be shown in particular to quantum mechanics. In connection with this it is pointed out that linear combinations or superpositions ( 7 ) of BRWs are also possible as long as the elementary discrete steps are synchronized resp. "connected".
Since the consistent application of the elementary definition of information (1) (among other things because of the necessary discretization) means in the end a deep intervention into current conceptual frameworks, the question arises whether this is necessary. Perhaps one would like to do without the clear definition of "information" because it does not fit into the present concept. Of course, nobody can be forced to adopt it, but this article can then clarify relevant limits and contradictions of common conceptual frameworks and thus indirectly help to save time. We can save time, for example, if we consider the "big bang model" only as a way to get an overview of the first orders of magnitude (measurable here), but of course not as a starting point for the explanation of (measurable information of) reality.
Then we can also question whether we want to start the conceptual framework at all with a clear definition of information, which is elementary (exact) and therefore starts, as usual in mathematics, with elementary terms of set theory. If not, what is the alternative? Actually, experience has shown again and again that the application of ill-defined or even undefined terms does not help in the end.
So the question still arises whether there is an alternative exact definition of information which differs decisively from (1).
Selection of elements from a set is elementary. It is quite possible to refine and extend details, especially the notion of "reproducible knowledge" (of the elements) of a set of possibilities. This requires a discrete concept of time and proper time, since "knowledge" is possible only for parts of the past. Making such a concept possible is one of the objectives of this article.
The concept of time and proper time used here got its initial impulse from the power series expansion of the function ( 10 ) for relativistic time dilation. It was shown that this can be represented as the sum of the return probabilities of a Bernoulli Random Walk (BRW) [18,19]. The approach of a BRW allows a discrete representation of discrete sets of possibilities for information, which are always finite at a given time or number of steps and therefore compatible with our definition of information (1). In Section 2 (Material and Method) it is also mentioned that the symmetric case of the BRW (p=1/2 for both sides) is particularly interesting (also for the inclusion of conservation laws). This important case occurs regularly in the ultrarelativistic case of the speed of light, i.e. the elementary electromagnetic propagation speed of information. The expression ( 9 ) results in this case in "infinity" and is therefore not usable. However, the approach to proper time via the series expansion ( 10 ) as a sum of return probabilities of a symmetric BRW remains usable also in the ultrarelativistic case and shows in particular also combinatorial details. The BRW approach, with additional physically relevant modifications such as linear combinations or superpositions (e.g., Table 2) and discrete derivatives ( 11 ) of BRWs, is therefore discussed in more depth and first results are shown (Section 3).
First, a direct relationship ( 16 ) between eigentime and the number of steps of a symmetric BRW is shown. The symmetric BRW also corresponds to the "no prior information" case, since no direction is preferred. In Section 3.2, this is extended to a global calculation. Consistently starting from (1), there must be an initially defined primary domain of information, whose knowledge is a prerequisite for any subsequent exchange of information in our universe. So to speak, the "direction of time" was defined in connection with the propagation direction of energy per time increase (see Section 3.7). This also means that the primary domain of information was defined in the first steps of a primary (comprehensive, thus maximum) BRW in our universe. This maximum connecting BRW is necessary for the guarantee of the conservation of energy (cf. Section 3.7) and for the synchronization of elementary finite differences ( 5 ), ( 6 ) and their possible superpositions ( 7 ).
For the sake of clarity, in Section 3.2 we first made a rough estimate of the maximum number of steps nmax, since the standard deviation of the maximal BRW is $\sqrt{n_{max}}$ and most steps of a BRW occur within a few standard deviations around the mean. Within this rough estimate, we first chose the range of the strong interaction as a measure of the standard deviation $\sqrt{n_{max}}$ and the maximum measurable distance (i.e., the estimated extent of the measurable universe) as a measure of the extent of the primary BRW. Using ( 17 ), we obtained $n_{max} = 7.744\times 10^{83} \approx 10^{84}$. Rounding is more than justified because of this rough estimate. Using the range of the weak interaction would have resulted in an even larger value.
In any case, this rough estimate calculation already shows that the gradation of the discrete representation is too fine to be measurable. So it would be a fundamental mistake to conclude from missing measurability of the gradation that reality is continuous (like e.g. the "real numbers"). The information-theoretical approach (1) makes clear that for an information-theoretical and therefore exact description of reality we have to work from the beginning with discrete sets of numbers, which moreover have to be finite within finite time.
An exact information-theoretical approach naturally concerns quantum mechanics first, where the emphasis is placed on computational models for clear, basal physical experiments. Equation ( 20 ) illustrates that also in the BRW approach every progress of time can be decomposed into sums over concatenated outward and return paths. This shows first analogies to quantum mechanics, where the probability of any measurement result is the product of a probability amplitude ("outward path") with its complex conjugate probability amplitude ("return path").
Moreover, the concatenation of two BRWs leads to typical probabilities ( 21 ) of the geometric view. This shows first possibilities for how, in the context of further research, the geometrical appearance can be derived as a statistical consequence (which appears delayed due to the limited information speed).
Section 3.5 shows a way to discretize the Schrödinger equation, here choosing the non-relativistic one-dimensional form. Despite this simplification, the analogies shown between derivatives of the quantum mechanical state Ψ(t, x) and discrete finite differences of Q0(n, k) are remarkable, since the Schrödinger equation has central importance in quantum mechanics. The algorithm of the symmetric BRW ( 5 ) is also sufficient for the argument ( 24 ). Essential is "only" the uniform definition or synchronization of n and k for ( 5 ) and for the superposition ( 7 ). The synchronization of finite differences is necessary for the "finite" Schrödinger equation. Again, from an information-theoretic point of view, this requires the embedding of the BRWs within a maximal primary BRW with a maximal number of rows (e.g., nmax in ( 17 )). Thus, the universal validity of the Schrödinger equation is another indication for this assumption.
The exponential function also plays an important role in quantum mechanical calculations, e.g. as part of quantum mechanical state functions. This function can be represented as a binomial expansion (25) if "only" n becomes arbitrarily large. This can be done in conformity with time [18,19] and thus in conformity with reality (cf. also (10)). In this case, for large n, the right-hand sides of (30) and (31) show an approximately symmetric distribution of the binomial coefficients, as in a symmetric BRW (4).
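As a small illustration of this binomial representation (the concrete equations (25), (30), (31) are not reproduced here), the following sketch evaluates (1 + x/n)^n via the binomial theorem and compares it with exp(x) for growing n.

```python
from math import comb, exp

def exp_binomial(x, n):
    """Binomial expansion of (1 + x/n)**n; approaches exp(x) for large n."""
    return sum(comb(n, m) * (x / n) ** m for m in range(n + 1))

x = 1.0
for n in (10, 100, 1000):
    print(n, exp_binomial(x, n), exp(x))   # converges towards e = 2.71828...
```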
Section 3.7 now deals with the basic question of the minimum prior information necessary (in our universe) for elementary information exchange, i.e. the exchange of energy quanta or photons. Here an important degree of freedom can indeed be found: the order of the 3 space dimensions decides the sign of the Poynting vector (32) and thus the direction of the elementary energy transport. The fact that in (33) and (34) the sign of εijk determines the sign of the propagation direction of any energy exchange supports the hypothesis that the selection of one of the 2 possible orders of a set of 3 possibilities takes place upon access to the primary domain of our universe. We must reproducibly know this order together, as necessary prior information, at every information exchange or energy exchange (per common increase of time).
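The sign dependence described here can be made tangible with a short numerical sketch (not the article's equations (32) to (34)): the component form S_i = Σ ε_ijk E_j B_k of the Poynting vector (constant prefactors omitted) flips its sign when the assumed order, i.e. handedness, of the three axes is reversed.

```python
import numpy as np

E = np.array([1.0, 0.0, 0.0])   # electric field along the first axis
B = np.array([0.0, 1.0, 0.0])   # magnetic field along the second axis

def levi_civita(sign=+1):
    """Levi-Civita symbol; sign = -1 corresponds to the opposite axis order."""
    eps = np.zeros((3, 3, 3))
    for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        eps[i, j, k] = 1.0
        eps[i, k, j] = -1.0
    return sign * eps

for sign in (+1, -1):
    S = np.einsum('ijk,j,k->i', levi_civita(sign), E, B)   # S_i = eps_ijk E_j B_k
    print(sign, S)   # [0. 0. 1.] versus [0. 0. -1.]: the energy flow direction flips
```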
A prerequisite for this (and also for the comprehensive validity of the Schrödinger equation, cf. section 3.5) is ultimately the basic discrete synchronization, or connection, of finite differences as described at (7). This and further results (sections 3.4, 3.6, 3.7) led to the title of this article.
Finally, section 3.8 describes a bridge (35) to electromagnetism. Maxwell's equations are particularly interesting because, with reference to time, they show the combinatorics of energy and information propagation in all measurable dimensions. However, for compatibility with the definition (1) of information, we need a discrete representation of the electromagnetic laws. Starting from the Maxwell vacuum equations (36), (37) written out without units, Table 3 shows the resulting combinatorics spread out along one dimension. Possibilities for further research are addressed, and multidimensional computer simulations [25] may also help.
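As one possible, hedged illustration of such a discrete representation (a standard leapfrog scheme, not the construction of [25] or of Table 3), the following sketch advances the one-dimensional vacuum Maxwell equations ∂Ex/∂t = −∂By/∂z and ∂By/∂t = −∂Ex/∂z (with c = 1) on a staggered grid; as in Table 3, the E and B components alternate between neighbouring positions at successive discrete steps.

```python
import numpy as np

N = 200
z = np.arange(N)
Ex = np.exp(-0.5 * ((z - 100) / 5.0) ** 2)   # right-moving Gaussian pulse
By = np.zeros(N)
By[:-1] = Ex[1:]                             # staggered initial magnetic field
dt_over_dz = 1.0                             # "magic" step ratio: exact 1-D transport

for step in range(80):
    By[:-1] -= dt_over_dz * (Ex[1:] - Ex[:-1])   # Faraday step
    Ex[1:]  -= dt_over_dz * (By[1:] - By[:-1])   # Ampere step

print("pulse centre moved from 100 to", int(np.argmax(Ex)))   # -> 180
```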

5. Some hints for interpretation

A philosophical discussion of the definition (1) of information and the resulting consequences is beyond the scope of this article. However, some remarks on the interpretation are appropriate.
As living beings we are all locally separated, but ultimately part of a whole, because we can exchange information; consequently we must all share the same primary domain of information, which we know more or less unconsciously. This connects all information. Access to the primary domain of information is necessary, and the determined order (34) is crucial for the control of every energy flow, see section 3.7. We can consider decisions as the causes of information, because we must decide first before the information about the decision can be expressed and perceived elsewhere. The (in this reference frame or universe) primary (initial) decision defining the primary domain ("initial symmetry breaking") controls the further energy flow (per time increase) with maximum effect.
This can be done, for example, by choosing a side as shown in Table 2, i.e. by choosing one of 2 BRWs with opposite signs (because of the exact conservation law of energy, which implies that our contribution is important after all).
The initial decision defines thus information with maximum effect for the further long-term common future for all life which exchanges information (as energy quanta) later.
But what does this mean for us as living beings, whose conscious memory usually begins much later? How should we decide?
Since contradictory information ultimately extinguishes itself (due to the shared primary domain and the exact conservation of energy), it is certainly advisable to avoid contradictions with the common initial decision (which leads into the future) and to decide, to the best of our knowledge, in such a way that our own decisions also lead into a common future in the long run and do not contradict it. To this end, we can ask ourselves:
Which decisions would future generations want from us?

6. Conclusions

Since the result of any physical, well-defined experiment is information in the form of a selection from the set of possible experimental results, definition (1) of information is also relevant to physics. A more detailed analysis shows that substantial consequences for theoretical physics follow from this:
The set of possibilities, i.e. the domain of information, must be reproducibly known, so that the information (as a selection from this set) is exchangeable and reproducible. From this it follows that within finite time the domain of information can only be finite.
Mathematical approaches to theoretical physics that use time-independent infinite sets are therefore unsuitable for an information-theoretic approach (1).
Starting from the series expansion of time dilation, it is shown that time is proportional to the sum of the return probabilities of a Bernoulli Random Walk or "BRW".
The BRW approach is shown to be suitable for an information-theoretic approach in which the domain of information is always discrete and only grows together with time.
Starting from the BRW approach, several bridges can be formed to current mathematical approaches, e.g., to the use of linear operators, the Schrödinger equation, and the (matrix) exponential function in quantum mechanics. Bridges from BRW statistics to geometry are also possible.
In discrete form, the laws of electrodynamics give clues to the basic underlying combinatorics. They show 2 possibilities for the calculation of the sign of the Poynting vector (i.e. for the direction of the energy flow). From this, conclusions can be drawn about the structure of the common primary domain of information, which is necessary for the definition and connection of later defined (domains of) information. A final illustrative presentation of the combinatorics of Maxwell's equations is intended to give suggestions for further research.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Wikipedia, Information. Available online: https://en.wikipedia.org/wiki/Information (accessed on 5 May 2023).
  2. Floridi, L. Information: A Very Short Introduction; Oxford University Press: Oxford, UK, 2010.
  3. Orthuber, W. We Can Define the Domain of Information Online and Thus Globally Uniformly. Information 2022, 13(5), 256.
  4. Orthuber, W. The Building Blocks of Information Are Selections – Let's Define Them Globally! In pHealth; IOS Press: Amsterdam, The Netherlands, 2021; pp. 76–81.
  5. Orthuber, W. Reproducible Transport of Information. In Studies in Health Technology and Informatics; IOS Press: Amsterdam, The Netherlands, 2021; Volume 281, pp. 3–7.
  6. Orthuber, W. Internet since decades—but no globally searchable data. GJETA 2020, 4, 70–72.
  7. Orthuber, W. How to make medical information comparable and searchable. Digit. Med. 2020, 6, 1–8.
  8. Orthuber, W. Information Is Selection—A Review of Basics Shows Substantial Potential for Improvement of Digital Information Representation. Int. J. Environ. Res. Public Health 2020, 17, 2975.
  9. Orthuber, W. Global predefinition of digital information. Digit. Med. 2018, 4, 148–156.
  10. Orthuber, W. Online definition of comparable and searchable medical information. Digit. Med. 2018, 4, 77–83.
  11. Orthuber, W.; Hasselbring, W. Proposal for a New Basic Information Carrier on the Internet: URL Plus Number Sequence. In Proceedings of the 15th International Conference WWW/Internet, Mannheim, Germany, 28–30 October 2016; pp. 279–284.
  12. Orthuber, W.; Papavramidis, E. Standardized vectorial representation of medical data in patient records. Med. Care Compunetics 2010, 6, 153–166.
  13. Orthuber, W.; Dietze, S. Towards Standardized Vectorial Resource Descriptors on the Web. In Proceedings of Informatik 2010: Service Science—Neue Perspektiven für die Informatik, Beiträge der 40. Jahrestagung der Gesellschaft für Informatik e.V. (GI), Band 2, Leipzig, Germany, 27 September–1 October 2010; pp. 453–458.
  14. Orthuber, W.; Sommer, T. A Searchable Patient Record Database for Decision Support. In Medical Informatics in a United and Healthy Europe, Proceedings of MIE 2009, the XXIInd International Congress of the European Federation for Medical Informatics, Sarajevo, Bosnia and Herzegovina, 30 August–2 September 2009; pp. 584–588.
  15. Orthuber, W.; Fiedler, G.; Kattan, M.; Sommer, T.; Fischer-Brandies, H. Design of a global medical database which is searchable by human diagnostic patterns. Open Med. Inform. J. 2008, 2, 21–31.
  16. Orthuber, W. Numeric Search in User Defined Data. Available online: http://numericsearch.com (accessed on 5 May 2023).
  17. Korsch, H.J. Mathematik der Quantenmechanik: Grundlagen, Beispiele, Aufgaben, Lösungen; Carl Hanser Verlag: München, Germany, 2019; pp. 78–79.
  18. Orthuber, W. A discrete and finite approach to past physical reality. Int. J. Math. Math. Sci. 2004, 19, 1003–1023.
  19. Orthuber, W. A discrete and finite approach to past proper time. arXiv preprint quant-ph/0207045, 2002.
  20. Wikipedia, Hermite polynomials. Available online: https://en.wikipedia.org/wiki/Hermite_polynomials (accessed on 5 May 2023).
  21. Salam, A. Fundamental interaction. AccessScience.
  22. Wikipedia, Observable universe. Available online: https://en.wikipedia.org/wiki/Observable_universe (accessed on 5 May 2023).
  23. Wikipedia, Schrödinger equation. Available online: https://en.wikipedia.org/wiki/Schr%C3%B6dinger_equation (accessed on 5 May 2023).
  24. Richter, F.; Florian, M.; Henneberger, K. Poynting's theorem and energy conservation in the propagation of light in bounded media. Europhys. Lett. 2008, 81(6), 67005.
  25. Orthuber, W. A discrete approach to the vacuum Maxwell equations and the fine structure constant. arXiv preprint quant-ph/0312188, 2003.
  26. Wikipedia, Maxwell's equations. Available online: https://en.wikipedia.org/wiki/Maxwell%27s_equations (accessed on 5 May 2023).
Table 1. Pascal triangle or "Q0 triangle": It starts in line n=0 at position k=0 with the value 1 and shows below it the numbers of path possibilities to position k after n steps. The column k=0 represents the original position and contains for n>0 the numbers of return possibilities to the origin. Multiplication with column psym gives for each row n the resulting probabilities without prior information, i.e. equal probabilities of 1/2 each for steps to the right (to k+1) or to the left (to k-1). Here, the column k=0 represents the center of symmetry.
n↓ k→ | -6 | -5 | -4 | -3 | -2 | -1 |  0 |  1 |  2 |  3 |  4 |  5 |  6 | psym
0     |    |    |    |    |    |    |  1 |    |    |    |    |    |    | 1/1
1     |    |    |    |    |    |  1 |    |  1 |    |    |    |    |    | 1/2
2     |    |    |    |    |  1 |    |  2 |    |  1 |    |    |    |    | 1/4
3     |    |    |    |  1 |    |  3 |    |  3 |    |  1 |    |    |    | 1/8
4     |    |    |  1 |    |  4 |    |  6 |    |  4 |    |  1 |    |    | 1/16
5     |    |  1 |    |  5 |    | 10 |    | 10 |    |  5 |    |  1 |    | 1/32
6     |  1 |    |  6 |    | 15 |    | 20 |    | 15 |    |  6 |    |  1 | 1/64
Table 2. The "Q1 triangle" with the values Q1(n, k) (cf. (12)).
n↓ k→ | -6 | -5 | -4 | -3 | -2 | -1 |  0 |  1 |  2 |  3 |  4 |  5 |  6 | p↓
0     |    |    |    |    |    |    |  0 |    |    |    |    |    |    | 1/1
1     |    |    |    |    |    |  1 |    | -1 |    |    |    |    |    | 1/2
2     |    |    |    |    |  1 |    |  0 |    | -1 |    |    |    |    | 1/4
3     |    |    |    |  1 |    |  1 |    | -1 |    | -1 |    |    |    | 1/8
4     |    |    |  1 |    |  2 |    |  0 |    | -2 |    | -1 |    |    | 1/16
5     |    |  1 |    |  3 |    |  2 |    | -2 |    | -3 |    | -1 |    | 1/32
6     |  1 |    |  4 |    |  5 |    |  0 |    | -5 |    | -4 |    | -1 | 1/64
Table 3. Illustration of the combinatorics of the discrete Maxwell vacuum equations (36), (37), spread out along one dimension: row n contains field components (as in Tables 1 and 2, at positions k = -n, -n+2, …, n), and the indented lines in between contain the spatial differences connecting neighbouring positions.
n↓ k→ -6 -5 -4 -3 -2 -1 0 1 2 3 4 5 6
0:  Ez
        dx  -dy
1:  By  Bx
        -dz  dx  -dy  dz
2:  Ex  Ez  Ey
        dy  -dz  dx  -dy  dz  -dx
3:  Bz  By  Bx  Bz
        -dx  dy  -dz  dx  -dy  dz  -dx  dy
4:  Ey  Ex  Ez  Ey  Ex
        dz  -dx  dy  -dz  dx  -dy  dz  -dx  dy  -dz
5:  Bx  Bz  By  Bx  Bz  By
        -dy  dz  -dx  dy  -dz  dx  -dy  dz  -dx  dy  -dz  dx
6:  Ez  Ey  Ex  Ez  Ey  Ex  Ez
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.