Introduction and literature review
The most justifiable version of information entropy is the Rényi entropy, with a free Rényi non-extensivity parameter q; the Tsallis entropy can be thought of as a linear approximation [1] to the Rényi entropy in the vicinity of the point q = 1. When q = 1, the Boltzmann-Shannon entropy functional replaces both other entropy functionals. When the Rényi entropy functional is subjected to the maximum information entropy principle (MEP), the result is the microcanonical (homogeneous) distribution for an isolated system. The Boltzmann entropy functional replaces the Rényi entropy functional in this situation, which supports the universality of Boltzmann's principle of statistical mechanics regardless of the value of the Rényi parameter q.
Given the rapid development of non-extensive statistics based on Tsallis' information entropy, a critical assessment of its foundations is needed. Among the candidate measures, the one-parameter family of Rényi entropies (or simply the Rényi entropy) seems to be the most rational one [2]. The well-known Boltzmann-Shannon entropy functional replaces the Rényi entropy functional when the Rényi parameter q is equal to unity. The non-extensive Tsallis entropy functional is produced by linearizing the extended Rényi entropy functional in the vicinity [2] of the point q = 1.
When the principle of maximum information entropy (MEP) is applied to the Rényi entropy functional of an isolated system, the result is the microcanonical distribution. At this stage, the Rényian functional reduces to the Boltzmannian functional, thus enforcing Boltzmann's principle, from which all thermodynamic properties of extensive and non-extensive Hamiltonian systems can be deduced.
The measure of information in such an incomplete statistical description of a system by means of a probability distribution $\{p_i\}$ is called the information entropy functional, or simply the entropy. The most well-known representation is the Boltzmann-Shannon entropy functional, which reads
\[
S_{BS} = -\sum_i p_i \ln p_i. \tag{1}
\]
The entropy (1) corresponds to the thermodynamic entropy functional in the situation where the distribution $\{p_i\}$ describes the system's macroscopic equilibrium state and the subscripts $i$ denote dynamical microstates in the Gibbs phase space.
This entropy functional was justified by Khinchin and Shannon [5] on the basis of a system of axioms presented in theorem form. Their axioms were analyzed in [2, 5], where it was shown that the uniquely determined Boltzmann-Shannon entropic form is provided by a rather artificial axiom related to the form of the conditional entropy functional (that is, the entropy functional of a subsystem of a system being in a prescribed state). Uffink [5] examined several papers on this topic and found the Shore and Johnson [6] axiom system to be the best justified; it gives rise to the Rényi entropy functional [7], which reads
\[
S_q^{R} = \frac{1}{1-q}\,\ln \sum_i p_i^{\,q}. \tag{2}
\]
The Rényi entropy functional is a mathematical measure used to quantify the amount of information or disorder in a system. The parameter q in (2) must be positive, $q > 0$. The properties and characteristics of the Rényi entropy functional are further explored in the related literature [7, 8, 9]. Among its basic properties, we may mention positivity ($S_q^{R} \geq 0$) and concavity for $0 < q \leq 1$.
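As a simple worked illustration of these properties (a standard check, not part of the cited literature), consider the uniform distribution over W microstates, $p_i = 1/W$:
\[
S_q^{R} = \frac{1}{1-q}\ln\sum_{i=1}^{W} W^{-q}
        = \frac{1}{1-q}\ln W^{\,1-q}
        = \ln W \qquad \text{for every } q > 0,
\]
so for the microcanonical (homogeneous) distribution the Rényi entropy coincides with the Boltzmann entropy $\ln W$ regardless of the value of the Rényi parameter, in agreement with the statement made at the beginning of this section; positivity is also evident in this case.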
In the case of $|q-1| \ll 1$ (which, in view of the normalization of the distribution, $\sum_i p_i = 1$, corresponds to the condition $\bigl|\sum_i p_i^{\,q} - 1\bigr| \ll 1$), one can restrict oneself to the linear term of the expansion of the logarithm in the expression for $S_q^{R}$ over this difference, and $S_q^{R}$ changes to the Tsallis entropy functional
\[
S_q^{T} = \frac{1}{1-q}\left(\sum_i p_i^{\,q} - 1\right).
\]
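The linearization step can be spelled out explicitly (a standard expansion, included here only for the reader's convenience). Writing $x = \sum_i p_i^{\,q}$ and using $\ln x = (x-1) - \tfrac{1}{2}(x-1)^2 + \dots$ for $x$ close to 1,
\[
S_q^{R} = \frac{\ln x}{1-q}
        = \frac{1}{1-q}\left(\sum_i p_i^{\,q} - 1\right) + O\!\left(\Bigl(\sum_i p_i^{\,q}-1\Bigr)^{2}\right)
        = S_q^{T} + \dots,
\]
so the Tsallis functional is exactly the first-order truncation of the Rényi functional in the small difference $\sum_i p_i^{\,q} - 1$. For example, for the two-state distribution $p = (3/4,\,1/4)$ and $q = 2$ one has $\sum_i p_i^{2} = 5/8$, so $S_2^{R} = \ln(8/5) \approx 0.470$ while $S_2^{T} = 3/8 = 0.375$; the two functionals differ appreciably precisely because $\bigl|\sum_i p_i^{\,q} - 1\bigr| = 3/8$ is not small.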
Havrda and Charvat [10] and Daróczy [11] had proposed such a linearization of the Rényi entropy functional well before Tsallis' entropy [12] came into existence.
The entropy functional stops being extensive as a result of the logarithm linearization. Tsallisian followers have extensively exploited this property to examine a range of non-extensive systems [12, 13, 14, 15, 16, 17, 18, 19, 20]. In doing so, the constraint $\bigl|\sum_i p_i^{\,q} - 1\bigr| \ll 1$ mentioned above is typically ignored.
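The loss of extensivity can be made explicit (these are standard identities, stated here only to make the remark above concrete). For two statistically independent subsystems A and B with joint probabilities $p_{ij} = p_i^{A} p_j^{B}$, the Rényi entropy is additive while its linearized (Tsallis) form is only pseudo-additive:
\[
S_q^{R}(A+B) = S_q^{R}(A) + S_q^{R}(B),
\qquad
S_q^{T}(A+B) = S_q^{T}(A) + S_q^{T}(B) + (1-q)\,S_q^{T}(A)\,S_q^{T}(B),
\]
so additivity of the entropy survives for the Rényi functional at any q but is destroyed by the logarithm linearization unless q = 1.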
According to the MEP, when describing a system statistically, its distribution function should reproduce the average quantities actually observed in the system, while with respect to everything that is not known it should remain as indeterminate as possible. This approach has been widely used in constructing equilibrium statistical thermodynamics for isolated or weakly interacting thermodynamic systems.
Following the research conducted by Jaynes [21], Gibbs ensembles came into wide use as the statistical distributions that reproduce the average quantities observed in a system, and they are commonly employed in constructing equilibrium statistical thermodynamics for isolated or weakly interacting thermodynamic systems. The information entropy functional, commonly referred to as the Boltzmann-Shannon entropy functional, is traditionally used to quantify the "disorder" or uncertainty in such a system.
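For the reader's convenience, the way the MEP produces the Gibbs (canonical) ensemble can be sketched in the simplest setting; the notation $E_i$, $U$, $\lambda_0$, $\beta$, $Z$ used here is purely illustrative and is not taken from the constraints employed later in this paper. Maximizing the Boltzmann-Shannon entropy (1) subject to normalization and a fixed mean energy $\sum_i p_i E_i = U$, with Lagrangian multipliers $\lambda_0$ and $\beta$, gives
\[
\frac{\partial}{\partial p_i}\left[-\sum_j p_j \ln p_j - \lambda_0\Bigl(\sum_j p_j - 1\Bigr) - \beta\Bigl(\sum_j p_j E_j - U\Bigr)\right]
= -\ln p_i - 1 - \lambda_0 - \beta E_i = 0,
\]
\[
p_i = \frac{e^{-\beta E_i}}{Z}, \qquad Z = \sum_i e^{-\beta E_i},
\]
which is the canonical Gibbs distribution; with no energy constraint, the same procedure yields the uniform (microcanonical) distribution.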
This theorem introduces a new physical interpretation: the Rényi entropy functional is more general than both the Shannon and the Tsallis functionals, since the Rényi entropy functional reduces to the Shannon case as the parameter $q \to 1$, and the linearization of the Rényian entropy functional in the neighbourhood of the point q = 1 is the Tsallisian entropy functional. This opens new ground in information theory, which we could represent by the following diagram:

Rényi entropy $S_q^{R}$  --(limit $q \to 1$)-->  Shannon (Boltzmann-Shannon) entropy
Rényi entropy $S_q^{R}$  --(linearization of the logarithm near $q = 1$)-->  Tsallis entropy $S_q^{T}$

In other words, employing the Rényi entropy functional to study any concept generates the Shannon case as the special case $q \to 1$, and reduces to the Tsallis case if we carry out the linearization of the Rényi entropy functional.
Shannon entropy, sometimes referred to as Gibbs entropy in statistical physics, is a measure of "disorder" in a system. As an alternative to Gibbs entropy, Tsallis developed a non-extensive entropy [13, 15], indexed by the parameter q, which results in an infinite family of Tsallis non-extensive entropies. While the Tsallis non-extensive entropy produces "type II generalised Lévy stable distributions" with heavy tails that obey power laws, the Gibbs entropy produces exponential distributions. It is important to remember that Tsallis entropy is equivalent to Havrda-Charvat's structural $\alpha$-entropy [14], but the non-extensive mechanics community frequently ignores this relationship. Additionally, Tsallis distributions are derived from Lévy distributions, rather than the other way around.
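The contrast between exponential and power-law behaviour mentioned here can be illustrated with the standard q-exponential form that arises when the Tsallis functional is maximized under a mean-value constraint; this textbook expression is quoted only as an illustration of the tail behaviour, not as one of the specific distributions derived in this paper:
\[
p_q(x) \propto \bigl[1 - (1-q)\,\beta x\bigr]^{\frac{1}{1-q}}
\;\xrightarrow[\;q \to 1\;]{}\; e^{-\beta x},
\qquad
p_q(x) \sim \bigl[(q-1)\,\beta x\bigr]^{-\frac{1}{q-1}} \quad (q > 1,\ x \to \infty),
\]
so the Gibbs case (q = 1) has an exponential tail, while the Tsallis case with q > 1 decays only as a power law.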
Result four
Following our approach as above, subject to more complex constraints, the reader can check that, after a few algebraic steps, optimizing the Rényi entropy functional subject to the constraints (4), (11), and (16) yields the solution in the closed-form representation:
where the coefficients appearing in (19) are the Lagrangian multipliers and the variance is the one defined by (16). We can see that, in the appropriate limit, (19) reduces to the Tsallisian distribution [13] as a special case.
By (19), for small values of x we have
Carrying out the same analysis, for large values of x we have
This is summarized in the more compact form:
The probability density functions (PDFs) (10), (15), (17), and (19) are generalized t-distributions, which exhibit polynomial behaviour for small values of x and power-law tails for large values of x. These distributions encompass the entire range of Lévy stable distributions, which are commonly used to model extreme events and heavy-tailed phenomena. Specifically, equation (10) represents the PDF for small values of x, while equation (19) represents the PDF for large values of x.
where
The outcomes for (15), (17), and (19) are comparable. Since the Lévy distributions can be represented by the generalized t-distributions (10), (15), (17), and (19), the class of density functions that can be obtained from the Rényi entropy functional is even broader. It is clear from the analysis done and reported in (23) for (19) that this case is still relevant.
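Although the specific forms of (10), (15), (17), and (19) depend on the constraints of the present construction, the two regimes described above can already be seen in the ordinary Student t-density, used here only as a generic representative of the family of generalized t-distributions:
\[
f_\nu(x) \propto \left(1 + \frac{x^2}{\nu}\right)^{-\frac{\nu+1}{2}}
\approx 1 - \frac{\nu+1}{2\nu}\,x^2 + \dots \quad (|x| \ll \sqrt{\nu}),
\qquad
f_\nu(x) \sim |x|^{-(\nu+1)} \quad (|x| \gg \sqrt{\nu}),
\]
i.e., polynomial behaviour near the origin and a power-law tail at infinity, the latter being the feature that connects this family with the heavy-tailed Lévy-type distributions.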