Preprint Article

Kullback-Leibler Divergence (KLD) Formalism of the Stable Queue with KLD Applications to Biometrics

This version is not peer-reviewed.

Submitted: 17 February 2024
Posted: 19 February 2024


Abstract
The paper explores the Kullback-Leibler divergence formalism (KLDF) applied to the stable MG1 queue manifold. More potentially, both service time probability and cumulative functions which make KLDF exact are obtained. The credibility of KLDF is justified through consistency axioms. Some potential applications of Kullback-Leibler divergence to Biometry are presented. The paper concludes with closing remarks combined with some challenging open problems and the next phase of research.
Keywords: 
Subject: 
Computer Science and Mathematics - Probability and Statistics

I. Introduction

It has been common practice in probabilistic inverse approaches [1] to treat both measurable data and unknown model parameters as uncertain. This method provides a deeper understanding of the uncertainty associated with the measured data and model parameters [2–10]. KLD [11–16] is a method used to compare two probability distributions. In Probability and Statistics, when we need to simplify complex distributions or approximate observed data, KL divergence helps us quantify the amount of information lost in the process of choosing an approximation. KLD measures the difference between the two distributions and assists us with comprehending the trade-off between accuracy and simplicity in statistical modelling.
Shannonian entropic measure [9,17], namely $H(p)$, reads as

$$H(p) = -\sum_{n=0}^{\infty} p(n)\,\ln\bigl(p(n)\bigr)$$

and defines "the minimum number of bits it would take us to encode our information".
KLD is commonly written as:

$$D_{KL}(p\,\|\,r) = \sum_{n=0}^{\infty} p(n)\,\ln\left(\frac{p(n)}{r(n)}\right)$$

Using KL divergence, we can precisely quantify the information lost when approximating one distribution with another.
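The definition above can be checked numerically; a minimal Python sketch (both distributions below are illustrative):

```python
import math

def kl_divergence(p, r):
    """Discrete KL divergence D_KL(p || r) = sum_n p(n) * ln(p(n) / r(n))."""
    return sum(pn * math.log(pn / rn) for pn, rn in zip(p, r) if pn > 0)

# Illustrative distributions: p is the "observed" one, r the approximation.
p = [0.5, 0.3, 0.2]
r = [0.4, 0.4, 0.2]

d = kl_divergence(p, r)
assert d >= 0.0                              # KLD is non-negative
assert abs(kl_divergence(p, p)) < 1e-12      # and zero when p == r
```

Note that $D_{KL}$ is asymmetric: swapping the roles of $p$ and $r$ generally changes the value, which is why the direction of the comparison matters in statistical modelling.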
This paper provides a roadmap of its contents, starting with some fundamental background in section I. The main results are given in section II. Section III deals with potential KLD applications to Biometrics. Finally, section IV provides conclusions, some emerging open problems, and future pathways of research.
According to [18–20], the maximum entropy state probability of the generalized geometric solution of a stable M/G/1 queue, subject to normalization, mean queue length (MQL), L and server utilization, ρ (<1) is given by:
Figure 1. A Stable M/G/1 queue.
$$p(n) = \begin{cases} 1-\rho, & n = 0 \\ (1-\rho)\,g\,x^{n}, & n \ge 1 \end{cases}$$

where $g = \frac{\rho^{2}}{(L-\rho)(1-\rho)}$, $x = \frac{L-\rho}{L}$, and $L = \frac{\rho}{2}\left(1+\frac{1+\rho C_{s}^{2}}{1-\rho}\right)$ (the MQL for the underlying queue), with $\rho\;(=1-p(0))$ the server utilization and $C_{s}^{2}$ the squared coefficient of variation of the service time. Obviously, $p(n)$ (c.f., (3)) reads:
$$p(n) = \begin{cases} 1-\rho, & n = 0 \\[4pt] 2\rho\,\dfrac{\left(\frac{1+\rho\beta}{1-\rho}-1\right)^{n-1}}{\left(\frac{1+\rho\beta}{1-\rho}+1\right)^{n}}, & n \ge 1 \end{cases}$$

where $\beta = C_{s}^{2}$. The reader can observe the difference between our novel approach and those in (3) and (4). Moreover, it is notable that the newly obtained KL formalism in the current paper is more general and reduces to (3) as a special case.
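Equation (3) can also be verified numerically; a minimal Python sketch (the parameter values below are illustrative) checks that the generalized geometric solution is normalized and reproduces the MQL constraint:

```python
def mg1_me_probabilities(rho, cs2, n_max=2000):
    """Maximum-entropy state probabilities of the stable M/G/1 queue (eq. (3)):
    p(0) = 1 - rho and p(n) = (1 - rho) * g * x**n for n >= 1."""
    L = (rho / 2.0) * (1.0 + (1.0 + rho * cs2) / (1.0 - rho))   # P-K MQL
    g = rho ** 2 / ((L - rho) * (1.0 - rho))
    x = (L - rho) / L
    p = [1.0 - rho] + [(1.0 - rho) * g * x ** n for n in range(1, n_max)]
    return p, L

p, L = mg1_me_probabilities(rho=0.7, cs2=1.5)
assert abs(sum(p) - 1.0) < 1e-9                               # normalization
assert abs(sum(n * pn for n, pn in enumerate(p)) - L) < 1e-6  # MQL constraint
```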
[19] evaluated the credibility of Tsallis’ NME formalism in terms of the four consistency axioms for large systems.
The credibility of the KLDF NME formalism as a method of inductive reasoning is examined in Appendix A.
It has been shown (c.f., [19]) that the EME steady-state probability of a stable M/G/1 queue that maximizes Shannon's entropy function (c.f., [17]),
$$H(p_{1,S}) = -\sum_{n=0}^{\infty} p(n)\,\ln p(n)$$
with requirements:

Normalization,

$$\sum_{n=0}^{\infty} p(n) = 1$$

SU,

$$p_{1,S}(0) = \sum_{n=0}^{\infty} h(n)\,p(n) = 1-\rho, \qquad \rho = \frac{\lambda}{\mu}$$

where

$$h(n) = \begin{cases} 1, & n = 0 \\ 0, & n = 1, 2, \ldots \end{cases}$$

P-K MQL,

$$\langle n\rangle = \sum_{n=0}^{\infty} n\,p(n) = \frac{\rho}{2}\left(1+\frac{1+\rho C_{s}^{2}}{1-\rho}\right)$$
reads
$$p(n) = \begin{cases} p(0), & n = 0 \\ p(0)\,\tau_{s}\,x^{n}, & n > 0 \end{cases}$$

where $p(0) = 1-\rho$, $\tau_{s} = \frac{2}{1+C_{s}^{2}}$, and $x = \frac{\langle n\rangle-\rho}{\langle n\rangle}$.
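As a side check, note that $L-\rho = \frac{\rho^{2}(1+C_{s}^{2})}{2(1-\rho)}$, so the factor $g$ of eq. (3) coincides algebraically with $\tau_{s} = 2/(1+C_{s}^{2})$ above; a short Python sketch (with illustrative parameter pairs) confirms this:

```python
def geometric_factor(rho, cs2):
    """g = rho^2 / ((L - rho)(1 - rho)), with L the P-K mean queue length."""
    L = (rho / 2.0) * (1.0 + (1.0 + rho * cs2) / (1.0 - rho))
    return rho ** 2 / ((L - rho) * (1.0 - rho))

for rho, cs2 in [(0.3, 0.5), (0.7, 1.5), (0.9, 4.0)]:
    tau_s = 2.0 / (1.0 + cs2)
    assert abs(geometric_factor(rho, cs2) - tau_s) < 1e-9   # g equals tau_s
```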

II. MAIN RESULTS

Theorem 1. KLDF, namely $p_{KL}(n)$, under (6)-(9) reads as

$$p_{KL}(n) = \begin{cases} p_{KL}(0), & n = 0 \\[4pt] \dfrac{p_{KL}(0)\,\tau_{s}\,y^{n}}{q(0)}, & n > 0 \end{cases}$$

where the initial state probability $p_{KL}(0)$ satisfies the SU constraint (6),

$$p_{KL}(0) = 1-\rho$$

such that

$$\tau_{s} = \frac{2}{1+C_{s}^{2}}$$

$$y = \frac{\langle n\rangle-\rho}{\langle n\rangle} = \frac{\rho}{\rho+\tau_{KL}\,p_{KL}(0)},$$

$$y^{n} = q(n)\,x^{n}, \qquad MQL = \langle n\rangle = \frac{\rho}{2}\left(1+\frac{1+\rho\,C_{s,KL}^{2}}{1-\rho}\right)$$

with

$$\frac{\rho\,(1-y)}{(1-\rho)\,y} = \frac{\tau_{s}}{q(0)} = \tau_{KL} \quad (15)$$
Proof
The Lagrangian follows by maximizing KLDF under (6)-(9) to satisfy:

$$\frac{\partial}{\partial p_{KL}(n)}\left[-\sum_{n=0}^{\infty} p_{KL}(n)\,\ln\frac{p_{KL}(n)}{q(n)} - \alpha\sum_{n=0}^{\infty} h(n)\,p_{KL}(n) - (\beta-1)\left(\sum_{n=0}^{\infty} p_{KL}(n)-1\right) - \gamma\sum_{n=0}^{\infty} n\,p_{KL}(n)\right] = 0$$

Hence,

$$\ln\left(\frac{p_{KL}(n)}{q(n)}\right) + 1 + (\beta-1) + \alpha\,h(n) + \gamma\,n = 0$$

Hence,

$$p_{KL}(n) = q(n)\,(e^{-\beta})\,(e^{-\alpha h(n)})\,(e^{-\gamma n}) \quad (17)$$

For $n = 0$, (17) translates to

$$p_{KL}(0) = q(0)\,(e^{-\beta})\,(e^{-\alpha}) \quad (18)$$

Linking (17) with (18) implies

$$p_{KL}(n) = \frac{p_{KL}(0)\,q(n)\,\tau_{s}\,x^{n}}{q(0)}, \qquad x = e^{-\gamma}\in(0,1), \quad \tau_{s} = e^{\alpha}\in(0,1)$$

Defining $y^{n} = q(n)\,x^{n}$ and $y = \frac{\langle n\rangle-\rho}{\langle n\rangle}$ implies that $p_{KL}(n)$ takes the form

$$p_{KL}(n) = \begin{cases} p_{KL}(0), & n = 0 \\[4pt] \dfrac{p_{KL}(0)\,\tau_{s}\,y^{n}}{q(0)}, & n > 0 \end{cases}$$

Thus,

$$1 = p_{KL}(0) + \frac{p_{KL}(0)\,\tau_{s}}{q(0)}\,\frac{y}{1-y},$$

implying $\frac{\rho\,(1-y)}{(1-\rho)\,y} = \tau_{KL}$, which clearly implies $y = \frac{\rho}{\rho+\tau_{KL}\,p_{KL}(0)}$. This proves the required results.
Clearly, from the above devised result,

$$p_{KL}(n) = \begin{cases} p_{KL}(0), & n = 0 \\[4pt] \dfrac{p_{KL}(0)\,\tau_{s}\,y^{n}}{q(0)}, & n > 0 \end{cases}$$

It is known that $q(0)\in(0,1)$. It is evident that as $q(0)\to 1$, the KL formalism in (20,KL) takes the form

$$p_{Sh}(n) = \begin{cases} p_{KL}(0), & n = 0 \\ p_{KL}(0)\,\tau_{s}\,y^{n}, & n > 0 \end{cases}$$

which is the Shannonian formalism obtained in [20]. This shows the strength of the newly devised KLD formalism.
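As a numerical illustration of Theorem 1 and of this limiting behaviour, the following Python sketch (with illustrative $\rho$, $C_{s}^{2}$, and $q(0)$) checks that $p_{KL}(n)$ is normalized and approaches the Shannonian formalism as $q(0)\to 1$:

```python
def p_kl(n, rho, cs2, q0):
    """KLD formalism of Theorem 1, with tau_KL = tau_s / q(0) and
    y = rho / (rho + tau_KL * p_KL(0))."""
    p0 = 1.0 - rho
    tau_s = 2.0 / (1.0 + cs2)
    tau_kl = tau_s / q0
    y = rho / (rho + tau_kl * p0)
    return p0 if n == 0 else p0 * tau_s * y ** n / q0

rho, cs2 = 0.6, 2.0
total = sum(p_kl(n, rho, cs2, q0=0.8) for n in range(4000))
assert abs(total - 1.0) < 1e-9        # normalization holds for q(0) in (0,1)

# As q(0) -> 1, p_KL(n) tends to the Shannonian form p(0) * tau_s * y^n
for n in range(5):
    assert abs(p_kl(n, rho, cs2, q0=0.999) - p_kl(n, rho, cs2, q0=1.0)) < 1e-2
```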
Theorem 2. The KL NME formalism $p_{KL}(n)$ is exact when the PDF of the service time reads as

$$f_{s,KL}(t) = (1-\tau_{KL})\,u_{0}(t) + \mu\,(\tau_{KL})^{2}\,e^{-\mu\tau_{KL}t} \quad (21)$$

where $u_{0}(t)$ reads as

$$u_{0}(t) = \begin{cases} \infty, & t = 0 \\ 0, & t \neq 0 \end{cases}$$

such that

$$\int_{0}^{\infty} u_{0}(t)\,dt = 1, \qquad \tau_{s} = \frac{2}{1+C_{s}^{2}}, \qquad \tau_{KL} = \frac{\tau_{s}}{q(0)}$$
Proof
The probability generating function $Q_{KL}(z)$ for $p_{KL}(n)$ (c.f., (20,KL)) reads as

$$Q_{KL}(z) = \sum_{n=0}^{\infty} p_{KL}(n)\,z^{n}, \qquad |z| < 1 \quad (23)$$

Hence, by substituting $p_{KL}(n)$ of (20,KL) into (23), we have

$$Q_{KL}(z) = \sum_{n=0}^{\infty} p_{KL}(n)\,z^{n} = p_{KL}(0) + \frac{p_{KL}(0)\,\tau_{KL}\,yz}{1-yz} = \frac{p_{KL}(0)\,\bigl(1-yz(1-\tau_{KL})\bigr)}{1-yz}$$

Following [21],

$$Q_{KL}(z) = \frac{p_{KL}(0)\,(1-z)\,F_{s,KL}^{*}(\lambda-\lambda z)}{F_{s,KL}^{*}(\lambda-\lambda z)-z}$$

where

$$F_{s,KL}^{*}(\theta) = E(e^{-\theta S}) = \int_{0}^{\infty} e^{-\theta t}\,f_{s,KL}(t)\,dt$$

Hence,

$$\frac{(1-z)\,F_{s,KL}^{*}(\lambda-\lambda z)}{F_{s,KL}^{*}(\lambda-\lambda z)-z} = \frac{1-yz(1-\tau_{KL})}{1-yz}$$

Following (27), this yields

$$F_{s,KL}^{*}(\lambda-\lambda z) = \frac{z\,\bigl(1-yz(1-\tau_{KL})\bigr)}{\bigl(1-yz(1-\tau_{KL})\bigr)-(1-z)(1-yz)} = \frac{1-yz(1-\tau_{KL})}{y\,\tau_{KL}-yz+1}$$

Define $\lambda-\lambda z = \theta$. Therefore,

$$z = 1-\frac{\theta}{\lambda}$$

Thus,

$$F_{s,KL}^{*}(\theta) = \frac{\mu\tau_{KL}+\theta\,(1-\tau_{KL})}{\theta+\mu\tau_{KL}} = (1-\tau_{KL}) + \frac{\mu\,(\tau_{KL})^{2}}{\theta+\mu\tau_{KL}}$$

By inverting the Laplace-Stieltjes transform $F_{s,KL}^{*}(\theta)$, the GE$_{KL}$-type PDF $f_{s,KL}(t)$ (c.f., (21)) follows.

It is observed that as $q(0)\to 1$, $\tau_{KL}\to\tau_{s}$, which reduces to the Shannonian limiting case obtained in [20].
Corollary 1. The CDF $F_{s,KL}(t)$ of the GE$_{KL}$-type service time, with the PDF $f_{s,KL}(t)$ of Theorem 2, reads as

$$F_{s,KL}(t) = 1-\tau_{KL}\,e^{-\mu\tau_{KL}t} \quad (31)$$

with $\tau_{s} = \frac{2}{1+C_{s}^{2}}$, $\tau_{KL} = \frac{\tau_{s}}{q(0)}$.
Proof
We have

$$F_{s,KL}(t) = \int_{0}^{t} f_{s,KL}(x)\,dx = \int_{0}^{t}(1-\tau_{KL})\,u_{0}(x)\,dx + \mu\,(\tau_{KL})^{2}\int_{0}^{t} e^{-\mu\tau_{KL}x}\,dx = (1-\tau_{KL}) + \tau_{KL}\bigl(1-e^{-\mu\tau_{KL}t}\bigr) = 1-\tau_{KL}\,e^{-\mu\tau_{KL}t} \qquad \text{QED}$$
For $q(0)\to 1$, the novel derivation (31) reduces to the formula in [20],

$$F_{s}(t) = 1-\tau_{s}\,e^{-\tau_{s}\mu t}, \qquad \text{with } \tau_{s} = \frac{2}{C_{s}^{2}+1}.$$
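The GE$_{KL}$-type law of Corollary 1 is a mixture of an atom at zero (weight $1-\tau_{KL}$) and an exponential with rate $\mu\tau_{KL}$; a Python sketch (parameters illustrative) of the CDF and of inverse-transform sampling:

```python
import math
import random

def cdf_ge_kl(t, mu, tau_kl):
    """Corollary 1: F_{s,KL}(t) = 1 - tau_KL * exp(-mu * tau_KL * t), t >= 0."""
    return 1.0 - tau_kl * math.exp(-mu * tau_kl * t)

def sample_ge_kl(mu, tau_kl, rng):
    """With probability 1 - tau_KL the service time is 0 (the atom u_0);
    otherwise it is exponential with rate mu * tau_KL."""
    if rng.random() < 1.0 - tau_kl:
        return 0.0
    return rng.expovariate(mu * tau_kl)

mu, tau_kl = 1.0, 0.8
assert abs(cdf_ge_kl(0.0, mu, tau_kl) - (1.0 - tau_kl)) < 1e-12  # jump at t = 0

rng = random.Random(42)
mean = sum(sample_ge_kl(mu, tau_kl, rng) for _ in range(200_000)) / 200_000
assert abs(mean - 1.0 / mu) < 0.02     # E(S_KL) = 1/mu, as in Corollary 2
```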
Corollary 2. Following (21), we have

$$E(S_{KL}) = \frac{1}{\mu} \quad (32)$$

$$E(S_{KL}^{2}) = \frac{2}{\mu^{2}\,\tau_{KL}} \quad (33)$$

$$C_{s,KL}^{2} = \frac{E(S_{KL}^{2})}{\bigl(E(S_{KL})\bigr)^{2}} - 1 = \frac{2-\tau_{KL}}{\tau_{KL}} \quad (34)$$

with $\tau_{KL} = \frac{\tau_{s}}{q(0)}$, $\tau_{s} = \frac{2}{1+C_{s}^{2}}$.
Proof
The mean of $S_{KL}$ is given by

$$E(S_{KL}) = \int_{0}^{\infty} t\,f_{s,KL}(t)\,dt = \int_{0}^{\infty} t\,\mu\,(\tau_{KL})^{2}\,e^{-\mu\tau_{KL}t}\,dt = \mu\,(\tau_{KL})^{2}\int_{0}^{\infty} t\,e^{-\mu\tau_{KL}t}\,dt \quad (35)$$

Introducing

$$\Gamma(m) = \int_{0}^{\infty} w^{m-1}\,e^{-w}\,dw$$

and substituting $w = \mu\tau_{KL}t$ in (35), since $\Gamma(2) = 1$, it is implied that

$$E(S_{KL}) = \frac{\mu\,(\tau_{KL})^{2}}{\mu^{2}\,(\tau_{KL})^{2}}\,\Gamma(2) = \frac{1}{\mu}.$$

Moreover,

$$E(S_{KL}^{2}) = \int_{0}^{\infty} t^{2}\,f_{s,KL}(t)\,dt = \int_{0}^{\infty} t^{2}\,\mu\,(\tau_{KL})^{2}\,e^{-\mu\tau_{KL}t}\,dt = \mu\,(\tau_{KL})^{2}\int_{0}^{\infty} t^{2}\,e^{-\mu\tau_{KL}t}\,dt$$

Letting $w = \mu\tau_{KL}t$, $E(S_{KL}^{2})$ is given by (33). Hence $C_{s,KL}^{2}$ follows by (34).
As $q(0)\to 1$, the new derivations (32)-(34) reduce to those in [20],

$$E(S) = \frac{1}{\mu}$$

$$E(S^{2}) = \frac{2}{\mu^{2}\,\tau_{s}}$$

$$C_{s}^{2} = \frac{E(S^{2})}{\bigl(E(S)\bigr)^{2}} - 1 = \frac{2-\tau_{s}}{\tau_{s}}$$

with $\tau_{s} = \frac{2}{1+C_{s}^{2}}$.
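The moment formulas (32)-(34) can be cross-checked by direct numerical integration; a Python sketch (parameters illustrative), integrating only the continuous part of $f_{s,KL}$, since the atom at $t=0$ contributes nothing to the moments:

```python
import math

def ge_kl_moments(mu, tau_kl, upper=200.0, steps=200_000):
    """Midpoint-rule integration of t^k * mu * tau_KL^2 * exp(-mu*tau_KL*t)."""
    h = upper / steps
    m1 = m2 = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        w = mu * tau_kl ** 2 * math.exp(-mu * tau_kl * t) * h
        m1 += t * w
        m2 += t * t * w
    return m1, m2

mu, tau = 2.0, 0.5
m1, m2 = ge_kl_moments(mu, tau)
assert abs(m1 - 1.0 / mu) < 1e-4                            # E(S_KL)   = 1/mu
assert abs(m2 - 2.0 / (mu ** 2 * tau)) < 1e-3               # E(S_KL^2) = 2/(mu^2 tau_KL)
assert abs(m2 / m1 ** 2 - 1.0 - (2.0 - tau) / tau) < 1e-2   # C^2 = (2 - tau_KL)/tau_KL
```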

III. KLD APPLICATIONS TO BIOMETRICS

The use of biometric information was addressed by [25], specifically accelerometer data collected by smartphones, for advanced authentication and on-line user identification. The proposed approach utilizes homological analysis to monitor the inherent walking patterns of different users in the accelerometer data. By transforming the expected persistence diagrams into probability distribution functions and measuring the discrepancy in walking patterns using the KLD, users can be identified with high accuracy.
The user identification system proposed in [25] utilizes accelerometer data. The raw data is first filtered, and the magnitude of the accelerometer readings is calculated. The system then focuses on identifying the user’s activity, specifically walking, by extracting the associated accelerometer data. This is illustrated by Figure 2.
The Sokoto Coventry Fingerprint Dataset (SOCOFing) [26] was used in this study to construct and evaluate different models. SOCOFing is a biometric fingerprint database consisting of 6,000 grayscale images collected from 600 African subjects. The dataset was partitioned into training, validation, and testing sets, and four neural network architectures were trained and evaluated, with the Res-WCAE model outperforming others in terms of noise reduction and achieving state-of-the-art performance. This can be seen from Figure 3 (c.f., [26]).
To improve the ability of the proposed model [26] to make accurate predictions on unseen data, a regularized cost function that includes KLD regularization was introduced. This regularization is achieved by incorporating a prior distribution into the cost function.
Recently [27], LSR has been found to be effective in reducing the variation within a class by minimizing KLD between a uniform distribution and the predicted distribution of a neural network. This regularization method helps improve the performance of the network by providing more robust and generalized predictions, particularly in classification tasks where reducing overconfidence and overfitting are important considerations.
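A minimal sketch of this idea (the logits, target, and smoothing weight below are illustrative, not taken from [27]): up to an additive constant, cross-entropy against a label-smoothed target is equivalent to adding an $\epsilon$-weighted KLD term between the uniform and predicted distributions.

```python
import math

def lsr_loss(logits, target, eps=0.1):
    """Cross-entropy against a label-smoothed target: weight (1 - eps) on the
    true class and eps/K spread uniformly over all K classes."""
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))  # log-sum-exp
    log_p = [l - log_z for l in logits]
    k = len(logits)
    return -sum(((1.0 - eps) * (i == target) + eps / k) * lp
                for i, lp in enumerate(log_p))

plain = lsr_loss([2.0, 0.5, -1.0], target=0, eps=0.0)   # ordinary cross-entropy
smooth = lsr_loss([2.0, 0.5, -1.0], target=0, eps=0.1)
assert smooth > plain    # smoothing penalizes overconfident correct predictions
```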
To select the most informative sub-bands [28], the KL-divergence values of each dataset are normalized and averaged across the three datasets. Higher KL-divergence values indicate more discriminative sub-bands for classification. The sub-bands are chosen based on the highest average KLD values from all datasets, allowing for the identification of sub-bands that are discriminative across different datasets, as shown by Figure 4 (c.f., [28]).
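This selection scheme can be sketched as follows; the sub-band names and KL values below are purely illustrative, not data from [28]:

```python
def select_subbands(kl_tables, top_k=3):
    """Rank sub-bands by average normalized KL divergence across datasets.
    kl_tables: one dict per dataset mapping sub-band name -> KL value."""
    normed = []
    for table in kl_tables:
        total = sum(table.values())
        normed.append({band: v / total for band, v in table.items()})
    bands = list(kl_tables[0].keys())
    avg = {b: sum(t[b] for t in normed) / len(normed) for b in bands}
    return sorted(bands, key=lambda b: avg[b], reverse=True)[:top_k]

# Illustrative wavelet sub-band KL values for three datasets.
ds1 = {"LL": 0.1, "LH": 0.9, "HL": 0.5, "HH": 0.2}
ds2 = {"LL": 0.2, "LH": 0.7, "HL": 0.6, "HH": 0.1}
ds3 = {"LL": 0.1, "LH": 0.8, "HL": 0.4, "HH": 0.3}

best = select_subbands([ds1, ds2, ds3])
assert best[0] == "LH"   # the most discriminative sub-band across all datasets
```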

IV. SUMMARY, RESEARCH QUESTIONS, AND THE NEXT PHASE OF RESEARCH

The ever-challenging problem of finding the closed form expression of the KLD formalism of the stable M / G / 1 queue is solved in this paper. More fundamentally, the corresponding service time PDF and CDF for which the derived KLDF is exact are obtained. Some potential KLD applications to Biometrics are highlighted.
These are a few emerging research questions:
  • Can we unlock the mystery of the threshold of the obtained CDF (c.f., (31))?
  • A really challenging open problem is replacing the KLD proposed in this paper by the corresponding KLD for Ismail’s Entropy (IE) (c.f., [29]). This remains a great challenge to mathematicians worldwide.
  • Is the open problem of finding the info-geometric analysis of the derived KLDF solvable?
  • Replacing the proposed version of KLD in this paper by the corresponding KLD of Ismail’s Entropy (IE) [29], can we get better results to advance Biometrics?
The next phase of research includes solving the above-listed open problems.

Appendix A. KL FORMALISM VS. EME CONSISTENCY AXIOMS

1. Uniqueness 

It translates to, “If the same problem is solved twice in exactly the same way, the same answer is expected in both cases, i.e., the solution should be unique” (c.f., [6]). Let $f_{KL,N}$, $h_{KL,N}$ be two PDFs such that:

$$H_{KL}^{*}[f_{KL,N}] = H_{KL}^{*}[h_{KL,N}] \quad (A.1)$$

Hence,

$$\sum_{n=1}^{N} f_{KL,N}\,\ln\left(\frac{f_{KL,N}}{q(N)}\right) = \sum_{n=1}^{N} h_{KL,N}\,\ln\left(\frac{h_{KL,N}}{q(N)}\right) \quad (A.2)$$
By (A.2), it is implied that:

$$f_{KL,N}\bigl(\ln f_{KL,N}-\ln q(N)\bigr) = h_{KL,N}\bigl(\ln h_{KL,N}-\ln q(N)\bigr) \quad (A.3)$$

Assume for contradiction that $f_{KL,N}\neq h_{KL,N}$. Thus, there exists $\gamma > 1$ satisfying:

$$f_{KL,N} = \gamma\,h_{KL,N} \quad (A.4)$$

Combining (A.3) and (A.4), we get

$$\gamma\,h_{KL,N}\bigl(\ln\gamma+\ln h_{KL,N}-\ln q(N)\bigr) = h_{KL,N}\bigl(\ln h_{KL,N}-\ln q(N)\bigr) \quad (A.5)$$

Since $h_{KL,N}$ is non-zero, (A.5) can be transformed into

$$\gamma\,\ln\gamma = (1-\gamma)\,\ln\left(\frac{h_{KL,N}}{q(N)}\right) \quad (A.6)$$
By default, $q(N)\in(0,1)$. By mathematical analysis, we have the following possibilities:

(1) $q(N) < h_{KL,N}$: it is implied that $\frac{h_{KL,N}}{q(N)} > 1$, so $\ln\left(\frac{h_{KL,N}}{q(N)}\right) > 0$. Since $\gamma > 1$, (A.6) then implies $\gamma\ln\gamma < 0$ (contradiction).

(2) $q(N) = h_{KL,N}$: it is implied that $\frac{h_{KL,N}}{q(N)} = 1$, so $\ln\left(\frac{h_{KL,N}}{q(N)}\right) = 0$. Since $\gamma > 1$, (A.6) then implies $\gamma\ln\gamma = 0$ (contradiction).

(3) $q(N) > h_{KL,N}$: it is implied that $\frac{h_{KL,N}}{q(N)} < 1$, so $\ln\left(\frac{h_{KL,N}}{q(N)}\right) < 0$. Hence, the KL divergence is negative, which contradicts the fact that the KL divergence is non-negative (c.f., [22]).
Therefore, “there cannot be two distinct probability distributions $f_{KL,N}, h_{KL,N}\in\Omega$ having the same KL divergence measure in $\Omega$”. Thus, the KL formalism satisfies the axiom of uniqueness (c.f., [6]).

2. Invariance

The invariance axiom states that “The same solution should be obtained if the same inference problem is solved twice in two different coordinate systems” (c.f., [22]). Following the analytic methodology proposed in [6] and adopting the notation of Subsection 1, let $\Xi$ be a coordinate transformation from the states $\{S_{n}, n = 1,2,\ldots,N\}$ to the states $\{M_{n}, n = 1,2,\ldots,N\}$, where $M = \{M_{n}, n = 1,2,\ldots,N\}$ is the transformed set of $N$ possible discrete states, with $\Xi(p_{KL,N})(M_{n}) = J^{-1}\,p_{KL,N}(S_{n})$, where $J = \frac{\partial(M_{n})}{\partial(S_{n})}$ is the Jacobian of the transformation. Moreover, let $\Xi(\Omega)$ be the closed convex set of all probability distributions defined on $M$ such that $\Xi(p_{KL,N})(M_{n}) > 0$ for all $M_{n}\in M$, $n = 1,2,\ldots,N$, and $\sum_{n=1}^{N}\Xi(p_{KL,N})(M_{n}) = 1$. It can be clearly seen that, on transforming the variables $S_{n}\in S$ into $M_{n}\in M$, the extended KLD is transformation invariant [23], namely

$$H_{KL}^{*}[p_{KL,N}] = H_{KL}^{*}[\Xi(p_{KL,N})] \quad (A.7)$$

Thus, the KL formalism satisfies the axiom of invariance [6], “since the minimum in $\Xi(\Omega)$ corresponds to the minimum in $\Omega$” (c.f., [23]).

3. System Independence

It translates to “It should not matter whether one accounts for independent information about independent systems separately in terms of different probabilities or together in terms of the joint probability” (c.f., [6]). The joint probability for any two independent systems Q and M is:

$$h_{KL,N}(x_{k},y_{n}) = \Pr(X = x_{k}, Y = y_{n}) = f_{KL,N}(x_{k})\,g_{KL,N}(y_{n}) \quad (A.8)$$

Thus, the KLD can be written as

$$H_{KL}^{*}[h_{KL,N}] = \sum_{n=1}^{N} h_{KL,N}\,\ln\frac{h_{KL,N}}{q(N)} = \sum_{n=1}^{N} f_{KL,N}(x_{k})\,g_{KL,N}(y_{n})\,\ln\left(\frac{f_{KL,N}(x_{k})\,g_{KL,N}(y_{n})}{q(N)}\right) \quad (A.9)$$
Assume that

$$H_{KL}^{*}[h_{KL,N}] = H_{KL}^{*}[f_{KL,N}] + H_{KL}^{*}[g_{KL,N}] \quad (A.10)$$

By (A.9) and (A.10), this could be rewritten in the simpler form (writing $f$, $g$ for $f_{KL,N}(x_{k})$, $g_{KL,N}(y_{n})$)

$$fg\,\ln\frac{fg}{q(N)} = f\,\ln\frac{f}{q(N)} + g\,\ln\frac{g}{q(N)} \quad (A.11)$$

Hence,

$$fg\,\bigl(\ln(fg)-\ln q(N)\bigr) = f\,\ln f + g\,\ln g - (f+g)\,\ln q(N)$$

Setting $q(N) = 1$, (A.11) becomes

$$fg\,\ln(fg) = f\,\ln f + g\,\ln g \quad (A.12)$$

(A.12) implies

$$(fg)^{fg} = f^{f}\,g^{g} \quad (A.13)$$

Let $f = g = \frac{1}{2}$ in (A.13). Thus,

$$\left(\frac{1}{4}\right)^{\frac{1}{4}} = \frac{1}{2} \quad (A.14)$$

(A.14) is impossible. Thus, system independence is defied because of long-range interactions.
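The failure exhibited in (A.14) is immediate to verify numerically:

```python
# With f = g = 1/2 and q(N) = 1, system independence would require
# (f*g)^(f*g) == f^f * g^g, i.e. (1/4)^(1/4) == 1/2 -- which is false.
f = g = 0.5
lhs = (f * g) ** (f * g)       # (1/4)^(1/4) = 1/sqrt(2)
rhs = (f ** f) * (g ** g)      # (1/2)^(1/2) * (1/2)^(1/2) = 1/2
assert abs(lhs - 2.0 ** -0.5) < 1e-12
assert abs(rhs - 0.5) < 1e-12
assert abs(lhs - rhs) > 0.1    # the two sides genuinely differ
```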

4. Subset Independence(SI)

SI in a physical interpretation reads as “It does not matter whether one treats an independent subset of system states in terms of a separate conditional density or in terms of the full system density” (c.f., [24]).
In the given context, $x$ denotes an aggregate state of the system, with associated probability distribution $f_{KL}(x)$ giving the likelihood that the random variable $X$ takes the value $x$. The aggregate states $\xi_{i}$, where $i$ ranges from 1 to $L$, can be expressed in this notation as

$$S_{i}^{*}\,f_{KL,i}(x_{ij}) = \xi_{i} \quad (A.15)$$

We have

$$H_{KL}^{*}[f_{KL}] = \sum_{i} S_{i}\,\xi_{i}\,f_{KL,i}(x_{ij}) \quad (A.16)$$

where $f_{KL,i}(x)\in\Omega$. Equation (A.16) will read as

$$H_{KL}^{*}[f_{KL}] = \sum_{i}\xi_{i}\bigl(S_{i}\,f_{KL,i}(x_{ij})\bigr) \quad (A.17)$$
Thus,

$$H_{KL,i}^{*}[f_{KL,i}] = S_{i}^{*}\,f_{KL,i}(x_{ij})\,\ln\left(\frac{f_{KL,i}(x_{ij})}{q(N)}\right) \quad (A.18)$$

Hence,

$$H_{KL,i}^{*}[f_{KL,i}(x_{ij})] = S_{i}^{*}\,\ln\left(\left(\frac{f_{KL,i}(x_{ij})}{q(N)}\right)^{f_{KL,i}(x_{ij})}\right) = \ln\left(S_{i}^{*}\left(\frac{f_{KL,i}(x_{ij})}{q(N)}\right)^{f_{KL,i}(x_{ij})}\right) \quad (A.19)$$

Hence, apparently by the above proof, it holds that:

$$S_{i}^{*}\left(\frac{f_{KL,i}(x_{ij})}{q(N)}\right)^{f_{KL,i}(x_{ij})} > \left(\frac{f_{KL,i}(x_{ij})}{q(N)}\right)^{f_{KL,i}(x_{ij})} > \left(\frac{f_{KL,i}(x_{ij})}{q(N)}\right)^{q(N)} = \frac{\bigl(f_{KL,i}(x_{ij})\bigr)^{q(N)}}{\bigl(q(N)\bigr)^{q(N)}} \quad (A.20)$$

By (A.20), there exists a positive real number $0 < \sigma < 1$ satisfying:

$$\left(\frac{f_{KL,i}(x_{ij})}{q(N)}\right)^{f_{KL,i}(x_{ij})} = \frac{\bigl(f_{KL,i}(x_{ij})\bigr)^{q(N)}}{\sigma\,\bigl(q(N)\bigr)^{q(N)}} \quad (A.21)$$

Combining (A.19) and (A.21) implies

$$H_{KL,i}^{*}[f_{KL,i}(x_{ij})] = S_{i}^{*}\,\ln\left(\left(\frac{f_{KL,i}(x_{ij})}{q(N)}\right)^{f_{KL,i}(x_{ij})}\right) = \frac{\bigl(f_{KL,i}(x_{ij})\bigr)^{q(N)}}{\sigma\,\bigl(q(N)\bigr)^{q(N)}} \quad (A.22)$$

Following (A.22),

$$\bigl(f_{KL,i}(x_{ij})\bigr)^{q(N)} = \sigma\,\bigl(q(N)\bigr)^{q(N)}\,H_{KL,i}^{*}[f_{KL,i}(x_{ij})] \quad (A.23)$$

By (A.23),

$$f_{KL,i}(x_{ij}) = q(N)\,\sigma^{\frac{1}{q(N)}}\,\bigl(H_{KL,i}^{*}[f_{KL,i}(x_{ij})]\bigr)^{\frac{1}{q(N)}} \quad (A.24)$$

Linking (A.16) with (A.24) yields

$$H_{KL}^{*}[f_{KL}] = \sum_{i}\xi_{i}\,q(N)\,\sigma^{\frac{1}{q(N)}}\,\bigl(H_{KL,i}^{*}[f_{KL,i}(x_{ij})]\bigr)^{\frac{1}{q(N)}} \quad (A.25)$$

(A.25) implies that the KLD satisfies subset independence.

References

  1. Z.S. Agranovich, and V.A. Marchenko, “The inverse problem of scattering theory,” Courier Dover Publications; 2020.
  2. H. Naman, N. Hussien, M. Al-dabag, and H. Alrikabi, “Encryption System for Hiding Information Based on Internet of Things,” 2021, p. 172-183.
  3. I.A.Mageed and Q.Zhang, “Formalism of the Rényian Maximum Entropy (RMF) of the Stable M/G/1 queue with Geometric Mean (GeoM) and Shifted Geometric Mean (SGeoM) Constraints with Potential GeoM Applications to Wireless Sensor Networks (WSNs),” electronic Journal of Computer Science and Information Technology, vol. 9, no. 1, 2023, p.31-40. [CrossRef]
  4. N. Mehr, M. Wang, M. Bhatt, M. Schwager, “Maximum-entropy multi-agent dynamic games: Forward and inverse solutions,” IEEE Transactions on Robotics, 2023. [CrossRef]
  5. J. Korbel, “Calibration invariance of the MaxEnt distribution in the maximum entropy principle,” Entropy, vol. 23, no. 1, 2021, p. 96. [CrossRef]
  6. Golan and D.K. Foley, “Understanding the Constraints in Maximum Entropy Methods for Modeling and Inference,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 3, 2022, p. 3994-3998. [CrossRef]
  7. D. Salvatore, “Theory and problems of statistics and econometrics,” 2021.
  8. Martini, S. Schmidt, and W. Del Pozzo, “Maximum entropy spectral analysis: A case study,” arXiv preprint arXiv:2106.09499, 2021. arXiv:2106.09499.
  9. E. Pardo-Igúzquiza and P. A. Dowd, “Maximum entropy spectral analysis of uneven time series in the geosciences,” Bol. Geol. Min, vol. 131, no. 2, 2020, p. 325-337. [CrossRef]
  10. A. Mageed and Q. Zhang, “An Introductory Survey of Entropy Applications to Information Theory, Queuing Theory, Engineering, Computer Science, and Statistical Mechanics, “In 2022 27th IEEE International Conference on Automation and Computing (ICAC), 2022, p. 1-6.
  11. L. Sciullo, A.Trotta, and M. Di Felice, “Design and performance evaluation of a LoRa-based mobile emergency management system (LOCATE), “Ad Hoc Networks, vo. 96, 2020, p. 101993. [CrossRef]
  12. I.A. Mageed and Q. Zhang, “The Rényian-Tsallisian Formalisms of the Stable M/G/1 Queue with Heavy Tails Entropian Threshold Theorems for the Squared Coefficient of Variation, “electronic Journal of Computer Science and Information Technology, vol. 9, no. 1, 2023, p. 7-14. [CrossRef]
  13. A. Mageed, “The Entropian Threshold Theorems for the Steady State Probabilities of the Stable M/G/1 Queue with Heavy Tails with Applications of Probability Density Functions to 6G Networks, “electronic Journal of Computer Science and Information Technology, vol. 9, no. 1, 2023, p. 24-30. [CrossRef]
  14. A. Mageed, Q. Zhang, D. D. Kouvatsos, and N. Shah, “M/G/1 queue with Balking Shannonian Maximum Entropy Closed Form Expression with Some Potential Queueing Applications to Energy, “In IEEE 2022 Global Energy Conference (GEC), 2022, p. 105-110. [CrossRef]
  15. D.D. Kouvatsos and I. A. Mageed, “Formalismes de maximum d’entropie non extensive et inférence inductive d’une file d’attente M/G/1 stable à queues lourdes,” Théorie des files d’attente 2: Théorie et pratique, 2021, p. 183.
  16. A. Mageed and D. D. Kouvatsos, “The Impact of Information Geometry on the Analysis of the Stable M/G/1 Queue Manifold, “In ICORES, 2021, p. 153-160.
  17. A. Mageed and Q. Zhang, “Inductive Inferences of Z-Entropy Formalism (ZEF) Stable M/G/1 Queue with Heavy Tails, “In IEEE 2022 27th International Conference on Automation and Computing (ICAC), 2022, p. 1-6.
  18. I.A. Mageed and Q. Zhang, “Threshold Theorems for the Tsallisian and Rényian (TR) Cumulative Distribution Functions (CDFs) of the Heavy-Tailed Stable M/G/1 Queue with Tsallisian and Rényian Entropic Applications to Satellite Images (SIs),” electronic Journal of Computer Science and Information Technology, vol. 9, no. 1, 2023, p. 41-47. [CrossRef]
  19. A. Mageed, Q. Zhang, and B. Modu, “The Linearity Theorem of Rényian and Tsallisian Maximum Entropy Solutions of The Heavy-Tailed Stable M/G/1 Queueing System entailed with Potential Queueing-Theoretic Applications to Cloud Computing and IoT, “electronic Journal of Computer Science and Information Technology, vol. 9, no. 1, 2023, p. 15-23. [CrossRef]
  20. G. Malik, S. Upadhyaya, and R. Sharma, “Particle swarm optimization and maximum entropy results for 𝑀X/G/1 retrial g-queue with delayed repair,” International Journal of Mathematical, Engineering and Management Sciences, vol. 6, no. 2, 2021, p. 541. [CrossRef]
  21. B. Y. Lichtsteiner, “Delays in queues of queuing systems with stationary requests flows, “T-Comm-Телекoммуникации и Транспoрт, vol. 15, no. 2, 2021, p. 54-58. [CrossRef]
  22. N. Anand and M. A. Saifulla, “A Probabilistic Method to Identify HTTP/1.1 Slow Rate DoS Attacks,” Communication and Intelligent Systems: Proceedings of ICCIS, vol. 2, 2023, p. 689-717.
  23. D. D. Kouvatsos and I. A. Mageed, “Non-Extensive Maximum Entropy Formalisms and Inductive Inferences of Stable M/G/1 Queue with Heavy Tails,” in “Advanced Trends in Queueing Theory,” vol. 2, March 2021, Vladimir Anisimov and Nikolaos Limnios (eds.), Books in ‘Mathematics and Statistics’, Sciences by ISTE & J. Wiley, London, UK.
  24. S. E. Avgerinou, E. A. Anyfadi, G. Michas, and F. Vallianatos, “A Non-Extensive Statistical Physics View of the Temporal Properties of the Recent Aftershock Sequences of Strong Earthquakes in Greece, “Applied Sciences, vol. 13, no. 3, 2023, p. 1995. [CrossRef]
  25. Yan, L. Zhang, and H. C. Wu, “Advanced Homological Analysis for Biometric Identification Using Accelerometer, “IEEE Sensors Journal, vol. 21, no. 6, 2020, p. 7954-7963. [CrossRef]
  26. Y.Liang and W. Liang, “ResWCAE: Biometric Pattern Image Denoising Using Residual Wavelet-Conditioned Autoencoder, “arXiv preprint arXiv:2307.12255, 2023. arXiv:2307.12255.
  27. Y. G. Jung, C. Y. Low, J. Park, and A.B. Teoh, “Periocular recognition in the wild with generalized label smoothing regularization, “IEEE Signal Processing Letters, vol. 27, 2020, p. 1455-1459. [CrossRef]
  28. P. Aghdaie, B. Chaudhary, S. Soleymani, J. Dawson, and N. M. Nasrabadi, “Detection of morphed face images using discriminative wavelet sub-bands, “In 2021 IEEE International Workshop on Biometrics and Forensics (IWBF), 2021, p. 1-6. [CrossRef]
  29. A. Mageed and Q. Zhang, “An Information Theoretic Unified Global Theory For a Stable M/G/1 Queue With Potential Maximum Entropy Applications to Energy Works,” In IEEE 2022 Global Energy Conference (GEC), 2022, p. 300-305. [CrossRef]
Figure 2. The diagram illustrates the flow of data processing for user identification, including dynamic segmentation and the extraction of reliable signal features for identification purposes[25].
Figure 3. The samples shown depict original figures, noisy figures, and denoised figures for varying levels of noise.
Figure 4. The average KL-divergence values in each sub-band for the three datasets, with the results displayed in green. KL-divergence is a measure of dissimilarity between probability distributions, and in this context, it is used to assess the discriminative power of different sub-bands in the datasets [28].
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.