An Elementary Theory of Indefinite Summation Using Integral Transforms

Preprint. This version is not peer-reviewed. Submitted and posted: 18 March 2025.
Abstract
We present a novel approach to indefinite summation through integral transform methods. We first establish the Laplace transform's utility in solving complex summation problems, then develop a generalized framework of integral transforms that provides a systematic approach to evaluating nontrivial indefinite sums. Using the continuous binomial transform as a central example, we derive key identities that demonstrate the framework's effectiveness. We also introduce a novel technique for addressing varying step sizes as well as nonlinear variable transformations in discrete indefinite summations. Our results show that integral transforms, with their flexible choice of kernels, serve as powerful tools for analyzing indefinite summations.

1. Introduction

The theory of indefinite summation is well known in Discrete Calculus. We begin by defining the operators and formally deriving a known result, the Euler-Maclaurin Formula:
$$\Delta f(x) = f(x+1) - f(x) = g(x), \tag{1}$$
$$\Delta^{-1} g(x) = f(x). \tag{2}$$
Let
$$f(x) = \sum_{n=0}^{\infty} \frac{D_x^n f\big|_a}{n!}\, (x-a)^n,$$
$$f(x+1) = \sum_{n=0}^{\infty} \frac{D_x^n f\big|_a}{n!}\, (x+1-a)^n.$$
Setting a = x , we have:
$$f(x+1) = \sum_{n=0}^{\infty} \frac{D_x^n f\big|_x}{n!}\, (1)^n = e^{D_x} f(x).$$
Therefore:
$$f(x+1) - f(x) = \big(e^{D_x} - 1\big) f(x) = g(x),$$
$$f(x) = \frac{g(x)}{e^{D_x} - 1} = \sum_{n=0}^{\infty} \frac{B_n}{n!}\, (D_x)^{n-1} g(x) = (D_x)^{-1} g(x) - \frac{1}{2}\, g(x) + \sum_{n=1}^{\infty} \frac{B_{2n}}{(2n)!}\, (D_x)^{2n-1} g(x) + C,$$
where B n are the Bernoulli numbers, and C is the constant term that becomes the relevant remainder term.
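As a quick sanity check of the truncated formula (the code below and the choice $g(x) = x^3$ are illustrative, not part of the derivation), the Bernoulli-weighted terms through $B_4$ already give an exact polynomial antidifference:

```python
from fractions import Fraction as F

# Truncated Euler-Maclaurin antidifference for g(x) = x^3:
# f(x) = D^{-1} g - (1/2) g + (B_2/2!) g'(x) + (B_4/4!) g'''(x) + C
def f(x):
    x = F(x)
    return (x**4 / 4                      # D_x^{-1} g
            - x**3 / 2                    # B_1 = -1/2 term
            + F(1, 6) / 2 * 3 * x**2      # B_2/2! * g'(x)
            + F(-1, 30) / 24 * 6)         # B_4/4! * g'''(x), a constant

# The forward difference of f must reproduce g(x) = x^3 exactly:
for x in range(0, 6):
    assert f(x + 1) - f(x) == F(x)**3
```

The constant produced by the $B_4$ term is absorbed into $C$ and cancels in the forward difference.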

2. Generalizing the Method in [1]

2.1. Using the Laplace Transform

In [1], Stenlund utilizes the Laplace transform to develop a method for solving finite and infinite series. For our purposes, we consider only the indefinite summation, and develop a similar result:
$$\Delta^{-1} g(x) = \sum_x g(x).$$
Assuming suitable convergence conditions hold, we may write g(x) as the Laplace transform of some function G(t):
$$g(x) = \int_0^{\infty} e^{-xt}\, G(t)\, dt. \tag{3}$$
Thus:
$$\sum_x g(x) = \sum_x \int_0^{\infty} e^{-xt}\, G(t)\, dt = \int_0^{\infty} \sum_x e^{-xt}\, G(t)\, dt.$$
It is known that
$$\Delta_x \left( \frac{e^{-xt}}{e^{-t} - 1} + C^* \right) = e^{-xt},$$
and so the indefinite sum of the kernel gives
$$\int_0^{\infty} \left( \frac{e^{-xt}}{e^{-t} - 1} + C^* \right) G(t)\, dt.$$
We may drop the constant, as
$$\sum_x g(x) = \int_0^{\infty} \frac{e^{-xt}}{e^{-t} - 1}\, G(t)\, dt + \int_0^{\infty} C^*\, G(t)\, dt = \int_0^{\infty} \frac{e^{-xt}}{e^{-t} - 1}\, G(t)\, dt + C. \tag{4}$$
The identities (3) and (4) already suffice to derive closed forms of various summations, as the Laplace transform lends itself nicely to a wide class of functions.
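As a numerical sketch of identity (4) (the choice $G(t) = t\,e^{-t}$, giving $g(x) = 1/(x+1)^2$, is our own illustrative example, not from [1]):

```python
import math

# g(x) = 1/(x+1)^2 is the Laplace transform of G(t) = t e^{-t}.
# Identity (4) then gives an antidifference
#   F(x) = ∫_0^∞ e^{-xt} G(t) / (e^{-t} - 1) dt,
# whose forward difference should recover g(x).

def integrand(x, t):
    if t == 0.0:
        return -1.0                       # limit of t/(e^{-t}-1) as t -> 0
    return math.exp(-x * t) * t * math.exp(-t) / (math.exp(-t) - 1.0)

def F(x, T=60.0, n=60000):
    # composite Simpson rule on [0, T]; the tail beyond T is negligible
    h = T / n
    s = integrand(x, 0.0) + integrand(x, T)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * integrand(x, k * h)
    return s * h / 3

x = 2.0
assert abs((F(x + 1) - F(x)) - 1.0 / (x + 1) ** 2) < 1e-6
```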

2.2. Analyzing the General Transform

In this method, however, there are ultimately two requirements. First, after introducing the transform, we would like the indefinite summation over the kernel to be sufficiently simple; second, we would like a well-behaved inverse function G(t) to exist.
Consider:
$$g(x) = \int_a^b K(x,t)\, G(t)\, dt.$$
Then we similarly have that
$$\sum_x g(x) = \sum_x \int_a^b K(x,t)\, G(t)\, dt = \int_a^b \sum_x K(x,t)\, G(t)\, dt. \tag{5}$$
Immediately, another intuitive yet slightly unnatural choice is $K(x,t) = \binom{x}{t}$, due to the well-known summation $\sum_x \binom{x}{t} = \binom{x}{t+1}$. Taking $a = -\infty$ (with the integral over the whole real line) immediately provides some interesting connections:
$$I_x = \int_{-\infty}^{\infty} \binom{x}{t}\, dt = \int_{-\infty}^{\infty} \left[ \binom{x-1}{t-1} + \binom{x-1}{t} \right] dt = 2 \int_{-\infty}^{\infty} \binom{x-1}{t}\, dt.$$
We have $I_0 = \int_{-\infty}^{\infty} \binom{0}{t}\, dt = \int_{-\infty}^{\infty} \frac{\sin(\pi t)}{\pi t}\, dt = 1$ and $I_x = 2 I_{x-1}$; hence $I_x = 2^x$.
Thus, as $2^x = \int_{-\infty}^{\infty} \binom{x}{t}\, dt$, we must have
$$\sum_x 2^x = \int_{-\infty}^{\infty} \binom{x}{t+1}\, dt = 2^x.$$
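This integral can be checked numerically. The sketch below (with an illustrative noninteger $x = 3.5$) evaluates $\int \binom{x}{t}\, dt$ by quadrature, using the reflection formula to keep $1/\Gamma$ finite at the poles:

```python
import math

def rgamma(z):
    """1/Gamma(z), finite at the poles of Gamma (z = 0, -1, -2, ...)."""
    if z > 0.5:
        return 1.0 / math.gamma(z)
    # reflection formula: 1/Gamma(z) = sin(pi z) * Gamma(1 - z) / pi
    return math.sin(math.pi * z) * math.gamma(1.0 - z) / math.pi

def binom(x, t):
    """Generalized binomial coefficient Gamma(x+1)/(Gamma(t+1) Gamma(x-t+1))."""
    return math.gamma(x + 1.0) * rgamma(t + 1.0) * rgamma(x - t + 1.0)

# Riemann-sum approximation of I_x = ∫ binom(x, t) dt over [-60, 60];
# the integrand decays like |t|^(-x-1), so the truncated tail is tiny here.
x, n = 3.5, 24000
h = 120.0 / n
total = h * sum(binom(x, -60.0 + k * h) for k in range(n + 1))
assert abs(total - 2.0 ** x) < 1e-3
```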
Another example, utilizing the absorption identity $t \binom{x}{t} = x \binom{x-1}{t-1}$:
$$I_x = \int_{-\infty}^{\infty} t \binom{x}{t}\, dt = x \int_{-\infty}^{\infty} \binom{x-1}{t-1}\, dt = x \int_{-\infty}^{\infty} \binom{x-1}{t}\, dt = x\, 2^{x-1}.$$
Thus:
$$\sum_x x\, 2^{x-1} = \int_{-\infty}^{\infty} \binom{x}{t+1}\, t\, dt = \int_{-\infty}^{\infty} \binom{x}{t}\, (t-1)\, dt = x\, 2^{x-1} - 2^x,$$
as expected. By offloading the summation onto a separate kernel, the sum is solved purely through symbolic manipulation and the evaluation of an integral.
Based on these few examples, we now develop a brief theory of this continuous binomial transform.

3. The Continuous Binomial Transform

Consider the following integral transform:
$$\mathcal{B}\{f\}(x) = F(x) = \int_{-\infty}^{\infty} \binom{x}{t}\, f(t)\, dt, \qquad f(t) = \int_{-\infty}^{\infty} K(x,t)\, \mathcal{B}\{f\}(x)\, dx, \quad \text{where } \binom{x}{t} = \frac{\Gamma(x+1)}{\Gamma(t+1)\, \Gamma(x-t+1)}.$$
We proceed in the usual manner:
$$f(t) = \int K(x,t) \int \binom{x}{\tau}\, f(\tau)\, d\tau\, dx = \int f(\tau) \int K(x,t)\, \binom{x}{\tau}\, dx\, d\tau.$$
And we would like the inner integral to satisfy $\int K(x,t)\binom{x}{\tau}\, dx = \delta(t - \tau)$. We are motivated by the discrete binomial inversion formula:
$$F(n) = \sum_m \binom{n}{m} f(m), \tag{6}$$
$$f(n) = \sum_m (-1)^{n-m} \binom{n}{m} F(m). \tag{7}$$
(see Equation (5.48) in [2], p. 192), we proceed with our derivation.
Proof. We will prove that the desired kernel K ( x , t ) is given by
$$K(x,t) = (-1)^{t-x} \binom{t}{x}.$$
We have
$$f(t) = \int f(\tau) \int K(x,t)\, \binom{x}{\tau}\, dx\, d\tau = \int f(\tau) \int (-1)^{t-x} \binom{t}{x} \binom{x}{\tau}\, dx\, d\tau = \int f(\tau)\, \binom{t}{\tau} \int (-1)^{t-x} \binom{t-\tau}{x-\tau}\, dx\, d\tau,$$
using the trinomial revision $\binom{t}{x}\binom{x}{\tau} = \binom{t}{\tau}\binom{t-\tau}{x-\tau}$.
Examining the inner integral:
$$\int (-1)^{t-x} \binom{t-\tau}{x-\tau}\, dx.$$
Setting $j = x - \tau$, we have $dj = dx$:
$$\int (-1)^{t-(j+\tau)} \binom{t-\tau}{j}\, dj.$$
Rewriting the exponent:
$$(-1)^{t-\tau} \int (-1)^{j} \binom{t-\tau}{j}\, dj.$$
It suffices to show that
$$I = \int (-1)^{j} \binom{t-\tau}{j}\, dj = \delta(t-\tau).$$
This is relatively simple to show. Let $m = t - \tau$. Consider:
$$(1+x)^m = \sum_{k \ge 0} \binom{m}{k} x^k.$$
Formally, we utilize the Residue Theorem to obtain:
$$\binom{m}{j} = \frac{1}{2\pi i} \oint_C \frac{(1+z)^m}{z^{j+1}}\, dz \tag{8}$$
$$= \frac{1}{2\pi i} \int_{-\pi}^{\pi} \frac{(1+e^{i\theta})^m}{e^{i\theta(j+1)}}\, i e^{i\theta}\, d\theta \tag{9}$$
$$= \frac{1}{2\pi} \int_{-\pi}^{\pi} (1+e^{i\theta})^m\, e^{-ij\theta}\, d\theta \tag{10}$$
We also note that this extends to all real values of m and j:
$$(1+e^{i\theta})^m = (2\cos(\theta/2))^m\, e^{im\theta/2}, \qquad \binom{m}{j} = \frac{2^m}{2\pi} \int_{-\pi}^{\pi} \cos(\theta/2)^m\, e^{i(m/2 - j)\theta}\, d\theta, \qquad \binom{m}{j} = \frac{2^{m-1}}{\pi} \int_{-\pi}^{\pi} \cos(\theta/2)^m \cos\big((m/2 - j)\theta\big)\, d\theta \tag{11}$$
which can then be evaluated numerically since the imaginary part is zero due to symmetry. This is well-defined.
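Formula (11) can indeed be evaluated numerically; a minimal sketch, comparing against the direct Gamma-function evaluation at illustrative real values $m = 2.5$, $j = 1.2$:

```python
import math

def binom_direct(m, j):
    # direct Gamma evaluation (all Gamma arguments are positive for these m, j)
    return math.gamma(m + 1.0) / (math.gamma(j + 1.0) * math.gamma(m - j + 1.0))

def binom_integral(m, j, n=100000):
    # formula (11): (2^(m-1)/pi) * ∫_{-pi}^{pi} cos(θ/2)^m cos((m/2 - j)θ) dθ
    h = 2.0 * math.pi / n
    s = 0.0
    for k in range(n):
        theta = -math.pi + (k + 0.5) * h          # midpoint rule
        s += math.cos(theta / 2.0) ** m * math.cos((m / 2.0 - j) * theta)
    return 2.0 ** (m - 1.0) / math.pi * s * h

m, j = 2.5, 1.2
assert abs(binom_integral(m, j) - binom_direct(m, j)) < 1e-6
```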
Then
$$I = \int (-1)^j \binom{m}{j}\, dj = \int e^{i\pi j}\, \frac{1}{2\pi} \int_{-\pi}^{\pi} (1+e^{i\theta})^m\, e^{-ij\theta}\, d\theta\, dj = \frac{1}{2\pi} \int_{-\pi}^{\pi} (1+e^{i\theta})^m \int e^{i(\pi - \theta)j}\, dj\, d\theta.$$
We recognize the inner integral as a basic Fourier transform, giving $2\pi\, \delta(\pi - \theta)$:
$$2\pi \cdot \frac{1}{2\pi} \int_{-\pi}^{\pi} (1+e^{i\theta})^m\, \delta(\pi - \theta)\, d\theta = (1 + e^{i\pi})^m = 0^m = \delta_m = \delta(t - \tau),$$
interpreting $0^m$ as the Kronecker delta $\delta_m$ in $m = t - \tau$.
Therefore,
$$f(t) = \int f(\tau)\, \binom{t}{\tau} \int (-1)^{t-x} \binom{t-\tau}{x-\tau}\, dx\, d\tau = \int f(\tau)\, \binom{t}{\tau}\, (-1)^{t-\tau} \int (-1)^{j} \binom{t-\tau}{j}\, dj\, d\tau = \int f(\tau)\, \binom{t}{\tau}\, (-1)^{t-\tau}\, \delta(t-\tau)\, d\tau = f(t).$$
This completes the proof with $K(x,t) = (-1)^{t-x} \binom{t}{x}$. Thus:
$$\mathcal{B}\{f\}(x) = F(x) = \int_{-\infty}^{\infty} \binom{x}{t}\, f(t)\, dt, \tag{12}$$
$$f(t) = \int_{-\infty}^{\infty} e^{i\pi(t-x)}\, \binom{t}{x}\, \mathcal{B}\{f\}(x)\, dx. \tag{13}$$

3.1. Example Usage

To demonstrate its use, we fully work out the case $\mathcal{B}\{f\}(x) = 2^x$.
$$f(t) = \int e^{i\pi(t-x)}\, \binom{t}{x}\, 2^x\, dx = \int e^{i\pi(t-x)}\, \frac{1}{2\pi i} \oint_C \frac{(1+z)^t}{z^{x+1}}\, dz\; 2^x\, dx = e^{i\pi t} \int_{-\pi}^{\pi} (1+e^{i\theta})^t\, \frac{1}{2\pi} \int e^{-i\pi x}\, e^{-i\theta x}\, 2^x\, dx\, d\theta$$
$$= e^{i\pi t} \int_{-\pi}^{\pi} (1+e^{i\theta})^t\, \frac{1}{2\pi} \int e^{-ix(\theta + \pi + i\log(2))}\, dx\, d\theta = e^{i\pi t} \int_{-\pi}^{\pi} (1+e^{i\theta})^t\, \delta\big(\theta + \pi + i\log(2)\big)\, d\theta$$
$$= e^{i\pi t} \big(1 + e^{i(-\pi - i\log(2))}\big)^t = e^{i\pi t}\, (1 + 2(-1))^t = e^{i\pi t}\, e^{-i\pi t} = 1,$$
where we take the branch $(-1)^t = e^{-i\pi t}$, consistent with $\int_{-\infty}^{\infty} \binom{x}{t} \cdot 1\, dt = 2^x$.
We will also illustrate the case $\mathcal{B}\{f\}(x) = \sin(x)$:
$$f(t) = \int e^{i\pi(t-x)}\, \binom{t}{x}\, \sin(x)\, dx = e^{i\pi t} \int_{-\pi}^{\pi} (1+e^{i\theta})^t\, \frac{1}{2\pi} \int e^{-i\pi x}\, e^{-i\theta x}\, \sin(x)\, dx\, d\theta$$
$$= e^{i\pi t} \int_{-\pi}^{\pi} (1+e^{i\theta})^t\, \frac{1}{2\pi} \int e^{-i(\pi+\theta)x}\, \frac{e^{ix} - e^{-ix}}{2i}\, dx\, d\theta = e^{i\pi t} \int_{-\pi}^{\pi} (1+e^{i\theta})^t\, \frac{1}{4\pi i} \big( 2\pi\, \delta(\pi + \theta - 1) - 2\pi\, \delta(\pi + \theta + 1) \big)\, d\theta.$$
We choose $C$ as a unit circle parametrized so that $\theta \in [-\pi - 1,\, \pi - 1]$ and both deltas contribute:
$$= \frac{1}{2i} \left[ (e^{i} - 1)^t - (e^{-i} - 1)^t \right] = \frac{1}{2i}\, [z - \bar{z}] = \Im(z) = \Im\big( (e^{i} - 1)^t \big)$$
$$= \Im\Big( \big( e^{i/2} (e^{i/2} - e^{-i/2}) \big)^t \Big) = \Im\Big( \big( e^{i/2}\, (2i) \sin(1/2) \big)^t \Big) = (2\sin(1/2))^t \sin\!\left( \tfrac{\pi+1}{2}\, t \right).$$
Now, evaluating the sum is relatively tricky. We will develop one more identity to help, utilizing the same methods as before:
$$\int \binom{x}{t}\, a^t\, dt = \int \frac{1}{2\pi i} \oint_C \frac{(1+z)^x}{z^{t+1}}\, dz\; a^t\, dt = \int_{-\pi}^{\pi} (1+e^{i\theta})^x\, \frac{1}{2\pi} \int e^{-i\theta t + t\log(a)}\, dt\, d\theta = \int_{-\pi}^{\pi} (1+e^{i\theta})^x\, \delta\big(\theta + i\log(a)\big)\, d\theta = (1+a)^x.$$
And we have, relatively simply:
$$\sum_x \sin(x) = \sum_x \int \binom{x}{t}\, f(t)\, dt = \int \binom{x}{t+1}\, (2\sin(1/2))^t \sin\!\left( \tfrac{\pi+1}{2} t \right) dt = \Im\left( \int \binom{x}{t+1}\, (2\sin(1/2))^t\, e^{i \frac{\pi+1}{2} t}\, dt \right).$$
Let $A = 2\sin(1/2)$ and $B = \frac{\pi+1}{2}$:
$$= \Im\left( \int \binom{x}{t+1}\, (A e^{iB})^t\, dt \right) = \Im\left( (A e^{iB})^{-1} \int \binom{x}{t}\, (A e^{iB})^t\, dt \right) = \frac{1}{A}\, \Im\left( e^{-iB}\, (1 + A e^{iB})^x \right).$$
Notice that
$$1 + 2\sin(1/2)\, e^{i\frac{\pi+1}{2}} = 1 + 2i\sin(1/2)\, e^{i/2} = 1 - 2\sin^2(1/2) + 2i\sin(1/2)\cos(1/2) = \cos(1) + i\sin(1) = e^{i}.$$
Thus:
$$\sum_x \sin(x) = \frac{1}{2\sin(1/2)}\, \Im\left( e^{ix}\, e^{-i\frac{\pi+1}{2}} \right) = \frac{1}{2\sin(1/2)}\, \Im\left( e^{i\left( x - \frac{\pi+1}{2} \right)} \right) = \frac{\sin\!\left( x - \frac{\pi+1}{2} \right)}{2\sin(1/2)}.$$
This illustrates the versatility of the method. By utilizing integral transforms such as this one, we obtain a mechanical method that can evaluate the indefinite summations of various functions purely analytically.
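The closed form just obtained is easy to test numerically: its forward difference should reproduce $\sin(x)$, and the definite sum should telescope (an illustrative check, not part of the derivation):

```python
import math

def F(x):
    # closed form derived above for the indefinite sum of sin(x)
    return math.sin(x - (math.pi + 1) / 2) / (2 * math.sin(0.5))

# forward difference must return sin(x)
for x in [0.0, 1.3, 2.7, 10.0]:
    assert abs((F(x + 1) - F(x)) - math.sin(x)) < 1e-12

# a definite sum telescopes against the direct partial sum
N = 20
partial = sum(math.sin(k) for k in range(N + 1))
assert abs((F(N + 1) - F(0)) - partial) < 1e-10
```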

4. Change of Variables

In indefinite summation, it is often useful to focus solely on either the even or odd terms. However, adjusting the step size in the discrete setting can be a challenging problem. In this paper, we introduce a method that addresses these scaling factors effectively. We start by examining how the functional equation is affected by these changes.

4.1. The Problem with Scaling

Consider:
$$\Delta^{-1}\big(g(ax)\big) = f(x), \tag{14}$$
$$g(ax) = f(x+1) - f(x), \tag{15}$$
$$g(x) = f\!\left( \tfrac{1}{a} x + 1 \right) - f\!\left( \tfrac{1}{a} x \right). \tag{16}$$
We may define $\Delta_a^{-1}\big(g(x)\big) = f(x)$.
If we analyze this functional equation formally, we obtain a result akin to the Euler-Maclaurin Formula:
$$f(ax) = f\big( e^{\log x + \log a} \big) = e^{\log(a)\, D_{\log x}}\, f\big( e^{\log x} \big) = e^{\log(a)\, x D_x}\, f(x),$$
$$f\!\left( \tfrac{1}{a} x + 1 \right) = e^{\log(1/a)\, x D_x}\, e^{D_x}\, f(x).$$
We have:
$$g(x) = f\!\left( \tfrac{1}{a} x + 1 \right) - f\!\left( \tfrac{1}{a} x \right) = e^{\log(1/a)\, x D_x}\, \big( e^{D_x} - 1 \big)\, f(x).$$
Thus:
$$f(x) = e^{\log(a)\, x D_x}\, \frac{1}{e^{D_x} - 1}\, g(x). \tag{17}$$
The Laurent series expansion of this operator can now easily be obtained to a few terms, giving an analogous Euler-Maclaurin formula. For a = 2, we have:
$$\left( \sum_{n=0}^{\infty} \frac{(\log(2)\, x D_x)^n}{n!} \right) \left( \sum_{n=0}^{\infty} \frac{B_n}{n!}\, (D_x)^{n-1} \right) g(x) = \left( D_x^{-1} + \left( \log(2)\, x - \tfrac{1}{2} \right) + \frac{(\log(2)\, x)^2 - \log(2)\, x + \tfrac{1}{6}}{2}\, D_x + O(D_x^2) \right) g(x) + C.$$
This generalization is consistent and relatively useful as an approximation of hopelessly complex sums. However, relating this back to the central method is not so clear.
If we can find a specific function, whose indefinite summation is known for a generalized step size, we will have the power to extend it to multiple functions via our integral transforms.
An immediate natural choice goes back to the exponential function in the Laplace transform method of Section 2.1, since generalizing the step size simply changes the geometric series, whose indefinite summation is easily obtained elementarily. Indeed:
$$\sum_x e^{-xt} = \frac{e^{-xt}}{e^{-t} - 1} \quad\longrightarrow\quad \sum_x e^{-nxt} = \frac{e^{-nxt}}{e^{-nt} - 1} \quad \text{under } x \to nx,$$
which is verified immediately from the functional definitions (1) and (2). Therefore, similarly to (4):
$$\sum_x g(nx) = \int_0^{\infty} \sum_x e^{-nxt}\, G(t)\, dt = \int_0^{\infty} \frac{e^{-nxt}}{e^{-nt} - 1}\, G(t)\, dt + C. \tag{18}$$
The key consequence is that G(t) is the very same inverse Laplace transform as in (3), making the process of finding the inverse substantially less complex.
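A numerical sketch of (18) with step size $n = 2$, reusing the illustrative inverse transform $G(t) = t\,e^{-t}$ from earlier (so $g(x) = 1/(x+1)^2$ and $\Delta F(x) = g(2x)$):

```python
import math

# F(x) = ∫_0^∞ e^{-2xt} G(t) / (e^{-2t} - 1) dt per identity (18);
# its forward difference in x should equal g(2x) = 1/(2x+1)^2.

def integrand(x, t):
    if t == 0.0:
        return -0.5                       # limit of t/(e^{-2t}-1) as t -> 0
    return math.exp(-2 * x * t) * t * math.exp(-t) / (math.exp(-2 * t) - 1.0)

def F(x, T=60.0, n=60000):
    # composite Simpson rule on [0, T]
    h = T / n
    s = integrand(x, 0.0) + integrand(x, T)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * integrand(x, k * h)
    return s * h / 3

x = 1.0
assert abs((F(x + 1) - F(x)) - 1.0 / (2 * x + 1) ** 2) < 1e-6
```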
A Concrete Example: Suppose we are looking to evaluate
$$\sum_{\substack{x \text{ even} \\ 0 \le x \le N}} \sin(x).$$
We utilize the inverse Laplace transform: writing $\sin(x) = \frac{e^{ix} - e^{-ix}}{2i}$, we obtain
$$\mathcal{L}^{-1}\{\sin(x)\}(t) = \frac{\delta(t+i) - \delta(t-i)}{2i},$$
so that
$$f(x) = \int_0^{\infty} \frac{e^{-2xt}}{e^{-2t} - 1} \cdot \frac{\delta(t+i) - \delta(t-i)}{2i}\, dt = -\frac{\cos(2x-1)}{2\sin(1)},$$
$$\sum_{\substack{x \text{ even} \\ 0 \le x \le N}} \sin(x) = \sum_{x=0}^{N/2} \sin(2x) = \frac{-\cos\!\big( 2\left( \tfrac{N}{2} + 1 \right) - 1 \big) + \cos(1)}{2\sin(1)},$$
which can be verified immediately through computational means.
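For instance (a sketch of that verification, with the signs written out as in the derivation above, and $N$ assumed even):

```python
import math

# closed form derived above for the partial sum of sin over even x, N even
def even_sin_sum(N):
    return (-math.cos(2 * (N / 2 + 1) - 1) + math.cos(1)) / (2 * math.sin(1))

for N in [0, 4, 10, 40]:
    direct = sum(math.sin(x) for x in range(0, N + 1, 2))
    assert abs(even_sin_sum(N) - direct) < 1e-12
```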
Summing odd terms becomes very easy as well. Indeed, given any scaling factor $n$ in $f(nx)$, we can compute sums for all shifts $f(nx + m)$, $0 \le m < n$. Here, for instance:
$$\sum_{\substack{x \text{ odd} \\ 0 \le x \le N}} \sin(x) = \sum_{x=0}^{N/2 - 1} \sin(2x+1).$$
We make the shift substitution (which is completely valid, as it does not change the step size) $x \mapsto x - \tfrac{1}{2}$:
$$\sum_{x=1/2}^{N/2 - 1/2} \sin(2x) = f\!\left( \tfrac{N}{2} + \tfrac{1}{2} \right) - f\!\left( \tfrac{1}{2} \right).$$
Thus, we have a general method of dealing with step size: first make the scaling substitution and apply (18); then shift the bounds based on the application.
This result is extremely useful because of how easy the exponential function is to sum. If we consider, for instance, the continuous binomial transform method, we are unable to proceed:
$$\sum_x \binom{x}{t} = \binom{x}{t+1} \quad\longrightarrow\quad \sum_x \binom{nx}{t} \quad \text{under } x \to nx.$$
Unfortunately, this summand does not have a very simple antidifference for $n > 1$.

4.2. A General Change of Variables

We now consider a generalized substitution. Indeed, let
$$\Delta^{-1}\big(g(h(x))\big) = f(x), \tag{19}$$
$$g(h(x)) = f(x+1) - f(x), \tag{20}$$
$$g(x) = f\big( h^{-1}(x) + 1 \big) - f\big( h^{-1}(x) \big), \tag{21}$$
and let
$$\Delta_{h(x)}^{-1}\, g(x) = f(x). \tag{22}$$
We may attempt to analyze this functional equation formally again. We wish to construct an operator G such that, for any suitable function a ( x ) ,
$$\mathcal{G}\, a(x) = a\big(b(x)\big).$$
Our strategy is to linearize the mapping through the Abel functional equation.

4.3. The Abel Functional Equation

We seek a function c ( x ) satisfying
$$c\big(b(x)\big) = c(x) + 1.$$
If such a function c ( x ) exists on the domain of interest, it allows us to “linearize” the action of b. Now, define a new function F by
$$F\big(c(x)\big) = a(x).$$
Then we have
$$a(x) = F\big(c(x)\big) \quad \text{and} \quad a\big(b(x)\big) = F\big(c(b(x))\big).$$
Using the Abel equation, this becomes
$$a\big(b(x)\big) = F\big(c(x) + 1\big).$$
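As a concrete check of this linearization (with the illustrative choice $b(x) = Ax$, for which $c(x) = \log(x)/\log(A)$ is a known Abel function):

```python
import math

A = 3.0                                    # illustrative scaling factor
b = lambda x: A * x
c = lambda x: math.log(x) / math.log(A)    # Abel function for b(x) = A x
c_inv = lambda u: A ** u

for x in [0.5, 1.0, 7.2]:
    assert abs(c(b(x)) - (c(x) + 1.0)) < 1e-12   # Abel equation
    assert abs(c_inv(c(x) + 1.0) - b(x)) < 1e-9  # c^{-1}(c(x)+1) = b(x)

# linearization: with F = a o c^{-1}, the composition a(b(x)) becomes a
# unit translation in the c-variable: a(b(x)) = F(c(x) + 1)
a = lambda x: x ** 2 + 1.0
F = lambda u: a(c_inv(u))
x0 = 2.0
assert abs(a(b(x0)) - F(c(x0) + 1.0)) < 1e-9
```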

4.4. Translation in the c-Variable

Consider the translation operator T acting on functions of c ( x ) = u :
$$T F(u) = F(u+1) = \big( e^{\frac{d}{du}}\, F \big)(u),$$
as we have shown. Thus, we obtain
$$a\big(b(x)\big) = F\big(c(x) + 1\big) = e^{\frac{d}{du}}\, F\big(c(x)\big).$$
We then have the operator G acting on a by
$$\mathcal{G}\, a(x) = e^{\frac{d}{du}}\, F\big(c(x)\big) = \big( e^{\frac{d}{du}}\, a \big)(x).$$
Using the chain rule, note that
$$\frac{d}{du} = \frac{d}{dc(x)} = \frac{1}{c'(x)}\, \frac{d}{dx}.$$
Thus, the operator G may also be written as
$$\mathcal{G} = e^{\frac{1}{c'(x)} \frac{d}{dx}}$$
such that
$$\mathcal{G}\, a(x) = a\big(b(x)\big),$$
under the condition that
$$c^{-1}\big(c(x) + 1\big) = b(x).$$
(Notice that for $b(x) = Ax$, we have $c(x) = \frac{\log(x)}{\log(A)}$, giving the desired operator found in the formal analysis of (17).) Thus:
$$g(x) = f\big( h^{-1}(x) + 1 \big) - f\big( h^{-1}(x) \big) = e^{\frac{1}{c'(x)} D_x}\, \big( e^{D_x} - 1 \big)\, f(x), \qquad c^{-1}\big(c(x) + 1\big) = h^{-1}(x).$$
And we have that
$$f(x) = e^{-\frac{1}{c'(x)} D_x}\, \frac{1}{e^{D_x} - 1}\, g(x),$$
whose Laurent series can be analyzed explicitly depending on the function $h^{-1}(x)$. Now, attempting to generalize the substitution to the method of integral transforms does not yield a general solution. However, note that:
$$\sum_x g\big(h(x)\big) = \sum_x \int_0^{\infty} e^{-h(x)t}\, G(t)\, dt = \int_0^{\infty} \sum_x e^{-h(x)t}\, G(t)\, dt. \tag{23}$$
This depends heavily on h(x). Clearly, if h(x) is some linear function, the inner summation is quickly resolved (as we have seen when dealing with scaling factors). However, even slightly more complex nonlinear functions like $h(x) = x^2$ yield results that are not expressible in closed form with this specific kernel.
Remark: An immediate natural idea, then, is that for a specific $h(x)$ we seek an integral transform for which the summation
$$\sum_x K\big(h(x), t\big)$$
is elementarily defined, just as the Laplace transform has worked beautifully for linear $h(x)$. In other words, we may choose any integral transform we like, provided that the sum over the kernel, and the sum over the kernel after the change of variable $h(x)$, are both sufficiently simple.

5. Conclusions

Utilizing integral transforms for the evaluation of indefinite sums evidently arises as an interesting tool. While the Laplace transform is specifically interesting due to its ability to facilitate a generalization of step sizes, other transforms can also be analyzed similarly.
For instance, the Fourier transform can also be generalized formally to varying step sizes:
$$\sum_x g(nx) = \int_{-\infty}^{\infty} \sum_x e^{inxt}\, G(t)\, dt = \int_{-\infty}^{\infty} \frac{e^{inxt}}{e^{int} - 1}\, G(t)\, dt + C, \qquad \text{where } G(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} g(x)\, e^{-ixt}\, dx.$$
In fact, this acts as a more general approach than the Laplace version, through which numerous indefinite summation identities may be evaluated.
Other transforms may be analyzed in a similar fashion, using the general relation (5).

5.1. Various Worked Examples

1. The Riemann-Zeta Function
For instance, we can analyze the Riemann-Zeta function fairly simply.
$$\sum_x g(x) = \sum_x \int_0^1 t^{x-1}\, G(t)\, dt = \int_0^1 \frac{t^{x-1}}{t-1}\, G(t)\, dt, \qquad \frac{1}{x^k} = \int_0^1 G(t)\, t^{x-1}\, dt.$$
Computing the inverse Mellin Transform gives
$$G(t) = \frac{(-\log(t))^{k-1}}{\Gamma(k)} \quad \text{for } 0 < t \le 1, \quad \text{and } 0 \text{ elsewhere}.$$
We find that:
$$\sum_x \frac{1}{x^k} = \int_0^1 \frac{(-\log(t))^{k-1}}{\Gamma(k)} \cdot \frac{t^{x-1}}{t-1}\, dt.$$
Or similarly:
$$\zeta(k) = \sum_{x=1}^{\infty} \frac{1}{x^k} = \int_0^1 \frac{(-\log(t))^{k-1}}{\Gamma(k)} \cdot \frac{1}{1-t}\, dt.$$
Now, using the beautiful identity of the polygamma function:
$$\psi^{(n)}(z) = -\int_0^1 \frac{t^{z-1}}{1-t}\, (\ln(t))^n\, dt,$$
$$\sum_x \frac{1}{x^k} = \int_0^1 \frac{(-\log(t))^{k-1}}{\Gamma(k)} \cdot \frac{t^{x-1}}{t-1}\, dt = \frac{(-1)^{k-1}}{\Gamma(k)}\, \psi^{(k-1)}(x) + C.$$
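The case $k = 2$ provides a quick numerical check of the definite-sum identity, since $\zeta(2) = \pi^2/6$ (the quadrature scheme below is an illustrative sketch):

```python
import math

# k = 2: the identity reads  zeta(2) = ∫_0^1 (-log t)/(1 - t) dt
def zeta_integral(k, n=200000):
    h = 1.0 / n
    s = 0.0
    for i in range(n):
        t = (i + 0.5) * h                  # midpoint rule avoids both endpoints
        s += (-math.log(t)) ** (k - 1) / (1.0 - t)
    return s * h / math.gamma(k)

assert abs(zeta_integral(2) - math.pi ** 2 / 6) < 1e-3
```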
2. Polynomials
We will address the problem of polynomials here. It is not immediately obvious what happens when we attempt to compute summations of polynomial terms using this method. Consider:
$$\sum_x x^2 = \int_0^{\infty} \frac{e^{-xt}}{e^{-t} - 1}\, \mathcal{L}^{-1}\{x^2\}(t)\, dt.$$
Recall that
$$\mathcal{L}\{\delta''(t)\}(x) = \int_0^{\infty} e^{-xt}\, \delta''(t)\, dt = x^2.$$
Now, formally computing the integral, we treat
$$\frac{e^{-xt}}{e^{-t} - 1}\, \delta''(t)$$
as a distributional pairing, with
$$\langle \delta^{(n)}(t),\, \varphi(t) \rangle = (-1)^n\, \varphi^{(n)}(0).$$
Therefore,
$$\int_0^{\infty} \frac{e^{-xt}}{e^{-t} - 1}\, \mathcal{L}^{-1}\{x^2\}(t)\, dt = \int_0^{\infty} \frac{e^{-xt}}{e^{-t} - 1}\, \delta''(t)\, dt = \frac{d^2}{dt^2}\!\left[ \frac{e^{-xt}}{e^{-t} - 1} \right]\Bigg|_{t=0},$$
where this value is, formally, $2!$ times the coefficient of $t^2$ in the series expansion (the singular $1/t$ part being discarded formally). Thus:
$$\frac{e^{-xt}}{e^{-t} - 1} = \left( -\frac{1}{t} - \frac{1}{2} - \frac{t}{12} + \ldots \right)\left( 1 - xt + \frac{x^2 t^2}{2} - \frac{x^3 t^3}{6} + \ldots \right) = -\frac{1}{t} + \left( x - \frac{1}{2} \right) + t\left( -\frac{x^2}{2} + \frac{x}{2} - \frac{1}{12} \right) + t^2 \left( \frac{x^3}{6} - \frac{x^2}{4} + \frac{x}{12} \right) + O(t^3).$$
And we see relatively clearly that
$$\sum_x x^2 = 2!\, [t^2]\, \frac{e^{-xt}}{e^{-t} - 1} = \frac{x^3}{3} - \frac{x^2}{2} + \frac{x}{6}.$$
In fact, we have immediately that
$$\sum_x x^n = (-1)^n\, n!\, [t^n]\, \frac{e^{-xt}}{e^{-t} - 1},$$
the factor $(-1)^n$ coming from the pairing $\langle \delta^{(n)}, \varphi \rangle = (-1)^n \varphi^{(n)}(0)$ (it is invisible for even $n$, such as the case $n = 2$ above).
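This coefficient extraction is mechanical and can be carried out exactly with rational arithmetic. The sketch below builds the series of $e^{-xt}/(e^{-t}-1)$ formally; note the factor $(-1)^n$ from the distributional pairing $\langle \delta^{(n)}, \varphi \rangle = (-1)^n \varphi^{(n)}(0)$, which matters for odd $n$:

```python
from fractions import Fraction as Fr

N = 8                                     # truncation order in t
fact = [1]
for i in range(1, N + 3):
    fact.append(fact[-1] * i)

# A(t) = (e^{-t} - 1)/(-t),  so  1/(e^{-t} - 1) = -(1/t) * (1/A(t))
A = [Fr((-1) ** k, fact[k + 1]) for k in range(N + 2)]
B = [Fr(1)]                               # B = 1/A as a formal power series
for n in range(1, N + 2):
    B.append(-sum(A[i] * B[n - i] for i in range(1, n + 1)))

# e^{-xt}: coefficient of t^k is (-x)^k / k!, stored as {x-power: Fraction}
E = [{k: Fr((-1) ** k, fact[k])} for k in range(N + 2)]

def indef_sum_poly(n):
    """(-1)^n n! [t^n] e^{-xt}/(e^{-t}-1), as a polynomial in x."""
    coef = {}
    for i in range(n + 2):                # t^{n+1} coeff of B*E; sign from -1/t
        for p, c in E[n + 1 - i].items():
            coef[p] = coef.get(p, Fr(0)) - B[i] * c
    return {p: c * fact[n] * (-1) ** n for p, c in coef.items() if c}

def evalp(poly, x):
    return sum(c * Fr(x) ** p for p, c in poly.items())

# n = 2 reproduces x^3/3 - x^2/2 + x/6 exactly
assert indef_sum_poly(2) == {3: Fr(1, 3), 2: Fr(-1, 2), 1: Fr(1, 6)}

# each polynomial is an antidifference of x^n: p(x+1) - p(x) = x^n
for n in range(5):
    p = indef_sum_poly(n)
    for x in range(1, 5):
        assert evalp(p, x + 1) - evalp(p, x) == Fr(x) ** n
```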
Ultimately, this framework provides a powerful and elegant approach to indefinite summation. By our formalization of the Euler-Maclaurin formula and our usage of integral transforms to manipulate indefinite summations, we hope to illuminate deeper connections between discrete summation and traditional Calculus. The demonstrated robustness of our method in handling nontrivial step sizes and variable transformations suggests promising directions for future research as well, particularly in the analysis of discrete-continuous correspondences and the development of more general summation techniques as seen in Section 4.1.

References

  1. Henrik Stenlund. "On Methods for Transforming and Solving Finite Series." arXiv 2016, arXiv:1602.04080. Available online: https://arxiv.org/abs/1602.04080.
  2. Ronald Graham, Donald Knuth, and Oren Patashnik. *Concrete Mathematics*. Addison-Wesley, 1994.
  3. Sui Sun Cheng. *Advances in Discrete Mathematics and Applications Volume 3, Partial Difference Equations*. 2019.
  4. Lokenath Debnath and Dembaru Bhatta. *Integral Transforms and Their Applications, Third Edition*. Taylor & Francis Group, 2015.
  5. Jose Bonet and Paweł Domański. “Abel’s functional equation and eigenvalues of composition operators on spaces of real analytic functions.” Preprint, submitted 2014. https://jbonet.webs.upv.es/wp-content/uploads/2014/04/BD_eigenvaluessubmitted03032014.
  6. E. T. Whittaker and G. N. Watson. *A Course of Modern Analysis, 4th Edition*. Cambridge University Press, 1927.
  7. Wikipedia, “Polygamma function,” Wikipedia, The Free Encyclopedia. [Online]. Available: https://en.wikipedia.