1. Introduction
The Shannon differential entropy has gained widespread adoption as a measure of uncertainty across many fields of research. It was introduced by Shannon in [1] and has since become a cornerstone of probability and information theory. Suppose we have a non-negative random variable X that is absolutely continuous with probability density function $f(x)$. If the expected value of the logarithm exists, then the expression
$$H(X)=-\int_{0}^{\infty}f(x)\log f(x)\,\mathrm{d}x$$
is referred to as the Shannon differential entropy. This definition quantifies the uncertainty associated with the random variable X by measuring the amount of information required to describe it. Owing to its adaptability and usefulness, this measure has been adopted in a multitude of research domains, making it an essential tool for exploring probability distributions.
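As a quick numerical illustration (added here; not part of the original derivations), the differential entropy can be evaluated by direct quadrature and compared with a known closed form; for an exponential distribution with rate λ it equals 1 − log λ. A minimal Python sketch:

```python
import numpy as np
from scipy import integrate

# Differential entropy h(X) = -int f(x) log f(x) dx, evaluated numerically.
def differential_entropy(pdf, upper, lower=0.0):
    integrand = lambda x: -pdf(x) * np.log(pdf(x))
    value, _ = integrate.quad(integrand, lower, upper)
    return value

# Exponential(rate) density; the closed-form entropy is 1 - log(rate).
rate = 2.0
pdf = lambda x: rate * np.exp(-rate * x)

print(differential_entropy(pdf, upper=50.0))   # approximately 0.3069
print(1 - np.log(rate))                        # closed form: 0.3069
```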
While differential entropy has many advantages, a fascinating alternative to conventional entropy was suggested by Rao et al. [2], which they called the cumulative residual entropy (CRE). Unlike differential entropy, which is based on the probability density function $f(x)$, the CRE is based on the survival function $\bar{F}(x)=P(X>x)$. The CRE is defined by
$$\mathcal{E}(X)=-\int_{0}^{\infty}\bar{F}(x)\log \bar{F}(x)\,\mathrm{d}x.\qquad(1)$$
With its ability to capture the residual uncertainty of a distribution, the CRE is particularly useful in situations where the tails of the distribution are of interest. It has emerged as a preferred measure for characterizing information dispersion in problems related to reliability theory, and this robust metric has been extensively employed in numerous studies, such as those carried out in [3,4,5,6,7,8], to name a few. A representation of the cumulative entropy (CE), an information measure similar to equation (1), was provided by Di Crescenzo and Longobardi [9] and is given by
$$\mathcal{CE}(X)=-\int_{0}^{\infty}F(x)\log F(x)\,\mathrm{d}x,$$
where $F(x)$ is the cumulative distribution function (cdf) of the random variable $X$. Like the CRE, the CE is nonnegative, and $\mathcal{CE}(X)=0$ if and only if $X$ is a constant.
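The two cumulative measures can be checked numerically in the same way. The following sketch (an illustration, not taken from the original) verifies that the CRE of an exponential distribution with rate λ equals 1/λ and that the CE of a uniform distribution on (0, 1) equals 1/4:

```python
import numpy as np
from scipy import integrate

def cre(sf, upper, lower=0.0):
    """Cumulative residual entropy: -int sf(x) log sf(x) dx."""
    f = lambda x: -sf(x) * np.log(sf(x)) if 0.0 < sf(x) < 1.0 else 0.0
    value, _ = integrate.quad(f, lower, upper)
    return value

def ce(cdf, upper, lower=0.0):
    """Cumulative entropy: -int cdf(x) log cdf(x) dx."""
    f = lambda x: -cdf(x) * np.log(cdf(x)) if 0.0 < cdf(x) < 1.0 else 0.0
    value, _ = integrate.quad(f, lower, upper)
    return value

rate = 2.0
print(cre(lambda x: np.exp(-rate * x), upper=50.0))  # ~0.5 = 1/rate
print(ce(lambda x: min(x, 1.0), upper=1.0))          # ~0.25 for uniform(0, 1)
```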
Several measures of dynamic information have been proposed to characterize the uncertainty in suitable functionals of the random lifetime X of a system. We review two such measures, namely the dynamic cumulative residual entropy (introduced by Asadi and Zohrevand [10]) and the cumulative past entropy (proposed by Di Crescenzo and Longobardi [11]), which are the CRE of the residual lifetime $X_t=[X-t\mid X>t]$ and the CE of the past lifetime $X_{(t)}=[t-X\mid X\le t]$, respectively. The dynamic cumulative residual entropy is defined as
$$\mathcal{E}(X;t)=-\int_{t}^{\infty}\frac{\bar{F}(x)}{\bar{F}(t)}\log\frac{\bar{F}(x)}{\bar{F}(t)}\,\mathrm{d}x,$$
where $\bar{F}$ is the survival function of $X$. Di Crescenzo and Longobardi [11] pointed out that in many realistic scenarios, uncertainty is not necessarily related to the future. For example, if a system that starts operating at time 0 is only observed at predetermined inspection times and is found to be "down" at time t, then the uncertainty depends on the specific moment in $(0,t)$ at which it failed. When faced with such scenarios, the cumulative past entropy proves to be a valuable metric. It is defined as
$$\mathcal{CE}(X;t)=-\int_{0}^{t}\frac{F(x)}{F(t)}\log\frac{F(x)}{F(t)}\,\mathrm{d}x,$$
where $F$ is the cdf of $X$.
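As a small worked check (added for illustration), for a uniform distribution on (0, b) and t ≤ b the cumulative past entropy reduces to $\mathcal{CE}(X;t)=t/4$, which is increasing in t; the sketch below evaluates the defining integral numerically and reproduces this value:

```python
import numpy as np
from scipy import integrate

def cumulative_past_entropy(cdf, t):
    """CE(X; t) = -int_0^t (F(x)/F(t)) log(F(x)/F(t)) dx."""
    Ft = cdf(t)
    def integrand(x):
        r = cdf(x) / Ft
        return -r * np.log(r) if r > 0.0 else 0.0
    value, _ = integrate.quad(integrand, 0.0, t)
    return value

uniform_cdf = lambda x, b=10.0: min(max(x / b, 0.0), 1.0)
for t in (1.0, 2.0, 4.0, 8.0):
    print(t, cumulative_past_entropy(uniform_cdf, t))  # equals t/4
```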
Numerous researchers have shown a strong interest in the information properties of coherent systems, as can be seen in [8,12,13,14,15,16] and the references therein. Recently, Kayid [17] investigated the Tsallis entropy of coherent systems when all components are alive at time t. Mesfioui et al. [18] also studied the Tsallis entropy of such systems, adding to the current body of research in this area. More recently, Kayid and Shrahili [19,20] investigated the Shannon differential entropy and the Rényi entropy of coherent systems given that all components have failed at time t. The aim of this study is to examine the uncertainty properties of the lifetimes of coherent systems, with a particular focus on the cumulative past entropy. Specifically, given that all components have failed at time t, we study coherent systems consisting of n components by utilizing the system signature.
The rest of this paper is organized as follows. In Section 2, by implementing the concept of the system signature, we provide an expression for the CE of a coherent system's lifetime with independent and identically distributed component lifetimes, given that all components of the system have failed at time t. In Section 3, some useful bounds are presented. A new criterion for selecting a preferable system among coherent systems is introduced in Section 4. Some concluding remarks are given in Section 5.
2. CE of the past lifetime
In this section, we apply the system signature concept to define the past-lifetime CE of a coherent system with an arbitrary structure. To this aim, we assume that at a specific time t all components of the system have failed, and we employ the concept of the system signature, which is described by an n-dimensional vector $\mathbf{s}=(s_1,\dots,s_n)$ whose i-th element is defined as
$$s_i=P(T=X_{i:n}),\qquad i=1,\dots,n,$$
where T denotes the system lifetime and $X_{i:n}$ is the i-th order statistic of the component lifetimes (see [21]).
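For readers less familiar with signatures, the vector can be obtained by enumerating the equally likely failure orders of the i.i.d. components. The sketch below (an illustrative example added here; the three-component structure is hypothetical and not the system studied in the paper) computes the signature of the system with lifetime T = min(X1, max(X2, X3)):

```python
from itertools import permutations
from fractions import Fraction

# Hypothetical 3-component coherent system: component 1 in series with the
# parallel pair (2, 3), i.e. T = min(X1, max(X2, X3)).
def system_works(working):
    return working[0] and (working[1] or working[2])

def signature(n, works):
    counts = [0] * n
    # With i.i.d. continuous lifetimes every failure order is equally likely.
    for order in permutations(range(n)):
        working = [True] * n
        for k, comp in enumerate(order, start=1):
            working[comp] = False          # the k-th failure
            if not works(working):
                counts[k - 1] += 1         # system dies at the k-th failure
                break
    total = sum(counts)
    return [Fraction(c, total) for c in counts]

print(signature(3, system_works))  # signature (1/3, 2/3, 0)
```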
Consider a coherent system whose component lifetimes $X_1,\dots,X_n$ are independent and identically distributed (i.i.d.) and whose signature vector $\mathbf{s}=(s_1,\dots,s_n)$ is known. Given that the system has failed by time t, we can represent its past lifetime as the time elapsed between the system failure and the inspection time t. The results established by Khaledi and Shaked [22] allow us to express the cumulative distribution function of this past lifetime as a signature-based mixture over the ordered component failures,
where the i-th term denotes the cdf associated with the i-th ordered failure. It is important to note that this term describes the elapsed time since the failure of the component whose lifetime is the i-th order statistic, given that the system has failed at or before time t. Furthermore, according to (6), it corresponds to the i-th order statistic of n i.i.d. random variables whose common cdf appears in (6). Hereafter, we provide a formula for computing the cumulative entropy of the past lifetime of the system. For this purpose, we introduce a transformed random variable V through a probability integral transformation, which is essential to our approach. Using this transformation, we can express the cumulative entropy of the past lifetime in terms of V, as shown in the upcoming theorem.
Theorem 1.
We can express the CE of the past lifetime of the system as follows
where for all and
represents the survival function of such that
is the survival function of the i-th order statistic when the lifetimes are uniformly distributed. Moreover, is the quantile function of
Proof. Substituting
and using equations (
5) and (
6), we obtain:
where the last equality is obtained by applying a suitable change of variables.
Furthermore,
represents the survival function of
as given in (
9). By utilizing (
8), we can derive the relationship (
7), which serves to conclude the proof. □
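Because the quantities entering (7) are specific to this construction, a distribution-free cross-check is sometimes convenient: simulate the i.i.d. component lifetimes, draw the order statistic at which the system fails according to the signature, condition on the relevant failure event, and estimate the CE of the resulting past lifetime from the empirical cdf. The following Python sketch is such an illustration under stated assumptions (in particular, it conditions on all components having failed by time t, and the example signature is the hypothetical one used earlier); it is not the authors' formula (7):

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_ce(sample):
    """Plug-in estimate of CE(Y) = -int F(y) log F(y) dy from the empirical cdf."""
    y = np.sort(np.asarray(sample, dtype=float))
    m = len(y)
    u = np.arange(1, m) / m                  # empirical cdf on [y_(j), y_(j+1))
    return float(np.sum(-u * np.log(u) * np.diff(y)))

def past_lifetime_ce(signature, sampler, t, n_sim=200_000):
    """Monte Carlo CE of the system's past lifetime t - T, conditioned on all
    n components having failed by time t (the setting assumed in this sketch)."""
    n = len(signature)
    x = np.sort(sampler((n_sim, n)), axis=1)       # ordered component lifetimes
    x = x[x[:, -1] <= t]                           # keep runs with X_{n:n} <= t
    idx = rng.choice(n, p=signature, size=len(x))  # failure position drawn from s
    past = t - x[np.arange(len(x)), idx]
    return empirical_ce(past)

# Hypothetical example: signature (1/3, 2/3, 0) with uniform(0, 1) components.
sig = (1 / 3, 2 / 3, 0.0)
uniform = lambda shape: rng.uniform(0.0, 1.0, shape)
for t in (0.3, 0.5, 0.7, 0.9):
    print(t, past_lifetime_ce(sig, uniform, t))
```

For a DRHR component distribution such as the uniform, the printed values should increase with t (up to Monte Carlo error), in line with Theorem 2 below.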
Suppose we examine an
i-out-of-
n system with a system signature of
, where
Then, we obtain a special case of Equation (
7), which reduces to
The following theorem is a direct consequence of Theorem 1 and characterizes the aging properties of the system's components. It is worth mentioning that a random variable X is said to have a decreasing reversed hazard rate (DRHR) if its reversed hazard rate function $\tau(x)=f(x)/F(x)$ is decreasing in $x>0$.
Theorem 2. If X is DRHR, then the CE of the past lifetime of the system is increasing in t.
Proof. By noting that Equation (7) can be rewritten as
for all , we can easily confirm that this holds for all . Therefore, we obtain:
If , then . Therefore, if X has a DRHR property, then:
By utilizing Equation (11), we can infer that for all , thus completing the proof. □
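As a complementary check (an added illustration, not from the original), the DRHR hypothesis of Theorem 2 can be verified on a grid by inspecting the reversed hazard rate f(x)/F(x); for the uniform distribution this equals 1/x, which is decreasing:

```python
import numpy as np

def reversed_hazard(pdf, cdf, xs):
    """Reversed hazard rate tau(x) = f(x) / F(x) evaluated on a grid."""
    xs = np.asarray(xs, dtype=float)
    return pdf(xs) / cdf(xs)

def is_drhr(pdf, cdf, xs):
    tau = reversed_hazard(pdf, cdf, xs)
    return bool(np.all(np.diff(tau) <= 0))    # non-increasing on the grid

# Uniform(0, 1): tau(x) = 1/x, decreasing, hence DRHR.
xs = np.linspace(0.05, 0.99, 200)
print(is_drhr(lambda x: np.ones_like(x), lambda x: x, xs))  # True
```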
We provide an example to demonstrate the applications of Theorems 1 and 2 in engineering systems. This example highlights how these theorems can be utilized for analyzing the CE of a failed coherent system and for investigating the aging characteristics of a system.
Example 1. Suppose we have a coherent system with a known system signature $\mathbf{s}$. To compute the precise value of the CE of its past lifetime using equation (7), we require the lifetime distributions of the system components. For this purpose, let us assume the following lifetime distributions.
(a) Let X follow the uniform distribution. From (7), we immediately obtain
The analysis indicates that the cumulative entropy of the past lifetime increases as time t increases, which aligns with previous research on the behavior of cumulative entropy for specific classes of random variables. Indeed, the uniform distribution has the DRHR property, so by Theorem 2 the CE of the past lifetime should increase as t increases.
(b) Let us examine a random variable X whose cdf is defined as follows:
After performing some algebraic manipulations, we have
Calculating this expression explicitly is challenging, so we rely on numerical methods. In
Figure 2, we illustrate the cumulative entropy of
for different values of
k. It is evident that
X exhibits a DRHR property for all
. Referring back to Theorem 2, we can see that
rises with increasing
t when
.
Figure 1. A coherent system with signature $\mathbf{s}$.
Figure 2. Cumulative entropy of the past lifetime with respect to t for various values of k, using the cdf of part (b) of Example 1.
The example above provides insight into the interplay between the CE of the past lifetime and time, emphasizing the significance of the decreasing reversed hazard rate property when analyzing such systems. The results suggest that the DRHR property of X is a critical factor in determining the temporal dynamics of the CE of the system's past lifetime, which could have significant consequences in various areas of research, including the study of complex systems and the development of efficient data compression methods.
The notion of duality is a valuable tool in reliability engineering for reducing the computational workload of finding the signatures of all coherent systems of a given size by roughly half (as demonstrated, for instance, in Kochar et al. [23]). In particular, if T denotes the lifetime of a coherent system with signature $\mathbf{s}=(s_1,\dots,s_n)$, then its dual system, with lifetime $T^{D}$, has signature $\mathbf{s}^{D}=(s_n,s_{n-1},\dots,s_1)$. By utilizing the principle of duality, we obtain the following theorem, which reduces the computational effort involved in calculating the past cumulative entropy of the dual system.
Theorem 3. If holds true for all and t, then, for all and we can conclude that
Proof. It is crucial to emphasize that the equation
holds for all
and
. Furthermore, since
holds for all
, we can leverage (
7) to derive the subsequent expression:
and this completes the proof. □
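For instance (an added note), obtaining the dual signature is a one-line operation, which is what makes the duality argument computationally attractive; the vector below is the hypothetical three-component example used earlier:

```python
# Signature of a coherent system and of its dual system (reversed order).
s = (1 / 3, 2 / 3, 0.0)        # e.g. T = min(X1, max(X2, X3))
s_dual = tuple(reversed(s))    # (0.0, 2/3, 1/3): the dual structure max(X1, min(X2, X3))
print(s, s_dual)
```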
4. Preferable system
Hereafter, we consider two nonnegative random variables, X and Y, that represent the lifetimes of two items having the same support. For any given time t, we define their respective past lifetimes as $X_{(t)}=[t-X\mid X\le t]$ and $Y_{(t)}=[t-Y\mid Y\le t]$. We also introduce the distribution functions of $X_{(t)}$ and $Y_{(t)}$, which for $0\le x\le t$ are given by
$$F_{(t)}(x)=\frac{F(t)-F(t-x)}{F(t)}\quad\text{and}\quad G_{(t)}(x)=\frac{G(t)-G(t-x)}{G(t)},$$
respectively. In their groundbreaking work, Di Crescenzo and Longobardi [24] introduced a concept based on the mean past lifetimes of the nonnegative random variables X and Y with cdfs F and G, namely
$$\tilde{\mu}_X(t)=E[t-X\mid X\le t]=\frac{\int_0^t F(x)\,\mathrm{d}x}{F(t)}\quad\text{and}\quad \tilde{\mu}_Y(t)=E[t-Y\mid Y\le t]=\frac{\int_0^t G(x)\,\mathrm{d}x}{G(t)},$$
respectively. They proposed a past version of the cumulative Kullback-Leibler information measure, defined as a function of the past lifetime distributions and the mean past lifetimes of X and Y as follows:
provided that
whenever
In order to advance our findings, we define a novel measure of distance that is symmetric and applicable to two distributions. This measure is called Symmetric Past Cumulative (SPC) Kullback-Leibler divergence and is denoted by the shorthand
.
Definition 1. Consider two non-negative past random variables,
and
, with shared support and cumulative distribution functions
and
, respectively. In this case, we introduce the SPC Kullback-Leibler divergence as a measure of distance between the two variables as follows:
for all
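Since expression (17) is not reproduced above, the following numerical sketch should be read as an assumption-laden illustration: it takes the SPC divergence to be the symmetrized past cumulative Kullback-Leibler form $\int_0^t \big(F(x)/F(t)-G(x)/G(t)\big)\log\big[(F(x)/F(t))/(G(x)/G(t))\big]\,\mathrm{d}x$, which is symmetric, nonnegative, and vanishes iff $F(x)/F(t)=G(x)/G(t)$ for almost all x in (0, t); whether this matches (17) exactly cannot be confirmed from the text.

```python
import numpy as np
from scipy import integrate

def spc_divergence(F, G, t):
    """Assumed form of a symmetric past cumulative KL divergence:
    int_0^t (F(x)/F(t) - G(x)/G(t)) * log[(F(x)/F(t)) / (G(x)/G(t))] dx."""
    Ft, Gt = F(t), G(t)
    def integrand(x):
        a, b = F(x) / Ft, G(x) / Gt
        if a <= 0.0 or b <= 0.0:
            return 0.0
        return (a - b) * np.log(a / b)
    value, _ = integrate.quad(integrand, 0.0, t)
    return value

# Example: exponential(1) versus exponential(2) past lifetimes at t = 1.
F = lambda x: 1.0 - np.exp(-x)
G = lambda x: 1.0 - np.exp(-2.0 * x)
print(spc_divergence(F, G, 1.0))   # > 0; equals 0 when F and G coincide
```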
The proposed measure, defined as (
17), possesses several desirable properties. First and foremost, it is nonnegative and symmetric. Moreover, the value of
is equal to zero if and only if
and
are almost everywhere equal. In addition to these properties, we also observe the following.
Lemma 1. Suppose we have three random variables, , , and , each with cumulative distribution functions , , and , respectively. If the stochastic ordering holds, then the following statements are true:
- (i)
- (ii)
for all
Proof. Given that the function is decreasing in the interval and increasing in , we can make an important inference from the condition for . Specifically, we can conclude that
- (i)
- (ii)
By integrating both sides of the descriptions (i) and (ii), we can obtain the desired result. □
As holds for any coherent system, from Lemma 1, we have the following theorem.
Theorem 4. If is the lifetime of a coherent system based on , then:
- (i)
,
- (ii)
.
Traditional stochastic ordering may not be sufficient for pairwise comparisons of system performance, particularly for certain system structures: some pairs of systems remain incomparable under stochastic orders. In such cases, alternative metrics for comparing system performance are needed. Hereafter, we introduce a new method for choosing a preferable system. It is worth noting that engineers typically favor systems that offer longer operational time, so it is crucial to ensure that the systems being compared possess comparable attributes. Furthermore, assuming that all other attributes are equal, one would choose the parallel system, since it has a longer past lifetime than the alternative systems. In other words, from (
6), we have
Rather than relying on comparisons between two systems at a time
we can explore a system that has a structure or distribution that is more similar to that of the parallel system. Essentially, our goal is to determine which of these systems is more similar or closer in configuration to the parallel system, while also having a dissimilar configuration from that of the series system. To this aim, one can employ the idea of SPC given in Definition 1. Consequently, we put forth the following Symmetric Past Divergence (SPD) measure for
as
Theorem 3 establishes that . It is evident that if and only if and if and only if . Put simply, we can deduce that if is closer to 1, the distribution of is more akin to the parallel system’s distribution. On the other hand, if is closer to , the distribution of is more similar to the series system’s distribution. Based on this, we suggest the subsequent definition.
Definition 2. Consider two coherent systems, each with n component lifetimes that are independent and identically distributed, alongside signatures and . Let and represent their respective past lifetimes. At time t, we assert that is more desirable than with regard to the Symmetric Past Divergence (SPD) measure, indicated by , if and only if for all
It is important to note that the equivalence of
and
does not always mean that
. In accordance with Definition 2, we can define
. From equation (
17) and the aforementioned conversions, we obtain
for
. Then, from (
19), we get
If we assume that the components are i.i.d., we can derive the equations and . By referencing Theorem 4, we can arrive at an intriguing result.
Proposition 3. If is the lifetime of a coherent system based on , then , for .
The next theorem is readily apparent.
Theorem 5. Assuming the conditions outlined in Definition 2, if the component lifetimes are exponentially distributed, then the SPD measure is independent of time. In other words, holds true for all
Proof. Using the memoryless property, we can deduce that holds true for all . Consequently, we can conclude that the result holds true. □
Example 3. Consider two coherent systems with past lifetimes and , in which the component lifetimes are exponentially distributed with cdf . The signatures of these systems are given by and , respectively. These systems are not comparable using traditional stochastic orders, so we compare them by means of the SPD measure. Using numerical computation, we obtain and . This indicates that the system with lifetime is less similar to the parallel system than the system with lifetime .
Theorem 6. Suppose that and denote the lifetimes of two coherent systems with signatures and , respectively, based on n i.i.d. components with the same cdf F. If , we can assert that .
Proof. We can derive the desired outcome from Theorem 2.3 of Khaledi and Shaked [
22]. Specifically, if we have two probability vectors denoted
and
, where
, then we have
. By applying Lemma 1, we can obtain
and
. These relations enable us to arrive at the desired outcome due to the relation defined in equation (
18). □
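To illustrate the signature condition in Theorem 6 (an added sketch; the two signatures below are hypothetical), the usual stochastic order between two discrete probability vectors can be checked by comparing tail sums:

```python
import numpy as np

def st_less_equal(p, q):
    """Check p <=_st q for discrete distributions on {1,...,n}:
    sum_{j>=i} p_j <= sum_{j>=i} q_j for every i."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    tail_p = np.cumsum(p[::-1])[::-1]
    tail_q = np.cumsum(q[::-1])[::-1]
    return bool(np.all(tail_p <= tail_q + 1e-12))

s1 = (1 / 4, 1 / 4, 1 / 2, 0.0)   # hypothetical signatures of two 4-component systems
s2 = (0.0, 1 / 4, 1 / 4, 1 / 2)
print(st_less_equal(s1, s2))      # True: s1 <=_st s2
```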
An exciting discovery is that the comparison based on SPD can serve as a necessary condition for the usual stochastic order. Let us assume we have two coherent systems, denoted by and , each composed of several components with lifetimes . If we find that is less reliable than in the stochastic order, denoted as , we can conclude that the system is also less reliable than in the SPD order, i.e., . This comparison of systems through the SPD order can be useful when comparing systems that are difficult to assess otherwise. It is worth noting that if we find that two systems and are equally reliable in the stochastic order, i.e., , then they are also equally reliable in the SPD order, i.e., . This highlights the potential power of the SPD order in system analysis.