Preprint
Article

Bayesian Estimation Based on Learning Rate Parameter Under the Joint Hybrid Censoring Scheme for K Exponential Populations

This version is not peer-reviewed

Submitted: 02 August 2023; Posted: 04 August 2023

Abstract
Generalized Bayes is a Bayesian approach based on the learning rate parameter η. In this study, we examine the effect of η on the estimation results for joint type-I and type-II hybrid censored samples from k exponential populations. In addition to the learning rate parameter, we consider two loss functions in the Bayesian approach, the Linex and the general entropy loss functions. Monte Carlo simulations are performed to compare the performance of the estimators under the two losses and different values of η. An illustrative example is presented to study the effect of the learning rate parameter and of the different losses with different parameter values.
Subject: Computer Science and Mathematics - Probability and Statistics

MSC:  62F10; 62F15

1. Introduction

Generalized Bayes (GB) is a Bayesian approach based on a learning rate parameter $\eta$ ($0 < \eta < 1$) that enters as a fractional power on the likelihood function $L \equiv L(\theta; \mathrm{data})$ for the parameter $\theta \in \Theta$; the traditional Bayesian framework is recovered for $\eta = 1$. That is, if the prior distribution of the parameter $\theta$ is $\pi(\theta)$, then the GB posterior distribution of $\theta$ is
$$\pi^{*}(\theta \mid \mathrm{data}) \propto L^{\eta}\, \pi(\theta), \qquad \theta \in \Theta, \quad 0 < \eta < 1. \tag{1}$$
More details about the GB method and the choice of the learning rate parameter can be found in [1,2,3,4,5,6,7,8,9,10,11,12,13]. The choice of the learning rate was studied in [3,4,5,6] using the SafeBayes algorithm, which minimizes a sequential risk measure. In [7] and [8], other learning rate selection methods were proposed, involving two different information adaptation strategies. The authors of [12] studied GB estimation based on a joint type-II censored sample from k exponential populations using different values of the learning rate parameter, whereas [13] considered GB prediction based on a joint type-II censored sample from multiple exponential populations. In [14], Bayesian estimation and prediction based on a jointly type-II censored sample from two exponential populations were presented.
Exact likelihood inference for multiple exponential populations under different types of joint censoring was studied in [15], and exact likelihood inference for two populations of two-parameter exponential distributions under joint type-II censoring was studied in [16]. Many types of hybrid censoring exist in the literature (see, e.g., [17]); here, we use only the type-I hybrid censoring scheme (HCS-I) and the type-II hybrid censoring scheme (HCS-II).
Let $w_1 < \cdots < w_N$ denote the ordered lifetimes of $N$ experimental units, and let $w_1 < \cdots < w_D$, $0 \le D \le N$, be the observations. HCS-I arises if the experiment stops at $T = \min(w_r, T_1)$, where $r$ and $T_1$ are pre-fixed. HCS-I therefore has two cases: the first occurs when $T = T_1$, i.e., $w_D < T_1 < w_{D+1}$, where $D = R_1$ is a random variable less than $r$ that may take the values $R_1 = 0, 1, \ldots, r-1$; the second occurs when $w_r < T_1$, i.e., $T = w_r$, and then $D = r$. HCS-II arises if the experiment stops at $T = \max(w_r, T_2)$, which guarantees that at least $r$ failures are observed by the end of the experiment.
HCS-II likewise has two cases: the first occurs when $T_2 < w_r$, i.e., $T = w_r$, and then $D = r$; the second occurs when $T = T_2$, i.e., $w_D < T_2 < w_{D+1}$, where $D = R_2$ is a random variable satisfying $r \le D \le N$.
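To make the two stopping rules concrete, the following minimal Python sketch (our illustration, not part of the original derivation; the function name and interface are our own) computes the stopping time $T$ and the number of observed failures $D$ from the pooled failure times:

```python
import numpy as np

def hybrid_censor(w, r, T1=None, T2=None, scheme="I"):
    """Return (T, D) under HCS-I (T = min(w_r, T1)) or HCS-II (T = max(w_r, T2)).

    w: pooled failure times of all N units; r, T1, T2: pre-fixed design values.
    """
    w = np.sort(w)
    wr = w[r - 1]                                   # r-th smallest failure time, w_r
    T = min(wr, T1) if scheme == "I" else max(wr, T2)
    D = int(np.searchsorted(w, T, side="right"))    # failures observed by time T
    return T, D
```

For instance, with $w = (0.5, 1.2, 2.7, 3.1)$, $r = 3$, and $T_1 = 2$, HCS-I stops at $T = 2$ with $D = 2$ (Case 1).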
Suppose that products from $k$ different production lines are manufactured in the same plant, and that $k$ independent samples of sizes $n_j$, $1 \le j \le k$, are selected from these lines and simultaneously placed on a life test. To reduce the cost or the duration of the experiment, the experimenter may terminate the life test at time $T$. In this situation, one is interested in point or interval estimates of the average lifetime of the units produced by these $k$ lines.
Let $\{X_{jn_j},\ j = 1, \ldots, k\}$ be $k$ samples, where $X_{jn_j} = \{X_{j1}, X_{j2}, \ldots, X_{jn_j}\}$ are the lifetimes of the $n_j$ specimens from production line $A_j$, assumed to be independent and identically distributed (iid) random variables from a population with cumulative distribution function (cdf) $F_j(x)$ and probability density function (pdf) $f_j(x)$. Furthermore, let $N = \sum_{j=1}^{k} n_j$ denote the total sample size and $D = \sum_{j=1}^{k} D_j$ the total number of observed failures. Then, under the joint hybrid censoring scheme for the $k$ samples, the observable data consist of $(\delta, w)$, where $w = (w_1, \ldots, w_D)$, $w_i \in \{X_{j_i n_{j_i}},\ i = 1, \ldots, D;\ j_i = 1, \ldots, k\}$, and $\delta = (\delta_{1j}, \ldots, \delta_{Dj})$ associated with $(j_1, \ldots, j_D)$ is defined by
$$\delta_{ij} = \begin{cases} 1, & \text{if } j = j_i, \\ 0, & \text{otherwise.} \end{cases} \tag{2}$$
Letting $D_j = \sum_{i=1}^{D} \delta_{ij}$ denote the number of $X_j$-failures in $w$ and $D = \sum_{j=1}^{k} D_j$, where $D \le N$ and $D_j \le n_j$ for all $j$, the joint density function of $(\delta, w)$ is given by
$$f(\delta, w) = C_D \prod_{i=1}^{D} \prod_{j=1}^{k} \big(f_j(w_i)\big)^{\delta_{ij}} \cdot \prod_{j=1}^{k} \big(\bar{F}_j(T)\big)^{n_j - D_j}, \tag{3}$$
where $\bar{F}_j = 1 - F_j$ is the survival function of the $j$th population and $C_D = \prod_{j=1}^{k} \frac{n_j!}{(n_j - D_j)!}$.
Let $M$ be the number of failures up to time $T$, with probability mass function
$$P(M = m) = \sum_{l} \prod_{j=1}^{k} \binom{n_j}{l_j} p_j^{l_j} q_j^{n_j - l_j}, \qquad m = 0, 1, \ldots, N, \tag{4}$$
where $l = (l_1, \ldots, l_k)$, $m = \sum_{j=1}^{k} l_j$, $0 \le l_j \le n_j$, and $p_j = F_j(T)$, $q_j = \bar{F}_j(T)$.
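Since the $k$ samples are independent, $M$ is the sum of independent $\mathrm{Bin}(n_j, p_j)$ counts, so (4) can be evaluated by discrete convolution. A small sketch, assuming SciPy is available (the helper name is ours):

```python
import numpy as np
from scipy.stats import binom

def pmf_M(n, p):
    """P(M = m), m = 0..N, where M is a sum of independent Bin(n_j, p_j) counts (Eq. 4)."""
    pmf = np.array([1.0])
    for nj, pj in zip(n, p):
        # convolve in the PMF of the j-th sample's failure count by time T
        pmf = np.convolve(pmf, binom.pmf(np.arange(nj + 1), nj, pj))
    return pmf          # pmf[m] = P(M = m)
```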
For HCS-I:
$$D = \begin{cases} R_1, & T = T_1, \quad M = 0, 1, \ldots, r-1 \ (\text{Case 1}), \\ r, & T = w_r, \quad M = r, \ldots, N \ (\text{Case 2}). \end{cases}$$
For HCS-II:
$$D = \begin{cases} r, & T = w_r, \quad M = 0, 1, \ldots, r-1 \ (\text{Case 1}), \\ R_2, & T = T_2, \quad M = r, \ldots, N \ (\text{Case 2}). \end{cases}$$
The main objective of this study is the Bayesian estimation of the parameters based on the learning rate parameter under the joint hybrid censoring schemes (HCS-I and HCS-II) for $k$ exponential populations, when censoring is applied to the $k$ samples in a combined manner. The remainder of this paper is organized as follows. Section 2 presents the maximum likelihood estimators and the generalized Bayes estimators under the Linex and general entropy loss functions. A numerical study of the results of Section 2 is presented in Section 3. Finally, Section 4 concludes the paper.

2. Estimation of the Parameters

The populations studied here are exponential with pdf and cdf, respectively,
$$f_j(x) = \theta_j \exp(-\theta_j x), \qquad F_j(x) = 1 - \exp(-\theta_j x), \qquad x > 0,\ \theta_j > 0,\ 1 \le j \le k. \tag{5}$$
Substituting (5) into (3), we obtain the likelihood function
$$L(\Theta; \delta, w) = C_D \prod_{i=1}^{D} \prod_{j=1}^{k} \{\theta_j \exp(-\theta_j w_i)\}^{\delta_{ij}} \prod_{j=1}^{k} \{\exp(-\theta_j T)\}^{n_j - D_j} = C_D \prod_{j=1}^{k} \theta_j^{D_j} \exp\{-\theta_j u_j\}, \tag{6}$$
where $\Theta = (\theta_1, \ldots, \theta_k)$ and $u_j = \sum_{i=1}^{D} w_i \delta_{ij} + T(n_j - D_j)$.
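The likelihood (6) depends on the data only through the failure counts $D_j$ and the total times on test $u_j$. A minimal sketch of their computation (our own helper, assuming the failure labels $j_i$ are coded $0, \ldots, k-1$):

```python
import numpy as np

def sufficient_stats(w, labels, n, T):
    """Compute D_j (failure counts) and u_j = sum_i w_i * delta_ij + T * (n_j - D_j)."""
    n = np.asarray(n, dtype=float)
    k = len(n)
    D = np.zeros(k)
    u = np.zeros(k)
    for wi, j in zip(w, labels):   # w: observed failure times; labels: population j_i
        D[j] += 1
        u[j] += wi
    u += T * (n - D)               # each surviving unit is censored at time T
    return D, u
```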

2.1. Maximum Likelihood Estimation

From (6), the MLE of $\theta_j$, $1 \le j \le k$, under HCS-I is given by
$$\hat{\theta}_{jM} = \begin{cases} \dfrac{R_{1j}}{u_j}, & T = T_1, \quad M = 0, 1, \ldots, r-1 \ (\text{Case 1}), \\[2mm] \dfrac{r_j}{u_j}, & T = w_r, \quad M = r, \ldots, N \ (\text{Case 2}). \end{cases} \tag{7}$$
The MLE of $\theta_j$, $1 \le j \le k$, under HCS-II is given by
$$\hat{\theta}_{jM} = \begin{cases} \dfrac{r_j}{u_j}, & T = w_r, \quad M = 0, 1, \ldots, r-1 \ (\text{Case 1}), \\[2mm] \dfrac{R_{2j}}{u_j}, & T = T_2, \quad M = r, \ldots, N \ (\text{Case 2}), \end{cases} \tag{8}$$
where $R_{1j}$, $R_{2j}$, and $r_j$ denote the numbers of $X_j$-failures among the $D$ observed failures in the corresponding cases.
Remark 1.
The MLEs of $\theta_j$ exist if at least $k$ failures are observed ($D \ge k$), with at least one failure from each sample, i.e., $D_j \ge 1$ for every $j = 1, \ldots, k$, or equivalently $1 \le D_j \le D - k + 1$.
We determined the MLEs in order to compare them with the Bayesian estimates obtained under different values of the loss function parameters and of the learning rate parameter, as described in Section 3.
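In both schemes and cases, the MLE has the common form $\hat{\theta}_{jM} = D_j / u_j$, with $D_j$ taken from the relevant case of (7) or (8). A sketch including the existence check of Remark 1 (our illustration):

```python
import numpy as np

def mle_theta(D, u):
    """MLEs theta_hat_j = D_j / u_j from (7)-(8); requires D_j >= 1 for all j (Remark 1)."""
    D = np.asarray(D, dtype=float)
    if np.any(D < 1):
        raise ValueError("MLEs do not exist: some sample has no observed failure.")
    return D / np.asarray(u, dtype=float)
```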

2.2. Generalized Bayes Estimation

The parameters $\Theta$ are assumed to be unknown. We consider independent conjugate gamma priors for the components of $\Theta$, i.e., $\theta_j \sim \mathrm{Gam}(a_j, b_j)$. Hence, the joint prior distribution of $\Theta$ is given by
$$\pi(\Theta) = \prod_{j=1}^{k} \pi_j(\theta_j), \tag{9}$$
where
$$\pi_j(\theta_j) = \frac{b_j^{a_j}}{\Gamma(a_j)}\, \theta_j^{a_j - 1} e^{-b_j \theta_j}, \tag{10}$$
and $\Gamma(\cdot)$ denotes the complete gamma function.
Combining (6) and (9), after raising (6) to the fractional power $\eta$, the generalized Bayes posterior distribution of $\Theta$ is
$$\pi^{*}(\Theta \mid \mathrm{data}) = \prod_{j=1}^{k} \frac{(u_j \eta + b_j)^{D_j \eta + a_j}}{\Gamma(D_j \eta + a_j)}\, \theta_j^{D_j \eta + a_j - 1} \exp\{-\theta_j (u_j \eta + b_j)\}. \tag{11}$$
Notice that the generalized posterior distribution of each $\theta_j$ is the gamma distribution $\mathrm{Gam}(D_j \eta + a_j,\ u_j \eta + b_j)$, because $\pi_j$ is a conjugate prior.
In GB estimation, we consider two loss functions, the Linex and the general entropy loss functions:
(i) The Linex loss function, which is asymmetric, is given by
$$L_L(\varphi^{*}, \varphi) \propto e^{\nu(\varphi^{*} - \varphi)} - \nu(\varphi^{*} - \varphi) - 1, \qquad \nu \ne 0.$$
(ii) The general entropy (GE) loss function is
$$L_{GE}(\varphi^{*}, \varphi) \propto \left(\frac{\varphi^{*}}{\varphi}\right)^{c} - c \ln\left(\frac{\varphi^{*}}{\varphi}\right) - 1, \qquad c \ne 0.$$
Under the Linex loss function, the Bayes estimators of $\theta_j$ are given by
$$\hat{\theta}_{jL} = -\frac{1}{\nu} \ln E\big(e^{-\nu \theta_j}\big) = \frac{D_j \eta + a_j}{\nu} \ln\left(1 + \frac{\nu}{u_j \eta + b_j}\right), \qquad \nu \ne 0, \quad 1 \le j \le k. \tag{12}$$
Under the general entropy (GE) loss function, the Bayes estimators of $\theta_j$ are given by
$$\hat{\theta}_{jE} = \big\{E(\theta_j^{-c})\big\}^{-1/c} = \left[\frac{\Gamma(D_j \eta + a_j - c)}{\Gamma(D_j \eta + a_j)}\right]^{-1/c} \frac{1}{u_j \eta + b_j}, \qquad 1 \le j \le k. \tag{13}$$
Remark 2.
Putting $c = 1, -1$ in (13), we obtain the Bayes estimators of $\theta_j$ under the weighted squared error loss function and the squared error loss function, respectively.
Remark 3.
The estimators $\hat{\theta}_{jJ}$ are the Bayes estimators of $\theta_j$ under Jeffreys' non-informative prior $\pi_J \propto \prod_{j=1}^{k} \theta_j^{-1}$, which is obtained directly by putting $a_j = b_j = 0$ in (11); in that case, (13) reduces to the MLEs $\hat{\theta}_{jM}$ after putting $c = -1$.
The estimators $\hat{\theta}_j$ under HCS-I in the GB approach are obtained by putting $D_j = R_{1j}$ for Case 1 and $D_j = r_j$ for Case 2; under HCS-II, they are obtained by putting $D_j = r_j$ for Case 1 and $D_j = R_{2j}$ for Case 2.
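Because the generalized posterior of each $\theta_j$ is $\mathrm{Gam}(D_j\eta + a_j,\ u_j\eta + b_j)$, the estimators (12) and (13) have closed forms that are easy to evaluate. A sketch (our illustration; the log-gamma function is used for numerical stability, and the GE estimator requires $D_j\eta + a_j > c$):

```python
import numpy as np
from scipy.special import gammaln

def gb_estimators(D, u, a, b, eta, nu, c):
    """Generalized Bayes estimators under the Gam(D*eta + a, u*eta + b) posterior:
    Linex (Eq. 12) and general entropy (Eq. 13); c = -1 gives the posterior mean."""
    alpha = np.asarray(D) * eta + np.asarray(a)     # posterior shape
    beta = np.asarray(u) * eta + np.asarray(b)      # posterior rate
    theta_linex = (alpha / nu) * np.log1p(nu / beta)                        # Eq. (12)
    theta_ge = np.exp(-(gammaln(alpha - c) - gammaln(alpha)) / c) / beta    # Eq. (13)
    return theta_linex, theta_ge
```

Setting $a_j = b_j = 0$ and $c = -1$ reproduces the MLE $D_j/u_j$ for any $\eta$, in line with Remark 3.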

3. Numerical Study

This section presents the results of a Monte Carlo simulation study carried out to evaluate the performance of the inference methods derived in the previous section, together with an example illustrating these methods.

3.1. Simulation Study

We considered different choices of the three populations' sample sizes $(n_1, n_2, n_3)$ and of $r$, $T_1$, and $T_2$. We chose the exponential parameters $(\theta_1, \theta_2, \theta_3) = (0.2, 0.5, 0.9)$ and $T_1, T_2 \in \{2, 3, 4, 5\}$, and used 10,000 Monte Carlo repetitions. Using (7) and (8), we obtained the MLEs of $\theta_1, \theta_2, \theta_3$ and their estimated risks, which are presented in Table 1 for HCS-I and Table 2 for HCS-II.
For the simulation study, note that some of the simulated samples do not satisfy the condition in Remark 1 and must be discarded. Therefore, the average numbers of observed failures $(\bar{r}_1, \bar{r}_2, \bar{r}_3)$ in both cases of $T$ are computed, together with $p_1$ and $\bar{R}_1$ (listed in Table 1) and $p_2$ and $\bar{R}_2$ (listed in Table 2), where $p_1$ is the proportion of samples stopped at $T_1$, $\bar{R}_1$ is the mean number of observations up to $T_1$ ($R_1 \le r$), $p_2$ is the proportion of samples stopped at $T_2$, and $\bar{R}_2$ is the mean number of observations up to $T_2$ ($R_2 \ge r$). For the Bayesian study, the sample sizes are $(n_1, n_2, n_3) = (10, 10, 10)$, $T_1 = 2, 3$ for $r = 20$ and $T_2 = 4, 5$ for $r = 25$, and the hyperparameters are $(a_1, b_1, a_2, b_2, a_3, b_3) = (1, 5, 1, 2, 1.8, 2)$.
The values of the learning rate parameter are $\eta = 0.1, 0.4, 0.8$. Note that for $\eta = 0.1$ we chose $c = 1.5, 1, 0.85, 0.75$ and $\nu = 0.5, 0.1, 0.3, 0.5$; for $\eta = 0.4$ we chose $c = 1, 0.5, 0.25, 0.1$ and $\nu = 0.1, 0.7, 1, 4$; and finally for $\eta = 0.8$ we chose $c = 1, 0.1, 0.65, 1$ and $\nu = 0.5, 1.5, 3.5, 8.5$. The Bayesian estimates of $\theta_1, \theta_2, \theta_3$ under HCS-I and HCS-II are presented in Tables 3–6.
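One replication of this simulation can be sketched as follows, reusing the helper sketches from Section 2 (our illustration; the authors' exact code is not available, and the design values below mirror the first row of Table 1):

```python
import numpy as np

rng = np.random.default_rng(2023)

def one_replication(theta, n, r, T1):
    """One HCS-I replication: simulate k exponential samples, censor, estimate."""
    k = len(theta)
    x = [rng.exponential(1.0 / theta[j], n[j]) for j in range(k)]   # rate theta_j
    pooled = np.concatenate(x)
    labels = np.concatenate([np.full(n[j], j) for j in range(k)])
    order = np.argsort(pooled)
    w, lab = pooled[order], labels[order]
    T, D = hybrid_censor(w, r, T1=T1, scheme="I")
    Dj, uj = sufficient_stats(w[:D], lab[:D], n, T)
    return mle_theta(Dj, uj) if np.all(Dj >= 1) else None           # discard per Remark 1

# e.g., averaging over 10,000 replications with theta = (0.2, 0.5, 0.9),
# n = (10, 10, 10), (r, T1) = (20, 2) approximates the first row of Table 1.
```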

3.2. Illustrative Example

To illustrate the usefulness of the results developed in the previous sections, we consider three samples of size $n_1 = n_2 = n_3 = 10$ from Nelson's data (groups 1, 4, and 5; see [18], p. 462), corresponding to failure times (in minutes) of an insulating fluid subjected to a high-stress load. These failure times, denoted here as samples $X_i$, $i = 1, 2, 3$, and their order statistics $(w, j_i)$ are given in Table 7.
For the GB study, the hyperparameters are $(a_1, b_1, a_2, b_2, a_3, b_3) = (1, 2.6, 1, 2, 1, 3)$, $\eta = 0.1, 0.4$, and $c = 1, 0.8, 0.3$; $\nu = 0.1, 0.3, 1$. The MLEs and Bayesian estimates of the parameters are presented in Table 8 for HCS-I and Table 9 for HCS-II.

4. Conclusion

In this study, we considered joint hybrid censoring schemes (HCS-I and HCS-II) when the lifetimes of three populations have exponential distributions. We obtained the MLEs and the Bayes estimates of the parameters using different values of the learning rate parameter $\eta$ under the GE and Linex loss functions, in a simulation study and in an illustrative example. The Bayes estimators are clearly better than the MLEs; we therefore discuss the Bayesian results in detail. In the simulation study, $\eta = 0.1$ yields overestimation for $c = 1.5, 1$ and $\nu = 0.5, 0.1$ but underestimation for $c = 0.75$ and $\nu = 0.5$, so $c = 0.85$ and $\nu = 0.3$ lead to the best estimation results; for $\eta = 0.4$, there is overestimation for $c = 1, 0.5$ and $\nu = 0.1, 0.7$ but underestimation for $c = 0.1$ and $\nu = 4$, so $c = 0.25$ and $\nu = 1$ lead to the best results. Finally, for $\eta = 0.8$, the best results are obtained for $c = 0.65$ and $\nu = 3.5$. Owing to the chosen values of $T_1$ and $T_2$, there is little difference between the HCS-I and HCS-II results. In the illustrative example, the best results were obtained for $\eta = 0.1$ with $c = 0.8$ and $\nu = 0.3$. Extending this work to other types of censoring would be an interesting direction for future research.

Funding

Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2023R226), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Data Availability Statement

The data used to support the findings of this study are included in the article.

Acknowledgments

The authors extend their sincere appreciation to Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2023R226), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bissiri, P.G.; Holmes, C.C.; Walker, S.G. A general framework for updating belief distributions. Journal of the Royal Statistical Society, Series B 2016, 78, 1103–1130.
  2. Miller, J.W.; Dunson, D.B. Robust Bayesian inference via coarsening. Journal of the American Statistical Association 2019, 114, 1113–1125.
  3. Grünwald, P. The safe Bayesian: learning the learning rate via the mixability gap. In Algorithmic Learning Theory; Lecture Notes in Computer Science, Vol. 7568; Springer: Heidelberg, 2012; pp. 169–183.
  4. Grünwald, P.; van Ommen, T. Inconsistency of Bayesian inference for misspecified linear models, and a proposal for repairing it. Bayesian Analysis 2017, 12, 1069–1103.
  5. Grünwald, P. Safe probability. Journal of Statistical Planning and Inference 2018, 195, 47–63.
  6. De Heide, R.; Kirichenko, A.; Mehta, N.; Grünwald, P. Safe-Bayesian generalized linear regression. arXiv 2019, arXiv:1910.09227.
  7. Holmes, C.C.; Walker, S.G. Assigning a value to a power likelihood in a general Bayesian model. Biometrika 2017, 104, 497–503.
  8. Lyddon, S.P.; Holmes, C.C.; Walker, S.G. General Bayesian updating and the loss-likelihood bootstrap. Biometrika 2019, 106, 465–478.
  9. Martin, R. Invited comment on the article by van der Pas, Szabó, and van der Vaart. Bayesian Analysis 2017, 12, 1254–1258.
  10. Martin, R.; Ning, B. Empirical priors and coverage of posterior credible sets in a sparse normal mean model. Sankhyā, Series A 2020, 477–498. (Special issue in memory of Jayanta K. Ghosh.)
  11. Wu, P.S.; Martin, R. A comparison of learning rate selection methods in generalized Bayesian inference. Bayesian Analysis 2023, 18, 105–132.
  12. Abdel-Aty, Y.; Kayid, M.; Alomani, G. Generalized Bayes estimation based on a joint type-II censored sample from k-exponential populations. Mathematics 2023, 11, 2190.
  13. Abdel-Aty, Y.; Kayid, M.; Alomani, G. Generalized Bayes prediction study based on joint type-II censoring. Axioms 2023, 12, 716.
  14. Shafay, A.R.; Balakrishnan, N.; Abdel-Aty, Y. Bayesian inference based on a jointly type-II censored sample from two exponential populations. Journal of Statistical Computation and Simulation 2014, 84, 2427–2440.
  15. Su, F. Exact likelihood inference for multiple exponential populations under joint censoring. Ph.D. Thesis, McMaster University, Hamilton, ON, Canada, 2013.
  16. Abdel-Aty, Y. Exact likelihood inference for two populations from two-parameter exponential distributions under joint type-II censoring. Communications in Statistics—Theory and Methods 2017, 46, 9026–9041.
  17. Balakrishnan, N.; Kundu, D. Hybrid censoring: models, inferential results and applications. Computational Statistics & Data Analysis 2013, 57, 166–209.
  18. Nelson, W. Applied Life Data Analysis; Wiley: New York, 1982.
Table 1. MLEs under HCS-I.
$(n_1, n_2, n_3)$  $(r, T_1)$  $p_1$  $\bar{R}_1$  $(\bar{r}_1, \bar{r}_2, \bar{r}_3)$  $(\hat{\theta}_{1M}, \hat{\theta}_{2M}, \hat{\theta}_{3M})$  $Er(\hat{\theta}_{1M}, \hat{\theta}_{2M}, \hat{\theta}_{3M})$
(10,10,10) (20, 2) 0.73 16.9 (3.3,6.2,8.3) (0.345, 0.625, 1.070) (0.460, 0.300, 0.443)
(20, 3) 0.17 18.2 (3.8,7,8.9) (0.286, 0.604, 1.046) (0.235, 0.274, 0.408)
(25, 4) 0.62 22.7 (5.3,8.5,9.7) (0.256, 0.590, 1.024) (0.177, 0.246, 0.385)
(25, 5) 0.29 23.2 (5.8,8.9,9.8) (0.250, 0.583, 1.022) (0.123, 0.231, 0.368)
(8, 9,13) (20, 2) 0.55 17.4 (2.6,5.5,10.6) (0.427, 0.643, 1.022) (1.126, 0.336, 0.352)
(20, 3) 0.06 18.5 (2.8,5.9,11.2) (0.372, 0.620, 1.013) (1.464, 0.316, 0.349)
(25, 4) 0.41 23.1 (4.1,7.6,12.5) (0.280, 0.602, 1.003) (0.192, 0.261, 0.318)
(25, 5) 0.15 23.5 (4.4,7.8,12.6) (0.271, 0.595, 0.996) (0.218, 0.255, 0.316)
(12,11,7) (20, 2) 0.86 16.1 (3.9,6.9,5.8) (0.299, 0.613, 1.155) (0.670, 0.269, 0.618)
(20, 3) 0.33 17.9 (4.9,8.1,6.3) (0.263, 0.593, 1.115) (0.219, 0.245, 0.556)
(25, 4) 0.77 22.1 (6.5,9.5,6.8) (0.243, 0.583, 1.088) (0.108, 0.220, 0.518)
(25, 5) 0.46 22.9 (7.2,9.9,6.9) (0.239, 0.578, 1.063) (0.103, 0.212, 0.488)
(20,20,20) (40, 2) 0.85 35 (6.6,12.6,16.7) (0.244, 0.557, 0.985) (0.106, 0.171, 0.261)
(40, 3) 0.12 37.9 (7.7,14.2,17.8) (0.232, 0.547, 0.978) (0.094, 0.155, 0.254)
(50, 4) 0.73 46.6 (10.9,17.2,19.4) (0.226, 0.546, 0.968) (0.074, 0.144, 0.236)
(50, 5) 0.30 47.8 (11.8,17.9,19.6) (0.222, 0.541, 0.959) (0.070, 0.141, 0.231)
(16,18,26) (40, 2) 0.63 36.4 (5.1,11.1,21.4) (0.262, 0.562, 0.963) (0.269, 0.183, 0.224)
(40, 3) 0.02 38.2 (5.6,12,22.4) (0.248, 0.557, 0.957) (0.122, 0.179, 0.218)
(50, 4) 0.47 47.4 (8.5,15.3,25.1) (0.232, 0.549, 0.952) (0.088, 0.154, 0.205)
(50, 5) 0.11 48.3 (8.9,15.6,25.3) (0.229, 0.546, 0.949) (0.086, 0.152, 0.200)
(24,22,14) (40, 2) 0.96 33.1 (7.9,13.9,11.7) (0.233, 0.554, 1.023) (0.091, 0.162, 0.336)
(40, 3) 0.32 37.3 (10,16.4,12.8) (0.225, 0.544, 1.010) (0.076, 0.146, 0.316)
(50, 4) 0.89 45.2 (13.1,19,13.6) (0.220, 0.541, 0.996) (0.063, 0.133, 0.304)
(50, 5) 0.54 47.2 (14.7,20,13.8) (0.216, 0.537, 0.984) (0.059, 0.129, 0.293)
Table 2. MLEs under HCS-II.
$(n_1, n_2, n_3)$  $(r, T_2)$  $p_2$  $\bar{R}_2$  $(\bar{r}_1, \bar{r}_2, \bar{r}_3)$  $(\hat{\theta}_{1M}, \hat{\theta}_{2M}, \hat{\theta}_{3M})$  $Er(\hat{\theta}_{1M}, \hat{\theta}_{2M}, \hat{\theta}_{3M})$
(10,10,10) (20, 2) 0.27 21 (4,7.3,9) (0.282, 0.599, 1.043) (0.397, 0.267, 0.406)
(20, 3) 0.83 22.3 (4.6,7.9,9.4) (0.266, 0.598, 1.041) (0.154, 0.246, 0.390)
(25, 4) 0.38 25.8 (6.2,9.2,9.9) (0.246, 0.578, 1.009) (0.124, 0.225, 0.361)
(25, 5) 0.71 26.3 (6.7,9.4,9.9) (0.245, 0.581, 1.004) (0.114, 0.222, 0.355)
(8, 9,13) (20, 2) 0.45 21.3 (3,6.2,11.4) (0.336, 0.625, 1.017) (0.487, 0.310, 0.352)
(20, 3) 0.94 23 (3.6,7,12.2) (0.302, 0.610, 1.009) (0.272, 0.273, 0.320)
(25, 4) 0.59 26.1 (4.8,8.1,12.8) (0.264, 0.596, 0.990) (0.155, 0.252, 0.302)
(25, 5) 0.85 26.7 (5.2,8.4,12.9) (0.261, 0.591, 0.987) (0.145, 0.242, 0.306)
(12,11,7) (20, 2) 0.14 20.8 (5.2,8.4,6.5) (0.254, 0.583, 1.106) (0.135, 0.239, 0.531)
(20, 3) 0.67 21.8 (5.7,8.8,6.6) (0.250, 0.587, 1.095) (0.122, 0.230, 0.517)
(25, 4) 0.23 25.9 (8,10.3,6.9) (0.233, 0.570, 1.063) (0.097, 0.210, 0.507)
(25, 5) 0.54 26.1 (8.2,10.4,7) (0.234, 0.567, 1.058) (0.096, 0.199, 0.502)
(20,20,20) (40, 2) 0.15 41.3 (7.9,14.5,17.9) (0.234, 0.548, 0.976) (0.095, 0.161, 0.256)
(40, 3) 0.88 43.9 (9.1,15.6,18.7) (0.230, 0.550, 0.973) (0.084, 0.152, 0.243)
(50, 4) 0.27 51.2 (12.4,18.2,19.7) (0.221, 0.540, 0.959) (0.069, 0.139, 0.233)
(50, 5) 0.70 52.1 (13.1,18.6,19.8) (0.221, 0.541, 0.956) (0.068, 0.136, 0.228)
(16,18,26) (40, 2) 0.37 41.8 (5.8,12.2,22.6) (0.248, 0.557, 0.960) (0.117, 0.177, 0.219)
(40, 3) 0.98 44.7 (7.3,14,24.3) (0.240, 0.556, 0.958) (0.100, 0.165, 0.207)
(50, 4) 0.53 51.6 (9.3,16,25.5) (0.229, 0.545, 0.949) (0.084, 0.148, 0.199)
(50, 5) 0.89 52.9 (10.2,16.6,25.7) (0.228, 0.545, 0.946) (0.079, 0.147, 0.194)
(24,22,14) (40, 2) 0.04 41 (10.4,16.7,12.9) (0.224, 0.538, 1.005) (0.076, 0.145, 0.302)
(40, 3) 0.68 42.7 (11.2,17.5,13.1) (0.224, 0.545, 1.001) (0.073, 0.144, 0.309)
(50, 4) 0.11 50.9 (15.7,20.5,13.9) (0.214, 0.535, 0.977) (0.058, 0.128, 0.287)
(50, 5) 0.46 51.5 (16.1,20.6,13.9) (0.215, 0.535, 0.977) (0.059, 0.129, 0.291)
Table 3. GB estimators under HCS-I, GE loss.
$\eta = 0.1$
$(n_1, n_2, n_3)$  $(r, T_1)$  $(\hat{\theta}_{1GE}, \hat{\theta}_{2GE}, \hat{\theta}_{3GE})$  $Er(\hat{\theta}_{1GE}, \hat{\theta}_{2GE}, \hat{\theta}_{3GE})$  $(\hat{\theta}_{1GE}, \hat{\theta}_{2GE}, \hat{\theta}_{3GE})$  $Er(\hat{\theta}_{1GE}, \hat{\theta}_{2GE}, \hat{\theta}_{3GE})$
c = 1.5 c = 1
(10,10,10) (20, 2) (0.249, 0.608, 1.014) (0.058, 0.125, 0.150) (0.214, 0.531, 0.931) (0.029, 0.079, 0.093)
(20, 3) (0.245, 0.598, 0.999) (0.052, 0.122, 0.153) (0.210, 0.524, 0.926) (0.029, 0.080, 0.098)
(25, 4) (0.244, 0.586, 1.001) (0.049, 0.125, 0.137) (0.212, 0.526, 0.922) (0.030, 0.080, 0.097)
(25, 5) (0.239, 0.581, 0.988) (0.052, 0.123, 0.151) (0.210, 0.525, 0.921) (0.031, 0.080, 0.096)
c = 0.85 c = 0.7
(20, 2) (0.204, 0.505, 0.905) (0.024, 0.075, 0.095) (0.193, 0.483, 0.878) (0.024, 0.071, 0.094)
(20, 3) (0.202, 0.502, 0.898) (0.025, 0.075, 0.094) (0.192, 0.484, 0.873) (0.028, 0.074, 0.095)
(25, 4) (0.202, 0.501, 0.893) (0.028, 0.077, 0.093) (0.194, 0.486, 0.873) (0.028, 0.074, 0.094)
(25, 5) (0.202, 0.509, 0.893) (0.028, 0.075, 0.094) (0.191, 0.486, 0.871) (0.028, 0.075, 0.096)
η = 0.4
c = 1 c = 0.5
(20, 2) (0.240, 0.564, 0.976) (0.075, 0.175, 0.228) (0.217, 0.534, 0.940) (0.064, 0.155, 0.211)
(20, 3) (0.233, 0.557, 0.959) (0.073, 0.164, 0.227) (0.211, 0.520, 0.932) (0.064, 0.148, 0.207)
(25, 4) (0.229, 0.556, 0.952) (0.070, 0.157, 0.214) (0.212, 0.534, 0.925) (0.061, 0.144, 0.196)
(25, 5) (0.225, 0.552, 0.959) (0.068, 0.153, 0.213) (0.207, 0.515, 0.924) (0.060, 0.139, 0.196)
c = 0.25 c = 0.1
(20, 2) (0.203, 0.509, 0.917) (0.060, 0.147, 0.205) (0.186, 0.479, 0.878) (0.062, 0.144, 0.199)
(20, 3) (0.200, 0.510, 0.917) (0.061, 0.146, 0.200) (0.185, 0.473, 0.872) (0.062, 0.139, 0.193)
(25, 4) (0.203, 0.503, 0.901) (0.061, 0.136, 0.191) (0.187, 0.496, 0.865) (0.057, 0.133, 0.187)
(25, 5) (0.199, 0.505, 0.890) (0.058, 0.134, 0.190) (0.191, 0.495, 0.967) (0.057, 0.132, 0.189)
η = 0.8
c = 1 c = 0.1
(20, 2) (0.254, 0.589, 1.012) (0.106, 0.216, 0.304) (0.220, 0.528, 0.951) (0.087, 0.187, 0.267)
(20, 3) (0.245, 0.583, 0.999) (0.101, 0.203, 0.283) (0.214, 0.531, 0.937) (0.086, 0.184, 0.257)
(25, 4) (0.238, 0.561, 0.979) (0.090, 0.187, 0.269) (0.215, 0.526, 0.936) (0.079, 0.167, 0.246)
(25, 5) (0.235, 0.565, 0.988) (0.088, 0.183, 0.270) (0.210, 0.522, 0.913) (0.075, 0.161, 0.235)
c = 0.65 c = 1
(20, 2) (0.198, 0.510, 0.913) (0.084, 0.180, 0.259) (0.183, 0.490, 0.899) (0.083, 0.179, 0.254)
(20, 3) (0.196, 0.497, 0.921) (0.081, 0.173, 0.251) (0.185, 0.496, 0.888) (0.080, 0.169, 0.247)
(25, 4) (0.197, 0.522, 0.901) (0.073, 0.161, 0.237) (0.193, 0.494, 0.876) (0.074, 0.159, 0.227)
(25, 5) (0.200, 0.503, 0.899) (0.074, 0.155, 0.240) (0.191, 0.492, 0.873) (0.071, 0.153, 0.232)
Table 4. GB estimators under HCS-II, GE loss.
$\eta = 0.1$
$(n_1, n_2, n_3)$  $(r, T_2)$  $(\hat{\theta}_{1GE}, \hat{\theta}_{2GE}, \hat{\theta}_{3GE})$  $Er(\hat{\theta}_{1GE}, \hat{\theta}_{2GE}, \hat{\theta}_{3GE})$  $(\hat{\theta}_{1GE}, \hat{\theta}_{2GE}, \hat{\theta}_{3GE})$  $Er(\hat{\theta}_{1GE}, \hat{\theta}_{2GE}, \hat{\theta}_{3GE})$
(10,10,10) c = 1.5 c = 1
(20, 2) (0.245, 0.599, 1.008) (0.051, 0.120, 0.141) (0.210, 0.523, 0.927) (0.031, 0.085, 0.098)
(20, 3) (0.246, 0.594, 1.009) (0.053, 0.127, 0.137) (0.212, 0.525, 0.927) (0.032, 0.084, 0.097)
(25, 4) (0.238, 0.584, 0.988) (0.052, 0.119, 0.145) (0.207, 0.520, 0.920) (0.033, 0.084, 0.097)
(25, 5) (0.238, 0.585, 0.988) (0.052, 0.116, 0.146) (0.210, 0.524, 0.914) (0.033, 0.080, 0.099)
c = 0.85 c = 0.7
(20, 2) (0.201, 0.501, 0.903) (0.027, 0.077, 0.095) (0.191, 0.479, 0.872) (0.029, 0.075, 0.095)
(20, 3) (0.204, 0.502, 0.903) (0.028, 0.078, 0.092) (0.194, 0.484, 0.873) (0.028, 0.076, 0.092)
(25, 4) (0.201, 0.501, 0.895) (0.031, 0.076, 0.095) (0.190, 0.487, 0.872) (0.031, 0.077, 0.097)
(25, 5) (0.201, 0.500, 0.886) (0.031, 0.075, 0.095) (0.192, 0.483, 0.867) (0.031, 0.075, 0.096)
η = 0.4
c = 1 c = 0.5
(20, 2) (0.236, 0.561, 0.978) (0.076, 0.165, 0.223) (0.212, 0.524, 0.930) (0.067, 0.151, 0.204)
(20, 3) (0.232, 0.555, 0.947) (0.075, 0.164, 0.217) (0.214, 0.536, 0.920) (0.065, 0.150, 0.199)
(25, 4) (0.226, 0.540, 0.958) (0.070, 0.152, 0.210) (0.209, 0.493, 0.925) (0.063, 0.138, 0.196)
(25, 5) (0.224, 0.555, 0.942) (0.070, 0.151, 0.212) (0.212, 0.513, 0.908) (0.062, 0.135, 0.192)
c = 0.25 c = 0.1
(20, 2) (0.201, 0.501, 0.905) (0.063, 0.143, 0.198) (0.186, 0.483, 0.870) (0.063, 0.144, 0.197)
(20, 3) (0.205, 0.514, 0.901) (0.063, 0.144, 0.196) (0.190, 0.492, 0.874) (0.063, 0.140, 0.190)
(25, 4) (0.201, 0.507, 0.902) (0.062, 0.134, 0.194) (0.188, 0.497, 0.890) (0.059, 0.130, 0.189)
(25, 5) (0.203, 0.502, 0.886) (0.060, 0.131, 0.192) (0.193, 0.483, 0.923) (0.060, 0.127, 0.191)
η = 0.8
c = 1 c = 0.1
(20, 2) (0.245, 0.569, 0.994) (0.101, 0.199, 0.288) (0.208, 0.533, 0.948) (0.085, 0.182, 0.260)
(20, 3) (0.246, 0.580, 0.984) (0.098, 0.196, 0.276) (0.214, 0.540, 0.931) (0.083, 0.177, 0.252)
(25, 4) (0.238, 0.569, 0.964) (0.088, 0.184, 0.260) (0.213, 0.526, 0.908) (0.077, 0.162, 0.239)
(25, 5) (0.236, 0.551, 0.991) (0.086, 0.175, 0.264) (0.209, 0.514, 0.930) (0.075, 0.157, 0.242)
c = 0.65 c = 1
(20, 2) (0.198, 0.506, 0.911) (0.082, 0.173, 0.245) (0.183, 0.489, 0.885) (0.081, 0.170, 0.245)
(20, 3) (0.199, 0.507, 0.916) (0.080, 0.162, 0.242) (0.196, 0.494, 0.885) (0.081, 0.164, 0.239)
(25, 4) (0.200, 0.504, 0.899) (0.073, 0.154, 0.235) (0.191, 0.490, 0.875) (0.071, 0.150, 0.231)
(25, 5) (0.202, 0.507, 0.898) (0.073, 0.153, 0.232) (0.196, 0.493, 0.871) (0.071, 0.150, 0.231)
Table 5. GB estimators under HCS-I, Linex loss.
$\eta = 0.1$
$(n_1, n_2, n_3)$  $(r, T_1)$  $(\hat{\theta}_{1L}, \hat{\theta}_{2L}, \hat{\theta}_{3L})$  $Er(\hat{\theta}_{1L}, \hat{\theta}_{2L}, \hat{\theta}_{3L})$  $(\hat{\theta}_{1L}, \hat{\theta}_{2L}, \hat{\theta}_{3L})$  $Er(\hat{\theta}_{1L}, \hat{\theta}_{2L}, \hat{\theta}_{3L})$
(10,10,10) υ = 0.5 υ = 0.1
(20, 2) (0.222, 0.584, 1.026) (0.037, 0.115, 0.166) (0.215, 0.539, 0.947) (0.031, 0.086, 0.110)
(20, 3) (0.219, 0.570, 1.018) (0.035, 0.112, 0.158) (0.213, 0.535, 0.940) (0.028, 0.084, 0.109)
(25, 4) (0.219, 0.568, 1.004) (0.036, 0.110, 0.160) (0.214, 0.542, 0.935) (0.031, 0.082, 0.105)
(25, 5) (0.217, 0.562, 1.006) (0.036, 0.110, 0.153) (0.211, 0.528, 0.933) (0.032, 0.086, 0.105)
υ = 0.3 υ = 0.5
(20, 2) (0.209, 0.501, 0.882) (0.025, 0.069, 0.090) (0.206, 0.491, 0.856) (0.023, 0.067, 0.092)
(20, 3) (0.207, 0.499, 0.877) (0.026, 0.071, 0.090) (0.204, 0.486, 0.853) (0.025, 0.069, 0.096)
(25, 4) (0.207, 0.509, 0.875) (0.029, 0.072, 0.089) (0.204, 0.496, 0.855) (0.027, 0.072, 0.098)
(25, 5) (0.207, 0.499, 0.880) (0.028, 0.072, 0.094) (0.203, 0.490, 0.852) (0.028, 0.070, 0.098)
η = 0.4
υ = 0.1 υ = 0.7
(10,10,10) (20, 2) (0.243, 0.574, 0.988) (0.077, 0.178, 0.237) (0.230, 0.537, 0.922) (0.067, 0.150, 0.193)
(20, 3) (0.234, 0.563, 0.970) (0.074, 0.168, 0.228) (0.225, 0.532, 0.898) (0.067, 0.144, 0.187)
(25, 4) (0.232, 0.558, 0.958) (0.070, 0.159, 0.220) (0.223, 0.534, 0.907) (0.065, 0.138, 0.184)
(25, 5) (0.227, 0.554, 0.967) (0.068, 0.155, 0.216) (0.222, 0.530, 0.899) (0.064, 0.137, 0.182)
υ = 1 υ = 4
(20, 2) (0.227, 0.525, 0.895) (0.067, 0.140, 0.184) (0.199, 0.440, 0.724) (0.050, 0.122, 0.217)
(20, 3) (0.222, 0.516, 0.882) (0.065, 0.138, 0.181) (0.198, 0.442, 0.727) (0.051, 0.119, 0.215)
(25, 4) (0.221, 0.526, 0.885) (0.063, 0.132, 0.173) (0.201, 0.448, 0.728) (0.052, 0.113, 0.208)
(25, 5) (0.219, 0.518, 0.883) (0.063, 0.131, 0.175) (0.199, 0.451, 0.728) (0.051, 0.114, 0.209)
η = 0.8
υ = 0.5 υ = 1.5
(10,10,10) (20, 2) (0.257, 0.568, 0.985) (0.103, 0.201, 0.275) (0.244, 0.551, 0.907) (0.081, 0.181, 0.238)
(20, 3) (0.241, 0.571, 0.950) (0.096, 0.194, 0.266) (0.233, 0.541, 0.914) (0.091, 0.175, 0.229)
(25, 4) (0.238, 0.568, 0.970) (0.090, 0.180, 0.253) (0.230, 0.541, 0.914) (0.083, 0.163, 0.219)
(25, 5) (0.230, 0.551, 0.950) (0.085, 0.170, 0.249) (0.228, 0.530, 0.904) (0.080, 0.157, 0.217)
υ = 3.5 υ = 8.5
(20, 2) (0.230, 0.504, 0.846) (0.082, 0.152, 0.209) (0.200, 0.424, 0.692) (0.065, 0.140, 0.255)
(20, 3) (0.221, 0.491, 0.835) (0.080, 0.147, 0.207) (0.199, 0.431, 0.693) (0.066, 0.138, 0.251)
(25, 4) (0.216, 0.502, 0.844) (0.075, 0.139, 0.197) (0.199, 0.441, 0.698) (0.062, 0.124, 0.237)
(25, 5) (0.218, 0.508, 0.820) (0.072, 0.136, 0.193) (0.199, 0.433, 0.699) (0.061, 0.120, 0.246)
Table 6. GB estimators under HCS-II, Linex loss.
$\eta = 0.1$
$(n_1, n_2, n_3)$  $(r, T_2)$  $(\hat{\theta}_{1L}, \hat{\theta}_{2L}, \hat{\theta}_{3L})$  $Er(\hat{\theta}_{1L}, \hat{\theta}_{2L}, \hat{\theta}_{3L})$  $(\hat{\theta}_{1L}, \hat{\theta}_{2L}, \hat{\theta}_{3L})$  $Er(\hat{\theta}_{1L}, \hat{\theta}_{2L}, \hat{\theta}_{3L})$
(10,10,10) υ = 0.5 υ = 0.1
(20, 2) (0.219, 0.569, 1.012) (0.036, 0.114, 0.167) (0.212, 0.531, 0.942) (0.030, 0.088, 0.107)
(20, 3) (0.221, 0.569, 1.014) (0.036, 0.115, 0.163) (0.213, 0.530, 0.941) (0.032, 0.092, 0.108)
(25, 4) (0.217, 0.560, 1.005) (0.037, 0.110, 0.153) (0.209, 0.531, 0.930) (0.034, 0.085, 0.107)
(25, 5) (0.217, 0.561, 0.999) (0.038, 0.108, 0.151) (0.214, 0.527, 0.931) (0.033, 0.085, 0.105)
υ = 0.3 υ = 0.5
(20, 2) (0.206, 0.499, 0.883) (0.027, 0.074, 0.093) (0.202, 0.486, 0.853) (0.027, 0.071, 0.097)
(20, 3) (0.206, 0.510, 0.877) (0.030, 0.073, 0.088) (0.205, 0.491, 0.850) (0.027, 0.071, 0.089)
(25, 4) (0.206, 0.504, 0.882) (0.031, 0.073, 0.089) (0.202, 0.490, 0.853) (0.030, 0.072, 0.098)
(25, 5) (0.207, 0.499, 0.869) (0.032, 0.072, 0.090) (0.203, 0.489, 0.848) (0.030, 0.069, 0.099)
η = 0.4
υ = 0.1 υ = 0.7
(10,10,10) (20, 2) (0.232, 0.569, 0.982) (0.075, 0.170, 0.230) (0.224, 0.536, 0.904) (0.069, 0.146, 0.189)
(20, 3) (0.233, 0.564, 0.984) (0.075, 0.165, 0.226) (0.225, 0.538, 0.916) (0.068, 0.142, 0.185)
(25, 4) (0.228, 0.547, 0.959) (0.070, 0.153, 0.214) (0.217, 0.526, 0.900) (0.065, 0.134, 0.182)
(25, 5) (0.222, 0.550, 0.956) (0.070, 0.152, 0.212) (0.221, 0.528, 0.898) (0.066, 0.133, 0.180)
υ = 1 υ = 4
(20, 2) (0.222, 0.522, 0.884) (0.067, 0.141, 0.180) (0.197, 0.441, 0.729) (0.054, 0.122, 0.220)
(20, 3) (0.221, 0.518, 0.883) (0.066, 0.137, 0.176) (0.198, 0.449, 0.730) (0.054, 0.118, 0.211)
(25, 4) (0.217, 0.528, 0.867) (0.064, 0.131, 0.173) (0.198, 0.439, 0.724) (0.054, 0.108, 0.215)
(25, 5) (0.216, 0.522, 0.879) (0.064, 0.127, 0.173) (0.200, 0.455, 0.728) (0.053, 0.113, 0.214)
η = 0.8
υ = 0.5 υ = 1.5
(10,10,10) (20, 2) (0.241, 0.561, 0.984) (0.099, 0.191, 0.267) (0.235, 0.546, 0.916) (0.092, 0.174, 0.233)
(20, 3) (0.243, 0.557, 0.980) (0.094, 0.184, 0.257) (0.234, 0.528, 0.920) (0.089, 0.166, 0.223)
(25, 4) (0.225, 0.541, 0.943) (0.084, 0.168, 0.248) (0.225, 0.538, 0.907) (0.081, 0.158, 0.219)
(25, 5) (0.230, 0.543, 0.927) (0.084, 0.168, 0.237) (0.225, 0.522, 0.913) (0.080, 0.154, 0.220)
υ = 3.5 υ = 8.5
(20, 2) (0.217, 0.501, 0.834) (0.080, 0.149, 0.204) (0.198, 0.435, 0.699) (0.067, 0.140, 0.254)
(20, 3) (0.224, 0.491, 0.829) (0.080, 0.145, 0.197) (0.200, 0.449, 0.700) (0.067, 0.137, 0.244)
(25, 4) (0.216, 0.504, 0.824) (0.074, 0.137, 0.197) (0.197, 0.435, 0.697) (0.062, 0.122, 0.246)
(25, 5) (0.214, 0.504, 0.827) (0.074, 0.133, 0.197) (0.200, 0.441, 0.697) (0.063, 0.122, 0.244)
Table 7. Samples $X_1$, $X_2$, and $X_3$, and their ordered values $(w, j_i)$, where $\delta_{ij} = 1$.
Sample Data
X1 1.89, 4.03, 1.54, 0.31, 0.66, 1.7, 2.17, 1.82, 9.99, 2.24
X2 1.17, 3.87, 2.8, 0.7, 3.82, 0.02, 0.5, 3.72, 0.06, 3.57
X3 8.11, 3.17, 5.55, 0.80, 0.20, 1.13, 6.63, 1.08, 2.44, 0.78
Ordered data (w, ji)
(0.02,2), (0.06,2), (0.20,3), (0.31,1), (0.50,2), (0.66,1), (0.70,2), (0.78,3), (0.80,3), (1.08,3), (1.13,3), (1.17,2), (1.54,1), (1.70,1), (1.82,1), (1.89,1), (2.17,1), (2.24,1), (2.44,3), (2.80,2), (3.17,3), (3.57,2), (3.72,2), (3.82,2), (3.87,2), (4.03,1), (5.55,3), (6.63,3), (8.11,3), (9.99,1)
Table 8. ML and GB estimators under HCS-I.
$r$  $(r_1, r_2, r_3)$  $T_1$  $(\hat{\theta}_1, \hat{\theta}_2, \hat{\theta}_3)$
20 (6,5,5)
$T_1 \in (1.89, 2.17)$
2 MLE (0.377, 0.402, 0.357)
GB η = 0.1 η = 0.4
c = 1 (0.382, 0.462, 0.341) (0.379, 0.430, 0.349)
c = 0.8 (0.360, 0.435, 0.321) (0.368, 0.416, 0.338)
c = 0.3 (0.305, 0.364, 0.268) (0.341, 0.382, 0.310)
υ = 0.1 (0.386, 0.470, 0.345) (0.381, 0.433, 0.351)
υ = 0.3 (0.369, 0.442, 0.330) (0.373, 0.421, 0.343)
υ = 1 (0.342, 0.403, 0.307) (0.359, 0.402, 0.330)
20 (8,6,6)
$T_1 \in (2.8, 4.03)$
3 MLE (0.476, 0.365, 0.371)
GB η = 0.1 η = 0.4
c = 1 (0.420, 0.439, 0.346) (0.450, 0.396, 0.359)
c = 0.8 (0.399, 0.414, 0.327) (0.440, 0.385, 0.349)
c = 0.3 (0.344, 0.351, 0.277) (0.414, 0.357, 0.323)
υ = 0.1 (0.425, 0.445, 0.350) (0.453, 0.399, 0.361)
υ = 0.3 (0.406, 0.422, 0.336) (0.443, 0.389, 0.353)
υ = 1 (0.378, 0.388, 0.314) (0.428, 0.375, 0.341)
25 (8,5,6)
$T_1 \in (2.44, 2.80)$
2.5 MLE (0.462, 0.334, 0.365)
GB η = 0.1 η = 0.4
c = 1 (0.415, 0.429, 0.345) (0.441, 0.376, 0.355)
c = 0.8 (0.394, 0.403, 0.325) (0.431, 0.364, 0.345)
c = 0.3 (0.340, 0.338, 0.275) (0.405, 0.334, 0.320)
υ = 0.1 (0.420, 0.435, 0.348) (0.443, 0.378, 0.357)
υ = 0.3 (0.402, 0.412, 0.334) (0.434, 0.369, 0.350)
υ = 1 (0.374, 0.377, 0.312) (0.419, 0.354, 0.338)
25 (8,10,7)
$T_1 \in (3.87, 4.03)$
4 MLE (0.475, 0.494, 0.366)
GB η = 0.1 η = 0.4
c = 1 (0.420, 0.497, 0.346) (0.450, 0.495, 0.357)
c = 0.8 (0.399, 0.474, 0.328) (0.440, 0.486, 0.348)
c = 0.3 (0.344, 0.416, 0.280) (0.414, 0.462, 0.325)
υ = 0.1 (0.425, 0.503, 0.350) (0.453, 0.498, 0.359)
υ = 0.3 (0.406, 0.479, 0.336) (0.443, 0.488, 0.352)
υ = 1 (0.378, 0.444, 0.315) (0.428, 0.472, 0.341)
Table 9. ML and GB estimators under HCS-II.
$r$  $(r_1, r_2, r_3)$  $T_2$  $(\hat{\theta}_1, \hat{\theta}_2, \hat{\theta}_3)$
20 (8,6,6)
$T_2 < 3.17$
2 MLE (0.476, 0.365, 0.371)
GB η = 0.1 η = 0.4
c = 1 (0.420, 0.439, 0.346) (0.450, 0.396, 0.359)
c = 0.8 (0.399, 0.414, 0.327) (0.440, 0.385, 0.349)
c = 0.3 (0.344, 0.351, 0.277) (0.414, 0.357, 0.323)
υ = 0.1 (0.425, 0.445, 0.350) (0.453, 0.399, 0.361)
υ = 0.3 (0.406, 0.422, 0.336) (0.443, 0.389, 0.353)
υ = 1 (0.378, 0.388, 0.314) (0.428, 0.375, 0.341)
20 (8,8,7)
$T_2 \in (3.72, 3.82)$
3.8 MLE (0.401, 0.397, 0.333)
GB η = 0.1 η = 0.4
c = 1 (0.392, 0.448, 0.333) (0.397, 0.418, 0.333)
c = 0.8 (0.372, 0.426, 0.315) (0.388, 0.408, 0.325)
c = 0.3 (0.321, 0.367, 0.270) (0.365, 0.384, 0.304)
υ = 0.1 (0.396, 0.454, 0.337) (0.399, 0.420, 0.335)
υ = 0.3 (0.380, 0.432, 0.324) (0.392, 0.412, 0.329)
υ = 1 (0.355, 0.400, 0.304) (0.380, 0.398, 0.320)
25 (8,10,7)
$T_2 < 4.03$
4 MLE (0.394, 0.494, 0.324)
GB η = 0.1 η = 0.4
c = 1 (0.389, 0.497, 0.329) (0.391, 0.495, 0.326)
c = 0.8 (0.369, 0.474, 0.312) (0.382, 0.486, 0.318)
c = 0.3 (0.318, 0.416, 0.267) (0.360, 0.462, 0.297)
υ = 0.1 (0.393, 0.503, 0.333) (0.393, 0.498, 0.328)
υ = 0.3 (0.376, 0.479, 0.320) (0.386, 0.488, 0.322)
υ = 1 (0.352, 0.444, 0.301) (0.374, 0.472, 0.313)
25 (9,10,10)
$T_2 \in (8.11, 9.99)$
MLE (0.355, 0.494, 0.335)
GB η = 0.1 η = 0.4
c = 1 (0.370, 0.497, 0.334) (0.361, 0.495, 0.334)
c = 0.8 (0.352, 0.474, 0.319) (0.353, 0.486, 0.328)
c = 0.3 (0.306, 0.416, 0.279) (0.344, 0.462, 0.311)
υ = 0.1 (0.374, 0.503, 0.337) (0.362, 0.498, 0.335)
υ = 0.3 (0.360, 0.479, 0.326) (0.357, 0.488, 0.331)
υ = 1 (0.338, 0.444, 0.309) (0.357, 0.472, 0.324)