1. Introduction
Generalized Bayesian (GB) inference is a Bayesian analysis in which the likelihood function of the parameter is raised to a positive power, the learning rate parameter (LRP), before being combined with the prior; the traditional Bayesian framework is recovered when the LRP equals 1. In other words, if the parameter has a prior distribution, the GB posterior distribution is proportional to the prior density multiplied by the likelihood raised to the power of the LRP. For more information on the GB method and how to select the value of the learning rate parameter, see [1,2,3,4,5,6,7,8,9,10,11,12,13]. In particular, the Safe Bayes algorithm, based on the minimization of a sequential risk measure, has been used in [3,4,5,6] to study learning rate selection. In [7,8], an alternative approach to learning rate selection was presented, which includes two different information-adaptation strategies. In [12], GB estimation (GBE) was investigated using a joint type-II censored sample from several exponential populations and different values of the learning rate parameter. A similar study was presented in [13], but focused on joint hybrid censoring.
Based on the currently available observations, Bayesian prediction can be used to determine a point predictor or a prediction interval for unknown future values from the same sample; this is called a one-sample prediction scheme. Bayesian prediction can also be used to predict one or more future samples in addition to the currently available observations; such schemes are referred to as two-sample and multi-sample prediction schemes. Many authors have dealt with the prediction of future failures or samples based on different censoring techniques and different prediction methods; we list some that are relevant to our research. For example, [13] investigated GB prediction (GBP) using a joint type-II censored sample from several exponential populations. A study using a joint type-II censored sample from two exponential populations for Bayes estimation and prediction was published in [14]. Based on generalized order statistics with multiple type-II censoring, Bayesian prediction for future values of distributions from a class of exponential distributions was developed in [15,16].
In Bayesian analysis, the parameter of the analyzed distribution is random and is assigned a prior distribution. In empirical Bayesian (EB) analysis, the hyperparameters (i.e., the parameters of the prior distribution) are also unknown. The marginal density function of the data, which depends on the hyperparameters, is obtained by combining the density function of the distribution with the prior distribution, and the maximum likelihood estimators (MLEs) of the hyperparameters are then determined from this marginal density using the observed data. Several authors have used the EB approach. For example, [17] used type-II progressive censoring from the Lomax model to investigate EB estimation of reliability measures. Based on progressively type-II censored data from the Kumaraswamy distribution, [18] derived the EB estimators of the parameter, the reliability, and the hazard function.
Based on type-II censored samples, we examine the GBE, the generalized empirical Bayesian estimation (GEBE), the GBP, and the generalized empirical Bayesian prediction (GEBP) for the one-parameter Pareto distribution. Thus, the main objective of this study is to compare all GB and GEB results for different LRP values, including the value 1, which corresponds to classical Bayes.
The rest of the paper is structured as follows: Section 2 presents the Pareto distribution and studies GB and GEB estimation, while Section 3 deals with GBP and GEBP. Section 4 presents a simulation study based on type-II censored samples from the Pareto distribution to evaluate GBE, GEBE, GBP, and GEBP for different LRP values and to compare the results. Section 5 concludes the paper with a discussion and conclusions.
2. Estimation Study
In this section, we introduce our model, the one-parameter Pareto distribution, and study the GB and GEB estimation problems for this model. The Pareto distribution has applications in economics and management, social and geophysical phenomena, and the sciences. More information about the distribution can be found in [19].
The Model
The probability density function (pdf), cumulative distribution function (cdf), and survival function of the one-parameter Pareto distribution are given by (1), (2), and (3), respectively.
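For reference, a commonly used form of the one-parameter Pareto distribution, assuming a unit scale parameter (the paper's exact parameterization may differ), is

$$
f(x;\alpha)=\alpha\,x^{-(\alpha+1)},\qquad
F(x;\alpha)=1-x^{-\alpha},\qquad
S(x;\alpha)=x^{-\alpha},\qquad x\ge 1,\ \alpha>0 .
$$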
2.1. Generalized Bayesian Estimation
The likelihood function under type-II censored data from this distribution is given by (4).
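In the standard type-II censoring setting, with the r smallest order statistics observed out of a sample of size n, this likelihood takes the following form (written here under the unit-scale Pareto assumption above):

$$
L(\alpha\mid\underline{x})=\frac{n!}{(n-r)!}\Big[\prod_{i=1}^{r}f\big(x_{(i)};\alpha\big)\Big]\big[S\big(x_{(r)};\alpha\big)\big]^{\,n-r}
\;\propto\;\alpha^{r}e^{-\alpha T},\qquad
T=\sum_{i=1}^{r}\ln x_{(i)}+(n-r)\ln x_{(r)} .
$$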
Consider a gamma prior distribution for the parameter, with pdf (5), where the shape and rate parameters form the vector of hyperparameters.
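Writing the hyperparameters as a shape a and a rate b (a notational assumption, since the paper's symbols are not reproduced here), a standard gamma prior density is

$$
\pi(\alpha)=\frac{b^{a}}{\Gamma(a)}\,\alpha^{a-1}e^{-b\alpha},\qquad \alpha>0,\ a>0,\ b>0 .
$$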
Raising (4) to the power of the LRP and then combining it with the prior density (5), we obtain the GB posterior distribution of the parameter given the data in (6) and (7).
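As a minimal computational sketch (not the authors' code): under the unit-scale Pareto and Gamma(a, b) prior assumed above, raising the censored likelihood to a power η (the LRP) and combining it with the prior yields a Gamma(a + ηr, b + ηT) posterior, so the GB estimate under squared-error loss is its mean. The hyperparameter values and seed below are purely illustrative.

```python
import numpy as np

def gb_posterior_pareto(x_obs, n, a, b, eta):
    """GB posterior of the Pareto shape under type-II censoring.

    Assumes a unit-scale one-parameter Pareto and a Gamma(a, b) prior
    (shape a, rate b); returns the posterior shape, rate, and mean
    (the GB estimate under squared-error loss).
    """
    x = np.sort(np.asarray(x_obs, dtype=float))
    r = x.size
    T = np.sum(np.log(x)) + (n - r) * np.log(x[-1])  # censored-data statistic
    shape = a + eta * r                              # prior shape + eta * r
    rate = b + eta * T                               # prior rate + eta * T
    return shape, rate, shape / rate

# Illustrative usage: the r = 20 smallest of n = 25 Pareto observations, LRP eta = 0.5.
rng = np.random.default_rng(1)
sample = np.sort(rng.pareto(3.0, size=25) + 1.0)[:20]  # classical Pareto with unit scale
print(gb_posterior_pareto(sample, n=25, a=10.0, b=3.3, eta=0.5))
```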
2.2. Generalized Empirical Bayesian Estimation
We find the marginal pdf of the data by combining (1) and (5) and integrating with respect to the parameter, as in (8). From (8), we obtain the corresponding cdf in (9). From (8) and (9), the likelihood function of the hyperparameters under type-II censored data is given by (10).
Using the corresponding log-likelihood function, we find the MLEs of the hyperparameters: differentiating with respect to each hyperparameter and equating to zero, the MLEs are obtained by solving the two resulting equations numerically.
Substituting the MLEs of the hyperparameters into (7), we obtain the GEB posterior.
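A numerical sketch of this hyperparameter-MLE step follows, under the same unit-scale Pareto and Gamma(a, b) assumptions; the closed-form marginal density in the comments follows from those assumptions and is not quoted from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def neg_marginal_loglik(params, x_obs, n):
    """Negative log-likelihood of the hyperparameters (a, b) under type-II
    censoring.  Uses the marginal density m(x) = a b^a / (x (b + ln x)^(a+1))
    and marginal survival S(x) = (b / (b + ln x))^a, obtained by integrating
    the unit-scale Pareto likelihood against the Gamma(a, b) prior
    (multiplicative constants not involving a, b are dropped)."""
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    lx = np.log(np.sort(np.asarray(x_obs, dtype=float)))
    r = lx.size
    loglik = (r * (np.log(a) + a * np.log(b))
              - np.sum(lx)                                # the 1/x factors
              - (a + 1) * np.sum(np.log(b + lx))          # (b + ln x)^-(a+1)
              + (n - r) * a * (np.log(b) - np.log(b + lx[-1])))  # censored part
    return -loglik

# Illustrative usage: fit (a, b) from the r = 20 smallest of n = 25 observations.
rng = np.random.default_rng(2)
data = np.sort(rng.pareto(0.5, size=25) + 1.0)[:20]
fit = minimize(neg_marginal_loglik, x0=[1.0, 1.0], args=(data, 25), method="Nelder-Mead")
a_hat, b_hat = fit.x
```

The fitted values would then replace the hyperparameters in the GB posterior of the previous subsection to give the GEB posterior and the GEB estimate.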
3. One Sample Prediction
In this section, we study GBP and GEBP intervals using a one-sample prediction scheme for the Pareto distribution. Under a type-II censored sample, the first r order statistics are observed from a random sample of size n, and the one-sample prediction scheme is used to predict the remaining unobserved values X_(s), s = r + 1, ..., n. The conditional density function of X_(s) given the observed sample is given by (16).
Substituting (1) and (3) into (16), the conditional density function of X_(s) given the observed sample becomes (17). Combining (6) and (17) and integrating with respect to the parameter, the GB predictive density function is given by (18).
The predictive reliability function of X_(s) is given by (19).
Equating (19) to the upper and lower tail probabilities corresponding to the desired prediction level, we obtain the GBP bounds. The GEBP bounds can be obtained by substituting the MLEs of the hyperparameters into (19) and then equating it to the same tail probabilities.
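A minimal numerical sketch of these prediction bounds (again assuming the unit-scale Pareto and the gamma GB posterior sketched in Section 2; the bracketing interval and prediction level are illustrative): the GB predictive survival of X_(s) is obtained by mixing the conditional survival of the (s − r)-th remaining order statistic over the gamma posterior, and the bounds are found by equating it to the two tail probabilities.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import comb
from scipy.stats import gamma as gamma_dist

def predictive_survival(y, s, x_r, n, r, shape, rate):
    """GB predictive P(X_(s) > y | data) for a future order statistic (s > r),
    mixing the conditional survival given X_(r) = x_r over the Gamma(shape, rate)
    GB posterior of the unit-scale Pareto parameter."""
    def cond_survival(alpha):
        F_trunc = 1.0 - (y / x_r) ** (-alpha)      # left-truncated Pareto cdf at y
        j = np.arange(s - r)                       # fewer than s - r of n - r fall below y
        return np.sum(comb(n - r, j) * F_trunc**j * (1.0 - F_trunc) ** (n - r - j))
    integrand = lambda a: cond_survival(a) * gamma_dist.pdf(a, shape, scale=1.0 / rate)
    return quad(integrand, 0.0, np.inf)[0]

def gbp_bounds(s, x_r, n, r, shape, rate, level=0.95):
    """Equal-tail GBP interval for X_(s): solve the predictive survival for the
    (1 + level)/2 and (1 - level)/2 tail probabilities (the bracket may need
    widening for very heavy tails)."""
    root = lambda p: brentq(
        lambda y: predictive_survival(y, s, x_r, n, r, shape, rate) - p,
        x_r * (1.0 + 1e-9), x_r * 1e9)
    return root((1 + level) / 2), root((1 - level) / 2)
```

GEBP bounds would follow by plugging the hyperparameter MLEs into the posterior shape and rate before calling the same routine.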
4. Numerical Study
In this section, the results of the Monte Carlo simulation are presented to evaluate the performance of the inference methods derived in the previous sections.
The simulation study for the Pareto distribution is designed and carried out as follows:
We consider two values for the parameter, 0.5 and 3, each obtained as the mean of the gamma prior distribution (5) under the corresponding choice of hyperparameters.
Generate one sample from the Pareto distribution for each parameter value, with size n = 100, choosing r = 50, 75, and 100.
For the empirical Bayes step, we use the MLEs of the hyperparameters to compute the GEB estimates; the MLE results based on the parameter values 0.5 and 3 are shown in Table 1.
For the Monte Carlo simulations, a large number of replicates is used; for each setting, the reported estimator is the average over the replicates and the estimated risk (ER) is computed from the deviations of the estimates from the true parameter value (a minimal Monte Carlo sketch is given after this list). Using (7) and (15), the estimation results are obtained and expressed through the estimator and the ER for different values of the LRP, namely 0.1, 0.5, 1, and 2.
The results of GBE and GEBE for the parameter values 0.5 and 3 are shown in Table 2 and Table 4, respectively.
Prediction results are based on one sample for each parameter value with size n = 25, of which the first r = 20 observations are available; we then compute the GBP and GEBP bounds and their lengths for the future values X_(s), s = 21, 23, 25, using (19).
The results of GBP and GEBP for the parameter values 0.5 and 3 are shown in Table 3 and Table 5, respectively.
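A minimal sketch of the Monte Carlo loop described above, assuming unit-scale Pareto data, a Gamma(a, b) prior, squared-error loss for the ER, and illustrative hyperparameters and replicate count (none of these numbers are taken from the paper):

```python
import numpy as np

def simulate_gbe_er(alpha_true, n, r, a, b, eta, reps=1000, seed=0):
    """Monte Carlo estimate of the GBE (posterior mean) and its estimated risk
    (ER, taken here as the mean squared error over the replicates)."""
    rng = np.random.default_rng(seed)
    est = np.empty(reps)
    for m in range(reps):
        x = np.sort(rng.pareto(alpha_true, size=n) + 1.0)[:r]  # type-II censored sample
        T = np.sum(np.log(x)) + (n - r) * np.log(x[-1])
        est[m] = (a + eta * r) / (b + eta * T)                 # GB posterior mean
    return est.mean(), np.mean((est - alpha_true) ** 2)

# e.g. parameter 0.5, n = 100, r = 75, LRP eta = 0.1, illustrative hyperparameters
print(simulate_gbe_er(0.5, 100, 75, a=4.5, b=9.0, eta=0.1))
```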
Table 1.
The MLEs of the hyperparameters under different censored samples.
| (n, r) | Parameter = 0.5 | Parameter = 3 |
| --- | --- | --- |
| (25, 20) | (4.56, 8.41) | (10.883, 3.756) |
| (100, 50) | (4.253, 8.25) | (10.3955, 3.657) |
| (100, 75) | (4.642, 9.134) | (11.6707, 3.81) |
| (100, 100) | (5.153, 10.2) | (11.9567, 4) |
Table 2.
GBE and GEBE for the parameter value 0.5.
| r | LRP | GBE | ER | GEBE | ER |
| --- | --- | --- | --- | --- | --- |
| 50 | 0.1 | 0.5028 | 0.0005 | 0.5096 | 0.0036 |
| 75 | 0.1 | 0.5024 | 0.0004 | 0.5058 | 0.0030 |
| 100 | 0.1 | 0.5022 | 0.0002 | 0.5037 | 0.0016 |
| 50 | 0.5 | 0.5072 | 0.0029 | 0.5097 | 0.0060 |
| 75 | 0.5 | 0.5056 | 0.0013 | 0.5059 | 0.0036 |
| 100 | 0.5 | 0.5041 | 0.0006 | 0.5046 | 0.0025 |
| 50 | 1 | 0.5083 | 0.0035 | 0.5099 | 0.0076 |
| 75 | 1 | 0.5061 | 0.0018 | 0.5064 | 0.0039 |
| 100 | 1 | 0.5045 | 0.0013 | 0.5049 | 0.0026 |
| 50 | 2 | 0.5096 | 0.0043 | 0.5103 | 0.0099 |
| 75 | 2 | 0.5062 | 0.0021 | 0.5066 | 0.0048 |
| 100 | 2 | 0.5050 | 0.0016 | 0.5051 | 0.0032 |
Table 3.
GBP and GEBP bounds for Pareto future values, parameter value 0.5.
| s | LRP | GBP interval | Length | GEBP interval | Length |
| --- | --- | --- | --- | --- | --- |
| 21 | 0.1 | (3.0731, 5.0055) | 1.9324 | (3.0726, 4.9361) | 1.8635 |
| 23 | 0.1 | (3.3535, 8.6084) | 5.2549 | (3.3363, 8.4412) | 5.1049 |
| 25 | 0.1 | (4.2030, 18.4935) | 14.2905 | (4.1332, 18.0386) | 13.9054 |
| 21 | 0.5 | (3.0731, 4.7355) | 1.6624 | (3.0729, 4.7010) | 1.6281 |
| 23 | 0.5 | (3.3667, 7.6165) | 4.2498 | (3.3589, 7.5285) | 4.1696 |
| 25 | 0.5 | (4.2789, 15.6562) | 11.3773 | (4.2475, 15.4145) | 11.1670 |
| 21 | 1 | (3.0731, 4.6526) | 1.5795 | (3.0730, 4.6318) | 1.5588 |
| 23 | 1 | (3.3718, 7.3151) | 3.9433 | (3.3672, 7.2613) | 3.8941 |
| 25 | 1 | (4.3107, 14.8007) | 10.4900 | (4.2918, 14.6528) | 10.3610 |
| 21 | 2 | (3.0731, 4.6000) | 1.5269 | (3.0730, 4.5887) | 1.5157 |
| 23 | 2 | (3.3754, 7.1258) | 3.7504 | (3.3728, 7.0957) | 3.7229 |
| 25 | 2 | (4.3332, 14.2663) | 9.9331 | (4.3229, 14.1833) | 9.8604 |
Table 4.
GBE and GEBE for the parameter value 3.
| r | LRP | GBE | ER | GEBE | ER |
| --- | --- | --- | --- | --- | --- |
| 50 | 0.1 | 3.0079 | 0.0048 | 2.8975 | 0.0072 |
| 75 | 0.1 | 3.0078 | 0.0024 | 3.0443 | 0.0060 |
| 100 | 0.1 | 3.0076 | 0.0016 | 3.0014 | 0.0046 |
| 50 | 0.5 | 3.0317 | 0.0416 | 2.9815 | 0.0218 |
| 75 | 0.5 | 3.0260 | 0.0220 | 3.0383 | 0.0167 |
| 100 | 0.5 | 3.0198 | 0.0054 | 3.0172 | 0.0132 |
| 50 | 1 | 3.0427 | 0.0256 | 3.0115 | 0.0397 |
| 75 | 1 | 3.0316 | 0.0209 | 3.0387 | 0.0229 |
| 100 | 1 | 3.0257 | 0.0137 | 3.0230 | 0.0207 |
| 50 | 2 | 3.0529 | 0.0434 | 3.0356 | 0.0743 |
| 75 | 2 | 3.0353 | 0.0313 | 3.0393 | 0.0551 |
| 100 | 2 | 3.0287 | 0.0253 | 3.0254 | 0.0378 |
Table 5.
GBP and GEBP bounds for Pareto future values, parameter value 3.
| s | LRP | GBP interval | Length | GEBP interval | Length |
| --- | --- | --- | --- | --- | --- |
| 21 | 0.1 | (0.5127, 0.8032) | 0.2905 | (0.5127, 0.8043) | 0.2916 |
| 23 | 0.1 | (0.5609, 1.3189) | 0.7580 | (0.5628, 1.3151) | 0.7523 |
| 25 | 0.1 | (0.7094, 2.7496) | 2.0402 | (0.7178, 2.7366) | 2.0188 |
| 21 | 0.5 | (0.5127, 0.7824) | 0.2697 | (0.5127, 0.7849) | 0.2722 |
| 23 | 0.5 | (0.5621, 1.2431) | 0.6810 | (0.5632, 1.2476) | 0.6844 |
| 25 | 0.5 | (0.7164, 2.534) | 1.8176 | (0.7212, 2.5455) | 1.8243 |
| 21 | 1 | (0.5127, 0.7732) | 0.2605 | (0.5127, 0.7754) | 0.2627 |
| 23 | 1 | (0.5627, 1.2098) | 0.6471 | (0.5634, 1.2148) | 0.6514 |
| 25 | 1 | (0.7201, 2.4395) | 1.7194 | (0.7232, 2.4527) | 1.7295 |
| 21 | 2 | (0.5127, 0.7664) | 0.2537 | (0.5127, 0.7680) | 0.2553 |
| 23 | 2 | (0.5631, 1.1851) | 0.6220 | (0.5636, 1.1889) | 0.6253 |
| 25 | 2 | (0.7231, 2.3697) | 1.6466 | (0.7249, 2.3801) | 1.6552 |
From Table 2, according to the estimates and the ER, GBE improves as the LRP decreases and as r increases, so the best result is obtained at LRP = 0.1 and r = 100 (complete sample). GEBE also improves for small values of the LRP and large values of r, with its best result at LRP = 0.1 and r = 100. Generally, for the parameter value 0.5, the results of GBE are better than those of GEBE. The values of X_(s) in Table 3 and Table 5 are presented as the logarithms of the lower and upper prediction bounds. According to the interval length, GBP and GEBP improve as the LRP increases and as s decreases, so the best result is obtained at LRP = 2 and s = 21. In general, the results of GEBP are better than those of GBP for the parameter value 0.5.
From Table 4, GBE improves as the LRP decreases and as r increases, so the best result is obtained at LRP = 0.1 and r = 100 (complete sample). GEBE also improves for small values of the LRP and large values of r, apart from a few exceptions in the estimates; its best result is obtained at LRP = 0.1 and r = 100. For the parameter value 3, the results of GBE are better than those of GEBE.
From Table 5, GBP and GEBP improve as the LRP increases and as s decreases, so the best result (the smallest interval length) is obtained at LRP = 2 and s = 21. In general, the results of GBP are better than those of GEBP for the parameter value 3.
5. Discussion and Conclusion
In this study, the one-parameter Pareto distribution is considered based on type-II censored samples. GBE, GEBE, GBP, and GEBP are discussed for this distribution with different values of the LRP.
According to the results in Table 2 to Table 5, we can summarize the findings for the Pareto distribution as follows:
5.1. The Results for the Parameter Value 0.5
Both GBE and GEBE improve for small values of the LRP and large values of r; the best results are obtained at LRP = 0.1 and r = 100 (complete sample).
GBP and GEBP improve for large values of the LRP; the best results are obtained at LRP = 2 and s = 21.
The result of GBE is better than that of GEBE but the result of GEBP is better than that of GBP.
Small values of the LRP give the best results for GBE, and vice versa for GEBP.
5.2. The Results for the Parameter Value 3
GBE improves for small values of the LRP and large values of r; the best result is obtained at LRP = 0.1 and r = 100 (complete sample).
GEBE improves for small values of the LRP and large values of r, apart from a few exceptions in the estimates; the best result of GEBE is obtained at LRP = 0.1 and r = 100.
The result of GBE is better than that of GEBE.
GBP and GEBP improve for large values of the LRP; the best results are obtained at LRP = 2 and s = 21.
The result of GBP is better than that of GEBP.
Small values of the LRP give the best results for GBE, and vice versa for GEBP.
References
- Miller, J. W. and Dunson, D. B. Robust Bayesian inference via coarsening. Journal of the American Statistical Association, 2019, 114(527): 1113-1125. [CrossRef]
- Grünwald, P. The safe Bayesian: learning the learning rate via the mixability gap. In Algorithmic Learning Theory, 2012, volume 7568 of Lecture Notes in Computer Science, 169-183. Springer, Heidelberg. MR3042889. [CrossRef]
- Grünwald, P. and van Ommen, T. Inconsistency of Bayesian inference for misspecified linear models, and a proposal for repairing it. Bayesian Analysis, 2017, 12(4): 1069-1103. [CrossRef]
- Grünwald, P. Safe probability. Journal of Statistical Planning and Inference, 2018, 47-63. MR3760837. [CrossRef]
- De Heide, R., Kirichenko, A., Grünwald, P., and Mehta, N. Safe-Bayesian generalized linear regression. In International Conference on Artificial Intelligence and Statistics, 2020, 2623-2633. PMLR. 106, 113.
- Holmes, C. C. and Walker, S. G. Assigning a value to a power likelihood in a general Bayesian model. Biometrika, 2017, 497-503. [CrossRef]
- Lyddon, S. P., Holmes, C. C., and Walker, S. G. General Bayesian updating and the loss-likelihood bootstrap. Biometrika, 2019, 465-478. [CrossRef]
- Martin, R. Invited comment on the article by van der Pas, Szabó, and van der Vaart. Bayesian Analysis, 2017, 1254-1258.
- Martin, R. and Ning, B. Empirical priors and coverage of posterior credible sets in a sparse normal mean model. Sankhyā Series A, 2020, 477-498. Special issue in memory of Jayanta K. Ghosh. [CrossRef]
- Wu, P. S., Martin, R. “A Comparison of Learning Rate Selection Methods in Generalized Bayesian Inference.” Bayesian Anal., 2023,18 (1) 105 - 132. [CrossRef]
- Abdel-Aty, Y., Kayid, M., and Alomani, G. Generalized Bayes estimation based on a joint type-II censored sample from k-exponential populations. Mathematics, 2023, 11, 2190. [CrossRef]
- Abdel-Aty, Y.; Kayid, M.; Alomani, G. Generalized Bayes Prediction Study Based on Joint Type-II Censoring. Axioms 2023, 12, 716. [CrossRef]
- Abdel-Aty, Y.; Kayid, M.; Alomani, G. Selection effect of learning rate parameter on estimators of k exponential populations under the joint hybrid censoring. Heliyon, 2024(10), e34087. [CrossRef]
- Shafay, A.R., Balakrishnan, N. and Abdel-Aty, Y. Bayesian inference based on a jointly type-II censored sample from two exponential populations. Journal of Statistical Computation and Simulation, 2014, 2427-2440. [CrossRef]
- Abdel-Aty, Y., Franz, J., Mahmoud, M.A.W. Bayesian prediction based on generalized order statistics using multiply type-II censoring. Statistics, 2007, 495-504. [CrossRef]
- Shafay, A. R., Mohie El-Din, M. M. and Abdel-Aty, Y. Bayesian inference based on multiply type-II censored sample from a general class of distributions. Journal of Statistical Theory and Applications, 2018, 17(1), 146-157. [CrossRef]
- Mohie El-Din, M. M., Okasha, H. and Al-Zahrani, B. Empirical Bayes estimators of reliability performances using progressive type-II censoring from Lomax model. Journal of Advanced Research in Applied Mathematics, 2013, 1(5), 74-83. [CrossRef]
- Kumar, M., Singh, S., Singh, U. and Pathak, A. Empirical Bayes estimator of parameter, reliability and hazard rate for Kumaraswamy distribution. Life Cycle Reliability and Safety Engineering, 2019, 1(14). [CrossRef]
- Arnold, B. Pareto Distributions, 2015, (2nd ed.). CRC Press. Retrieved from https://www.perlego.com/book/1644485/pareto-distributions-pdf.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).