A Bayesian Look at Classical Estimation: The Exponential Distribution

Abdulaziz Elfessi and David M. Reineke
University of Wisconsin - La Crosse

Journal of Statistics Education Volume 9, Number 1 (2001)

Copyright © 2001 by Abdulaziz Elfessi and David M. Reineke, all rights reserved.
This text may be freely shared among individuals, but it may not be republished in any medium without express written consent from the authors and advance notification of the editor.


Key Words: Bayes estimator; Classical estimator; Credibility interval; Improper prior distribution.

Abstract

Many undergraduate students are introduced to frequentist or classical methods of parameter estimation such as maximum likelihood estimation, uniformly minimum variance unbiased estimation, and minimum mean square error estimation in a reliability, probability, or mathematical statistics course. Rossman, Short, and Parks (1998) present some thought-provoking insights on the relationship between Bayesian and classical estimation using the continuous uniform distribution. Our aim is to explore these relationships using the exponential distribution. We show how the classical estimators can be obtained from various choices made within a Bayesian framework.

1. Introduction

The one-parameter exponential distribution is often used to illustrate concepts such as parameter estimation in undergraduate courses in mathematical statistics, probability, and reliability. The exponential density is easy to manipulate analytically and provides a good starting point for discussions of more general distributions. Furthermore, its analytical tractability allows exploration of the relationships between classical and Bayesian estimation.

Consider a random sample of independent observations $X_1, \dots, X_n$ from an exponential distribution with probability density function

$$f(x \mid \theta) = \theta e^{-\theta x}, \quad x > 0, \; \theta > 0.$$

Among the classical estimators of $\theta$, it is easy to show that the maximum likelihood estimator (MLE) and the uniformly minimum variance unbiased estimator (UMVUE) are $n / \sum_{i=1}^{n} X_i$ and $(n-1) / \sum_{i=1}^{n} X_i$, respectively. In the class of estimators of the form $c / \sum_{i=1}^{n} X_i$, the one that minimizes the mean squared error is $(n-2) / \sum_{i=1}^{n} X_i$.
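As a quick numerical check, here is a minimal sketch computing all three classical estimators (it assumes NumPy is available; the simulated sample and the rate theta_true = 0.5 are hypothetical choices for illustration, not from the paper):

```python
import numpy as np

# Hypothetical simulated sample; theta is the rate, so NumPy's scale = 1/theta.
rng = np.random.default_rng(1)
theta_true = 0.5
x = rng.exponential(scale=1 / theta_true, size=50)

n, total = len(x), x.sum()
mle = n / total            # maximum likelihood estimator
umvue = (n - 1) / total    # uniformly minimum variance unbiased estimator
min_mse = (n - 2) / total  # minimum-MSE member of the class c / sum(X_i)
print(mle, umvue, min_mse)
```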

In Section 2 we consider the problem of estimating the parameter $\theta$ using the Bayesian approach. A Bayesian estimator derived from an improper prior distribution can be used to recover the classical estimators given above. The technique of deriving the classical estimators from the Bayesian estimator is not new. Rossman, Short, and Parks (1998) present a very helpful paper for teaching connections between Bayesian and classical estimators using the continuous uniform distribution.

The roots of Bayesian analysis lie in Bayes' Theorem:

$$P(B_j \mid A) = \frac{P(A \mid B_j)\, P(B_j)}{\sum_{k=1}^{m} P(A \mid B_k)\, P(B_k)},$$

where $A$ is an event and the $B_j$'s, $j = 1, \dots, m$, are mutually exclusive and collectively exhaustive events in a sample space with $P(B_j) > 0$ for all $j$. The same result carries over to random variables, both discrete and continuous. Let $U$ and $Y$ be continuous random variables, let $f_U(u)$ be the prior density of $U$, and let $g(y \mid u)$ be the conditional density of $Y$ given $U$. Bayes' Theorem for continuous random variables can then be represented by

$$g(u \mid y) = \frac{g(y \mid u)\, f_U(u)}{\int g(y \mid u)\, f_U(u)\, du},$$

where $g(u \mid y)$ is called the posterior density function of $U$. For more information see Rohatgi (1984) or Berger (1988). In the next section we illustrate the continuous case using the exponential distribution with a single parameter $\theta$. Additionally, interval estimators for $\theta$ are compared and an example is given.
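As a concrete illustration of the continuous form, the following sketch approximates a posterior density on a grid (the prior $f_U(u) = e^{-u}$, the exponential likelihood, and the observation y = 1.5 are hypothetical choices for illustration only; it assumes NumPy):

```python
import numpy as np

# Grid approximation of Bayes' Theorem for continuous random variables.
u = np.linspace(0.01, 10, 2000)       # grid of parameter values
du = u[1] - u[0]
prior = np.exp(-u)                    # hypothetical prior density f_U(u)
y = 1.5                               # hypothetical observed value
likelihood = u * np.exp(-u * y)       # g(y | u) for an exponential model
posterior = prior * likelihood        # numerator of Bayes' Theorem
posterior /= posterior.sum() * du     # divide by the integral (the denominator)
print(posterior.sum() * du)           # ~ 1.0: a proper posterior density
```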

2. Derivation of Point and Interval Estimators

Bayesian statistics has traditionally been dominated by the notion of conjugate priors. A class C of prior distributions is a conjugate family for a class F of density functions if, for every density function in F and every prior density function in C, the posterior distribution is also in C. When dealing with conjugate priors, the posterior distribution can be calculated easily. In this section we derive the posterior distribution by using an improper prior distribution for the parameter $\theta$.

Consider the improper prior distribution (i.e., one that need not integrate to one) for $\theta$ of the form

$$g(\theta) = \theta^{a-1} e^{-b\theta}, \quad \theta > 0.$$

Notice that this prior distribution is the kernel of a gamma distribution when $a > 0$ and $b > 0$. However, such a restriction on $a$ and $b$ is not necessary and decreases the flexibility of the resulting parameter estimator. Applying Bayes' Theorem,

$$g(\theta \mid X_1, \dots, X_n) = \frac{f(X_1, \dots, X_n \mid \theta)\, g(\theta)}{f(X_1, \dots, X_n)},$$

where $f(X_1, \dots, X_n)$ is the marginal distribution of $X_1, \dots, X_n$, it follows that the posterior distribution of $\theta$ is

$$g(\theta \mid X_1, \dots, X_n) \propto \theta^{\,n+a-1} e^{-\theta \left( \sum_{i=1}^{n} X_i + b \right)}, \quad \theta > 0. \qquad (1)$$

The posterior distribution is proper when $n + a > 0$ and has a constant of proportionality given by $\left( \sum_{i=1}^{n} X_i + b \right)^{n+a} / \Gamma(n+a)$; that is, the posterior is a gamma distribution with shape parameter $n + a$ and rate parameter $\sum_{i=1}^{n} X_i + b$. The estimator is derived by choosing the value $\hat{\theta}$ that minimizes $E[(\theta - \hat{\theta})^2 \mid X_1, \dots, X_n]$ (assuming squared error loss), which is the posterior mean. The Bayes estimator of $\theta$ is given by

$$\hat{\theta}_B = E(\theta \mid X_1, \dots, X_n) = \frac{n + a}{\sum_{i=1}^{n} X_i + b}.$$

The classical estimators derived in Section 1 can be obtained from the Bayes estimator by choosing different values of $a$ and $b$. If $a = 0$ and $b = 0$, then the estimator corresponds to the MLE and the prior distribution is Jeffreys' prior, $g(\theta) \propto 1/\theta$, a standard noninformative prior as well as an improper prior. For more information on Jeffreys' prior see Berger (1988). Choosing $a = -1$ and $b = 0$ yields the UMVUE. The minimum MSE estimator corresponds to setting $a = -2$ and $b = 0$. For $a = 1$ and $b = 0$ the prior density function is $g(\theta) = 1$ (the flat improper prior). The resulting estimator in the case of the flat prior is $(n+1) / \sum_{i=1}^{n} X_i$.
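A minimal sketch verifying these correspondences (the simulated data and the helper name bayes_estimate are our own, not from the paper; it assumes NumPy):

```python
import numpy as np

def bayes_estimate(x, a, b):
    # Posterior mean (n + a) / (sum(x) + b) under the prior
    # g(theta) proportional to theta**(a - 1) * exp(-b * theta).
    return (len(x) + a) / (x.sum() + b)

x = np.random.default_rng(2).exponential(scale=2.0, size=30)  # hypothetical sample
n, total = len(x), x.sum()
assert np.isclose(bayes_estimate(x, 0, 0), n / total)         # MLE (Jeffreys' prior)
assert np.isclose(bayes_estimate(x, -1, 0), (n - 1) / total)  # UMVUE
assert np.isclose(bayes_estimate(x, -2, 0), (n - 2) / total)  # minimum MSE
assert np.isclose(bayes_estimate(x, 1, 0), (n + 1) / total)   # flat prior
```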

A 100C% confidence interval for a parameter $\theta$ is obtained by finding $L$ and $U$ such that $P(L \le \theta \le U) = C$. When $X_1, \dots, X_n$ are independent and identically distributed exponential random variables, Kapur and Lamberson (1977) show that $2\theta \sum_{i=1}^{n} X_i$ has a chi-square distribution with $2n$ degrees of freedom. Using this transformation, the interval estimate is developed by solving for $\theta$ in

$$P\left( \chi^2_{(1-C)/2,\, 2n} \le 2\theta \sum_{i=1}^{n} X_i \le \chi^2_{(1+C)/2,\, 2n} \right) = C,$$

where $\chi^2_{p,\,\nu}$ denotes the 100p-th percentile of the chi-square distribution with $\nu$ degrees of freedom, and the resulting 100C% confidence interval for $\theta$ is

$$\left( \frac{\chi^2_{(1-C)/2,\, 2n}}{2 \sum_{i=1}^{n} X_i}, \; \frac{\chi^2_{(1+C)/2,\, 2n}}{2 \sum_{i=1}^{n} X_i} \right).$$
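A minimal sketch of this interval (it assumes SciPy is available; classical_ci is our own name for the computation):

```python
import numpy as np
from scipy.stats import chi2

def classical_ci(x, C=0.95):
    # 100C% confidence interval for theta, using the fact that
    # 2 * theta * sum(x) has a chi-square distribution with 2n degrees of freedom.
    n, total = len(x), np.sum(x)
    lower = chi2.ppf((1 - C) / 2, df=2 * n) / (2 * total)
    upper = chi2.ppf((1 + C) / 2, df=2 * n) / (2 * total)
    return lower, upper
```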

The Bayesian analog to the confidence interval is called a credibility interval. In general, a 100C% credibility interval for a parameter $\theta$ given a random sample $X_1, \dots, X_n$ is an interval $\left( l(X_1, \dots, X_n),\, u(X_1, \dots, X_n) \right)$ such that

$$P\left( l(X_1, \dots, X_n) \le \theta \le u(X_1, \dots, X_n) \mid X_1, \dots, X_n \right) = C.$$

By the same argument as in Kapur and Lamberson (1977), given the data, $2\left( \sum_{i=1}^{n} X_i + b \right)\theta$ has a chi-square distribution with $2(n+a)$ degrees of freedom. By using the posterior distribution in (1), a 100C% Bayesian credibility interval is easily developed beginning with

$$P\left( \chi^2_{(1-C)/2,\, 2(n+a)} \le 2\left( \sum_{i=1}^{n} X_i + b \right)\theta \le \chi^2_{(1+C)/2,\, 2(n+a)} \;\Big|\; X_1, \dots, X_n \right) = C,$$

which gives the interval

$$\left( \frac{\chi^2_{(1-C)/2,\, 2(n+a)}}{2\left( \sum_{i=1}^{n} X_i + b \right)}, \; \frac{\chi^2_{(1+C)/2,\, 2(n+a)}}{2\left( \sum_{i=1}^{n} X_i + b \right)} \right).$$

So, when $a = 0$ and $b = 0$ the Bayesian and classical interval estimates are the same.
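The corresponding computation differs from the classical one only in the degrees of freedom and the scaling; a sketch (again assuming SciPy, with credibility_interval our own name):

```python
import numpy as np
from scipy.stats import chi2

def credibility_interval(x, a=0, b=0, C=0.95):
    # 100C% credibility interval for theta: given the data,
    # 2 * (sum(x) + b) * theta is chi-square with 2(n + a) degrees of freedom.
    df = 2 * (len(x) + a)
    scale = 2 * (np.sum(x) + b)
    return chi2.ppf((1 - C) / 2, df) / scale, chi2.ppf((1 + C) / 2, df) / scale
    # Setting a = 0 and b = 0 reduces this to the classical confidence interval.
```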

3. Example

Consider the following random sample of cycles to failure (in ten thousands) for 20 heater switches subject to an overload voltage:

0.0100, 0.0340, 0.1940, 0.5670, 0.6010, 0.7120, 1.2910, 1.3670

1.9490, 2.3700, 2.4110, 2.8750, 3.1620, 3.2800, 3.4910, 3.6860

3.8540, 4.2110, 4.3970, 6.4730

These data are from Kapur and Lamberson (1977, p. 240). Table 1 summarizes the Bayesian point and interval estimates of $\theta$. It also identifies the values of $a$ and $b$, the prior distribution each choice implies, and the corresponding classical counterpart to each point estimate of $\theta$. Notice that negative values of $a$ produce lower values for the posterior mean of $\theta$: negative values of $a$ and positive values of $b$ put more prior weight on small values of $\theta$, resulting in lower estimates of the posterior mean.


Table 1. Bayes Estimates for Various Values of a and b

  a    b    Prior g(theta)            Posterior mean    Classical counterpart    95% Credibility interval
 -2    0    theta^(-3)                0.3835            minimum MSE estimate     (0.2273, 0.5799)
 -1    0    theta^(-2)                0.4048            UMVUE                    (0.2437, 0.6061)
 -1    1    theta^(-2) e^(-theta)     0.3964                                     (0.2386, 0.5935)
 -1    2    theta^(-2) e^(-2 theta)   0.3883                                     (0.2338, 0.5813)
 -1    3    theta^(-2) e^(-3 theta)   0.3805                                     (0.2291, 0.5697)
  0    0    1/theta (Jeffreys')       0.4261            MLE                      (0.2603, 0.6322)*
  1    1    e^(-theta)                0.4381                                     (0.2712, 0.6444)
  1    0    1 (flat)                  0.4474                                     (0.2770, 0.6581)

* Corresponds to the classical confidence interval
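The entries in Table 1 can be reproduced directly from the heater-switch data using the formulas derived in Section 2; a minimal sketch assuming SciPy is available:

```python
import numpy as np
from scipy.stats import chi2

# Cycles-to-failure data (in ten thousands) from Kapur and Lamberson (1977, p. 240)
x = np.array([0.0100, 0.0340, 0.1940, 0.5670, 0.6010, 0.7120, 1.2910, 1.3670,
              1.9490, 2.3700, 2.4110, 2.8750, 3.1620, 3.2800, 3.4910, 3.6860,
              3.8540, 4.2110, 4.3970, 6.4730])
n, total = len(x), x.sum()

for a, b in [(-2, 0), (-1, 0), (-1, 1), (-1, 2), (-1, 3), (0, 0), (1, 1), (1, 0)]:
    mean = (n + a) / (total + b)              # posterior mean
    df, scale = 2 * (n + a), 2 * (total + b)  # chi-square d.f. and scaling
    lo, hi = chi2.ppf(0.025, df) / scale, chi2.ppf(0.975, df) / scale
    print(f"a={a:2d}  b={b}  mean={mean:.4f}  95% interval=({lo:.4f}, {hi:.4f})")
```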


4. Conclusion

We have shown the relationship of Bayesian estimators of the parameter $\theta$ of the one-parameter exponential distribution to three classical estimators, namely the MLE, UMVUE, and minimum MSE estimator. We considered both point and interval estimators. Our Bayesian estimators were derived from an improper prior distribution that is rather general. In practice, $a$ and $b$ are parameters whose values depend on the experimenter's a priori knowledge of the unknown parameter and its distribution. An example was used to demonstrate the methods presented and to illustrate how Bayesian methods can yield classical estimators.


Acknowledgements

The authors are very grateful to the editor and to three anonymous referees, all of whom contributed toward the improvement of the manuscript.


References

Berger, J. O. (1988), Statistical Decision Theory and Bayesian Analysis (2nd ed.), New York: Springer-Verlag.

Kapur, K. C. and Lamberson, L. R. (1977), Reliability in Engineering Design, New York: John Wiley & Sons, Inc.

Rohatgi, V. K. (1984), Statistical Inference, New York: John Wiley & Sons, Inc.

Rossman, A. J., Short, T. H. and Parks, M. T. (1998), "Bayes Estimators for the Continuous Uniform Distribution," Journal of Statistics Education, [Online], 6(3). (http://jse.amstat.org/v6n3/rossman.html)


Abdulaziz Elfessi
Mathematics Department
University of Wisconsin - La Crosse
1725 State Street
La Crosse, WI 54601
USA

elfessi.abdu@uwlax.edu

David M. Reineke
Mathematics Department
University of Wisconsin - La Crosse
1725 State Street
La Crosse, WI 54601
USA

reineke.davi@uwlax.edu

