Abstract Details

Activity Number: 656
Type: Topic Contributed
Date/Time: Thursday, August 3, 2017, 10:30 AM to 12:20 PM
Sponsor: Section on Bayesian Statistical Science
Abstract #324268
Author(s): Jose Guardiola* and Hassan Elsalloukh
Companies: Texas A&M University-CC and University of Arkansas at Little Rock
Keywords: Bayesian estimation; maximum entropy estimation; maximum likelihood estimation; information theory; information gain; posterior distribution

This paper compares Bayesian statistical inference, maximum likelihood estimation, and maximum entropy methods. Using a toy example, we compare the estimates produced by maximum entropy as defined by Jaynes, by Bayesian inference, and by maximum likelihood, and we discuss the relationships, differences, and similarities among these methods. The Bayesian example is developed from some form of prior knowledge, the maximum entropy solution is derived from suitable constraints, and the maximum likelihood estimate maximizes sensitivity to the observed data. The resulting estimates are compared from both an information-theoretic and a statistical point of view; for instance, under certain conditions a maximum likelihood solution is equivalent to a maximum entropy estimate and to Bayesian inference. Finally, conclusions are drawn regarding the information gain, measured by the Kullback-Leibler divergence, obtained when updating knowledge from the prior to the posterior in Bayesian inference, and its relationship to maximum entropy estimation and the maximum likelihood method.
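The paper's toy example is not reproduced here, but the comparison it describes can be sketched in a minimal form. The snippet below uses a hypothetical Bernoulli/coin-flip setting with an assumed conjugate Beta(2, 2) prior (all numbers are illustrative, not taken from the paper): the MLE of the success probability is the sample mean; on {0, 1} with the single moment constraint E[X] equal to that sample mean, the Jaynes maximum-entropy distribution is Bernoulli with the same parameter, so MaxEnt coincides with the MLE in this case; and the Bayesian posterior mean differs through the prior. The prior-to-posterior information gain is approximated as the Kullback-Leibler divergence KL(posterior || prior) by numerical quadrature.

```python
import numpy as np
from math import lgamma

# Hypothetical toy data: 7 successes in 10 Bernoulli trials.
n, k = 10, 7

# Maximum likelihood estimate of the success probability.
p_mle = k / n

# On {0, 1} with the single constraint E[X] = k/n, the maximum-entropy
# distribution is Bernoulli(k/n): the MaxEnt estimate equals the MLE here.
p_maxent = k / n

# Bayesian inference with an assumed conjugate Beta(a, b) prior.
a, b = 2.0, 2.0                       # prior pseudo-counts (illustrative)
a_post, b_post = a + k, b + n - k     # posterior Beta parameters
p_bayes = a_post / (a_post + b_post)  # posterior mean

def beta_logpdf(x, a, b):
    """Log-density of the Beta(a, b) distribution."""
    log_norm = lgamma(a + b) - lgamma(a) - lgamma(b)
    return log_norm + (a - 1) * np.log(x) + (b - 1) * np.log(1 - x)

# Information gain from prior to posterior: KL(posterior || prior),
# approximated by a Riemann sum on a fine grid over (0, 1).
x = np.linspace(1e-6, 1 - 1e-6, 200_000)
dx = x[1] - x[0]
log_post = beta_logpdf(x, a_post, b_post)
log_prior = beta_logpdf(x, a, b)
kl = np.sum(np.exp(log_post) * (log_post - log_prior)) * dx

print(f"MLE estimate     : {p_mle:.3f}")
print(f"MaxEnt estimate  : {p_maxent:.3f}")
print(f"Posterior mean   : {p_bayes:.3f}")
print(f"KL(post || prior): {kl:.3f} nats")
```

With these numbers the MLE and MaxEnt estimates agree exactly (0.700) while the posterior mean is pulled toward the prior mean of 0.5, and the KL divergence quantifies, in nats, how much the data moved the prior to the posterior.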

Authors who are presenting talks have a * after their name.

Back to the full JSM 2017 program

Copyright © American Statistical Association