MaxEnt 2006 will take place in Paris this year, from July 8th to 13th. MaxEnt is an annual conference on Maximum Entropy methods and their applications in science, and this will be its 26th edition. The conference chair is Ali Mohammad-Djafari, and the organizing committee includes some renowned names: G. L. Bretthorst, A. Caticha, C. Rodríguez and J. Skilling, among others.
Maximum Entropy methods have been used extensively not only in physics but also in statistics and engineering since Shannon's landmark paper, A Mathematical Theory of Communication, where he showed that the same quantity called entropy in statistical physics could be used as a measure of information.
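For a discrete set of outcomes with probabilities p_i, the quantity in question is Shannon's entropy,

\[
H = -\sum_i p_i \log p_i ,
\]

which is largest when the distribution is spread out (maximal uncertainty) and zero when one outcome is certain.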
One of the most interesting applications of MaxEnt is related to Bayesian inference. It is often said that the Bayesian interpretation is a "subjective" interpretation of probability theory, and a lot of criticism comes from this; some even say that this makes Bayesian probabilities unscientific. I'm in the process of writing a post about Bayesian inference, but until I finish it, let me say that MaxEnt can be used as a method to generate Bayesian prior distributions in a completely objective way. The idea, roughly, is that since entropy is a measure of "uncertainty" or lack of information, the correct prior distribution is obtained by maximizing the entropy of the distribution subject to constraints given by the available information. This means that when using this principle we are trying to use just the available information and assuming nothing else, i.e., we assume that our ignorance about everything beyond the constraints is maximal.
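To make this concrete, here is a sketch of the general recipe (the notation is mine, not from any particular reference). If the available information takes the form of known expectation values F_k of some functions f_k, the MaxEnt prior solves

\[
\max_{p}\; H[p] = -\int p(x)\,\ln p(x)\,dx
\quad\text{subject to}\quad
\int p(x)\,dx = 1,
\qquad
\int f_k(x)\,p(x)\,dx = F_k ,
\]

and the method of Lagrange multipliers gives the familiar exponential-family form

\[
p(x) = \frac{1}{Z(\lambda)}\,\exp\!\Big(-\sum_k \lambda_k f_k(x)\Big),
\qquad
Z(\lambda) = \int \exp\!\Big(-\sum_k \lambda_k f_k(x)\Big)\,dx ,
\]

where the multipliers \lambda_k are fixed by demanding that the constraints actually hold.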
An interesting and simple example of this method involves the well-known Gaussian distribution. It can be shown that the probability distribution obtained by MaxEnt when the only available information is the mean and the covariance is a Gaussian, i.e., if all you know is a mean and a covariance matrix, your best guess is a Gaussian.
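As a quick numerical sanity check of this claim (a minimal sketch of my own, assuming scipy is available; the distributions chosen for comparison are arbitrary), we can compare the differential entropies of a few zero-mean, unit-variance distributions. The MaxEnt result says the Gaussian should come out on top:

import numpy as np
from scipy import stats

# Laplace with variance 1: var = 2*b**2, so scale b = 1/sqrt(2).
# Uniform with variance 1: var = w**2/12, so width w = sqrt(12).
w = np.sqrt(12.0)
candidates = {
    "gaussian": stats.norm(loc=0.0, scale=1.0),
    "laplace":  stats.laplace(loc=0.0, scale=1.0 / np.sqrt(2.0)),
    "uniform":  stats.uniform(loc=-w / 2.0, scale=w),
}

# All three have mean 0 and variance 1; only the shape differs.
for name, dist in candidates.items():
    print(f"{name:9s} entropy = {float(dist.entropy()):.4f}")

# Expected output: gaussian ~ 1.4189 > laplace ~ 1.3466 > uniform ~ 1.2425,
# matching the claim that the Gaussian maximizes entropy for fixed mean and variance.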
Picture: Entropy Trails, by Tetragrammaton Productions.
2 comments:
On del.icio.us you posted a link to
http://www.unification.org/ucbooks/kintro/first.htm
This site is an unauthorized copy. The real website is here:
http://langintro.com/kintro/
Click "Find out why I chose this license" for details.
Please update your link.
Thanks for the warning. It's already been corrected.