Dear all,
I have a question about updating a beta prior in a particular situation. Normally you have a beta distribution with shape parameters a and b. The mean of this distribution is a/(a+b), and the sample size, or the confidence, or K, is a+b. Now, if you do some trials, with let's say N positive outcomes and M negative outcomes, you end up with a posterior distribution that is beta(θ | a+N, b+M). So now your mean is (a+N) / (a+N+b+M) and the sample size / confidence / K is a+N+b+M.

My question is: what if you want to keep the K / confidence level fixed? Let's say K should always be 10. Then beta(θ | 5, 5) would be fine, as would beta(θ | 9, 1) and beta(θ | 1.23, 8.77). In other words, in this case I would like the mean of the posterior to be able to change to reflect the evidence found in the new data, but the confidence level should remain the same (rather than increasing all the time).

This is more or less what is described in section 9.1 of the book (page 192 and onwards in my copy): the mean has a hierarchical prior and we have a fixed K. However, in the book there are several coins being tossed all at once, and the estimation is done for all these coins and the mean simultaneously. Suppose instead these coins are given to you one by one... What I would like to do is update the distribution for one coin, and use that as a prior for the next coin I encounter. That makes grid approximation and Gibbs sampling less attractive, if I understand things correctly, as you do not end up with a nice beta distribution afterwards that you can easily/elegantly update in subsequent steps. And I am also wondering whether a hierarchical prior makes an awful lot of sense in the first place if you are dealing with just one coin per update.

Summarizing: is there a simple update rule for updating a beta distribution if you want K to be fixed?

Any help is really appreciated!!

Tom
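Just to make the standard update concrete, here is a minimal Python sketch of what I mean (the numbers are made up):

```python
# Standard conjugate beta update: prior beta(a, b), data N heads / M tails,
# posterior beta(a+N, b+M). The "confidence" K = a + b grows with every update.
a, b = 5.0, 5.0          # prior: mean 0.5, K = a + b = 10
N, M = 8, 2              # new data: 8 heads, 2 tails

a_post, b_post = a + N, b + M
mean_post = a_post / (a_post + b_post)
K_post = a_post + b_post

print(mean_post)  # 0.65
print(K_post)     # 20.0 -- K keeps growing; this is what I want to avoid
```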
Administrator

Well, I'm not sure a person would really want to do this outside of pedagogical examples for building intuition, but, in principle, it's like having this single-parameter model:
y ~ dbern( theta )
theta ~ dbeta( mu*K , (1-mu)*K )
mu ~ dbeta( A , B )

You can write out the formulas for that likelihood and prior, put them into Bayes' rule, like Eqn. 5.7 on p. 84 of the book, and see if you can simplify the resulting expression.
Or, put it in JAGS to get the MCMC representation, but that's not what you were looking for.
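For what it's worth, a quick numeric check of that model is possible with a grid approximation over mu (Python rather than JAGS), integrating theta out so the likelihood becomes beta-binomial. All the numbers here (K = 10, hyperprior beta(2, 2), 7 heads in 10 flips) are made up for illustration:

```python
import math

def log_beta(x, y):
    # log of the Beta function B(x, y)
    return math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y)

def betabinom_pmf(z, n, a, b):
    # P(z heads in n flips) when theta ~ beta(a, b), with theta integrated out
    return math.exp(math.lgamma(n + 1) - math.lgamma(z + 1) - math.lgamma(n - z + 1)
                    + log_beta(z + a, n - z + b) - log_beta(a, b))

K = 10.0                 # fixed confidence, as in the question
A, B = 2.0, 2.0          # hypothetical hyperprior on mu
z, n = 7, 10             # observed: 7 heads in 10 flips

# grid approximation of p(mu | data) on (0, 1)
grid = [(i + 0.5) / 1000 for i in range(1000)]
prior = [mu**(A - 1) * (1 - mu)**(B - 1) for mu in grid]
like = [betabinom_pmf(z, n, mu * K, (1 - mu) * K) for mu in grid]
post = [p * l for p, l in zip(prior, like)]
norm = sum(post)
post = [p / norm for p in post]

post_mean = sum(mu * p for mu, p in zip(grid, post))
print(round(post_mean, 3))
```

Note that the posterior over mu is not itself a beta distribution, which is exactly the "no elegant sequential update" problem raised in the question.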
Dear John,
Many thanks for your answer!
The scenario I am thinking about is one where the distribution you are trying to estimate is not fixed. Rather, it might change over time. So let's say there is this factory producing coins (to stick to our familiar scenario of coins ;) ) and you want to know the overall bias it has. All you get each day is a couple of coins produced that day, so you can update your beliefs as you go. Now, the point is that the bias of the factory might gradually shift over time. So what I want is to update my beliefs, but I do not want to get too tied down by them. I want to be able to alter my beliefs based on the last, let's say, 10 coins I saw, even if I saw 1000 coins already.
One way to do this would be to scale each posterior distribution back to the K I want. However, I am not sure whether this is theoretically sound..?!?
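Concretely, the rescaling I have in mind would be something like this (a rough Python sketch, with made-up numbers):

```python
# Conjugate update followed by rescaling the pseudo-counts so that
# a + b = K stays fixed while the posterior mean is preserved.
def update_fixed_K(a, b, heads, tails, K=10.0):
    a2, b2 = a + heads, b + tails      # ordinary conjugate update
    mean = a2 / (a2 + b2)              # posterior mean
    return mean * K, (1 - mean) * K    # shrink back so confidence stays at K

a, b = 5.0, 5.0
for heads, tails in [(8, 2), (9, 1), (1, 9)]:   # coins arriving one by one
    a, b = update_fixed_K(a, b, heads, tails)
    print(round(a, 3), round(b, 3))
```

As far as I can tell, this amounts to geometrically discounting older evidence, since each old pseudo-count is shrunk at every step, which is why I wonder whether it is theoretically sound.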
Administrator

The scenario you describe, involving tracking of a dynamic process, is one that is often addressed by a so-called Kalman filter. In engineering it's thought of as a least-squares estimator, but it also has a natural Bayesian interpretation. The Kalman filter is used for metric-scaled data described by a normal distribution, not dichotomous (head/tail) data. If you want to delve into it, you can get an intro to the idea from this article: http://www.indiana.edu/~kruschke/articles/Kruschke2008.pdf and the reference cited therein: Meinhold & Singpurwalla 1983.
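To give a flavor of the idea, here is a minimal one-dimensional Kalman filter sketch in Python. All the numbers are hypothetical, and note that it treats the daily head fraction as metric, normally distributed data, which is exactly the approximation mentioned above:

```python
# 1-D Kalman filter: the bias follows a random walk (process noise q),
# and each day we observe it with measurement noise r.
def kalman_step(mean, var, obs, q=0.01, r=0.05):
    var += q                       # predict: drift inflates uncertainty
    gain = var / (var + r)         # Kalman gain: weight given to the new datum
    mean += gain * (obs - mean)    # update: blend prediction and observation
    var *= (1 - gain)              # posterior variance shrinks after observing
    return mean, var

mean, var = 0.5, 1.0               # vague initial belief about the bias
for obs in [0.55, 0.60, 0.58, 0.70, 0.72]:   # daily observed head fractions
    mean, var = kalman_step(mean, var, obs)
print(round(mean, 3), round(var, 3))
```

Because the process noise q keeps re-inflating the variance, the posterior uncertainty settles at a steady level instead of shrinking forever, which is the "fixed confidence" behavior asked about, achieved by modeling the drift rather than by rescaling.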
Dear John, Thanks! I will definitely look into it. Tom
