Update rule for beta distribution with fixed K/confidence/sample size


TomKenter
Dear all,

I have got a question about updating a beta prior in a particular situation.

Normally you have a beta distribution with shape parameters a and b. The mean of this distribution is a/(a+b), and the sample size, or confidence, or K, is a+b.

Now, if you do some trials, with let's say N positive outcomes and M negative outcomes, you end up with a posterior distribution that is beta(θ | a+N, b+M). So, now your mean is (a+N) / (a+N+b+M) and the sample size / confidence / K is a+N+b+M.
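In code, the standard conjugate update I just described looks like this (a minimal Python sketch):

```python
def beta_update(a, b, heads, tails):
    """Standard conjugate update: add the new counts to the shape parameters."""
    return a + heads, b + tails

a, b = 5, 5                     # prior: mean 0.5, K = a + b = 10
a, b = beta_update(a, b, 5, 0)  # observe 5 heads, 0 tails -> beta(10, 5)
mean = a / (a + b)              # posterior mean = 10/15
K = a + b                       # 15 -- K keeps growing with every update
```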

Now, my question is: what if you want to keep the K / confidence level fixed? So let's say K should always be 10. So beta(θ | 5, 5) would be fine, as would beta(θ | 9, 1) and beta(θ | 1.23, 8.77).

In other words: in this case I would like the mean of the posterior to be able to change to reflect the evidence found in the new data, but the confidence level should remain the same (rather than increasing all the time).

So this is more or less what is described in Section 9.1 of the book (page 192 and onwards in my copy): the mean has a hierarchical prior and K is fixed. However, in the book several coins are tossed all at once, and the estimation is done for all these coins and the mean simultaneously.

However, suppose these coins are given to you one by one...
So what I would like to do is to update the distribution for one coin, and use that as a prior for the next coin I encounter.  
So, if I understand things correctly, that makes grid approximation and Gibbs sampling less attractive, as you do not end up with a nice beta distribution afterwards that you can easily and elegantly update in subsequent steps.
And also, I am wondering whether a hierarchical prior makes much sense in the first place if you are dealing with just one coin per update?

So, summarizing: is there a simple update rule for a beta distribution if you want K to be fixed?

Any help is really appreciated!!

Tom
Re: Update rule for beta distribution with fixed K/confidence/sample size

John K. Kruschke
Administrator

 

Well, I'm not sure a person would really want to do this outside of pedagogical examples for building intuition, but, in principle, it's like having this model, in which mu is the single parameter of interest:
  theta ~ dbeta( mu*K , (1-mu)*K )
  y ~ dbern( theta )
  mu ~ dbeta( A , B )
You can write out the formulas for that likelihood and prior, and put them into Bayes' rule, like Eqn. 5.7 on p. 84 of the book, and see if you can simplify the resulting expression.
Or, put it in JAGS to get the MCMC representation -- but that's not what you were looking for.
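For instance, a grid approximation of the posterior on mu can be sketched in a few lines of Python. Here theta is marginalized out analytically, giving a beta-binomial likelihood; the data (z = 7 heads in n = 10 flips) and the hyperprior A = B = 2 are made-up illustrative values:

```python
import math

def beta_binom_lik(z, n, mu, K):
    """Beta-binomial likelihood of z heads in n flips, with theta ~ beta(mu*K, (1-mu)*K)
    marginalized out. Computed up to the binomial coefficient, which is constant in mu."""
    a, b = mu * K, (1 - mu) * K
    logl = (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
            + math.lgamma(a + z) + math.lgamma(b + n - z)
            - math.lgamma(a + b + n))
    return math.exp(logl)

K, A, B = 10.0, 2.0, 2.0
grid = [(i + 0.5) / 1000 for i in range(1000)]                # grid over mu in (0,1)
prior = [mu**(A - 1) * (1 - mu)**(B - 1) for mu in grid]      # dbeta(A,B), unnormalized
post = [p * beta_binom_lik(7, 10, mu, K) for mu, p in zip(grid, prior)]
norm = sum(post)
post = [p / norm for p in post]                               # normalize over the grid
post_mean = sum(mu * p for mu, p in zip(grid, post))          # pulled from 0.5 toward 0.7
```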



Re: Update rule for beta distribution with fixed K/confidence/sample size

TomKenter
Dear John,

Many thanks for your answer!
And many, many thanks for the book as well, now that I am at it... ;-) !!!

The scenario I am thinking about is one where the distribution you are trying to estimate is not fixed; rather, it might change over time. So let's say there is a factory producing coins (to stick to our familiar scenario of coins ;-) ) and you want to know the overall bias it has. All you get each day is a couple of coins produced that day, so you can update your beliefs as you go. Now, the point is that the factory's bias might gradually shift over time.
So, you are shooting at (i.e. trying to estimate) a moving target.

So what I want is to update my beliefs, but I do not want to get too tied down by them. I want to be able to alter my beliefs based on the last, say, 10 coins I saw, even if I have already seen 1000 coins.

One way to do this would be to scale each posterior distribution back to the K I want. However, I am not sure whether this is theoretically sound.
So let's say I want K to be 10 always, and I have a prior beta(θ | 5, 5). Now I see a new coin that comes up heads 5 times and tails 0 times, so my posterior would be beta(θ | 10, 5). But I want K to be 10, so I scale it down by K/(K+z) = 10/(10+5), giving beta(θ | 10 * 10/15, 5 * 10/15) = beta(θ | 100/15, 50/15).
This scaled posterior still has the same mean as the unscaled one, so the same predicted probability for θ, but my confidence about it is back to what I started out with...
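In code, the update-then-rescale step I have in mind would be something like this (Python sketch):

```python
def fixed_k_update(a, b, heads, tails, K=10.0):
    """Ordinary conjugate update, then rescale the shape parameters so that
    a + b == K again. The mean moves with the data, but the spread no longer
    keeps narrowing."""
    a, b = a + heads, b + tails   # ordinary beta update
    scale = K / (a + b)           # e.g. 10/15 in the example above
    return a * scale, b * scale

a, b = fixed_k_update(5.0, 5.0, 5, 0)   # -> (100/15, 50/15)
```

Repeating this rule multiplies the older counts down at every step, so past observations are discounted geometrically, which seems like roughly the behavior I want for a drifting bias.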


Re: Update rule for beta distribution with fixed K/confidence/sample size

John K. Kruschke
Administrator

The scenario you describe, involving tracking of a dynamic process, is one that is often addressed by a so-called Kalman filter. In engineering it's thought of as a least-squares estimator, but it also has a natural Bayesian interpretation. The Kalman filter is used for metric-scaled data described by a normal distribution, not dichotomous (head/tail) data.
If you want to delve into it, you can get an intro to the idea from this article:
http://www.indiana.edu/~kruschke/articles/Kruschke2008.pdf
and the reference cited therein: Meinhold & Singpurwalla 1983.
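Just to give the flavor, here is a minimal one-dimensional Kalman filter sketch in Python. The state follows a random walk; the observation and drift variances are arbitrary illustrative values:

```python
def kalman_step(mean, var, obs, obs_var, drift_var):
    """One predict-update step of a 1-D Kalman filter: the state follows a
    random walk with variance drift_var per step, and each observation is
    the state plus Gaussian noise with variance obs_var."""
    var += drift_var               # predict: uncertainty grows as the target drifts
    gain = var / (var + obs_var)   # Kalman gain
    mean += gain * (obs - mean)    # update: move the estimate toward the observation
    var *= (1 - gain)              # posterior variance after the observation
    return mean, var

mean, var = 0.0, 1.0               # initial belief about the state
for obs in [0.9, 1.1, 1.0, 1.2]:
    mean, var = kalman_step(mean, var, obs, obs_var=0.5, drift_var=0.1)
```

Because drift_var injects fresh uncertainty at every step, the posterior variance converges to a fixed point rather than shrinking toward zero, which is the analogue of keeping K fixed.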




Re: Update rule for beta distribution with fixed K/confidence/sample size

TomKenter
Dear John,

Thanks! I will definitely look into it.

Tom 


