A reader asks:

Dear Prof. Kruschke,

I have been working on problem 19.2 from your book and have two questions.

At the end of Chapter 19, you mention using a large thinning constant. In the code I noticed that it is actually set to 1.

In the chapter, you also mention using different hyperprior distributions for a0, a1, a2, and a1a2, as depicted in Figure 19.2. In the code, however, they all seem to use the same parameter values. Is it because of the simplicity of the problem that different distributions were not used?

I am working through this for a class and I am trying to understand all the subtleties.

Thanks,

...

My reply:

Thanks for your interest in Bayesian data analysis!

First, regarding thinning: it's typically not necessary. Please see this blog post:

http://doingbayesiandataanalysis.blogspot.com/2011/11/thinning-to-reduce-autocorrelation.html

and, specifically in the case of ANOVA, see:

http://doingbayesiandataanalysis.blogspot.com/2011/07/autocorrelation-in-bayesian-anova.html

Second, regarding the top-level constants in the prior: they are supposed to be broad (i.e., vague, noncommittal) on the scale of the data. If I recall correctly, the programs standardize the data before it goes into the JAGS/BUGS model, so the priors can all use the same generic constants.
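To see why standardizing lets one generic constant serve every parameter, here is a minimal sketch (in Python with made-up data, not the R/JAGS code from the book): once any data set is converted to z-scores, it has mean 0 and standard deviation 1, so a single prior that is broad relative to SD 1 is vague for data that were originally on any raw scale.

```python
import numpy as np

def standardize(y):
    """Convert data to z-scores (mean 0, SD 1), so one generic 'broad'
    prior constant is equally vague regardless of the raw scale."""
    y = np.asarray(y, dtype=float)
    return (y - y.mean()) / y.std()

# Two hypothetical data sets on very different raw scales:
reaction_times_ms = [420.0, 515.0, 388.0, 610.0, 450.0]
incomes_kusd = [38.0, 52.0, 41.0, 95.0, 60.0]

for y in (reaction_times_ms, incomes_kusd):
    z = standardize(y)
    # After standardizing, both data sets live on the same scale,
    # so the same prior constants are noncommittal for both.
    print(f"mean = {z.mean():+.2e}, sd = {z.std():.2f}")
```

(The results in the programs are then converted back to the original scale of the data for interpretation.)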

Hope that helps!
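P.S. Regarding the thinning point, here is a small simulation of my own (in Python; an AR(1) process stands in for autocorrelated MCMC output, and none of this is code from the book) illustrating why thinning is typically not necessary: thinning throws away samples, so the estimate of the posterior mean from the full chain is, on average, at least as accurate as the estimate from the thinned chain.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_chain(n, phi=0.9):
    """Simulate an autocorrelated chain (AR(1) with lag-1 correlation
    phi, stationary mean 0, variance 1) as a stand-in for MCMC output."""
    x = np.empty(n)
    x[0] = rng.normal()
    for t in range(1, n):
        x[t] = phi * x[t - 1] + np.sqrt(1.0 - phi**2) * rng.normal()
    return x

# Across many replications, compare the squared error of the chain mean
# (true mean is 0) for the full chain versus a chain thinned by 20.
n, thin, reps = 2000, 20, 500
err_full = np.empty(reps)
err_thin = np.empty(reps)
for r in range(reps):
    chain = ar1_chain(n)
    err_full[r] = chain.mean() ** 2
    err_thin[r] = chain[::thin].mean() ** 2

print(f"MSE, full chain:    {err_full.mean():.5f}")
print(f"MSE, thinned chain: {err_thin.mean():.5f}")
# The thinned chain is less autocorrelated, but it also has far fewer
# samples; on balance its estimate is noisier than the full chain's.
```

Thinning can still be sensible for purely practical reasons, such as limiting memory or file size, but it does not improve the estimates.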