I think my question is relatively simple, but I'm struggling. I'm going to put my question in terms of the book's coin-and-mint analogy.

Assume I have a multiple-coin, multiple-mint setup like the one in Figure 9.15. To make it concrete, I have 3 mints (mints A, B, and C), with each mint producing 5 coins. How can I use this model to predict a posterior distribution of the potential biases of a newly produced coin from mint A? (In other words, not one of the 5 coins from mint A on which I have data.)

I was able to follow the R code to get a distribution of potential biases for each current coin from mint A, as well as the posterior marginal distributions for mint A's mu and kappa parameters. At first I thought I could just use the distribution of mint A's mu parameter, since it represents our posterior belief about the biases of coins produced by mint A. But isn't there some variation assumed in the model, such that even if we knew for certain that mu = 0.5, the new coin's bias (i.e., theta) would not necessarily equal 0.5? If I were to use the distribution of mu as the predicted distribution of a new coin's bias, I would be ignoring that variation. Any thoughts?

The only other thought I had was this: because I started with the prior belief that the bias of a coin from mint A is distributed as Beta( Mu(A)*Kappa(A) , (1-Mu(A))*Kappa(A) ), I could go through all of my sampled values of mint A's Mu and Kappa (i.e., all of the Mu(A)s and Kappa(A)s) and, for each sampled pair, draw a random value from the beta distribution with those parameters. I'd then have another set of sample points which might (but probably doesn't) represent the distribution of a new coin's bias from mint A. It just seemed that forcing the beta distribution onto the posterior was not guided by anything other than the fact that it was my prior.

P.S. Thanks for an excellent book, John! It's been a perfect fit for someone trying to learn this material on their own.
Administrator

If I understand your question correctly, you are trying to get a "posterior predictive" distribution of theta_c values for a particular c. One way to do this is the one you indicated at the end of your message: at each step s in the chain, randomly generate a theta value from beta( theta | mu_c[s]*kappa_c[s] , (1-mu_c[s])*kappa_c[s] ).

Hope that helps. Thanks very much for reading the book. I hope it continues to serve you well.

John

John K. Kruschke, Professor
Doing Bayesian Data Analysis
The book: http://www.indiana.edu/~kruschke/DoingBayesianDataAnalysis/
The blog: http://doingbayesiandataanalysis.blogspot.com/

On Wed, Sep 25, 2013 at 5:33 PM, Adam [via Doing Bayesian Data Analysis] <[hidden email]> wrote:
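The recipe in the reply above can be sketched in R. This is a minimal, self-contained sketch, not the book's actual code: the vectors `mu_A` and `kappa_A` stand in for the MCMC samples of mint A's mu and kappa, however you extracted them from your chain, and here they are filled with made-up draws only so the sketch runs on its own.

```r
# Posterior predictive distribution of the bias (theta) of a NEW coin
# from mint A, given MCMC samples of mint A's mu and kappa.
set.seed(47405)
n_steps <- 5000

# Stand-ins for your actual sampled chains of mu_A and kappa_A:
mu_A    <- rbeta(n_steps, 20, 20)
kappa_A <- rgamma(n_steps, shape = 10, rate = 0.5)

# At each step s, draw one theta from
# beta( mu_A[s]*kappa_A[s] , (1 - mu_A[s])*kappa_A[s] );
# rbeta is vectorized over its shape arguments, so one call does all steps:
theta_new <- rbeta(n_steps, mu_A * kappa_A, (1 - mu_A) * kappa_A)

# theta_new is a sample from the posterior predictive distribution of a
# new coin's bias; summarize or plot it like any other chain of values:
mean(theta_new)
quantile(theta_new, c(0.025, 0.975))
```

Because each draw uses a different sampled (mu, kappa) pair, `theta_new` reflects both the posterior uncertainty in mint A's parameters and the coin-to-coin variation within the mint, which is exactly the extra spread that using the distribution of mu alone would miss.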