Are most published research findings false?


wikipeterson
I was wondering what you thought of Ioannidis's argument that most published findings are false. Do you think that a shift to the Bayesian paradigm would reduce the incidence of false alarms?

Re: Are most published research findings false?

John K. Kruschke
Administrator

Without speaking directly to Ioannidis' specific arguments, I'll reply with generalities.

Bayesian analysis can help alleviate false alarms from some causes, but certainly not all causes.

One cause of false alarms is biased selection of data, such as the "file drawer problem". In this case, several different experiments or observational studies are conducted, and the one that (by chance) "shows the effect" is rationalized as the one that somehow got the procedure right, so that's the one that is published while the others are stuck in the file drawer. No analysis procedure alone can solve this problem, but some analyses might estimate how much data is in the file drawer if there really is no effect, and other analyses might be able to model and estimate the degree of bias IF there is some signature of the bias in the data (such as systematically missing data).
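
For concreteness, here is a minimal sketch of one classical estimate of that sort, Rosenthal's (1979) fail-safe N: the number of unpublished null-result studies that would have to be sitting in file drawers to nullify the published combined evidence. The z scores below are hypothetical, purely for illustration.

import numpy as np
from scipy import stats

published_z = np.array([2.1, 2.5, 1.9, 2.8])  # hypothetical published z scores
k = len(published_z)

# Combined evidence across the k published studies (Stouffer's method):
z_combined = published_z.sum() / np.sqrt(k)

# Fail-safe N: how many unpublished z = 0 studies would drag the combined
# z below the one-tailed .05 criterion?
z_crit = stats.norm.ppf(0.95)                 # about 1.645
n_failsafe = (published_z.sum() / z_crit) ** 2 - k

print(f"combined z = {z_combined:.2f}, fail-safe N = {n_failsafe:.0f}")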

A particular case of the file drawer problem is the "decline effect", wherein the magnitude of the phenomenon declines across successive published (exact) replications. Presumably the effect size declines because the data that made it into the initial publications were high outliers, selected by authors because the data showed a strong effect. The selection need not have been conscious "cheating" by the authors; they merely rationalized why, of their several varied studies, the one with the (accidentally) largest effect is the one that got the procedure right. Again, no analysis alone can completely solve this problem, but some analyses might try to model and estimate the bias (see previous paragraph).
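
Here is a tiny simulation (an illustrative sketch, all numbers invented) of how that selection alone produces a decline: each of many labs runs five studies of the same true effect and publishes only the largest estimate, while an exact replication is just a fresh, unselected draw.

import numpy as np

rng = np.random.default_rng(0)
true_effect = 0.2          # true standardized effect size
se = 0.15                  # standard error of each study's estimate
n_labs, n_studies = 10_000, 5

# Each lab runs 5 studies and publishes only the largest estimate.
estimates = rng.normal(true_effect, se, size=(n_labs, n_studies))
published = estimates.max(axis=1)

# An exact replication is a fresh, unselected draw.
replication = rng.normal(true_effect, se, size=n_labs)

print(f"true effect:      {true_effect:.2f}")
print(f"mean published:   {published.mean():.2f}")   # inflated by selection
print(f"mean replication: {replication.mean():.2f}") # declines toward truth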

Another cause of false alarms is measuring a zillion things and declaring the few that accidentally show random outliers to be "significant". (This would be analogous to the file drawer problem if the researcher suppressed mention of the zillion other things that did not turn out to be significant.) In NHST, the way to address this issue is with corrections for multiple tests. The correct correction depends on which tests were intended, and whether they were intended in advance (i.e., planned) or intended only after looking at the data (i.e., post hoc). It was the strangeness of this technique --interpreting results not on the basis of the data but on the basis of which tests the analyst pretends to be interested in-- that finally drove me to Bayesian methods. The Bayesian approach to this issue is to let the data from different sources mutually inform each other via a hierarchical model. The hierarchical model expresses a rational prior about how the measures relate to each other, and Bayesian software and posterior interpretation are especially amenable to hierarchical models. The hierarchical model provides shrinkage of estimates, to the extent the data indicate, and the shrunken estimates attenuate false alarms.
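
Here is a minimal sketch of the shrinkage idea, with quick empirical-Bayes moment estimates standing in for the full hierarchical MCMC treatment (all numbers invented):

import numpy as np

rng = np.random.default_rng(1)
n_groups = 50
true_means = rng.normal(0.0, 0.1, n_groups)    # most true effects near zero
se = 0.3                                       # per-group standard error
observed = rng.normal(true_means, se)          # noisy group estimates

# Hierarchical model: theta_j ~ N(mu, tau^2), y_j ~ N(theta_j, se^2).
# Method-of-moments estimates of the group-level mean and variance:
mu_hat = observed.mean()
tau2_hat = max(observed.var(ddof=1) - se**2, 0.0)

# The posterior mean for each group shrinks toward mu_hat by factor B.
B = tau2_hat / (tau2_hat + se**2)
shrunken = mu_hat + B * (observed - mu_hat)

print(f"largest raw estimate:    {observed.max():.2f}")
print(f"same group after shrink: {shrunken[observed.argmax()]:.2f}")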

Bayesian methods can also help attenuate false alarms in sequential testing. In NHST, if you sequentially test data sampled from a null effect, you'll eventually falsely reject the null. With Bayesian methods, because the null can be accepted, the false alarm rate asymptotes at a level far below 100%.
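
A small simulation of the contrast, as an illustrative sketch only -- the normal-mean Bayes factor, the 3:1 stopping criterion, and the every-10-observations look schedule are arbitrary choices for the demo:

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_sims, tau2 = 2000, 1.0            # tau2: prior variance on the mean under H1
looks = range(10, 501, 10)          # test after every 10 observations
nhst_fa = bayes_fa = 0

for _ in range(n_sims):
    x = rng.normal(0.0, 1.0, 500)   # data truly from the null
    nhst_rejected = False
    bayes_decision = None           # None until the BF crosses 3 or 1/3
    for n in looks:
        xbar = x[:n].mean()
        # NHST: two-sided z test at every look, no correction.
        if not nhst_rejected and 2 * stats.norm.sf(abs(xbar) * np.sqrt(n)) < 0.05:
            nhst_rejected = True
        # Bayes factor BF01 = N(xbar; 0, 1/n) / N(xbar; 0, tau2 + 1/n);
        # stop sampling when the evidence reaches 3:1 either way.
        if bayes_decision is None:
            bf01 = (stats.norm.pdf(xbar, 0, np.sqrt(1 / n))
                    / stats.norm.pdf(xbar, 0, np.sqrt(tau2 + 1 / n)))
            if bf01 > 3:
                bayes_decision = "accept"
            elif bf01 < 1 / 3:
                bayes_decision = "reject"
    nhst_fa += nhst_rejected
    bayes_fa += (bayes_decision == "reject")

print(f"NHST false alarm rate by n=500:  {nhst_fa / n_sims:.2f}")
print(f"Bayes false alarm rate by n=500: {bayes_fa / n_sims:.2f}")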

Bayesian methods are also good for refocusing the goal of research onto precision of estimation instead of rejection of a null value. If the goal is precision, there is less attraction to sequential testing and "sampling to a foregone conclusion". Bayesian methods are especially good for the goal of achieving precision because precision is measured by the posterior distribution, not by a fickle confidence interval.
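
For instance, one can sample until the posterior for a coin's bias is pinned down to a chosen width. A minimal sketch, assuming a Beta(1,1) prior and using an equal-tailed interval in place of an HDI to keep the code short:

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
theta_true, heads, n = 0.5, 0, 0

while True:
    heads += rng.random() < theta_true       # flip one more coin
    n += 1
    post = stats.beta(1 + heads, 1 + n - heads)
    lo, hi = post.ppf([0.025, 0.975])        # central 95% posterior interval
    if hi - lo < 0.10:                       # stop at the desired precision
        break

print(f"stopped at n={n}, 95% interval = ({lo:.3f}, {hi:.3f})")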

More could be said, of course, but that's all I have time for right now...



Re: Are most published research findings false?

wikipeterson
You've also written on the issue of optional stopping. Am I right to think that we ought to have a lower rate of false alarms under a Bayesian paradigm because it would eliminate the problem of researchers collecting additional data until the p-value dips below .05?

Re: Are most published research findings false?

John K. Kruschke
Administrator

That's right. See the discussion around Figure 12 of the JEP:General article available at http://www.indiana.edu/~kruschke/BEST/

I also briefly discuss this issue in this video: http://youtu.be/YyohWpjl6KU




Re: Are most published research findings false?

Kevin McC
John,

I was posting on the forum about an example problem, but I couldn't resist asking about this, since I enjoyed your arguments for Bayesian analysis in the book. I was curious whether you had ever read David Freedman's famous Foundations of Science paper, found here: http://www.stat.berkeley.edu/~census/fos.pdf. He talks in general terms about subjectivist (Bayesian) vs. objectivist (frequentist) points of view on statistics.

""Radical" subjectivists, like Bruno de Finetti or Jimmie Savage, differ from classical subjectivists and objectivists; radical subjectivists deny the very existence of unknown parameters. For such statisticians, probabilities express degrees of belief about observables. You pull a coin out of your pocket, and-- Damon Runyon notwithstanding-- they can assign a probability to the event that it will land heads when you toss it. The braver ones can even assign a probability to the event that you really will toss the coin. (These are "prior" probabilities, or "opinions.") Subjectivists can also "update" opinions in the light of the data; for example, if the coin is tossed 10 times, landing heads 6 times and tails 4 times, what is the chance that it will  land heads on the 11th toss? This involves computing a "conditional" probability using Kolmogorov’s calculus, which applies whether the probabilities are subjective or objective."

Kevin

Re: Are most published research findings false?

John K. Kruschke
Administrator

No, hadn't seen that Freedman paper before. My quick reaction to the "radical subjectivist" stance: Seems antithetical to scientific theory. Science is all about the inference of latent constructs, from subatomic particles to genes to psychological constructs such as intelligence. Some constructs are useful and survive, other constructs prove to be not so useful and die out.
(And, this issue is orthogonal, I think, to the topic of this thread, regarding false alarms...)


