My impression is that most scientists feel that they don't make many errors when reporting their results. My own experience from coding tells me that I make many simple mistakes.

One very simple check for mistakes in a published paper is to see whether the reported statistic value matches the reported p value. For example, if we see "t(15) = 2.3; p = 0.034", we can check whether the p value for t = 2.3, with 15 degrees of freedom, really is 0.034. Obviously, this check cannot tell us about other problems that might have led to an inflated t (or other statistic) value. Nevertheless, nearly 20% of reported statistics failed this test in a sample from the psychology literature:

http://dx.doi.org/10.3758/s13428-011-0089-5
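For concreteness, here is a minimal sketch of that check in Python, assuming a two-tailed test and using scipy.stats (my choice for illustration; the paper linked above describes its own, more careful procedure). The numbers are the ones from the example above, and the tolerance is a hypothetical allowance for a p value reported to three decimal places:

```python
# Minimal sketch: recalculate the p value implied by a reported t statistic
# and degrees of freedom, then compare it to the reported p value.
# Assumes a two-tailed test; the 0.0005 tolerance is an illustrative
# allowance for a p value rounded to three decimal places.
from scipy import stats

reported_t = 2.3    # reported t statistic
reported_df = 15    # reported degrees of freedom
reported_p = 0.034  # reported p value

# Two-tailed p value implied by the reported t and df.
recalculated_p = 2 * stats.t.sf(abs(reported_t), reported_df)
print(f'Recalculated p = {recalculated_p:.4f}')  # roughly 0.036

if abs(recalculated_p - reported_p) > 0.0005:
    print('Reported p value may not match the reported t and df')
```

A real check would also need to allow for rounding of the reported t value itself, and for the possibility of one-tailed tests.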

The Google+ URL for this post was https://plus.google.com/+MatthewBrett/posts/McVXG62Hc22
