First the link to this week’s complete list as HTML and as PDF.

One way to stop something from being done badly is, I suppose, to prevent it from being done at all. I for one have ranted often enough about ridiculous, irrelevant, and wrong claims of statistical significance. Still, I have my doubts that banning them completely, as the journal *Basic and Applied Social Psychology* has just announced in an editorial by Trafimow & Marks, is the best answer out there.

Following Webb et al., it seems I was not totally gaga when singing to my unborn daughter, although at the time everybody, including her mother, seemed to think so.

Preventive colonoscopy can prevent 53 % or more of deaths from colon cancer, the advertising claims. As far as I can tell this is true, and the number may even be higher. But what does it mean?

As Zauber et al. tell us, 12 out of 2602 patients who had adenomatous polyps removed died of colorectal cancer during a follow-up of up to 23 years. The expected number for statistically matched (age, sex, race) members of the general population was 25.4, making the number of deaths prevented 13.4, or 53 %. So far so good. The total number of deaths from any cause was 1246, so the 13.4 amount to 1.1 % of all deaths. Or so it seems. The prevalence of adenomatous polyps in preventive colonoscopy is 114 (Wikipedia) or 78 (Crispin et al.) per 1000. Using the higher number, that means at least 22 825 colonoscopies, or 10 930 expected deaths, against 13.4 prevented ones: 0.12 %.
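The back-of-envelope arithmetic above can be retraced in a few lines (all figures are taken from the text; the 114/1000 prevalence is the Wikipedia value):

```python
# Figures from Zauber et al. as quoted in the text
deaths_observed = 12      # colorectal-cancer deaths among polyp patients
deaths_expected = 25.4    # expected deaths in a matched general population
patients = 2602           # patients who had adenomatous polyps removed
deaths_total = 1246       # deaths from any cause among those patients

prevented = deaths_expected - deaths_observed        # 13.4 deaths prevented
relative_reduction = prevented / deaths_expected     # ~0.53 -> the "53 %"

prevalence = 114 / 1000   # adenomatous polyps found per screening colonoscopy
colonoscopies = patients / prevalence                # ~22 825 screenings needed
expected_deaths_screened = colonoscopies * deaths_total / patients  # ~10 930

# Absolute reduction per screened person's overall mortality: ~0.12 %
absolute_reduction = prevented / expected_deaths_screened
```

The point of the exercise: the impressive relative figure and the sobering absolute one come from the very same data, just with different denominators.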

This is the real number a patient needs to know and has to weigh against the unpleasantness and the risk of complications (0.31 %), not the 53 % his doctor typically tells him.

In his 2011 study John Ioannidis did a very clever thing, one I would not have thought possible: he quantitatively determined the number of excess positive results caused by biased reporting from the published papers alone. The reasoning is as follows: given a moderate true effect size and a fixed number of subjects per study, not all studies will demonstrate the effect to statistical significance, i.e. the expected fraction of positive studies is less than 100 %. He then compared the published fraction of positive results with the expected one. It should be noted that in the presence of bias the reported effect size will always be larger than the true one. In his calculations Ioannidis used the reported size as his estimate of the true one, so the real amount of bias will certainly be even bigger than the one he found.
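The core of the argument can be illustrated with a toy power calculation. This is my own sketch, not Ioannidis's actual method, and the effect size d = 0.3 and n = 50 per arm are made-up example values; it only shows why the expected fraction of significant studies is well below 100 %:

```python
from math import sqrt, erf

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def power_two_sample(d, n, z_crit=1.96):
    """Approximate power of a two-sided two-sample z-test at alpha = 0.05,
    for a true standardized effect d and n subjects per arm."""
    ncp = d * sqrt(n / 2)   # non-centrality of the test statistic
    return 1 - phi(z_crit - ncp) + phi(-z_crit - ncp)

# With a moderate effect and modest samples, only about a third of honest
# studies would come out significant:
expected = power_two_sample(d=0.3, n=50)   # roughly 0.32
```

If, say, 90 % of the published studies in a field report a positive result while the expected fraction is around 0.32, the excess is evidence of reporting bias; that comparison is the gist of what Ioannidis did on real literatures.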