First, the links to this week’s complete list as HTML and as PDF.
Sher et al. use LOWESS to draw regression lines through extremely noisy data, and then interpret those curves in far more detail than the data warrant. Their single, limited sample makes it impossible to distinguish true age-related changes from random individual idiosyncrasies. I accept all their first-order results (none of which are new or surprising), but their inferred inflection at age 7, where they interpret identical data in two completely different ways, remains pure speculation. This could of course be seen as valid hypothesis generation, but the test of that hypothesis has yet to be done.
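To make the point concrete, here is a minimal sketch, with synthetic data rather than theirs: a LOWESS curve fitted to a plain linear trend plus heavy noise readily shows local bends that come and go from one random sample to the next. The ages, noise level, and smoothing fraction below are arbitrary choices of mine, not values from the paper.

```python
# Sketch only: LOWESS on synthetic noisy data, not the authors' sample.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

age = np.sort(np.random.default_rng(123).uniform(3, 12, 60))  # hypothetical ages

for seed in range(3):
    rng = np.random.default_rng(seed)
    score = 0.5 * age + rng.normal(0, 2.0, age.size)   # true relation is a straight line
    smoothed = lowess(score, age, frac=0.4)             # columns: x, smoothed y
    # count sign changes in the local slope of the smoothed curve,
    # i.e. apparent "inflections" produced by noise alone
    slopes = np.diff(smoothed[:, 1]) / np.diff(smoothed[:, 0])
    bends = int(np.sum(np.diff(np.sign(slopes)) != 0))
    print(f"sample {seed}: apparent slope-sign changes in the LOWESS fit: {bends}")
```

Each resample yields a different set of bends, which is exactly why a single inflection read off a single smoothed curve proves nothing.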
In psychology, it seems, there are only two kinds of articles: fraud and junk. The tiny fraction that seems to be neither must be a classification error. In the case of Markowitz & Hancock one need not even read the article itself; the abstract alone states: “Using differences in language dimensions we were able to classify Stapel’s publications with above chance accuracy. […] and that deception can be revealed in fraudulent scientific discourse.”
The first part is correct: 70 % is indeed barely above chance. But using the same data sample both to develop and to test a discriminator is ridiculous nonsense in the first place, so even that non-result remains unproven. The authors do not even try to quantify how their new-found measure varies between authors. If this were an undergraduate paper, it would have to be failed.
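The circularity is easy to demonstrate. The sketch below uses purely random “language features” and random labels, not the published corpus, and the sample size and feature count are arbitrary: a classifier evaluated on the very sample it was fitted to reports above-chance accuracy even when there is nothing to find, while a held-out estimate falls back to chance.

```python
# Sketch with random features and labels, not the Stapel corpus.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(50, 20))        # 50 "papers", 20 pure-noise features
y = rng.integers(0, 2, size=50)      # random "fraudulent" labels

clf = LogisticRegression(max_iter=1000)
clf.fit(X, y)
in_sample = clf.score(X, y)                           # develop and test on the same data
held_out = cross_val_score(clf, X, y, cv=5).mean()    # honest out-of-sample estimate

print(f"in-sample accuracy:      {in_sample:.2f}")    # comfortably above 0.5
print(f"cross-validated accuracy: {held_out:.2f}")    # hovers around chance
```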
Lamba & Nityananda suffer from the same problems. With the data in hand I would have done the math, but just looking at their figure 1 my estimate is that after eliminating no more than three outliers in each panel the correlation would drop to zero. Their figure 2 is meaningless: if I strongly overestimate the overconfident subjects and underestimate the underconfident ones, then I am maximally deceived by confidence, yet my mean error still averages out to zero. This is the kind of mistake one expects from freshmen in a 101 course.
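A toy calculation, with numbers I made up rather than anything from the paper, shows why the mean error is the wrong summary here:

```python
# Toy numbers, not theirs: the observer tracks confidence, not ability.
import numpy as np

true_ability = np.array([5.0, 5.0, 5.0, 5.0])
confidence   = np.array([9.0, 8.0, 2.0, 1.0])   # two overconfident, two underconfident
judged       = np.array([8.0, 7.0, 3.0, 2.0])   # judgments follow confidence

error = judged - true_ability
print("per-subject error:  ", error)                 # [ 3.  2. -2. -3.]
print("mean error:         ", error.mean())          # 0.0 despite maximal deception
print("mean absolute error:", np.abs(error).mean())  # 2.5, the quantity that matters
```

The signed errors cancel exactly, so a plot of mean error cannot distinguish an observer who is completely fooled by confidence from one who is perfectly calibrated.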