D. Bishop has a post on BishopBlog (here), "Time for neuroimaging (and PNAS) to clean up its act". It is a great post: well argued and well organized.
She had found a paper (while looking for something else) that appears to have a good reputation but breaks a number of rules; all of its conclusions are invalid. Because some of the authors had a financial interest in the results, it should have been reviewed especially carefully, yet it was not found wanting by the original peer reviewers, nor by later authors who cited it or reused its graphics. What an example.
What was wrong? The first conclusion was not valid because an important control group was missing: a group who had the condition but was not treated. The second conclusion rested on a faulty statistical procedure. The third did not take account of within-group variance. The fourth relied on unusual outliers and some dodgy statistics.
She makes some recommendations for correcting this sort of problem, which she has found in a number of papers.
Is there a solution? One suggestion is that reviewers and readers would benefit from a simple crib sheet listing the main things to look for in the methods section of a paper in this area. Is there an imaging expert out there who could write such a document, targeted at those like me, who work in this broad area but aren't imaging experts? Maybe it already exists, but I couldn't find anything like that on the web.
Imaging studies are expensive and time-consuming to do, especially when they involve clinical child groups. I’m not one of those who thinks they aren’t ever worth doing. If an intervention is effective, imaging may help throw light on its mechanism of action. However, I do not think it is worthwhile to do poorly-designed studies of small numbers of participants to test the mode of action of an intervention that has not been shown to be effective in properly-controlled trials. It would make more sense to spend the research funds on properly controlled trials that would allow us to evaluate which interventions actually work.