A commenter on the Less Wrong discussion page (here) responded to the New Yorker article by Jonah Lehrer (article) about why some effects start out with positive results that, over time, fade toward negative ones.
The comment is by a physicist whose name is unknown. A summary of the explanations offered for this effect:
- “The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out.”
- “Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found.”
- “Richard Palmer… suspects that an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. … Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results.”
- “According to Ioannidis, the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher. … The current “obsession” with replicability distracts from the real problem, which is faulty design.”
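The first two mechanisms above, regression to the mean and publication bias, are enough on their own to produce a decline effect. A toy simulation can sketch this (the setup is entirely hypothetical, not from the article or the comment): many labs measure a small true effect, only the studies clearing an arbitrary "impressiveness" threshold get published, and unfiltered replications then regress back toward the true effect.

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.1   # small real effect (standardized units) - arbitrary choice
NOISE_SD = 1.0      # per-observation noise
N_PER_STUDY = 30    # sample size of each study
N_LABS = 1000       # number of initial studies

def run_study():
    """One study: the sample mean of N_PER_STUDY noisy observations."""
    samples = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(N_PER_STUDY)]
    return statistics.mean(samples)

# Initial round: only "impressive" estimates get published.
# The 0.3 cutoff is a crude stand-in for a significance filter.
initial = [run_study() for _ in range(N_LABS)]
published = [e for e in initial if e > 0.3]

# Replications: the same experiment rerun with no publication filter.
replications = [run_study() for _ in range(len(published))]

print(f"mean published initial effect: {statistics.mean(published):.2f}")
print(f"mean replication effect:       {statistics.mean(replications):.2f}")
print(f"true effect:                   {TRUE_EFFECT:.2f}")
```

The published initial effects are inflated well above the true effect, while the replications cluster near it, so the literature shows a "decline" even though nothing about the phenomenon itself changed.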
Things sort themselves out: the decline effect is not a huge problem for those working in the particular field where it happens. The serious problem lies outside the field involved. When an original result piques the public's interest, especially when it is reported without needed context, and very especially when it touches a controversy, there is a problem.
There are a lot of people who hold metaphorical bets on how the brain works, and particularly on how consciousness works. Although this is not as controversial as, for example, global warming, people still would like certain things to be true and others to be false. It would help keep results in perspective if people knew that unusual results often fade away with time as more studies are done, and that this is normal rather than scandalous.
I will say it again, and hope that my readers are not bored by this repeated message: do not put your trust in individual results, or even in chains of results that are only as strong as their weakest link, but in strong webs or fabrics of evidence.