You are currently browsing the thoughts on thoughts weblog archives for the day 01/01/2011.
Decline effect
01/01/2011 by admin.
A commenter on the Less Wrong discussion page (here) responded to the New Yorker article by J. Lehrer (article) about why some effects start out with positive results that, over time, weaken or turn negative as studies are repeated.
The commenter is a physicist, name unknown:
A summary of explanations for this effect:
- “The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out.”
- “Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found.”
- “Richard Palmer… suspects that an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. … Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results.”
- “According to Ioannidis, the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher. … The current “obsession” with replicability distracts from the real problem, which is faulty design.”
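The first two explanations above, regression to the mean and publication bias, can be illustrated with a small simulation. This is my own illustrative sketch, not anything from the article or the comment: it assumes many labs study an effect whose true size is small, that only strikingly large initial estimates get published, and that replications are then run without that filter. The constants (`TRUE_EFFECT`, `NOISE`, the publication threshold) are arbitrary assumptions chosen to make the pattern visible.

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2   # small real effect, the same for every lab (assumption)
NOISE = 1.0         # study-to-study sampling noise (assumption)

def run_study():
    """One study's estimate: the true effect plus sampling noise."""
    return random.gauss(TRUE_EFFECT, NOISE)

# Many labs run an initial study; only 'striking' results get published.
initial = [run_study() for _ in range(1000)]
published = [e for e in initial if e > 1.5]  # crude publication filter

# The published effects are then replicated, with no filter this time.
replications = [run_study() for _ in published]

print(f"mean published initial estimate: {statistics.mean(published):.2f}")
print(f"mean replication estimate:       {statistics.mean(replications):.2f}")
```

The published estimates are large only because publication selected the lucky draws; the unfiltered replications regress back toward the modest true effect, producing a "decline" even though nothing about the underlying effect changed.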
Things sort themselves out; a 'decline effect' is not a huge problem for those working within the field where it happens. Where the problem is serious is outside the field involved. Where an original result piques the public's interest, especially where it is reported without the needed context, and very especially where it touches on a controversy, there is a problem.
There are lots of people who hold metaphoric bets on how the brain works, and particularly how consciousness works. Although it is not as controversial as, for example, global warming, people still would like certain things to be true and others to be false. It would help keep results in perspective if people knew that unusual results often fade away with time as more studies are done, and that this is normal, not scandalous.
I will say it again, and hope that my readers are not bored by this repeated message: do not put your trust in individual results, or in chains of results that are only as strong as their weakest link, but in strong webs or fabrics of evidence.