
Wednesday, September 14, 2011

Study finds statistical error in large numbers of neuroscience papers



(PhysOrg.com) -- Sander Nieuwenhuis and his colleagues from the Netherlands have studied one particular type of statistical error that crops up in an inordinately large number of papers published in neuroscience journals. In their paper, published in Nature Neuroscience, they report that roughly half of the papers that attempt this kind of comparison get it wrong.
The problem lies in the way findings are compared. Suppose researchers apply a manipulation (a chemical, a diet, a stimulus) to an experimental group and also measure a control group. If the effect in the experimental group is statistically significant while the effect in the control group is not, they cannot on that basis conclude that the two effects differ. To make that claim, the difference between the two effects must itself be tested and shown to be statistically significant.
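A minimal sketch of the pitfall, using hypothetical simulated data (the group sizes, effect sizes, and variable names below are illustrative assumptions, not figures from the study): one effect comes out significant and the other does not, yet a direct test of their difference may show nothing.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical change scores for a treated group and a control group.
treated = rng.normal(loc=0.8, scale=1.5, size=20)  # assumed true effect ~0.8
control = rng.normal(loc=0.4, scale=1.5, size=20)  # assumed true effect ~0.4

# Incorrect procedure: two separate tests against zero.
_, p_treated = stats.ttest_1samp(treated, 0.0)
_, p_control = stats.ttest_1samp(control, 0.0)
print(f"treated vs. 0:       p = {p_treated:.3f}")  # may fall below 0.05
print(f"control vs. 0:       p = {p_control:.3f}")  # may not
# Concluding "treated differs from control" from this pair of results is the error.

# Correct procedure: test the difference between the two effects directly.
_, p_diff = stats.ttest_ind(treated, control)
print(f"treated vs. control: p = {p_diff:.3f}")  # often not significant here
```

The point of the sketch is that the significant/non-significant pattern in the first two tests says nothing by itself about whether the two effects differ; only the third test addresses that question.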
Why such errors appear in so many research papers is open to debate. Whether the cause is researchers wishing to overstate their findings, ignorance, or simple sloppiness, it is clear that more scrutiny is needed before work is submitted. Of course, that is only half the equation: why are journals, which obviously take their reputations seriously, not properly vetting such papers before publishing them?
In their study, the group reviewed 513 papers published in five highly regarded journals over a two-year period. Of the papers in which such a comparison was made, roughly half contained the error. In addition, when they examined 120 cellular and molecular neuroscience articles published in Nature Neuroscience, they found the error in 25 of them.
Clearly there is a serious problem here. This research highlights an issue that is likely present in other areas of science as well: inaccuracies in science journals, mainstream science magazines, the media, and perhaps even classroom lectures. Failing to check for and fix simple statistical errors in published research calls the integrity of that research into question.
Hopefully, studies such as this one will cause alarm in both the research and publishing communities and bring about better controls in both.
More information: Erroneous analyses of interactions in neuroscience: a problem of significance, Nature Neuroscience 14, 1105–1107 (2011). doi:10.1038/nn.2886
Abstract 
In theory, a comparison of two experimental effects requires a statistical test on their difference. In practice, this comparison is often based on an incorrect procedure involving two separate tests in which researchers conclude that effects differ when one effect is significant (P < 0.05) but the other is not (P > 0.05). We reviewed 513 behavioral, systems and cognitive neuroscience articles in five top-ranking journals (Science, Nature, Nature Neuroscience, Neuron and The Journal of Neuroscience) and found that 78 used the correct procedure and 79 used the incorrect procedure. An additional analysis suggests that incorrect analyses of interactions are even more common in cellular and molecular neuroscience. We discuss scenarios in which the erroneous procedure is particularly beguiling.
via Guardian
© 2011 PhysOrg.com
"Study finds statistical error in large numbers of neuroscience papers." September 13th, 2011. http://www.physorg.com/news/2011-09-statistical-error-large-neuroscience-papers.html
Posted by
Robert Karl Stonjek
