
Criteria for funding and promotion lead to bad science


Credit: Wanderer, Caspar David Friedrich. Photographic reproduction by Cybershot800i. (Diff), Wikimedia Commons

Scientists are trained to assess theories carefully by designing good experiments and building on existing knowledge. But there is growing concern that too many research findings may in fact be false. New research published on 10 November in the open-access journal PLOS Biology by psychologists at the universities of Bristol and Exeter suggests that this may happen because of the criteria used to fund science and promote scientists, which, they say, place too much weight on novel, eye-catching findings.

Some scientists are becoming concerned that published results are inaccurate: in a recent attempt by 270 scientists to reproduce the findings reported in 100 psychology studies (the Reproducibility Project: Psychology), only about 40 per cent could be reproduced.

This latest study shows that we shouldn't be surprised by this, because researchers are incentivised to work in a certain way if they want to further their careers, such as running a large number of small studies, rather than a smaller number of larger, more definitive ones. But while this might be good for their careers, it won't necessarily be good for science.

Professor Marcus Munafò and Dr Andrew Higginson, researchers in psychology at the universities of Bristol and Exeter, concluded that scientists aiming to progress should carry out lots of small, exploratory studies because this is more likely to lead to surprising results. The most prestigious journals publish only highly novel findings, and scientists often win grants and get promotions if they manage to publish just one paper in these journals, which means that these small (but unreliable) studies may be disproportionately rewarded in the current system.

The authors used a mathematical model to predict how an optimal researcher who is trying to maximise the impact of their publications should spend their research time and effort. Scientific researchers have to decide what proportion of time to invest in looking for exciting new results rather than confirming previous findings. They also must decide how much resource to invest in each experiment.

The model shows that the best strategy for career progression is to carry out lots of small exploratory studies and no confirmatory ones. Even though each small study is less likely to detect a real effect where one exists, a researcher running many of them is likely to obtain some false positives, which unfortunately are often published too.
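The arithmetic behind this can be sketched in a few lines. This is not the authors' actual model (their paper should be consulted for that); it is a back-of-the-envelope illustration in which the effect size (0.2), the fraction of tested hypotheses that are real (10%), and the participant budget are all assumed for illustration:

```python
import math

ALPHA = 0.05        # conventional p < 0.05 significance threshold
EFFECT = 0.2        # assumed standardized effect size (illustrative)
PRIOR_TRUE = 0.1    # assumed fraction of tested hypotheses that are real

def power(n_per_group, effect=EFFECT):
    """Normal-approximation power of a two-sample test, n per group."""
    z_alpha = 1.96  # two-sided alpha = 0.05
    z = effect * math.sqrt(n_per_group / 2) - z_alpha
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF

def expected_findings(n_per_group, n_studies):
    """Expected true and false positives from a batch of studies."""
    true_pos = PRIOR_TRUE * n_studies * power(n_per_group)
    false_pos = (1 - PRIOR_TRUE) * n_studies * ALPHA
    return true_pos, false_pos

# Same participant budget spent two ways: many small studies
# versus a couple of large, well-powered ones.
small_tp, small_fp = expected_findings(n_per_group=50, n_studies=20)
large_tp, large_fp = expected_findings(n_per_group=500, n_studies=2)

print(f"20 small studies: {small_tp + small_fp:.2f} 'findings', "
      f"PPV = {small_tp / (small_tp + small_fp):.2f}")
print(f" 2 large studies: {large_tp + large_fp:.2f} 'findings', "
      f"PPV = {large_tp / (large_tp + large_fp):.2f}")
```

Under these assumptions the small-study strategy yields several times more publishable "findings", but most of them are false positives (a positive predictive value well below one half), whereas the large-study strategy yields fewer but far more trustworthy results.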

Dr Higginson said: "This is an important issue because so much money is wasted doing research from which the results can't be trusted; a significant finding might be just as likely to be a false positive as to reflect a real phenomenon."

This wouldn't happen if a scientist's overall publication record, rather than one or two high-profile papers, mattered to their career, nor if novel findings weren't prized so much more highly than work that confirms previous findings, say the researchers.

So is there any way to overcome this problem of bad scientific practice? There could be immediate solutions, as Professor Munafò explained: "Journal editors and reviewers could be much stricter about good statistical procedures, such as insisting on large sample sizes and tougher statistical criteria for deciding whether an effect has been found."

There are already some encouraging signs – for example, a number of journals are introducing reporting checklists which require authors to state, among other things, how they decided on the sample size they used. Funders are also making similar changes to grant application procedures.

"The best thing for scientific progress would be a mixture of medium-sized exploratory studies with large confirmatory studies," said Dr Higginson. "Our work suggests that researchers would be more likely to do this if funding agencies and promotion committees rewarded asking important questions and good methodology, rather than surprising findings and exciting interpretations."

###

In your coverage please use this URL to provide access to the freely available article in PLOS Biology: http://dx.doi.org/10.1371/journal.pbio.2000995

Citation: Higginson AD, Munafò MR (2016) Current Incentives for Scientists Lead to Underpowered Studies with Erroneous Conclusions. PLoS Biol 14(11): e2000995. doi:10.1371/journal.pbio.2000995

Funding: Medical Research Council and the University of Bristol (grant number MC_UU_12013/6), received by MRM. The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Natural Environment Research Council (grant number NE/L011921/1), received by ADH. The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. MRM is a member of the UK Centre for Tobacco and Alcohol Studies, a UKCRC Public Health Research: Centre of Excellence. Funding from the British Heart Foundation, Cancer Research UK, the Economic and Social Research Council, the Medical Research Council, and the National Institute for Health Research, under the auspices of the UK Clinical Research Collaboration, is gratefully acknowledged.

Competing Interests: The authors have declared that no competing interests exist.

Media Contact

Andrew D. Higginson
[email protected]

http://www.plos.org
