Psychologists Confront Rash of Invalid Studies

A number of high-profile scandals in the psychology field have scientists wondering how many of their published results are valid. (Image credit: argus | Shutterstock.com)

In the wake of several scandals in psychology research, scientists are asking themselves just how much of their research is valid.

In the past 10 years, dozens of psychology studies have been retracted, and several high-profile findings have not stood up to scrutiny when outside researchers tried to replicate them.

By selectively excluding study subjects or amending the experimental procedure after designing the study, researchers in the field may be subtly biasing studies to get more positive findings. And once research results are published, journals have little incentive to publish replication studies, which try to check the results.

That means the psychology literature may be littered with effects, or conclusions, that aren't real.

The problem isn't unique to psychology, but the field is going through some soul-searching right now. Researchers are creating new initiatives to encourage replication studies, improve research protocols and make data more transparent.

"People have started doing replication studies to figure out, 'OK, how solid, really, is the foundation of the edifice that we're building?'" said Rolf Zwaan, a cognitive psychologist at Erasmus University in the Netherlands. "How solid is the research that we're building our research on?"

Storm brewing

In a 2011 study in the Journal of Personality and Social Psychology, researchers detailed experiments that they said suggested people could predict the future.

Other scientists questioned how the study got published, given dubious methods such as changing the procedure partway through the experiment; the journal's editors expressed skepticism about the effect, but said the study followed the established rules for doing good research.

That made people wonder, "Maybe there's something wrong with the rules," said University of Virginia psychology professor Brian Nosek.

But an even bigger scandal was brewing. In late 2011, Diederik Stapel, a psychologist in the Netherlands, was fired from Tilburg University for falsifying or fabricating data in dozens of studies, some of which were published in high-profile journals.

And in 2012, a study in PLOS ONE failed to replicate a landmark 1996 psychology study that suggested making people think of words associated with the elderly — such as Florida, gray or retirement — made them walk more slowly.

Motivated reasoning

The high-profile cases are prompting psychologists to take a hard look at the incentive structure in their field.

The push to publish can lead to several questionable practices.

Outright fraud is probably rare. But "adventurous research strategies" are probably common, Nosek told LiveScience.

Because psychologists are so motivated to publish flashy findings, they can use reasoning that seems perfectly logical to them and, say, throw out research subjects whose data don't fit the expected effect. But this subtle self-delusion can lead scientists to see an effect where none exists, Zwaan told LiveScience.
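How much damage can such seemingly reasonable exclusions do? Here is a minimal sketch (illustrative, not from the article; the group sizes and the three-subject exclusion rule are assumptions) of an analyst who, whenever a result misses significance, drops the subjects who least fit the hoped-for effect and tests again, even though both groups are drawn from the same distribution:

```python
# Minimal sketch: post-hoc exclusion of "misfit" subjects inflates the
# false-positive rate even when there is no real effect. The group size
# and the three-subject exclusion rule are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, trials, alpha = 30, 5_000, 0.05
honest = flexible = 0

for _ in range(trials):
    control = rng.normal(size=n)
    treated = rng.normal(size=n)  # same distribution: no true effect
    p = stats.ttest_ind(treated, control).pvalue
    honest += p < alpha

    if p >= alpha:
        # The biased analyst drops the three treated subjects who most
        # contradict the hoped-for effect (treated > control) and retests.
        treated = np.sort(treated)[3:]
        p = stats.ttest_ind(treated, control).pvalue
    flexible += p < alpha

print(f"honest false-positive rate:   {honest / trials:.3f}")    # ~ alpha
print(f"flexible false-positive rate: {flexible / trials:.3f}")  # well above alpha
```

The honest analyst is wrong about 5 percent of the time, as the statistics promise; the flexible one is wrong far more often, without ever feeling dishonest.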

Another way to skew the results is to change the experimental procedure or research question after the study has begun. These changes may seem harmless to the researcher, but from a statistical standpoint they make it much more likely that a spurious effect will turn up, Zwaan said.

For instance, if scientists set up an experiment to find out whether stress is linked to cancer risk, and during the study they notice that stressed people seem to get less sleep, they might switch their question to study sleep. The problem is that the experiment wasn't set up to account for confounding factors associated with sleep, among other things.
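A rough way to see why (again an illustrative sketch, not the actual study design): switching the question midstream is statistically equivalent to measuring several outcomes and reporting whichever one "works." With five unrelated null outcomes, the chance of at least one accidental hit is about 1 - 0.95^5, roughly 23 percent, not the nominal 5 percent:

```python
# Illustrative sketch: testing several null outcomes and reporting the
# best-looking one inflates the false-positive rate. The five outcomes
# and the group size are assumptions for demonstration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, trials, alpha, outcomes = 30, 5_000, 0.05, 5
hits = 0

for _ in range(trials):
    # Two groups measured on five unrelated outcomes (stress, sleep, ...);
    # nothing truly differs between the groups on any of them.
    a = rng.normal(size=(outcomes, n))
    b = rng.normal(size=(outcomes, n))
    pvals = stats.ttest_ind(a, b, axis=1).pvalue
    hits += bool((pvals < alpha).any())  # report whichever outcome "worked"

print(f"chance of at least one 'finding': {hits / trials:.3f}")
# about 1 - (1 - alpha)**outcomes, i.e. ~0.23, far above the nominal 0.05
```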

Fight fire with psychology

In response, psychologists are trying to flip those incentives, drawing on what their field knows about transparency, accountability and personal gain.

For instance, researchers currently have little incentive to share their data: a 2006 study found that of 141 researchers who had previously agreed to share their data, only 38 actually did so when asked.

But Nosek and his colleagues hope to encourage such sharing by making it standard practice. They are developing a project called the Open Science Framework, and one goal is to encourage researchers to publicly post their data and to have journals require such transparency in their published studies. That should make researchers less likely to tweak their data.

"We know that behavior changes as a function of accountability, and the best way to increase accountability is to create transparency," Nosek said.

One journal, Social Psychology, is dangling the lure of guaranteed publication to motivate replication studies. Researchers send the journal proposals for replication studies, and if a proposal is approved, publication is guaranteed in advance. Because the protocol is vetted before the data are collected, there is less room to fiddle with it after the fact.

And the Laura and John Arnold Foundation now offers grant money specifically for replication studies, Nosek said.

