The Data Don’t Really Support the Most Popular Happiness Strategies

When it comes to life hacks, the secret to happiness is among the things people seek the most (along with how to get rich), at least according to internet searches. And there’s no dearth of advice on how to get happier, from expressing more gratitude to practicing mindfulness and meditation to exercising.

Some of these strategies are so entrenched in the modern consciousness that you’d be excused for thinking there’s a lot of scientific evidence behind them. All those people on the internet and running wellness centers can’t be wrong. Or can they?

In a study published July 20 in Nature Human Behaviour, happiness researchers at the University of British Columbia report that there is, at best, little solid, scientifically sound evidence behind the most popular happiness strategies, and in some cases none at all.

Like many in the happiness field, Elizabeth Dunn, professor of psychology at the University of British Columbia, and Dunigan Folk, a PhD student in Dunn’s lab, took for granted that the strategies people most often turn to in order to boost happiness came to prominence because the data documenting their effects justified the connection. But a recent push among psychologists to apply more stringent standards to research studies and the interpretation of data made the pair question how strong the historical happiness-research data really were.

Read more: The Daily Habits of Happiness Experts

In 2011, after a major psychology journal published a paper purporting to find evidence for extrasensory perception, many researchers in the field questioned how data supporting what they felt was an impossible conclusion could appear in a reputable scientific publication. That criticism led to a reassessment in the field of how data are collected, how they are analyzed, and how much the data are adjusted in the interpretation phase to fit pre-specified hypotheses. The problem wasn’t unique to psychology research, but, says Dunn, “we were the most proactive in combating the problem and making it central to reforming our field. The little techniques that people used to massage the data, and thought were like jaywalking, were really more like robbing the bank in terms of how big an impact they have.”

Some of these practices included selectively reporting only certain segments of the data collected during a trial, excluding certain participants, or controlling for criteria such as age, all of which can cumulatively affect the outcome. Researchers may have good reasons for these choices; there may not be sufficient data on specific age groups, for example, to make an analysis statistically meaningful. But while “there may be justifiable reasons for making these decisions, they could lead to findings that are not as reliable,” says Folk. “It’s like throwing a dart on the wall and drawing the bullseye afterward.”

Read more: 6 Surprising Things You Think Are Making You Happy—But Are Doing the Opposite

After 2011, those in the psychology field informally agreed to apply more rigorous standards to their research by creating a system in which scientists could pre-register studies, along with their planned methods for collecting and analyzing data, in online databanks (a popular one, the Open Science Framework, is operated by the Center for Open Science, a non-profit group of scientists working to promote transparency and reproducibility of scientific data). That would, in theory, discourage researchers from adjusting their methods or analysis in any way once the data were collected. “The only way to cut short the incredible flexibility that’s occurring with data interpretation is to tie our hands ahead of time,” says Dunn. The commitment isn’t binding in the sense that it’s not connected to funding or academic appointments, but reviewers who evaluate papers before publication can consult the database and monitor adherence.

She and Folk investigated how many of the studies involving the five most popular happiness strategies promoted in media coverage of the topic included such pre-registered studies. The strategies included expressing gratitude, being more social, exercising, practicing mindfulness or meditation, and spending time in nature.

They started with more than 22,000 papers involving these strategies and determined that only 494 properly compared the interventions to a control and excluded specialized groups of people, such as those with depression, who could confound the results. Of that group of studies, only 57 were pre-registered and constructed in such a way that their results were statistically well-powered.

When Dunn and Folk then looked at each specific strategy, they found that gratitude and social interaction were the only ones supported by pre-registered, well-powered studies. On the one hand, says Folk, “it’s understandable that people continue to think there is a ton of well-supported evidence behind these strategies. After all, there is plenty of research prior to 2011 that shows evidence that these strategies can improve happiness. At the time, it was the highest quality evidence that was available. But there was a problem with the way the research was done, although nobody realized it then.”

Read more: Happiness In America Isn’t What It Used to Be

Even with the two strategies that had the strongest evidence behind them—expressing gratitude and becoming more socially engaged—the researchers found that the effects were short-lived. Writing lists of things for which you are grateful, or writing letters to people expressing gratitude, can improve mood, but only temporarily. Similarly, becoming more socially engaged by speaking to strangers could improve positive mood for short periods, but the studies did not examine the effect of spending more time with close friends and loved ones.

Their findings don’t necessarily mean that the strategies won’t contribute to improving people’s mood and increasing happiness, but the durability and strength of the effect isn’t clear from the existing data. “I think there is good reason to believe that they do work,” says Folk. “But I think it’s important to understand that they might not work for everybody.”


