Behavioral New World
September 1, 2023
So how do we know?
In last month’s newsletter, I wrote:
Thus, the quality of the evidence we rely on in decision-making depends on three things: 1) the integrity of the original research; 2) how accurately that research is interpreted for broader audiences; and 3) whether the (non-fraudulent) research can be replicated.
In that newsletter, I discussed the replication crisis (point 3). This month I address point 1 – the integrity of research. I draw from Stuart Ritchie’s excellent book Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth. Here I touch on three issues that undermine the evidence we use for decision-making. Then I’ll try to answer the question, “How do we know what to believe?”
(Some of) the problems
Fraud. In an irony that I could not have dreamt up, a recent scandal involves a scholar whose research focus is dishonesty! See https://rb.gy/siasn. And a well-known behavioral economist’s work has also been seriously questioned: https://rb.gy/qqvyx.
Fraud occurs for obvious reasons. “Publish or perish” is a well-worn phrase in academic circles. The quest for fame and fortune can be a powerful pull. In short, the rewards for highly visible research can be substantial.
Academic publication bias. Academic journals (more specifically, their editors) are inclined—consciously or not—to publish studies that have statistically significant results.[1] In a simplified example, a study that purports to show how you can get rich investing in the stock market is more interesting than one that shows you can’t “beat the market.”
Public press publication bias. A journalist is more likely to write about a study that shows that signing an honesty pledge decreases cheating than about one that shows this simple intervention has little effect. A simple, easy-to-implement intervention has a significant effect? That’s interesting. It doesn’t have an effect? Less interesting. The journalist’s choice is not intentionally misleading—they simply want people to read their writing, and interesting is better.
Important: “insignificant” findings are information. For example, if it is impossible to beat the market, don’t waste your time trying. More generally, publication bias of both types means that we as information consumers are missing part of the picture because we tend to see only “sexy” results.
What can we as information consumers do?
There is little, if anything, we can do to reduce research fraud directly. In response to the Reproducibility Project (last month’s newsletter), academic publishers, associations, and editors have taken steps to increase the integrity of research. Science Fictions, referenced above, explains recent changes in the procedures for academic publishing.
I have three suggestions for those of us not doing the research. First, be wary of any single study. Often the results of a first-of-a-kind study are reported with the headline, “New and surprising results.” If the results are truly new, it makes sense to wait and see whether subsequent studies find (roughly) the same thing.
Second, publications that strive for objectivity often include skeptical commentary. A recent example: Researchers claim, “We’ve found a superconductor that works at room temperature.” Skeptic: “I’m not convinced. Replication studies are underway; let’s wait.” Pay attention to the skeptic. (Update: The results have not replicated well.)
Third, think carefully about where and from whom you get your information. Does the publication have a reputation for being even-handed? Personally, I trust a leading business magazine and a well-known science magazine for most of my research news (coincidentally, both are British publications).
Of course, I don’t blindly accept everything I read in those publications (my inner skeptic is alive and well), but I find them more consistently correct than alternatives. To choose wisely, watch for confirmation bias (May 2022 newsletter) when selecting your news sources.
If you are not a subscriber, you can subscribe for free at johnhowe.substack.com
[1] No need to worry about “statistically significant”—you can just think “significant” or “important.”
"A journalist is more likely to write about a study that shows that signing an honesty pledge decreases cheating than to write about a study that shows that this simple intervention has little effect. A simple, easy-to-implement intervention has a significant effect? That’s interesting. It doesn’t have an effect? Less interesting. "