When Trust is Verified Badly

Now, we can observe many flaws in just this one passage, but it should be noted that Low has done her reading and cites widely. The problem is that the analyses on which she is working are themselves flawed and, without detailed study outside of her discipline, she and other academics are unlikely to realise this. This is a hard warning for those of us who wish to research that assumptions are pervasive and insidious.

Rob Runacres, “HEMA Research: false truths and wishful thinking,” Western Martial Arts Workshop, Racine, WI, September 2017, https://www.renaissanceswordclub.com/2017/09/27/hemaresearch/

In an earlier post, I argued that science advances human knowledge through a network that tests claims before they become premises in bigger arguments, and then tests the structure of those arguments to make sure they can hold the weight placed upon them. Past the early days of a field of knowledge, understanding advances because of systems and communities, not lone geniuses who do everything themselves. Communities can ask more and harder questions than any one person can. But anyone who follows science news knows that this does not always happen. How can this system of verified trust fail?

Sometimes a claim lies in the no-man's-land between communities. Ancient historians started to toy with a very unlikely theory about ancient linen armour because ancient archaeologists had no finds to offer them, and armour scholars and armourers were different communities. They could have read a few books on the world history of armour, or talked to armour experts, but none of the people peer-reviewing their work or drinking coffee with them at a conference knew that those were the evidence and the experts to consult. The PhD student and fencer above accuses a researcher with a literary orientation of assuming that fencing before the 16th century was crude. We now have a bookshelf full of sources showing that late medieval fighting arts were sophisticated, but there was very little interest in them before the 1990s, and they still get more attention from amateurs than from medievalists and literary scholars. The researcher could have found the evidence against her belief, but she did not know whom to ask or where to look (and the people who reviewed her book before publication did not know either). The authorities she trusted knew nothing about late medieval fighting arts, but still spoke confidently about them, and she did not know that they were just talking.

Other times, researchers don't know how to use their methods of verification. Every time I read a statistician writing about methods in the quantitative social sciences, they complain that many psychologists and their peers lack a good undergraduate understanding of statistics and just know how to perform rote calculations. The economists Reinhart and Rogoff got governments to follow their ‘scientific’ argument for austerity until a graduate student noticed that they had set up their calculations wrong in Microsoft Excel. The archaeologists who argued that Tall el-Hammam in Jordan was destroyed by a comet have been accused of misunderstanding how mud brick collapses as a site decays, how pottery reacts to heat, and how to distinguish damage done to bones at death from damage done after death. It's not for nothing that Richard Feynman named this cargo cult science. The methods of science are impressive, and it's all too tempting to use them as props or rhetorical tools rather than as tests that might disprove your cherished beliefs. And Reinhart and Rogoff were not the only researchers to insist that even if the facts they based their argument on were incorrect, it was still true in a higher sense. The statistician Andrew Gelman believes that many researchers in these fields see numbers, statistical methods, and anecdotes as rhetorical spice to add taste to an argument which they already find emotionally convincing.
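The Excel mistake is easy to reconstruct in miniature. Here is a sketch (with made-up numbers, not Reinhart and Rogoff's actual dataset) of how an average computed over a truncated spreadsheet range can quietly change a result: their AVERAGE() formula silently omitted several countries' rows.

```python
# Hypothetical growth figures, purely for illustration.
growth_by_country = {
    "A": 2.1, "B": 1.8, "C": -0.3, "D": 3.2, "E": 2.7,
    "F": -1.5, "G": -2.0,
}

rows = list(growth_by_country.values())

# The average the formula should have computed, over every row.
full_average = sum(rows) / len(rows)

# The average actually computed when the range is dragged two rows short,
# silently dropping countries F and G.
short_range = rows[:5]
short_average = sum(short_range) / len(short_range)

print(f"all rows:    {full_average:.2f}")   # 0.86
print(f"short range: {short_average:.2f}")  # 1.90
```

Nothing in the spreadsheet flags the missing rows; the only defence is someone re-checking the ranges by hand, which is exactly what happened.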

Scientific methods are very powerful. But if a claim lies outside a community's areas of expertise, they may not be able to test it as well as they think. And it's very tempting to use the trappings of science as ritual paraphernalia rather than as tools that could prove a cherished belief wrong or force you to redo months of work. Scientific methods are powerful but demanding, and human beings have trouble meeting those demands.
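This 'ritual paraphernalia' failure can even be simulated: a rote user who runs a significance test many times on pure noise will keep finding 'discoveries'. The sketch below (a crude two-sample z-test on simulated data, my own illustration rather than any particular study's method) shows roughly 5% of comparisons coming out 'significant' when there is no effect at all:

```python
import random

random.seed(0)

trials, n = 1000, 30
false_positives = 0

# Compare two groups drawn from the SAME distribution, over and over.
for _ in range(trials):
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(0.0, 1.0) for _ in range(n)]
    mean_diff = sum(a) / n - sum(b) / n
    # Crude z-statistic: the difference of two means of n unit-variance
    # samples has standard deviation sqrt(2 / n).
    z = mean_diff / (2 / n) ** 0.5
    if abs(z) > 1.96:  # the rote "p < 0.05" threshold
        false_positives += 1

print(f"'significant' results with no real effect: {false_positives}/{trials}")
```

Someone who understands the method expects those ~50 false alarms; someone performing it as ritual writes them up.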

L. Sprague de Camp said that people will pay all they have to be bunked, but nothing to be debunked. Prove him wrong with a donation on Patreon or other payment processors.

Further Reading:

I would like to follow up on Maciej Ceglowski’s essay Scott and Scurvy (2010) which argues that the same Royal Navy which found a preventative for scurvy ended up just going through the motions a century later without ever realizing that what they were doing was no longer effective.

Andrew J. Gelman, “Biology as a Cumulative Science” https://statmodeling.stat.columbia.edu/2022/03/04/biology-as-a-cumulative-science-and-the-relevance-of-this-idea-to-replication/

Michał Sikorski, “Is Forensic Science in Crisis?” 31 March 2022 http://philsci-archive.pitt.edu/

PS (edit): the psychologist running rats in Feynman's story seems to have been one Paul Thomas Young, but no published study exactly as Feynman described has been identified. Which shows how powerful but dangerous it can be to skip these steps of verification when you are sure that what you are saying is true and the story you illustrate it with is just a detail!

(scheduled 3 June 2022)
