Some Thoughts on Gardner’s “Future Babble” (2010)
Categories: Modern, Not an expert


Introduction

[Cover image: Dan Gardner’s Future Babble, with the title, author, and blurbs in black over a black-and-white crystal ball with pale yellow rays radiating from it]

Dan Gardner’s Future Babble (McClelland and Stewart Ltd.: Toronto, 2010) is a pop book with a structural theory for why so many people get called out to predict the future using methods which fail nine times out of ten, then get called back after a failed prediction to make another. It relies upon earlier trade books (such as Phil Tetlock‘s work on expert judgement and When Prophecy Fails) and the psychology of cognitive biases and heuristics. One of Gardner’s favourite case studies is Paul Ehrlich, who, like Noam Chomsky, spent most of his career repeating ideas he had in the 1960s (but whose ideas were much more easily falsified: the death rate did not rapidly rise from the late 1970s, and people all around the world start having smaller families once women have the ability to choose).

Three-Agent Model

Gardner focuses on a triangle of the questioning public, people with opinions on the future, and the media who choose which of those people to promote.

People want to know what’s happening now and what will happen in the future, and admitting we don’t know can be profoundly disturbing. So we try to eliminate uncertainty however we can. We see patterns where there are none. We treat random results as if they are meaningful. … Sometimes we create these stories ourselves, but, even with the human mind’s bountiful capacity for self-delusion, it can be hard to fool ourselves into thinking that we know what the future holds … So we look to experts. They must know. They have PhDs, prizes, and offices in major universities. And thanks to the news media’s preference for the simple and dramatic, the sort of expert we are likely to hear from is confident and conclusive. (p. 15)

When Philip Tetlock forced people to make predictions which were easy to evaluate as true or false, he found that they were about as accurate as random guesses (pp. 25, 26). Some experts were worse at predicting their area of expertise than other topics! (p. 82) Tetlock divided experts into foxes, who did slightly better than chance, and hedgehogs, who were worse (although both, as a class, could be beaten by simple algorithms such as “always predict no change” or “project forward the average of the past ten years”).
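Those two baselines are trivial to state in code. The sketch below is my own illustration (not from the book, and the numbers are invented): each baseline takes a history of yearly observations and returns a forecast for the next year.

```python
def forecast_no_change(history):
    """Baseline 1: predict that next year looks exactly like the last observed year."""
    return history[-1]

def forecast_recent_average(history, window=10):
    """Baseline 2: predict the average of the most recent `window` observations."""
    recent = history[-window:]
    return sum(recent) / len(recent)

# Made-up yearly figures, purely for illustration.
inflation = [3.1, 2.8, 2.9, 3.4, 2.6, 2.2, 2.5, 2.9, 3.0, 2.7, 2.4]
print(forecast_no_change(inflation))       # the last observed value
print(forecast_recent_average(inflation))  # the mean of the last ten values
```

The point is not that these rules are good, only that they are cheap, mechanical, and free of the confident storytelling that makes hedgehogs attractive to audiences.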

I would summarize Gardner’s model as follows. The public wants to know the future but is not very good at choosing which experts to trust. They tend to prefer people who are confident and articulate and have simple, emotionally compelling stories. Experts are better at some kinds of predictions than others, and the experts who are least bad at predicting tend to be cautious, multi-faceted, and self-critical. In principle, intermediaries such as news organizations could help by systematically favouring foxes, asking questions that we have reason to think are answerable, and giving less space to experts with a record of failure or a habit of making excuses for failure. But they rarely do so, in part because citing impressive experts boosts their own credibility, and in part because they often confuse expert criticism with personal attacks (pp. 180-182). Getting someone on to babble about what might happen one day is quick and cheap; finding out what actually happened yesterday is slow and expensive. One of the most striking passages is this complaint by journalist Jeff Greenfield on PBS NewsHour in 1999:

It’s the least intellectually taxing question that someone can ask or the least intellectually taxing answer someone can give, and one of the least challenging discussions for an audience. You’re not actually asking about history, culture, facts. You’re talking about what one of my law school professors used to call breezy speculation, or the initials thereof. And in an era where there’s more and more talk on cable, the cheapest thing you can do on television is to bring people into the studio and have them talk, and the easiest kind of talk is to say what’s going to happen.

pp. 166, 167 (remember Vi Hart’s Social Media Systems and Democracy and her warning that quick, easy responses give you more votes on the Internet than slow, hard responses?)

Celebrity debunkers like Julian Simon or James Randi can turn into distorted mirrors of their rivals, because the debunker has to assert that the flim-flam he attacks is important, and because both need showmanship to get a prominent place in the news (pp. 232-236). As one of Gardner’s foxes, the Canadian analyst Alan Barnes, puts it:

Our clients have been too willing to accept analyses that are not as good as they could be and should be. And because we’re not getting that kind of pressure, there’s not much incentive to improve the way we do business (p. 263)

I could add that the news media have incentives to create scandals and dangers, because people pay more attention to new threats than to anything else. This discourages them from pointing out that the latest scary prediction is similar to five earlier scary predictions which were not as bad as feared or never came true at all.

Methods

A book which leans on psychology should acknowledge the shoddy methods common in that field. Gardner is not particularly critical as he summarizes research. Since I am not Andrew Gelman, one commonsense example will do: is it really shocking that when psychologists asked 14-year-old boys “questions about such emotional subjects as their families, sexuality, politics, and religion,” the men could not guess, 48 years later, what they had said? (p. 208) How many children would answer fully and frankly in that situation? How many experienced things shortly afterwards which changed their perspective, such as puberty or a religious crisis? And how stable were their opinions on those things?

As far as I know, nobody claimed to predict prices decades in advance or the world population a century in advance before the 20th century. People in the ancient world tried to predict the outcomes of wars or marriages or horse races, but as far as I know they did not try to predict prices or demographics. Is it reasonable to say that people who predict these things today are driven by human evolution and deep time? Or is it just something that con artists thought up which newspapers keep printing and businesses keep buying because of how broadcast media and large businesses work? Astrology by zodiac sign was invented in the early 20th century to fill newspaper columns too.

A strength of Gardner’s approach is that he considers the possibility that there are different kinds of people with different strengths and weaknesses in their thinking. Pop psychology often presents cognitive biases as something that affects everyone, but Tetlock found two broad ways of thinking about how to predict the future which lead to different types of prediction. Unfortunately, Gardner combines this with an Imperial We. He assumes that his audience is the kind of people who are drawn to bullshit, not the ones who are curious but end up scratching their head when someone who could talk about the world today decides to enthuse about world population projections for 2100.

Another aspect of Tetlock’s work left me puzzled. Most of us have learned to be skeptical of the blowhard who claims to be an expert on everything and has few or no demonstrable achievements. One of the warning signs that an expert is turning into a pundit is that they start speaking to a large audience about all kinds of things (Zach Weinersmith has a comic where he describes this as going emeritus). But Tetlock’s foxes are eclectic: they use a wide variety of data and methods, rather than forcing everything into the Procrustean bed of their pet theory. And his superforecasters seemed to be characterized by a way of thinking about every problem rather than domain knowledge of a specific problem. Are the barracks-room lawyer and the nerd with a “well, actually” at one end of a curve with superforecasters at the other? Are there methods by which someone can use their big educated modern brain to easily solve any hard problem?

Conclusion

Basically, it’s been shown time and again and again; companies which do not audit completed projects in order to see how accurate the original projections were, tend to get exactly the forecasts and projects that they deserve. Companies which have a culture where there are no consequences for making dishonest forecasts, get the projects they deserve. Companies which allocate blank cheques to management teams with a proven record of failure and mendacity, get what they deserve.

Daniel Davies https://blog.danieldavies.com/2004_05_23_d-squareddigest_archive.html

In short, the mystery of the Bermuda Triangle became a mystery by a kind of communal reinforcement among uncritical authors and a willing mass media to uncritically pass on the speculation that something mysterious is going on in the Atlantic. A similar phenomenon for the Pacific was also promoted by Berlitz for what is called the Dragon’s Triangle.

Skepdic https://skepdic.com/bermuda.html

This is a depressing book because it’s about a structural problem which can only be reduced if intermediaries sacrifice their personal short-term interest for the long-term collective good. It’s not reasonable to expect the mass of the public to start thinking like scientists, or expect the hedgehogs to fly off to Mars and leave us in peace, but intermediaries like news organizations and web service companies could choose to ignore predictions of the world population in 50 years or the impact of a new technology, and could choose to boost experts who use good methods or have a track record of success over experts who are slick and confident. Particularly in the new media, I think the willingness to do this is essentially zero. If you are an orthodox scientist (like I am) then the idea that nobody can predict population or prices decades in the future should not be a surprise. Popular criticisms of a science of future history go back to de Camp’s “Science of Whithering” in 1940. Although people of good will have worked together to create better systems in the past, Gardner’s vision of the future is a photogenic bullshitter stamping on a book forever.

I got three useful things out of this book. First, Tetlock has given us a basis to be suspicious when anyone famous makes a prediction. People with one kind of mind accuse people who say this of being hipsters or jealous rivals, but Tetlock found evidence that the traits which are good for becoming famous are bad for making accurate predictions or providing effective analysis. Second, it’s an antidote to thinkers who are too worried about the worry of the day, whether nuclear weapons and the Yellow Peril in the 1940s, peak oil and the Iraq war in the oughties, or chatbots and the Russo-Ukrainian war today. The future is full of dangers but they are rarely the ones we expect. Third is Gardner’s strategy of making choices where a wide variety of outcomes have upsides (John Keegan’s plan with branches, George R.R. Martin’s chaos as a ladder, the Internet’s Xanatos Gambit, chess players’ fork).

Further Reading: Marc Brooker, “The Four Hobbies, and Apparent Expertise” https://brooker.co.za/blog/2023/04/20/hobbies.html (essay in the style of Paul Graham)

That Mitchell and Webb Look skits with False Jeopardy Productions eg. “I’m looking for a gift for my aunt” https://piped.mha.fi/watch?v=cg4z7mzaVco

(scheduled 6 April 2023)


3 thoughts on “Some Thoughts on Gardner’s “Future Babble” (2010)”

  1. russell1200 says:

    Taleb makes a very strong point by noting that there is your “normal” distribution (aka: the bell curve) and things that don’t have normal distribution, even though a large number of outcomes may fall within a limited range (your various long tails, black swans, etc. apply).

    Most human endeavors fall within complex systems that are very subject to fat tails. So casinos can do a very good job of predicting income, and seeing when the numbers are off, but they don’t predict the loss of money from such things as kidnappings, accidental deaths, etc.

    1. Sean says:

I should probably read something long by Nassim Nicholas Taleb one of these days. A book whose review is in the pipe talks about how fraudsters take advantage of people who assume that anomalies are random and can be contained by some inflexible procedure that does not rely on human judgment or randomness.

  2. Books Read in 2023 – Book and Sword says:

    […] Gardner, Future Babble (McClelland and Stewart Ltd.: Toronto, 2010) Review of Gardner […]
