
Why Does Research Clump?

Since my view of how research works involves a lot of arguing in front of an audience, how about the debate over who should inherit the armour of Achilles? Silver plate in the Hermitage with Ajax and Odysseus competing for the armour of Achilles (number ω-279). Photo by Sean Manning, September 2015.

Robin Hanson, the economist and futurist with a great deadpan, has been thinking about why academic research tends to clump around particular problems. Like many American thinkers today, he appeals to a theory of mind in which most of what people do is really about status and social position, and nobody is sincere. In his post Idea Talkers Clump, he puts it thus:

I keep encountering people who are mad at me, indignant even, for studying the wrong scenario. While my book assumes that brain emulations are the first kind of broad human-level AI, they expect more familiar AI, based on explicitly-coded algorithms, to be first.

… I’d estimate that there is now at least one hundred times as much attention given to the scenario of human level AI based on explicit coding (including machine learning code) than to brain emulations.

But I very much doubt that ordinary AI first is over one hundred times as probable as em-based AI first. …

In addition, due to diminishing returns, intellectual attention to future scenarios should probably be spread out more evenly than are probabilities. The first efforts to study each scenario can pick the low hanging fruit to make faster progress. In contrast, after many have worked on a scenario for a while there is less value to be gained from the next marginal effort on that scenario.

Yes, sometimes there can be scale economies to work on a topic; enough people need to do enough work to pass a critical threshold of productivity. But I see little evidence of that here, and much evidence to the contrary. Even within the scope of working on my book I saw sharply diminishing returns to continued efforts. So even if em-based AI had only 1% the chance of the other scenario, we’d want much more than 1% of thinkers to study it. At least we would if our goal were better understanding.

But of course that is not usually the main goal of individual thinkers. We are more eager to jump on bandwagons than to follow roads less travelled. All those fellow travellers validate us and our judgement. We prefer to join and defend a big tribe against outsiders, especially smaller weaker outsiders.

Now, I share his frustration when I see large amounts of attention devoted to some problems while others which seem just as interesting are ignored. If smart people have been arguing about something for 200 years, and no new sources or methods have appeared, I have trouble believing that my opinion will add anything to the conversation (this is Daniel Kahneman's principle "thou shalt respect base rates, and not let thyself make excuses about why this time is different," and Edsger W. Dijkstra's Third Golden Rule for Scientific Research [EWD 637]). On the other hand, as an ancient historian from Canada, I can think of some other reasons why research tends to clump.