Mathematical Methods and Research as a Community
The famous ancient historian Walter Scheidel has reviewed a book on mathematical models of the economy of the Roman empire.
The current system of academic training, recruitment and promotion is not well equipped to recognize work that is routinely collaborative, may result in electronic outputs rather than traditional deliverables, and is not overtly focused on the monograph as the basic coin of the realm. All that makes it hard to reconcile with norms and expectations that are deeply entrenched in the academic humanities, most notably in the United States where institutionalized individualism and fetishization of the little-read book rule supreme. Academic incentive structures will need to be tweaked in favor of collaborative and non-traditional work to give simulation studies a chance to flourish.
Some of my gentle readers may not know that in ancient world studies we have a situation where, to make a bibliography count for academic promotion, we have to print a few hundred copies and sell them to libraries where they collect dust while researchers check the website with PDFs or a searchable database. Rachel Mairs’ Hellenistic Far East Bibliography faces this barrier, as does the ETCSL. And peer-reviewed publications in ancient history and philology are still expected to be written by one or two authors, whereas in natural science there are often a dozen or more authors who contribute different specialized skills (perhaps one performs a chemical test, another writes the software, a third does most of the writing, and a fourth manages the project). But I see a big problem with pushing to focus on understanding the ancient world through mathematical models.
Right now all areas of research are in an epistemic crisis because our networks of verified trust have broken down. This is obvious with public health policy and social psychology, but as far back as 2009, Jona Lendering noticed that 37 of 50 common errors about antiquity came from trained experts writing outside their area of expertise (e.g. an expert in Greek literature writing about Roman archaeology). For many different reasons, these mathematical models are especially hard for outsiders to evaluate. They certainly require math and programming skills which not many students of the ancient world possess, but often they also use proprietary software packages, large amounts of computing power, or the kind of big data which is bad data. And if you want to spiral closer and closer to the truth, you want to use methods which as many peers as possible can evaluate.
I see people trying to put history before the Iron Age into absolute time by using complicated statistical techniques on incomplete sequences of tree rings (dendrochronology) and ambiguous radiocarbon dates. This is very ambitious, but it’s very easy to assume your conclusion, and few people have the skills and time to see if the methods work. Already in 1991, it was very difficult to sort out the evidence for chronology in the area from Sweden to Egypt and conclusively demonstrate either Petrie’s Egyptian chronology or one of its challengers.
Discussions of research such as Gray and Atkinson on Proto-Indo-European are often frustrating because many critics did not have the language to express the mathematical problems with the argument, and the journal did not understand the logical or philological problems. (Papers in natural science are also structured differently than papers in the historical sciences, so you have to learn to read each genre.) The rise of Twitter did not help things, since it rewards snappy witticisms over moderate and reasoned criticism (has anyone seen a clear update on the theory of a cosmic airburst over Tall el-Hammam which avoids insinuations and overstatement and just focuses on the evidence and methods?).
Training more archaeologists and historians in math and statistics could be a good idea, but we are already struggling to spot holes in arguments built with our existing methods. It took 50 years before someone said in print that the glued linen theory did not match any surviving armour before 1970, or any description of armour being made before 1860, because most people interested in ancient Greek warfare are not archaeologists or armour scholars. Until we improve our systems of mutual criticism, adding more methods will just create more opportunities for a bad argument to slip through because nobody can see and explain the problems.
Scheidel enthuses:
The payoff promises to be considerable: novel pathways of analysis as well as consilience with social science fields in terms of research design and hypothesis testing.
But the quantitative social sciences produce more manure than a pig farm, even though statistical methods are their main tools. They do not produce results you can rely on without a lot of inspection. Expecting archaeologists who dabble in statistics to use them better than psychologists or sociologists is very optimistic, and so is expecting ancient world studies to clean up after bad studies when psychologists and sociologists and economists have failed. I think ancient world studies has things to offer quantitative social science, from case studies before the 19th century to our care with our data and our curiosity about where those numbers come from and how the things we can measure relate to the things we want to study.
So I would agree that administrators should give more credit to online publications and post-publication peer review (if your colleagues all use a resource, that is very strong evidence that they think it is sound and useful, much stronger than if two colleagues and an editor agreed it was publishable). I also agree that academic training in the historical sciences encourages people to try to do everything themselves, and that this is inefficient and leads to overwork. Very few people are good researchers, and good popularizers, and good at keeping an audience on social media, in addition to all the basic academic tasks like teaching and department administration. Encouraging a more collaborative approach would be of value. I even agree that the archaeology of the Roman empire can produce something that is much easier to call data than ancient texts or ancient paintings and sculptures.
But if we want to learn true things about the past, I think we should put more resources into collaborative criticism and dialogue between different types of researchers, and not into ‘big ideas’ that fewer and fewer people can evaluate. The ‘big ideas’ might help ancient world studies compete with boastful disciplines like economics, but without stronger criticism, they won’t lead us closer to how it really is, or convince other thoughtful, skeptical people that our work and methods have value. As Jona Lendering warned us in 2009, let’s work harder at suppressing the false even if finding new truths is more fun.
As I will say at the end of this year, to launch really big impressive projects, you have to trust a lot of people and some of them will cheat you. Dan Davies convinced me of this in his book on the economics of fraud. I am happy to focus on my less ambitious and more skeptical work, helping people understand what ancient Greek swords or Babylonian temples were like rather than developing a grand theory of the Roman economy. That will never make me famous, but it will never lure me into telling millions of people a plausible lie.
PS. For the record, I have a degree in computer science (so about two years each of calculus, linear algebra, graph theory, statistics, and so on).
Further Reading: Kostas Vlassopoulos shows how using numbers can be ‘truthy’ rather than scientific in another BMCR of a book I have not read.
Edit 2023-08-05: Stanford’s ORBIS model is a good example of something you can do using Roman archaeology which is really amazingly cool, but where the results may not be as accurate as they seem (I can’t comment directly; it’s not my specialty) https://orbis.stanford.edu/
(scheduled 10 July 2023 based on a Mastodon thread)
I once knew a fellow who boasted about his interdisciplinary research. And then in the space of a year I heard two views of him and his work. A physicist told me “his grasp of physics is weak but we find his knowledge of chemistry valuable”. And a chemist said, in virtually the same words, that the fellow didn’t know much chemistry but he did bring physics to the party.
My own view was that he wasn’t very bright so I’d trust him on neither.
As ever all depends on the individual’s own merits, above all his honesty. As a rule I preferred collaborating with people who were happy to say “I don’t know”.
I will say one other thing about collaborative research: when two or three of you suddenly, in a conversation, see the penny drop, the joy is wonderful – as in dancing and laughing in the corridor wonderful. Man is a social animal.
We don’t really have a choice in ancient history because we have to use all kinds of evidence and nobody is competent to handle them all and often their validity is disputed. Right now I feel like we are putting too much energy into new claims, and not enough into making sure that the foundations are still sound, but other people have other views.
I will say that Roman archaeology is the part of ancient world studies where sophisticated mathematical models seem most promising, because the archaeology of societies with a lot of durable stuff can give ‘data’ not just ornamental numbers.
Also, what I am talking about is the need for people with different skill sets to talk to each other and criticize each other’s work. The glued linen theory lasted so long because nobody in mainstream ancient world studies knew that none of the cultures with linen armour in Mesoamerica, Europe, Africa, or India made linen armour that way, and not all of them knew that in archaeology, it’s a bad sign if your theory has no parallels in the ethnographic or historical record. We don’t need superscholars who can do everything; we need to make sure that many scientific-minded people with a variety of skills examine claims before we accept them. In my experience, mathematical models in ancient world studies rarely get this public and formal criticism aimed at mutual understanding, just a quiet conversation: “I played with it and I did not find the results credible because …”
I would love to specialize more in writing hard, evidence-based studies instead of grand narratives. However, in our country it’s very hard to do interdisciplinary collaboration between universities or domestic academics (usually more than 2, or 3-4 people = Doom, and not that PC game). I’m curious whether online publications and more collaborative projects are at an early stage of development for the coming years, but as far as I know, the system isn’t geared that way.
I see from Vlassopoulos’ review http://www.bmcreview.org/2016/03/20160304.html and Ober’s answer https://www.academia.edu/22898166/Reply_to_Vlassopoulos that collective mutual criticism and civil debate sometimes hardly exists. But I take it as a bad exception. Some statements of Ober’s are super funny (his answer and mainly the book), as if he were not an academic but a popular writer trying to mediate various new flavours of academic knowledge. But don’t accuse me of favoritism: some works by Ober are good, like https://brill.com/display/title/1278 (where I stand on his side against the critics).
What worries me more is the same thing as in 2014. We have a bunch of super-specialized books and specialists, but no proliferation of that vast amount of knowledge into a more comprehensive, cohesive “grand narrative” for laymen, students, or the general public, or any serious updating of knowledge in our educational system. Nor do we have ways of thinking about how we should do it and keep pace with all the new developments in ancient history (and other history as well). People are still comfortable saying things like:
Ahh, I’m not an Achaemenid specialist, why should I read something about the Neo-Babylonian or Neo-Assyrian Empires, or the Persians and their military? It’s not my expertise, so I cannot judge it. But yes, you can, at least to a certain depth, and you can check various things (the humanities are not natural sciences, where you really have to know more to evaluate academic work). Then you can’t write obviously arrogant, unknowledgeable models and comparisons. It’s not just Ober who’s lacking; it’s a general trend: “not my field, I don’t care about the conclusions of other disciplines and academics, because they would destroy my model or I would be forced to rethink the whole concept of my work.”
The use of quantitative methods in ancient history is surely problematic when we don’t have as much data as we would want to be certain, as in other periods. I am no mathematician or IT guy, yet I refuse to totally abandon the effort to use what data is available. In the case of the Neo-Assyrian Empire we have some studies (based on archaeology, viz. Karen Radner and others) where we see very interesting data on their agricultural production and system of storage. Despite the absence of total knowledge (which won’t be achieved for a long time, if ever), these incomplete numbers can tell us something important. At least we can infer from them models of how the Assyrians operated their economy, tribute system, and taxes.
Yes, in my case of Neo-Assyrian military history it is mediated knowledge, but even with this you can still decide which theoretical model is more plausible and why. Even if you have only one detailed book about Neo-Assyrian society based on hard data (for the Harran region https://brill.com/display/title/13257), together with other primary sources and Dezső’s works you can at least propose what social classes were in the army, in what percentages they were represented, the average wealth of each class, etc. I was shocked that I had to do this by myself for my book. The definition of an academic (not by profession but by nature) is: if I want to know something, I will learn it from relevant sources and I will question my sources and conclusions. For that you don’t need to be a scholar in several fields. I still insist that the humanities, unlike the natural sciences, are vastly accessible to laymen, despite some limitations.
I think that outside the hard academic core, popularizing authors must put in more effort and honesty. And above all, publishers should be better educated and not so stingy sometimes. Sadly, I had in my hands Healy’s book about Neo-Assyrian warfare, a total disaster https://ospreypublishing.com/us/ancient-assyrians-9781472848093/ Skip the fact that almost no new drawings were used; two thirds of the book is civil history. Dezső https://elte.academia.edu/Tam%C3%A1sDezs%C5%91 would do much better work in 300 pages (because civil history is available everywhere, this complement should take a minimal part of the content, certainly not more than 30%). Such a wasted opportunity.
Even a non-specialist could find fascinating problems in archery, logistics, etc., consult them with specialists (academics, martial artists), or do something “new” by himself. Nothing like that from Healy. I am definitely allergic to the inability of some authors to contemplate that our current environmentalist obsession, as the explanation of all problems, is just a current fashion and not the truth of the highest god, without problems (but the philosophy of science is hardly known these days, I guess). In Healy’s case, yes, he cites climatic and archaeological research on how drought probably destroyed the Assyrians. Yet we have studies showing that the problem of drought and salinization of soil did not affect the whole Empire, and there was still a lot of food available for the army. But why bother to question an established, “prestigious” new theory or search for more? I am so disappointed by the attitude of the author and publisher.
[…] did a lot of introspection and writing not of general interest. One which I can share here is that I am happy to stick with medium-sized claims where I can judge all of the details of the argument, rather than ‘big […]