Thought experiments: scientific parallels

I’ll be giving a controversial paper at two conferences: the American Political Science Association (Sept 1-4, in Philadelphia), and the European Consortium for Political Research (Sept 7-10, in Prague).

My paper draws parallels between thought experiments in political theory and philosophy, and controlled experiments/comparisons in the natural and social sciences. Some of these parallels have been noticed before, by people like Frances Kamm, Tamar Gendler, and (in the book on political theory methods that I’m editing) Kimberley Brownlee and Zofia Stemplowska. But no one I’m aware of has taken advantage of the powerful toolkit that social and natural scientists have developed. I thus use ideas like internal and external validity, controlled comparison, omitted variable bias, interaction effects, spurious correlations, testable implications, and parsimony.

This helps us see better how to do thought experiments, and how much we can learn from them.

Of course, some readers will be more interested in my broader claims about the relationship between political theory and science. But note that I don’t equate the two: there are parallels, but also important differences. By contrast, I do argue elsewhere that some textual interpretation is essentially scientific: we often ask empirical questions (like what Locke meant by ‘rights’ or why he wrote what he wrote), and scientific ideas are the best tools we have yet developed for answering such questions. (See here for the most explicit version of the argument, and here for the most detailed account of what a scientific approach to textual interpretation involves.)

This isn’t really what’s going on in political theory thought experiments – which are, furthermore, only one part of political theory, and a part that many authors don’t use. Nonetheless, this casts some light on what some philosophers of science mean when they discuss ‘naturalism’, defined here as philosophy and science being ‘continuous’.

Although I’ve been thinking about and teaching some of these ideas for many years, my paper was written quite quickly, and needs more work. In particular, I cannot yet say how widespread the problems I discuss are.

The paper is here. Any comments and criticisms would be much appreciated!

‘Methods in Analytical Political Theory’ sent to Cambridge University Press

I’ve now sent the manuscript of Methods in Analytical Political Theory to Cambridge University Press.

Each chapter gives ‘how-to’ advice, explaining how to use the method or approach being discussed.

The lineup is as follows:

  1. Introduction: a ‘how-to’ approach (Adrian Blau, King’s College London)
  2. How to write analytical political theory (Robert Goodin, ANU)
  3. Thought experiments (Kimberley Brownlee, Warwick, and Zofia Stemplowska, Oxford)
  4. Reflective equilibrium (Carl Knight, Glasgow)
  5. Contractualism (Jonathan Quong, USC)
  6. Moral sentimentalism (Michael Frazer, University of East Anglia)
  7. Realism (Robert Jubb, Reading)
  8. Realistic idealism (David Schmidtz, Arizona)
  9. Conceptual analysis (Johan Olsthoorn, KU Leuven)
  10. Positive political theory (Alan Hamlin, Manchester and King’s College London)
  11. Rational choice theory (Brian Kogelmann, Arizona, and Gerald Gaus, Arizona)
  12. Interpreting texts (Adrian Blau, King’s College London)
  13. Comparative political thought (Brooke Ackerly, Vanderbilt, and Rochana Bajpai, SOAS)
  14. Ideological analysis (Jonathan Leader Maynard, Oxford)
  15. How to do a political theory PhD (Robert Goodin, ANU, and Keith Dowding, ANU)

The book should be out in 2017.

Talk at NCH: ‘History, Political Theory and Philosophy: Different Questions, Different Answers?’

On Tuesday March 22 I’ll be talking to the History of Political Thought Society at the New College of the Humanities, on ‘History, Political Theory and Philosophy: Different Questions, Different Answers?’

I’ll be arguing that while historians, political theorists and philosophers often end up asking different questions, many of their tools are the same. Historians have in effect won the battle to get political theorists and philosophers to think historically and consult historical research, but political theorists and philosophers need to do more to convince historians to think philosophically and consult philosophical research. This can be a valuable means even to primarily historical ends!

Time: 6:30 pm – 8:00 pm.

Location: Drawing Room, New College of the Humanities, 19 Bedford Square, London, WC1B 3HH. (N.B. Someone will need to let you in, so if possible please arrive by 6.30.)



Call for Papers: Methods in Political Theory, at ECPR General Conference, Prague, 7-10 Sept 2016

Keith Dowding and I are organising at least seven panels on Methods in Political Theory at the ECPR General Conference in Prague, 7-10 September 2016. Details are below.


The deadline for paper abstract submission is 15 February 2016.


In order to apply, you need a MyECPR account, which is free if your university is an ECPR member institution. Then upload a paper abstract. Feel free to contact me or Keith Dowding if you have questions about your abstract or anything else.



Disappointing (non-)response by Arthur Melzer to my and other people’s criticisms

Sixteen of us wrote reviews of Arthur Melzer’s important book about esoteric writing, Philosophy Between the Lines, in the June and October issues of Perspectives on Political Science. Melzer has now written a 10,000-word response. Unfortunately, he did not engage with most of the reviews. His wording is curious:

In the space allotted me for rejoinder, it would clearly not be possible to reply to each of the essays individually, and it would be unbearably tedious if it were. Most of the essays, at any rate, stand in no particular need of reply.

I’m not sure about any of those three claims!

For what it’s worth, my review made the following points:

  • Melzer misinterprets, or interprets partially, some evidence about esotericism, e.g. in Machiavelli and Rousseau;
  • Melzer is not clear about whether contextualist/Cambridge-School interpretations are esoteric;
  • Melzer works with a straw man when he discusses “strictly literal” readings, as opposed to esoteric ones;
  • Melzer does not respond to the most important critiques of Strauss’s methodology.




CSI Cambridge: history of political thought as detective-work

UPDATE: This article has now been published, in History of European Ideas 41:8 (2015), pp. 1178-94.

My paper ‘History of Political Thought as Detective-Work’ has now been accepted by History of European Ideas. The paper uses a detective analogy (following Collingwood and others) to give practical principles for textual interpreters on how to draw plausible inferences from incomplete, ambiguous evidence about what authors meant and why they wrote what they wrote.

I used a different analogy in the versions of this paper I gave at York, Reading, Durham, KCL and Kent in 2010-2012, but that analogy was too controversial to get published, and I only make it explicit in a chapter in Winfried Schröder, ed., Reading Between The Lines (de Gruyter, forthcoming). But those who read between the lines of the current paper will see what I’m really arguing. For what it’s worth, the different analogy was also present in the original version of my ‘Anti-Strauss’ article, but the referees rightly made me take it out. Still, it’s there implicitly. My critique of Strauss has always been a vehicle for far more important ideas.

Here is the abstract of my History of European Ideas paper:

This paper offers practical guidance for empirical interpretation in the history of political thought, especially uncovering what authors meant and why they wrote what they wrote. I thus seek to fill a small but significant hole in our rather abstract methodological literature. To counter this abstraction, I draw not only on methodological theorising but also on actual practice – and on detective-work, a fruitful analogy. The detective analogy seeks to capture the intuition that we can potentially find right answers but must handle fragmentary evidence that different people can plausibly read in different ways. Placing the focus on evidence, and on combining different types of evidence, suggests that orthodox categories like ‘contextualist’ and ‘Marxist’ too often accentuate differences between scholars. This paper instead highlights core principles that unite us – ideas that underpin good textual interpretation across all ‘schools of thought’.

New DPE students: welcome to King’s College London!

If you’re joining the Department of Political Economy (DPE) as a new undergraduate student in September 2015: welcome!

I’m one of your lecturers, and here are two (optional) preparatory readings you might find helpful, for two different modules which I convene.

4SSPP101 Studying Politics

Studying Politics is a core module taken by all students on the Politics programme and the Political Economy programme. It’s designed to empower you to think rigorously and critically about the politics research you’ll read at university. Reading 1 is the first 20 pages of Jon Elster’s book Explaining Social Behavior (2007), which gives a great sense of how to think like a social scientist. One of the most important things you’ll learn at university is the importance of thinking like a researcher, not just like a student. We want to encourage you to criticise what you read, not just make notes on it. To be critical, you will need to understand the choices that researchers make and what they could have done differently – and we will give you the tools to do this.

Students on the Politics, Philosophy and Economics (PPE) programme don’t take this module – but you’ll still find the Elster reading interesting and useful if you want to read it, because its ideas apply to other modules you’ll take.

Academic Writing Skills

This is an optional module offered to all students taking the Politics programme, the Political Economy programme, the PPE programme, and the Politics, Philosophy and Law (PPL) programme. My department is the first in the university to run a term-long course like this. It gives you guidance on how to write better university essays. Reading 2 gives a lot of practical advice about studying at university, including the importance of not being too trusting about what your lecturers and seminar tutors say! (We expect you to be critical of us as well, not just of what you read, of each other, and of yourselves.) Especially if you’re a bit worried or unsure about what to expect at uni, this chapter will give you a flavour of studying politics at university.

Looking forward to meeting you in September!

Symposium on Arthur Melzer’s new book on esoteric philosophy

I’m part of a symposium of reviews of Arthur Melzer’s important book about esoteric writing, Philosophy Between the Lines, in the journal Perspectives on Political Science (vol. 44 no. 3, 2015). This is a two-part symposium, with Melzer responding to the reviews in the second part, in the forthcoming issue. The first part of the symposium has contributions from a variety of authors:


  • Francis Fukuyama drives a further wedge between Strauss and silly criticisms of his alleged effect on US foreign policy;
  • Michael Frazer asks if some philosophers writing about esotericism actually did so esoterically;
  • Adrian Blau challenges some of Melzer’s evidence as well as what appear to be false dichotomies between esoteric/non-esoteric and literal/non-literal readings of texts – click here for a summary of my views and a copy of my article;
  • Douglas Burnham questions the idea of ‘historicism’ and asks how well Nietzsche fits this category;
  • Rob Howse questions Melzer’s evidence about the relationship between persecution and esotericism;
  • Miguel Vatter makes further distinctions between types and aims of esotericism;
  • in separate pieces, Norma Thompson, Catherine/Michael Zuckert, Larry Arnhart, Roslyn Weiss, Grant Havers and Peter Augustine Lawler each develop different aspects of the account of ancient versus modern esotericism/society.

How to do history of political thought

Here is my draft chapter on how to interpret texts, for a book on methods in political theory that I’m editing for Cambridge University Press.

I’m keen for comments – however critical! The only problem is that I need comments by August 1st if possible, as I’m submitting the book manuscript on September 1st. Sorry for the crazy deadline.

I’m particularly keen to hear from current graduate students (MA or PhD), or advanced undergraduates, as that is who the chapter is aimed at.

Even if you’ve never met me, I’d love your criticisms and suggestions! Please download the article and email me at Adrian.Blau [at] – thanks!

Is replication just for scientists? Part 2: interpreting texts

Part 1 argued that replicability, an important facet of scientific research, is also found in philosophical thought experiments. Indeed, philosophical thought experiments are easier to replicate than most natural or social science research.

Here, in Part 2, I apply this idea to interpreting texts, whether in the history of political thought, in philosophy, or anywhere else.


My key claim is that when we make an empirical claim about a text – for example, what an author meant by a word or phrase – we should provide our evidence, so that other interpreters can replicate our reading to see if they agree or not. In other words, we should give precise references (e.g. page numbers) so that other people can find the passage, read it for themselves, and see if they share our interpretation.

Aside from replicability, there are two more self-interested reasons to give precise references. First, it forces us to try to be careful. I can think of several occasions when, looking for the page number, I found that I had misread or misremembered an argument. Second, it shows our readers that we have tried to be careful. I’m more likely to trust an interpreter if I think that the author has been careful with her evidence, although there are exceptions in both directions, of course.

Unfortunately, sometimes we cannot give precise references, because we have not read the source we are citing, or not read it closely enough, or not read it recently. We don’t always give precise references in informal contexts (e.g. on blogs!), but where possible we should do so in published academic writings. One reason we don’t is the bad academic convention of giving precise references for direct quotations but not necessarily for ideas cited without quotation. I believe we should give precise references in both situations.

To change the convention, journal editors and publishers should make us give precise references where we can. I remember the editor of a leading political theory journal who considered forcing authors to give page numbers in order to get away from slapdash references to “Rawls 1971” and the like. I note with great pleasure that the American Political Science Review now requires authors to give ‘precise page references to any published material cited’. My only caveat is that page numbers are not always the most helpful reference: for example, there are so many different editions of Rousseau’s Social Contract that chapter numbers are probably more helpful there.

But the basic principle stands: ideally, other people should be able to replicate what we have done to see if they agree with our claims. This principle is as important in textual interpretation as it is in the natural sciences.

KKV’s strategic error in Designing Social Inquiry

In 1994, Gary King, Robert Keohane and Sidney Verba (‘KKV’) published their seminal book Designing Social Inquiry. It was very controversial, perhaps intentionally so, because of the claim that

our main concern in this book is making qualitative research more scientific (p. 18).

This led to a backlash from many qualitative political scientists.

I believe that the substance of KKV’s book points to a different and less controversial argument. They start to make this argument at the very bottom of page 4:

All good research can be understood – indeed, is best understood – to derive from the same underlying logic of inference. Both quantitative and qualitative research can be systematic and scientific.

But they then move on to a less relevant issue: historical research. That’s not really the point.

This is what I believe they should have said next:

All quantitative and qualitative researchers fall short of the ideal to greater or lesser extents. It happens that the logic of social-science inference is often more developed in quantitative research, but this book will use examples of good and bad practice from both qualitative and quantitative research.

This is consistent with the book’s content; it would just have required some different examples.

This message is less controversial – and perhaps the book would have been less widely read as a result. But people might have paid more attention to some ideas which have, alas, generated less debate. For example, I think that more weight should be placed on KKV’s very important ideas about uncertainty, which have greatly influenced me (see this blog post and this article of mine) and which I see as fundamental to all empirical research – even empirical research which does not see itself as social-scientific (see this blog post and this article of mine).

Important caveat: the suggestion I have made about what KKV should have said is still controversial: not everyone thinks that there is a unified logic of inference in social science! I’m just saying that if that is KKV’s view, they may have been better off framing the idea differently.

Noel Malcolm interview on the new edition of Hobbes’s Leviathan

Listen to an 18-minute interview with historian Noel Malcolm, covering his awe-inspiring new Clarendon edition of the English and Latin versions of Leviathan, Hobbes’s best-known work of political philosophy.

Malcolm starts the interview by summarising his new views about why Hobbes wrote Leviathan (from 2.35 to about 4.50). Malcolm’s analysis, alongside exciting research by historian David Scott, is giving us new ideas about why Hobbes wrote Leviathan, which may in turn cast new light on some of what Hobbes meant.

Malcolm also gives a stimulating account of Hobbes’s views on religion (from 9.00 to 15.15).

Noel Malcolm

Malcolm briefly discusses an important question: should philosophers be historians? Malcolm says no, but gives two reasons why philosophers nonetheless benefit from historical research. First, a philosopher may claim that a particular argument (say, Hobbes’s account of the relationship between liberty and authority) was made for philosophical reasons, when the argument may actually have been intended as a contribution to a local political debate (from 8.10 to 8.35, and from 16.15 to 16.35).

It’s not clear to me, though, that this objection will trouble philosophers. Malcolm would need to show that a scholar can misunderstand Hobbes’s argument if she misreads his intentions. There are places where this is true and places where it is not, and unsurprisingly Malcolm doesn’t go into that kind of detail in the interview.

Second, Malcolm notes that words may not mean what we think they mean, and we may need to place them in their context if we are not to be led astray (from 16.00 to 16.15). One example, which he touches on earlier, is ‘atheist’, which had different meanings in the 17th century to now. That strikes me as a much stronger reason why philosophers should read work by historians.

One issue which is not discussed, unsurprisingly, is whether historians should be philosophers. That is a question I explore in an article I’m currently writing.

Is social science useful? Roundtable at King’s College London, 14 June 2013

I’m co-organising a roundtable on ‘Is Social Science Useful?’ at King’s College London, featuring some prestigious speakers from KCL, UCL, the LSE, Ipsos MORI, and UPenn (the University of Pennsylvania).

Here are the details.

Is Social Science Useful?

King’s Interdisciplinary Social Sciences Doctoral Training Centre (KISS-DTC) Roundtable

June 14, 2013, 4.30pm – 6pm

Room K2.31 King’s Building – followed by drinks at ‘Chapters’, 2nd floor, Strand Building

Social science research is increasingly judged on its ‘usefulness’ and ‘practical relevance’, beyond its intellectual and theoretical contributions. But how useful is social science? Could it be more useful? Are there costs in pursuing usefulness? This roundtable will feature eminent social scientists and practitioners with diverse views about these important issues.

Philip Tetlock is the Leonore Annenberg University Professor in Democracy and Citizenship at the Wharton School, University of Pennsylvania. He has published widely on political psychology, especially on bias and prediction in politics and public policy. He is the author of the award-winning book Expert Political Judgment: How Good Is It? How Can We Know?

Alena Ledeneva is Professor of Politics and Society at UCL. She works on corruption, economic crime, corporate governance and the informal economy in Russia and other postcommunist countries. Her books include How Russia Really Works (2006) and Can Russia Modernise? (2013).

Cheryl Schonhardt-Bailey is Reader in Political Science at the LSE. She works on the interplay between interests, ideas and institutions in legislative politics, trade and monetary policy, and political rhetoric. Her most recent book is Deliberating American Monetary Policy.

Patten Smith is Director of Research Methods at the Research Methods Centre of Ipsos MORI, one of the UK’s largest research companies. He is the author of ‘Survey research: two types of knowledge’, which explores the divide between the kinds of knowledge held by survey experts in research agencies and in academia. He is currently the Chair of the Social Research Association.

Nick Butler is Chair of King’s Policy Institute. Between 2002 and 2006 he was Group Vice-President at BP and has since worked as a Senior Policy Adviser at 10 Downing Street. He is the author of The Future of European Universities: Renaissance or Decay?

To attend, please sign up at the Eventbrite page:

For any questions or queries about the event please contact:

Address & directions:

King’s College London | Strand | London WC2R 2LS

Organised on behalf of the KISS-DTC Regulation cluster themes: ‘Regulation, Governance and Politics’; ‘Work and Organisations’; ‘Markets, Firms and Competitiveness’.


Who knows? Uncertainty in qualitative social science

I’m looking for your help: I need references which discuss the idea of uncertainty in qualitative research. Probably in social science, but maybe history.

Here’s the point I’m trying to make.

When we tackle empirical matters – how many people have HIV, why the dinosaurs went extinct, how democratisation affects economic growth, and so on – we can never know the answers for certain. (I’m not thinking about prediction, by the way, but about description or explanation of things in the past or present.)

In quantitative social science, this idea is standard: it’s central to statistical inference. But I don’t know how much it’s been discussed in relation to qualitative research, aside of course from debates over Bayesian research. I have looked . . . but I haven’t found much.

The place of uncertainty in qualitative research is something I tried to theorise in an article in History and Theory. I argued that when we study historical texts, we often ask empirical questions, such as why Machiavelli wrote what he wrote, or what Mill meant by ‘harm’. We can’t know the answers for certain, but often we should indicate how confident we are in our findings. This reminds us that we are not telling our readers what happened: we are telling them how strong we think our evidence is.

Reporting uncertainty in qualitative research is thus subjective, whereas in quantitative research it is objective (at least where the indication of uncertainty is given by statistical significance).

But can anyone tell me who has written about uncertainty in qualitative research, whether in social science or history?


My ideas about this issue have been greatly influenced by Gary King, Robert Keohane and Sidney Verba’s Designing Social Inquiry – see especially pp. 7-8 and 31-2 of chapter 1. Unusually, they depict uncertainty as a core feature of science. This is a crucial idea. It took me years to grasp what they were getting at, but I now agree.

However, King, Keohane and Verba actually say very little about what uncertainty involves in qualitative research, as Larry Bartels notes. This is surprising, given that their book is meant to be precisely about what quantitative researchers can teach qualitative ones. When I wrote my article, I had to do much of the thinking for myself (helped by Collingwood, by Keynes, and of course by many actual examples of good and bad practice in substantive research).

I’m now interested in writing a paper about uncertainty in qualitative social science. Of course, the idea is widespread: for example, it’s implicit in any discussion of triangulation. But do you know of people who have theorised the idea and/or discussed its place in qualitative research? (Again, aside from Bayesians.) Can anyone point me to some references? I’d be very grateful – thanks!

Is replication just for scientists? Part 1: thought experiments in philosophy

Natural scientists are big on replication. When one lab reports an important finding, other labs try to replicate it. If they can’t, as with Fleischmann and Pons on cold fusion, you have a problem.

Social scientists are getting bigger on replication. Leading social science journals now require authors to upload empirical datasets. But in practice, replication is rare, as Andrew Gelman notes. Replication is still not widely expected – and besides, there are far fewer social scientists than natural scientists.

What about political philosophy and the history of political thought? I’m not sure how much replication has been discussed in these areas, except in empirical areas such as experimental philosophy. (Let me know if you have references about replication in other areas of philosophy! I’m not thinking about such things as checking someone’s logic, of course.)

This strikes me as an important issue. Indeed, much of my work – and much of this blog – is about showing the intellectual links between philosophy, history and social science.

This post will thus address replication in philosophy. Part 2 will cover history of political thought.

A common tool in political philosophy is thought experiments. Is it worse to kill someone than to let someone die? This is a hugely important moral question. It is also a hugely complex moral question. How do we approach such difficult problems?

Nietzsche: is it worse to kill someone or to make them stronger?


One method is thought experiments. Is it worse to intentionally drown a child to get its inheritance than to fail to help the same child if it slips in a bath and starts to drown? This is Frances Kamm’s example – see p. 18 of Morality, Mortality, volume 2. And there are many similar examples: Nozick’s experience machine, the ticking time-bomb scenario, Jim and the Indians, trolley problems, and so on.

My suggestion is that philosophical thought experiments are actually easier to replicate than almost any natural or social science research, provided the thought experiment is outlined clearly enough. You simply think through the experiment as described by the author, and see what your intuition/answer is.

Sometimes you reach the same conclusion. Sometimes you don’t, as with Frances Kamm’s retort to Peter Unger – see p. 13 of Morality, Mortality, volume 2.

Sometimes you question whether your intuitions are reliable. This may be because you reject the nature of many of these thought experiments, as with Robert Goodin – see pp. 8-9 of Political Theory and Public Policy. Or it may be because you think your own intuitions have been primed by previous thought experiments, as Mike Otsuka discusses – see pp. 109-10 of his 2008 paper in the journal Utilitas.

Sometimes you re-run the thought experiment with a different model, e.g. different order or different frames. Otsuka does this in the paper mentioned just above.

And sometimes – perhaps most importantly – you re-run the thought experiment with different variables. For example, if one adds uncertainty to the ticking time-bomb scenario, even many people who initially advocated torture become less willing to do so. I use the ticking time-bomb scenario with first-year politics students as a way of getting them to think about thought experiments in terms of variables.

Replication with different variables is also important in the social sciences. My favourite example is Daniel Treisman’s 2007 paper in the Annual Review of Political Science. He attempts to replicate many well-known cross-national analyses of corruption, and finds that small and reasonable changes to the independent variables often alter the results (see pp. 222 onwards). This challenges the reliability of the data and the models. It’s strikingly similar to the way that Otsuka questions our intuitions and alters the model (see above).

Replication matters, and it’s pleasingly common in philosophy, at least in relation to thought experiments.

Was Shakespeare a schoolteacher? How sloppy are some journalists?

Several people have been claiming that Shakespeare spent a few years working as a schoolteacher in Titchfield, a village in Hampshire. The claims have some plausibility and may be right. But I’m interested in how sloppily the BBC reported the story. The BBC makes it sound like a definite finding. Surprisingly, the Daily Mail newspaper is more even-handed, as we’ll see. And the claims about Shakespeare make some interesting intellectual errors in their own right.



What is it like to be Leo Strauss?

Last year, I published a critique of Leo Strauss. Strauss was an important and influential thinker who is controversial in two ways. He’s a conservative, and may have influenced many neoconservatives in the Reagan and Bush administrations. I don’t care about that. What I do care about are his historical interpretations, especially his claims that writers like Plato and Machiavelli hid secret messages in their texts using odd techniques which Strauss often seems to have been the first to spot. I have no problem with the idea that some people have written esoterically, but I do doubt the particular claims that Strauss makes.

Near the end of my paper, I wrote a little satire, mimicking Strauss’s approach and parodying his style to ‘prove’ that Thomas Hobbes hid secret messages about the music of Beethoven – even though Hobbes died 91 years before Beethoven was born.

While writing the satire, though, I suddenly saw what it might have been like to be Leo Strauss. I had been finding lots of astonishing parallels between Hobbes’s writings and Beethoven’s music – it was starting to get freaky. And suddenly, a thought started to flash into my head: ‘Is it possible that Hobbes was actually writing about Beethoven?’ I didn’t even finish this thought: of course, Hobbes could not have been writing about Beethoven. But that moment showed me how easy it is to read too much into a mere coincidence.

And this is where Strauss goes wrong. There is a natural human bias to look for evidence which fits one’s ideas, or to interpret things to support one’s ideas. Psychologists call this confirmation bias. If you think you don’t suffer from this … well, I’m very happy for you, but you’re probably not going to be the next Sherlock Holmes. Scientific methods arose in part to counteract biases such as confirmation bias. Scientists shouldn’t just look for evidence which fits their theories: they should question their evidence, test their theories, compare different explanations, and so on. If he had applied such principles, Strauss would not have made many of the claims he made.

What is it like to be Leo Strauss? I can’t say for sure, but for one brief moment, I might just have known.