A stint at the University of Arizona

I’m spending a few weeks at the University of Arizona, courtesy of the Center for the Philosophy of Freedom. I’m working on my thought experiments project before presenting a paper on caricatures of science at a conference in Tucson in early December. I’m loving it here. The sky is blue; the weather is gorgeous; the air is clean. And there are fascinating people around. I’ve had great conversations with philosophers, experimental economists, and others.

A surprisingly positive review of a Straussian book on Hobbes

Readers who know my aversion to Leo Strauss (see here) may be surprised by my positive review of Devin Stauffer’s new book on Hobbes, in Notre Dame Philosophical Reviews (link).


Stauffer, an Associate Professor at UT Austin, argues that Hobbes was trying to subvert his readers’ religious attachments – but not by saying so directly. Rather, the argument is esoteric: Hobbes’s real views can only be grasped if we read between the lines. For example, some of Hobbes’s ‘defences’ of religious views were so bad that they would subtly draw attention to the opposite view.

I’m not convinced, and my review raises five challenges to Stauffer’s interpretation. Still, I don’t dismiss Stauffer’s book: his interpretation is definitely plausible. Indeed, it’s the best Straussian interpretation I’ve seen – way better than anything Strauss wrote.

Underpinning my critique is the need to interpret texts ‘scientifically’: comparing alternative interpretations, looking at what fits and does not fit one’s interpretation, standing outside one’s interpretation and asking what it would take for it to be right, and so on. I discuss those ideas elsewhere on my blog, in relation to my paper ‘History of Political Thought as Detective-Work’, originally called ‘History of Political Thought as a Social Science’, here, and in exploring the place of uncertainty in history of political thought, here. I’m actually most explicit about the scientific nature of textual interpretation in a chapter I wrote called ‘The Irrelevance of (Straussian) Hermeneutics’. Please email me if you want a copy, at Adrian.Blau -[at]- kcl.ac.uk.

 

A five-week US tour

I’m in the US for five weeks, giving the following papers:

Columbia, Wed Oct 3 – ‘How (not) to use history of political thought for contemporary purposes’.
Stanford, Fri Oct 12 – ‘The logic of inference of thought experiments in political philosophy’.
Berkeley, Tue Oct 16 – ‘Hobbes’s failed political science’.
Association for Political Theory conference, Bryn Mawr/Haverford, Sat Oct 20 – ‘Post-truth politics and the rise of bullshit’.
Arizona, Thu Oct 25 – ‘The logic of inference of thought experiments in political philosophy’. (I’m also teaching a class on ‘Corruption and conceptual analysis’ on Mon Oct 22.)
University of Texas at Austin, Fri Nov 2 – ‘The logic of inference of thought experiments in political philosophy’.

Thought experiments: scientific parallels

I’ll be giving a controversial paper at two conferences: the American Political Science Association (Sept 1-4, in Philadelphia), and the European Consortium for Political Research (Sept 7-10, in Prague).

My paper draws parallels between thought experiments in political theory and philosophy, and controlled experiments/comparisons in the natural and social sciences. Some of these parallels have been noticed before, by people like Frances Kamm, Tamar Gendler, and (in the book on political theory methods that I’m editing) Kimberley Brownlee and Zofia Stemplowska. But no one I’m aware of has taken advantage of the powerful toolkit that social and natural scientists have developed. I thus use ideas like internal and external validity, controlled comparison, omitted variable bias, interaction effects, spurious correlations, testable implications, and parsimony.

This helps us see better how to do thought experiments, and how much we can learn from them.
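To make one of these borrowed ideas concrete, here is a minimal sketch – my own illustration in Python, not anything from the paper – of omitted variable bias. The thought-experiment analogue is failing to hold fixed a morally relevant factor that co-varies with the factor we think we are testing.

```python
# Toy illustration of omitted variable bias (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

z = rng.normal(size=n)                       # a background factor we forgot about
x = 0.8 * z + rng.normal(size=n)             # the factor we think we are testing
y = 1.0 * x + 2.0 * z + rng.normal(size=n)   # the outcome (true effect of x is 1.0)

# Controlled comparison: including z recovers the true effect of x (about 1.0).
beta_full = np.linalg.lstsq(np.column_stack([np.ones(n), x, z]), y, rcond=None)[0]

# Omitting z biases the estimated effect of x upwards (about 2.0).
beta_omit = np.linalg.lstsq(np.column_stack([np.ones(n), x]), y, rcond=None)[0]

print(f"effect of x, controlling for z: {beta_full[1]:.2f}")
print(f"effect of x, omitting z:        {beta_omit[1]:.2f}")
```

The parallel: if two scenarios in a thought experiment differ in more than the one feature of interest, our differing intuitions may be tracking the omitted feature, not the one we set out to test.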

Of course, some readers will be more interested in my broader claims about the relationship between political theory and science. But note that I don’t equate the two: there are parallels, but also important differences. By contrast, I do argue elsewhere that some textual interpretation is essentially scientific: we often ask empirical questions (like what Locke meant by ‘rights’ or why he wrote what he wrote), and scientific ideas are the best tools we have yet developed for answering such questions. (See here for the most explicit version of the argument, and here for the most detailed account of what a scientific approach to textual interpretation involves.)

Such empirical work isn’t really what’s going on in political theory thought experiments – which are, furthermore, only one part of political theory, and a part that many authors don’t use. Nonetheless, the parallels cast some light on what some philosophers of science mean when they discuss ‘naturalism’, defined here as philosophy and science being ‘continuous’.

Although I’ve been thinking about and teaching some of these ideas for many years, my paper was written quite quickly, and needs more work. In particular, I cannot yet say how widespread the problems I discuss are.

The paper is here. Any comments and criticisms would be much appreciated!

CSI Cambridge: history of political thought as detective-work

UPDATE: This article has now been published, in History of European Ideas 41:8 (2015), pp. 1178-94.

My paper ‘History of Political Thought as Detective-Work’ has now been accepted by History of European Ideas. The paper uses a detective analogy (following Collingwood and others) to give practical principles for textual interpreters on how to draw plausible inferences from incomplete, ambiguous evidence about what authors meant and why they wrote what they wrote.

I used a different analogy in the versions of this paper I gave at York, Reading, Durham, KCL and Kent in 2010-2012, but that analogy was too controversial to get published, and I only make it explicit in a forthcoming chapter in Winfried Schröder, ed., Reading Between The Lines (de Gruyter, forthcoming). But those who read between the lines of the current paper will see what I’m really arguing. For what it’s worth, the different analogy was also present in the original version of my ‘Anti-Strauss’ article, but the referees rightly made me take it out. Still, it’s there implicitly. My critique of Strauss has always been a vehicle for far more important ideas.

Here is the abstract of my History of European Ideas paper:

This paper offers practical guidance for empirical interpretation in the history of political thought, especially uncovering what authors meant and why they wrote what they wrote. I thus seek to fill a small but significant hole in our rather abstract methodological literature. To counter this abstraction, I draw not only on methodological theorising but also on actual practice – and on detective-work, a fruitful analogy. The detective analogy seeks to capture the intuition that we can potentially find right answers but must handle fragmentary evidence that different people can plausibly read in different ways. Placing the focus on evidence, and on combining different types of evidence, suggests that orthodox categories like ‘contextualist’ and ‘Marxist’ too often accentuate differences between scholars. This paper instead highlights core principles that unite us – ideas that underpin good textual interpretation across all ‘schools of thought’.

Help needed: Darwin on confirmation bias

Apparently, Charles Darwin said that when he heard something that did not fit his theory, he wrote it down; otherwise he tended to forget it.

Can anyone give me a reference for this?

Thanks!

UPDATE: Thanks to David Schweiger (via Steven Hamblin’s blog), we have the answer. It’s from Darwin’s Autobiography (p. 123):

I had also, during many years, followed a golden rule, namely, that whenever a published fact, a new observation or thought came across me, which was opposed to my general results, to make a memorandum of it without fail and at once; for I had found by experience that such facts and thoughts were far more apt to escape from the memory than favourable ones. Owing to this habit, very few objections were raised against my views which I had not at least noticed and attempted to answer.


How to do history of political thought

Here is my draft chapter on how to interpret texts, for a book on methods in political theory that I’m editing for Cambridge University Press.

I’m keen for comments – however critical! The only problem is that I need comments by August 1st if possible, as I’m submitting the book manuscript on September 1st. Sorry for the crazy deadline.

I’m particularly keen to hear from current graduate students (MA or PhD), or advanced undergraduates, as that is who the chapter is aimed at.

Even if you’ve never met me, I’d love your criticisms and suggestions! Please download the article and email me at Adrian.Blau [at] kcl.ac.uk – thanks!

Nat Blau (1928-2010)


Joseph Norman ‘Nat’ Blau

My dad died five years ago today. He was a brilliant doctor, empathising closely with his patients and making thousands of lives better. He was a neurologist specialising in headache and migraine, and co-founded the City of London Migraine Clinic, which gave free consultations to all migraine sufferers. In 1962 he beat Roger Bannister to the post of consultant neurologist at the National Hospital for Nervous Diseases, at Queen Square in London. He used to joke that it was the only time anyone ran faster than Roger Bannister.


Queen Square consultants, Oct 1974. Dad is second from left in the middle row.


My dad teaching my brother about migraine

He published over 100 papers in scientific journals, not only on migraine but also on such things as ponytail headache (from tying ponytails too tightly) and sleep-lack headache. He edited a respected textbook on migraine, and his Headache and Migraine Handbook (1986) was written in a straightforward style for ordinary people. He was a superbly clear and concise writer: many of the tips I pass on to my own students came from him.


Mum and Dad’s wedding

He was married to my mother, Jill, for 41 years. He was a caring father to me, my brother Justin and my sister Rosie. He put a lot of emphasis on our education, and inculcated a questioning attitude in us. One of his sayings I still quote is: “If a theory explains all the facts, the theory must be wrong, because some of the facts are wrong.” He disliked the phrase “we now know”, because in his view, some of what we “know” is actually incorrect. His papers would sometimes mention what we don’t know or what his hypothesis could not explain – partly out of honesty, partly as a spur to further research.

He used to refer to “Blau’s Law of 10%”, which was his rule of thumb that only 10% of people have “got it”. He would sometimes follow this up with another comment: “If you’ve got it, you’ve got it. If you haven’t got it, you’ve had it!”


With Princess Diana

His preferred version of IQ was the “Insight Quotient”; no one could get to 100% on this scale. He also invented an SQ – a “Sleep Quotient” – referring to the number of people in the audience who were asleep in talks and lectures. His own SQ was almost always zero, I suspect – he was an exceptionally engaging lecturer. He would never get stuck behind the lectern or talk at an audience.


Maida Vale Hospital staff, Nov 1968. Dad is in the middle of the front row.

He was by all accounts a superb teacher. He taught until he was 80, when his cancer began seriously to affect his mobility. His students had great admiration and affection for him. He disliked the way that medical students were expected to soak up knowledge without also developing critical faculties.

He was very funny. If we complained of some pain or ache, his answer was always “Talking too much”. In my case, this was usually true.


J.N. Blau, at 75

Around the age of 75, he started work on a book called Wrong Ideas and No Ideas in Medicine, which he never finished. He had always been fascinated by wrong ideas which held back progress. He published a searing critique of the neurologist Harold Wolff (Cephalalgia 24:3, 2004) which attacked him for “a high degree of obsession, a desire to be on top and to win, and from an intellectual point of view, his dogmatism and ultra-focus on the vascular theory of migraine …. Wolff retarded progress in the understanding of migraine by at least 30 years”. That sums up several things Dad regarded as key sins. He used to say “Listen to the patient: he is telling you the diagnosis.” Wolff’s descriptions of migraine clashed with what Dad heard from the vast majority of his own patients. The one thing worse than a theory which explained all of the evidence was a theory which didn’t even match much of the evidence in the first place!


A black-tie event at Queen Square, Feb 1965

In his day, most medical students came from wealthy backgrounds. He had next to nothing, and if he hadn’t worked hard at school he couldn’t have won the scholarship that allowed him to study medicine. I’m incredibly proud of what he achieved.

I think of him most when I’m very sad or very happy. When I’m sad I wish he was here, and when I’m happy I want to share good things with him. When I won a teaching award in 2013, I had a brief moment of joy and then started crying, because he wasn’t around to hear about it. My brother won a teaching award in the same year, and I think my dad would have been prouder of these two prizes than of the promotions my brother and I also got that year. Teaching mattered more to my dad than academic standing. When I asked why he wasn’t a Professor, he said: “I don’t profess to know anything.”

I love and miss him very much.

Gadamer’s God-awful account of science

I’ve just finished my chapter for the book of the Reading Between The Lines conference. My chapter includes a critique of Gadamer’s account of science, in his book Truth and Method and elsewhere.

I argue that Gadamer makes deeply misleading claims about what science involves, and does not reference a single practising natural or social scientist; as far as I can tell, the most recent actual scientist Gadamer cites was writing 98 years before the publication of Truth and Method. Oh, and Gadamer misquotes this scientist and treats him as far more naive than he was.

But many commentators simply repeat Gadamer’s caricatures or pass over them in silence. This might actually be more troubling than Gadamer’s naughty scholarship.

My question is: can anyone point me to a good critique of Gadamer’s account of science? So far, I’ve only found five people who criticise any aspect of his account of science:

  • pp. 226 and 236 of Dieter Misgeld’s article in the journal Philosophy of the Social Sciences, from 1979;
  • pp. 168-9 of Richard Bernstein’s Beyond Objectivism and Relativism (1983);
  • the opening chapter of Joel Weinsheimer’s Gadamer’s Hermeneutics (1985) – this is the most powerful critique but still leaves Gadamer largely unscathed;
  • p. 4 of Georgia Warnke’s Gadamer: Hermeneutics, Tradition and Reason (1987); and
  • p. 158 of Robert D’Amico’s book Contemporary Continental Philosophy (1999).

If you can point me to any other references – preferably in English! – I’d be most grateful. Thanks!

Is replication just for scientists? Part 2: interpreting texts

Part 1 argued that replicability, an important facet of scientific research, is also found in philosophical thought experiments. Indeed, philosophical thought experiments are easier to replicate than most natural or social science research.

Here, in Part 2, I apply this idea to interpreting texts, whether in the history of political thought, in philosophy, or anywhere else.


My key claim is that when we make an empirical claim about a text – for example, what an author meant by a word or phrase – we should provide our evidence, so that other interpreters can replicate our reading to see if they agree or not. In other words, we should give precise references (e.g. page numbers) so that other people can find the passage, read it for themselves, and see if they share our interpretation.

Aside from replicability, there are two more self-interested reasons to give precise references. First, it forces us to try to be careful: I can think of several occasions where, on looking for the page number, I found I had misread or misremembered an argument. Second, it shows our readers that we have tried to be careful. I’m more likely to trust an interpretation if I think the author has been careful with her evidence, although there are exceptions in both directions, of course.

Unfortunately, sometimes we cannot give precise references, because we have not read the source we are citing, or have not read it closely or recently enough. We don’t always give precise references in informal contexts (e.g. on blogs!), but where possible we should do so in published academic writing. One reason we don’t is the bad academic convention of giving precise references for direct quotations but not necessarily for ideas cited without quotation. I believe we should give precise references in both situations.

To change the convention, journal editors and publishers should make us give precise references where we can. I remember the editor of a leading political theory journal who considered forcing people to give page numbers in order to get away from slapdash references to “Rawls 1971” and the like. I note with great pleasure that the American Political Science Review now requires authors to give ‘precise page references to any published material cited’. My only caveat is for cases where page numbers are not helpful: for example, there are so many different editions of Rousseau’s Social Contract that chapter numbers are probably more helpful there.

But the basic principle stands: ideally, other people should be able to replicate what we have done to see if they agree with our claims. This principle is as important in textual interpretation as it is in the natural sciences.

Leo Strauss conference, Marburg, July 19-20

I’m giving a paper at a conference on Leo Strauss, on July 19-20. The conference, in Marburg, is called ‘Reading Between The Lines: Leo Strauss and the History of Early Modern Philosophy’. Also speaking are Jonathan Israel, Gianni Paganini, Al Martinich and Edwin Curley, amongst others.

My paper is called ‘The Irrelevance of (Straussian) Hermeneutics’. I don’t normally like titles with parentheses, but I reject the idea of a ‘Straussian hermeneutic’ partly because I reject the usefulness of the classic hermeneutic texts – Schleiermacher, Gadamer, and so on. Indeed, my claims about the irrelevance of a ‘Straussian hermeneutic’ (see also this critique of mine) are less important than my comments on the irrelevance of hermeneutics more generally. I reckon we can get far more useful guidance elsewhere on how to interpret texts. People who’ve been following the blog should have an idea of where I think we should look!

KKV’s strategic error in Designing Social Inquiry

In 1994, Gary King, Robert Keohane and Sidney Verba (‘KKV’) published their seminal book Designing Social Inquiry. It was very controversial, perhaps intentionally so, because of the claim that

our main concern in this book is making qualitative research more scientific (p. 18).

This led to a backlash from many qualitative political scientists.

I believe that the substance of KKV’s book points to a different and less controversial argument. They start to make this argument at the very bottom of page 4:

All good research can be understood – indeed, is best understood – to derive from the same underlying logic of inference. Both quantitative and qualitative research can be systematic and scientific.

But they then move on to a less relevant issue: historical research. That’s not really the point.

This is what I believe they should have said next:

All quantitative and qualitative researchers fall short of the ideal to greater or lesser extents. It happens that the logic of social-science inference is often more developed in quantitative research, but this book will use examples of good and bad practice from both qualitative and quantitative research.

This is consistent with the book’s content; it would just have required some different examples.

This message is less controversial – and perhaps the book would have been less widely read as a result. But people might have paid more attention to some ideas which have, alas, generated less debate. For example, I think that more weight should be placed on KKV’s very important ideas about uncertainty, which have greatly influenced me (see this blog post and this article of mine) and which I see as fundamental to all empirical research – even empirical research which does not see itself as social-scientific (see this blog post and this article of mine).

Important caveat: the suggestion I have made about what KKV should have said is still controversial: not everyone thinks that there is a unified logic of inference in social science! I’m just saying that if that is KKV’s view, they may have been better off framing the idea differently.

Is Derrida full of bullshit? Part 2

Part 1 outlined two notions of bullshit: Harry Frankfurt’s notion of bullshit as phoniness or indifference to truth, and Jerry Cohen’s notion of bullshit as unclarifiable clarity.

We saw too that Cohen claimed – very naughtily, without references – that there is a lot of bullshit in Derrida. Such sentiments are quite widespread.

I’m only going to look at one passage by Derrida which has been called bullshit by Brian Leiter, a prominent philosopher who is bitingly critical of Derrida on his excellent blog, Leiter Reports. Leiter has a deliciously acerbic approach to ‘frauds and intellectual voyeurs who dabble in a lot of stuff they plainly don’t understand’. Leiter is a Nietzsche expert who reserves special vitriol for Derrida’s ‘preposterously stupid writings on Nietzsche’, the way Derrida ‘misreads the texts, in careless and often intentionally flippant ways, inventing meanings, lifting passages out of context, misunderstanding philosophical arguments, and on and on’.

I’ll focus solely on Leiter’s 2003 blog entry, ‘Derrida and Bullshit’, which attacks the ‘ridiculousness’ of Derrida’s comments on 9/11. This came from an interview with Derrida in October 2001. Here is an abbreviated version; you can see the full thing on p. 85 onwards of this book.

… this act of naming: a date and nothing more. … [T]he index pointing toward this date, the bare act, the minimal deictic, the minimalist aim of this dating, also marks something else. Namely, the fact that we perhaps have no concept and no meaning available to us to name in any other way this ‘thing’ that has just happened … But this very thing … remains ineffable, like an intuition without concept, like a unicity with no generality on the horizon or with no horizon at all, out of range for a language that admits its powerlessness and so is reduced to pronouncing mechanically a date, repeating it endlessly, as a kind of ritual incantation, a conjuring poem, a journalistic litany or rhetorical refrain that admits to not knowing what it’s talking about.


9/11 turned the world upside down.
Or at least 45 degrees to the side.

So, is this bullshit, on the Frankfurt and/or the Cohen notions of bullshit? I would say no. I take Derrida to be saying the following.

We often repeat the name ‘9/11’ without thinking much about it. But the words we use can be very revealing. Why do we try to reduce this complex event to such a simple term? Because the event is so complex we cannot capture it properly. Precisely by talking about it in such a simple way, we admit that we don’t really understand it.

If I have understood Derrida – tell me if I haven’t – this explanation is surely wrong. I’d guess that in most cases we call such events by a name, usually a place or a thing. For example:

  • Pearl Harbor, the Somme, Gallipoli, the Korean War
  • the Great Fire of London, Hurricane Katrina
  • Watergate, the execution of Charles I, the storming of the Bastille
  • Chernobyl, Bhopal, Exxon Valdez

My guess is that we are most likely to use a date where we cannot restrict an event to a place or name:

  • Arab Spring
  • (May) 1968 riots
  • the 1960s
  • Black Tuesday, Black Wednesday

But my guess is that such names are rarer: places or things are usually more identifiable.

So, why was 9/11 called ‘9/11’, ‘September the 11th’? My guess is that it would usually have been called ‘the attack on the Twin Towers’ except for the fact that there were two other locations: an attack on the Pentagon, and a plane that crashed in Pennsylvania. I’m also guessing that ‘9/11’ had a ring to it because of the shop ‘7/11’. If the attack had happened in just one location on February 9th, we’d simply refer to the place.

I might be wrong. Other explanations will be gratefully received. But if I’m right, it suggests that Derrida’s explanation is a bit pompous, and probably wrong, but it is not Frankfurt-bullshit, because it is not attempting to deceive anyone, and it is not Cohen-bullshit, because it is not unclarifiably unclear.

There’s a deeper point here, about method. Philosophers and literary theorists often ask questions which are essentially empirical. Derrida’s question is empirical: what explains the name ‘9/11’? To answer empirical questions, it is best to use a scientific approach – for example, looking at more than just one possible explanation. In the fortnight that BlauBlog has been active, this is a point I’ve already made a hundred and fifty times.

Derrida, however, does not think like a social scientist. As a result, his explanation only seems plausible because he has not considered the alternatives.

In short, what Derrida said is crap, but not bullshit.

Is social science useful? Roundtable at King’s College London, 14 June 2013

I’m co-organising a roundtable on ‘Is Social Science Useful?’ at King’s College London, featuring some prestigious speakers from KCL, UCL, the LSE, Ipsos MORI, and UPenn (the University of Pennsylvania).

Here are the details.

Is Social Science Useful?

King’s Interdisciplinary Social Sciences Doctoral Training Centre (KISS-DTC) Roundtable

June 14, 2013, 4.30pm – 6pm

Room K2.31, King’s Building – followed by drinks at ‘Chapters’, 2nd floor, Strand Building

Social science research is increasingly judged on its ‘usefulness’ and ‘practical relevance’, beyond its intellectual and theoretical contributions. But how useful is social science? Could it be more useful? Are there costs in pursuing usefulness? This roundtable will feature eminent social scientists and practitioners with diverse views about these important issues.

Philip Tetlock is the Leonore Annenberg University Professor in Democracy and Citizenship at the Wharton School, University of Pennsylvania. He has published widely on political psychology, especially on bias and prediction in politics and public policy. He is the author of the award-winning book Expert Political Judgment: How Good Is It? How Can We Know?

Alena Ledeneva is Professor of Politics and Society at UCL. She works on corruption, economic crime, corporate governance and the informal economy in Russia and other postcommunist countries. Her books include How Russia Really Works (2006) and Can Russia Modernise? (2013).

Cheryl Schonhardt-Bailey is Reader in Political Science at the LSE. She works on the interplay between interests, ideas and institutions in legislative politics, trade and monetary policy, and political rhetoric. Her most recent book is Deliberating American Monetary Policy.

Patten Smith is Director of Research Methods at the Research Methods Centre of Ipsos MORI, one of the UK’s largest research companies. He is the author of ‘Survey research: two types of knowledge’, which explores the divide between the kinds of knowledge held by survey experts in research agencies and in academia. He is currently the Chair of the Social Research Association.

Nick Butler is Chair of King’s Policy Institute. Between 2002 and 2006 he was Group Vice-President at BP and has since worked as a Senior Policy Adviser at 10 Downing Street. He is the author of The Future of European Universities: Renaissance or Decay?

To attend, please sign up at the Eventbrite page: socialscienceroundtable.eventbrite.co.uk

For any questions or queries about the event please contact: Adrian.Blau@kcl.ac.uk

Address & directions:

King’s College London | Strand | London WC2R 2LS

Organised on behalf of the KISS-DTC Regulation cluster themes: ‘Regulation, Governance and Politics’; ‘Work and Organisations’; ‘Markets, Firms and Competitiveness’.

 

Is Derrida full of bullshit? Part 1

Why is there so much bullshit in politics? Does a particular kind of bullshit flourish in French philosophy?

These are questions which have excited lots of academics in recent years, partly because they are fascinating and important questions – but mainly because it allows us to swear in public.

Academics discuss two key ideas of bullshit. (I’m working on a third, but it’s not ready yet.) The first and most famous comes from Harry Frankfurt’s essay On Bullshit. The essence of bullshit, for Frankfurt, ‘is not that it is false but that it is phony.’ The bullshitter may or may not deceive us, or intend to deceive us, about the alleged facts. ‘What he does necessarily attempt to deceive us about is his enterprise.’ In short, the essence of Frankfurt-bullshit is phoniness, indifference to truth.

Frankfurt’s essay is great fun, but quite frustrating, not least because of the woeful lack of useful examples. The example Frankfurt discusses at greatest length, a comment by Wittgenstein, is not obviously bullshit, even for Frankfurt. He could have mentioned politicians who evade questions. ‘That’s not the real issue, the real issue is why my opponents are doing such-and-such.’

Apparently Jim Harbaugh, coach of the San Francisco 49ers, often bullshits like this. For example, when asked whether two players would be fit for a game, and wanting to keep his opponents guessing, he replied ‘I know what you just asked, but I was so mesmerized and dazzled by your voice right there. You have got a great voice. I lost my train of thought.’

I used to bullshit when I started teaching and didn’t want to admit that I hadn’t understood a question. ‘It’s interesting you should ask that, because Aristotle says something similar …’, I might say. Then I could talk for a minute in the vain hope that my students would not spot my phoniness and my inability to answer their question.


The second idea of bullshit comes from Jerry Cohen, whose essay ‘Deeper Into Bullshit’ defines bullshit as ‘unclarifiable unclarity’. Whereas Frankfurt-bullshit focuses on the mental state of the bullshitter, Cohen-bullshit focuses on the bullshitter’s output. Someone may be entirely sincere in what she says, but may still come out with something which is unclear and cannot be made clear.

Staggeringly, even Cohen doesn’t give useful examples. His only specific example, by Etienne Balibar, is probably not bullshit: it actually does make some sense, as Frankfurt himself points out in his response to Cohen in the book Contours of Agency.

Worse, Cohen feels free to make airy accusations about bullshit flourishing in French philosophy: ‘what I have read of Jacques Derrida, Gilles Deleuze, Jacques Lacan, and Julia Kristeva leads me to think that there is a great deal of bullshit in their work’. Yet Cohen gives no references. Perhaps he was intending to do so before his untimely death. If not, I’m afraid we should not hesitate to describe Cohen’s comment as lazy and unscholarly. If he has read enough of these writers to see ‘a great deal of bullshit in their work’, then he should give us some examples. A claim as important and critical as this needs to be backed up.

Given Frankfurt’s and Cohen’s notions of bullshit, is Derrida full of bullshit? I will answer this in Part 2.

Who knows? Uncertainty in qualitative social science

I’m looking for your help: I need references which discuss the idea of uncertainty in qualitative research. Probably in social science, but maybe history.

Here’s the point I’m trying to make.

When we tackle empirical matters – how many people have HIV, why the dinosaurs went extinct, how democratisation affects economic growth, and so on – we can never know the answers for certain. (I’m not thinking about prediction, by the way, but about description or explanation of things in the past or present.)

In quantitative social science, this idea is standard: it’s central to statistical inference. But I don’t know how much it’s been discussed in relation to qualitative research, aside of course from debates over Bayesian research. I have looked … but I haven’t found much.

The place of uncertainty in qualitative research is something I tried to theorise in an article in History and Theory. I argued that when we study historical texts, we often ask empirical questions, such as why Machiavelli wrote what he wrote, or what Mill meant by ‘harm’. We can’t know the answers for certain, but often we should indicate how confident we are in our findings. This reminds us that we are not telling our readers what happened: we are telling them how strong we think our evidence is.

Reporting uncertainty in qualitative research is thus subjective, whereas in quantitative research it is objective (at least where uncertainty is reported through formal measures like statistical significance).
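To see the contrast concretely, here is a minimal sketch – my illustration, not from the article – of the objective, formulaic way quantitative research reports uncertainty:

```python
# Objective uncertainty reporting: same data in, same interval out.
import numpy as np

rng = np.random.default_rng(42)
sample = rng.normal(loc=50.0, scale=10.0, size=200)   # e.g. a survey measure

mean = sample.mean()
se = sample.std(ddof=1) / np.sqrt(len(sample))        # standard error of the mean

# Conventional 95% confidence interval: mean plus or minus 1.96 standard errors.
print(f"estimate: {mean:.2f}, 95% CI: [{mean - 1.96*se:.2f}, {mean + 1.96*se:.2f}]")
```

There is no analogous formula for how confident we should be that, say, Machiavelli meant X: the qualitative interpreter must report uncertainty in words and judgment, which is the subjectivity I have in mind.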

But can anyone tell me who has written about uncertainty in qualitative research, whether in social science or history?


My ideas about this issue have been greatly influenced by Gary King, Robert Keohane and Sidney Verba’s Designing Social Inquiry – see especially pp. 7-8 and 31-2 of chapter 1. Unusually, they depict uncertainty as a core feature of science. This is a crucial idea. It took me years to grasp what they were getting at, but I now agree.

However, King, Keohane and Verba actually say very little about what uncertainty involves in qualitative research, as Larry Bartels notes. This is surprising, given that their book is meant to be precisely about what quantitative researchers can teach qualitative ones. When I wrote my article, I had to do much of the thinking for myself (helped by Collingwood, by Keynes, and of course by many actual examples of good and bad practice in substantive research).

I’m now interested in writing a paper about uncertainty in qualitative social science. Of course, the idea is widespread: for example, it’s implicit in any discussion of triangulation. But do you know of people who have theorised the idea and/or discussed its place in qualitative research? (Again, aside from Bayesians.) Can anyone point me to some references? I’d be very grateful – thanks!

Is replication just for scientists? Part 1: thought experiments in philosophy

Natural scientists are big on replication. When one lab reports an important finding, other labs try to replicate it. If they can’t, as with Fleischmann and Pons on cold fusion, you have a problem.

Social scientists are getting bigger on replication. Leading social science journals now require authors to upload empirical datasets. But in practice, replication is rare, as Andrew Gelman notes. Replication is still not widely expected – and besides, there are far fewer social scientists than natural scientists.

What about political philosophy and history of political thought? I’m not sure how much replication has been discussed in these areas, except in empirical areas such as experimental philosophy. (Let me know if you have references about replication in other areas of philosophy! I’m not thinking about such things as checking someone’s logic, of course.)

This strikes me as an important issue. Indeed, much of my work – and much of this blog – is about showing the intellectual links between philosophy, history and social science.

This post will thus address replication in philosophy. Part 2 will cover history of political thought.

A common tool in political philosophy is thought experiments. Is it worse to kill someone than to let someone die? This is a hugely important moral question. It is also a hugely complex moral question. How do we approach such difficult problems?


Nietzsche: is it worse to kill someone or to make them stronger?

One method used is thought experiments. Is it worse to intentionally drown a child if one wants to get its inheritance than to fail to help the same child if it slips in a bath and starts to drown? This is Frances Kamm’s example – see p. 18 of Morality, Mortality, volume 2. And there are many similar examples: Nozick’s experience machine, the ticking time-bomb scenario, Jim and the Indians, trolley problems, and so on.

My suggestion is that philosophical thought experiments are actually easier to replicate than almost any natural or social science research, provided the thought experiment is outlined clearly enough. You simply think through the experiment as described by the author, and see what your intuition/answer is.

Sometimes you reach the same conclusion. Sometimes you don’t, as with Frances Kamm’s retort to Peter Unger – see p. 13 of Morality, Mortality, volume 2.

Sometimes you question whether your intuitions are reliable. This may be because you reject the nature of many of these thought experiments, as with Robert Goodin – see pp. 8-9 of Political Theory and Public Policy. Or it may be because you think your own intuitions have been primed by previous thought experiments, as Mike Otsuka discusses – see pp. 109-10 of his 2008 paper in the journal Utilitas.

Sometimes you re-run the thought experiment with a different model, e.g. a different ordering of cases or different framing. Otsuka does this in the paper mentioned just above.

And sometimes – perhaps most importantly – you re-run the thought experiment with different variables. For example, if one adds uncertainty to the ticking time-bomb scenario, even many people who initially advocated torture become less willing to do so. I use the ticking time-bomb scenario with first-year politics students as a way of getting them to think about thought experiments in terms of variables.

Replication with different variables is also important in the social sciences. My favourite example is Daniel Treisman’s 2007 paper in the Annual Review of Political Science. He attempts to replicate many well-known cross-national analyses of corruption, and finds that small and reasonable changes to the independent variables often alter the results (see pp. 222 onwards). This challenges the reliability of the data and the models. It’s strikingly similar to the way that Otsuka questions our intuitions and alters the model (see above).
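In the same spirit as Treisman’s exercise – this is my own toy simulation, not his data or code, and the variable names are invented – one can check how sensitive a regression result is to reasonable changes in the independent variables:

```python
# Toy specification-sensitivity check: re-run a model with different controls.
import numpy as np

rng = np.random.default_rng(1)
n = 500

wealth = rng.normal(size=n)                         # invented variables,
press_freedom = 0.7 * wealth + rng.normal(size=n)   # loosely corruption-themed
corruption = -0.5 * wealth - 0.5 * press_freedom + rng.normal(size=n)

def effect_of_press_freedom(controls):
    """OLS coefficient on press_freedom, given a list of control variables."""
    X = np.column_stack([np.ones(n), press_freedom, *controls])
    return np.linalg.lstsq(X, corruption, rcond=None)[0][1]

print("no controls:       ", round(effect_of_press_freedom([]), 2))        # about -0.73
print("controlling wealth:", round(effect_of_press_freedom([wealth]), 2))  # about -0.50
```

A ‘finding’ that shrinks by a third when one plausible control is added is a finding we should hold with some uncertainty – just as an intuition that flips when the cases are reordered is an intuition we should question.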

Replication matters, and it’s pleasingly common in philosophy, at least in relation to thought experiments.

Was Shakespeare a schoolteacher? How sloppy are some journalists?

Several people have been claiming that Shakespeare spent a few years working as a schoolteacher in Titchfield, a village in Hampshire. The claims have some plausibility and may be right. But I’m interested in how sloppily the BBC reported the story. The BBC makes it sound like a definite finding. Surprisingly, the Daily Mail newspaper is more even-handed, as we’ll see. And the claims about Shakespeare involve some interesting intellectual errors in their own right.



What is it like to be Leo Strauss?

Last year, I published a critique of Leo Strauss. Strauss was an important and influential thinker who is controversial in two ways. He’s a conservative, and may have influenced many neoconservatives in the Reagan and Bush administrations. I don’t care about that. What I do care about are his historical interpretations, especially his claims that writers like Plato and Machiavelli hid secret messages in their texts using odd techniques which Strauss often seems to have been the first to spot. I have no problem with the idea that some people have written esoterically, but I do doubt the particular claims that Strauss makes.

Near the end of my paper, I wrote a little satire, mimicking Strauss’s approach and parodying his style to ‘prove’ that Thomas Hobbes hid secret messages about the music of Beethoven – even though Hobbes died 91 years before Beethoven was born.

While writing the satire, though, I suddenly saw what it might have been like to be Leo Strauss. I had been finding lots of astonishing parallels between Hobbes’s writings and Beethoven’s music – it was starting to get freaky. And suddenly, a thought started to flash into my head: ‘Is it possible that Hobbes was actually writing about Beethoven?’ I didn’t even finish this thought: of course, Hobbes could not have been writing about Beethoven. But that moment showed me how easy it is to read too much into a mere coincidence.

Strauss and his esoteric bookshelves, by Adrian Blau

And this is where Strauss goes wrong. There is a natural human bias to look for evidence which fits one’s ideas, or to interpret things to support one’s ideas. Psychologists call this confirmation bias. If you think you don’t suffer from this … well, I’m very happy for you, but you’re probably not going to be the next Sherlock Holmes.

Scientific methods arose in part to counteract biases such as confirmation bias. Scientists shouldn’t just look for evidence which fits their theories: they should question their evidence, test their theories, compare different explanations, and so on. If he had applied such principles, Strauss would not have made many of the claims he made.

What is it like to be Leo Strauss? I can’t say for sure, but for one brief moment, I might just have known.