This post is part five of the series Raw Nerve.
We are all capable of believing things which we know to be untrue, and then, when we are finally proved wrong, impudently twisting the facts so as to show that we were right. Intellectually, it is possible to carry on this process for an indefinite time: the only check on it is that sooner or later a false belief bumps up against solid reality, usually on a battlefield.
—George Orwell, “In Front of Your Nose”
If you want to understand experts, you need to start by finding them. So the psychologists who wanted to understand “expert performance” began by testing alleged experts, to see how good they really were.
In some fields it was easy: in chess, for example, great players can reliably beat amateurs. But in other fields, it was much, much harder.
Take punditry. In his giant 20-year study of expert forecasting, Philip Tetlock found that someone who merely predicted “everything will stay the same” would be right more often than most professional pundits.1 Or take therapy. Numerous studies have found an hour with a random stranger is just as good as an hour with a professional therapist.2 In one study, for example, sessions with untrained university professors helped neurotic college students just as much as sessions with professional therapists.3 (This isn’t to say that therapy isn’t helpful — the same studies suggest it is — it’s just that what’s helpful is talking over your problems for an hour, not anything about the therapist.)
As you might expect, pundits and therapists aren’t fans of these studies. The pundits try to weasel out of them. As Tetlock writes: “The trick is to attach so many qualifiers to your vague predictions that you will be well positioned to explain pretty much whatever happens. China will fissure into regional fiefdoms, but only if the Chinese leadership fails to manage certain trade-offs deftly, and only if global economic growth stalls for a protracted period, and only if…”4 The therapists like to point to all the troubled people they’ve helped with their sophisticated techniques (avoiding the question of whether someone unsophisticated could have helped even more). What neither group can do is point to clear evidence that what they do works.
Compare them to the chess grandmaster. If you try to tell the chess grandmaster that he’s no better than a random college professor, he can easily play a professor and prove you wrong. Every time he plays, he’s confronted with inarguable evidence of success or failure. But therapists can often feel like they’re helping — they just led their client to a breakthrough about their childhood — when they’re actually not making any difference.
Synthesizing hundreds of these studies, K. Anders Ericsson concluded that what distinguishes experts from non-experts is engaging in what he calls deliberate practice.5 Mere practice isn’t enough — you can sit and make predictions all day without getting any better at it — it needs to be a kind of practice where you receive “immediate informative feedback and knowledge of results.”6
In chess, for example, you pretty quickly discover whether you made a smart move or a disastrous error, and it’s even more obvious in other sports (when practicing free-throws, it’s pretty obvious if the ball misses the net). As a result, chess players can try different tactics and learn which ones work and which don’t. Our pundit is not so lucky. Predicting a wave of revolutions in the next twenty years can feel very exciting at the time, but it will be twenty years before you learn whether it was a good idea or not. It’s hard to get much deliberate practice on that kind of time frame.
I’ve noticed very ambitious people often fall into this sort of trap. Any old slob can predict what will happen tomorrow, they think, but I want to be truly great, so I will pick a much harder challenge: I will predict what will happen in a hundred years. It comes in lots of forms: instead of building another silly site like Instagram, I will build an artificial intelligence; instead of just doing another boring experiment, I will write a grand work of social theory.
But being great isn’t as easy as just picking a hard goal — in fact, picking a really hard goal avoids reality almost as much as picking a really easy one. If you pick an easy goal, you know you’ll always succeed (because it’s so easy); if you pick a really hard one, you know you’ll never fail (because it will always be too early to tell). Artificial intelligence is a truly big problem — how can you possibly expect us to succeed in just a decade? But we’re making great progress, we swear.
The trick is to set yourself lots of small challenges along the way. If your startup is eventually going to make a million dollars, can it start by making ten? If your book is going to eventually persuade the world, can you start by persuading your friends? Instead of pushing all your tests for success way off to the indefinite future, see if you can pass a very small one right now.
And it’s important that you test for the right thing. If you’re writing a program that’s supposed to make people’s lives easier, what’s important is not whether they like your mockups in focus groups; it’s whether you can make a prototype that actually improves their lives.
One of the biggest problems in writing self-help books is getting people to actually take your advice. It’s not easy to tell a compelling story that changes the way people view their problems, but it turns out to be a lot easier than writing something that will actually persuade someone to get up off the couch and change the way they live their life. There are some things writing is really good at, but forcing people to get up and do something isn’t one of them.
The irony, of course, is that the books are totally useless unless you take their advice. If you just keep reading them, thinking “that’s so insightful! that changes everything,” but never actually doing anything different, then pretty quickly the feeling will wear off and you’ll start searching for another book to fill the void. Chris MacLeod calls this “epiphany addiction”: “Each time they feel like they’ve stumbled on some life changing discovery, feel energized for a bit without going on to achieve any real world changes, and then return to their default of feeling lonely and unsatisfied with their life. They always end up back at the drawing board of trying to think their way out of their problem, and it’s not long before they come up with the latest pseudo earth shattering insight.”7
Don’t let that happen to you. Go out and test yourself today: pick a task just hard enough that you might fail, and try to succeed at it. Reality is painful — it’s so much easier to keep doing stuff you know you’re good at or else to pick something so hard there’s no point at which it’s obvious you’re failing — but it’s impossible to get better without confronting it.
Next in this series: Cherish mistakes
Philip Tetlock, Expert Political Judgment: How Good Is It? How Can We Know? (2006). I don’t have my copy handy, so I checked this description against Philip Tetlock, “Reading Tarot on K Street,” The National Interest (September/October 2009), 57–67. ↩
Robyn M. Dawes, House of Cards: Psychology and Psychotherapy Built on Myth (1996). ↩
Hans H. Strupp and Suzanne W. Hadley, “Specific vs Nonspecific Factors in Psychotherapy: A Controlled Study of Outcome,” Archives of General Psychiatry 36:10 (1979), 1125–1136. ↩
Tetlock, “Reading Tarot,” 67. ↩
K. Anders Ericsson, Ralf Th. Krampe, and Clemens Tesch-Römer, “The Role of Deliberate Practice in the Acquisition of Expert Performance,” Psychological Review, 100:3 (July 1993), 363–406. ↩
Ericsson, “Role,” 367. ↩