Anyone else seen this NYT op-ed by David Hambrick and Elizabeth Meinz? Under the title “Sorry, Strivers: Talent Matters”, we have an attack by two psychologists on what they see as the new consensus – that hard work, rather than talent, is what drives success.
They mention the popular writing of David Brooks, Malcolm Gladwell and Geoff Colvin as promoting the view that “what seems to separate the great from the merely good is hard work, not intellectual ability”. This, they claim, is not what science tells us.
In support of the idea that intellectual ability matters, they cite two studies, one on so-called “extremely gifted individuals” and one of their own on piano players. I haven’t been able to access a PDF of the one about pianists, so I only have a second-hand account to go on (by the authors themselves, [here]).
This is the first time in years I’ve checked back on the IQ literature, and I’m afraid to report it hasn’t gotten any less depressing in the meantime.
Let’s start with the super-genius study. My reference is a review paper (Lubinski & Benbow, 2006). From what I can tell, here’s what the study did: they organised a “talent search” to find kids around the age of 12-13 who scored particularly high on the SAT, which is apparently more or less an IQ test. They were interested in differences among people who are already at the extremes of the distribution. In IQ terms, the question is whether IQ still makes a difference beyond a certain threshold, i.e., does it matter if you score 200 rather than 160?
To do so, they formed three cohorts with increasingly stringent inclusion criteria and followed up on their academic and professional careers.
Their main result is that even within these highly selected cohorts, there is still a correlation between how high people scored when initially tested and various achievements later in life. High scorers were more likely to have completed a PhD, had filed more patents on average, earned more on average, and were more likely to be tenured at a major US university. Note that all these outcomes are correlated with each other (except perhaps the money bit): it’s hard to get tenure without a PhD.
What is most remarkable about the paper is the almost comical disregard for potential confounds. They simply do not control for *anything*, as far as I can tell. So here’s a list of possible differences between high and low scorers:
– Social background, hello? This includes social class, levels of education of the parents, access to scientists in the immediate social environment.
– Related: parental involvement, i.e. how invested are parents in their kids’ career choice?
– Time: kids who scored higher were recruited on average several years later. How did the general population change during that time?
– Observer effect: I assume that kids knew how high they’d scored. Other than turning you into a snotty little shit, I imagine that being told at age 12 by people in lab coats that you are amazingly gifted can somewhat impact your choices later in life.
It’s simply unbelievable that people can still publish papers that don’t even pretend to deal with the whole correlation-causation thing. At least in economics they do a regression.
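To make the worry concrete, here’s a toy simulation (made-up numbers and hypothetical variable names, nothing to do with the actual data): a single hidden “background advantage” drives both the age-12 test score and later earnings, the score itself has zero causal effect, and yet the raw numbers look exactly like “talent predicts success”. Regressing the confound out makes the correlation vanish.

```python
import numpy as np

# Toy simulation (not the study's data): a hidden "background advantage"
# drives both the age-12 test score and later earnings. The score itself
# has zero causal effect on earnings here.
rng = np.random.default_rng(0)
n = 10_000
background = rng.normal(size=n)             # unobserved confound
score = background + rng.normal(size=n)     # age-12 test score
earnings = background + rng.normal(size=n)  # later-life outcome

# Raw correlation: looks like the score "predicts" earnings.
print(np.corrcoef(score, earnings)[0, 1])   # ~0.5

# Control for the confound: regress it out of both variables and
# correlate the residuals. The apparent effect disappears.
X = np.column_stack([np.ones(n), background])
res_score = score - X @ np.linalg.lstsq(X, score, rcond=None)[0]
res_earnings = earnings - X @ np.linalg.lstsq(X, earnings, rcond=None)[0]
print(np.corrcoef(res_score, res_earnings)[0, 1])  # ~0.0
```

And of course the real situation is worse: the authors didn’t measure any of the candidate confounds in the first place, so there’s nothing to regress out.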
Moving on to the second study: I can’t say much about it without access to the research paper, but what the authors did was take a bunch of pianists and measure their working-memory capacity and their musical sight-reading skills. What they found is that, when controlling for practice, working-memory capacity was still correlated (a bit) with sight-reading skills. I don’t know whether other possible confounds were controlled for, like how young the players were when they started learning music. The causality could also run backwards: maybe a lot of sight-reading practice as a kid increases working-memory capacity.
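“Correlated when controlling for practice” presumably means something like a partial correlation (or equivalently, a hierarchical regression); here’s what that statistic amounts to, again on made-up numbers rather than their data:

```python
import numpy as np

# Made-up numbers, not Hambrick & Meinz's data: sight-reading depends
# heavily on practice and a little on working memory.
rng = np.random.default_rng(1)
n = 5_000
practice = rng.normal(size=n)
wm = rng.normal(size=n)  # working-memory capacity
sight_reading = 1.0 * practice + 0.3 * wm + rng.normal(size=n)

def residuals(y, z):
    """Residuals of y after regressing it on z (with an intercept)."""
    Z = np.column_stack([np.ones_like(z), z])
    return y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]

# Partial correlation: correlate what's left of working memory and
# sight-reading once practice has been regressed out of both.
r = np.corrcoef(residuals(wm, practice),
                residuals(sight_reading, practice))[0, 1]
print(r)  # ~0.29: small but non-zero, the shape of the reported result
```

Note that the statistic is perfectly symmetric: it comes out the same whether working memory drives sight-reading skill or the other way around, which is exactly the reverse-causality worry.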
I don’t think there’s much to learn from either of these two studies. At root, I think belief in talent and innateness is completely irrefutable. It’s a form of essentialism, an assumption that winners in our society are intrinsically special.
Some specific versions of innateness are definitely testable: if your idea is that amount of white matter at birth determines success in life, this could be disproved by showing there’s no link. I don’t think that’s what’s at play, however. Belief that people who rise to the top of our society have something special about them can never be disproved, and provides some measure of cognitive comfort for the dominant and the dominated.
References:
Hambrick, D. Z., & Meinz, E. J. (2011). Limits on the predictive power of domain-specific experience and knowledge in skilled performance. Current Directions in Psychological Science, 20(5), 275–279. doi:10.1177/0963721411422061
Lubinski, D., & Benbow, C. P. (2006). Study of Mathematically Precocious Youth after 35 years: Uncovering antecedents for the development of math-science expertise. Perspectives on Psychological Science, 1, 316–345.
Hi Simon,
I did indeed read the NY Times article; I am trying to get a reprint of the Meinz article (surprised that as an academic you don’t have access). There is a soft write-up of the work here:
5b176194-ba9a-498d-87c3-c51bc0b1c66b.pdf
And I agree. I wish this IQ nonsense would go away, along with evolutionary psychology. What a crock.
Best, Jason
Hi Jason,
I’m at a technical university; when it comes to journal subscriptions, psychology is (understandably) not their priority. I don’t quite agree with you with respect to evolutionary psych: I don’t think there’s anything wrong in principle with trying to figure out what evolution did to our brain. I just think it’s immensely difficult given the lack of relevant historical data, so I doubt we’ll ever have confident answers to anything much.
I used to really hate the pop-science that came out of the field, though (like the ludicrous sexual selection stuff). I’ve now realised that there’s similar nonsense coming out of neuroscience, e.g. brain reading, so it seems to me that it’s unfair to always single out evo psych as the bad apple.
Best
Simon
“At least in economics they do a regression.” That was funny. 😉
But to be fair, economists (or at least econometricians) are pretty much obsessed with confounding (which is good): they work hard to find the right “instrumental variables”, and some of them these days even run experiments with random assignment of “treatments”, like in medicine.
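For anyone who hasn’t met the trick: an instrumental variable is something that shifts the “treatment” but affects the outcome only through the treatment. Here’s a minimal two-stage least squares sketch on simulated data (toy numbers and hypothetical variable names, not any real dataset):

```python
import numpy as np

# Toy data: a hidden confound biases plain OLS, but an instrument that
# moves the treatment (and nothing else) recovers the true effect.
rng = np.random.default_rng(2)
n = 100_000
confound = rng.normal(size=n)    # unobserved
instrument = rng.normal(size=n)  # affects the treatment only
treatment = instrument + confound + rng.normal(size=n)
outcome = 2.0 * treatment + 3.0 * confound + rng.normal(size=n)

def ols_slope(x, y):
    """Slope of an ordinary least-squares fit of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

print(ols_slope(treatment, outcome))  # ~3.0: badly biased (true effect: 2.0)

# Stage 1: predict the treatment from the instrument.
fitted = instrument * ols_slope(instrument, treatment)
# Stage 2: regress the outcome on the fitted values.
print(ols_slope(fitted, outcome))     # ~2.0: the true effect
```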
You’re right, I also have the impression that econometricians are quite careful about the whole correlation/causation business. I assume epidemiologists are similarly careful. Other fields don’t have such a strong emphasis on that, because the data come from experiments most of the time. Where I come from, when people talk about confounding they usually mean that the experimental methods are wrong, i.e. the experiment doesn’t actually test what it claims to be testing.