By Brink Lindsey
Last week Heritage Foundation scholar Jason Richwine, coauthor of a hotly disputed new study on the fiscal costs of comprehensive immigration reform, resigned his position in a hail of controversy over his 2009 Harvard Ph.D. dissertation. In that dissertation Richwine had argued, among other things, that American “Hispanics” are less intelligent than native-born whites, as evidenced by their lower average scores on IQ tests. Richwine then attributed Hispanics’ alleged intellectual inferiority at least partly to genetic factors.
The Richwine affair is just the latest flap in a long-running dispute over the significance of IQ tests and group differences in IQ scores. It’s easy enough to shut down that debate with cries of racism, but stigmatizing a point of view as morally tainted isn’t the same thing as demonstrating that it’s untrue. Here I want to explain why Richwine’s position is intellectually as well as morally unsound.
How justified are inferences like Richwine’s – from lower IQ scores to lower innate intelligence? Well, it depends. Without a doubt, the skills assessed on modern IQ tests are widely applicable and highly valued in contemporary American society. Accordingly, considered just as a measure of skills rather than as a proxy for underlying ability, IQ scores clearly tell us something of genuine importance. They are a reasonably good predictor not only of performance in the classroom but of income, health, and other important life outcomes.
But what about innate mental ability? Does such a thing even exist? Evidence from IQ tests provides strong support that it does. First of all, scores on the various IQ subtests are highly correlated with each other, suggesting the presence of a general underlying factor. Furthermore, IQ scores tend to stabilize around age eight and change relatively little thereafter, in keeping with a relatively fixed level of innate intellectual capacity. And studies of twins and adoptees offer substantial evidence that this capacity has a strong genetic component. The scores of identical twins (who share essentially all of their genes) are much more highly correlated than those of regular siblings (who share only about half). Meanwhile, the scores of regular siblings are in turn much more highly correlated than the scores of adopted and biological children raised together.
So what’s the problem? These studies typically assume that twins’ shared environments are no more alike than those of regular siblings (highly unlikely) and that adoptive families are as diverse as families generally (in fact, parents who adopt tend to be better off and better educated). When these assumptions are relaxed, environmental factors start to loom larger. In this regard, consider a pair of French adoption studies that controlled for the socioeconomic status of birth and adoptive parents. They found that being raised by high-SES (socioeconomic status) parents led to an IQ boost of between 12 and 16 points – a huge improvement that testifies to the powerful influence that upbringing can have.
A study of twins by psychologist Eric Turkheimer and colleagues that similarly tracked parents’ education, occupation, and income yielded especially striking results. Specifically, they found that the “heritability” of IQ – the degree to which IQ variations can be explained by genes – varies dramatically by socioeconomic class. Heritability among high-SES kids was 0.72; in other words, genetic factors accounted for 72 percent of the variations in IQ, while shared environment accounted for only 15 percent. For low-SES kids, on the other hand, the relative influence of genes and environment was inverted: Estimated heritability was only 0.10, while shared environment explained 58 percent of IQ variations.
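To see how twin studies generate numbers like these, it helps to look at the classical back-of-the-envelope method (Falconer’s formula) for splitting test-score variation into genetic and shared-environment components. Turkheimer’s team used more sophisticated statistical models, but the shortcut captures the basic logic. The sketch below uses made-up correlation values chosen only to illustrate the kind of pattern he reported; they are not figures from his study or any other dataset.

```python
# Rough sketch of Falconer's method for decomposing IQ variance.
# The correlations below are illustrative placeholders, not data from
# Turkheimer's study or any other published source.

def falconer_decomposition(r_identical, r_fraternal):
    """Estimate heritability (h2), shared environment (c2), and
    unshared environment (e2) from twin IQ-score correlations."""
    h2 = 2 * (r_identical - r_fraternal)  # genetic component
    c2 = r_identical - h2                 # shared (family) environment
    e2 = 1 - r_identical                  # everything else, incl. measurement error
    return h2, c2, e2

# Hypothetical high-SES sample: identical twins' scores track each other
# much more closely than fraternal twins' scores do.
print(falconer_decomposition(r_identical=0.85, r_fraternal=0.50))
# -> roughly (0.70, 0.15, 0.15): genes appear to dominate

# Hypothetical low-SES sample: identical and fraternal twins are almost
# equally similar, so the shared family environment dominates instead.
print(falconer_decomposition(r_identical=0.65, r_fraternal=0.60))
# -> roughly (0.10, 0.55, 0.35): environment appears to dominate
```

The point of the sketch is simply that the same formula, fed different correlation patterns, yields the inverted genes-versus-environment split that Turkheimer found across social classes.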
Turkheimer’s findings make perfect sense once you recognize that IQ scores reflect some varying combination of differences in native ability and differences in opportunities. Among rich kids, good opportunities for developing the relevant cognitive skills are plentiful, so IQ differences are driven primarily by genetic factors. For less advantaged kids, though, test scores say more about the environmental deficits they face than they do about native ability.
This, then, shows the limits of IQ tests: Though the tests are good measures of skills relevant to success in American society, the scores are a good indicator of relative intellectual ability only for people who have been exposed to equivalent opportunities for developing those skills – and who are actually motivated to try hard on the test. In other words, IQ tests can gauge innate intelligence only when all those other factors are held roughly equal. When the tests are used to compare individuals from wildly different backgrounds, innate ability is not being measured in isolation; the scores instead reflect some impossible-to-sort-out combination of ability, opportunity, and motivation. Let’s take a look at why that is.
Comparisons of IQ scores across ethnic groups, cultures, countries, or time periods founder on this basic problem: The cognitive skills that IQ tests assess are not used or valued to the same extent in all times and places. Indeed, the widespread usefulness of these skills is emphatically not the norm in human history. After all, IQ tests put great stress on reading ability and vocabulary, yet writing was invented only about 6,000 years ago – rather late in the day given that anatomically modern humans have been around for over 100,000 years. And as recently as two hundred years ago, only about 15 percent of people could read or write at all.
More generally, IQ tests reward the possession of abstract theoretical knowledge and a facility for formal analytical rigor. But for most people throughout history, intelligence would have taken the form of concrete practical knowledge of the resources and dangers present in the local environment. To grasp how culturally contingent our current conception of intelligence is, just imagine how well you might do on an IQ test devised by Amazonian hunter-gatherers or medieval European peasants.
The mass development of highly abstract thinking skills represents a cultural adaptation to the mind-boggling complexity of modern technological society. But the complexity of contemporary life is not evenly distributed, and neither is the demand for written language fluency or analytical dexterity. Such skills are used more intensively in the most advanced economies than they are in the rest of the world. And within advanced societies, they are put to much greater use by the managers and professionals of the socioeconomic elite than by everybody else. As a result, American kids generally will have better opportunities to develop these skills than kids in, say, Mexico or Guatemala. And in America, the children of college-educated parents will have much better opportunities than working-class kids.
Among the strongest evidence that IQ tests are testing not just innate ability, but the extent to which that innate ability has been put to work developing specific skills, is the remarkable “Flynn effect”: In the United States and many other countries, raw IQ scores have been rising about three points a decade. This rise is far too rapid to have a genetic cause. The best explanation for what’s going on is that increasing social complexity is expanding the use of the cognitive skills in question – and thus improving the opportunities for honing those skills. The Flynn effect is acutely embarrassing to those who leap from IQ score differences to claims of genetic differences in intelligence.
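The scale of that rise is easier to appreciate with a bit of arithmetic. The sketch below takes the roughly three-points-per-decade rate at face value and uses the conventional 15-point standard deviation for IQ scores; the time spans are arbitrary illustrations, not claims about any particular country’s data.

```python
# Illustrative arithmetic for the Flynn effect: roughly 3 IQ points per decade.
# The rate and the 15-point standard deviation are conventional figures;
# the time spans below are arbitrary examples.

POINTS_PER_DECADE = 3
IQ_STD_DEV = 15

for years in (30, 50, 100):
    gain = POINTS_PER_DECADE * years / 10
    print(f"{years} years: about {gain:.0f} points, "
          f"or {gain / IQ_STD_DEV:.1f} standard deviations")

# 30 years:  about 9 points,  roughly 0.6 standard deviations
# 50 years:  about 15 points, a full standard deviation
# 100 years: about 30 points, two standard deviations - far too fast
# for changes in gene frequencies to explain.
```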
Jason Richwine is the latest exemplar of the so-called “hereditarian” interpretation of IQ – namely, that IQ scores are a reliable indicator of immutable, inborn intelligence across all groups of people, and therefore that group differences in IQ indicate group differences in native intelligence. Yes, the hereditarian view lends aid and comfort to racists and nativists. But more importantly, it’s just plain wrong. Specifically, it is based on the ahistorical and ethnocentric assumption of a fixed relationship between the development of certain cognitive skills and raw mental ability. In truth, the skills associated with intelligence have changed over time – and unevenly through social space – as society evolves.
The lower IQ scores of American Hispanics cannot simply be dismissed out of hand. They are evidence of skill deficits that sharply curtail chances for achievement and success. But contrary to the counsel of despair from hereditarians like Richwine, those deficits aren’t hard-wired. Progress in reducing achievement gaps will certainly not be easy, but a full review of the IQ evidence shows that it is possible. And it will be aided by policies, like immigration reform, that encourage the full integration of Hispanics into the American economic and cultural mainstream.