Tag: bias

Where there’s smoke, there must be fire, right? Revisiting “The Bell Curve”

I read Richard Herrnstein and Charles Murray’s The Bell Curve in 1994 along with other members of my book club. The main point, that cognitive ability was coming to play an ever greater role in our society, was well-substantiated and thought-provoking, and led to many stimulating conversations. These cognitive elites appeared to be separating physically and culturally from the rest of American society. What might this mean? When Herrnstein and Murray discussed what this trend might mean, they reviewed the literature on whether intelligence is more influenced by nature or nurture; as I recall, they said something to the effect of: it’s likely to be a bit of both.

Over the years, I’ve encountered many references to the book which are at odds with what I remember. And the opprobrium has only escalated. I started to ponder my recollection. Could it have been a deeply racist, white nationalist diatribe – and I failed to pick up on it? Surely, given the amount of smoke surrounding the book, there must be some fire.

After the recent Middlebury incident, where protesters seeking to keep Charles Murray from presenting his purportedly extreme views became physically violent, I dug out my copy of the book. I intended to reread it and see how it stacks up against the disparaging accounts I had read about it. It is a weighty tome (873 pp.) and would demand quite an investment of time. I was delighted, therefore, when brothers Bo and Ben Winegard, one a psychology professor and the other a psychology grad student, decided to do the job for me. In A Tale of Two Bell Curves, the brothers Winegard suggest that what is said about the book is so far removed from what is in the book that it’s best to think of the two creations as separate books. They go on to compare the (actual) book’s key claims with the relevant scientific literature, finding none whose assertions are far off the beaten path. They conclude thus:

There are two versions of The Bell Curve. The first is a disgusting and bigoted fraud. The second is a judicious but provocative look at intelligence and its increasing importance in the United States. The first is a fiction. And the second is the real Bell Curve. Because many, if not most, of the pundits who assailed The Bell Curve did not and have not bothered to read it, the fictitious Bell Curve has thrived and continues to inspire furious denunciations. We have suggested that almost all of the proposals of The Bell Curve are plausible. Of course, it is possible that some are incorrect. But we will only know which ones if people responsibly engage the real Bell Curve instead of castigating a caricature.

Following Middlebury, Cornell social scientists Wendy Williams and Stephen Ceci decided to examine just how extreme Murray’s views are. They transcribed Murray’s Middlebury speech and had three different groups assess it: professors reviewing it without Murray’s name; professors reviewing it with Murray’s name; and a group of regular American adults. Reviewers were asked to rate the material on a scale from 1 to 9, ranging from very liberal to very conservative, with 5 defined as “middle of the road.” All three groups gave the piece a centrist score. In their New York Times op-ed on the exercise, they conclude:

Our data-gathering exercise suggests that Mr. Murray’s speech was neither offensive nor even particularly conservative. It is not obvious, to put it mildly, that Middlebury students and faculty had a moral obligation to prevent Mr. Murray from airing these views in public.

And finally, neuroscientist-philosopher Sam Harris, in the April 22, 2017 episode of his Waking Up podcast, interviewed Charles Murray about “the controversy over The Bell Curve, the validity and significance of IQ as a measure of intelligence, the problem of social stratification, the rise of Trump, universal basic income, and other topics.”

Harris kicks off with his own reflections on how he came to invite Murray for an interview. Like most people, he observes, he had long had a negative opinion of Murray and his work, assuming that “when seemingly respectable people are calling someone a Nazi, a fascist, a white supremacist, a eugenicist – it must be deserved.” Seeing Murray listed as a contributor to a thematic issue of a journal led him to decline his own invitation to contribute. Why would he want to associate himself with someone like that? Following Middlebury, he too decided to examine Murray’s work, and he admits to being very surprised at what he found. Murray’s work reveals him to be a “deeply rational and careful scholar,” one who is “quite obviously motivated by an ethical concern about inequality in this society.”

Reflecting on the notable gap between Murray and what Murray’s critics say about him, Harris takes great issue with those critics. The criticisms appear to have “nothing to do with (Murray’s) errors of scholarship, or the way he’s conducted himself, or his personal motives.” The critics, in fact, ignore much of what Murray and Herrnstein wrote. Murray’s scapegoating derives instead from his “having merely discussed differences in human intelligence at all.”

In case you share Sam Harris’ earlier negative conviction about The Bell Curve and Charles Murray, and you haven’t read the book, I heartily recommend it. I’ll even lend you my copy.

Our prejudice problem

People are increasingly aware of the growing political polarization in the United States; I hear many discussions, for example, about polarization between the political parties and the problems it creates in Congress. Relatively few people, however, seem aware of Americans’ prejudice at the individual level; I rarely hear concern expressed about the negative attitudes Americans hold toward people affiliated with the ‘other’ party. I have long been worried about this, and people’s responses to the election have heightened my anxiety. I decided to educate myself on the topic, and I want to share what I’ve found. The upshot, for my time- or attention-span-challenged friends: when it comes to thinking about, or interacting with, people from the ‘other’ party, we Americans have a serious prejudice problem.

I read the 2015 paper by Iyengar and Westwood, “Fear and Loathing Across Party Lines: New Evidence on Group Polarization,” which Jonathan Haidt describes in his 2016 Edge World Question essay. The authors report four studies (using nationally representative samples) in which they gave Americans various ways to reveal cross-partisan prejudice; they applied the same methods to assess cross-racial prejudice, to have a benchmark for comparison. All four studies found prejudice toward people affiliated with the ‘other’ party, and in all cases that prejudice was much greater than any linked to race.

First, they used the Implicit Association Test. The test measures people’s implicit positive and negative associations by measuring how quickly and easily people can pair words that are emotionally good versus bad with words and images associated with Republicans vs. Democrats, and then with Blacks vs. Whites. Both Blacks and Whites manifested mild preferences (positive associations) for their own group. The effect sizes for cross-partisan implicit attitudes, however, were much larger than those for cross-race attitudes. When Americans look at each other or try to listen to each other, they are slightly biased in favor of their own race, but relatively strongly biased against people from the “other side” politically.
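For readers curious how “quickly and easily pairing words” becomes an effect size: IAT results are commonly summarized as a D score, roughly the gap in mean reaction times between the two pairing conditions, scaled by the variability of all the latencies. Here is a deliberately simplified sketch of that idea (the toy latencies are invented for illustration; the published scoring algorithm has additional steps such as error penalties and trial trimming):

```python
from statistics import mean, stdev

def iat_d_score(compatible_ms, incompatible_ms):
    """Simplified IAT effect: latency gap scaled by pooled variability.

    compatible_ms   -- reaction times (ms) when in-group + 'good' share a response key
    incompatible_ms -- reaction times (ms) when out-group + 'good' share a response key
    A positive score means the in-group/'good' pairings came faster and easier,
    i.e., an implicit preference for the in-group.
    """
    pooled_sd = stdev(compatible_ms + incompatible_ms)
    return (mean(incompatible_ms) - mean(compatible_ms)) / pooled_sd

# Toy data: a respondent pairs 'own group + good' slightly faster.
fast = [640, 700, 655, 690, 670]   # compatible-block latencies (ms)
slow = [720, 790, 760, 820, 745]   # incompatible-block latencies (ms)
score = iat_d_score(fast, slow)
assert score > 0  # positive D => implicit in-group preference
```

The study’s finding, in these terms, is that the D scores for party pairings dwarfed the D scores for race pairings.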

Haidt describes the second study, in which the authors “had participants read pairs of fabricated resumes of graduating high school seniors and select one to receive a scholarship. Race made a difference—Black and White participants generally preferred to award the scholarship to the student with the stereotypically Black name. But Party made an even bigger difference, and always in a tribal way: 80 percent of the time, partisans selected the candidate whose resume showed that they were on their side, and it made little difference whether their co-partisan had a higher or lower GPA than the cross-partisan candidate.” I note in passing that Democrat-affiliated respondents exhibited somewhat higher bias.

In the third study, the authors had respondents play two games (the Dictator game and the Trust game). People’s decisions in these games reveal their generosity toward, and trust of, the other player. In both games, the effects of racial similarity were negligible and not significant. The effects of party-affiliation similarity were considerable, with players consistently revealing partisan preferences: they trusted same-party players more, and they were more generous toward same-party players. Democrat- and Republican-affiliated players manifested similar levels of bias, except in the Trust game, where Democrats revealed much lower trust of Republicans. That is, Democrats allocated considerably more resources when the other player was a fellow Democrat, trusting that player to behave appropriately.
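For those unfamiliar with these games, the Trust game’s payoff structure is simple enough to sketch in a few lines; the amount a player chooses to send is the behavioral measure of trust. The numbers below are illustrative only, not the study’s actual stakes or results:

```python
def trust_game(endowment, sent, returned_fraction, multiplier=3):
    """One round of the Trust game.

    The first player ('truster') sends part of their endowment to the second
    player; the experimenter multiplies it (typically tripling it) in transit,
    and the second player decides what fraction to send back. How much is sent
    measures trust; how much comes back measures trustworthiness. (The Dictator
    game is simpler still: one player just splits a pot, with no return move.)
    """
    transferred = sent * multiplier
    returned = transferred * returned_fraction
    truster_payoff = endowment - sent + returned
    trustee_payoff = transferred - returned
    return truster_payoff, trustee_payoff

# Toy illustration of the partisan pattern reported: a player entrusts more
# of a 10-unit endowment to a co-partisan than to an out-partisan.
to_copartisan = trust_game(endowment=10, sent=6, returned_fraction=0.5)    # (13.0, 9.0)
to_outpartisan = trust_game(endowment=10, sent=2, returned_fraction=0.5)   # (11.0, 3.0)
```

Because sent amounts are observable choices, differences in them across same-party and cross-party pairings give a clean, incentive-compatible measure of partisan trust.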

In the fourth study, they used the same game structure to distinguish favoritism toward those in one’s own party from animosity toward those in the other party. They found that people’s animosity toward the other party was considerably more consequential than their favoritism toward same-party players.

Are there “hedgehog” researchers?




According to Tim Harford’s write-up, the Good Judgment Project finds that the following make people better forecasters: some basic training in probabilistic reasoning; working in teams; having an ‘actively open-minded’ thinking style; and tracking your own performance. Thought-provoking, eh? When I learned this, I immediately wondered: would not all four factors apply to people’s performance in observing, studying, analyzing and learning about things in general? What about social science researchers? We know there is room for all kinds of bias in social science research; couldn’t these insights be applied to improve it? Could they be applied to give us a ‘red flag’ for studies that might manifest more a researcher’s ‘hedgehogian’ world view than an open-minded look at the evidence?
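“Tracking your performance” as a forecaster has a standard mechanic: score each probability forecast against what actually happened with a proper scoring rule, such as the Brier score the Good Judgment Project used. A minimal sketch (the forecasts and outcomes below are made up for illustration):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and actual outcomes.

    forecasts -- probabilities (0..1) assigned to 'the event occurs'
    outcomes  -- 1 if the event occurred, 0 if not
    Lower is better; always guessing 50/50 earns exactly 0.25.
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A forecaster who leans the right way beats the coin-flipper's 0.25.
hedged = brier_score([0.5, 0.5, 0.5, 0.5], [1, 0, 1, 1])    # 0.25
skilled = brier_score([0.8, 0.3, 0.7, 0.9], [1, 0, 1, 1])   # 0.0575
```

The discipline comes from the feedback loop: a running Brier score makes overconfidence and wishful thinking show up as a number you cannot argue with.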



I should come clean. I am especially interested in the implications of an ‘actively open-minded thinking style’, because if this thinking style is associated with more trustworthy research, then I have found some foundation for one of my long-standing rules of thumb: disregard the work of social science researchers whose findings consistently confirm the superiority of reforms moving social systems in one direction. Yes. I call them ‘hedgehog’ researchers. Though… not to their face.



Let me explain what I mean by “directionality”. Most studies of social system reform strategies can be categorized according to the “direction of change” in which the intervention moves the social system. Common directions include: a greater role for citizens and communities; a larger role for markets; a larger role for government; and so on. Directionality can be more fine-grained, however. Reforms can also be characterized in terms of whether they move a system toward or away from a social system prototype (e.g. an “NHS”-like health system; a Bismarckian social health insurance system), and much health services research assesses the soundness of reform strategies in exactly these terms. Some researchers do many studies that use the social system of their native land as the prototype-benchmark. I have encountered this often among health systems researchers, especially those from the UK: delve into their analyses, and you find they are often assessing the soundness of reform strategies that move health systems ‘away from’ or ‘closer to’ the structure of their very own NHS.



If you categorize researchers’ studies in this way, many show no pattern of directionality. A few are…worryingly consistent. For one, every initiative to expand the role of market forces generates positive results. For another, every strategy that moves a health system to be more “NHS-like” is better than the alternative.



Once I started tracking this pattern, I intuitively stopped trusting these Researchers of Unusual Consistency (R.O.U.C.s). I now see this consistency as a sign that the researcher lacks an ‘actively open-minded thinking style’. And, since the Good Judgment Project findings suggest that this thinking style is protective against bias, I think others would do well to keep their eyes open for such patterns.



Time to rethink fat consumption, if you haven’t already

A study, “Association of Dietary, Circulating, and Supplement Fatty Acids With Coronary Risk: A Systematic Review and Meta-analysis,” published March 18, 2014, in the Annals of Internal Medicine, should be the “nail in the coffin” of the lipid hypothesis (which links saturated fat consumption to coronary heart risk). I want to help out, to hammer one tiny nail into the coffin of this zombie idea. Herewith, my hammer swings.

The study is a systematic review of all available evidence on the lipid hypothesis, including observational studies, prospective cohort studies, and RCTs. Taken together, the evidence does not support any link between consuming saturated fat and coronary heart risk. Its “surprising” results have come up in several conversations this week; one friend (you know who you are) speculated that the research may have been funded by a nefarious, self-interested funder (the beef industry, perhaps?). This is not the case, as you can see if you follow the link above.

My friends, and many others, are suspicious because they believe the findings conflict with so much existing evidence. Except they do not; rather, the findings confirm the balance of existing evidence. What the findings are at odds with are current dietary guidelines and conventional wisdom. That is a very different issue altogether.

Since this issue has come up in several conversations, I want to lay out what I discovered when I examined the evolution of the evidence for this hypothesis, as well as the evolution of dietary guidelines.

The origin of the lipid hypothesis lies in poor handling of then-available observational data. To wit, Ancel Keys’ Seven Countries Study (1980), which examined observational data on changes in fat consumption and heart disease levels in different countries. It was named for the seven countries in which an increase in heart disease corresponded with increased fat consumption; the study ignored considerable additional observational data available at the time, which, taken together with the included data, supported the linkage only weakly. Nevertheless, Time magazine covers and, sadly, national dietary guidelines based on the findings followed. There have been many more observational studies since then. Taken together, their findings do not support the lipid hypothesis. Check out this excellent overview of the evidence.

The mechanism? The concern over fat gathered steam when studies showed that saturated fat increases LDL cholesterol, the “bad,” artery-clogging cholesterol. Researchers assumed this increased the risk of heart disease. When further studies did not confirm that saturated fat elevated coronary heart risk, researchers started to dig more deeply into the mechanism. They found the more important predictor of risk is a person’s ratio of LDL to HDL, the “good” cholesterol. Note that, compared with carbohydrates, saturated fat can increase HDL and lower fat deposits in the blood called triglycerides, which is protective against heart disease. Heck, even the American Heart Assn admits this. In fact, more recent studies, such as those examining the health effects of consuming full-fat dairy (see here and here), suggest there are health benefits from eating higher-saturated-fat diets.

Nor do subsequent prospective cohort studies (e.g. Framingham) support the lipid hypothesis. See this systematic review: Siri-Tarino, P. W., Sun, Q., Hu, F. B., & Krauss, R. M. (2010). Meta-analysis of prospective cohort studies evaluating the association of saturated fat with cardiovascular disease. The American Journal of Clinical Nutrition, 91(3), 535-546. They found “no significant evidence for concluding that dietary saturated fat is associated with an increased risk of CHD or CVD”.

Many RCTs to measure the effects (in terms of fatal or non-fatal heart attacks) of saturated fat have been either inconclusive, poorly designed, or completely unsupportive of the hypothesis. A few such studies are (I could not find a systematic review of only RCTs):

  • Research committee. Low-fat diet in myocardial infarction. A controlled trial. The Lancet 1965;2:501-4.
  • Rose GA, Thomson WB, Williams RT. Corn oil in treatment of ischaemic heart disease. British Medical Journal 1965;i:1531-3.
  • Research committee to the medical research council. Controlled trial of soya-bean oil in myocardial infarction. The Lancet 1968;ii:693-700.
  • Dayton S, and others. A controlled clinical trial of a diet high in unsaturated fat in preventing complications of atherosclerosis. Circulation 1969;40(suppl 2):1-63.
  • Leren P. The effect of plasma cholesterol lowering diet in male survivors of myocardial infarction. A controlled clinical trial. Acta Medica Scandinavica 1966;suppl 466:1-92.
  • Woodhill JM, and others. Low fat, low cholesterol diet in secondary prevention of coronary heart disease. Adv Exp Med Biol 1978;109:317-30.
  • Burr ML, and others. Effects of changes in fat, fish, and fibre intakes on death and myocardial reinfarction: diet and reinfarction trial (DART). The Lancet 1989;2:757-61.
  • Frantz ID, and others. Test of effect of lipid lowering by diet on cardiovascular risk. The Minnesota Coronary Survey. Arteriosclerosis 1989;9:129-35.

This brings me back to the just-published systematic review of the available evidence from all three methods (observational studies, prospective cohort studies, and RCTs): Chowdhury, R., S. Warnakula, et al. (2014). “Association of Dietary, Circulating, and Supplement Fatty Acids With Coronary Risk: A Systematic Review and Meta-analysis.” Ann Intern Med 160(6): 398-406.

Unsurprisingly, it found that “current evidence does not clearly support cardiovascular guidelines that encourage high consumption of polyunsaturated fatty acids and low consumption of total saturated fats”.

Let us hope government guidelines will finally be changed to reflect the evidence. We can’t take such a change for granted, though. The folks involved in developing dietary guidelines have been ignoring the evidence that they are wrong for quite a while (see here and here).

I am not giving dietary advice. I am encouraging my many econometrically literate friends to take a look at the evidence themselves. Like me, you may be surprised by what you find.