Five reasons why our coronavirus guesstimations are dumb and wrong

For reasons that I struggle to understand, a lot of big-brained Twitter people feel the need to forecast the expected lifespan of this pandemic and the death toll it is likely to exact over the next few months. Here are five reasons why, in most cases, this behavior is dumb and wrong, along with some exceptions that I'll note at the end.

1. We don’t understand how the virus behaves

To construct a prediction about the future, we have to draw on evidence about the past and the present. We assume that the evidence we have at hand is accurate, and if we’re right about that then there’s at least some chance that our predictions will be accurate too. Unfortunately, in the case of coronavirus, this assumption has been false time and time again.

Yes, they actually tweeted this.

In January, we were told that the virus couldn't spread through human-to-human contact. In February, progressive governments like Canada's affirmed that there was no need to enforce border controls, and now Canada is closed to entry for everyone except its own citizens. We were told that coronavirus couldn't be spread to or by animals, and now confirmed cases of corona infection among felines have been reported. In Britain, we were told that herd immunity was going to be the key to getting over corona, and now it is becoming evident that naturally acquired herd immunity may be functionally impossible in the short term due to the antibody-dependent enhancement (ADE) properties the virus is suspected to possess.

Evidently, our governments failed to respond adequately to the risk posed by coronavirus because they operated on faulty assumptions, and the short list I've documented above is less than the tip of the iceberg. Those faulty assumptions led to bad policy and got people killed. If you are trying to sell your models to policymakers based on your own flawed assumptions, then those deaths ought to weigh very heavily on your conscience.

There is no reason to believe that our current global understanding of coronavirus is accurate either, so even at this stage models remain horrifically flawed. Only one country has issued a public advisory about the risks of "reactivation" in relation to coronavirus, whereas countries like the USA and UK are still strategizing as if catching the virus makes you immune. The models being drafted for use right now in government bureaus across the world are terribly flawed, and this will likely remain the case in the near future, because:

2. We don’t know how the virus will behave in the future

Viruses are not technically 'alive' under the generally agreed-upon definitions used in the biological sciences to describe lifeforms. But as with anything that replicates at large scale, mutation is a constant risk.

The mutations that have been documented for coronavirus so far are not guaranteed to transform it into a global black death that kills everyone it touches. But that doesn't mean such a scenario won't happen – in truth, we simply cannot know. Projecting the likely death toll of coronavirus in Brazil from the deaths in Germany is dangerous, dumb, and irresponsible, because mutation makes it possible that the two countries will eventually end up with different strains of the virus that cannot be meaningfully compared in this way.

3. We can’t compare the impact of corona in the first place

What counts as a coronavirus death? The official guidelines issued by the USA's Centers for Disease Control and Prevention require medical professionals to report both confirmed and probable cases and deaths, so the official statistics that get published are a confusing jumble of confirmation, conjecture, and comorbidity. A week ago, the implementation of these guidelines in New York State produced a 60% jump in the recorded coronavirus death toll, even though people continued to die at the exact same rate as before. This shows how much definitions and criteria matter in how we count the virus, and that matters for both the infected count and the death toll.

It turns out that these definitional differences make it utterly impossible to compare the impact of the virus internationally in any meaningful way. 'Confirmed cases' are understood to be widely underreported worldwide, since commonly only patients sick enough to require hospitalization get tested and counted, and the death count is subject to the same statistical distortion. A coronavirus death in the USA is not equal to a coronavirus death in Germany, and there are people dying at home from coronavirus right now whose deaths are neither confirmed nor counted. This is why critical information about the virus, like the average case fatality rate that countries should plan around, remains widely disputed and unsettled. If your statistical model assumes the average CFR is just below 1%, then perhaps opening up the economy might be a good idea. But if this assumption strays too far from the mark, a policy based on your model could feasibly kill tens of millions of people.
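To make the stakes concrete, here is a minimal back-of-envelope sketch of how sensitive a projected death toll is to the assumed CFR. The population, attack rate, and CFR values are purely illustrative assumptions of mine, not estimates:

```python
# Toy sensitivity check: projected deaths under different assumed CFRs.
# Every input here is a hypothetical illustration, not a forecast.

population = 330_000_000   # roughly US-sized population (assumption)
attack_rate = 0.40         # assumed fraction of the population eventually infected

for cfr in (0.005, 0.01, 0.03):   # 0.5%, 1%, and 3% case fatality rates
    deaths = population * attack_rate * cfr
    print(f"CFR {cfr:.1%}: ~{deaths:,.0f} deaths")
```

Moving the assumed CFR from 0.5% to 3% multiplies the projected toll sixfold with everything else held constant. That is the whole problem with selling models built on assumptions nobody can currently verify.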

4. We are lying to ourselves about everything

In times like these, when precision and factual accuracy can impose unacceptable delays upon the primary task of saving lives, it is hard to say exactly what constitutes a lie. With that in mind, there are at least some cases where countries (i.e. China) blatantly distorted or even falsified statistical data relating to coronavirus – something the media decried as a 'conspiracy theory' until very recently. While the growing recognition of these deceptions is a good sign, China is very far from alone in this regard:

Note: I did not even mention corona in the search entry

But the lies don't stop at deliberate underreporting – far from it. In February the US Surgeon General told Americans that masks are "NOT effective in preventing general public from catching" the coronavirus, and that they should neither buy nor wear them. As of now, many Western countries are beginning to fall in line with East Asian countries like China and Korea, which encourage – and in some cases legally require – their citizens to wear masks. Germany has already done so, and compulsory masking laws will be enforced by police in multiple German regions by the end of April. Hilariously, the US Surgeon General later backtracked on his initial lies about the inefficacy of masking by encouraging Americans to wear masks or 'cloth face coverings' if they absolutely must go outside.

Most of you don't need me to explain why these lies were broadcast for months by both state and private media outlets in virtually every Western country. As you already know, this was the quintessential 'noble lie' – perhaps even the lie that will come to characterize this pandemic, and the criminally incompetent response to it, in the history books many years from now. Even in countries that pride themselves on having a 'free press', like the UK and USA, newspapers have to abide by government-issued reporting guidelines or face serious repercussions. Without resorting to eccentric conspiracy theories involving the NWO or 'depopulation', it looks like Western governments simply panicked in the face of a public rush on masks and, facing the prospect of an undersupply for key frontline medical workers, decided to lie. It's distasteful, but ultimately rational, given that democratic governments are so incapable of ramping up the production of necessary medical supplies that they had no other choice. But yet again, the unfortunate truth is that the 'Great Mask Conspiracy' was only the tip of the iceberg.

It's not just 'masks don't work' – it's everything. It's 'pets can't get coronavirus' – something we were told very confidently earlier this year, which is now known to be utterly false. It's 'maintaining a 2-meter distance will keep you safe' – yet another item in a long string of lies. Overall, I can't even be bothered to count the long list of lies we've been told, or to recount the countless occasions on which the media decried those who spoke out against these lies, branding them conspiracy theorists, Russian bots, or worse. If by some miracle you aren't already aware of this by now, then you should probably go read a more comforting blog.


What this means for the statistical forecasts you see in the news every day is self-explanatory. None of the projections you are reading, no matter how well-intentioned the data scientists who crafted them are, will be free from lies, deception, distortion, or propaganda. Calculations that are forged upon a bedrock of lies are unlikely to be true. There’s really no way around it.

5 (Bonus): Since we're inundated by uncertainty, we should plan for the worst-case scenario – and we don't need pandemic projections to do that

By this point, you will hopefully agree with me: neither you nor I have any clue how our societies will be affected by the coronavirus pandemic over the coming months, which means we don't know what our lives will look like in the near future. In situations of extreme uncertainty, planning for the worst and hoping for the best is the most sensible thing you can do. If you forego the daily news updates and the laughable puppet shows put on by your local politicians, you can focus on keeping a low profile, stocking up on food, water, and other essential items, and staying inside unless it becomes absolutely necessary to do otherwise. If the truth lies on the side of the optimists, then all you will have lost is the time you spent in the grocery store buying bottled water and tinned sardines. If, on the other hand, the pessimists happen to be right, then not only will you have lost nothing – you may well have saved your life.

Exceptions

By necessity, policymaking requires us to simplify the impossibly complex world into a finite set of manipulable variables, and most of the time this is a good way of doing things. If you’re directly involved in the policymaking process, or if you need to make hard financial decisions in the context of your investment portfolio, your job, or your small business, then it’s obviously reasonable to survey the landscape of expert opinion and decide for yourself which forecasts seem most reliable. If those conditions don’t apply to you, and you don’t stand to make immediate financial gain from being right about this, then stop wasting your life.

Bad politics in Nature: Bunce and McElreath (2019)

Today I read a paper in Nature by John Bunce and Richard McElreath, entitled "Sustainability of minority culture when inter-ethnic interaction is profitable." As a staunch advocate for cultural pluralism, and a supporter of the theoretical autonomy of all cultures regardless of minority status (since history tells us that what is a majority today can quickly become a minority tomorrow, and vice versa; ask the Aztecs), I was enthused by the subject matter and read the paper with great interest.

The paper opens strongly with some well-phrased descriptions of cultural shift and variation. Very quickly though, we start to see a political leaning emerge:

The highlighted sections, while not necessarily incorrect, give us a clear look at the authors' intentions. Cultural displacement occurs when minority cultures are lost to a "powerful majority group." The suggestion that preventing the loss of minority cultures may contribute to the richness of society is something I entirely agree with, but it is also inherently subjective and thus inappropriate for one of the world's leading scientific journals. The citation (no. 12) used for the 'richness' claim is a dummy citation which refers only to the language-preservation activist Michael Krauss expressing the same opinion. There's more to criticize about the idea of "optimally distinct group identities" (optimal for what, specifically?), but you get the idea.

What I think is truly inappropriate is this next part of the work:

We can infer from this that the article is the brainchild of John Bunce (first author), whereas McElreath (second author) probably played the role of methodological consultant – which makes sense, as his best-received book is on coding and mathematics. I'm going to refer to Bunce as the author from here on.

Bunce's description of ethnically Mestizo Peruvians – those of mixed Spanish and South American heritage – as 'colonists' is an accusation carrying a moral charge far too great for his feeble implications to support. To describe these people as colonists because some of their ancestors were indeed colonists is no different from describing Muslim minority communities in the Balkans as 'invaders' because many of their ancestors did indeed enter the Balkans as part of Muslim military invasions. I use this example because 'invaders' is exactly the term used by Brenton Tarrant to describe his victims in Christchurch. I am sure that Bunce himself would be horrified by the Christchurch massacre and would strongly protest the legitimacy of Tarrant's terminology. Nevertheless, it's clear that Bunce's language isn't intended to cut both ways; it's a hypocritical, ideological accusation that lacks logical consistency.

The lack of consistency in moral standards is something we see in the article's conclusion as well. Toward the end, Bunce concludes that minority cultures may be preserved by establishing interethnic barriers or boundaries which can be freely crossed by the minority culture but not the majority culture. This isn't particularly surprising or novel, insofar as it is something we have already been implementing as a political solution to the problem of preserving ethnic minority cultures. In the UK, for instance, we have a Welsh Parliament and a Scottish Parliament, but not an English Parliament. I don't think the legislators who created these parliaments would be surprised by the scientific revelation here, since they presumably already guessed it would be the case.

But to me there's an issue beyond that, which Bunce (and perhaps also McElreath) bears responsibility for missing. When proposing solutions to social problems, researchers face a theoretically limitless number of ways to achieve their goals. I could just as easily publish a paper concluding that 'wholesale systematic genocide is a very effective way of eliminating minority cultures.' It's obviously true, but also obviously not worth publishing. Likewise, in the case of this paper, it seems that Bunce is making statements which are obviously true, combining them with others that are logically incoherent, and packaging these into a 'scientific paper' hardly fit for purpose in order to promote his political ideology.

Angela Saini thinks about race like Richard Spencer

Quillette just published this essay by Drs. Bo Winegard and Noah Carl, who joined forces to review Angela Saini’s new pop science book, Superior.

For the unfamiliar, Saini is a British Desi journalist and writer who is popular with undergraduate and professional audiences – more specifically, her popularity appears relatively confined to the center-left and middling-high (+1 SD) openness portion of that audience. Her Wikipedia page (which she probably wrote herself) emphasizes her two MA degrees from high-tier British universities (Oxford and Imperial), and the two books she wrote prior to Superior read like manifestos on behalf of perceived underdogs whose true virtue she feels compelled to explain. I'm going to let a page from her book on women, entitled Inferior, speak for itself:

Just one page alone tells us so much about the kind of person who succeeds in the industry of science journalism today.

In Saini’s worldview, the “early divisions” that appear and begin to distinguish male and female humans in childhood reflect only a belief about biology, not biology itself. Reproductive and sexual behavior by consenting adults is guided by a kind of zombie notion, fed by scientific research (which is thus implied to be unreliable). If we have an interpretation of the past that contradicts this, it is only because our visions are tainted by “myths.” Men are dominant, therefore women must be, and are, submissive.

Amidst this, we (meaning Angela Saini) have “this dark, niggling feeling that never seems to go away no matter how much equality legislation is passed: the feeling that we aren’t the same, that, in fact, our biology might explain the sexual inequality that has existed… across the world.” We don’t need Freud to point out how much closer this is to a childhood diary of shameful thoughts than a book about science. Yet Saini moves decisively to nip this ‘dark feeling’ in the bud, dispelling it with one phrase: ‘thoughts like these are dangerous.’

This, in a nutshell, is Angela Saini. Her motivation is to act against the very thought that anything biological could explain anything she views as sociological. If the science seems to suggest any differences that can be related to biology, this is only because, she tells us, the science is itself flawed:

You can probably guess what her latest book Superior: The Return of Race Science had to say about its subject matter. The Quillette article did a pretty good job evaluating Saini’s arguments, so I won’t repeat their rebuttal. Instead, I want to draw attention to Saini’s fundamental mistake that we see in both Superior and Inferior: she combines strawman ideas of race and gender into a Frankenstein belief system no-one actually holds. Let’s start with her definition of race from the prologue of Superior:

“No place or people has a claim on superiority. Race is the counter-argument. Race is at its heart the belief that we are born different, deep inside our bodies, perhaps even in character and intellect, as well as in outward appearance.”

Winegard and Carl did a good job of picking up on what she actually means. In their words, Saini's definition "inextricably binds up race with morality, making it an affront to human dignity and a threat to metaphysical equality." To Saini, race is the idea that the features on which any two randomly selected humans from the same society will differ (appearance, character, intellect) confer a metaphysically and morally superior status. Racism, then, is a Nietzschean view in which one's own group is a master and others are slaves.

Interestingly, Saini’s attitude here is much closer to the ‘pro-race-ists’ such as Richard Spencer than any credible scientist working in biology or related fields. As far as I’m aware, Spencer – like Saini – believes that race does indeed confer some metaphysically superior status; he only disagrees with Saini on the small matter of its existence.

                                     Saini    Spencer
Race is real                         No       Yes
Race is socially constructed         Yes      No
Race determines metaphysical value   Yes      Yes

Ignoring her inadvertent agreement with the alt-right, I’m going to throw a curveball into our discussion by arguing that both Spencer and Saini are wrong. In common English parlance, we use the word race to legitimately describe groups defined by shared cultural characteristics, but also those defined by shared patterns of biological descent and variation, and often we see a substantial degree of overlap between the two. Yet we do not (or, at least, cannot) use race as a marker of metaphysical value or worth, and it is for this reason that both Saini and Spencer are incorrect.

Saini counters throughout her book, most notably when she interviews David Reich in chapter 7, that racial categories are mere (i.e. imaginary) social constructions. Consider the following paragraph:  

There’s some truth to be found here: racial categories are indeed socially constructed. Consider the list of racial categories proposed below (from chapter 5 of Nicholas Wade’s book on the topic):

By definition, these are social constructions, because they were assembled by a social community (scientists) and are subject to change and redefinition over time. But does this truly mean, as Saini & Co. assume, that they are worthless and harmful? Don't all categories of human knowledge – fashion, physics, religion, law – operate the same way, being constructed by a social community and then applied for a specific purpose within a relevant domain? To her credit, the paragraph above does contain the remarkable admission that "some categories may be useful", but it discredits itself by performing a full 180 to conclude that race is both useless (i.e. invalid; not fit for purpose) and pernicious – a synonym for wicked and malevolent.

A common-sense response might look at some examples of what we commonly call 'race' in order to evaluate how harmful or inaccurate it really is. African Americans are a good place to start, because they are genetically heterogeneous to a degree that makes any notion of biological homogeneity hard to defend. Those considered racially 'black' in America include the original African Americans of West African slave descent, recent arrivals from East African countries like Somalia and Ethiopia, and Khoisan immigrants from South Africa (who are described in Wade's book as a separate racial category altogether). It is therefore unsurprising that African Americans have among the highest levels of intragroup genetic diversity in the world, and are composed on average of roughly 79% African ancestry. In the case of individuals with 0% African ancestry like Shaun King, it is very clear that African/black American is a social identity or ethnic grouping, bearing little resemblance to the biological category Wade proposes.

For people like Saini, the definitional breadth of blackness in the American social context is the strongest possible case for her argument. Yet if we concede that 'black' is too broad to be useful, we tacitly accept that any trends, gaps, or disparities revealed by adopting the category are just as useless as the category itself. Prostate cancer, one of the most common cancers among Western men today, occurs among black Americans at rates "50–60 times higher than the rates in Shanghai, China." Do we really think that pretending black and Asian Americans are equally susceptible to prostate cancer will rid us of a "nonsense" and "wicked" category? Or would it worsen the already severe health disparities dividing America along racial lines today?

We don't have to speculate – we can sanity-check this by looking at a country overtly opposed to racial categories. France has a legal prohibition on collecting demographic statistics on ethnicity or race, which it has ardently enforced for many years. Yet despite this, the French government assiduously collects data on the proportion of newborns considered 'at risk' for sickle cell disease – a genetic disorder overwhelmingly concentrated among people of Sub-Saharan African ancestry. If France really avoided collecting statistics on racial demography altogether, then the life expectancy of its black population, as well as the efficiency of its health service, would obviously suffer as a result. How this would further the cause of justice, let alone science, is never explained by Saini – but it is consistent with a set of policy measures her book strongly supports.

While Superior commendably admits that "understanding [medical] correlations is important", it never tells us why health disparities should occupy a unique position among behavior, socioeconomic outcomes, social attitudes, and every other characteristic that will, on average, differ between groups – as if knowing one is important, but no harm will befall a society that ignores every other disparity that exists (???). Paraphrasing the section quoted above, we get: 'no need or value to categorize people biologically, but understanding the medical implications of biogeographic ancestry is important, but most categories are nonsense, but there are some physical differences, but most categories are useless, but some are useful, but race above all else is a wicked and insidious way to interpret the diversity we believe to characterize the human species.' While hyperbolic, this is no less extreme or radical than the claims and normative proposals found in Superior, particularly in its sections on cognitive ability.

You could counter that any concept of race which lumps Ethiopians, West Africans, the Khoisan, and Shaun King into a single category is inherently silly, and should be discarded or refined. This is a reasonable suggestion, but it fails to acknowledge the advantages of 'fuzziness' we'd trade away by enforcing a more strictly genetic interpretation on the general population. Saini is well aware of this, rightly noting:

“…according to the rules laid out by the US government in its 1997 Office of Management and Budget standards on race and ethnicity, people who originate in Europe, the Middle East, and North Africa are automatically classified as white. Since Hefny arrived from Egypt, he is officially white.”

It’s certainly true that the ostensibly racial category of ‘white’, like ‘black’, is applied very openly in the United States – but why exactly is this a bad thing? After all, if our critique rests upon the divisiveness of race, then we should aim for racial categories that are open and inclusive to the utmost extent that doesn’t compromise the quality of information on health disparities or socioeconomic gaps in our societies. If we don’t bother with pragmatic utility and instead focus solely on the strict scientific accuracy of racial categories, we’re beginning to sound alarmingly close to actual 20th-century Nazis – something that should tell us we’re going in the wrong direction.

Stepping back into the safe territory of common sense, we might ask why on earth Saini gets it so wrong. After all, if categorizing humans into groups based on shared genetic ancestry can help people (which it can) and if these categories can be relatively inclusive (which they appear to be) then why oppose the effort? For our answer, we turn right back to Saini’s prologue, where she writes:

“We invent hierarchies, give meaning to our own racial categories…” “…once defined, these “races” rapidly became slotted into hierarchies based on the politics of the time, character being conflated with appearance, and political circumstance becoming a biological fact.”

Above all, it is at these claims that I cannot help but dig in my heels. The idea that race is made meaningful by hierarchy is as absurd as the idea that gender is intrinsically hierarchical. The notion that categorizing people based on shared ancestry invariably conflates character with appearance suggests that well-intentioned scientists like David Reich are ultimately motivated by a sinister desire to determine moral goodness from physical appearance. Saini seems to acknowledge that race can be medically useful and sociologically informative, but for her such benefits are a mere coincidence of its true purpose: to rank-order the moral importance of an individual life based on racial ancestry, so that we can treat people differently.

Perhaps most importantly, I'd suggest that this would be serious cause for concern, were any of it true and not dramatic fiction. The idea of moral worth or value sounds abstract or arcane to many of us today, but it was very tangible for the passengers of the sinking Titanic, where men (being of lower moral value) resigned themselves to death while women and children descended to the safety of the lifeboats. The case of the Titanic shows that when push comes to shove, 'moral value' ultimately means 'who gets to live and who has to die', so it is definitely a subject worth raising if and when we feel there is valid cause for concern. Nevertheless, the fact that military servicemen and first responders routinely give their lives for people of racial groups that sit 'lower' on the fictitious racial hierarchy Saini believes the rest of us subscribe to should give you a sense of how valid this concern really is.

Ultimately, Superior is very much like Inferior in that both books rest upon fictitious ideas of 'race' and 'gender' that are always assumed, never justified, and shared only among her fellow ideologues, who are liberally quoted in every chapter as if to hammer their bad and unpersuasive ideas right into your skull. Both of her books start not with their subject matter – race, or gender – but with her: with personal stories from her background and upbringing, and the various ways in which she was 'unjustly' and 'inaccurately' categorized while growing up as a brown girl in Britain. It could be that this writing style is popular in circles I'm unfamiliar with, or it could be that Saini – like so many others who simply assume the views of their 'opponents' – is projecting her ghosts and demons onto an audience that sorely deserves better.

Lies, damn lies, and academics on GWAS

A colleague of mine back at Cambridge (my good ol' alma mater) studies the history and philosophy of science, and they recently told me about a lecture they attended on GWAS. Or perhaps the lecture was on the problems with GWAS, because the impression they conveyed would certainly support that interpretation.

One criticism that stuck out to me in particular was the claim that GWAS cannot be trusted because current and future GWA studies are conducted on databases compiled from the results of previously published studies, which allows false positives and mistakenly identified variants to proliferate as the research area develops. This, of course, is why we should be very careful about relying on GWAS, especially for studies of intelligence differences – something the lecturer took pains to emphasize.

This situation interested me, because two things are going on at different levels. On one level, the lecturer is completely correct – even a small margin of error will inevitably produce some probability of false positives being compounded in future research, allowing small mistakes to accumulate into big mistakes over time. In truth, this is a feature of human knowledge systems (to which the scientific method belongs) more broadly, because the validity of every next step in science depends upon the validity of previous steps, and there is always a non-zero chance that previous steps were wrong-footed. As Gregory (Role of Probability Theory in Science) states:

“Of course, any theory makes certain assumptions about nature which are assumed to be true and these assumptions form the axioms of the deductive inference process… For example, Einstein’s Special Theory of Relativity rests on two important assumptions; namely, that the vacuum speed of light is a constant in all inertial reference frames and that the laws of nature have the same form in all inertial frames”

This is a useful example because Einstein's assumptions are not a priori; they are assumed on the basis of other developments in physics and natural philosophy which preceded them. As any coder will know all too well, this cascade of 'potential errors' means not only that you have a reasonable expectation of encountering an 'actual error' within the system of knowledge you're dealing with, but also that such errors may be concealed within your current and future research projects conducted within that framework.

So again, since GWAS is a highly technical application of the scientific method for specific purposes, and this cumulative error probability is indeed a feature common to all science, it could be described as a 'pitfall' of GWAS. But in accepting that GWAS is 'sketchy' or 'unreliable' because of this, we are also forced to accept that all science – the science that drives our cars, powers our lights, and cools our refrigerators – is equivalently imperiled. Have you ever been cautioned about the untrustworthiness of lightbulbs or refrigerators due to these universal epistemic constraints? You can probably guess by now that something else is going on here.
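As a minimal sketch of the compounding point, suppose (purely hypothetically) that every step in a chain of dependent findings carries the same small, independent probability of being wrong. The chance that the chain contains at least one error grows quickly with its length:

```python
# Probability that a chain of n dependent findings contains at least one error,
# assuming an independent per-step error probability p. Values are hypothetical.

def chain_error_probability(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

for n in (1, 10, 50, 100):
    prob = chain_error_probability(0.02, n)
    print(f"p = 2% per step, {n:>3} steps -> P(at least one error) = {prob:.1%}")
```

The arithmetic applies just as much to the physics behind your refrigerator as it does to GWAS, which is exactly why it cannot be used to single out GWAS.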

Let’s try a thought experiment. Imagine two men are having a conversation about a cute girl they both know. MAN-A is curious about the girl, and asks MAN-B for his opinion. Imagine that MAN-B excoriates the girl for the following reasons: “Oh hell no dude, that girl is disgusting. Do you realize she shits? Like, she actually goes to the bathroom? And what’s worse, I heard she gets periods constantly, and even sneezes in the springtime. I wouldn’t be caught dead with her.”

If MAN-B in our thought experiment seems stupid, misogynistic, or even totally detached from reality, it's because he is. As unattractive as human bodily functions might be, they're described in relation to the 'human body' for a reason: they're universal. They are features, not flaws, of human physiology that every one of us still alive shares. If you ever find yourself interested in a nice girl you meet and hear similar remarks reflected in a mutual friend's opinion of her, he really shouldn't be your friend.

The illogicality encapsulated within this highly vulgar and contemptible example, where MAN-B reveals his nasty attitudes through ridiculously discriminatory standards, is really the same thing you’re seeing in the GWAS example given above. It makes no sense to criticize a girl for having features common to all people or women, unless you’re an utter jerk with a grudge against them. In the very same way, it makes no sense to argue that GWAS are invalid or untrustworthy because of a feature common to literally all science. Yet if you are a hypocritical person who simply dislikes GWAS specifically, it makes perfect sense for you to use flawed reasoning and discriminatory standards to support your equally unjustifiable hostility.

To clarify, I am not saying (nor have I ever said) that GWAS is or should be free of criticism. One of the most important criticisms of GWAS is the distinctly WWEIRD (White, Western, Educated, Industrialized, Rich, Democratic) shade of GWA study participants, which prevents people lacking these characteristics in other parts of the world from sharing in the undeniable benefits that GWAS has brought to healthcare and family planning. Consider how fundamentally different this is from previous eras, when new innovations (e.g. the railway, 3D printing) could bring benefit on a global scale regardless of where, or by which demic group (whites and the Japanese, respectively), they were pioneered. Today, cutting-edge GWAS results on educational attainment, disease risk, and psychopathology continue to expand the scope of informed options available to white prospective parents and healthcare recipients, for whom these results are applicable. But if you happen to be Desi, or Asian, or Sub-Saharan African, or of any other genetically distinct grouping, these benefits are often unavailable. Razib Khan wrote about this recently:

Because most GWAS are performed in European populations, PRS values for individuals not of European ancestry are far less accurate. This phenomenon is caused by several factors. One of the major ones is that each population has genetic variations that cause diseases special and unique to a given population (“private alleles” in the jargon). Studies which use only Europeans cannot detect unique variation in non-European populations by definition. Those variants are not found in Europeans! Additionally, sometimes genetic variants even give different risks in Europeans than non-Europeans because of interactions of genes. The predictions in one population do not transfer to another.

This is a serious problem with current GWAS research that I would expect any decent person to express concern about. It becomes especially relevant for family planning, because in a number of non-Western societies infanticide remains a common method for dealing with children with unwanted genetic diseases – something that embryo selection, for instance, would render obsolete.

Clearly, this is not what the lecturer said. The lecturer attacked GWAS on the basis of ‘muh cumulative error probability’ despite this being a feature inherent to the scientific method itself. All before an audience of philosophy of science students, no less.

I am not in favor of arbitrarily attributing ulterior motives, particularly bad faith, to those with whom I disagree. But in cases where the fallacies are so obvious and the discrimination so blatant, it is almost unreasonable to think that underlying hostility isn't at play. It would be completely fine to give a lecture on epistemic issues in science in which you discuss cumulative error probability in relevant GWAS cases; to selectively apply this principle to undermine GWAS as a whole, however, is a logically invalid weaponization that says more about you as a lecturer than it does about GWAS itself. Perhaps if you had actually done your research into the topic you claim to know so much about, you would be lecturing instead on real issues specific to GWAS, such as the lack of diversity mentioned earlier. By neglecting to do so, you reveal yourself to be ignorant of the discussions within the relevant disciplinary communities (e.g. BehavGen) regarding the limitations of GWAS that aren't simply universal features of the scientific method.

Again: if a friend badmouths a girl because she has a feature literally everyone else has, he probably shouldn’t be your friend. If a lecturer badmouths GWAS because it shares features common to all applied scientific fields, well…

…You’re probably in college.

No, that’s not polygyny… comments on Ross et al. (2018)

The Royal Society recently put out a massive paper by Ross et al. with over 9000 authors (you know, one of those) on polygyny and wealth inequality. The title, "Greater wealth inequality, less polygyny: rethinking the polygyny threshold model", would have you think that the authors refuted or quantitatively disproved the polygyny threshold model with some sophisticated mathematics, but unfortunately this is not the case. Instead, the paper uses a strange mixed sample of hunter-gatherer and highly developed industrial populations to argue that the transition to agriculture increases socioeconomic inequality and additionally produces conditions of subsistence living that make polygyny effectively impossible for most people.

Don’t you love it when the author and affiliation list is so big you can’t even screencap it? Maybe it’s deliberate!

Firstly, we should recognize that this doesn't amount to a refutation, or even the titular 'rethinking', of the polygyny threshold model. While the results of their quantitative analysis are basically legit, the authors have effectively based their study on a tautological proposition: subsistence living results in no surplus wealth (also tautological), which means that it is exceedingly rare for polygyny to be mutually beneficial. Alright. So where's the challenge to the polygyny threshold model?

I have read a lot about polygyny, but I have never encountered the claim that polygyny ipso facto increases linearly with socioeconomic inequality. Rather, the claim is that conditions of high socioeconomic inequality will reliably produce polygyny, because male reproductive success is subject to greater resource-dependent elasticity than female fitness due to inherent biological constraints (e.g. nine months of pregnancy). This great presentation has more details, but for those with little time:

I had to screenshot this in word since I don’t have LaTeX on my WordPress acc ;_;

Or if you prefer (from the presentation linked above; this contains an error, as the 1948 paper cited is by A.J. Bateman, not Bateson):
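In lieu of the screenshots, here is a minimal LaTeX sketch of the threshold logic as I understand it (my own paraphrase of the standard model, not necessarily the exact formulation in the slides): a female does better joining an already-mated male $j$ than pairing with an unmated male $i$ whenever

$$ f\!\left(\frac{R_j}{n_j + 1}\right) > f\!\left(R_i\right), $$

where $R_j$ and $R_i$ are the resources held by males $j$ and $i$, $n_j$ is the number of females already mated to $j$, and $f(\cdot)$ maps a female's resource share onto her expected fitness.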

To be fair, the authors recognize this by stating their intention merely to "extend" the polygyny threshold model, but I'd argue they haven't done so in a way significant enough to merit the "rethinking" boast. This is not to suggest the paper has no value. What the authors have actually done is model the conditions under which polygyny can take place in a largely monogamous society living at subsistence – unironically a notable achievement. This is a far more interesting result, and one that merits wider recognition than the paper has so far received.

There are still some problems though. For instance, the paper notes:

“Sequential marriage can be considered a form of polygyny insofar as men typically replace divorced wives with younger women, allowing a subset of males in the population to increase their lifetime reproductive success relative to less wealthy males in the population, as has been shown in many of the populations sampled…”

Now this actually is a problem, since the definition the authors are using is not "polygyny" but "effective polygyny", so defined. I hate it when researchers redefine constructs in this ad hoc fashion (especially when it isn't highlighted in the abstract), because it can mislead people who don't read the full paper – and most of the postdocs I know don't. Luckily, I did.

I think the problem with folding sequential marriage into a working definition of polygyny is that there are substantial qualitative differences between these behaviors. For instance, technical polygyny (one man, multiple women at the same time, in a sexually exclusive [typically marital] arrangement) actually alters the operational sex ratio, among other things. Sequential marriage, by contrast, only means that the available pool of females includes larger numbers of women who already have children – that is, single mothers. Of course this may change the calculus for male satisfaction or some other outcomes, but these are not equivalent to the social effects we expect from a normatively polygynous mating equilibrium. For example, sequential marriage lacks some of the reported correlates of polygynous mating, such as the suffering and self-reported detriment to well-being noted amongst women in polygynous marriages (source from an observational study in Cameroon). I understand that well-being wasn't strictly relevant to Ross et al.'s analysis, but it DOES have implications for the precise level of the polygyny threshold. A situation in which a woman is going to be a second- or third-order co-wife is very different from one in which she is merely a second or third sequential wife. These differences matter quite a lot if we're paying any attention to the implications for (1) the OSR, (2) female well-being and gender inequality, and (3) male violence and intrasexual competition, among many other things.
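To illustrate the OSR point with a toy calculation (all numbers are mine and purely hypothetical):

```python
# Toy illustration: concurrent polygyny skews the operational sex ratio (OSR),
# whereas sequential (serial) marriage does not. All numbers are hypothetical.

men, women = 100, 100

# Concurrent polygyny: 10 men each hold 3 wives at the same time,
# removing 30 women but only 10 men from the mating pool.
osr_polygyny = (men - 10) / (women - 30)    # unmated men per unmated woman -> ~1.29

# Sequential marriage: the same 10 men marry 3 women each, but one at a time,
# so at any given moment only 10 women are off the market.
osr_sequential = (men - 10) / (women - 10)  # -> 1.00

print(f"OSR under concurrent polygyny: {osr_polygyny:.2f}")
print(f"OSR under sequential marriage: {osr_sequential:.2f}")
```

Even in this crude form, the concurrent arrangement leaves a surplus of unmated men that the sequential one never produces.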

Sourcing locations for data used in the paper. Does this look representative to you?

Now look, I'm not trying to be an ass and dismiss all the hard work the forty-thousand authors of this study did. But I do find it somewhat annoying when people publish work under "GOTCHA!" titles like "rethinking XYZ" when nothing comparable has actually taken place. Far from rethinking it, the authors actually RELIED ON the polygyny threshold model for their analysis, and came to the conclusion that the agricultural transition killed the incentive for polygyny amongst most normal people living at subsistence. Fair enough. But why not just say so?

IMHO, the far more interesting result from the paper is this: we know that the transition to monogamy occurred around the transition to agriculture in some societies, and this paper provides some really awesome and useful analysis to explain why that might have happened. What it DOESN'T do is explain why monogamy actually became a social institution to the exclusion of plural marriage. Just because it isn't worth it to have 2+ wives doesn't mean that your society will necessarily ban having 2+ wives. We still don't have an answer for why polygyny becomes legally and socially prohibited in these agricultural societies. However, I think that primate inequality aversion (as exhibited by this outraged capuchin monkey) might be a good place to start.

I don't have the data to hand, but I do have a hypothesis. The agricultural transition makes polygyny functionally impossible for the overwhelming majority of people, who are living at subsistence. But it does NOT affect the ability of men with sufficient social standing and resources to obtain and retain multiple wives. Historically, such men were stratified into classes or castes – merchants, the Japanese 商, and so on. It seems plausible that the impoverished majority of monogamous males (and perhaps their wives!) would have expressed strong opposition to their rich rulers taking multiple wives, and rallied to condemn the behavior. Others have articulated this hypothesis before (e.g. Henrich, Boyd, and Richerson, 2012), but this study provides some useful background evidence for its plausibility. If you're a man farming away in a Neolithic village under fairly awful living conditions, you might be able to tolerate paying taxes to your overlord despite his nice villa on top of the hill. But what if he has 6 wives and your daughter is one of them? Perhaps there is an 'outrage threshold' we need to think about alongside the polygyny threshold.

God, why does Gurlockk get to have the biggest rocks, the shiniest gems, and 12 wives when I can’t even count past ten?

The meta-analysis that wasn’t: assessing Flynn Effects through diachronic change in ICV

A while ago I was conducting a meta-analysis of diachronic variation in cranial volume measurements for different East Asian populations. I got into the project after being inspired by Lynn's suggestion that the Flynn Effect is primarily nutritional, since presumably this would also show up as an effect on height, head size, and thus intracranial volume (ICV). It might even be possible to control for ICV changes in order to isolate the direct change in IQ over time, which would be a more rigorous way to compare Flynn Effect magnitudes. What a great idea, I thought.

Unfortunately, as I got deeper into the project, I realized that virtually none of my data could be traced back to any obtainable research. A lot of the numbers came from papers that are only available in physical archives in Japan or Korea, so I did what any good researcher does and promptly gave up.

However, even with the limits of my data integrity acknowledged, I was still able to produce some plots of the cranial volume measurements that show some interesting results. Here's a scatterplot I built using values from all the studies I could find; the x-axis shows the year of each study's publication, not the date of subject collection, death, or measurement. Note that all datapoints are n-unweighted (raw averages).

While at first glance there doesn't seem to be much to say about this graph, it's noteworthy that there don't appear to be (when plotted in this form) any outliers, but rather two relatively clear clusters by sex. The female cluster is clearly far less tightly grouped than the male one (the source of its statistical insignificance, visible in the trendlines below), yet the two clusters have mutually exclusive ranges, which is interesting. Since all of the studies here had sample sizes above 20, we'd expect the sampling distributions of their means to be roughly normal, meaning that the group-level sex difference in ICV is likely very reliable. This is no surprise, seeing as ICV is effectively a function of body size and height, which vary by sex in the same way. This becomes more obvious when we ignore the (insignificant) ethnic variation and look purely at the sex differences:

Overall it is fairly clear that ICV as reported in studies is increasing over time. However, note the lack of any female reports before the early 1920s – a sign that our data is shitty. For many of the earlier studies no sex information was present, and although in some cases I was able to make informed estimates based on the averages (e.g. a mean ICV value close to 1500 is almost certainly male or mostly male in content), it was never really certain how accurate or representative the values might be. However, if we are correct in inferring that earlier studies likely did not disambiguate male and female samples (as opposed to exclusively using male samples), then the actual sex gap would be even greater, since the female values would go down and the male ones would go up. This would also throw the trendline for diachronic variation into doubt, which is currently significant (p < 0.05) when females are excluded.
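For anyone curious about the fitting, here is a minimal sketch of the kind of sex-stratified linear fit behind trendlines like these. The values below are placeholders I made up for illustration, not the real dataset:

```python
# Sketch of a sex-stratified trendline fit (mean ICV vs. publication year).
# The data points below are placeholders, not the actual dataset.
from scipy import stats

male = {
    "year": [1910, 1935, 1960, 1985, 2005],
    "icv":  [1430, 1445, 1460, 1480, 1495],   # mean ICV in cc (placeholder values)
}
female = {
    "year": [1925, 1950, 1975, 2000],
    "icv":  [1280, 1290, 1305, 1315],
}

for label, data in (("male", male), ("female", female)):
    fit = stats.linregress(data["year"], data["icv"])
    print(f"{label}: slope = {fit.slope:.2f} cc/year, p = {fit.pvalue:.3f}")
```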

 

Email me if you want the full dataset to work on.

The Dangers of Ignoring Cognitive Inequality

This is a repost of an article originally appearing in Quillette.

On Sunday 28 April 1996, Martin Bryant was awoken by his alarm at 6am. He said goodbye to his girlfriend as she left the house, ate some breakfast, and set the burglar alarm before leaving his Hobart residence, as usual. He stopped briefly to purchase a coffee in the small town of Forcett, where he asked the cashier to "boil the kettle less time." He then drove to the nearby town of Port Arthur, a former colonial-era convict settlement now populated by only a few hundred people. It was here that Bryant would go on to use the two rifles and a shotgun stashed inside a sports bag on the passenger seat of his car to perpetrate the worst massacre in modern Australian history. By the time it was over, 35 people were dead and a further 23 were left wounded.

Astoundingly, Bryant was caught alive. He was arrested fleeing a fire at the house in which he had barricaded himself during a shootout with the police. He later pled guilty to a list of charges described as "unprecedented" by the presiding judge, and was sentenced to life in prison without the possibility of parole, sparing the survivors and the victims' families the suffering (and perhaps the catharsis) of a protracted trial. Yet, in spite of his guilty plea, Bryant did not take the opportunity provided by his official statement to offer any motive for his atrocities. Instead, he joked, "I'm sure you'll find the person who caused all this," before mouthing the word 'me.' Intense media speculation followed, much of it focused on Bryant's history of behavioral difficulties, which were offered as possible evidence of a psychiatric disorder such as schizophrenia (which would, in any case, have been far from sufficient to causally explain his crimes). However, the most notable and concrete fact of Bryant's psychological condition was his extremely low IQ of 66—well within the range for mental disability.

Lascar Monument to Port Arthur massacre (wikicommons)

IQ scores are classified in a number of ways, all of which are broadly similar. The Wechsler Adult Intelligence Scale (WAIS-IV) establishes seven categories of IQ score. Most of us fall into the 'Average' band, spanning the 90–109 range. Those scoring 130 or higher are considered 'Very Superior.' Conversely, scores of 69 and under are classified as 'Extremely Low,' and can qualify the scorer for a diagnosis of 'mild retardation' according to the APA's Diagnostic and Statistical Manual of Mental Disorders. It is into this band that Bryant's score falls.

The connection between intelligence and behavioral problems, such as Conduct Disorder (CD) or Antisocial Personality Disorder (APD), was well-known around the time of the Port Arthur Massacre. A review by biostatistician and UCSD professor Sonia Jain cites contemporaneous studies to suggest that low IQ scores in childhood should be considered a risk factor for APD and CD. In 2010, several psychologists published results from a longitudinal study containing data on over a million Swedish men, who were tracked from conscription for a little over 20 years. They found that IQ scores tested during conscription were a significant and robust predictor, not only for APD or CD, but for all categories of mental disorders. Conscripts with low IQ were substantially more likely to be diagnosed with one or more mental disorders, to suffer from mood and personality disorders, and to be hospitalized for mental illness. Those in the lowest band—like Bryant—were most at risk of severe psychological disorders.

a & b: Hazard ratios for admission for different categories of psychiatric disorder according to a nine-point IQ scale. The highest IQ score (coded as 9) is the reference group. Estimates are adjusted for age at conscription, birth year, conscription testing centre, parental age and parental socioeconomic status. (n=1,049,663)

While these correlations are concerning, they do not offer an explanation for Bryant's atrocities. In a population where intelligence is normally distributed with a mean of 100, a little over two percent of people would attain IQ scores as low as Bryant's. Roughly a further 12 percent would receive IQ scores somewhere below 84—under the cutoff of the Armed Forces Qualification Test (AFQT), used to determine suitability for admission into the US Army until 1980. Careers for those below this level are extremely rare—a fact that might help explain the correlation between low IQ and an enhanced risk of criminal offending, given the scarcity of well-paid jobs for those with an IQ below 84.
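As a quick sanity check on those population shares, assuming IQ is normally distributed with a mean of 100 and a standard deviation of 15 (the conventional norming), a few lines of Python give the proportions below each cutoff:

```python
# Share of a normal IQ distribution (mean 100, SD 15) falling below given cutoffs.
from scipy.stats import norm

mean, sd = 100, 15
for cutoff in (70, 84):
    print(f"P(IQ < {cutoff}) = {norm.cdf(cutoff, loc=mean, scale=sd):.1%}")
```

On these assumptions, roughly two percent of people fall below 70 and a little over fourteen percent fall below 84 in total.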

But much of this correlation is due to street-level petty and violent offenses, not mass murder, and it would be abhorrent—obscene, even—to suggest that people with a low IQ should be treated with suspicion, or as murderers-in-waiting. In almost all cases, these individuals pose a risk to no one but themselves, and are more likely to fall prey to victimization by others. On the other hand, it is equally irresponsible to ignore the specific difficulties which those with a low IQ face. The consequences of this wishful thinking—however noble in intent—can be devastating.

Perhaps the best example is offered by the history of the Cold War. While the American military-industrial complex was sophisticated enough to provide the United States with all the arms and armaments it could possibly hope for, there are always things that money cannot buy. In this case, it was bodies—young American men needed to fight on the ground in 1960s Vietnam, where they found the most unforgiving of battlefields among the region's unassuming jungles and innocuous rice paddies. The unusually high attrition rate of soldiers posted there, together with the frequent use of student deferments or feigned illness to dodge the draft (President Trump's 'temporary bone spurs' being one particularly famous example), meant that more troops were needed than the nation was able to supply.

The government was confounded by this problem for some time, attempting halfhearted crackdowns on draft dodgers as a temporary solution, until Secretary of Defense Robert McNamara arrived at a more permanent workaround. The US government would draft men whose low IQ scores had hitherto disqualified them from military service. This stratagem—codenamed ‘Project 100,000’—is detailed along with its dreadful consequences in the book McNamara’s Folly by the late Hamilton Gregory. Gregory witnessed the fate of the low-IQ draftees firsthand while he was a soldier in Vietnam. These draftees—cruelly nicknamed ‘McNamara’s Morons’—were generally capable of completing simple tasks, but even a simple task imperfectly executed can be disastrous in warfare.

US Secretary of Defense Robert McNamara at a press conference on Vietnam (26 April, 1965)

A case study in the book is 'Jerry' (not his real name), a draftee from the 100,000 who had been assigned guard duty at a camp on the Quan Loi Green Line. Jerry's task was to challenge anyone approaching the camp by calling, "Halt! Who goes there?" followed by "Advance and be recognized!" once a response had been obtained. The task was minimally demanding, given the clearly visible differences between an American soldier and the average Vietcong guerrilla. But when a well-liked American officer returned to camp, Jerry bungled his instructions: upon seeing the officer approaching, he yelled "Halt!" and then opened fire, killing the man where he stood. Jerry subsequently disappeared, in what was either an act of remorseful abscondence or murder by outraged members of his battalion. In another case described by Gregory, one of the 'morons' played a joke on his squadmates by throwing a disarmed hand grenade at them. Despite being beaten up for it, he found the prank so amusing that he repeated it every day until the inevitable happened: he forgot to disarm the grenade, killing two soldiers and grievously wounding several more.

What happened to many of the 100,000 (whose actual total exceeded 350,000) is not hard to predict. “To survive in combat you had to be smart,” Gregory writes. “You had to know how to use your rifle effectively and keep it clean and operable, how to navigate through jungles and rice paddies without alerting the enemy, and how to communicate and cooperate with other members of your team.” Meeting even one of these minimum requirements for survival on a battlefield demands a certain level of verbal and visuospatial intelligence, which many of McNamara’s draftees did not possess. Their fatality rate in Vietnam ultimately exceeded that of other GIs by a factor of three.

The danger of physical harm faced by those with a low IQ is not restricted to the battlefield. A 2016 study by four psychologists using data from the Danish Conscription Database (containing 728,160 men) found low IQ to be a risk factor for almost all causes of death. A drop in IQ of a single standard deviation (roughly 15 points) was associated with a 28 percent increase in mortality risk. The association between low IQ and mortality was particularly strong for homicide and respiratory disease (such as lung cancer). The elevated homicide rate could reflect a tendency for those with low IQ to find themselves in dangerous situations, perhaps due to a lack of economic opportunity or an increased likelihood of being victimized by predatory individuals. Similar factors could explain the prevalence of respiratory disease, which may be a product of higher rates of smoking as well as a greater likelihood of living in more polluted industrial areas where low-skilled work is easier to find. Clearly, being born with a low IQ can be enough to set one up for an unlucky and unhappy life. But could low IQ have contributed to—not explain, but be a factor in—the massacre committed by Martin Bryant?
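To put that figure in perspective, the minimal sketch below shows what a 28 percent increase per standard deviation would imply for larger IQ gaps, assuming (as is conventional for hazard ratios, though not something the Danish study is quoted as saying here) that the effect compounds multiplicatively across standard deviations.

```python
# Illustration of how a 28 percent higher mortality risk per 1-SD drop in IQ
# scales if the effect compounds multiplicatively across standard deviations
# (the usual hazard-ratio convention; an assumption made here for illustration).
HR_PER_SD = 1.28  # hazard ratio per one-standard-deviation (~15-point) drop

for sd_drop in (1, 2, 3):
    hr = HR_PER_SD ** sd_drop
    print(f"{sd_drop} SD drop (~{sd_drop * 15} IQ points): "
          f"hazard ratio ~ {hr:.2f}, i.e. {hr - 1:.0%} higher mortality risk")
```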

To answer this question, we have to move beyond bare correlations between IQ and particular outcomes, and consider that those with low IQ are much more likely to experience misfortune in seemingly every endeavor. Intelligence is what allows us to operate in the world—both on our own, and within the societies we inhabit. Those lucky enough to have a high IQ have an easier time dispatching the various challenges they face, and thus naturally rise within hierarchies of competence. We can imagine any number of these hierarchies, most of which are unimportant (the hierarchy of Rubik’s Cube solving speed, for example, is probably irrelevant), but all of which require some degree of intelligence. Furthermore, some of these areas of success—such as friendship groups, romantic relationships, and professional employment—are so fundamental to the individual pursuit of happiness that to be unable to progress in them is profoundly damaging to one’s sense of well-being and intrinsic self-worth.

This means that having a low IQ doesn’t only make you more likely to be killed or to fall victim to an accident. It also means you’re more likely to struggle to progress up every ladder in life. You’ll often feel permanently ‘stuck at zero’—unable to improve or change your position. Most of us will experience this feeling at least a few times in our lives, whether in school (being unable to break into the ‘A’ grades), in our social lives (being unable to establish or maintain a successful romantic relationship), or in comparatively trivial areas. Yet most of the time it is transient, passing when we switch our efforts to a new endeavor or devise a way to solve the problem. Very few of us know what it is like to have that feeling almost all of the time—to have a large proportion of one’s attempts at self-betterment or advancement frustrated by forces that seem beyond one’s control. Being trapped in such a dismal psychological state for even a brief interval can lead to anxiety, depression, or dependence. In some, this feeling of ‘being stuck at zero’ (that the world is manifestly unfair and against them) will curdle into resentment, and resentment can turn into murderousness.

Martin Bryant’s life, characterized by loneliness, depression, and numerous frustrated attempts at making friends, is replete with examples that follow this pattern. Clearly, his actions mark him as an extreme outlier among those with low IQ—but his troubled life experiences are distressingly representative. Roughly four in every 30 children in classrooms across America are made to compete with their peers for grades and university places despite a low IQ, and with little success. And, like them, Bryant found society’s ‘normal’ to be simply unattainable. Because the role of cognitive ability in childhood success is de-emphasized, and achievement is often treated as a function of effort alone, children in these circumstances can find themselves trying harder than every other child in the classroom while still being admonished to ‘try harder.’ And while wise caregivers abstain from blaming these children outright for their failures, the taboo on acknowledging the importance of intelligence means that low-IQ individuals themselves may be unaware of their condition or its full ramifications, leaving them prone to repeated self-blame that is injurious to self-esteem and mental stability.

None of this is to suggest that those with low IQ, or those who experience a period of being ‘stuck’ due to their cognitive limitations, should be viewed as likely to break the law or engage in violent crime. But it is one possible explanation for the fact that those with a low IQ are more likely to do so than those with an average or high IQ. And the uncomfortable reality is that the resentful, in this case, are somewhat correct in their analysis: they have been set up in a game rigged against them from the very start. Recent research in genomics has confirmed this. A 2018 study in Nature used genetic data from over a million individuals to examine the genetic contributions to educational attainment, and to construct polygenic scores (PGS) that summarize each individual’s measured genetic predisposition. In this study, those within the highest PGS quintile had around a 50 percent likelihood of graduating from college; those in the lowest quintile, only 10 percent. Yet none of this difference in ‘genetic quality’ can be accounted for by individual merit or achievement. It is a difference of crucial importance, yet it is determined for us as individuals by luck alone.
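For readers unfamiliar with the method, the sketch below shows in schematic form how a polygenic score of this kind is typically assembled: a weighted sum of a person’s allele counts, with the weights taken from GWAS effect-size estimates, after which a sample can be ranked into quintiles and compared on outcomes such as college completion. All variant weights and genotypes here are invented for illustration; this is not the actual pipeline used in the 2018 study.

```python
# Schematic, hypothetical construction of a polygenic score (PGS):
# a weighted sum of allele counts (0, 1, or 2 copies of each effect allele),
# with weights taken from GWAS effect-size estimates. All numbers are made up.
import numpy as np

betas = np.array([0.021, -0.013, 0.008, 0.017, -0.005])  # GWAS effect sizes (hypothetical)
genotype = np.array([2, 0, 1, 1, 2])                      # one person's allele counts

pgs = float(np.dot(betas, genotype))
print(f"Polygenic score for this individual: {pgs:.3f}")

# With scores for a whole sample, individuals are typically ranked into
# quintiles and outcomes (e.g., college completion rates) compared across bins.
sample_scores = np.random.default_rng(0).normal(size=10_000)
bin_edges = np.quantile(sample_scores, [0.2, 0.4, 0.6, 0.8])
quintile = np.digitize(sample_scores, bin_edges)          # 0 = lowest, 4 = highest
print("Individuals per quintile:", np.bincount(quintile))
```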

Polygenic Scores from the Nature study (2018)

While generous welfare systems in Western countries do provide benefits to those most disadvantaged by the cognitive lottery, a much larger proportion do not qualify for any assistance at all. Instead, those with IQs below 84 are often forced into arduous manual labor, since they are unlikely to possess the array of qualifications required for non-manual work. Such work leaves them among the most marginalized people in our complex capitalist society—and even these employment opportunities are shrinking under the unrelenting pressure for lower costs and greater efficiency. Job categories like driver, cleaner, and assembly-line worker are rapidly disappearing due to automation, leaving those with low IQ nowhere else to go. While most of us delight in the luxurious comforts heralded by the ongoing automation revolution, these same comforts—self-driving cars, autonomous vacuum cleaners, robotized assembly lines—are poised to render a cognitively vulnerable 15 percent of the population unemployed and unemployable.

What exactly are we doing to rectify or alleviate cognitive inequality? The answer, of course, is that we ignore it and hope it will go away. Continuing to force large numbers of cognitively underprivileged children through the arduous challenges of the standard education system only perpetuates the devastating legacy of intelligence denialism. By pretending away the fact of IQ differences, McNamara drafted the intellectually challenged into a war zone more demanding and lethal than anything they would have faced at home, and thereby caused the needless deaths of thousands. His initiative also left tens of thousands of survivors with debilitating psychological conditions such as post-traumatic stress disorder, and cruelly deprived many thousands of parents and relatives of the chance to see a beloved family member grow old. The apparent fair-mindedness of this act of conservative blank-slatism is belied by its atrocious outcomes, which render it morally indefensible.

Yet while McNamara’s policy has been called “a crime against the mentally disabled,” few have considered what crime might be constituted by our indifference to the cognitively underprivileged within our own societies. Fifty years after McNamara and twenty years after Martin Bryant, we have not yet begun to ask the question: is it really fair for one person to be born with an intellectual assurance of success in navigating the challenges of a twenty-first-century society, while another is born almost certain to fail? Until we accept that people with low IQ exist, and that the ramifications of their condition are indeed severe, how can we even begin to discuss what might be done to alleviate their suffering? The importance of cognitive ability for life success in our technologically complex society makes answering that question a moral imperative—but economic and political leaders have shown scant interest in the issue. Despite the fact that low IQ is correlated with negative outcomes across a large number of domains and afflicts around 15 percent of the population, we seem incapable of treating it like any other public health problem.

Simply wishing away the fact that the genetic and environmental circumstances of a person’s birth inevitably endow everyone—for better or worse—with a personality, a level of sociability, and an intelligence is a form of denialism that serves only our urge for moral exculpation. Pretending that those burdened with low IQ are merely lazy, or lack the appropriate motivation, is a way of absolving ourselves of the responsibility to help them. Accepting that intelligence exists, that it matters, and that the less intelligent are our equals in moral worth and value, and therefore ought to be helped, is the first step in addressing the increasingly urgent need to fully accommodate the cognitively underprivileged.