Before I became more biologically oriented in my thinking, I used to be super into ideologies – not any particular ideologies, but the very idea of ideologies as a category. It’s virtually impossible to have anything approximating a scientific discussion about ideologies, because even the very word is a semantically and semiotically loaded cultural construct; as Žižek says, we cannot look at ideology as we inhabit ideology. However, there are still a few interesting things to say about the topic. Take the following example:
This is one of the goofy graphs I made on the topic, one that just so happens to encapsulate an interesting idea (ignore the arbitrary numbers on the axes for a moment). The question is – who benefits most under a specific ideology? What the curve shows is that an authoritarian society will provide maximal benefit to those unable to coordinate their own affairs, simply because in such a society your affairs are already done for you. However, strong-willed thinkers and creative types will probably suffer in such conditions, lacking autonomy over their ideas and self-expression.
An individualist society, by contrast, will provide maximal benefit to those who have the aptitude to organize their own affairs and excel in absence of supervision. Yet such a society will necessarily provide less assistance to those who are less able to organize their own affairs – some of whom will be desperately in need of such assistance.
So which society is better? One constant feature of human sociality is that there rarely appears to be anything that is objectively ‘better’ than anything else; everything comes down to tradeoffs and opportunity costs. So if you’re Nikola Tesla you’ll probably do better in the individualistic United States, but if you’re a lonely depressed person suffering from the atheistic anomie that seems to grip Western civilization as of late, then you’d likely be better off staying in Serbia. At least homes are cheap there.
This is one of the most idiotic views I’ve ever seen advocated by tenured staff at a Western university. Let me explain why.
Let’s think about this logic for a moment. The statement above effectively asserts the following: since the decision as to whether or not you will eat the dessert is out of your control (the proposition), you are therefore absolved of responsibility for it and should not feel bad (the conclusion).
To assess this proposition, we first have to come to a definition of ‘control’ – which is far trickier than it seems. In this context people typically define ‘control’ in relation to various free-will arguments, which can be broadly dichotomized into ‘pragmatist’ and ‘logico-determinist’ categories. Logico-determinists such as Sam Harris would state that no neurobiological evidence exists whatsoever to support any mechanism isomorphic to free will or free choice, either for humans or for any other organisms, and that, as such, we are not technically in control of anything. Pragmatists would broadly agree with these arguments, but would protest that such argumentation tautologically precludes the meaningfulness of free will as a term, which is contradicted by the semantic significance and lexical omnipresence of the term to modern humans.
I think that whichever view of free will or control we take, the proposition outlined above by the McGill Psychology Department is almost certainly true, but in a far broader sense than you might realize. We live in a world where our actions are contextualized by the sum total of all actions that preceded them, which effectively means that actions are really reactions, and are thus determined by that context. As such, when you inevitably act in accordance with the predispositions encoded into your neural hardware, your brain has in fact “already decided for you,” as per the statement above.
But what about the conclusion? Should it really absolve you from responsibility? If we say that you aren’t responsible for your actions because they were predestined by your brain, then what about in the case of rape? It’s not uncommon to hear, in courtrooms, defendants talk about ‘not being able to control themselves’. The Iraqi refugee who raped a 12-year-old boy in an Austrian swimming pool a few years back defended his actions as the inescapable consequence of a “sexual emergency.”
Murder is another case we might examine. The circumstances typically resulting in physical conflict trigger a ‘fight-or-flight’ response in us, activating our endocrine systems and pumping our bodies full of chemicals such as adrenaline that cloud judgment and promote physical action. In such a situation, it’s also possible to say that your brain made the decision for you, precluding free will.
Now ask yourself – what difference does it make if some (or even all) murders and rapes do lie outside the realm of our control? Are we going to start judging rapists in courts of law on the basis of their proclivity to hypersexuality, refusing to jail any rapists if indeed we see that their brains ‘already decided’ for them? Murder, too? If any country were ever to embark on such an endeavor with the propositions above as guiding axioms, it would quickly find it impossible to judge any person guilty of any crime.
What am I getting at here? The conclusion you’re seeing above is horrifically wrong and terribly dangerous. A deterministic worldview may be scientifically valid, but this does not – and cannot – preclude holding individuals responsible for their actions. The consequences if we don’t – a world in which murderers, thieves, rapists, and worse are not held culpable due to the inevitable influence of their biology and socialization on their behavior – are far too grave to accept.
Even though our actions may not be governed by the kind of ‘free will module’ that proponents imply, there is no reason to suggest that consequences should not follow from those actions. The absence of agency in an action is no basis for withholding a reaction to it. All of which is to say: when actions are determined, consequences must be too.
Before we get into Sam, I’m going to start things off with a postulate by Yuval Harari. Harari is a colleague of Sam’s, the two having spoken on podcasts and referenced each other in written works, so I think it will be interesting to use one of his most powerful statements as a springboard for my demonstration of the incoherence of Sam’s worldview.
Truth and power can travel together only so far. Sooner or later they go their separate ways. If you want power, at some point you will have to spread fictions. If you want to know the truth about the world, at some point you will have to renounce power.
This is a very interesting statement. To understand precisely what Harari means, it is necessary to explain the definitions of both ‘fiction’ and ‘truth’ as used here. To Harari, a ‘fiction’ is a proposition that is accepted by fiat, without underlying backing. We have to understand that Harari is not an individualist, and this may indeed be the most important difference between him and Sam Harris as thinkers. This means that Harari’s ‘fictions’ are not casual statements of mistruth that are shared by individuals, but rather systems-wide simplifications of reality that allow for greater levels of social organization.
Take laws, for instance. Laws are perhaps the prototypical examples of Harari’s fictions in operation (while he would likely demur in favor of God or money, if I discuss these here I’ll risk repeating myself later on). Laws are not grounded in the laws of physics, or the properties of physical matter, or even in our evolutionary biology (not necessarily, at least). Yet by the establishment and acceptance of laws for the regulation of society, higher levels of organizational complexity can be attained. A fiction, in short, is something used on or by a social system to guide it toward certain ends. Dig deep enough into any fiction, and you’ll find an arbitrary proposition at its core.
Truth, by contrast, is the underlying reality of things. I explained ‘fictions’ first, because Harari’s ‘truth’ can in a sense be defined simply as ‘that which contradicts a fiction’. An obvious yet brutal example of a truth is the following: that you and everyone you know is going to die.
Why does that matter? Because within the context of our social systems, we act as if this is not the case. We try our best to save people from death when possible, and to prolong life through whatever means available. We behave as if it is a tragedy, rather than an inevitability, when a large number of people are killed by something unexpected, and in doing so reinforce the idea that death is unnatural, is unnecessary, and (deontologically) ought to be prevented.
This systems-wide behavior reveals a collective ‘fiction’ (strictly in the Hararian sense, of course): we are engaged in a process of collective self-deceit in order to regulate society so as to allow for greater levels of social organization. Without anathematizing death, there is no basis for preventing people from dying, nor for punishing acts that induce death (such as murder). Obviously, any system in which murder is unobjectionable will find it exceedingly difficult to motivate its members to achieve complex tasks, because this generally requires cooperation, which is impossible if prosocially-defective behaviors as extreme as murder are unpreventable (the city of Chicago and its extremely low clearance rate for homicide is a case in point).
With this explained and Harari’s truths and fictions in mind, let’s get back to Sam Harris. Listen to this clip (set to play from 19m 23s) or read the transcript below if you’re short on time. I transcribed the clip carefully and removed some irrelevant muttering, but all errors are (obviously) my own.
Woman: So I have kids, you have kids. Right? Do you have kids?
Sam: Yep. Yep.
Eric Weinstein: We all do.
Woman: All have kids, great. So when it comes to free will, I get it. I’m completely on board, Sam, with your idea that there’s no free will.
Sam: Yep.
Woman: When it comes to raising kids, wh-
S: Don’t tell them. Don’t tell them th- [inaudible]
*AUDIENCE LAUGHS*
There’s a lot to say at this point, but it would be malapropos to cut the remaining context. Let’s hear Sam out. Cont:
W: Sorry but – I have an 18 year old boy, who’s… y’know, gorgeous. And when I’m trying to tell him to do the right thing, and he does something stupid… and then I wanna find out why he did that, I don’t even ask, cuz it’s a stupid question. Cuz he doesn’t even know why he did it, cuz he’s an 18-year-old boy. But when I’m looking at impacting his future behavior, where’s the practical separation between knowing… that there’s really no free will, and wanting your children to be responsible in their behavior and what they do in the world.
S: Okay… Well, this is an important question-
*APPLAUSE*
S: I think that there are many false assumptions about what it must mean to think that there’s no free will. I think there’s no free will, but I think that effort is incredibly important. I mean, you can’t wait around… I think the example I gave in my book is, well, if you wanna learn Chinese, you can’t just wait around to see if you learn it. It’s not gonna happen to you. There’s a way to learn Chinese, and you have to do the things you do to learn Chinese. Every skill or system of knowledge you can master is like that, and getting off of drugs is like that, and getting into shape is like that, and straightening out your life in any way that it’s crooked is like that. But the recognition that you didn’t make yourself, and that you are exactly as you are at this moment because the universe is as it is in this moment has a flip side, which is… you don’t know how fully you can be changed in the next moment, by good company, and good conversations, and reading good books, and… you don’t know, what you – you are an open system. It’s just a simple fact that people can radically change themselves. You’re not condemned to be who you were yesterday.
There’s a little more, but I think this is a good place to pause.
For the most part, Sam Harris is an incredibly rigorous, logical, and consistent thinker. After rejecting Islam I was drawn to Sam because he not only took the axiomatic standpoint of atheism, but also explored the implications and consequences of atheist realism by trying to develop what is essentially his own ‘atheist ethic’. His books ‘Lying’ and ‘Free Will’ are not just his own thoughts directed at a public audience; they’re actually a series of meditations through which Sam challenges himself, redressing areas that had long been considered ‘dead-ends’ wherein the repercussions of atheism (irrespective of its analytical correctness) are so deeply negative that in the final analysis pursuing such a worldview is simply not worth the trouble. In this respect, his philosophy is even comparable to Kantian deontology, which constitutes a similar attempt at grounding human morality in logic and proofs. Not bad, Sam. Not bad.
But nothing within Sam’s credentials can ameliorate the discrepancy we see in this conversation, between Sam’s philosophical idealism (what he calls ‘moral realism’) and his stated claims. Obviously we can let Sam off for the “Don’t tell them” line – it was a joke, and the audience got that. But as he goes on, we see the ethos within “Don’t tell them” repeated, explicated, and justified.
According to Sam, suppressing the ‘truth’ (that free will is an illusion) can at times lead to the actualization of greater potential. Obviously, we all know this already; at the most basic level, that is what religions do. By gathering the local township for collective prayer, meditation, and socialization, religious organizations have since time immemorial been using what Sam and Harari would call a ‘fiction’ to better people’s lives and endow them with a sense of meaning, purpose, and spiritual fulfillment. Yes, I am aware that the same mechanism can also be used for harm – that’s obvious, and unrelated to my point. The key here is that compromising the ‘truth’ in order to attain Hararian ‘empowerment’ is precisely what Sam has criticized religion for.
This is why Sam’s declaration that truth ought to be sacrificed in aid of empowerment (at least some of the time) is of such fundamental importance. His suggestion that we should utilize fictions in order to improve our social reality is devastating for moral realism, because it means that his desired system of social organization is essentially a religion, by virtue of operating along the same principles. By acknowledging that some aspects of reality (truth) should be set aside in the name of functionality (power), Sam accepts the legitimacy of moral systems that uphold fictions for the good of their adherents. It is not just that he acknowledges that this is possible – he actually suggests that it ought to be pursued.
For what it’s worth, Sam himself admits this difficulty later in the video, conceding that what you say to people should be ‘true and useful’. But if it is valid and correct to use baseless fictions (such as the existence of free will) in order to better our lives, then we are instead promoting ‘what is useful and not true’. At this point we are effectively ruling out the possibility that there are ‘right’ or ‘wrong’ modes of social organization, since the discussion now moves to what level of usefulness justifies the abnegation of truth, and so on and so forth. The question ‘which moral system is correct’ becomes ‘which moral systems balance fiction and power appropriately’. Far from moral realism, this perspective is so blatantly pragmatist as to make William James turn in his grave. Moreover, Sam’s dismissal of the validity of religious systems based on their unrelatedness to material reality now seems positively hypocritical in light of his advocacy for those very same methods.
In short, if we accept Sam’s proposition, then it is meaningless to strive for the creation of a social system upholding an ‘objective’ or ‘correct’ moral reality. Instead, the question that then results is: ‘to what extent must we sacrifice the truth in order to attain the truth’. Needless to say, such a question – at least from Sam’s own moral realist perspective – is utterly incoherent.
This is a repost of an article originally appearing in Quillette.
On Sunday 28 April 1996, Martin Bryant was awoken by his alarm at 6am. He said goodbye to his girlfriend as she left the house, ate some breakfast, and set the burglar alarm before leaving his Hobart residence, as usual. He stopped briefly to purchase a coffee in the small town of Forcett, where he asked the cashier to “boil the kettle less time.” He then drove to the nearby town of Port Arthur, originally a colonial-era convict settlement populated only by a few hundred people. It was here that Bryant would go on to use the two rifles and a shotgun stashed inside a sports bag on the passenger seat of his car to perpetrate the worst massacre in modern Australian history. By the time it was over, 35 people were dead and a further 23 were left wounded.
Astoundingly, Bryant was caught alive. He was arrested fleeing a fire at the house into which he had barricaded himself during a shootout with the police. He later pled guilty to a list of charges described as “unprecedented” by the presiding judge, and was sentenced to life in prison without the possibility of parole, thus sparing his victims and other survivors the suffering (and perhaps the catharsis) of a protracted trial. Yet, in spite of his guilty plea, Bryant did not take the opportunity provided by his official statement to offer any motive for his atrocities. Instead, he joked “I’m sure you’ll find the person who caused all this,” before mouthing the word ‘me.’ Intense media speculation followed, the main focus of which was Bryant’s history of behavioral difficulties. These were offered as possible evidence of a psychiatric disorder such as schizophrenia (which would have been far from sufficient to serve as a causal explanation for his crimes). However, the most notable and concrete fact of Bryant’s psychological condition was his extremely low IQ of 66—well within the range for mental disability.
Monument to the Port Arthur massacre (Lascar/Wikimedia Commons)
IQ scores are classified in a number of ways, all of which are broadly similar. The Wechsler Adult Intelligence Scale (WAIS-IV) establishes seven categories of IQ scores. Most of us fall into the ‘Average’ band, constituted by the 90-109 range. Those achieving scores of 130 or higher are considered ‘Very Superior.’ Conversely, scores of 69 and under are classified as ‘Extremely Low,’ and automatically qualify the scorer for a diagnosis of ‘mild retardation’ according to the APA’s Diagnostic and Statistical Manual of Mental Disorders. It is into this band that Bryant’s score falls.
The connection between intelligence and behavioral problems, such as Conduct Disorder (CD) or Antisocial Personality Disorder (APD), was well-known around the time of the Port Arthur Massacre. A review by biostatistician and UCSD professor Sonia Jain cites contemporaneous studies to suggest that low IQ scores in childhood should be considered a risk factor for APD and CD. In 2010, several psychologists published results from a longitudinal study containing data on over a million Swedish men, who were tracked from conscription for a little over 20 years. They found that IQ scores tested during conscription were a significant and robust predictor, not only for APD or CD, but for all categories of mental disorders. Conscripts with low IQ were substantially more likely to be diagnosed with one or more mental disorders, to suffer from mood and personality disorders, and to be hospitalized for mental illness. Those in the lowest band—like Bryant—were most at risk of severe psychological disorders.
a & b: Hazard ratios for admission for different categories of psychiatric disorder according to nine-point IQ scale. Highest IQ score (coded as 9) is the reference group. Estimates are adjusted for age at conscription, birth year, conscription testing centre, parental age and parental socioconomic status. (n=1,049,663)
While these correlations are concerning, they do not offer an explanation for Bryant’s atrocities. In a population where intelligence is normally distributed with a mean of 100, a little over two percent of people would attain IQ scores close to Bryant’s. A further 15 percent would receive IQ scores somewhere below 84—well beyond the threshold for disqualification in the Armed Forces Qualification Test (AFQT), used to determine suitability for admission into the US Army until 1980. Careers for those below this level are extremely rare—a fact that might help explain the correlation between low IQ and an enhanced risk of criminal offending, given the scarcity of well-paid jobs for those with an IQ of below 84.
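For what it’s worth, these tail fractions are easy to check for yourself. Here is a minimal sketch in Python, assuming only the standard psychometric convention that IQ scores are normally distributed with a mean of 100 and a standard deviation of 15 (the thresholds of 70 and 84 are simply the cut-offs discussed above):

```python
# Minimal sanity check of the tail areas discussed above, assuming the usual
# psychometric convention that IQ ~ Normal(mean=100, sd=15).
from scipy.stats import norm

MEAN, SD = 100, 15

def fraction_below(threshold: float) -> float:
    """Fraction of the population scoring below `threshold` under the normal model."""
    return norm.cdf(threshold, loc=MEAN, scale=SD)

print(f"Below 70 ('Extremely Low' band): {fraction_below(70):.1%}")   # ~2.3%
print(f"Below 84 (the AFQT-era cut-off): {fraction_below(84):.1%}")   # ~14%
```

The exact figures shift a little depending on where you draw the cut-offs, but the orders of magnitude match the ones quoted above.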
But much of this correlation is due to street-level petty and violent offenses, not mass murder, and it would be abhorrent—obscene, even—to suggest that people with a low IQ should be treated with suspicion, or as murderers-in-waiting. In almost all cases, these individuals pose a risk to no one but themselves, and are more likely to fall prey to victimization by others. On the other hand, it is equally irresponsible to ignore the specific difficulties which those with a low IQ face. The consequences of this wishful thinking—however noble in intent—can be devastating.
Perhaps the best example is offered by the recent history of the Cold War. While the American military-industrial complex was sufficiently sophisticated to provide the United States with all the arms and armaments it could possibly hope for, there are always things that money cannot buy. In this case, it was bodies—young American men needed to fight on the ground in 1960s-era Vietnam, where they found the most unforgiving of battlefields among the region’s unassuming jungles and innocuous rice paddies. The unusually high attrition rate of soldiers posted there, as well as the frequent use of student deferments or feigned illness to dodge the draft (President Trump’s excuse of ‘temporary bone spurs’ constitutes one particularly famous example), resulted in a shortfall: more troops were needed than the nation was able to supply.
The government was confounded by this problem for some time, attempting halfhearted crackdowns on draft dodgers as a temporary solution, until Secretary of Defense Robert McNamara arrived at a more permanent workaround. The US government would draft men whose low IQ scores had hitherto disqualified them from military service. This stratagem—codenamed ‘Project 100,000’—is detailed along with its dreadful consequences in the book McNamara’s Folly by the late Hamilton Gregory. Gregory witnessed the fate of the low-IQ draftees firsthand while he was a soldier in Vietnam. These draftees—cruelly nicknamed ‘McNamara’s Morons’—were generally capable of completing simple tasks, but even a simple task imperfectly executed can be disastrous in warfare.
US Secretary of Defense Robert McNamara at a press conference on Vietnam (26 April, 1965)
A case study in the book is ‘Jerry’ (not his real name). Jerry was a draftee from the 100,000 who had been assigned guard duty in a camp by the Quan Loi Green Line. Jerry’s task was to challenge anyone approaching the camp by calling: “Halt! Who goes there?” followed by “Advance and be recognized!” once a response had been obtained. This task was minimally demanding due to the clearly visible differences between an American soldier and the average Vietcong guerrilla. But when a well-liked American officer returned to camp, Jerry bungled his instructions. Upon seeing the officer approaching, he yelled “Halt!” and then opened fire, killing the man where he stood. Jerry subsequently disappeared in what was either an act of remorseful abscondence or murder by outraged members of his battalion. In another case described by Gregory, one of the ‘morons’ played a joke on his squadmates by throwing a disarmed hand grenade at them. Despite being beaten up for it, he found this prank so amusing that he repeated it every day until the inevitable happened; he forgot to disarm the grenade, causing the deaths of two soldiers and the grievous wounding of several more.
What happened to many of the 100,000 (whose actual total exceeded 350,000) is not hard to predict. “To survive in combat you had to be smart,” Gregory writes. “You had to know how to use your rifle effectively and keep it clean and operable, how to navigate through jungles and rice paddies without alerting the enemy, and how to communicate and cooperate with other members of your team.” Fulfilling all or any one of these minimum requirements for survival in a battlefield is contingent upon a certain level of verbal and visuospatial intelligence, which many of McNamara’s draftees did not possess. This ultimately led to their fatality rate in Vietnam exceeding that of other GIs by a factor of three.
The danger of physical harm faced by those with a low IQ is not restricted to the battlefield. A 2016 study by four psychologists using data from the Danish Conscription Database (containing 728,160 men) revealed low IQ to be a risk factor for almost all causes of death. A drop in IQ by a single standard deviation (roughly 15 points) was associated with a 28 percent increase in mortality risk. The association between low IQ and mortality was particularly great for homicide and respiratory disease (such as lung cancer). The high homicide rate could reflect a predisposition for those of low IQ to find themselves in dangerous situations, perhaps due to a lack of economic opportunity or an increased likelihood of being victimized by predatory individuals. Similar features could explain the prevalence of respiratory disease, which may be a product of high rates of smoking as well as a greater likelihood of inhabiting more polluted industrial areas where it’s easier to find low-skilled work. Clearly, being born with a low IQ is sufficient to set one up for an unlucky and unhappy life. But could low IQ have contributed to—not explain, but be a factor in—the massacre committed by Martin Bryant?
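To get a rough sense of how that per-standard-deviation figure compounds for someone far from the mean, here is a back-of-the-envelope sketch. Note the assumption, which is mine rather than the study’s: that the hazard ratio scales multiplicatively with each additional standard deviation, as in a typical proportional-hazards model; the figure quoted above is only per single standard deviation.

```python
# Back-of-the-envelope extrapolation of the Danish finding quoted above.
# ASSUMPTION: the ~28% increase in mortality risk per 1 SD (15-point) drop in IQ
# compounds multiplicatively across standard deviations, as in a standard
# proportional-hazards model. The study is quoted here only per single SD.
HR_PER_SD = 1.28

for sds_below_mean in (1, 2, 3):
    relative_risk = HR_PER_SD ** sds_below_mean
    print(f"{sds_below_mean} SD below the mean (IQ ~{100 - 15 * sds_below_mean}): "
          f"~{relative_risk - 1:.0%} higher mortality risk")
# Prints roughly 28%, 64%, and 110% under this assumption.
```

Under this crude model, someone in Bryant’s band (more than two standard deviations below the mean) would face upwards of 60 percent higher mortality risk than average, before ever having made a meaningful choice about how to live.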
To answer this question, we have to transcend mere correlations between IQ and different types of outcome, and consider that those with low IQ are much more likely to experience misfortune in seemingly every endeavor. Intelligence is what allows us to operate in the world—both on our own, and within the societies we inhabit. Those lucky enough to have a high IQ have an easier time dispatching the various challenges they face, and thus naturally rise within hierarchies of competence. We can imagine any number of these hierarchies, most of which are unimportant (the hierarchy of Rubik’s Cube solving speed, for example, is probably irrelevant), but all of which require some degree of intelligence. Furthermore, some of these areas of success—such as friendship groups, romantic relationships, and professional employment—are so fundamental to the individual pursuit of happiness that being unable to progress in them is profoundly damaging to one’s sense of well-being and intrinsic self-worth.
This means that having a low IQ doesn’t only make you more likely to get killed or fall victim to an accident. It also means you’re more likely to undergo difficulties in progressing up every ladder in life. You’ll often feel permanently ‘stuck at zero’—unable to improve or change your position. Most of us will experience this feeling at least a few times in our lives, whether encountered in school (being unable to break the ‘A-grade’), in our social lives (being unable to establish or maintain a successful romantic relationship), or in comparatively trivial areas. Yet most of the time, it is transient—passing when we switch our efforts to a new endeavor, or after devising a way to solve the problem. Very few of us know what it is like to have that feeling almost all of the time—to have a large proportion of one’s attempts at self-betterment or advancement frustrated by forces that seem to be beyond our control. Being trapped in such a dismal psychological state for only a brief interval can lead to anxiety, depression, or dependence. In some, this feeling of ‘being stuck at zero’ (that the world is manifestly unfair and against them) will lead to resentment—and resentment can turn into murderousness.
Martin Bryant’s life, characterized by loneliness, depression, and numerous frustrated attempts at making friends, is replete with examples that follow this pattern. Clearly, his actions mark him as an extreme outlier among those with low IQ—but his troubled life experiences are distressingly representative. Roughly four in every 30 children in classrooms across America are made to compete with their peers for grades and university places in spite of low IQ, and with little success. And, like them, Bryant found society’s ‘normal’ to be simply unattainable. Because the role of cognitive ability in childhood success is de-emphasized, and often treated as a function of effort, children in these circumstances can find themselves trying harder than every other child in the classroom, while still being admonished to ‘try harder.’ While wise caregivers abstain from blaming these children outright for their failures, a taboo on acknowledging the importance of intelligence means that low-IQ individuals may be unaware of their condition or its full ramifications, making them likely to engage in repeated self-blame injurious to self-esteem and mental stability.
None of this is to suggest that those with low IQ, or those who experience a period of being ‘stuck’ due to their cognitive limitations, should be viewed as likely to break the law or engage in violent crime. But it’s one possible explanation for the fact that those with a low IQ are more likely to do so than those with an average or high IQ. And the uncomfortable reality is that the resentful, in this case, are somewhat correct in their analysis—they have been set up in a game rigged against them from the very start. Recent research in genomics has confirmed this: a 2018 study in Nature used genes sequenced from over a million individuals to examine the genetic contributions to educational attainment. This process allows for the construction of profiles of individual ability by evaluating polygenic scores (PGS). In this study, those within the highest PGS quintile had around a 50 percent likelihood of graduating from college; those in the lowest bracket, only 10 percent. Yet none of this difference in ‘genetic quality’ can be accounted for by individual merit or achievement. It is a difference of crucial importance, yet it is determined for us as individuals by luck alone.
Polygenic Scores from the Nature study (2018)
While generous welfare systems in Western countries do provide benefits to those most disadvantaged by the cognitive lottery, a much larger proportion do not qualify for any assistance at all. Instead, those with IQs below 84 are often forced to work arduous manual labor jobs, since they are unlikely to possess the array of qualifications required for non-manual work. These occupations make them the most marginalized in our complex capitalist society—and even those employment opportunities are shrinking under the unrelenting pressure for lower costs and greater efficiency. Job categories like driver, cleaner, and assembly line worker are rapidly disappearing due to automation, leaving those with low IQ nowhere else to go. While most of us delight at the luxurious comforts heralded by the ongoing automation revolution, these same comforts—such as self-driving cars, autonomous vacuum cleaners, and robotized assembly lines—are poised to render a cognitively vulnerable 15 percent of the population unemployed and unemployable.
What exactly are we doing to rectify or alleviate cognitive inequality? The answer, of course, is that we ignore it and hope it will go away. Continuing to force large numbers of cognitively underprivileged children through the arduous challenges of the standard education system is only perpetuating the devastating legacy of intelligence denialism. By pretending away the fact of IQ differences, McNamara drafted the intellectually challenged into a warzone more challenging and lethal than anything they would have faced at home and thereby caused the needless deaths of thousands. Furthermore, his initiative left tens of thousands of survivors with debilitating psychological conditions such as post-traumatic stress disorder and cruelly deprived many thousands of parents and relatives of the chance to see a beloved family member grow old. The apparent fair-mindedness in this act of conservative blank-slatism is belied by its atrocious outcomes, which render it morally indefensible.
Yet while McNamara’s policy has been called “a crime against the mentally disabled,” few have considered what crime might be constituted by our indifference to the cognitively underprivileged within our own societies. Fifty years after McNamara and 20 years after Martin Bryant, we have not yet begun to ask the question: is it really fair for one person to be born with an intellectual assurance of success in navigating the challenges of a twenty-first century society, while another is born almost certain to fail? Until we accept that people with low IQ exist, and that the ramifications of their condition are indeed severe, how can we even begin to discuss what might be done to alleviate their suffering? The importance of cognitive ability for life success in our technologically complex society makes answering that question a moral imperative—but economic and political leaders have shown scant interest in this issue. Despite the fact that low IQ is correlated with negative outcomes in a large number of areas and afflicts around 15 percent of the population, we seem incapable of treating it like any other public health problem.
Simply wishing away the fact that the genetic and environmental circumstances of a person’s birth inevitably endow everyone—for better or worse—with a personality, a level of sociability, and an intelligence is a form of denialism that serves only our urge for moral exculpation. Pretending that those burdened with low IQ are just lazy, or lack the appropriate motivation, is a way of absolving ourselves of responsibility to help them. Accepting that intelligence exists, that intelligence matters, and that the less intelligent are equal to us in moral worth and value and thus ought to be helped, constitutes the first step in addressing this increasingly urgent need to fully accommodate the cognitively underprivileged.
Over here in the UK, feminist academic Germaine Greer recently caused a huge controversy by calling for the punishment for rape to be reduced (Guardian). She stated that rape is not a “spectacularly violent crime” but is often instead “lazy, careless, and insensitive”. Obviously, this created storms of furious chatter between the pro- and anti-camps that emerged out of the formless ether of the internet, presumably only to engage in the kind of mindless invective that seems to dominate sociocultural discourse right now. That said, Greer was making an argument, and treating it as such necessitates some consideration of what she was actually getting at.
Firstly, Germaine’s argument can be divided into two elements, one descriptive, one prescriptive. To me, her descriptive claims seem to be the following:
1. Rape generally doesn’t inflict much harm
2. Exceptions to this pattern are rare
3. A significant proportion of rapes, whether harmful or otherwise, are the result of miscommunications in which malevolence or ill-intent were absent (where mens rea cannot be reasonably attributed)
Following this reasoning, she makes the accordant prescriptive claim (4) that the punishment for rape should be reduced. I’m going to argue that while her fundamental descriptive claim (1) is wrong, her prescriptive claim (4) would still be wrong even if (1) were right. Here goes.
Greer acknowledges that while rates of PTSD among combat veterans are close to 20%, these pale into insignificance in comparison with the rates among rape victims, which approximate 70%. I didn’t check the source of these claims because it’s irrelevant to my argument, but I’ll note that I’m skeptical that the two sets of figures were collected in methodologically identical ways (e.g. self-reported vs medically diagnosed). In reference to this huge disparity, Greer says:
“What the hell are you saying? Something that leaves no sign, no injury, no nothing is more damaging to a woman than seeing your best friend blown up by an IED is to a veteran?”
This is the statement that set my psychology-sense tingling.
Unfortunately for Greer, clinical psychologists have known for decades that the likelihood and severity of PTSD symptoms cannot be evaluated in direct proportion to any physical harm inflicted by an experience. This is because trauma is, as a phenomenon, dependent on our subjective expectations and self-image within the context of interlocking societal collectives – what clinical psychologists like to call a ‘schema’ for short. James Pennebaker is a clinical psychologist who did a lot of research back in the ‘90s on trauma and its epidemiology; his papers are a goldmine of fascinating factoids, like gender being a better predictor of PTSD likelihood than proximity to ground zero.* Jordan Peterson, much of whose pre-fame academic career actually focused on trauma, summarizes the findings for us as follows: “[a] blizzard that would incapacitate Washington for a month barely makes the residents of Montreal blink”.**
In short, Greer is wrong about the harmfulness of rape. People can be traumatized by quite a lot of things, and the fact that more rape victims report being traumatized than combat veterans is direct attestation to that fact. A soldier fighting in a combat zone will likely have a reasonable expectation of killing someone, of having his friends killed, or of being killed (or almost killed) himself. All of those events are pre-programmed into his schema from basic training onward. Frankly, the fact that around 20% of soldiers get PTSD at all seems to suggest that war is even more brutal than we think, since it’s unlikely that so many people would otherwise be traumatized by something they had spent years physically and mentally preparing for.
Contrast this with a woman walking home from work who gets assaulted, dragged away, and raped. As awful traumatic events go, this is pretty much 99th percentile. Healthy, psychologically normal individuals do not tend to make provision for the eventuality that they will be physically and sexually violated in such a way; the possibility does not feature prominently in their self-image, and they have not engaged in years of intense physical and mental preparation for being victimized in such a fashion. They are therefore entirely defenseless against the psychological damage that subsequently ensues. Viewed in this context, Greer’s attempted comparison seems somewhat inane; why would rape victims get PTSD at a higher rate than combat soldiers? We might be better served by asking how on earth they would not.
But returning to the original point – even if we grant that these types of rapes are a small minority, and that most rapes are not *as* traumatic (this does not mean *not* traumatic), she’s still totally wrong on her prescriptive claim that the punishments for rape should be decreased.
Things aren’t illegal just because they upset people, and individual harm isn’t the sole factor considered by any justice system. Crimes and their legally mandated punishments are also integral elements of the social contract, which dictates what a society considers to be acceptable or integral to its cohesiveness and smooth operation. Ideally, this social contract reflects the social will; otherwise, you’re most likely living in a tyranny of some kind (a definition which modern Britain surely satisfies).
That aside, the policing and punishment of rape is not an isolated phenomenon that pertains only to retribution for the harm done by the offense – it also sets in stone the framework around sexual relations which our culture, any culture, considers to be normative and moral. Forcing people to have sex with you is wrong – it’s a behavior we are so opposed to that we’ll lock you up for 10 or more years in jail, just so you have enough time to get that into your head before we let you out again.
Little of this, in the Western legal tradition, has anything to do with inflicting upon the perpetrator a level of harm equivalent to that which he himself inflicted; the Western justice system is not as victim-centric as Islamic law (e.g. qisas) or other similar systems, but is in fact society-centric, in the sense that it operates on the principle of the sanctity of the common good. In other words, it seems to me that Greer is failing to incorporate the social implications of rape into an overall assessment of its harm. This kind of failure is typically exhibited by Western moral individualists, whose elevation of the individual to absolute primacy results in a kind of vulgar utilitarianism exemplified by Greer’s comparison of PTSD rates between rape victims and soldiers. Such thinkers may be very good at assessing individual harm, but they’re also incredibly bad at building moral frameworks for functional social systems – if you want proof of that, just look at the West today.
However, I would support further distinction and categorization of rape within the legal system, simply because I do acknowledge that an aspect of Greer’s reasoning here is correct. Perhaps the situation could be remedied by adding further categories of legal distinction, such as rape by coercion, rape by force, and rape by predation, or something of the kind. It seems to me that categorizing date rapes (as deserving of punishment as they obviously are) alongside those of the other kind mentioned here is somewhat nonsensical.
*Pennebaker, J.W., Cohn, M.A. and Mehl, M.R. “Linguistic markers of psychological change surrounding September 11, 2001,” Psychological Science 2004, 15(10), pp. 687-693. DOI: 10.1111/j.0956-7976.2004.00741.x, p. 691.
**Peterson, Jordan B. Maps of Meaning: The Architecture of Belief. New York: Routledge, 1999, p. 249.