In East as in West, anti-Arab bigotry is historically ignorant and anti-Christian

The fact that ‘Arab’ as a word carries negative connotations for certain people would likely be unsurprising to most Westerners. What would probably come as a surprise is that this antipathy also comes from within the Middle East itself.

No, I’m not making a point about Israeli Jews and anti-Arab bigotry. I’m talking about native Arabic speakers, born in Arab countries, who dismissively reject the term ‘Arab’ as a vulgar term at the same time they are identified as Arabs by the majority of the non-Arab world.

Perhaps the main reason for this can be found in the connotations invoked by the very word ‘Arab.’ To many native speakers in the Levant, the word ‘Arab’ is associated with Bedouin people and lifestyle for a plethora of historical reasons. Most people don’t realize it, but the Qur’an itself participates in this slander against Arabs, as in the following passage (adapted from the Talal Itani translation; note that the literal translation is just “the Arabs” (الأعراب), not “the Desert-Arabs” or “Bedouins” or anything else):

(9:97) “The Arabs are the most steeped in disbelief and hypocrisy, and the most likely to ignore the limits that God revealed to His Messenger. God is Knowing and Wise.”

Seems Muhammad didn’t know how to play his audience!

Although most Westerners today don’t need to be told not to say bad things about other groups, recent developments in some parts of the Arab world have gone in the opposite direction. Beginning in the latter half of the 20th century, Arabic speakers in historically Christian Levantine countries such as Lebanon and Syria have sought to re-assert old identities, rebranding themselves as Phoenicians, Assyrians, or Greeks whose ties to the Arab identity exist in language alone. Nassim Taleb, for instance, proudly refers to himself as a descendant of the Greek colonists in Syria, who intermixed with the Levantine population to form a unique and flourishing Greco-Semitic Christian culture, and many others engage in similar acts of historical reclamation.

While this itself is innocuous and widespread, some people take it too far. Taleb, for instance, has actually suggested that the Lebanese dialect of Arabic is an entirely different language – not Arabic, but a direct descendant of the Phoenician/North Canaanite language, which he asserts developed independently, albeit under heavy Arabic influence. Okay. Now things are starting to get a bit weird.

Now again, I have no problem with the appropriation of distant history to create newfound identities. If Levantines like Nassim Taleb wish to brand themselves as Phoenicians, or as relics of Byzantine glory, then so be it. What I do not accept on any terms is the para-racist bigotry against the Arab identity that so frequently accompanies these recent developments of self-expression. For instance, see stuff like this:

Sure, it is an old tweet – but if you’re in the right circles, you hear this sort of stuff much more commonly. The Lebanese intellectual Said Akl, for instance, famously said “I would cut off my right hand just not to be an Arab.” Among other persecuted minority communities of the Middle East, such as Kurds (particularly Yazidis), Assyrians, and Armenians, such remarks are heard even more often.

Unfortunately, a substantial proportion of this crap comes from Arabic-speaking Christians, who are unaware of the valuable and enriching role Arabs actually played in Christian history. For instance, look at this map from 565 AD:

Substantial Christian communities likely existed in the Hejaz and in Yemen (contested between Persians and Axumites)

As you can see, the picture of Eastern Europe and Sub-Saharan Africa (with the exception of the Christian East African kingdoms around Ethiopia) is very unclear, and Britain is a veritable clusterfuck of Celts, Latins, and Germanics bashing each other’s heads in. But what about the (very foreign-sounding) kingdoms at the northern end of the Arabian Peninsula? It might surprise you to know that both of these were not only Christian – but Arab, too.

The war banner of the Ghassanid kingdom

The Ghassanids (ar: الغسانية) were foederati of the Roman Empire, who played a critical role in securing the empire’s southern border against incursions from pagan Arab raiders (who themselves formed the main body of the Islamic converts that overran this border in the 620s). While they mostly stayed true to Rome during the initial Islamic invasion, many were forced to convert sometime before the year 900, though others fled as refugees to Byzantium. Incredibly, the Christian influence remained so powerful that the surname ‘al-Ghassani’ still signifies Christian heritage today – one of the oldest surviving Orthodox Christian schools in the Levant bears the name ‘Al-Ghassaniyyah,’ for instance – although Muslim Ghassanids can also be found. Perhaps most important of all, at least one Roman emperor, Nikephoros I, was of probable Ghassanid descent.

The Lakhmids (ar: المناذرة) have a similarly impressive history. Although vassals of the Sassanid Empire, they resisted the influence of state Zoroastrianism for centuries, holding stubbornly to their Christian heritage. They played a similar role to the Ghassanids as border guards against the nomadic raiders of the southern deserts, although relations with their overlord were far less positive. The Sassanid overthrow and execution of the last independent Lakhmid king, al-Nu‘man, may have caused an Arab uprising which left this southern border effectively unguarded, allowing the early Muslims to stage a bold and sweeping invasion of the Sassanid heartland of Asoristan just decades later, by which time the Persians were critically weakened by infighting and war exhaustion. Despite repeated betrayals and oppression, some Arab Christians still rallied to the defense:

Pourshariati 2008, p. 201

So what have we learned from all this?

Clearly, the moral matrix of the modern West has internalized – along with a great many other things – the notion that discrimination on the basis of group identity, or attacks against entire ethnic groups, is uncool. There isn’t so much a ‘lesson’ for Westerners here as a reminder – yes, Arab Christians existed; yes, they still exist today; and yes, it’s wrong to draw an equivalence between the Arabic language or identity and Islam or Islamism. Yet perhaps many Arabic speakers in the Middle East could also do with a reminder that anti-Arab bigotry is not only wrong, but historically ignorant and anti-Christian.

Evil as entertainment

You may be aware that Netflix has a new documentary series on infamous 20th century serial killer Ted Bundy. The Ted Bundy Tapes, as the series is entitled, utilizes ostensibly unknown footage to produce a new, more detailed portrait of the killer and his actions.

I watched the first episode of Netflix’s series with my partner and a roommate. As I test relatively low in disgust sensitivity (20th–50th percentile on the disgust scale) and extremely high in openness to experience (99th percentile, BFI), I did not expect to be particularly unnerved or unsettled. I also have no problem with representations of violence or evil in film – Upgrade, one of my favorite films of all time, features brutal violence and ends with the bad guy getting away with it.

Yet to my immense surprise, this doc completely shut me down.

The Tapes begins with Bundy’s childhood, and sketches the chronological progression of his transformation into America’s most infamous serial killer. No violence is shown onscreen; the relatively tame crime scene photographs constitute the only PG-13 material on screen. Scene-by-scene, the content left nothing to be disturbed by.

Yet what both grips and horrifies the viewer at the same time is the interplay between the atrocious violence of Ted Bundy, retold with consistently palpable glee on audio, and the helpless, frustrated horror it caused, as retold by those who knew his victims. The constant back-and-forth tennis match between Bundy’s reveling in his murders and the tortured confusion of the communities and families gives way to an emergent, abject horror that far outstrips anything one might expect to see in even the worst of horror films.

As a curious teen with an internet connection, I recall many times stumbling across grotesque shock videos in the dark corners of the internet. Images and videos of women in high heels stepping on cats, among other unspeakably sick things involving children or animals. I recall struggling to analyze my own disgusted emotions at the time, and realizing that a key factor lay in the helplessness of the victims depicted. Obviously, it isn’t hard to step on a cat; a cat can’t defend itself. What keeps us from abusing animals and people isn’t the difficulty of the act, but the fact that every moral sensibility, innate or acquired, tells us not to do so.

Upon analysis, the actions of Ted Bundy seem strikingly similar. It was not hard, in the overwhelmingly white areas of 1970s America that Bundy victimized, to gain someone’s trust – but to abuse that trust, and the person offering it, would have been unthinkable.

Bundy reveled in that unthinkability. In the documentary, we see him boasting about gaining the trust of women whom he later violates and brutally murders. The shock value derived from his unspeakable crimes is precisely what motivates him to retell them to the recording interviewer.

It’s not particularly surprising that this experience left me as shocked and disgusted as it likely did his interviewer. But by feeling the reaction he intended, I gave him – even in death – exactly what he wanted. The entire audiovisual experience of this Netflix series inadvertently validates the motive behind Bundy’s psychopathic crimes in the first place.

Is it good that we make documentaries designed to give serial killers what they want, even in death? Probably not. Many news sites have already started anonymizing school shooters to address this very problem. Netflix’s commercialization of it is thus something I want nothing more to do with.

The Crafting of History: Christianity, Pakistan, and colonial narratives

The Islamic Republic of Pakistan was created in 1947, but its foundation has roots in the pro-segregationist stance of the British colonial administration, which generally viewed Muslims and Hindus (a category into which Jains, Buddhists, Animists, Christians, and others were often lumped) as incapable of coexistence. In part, such views motivated the many maps of the subcontinent divided by ethnic or religious minority that were drawn up under British rule, although these are mostly as meticulous as they are hopelessly inaccurate.

Nevertheless, the Wiki page for ‘History of Pakistan’ takes you back to the Neolithic period, telling the story of its people through their vibrant past well before Pakistan was conceptualized, and well before Islam was invented/revealed (whichever your preference). It describes the brilliant achievements of the Indus Valley Civilization, the Buddhist and Hindu dynasties which once ruled the area, and their positive contributions to what is now Pakistan.

A map of ‘prevailing religions’ within British India in the year of 1909. The mapping should be seen as highly speculative at best, since many regions depicted consisted of only slight majorities

By contrast, the Wiki page for ‘Christianity in Pakistan’ starts in the 1800s, and goes no further back than the Jesuit missions of the 1500s. In doing so, it casts Christianity as a foreign and alien import – sometimes explicitly, with sentences like this: “The Europeans won small numbers of converts to [Christianity]… from the native populations.”

You will find zero mention of the Apostles Thomas and Bartholomew, who were sent to India through the Parthian Empire and established Orthodox Christian communities that still exist today (see St. Thomas Christians). Nor will you find any reference to the role played by ancient Pakistan as the heartland of Nestorian Christianity in the Indian subcontinent, or to the ecclesiastical province (headquartered in Herat, but comprising most of modern Pakistan) which was elevated to the highest rank under the Nestorian Patriarch Sliba-zkha in order to meet the needs of the local population after they fell to the advances of Islam in the mid-to-late 600s.

Just like evolution, history selects and rejects.

No, that’s not polygyny… comments on Ross et al. (2018)

The Royal Society recently put out a massive paper by Ross et al. with over 9000 authors (you know, one of those ones) on polygyny and wealth inequality. The title, “Greater wealth inequality, less polygyny: rethinking the polygyny threshold model,” would have you thinking that the authors were able to refute or quantitatively disprove the polygyny threshold model with some sophisticated mathematics, but unfortunately this is not the case. Instead, the paper uses a strange mixed sample of hunter-gatherer and highly developed industrial populations to argue that the transition to agriculture increases socioeconomic inequality, and additionally results in conditions of subsistence living that make polygyny effectively impossible for most people.

Don’t you love it when the author and affiliation list is so big you can’t even screencap it? Maybe it’s deliberate!

Firstly, we should realize that this doesn’t amount to either a refutation or even the titular ‘rethinking’ of the polygyny threshold model. While the results of their quantitative analysis are basically legit, this doesn’t change the fact that the authors have effectively based their study on a tautological proposition: subsistence living results in no surplus wealth (also tautological), which means that it is exceedingly rare for polygyny to be mutually beneficial. Alright. So where’s the challenge to the polygyny threshold model?

I have read a lot about polygyny, but I have never encountered any claim that polygyny ipso facto increases linearly with socioeconomic inequality. Rather, the claim is that conditions of high socioeconomic inequality will guarantee polygyny, as male reproductive success is subject to greater resource-dependent elasticity than female fitness due to inherent biological features (e.g. nine months of pregnancy). This great presentation has more details, but for those with little time:

I had to screenshot this in word since I don’t have LaTeX on my WordPress acc ;_;
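For readers in a hurry, here is a minimal sketch of the standard polygyny threshold condition in my own notation – a reconstruction after Orians’ classic model, not the exact formula in the screenshot:

```latex
% A female does better joining an already-mated male m (resources R_m,
% currently n_m wives) than an unmated male b whenever her expected
% fitness W_f from her resource share is higher:
W_f\!\left(\frac{R_m}{n_m + 1}\right) \;>\; W_f\!\left(R_b\right)
```

The intuition: under subsistence conditions no male’s R_m is large enough for the left-hand side to exceed the right, which is exactly the tautology noted above.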

Or if you prefer (from the presentation linked above; this contains an error, as the 1948 paper cited is by A.J. Bateman, not Bateson):

To be fair, the authors recognize this by stating their intention to merely “extend” the polygyny threshold model, but I’d argue they haven’t done so in a way that’s significant enough to merit the “rethinking” boast. But this is not to suggest the paper has no value. Instead, what the authors have actually done is modeled the conditions for polygyny to take place in a largely monogamous society at subsistence-level conditions – unironically a notable achievement. This is a far more interesting result, and one that would merit wider recognition than the paper has currently received.

There are still some problems though. For instance, the paper notes:

“Sequential marriage can be considered a form of polygyny insofar as men typically replace divorced wives with younger women, allowing a subset of males in the population to increase their lifetime reproductive success relative to less wealthy males in the population, as has been shown in many of the populations sampled…”

Now this actually is a problem, since the definition of polygyny that the authors are using is not actually “polygyny” but “effective polygyny,” so defined. I hate it when researchers redefine constructs in this ad hoc fashion (especially when it’s not highlighted in the abstract) because it can mislead people who don’t read the full paper – and most of the postdocs I know don’t. Luckily, I did.

I think the problem with folding sequential marriage into a working definition of polygyny is that substantial qualitative differences distinguish these behaviors. Technical polygyny (one man, multiple women at the same time, in a sexually exclusive [typically marital] arrangement) actually alters the operational sex ratio, among other things. Sequential marriage, by contrast, only means that the available pool of females includes larger numbers of women who already have children – that is, single mothers. Of course this may change the calculus for male satisfaction or other outcomes, but these are not equivalent to the social effects we expect from a normatively polygynous mating equilibrium.

For example, these differences completely negate some of the reported correlates of polygynous mating, such as the female suffering and self-reported detriment to well-being noted amongst women in polygynous marriages (source from observational study in Cameroon). I understand that well-being wasn’t strictly a feature of relevance to Ross et al.’s analysis, but it DOES have implications for the precise ‘leveling’ of the polygyny threshold. A situation where a woman is going to be a second or third-order co-wife is very different from one in which she’s merely a second or third-order sequential wife. These differences matter quite a lot if we’re paying any attention to the implications for (1) the OSR, (2) female well-being and gender inequality, and (3) male violence and intrasexual competition, among many other things.
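To make the OSR point concrete, here is a toy back-of-envelope sketch. All the numbers are invented by me for illustration (nothing here is from Ross et al.): simultaneous polygyny removes several women per married man from the mating pool at once, while sequential marriage leaves the instantaneous pool balanced.

```python
# Toy illustration: simultaneous polygyny skews the operational sex
# ratio (OSR), while sequential marriage leaves it balanced at any
# given instant. Population: 100 men and 100 women.

def osr(unpaired_men: int, unpaired_women: int) -> float:
    """Unpaired men per unpaired woman in the mating pool."""
    return unpaired_men / unpaired_women

N = 100            # men and women, 100 each
polygynists = 10   # hypothetical wealthy men
wives_each = 3

# Simultaneous polygyny: 10 men hold 30 women at once, so the
# remaining 90 men compete over only 70 unpaired women.
men_left = N - polygynists
women_left = N - polygynists * wives_each
print(osr(men_left, women_left))          # ~1.29, male-biased

# Sequential marriage: the same men marry three women in turn, so at
# any instant each holds only one wife and the pool stays even.
men_left_seq = N - polygynists
women_left_seq = N - polygynists * 1
print(osr(men_left_seq, women_left_seq))  # 1.0, balanced
```

Lifetime reproductive skew can look similar in the two regimes, but only the first distorts the pool that unmarried men face at any moment – which is why lumping them together muddies the construct.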

Sourcing locations for data used in the paper. Does this look representative to you?

Now look, I’m not trying to be an ass and negate all the hard work the forty-thousand authors of this study did. But I do find it somewhat annoying when people publish work under “GOTCHA!” titles like “rethinking XYZ” when nothing of the sort has actually taken place. Far from rethinking the polygyny threshold model, the authors actually RELIED ON it for their analysis, and came to the result that the agricultural transition killed the incentive for polygyny amongst most normal people living in subsistence-level conditions. Fair enough. But why not just say so?

IMHO, the far more interesting result of the paper is this: we know that the transition to monogamy occurred around the transition to agriculture in some societies, and this paper provides some really awesome and useful analysis to explain why that might have happened. But what it DOESN’T do is explain why monogamy became a social institution to the exclusion of plural marriage. Just because it isn’t worth it for most men to have 2+ wives doesn’t mean that your society will necessarily ban having 2+ wives. We still don’t have an answer for why polygyny becomes legally and socially prohibited in these agricultural societies. However, I think that primate inequality aversion (as exhibited by this outraged capuchin monkey) might be a good place to start.

I don’t have the data to hand, but I do have a hypothesis. The agricultural transition makes polygyny functionally impossible for the overwhelming majority of people, who are living at subsistence. But it DOES NOT affect the ability of men with sufficient social standing and resources to obtain and retain multiple wives. Historically, such men were stratified into classes or castes – merchants, the Japanese 商, etc. It seems plausible to suggest that the impoverished majority of monogamous males (and perhaps their wives!) would have expressed strong opposition to their rich rulers taking multiple wives, and rallied to condemn this behavior. Others have articulated this hypothesis before (e.g. Henrich, Boyd, and Richerson, 2012), but this study provides some useful background evidence for its plausibility. If you’re a man farming away in a Neolithic village under fairly awful living conditions, you might be able to tolerate paying taxes to your overlord despite his nice villa on top of the hill. But what if he has 6 wives and your daughter is one of them? Perhaps there is an ‘outrage threshold’ we need to think about alongside the polygyny threshold model.

God, why does Gurlockk get to have the biggest rocks, the shiniest gems, and 12 wives when I can’t even count past ten?

The meta-analysis that wasn’t: assessing Flynn Effects through diachronic change in ICV

A while ago I was conducting a meta-analysis of diachronic variation in cranial volume measurements for different East Asian populations. I got into the project after being inspired by Lynn’s suggestion that the Flynn Effect is primarily nutritional, as presumably this would also show up in height, head size, and thus intracranial volume (ICV). It might even be possible to control for ICV changes to isolate the direct change in IQ over time, which would be a more rigorous way to compare Flynn Effect magnitudes. What a great idea, I thought.

Unfortunately, as I got deeper into the project, I realized that virtually none of my data could be sourced back to any obtainable research. A lot of numbers came from papers only available in physical archives in Japan or Korea, so I did what any good researcher does and promptly gave up.

However, even acknowledging the limits to my data integrity, I was still able to produce some plots of the cranial volume measurements that show interesting results. Here’s a scatterplot built from values in all the studies I could find; the x-axis shows the year of each study’s publication, not the date of subject collection, death, or measurement. Note that all datapoints are n-unweighted (raw averages).

While initially there doesn’t seem to be much to say about this graph, it’s noteworthy that there don’t appear to be any outliers (when plotted in this form), but rather two relatively clear clusters by sex. Clearly the female cluster is far less tightly grouped than the male one (the source of the statistical insignificance visible in the trendlines below), but the two clusters occupy mutually exclusive ranges, which is interesting. Since all of the studies here had sample sizes above 20, we’d expect the sampling distributions of the means to be roughly normal, meaning that group-level sex differences in ICV are likely very reliable. This is no surprise, seeing as ICV is effectively a function of body size and height, which vary by sex in the same way. This becomes more obvious when we ignore the insignificant ethnic variation and look purely at the sex differences:

Overall it is fairly clear that ICV as reported in studies is increasing over time. However, note the lack of any female reports going back beyond the early 1920s? This is a sign that our data is shitty. For many of the earlier studies no sex information was present, and although in some cases I was able to make informed estimates based on the averages (e.g. a mean ICV value close to 1500 is almost certainly male or mostly male in content), it was never really certain how accurate or representative the values might be. However, if we are correct in inferring that earlier studies likely did not disambiguate male and female samples (as opposed to exclusively using male samples), then the actual sex gap would be even greater than observed, since the female values would go down and the male ones would go up. This would also throw the trendline for diachronic variation into doubt, which is currently strong (p < 0.05) when excluding females.
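That inference can be sketched numerically. All the figures below are invented for illustration (they are not from my dataset): if an early study silently pooled male and female skulls and we misread its mean as a male datapoint, the apparent sex gap in ICV shrinks relative to the true one.

```python
# Toy illustration: mistaking a pooled-sex study mean for a male
# datapoint understates the true male-female ICV gap.

def mean(xs):
    return sum(xs) / len(xs)

male_icv   = [1500, 1510, 1495]   # hypothetical male study means (cc)
female_icv = [1340, 1355, 1348]   # hypothetical female study means (cc)

true_gap = mean(male_icv) - mean(female_icv)

# A hypothetical early study pooled a 60/40 male/female sample and
# reported a single unlabelled mean:
pooled = 0.6 * mean(male_icv) + 0.4 * mean(female_icv)

# Treating that pooled figure as "male" shrinks the inferred gap:
inferred_gap = pooled - mean(female_icv)
print(true_gap, inferred_gap)   # the inferred gap is smaller
```

This is exactly why correcting the early unlabelled points would widen the gap and simultaneously reshape the diachronic trendline.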


Email me if you want the full dataset to work on.

Narratives of conversion

A while ago I was doing a research project on converts to Islam and their motivations through qualitative self-reports, which I collated into a series of ‘types’ or narrative trends. Here’s what I found.

For many in the increasingly multicultural, heterogeneous, disoriented West, Islam provides a specific identity by which they can define themselves and their relationships with other entities in the world.

A young guy at my gym (16) who recently converted to Islam is an example; he is the son of a white father who abandoned his black immigrant mother before he was born, and he described to me the difficulties of identification at school. He was raised by a black mother in a black neighborhood, but looks more white. Islam grants him a consistent in-group with which he can find solidarity. He knows absolutely nothing about the theological aspects of Islam and hasn’t read the Qur’an, but takes pride in his new identity as a Muslim and often drops standard Muslim vocab (hamdulillah, mashallah, etc.).

He frequently displays his newly obtained insider status, saying ‘we’ (Muslims) and ‘you’ (non-Muslims) even though he only converted a few months ago. Previously, as a ‘white’-looking kid who was also a product of a black subculture, there were presumably fewer individuals he could feel connected with in the same way.

The West is increasingly irreligious, but the human need for spirituality remains a motivating factor in many of our lives. We naturally seek out something to fill that void. For many, Christianity seems ‘weak’ and ‘feminine’ in comparison to a much more ‘muscular’ and uncompromising Islam. The level of piety displayed by self-declared Christians, many of whom do not go to church and the vast majority of whom do little that is ‘Christian’ beyond that, might also seem ‘shallow’ in comparison to Muslims, who (ostentatiously) display strong degrees of religious attachment and faith.

For some, then, a quest for spirituality and a desire to know the ‘truth’ of existence can drive them towards religion; Islam will often be the obvious choice due to its constant proselytizing, its muscularity and universal presence, and the unwavering faith (and total lack of doubt) of the vast majority of its adherents. As a convert, I myself fell into this category.

Islam is a religion, but one that mandates a specific way of life for its adherents in a way that no other organized religion does. While this is authoritarian, many people in the West, especially young people, feel that it gives them a sense of purpose; they are no longer sleeping in until 1 in the afternoon on weekends, but waking up for fajr at 5 AM.

Not only that, but they have a clear template for how to live their lives; do this, do that, and things will all turn out fine. For many, these firm guidelines bestow upon their lives a sense of drive that would otherwise be lacking. There are also clear tiers of accomplishment and progression that allow someone to feel successful; memorizing surahs, for example, or improving one’s tajweed gives a sense of achievement that is otherwise lacking for many people, especially those in dead-end jobs or careers.

Indigenous women are the largest source of converts to Islam in Europe. Islam assigns women a clear role as homemakers submissive to their husbands (4:34), and although this may seem counterintuitive, some women (not only underachievers) are quite happy to be just that, feeling uncomfortable with the increased burden of responsibility placed upon them by feminism and the equality of the sexes, which demand they do more. For them, Islam offers a more ‘traditional’ template which not only does not punish them for forgoing a career and being a stay-at-home mom, but in fact rewards them for it.

I feel like narratives around conversion in prisons are also worth examining, as such conversion is happening at an alarming rate in the UK and often results in the emergence of religious gangs within prisons openly espousing extremist views, which can impede reintegration. I’m currently reading more on this phenomenon.

Another type, the ‘decolonialist’ type of conversion, is observed particularly in the United States, where Islam has been seen as a more authentically ‘African’ religion ever since the founding of the Nation of Islam movement in the 1930s. Prominent individuals like Malcolm X have popularized this model, and many black Americans feel that they have a collective grievance against Christianity over slavery which is best resolved by turning to Islam. The MASSIVE increase in appropriated names of Arabic origin in the black American community is symptomatic of this trend.

The ideological benefit curve

Before I became more biologically oriented in my thinking, I used to be super into ideologies – not any particular ideologies, but the very idea of ideologies as a category. It’s virtually impossible to have anything approximating a scientific discussion about ideologies, because even the very word is a semantically and semiotically loaded cultural construct; as Žižek says, we cannot look at ideology as we inhabit ideology. However, there are still a few interesting things to say about the topic. Take the following example:

This is one of the goofy graphs I made on the topic, that just so happens to encapsulate an interesting idea (ignore the numerically ordinated axes for a moment). The question is – who benefits most under a specific ideology? What the curve shows is that an authoritarian society will provide maximal benefit to those unable to coordinate their own affairs, simply because in such a society your affairs are already done for you. However, strong-willed thinkers and creative types will probably suffer in such conditions, lacking autonomy over their ideas and self-expression.

An individualist society, by contrast, will provide maximal benefit to those who have the aptitude to organize their own affairs and excel in absence of supervision. Yet such a society will necessarily provide less assistance to those who are less able to organize their own affairs – some of whom will be desperately in need of such assistance.

So which society is better? One constant feature of human sociality is that there rarely appears to be anything objectively ‘better’ than anything else; everything comes down to tradeoffs and opportunity costs. So if you’re Nikola Tesla you’ll probably do better in the individualistic United States, but if you’re a lonely, depressed person suffering from the atheistic anomie that seems to have gripped Western civilization of late, then you’d likely be better off staying in Serbia. At least homes are cheap there.

When actions are determined, consequences must be too

This is one of the most idiotic views I’ve ever seen advocated by tenured staff at a Western university. Let me explain why.

Let’s think about this logic for a moment. The statement above effectively states the following: since the decision as to whether or not you will eat the dessert is out of your control (proposition) therefore you are absolved of responsibility for it, and should not feel bad (conclusion).

To assess this proposition, we first have to come to a definition of ‘control’ – which is far trickier than it seems. In this context people typically define ‘control’ in relation to various free-will arguments, which can be broadly dichotomized into ‘pragmatist’ and ‘logico-determinist’ categories. Logico-determinists such as Sam Harris would state that no neurobiological evidence whatsoever exists to support any mechanism isomorphic to free will or free choice, for humans or for any other organisms, and that as such we are not technically in control of anything. Pragmatists would broadly agree with these arguments, but would protest that such argumentation tautologically precludes the meaningfulness of free will as a term, which is contradicted by the semantic significance and lexical omnipresence of the term for modern humans.

I think that whichever view of free will or control we take, the proposition outlined above by the McGill Psychology Department is almost certainly true, but in a far broader sense than you might realize. We live in a world where our actions are contextualized by the sum total of all actions that preceded these, which effectively means that actions are really reactions, and are thus determined by that context. As such, when you inevitably act in accordance with the predispositions encoded into your neural hardware, your brain has in fact “already decided for you,” as per the statement above.

But what about the conclusion? Should this really absolve you of responsibility? If we say that you aren’t responsible for your actions because they were predestined by your brain, then what about rape? It’s not uncommon to hear defendants in courtrooms talk about ‘not being able to control themselves’. The Iraqi refugee who raped a 12-year-old boy in an Austrian swimming pool a few years back defended his actions as the inescapable consequence of a “sexual emergency.”

Murder is another case we might examine. The circumstances that typically result in physical conflict trigger a ‘fight-or-flight’ response in us, activating our endocrine systems and pumping our bodies full of chemicals such as adrenaline that cloud judgment and promote physical action. In such a situation, it is equally possible to say that your brain made the decision for you, precluding free will.

Now ask yourself – what difference does it make if some (or even all) murders and rapes do lie outside the realm of our control? Are we going to start judging rapists in courts of law on the basis of their proclivity to hypersexuality, refusing to jail any rapist whose brain ‘already decided’ for him? Murderers, too? If any country were ever to embark on such an endeavor with the propositions above as guiding axioms, it would quickly find it impossible to judge any person guilty of any crime.

What am I getting at here? The conclusion above is horrifically wrong and terribly dangerous. A deterministic worldview may be scientifically valid, but this does not – and cannot – preclude holding individuals responsible for their actions. The cost of failing to do so – a world in which murderers, thieves, rapists, and worse are not held culpable due to the inevitable influence of biology and socialization on their behavior – is far too high to accept.

Even if our actions are not governed by the kind of ‘free will module’ that proponents imply, there is no reason to suggest that consequences should not follow from those actions. Nothing about the absence of agency in an action implies the absence of a reaction to it. All of which is to say: when actions are determined, consequences must be too.

Film review: Enemy (2013)

Enemy is one of the most stunning films I’ve seen this year. Its genius is composite and gestaltic; it lies in the mind-blowing script of Gullón, the paradisal and dystopic direction of Villeneuve, and the compelling yet disturbing acting by Gyllenhaal.

While the film has received near-universal acclaim, its plot – incomprehensible to many viewers – has presented an interpretative problem that has spawned numerous analyses online. At the risk of sounding solipsistic, Enemy truly spoke to me in a way few other films have, and as such my understanding differs somewhat from the majority of these reviews. Because this is an analysis as well as a review, none of it will make sense if you haven’t seen the film, so I strongly recommend that you go rent it now. What follows is my own analysis of Enemy, starting with our character pairs.

  • Adam/Mary – history professor and his girlfriend.
  • Anthony/Helen – actor and his wife.

Villeneuve tells us that the film is about dictatorship, and this is true. Beginning at the very start of the film we hear a lecture about dictatorships – a subject which history professor Adam (Gyllenhaal) happens to specialize in.

What dictatorships or totalitarian systems do – and hereafter I want to use the latter term – is subjugate people. But totalitarian systems do not only oppress people – they also suppress awareness of this subjugation, which can happen in a number of ways. As the film begins, Adam tells his students that in Ancient Rome, the government sponsored bread and circuses for the people in order to reduce dissent. Bread and circuses are at their core a type of entertainment. Modern governments, we are told, act similarly to limit dissent in different ways – and the focus on ‘entertainment’ will be relevant later on.

Adam and Anthony meet in a hotel

From here onwards we’ll be jumping around a bit in our analysis, but I’ll break the news to you first: Adam and Anthony are one and the same. Bear with me for a little while longer. When Adam and Anthony meet in the hotel, Anthony explains the presence of a scar on his stomach, asking Adam if he has one too. Adam recoils in fear and horror, and flees the hotel. How might we understand this scene?

There is no scar on Adam’s stomach, because that scar appeared when Adam/Anthony (hereafter ‘AA’ to refer to both) was in a car crash – something which also resulted in Mary’s death. Mary was the girlfriend of AA, or at least his mistress, while he was already with Helen. The scene where AA gets his scar, and Mary is killed in a car crash, is marked by a spiderweb on the windshield of the car. Spiders live inside webs.

Consider spiders for a moment. The spiders in this film are not some loose analogy for dictatorships or other systems of the political kind. But they do represent a totalitarian system. They are avatars of memory, and of the totalitarian control that traumatic memories of the past exert over people’s minds. Consider your own memories – of trauma, of bullying, of the dissolution of a romantic relationship – and how inescapable they often feel. With this point in place, things slowly begin to become clear.

At the very start of the film – and remember that it isn’t in chronological order, so this is actually the ending scene – AA is in a club [entertainment] and a beautiful attractive waitress crushes a spider. AA is crushing the pain of his own memories, by seeking entertainment. This liberates him from the totalitarian control they exert over him. What memories, you ask?

The answer is given earlier on, when Adam dreams of walking down a hallway, passing a woman who is at once also a spider. He wakes up and sees a woman whose hair matches the spider pattern. This is another factual memory – she is one of the other women Anthony cheated on his wife with. Anthony has a serious problem with commitment and infidelity, and has been repeatedly unfaithful – not just with Mary.

The spider – forever the avatar of the oppressive memories of the past – swaggers over Adam/Anthony’s hometown of Toronto

What about the giant spider walking over the entire city? This is explained by the poster for the film, which shows that same spider in the city directly above Anthony (whom we can positively identify by his jacket) but also inside his head. This is key. The spider (the gargantuan weight of his traumatic memories) is above him, but it is also inside him; its influence on his life is pervasive and absolute, like some totalitarian dictator – but these memories, like the spider, are also an integral part of his very being. His experiences are in his head to stay.

At the very end of the movie, Adam has taken Anthony’s place. He goes into the bedroom to see Helen, and is greeted by a horrifyingly giant spider. Why? Because his memories are coming back to haunt him. So what exactly does this mean?

The entire film is a story of Anthony being confronted by his past. Adam is a representative of this past, and his role as a teacher of HISTORY attests to the fact. Adam represents a past version of Anthony that was unfaithful to Helen. Now Adam was meek and quiet, even to the point that he allowed himself to be cuckolded by Anthony. How can we say that he was a representation of Anthony’s unfaithfulness?

The answer is simple; Adam is an unmanly coward, and unfaithfulness is a form of cowardice, or a lack of living up to one’s responsibilities as a man. This is attested to when Anthony is in the car with Mary who he has just tried to fuck – just before they crash, he says to her ‘You think I’m not a man?’ She DOES think he’s not a man – because she has just discovered his unfaithfulness, which precipitates their dramatic exit from the hotel. In this way, Adam’s weakness/unmanliness facilitates Anthony’s unfaithfulness – Adam does not stop Anthony from fucking his own girlfriend, because he represents Anthony’s own past weakness (and thus, his weak masculinity).

That scene where Anthony is having sex with Mary, who then freaks out when she sees the wedding ring [marks?] on his finger? This was real. Mary did not realize that Anthony was married. The freakout did indeed happen. The crash did indeed happen. And Mary died, and Anthony was injured as a result – giving us the scar from earlier.

The windshield after AA’s car crash forms a delicate spiderweb – all is connected

Why is Anthony an actor? Because he is acting out the horrors of the past in his head. The nightmares are all his. The gargantuan spider in the final scene looks poised to consume Adam, who by this point has assumed the role of Anthony before he enters the bedroom. It is memories of sinfulness, and the weight of his guilt, which seem ready to devour him.

Adam IS Anthony, so Adam allowing Anthony to fuck his girlfriend represents AA’s weak, sensitive, humanistic side failing to exert control over his brash, Dionysian side. This failure resulted in the death of Mary and nearly also the destruction of his marriage. This haunts him to this very day.

Simplified, it looks something like this:

  • AA are/is one person
  • The spiders are his memories and the weight of his guilt
  • They constitute a totalitarian system which holds him down and oppresses him, and dictates to him his actions; they force him to continuously remember the past
  • Adam, a history professor (the past), represents the past AA – he is weak and generally a shitty person. He allows himself to cheat on his own wife, because he is weak/unmanly/a coward.
  • Adam’s weakness leads to the death of Mary, his girlfriend and mistress. It also leaves him with a scar, which stays with him.
  • AA suppresses this memory by entertainment, such as by attending clubs where the spider (his past) is crushed. But it keeps coming back to haunt him.
  • The spider is a totalitarian system above him (in terms of his control) but inside him (as it is constituted by his memories).
  • The film ends with AA having gone through all of the memories of this traumatic past. A gargantuan spider shows us how AA is confronted by his memories, and thus his own guilt and shame, when he goes into his wife’s bedroom.
  • As viewers, we are not told whether AA’s suppression of his past, his memories, and his guilt is successful or not. We do not know whether he stays with Helen, or what her transformation into his guilty conscience might entail (perhaps accusing him of another affair).
  • That is Enemy, and it is undoubtedly Villeneuve’s most impressive film to date.

5 out of 5 stars.

The self-immolation of Sam Harris

Before we get into Sam, I’m going to start things off with a postulate by Yuval Harari. Harari is a colleague of Sam’s, the two having spoken on podcasts and referenced each other in written works, so I think it will be interesting to use one of his most powerful statements as a springboard for my demonstration of the incoherence of Sam’s worldview.

Truth and power can travel together only so far. Sooner or later they go their separate ways. If you want power, at some point you will have to spread fictions. If you want to know the truth about the world, at some point you will have to renounce power.

This is a very interesting statement. To understand precisely what Harari means, it is necessary to explain the definitions of both ‘fiction’ and ‘truth’ as used here. To Harari, a ‘fiction’ is a proposition that is accepted by fiat, without underlying backing. We have to understand that Harari is not an individualist, and this may indeed be the most important difference between him and Sam Harris as thinkers. This means that Harari’s ‘fictions’ are not casual statements of mistruth that are shared by individuals, but rather systems-wide simplifications of reality that allow for greater levels of social organization.

Take laws, for instance. Laws are perhaps the prototypical example of Harari’s fictions in operation (though he would likely point to God or money instead; if I discussed those here, I would risk repeating myself later on). Laws are not grounded in the laws of physics, or the properties of physical matter, or even in our evolutionary biology (not necessarily, at least). Yet by the establishment and acceptance of laws for the regulation of society, higher levels of organizational complexity can be attained. A fiction, in short, is something used on or by a social system to guide it toward certain ends. Dig deep enough into any fiction, and you’ll find an arbitrary proposition at its core.

Truth, by contrast, is the underlying reality of things. I explained ‘fictions’ first because Harari’s ‘truth’ can in a sense be defined simply as ‘that which contradicts a fiction’. An obvious yet brutal example of a truth is the following: you and everyone you know are going to die.

Why does that matter? Because within the context of our social systems, we act as if this is not the case. We try our best to save people from death when possible, and to prolong life through whatever means available. We behave as if it is a tragedy, rather than an inevitability, when a large number of people are killed by something unexpected, and in doing so reinforce the idea that death is unnatural, is unnecessary, and (deontologically) ought to be prevented.

This systems-wide behavior reveals a collective ‘fiction’ (strictly in the Hararian sense, of course): we are engaged in a process of collective self-deceit in order to regulate society so as to allow for greater levels of social organization. Without anathematizing death, there is no basis for preventing people from dying, nor for punishing acts that induce death (such as murder). Obviously, any system in which murder is unobjectionable will find it exceedingly difficult to motivate its members to achieve complex tasks, because such tasks generally require cooperation, which is impossible if prosocially-defective behaviors as extreme as murder go unprevented (the city of Chicago, with its extremely low clearance rate for homicide, is a case in point).

With this explained and Harari’s truths and fictions in mind, let’s get back to Sam Harris. Listen to this clip (set to play from 19m 23s) or read the transcript below if you’re short on time. I transcribed the clip carefully and removed some irrelevant muttering, but all errors are (obviously) my own.

Woman: So I have kids, you have kids. Right? Do you have kids?

Sam: Yep. Yep.
Eric Weinstein: We all do.

Woman: All have kids, great. So when it comes to free will, I get it. I’m completely on board, Sam, with your idea that there’s no free will.

Sam: Yep.

Woman: When it comes to raising kids, wh-

S: Don’t tell them. Don’t tell them th- [inaudible]


There’s a lot to say at this point, but it would be malapropos to cut off the remaining context. Let’s hear Sam out. Continued:

W: Sorry but – I have an 18 year old boy, who’s… y’know, gorgeous. And when I’m trying to tell him to do the right thing, and he does something stupid… and then I wanna find out why he did that, I don’t even ask, cuz it’s a stupid question. Cuz he doesn’t even know why he did it, cuz he’s an 18-year-old boy. But when I’m looking at impacting his future behavior, where’s the practical separation between knowing… that there’s really no free will, and wanting your children to be responsible in their behavior and what they do in the world.

S: Okay… Well, this is an important question-


S: I think that there are many false assumptions about what it must mean to think that there’s no free will. I think there’s no free will, but I think that effort is incredibly important. I mean, you can’t wait around… I think the example I gave in my book is, well, if you wanna learn Chinese, you can’t just wait around to see if you learn it. It’s not gonna happen to you. There’s a way to learn Chinese, and you have to do the things you do to learn Chinese. Every skill or system of knowledge you can master is like that, and getting off of drugs is like that, and getting into shape is like that, and straightening out your life in any way that it’s crooked is like that. But the recognition that you didn’t make yourself, and that you are exactly as you are at this moment because the universe is as it is in this moment has a flip side, which is… you don’t know how fully you can be changed in the next moment, by good company, and good conversations, and reading good books, and… you don’t know, what you – you are an open system. It’s just a simple fact that people can radically change themselves. You’re not condemned to be who you were yesterday.

There’s a little more, but I think this is a good place to pause.

For the most part, Sam Harris is an incredibly rigorous, logical, and consistent thinker. After rejecting Islam I was drawn to Sam because he not only took the axiomatic standpoint of atheism, but also explored the implications and consequences of atheist realism by trying to develop what is essentially his own ‘atheist ethic’. His books ‘Lying’ and ‘Free Will’ are not just his own thoughts directed at a public audience; they are a series of meditations through which Sam challenges himself, revisiting areas long considered ‘dead ends’ – areas wherein the repercussions of atheism (irrespective of its analytical correctness) are so deeply negative that, in the final analysis, pursuing such a worldview is simply not worth the trouble. In this respect, his philosophy is comparable to Kantian deontology, which constitutes a similar attempt at grounding human morality in logic and proofs. Not bad, Sam. Not bad.

But nothing in Sam’s credentials can ameliorate the discrepancy we see in this conversation between Sam’s philosophical idealism (what he calls ‘moral realism’) and his stated claims. Obviously we can let Sam off for the “Don’t tell them” line – it was a joke, and the audience got that. But as he goes on, we see the ethos within “Don’t tell them” repeated, explicated, and justified.

According to Sam, suppressing the ‘truth’ (that free will is an illusion) can at times lead to the actualization of greater potential. Obviously, we all know this already; at the most basic level, that is what religions do. By gathering the local township for collective prayer, meditation, and socialization, religious organizations have since time immemorial been using what Sam and Harari would call a ‘fiction’ to better people’s lives and endow them with a sense of meaning, purpose, and spiritual fulfillment. Yes, I am aware that the same mechanism can be used for harm – that’s obvious, and beside my point. The key here is that compromising the ‘truth’ in order to attain Hararian ‘empowerment’ is precisely what Sam has criticized religion for.

This is why Sam’s declaration that truth ought to be sacrificed in the service of empowerment (at least some of the time) is of such fundamental importance. His suggestion that we should utilize fictions in order to improve our social reality is devastating for moral realism, because it means that his desired system of social organization is essentially a religion, by virtue of operating along the same principles. By acknowledging that some aspects of reality (truth) should be set aside in the name of functionality (power), Sam accepts the legitimacy of moral systems that uphold fictions for the good of their adherents. It is not just that he acknowledges this is possible – he actually suggests it ought to be pursued.

For what it’s worth, Sam himself admits this difficulty later in the video, conceding that what you say to people should be ‘true and useful’. But if it is valid and correct to use baseless fictions (such as the existence of free will) in order to better our lives, then we are instead promoting ‘what is useful and not true’. At this point we have effectively ruled out the possibility that there are ‘right’ or ‘wrong’ modes of social organization, since the discussion now turns on what level of usefulness justifies the abnegation of truth, and so on and so forth. The question ‘which moral system is correct?’ becomes ‘which moral systems balance fiction and power appropriately?’. Far from moral realism, this perspective is so blatantly pragmatist that it would do William James proud. Moreover, Sam’s dismissal of the validity of religious systems on the grounds of their unrelatedness to material reality now seems positively hypocritical in light of his advocacy for those very same methods.

In short, if we accept Sam’s proposition, then it is meaningless to strive for the creation of a social system upholding an ‘objective’ or ‘correct’ moral reality. Instead, the question that then results is: ‘to what extent must we sacrifice the truth in order to attain the truth’. Needless to say, such a question – at least from Sam’s own moral realist perspective – is utterly incoherent.