The Royal Society recently put out a massive paper by Ross et al. with over 9000 authors (you know, one of those ones) on polygyny and wealth inequality. The title, “Greater wealth inequality, less polygyny: rethinking the polygyny threshold model,” would have you thinking that the authors had refuted or quantitatively disproven the polygyny threshold model with some sophisticated mathematics, but unfortunately this is not the case. Instead, the paper uses a strange mixed sample of hunter-gatherer and highly developed industrial populations to argue that the transition to agriculture increases socioeconomic inequality, and additionally results in conditions of subsistence living that make polygyny effectively impossible for most people.
Firstly, we should realize that this doesn’t amount to either a refutation or even the titular ‘rethinking’ of the polygyny threshold model. While the results of their quantitative analysis are basically legit, it doesn’t change the fact that the authors have effectively based their study on a tautological proposition: subsistence living results in no surplus wealth (itself close to tautological), which means that it is exceedingly rare for polygyny to be mutually beneficial. Alright. So where’s the challenge to the polygyny threshold model?
I have read a lot about polygyny, but I have never encountered any claim that polygyny ipso facto increases linearly with socioeconomic inequality. Rather, the claim is that conditions of high socioeconomic inequality will reliably produce polygyny, as male reproductive success is subject to greater resource-dependent elasticity than female fitness due to inherent biological constraints (e.g. nine months of pregnancy). This great presentation has more details, but for those with little time:
Or if you prefer (from the presentation linked above; this contains an error, as the 1948 paper cited is by A.J. Bateman, not Bateson):
To be fair, the authors recognize this by stating their intention to merely “extend” the polygyny threshold model, but I’d argue they haven’t done so in a way that’s significant enough to merit the “rethinking” boast. But this is not to suggest the paper has no value. Instead, what the authors have actually done is modeled the conditions for polygyny to take place in a largely monogamous society at subsistence-level conditions – unironically a notable achievement. This is a far more interesting result, and one that would merit wider recognition than the paper has currently received.
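Since the paper extends rather than replaces the threshold logic, it is worth spelling that logic out. Below is a minimal sketch of the classic decision rule in Python – a toy payoff function of my own devising, not the model the authors actually fit – in which a woman accepts a polygynous match only when her share of a richer man’s wealth still beats sole access to a poorer bachelor’s.

```python
# Toy sketch of the classic polygyny threshold logic, not Ross et al.'s model.
# Assumption: a wife's expected payoff scales with her equal share of her
# husband's material wealth.

def wife_payoff(male_wealth: float, existing_wives: int) -> float:
    """The incoming wife becomes wife number (existing_wives + 1)
    and receives an equal share of the male's wealth."""
    return male_wealth / (existing_wives + 1)

def crosses_threshold(rich_wealth: float, rich_wives: int, bachelor_wealth: float) -> bool:
    """A woman does better as an additional wife of a richer, already-married man
    only if her share of his wealth still exceeds sole access to a bachelor's."""
    return wife_payoff(rich_wealth, rich_wives) > wife_payoff(bachelor_wealth, 0)

# Near subsistence, absolute wealth differences are small and the threshold is rarely crossed:
print(crosses_threshold(1.5, 1, 1.0))   # False: half of 1.5 < all of 1.0
# With steep wealth differences, polygyny becomes mutually attractive:
print(crosses_threshold(10.0, 1, 1.0))  # True: half of 10 > all of 1.0
```

Under subsistence conditions the wealth ratios needed to cross the threshold almost never occur, which is essentially the paper’s point.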
There are still some problems though. For instance, the paper notes:
“Sequential marriage can be considered a form of polygyny insofar as men typically replace divorced wives with younger women, allowing a subset of males in the population to increase their lifetime reproductive success relative to less wealthy males in the population, as has been shown in many of the populations sampled…”
Now this actually is a problem, since the definition the authors are using is not strict “polygyny” but a broader “effective polygyny” of their own construction. I hate it when researchers redefine constructs in this ad-hoc fashion (especially when it’s not highlighted in the abstract) because it can mislead people who don’t read the full paper – and most of the postdocs I know don’t. Luckily, I did.
I think the problem with including sequential marriage in a working definition of polygyny is that there are substantial qualitative differences distinguishing these behaviors. For instance, technical polygyny (one man, multiple women at the same time, in a sexually exclusive [typically marital] arrangement) actually alters the operational sex ratio, among other things. Sequential marriage, by contrast, only means that the available pool of females includes larger numbers of women who already have children – that is, single mothers. Of course this may change the calculus for male satisfaction or some other outcomes, but these are not equivalent to the social effects we expect from a normatively polygynous mating equilibrium. For example, sequential marriage entirely lacks some of the reported correlates of polygynous mating, such as the female suffering and self-reported detriment to well-being noted amongst women in polygynous marriages (source from observational study in Cameroon). I understand that well-being wasn’t strictly a feature of relevance to Ross et al.’s analysis, but it DOES have implications for the precise ‘leveling’ of the polygyny threshold. A situation where a woman is going to be a second or third-order co-wife is very different to one in which she’s merely a second or third-order sequential wife. These differences matter quite a lot if we’re paying any attention to the implications for (1) the OSR, (2) female well-being and gender inequality, and (3) male violence and intrasexual competition, among many other things.
Now look, I’m not trying to be an ass and negate all the hard work the forty-thousand authors of this study did. But I do find it somewhat annoying when people publish work under “GOTCHA!” titles like “rethinking XYZ” despite nothing comparable to this having actually taken place. Far from rethinking, the authors actually RELIED ON the polygyny threshold model for their analysis, and came to the result that the agricultural transition killed incentive for polygyny amongst most normal people living in subsistence-level conditions. Fair enough. But why not just say so?
IMHO, the far more interesting result we get from the paper is this: we know that transition to monogamy occurred around the transition to agriculture in some societies, and this paper provides some really awesome and useful analysis to explain why that might have happened. But what it DOESN’T do is explain why monogamy actually became a social institution to the exclusion of plural marriage. Just because it isn’t worth it to have 2+ wives doesn’t mean that your society will necessarily ban having 2+ wives. We still don’t have an answer for why polygyny becomes legally and socially prohibited in these agricultural societies. However, I think that primate inequality aversion (as exhibited by this outraged capuchin monkey) might be a good place to start.
I don’t have the data to hand, but I do have a hypothesis. The agricultural transition makes polygyny functionally impossible for the overwhelming majority of people, who are living at subsistence. But it DOES NOT affect the ability of men with sufficient social standing and resources to obtain and retain multiple wives. Historically, such men were stratified into classes or castes – merchants or the Japanese 商, etc. It seems plausible to suggest that the impoverished majority of monogamous males (and perhaps their wives!) would have expressed strong opposition to their rich rulers taking multiple wives, and rallied to condemn this behavior. Others have articulated this hypothesis before (e.g. Henrich, Boyd, and Richerson, 2012), but this study provides some useful background evidence for its plausibility. If you’re a man farming away in a Neolithic village under fairly awful living conditions, you might be able to tolerate paying taxes to your overlord despite his nice villa on top of the hill. But what if he has 6 wives and your daughter is one of them? Perhaps there might be an ‘outrage threshold’ we need to think about alongside the polygyny threshold model.
A while ago I was conducting a meta-analysis on diachronic variation in cranial volume measurements for different East Asian populations. I got into the project after being inspired by Lynn’s suggestion that the Flynn Effect is primarily nutritional, as presumably this would also show an effect on height, head size, and thus intracranial volume (ICV). It could even be possible to control for ICV changes to show only the direct change in IQ over time, which would be a more rigorous way to compare Flynn Effect magnitudes. What a great idea, I thought.
Unfortunately as I got deeper into the project, I realized that virtually all of my data couldn’t be sourced back to any obtainable research. A lot of numbers were sourced from papers that were only available in physical archives in Japan or Korea, so I did what any good researcher does and promptly gave up.
However, even with the limit to my data integrity acknowledged, I was still able to get some plots of the cranial volume measurements that show some interesting results. Here’s a scatterplot I built using values from all studies I could find; the x-axis shows the year of the study’s publication and not the date of subject collection, expiry, or measurement. Note that all datapoints are n-unweighted (raw averages).
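For anyone who wants to rebuild a plot like this, here is a minimal sketch assuming a hypothetical CSV (icv_studies.csv, with columns study_year, mean_icv_cc, and sex); the file and column names are illustrative, not my actual dataset.

```python
# Illustrative reconstruction of the scatterplot described above.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("icv_studies.csv")  # hypothetical file: study_year, mean_icv_cc, sex

for sex, colour in [("M", "tab:blue"), ("F", "tab:red"), ("unknown", "gray")]:
    sub = df[df["sex"] == sex]
    plt.scatter(sub["study_year"], sub["mean_icv_cc"], label=sex, color=colour)

plt.xlabel("Year of publication")            # not the date of subject collection or measurement
plt.ylabel("Mean intracranial volume (cc)")  # n-unweighted study means
plt.legend()
plt.show()
```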
While initially there doesn’t seem to be much to say about this graph, it’s noteworthy that there don’t appear to be (when plotted in this form) any outliers, but rather two relatively clear clusters by sex. The female cluster is far less tightly grouped than the male one (which is why its trendline, shown below, fails to reach significance), but the two clusters occupy mutually exclusive ranges, which is interesting. Since all of the studies here had sample sizes above 20, we’d expect the study means to be roughly normally distributed, meaning that the group-level sex difference in ICV is likely very reliable. This is of course no surprise, seeing as ICV is effectively a function of body size and height, which vary by sex in the same way. This becomes more obvious when we ignore the insignificant ethnic variation and look purely at the sex differences:
Overall it is fairly clear that ICV as reported in studies is increasing over time. However, note the lack of any female reports going back beyond the early 1920s? This is a sign that our data is shitty. For many of the earlier studies no sex information was present, and although in some cases I was able to make informed estimates based on the averages (i.e. a mean ICV value close to 1500 is almost certainly male or mostly male in content), it was never really certain how accurate or representative the values might be. However, if we are correct in inferring that earlier studies likely did not disambiguate male and female samples (as opposed to exclusively using male samples), then the actual observed gap would be even greater, since the female values would go down and the male ones would go up. This of course would throw the trendline for diachronic variation into doubt, which is currently strong (p < 0.05) when excluding females.
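For transparency, the trendline test is nothing fancier than an ordinary least-squares fit of study-mean ICV on publication year; here is a sketch, continuing the hypothetical file from the snippet above.

```python
# Illustrative only: OLS fit of study-mean ICV on publication year, males only.
import pandas as pd
from scipy.stats import linregress

df = pd.read_csv("icv_studies.csv")   # same hypothetical file as above
males = df[df["sex"] == "M"]

fit = linregress(males["study_year"], males["mean_icv_cc"])
print(f"slope = {fit.slope:.2f} cc/year, p = {fit.pvalue:.4f}")

# Note: this treats every study mean equally (n-unweighted); weighting by
# sample size would be the more defensible choice if the n values were reliable.
```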
A while ago I was doing a research project on converts to Islam and their motivations through qualitative self-reports, which I collated into a series of ‘types’ or narrative trends. Here’s what I found.
Identity
For many in the increasingly multicultural, heterogeneous, disoriented West, Islam provides a specific identity by which they can define themselves and their relationships with other entities in the world.
A young guy at my gym (16) who recently converted to Islam is an example; he is the son of a white father who abandoned his black immigrant mother before he was born, and he described to me the difficulties of identification at school. He was being raised by a black mother in a black neighborhood, but looks more white. Islam grants him a consistent in-group with which he can find solidarity. He knows absolutely nothing about the theological aspects of Islam and hasn’t read the Qur’an, but takes pride in his new identity as a Muslim and often drops standard Muslim vocab (hamdulillah, mashallah, etc.).
He displays his newly obtained insider status frequently, saying ‘we’ (Muslims) and ‘you’ (non-Muslims) even though he only converted a few months ago. Previously, as a ‘white’-looking kid who was also a product of a black subculture, there were presumably fewer individuals he could feel connected with in the same way.
Philosophy
The West is increasingly irreligious, but the human need for spirituality remains a motivating factor in many of our lives. We naturally seek out something to fill that void. For many, Christianity seems ‘weak’ and ‘feminine’ in comparison to a much more ‘muscular’ and uncompromising Islam. The level of piety displayed by self-declared Christians, many of whom do not go to church and the vast majority of whom do not do anything ‘Christian’ beyond that, might also seem ‘shallow’ in comparison to Muslims, who (ostentatiously) display strong degrees of religious attachment and faith.
For some, then, a quest for spirituality and a desire to know the ‘truth’ of existence can drive them towards religion; Islam will often be the obvious choice due to its constant proselytizing, its muscularity and universal presence, and the unwavering faith (and total lack of doubt) of the vast majority of its adherents. As a convert, I myself fell into this category.
Purpose
Islam is a religion, but one that mandates a specific way of life for its adherents in a way that no other organized religion does. While this is authoritarian, many people in the West, especially young people, feel that this gives them a sense of purpose; they are no longer sleeping in until 1 in the afternoon on weekends, but waking up for fajr at 5 AM.
Not only that, but they have a clear template for how to live their lives; do this, do that, and things will all turn out fine. For many, these firm guidelines bestow upon their lives a sense of drive that would otherwise be lacking. There are also clear tiers of accomplishment and progression that allow someone to feel successful; memorizing surahs, for example, or improving one’s tajweed gives a sense of achievement that is otherwise lacking for many people, especially those in dead-end jobs or careers.
Native-born women are the largest group of converts to Islam in Europe. Islam assigns women a clear role as homemakers submissive to their husbands (Qur’an 4:34), and although this may seem counterintuitive, some women (not only underachievers) are quite happy to be just that, and feel uncomfortable with the increased burden of responsibility placed upon them by feminism and the equality of the sexes, which demands they do more. For them, Islam provides a more ‘traditional’ template which not only does not punish them for forgoing a career and being a stay-at-home mom, but in fact rewards them for it.
Others
I feel like narratives around conversion in prisons are also worth examining, as such conversion is happening at an alarming rate in the UK and often results in the emergence of religious gangs within prisons openly espousing extremist views, which can impede reintegration. I’m currently reading more on this phenomenon.
Another type, the ‘decolonialist’ type of conversion, is observed particularly in the United States, where Islam has been seen as a more authentically ‘African’ religion, ever since the 1930s with the founding of the Nation of Islam movement. Prominent individuals like Malcolm X have popularized this model, and many black Americans feel that they have a collective grievance against Christianity for slavery which is best solved by turning to Islam. The MASSIVE increase in appropriated names of Arabic origin in the black American community is symptomatic of this trend.
Before I became more biologically oriented in my thinking, I used to be super into ideologies – not any particular ideologies, but the very idea of ideologies as a category. It’s virtually impossible to have anything approximating a scientific discussion about ideologies, because even the very word is a semantically and semiotically loaded cultural construct; as Žižek says, we cannot look at ideology as we inhabit ideology. However, there are still a few interesting things to say about the topic. Take the following example:
This is one of the goofy graphs I made on the topic, that just so happens to encapsulate an interesting idea (ignore the arbitrarily numbered axes for a moment). The question is – who benefits most under a specific ideology? What the curve shows is that an authoritarian society will provide maximal benefit to those unable to coordinate their own affairs, simply because in such a society your affairs are already managed for you. However, strong-willed thinkers and creative types will probably suffer in such conditions, lacking autonomy over their ideas and self-expression.
An individualist society, by contrast, will provide maximal benefit to those who have the aptitude to organize their own affairs and excel in absence of supervision. Yet such a society will necessarily provide less assistance to those who are less able to organize their own affairs – some of whom will be desperately in need of such assistance.
So which society is better? One constant feature of human sociality is that there rarely appears to be anything that is objectively ‘better’ than anything else; everything comes down to tradeoffs and opportunity costs. So if you’re Nikola Tesla you’ll probably do better in the individualistic United States, but if you’re a lonely depressed person suffering from the atheistic anomie that seems to grip Western civilization as of late, then you’d likely be better off staying in Serbia. At least homes are cheap there.
This is one of the most idiotic views I’ve ever seen advocated by tenured staff at a Western university. Let me explain why.
Let’s think about this logic for a moment. The statement above effectively states the following: since the decision as to whether or not you will eat the dessert is out of your control (proposition) therefore you are absolved of responsibility for it, and should not feel bad (conclusion).
To assess this proposition, we first have to settle on a definition of ‘control’ – which is far trickier than it seems. In this context people typically define ‘control’ in relation to various free-will arguments, which can be broadly dichotomized into ‘pragmatist’ and ‘logico-determinist’ categories. Logico-determinists such as Sam Harris would state that no neurobiological evidence whatsoever exists to support any mechanism isomorphic to free will or free choice, either for humans or for any other organisms, and that as such we are not technically in control of anything. Pragmatists would broadly agree with these arguments, but would protest that such argumentation tautologically precludes the meaningfulness of free will as a term, which is contradicted by the semantic significance and lexical omnipresence of the term for modern humans.
I think that whichever view of free will or control we take, the proposition outlined above by the McGill Psychology Department is almost certainly true, but in a far broader sense than you might realize. We live in a world where our actions are contextualized by the sum total of all actions that preceded them, which effectively means that actions are really reactions, and are thus determined by that context. As such, when you inevitably act in accordance with the predispositions encoded into your neural hardware, your brain has in fact “already decided for you,” as per the statement above.
But what about the conclusion? Should this really absolve you of responsibility? If we say that you aren’t responsible for your actions because they were predestined by your brain, then what about the case of rape? It’s not uncommon to hear, in courtrooms, defendants talk about ‘not being able to control themselves’. The Iraqi refugee who raped a 12-year-old boy in an Austrian swimming pool a few years back defended his actions as the inescapable consequence of a “sexual emergency.”
Murder is another case we might examine. The circumstances typically resulting in physical conflict trigger a ‘fight-or-flight’ response in us, activating our endocrine systems and pumping our bodies full of chemicals such as adrenaline that cloud judgment and promote physical action. In such a situation, it’s also possible to say that your brain made the decision for you, precluding free will.
Now ask yourself – what difference does it make if some (or even all) murders and rapes do lie outside of the realm of our control? Are we going to start judging rapists in courts of law on the basis of their proclivity to hypersexuality, refusing to jail any rapists if indeed we see that their brains ‘already decided’ for them? Murder, too? If any country were ever to embark on such an endeavor with the propositions above as guiding axioms, they would quickly discover it impossible to judge any person guilty for any crime.
What am I getting at here? The conclusion you’re seeing above is horrifically wrong and terribly dangerous. A deterministic worldview may be scientifically valid, but this does not – and cannot – preclude holding individuals responsible for their actions. The consequences if we don’t – a world in which murderers, thieves, rapists, and worse are not held culpable due to the inevitable influence of their biology and socialization on their behavior – are far too grave to accept.
Even though our actions may not be subject to the kind of ‘free will module’ that proponents imply, there is no reason to suggest that consequences to these actions should not follow. No basis exists to suggest that where agency in actions is absent, reactions should not follow from those actions. All of which is to say: when actions are determined, consequences must be too.
Many people have asked me, since my legal name is Wael Taji Miller, why I publish as Wael Taji. The answer is actually very simple.
Firstly, it’s probably worth mentioning that I have used many different names in my life. Living in China, one has to have a Chinese name, for example (马维尔, if you’re wondering). Yet this is totally different to my (probably unnecessary) Japanese name that I used when I was living and working there (孫威史, for the curious). In hindsight, I probably just wanted to have a kanji for my name because I thought they were cool. This is why for me, working under a name that is different than my strictly legal one doesn’t feel weird – I’ve written and published content using five or more aliases already in my lifetime, usually to suit the cultural or linguistic context, but on rare occasions also to preserve anonymity.
Of course, that doesn’t answer the question: why use a different name now? Again, the answer is exceedingly simple.
Searching for ‘Miller’ in CiteSeer, which indexes citations and author names, we can see over 25,000 results. This is unsurprising, given that the Yiddo-Anglo-Germanic nomen is the third most common surname for Jews in the United States, and the 7th most popular for non-Jews. Because both Jews and Americans (by no means mutually exclusive categories) are powerhouse output demographics for global English-language scholarship, Bayesian inference would make this a predictable result.
This would be a non-issue had I not decided at the age of 12 that I wanted to become a professor. It became an even greater issue when I shelved my previous plans of writing books on culture and philosophy in Japanese in order to become a scientist, working in English. The barrier faced by academic researchers whose names are too common to make them identifiable even within highly specialized fields is widely recognized. A ‘career tips’ section from one university notes:
Consider using a name identifier, particularly if you have a common name, to help people distinguish your research from others and help stop any of your citations from ‘escaping’. The recommended name identifier scheme is ORCID, which works with both Scopus and Thomson Reuters’ Web of Science to allow researchers to ‘claim’ their publications
ORCID is a great system; I have an ORCID ID and use it for my journal submissions. But it’s long, clumsy, and impossible to use in informal language. “Hey man – did you read that article by 0000-0003-0166-248X?” just doesn’t have the same ring to it, at least to me. One solution many researchers resort to is publishing under a middle name, which is convenient since mine is technically also a surname anyway.
As you can see here, Taji is a vastly more recognizable name in English-language scholarship. Ironically, many of the Tajis listed here are actually Japanese, but alternate searches using more recognizably Arabic permutations such as ‘Eltaji’ or ‘Al-Taji’ amount to less than 10 results. The difference becomes most stark when using Google Ngram viewer, as you can see here:
So even despite a surprising amount of Taji or Al-Taji activity during the mid-1900s, it would appear that Miller remains the far more widely used surname. It’s also not recognizably Coptic, which becomes bothersome when I write about Christian theology or philosophy referencing Arabic sources and am bombarded by frenetic demands to justify my Ashkenazi surname.
Since you’ve probably been brought here out of curiosity, I hope that this explanation managed to answer all your questions.
P.S: For the etymologically curious, tāj is an Arabic word meaning ‘crown’ that is lexically productive as a loanword in Farsi, Urdu, Pashto, and probably other languages too. As an adjective, tājī essentially means ‘crownly’ or ‘kingly,’ but isn’t commonly used. In both forms, the word is found in personal names (both forenames and surnames) as well as in place names throughout the greater Middle East (think Taj Mahal, or ‘crown of the palace’). It’s also the name of a teeny little bird in Farsi.
Enemy is one of the most stunning films I’ve seen this year. Its genius is composite and gestaltic; it lies in the mind-blowing script of Gullón, the paradisal and dystopic direction of Villeneuve, and the compelling yet disturbing acting by Gyllenhaal.
While the film has received near universal acclaim, the plot and its incomprehensibility to many viewers has presented an interpretative problem that has spawned numerous analyses online. While not intending to sound solipsistic, Enemy truly spoke to me in a way few other films did, and as such my understanding is somewhat different to the majority of these reviews. Because this is an analysis as well as a review, none of it will make sense to you if you haven’t seen the film, so I strongly recommend that you go rent it now. What follows is my own analysis of Enemy, starting with our character pairs.
Adam/Mary – history professor and his girlfriend.
Anthony/Helen – actor and his wife.
Villeneuve tells us that the film is about dictatorship, and this is true. At the very start of the film we hear a lecture about dictatorships – a subject which history professor Adam (Gyllenhaal) happens to specialize in.
What dictatorships or totalitarian systems do – and hereafter I want to use the latter – is to subjugate people. But not only do totalitarian systems oppress people – they also suppress awareness of this subjugation, which can happen in a number of ways. As the film begins, Adam tells his students that in Ancient Rome, the government sponsored bread and circuses for the people in order to reduce dissent. Bread and circuses are at their core a type of entertainment. Modern governments act similarly to limit dissent in different ways, we are told – yet the focus on ‘entertainment’ will be relevant later on.
From here onwards we’ll be jumping around a bit in our analysis, but I’ll break the news to you first: Adam and Anthony are one and the same. Bear with me for a little while longer. When Adam and Anthony meet in the hotel, Anthony explains the presence of a scar on his stomach, asking Adam if he has one too. Adam recoils in fear and horror, and flees the hotel. How might we understand this scene?
There is no scar on Adam’s stomach, because that scar appeared when Adam/Anthony (hereafter ‘AA’ to refer to both) was in a car crash – something which also resulted in Mary’s death. Mary was the girlfriend of AA, or at least his mistress, while he was already with Helen. The scene where AA gets his scar, and Mary is killed in a car crash, is marked by a spiderweb on the windshield of the car. Spiders live inside webs.
Consider spiders for a moment. The spiders in this film are not some loose analogy for dictatorships or other systems of the political kind. But they do represent a totalitarian system. They are avatars of memory, and of the totalitarian control that traumatic memories of the past have over people’s minds. Consider your own memories – of trauma, of bullying, of the dissolution of a romantic relationship – and how inescapable they often feel. With this established, things slowly begin to come clear.
At the very start of the film – and remember that it isn’t in chronological order, so this is actually the ending scene – AA is in a club [entertainment] and a beautiful attractive waitress crushes a spider. AA is crushing the pain of his own memories, by seeking entertainment. This liberates him from the totalitarian control they exert over him. What memories, you ask?
The answer is given earlier on, when Adam has a dream of walking down a hallway, passing by a woman who is at once also a spider. He wakes up, and sees a woman whose hair matches the spider pattern. This is another factual memory – it is one of the other girls that Anthony cheated on his wife with. Anthony has a serious problem with commitment and infidelity, and is repeatedly unfaithful – not just to his partner.
The spider – forever the avatar of the oppressive memories of the past – swaggers over Adam/Anthony’s hometown of Toronto
What about the giant spider walking over the entire city? This is explained by the poster for the film, which shows that same spider in the city right over Anthony (who we are able to positively identify because of his jacket) but also inside his head. This is key. The spider (the gargantuan weight of his traumatic memories) is above him, but it is also inside him; its influences on his life are pervasive and absolute like some totalitarian dictator – but these memories, like the spider, are also an integral part of his very being. His experiences are in his head to stay.
At the very end of the movie, Adam has taken the place of Anthony. He goes into the bedroom to see Helen, and is greeted by a horrifyingly giant spider. Why? Because his memories are coming back to haunt him. So what exactly does this mean?
The entire film is a story of Anthony being confronted by his past. Adam is a representative of this past, and his role as a teacher of HISTORY attests to the fact. Adam represents a past version of Anthony that was unfaithful to Helen. Now Adam was meek and quiet, even to the point that he allowed himself to be cuckolded by Anthony. How can we say that he was a representation of Anthony’s unfaithfulness?
The answer is simple; Adam is an unmanly coward, and unfaithfulness is a form of cowardice, or a lack of living up to one’s responsibilities as a man. This is attested to when Anthony is in the car with Mary who he has just tried to fuck – just before they crash, he says to her ‘You think I’m not a man?’ She DOES think he’s not a man – because she has just discovered his unfaithfulness, which precipitates their dramatic exit from the hotel. In this way, Adam’s weakness/unmanliness facilitates Anthony’s unfaithfulness – Adam does not stop Anthony from fucking his own girlfriend, because he represents Anthony’s own past weakness (and thus, his weak masculinity).
That scene where Anthony is having sex with Mary, who then freaks out when she sees the wedding ring [marks?] on his finger? This was real. Mary did not realize that Anthony was married. The freakout did indeed happen. The crash did indeed happen. And Mary died, and Anthony was injured as a result – giving us the scar from earlier.
The windshield after AA’s car crash forms a delicate spiderweb – all is connected
Why is Anthony an actor? Because he is acting out the horrors of the past in his head. The nightmares are all his. The gargantuan spider in the final scene looks poised to consume Adam, who by this point has assumed the role of Anthony before he enters the bedroom. It is memories of sinfulness, and the weight of his guilt, which seem ready to devour him.
Adam IS Anthony, so Adam allowing Anthony to fuck his girlfriend represents AA’s weak, sensitive, humanistic side failing to exert control over his brash, Dionysian side. This failure resulted in the death of Mary and nearly also the destruction of his marriage. This haunts him to this very day.
Simplified, it looks something like this:
AA are/is one person
The spiders are his memories and the weight of his guilt
They constitute a totalitarian system which holds him down and oppresses him, and dictates to him his actions; they force him to continuously remember the past
Adam, a history professor (the past) represents the past AA – he is weak and generally a shitty person. He allows himself to cheat on his own wife, because he is weak/unmanly/a coward.
Adam’s weakness leads to the death of Mary, his girlfriend and mistress. It gives him a scar, which stays with him.
AA suppresses this memory by entertainment, such as by attending clubs where the spider (his past) is crushed. But it keeps coming back to haunt him.
The spider is a totalitarian system above him (in terms of its control over him) but inside him (as it is constituted by his memories).
The film ends with AA having gone through all of the memories of this traumatic past. A gargantuan spider shows us how AA is confronted by his memories, and thus his own guilt and shame, when he goes into his wife’s bedroom.
As viewers, we are not told whether AA’s suppression of his past, his memories, and his guilt is successful or not. We do not know whether he stays with Helen, or what her transformation into his guilty conscience might entail (perhaps accusing him of another affair).
That is Enemy, and it is undoubtedly Villeneuve’s most impressive film to date.
Before we get into Sam, I’m going to start things off with a postulate by Yuval Harari. Harari is a colleague of Sam’s, the two having spoken on podcasts and referenced each other in written works, so I think it will be interesting to use one of his most powerful statements as a springboard for my demonstration of the incoherence of Sam’s worldview.
Truth and power can travel together only so far. Sooner or later they go their separate ways. If you want power, at some point you will have to spread fictions. If you want to know the truth about the world, at some point you will have to renounce power.
This is a very interesting statement. To understand precisely what Harari means, it is necessary to explain the definitions of both ‘fiction’ and ‘truth’ as used here. To Harari, a ‘fiction’ is a proposition that is accepted by fiat, without underlying backing. We have to understand that Harari is not an individualist, and this may indeed be the most important difference between him and Sam Harris as thinkers. This means that Harari’s ‘fictions’ are not casual statements of mistruth that are shared by individuals, but rather systems-wide simplifications of reality that allow for greater levels of social organization.
Take laws, for instance. Laws are perhaps the prototypical examples of Harari’s fictions in operation (while he would likely demur in favor of God or money, if I discuss these here I’ll risk repeating myself later on). Laws are not grounded in the laws of physics, or the properties of physical matter, or even in our evolutionary biology (not necessarily, at least). Yet by the establishment and acceptance of laws for the regulation of society, higher levels of organizational complexity can be attained. A fiction, in short, is something used on or by a social system to guide it toward certain ends. Dig deep enough into any fiction, and you’ll find an arbitrary proposition at its core.
Truth, by contrast, is the underlying reality of things. I explained ‘fictions’ first because Harari’s ‘truth’ can in a sense be defined simply as ‘that which contradicts a fiction’. An obvious yet brutal example of a truth is the following: you and everyone you know are going to die.
Why does that matter? Because within the context of our social systems, we act as if this is not the case. We try our best to save people from death when possible, and to prolong life through whatever means available. We behave as if it is a tragedy, rather than an inevitability, when a large number of people are killed by something unexpected, and in doing so reinforce the idea that death is unnatural, is unnecessary, and (deontologically) ought to be prevented.
This systems-wide behavior reveals a collective ‘fiction’ (strictly in the Hararian sense, of course): we are engaged in a process of collective self-deceit in order to regulate society so as to allow for greater levels of social organization. Without anathematizing death, there is no basis for preventing people from dying, nor for punishing acts that induce death (such as murder). Obviously, any system in which murder is unobjectionable will find it exceedingly difficult to motivate its members to achieve complex tasks, because this generally requires cooperation, which is impossible if prosocially-defective behaviors as extreme as murder are unpreventable (the city of Chicago and its extremely low clearance rate for homicide is exemplary).
With this explained and Harari’s truths and fictions in mind, let’s get back to Sam Harris. Listen to this clip (set to play from 19m 23s) or read the transcript below if you’re short on time. I transcribed the clip carefully and removed some irrelevant muttering, but all errors are (obviously) my own.
Woman: So I have kids, you have kids. Right? Do you have kids?
Sam: Yep. Yep.
Eric Weinstein: We all do.
Woman: All have kids, great. So when it comes to free will, I get it. I’m completely on board, Sam, with your idea that there’s no free will.
Sam: Yep.
Woman: When it comes to raising kids, wh-
S: Don’t tell them. Don’t tell them th- [inaudible]
*AUDIENCE LAUGHS*
There’s a lot to say at this point, but it would be malapropos to cut the remaining context. Let’s hear Sam out. Cont:
W: Sorry but – I have an 18 year old boy, who’s… y’know, gorgeous. And when I’m trying to tell him to do the right thing, and he does something stupid… and then I wanna find out why he did that, I don’t even ask, cuz it’s a stupid question. Cuz he doesn’t even know why he did it, cuz he’s an 18-year-old boy. But when I’m looking at impacting his future behavior, where’s the practical separation between knowing… that there’s really no free will, and wanting your children to be responsible in their behavior and what they do in the world.
S: Okay… Well, this is an important question-
*APPLAUSE*
S: I think that there are many false assumptions about what it must mean to think that there’s no free will. I think there’s no free will, but I think that effort is incredibly important. I mean, you can’t wait around… I think the example I gave in my book is, well, if you wanna learn Chinese, you can’t just wait around to see if you learn it. It’s not gonna happen to you. There’s a way to learn Chinese, and you have to do the things you do to learn Chinese. Every skill or system of knowledge you can master is like that, and getting off of drugs is like that, and getting into shape is like that, and straightening out your life in any way that it’s crooked is like that. But the recognition that you didn’t make yourself, and that you are exactly as you are at this moment because the universe is as it is in this moment has a flip side, which is… you don’t know how fully you can be changed in the next moment, by good company, and good conversations, and reading good books, and… you don’t know, what you – you are an open system. It’s just a simple fact that people can radically change themselves. You’re not condemned to be who you were yesterday.
There’s a little more, but I think this is a good place to pause.
For the most part, Sam Harris is an incredibly rigorous, logical, and consistent thinker. After rejecting Islam I was drawn to Sam because he not only took the axiomatic standpoint of atheism, but also explored the implications and consequences of atheist realism by trying to develop what is essentially his own ‘atheist ethic’. His books ‘Lying’ and ‘Free Will’ are not just his own thoughts directed at a public audience; they’re actually a series of meditations through which Sam challenges himself, revisiting areas that had long been considered ‘dead-ends’ – areas wherein the repercussions of atheism (irrespective of its analytical correctness) are so deeply negative that in the final analysis pursuing such a worldview is simply not worth the trouble. In this respect, his philosophy is even comparable to Kantian deontology, which constitutes a similar attempt at grounding human morality in logic and proofs. Not bad, Sam. Not bad.
But nothing in Sam’s credentials can resolve the discrepancy we see in this conversation between Sam’s philosophical idealism (what he calls ‘moral realism’) and his stated claims. Obviously we can let Sam off for the “Don’t tell them” line – it was a joke, and the audience got that. But as he goes on, we see the ethos within “Don’t tell them” repeated, explicated, and justified.
According to Sam, suppressing the ‘truth’ (that free will is an illusion) can at times lead to the actualization of greater potential. Obviously, we all know this already; at the most basic level, that is what religions do. By gathering the local township for collective prayer, meditation, and socialization, religious organizations have since time immemorial been using what Sam and Harari would call a ‘fiction’ to better people’s lives and endow them with a sense of meaning, purpose, and spiritual fulfillment. Yes, I am aware that the same mechanism can be used for harm as well – that’s obvious, and unrelated to my point. The key here is that compromising the ‘truth’ in order to attain Hararian ‘empowerment’ is precisely what Sam has criticized religion for.
This is why Sam’s declaration that truth ought to be sacrificed in the service of empowerment (at least some of the time) is of such fundamental importance. His suggestion that we should utilize fictions in order to improve our social reality is devastating for moral realism, because it means that his desired system of social organization is essentially a religion, by virtue of operating along the same principles. By acknowledging that some aspects of reality (truth) should be set aside in the name of functionality (power), Sam accepts the legitimacy of moral systems that uphold fictions for the good of their adherents. It is not just that he acknowledges that this is possible – he actually suggests that it ought to be pursued.
For what it’s worth, Sam himself admits this difficulty later in the video, conceding that what you say to people should be ‘true and useful’. But if it is valid and correct to use baseless fictions (such as the existence of free will) in order to better our lives, then we are instead promoting what is ‘useful and not true’. At this point we are effectively ruling out the possibility that there are ‘right’ or ‘wrong’ modes of social organization, since the discussion now moves to what level of usefulness justifies the abnegation of truth, and so on and so forth. The question ‘which moral system is correct?’ becomes ‘which moral systems balance fiction and power appropriately?’ Far from moral realism, this perspective is so blatantly pragmatist that William James himself could have written it. Moreover, Sam’s dismissal of the validity of religious systems on the grounds of their unrelatedness to material reality now seems positively hypocritical in light of his advocacy for those very same methods.
In short, if we accept Sam’s proposition, then it is meaningless to strive for the creation of a social system upholding an ‘objective’ or ‘correct’ moral reality. Instead, the question that then results is: ‘to what extent must we sacrifice the truth in order to attain the truth’. Needless to say, such a question – at least from Sam’s own moral realist perspective – is utterly incoherent.
This is a repost of an article originally appearing in Quillette.
On Sunday 28 April 1996, Martin Bryant was awoken by his alarm at 6am. He said goodbye to his girlfriend as she left the house, ate some breakfast, and set the burglar alarm before leaving his Hobart residence, as usual. He stopped briefly to purchase a coffee in the small town of Forcett, where he asked the cashier to “boil the kettle less time.” He then drove to the nearby town of Port Arthur, originally a colonial-era convict settlement populated only by a few hundred people. It was here that Bryant would go on to use the two rifles and a shotgun stashed inside a sports bag on the passenger seat of his car to perpetrate the worst massacre in modern Australian history. By the time it was over, 35 people were dead and a further 23 were left wounded.
Astoundingly, Bryant was caught alive. He was arrested fleeing a fire at the house into which he had barricaded himself during a shootout with the police. He later pled guilty to a list of charges described as “unprecedented” by the presiding judge, and was sentenced to life in prison without the possibility of parole, thus sparing his victims and other survivors the suffering (and perhaps the catharsis) of a protracted trial. Yet, in spite of his guilty plea, Bryant did not take the opportunity provided by his official statement to offer any motive for his atrocities. Instead, he joked “I’m sure you’ll find the person who caused all this,” before mouthing the word ‘me.’ Intense media speculation followed, the main focus of which was Bryant’s history of behavioral difficulties. These were offered as possible evidence of a psychiatric disorder such as schizophrenia (which would have been far from sufficient to serve as a causal explanation for his crimes). However, the most notable and concrete fact of Bryant’s psychological condition was his extremely low IQ of 66—well within the range for mental disability.
Lascar Monument to Port Arthur massacre (wikicommons)
IQ scores are classified in a number of ways, all of which are broadly similar. The Wechsler Adult Intelligence Scale (WAIS-IV) establishes seven categories of IQ scores. Most of us fall into the ‘Average’ band, constituted by the 90-109 range. Those achieving scores of 130 or higher are considered ‘Very Superior.’ Conversely, scores of 69 and under are classified as ‘Extremely Low,’ and automatically qualify the scorer for a diagnosis of ‘mild retardation’ according to the APA’s Diagnostic and Statistical Manual of Mental Disorders. It is into this band that Bryant’s score falls.
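For concreteness, the three bands mentioned above can be written as a simple lookup; this is a toy sketch that lumps the intermediate WAIS-IV bands together rather than reproducing the full classification.

```python
# Toy lookup covering only the bands quoted in the text above.
def wais_band(score: int) -> str:
    if score <= 69:
        return "Extremely Low"
    if score >= 130:
        return "Very Superior"
    if 90 <= score <= 109:
        return "Average"
    return "an intermediate WAIS-IV band"

print(wais_band(66))  # Bryant's reported score -> "Extremely Low"
```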
The connection between intelligence and behavioral problems, such as Conduct Disorder (CD) or Antisocial Personality Disorder (APD), was well-known around the time of the Port Arthur Massacre. A review by biostatistician and UCSD professor Sonia Jain cites contemporaneous studies to suggest that low IQ scores in childhood should be considered a risk factor for APD and CD. In 2010, several psychologists published results from a longitudinal study containing data on over a million Swedish men, who were tracked from conscription for a little over 20 years. They found that IQ scores tested during conscription were a significant and robust predictor, not only for APD or CD, but for all categories of mental disorders. Conscripts with low IQ were substantially more likely to be diagnosed with one or more mental disorders, to suffer from mood and personality disorders, and to be hospitalized for mental illness. Those in the lowest band—like Bryant—were most at risk of severe psychological disorders.
a & b: Hazard ratios for admission for different categories of psychiatric disorder according to a nine-point IQ scale. The highest IQ score (coded as 9) is the reference group. Estimates are adjusted for age at conscription, birth year, conscription testing centre, parental age and parental socioeconomic status. (n=1,049,663)
While these correlations are concerning, they do not offer an explanation for Bryant’s atrocities. In a population where intelligence is normally distributed with a mean of 100, a little over two percent of people would attain IQ scores close to Bryant’s. A further 15 percent would receive IQ scores somewhere below 84—well beyond the threshold for disqualification in the Armed Forces Qualification Test (AFQT), used to determine suitability for admission into the US Army until 1980. Careers for those below this level are extremely rare—a fact that might help explain the correlation between low IQ and an enhanced risk of criminal offending, given the scarcity of well-paid jobs for those with an IQ of below 84.
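Those proportions follow from the usual assumption that IQ is normally distributed with a mean of 100 and a standard deviation of 15; here is a quick check (the exact figures shift by a point or two depending on whether the cutoff is drawn at 84 or 85).

```python
# Tail areas of a normal distribution with mean 100 and SD 15.
from scipy.stats import norm

p_below_70 = norm.cdf(70, loc=100, scale=15)   # ~0.023: scores in Bryant's range
p_below_84 = norm.cdf(84, loc=100, scale=15)   # ~0.143: everyone below the AFQT-style cutoff
print(p_below_70, p_below_84 - p_below_70)     # the difference is the share between 70 and 84
```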
But much of this correlation is due to street-level petty and violent offenses, not mass murder, and it would be abhorrent—obscene, even—to suggest that people with a low IQ should be treated with suspicion, or as murderers-in-waiting. In almost all cases, these individuals pose a risk to no one but themselves, and are more likely to fall prey to victimization by others. On the other hand, it is equally irresponsible to ignore the specific difficulties which those with a low IQ face. The consequences of this wishful thinking—however noble in intent—can be devastating.
Perhaps the best example is offered by the recent history of the Cold War. While the American military-industrial complex was sufficiently sophisticated to provide the United States with all the arms and armaments it could possibly hope for, there are always things that money cannot buy. In this case, it was bodies—young American men needed to fight on the ground in 1960s-era Vietnam, where they found the most unforgiving of battlefields among the region’s unassuming jungles and innocuous rice paddies. The unusually high attrition rate of soldiers posted there, as well as the frequent use of student deferments or feigned illness to dodge the draft (President Trump’s excuse of ‘temporary bone spurs’ constitutes one particularly famous example), resulted in a shortage of men which meant that more troops were needed than the nation was able to supply.
The government was confounded by this problem for some time, attempting halfhearted crackdowns on draft dodgers as a temporary solution, until Secretary of Defense Robert McNamara arrived at a more permanent workaround. The US government would draft men whose low IQ scores had hitherto disqualified them from military service. This stratagem—codenamed ‘Project 100,000’—is detailed along with its dreadful consequences in the book McNamara’s Folly by the late Hamilton Gregory. Gregory witnessed the fate of the low-IQ draftees firsthand while he was a soldier in Vietnam. These draftees—cruelly nicknamed ‘McNamara’s Morons’—were generally capable of completing simple tasks, but even a simple task imperfectly executed can be disastrous in warfare.
US Secretary of Defense Robert McNamara at a press conference on Vietnam (26 April, 1965)
A case study in the book is ‘Jerry’ (not his real name). Jerry was a draftee from the 100,000 who had been assigned guard duty in a camp by the Quan Loi Green Line. Jerry’s task was to challenge anyone approaching the camp by calling: “Halt! Who goes there?” followed by “Advance and be recognized!” once a response had been obtained. This task was minimally demanding due to the clearly visible differences between an American soldier and the average Vietcong guerrilla. But when a well-liked American officer returned to camp, Jerry bungled his instructions. Upon seeing the officer approaching, he yelled “Halt!” and then opened fire, killing the man where he stood. Jerry subsequently disappeared in what was either an act of remorseful abscondence or murder by outraged members of his battalion. In another case described by Gregory, one of the ‘morons’ played a joke on his squadmates by throwing a disarmed hand grenade at them. Despite being beaten up for it, he found this prank so amusing that he repeated it every day until the inevitable happened; he forgot to disarm the grenade, causing the deaths of two soldiers and the grievous wounding of several more.
What happened to many of the 100,000 (whose actual total exceeded 350,000) is not hard to predict. “To survive in combat you had to be smart,” Gregory writes. “You had to know how to use your rifle effectively and keep it clean and operable, how to navigate through jungles and rice paddies without alerting the enemy, and how to communicate and cooperate with other members of your team.” Fulfilling all or any one of these minimum requirements for survival in a battlefield is contingent upon a certain level of verbal and visuospatial intelligence, which many of McNamara’s draftees did not possess. This ultimately led to their fatality rate in Vietnam exceeding that of other GIs by a factor of three.
The danger of physical harm faced by those with a low IQ is not restricted to the battlefield. A 2016 study by four psychologists using data from the Danish Conscription Database (containing 728,160 men) revealed low IQ to be a risk factor for almost all causes of death. A drop in IQ by a single standard deviation (roughly 15 points) was associated with a 28 percent increase in mortality risk. The association between low IQ and mortality was particularly great for homicide and respiratory disease (such as lung cancer). The high homicide rate could reflect a predisposition for those of low IQ to find themselves in dangerous situations, perhaps due to a lack of economic opportunity or an increased likelihood of being victimized by predatory individuals. Similar features could explain the prevalence of respiratory disease, which may be a product of high rates of smoking as well as a greater likelihood of inhabiting more polluted industrial areas where it’s easier to find low-skilled work. Clearly, being born with a low IQ is sufficient to set one up for an unlucky and unhappy life. But could low IQ have contributed to—not explain, but be a factor in—the massacre committed by Martin Bryant?
To answer this question, we have to transcend mere correlations between IQ and different types of outcome, and consider that those with low IQ are much more likely to experience misfortune in seemingly every endeavor. Having intelligence is what allows us to operate in the world—both on our own, and within the societies we inhabit. Those lucky enough to have a high IQ have an easier time dispatching the various challenges they face, and thus naturally rise within hierarchies of competence. We can imagine any number of these hierarchies, most of which are unimportant (the hierarchy of Rubik’s Cube solving speed, for example, is probably irrelevant), but all of which require some degree of intelligence. Furthermore, some of these areas of success—such as friendship groups, romantic relationships, and professional employment—are so fundamental to the individual pursuit of happiness that to be unable to progress in them is profoundly damaging to one’s sense of well-being and intrinsic self-worth.
This means that having a low IQ doesn’t only make you more likely to get killed or fall victim to an accident. It also means you’re more likely to undergo difficulties in progressing up every ladder in life. You’ll often feel permanently ‘stuck at zero’—unable to improve or change your position. Most of us will experience this feeling at least a few times in our lives, whether encountered in school (being unable to break the ‘A-grade’), in our social lives (being unable to establish or maintain a successful romantic relationship), or in comparatively trivial areas. Yet most of the time, it is transient—passing when we switch our efforts to a new endeavor, or after devising a way to solve the problem. Very few of us know what it is like to have that feeling almost all of the time—to have a large proportion of one’s attempts at self-betterment or advancement frustrated by forces that seem to be beyond our control. Being trapped in such a dismal psychological state for only a brief interval can lead to anxiety, depression, or dependence. In some, this feeling of ‘being stuck at zero’ (that the world is manifestly unfair and against them) will lead to resentment—and resentment can turn into murderousness.
Martin Bryant’s life, characterized by loneliness, depression, and numerous frustrated attempts at making friends, is replete with examples that follow this pattern. Clearly, his actions mark him as an extreme outlier among those with low IQ—but his troubled life experiences are distressingly representative. Four in 30 children in classrooms across America are made to compete with their peers for grades and university places in spite of low IQ and with little success. And, like them, Bryant found society’s ‘normal’ to be simply unobtainable. Because the role of cognitive ability is de-emphasized in childhood success, and often treated as a function of effort, children in these circumstances can find themselves trying harder than every other child in the classroom, while still being admonished to ‘try harder.’ While wise caregivers abstain from blaming these children outright for their failures, a taboo on acknowledging the importance of intelligence means that low IQ individuals themselves may be unaware of their condition or its full ramifications, making them likely to engage in repeated self-blaming injurious to self-esteem and mental stability.
None of this is to suggest that those with low IQ, or those who experience a period of being ‘stuck’ due to their cognitive limitations, should be viewed as likely to break the law or engage in violent crime. But it’s one possible explanation for the fact that those with a low IQ are more likely to do so than those with an average or high IQ. And the uncomfortable reality is that the resentful, in this case, are somewhat correct in their analysis—they have been set up in a game rigged against them from the very start. Recent research in genomics has confirmed this: a 2018 study in Nature Genetics used genetic data from more than a million individuals to examine the genetic contributions to educational attainment. Such data allow each individual’s genetic propensity for educational attainment to be summarized as a polygenic score (PGS). In this study, those within the highest PGS quintile had around a 50 percent likelihood of graduating from college; those in the lowest quintile, only around 10 percent. Yet none of this difference in ‘genetic quality’ can be accounted for by individual merit or achievement. It is a difference of crucial importance, yet it is determined for us as individuals by luck alone.
Polygenic scores from the 2018 Nature Genetics study
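For readers unfamiliar with the construct, a polygenic score is at bottom just a weighted sum: each trait-associated variant a person carries is multiplied by the effect size estimated for it in the genome-wide association study, and the products are added up. A minimal sketch, with invented numbers (the effect sizes and genotypes below are purely illustrative, not taken from the study):

```python
import numpy as np

# A polygenic score (PGS) is a weighted sum over trait-associated variants:
#   PGS_i = sum_j beta_j * g_ij
# where g_ij is person i's count of the effect allele at variant j (0, 1 or 2)
# and beta_j is the per-allele effect size estimated in the GWAS.
# All numbers here are invented for illustration.

effect_sizes = np.array([0.021, -0.013, 0.008, 0.017])  # hypothetical GWAS betas

genotypes = np.array([
    [2, 0, 1, 1],  # person A: effect-allele counts at the four variants
    [0, 1, 0, 2],  # person B
])

pgs = genotypes @ effect_sizes
print(pgs)  # roughly [0.067, 0.021] -> person A scores higher than person B
```

Real scores sum over a vastly larger number of measured variants, and individuals are then ranked into quintiles within the sample, which is where the study’s 50-percent-versus-10-percent comparison comes from.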
While generous welfare systems in Western countries do provide benefits to those most disadvantaged by the cognitive lottery, a much larger proportion of them do not qualify for any assistance at all. Instead, those with IQs below 84 are often forced into arduous manual labor, since they are unlikely to possess the array of qualifications required for non-manual work. These occupations leave them among the most marginalized in our complex capitalist society—and even those employment opportunities are shrinking under the unrelenting pressure for lower costs and greater efficiency. Job categories like driver, cleaner, and assembly-line worker are rapidly disappearing due to automation, leaving those with low IQ nowhere else to go. While most of us delight in the luxurious comforts heralded by the ongoing automation revolution, these same comforts—such as self-driving cars, autonomous vacuum cleaners, and robotized assembly lines—are poised to render a cognitively vulnerable 15 percent of the population unemployed and unemployable.
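That ‘15 percent’ is not an arbitrary figure: IQ is conventionally normed to a mean of 100 and a standard deviation of 15, so the share of the population below a threshold like 84 can be read straight off the normal distribution. A quick check (using scipy here simply for convenience):

```python
from scipy.stats import norm

# IQ scores are conventionally normed to mean 100, standard deviation 15,
# so the share of the population below a given threshold is the normal CDF
# evaluated at that threshold.
MEAN, SD = 100, 15

for threshold in (84, 85):
    share = norm.cdf(threshold, loc=MEAN, scale=SD)
    print(f"IQ below {threshold}: {share:.1%}")

# IQ below 84: 14.3%
# IQ below 85: 15.9%
```

In other words, ‘around 15 percent’ is simply the slice of the population sitting roughly a standard deviation or more below the mean.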
What exactly are we doing to rectify or alleviate cognitive inequality? The answer, of course, is that we ignore it and hope it will go away. Continuing to force large numbers of cognitively underprivileged children through the arduous challenges of the standard education system only perpetuates the devastating legacy of intelligence denialism. By pretending away the fact of IQ differences, McNamara drafted the intellectually challenged into a war zone more demanding and lethal than anything they would have faced at home, and thereby caused the needless deaths of thousands. Furthermore, his initiative left tens of thousands of survivors with debilitating psychological conditions such as post-traumatic stress disorder, and cruelly deprived many thousands of parents and relatives of the chance to see a beloved family member grow old. The apparent fair-mindedness of this act of conservative blank-slatism is belied by its atrocious outcomes, which render it morally indefensible.
Yet while McNamara’s policy has been called “a crime against the mentally disabled,” few have considered what crime our indifference to the cognitively underprivileged within our own societies might constitute. Fifty years after McNamara and twenty years after Martin Bryant, we have not yet begun to ask the question: is it really fair for one person to be born with an intellectual assurance of success in navigating the challenges of a twenty-first century society, while another is born almost certain to fail? Until we accept that people with low IQ exist, and that the ramifications of their condition are indeed severe, how can we even begin to discuss what might be done to alleviate their suffering? The importance of cognitive ability for life success in our technologically complex society makes answering that question a moral imperative—but economic and political leaders have shown scant interest in the issue. Despite the fact that low IQ is correlated with negative outcomes in a large number of areas and afflicts around 15 percent of the population, we seem incapable of treating it like any other public health problem.
Simply wishing away the fact that the genetic and environmental circumstances of a person’s birth inevitably endow everyone—for better or worse—with a personality, a level of sociability, and an intelligence is a form of denialism that serves only our urge for moral exculpation. Pretending that those burdened with low IQ are just lazy, or lack the appropriate motivation, is a way of absolving ourselves of responsibility to help them. Accepting that intelligence exists, that intelligence matters, and that the less intelligent are equal to us in moral worth and value, and thus ought to be helped, constitutes the first step in addressing the increasingly urgent need to fully accommodate the cognitively underprivileged.
Myopia runs strong on both sides of my family, so it wasn’t much of a surprise when I began to lose my eyesight while in China last year (I am 24 years old, for reference). While it happened gradually enough that I was under no illusions as to what was going on, it still came as quite a shock, given that I had always – even in my early 20s – been proud of how much more ‘able’ I was than my lens-bound parents and relatives.
Of course, China is the glasses capital of the world, so I was able to alleviate my impaired state by purchasing a (relatively cheap) pair of excellent glasses that allowed me to see better than I had in years. I’ve worn them ever since and, in doing so, have practically forgotten about my blindness.
Yesterday, I left my glasses behind at a house after attending a party, but I still have to work, since I’m on a 9-5 schedule. Having to squint just to make out the writing on a computer screen a foot away from me, and to maximize every window simply so I can see it, is a pretty humbling experience. It’s a reminder that almost all aspects of our ‘normal’ lives are enabled by a complex web of industry and technology working in concert to bring us a better quality of life, driven by the profit motive. Without that, we’d have nothing. We would still be troglodytes in caves; tribesmen on the African savannah.
One day, that web will collapse. Anyone skeptical of this proposition should realize that their denialism is equivalent to asserting that humans and the (exceedingly fragile) societies we have built up will last forever. In ‘real’ reality, there is no forever. There is only entropy, particle decay, and the heat death of the universe. There is only the greed of men, the biological impetus to reproduce, and the blazing heat of the atomic fire that one day will rend our very atoms asunder.
It may seem strange that losing one’s glasses could provide so poignant an opportunity to recognize and reflect on these facts. But it is the experience of loss, and the revelatory acknowledgment of one’s own vulnerability, that allows us to see that our diligently hidden fragility is no less a part of us than it is of the human condition. That ephemerality, I think, is what makes life valuable. That is why life must be protected. But it is also why the atavistic reversion to our pre-civilized state is never as far away as we might like to think.