In 1592, the Chinese philosopher Li Zhi wrote this preface:
I desire to burn this book. I say that I must burn and discard it. I cannot keep it… As for those who find my work grates upon their ears, they most certainly will succeed in killing me…
Li Zhi titled his manuscript, fittingly enough, A Book to Burn. The title can be read in many ways: as a challenge, a warning, a self-fulfilling prophecy, or even a bit of black humor. He followed it up with two sequels: Another Book to Burn and A Book to Hide. What drove him to publish a manuscript so controversial that he was certain it would bring about his death?
Li Zhi’s exasperation with the corruption, greed, and superficiality of the powerbrokers in his society fueled his writing. As he wrote, “if the ancient sages had not built up indignation they wouldn’t have written anything. To write something without indignation, that would be like shivering when you’re not cold, or groaning when you’re not sick. Even if they had done that, who would pay attention?”
Li Zhi never gave himself over fully to any one ideology. He called himself a Confucian, but was endlessly critical of Confucianism as he saw it practiced in the world around him. Even when he joined a Buddhist monastery and shaved his head like a monk, he let his beard grow long and refused to stop eating meat. You could call him a contrarian or a crank, or you could say that he was one of those people whose uncompromising principles make it impossible to live with the world as it is.
Nor did Li Zhi show much interest in creating an internally consistent philosophy. In fact, A Book to Burn is extravagantly and even joyfully self-contradictory. Perhaps, as some scholars have argued, that was his way of forcing readers to exercise their own judgement and form their own opinions. He didn’t want to become one of those dusty experts whose words people parroted without real understanding.
This was a radical position, because Li Zhi lived in a time when cultural status and success were deeply tied to “toeing the line” of acceptable beliefs. To receive a high-ranking position in the bureaucracy, young men were required to pass the famously difficult civil service exams. Success depended on regurgitating orthodox opinions. There was even a scandal in 1595, when it was revealed that students had passed the exams by copying example essays word-for-word from their study manuals. As a youth, Li Zhi passed the first level of the civil service examinations, but he refused to sit for the second, thus taking himself out of the running for any truly prestigious positions. This was perhaps the first sign of the rebellious streak that would come to define his life and work.
Despite his top-notch classical education, Li Zhi valued popular entertainment as well as the classics, and the vernacular tongue as well as the refined language of scholars. He followed the doctrine of Wang Yangming, believing that anyone had the potential to become a sage. He argued that women had intellectual powers equal to men's, and were only deprived of the opportunity to develop them. He even took on a female disciple, Mei Danran—an extremely shocking choice at the time.
Li Zhi challenged the orthodox belief, laid out in the classic text the Doctrine of the Mean, that the relationship between ruler and subject was the fundamental basis of social order. Instead, he argued for friendship as the most important social relationship. Indeed, Li Zhi’s friendships were central to the development of his philosophy. A Book to Burn started as a bundle of letters circulated among Li Zhi’s friends, and, in its preface, he justifies his choice to publish such a dangerous volume with the hope that “if one of my essays speaks to the heart of another, then perhaps I may find somebody who understands me!”
Yet the letters also record the strain that Li Zhi’s uncompromising principles put on his friends. He believed in harshly criticizing those friends who strayed from what he saw as right. As a result, many of his relationships suffered. As Rivi Handler-Spitz, Pauline Lee, and Haun Saussy write in the introduction to their translation of Li Zhi’s selected writings: “A Book to Burn reports the recurrent bewilderment and loneliness Li experienced as one by one his friends grew tired of his relentless faultfinding and abandoned him.”
On the other hand, those of his friends who refused to compromise their ideals often suffered for it. In 1579, one of Li Zhi’s role models, the philosopher He Xinyin, died in prison after being arrested for his radical ideas. Li Zhi must have known that the same fate was coming to him, sooner or later. In 1602, the Wanli emperor ordered Li Zhi arrested. He died in prison that same year, and all copies of A Book to Burn were ordered to be thrown on the bonfire, fulfilling the promise of the title.
Yet the official prohibitions only increased the text’s cachet. Zhu Guozhen wrote that almost every member of the literati kept a treasured copy of A Book to Burn hidden like a precious rarity. In fact, Li Zhi’s name became such a selling point that booksellers slapped his name on entirely fabricated manuscripts. Wang Benke wrote in a preface to one of his books, “Within the four seas there is no one who does not read this gentleman’s writings; there is no one who does not desire to read them all; they read them without stopping, and some even read pirated editions.” Not bad at all.
T.S. Eliot, born on September 26th, 1888, is considered one of the twentieth century’s major poets, and not just because he wrote the poems that would become the libretto for the musical Cats. He also wrote acclaimed essays, plays, and poems like The Waste Land and Four Quartets, and was awarded the Nobel Prize in Literature in 1948.
His famous “The Love Song of J. Alfred Prufrock,” available in its entirety thanks to Poetry Magazine, begins:
Let us go then, you and I,
When the evening is spread out against the sky
Like a patient etherized upon a table;
Let us go, through certain half-deserted streets,
The muttering retreats
Of restless nights in one-night cheap hotels
And sawdust restaurants with oyster-shells:
Streets that follow like a tedious argument
Of insidious intent
To lead you to an overwhelming question…
Oh, do not ask, “What is it?”
Let us go and make our visit.
In the room the women come and go
Talking of Michelangelo.
Most readers of Beowulf understand it as the story of a white, male hero (tellingly, the poem is named for the hero, not the monster) who slays a monster and the monster’s mother. Grendel, the ghastly uninvited guest, kills King Hrothgar’s men at a feast in Heorot. Beowulf, a warrior, lands in Hrothgar’s kingdom and kills Grendel but then must contend with Grendel’s mother, who comes to enact revenge for her son’s murder. Years later, Beowulf confronts a dragon that is devastating his kingdom and dies while he and his thane, Wiglaf, are slaying it. Crucially, Grendel is never clearly described, but is named a “grim demon,” a “god-cursed brute,” a “prowler through the dark,” a part of “Cain’s clan.”
Indeed, Beowulf is a story about monsters, race, and political violence. Yet critics have long read it through the white gaze, as a preserve of white English heritage. The foundational article on Beowulf and monsters is J.R.R. Tolkien’s “Beowulf: The Monsters and the Critics.” Yes, before and while writing The Lord of the Rings, Tolkien was an Oxford professor of medieval literature who interpreted Beowulf for a white English audience. He uses Grendel and the dragon to advance an aesthetic, non-politicized, close reading of monsters, asking critics to read Beowulf as a poem, a work of linguistic art:
Yet it is in fact written in a language that after many centuries has still essential kinship with our own, it was made in this land, and moves in our northern world beneath our northern sky, and for those who are native to that tongue and land, it must ever call with a profound appeal—until the dragon comes.
Beowulf—which is written in Old English—was produced over a millennium ago and is set in Denmark. Learning Old English is on par with learning a foreign language. Thus Tolkien’s claim about which bodies, as natives of this “native” English tongue, can truly read Beowulf offers a window into the politics of who gets to read and write about the medieval past, and how.
Tolkien’s investment in whiteness does not just apply to his ideal readers of medieval literature. It also extends to the ideal medieval literature scholars. At the 2018 Belle da Costa Greene conference, Kathy Lavezzo highlighted Tolkien’s role in shutting the Jamaican-born, Black British academic Stuart Hall out of medieval studies. Hall’s autobiography, Familiar Stranger: A Life Between Two Islands, describes a white South African gatekeeper. Tolkien was the University of Oxford Merton professor of English Language and Literature when Hall was a Rhodes scholar in the 1950s. Hall explains how he almost became a medieval literature scholar: “I loved some of the poetry—Beowulf, Sir Gawain and the Green Knight, The Wanderer, The Seafarer—and at one point I planned to do graduate work on Langland’s Piers Plowman.” However, according to Lavezzo, it was Tolkien who intervened in these plans: “But when I tried to apply contemporary literary criticism to these texts, my ascetic South African language professor told me in a pained tone that this was not the point of the exercise.”
This clashes with Tolkien’s friendlier image that has permeated popular culture, thanks to The Lord of the Rings. Through Tolkien’s white critical gaze, Beowulf as an epic for white English people has formed the backbone of the poem’s scholarship. To this day, there has never been a black scholar of Anglo-Saxon studies who has published on Beowulf. Mary Rambaran-Olm has reported on the many instances of black and non-white scholars being shut out of medieval studies. At the Race Before Race: Race and Periodization symposium, she recently discussed what Tolkien did to Hall in light of her own decision to step down as second vice president of the field’s main academic society, citing incidents of white supremacy and gatekeeping. As a result of these incidents, studying Beowulf has long been a privilege reserved for white scholars.
Ironically, Tolkien’s advocacy for a Northern, “native,” and white ideal readership contrasts with his own personal and familial histories. He was born in South Africa and spent his earliest years there. Though Tolkien’s biographers have claimed that his African upbringing scarcely influenced him, scholarly critics have pointed out the structural racism in his creative work, particularly in The Lord of the Rings. Additionally, he wrote an entire philological series, “Sigelwara Land” and “Sigelwara Land (continued),” on the Old English word for “Ethiopia.” In this series, he explicates the connections between Sigelwara Land and monsters by flattening the categories of black Ethiopians, devils, and dragons. He writes:
The learned placed dragons and marvelous gems in Ethiopia, and credited the people with strange habits, and strange foods, not to mention contiguity with the Anthropophagi. As it has come down to us the word is used in translation (the accuracy of which cannot be determined) of Ethiopia, as a vaguely conceived geographical term, or else in passages descriptive of devils, the details of which may owe something to vulgar tradition, but are not necessarily in any case old. They are of a mediaeval kind, and paralleled elsewhere. Ethiopia was hot and its people black. That Hell was similar in both respect would occur to many.
Tolkien’s work of empirical philology is a form of racialized confirmation bias that strips Ethiopia of any kind of connection to the marvels of the East, gems, or even his own fixation on dragons. He highlights Sigelwara as a term related to black skin and its connections to devils and hell, framing Ethiopians within the same category as “monsters.” He has no qualms about consistently connecting the Ethiopians to the “sons of Ham,” and thus the biblical descendants of Cain, linking medieval Ethiopia with the justification for chattel black slavery. In fact, no part of the etymology (nor any part of medieval discussions of Ethiopia) discusses slavery. Tolkien would have read Beowulf’s Grendel, who is linked to Cain, as a black man:
Grendel was that grim creature called, the ill-famed haunter of the marches of the land, who kept the moors, the fastness of the fens, and, unhappy one, inhabited long while the troll-kind’s home; for the Maker had proscribed him with the race of Cain.
Tolkien’s articles on Ethiopia and on Beowulf, all published in the 1930s, reveal that Tolkien likely interpreted Grendel as a black man connected to a biblical justification for transatlantic chattel slavery. Thus, Grendel was raced within the logics of Tolkien’s white racist gaze. However, his philological method is still seen as a non-politicized and non-personal form of “empirical” scholarship. His interest in solidifying white Englishness and English identity—as a chain of links from the premodern medieval past to contemporary racial identities—is a project that extended into multiple scholarly areas.
Over the last several years, Tolkien’s most circulated political stance has been his resistance to fascism as displayed in letters he wrote to a German publisher. He may have abhorred fascism and antisemitism, but he upheld the English empire’s white supremacy. He held racialized beliefs against Africans and other members of the English black diaspora.
Black scholars have been systematically shut out of Old English literature. If there is no critical mass of black intellectuals, writers, and poets who can talk back to the early English literary corpus and the large-looming white supremacist gatekeepers, then Toni Morrison’s Beowulf essay might well be the first piece to do so. Because she writes about Beowulf, race, and how to read beyond the white gaze, her essay speaks back not only to Beowulf but to the English literary scholarship that has left Anglo-Saxon Studies a space of continued white supremacist scholarship.
In Toni Morrison’s 2019 collection, The Source of Self-Regard: Selected Essays, Speeches, and Meditations, we get the first revision of who should read Beowulf and how race matters. In her essay, “Grendel and His Mother,” she explains:
Delving into literature is neither escape nor surefire route to comfort. It has been a constant, sometimes violent, always provocative engagement with the contemporary world, the issues of the society we live in… As I tell it you may be reminded of the events and rhetoric and actions of many current militarized struggles and violent upheavals.
As a black feminist reader, Morrison examines Beowulf as political, current, for any reader. Indeed, she opens by explaining that literary criticism is always performed through the lens of its moment, urging her readers to “discover in the lines of association I am making with a medieval sensibility and a modern one a fertile ground on which we can appraise our contemporary world.” Morrison’s Beowulf interpretation highlights what other critics, following Tolkien’s lead, have deemed marginal. She decenters the white male hero, focusing instead on the racialized, politicized, and gendered figures of Grendel and his mother, who in Tolkien’s reading would have been black. In his article “Beowulf: The Monsters and the Critics,” his white male gaze concentrates on what these two “monsters” can do for Beowulf’s development as the white male hero of Germanic epic. Morrison, on the other hand, is interested in Grendel and his mother as raced and marginal figures with interiority, psyche, context, and emotion.
In Morrison’s interviews with Bill Moyers, Charlie Rose, and The Paris Review, she explains her literary method when she unpacks nineteenth-century American literature—especially Faulkner, Twain, Hemingway, and Poe—and how white writers and critics hide blackness and race. Similarly, in Morrison’s discussion about Willa Cather’s Sapphira and the Slave Girl, she exposes the power dynamics of whiteness in Cather’s novel. The novel describes the complicated relationship between a white and a black woman in which Cather’s white gaze forces not just unspeakable violence onto the black woman but also erases her name, context, and point of view. Similarly, Tolkien is not interested in Grendel or his mother’s racialized contexts, emotions, and reasons. He writes with the white gaze—Grendel and his mother are racialized props that help explain Beowulf’s conflicts, contexts, emotions, and reasons. Morrison’s sentiments about nineteenth-century American literature apply to white supremacist Anglo-Saxon Studies: “The insanity of racism… you are there hunting this [race] thing that is nowhere to be found and yet makes all the difference.”
Morrison analyzes Beowulf through Grendel’s racialized gaze. She points out Grendel’s lack of back story:
But what seemed never to trouble or worry them was who was Grendel and why had he placed them on his menu? …The question does not surface for a simple reason: evil has no father. It is preternatural and exists without explanation. Grendel’s actions are dictated by his nature; the nature of an alien mind—an inhuman drift… But Grendel escapes these reasons: no one had attacked or offended him; no one had tried to invade his home or displace him from his territory; no one had stolen from him or visited any wrath upon him. Obviously he was neither defending himself nor seeking vengeance. In fact, no one knew who he was.
Morrison asks readers to dwell on Grendel beyond good versus evil binaries. She centers the marginal characters in Beowulf, who have not been given space and life in the poem itself. She forces us to rethink Grendel’s mother and Beowulf’s vengeance, writing:
Beowulf swims through demon-laden waters, is captured, and, entering the mother’s lair, weaponless, is forced to use his bare hands… With her own weapon he cuts off her head, and then the head of Grendel’s corpse. A curious thing happens then: the Victim’s blood melts the sword… The conventional reading is that the fiends’ blood is so foul it melts steel, but the image of Beowulf standing there with a mother’s head in one hand and a useless hilt in the other encourages more layered interpretations. One being that perhaps violence against violence—regardless of good and evil, right and wrong—is itself so foul the sword of vengeance collapses in exhaustion or shame.
Morrison’s discussion of Grendel, Grendel’s mother, and Beowulf is about violence and how it undoes all potential motivations, including vengeance. The final tableau of Beowulf holding both the blood-covered sword of vengeance and Grendel’s mother’s head is about the corrosiveness of violence. For Morrison, the corrosive violence that eats through the sword of vengeance is that of whiteness.
Morrison goes further to unpack Beowulf through the work of contemporary writers. She explains:
One challenge to the necessary but narrow expectations of this heroic narrative comes from a contemporary writer, the late John Gardner, in his novel, titled Grendel… The novel poses the question that the epic does not: Who is Grendel? The author asks us to enter his mind and test the assumption that evil is flagrantly unintelligible, wanton, and undecipherable.
Specifically, she discusses Gardner’s rethinking of Grendel’s interiority. She writes that Gardner tries to “penetrate the interior life—emotional, cognizant—of incarnate evil.” For Morrison, the poem’s most salient interpretation comes from reading it politically, cogently, and rigorously. She writes:
In this country… we are being asked to both recoil from violence and to embrace it; to waver between winning at all costs and caring for our neighbor; between the fear of the strange and the comfort of the familiar; between the blood feud of the Scandinavians and the monster’s yearning for nurture and community.
In Morrison’s analysis, Grendel has developed from a murderous guest at Hrothgar’s Hall who kills for no reason into the central focus. This passage asks us to think about why Grendel would do what he did. Morrison understands him as dispossessed; his “dilemma is also ours.” She situates Grendel as kith and kin to her imagined critical reading audience—black women.
Morrison concludes with a meditation on complicity, inaction, and the politics of contemporary late fascism and democracy:
…language—informed, shaped, reasoned—will become the hand that stays crisis and gives creative, constructive conflict air to breathe, startling our lives and rippling our intellect. I know that democracy is worth fighting for. I know that fascism is not. To win the former intelligent struggle is needed. To win the latter nothing is required. You only have to cooperate, be silent, agree, and obey until the blood of Grendel’s mother annihilates her own weapon and the victor’s as well.
In other words, we can reread that scene as a statement about fascist violence and its self-destroying and gendered toxicity. Morrison has made reading Beowulf raced, gendered, political; she has envisioned its interpretation through the centrality of a black feminist reading audience where politics matter and “democracy is worth fighting for.”
As Tolkien’s intellectual grandchild (my advisor was his student), I do not think it is accidental that Morrison’s critical voice reframes Beowulf for the racialized, political now. Tolkien’s deliberate shutting out of Stuart Hall means that we can only speculate about Hall as a critic of Beowulf, and we know that Anglo-Saxon scholarship continues to shut out black and minority scholars. With Morrison, finally, I believe we can put Tolkien’s “Monsters and Critics” to bed and read Beowulf anew.
For many people, Halloween means it’s time to throw on a classic teen slasher like Halloween or Friday the 13th. Today, we often look back on those movies as festivals of gore and cleavage designed to appeal to teen boys. But, as film historian Richard Nowell writes, the most coveted audience for these movies at the time was teenage girls.
Nowell writes that teen slashers emerged in the wake of 1970s horror films aimed at adults. Starting with the success of Rosemary’s Baby in 1968, many moviemakers had centered scary supernatural plots on strong female characters. In contrast to the horror movies of earlier eras, these films generally avoided the trope of cowering, half-dressed women. For example, in 1973, the theatrical trailer for The Exorcist, and the film itself, focused on the working single mother of the possessed girl.
By the late ‘70s, adult horror audiences were on the decline. Overall, market research found, half of U.S. theatergoers were between 12 and 20, with a fairly even gender balance. Many went to the movies with dates, and industry professionals generally believed that teen girls usually chose which movie to see on a date with a boy.
To sell movies to a teen audience, writers and directors took special care with their depictions of teen girls. Debra Hill, cowriter of 1978’s Halloween, later said she wanted young women to be able to “see themselves” in the female leads, who spend significant time talking about schoolwork, dating, and babysitting.
While later commentary has often assumed that the sex in teen slashers was gratuitous and promiscuous, Nowell writes that films like Friday the 13th (1980) actually spent a lot of screen time showing couples’ sexual relationships as emotionally intense and romantic. Following on the heels of non-horror teen films like Grease, studio executives had discovered that young love and platonic teen relationships were strong assets for marketing a movie. Lobby cards for Friday the 13th featured few moments of horror or titillating shots of female leads. Instead, they showed romantic moments, platonic friendships, and even a female character showing a young man how to change a light bulb.
“Taken as a whole, Paramount’s lobby cards marketed Friday the 13th as female-youth-friendly entertainment,” Nowell writes.
The marketing apparently worked. Forty-five percent of the theater audience for Halloween and Friday the 13th was under 17, and, of those young viewers, 55 percent were girls.
Following in the footsteps of those hits, a flood of teen slasher movies showed up in theaters in 1981, including My Bloody Valentine, The Burning, and Friday the 13th Part II. These movies followed the newfound convention of mixing romance with horror, leading New York Times critic Vincent Canby to refer to the genre as “teen-age love-and-meat-cleaver films.” The heroines of these movies were traditionally feminine, tough, and sexually confident.
So, if you’re inclined to throw on something scary this Halloween while also celebrating empowered young women, it turns out there are a lot of options.
In 1778, the Continental Congress decreed that it was “the duty of all persons in the service of the United States … to give the earliest information to Congress or any proper authority of any misconduct, frauds or misdemeanors by any officers or persons in the service of these states.”
This “founding” attitude has fared… rather ambiguously ever since. As law professor Shawn Marie Boyne shows in her review of the legal protections for whistleblowers in government and industry, “the country’s treatment of whistleblowers has been a conflicted one.” Regardless of the organizational model (public, private, non-profit), those in power who have had the whistle blown on them rarely applaud whistleblowers. A hero to some, the whistleblower is often labeled a traitor by those in power, as in the cases of Boyne’s examples, Edward Snowden and Chelsea Manning.
“The question of whether a whistleblower will be protected or pilloried depends on the interests of those in power,” Boyne writes. Leaks to the media from officials for political advantage are standard operating procedure. But those outside this inner circle don’t fare as well: Snowden is in exile and Manning is in jail. Boyne notes that three NSA employees who did do what critics said Snowden and Manning should have done, that is, go through the system and use the proper channels to report government abuse, “found their lives destroyed and reputations tarnished.”
Retaliation against whistleblowers hit some of the pioneers, too, Boyne notes. Ernest Fitzgerald, who revealed billions in cost-overruns in a military transport program in 1968, was demoted after President Richard Nixon told his supervisors to “get rid of the son of a bitch.”
That same president ordered a break-in at Daniel Ellsberg’s psychiatrist’s office in 1971, in hopes of finding dirt on Ellsberg. An analyst for the RAND Corporation, Ellsberg had released the Pentagon Papers to the New York Times. This classified historical study of the war in Vietnam revealed that the government realized early on that the war could not be won. Defending his actions in 1971, Ellsberg said, “I felt that as an American citizen, as a responsible citizen, I could no longer cooperate in concealing this information from the American public.”
Retaliation against whistleblowers is, as scholar Michael T. Rehg and his co-authors show, quite gendered. “Male whistleblowers were treated differently depending on their power in the organization, but female whistleblowers received the same treatment regardless of the amount of organizational power they held: Their status as women overrode their status as powerful or less powerful organization members.” These authors also found that “women who reported wrongdoing that was serious or which harmed them directly were more likely to suffer retaliation, whereas men were not.”
While laws have been strengthened to help whistleblowers, presidents and CEOs nevertheless continue to go after them.
We’re killing all the birds (New York Times)
by John W. Fitzpatrick and Peter P. Marra
Since 1970, populations of wild birds in the U.S. and Canada have declined by a third as humans have wrecked their habitats. Even scarier, we only know this because scientists have been counting birds for a long time. The study probably reflects an even bigger crisis that also includes many species that we don’t monitor as closely.
Data mining your medical records (Wired)
by Megan Molteni
The Mayo Clinic is working with Google on a plan that would mine enormous troves of patient records using AI. The effort could yield new ways to predict and prevent serious disease. It could also be a huge threat to patient privacy.
Studying physics and learning about bias (Public Books)
by Lawrence Ware
Dr. Chanda Prescod-Weinstein is a theoretical physicist who does pen-and-paper calculations to advance humanity’s understanding of dark matter. As a black woman, she’s also—perhaps inescapably—become an expert in the impact of racism and sexism in physics.
Why we all need to know statistics (Aeon)
by David Spiegelhalter
What’s the cost of being part of the EU? Just how bad for your health is bacon? Statistics isn’t always taught in ways that help us connect math to real-world problems, but when it is, it can help us understand the world, be better citizens—and even catch a serial killer.
Do strikes work? (The Washington Post)
by Laura C. Bucci
It’s not just the UAW—strikes are on the rise in the U.S. today. And there’s reason to believe they are becoming increasingly effective.
Got a hot tip about a well-researched story that belongs on this list? Email us here.
In October of 1953, the farmers of the Western hemisphere were busy toiling over harvested grain, either milling it into flour or prepping it for brewing. Meanwhile, a group of historians and anthropologists gathered to debate which of these two common grain uses humans mastered first—bread or beer?
The original question posed by Professor J. D. Sauer, of the University of Wisconsin’s Botany Department, was even more provocative. He wanted to know whether “thirst, rather than hunger, may have been the stimulus [for] grain agriculture.” In more scientific terms, the participants were asking: “Could the discovery that a mash of fermented grain yielded a palatable and nutritious beverage have acted as a greater stimulant toward the experimental selection and breeding of the cereals than the discovery of flour and bread making?”
Interestingly, the available archaeological evidence didn’t produce a definitive answer. The cereals and the tools used for planting and reaping, as well as the milling stones and various receptacles, could have been used to make either bread or beer. Nonetheless, the symposium, which ran under the title Did Man Once Live by Beer Alone?, featured plenty of discussion.
The proponents of the beer-before-bread idea noted that the earliest grains might have actually been more suitable for brewing than for baking. For example, some wild wheat and barley varieties had husks or chaff stuck to the grains. Without additional processing, such husk-enclosed grains were useless for making bread—but fit for brewing. Brewing fermented drinks may also have been easier than baking. Making bread is a fairly complex operation that necessitates milling grains and making dough, which in the case of leavened bread requires yeast. It also requires fire and ovens, or heated stones at the least.
On the other hand, as some attendees pointed out, brewing needs only a simple receptacle in which grain can ferment, a chemical reaction that can be easily started in three different ways. Sprouting grain produces its own fermentation enzyme—diastase. There are also various types of yeast naturally present in the environment. Lastly, human saliva also contains fermentation enzymes, which could have started a brewing process in a partially chewed up grain. South American tribes make corn beer called chicha, as well as other fermented beverages, by chewing the seeds, roots, or flour to initiate the brewing process.
But those who believed in the “bread first, beer later” concept posed some important questions. If the ancient cereals weren’t used for food, what did their gatherers or growers actually eat? “Man cannot live on beer alone, and not too satisfactorily on beer and meat,” noted botanist and agronomist Paul Christoph Mangelsdorf. “And the addition of a few legumes, the wild peas and lentils of the Near East, would not have improved the situation appreciably. Additional carbohydrates were needed to balance the diet… Did these Neolithic farmers forego the extraordinary food values of the cereals in favor of alcohol, for which they had no physiological need?” He finished his statement with an even more provocative question. “Are we to believe that the foundations of Western Civilization were laid by an ill-fed people living in a perpetual state of partial intoxication?” Another attendee said that proposing the idea of grain domestication for brewing was not unlike suggesting that cattle was “domesticated for making intoxicating beverages from the milk.”
In the end, the two camps met halfway. They agreed that our ancestors probably used cereal for food, but that food might have been in liquid rather than baked form. It’s likely that the earliest cereal dishes were prepared as gruel—a thinner, more liquid version of porridge that was long a dietary staple of Western peasants. But gruel could easily ferment. Anthropologist Ralph Linton, who chose to take “an intermediate position” in the beer vs. bread controversy, noted that beer “may have resulted from accidental souring of a thin gruel … which had been left standing in an open vessel.” So perhaps humankind indeed owes its effervescent beverage to some leftover mush gone bad thousands of years ago.
The post Did Humans Once Live by Beer Alone? An Oktoberfest Tale appeared first on JSTOR Daily.
There may be no crime that horrifies the public more than child abduction. Historian Elizabeth Foyster writes that this was also true in London 200 years ago, though the crime then typically took a much different form than we expect today.
Historians generally agree that the late eighteenth century brought a major change in what English childhood meant. This included more positive attitudes toward kids, and a new wealth of books, toys, and clothes for middle-class urban children. Children were increasingly prized “for giving women a role as mothers,” and as “miniature models of all that a more affluent consumer society could afford,” Foyster writes.
If children were becoming more valuable, it stands to reason that, like all valuable things, they were in danger of being stolen. And, indeed, Foyster found 108 cases of child abduction tried in London and reported in the newspapers between 1790 and 1849.
Child abduction was nothing new, but it was understood differently than in previous times. In fourteenth-century England, “ravishment” covered both forced and consensual “abduction” of children or adult women. It typically had a sexual element, and the child victims were generally teenagers. Later, in the seventeenth century, abduction was understood as a fate befalling unfortunate boys forced into indentured servitude.
In contrast, in the period Foyster studied, the majority of stolen children were under six, and the abductor was usually a woman in her 20s or 30s. In some cases, kids were stolen for their clothes. Abductors might bring fancy children’s clothes to a pawnbroker, leaving a half-naked child outside. Other times, women reportedly stole children to gain sympathy when begging for money or seeking a job.
There were also well-off married women who stole—or paid someone else to steal—children they could present as their own. One 22-year-old wrote to her husband, serving in the Navy, about an invented pregnancy and childbirth. When she learned he was returning home, she travelled to London, snatched a four-year-old boy, and cared for him for two months before she was caught.
Foyster writes that news accounts paid little attention to possible harm done to the children. Unlike today, child abduction wasn’t generally assumed to be motivated by deviant sexual desire. Instead, newspapers focused on the terror and despair of mothers whose children were stolen, and suggested a parallel lack of feeling in the abductors.
A judge told one convicted child thief that, as a childless woman, she was “Ignorant of those heavenly feelings which subsist in the relation between parent and child; for had you been a mother, you must have respected and regarded, instead of agonizing a mother’s heart.” Still, Foyster writes, news reports also acknowledged that child-stealers might be motivated by a twisted “fondness” for children—reflecting their own stunted development.
Child-thieves clearly had no place in the growing public conception of natural motherly love. Yet the new understanding of children as valuable objects who gave meaning to women’s lives may have spurred the increase of child abduction.
By all accounts the Clean Water Act (CWA), the preeminent federal law protecting water quality in the United States, has been highly successful. The 1972 law has been periodically amended, but the gist is that it limits pollution of surface waters of the U.S. through restrictions and permit requirements. The act does not directly regulate drinking water. Now the Trump administration wishes to significantly weaken the CWA by limiting its jurisdiction, a move cheered by some but bemoaned by many others. But as April Collaku shows in the Fordham Environmental Law Review, this question of exactly which waters are covered by the CWA is not new.
Upon enactment of the CWA, federal agencies charged with its enforcement saw the law as covering discharges in the “navigable waters of the United States,” which on the face of it sounds like any water that can hold a boat. The reality is more complicated. The CWA itself, in fact, defines navigable waters as “waters of the United States,” which sounds like all water everywhere under U.S. jurisdiction.
Given the ambiguity, this definition has repeatedly found itself under court review. The courts struggled to reconcile the “waters of the United States” language with “navigable waters,” roughly defined as waters used for commerce or travel. Courts have generally expanded that definition to include tributaries of those navigable bodies and wetlands that are adjacent or connected to those navigable bodies.
The rules the current administration is seeking to override stem from a 2006 Supreme Court decision. The decision, known as Rapanos v. United States, left the exact scope of the CWA muddled, with some justices arguing for the expanded navigable waters definition above and others limiting jurisdiction only to permanent bodies of water.
To help end the confusion, during the Obama administration the EPA decided to spell out a clear definition of waters of the United States. The definition hews closely to the expanded definition of navigable waters, but specifies all tributaries and adjacent waters that have a “significant nexus” to navigable waters. This expanded definition swept in many more wetlands, since wetlands adjacent to tributaries now also qualified, as did certain seasonal streams and wetlands.
This expansion provided clarification but also generated controversy. The newly covered waters often lay on private property, and some farmers and business interests found themselves occasionally limited in what crops they could plant, or what practices they could follow, next to what they saw as unimportant streams or wetlands.
Now the controversy rolls on, as under the Trump administration, most tributaries and adjacent wetlands will be stripped of CWA protection. Opponents fear that increased pollution will inevitably cause downstream harm. One side effect of the rule reversal is that the CWA is once again operating without a firm definition. More confusion and lawsuits are inevitable.
Editors’ Note: An earlier version of this article stated that the Supreme Court decision Rapanos v. United States was decided in 2015; in fact, it was decided in 2006.
In 1982, the American Library Association established Banned Books Week in response to an increase in challenges to books in libraries, classrooms, and schools.
Of course, censorship and challenging creative thought did not begin in the 1980s. The earliest form of censorship was book burning, carried out in order to solidify governmental power, erase history, and prevent the spread of ideas.
Dr. Whitney Strub tackles the last of these, the suppression of ideas, in “Black and White and Banned All Over: Race, Censorship, and Obscenity in Postwar Memphis.” According to Strub, the Board of Censors and the Memphis city government worked to censor films and media that they considered inappropriate. Many of the films they censored included scenes featuring a mixing of Black and White characters. There was a particular focus on regulating images of real or imagined intimate relations between Black men and White women, a trope that is a legacy of Reconstruction. The censors felt that the message of these films was one of “social equality” that challenged normative values. The intent was that by censoring these images, the Black community in Memphis would not get the wrong idea about their “place” in society.
Books, film, and art are commonly banned or challenged in American society because they are sexually explicit. However, as Strub notes, people have historically used sex as a code for race. It is easier, or more politically correct, to claim that you oppose a work of art because it is sexually explicit than to object to how it portrays race. A prime example comes from the challenges to Beloved and The Bluest Eye by Toni Morrison and to I Know Why the Caged Bird Sings by Maya Angelou in schools and libraries across the country. All three of these works deal with issues relating to racism and have a sexual component. More importantly, they make a larger argument about the role and treatment of girls and women in American society.
Attempts to remove texts like these limit students’ ability to engage with subject matter that will help them survive in society and understand what is happening in their own lives and the lives of others. Tonya Perry explores the impact in “Taking Time to Reflect on Censorship: Warriors, Wanderers, and Magicians.” She notes that there are three roles an educator can play: the warrior (who teaches just the facts), the wanderer (who encourages questioning and interpreting experiences), and the magician (who brings learning into action and transformation). The magician educator will use material that addresses subject matter such as sexual harassment, sexuality, racism, and sexism, and demonstrates to students how they can put this knowledge into action. Thus, students become producers rather than consumers of knowledge.
According to Perry, those who censor in an attempt to “protect” students are actually doing them a disservice by not providing them the language and tools to communicate. Furthermore, it disproportionately impacts the students who come from underrepresented communities. Censorship signifies that their stories and histories are not valuable or important enough to be studied. As Strub notes, the act of censoring puts attention on the action of the challenge rather than addressing the societal issues that are facing American communities. In other words: censorship is a dangerous distraction.
The natural gas industry is enjoying a renaissance, thanks to the widespread adoption of fracking around the country in the past fifteen years. In that time, domestic production of natural gas has increased by around 50%. Natural gas now accounts for one-third of the energy produced in the United States, more than any other source. Until recently, natural gas was billed as the “green” fossil fuel. Compared to coal or petroleum products, burning methane gas (CH4) releases less carbon into the atmosphere to produce the same energy, but it still releases harmful emissions.
Coal and gasoline have earned their reputation as fossil fuel boogeymen. Both have played extremely visible roles as the principal feedstocks for electricity generation and automobiles, respectively. But, scholar Leslie Tomory writes, methane gas was actually the first fuel to be delivered in an integrated network that provided hydrocarbon energy to the masses at the flip of a switch, back in Regency-era London. In the process, the Gas Light and Coke Company (GLCC) confronted and solved problems of industrial politics, time coordination, machine standardization, contractor management, and even customer relations that have often been attributed to the later railway or electricity industries.
Founded in 1812, the Gas Light and Coke Company produced coal gas. The company heated coal in large vessels (“retorts”) inside ovens that forced out its gases and other impurities (such as sulfur) to produce coke. The expanding steel industry needed the purified carbon in coke to make high-quality steel. GLCC was the first company to store the released gas and to offer it for lighting homes, businesses, and street lamps.
Aside from a few local water supply networks, nothing on this scale had ever been attempted. Even the company’s political position was new, straddling private and public concerns. In exchange for papers of incorporation, GLCC agreed to install and fuel street lamps at low prices. In practice, this encouraged localities to let the company tear up the streets to lay pipe.
GLCC intended only to provide gas from 4:00 pm to 10:00 pm, but the demand for the gas was high. Before the invention of gas meters, customers could pay a flat fee but then use the gas all night. Some widened their valves to make the gas brighter or even stored the gas illegally. The company responded by improving their generating capacity, but also by regularly inspecting homes and requiring the use of standardized pipes, valves, and burners.
Methane gas has recently overtaken coal as the most common source of energy for electrical generation worldwide. Migrating toward renewable energy today requires solving many of the same problems that the early gas industry faced: storage, transmission, and most of all, politics. Because—unlike in the 1810s—renewable energy is attempting to displace a pre-existing complex energy infrastructure, backed by powerful interests, that has structured the world we see around us.
Brood parasitism is a truly diabolical life strategy employed by certain birds, most famously cowbirds and cuckoos. Brood parasites make no nest of their own. Instead, they lay their eggs in another bird’s nest while the host bird is away. The impostor egg hatches, then often tosses all or some of the host eggs or babies out of the nest, killing them. The unsuspecting host parents dedicate their energy to raising the impostor chick. Oddly, they usually don’t seem to notice, no matter how significant the difference—as discussed by Oliver Krüger in Philosophical Transactions, cuckoos may lay eggs in nests of hosts up to six times smaller than they are. Sometimes, the chick dwarfs the hosts but the hosts diligently raise it anyway.
This sneak attack allows brood parasites to lay more eggs in one season than non-parasites, as they do not need to put any energy into building a nest or caring for offspring. The cost to the host, however, is enormous. They either waste energy caring for a parasitic egg or, in the worst case, lose every single egg to the impostor.
Hosts are not defenseless; brood parasites do not choose their victims randomly. Hiding nests better, for example, seems to be a deterrent in some cases, as does placing the nest so that there is no nearby place for a parasite to bide its time. Experience can help, in that young host birds are often more vulnerable. Sometimes the best defense is a good offense: many potential host species simply attack nearby brood parasites during breeding season. The downside, of course, is that while aggression may deter a parasite, it also lets the parasite bird know that a nest is near.
Once a brood parasite successfully infiltrates a nest, astute hosts will notice and damage or kill the parasite egg. In some cases hosts will even abandon the entire nest rather than raise a parasite chick. Some potential hosts lay homogeneous clutches of eggs to make parasite eggs stand out. Others are able to tell if one egg is much larger; they will kill the large egg. A related strategy is to stop feeding any chick when its needs become too great.
According to Krüger, this parasite-host struggle has the hallmarks of a co-evolutionary arms race. As host birds develop counter-measures, parasites develop new techniques for duping the hosts. When host defenses become too effective, the parasites might even switch hosts. Neither side is threatening the other with extinction, so the arms race stalemate drags on.
A recent book by Caoimhín De Barra explores the formation of Celtic nationalism. In the late twentieth century, “Celticity” was sparked anew by the UK’s devolution of power to Scotland, Wales, and Northern Ireland. Celticity, however, has turned out to be quite exportable, and not just in the form of Celtic music, Irish dance, and fantasies like the 1995 movie Braveheart.
According to scholars Euan Hague, Benito Giordano, and Edward H. Sebesta, two organizations that arose in the 1990s have appropriated contemporary versions of Celtic nationalism as a proxy for whiteness. Both call for separate nations to be set aside for the citizens they count as “white.” One is the League of the South (LS), a fringe group that argues for a return to the Confederate States of America. Meanwhile, in Italy, Lega Nord (LN) has also taken up the banner of “Celtic culture” as a model of whiteness. They advocate for a state called Padania, separate from Italy’s south. The LN, frequently called just Lega, is part of Italy’s coalition government. In the 2018 elections, the LN took just under 18% of the vote for both the Chamber of Deputies and the Senate.
Both the LS and the LN argue that Celtic-ancestry people are a “distinct ethnic group deserving of self-determination and an independent nation state,” write Hague et al. Comparing the two leagues, the authors explore the confluence of ethno/race-based nationalism with the use (and misuse) of the myths of Celticity.
Celticity is “an attractive set of symbols and identities that come replete with popular recognition and a supposedly ancient past that can be invoked by people for many purposes, from ‘new age’ religion to popular ‘world music.'” Historically, however, that “ancient past” is hard to pin down. Hague et al. explain:
The very flexibility and the vagaries of archeological evidence regarding the original Celts enable multiple political and cultural meanings to be invested in the form, whilst retaining the symbolic value and historical authority accrued by the reference to a supposedly ancient Celtic culture.
“The Celts” can be, and have been, envisioned in all sorts of ways: as a warrior class, as a pan-European people, as the epitome of whiteness, as “whatever version of the past seemed nationally expedient.” It’s a cultural identity that has come into vogue in recent decades.
The LN posits that northern Italy is culturally and ethnically distinct from southern Italy. Southern Italians aren’t seen as Celtic/white/European—shades of the way Italian immigrants were first treated in the U.S. For LN, separation is essential to block immigration from Africa, Asia, and southern Italy.
Nationalism tries to make “ancient connections between a people and a specific territory, an intersection of genealogy and geography.” By exploiting the ethos of multiculturalism, both the LS and the LN argue for a “right to cultural difference.” This right, the authors say, fits into “ongoing processes of white privilege.” While overt racism is generally frowned upon, “an appeal to Celtic ethnicity appears acceptable and can be justified by utilizing a rhetoric of cultural awareness while simultaneously subverting political commitments to cultural equality and reasserting white superiority.”
Judith Butler’s famous 1990 book Gender Trouble features on countless undergraduate reading lists in the humanities. The book’s wide-ranging line of inquiry, unforgiving style, and often abrupt shifts in focus are well known—and widely lamented among readers. Many students have been daunted by the book, and deriding especially challenging snippets has become something of a rite of passage.
Thankfully, a more rarely read set of texts can rescue a reader from despair. Between 1985 and 1989, Judith Butler published six short essays introducing ideas she would return to throughout her career. Each essay addresses a particular concern, in most cases focusing on a single thinker. Across these six pieces, Butler outlines a distinctive view of gender as tangled up with embodiment. This perspective opposes any tidy distinction between sex, understood as natural and bodily, and gender, understood as cultural and historical. This idea is critical, and bears repeating: Butler is attacking the commonly assumed sex-gender distinction.
The French social theorists Butler addresses viewed our bodies as being immersed in social norms, in legal definitions, and in everyday routines. As Butler summarized it in “Performative Acts and Gender Constitution”:
The body is not passively scripted with cultural codes, as if it were a lifeless recipient of wholly pre-given cultural relations. But neither do embodied selves pre-exist the cultural conventions which essentially signify bodies.
This is a challenging claim. But Butler’s basic idea is that our experience of society is always through our bodies. Before Gender Trouble, Butler explored this idea repeatedly.
The first of Butler’s early essays, “Variations on Sex and Gender in Beauvoir, Wittig and Foucault,” was published in 1985. Most of the essay focuses on French feminist philosopher Simone de Beauvoir, and particularly her great feminist treatise The Second Sex. Beauvoir held that there was no separable self, a self able to stand apart from the process of thinking. For Beauvoir (and Butler) there could be no “I” which predated cultural involvement, no aloof “thinker within,” staring into life from outside. Beauvoir thus saw gender as a project. Womanhood was never a settled matter; it changed across time. As Butler puts it, gender is “an incessant project, a daily act of reconstruction and interpretation.” This existentialist position implies a greatly expanded role for human behavior. In Butler’s words, if this view holds true, “then both gender and sex seem to be thoroughly cultural affairs.” (This phrasing echoes in the title of a great essay Butler would pen a decade later: “Merely Cultural.”)
But this argument left a dilemma. As Butler asked: “How can gender be both a matter of choice and cultural construction?” Beauvoir’s treatment of embodiment offered one way of answering this. Beauvoir proposed the term situation to describe the body’s status. Through our bodies, we can reinterpret existing mores, customs, and expectations. While never outside a social context, the body was also always active. The body’s social involvement can be experienced as a kind of oppression, but it also grants a license for liberation through “re-articulation,” or self-definition. Bodies are both the site of oppression and the means of escape.
In this early piece, Butler had already settled on a style characterized by a readiness to tackle contradictory aspects of gender:
Becoming a gender is an impulsive yet mindful process of interpreting a cultural reality laden with sanctions, taboos and prescriptions. The choice to assume a certain kind of body, to live or wear one’s body a certain way, implies a world of already established corporeal styles. To choose a gender is to interpret received gender norms in a way that reproduces and organizes them anew. Less a radical act of creation, gender is a tacit project to renew a cultural history in one’s own corporeal terms.
Our bodies can challenge the norms we encounter, but we also recreate those norms through our bodies.
Butler is less approving in her treatment of Monique Wittig, whom she describes as “alarming.” Wittig saw gender as a weaponized delusion. While anatomical differences between people appear in manifold ways (for instance, the extension or inset of an earlobe), it was only those differences associated directly with reproduction which were declared “sexual.” Men and women are set apart on the basis of fairly arbitrary traits, onto which a contrived meaning is imposed. Then, for Wittig, a retroactive naturalization of the existing political order takes place: the sex we are now is presented as what we were all along.
This basic categorization of anatomies was threatened by the very existence of lesbians. Lesbian erotic practices were not limited to the genitals, and lesbians refused to define themselves as wives married to a particular man. Wittig’s writing envisioned these women making revolutionary efforts to rework their anatomies—and their societies—in their own terms. Butler grows increasingly puzzled as the essay continues:
It might well seem that Wittig has entered into a Utopian ground that leaves the rest of us situated souls waiting impatiently this side of her liberating imaginary space. After all, The Lesbian Body is a fantasy, and it is not clear whether we readers are supposed to recognize a potential course of action in that text, or simply be dislocated from our usual assumptions about bodies and pleasure.
Despite this distancing, Wittig’s criticism of heterosexuality clearly exerted a profound grip on Butler. Both thinkers shared a lesbian reading of Beauvoir. Wittig saw sex as a category that required the political imposition of heterosexuality, which she called the “heterosexual regime.” Clear fingerprints of this position are found on Butler’s later description of a “heterosexual matrix.”
But Wittig’s strategy was more sweeping than Butler’s. Rather than subversion, Wittig argues for an end to sexual division itself. Butler’s doubts are at once practical and theoretical: “On the one hand, Wittig calls for a transcendence of sex altogether, but her theory might equally well lead to an inverse conclusion, to the dissolution of binary restrictions through the proliferation of genders.”
Whether abolishing gender would mean no genders or infinitely many is a recurring question in feminist thought (one I’ve examined in another essay). Butler ultimately says that Wittig’s politics are “profoundly humanistic,” a remark certainly intended as a putdown. Butler could never advocate doing away with gender altogether. This cautiousness was surely advantageous in the 1980s and 1990s, a time of collapsing fortunes for the left internationally amid the rise of the New Right. Today, her timidity reads differently.
This essay also introduces Michel Foucault, who is closely associated with Butler’s thought. Foucault, like Wittig, saw sex as a wholly political assembly of anatomical features and animating drives, drawn together by the demands of power. Butler suggests that this agreement across contexts has “improbable but significant consequences for feminist theory.” From this medley of complex and challenging texts, Butler takes a surprisingly clear message: “The political program for overcoming binary restrictions ought to be concerned… with cultural innovation rather than myths of transcendence.” In other words, a newfound creativity is required for fruitful gender politics, rather than a myth of rising above distinction—or idealizing androgyny.
A second essay, “Sex and Gender in Simone de Beauvoir’s Second Sex,” published in Yale French Studies, is really a second version of “Variations on Sex,” only more tightly focused on the thorny position on embodiment found in Beauvoir. Butler briefly addresses the question of sex, which she claims is more easily settled than womanhood: a sex is defined by what one cannot also be (those who can bear children being bracketed as female, as opposed to those who can inseminate). Butler then doubles back to acknowledge that chromosomal variation could provide yet another layer of complexity.
However, this is not an anatomically sufficient account of intersex variations: in many cases those born intersex have XY chromosomes accompanied by an insensitivity to sex hormones that causes them to be taken for female. Nevertheless, this acknowledgement of intersex experiences was unusual for theory of the time, and to Butler’s lasting credit she would follow up this early inclusivity in her essay “Doing Justice to Someone,” on the case of David Reimer.
“Performative Acts and Gender Constitution: An Essay in Phenomenology and Feminist Theory” was published in Theatre Journal. This article debuts Butler’s most famous argument: that gender is performative. In other words, gendered practices are generative of gender rather than reflections of any innate inner truth. Easily her greatest contribution to gender theory, Butler’s “performativity” argument also ranks as one of the most widely misunderstood propositions in the history of thought. In interviews and writings since, Butler has been quick to distinguish the performativity thesis from describing gender as simply performance. “Performative” is a quality of how we live out our genders: becoming by doing.
“Performative Acts and Gender Constitution” offers a clear account of Butler’s performativity thesis by opposing it to the expressive view of gender. Performativity was intended to replace the framework of gender roles (commonplace in gender theory then and since):
Gender cannot be understood as a role which either expresses or disguises an interior “self,” whether that “self” is conceived as sexed or not.
The expressive view Butler sought to replace presents gender as an inner self, which practices allow to emerge. By contrast, Butler saw those practices, and their repetition, as the source of gender.
This essay hints at an intimate familiarity with the restrictions and stigmatization that define gendered experience:
As a corporeal field of cultural play, gender is a basically innovative affair, although it is quite clear that there are strict punishments for contesting the script by performing out of turn or through unwarranted improvisations.
While Butler sees gender as potentially liberatory, she was also well aware that gender norms are often experienced in terms of confinement, stigmatization, and chastisement over deviance. “Performativity” describes the contours of an ongoing field of struggle.
Butler’s final three publications before Gender Trouble were all released in 1989. Each engages with a particular thinker’s thoughts on gender and the body: Julia Kristeva, Michel Foucault, and Maurice Merleau-Ponty.
“The Body Politics of Julia Kristeva,” published in the feminist philosophy journal Hypatia, offers a critical view of a critical view. Kristeva, the Bulgarian-French feminist philosopher, attempted to correct the androcentrism of the seminal Parisian psychoanalyst, Jacques Lacan. While Lacan stressed the importance of the patriarchy in structuring the symbolic, and therefore language, Kristeva presents a version of psychoanalysis that features a formative trauma of maternal separation. In this view, motherhood occupied a dominating and “pre-discursive” role (in that maternal attachment comes before speech). Kristeva saw maternity as “semiotic” in scope—it unfolds on the level of sign-process, extending beyond mere linguistics.
Butler finds two aspects of Kristeva’s worldview unacceptable. First, her view of motherhood assumes that women (or females?) wish to give birth as a matter of “pre-discursive biological necessity”: to be female means to want to give birth. Second, this view has no place for lesbians as full participants in culture, with Kristeva instead declaring them “inherently psychotic.”
For Kristeva, female homosexuality was too radical a break with the paternal law and symbolic order to be culturally intelligible. Since heterosexuality was defined (for either partner) as a means of getting over the trauma of separation from the maternal body, desiring other women was anti-social. While heterosexuality’s psychodrama joined together two matching sets of traumas, lesbianism could play no such role. Butler gently implies that Kristeva is examining her own phobia, rather than the phenomenon of lesbian desire itself:
Significantly, this description of lesbian experience is effected from the outside, and tells us more about the fantasies that a fearful heterosexual culture produces to defend against its own homosexual possibilities than about lesbian experience itself.
This defense of lesbianism was hardly surprising coming from Butler. Having spent most of her adult life out, Butler even played a minor role in the so-called “lesbian sex wars.” During the early 1980s, sadomasochist groups such as New York City’s Lesbian Sex Mafia or California’s Samois were charged by more “radical” lesbian feminists with being subversive agents of patriarchy. In 1982, the Against Sadomasochism collection included an essay criticizing Samois entitled “Lesbian S&M: The Politics of Dis-Illusion,” written under the pen name Judy Butler. Butler had moved well clear of this circle—and this commitment—by the later 1980s. In her 1989 essays, Butler often cited the queer thinker Gayle Rubin, once a prominent member of Samois.
By 1989, Butler had gained profound doubts that categories such as “female” or “the maternal” could be relied upon for an emancipatory politics: “The female body that [Kristeva] seeks to express is itself a construct produced by the very law it is supposed to undermine.” For Butler, female identity could not be presupposed, set apart from legal regimes as having some primordial force.
Next, Butler addresses Maurice Merleau-Ponty, a major French phenomenologist. Nine years earlier, Iris Marion Young had offered a favorable feminist account of Merleau-Ponty, but Butler was considerably more critical.
Butler charges Merleau-Ponty with assuming heterosexuality as the default state. Examining the famous case of Schneider, a brain-damaged patient of the influential German psychologists Adhémar Gelb and Kurt Goldstein, Merleau-Ponty takes Schneider’s lack of interest in women whom he finds personally unappealing as evidence of “repression.” Butler suggests it instead makes Schneider a “feminist of sorts.” Merleau-Ponty expected men to experience desire as an objectifying force, presupposing heterosexuality as a universal norm. For Butler, this made him a failure not only as a feminist, but as a phenomenologist:
Viewed as an expression of sexual ideology, The Phenomenology of Perception reveals the cultural construction of the masculine subject as a strangely disembodied voyeur whose sexuality is strangely non-corporeal… Erotic experience is almost never described as tactile or physical or even passionate.
Later scholars have argued that Butler’s harsh approach overlooks a potential radicalism found in embodied phenomenology. And other subsequent scholarship has noted that Beauvoir’s theorizing was informed by Merleau-Ponty, developing their shared key theme of ambiguity.
“Foucault and the Paradox of Bodily Inscriptions” was published in The Journal of Philosophy and examines how Foucault’s work, taken as a whole, “raises the question of whether there is in fact a body which is external to its construction, invariant in some of its structures which… represents a dynamic locus of resistance to culture per se.” The essay seems unable to answer this question. One Foucault text is aimlessly compared to the next, without considering whether the resulting coherence may have an obvious source: developments in Foucault’s thought occurred as he completed one work after another.
Perhaps most remarkable is the essay’s opening line: “The position that the body is constructed is one that is surely, if not immediately, associated with Michel Foucault.” Today, it’s difficult to imagine a more immediate association than this one, in no small part as a result of Gender Trouble.
* * *
Between these six essays, Butler outlined a view of gender as extending beyond any straightforward distinction. Gender was a means used by any given individual to situate themselves in their era’s prevailing mores. Or to resist them. Performativity is at once the invariant burden and liberatory promise offered by Butler’s thinking. The “construction” Butler has in mind when she writes of gender is a messy and ongoing process, always featuring both punishment for transgression and the potential for getting free.
One of the most frequently taught novels about the Philippines is Jessica Hagedorn’s 1990 Dogeaters. The novel takes place in the Philippines, a former colony of the United States, in the 1950s. Income inequality is extreme. Political turmoil churns as leftists rise up to challenge the ruling dictator. And in Hagedorn’s fictionalized iteration, the lives of three families of various classes become entangled in revealing ways.
Specifically, Hagedorn focuses on the women living in midcentury Manila, breaking down the ways in which neocolonialism can impact gender. In “Masquerade, Hysteria, and Neocolonial Femininity in Jessica Hagedorn’s Dogeaters,” Asian American literature scholar Juliana Chang finds that there are two forms of ambivalent femininity in the novel: masquerade and hysteria. While psychoanalytic theory has described these forms of femininity as operating under American and Western bourgeois family and closed patriarchal systems, Chang posits that the forms are also relevant in a former colony like the Philippines.
By this theory, masquerade is “a performance of femininity that masks feminine claims to power and covers over other contradictions of patriarchy.” This is exemplified by both the commodified and privileged female characters throughout the novel. The character of Zenaida, a mother whose labor is taken for granted, becomes a symbol of exploitation. Zenaida feels that she is only a surplus laborer; fittingly, she drowns at the end of the novel.
The novel takes place in a time of political turmoil. Hagedorn prominently features the First Lady, rather than the male head of state, as the political face of the country. Chang notes that this privileged figure is another example of “masquerade femininity” and spectacle. Hagedorn’s descriptions of how the First Lady must behave emphasize the pressure on women to be feminine in ways acceptable to the patriarchy, down to the grace with which she must wipe her tears.
The hysteric’s role, on the other hand, is “both to arouse and irritate paternal desire.” According to Chang, a character exemplifying hysteria is Baby Alcaran, the daughter of the powerful businessman, Severo Alcaran, and his wife, Isabel. Hagedorn makes a clear distinction between the mother and daughter. The mother is a symbol of upper class femininity. Baby, on the other hand, is often described as having physical traits and characteristics “like a man.” Baby sweats, chews her nails, has “flat breasts” and wide hips. These descriptions refuse the patriarchal normative standard and “insist on its contradictions.”
According to Hagedorn, the neocolonialist Philippines held a heteronormative standard that ignored the possibilities of queer and female subjects. The structure of global capital, and the commodification of human lives and labor in the Americanized colony, only reinforce these dominant systems. By representing diverse characters in Dogeaters, Hagedorn offers examples of the contradictory nature of neocolonialism—these characters are people who are not welcomed by the dominant systems, and yet they exist, living and loving in the world of the novel.
Hagedorn’s novel does not rely on tropes, but rather questions how those tropes came to exist. In the end, with its multiplicity of female and queer characters, Dogeaters presents an alternate narrative for the Filipino woman.
The post The Filipino Novel That Reimagined Neocolonial Gender appeared first on JSTOR Daily.
Thanks to the legalization of recreational cannabis in 10 states and the District of Columbia, sparking up a joint in these areas is as easy as ordering a glass of wine.
Spending on legal cannabis topped $12 billion worldwide in 2018, according to industry analysts, and is expected to increase to $31.3 billion by 2022. That figure includes medical cannabis, which 33 states and the District of Columbia allow for conditions such as glaucoma, chronic pain, and the side effects of cancer treatments.
With all that potential profit on the line, it’s no surprise there is growing interest in legalizing cannabis cultivation. California has issued around 10,000 cultivation permits. Between 2012 and 2016, the number of cannabis farms in the Golden State increased 58 percent and the number of plants increased 183 percent.
While much of the research has focused on public health and criminalization, the environmental implications of commercial-scale cultivation have been largely ignored. Could the increases in cannabis cultivation send the environment up in smoke?
New research has linked production of the once-verboten plant to a host of issues ranging from water theft and degradation of public lands to wildlife deaths and potential ozone effects.

“We have a culture and history of cannabis cultivation in remote areas that may be sensitive to environmental disruptions,” explains Van Butsic, co-director of the Cannabis Research Center at the University of California, Berkeley.
In California, the water-hungry crop is often grown in remote, forested watersheds and requires almost 22 liters of water per plant a day during the growing season, which adds up to three billion liters per square kilometer of greenhouse-grown plants between June and October, according to some research. During the low flow period, irrigation demands for cultivation can exceed the amount of water flowing in a river, leaving little water to sustain aquatic life.
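The per-plant and per-square-kilometer figures above can be reconciled with a back-of-the-envelope sketch. The 22 liters per plant per day and the June-to-October season come from the research cited; the greenhouse planting density is a hypothetical assumption introduced here for illustration, since the article does not state one.

```python
# Rough check of the cited water figures. The planting density below is an
# assumed value (not from the article), chosen to show how the numbers scale.

LITERS_PER_PLANT_PER_DAY = 22
SEASON_DAYS = 153                 # June 1 through October 31
PLANTS_PER_SQ_METER = 0.9         # hypothetical greenhouse density

plants_per_sq_km = PLANTS_PER_SQ_METER * 1_000_000
season_liters_per_sq_km = (
    LITERS_PER_PLANT_PER_DAY * SEASON_DAYS * plants_per_sq_km
)

print(f"{season_liters_per_sq_km / 1e9:.1f} billion liters per km^2 per season")
# prints: 3.0 billion liters per km^2 per season
```

At an assumed density of roughly one plant per square meter, the per-plant rate does indeed add up to about three billion liters per square kilometer over the season.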
Some of the biggest environmental offenders are cultivators operating unpermitted farms on public lands. These “trespass grows” are often in national forests or on tribal lands where water is diverted from streams to irrigate acres of plants. In 2018, there were an estimated 14,000 trespass grows on federal and private lands in Humboldt County, California, alone.
At the Shasta-Trinity National Forest in California, a team from the Integral Ecology Research Center, or IREC, a nonprofit organization dedicated to wildlife conservation, removed more than five miles of irrigation lines that diverted more than 500,000 gallons of water per day to irrigate cannabis plants.
IREC co-director Mourad Gabriel notes that trespass grows are often located near headwaters and have disastrous downstream effects. For example, streams in Mendocino, California, often run dry during the summer when growers are diverting water, decimating populations of Coho salmon and steelhead trout. “These are drug trafficking organizations looking to profit off of our natural resources,” says Gabriel.
Unpermitted growers wanting to avoid detection often choose public and tribal lands as prime places to hide their operations. These locations are also pristine wildlife habitats.
The cultivation sites also interfere with the restoration of distressed habitats. Local environmental groups complained that the grows overwhelmed their conservation efforts and, in some cases, disrupted ongoing restorations or made the work more dangerous, according to a 2018 study published in Humboldt Journal of Social Relations. The grows drained and polluted streams, degraded watersheds and killed wildlife.
Trespass grows, which use mass quantities of toxic rodenticides to keep rodents from chewing on irrigation lines, have been linked to the deaths of fish, birds, and mammals. One study found that 79 percent of dead fishers (small carnivorous mammals) collected in California between 2006 and 2011 had been exposed to pesticides at trespass grow sites. The rate continues to increase, according to Gabriel. Mule deer, gray foxes, coyotes, northern spotted owls, and ravens have also been victims of poisoning linked to cannabis cultivation.
“The amount of fertilizers and pesticides we find on one half-acre [of illegal] cultivation plot could be [used on] 1,000 acres of corn—and wildlife are paying the price,” Gabriel says.
It’s not just trespass grows causing environmental issues. Since Colorado stores started legally selling recreational cannabis in 2014, emissions from the 600-plus licensed cultivation facilities in Denver have sparked concerns over air pollution.
William Vizuete, associate professor at the University of North Carolina’s Gillings School of Public Health, is working on an air quality model to better understand how commercial cannabis cultivation could affect the atmosphere. His research showed that cannabis plants produce volatile organic compounds or VOCs that can produce harmful pollutants.
“If plants produce VOCs, there is a high possibility that under certain conditions, cannabis cultivation could impact the ozone,” Vizuete explains.
Cannabis emits potent VOCs called terpenes that, when mixed with nitrogen oxide and sunlight, form ground-level ozone and harmful aerosols. In a high desert zone like Denver, where there are normally few sources of VOCs, any new source of such pollutants will likely lead to ground-level ozone production, Vizuete notes. He worries that the significant numbers of cannabis plants being grown in that urban environment will become a regular source of VOCs, exacerbating the issue by combining with the manmade nitrogen oxide spewed from the many cars there. (High concentrations of VOCs have been linked to a range of human health issues, from nausea and fatigue to liver damage and cancer.)
To test the potential effects, Vizuete grew four of the 600-plus cannabis strains available in Colorado (Critical Mass, Lemon Wheel, Elephant Purple, and Rockstar Kush) for 90 days and measured the terpenes at each stage of growth. The results showed that in Denver, assuming a concentration of 10,000 plants per cultivation facility, cannabis could more than double the existing rate of annual VOC emissions to 520 metric tons and produce 2,100 metric tons of ozone.
Vizuete believes his estimates might be conservative, explaining, “We picked four [cannabis] strains based on their popularity, and their VOC emissions might not be representative of all the strains. Additionally, in commercial facilities, where conditions are optimized for growth, emissions may be even higher.”
Regulating the production of cannabis can address many of the environmental issues associated with its cultivation, argues Jennifer Carah, senior scientist in the water program at The Nature Conservancy of California.
In California, where up to 70 percent of legal cannabis is grown, the California Department of Food and Agriculture regulates the licensure process, but many counties and municipalities also have the authority to grant cultivation licenses and, Carah says, the regulations are highly variable. The black market for cannabis also persists: legal cannabis is more expensive than black-market cannabis, and not all growers are willing to go through the process of becoming legal.
“The black market is not going away,” Carah admits, “but to the degree that we can entice growers into the legal market, their agricultural practices can be regulated like other agricultural crops, which will go a long way to addressing potential environmental impacts.”
Recently, legalization has put a dent in the number of trespass grows. Illicit cultivation in Oregon forests decreased following legalization.
Some states have established environmental regulations for cannabis growers. California Water Boards require permitted growers to register water rights and follow strict guidelines that include prohibitions on diverting surface water from April to October and irrigating with stored water during the dry season—regulations not imposed on other California-grown crops. In Washington State, the Puget Sound Clean Air Agency requires growers to submit information about their plans for monitoring and controlling air pollution.
Butsic of UC Berkeley argues that federal legalization would also provide new funding opportunities through organizations such as the National Science Foundation and Environmental Protection Agency to allow researchers to assess environmental risks and develop solutions.
From a pollution perspective, federal legalization could set emissions standards.
“There are lots of technologies that capture VOCs before they enter the atmosphere that are required in other industries like gas stations,” Butsic explains. “Before [emissions] standards can be set for cannabis, we need recognition of the issue and long-term data to develop regulatory statutes—and we’re a long way from that because federal prohibition has hindered research and we don’t have the science yet.”
The post The Environmental Downside of Cannabis Cultivation appeared first on JSTOR Daily.
The tech got away from us
And we weren’t ready
I don’t think we’re wired to handle this
Those lines come from Octet, a new a cappella musical about digital addiction from Tony-winning composer Dave Malloy. Octet offers a theatrical riposte to our increasingly saturated digital lives: Over the course of ninety minutes, the show takes us inside an imagined 12-step support group for eight digital addicts, each of whom speaks to a different online pathology. I was lucky to see a recent performance, and was awed by its insightful (not to mention amusing) take on digital temptations like online dating, conspiracy sites, and Candy Crush.
But perhaps the most striking thing about Octet is its very existence. As a thoughtful and entertaining consideration of the various perils of life online, Octet is an early harbinger of an inevitable wave of cultural production: Art and entertainment designed to help us navigate the emotional and psychological perils of the digital world.
These distresses are the necessary by-product of technological innovation itself. As Paul Virilio famously wrote in “An Architect’s Crime,” the advent of any new technology “necessarily entails the creation of a new kind of specific accident; for the invention of the ship ushered in the shipwreck; the invention of the railway, the derailment; of the plane, the plane crash; of the computer, the bug or virus.”
But the disasters inherent in new technology can be psychological as well as material. Some scholars have already framed the challenges of information technology in the language of addiction: In “Digital Disturbances, Disorders, and Pathologies,” Noela A. Haughton et al. summarize a 1998 definition of internet addiction as “an impulse-control disorder” in which “individuals derive satisfaction and gratification as they compulsively check their e-mails, browse Internet sites, or pursue other technology-centered activities, such as gaming and gambling, and are often unable to control the desire to be online.”
I’m not convinced that the addiction framing is a particularly accurate or helpful way of understanding the perils of life online. As Amnon Jacob Suissa writes in “Medicalization and Addictions,” “the more we label people as having or suffering from pathologies, the more we multiply their number.”
But the addiction framing is useful in embracing the potential of art to help us understand and address the challenges of our new lives online. Whether or not you consider them “addictions” per se, there are certainly many mental health and relationship issues that are unique to the online world—issues that often involve compulsive or even pathological behaviors like excessive online shopping, relentless social media browsing, or constant online gaming.
In engaging directly with these compulsions, Octet joins a rich tradition of artistic works that map emergent temptations and help us navigate their risks. Throughout history, every time we have encountered a new mind-altering substance, an outpouring of art and entertainment has helped us to figure out its potential risks and benefits.
While we are now grappling with a technology rather than something we eat, smoke, or swallow, the challenges closely parallel previous encounters with drugs and alcohol. Understanding how art has helped us navigate those previous encounters can help us anticipate the way it may now help us come to terms with our digital compulsions.
There are three key ways art helps us make sense of mind-altering substances (or technologies): Through causal stories, through cautionary tales, and through invocations of mind-expanding potential.
The tradition of using mind-altering substances as plot devices goes at least as far back as Shakespeare. “In various plays the drunkenness of some character is an essential feature of the plot,” scholar Albert H. Tolman writes in “Drunkenness in Shakespeare,” “and in most of these cases one feels a distinct note of disapproval.” As evidence, Tolman cites Borachio’s drunken words in Much Ado About Nothing, which, when overheard, thwart a villainous plot; Othello’s dismissal of a drunken Cassio, as engineered by the evil Iago; and Hamlet’s condemnation of his uncle’s drinking.
Where once we blamed alcohol, now we use technology as the plot device that leads people astray. To take another example from musical theatre, the musical Dear Evan Hansen portrays an online video as the crucial catalyst for an escalating series of misrepresentations and misunderstandings.
These kinds of causal stories are dramatically useful—it’s easier to sympathize with a character who can blame his bad behavior on the devil alcohol or the demon YouTube—but they also help readers, audiences, and listeners recognize the potential impact of substance use and abuse. If excessive drinking, smoking, or web-surfing can lead a hero astray, we are implicitly warned, then we need to recognize the potential for these temptations to derail our own lives. By tapping into that tradition, art that uses digital compulsion as plot device serves as a useful reminder to continually scrutinize how technology shapes our own life choices.
But art and literature about mind-altering substances needn’t be limited to implicit warnings about their potential life-altering dangers: There is also plenty of art that explicitly warns us of the risks of addiction and substance abuse. As George W. Ewing writes in “The Well-Tempered Lyre: Songs of the Temperance Movement,”
Few people today realize the extent of the publication of temperance verse or its all-pervasive influence in the lives of the white Anglo-Saxon Protestants of nineteenth-century America. The other section of the choir is more often heard, for some drinking songs, such as “The Little Brown Jug,” are part of the repertory of virtually every English-speaking vocalist… In spite of this early recognition that poetry might play a part in keeping a people sober, centuries passed before the output of anti-drinking verse approached that of drinking songs.
The temperance movement—and its music—eventually found expression in the call for Prohibition, which went into force just as the film industry emerged. As Michael C. Gerald writes in the exhaustive “Drugs and Alcohol Go To Hollywood,” “[t]he use of alcohol in excess is a familiar film theme, which is not surprising since it mirrors American society’s involvement with booze.” As Gerald chronicles, these films typically followed one of a few, familiar trajectories:
The drinker may engage in regular heavy drinking leading to humorous situations. The drinker may be a celebrity or an “ordinary” individual whose career or life has followed a progressive downhill trajectory. Disastrous consequences result for the drinker, which often reverberate to the family or other close relationships. Still other films continue by retracing the alcoholic’s path back to recovery and even redemption.
If the alcoholic or drug addict has become a stock character through these cautionary tales, we can anticipate that the tech addict will soon become just as familiar. Already, in the movie Her, the novel The Circle, and the TV show Black Mirror, we have met characters who serve as object lessons in technology over-use.
As much as these portrayals sometimes rankle (please don’t make me think twice about how much I love my iPhone), there’s a reason we have a long cultural tradition of depicting addicts in art and entertainment. By explicitly preaching the dangers of substance or technology over-use, art can inspire us to keep our own compulsions in check.
While mind-altering substances have often been portrayed as dangerous or harmful, there’s also a creative tradition that celebrates their potential. In “William Burroughs and the Literature of Addiction,” Frank D. McConnell notes that substance abuse literature sits well within America’s frontier tradition. Citing critic Leslie Fiedler, McConnell writes that drugs are “simply another permutation of the American myth of westering, out of which he has gotten so much mileage: a retreat into the last undiscovered territory, the inner space of the mind.” Arguing that Burroughs’ Naked Lunch is a fine representation of this tradition, McConnell writes that:
It is only appropriate that the literature of addiction, European and Romantic in genesis, should find its fullest articulation in an American novel, just as it is inevitable that America should become the most addicted country in the West, and that only within the last half-century.
Even as Burroughs was exploring drug use on paper, the motion picture industry was birthing a new wave of films that not only tolerated substance use, but celebrated it. Gerald writes:
The 1960s witnessed a sharp escalation in societal interest in and use and general acceptance of such mind-altering substances as LSD and marijuana. This was coupled with the demise of the Motion Picture Production Code’s restrictions on subject matter deemed unsuitable for movies. With these changes came a number of movies positively depicting the use of such drugs to escape the boredom of traditional living and to relieve workplace pressures and the tension between the establishment and the counterculture.
If tech addiction literature follows this path, we may see productions like Octet countered by voices that celebrate our collective disappearance into the digital ether. The book and subsequent movie of Ready Player One suggest one potential framing: As the physical world grows gradually more polluted and less tolerable, the digital world may represent a relatively more appealing retreat.
But at this moment, with this set of technologies, our need for assistance in navigating the perils of online living vastly outweighs our need for more cheerleading. That’s why it’s so exciting to see a work of art like Octet join in the rich tradition of art that helps us come to terms with the negative impacts of mind-altering substances and technologies: By offering us causal and cautionary tales, art can help us recognize the risks of abuse and over-use alongside the benefits of mind-altering potential.
The more those problems seep into our day-to-day lives, the more we’re going to need stories like these—not the least because so many of those problems are larger than the internet itself. To quote one final lyric from Octet:
When we say “in real life”
This is a lie to protect us
It is all real
It is all real life
An emotional cure for pain (NPR)
by Patti Neighmond
For many chronic pain patients, standard cures do little good. Some are finding that what does help is delving into childhood traumas to reshape mental functioning.
Wait, did Prohibition actually work? (Vox)
by German Lopez
Everyone knows Prohibition failed, right? Actually, for all its problems, banning alcohol reduced drinking, improved some aspects of public health, and may have reduced violent crime. What does this suggest about drug and alcohol policy today?
Why we love to hate #CursedImages (Wired)
by Emma Grey Ellis
On the internet, #CursedImages are everywhere. Why do we click and share photos that disturb, confuse, and horrify us? It has to do with the appeal of novelty and the drive to investigate ambiguity and danger.
How wine shaped Marxism (Atlas Obscura)
by Reina Gattuso
Before Karl Marx was a revolutionary philosopher, he was a heavy-drinking, trouble-making student. In fact, his concern for the vineyards of the Mosel River Valley may have inspired his turn toward economic thinking.
The history of birthright citizenship (The Washington Post)
by Marixa Lasso
Birthright citizenship is under attack in the United States. To understand what that means, we need to look at how the concept was born—in Colombia and other nations emerging from Spanish colonialism—and how it spread to the U.S. That history has a lot to do with fighting racism and coping with the legacy of slavery.
Got a hot tip about a well-researched story that belongs on this list? Email us here.
As Theresa May tearfully announced her departure to the British public outside 10 Downing Street, she served as a symbol of the personal and professional challenges faced by those put in the impossible position of inheriting leadership rather than choosing it.
May faced the double challenge of having to manage the formidable charges given to any state leader (including, in her special case, the Gordian knot that is Brexit), without the popular support of an electorate. She was awkwardly shuffled into power after her predecessor, David Cameron, suddenly resigned, and her party scrambled to find a replacement.
It’s a position for which democracy is ill-prepared. The democratic system has certain measures in place for when the people’s choice fails them through death, accident, or injury, or self-selects out of the position of power. However, as anyone voting for president knows, the vice president is like an insurance policy you never expect to claim.
Nonetheless, throughout American history, there have been nine “accidental presidents,” as scholar Philip Abbott calls them in Presidential Studies Quarterly. Presidents Tyler, Fillmore, Andrew Johnson, Arthur, Theodore Roosevelt, Coolidge, Truman, Lyndon Johnson, and Ford all wrestled with the unique challenge of proving themselves to be presidential material despite already being president.
These accidental presidents have had varying levels of success. Two are among historians’ most highly ranked. Four are among the worst remembered. Four were not nominated by their own parties for another term, and three others voluntarily gave up the opportunity.
Nonetheless, thrust into the role, all nine had to work to establish their legitimacy. Abbott draws some similarities between their various strategies:
The Homage Strategy: Some accidental presidents choose to lean heavily on the virtues of their predecessors, understanding the public’s (and their own party’s) deep emotional ties to the elected president they replaced. Lyndon B. Johnson took this approach despite his known tensions with the Kennedys.
The Independent Strategy: All presidents, even those who begin their tenure by emphasizing the greatness of their predecessors, must eventually forge their own path. Some choose to do this sooner than others. Tyler—the first accidental president—chose to skip the homage to his predecessor and strike out on his own. Tyler inherited the presidency from President William Henry Harrison and, according to Abbott, quickly formalized his position by giving an inaugural address and moving into the White House. He then proceeded to infuriate other members of government by setting policies under his own agenda. While this seemed risky, there was a method to Tyler’s madness. As the first in his situation, he was in a unique position. To adhere too closely to the leader before him, or to be too agreeable to his peers, might well have made him seem incompetent or timid and significantly weakened his power.
The Minimalist Strategy: The minimalist strategy is what it sounds like—a cautious approach in which the former vice president acts as a steady caretaker, not concerned with blazing new paths or establishing a new form of leadership. Both Coolidge and Ford took this approach, with wildly varying degrees of success. The success of this strategy depends on the economic and political climate at the time; a quieter political atmosphere is much better suited to a quieter president.
What Abbott concludes, however, is that in each case and in each strategy, accidental presidents always somehow stumbled or struggled. After all, a leader needs integrity in the eyes of the populace, as well as their own political party, in order to establish complete legitimacy.
Theodore Roosevelt’s rhetorical arrangement of his relationship to the McKinley assassination, Coolidge’s construction of the “Silent Cal” persona, and Lyndon Johnson’s swift and systematic use of homage for his political agenda are all examples of the inventiveness available to accidental presidents. That despite many different instances of creativity, accidental presidents still fail in various degrees to fulfill the roles of rex and dux [i.e. legitimacy and leadership], is a valuable aspect of a theory of democratic succession, for the imagination of political leaders should always be tethered to election.
Today, many teachers agree that antiracist lessons are an important part of a good education, but most will concede that it can be difficult to craft these lessons well. That was also true during World War II, when American teachers embarked on an ambitious effort to fight racism, as education historian Zoë Burkholder writes.
In the late 1930s, an increased focus on national unity, along with concerns about Nazi propaganda, encouraged teachers to embrace “tolerance education.” Burkholder writes that for many teachers and students, standing up against racism was an obvious part of the fight against the Nazis. In most of the country, when teachers talked about race they were mostly discussing different ethnic groups that we would now lump together as white. One Indiana teacher, for example, focused on teaching her students about the scientific and artistic “gifts” brought to America “even from those countries whose political policies we condemn or whose sons and daughters we call wops and dagoes and hunkies.”
Burkholder writes that this concept of “cultural gifts” leaned heavily on stereotypes. One teacher discussed “color from Italy, stamina and restraint from the Scandinavian countries, artistry from France, steady nerve and purposefulness from Britain.” Few schools included Asian, African, or African-American cultures in these lessons.
That changed over the next few years, particularly in 1943, when riots targeting black and Mexican-American workers in many cities brought racism against these groups to educators’ attention. A new kind of tolerance education took hold, rooted in “scientific” ideas about race. Where teachers’ descriptions of the “races” of Europe had been based on informal, popular understandings, this new strain leaned on anthropology. The Races of Mankind, an illustrated pamphlet published by anthropologists Ruth Benedict and Gene Weltfish in October of 1943, became enormously popular among educators.
The Races of Mankind argued that human diversity was based more on culture than genetics, and that people of all races had the potential to be equal. But it also gave scientific credence to the idea of race as a static, genetically-based category. The book held that humanity could be divided into three races: “Caucasian,” “Mongoloid,” and “Negroid.”
Sometimes these “anti-racist” lessons were scientific. One class demonstrated that blood samples from black and white students were indistinguishable in an effort to challenge the Red Cross policy of segregated blood banks. In other cases, though, the “anti-racist” lessons replicated the “cultural gifts” concept in disturbing ways. For example, one Ohio class of white students spent weeks studying African-American achievements. Then, the students donned blackface and “mammy dresses” to act out black Americans’ rise from slavery.
Burkholder writes that the racial curricula faded out of use after the end of World War II, having achieved mixed results. Some white students reported a new understanding of equality with black and Asian peers. But the classes also helped reinforce Americans’ understanding of races as immutable “scientific” categories.