
Whistleblowing: A Primer

By Matthew Wills

In 1778, the Continental Congress decreed that it was “the duty of all persons in the service of the United States … to give the earliest information to Congress or any proper authority of any misconduct, frauds or misdemeanors by any officers or persons in the service of these states.”

This “founding” attitude has fared… rather ambiguously ever since. As law professor Shawn Marie Boyne shows in her review of the legal protections for whistleblowers in government and industry, “the country’s treatment of whistleblowers has been a conflicted one.” Regardless of the organizational model (public, private, non-profit), those in power who have had the whistle blown on them rarely applaud. Heroes to some, whistleblowers are often labeled traitors by those in power, as in the cases of Boyne’s examples, Edward Snowden and Chelsea Manning.

“The question of whether a whistleblower will be protected or pilloried depends on the interests of those in power,” Boyne writes. Leaks to the media from officials for political advantage are standard operating procedure. But those outside this inner circle don’t fare as well: Snowden is in exile and Manning is in jail. Boyne notes that three NSA employees who did do what critics said Snowden and Manning should have done, that is, go through the system and use the proper channels to report government abuse, “found their lives destroyed and reputations tarnished.”

Retaliation against whistleblowers hit some of the pioneers, too, Boyne notes. Ernest Fitzgerald, who revealed billions in cost-overruns in a military transport program in 1968, was demoted after President Richard Nixon told his supervisors to “get rid of the son of a bitch.”

That same president ordered a break-in at Daniel Ellsberg’s psychiatrist’s office in 1971, in hopes of finding dirt on Ellsberg. An analyst for the RAND Corporation, Ellsberg had released the Pentagon Papers to the New York Times. This classified historical study of the war in Vietnam revealed that the government realized early on that the war could not be won. Defending his actions in 1971, Ellsberg said, “I felt that as an American citizen, as a responsible citizen, I could no longer cooperate in concealing this information from the American public.”

Retaliation against whistleblowers is, as scholar Michael T. Rehg and his co-authors show, quite gendered. “Male whistleblowers were treated differently depending on their power in the organization, but female whistleblowers received the same treatment regardless of the amount of organizational power they held: Their status as women overrode their status as powerful or less powerful organization members.” These authors also found that “women who reported wrongdoing that was serious or which harmed them directly were more likely to suffer retaliation, whereas men were not.”

While laws have been strengthened to help whistleblowers, presidents and CEOs nevertheless continue to go after them.

The post Whistleblowing: A Primer appeared first on JSTOR Daily.

Membership, Citizenship, and Democracy

President Trump’s pernicious attacks on nonwhite immigrants have thrust a particular theory of political membership—white nationalism—to the forefront ...

The post Membership, Citizenship, and Democracy appeared first on Public Books.

Did Humans Once Live by Beer Alone? An Oktoberfest Tale

By Lina Zeldovich

In October of 1953, the farmers of the Western hemisphere were busy toiling over harvested grain, either milling it into flour or prepping it for brewing. Meanwhile, a group of historians and anthropologists gathered to debate which of these two common grain uses humans mastered first—bread or beer?

The original question posed by Professor J. D. Sauer, of the University of Wisconsin’s Botany Department, was even more provocative. He wanted to know whether “thirst, rather than hunger, may have been the stimulus [for] grain agriculture.” In more scientific terms, the participants were asking: “Could the discovery that a mash of fermented grain yielded a palatable and nutritious beverage have acted as a greater stimulant toward the experimental selection and breeding of the cereals than the discovery of flour and bread making?”

Interestingly, the available archaeological evidence didn’t produce a definitive answer. The cereals and the tools used for planting and reaping, as well as the milling stones and various receptacles, could have been used to make either bread or beer. Nonetheless, the symposium, which ran under the title of Did Man Once Live by Beer Alone?, featured plenty of discussion.

The proponents of the beer-before-bread idea noted that the earliest grains might have actually been more suitable for brewing than for baking. For example, some wild wheat and barley varieties had husks or chaff stuck to the grains. Without additional processing, such husk-enclosed grains were useless for making bread—but fit for brewing. Brewing fermented drinks may also have been easier than baking. Making bread is a fairly complex operation that necessitates milling grains and making dough, which in the case of leavened bread requires yeast. It also requires fire and ovens, or heated stones at the least.

On the other hand, as some attendees pointed out, brewing needs only a simple receptacle in which grain can ferment, a process that can be easily started in three different ways. Sprouting grain produces its own fermentation enzyme—diastase. There are also various types of yeast naturally present in the environment. Lastly, human saliva also contains fermentation enzymes, which could have started the brewing process in partially chewed grain. South American tribes make corn beer called chicha, as well as other fermented beverages, by chewing the seeds, roots, or flour to initiate the brewing process.

But those who believed in the “bread first, beer later” concept posed some important questions. If the ancient cereals weren’t used for food, what did their gatherers or growers actually eat? “Man cannot live on beer alone, and not too satisfactorily on beer and meat,” noted botanist and agronomist Paul Christoph Mangelsdorf. “And the addition of a few legumes, the wild peas and lentils of the Near East, would not have improved the situation appreciably. Additional carbohydrates were needed to balance the diet… Did these Neolithic farmers forego the extraordinary food values of the cereals in favor of alcohol, for which they had no physiological need?” He finished his statement with an even more provocative question: “Are we to believe that the foundations of Western Civilization were laid by an ill-fed people living in a perpetual state of partial intoxication?” Another attendee said that proposing the idea of grain domestication for brewing was not unlike suggesting that cattle were “domesticated for making intoxicating beverages from the milk.”

In the end, the two camps met halfway. They agreed that our ancestors probably used cereal for food, but that food might have been in liquid rather than baked form. It’s likely that the earliest cereal dishes were prepared as gruel—a thinner, more liquid version of porridge that had long been a dietary staple of Western peasants. But gruel could easily ferment. Anthropologist Ralph Linton, who chose to take “an intermediate position” in the beer vs. bread controversy, noted that beer “may have resulted from accidental souring of a thin gruel … which had been left standing in an open vessel.” So perhaps humankind indeed owes its effervescent bubbly beverage to some leftover mush gone bad thousands of years ago.

The post Did Humans Once Live by Beer Alone? An Oktoberfest Tale appeared first on JSTOR Daily.

Industrial London’s Maternal Child Abductors

By Livia Gershon

There may be no crime that horrifies the public more than child abduction. Historian Elizabeth Foyster writes that this was also true in London 200 years ago, though the crime then typically took a much different form than we expect today.

Historians generally agree that the late eighteenth century brought a major change in what English childhood meant. This included more positive attitudes toward kids, and a new wealth of books, toys, and clothes for middle-class urban children. Children were increasingly prized “for giving women a role as mothers,” and as “miniature models of all that a more affluent consumer society could afford,” Foyster writes.

If children were becoming more valuable, it stands to reason that, like all valuable things, they were in danger of being stolen. And, indeed, Foyster found 108 cases of child abduction tried in London and reported in the newspapers between 1790 and 1849.

Child abduction was nothing new, but it was understood differently than in previous times. In fourteenth-century England, “ravishment” covered both forced and consensual “abduction” of children or adult women. It typically had a sexual element, and the child victims were generally teenagers. Later, in the seventeenth century, abduction was understood as a fate befalling unfortunate boys forced into indentured servitude.

In contrast, in the period Foyster studied, the majority of stolen children were under six, and the abductor was usually a woman in her 20s or 30s. In some cases, kids were stolen for their clothes. Abductors might bring fancy children’s clothes to a pawnbroker, leaving a half-naked child outside. Other times, women reportedly stole children to gain sympathy when begging for money or seeking a job.

There were also well-off married women who stole—or paid someone else to steal—children they could present as their own. One 22-year-old wrote to her husband, serving in the Navy, about an invented pregnancy and childbirth. When she learned he was returning home, she travelled to London, snatched a four-year-old boy, and cared for him for two months before she was caught.

Foyster writes that news accounts paid little attention to possible harm done to the children. Unlike today, child abduction wasn’t generally assumed to be motivated by deviant sexual desire. Instead, newspapers focused on the terror and despair of mothers whose children were stolen, and suggested a parallel lack of feeling in the abductors.

A judge told one convicted child thief that, as a childless woman, she was “Ignorant of those heavenly feelings which subsist in the relation between parent and child; for had you been a mother, you must have respected and regarded, instead of agonizing a mother’s heart.” Still, Foyster writes, news reports also acknowledged that child-stealers might be motivated by a twisted “fondness” for children—reflecting their own stunted development.

Child-thieves clearly had no place in the growing public conception of natural motherly love. Yet the new understanding of children as valuable objects who gave meaning to women’s lives may have spurred the increase of child abduction.

The post Industrial London’s Maternal Child Abductors appeared first on JSTOR Daily.

How Natural Gas Helped Make our Industrial World

By Eric Schewe

The natural gas industry is enjoying a renaissance, thanks to the widespread adoption of fracking around the country in the past fifteen years. In that time, domestic production of natural gas has increased by around 50%. Natural gas now accounts for a third of the energy produced in the United States, more than any other source. Until recently, natural gas was billed as the “green” fossil fuel. Compared to coal or petroleum products, burning methane gas (CH4) releases less carbon into the atmosphere to produce the same energy, but it still releases harmful emissions.

Coal and gasoline have earned their reputation as fossil fuel boogeymen. Both have played extremely visible roles as the principal feedstocks for electricity generation and automobiles, respectively. But, scholar Leslie Tomory writes, methane gas was actually the first fuel to be delivered in an integrated network that provided hydrocarbon energy to the masses at the flip of a switch, back in Regency-era London. In the process, the Gas Light and Coke Company (GLCC) confronted and solved problems of industrial politics, time coordination, machine standardization, contractor management, and even customer relations that have often been attributed to the later railway or electricity industries.

Founded in 1812, the GLCC produced coal gas. The company heated coal in large vessels (“retorts”) inside ovens, forcing out its gases and other impurities (such as sulfur) to produce coke. The expanding steel industry needed the purified carbon in coke to make high-quality steel. The GLCC was the first company to store the released gas and offer it for lighting homes, businesses, and street lamps.

Aside from a few local water supply networks, nothing on this scale had ever been attempted. Even the company’s political position was new, straddling private and public concerns. In exchange for papers of incorporation, the GLCC agreed to install and fuel street lamps at low prices. In practice, this encouraged localities to agree to let the company tear up the streets to lay pipe.

The GLCC intended only to provide gas from 4:00 pm to 10:00 pm, but the demand for the gas was high. Before the invention of gas meters, customers could pay a flat fee but then use the gas all night. Some widened their valves to make the gas brighter or even stored the gas illegally. The company responded by improving its generating capacity, but also by regularly inspecting homes and requiring the use of standardized pipes, valves, and burners.

Methane gas has recently overtaken coal as the most common source of energy for electrical generation worldwide. Migrating toward renewable energy today requires solving many of the same problems that the early gas industry faced: storage, transmission, and most of all, politics. That is because—unlike in the 1810s—renewable energy is attempting to displace a pre-existing, complex energy infrastructure, backed by powerful interests, that has structured the world we see around us.

The post How Natural Gas Helped Make our Industrial World appeared first on JSTOR Daily.

Impossible Belonging

If the sharp end of critique’s job is to name injury, then it also has a soft lining that is oriented around recovery and repair. Even if a particular critical project stays with injury rather than whatever might come after, what else is there to want, in the wake of naming injury, but to fix it? Both writers and readers of such critiques are thrust into a morality tale, the drama of selves...

The post Impossible Belonging appeared first on Public Books.

What Does It Mean To Be Celtic?

By Matthew Wills

A recent book by Caoimhín De Barra explores the formation of Celtic nationalism. In the late twentieth century, “Celticity” was sparked anew by the UK’s devolution of power to Scotland, Wales, and Northern Ireland. Celticity, however, has turned out to be quite exportable, and not just in the form of Celtic music, Irish dance, and fantasies like the 1995 movie Braveheart.

According to scholars Euan Hague, Benito Giordano, and Edward H. Sebesta, two organizations that arose in the 1990s have appropriated contemporary versions of Celtic nationalism as a proxy for whiteness. Both call for separate nations to be set aside for the citizens they count as “white.” One is the League of the South (LS), a fringe group that argues for a return to the Confederate States of America. Meanwhile, in Italy, Lega Nord (LN) has also taken up the banner of “Celtic culture” as a model of whiteness. They advocate for a state called Padania, separate from Italy’s south. The LN, frequently called just Lega, is part of Italy’s coalition government. In the 2018 elections, the LN took just under 18% of the vote for both the Chamber of Deputies and the Senate.

Both the LS and the LN argue that Celtic-ancestry people are a “distinct ethnic group deserving of self-determination and an independent nation state,” write Hague et al. Comparing the two leagues, the authors explore the confluence of ethno/race-based nationalism with the use (and misuse) of the myths of Celticity.

Celticity is “an attractive set of symbols and identities that come replete with popular recognition and a supposedly ancient past that can be invoked by people for many purposes, from ‘new age’ religion to popular ‘world music.'” Historically, however, that “ancient past” is hard to pin down. Hague et al. explain:

The very flexibility and the vagaries of archeological evidence regarding the original Celts enable multiple political and cultural meanings to be invested in the form, whilst retaining the symbolic value and historical authority accrued by the reference to a supposedly ancient Celtic culture.

“The Celts” can be, and have been, envisioned in all sorts of ways: as a warrior class; as a pan-European people; as the epitome of whiteness; as “whatever version of the past seemed nationally expedient.” It’s a cultural identity that has come into vogue in recent decades.

The LN posits that northern Italy is culturally and ethnically distinct from southern Italy. Southern Italians aren’t seen as Celtic/white/European—shades of the way Italian immigrants were first treated in the U.S. For LN, separation is essential to block immigration from Africa, Asia, and southern Italy.

Nationalism tries to make “ancient connections between a people and a specific territory, an intersection of genealogy and geography.” By exploiting the ethos of multiculturalism, both the LS and the LN argue for a “right to cultural difference.” This right, the authors say, fits into “ongoing processes of white privilege.” While overt racism is generally frowned upon, “an appeal to Celtic ethnicity appears acceptable and can be justified by utilizing a rhetoric of cultural awareness while simultaneously subverting political commitments to cultural equality and reasserting white superiority.”

The post What Does It Mean To Be Celtic? appeared first on JSTOR Daily.

The Accidental Presidents of the United States

By Farah Mohammed

As Theresa May tearfully announced her departure to the British public outside 10 Downing Street, she served as a symbol of the personal and professional challenges faced by those put in the impossible position of inheriting leadership rather than choosing it.

May faced the double challenge of having to manage the formidable charges given to any state leader (including, in her special case, the Gordian knot that is Brexit), without the popular support of an electorate. She was awkwardly shuffled into power after her predecessor, David Cameron, suddenly resigned, and her party scrambled to find a replacement.

It’s a position for which democracy is ill-prepared. The democratic system has certain measures in place for when the people’s choice fails them through death, accident, or injury, or self-selects out of power. However, as anyone voting for president knows, the vice president is like an insurance policy you never expect to claim.

Nonetheless, through American history, there have been nine “accidental presidents,” as scholar Philip Abbott calls them in Presidential Studies Quarterly. Presidents Tyler, Fillmore, Andrew Johnson, Arthur, Theodore Roosevelt, Coolidge, Truman, Lyndon Johnson, and Ford all wrestled with the unique challenge of proving themselves to be presidential material despite already being president.

These accidental presidents have had varying levels of success. Two are among historians’ most highly ranked. Four are among the worst remembered. Four were not nominated by their own parties for another term, and three others voluntarily gave up the opportunity.

Nonetheless, thrust into the role, all nine had to work to establish their legitimacy. Abbott draws some similarities between their various strategies:

The Homage Strategy: Some accidental presidents choose to lean heavily on the virtues of their predecessors, understanding the public’s (and their own party’s) deep emotional ties to the previous president—as Lyndon B. Johnson did, despite his known tensions with the Kennedys.

The Independent Strategy: All presidents, even those who begin their tenure by emphasizing the greatness of their predecessors, must eventually forge their own path. Some choose to do this sooner than others. Tyler—the first accidental president—chose to skip the homage to his predecessor and strike out on his own. Tyler inherited the presidency from President William Henry Harrison, and, according to Abbott, quickly formalized his position by giving an inaugural address and moving into the White House. He then proceeded to infuriate other members of government by setting policies under his own agenda. While this seemed risky, there was a method to Tyler’s madness. As the first in his situation, he was in a unique position. To adhere too closely to the leader before him, or be too agreeable to his peers, may well have made him seem incompetent or timid, and significantly weakened his power.

The Minimalist Strategy: The minimalist strategy is what it sounds like—a cautious approach in which the former vice president acts as a steady caretaker, not concerned with blazing new paths or establishing a new form of leadership. Both Coolidge and Ford took this approach, with wildly varying degrees of success. The success of this strategy depends on the economic and political climate at the time; a quieter political atmosphere is much better suited to a quieter president.

What Abbott concludes, however, is that in each case and in each strategy, accidental presidents always somehow stumbled or struggled. After all, a leader needs integrity in the eyes of the populace, as well as their own political party, in order to establish complete legitimacy.

Abbott concludes:

Theodore Roosevelt’s rhetorical arrangement of his relationship to the McKinley assassination, Coolidge’s construction of the “Silent Cal” persona, and Lyndon Johnson’s swift and systematic use of homage for his political agenda are all examples of the inventiveness available to accidental presidents. That despite many different instances of creativity, accidental presidents still fail in various degrees to fulfill the roles of rex and dux [i.e. legitimacy and leadership], is a valuable aspect of a theory of democratic succession, for the imagination of political leaders should always be tethered to election.

The post The Accidental Presidents of the United States appeared first on JSTOR Daily.

QUIZZICAL: Famous Writers, and Their Pets!

Behind many celebrated writers is a canine, feline, reptilian, or avian pal. Do you know the domestic creatures that have kept these novelists, playwrights, poets ...

The post QUIZZICAL: Famous Writers, and Their Pets! appeared first on Public Books.

Subscription Art for the 19th-Century Set

By Livia Gershon

Is art only for elites with the money to buy the most brilliant works and the education to enjoy them? Or is art a public good with the capacity to bring communities and nations together? Historian Rachel N. Klein explains how in the 1840s, a New York City organization called the American Art-Union embraced the latter interpretation.

The Art-Union grew out of the Apollo Gallery, established by portrait painter James Herring in 1838 as a way for artists to exhibit work for sale. Over the next few years, it began buying up a range of paintings by artists around the country. Subscribers paid a $5 annual fee for the chance to win a piece of art; the organization used the proceeds to pay artists and to keep a free gallery open to the public. By the late 1840s, the organization was the primary market for American paintings other than portraits. Klein writes that over its thirteen-year lifespan, the organization purchased 2,481 works by more than 300 artists. Many of these were landscape paintings, images of everyday life, and pictures that told exciting or funny stories.

The Art-Union was part of a movement on both sides of the Atlantic that saw art as a means of moral improvement for the general population. It opposed both the patronage model of painting for elites and the reimagining of art as profit-driven popular entertainment. Philip Hone, a former New York mayor who helped establish the union, railed against the “licentiousness” of the penny press and hoped to educate the “taste of the people” for the good of the republic.

And yet, Klein writes, the Art-Union’s economic model took full advantage of a growing popular interest in the expanding consumer and investment economy. Following P.T. Barnum’s lead, it lured people of all social classes to its gallery with a display of paintings and lavish furnishings that extended into the street. Additionally, it offered the chance to win not just its paintings but also limited-edition etchings and medals depicting famous artists. Subscribers were motivated by the hope of a valuable prize: In 1848, when it offered a series of Thomas Cole paintings worth $6,000, its subscriptions jumped from 9,666 to 16,475.

The Art-Union ultimately collapsed under pressure from several directions. Within the world of high art, certain critics objected to the organization’s purchase of what they considered to be mediocre paintings for the sake of providing prizes affordably. Meanwhile, some artists complained they had been overlooked by the union and attempted to create rival institutions.

But Klein writes that the biggest blow came from the penny press—specifically, the New York Herald. The paper railed against the Art-Union as an elitist institution that used its lottery system to take advantage of subscribers. The Herald rallied public opinion to its side, eventually prompting legal action by the New York District Attorney. In 1852, the state’s Supreme Court ruled that the organization was an illegal lottery, bringing its work to an end.

The post Subscription Art for the 19th-Century Set appeared first on JSTOR Daily.

The Stonewall Riots Didn’t Start the Gay Rights Movement

By Catherine Halley

Despite what you may hear during this year’s fiftieth anniversary commemorations, Stonewall was not the spark that ignited the gay rights movement. The story is well known: A routine police raid of a mafia-owned gay bar in New York City sparked three nights of riots and, with them, the global gay rights movement. In fact, it is conventional to divide LGBTQ history into “before Stonewall” and “after Stonewall” periods—not just in the United States, but in Europe as well. British activists can join Stonewall UK, for example, while pride parades in Germany, Austria, and Switzerland are called “Christopher Street Day,” after the street in New York City on which the Stonewall Inn still sits.

But there were gay activists before that early morning of June 28, 1969, previous rebellions of LGBTQ people against police, earlier calls for “gay power,” and earlier riots. What was different about Stonewall was that gay activists around the country were prepared to commemorate it publicly. It was not the first rebellion, but it was the first to be called “the first,” and that act of naming mattered. Those nationally coordinated activist commemorations were evidence of an LGBTQ movement that had rapidly grown in strength during the 1960s, not a movement sparked by a single riot. The story of how this particular night and this particular bar came to signify global gay rebellion is a story of how collective memory works and how social movements organize to commemorate their gains.

The sociologists Elizabeth A. Armstrong and Suzanna M. Crage detail four previous police raids on gay bars in cities across the United States that prompted activist responses—and local gains—but that either faded from local memory, did not inspire commemorations that lasted, or did not motivate activists in other cities.

For example, San Francisco activists mobilized in response to police raids on gay bars in the early 1960s, which came to a head during a raid on a New Year’s Eve ball in 1965 that eventually brought down the police commissioner. This New Year’s Eve raid attracted wide media attention, garnered heterosexual support, and is credited with galvanizing local activists, but it was subsequently forgotten. In 1966, again in San Francisco, LGBTQ people rioted at Compton’s Cafeteria, smashing all the windows of a police car, setting fires, and picketing the restaurant for its collusion with police. The city’s gay establishment did not participate, however, and distanced itself from the transgender and street youths behind the “violent” protest and from their political organization, Vanguard.

San Francisco was not the only U.S. city with gay rights activists gaining strength. In Los Angeles, the first national gay rights organization, the Mattachine Society, was founded years earlier, in 1951, and spawned chapters in other cities around the country. Bar raids in late-1960s Los Angeles also prompted resistance. The 1967 police raid on the Black Cat bar, for instance, led to a demonstration 400 people strong that garnered evening news coverage. That demonstration played a role in the founding of the leading national LGBTQ magazine, The Advocate. While the Black Cat demonstration garnered support from heterosexual activists for Chicano and Black civil rights, no further coordination occurred, and the event was not commemorated. When police again descended on the L.A. nightclub The Patch, patrons struck back immediately, marching to city hall to lay flowers and singing civil rights anthem “We Shall Overcome.” But its anniversary passed without remembrance. Los Angeles activists did organize a one-year vigil on the anniversary of the night the L.A. police beat a gay man to death in front of the Dover Hotel, but this 120-person-strong rally and march to the police station did not inspire activists in other cities. Subsequent demonstrations were subsumed by the Stonewall commemorations.

Activists were busy on the East Coast before Stonewall, too. In Washington, D.C., LGBTQ veterans chose the Pentagon as their place to picket, making it onto national television with signs reading, “Homosexual citizens want to serve their country too.” Subsequent demonstrations targeted the White House and the offices of federal agencies. New York City’s Mattachine Society secured legal gains in 1966 when they organized a “sip-in” at the bar Julius’, winning the right of homosexuals to gather in public. None of these actions inspired commemoration, locally or in other cities, however, leading scholars who study these pre-Stonewall protests to ask: Why not?

Four members of the Mattachine Society at a “sip-in” in 1966, demanding to be served at Julius’s Bar in Greenwich Village © Estate of Fred W. McDarrah via The National Portrait Gallery

There was an annual demonstration for gay civil rights before Stonewall, however, and it provides the best example of how gay politics were growing and changing before the riots. Beginning in 1965, Philadelphia LGBTQ activists held an annual picket of Independence Hall on the Fourth of July to protest state treatment of homosexuals. Soberly dressed men and women with carefully worded signs walked solemnly in front of this iconic building where the Declaration of Independence and U.S. Constitution were debated and signed. These “Annual Reminders” were the result of coordination by activists in New York, Washington, and Philadelphia, evidence of burgeoning regional cooperation by gay rights activists in the 1960s. Yet these somber events unraveled in the week after Stonewall, and Philadelphia activists voted later in 1969 to shift the 1970 commemoration from a picket of Independence Hall to a parade in the streets on the Stonewall anniversary.

Gay politics had become more radical in the late 1960s, owing to the influence of the Black power movement, second-wave feminism, and the protests against the Vietnam war. Radical organizations advocating “gay power” had already sprung up in the 1960s, including in Greenwich Village, where the Stonewall Inn was located. These new activists stereotyped the actions of their forebears as conservative, erasing their contributions from a history that now was credited solely to Stonewall.

What was different about Stonewall was that organizers decided to commemorate it, and to make it a national event. At a meeting in November of 1969, regional activists broke with the respectable image of the Philadelphia “Annual Reminder” and vowed to secure a parade permit on the anniversary of the raid on the Stonewall Inn, calling it Christopher Street Liberation Day. These organizers reached out to groups in Chicago and Los Angeles who readily agreed to remember something that happened elsewhere, in part because it was one of the few acts of LGBTQ resistance to get widespread media coverage, including in national LGBTQ publications and the New York Times.

This media coverage was itself the product of previous ties between local LGBTQ activists and journalists—and the fact that the Stonewall Inn was so near to the offices of the Village Voice. Interestingly, San Francisco’s activists declined to participate because they had already made inroads with local politicians and clergy. As one member explained, “I did not think a riot should be memorialized.” Only a small breakaway group participated, to little local effect, in a city that today hosts one of the largest gay pride parades in the country. These coordinated marches in Los Angeles, New York, and Chicago in 1970 were the first gay pride parades, and sparked an idea that spread around the country—to 116 cities in the United States and 30 countries around the world.

It was this national act of commemoration that represented a truly new political phenomenon, not the riot itself. As Armstrong and Crage have written, “without the existence of homophile organizations elsewhere, many of them founded only in the late 1960s, a national event would have been unthinkable.” Stonewall was an “achievement of gay liberation,” and not its cause, and an achievement of collective memory and collective action, if not the first LGBTQ riot or protest.

It is notable that this achievement took the form of a joyful parade, rather than a somber picket like Philadelphia’s Annual Reminder. As the sociologist Katherine McFarland Bruce describes in her detailed history of pride parades in the United States, “planners settled on a parade format as the best way to accommodate diverse members and to produce the positive emotional experience that brought people together.” As early organizers noted, “a fun parade brings out more people than an angry march.” Unlike the Annual Reminder, which addressed the state in asserting the similarity of homosexuals with heterosexual citizens, parade participants celebrated their differences and aimed to change minds, not laws.

An LGBTQ parade through New York City on Christopher Street Gay Liberation Day, 1971. Getty

There were unique characteristics of Stonewall, of course. In his detailed history of the bar and those nights, the historian David Carter lists many: It was the only bar raid that prompted multiple nights of riots; it was the only raid that occurred in a neighborhood populated by lots of other LGBTQ people who might participate; and the bar was the city’s largest, located in a transportation hub surrounded by many public telephones that were used to alert media.

But Carter also notes that the riots were not inevitable, and were just a turning point in the United States’ burgeoning gay rights movement. New York City already had many gay activists “with the specialized skills to take on leadership roles to help shape and direct the event,” for example. He also gives special credit to the fact that several of the riots, including Stonewall and the Compton’s Cafeteria riots in San Francisco, occurred during police raids right after a period of liberalization. In San Francisco, Compton’s clientele only fought back after gaining hope from the city’s pre-Stonewall municipal liberalization towards homosexuality. In New York City (where the Stonewall riot took place), the police raid seemed out of step with the liberal administration of mayor John Lindsay. As Carter summarizes, “revolutions tend to happen after periods of liberalization.”

As activists commemorate the Stonewall Riots in 2019, perhaps they should also lay plans for next year, to remember the fiftieth anniversary of the first gay pride parade in 2020. The nation finds itself again in an era of retrenchment after the liberalization of the Obama era. The year 1970 thus deserves to be remembered as the first national act of LGBTQ remembrance, if not the first act of LGBTQ resistance.

The post The Stonewall Riots Didn’t Start the Gay Rights Movement appeared first on JSTOR Daily.

When Foster Care Meant Farm Labor

By Livia Gershon

In an effort to support children in foster care, Sesame Street has introduced Karli, a muppet living with “for now” parents until she can go back to her mother. Foster families, who get a small payment to offset the cost of caring for children, have been a central part of child welfare programs for the past century. Before that, historian Megan Birk writes, Americans depended on farmers to take care of kids in exchange for hard labor.

In the years after the Civil War, state and charity welfare workers commonly “placed out” children who they identified as “destitute or neglected,” often sending them to work on family farms in the Midwest. Tens of thousands of children from outside the region arrived, sometimes on “orphan trains” from the urban east.

Authorities viewed the farm placements as a win for everyone involved. Farmers got affordable labor. Governments and philanthropic organizations were relieved of the expense of running orphanages. And children got the chance to learn valuable work skills while living in a rural setting, widely seen as the ideal place for an American upbringing.

Birk writes that in some cases, the children were virtually a part of the families that took them in. If they worked hard on the farms, that was no different from what farmers expected of their own kids. But some farmers exploited the child laborers, beating them, denying them schooling or medical care, and sometimes overworking them for a season before sending them back to an institution.

The welfare organizations in charge of the programs often had little capacity to monitor the children’s situations. Some relied on volunteers who might or might not actually bother checking in on the families.

Birk writes that by the 1880s, charity workers were calling attention to these problems and seeking reforms. Over the next few decades, supervision did improve, with states putting resources into professional supervision of the placements.

At the same time, though, farm life was changing. Many people were leaving the countryside for growing cities, and farmers increasingly operated as tenants rather than owning their own land. In this climate, the skills learned working on a farm seemed less relevant to children’s success in the future. And doing hard labor and getting by without modern conveniences started to seem like inadequate training for a child who might hope to join the growing ranks of white-collar workers. One children’s advocate suggested that the best candidate for a farm placement was “a rather dull and backward but strong and good natured boy” who wouldn’t benefit much from a modern education anyway.

By the Progressive Era, Birk writes, child labor was widely seen as objectionable, and middle-class families led a shift toward seeing children as “priceless.” Rather than expecting children to work for their keep, welfare agencies increasingly paid urban or suburban foster families to take them in. The system represented by Sesame Street’s Karli had arrived.

The post When Foster Care Meant Farm Labor appeared first on JSTOR Daily.

How War Revolutionized Ireland’s Linen Industry

By Matthew Wills

You probably know “Rosie the Riveter.” She’s the iconic incarnation of the women in the industrial workforce of World War II. With manpower siphoned off to war, womanpower was called upon to work in factories in the kinds of jobs few women had seen before.

But WWII was not the first time a deficit of male laborers opened doors traditionally closed to women. War can be a radical force, a great overturner of traditions like the sexual division of labor. As Anne McKernan tells us, something similar happened during the Napoleonic Wars, particularly in the linen industry in Ulster.

From 1803 to 1815, the United Kingdom of Great Britain and Ireland was at war with France. With 300,000 men in the army and another 140,000 in the Royal Navy, manpower was absent from the homefront, just as demand for everything from food to clothing rose. War devoured textiles like linen, which was used for canvas, duck, and sailcloth. Linen merchants turned to women to maintain and increase production.

The province of Ulster is now split between the Republic of Ireland and Northern Ireland. Before 1800, Ulster had a strong tradition of linen production by farmers who spun and wove the flax they themselves grew. (Irish linens are still famous.) The women of farming families spun flax fibers on spinning wheels, while the men wove the resulting thread into linen cloth on their own looms.

[War] presented Irish entrepreneurs with a golden opportunity to snap the link between gender and commercial linen weaving; snapping that link, in turn, prepared the way for snapping the link between farming and weaving, the bi-occupations of rural Ulster households. War-time innovations in the linen industry subsequently turned independent farmer-weavers into rural proletarian weavers.

Mechanization began to replace hand-spinning through the first decade of the nineteenth century. But spinsters (only later did the word come to mean unmarried women) could earn three times as much weaving. Still, the merchants of the Irish Linen Board had to overcome traditional gender divisions in this home-work system. And, to keep up production, they needed to do it fast.

The innovators believed time was of the essence. They could not wait until women found their way into the labor pool of weavers. Increased supplies of coarse yarn from mechanical spinning would create greater demand for weavers at a time when the labor supply was contracting in the face of increasing demand from the agricultural sector.

McKernan reveals how the Irish Linen Board recruited female weavers. For one thing, there were the higher wages. But there were also incentives: cash prizes for the first 200 yards and premiums for cloth with higher thread counts. The newest loom technology, which could double the volume of cloth a worker could produce, was distributed for free. This wasn’t altruism: linen merchants demanded the right to inspect homes with the new looms, which were no longer the workers’ possessions. “If the inspector found ‘obstinacy in attention or idleness be persevered in,’ he had discretion to ‘remove the loom.’”

Nevertheless, “young women responded enthusiastically to weaving.” From 1806 to 1809, over 1,800 free looms were distributed. One six-month period saw 300 women claiming prizes, which cleaned out the prize fund. “Within a short time, female weavers took on apprentices. Besides providing substitute workers for male weavers engaged in war, the new weavers would prove to have long term consequences on the direction of Irish linen industry.”

Napoleon was finally defeated in 1815. Unlike after WWII, however, the women were not thrown out of the industry when soldiers returned to civilian life. The market was too hot, even with the massive drop in war demand. By 1851, at least a third of Irish linen weavers were women. Even more worked in cotton weaving. The linen cloth market simply demanded large numbers of weavers. “Commercial interests,” writes McKernan, “had no incentive to exclude” women from the industry.

The post How War Revolutionized Ireland’s Linen Industry appeared first on JSTOR Daily.

Frank Lloyd Wright’s Fraught Attempt at Mass Production

By Allison Meier

In the 1950s, after creating some of the most visionary architecture of the twentieth century, Frank Lloyd Wright went where he had never gone before: commercial homewares. Every building he’d designed, from houses to hotels, was tailor-made to its environment, from the structure to the materials. But these new lines of wallpapers, textiles, and other wares would be a major shift in a practice that had long avoided mass production.

“Of course, the most obvious and immediate question was how Wright, whose contempt for commercialism was legendary, became involved in such an undertaking,” writes Christa C. Mayer Thurman, curator of textiles at the Art Institute of Chicago, in the Art Institute of Chicago Museum Studies.

The project coincided with the desire of a number of Wright’s supporters, among them Elizabeth Gordon, editor of the leading interior-decorating magazine House Beautiful, to revive the octogenarian architect’s reputation. Toward this end, she determined to dedicate an entire issue of the magazine to an overview of his work in the fall of 1955.

The creation of what would be known as the Taliesin Line was not a smooth process. As Thurman explores in her research, Gordon reached out to René Carrillo, then director of merchandising at the textile firm F. Schumacher and Co. The first meeting between Carrillo and Wright was caustic, starting with Wright proclaiming his dislike of interior decorators, calling them “inferior desecrators.”

Thurman describes how when Carrillo reminded Wright that he had already designed furniture for a number of his houses, “the architect admitted that he ‘was a terrible furniture designer and that he had never designed a comfortable chair and that he had become black and blue from sitting in his own furniture.’” Still, Carrillo convinced Wright, in the end, to sign on to the Taliesin Line (named for Wright’s home, studio, and school in Wisconsin) with Schumacher in 1954. The folio for the line ultimately included 137 printed and woven samples and twenty-six wallpaper samples, each accompanied by context on what inspired it. “Design 104” was based on floor plans of the spherical homes Wright designed for his sons Robert Llewellyn and David, while “Design 103” had rectangular patterns derived from the windows on Wright’s Carlson House in Phoenix.

The work debuted in the fall of 1955, but not before another Wright outburst. As Thurman writes:

Upon seeing the display rooms with textiles, wallpapers, furniture, and paints bearing his name, Wright exploded: “My God! An inferior desecrator! I won’t permit my name to be used by a decorator. I will have no part of this. You must take off my name. I will tell the world I have been misused.”

But the House Beautiful issue was already on newsstands, and the designs were soon available for purchase. Wright got over his initial fury and made a few more patterns for the Taliesin Line, with one added in 1960 just after his death.

The distribution of “Schumacher’s Taliesin Line of Decorative Fabrics and Wallpaper” was limited, as was the furniture line Wright worked on for Heritage-Henredon to complement these works. “These uniquely modern furnishings were priced to be accessible to the average consumer,” writes Amelia Peck, curator of American decorative arts, in The Metropolitan Museum of Art Bulletin. “However, most ‘average consumers’ were not familiar with Wright’s design vocabulary and did not respond favorably to patterns that seemed radical for the time. Neither the furniture nor the fabric and wallpaper were commercial successes.”

The folio and the fabrics are now rare, although the Met started collecting them in the 1970s. The Antonio Ratti Textile Center at the Met Fifth Avenue in New York is exhibiting these pieces in Frank Lloyd Wright Textiles: The Taliesin Line, 1955–60. The compact exhibition features nine examples of the textiles, as well as the sample book for the Taliesin Line, which is now fully digitized online. Schumacher also released an updated version of the Taliesin Line in 2017.

There’s no shortage of mass-produced homewares and accessories inspired by Frank Lloyd Wright in museum gift shops around the country. Wright quite possibly would have been piqued by such commercialization of his visions. Yet as Thurman observes, “this widespread enthusiasm for Wrightian design indicates how pervasive the architect’s ideas have become and the degree to which his work remains vital today.”

The post Frank Lloyd Wright’s Fraught Attempt at Mass Production appeared first on JSTOR Daily.

The Weather Forecast That Saved D-Day

By Matthew Wills

On June 6th, 1944, Operation Overlord began on the Normandy coast. It was the beginning of the liberation of Nazi-occupied France. More than a dozen nations contributed soldiers, the great majority from America, Britain, and Canada. The ultimate goal was the defeat of Nazism.

Most Allied invasions and landings in World War II began with a “D-Day.” The D stood, in military redundancy, for “day”—the day-day of the invasion. But this D-Day was different, an unparalleled undertaking involving more than two million Allied personnel. Of course, those involved in Overlord couldn’t know if the invasion would be a success. It all came down to the weather, the tides, and the amount of moonlight. Low tide, small waves, and a moonlit night were essential.

As geographer Mildred Berman explains, weather forecasting was fairly rudimentary in those days. There were no satellites or supercomputers. “One or two days of continuously overcast sky with low clouds could bring the entire undertaking to a halt,” she writes, quoting a general who said weather “would remain as constant an enemy as the Germans.”

Berman writes,

The success or failure of the grand design would be controlled by a combination of elements, not only the moon and its effect on the tides but also sea swells, breaking surf, beach surfaces, and visibility in the air and on the ground…All the desired conditions for tides occurred on only six days a month, but optimum conditions for tides and moonlight occurred on only three days a month, which in June 1944 were the 5th, 6th, and 7th.

Cancelling these dates would mean a delay of two weeks at least, something logistics, secrecy, and hundreds of thousands of troops in the south of England could hardly bear. June 5th was the initial choice, but the 4th saw December-like high winds, five-foot waves, and four-foot surf. Planning shifted to June 6th.

American assault troops at Omaha Beach via Wikimedia Commons

Berman recreates the frantic back-and-forth between weathermen and military brass. At 3:30 a.m. on June 5th, “winds were almost hurricane strength, rain was driving in horizontal sheets, and roads to Allied headquarters in Portsmouth were morasses of mud.” Meteorologists predicted that the next morning, however, would see a brief break in the miserable weather, aligning with the optimal moon and tides. So just after midnight, on June 6th, airplanes with 18,000 airborne troops started their motors.

Berman comments on the vital weather-data gap:

German meteorologists had failed to predict the break in the weather that prompted the Allied decision, because they simply did not have sufficient data. Due to high winds and heavy overcast on 5 June, German naval patrols were canceled, mine layers restricted to port, and the Luftwaffe was grounded.

Field Marshal Erwin Rommel thought an invasion so unlikely in the bad weather that he returned to Germany for his wife’s birthday… and to ask for more tanks for what he considered a woefully under-defended Atlantic coast.

In 2018, oceanographer John Vassie and engineer Byung Ho Choi ran a “hindcast simulation” using all the modern tools of weather forecasting and “coupled tide-wave-storm surge physics” to see how good the original forecast was. They found “the tidal prediction at invasion at Omaha Beach [one of the five targeted beaches] was reasonably accurate.”

Back in 1944, however, German commanders were convinced the Normandy landings were actually a diversion for a main assault at Pas de Calais, the narrowest part of the Channel. They should have been solving the Daily Telegraph’s crossword puzzle, which in the weeks prior to D-Day had answers that were the codewords for the operation.

The Allies suffered at least 10,000 casualties during the initial landings. There would be many more before Germany’s unconditional surrender in May of 1945.

The post The Weather Forecast That Saved D-Day appeared first on JSTOR Daily.

Navy Seals: Why the Military Uses Marine Mammals

By Farah Mohammed

In late April 2019, a friendly beluga whale surfaced off the shores of Norway and began following fishermen’s boats, tugging on loose straps for attention, seemingly asking for food.

But that’s not what grabbed world headlines. What caught global attention was the harness strapped to the cetacean’s back, meant to carry a camera, leading some to theorize that the beluga was a stray Russian spy. (After much speculation, most outlets are reporting the whale was likely trained as a therapy animal.)

Bizarre as it sounds, the military use of marine mammals is in fact common, even in the U.S. According to science writer Ceiridwen Terrill, marine mammals have been deemed “invaluable components” of the defense force.

During the Cold War, the Soviet military was said to have parachuted military-trained dolphins from heights of nearly two miles. Dolphins were also used in the study and design of underwater torpedoes. A pod of sea lions, known as Mk 5 Mod 1, was trained to retrieve explosives. In the U.S.’s 1969 Project Deep Ops, two killer whales and a pilot whale—Ahab, Ishmael, and Morgan—were trained to retrieve objects lost in the ocean, in conditions too deep for human divers to reach and too rough for machines to handle.

Although stories like these capture the public’s imagination, the reality of the life of an animal in the armed forces is rather dark. As it is with humans, military training for animals is physically and psychologically punishing, and carries the inherent risk of casualties. Moreover, there isn’t the same level of oversight for animals in the military.

Unexplained deaths are a problem. Terrill writes,

the Marine Fisheries Service and Navy necropsy reports show that the Navy collected 146 dolphins of four species since 1962. Of this 146, 60 were still in service, 11 were transferred to private facilities, 5 had escaped, and 55 died. However, Fisheries Service records were incomplete; not all Navy dolphin necropsy reports were filed, not all dolphin deaths reported.

According to Terrill, abuse against animal trainees is another problem. She pinpoints common practices such as throwing fish outside pens during training (“baiting”), or denying food as a way to make animals more cooperative (“axing”).

The issue is complex. According to Terrill, some argue that after years, the military training pens are the only home the animals know, and attempting to free them or re-introduce them to the wild would be cruel. This is an idea that the animals’ behavior both supports and contradicts: some escape given the opportunity, while others, even in open water, will linger and wait for their trainers to arrive.

Today, dolphins, sea lions, and whales are still used to track and retrieve objects, as their natural senses are superior to technology in rough weather and noisy areas.

The post Navy Seals: Why the Military Uses Marine Mammals appeared first on JSTOR Daily.

The Dangerous Game of Croquet

By Livia Gershon

Croquet arrived in the U.S. from England during the Civil War. It immediately became hugely popular and was hailed as a genteel, refined activity appropriate for groups of mixed ages and genders. But, as historian Jon Sterngass writes, to some critics it represented a danger to female morality.

Today, we might expect criticism of croquet to run along the lines expressed by Mark Twain, who called the game “ineffably insipid.” But Sterngass writes that many observers were disturbed by the way women shortened their dresses to play more comfortably, and the way young people took the co-ed sport as an opportunity to flirt. One magazine described the game as a “source of slumbering depravity, a veritable Frankenstein monster of recreation” and suggested that “it would be well if the enthusiasm of the clergy and laity were enlisted for suppressing the immoral practice of croquet.”

Croquet apparently had the potential to stir up not just lust but also the other deadly sins of anger and envy. Milton Bradley’s patented croquet set came with this advice to beginners: “KEEP YOUR TEMPER, and remember when your turn comes.” The International Herald Tribune reported that a woman testified during a separation hearing that her husband refused to speak to her for days after she questioned whether his ball had really gone through the hoop. The judge responded, “I do not think there is any game which is so liable to put one out of humour as croquet.”

Sterngass writes that the combination of co-ed play and intense competition challenged Victorian ideas about benevolent, moral womanhood. The game’s popularity also challenged male superiority in competitive endeavors. It seems that women frequently bested their male companions and were often—rightly or wrongly—accused of cheating. Men complained about women using illegal techniques like the “push shot,” or even using their dresses to conceal the ball while shuffling it along the lawn. An 1865 croquet manual complained that “We are aware that young ladies are proverbially fond of cheating at this game; but as they only do it because ‘it is such fun,’ and also because they think that men like it…” Sterngass notes that this kind of comment doesn’t actually explain the behavior, however: female players knew perfectly well that men didn’t appreciate being cheated, since men complained about it constantly.

A more shocking violation of Victorian propriety emerged from a variation of the game called “tight croquet,” in which players could put their ball next to their opponent’s, plant their foot on their own ball, and smash it with the mallet to send the other ball flying. The titillating overtones were teased out in the caption of a Punch cartoon featuring a woman revealing a bit of ankle while performing the maneuver: “Fixing her eyes on his, and placing her pretty little foot on the ball, she said, ‘Now then, I am going to croquet you!’ and croquet’d he was completely.”

The post The Dangerous Game of Croquet appeared first on JSTOR Daily.

The Prince of Quacks (and How He Captivated London)

By Benjamin Winterhalter

Let me set the scene: In late eighteenth-century England, ladies and gentlemen flocked to exhibitions of solar microscopes. The miniature world of mites and polyps was blown up and cast on the wall like a magic lantern show. The ladies and gents might have witnessed Sir William Hamilton’s Vesuvian Apparatus, with its simulated flowing lava and drumbeat explosions.

It was the age of scientific entertainment, the rationalist ideals of the Enlightenment colliding with the passion of the Romantic spirit. These exhibitions demonstrated scientific control of nature at the same time that they overawed their viewers with the sublime grandeur of natural forces at play.

Electricity, newly harnessed, was “the youngest daughter of the sciences.” High-society aristocrats held electrical soirées in which they shocked one another with electrified kisses. They marveled at green and blue luminescence swirling in aurora flasks. King Louis XV even employed a court electrician, the Abbé Jean-Antoine Nollet, who devised spectacular electrical experiments for the entertainment of the court. On one occasion, the electrician lined up a row of several hundred monks and ran a current through them, sending them all leaping into the air at the same moment. To the witnesses of these spectacles, electricity was something almost miraculous, an “ethereal fire” that connected the roars and flashes of the heavens to the inner workings of the human body.

Enter James Graham, a man whom the British Medical Journal described as “one of the vilest imposters in the history of quackery.” Handsome and dapper, Graham benefitted from his undeniable flair for showmanship and his talent for leaping on trends. In 1780, he opened his “Temple of Health,” a medical establishment on fashionable Pall Mall in London. It was a lavish home for Graham’s much-touted “medico-electrical apparatus,” which he proudly proclaimed the “largest, most stupendous, most useful, and most magnificent, that now is, or ever was, in the World.”

The entrance hall to the Temple was scattered with discarded walking sticks, ear trumpets, eyeglasses, and crutches, supposedly cast away in fits of exuberant health by the beneficiaries of Graham’s cures. Perfume and music, played by a band concealed under the stairs, wafted on the air. Guests were paraded past one electrical marvel after another: flint glass jars blazing with captive sparks, gilded dragons breathing electrical fire, a gilded throne on which Graham’s patients sat to receive their curative shocks. Among the jars and columns Emma Hart (later to become famous as Lord Nelson’s mistress) posed in gauzy Grecian robes as Hebe, the Goddess of Youth.

Some came to be treated, some simply to gawp. Many came to hear Graham’s lectures, which were famously titillating. At the end of each lecture, the guests were shocked (literally) by conductors concealed in their seats. Then, a gigantic, gaunt “spirit” would emerge from a hidden trapdoor, bearing a bottle of “aetherial balsam” to be distributed to the guests.

A depiction of James Graham’s Celestial Bed (Getty)

The most infamous feature of the Temple of Health was Graham’s Celestial Bed. Borne up on forty pillars of colored glass, perfumed with flowers and spices, and humming with vivifying electricity, the bed (according to Graham) guaranteed conception. “Any gentleman and his lady desirous of progeny, and wishing to spend an evening in this Celestial apartment… may, by a compliment of a fifty pound bank note be permitted to partake of the heavenly joys it affords,” Graham wrote. “The barren certainly must become fruitful when they are powerfully agitated in the delight.” The bed featured an adjustable frame, so that it could be set at various angles.

With his devices and his medicines, Graham claimed to have “an absolute command over the health, functions and diseases of the human body.” He confidently predicted that he would live to at least 150, healthy all the while. The English poet laureate Robert Southey, who met James Graham, described him as “half knave, half enthusiast.” That is, he bought into his own hype, but he was a con man all the same.

James Graham may not have been much of a doctor, but he was a master of aesthetics. His Temple of Health was, ultimately, a performance, a stage on which he enacted his cosmic drama: the fire of the heavens, carried down to Earth for the benefit of humankind. His cures may not have worked, per se, but they worked for his audience because they fit precisely into their worldview. Graham wrote that, when confronted with the grandest apartment in his Temple, “words can convey no adequate idea of the astonishment and awful sublimity which seizes the mind of every spectator.” To have lightning coursing through your body may be the most literal possible version of the Romantic encounter with nature’s terror and majesty.

The Temple of Health didn’t last long, however. By 1782, Graham was bankrupt. He was forced to sell off the Temple’s lavish accoutrements, including the Celestial Bed. It didn’t take him long to develop a new angle. By 1790, he was promoting a new theory that the human body could absorb all the nutrients it needed through contact with soil. Thenceforth, he delivered his lectures buried up to the neck in dirt.

The post The Prince of Quacks (and How He Captivated London) appeared first on JSTOR Daily.

A Mini History of the Tiny Purse

By Catherine Halley

Blame the Balenciaga IKEA bag. When the $2,145 luxury lambskin version of the familiar blue plastic shopping bag appeared on the runway in June 2016, it was the beginning of the end of a glorious era of capacious hobo bags, boat totes, and bucket bags. The upscale counterfeit triggered a backlash against fashion’s flirtation with so-called poverty chic, but also against gigantic bags in general. From a 19-gallon capacity, there was nowhere to go but down.

Even Meghan Markle—whose first official public appearance with Prince Harry spiked sales of Everlane’s roomy (and relatively affordable) leather Day Market Tote—took up her duchess duties and swapped commoners’ carryalls for dainty, handled purses by high-end labels, including Gabriela Hearst, DeMellier, and Strathberry. Cue sellouts, waitlists, and crashed websites: Tiny purses were officially in.

The petite purse trend reached its nadir in February 2019, when French label Jacquemus debuted the Mini Le Chiquito, a postage-stamp-sized version of its bestselling handbag. Barely big enough to hold a couple of breath mints, the teeny-weeny tote got big LOLs on social media, where it drew comparisons to binder clips and Barbie accessories. Jacquemus was in on the joke; the bag was made for the runway and not for sale. Nevertheless, it inspired imitations. Louis Vuitton and Prada soon introduced their own nano-bags.

Does purse size matter? For women, the purse has always been political, a reflection of changing economic realities and gender roles. While a large bag—however ugly or expensive—will always have a certain utilitarian value, small bags have historically been mocked and derided—and their female wearers with them.

Until the late eighteenth century, purses were small, unisex accessories, used to hold money and nothing else; they had more in common with wallets than handbags. They might be worn tucked in a pocket or dangling from a belt. At the court of Versailles, round-bottomed drawstring bags—often made of velvet and intricately embroidered with the owner’s coat of arms—held one’s gambling winnings or charitable donations. According to Miss Abigail Adams, who attended mass in the Royal Chapel on Pentecost in 1785, “the lady who goes round to collect the [alms] in a small velvet purse… was more elegantly dressed than any other person. After the king had entered, she went round to the knights, and with a courtesey the most graceful, presented her little purse to each. I am sure no one could have refused putting a louis in.”

A French bag made of horsehair and silk, 1865 (Artstor)
A silk pair of British pockets from the early 18th century (Artstor)
A French gaming purse from the late 17th century (Artstor)
A Japanese Inrō with Rinpa Style Kanzan and Jittoku from the first half of the 19th century (Artstor)
A European beaded coin purse, 1780–1810 (Artstor)
An Incan feathered bag from the 15th–early 16th century (Artstor)
An Italian stamped leather bag from the first quarter of the 19th century (Artstor)
A knitted American coin purse, 1830–50 (Artstor)

Women didn’t need to carry anything but cash in their purses at the time, because their wide hoop petticoats allowed space for roomy pockets. While men had pockets of various sizes sewn in their coats, waistcoats, and breeches, sometimes including long “bottle pockets” concealed in coattails, women’s pockets were separate garments, worn on a ribbon around the waist, under the gown, and accessed through slits in the gown’s seams. According to James Henry Leigh Hunt, a lady’s pocket might hold her purse as well as other essentials, including “a pocket-book, a bunch of keys, a needle-case, a spectacle-case, . . . a smelling-bottle, and, according to the season, an orange or apple.” It might be bought, lost, or stolen, as in the children’s rhyme beginning: “Lucy Locket lost her pocket.” Its closest contemporary equivalent is not the stylish purse, but the serviceable fanny pack. The enormous muffs popular with both sexes in the 1780s also served as carrying cases for tobacco, sweets, handkerchiefs, and occasionally even a small dog. From the time of the handbag’s introduction in the late eighteenth century, its size would rise and fall in tandem with the volume of women’s pockets and muffs.

Larger purses did exist in the eighteenth century, but they were “workbags,” used for transporting sewing, embroidery, or knotting instruments and materials. Workbags may have suggested or served other uses, however. In 1769, Lady Mary Coke saw ladies knotting at the opera and admitted: “I never knott, but the bag is convenient for one’s gloves and Fan.” The workbag—often a beautifully embroidered work of art in its own right—advertised one’s feminine accomplishments and industry rather than one’s fashion sense.

A lantern-like British reticule with ribbon trim from the first quarter of the 19th century (Artstor)

The dramatic change in women’s fashion in the late 1780s—accelerated by the French Revolution of 1789—put an end to the pocket. Bulky underpinnings would have ruined the slim line of the columnar white gowns of the Directoire and Empire, which emulated the diaphanous draperies of classical statuary. Small, handheld purses called “reticules”—often decorated with tassels, fringe, or embroidery— became essential accessories. Women wore “a more or less ornamental bag with each gown, some being fastened to the waist, others suspended by long ribbons from the arm.” In addition to providing much-needed storage, reticules enlivened the simple, high-waisted silhouette while calling attention to newly bared arms and graceful hands. At the same time, women began to wear drawers or underpants, because their gowns were so body-conscious and transparent, and cashmere shawls, for warmth.

The term “reticule” comes from reticulum, the Latin word for “net.” Many of these early reticules were netted, and netting purses became a popular female pastime, as Mr. Bingley noted in Jane Austen’s Pride and Prejudice. Reticules might also be made of fabric, embroidered or ornamented according to the latest trends, providing a conspicuous and relatively affordable way for women to follow fashion. During Napoleon’s campaigns, reticules mimicked flat military sabretaches or sported sphinxes and portraits of Bonaparte himself. In December 1801, the nomenclature was so new that Catherine Wilmot, an English tourist in Paris, felt it necessary to define it in a letter, making reference to the earlier and more familiar form of handbag, the workbag:

We have not seen Bonaparte yet, except adorning ‘Reticules’ (which are a species of little Workbag worn by the Ladies, containing snuff-boxes, Billet-doux, Purses, Handkerchiefs, Fans, Prayer-Books, Bon-bons, Visiting tickets, and all the machinery of existence).

Napoleon’s wife, Joséphine, was instrumental in popularizing the new style of dress, with all its attendant accessories.

Reticules fully embroidered with glass beads resembled miniature mosaics. These beaded bags were:

…marvels of patience and eyesight. Tiny, almost imperceptible beads of every hue and shade were woven or knitted into a firm textile, that has outlasted the memory of those who made and used these gorgeous receptacles. Pastoral scenes and quaintly costumed figures were wrought with a fidelity to detail that is marvelous.

Several examples of these sturdy reticules survive in museum collections to this day.

An American bag made of linen and glass beads, 1838 (Artstor)

Satirists dubbed the new must-have accessory the “ridicule,” because it was so small and insubstantial as to be virtually useless. In Austen’s Emma, the ridiculous Mrs. Elton carries a “purple and gold ridicule.” George Cruikshank caricatured the reticule in his Monstrosities of 1822, depicting fashionably dressed strollers in Hyde Park. Even fashion magazines adopted the pejorative term: the February 1804 issue of the Lady’s Monthly Museum included a fashion plate captioned: “A Kerseymere Spencer… with Tippet. Purple ridicule.”

But reticules had another, more complimentary nickname: “indispensables.” On September 9, 1802, Eliza Southgate of Massachusetts wrote to her mother that a friend visiting Paris had “sent me a most elegant indispensable, white lutestring, spangled with silver.” And, during Lord Melville’s impeachment trial in London in 1806, Charles Kirkpatrick Sharpe observed that “rows of pretty peeresses… sat eating sandwiches from silk indispensables.” These contrasting depictions of the reticule—ridiculous or indispensable, frivolous luxury or practical necessity—capture the cultural ambivalence surrounding the new fashion, and fashionable clothing in general, in the new post-Revolutionary political and economic climate.

However, contemporary etiquette guides suggest that reticules were not so “indispensable” that women gave up their pockets entirely. Nostalgia for the commodious and concealed pocket remained strong. Theresa Tidy’s 1819 advice manual Eighteen Maxims of Neatness and Order advised: “Never sally forth from your own room in the morning without that old-fashioned article of dress—a pocket. Discard forever that modern invention called a ridicule (properly reticule).” By 1890, the magazine The Decorator and Furnisher lamented “the scarcity of pockets in women’s attire” which necessitated “the survival of the old fashion of carrying bags and satchels.” The purse, not the pocket, was seen as the temporary interloper. But recurring predictions of the purse’s death proved to be unfounded; it had become so fashionable that its function (or lack thereof) was irrelevant.

Today, when women’s pants, dresses, and even wedding gowns are frequently equipped with spacious pockets, a tiny purse might well be enough to hold any other essentials. It’s a fashion statement, to be sure, but it also makes other kinds of statements, signaling a minimalist lifestyle, a low-maintenance personality, or, perhaps, an entourage of PAs, stylists, and servants who handle life’s baggage. The nano bag’s days may be numbered, though. With the erstwhile Meghan Markle—now Duchess of Sussex—having just had a royal baby, don’t be surprised if roomy diaper bags are the new black.

The post A Mini History of the Tiny Purse appeared first on JSTOR Daily.
