
Whistleblowing: A Primer

By Matthew Wills

In 1778, the Continental Congress decreed that it was “the duty of all persons in the service of the United States … to give the earliest information to Congress or any proper authority of any misconduct, frauds or misdemeanors by any officers or persons in the service of these states.”

This “founding” attitude has fared… rather ambiguously ever since. As law professor Shawn Marie Boyne shows in her review of the legal protections for whistleblowers in government and industry, “the country’s treatment of whistleblowers has been a conflicted one.” Regardless of the organizational model (public, private, non-profit), those in power who have had the whistle blown on them rarely applaud it. Heroes to some, whistleblowers are often labeled traitors by those in power, as in the cases of Boyne’s examples, Edward Snowden and Chelsea Manning.

“The question of whether a whistleblower will be protected or pilloried depends on the interests of those in power,” Boyne writes. Leaks to the media from officials for political advantage are standard operating procedure. But those outside this inner circle don’t fare as well: Snowden is in exile and Manning is in jail. Boyne notes that three NSA employees who did do what critics said Snowden and Manning should have done, that is, go through the system and use the proper channels to report government abuse, “found their lives destroyed and reputations tarnished.”

Retaliation against whistleblowers hit some of the pioneers, too, Boyne notes. Ernest Fitzgerald, who revealed billions of dollars in cost overruns in a military transport program in 1968, was demoted after President Richard Nixon told his supervisors to “get rid of the son of a bitch.”

That same president ordered a break-in at Daniel Ellsberg’s psychiatrist’s office in 1971, in hopes of finding dirt on Ellsberg. An analyst for the RAND Corporation, Ellsberg had released the Pentagon Papers to the New York Times. This classified historical study of the war in Vietnam revealed that the government had realized early on that the war could not be won. Defending his actions in 1971, Ellsberg said, “I felt that as an American citizen, as a responsible citizen, I could no longer cooperate in concealing this information from the American public.”

Retaliation against whistleblowers is, as scholar Michael T. Rehg and his co-authors show, quite gendered. “Male whistleblowers were treated differently depending on their power in the organization, but female whistleblowers received the same treatment regardless of the amount of organizational power they held: Their status as women overrode their status as powerful or less powerful organization members.” These authors also found that “women who reported wrongdoing that was serious or which harmed them directly were more likely to suffer retaliation, whereas men were not.”

While laws have been strengthened to help whistleblowers, presidents and CEOs nevertheless continue to go after them.

Membership, Citizenship, and Democracy

President Trump’s pernicious attacks on nonwhite immigrants have thrust a particular theory of political membership—white nationalism—to the forefront ...

Industrial London’s Maternal Child Abductors

By Livia Gershon

There may be no crime that horrifies the public more than child abduction. Historian Elizabeth Foyster writes that this was also true in London 200 years ago, though the crime then typically took a much different form than we expect today.

Historians generally agree that the late eighteenth century brought a major change in what English childhood meant. This included more positive attitudes toward kids, and a new wealth of books, toys, and clothes for middle-class urban children. Children were increasingly prized “for giving women a role as mothers,” and as “miniature models of all that a more affluent consumer society could afford,” Foyster writes.

If children were becoming more valuable, it stands to reason that, like all valuable things, they were in danger of being stolen. And, indeed, Foyster found 108 cases of child abduction tried in London and reported in the newspapers between 1790 and 1849.

Child abduction was nothing new, but it was understood differently than in previous times. In fourteenth-century England, “ravishment” covered both forced and consensual “abduction” of children or adult women. It typically had a sexual element, and the child victims were generally teenagers. Later, in the seventeenth century, abduction was understood as a fate befalling unfortunate boys forced into indentured servitude.

In contrast, in the period Foyster studied, the majority of stolen children were under six, and the abductor was usually a woman in her 20s or 30s. In some cases, kids were stolen for their clothes. Abductors might bring fancy children’s clothes to a pawnbroker, leaving a half-naked child outside. Other times, women reportedly stole children to gain sympathy when begging for money or seeking a job.

There were also well-off married women who stole—or paid someone else to steal—children they could present as their own. One 22-year-old wrote to her husband, serving in the Navy, about an invented pregnancy and childbirth. When she learned he was returning home, she travelled to London, snatched a four-year-old boy, and cared for him for two months before she was caught.

Foyster writes that news accounts paid little attention to possible harm done to the children. Unlike today, child abduction wasn’t generally assumed to be motivated by deviant sexual desire. Instead, newspapers focused on the terror and despair of mothers whose children were stolen, and suggested a parallel lack of feeling in the abductors.

A judge told one convicted child thief that, as a childless woman, she was “Ignorant of those heavenly feelings which subsist in the relation between parent and child; for had you been a mother, you must have respected and regarded, instead of agonizing a mother’s heart.” Still, Foyster writes, news reports also acknowledged that child-stealers might be motivated by a twisted “fondness” for children—reflecting their own stunted development.

Child-thieves clearly had no place in the growing public conception of natural motherly love. Yet the new understanding of children as valuable objects who gave meaning to women’s lives may have spurred the increase of child abduction.

Whose Life?

This Life: Secular Faith and Spiritual Freedom, by the philosopher Martin Hägglund, who teaches at Yale, is a book anyone committed to public-facing scholarship ought to take note of. This is all the ...

Homeland Insecurity and American Terrorism

By Timothy Burke

Let’s stop talking about guns and gun control.

Guns and gun control in America function as an intensifying signifier of cultural division. They amplify the idea that this historical moment pits rural, white, “traditional” communities against urban, diverse, educated ones. For the responsible rural gun owner who keeps a rifle as a tool, the homeowner who keeps a gun out of a belief in self-protection, and the collector who is simply interested in guns, treating the gun itself as the totemic object causing these shootings confirms an antagonism, a sociocultural distance, an us and them. And the more accurately we understand the cultural depth of America’s attraction to guns, the more helpless we feel, because deep cultural formations are exceptionally hard to change instrumentally from above.

Yes, gun ownership in this country is a fetish; yes, strictly controlling the availability of weapons that are intended to kill many people as fast as possible would be transformative. Yes, guns in the home lead to many tragedies, from toddlers shooting family members by accident to depressed men committing suicide because of the availability of a convenient means for doing it. Yes, we need sane gun laws: licensing and permits, mandatory training, restrictions on manufacture, sale, and ownership, and so on.

But guns are a distraction from what’s really going on.

What’s really going on is a slow-motion uprising by white men who feel lost, enraged, and confused by the gradual ebbing of their unchallenged social, political, and economic power. Most of the men who’ve joined this uprising and committed terrorist acts in its name don’t even quite know or reflexively understand the deeper sources of their rage and alienation. They’ve nearly invariably beaten or hurt women in their lives; they’ve felt confused or wounded by the world around them. They’ve sometimes understood themselves to be mentally unhealthy or lost. They’ve cut themselves off from social ties or never been able to make them in the first place.

But most of them are not mentally ill as we commonly understand it, any more than the alienated young men from London or Berlin or Detroit or Istanbul or Kano who go off to join ISIS or Boko Haram. Those insurgents leave because they feel there is nothing for them where they are, because they feel powerless or lost, because they want to make a new world that is also somehow an old and sanctified world where they are the powerful ones. They often know almost nothing about the Qur’an or Islam.

So it is with America’s mass shooters: virtually all of them white and male, sensing that the safety of a world where mediocre, ordinary white men could still count on being bosses, on being in charge, is fading away.

This goes all the way back to James Huberty in San Ysidro in 1984. We are not in a fight with guns, any more than other societies trying to cope with insurgencies are in a fight with guns. The guns are a necessary but not sufficient condition of those insurgencies, but ordinary people don’t get blown up by an IED while travelling by bus because there are IEDs. They get blown up because insurgents and terrorists are mining the roadways. The IEDs are what allow buses to be blown up and many people to die: if insurgents just had butter knives, they’d kill very few people. But here, in some sense, the mantra of the gun owner is almost right: without the terrorist, the gun sits on the shelf, the IED never gets made.

Ordinary Americans don’t get shot because there are guns, and they don’t get shot because somehow we have the world’s worst mental health crisis. They get shot because there is a decentralized, distributed movement of white men who want their supremacy back. It’s always been visible to us but it is more apparent than ever now because there are open terrorist sympathizers in the White House, the Senate and the House of Representatives, in governors’ mansions and state legislatures. It is time to call them what they are and to understand truthfully that we the people are under attack.

What Does It Mean To Be Celtic?

By Matthew Wills

A recent book by Caoimhín De Barra explores the formation of Celtic nationalism. In the late twentieth century, “Celticity” was sparked anew by the UK’s devolution of power to Scotland, Wales, and Northern Ireland. Celticity, however, has turned out to be quite exportable, and not just in the form of Celtic music, Irish dance, and fantasies like the 1995 movie Braveheart.

According to scholars Euan Hague, Benito Giordano, and Edward H. Sebesta, two organizations that arose in the 1990s have appropriated contemporary versions of Celtic nationalism as a proxy for whiteness. Both call for separate nations to be set aside for the citizens they count as “white.” One is the League of the South (LS), a fringe group that argues for a return to the Confederate States of America. Meanwhile, in Italy, Lega Nord (LN) has taken up the banner of “Celtic culture” as a model of whiteness, advocating for a state called Padania, separate from Italy’s south. The LN, frequently called just Lega, is part of Italy’s coalition government; in the 2018 elections, it took just under 18% of the vote for both the Chamber of Deputies and the Senate.

Both the LS and the LN argue that Celtic-ancestry people are a “distinct ethnic group deserving of self-determination and an independent nation state,” write Hague et al. Comparing the two leagues, the authors explore the confluence of ethno/race-based nationalism with the use (and misuse) of the myths of Celticity.

Celticity is “an attractive set of symbols and identities that come replete with popular recognition and a supposedly ancient past that can be invoked by people for many purposes, from ‘new age’ religion to popular ‘world music.'” Historically, however, that “ancient past” is hard to pin down. Hague et al. explain:

The very flexibility and the vagaries of archeological evidence regarding the original Celts enable multiple political and cultural meanings to be invested in the form, whilst retaining the symbolic value and historical authority accrued by the reference to a supposedly ancient Celtic culture.

“The Celts” can be, and have been, envisioned in all sorts of ways: as a warrior class; as a pan-European people; as the epitome of whiteness; as “whatever version of the past seemed nationally expedient.” It’s a cultural identity that has come into vogue in recent decades.

The LN posits that northern Italy is culturally and ethnically distinct from southern Italy. Southern Italians aren’t seen as Celtic/white/European—shades of the way Italian immigrants were first treated in the U.S. For LN, separation is essential to block immigration from Africa, Asia, and southern Italy.

Nationalism tries to make “ancient connections between a people and a specific territory, an intersection of genealogy and geography.” By exploiting the ethos of multiculturalism, both the LS and the LN argue for a “right to cultural difference.” This right, the authors say, fits into “ongoing processes of white privilege.” While overt racism is generally frowned upon, “an appeal to Celtic ethnicity appears acceptable and can be justified by utilizing a rhetoric of cultural awareness while simultaneously subverting political commitments to cultural equality and reasserting white superiority.”

The Accidental Presidents of the United States

By Farah Mohammed

As Theresa May tearily announced her departure outside 10 Downing Street to the British public, she served as a symbol of the personal and professional challenges faced by those put in the impossible position of inheriting leadership, rather than choosing it.

May faced the double challenge of having to manage the formidable charges given to any state leader (including, in her special case, the Gordian knot that is Brexit), without the popular support of an electorate. She was awkwardly shuffled into power after her predecessor, David Cameron, suddenly resigned, and her party scrambled to find a replacement.

It’s a position for which democracy is ill-prepared. The democratic system has certain measures in place for when the people’s choice somehow fails them, through death, accident, or injury, or self-selects out of the position of power. However, as anyone voting for president knows, the vice president is like an insurance policy you never expect to claim.

Nonetheless, through American history, there have been nine “accidental presidents,” as scholar Philip Abbott calls them in Presidential Studies Quarterly. Presidents Tyler, Fillmore, Andrew Johnson, Arthur, Theodore Roosevelt, Coolidge, Truman, Lyndon Johnson, and Ford all wrestled with the unique challenge of proving themselves to be presidential material despite already being president.

These accidental presidents have had varying levels of success. Two are among historians’ most highly ranked. Four are among the worst remembered. Four were not nominated by their own parties for another term, and three others voluntarily gave up the opportunity.

Nonetheless, thrust into the role, all nine had to work to establish their legitimacy. Abbott draws some similarities between their various strategies:

The Homage Strategy: Some accidental presidents choose to lean heavily on the virtues of their predecessors, understanding the public’s (and their own party’s) deep emotional ties to the previous president—a strategy Lyndon B. Johnson employed, despite his known tensions with the Kennedys.

The Independent Strategy: All presidents, even those who begin their tenure by emphasizing the greatness of their predecessors, must eventually forge their own path. Some choose to do this sooner than others. Tyler—the first accidental president—chose to skip the homage to his predecessor and strike out on his own. Tyler inherited the presidency from President William Henry Harrison, and, according to Abbott, quickly formalized his position by giving an inaugural address and moving into the White House. He then proceeded to infuriate other members of government by pursuing his own policy agenda. While this seemed risky, there was a method to Tyler’s madness. As the first in his situation, he was in a unique position. To adhere too closely to the leader before him, or to be too agreeable to his peers, might well have made him seem incompetent or timid and significantly weakened his power.

The Minimalist Strategy: The minimalist strategy is what it sounds like—a cautious approach in which the former vice president acts as a steady caretaker, not concerned with blazing new paths or establishing a new form of leadership. Both Coolidge and Ford took this approach, with wildly varying degrees of success. The success of this strategy depends on the economic and political climate at the time; a quieter political atmosphere is much better suited to a quieter president.

What Abbott concludes, however, is that in each case and in each strategy, accidental presidents always somehow stumbled or struggled. After all, a leader needs integrity in the eyes of the populace, as well as their own political party, in order to establish complete legitimacy.

Abbott concludes:

Theodore Roosevelt’s rhetorical arrangement of his relationship to the McKinley assassination, Coolidge’s construction of the “Silent Cal” persona, and Lyndon Johnson’s swift and systematic use of homage for his political agenda are all examples of the inventiveness available to accidental presidents. That despite many different instances of creativity, accidental presidents still fail in various degrees to fulfill the roles of rex and dux [i.e. legitimacy and leadership], is a valuable aspect of a theory of democratic succession, for the imagination of political leaders should always be tethered to election.

Co-Opting AI

Today, almost 70 years after Alan Turing famously asked, “Can machines think?,” what we call “artificial intelligence,” or AI, has seemingly come to penetrate our everyday life. It is in our phones, our homes, our workplaces, our modes of transportation, our schools, our welfare system. And while it remains unclear what AI really is, or can be, it is undeniably capturing the imagination of...

excerpts from my Sent folder: localism

By ayjay

More broadly, you should understand that I am a deeply committed localist and doubt the legitimacy of all nation-states and all ecclesiastical structures larger than the diocese (and ideally the old city-sized diocese, not the hypertrophied things we have today). I don’t think there should be any polis larger than McLennan County, and within that local structure I advocate a fruitful hybrid of distributism and anarcho-syndicalism. And yes, I’m serious.

I have sometimes said that future generations will refer to this period of history as the Late Roman Era, because church and state alike have borrowed their understanding of political action and political legitimacy from the Roman model. When the church decided that the Roman administrative structure was what it should imitate, it drank from a poisoned chalice. (Hodie venenum effusum est in ecclesiam Christi: “Today poison was poured into the church of Christ.”) The church should have seen the Roman way of organizing and disciplining people across great distances as the antithesis of the ecclesia, not something to imitate.

In the first 200 years or so of the Way, the church at Rome considered itself bound to offer other churches prayer, encouragement, and sometimes money. It was first not in power but in service. Then its bishops increasingly began to demand obedience from other dioceses. That was the Original Ecclesial Sin from which we have never recovered.

Or so I think.

The Stonewall Riots Didn’t Start the Gay Rights Movement

By Catherine Halley

Despite what you may hear during this year’s fiftieth anniversary commemorations, Stonewall was not the spark that ignited the gay rights movement. The story is well known: A routine police raid of a mafia-owned gay bar in New York City sparked three nights of riots and, with them, the global gay rights movement. In fact, it is conventional to divide LGBTQ history into “before Stonewall” and “after Stonewall” periods—not just in the United States, but in Europe as well. British activists can join Stonewall UK, for example, while pride parades in Germany, Austria, and Switzerland are called “Christopher Street Day,” after the street in New York City on which the Stonewall Inn still sits.

But there were gay activists before that early morning of June 28, 1969, previous rebellions of LGBTQ people against police, earlier calls for “gay power,” and earlier riots. What was different about Stonewall was that gay activists around the country were prepared to commemorate it publicly. It was not the first rebellion, but it was the first to be called “the first,” and that act of naming mattered. Those nationally coordinated activist commemorations were evidence of an LGBTQ movement that had rapidly grown in strength during the 1960s, not a movement sparked by a single riot. The story of how this particular night and this particular bar came to signify global gay rebellion is a story of how collective memory works and how social movements organize to commemorate their gains.

The sociologists Elizabeth A. Armstrong and Suzanna M. Crage detail four previous police raids on gay bars in cities across the United States that prompted activist responses—and local gains—but that either faded from local memory, did not inspire commemorations that lasted, or did not motivate activists in other cities.

For example, San Francisco activists mobilized in response to police raids on gay bars in the early 1960s, which came to a head during a raid on a New Year’s Eve ball in 1965 that eventually brought down the police commissioner. This New Year’s Eve raid attracted wide media attention, garnered heterosexual support, and is credited with galvanizing local activists, but it was subsequently forgotten. In 1966, again in San Francisco, LGBTQ people rioted at Compton’s Cafeteria, smashing all the windows of a police car, setting fires, and picketing the restaurant for its collusion with police. The city’s gay establishment did not participate, however, and distanced itself from the transgender people and street youths behind the “violent” protest and from their political organization, Vanguard.

San Francisco was not the only U.S. city with gay rights activists gaining strength. In Los Angeles, the first national gay rights organization, the Mattachine Society, was founded years earlier, in 1951, and spawned chapters in other cities around the country. Bar raids in late-1960s Los Angeles also prompted resistance. The 1967 police raid on the Black Cat bar, for instance, led to a demonstration 400 people strong that garnered evening news coverage. That demonstration played a role in the founding of the leading national LGBTQ magazine, The Advocate. While the Black Cat demonstration garnered support from heterosexual activists for Chicano and Black civil rights, no further coordination occurred, and the event was not commemorated. When police again descended on the L.A. nightclub The Patch, patrons struck back immediately, marching to city hall to lay flowers and singing the civil rights anthem “We Shall Overcome.” But its anniversary passed without remembrance. Los Angeles activists did organize a vigil on the one-year anniversary of the night the L.A. police beat a gay man to death in front of the Dover Hotel, but this 120-person-strong rally and march to the police station did not inspire activists in other cities. Subsequent demonstrations were subsumed by the Stonewall commemorations.

Activists were busy on the East Coast before Stonewall, too. In Washington, D.C., LGBTQ veterans chose the Pentagon as their place to picket, making it onto national television with signs reading, “Homosexual citizens want to serve their country too.” Subsequent demonstrations targeted the White House and the offices of federal agencies. New York City’s Mattachine Society secured legal gains in 1966 when it organized a “sip-in” at the bar Julius’, securing the right of homosexuals to gather in public. None of these actions inspired lasting commemoration, however, locally or in other cities. The question scholars of these pre-Stonewall protests have sought to answer is: Why not?

Four members of the Mattachine Society at a “sip-in” in 1966, demanding to be served at Julius’s Bar in Greenwich Village © Estate of Fred W. McDarrah via The National Portrait Gallery

There was an annual demonstration for gay civil rights before Stonewall, however, and it provides the best example of how gay politics were growing and changing before the riots. In 1965, Philadelphia LGBTQ activists began an annual picket of Independence Hall on the Fourth of July to protest state treatment of homosexuals. Soberly dressed men and women with carefully worded signs walked solemnly in front of this iconic building where the Declaration of Independence and U.S. Constitution were debated and signed. These “Annual Reminders” were the result of coordination by activists in New York, Washington, and Philadelphia, evidence of burgeoning regional cooperation by gay rights activists in the 1960s. Yet these somber events unraveled in the week after Stonewall, and Philadelphia activists voted later in 1969 to shift the 1970 commemoration from a picket of Independence Hall to a parade in the streets on the Stonewall anniversary.

Gay politics had become more radical in the late 1960s, owing to the influence of the Black power movement, second-wave feminism, and the protests against the Vietnam war. Radical organizations advocating “gay power” had already sprung up in the 1960s, including in Greenwich Village, where the Stonewall Inn was located. These new activists stereotyped the actions of their forebears as conservative, erasing their contributions from a history that now was credited solely to Stonewall.

What was different about Stonewall was that organizers decided to commemorate it, and to make it a national event. At a meeting in November of 1969, regional activists broke with the respectable image of the Philadelphia “Annual Reminder” and vowed to secure a parade permit on the anniversary of the raid on the Stonewall Inn, calling it Christopher Street Liberation Day. These organizers reached out to groups in Chicago and Los Angeles who readily agreed to remember something that happened elsewhere, in part because it was one of the few acts of LGBTQ resistance to get widespread media coverage, including in national LGBTQ publications and the New York Times.

This media coverage was itself the product of previous ties between local LGBTQ activists and journalists—and the fact that the Stonewall Inn was so near to the offices of the Village Voice. Interestingly, San Francisco’s activists declined to participate because they had already made inroads with local politicians and clergy. As one member explained, “I did not think a riot should be memorialized.” Only a small breakaway group participated, to little local effect, in a city that today hosts one of the largest gay pride parades in the country. These coordinated marches in Los Angeles, New York, and Chicago in 1970 were the first gay pride parades, and sparked an idea that spread around the country—to 116 cities in the United States and 30 countries around the world.

It was this national act of commemoration that represented a truly new political phenomenon, not the riot itself. As Armstrong and Crage have written, “without the existence of homophile organizations elsewhere, many of them founded only in the late 1960s, a national event would have been unthinkable.” Stonewall was an “achievement of gay liberation,” and not its cause, and an achievement of collective memory and collective action, if not the first LGBTQ riot or protest.

It is notable that this achievement took the form of a joyful parade, rather than a somber picket like Philadelphia’s Annual Reminder. As the sociologist Katherine McFarland Bruce describes in her detailed history of pride parades in the United States, “planners settled on a parade format as the best way to accommodate diverse members and to produce the positive emotional experience that brought people together.” As early organizers noted, “a fun parade brings out more people than an angry march.” Unlike the Annual Reminder, which addressed the state in asserting the similarity of homosexuals with heterosexual citizens, parade participants celebrated their differences and aimed to change minds, not laws.

An LGBTQ parade through New York City on Christopher Street Gay Liberation Day, 1971. Getty

There were unique characteristics of Stonewall, of course. In his detailed history of the bar and those nights, the historian David Carter lists many: It was the only bar raid that prompted multiple nights of riots; it was the only raid that occurred in a neighborhood populated by lots of other LGBTQ people who might participate; and the bar was the city’s largest, located in a transportation hub surrounded by many public telephones that were used to alert media.

But Carter also notes that the riots were not inevitable, and that they were but one turning point in the United States’ burgeoning gay rights movement. New York City already had many gay activists “with the specialized skills to take on leadership roles to help shape and direct the event,” for example. He also gives special credit to the fact that several of the riots, including Stonewall and the Compton’s Cafeteria riot in San Francisco, occurred during police raids that came right after a period of liberalization. In San Francisco, Compton’s clientele only fought back after gaining hope from the city’s pre-Stonewall municipal liberalization towards homosexuality. In New York City, the police raid seemed out of step with the liberal administration of Mayor John Lindsay. As Carter summarizes, “revolutions tend to happen after periods of liberalization.”

As activists commemorate the Stonewall Riots in 2019, perhaps they should also lay plans for next year, to remember the fiftieth anniversary of the first gay pride parade in 2020. The nation finds itself again in an era of retrenchment after the liberalization of the Obama era. The parades of 1970 thus deserve to be remembered as the first national act of LGBTQ remembrance, if not the first act of LGBTQ resistance.

When Foster Care Meant Farm Labor

By Livia Gershon

In an effort to support children in foster care, Sesame Street has introduced Karli, a muppet living with “for now” parents until she can go back to her mother. Foster families, who get a small payment to offset the cost of caring for children, have been a central part of child welfare programs for the past century. Before that, historian Megan Birk writes, Americans depended on farmers to take care of kids in exchange for hard labor.

In the years after the Civil War, state and charity welfare workers commonly “placed out” children whom they identified as “destitute or neglected,” often sending them to work on family farms in the Midwest. Tens of thousands of children from outside the region arrived, sometimes on “orphan trains” from the urban East.

Authorities viewed the farm placements as a win for everyone involved. Farmers got affordable labor. Governments and philanthropic organizations were relieved of the expense of running orphanages. And children got the chance to learn valuable work skills while living in a rural setting, widely seen as the ideal place for an American upbringing.

Birk writes that in some cases, the children were virtually a part of the families that took them in. If they worked hard on the farms, that was no different from what farmers expected of their own kids. But some farmers exploited the child laborers, beating them, denying them schooling or medical care, and sometimes overworking them for a season before sending them back to an institution.

The welfare organizations in charge of the programs often had little capacity to monitor the children’s situations. Some relied on volunteers who might or might not actually bother checking in on the families.

Birk writes that by the 1880s, charity workers were calling attention to these problems and seeking reforms. Over the next few decades, supervision did improve, with states putting resources into professional supervision of the placements.

At the same time, though, farm life was changing. Many people were leaving the countryside for growing cities, and farmers increasingly operated as tenants rather than owning their own land. In this climate, the skills learned working on a farm seemed less relevant to children’s success in the future. And doing hard labor and getting by without modern conveniences started to seem like inadequate training for a child who might hope to join the growing ranks of white-collar workers. One children’s advocate suggested that the best candidate for a farm placement was “a rather dull and backward but strong and good natured boy” who wouldn’t benefit much from a modern education anyway.

By the Progressive Era, Birk writes, child labor was widely seen as objectionable, and middle-class families led a shift toward seeing children as “priceless.” Rather than expecting children to work for their keep, welfare agencies increasingly paid urban or suburban foster families to take them in. The system represented by Sesame Street‘s Karli had arrived.

How War Revolutionized Ireland’s Linen Industry

By Matthew Wills

You probably know “Rosie the Riveter.” She’s the iconic incarnation of the women in the industrial workforce of World War II. With manpower siphoned off to war, womanpower was called upon to work in factories in the kinds of jobs few women had seen before.

But WWII was not the first time a deficit of male laborers opened doors traditionally closed to women. War can be a radical force, a great overturner of traditions like the sexual division of labor. As Anne McKernan tells us, something similar happened during the Napoleonic Wars, particularly in the linen industry in Ulster.

From 1803 to 1815, the United Kingdom of Great Britain and Ireland was at war with France. With 300,000 men in the army and another 140,000 in the Royal Navy, manpower was absent from the home front, just as demand for everything from food to clothing rose. War devoured textiles like linen, which was used for canvas, duck, and sailcloth. Linen merchants turned to women to maintain and increase production.

The province of Ulster is now split between the Republic of Ireland and Northern Ireland. Before 1800, Ulster had a strong tradition of linen production by farmers who spun and wove the flax they themselves grew. (Irish linens are still famous.) The women of farming families spun flax fibers on spinning wheels, while the men wove the resulting thread into linen cloth on their own looms.

[War] presented Irish entrepreneurs with a golden opportunity to snap the link between gender and commercial linen weaving; snapping that link, in turn, prepared the way for snapping the link between farming and weaving, the bi-occupations of rural Ulster households. War-time innovations in the linen industry subsequently turned independent farmer-weavers into rural proletarian weavers.

Mechanization began to replace hand-spinning through the first decade of the nineteenth century. But spinsters (only later did the word come to mean unmarried women) could earn three times as much weaving. Still, the merchants of the Irish Linen Board had to overcome traditional gender divisions in this home-work system. And, to keep up production, they needed to do it fast.

The innovators believed time was of the essence. They could not wait until women found their way into the labor pool of weavers. Increased supplies of coarse yarn from mechanical spinning would create greater demand for weavers at a time when the labor supply was contracting in the face of increasing demand from the agricultural sector.

McKernan reveals how the Irish Linen Board recruited female weavers. For one thing, there were the higher wages. But there were also incentives: cash prizes for the first 200 yards and premiums for cloth with higher thread counts. The newest loom technology, which could double the volume of cloth a worker could produce, was distributed for free. This wasn’t altruism: linen merchants demanded the right to inspect homes with the new looms, which were no longer the workers’ possessions. “If the inspector found ‘obstinacy in attention or idleness be preserved in,’ he had discretion to ‘remove the loom.'”

Nevertheless, “young women responded enthusiastically to weaving.” From 1806 to 1809, over 1,800 free looms were distributed. One six-month period saw 300 women claiming prizes, which cleaned out the prize fund. “Within a short time, female weavers took on apprentices. Besides providing substitute workers for male weavers engaged in war, the new weavers would prove to have long term consequences on the direction of Irish linen industry.”

Napoleon was finally defeated in 1815. Unlike after WWII, however, the women were not thrown out of the industry when soldiers returned to civilian life. The market was too hot, even with the massive drop in war demand. By 1851, at least a third of Irish linen weavers were women. Even more worked in cotton weaving. The linen cloth market simply demanded large numbers of weavers. “Commercial interests,” writes McKernan, “had no incentive to exclude” women from the industry.

The Weather Forecast That Saved D-Day

By Matthew Wills

On June 6th, 1944, Operation Overlord began on the Normandy coast. It was the beginning of the liberation of Nazi-occupied France. More than a dozen nations contributed soldiers, the great majority from America, Britain, and Canada. The ultimate goal was the defeat of Nazism.

Most Allied invasions and landings in World War II began with a “D-Day.” The D stood, in military redundancy, for “day”—the day-day of the invasion. But this D-Day was different, an unparalleled undertaking involving more than two million Allied personnel. Of course, those involved in Overlord couldn’t know if the invasion would be a success. It all came down to the weather, the tides, and the amount of moonlight. Low tide, small waves, and a moonlit night were essential.

As geographer Mildred Berman explains, weather forecasting was fairly rudimentary in those days. There were no satellites, telecommunications, or supercomputers. “One or two days of continuously overcast sky with low clouds could bring the entire undertaking to a halt,” she writes, quoting a general who said weather “would remain as constant an enemy as the Germans.”

Berman writes,

The success or failure of the grand design would be controlled by a combination of elements, not only the moon and its effect on the tides but also sea swells, breaking surf, beach surfaces, and visibility in the air and on the ground…All the desired conditions for tides occurred on only six days a month, but optimum conditions for tides and moonlight occurred on only three days a month, which in June 1944 were the 5th, 6th, and 7th.

Cancelling these dates would mean a delay of two weeks at least, something logistics, secrecy, and hundreds of thousands of troops in the south of England could hardly bear. June 5th was the initial choice, but the 4th saw December-like high winds, five-foot waves, and four-foot surf. Planning shifted to June 6th.

American assault troops at Omaha Beach via Wikimedia Commons

Berman recreates the frantic back-and-forth between weathermen and military brass. At 3:30 a.m. on June 5th, “winds were almost hurricane strength, rain was driving in horizontal sheets, and roads to Allied headquarters in Portsmouth were morasses of mud.” Meteorologists predicted that the next morning, however, would see a brief break in the miserable weather, aligning with the optimal moon and tides. So just after midnight, on June 6th, airplanes with 18,000 airborne troops started their motors.

Berman comments on the vital weather-data gap:

German meteorologists had failed to predict the break in the weather that prompted the Allied decision, because they simply did not have sufficient data. Due to high winds and heavy overcast on 5 June, German naval patrols were canceled, mine layers restricted to port, and the Luftwaffe was grounded.

Field Marshal Erwin Rommel thought an invasion so unlikely in the bad weather that he returned to Germany for his wife’s birthday… and to ask for more tanks for what he considered a woefully under-defended Atlantic coast.

In 2018, oceanographer John Vassie and engineer Byung Ho Choi ran a “hindcast simulation” using all the modern tools of weather forecasting and “coupled tide-wave-storm surge physics” to see how good the original forecast was. They found “the tidal prediction at invasion at Omaha Beach [one of the five targeted beaches] was reasonably accurate.”

Back in 1944, however, German commanders were convinced the Normandy landings were actually a diversion for a main assault at Pas de Calais, the narrowest part of the Channel. They should have been solving the Daily Telegraph’s crossword puzzle, which in the weeks prior to D-Day had answers that were the codewords for the operation.

The Allies suffered at least 10,000 casualties during the initial landings. There would be many more before Germany’s unconditional surrender in May of 1945.

Navy Seals: Why the Military Uses Marine Mammals

By Farah Mohammed

In late April 2019, a friendly beluga whale surfaced off the shores of Norway and began following fishermen’s boats, tugging on loose straps for attention, seemingly asking for food.

But that’s not what grabbed world headlines. What caught global attention was the harness strapped to the cetacean’s back, meant to carry a camera, leading some to theorize that the beluga was a stray Russian spy. (After much speculation, most outlets are reporting the whale was likely trained as a therapy animal.)

Bizarre as it sounds, the military use of marine mammals is in fact a common strategy, even in the U.S. According to science writer Ceiridwen Terrill, marine mammals have been deemed “invaluable components” of the defense force.

During the Cold War, the Soviet military was said to have parachuted military-trained dolphins from heights of nearly two miles. Dolphins were also used in the study and design of underwater torpedoes. A pod of sea lions, known as Mk 5 Mod 1, was trained to retrieve explosives. In the U.S.’s 1969 Project Deep Ops, two killer whales and a pilot whale—Ahab, Ishmael, and Morgan—were trained to retrieve objects lost in the ocean, at depths too great for human divers and in conditions too rough for machines to handle.

Although stories like these capture the public’s imagination, the reality of the life of an animal in the armed forces is rather dark. As it is with humans, military training for animals is physically and psychologically punishing, and carries the inherent risk of casualties. Moreover, there isn’t the same level of oversight for animals in the military.

Unexplained deaths are a problem. Terrill writes,

the Marine Fisheries Service and Navy necropsy reports show that the Navy collected 146 dolphins of four species since 1962. Of this 146, 60 were still in service, 11 were transferred to private facilities, 5 had escaped, and 55 died. However, Fisheries Service records were incomplete; not all Navy dolphin necropsy reports were filed, not all dolphin deaths reported.

According to Terrill, abuse against animal trainees is another problem. She pinpoints common practices such as throwing fish outside pens during training (“baiting”), or denying food as a way to make animals more cooperative (“axing”).

The issue is complex. According to Terrill, some argue that after years, the military training pens are the only home the animals know, and attempting to free them or re-introduce them to the wild would be cruel. This is an idea that the animals’ behavior both supports and contradicts: some escape given the opportunity, while others, even in open water, will linger and wait for their trainers to arrive.

Today, dolphins, sea lions, and whales are still used in tracking and retrieving objects, as their natural senses are superior to technology in rough weather and noisy areas.

fair play to you

By ayjay

I’m getting a good bit of email today, most of it saying, in cleaned-up language: How dare you accuse us on the left of not playing fair, you Trump-supporting jerk?? (Maybe try entering “Trump” in the search box on this site?) Here’s why I say what I said, courtesy of my colleague Frank Beckwith:

For the political liberal, the government should not only restrain its hand on matters of moral controversy, it should in some cases go out of its way to offer exemptions to generally applicable laws to idiosyncratic sects for the sake of civic peace (e.g. conscientious exemption statutes, Wisconsin v. Yoder, Sherbert v. Verner). But for the hegemonic liberal, the role of the state is to make men moral, as he understands morality. It is to scrupulously enforce “social justice” by direct coercion of the actions, speech, and private associations of those who remain unconvinced of the wisdom of the left side of the culture war. So, for example, the Little Sisters of the Poor must assist in providing contraception contrary to their Church’s teachings, a Christian baker must use her talents to help celebrate what she believes is a faux liturgical event or face crippling fines, and a religious college may have to set aside its moral theology or be singled out for special retribution by the government. 

(Go to the original to read the whole thing and get the links.) (Also read other posts on this site tagged “religious freedom.”) And that trend has continued. Conscience exemptions ain’t what they used to be — about that there is surely no disagreement. The dispute is simply whether that’s good or bad. For many on the secular left — for, as far as I can tell, the significant majority, though numbers on this are hard to come by — the elimination of religious-conscience protections is a wholly good thing. But it’s indubitable that the goalposts have moved dramatically in the past decade — remember, in 2008 few Democratic voters were bothered that Barack Obama didn’t support same-sex marriage — so that religious commitments that were legally acceptable (if socially disapproved) from time out of mind have very quickly become altogether forbidden. For the (declining) “political liberal,” fairness towards religious conscience was a virtue; for the (ascendant) “hegemonic liberal,” it’s a vice.

There’s a conversation on these matters that I’ve had a number of times, and it goes something like this:

Me: I’m concerned about the erosion of support on the left for religious liberty.

They: That’s a disgraceful calumny, we are passionately devoted to religious liberty.

Me: Only when you agree with, or at least are not offended by, the religious beliefs involved.

They: Another disgusting lie!

Me: So what do you think about that Masterpiece Cakeshop guy?

They: What a bigot! I hope the law comes down on him like a ton of bricks.

Me: But he says he’s acting out of his long-held religious convictions.

They: I despise it when people use religion to cover for their bigotry.

Me: So it’s like I said, you only support religious liberty when you agree with, or at least are not offended by, the beliefs involved — the ones you think are not bigoted.

They: Bigotry and religion are not the same thing! Religion is about a person’s relationship with whatever God they happen to believe in, it’s not about passing judgment on their neighbors.

Me: So having claimed the right to define what bigotry is, you’re now defining what religion is?

They: Look, you can go ahead and defend bigotry if you want to, but thank goodness there are laws against that in this country.

I’ve been trying to remember what these conversations remind me of and I finally figured it out. It’s this:

“And you can’t get away from it that, fundamentally, Jeeves’s idea is sound. In a striking costume like Mephistopheles, I might quite easily pull off something pretty impressive. Colour does make a difference. Look at newts. During the courting season the male newt is brilliantly coloured. It helps him a lot.”

“But you aren’t a male newt.”

“I wish I were. Do you know how a male newt proposes, Bertie? He just stands in front of the female newt vibrating his tail and bending his body in a semi-circle. I could do that on my head. No, you wouldn’t find me grousing if I were a male newt.”

“But if you were a male newt, Madeline Bassett wouldn’t look at you. Not with the eye of love, I mean.”

“She would, if she were a female newt.”

“But she isn’t a female newt.”

“No, but suppose she was.”

“Well, if she was, you wouldn’t be in love with her.”

“Yes, I would, if I were a male newt.”

A slight throbbing about the temples told me that this discussion had reached saturation point.

The Dangerous Game of Croquet

By Livia Gershon

Croquet arrived in the U.S. from England during the Civil War. It immediately became hugely popular and was hailed as a genteel, refined activity appropriate for groups of mixed ages and genders. But, as historian Jon Sterngass writes, to some critics it represented a danger to female morality.

Today, we might expect criticism of croquet to run along the lines expressed by Mark Twain, who called the game “ineffably insipid.” But Sterngass writes that many observers were disturbed by the way women shortened their dresses to play more comfortably, and the way young people took the co-ed sport as an opportunity to flirt. One magazine described the game as a “source of slumbering depravity, a veritable Frankenstein monster of recreation” and suggested that “it would be well if the enthusiasm of the clergy and laity were enlisted for suppressing the immoral practice of croquet.”

Croquet apparently had the potential to stir up not just lust but also the other deadly sins of anger and envy. Milton Bradley’s patented croquet set came with this advice to beginners: “KEEP YOUR TEMPER, and remember when your turn comes.” The International Herald Tribune reported that a woman testified during a separation hearing that her husband refused to speak to her for days after she questioned whether his ball had really gone through the hoop. The judge responded, “I do not think there is any game which is so liable to put one out of humour as croquet.”

Sterngass writes that the combination of co-ed play and intense competition challenged Victorian ideas about benevolent, moral womanhood. The game’s popularity also challenged male superiority in competitive endeavors. It seems that women frequently bested their male companions and were often—rightly or wrongly—accused of cheating. Men complained about women using illegal techniques like the “push shot,” or even using their dresses to conceal the ball while shuffling it along the lawn. An 1865 croquet manual complained that “We are aware that young ladies are proverbially fond of cheating at this game; but as they only do it because ‘it is such fun,’ and also because they think that men like it…” Sterngass notes that this kind of comment doesn’t actually explain their behavior, however: female players knew perfectly well that men didn’t like the cheating, since men complained about it constantly.

A more shocking violation of Victorian propriety emerged from a variation of the game called “tight croquet,” in which players could put their ball next to their opponent’s, plant their foot on their own ball, and smash it with the mallet to send the other ball flying. The titillating overtones were teased out in the caption of a Punch cartoon featuring a woman revealing a bit of ankle while performing the maneuver: “Fixing her eyes on his, and placing her pretty little foot on the ball, she said, ‘Now then, I am going to croquet you!’ and croquet’d he was completely.”

Ahmari revisited

By ayjay

This morning I have a post up at the Atlantic website on the scuffle Sohrab Ahmari kicked off with his recent attacks on David French. I want to add some cars to that train in the form of two sets of questions, and then a caboose.

First, though, I want to emphasize something that I said in passing in that post: that I basically share Ahmari’s view that the liberal order has become the Bad Liberalism — “tyrannical liberalism” — Neuhaus feared, and I agree that proceduralism is dying, is mostly dead maybe. Here’s one post, on matters closely related to the ones I’m dealing with today; and here’s the logic of Bad Liberalism in brief summary; and here’s a moment in which I grow nostalgic for a Proceduralism Lost. My critique does not concern Ahmari’s diagnosis, but rather some elements of his prescription. So, on to the questions.

First: Ahmari’s essay isn’t just a critique of David French – it contains a positive program as well:

Progressives understand that culture war means discrediting their opponents and weakening or destroying their institutions. Conservatives should approach the culture war with a similar realism. Civility and decency are secondary values. They regulate compliance with an established order and orthodoxy. We should seek to use these values to enforce our order and our orthodoxy, not pretend that they could ever be neutral. To recognize that enmity is real is its own kind of moral duty.

And when you recognize your moral duty, you will realize that your job is “to fight the culture war with the aim of defeating the enemy and enjoying the spoils in the form of a public square re-ordered to the common good and ultimately the Highest Good.”

Nothing about this is clear.

  • Who are the “we” implied in “our order and our orthodoxy”? Social conservatives? Religious social conservatives? Christian social conservatives? Catholic social conservatives? What about Muslim social conservatives? What about faithful Catholics who aren’t social conservatives? Who, in short, gets access to the control room?
  • Who is “the enemy”? This would be determined, I guess, by how you answer the questions above, but I wonder if David French — and any other Christian who defends the liberal social order — belongs to the enemy. (Probably not? Probably French is just an unreliable ally, like Mussolini to Hitler?)
  • How, specifically, would “we” “enforce our orthodoxy”? Would atheists be denied citizenship, or have their civil rights abridged in some way? And by what means would this enforcement be achieved? “Weakening or destroying their institutions” presumably means, for instance, something more dramatic than, say, removing federal funding from Planned Parenthood — so, maybe, finding legal means to punish systemically left-wing companies like those in Hollywood and Silicon Valley? But even that doesn’t seem nearly enough….

Unpacking that last bullet point: I’m going to assume that Ahmari is not counting on an angelic army to descend and impose the reordering of the public square to the Highest Good; I’m also going to assume that he’s not advocating a coup by the American armed forces. I think that leaves winning a great many elections and winning them by large majorities. (I mean, reordering the public square to the Highest Good is not something that could possibly be accomplished without amendments to the Constitution.) And that leads me to my …

Second question: If you believe that there is a “crisis facing religious conservatives” arising from the dominance of a tyrannical liberalism, and you want to defeat those enemies, drive them before you, and hear the lamentations of their (trans) women, how, exactly, do you further that goal by attacking … David French? What precisely is the strategic benefit of that? If you’re Ahmari, don’t you need people like French on your side? Or do you think you’re such a massive movement that you can do without people like French? Or do you think that French will be abashed by the incisiveness of your attack, your mockery of “Pastor French,” and will come over to your side, ultimately meekly submitting to the claims of the Catholic Magisterium? Or do you think that other people will read your attack and think “Wow, just look at how Ahmari dealt with that pathetic loser French, I want to be on his side”? Seriously: How’s this supposed to work?


And now the caboose — something I said in my essay that I want to re-emphasize here. I noted earlier that I largely agree with Ahmari that there is a “crisis facing religious conservatives.” But I dissent from his claim that Christians should let the urgency of the situation determine their behavior. (“It is in part that earnest and insistently polite quality of [French’s] that I find unsuitable to the depth of the present crisis facing religious conservatives.”) If David French is right that civility and decency are commanded to Christians, then they are always commanded to us. We don’t get to set aside the commandments of God when we find them “unsuitable” to the demands of the present moment. That way tyranny lies, and a tyranny that clothes itself in (misdirected) obedience.

In these contexts, and especially when I am feeling discouraged about the course of events, I often think of a passage from the Lord of the Rings, the moment when Eomer of Rohan meets Aragorn and Gimli and Legolas. Eomer:

‘It is hard to be sure of anything among so many marvels. The world is all grown strange. Elf and Dwarf in company walk in our daily fields; and folk speak with the Lady of the Wood and yet live; and the Sword comes back to war that was broken in the long ages ere the fathers of our fathers rode into the Mark! How shall a man judge what to do in such times?’

‘As he ever has judged,’ said Aragorn. ‘Good and ill have not changed since yesteryear; nor are they one thing among Elves and Dwarves and another among Men. It is a man’s part to discern them, as much in the Golden Wood as in his own house.’


P.S. For a further exposition of the two liberalisms that Father Neuhaus discussed — “political liberalism” and “hegemonic liberalism” — see this essay by my friend and colleague Frank Beckwith.

The Big Picture: Defending Society

Why, today, are many of the most antidemocratic voices in the United States not merely protected by Constitutional freedoms but draping themselves in them?
