FreshRSS

Membership, Citizenship, and Democracy

President Trump’s pernicious attacks on nonwhite immigrants have thrust a particular theory of political membership—white nationalism—to the forefront ...

The post Membership, Citizenship, and Democracy appeared first on Public Books.

Industrial London’s Maternal Child Abductors

By Livia Gershon

There may be no crime that horrifies the public more than child abduction. Historian Elizabeth Foyster writes that this was also true in London 200 years ago, though the crime then typically took a much different form than we expect today.

Historians generally agree that the late eighteenth century brought a major change in what English childhood meant. This included more positive attitudes toward kids, and a new wealth of books, toys, and clothes for middle-class urban children. Children were increasingly prized “for giving women a role as mothers,” and as “miniature models of all that a more affluent consumer society could afford,” Foyster writes.

If children were becoming more valuable, it stands to reason that, like all valuable things, they were in danger of being stolen. And, indeed, Foyster found 108 cases of child abduction tried in London and reported in the newspapers between 1790 and 1849.

Child abduction was nothing new, but it was understood differently than in previous times. In fourteenth-century England, “ravishment” covered both forced and consensual “abduction” of children or adult women. It typically had a sexual element, and the child victims were generally teenagers. Later, in the seventeenth century, abduction was understood as a fate befalling unfortunate boys forced into indentured servitude.

In contrast, in the period Foyster studied, the majority of stolen children were under six, and the abductor was usually a woman in her 20s or 30s. In some cases, kids were stolen for their clothes. Abductors might bring fancy children’s clothes to a pawnbroker, leaving a half-naked child outside. Other times, women reportedly stole children to gain sympathy when begging for money or seeking a job.

There were also well-off married women who stole—or paid someone else to steal—children they could present as their own. One 22-year-old wrote to her husband, serving in the Navy, about an invented pregnancy and childbirth. When she learned he was returning home, she travelled to London, snatched a four-year-old boy, and cared for him for two months before she was caught.

Foyster writes that news accounts paid little attention to possible harm done to the children. Unlike today, child abduction wasn’t generally assumed to be motivated by deviant sexual desire. Instead, newspapers focused on the terror and despair of mothers whose children were stolen, and suggested a parallel lack of feeling in the abductors.

A judge told one convicted child thief that, as a childless woman, she was “Ignorant of those heavenly feelings which subsist in the relation between parent and child; for had you been a mother, you must have respected and regarded, instead of agonizing a mother’s heart.” Still, Foyster writes, news reports also acknowledged that child-stealers might be motivated by a twisted “fondness” for children—reflecting their own stunted development.

Child-thieves clearly had no place in the growing public conception of natural motherly love. Yet the new understanding of children as valuable objects who gave meaning to women’s lives may have spurred the increase of child abduction.

The post Industrial London’s Maternal Child Abductors appeared first on JSTOR Daily.

Whose Life?

This Life: Secular Faith and Spiritual Freedom, by the philosopher Martin Hägglund, who teaches at Yale, is a book anyone committed to public-facing scholarship ought to take note of. This is all the ...

The post Whose Life? appeared first on Public Books.

Engagement Is the Enemy of Serendipity

By Dan Cohen

Whenever I’m grumpy about an update to a technology I use, I try to perform a self-audit examining why I’m unhappy about this change. It’s a helpful exercise since we are all by nature resistant to even minor alterations to the technologies we use every day (which is why website redesign is now a synonym for bare-knuckle boxing), and this feeling only increases with age. Sometimes the grumpiness is justified, since one of your tools has become duller or less useful in a way you can clearly articulate; other times, well, welcome to middle age.

The New York Times recently changed their iPad app to emphasize three main tabs, Top Stories, For You, and Sections. The first is the app version of their chockablock website home page, which contains not only the main headlines and breaking news stories, but also an editor-picked mixture of stories and features from across the paper. For You is a new personalized zone that is algorithmically generated by looking at the stories and sections you have most frequently visited, or that you select to include by clicking on blue buttons that appear near specific columns and topics. The last tab is Sections, that holdover word from the print newspaper, with distinct parts that are folded and nested within each other, such as Metro, Business, Arts, and Sports.

Currently my For You tab looks as if it was designed for a hypochondriacal runner who wishes to live in outer space, but not too far away, since he still needs to acquire new books and follow the Red Sox. I shall not comment about the success of the New York Times algorithm here, other than to say that I almost never visit the For You tab, for reasons I will explain shortly. For now, suffice it to say that For You is not for me.

But the Sections tab I do visit, every day, and this is the real source of my grumpiness. At the same time that the New York Times launched those three premier tabs, they also removed the ability to swipe, simply and quickly, between sections of the newspaper. You used to be able to start your morning news consumption with the headlines and then browse through articles in different sections from left to right. Now you have to tap on Sections, which reveals a menu, from which you select another section, from which you select an article, over and over. It’s like going back to the table of contents every time you finish a chapter of a book, rather than just turning the page to the next chapter.

Sure, it seems relatively minor, and I suspect the change was made because confused people would accidentally swipe between sections, but paired with For You it subtly but firmly discourages the encounter with many of the newspaper’s sections. The assumption in this design is that if you’re a space runner, why would you want to slog through the International news section or the Arts section on the way to orbital bliss in the Science and Health sections?

* * *

When I was growing up in Boston, my first newspaper love was the sports section of the Boston Globe. I would get the paper in the morning and pull out that section and read it from cover to cover, all of the columns and game summaries and box scores. Somewhere along the way, I started briefly checking out adjacent sections, Metro and Business and Arts, and then the front section itself, with the latest news of the day and reports from around the country and world. The technology and design of the paper encouraged this sampling, as the unpacked paper was literally scattered in front of me on the table. Were many of these stories and columns boring to my young self? Undoubtedly. But for some reason—the same reason many of those reading this post will recognize—I slowly ended up paging through the whole thing from cover to cover, still focusing on the Sox, but diving into stories from various sections and broadly getting a sense of numerous fields and pursuits.

This kind of interface and user experience is now threatened because who needs to scan through seemingly irrelevant items when you can have constant go-go engagement, that holy grail of digital media. The Times, likely recognizing their analog past (which is still the present for a dwindling number of print subscribers), tries to replicate some of the old newspaper serendipity with Top Stories, which is more like A Bunch of Interesting Things after the top headlines. But I fear they have contradicted themselves in this new promotion of For You and the commensurate demotion of Sections.

The engagement of For You—which joins the countless For Yous that now dominate our online media landscape—is the enemy of serendipity, which is the chance encounter that leads to a longer, richer interaction with a topic or idea. It’s the way that a metalhead bumps into opera in a record store, or how a young kid becomes interested in history because of the book reviews that follow the box scores. It’s the way that a course taken on a whim in college leads, unexpectedly, to a new lifelong pursuit. Engagement isn’t a form of serendipity through algorithmically personalized feeds; it’s the repeated satisfaction of Present You with your myopically current loves and interests, at the expense of Future You, who will want new curiosities, hobbies, and experiences.

What Does It Mean To Be Celtic?

By Matthew Wills

A recent book by Caoimhín De Barra explores the formation of Celtic nationalism. In the late twentieth century, “Celticity” was sparked anew by the UK’s devolution of power to Scotland, Wales, and Northern Ireland. Celticity, however, has turned out to be quite exportable, and not just in the form of Celtic music, Irish dance, and fantasies like the 1995 movie Braveheart.

According to scholars Euan Hague, Benito Giordano, and Edward H. Sebesta, two organizations that arose in the 1990s have appropriated contemporary versions of Celtic nationalism as a proxy for whiteness. Both call for separate nations to be set aside for the citizens they count as “white.” One is the League of the South (LS), a fringe group that argues for a return to the Confederate States of America. Meanwhile, in Italy, Lega Nord (LN) has also taken up the banner of “Celtic culture” as a model of whiteness. They advocate for a state called Padania, separate from Italy’s south. The LN, frequently called just Lega, is part of Italy’s coalition government. In the 2018 elections, the LN took just under 18% of the vote for both the Chamber of Deputies and the Senate.

Both the LS and the LN argue that Celtic-ancestry people are a “distinct ethnic group deserving of self-determination and an independent nation state,” write Hague et al. Comparing the two leagues, the authors explore the confluence of ethno/race-based nationalism with the use (and misuse) of the myths of Celticity.

Celticity is “an attractive set of symbols and identities that come replete with popular recognition and a supposedly ancient past that can be invoked by people for many purposes, from ‘new age’ religion to popular ‘world music.'” Historically, however, that “ancient past” is hard to pin down. Hague et al. explain:

The very flexibility and the vagaries of archeological evidence regarding the original Celts enable multiple political and cultural meanings to be invested in the form, whilst retaining the symbolic value and historical authority accrued by the reference to a supposedly ancient Celtic culture.

“The Celts” can be, and have been, envisioned in all sorts of ways: as a warrior class; as a pan-European people; as the epitome of whiteness; as “whatever version of the past seemed nationally expedient.” It’s a cultural identity that has come into vogue in recent decades.

The LN posits that northern Italy is culturally and ethnically distinct from southern Italy. Southern Italians aren’t seen as Celtic/white/European—shades of the way Italian immigrants were first treated in the U.S. For LN, separation is essential to block immigration from Africa, Asia, and southern Italy.

Nationalism tries to make “ancient connections between a people and a specific territory, an intersection of genealogy and geography.” By exploiting the ethos of multiculturalism, both the LS and the LN argue for a “right to cultural difference.” This right, the authors say, fits into “ongoing processes of white privilege.” While overt racism is generally frowned upon, “an appeal to Celtic ethnicity appears acceptable and can be justified by utilizing a rhetoric of cultural awareness while simultaneously subverting political commitments to cultural equality and reasserting white superiority.”

The post What Does It Mean To Be Celtic? appeared first on JSTOR Daily.

How Families with Two Dads Raise Their Kids

By Benjamin Winterhalter

Some states, including Texas, South Dakota and Alabama, have tried to defy the 2015 Supreme Court ruling that made marriage equality the law of the land. Their “religious freedom” bills allow taxpayer-funded agencies to deny qualified LGBTQ adults the opportunity to foster and adopt children.

LGBTQ protection in education appears limited as well. When pressed on the question, Education Secretary Betsy DeVos has refused to tell lawmakers whether she believes the federal government should include “sexual orientation” and “gender identity” in anti-discrimination policies.

That reminds me of how former Kentucky family court judge W. Mitchell Nance refused to hold hearings on same-sex couples’ adoptions in 2017 “as a matter of conscience.” He resigned after his state’s Judicial Conduct Commission found him guilty of misconduct.

Maybe any officials, judges and lawmakers who are alarmed by dual-dad or dual-mom households should check out the research on how gay parents differ from straight parents. So far, most of this scholarship has focused on the social, emotional and cognitive outcomes of the children they raise. (Spoiler alert: These kids turn out fine.)

As a former teacher who now researches gay dads and their families, I’m studying how the growing number of men married to other men are raising their children. So far, I’m finding few differences between them and their straight peers of similar socioeconomic status – especially regarding their children’s schooling.

A Growing Population

Since the Census Bureau estimates – but does not count – the number of households headed by two fathers, it’s hard to track them.

Plans are taking shape for the Census Bureau to begin counting same-sex couples who share a household in 2020, although the agency won’t be counting all LGBTQ individuals.

Nevertheless, the American Community Survey, the Census Bureau’s ongoing demographic survey of approximately 3 million households, already follows same-sex parenting. It estimates that in 2017, almost 40,000 two-dad households were raising children, up from about 30,000 in 2010.

Parenting Roles

How do parents in these families settle into specific roles? In short, just like heterosexual parents do.

Research suggests that affluent, white, two-father households adhere to traditional parenting roles. One is the primary breadwinner, while the other earns either less income or none at all and handles most of the caregiving and chores.

However, two-dad households can challenge the 1940s Norman Rockwell image of gendered parenting – just like heterosexual couples can.

Households with two fathers working full-time rely on day care facilities, babysitters, housekeepers and nearby relatives for support. Some of these men even take on responsibilities based on skills and strengths, rather than who fits the socially and culturally constructed mold of being more “motherly” or “fatherly.”

Community and School Engagement

And that’s where the parenting of gay dads may differ from that of a traditional heterosexual household, as my research and the work of other scholars suggest.

While interviewing and spending time with 22 gay-fathered families living in the Northeast, I have learned that they’re apt to step up. But it depends on where dads live. Many living in more gay-friendly areas become involved as classroom parents, voluntarily assisting teachers, reading books or leading singalongs. Some take leadership roles by becoming active PTA members or organizing events that go beyond their children’s classes. In some cases, gay fathers become PTA presidents or serve on school boards.

Like all civically engaged parents, gay fathers support their local museums and libraries and enroll their kids in camps and extracurricular activities. They sometimes do additional volunteer work for social justice groups.

Dads living in less gay-friendly areas want to have more of a presence in schools, but concerns about their children’s and families’ safety have made that challenging.

The largest-scale survey to date was conducted in 2008 by the Gay Lesbian Straight Education Network, an organization focused on the safety of LGBTQ students in schools. That study, which included 588 LGBTQ parents, suggested that gay fathers could be more likely to be involved in school-based activities than heterosexual dads.

Aside from the simple fact that these fathers love their children just like all parents do, Abbie Goldberg, a Clark University researcher, and her colleagues have shown that their increased presence may be due, in part, to initiatives to counter bias and assert more same-sex visibility and inclusion in schools. My current study indicates the same. Many of the men taking part have told me that being actively involved helps them preemptively counteract potential negative encounters with school personnel and other families.

Gay dads prefer schools and communities that are safe and inclusive. As my research suggests, living in an inclusive community makes them more likely to engage. Beyond that, they want lawmakers bent on barring them from fatherhood to see that two-dad families are for the most part just like any other family.


This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation

The post How Families with Two Dads Raise Their Kids appeared first on JSTOR Daily.

after the platforms

By ayjay

Ross Douthat:

Yes, it’s understandable for conservatives to worry that if Silicon Valley censors the likes of Molyneux, it will end up censoring them. It’s sensible for them to join parts of the left in worrying about the concentrated power over information that the stewards of social-media platforms enjoy. And it’s necessary for them to recognize that the influence of redpillers and white-identitarians reflects their own failure, across the decades of movement-conservative institution building, to create something that seems more compelling to fugitives from liberalism than the Spirit of the Reddit Thread.

With all that said, though, a humane conservatism should still be able to thrive in a world where white nationalists have trouble monetizing their extremism, in which YouTube algorithms are built to maximize something other than addiction.

I’m not sure what Ross means in the last sentence I’ve quoted by “should.” Does he mean that “humane conservatism” is likely to thrive, or that if the system is fair it ought to be able to do so? I doubt the first and doubt the conditional of the second.

Here’s the situation as I see it. First, as Alexis Madrigal has recently written, the big social media companies will from now on find it harder to take refuge in the claim that they are “merely platforms”:

These companies are continuing to make their platform arguments, but every day brings more conflicts that they seem unprepared to resolve. The platform defense used to shut down the why questions: Why should YouTube host conspiracy content? Why should Facebook host provably false information? Facebook, YouTube, and their kin keep trying to answer, We’re platforms! But activists and legislators are now saying, So what? “I think they have proven — by not taking down something they know is false — that they were willing enablers of the Russian interference in our election,” Nancy Pelosi said in the wake of the altered-video fracas.

If you can’t plead platform neutrality, what do you do? Well, these companies being what they are, they’ll write algorithms to try to filter content. But the algorithms will often fail — after all, they can’t tell the difference between sites that promote hatred and sites that seek to combat it.

Where does that leave you? As Will Oremus points out, it leaves you with mob rule:

What should be clear to both sides, by now, is the extent to which these massive corporations are making up the rules of online speech as they go along. In the absence of any independent standards or accountability, public opinion has become an essential part of the process by which their moderation policies evolve.

Sure, online platforms have policies and terms of service that run thousands of words, which they enforce on a mass scale via software and a bureaucratic review process. But those rules have been stitched together piecemeal and ad hoc over the years to serve the companies’ own needs — which is why they tend to collapse as soon as a high-profile controversy subjects them to public scrutiny. Caving to pressure is a bad look, but it’s an inevitable feature of a system with policies that weren’t designed to withstand pressure in the first place.

Whatever should happen to humane conservatism on the internet, I don’t know what will, but as a person who is somewhat conservative and who would like to be humane, I wish I knew. In light of all the above, one thing seems nearly certain to me: If I were on a major social media service and a vocal group of that site’s users started calling me homophobic or transphobic or a white supremacist and demanded that I be banned, I would be banned.

The Stonewall Riots Didn’t Start the Gay Rights Movement

By Catherine Halley

Despite what you may hear during this year’s fiftieth anniversary commemorations, Stonewall was not the spark that ignited the gay rights movement. The story is well known: A routine police raid of a mafia-owned gay bar in New York City sparked three nights of riots and, with them, the global gay rights movement. In fact, it is conventional to divide LGBTQ history into “before Stonewall” and “after Stonewall” periods—not just in the United States, but in Europe as well. British activists can join Stonewall UK, for example, while pride parades in Germany, Austria, and Switzerland are called “Christopher Street Day,” after the street in New York City on which the Stonewall Inn still sits.

But there were gay activists before that early morning of June 28, 1969, previous rebellions of LGBTQ people against police, earlier calls for “gay power,” and earlier riots. What was different about Stonewall was that gay activists around the country were prepared to commemorate it publicly. It was not the first rebellion, but it was the first to be called “the first,” and that act of naming mattered. Those nationally coordinated activist commemorations were evidence of an LGBTQ movement that had rapidly grown in strength during the 1960s, not a movement sparked by a single riot. The story of how this particular night and this particular bar came to signify global gay rebellion is a story of how collective memory works and how social movements organize to commemorate their gains.

The sociologists Elizabeth A. Armstrong and Suzanna M. Crage detail four previous police raids on gay bars in cities across the United States that prompted activist responses—and local gains—but that either faded from local memory, did not inspire commemorations that lasted, or did not motivate activists in other cities.

For example, San Francisco activists mobilized in response to police raids on gay bars in the early 1960s, which came to a head during a raid on a New Year’s Eve ball in 1965 that eventually brought down the police commissioner. This New Year’s Eve raid attracted wide media attention, garnered heterosexual support, and is credited with galvanizing local activists, but it was subsequently forgotten. In 1966, again in San Francisco, LGBTQ people rioted at Compton’s Cafeteria, smashing all the windows of a police car, setting fires, and picketing the restaurant for its collusion with police. The city’s gay establishment did not participate, however, and distanced themselves from the transgender and street youths and their political organization, Vanguard, behind the “violent” protest.

San Francisco was not the only U.S. city with gay rights activists gaining strength. In Los Angeles, the first national gay rights organization, the Mattachine Society, was founded years earlier, in 1951, and spawned chapters in other cities around the country. Bar raids in late-1960s Los Angeles also prompted resistance. The 1967 police raid on the Black Cat bar, for instance, led to a demonstration 400 people strong that garnered evening news coverage. That demonstration played a role in the founding of the leading national LGBTQ magazine, The Advocate. While the Black Cat demonstration garnered support from heterosexual activists for Chicano and Black civil rights, no further coordination occurred, and the event was not commemorated. When police again descended on the L.A. nightclub The Patch, patrons struck back immediately, marching to city hall to lay flowers and singing the civil rights anthem “We Shall Overcome.” But its anniversary passed without remembrance. Los Angeles activists did organize a one-year vigil on the anniversary of the night the L.A. police beat a gay man to death in front of the Dover Hotel, but this 120-person-strong rally and march to the police station did not inspire activists in other cities. Subsequent demonstrations were subsumed by the Stonewall commemorations.

Activists were busy on the East Coast before Stonewall, too. In Washington, D.C., LGBTQ veterans chose the Pentagon as their place to picket, making it onto national television with signs reading, “Homosexual citizens want to serve their country too.” Subsequent demonstrations targeted the White House and the offices of federal agencies. New York City’s Mattachine Society secured legal gains in 1966 when they organized a “sip-in” at the bar Julius’, securing the right of homosexuals to gather in public. None of these actions inspired commemoration, locally or in other cities, however, which has led scholars who study these pre-Stonewall protests to ask: Why not?

Four members of the Mattachine Society at a “sip-in” in 1966, demanding to be served at Julius’s Bar in Greenwich Village © Estate of Fred W. McDarrah via The National Portrait Gallery

There was an annual demonstration for gay civil rights before Stonewall, however, and it provides the best example of how gay politics were growing and changing before the riots. Beginning in 1965, Philadelphia LGBTQ activists began an annual picket of Independence Hall on the Fourth of July to protest state treatment of homosexuals. Soberly-dressed men and women with carefully worded signs walked solemnly in front of this iconic building where the Declaration of Independence and U.S. Constitution were debated and signed. These “Annual Reminders” were the result of coordination by activists in New York, Washington, and Philadelphia, evidence of burgeoning regional cooperation by gay rights activists in the 1960s. Yet these somber events unraveled in the week after Stonewall, and Philadelphia activists voted later in 1969 to shift the 1970 commemoration from a picket of Independence Hall to a parade in the streets on the Stonewall anniversary.

Gay politics had become more radical in the late 1960s, owing to the influence of the Black power movement, second-wave feminism, and the protests against the Vietnam war. Radical organizations advocating “gay power” had already sprung up in the 1960s, including in Greenwich Village, where the Stonewall Inn was located. These new activists stereotyped the actions of their forebears as conservative, erasing their contributions from a history that now was credited solely to Stonewall.

What was different about Stonewall was that organizers decided to commemorate it, and to make it a national event. At a meeting in November of 1969, regional activists broke with the respectable image of the Philadelphia “Annual Reminder” and vowed to secure a parade permit on the anniversary of the raid on the Stonewall Inn, calling it Christopher Street Liberation Day. These organizers reached out to groups in Chicago and Los Angeles who readily agreed to remember something that happened elsewhere, in part because it was one of the few acts of LGBTQ resistance to get widespread media coverage, including in national LGBTQ publications and the New York Times.

This media coverage was itself the product of previous ties between local LGBTQ activists and journalists—and the fact that the Stonewall Inn was so near to the offices of the Village Voice. Interestingly, San Francisco’s activists declined to participate because they had already made inroads with local politicians and clergy. As one member explained, “I did not think a riot should be memorialized.” Only a small breakaway group participated, to little local effect, in a city that today hosts one of the largest gay pride parades in the country. These coordinated marches in Los Angeles, New York, and Chicago in 1970 were the first gay pride parades, and sparked an idea that spread around the country—to 116 cities in the United States and 30 countries around the world.

It was this national act of commemoration that represented a truly new political phenomenon, not the riot itself. As Armstrong and Crage have written, “without the existence of homophile organizations elsewhere, many of them founded only in the late 1960s, a national event would have been unthinkable.” Stonewall was an “achievement of gay liberation,” and not its cause, and an achievement of collective memory and collective action, if not the first LGBTQ riot or protest.

It is notable that this achievement took the form of a joyful parade, rather than a somber picket like Philadelphia’s Annual Reminder. As the sociologist Katherine McFarland Bruce describes in her detailed history of pride parades in the United States, “planners settled on a parade format as the best way to accommodate diverse members and to produce the positive emotional experience that brought people together.” As early organizers noted, “a fun parade brings out more people than an angry march.” Unlike the Annual Reminder, which addressed the state in asserting the similarity of homosexuals with heterosexual citizens, parade participants celebrated their differences and aimed to change minds, not laws.

An LGBTQ parade through New York City on Christopher Street Gay Liberation Day, 1971. Getty

There were unique characteristics of Stonewall, of course. In his detailed history of the bar and those nights, the historian David Carter lists many: It was the only bar raid that prompted multiple nights of riots; it was the only raid that occurred in a neighborhood populated by lots of other LGBTQ people who might participate; and the bar was the city’s largest, located in a transportation hub surrounded by many public telephones that were used to alert media.

But Carter also notes that the riots were not inevitable, and were just a turning point in the United States’ burgeoning gay rights movement. New York City already had many gay activists “with the specialized skills to take on leadership roles to help shape and direct the event,” for example. He also gives special credit to the fact that several of the riots, including Stonewall and the Compton’s Cafeteria riots in San Francisco, occurred during police raids right after a period of liberalization. In San Francisco, Compton’s clientele only fought back after gaining hope from the city’s pre-Stonewall municipal liberalization towards homosexuality. In New York City (where the Stonewall riot took place), the police raid seemed out of step with the liberal administration of Mayor John Lindsay. As Carter summarizes, “revolutions tend to happen after periods of liberalization.”

As activists commemorate the Stonewall Riots in 2019, perhaps they should also lay plans for next year, to remember the fiftieth anniversary of the first gay pride parade in 2020. The nation finds itself again in an era of retrenchment after the liberalization of the Obama era. The year 1970 thus deserves to be remembered as the first national act of LGBTQ remembrance, if not the first act of LGBTQ resistance.

The post The Stonewall Riots Didn’t Start the Gay Rights Movement appeared first on JSTOR Daily.

How to Cure Groupthink

By Farah Mohammed

Social groups aim for harmony. It’s one of the primary objectives of every community—but there can be too much of a good thing. When the social pressure to maintain harmony becomes greater than individuals’ dedication to the task the group is pursuing, the group can ultimately pay dearly. This not-uncommon phenomenon is sometimes referred to as “groupthink.”

In the Journal of Business Ethics, author Ronald Sims examines the extent of the damage that groupthink can cause and, more importantly, how to identify and stem it in our own circles.

Political and Corporate Groupthink

Sims cites famous historical blunders attributed to the toxic effects of pressured group decision making, including Neville Chamberlain’s policy of appeasement toward Hitler; President Truman’s advisors’ support for escalating the war into North Korea, despite China’s warnings that it would respond forcefully; President Kennedy’s inner circle’s support of the infamous Bay of Pigs invasion, despite its slim chance of success; and President Johnson’s advisors’ backing of escalation in Vietnam, despite intelligence reports advising that the United States was unlikely to defeat the Vietcong.

Sims quotes Irving Janis, who developed the theory of groupthink:

All these groupthink dominated groups were characterized by strong pressures toward uniformity, which inclined their members to avoid raising controversial issues, questioning weak arguments, or calling a halt to soft-headed thinking.

More mundane, but no less important, is the effect groupthink has on businesses and corporations every day. Especially in today’s world, where a few private businesses hold an inordinate amount of power, the ability of a few select groups of people to maintain a clear ethical direction has never been more important.

The Telltale Signs

We’d all like to believe that our groups and our communities are impervious to the effects of groupthink. What leads to a dangerous level of groupthink, however, isn’t an inherent weakness, or a collection of morally-lacking agents ready and waiting to conspire.

Rather, it’s the result of a tight-knit, insulated group with a strong leader, potentially in a stressful situation with little hope for finding a satisfactory solution other than the one presented to them, or the one favored by the group.

Symptoms of a group at risk of groupthink, according to Sims, are:

  • Illusion of invulnerability
  • Collective efforts to rationalize the group’s course of action
  • Unwillingness to reconsider assumptions
  • Unquestioned belief in the group’s inherent morality
  • Stereotyped view of those outside of the group
  • Suppression of dissent within the group

The Cure?

Sims offers steps that leaders, organizations, and individuals can take to keep groupthink from settling into the psyches of their organizations.

Leaders can assign everyone the role of critically evaluating a decision, abstain from stating their own preferences, and bring in outside experts to challenge the group. They can also make “playing devil’s advocate” a standard part of decision-making procedure.

Organizations can encourage healthy debate and discussion, and not shy away from dissent or view it as a negative part of company culture. They can encourage training in and engagement with ethics as much as with other practical skills. Periodically rotating new members in and old members out of groups may reduce insularity and discourage the formation of overly rigid group dynamics.

Individuals can contribute too: thinking critically and following an internal moral compass are the best starting points for maintaining a group’s integrity.

The post How to Cure Groupthink appeared first on JSTOR Daily.

When Foster Care Meant Farm Labor

By Livia Gershon

In an effort to support children in foster care, Sesame Street has introduced Karli, a muppet living with “for now” parents until she can go back to her mother. Foster families, who get a small payment to offset the cost of caring for children, have been a central part of child welfare programs for the past century. Before that, historian Megan Birk writes, Americans depended on farmers to take care of kids in exchange for hard labor.

In the years after the Civil War, state and charity welfare workers commonly “placed out” children who they identified as “destitute or neglected,” often sending them to work on family farms in the Midwest. Tens of thousands of children from outside the region arrived, sometimes on “orphan trains” from the urban east.

Authorities viewed the farm placements as a win for everyone involved. Farmers got affordable labor. Governments and philanthropic organizations were relieved of the expense of running orphanages. And children got the chance to learn valuable work skills while living in a rural setting, widely seen as the ideal place for an American upbringing.

Birk writes that in some cases, the children were virtually a part of the families that took them in. If they worked hard on the farms, that was no different from what farmers expected of their own kids. But some farmers exploited the child laborers, beating them, denying them schooling or medical care, and sometimes overworking them for a season before sending them back to an institution.

The welfare organizations in charge of the programs often had little capacity to monitor the children’s situations. Some relied on volunteers who might or might not actually bother checking in on the families.

Birk writes that by the 1880s, charity workers were calling attention to these problems and seeking reforms. Over the next few decades, supervision did improve, with states putting resources into professional supervision of the placements.

At the same time, though, farm life was changing. Many people were leaving the countryside for growing cities, and farmers increasingly operated as tenants rather than owning their own land. In this climate, the skills learned working on a farm seemed less relevant to children’s success in the future. And doing hard labor and getting by without modern conveniences started to seem like inadequate training for a child who might hope to join the growing ranks of white-collar workers. One children’s advocate suggested that the best candidate for a farm placement was “a rather dull and backward but strong and good natured boy” who wouldn’t benefit much from a modern education anyway.

By the Progressive Era, Birk writes, child labor was widely seen as objectionable, and middle-class families led a shift toward seeing children as “priceless.” Rather than expecting children to work for their keep, welfare agencies increasingly paid urban or suburban foster families to take them in. The system represented by Sesame Street‘s Karli had arrived.

The post When Foster Care Meant Farm Labor appeared first on JSTOR Daily.

How War Revolutionized Ireland’s Linen Industry

By Matthew Wills

You probably know “Rosie the Riveter.” She’s the iconic incarnation of the women in the industrial workforce of World War II. With manpower siphoned off to war, womanpower was called upon to work in factories in the kinds of jobs few women had seen before.

But WWII was not the first time a deficit of male laborers opened doors traditionally closed to women. War can be a radical force, a great over-turner of traditions like the sexual division of labor. As Anne McKernan tells us, something similar happened during the Napoleonic Wars, particularly in the linen industry in Ulster.

From 1803 to 1815, the United Kingdom of Great Britain and Ireland was at war with France. With 300,000 men in the army and another 140,000 in the Royal Navy, manpower was absent from the homefront, just as demand for everything from food to clothing rose. War devoured textiles like linen, which was used for canvas, duck, and sailcloth. Linen merchants turned to women to maintain and increase production.

The province of Ulster is now split between the Republic of Ireland and Northern Ireland. Before 1800, Ulster had a strong tradition of linen production by farmers who spun and wove the flax they themselves grew. (Irish linens are still famous.) The women of farming families spun flax fibers on spinning wheels, while the men wove the resulting thread into linen cloth on their own looms.

[War] presented Irish entrepreneurs with a golden opportunity to snap the link between gender and commercial linen weaving; snapping that link, in turn, prepared the way for snapping the link between farming and weaving, the bi-occupations of rural Ulster households. War-time innovations in the linen industry subsequently turned independent farmer-weavers into rural proletarian weavers.

Mechanization began to replace hand-spinning through the first decade of the nineteenth century. But spinsters (only later did the word come to mean unmarried women) could earn three times as much weaving. Still, the merchants of the Irish Linen Board had to overcome traditional gender divisions in this home-work system. And, to keep up production, they needed to do it fast.

The innovators believed time was of the essence. They could not wait until women found their way into the labor pool of weavers. Increased supplies of coarse yarn from mechanical spinning would create greater demand for weavers at a time when the labor supply was contracting in the face of increasing demand from the agricultural sector.

McKernan reveals how the Irish Linen Board recruited female weavers. For one thing, there were the higher wages. But there were also incentives: cash prizes for the first 200 yards and premiums for cloth with higher thread counts. The newest loom technology, which could double the volume of cloth a worker could produce, was distributed for free. This wasn’t altruism: linen merchants demanded the right to inspect homes with the new looms, which were no longer the workers’ possessions. “If the inspector found ‘obstinacy in attention or idleness be preserved in,’ he had discretion to ‘remove the loom.'”

Nevertheless, “young women responded enthusiastically to weaving.” From 1806 to 1809, over 1,800 free looms were distributed. One six-month period saw 300 women claiming prizes, which cleaned out the prize fund. “Within a short time, female weavers took on apprentices. Besides providing substitute workers for male weavers engaged in war, the new weavers would prove to have long term consequences on the direction of Irish linen industry.”

Napoleon was finally defeated in 1815. Unlike after WWII, however, the women were not thrown out of the industry when soldiers returned to civilian life. The market was too hot, even with the massive drop in war demand. By 1851, at least a third of Irish linen weavers were women. Even more worked in cotton weaving. The linen cloth market simply demanded large numbers of weavers. “Commercial interests,” writes McKernan, “had no incentive to exclude” women from the industry.

The post How War Revolutionized Ireland’s Linen Industry appeared first on JSTOR Daily.

The Dangerous Game of Croquet

By Livia Gershon

Croquet arrived in the U.S. from England during the Civil War. It immediately became hugely popular and was hailed as a genteel, refined activity appropriate for groups of mixed ages and genders. But, as historian Jon Sterngass writes, to some critics it represented a danger to female morality.

Today, we might expect criticism of croquet to run along the lines expressed by Mark Twain, who called the game “ineffably insipid.” But Sterngass writes that many observers were disturbed by the way women shortened their dresses to play more comfortably, and the way young people took the co-ed sport as an opportunity to flirt. One magazine described the game as a “source of slumbering depravity, a veritable Frankenstein monster of recreation” and suggested that “it would be well if the enthusiasm of the clergy and laity were enlisted for suppressing the immoral practice of croquet.”

Croquet apparently had the potential to stir up not just lust but also the other deadly sins of anger and envy. Milton Bradley’s patented croquet set came with this advice to beginners: “KEEP YOUR TEMPER, and remember when your turn comes.” The International Herald Tribune reported that a woman testified during a separation hearing that her husband refused to speak to her for days after she questioned whether his ball had really gone through the hoop. The judge responded, “I do not think there is any game which is so liable to put one out of humour as croquet.”

Sterngass writes that the combination of co-ed play and intense competition challenged Victorian ideas about benevolent, moral womanhood. The game’s popularity also challenged male superiority in competitive endeavors. It seems that women frequently bested their male companions and were often—rightly or wrongly—accused of cheating. Men complained about women using illegal techniques like the “push shot,” or even using their dresses to conceal the ball while shuffling it along the lawn. An 1865 croquet manual complained that “We are aware that young ladies are proverbially fond of cheating at this game; but as they only do it because ‘it is such fun,’ and also because they think that men like it…” Sterngass notes that this kind of comment doesn’t actually explain their behavior, however: female players were well aware that men didn’t appreciate cheating by women, since men complained about it constantly.

A more shocking violation of Victorian propriety emerged from a variation of the game called “tight croquet,” in which players could put their ball next to their opponent’s, plant their foot on their own ball, and smash it with the mallet to send the other ball flying. The titillating overtones were teased out in the caption of a Punch cartoon featuring a woman revealing a bit of ankle while performing the maneuver: “Fixing her eyes on his, and placing her pretty little foot on the ball, she said, ‘Now then, I am going to croquet you!’ and croquet’d he was completely.”

The post The Dangerous Game of Croquet appeared first on JSTOR Daily.

to put the point plainly

By ayjay

Nolan Lawson:

Get off of Twitter.

You can’t criticize Twitter on Twitter. It just doesn’t work. The medium is the message.

There’s an old joke where one fish says to the other, “How’s the water today?” And the fish responds, “What’s water?” On Twitter, you might ask, “How’s the outrage today?” (The answer, of course, is “I hate it! I’m so outraged about it!”)

Get off of Twitter.

Wait, have I said this before? Maybe only two or three hundred times. 

But here’s why I keep saying it: The decision to be on Twitter (or Facebook, etc.) is not simply a personal choice. It has run-on effects for you but also for others. When you use the big social media platforms you contribute to their power and influence, and you deplete the energy and value of the open web. You make things worse for everyone. I truly believe that. Which is why I’m so obnoxiously repetitive on this point.

Just give it a try: suspend your Big Social Media accounts and devote some time to the open web, to a blog of your own — maybe to micro.blog as an easy, simple way in. Give it a try and see if you’re not happier. I know I am. 

A Mini History of the Tiny Purse

By Catherine Halley

Blame the Balenciaga IKEA bag. When the $2,145 luxury lambskin version of the familiar blue plastic shopping bag appeared on the runway in June 2016, it was the beginning of the end of a glorious era of capacious hobo bags, boat totes, and bucket bags. The upscale counterfeit triggered a backlash against fashion’s flirtation with so-called poverty chic, but also against gigantic bags in general. From a 19-gallon capacity, there was nowhere to go but down.

Even Meghan Markle—whose first official public appearance with Prince Harry spiked sales of Everlane’s roomy (and relatively affordable) leather Day Market Tote—took up her duchess duties and swapped commoners’ carryalls for dainty, handled purses by high-end labels, including Gabriela Hearst, DeMellier, and Strathberry. Cue sellouts, waitlists, and crashed websites: Tiny purses were officially in.

The petite purse trend reached its nadir in February 2019, when French label Jacquemus debuted the Mini Le Chiquito, a postage-stamp-sized version of its bestselling handbag. Barely big enough to hold a couple of breath mints, the teeny-weeny tote got big LOLs on social media, where it drew comparisons to binder clips and Barbie accessories. Jacquemus was in on the joke; the bag was made for the runway and not for sale. Nevertheless, it inspired imitations. Louis Vuitton and Prada soon introduced their own nano-bags.

Does purse size matter? For women, the purse has always been political, a reflection of changing economic realities and gender roles. While a large bag—however ugly or expensive—will always have a certain utilitarian value, small bags have historically been mocked and derided—and their female wearers with them.

Until the late eighteenth century, purses were small, unisex accessories, used to hold money and nothing else; they had more in common with wallets than handbags. They might be worn tucked in a pocket or dangling from a belt. At the court of Versailles, round-bottomed drawstring bags—often made of velvet and intricately embroidered with the owner’s coat of arms—held one’s gambling winnings or charitable donations. According to Miss Abigail Adams, who attended mass in the Royal Chapel on Pentecost in 1785, “the lady who goes round to collect the [alms] in a small velvet purse… was more elegantly dressed than any other person. After the king had entered, she went round to the knights, and with a courtesey the most graceful, presented her little purse to each. I am sure no one could have refused putting a louis in.”

A French bag made of horsehair and silk, 1865 (Artstor)
A silk pair of British pockets from the early 18th century (Artstor)
A French gaming purse from the late 17th century (Artstor)
A Japanese Inrō with Rinpa Style Kanzan and Jittoku from the first half of the 19th century (Artstor)
A European beaded coin purse, 1780–1810 (Artstor)
An Incan feathered bag from the 15th–early 16th century (Artstor)
An Italian stamped leather bag from the first quarter of the 19th century (Artstor)
A knitted American coin purse, 1830–50 (Artstor)

Women didn’t need to carry anything but cash in their purses at the time, because their wide hoop petticoats allowed space for roomy pockets. While men had pockets of various sizes sewn in their coats, waistcoats, and breeches, sometimes including long “bottle pockets” concealed in coattails, women’s pockets were separate garments, worn on a ribbon around the waist, under the gown, and accessed through slits in the gown’s seams. According to James Henry Leigh Hunt, a lady’s pocket might hold her purse as well as other essentials, including “a pocket-book, a bunch of keys, a needle-case, a spectacle-case, . . . a smelling-bottle, and, according to the season, an orange or apple.” It might be bought, lost, or stolen, as in the children’s rhyme beginning: “Lucy Locket lost her pocket.” Its closest contemporary equivalent is not the stylish purse, but the serviceable fanny pack. The enormous muffs popular with both sexes in the 1780s also served as carrying cases for tobacco, sweets, handkerchiefs, and occasionally even a small dog. From the time of the handbag’s introduction in the late eighteenth century, its size would rise and fall in tandem with the volume of women’s pockets and muffs.

Larger purses did exist in the eighteenth century, but they were “workbags,” used for transporting sewing, embroidery, or knotting instruments and materials. Workbags may have suggested or served other uses, however. In 1769, Lady Mary Coke saw ladies knotting at the opera and admitted: “I never knott, but the bag is convenient for one’s gloves and Fan.” The workbag—often a beautifully embroidered work of art in its own right—advertised one’s feminine accomplishments and industry rather than one’s fashion sense.

A lantern-like British reticule with ribbon trim from the first quarter of the 19th century (Artstor)

The dramatic change in women’s fashion in the late 1780s—accelerated by the French Revolution of 1789—put an end to the pocket. Bulky underpinnings would have ruined the slim line of the columnar white gowns of the Directoire and Empire, which emulated the diaphanous draperies of classical statuary. Small, handheld purses called “reticules”—often decorated with tassels, fringe, or embroidery— became essential accessories. Women wore “a more or less ornamental bag with each gown, some being fastened to the waist, others suspended by long ribbons from the arm.” In addition to providing much-needed storage, reticules enlivened the simple, high-waisted silhouette while calling attention to newly bared arms and graceful hands. At the same time, women began to wear drawers or underpants, because their gowns were so body-conscious and transparent, and cashmere shawls, for warmth.

The term “reticule” comes from reticulum, the Latin word for “net.” Many of these early reticules were netted, and netting purses became a popular female pastime, as Mr. Bingley noted in Jane Austen’s Pride and Prejudice. Reticules might also be made of fabric, embroidered or ornamented according to the latest trends, providing a conspicuous and relatively affordable way for women to follow fashion. During Napoleon’s campaigns, reticules mimicked flat military sabretaches, or sported sphinxes, or portraits of Bonaparte himself. In December 1801, the nomenclature was so new that Catherine Wilmot, an English tourist in Paris, felt it necessary to define it in a letter, making reference to the earlier and more familiar form of handbag, the workbag:

We have not seen Bonaparte yet, except adorning ‘Reticules’ (which are a species of little Workbag worn by the Ladies, containing snuff-boxes, Billet-doux, Purses, Handkerchiefs, Fans, Prayer-Books, Bon-bons, Visiting tickets, and all the machinery of existence).

Napoleon’s wife, Joséphine, was instrumental in popularizing the new style of dress, with all its attendant accessories.

Reticules fully embroidered with glass beads resembled miniature mosaics. These beaded bags were:

…marvels of patience and eyesight. Tiny, almost imperceptible beads of every hue and shade were woven or knitted into a firm textile, that has outlasted the memory of those who made and used these gorgeous receptacles. Pastoral scenes and quaintly costumed figures were wrought with a fidelity to detail that is marvelous.

Several examples of these sturdy reticules survive in museum collections to this day.

An American bag made of linen and glass beads, 1838 (Artstor)

Satirists dubbed the new must-have accessory the “ridicule,” because it was so small and insubstantial as to be virtually useless. In Austen’s Emma, the ridiculous Mrs. Elton carries a “purple and gold ridicule.” George Cruikshank caricatured the reticule in his Monstrosities of 1822, depicting fashionably dressed strollers in Hyde Park. Even fashion magazines adopted the pejorative term: the February 1804 issue of the Lady’s Monthly Museum included a fashion plate captioned: “A Kerseymere Spencer… with Tippet. Purple ridicule.”

But reticules had another, more complimentary nickname: “indispensables.” On September 9, 1802, Eliza Southgate of Massachusetts wrote to her mother that a friend visiting Paris had “sent me a most elegant indispensable, white lutestring, spangled with silver.” And, during Lord Melville’s impeachment trial in London in 1806, Charles Kirkpatrick Sharpe observed that “rows of pretty peeresses… sat eating sandwiches from silk indispensables.” These contrasting depictions of the reticule—ridiculous or indispensable, frivolous luxury or practical necessity—capture the cultural ambivalence surrounding the new fashion, and fashionable clothing in general, in the new post-Revolutionary political and economic climate.

However, contemporary etiquette guides suggest that reticules were not so “indispensable” that women gave up their pockets entirely. Nostalgia for the commodious and concealed pocket remained strong. Theresa Tidy’s 1819 advice manual Eighteen Maxims of Neatness and Order advised: “Never sally forth from your own room in the morning without that old-fashioned article of dress—a pocket. Discard forever that modern invention called a ridicule (properly reticule).” By 1890, the magazine The Decorator and Furnisher lamented “the scarcity of pockets in women’s attire” which necessitated “the survival of the old fashion of carrying bags and satchels.” The purse, not the pocket, was seen as the temporary interloper. But recurring predictions of the purse’s death proved to be unfounded; it had become so fashionable that its function (or lack thereof) was irrelevant.

Today, when women’s pants, dresses, and even wedding gowns are frequently equipped with spacious pockets, a tiny purse might well be enough to hold any other essentials. It’s a fashion statement, to be sure, but it also makes other kinds of statements, signaling a minimalist lifestyle, a low-maintenance personality, or, perhaps, an entourage of PAs, stylists, and servants who handle life’s baggage. The nano bag’s days may be numbered, though. With the erstwhile Meghan Markle—now Duchess of Sussex—having just had a royal baby, don’t be surprised if roomy diaper bags are the new black.

The post A Mini History of the Tiny Purse appeared first on JSTOR Daily.

Dancing through Dark Times

What can dance contribute to the contemporary politics of resistance? If politics is about movement—uprising, oppression, resistance, setback, advance ...

The post Dancing through Dark Times appeared first on Public Books.

Style Tips from the Harem

By Livia Gershon

In the nineteenth century, Western visions of “the Orient,” and particularly the harem, were populated by exotic, submissive women. But when American missionary women spent time in Turkey, they brought back a very different view, as architecture and design scholar Gülen Cevik explains.

Cevik writes that in the nineteenth century, homes in the Ottoman Empire were becoming increasingly segregated by gender. According to Cevik, “The informality of seating habits and the lack of clearly defined individual spaces were legacies of the Central Asian nomadic past of the Ottomans, where personal space was much more casually defined.” The typical seating arrangement was a divan—a low, continuous surface along the wall where people might sit in a variety of positions. As many Turks converted to Islam, however, separating homes into men’s and women’s quarters became an important way to prevent physical contact between men and women that the religion forbade.

For European and American travelers, the separation of quarters was a matter of intense interest. There was the selamlik, or “place of greeting,” for men. Cevik notes that the harem or haremlik (literally meaning “forbidden” or “prohibited”) was designated for women. Male tourists, banned from the harem, wrote imaginative stories of beautiful women kept imprisoned for men’s pleasure. Western painters reimagined flowing women’s garments, worn for modesty, as clingy and revealing.

Female Christian missionaries—both the wives of male missionaries and single women—had very different experiences of the harem. Many of them traveled to “heathen lands” with the idea of uplifting the status of women and freeing them from backwards sexism. Cevik writes that these American women “became participants in an intense dialogue with their ‘heathen sisters’ about their status as women and wives in the intimacy of their home.”

Missionary women often gave talks to raise funds for further missionary activity and wrote letters that were published in religious journals. Through these channels, the women’s observations about Turkish life and the harem made their way into U.S. life. One Methodist Missionary Society publication describes a returning missionary inviting a church’s children to “a Turkish tea party,” asking them to remove their shoes and sit on cushions from the church pews.


During the years of peak American missionary activity in Turkey, style magazines and etiquette books increasingly featured Turkish clothing and furniture. Turkish design elements—seating, fabrics, and plush cushions—made their way into many middle-class U.S. homes. Americans often created “Turkish corners” with these furnishings. Cevik notes how dramatically this interpretation of a harem contrasts with much Western writing about that setting.

For U.S. men, Cevik writes, Turkish elements might feature in “smoking rooms”—the man caves of their day—offering “fantasies about far-off exotic places where men could still enact the rituals and special privileges of their gender.” But for women, the meaning of Turkish design was completely different. Avant-garde groups like the Aesthetic Dress Movement and the Rational Dress Society recommended Turkish clothing as more comfortable than American styles. Amelia Bloomer, publisher of the early feminist journal The Lily, described Turkish pantaloons as a better alternative to Western dress. Thus, even as American missionaries expressed sympathy for oppressed “oriental women,” they also helped make Turkish dress a symbol of women’s liberation.

The post Style Tips from the Harem appeared first on JSTOR Daily.

When Big Business Backed Social Security

By Livia Gershon

This April, the annual Social Security Trustees report triggered a predictable round of hand-wringing by business-backed conservative analysts. The program’s trust fund is currently set to be exhausted by 2035. Although this outcome could be averted by raising the cap on contributions by high earners, conservatives have instead called for the government to essentially ditch the program in favor of private savings.

As economic historian Sanford M. Jacoby writes, conservative hostility toward Social Security has been present from the moment of the program’s birth. But Jacoby also finds that the current shape of social benefits is partly a product of big businesses pushing for public benefits to be organized in a way that helps them.

Jacoby focuses on the career of corporate executive and technocrat Marion Bayard Folsom. Folsom was a pioneer in welfare capitalism, the idea that corporations—not the government—should take responsibility for workers’ long-term well-being. He helped design programs for Eastman Kodak Company in the 1910s and ’20s, including employee cafeterias, athletic facilities, funds for sick and injured workers, and profit sharing. Only a handful of big corporations offered such benefits in that era, but many reformers and academic experts saw it as the wave of the future.

Jacoby writes that the Great Depression was a blow to welfare capitalism. Public opinion shifted toward support for more public programs. To Folsom, the idea of expanding state-provided benefits wasn’t all bad. If the government taxed employers to provide benefits like unemployment insurance and retirement support, it would narrow the gap between Kodak and competitors who didn’t provide benefits.

But most employers adamantly opposed the founding of social welfare programs including Social Security. Jacoby writes that business groups attacked Social Security as an “immoral departure from the ‘American way.’”

Even Folsom worried that if government programs were too generous they would reduce workers’ dependence on their employer. He argued for minimal government benefits, combined with tax incentives to spur employers to provide a more generous set of benefits. Others, including populists like Huey Long, pushed for a more comprehensive welfare state along the lines of those that many Western European countries eventually established.

Ultimately, the package of benefits included in the New Deal fit fairly closely with Folsom’s vision. And, with the advent of World War II, other business leaders came around to his way of thinking. The federally mandated wartime wage freeze encouraged employers to offer welfare capitalism-style programs like pensions and health insurance, and to support government subsidies for these programs.


Dwight D. Eisenhower’s election in 1952 represented the first Republican presidential victory in twenty years. By this time, Social Security had broader support. Instead of cutting it, Eisenhower modestly expanded the program. And, in 1955, he appointed Folsom to be Secretary of the Department of Health, Education, and Welfare.

Today, with the decline of stable, long-term jobs, there may be more need for government welfare programs than there was in the 1950s. But there may also be less incentive for businesses to back them.

The post When Big Business Backed Social Security appeared first on JSTOR Daily.

The Online Lie Detector Is No Better Than the Polygraph

By Livia Gershon

Artificial intelligence researchers recently announced a prototype for an “online lie detector.” Experts say it probably doesn’t actually work that well in the real world. Then again, the historian Ken Alder writes, that didn’t stop the old-fashioned polygraph from gaining enormous popularity.

According to Alder, in the second half of the nineteenth century, some European psychologists began monitoring patients’ blood pressure, respiration, and pulse rates, investigating the way these physical signs correlated with emotional changes, tension, or reactions to sharp noises. In the 1910s, Hugo Münsterberg, a German-American psychologist, and his student William Moulton Marston began using these measurements to attempt to determine whether witnesses were lying about criminal matters. Marston would go on to create Wonder Woman, with her Lasso of Truth.

American police reformers popularized the polygraph test as an alternative to brutal police interrogations in the 1920s and 30s. Before long, polygraph use spread to the State Department—which used it to screen civil servants suspected of homosexuality—and to corporate America. By the middle of the century, cops, managers, and industrial psychologists were conducting two million polygraph exams a year.

Alder writes that the machines appealed to a growing public desire for “a dispassionate search for truth conducted by impersonal rules.” That reflected a broader trend at a time when cities and businesses were becoming larger, making it impossible to rely on personal trust. The contemporaneous rise of intelligence testing, industrial management techniques, and other “scientific” methods for managing human behavior spoke to the desire to make sense of an increasingly complicated world.

And yet, for all its associations with the objective pursuit of the truth, several prominent proponents of the polygraph acknowledged that the machine would work only if its subjects believed in it. To make that happen, they used deceptions of their own. For example, polygraph entrepreneur Leonarde Keeler used a marked deck of cards to convince subjects that he could tell whether they were lying about which card they were looking at. Then as now, Alder argues, the actual functioning of the machines was almost beside the point. Some police examiners have achieved the same result—convincing a suspect to confess—by having them place their hands on a photocopy machine, which then produces a sheet of paper printed with “LIAR!”


From the beginning, judges rejected the use of lie detectors in courtroom trials, but allowed police to use them to extract confessions. Since the vast majority of criminal convictions occur not through trials but through confessions and plea bargains, that’s generally good enough for cops and prosecutors.

By the time Alder was writing, in 2002, the use of the polygraph in corporate settings was declining, and the general public was becoming more skeptical of its value in criminal cases. Still, as the buzz around the new online lie detector suggests, people love the idea of a machine that tells us who to trust.

The post The Online Lie Detector Is No Better Than the Polygraph appeared first on JSTOR Daily.

working the refs

By ayjay

Last Sunday afternoon, in the aftermath of the first game of the NBA playoff series between the Houston Rockets and the Golden State Warriors, there was much online huffing and puffing about whether the game’s referees had failed to call fouls against the Rockets’ James Harden and Chris Paul.

But something important was overlooked in said huffing and puffing: the fact that, whether Harden and Paul were fouled or not, they were desperately trying to get fouls called against their opponents. And that makes the last few seconds of that game a kind of parable of our cultural moment.

It’s possible that the Warriors’ Draymond Green grazed James Harden as Harden came to earth after shooting — after, that is, missing a shot quite badly, possibly because he was thinking less about making the shot than about getting the ref to believe that Green had fouled him, which he did by falling, completely unnecessarily, to the ground. The ball ended up in Chris Paul’s hands, and Paul charged into the Warriors’ Klay Thompson while flailing his arms wildly, determined to force a call. (He did not get the call, and in his rage shouldered the referee, which has earned him a fine.)

This kind of thing has, of course, long been the bane of soccer: players who might have a legitimate chance to score a goal, or at least to get off a shot on goal, fling themselves to the ground and roll about in feigned agony, hoping that they will get a penalty called or a yellow card assigned to the opponent.

I have come to believe that this is what almost all of our culture is about now: working the refs. Trying to get the refs, whoever the refs might be in any given instance, to make calls in our favor — to rule against our enemies and for us, and therefore justify us before the whole world.

What are students doing when they try to get speakers disinvited from their campus? Or when Twitter users try to get other Twitter users banned from the platform? Or when people try to get executives or members of some board of directors fired from their jobs? In each case, it’s an appeal to the refs. These people are not trying to persuade through reasoned argument or to attract public opinion to their side through the charm of their personality. They’re demanding that the designated arbitrators arbitrate in their favor. (Sometimes, as in the case of the college admissions scandal, they just bribe the refs.)

And it’s easy to see why people would think this way: If I assume the point of view underlying this habit, it means that nothing that goes wrong is ever my fault. If anything that I want to go my way doesn’t go my way, it’s because the referees didn’t make the right call. It’s never because I made any dumb mistakes, or indeed had any shortcomings of any kind. Things didn’t go my way because, whether through incompetence or bias, the refs suck. I would’ve won if it hadn’t been for the stupid refs.

I think this is a particularly attractive strategy in our current moment, especially on social media. As I wrote a couple of years ago,

When a society rejects the Christian account of who we are, it doesn’t become less moralistic but far more so, because it retains an inchoate sense of justice but has no means of offering and receiving forgiveness. The great moral crisis of our time is not, as many of my fellow Christians believe, sexual licentiousness, but rather vindictiveness. Social media serve as crack for moralists: there’s no high like the high you get from punishing malefactors. But like every addiction, this one suffers from the inexorable law of diminishing returns. The mania for punishment will therefore get worse before it gets better.

Call-out culture has many, many mechanisms of enforcement but none of forgiveness or restoration. A culture that knows only how to punish creates an environment in which, as Freddie deBoer has said, “everyone’s a cop”; but it simultaneously creates disincentives for people to admit that they might themselves need policing. Because who wants to apply the single-sanction one-strike-and-you’re-out criterion to themselves?

These reflections might help to explain a phenomenon that Michael Lewis describes on his new podcast “Against the Rules”: that the NBA is dealing with unprecedented levels of complaint about its officials at the moment when the league gives those very officials unprecedented levels of scrutiny, and unprecedented levels of training, and unprecedented opportunities to review and correct bad calls.

If refs are doing their job better than ever and simultaneously catching more grief for their errors, that just might be a result of our expecting more of them than is reasonable. In the NBA, and also in society at large, we do better when we try to solve problems ourselves rather than try to manipulate the refs into solving them all for us. I hope the Rockets get swept by the Warriors. (And that the Warriors get swept in the next round, because their moaning and bitching are almost as bad.)

UPDATE: I realized something right after I posted this — that’s always how it happens, isn’t it? — which is that by circling back to the NBA at the end of the post I elided a major distinction: The NBA refs may be “doing their job better than ever,” but that doesn’t mean that the same can be said for all our society’s referees. Indeed, many of them are doing a very bad job indeed. More on that in another post. (This is also what I get for writing a short post about an issue that needs to be treated at length.)
