In 1778, the Continental Congress decreed that it was “the duty of all persons in the service of the United States … to give the earliest information to Congress or any proper authority of any misconduct, frauds or misdemeanors by any officers or persons in the service of these states.”
This “founding” attitude has fared… rather ambiguously ever since. As law professor Shawn Marie Boyne shows in her review of the legal protections for whistleblowers in government and industry, “the country’s treatment of whistleblowers has been a conflicted one.” Regardless of the organizational model (public, private, non-profit), those in power who have had the whistle blown on them rarely applaud whistleblowers. Heroes to some, whistleblowers are often labeled traitors by those in power, as in the cases of Boyne’s examples, Edward Snowden and Chelsea Manning.
“The question of whether a whistleblower will be protected or pilloried depends on the interests of those in power,” Boyne writes. Leaks to the media from officials for political advantage are standard operating procedure. But those outside this inner circle don’t fare as well: Snowden is in exile and Manning is in jail. Boyne notes that three NSA employees who did do what critics said Snowden and Manning should have done, that is, go through the system and use the proper channels to report government abuse, “found their lives destroyed and reputations tarnished.”
Retaliation against whistleblowers hit some of the pioneers, too, Boyne notes. Ernest Fitzgerald, who revealed billions in cost-overruns in a military transport program in 1968, was demoted after President Richard Nixon told his supervisors to “get rid of the son of a bitch.”
That same president ordered a break-in at Daniel Ellsberg’s psychiatrist’s office in 1971, in hopes of finding dirt on Ellsberg. An analyst for the RAND Corporation, Ellsberg had released the Pentagon Papers to the New York Times. This classified historical study of the war in Vietnam revealed that the government realized early on that the war could not be won. Defending his actions in 1971, Ellsberg said, “I felt that as an American citizen, as a responsible citizen, I could no longer cooperate in concealing this information from the American public.”
Retaliation against whistleblowers is, as scholar Michael T. Rehg and his co-authors show, quite gendered. “Male whistleblowers were treated differently depending on their power in the organization, but female whistleblowers received the same treatment regardless of the amount of organizational power they held: Their status as women overrode their status as powerful or less powerful organization members.” These authors also found that “women who reported wrongdoing that was serious or which harmed them directly were more likely to suffer retaliation, whereas men were not.”
While laws have been strengthened to help whistleblowers, presidents and CEOs nevertheless continue to go after them.
So the Mueller Report is finally out. President Trump has called it a “total exoneration,” but we don’t have to take his word for it. After special counsel Robert Mueller’s comprehensive, two-year investigation into serious allegations of Russian electoral interference, conspiracy, and obstruction of justice, we’re free to read what the special counsel’s findings actually are, if we so choose—albeit with a number of careful redactions from Attorney General William Barr that, along with his four-page summary, framed public conversations about the report in important ways.
As Christopher Bail explains in an article for Sociological Theory, administrations have an undeniable power to lead in the shaping of meaning, reality, and public debate.
Call it spin, call it doublespeak, call it propaganda, call it taurascatic bullshit—in an age of growing distrust of big social institutions, from corporations to the government, many of us are quite aware of the social concept of framing a narrative, especially by those who wield power. Framing affects how listeners view reality. It can persuade us one way or another through certain rhetorical or linguistic means. What’s more, a particular framing doesn’t just rise spontaneously to the top of the public consciousness on its own legitimate merits, because it happens to be the neutral truth. We would be naive to think so, yet many people do. When it comes to the power of states, in Bail’s view, framing has to be knowingly crafted, with two realities: a front-stage presentation for public consumption, and a secret collective coordination behind the scenes. Their reality becomes your reality, one way or another.
Perhaps fewer people understand that, when it comes to language itself, the very act of speaking is not “transparent and innocent,” as many people believe. We don’t say exactly what we mean; we naturally frame our language and make linguistic choices, even unknowingly, that reflect how we see the world. Using language is rarely neutral. It’s often meaningfully leaky. Even black box redactions don’t always hide all the information, which can sometimes be inferred from the surrounding context. We may think we get plain meaning from only the words in a language, or that lexical, literal meanings are really what words are all about.
But this is not actually how we use language. Language use fundamentally involves questions of people’s relationships to each other and of power based on what we know about each other—the power to craft certain illusions of reality and have others fall in line with our views of the world. Language, leached of meaning or detached from reality, can become a kind of lie. Using the right kind of language, we can even make some listeners believe certain things that aren’t so—sometimes just by telling the literal truth.
To Dwight Bolinger, “literal truth—the kind one swears to tell on the witness stand—permits any amount of evasion.” He explains: “The most insidious of all concepts of truth is that of literalness. The California prune-growers tell us that prunes, pound for pound, offer several times more vitamins and minerals than fresh fruit; literally true. The oil industry advertises that no heat costs less than oil heat, which has to be true because no heat costs nothing at all.” Those savvy enough to see through it will simply eat fewer prunes and heat their homes differently from those who fall for it. But declaring that you didn’t lie but told the literal truth in this way seems a kind of hollow ethic, a careful weaseling around the words that some lawyers seem particularly adept at.
So, yes, it’s perhaps literally true that no collusion was found by Mueller’s team, because they weren’t actually investigating collusion but a much harder-to-prove charge of criminal conspiracy. Collusion and conspiracy may be related, but they’re not the same. Yet in the resulting media commentary, this framing by the Trump administration was largely successful in amplifying the confused conflation of two separate concepts: one is the fact that Mueller did not ultimately find definitive proof that the Trump campaign illegally conspired with Russia; the other is the non-legal Trump talking point of “no collusion,” even though not finding something is certainly not the same as there being none (at least you would hope so, if you ever can’t find your house keys or wallet).
Barr’s full-throated defense of Trump is centered on language, not on refuting the evidence of his actions. It’s a leapfrogging interpretation of language, seemingly empty of actual meaning, ethics, or the facts and evidence in the case. Barr edited Mueller’s words out of their original context and in some cases made calculated leaps in his wording that implied Mueller’s opinions aligned with his own. He justified his right to do as he chose by framing the report, once it had been delivered, as his “baby.” By taking precise legal wording and translating it loosely but plausibly into colloquial speech for a wider audience, Barr was able to manipulate and frame the resulting discussion at the crucial moment, ready for a news-hungry media to broadcast. The media amplified his framing, together with a confusion of opinions in the absence of an actual report to report on. It seems to have done what it was meant to do, even though some of us noticed the man behind the curtain and the cracks in his interpretation. This is because Barr was talking directly to and for a particular audience, those who already align with his views, not everyone who happened to be listening. The reframing of otherwise bad news only has to satisfy and make sense to likely supporters, not convince antagonists.
This is not news to lawyers, because this is what lawyers do to convince people to see reality their way: gaming language. Roger Shuy discusses a study done on eyewitness recall (of a car collision on film) in which a lawyer’s question phrased as “About how fast were the cars going when they smashed into each other?” got witnesses to report much higher speeds than the phrasing “About how fast were the cars going when they hit each other?” A week later, those who had been asked the first question reported they’d seen broken glass twice as frequently as the other group, even though the film showed no broken glass at all. So the linguistic choices lawyers make can have a major impact on people’s beliefs and memories, for good or for ill.
More and more, the law is basically a game of language. Words used for legal purposes do need precise definitions and interpretations in order for anyone to be able to decide how laws can be justly applied. But it also matters how one arrives at this linguistic understanding. The linguistic decisions made by legal professionals, unlike a crossword or even cross words from a judge, can result in people being punished, incarcerated, or killed… or perhaps escaping any kind of consequence for offenses they clearly did commit.
Despite working closely with language in ways that can profoundly affect people’s lives, many lawyers and judges are remarkably ill-informed about language, as law scholars and linguists have pointed out. Legal language researchers such as Edward Finegan have cautioned that judges increasingly seem to decide court cases simply by looking words up in a dictionary and cherry-picking certain meanings over others to suit their purpose.
One such case is Muscarello v. United States, from 1998. The law was that anyone caught using or carrying a firearm (presumably on their person) while selling drugs would get an additional five-year sentence. The defendant, Muscarello, was not carrying a firearm in that sense at all: it was locked in the glove compartment of his truck. Nevertheless, this was redefined as “carrying” a firearm, because the judge in the case saw that one of the first definitions in a dictionary for the word happened to relate to “carrying” (i.e., transporting) things in vehicles.
Not all judges seem to be aware that dictionaries are not authorities set in stone from on high, but are written by fallible people carrying, if not deadly weapons of words, at least their own agendas and beliefs. Submitting to the authority of a dictionary to decide a case absolves judges of a moral responsibility. But dictionary entries can be framed, too. Words and meanings change over time, so it matters what edition and what kind of a dictionary a judge consults, for example. Word senses may be ordered in different ways, so it can’t be assumed that the first definition is the most basic. Nor can a random definition just be picked as evidence, without taking into account the context that the meaning is used in. These are all fairly simple and obvious features of dictionaries that are frequently overlooked in legal contexts, just when it matters most that careful and ethical consideration should go into the linguistic interpretation of the law.
According to Bail, it is crucial revelations of clandestine coordination that can abruptly call into question the authority of the public framing an administration has crafted to tell a more positive story about its own actions and policies. These revelations can take the form of the Mueller report or information leaks from whistleblowers such as Daniel Ellsberg and Edward Snowden. The more administrations try to adjust their framing to account for this new information and justify their suddenly revealed behind-the-scenes secret dealing to the public, the more their discourse starts to disconnect from reality, contradicting past framing and resulting in an even greater need for secrecy and deception.
If we don’t pay enough attention to linguistic ethics, language and the truth that it seems to tell can be subtly manipulated to misdirect us. Even the most precise legal wording of a thorough report can be misread, as long as the public is willing to allow it.
The world’s first parking meter was installed in Oklahoma City in January 1935. That December, the Bulletin of the National Tax Association explained the workings of these strange new “Park-O-Meters”:
In the old days horses went to the hitching post, no fee; in these modern days our substitute for the horse goes to the Park-O-Meter, pays the fee and can stand as legally permitted until the meter squawks for another ante.
Parking meters spread across the country quickly. In 1941, William H. Orrick Jr. reported in the California Law Review that, in the six years since their introduction in Oklahoma, they “have been hailed as the greatest traffic invention since the stoplight.”
Still, Orrick wrote, there were some snags. Cities found they had to hire a lot more traffic officers to enforce the meters’ limits. Some citizens also sued their cities over charges for parking. The municipal governments prevailed in most of these cases, since courts generally viewed the meters as an acceptable method of regulating parking. But, to pass legal muster, the money from the meters could only cover their installation and maintenance. Given the limits of cities’ regulatory authority, they had to argue that the meters were simply a way to keep cars from clogging up the streets, rather than a money-making scheme.
Today, many economists and planners argue that it’s irresponsible not to charge for parking. Many cities essentially subsidize car trips by providing free parking on public streets and by requiring restaurants and stores to build parking lots (as the economist Daniel B. Klein noted in a 2006 review essay about urban planner Donald Shoup’s book The High Cost of Free Parking). Free parking encourages people to drive rather than walk or use public transit. It also means that more space ends up devoted to parking, limiting space for homes and businesses, and making urban space less friendly for pedestrians. It seems modern cities still have something to learn from the people who ran Oklahoma City 84 years ago.
In Turkey, the cost of living has soared and inflation is hovering at 20%. During last month’s local elections, President Recep Tayyip Erdoğan’s ruling Justice and Development Party—A.K. Parti in Turkish—blamed this on “outsider” threats undermining Turkey rather than on longstanding economic issues at home. The weakening currency, for example, was cast as an American and Zionist-led conspiracy against Turkey. According to historian Seda Altuğ, this tendency to blame outsiders—particularly Kurds—is a trend as old as the Turkish nation state itself.
Since its foundation as a republic in 1923, Turkey has pursued a policy of violent assimilation against its own Kurdish minority as well as Kurds living across the southern border in Syria and Iraq. Altuğ points to the ubiquity of cross-border operation bills passed by the Turkish national assembly over the last three decades, which have allowed the military to pursue policies of pacification and/or annihilation beyond Turkish territory.
As the Syrian conflict winds down, Turkey is fighting hard to influence the political status quo. Foremost among its priorities is to contain the ambitions of restive Kurds within Turkey by ensuring that Syrian Kurds do not attain representation in any post-war coalition government.
In a similar vein, Turkey remains threatened by the autonomous zone established in 2012 along two ends of the Turkish-Syrian border by the Syrian Kurdish party PYD (Democratic Union Party, or Partiya Yekîtiya Demokrat in Kurmanji) and its armed wing, the YPG (People’s Defense Forces, or Yekîneyên Parastina Gel). The zone was greeted with joy and enthusiasm by Kurds across the world. As far as Turkey is concerned, the PYD is an extension of the PKK (Kurdistan Workers’ Party, or Partiya Karkerên Kurdistanê), an organization deemed “terrorist” by the United States, the EU, and Turkey since 2001.
Nevertheless, the PYD and YPG were important American allies in the fight against ISIL—and Kurdish majority areas in northern Syria are the most stable today. Without Kurds at the table, long-term peace in Syria might be hard to achieve. How Turkey’s fears of Kurdish resurgence will be reconciled with a sustainable balance of power between ethnic groups within Syria remains to be seen.
Threats from across the border are not new in Turkish history. The nascent Turkish republic feared a union of Kurdish-Armenian minority rebels across the border in French-held Syria through the 1920s, until France obliged and crushed the pro-autonomy movement. Today, the economic crisis in Turkey and Syria’s evolving peace process are linked by the specter of a Kurdish uprising spreading across the border. Turkey’s rhetoric at home has always been entangled in its ambitions abroad. Altuğ warns in her conclusion that, “in its ordeal with the Kurds in general and the Kurds of Syria in particular…the Turkish state is determined to use every possible age-old technique of repression and assimilation.”
By many standards, the United States in 2019 is a prosperous country. Yet a new report by the United Nations reveals that many Americans feel deeply unhappy. Moreover, they are increasingly pessimistic on a wide range of problems facing the country, according to the Pew Research Center. Americans feel fundamentally insecure about their standard of living, personal debt, personal safety, opportunities for advancement, and increasingly, climate change.
I believe American insecurity is the result of American disenfranchisement. Despite our narrative of American democracy, mass disenfranchisement has been a consistent condition in our history. Women only won the right to vote in 1920. The civil rights movement broke down widespread restrictions on voting for people of color in the 1960s, but the rise of mass incarceration with the war on drugs has sharply restricted voting once again; today, around six million Americans cannot vote because of state rules on felony convictions, including one in every thirteen African Americans.
Disenfranchisement is not merely the restriction of the voting rights of particular individuals. I consider it any factor that reduces the value of one vote against another vote. The widespread gerrymandering of electoral districts has reduced actual contests to a small number of battlegrounds. Voter ID laws are designed to disproportionately exclude the working class and people of color. Slow adoption of early voting laws and the widespread malfunction of voting machines are functional forms of disenfranchisement for working Americans.
The foundation of these problems is the influence of money in politics, which has been largely unregulated since the Supreme Court’s 2010 decision in Citizens United v. FEC. A tsunami of campaign contributions allows powerful candidates to unfairly sway elections, and allows them to pass policies unrepresentative of the will of their constituents. These policies tend to reinforce the accumulation of private wealth, creating a deepening cycle of corruption and inequality. The politics of disenfranchisement has thus become more naked—for example, Senate majority leader Mitch McConnell openly opposes reforms in HR 1 to increase voter turnout.
Security ideology has played an important role in reinforcing these trends. We live in a society of constitutionally guaranteed rights and laws. But by designating problems as existential security threats, governments earn the discretion to create exceptions to the rule of law. This column has already reviewed instances in which this has infringed Americans’ individual rights. These range from the blatant, as in the internment of Japanese Americans in WWII and suspension of habeas corpus for Guantanamo Bay detainees in this century, to the more subtle, such as mass programs of metadata surveillance.
Over the long run, the spread of security, surveillance, and secrecy have had the more general effect of excluding democratic politics from the development of state policies. I believe mass surveillance run by a million-person bureaucracy with secret-level clearances, interminable delays in complying with Freedom of Information Act requests, and unending war-like military “interventions” and occupations overseas are all forms of disenfranchisement. The American people, and their representatives, are misinformed about these policies; they did not agree to them, nor do they have the power at present to stop them. It seems to me that security—as a permanent mode of government—is making Americans less secure.
Fortunately, the academic field of Security Studies has evolved in the past two decades with a more self-critical and humanistic focus. At the time of its development within the field of International Relations during the Cold War, the world seemed to have arrived at a final political formation, i.e., distinct nation states with regulated boundaries. The United States, with disproportionate political, economic, and cultural power, faced only one clear existential threat: nuclear war with the Soviet Union.
After the end of the Cold War, however, with the spread of globalization and rise of asymmetrical terrorist threats, this focus no longer seemed to address the real roots of insecurity. Starting in the 1990s, a new generation of “critical” security scholars argued that focusing on sovereign states as the only legitimate subjects of study erased human agency and the impact of security policies on real people. This led to a proliferation of new sub-fields, including food security, feminist security, environmental security, private corporate security, and internet security. Critical security scholars have redefined the goal of security from protecting state sovereignty to achieving “emancipation”—that is, the progressive elimination of oppression from human life.
Unsurprisingly, critical scholars have denounced the security apparatus in contemporary states as betraying their stated aims—almost to the point of nullifying representative democracy, degrading living standards for the poor, and leading to conditions akin to fascism. International studies scholar João Nunes argues that recognizing and working through the fundamental politicization of security will be a more productive approach. According to Nunes, we should engage intellectually with security as a detailed network of power relations.
Tactical approaches to criticizing and confronting the power of security include fighting for a stronger union for federal employees, such as TSA agents; creating advocacy groups to highlight and protest the privatization of public space and the march of private and public surveillance; and generally, spreading awareness that poor government regulation or even subsidies currently grant corporations profits while promoting insecurity in the food supply, personal data, etc.
One of the most significant developments in security studies following the introduction of interdisciplinary approaches has been a critique of Eurocentrism in the field. Citing the latest approaches in postcolonial studies, scholars Tarak Barkawi and Mark Laffey examine the shortcomings of this perspective in light of twenty-first-century conditions of geopolitics and the globalized political economy.
When this article was published in 2006, conventional security studies was still struggling to understand transnational terrorism. The indigenous politics that had developed in Asia and Africa in the nineteenth and early twentieth centuries to confront European colonization were conducted on a dramatically unequal basis. Yet an analysis based on diplomacy and war between sovereign nation states has little room for the “politics of the weak.” In the conventional framework, the turn to terrorist tactics by radicalized splinter groups at mid-century could only be interpreted as wholly illegitimate. As natural as this interpretation may be in a moralizing sense, it does not help scholars understand the mechanisms of ideological development, radicalization, or recruitment into terrorist groups. Nor can the corresponding security policy—violent suppression—actually strike at the causes of terrorist mobilization. Disenfranchisement is one of the strongest grievances.
More relevant in 2019, however, is the authors’ suggestion that the rigid categorization of the world’s strong and weak states favored by Eurocentric political views blinds analysts to the political and social transformations of our globalized present. Widening inequality and political corruption in North American and European states are corroding the legitimacy of their claims of popular sovereignty. Put more bluntly, the authoritarian politics and populist mobilization common in many postcolonial states are becoming more recognizable in “the West.” The politics of the weak will no longer be an attribute only of “the other,” the Global South, but an endemic feature across the world.
It is my belief that the most productive effort will be to enfranchise the weak, rather than to securitize and exclude them.
In many of my columns for “Security State of Mind,” I have turned to legal studies. Inasmuch as American security and economic policies are bounded (if not totally controlled) by the law, following the actual development of legal opinion captures practical trends in governance. Both security and legal studies are coming to terms with the rapidly emerging new critiques of capitalism in the 2010s. Any hope for “emancipation” in the critical sense will need to deeply involve the reform of our unregulated global economy.
One expects lawyers, the traditional gatekeepers of constitutional rights and liberties, to defend both civic privileges and private property rights. The rapid evolution of public opinion, however, is fully evident in 2018’s “Liberal Constitutionalism and Economic Inequality,” by law professors Rosalind Dixon and Julie Suk. In the law review of the University of Chicago, whose economics department was a birthplace of neoliberalism, it is astonishing to read: “When individuals’ lives are determined by parental wealth with no significant role for individual autonomy, that society is an aristocracy; it is not the society that liberal-democratic constitutions purport to create.”
Their understandable fear, common in news analysis these days, is that wealth inequality is becoming a security threat to liberal constitutionalism by encouraging illiberal demagoguery. They consider how electoral reform could help society reinvest in the political process, and the ways in which social welfare and redistributive laws and policies could reduce inequality.
These legal scholars fear that new redistributive policies risk creating ever-growing subcategories of citizens to receive special treatment that creates incentives for corruption. They also worry that existing institutions, such as the courts, may not have the capacity to deal with legislation that seeks to impose public control on novel areas formerly determined by private or contract rights.
As a historian, my rejoinder is that political inclusion—enfranchisement—should be the basis for social reconciliation over time. We cannot predict the future; indeed, fate laughs at our ability to try. Democracy is not meant to defend an abstract or permanent set of laws or institutions—it is and can only be a living effort, the result of citizens assembling and negotiating their differences as time passes.
In contemporary society, pop music and politics mix freely—from voter registration drives at music festivals, to celebrities like Taylor Swift weighing in on elections. Back in 1970s Britain, however, that combination created controversy within political organizations.
Historian Evan Smith writes that in the late 1960s, a new organization formed, known as the National Front (NF). Appealing to some far-right members of the Conservative Party, it called for the expulsion of non-white immigrants from England. When an economic crisis hit the country in the 1970s, the NF began seeking support from white, working-class Labour voters, arguing that nonwhite immigrants were causing economic problems. Soon, NF members were holding street marches and sometimes violently attacking people in black communities.
In reaction, the British far left, including the Communist Party of Great Britain and the Socialist Workers Party (SWP), began organizing antifascist and antiracist campaigns. While the Communists focused on peaceful protest and cooperation with local authorities to curb the National Front, some SWP activists directly confronted them in the streets.
Some young activists with ties to the SWP wrote an open letter in response to racist comments made by Eric Clapton. Beyond criticizing an individual musician, they called for “a rank and file movement against the racist poison in rock music.” This was the beginning of the Rock Against Racism (RAR) movement.
Smith writes that, while the SWP was a crucial supporter of RAR, it did not completely control it, or use it as a recruiting tool. Instead, RAR built a culture on the celebration of working-class, mixed-race punk and reggae music scenes, fanzines, and events that combined live music with activism.
In contrast, the Communist Party had long distrusted pop music, which it saw as a form of capitalist propaganda, bringing American materialism and moral decadence to the country. Communist organizers favored folk music, considered an authentic art form of the people.
Starting in 1978, RAR organized a series of big carnivals, drawing anywhere from 2,000 to 100,000 people each. Smith writes that the SWP hailed the events as a victory that drew hundreds of thousands of young people into antiracist organizing.
Ultimately, the results of Rock Against Racism were mixed. The National Front declined in influence after the mid-1970s. RAR didn’t necessarily lead youths to join socialist organizations or the Labour Party, but it did encourage many young people to identify with antiracist and left-wing politics. Pop music has been important in political organizing ever since.