Did Humans Once Live by Beer Alone? An Oktoberfest Tale

By Lina Zeldovich

In October of 1953, the farmers of the Western hemisphere were busy toiling over harvested grain, either milling it into flour or prepping it for brewing. Meanwhile, a group of historians and anthropologists gathered to debate which of these two common grain uses humans mastered first—bread or beer?

The original question posed by Professor J. D. Sauer, of the University of Wisconsin’s Botany Department, was even more provocative. He wanted to know whether “thirst, rather than hunger, may have been the stimulus [for] grain agriculture.” In more scientific terms, the participants were asking: “Could the discovery that a mash of fermented grain yielded a palatable and nutritious beverage have acted as a greater stimulant toward the experimental selection and breeding of the cereals than the discovery of flour and bread making?”

Interestingly, the available archaeological evidence didn’t produce a definitive answer. The cereals and the tools used for planting and reaping, as well as the milling stones and various receptacles, could have been used to make either bread or beer. Nonetheless, the symposium, which ran under the title Did Man Once Live by Beer Alone?, featured plenty of discussion.

The proponents of the beer-before-bread idea noted that the earliest grains might have actually been more suitable for brewing than for baking. For example, some wild wheat and barley varieties had husks or chaff stuck to the grains. Without additional processing, such husk-enclosed grains were useless for making bread—but fit for brewing. Brewing fermented drinks may also have been easier than baking. Making bread is a fairly complex operation that necessitates milling grains and making dough, which in the case of leavened bread requires yeast. It also requires fire and ovens, or heated stones at the least.

On the other hand, as some attendees pointed out, brewing needs only a simple receptacle in which grain can ferment, a chemical reaction that can be easily started in three different ways. Sprouting grain produces its own fermentation enzyme—diastase. There are also various types of yeast naturally present in the environment. Lastly, human saliva also contains fermentation enzymes, which could have started a brewing process in a partially chewed up grain. South American tribes make corn beer called chicha, as well as other fermented beverages, by chewing the seeds, roots, or flour to initiate the brewing process.

But those who believed in the “bread first, beer later” concept posed some important questions. If the ancient cereals weren’t used for food, what did their gatherers or growers actually eat? “Man cannot live on beer alone, and not too satisfactorily on beer and meat,” noted botanist and agronomist Paul Christoph Mangelsdorf. “And the addition of a few legumes, the wild peas and lentils of the Near East, would not have improved the situation appreciably. Additional carbohydrates were needed to balance the diet… Did these Neolithic farmers forego the extraordinary food values of the cereals in favor of alcohol, for which they had no physiological need?” He finished his statement with an even more provocative inquiry. “Are we to believe that the foundations of Western Civilization were laid by an ill-fed people living in a perpetual state of partial intoxication?” Another attendee said that proposing the idea of grain domestication for brewing was not unlike suggesting that cattle were “domesticated for making intoxicating beverages from the milk.”

In the end, the two camps met halfway. They agreed that our ancestors probably used cereal for food, but that food might have been in liquid rather than baked form. It’s likely that the earliest cereal dishes were prepared as gruel—a thinner, more liquid version of porridge that had long been a dietary staple of Western peasants. But gruel could easily ferment. Anthropologist Ralph Linton, who chose to take “an intermediate position” in the beer vs. bread controversy, noted that beer “may have resulted from accidental souring of a thin gruel … which had been left standing in an open vessel.” So perhaps humankind indeed owes its effervescent bubbly beverage to some leftover mush gone bad thousands of years ago.

The post Did Humans Once Live by Beer Alone? An Oktoberfest Tale appeared first on JSTOR Daily.

Navy Seals: Why the Military Uses Marine Mammals

By Farah Mohammed

In late April 2019, a friendly beluga whale surfaced off the shores of Norway and began following fishermen’s boats, tugging on loose straps for attention, seemingly asking for food.

But that’s not what grabbed world headlines. What caught global attention was the harness strapped to the cetacean’s back, meant to carry a camera, leading some to theorize that the beluga was a stray Russian spy. (After much speculation, most outlets are reporting the whale was likely trained as a therapy animal.)

Bizarre as it sounds, the use of marine mammals is in fact a common military strategy, even in the U.S. According to science writer Ceiridwen Terrill, marine mammals have been deemed “invaluable components” of the defense force.

During the Cold War, the Soviet military was said to have parachuted military-trained dolphins from heights of nearly two miles. Dolphins were also used in the study and design of underwater torpedoes. A pod of sea lions, known as Mk 5 Mod 1, was trained to retrieve explosives. In the U.S.’s 1969 Project Deep Ops, two killer whales and a pilot whale—Ahab, Ishmael, and Morgan—were trained to retrieve objects lost in the ocean at depths too great for human divers and in conditions too rough for machines to handle.

Although stories like these capture the public’s imagination, the reality of the life of an animal in the armed forces is rather dark. As it is with humans, military training for animals is physically and psychologically punishing, and carries the inherent risk of casualties. Moreover, there isn’t the same level of oversight for animals in the military.

Unexplained deaths are a problem. Terrill writes,

the Marine Fisheries Service and Navy necropsy reports show that the Navy collected 146 dolphins of four species since 1962. Of this 146, 60 were still in service, 11 were transferred to private facilities, 5 had escaped, and 55 died. However, Fisheries Service records were incomplete; not all Navy dolphin necropsy reports were filed, not all dolphin deaths reported.

According to Terrill, abuse against animal trainees is another problem. She pinpoints common practices such as throwing fish outside pens during training (“baiting”), or denying food as a way to make animals more cooperative (“axing”).

The issue is complex. According to Terrill, some argue that after years, the military training pens are the only home the animals know, and attempting to free them or re-introduce them to the wild would be cruel. This is an idea that the animals’ behavior both supports and contradicts: some escape given the opportunity, while others, even in open water, will linger and wait for their trainers to arrive.

Today, dolphins, sea lions, and whales are still used to track and retrieve objects, as their natural senses are superior to technology in rough weather and noisy areas.

The post Navy Seals: Why the Military Uses Marine Mammals appeared first on JSTOR Daily.

The Prince of Quacks (and How He Captivated London)

By Benjamin Winterhalter

Let me set the scene: In late eighteenth-century England, ladies and gentlemen flocked to exhibitions of solar microscopes. The miniature world of mites and polyps was blown up and cast on the wall like a magic lantern show. The ladies and gents might have witnessed Sir William Hamilton’s Vesuvian Apparatus, with its simulated flowing lava and drumbeat explosions.

It was the age of scientific entertainment, the rationalist ideals of the Enlightenment colliding with the passion of the Romantic spirit. These exhibitions demonstrated scientific control of nature at the same time that they overawed their viewers with the sublime grandeur of natural forces at play.

Electricity, newly harnessed, was “the youngest daughter of the sciences.” High society aristocrats held electrical soirées in which they shocked one another with electrified kisses. They marveled at green and blue luminescence swirling in aurora flasks. King Louis XV even employed a court electrician, the Abbé Jean-Antoine Nollet, who devised spectacular electrical experiments for the entertainment of the court. On one occasion, the electrician lined up a row of several hundred monks and ran a current through them, sending them all leaping up into the air at the same moment. To the witnesses of these spectacles, electricity was something almost miraculous, an “ethereal fire” that connected the roars and flashes of the heavens to the inner workings of the human body.

Enter James Graham, a man whom the British Medical Journal described as “one of the vilest imposters in the history of quackery.” Handsome and dapper, Graham benefitted from his undeniable flair for showmanship and his talent for leaping on trends. In 1780, he opened his “Temple of Health,” a medical establishment on fashionable Pall Mall in London. It was a lavish home for Graham’s much-touted “medico-electrical apparatus,” which he proudly proclaimed the “largest, most stupendous, most useful, and most magnificent, that now is, or ever was, in the World.”

The entrance hall to the Temple was scattered with discarded walking sticks, ear trumpets, eyeglasses, and crutches, supposedly cast away in fits of exuberant health by the beneficiaries of Graham’s cures. Perfume and music, played by a band concealed under the stairs, wafted on the air. Guests were paraded past one electrical marvel after another: flint glass jars blazing with captive sparks, gilded dragons breathing electrical fire, a gilded throne on which Graham’s patients sat to receive their curative shocks. Among the jars and columns Emma Hart (later to become famous as Lord Nelson’s mistress) posed in gauzy Grecian robes as Hebe, the Goddess of Youth.

Some came to be treated, some simply to gawp. Many came to hear Graham’s lectures, which were famously titillating. At the end of each lecture, the guests were shocked (literally) by conductors concealed in their seats. Then, a gigantic, gaunt “spirit” would emerge from a hidden trapdoor, bearing a bottle of “aetherial balsam” to be distributed to the guests.

A depiction of James Graham’s Celestial Bed. Getty

The most infamous feature of the Temple of Health was Graham’s Celestial Bed. Borne up on forty pillars of colored glass, perfumed with flowers and spices, and humming with vivifying electricity, the bed (according to Graham) guaranteed conception. “Any gentleman and his lady desirous of progeny, and wishing to spend an evening in this Celestial apartment… may, by a compliment of a fifty pound bank note be permitted to partake of the heavenly joys it affords,” Graham wrote. “The barren certainly must become fruitful when they are powerfully agitated in the delight.” The bed featured an adjustable frame, so that it could be set at various angles.

With his devices and his medicines, Graham claimed to have “an absolute command over the health, functions and diseases of the human body.” He confidently predicted that he would live to at least 150, healthy all the while. The English poet laureate Robert Southey, who met James Graham, described him as “half knave, half enthusiast.” That is, he bought into his own hype, but he was a con man all the same.

James Graham may not have been much of a doctor, but he was a master of aesthetics. His Temple of Health was, ultimately, a performance, a stage on which he enacted his cosmic drama: the fire of the heavens, carried down to Earth for the benefit of humankind. His cures may not have worked, per se, but they worked for his audience because they fit precisely into their worldview. Graham wrote that, when confronted with the grandest apartment in his Temple, “words can convey no adequate idea of the astonishment and awful sublimity which seizes the mind of every spectator.” To have lightning coursing through your body may be the most literal possible version of the Romantic encounter with nature’s terror and majesty.

The Temple of Health didn’t last long, however. By 1782, Graham was bankrupt. He was forced to sell off the Temple’s lavish accoutrements, including the Celestial Bed. It didn’t take him long to develop a new angle. By 1790, he was promoting a new theory that the human body could absorb all the nutrients it needed through contact with soil. Thenceforth, he delivered his lectures buried up to the neck in dirt.

The post The Prince of Quacks (and How He Captivated London) appeared first on JSTOR Daily.

Colonialism Created Navy Blue

By Allison Meier

In the second half of the eighteenth century, the Royal Navy sailed the world in service of the expansion and enforcement of the British Empire. Its officers were decked out in a deep blue, now known as navy blue. The rich hue was a recent development, and one that wouldn’t have been possible in previous centuries when the color was more scarce.

Researchers Adam Geczy, Vicki Karaminas, and Justine Taylor explain in the Journal of Asia-Pacific Pop Culture:

In 1748, the British Royal Navy adopted dark blue officer’s uniforms, which would become the basis of the naval dress of other countries…The blue of seamen’s uniforms, so ubiquitous now as to be taken for granted, is not due to the corollary with the color and that of the sky and sea. The explanation is far more logistical, relating back to the British colonization of India and the expansion of the East India Trading Company after the victory over the French in the Seven Years’ War (1756–63).

The rich color came from the indigo plant, Indigofera tinctoria, which was native to India, and thus available to the British after they had colonized the country. It had been in use in Europe since the late thirteenth century. “Indigo was then not only plentiful and affordable, but unlike other dyes was particularly color fast, outclassing other colors in withstanding extensive exposure to sun and salt water.”

Blue dye had existed in England, but it was often made from the flowering plant woad. Even when the more versatile indigo became accessible, there was some resistance to the imported blue, including from the woad cultivators. Historian Dauril Alden notes in The Journal of Economic History that the woad cultivators campaigned aggressively against indigo, declaring:

[it] was properly “food for the devil” and was also poisonous, as in fact it was (particularly to the woadmen). By the end of the sixteenth century, they had succeeded in persuading governments in the Germanies, France, and England to prohibit use of the so-called “devil’s dye.”

Still, they could not hold off the popularity of indigo for long, especially as dyers discovered its potential. “Different textiles required different treatment and even different dyes to achieve a given colour,” writes historian Susan Fairlie in The Economic History Review. Wool is the easiest to dye, while silk, cotton, and linen are each a bit harder and need varying amounts of dyes like woad. “The only fast attractive dye which worked equally on all four, with minor differences in preparation, was indigo.”

With that flexibility, indigo began to be used much more widely. By the eighteenth century it was a major import to England. Economic historian R. C. Nash writes in The Economic History Review:

Indigo, ‘the most famous of all dyes’, was the most widely used dyestuff in the eighteenth-century European textile industry. From the late sixteenth century Europe’s indigo supplies came from India and from the more volatile output of the Spanish American Empire. By c. 1620, Europe’s indigo imports amounted to at least 500,000 lbs. per year.

Eventually, South Carolina emerged as a leading indigo producer, when the crop was introduced as part of the plantation system in the eighteenth century. “In combination with rice, indigo underpinned the threefold increase in the colony’s exports in the generation before the American Revolution and was also mainly responsible for the striking gains in slave-labour productivity made in the same period,” Nash states. Enslaved people were integral to the forced labor that allowed for the spread of indigo into the dye markets and onto clothing.

Navy blue, meanwhile, endures as a color of authority today, worn by everyone from police to military officers, centuries after its promotion as the uniform of imperial expansion.

The post Colonialism Created Navy Blue appeared first on JSTOR Daily.

Is There a Witch Bottle in Your House?

By Allison Meier

In 2008, a ceramic bottle packed with about fifty bent copper alloy pins, some rusty nails, and a bit of wood or bone was discovered during an archaeological investigation by the Museum of London Archaeology Service. Now known as the “Holywell witch-bottle,” the vessel, which dates between 1670 and 1710, is believed to be a form of ritual protection that was hidden beneath a house near Shoreditch High Street in London.

“The most common contents of a witch-bottle are bent pins and urine, although a range of other objects were also used,” writes archaeologist Eamonn P. Kelly in Archaeology Ireland. Sometimes the bottles were glass, but others were ceramic or had designs with human faces. A witch bottle might contain nail clippings, iron nails, hair, thorns, and other sharp materials, all selected to conjure a physical charm for protection. “It was thought that the bending of the pins ‘killed’ them in a ritual sense, which meant that they then existed in the ‘otherworld’ where the witch travelled. The urine attracted the witch into the bottle, where she became trapped on the sharp pins,” Kelly writes.

Akin to witch marks, which were carved or burned onto windows, doors, fireplaces, and other entrances to homes in the sixteenth to eighteenth centuries, witch bottles were embedded in buildings across the British Isles and later the United States at these same entry points. “The victim would bury the bottle under or near the hearth of his house, and the heat of the hearth would animate the pins or iron nails and force the witch to break the link or suffer the consequences,” anthropologist Christopher C. Fennell explains in the International Journal of Historical Archaeology. “Placement near the hearth and chimney expressed associated beliefs that witches often gained access to homes through deviant paths such as the chimney stack.”

And much like witch marks, which tended to proliferate in times of political turmoil or bad harvest, the rather unpleasant ingredients in witch bottles reflected real threats to seventeenth-century people even as they were concocted for supernatural purposes. It’s probable many were made as a remedy at a time when available medicine fell short. “Urinary problems were common both in England and America during the seventeenth and eighteenth centuries, and it is reasonable to suppose that their symptoms often were attributed to the work of local witches,” scholar M.J. Becker notes in Archaeology. “The victims of bladder stones or other urinary ailments would have used a witch bottle to transfer the pains of the illness from themselves back to the witch.” In turn, if a person in the community then had a similar malady, or physical evidence of scratching, they might be accused of being the afflicting witch.

Like other counter-magical devices, the bottled spells eventually faded out of popular folk practice, but not before immigrants to North America brought over the practice. “The witch-bottle tradition originated in the East Anglia region of England in the late Middle Ages and was introduced to North America by colonial immigrants, the tradition continuing well into the 20th century on both sides of the Atlantic,” writes historian M. Chris Manning in Historical Archaeology. “While nearly 200 examples have been documented in Great Britain, less than a dozen are known in the United States.”

Researchers with the Museum of London Archaeology and the University of Hertfordshire are now hoping to identify more. In April of 2019, their “Bottles concealed and revealed” project launched as a three-year investigation of witch bottles that will bring disparate reports together into a comprehensive survey of all the known examples in museums and collections around England. Through this project, they aim to better understand how these curious bottles spread as a popular practice, and how they convey ideas around medicine and beliefs. Part of this exploration is a “Witch Bottle Hunt” calling on the public to share any discoveries with their specialists. While they don’t want anyone breaking down the walls of historic homes, they are asking that any finds be treated as archaeological objects and left in situ for a specialist to examine. Most importantly, they advise, leave the stopper in. Let the experts deal with these containers of centuries-old urine and nail clippings.

The post Is There a Witch Bottle in Your House? appeared first on JSTOR Daily.

How the Beat Generation Became “Beatniks”

By Matthew Wills

What happened when the Beat Generation bumped up against the popular culture they were rebelling against? Historian Stephen Petrus writes about how the youth subculture was turned into a commodity.

“From the end of 1958 through 1960, popular magazines, newspapers, television shows, and even comic strips bombarded Americans with images of the Beat Generation,” writes Petrus. But these images weren’t so much of writers like Allen Ginsberg, William S. Burroughs, Jack Kerouac, and Gregory Corso, the Beat Generation’s leading lights. These images were of the beatniks, followers of the bohemian lifestyle highlighted by “Howl” and On the Road.

Petrus defines the original Beats thus:

To contemporary scholars the term “Beat Generation” refers to a group of post-World War II novelists and poets disenchanted with what they viewed to be an excessively repressive, materialistic, and conformist society, who sought spiritual regeneration through sensual experiences. This band of writers includes Allen Ginsberg, Jack Kerouac, and William Burroughs, who originally met in 1944 in New York City to form the core of this literary movement.

“Beatnik,” on the other hand, was a term coined by San Francisco Chronicle columnist Herb Caen in April 1958. It was a play on “Sputnik,” Earth’s first artificial satellite, which had gone up in October 1957. Caen used the term to refer to “over 250 bearded cats and kits” “slopping up” free booze at a party sponsored by Look magazine. Like the more popular Life magazine, Look was obsessed with photographic spreads of beatniks. (Playboy had a different kind of beatnik photographic spread in its July 1959 issue.)

The years between 1957 and 1960 marked “the acceptance of the beatnik dissent and the emergence of a fad: a cultural protest transformed into a commodity,” writes Petrus. There was fashion: loose sweaters, leotards, tight black pants, berets, and sunglasses were all the rage. There were spaces: coffee houses, cellar nightclubs, and espresso shops opened to meet the new demand. New York City even had a “Rent-A-Beatnik” service, where you could order up a poetry-reading/music-playing cool cat or cool chick for your event; sandals and bongos were available options.

The popular cultural responses to the beatniks ran from denunciation to tolerance to imitation. The 1960 Republican Convention featured J. Edgar Hoover proclaiming that “Communists, Eggheads, and Beatniks” were the country’s great enemies. Some Americans associated beatniks with drugs, delinquency, and un-Americanism.

Then again, as Petrus writes, “the original beatniks themselves became a tourist attraction” in San Francisco’s North Beach. There was even an American Beat Party, which nominated a presidential candidate in 1960. Words like “cool,” “crazy,” “dig,” and “like” entered the general American lexicon.

In the media, beatniks were mostly portrayed as “innocuous and silly figures, causing Americans to laugh at them and embrace them.” TV’s favorite beatnik was Maynard G. Krebs, a goofy, harmless man-child who shivered at the thought of work and began many a sentence with “like.” With his chin beard and bongo drums, Krebs ultimately edged out the title character of The Many Loves of Dobie Gillis (1959–1963) in popularity.

But, as Petrus writes, the original Beats were more serious than that.

Shaped by the effect of the senseless murder of World War II and the knowledge of a possible instant death by an atomic explosion or a slow deterioration by the cancerous force of conformity, the hipster responded to his situation by detaching himself from society and rebelling.

Then again, dissenters have a way of popping up again, elsewhere. The year 1960 was defined by a wave of civil rights sit-ins across the South, the birth of Students for a Democratic Society, and protests against the House Committee on Un-American Activities in Berkeley. What rough beast came slouching towards Berkeley to be born? Like, the Sixties had arrived.

The post How the Beat Generation Became “Beatniks” appeared first on JSTOR Daily.

The Codpiece and the Pox

By Matthew Wills

Actor Mark Rylance wore a codpiece as part of his costume for the role of Thomas Cromwell in Wolf Hall, but noted that historical accuracy was sacrificed to appeal to the sensibilities of contemporary audiences. His codpiece was toned down because viewers “may not know exactly what is going on down there,” particularly in America.

What exactly was going on down there remains a topic of historical inquiry. “The visible penis sheath worn in the 16th century,” writes cultural anthropologist Grace Q. Vicary, “has long been an enigma to social historians, ethnologists, and social psychologists.” Some have argued that the codpiece’s prime function was “phallic connotations of aggressive virility display.” Vicary thinks this idea is the product of the later Puritan age.

Since actual garments from the time are almost nonexistent, Vicary studied the drawings and portraits of men in contemporary clothing that began to be produced around 1500. Codpieces changed across time and place. Such coverings kept the genitals from being thumped by the “various daggers, purses, tools, whisks, pomanders, or swords which Renaissance men hung from their belts.” But Vicary also argues that there was a “functional link between the codpiece” and the sexually transmitted syphilis epidemic that swept through Europe, starting in 1494.

Treatments for this virulent outbreak included ointments made from arsenic, sulfur, black hellebore, pine resin, and a “whole galaxy of herbs, minerals, syrups, and decoctions.” One thing that did work was mercury—in fact, it would be used to fight syphilis until 1910. Topical application, in a mixture with grease, necessitated “a practical artefact—a large, boxy penis container which (1) would hold layers of bandages in place, (2) would keep the grease, or other drug stains from spreading to stylish” clothing. Such codpieces could also contain the pus associated with the disease.

Vicary argues that German mercenaries came up with this medicinal contraption by the 1510s. They were “the first sufferers of syphilis and its prime carrier through Europe.” Fashionable males, no strangers to the pox, adopted it by the 1530s.

Vicary makes the point that those afflicted with syphilis were discriminated against, since “social diseases” were stigmatized. So, if every mercenary wore one of these newfangled codpieces, none would stand out for having an STD. Soon enough, wearing a codpiece lost its association with the syphilitic:

When kings wore a new style of clothing, it was soon emulated by all who could afford it… A universal function of any clothing or adornment is communication of social and sexual status through varying shapes, colors, materials, texture, and decoration, so of course the codpiece also fulfilled this function.

The Renaissance codpiece faded from fashion by the 1590s, but the concept did not entirely disappear. Vicary cites the “D.A.B.D. Aprons” in a Montgomery Ward catalog of 1900. These were worn, to quote the catalog, to “keep the clothing and bedding from becoming soiled when afflicted with gonorrhea or gleet.”

The post The Codpiece and the Pox appeared first on JSTOR Daily.

The Invention of the Giveaway

By Livia Gershon

From cereal boxes to pop-up ads, we’re surrounded by marketing efforts promising free toys or chances to win valuable prizes. Historian Wendy A. Woloson explains that this kind of premium scheme dates back to the mid-nineteenth century.

The technique apparently started with Benjamin T. Babbitt, a friend of P.T. Barnum, around 1851. From a travelling wagon, Babbitt offered free lithographic prints with the purchase of baking soda. Soon, soap-seller Hibbard P. Ross improved on Babbitt’s method, promising that anyone buying ten bars of soap would also receive a “present.” The prize, which ranged from a linen handkerchief to a gold watch or a plot of land, was supposedly determined by a drawing.

By the late 1850s, retailers around the country were using prize lottery systems. The typical prize was close to worthless. In fact, suppliers often used prize lotteries to unload damaged, stolen, or unpopular items. Woloson writes that this allowed them “to do the seemingly impossible, to defy natural law, begetting value from the mating of two things that individually had little or no worth.”

In some cases, sellers skipped the pretense of selling a product at all. “Gift distributions” sold tickets entitling holders to a prize chosen by lottery. Many operators took the money and never held a drawing at all. When they did, the result was often a sham. One company sold fifty-cent tickets promising “a chance for a $500 prize pencil.” Entrants were told they won the grand prize and asked to send in their winning ticket plus five dollars to cover postage. When they did, they received plated pencils worth less than the fifty cents they’d originally paid.

Woloson writes that police sometimes stepped in to stop what were essentially illegal gambling operations. Some newspapers also warned readers about the schemes. In an 1858 issue of the New York Herald, one writer railed against “gift enterprises” and expressed dismay at “the facility with which our public are duped by any sort of project which holds out hope of gain.”

On the other hand, some newspaper editors abetted the premium schemes in exchange for a chance to be part of them. The unfortunate Henry Catlin, editor of the True American in Erie, Pennsylvania, wrote to a gift distribution company that he had been told “if I would give you a puff you would draw me one of your prizes in payment. I gave you a lengthy notice saying unequivocally that I knew you were reliable. I sent you marked copies of the paper, but have received nothing from you.”

Woloson writes that the success of premiums reflected customers’ growing acquisitiveness and desire to be part of the exciting and fast-growing world of consumer goods. But she argues that the appeal of the free gift wasn’t just about exploiting the eternal dream of getting something for nothing. Presenting a company as a bestower of gifts was a ploy to elicit good will from consumers. As the continuing popularity of prize giveaways suggests, this still works on a lot of us.

The post The Invention of the Giveaway appeared first on JSTOR Daily.

When Cemeteries Became Natural Sanctuaries

By Allison Meier

In the nineteenth century, American cities had a problem with the dead. As urban areas became more densely populated, the churchyards in which the dead were interred became overcrowded and were considered public health hazards. Outbreaks of epidemics like cholera and yellow fever were blamed on these festering burial grounds, which were often so full of bodies that they towered over the streets. Industrialization was rapidly claiming available land for development, with little space left for the dead.

“As the urban environment became paved over, more hurried, and commercial, a change of scenery reminiscent of the rural past, a readily accessible natural sanctuary within close proximity to the city, became necessary,” writes historian Thomas Bender in The New England Quarterly. “A romantic landscape was sought as a counterbalance to the disturbing aspects of the cityscape. This was the attraction of the rural cemeteries on the outskirts of most American cities.”

The first example of the rural cemetery movement in the United States was Mount Auburn in Cambridge, Massachusetts. Dr. Jacob Bigelow and General Dearborn of the Massachusetts Horticultural Society opened the cemetery in 1831. It reflected their botanical interest and borrowed from English landscape traditions. Mount Auburn also mimicked European cemeteries like Père Lachaise in Paris, founded in 1804, in its location on the edge of a metropolitan area rather than right at its center. The unpleasant corpses would be far from day-to-day life, and, in turn, the living could come to these spaces as an escape.

“A proto-ecological reform movement was afoot in the cities of antebellum America, encompassing concerns with sanitation, public health, aesthetics, and equal access to green spaces,” states environmental historian Aaron Sachs in Environmental History. “Putrefying corpses no longer would pile up in grim graveyards (thought to be the source of rank, unhealthful miasmas), and walkers in the city no longer would be deprived of fresh air and sheltering trees.”

Hazel Dell, Mount Auburn Cemetery

Dozens of similar cemeteries followed, including Philadelphia’s Laurel Hill in 1836, Brooklyn’s Green-Wood in 1838, Baltimore’s Green Mount in 1838, and Rochester’s Mount Hope in 1838. Few American cities in the mid-nineteenth century had park systems, major art museums, or botanic gardens, and these cemeteries attracted people as much for recreation as for mourning. Picnics on the lawns and carriage rides on the winding paths accompanied viewing of the monuments, which similarly represented a shifting attitude toward death.

“Early visions of the rural cemetery emphasized naturalness of landscape, but increasingly that natural setting took second place to man-made adornments such as statuary,” explains historian Charles O. Jackson in the Journal of American Studies. “That effort to make the death-setting lovely would progress to the point where critics began to insist that limits be drawn. It was wrong to hide all the harsh realities of death.”

Indeed, whereas colonial American churchyards were packed with skulls, winged hourglasses, and other “memento mori” messages, the rural cemeteries were adorned with soaring angels, carvings (and plantings) of weeping willows, and other such statements that death was more of a peaceful transition than a deadline for repenting. Although these cemeteries were incredibly different from the previous burial grounds, particularly in their secular romanticism, they were popular.

“Only two years after Mount Auburn’s establishment the English actress Fanny Kemble reported that it was already ‘one of the lions’ of the area, and that ‘for its beauty Mount Auburn might seem a pleasure garden instead of a place of graves,’” historian Stanley French relates in American Quarterly. “About the same time a Swedish visitor was so enchanted with Mount Auburn that he declared, ‘a glance at this beautiful cemetery almost excites a wish to die.’”

Eventually the rural cemetery and its sentimentality faded out of fashion, and these former outskirts were absorbed into cities. Contemporary burial grounds, now frequently referred to as “memorial parks,” are much more somber in terms of their layout and imagery, with stark granite stones positioned in grass with only sporadic trees. Yet these rural cemeteries survive within contemporary cities, and continue to offer a meditative respite from urban life.

The post When Cemeteries Became Natural Sanctuaries appeared first on JSTOR Daily.

The Extremely Real Science behind the Basilisk’s Lethal Gaze

By Jonathan Aprea

The basilisk, which slays its victims with a single glance, seems as fantastical as the scorpion-tailed manticore or the Barnacle Tree, which sprouts goslings like fruit. But there was a perfectly reasonable scientific explanation for the basilisk’s lethal look: the extramission theory of vision.

According to the extramission theory, which was developed by such thinkers as Plato, Galen, Euclid, and Ptolemy, our eyes are more than the passive recipients of images. Rather, they send out eye-beams—feelers made of elemental fire that spread, nerve-like, to create our field of vision. These luminous tendrils stream out from our eyes into the world, apprehending objects in their path and relaying back to us their qualities.

The extramissionists proposed a theory of vision completely alien to our current way of thinking. Though they saw the world in the same way we did, the way they saw seeing was something else entirely. In the words of Empedocles:

…as when a man who intends to make a journey prepares a light for himself, a flame of fire burning through a wintry night… in the same way the elemental fire, wrapped in membranes and delicate tissues, was then concealed in the round pupil.

In this view, sight is essentially a species of touch. The extramissionist theory explained why it is that, as objects recede into the distance, their details blur: because they take up less of our visual field, there are fewer eye-beams to strike their surface and feel its intricacies.

Likewise, when objects are too far to see, it’s because our eye-beams can’t stretch far enough to reach them. This explains why squinting can help you see distant details: the narrowed aperture focuses your eye-beams onto a single point. The luminous gleam of a cat’s eyes in the dark was considered visible proof of the fire emanating from their pupils, and one philosopher proposed that the reason we get dizzy watching waves or wheels whirl is that following the motion with our eyes actually makes our eye-beams twist and churn.

Of course, there were objections. For instance, how is it that we can see the constellations the instant we turn our eyes to the night sky, if gazing on the stars means our visual tendrils must stretch far enough to touch them? To answer riddles like these, alternative versions of the theory developed. Some argued that, when our eyes meet the air, they convert the atmosphere itself into an organ of our senses. Others argued that the objects we see send out beams of their own, which meet and meld with the beams of our eyes.

Interestingly, the counter-theory to extramission—intromission—is almost equally alien. Democritus, an intromissionist, put forth this bizarre theory: the objects around us slough off a continuous stream of atom-thin flakes, called eidola, each a miniature replica of its source. Eidola float through the air until they meet our open eyes and stamp themselves into our perception. The best evidence for this theory, according to its proponents, was the fact that when you gaze into someone’s eyes you see a floating miniature of the scene around them—the eidola, captured.

The debate between intromissionists and extramissionists continued well into the Middle Ages, leaving a lasting imprint on art and literature. Byzantine icons, for instance, derived much of their reverential impact from the idea that worshippers were, in fact, caressing the holy object with their gaze. The depths and reliefs, the play of light and shadow, and the rich, shimmering textures were all meant to evoke the sensation of visual touch.

On the other hand, medieval writers found a less holy use for the concept—erotic poetry. Poets wrote of being pierced by the arrow-like gaze of their beloved, and revelled in the idea that meeting their lady’s eyes was in fact a kind of mutual caress. John Donne wrote one of the most striking depictions of extramissive sight when he penned these lines:

Our hands were firmly cemented
With a fast balm, which thence did spring;
Our eye-beams twisted, and did thread
Our eyes upon one double string.

As for the basilisk, if the rays of the eyes could touch you, it stood to reason that, under the right circumstances, they could also hurt you. Sir Thomas Browne expressed this best, writing “it is not impossible, what is affirmed of this animal, the visible rays of their eyes carrying forth the subtilest portion of their poison, which is received by the eye of man or beast, infecteth first the brain, and is from thence communicated to the heart.” For the extramissionists, looks could kill.

The post The Extremely Real Science behind the Basilisk’s Lethal Gaze appeared first on JSTOR Daily.
