In October of 1953, the farmers of the Western hemisphere were busy toiling over harvested grain, either milling it into flour or prepping it for brewing. Meanwhile, a group of historians and anthropologists gathered to debate which of these two common grain uses humans mastered first—bread or beer?
The original question posed by Professor J. D. Sauer, of the University of Wisconsin’s Botany Department, was even more provocative. He wanted to know whether “thirst, rather than hunger, may have been the stimulus [for] grain agriculture.” In more scientific terms, the participants were asking: “Could the discovery that a mash of fermented grain yielded a palatable and nutritious beverage have acted as a greater stimulant toward the experimental selection and breeding of the cereals than the discovery of flour and bread making?”
Interestingly, the available archaeological evidence didn’t produce a definitive answer. The cereals and the tools used for planting and reaping, as well as the milling stones and various receptacles, could have been used to make either bread or beer. Nonetheless, the symposium, which ran under the title Did Man Once Live by Beer Alone?, featured plenty of discussion.
The proponents of the beer-before-bread idea noted that the earliest grains might have actually been more suitable for brewing than for baking. For example, some wild wheat and barley varieties had husks or chaff stuck to the grains. Without additional processing, such husk-enclosed grains were useless for making bread—but fit for brewing. Brewing fermented drinks may also have been easier than baking. Making bread is a fairly complex operation that necessitates milling grains and making dough, which in the case of leavened bread requires yeast. It also requires fire and ovens, or heated stones at the least.
On the other hand, as some attendees pointed out, brewing needs only a simple receptacle in which grain can ferment, a process that can be easily started in three different ways. Sprouting grain produces its own fermentation enzyme—diastase. There are also various types of yeast naturally present in the environment. Lastly, human saliva contains fermentation enzymes, which could have started the brewing process in partially chewed-up grain. South American tribes make corn beer called chicha, as well as other fermented beverages, by chewing the seeds, roots, or flour to initiate the brewing process.
But those who believed in the “bread first, beer later” concept posed some important questions. If the ancient cereals weren’t used for food, what did their gatherers or growers actually eat? “Man cannot live on beer alone, and not too satisfactorily on beer and meat,” noted botanist and agronomist Paul Christoph Mangelsdorf. “And the addition of a few legumes, the wild peas and lentils of the Near East, would not have improved the situation appreciably. Additional carbohydrates were needed to balance the diet… Did these Neolithic farmers forego the extraordinary food values of the cereals in favor of alcohol, for which they had no physiological need?” He finished his statement with an even more provoking inquiry. “Are we to believe that the foundations of Western Civilization were laid by an ill-fed people living in a perpetual state of partial intoxication?” Another attendee said that proposing the idea of grain domestication for brewing was not unlike suggesting that cattle were “domesticated for making intoxicating beverages from the milk.”
In the end, the two camps met halfway. They agreed that our ancestors probably used cereal for food, but that food might have been in liquid rather than baked form. It’s likely that the earliest cereal dishes were prepared as gruel—a thinner, more liquid version of porridge, long a dietary staple of Western peasants. But gruel could easily ferment. Anthropologist Ralph Linton, who chose to take “an intermediate position” in the beer vs. bread controversy, noted that beer “may have resulted from accidental souring of a thin gruel … which had been left standing in an open vessel.” So perhaps humankind indeed owes its effervescent bubbly beverage to some leftover mush gone bad thousands of years ago.
The post Did Humans Once Live by Beer Alone? An Oktoberfest Tale appeared first on JSTOR Daily.
By all accounts the Clean Water Act (CWA), the preeminent federal law protecting water quality in the United States, has been highly successful. The 1972 law has been periodically amended, but the gist is that it limits pollution into surface waters of the U.S. through restrictions and permit requirements. The act does not directly regulate drinking water. Now the Trump administration wishes to significantly weaken the CWA by limiting its jurisdiction, a move cheered by some but bemoaned by many others. Nevertheless, according to April Collaku in Fordham Environmental Law Review, this question of exactly which waters are covered by the CWA is not new.
Upon enactment of the CWA, federal agencies charged with its enforcement saw the law as covering discharges in the “navigable waters of the United States,” which on the face of it sounds like any water that can hold a boat. The reality is more complicated. The CWA itself, in fact, defines navigable waters as “waters of the United States,” which sounds like all water everywhere under U.S. jurisdiction.
Given the ambiguity, this definition has repeatedly found itself under court review. The courts struggled to reconcile the “waters of the United States” language with “navigable waters,” roughly defined as waters used for commerce or travel. Courts have generally expanded that definition to include tributaries of those navigable bodies and wetlands that are adjacent or connected to those navigable bodies.
The rules the current administration is seeking to override stem from a 2006 Supreme Court decision. The decision, known as Rapanos v. United States, left the exact scope of the CWA muddled, with some justices arguing for the expanded navigable waters definition above and others limiting jurisdiction only to permanent bodies of water.
To help end the confusion, during the Obama administration the EPA decided to spell out a clear definition of waters of the United States. The definition hews closely to the expanded definition of navigable waters, but specifies all tributaries and adjacent waters that have a “significant nexus” to navigable waters. This expanded definition included a lot more wetlands, as wetlands adjacent to tributaries were now also included. Certain seasonal streams and wetlands were included under this final definition as well.
This expansion brought clarification, but also controversy. The newly covered waters often came into conflict with private property rights. Some farmers and business interests found themselves occasionally limited in what crops they could plant, or what practices they could follow, next to what they saw as unimportant streams or wetlands.
Now the controversy rolls on, as under the Trump administration, most tributaries and adjacent wetlands will be stripped of CWA protection. Opponents fear that increased pollution will inevitably cause downstream harm. One side effect of the rule reversal is that the CWA is once again operating without a firm definition. More confusion and lawsuits are inevitable.
Editors’ Note: An earlier version of this article stated that the Supreme Court Decision Rapanos v. United States was decided in 2015; in fact it was decided in 2006.
Thanks to the legalization of recreational cannabis in 10 states and the District of Columbia, sparking up a joint in these areas is as easy as ordering a glass of wine.
Spending on legal cannabis topped $12 billion worldwide in 2018, according to industry analysts, and is expected to grow to $31.3 billion by 2022. That spending includes the 33 states and the District of Columbia that allow medical cannabis use for conditions such as glaucoma, chronic pain, and the side effects of cancer treatments.
With all that potential profit on the line, it’s no surprise there is growing interest in legalizing cannabis cultivation. California has issued around 10,000 cultivation permits. Between 2012 and 2016, the number of cannabis farms in the Golden State increased 58 percent and the number of plants increased 183 percent.
While much of the research has focused on public health and criminalization, the environmental implications of commercial-scale cultivation have been largely ignored. Could the increases in cannabis cultivation send the environment up in smoke?
New research has linked production of the once-verboten plant to a host of issues ranging from water theft and degradation of public lands to wildlife deaths and potential ozone effects.
“We have a culture and history of cannabis cultivation in remote areas that may be sensitive to environmental disruptions,” explains Van Butsic, co-director of the Cannabis Research Center at the University of California Berkeley.
In California, the water-hungry crop is often grown in remote, forested watersheds and requires almost 22 liters of water per plant a day during the growing season, which adds up to three billion liters per square kilometer of greenhouse-grown plants between June and October, according to some research. During the low flow period, irrigation demands for cultivation can exceed the amount of water flowing in a river, leaving little water to sustain aquatic life.
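As a rough plausibility check on those figures (a sketch, not a calculation from the research itself; the 150-day length of the June-to-October season is an assumption), they imply a planting density of roughly one plant per square meter:

```python
# Back-of-envelope check of the water figures above.
# ASSUMPTION: the June-October growing season is about 150 days.
LITERS_PER_PLANT_PER_DAY = 22
SEASON_DAYS = 150
TOTAL_LITERS_PER_KM2 = 3e9  # three billion liters per square kilometer

# Implied number of plants that would consume that much water
plants_per_km2 = TOTAL_LITERS_PER_KM2 / (LITERS_PER_PLANT_PER_DAY * SEASON_DAYS)
plants_per_m2 = plants_per_km2 / 1e6  # 1 km^2 = 1,000,000 m^2

print(f"~{plants_per_km2:,.0f} plants per km^2 (~{plants_per_m2:.1f} per m^2)")
```

Under that assumed season length, the stated per-plant and per-area figures are mutually consistent at about 900,000 greenhouse plants per square kilometer.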
Some of the biggest environmental offenders are cultivators operating unpermitted farms on public lands. These “trespass grows” are often in national forests or on tribal lands where water is diverted from streams to irrigate acres of plants. In 2018, there were an estimated 14,000 trespass grows on federal and private lands in Humboldt County, California, alone.
At the Shasta-Trinity National Forest in California, a team from the Integral Ecology Research Center, or IERC, a nonprofit organization dedicated to wildlife conservation, removed more than five miles of irrigation lines that diverted more than 500,000 gallons of water per day to irrigate cannabis plants.
IERC co-director Mourad Gabriel notes that trespass grows are often located near headwaters and have disastrous downstream effects. For example, streams in Mendocino, California, often run dry during the summer when growers are diverting water, decimating populations of coho salmon and steelhead trout. “These are drug trafficking organizations looking to profit off of our natural resources,” says Gabriel.
Unpermitted growers wanting to avoid detection often choose public and tribal lands as prime places to hide their operations. These locations are also pristine wildlife habitats.
The cultivation sites also interfere with the restoration of distressed habitats. Local environmental groups complained that the grows overwhelmed their conservation efforts and, in some cases, disrupted ongoing restorations or made the work more dangerous, according to a 2018 study published in Humboldt Journal of Social Relations. The grows drained and polluted streams, degraded watersheds and killed wildlife.
Trespass grows, which use mass quantities of toxic rodenticides to keep rodents from chewing on irrigation lines, have been linked to the deaths of fish, birds, and mammals. One study found that 79 percent of dead fishers—small carnivorous mammals—collected in California between 2006 and 2011 had been exposed to pesticides at trespass grow sites. The rate continues to increase, according to Gabriel. Mule deer, gray foxes, coyotes, northern spotted owls, and ravens have also been victims of poisoning linked to cannabis cultivation.
“The amount of fertilizers and pesticides we find on one half-acre [of illegal] cultivation plot could be [used on] 1,000 acres of corn—and wildlife are paying the price,” Gabriel says.
It’s not just trespass grows causing environmental issues. Since Colorado stores started legally selling recreational cannabis in 2014, emissions from the 600-plus licensed cultivation facilities in Denver have sparked concerns over air pollution.
William Vizuete, associate professor at the University of North Carolina’s Gillings School of Global Public Health, is working on an air quality model to better understand how commercial cannabis cultivation could affect the atmosphere. His research showed that cannabis plants produce volatile organic compounds, or VOCs, that can produce harmful pollutants.
“If plants produce VOCs, there is a high possibility that under certain conditions, cannabis cultivation could impact the ozone,” Vizuete explains.
Cannabis emits potent VOCs called terpenes that, when mixed with nitrogen oxides and sunlight, form ground-level ozone and harmful aerosols. In a high desert zone like Denver, where normally there are few sources of VOCs, any new source of such pollutants will likely lead to ground-level ozone production, Vizuete notes. He worries that the significant numbers of cannabis plants being grown in that urban environment will become a regular source of VOCs, exacerbating the issue by combining with the manmade nitrogen oxides spewed from the many cars there. (High concentrations of VOCs have been linked to a range of human health issues, from nausea and fatigue to liver damage and cancer.)
To test the potential effects, Vizuete grew four of the 600-plus cannabis strains available in Colorado—Critical Mass, Lemon Wheel, Elephant Purple, and Rockstar Kush—for 90 days and measured the terpenes at each stage of growth. The results showed that in Denver, assuming a concentration of 10,000 plants per cultivation facility, cannabis could more than double the existing rate of annual VOC emissions to 520 metric tons and produce 2,100 metric tons of ozone.
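To put that estimate in scale (a back-of-envelope sketch; it assumes every one of the 600 licensed Denver facilities actually holds 10,000 plants, which the source does not state), the implied per-plant emission rate works out to well under 100 grams of VOCs per plant per year:

```python
# Rough scale check on the Denver VOC estimate above.
# ASSUMPTION: all 600 licensed facilities hold the modeled 10,000 plants each.
facilities = 600
plants_per_facility = 10_000
total_plants = facilities * plants_per_facility  # 6,000,000 plants

voc_tons_per_year = 520  # metric tons of annual VOC emissions, per the estimate
grams_per_plant = voc_tons_per_year * 1e6 / total_plants  # 1 t = 1e6 g

print(f"{total_plants:,} plants -> ~{grams_per_plant:.0f} g of VOCs per plant per year")
```

The point of the sketch is that the headline tonnage comes from the sheer number of plants, not from any single plant being a large emitter.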
Vizuete believes his estimates might be conservative, explaining, “We picked four [cannabis] strains based on their popularity, and their VOC emissions might not be representative of all of the strains. Additionally, in commercial facilities, where conditions are optimized for growth, emissions may be even higher.”
Regulating the production of cannabis can address many of the environmental issues associated with its cultivation, argues Jennifer Carah, senior scientist in the water program at The Nature Conservancy of California.
In California, where up to 70 percent of legal cannabis is grown, the California Department of Food and Agriculture oversees the licensure process, but many counties and municipalities also have the authority to grant cultivation licenses, and, Carah says, the regulations are highly variable. The black market for cannabis also persists: legal cannabis is more expensive to purchase than black-market cannabis, and not all growers are willing to go through the licensing process to become legal.
“The black market is not going away,” Carah admits, “but to the degree that we can entice growers into the legal market, their agricultural practices can be regulated like other agricultural crops, which will go a long way to addressing potential environmental impacts.”
Recently, legalization has put a dent in the number of trespass grows. Illicit cultivation in Oregon forests decreased following legalization.
Some states have established environmental regulations for cannabis growers. California Water Boards require permitted growers to register water rights and follow strict guidelines that include prohibitions on diverting surface water from April to October and irrigating with stored water during the dry season—regulations not imposed on other California-grown crops. In Washington State, the Puget Sound Clean Air Agency requires growers to submit information about their plans for monitoring and controlling air pollution.
Butsic of UC Berkeley argues that federal legalization would also provide new funding opportunities through organizations such as the National Science Foundation and Environmental Protection Agency to allow researchers to assess environmental risks and develop solutions.
From a pollution perspective, federal legalization could set emissions standards.
“There are lots of technologies that capture VOCs before they enter the atmosphere that are required in other industries like gas stations,” Butsic explains. “Before [emissions] standards can be set for cannabis, we need recognition of the issue and long-term data to develop regulatory statutes—and we’re a long way from that because federal prohibition has hindered research and we don’t have the science yet.”
The post The Environmental Downside of Cannabis Cultivation appeared first on JSTOR Daily.
When it comes to removing carbon dioxide from the atmosphere, nothing beats good old plants and their knack for photosynthesis. Photosynthesis is the process of converting sunlight, carbon dioxide, and water into oxygen and the sugars that feed all life on Earth. Ever since photosynthesis was discovered, scientists have been trying to artificially duplicate it. Now a French team thinks they have succeeded, at least on a small scale.
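The conversion described above is conventionally summarized by the net equation for oxygenic photosynthesis (a standard textbook formulation, added here for reference rather than drawn from the article):

```latex
\[
  6\,\mathrm{CO_2} + 6\,\mathrm{H_2O}
    \;\xrightarrow{\text{light}}\;
  \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
\]
```

Carbon dioxide and water, energized by sunlight, yield glucose and the oxygen that plants release as a byproduct.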
In a discussion of the potential avenues for artificial photosynthesis, science writer Katherine Bourzac notes that at its most basic, photosynthesis aims to convert energy and carbon into fuel. It’s really a form of solar power, sometimes called “wet solar,” for those situations when direct production of electricity is impractical. As Bourzac points out, at current levels of technology most of humanity uses liquid or gas fuel for energy. Until this changes, there need to be ways to produce carbon neutral, or near-neutral, fuels.
In both natural and artificial photosynthesis, water molecules are split into hydrogen and oxygen. Plants absorb sunlight into pigments (e.g., chlorophyll), which energizes electrons; these juiced-up electrons are passed through a chain of molecules in a complicated process called electron transport. At the end of the process, the electrons split water.
Without electron transport, which nobody has yet been able to duplicate, splitting water can be a difficult and expensive process. (Unsurprisingly, interest in artificial photosynthesis research tends to rise and fall with oil prices.) The earliest attempts to simulate electron transport involved using sunlight to activate a circuit between platinum catalysts, or materials that facilitate a chemical reaction. Newer methods increase efficiency by running an electric current through the catalysts, developing better solar cells to directly feed energy into the system, and using cheaper metals as catalysts.
If the goal is to produce hydrogen, which can be used as fuel, the process can stop there. However, there are cheaper ways to produce hydrogen, so the end goal is usually to keep going and create hydrocarbon fuels such as methane or butane, ideally using waste carbon from emissions. Plants make fuel for living things, so the desired result is conceptually similar. The trouble is that plants use complicated enzymes to create sugars from carbon, and the enzymes are damaged by the process. Plants easily repair their enzymes, but constructed materials are harder to fix. Several ongoing efforts have been trying to work around the problems by using solar electricity to split water and various bacteria to create the hydrocarbons. This approach is sometimes called a “living catalyst.”
The French researchers claim to have a fully artificial system, creating ethylene and ethane, both potent fuels, using the common mineral perovskite. The French system, like all the others, is still a prototype in a lab and a long way from being a functional way to produce sustainable fuel at scale—but it’s a promising start.
The plastic debris floating in the world’s oceans entangles wildlife, kills birds that swallow it mistaking it for food, and seeps into the food chain, showing up in fish that humans eat. A major threat to the ocean, this plastic pollution is estimated to cause more than $13 billion in economic damage to marine ecosystems each year. Turns out, it also disturbs important ocean bacteria, and the effects are far more profound than one may think.
Dispersed throughout the upper 200 meters of ocean, abundant and important bacteria known as Prochlorococcus govern many processes that happen not only inside that water, but also on land. These tiny green species have been called the ocean’s invisible forests because they perform similar functions to what plants do on earth. Prochlorococcus are photosynthetic organisms—they use sunlight to convert carbon dioxide into oxygen, adding it to the atmosphere like miniature plants. They are a type of Cyanobacteria, which were the planet’s first oxygen-producing creatures—so they are essentially the ancestors of today’s plants.
Prochlorococcus may be the most plentiful microorganisms on the planet. They produce nearly ten percent of all the oxygen we breathe. A single drop of ocean water can contain 20,000 of them. But despite their abundance, they managed to evade modern science until the late 1980s, when Sallie W. Chisholm from the Massachusetts Institute of Technology and Robert J. Olson from the Woods Hole Oceanographic Institution first discovered the species. Passing ocean water through a flow cytometer—a device that detects and measures physical and chemical characteristics of particles suspended in fluids—the scientists noticed the cells, which were later named Prochlorococcus.
Because there is less sunlight deeper in the water column, Prochlorococcus evolved to harvest light very efficiently. It is also a major player in the global carbon cycle. But scientists at Macquarie University in Australia have found that these bacteria are susceptible to plastic pollution, which interferes with their growth, functioning, and oxygen production.
Working in lab settings, the team exposed two strains of Prochlorococcus, which live at different depths in the ocean, to chemicals leached from two common plastic products: polyvinyl chloride (PVC), and plastic grocery bags, which are made from high-density polyethylene.
The scientists found that exposure to these chemicals impaired Prochlorococcus’s growth and function, including the amount of oxygen they produce. It also altered the expression of many of the bacteria’s genes. “We found that exposure to chemicals leaching from plastic pollution interfered with the growth, photosynthesis, and oxygen production of Prochlorococcus,” says Macquarie University researcher Sasha Tetu, lead author of the study published in Communications Biology. Moreover, the higher the concentration of the plastic pollution, the lower the density of the bacterial population. The team’s next step would be to take their experiments out to sea and see how Prochlorococcus fare in nature. “Now we’d like to explore if plastic pollution is having the same impact on these microbes in the ocean,” Tetu says.
Is oxygen deprivation looming ahead? Diminished oxygen production could have a ripple effect through the earth’s ecosystems. “Plastic leachate exposure could influence marine Prochlorococcus community composition and potentially the broader composition and productivity of ocean phytoplankton communities,” the authors note in the study. We won’t start suffocating any time soon, but the magnitude of these changes isn’t yet understood.
Currently, most of Baltimore’s household trash is burned in a massive incinerator. Faced with stricter air pollution regulations, however, the incinerator has not been permitted to renew its contract, and by 2021 it will either have to invest in more expensive pollution-scrubbing technology or close down.
Municipal waste incineration has been around since 1875, when the first “destructor” was patented. According to environmental law scholars Sara Imperiale and Wang Pian Pian, writing in the Vermont Journal of Environmental Law, incineration is now widely practiced in developed countries such as Denmark and Japan, alongside composting and recycling.
Imperiale and Wang looked at waste disposal in China, where a vast population and a rising middle class are generating unsustainable amounts of waste. Many of China’s urban landfills are out of space, leaving municipalities scrambling for alternatives such as incineration.
The main advantage of incineration is that it saves space—it’s no accident that it is so common in small countries where available open space for landfill is severely limited. In many cases waste heat from the incinerator is also captured and used to produce energy. One disadvantage is air pollution, especially toxic dioxins from improperly burned trash. Incineration also leaves behind a noxious residue known as bottom ash. In many developed countries, however, this bottom ash is recycled for use in paving, fill, or building materials.
In addition to air pollution concerns, incinerators are also noisy and attract trucks full of trash, so generally, people don’t want them too close to their homes. Unfortunately, incinerators are typically located in low income communities that lack the political influence to resist. In areas with lax regulations, such as China, poorly-regulated incinerators located too close to residences in underserved communities can lead to documented health and safety concerns. Legal remedies exist sometimes, but not always. Often communities get stuck in a situation where the presence of one polluting industry attracts others.
Modern pollution-capture technology and good management can reduce the negative impacts of trash incinerators. Upgrading the Baltimore facility is one option; scrubbers and other technologies can reduce the air pollution the incinerator produces. But such upgrades are expensive and may not mollify neighbors. A truly zero-waste incinerator does not exist.
Urban parks and gardens help city dwellers stay connected with nature. Then there is the growing trend of gardening within one’s living space—no matter how small. These urban gardens form their own unique ecosystems. More than just collections of houseplants, these urban mini-gardens, if done right, can be lush and green even in the tiniest spaces—in courtyards, on balconies, or inside living rooms.
If you live in a large building with an unpaved courtyard, you’re in luck. You can easily arrange a few flowerbeds or vegetable patches, planting right into the ground. If your courtyard is paved, a few raised beds filled with topsoil from a store might be an easier solution.
According to Kate Smalley of Small Spaces Garden Design, when choosing what to plant, it’s important to take into account your environment, assessing what’s around or what may be growing in your neighbors’ yards. If you are planting into the ground and your neighbors have tall, sprawling trees, those trees will provide shade on hot summer days, but will draw moisture from the soil. And if your courtyard is very narrow or has tall fences, getting enough sunlight may be a challenge. The buildings or boundary fences can cast a shadow over your beds, blocking light and rainfall. In these cases, shade-loving plants may be a good choice. If your beds aren’t getting enough rainwater, you will have to water them by hand. Another option may be installing a drip irrigation system, possibly connected to a battery-operated timer to assure that the plants get water regularly.
No matter how small front porches and balconies are, they can fit a few pots, whether on the floor or hanging from the rails. Unless the space is covered, your plants will likely receive some sunlight and rainfall. If you are planting on a balcony, which is elevated by definition, your miniature garden will likely be more exposed to the drying effects of the winds. Placing your pots in saucers with water isn’t a good solution because it stops oxygen from getting to the root zone and roots need air just as much as they need water. Using bigger planters would help preserve moisture, and also allow you to grow some small trees. When planting on a balcony you have to consider the weight of your miniature garden. If you are aiming to use larger pots or troughs, opt for lightweight containers, such as fiberglass rather than concrete or stone.
Permaculture experts Dan Palmer and Adam Grubb suggest using wicking beds—containers that water the plants from the ground up by maintaining a layer of water at the bottom, which slowly wicks upward. Wicking beds are a perfect solution for busy horticulturists who don’t always have time to water their gardens, and for those who travel frequently.
If you have no outdoor options, you can green your inside space. Who says you can’t have an herb garden on your wall? Vertical horticulture goes back to the Hanging Gardens of Babylon. More recently, living walls have become a popular trend, partially in response to the increasing urban population density.
Small planters can be attached to the walls. This article in ReNew: Technology for a Sustainable Future suggests Woolly Pockets made of recycled polyethylene, which can sustain herbs that have shorter roots. An assemblage of such wall-mounted Woolly Pockets can grow a variety of edible herbs, including oregano, thyme, parsley, basil, and rosemary. Non-edible plants that would look good on a wall are ground covers and grasses, especially of different colors and foliage shapes. Ferns and some flowers, such as fuchsia and begonia, can make your wall even more picturesque by adding color and blooms.
Because small pots dry out quickly, vertical gardens are best combined with an automatic watering system that is programmed to supply water to each pocket for a few minutes a day. You will also need to keep all that moisture away from the wall, so stretching a piece of plastic between the wall and the planters is a good idea.
The post Three Ways to Turn Your Apartment into a Sustainable Garden appeared first on JSTOR Daily.
In 1909, American agricultural scientist Franklin Hiram King embarked on a journey to Asia. Having been chief of the Division of Soil Management in the USDA Bureau of Soils from 1902 to 1904, King was concerned about the United States’ rapidly deteriorating soil health. He wasn’t alone. The 19th and 20th centuries were marked by an increasing demand for food from the growing populations of Europe and America, along with decreasing soil productivity. But in Asia, the situation was starkly different. Chinese farmers grew their crops on the same land year after year, but their soil never seemed to lose its fertility.
“At the time, the US was still a relatively young nation and there was already lots of concern about the loss of fertility of soils,” says Joseph Heckman, professor of soil science at Rutgers University. “King was concerned about it and he heard that things were different in Asia.” He hoped to learn how farmers there were able to keep soil healthy and productive over the long term.
King called the farming approach he observed in Asia “permanent agriculture,” emphasizing the fact that the soils remained fertile over thousands of years. Today we have a different name for it: sustainable farming. This term may sound like a buzzword of the last few decades, but farmers in China, Korea, and Japan had practiced this form of food production successfully for many generations. King actively advocated implementing similar ideas in the Western World. “They really had a form of sustainable agriculture, even though they never used that word,” Heckman says. If King’s ideas had taken hold, the U.S. might have been farming sustainably throughout the entire last century. But a powerful force was already turning against him.
By the early 20th century, scientists already knew that plants needed several vital chemical elements to grow, such as nitrogen, potassium, and phosphorus. As plants grow, they take these nutrients from the soil to support their cellular tissues and processes. After a few years, these vital minerals get used up. Once soils become deficient in these elements, their productivity drops—they simply can’t give plants enough building materials. So chemists argued that adding these minerals to the soil would help restore yields. But while potassium and phosphorus could be mined in a number of places in the United States and Europe, nitrogen was a problem. Apart from one source—Chilean saltpeter, which provided roughly 60 percent of the global need for nitrogen throughout the 19th century—there were no mines from which to harvest it. As a gas, it is abundant in the atmosphere, but there was no easy way of gathering it from the air. At least not until two scientists in Germany, Fritz Haber and Carl Bosch, figured out how to do it.
It was Haber who first successfully produced the reaction in his lab. A very stable gas, atmospheric nitrogen doesn’t react with other elements easily, so Haber used high pressure and temperature, plus a special catalyst, to get the reaction going. At 500 degrees Celsius and under pressures of 150 to 200 atmospheres, Haber managed to fuse nitrogen and hydrogen into ammonia, a fertilizer now used worldwide. Bosch, who was a chemist and an engineer, helped scale up the process, and in the early 1920s, synthetic ammonia use became widespread. Germany was producing 350,000 tons of nitrates annually, and was planning to reach 500,000 tons a year. There was no shortage of buyers. France alone consumed 70,000 tons of nitrogen a year, 80 percent of which was imported at a cost of 500,000,000 francs. Grabbing nitrogen out of thin air wasn’t just a boon for agriculture, it was also a lucrative business.
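The chemistry Haber worked out can be summarized in a single equation. This is the standard textbook form of the overall ammonia synthesis (it does not appear in the article itself), with the operating conditions described above noted alongside:

```latex
% Overall Haber-Bosch ammonia synthesis: stable atmospheric nitrogen
% is combined with hydrogen over an iron catalyst at high temperature
% and pressure.
\[
  \mathrm{N_2} + 3\,\mathrm{H_2} \;\rightleftharpoons\; 2\,\mathrm{NH_3}
  \qquad
  \bigl(\approx 500^{\circ}\mathrm{C},\; 150\text{--}200\ \mathrm{atm},\ \text{Fe catalyst}\bigr)
\]
```

The reaction is reversible, which is why the high pressure matters: pushing four molecules of gas into two favors the ammonia side of the equilibrium.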
Consequently, the agricultural community divided into two camps. The organic camp believed that soil nutrients should be replenished by returning the biological matter back to it. The chemical camp embraced the idea that saturating land with synthetically made fertilizer would be good enough.
The second camp gained popularity. Farming with synthetic nitrogen was easier. Plants grew faster and bore more fruit. Using factory-made nitrates was cleaner than using manure and less labor-intensive than recycling agricultural refuse. As a result, more and more farmers switched to inorganic fertilizers and demand grew. The ability to fix atmospheric nitrogen into usable substances was so crucial for feeding the earth’s growing population that Haber received a Nobel Prize for his invention in 1919. The Haber-Bosch process is credited as the method that helped humankind battle hunger and sparked the planet’s population explosion.
Nonetheless, some scientists viewed the idea of synthetic fertilizers as deeply flawed.
They argued that soil health—which influenced the health of the plants it grew and of those who ate them—was more complex than a mix of three fertilizing chemicals. In the 1940s, British scientist Albert Howard published several books arguing this point. In An Agricultural Testament, Howard introduced “The Law of Return,” arguing the importance of returning various waste materials to the soil to build and maintain its fertility. Synthetic fertilizers, he insisted, broke the Law of Return, and would eventually cause the soil to deteriorate. In another book, Farming and Gardening for Health or Disease, he argued that disease—whether in plants, animals, or humans—was caused by unhealthy soil and that using organic farming techniques would keep the soil, and those living on it, healthy.
In the United States, the organic movement enjoyed an unexpected boost during World War II. As the country needed more food, the federal government encouraged Americans to grow fruits and vegetables in their backyard gardens. About 80 percent of the population responded, and in 1943, these gardens produced 40 percent of American produce. During the war, there was little synthetic fertilizer available because the nitrates were used to make munitions, and food growers needed other types of soil nourishment. So when Jerome Rodale launched his Organic Gardening magazine, which encouraged growing food without chemicals, it quickly became popular, fueling interest in organic agriculture.
However, when the war ended, so did the shortage of nitrates. With no more explosives and other war agents to produce, nitrogen manufacturers switched to making chemical fertilizers and pesticides, advertising the new, chemical ways of farming. For Howard, that looked like declaring a war on soil itself. In 1946, he published another book, The War in the Soil, in which he vehemently criticized the burgeoning big farming business. “The war in the soil is the result of a conflict between the birthright of humanity—fresh food from fertile soil—and the profits of a section of Big Business in the shape of the manufacturers of artificial fertilizers and … poison sprays to protect crops.” Nonetheless, the immediate benefits of fertilizers and pesticides, such as yields and convenience, overshadowed concerns. Consequently, large-scale industrial agriculture took over not only in produce growing, but also in animal farming, leading to concentrated animal feeding operations or CAFOs.
In the years that followed, humankind learned that heavy use of agricultural chemicals comes at a price—from plummeting bird populations to degraded food quality to human health issues. And CAFO operators received continuous criticism for polluting air and water, animal mistreatment, and antibiotic overuse. CAFOs also caused more soil deterioration. Instead of putting animals on natural pasturelands, CAFOs feed them with corn and soy. But these crops are often grown with fertilizers and pesticides. And because they grow over a few short months, leaving soil bare the rest of the year, they cause erosion rather than building soil organic matter—unlike the grasses that comprise pasturelands. The latest research finds that cows on pasture can help sequester carbon and thus help mitigate climate change.
Studies also found that food produced by animals raised on pastureland is more nutritious. Compared to industrially produced animal foods, organic milk has a better balanced omega-3 to omega-6 ratio, and organic eggs contain more vitamins A and E, Heckman notes. “There’s quite a bit of evidence that organic foods from animals raised on pasture are uniquely different,” he says, adding that having animals on pasture also helps maintain healthy soils, replenishing organic matter and preventing erosion.
In the early 20th century, Franklin Hiram King lost the battle against chemical agriculture, but can it be fought again and won in the 21st? It’s a matter of government policies, Heckman says—and often the policies favor big business while limiting small organic farms’ ability to reach consumers. “When certain government policies make it difficult for farmers and consumers to have access to these foods,” Heckman says, “it essentially shuts down those kinds of good farms. And by doing so, it shuts down good soil management, too.”
The post Chinese Peasants Taught the USDA to Farm Organically in 1909 appeared first on JSTOR Daily.
Every city needs its trees. They reduce air pollution. They cut down on wind. They absorb and store carbon in their massive trunks. In summer, their canopies provide welcoming shade, cooling the grounds and houses underneath, which saves energy and money on air conditioning. But it turns out that trees can perform yet another service essential for humans. They can also help reduce soil pollution caused by nutrients leaching into the grounds from commercial fertilizers used in agriculture, gardening, and lawn care.
First developed in the early twentieth century, commercial fertilizers helped humankind increase food production, fight famines, and make harvests more dependable. But these benefits came at a price. Unlike organic fertilizers that decompose slowly, such as manure or compost, commercial fertilizers are available to plants immediately. Consequently, they also wash out quicker, leaching into ponds, rivers, and lakes. This over-fertilization causes the infamous algae blooms, which decrease water oxygen levels and smother aquatic life. Many urban waterways suffer from over-fertilization by the excess nitrogen and phosphorus. In the summer, these waterways turn green, filling the air with an unpleasant stink.
Scientists at the University of Minnesota were wondering if planting more trees in cities might help with that problem. They ran a study to evaluate whether trees could absorb some of the excess nitrogen and phosphorus that percolate through the soil rather than let it reach the waterways.
Using lysimeters—devices that can measure the volume and chemical characteristics of water in soil—scientists assessed the amount of nitrogen and phosphorus trickling underneath thirty-three trees of fourteen different species in city parks in Saint Paul, Minnesota. To compare whether trees can absorb nutrients better than treeless urban areas, the scientists also took similar measurements in turfgrass spots. Installed at about two feet deep, the lysimeters collected ground water samples for over two years, except during winter and drought periods, measuring dissolved organic carbon, nitrogen, and phosphorus.
It turned out the results were nutrient-specific. Compared with turfgrass, trees removed a similar or higher amount of nitrogen the first year, but not the second. In fact, turfgrasses seemed to absorb more nitrogen than trees during the second year—although the authors note that the results may have been affected by droughts followed by heavy rains, which could have changed nitrogen fluxes.
But trees—and particularly deciduous species—did a good job of eliminating phosphorus from the ground. “Trees reduced P [Phosphorus] leaching compared with turfgrass in both 2012 and 2013, with lower leaching under deciduous than evergreen trees,” the team wrote in the study. Moreover, compared to industrial solutions for phosphorus cleanup, trees also proved to be a cost-efficient option. When the team applied their measurements to the Mississippi River’s urban watershed, which includes about 1.5 million trees, they found that the trees helped achieve significant infrastructure savings. In 2012, the forested grounds prevented half a ton of phosphorus from spilling into the water, and the next year that amount more than doubled. That saved several million dollars in infrastructure costs. “Removing these same amounts of P [Phosphorus] using stormwater infrastructure would cost $2.2 million and $5.0 million per year (2012 and 2013 removal amounts, respectively),” the authors wrote.
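As a rough sanity check on the figures quoted above, the implied per-kilogram cost of phosphorus removal comes out nearly identical for both years. The tonnages here are my own reading of the article’s wording (“half a ton” and “more than doubled”), so treat them as approximations rather than the study’s exact numbers:

```python
# Back-of-the-envelope arithmetic on the article's phosphorus figures.
# Masses are approximate readings of "half a ton" (2012) and
# "more than doubled" (2013); costs are the quoted stormwater figures.
p_removed_kg = {2012: 500, 2013: 1100}
infra_cost_usd = {2012: 2_200_000, 2013: 5_000_000}

for year in (2012, 2013):
    per_kg = infra_cost_usd[year] / p_removed_kg[year]
    print(f"{year}: ~${per_kg:,.0f} per kg of phosphorus removed")
```

That both years work out to roughly $4,400–4,500 per kilogram suggests the two dollar figures were derived from a single unit cost for stormwater phosphorus removal.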
Because the measurements varied significantly from one tree to another, it wasn’t clear which species were best at removing phosphorus. “At this time, we cannot confidently recommend tree species that would most reduce nutrient leaching,” the authors wrote—but overall, creating forested areas would be an important factor in removing excess soil phosphorus. “While the effect of any individual tree is fairly small, their aggregate effect can be important.”
It’s been eight years since the 2011 Tōhoku earthquake and tsunami damaged the Fukushima Daiichi nuclear power plant. The resulting damage led to hydrogen explosions and a partial meltdown, releasing radiation into the surrounding area. After workers’ brave efforts stabilized the situation, they began to focus on long-term cleanup. The cleanup recently reached a major milestone when workers began removing nuclear fuel rods for disposal. But how do you really clean up a nuclear accident?
The difficulty of the task depends on the severity of the accident. According to Richard Stone in Science, when it came to Chernobyl, the worst nuclear accident in history, containing the damage was paramount. The Chernobyl explosion led to a massive fire, spreading a radioactive plume across Europe. Extinguishing the fire was the immediate priority. Workers ran onto a fiery roof to shovel burning radioactive material down into the ruined core, while aircraft dumped clay and sand on the fire. Finally, liquid nitrogen was pumped in to cool the fuel from a tunnel dug underneath the core. There was little time to consider long-term containment: the entire structure was buried under a massive concrete “sarcophagus” and then the entire region was permanently evacuated. A new structure was recently erected over the sarcophagus so the sarcophagus, itself radioactive, could be safely disposed of.
The Fukushima accident was not as catastrophic as Chernobyl, but the challenges remain daunting. Workers have had to contend with the radioactive fuel rods themselves, containment ponds full of contaminated water, plus the contaminated ruins and surrounding soil. Because even protective gear cannot fully block radiation exposure, workers can spend only limited time around the fuel rods. Using a remotely controlled machine, workers will transfer the rods into an underwater cask before sealing, cleaning, and transporting the containers to a storage building.
The major challenge, according to Science Asia correspondent Dennis Normile, is the sheer volume of contaminated material. Most of the airborne radiation and some of the water spread into the ocean, leading to closures of fisheries. While this process was far from ideal, it did quickly dilute the radiation. However, millions of tons of topsoil and other material had to be carefully scraped away. The contaminated material was literally stacked in garbage bags until it could be stored in special clay-lined landfills.
To deal with the massive volume of material, a variety of novel ideas and technologies have been tested. Astronomers used equipment normally used to detect cosmic radiation to try to find the most radioactive areas. New technologies, including chemical and biological processes, were employed to separate out the radioactive material and reduce the amount that needed special disposal. While many of these techniques seem to work, many people remain uncomfortable with the idea of living or working near decontaminated fill.
Renewable power technologies such as wind and solar are becoming economically competitive with fossil fuels. As ecological need and economic reality converge, renewables are going to make up an increasingly large percentage of the world’s power supply. It’s a necessary technological transition. But at the same time, renewables have a downside that needs to be addressed: rare earth elements.
According to environmental attorney Christopher “Smitty” Smith, rare earth elements are used in virtually all electronics. This includes solar panels, which require rare earth metals such as yttrium or europium, and wind power, which uses vast quantities of neodymium in the magnets that help convert wind energy to electricity.
Rare earth elements are actually fairly abundant in the Earth’s crust. But these metals are typically found in extremely low concentrations. That means a lot of destructive mining for minimal effect. Rare earth mining produces large quantities of contaminated mine waste, creating a disposal problem.
Additionally, many of the most productive rare earth mines are in countries with weak environmental regulations. Political instability and national security concerns provide further risks to long-term supply. Accordingly, Smith writes, recycling these elements is big business, employing thousands of mostly unskilled workers worldwide. Formal, supervised recycling processes are needed to safely dismantle and recycle the materials, in contrast to the informal recycling systems that are currently in place. These informal systems are cheaper but may expose workers to health risks. Even so, the formal and informal economies often work in tandem. Smith suggests charging a fee upfront so electronics producers and consumers have to pay for the costs to properly recycle these products. The most efficient solution would be to reuse electronics when possible. One positive trend is that reuse is becoming more and more common. Smith suggests a global collection system to maximize reuse and keep electronic waste away from smugglers and illegal disposal.
Another idea suggested by scholars Robert U. Ayres and Laura Talens Peiró is maximizing “material efficiency” for rare earths and other crucial materials. Currently, when high grade ore is mined, lower quality ores or unwanted side materials, the mining equivalent of fisheries’ bycatch, are excavated but not used. In addition to recycling, finding uses for these mining byproducts could potentially reduce waste in the electronic and renewable energy sector. One drawback is that some of these lower-grade materials are energy intensive to collect, given the low yield.
In the short term, the risks of mining rare earth elements do not outweigh the benefits of renewable power, but improvements are needed. And when it comes to each individual’s impact, forgoing that latest smart phone upgrade may just be the best thing you can do.
Most city dwellers are familiar with the concept of the urban heat island. The concrete that covers a city, along with the dark asphalt on the roads, absorbs the sun’s heat and radiates it back into the streets. On a sunny day, a city can be several degrees warmer than the surrounding countryside. The extra warmth persists after sunset, as the daytime heat stored in these surfaces continues to be released. As the climate in general gets warmer, those few extra degrees can pose potentially fatal health risks. Aware of the danger, cities have been experimenting with a variety of techniques to cool things down. One New York City councilman recently proposed coating the streets with a reflective material designed to deceive spy satellites.
“Cool materials” are especially promising, and there is evidence that they work. A multi-year study in Athens, Greece, by A. Synnefa, A. Dandou, M. Santamouris, M. Tombrou, and N. Soulakellis modeled the impact of simply using lighter-colored building materials. They discovered that the heat island could be reduced by two degrees Celsius. Their study showed that light materials can increase the reflectivity of the city, or albedo, so less heat is stored. Furthermore, as the ambient temperature lowers, less energy is used for central air and other cooling measures, improving air quality and reducing the use of heat-producing machinery. The study only considered the impact of lightening the city’s rooftops; lightening other surfaces such as walls or roads would presumably be even more effective.
Of course, not every source of urban heat can be mitigated through building materials. In the Athens study, one of the biggest sources of urban heat was traffic. According to Sachiho A. Adachi et al., creating more compact urban centers with accessible public transportation should lead to reduced urban heat islands. Not only would there be less traffic, but compact living quarters would require less energy to cool than large suburban homes. Unfortunately, while compact cities have lower per capita emissions, simply having so many people in such close quarters immediately increases the heat island effect compared to suburban design. Accordingly, Adachi et al. recommend planning future development with an eye toward mitigating heat islands by setting aside space for trees and green spaces in compact cities.
Of course, implementing these strategies is easier said than done. To truly make heat island mitigation a reality, urban planning scholar Jason Corburn suggests re-thinking our entire approach to climate change science. Corburn argues that preparing for climate change at a local scale, rather than globally, is more likely to result in immediate action. Climate scientists can work with builders and urban planners at the local level in a way that’s basically impossible at the global level.
On wet spring nights, salamanders and frogs emerge from under rocks and logs across the United States and make their annual trek to seasonal puddles of water known as vernal pools. Traveling 100 meters or more, the amphibians slip into these small ponds that form each year in the same location. They are there to breed and the vernal pools are the only places they can do that.
It’s easy to mistake vernal pools for random puddles on the forest floor or flood plains, as they hold no fish and dry up during the summer and fall. But in the past few decades, ecologists have learned that the water is essential for a range of forest species. The more researchers discover, the more they realize how little protection the pools have from development. “I’m very concerned about the future of so-called isolated wetlands,” says Aram Calhoun, a wetland ecologist at the University of Maine. “They’re not isolated in any way—but that’s what they’ve been called.” They cannot be paved over without consequence. Calhoun and other researchers have led efforts in their own states to find all the vernal pools, and make sure the ones most essential to larger ecosystems are allowed to dry out and refill for decades to come.
Scientists first recognized vernal pools as ecosystems in the 1930s, but it took years to understand that they were breeding destinations for native crustaceans, such as fairy shrimp and hyperlocal salamander and frog species. Since then, ecologists have found that other species know when the protein-rich eggs and just-hatched amphibians will be available to eat, says Calhoun. “The more research we do, the more we find that these [vernal pools] are little fast food joints.” Animals drop by for dietary boosts when resources are scarce, like when other waters are still frozen. California researchers have documented dozens of bird species dipping into vernal pools for a snack. In Maine, bears come by to drink water, female bullfrogs visit as a respite from the males, and turtles burrow in the remaining mud at summer’s peak.
For many areas, this research came too late. Developers have filled in much of the shallow water, and loggers, by removing trees, have shifted soil and disrupted the water flow that fills seasonal pools from the ground below. The EPA estimates that less than ten percent of California’s vernal pools still exist. Many other states will never know how many pools have disappeared, in part because they never knew how many they had to begin with, since the depressions are often assumed to be unpredictable runoff ponds.
Meanwhile, the remaining pools have little protection because there is no independent federal regulation specifically for wetlands. Countrywide oversight of vernal pools comes via the federal definition of “waters of the United States,” explains Calhoun. Last December, the White House announced a plan to narrow the current definition and focus only on waters that touch one another, which would leave the vast majority of vernal pools at the mercy of state regulations. Local laws defending the features may not even exist, let alone protect the most biodiverse and ecologically important pools. “They want hydrological interactions,” says Calhoun. “We ecologists think of ecological interactions.”
These tenuous protections made state agencies, conservation nonprofits, and academic ecologists realize they have to forge their own pool rules. “With that lack of information and lack of protection, we feel a sense of urgency of wanting to address this and get people more aware that there are vernal pools out there and why they’re important,” says Yu Man Lee, conservation scientist with the Michigan Natural Features Inventory out of Michigan State University Extension.
Her state is working on finding the pools. Since 2011, Lee, her colleagues, the Michigan Department of Natural Resources, and private conservation groups have been scanning aerial landscape photographs for indicative watery patches. Candidates are most visible in the spring when trees are leafless and the depressions in forest grounds are full. For swathes of the state densely covered by evergreens, the researchers are waiting on fly-over radar scans to detect pools below the pine needles. Lee estimates they have reviewed only about ten percent of the state’s forested landscape for pools, mostly because the team doesn’t have a budget to collect data and is limited by availability of images produced by other agencies.
As they scan the landscape, the researchers are also deciding what depressions count as vernal pools. “What’s the criteria we’re going to use for saying which ones we’re going to protect, and how do we decide?” Lee asks. As she and her colleagues learn more about what Michigan pools look like, where they form, and what wildlife they contain, the team revises their interpretation. “I thought when we started this, we would have been able to bang that out right off the bat,” says Lee, but “it’s getting more complicated than we thought.”
In Maine, Calhoun and her collaborators are glad to be done writing their own definition of vernal pools—it took them ten years to find the perfect phrasing. Since then, Calhoun and her team of researchers have focused on motivating residents to protect vernal pools.
Calhoun first attracted the public’s interest in the subject in 2007, when she helped write state laws that required developers to get permits before building over some vernal pools. That triggered a wave of questions. “There’s nothing like a state regulation to get people interested,” Calhoun says.
Her early efforts to educate the public backfired. As she started giving presentations, she quickly learned that photos of the snakes and amphibians in the pools made people recoil. “I hadn’t realized that people don’t think snakes are wonderful,” she said, so the team switched to the characters that Mainers love. Instead of snakes, they now talk about a female moose that, for three consecutive springs, stepped over her lab’s research fences on Sears Island to snack on greenery in the pools.
Towns started warming up to the idea of protecting the ecosystems, but one economic advisor asked what the 2007 law might be costing municipalities—could it be impeding economic growth? That inquiry set off a plan to create new, optional vernal pool protection guidelines, ones towns could adopt if they wanted stronger preservation measures while also keeping developers interested in their properties.
Crafted with builders, economists, and biologists, the agreement lets developers bypass the pool permits in designated zones, and instead, give a portion of the land’s value to a trust that will permanently preserve vernal pools in more rural areas. Two towns agreed to be the trial cities, and eight more are interested, says Calhoun. Since Mainers take pride in the state’s rural character but also want to encourage economic growth, this compromise tactic could appeal to a lot of municipalities, Calhoun adds.
Other states with fewer habitats to preserve have chosen different vernal pool protection strategies. In New Jersey, the Department of Environmental Protection has started building under-road tunnels that let animals complete their yearly migrations without getting squashed by cars. Most people might not notice driving over an amphibian, says Brian Zarate, a senior zoologist with the department’s endangered and threatened species program. “When you run over a frog or salamander with your car, you probably don’t even realize you’re doing it half the time, especially on a rainy February or March night,” he says. But about twenty years ago, Zarate and his colleagues heard one-off reports of people seeing hordes of salamanders and other species crossing Jersey roads. After observing these yearly migrations, the department realized amphibians were crossing particular roads built between their normal habitat and their vernal pools.
While the state has land set aside for conservation and mapped several thousand pools since the early 2000s, discovering these crossings presented the department with a new problem—how to deal with the fact that animals need to travel over areas that have been paved over by humans who use the roads for their own transport. To keep frogs and salamanders out of harm’s way, the department is building several passageways in key migration areas that go underneath the roads, and will put fencing around the roads to help corral the animals into their dedicated tunnels.
Each project is expensive, financially and time-wise, as there can be several property owners in charge of land along the route, says Zarate. That’s part of why the team is ensuring that each tunnel, some of which carry price tags in the six figures, connects amphibians not just to their favorite vernal pools, but to areas likely to be excluded from development. “We are cautious about investing that kind of money for a road structure project when five years from now, we’re just going to be leading wildlife into a parking lot or into a housing development,” he says. Several other states in the northeast have observed how roads disrupt vernal pool access and built similar structures, and it’s possible even more states will follow suit as their growing infrastructure starts to resemble New Jersey’s, Zarate points out.
Though Michigan, Maine, and New Jersey are at different stages in their vernal pools protections, they’re ahead of many states that aren’t considering these ecosystems. Oftentimes, states take their conservation cues from federal government standards, says Calhoun, which is part of why she’s concerned about the recently-loosened national laws. “Without that stick, a lot of states just don’t bother doing anything,” she says. So little national guidance makes it hard to advise states on how to get vernal pool protection and awareness started, says Calhoun, but she does have one recommendation: Bring anyone who thinks they’re opposed to regulation to the pools. That’s what she did for all the developers she worked with. As she watched these people in suits hold animals on the edge of the water, she knew it worked. “They fall in love all over again with what they did when they were 5 years old,” she says. “They come out different than when they went in.”
The post Salamanders Crossing: This Way to the Vernal Pool! appeared first on JSTOR Daily.
Until 1936, scientists believed that the Earth’s core was one big molten sphere. The measurements of seismic waves taken by different observatories around the globe, however, just didn’t add up mathematically. Then a woman seismologist and mathematician got to the problem’s very core.
Born in 1888 in Copenhagen, Denmark, Inge Lehmann had an unusual upbringing for her time. With a father who pioneered the study of experimental psychology in Denmark, a grandfather who laid out the first Danish telegraph line, and other well-educated relatives, she was raised in a progressive family.
Lehmann’s parents sent her to an enlightened co-educational school run by Hannah Adler, who was Niels Bohr’s aunt. The school treated boys and girls alike, so Lehmann learned from a very young age to think of both genders as equals. Her mathematics teacher particularly stood out, sometimes encouraging Inge’s interest by giving her special problems to solve. “No difference between the intellect of boys and girls was recognized, a fact that brought some disappointments later in life when I had to recognize that this was not the general attitude,” Lehmann later wrote.
Lehmann studied mathematics at the University of Copenhagen and later at Newnham College, at the University of Cambridge. While in England, she discovered different attitudes towards women than what she was used to in Denmark. But she enjoyed her stay “in spite of the severe restrictions inflicted on the conduct of young girls, restrictions completely foreign to a girl who had moved freely amongst boys and young men at home.” When she returned to Denmark, she studied actuarial sciences and eventually graduated from the University of Copenhagen in 1920.
She fell into her seismology career by chance. In 1925, she was appointed assistant to professor N.E. Norlund, who was planning to install seismographic stations near Copenhagen as well as in Ivigtut and Scoresbysund in Greenland. A few years later, she passed an examination in geodesy at the University of Copenhagen and became chief of the seismological department at the newly-established Royal Danish Geodetic Institute.
At the time, observatories around the world collected routine readings of seismograms for each substantial earthquake and mailed them to the International Seismological Summary at Kew, England. Knowing the relative accuracy of measurements from each station was crucial for making geological inferences.
When Lehmann began her research, she realized that the determination of earthquake epicenter parameters was not reliable. The new and improved seismographs of the 1920s and 1930s enabled her to spot additional seismic waves that were recorded, but could not be explained with the existing model of the single molten core. To minimize the reading errors in reported arrival times of seismic waves, she visually correlated wave forms between different seismograms. The measurements’ inconsistencies made her think that the existing vision of the earth’s structure was incorrect. She concluded that earth must have a solid inner core surrounded by a molten outer core—which, according to her mathematical calculations, would account for the inconsistencies.
“The most important result arrived at was that the presence of a distinct inner core was required for the interpretation of some phases recorded at great epicentral distances,” she later wrote. Other scientists tested her model and arrived at the same conclusion.
Lehmann became one of the pillars of international seismological research. She worked in observatories all over the world, including in the United States and Canada. She also held her post at the Royal Danish Geodetic Institute until she retired in 1953. At the time of her death, at 104 years old in 1993, she was the longest-lived woman scientist.