With the start of this academic year, I’m launching a new newsletter to explore technology that helps rather than hurts human understanding, and human understanding that helps us create better technology. It’s called Humane Ingenuity, and you can subscribe here. (It’s free, just drop your email address into that link.)
Subscribers to this blog know that it has largely focused on digital humanities. I’ll keep posting about that, and the newsletter will have significant digital humanities content, but I’m also seeking to broaden the scope and tackle some bigger issues that I’ve been thinking about recently (such as in my post on “Robin Sloan’s Fusion of Technology and Humanity”). And I’m hoping that the format of the newsletter, including input from the newsletter’s readers, can help shape these important discussions.
Here’s the first half of the first issue of Humane Ingenuity. I hope you’ll subscribe to catch the second half and all forthcoming issues.
Humane Ingenuity #1: The Big Reveal
An increasing array of cutting-edge, often computationally intensive methods can now reveal formerly hidden texts, images, and material culture from centuries ago, and make those documents available for search, discovery, and analysis. Note how in the following four case studies the emphasis is on the human; the futuristic technology is remarkable, but it is squarely focused on helping us understand human culture better.
If you look very closely, you can see that the stone ribs in these two vaults in Wells Cathedral are slightly different, even though they were supposed to be identical. Alexandrina Buchanan and Nicholas Webb noticed this too and wanted to know what it said about the creativity and input of the craftsmen into the design: how much latitude did they have to vary elements from the architectural plans, when were those decisions made, and by whom? Before construction or during it, or even on the spur of the moment, as the ribs were carved and converged on the ceiling? How can we recapture a decent sense of how people worked and thought from inert physical objects? What was the balance between the pursuit of idealized forms, and practical, seat-of-the-pants tinkering?
In “Creativity in Three Dimensions: An Investigation of the Presbytery Aisles of Wells Cathedral,” they decided to find out by measuring each piece of stone much more carefully than can be done with the human eye. Prior scholarship on the cathedral—and the question of the creative latitude and ability of medieval stone craftsmen—had used 2-D drawings, which were not granular enough to reveal how each piece of the cathedral was shaped by hand to fit, or to slightly shape-shift, into the final pattern. High-resolution 3-D scans using a laser revealed so much more about the cathedral—and those who constructed it, because individual decisions and their sequence became far clearer.
Although the article gets technical at moments (both with respect to the 3-D laser and computer modeling process, and with respect to medieval philosophy and architectural terms), it’s worth reading to see how Buchanan and Webb reach their affirming, humanistic conclusion:
The geometrical experimentation involved was largely contingent on measurements derived from the existing structure and the Wells vaults show no interest in ideal forms (except, perhaps in the five-point arches). We have so far found no evidence of so-called “Platonic” geometry, nor use of proportional formulae such as the ad quadratum and ad triangulatum principles. Use of the “four known elements” rule evidenced masons’ “cunning”, but did not involve anything more than manipulation and measurement using dividers rather than a calibrated ruler and none of the processes used required even the simplest mathematics. The designs and plans are based on practical ingenuity rather than theoretical knowledge.
Last year at the Northeastern University Library we hosted a meeting on “hard OCR”—that is, physical texts that are currently very difficult to convert into digital texts using optical character recognition (OCR), a process that involves rapidly improving techniques like computer vision and machine learning. Representatives from libraries and archives, technology companies that have emerging AI tech (such as Google), and scholars with deep subject and language expertise all gathered to talk about how we could make progress in this area. (This meeting and the overall project by Ryan Cordell and David Smith of Northeastern’s NULab for Texts, Maps, and Networks, “A Research Agenda for Historical and Multilingual Optical Character Recognition,” were generously funded by the Andrew W. Mellon Foundation.)
OCRing modern printed books has become, if not a solved problem, at least remarkably reliable—the best OCR software gets a character right 99% of the time. But older printed books, ancient and medieval written works, writing in non-Latin scripts (e.g., Arabic, Sanskrit, or Chinese), rare languages (such as Cherokee, with its unique 85-character syllabary, which I covered on the What’s New podcast), and handwritten documents of any kind remain extremely challenging, with success rates often below 80%, and in some cases as low as 40%. That means one to three characters are misrecognized by the computer in a typical five-character word. Not good at all.
The meeting began to imagine a promising union of language expertise from scholars in the humanities and the most advanced technology for “reading” digital images. If the computer (which, in the modern case, really means an immensely powerful cloud of thousands of computers) has some ground-truth texts to work from—say, a few thousand documents in their original form and a parallel machine-readable version of those same texts, painstakingly created by a subject/language expert—then a machine-learning model can be trained to interpret with much greater accuracy new texts in that language or from that era. In other words, if you have 10,000 medieval manuscript pages perfectly rendered in XML, you can train a computer to give you a reasonably effective OCR tool for the next 1,000,000 pages.
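The ground-truth idea can be sketched in miniature. The toy example below is far simpler than any real OCR model (the training pairs are invented, and the “model” is just a per-character correction table), but it shows the basic pattern: expert-made parallel texts in, a learned correction out, applied to new material.

```python
from collections import Counter, defaultdict

def train_corrections(pairs):
    """For each character the OCR engine emits, learn the ground-truth
    character it most often corresponds to in the training pairs."""
    counts = defaultdict(Counter)
    for noisy, clean in pairs:
        for n_ch, c_ch in zip(noisy, clean):
            counts[n_ch][c_ch] += 1
    return {n: c.most_common(1)[0][0] for n, c in counts.items()}

def correct(model, text):
    """Apply the learned substitutions; pass unseen characters through."""
    return "".join(model.get(ch, ch) for ch in text)

# Parallel ground truth, as if painstakingly created by an expert
# (toy data: the engine tends to confuse u/v and b/h):
training_pairs = [
    ("the qvick hrown", "the quick brown"),
    ("fox jvmps over the", "fox jumps over the"),
]
model = train_corrections(training_pairs)
print(correct(model, "hvt"))  # prints "hut"
```

Real systems learn from images rather than character strings, of course, but the economics are the same: a modest amount of expert-verified transcription unlocks accurate automated reading at a vastly larger scale.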
Transkribus is one of the tools that works in just this fashion, and it has been used to transcribe 1,000 years of highly variant written works, in many languages, into machine-readable text. Thanks to the monks of the Hilandar Monastery, who kindly shared their medieval manuscripts, Quinn Dombrowski, a digital humanities scholar with a specialty in medieval Slavic texts, trained Transkribus on handwritten Cyrillic manuscripts, and calls the latest results from the tool “truly nothing short of miraculous.”
[Again, you can subscribe to Humane Ingenuity to receive the full first issue right here. Thanks.]
Whenever I’m grumpy about an update to a technology I use, I try to perform a self-audit examining why I’m unhappy about this change. It’s a helpful exercise since we are all by nature resistant to even minor alterations to the technologies we use every day (which is why website redesign is now a synonym for bare-knuckle boxing), and this feeling only increases with age. Sometimes the grumpiness is justified, since one of your tools has become duller or less useful in a way you can clearly articulate; other times, well, welcome to middle age.
The New York Times recently changed their iPad app to emphasize three main tabs, Top Stories, For You, and Sections. The first is the app version of their chockablock website home page, which contains not only the main headlines and breaking news stories, but also an editor-picked mixture of stories and features from across the paper. For You is a new personalized zone that is algorithmically generated by looking at the stories and sections you have most frequently visited, or that you select to include by clicking on blue buttons that appear near specific columns and topics. The last tab is Sections, that holdover word from the print newspaper, with distinct parts that are folded and nested within each other, such as Metro, Business, Arts, and Sports.
Currently my For You tab looks as if it was designed for a hypochondriacal runner who wishes to live in outer space, but not too far away, since he still needs to acquire new books and follow the Red Sox. I shall not comment about the success of the New York Times algorithm here, other than to say that I almost never visit the For You tab, for reasons I will explain shortly. For now, suffice it to say that For You is not for me.
But the Sections tab I do visit, every day, and this is the real source of my grumpiness. At the same time that the New York Times launched those three premier tabs, they also removed the ability to swipe, simply and quickly, between sections of the newspaper. You used to be able to start your morning news consumption with the headlines and then browse through articles in different sections from left to right. Now you have to tap on Sections, which reveals a menu, from which you select another section, from which you select an article, over and over. It’s like going back to the table of contents every time you finish a chapter of a book, rather than just turning the page to the next chapter.
Sure, it seems relatively minor, and I suspect the change was made because confused people would accidentally swipe between sections, but paired with For You it subtly but firmly discourages the encounter with many of the newspaper’s sections. The assumption in this design is that if you’re a space runner, why would you want to slog through the International news section or the Arts section on the way to orbital bliss in the Science and Health sections?
* * *
When I was growing up in Boston, my first newspaper love was the sports section of the Boston Globe. I would get the paper in the morning and pull out that section and read it from cover to cover, all of the columns and game summaries and box scores. Somewhere along the way, I started briefly checking out adjacent sections, Metro and Business and Arts, and then the front section itself, with the latest news of the day and reports from around the country and world. The technology and design of the paper encouraged this sampling, as the unpacked paper was literally scattered in front of me on the table. Were many of these stories and columns boring to my young self? Undoubtedly. But for some reason—the same reason many of those reading this post will recognize—I slowly ended up paging through the whole thing from cover to cover, still focusing on the Sox, but diving into stories from various sections and broadly getting a sense of numerous fields and pursuits.
This kind of interface and user experience is now threatened because who needs to scan through seemingly irrelevant items when you can have constant go-go engagement, that holy grail of digital media. The Times, likely recognizing their analog past (which is still the present for a dwindling number of print subscribers), tries to replicate some of the old newspaper serendipity with Top Stories, which is more like A Bunch of Interesting Things after the top headlines. But I fear they have contradicted themselves in this new promotion of For You and the commensurate demotion of Sections.
The engagement of For You—which joins the countless For Yous that now dominate our online media landscape—is the enemy of serendipity, which is the chance encounter that leads to a longer, richer interaction with a topic or idea. It’s the way that a metalhead bumps into opera in a record store, or how a young kid becomes interested in history because of the book reviews that follow the box scores. It’s the way that a course taken on a whim in college leads, unexpectedly, to a new lifelong pursuit. Engagement isn’t a form of serendipity through algorithmically personalized feeds; it’s the repeated satisfaction of Present You with your myopically current loves and interests, at the expense of Future You, who will want new curiosities, hobbies, and experiences.
I was not expecting—but was gratified to see—an enormous response to my latest piece in The Atlantic, “The Books of College Libraries Are Turning Into Wallpaper,” on the seemingly inexorable decline in the circulation of print books on campus. I’m not sure that I’ve ever written anything that has generated as much feedback, commentary, and hand-wringing. I’ve gotten dozens of emails and hundreds of social media messages, and The Atlantic posted (and I responded in turn to) some passionate letters to the editor. Going viral was certainly not my intent: I simply wanted to lay out an important and under-discussed trend in the use of print books in the libraries of colleges and universities, and to outline why I thought it was happening. I also wanted to approach the issue both as the dean of a library and as a historian whose own research practices have changed over time.
I think the piece generated such a large response because it exposed a significant transition in the way that research, learning, and scholarship happens, and what that might imply for the status of books and the nature of libraries—topics that often touch a raw nerve, especially at a time when popular works extol libraries—I believe correctly—as essential civic infrastructure.
But those works focus mostly on public libraries, and this essay focused entirely on research libraries. People are thankfully still going to and extensively using libraries, both research and public (there were over a billion visits to public libraries in the U.S. last year), but they are doing so in increasingly diversified ways.
The key lines of my essay were these:
“The decline in the use of print books at universities relates to the kinds of books we read for scholarly pursuits rather than pure pleasure…A positive way of looking at these changes is that we are witnessing a Great Sorting within the [research] library, a matching of different kinds of scholarly uses with the right media, formats, and locations.”
Although I highlighted statistics from Yale and the University of Virginia (which, alas, was probably not very kind to my friends at those institutions, although I also used stats from my own library at Northeastern University), the trend I identified seems to be very widespread. While I mentioned only specific U.S. research libraries, my investigations showed that the same decline in the use of print collections is happening globally, albeit not necessarily universally. In most of the libraries I examined, and in data sent to me by colleagues at scores of universities, the circulation of print books within research libraries is declining by about 5-10% per year per student (or FTE).
For example, in the U.K. and Ireland, over the three years between the 2013-14 school year and the 2016-17 school year, the circulation of print books per student declined by 27%, according to the Society of College, National and University Libraries (SCONUL), which represents all university libraries in the U.K. and Ireland. Meanwhile, SCONUL reports that visits to these libraries have actually increased during this period. (SCONUL’s other core metric, print circulations per student visit to the library, has thus declined even more, by 33% over three years.) Similarly, the Canadian Association of Research Libraries (CARL), which maintains the statistics for university libraries in Canada, notes that during these same three years, the average yearly print circulation at their member libraries dropped from 200,000 to 150,000 books, and their per-student circulation number also dropped by 25%.
Again, this is just over three recent years. The decline becomes even more severe as one goes further back in time. In the 2005-6 school year, the average Canadian research library circulated 30 books per student, which slid to 25 in 2008-9; by 2016-17 that number was just 5. Readers of my article were shocked that UVA students had only checked out 60,000 books last year, compared to 238,000 a decade ago, but had I gone all the way back in the UVA statistics to two decades ago, the comparison would have been even more stark. The total circulation of books in the UVA library system was 1,085,000 in 1999-2000 and 207,000 in 2016-17. Here’s the overall graph of print circulation (in “initial circs,” which do not include renewals) from the Association of Research Libraries (U.S.), showing a 58% decline between 1991 and 2015, with an even larger decline since Peak Book, and larger still on a per-student basis, since during this same period the student body at these universities increased 40%.
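It is worth making the per-student arithmetic explicit, because the enrollment growth compounds the headline decline. A quick back-of-the-envelope check using the ARL figures just cited:

```python
# Total print circulation fell 58% (1991-2015) while enrollment rose 40%,
# so per-student circulation fell further than the headline number suggests.
total_remaining = 1 - 0.58          # fraction of 1991 circulation remaining
enrollment_ratio = 1.40             # 2015 student body relative to 1991
per_student_remaining = total_remaining / enrollment_ratio
per_student_decline = 1 - per_student_remaining
print(f"per-student decline: {per_student_decline:.0%}")  # prints "per-student decline: 70%"
```

In other words, the average student in 2015 checked out less than a third as many print books as the average student in 1991.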
These longer time frames underline how this is an ongoing, multi-decade shift in the ways that students and faculty interact with and use the research library. All research libraries are experiencing these forces and pressing additional demands—the need for new kinds of services and spaces as well as the surging use of digital resources and data—while at the same time continuing to value physical artifacts (archives and special collections) and printed works. It’s a very complicated, heterogeneous environment for learning and scholarship. Puzzling through the correct approach to these shifts, rather than ignoring them and sticking more or less with the status quo, was what I was trying to prod everyone to think about in the essay; if I was at all successful in that, so much the better.
With the end of the academic year at Northeastern University, the library wraps up our What’s New podcast, an interview series with researchers who help us understand, in plainspoken ways, some of the latest discoveries and ideas about our world. This year’s slate of podcasts, like last year’s, was extraordinarily diverse, ranging from the threat of autonomous killer robots to the wonders of tactile writing systems like Braille, and from the impact of streaming music on the recording industry to the disruption and meaning of Brexit. I’ve enjoyed producing and being the interviewer on these podcasts, and since I like to do my homework in addition to conversing with the guests live, I’ve learned an enormous amount from What’s New.
I hope you have too, whether you’re a subscriber to the podcast or just an occasional listener, and I would love your feedback about what we can do better and which topics you would like to hear us cover in the future. One surprising and rewarding thing we’ve noticed about the podcast is how new subscribers are going back and listening to the show from Episode 1. Podcasts do seem to encourage bingeing, and the fact that we keep our episodes to roughly 30 minutes means that you can easily go through both Seasons 1 and 2 in a relatively short timespan while commuting, walking your dog, or relaxing this summer.
The overall audience for What’s New has also gone up considerably over the last year. In the last 12 months we’ve had about 150,000 streams, and each episode now receives 5,000-10,000 listeners. These are not chart-topping numbers, but for a fairly serious educational podcast (with, I hope, intermittent humor) it’s good to find a decent-sized niche that continues to grow.
If you haven’t had a chance to listen yet, you can subscribe to What’s New on Apple Podcasts, Google Play, Stitcher, Overcast, or wherever you get your podcasts, or simply stream episodes from the What’s New website. Word of mouth has been the primary way new listeners have heard about the podcast, so if you like what we’re doing, please tell others or leave a review on iTunes, as that remains the starting point for most podcast listeners.
And as a jumping off point for new listeners or those who may have missed a few shows during the school year, here’s a summary of this year’s episodes:
Episode 17: Remaking the News – how consolidation in the news industry and the rise of the internet has changed professional journalism, with Dan Kennedy
Episode 18: Making Artificial Intelligence Fairer – exploring the biases endemic to AI, which come from its creators, with Tina Eliassi-Rad
Episode 19: The Shifting Landscape of Music – how the music industry moved from vinyl records to cassettes, CDs, downloads, and now streaming, and what this evolution has meant for musicians, with David Herlihy
Episode 20: A New Way to Scan the Human Body – pioneering the use of nanosensors within the body and its potential applications, with Heather Clark
Episode 21: Election Day Special: Michael Dukakis – on 2018’s Election Day, the three-term governor and presidential candidate spoke candidly about the state of politics
Episode 22: Bridging the Academic-Public Divide Through Podcasts – a recording of yours truly giving a keynote at the Sound Education conference at Harvard, which brought together hundreds of educational and academic podcasters and podcast listeners
Episode 23: The Regeneration of Body Parts – new research and techniques for stimulating the growth of limbs, eyes, and organs, with Anastasiya Yandulskaya, Brian Ruliffson, and Alex Lovely
Episode 24: The Urban Commons – how 311 systems, which allow citizens to provide feedback to municipalities, have changed our knowledge of cities and the ways residents and governments interact, with Dan O’Brien
Episode 25: Touch This Page – the history and future of tactile writing systems, and what they tell us about the act of reading, with Sari Altschuler
Episode 26: Seeking Justice for Hidden Deaths – between 1930 and 1970 there were thousands of racially motivated homicides in the U.S., and one project is attempting to document them all, with Margaret Burnham
Episode 27: Tracing the Spread of Fake News – looking carefully at the impact of untrustworthy online sources in the election of 2016, with David Lazer
Episode 28: How College Students Get the News – the surprising results of a large study of the news consumption habits of college students, with Alison Head and John Wihbey
Episode 29: The Web at 30 – celebrating the 30th anniversary of the founding of the World Wide Web with a discussion of how it has reshaped our world for better and worse, with Kyle Courtney
Episode 30: Controlling Killer Robots – how major advances in robotics and artificial intelligence have led to the dawn of deadly, independent machines, and how an international coalition is trying to prevent them from taking over warfare, with Denise Garcia
Episode 31: European Disunion – how Europe has regularly escaped the fate of dissolution, and what Brexit means in this longer history, with Mai’a Cross
Thanks for tuning in!
I’ve got a new piece over at The Atlantic on Barack Obama’s prospective presidential library, which will be digital rather than physical. This has caused some consternation. We need to realize, however, that the Obama library is already largely digital:
The vast majority of the record his presidency left behind consists not of evocative handwritten notes, printed cable transmissions, and black-and-white photographs, but email, Word docs, and JPEGs. The question now is how to leverage its digital nature to make it maximally useful and used.
This almost-entirely digital collection, and its unwieldy scale and multiple formats, should sound familiar to all of us. Over the past two decades, we have each become unwitting archivists for our own supersized collections, as we have adopted forms of communication that are prolific and easy to create, and that accumulate over time into numbers that dwarf our printed record and can easily mount into a pile of digital files that borders on shameful hoarding. I have over 300,000 email messages going back to my first email address in the 1990s (including an eye-watering 75,000 that I have sent), and 30,000 digital photos. This is what happens when work life meets Microsoft Office and our smartphone cameras meet kids and pets.
Will we have lost something in this transition? Of course. Keeping a dedicated archival staff in close proximity to a bounded paper-based collection yields real benefits. Having a researcher who is on site discover a key note on the back of a typescript page is also special.
However, although the analog world can foster great serendipity, it does not have a monopoly on such fortunate discoveries. Digital collections have a serendipity all their own.
Please do read the whole article for my thoughts about how we should approach the design of this digital library, and the possibilities it will enable, including broad access and new forms of research.
When Roy Rosenzweig and I wrote Digital History 15 years ago, we spent a lot of time thinking about the overall tone and approach of the book. It seemed to us that there were, on the one hand, a lot of our colleagues in professional history who were adamantly opposed to the use of digital media and technology, and, on the other hand, a rapidly growing number of people outside the academy who were extremely enthusiastic about the application of computers and computer networks to every aspect of society.
For lack of better words—we struggled to avoid loaded ones like “Luddites”—we called these two diametrically opposed groups the “technoskeptics” and the “cyberenthusiasts” in our introduction, “The Promises and Perils of Digital History”:
Step back in time and open the pages of the inaugural issue of Wired magazine from the spring of 1993, and prophecies of an optimistic digital future call out to you. Management consultant Lewis J. Perelman confidently proclaims an “inevitable” “hyperlearning revolution” that will displace the thousand-year-old “technology” of the classroom, which has “as much utility in today’s modern economy of advanced information technology as the Conestoga wagon or the blacksmith shop.” John Browning, a friend of the magazine’s founders and later the Executive Editor of Wired UK, rhapsodizes about how “books once hoarded in subterranean stacks will be scanned into computers and made available to anyone, anywhere, almost instantly, over high-speed networks.” Not to be outdone by his authors, Wired publisher Louis Rossetto links the digital revolution to “social changes so profound that their only parallel is probably the discovery of fire.”
Although the Wired prophets could not contain their enthusiasm, the technoskeptics fretted about a very different future. Debating Wired Executive Editor Kevin Kelly in the May 1994 issue of Harper’s, literary critic Sven Birkerts implored readers to “refuse” the lure of “the electronic hive.” The new media, he warned, pose a dire threat to the search for “wisdom” and “depth”—“the struggle for which has for millennia been central to the very idea of culture.”
Reading passionate polemics such as these, Roy and I decided that it would be the animating theme of Digital History to find a sensible middle position between these two poles. Part of this approach was pragmatic—we wanted to understand how history could, and likely would, be created and disseminated given all of this new digital technology—but part of it was also temperamental and even a little personal for the two of us: we both loved history, including its very analog and tactile aspects of working with archives and printed works, but we were also both avid computer hobbyists and felt that the digital world could do some uncanny, unparalleled things. So we sought a profoundly humanistic, but also technologically sophisticated, position on which to base the pursuit of knowledge.
* * *
Robin Sloan is a novelist who has published two books, Mr. Penumbra’s 24-Hour Bookstore and Sourdough, that are very much about this intersection between the humanistic and the technological. Beyond his very successful work as an author, he has had a career at new media companies that are often associated with cyberenthusiasm, including Twitter and Current TV, and he has also spent considerable time engaging in crafts often associated with technoskepticism, including the production of artisanal olive oil, old-school printing, and 80s-era music-making. In this larger context of his vocations and avocations, his novels seem like an attempt to find that very same, if elusive, via media between the incredible power and potential of modern technology and the humanizing warmth of our prior, analog world.
Unlike some other contemporary novelists and nonfiction writers who work in the often tense borderlands between the present and future, Sloan neither can bring himself to buy fully into the utopian dreams of Silicon Valley—although he’s clearly tickled and even wowed by the way it constantly produces unusual, boundless new tech—nor can he simply conclude that we should throw away our smartphones and move off the grid. Although he clearly loves the peculiar, inventive shapes and functions of older technology, he doesn’t badger us with a cynical jeremiad to return to some imagined purity inherent in, say, vinyl records, nor will he overdo it with an uncritical ode to our augmented-reality, gene-edited future.
Instead, his helpful approach is to put the old and new into lively conversation with each other. In his first novel, Mr. Penumbra’s 24-Hour Bookstore, Sloan set the magic of an old bookstore in conversation with the full power of Google’s server farm. In his latest novel, Sourdough, he set the organic craft of the farmer’s market and the culinary artisanry of Chez Panisse in conversation with biohacked CRISPRed food and the automation of assembly robots.
But this was in the published version of the novel. In a revealing abandoned first draft of Sourdough that Sloan made available (as a Risograph printing, of course) to those who subscribe to his newsletter, he started the novel rather differently. In the introduction to this discarded draft, titled Treasured Subscribers, Sloan briefly notes that “these were not the right characters doing the right things.” I think he’s absolutely right about that, but it’s worth unpacking exactly why, because in doing so we can understand a bit better how Sloan pursues that elusive via media, and how in turn we might discover and promote humane technology in a rapidly changing world.
[Spoiler alert: If you haven’t read Sourdough yet, I’ve kept the plot twists mostly hidden, but as you’ll see, the following contains one critical character revelation. Please stop what you’re doing, read the book, and return here.]
Treasured Subscribers begins with the same overarching narrative concept as Sourdough: a capable, intelligent young woman moves to the Bay Area and becomes part of a mysterious underground organization that focuses on artisanal food, and that is orchestrated by a charismatic leader. Mina Fisher, a writer, lands a new marketing job at Intrepid Cellars, led by one Wolfram Wild, who refuses to carry a smartphone or use a laptop. Wild barks text and directions for his newsletter on craft food and wine offerings over what we can only assume is an aging Motorola flip phone as he travels to far-flung fields and vineyards. In short, Wild appears to be a kind of gastronomic J. Peterman, globetrotting for foodie finds. The only hint of future tech in Treasured Subscribers is a quick mention of “Chernobyl honey,” although it’s framed as just another oddball discovery rather than—as Sourdough makes much more plain—an intriguing exercise in modding traditional food through science-fiction-y means. Wild seems too busy tracking down a cider mentioned by Flaubert to think about, or articulate, the significance of irradiated apiaries.
By itself, this seems like not such a bad setup for a novel, but the problem here is that if one wishes to explore, maximally, the intersection and possibilities of human craft and high tech, one can’t have a flattened figure like Wolfram Wild, who sticks with Windows 95 on an aging PC tower. (Given the implicit nod to Stephen Wolfram in Wild’s name, I wonder if Sloan planned to eventually reveal other computational layers to the character, but it’s not there in the first chapter.) In order for Sloan’s fiction to consider the tension between technoskepticism and cyberenthusiasm, and to find some potential resolution that is both excitingly technological and reassuringly human, he can’t have straw men at either pole. Had Sloan continued with Treasured Subscribers, it would have been all too easy for the reader to dismiss Wild, cheer for Mina, and resolve any artisanal/digital divide in favor of an app for aged Bordeaux. To generate some real debate in the reader’s mind, you need more multidimensional, sophisticated characters who can speak cogently and passionately about the advantages of technology, while also being cognizant of the impact of that technology on society. A clamshell cellphone-brandishing foodie J. Peterman won’t do.
Sloan solved this problem in multiple ways in the production version of Sourdough. In the published novel, the protagonist is the young Lois Clary, a software developer who gets a job automating robot arms at General Dexterity, and learns baking at night from two lively undocumented immigrants and their equally animated starter dough. General Dexterity is led by a charismatic tech leader, Andrei, who can articulate the remarkable features of robotic hands and their potential role in work. Also hanging out at the unabashed cyberenthusiast pole, ready for conversation and debate, is the founder and CEO of Slurry Systems, the maker of artificial, nutritious, and disgusting foods of the future, Dr. Klamath. And Clary ends up working at—yes, here it returns from Treasured Subscribers, but in a different form—an underground craft food market, which is chockablock with artisanal cheeses and beverages made by off-duty scientists and a librarian who maintains a San Francisco version of the New York Public Library’s menu collection. Tech and craft are in rich, helpful collision.
The most important character, however, for our purposes here, is the delightfully named Charlotte Clingstone, who is the head of the legendary Café Candide, and the stand-in for Alice Waters of Chez Panisse fame. Chez Panisse, in Berkeley, pioneered the locavore craft food movement, and normally a fictional Waters would be a novel’s unrelenting resident technoskeptic. But in a key twist, it turns out at the end of Sourdough that Clingstone also underwrites futuristic high-tech foodie endeavors—including that “Chernobyl honey” that is a carryover from Treasured Subscribers. Clingstone both defends the craft of the farm-to-table kitchen while seeing it as important to explore the next phase of food through robotics, radiation, and RNA.
As Sourdough develops with these characters, it can thus ask in a deeper way than Treasured Subscribers whether and how we can fuse tech know-how with humanistic values; whether it’s possible to exist in a world in which a robotic hand kneads dough but the process also involves an organic, magical yeast and well-paid workers; whether that starter dough should be gene sequenced to produce artificial, nutritious, and delicious food at scale; and how craft-worthy human labor and creativity can exist in the algorithmic, technological society that is quickly approaching. The only way to find out is to experiment with the technical and digital while keeping one’s heart in the mode of more traditional human pursuits. Sloan’s protagonist, Lois, thus follows an emotional arc between developing code and developing bread.
* * *
I suppose we shouldn’t make that much of an abandoned first draft of a novel (he says 1,000 words into an exploratory blog post), but reading Treasured Subscribers has made me think again about the right middle way between technoskepticism and cyberenthusiasm that we tried to find in Digital History. The skepticism side has certainly been on a sharp ascent as Silicon Valley has continually been tone-deaf and inhumane in important areas like privacy, and we need a healthy dose of that valid criticism. But at the end of the day, when it’s time to put down the newspaper and pick up the novel, Robin Sloan holds out hope for some forms of sophisticated technology that are attuned to and serve humanistic ends. We need a bit of that hope, too.
Robin Sloan is willing to give both the artisanal and the technical their own proper limelight and honest appraisal. Indeed, much of what makes his writing both fun and thoughtful is that rather than toning down cyberenthusiasm and technoskepticism to find a sensible middle, he instead uses fiction to turn them up to 11 and toward each other, to see what new harmonious sounds, if any, emerge from the cacophony. Sloan looks for the white light from the overlapping bright colors of the analog and digital worlds. Like the synthesizers he also loves—robotic computer loops intertwined with the soul of music—he seeks the fusion of the radically technological and the profoundly human.
Buried in the recent debates (New York Times, Chicago Tribune, The Public Historian) about the nature, objectives, and location of the Obama Presidential Center is the inexorable move toward a world in which virtually all of the documentation about our lives is digital.
To make this decades-long shift—now almost complete—clear, I made the following infographic comparing three representative presidential libraries, each a generation apart: LBJ’s, Bill Clinton’s, and Barack Obama’s. Each square represents the relative overall size of these presidential archives—roughly 46 million pages for LBJ, 100 million for Clinton, and 360 million for Obama—as well as the basic categories of archival material: paper documents, photographs and audiovisual media, and, starting with Clinton, email.
The LBJ Presidential Library has 45 million pages of paper documents and a million photographs, recordings, and other media. The Clinton Presidential Library contains 78 million pages of documents, 20 million emails, 2 million photographs, and 12,500 videotapes. (Note that contrary to all of the recent coverage of Obama as “the first digital president,” given his administration’s rapid adoption of email in the 1990s, Clinton really should hold that title, as I’ve discussed elsewhere.)
We are still in the process of assessing all that will go into the Obama Presidential Library (other libraries have added considerable new caches of documents over time), but the rough initial count from the U.S. National Archives and Records Administration is that there are about 300 million emails from Obama’s eight years in the White House, and about 30 million pages of paper documents. The chart above would be even more email-centric for Obama’s library if I used NARA’s calculation of a few paper pages per email, which would equal over a billion pages in printed form. In other words, by that more rigorous comparison, at best only 3% of the Obama record is print rather than digital.
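To make that 3% figure concrete, here is a quick back-of-envelope check of the proportions above. The raw counts come from the post; the 3.5 pages-per-email multiplier is my own placeholder for NARA’s “a few paper pages per email,” so treat the exact decimals as illustrative.

```python
# Back-of-envelope check of the Obama archive proportions cited above.
paper_pages = 30_000_000        # Obama-era paper documents (from NARA's rough count)
emails = 300_000_000            # Obama-era emails (from NARA's rough count)
pages_per_email = 3.5           # placeholder for "a few paper pages per email"

email_pages = emails * pages_per_email          # over a billion pages in printed form
total_pages = paper_pages + email_pages
print_share = paper_pages / total_pages         # paper's share of the whole record

print(f"Email record in printed-page terms: {email_pages / 1e9:.2f} billion pages")
print(f"Paper share of the total record: {print_share:.1%}")
```

With those assumptions the paper record comes out just under 3% of the whole, which is why “at best only 3%” is the honest upper bound.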
More vaguely estimated above are the millions of “pages” associated with the many other digital forms the Obama administration used, including websites, apps, and social media (you can already download the entirety of the latter as .zip files here). Most of the photos (many of which were uploaded to Flickr) and videos were of course also born digital. (Update, 3/11/19: The Obama Foundation came out with a new fact sheet that says that “an estimated 95 percent of the Obama Presidential Records were created digitally and have no paper equivalents.” It also says that there are roughly 1.5 billion pages in the collection, including everything I’ve detailed here.)
It’s unfortunate that it’s still relatively expensive and time-consuming to digitize analog materials. Nearly two decades on, the Clinton Presidential Library has only digitized about 1% of its paper holdings (about 700,000 pages). The Reagan Presidential Library charges $0.80 to digitize one page of his archives. The Obama Presidential Center’s commitment to funding the complete digitization of those 30 million paper pages, in what seems like a more rapid fashion and with open access to the public, seems rather laudable in this context.
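As a rough illustration of why full digitization is such a significant commitment, here is a hypothetical cost sketch that borrows the Reagan library’s per-page fee as a unit cost. This is an assumption for scale, not an actual budget figure for the Obama project.

```python
# Hypothetical cost sketch: what 30 million pages would cost at the
# Reagan library's $0.80-per-page digitization fee (an assumed unit cost).
cost_per_page = 0.80
obama_paper_pages = 30_000_000

# For comparison: the Clinton library's digitization progress so far.
clinton_digitized = 700_000     # pages scanned to date
clinton_total = 78_000_000      # total pages of paper documents

estimated_cost = cost_per_page * obama_paper_pages
clinton_share = clinton_digitized / clinton_total

print(f"Estimated digitization cost at that rate: ${estimated_cost:,.0f}")
print(f"Clinton holdings digitized so far: {clinton_share:.1%}")
```

At that rate the bill would run to roughly $24 million, which helps explain why, twenty years on, only about 1% of the Clinton paper record has been scanned.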
Ultimately, I suppose it’s best to say that Obama was “the first almost fully digital president,” and with the digitization of the remaining paper record, will become “the first fully machine-readable and -indexed president.” (Part of the debate in academic and library circles about this shift in the Obama Presidential Center/Library has to do with the role of archivists and historians to create good metadata for, and more thorough searches through, administration documents, but with a billion+ pages, I don’t see how this can be done without serious computational means.)
Meanwhile, all of us have more quietly followed the same path, with only a very small percentage of our overall record now existing in physical formats rather than bits. How we will preserve this heterogeneous and perhaps ephemeral digital record when we don’t have our own presidential libraries and the resources of NARA is a different and more worrisome story.
Generosity and thoughtfulness are not in abundance right now, and so Kathleen Fitzpatrick’s important new book, Generous Thinking: A Radical Approach to Saving the University, is wholeheartedly welcome. The generosity Kathleen seeks relates to lost virtues, such as listening to others and deconstructing barriers between groups. As such, Generous Thinking can be helpfully read alongside Alan Jacobs’s How to Think, as both promote humility and perspective-taking as part of a much-needed, but depressingly difficult, re-socialization. Today’s polarization and social media only make this harder.
Fitzpatrick’s analysis of the university’s self-inflicted wounds is painful to acknowledge for those of us in the academy, but undoubtedly true. Scholars are almost engineered to cast a critical eye on all that passes before them, and few articulate their work well to broader audiences. Administrators are paying less attention than in the past to the communities that surround their campuses. Perhaps worst of all, the incentive structures of universities, such as the tenure process and college rankings, strongly reinforce these issues.
I read Generous Thinking in a draft form last year and thought an appropriate alternate title might be The Permeable University. Many of Fitzpatrick’s prescriptions involve dissolving the membrane of the academy so that it can integrate in a mutually beneficial way with the outside world, on an individual and institutional level. You will be unsurprised to hear that I agree completely with many of her suggestions, such as open access to scholarly resources and the importance of scholars engaging with the public. Like Fitzpatrick, I have had a career path that has alternated between the nonprofit and academic worlds in the pursuit of platforms and initiatives that try to maximize those values.
With universities currently receiving withering criticism from both the right and left, it is critical for all of us in the academy to take Generous Thinking seriously, and to think about other concrete steps we can take to open our doors and serve the wider public. The deep incentive structures will be very hard to change, but we can all take more modest steps such as thinking about how new media like podcasts can play a role in a more publicly approachable and helpful university, or how we might be able to provide services (e.g., archival services) to local communities. Fitzpatrick’s Humanities Commons, a site for scholars to connect not just with each other but with the public, is another venue for making the generosity she seeks a reality.
Much more needs to be done on this front, and so I encourage you to read Kathleen Fitzpatrick’s new book.
Abraham Lincoln and Charles Darwin were both born on February 12, 1809, and this odd fact used to be featured at the top of their Wikipedia entries. As Roy Rosenzweig noted 15 years ago in his groundbreaking essay “Can History Be Open Source? Wikipedia and the Future of the Past,” this “affection for surprising, amusing, or curious details” was a key marker separating popular and academic history. At the time, Wikipedia was firmly on the popular side of that line.
Whereas history professors highlighted larger historical themes and the broad context of an individual’s life—placing the arc of one person’s existence within the complex patterns of historiography—the editors of Wikipedia often obsessed about single points and unusual coincidences, such as Al Jolson and Mary Pickford being in the same Ohio town during the 1920 presidential campaign, or Woodrow Wilson having written his initials on the underside of a table in the Johns Hopkins University history department.
Since Roy wrote that essay, I’ve kept an informal log of the lifespan of historical oddities on Wikipedia, which acts as an anecdotal measure of the online encyclopedia’s evolution, or perhaps convergence, with more “serious” history. When Roy gave Wikipedia that serious look in the pages of the Journal of American History—at a time when there was still furious opposition to its use in academic settings, with dire warnings from faculty to undergraduates who relied on it—the Lincoln/Darwin factoid had been on Darwin’s page for over a year, since July 18, 2004. It was placed there by an enthusiastic early Wikipedian with the handle Brutannica. (As Brutannica’s user page on Wikipedia helpfully notes, their handle was “an apparent misunderstanding of a character in the much-missed 18th episode of Pokemon, not from the world’s most renowned encyclopaedia.”)
The line about Charles Darwin having the exact same birthday as Abraham Lincoln lasted almost six years, until June 15, 2010, when Wikipedian Intelligentsium ruthlessly removed it over the objections of Playdagame6991. (Intelligentsium to Playdagame6991: “I don’t see how the bit about Lincoln is relevant.”)
Wikipedia’s early, long-lasting, and more shameful historical problems were of course massive omissions rather than trivial additions like the shared Lincoln/Darwin birthday. The lack of entries for many important women and the overemphasis on Pokemon and Star Wars over entire genres of culture have been far more problematic than the appearance of Woodrow Wilson’s graffiti, and critical efforts have arisen to correct these imbalances.
But the slow-burn effort to correct the nature of historical writing on Wikipedia, while more subtle, has still been discernible over the last decade, evident in countless small contests like the one between Intelligentsium and Playdagame6991. It would be interesting to do a more systematic analysis of such battles to see how historical writing on Wikipedia has evolved into a form that seems today more recognizable and acceptable to those in the academy.
I was honored to be asked by Europeana, the indispensable, unified digital collection of Europe’s cultural heritage institutions, to write a piece celebrating the 10th anniversary of their launch. My opening words:
‘The future is already here – it’s just not very evenly distributed,’ science fiction writer William Gibson famously declared. But this is even more true about the past.
The world we live in, the very shape of our present, is the profound result of our history and culture in all of its variety, the good as well as the bad. Yet very few of us have had access to the full array of human expression across time and space.
Cultural artifacts that are the incarnations of this past, the repository of our feelings and ideas, have thankfully been preserved in individual museums, libraries, and archives. But they are indeed unevenly distributed, out of the reach of most of humanity.
Europeana changed all of this. It brought thousands of collections together and provided them freely to all. This potent original idea, made real, became an inspiration to all of us, and helped to launch similar initiatives around the world, such as the Digital Public Library of America.
You can read my entire piece at the special 10th anniversary website, along with pieces from the heads of the Wikimedia Foundation, Creative Commons, and others. Allez culture and congrats to my friends at Europeana on this great milestone!
[The text of my keynote at the Sound Education conference at Harvard on November 2, 2018. This was the first annual conference on educational and academic podcasts, and gathered hundreds of producers of audio and podcast listeners to discuss how podcasting can effectively and engagingly reach diverse audiences interested in a wide range of scholarly fields.]
It’s great to be back here in Andover Hall. I received a master’s degree in theological studies from Harvard Divinity School, and being in this chapel reminds me of what I was thinking about during my two years studying the history of religion. It is, perhaps surprisingly, germane to what I want to talk about today.
Studying religion means studying the biggest questions, the unanswerable questions. The study of religion is, necessarily, humbling. If it occasionally approaches higher truths, it also reminds us that human knowledge is woefully incomplete and fallible.
But this fallibility and the way we stumble toward the truth is not communicated regularly or well by academia to the outside world. Our formal communications take other forms that are more, shall we say, braggadocious. The academic monograph and article are necessarily shaped to show off expertise. These written forms have scholarly accoutrements, like the jewelry of footnotes, that make them dressed to impress. Mostly, of course, they are dressed to impress one’s peers.
On the other side of the academic house, press releases and magazine-like pieces from the university communications office are aimed at impressing the broader world and garnering coverage beyond the walls of the academy. But these are also forms of writing that stay in a narrow lane, crowded as they are with spunky, crafted quotations from a world in which everything is a game-changing breakthrough.
But we’re here for podcasts. Let’s not dwell on these forms of academic expression other than to recognize them for what they are: genres. The press release and the academic article and the monograph may all be about scholarly research, but they are distinct genres, and throughout my brief remarks this morning, I want to encourage you to think about podcasts in terms of genres as well. I want you to think about the genre for your podcast.
Genres are enormously helpful structures. They are commonly agreed upon forms of communication that provide identifying signals to the audience about what they are reading, viewing, or in the case of podcasts, listening to. Genres give the audience, often unconsciously and rapidly, a general category for a creative work, which in turn colors its reception.
Genres prep the podcast listener’s ears and mind through repetition, recognition, and expectation. Conforming to a genre telegraphs structural information to the audience and makes audio more palatable and relatable.
Podcast elements like intro and outro music, for instance, are genre-building. They orient the listener, who after all might be tuning in for the first time, and communicate what kind of audio stream this is.
This conference is about podcasts, but there can be and indeed are many genres of podcasts. Podcasts no longer occupy the vast spectrum from two white guys talking about technology to three white guys talking about technology. What this conference represents is a wonderful flourishing of podcast genres.
Now we need to think more about the kinds of genres in which academic work fares well, and that can take maximal advantage of the medium and have the maximal impact.
So let’s talk about how to situate educational and academic podcasts within the galaxy of possible genres.
We can take some helpful clues about this situation from other new media formats that have flourished on the web over the past quarter-century. For instance, since the advent of the web, and its ability to serve a wide array of text, in different lengths, sizes, and contexts, we have seen the birth of new genres that challenge traditional writing and break out of the constraints of print publication.
Take the blog. Originally a “web log” of interesting links, it evolved two decades ago in places like LiveJournal into personal musings and then in other platforms like Movable Type and ultimately WordPress into a fairly flexible, but always recognizable, reverse chronological, largely textual genre, one that accommodates posts of different lengths and purposes.
Because it lived on the web, and given its origins, the blog was colonized by a less formal, more freeform style that beneficially allowed academics who started blogs to loosen up a bit. A moment ago I used the word braggadocious. I would feel, shall we say, uncomfortable using that word in an academic article in my native field of history, but I’ve owned the domain dancohen.org for 20 years now, and if I want to drop a braggadocious or two in a blog post there, so be it.
More seriously, although the genre of the blog didn’t line up well with the strict structures needed for the peer-reviewed article, it did line up well with other aspects of academia. For instance, while the article and the book provide a final, formal genre for the results of research, they do not accommodate well, or often at all, the detailed, day-to-day research process that led up to the book or article. Indeed, most academic writing involves obscuring our processes and complexities and doubts behind the scenes, the starts and stops that happen throughout academic work, before the article or book is complete. (Note that this obscuring has led to such bad things as the replication crisis.)
The blog excels, in extraordinarily helpful ways, in portraying this process, and so we now have the distinct genre of the process blog. For example, one of the blogs I subscribe to is by a particle physicist who is providing daily updates on the fusion reactor his team is building. That is just plain cool, but will never appear in his submissions to physics journals. I have colleagues in history who blog about the ups and downs of archival research, the rare finds and the drudgery, the thousands of hours of research and writing. Those sentiments, revealed in a blog, can enrich and humanize academic work.
Also, like a good movie, a successful article or book leaves on the cutting room floor dozens of other great scenes, half-baked but still pretty tasty thoughts, and possible connections that must wait until another time, or be forgotten forever. A blog can document the incredible swirl of evidence and thinking and knowledge that emerges out of an academic project. Blogging can be a powerful way to provide “notes from the field” and ongoing glosses in research areas that perhaps only a handful of others worldwide know much about, but that may fascinate the wider world if framed well.
Podcasts provide a fantastic opportunity, in many ways much better than the blog, to communicate the complex processes involved in acquiring new knowledge and passing it on to students and the public, and to show the bumps along the road, and the methods and heartache and excitement along the way.
For instance, last week on our What’s New podcast, we had a brilliant young biochemist, Heather Clark, on to talk about the nanosensors her lab is creating to determine the level of certain chemicals in the body. They custom design extraordinarily tiny molecules that light up when they find lithium or sodium in the bloodstream, and an electronic tattoo on the skin can then register and transmit that information.
This is truly the stuff of science fiction, but the best part of the podcast was Heather’s response to my question about how such nanotechnology is actually created. We hear this word “nanotechnology” all the time in the news, but do you have any idea what it actually looks like in practice? I didn’t. So I asked Heather to describe what goes on in her lab during a normal day. And she digressed into a remarkable discussion of how making nanosensors actually looks a lot like making salad dressing—literally mixing various oils and ingredients together to make the right blend. And she’s laughing as she’s describing this process because on the one hand it’s kitchen counter work, but on the other hand it’s a profound synthesis of physics, biology, chemistry, and engineering.
As Heather revealed these scientific principles and bench-science techniques, I couldn’t help but think of how magical, alchemical, her work is. Indeed, podcasts can frame academic expertise in a way that can thrill an audience because of this magical element. Teller, the shorter, quieter magician in Penn & Teller, has made the point that a big part of what makes magic what it is is that magicians will spend an unbelievable amount of time practicing a very specific skill or pursuing a trick, far more time than the audience considers humanly possible.
By this definition there is a lot of magic in the academy. Our colleagues spend years or even decades deciphering papyri, learning long-lost languages, trying to solve fantastically complex mathematical theorems, or tracking down the smallest bits of evidence or assembling the largest imaginable data sets. Audio, done well, can display this incredible obsession to audiences, and as Teller notes, revealing how a magic trick is done—by grit and practice and sheer will—often enhances appreciation for magic rather than dissipating it.
Finally, and most importantly, podcasts have an unparalleled ability to convey the reality of academic work, and inculcate appreciation of it—better than the blog because of the nature of audio and especially the unique character of the human voice. From the time we are babies, we respond differently to voices than to other sounds in our world. As the most social of animals, we are incredibly adept at picking up subtle cues from the human voice—excitement, nervousness, ambivalence, assurance.
The human voice can thus communicate one’s humanity to the listener in a way that most academic writing has enormous trouble with—and as I noted earlier, was never really structured to do.
But it has to be the right type of voice, a topic of many of the sessions at this conference today. If you are an academic, are you projecting a know-it-all voice, the voice of the article and the academic monograph, or the more cautious, thoughtful voice that is really in your head as you pursue your research? Are you merely recounting the end results of a process, or pulling the curtain back and showing the human—and often engrossing—processes behind the discoveries?
Academic podcasts are often criticized as raw and unedited, but they can take advantage of this lack of polish in comparison to a monograph or an article. In podcasts, we can hear a potent and unique combination of the expertise of academics with the informality of extemporaneous speech.
Done well, educational podcasts as a whole, the range of podcasts represented here today, can foster audiences who may not always agree with us or our research or conclusions, but who can grasp much more deeply the very human pursuits of the academy and see how those pursuits relate to their own lives. This has never been more important, as there is growing skepticism about the value of the academy, and all universities are struggling with how to communicate their worth to the public.
In his recent book How to Think: A Survival Guide for a World at Odds, literary scholar Alan Jacobs calls on us to foster what he calls “like-hearted, rather than like-minded” audiences. We are never going to get everyone to agree with us about everything, and that shouldn’t be our ultimate goal anyway. We should instead seek to cultivate receptivity to academic subjects again, and that hard work isn’t being adequately done through our formal writing or press releases. Podcasts give us the opportunity to show the humanity and relevance and relatability of academic practice, something that significant portions of the public have lost sight of.
Your podcast can be an important addition to this humanizing goal, one more step in expanding the audience of curious listeners, and the general population of the like-hearted.
Over the last year, I was fortunate to help guide a study of the news consumption habits of college students, and coordinate Northeastern University Library’s services for the study, including great work by our data visualization specialist Steven Braun and necessary infrastructure from our digital team, including Sarah Sweeney and Hillary Corbett. “How Students Engage with News,” out today as both a long article and accompanying datasets and media, provides a full snapshot of how college students navigate our complex and high-velocity media environment.
This is a topic that should be of urgent interest to everyone since the themes of the report, although heightened due to the more active digital practices of young people, capture how we all find and digest news today, and also point to where such consumption is heading. On a personal level, I was thrilled to be a part of this study as a librarian who wants students to develop good habits of truth-seeking, and as an intellectual historian, who has studied changing approaches to truth-seeking over time.
You should first read the entire report, or at least the executive summary, now available on a special site at Project Information Literacy, with data hosted at Northeastern University Library’s Digital Repository System (where the study will also have its long-term, preserved form). It’s been great to work with, and think along with, the lead study members, including Alison Head, John Wihbey, Takis Metaxas, and Margy MacMillan.
“How Students Engage with News” details how college students are overwhelmed by the flood of information they see every day on multiple websites and in numerous apps, an outcome of their extraordinarily frequent attention to smartphones and social media. Students are interested in news, and want to know what’s going on, but given the sheer scale and sources of news, they find themselves somewhat paralyzed. As humans naturally do in such situations, students often satisfice in terms of news sources—accepting “good enough,” proximate (from friends or media) descriptions rather than seeking out multiple perspectives or going to “canonical” sources of news, like newspapers. Furthermore, much of what they consume is visual rather than textual—internet genres like memes, gifs, and short videos play an outsized role in their digestion of the day’s events. (Side note: After recently seeing Yale Art Gallery’s show “Seriously Funny: Caricature Through the Centuries,” I think there’s a good article to be written about the historical parallels between today’s visual memes and political cartoons from the past.) Of course, the entire population faces the same issues around our media ecology, but students are an extreme case.
And perhaps also a cautionary tale. I think this study’s analysis and large survey size (nearly 6,000 students from a wide variety of institutions) should be a wake-up call for those of us who care about the future of the news and the truth. What will happen to the careful ways we pursue an accurate understanding of what is happening in the world by weighing information sources and developing methods for verifying what one hears, sees, and reads? Librarians, for instance, used to be much more of a go-to source for students to find reliable sources of the truth, but the study shows that only 7% of students today have consulted their friendly local librarian.
It is incumbent upon us to change this. A purely technological approach—for instance, “improving” social media feeds through “better” algorithms—will not truly solve the major issues identified in the news consumption study, since students will still be overwhelmed by the volume, context, and heterogeneity of news sources. A more active stance by librarians, journalists, educators, and others who convey truth-seeking habits is essential. Along these lines, for example, we’ve greatly increased the number of workshops on digital research, information literacy, and related topics at Northeastern University Library, and students are eager attendees at these workshops. We will continue to find other ways to get out from behind our desks and connect more with students where they are.
Finally, I have used the word “habit” very consciously throughout this post, since inculcating and developing more healthy habits around news consumption will also be critical. Alan Jacobs’ notion of cultivating “temporal bandwidth” is similar to what I imagine will have to happen in this generation—habits and social norms that push against the constant now of social media, and stretch and temper our understanding of events beyond our unhealthily caffeinated present.
Last week we launched the second season of the What’s New podcast. My first guest was Dan Kennedy, who studies journalism and new media, and has a new book out on the changes happening right now to newspapers like the Washington Post. Dan’s got some great commentary on the difficulties of newspapers since the web emerged in the 1990s, the role of journalistic objectivity in the face of “fake news” criticism, and why someone like Jeff Bezos might want to buy the Post. His special focus on the future of news and newspapers is especially relevant right now. Do give it a listen.
I’m also really excited about our fall lineup, which includes Tina Eliassi-Rad talking about bias in artificial intelligence algorithms, Margaret Burnham on the Civil Rights and Restorative Justice project, and David Herlihy on the changes to the music industry in an age of streaming. In addition, close to the November election, former Governor Michael Dukakis will join us on the program.
To receive all of these shows and more, you can subscribe to What’s New on Apple Podcasts, Google Play, Stitcher, Overcast, or wherever you get your podcasts. Thanks in advance for tuning in—hope you enjoy the new season.
You don’t see it until you’re right there, and even then, you remain confused. Did you miss a turn in the road, or misread the map? You are now driving through someone’s yard, or maybe even their house. You slow to a stop.
On rural road R575, also known as the Ring of Beara and more recently rebranded as part of the Wild Atlantic Way, you are making your way along the northern coast of the Beara Peninsula in far southwestern Ireland. You are in the hamlet of Gortahig, between Eyeries, a multicolored strip of connected houses on the bay, and Allihies, where the copper mines once flourished. The road, like the landscape, is raw, and it is disconcertingly narrow, often too narrow for two cars to pass one another.
But not as narrow as what you suddenly see in front of you, which seems too thin for even one car. This road that strings together the scenic green towns of the peninsula into a jade necklace somehow threads its way between an old house and an old shed at a 45-degree angle. Even in a small car, you take your time making your way through, so as not to hit the buildings that crowd the road. A stern sheep looks down at you from the hill nearby.
Dumbfounded, you ponder: “How do trucks and buses make it through here?”
The answer, of course, is that they don’t. Arriving in the next town, you ask at the pub about the narrow passage behind you, and the bartender fills you in.
No, large vehicles can’t get through there. If they leave from Eyeries or Allihies, when they get to that house they realize they can’t go any further, and they have to back up a mile or more just to turn around — in reverse on a winding mountain road that has drop-offs into the Atlantic. So this narrow passage of Gortahig restricts movement along the main circulating road of the Beara Peninsula — a choke point of a hundred feet along a hundred-mile stretch.
Have they ever thought about, you know, widening the road?
Well, it is someone’s house and shed, you’re gently told, a family that’s lived there a long time. Some years ago, the owner evidently offered to let the shed be knocked down to open up more room for the road, but others in west Cork County weren’t passionate about forcing that change. The only parties motivated to alter the road were the tour companies that wanted to send large coaches around the Ring of Beara, like they do on the next peninsula over, Kerry.
Given the history of the property and the cost of a new road, the majority decided just to let things be. So the narrow passage of Gortahig remains.
And as you think more about it, the more you realize how much this tiny dot on the map changes everything in western Ireland. Because the big tour buses can’t make it around the Ring of Beara, they stick to the Ring of Kerry. Because they stick to the Ring of Kerry, that peninsula to the north has dramatically more tourists than Beara, even though they are equally beautiful. Because there are far fewer tourists on Beara, large hotels haven’t been established there like they have been across Kerry. Because there aren’t many hotels or tourist infrastructure, the scene on Beara is decidedly calmer, smaller, and more local.
When you arrive in Castletownbere, the largest, but still rather small, town on the Beara Peninsula, you notice that it remains primarily an active fishing port, despite abundant natural beauty and an island just off the coast with medieval ruins. It’s a tourist magnet with the polarity reversed. The fair that comes to Castletownbere in August doesn’t have the pop acts that show up for Galway’s summer arts festival, but it does feature an egg toss and a fish packing box stacking contest.
All it would take to change all of this is to relocate a modest house or its even more modest shed, but they’ve chosen not to do that on Beara. They like things as they are.
Remember a decade or two ago when it was our national pastime to complain about email? More recently, as I’ve reassessed this blog, my social media presence, and our centralized digital platforms in general, I’ve come to realize just how much the email system got right, in spite of full inboxes, spam, and security issues. Despite, or perhaps because of, its early inception, email avoided many of the worst aspects of our modern media environment.
Compare email with the other, newer platforms we use today. I think it looks pretty darn good. Now imagine structuring some of those platforms in the same way, and how much better they would be.
When I arrived at Northeastern University a year ago, I wanted to start a new podcast that highlighted new ideas and discoveries through interviews with a wide range of faculty and researchers. Snell Library has incredible facilities not only for quiet study but also for the production of media and digital scholarship, and so it was natural to use our professional recording studio and the expertise of our staff to create this podcast. The result was What’s New, which wrapped up its first season a couple of months ago.
Longtime readers of this blog will know that I had a prior podcast, Digital Campus, which began in 2007, during the first wave of podcasting. Created with my friends at the Roy Rosenzweig Center for History and New Media, it was a roundtable discussion of how digital media and technology were affecting learning, teaching, and scholarship at colleges, universities, libraries, and museums. Digital Campus lasted through 2015, and built up a nice audience of fellow practitioners in digital humanities, academia, and cultural heritage institutions over those eight years.
With What’s New I wanted to draw on a larger canvas than Digital Campus, and try to reach an even larger audience. This wasn’t purely populist. In part the new podcast was my audio answer to the ongoing question about the social role and value of the academy; to me, that answer is not very complicated, and can be seen just by walking around a campus and talking to people. For the most part, despite all of the criticism and hand-wringing, universities still foster the people, environment, time, and resources to allow us to delve into topics far more deeply than anywhere else, and that process leads to profound, applicable, and enriching ideas in the broadest sense: not only scientific and technical breakthroughs but also a better understanding of ourselves as human beings.
Think about the difference between a blog post and a book: one can be tossed off in an afternoon at a coffee shop, while the other generally requires years of thought and careful writing. Not all books are perfect — far from it — but at least authors have to wrestle with their subject matter more rigorously than in any other context, look at what others have written in their area, and situate their writing within that network of thought and research.
Podcasts have generally been more off the cuff than rigorous. Sure, there are now many NPR, BBC, and other podcasts that are professional and well-produced, but a majority of podcasts are still unedited conversations. Sometimes that format can work well — I’m biased, but I think Digital Campus was fun to listen to, in part because we were friends and could joke with each other, or quickly grasp where one of us was going with a topic and then riff off of that.
Before launching, we had many discussions about the structure and tone of What’s New, and settled on a simple half-hour interview format: long enough to go deep into a challenging topic and do it justice, but short enough to avoid the casual conversation that drags on for an hour or two. We left it to the listener to learn more through links, a related book, and the like.
I’m thrilled with how the first season went, audience-wise. Last time I checked we had over 30,000 streams so far, and the weekly numbers continue to grow. I’ve really enjoyed reading articles and books on topics I know nothing about and then having 30 minutes to frame complicated subjects in plainspoken ways, and to ask some probing questions of the guests on the show. It’s allowed me to get to know the incredible faculty at Northeastern, and to promote their work. (At the end of the season, we had a special guest from off campus, and that is likely to happen more in the future.)
If there’s one bit of self-criticism, it’s that the format of What’s New, especially within the strictures of a professional recording studio, could occasionally come across as a bit too formal, so as we think ahead to Season 2, we’re going to sprinkle in some looser elements. We’re changing up the sound design a bit and recording the podcast outside of the studio, potentially with sounds from the field (e.g., within a lab). There will be a new, less ponderous theme song. I think I got better and less stiff as an interviewer as the season went on, but I’ll be working on that too; I have to admit I’m more used to being the interviewee than the interviewer.
For now, it’s a good time to catch up on Season 1 if you haven’t done so already, and subscribe to the podcast (just use one of the links at the top of the What’s New site) for the launch of Season 2 in September. Here are the episodes from Season 1:
1. How We Respond to Disaster – how cities bounce back from natural disasters or terrorism
2. Fake News and the Next Generation – the news consumption habits of young people, and the elusiveness of the truth
3. The Steamship Revolution – the spaceships of the 19th century
4. Enabling Engineering – an incredible group that designs devices for those with physical and cognitive disabilities
5. Inventing Writing – a fascinating story of how the Cherokee language went from oral to written
6. The Secrets of Hollywood Storytelling – a screenwriter and film producer on how movies are written and sonically designed
7. Tracking the Invisible Infrastructure of Our Cities – what you learn when you attach GPS devices to your trash
8. The Algorithms That Shape Our Lives – clever methods reveal how Facebook, Amazon, and other big internet companies work
9. The Hidden Universe of Comics – beyond the superheroes you see at the multiplex
10. Designing for Diversity – how to design digital systems to be more attentive to the true diversity of humanity
11. The Future of Energy – adding solar power to the grid is not so simple
12. Fractivism – how communities are responding to this new energy production method
13. The Evolution of Cities – the collision of people, transportation, and buildings as seen through the eyes of a city planner
14. Privacy in the Facebook Age – or what’s left of it, and whether regulation will help
15. Addressing Neglected Diseases – discovering vaccines and cures for these diseases requires a completely different model
16. Engineering the Future: Boston’s Big Dig – inside one of the biggest engineering projects in history, from its primary engineer and advocate
Social media is like the weather: everyone likes to complain about it, but nobody does anything to change it. Of course, you can do something about it, and some have — namely, by deleting your social media accounts. But the vast majority of people, even those who see serious flaws in our social media landscape, continue to use it, in many cases avidly.
As someone who is naturally social but who has found social media like Twitter increasingly unpleasant and lacking in what drew me to these services in the first place — the ability to meet new and interesting people, encounter and discuss new ideas and digital resources, and make a few bad puns on the side — I don’t consider deletion a great option, for a number of reasons.
Some of those reasons are undoubtedly selfish. Having a large number of followers on a social media platform is a kind of super power, as John Gruber has said. With over 18,000 followers, accreted over 10 years on Twitter, I can ask for help or advice and usually get a number of very useful responses, spread the word widely about new projects and initiatives, find new staffers for my organization, and highlight good, innovative work by others.
I will also admit to liking the feeling of ambient humanity online, although the experience of social media in the last two years has tempered that feeling.
So what to do? I’ve tried alternatives to Twitter before, such as App.net, a Twitter clone that launched in 2012. It went nowhere and shut down. I have considered Mastodon, a somewhat better thought-out Twitter replacement that is decentralized — you join an instance of the platform and can even host one yourself, and yet you can connect across these nodes in a very webby way.
Most of these Twitter replacements unfortunately have frictions that slow widespread adoption. It’s often hard to find people to follow, including your friends and colleagues. The technology can be janky, with posts not showing up as quickly as on Twitter. New services are largely populated in the early days by young white dudes (I am fully aware that I am not helping with this diversity problem, although I’m no longer so young). And it’s unclear whether they’ve truly solved the “I’d rather not be hounded by Nazis” problem, especially since they all have fewer than a million users, a tiny population in social media terms.
My new social media setup is this:
Here’s my early sense of how this will work:
For those who would like to replicate what I’ve done, Micro.blog has good documentation on setting up a personal social media domain like social.dancohen.org, including for the majority of domain registrars who don’t have automated mechanisms, like Hover, for creating a proper DNS CNAME record. Kathleen Fitzpatrick has a more sophisticated setup using WordPress, where her posts of the type “micro” are ported to Micro.blog, and then over to Twitter. Chris Aldrich has a longer description about how to structure your WordPress site to be able to do what Kathleen did, separating brief social media posts from longer blog posts that remain on the root domain.
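For readers curious about the DNS piece, the record in question is just an alias pointing a subdomain at Micro.blog’s servers. In standard zone-file syntax it looks something like the following sketch (the subdomain and target hostname here are illustrative placeholders, not Micro.blog’s actual endpoint; their documentation specifies the real target for your account):

```
; Point a personal subdomain at a hosted Micro.blog site.
; "your-site.micro.blog." is a hypothetical target host used
; only for illustration -- consult Micro.blog's setup docs.
social.example.org.    3600    IN    CNAME    your-site.micro.blog.
```

Registrars with automated setup create this record for you; with the others, you paste the equivalent into the DNS control panel by hand.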
Ever since Jyn Erso and Cassian Andor extracted the Death Star plans from a digital repository on the planet Scarif in Rogue One, libraries, archives, and museums have played an important role in tentpole science fiction films. From Luke Skywalker’s library of Jedi wisdom books in The Last Jedi, to Blade Runner 2049’s multiple storage media for DNA sequences, to a fateful scene in an ethnographic museum in Black Panther, the imposing and evocative halls of cultural heritage organizations have been in the foreground of the imagined future.
There have been scattered instances of cultural memory institutions in such films in the past—my colleagues in the library will recall, with some eye-rolling, the librarian Jocasta Nu in Star Wars, Episode II: Attack of the Clones—but the appearance of these institutions in recent speculative fiction on the screen seems especially relevant and rich, and central to these films’ plots.
Which raises the question: Why are today’s science fiction films obsessed with libraries, archives, and museums?
The answer, of course, is rooted in how science fiction has always pursued a heightened understanding of our very real present. At the same time that these movies portray an imagined future, they are also exploring our current anxiety about the past and how it is stored: how we simultaneously wish to leave the past behind, and how it may also be impossible to shake. They indicate that we live in an age with an extremely strained relationship to history itself. These films are processing that anxiety on Hollywood’s big screen at a time when our small screens, social media, and browser histories document and preserve so much of what we do and say.
Luke Skywalker’s collection of rare books in The Last Jedi neatly captures the tension inherent in these movies. In an egg-shaped stone hut reminiscent of (and indeed filmed in) the rural parts of western Ireland where Christian monasteries were established in the Middle Ages, Luke’s archive of Jedi books represents a profound bond to the traditional wisdom of the Jedi cult. Yet as the movie proceeds, it becomes clear that these volumes are also a strong link in the chain that holds Luke back. Ultimately his little library is not a source of knowledge but one of angst. It makes him surly and dissociated from present possibilities, and he must sever himself from the past that is encapsulated in paper. Burning the books becomes a necessary precursor to his taking action, and to moving to the metaphysical (and more real) plane of the Jedi.
Black Panther uses two characters, rather than one, to embody the tense dynamic between setting history aside and being unable to let it go: the dueling figures of T’Challa (Black Panther) and N’Jadaka (Erik Killmonger). T’Challa understands that black people have been abused and enslaved, globally, for centuries. And yet he imagines a day when Wakanda steps beyond this past, and integrates their society and advanced technology with the outside world that has done so much wrong to them. He is a forward-looking optimist.
N’Jadaka, on the other hand, seethes with anger about the past, and how it is so vividly documented in the halls of cultural heritage institutions. Before he declines into a more monochromatic villain, he experiences frankly justifiable rage at what whites have done with black culture—namely, stolen and stored it like an alien, and lesser, culture, in glass-cased museums. A pivotal scene in one such museum reflects the troubled genesis of institutions such as the Pitt Rivers Museum, which collected artifacts of non-white culture from the British Empire to be viewed and dissected by professors in Oxford.
In one of the most memorable lines of Public Enemy’s It Takes a Nation of Millions to Hold Us Back, the seminal rap album that documents what happened to African slaves and their descendants in the United States, Flavor Flav shouts “I got a right to be hostile!” given this terrible history. A poster of that album is on the wall of N’Jadaka’s father’s apartment in Oakland, and it frames, like the glass case in the museum, the young man’s view of a world in which his ancestors have been constantly subjugated.
Blade Runner 2049 is even more unrelentingly pessimistic about the future and its connection to the past. In the movie’s opening, we are told that the documentary evidence of that past has been wiped out in a catastrophic electronic pulse that destroyed digital photographs and electronic records. As we learn, however, not all archives are lost. While personal images and documents that were never printed are gone forever, some plutocratic corporations maintain archival records, and we see several of them in the film: digital media as well as formats encased in glass spheres and more recognizable microfilm. Nevertheless, these archives are imperfect, like so much in the film. Even a leather-bound handwritten book of records in a wasteland orphanage has critical pages ripped out.
Because it is based on the work of Philip K. Dick, who was obsessed with libraries as part of a larger obsession with memory and reality, Blade Runner 2049 ultimately binds not only the past and present together, but the archival and the alive. Humans and replicants, the movie seems to argue, are simply incarnations of archival records, fleshy beings made up of the synthetic or parental DNA that form their core information architecture and the libraries of memories that are either fabricated or lived. This uneasy fusion is at the dark core of the film and its philosophical examination of the permeable boundary between the real and the artificial.
For all of these films, the past constantly threatens to come back to haunt the present. (Just ask those on the Death Star.) In turn, these big-screen portrayals of imagined libraries, archives, and museums should make us reconsider how what we preserve and make accessible reflects—and perhaps determines—who we really are.
I was fortunate to sit down for a rare interview with Fred Salvucci on the final episode of this season of the What’s New podcast. Fred is now at MIT, but he is well-known in the Boston area for conceiving of and championing a massive engineering project that came to be known as the Big Dig, and that completely transformed the city of Boston for the twenty-first century.
For most of its postwar existence, downtown Boston was split by a giant elevated highway called the Central Artery. The Artery was an artifact common to many cities in twentieth-century America, a terrible byproduct of the car-centric culture and suburbanization that flourished in the 1950s. Elevated roadways were aggressively cut through small-scale livable neighborhoods so that people could get into the city from the suburbs, and so that others could drive through a city without entering its local roadways on their way to distant destinations. Homes were often taken from people to make way for these elevated highways, and the walkability and attractiveness of cities suffered.
The Big Dig not only put the Central Artery underground, but added a massive linear park in the center of Boston, a marquee bridge that aptly reflected the famous Bunker Hill Monument, and another tunnel to Logan Airport. It thus completely reshaped the city and improved not only its transportation, but Boston’s skyline and its ground-level fabric and beauty. It reconnected neighborhoods and people.
In a wide-ranging conversation, Fred spoke to me about how the Big Dig was engineered—it was one of the biggest engineering projects in history, at a cost of $15 billion, through a 400-year-old city ($1 billion just to relocate ancient pipes and wires)—but also how he was able to get so many people on board for such a gigantic project. Indeed, as you’ll hear, Fred saw it more as a political and socio-economic project than a transportation initiative.
Moreover, Fred provides some good thoughts about the future of transportation, including the impact (likely negative, in his view) of self-driving cars, and whether we can ever find the will—and the funds—to do something like the Big Dig again. Do tune in.
I’m delighted that the news is now out about the Andrew W. Mellon Foundation‘s grant to Northeastern University Library to launch the Boston Research Center. The BRC will seek to unify major archival collections related to Boston, hundreds of data sets about the city, digital modes of scholarship, and a wide array of researchers and visualization specialists to offer a seamless environment for studying and displaying Boston’s history and culture. It will be great to work with my colleagues at Northeastern and regional partners to develop this center over the coming years. Having grown up in Boston, and now having returned as an adult, I feel the project has a personal significance for me as well.
I’m also excited that the BRC will build upon, and combine, some of the signature strengths of Northeastern that drew me to the university last year. For decades, the library has been working with local communities to assemble and preserve materials and stories related to the city. We now have the archives of a number of local and regional newspapers, and the library has been active in gathering oral and documentary histories of nearby communities, as in the Lower Roxbury Black History Project. We also have strong connections with other important regional collections and institutions, such as the Boston Public Library and the Boston Library Consortium, and with data sets produced by Boston’s municipal government and other sources, through our campus’s leadership in BARI, the Boston Area Research Initiative.
My friends in digital humanities will know that Northeastern has a world-class array of faculty and researchers doing cutting-edge, interdisciplinary computational analysis. We have the NULab for Texts, Maps, and Networks, the Network Science Institute, numerous faculty in our College of Arts, Media, and Design who work on digital storytelling and information design, and the library’s own terrific Digital Scholarship Group, with dedicated specialists in GIS and data visualization. We will all be working together, and with many others from beyond the university, to imagine and develop large-scale projects that examine major trends and elements of Boston, such as immigration, neighborhood transformations, economic growth, and environmental changes. There will also be an opportunity for smaller-scale stories to be documented, and of course the BRC itself will be open to anyone who would like to research the city or specific communities. Because Boston is a place with a long and richly documented history, a coastal location, and educational, scientific, and commercial institutions that have long maintained global relationships, the study of the city is also the study of themes that are broadly important and applicable.
My thanks to the Mellon Foundation for their generous support. It should be fascinating to watch all of this come together—stay tuned.