jill/txt

Thinking about larps for research dissemination about technology and society

By Jill

I spent some of last week at a larp (live action roleplaying) camp for kids run by Tidsreiser, and had a wonderful time. I have secretly wanted to try larping since I was a teenager, but there weren’t any local ones, then I didn’t dare try, and then I sort of forgot and just settled into being a boring grownup. Luckily, one of the advantages of having kids is that you get to try out new stuff. So after a year of sitting around watching the kids battling and sneaking around the forest with their latex swords, and dropping them off at the Nordic Wizarding Academy (Trolldomsakademiet), I’ve started joining in a bit, and I absolutely love it.

Kids with swords at Eventyrspill last winter.

After chatting with the fascinating game masters and larpwriters at last week’s camp, and trying out some different kinds of larp there, I started thinking about what a great tool larping could be for teaching and research dissemination – perhaps especially in subjects like digital culture, or for our research on the cultural implications of machine vision, because one of our main goals is to think through ethical dilemmas: what kind of technologies do we want? What kinds of consequences could these technologies have? What might they lead to? A well-designed larp could give participants a rich opportunity to act out situations that require them to make choices about or experience various consequences of technology use. This post gathers some of my initial ideas about how to do that, and some links to other larps about technology that people have told me about.

To my delight, when I started talking about this idea, I discovered that two of the larpwriters at the camp, Anita Myhre Andersen and Harald Misje, are also working with the University Museum here at the University of Bergen, which is just relaunching this autumn with a big plan to host more participatory forms of research dissemination. We’re going to meet up after the summer holidays to talk about possibilities.

What might a machine vision larp include?

So what would a larp about machine vision be like? There’d need to be some technology. At a minimum lots of cameras – surveillance cameras, body cams, smart baby monitors or smart doorbell cameras. Somewhere, somebody watches those images, or someone can gain access to them somehow. Someone might be able to manipulate the images, share them or alter them. Perhaps there’s a website that participants could access from their phones with news, in-game blogs and private photo messaging – and perhaps some people might have access to more of this than others, and some might find ways to access “private” images by nefarious means. There might be tools that could (fictionally) analyse people’s emotions, health, attractiveness, mental state, whatever, based on the images. Maybe we could adapt some of the scenarios from this speculative design research paper by James Pierce: “Smart Home Security Cameras and Shifting Lines of Creepiness: A Design-Led Inquiry” (Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems).

One of the scenarios in James Pierce’s CHI’19 paper asks how an employer could use information about their nanny’s emotional state. Something like this could probably be fictionalized and used in a larp.

I’m thinking participants would be given roles such as

  • Directors of/salespeople for a technology company – perhaps one trying to sell emotion recognition software to the city for use in schools, public surveillance, by the police, etc., or trying to sell smart home surveillance systems, or something slightly more outlandish: smart body cams with built-in facial recognition and networked tracking of suspect individuals, sold as personal protection for young women worried about being raped, or, even more Black Mirror-ish, optical implants with total recall. Their goal would be to convince people to use their technology.
  • People with roles that allow them to make choices about buying/implementing technology that will affect other people (e.g. politicians, bureaucrats, the chief of police, the principal of a school or university, a shop owner).
  • Groups or individuals with ill intent who (using some game mechanism) can gain access to or alter personal data from the technology, or generate deepfake videos, in order to do something scary – this could be scammers, an oppressive regime, or something else.
  • Activists who are for/against the technology for various reasons. They could have backstories explaining why they hold strong opinions. (Could lead to interesting protests etc. – have materials available for making banners 🙂)
  • Regular users who experience some specific situation that makes them think about the technology. This could include something like a teacher or parent or prisoner told to wear a body cam to monitor all their social interactions, or somebody who works in a shop who is being constantly surveilled.
  • People whose job it is to watch the surveillance feeds, monitor the “smart” facial analysis algorithms etc.

Participants could be told that their character follows a specific ethical framework, such as utilitarianism, care ethics, deontology, ubuntu, Confucianism, etc. (If using this in teaching, I’d base it on Charles Ess’s chapter on ethical frameworks in his book Digital Media Ethics.)

Obviously these are all very early ideas, from a professor with very little larping experience (i.e. me), and we may end up doing something completely different.

Other larps and related projects dealing with contemporary technology and ethics

To learn more about what’s out there, I posted a question to the Association of Internet Researchers’ mailing list to see if any fellow internet researchers had experience with using LARPs in connection with research. As usual for questions to the list, I got some great answers, both on and off list. Here are some of the projects, people and books people told me about.

The most developed LARP-based teaching program for universities that I’ve seen so far is Reacting to the Past at Barnard College in New York City, a centre that has developed lots of LARPs for teaching history. They have a system that seems really well thought out for taking games through various levels of development and play testing, and once a game is very thoroughly tested, they publish it so others can use it in their own teaching. Here are their published LARPs. Their focus is on historical situations, so none of their games seem to be directly applicable to the emphasis I want on ethical negotiations about possible near futures – except possibly Rage Against the Machine: Technology, Rebellion, and the Industrial Revolution. I’ve filled out the form to request to download the materials for that game, and am looking forward to seeing how they have it set up.

I’ve also received tips about two different artist-researcher collaborations that have resulted in LARPs. Omsk social club developed a LARP at Somerset House earlier this year, based on research on digital intimacy by Alessandro Gandini and artist/curator Marija Bozinovska Jones. They’re still working on putting documentation online, but you can get some idea of how it worked from this short video:

Secondly, Martin Zeilinger responded to my question to the list to tell me about a series of LARPs developed by Ruth Catlow with Ben Vickers. Martin himself is currently in the early stages of developing a LARP with Ruth about cashless societies, aimed at 15–25-year-olds. I found a description of one of Ruth and Ben’s earlier LARPs that explored the excitement about blockchain and tech startups in a workshop called ‘Role Play Your Way to Budgetary Blockchain Bliss’. The LARP was hosted by the Institute of Network Cultures in Amsterdam in 2016, and conveniently for me, they wrote up a blog post about it. This LARP was designed like a hackathon set in a near future, where all the projects that were pitched were about cats, and participants were “assigned a cat-invested persona and the general goal of networking their way into a profitable enterprise for themselves, the cat community, and the hosting institution.” The blog post explains that after the pitches:

The rest of the first day gave chance to the multiplicity of attendees to ask, negotiate, and offer their skills to their favourite projects. It became rapidly clear that the diversity of the audience had different motivations, skills, and ideologies. Each participant performed a part of the complex ecosystem of fintech and start-ups: investors, developers, experts, scholars, and naive enthusiasts had the difficult task to sort out differences in order to build up lasting and successful alliances. Everyone had something to invest (time, energy, money, venues, a van full of cats) and something to get in return (profits, cat life improvement, patents, philanthropy aspirations).

It’d be pretty straightforward to copy this structure and make a kind of speculative startup hackathon for new machine vision-related technologies – and that could certainly lead to many ethical debates. I can imagine something like that working well for teaching, and being reasonably easy to carry out. I’d really like to make something more narrative, though.

Netprovs are another genre that has a lot in common with larps, and which we’ve been involved with in our research group. Netprov is sort of an online, written version of a larp that lasts for a day, a week or several months. Rob Wittig wrote his MA thesis here on netprov, and he and his collaborator Mark Marino have explicitly compared netprov to larps. Scott Rettberg is planning a machine vision-themed netprov in our course DIKULT203: Electronic Literature this autumn, which should be fun, and which may provide good ideas for a larp on the topic as well.

Another thread to consider is design fiction, design ethnography and user enactments. A really interesting paper by Michael Warren Skirpan, Jacqueline Cameron and Tom Yeh describes an “immersive theater experience” called “Quantified Self”, designed to support audience reflection about ethical uses of personal data. They used a script and professional actors, asked audience members to share their social media data, and set up a number of technological apps and games that used that data in various ways. So this isn’t a larp, because the audience aren’t really actors driving the narrative: they stay firmly audience members, though participatory ones.

The show had an overarching narrative following an ethical conflict within a famous tech company, DesignCraft. Immediately upon signing up for the show, participants were invited to a party for their supposed friend, Amelia, who was a star employee at DesignCraft. As the story unravels, they learn that Amelia is an experimental AI created using their personal data, who, herself, has begun grappling with the ethics of how the company uses her and its vast trove of data.

Within this broader plot arc, main characters were written to offer contrasting perspectives on our issues. Don, the CEO of DesignCraft, represented a business and innovation perspective. Lily, the chief data scientist of DesignCraft, held scientific and humanitarian views on the possibilities of Big Data while struggling with some privacy concerns. Felicia, an ex-DesignCraft employee, offered a critical lens of technology infiltrating and destroying the best parts of human relations. Evan, a hacker, saw technology as an opportunity for exploitation and intended to similarly use it to exploit DesignCraft. Amelia, a humanoid AI, struggled with the idea of being merely an instrument for technology and the artificiality of knowing people only through data. Felicity, an FBI agent, believed data could support a more secure society. Bo, the chief marketing officer at DesignCraft, felt strongly that technology was entertaining, useful, and enjoyable and was willing to make this trade-off for any privacy concern. Finally, Veronica, a reporter, was concerned about the politics and intentions of the companies working with everyone’s personal data.


Skirpan, Michael Warren, Jacqueline Cameron, and Tom Yeh. “More Than a Show: Using Personalized Immersive Theater to Educate and Engage the Public in Technology Ethics.” In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 464:1–464:13. CHI ’18. New York, NY, USA: ACM, 2018. https://doi.org/10.1145/3173574.3174038.

But what I would really like is to develop something both more participatory than the immersive theater example, and more narrative than the artist-led larps, with events and conflicts and problems to solve. That’s probably quite ambitious and difficult. I am very much looking forward to sitting down with Anita and Harald, who have lots of experience (good thing, since I have practically none).

Harald works in a cubicle by day and plays the lord of the dark elves and various other parts on weekends. I’m hoping he and Anita will have good ideas for villains for the machine vision larp.

(And here’s a rather fun NRK documentary from 2011 about Anita – the kids’ larp in the forest is still going strong, and we played a version of the murder master at the camp last week.)

People also suggested the names of several researchers doing relevant work, and recommended the following books:

  • Stenros, Jaakko and Markus Montola. Nordic Larp. Stockholm: Fëa Livia, 2010.
  • Simkins, David. The Arts of Larp: Design, Literacy, Learning and Community in Live-Action Role Play. Jefferson, North Carolina: McFarland & Company, 2015.

If you know about larps about technology and society, or that are used for research dissemination or teaching, please leave a comment! I would love to know more!

Lesson plan: DIKULT103 29.01.2019 – Video Game Aesthetics and Orientalism in Games

By Jill

Readings: Understanding Video Games Chapter 5; Woke Gaming Chapter 6 (Kristin Bezio, “The Perpetual Crusade: Rise of the Tomb Raider, Religious Extremism, and the Problem of Empire”, pp. 119–138)

Learning goals: After doing the reading, taking the quiz and attending the class, students can

  • Explain how video game aesthetics incorporate game mechanics as well as visuals, sounds, etc. 
  • Use some of the terms in Understanding Video Games chapter 5 to describe games
  • Explain Said’s concept of orientalism and discuss it in relation to video games

Participation goals: More than 20 different people have spoken in plenary, and everyone has participated in a small group discussion. (Total class size is 79)

  1. Discuss with your neighbour: What does “aesthetics” mean in general? What would “video game aesthetics” mean? Plenary: Class shares some suggestions.
  2. Talk about the concept of aesthetics – the philosophy of what is beautiful: why do some works of art give us pleasure and make us feel a sense of beauty or satisfaction, while we see others as ugly? The idea of form matching content.
  3. Show Rise of the Tomb Raider review and ask students to note down what aspects of the game the reviewer describes as satisfying, fun, good. Are these aspects “the game aesthetics”?
  4. What does “aesthetics” mean in video games? This chapter is mostly about the details of different kinds of rules and mechanics – a structuralist approach, useful for describing games. But what makes a game beautiful, satisfying?
    • Could be the graphics? [show YouTube video: Graphics vs Aesthetics]
    • Could be balance between gameplay/processing? E.g. Chris Crawford’s description of Space Invaders being beautiful because as you kill aliens, the computer has less processing work to do and so the remaining aliens become faster, which increases the difficulty of the game.
    • Is it to do with form/content matching? E.g. Journey.
    • Kant: separated beauty from the sublime. Show brief slideshow about the sublime. Can a game be sublime?
  5. Small groups: What is a game you have found beautiful or emotionally moving? What about the game made you feel that way?
  6. Orientalism – Edward Said (show first Said himself explaining orientalism here, from 1:23 to 3:50, then this rap about orientalism)
  7. Small groups: Do you know any games that might be orientalist, following Said’s definition?
  8. Plenary discussion.

After class:

  • Complete readings and quiz if you haven’t already
  • Analytical Let’s Play video due Friday

For next class:

  • Read: Understanding Video Games ch. 6 (pp. 157–198) and Ask, Svendsen & Karlstrøm: «Når jentene må inn i skapet» (use Google Translate if you don’t read Norwegian – it Google translates surprisingly well)
  • Do the quiz or discussion on this material that will appear in Mitt UiB by 1 Feb.

Hostile machine vision

By Jill

One of our goals in MACHINE VISION is to analyse how machine vision is represented in art, stories, games and popular culture. A really common trope is showing machine vision as hostile and as dangerous to humans. Machine vision is used as an effective visual metaphor for something alien that threatens us.

My eight-year-old and I watched Ralph Breaks the Internet last weekend. I found it surprisingly satisfying – I had been expecting something inane like that emoji movie, but the story was quite engaging, with an excellent exploration of the bad effects of neediness in friendships. But my research brain switched on in the computer virus scene towards the end of the movie, because we see “through the eyes of the virus”. Here is a shot of the virus, depicted as a dark swooshing creature with a single red eye:


And here you see the camera switch to what the virus sees. It is an “insecurity virus” that scans for “insecurities” (such as Vanellope’s anxious glitching and Ralph’s fear of losing Vanellope) and replicates them.

And of course it uses easily-recognisable visual cues that signify “machine vision” – a computer is seeing here.

I noticed an almost identical use of this visual metaphor on another visit to the cinema with the kids, though this time in an ad from the Australian Cancer Council. Here, the sun is presented as seeing human skin the way an alien might.

The way humans see skin is not the same way the sun sees skin. And each time the sun sees your skin, when the UV is 3 or above, it’s doing damage beneath the surface. It builds up, until one day, it causes a mutation in your DNA, which can turn to skin cancer. Don’t let the sun see your DNA. Defend yourself.

The visuals are different. While Ralph Breaks the Internet uses an overlay of data, the ad shifts from a “human” camera angle to zooming in, black and white, fading around the sides of the image, a shaky camera, and then appears to penetrate the skin to show what we assume is the DNA mutating. The sound effects also suggest something dangerous, perhaps mechanical.

Certainly machine vision isn’t always represented as hostile. It’s often presented as useful, or protective, or simply as a tool. This year we are going to be tracking different representations and simulations of machine vision in order to sort through the different ways our culture sees machine vision. Hostile is definitely one of those ways.

If you have suggestions for other examples we should look at, please leave a comment and tell us about them!

Seeing brainwaves

By Jill

Last week I was in London, where I visited Pierre Huyghe’s exhibition Uumwelt at the Serpentine Gallery. You walk in, and there are flies in the air, flies and a large screen showing images flickering past, fast. The images are generated by a neural network and are reconstructions of images humans have looked at, but that the neural network hasn’t had direct access to – they are generated based on brainwave activity in the human subjects.

The images flicker past in bursts, fast fast fast fast fast slow fast fast fast, again and again, never resting. Jason Farago describes the rhythm as the machine’s “endless frantic attempts to render human thoughts into visual form”, and frantic describes it well, but it’s a nonhuman frantic, a mechanical frantic that doesn’t seem harried. It’s systematic, mechanical, but never resting, never quite sure of itself but trying again and again. I think (though I’m not sure) that this is an artefact of the fMRI scanning or the processing of the neural network  that Huyghe has chosen to retain, rather than something Huyghe has introduced.

Huyghe uses technology from Yukiyasu Kamitani’s lab at Kyoto University. A gif Kamitani posted to Twitter gives a glimpse into how the system uses existing photographs as starting points for figuring out what the fMRI data might mean – the images that flicker by on the right-hand side sometimes have background features like grass or a horizon line that are not present in the left image (the image shown to the human). Here is a YouTube version of the gif he tweeted:

The images and even the flickering rhythms of the Kamitani Lab video are really quite close to Huyghe’s Uumwelt. At the exhibition I thought perhaps the artist had added a lot to the images, used filters or altered colours or something, but I think he actually just left the images pretty much as the neural network generated them. Here’s a short video from one of the other large screens in Uumwelt – there were several rooms in the exhibition, each with a large screen and flies. Sections of paint on the walls of the gallery were sanded down to show layers of old paint, leaving large patterns that at first glance looked like mould.

The neural network Kamitani’s lab uses has a training set of images (photographs of owls and tigers and beaches and so on) which have been viewed by humans who were hooked up to fMRI, so the system knows the patterns of brain activity that are associated with each of the training images. Then a human is shown a new image that the system doesn’t already know, and the system tries to figure out what that image looks like by combining features of the images it knows produce similar brain activity. Or to be more precise, “The reconstruction algorithm starts from a random image and iteratively optimize the pixel values so that the DNN [DNN=deep neural network] features of the input image become similar to those decoded from brain activity across multiple DNN layers” (Shen et al. 2017). Looking at the lab’s video and at Uumwelt, I suspect the neural network has seen a lot of photos of puppy dogs.
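
To make the quoted description a little more concrete, here is a minimal sketch of that kind of feature-matching reconstruction, written in Python with PyTorch. This is emphatically not the Kamitani Lab’s code: the target features below are random placeholders, whereas in the real pipeline they are decoded from fMRI recordings, and the choice of network and layers is just an assumption for illustration.

```python
# Sketch of "start from a random image and iteratively optimise the pixel values
# so that the DNN features of the input image become similar to" some target features.
# The targets here are placeholders standing in for features decoded from brain activity.
import torch
import torch.nn.functional as F
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pretrained network whose intermediate activations define the feature space.
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def dnn_features(img, layers=(4, 9, 18)):
    """Collect activations from a few layers ("multiple DNN layers" in the quote)."""
    feats, x = [], img
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            feats.append(x)
        if i >= max(layers):
            break
    return feats

# Stand-in targets: in the real pipeline these come from an fMRI decoder.
with torch.no_grad():
    targets = [f.clone() for f in dnn_features(torch.rand(1, 3, 224, 224, device=device))]

# Start from random pixels and optimise them towards the target features.
img = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)
optimizer = torch.optim.Adam([img], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    loss = sum(F.mse_loss(f, t) for f, t in zip(dnn_features(img), targets))
    loss.backward()
    optimizer.step()
    img.data.clamp_(0, 1)  # keep the pixel values in a displayable range
```

Because the optimisation can only reach for whatever the pretrained network already “knows”, the reconstructions tend to drift towards familiar textures and shapes from its training data – which may be part of why the results look like the network has seen a lot of puppy dogs.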

I’ve read a few of the Kamitani Lab’s papers, and as far as I’ve seen, they don’t really discuss how they conceive of vision in their research. I mean, what exactly does the brain activity correspond to? Yes, when we look at an image, our brain reacts in ways that deep neural networks can use as data to reconstruct an image that has some similarities with the image we looked at. But when we look at an image, is our brain really reacting to the pixels? Or are we instead imagining a puppy dog or an owl or whatever? I would imagine that if I look at an image of somebody I love my brain activity will be rather different than if I look at an image of somebody I hate. How would Kamitani’s team deal with that? Is that data even visual?

Kamitani’s lab also tried just asking people to imagine an image they had previously been shown. To help them remember the image, they were “asked to relate words and visual images so that they can remember visual images from word cues” (Shen et al. 2017). As you can see below, it’s pretty hard to tell the difference between a subject’s remembered swan and their remembered aeroplane. I wonder if they were really remembering the image at all, or just thinking of the concept or thing itself.


Figure 4 in Shen, Horikawa, Majima and Kamitani’s pre-print Deep image reconstruction from human brain activity (2017), showing the reconstruction of images that humans imagined.

Umwelt means “environment” or “the world around us” in German; Huyghe has given it an extra u at the start, in what Farago calls a “stutter” that matches the rhythms of the videos, though I had thought of it as more of a negator, an “un-environment”. Huyghe is known for his environmental art, where elements of the installation work together in an ecosystem, and of course the introduction of flies to Uumwelt is a way of combining the organic with the machine. Sensors detect the movements of the flies, as well as temperature and other data that relates to the movement of humans and flies through the gallery, and this influences the display of images. The docent I spoke with said she hadn’t noticed any difference in the speed or kinds of images displayed, but that the videos seemed to move from screen to screen, or a new set of videos that hadn’t been shown for a while would pop up from time to time. The exact nature of the interaction wasn’t clear. Perhaps the concept is more important than the actuality of it.

The flies are apparently born and die within the gallery, living their short lives entirely within the artwork. They are fed by the people working at the gallery, and appear as happy as flies usually appear, clearly attracted to the light of the videos.

Dead flies are scattered on the floors. They have no agency in this Uumwelt. At least none that affects the machines.

Updates on algorithms and society talks

By Jill

I’ve given a few more versions of the “algorithms and society” talks from this spring. You can still see the videos of those talks, but here are a few links to new material I’ve woven into them:

Social credit in China – this story by the Australian Broadcasting Corporation paints a vivid picture of what it might be like to live with this system. It’s hard to know exactly what is currently fact and what is conjecture.

Ray Serrato’s Twitter thread about YouTube recommending fake news about Chemnitz, and the New York Times article detailing the issue.

Generating portraits from DNA: Heather Dewey-Hagborg’s Becoming Chelsea

By Jill

Did you know you can generate a portrait of a person’s face based on a sample of their DNA? The thing is, despite companies selling this service to the police to help them identify suspects, it’s not really that accurate. That lack of precision is at the heart of Heather Dewey-Hagborg’s work Probably Chelsea, a display of 30 masks showing 30 possible portraits of Chelsea Manning based on a sample of her DNA that she mailed to the artist from prison. The work is showing at Kunsthall 3.14 here in Bergen until the end of September.

Many masks resembling human faces hang from the ceiling in an art gallery.

DNA fingerprinting is when a DNA sample is compared to the DNA of a known individual to see if they match. That is fairly uncontroversial. Forensic DNA phenotyping is analysing a DNA sample to find out something about an unknown individual. Gender is easy. Geographical ancestry (20% Asian, 60% European, 20% Native American, for instance) and externally visible characteristics (EVCs) such as eye and hair colour can also be predicted, but with a significant margin of error. DNA research has focused more on disease markers than on appearance, so this is not yet well developed. Perhaps for good reason. Right now, we can predict pigmentation (so skin colour, eye colour, hair colour) better than other externally visible characteristics like the shape of a nose or the arch of an eyebrow (Kayser 2015). Yet portraits generated from DNA samples are often presented as very accurate, as on the company Parabon’s website:

Parabon — with funding support from the US Department of Defense (DoD) — developed the Snapshot Forensic DNA Phenotyping System, which accurately predicts genetic ancestry, eye color, hair color, skin color, freckling, and face shape in individuals from any ethnic background, even individuals with mixed ancestry.

The thing is, these characteristics are then mapped onto a 3D model of the face. That can be quite accurate if they have access to the individual’s skull, as they might for the victim in a murder case, or where historians want to find out what a historical person looked like. If they don’t already know the basic structure of the face and skull, they use average faces based on the ethnic breakdown of the individual’s DNA. Parabon shows the process for mapping the general facial attributes to a specific skeletal structure here. As you can see, the end result is fairly different from the starting point.

a series of five male faces marked 1 to 5 with the following text: Stages of a Snapshot Facial Reconstruction: (1) a Snapshot composite produced from DNA extracted from the subject's bone; (2) Snapshot composite with skull overlay; (3) Snapshot composite after rescaling to conform to skull dimensions; (4) a cutaway image illustrating near final composite; and (5) final, blended Snapshot composite.

Heather Dewey-Hagborg writes in her essay for the exhibition catalogue that “Probably Chelsea shows just how many ways your DNA can be interpreted as data, and how subjective the act of reading DNA really is.”

A mask of a photorealistic human face is hanging in the foreground. Other masks hang behind it.

One of the striking things about visiting the exhibition with other people present rather than just seeing videos and photos of the silent masks in the online documentation is that you see how people are drawn to standing behind the masks. At the opening in Bergen I saw dozens of people taking photos standing among the masks. Here are two women examining the photos they have just taken.

Two women stand among hanging masks, looking at a camera.

I felt the urge too! Here I am:

A woman with brown hair stands behind a hanging mask of a human face.

Masks are of course a symbol of identity, and of identity as something that can be enacted, exchanged, used in a performance, put on and taken off. There’s something more, too, though: by imagining ourselves wearing these masks, we imagine ourselves stepping into Chelsea Manning’s possible identity. As Dewey-Hagborg writes, “We have so much more in common genetically than difference. Probably Chelsea evokes a kind of DNA solidarity; on a molecular level we are all Chelsea Manning.”

Of course, this artwork speaks directly to my ideas about machine vision. The masks are generated from data, both in the sense that DNA is algorithmically interpreted to create facial images, and in the sense that the masks are 3D printed. The computer-generated nature of these faces becomes clear if you view the masks really close up.

A close-up photograph of a single mask.

The faces are also computer-readable. See what my phone camera does when I try to take a photo of the masks? It sees them as human faces, human identities. It tries to set the focus on each of the faces, and once the photo is taken, it will try to match them up to the people it has already identified on my phone.

A screenshot of the iPhone camera interface showing hanging masks resembling human faces in an art gallery, and yellow squares around each mask indicating that the phone camera has recognised a human face.
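
The same kind of detection is easy to reproduce outside a phone. Here is a minimal sketch using OpenCV’s classic Haar-cascade face detector – an assumption for illustration, since Apple’s actual face detection models are proprietary and far more sophisticated – with a hypothetical filename standing in for a photo like the one above. The point is simply that the detector returns a box for anything matching its statistical model of a face, mask or not.

```python
import cv2

# The Haar cascade file ships with OpenCV itself.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("masks.jpg")                  # hypothetical photo of the hanging masks
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # the detector works on greyscale images
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:                     # one rectangle per detected "face"
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 255), 2)

cv2.imwrite("masks_detected.jpg", img)
```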

I found it unexpectedly moving to walk into the gallery and be faced by this crowd of masks, silently staring at me. It is an intriguing experience. Reading about the ambivalence of DNA forensics makes me all the more intrigued.

My ERC interview: the full story

By Jill

It seems more and more research funding is awarded in a two-step process, where applicants who make it to the second round are interviewed by the panel before the final decisions are made. I had never done this kind of interview before I went to Brussels last October, and was quite nervous. I must have done OK, because I was awarded the grant, and my ERC Consolidator project, Machine Vision in Everyday Life: Playful Interactions with Visual Technologies in Digital Art, Games, Narratives and Social Media, officially started on August 1! Hooray! 

I was on sabbatical at MIT last autumn, so had a long flight back to Boston after the interview, and I spent a few hours of the flight home writing out my recollections and thoughts about the interview. Only 40% of candidates who get to the interviews win a grant, so I figured there was a good chance I would need to apply again and do another interview, and I wanted to remember the details. If you have a similar interview coming up, maybe my experiences will be useful for you too. I made a Snapchat story about the trip as well, which I just uploaded to YouTube, if you’re curious as to what things actually looked like. Like most Snapchat stories, the really juicy parts aren’t in there, because you can’t film the important bits.

I stayed at Thon Brussels City Centre, on Berg-Hansen’s recommendation, which was great because it is just a few minutes’ walk from Covent Garden. The room was fine, and it’s very close to Brussel Nord, where the airport train stops, and even closer to the Rogier stop on the metro.

The ERC building, Covent Garden, is big and formal, but security was not as extreme as I had expected. I was a little worried they would have airport-style security and not let me bring in my coffee (which I bought at the Starbucks between the hotel and Covent Garden, and which I was very glad to have in the waiting room). There was no bag check or anything though. A man simply checked my passport and invitation letter and then walked me to a space with a couple of chairs and a sign saying it was the ERC candidate waiting area. Within a couple of minutes, a friendly man came over and said welcome, and walked me to a room with a few computers where I signed a list of names to confirm I was present, and then he helped me upload the presentation to their server. He gave me a card with the exact time of my interview (14:00, right at the start of the slot I had been given in advance, which was nice) and explained that I should go to the 24th floor. He also gave me a sheet with information about compensation for my travel costs. I took the lift upstairs, where there was a woman behind a desk in the corridor by the lift. I signed my name on another list and then she walked me to the candidate waiting room, which was a meeting room just like the one the interview was held in, with six men also waiting. Everyone was silent. More people came in as I waited, all men, and all wearing suits except one who wore a woollen sweater and a checkered shirt underneath. About half the men wore ties. There was wifi access, and the code was written on the whiteboard in the waiting room.

At 14:00 a woman came and called my name. She introduced herself as the coordinator of the panel, and said we would walk slowly down the corridor while she explained what would happen. She explained that my presentation would already be on the projector, that I would have a clicker and that I should check it worked at the start, and that she would hold up a blue folder after four minutes. An alarm would sound after five minutes, letting me know to finish my sentence but no more, and then a second alarm would sound “for the panel”, which I assume would mean I couldn’t speak more.

There were about 25 people or maybe even more in the room. The long table was full, and two or three people sat on chairs by the wall. There was a lectern at one end of the table where I was supposed to stand. There was a microphone fixed to the lectern, so I couldn’t move freely in front of the screen as I am used to doing, but it was fine for such a short speech. I did feel slightly odd standing behind a high lectern, as it was between my body and the audience, whereas when I speak I usually like not having anything between me and the audience. Also it seemed a little odd to be standing during the questions while everyone else sat – I had expected I would sit down after the presentation. But it wasn’t a big deal and I quickly forgot that once we started.

A man halfway down the table introduced himself as leading the interview, and said I could start. I don’t think anybody on the panel said their name, or if they did, I didn’t catch it. My talk went well, I said what I had planned to say, and finished before the first alarm went off. The man who said he was leading the interview asked a couple of questions, then passed the word to a slightly younger man who asked a couple of questions about the digital methods I was using – wouldn’t it take more time than I had planned for? And what did I hope to find, and why that particular method? A woman with an American accent asked how I would deal with the phenomenon I was studying being bound to change over the period I was studying it. A man at the end of the table asked a couple of questions about what was different between my research and what other researchers were doing – he mentioned three specific researchers, and I recognised one name (Sonia Livingstone), but didn’t quite hear the others (and perhaps wouldn’t have recognised them anyway?) I started by explaining that our methods were different, as they used primarily anthropological methods while I will use aesthetic analysis, but then he asked whether my topic wasn’t different as well, which of course it was, and I explained how. I’m not sure he quite got the answer he wanted, but I couldn’t figure out what exactly he did want. The coordinator said five minutes left, and the man who led the interview said they had a question from a remote viewer (I didn’t see a camera, but I suppose maybe there was one? Or perhaps it was only audio?), who wanted to know what methods the ethnographic part of the project would use – surveys, fieldwork or interviews? I was surprised by this question since it’s very clearly stated in the proposal that we’ll use fieldwork, interviews, observation. But I suppose they have a lot of proposals to read. When I had finished answering that question the coordinator said 20 seconds, and the leader said that I could add some final words if I wanted – which I hadn’t really expected, so I just said that I genuinely thought the research was important, and I hope to be able to do it, and then I thanked them for the opportunity to talk with them about it. The coordinator walked me back to the waiting room, which now had two women in it and about 12 men. I got my jacket, took the lift down, and left. I walked out of there full of adrenaline like YEAH! But then quickly just felt tired and hungry, so I bought a can of beer, a pizza, and binge-watched a Netflix show in my hotel room, and met a friend for dinner afterwards.

The day after, flying out, it feels rather anticlimactic. It was such a short interview with so much anxiety about it beforehand. And it really is quite strange presenting and talking with twenty or more anonymous people who don’t introduce themselves and give you little impression of whether they like what you say or not. It made me realise how much I like the informal chats you have with audience members after pretty much every other presentation in academia. I guess I’m happy though. I can’t think of any better answers than what I gave, and I was calm throughout the interview, so I guess it went well.

In retrospect I think I was well-prepared. Before the interview, I worried that I had not had much practice answering questions about my project. I was disappointed in Beacontech’s “mock interviews” because they only focused on the presentation and not at all on the content of the project or on what kind of questions the panel would ask and how to answer them – I would have liked to have had a real mock interview with a presentation AND people asking real questions based on the content of my proposal. I was very confident about the five minute presentation though, after first working with SpeakLab in Bergen on it and then getting the feedback from Beacontech. I also asked several colleagues in my field to read my proposal and tell me what kinds of questions they would ask if they were on the panel. Most of these questions didn’t match the questions I was actually asked, though. Really, the questions I was asked were much simpler than I had expected – I had already answered most of them in the proposal. So perhaps the focus from Beacontech was appropriate. In a way the emphasis on the presentation is a good way of calming candidates down by letting us have something concrete to make perfect.

The day before the interview I had this gut feeling that I didn’t want to practice the presentation and reread the proposal anymore. Finally I thought of ringing my sister, who is a professional musician, and asking what musicians do before an important audition. She gave me some great advice. I think for future candidates it would be really useful to have somebody who is an expert on preparing for a high-stakes performance, for instance music or sport or acting, to coach candidates on this sort of practical stuff. My sister said a musician would never practice the audition piece in the last 24 hours. They would do something completely different and relaxing. She would often eat carbs, like a runner before a race. Don’t eat fish just before as it makes you tired, or an apple, as it makes your mouth dry, she cautioned. Musicians need to warm up before an audition or concert, but they wouldn’t play their audition piece, at least not at tempo, because if they make a mistake just before the audition they’ll remember that and feel anxious, or if they play it wonderfully it can use up the energy so that they play it with less energy at the audition. They might instead practice one difficult technique, but slowly, or they might play something completely different. They might listen to music that inspires them while in the waiting room (but probably not the exact piece they will play). She also recommended visualising myself performing well, and especially visualising the feeling AFTER the interview when I was happy with how it had gone. And taking three deep breaths just before the interview to reduce adrenaline.

I tried to apply some of her techniques. The night before the interview, I ate cake and watched a favourite TV show instead of rehearsing more. I did go through the presentation once the morning before the interview, but only once. I went for a run. In the waiting room I watched YouTube videos of artists related to my proposal but not directly in it, who were talking about their art, which meant they were talking about similar ideas to mine but in a different mode. That was actually really inspiring. I took deep breaths. I didn’t reread my proposal or go through the presentation in the waiting room (as I saw many others doing), but I did repeat the first line of what I wanted to say for the last slide a few times, slowly, because I knew I often got that wrong. I figured that was similar to the musician practicing a single technique instead of the whole piece. I think the approach worked, for me at least, because I felt calm and confident when I walked into the interview room.

If you are heading to an interview: good luck! Prepare well, breathe deep. You’ll be OK. And if you’ve had an interview – was your experience similar to mine? Do you have different advice?

The god trick and the idea of infinite, technological vision

By Jill

When I was at the INDVIL workshop about data visualisation on Lesbos a couple of weeks ago, everybody kept citing Donna Haraway. “It’s the ‘god trick’ again,” they’d say, referring to Haraway’s 1988 paper on Situated Knowledges. In it, she uses vision as a metaphor for the way science has tended to imagine knowledge about the world.

The eyes have been used to signify a perverse capacity–honed to perfection in the history of science tied to militarism, capitalism, colonialism, and male supremacy–to distance the knowing subject from everybody and everything in the interests of unfettered power. (p. 581)

Haraway connects this to what I would call machine vision (“..satellite surveillance systems, home and office video display terminals, cameras for every purpose from filming the mucous membrane lining the gut cavity of a marine worm living in the vent gases on a fault between continental plates to mapping a planetary hemisphere elsewhere in the solar system..”) and states that these technologies don’t just pretend to be all-seeing, objective and complete, they also make this seem ordinary, part of everyday life:

Vision in this technological feast becomes unregulated gluttony; all seems not just mythically about the god trick of seeing everything from nowhere, but to have put the myth into ordinary practice. (p. 581)

Of course, “that view of infinite vision is an illusion, a god trick” (p. 582). But it’s not an illusion that we seem to have escaped since 1988. Google’s satellite maps, for instance, have that lovely feel of “seeing everything from nowhere.” I heard Rob Tovey present a fascinating paper about this at the Post Screen Festival in Lisbon a couple of years ago (“God’s Eye View: The Satellite Photography of Google“, 2016), noting not only the “god trick,” but also the mechanics of how these images are created from multiple photographs using specific projection techniques rather than others. A map is far from objective.
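
One small, concrete way to see how constructed these views are (my own illustration, not from Tovey’s paper) is the Web Mercator projection that tile-based satellite maps typically use: the ground distance covered by a single pixel depends on latitude, so the apparently neutral view from above is the result of a very specific mathematical choice about what to stretch and what to shrink.

```python
import math

def metres_per_pixel(latitude_deg, zoom):
    """Approximate ground resolution of a 256-pixel Web Mercator map tile."""
    earth_circumference = 40_075_016.686  # metres, at the equator
    return (earth_circumference * math.cos(math.radians(latitude_deg))
            / (256 * 2 ** zoom))

print(round(metres_per_pixel(0, 12), 1))    # at the equator:      ~38.2 m per pixel
print(round(metres_per_pixel(60, 12), 1))   # around Bergen (60°): ~19.1 m per pixel
```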

Section of a satellite map image of part of the city of Bergen, showing streets, trees, buildings, parked cars.

Haraway’s conclusion is that the only way of achieving any kind of objectivity in science, and for her this is a feminist point, though valid for all science, is by admitting that knowledge is partial and situated. Perhaps something like a 360 degree photosphere, taken by an individual like myself using Google’s Street View app on my phone, could be classified as an example of a visual position that is partial?

If you click through that screenshot to see the way Google displays my photo, you’ll see you can drag it around to see everything I saw in every direction.

Well, almost everything I saw. If you look down, you won’t see my feet.

A photograph of a puddle in a muddy path. A narrow google map below it shows where the photo was taken.

Google edited them out.

The knowing self is partial in all its guises, never finished, whole, simply there and original; it is always constructed and stitched together imperfectly, and therefore able to join with another, to see together without claiming to be another. Here is the promise of objectivity: a scientific knower seeks the subject position, not of identity, but of objectivity, that is, partial connection.

Those 360 images are certainly constructed and stitched together, and have a more specific standpoint or position, maybe even an implicit subject position from which you see. The glitches in the stitching together of the images remind us that they are imperfect, partial representations.

And yet the human is edited out.

Knowledge from the point of view of the unmarked is truly fantastic, distorted, and irrational. The only position from which objectivity could not possibly be practiced and honoured is the standpoint of the master, the Man, the One God, whose Eye produces, appropriates, and orders all difference. (..) Positioning is, therefore, the key practice in grounding knowledge organised around the imagery of vision. (p. 587)

I wonder whether today it is Google and technology, rather than the patriarchal male master, whose “Eye produces, appropriates, and orders all difference.”

And of course, as the scholars at our workshop about data visualisation pointed out, data visualisations are another way in which information is presented as objective, as seen from a disembodied, neutral viewpoint. The kind of viewpoint that doesn’t exist.

Above all, rational knowledge does not pretend to disengagement: to be from everywhere and so nowhere, to be free from interpretation, from being represented, to be fully self-contained or fully formalisable. (p. 590)

And of course, data visualisations tend to show the big picture. It’s nicely organised, you can see the patterns, and there are no “troubling details,” as Johanna Drucker puts it in Graphesis (2014).

Data visualisation showing flows of refugees from some countries to others.

That’s the god trick alright.

Should society be governed by algorithms? Two talks and seven books

By Jill

Algorithms, big data and machine learning matter more and more to our society, and are soon being used in every sector: in schools, the justice system, the police, the health service and more. We need more knowledge and public debate about this topic, and I have been glad to give two talks about it over the last month, one long and one short – and here you can watch the videos if you like!

Last Wednesday I gave a talk at Bergen Public Library to a full house, followed by one of the best discussions I have taken part in. Not just IT people, students and Facebook users, but also health workers, kindergarten teachers and psychiatrists talked about how algorithms are used in their professions, and what kinds of doubts and concerns they and their colleagues have. The talk was streamed and you can watch the whole thing here:

In March I was invited to give a 10-minute talk to 600 local politicians at Kommunalpolitisk toppmøte, a national summit for municipal politicians, which had digital exclusion as its theme. I argued that digital exclusion is about more than just access to the internet, and that we also have to think about how social groups and individuals can be excluded or discriminated against through algorithmic governance.

A number of good books on this topic have come out in the last few months – most of them from the US, where the development has come further than it has here. If you know of other books, especially Norwegian or European ones, I hope you will leave a tip in the comments!

Norwegian texts:

Bår Stenvik: Informasjonen (novel). Tiden, 2018

I am going to read this novel as soon as I have finished with Ada Palmer’s future societies: “Informasjonen is a love drama between a man, a woman and a computer program.”

The Norwegian Data Protection Authority’s report Hva vet de om deg? (What do they know about you?), 2018.

A report showing what four ordinary Norwegian businesses store about you as a customer.

American books:

Virginia Eubanks: Automating Inequality – How High-Tech Tools Profile, Police, and Punish the Poor. Macmillan, 2018.

This book retells three stories showing how automating the allocation of welfare services can go badly wrong. The argument is that algorithmic governance, as it has been used so far, reproduces inequality. Listen to a radio interview about the book or watch her present it herself.

Safiya Umoja Noble: Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press, 2018.

The first time Noble googled “black girls” to find activities for her ten-year-old, all she got was porn. The book starts with this example, but goes much further in showing how Google and other search engines have deep-seated problems with racism. Watch a short talk in which Noble presents her book.

Meredith Broussard: Artificial Unintelligence – How Computers Misunderstand the World. MIT Press, 2018.

Broussard is a software developer and journalist, and in this book she shows how technology definitely does not solve every problem.

Andrew Guthrie Ferguson: The Rise of Big Data Policing – Surveillance, Race, and the Future of Law Enforcement. NYU Press, 2017.

In Norway, the customs authority has ordered software that uses big data to predict who is likely to break the law. In Denmark, the police use “predictive policing”. Big data and algorithms could change police work in Norway too – and then it is important to understand what that will involve.

Cathy O’Neil: Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Penguin, 2016.

I went to a talk by O’Neil last year, and she is a tremendously good speaker. Among other things, you can watch a shorter TED talk she has given on the topic.

Best Guess for this Image: Brassiere (The sexist, commercialised gaze of image recognition algorithms)

By Jill

Did you know the iPhone will search your photos for brassieres and breasts, but not for shoulders, knees and toes? Or boxers and underpants either for that matter. “Brassiere” seems to be a codeword for cleavage and tits.

I discovered this not by suddenly wanting to see photos of bras, but because I did a reverse image search for a sketch of a pregnant woman’s belly selfie to see if the sketch had anonymised it sufficiently that Google wouldn’t find the original. Lo and behold, all the “related images” were of porn stars with huge tits and ads of busty women trying to sell bras, which surprised me, given that the original was of a woman with a singlet pulled up to show her pregnant belly. I would have expected person, woman, pregnant, selfie, belly, bare arms to show up as descriptions, but brassiere? Was that really the most salient feature of the image? Apparently so, to Google.

Usefully, one of the text hits for the image was to this article explaining with horror that Apple has “brassiere” as a search term for your photos. Well, clearly Google does too.

I promptly did a search in the photos on my iPhone, and was appalled to see a selfie I took one morning show up – I wasn’t wearing a bra, but a singlet, and the image is mostly of my face, neck and upper chest.

Seriously? I suppose you might think that the main point of that image was the triangle of my singlet that could have been a bra, but really?

The other images are a screenshot of some Kim Kardashian thing I saved for some reason I don’t remember, and fittingly enough, in the middle there, is a video of part of Erica Scourti’s excellent video poem, Body Scan, which is precisely about how commercial image recognition commodifies the human body. (Here is a paper I wrote about Body Scan)

The app Erica Scourti was using, CamFind, is in some ways more nuanced than the iPhone’s image recognition, which has no idea how to look for a human hand or a knee or a foot. That’s because those categories weren’t among the labels the system was trained to recognise.

Yeah. Somebody decided to program the systems to look for breasts and bras, but not for knees or toes or backs. I wonder why.
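
To be concrete about what “programmed to look for” means here: photo search of this kind is typically built on a classifier with a fixed label vocabulary, and your photos can only ever be tagged with words from that vocabulary. Here is a minimal sketch of that mechanism using an off-the-shelf ImageNet classifier in PyTorch – an assumption for illustration, not Apple’s or Google’s actual pipeline, and the photo paths are hypothetical.

```python
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()        # the resizing/normalisation the model expects
labels = weights.meta["categories"]      # the fixed vocabulary the model was trained on

def tag_photo(path, top_k=3):
    """Return the classifier's most likely labels for one photo."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(img).softmax(dim=1)[0]
    top = probs.topk(top_k)
    return [(labels[int(i)], float(p)) for p, i in zip(top.values, top.indices)]

# Hypothetical usage over a photo library: build an index from label to photos,
# so "searching for mittens" just means looking that word up in the index.
# tags = {path: tag_photo(path) for path in my_photo_paths}
```

Whatever is not in the label list simply cannot be found, no matter how many knees or shoulders your photos contain – and how well each label works depends entirely on the training photos behind it.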

In her book on racist search engine results, Safiya Umoja Noble argues that one reason why a Google Search for “black girls” a few years ago only gave you porn results was that Google prioritised sales and profit above all, and so prioritised the results people would pay for rather than, say, showing web sites that black ten-year-old girls might like to visit.

Presumably that’s why “brassiere” is a search term for my photos, too. Some people will pay to see photos of tits and some people want to sell bras. The fact that other people want to sell socks and mittens just isn’t as lucrative as bras and tits.

Actually, my iPhone can find photos of mittens. Or at least, it can find photos I took that it thinks show mittens. I guess they must have fed the machine learning algorithm more photos of breasts and brassieres than of mittens, because the mitten recognition is far less accurate.

Two feet. My theatrical daughter’s efforts at terrifying me by sending a photo of her hand gorily made-up in stage makeup. A picture of a kid in a snapchat filter drinking juice. None of those are mittens.

It’s entirely probable that the image recognition algorithms were trained on pornography and ads for bras. There’d be a precedent for it: the Lena image, which is the most commonly used test image for image compression algorithms, is a scan of a Playboy centrefold, so that naked shoulder actually leads to a naked body in a fake Wild West setting. (This image is one of the main cases discussed in Dylan Mulvin’s forthcoming book Proxies: The Cultural Work of Standing In.)

So why does this matter? It matters because these algorithms are organising our personal photos, memories we have captured of ourselves and of our loved ones. Those algorithms that create cute video compilations of my photos, showing the kids growing up over the years, or all the smiley photos from our family holiday – they are also scanning my private photos for breasts and cleavage.

I really don’t like that my phone thinks the best way to describe my selfie is “brassiere”. I hate that. Image recognition needs to do something more than simply replicate a commercialised version of the male gaze.

My project on machine vision will be funded by the ERC!

By Jill

Amazing news today: my ERC Consolidator project is going to be funded! This is huge news: it’s a €2 million grant that will allow me to build a research team to work for five years to understand how machine vision affects our everyday understanding of ourselves and our world.

Three images showing examples of machine vision: Vertov's kinoeye, a game that simulates surveillance, Spectacles for Snapchat.

Here is the short summary of what the project will do:

In the last decade, machine vision has become part of the everyday life of ordinary people. Smartphones have advanced image manipulation capabilities, social media use image recognition algorithms to sort and filter visual content, and games, narratives and art increasingly represent and use machine vision techniques such as facial recognition algorithms, eye-tracking and virtual reality.

The ubiquity of machine vision in ordinary peoples’ lives marks a qualitative shift where once theoretical questions are now immediately relevant to the lived experience of ordinary people.

MACHINE VISION will develop a theory of how everyday machine vision affects the way ordinary people understand themselves and their world through 1) analyses of digital art, games and narratives that use machine vision as theme or interface, and 2) ethnographic studies of users of consumer-grade machine vision apps in social media and personal communication. Three main research questions address 1) new kinds of agency and subjectivity; 2) visual data as malleable; 3) values and biases.

MACHINE VISION fills a research gap on the cultural, aesthetic and ethical effects of machine vision. Current research on machine vision is skewed, with extensive computer science research and rapid development and adaptation of new technologies. Cultural research primarily focuses on systemic issues (e.g. surveillance) and professional use (e.g. scientific imaging). Aesthetic theories (e.g. in cinema theory) are valuable but mostly address 20th century technologies. Analyses of current technologies are fragmented and lack a cohesive theory or model.

MACHINE VISION challenges existing research and develops new empirical analyses and a cohesive theory of everyday machine vision. This project is a needed leap in visual aesthetic research. MACHINE VISION will also impact technical R&D on machine vision, enabling the design of technologies that are ethical, just and democratic.

The project is planned to begin in the second half of 2018, and will run until the middle of 2023. I’ll obviously post more as I find out more! For now, here’s a very succinct overview of the project, or you can take a look at this five-page summary of the project, which was part of what I sent the ERC when I applied for the funding.

Hand signs on musical.ly = emoji for video

By Jill

You know how we add emoji to texts?  In a face-to-face conversation, we don’t communicate simply with words, we also use facial expressions, tone of voice, gestures and body language, and sometimes touch. Emojis are pictograms that let us express some of these things in a textual medium. I think that as social media are becoming more video-based, we’re going to be seeing new kinds of pictograms that do the same work as emoji do in text, but that will work for video.

I wrote a paper about this that was just published in Social Media + Society, an open access journal that has published some really fabulous papers in social media and internet studies. It’s called Hand Signs for Lip-syncing: The Emergence of a Gestural Language on Musical.ly as a Video-Based Equivalent to Emoji. As you might have guessed, it argues that the hand signs lip-syncers on musical.ly use are doing what emoji do for text – but in video.

Musical.ly is super popular with tweens and teens, but for those of you not in the know, here is an example of how the hand signs work on musical.ly.

Musical.ly has become a pretty diverse video-sharing app, but it started as a lip-syncing app, and lip-syncing is still a major part of musical.ly. You record 15 second videos of yourself singing to a tune that you picked from the app’s library. You can add filters and special effects, but you can’t add text or your own voice.

I think the fact that the modalities are limited – you can have video but no voice or text – leads to the development of a pictogram to make up for that limitation. That’s exactly what happened with text-based communication. Emoticons came early, and were standardised as emoji 🙂 after a while.

Hand signs on musical.ly are pretty well defined. Looking at the videos or the tutorials on YouTube, you’ll see that many of the signs are quite standard. They’re usually made with just one hand, since the camera is held in the other, and camera movements are often important too, but more as a dance beat than as a unit of meaning. Here are the hand signs used by one lip-syncer to perform a 15 second sample from the song “Too Good” by Drake and Rihanna. First, she performs the words “I’m way too good to you,” using individual signs for “too”, “good”, “to” and “you”.

drawings of a musical.ly user using hand signs

The next words are “You take my love for granted/I just don’t understand it.” This is harder to translate into signs word for word, so the lip-syncer interprets it in just three signs, pointing to indicate “you”, shaping her fingers into half of a heart for “love”, and pointing to her head for “understand”.

drawings of a musical.ly user using hand signs

Looking at a lot of tutorials on YouTube (I love Nigeria Blessings’ tutorial) and at a lot of individual lip-syncing videos, I came up with a very incomplete list of some common signs used on musical.ly:

In my paper I talk about how these hand signs are similar to the codified gestures used in early oratory and in theatre. This art of gesture is called chironomia, and there are 17th and 18th century books explaining it in detail. The drawings are fascinating:

I think it’s important to think of the hand signs as performance in the theatrical or musical sense, not in the more generalised sense that Goffman used as a metaphor, where all social interaction is “performative”. No, these are literal performances, interpretations of a script for an audience. That’s important, because without realising that, we might think the hand signs are just redundant. After all, they’re just repeating the same things that are said in the lyrics of the song, but using signs. When we think of the signs as part of a performance, though, we realise that they’re an interpretation, not simply a repetition. Each muser uses hand signs slightly differently.

And those hand signs aren’t easy. Just look at Baby Ariel, who is very popular on musical.ly, trying to teach her mother to  lip-sync. Or look at me in my Snapchat Research story trying to explain hand gestures on musical.ly just as I was starting to write the paper that was published this week:

The full paper, which is finally published after two rounds of Revise & Resubmit (it’s way better now), is open access, so it’s free for anyone to read.

Oh, and sweethearts, if you feel like tweeting a link to the paper, it ups my Altmetrics. That makes the paper more visible. How about we all tweet each other’s papers and we’ll all be famous?

I’m a visiting scholar at MIT this semester

By Jill

I’m on sabbatical from teaching at the University of Bergen this semester, and am spending the autumn here at MIT. Hooray!

It’s a dream opportunity to get to hang out with so many fascinating scholars. I’m at Comparative Media Studies/Writing, where William Uricchio has done work on algorithmic images that meshes beautifully with my machine vision project plans, and where a lot of the other research is also very relevant to my interests. I love being able to see old friends like Nick Montfort, and I look forward to making new friends and catching up with old conference buddies. And just looking at the various event calendars makes me dizzy to think of all the ideas I’ll get to learn about.

Nancy Baym and Tarleton Gillespie at Microsoft Research’s Social Media Collective have also invited me to attend their weekly meetings, and the couple of meetings I’ve been to so far have been really inspiring. On Tuesday I got to hear Ysabel Gerrard speaking about her summer project, where she used Tumblr, Pinterest and Instagram’s recommendation engines to find content about eating disorders that the platforms have ostensibly banned. You can’t search for eating disorder-related hashtags, but there are other ways to find that content, and once you look at it, the platforms offer you more, in quite jarring ways. Nancy tweeted this screenshot from one of Ysabel’s slides – “Ideas you might love” is maybe not the best introduction to the themes listed…

This way of thinking about how people work around censorship could clearly be applied to many other groups, both countercultures that we (and I know “we” is a slippery term) may want to protect and criminals we may want to stop. There are some ethical issues to work out here – but the methodology of using the platforms’ recommendation systems to find content is certainly powerful.

Yesterday I dropped by the 4S conference: Society for Social Studies of Science. It’s my first time at one of these conferences – it’s big, with lots of parallel sessions and lots of people. I could only attend one day, but it was great to get a taste of it. I snapchatted bits of the sessions I attended, if you’re interested.

Going abroad on a sabbatical means dealing with a lot of practical details, and we’ve spent a lot of time just getting things organised. We’re actually living in Providence, which is an hour’s train ride away. Scott is affiliated with Brown, and we thought Providence might be a more livable place to be. Just getting the kids registered for school was pretty complicated: they needed extra vaccinations, since Norway has a different schedule, and they had to take a language test. In the end they weren’t assigned to the school three blocks from our house, but will be bussed to a school across town. School doesn’t even start until September 5, so Scott and I are still taking turns spending time with the kids and doing work. We’re also trying to figure out how to organise child care for the late afternoon and early evening seminars and talks that seem to be standard in the US. Why does so little happen during normal work hours? Or, to be more precise, during the hours of the day when kids are in school? I’m very happy that Microsoft Research at least seems to schedule their meetings for the daytime, and a few events at MIT are during the day. I suppose evening events allow people who work elsewhere to attend, which is good, but it makes it hard for parents.

I’ll share more of my sabbatical experiences as I get more into the groove here. Do let me know if there are events or people around here that I should know about!

Visa approved

By Jill

I’m going to be spending next semester as a visiting scholar at MIT’s Department of Comparative Media Studies, and there are a lot of practical things to organize. We have rented a flat there, but still need to rent out our place at home (anyone need a place in Bergen from August to December?). I’ve done the paperwork for bringing Norwegian universal health insurance with us to the US, and still have a few other forms to fill out for taxes. I think we can’t do anything about the kids’ schools before we get there.

But today’s big task was going to the US embassy in Oslo to apply for a visa.

Stamp on my DS-2019

Notes of interest about visiting the US embassy:

  1. They’ll store your phone and other small items in a box at the gate, but no large items or laptops.
  2. There are no clocks on the walls of the waiting room. Rows of chairs face the counters where the embassy employees take your paperwork and then call you up for your interview.
  3. They only let you bring your paperwork with you, nothing else. It was a two-hour wait, and there is no reading material provided except some children’s books. So the room was full of silent people with no phones, staring into space. The lack of phones or newspapers did NOT make them speak to each other.
  4. I had luckily brought a printout of a paper that needs revising, and they seemed to think it was part of my paperwork, so they didn’t confiscate it. They wouldn’t let me bring my book or even my pencil. Luckily there was a pen chained to a dish at an unused counter, so I borrowed that, and after two hours of paper-based work I now have a wonderfully marked-up essay that I can hopefully fix in a jiffy once I have my computer again. I was the only person in the waiting room not staring into space.

I won the John Lovas Memorial Award for my Snapchat Research Stories!

By Jill

I am so excited: I won the John Lovas Memorial Award last night at the Computers and Writing Conference for my Snapchat Research Stories! The award is given by Kairos: A Journal of Rhetoric, Technology, and Pedagogy, the leading digitally-native journal for “scholarship that examines digital and multimodal composing practices, promoting work that enacts its scholarly argument through rhetorical and innovative uses of new media.” Here are the editors, Cheryl Ball and Doug Eyman, flanking my friend and former colleague Jan Rune Holmevik, who was at the conference and very kindly accepted the award for me:

John Lovas Award – congrats from Kairos

The award has been given to a long and impressive list of academic bloggers. This is the first year it has been opened up to other forms of social media knowledge sharing, and I am honored to be the first winner recognised for something other than blogging. Yay!

The John Lovas Award is sponsored by Kairos in recognition and remembrance of John Lovas’s contributions to the legitimation of academic knowledge-sharing using the emerging tools of Web publishing, from blogging, to newsletters, to social media. Each year the award underscores the valuable contributions that such knowledge-creation and community-building have made to the discipline by recognizing a person or project whose active, sustained engagement with topics in rhetoric, composition, or computers and writing using emerging communication tools best exemplifies John’s model of a public intellectual.

John Lovas was an influential early scholarly blogger, especially important within the fields of composition and rhetoric. I’ve been rereading some of his blog posts, and note that he experimented with visual argumentation in his blog, something that was quite unusual at the time: it was more complicated to get images off cameras and onto the web than it is now, and bandwidth was so limited that images had to be carefully compressed in a photo editor so they would load before viewers got bored. So I like to think that John Lovas would have appreciated the combination of visual and textual communication about research that I and other academics on Snapchat are exploring.

Here is an archive of some of my Snapchat Research Stories – they are better on Snapchat, so add me (I’m jilltxt) to see them live. Thank you so much for this recognition – I really wish I could have been at the conference.

 

Finding my old notebook from 1997

By Jill

I found an old notebook when I was tidying my desk today.

Notebook from 1997

It’s from 1997 and 1998, when I was working on my MA in comparative literature and writing about creative, non-fiction hypertext.

I read all the 1990s hypertext theory and took careful notes.


Thinking about what David Kolb wrote about scholarly hypertext and whether you can actually do philosophy in a non-linear format.


I worried about reading too much and not writing enough.


And noted that while Walter Ong was interesting, he didn’t mention the internet.


Then I got to go to my first conference! ACM Hypertext 1998 – it was amazing. My MA advisor, Espen Aarseth, paid for my flight and hotel out of a grant he had and gave me two tasks: hand out flyers for a conference he was organising, and go and introduce myself to Stuart Moulthrop and tell him hi from Espen.

I have very thorough notes from the conference. Very thorough.


I even took thorough notes from discussions in the panel on hypertext and time. I love that Markku Eskelinen asked “Where is Genette?” Of course he did.


I was so touched to see these traces of my younger self. So earnest. So diligent.


So honest.

 

Snapchat erases protests and diversity in its White Australia version of Australia Day

By Jill

Snapchat’s live stories usually present the world in a way that emphasises diversity, tolerance and respect for different races, religions and sexualities. But sometimes they fail miserably – like in the Live Story about yesterday’s Australia Day, which is now available globally.

Australia Day is celebrated on January 26, the day the First Fleet arrived in Australia from Britain, and there is a strong movement to #changethedate so that the day celebrates Australia, and not the European invasion of indigenous Australian land. That movement is so strong that yesterday 50,000 people marched in Melbourne, and thousands more marched in other cities all over Australia. Here is a photo of the rally in Melbourne yesterday:

invasion day 2017 Melbourne

Or take a look at The Guardian’s live blog to see a more diverse view of the day, including the formal celebrations and more.

Now look at how Snapchat presents it in its Live Story. I’ve taken one screenshot of each snap, but they’re all videos so imagine panning and sound. The story is still on Snapchat as of Jan 27, 09:53 am Central European Time.


The first seven snaps are all of young, white people partying or at the pool. The last three are of fireworks.

It’s a short Live Story – the Live Story from the Women’s March last weekend had 71 snaps, so was obviously of a different scope altogether. But what an unbelievably skewed version of Australia Day this shows. What a skewed and stereotypical version of the Australian population it shows. Especially seen in contrast to the coverage of the inauguration and the Women’s March last weekend, this is pretty astounding.

I’ve previously noted that the Norwegian national day as seen on Snapchat appears to be nothing but young people partying in national costumes, which is not how the day looks to me. No doubt most of Snapchat’s portrayals of national, “exotic” festivals (at least exotic to young Americans) leave out a lot, or present things in a skewed manner. But at least Norway doesn’t have 50,000 people protesting the day – protests that Snapchat somehow forgot to include in its story.

It looks as though Triple J may have sponsored this Live Story, based on the emphasis on their Hottest 100 in the first snaps. Triple J has been the radio channel for youth and music for decades, but their tradition of running a music countdown on what more and more people are calling Invasion Day rather than Australia Day may be ripe for change.

Another way in which Snapchat spreads very partial information about the politics of Australia Day is with their selfie filters and geofilters. I couldn’t access them in Norway, but the Live Story seems to have a couple of non-branded Australia Day geofilters, and some sponsored by Triple J – more evidence that Triple J sponsored the Live Story, or at least had significant influence on it. If that’s so, perhaps Snapchat’s US team, which seems to be pretty savvy about diversity in their own country, simply didn’t pay much attention. That would also explain why the narrative arc of the Live Story is pretty flat compared to many of the US Live Stories, which are more skilfully put together.

On Twitter, Elle Hunt shows us how politically biased the selfie filters are, too. This is what happens when advertisers control our means of production:

Screenshot of Elle Hunt’s tweet about the Australia Day selfie filters

Here are the full images of her snaps:

Elle Hunt’s snaps showing the Australia Day selfie filters

But hey. Most people on Twitter like it. They love stories about young, white people getting lit.


But if Snapchat aims to be a news channel, and to spread information about the public sphere, we need to know where they stand and, especially, who is paying for it. In their Terms of Service, they write that

Live, Local, and any other crowd-sourced Services are inherently public and chronicle matters of public interest (…)

If so, their financing and bias should be transparent to the viewers.
