The Algorithm as Tamagotchi

A person works with paper labels.

This past February, Digital Humanities for Social Engagement and the Digital Justice Lab hosted a workshop called “Algorithms as Pets and Politicians.” Part of a series of events organized by Alex Juhasz (Chair of Film, Brooklyn College) on media literacy in an age of digitally proliferating fake news, this workshop focused on how art practice can shed light on the political implications of the ways that machine learning systems view and comprehend the world. The workshop’s leaders were Alex, Orr Menirom (independent artist, NYC), and myself (postdoctoral fellow, Neukom Institute).

Orr screened an excerpt from her video work, “Clinton and Sanders Looking at the World and Naming Things for the First Time.” This piece interlaces footage from the 2016 debate between the two presidential candidates with decontextualized, fragmentary glimpses of multifarious objects and scenes, both quotidian (a pair of socks) and striking (a protest). Orr has edited audio from the two candidates to offer off-kilter descriptions of these things, giving the impression of an ill-trained machine learning system attempting but failing to label a material world whose objects are not nearly as straightforward as those in its “training set.” For instance, Bernie—or at least Bernie’s voice, made strangely robotic through Orr’s editing—seems to mistake a brick pinned beneath a door for a “human.” (To be fair, the holes in the brick do suggest a face.) Participants drew connections to the intertwined fallibility and creativity of computer vision systems; Google’s DeepDream, perhaps the most famous of these, amplifies the patterns a trained neural network responds to until it “sees” faces and animals in pictures where none exist. Orr’s work is not itself algorithmic, but the process of its creation can be seen as an attempt to see (or hallucinate) the world as if through the murky and error-prone layers of the neural networks that exert increasing power over our lives.

Next, Orr and I demoed a prototype of an interactive algorithmic text generation system called “The Speaking Egg.” Eschewing contemporary Machine Learning’s capacity to learn patterns from large, ready-made data sets, this system takes a decidedly more bespoke and laborious approach, one inspired by pet-like computational systems and interfaces that require care and attention to survive and thrive (e.g. the Tamagotchi, the Furby, or the virtual pets that scampered across desktops throughout most of the 1990s). To get this pet egg-bot to grow and to speak, one must manually provide example sentences. When the bot generates (simple and often nonsensical) sentences based on these inputs, the user must either praise or scold it, training the bot’s classification algorithm to help it produce more pleasing utterances. Workshop participants trained their algorithms to inhabit specific identities and ideological positions, exploring what it might mean to design algorithms that aspire to be intentionally political rather than “neutral” or “unbiased.” The feedback we received will be invaluable as Orr and I continue to develop this project.
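The feed-then-praise-or-scold loop described above can be sketched in a few lines. This is a hypothetical toy (the class name, bigram-chain generator, and multiplicative reweighting are my own illustrative choices, not the Speaking Egg’s actual implementation): example sentences build a chain of word transitions, generation samples from that chain, and praise or scolding reweights the transitions used.

```python
import random
from collections import defaultdict

class SpeakingEgg:
    """Toy sketch of a pet-like, feedback-trained sentence generator.
    Hypothetical illustration only, not the workshop's actual code."""

    def __init__(self, seed=None):
        self.bigrams = defaultdict(list)          # word -> observed next words
        self.weights = defaultdict(lambda: 1.0)   # (word, next) -> preference
        self.rng = random.Random(seed)

    def feed(self, sentence):
        """Manually provide an example sentence, as the workshop required."""
        words = sentence.split()
        self.bigrams["<s>"].append(words[0])
        for a, b in zip(words, words[1:]):
            self.bigrams[a].append(b)

    def speak(self, max_len=8):
        """Generate a (simple, often nonsensical) sentence from the chain."""
        word = self.rng.choice(self.bigrams["<s>"])
        out = [word]
        while word in self.bigrams and len(out) < max_len:
            options = self.bigrams[word]
            w = [self.weights[(word, nxt)] for nxt in options]
            word = self.rng.choices(options, weights=w)[0]
            out.append(word)
        return " ".join(out)

    def praise(self, sentence):
        """Make the transitions in this sentence more likely."""
        self._reweight(sentence, 1.5)

    def scold(self, sentence):
        """Make the transitions in this sentence less likely."""
        self._reweight(sentence, 0.5)

    def _reweight(self, sentence, factor):
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            self.weights[(a, b)] *= factor
```

In use, a participant would `feed` the egg a few sentences, call `speak`, and then `praise` or `scold` the result, nudging the generator toward a particular voice or ideological position over many rounds.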

The day concluded with impromptu artist talks by several participants. Aaron Karp (Digital Musics) presented an agent-based digital music system based on the flocking of birds, Christiana Rose (Digital Musics) showed videos of her interfaces for sonifying the movements of acrobatic artists, and Josh Urban Davis (Computer Science) reflected on using deep learning to generate obituaries of people who never lived. These presentations, along with the lively participation of faculty, staff, and students, are evidence of the robust, interdisciplinary interest on campus in both the art and politics of computation.

Kyle Booten
Postdoctoral Fellow
Neukom Institute for Computational Science
