Speculative Listening: Reading Audible Writing

By Megan Butchart (Undergraduate RA, SpokenWeb)

One of the richest audio genres within the SoundBox Collection is the set of class lectures recorded between the 1960s and 1980s by UBC English professor Warren Tallman. From the clicking of cigarette lighters, to the buzzing of campus-wide bells or buzzers signalling the end of class, to the scratching of chalk on blackboards, the sounds that permeate these recorded lectures are no longer typically heard in the contemporary Canadian university classroom. As I begin to transcribe these now-digitized recordings, such sounds have generated new research questions and invited team-wide discussions about the potential for AI projects to assist with data sets like these.

Because Tallman positioned the recording device at the front of the classroom, the sounds of chalk on the blackboard are clearly audible in each recording. Moreover, in many cases Tallman speaks the words as he writes them, making it possible to match the sounds to the visual shapes of the letters as they are formed. For example, in one recording you can hear Tallman write the word “eidetic” on the blackboard, with the recognizable sound of him crossing the “t”, followed by the dotting of the two “i”s. You can also hear Tallman writing out various poets’ names, dates, book and poem titles, and poetic vocabulary. But what about the written words that Tallman does not verbally identify? What if there were a way, through machine learning, to read those written words through the sounds of the chalk markings?

While Artificial Intelligence (AI) and Machine Learning (ML) in the literary audio archive are somewhat new territory for me, I have studied such compelling projects as Marit MacArthur’s exploration of pitch and timing to quantify “poet voice”, and Tanya Clement and Steve McLaughlin’s project HiPSTAS, which uses the audio analysis tool ARLO to identify and analyse patterns of applause in the PennSound poetry archive. I do not know how large a training data set would be required, or what threshold of verifiable written words would be necessary to accurately identify the sounds of writing. Even so, speculating about the possibilities of machine learning, and about what questions such a project would raise, has been an interesting exercise. Indeed, there are many practical variables to consider:

  • Is Tallman writing in printing, cursive, or an individual hybrid-style?
  • Is he writing in upper-case or lower-case letters? How large is his writing on the board?
  • In what order does he cross “t”s and dot “i”s?
  • Does he ever use abbreviations?
  • What chalk sounds are not letters or numbers, but punctuation, arrows, circles, sketches, or scansion?

Many of these questions can be answered within the context of the recorded lecture. Likewise, visual samples of Tallman’s handwriting preserved in letters and documents within the archives could be compared with the audible chalkboard writing. While such a project might ultimately prove unfeasible, a tool of this kind would nevertheless be very useful in building robust metadata for each of these recordings, allowing one to hear what information is being conveyed nonverbally.
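As a purely speculative sketch of how such a classifier might begin, the toy Python below distinguishes two kinds of chalk marks by simple acoustic features (mean energy and zero-crossing rate) using a nearest-centroid rule. Everything here is hypothetical: the “dot” and “stroke” signals are synthetic stand-ins, and a real pipeline would instead train on hand-labelled excerpts from the digitized tapes.

```python
# Hypothetical sketch: classifying short chalk-sound clips with a
# nearest-centroid rule over two toy acoustic features. The signals
# below are synthetic stand-ins, not real archival audio.
import math
import random

def features(clip):
    """Two simple features per clip: mean energy and zero-crossing rate."""
    energy = sum(x * x for x in clip) / len(clip)
    zcr = sum(1 for a, b in zip(clip, clip[1:]) if a * b < 0) / len(clip)
    return (energy, zcr)

def train(labelled_clips):
    """Average the feature vectors of each label to form its centroid."""
    sums, counts = {}, {}
    for label, clip in labelled_clips:
        e, z = features(clip)
        se, sz = sums.get(label, (0.0, 0.0))
        sums[label] = (se + e, sz + z)
        counts[label] = counts.get(label, 0) + 1
    return {lab: (se / counts[lab], sz / counts[lab])
            for lab, (se, sz) in sums.items()}

def classify(centroids, clip):
    """Assign a clip to the label with the nearest feature centroid."""
    f = features(clip)
    return min(centroids, key=lambda lab: math.dist(centroids[lab], f))

# Synthetic stand-ins: a "dot" as a short noisy burst, a "cross-stroke"
# as a longer, smoother scrape (purely illustrative signals).
random.seed(0)
def dot():    return [random.uniform(-1, 1) for _ in range(20)]
def stroke(): return [0.3 * math.sin(i / 3) for i in range(200)]

centroids = train([("dot", dot()) for _ in range(5)] +
                  [("stroke", stroke()) for _ in range(5)])
print(classify(centroids, dot()))     # → dot
print(classify(centroids, stroke()))  # → stroke
```

In practice the features would need to be far richer (spectral shape, duration, stroke rhythm), and the labels would come from passages where Tallman speaks the words as he writes them, which could serve as a naturally annotated training set.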