In June 2018, I completed a one-week residency at CultureHub, where I attempted to prototype a space that would visually respond to the live reading of poetry and prose. Thanks to the teaching of Matt Romein and the help of Oren Shoham, I was able to get a basic version working in four days, which was then tested by an amazing assortment of poets, writers, and editors, including Saeed Jones, Tommy Pico, and Meghann Plunkett.
As work was read live, the words would appear on screen, and the color of the entire 40-foot space would change based on what was read (for example, reading "volcano" turned the space red). While aesthetically very basic, the project opens up new possibilities for how new media artists and writers could collaborate to perform literary work live.
We used Google Cloud's Speech-to-Text API, Python, Max/MSP, and the NRC Word-Emotion Association Lexicon, created by Saif Mohammad and Peter Turney at the National Research Council Canada. The code is available on GitHub here. More pictures and video are available here and here.
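The core word-to-color lookup can be sketched roughly like this. The lexicon format below mirrors the NRC Word-Emotion Association Lexicon's tab-separated "word, emotion, association flag" layout, but the sample entries, the emotion-to-color table, and the function names are all illustrative assumptions, not the project's actual code:

```python
# Minimal sketch: map a transcribed word to a color via a word-emotion lexicon.
# SAMPLE_LEXICON imitates the NRC lexicon's tab-separated format; the entries
# here are a tiny illustrative subset, not the real data.
SAMPLE_LEXICON = """\
volcano\tanger\t1
volcano\tfear\t1
ocean\tjoy\t0
ocean\ttrust\t1
"""

# Hypothetical emotion -> RGB mapping for the projected space.
EMOTION_COLORS = {
    "anger": (255, 0, 0),
    "fear": (128, 0, 128),
    "joy": (255, 220, 0),
    "trust": (0, 128, 255),
}

def load_lexicon(text):
    """Parse tab-separated lexicon lines into {word: set of emotions}."""
    lexicon = {}
    for line in text.strip().splitlines():
        word, emotion, flag = line.split("\t")
        if flag == "1":  # only keep emotions actually associated with the word
            lexicon.setdefault(word, set()).add(emotion)
    return lexicon

def color_for_word(word, lexicon):
    """Return an RGB color for the first matching emotion, or None."""
    for emotion in sorted(lexicon.get(word, ())):
        if emotion in EMOTION_COLORS:
            return EMOTION_COLORS[emotion]
    return None  # neutral word: leave the space's color unchanged

lexicon = load_lexicon(SAMPLE_LEXICON)
print(color_for_word("volcano", lexicon))
```

In the live setup, a lookup like this would sit between the speech-to-text transcript and whatever drives the lights or projection (Max/MSP in our case), with transcribed words streamed in one at a time.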