No2Pho [from Noise to Voice] – paper

author: AnneMarie Maes [so-on], title: No2Pho [from Noise to Voice], published in: x-med-a [experimental media arts] – ISBN 9081073311

No2Pho is an artistic research project investigating the behaviour of language in its many appearances: textual, sonic and visual, as well as gestural or body language. How do these disparate elements relate to each other, and how do they organize themselves within a system that includes human and computer as sender and receiver [and vice versa]?
As a generative sound installation, No2Pho plays with a connected set of elements. It is composed of dissonant synthetic voices that shift in real time from speech to sound. The multiple voices are spatialized in a virtual environment, and the compositional parameters are defined by the physical trajectories of the listeners on the installation site. The listeners' motion is tracked, and this data is fed into software in which the code itself creates the score. By rendering this score graphically, the installation visualizes the sounds and makes the speech visible.
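The paper does not specify how tracked motion is translated into the score. As an illustration only, the minimal sketch below assumes a square tracking area and a hypothetical mapping: a listener's distance from the centre sets how far a voice morphs from intelligible speech towards abstract sound, and the angle around the centre sets its azimuth for spatialization. All names, ranges and units are assumptions, not the installation's actual implementation.

```python
import math
import random

ROOM_SIZE = 10.0  # metres; assumed square tracking area (illustrative value)

def position_to_parameters(x, y):
    """Map one tracked listener position (x, y) to hypothetical score parameters."""
    cx, cy = ROOM_SIZE / 2, ROOM_SIZE / 2
    dx, dy = x - cx, y - cy
    distance = math.hypot(dx, dy)
    max_distance = math.hypot(cx, cy)
    # 0.0 = clear speech near the centre, 1.0 = abstract sound at the edges
    morph = min(distance / max_distance, 1.0)
    # Angle around the centre becomes the azimuth used for spatialization
    azimuth = math.degrees(math.atan2(dy, dx)) % 360
    return {"morph": round(morph, 2), "azimuth": round(azimuth, 1)}

if __name__ == "__main__":
    # Simulate a few tracked positions and print the resulting score events
    for _ in range(3):
        x, y = random.uniform(0, ROOM_SIZE), random.uniform(0, ROOM_SIZE)
        print((round(x, 1), round(y, 1)), "->", position_to_parameters(x, y))
```

In practice such parameters would be streamed continuously to the voice-synthesis and rendering layers; the sketch only shows the shape of a position-to-parameter mapping, not the installation's code.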

[image: notovo_ccnoa]
