Sound, of course, has always been vital to poetry. The sonic
elements of this language art-- accent and duration, syntax
and line, like and unlike sounds, blank verse and free verse--
have differentiated poetry from prose for thousands of years. (1)
Although the traditional medium of poetry is the human body,
the emergence of new acoustic technologies like the phonograph,
telephone, microphone, loudspeaker, radio, tape recorder,
and, more recently, digital audio and surround sound, has
altered the range, volume, reach, and distance of the human
voice, and has prompted new literary experiments that investigate
the qualities, characteristics, and material dimensions of
these sound-transmitting and recording technologies.
When magnetic tape cassettes and stereo tape recorders were
first mass-produced in the 1960s, new possibilities opened
up for cultural production and representation. As Katherine
Hayles points out, the phonograph produced objects that could
be consumed only in their manufactured form, whereas magnetic
tape allowed the consumer to be a producer as well. (2)
As the technology became more sophisticated, affordable, and
widely available, tape became a popular medium for electronic
artists and musicians to experiment with. These experiments
ranged from Stockhausen's 1950s tape pieces to the work of
minimalist composers like Steve Reich.
Early in his career, Reich composed two works for tape, "It's
Gonna Rain" (1965) and "Come Out" (1966), which introduced
the concept of "phasing," a process Reich developed in which
two tape loops begin by playing synchronously, then slowly
move out of phase with each other before coming back into
unison. The result is powerfully hypnotic; words and sentences
collapse into short phonemes, the building blocks of language
uttered as repetitive sounds that, over time, morph into new
configurations.
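The arithmetic behind phasing is simple enough to sketch in a
few lines of ActionScript, the Flash scripting language the
featured works in this issue use (and which is introduced later
in this essay). This is a rough illustration, not Reich's method,
and the loop durations are invented:

    // two copies of one loop, one running a fraction of a second
    // longer than the other, drift apart by that difference on
    // every pass
    loopA = 2.000; // hypothetical loop length in seconds
    loopB = 2.002;
    for (i = 1; i <= 5; i++) {
        trace("pass " + i + ": out of phase by " + (i * (loopB - loopA)) + " seconds");
    }
    // the two loops come back into unison after
    // loopA / (loopB - loopA) = 1000 passes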
In the liner notes, Reich describes the process he employed
and the background that inspired the work (3):
Composed in 1966, Come Out was originally part of a benefit
presented at Town Hall in New York City for the retrial,
with lawyers of their own choosing, of the six boys arrested
for murder during the Harlem riots of 1964. The voice is
that of Daniel Hamm, now acquitted and then 19, describing
a beating he took in Harlem's 28th Precinct Station. The
police were about to take the boys out to be "cleaned
up" and were only taking those that were visibly bleeding.
Since Hamm had no actual open bleeding he proceeded to squeeze
open a bruise on his leg so that he would be taken to the
hospital. 'I had to like open the bruise up and let some
of the bruise blood come out to show them.' Come Out is
composed of a single loop recorded on both channels. First
the loop is in unison with itself. As it begins to go out
of phase a slowly increasing reverberation is heard. This
gradually passes into a canon or round for two voices, then
four voices and finally eight.
A complex interplay is set up between the representational
codes of what is spoken or performed and the specificities
of the transmitting or recording technology.
In another tape experiment, Alvin Lucier's "I Am Sitting in
a Room" (1970), several sentences of recorded speech are played
back into a room where they are re-recorded multiple times.
Over the course of this process, which goes on for about fifteen
minutes, the resonant frequencies of the space act as a filter,
and Lucier's speech (and speech impediment) is transformed
into pure sound.
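The mechanism can be caricatured numerically: each pass through
the room multiplies every frequency by the room's gain at that
frequency, so favored frequencies survive while the rest vanish.
In this sketch the gain values are invented for illustration,
not measured from Lucier's room:

    // one frequency the room favors, one it absorbs
    resonant = 1.0; // gain of 0.95 per re-recording (hypothetical)
    damped = 1.0;   // gain of 0.60 per re-recording (hypothetical)
    for (pass = 1; pass <= 15; pass++) {
        resonant *= 0.95;
        damped *= 0.60;
    }
    trace("resonant: " + resonant); // about 0.46 -- still audible
    trace("damped: " + damped);     // about 0.0005 -- effectively gone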
I mention these early tape works by way of an introduction,
or perhaps, more accurately, as an inspiration for thinking
about sound and Web media. Like the tape recorder, new media
editing software is changing the dynamics of who is able to
produce interactive audio-video materials, and it too offers
a rich site to probe the relationship between technology
and sound.
While artists who worked with analog tape media could employ
techniques like cutting and splicing, looping, tape echo,
and direction and speed changes, digital media artists face
new challenges orchestrating and organizing the seemingly
endless possibilities that editing software makes available.
Unlike analog technology, digital technology can be perfectly
precise, giving rise to new practices and techniques that
were not formerly possible. One such moment of digital triumph
occurred when avant-garde musician and composer George Antheil's
masterwork, Ballet mécanique, was performed
for the first time in its original instrumentation 75 years
after it was composed. (4)
When Antheil wrote the music for Ballet mécanique
in 1924, his score called for three xylophones, four
bass drums, a tamtam (gong), two pianos, a siren, three airplane
propellers, seven electric bells, and 16 synchronized player
pianos (or pianolas, as they were then called).
Because it was impossible to synchronize the player pianos
perfectly, the work existed only as a conceptual piece. Antheil
produced other versions, but he never heard the original in
his lifetime. It wasn't until 1999 that William Holab and Paul
Lehrman hooked up 16 player pianos compatible with MIDI (Musical
Instrument Digital Interface, the standard computer protocol
for musical instruments) to a central sequencer, enabling all
of them to play in perfect synchronization. (5)
Of course, the artists exploring the new expressive possibilities
that audio-visual Web technologies make available are still
subject to limitations. Loops are easier to create than they
have ever been, but lengthy compositions remain a major challenge.
The mp3 file format greatly improved the audio quality of Web
sound, but the file sizes are still huge compared to those
of text and images.
In addition, adding interactivity to sound work often involves
importing the sound files into another software program like
Flash or Director. Subjecting the work to the rules of another
layer of software programming shapes the possibilities of
the final composition.
The three works featured in this Sound issue of PTG were
all created using Flash software. We might see this as a limitation
imposed by a proprietary technology (as some Flash naysayers
would point out), but it also allows us to see a structured
investigation of the software as a medium, and of the way these
works are expressed through the sound capabilities of Flash.
Sounds in Flash can be controlled by means of "attaching
sounds" to "Sound Objects" using ActionScript code. (6)
(An "Object" is a scripting concept that represents
a collection of data and methods for manipulating that data.
The Date Object, for example, stores different pieces of
information relating to time, as well as methods for getting
or setting values like the current time.)
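A minimal Flash 5-style snippet might look like the following;
the linkage identifier "voiceLoop" is a hypothetical name for
a sound exported from the movie's Library:

    // attach a Library sound to a new Sound Object, then loop and shape it
    mySound = new Sound();
    mySound.attachSound("voiceLoop"); // hypothetical linkage ID
    mySound.start(0, 99);  // start at 0 seconds, repeat 99 times
    mySound.setVolume(80); // volume runs from 0 to 100
    mySound.setPan(-50);   // pan runs from -100 (left) to 100 (right)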
This can be used to create work that allows viewers to manipulate
volume and pan controls by sliding a graphic on the screen,
a technique that Jason Nelson explores in Conversation,
which is organized around three subjects: injuries, robots,
and products. Each section comprises a series of volume and
pan sliders set against a loud graphical display, allowing
the user to shift the voices between the left and right channels,
and to turn the volume of the often humorous commentators up
and down; the commentators offer looped fragments of stories
relating to each of the three subjects.
As the user manipulates the slider controls, the voices blend
into a bubbling crowd of conversation, each commentary sewn
together so that it becomes difficult to find the beginning
or end of any story. The user becomes a DJ, selecting voices
to silence or spotlight in the construction of this "verbal
composition."
In soundpoem
2, Joerg Piringer uses the selective repetition of short
words, phonemes, and letter combinations to investigate the
relationship between words, sounds, and their absences. In
soundpoem 1,
Piringer applies a similar technique by associating repetitive
sounds with specific spaces within the screen. His polite
direction, "please drag the circles into the squares,"
stands in shocking contrast to the resulting cacophony that
is revealed to the user who follows it.
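The drag-and-drop mechanics themselves take only a few lines
of ActionScript. The instance names below (the square, the
sound) are hypothetical stand-ins, not Piringer's code:

    // attached to a draggable circle movie clip
    onClipEvent (mouseDown) {
        // start dragging only if the mouse is over this circle
        if (this.hitTest(_root._xmouse, _root._ymouse, true)) {
            this.startDrag();
        }
    }
    onClipEvent (mouseUp) {
        this.stopDrag();
        // a circle dropped on the square starts its looped sound
        if (this.hitTest(_root.square)) {
            _root.circleSound.start(0, 99);
        }
    }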
Finally, Neil Jenkins's generative poem-engine Orbital
plays with the ideas of space, location, correspondence, and
anonymity. Domain name servers exist to translate hard-to-remember
numerical IP addresses into the familiarity of words.
In this piece, a droning computer voice endlessly lists the
IP numbers of visitors to the project, while the generative
text engine runs on two databases-- one containing the
words to Dunlop's text, and another listing the logged IP
addresses of visitors to the engine. As Jenkins describes it,
the engine is programmed in Perl and Flash ActionScript to
count back through the IP address list and plot the next word
of the poem in three-dimensional space, using the first three
numbers of the IP address as its x, y, and z co-ordinates.
The fourth number in the IP address determines the next word
from the poem to be displayed. (7)
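Flash 5 has no native 3-D, so a sketch of the idea might fake
depth with scale. Everything below, from the sample address
to the clip and array names, is a hypothetical reconstruction
rather than Jenkins's actual Perl and ActionScript:

    // split a visitor's IP address into its four octets
    ip = "204.62.130.211"; // a made-up sample address
    parts = ip.split(".");
    // the first three octets position the word clip; depth is faked by scale
    word._x = Number(parts[0]);
    word._y = Number(parts[1]);
    word._xscale = word._yscale = 100 - Number(parts[2]) / 2.55;
    // the fourth octet picks which word of the poem appears next
    nextWord = poemWords[Number(parts[3]) % poemWords.length];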
Enjoy these works, and as always, feel free to add your comments
to the discussion board.

M. Sapnar
.....
(1) See Robert Pinsky's The Sounds of Poetry: A Brief Guide
(New York: Farrar, Straus and Giroux, 1998) for an introductory
text.
(2) Katherine Hayles, "Bodies out of Voices, Voices out of
Bodies: Audiotape and the Production of Subjectivity," in
Sound States: Innovative Poetics and Acoustical Technologies,
ed. Adalaide Morris (Chapel Hill: University of North Carolina
Press, 1997).
(3) Steve Reich, liner notes for the LP Music of Our Time:
New Sounds in Electronic Music (Columbia Odyssey, New York).
(4) For more on George Antheil's Ballet mécanique, visit the
site maintained by Paul Lehrman, www.antheil.org
(5) Paul Lehrman, "Blast
from the Past," WIRED, November 1998.
(6) For a basic review of how sound works in Flash, start with
"Working with Sound" and "Sound Objects: Controlling sound
in Flash 5"
(7) From the Rhizome
Database statement for Orbital.