Articles, Essays, & Abstracts

Essays


Reflections on the Twentieth Anniversary of Computer Music Journal
Carla Scaletti

The Last 20 Years

At Symbolic Sound, we have a copy of every issue of Computer Music Journal from the very first (thanks to Kurt Hebel, who started subscribing when he was still in high school), and, as far as I can tell, CMJ's first issue came out in 1977. I suppose this is the twentieth anniversary year for the same reason that we call this the twentieth century (for the next 5 years, anyway...).

I didn't discover CMJ until 1978, but it had an immediate and profound impact; through CMJ I discovered, for the first time, that I wasn't alone, that there were others who had similarly hybridized interests, and that they took this kind of hybrid work seriously enough to give it a name and publish a journal about it. CMJ gave a name to what I had been trying to do all along, and the name was "computer music". It didn't matter to me that much of that first issue was beyond my technical grasp at the time; in fact, I took it as a challenge that I would do whatever it took to be able to read and understand everything in there. Thus, it was because of CMJ that I took my first programming class, my first logic design course, and found my way to the University of Illinois.

I remember working on my first FORTRAN program and having a friend of mine glance at what I was doing and tell me I was wasting my time learning to program; her father had told her not to bother because in one, maybe two, years, all computers would be programmed using spoken, natural language. Alice, wherever you are, I hope you are not still waiting.

In 1979 at the University of Illinois, studying computer music meant punched cards, Music 4BF on Cyber mainframes, 9-track tapes that held 3 minutes of sound at 20 kHz, and once-a-week conversions at the Astronomy building in the evening with a computer operator who wore orange ear protectors as an aesthetic statement on our music.

Despite the "primitive "conditions, those were exciting times. Computer music was still something unusual and only people with special characteristics (primarily blind stubbornness and mule-like perseverance) could stand to do it. In those days, there were still some computer scientists and engineers interested in working with composers on computer music research which, at that time, was still synonymous with experimental music. MIDI changed all that. The engineers and computer scientists (with a few, highly cherished, exceptions) lost their tolerance for the composers. Why should they put up with experimental music any more when they could afford their own computers, and their own synthesizers and program them to play all the Bach they desired? Some of the professors who had previously been heavily involved in developing innovative software and hardware allowed themselves to be lulled into thinking that they no longer had to get their hands dirty (with solder or line-printer ink) because the commercial music industry would take care of all that low-level stuff for them--kind of like Alice waiting for the natural-speech interface to computers. The impact on computer music research was to push it outside the university and into private industry. There are still a few institutions where first-class computer music research has been continuously supported, but it can no longer be said that academia is the only environment where one can do innovative and original work in computer music. One has simply to look at the number of early ICMC delegates and see how many of them now attend only the AES conferences.

That first taste of computer music left me feeling as though it were, in many ways, a step backwards from the analog electronic studio. I couldn't understand why the software synthesis languages were based on a paradigm of "instruments" playing music notation when, in the analog studio, I could manipulate sound objects in an immediate, concrete, almost cinematic way, a way that convinced me that tape music bore the same relationship to instrumental music as film bore to live theater, namely that it was a completely new art form. Despite these misgivings about the software synthesis languages, the structural and algorithmic capabilities of the computer had me hooked, so there was no turning back to the analog studio.

I thank Scott Wyatt, John Melby, and James Beauchamp for introducing me to the traditional electronic music studio and the Music N languages in their courses at the U of I. I was also inspired by some of the research projects going on at that time: Sal Martirano's fanciful and original SALMAR Construction, Herbert Brün's beautiful, almost-algebraic SAWDUST language, and, most of all, the chaotic and creative brilliance of the CERL Sound Group: at that time a loosely organized, unofficial collection of undergraduate students housed (with their various small pets) in an old World War II radar research lab badly in need of paint and other amenities like running water, but never lacking in excitement, sincere curiosity, intensity, and sheer creative quirkiness.

In 1983, my composition for harp and Music 360-generated tape was accepted at the Rochester ICMC. A group of us managed to get enough money together and pack enough people into a hotel room to be able to attend our first ICMC. And the 1983 conference was an exciting one: MIDI was just taking hold, IRCAM presented an object-oriented music language called FORMES, and Andy Moorer's group was developing an audio signal processing computer, the ASP, for LucasFilm (though, as you can see from this photo, Moorer and Strawn remained completely unaffected by their affiliation with Hollywood).
[Photo by Kelly Nichol]

At the Ohio ICMC in 1989, I did "Trinity", a piece for performer and live signal processing. That year, all of the tape pieces and odd little experimental pieces like mine were segregated on afternoon concerts. At night, they brought in the big guns, the mainstream: full symphony orchestras, cellists in glittering evening gowns, audience members in furs, almost as a celebration of computer music's "legitimacy". It's not that I think we should strive to be thought of as illegitimate; it's just that I think we need to be a little wary of getting too legitimate. By definition, new ideas are not yet legitimate (because they haven't had the benefit of years of repetition). While it is appealing to imagine being accepted by colleagues and understood by friends and family, we don't have to achieve this at the cost of rejecting new ideas and new technologies (or even just segregating them on the "less important" afternoon concerts or in "listening rooms").

I suppose that one definition of "success" is the extent to which a technology becomes invisible. Over the course of the last twenty years, many of the technologies of "computer music" have gone from being research projects accessible only to those with sufficient knowledge and access to one-of-a-kind equipment to affordable, ubiquitous, and taken-for-granted tools available in every home studio. What was once called "computer music" composition, practiced only by hacker/musician hybrids possessed of incredible patience and an almost unhealthy persistence, is now practiced by all composers, regardless of technical affinity. Many of the techniques that, twenty years ago, were called "computer music" are now simply "music". Sequencers, patch librarians, and hard-disk multi-track recorder/editors are responsible for making the computer the central fixture in nearly every commercial recording studio; yet, 20 years ago, something like a multi-track editor would have been called a tool for digital musique concrète. Software synthesis, once the exclusive domain of computer musicians, is the next logical step in the abstraction of the entire commercial recording studio into software.

So now that computer music has been proven "successful" and "commercially viable", is it time to shut down the CMJ and stop getting together at ICMC? Fortunately or unfortunately, no. Because, by definition, our job is to be "pre-successful". Our job is to stay at the front and take the heavy losses. Our job is to be mutants so that the process of evolution has a large enough gene pool to work with.

Opportunities for the Near and Medium Term

CMJ and the ICMC Proceedings

CMJ and the ICMC face two challenges in the immediate future. As the repository for much of the pioneering work in this field, they are saddled with the responsibility of making certain that these pioneering results are not lost on the "mainstream" engineering and computer science communities, some of whom are (needlessly) rediscovering these early results on their own and even filing software patents years after identical results have been reported to the computer music community. The contents of CMJ can be accessed through several engineering and scientific databases, but I am not sure the same is true for the ICMC proceedings. Perhaps the CMJ editors and the ICMA board could cooperate by cross-listing each other's contents or by automatically reviewing ICMC papers for possible revision and inclusion in the CMJ? Perhaps CMJ or Computer Music Research or some other journal could become the official journal of the ICMA, adding to the visibility and the chances for the authors' archival immortality?

Occasionally, I hear people complain that CMJ and the ICMC paper sessions are too technical and esoteric. CMJ and the ICMC must resist the temptation to become easy reading. If anything, they should hold themselves to even higher technical standards, both in content and in fundamental scientific standards of ethics (such as ensuring that authors acknowledge and cite previous work and that papers are reviewed by qualified peers). Readers who are truly interested will rise to the challenge of understanding technical papers (at least those papers that truly are technical and not purposefully obfuscatory in order to distract the reader from the author's own lack of understanding). There is nothing wrong with including tutorials on occasion, but I can't help remembering that it was, in part, because I could not understand some of the articles in the first CMJ that I was motivated to start reading and studying and taking courses on computers in the first place.

Education

In popular dance music, there is a curious retrograde tendency among synthesists who have grown bored with the push-button presets so conveniently provided for them by the synthesizer industry. They have rediscovered analog synthesizers in all their inaccurate, un-presettable, and fascinating glory. As a result, a whole new generation is being weaned on the same machines as we were. And the more reflective among them are facing the same questions that we first faced in computer music 20 or more years ago. There is a new generation of musicians who compose sound, not just notes. Could it be that the knowledge and expertise of all the sonologists and computer musicians might suddenly be in greater demand? That your courses will be filled with eager and expert sound designers seeking over-arching theories and consistent abstractions that will help them tie it all together? I am in email correspondence with high school students in Chicago industrial bands who immediately understand what Kyma is all about, because they are sound designers, not button pushers. I expect that we will see a renaissance in computer music and sonology as this new generation of sound designers discovers that digital audio doesn't have to mean presets and keyboards.

The Myth of Native Signal Processing

"Any day now, host processors will be fast enough to do real-time sound generation and processing, obviating the need for special hardware dedicated to sound." We have all heard variants on this platitude from any number of sources (most of whom are merely repeating it because they heard it from someone else). Of course the host processors are going to be fast enough to do real-time sound generation and processing; in fact, they are already fast enough. But there are two problems with the assertion that we should drop any dedicated sound hardware. The first has to do with the (necessary and desirable) greed of sound professionals. Let's face it, as the potential available processing and memory resources go up, so do our expectations and demands. When every PC owner in the world is doing 4 simultaneous clouds of granular synthesis on their Septium processors, the computer musicians are going to want 40 simultaneous clouds plus some as-yet-uninvented -but-extremely-cycle-intensive new synthesis algorithm. The second "problem" (albeit a problem that we all welcome) is that the programmers writing operating systems and graphics systems are also going to have raised expectations and they are going to steal as many cycles as they can away from us.

Let's just accept this fact and demand subsystems dedicated to sound. Professional computer graphics programmers have no qualms about demanding dedicated graphics subsystems for graphics processing. As professional sound programmers, we should demand our own subsystems dedicated to the production and processing of sound. Native signal processing will always be for the amateurs (even though what the amateurs of the future will have on their desktops would fill the racks of a professional's sound studio of today).

The Internet

Like most CMJ readers, I have been reading newsgroups and sending email as a matter of course since the 1980s. So, in some ways, the hype surrounding the Internet is perplexing. When someone tells me excitedly about all the inter-terminal games they can play and how many newsgroups are available and how much they love email, I find myself thinking, "You'll see... In a few months you will be dreading the stacks of email demanding your attention every time you sign on; you'll notice that your gaming habits fit the classic symptoms of addiction; your CUSeeMe reflector site will be as boring as a 70s singles bar; and you will stand up after a web-surfing session feeling just as bleary-eyed and wasted as if you had been watching TV."

At the same time, I find it almost comforting to know that the Internet (and the World Wide Web) provide a form of communication that is, as yet, decentralized and somewhat chaotic. It really is something different from the telephone (point-to-point) and the radio or TV (broadcast). And, while traditional media industries seem to be consolidating (the equipment manufacturer, the content provider, the distributor, the broadcast network, the artists, and the producers are all part of the same company, and the highest-selling CDs all seem to be associated with films, which are associated with lunchboxes, fast-food drink cups, and television shows), on the Web it is still possible to be idiosyncratically individual and unconsolidated. People unabashedly put up images of themselves, their words, their music for the whole world to see. Of course it is mostly noise, but there is also a comforting sense that there is very little centralized control. I have an uneasy feeling that all this must be about to end.

New Art Forms

New technologies and new media give artists new ways to interact with their audiences. Live interactive performance won't replace "tape music", and interactive CD-ROMs won't replace the concert hall. But they do allow for different forms of interaction, some of which are more personal and less "authoritative" than the performer-on-stage, audience-sitting-quietly-in-the-dark model. Sound artists of the future will have a greater variety of personalities and talents than the composers of the past. The artist who creates entire virtual worlds and distributes them on CD-ROM to be explored on an individual basis has to have a different temperament and set of skills than the artist who conducted his own symphonies, the composer/performer on a stage with a stack of amplifiers, or, for that matter, the organizer of an all-night house party.

A friend of mine once remarked that a good composer can make music out of pots and pans. Good (i.e. interesting) and bad (i.e. boring) music can be written in any medium and in any style. We should worry less about the choice of media and style and more about whether a piece stimulates us in any way or gives us new ideas.

Reasserting Computer Music

I was tempted to start out this little essay with, "Computer Music is Dead". Many people would agree that there is no more computer music per se. It has become some kind of integrated digital media; it has infused into the mainstream of computer games, CD technology, multitrack editors, synthesizers, samplers, and so on.

But in fact I do not believe that computer music is dead. I just think we have to redefine ourselves. I once received email (from the most mainstream person I know) joking with me and saying, "Don't get too mainstream." At first I was puzzled. Don't all the "normal" musicians wish we would just shut up and be more mainstream? Aren't they annoyed by our weird experimental noisy sounds? But then I realized that the mainstream needs us to be out there. They need us to be out at the edge taking lots of chances and making lots of mistakes and being serious and having high standards without being "professional" (if "professional" reduces to knowing very well how to repeat what you did well last time). They need us because we are their entertainment, their gladiators, but also because they know from past experience that our best ideas will survive and eventually seep into the affordable mainstream.

So computer music is not dead. We just have to reassert our experimental identity.

Remember Alice? My friend who let her father talk her out of learning to program? Remember how academia got complacent about developing new software and hardware in the 80s? Don't let anyone talk you out of taking control over your tools and your instruments. Don't wait for the "experts" to hand you the tools you need; take an active hand in co-developing them or, if necessary, do whatever it takes to make them yourself.

Finally, I hope that, over the next 20 years, there is still a CMJ or equivalent for the next generation to discover, in 1996 and beyond, that this is what they were born to do. Whether we realize it or not, the things we create now on our computers and in our studios, and what we write about in CMJ and talk about at the ICMC and in our newsgroups and mailing lists and websites, are having an impact on the people who read them. For that alone, it would be worth our effort to keep all of this going for the next 20 years.



Abstracts
Agostino Di Scipio
Microcomposizione in Ambiente Interattivo (Microcomposition in an Interactive Environment)
28 June 1995, 4:00 p.m., CSC, Università di Padova

Abstract
The talk is organized in two parts: 1) an introduction to the technical and operational characteristics of the KYMA 4.0/Capybara33 system; 2) an illustration of some aspects of the author's work with this system. It will assess the role of interactive processes through which the temporal microstructure of sound can, thanks to the flexibility of the system, be treated as a dimension of immediate creative and design relevance.

Regarding the first point, emphasis will be placed not only on technological aspects but also on the relevance of the software architecture and the user interface to topics in composition theory and cognitive musicology.

Regarding the second, emphasis will be placed on processing (of sound and of control structures) and on the dynamic, non-deterministic control that the performer can exert over these processes.

In particular, the discussion will cover real-time synthesis and processing algorithms used in recent compositions by the author, among them recursive sound-granulation processes, synthesis by "functional iterations" (a non-standard technique under study at the Laboratorio Musica e Sonologia, L'Aquila), and other microcomposition processes dating back to the early years of electroacoustic music (e.g., "Scambi" by H. Pousseur).