What is Language?
We’re back with another blog post—so, what have we been up to? A lot goes on behind the scenes at the Buchla Archives: restoring instruments, interviewing musicians, scanning documents, digging through library archives, taking lots of notes, and—of course—lots of writing.
I recently sat down to write a cursory look at a little-understood instrument: the Thunder. Like many of Donald Buchla’s designs, Thunder is quite deep…and as such, my writing quickly delved into finer details than I had initially anticipated—including some background information about the Buchla 400, yet another device we’ve recently been examining in depth.
Because we’re also actively preparing some more detailed documents about the 400 and its inner workings—and because there is so much conceptual overlap between the 400, Thunder, and many other Buchla designs—we’re instead dedicating this article to a brief discussion of a common, critical concept that they all embody: the concept of “language.” This foundational discussion is important not only for understanding Thunder and the 400, but for several other instruments as well.
So, this post is a free-form, basic introduction to some of Buchla’s most important control concepts. We’ll use it to establish some background relevant to his work as a whole, especially regarding the instruments he developed from the mid/late 1970s onward. Simultaneously, we’ll treat this post as a way of setting the stage for the story of Thunder itself, which will be discussed in greater detail in subsequent posts.
Language in Electronic Musical Instruments
When thinking of Donald Buchla’s work, it is easy for novel, recurrent aspects of his instruments’ audio structures to come to mind: an embrace of audio-rate modulation, continuously variable waveshaping, frequency-domain gating, etc. But perhaps even more important than these characteristic sonic structures is Buchla’s open-ended, experimental, exploratory approach to human-machine interaction—which, by his own account, he valued above the pursuit of new sounds for their own sake.
In a 1984 interview with Polyphony Magazine, for instance, he stated: "I am not that involved with the intricacies of sound as some. I pursue the investigations of timbre, but I'm more concerned with the investigation of musical structure." Later in the same interview, he stated that the most exciting possibilities of electronic instruments lie in "the instantaneous remapping of the relationship between input gesture and output response."
That is to say that Buchla's instruments were not solely concerned with the creation of new timbres, but with the facilitation of scenarios in which the nature of an instrument's response to the user could be defined arbitrarily, adapting to that specific user's immediate musical needs. Moreover, the nature of the instrument's response could itself change over time—even dynamically, during the course of a performance. Buchla considered this possibility to be fertile territory for practice-led artistic investigation, and all of his instruments were designed to enable such investigation.
The aforementioned Polyphony Magazine interview was published during the development of the 400, one of Buchla's most elaborate, compact, and self-sufficient instruments up until that time. While not the first of his instruments to explore arbitrary control mappings or the concept of a computer-mediated “language,” it is an interesting case study to help us unfold some of his overarching design concepts.
As with the Buchla 502 (c. 1975) and Touché (c. 1980), the Buchla 400 (c. 1982) replaced the modularity of the 100 and 200 Series instruments with a self-contained complement of sonic resources. In the case of the 400, this included a six-voice polyphonic sound generator, a built-in performance interface, and an integrated computer. There's much to say about the 400, of course, and we'll follow up soon with some more in-depth posts describing its concept and functionality.
For the purposes of this discussion, though, we should further unpack some aspects of its approach to user interaction. The 400—like the earlier 300 and the later 700—was a computer-centric instrument that included an array of capacitive, touch-sensitive “keys.” Additionally, as with those other instruments, the 400 used special-purpose “languages” to define the relationship between its playing interface and its sound-generating electronics. In the Polyphony Magazine interview mentioned above, Buchla states that electronic instruments (unlike most other musical instruments) employ a tripartite conceptual structure:
I like to regard an instrument as consisting of three major parts: an input structure that we contact physically, an output structure that generates the sound, and a connection between the two. The electronic family of instruments offers us the limitation, if we approach it traditionally, and the freedom if we approach it in a new way, of total independence between input and output. And in fact the necessity of some way of generating a connection between the two. Language becomes an important aspect in the electronic family of instruments, where it had played no part with all traditional acoustic instruments.
So, what does this mean? To Buchla, “language” is the logical structure/process that enables a user to make meaningful connections between the input structure (playing interface) and the output/generative structure (sound generation). Many aspects of any given instrument take part in defining its specific language: the hardware user interface, the sonic variables surfaced for the user, the parameterization of these variables, etc. Indeed, in a sense, language is an aspect of all electronic instruments. When an electronic instrument designer creates a new instrument, they decide how its generative structure works; they decide how a user should interact with the instrument; and ultimately, they decide (according to their own design ideals) how these two parts of the instrument are related to one another. That intermediate layer—that connective structure between the performer and the sound generation—is, to Buchla, the most important element in defining an instrument’s language.
When designing new instruments, Buchla treated this connective layer as a primary focus, often intentionally devising methods by which the end user could modify this connective structure themselves. Many of his instruments, such as the 400, operated in part on the basic assumption that this type of structural exploration could be a central concern of the end user’s artistic intention.
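To make the idea a bit more concrete, here is a minimal, purely hypothetical sketch in Python of the tripartite model described above. None of the names or numbers come from Buchla’s actual software; they are invented stand-ins for the three parts he describes. The point is simply that the connection between the input structure and the output structure is itself a piece of data that can be redefined at any moment, even mid-performance.

```python
from typing import Callable

# Output structure: a stand-in for the sound-generating electronics.
def play_voice(pitch: float, amplitude: float) -> None:
    print(f"voice: pitch={pitch:.1f} Hz, amplitude={amplitude:.2f}")

# A "language" is just a mapping from an input gesture to some action on the
# output structure. Here a gesture is reduced to (key_number, pressure).
Language = Callable[[int, float], None]

def keyboard_language(key: int, pressure: float) -> None:
    # Conventional mapping: each key selects a fixed pitch; pressure sets level.
    play_voice(pitch=261.63 * 2 ** (key / 12), amplitude=pressure)

def inverted_language(key: int, pressure: float) -> None:
    # An arbitrary remapping: higher keys play lower pitches, at a fixed level.
    play_voice(pitch=1046.5 / 2 ** (key / 12), amplitude=0.8)

# The connection is a variable, not a fixed wire: it can be reassigned freely.
current_language: Language = keyboard_language

def touch(key: int, pressure: float) -> None:
    # Input structure: a key press is handed to whatever language is active.
    current_language(key, pressure)

touch(12, 0.9)                       # behaves like a conventional keyboard
current_language = inverted_language
touch(12, 0.9)                       # same gesture, entirely different response
```

In this toy model, the "instrument designer" has already fixed the input and output structures; what the user redefines is only the function sitting between them, which is roughly the role Buchla assigns to language.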
In instruments like the 400, the “language” of interconnection manifests in part as specific software (complete with a graphical user interface on a color display) designed to facilitate particular types of logical linkages between the input and output structures. For instance, some of the software available for the 400 allowed for the use of the embedded touch interface to enter data into a musical score; other software permitted split/layered keyboard assignment; etc.
Perhaps more interestingly, though, all of Buchla’s “computerized” instruments—the 400 included—offered software which permitted the user, if they desired, to define distinct, independent behaviors for each individual key/user input. Think of it this way: in most electronic instruments of the time, pressing an individual key would trigger a single event (a note, sample, etc.). But Buchla tried to find a way for this simple, intuitive type of interaction—that is, pressing a key—to do many different types of things…or even to do many things simultaneously. His instruments’ software languages enabled the user to define how the translation of interaction to response would unfold.
So, for instance, the 400’s keyboard did not have a hardwired, predetermined connection to its sound-generating electronics; instead, the keyboard was connected to a computer which could interpret, manipulate, or otherwise obfuscate gestural data before routing it to the instrument’s audio structure. And, while it was certainly possible to use keys to activate individual, specific, repeatable notes/pitches—and while particular software “languages” were designed to facilitate this specific goal—this was only one of many possibilities.
What does this mean in practice? Imagine a more familiar instrument like the Sequential Circuits Prophet-5: it assumes a specific, fairly direct relationship between the instrument’s input structure (a black and white mechanical keyboard) and its generative structure (an analog, polyphonic synth engine). When you play a key, the result is much the same each time: a single event is produced instantaneously, with a pitch that generally corresponds to the key that was pressed. Knobs and switches have fixed parametric assignments, and always respond in a predictable fashion. In that sense, this instrument’s “language” is largely fixed according to its design specifications, which prioritize immediacy and predictability. This is not a criticism; it is just to say that, while the Prophet-5 has no doubt been used to create many types of sound and music, the nature of the relationship between its input and output structures promotes the use of its own specific, idiosyncratic vocabularies.
On an instrument like the 400, this type of 1:1 relationship between input and output is only one possibility. Instead, a user might define a completely different type of relationship between input and output. A single key could produce different results each time it is touched; pressing a specific set of keys in a specific order could enact changes at any level of detail; one key could modify the behavior of other keys; touching a key might start or stop an otherwise self-sustaining, ongoing process. Keys could act as triggers for conditional behaviors, perform mathematical manipulations of numbers in data registers, recall or otherwise manipulate any number of parametric settings, and much more.
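As a rough illustration of how radically behaviors could differ from key to key, here is another hypothetical Python sketch. Again, nothing here reflects the 400’s actual software or its languages; it only demonstrates the general idea that each key can carry its own arbitrary logic, including logic that changes shared state, alters the behavior of other keys, or toggles an ongoing process.

```python
import itertools

state = {
    "register": 0,         # a numeric data register that keys can manipulate
    "arpeggio_on": False,  # an ongoing process that a key can start or stop
    "transpose": 0,        # a setting that one key can change for other keys
}

pitch_cycle = itertools.cycle([60, 64, 67, 72])  # a different result each touch

def key_cycling():
    # Produces a different note every time the same key is touched.
    note = next(pitch_cycle) + state["transpose"]
    print(f"note {note}")

def key_toggle_process():
    # Starts or stops a self-sustaining process rather than playing a note.
    state["arpeggio_on"] = not state["arpeggio_on"]
    print("arpeggio", "running" if state["arpeggio_on"] else "stopped")

def key_modify_others():
    # Changes how other keys will respond from now on.
    state["transpose"] += 7
    print(f"transpose set to {state['transpose']}")

def key_math_on_register():
    # Performs arithmetic on a data register, with a conditional behavior.
    state["register"] += 1
    if state["register"] % 4 == 0:
        print("register reached a multiple of four: trigger something else")

# Each key number is bound to its own behavior rather than to a fixed note.
key_table = {
    1: key_cycling,
    2: key_toggle_process,
    3: key_modify_others,
    4: key_math_on_register,
}

def touch(key_number: int) -> None:
    key_table[key_number]()

for k in (1, 3, 1, 2, 4, 4, 4, 4):
    touch(k)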
The possibilities were intentionally open-ended, and instruments like the 400 became a sort of experimental sandbox for exploring alternative approaches to user interaction—its languages acting effectively as propositional models by which its own idiosyncratic vocabularies might be discovered. As discussed above, the 400 specifically offered several software “languages,” each of which facilitated different sorts of musical interaction (more on that at a later date).
MIDI as a Turning Point
The 400 was developed around the same time as (and independently of) the standardization of MIDI. Of course, the rapid proliferation of MIDI quickly changed the face of commercial electronic musical instrument design. MIDI became a standard part of most new electronic instruments; for many, it became the default basis of computer-to-instrument interaction; it became the connective tissue of countless studios. At the same time, as newly developed microprocessors became even more powerful, Donald Buchla's ambitions to create a compact, self-contained instrument intensified. 1987's 700 was his most intricate and compact instrument to date, and his first instrument to incorporate MIDI control.
Simultaneously, Buchla noticed that, even as MIDI proliferated as a protocol, the market seemed to be flooded with new sound sources while offering little in the way of novel approaches to control. Despite his misgivings about the MIDI protocol in general, he set out to produce a series of controller concepts—and soon, Thunder (and its many siblings) was born.
Now that we’ve established a foundational understanding of language in Donald Buchla’s instruments, we can further unpack the ideas at work in Thunder. As a next step, we’ll offer a brief description of the conceptual shift in Buchla’s work in the 1990s, along with a high-level overview of Thunder itself. Along the way, we’ll see how the concept of “language” as a connective structure between user input and sonic response continued into Buchla’s later designs.
Expect more on that front—and more about the 400—in the near future.
-RG