Mezei, along with his co-director, Ron Baecker, and the chair of the Department of Electrical Engineering, K.C. Smith, for some strange but wonderful reason, listened to my ideas and took them seriously. They coached me in writing a research proposal to the Social Sciences and Humanities Research Council of Canada, which was submitted under their names (after all, who the hell was I?), and which was actually accepted.
At the same time, they figured out a way that I could become a graduate student in Computer Science at the University of Toronto (no small thing since I didn't meet the entrance requirements for 1st year undergraduate), which brought with it student support. The reality is that I went to graduate school for: the money. It beat working in a bar or restaurant. I never had any intention of becoming a researcher. I just wanted to make my instrument, and then go back to becoming a full-time musician. Hah!
Anyhow, with the help of the above three mentors, and a lot of fellow students (whose names appear in the publications cited below), came the Structured Sound Synthesis Project (SSSP). Now a word about the name. The project was based in the Computer Systems Research Institute and was receiving research funding. Consequently, I had to hide the fact that it was really motivated by music and artistic objectives. So I figured that music was structured sound, which made for a far more scientific-sounding description, and the SSSP it became. As long as I got my system, I didn't care what it was called.
The project received funding from around 1976-7, and continued to exist until about 1984.
During that time, we built one of the first digital synthesizers, certainly one of the first portable (if you had a van) digital live performance systems (at a time when tape music dominated computer music performances), and developed many of the graphical user interfaces for music that are now commonplace.
This project laid the foundation for the rest of my career, such as it is.
I have to say, looking back, it was pretty cool. We designed and built a 16-voice digital synthesizer to make the sounds. We controlled it in real time via a dedicated DEC LSI-11 microcomputer. Tom Duff and Rob Pike wrote a real-time package that let it run as a slave to our PDP-11/45 minicomputer, which was running an early version of UNIX. The real-time LSI-11 communicated with the time-shared UNIX machine via some dual-port memory, using a modification to UNIX written by Bill Reeves. For composition, and "studio" related things, we made heavy use of interactive computer graphics, employing a graphics package written by Bill Reeves (without whom I would never have been able to get the data structures right).
For concerts, we decoupled the LSI-11 from the 'mothership' and used it as a stand-alone microcomputer. While it no longer had the fancy graphics, nevertheless, even using only a 24-line x 80-column terminal, we were able to use graphical interaction. What we did was lay the control panel out on the screen like a spreadsheet (we didn't call it that at the time, since the spreadsheet had not yet been invented, but see the video of Conduct, below), and control the cells with a tablet and other graphical controllers. We actually had 8 RS-232 ports on the device (remember, this was years before MIDI) for control. The whole thing ran unbelievably fast, since Tom Duff and Rob Pike had written a tiny kernel for the machine that let us run compiled C code native on it, without any operating system. This included support for all of the input devices, the display, the synthesizer, and even two huge (by today's standards) floppy disk drives.
Except for the last one, the videos below show the system circa 1980-81. The final clip shows some follow-on work by John Kitamura. The articles relating directly to the individual clips are cited in the adjoining text. Additional publications are cited directly below. Almost all of them are on-line and can be accessed by clicking on their titles.
Interaction is all about dynamics, and video, and cinematic form in general, are critical to helping foster better communication and literacy. Hence, you are encouraged to copy and share any of this material for non-commercial, educational, and research purposes. Please just cite the source.
As usual, comments and suggestions are always welcome.
Buxton, W., Fogels, A., Fedorkow, G., Sasaki, L., & Smith, K. C. (1978). An Introduction to the SSSP Digital Synthesizer. Computer Music Journal 2(4), 28-38.
Buxton, W., Reeves, W., Baecker, R., & Mezei, L. (1978). The Use of Hierarchy and Instance in a Data Structure for Computer Music. Computer Music Journal 2(4), 10-20.
Fedorkow, G., Buxton, W. & Smith, K. C. (1978). A Computer Controlled Sound Distribution System for the Performance of Electroacoustic Music. Computer Music Journal 2(3), 33-42.
Kitamura, J., Buxton, W., Snelgrove, M. & Smith, K.C. (1985). Music Synthesis by Simulation Using a General-Purpose Signal Processing System. Proceedings of the 1985 International Computer Music Conference (ICMC), Vancouver, 155-158.