[This blog entry is the third of a series on my work here — the process of deciding, doing, and promoting my music compositions.]
When I entered my master’s program in mathematics education at UK, I did something I had assumed would be required — I had my portable typewriter, a very cute manual Olympia, acid-dipped and balanced in preparation for a lot of hard use. This was 1985. As a graduate student I was awarded an assistantship: managing the microcomputer lab in the College of Education. I sat down in front of an Apple ][e for the first time, and an entire world suddenly opened up. I could write papers as streams of consciousness, go back to fill in citations and details, edit and tighten the prose, and format the finished product right there on the screen. It was a seamless, non-linear path from ideas to published paper that truly matched how my brain worked. (My little Olympia portable was never used — I sold it at a yard sale several years later, for less than the service fee I’d paid.)
For composers, that transition was even more dramatic. Before manuscript software, you wrote by hand, handed the results off to a publishing company (or, going far enough back, a music copyist), and, after a lot of money changed hands, you hoped it came back as you intended. Almost all composers wrote at the piano keyboard, which provided the only way they could “hear” their work before it was actually performed. (Mozart was thought to be able to compose entirely in his head and commit a piece to manuscript, fully formed, as fast as he could write. That is largely a myth, though composers can and do play passages in their heads just fine — but it’s an open question how much happens before or after they’ve played it or written it down.)

Béla Bartók working on a manuscript.
Music software these days matches my non-linear brain’s needs exactly, even as it satisfies my “legacy” requirement for notes on a page. My “word processor” is Avid Sibelius, one of a dozen or so music “score editors.” Calling such software a “word processor” isn’t wrong, but there are some substantial differences. Typing words into a document sidesteps something music demands, since one doesn’t generally consume music by “reading” it in one’s own time — music has to be performed. I can “type” in music by playing a piano into Sibelius, but I’m limited by what I can actually play, and Sibelius will attempt to transcribe things like dynamics and tempo changes/nuances only crudely. Of course, since I’m at best a terrible pianist, that method of entering notes isn’t really an option for me anyway.
I generally use the mouse to select and add notes to the score, and use copy/paste quite liberally – from individual notes and chords to whole melodic lines and passages. At first it’s slow, awkward, and pretty divorced from the process of actually making music (playing an instrument, or singing). But once the notes are in, just like word processing, I can reproduce and transpose passages with a few clicks, making it easier to build a larger work from small pieces. That means I don’t have to work linearly — from beginning to end — since my current focus could end up anywhere in the piece. In addition, playback lets me test ideas and find mistakes without a lot of wasted work if the results end up in the dust bin. A vision of the work is still required, of course, but I can develop that vision as I work. Writing things out by hand pretty much requires me to have a more-or-less completed vision in advance.
(A quick aside on the “performance” playback this software does. I chose Avid Sibelius specifically because its users paid a lot of early attention to the Vienna Symphonic Library, a collection of sampled “virtual” instruments. “Sampled” means they’re actual recorded notes performed on real instruments, so the “library” is a collection of sounds made by instruments doing a wide range of performance tasks. The “Sibelius Export” versions of my recordings here sound like the performances of the instruments themselves, because they are. The reason they don’t sound quite true to human performance quality is the same reason AI-generated text sometimes seems less than real — human behavior is very complex and nuanced, and reproducing it by machine is difficult. Although things have the potential to improve greatly, unlike AI text generation there isn’t much money to be made in that development process for music, so what you hear here is as close as I can get.)
Below is a picture of my computer displays. On the left is the score. On the right are the various “virtual instrument” controls. When I play back the score at left, the notes — and everything affecting their performance (dynamics, tempo, performance styles like staccato, etc.) — are fed to the “instruments” at right, which call up all of the samples required to “play” them. I could probably spend the next ten years just trying to master the controls these “instruments” provide. I’d rather simply write, so the Sibelius exports you hear on this website are pretty much what happened “out of the box.” It’s good enough to be listenable, and most certainly good enough to support the creation process.

Although the Sibelius display at left can simply be exported to PDF for distribution and printing, there’s a whole collection of standards to which published music should adhere. After I’ve finished a composition, I have to attend to such things myself, which is just hard, nasty, time-consuming work. But since I don’t have a publishing contract (nor could I afford to pay for the services a publisher provides), I’m stuck with it. Thankfully, I’ve had at least two of my works performed by actual instrumentalists (as of this writing), so I’m assuming I haven’t done too badly so far!