Music is a networked process. At all levels, it involves connections between different parts—the references in lyrics, the chemistry of a band, the signal path in the studio, the clips in a DAW. In order to build software for music, we need to develop a structure that provides affordances for this network, that privileges the ability to make connections, to explore new paths and branches—to grow one’s network, so to speak.
In order to understand the structure of these networks, we must ask questions of the musical process as a whole. Who is involved? What tools do they use? What artifacts do they create? How are these organized? How are these evolved?
What follows is the product of my own experience, research and observations. The musical process is different for everyone and no single framework can fully describe it, but what I have done here is attempt to capture the patterns that I have seen, and arrange them in a roughly linear timeline. We’ll use these patterns to construct loose frameworks for how music is made, both in traditional studios and in the more modern “bedroom” studio.
We’ll first consider the traditional music-making process, at a bird’s eye view. We’ll separate the process into four phases—Exploration, Composition, Recording, Production—and explore the types of activities and the tools used for these activities in each phase.
We can briefly consider the traditional process of “manufacturing” a song (or album): from research and development, to initial experiments, to assembly, refinement, final polishing and quality assurance.
In essence, songs loosely flow through the above phases, though not necessarily in the order presented or as a serial procession of steps. Creative work involves loops, parallel tasks, and back-tracking. For our purposes, however, these phases give a good overview of the process of designing a song.
We can use these stages to understand the tools and artifacts needed to create music—sheet music, writing instruments, musical instruments, speakers, mics, effects boxes, mixing boards and more.
First, we find that it takes music to make music. In sourcing references to serve as guidance, inspiration, orientation and direction for the project, we turn to live or pre-recorded music from others or riff on instruments or vocals of our own. We are semantically browsing for meaningful content—things that align, in terms of their sound, or their feeling or emotion, with what we want to express in our own creations.
The tools for this exploration are sound playback systems and music libraries, instruments, pens and paper, the brain, field recorders and mics. In general, we want objects that easily and simply facilitate the playing back, creating or recording of sounds.
We are building a collection—an arsenal of inspiration—that will serve as a moodboard for the design. In this stage of design, we want to privilege tools that offer lightweight ways to scan large bodies of references and to store our thoughts—prototypes and sketches. Thus, the exploration phase can be said to return a reference layer, useful for building on top of.
As the direction hardens, we begin to produce the base usable units within the song: lyrics, rhythm, melodies and harmonies. We begin to construct the different instrument layers, assign vocal roles, and arrange the different scenes within the song. We can think of this stage as moving blocks around a timeline, representing different instruments and vocalists playing and singing at different times, and the different parts that each of these blocks is playing.
At this stage, the goal is to build cohesion between these blocks, in terms of how the blocks work together at each unit of time and how the blocks work together across time. In this, we are defining our form across space and time. We decide what notes and words will fill the space across a stretch of time, and what instrument or voice will sound these.
Often, this process takes place with writing tools and surfaces, like pens and paper or markers and whiteboards. We produce artifacts like sheet music and lyrics—linear arrangements of musical content across time. We use these artifacts to guide the Recording Phase.
Once we know how the song will go, it’s time to record it. This often involves the use of a recording studio, and always involves the use of microphones (unless the piece of music is solely digital or MIDI-based, but we’ll ignore those for now). We might choose to record different parts at different times, or have certain groups of musicians play at once. We are conscious of the nature of the recording setting, the arrangement of the mics and players, and the techniques for coaxing and nudging a recording in the right direction (things like pop filters on mics, stuffing drums with blankets, etc).
This system often involves the use of a recording engineer to manage the above, as well as to decide when to start and stop the recording, to cue the tracks correctly, to select the proper tracks to send to each artist’s headphones, and to properly organize and store the results of the session. Nowadays, this is often done within a computer, using digital versions of the recordings in a piece of software like Pro Tools; more traditionally, it was done onto magnetic tape.
The artifacts, strips of tape or files on a computer, can then be cut up and assembled into whole pieces in the Production Phase.
Once we have the tracks recorded, we need to “produce” a complete, unified version. This phase includes the “production”, “mixing” and “mastering” phases of a traditional music cycle, in which tracks are defined, arranged, leveled, filtered, panned and effected in a whole host of other ways, slowly working towards producing a final mastered version, ready to be released.
This phase involves a number of producers and engineers, and may involve the producing, mixing and mastering of multiple copies to select between. We can think of an Inspection Phase within the Production Phase, responsible for ensuring the quality of the final product. This involves evaluating different versions, listening conditions (speakers/headphones, settings) and listening formats (vinyl, CD, streaming, HQ, ...).
This Inspection Phase returns feedback iteratively before eventually approving for release. This feedback may involve re-referencing, re-writing, re-recording, re-producing, re-mixing or re-mastering. From our production phase, we may flow back into any of the above phases, forming a knitted, intertwined, looping process through these phases.
We can now consider each of these phases in the modern digital context. One advantage of digital systems is their ability to have artifacts arbitrarily connected, through hierarchies in a file-system, or directly through hyperlinks, and arbitrarily annotated, using metadata or other forms of formatted data. We can explore how we might architect file and folder systems to bring together all four phases under one project namespace (represented as a folder). Fundamentally, our files can be thought of as artifacts, while our folders represent organized collections of these artifacts.
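As a sketch of this idea, a project namespace could be scaffolded as a folder with one subfolder per phase. The phase names below are just one possible scheme, not a prescription:

```python
from pathlib import Path
import tempfile

# Hypothetical phase names; any consistent scheme would work.
PHASES = ["exploration", "composition", "recording", "production"]

def scaffold_project(root: Path, name: str) -> Path:
    """Create a project namespace folder with one subfolder per phase."""
    project = root / name
    for phase in PHASES:
        (project / phase).mkdir(parents=True, exist_ok=True)
    return project

# Scaffold into a temporary directory for demonstration.
project = scaffold_project(Path(tempfile.mkdtemp()), "my-song")
print(sorted(p.name for p in project.iterdir()))
# → ['composition', 'exploration', 'production', 'recording']
```

Artifacts from each phase then land in the matching subfolder, and cross-phase links (e.g. a reference track cited in a lyric sheet) can live as relative paths within the same namespace.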
In the exploration phase, we seek to build up a library of content that will help guide the project. In building music, we are defining a landscape to explore with sound, and we are using a variety of sources to inform the nature of this landscape. We can think of two main avenues of content: external collection, and internal expression.
We use reference tracks to serve as inspiration—sonically, lyrically, tonally—and to reference throughout the process. These external pieces of content are collected from the world around us—nature, art, media, conversation, product—to stimulate our thoughts. We might explore the Internet, take recordings (audio or visual), or use streaming services or files we own.
In the digital context, the ability to download and screenshot digital artifacts allows us to explore wide fields of content. What’s more, the availability of sampled sources or produced sound kits gives access to content that can be re-synthesized directly into a piece of music. In other words, some of the audio found in the exploration phase may find its way into the final product.
The other core component of exploration is our internal processes—our brain working to synthesize content and cognitions into some construction. To externally represent these mental forms, we may sketch them, write notes about them, verbally express them, noodle on an instrument, make noises on something around us, tap, shake, clap, whistle, hum—whatever it takes to try and make tangible what is presently internal.
In these situations, it may well be enough to simply think and consider, only using our bodies and mind to explore areas and avenues of thought. But often, it is helpful to record and store some of these creative outbursts, so that one can go back and reconsider them at a later point. In this, devices that aid with quick capture are of high value, as they minimally disrupt the flow state and easily facilitate recording and filing away.
Handheld digital devices like smartphones are perfect for these use cases, serving as a portable device capable of audio/visual capture, Internet and file access, as well as note-taking tools, word processors, sketchpads, and other applications one might create in. We can create many different types of digital artifacts, organize and store them in local or cloud storage, and share them with others.
When it comes to composing, or writing the piece of music, the process shifts to hardening our explorations into two forms: lyrics and music. Some pieces may forego the former and consist only of music; some may forego the latter and consist only of lyrics. We’ll focus on pieces that compose both music and lyrics, as the other cases are contained within these.
In creating these as digital forms, we need software for inputting both text and music notation. Note-taking apps like Notes and word processors like Word allow for the creation of formatted text documents. Notation software like Sibelius, MuseScore and Finale allows sheet-music scores to be created and shared in a word processor-like environment. Both of these allow for the formatted creation of compositional artifacts.
We can use basic text document formatting, like plain or rich text, or if we want hierarchies and sections, we could use tree-based approaches like HTML or XML. For music, a number of formats exist, both open and proprietary. In general, we’ll want richer, hierarchical structures like MusicXML.
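To make that hierarchy concrete, here is a drastically simplified MusicXML-style fragment parsed with Python’s standard library. Real MusicXML documents carry part lists, divisions, attributes and much more; this is only a sketch of the part → measure → note → pitch nesting:

```python
import xml.etree.ElementTree as ET

# A minimal, illustrative MusicXML-like fragment (not a complete score).
score = """
<score-partwise version="3.1">
  <part id="P1">
    <measure number="1">
      <note><pitch><step>C</step><octave>4</octave></pitch><duration>4</duration></note>
      <note><pitch><step>E</step><octave>4</octave></pitch><duration>4</duration></note>
    </measure>
  </part>
</score-partwise>
"""

root = ET.fromstring(score)
# Walk the tree and collect each note's pitch as e.g. "C4".
pitches = [
    note.find("pitch/step").text + note.find("pitch/octave").text
    for note in root.iter("note")
]
print(pitches)  # → ['C4', 'E4']
```

Because the structure is a tree, tools can address a single note, a measure, or a whole part uniformly—which is exactly what makes hierarchical formats attractive for composition software.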
The process of composition may involve a series of drafts or iterations; different paths and directions may be explored; we may build on past work or start fresh. Digital filing systems and version control tools like Git are perfect for organizing, versioning, tracking changes, and branching and forking projects. While valuable in every phase for handling artifacts—files and folders—and managing changes to these, Git’s strengths are perhaps most evident in handling compositions: organizing drafts, allowing access to prior versions, and enabling collaboration and safe exploration.
During the recording phase, we want ways to record and organize takes. In some cases, our musical pieces are composites of multiple takes, comped together to form a single construction. Often, these are a result of taking a certain section multiple times, in quick succession, where the DAW loops over a certain amount of time and builds up a stack of recordings.
On our digital devices, we can connect recording devices (mics and instruments) through audio interfaces. On mobile devices, products like iRig allow for direct connection of instruments and mics. We might want certain metadata to capture the timing, arrangement, personnel and purpose of a given take, and to aid in organization and retrieval.
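One way to sketch such take metadata is a small “sidecar” file written next to each recording. The field names below are illustrative, not a standard:

```python
from dataclasses import dataclass, asdict, field
import json

# Hypothetical metadata schema for a single take; fields are illustrative.
@dataclass
class TakeMetadata:
    song: str
    section: str                 # e.g. "verse 2"
    take_number: int
    performers: list = field(default_factory=list)  # personnel on this take
    purpose: str = ""            # e.g. "scratch", "keeper candidate"
    started_at: str = ""         # ISO-8601 timestamp

take = TakeMetadata(
    song="Untitled Demo",
    section="chorus",
    take_number=3,
    performers=["guitar: A.", "vocals: B."],
    purpose="keeper candidate",
    started_at="2024-05-01T14:32:00",
)

# Stored alongside the audio, e.g. chorus_take3.wav + chorus_take3.json
sidecar = json.dumps(asdict(take), indent=2)
print(sidecar)
```

Because the metadata is plain JSON, it can be searched, filtered and versioned with the same filing tools used everywhere else in the project.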
Modern recording may still involve a recording engineer, but often it involves just the artist and/or producer. In this, we have new options for triggering the recording of takes (directly by artists and musicians) and new capacity for storage—in bits rather than on tape—which means that we can record more, potentially foregoing starting and stopping and simply capturing long takes, each of which may contain several performances.
Not only is digital storage more plentiful than tape, but also it is more malleable. We have the freedom to copy bits with ease; we can duplicate, cut up and combine strips of digital audio in a lightweight fashion. This greatly aids the Production Phase.
If you are familiar with DAWs, you are quite familiar with digital production. From making a beat, to recording vocals, to mixing and mastering it—all take place within a DAW like Pro Tools, Logic, Ableton, FL Studio, or others. In essence, our digital production phase takes a set of sounds—recorded or collected—and outputs a WAV or MP3 file that we can listen to.
We can think of a DAW as doing two things.
First, it replicates the signal path one would find in a studio, from input through effects to a mixing board. The DAW creates a graph of audio units that handle different aspects of production, and we can adjust and modulate these over time, according to the needs at each timestamp in our production.
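This signal-path idea can be sketched in a few lines: each audio unit is just a function from a buffer of samples to a buffer of samples, and a chain applies them in order, mirroring input → effects → mixer. The units below (a gain stage and a hard limiter) are illustrative toys, not real DSP:

```python
# Each "audio unit" maps a buffer of samples to a new buffer of samples.
def gain(db):
    """Boost or cut the signal by a number of decibels."""
    factor = 10 ** (db / 20)
    return lambda buf: [s * factor for s in buf]

def limiter(ceiling):
    """Hard-clip samples to stay within ±ceiling."""
    return lambda buf: [max(-ceiling, min(ceiling, s)) for s in buf]

def run_chain(buf, units):
    """Apply a chain of audio units in order, like a channel strip."""
    for unit in units:
        buf = unit(buf)
    return buf

dry = [0.1, 0.5, -0.9]
wet = run_chain(dry, [gain(6.0), limiter(1.0)])
```

A real DAW generalizes this from a linear chain to a graph (sends, buses, sidechains) and lets the parameters—the `db` and `ceiling` values here—be automated over time.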
Second, it replicates the slicing and splicing together of tape, in the form of clips on a timeline. We saw above the advantages of this approach, and this process of creating, editing and arranging clips makes up a large portion of the producer’s time and effort.
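The clip model can be sketched as references into immutable source recordings: “cutting tape” just creates new (offset, length, start) descriptors over the same samples, which is why digital editing is so lightweight. This is a toy illustration, not any particular DAW’s data model:

```python
from dataclasses import dataclass

@dataclass
class Clip:
    source: list   # the recorded samples (never modified)
    offset: int    # where in the source the clip begins
    length: int    # how many samples it uses
    start: int     # where on the timeline it is placed

def render(clips, total_samples):
    """Mix all clips down onto a single timeline buffer."""
    timeline = [0.0] * total_samples
    for c in clips:
        for i in range(c.length):
            timeline[c.start + i] += c.source[c.offset + i]
    return timeline

take = [0.1, 0.2, 0.3, 0.4]
# "Splice" the last two samples of the take onto two spots on the timeline;
# the source recording itself is untouched.
clips = [Clip(take, 2, 2, 0), Clip(take, 2, 2, 4)]
print(render(clips, 6))  # → [0.3, 0.4, 0.0, 0.0, 0.3, 0.4]
```

Duplicating a clip copies a few integers, not audio—this is the malleability of bits over tape made concrete.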
While we might typically think of DAWs as desktop-only applications, increases in processing power on mobile and tablet devices are quickly changing that. Here, we see Madlib, a producer, say he made the beat for Kanye West’s No More Parties In L.A. on an iPad.
Here, we see Steve Lacy describe how he produces albums on his phone.
Clearly, the power of production extends to more than just our computers—our tablets and phones can also play this production game, and this mobile transition should excite those invested in music. Studios are expensive, inaccessible and often creatively sterile. The potential to produce high-quality music outside of them opens doors to much more music, situated in a greater diversity of contexts.
Finally, central to publishing a song is getting the mix and master right. This often involves multiple trials and multiple listening sessions, often across different settings. The organization of the files involved is critical to the work, as is the ability to try new ideas (branch) and revisit previous ones (version control). The ability to adjust things on the go, and to capture the results of a listening session, is of great value in the final shaping of the song.
In analyzing the four phases of music-making—Exploration, Composition, Recording and Production—there emerges a whole host of areas for design: filing systems; version control; Internet and streaming services; notation, recording and production devices; data formats; processing protocols and more. It is my goal to create tools that make it easier to make music, finding ways to integrate artifacts across all of these phases, so that thoughts formed during Exploration and Composition can be used and referenced during Recording and Production, as well as between projects.