Choosing Software Tools

Never has there been a better time to develop software. With rich access to ecosystems, frameworks, libraries, and reference and learning material, developers are better equipped than ever to build digital systems.

And they darn well better be, for never has there been a more complex time to develop software. With so many choices—devices, frameworks, languages, architectures—it's hard to know how best to spend one's time. What's more, the pace of change is increasing: new devices, frameworks, languages and architectural patterns emerge faster than ever.

To make matters worse, there are many schools of thought about how to approach this mess of optionality. How does one know where to begin, and where to end?

Let’s define a few constraints.


The choice of a software toolset is a systems design problem, and as such, we can begin by enumerating the relevant constraints. These constraints will be related to the tools themselves, the team using them, and the project—the context for their use.

Team Size

The first constraint is the size of the team using the tools. Often, this constraint boils down to a coverage and coordination problem: all members of a team must understand the toolset, or at least a meaningful subset of it. Team members may already know these tools when they are recruited, or learn them during training.

Larger teams require more coordination, as there are more developers to keep aligned and to cover with knowledge. Because of this, it is often prudent to rely on long-established technologies, and to prefer standard tools and widely known patterns.

Smaller teams can get away with experimenting with newer technologies, as they often need more leverage from their tools. This comes with the cost of flirting with the cutting edge, and of using tools that are still in development. As we'll touch on later, these tools have less to draw from in terms of learning materials and community.

Thus, regardless of team size, if one highly values working with known quantities, relying on older tools and technologies is a wise choice. If a team decides to adopt a new technology, then the larger the team, the more difficulty it will have recruiting and educating members.


Scope

The next constraint is the scope of the project. In other words: what is the size of the project? How many pieces are involved, and how simple or complex are they?

As we evaluate tools, we'll want to compare their relative value for building each of the pieces of our application. We'll want to pay attention to the different subsystems involved, and how interdependent they are. We may need different tools to handle different subsystems, or there may be situations where integrating tools is not possible.

The size of our project will determine not only the amount of tools, but also their sophistication. If we want to dig a small hole, we use our hands or a shovel. If we want to dig a bigger one, we reach for more powerful, mechanized excavators.

A good example is modern web design. Simple, static websites are made with a small number of tools, namely HTML and CSS. But as we “dig this hole bigger”, adding dynamism, componentization, interactions and more, we need to use more sophisticated tooling—static site generators, JS frameworks, non-JS frameworks, etc.

Together, the number and sophistication of our tools respond to the scope. We can approach tool choice by finding the “center of mass” of our scope—the most interdependent or valuable piece(s)—and choosing tools that best match them. More independent pieces are more flexible in their choice of tool, and less valuable ones can often be handled by tools that are less suitable fits.


Timeline

While a project’s scope defines its ambition, the timeline grounds that ambition within a structured context—the calendar. The dynamics here should feel intuitive. A longer timeline allows for more experimentation and education, and permits a wider tool search and a broader set of choices. A shorter timeline constricts all of these. With more time, we can learn new tools and technologies, and iterate on our toolset to best fit the project. With less, we can't, and we need to rely on what's in front of us—well-known and widely accepted tools and technologies.

Scope = Team Size x Timeline

The above three constraints are related by the equation Scope = Team Size × Timeline. If we increase the scope of our project, we need to increase our team size, our timeline, or both. Smaller projects can be done by fewer people, and/or in less time.
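The relation can be sketched as a rough planning heuristic. This is an illustrative sketch, not a law: the unit of "scope" (feature-units per person-month) and the function names are assumptions, not from the text.

```cpp
#include <cassert>

// A rough planning heuristic: the scope a team can deliver scales with
// team size times timeline. Units here (feature-units, months) are
// illustrative assumptions.
double deliverableScope(int teamSize, double timelineMonths) {
    return teamSize * timelineMonths;
}

// Rearranged: given a fixed scope and team size, the implied timeline.
double requiredTimeline(double scope, int teamSize) {
    return scope / teamSize;
}
```

The rearrangement makes the essay's point concrete: doubling scope with the same team doubles the required timeline.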

Prior Art

Beyond the size of a community, we’ll also want to pay attention to which tools people are using to do the same or similar things we are. As we evaluate the prior art for a given piece of software, we’ll want to inspect the types of tools each team uses, and how they combine multiple tools.

We’ll also want to contextualize this prior art within its time period. The knowledge and tooling surrounding software change rapidly, across hardware, firmware, software and wetware. What has changed since this team worked on their project? What is new? What is the same?

This research helps to inform our understanding of how suitable a software tool is for a certain task, or how much we might struggle to force a tool on an ill-suited problem.

Community Size

In general, the more people using a given tool, the more knowledge is produced around it: more code, more questions, more answers. While some tight-knit communities push back against this claim, the size of a community can be a valuable indicator of how much support there will be in learning and using a tool.

However, popularity is a fickle thing, and communities around tools come and go over time. Given this, it is useful to evaluate the trends surrounding a tool and attempt to predict its future. Is the community active and growing? Or is it stale and dying?

While the size of the community should not be a reason to rule out a tool, it can help us sort between tools, ranking higher those options that offer more in terms of support, as these will be easier to use.

The above two constraints, prior art and community size, define the knowledge resources available to a team executing a project. Some software tools come with support teams, especially if one is paying for the rights to use the tool. But most don’t, and it is often on the strength of a volunteer community and freely shared content that one makes sense of how these tools work.


Integratability

As part of the “systems design” problem of choosing tools, we often use a combination of tools to construct software. Certain tools allow for more options when integrating with other tools. We’ll therefore want to pay attention to the system of tools we are creating, and to the relationships between those tools.

Some tools solve a specific problem—like writing low-level processing math or providing database functionalities—and integrate well with higher-level wrappers, allowing us to extract certain components and handle them with tools best suited to their implementation.

In audio applications, we often separate the DSP, or audio math, from the other layers of the software. Pure Data is a visual, node-based language for building processing systems, and offers integrations that allow its programs to be used from iOS, macOS, Android and Windows. This allows native or web interfaces to be built on top of Pure Data “patches”.

Alternatively, we may choose to write the DSP in a lower-level language, like C, C++ or Rust. Through bridging, we can bring this code into the higher-level layers of an application, to be called by languages like Swift or JavaScript.
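As a minimal sketch of the kind of code that lives in this low-level layer, here is a gain routine in C++. The function name and buffer layout are illustrative assumptions; the point is the shape such code takes to stay easy to bridge.

```cpp
#include <cassert>
#include <cstddef>

// A tiny DSP routine of the kind typically written in C/C++ and bridged
// into higher-level layers. It processes samples in place, holds no
// state, and allocates nothing—properties that make it straightforward
// to call from Swift or JavaScript through a C interface.
extern "C" void apply_gain(float* samples, size_t count, float gain) {
    for (size_t i = 0; i < count; ++i) {
        samples[i] *= gain;
    }
}
```

Declaring it `extern "C"` keeps the symbol un-mangled, so bridging layers can locate it by name.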

Application software brings together storage systems (like a database), processing systems (like an audio engine), and interaction systems (the user interface). It’s often the case that these all will be handled by different tools or languages.

The Realm database, now maintained by MongoDB, is highly integrated with web and native platforms, giving developers the freedom to choose different tools for building other components, like the UI.

Higher modularity makes systems more integrable, which eases development.


We can next synthesize these constraints (ordering and summing them) to arrive at a space of potential solutions. As an example, I’d like to use my own experience choosing software tools to develop mobile applications. One area of mobile development I’m involved with is audio applications, specifically music production applications.

Being a team of one, I am highly constrained in some respects: I can only learn and be familiar with so much. That said, I have a lot of freedom in which tools I can use, as it’s only me learning them. Especially in the mobile world, and even more so in mobile audio, the technologies in use are new, so this doesn’t disadvantage me too much. In fact, I’d argue that audio applications are a segment of the app market where solo developers have done surprisingly well.

Still, as a team of one, I can’t possibly write all the code for these systems. I need to rely on libraries and frameworks to solve a host of issues—data storage, data management, screen management, audio engine infrastructure—and to provide me with convenient sets of objects and APIs.

One particular design is that of a “mini-DAW”—an app for making digital music. For this, I need a storage system for managing music project files; an interaction system for creating and editing audio, MIDI, and automation clips, as well as processor graphs (tracks of generators and effects); and a processing system to run these graphs, making use of the audio, MIDI and automation data, and providing a stream of processed updates to the app’s data.
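The three subsystems just described can be sketched as interfaces. All names and signatures here are illustrative assumptions for the sketch, not from an actual codebase.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Illustrative interfaces for the three subsystems of a "mini-DAW".

struct Clip {                   // a unit of audio/MIDI/automation data
    std::string id;
    std::vector<float> samples;
};

class StorageSystem {           // manages project files and clip data
public:
    virtual void saveClip(const Clip& clip) = 0;
    virtual Clip loadClip(const std::string& id) = 0;
    virtual ~StorageSystem() = default;
};

class InteractionSystem {       // creation and editing of clips and graphs
public:
    virtual void editClip(Clip& clip) = 0;
    virtual ~InteractionSystem() = default;
};

class ProcessingSystem {        // runs processor graphs over clip data
public:
    virtual void render(const std::vector<Clip>& clips,
                        std::vector<float>& output) = 0;
    virtual ~ProcessingSystem() = default;
};
```

Keeping the subsystems behind interfaces like these is what lets different tools (a database, a UI framework, an audio engine) implement each one independently.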

This project is a large undertaking, and as such has a long timeline, which gives me the flexibility to evaluate different options, and potentially transition between tools over time.

Audio applications come with their own set of constraints, one being the hardware they run on. For mobile, Apple is the only consistently high-performing option, and as such, I’ve chosen to constrain myself to the Apple ecosystem.

This constraint most strongly shapes the space of choice, as it limits it to tools that will integrate with iOS applications. In this space, there are several attractive candidates for each of the main levels of storage, interaction and processing.


Storage

Storage for applications breaks down into two main camps. Traditionally, apps stored information in files placed within a file system. More recently, advances in database design have made up for some of the shortcomings of a file system (lack of querying, hierarchy, speed, etc.), allowing developers to interact with data in more robust ways, and to do so performantly.

Often, applications take a hybrid approach, storing some data in the file system and some in databases. This is the approach I’m taking: using the file system to store clip data, like audio or MIDI, and using a database to store project files—the assemblages of these clips.
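The hybrid split can be sketched in a few lines: bulky clip payloads live in ordinary files, while the project is a small record (the kind of thing handed to a database) that only references them. The file names and `Project` layout are illustrative assumptions.

```cpp
#include <cassert>
#include <fstream>
#include <iterator>
#include <map>
#include <string>
#include <vector>

// The project record holds references to clip files, not the clip data
// itself; this is the piece that would live in a database.
struct Project {
    std::string name;
    std::map<std::string, std::string> clipPaths;  // clip id -> file path
};

// Bulky clip payloads go to the file system.
void writeClipFile(const std::string& path, const std::vector<char>& bytes) {
    std::ofstream out(path, std::ios::binary);
    out.write(bytes.data(), static_cast<std::streamsize>(bytes.size()));
}

std::vector<char> readClipFile(const std::string& path) {
    std::ifstream in(path, std::ios::binary);
    return {std::istreambuf_iterator<char>(in),
            std::istreambuf_iterator<char>()};
}
```

The database sees only small, queryable records; the file system handles the heavy byte streams it is good at.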

The decision here is mainly one of database choice. My case is one of wanting local-first database access, with the option of allowing multi-device in time. I don’t want the application to be network dependent, so the database and files must live on the device.

There are two strong choices here: Core Data and Realm. Core Data is the default option, an object-oriented relational database offered by Apple. It’s not amazing, but works well enough. The most compelling third party option, Realm, comes by way of MongoDB. Realm is very easy to use, and very powerful. It takes a lot of the confusion of Core Data away, and provides excellent abstractions for writing data, and observing a single object, or a collection of them.


Interaction

Because I’m sticking with Apple only, I’m ignoring cross-platform options like React Native for this project, as I have no need to transport interfaces beyond Apple devices. Apple offers a number of different levels, from low to high, at which to construct interfaces and handle gestures.

Most prominently, it offers the UI frameworks of UIKit and SwiftUI to construct interfaces, and manage gestures. UIKit takes an imperative approach, which results in more code, but more control, while SwiftUI takes a declarative approach, which allows for terser, more shapely code, at the cost of control and customization.

As well, SwiftUI takes a reactive approach, replacing the manual updating of screen content with an automatic reaction to the user’s interactions: a view updates when its data dependencies change. At the low level, text and graphics can be drawn with the Core Graphics or Metal graphics engines, for more expressive visual displays and maximal control over on-screen visuals.

Touches can be handled at a high level in SwiftUI, through simple callbacks, or through gesture recognizers in UIKit, which provide callbacks for tap, drag, pinch and other basic gestures, and allow for rule systems between these gestures that manage their cancellation or co-occurrence. At the low level, we can access raw touch values and employ our own means of recognition or processing, again through a set of callbacks (began, changed, ended).
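A sketch of what rolling one's own recognition on top of began/changed/ended looks like: a minimal drag recognizer as a small state machine. The class shape and movement threshold are illustrative assumptions, not a real UIKit API.

```cpp
#include <cassert>
#include <functional>
#include <utility>
#include <vector>

// A minimal drag "recognizer" built directly on the three raw touch
// callbacks. It tracks state across began/changed/ended and only fires
// once movement exceeds a threshold, so taps are not misread as drags.
class DragRecognizer {
public:
    std::function<void(float, float)> onDrag;  // called with (dx, dy)

    void touchBegan(float x, float y) {
        startX = x;
        startY = y;
        active = true;
    }

    void touchChanged(float x, float y) {
        if (!active) return;
        float dx = x - startX, dy = y - startY;
        if (dx * dx + dy * dy > threshold * threshold && onDrag) {
            onDrag(dx, dy);
        }
    }

    void touchEnded(float, float) { active = false; }

private:
    float startX = 0, startY = 0;
    float threshold = 4.0f;  // points of movement before a drag is reported
    bool active = false;
};
```

UIKit's gesture recognizers encapsulate exactly this kind of state machine, plus the inter-gesture rules; dropping to raw touches means writing it yourself in exchange for full control.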


Processing

Again, we have options for processing engines across a number of levels. At the low level, we can use Core Audio itself, the lowest-level audio abstraction offered by Apple. Alternatively, we can use an abstraction built on top of Core Audio, like JUCE or AudioKit, and take advantage of a friendlier, more approachable set of primitives. In a choice between working with Swift (AudioKit) or C++ (JUCE), I’m all for Swift. For now at least, AudioKit has proved a capable engine and a very accessible experience. Though JUCE has a larger community, AudioKit is specific to iOS, and has grown to be the de facto option for iOS audio development.

As time goes on, the desire to expand the engine with more generators or effects, or to optimize it in general, will require returning back down the levels, and potentially writing Rust or C/C++ code. This part of audio development is unavoidable: the audio processing code has the largest set of constraints, being a hard real-time system. It must meet strict timing guarantees, and so is limited to high-performing languages without garbage collection.

Wrap up

All in all, my process of tool selection has been one of balance: accounting for what I can do as a solo developer, and for how much I can increase the scope of my projects. As such, it makes sense for me to reach for newer libraries like SwiftUI, Combine, AudioKit, and Realm, and to seek to maximize the leverage I get from them.

In all my projects, both solo and otherwise, I’ve found that the constraints of Team Size, Scope, Timeline, Prior Art, Community Size, and Integratability have featured most prominently, shaping much of the choice of software tools.