
Paper Summary: No Silver Bullet

Software gets better not through magical solutions, but with great effort. Care and discipline towards the cleanliness of our code are essential to building better software.

“No Silver Bullet — Essence and Accident in Software Engineering” is a widely discussed paper on software engineering written by Turing Award winner Fred Brooks in 1987. This is a summary of the key ideas.


Much of the progress in software engineering productivity that past generations experienced can be attributed to removing the accidental complexity that arose over time. By making hardware more efficient and accessible, and programming languages more approachable, we solved the easy problems and created an environment for more efficient development.

We’re now at a point where the low-hanging fruit has been picked, and if we want to continue building software better and faster, we need to figure out how to solve the challenges that are inherent in software development. This paper outlines how we might approach that.

The crux of the paper is that there’s no silver bullet: no single solution that is going to make software development easier, better and cheaper, all in one fell swoop.

In Brooks’ words, speaking of the many encouraging innovations already under way: “A disciplined, consistent effort to develop, propagate, and exploit them should indeed yield an order-of-magnitude improvement. There is no royal road, but there is a road.”

Software gets better not through magical solutions, but with great effort. Care and discipline towards the cleanliness of our code are essential to building better software. Building software to solve a problem isn’t difficult; identifying and understanding the problem is the difficult part. You cannot engineer well if you don’t understand the problem you’re solving. As Brooks says:

“We still make syntax errors, to be sure; but they are fuzz compared to the conceptual errors in most systems.”

Software becomes an awful lot harder when it needs to be scaled, because scaling isn’t a linear process. Scaling introduces complexity that grows much faster than linearly because of the increasing number of interactions between elements. Brooks observes that a scaling-up of software is not merely a repetition of the same elements in a larger size; it is necessarily an increase in the number of different elements. In most cases, the elements interact with each other in some nonlinear fashion, and the complexity of the whole increases much more than linearly.
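
As a rough back-of-the-envelope illustration (mine, not from the paper): even if elements only ever interacted in pairs, the number of potential interactions already grows quadratically with the number of elements, which is why each new component costs more to reason about than the last.

```python
# Back-of-the-envelope: even counting only pairwise interactions,
# n elements allow up to n * (n - 1) / 2 interactions, so the
# interaction count grows much faster than the element count.
def pairwise_interactions(n: int) -> int:
    return n * (n - 1) // 2

for n in (5, 10, 50, 100):
    print(f"{n} elements -> up to {pairwise_interactions(n)} potential interactions")

# 5 elements -> up to 10 potential interactions
# 10 elements -> up to 45 potential interactions
# 50 elements -> up to 1225 potential interactions
# 100 elements -> up to 4950 potential interactions
```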

This is pertinent to any organisation attempting to scale its engineering capability. A great deal of complexity comes into play as teams and products interact with one another, and it only compounds as they grow.

Complexity is inevitable, and your rate of success is directly proportional to your ability to deal with it.

How did software get better and faster in the past?

By attacking the accidental complexity. We picked the low-hanging fruit and made huge progress through the following:

  1. High-level languages. The most powerful stroke for software productivity, reliability, and simplicity has been the progressive use of high-level languages for programming. Most observers credit that development with at least a factor of five in productivity, and with concomitant gains in reliability, simplicity, and comprehensibility.
  2. Time-sharing (note that the paper was written in 1987 when development resources typically had to be shared). Most observers credit time-sharing with a major improvement in the productivity of programmers and in the quality of their product, although not so large as that brought by high-level languages. Time-sharing preserves immediacy and enables us to maintain an overview of complexity. The slow turnaround of batch programming means that we inevitably forget the minutiae, if not the very thrust, of what we were thinking when we stopped programming and called for compilation and execution.
  3. Unified programming environments. Unix and Interlisp, the first integrated programming environments to come into widespread use, are perceived to have improved productivity by integral factors. Why? They attack the accidental difficulties of using programs together, by providing integrated libraries, unified file formats, and pipes and filters. As a result, conceptual structures that in principle could always call, feed, and use one another can indeed easily do so in practice. (A small sketch of the pipes-and-filters idea follows this list.)
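
To make the pipes-and-filters point concrete, here’s a minimal sketch (my illustration, not from the paper) of the same idea expressed as composable generator “filters” in Python: each stage consumes a stream and yields a transformed stream, much as Unix programs do with text streams.

```python
# Minimal pipes-and-filters sketch: each "filter" consumes an iterable
# and yields a transformed stream, so stages compose like a Unix pipeline.
from typing import Iterable, Iterator

def read_lines(text: str) -> Iterator[str]:
    yield from text.splitlines()

def grep(pattern: str, lines: Iterable[str]) -> Iterator[str]:
    yield from (line for line in lines if pattern in line)

def to_upper(lines: Iterable[str]) -> Iterator[str]:
    yield from (line.upper() for line in lines)

log = "INFO boot\nERROR disk full\nINFO shutdown\nERROR net down"

# Roughly: cat log | grep ERROR | tr '[:lower:]' '[:upper:]'
for line in to_upper(grep("ERROR", read_lines(log))):
    print(line)
# ERROR DISK FULL
# ERROR NET DOWN
```

Because every filter speaks the same “format” (a stream of lines), any stage can feed any other, which is exactly the accidental difficulty the unified environments removed.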

How do we make software better today?

The low-hanging fruit has been picked, so what do we attack next? Brooks provides some suggestions, many of which are still relevant 34 years later.

Buy, don’t build.

The most radical possible solution for constructing software is not to construct it at all. Every day this becomes easier, as more and more vendors offer more and better software products for a dizzying variety of applications.

Brooks believes that the development of the mass market for software is the most profound long-run trend in software engineering. The cost of software has always been development cost, not replication cost. Sharing that cost among even a few users radically cuts the per-user cost. Another way of looking at it is that the use of n copies of a software system effectively multiplies the productivity of its developers by n.

Brooks had elsewhere proposed a marketplace for individual modules. Any such product is cheaper to buy than to build afresh. Even at a cost of $100,000, a purchased piece of software is costing only about as much as one programmer-year. And delivery is immediate, at least for products that really exist, products whose developer can refer the prospect to a happy user. Such products tend to be much better documented and somewhat better maintained than homegrown software.
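
The economics behind this are easy to sketch. Here’s a back-of-the-envelope comparison in which everything except the $100,000 purchase price and the rough programmer-year comparison from the paper is a hypothetical figure of my own:

```python
# Hypothetical back-of-the-envelope: buying shares development cost across
# all purchasers, while building in-house concentrates it on one user.
dev_cost = 2_000_000        # vendor's one-time development cost (assumed)
copies_sold = 1_000         # number of users sharing that cost (assumed)
purchase_price = 100_000    # price per copy, as in Brooks' example
programmer_year = 100_000   # rough loaded cost of one programmer-year (assumed)
inhouse_effort_years = 10   # guess at the effort to build the same thing in-house

per_user_share = dev_cost / copies_sold
print(f"Vendor's development cost per user: ${per_user_share:,.0f}")            # $2,000
print(f"Purchase ≈ {purchase_price / programmer_year:.0f} programmer-year(s)")  # 1
print(f"In-house build ≈ {inhouse_effort_years} programmer-years, plus the wait")
```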

You can think of this from the perspective of the buyer or the builder. Most of the time, it will make sense to buy the software. Every now and then, you’ll realise that you’re in the perfect position to build something that doesn’t exist yet, or exists but isn’t very good. This is a chance to capitalise, because if you build it well, there will be buyers.

Equip people with technology.

“I believe the single most powerful software productivity strategy for many organisations today is to equip the computer-naïve intellectual workers on the firing line with personal computers and good generalised writing, drawing, file and spreadsheet programs, and turn them loose. The same strategy, with simple programming capabilities, will also work for hundreds of laboratory scientists.”

While the programs might have changed, the essence of what we’re doing when we build software for people hasn’t. Building and delivering software is about giving smart people tools to make ultra-smart decisions. Equip people doing good work to go out and do great work. That’s the beauty of software: it’s an easily replicable force multiplier for people’s efforts, and it provides more leverage than almost anything else.

Get better at understanding what to build.

The hardest part of building software is deciding what to build.

Establishing the detailed technical requirements, including all the interfaces to people, to machines, and to other software systems, is difficult. No other part of the work so cripples the resulting system if done wrong. No other part is more difficult to rectify later.

The best developers are able to identify and understand the problem to an extent that they can build useful and delightful solutions for customers. This is why it’s so important for developers to not just be focused on the nitty-gritty implementation details, but to be product-minded about the solutions they build.

Some useful reading on this idea:

  1. The Product-Minded Software Engineer
  2. Code-first vs. Product-first
  3. Product Validation Frameworks are Useless Without Taste

Incremental development: Grow software, don’t build it.

“I still remember the jolt I felt in 1958 when I first heard a friend talk about building a program, as opposed to writing one. In a flash he broadened my whole view of the software process. The metaphor shift was powerful, and accurate. Today we understand how like other building processes the construction of software is, and we freely use other elements of the metaphor, such as specifications, assembly of components, and scaffolding.

The building metaphor has outlived its usefulness. It is time to change again. If, as I believe, the conceptual structures we construct today are too complicated to be accurately specified in advance, and too complex to be built faultlessly, then we must take a radically different approach.

Let us turn to nature and study complexity in living things, instead of just the dead works of man. Here we find constructs whose complexities thrill us with awe. The brain alone is intricate beyond mapping, powerful beyond imitation, rich in diversity, self-protecting, and self-renewing. The secret is that it is grown, not built. Incremental development: grow, not build, software.”

Personal Takeaways

  1. Good design is aided by designing from the outside in. Identify use cases and build from there.
  2. There is a lot of inherent complexity within each domain, and that makes it very easy to solve the wrong problem really well. You need to guard against this and make sure that you’re doing what’s necessary to identify and understand the problems you’re trying to solve.
  3. Over the past few decades, we’ve made a lot of progress, but is it as significant as we like to think? We can do things faster than ever, but how good are we at solving essential complexity? Are we any better than we were in the past? We’ve learnt to move really fast, but it still doesn’t feel like enough. It appears that fast is never fast enough. We get used to the new normal very quickly.
  4. Software today involves plugging different tools and methods together. It’s important to understand at least one layer of abstraction down from the layer you’re working on; otherwise, you risk building on a poor foundation. When things go wrong, it’s very difficult to understand why and to fix it.
  5. Domain-driven design is critical to understanding the problem you’re trying to solve. Build models based on reality, because reality changes less frequently than your perception of it.
  6. Take the tracer bullet approach to building software (i.e. get a thin, working slice of the problem) and iterate from there; a small sketch of what that can look like follows this list.
  7. Teams must design solutions together. Having everyone in the team understand the problem and the domain and play a role in designing a solution leads to better understanding, better communication, and more buy-in from team members.
