Quality With a Name

I'm passionately interested in software design. My interest stems from a transformative experience I had as a young geek. I was sixteen or seventeen years old and had already been programming for years at that point. I thought I was pretty hot stuff. (But, then again, what sixteen or seventeen-year-old doesn't?)

I was working on a program to generate a D&D character. (Hey, I told you I was a geek!) I was writing it in Applesoft BASIC, an early and primitive version of BASIC that used line numbers. It had two quirks that are relevant to this story: variable names could only be two characters long, and the only support for subroutines was "GOSUB," which was little more than a stack-based GOTO statement.

In this language, if I wanted to call a subroutine to, say, emphasize a string, I had to set a global variable and GOSUB to a line number.

10 PRINT "Not highlighted"
20 TX$ = "Highlighted!"
30 GOSUB 1000
40 END
1000 PRINT ">>" + TX$ + "<<"

My D&D character generator was by far the most complicated program I'd ever written at that point. I was slinging variable names and line numbers like crazy. RC$ = "Dwarf"! GOSUB 15000! CL$ = "Cleric"! GOSUB 3600! I was cruising along, programming furiously in my parents' basement, when it all fell apart.

Literally from one moment to the next, I lost it all. What did AN$ mean? What about NA$? When I roll the dice, do I GOSUB 1500 or 5100? My carefully constructed mental picture dissolved and I was left struggling to understand what I had written.

I tried to make a cheat sheet showing what all the variable names meant, and where the subroutines were, but I had lost my enthusiasm. I set the program aside and never returned to it. Ever since that day, I've been passionate about software design and structure.

What Is Good Design?

I'm the kind of person who likes logical frameworks and structures. When I think about design, I can't help but wonder: "What's the intellectual basis for design? What does it mean to have a good design?"

That's when I run into a problem. Many discussions of "good" design simply focus on specific techniques. These discussions often have undisclosed assumptions that one particular technology is better than another: that rich object-oriented domain models are obviously good, or that stored procedures are obviously good, or that service-oriented architecture is obviously good.

With so many conflicting points of view about what's "obviously" good, only one thing is clear: "good" isn't obvious.

Other folks describe good design as "elegant" or "pretty." Some might say it has "Quality Without a Name" (QWAN)--an ineffable (that means "indescribable," kids) sense of rightness in the design. The term comes from Christopher Alexander, a building architect whose thoughts on "patterns" formed the inspiration for software's patterns movement.

I have a lot of sympathy for QWAN. I, too, will describe a design as "elegant" or "pretty." In college, I had a professor, Dr. Lum, who liked the phrase "truth and beauty." Good design is Truth And Beauty.

There's just one problem. My QWAN is not your QWAN. My "Truth And Beauty" is your "Falsehood And Defilement." We software folks seem prone to religious wars, which makes it worse. (Emacs vs. vi, anyone? How about Windows vs. Linux or domain models vs. stored procedures?)

QWAN is just too vague. I want something better.

Software Doesn't Exist

Let me digress for a moment. Software doesn't exist. Okay, I exaggerate... but not that much.

Think about it. When you run a program on your computer, it's loaded off the hard drive (where it exists as a very long series of magnetic fields) into RAM (where it exists as a whole lot of teeny capacitances). The CPU uses transistors to interpret those charges, switching them from one path to another, rather quickly, and sends the resulting electrical charges out to things like your video card. The video card routes those charges again, sending some to your monitor, which has still more transistors that selectively let light shine through colored dots, making things light up on the screen. It's pretty amazing.

Ahem. Anyway, software isn't even ones and zeroes. It's magnets, electricity, and light. Smoke and mirrors. The only way to create software is to toggle electrical switches up and down (and nobody does that any more) or use some existing software to create it for you.

"Okay, smarty-pants," you may be thinking, "that's a funny trick, but I write software."

Actually, you don't. You write a very detailed specification. You hand the specification to a program and it writes the software for you. It translates your specification into machine instructions, then directs the computer's operating system to save those instructions as magnetic fields on the hard drive. Once you've got 'em there, you can run the program, copy it, share it, not share it, sue for intellectual property infringement, etc.

You can already see the punchline coming. The specification is called "source code" and the program that translates the specification into software is called a "compiler." Jack Reeves realized this long ago and explored some of the implications in his famous essay, "What is Software Design?"

Design is for Understanding

Jack Reeves argues that source code is a design document. I agree with him. But if source code is design... what is design? Why do we bother with all these UML diagrams and CRC cards and discussions around a whiteboard?

All of these things are abstractions. Even source code. The reality of software, in its billions of evanescent electrical charges, is too complex for us to grasp. So we create simplified models that we can understand. Some of those models, like source code, are machine translatable. Others, like UML, are not... yet.

In the early days, our source code was written in assembly language: a very thin abstraction over the software. Our programs were much simpler back then, but assembly language was hard to understand. We drew flow charts to give us a better view of the design.

People don't use flow charts any more. Why is that? I think it's because our programming languages have gotten so expressive that we don't need them. You can just read a method and see the flow of control. There's no need for a flow chart.

1000 NS% = (80 - LEN(T$)) / 2
1010 S$ = ""
1020 IF NS% = 0 GOTO 1060
1030 S$ = S$ + " "
1040 NS% = NS% - 1
1050 GOTO 1020
1060 PRINT S$ + T$

Before structured programming.

public void PrintCenteredString(string text) {
  int center = (80 - text.Length) / 2;
  string spaces = "";
  for (int i = 0; i < center; i++) {
    spaces += " ";
  }
  Print(spaces + text);
}

After structured programming.

Actually, it's not entirely true that modern languages don't need flow charts. We still run across huge, 1000-line methods that are so convoluted we can't understand them without the help of a design sketch.

But that's bad design, isn't it?

Design Tradeoffs

Unlike other design disciplines, we don't have to make a lot of classic design tradeoffs when we design software. When the engineers at Boeing design a passenger airplane, they constantly have to trade off fuel efficiency, passenger capacity, and production cost.

In software, we rarely have to make those kinds of decisions. The assembly programmers of yesteryear did... they often had to choose between using lots of memory (space) or making the software fast (speed). Nowadays, we rarely make those sorts of speed/space tradeoffs. The machines are so fast, and have so much RAM, it doesn't usually matter.

In fact, our computers are so fast, modern languages actually waste computing resources. With an optimizing compiler, C is just as good as assembly language. C++, though, adds virtual method lookups... a vtable pointer in every object, a table entry per virtual method, and an extra level of indirection on every virtual call. Java and C# add a complete intermediate language that's dynamically compiled each time a program is run. Ruby parses and compiles its source from scratch every time a program is run, then interprets the result!

How wasteful.

Why is it, then, that Ruby on Rails is so popular? How is it possible that Java and C# are such gigantic successes? What do they provide that makes their waste worthwhile? Why aren't we all programming in C?

Quality With a Name

A good airplane design is one that balances the tradeoffs of carrying capacity, fuel consumption, and manufacturing costs. A great airplane design gives you more people, for less fuel, at a cheaper price than the competition.

What about software? We don't usually have these classic design tradeoffs. If we're not balancing tradeoffs, what are we doing?

Actually, there is one tradeoff that we make over and over again. As Java, C#, and Ruby show us, we are often willing to sacrifice computer time in order to save programmer time.

Some programmers flinch at the thought of wasting computer time and making "slow" programs. But it makes sense. In software development, developers are the most expensive component. That's why so many shops are outsourcing their software development to countries with cheap developers. If the computer time isn't needed, "wasting" it to save programmer time is a wise design decision.

So if we look at good design as the art of maximizing the benefits of our tradeoffs, and software design only has one real tradeoff these days--the tradeoff between machine performance and programmer time--then the definition of "good software design" becomes crystal clear:

A good software design minimizes the time required to create, modify, and maintain the software while achieving acceptable run-time performance.

Oh, and it has to work. That's non-negotiable. If a Boeing jet can't fly, it doesn't matter how fuel efficient it is. Similarly, our software design must work.

Great Design

This definition, that "good design" equals "easy to change," is not new. But stating it this way leads to some interesting conclusions:

  1. Design quality is people-sensitive. Developers, even developers of equivalent competence, have varying levels of expertise. A design that assumes Java idioms may be incomprehensible to a programmer who's only familiar with Perl, and vice-versa. Since design quality is determined primarily by developer time, it's very sensitive to which developers are doing the work. A good design takes this into account.

  2. Design quality is change-specific. Software is often designed to be easy to change in specific ways. That can lead to other sorts of changes being difficult. A design that's good for some kinds of changes may be bad for other sorts of changes. A good design correctly anticipates the kinds of change that will be required... or it makes all changes easy. (I find that the latter approach is easier and more effective, but it takes a particular Design Mindset.)

  3. Modification and maintenance time are more important than creation time. It's been said before, but it bears repeating: most software spends far more time in maintenance than in initial development. When you consider that even unreleased software often requires modifications to its design, the importance of creation time shrinks even further. A good design focuses on minimizing modification and maintenance time over minimizing creation time.

  4. The quality of a design cannot be predicted. This is perhaps my most surprising conclusion. If a good design minimizes developer time, and it varies depending on the people doing the work and the changes required, then there's no way to predict the quality of a design. You can have an informed opinion, but ultimately a good design is proven by observing how it deals with changes.

We can learn something about great designs by taking these conclusions one step further.

Great designs:

  • Are easily modified by the people who most frequently work within them,
  • Easily support unexpected changes,
  • Are easy to modify and maintain,
  • Prove their value by becoming steadily easier to modify over years of changes and upgrades.

Measuring Design Quality

Assuming that we've achieved our standards of "it works" and "acceptable performance," measuring the quality of a design should be easy. We simply measure the time required for each change:

Design quality = size of change / developer time

Unfortunately, we can't objectively measure the size of a change (not without making assumptions about the design), so we can't objectively measure design quality. I talk more about the difficulty of measuring the size of changes in The Productivity Metric.

On the other hand, it's possible that we could measure changes in design quality. That's almost as good. We won't be able to compare two teams against each other, but we will be able to answer the question, "Are we improving?" In my experience, this is a question that managers are very interested in answering.

If we assume that developers make perfectly accurate estimates... yeah, I know, it's a big assumption. Bear with me for a moment.

Anyway, if we assume that developers make perfectly accurate estimates, then an estimate made today and then delivered today will have a ratio of estimate to actual of "1."

ratio = estimate / actual

If our estimate was made six months ago, what happens to the actual time?

Well, if the design quality hasn't changed, I would expect that the ratio would remain "1."

ratio = estimate(T-6 months) / actual(T-0)

But what if the design quality has doubled? Then I would expect the amount of time required to be cut in half. For example, if the original estimate was for 10 days of work, then the actual time required would be 5 days of work. This makes sense because we're defining design quality as the amount of time required to make changes.

At T-0: ratio = 10 / 10 = 1
At T-6 months: ratio = 10 / 5 = 2

Following this logic, we can say the change in design quality is the ratio of the estimate from "then" to the actual time required to do the work "now."

quality(T-0) / quality(T-x) = estimate(T-x) / actual(T-0)
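Under that perfect-estimate assumption, the bookkeeping is simple enough to sketch. Here's a hypothetical helper (the class and method names are my own, not from any real tool) that turns an old estimate and today's actual time into a quality-change ratio:

```java
public class QualityTrend {
    // quality(T-0) / quality(T-x) = estimate(T-x) / actual(T-0)
    // A ratio of 1 means the design is no easier or harder to change
    // than when the estimate was made; 2 means changes now take half
    // the time the old estimate predicted.
    public static double qualityChange(double estimateThen, double actualNow) {
        if (actualNow <= 0) {
            throw new IllegalArgumentException("actual time must be positive");
        }
        return estimateThen / actualNow;
    }
}
```

Feeding in the example from the text: qualityChange(10, 10) is 1 at T-0, and qualityChange(10, 5) is 2 six months later, meaning the design became twice as easy to change.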

All right... sounds good. But developers don't make perfect estimates. How do we account for that?

Truthfully... I don't know. The math gets too hairy for me. I'm pretty sure it's possible to collect all of the necessary information. The team I'm working with now is doing it today. But each set of estimates is made with different knowledge and has differing accuracy and self-consistency. The work is done at varying times. Ideally, we'd punch in the estimates, actuals, and their dates and get a graph showing a quality trend and error bars. I just don't know how to get there.

I've asked my mathematician brother to help out and I'll let you know if he comes up with anything.

Universal Design Truths

In the absence of design quality measurements, we have no way to prove that one design approach is better than another. But there are a few principles that point the way. Because I lack humility, I'm going to call them "universal design truths."

In fact, none of these "truths" are my invention. How could they be? They're all old, worn, well-loved principles. They're so old, in fact, that you may have lost track of them amidst the incessant drum-beating for new languages, programming paradigms, and design techniques. Here's a reminder:

  • The Source Code is the (Final) Design. Please! Continue to sketch UML diagrams. Discuss design over CRC cards. You can even produce pretty wall charts on giant printers if you want. Abstractions like these are an indispensable tool for clarifying a design. Just don't confuse these artifacts with a completed design. Remember, your design has to work. That's non-negotiable. If a design can't be turned into software, automatically, by a compiler or similar tool, it doesn't work and it's not done.

    Specifically, if you're an "architect" or "designer" and you don't produce code, remember that it's the programmers who are finishing your design for you. They're going to fill in the inevitable gaps and they're going to encounter and solve problems you didn't anticipate. If you slough this detail work off onto junior staff, the final design could easily be lower quality than you expected. Get your hands dirty. Follow your design down to the code.

  • Don't Repeat Yourself. A clever name for a well-known principle, courtesy of Dave Thomas and Andy Hunt. Every concept should be represented once and only once in the final design.

    "Don't Repeat Yourself" is more than just avoiding cut-n-paste coding. It's having one cohesive location for every concept. What's a concept? Anything from "the way we interface with a database" to "the business rule for dealing with weekends and holidays."

    Eliminating duplication decreases the time required to make changes. Only one part of the code needs to be changed. It also decreases the risk that a defect will be introduced because a change was made in one place but not another.
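To make that concrete, here's a minimal sketch (the class and method names are my own invention) of the "business rule for dealing with weekends" living in exactly one place, so a change to the rule is a change to one method:

```java
import java.time.DayOfWeek;
import java.time.LocalDate;

public class BusinessCalendar {
    // The weekend rule is represented once. Invoicing, scheduling, and
    // reporting code all call this method instead of re-implementing the
    // check, so adding holidays later means changing exactly one place.
    public static boolean isBusinessDay(LocalDate date) {
        DayOfWeek day = date.getDayOfWeek();
        return day != DayOfWeek.SATURDAY && day != DayOfWeek.SUNDAY;
    }
}
```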

  • Be Cohesive. This one's an old chestnut, to be sure. It's no less essential for its age.

    Cohesion means that more closely-related concepts are located closer together in the design. A classic example is "data" and "operations on data," such as the concepts of "date" and "determining the next business day." This is a well-known benefit of object-oriented programming: in OOP, you can group data and related operations into the same class. Cohesion extends beyond a single class, though: you can improve cohesion by grouping related files into a single directory, or by putting documentation closer to the parts of the design it documents.

    Cohesion improves design quality because it makes designs easier to understand: related concepts sit next to each other, allowing the reader to see how they fit together into the big picture. Cohesion reduces error by improving the reader's ability to see how changes to one concept affect others. Cohesion also makes duplication more apparent.
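Here's a hypothetical sketch of that classic example: "date" and "determining the next business day" grouped into one cohesive class.

```java
import java.time.DayOfWeek;
import java.time.LocalDate;

public class BusinessDate {
    // Data (a date) and an operation on that data (finding the next
    // business day) sit together, so a reader sees both at once.
    private final LocalDate date;

    public BusinessDate(LocalDate date) {
        this.date = date;
    }

    public LocalDate nextBusinessDay() {
        LocalDate next = date.plusDays(1);
        while (next.getDayOfWeek() == DayOfWeek.SATURDAY
                || next.getDayOfWeek() == DayOfWeek.SUNDAY) {
            next = next.plusDays(1);
        }
        return next;
    }
}
```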

  • Decouple. Another essential old chestnut.

    Different parts of a design are "coupled" when a change to one part of the design necessitates a change to another part of the design. Coupling isn't necessarily bad--for example, if you change the way a date is stored, you would expect to have to change the way the next business day is calculated.

    On the other hand, when a change to one part of the design requires that an unrelated part of the design be changed, problems occur. Either developers spend extra time ferreting out these changes, or they miss them entirely and introduce defects. The more tenuous the relationship between two concepts, the more loosely coupled they should be; conversely, the more closely related two concepts are, the more tightly coupled they may be.

    Eliminating duplication, making designs cohesive, and decoupling attack the same problem from different angles. They tie together to improve the quality of a design by reducing the impact of changes. They allow developers to focus their efforts on a specific section of the design and give developers confidence that they don't need to search through the entire design for possible changes. They reduce defects by eliminating the possibility that unrelated parts of the design also need to be changed.

    (I should note that coupling and cohesion reportedly came from Larry Constantine's 1968 work on structured programming, "Segmentation and Design Strategies for Modular Programming", where he gave it a specific definition based on static code analysis. I've generalized and simplified the ideas while adhering to the spirit of the original as I understand it.)
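One common way to get that appropriately loose coupling is to let the tenuously-related part depend on a narrow interface instead of a concrete implementation. A small hypothetical sketch (the names are mine):

```java
import java.time.LocalDate;

// Report is only tenuously related to where "today" comes from, so it
// depends on this narrow interface rather than on a concrete clock.
interface Clock {
    LocalDate today();
}

class Report {
    private final Clock clock;

    Report(Clock clock) {
        this.clock = clock;
    }

    // The clock's implementation (system time, test fixture, database)
    // can change without forcing any change to this class.
    String header() {
        return "Report for " + clock.today();
    }
}
```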

  • Clarify, Simplify, and Refine. From my definition of "good design," I concluded that good designs are easy for other people to modify and maintain. If this is true, then one way to create a good design is to create the design to be easily read.

    When I'm writing code, I write for the future: I assume that my design will be read by people who I'll never meet. They'll judge me, and my design, based on what I type. As a result, I spend a lot of time making my code very easy to understand. Alistair Cockburn describes it as writing "screechingly obvious code:"

    Times used to be when people who were really conscientious wrote Squeaky Clean Code. Others, watching them, thought they were wasting their time. I got a shock one day when I realized that Squeaky Clean Code isn't good enough.


    It occurred to me that where they went down a dead end was because the method's contents did not match its name. These people were basically looking for a sign in the browser that would say, "Type in here, buddy!"

    That's when I recognized that they needed ScreechinglyObviousCode.

    At the time, it was considered an OK thing to do to have an event method, doubleClicked or similar, and inside that method to do whatever needed to be done. That would be allowed under Squeaky Clean Code. However, in ScreechinglyObviousCode, it wouldn't, because the method doubleClicked only says that something was double clicked, and not what would happen. So let's say that it should refresh the pane or something. There should therefore be a method called refreshPane, and doubleClick would only contain the one line: self refreshPane.

    The people fixing the bug knew there was a problem refreshing the pane, but had to dig to learn that refreshing the pane was being done inside doubleClick. It would have saved them much effort if the method refreshPane was clearly marked in the method list in the browser, so they could go straight there. ...The reading of the code is then, simply: "When a doubleClicked event occurs, refresh the pane" rather than "When a doubleClicked event occurs, do all this stuff, which, by the way, if you read carefully, you will notice refreshes the pane."

    Alistair Cockburn on Ward's Wiki

    Screechingly obvious code is most easily done with iterative simplification and refinement. Bob Martin has a great example of this in his article, “Clean Code: Args—A Command-Line Argument Parser.”1

    1. Martin’s “Clean Code” article used to be available on objectmentor.com, but that site no longer exists, so I’m hosting a copy here. You can find an archive of the original site in the Internet Archive.
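Cockburn's Smalltalk example translates directly. Here's a hypothetical rendering (the method names come from the quote; the body of refreshPane is a stand-in for the real work):

```java
class Pane {
    private int refreshes = 0;

    // Reads as: "when a doubleClicked event occurs, refresh the pane."
    void doubleClicked() {
        refreshPane();
    }

    // Clearly marked in the method list, so someone fixing a refresh
    // bug can go straight here instead of digging through doubleClicked.
    void refreshPane() {
        refreshes++;  // stand-in for the real redraw work
    }

    int refreshCount() {
        return refreshes;
    }
}
```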

  • Fail Fast. This is one "universal design truth" that you may not think is so universal. I include it because I think it is. Feel free to disagree.

    A design that fails fast reveals its flaws quickly. One way to do this is to have a sophisticated test suite as part of the design, as with test-driven development. Another way is to use a tool like assertions to check for inappropriate results and fail if they occur. And, before I piss off the surprisingly vocal Design by Contract minority (again), that technique makes designs fail fast, too.

    Failing fast improves design because it makes errors more visible, allowing them to be fixed sooner and more cheaply. I talk about the assertions-based approach to failing fast in my IEEE Software article, Fail Fast.
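For the assertion-style approach, here's a minimal hypothetical sketch: check for inappropriate values at the boundary and fail immediately, rather than letting a bad value corrupt state and surface as a confusing defect later.

```java
class Account {
    private int balanceInCents = 0;

    // Fail fast: an inappropriate argument stops the program here,
    // at the moment of the mistake, not three modules downstream.
    void deposit(int cents) {
        if (cents <= 0) {
            throw new IllegalArgumentException(
                    "deposit must be positive, got " + cents);
        }
        balanceInCents += cents;
    }

    int balance() {
        return balanceInCents;
    }
}
```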

  • Optimize from Measurements. It's funny. Talk to somebody about avoiding premature optimization and they'll agree with you wholeheartedly. Until you get to their pet optimization. Then all you hear is, "But of course we need to optimize that! We know that will cause a performance problem!" (Hey, you with the stored procedure... yeah, you... I'm talking to you!)

    Optimized code is often unreadable code. It's usually tricky and prone to defects. If good design means reducing developer time, then optimizing is the exact opposite of good design.

    You know what else is funny? A few weeks ago, a colleague and I improved the performance of a piece of code ten-fold just by improving its design. This was after saying that we weren't going to worry about performance. (And we didn't.) It turns out that well-designed code is often fast code... and when it isn't, it's usually easy to optimize.

    Of course, our definition of "good design" is more than just "reduces developer time." It says (emphasis added):

    A good software design minimizes the time required to create, modify, and maintain the software while achieving acceptable run-time performance.

    Although well-designed code is often fast code, it isn't always. So we do need to optimize. Optimizing later allows us to do it in the smartest way possible: when the code has been refined, is cheapest to modify, and when performance profiling can be used to direct optimization effort to the most effective improvements.

    Delaying optimization is surprisingly scary. It feels unprofessional to let "obviously" slow designs slide by. Trust in the Force, Luke. Or at least, trust in the Fowler.

Principles in Practice

These "universal design truths" are good guidance, but they don't help with specifics. That's why we have a gazillion peddlers of design advice, each tailoring their techniques to a specific programming language or paradigm.

Most of this advice is good. But... here's the rub... it isn't universal. When you take the nice-and-vague descriptions above and turn them into specific design advice, you lose something important: context.

Every design decision is made in the context of the whole design. The problem being solved; the other design decisions that have been made; the time schedule; the available pool of programming talent; etc., etc.

Context makes every piece of specific design advice suspect. I'm not saying that you shouldn't listen to it... you should! But at every moment, you should ask yourself: "When is this not true? What is the author assuming?"

As just one example of this, let's look at the simple and popular "instance variables must be private" design rule. We typically learn this in school and it's one of those rules that we often apply blindly. It's one of my favorite examples. Not because it's a good design technique (it's so-so) but because it's so often misused.

Misused? Really? Yes! Let's look at the rule more closely.

First, why make instance variables private? Well, private variables help enforce decoupling. Decoupled code is good. (See above.) No, wait... appropriately decoupled code is good. And if you're tempted to make an instance variable public, chances are good that you have a coupling problem.

But where? The variable might not be the issue. Perhaps you also have a cohesion problem. Maybe some code outside the class really belongs inside the class. Hmm. Worth thinking about.

Now for the problem with the rule. People know about the "private variables" rule but don't think about why. So you end up with code like this. I swear to God, I see this stuff everywhere.

public class MySillyClass {
  private string _myFakeEncapsulatedVariable;

  public string getMyFakeEncapsulatedVariable() {
    return _myFakeEncapsulatedVariable;
  }

  public void setMyFakeEncapsulatedVariable(string var) {
    _myFakeEncapsulatedVariable = var;
  }
}
From a coupling standpoint, there's very little difference between this code and a public variable. The code follows the letter of the rule, sure. But the programmer of this class clearly hasn't thought things through. The "private variables" rule was a mere speedbump on the way to thoughtless design.

That's not to say that coupled variables are always bad. There are some situations in which I think they're okay. In those cases, though, I prefer to save a few methods and just make the variables public.
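For contrast, here's a hypothetical sketch of a private variable earning its keep: the class exposes behavior, and the storage decision stays genuinely hidden, so it can change without touching any caller.

```java
class Wallet {
    // This private variable hides a real decision: callers can't tell
    // whether the balance is stored in cents, dollars, or a list of
    // transactions, so the representation is free to change.
    private int cents = 0;

    void add(int amount) {
        cents += amount;
    }

    boolean canAfford(int priceInCents) {
        return cents >= priceInCents;
    }
}
```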

A Single Step

Or, As We Call It Around the Office, Ye Olde Conclusion.

A good software design minimizes the time required to create, modify, and maintain the software while achieving acceptable run-time performance.

This definition, and the conclusions it leads to, are the most important things I keep in mind when considering a design. I have some core design principles I follow, true. I also have some techniques that are useful for the languages I work with. But I'm willing to throw them away--even the so-called "universal truths"--if I think that they get in the way of reducing developer time.

With this essay, I've either opened your eyes or restated the obvious. I hope it's a little bit of both. If nothing else, please take away a little bit of healthy curiosity about the design rules you hold near and dear. Why is that technique good? When is it bad? How does it really help improve a design?

It's taken me years to clarify my understanding of these foundational concepts. (Maybe I'm just slow.) In many ways, though, this is only the first step. I still have a thousand miles to go. Come walk with me.
