AoAD2 Practice: Reflective Design

This is an excerpt from The Art of Agile Development, Second Edition. Visit the Second Edition home page for additional excerpts and more!

This excerpt is copyright 2007, 2021 by James Shore and Shane Warden. Although you are welcome to share this link, do not distribute or republish the content without James Shore’s express written permission.

📖 The full text of this section is available below, courtesy of the Art of Agile Development book club! Join us on Fridays from 8-8:45am Pacific for wide-ranging discussions about Agile. Details here.

Reflective Design


Every day, our code is better than it was the day before.

Traditional approaches to design assume code shouldn’t change. Instead, new features and capabilities are supported by adding new code. A traditional design supports this by anticipating what might be needed and building in extensibility “hooks,” in the form of inheritance, dependency injection, and so forth, so code for those features can be added in the future. The Open-Closed Principle, a classic design guideline, illustrates this mindset: “Software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification.”
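For illustration, here is what such an extensibility hook might look like. The TaxCalculator example and its injected rules are hypothetical, not from the book; the hook is the injected list of rule functions:

```typescript
// A predictive-design "hook": tax rules are injected, so new rules can be
// added without modifying TaxCalculator itself. Names here are illustrative.
type TaxRule = (subtotal: number) => number;

class TaxCalculator {
  // Dependency injection is the extensibility hook: the class is "closed"
  // to modification but "open" to extension via new TaxRule functions.
  constructor(private readonly rules: TaxRule[]) {}

  totalTax(subtotal: number): number {
    return this.rules.reduce((sum, rule) => sum + rule(subtotal), 0);
  }
}

// Extending without modifying: add a rule; don't touch the class.
const flatRate: TaxRule = (subtotal) => subtotal * 0.25;
const calculator = new TaxCalculator([flatRate]);
console.log(calculator.totalTax(100)); // → 25
```

The hook pays off only if future changes fit the shape the designer predicted; that bet is exactly what reflective design declines to make.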

Simple Design

But Agile teams create simple designs that don’t anticipate the future. Their designs don’t have hooks. Instead, Agile teams have the ability to refactor their code and change its design. This creates the opportunity for an entirely different approach to design: one in which entities are not designed to be extended, but are designed to be modified instead.

I call this approach reflective design.

How Reflective Design Works

Reflective design is in contrast to traditional design, which I call predictive design. In predictive design, you predict what your software will need to do, based on your current requirements and best guess about how those requirements might change, then you create a design that cleanly anticipates all those needs.

Reflective design only cares about the change you’re making right now.

In contrast, reflective design doesn’t speculate about the future. It only cares about the change you’re making right now. When using reflective design, you analyze your existing code in the context of your software’s existing functionality, then figure out how you can improve the code to better support what you’re currently working on.

  1. Look at the code you’re about to work on. If you’re not familiar with it, reverse-engineer its design. For complicated code, drawing diagrams, such as a class diagram or sequence diagram, can help.

  2. Identify flaws in the design. What’s hard to understand? What doesn’t work well? If you’ve worked with this code recently, what caused problems? What will get in your way as you work on your current task?

  3. Choose one thing to improve first. Think of a design change that will clean up the code and make your current task easier or better. If big design changes are needed, talk them over with your teammates.

  4. Incrementally refactor the code to reach the desired design. Pay attention to how well the design changes work in practice. If they don’t work as well as hoped, change direction.

  5. Repeat until your task is done and the code is as clean as you want to make it. At a minimum, it needs to be a tiny bit better than when you started.

Reflective Design in Practice

I once had to replace the login infrastructure for one of my websites. My old authentication provider, Persona, had been discontinued, so I needed to switch to a new authentication provider, Auth0. This was a big change that required a new sign-up flow.

Feature Flags

Rather than planning out this whole change in advance, I used reflective design to take it step by step. I focused on my first story, which was to add a login flow that used Auth0. It would be hidden by a feature flag until the Auth0 change was done.

My first step was to reverse-engineer the design of the code. It had been several years since I had worked with this code, so it was like I had never seen it before. Fortunately, although the code was far from perfect, I had focused on simple design, so it was easy to understand. No method was longer than 20 lines of code, and most were less than 10. The largest file was 167 lines.

I started with the existing login endpoint. I didn’t do a deep dive; I just looked at each file’s imports and traced the dependencies. The login endpoint depended on PersonaClient and SubscriberAccount. PersonaClient depended on HttpsRestClient, which was a wrapper for third-party code. SubscriberAccount depended on RecurlyClient, which in turn depended on HttpsRestClient.

These relationships are illustrated in the “Authentication Design Analysis” figure. I didn’t actually make a class diagram at the time; I just opened the files in my editor. The relationships were simple enough that I could hold them all in my head.

A UML diagram showing three packages: “www,” “model,” and “persistence.” The “www” package has a class named “login,” the “model” package has a class named “SubscriberAccount,” and the “persistence” package has three classes, named “PersonaClient,” “RecurlyClient,” and “HttpsRestClient.” The diagram shows that “login” uses “PersonaClient” and “SubscriberAccount,” “SubscriberAccount” uses “RecurlyClient,” and both “PersonaClient” and “RecurlyClient” have a reference to “HttpsRestClient.”

Figure 1. Authentication design analysis

Next, I needed to identify flaws in the design. There were a lot. This was some of the earliest code I had written for the site, nearly four years prior, and I had learned a lot since then.

  • I didn’t separate my logic from my infrastructure. Instead, SubscriberAccount (logic) depended directly on RecurlyClient (infrastructure).

  • SubscriberAccount didn’t do anything substantial. Instead, a separate User class was responsible for user-related logic. The purpose of SubscriberAccount wasn’t clear.

  • None of the infrastructure classes (PersonaClient, RecurlyClient, and HttpsRestClient) had tests. When I first wrote them, I didn’t know how to write tests for them, so I had just tested them manually.

  • The login endpoint didn’t have tests, because the infrastructure classes weren’t written to be testable. Login had a lot of complexity, because it also validated subscription status. The lack of tests was a risk.

Focus your efforts on what matters most.

There were a lot of things I could have changed, but part of the trick of reflective design is to focus your efforts on what matters most. Although the vestigial SubscriberAccount class and its dependency on RecurlyClient were problems, fixing them wouldn’t make writing the new login endpoint easier.

The core structure of having the login endpoint depend on PersonaClient also made sense. I decided that I’d implement a similar Auth0Client class for the Auth0 login endpoint.

Fast Reliable Tests

The lack of testability was clearly the biggest problem. I wanted my new login endpoint to have sociable tests. For that to happen, Auth0Client needed to be nullable [Shore2018b], and for that, I needed HttpsRestClient to be nullable. While I was at it, I wanted to add narrow integration tests to HttpsRestClient.
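The nullable pattern [Shore2018b] can be sketched like this. The book doesn't show HttpsRestClient's real interface, so the Transport abstraction and the method names here are assumptions:

```typescript
// A sketch of a "nullable" infrastructure wrapper: createNull() swaps the
// real transport for an embedded, configurable stub, so sociable tests can
// run without touching the network. All names here are illustrative.
interface Transport {
  request(path: string): Promise<string>;
}

// Stand-in for real HTTPS I/O; a real implementation would wrap the
// third-party HTTP library mentioned in the text.
class RealHttpsTransport implements Transport {
  async request(path: string): Promise<string> {
    throw new Error(`real network I/O not shown (${path})`);
  }
}

class HttpsRestClient {
  // Production code gets the real transport...
  static create(): HttpsRestClient {
    return new HttpsRestClient(new RealHttpsTransport());
  }

  // ...while tests get an instance whose "network" returns a canned response.
  static createNull(cannedResponse = "{}"): HttpsRestClient {
    return new HttpsRestClient({ request: async () => cannedResponse });
  }

  private constructor(private readonly transport: Transport) {}

  get(path: string): Promise<string> {
    return this.transport.request(path);
  }
}

// Code that depends on HttpsRestClient can now be tested directly:
const client = HttpsRestClient.createNull('{"ok":true}');
client.get("/v1/login").then((body) => console.log(body)); // → {"ok":true}
```

The key property is that production code and tests share the same wrapper class, so tests exercise real collaborations instead of mocking the wrapper away.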

These changes weren’t everything I needed to do, but they were the obvious first step. Now I was ready to incrementally modify the code to get where I wanted to be:

  1. Added narrow integration tests to HttpsRestClient and cleaned up edge cases. (3 hours)

  2. Made HttpsRestClient nullable. (1 hour)

  3. Made RecurlyClient nullable. (1.25 hours)

  4. Made PersonaClient nullable. (0.75 hours)

  5. Modified HttpsRestClient to better support Auth0Client’s needs. (0.75 hours)

  6. Implemented Auth0Client. (2 hours)

Reflective design doesn’t always involve a big change. Once Auth0Client was implemented, my next task was to implement a feature flag that would allow me to manually test the Auth0 login endpoint in production, but hide it from regular users.

Implementing the feature flag was a much smaller task, but it followed the same reflective approach. First, I reviewed the SiteContext class that would contain the flag and the AuthCookie class it depended upon. Second, I identified flaws: the design was fine, but the tests weren’t up to my current standards. Third, I decided how to improve: fix the tests. Fourth, I refactored incrementally: I reordered the SiteContext tests to make them clearer, and migrated the AuthCookie tests from an old test framework to my current one.

Altogether, this was only about half an hour of work, so the steps weren’t really that distinct. It was more a matter of “look at the code, see a few obvious issues, fix the issues.” Reflective design isn’t necessarily a crisp sequence of steps. The important part is that, while you work, you’re constantly reflecting on your code’s design and making improvements.
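A feature flag like the one described might look something like this sketch. SiteContext and AuthCookie are the names from the text, but their internals, and the idea of keying the flag to a test user's email, are assumptions for illustration:

```typescript
// A minimal feature-flag sketch: the Auth0 login endpoint is visible only
// to a hard-coded test user, hidden from everyone else. Internals are
// hypothetical; any mechanism that identifies the tester would do.
class AuthCookie {
  constructor(readonly email: string | null) {}
}

class SiteContext {
  private static readonly TEST_USERS = new Set(["tester@example.com"]);

  constructor(private readonly cookie: AuthCookie) {}

  auth0LoginEnabled(): boolean {
    return (
      this.cookie.email !== null &&
      SiteContext.TEST_USERS.has(this.cookie.email)
    );
  }
}

const regularUser = new SiteContext(new AuthCookie("visitor@example.com"));
const tester = new SiteContext(new AuthCookie("tester@example.com"));
console.log(regularUser.auth0LoginEnabled()); // → false
console.log(tester.auth0LoginEnabled()); // → true
```

Once the Auth0 flow is fully live, the flag and its check are deleted, which is why keeping the flag's logic in one small class pays off.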

Reverse-Engineering the Design

The first step in reflective design is to analyze your existing code and reverse-engineer its design, if you don’t already understand it.

The best approach is to ask somebody on the team to explain the design to you. A conversation around a whiteboard sketch, whether in-person or remote, is a fast and effective way to learn, and it will often turn into a collaboration around possible improvements.

In some cases, no one on the team will understand the design, or you may wish to dive into the code yourself. When that happens, start by thinking about the responsibilities of the source files. Choose the file whose responsibilities seem most closely related to your current task. If nothing else, you can often start with the UI and trace the dependencies from there. For example, when I analyzed the authentication code, I started with the endpoint related to the login button.

Once you have a starting point, open up the file and skim through the method and function names. Use them to confirm or revise your guess about the file’s responsibilities. If you need more clues, skim through the test names in the file’s tests. Then look at this file’s dependencies (typically, its imports). Analyze those files, too, and repeat until the dependencies are no longer relevant to the change you’re making.

Now that you have a good idea of the files involved and each of their responsibilities, go back through and see how they relate to one another. If it’s complicated, draw a diagram. You can use a formal modeling technique, such as UML, but an ad-hoc sketch is just as good. I usually start by drawing boxes for each module or class, and lines with labels to show how they relate to one another. When the code is particularly complicated, I’ll create a sequence diagram, which has a column for each module or class instance, and arrows between columns showing function calls.

Some tools will automatically create UML diagrams from your source code. I prefer to draw my diagrams by hand, studying the code as I go. It takes longer, but I end up with a much better understanding of how the code works.

This shouldn’t take long. If it does, remember that the best way to understand the design is to ask somebody on the team to describe it to you. Unless your team works with a lot of code it didn’t build, you should rarely have trouble finding someone who understands the design of existing code. Your team wrote it, after all. A quick review to update your understanding should be enough.

Identifying Improvements

All code has an underlying beauty. That’s the most important thing to remember when looking for design improvements. It’s easy to look at existing code and think, “This is terrible.” And it may actually be terrible—although you should be careful not to assume a design is terrible just because you don’t understand the code immediately. Code takes time to understand, no matter how well it’s designed.

But even if the code is terrible, it was most likely created with some underlying design in mind. That design might have gotten crufty over time, but somewhere underneath, there’s the seed of a good idea.

Your job is to find and appreciate the code’s underlying beauty.

Your job is to find and appreciate that underlying beauty. You don’t have to keep the original design if it’s no longer appropriate, but you do need to understand it. Quite often, the original design still makes sense. It needs tweaks, not wholesale revision.

To return to the authentication example, the login endpoint depended on PersonaClient, which depended on HttpsRestClient. None of the code was testable, which resulted in some ugly, untested login code. But the core idea of creating infrastructure wrappers was sound. Rather than abandon that core idea, I amplified it by making the infrastructure wrappers nullable, which later allowed me to use test-driven development to make a new, cleaner Auth0 login endpoint.

That’s not to say that the existing design will be perfect. There’s always something to improve. But as you think about improvements, don’t look for ways to scrap everything and start over. Instead, look for problems that detract from the underlying beauty. Restore and improve the design. Don’t reinvent it.

Code Smells

Code smells are condensed nuggets of wisdom about design problems. They’re a great way to notice opportunities for improvement in your code.

Noticing a code smell doesn’t necessarily mean there’s a problem with the design. It’s like a funky smell in the kitchen: it could indicate that it’s time to take out the garbage, or it could just mean that someone’s been cooking with a particularly pungent cheese. Either way, when you smell something funny, take a closer look.

Martin Fowler, writing with Kent Beck, has an excellent discussion of code smells in chapter 3 of Refactoring [Fowler2018]. It’s well worth reading. The following sections summarize a few of the smells I think are most important, including some that Fowler and Beck didn’t mention.1

1Code Class, Squashed Errors, Coddled Nulls, Time Dependency, and Half-Baked Objects are my invention.

Shotgun Surgery and Divergent Change

These two smells help you identify cohesion problems in your code. Shotgun Surgery occurs when you have to modify multiple modules or classes to make a single change. It’s an indication that the concept you’re changing needs to be centralized. Give it a name and module of its own.

Divergent Change is just the opposite: it occurs when unrelated changes affect the same module or class. It’s an indication that the module has too many responsibilities. Split those responsibilities into multiple modules.

Primitive Obsession and Data Clumps

Primitive Obsession occurs when important design concepts are represented by generic types: currency represented with a decimal, for example, or a subscription renewal date represented with a Date. This leads to code involving those concepts being spread around the codebase, increasing duplication and decreasing cohesion.

Data Clumps are similar. They occur when several variables always appear together, representing some concept, but they don’t have a class or type that represents them. For example, the code might consistently pass street1, street2, state, country, and postalCode strings to various functions or methods. They’re a data clump representing an address.

The solution is the same in both cases: encapsulate the concept in a dedicated type or class.

Data Class and Code Class

One of the most common object-oriented design mistakes I see is data and code that are in separate classes. This often leads to duplicate data-manipulation code. When you have a class that’s little more than instance variables combined with getters and setters, you have a Data Class. Similarly, when you have a class that’s just a container for functions, with no per-instance state, you have a Code Class.

Code Classes aren’t necessarily a problem on their own, but they’re often found alongside Data Classes, Primitive Obsession, or Data Clumps. Reunite the code and its data: improve cohesion by putting methods in the same class as the data they operate upon.
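As a before/after sketch, with a hypothetical Subscription example (the proration logic is invented for illustration):

```typescript
// Before: a Data Class holding fields and a separate Code Class holding
// the math, forcing every caller to know about both.
//   class SubscriptionData { plan = ""; monthlyCents = 0; }       // data only
//   class SubscriptionMath {                                      // code only
//     static proratedCents(data: SubscriptionData, days: number) { /* ... */ }
//   }

// After: the method lives with the data it operates on.
class Subscription {
  constructor(readonly plan: string, readonly monthlyCents: number) {}

  proratedCents(daysUsed: number, daysInMonth: number): number {
    return Math.round((this.monthlyCents * daysUsed) / daysInMonth);
  }
}

const subscription = new Subscription("deluxe", 3000);
console.log(subscription.proratedCents(10, 30)); // → 1000
```

Callers now depend on one cohesive class, and the proration rule can't drift into multiple slightly different copies.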

Squashed Errors and Coddled Nulls

Robust error handling is one of the things that separates the great programmers from the merely good. All too often, code that’s otherwise well-written will throw up its metaphorical hands when it encounters an error. A common construct is to catch exceptions, log an error, and then return null or some other meaningless value. It’s particularly common in Java, where exception handling is required by the compiler.

These Squashed Errors lead to problems in the future, because the null ends up being used as a real value somewhere else in the code. Instead, handle errors only when you’re able to provide a meaningful alternative, such as retrying or providing a useful default. Otherwise, propagate the error to your caller.

Coddled Nulls are a related issue. They occur when a function receives an unexpected null value, either as a parameter or as a return value from a function it calls. Knowing the null will cause a problem, but not knowing what to do with it, the programmer checks for null and then returns null themselves. The null cascades deep into the application, causing unpredictable failures later in the execution of the software. Sometimes the null makes it into the database, leading to recurring application failures.

Instead, adopt a fail fast strategy. (See the “Fail Fast” section.) Don’t allow null as a parameter unless it has explicitly defined semantics. Don’t return null to indicate an error; throw an exception instead. When you receive null where it wasn’t expected, throw an exception.
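A small sketch of the difference, using a hypothetical parseRenewalDate function:

```typescript
// Squashed version (don't do this): the error becomes a null that
// detonates somewhere far from the actual problem.
//   function parseRenewalDate(raw: string): Date | null {
//     const parsed = new Date(raw);
//     if (Number.isNaN(parsed.getTime())) {
//       console.error(`bad date: ${raw}`);
//       return null; // caller now has to coddle this null... or forget to
//     }
//     return parsed;
//   }

// Fail-fast version: reject bad input immediately, while the context
// (the raw string, the call site) is still known.
function parseRenewalDate(raw: string): Date {
  const parsed = new Date(raw);
  if (Number.isNaN(parsed.getTime())) {
    throw new Error(`Invalid renewal date: "${raw}"`);
  }
  return parsed;
}

console.log(parseRenewalDate("2021-06-01").getUTCFullYear()); // → 2021
// parseRenewalDate("not a date") throws instead of returning null.
```

The exception surfaces in the first test run or the first log entry, rather than as a mystery null in the database months later.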

Time Dependencies and Half-Baked Objects

Time Dependencies occur when a class’s methods must be called in a specific order. Half-Baked Objects are a special case of Time Dependency: they must first be constructed, then initialized with a method call, then used.

Time Dependencies and Half-Baked Objects often indicate an encapsulation problem. Rather than managing its state itself, the class expects its callers to manage some of its state. This results in bugs and duplicate code in callers. Look for ways to encapsulate the class’s state more effectively. In some cases, you may find that your class has too many responsibilities and would benefit from being split into multiple classes.
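A sketch of the fix, using a hypothetical ReportGenerator; the construct-then-initialize protocol in the comment is the smell, and the constructor below is the cure:

```typescript
// Half-baked version (don't do this): callers must remember the ritual.
//   const generator = new ReportGenerator();
//   generator.initialize({ title: "Q3 Renewals" }); // forget this and
//   generator.header();                             // ...this fails at runtime

// Fixed: the constructor produces a fully usable object, so there is no
// invalid "constructed but not initialized" state for callers to mismanage.
class ReportGenerator {
  private readonly title: string;

  constructor(config: { title?: string }) {
    // All initialization happens here, exactly once.
    this.title = config.title ?? "Untitled Report";
  }

  header(): string {
    return `=== ${this.title} ===`;
  }
}

const report = new ReportGenerator({ title: "Q3 Renewals" });
console.log(report.header()); // → === Q3 Renewals ===
```

When initialization genuinely can't happen in a constructor (asynchronous setup, for instance), a static factory that returns only fully initialized instances preserves the same guarantee.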

Incrementally Refactor

Test-Driven Development

After you’ve decided what to change, make the change using a series of small refactorings. Work incrementally, one small step at a time, making sure the tests pass after each step. Not counting time spent thinking, each refactoring should be a minute or two of work at most. Often less. Sometimes, you might need to add missing functions or methods; build those using test-driven development.

As you work, you’ll discover that some of your improvement ideas were, in fact, not good ideas. Keep your plans flexible. As you make each change, evaluate the result with reflective design as well. Commit your code frequently so you can revert ideas that don’t work out.

But don’t worry about making the code perfect. As long as you leave it better than you found it, it’s good enough for now.


Questions

How is reflective design different than refactoring?

Reflective design is deciding where to drive the car. Refactoring is pressing the pedals and moving the steering wheel.

How do we make time for reflective design?

It’s a normal, non-negotiable part of your work. You’re supposed to leave the code at least a little bit better than you found it, so when you start a task, begin with reflective design to see what you’re going to improve. Sometimes, those improvements will even decrease the overall time needed for the task. But even if they don’t make your task quicker now, they’ll make a future task faster. Keeping the design clean is a net win.

On the other hand, you only need to leave the code a little bit better than you found it. Don’t fix everything. Instead, use slack to decide when to make time for additional opportunities, as described in the “Improving Internal Quality” section.


Prerequisites

Anybody can use reflective design to identify improvements. It’s another tool in the toolbelt, and there’s no problem using it alongside predictive or ad-hoc design approaches.

Test-Driven Development

Actually following through on the improvements requires refactoring, and that generally relies on a good suite of tests.


Indicators

When you use reflective design well:

  • Your team constantly improves the design of existing code.

  • When working on a task, programmers often refactor to make the task easier.

  • Refactorings are focused where they’ll do the most good.

  • The code steadily becomes easier and more convenient to work with.

Alternatives and Experiments

Teams who don’t know how to use reflective design often advocate for rewriting code instead, or taking a big chunk of time to refactor. Although this works, it’s clumsy in comparison. It can’t be done incrementally, leading to conflicts between programmers and stakeholders about how to allocate the team’s time.

Reflective design is really about incremental design improvements. It’s the same theme of incremental work that runs throughout the Delivering zone practices. You don’t need to use the exact approach described here, so feel free to experiment. As you do, focus on techniques that allow you to identify improvements and make changes gradually, without “stopping the world” to make a change.

Further Reading

Episode nine of [Shore2020b], “How to Add a Feature (Cleanly),” demonstrates reflective design on a small codebase.

Martin Fowler’s Refactoring [Fowler2018] accompanies each of its refactoring recipes with an in-depth discussion of why and when the refactoring is useful. They’re an invaluable source of design insights. Study them carefully to hone your ability to recognize design opportunities.

Share your thoughts about this excerpt on the AoAD2 mailing list or Discord server. Or come to the weekly book club!

For more excerpts from the book, see the Second Edition home page.

If you liked this entry, check out my best writing and presentations, and consider subscribing to updates by email or RSS.