The Art of Agile Development: Customer Tests

The second edition is now available! The Art of Agile Development has been completely revised and updated with all new material. Visit the Second Edition page for more information, or buy it on Amazon.

New Information

Although I still fully embrace using the Describe, Demonstrate, Develop process described for this practice, I've found that using tools such as Fit to automate those examples results in unacceptable maintenance burdens. The value is in the collaborative Describe, Demonstrate, Develop process, not the automated tests. I recommend ignoring the final section of this practice, "Automating the Examples." Everything else is still correct.

I describe the reasons for this change of heart in The Problems With Acceptance Testing. In Alternatives to Acceptance Testing, I explain what to do instead.

Full Text

The following text is excerpted from The Art of Agile Development by James Shore and Shane Warden, published by O'Reilly. Copyright © 2008 the authors. All rights reserved.

Customer Tests

Whole Team

We implement tricky domain concepts correctly.

Ubiquitous Language

Customers have specialized expertise, or domain knowledge, that programmers don't have. Some areas of the application—what programmers call domain rules—require this expertise. You need to make sure that the programmers understand the domain rules well enough to code them properly in the application. Customer tests help customers communicate their expertise.

Ten-Minute Build

Don't worry; this isn't as complicated as it sounds. Customer tests are really just examples. Your programmers turn them into automated tests, which they then use to check that they've implemented the domain rules correctly. Once the tests are passing, the programmers will include them in their ten-minute build, which will inform the programmers if they ever do anything to break the tests.

To create customer tests, follow the Describe, Demonstrate, Develop process outlined in the next section. Use this process during the iteration in which you develop the corresponding stories.


Customer tests are for communication.

At the beginning of the iteration, look at your stories and decide whether there are any aspects that programmers might misunderstand. You don't need to provide examples for everything. Customer tests are for communication, not for proving that the software works. (See No Bugs in Chapter 7).

For example, if one of your stories is "Allow invoice deleting", you don't need to explain how invoices are deleted. Programmers understand what it means to delete something. However, you might need examples that show when it's okay to delete an invoice, especially if there are complicated rules to ensure that invoices aren't deleted inappropriately.

If you're not sure what the programmers might misunderstand, ask. Be careful, though; when business experts and programmers first sit down to create customer tests, both groups are often surprised by the extent of existing misunderstandings.

Once you've identified potential misunderstandings, gather the team at a whiteboard and summarize the story in question. Briefly describe how the story should work and the rules you're going to provide examples for. It's okay to take questions, but don't get stuck on this step.

For example, if you decided to discuss invoice deletion, you might say:

Customer: One of the stories in this iteration is to add support for deleting invoices. In addition to the screen mockups we've given you, we felt some customer tests would be appropriate. Deleting invoices isn't as simple as it appears because we have to maintain an audit trail.

There are a bunch of rules around this issue. I'll get into the details in a moment, but the basic rule is that it's okay to delete invoices that haven't been sent to customers because presumably that kind of invoice was a mistake. Once an invoice has been sent to a customer, it can only be deleted by a manager. Even then, we have to save a copy for auditing purposes.

Programmer: When an invoice hasn't been sent and gets deleted, is it audited?

Customer: No—in that case, it's just deleted. I'll provide some examples in a moment.


After a brief discussion of the rules, provide concrete examples that illustrate the scenario. Tables are often the most natural way to describe this information, but you don't need to worry about formatting. Just get the examples on the whiteboard.

Customer (continued): As an example, this invoice hasn't been sent to customers, so an Account Rep can delete it.

Sent | User        | Okay to delete
N    | Account Rep | Y

In fact, anybody can delete it—CSRs, managers, and admins.

Sent | User    | Okay to delete
N    | Manager | Y
N    | Admin   | Y

But once it's sent, only managers and admins can delete it, and even then it's audited.

Sent | User        | Okay to delete
Y    | Account Rep | N
Y    | Manager     | Audited
Y    | Admin       | Audited

Also, it's not a simple case of whether something has been "sent" or not. "Sent" actually means one of several conditions. If you've done anything that could have resulted in a customer seeing the invoice, we consider it "sent". Now only a manager or admin can delete it.

Sent          | User        | Okay to delete
Printed       | Account Rep | N
Exported      | Account Rep | N
Posted to web | Account Rep | N
Emailed       | Account Rep | N

As you provide examples, be completely specific. It's tempting to create generic examples, such as "this invoice hasn't been sent to customers, so anybody can delete it", but those get confusing quickly and programmers can't automate them. Provide specifics. "This invoice hasn't been sent to customers, so an account rep can delete it." This will require you to create more examples—that's a good thing.

Your discussion probably won't be as smooth and clean as this example. As you discuss business rules, you'll jump back and forth between describing the rules and demonstrating them with examples. You'll probably discover special cases that you hadn't considered. In some cases, you might even discover whole new categories of rules that you need customer tests for.

One particularly effective way to work is to elaborate on a theme. Start by discussing the most basic case and providing a few examples. Next, describe a special case or additional detail and provide a few more examples. Continue in this way, working from simplest to most complicated, until you have described all aspects of the rule.

You don't need to show all possible examples. Remember, the purpose here is to communicate, not to exhaustively test the application. You only need enough examples to show the differences in the rules. A handful of examples per case is usually enough, and sometimes just one or two is sufficient.


When you've covered enough ground, document your discussion so the programmers can start working on implementing your rules. This is also a good time to evaluate whether the examples are in a format that works well for automated testing. If not, discuss alternatives with the programmers.

Programmer: Okay, I think we understand what's going on here. We'd like to change your third set of examples, though—the ones where you say "Y" for Sent. Our invoices don't have a "Sent" property. We'll calculate that from the other properties you mentioned. Is it okay if we use "Emailed" instead?

Customer: Yeah, that's fine. Anything that sends it works for that example.

Don't formalize your examples too soon. While you're brainstorming, it's often easiest to work on the whiteboard. Wait until you've worked out all the examples around a particular business rule (or part of a business rule) before formalizing it. This will help you focus on the business rule rather than formatting details.

In some cases, you may discover that you have more examples and rules to discuss than you realized. The act of creating specific examples often reveals scenarios you hadn't considered. Testers are particularly good at finding these. If you have a lot of issues to discuss, consider letting some or all of the programmers get started on the examples you have while you figure out the rest of the details.

Don't use customer tests as a substitute for test-driven development.
Test-Driven Development

Programmers, once you have some examples, you can start implementing the code using normal test-driven development. Don't use the customers' tests as a substitute for writing your own tests. Although it's possible to drive your development with customer tests—in fact, this can feel quite natural and productive—the tests don't provide the fine-grained support that TDD does. Over time, you'll discover holes in your implementation and regression suite. Instead, pick a business rule, implement it with TDD, then confirm that the associated customer tests pass.
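As a rough sketch of how this plays out, here is how the invoice-deletion rule from the earlier examples might be driven out with ordinary unit-level tests, written in Java (one of the languages Fit supports). Every name here—InvoiceAuditer, Permission, and so on—is an assumption for illustration, not code from the book:

```java
// A TDD-style sketch of the invoice-deletion rule. All names here
// (InvoiceAuditer, Permission, and so on) are illustrative assumptions.

enum InvoiceStatus { NOT_SENT, PRINTED, EXPORTED, POSTED_TO_WEB, EMAILED }
enum UserRole { ACCOUNT_REP, MANAGER, ADMIN }
enum Permission { ALLOWED, DENIED, AUDITED }

class InvoiceAuditer {
    private final UserRole user;
    private final InvoiceStatus status;

    InvoiceAuditer(UserRole user, InvoiceStatus status) {
        this.user = user;
        this.status = status;
    }

    // Encodes the whiteboard examples: anyone may delete an unsent
    // invoice; once it's sent, only managers and admins may delete it,
    // and those deletions are audited.
    public Permission deletePermission() {
        if (status == InvoiceStatus.NOT_SENT) return Permission.ALLOWED;
        if (user == UserRole.MANAGER || user == UserRole.ADMIN) return Permission.AUDITED;
        return Permission.DENIED;
    }
}

public class InvoiceAuditerTest {
    public static void main(String[] args) {
        check(UserRole.ACCOUNT_REP, InvoiceStatus.NOT_SENT, Permission.ALLOWED);
        check(UserRole.ACCOUNT_REP, InvoiceStatus.EMAILED, Permission.DENIED);
        check(UserRole.MANAGER, InvoiceStatus.EMAILED, Permission.AUDITED);
        check(UserRole.ADMIN, InvoiceStatus.PRINTED, Permission.AUDITED);
        System.out.println("all tests passed");
    }

    private static void check(UserRole user, InvoiceStatus status, Permission expected) {
        Permission actual = new InvoiceAuditer(user, status).deletePermission();
        if (actual != expected) throw new AssertionError(user + "/" + status + " -> " + actual);
    }
}
```

Once a rule like this passes at the unit level, running the customers' tables against it should be a confirmation, not a discovery.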

Focus on Business Rules

One of the most common mistakes in creating customer tests is describing what happens in the user interface rather than providing examples of business rules. For example, to show that an account rep must not delete a mailed invoice, you might make the mistake of writing this:

  1. Log in as an account rep

  2. Create new invoice

  3. Enter data

  4. Save invoice

  5. Email invoice to customer

  6. Check if invoice can be deleted (should be "no")

What happened to the core idea? It's too hard to see. Compare that to the previous approach:

When an invoice has been emailed, an account rep may not delete it... or, as you might draw it on the whiteboard:

Sent    | User        | Okay to delete
Emailed | Account Rep | N

Good examples focus on the essence of your rules. Rather than imagining how those rules might work in the application, just think about what the rules are. If you weren't creating an application at all, how would you describe those rules to a colleague? Talk about things rather than actions. Sometimes it helps to think in terms of a template: "When (scenario X), then (scenario Y)."

It takes a bit of practice to think this way, but the results are worth it. The tests become more compact, easier to maintain, and (when implemented correctly) faster to run.

Ask Customers to Lead

Remember the "Customer" in "Customer Tests".

Team members, watch out for a common pitfall in customer testing: no customers! Some teams have programmers and testers do all the work of customer testing, and some teams don't involve their customer at all. In others, a customer is present only as a mute observer. Don't forget the "customer" in "customer tests." The purpose of these activities is to bring the customers' knowledge and perspective to the team's work. If programmers or testers take the reins, you've lost that benefit and missed the point.

In some cases, customers may not be willing to take the lead. Programmers and testers may be able to solve this problem by asking the customers for their help. When programmers need domain expertise, they can ask customers to join the team as they discuss examples. One particularly effective technique is to ask for an explanation of a business rule, pretend to be confused, then hand a customer the whiteboard marker and ask him to draw an example on the board.

If customers won't participate in customer testing at all, this may indicate a problem with your relationship with the customers. Ask your mentor (see "Find a Mentor" in Chapter 2) to help you troubleshoot the situation.

Automating the Examples

Programmers may use any tool they like to turn the customers' examples into automated tests. Ward Cunningham's Fit (Framework for Integrated Test)1 is specifically designed for this purpose. It allows you to use HTML to mix descriptions and tables, just as in my invoice auditing example, then runs the tables through programmer-created fixtures to execute the tests.

1 Available online as a free download.

See [Mugridge & Cunningham] for details about using Fit. It's available in several languages, including Java, .NET, Python, Perl, and C++.

You may be interested in FitNesse, a variant of Fit. FitNesse is a complete IDE for Fit that uses a Wiki for editing and running tests. (Fit is a command-line tool and works with anything that produces tables in HTML.)

Exploratory Testing

Fit is a great tool for customer tests because it allows customers to review, run, and even expand on their own tests. Although programmers have to write the fixtures, customers can easily add to or modify existing tables to check an idea. Testers can also modify the tests as an aid to exploratory testing. Because the tests are written in HTML, customers and testers can modify them with any HTML editor, including Microsoft Word.

Programmers, don't make Fit too complicated. It's a deceptively simple tool. Your fixtures should work like unit tests, focusing on just a few domain objects. The invoice auditing tests, for example, would use a custom ColumnFixture. Each column in the table corresponds to a variable or method in the fixture. The code is almost trivial (see Example 9-1).

Example 9-1. Example fixture (C#)

  public class InvoiceAuditingFixture : ColumnFixture {
      public InvoiceStatus Sent;
      public UserRole User;

      public Permission OkayToDelete() {
          InvoiceAuditer auditer = new InvoiceAuditer(User, Sent);
          return auditer.DeletePermission;
      }
  }

Ubiquitous Language

Using Fit in this way requires a ubiquitous language and good design. A dedicated domain layer with Whole Value objects2 works best. Without it, you may have to write end-to-end tests, with all the challenges that entails. If you have trouble using Fit, talk to your mentor about whether your design needs work.
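A Whole Value, roughly, is a small immutable domain type that knows how to parse and print itself, which lets fixture cells compare directly against domain values. Here is a hedged Java sketch of what a Permission type like the one in Example 9-1 might look like as a Whole Value; the names and cell formats are assumptions, not code from the book:

```java
// A minimal Whole Value sketch (names and cell text are illustrative):
// a small immutable domain type with its own parsing and value-based
// equality, so a fixture can compare table cells against domain values.
public final class Permission {
    public static final Permission YES = new Permission("Y");
    public static final Permission NO = new Permission("N");
    public static final Permission AUDITED = new Permission("Audited");

    private final String label;

    private Permission(String label) { this.label = label; }

    // Parse the text that appears in a test table cell, such as "Audited".
    public static Permission parse(String cellText) {
        switch (cellText.trim()) {
            case "Y":       return YES;
            case "N":       return NO;
            case "Audited": return AUDITED;
            default: throw new IllegalArgumentException("Unknown permission: " + cellText);
        }
    }

    @Override public boolean equals(Object other) {
        return other instanceof Permission && ((Permission) other).label.equals(label);
    }

    @Override public int hashCode() { return label.hashCode(); }

    // What the value looks like when a test report renders it.
    @Override public String toString() { return label; }
}
```

With a type like this, a fixture method can simply return a Permission, and comparing it against a cell's text reduces to parse and equals.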

2See Domain-Driven Design [Evans] for a discussion of domain layers and [Cunningham] for information about Whole Value.

I often see programmers try to make a complete library of generic fixtures so that no one need write another fixture. That misses the point of Fit, which is to segregate customer tests from programmer implementation. If you make generic fixtures, the implementation details will have to go into the tests, which will make them too complicated and obscure the underlying examples.

Most tests can be expressed with a simple ColumnFixture or RowFixture.


When do programmers run the customer tests?

Ten-Minute Build

Once the tests are passing, make them a standard part of your ten-minute build. As with programmers' tests, fix them immediately if they ever break.

Should we expand the customer tests when we think of a new scenario?


Absolutely! Often, the tests will continue to pass. That's good news; leave the new scenario in place to act as documentation for future readers. If the new test doesn't pass, talk with the programmers about whether they can fix it with iteration slack or whether you need a new story.

What about acceptance testing (also called functional testing)?

No Bugs

Automated acceptance tests tend to be brittle and slow. I've replaced acceptance tests with customer reviews (see "Customer Reviews" later in this chapter) and a variety of other techniques (see "A Little Lie" in Chapter 3).


Ubiquitous Language

When you use customer tests well, you reduce the number of mistakes in your domain logic. You discuss domain rules in concrete, unambiguous terms and often discover special cases that you hadn't considered. The examples influence the design of the code and help promote a ubiquitous language. When written well, the customer tests run quickly and require no more maintenance than unit tests do.


Test-Driven Development

Don't use customer tests as a substitute for test-driven development. Customer tests are a tool to help communicate challenging business rules, not a comprehensive automated testing tool. In particular, Fit doesn't work well as a test scripting tool—it doesn't have variables, loops, or subroutines. (Some people have attempted to add these things to Fit, but it's not pretty.) Real programming tools, such as xUnit or Watir, are better for test scripting.

Whole Team
Sit Together

In addition, customer tests require domain experts. The real value of the process is the conversation that explores and exposes the customers' business requirements and domain knowledge. If your customers are unavailable, those conversations won't happen.

Finally, because Fit tests are written in HTML, Fit carries more of a maintenance burden than xUnit frameworks do. Automated refactorings won't extend to your Fit tests. To keep your maintenance costs down, avoid creating customer tests for every business rule. Focus on the tests that will help improve programmer understanding, and avoid further maintenance costs by refactoring your customer tests regularly. Similar stories will have similar tests: consolidate your tests whenever you have the opportunity.


Some teams have testers, not customers, write customer tests. Although this introduces another barrier between the customers' knowledge and the programmers' code, I have seen it succeed. It may be your best choice when customers aren't readily available.

Customer tests don't have to use Fit or FitNesse. Theoretically, you can write them in any testing tool, including xUnit, although I haven't seen anybody do this.

Further Reading

Fit for Developing Software [Mugridge & Cunningham] is the definitive reference for Fit.

"Agile Requirements" [Shore 2005a], available online, is a series of essays about agile requirements, customer testing, and Fit.
