AoAD2 Chapter: Accountability (introduction)


This is a pre-release excerpt of The Art of Agile Development, Second Edition, to be published by O’Reilly in 2021. Visit the Second Edition home page for information about the open development process, additional excerpts, and more.

Your feedback is appreciated! To share your thoughts, join the AoAD2 open review mailing list.

This excerpt is copyright 2007, 2020, 2021 by James Shore and Shane Warden. Although you are welcome to share this link, do not distribute or republish the content without James Shore’s express written permission.


If Agile teams own their work and their plans, how do their organizations know they’re doing the right thing? How do they know that the team is doing their best possible work, given the resources, information, and people they have?

Organizations may be willing, even eager, for teams to follow an Agile approach, but this doesn’t mean Agile teams have carte blanche authority to do whatever they want. They’re still accountable to the organization. They need to demonstrate that they’re spending the organization’s time and money appropriately.

This chapter has the practices you need to be accountable to your organization:

  • The “Trust” practice: Work in a way that gives stakeholders confidence.

  • The “Stakeholder Demos” practice: Get feedback about your progress.

  • The “Forecasting” practice: Predict when software will be released.

  • The “Roadmaps” practice: Share your progress and plans.

  • The “Management” practice: How managers can help their teams excel.

Share your feedback about this excerpt on the AoAD2 mailing list! Sign up here.

For more excerpts from the book, or to get a copy of the Early Release, see the Second Edition home page.

AoAD2 Practice: Management




We help our teams excel.

Stakeholder demos and roadmaps allow managers to see what their teams are producing. But managers need more. They need to know whether their teams are working effectively and how they can help them succeed.

Unlike the other practices in this book, which are aimed at team members, this practice is for managers. It’s primarily for team-level managers, but the ideas can be applied by middle and senior managers as well. In an environment where teams decide for themselves how work will be done (see “Key Idea: Self-Organizing Teams”), what do managers do, and how do they help their teams excel?

Most organizations use measurement-based management: gathering metrics, asking for reports, and designing rewards to incentivize the right behavior. It’s a time-honored approach to management that stretches back to the invention of the assembly line.

Measurement-based management doesn’t work.

There’s just one problem. It doesn’t work.

Theory X and Theory Y

In the 1950s, Douglas McGregor identified two opposing styles of management: Theory X and Theory Y. Each style is based on an underlying theory of worker motivation.

Theory X managers believe that workers dislike work and try to avoid it. As a result, workers have to be coerced and controlled. Extrinsic motivators such as pay, benefits, and other rewards are the primary mechanism for forcing employees to do what is needed. Furthermore, Theory X managers believe, workers want to be treated this way, because they’re inherently unambitious and avoid responsibility. Under Theory X management, the design and implementation of extrinsic motivation schemes, using tools such as measurement and rewards, is central to good management.

Theory Y managers believe that workers enjoy work and are capable of self-direction. They seek responsibility and enjoy problem-solving. Intrinsic motivators such as the satisfaction of doing a good job, contributing to a group effort, and solving hard problems are the primary drivers of employee behavior. Under Theory Y management, providing context and inspiration, so workers can work without close supervision, is central to good management.

Measurement-based management is a Theory X approach. It’s based on using extrinsic motivators to incentivize correct behavior. Agile, in contrast, is a Theory Y approach. Agile team members are expected to be intrinsically motivated to solve problems and achieve organizational goals. They need to be able to decide for themselves what to work on, who will do it, and how the work will be done.

Agile requires Theory Y management.

These assumptions are built into the foundations of Agile. Theory Y management is expected and required for Agile to succeed. Theory X management won’t work. Even if you strip out the disrespect for workers, the underlying reliance on measurements and rewards distorts behavior and creates dysfunction. I’ll explain in a moment.

The Role of Agile Management

Some managers worry that there’s no place for them in an Agile environment. Nothing could be further from the truth. Managers’ role changes, but it isn’t diminished. In fact, by delegating details to their teams, managers are freed up to focus on activities that have more impact.

Agile managers manage the work system rather than individual work. They set their teams up for success. Their job is to guide their teams’ context so that each team makes correct choices without explicit management involvement. In practice, this means team managers:2

2Thanks to Diana Larsen for her contributions to this list.

  • Make sure the right people are on the team, so that the team has all the skills needed for its work. This includes coordinating hiring and promotions.

  • Make sure the team includes the coaches it needs, and act as a backup coach, particularly around interpersonal issues.

  • Mediate interpersonal conflicts, help team members navigate the chaos of change, and help team members jell as a team.

  • Help individual team members develop their careers. Mentor individuals to become future leaders and encourage team members to cross-train so that the team is resilient to the loss of any one person.

  • Monitor the team’s progress towards fluency (see the skill checklists in the introductions to Parts II-IV) and coordinate with the team’s coaches to procure training and other resources the team needs to reach fluency.

  • Procure the tools, equipment, and other resources the team needs to be productive.

  • Ensure that the team understands how their work fits into the big picture of the organization, that they have a charter (see the “Planning Your Chartering Session” sidebar), and that the charter is updated regularly.

  • Provide insights about how well the team is fulfilling their charter and how their work is perceived by stakeholders, particularly management and business stakeholders.

  • Maintain awareness of the relationships between the team and its stakeholders, and help the team understand when and why those relationships aren’t working well.

  • Advocate for the team within the rest of the organization, and coordinate with peer managers to advocate for each other’s teams. Help the team navigate organizational bureaucracy and remove impediments to their success.

  • Ensure organizational expectations around topics such as budgeting, governance, and reporting are fulfilled. Judiciously push for relaxing those requirements when it would help the team.

Measurement Dysfunction

Measurement-based management distorts behavior and causes dysfunction.

One thing you won’t see on that list: reporting and metrics. That’s because measurement-based management distorts behavior and causes dysfunction. Some examples:

Stories and story points

A team’s manager wanted to know if the team was productive, so they tracked the number of stories their team finished every iteration. The team cut back on testing, refactoring, and design so they could get more stories done. The result was reduced internal quality, more defects, and lower productivity. (Tracking capacity yields the same results. See the “Capacity Is Not Productivity” section for more about this common mistake.)

Code coverage

An executive mandated that all new code be tested. Eighty-five percent code coverage was the goal. “All new code needs tests,” he said.

Good tests are small, fast, and targeted, but they take care and thought. This executive’s teams worked on meeting the metric in the quickest and easiest way instead. They wrote tests that covered a lot of code, but they were slow and brittle, failed randomly, and often didn’t check anything important. Their code quality continued to degrade, their productivity declined, and their maintenance costs went up.

Lines of code

In an effort to encourage productivity, a company rewarded people for number of lines added, changed, or deleted per day. (Number of commits per day is a similar metric.) Team members spent less time thinking about design and more time cutting and pasting code. Their code quality declined, maintenance costs increased, and they struggled with “mushroom” defects that kept popping back up after people thought they had been fixed.

Say/do ratio

Although meeting commitments is important for building trust, it isn’t a good metric. One company made commitments a key value. “Accountability is very important here,” they said. “If you say you’re going to do something by a certain date, you have to do it. No excuses.”

Their teams became very conservative in their commitments. Their work expanded to fill the time available, reducing throughput. Managers started pushing back on excessively long deadlines. Now the teams had to rush their work and take shortcuts, resulting in reduced internal quality, more defects, higher maintenance costs, and customer dissatisfaction.

Defect counts

Which is easier: reducing the number of defects a team creates, or changing the definition of “defect?” An organization that tracked defect counts wasted time on contentious arguments about what counted as a defect. When the definition was too strict, the team spent time fixing defects that didn’t matter. When it was too loose, they shipped bugs to customers, hurting customer satisfaction.

Why Measurement Dysfunction is Inevitable

Rather than doing work that achieves the best result, people do work that achieves the best score.

When people believe that their performance will be judged based on a measurement, they change their behavior to get a better score on that measurement. But people’s time is limited. By doing more for the measurement, they must do less for something else. Rather than doing work that achieves the best result, they do work that achieves the best score.

Everybody knows that bad metrics cause problems. But that’s just because managers chose bad metrics, isn’t it? A savvy manager can prevent problems by carefully balancing their metrics... right?

Unfortunately, no. Robert Austin’s seminal book, Measuring and Managing Performance in Organizations [Austin 1996], explains:

The fundamental message of this book is that organizational measurement is hard. The organizational landscape is littered with the twisted wrecks of measurement systems designed by people who thought measurement was simple. If you catch yourself thinking things like, “Establishing a successful measurement program is easy if you just choose your measures carefully,” watch out! History has shown otherwise. (pp. 180-181)

The situation would be different if you could measure everything that mattered in software development. But you can’t. There are too many things that are important that—although they can be measured in some ways—can’t be measured well. Internal quality. Maintenance costs. Development productivity. Customer satisfaction. Word-of-mouth. Here’s Robert Austin again:

As a professional activity that has much mental content and is not very rotable, software development seems particularly poorly suited to measurement-based management... There is evidence that software development is plagued by measurement dysfunction. (pp. 111-112)

In practice, measurements will not be comprehensive, and inhabitants of the black box will gain control of the measurement instrument to make it report what will make them look good. (p. 131)

People—particularly in software development—hate this message. We love the fantasy of a perfectly rational and measurable world. Surely it’s just a matter of selecting the right measurements!

There is no way to measure everything that matters in software development.

It’s a pretty story, but it’s a trap. There is no way to measure everything that matters in software development. The result is an endless cycle of metrics programs, leading to dysfunctions, leading to new metrics, leading to new dysfunctions.

A [manager] who commits dysfunctional acts mistakenly believes she is in a fully [measurable] situation when she is, in fact, in a partially [measurable] situation... In real settings, managers are charged with controlling activity in their areas of organizational responsibility. Unfortunately, the need for control is often interpreted narrowly as a need for measurement-based control. The [manager’s] job is then usually perceived to be the redesign of [worker] tasks to make them more measurable. (pp. 126-127)

Even when dysfunction is discovered and it is revealed that full [measurement] has not been achieved, a [manager] may still resist the conclusion that full [measurement] cannot be achieved. She may conclude instead that she simply got it wrong when she attempted the last job redesign. An unending succession of attempts at job redesign may follow, as the [manager] tries earnestly to get it right... The result is that designers of software production systems are forever redesigning, replacing old modes of control, and substituting new but structurally similar modes, with predictable lack of success. (pp. 132-133)

Delegatory Management

Even if an effective measurement system was possible, which it is not, measurements are missing the point. Agile requires Theory Y management, not Theory X management, and Theory Y management is based on intrinsic motivators, not measurements and reward systems.

Rather than thinking about measurements and rewards, focus on what intrinsically motivates your team members. What do they love about their work? Is it creating something “insanely great” that customers love? Is it pushing the bounds of technical achievement? Is it being part of a high-functioning, jelled team? Or getting lost in the flow of productive work?

Whatever the motivation, inspire your teams by showing how their work will fulfill their needs. Provide them with the resources and information they need. And step back so they can take ownership and excel.

In contrast [to measurement-based management], delegation cannot produce distortion. If the customer’s value function changes, the change is immediately reflected in the effort allocation of the [worker], as long as he is aware of the change... Under delegation, workers are likely to take more initiative; they act in accordance with their own expectations instead of reacting to whatever carrot hangs before them. (p. 109)

Robert Austin

Make measurements inconsequential

Measurement dysfunction occurs even when managers say they won’t use measurements to assess performance. That’s because dysfunction is caused by what people believe, not by what managers say.

Unfortunately, people—especially software developers—tend to be cynical about these things. To avoid dysfunction, it’s not enough to say you won’t use the data; you have to make it structurally impossible to do so.

The easiest way to do so is to keep information private to the team. The team collects the data, the team analyzes the data, and the team discards the data. They report their conclusions and decisions, not the underlying data. If nobody else sees it, there’s no risk of distortion.

If that’s not possible, aggregate the data so that it can’t be attributed to any one person. Instead of using data to evaluate subordinates, use data to evaluate yourself. This can apply to all levels of the organization. Team managers see team measures, not individual measures. Directors see departmental measures, not team measures. And so forth.
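The “one level up” rule can be sketched in a few lines. Everything here is hypothetical, including the metric, names, and numbers; the point is only that each level of management reports a single aggregate and never exposes the underlying per-person or per-team data.

```python
# Hypothetical sketch: each level sees only aggregates from the level below.
from statistics import mean

# Hypothetical raw data: cycle time in days, per person, per team.
# In practice, this raw data would stay private to each team.
cycle_times = {
    "team-a": {"ayla": [2, 3, 5], "ben": [4, 4]},
    "team-b": {"cam": [1, 2], "dee": [3, 6, 2]},
}

def team_view(team):
    """What a team manager sees: one number per team, no individuals."""
    samples = [t for person in cycle_times[team].values() for t in person]
    return mean(samples)

def department_view():
    """What a director sees: one departmental number, no teams."""
    return mean(team_view(team) for team in cycle_times)

print(round(team_view("team-a"), 2))  # team-level aggregate only
print(round(department_view(), 2))    # department-level aggregate only
```

The design choice is structural: because the aggregation happens before the data leaves each level, there is nothing for the next level up to misuse.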

Go to gemba

If managers don’t get data about their subordinates, how do they know how to help? They go to gemba.

The phrase “Go to Gemba” comes from Lean Manufacturing. It means “go see for yourself.”3 The idea is that managers learn more about what needs to be done by seeing the actual work than by looking at numbers.

3“Gemba” is a Japanese word meaning “the actual place [where something happened],” so “go to gemba” literally means “go to the actual place.”

To learn about your teams, go see for yourself.

Managers, to learn about your teams, go see for yourself. Look at the code. Review the UI mockups. Sit in on stakeholder interviews. Attend a planning meeting.

Then think about how you want your team to improve. Ask yourself, “Why aren’t they already doing that themselves?” Assume positive intent: In most cases, it’s not a motivational issue; it’s a question of ability, organizational roadblocks, or—and don’t discount this one—the idea was already considered and set aside for good reasons that you’re not aware of. Crucial Accountability: Tools for Resolving Violated Expectations, Broken Commitments, and Bad Behavior [Patterson et al. 2013] is an excellent resource that discusses what to do next.

Ask the team

Fluent Agile teams have more information about the day-to-day details of their work than anybody else. Rather than asking for measurements, managers can ask their teams a simple question: “What can I do to help your team be more effective?” Listen. Then act.

Define goals and guardrails

Although the team owns their work, the goals of that work are defined by management. It’s okay to put requirements and boundaries in place. For example, one director needed to know that his teams were processing a firehose of incoming data effectively. He gathered together his team of managers, told them his need, and asked them to create a measurement that teams could track themselves, without fear of being judged. The director didn’t need to see the measurement; he needed to know that his teams were able to stay on top of it, and if not, what they needed to do so.

When Metrics Are Required

All too often, managers’ hands are tied by a larger organizational system. To return to Robert Austin:

The key fact to realize is that in a hierarchical organization every manager is [also measured]. Manager performance is very difficult to measure because of the intangible nature of managerial duties... her own performance is judged mostly by how well her organization—that is, her [workers]—does according to the very measurement system the [manager] installs. The [manager] has an interest, then, in installing easily exploitable measurement systems. The [manager] and [worker] quietly collude to their mutual benefit. (pp. 137-138)

Report narratives and qualitative information rather than quantitative data.

If you must report something, provide narratives and qualitative information, not quantitative measurements that can be abused. Tell stories about what your teams have done, what they’ve learned, and how you’ve helped.

That may not be enough. You might be required to generate quantitative results. Push back on this, if you can, but all too often, it will be out of your control.

If you have control over the measurements used, measure as close to real-world outcomes as possible. One such possibility is value velocity.

Value velocity is an actual measurement of productivity. It measures the output of the team over time. To calculate it, measure two numbers for each valuable increment the team releases: the impact, such as revenue; and the lead time, which is the number of weeks (or days) between when development started and when the increment was released. Then divide: impact ÷ lead time = value velocity.

In many cases, the impact isn’t easily measurable. In that case, you can estimate the impact of each increment instead. This should be done by the sponsor or key stakeholders outside the team. Make sure that all estimates are done by the same person or tight-knit team, so they’re consistent with each other.
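The arithmetic above can be sketched in a few lines. The increment figures below are hypothetical; real impact numbers would come from revenue data or consistent stakeholder estimates.

```python
# A minimal sketch of the value velocity calculation described above.

def value_velocity(impact, lead_time_weeks):
    """Value velocity: impact divided by lead time (impact per week)."""
    return impact / lead_time_weeks

# Hypothetical increments: (impact in dollars, lead time in weeks).
increments = [
    (120_000, 6),  # 20,000 per week
    (45_000, 3),   # 15,000 per week
]

for impact, weeks in increments:
    print(f"${value_velocity(impact, weeks):,.0f} per week")
```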

Remember, though, that value velocity distorts behavior just like any other metric. Whichever metrics you collect, do everything you can to shield your team from dysfunction. Most metrics harm internal quality, maintenance costs, productivity, customer satisfaction, and long-term value, because these are hard to measure and tempting to shortchange. Emphasize the importance of these attributes to your teams, and—if you can do so honestly—promise them that you won’t use metrics in your performance evaluations.


What about “if you can’t measure it, you can’t manage it?”

“If you can’t measure it, you can’t manage it” is often attributed to W. Edwards Deming, a statistician, engineer, and management consultant whose work influenced Lean Manufacturing, Lean Software Development, and Agile.

Deming was massively influential, so it’s no wonder his quote is so well known. There’s just one problem: He didn’t say it. He said the opposite.

It is wrong to suppose that if you can’t measure it, you can’t manage it—a costly myth.4

4This quote is explained and put into context at The W. Edwards Deming Institute.

W. Edwards Deming


Delegatory management requires an organizational culture that understands measurement dysfunction. Despite being decades old—Deming articulated the need to remove measurement-based management in at least 19825—it’s still not widely understood and accepted.

5Point 12 of Deming’s 14 Points for Management: a) Remove barriers that rob the hourly worker of his right to pride of workmanship. The responsibility of supervisors must be changed from sheer numbers to quality. b) Remove barriers that rob people in management and engineering of their right to pride of workmanship. This means, inter alia, abolishment of the annual or merit rating and of management by objective.

Agile can still work in a measurement-based environment, but the purpose of this book isn’t to tell you what merely works; it’s to tell you what excels. Delegatory management excels, if you’re able to use it.


When you use delegatory management well:

  • Teams feel they’ve been set up for success.

  • Teams own their work and make good decisions without management’s active participation.

  • Team members feel confident to do what leads to the best outcomes, not the best scores.

  • Team members and managers aren’t tempted to deflect blame and engage in finger-pointing.

  • Managers have a sophisticated, nuanced understanding of what their teams are doing and how they can help.

Alternatives and Experiments

The message in this practice—that measurement-based management leads to dysfunction—is a hard pill for a lot of organizations to swallow. You may be tempted by alternatives that promise to solve measurement dysfunction through elaborate balancing schemes.

Before you do that, remember that Agile is a Theory Y approach to development. The correct way to manage an Agile team is through delegatory management, not measurement-based management.

If you do look at alternative metrics ideas, be careful. Measurement dysfunction isn’t immediately obvious. It can take a few years to become apparent, so an idea can sound great on paper and even appear to work at first. You won’t discover the rot until later, and even then, it’s all too easy to blame the problem on something else.

In other words, be skeptical of any approach to metrics that isn’t at least as rigorous as [Austin 1996]. It’s based on Austin’s award-winning economics Ph.D. thesis.

That said, there are also good, thoughtful takes on Agile management. As you look for opportunities to experiment, favor ideas that emphasize a collaborative, delegatory Theory Y approach. The resources in the Further Reading section are a good starting point.

Further Reading

Measuring and Managing Performance in Organizations [Austin 1996] was the inspiration for this practice. It presents a rigorous economic model while remaining engaging and approachable.

Turn the Ship Around! A True Story of Turning Followers into Leaders [Marquet 2013] is a gripping read, and an excellent way to learn more about delegatory management. The author describes how he, as captain of a U.S. nuclear submarine, learned to apply delegatory management with his crew.

Crucial Accountability: Tools for Resolving Violated Expectations, Broken Commitments, and Bad Behavior [Patterson et al. 2013] is a good resource for managers who need to intervene to help their employees.

Punished by Rewards: The Trouble with Gold Stars, Incentive Plans, A’s, Praise, and Other Bribes [Kohn 1999] is a thorough exploration of the differences between intrinsic and extrinsic motivation.

XXX Johanna Rothman, Pollyanna Pixton

XXX The Tyranny of Metrics (Jerry Z. Muller)


AoAD2 Practice: Roadmaps



Product Managers

Our stakeholders know what to expect from us.

Ultimately, accountability is about providing good value for your organization’s investment. In a perfect world, your business stakeholders will trust your team to do so without close supervision. This is achievable, but it usually takes a year or two of delivering reliably first.

In the meantime, your organization is going to want to oversee your team’s work. Stakeholder demos help, but managers often want to know more about what you’re doing and what to expect. You’ll share this information in your roadmap.

Agile roadmaps don’t have to look like traditional software roadmaps. I’m using the term fairly loosely, to encompass a variety of ways that teams share information about their progress and plans.

Agile Governance

Roadmaps are built on a foundation of governance. How does the organization ensure teams are working effectively and moving in the right direction?

The classic approach is project-based governance. It involves creating a plan, an estimate of costs, and an estimate of value. The project is funded if the total value sufficiently exceeds the total costs. Once funded, the project is carefully tracked to ensure that it proceeds according to plan, because changes to the plan usually result in increased costs, and that could turn the project from a win to a loss.

This is a predictive approach to governance, not an Agile one. It assumes that plans should be defined in advance. Change is controlled carefully and success is defined as meeting the plan. Management needs detailed plans, cost estimates, and completion progress.

The Agile approach is product-based governance.

The Agile approach is product-based governance. It involves allocating an ongoing “business as usual” budget and estimating the value the team will produce over time. The product is funded if the ongoing value sufficiently exceeds the ongoing costs. Once funded, the product’s value and costs are carefully monitored to ensure that it’s achieving the desired return on investment. When the value is different than estimated, costs and plans are adjusted accordingly.

This is an adaptive approach to governance. It assumes that the team will seek out information and new opportunities, then change their plans to take advantage of what they learned. Success is defined in terms of business results, such as return on investment. Management needs revenue figures or other business metrics, costs, and a business model.
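The funding comparison at the heart of product-based governance can be sketched with purely hypothetical monthly figures; the point is that the question is ongoing value versus ongoing cost, not a one-time project estimate.

```python
# Hypothetical sketch of the product-based governance funding question:
# does ongoing value sufficiently exceed the ongoing "business as usual" cost?

def monthly_roi(monthly_value, monthly_cost):
    """Return on investment as the ratio of net value to cost."""
    return (monthly_value - monthly_cost) / monthly_cost

# Hypothetical figures: $80k/month in value, $50k/month in team costs.
print(f"{monthly_roi(80_000, 50_000):.0%}")  # 60%
```

When the measured value differs from the estimate, these inputs change, and so does the funding decision, which is exactly the adaptation the text describes.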

Although Agile is adaptive, not predictive, many Agile teams are subject to project-based governance. Your roadmaps need to accommodate this reality. I’ve provided four options, from maximally adaptive to maximally predictive. Choose the lowest numbered option you can get away with. In some cases, you’ll have multiple roadmaps, such as one for management oversight and one for sales and marketing.

Option 1: Just the Facts

A “just the facts” roadmap isn’t a roadmap at all, in the traditional sense of the word. Instead, it’s a description of what your team has actually done, with no speculation about the future.

From an accountability and commitment perspective, this is the safest type of roadmap, because you only share things that have happened. It’s also the easiest to adapt, because you don’t make any promises about future plans.


You can present your team’s roadmap in whatever format you like, and to any level of detail. A small slide deck, an email, or a wiki page are all common choices. It should include:

  • Your team’s purpose.

  • What’s complete and ready for your next release.

  • Your next release date, if you’re using pre-defined release dates. (See the “Predefined Release Dates” section.)

Additionally, Optimizing teams will include:

  • Current business value metrics (revenue, customer satisfaction, etc.)

  • Current costs

  • Business model

Even if management needs a more predictive roadmap, a “just the facts” roadmap can work well for sales and marketing. The advantage of the “just the facts” approach is that no one is ever upset when your plans change, because they don’t know your plans have changed. Combined with a release train (see the “Release Early, Release Often” section), this can lead to regular announcements of exciting new features that people can have right now.

One well-known example of this approach is Apple, which tends to announce new products only when they’re ready to buy. It’s also common in video games, which use regular updates accompanied by “what’s new” marketing videos to re-energize interest and engagement.

Option 2: General Direction

Stakeholders often want more than just the facts. They want to know what’s coming, too. A “general direction” roadmap strikes a good balance. Speculation is kept to a minimum, so your team can still adapt its plans, but stakeholders aren’t kept entirely in the dark about future plans.

The roadmap includes everything in the “just the facts” roadmap, plus:

  • The valuable increment the team is currently working on, and why it’s the top priority.

  • The valuable increment (or increments) most likely to be worked on next.

The increments are presented without dates.

Optimizing teams might also include hypotheses about performance of upcoming releases.

Option 3: Time and Scope


A “time and scope” roadmap adds release dates to the “general direction” roadmap. This reduces agility and increases risk, because people tend to take these sorts of roadmaps as commitments, no matter how many caveats you provide.

That leaves teams with an uncomfortable tradeoff: either you use a conservative forecast, such as one with a 90% probability of success, and provide a pessimistic release date; or you use a more optimistic forecast, such as one with a 50% probability of success, and risk missing the date. Furthermore, work tends to increase to fill the time available, so more conservative forecasts are likely to result in less work getting done.

However, because the roadmap doesn’t include the details of each increment, the team can still steer its plans as described in the “How to Steer Your Plans” section. By only forecasting the “must have” stories in the plan, you can make a conservative forecast that’s not too far in the future. If you end up with extra time—and, if the forecast was truly conservative, you usually will—you can use that time to add polish and other “nice to have” stories.

Optimizing teams usually don’t use this sort of roadmap. The business cost isn’t worth the benefit. However, it can be useful when they need to coordinate with third parties, such as for a trade show or other marketing event.

Option 4: Detailed Plans and Predictions

This option is the least agile and has the greatest risk. It’s a “time and scope” roadmap that also includes every story in the team’s plan. As a result, the team can’t steer its plans without having to justify the changes. This results in more conservative forecasts—meaning more potential for wasting time—and less willingness to change.

Although this is the riskiest type of roadmap, organizations tend to prefer it. Despite the risk, it feels safer. Uncertainty makes people uncomfortable, and this roadmap allows them to speak with certainty.

Artificial certainty just makes it more difficult to adapt to changing circumstances.

That certainty is an illusion, though. Software development is inherently uncertain. Artificial certainty just makes it more difficult to adapt to changing circumstances.

Sometimes you won’t have a choice. To create this sort of roadmap, make forecasts that include every story in each release, not just the “must-have” stories. As before, you’ll need to decide between conservative forecasts, which are reliable but potentially wasteful, and more optimistic forecasts, which you could fail to meet.

Teams without Delivering fluency typically have a lot of uncertainty in their forecasts, which means that a properly conservative forecast will show a release date that’s too far in the future for stakeholders to accept. You’ll typically have to use a less conservative forecast, even though the date is more likely to be missed. One way to work around this is to only forecast near-term releases, if you can. The “Improving Forecast Ranges” section has more details.

Optimizing teams don’t use this roadmap.

Corporate Tracking Tools

Tracking teams with planning tools is a mistake.

Companies will often mandate that their teams use a so-called Agile lifecycle management tool, or other planning tool, so they can track teams’ work and create reports automatically. This is a mistake. Not only does it hurt the team—which needs free-form visualizations that they can easily change and iterate—it reinforces a distinctly non-Agile approach to management.

Agile management is about creating a system where teams make effective decisions on their own. As the “Management” practice discusses, managers’ job is to ensure teams have the information, context, and support they need. “Agile” planning tools are anything but Agile: they’re built for tracking and controlling teams, not enabling them. They’re an expensive distraction at best. Don’t use them. They will hurt your agility.

Stakeholder Demos

That doesn’t mean teams have no guidance. Management still needs to keep its hands on the wheel. But this is done by iterating each team’s purpose, providing oversight and feedback during stakeholder demos, and using the most adaptive roadmaps possible, in addition to engaged team-level management.

If your team is required to use a corporate tracking tool, only enter the information required by your roadmap. Use the other planning practices described in this book for your day-to-day work, copying information into the tool when needed. If your roadmap only includes valuable increments, not stories, this won’t be too much of a burden.

Visual Planning

If you have to include stories in your roadmap—which I don’t recommend—see if there’s a lightweight way you can do so. Perhaps you can take a picture of your visual plan rather than transcribing the cards into a tool. Perhaps managers could be more involved in planning sessions instead, or perhaps what they’re asking for reflects a genuine need.

If they insist, though, you can transcribe stories into a corporate tracking tool. Do it once per week—or daily, if you have no other choice—and remember that each story should only be a short phrase, not a miniature requirements document.

If managers need you to maintain more detail in the tool, or insist on tracking individual tasks, something is wrong. Management may be having trouble letting go, or your organization may not be a good fit for Agile. Ask a mentor for help.

When Your Roadmap Isn’t Good Enough

Eventually, somebody is going to ask you for a time and scope roadmap, or a detailed plans and predictions roadmap, then tell you that you need to deliver sooner.

Cutting scope is the only sure way to deliver sooner.

There is only one sure way to deliver sooner: cut scope. You have to take stories out of your plan. Everything else is wishful thinking.

You can try improving your capacity (see the “How to Improve Capacity” section) or further developing fluency, but start by cutting scope. If your other efforts pay off, you can put the stories back in.

Sometimes, you won’t be allowed to cut scope. In this case, you have a tough choice to make. Reality won’t bend, so you’re stuck with political options. You can either stand your ground, refuse to change your forecast, and risk getting fired; or you can use a less conservative forecast, provide a nicer-looking date, and risk releasing late.

Before making that decision, look around at the other teams in your company. What happens when they miss their dates? In many companies, release dates are used as a bludgeon—a way of pressuring people to work harder—but have no real consequences. In others, release dates are sacred commitments.

If you’re trapped in a situation where your roadmap isn’t good enough and you don’t have the ability to cut scope, ask for help. Rely on team members who understand the politics of your organization, discuss your options with a trusted manager, or ask a mentor for advice.

Remember, whenever possible, the best approach to forecasting is to choose a predefined release date and steer your plans to meet that date exactly.


Questions

How often should we update our roadmap?

Stakeholder Demos

Update it whenever there’s substantive new information. The stakeholder demo is a good venue for sharing roadmap changes.

What should we tell our stakeholders about forecast probabilities?

In my experience, forecast probabilities are hard for stakeholders to understand. Providing a range of dates can work, but the probabilities behind the range are hard to explain succinctly.

If teams don’t report their detailed plans, how do team-level managers understand what their teams are doing?

Team-level managers can look at their teams’ planning boards directly. See the “Management” practice for more about managing teams.


Prerequisites

Anybody can create roadmaps, but creating effective, lightweight roadmaps requires Agile governance and a willingness to allow teams to own their work, as discussed in the “Replace Waterfall Governance Assumptions” section and the “Delegate Authority and Responsibility to Teams” section.


Indicators

When you use roadmaps well:

  • Managers and stakeholders understand what the team is working on and why.

  • The team isn’t prevented from adapting their plans.

Alternatives and Experiments

There are many ways of presenting roadmaps, and I haven’t gone into details about specific presentation styles. Experiment freely! The most common approach I see is short slide decks, but people also create videos (particularly for “just the facts” roadmaps), maintain wiki pages, and send status update emails. Talk with your stakeholders about what works for them.

As you experiment, look for ways to improve your adaptability and make fewer predictions. Over time, stakeholders will gain trust in your team, so be sure to revisit their expectations. You may discover that previously set-in-stone requirements are no longer important.

Share your feedback about this excerpt on the AoAD2 mailing list! Sign up here.

For more excerpts from the book, or to get a copy of the Early Release, see the Second Edition home page.

AoAD2 Practice: Forecasting

Book cover for “The Art of Agile Development, Second Edition.”

Second Edition cover

This is a pre-release excerpt of The Art of Agile Development, Second Edition, to be published by O’Reilly in 2021. Visit the Second Edition home page for information about the open development process, additional excerpts, and more.

Your feedback is appreciated! To share your thoughts, join the AoAD2 open review mailing list.

This excerpt is copyright 2007, 2020, 2021 by James Shore and Shane Warden. Although you are welcome to share this link, do not distribute or republish the content without James Shore’s express written permission.


Product Managers

We can predict when we’ll release.

“When are you going to release?”

At first, this seems like a simple question. Just add up your stories (or estimates), divide by your capacity/throughput, and voila! Release date.

And 10% of the time, it works! Unfortunately, it’s the other 90% that’ll get you.

Capacity and throughput only work for short-term predictions. Life always has additional curve balls to throw. Team members get sick and take vacations; hard drives crash, and although the backup worked, the restore doesn’t; stakeholders suddenly realize that the software they’ve been nodding along to needs major changes; experiments reveal surprising new information.

Despite the uncertainty, sometimes you need to predict release dates. Forecasting is how you do so.

Predefined Release Dates

Let’s be clear: Forecasting reduces your agility. When you make a forecast, you’re looking at a set of work and predicting when it will be done. And that’s the problem: you’re making a prediction about a specific set of work.

But agility means seeking out new information and changing your plans in response. That invalidates your forecasts. At best, the time and effort that went into making the forecast is wasted. More often, people have made their own plans based on your forecasts, and get upset when you change them.

The best way to forecast is to define your release date in advance.

The best way to forecast is to not forecast at all. Instead, define your release date in advance. Steer your plans so that you’re ready to release your most valuable increments on that date. If your company has a genuine need for forecasts—for example, if the software needs to be ready for a trade show—then they’ll have a target date you can use.

A common variant of this idea is the release train, which is a predefined series of release dates, such as the beginning of every quarter. See the “Release Early Release Often” section.

How to Steer Your Plans

If you have a predefined release date, you can steer your plans so that you always have something to release. Without proper forecasting, you won’t be able to say what you’ll release—unless it’s already done—but you can confidently say that you will release, precisely on time.

Adaptive Planning

The secret to steering your plans successfully is to slice your work into the smallest valuable increments you can. Focus on getting to a releasable state as quickly as possible. To do so, set aside every story that isn’t strictly necessary to release.

That bare minimum is your first increment. Once you’ve identified it, take a look at the stories you set aside and decide which can be done on their own as additional increments. Some of those increments might just be a single “just right”-sized story! In fact, that’s the ideal.

In a perfect world, you want every story to be something that can be released on its own, without having to wait for any additional stories. This gives you the maximum flexibility and ability to steer your plans. The “Keep Your Options Open” section has more details.

As you finish work, keep an eye on your predefined release date and use it to decide what to prioritize. If there’s a lot of time left, you can probably build a big new increment that takes the software in a new direction. If there isn’t much time left, focus on smaller increments that add polish and delight.

Your increments need to be small enough that you can easily finish at least one before your predefined release date. This will often be obvious. If it’s not, you’ll need to make a proper forecast. For best results, keep those forecasts secret, within your team, so you have the flexibility to change your plans later.

How Forecasting Works

If a predefined release date isn’t enough, you’ll have to gaze into the future, trace the lines of fate, and predict both what and when.

Forecasts have two parts: a baseline estimate and an adjustment for uncertainty.

It’s not witchcraft; it’s science. Specifically, statistics. Software forecasts have two parts: a baseline estimate and an adjustment for uncertainty.

The baseline estimate

When you make an estimate—let’s say you’re estimating how many jellybeans are in a jar—your estimate isn’t going to be right on target. It’s going to be a little high or a little low. (Sometimes a lot high or a lot low.) You can describe its accuracy as a single number: the “actual/estimate” ratio. If your actual/estimate ratio is 2, then there were twice as many jellybeans as you estimated, and if your actual/estimate ratio is 0.5, then there were half as many jellybeans as you estimated.

If you estimate 1,000 jellybean jars, first off, you’ll get really sick of counting jellybeans. Second, you could look at all your estimates together to see how accurate they were. From that, you could make better forecasts about how many jellybeans were in each jar, without having to change your estimates.

It’s like this. If you measured your estimate accuracy and learned that there were always twice as many jellybeans as you estimated—really lousy estimates with an actual/estimate ratio of 2—but you consistently and exactly had twice as many, then you could easily predict how many jellybeans were in a jar.

You’d make millions on the talk show circuit. Imagine it now: the host pulls back a curtain and unveils a massive jellybean jar. You estimate it at 5,962 jellybeans. You know that you’re always exactly half off, so you do some quick mental math. “There are exactly 11,924 jellybeans in that jar, Conan,” you say. Conan falls back in shock and the audience goes wild. (Okay, maybe not.)

If you know how accurate your estimates are, you can make great predictions from lousy estimates. That’s good, because software estimates are all lousy.

Accounting for uncertainty

There are two problems with the “talk show millions” scheme. First, where are you going to get 1,000 jellybean jars? And second, your actual/estimate ratio won’t ever be perfectly consistent. You won’t be exactly half off every time—sometimes your actual/estimate ratio will be 2.1, sometimes it will be 1.9. If you were to graph the frequency of each ratio, they’d form a bell curve, which is called a “normal distribution” in statistics.

This makes forecasting more difficult. Should you multiply by 1.9? Or 2.1? That’s the uncertainty in your forecast. Your estimates aren’t accurate, but they aren’t consistent either. So you can’t predict which actual/estimate ratio you should use.

If you want to be perfectly safe, you can use the worst actual/estimate ratio you’ve ever had. Let’s say it was 2.6. Or, if you want to be perfectly optimistic, you can take the best actual/estimate ratio you’ve ever had. Maybe it was 1.3.

More realistically, though, you’d choose the part that was likely to be accurate 80% of the time—the part of the curve between 10% likely and 90% likely. Let’s say that, 90% of the time, your actual/estimate ratio is less than 2.1, and 10% of the time, it’s less than 1.9. Now you can make a forecast that’s accurate 80% of the time by giving a range that’s between 1.9x and 2.1x of your original estimate.

“Conan, I can say with 80% confidence that there are between 11,328 and 12,520 jellybeans in that jar.”

Not as impressive, but not too bad, either.
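The arithmetic behind that answer is simple enough to sketch (using the estimate and the 10% and 90% ratios from the example; nothing here is specific to jellybeans):

```python
# The 80% confidence range for the jellybean estimate:
# multiply the baseline estimate by the 10%- and 90%-likelihood
# actual/estimate ratios from the example above.
estimate = 5962
low_ratio, high_ratio = 1.9, 2.1

low = round(estimate * low_ratio)    # actuals fall below this only 10% of the time
high = round(estimate * high_ratio)  # actuals fall below this 90% of the time
print(f"80% confidence: between {low} and {high} jellybeans")
# prints: 80% confidence: between 11328 and 12520 jellybeans
```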

Software is worse

Building software is harder than counting jellybeans. (I know, I know, big surprise.) There are a lot of tasks and a lot of dependencies. In practice, there are a lot more things that can unexpectedly get delayed than can unexpectedly finish early.

As a result, when you graph the actual/estimate ratio for software estimates, you don’t get a normal distribution (a bell curve)—you get a log-normal distribution. The bell curve stretches out to the right.

If you take that data and graph it cumulatively, so each data point shows the percentage of estimates at or below a particular actual/estimate ratio, you get a graph that looks like the “Little Estimate Data” figure (reprinted from [Little 2003]).

A two-axis scatter chart. The x-axis is labelled “Actual/Estimate,” with a log scale from 0.1 to 10.0. The halfway point is marked 1.0 and has a dashed vertical line. The y-axis is labelled “Cumulative distribution,” with a linear scale from 0% to 100%. Two types of data are plotted: “Landmark” data and “DeMarco” data. Both show a clear cumulative log-normal distribution, although there are many more data points for the Landmark data, and it forms a more even curve.

Figure 1. Little estimate data

Little’s data isn’t a fluke. The “Star Citizen Estimate Data” figure shows the estimates made during the development of the 3.0 release of Star Citizen, a crowdfunded video game that shares a lot of behind-the-scenes information. It’s a huge project with hundreds of developers, but they had a very similar result.1

1The “stair steps” in the Star Citizen graph are a sign that the developers padded their estimates. When that happens, work commonly grows to meet the estimate, which results in parts of the curve being shoved to the right to form vertical lines.

A scatter chart similar to the “Little estimate data” figure. It also shows a cumulative log-normal distribution, although it’s not a perfectly smooth curve. In particular, the data from 1-15% is all perfectly aligned with the 1.0 actual/estimate ratio.

Figure 2. Star Citizen estimate data

The cumulative distribution is handy because it tells you what ratio to use for a given level of likelihood of success. Let’s say you want a forecast that you can meet or beat 90% of the time. Draw a horizontal line from the 90% mark on the y-axis, find the data point, and drop down to the actual/estimate ratio on the x-axis. For the Landmark data shown in the “Little Estimate Data” figure, you’d need to multiply your estimate by 3.25. For the DeMarco data, you’d multiply your estimate by 5.2. For Star Citizen, you’d multiply by 5.14.

So now, when Conan asks you to estimate his jellybean jar, you have to hem and haw. “Well, Conan, jellybean counting is a tough job. A lot of things can go wrong. How do I know you aren’t hiding a gigantic 5-pound jellybean in the middle? Or a lot of tiny jellybeans? All I know is that, with 80% confidence, there’s somewhere between 5,962 and 30,644 beans in that jar.”

Needless to say, the audience would not be impressed. Your stakeholders won’t be either.

Sources of Uncertainty

The hard part about forecasts isn’t accuracy; it’s precision. Accuracy without precision is no problem. Here you go: Your next release will come out sometime in the next 100 years... or be cancelled.

100% accurate. 100% useless. To be useful, you need more precision, and to have more precision, you need less uncertainty.

Uncertainty doesn’t come from estimate accuracy (the actual/estimate ratio). You can adjust for any actual/estimate ratio if it’s consistent. Uncertainty comes from the variability in your actual/estimate ratios. These are some common reasons for this variability:

  • Lack of internal quality

  • Changes in team members’ availability

  • Changes in team interruptions and overhead

  • Changes in reliability of suppliers and other dependencies

  • Changes in scope (the work to be done)

In general, teams without Delivering fluency have trouble making useful forecasts.
Whole Team
Team Room

Agile practices such as whole team, team room, capacity, and slack help reduce this variability. In general, though, teams without Delivering fluency have much higher variability. They have trouble making useful forecasts.

Measuring Uncertainty

You can measure your team’s uncertainty by looking at the accuracy of past estimates. Starting with your next release—or past releases, if you have the data—keep a copy of every baseline release estimate you make. (I’ll describe how to make a baseline estimate in a moment.) Track the date you made each estimate and the number of weeks you estimated were remaining.

Then, when the release actually happens, go back and calculate how long, in weeks, the release actually took from the date of each estimate. If you were pressured to release early, or had a lot of bugs or hotfixes, choose the date that represents your real release—the final release where the software was actually done—so your forecasts will represent the time your team really needs.

You should now have several pairs of numbers: an estimate, in weeks, and the actual time required, also in weeks. Divide the actual by the estimate to get an actual/estimate ratio for each pair.

Finally, sort the ratios from smallest to largest. Calculate the position of each row as a percentage of the total number of rows. (I have a spreadsheet at that will do this for you.) This is your cumulative distribution. The percentages show the likelihood of meeting or beating each ratio. The “Example Uncertainty” table shows an example with ten ratios.

Table 1. Example uncertainty


You can use the resulting table to perform your forecasts. I’ll describe how in a moment.
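If you want to automate the bookkeeping, the whole calculation fits in a few lines. This is a minimal sketch, with made-up estimate history standing in for your real data:

```python
# Build an estimate uncertainty table from past releases.
# The estimate/actual pairs below are made up for illustration;
# both are measured in weeks.
estimates = [4, 6, 3, 8, 5]
actuals   = [5, 9, 4, 12, 6]

# One actual/estimate ratio per release, sorted smallest to largest.
ratios = sorted(a / e for a, e in zip(actuals, estimates))

# Each row's position, as a percentage of the total number of rows,
# is the likelihood of meeting or beating that ratio.
for row, ratio in enumerate(ratios, start=1):
    likelihood = row / len(ratios) * 100
    print(f"{likelihood:3.0f}%: actual/estimate ratio of {ratio:.2f} or less")
```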

Continue adding to your data set every time you release an increment. For best accuracy, every team should track their data independently, but you can group together data from several similar teams to get started. More data results in better forecasts.

Uncertainty Rules of Thumb

The downside of measuring uncertainty is that it takes a lot of data. You’ll typically need to conduct several releases, over six months, before you have enough. In the meantime, or if gathering the data seems like too much work, you can use the following heuristic instead. Start by answering each question in the “Forecast Risks” table. (If you don’t have four weeks of history, skip it.)

Table 2. Forecast risks

Question | Low Risk | High Risk
Did you have the same capacity in the last four iterations? Or, if you’re using continuous flow, did you finish the same number of stories in each of the last four weeks? | Yes | No
Were all your stories in the last four iterations or weeks “done done?” | Yes | No
Did you have to add new stories to fix bugs in the last four iterations or weeks? | No | Yes
For your most recent release, when your stories were done, were you able to release to production immediately, without additional work? | Yes | No

If all your answers are in the “low risk” column, then you have low-risk forecasts. The main source of uncertainty for your team is likely to be changes that are under your control.

If any answers were in the “high risk” column, then you have high-risk forecasts. You’re likely to have to change your release date for reasons out of your control.

If you don’t have four iterations (or weeks) of history, it’s best to assume you’re in the high-risk category. Most teams are. But if your team is fluent in both the Focusing and Delivering zones, it’s okay to use the low-risk category.

Once you know your risk categorizations, see the “Uncertainty Rules of Thumb” table for your actual/estimate ratios.2

2These uncertainty numbers are an educated guess. The “high risk” numbers are based on [Little 2003]. The “low risk” numbers are based on DeMarco and Lister’s RISKOLOGY simulator, version 4a, available at I used the standard settings but turned off productivity variance, as capacity automatically adjusts for that risk.

Table 3. Uncertainty rules of thumb

Likelihood | Low-risk ratio | High-risk ratio
10% (almost impossible) | 1 | 1
50% (coin toss) | 1.4 | 2
90% (very likely) | 1.8 | 4

How to Make a Forecast

Despite all this background, making a forecast is actually pretty easy, assuming you follow the other Focusing zone practices in this book. You’ll determine your total effort, create a baseline estimate, then adjust for uncertainty.

The Planning Game

Forecasts depend on stories that are sized “just right” using the planning game. If you haven’t broken all the stories for a release down to that level of detail, you won’t be able to forecast the release. You’ll need to use the planning game to size all your stories first.

Similarly, if the release includes any spike stories, you’ll have to finish all of them before you can make a forecast. This is why spike stories are separate stories in your plan; sometimes it’s valuable to schedule them early so you can resolve risks and make forecasts.

1. Determine the total effort

Start by counting the number of stories remaining in your next release. If you use estimates, add up their estimates instead of counting them. This will give you your total effort for the release.

Repeat for each following release, if any, adding each to the total.

For example, if you’re forecasting three releases, and the first release has 14 stories, the second release has seven stories, and the third release has 11 stories, the total effort would be as follows:

Table 4. Example total effort

Release | Total Effort
First release | 14
Second release | 21
Third release | 33
2. Calculate the baseline estimate

If your team uses iterations, divide the total effort by your team’s capacity, then multiply by the number of weeks in an iteration. This will give you an estimate in weeks.

If your team uses continuous flow, divide the total effort by your throughput instead of capacity. Your throughput is the number of stories your team finished last week.

If your capacity or throughput changes from week to week, just use the most recent number. You’ll iterate the forecasts, which will allow you to see the trends in the noise. You can also average your last three weeks (or iterations), but iterating the forecast is better.

To continue the example, if your team’s capacity was six stories per week, your baseline estimate would be as follows:

Table 5. Example baseline estimates

Release | Baseline Estimate
First release | 14 ÷ 6 = 2.3 weeks
Second release | 21 ÷ 6 = 3.5 weeks
Third release | 33 ÷ 6 = 5.5 weeks
3. Adjust for uncertainty

Decide the confidence range you want for your forecast. A range from 10% to 90% likelihood will give you dates that you’ll achieve 80% of the time, but the range will be very broad. A range from 50% to 90% will be narrower, and you’ll beat the forecast about half the time. You can also choose a single number, such as 90%, and say that you’ll deliver on or before that date.

I usually choose the 50-90% range, or the 90% number if I don’t think my audience can handle a range-based forecast.

Whichever likelihood numbers you’ve chosen, find the corresponding actual/estimate ratios in your estimate uncertainty table. If you don’t have one, use the rules of thumb described in the “Uncertainty Rules of Thumb” section.

Finally, multiply your baseline estimate by the actual/estimate ratios. This is your forecast, in weeks from today, of when the increment will release.

For example, if you chose 50% and 90% accuracy, and your team used the low-risk ratios in the “Uncertainty Rules of Thumb” table, your ratios would be 1.4 and 1.8. That yields the following forecasts:

Table 6. Example forecasts

Release | 50% Accuracy | 90% Accuracy | Description
First release | 2.3 × 1.4 = 3.2 weeks | 2.3 × 1.8 = 4.1 weeks | “3-5 weeks”
Second release | 3.5 × 1.4 = 4.9 weeks | 3.5 × 1.8 = 6.3 weeks | “5-7 weeks”
Third release | 5.5 × 1.4 = 7.7 weeks | 5.5 × 1.8 = 9.9 weeks | “8-10 weeks”
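Steps 1-3 are easy to script. Here’s a minimal sketch using the numbers from the worked example, with the low-risk rules of thumb standing in for a real uncertainty table:

```python
# Forecast each release: baseline estimate × actual/estimate ratios.
# Numbers come from the worked example; the ratios are the low-risk
# rules of thumb (50% likelihood: 1.4, 90% likelihood: 1.8).
releases = {"First": 14, "Second": 21, "Third": 33}  # cumulative effort, in stories
capacity = 6                                         # stories per week
ratio_50, ratio_90 = 1.4, 1.8

for name, effort in releases.items():
    baseline = round(effort / capacity, 1)  # baseline estimate, in weeks
    low = baseline * ratio_50               # you'll beat this about half the time
    high = baseline * ratio_90              # you'll beat this 90% of the time
    print(f"{name} release: {low:.1f}-{high:.1f} weeks")
```

Rounding the baseline first matches the hand calculation in the example; keeping full precision shifts the first release’s numbers slightly.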
4. Iterate your forecast

Update your forecast after every iteration, or once per week if you use continuous flow. As your release date approaches, the forecast will “narrow in” on the actual release date. Graphing the forecasted release dates over time will help you see trends, especially if your capacity or throughput isn’t stable. The “Example Iterated Forecast” figure shows an example.

A two-axis line chart. The x-axis is labelled “Date forecast made” and shows dates, in one week intervals, from January 1st to February 26th. The y-axis is labelled “Forecasted release date” and shows dates, in one week intervals, from January 29th to March 12th. The body of the graph shows three lines, labelled “10%,” “50%,” and “90%.” On January 1st, they show a forecast ranging from February 5th, to February 19th, to March 5th. Moving from left to right, they gradually converge on a release date of February 26th.

Figure 3. Example iterated forecast

Improving Forecast Ranges

If your team has a lot of uncertainty, your forecasts might not be very useful. For example, if the team shown in the “Example Forecasts” table had high-risk forecasts, their ranges would be much broader and more pessimistic:

Table 7. Example high-risk forecasts

Release | Low Risk | High Risk
First release | 3-5 weeks | 5-10 weeks
Second release | 5-7 weeks | 7-14 weeks
Third release | 8-10 weeks | 11-22 weeks

There are two ways to make your forecast narrower: an easy way, and a hard way.

The easy way is to make your increments smaller. A high-risk forecast with a baseline estimate of two weeks and 50-90% likelihood results in a forecast of 4-8 weeks. That’s not too hard to accept. On the other hand, a baseline estimate of six months yields a forecast of 1-2 years. That’s tough to swallow.


The hard way to improve your forecasts—but also the best for your team’s productivity—is to address your sources of variability. Using slack to stabilize capacity, as described in the “Stabilizing Capacity” section, might be enough. But usually, your team will need to achieve fluency in both Focusing and Delivering zone practices. If your team has a lot of dependencies on other teams, your organization might also need to revise team responsibilities as discussed in the “Scaling Agility” chapter.


Questions

Our forecast shows us releasing way too late. What should we do?

You have to cut scope. See the “When Your Roadmap Isn’t Good Enough” section for details.

Can you summarize the release date forecast calculation?

number of stories (or estimate total) remaining ÷ capacity or throughput per week × actual/estimate ratio = number of weeks remaining.

What about forecasting how much we’ll get done by a predefined release date?

number of weeks remaining × capacity or throughput per week ÷ actual/estimate ratio = number of stories (or estimate total) in the current plan that will be finished by the release date.
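As a sketch, with hypothetical numbers (six weeks until a predefined release date, a capacity of six stories per week, and the 90% low-risk ratio of 1.8):

```python
# How many stories will be done by a predefined release date?
# All numbers here are hypothetical.
weeks_remaining = 6    # weeks until the predefined release date
capacity = 6           # stories finished per week
ratio = 1.8            # 90%-likelihood actual/estimate ratio (low risk)

stories = weeks_remaining * capacity / ratio
print(f"About {int(stories)} stories will be done, with 90% likelihood")
# prints: About 20 stories will be done, with 90% likelihood
```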

What if we want to forecast something other than a release date?

When you create your uncertainty table, instead of tracking how your estimates compared to your release date, track how they compared to the thing you want to forecast.

For example, let’s say you’re using a quarterly release train and your valuable increments “get on the train” well before the actual release happens. You don’t need to forecast when the release will happen, because that’s fixed. Instead, you want to forecast when an increment will get on the train.

To do so, you would create your actual/estimate ratios based on when each increment got on the train, compared to your baseline estimate for each increment, rather than tracking the release as a whole. The rest of the forecast calculation would remain the same.
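As an illustration, the per-increment ratios might be derived from history like this. The data is invented, and using sorted-ratio percentiles as the 50%/90% multipliers is my assumption about a reasonable scheme, not a prescribed procedure:

```python
# Hypothetical history: (baseline estimate in weeks, actual weeks until the
# increment "got on the train") for each of the team's past increments.
history = [(4, 6), (3, 3), (5, 9), (2, 4), (6, 8)]

# One actual/estimate ratio per increment, sorted from best to worst.
ratios = sorted(actual / estimate for estimate, actual in history)

# Pick percentiles of the historical ratios as forecast multipliers: the
# median for "50% likely," a high percentile for "90% likely."
median_ratio = ratios[len(ratios) // 2]
worst_ratio = ratios[-1]
print(median_ratio, worst_ratio)  # 1.5 2.0
```

Multiply an increment's baseline estimate by these ratios to forecast when it will get on the train, just as you would for a release date.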

Your rule-of-thumb actual/estimate ratios are too high. Can we use a lower ratio?

When your forecast gives you bad news, it’s tempting to play with the numbers until you feel happier. Speaking as somebody who’s been there and has the spreadsheets to prove it: this is a waste of time. It won’t change when your software actually releases.

You can use whatever ratio you like, but unless you’re basing your numbers on actual historical data, you’re probably just fooling yourself.

What if we want to get a rough forecast before we start development, without the cost of detailed story planning?

Any approach that doesn’t involve detailed planning will just be based on gut feel. That’s okay. People with a lot of experience can make good gut decisions.

Gather the team’s sponsor, a seasoned product manager or project manager, and a senior programmer or two (preferably ones that will be on the team). Choose people with a lot of experience at your company.

Ask the sponsor to describe the development goals, when work would start, who would be on the team, and the latest release date that would still be worth the cost. Then ask the product manager and programmers if they think it’s possible.

If the answer is “yes,” then it makes sense to invest in a month or two of development so you can make a real forecast.


The Planning Game

To use this approach to forecasting, you need a team that's working on the actual software being forecasted. If your team is new, you should have at least four weeks of development history, and you can only forecast increments whose stories have been sized "just right" with the planning game.

More importantly, though, make sure you really need to forecast. Too many companies ask for forecasts out of habit. Forecasting takes time away from development. Not just the time required to make the forecast itself, but the time required to manage the many emotional responses that surround forecasts, both from team members and stakeholders. It also adds resistance to adapting your plans.

Be clear about who forecasts benefit, why, and how much.

As with everything the team does, you should be clear about who forecasts benefit, why, and how much. Then compare that value against the other ways your team could spend their time. Fixed release dates are often a better choice.


When your team forecasts well:

  • You can coordinate with external events, such as marketing campaigns, that have long lead times.

  • You’re able to coordinate with business stakeholders about upcoming delivery dates.

  • You understand when your team’s costs will exceed its value.

  • You have data to counter unrealistic expectations and deadlines.

Alternatives and Experiments

There are many approaches to forecasting. The one I’ve described has the benefit of being both accurate and easy. However, its dependency on real development stories that are sized “just right” makes it labor-intensive for pre-development forecasts. It also depends on a lot of historical data for best results. (Although the rules of thumb are often good enough.)

An alternative is to use Monte Carlo simulations to amplify small amounts of data. Troy Magennis has a popular set of spreadsheets to do so on his website. (Look for the "Throughput Forecaster.")

The risk of Magennis’ spreadsheet, and similar estimating tools, is that they ask you to estimate sources of uncertainty rather than using historical data. For example, Magennis’ spreadsheet asks the user to guess the number of stories remaining, as a range, as well as a range of how many stories will be added (or “split,” to use its terminology). These guesses have a profound impact on the forecast, but they’re still just guesses.
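To make the contrast concrete, here's a minimal sketch of the Monte Carlo idea. This is not Magennis' actual spreadsheet logic, and the story range and throughput samples are invented; it just shows how guessed ranges and a little real history combine into a probabilistic forecast:

```python
import random

def monte_carlo_weeks(stories_range, throughput_samples, trials=10_000):
    """Simulate completion time by sampling a guessed story count and
    resampling historical weekly throughput until the work runs out."""
    results = []
    for _ in range(trials):
        stories = random.randint(*stories_range)  # guessed range, e.g. (25, 40)
        weeks = 0
        while stories > 0:
            stories -= random.choice(throughput_samples)  # real history
            weeks += 1
        results.append(weeks)
    results.sort()
    # Report the 50th and 90th percentiles of the simulated outcomes.
    return results[trials // 2], results[int(trials * 0.9)]

random.seed(1)
likely, safe = monte_carlo_weeks((25, 40), [4, 6, 5, 7, 3])
print(f"50% likely: {likely} weeks; 90% likely: {safe} weeks")
```

Notice that the simulation's quality hinges on the guessed `stories_range`: garbage in, garbage out, no matter how many trials you run.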

The approach described in this book, on the other hand, uses historical actual/estimate ratios to account for all sources of uncertainty. Anything that has gone wrong in the past is included.

Before you experiment with other forecasting approaches, make sure you understand the fundamentals described in the “How Forecasting Works” section. A good forecast has two characteristics: first, it accounts for uncertainty by speaking in terms of ranges of probabilities, not absolutes; and second, it incorporates as much empirical data as possible—measurements of reality—not just estimates. Otherwise, it’s a house of cards.

Before you go too far down the rabbit hole, though, remember that the best way to forecast is to pick a predefined release date and steer your plans to meet that date exactly.

Further Reading

XXX Further reading to consider:

  • Agile Estimating and Planning (Cohn)

  • Software Estimation: Demystifying the Black Art (McConnell)

  • Software Estimation Without Guessing (Dinwiddie)

  • When Will It Be Done? (Vacanti)

Share your feedback about this excerpt on the AoAD2 mailing list! Sign up here.

For more excerpts from the book, or to get a copy of the Early Release, see the Second Edition home page.

AoAD2 Practice: Stakeholder Demos

Book cover for “The Art of Agile Development, Second Edition.”

Second Edition cover

This is a pre-release excerpt of The Art of Agile Development, Second Edition, to be published by O’Reilly in 2021. Visit the Second Edition home page for information about the open development process, additional excerpts, and more.

Your feedback is appreciated! To share your thoughts, join the AoAD2 open review mailing list.

This excerpt is copyright 2007, 2020, 2021 by James Shore and Shane Warden. Although you are welcome to share this link, do not distribute or republish the content without James Shore’s express written permission.

Stakeholder Demos

Product Managers, Whole Team

We keep it real.

Agile teams produce working software every week, starting with their very first week. This may sound impossible, but it’s not; it’s merely difficult. And the key to learning how to do it well is feedback.

Stakeholder demos are a powerful way of providing your team with the feedback they need. They’re just what they sound like: a demonstration, to stakeholders, of what your team has completed recently, along with a way for stakeholders to try the software for themselves.

The feedback comes in multiple ways. First, the obvious feedback: stakeholders will tell you what they think.

Incremental Requirements
Real Customer Involvement

Although this feedback is valuable, it’s not the most valuable feedback you get from a stakeholder demo. The team’s on-site customers work with stakeholders throughout development, so they should already know what stakeholders want and expect.

So the real feedback provided by stakeholder comments is not the feedback itself, but how surprising that feedback is. If you’re surprised, you’ve learned that you need to work harder to understand your stakeholders.

Another type of feedback is the reactions of the people involved. If the team is proud of their work and stakeholders are happy to see it, that’s a good sign. If team members aren’t proud, or are burned out, or stakeholders are unhappy, something is wrong.

The demo is also a “rubber meets the road” moment for the team. That gives you good feedback about your team’s ability to finish their work. It’s harder to fool yourself into thinking work is done when stakeholders expect a demo they can use.

Finally, the demo provides feedback to stakeholders, too. It shows them that your team is accountable: that you’re listening to their needs and making steady progress. This is vital for helping stakeholders trust that your team has their best interests at heart.

The Demo Cadence

Start by conducting a stakeholder demo every week. Be consistent: always conduct the demo at the same time and place. This will help you establish a rhythm, make it easier for people to attend, and show strong momentum right from the start.

If you use iterations, conduct the demo immediately after the iteration ends. I like to have mine first thing the following morning. This will help your team stay disciplined, because they won’t be able to stretch work into the next iteration.

Feature Toggles

In addition to the demo presentation, provide a way for stakeholders to try the changes on their own. This might take the form of a staging server or—if you’re using feature toggles—special permissions on stakeholders’ accounts.

After you’ve conducted several demos and the excitement of the new work dies down, you’re likely to find that a weekly demo is too frequent for some of your key stakeholders. You can start holding the demo every two weeks instead, or even once a month. Don’t wait longer than that, though; it’s too infrequent for good feedback.

Regardless of the frequency of your demo meetings, continue to share demo software that stakeholders can use every week, or at least every iteration, if your iterations are longer than one week.

How to Conduct a Stakeholder Demo

Anybody on the team can lead the stakeholder demo. The best person to do so is whoever works most closely with stakeholders. (Typically, this will be the team’s product manager.) They’ll speak stakeholders’ language and have the best understanding of their point of view. It also emphasizes how stakeholder needs steer the team’s work.

Incremental Requirements
Real Customer Involvement

Product managers often request that developers lead the demo instead. I see this most often when the product manager doesn’t see themselves as part of the team, or doesn’t feel that they know the software well. Push back on this request. Developers aren’t building the software for the product manager; the whole team, including the product manager, is building the software for stakeholders. The product manager is the face of that effort, so they should lead the demo. Help the product manager be more involved and comfortable by reviewing stories with them as they’re built.

Anybody who’s interested may attend the demo. The whole team, key stakeholders, and executive sponsor should attend as often as possible. Include real customers when appropriate. Other teams working nearby and people who are curious about Agile are welcome as well.

If you have a particularly large audience, you may need to set some ground rules about questions and interruptions to prevent the demo from taking too long.

If there are no interruptions, the demo itself should only take five to ten minutes. Questions and feedback can stretch that to half an hour. If you need more time because you’re getting a lot of feedback, that’s a sign that you should conduct demos more often. On the other hand, if you’re having trouble attracting attendees, or they don’t seem interested, conducting demos less often may give you more meat to share.

Because the meeting is so short, it’s good to start on time, even if some attendees are late. This will send the message that you value attendees’ time. Both the presenter and the demo software should remain available for further discussion and exploration after the meeting.

Once everyone is together, briefly remind attendees about the valuable increment the team is currently working on and why it’s the most important use of the team’s time. Set the stage and provide context for people who haven’t been paying full attention. Then provide an overview of the stories the team worked on since the last demo.

Calmly describe problems and how you handled them.

If you’ve made any changes that stakeholders care about, explain what happened. Don’t sugarcoat or gloss over problems. Full disclosure will raise your credibility. By neither simplifying nor exaggerating problems, you demonstrate your team’s ability to deal with problems professionally. For example:

Demonstrator: In the past two weeks, we’ve been focusing on adding polish to our flight reservation system. It’s already complete, in that we could release it as-is, but we’ve been adding “delighters” to make it more impressive and usable for our customers.

We finished all the stories we had planned, but we had to change the itinerary visualization, as I’ll show you in a moment. It turned out to be too expensive, so we had to find another solution. It’s not exactly what we had planned, but we’re happy with the result.

After your introduction, go through the stories the team worked on. Rather than literally reading each story, paraphrase them to provide context. It’s okay to combine stories or gloss over details that stakeholders might not be interested in. Then demonstrate the result in the software. Stories without a user interface can be glossed over or just described verbally.

Demonstrator: Our first two stories involved automatically filling in the user’s billing information if they’re logged in. First, I’ll log in with our test user... click “reservations”... and there, you can see that the billing information fills in automatically.

Audience member: What if they change their billing information?

Demonstrator: Then we ask them if they want to save the changed information. (Demonstrates.)

If you come to a story that didn’t work out as planned, provide a straightforward explanation. Don’t be defensive; simply explain what happened.

Demonstrator: Our next story involves the itinerary visualization. As I mentioned, we had to change our plans for this. You may remember that our original story was to show flight segments with an animated 3D fly-through. Programmers had some concerns about performance, so they did a test, and it turned out that rendering the animation would be a big hit on our cloud costs.

Audience member: Why is it so expensive? (Demonstrator motions to a programmer to explain.)

Programmer: Some mobile devices don’t have the ability to render 3D animation in the browser, or can’t do it smoothly. So we would have had to do it in the cloud. But cloud GPU time is very expensive. We could have built a cloud version and a client-side version, or maybe cached some of the animations, but we’d need to take a close look at usage stats before we could say how much that would help.

Demonstrator: This was always a nice-to-have, and the increased cloud costs weren’t worth it. We didn’t want to spend extra development time on it either, so we dialed it back to a normal 2-D map. None of our competitors have a map of flight segments at all. We didn’t have enough time left over to animate the map, but after seeing the result (demonstrates), we decided that this was a nice, clean look. We’re going to move on rather than spending more time on it.

Once the demo is complete, tell stakeholders how they can run the software themselves. This is a good way of wrapping up if the demo is running long: let the audience know how they can try it for themselves, then ask if anybody would like a private followup for more feedback or questions.

Two Key Questions

At the end of the demo, leave time to ask your executive sponsor two key questions:1

1Thanks to Joshua Kerievsky of Industrial Logic for introducing me to this technique.

  1. Is our work to date satisfactory?

  2. May we continue?

These questions help keep the team on track and remind your sponsor to speak up if they’re unhappy. You should be communicating well enough with your sponsor that their answers are never a surprise.

Your sponsor isn’t likely to attend all demos, although that’s preferable. You can increase the likelihood of them attending by keeping the demo short. If they don’t attend at all, a team member with product management skills should conduct a private demo, including the two key questions, at least once per month.

Your sponsor may answer “no” to the first question, or they may be clearly reluctant as they answer “yes.” These are valuable early indicators that something is going wrong. After the demo, talk with your sponsor privately and find out what they’re unhappy about. Take immediate action to correct the problem.

Sometimes your sponsor will be unhappy because they expect you to be more productive. The “How to Improve Capacity” section describes how you can do more in the time available.

In rare cases, your sponsor will answer “no” to the second question, meaning that you can’t continue. You should never hear this answer—it indicates a serious breakdown in communication. It’s good to ask anyway. It reminds your sponsor to tell you when they’re unhappy.


If you do hear a “no,” you’re done. Meet with your sponsor after the demo and confirm that they want the team to stop. Let them know that you’re prepared to release what was demonstrated today and you’d like a final week to put the code into mothballs. (See the “As-Built Documentation” section.) Try to find out what went wrong, determine if your team will disband or take on a new purpose, and schedule a milestone retrospective that includes your sponsor, if possible.

Be Prepared

Done Done
Visual Planning

Before the demo, make sure all the stories being demoed are “done done” and you have an installation of the software—on a staging server, perhaps—that includes them. Make sure attendees have a way to try the demo for themselves.

The demo itself doesn’t need to be a polished presentation with glitzy graphics, but you still need to be prepared. You should be able to present the demo in 5-10 minutes, so that means knowing your material and being concise.

To prepare, review the stories that have been finished since the last demo and organize them into a coherent narrative. Decide which stories can be combined for the purpose of your explanation. Look at your team’s purpose and visual plan and decide how each set of stories connects to your current increment, your next release, and the team’s overall mission and vision. Create an outline of what you want to say.

Finally, conduct a few rehearsals. You don’t need a script—speaking off the cuff sounds more natural—but you do want to be practiced. Walk through the things you’re planning to demonstrate and make sure everything works the way you expect and all your example data is present. Then practice what you’re going to say. Do this a few times until you’re calm and confident.

Each time, the demo will take less and less preparation and practice. Eventually, it will become second nature, and preparing for it will only take a few minutes.

When Things Go Wrong

Sometimes, things just don’t work out and you won’t have anything to show, or what you do have will be disappointing.

It’s very tempting in this situation to fake the demo. You might be tempted to show a user interface that doesn’t have any logic behind it, or purposefully avoid showing an action that has a significant defect.

Be clear about your software’s limitations and what you intend to do about them.

It’s hard, but you need to be honest about what happened. Be clear about your software’s limitations and what you intend to do about them. Faking progress leads stakeholders to believe you have greater capacity than you actually do. They’ll expect you to continue at the inflated rate, and you’ll steadily fall behind.

Instead, take responsibility as a team (rather than blaming individuals or other groups), try not to be defensive, and let stakeholders know what you’re doing to prevent the same thing from happening again. Here’s an example:

This week, I’m afraid we have nothing to show. We planned to show you live flight tracking, but we underestimated the difficulty of interfacing with the back-end airline systems. We expected the data to be cleaner than it is, and we didn’t realize we’d need to build out our own test environment.

We identified these problems early on, and we thought we could work around them. We did, but not in time to finish anything we can show you. We should have replanned around smaller slices of functionality so we could still have something to show. Now we know, and we’ll be more proactive about replanning next time.

We expect similar problems with the airline systems in the future. We’ve had to add more stories to account for the changes. That’s used up most of our buffer. We’re still on target for the go-live marketing date, but we’ll have to cut features if we encounter any other major problems between now and then.

I’m sorry for the bad news and I’m available to answer your questions. I can take some now and we’ll have more information after we finish revising our plans later this week.


What do we do if stakeholders keep interrupting and asking questions during the demo?

Questions and interruptions are wonderful. It means stakeholders are engaged and interested.

If you’re getting so many interruptions and questions that you have trouble sticking with the 30-minute time limit, you might need to hold demos more often. Otherwise—especially if it’s one particularly engaged individual—you can ask people to hold further questions until after the meeting.

If you have a lot of questions, it’s okay to plan for meetings longer than 30 minutes, especially in the first month or two. Your most important stakeholders often have a lot of demands on their time, though, so it’s better to plan short meetings so they attend regularly.

What do we do if stakeholders keep nitpicking our choices?

Nitpicking is also normal, and a sign of interest, when you start giving demos. Don’t take it too personally. Write the ideas down on cards, as with any story, and have the on-site customers prioritize them after the meeting. Resist the temptation to address, prioritize, or begin designing solutions in the meeting. Not only does this extend the meeting, it avoids the discipline of the normal planning practices.

If nitpicking continues after the first month or two, it may be a sign that the on-site customers are missing something. Take a closer look at the complaints to see if there’s a deeper problem.

Stakeholders are excited by what they see and want to add a bunch of features. They’re good ideas, but we don’t have time for them—we need to move on to something else. What should we do?

Don’t say “no” during the demo. Don’t say “yes,” either. Simply thank the stakeholders for their suggestions and write them down as stories. After the demo is over, the on-site customers should take a close look at the suggestions and their value relative to the team’s purpose. If they don’t fit into the team’s schedule, a team member with product management skills can communicate that back to stakeholders.


Never fake a stakeholder demo by hiding bugs or showing a story that isn’t complete. You’ll just set yourself up for trouble down the line.

Inability to demo is a clear danger sign.
Task Planning

If you can’t demonstrate progress without faking it, it’s a clear sign that your team is in trouble. Slow down and try to figure out what’s going wrong. If you aren’t using iterations, try using them. If you are, see the “Making and Meeting Iteration Commitments” section and ask a mentor for help. The problem may be as simple as trying to do too much in parallel.


When your team conducts stakeholder demos well:

  • You generate trust with stakeholders.

  • You learn what stakeholders are most passionate about.

  • The team is confident in their ability to deliver.

  • You’re forthright about problems, which allows your team to prevent them from ballooning out of control.

Alternatives and Experiments

Stakeholder demos are a clear indication of your ability to deliver. Either you have completed stories to demonstrate, or you don’t. Either your executive sponsor is satisfied with your work, or they’re not. I’m not aware of any alternatives that provide such valuable feedback.

And it’s feedback that’s the important part of the stakeholder demo. Feedback about your team’s ability to deliver, feedback about your sponsor’s satisfaction, and also the feedback you get from observing stakeholders’ responses and hearing their questions and comments.

As you experiment with stakeholder demos, be sure to keep that feedback in mind. The demo isn’t just a way of sharing what you’re doing. It’s also a way of learning from your stakeholders. Some teams streamline their demos by creating a brief video recording. It’s a clever idea, and worth trying. But it doesn’t give you as much feedback. Be sure any experiments you try include a way to confirm your ability to complete work, to check in with your sponsor, and to learn from your stakeholders.

Share your feedback about this excerpt on the AoAD2 mailing list! Sign up here.

For more excerpts from the book, or to get a copy of the Early Release, see the Second Edition home page.

AoAD2 Practice: Trust

Book cover for “The Art of Agile Development, Second Edition.”

Second Edition cover

This is a pre-release excerpt of The Art of Agile Development, Second Edition, to be published by O’Reilly in 2021. Visit the Second Edition home page for information about the open development process, additional excerpts, and more.

Your feedback is appreciated! To share your thoughts, join the AoAD2 open review mailing list.

This excerpt is copyright 2007, 2020, 2021 by James Shore and Shane Warden. Although you are welcome to share this link, do not distribute or republish the content without James Shore’s express written permission.


Product Managers, Whole Team

We work with our stakeholders effectively and without fear.

I know somebody who worked in a company with two development teams. One was Agile, met its commitments, and delivered regularly. The team next door struggled: it fell behind schedule and didn’t have any working software to show. Yet when the company downsized, they let the Agile team go rather than the other team!

Why? When management looked in on the struggling team, they saw programmers working long hours with their heads down and UML diagrams papering the walls. When they looked in on the Agile team, they saw people talking, laughing, and going home at five with nothing but rough sketches and charts on the whiteboards.

Like it or not, our teams don’t exist in a vacuum. Agile can seem strange and different at first. “Are they really working?” outsiders wonder. “It’s noisy and confusing. I don’t want to work that way. If it succeeds, will they force me to do it, too?”

Ironically, the more successful Agile is, the more these worries grow. Alistair Cockburn calls them organizational antibodies. (He credits Ron Holiday with the term.) If left unchecked, organizational antibodies will overcome and dismantle an otherwise successful Agile team.

No matter how effective you are, you’re in trouble without the goodwill of your stakeholders.

No matter how effective you are at delivering software, you’re in trouble without the goodwill of your stakeholders and sponsor. Yes, meeting schedules and technical expectations helps, but the interpersonal skills—soft skills—your team exhibits may be just as important to building trust in your team.

Does this sound unfair or illogical? Surely your ability to deliver high-quality software is all that really matters!

It is unfair. It is illogical. It’s also the way people think. If your stakeholders don’t trust you, they won’t collaborate with your team, which hurts your ability to deliver valuable software. They might even campaign against you.

Don’t wait for your stakeholders to realize how your work can help them. Show them.

Show Some Hustle

Things may come to those who wait, but only the things left by those who hustle.1

1Thanks to George Dinwiddie for this quote.

Abraham Lincoln

Many years ago, I hired a small local moving company to move my belongings from one apartment to another. When the movers arrived, I was impressed to see them hustle—they moved as quickly as possible from the van to the apartment and back. This was particularly unexpected because I was paying them by the hour. There was no advantage for them to move so quickly.

Those movers impressed me. I felt that they were dedicated to meeting my needs and respecting my pocketbook. If I still lived in that city and needed to move again, I would hire them in an instant. They earned my goodwill—and my trust.

Energized Work
Informative Workspace
Stakeholder Demos

In the case of a software team, hustle is energetic, productive work. It’s the sense that the team is putting in a fair day’s work for a fair day’s pay. Energized work, an informative workspace, stakeholder demos, and appropriate roadmaps all help convey this feeling of productivity. Perhaps most important, though, is attitude: during work hours, treat work as a welcome priority that deserves your full attention, not a burden to be avoided.

Show Some Empathy

Development teams often have contentious relationships with key business stakeholders. From the perspective of developers, this takes the form of unfair demands and bureaucracy, particularly imposed deadlines and schedule pressure.

So it might be a surprise to learn that, for many of those stakeholders, developers are the ones holding all the cards. Stakeholders are in a scary situation, especially in companies that aren’t in the business of selling software. Take a moment to think about what it might be like:

  • Sponsors, product managers, and key stakeholders’ careers are often on the line. Developers’ careers often aren’t.

  • Developers often earn more than stakeholders, apparently without the hard work and toeing of lines that stakeholders have to put in.

  • Developers often come to work much later than stakeholders. They may leave later, too, but stakeholders don’t see that.

  • To outsiders, developers often don’t seem particularly invested in success. They seem to be more interested in things like learning new technologies, preparing for their next job hop, work/life balance, and office perks like ping-pong tables and free snacks.

  • Experienced stakeholders have a long history of developers failing to deliver what they needed at the time that they needed it.

  • Stakeholders are used to developers responding to questions about progress, estimates, and commitments with everything from condescending arrogance to well-meaning but unhelpful technobabble.

  • Many stakeholders can see that big tech companies deliver software well, but their own company rarely does, and they don’t know why.

Think about what success and failure mean to your stakeholders.

I’m not saying developers are bad, or that these perceptions are necessarily true. I’m asking you to think about what success and failure mean to your stakeholders, and to consider whether, from the outside, your team appears to treat success with the respect it deserves.

Deliver on Commitments

If your stakeholders have worked with software teams before, they probably have plenty of war wounds from slipped schedules, unfixed defects, and wasted money. But at the same time, they probably don’t have software development skills themselves. That puts them in the uncomfortable position of relying on your work, having had poor results before, and being unable to tell if your work is any better.

Meanwhile, your team consumes tens of thousands of dollars every month in salary and support. How do stakeholders know whether you’re spending their money wisely? How do they know that the team is even competent?

Stakeholders may not know how to evaluate your process, but they can evaluate results. Two kinds of results speak particularly clearly to them: working software and delivering on commitments. For some people, that’s what accountability means: you did what you said you would.

Task Planning
Stakeholder Demos

Fortunately, Agile teams can deliver both of those results every week. You can use iteration-based task plans to make a commitment every week, and you can demonstrate that you’ve met that commitment, exactly one week later, with a stakeholder demo. You can also use release trains to create a similar cadence for releases, and steer your plans so you always release precisely on time, as described in the “How to Steer Your Plans” section.

This week-in, week-out delivery builds stakeholder trust like nothing I’ve ever seen. It’s extremely powerful. All you have to do is create a plan that you can achieve... and then achieve it. Again and again and again.

Manage Problems

Did I say, “All you have to do?” Silly me. It’s not that easy.

First, you need to plan and execute well (see the “Planning” chapter and the “Ownership” chapter). Second, as the poet said, “The best laid schemes o’ mice an’ men / Gang aft a-gley.”2

2“To a Mouse,” by renowned Scottish poet Robert Burns. The poem starts, “Wee, sleekit, cow’rin, tim’rous beastie, / O, what a panic’s in thy breastie!” Reminds me of how I felt when asked to integrate a year-old feature branch.

In other words, some releases don’t sail smoothly into port on the last day. What do you do when your best laid plans gang a-gley?

Actually, that’s your chance to shine. Anyone can look good when life goes according to plan. Your true character shows when you deal with unexpected problems.

The first thing to do is to limit your exposure to problems. Work on your hardest, most uncertain stories early in the release. You’ll find problems sooner, and you’ll have more time to fix them.

Stand-Up Meetings
Task Planning

When you encounter a problem, start by letting the whole team know about it. Bring it up in the next stand-up meeting at the very latest. This gives the entire team a chance to help solve the problem.

Iterations are also a good way to notice when things aren’t going to plan. Check your progress at every stand-up. If the setback is relatively small, you might be able to absorb it by using some of your iteration slack. Otherwise, you’ll need to revise your plans, as described in the “Making and Meeting Iteration Commitments” section.

The bigger the problem, the sooner you should disclose it.

When you identify a problem you can’t absorb, let key stakeholders know about it. They’ll appreciate your professionalism even if they don’t like the problem. I usually wait until the stakeholder demo to explain problems that we solved on our own, but bring bigger problems to stakeholders’ attention right away. Team members with political savvy should decide who to talk to and when.

The sooner your stakeholders know about a problem (and believe me, they’ll find out eventually), the more time they have to work around it. In my experience, it’s not the existence of problems that makes stakeholders most upset—it’s being blindsided by them.

When you bring a problem to stakeholders’ attention, bring mitigations too. It’s good to explain the problem, and it’s better to explain what you’re planning to do about it. It can take a lot of courage to have this discussion—but addressing a problem successfully can do wonders for building trust.

Beware of the temptation to work overtime or cut slack in order to make up for lost time. Although this can work for a week or two, it can’t solve systemic problems, and it will create problems of its own if allowed to continue.

Respect Customers’ Goals

Team Development

When Agile teams first form, it usually takes individual team members a while to think of themselves as part of a single team. In the beginning, developers and customers often see themselves as separate groups.

New on-site customers tend to be particularly skittish. Being part of a development team feels awkward; they’d rather work in their normal offices with their normal colleagues. Not only that, if on-site customers are unhappy, those colleagues—who often have a direct line to the team’s key stakeholders—will be the first to hear about it.

When forming a new Agile team, make an effort to welcome on-site customers. One particularly effective way to do so is to treat customer goals with respect. This may even mean suppressing, for a time, cynical developer jokes about schedules and suits.

(Being respectful goes both ways, of course, and customers should also suppress their natural tendencies to complain about schedules and argue with estimates. I’m emphasizing customers’ needs here because they play such a big part in stakeholder perceptions.)

Another way for developers to take customer goals seriously is to come up with creative alternatives for meeting those goals. If customers want something that may take a long time or that involves tremendous technical risks, suggest alternate approaches to reach the same underlying goal for less cost. Similarly, if there’s a more impressive way of meeting a goal that customers haven’t considered, bring it up, especially if it’s not too hard.

As the team has these conversations, barriers will be broken and trust will develop. As stakeholders see that, their trust in the team will blossom as well.

You can also build trust directly with stakeholders. Consider this: the next time a stakeholder stops you in the hallway with a request, what would happen if you immediately and cheerfully listened to their request, wrote it down as a story on an index card, and then brought them both to the attention of a product manager for scheduling or further discussion?

This might be a ten-minute interruption for you, but imagine how the stakeholder would feel. You responded to their concern, helped them express it, and took immediate steps to get it into the plan.

That’s worth infinitely more to them than firing an email into the black hole of your request tracking system.

Be Open

When your company is new to Agile, other people in the company are likely to be curious, and a little wary, about your team’s strange new approach to software development. This curiosity can easily turn to resentment if your team seems insular or stuck up.

So be open about what you’re doing. One team posted pictures and charts on the outer wall of their team room that showed what they were working on and how it was progressing. Another invited anyone and everyone in the company to attend its stakeholder demos.

You can be open in many ways. Consider holding brown-bag lunch sessions describing your process, public code-fests in which you demonstrate your code and Delivering practices, or an “Agile open house day” in which you invite people to see what you’re doing and even participate for a little while. I’ve even heard of people wearing buttons or hats around the office that say “Ask me about Agile.”

Be Honest

In your enthusiasm to demonstrate progress, be careful not to step over the line. Borderline behavior includes glossing over known defects in a stakeholder demo, taking credit for stories that aren’t 100% complete, and extending an iteration deadline for a few days in order to finish everything in the iteration plan.

These are minor frauds, yes. You may even think that “fraud” is too strong a word—but all of these behaviors give stakeholders the impression that you’ve done more than you actually have.

There’s a practical reason not to do these things, too. Stakeholders will expect you to complete the remaining stories just as quickly, when in fact you haven’t even finished the first set. You’ll build up a backlog of work that looks done, but isn’t. At some point, you’ll have to finish that backlog, and the resulting delay will produce confusion, disappointment, and even anger.


Even scrupulously honest teams can run into this problem. In a desire to look good, teams sometimes sign up for more stories than they can implement well. They get the work done, but they take shortcuts and don’t do enough design and refactoring. The design suffers, defects creep in, and the team finds itself suddenly slowed while they struggle to improve internal quality.

Similarly, don’t yield to the temptation to count partially completed stories toward your capacity. If a story isn’t completely finished, it counts as zero. Don’t take partial credit. There’s an old programming joke: the first 90% of the work takes 90% of the time... and the last 10% of the work takes 90% of the time. Until the story is totally done, it’s impossible to say for certain what percentage has been done.


Why is it our responsibility to create trust? Shouldn’t stakeholders do their part?

You’re only in charge of yourselves. Ideally, stakeholders are working hard to make the relationship work, too, but that’s not under your control.

Isn’t it more important that we be good rather than look good?

Both are important. Do great work and make sure your organization knows it.

Why bring big problems to stakeholders’ attention before smaller, already-solved problems? That seems backward.

The sooner you disclose a problem, the more time you have to solve it.

Problems tend to grow over time. The sooner you disclose a problem, the more time you have to solve it. It reduces panic, too: early on, people are less stressed about deadlines and have more mental energy for problems.

You said developers should keep jokes about the schedule to themselves. Isn’t this just the same as telling developers to shut up and meet the schedule, no matter how ridiculous?

Certainly not. Everybody on the team should speak up and tell the truth when they see a problem. However, there’s a big difference between discussing a real problem and simply being cynical.

I’ve met many developers with cynical tendencies. That’s okay, but remember that customers’ careers are often on the line. They may not be able to tell the difference between a real joke and a complaint disguised as a joke. An inappropriate joke can set their adrenaline pumping just as easily as a real problem.


Commitments are a powerful tool for building trust, but only if you meet them. Don’t make commitments to stakeholders before you’ve proven your ability to make and meet commitments privately, within the team.


When your team establishes trust with your organization and stakeholders:

  • Stakeholders believe in your team’s ability to meet their needs.

  • You acknowledge mistakes, challenges, and problems rather than hiding them until they blow up.

  • Everyone involved seeks solutions rather than blame.

Alternatives and Experiments

Trust is vital. There are no alternatives.

There are, however, many ways of building trust. This is a topic with a long history, and the only truly new idea Agile brings to the table is the ability, using iterations, to make and meet commitments on a weekly basis. Other than that, feel free to take inspiration from the many existing resources on relationship building and trust.

Further Reading

The Trusted Advisor [Maister et al. 2000] is a good resource on generating trust.

The Power of a Positive No: How to Say No and Still Get to Yes [Ury 2007] describes how to say no while preserving important relationships. Diana Larsen describes this ability as “probably more important than any amount of negotiating skill in building trust.”

AoAD2 Practice: “Done Done”

“Done Done”

Whole Team

We’re done when we’re production-ready.

“Hey, Valentina!” Shirley sticks her head into Valentina’s office. “Did you finish that new feature yet?”

Valentina nods. “Hold on a sec,” she says, without pausing in her typing. A flurry of keystrokes crescendos and then ends with a flourish. “Done!” She swivels triumphantly to look at Shirley. “It only took me half a day, too.”

“That’s impressive,” says Shirley. “We figured it would take at least a day, probably two. Can I look at it now?”

“Well, not quite,” says Valentina. “I haven’t integrated the new code yet.”

“Okay,” Shirley says. “But once you do that, I can look at it, right? I’m eager to show it to our new clients. They picked us specifically for this feature. I’m going to deploy the new build on their test bed so they can play with it.”

Valentina frowns. “Well, I wouldn’t show it to anybody yet. I haven’t tested it. And you can’t deploy it anywhere—I haven’t updated the deploy script or the migration tool.”

“I don’t understand,” Shirley grumbles. “I thought you said you were done!”

“I am,” insists Valentina. “I finished coding it just as you walked in. Here, I’ll show you.”

“No, no, I don’t need to see the code,” Shirley replies. “I need to show this to our customers. I need it to be finished. Really finished.”

“Well, why didn’t you say so?” says Valentina. “This feature is done—it’s all coded up. It’s just not done done. Give me a few more days.”

Production-Ready Software

A completed story is ready to release.

Wouldn’t it be nice if, once you finished a story, you never had to come back to it? That’s the idea behind “done done.” A completed story isn’t a lump of unintegrated, untested code. It’s ready to go. When the other stories planned for your current release are done, you can release without doing any further work.

Partially finished stories increase your work in progress, and this increases your costs, as “Key Idea: Minimize Work in Progress” describes. When your stories aren’t done, rather than pushing a button to release, you have to complete an unpredictable amount of work. This destabilizes your release plans and prevents you from making and meeting commitments.

Task Planning

To avoid this problem, make sure your stories are “done done.” If you’re using iteration-based task planning, all the stories in the iteration should be done at the end of each iteration. If you’re using continuous flow, stories should be done before you take them off the board. You should have the technical ability to release every completed story, even if you don’t actually do so.

What does it take for a story to be “done done?” That depends on your organization. Create a definition of done that shows your team’s story completion criteria. I write mine on the task planning board:

  • Tested (all unit and integration tests finished)

  • Coded (all code written)

  • Designed (code refactored to the team’s satisfaction)

  • Integrated (the story works from end to end—typically, UI to database—and fits into the rest of the software)

  • Builds (the build script works with the changes)

  • Deploys (the deploy script deploys the changes)

  • Migrates (the deploy script updates database schema and migrates data, when needed)

  • Reviewed (customers have reviewed the story and confirmed that it meets their expectations)

  • Fixed (all known bugs have been fixed or scheduled as their own stories)

  • Accepted (customers agree that the story is finished)

Some teams add “Documented” to this list, meaning that the story has documentation, help text, and meets any other documentation standards. (See the “Documentation” section.)


Other teams add non-functional criteria to this list, such as performance or scalability expectations. This can lead to premature optimization, or to difficulty getting stories done, so I prefer to plan these sorts of non-functional requirements with dedicated stories. A compromise I learned from Bill Wake is to check expectations as part of your “done done” checklist, but not act on them. Like this: “Create a performance story if response time is more than 500ms.”
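
That “check, but don’t act” compromise is easy to automate. Here’s a minimal sketch in Python; the `check_response_time()` name and the 500ms budget are illustrative, not from any particular team’s checklist:

```python
# "Check, but don't act" done-done item: flag a follow-up story
# instead of optimizing now. Names and threshold are illustrative.

RESPONSE_TIME_BUDGET_MS = 500

def check_response_time(measured_ms):
    """Return a follow-up story title if the budget is exceeded, else None."""
    if measured_ms > RESPONSE_TIME_BUDGET_MS:
        return (f"Performance story: response time {measured_ms}ms "
                f"exceeds {RESPONSE_TIME_BUDGET_MS}ms budget")
    return None  # within budget; nothing to schedule

print(check_response_time(650))  # suggests scheduling a performance story
print(check_response_time(320))  # within budget
```

The point is that exceeding the budget produces a story for the planning game, not an immediate change to the code.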

How to Be “Done Done”

Make a little progress on every aspect of your work every day.

Agile works best when you make a little progress on every aspect of your work every day, rather than working in phases or reserving the last few days of your iteration for getting stories “done done.” This is an easier way to work, once you get used to it, and it reduces the risk of having unfinished work at the end of the iteration. However, it does rely on some Delivering zone practices.

Test-Driven Development
Continuous Integration
Zero Friction

Programmers, use test-driven development to combine testing, coding, and designing. As you work, integrate with the rest of the team’s work by using continuous integration. Incrementally improve your build and deployment automation with every task that needs it. Create tasks for database migration, when appropriate, and work on them as part of each story.

Just as importantly, include your on-site customers. When you work on a UI task, show an on-site customer your progress, even if the UI doesn’t work yet. Customers often want to tweak a UI when they see it for the first time. This can lead to a surprising amount of last-minute work.

Similarly, as you finish tasks and integrate the various pieces of a story, run the code to make sure everything works together. While this shouldn’t take the place of automated testing, it’s good to do a sanity check to make sure there aren’t any surprises.

No Bugs

Throughout this process, you may find mistakes, errors, or outright bugs. When you do, fix them right away—then improve your work habits to prevent that kind of error from occurring again.

When you believe the story is “done done,” show it to your on-site customers for final review and acceptance. Because you reviewed your progress with them throughout the iteration, this should only take a few minutes.

Making Time

Your team should finish 4-10 stories every week. Getting that many stories “done done” may seem like an impossibly large amount of work. Part of the trick is to work incrementally, as just described, rather than in phases. The real secret, though, is to create small stories.

Many teams new to Agile create stories that are too large to get “done done.” They finish coding, but they don’t have enough time to finish everything. The UI is a little off, the tests are incomplete, and bugs sneak through the cracks.

Remember, you own your schedule.

Remember, you own your schedule. You decide how many stories to sign up for and how big they are. If your stories are too big, make them smaller! (See the “Splitting and Combining Stories” section.)


Creating large stories is a natural mistake, but some teams compound the problem by thinking, “Well, we really did finish the story, except for that one little bug.” They count it towards their capacity, which just perpetuates the problem.

Stories that aren’t “done done” don’t count toward your capacity. Even if a story only has a few minor UI bugs, or you finished everything except the last few automated tests, it counts as a zero when calculating your capacity. This will lower your capacity, giving you more time, so you can finish everything next time.
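
The “counts as zero” rule is simple arithmetic. A sketch, assuming stories are tracked as (estimate, done done) pairs (the representation is illustrative, not prescribed):

```python
# "Done done" capacity rule: partially finished stories count as zero.
# The story representation here is illustrative.

def capacity(stories):
    """Sum estimates of stories that are completely done; others count as 0."""
    return sum(estimate for estimate, done_done in stories if done_done)

iteration = [
    (3, True),   # done done: counts
    (2, True),   # done done: counts
    (5, False),  # "everything but two tests": counts as zero
]

print(capacity(iteration))  # → 5, not 10
```

That lower number is the whole point: next iteration’s plan is based on what actually got finished, not on what was almost finished.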

You may find that this lowers your capacity so much that you can only finish one or two stories per week. This means that your stories were too large to begin with. Split the stories you have and work on making future stories smaller.

Teams using continuous flow rather than iterations don’t track capacity, but the same idea applies. You should start and finish 4-10 stories in a single week, and each one should be “done done.” If they aren’t, make your stories smaller.


What if a story isn’t “done done” at the end of an iteration?

You’ll either try again later or make a new story for what’s left. See the “Incomplete Stories” section.


Whole Team
Team Room
Test-Driven Development
Evolutionary Design

Getting stories “done done” requires a whole team—one that includes customers, at a minimum, and possibly also testers, operations, technical writers, and more. The team needs to share a team room, either physical or virtual. Otherwise, the team is likely to have too many hand-off delays to finish stories quickly.

You’re also likely to need test-driven development and evolutionary design in order to test, code, and design each story in such a short timeframe.


When your stories are “done done:”

  • You avoid unexpected batches of work.

  • Teams using iterations spread wrap-up and polish work throughout the iteration.

  • On-site customers and testers have a steady workload.

  • The final customer acceptance review only takes a few minutes.

  • When you demonstrate your stories to stakeholders, they work to their satisfaction.

Alternatives and Experiments

This practice is the cornerstone of Agile planning. If you aren’t “done done” after every story or iteration, your capacity and forecasting will be unreliable. You won’t be able to release at will. This will disrupt your release planning and prevent you from making and meeting commitments, which will in turn damage stakeholder trust. That’s likely to lead to increased stress and pressure on the team, hurt team morale, and damage the team’s capacity for energized work.

The alternative to being “done done” is to fill the end of your schedule with make-up work. You will end up with an indeterminate amount of work to fix bugs, polish the UI, migrate data, and so forth. Although many teams operate this way, it will damage your credibility and ability to deliver. I don’t recommend it.

AoAD2 Chapter: Ownership (introduction)


Top-notch execution lies in getting the details right, and no one understands the details better than the people who actually do the work.

Lean Software Development

Agile teams own their work. They decide for themselves what to work on, how to break it into tasks, and who on the team will do it. This is due to a fundamental Agile principle: the people who are doing the work are the ones who best understand what needs to be done. They’re the ones most qualified to decide the details.

When teams take ownership of their work, they also take responsibility for getting it done.

Ownership isn’t just about control, though. It’s also about responsibility. When teams take ownership of their work, they also take responsibility for getting it done.

This chapter has the practices you need to take ownership of your work and successfully get it done:

  • The “Task Planning” practice: Break stories into tasks and decide how they’ll get done.

  • The “Capacity” practice: Stabilize your short-term plans by signing up for what you can actually complete.

  • The “Slack” practice: Improve capacity and make reliable short-term commitments.

  • The “Stand-Up Meetings” practice: Coordinate every day about how your team will finish their work.

  • The “Informative Workspace” practice: Surround your team with useful information.

  • The “Customer Examples” practice: Collaborate with experts to understand tricky details.

  • The “Done Done” practice: Create software that’s ready to be released.

Two key ideas are central to ownership. The first was included in the “Teamwork” chapter and the second is included in this chapter:

  • “Key Idea: Self-Organizing Teams”: Agile teams decide for themselves what to work on, who will do it, and how the work will be done.

  • “Key Idea: Collective Ownership”: Team members take joint responsibility for making the team’s work a success.

XXX Further reading to consider:

  • Turn the Ship Around

Bill Wake reading recommendations:

  • Facilitator's Guide to Participatory Decision-Making, by Sam Kaner - or another facilitation book

  • Getting It Done: How to Lead When You're Not in Charge, by Roger Fisher and Alan Sharp

  • Quality Software Management (series), Jerry Weinberg - Not something I used day-to-day but good background

  • Maybe: The Effective Manager, by Mark Horstmann. - I haven't read his book, but I used to follow their podcasts, and attended their 2-day training years ago. It's not an agile perspective, but rather focused on managing for organizational results, with concrete advice on coaching, one-on-ones, feedback, & delegation. (You'd want to vet this - the perspective may not be a fit.)

AoAD2 Practice: Customer Examples

Customer Examples

Customers, Whole Team

We implement tricky details correctly.

Some software is straightforward: just another UI on top of yet another database. But often, the software that’s most valuable is the software that involves specialized expertise.

This specialized expertise, or domain knowledge, is full of details that are hard to understand and easy to get wrong. To communicate these details, use customer examples: concrete examples illustrating domain rules.

Whole Team
Real Customer Involvement

To create customer examples, you’ll need to talk to people with domain expertise. Ideally, you have people with those skills as part of your team. If not, you’ll have to go find them.

Your team might include people who have developed a layperson’s understanding of the domain. Programmers, testers, and business analysts often fall into this category. They may be able to create customer examples themselves. Even so, it’s a good idea to review those examples with real experts. There can be tricky details that a layperson will get wrong.

To create and use the examples, follow the Describe, Demonstrate, Develop process.


Task Planning
Customer examples are for communication.

During task planning, look at your stories and decide whether there are any details that developers might misunderstand. Add tasks for creating examples of those details. You don’t need to provide examples for everything: just the tricky details. Customer examples are for communication, not for proving that the software works.

For example, if one of your stories is “Allow invoice deleting,” you don’t need to provide an example of deleting an invoice. Developers already understand what it means to delete something. However, you might need examples that show when it’s okay to delete an invoice, particularly if there are complicated rules to ensure that invoices aren’t deleted inappropriately.

If you’re not sure what developers might misunderstand, ask them! But err on the side of providing too many examples, at least at first. When domain experts and developers first sit down to create examples, both groups are often surprised by the extent of existing misunderstandings.

When you’re ready to work on the examples, gather the team around a whiteboard, or a shared document if the team is remote. The whole team can participate. At a minimum, you’ll need a domain expert, all the programmers, and all the testers. They all need to be able to understand the details so they can work on them when needed. (See “Key Idea: Collective Ownership”.)

Start by summarizing the story and the rules involved. Be brief: this is just an overview. Save details for the examples. For example, a discussion of invoice deletion might go like this:

Expert: One of our stories is to add support for deleting invoices. In addition to the UI mock-ups we gave you, we thought some customer examples would be a good idea. Deleting invoices isn’t as simple as it appears because we have to maintain an audit trail.

There are a bunch of rules around this issue. In general, it’s okay to delete invoices that haven’t been sent to customers, so people can delete mistakes. But once an invoice has been sent to a customer, it can only be deleted by a manager. Even then, we have to save a copy for auditing purposes.


Make rules concrete by providing examples.

Once you’ve provided an overview, resist the temptation to keep describing rules. Instead, make the rules concrete by providing examples. Developers, you can get the ball rolling by proposing an example, but try to get the domain expert to take the lead. One trick is to make a deliberate mistake and allow the domain expert to correct you.1

1I learned this trick from Ward Cunningham. A variant was later popularized by Steven McGeady as Cunningham’s Law: “The best way to get an answer on the Internet is not to ask a question; it’s to post the wrong answer.”

Tables are often the most natural way to provide examples, but you don’t need to worry about formatting. Just get examples on the whiteboard or shared document. The scenario might continue like this:

Programmer: So if an invoice hasn’t been sent, an account rep can delete the invoice, and if it has been sent, they can’t. (Picks up a marker and writes on whiteboard.)

User             Sent   Can delete?
Account Rep      N      Y
Account Rep      Y      N

Expert: That’s right.

Programmer (deliberately getting it wrong): But a CSR can.

User             Sent   Can delete?
Account Rep      N      Y
Account Rep      Y      N
CSR              Y      Y

Expert: No, a CSR can’t, but a manager can. (Programmer hands marker to expert).

User             Sent   Can delete?
Account Rep      N      Y
Account Rep      Y      N
CSR              Y      N
Manager          Y      Y, but audited

Tester: What about a CSR supervisor? Or an admin?

Expert: CSR supervisors don’t count as managers, but admins do. But even admins leave an audit trail.

User             Sent   Can delete?
Account Rep      N      Y
Account Rep      Y      N
CSR              Y      N
Manager          Y      Y, but audited
CSR Supervisor   Y      N
Admin            Y      Y, but audited

Expert: To add another wrinkle, “sent” actually means anything that could have resulted in a customer seeing the invoice, regardless of whether they actually did.

  • Exported as PDF

  • Exported to URL

Tester: What about previews?

Expert: Nobody’s ever asked me that before. Well, obviously... um... okay, let me get back to you on that.

This conversation continues until all relevant details have been worked out, with programmers and testers asking questions to fill in gaps. Expect there to be some questions that customers haven’t considered before.

As you dig into the details, continue creating specific examples. It’s tempting to talk in generalities, such as “Anyone can delete invoices that haven’t been sent,” but it’s better to create concrete examples, such as “An account rep can delete an invoice that hasn’t been sent.” This will help expose gaps in people’s thinking.

You may discover that you have more to discuss than you realized. The act of creating specific examples often reveals scenarios customers hadn’t considered. Testers are particularly good at finding these gaps. If you have a lot to discuss, consider splitting up so programmers can start implementing while customers and testers chase down additional examples.


When you’ve fleshed out the details, record the results for future reference. A simple photo of the whiteboard is often enough.

Test-Driven Development

The customer examples often represent some of the most important logic in your application. Be sure to document it. My preferred approach is to create automated tests. Rather than blindly copying every example into a corresponding test, though, I use the examples as inspiration for more carefully thought-out tests that can act as documentation for other programmers. To do so, I print out a copy of the examples and use test-driven development to build my tests and code incrementally. As I write each test and its corresponding code, I check off the examples that the test covers.
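As a rough sketch of what that might look like, here's one way to turn the whiteboard examples into tests. The `can_delete_invoice()` function, its role names, and its return convention are all invented for illustration; they aren't code from any real invoicing system:

```python
# Hypothetical sketch only: can_delete_invoice() and its role names are
# invented for illustration, not taken from a real codebase.

def can_delete_invoice(role, sent):
    """Return (allowed, audited) for an attempt to delete an invoice."""
    if role in ("manager", "admin"):
        return (True, True)       # allowed, but leaves an audit trail
    if role == "account rep" and not sent:
        return (True, False)      # unsent invoices can be deleted freely
    return (False, False)         # everyone else can't delete

# Each test mirrors one whiteboard example; check it off as it's covered.
assert can_delete_invoice("account rep", sent=False) == (True, False)
assert can_delete_invoice("account rep", sent=True) == (False, False)
assert can_delete_invoice("csr", sent=True) == (False, False)
assert can_delete_invoice("manager", sent=True) == (True, True)
assert can_delete_invoice("csr supervisor", sent=True) == (False, False)
assert can_delete_invoice("admin", sent=True) == (True, True)
print("all whiteboard examples covered")
```

Each assertion corresponds to one row of the whiteboard table, which makes it easy to check off examples as the tests cover them.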

As you develop, the rigor required by code is likely to reveal some more edge cases you hadn’t considered. It’s okay to go back to the whiteboard. It’s also okay to just ask a question, get an answer, and code it up. Either way, update your tests or other documentation.

Questions

Should we create examples prior to starting development on a story?

The Planning Game
Incremental Requirements

It shouldn’t be necessary. If you need to explore a few examples during the planning game in order to size a story, you can, but you don’t need to do so in general. Remember that requirements, including customer examples, should be developed incrementally, along with the rest of your software.

Prerequisites

Many stories are straightforward enough that they don’t need customer examples. Don’t try to force them where they’re not needed.

When you do need customer examples, you also need domain expertise. If you don’t have any experts on your team, you’ll need to make an extra effort to involve them.

Indicators

When your team uses customer examples well:

  • Your software has few, if any, domain logic bugs.

  • Your team discusses domain rules in concrete, unambiguous terms.

  • Your team often discovers and accounts for special-case domain rules nobody had considered.

Alternatives and Experiments

Some teams like to use natural-language test automation tools, such as Cucumber, to turn customer examples into automated tests. I used to be one of them—Ward Cunningham’s Framework for Integrated Test (Fit) was the first such tool in the Agile community, and I was heavily involved with it.

But, over time, I realized that the value of the examples was in the whiteboard conversation, not the automation. In theory, customers would help write Fit tests and use Fit's output to gain confidence in the team's progress. In practice, that rarely happened, and the automation added little value. Regular test-driven development was an easier way to automate and worked just as well. The same is true of tools such as Cucumber.

Cucumber stems from the behavior-driven development (BDD) community, founded by Daniel Terhorst-North, which has long been a strong proponent of customer collaboration. Although I don’t think tools such as Cucumber are necessary, the BDD community is also a good source of ideas for experiments. One such idea is example mapping, a way of collecting examples that uses index cards. [Wynne 2015]

You’re welcome to explore other options for creating customer examples, too. Try the simple, collaborative, whiteboard-based approach several times first, so you have a baseline to compare against. When you do experiment with other options, remember that customer examples are a tool for collaboration and feedback, not automation or testing. Be sure that your experiments enhance that core principle rather than distracting from it.


AoAD2 Practice: Informative Workspace


Informative Workspace

Whole Team

We are tuned in to our progress.

Your workspace is the cockpit of your development effort. Just as a pilot surrounds themself with information necessary to fly a plane, use an informative workspace to surround your team with information necessary to steer their work.

An informative workspace broadcasts information into the team room. On in-person teams, when people take a break, they will sometimes wander around and stare at the information surrounding them. That brief zone-out can result in an “aha” moment of discovery.

On remote teams, it’s harder to get the same “always visible” effect, but the same principles apply. Create opportunities for people to absorb information without having to consciously seek it out.

An informative workspace also allows people to sense the team’s progress just by walking into the room—or logging in, in the case of a virtual team room. It conveys status information without interrupting team members and helps improve stakeholder trust.

Subtle Cues

An informative workspace constantly broadcasts information to the team.

The essence of an informative workspace is information. An informative workspace constantly broadcasts information to the team. This takes the form of “big visible charts,” as described below, but it also takes the form of subtle cues that allow team members to maintain their situational awareness.

One source of situational awareness is seeing what people are doing. In a physical team room, if someone’s changing the visual plan, they’re probably thinking about upcoming work. If someone’s standing by the task board, they’re probably open to discussing what to work on next. And if it’s mid-iteration and more than half the cards on the task board aren’t done, the team is going slower than expected.

Energized Work

The feel of the room is another cue. A healthy team is energized. There’s a buzz in the air—not of tension, but activity. People converse, work together, and make the occasional joke. It’s not rushed or hurried, but it’s clearly productive. When a person or a pair needs help, others notice, lend their assistance, then return to their tasks. When someone completes something well, everyone celebrates for a moment.

An unhealthy team is quiet and tense. Team members don’t talk much, if at all. It feels drab and bleak. People live by the clock, punching in and punching out—or worse, watching to see who is the first to dare to leave.


In a remote team, these cues are lost. Instead, make an extra effort to communicate status and mood. Establish working agreements around sharing information, such as leaving notes in the group chat and providing ways to check in with each other.

An informative workspace also provides ways for people to communicate. For in-person teams, this means plenty of whiteboards around the walls and stacks of index cards. A collaborative design sketch on a whiteboard can often communicate an idea far more quickly and effectively than a half-hour presentation. Index cards are great for retrospectives, planning, and creating visualizations.

For remote teams, the team’s virtual whiteboarding tool serves the same purpose. Some teams also establish one or two shared documents as the team’s “wall” of useful information. You can also improve your situational awareness by keeping the virtual task planning board always visible on a separate monitor or tablet, so you notice when people make changes.

Big Visible Charts

An essential aspect of an informative workspace is the big visible chart. The goal of a big visible chart is to display information so simply and unambiguously that it communicates from across the room.

The task planning board (such as the “A Task Grid” figure) and visual planning board (such as the “A Cluster Map” figure) are ubiquitous examples of such a chart. You’ll see variations of these boards in every Agile team, although many hide them away in an electronic tool. By using a physical board, you create an information radiator that constantly projects information into the room.

Another useful chart is a team calendar, which shows important dates, iterations, and when team members will be out of the office (along with contact information, when appropriate). For in-person teams, a large plastic perpetual calendar works well.

I also like to keep the team’s purpose—their vision, mission, and mission tests—prominently posted. It tends to fade into the background after a few weeks, but it’s good to be able to point to it when needed.

Don’t let electronic tools constrain what you can do.

Avoid the reflexive temptation to computerize your informative workspace. Your team needs to be able to change their process any time somebody comes up with a good idea. With flip chart paper, tape, and markers, the elapsed time from idea to chart on the wall is two or three minutes. In a physical team room, nothing else is as flexible or convenient. Electronic tools take longer and are limited by their programming. Don’t let them constrain what you can do.

Remote teams have to use electronic tools, of course, but they should also prefer tools that make quick changes and updates easy, rather than trying to automate. The basic cards, stickies, and drawing tools of your virtual whiteboard should be enough.

Improvement Charts


One type of big visible chart measures specific issues that the team wants to improve. Often, these issues come up during a retrospective. Unlike the planning boards or team calendar, which are permanent fixtures in the team room, improvement charts only stay up for a few weeks.

Create improvement charts as a team decision, and maintain them as a team responsibility. When you agree to create a chart, agree to keep it up-to-date. For some charts, this means everyone takes a few seconds to mark the board when their status changes. Other charts involve collecting some information at the end of the day. For these, collectively choose someone to be responsible for updating the chart.

There are many possible types of improvement charts; they take forms as diverse as the types of problems that teams experience. The principle behind all of them is the same: they appeal to our innate desire for improvement. If you show progress toward a mutual goal, people will usually try to improve their status.

Consider the problems you’re facing and what kind of chart, if any, would help. As an example, Agile teams have successfully used charts to improve:

  • Amount of pairing, by tracking the percentage of time spent pairing versus the percentage of time spent working solo

  • Pair switching, by tracking how many of the possible pairing combinations actually paired during each iteration

  • Build performance, by tracking the number of tests executed per second

  • Support responsiveness, by tracking the age of the oldest support request (an early chart, which tracked the number of outstanding requests, resulted in hard requests being ignored)

  • Needless interruptions, by tracking the number of hours spent on non-story work each iteration

See the “Sample Improvement Charts” figure for examples.

Two sample charts. The chart on the left is labelled “Pair combinations.” It shows half a matrix (split diagonally) with people’s initials on both axes. Both axes have the same initials. Some of the individual cells of the matrix have a checkmark, indicating when those two people paired together. The chart on the right is labelled “Tests per second.” It shows a standard bar chart. The x-axis is labelled “Date” and the y-axis is labelled “Average tests per second,” and ranges from zero to 100. A horizontal dashed line at the “100” mark is marked “Goal.”

Figure 1. Sample improvement charts
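To make the pair-combinations arithmetic concrete, here's a small sketch of what such a chart measures. The team initials and observed pairings are made up for illustration:

```python
# Illustrative only: the team initials and observed pairings are invented.
from itertools import combinations

team = ["AV", "BK", "CM", "DT"]
possible = set(combinations(sorted(team), 2))   # every possible pairing

# Pairings actually observed this iteration (each pair stored sorted).
actual = {("AV", "BK"), ("AV", "CM"), ("BK", "DT")}

coverage = len(actual & possible) / len(possible)
print(f"{len(actual & possible)} of {len(possible)} pair combinations "
      f"({coverage:.0%})")   # prints: 3 of 6 pair combinations (50%)
```

Four people yield six possible pairs, so the checkmark matrix in the figure fills in as the percentage climbs toward 100%.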

Try not to go overboard with your improvement charts. If you post too many, they’ll lose their effectiveness. I try to keep a limit of two or three at a time, not including permanent charts such as the task board.

That’s not to say that your only decorations should be a handful of charts. Team memorabilia, toys, and works in progress are also welcome. Just make sure the important charts stand out.


Although having too many improvement charts can reduce their impact, a bigger problem occurs when the team has too much interest in improving a number on a chart. They often start gaming the process. Gaming occurs when people focus on the number at the expense of overall progress.

A common example I see is programmers focusing too much on increasing the number of tests they have, or the amount of code coverage, rather than improving the quality of their testing approach. They write trivial tests that don’t have any value, are difficult to maintain, or run slowly. Sometimes, they don’t even realize they’re doing so.

To alleviate this problem, use improvement charts with discretion. Discuss new charts as a team. Be clear about the overall improvement you want to see. Check in every week on whether the charts are working, and take them down within a month. By that time, a chart has either done its job or isn’t likely to help.

Never use workspace charts in a performance evaluation.

Above all, never use workspace charts in performance evaluations. Don’t even discuss them outside the team. People who feel judged according to their performance on a chart are much more likely to engage in gaming. See the “Reporting” practice for ideas about what to do instead.

Questions

We need to share status with people who can’t or won’t visit the team workspace regularly. How do we do that without computerized charts?

Stakeholder Demos

First and foremost, the informative workspace is for the team. To share status with people outside the team, use stakeholder demos and roadmaps.

Our charts are constantly out of date. How can I get team members to update them?

The first question to ask is, “Did the team really agree to this chart?” An informative workspace is for the team’s benefit, so if team members aren’t keeping a chart up-to-date, they may not think it’s worthwhile. It’s possible that the team is passive-aggressively ignoring the chart rather than telling you they don’t want it.

If people won’t take responsibility, perhaps you’re being too controlling.

I find that, when no one updates the charts, it’s because I’m being too controlling about them. Dialing back the amount of involvement I have with the charts is often enough to get the team to step in. Sometimes that means putting up with not-quite-perfect charts or sloppy handwriting, but it pays off.

If all else fails, discuss the issue during the retrospective or a stand-up meeting. Share your frustrations and ask for the team’s help in resolving the issue. Prepare to abandon some favorite charts if the team doesn’t want them.

Prerequisites

Team Room

If your team doesn’t have a team room, either physical or virtual, you won’t be able to create an informative workspace.

Informative workspaces are easy to create when you have a physical team room. Just put up the charts you want. If you have a virtual team room, you’ll need to put forth extra effort to make information visible and create situational awareness.

Indicators

When your team has an informative workspace:

  • You have up-to-the-minute information about all the important issues the team is facing.

  • You know exactly how far you’ve come and how far you have to go in your current plan.

  • You know whether the team is progressing well or having difficulty.

  • You know how well the team is solving problems.

Alternatives and Experiments

If you don’t have a team room, but your team has adjacent cubicles or offices, you can achieve some of the benefits of an informative workspace by posting information in the halls or a common area.

In terms of experiments, the sky’s the limit. The key to this practice is the cockpit metaphor: having all the information you need constantly visible, so you can automatically notice when things change and subconsciously realize when something is off track. Keep that in mind as you experiment with visualizations and posters. You can start experimenting right away.

Further Reading

Agile Software Development [Cockburn 2001] has an interesting discussion in chapter 3, “Communicating, Cooperating Teams,” that describes information as heat and distractions as drafts. It’s the source of the “information radiator” metaphor.


AoAD2 Practice: Stand-Up Meetings


Stand-Up Meetings

Whole Team

We coordinate to complete our work.

I have a special antipathy for status meetings. You know the type: a manager reads a list of tasks and asks about each one in turn. They seem to last forever, although my part is usually only five minutes. I learn something new in perhaps ten of the other minutes. The remaining 45 minutes are pure waste.

Informative Workspace

Organizations have a good reason for holding status meetings. People need to know what’s going on. But Agile teams have more effective mechanisms: the informative workspace and the daily stand-up meeting.

How to Hold the Daily Stand-Up

Task Planning

A daily stand-up meeting is very simple. At a pre-set time every day, the whole team holds a brief, five-to-ten minute meeting. In-person teams gather around their task tracking board. Remote teams meet by video and log in to the virtual task board.

Stand-ups are a coordination meeting, not a status meeting.

Stand-ups are a coordination meeting, not a status meeting. If you need status, you just look at the task planning board. But because the team shares ownership and works together to finish stories (see “Key Idea: Collective Ownership”), they need a way to coordinate their work. That’s what the stand-up meeting is for. It’s a way for the team to sync up so they can continue coordinating on an ad-hoc basis throughout the day.

One challenge with stand-ups is that they interrupt the team’s work. This is a particular problem for morning stand-ups; because team members know the meeting will be an interruption, they sometimes just waste time waiting for the stand-up to start. You can reduce this problem by moving the stand-up to later in the day, such as just before lunch.

The most effective approach I’ve seen for stand-ups is to “walk the board.” It has four parts:

1. Walk the board

The stand-up starts with team members going through the stories on the task board one-by-one, starting with the story that’s closest to completion. For each story, the people who worked on that story describe what’s changed and what’s left to be done, as well as any new information that the team needs to know.

For example: (Pointing at board) “I finished this task with Genna.” (Bobbi speaks up.) “And Na and I finished that task, so this story is ready for final review. I told Rodney, and he said he wants to be the one to review it, but he had something urgent come up. He should be back in the office this afternoon. We should be able to mark this story green today, assuming no surprises.”

Although team members should ask for help and hold impromptu collaboration sessions as needed throughout the day, the stand-up is a good time for less-urgent coordination. Some examples:

  • Someone who wants help: “I’m confused about our front-end CSS testing. Can somebody walk me through it after the stand-up?”

  • Somebody with new information: “Lucila and I tried the new TaskManager library yesterday and it worked really well. Take a look the next time you’re dealing with concurrency.”

  • Somebody who needs a collaboration session: “We have some new stories that need to be sized—can we have a quick planning game after lunch?”

In the beginning, while people are still getting used to the stand-up, you may need someone to facilitate the meeting. It’s best to rotate the role, so the team can share leadership. The facilitator should be careful not to dominate the meeting; their role is just to point to each story and prompt the team to speak up.

2. Focus on completion

After walking the board, take a moment to focus the team on what’s needed to complete their work, including blockers that aren’t getting resolved. Teams using iterations should take this opportunity to check on their iteration commitment. Like this: “We have two days left, so we’re 60% of the way through the iteration. It looks like we’ve got more than 60% of the tasks done, but only one of our stories is marked complete, so we should focus on closing out stories today.”

3. Choose tasks

Then everyone decides what they’re going to work on next. This is a conversation, not a unilateral decision: “Given what Na said about finishing off stories, it looks like this task should be high on our list. Anybody want to work on this with me?” (Na volunteers and takes the card off the board.) “Also, this afternoon, I’ll check in with Rodney about reviewing that other story.”

Similarly, if someone chooses a task that you have information about, be sure to mention it: “When you start working on that task, talk to me or Seymour. We made some changes to our fetch wrapper that you should be aware of.”

4. Take detailed conversations offline

After everyone’s clear on how the team’s going to make progress, the meeting is over. It should only take a few minutes. If anyone needs to have a more in-depth conversation about a topic, they can mention it during the stand-up, then whoever’s interested can “take it offline” by having the discussion after the stand-up ends.

Be Brief

The purpose of the stand-up meeting is to briefly coordinate the whole team. It’s not meant to give a complete inventory of everything that’s happened. The primary virtue of the stand-up meeting is brevity. That’s why in-person teams stand: their tired feet remind them to keep the meeting short.

Each story (or person, if you’re using old-school stand-ups—see the “Old-School Stand-Ups” sidebar) only needs a few sentences. Thirty to sixty seconds each is usually enough. Here are some more examples:

  • A programmer: “Yesterday, Dina and I finished this task (points at board). We ran into some trouble with the tests, so we refactored the service abstraction. It should make that task (points) easier too. Let one of us know if you’d like us to go over the changes with you.”

  • A product manager: “I just got back from the trade show, and I got some great feedback on the user interface and where we’re going with the product. It’s going to mean some changes to the visual plan. I’ll be working on that today and anybody who wants to know more is welcome to join.”

  • A domain expert: “Cynthia asked me about the financial rules for this story yesterday. I’ve since talked it over with Tatum and it turns out there’s more to it than I thought. I added this new task here to update the examples, and I’d like to work with a programmer or tester on that to make sure we cover all the bases.”

The stand-up meeting should only take about five minutes, or ten at most.

Most days, the stand-up meeting should only take about five minutes, or ten at most. If it consistently takes more than ten minutes, something is wrong. Some common reasons for slow stand-ups include:

  • Using an electronic planning tool rather than cards and a whiteboard (or virtual equivalent).

  • Updating the task board during the stand-up rather than throughout the day.

  • Saving conversations and collaboration for the stand-up rather than holding them throughout the day.

  • Holding detailed discussions during the stand-up rather than taking them offline.

  • Holding the stand-up in a meeting room rather than in your team room.

  • Waiting for people to arrive rather than starting on time.

If none of these apply, ask a mentor for help.

Questions

Can people outside the team attend the stand-up?

Stakeholder Demos

Yes, but keep in mind that the stand-up is owned by the team and conducted for the team’s benefit. If the outside people are detracting from the meeting, or if team members feel uncomfortable speaking up with them present, ask them to stop attending. Team members with political savvy are probably the best choice to carry that message. You can use stakeholder demos and roadmaps to keep those attendees informed instead.

In a multi-team environment, it’s sometimes helpful for teams that work closely together to send people to each other’s stand-ups. In that case, work together to decide how to allow people to attend and contribute in a way that isn’t disruptive.

Participants are being too brief. What should we do?

Sometimes, particularly with old-school stand-ups, participants will devolve into no-content statements such as “same as yesterday” or “nothing new.” If this happens a lot, gently remind participants to go into a bit more detail.

What if somebody is late to the stand-up?

If someone’s late, start without them.

Start without them. Stand-ups are only a few minutes long, so you could be done by the time they get there. They can ask someone to fill them in if they need to. Starting on time will help establish a culture of arriving on time.

Do we still need a daily stand-up if we use mob programming?

Mob Programming

Teams using mob programming coordinate constantly, so they don’t technically need a stand-up meeting. But it’s still useful to take a moment every day to review progress and think about next steps. For teams using mobbing, that might happen naturally. If it doesn’t, holding an explicit stand-up could help.

Prerequisites

Don’t let the daily stand-up stifle communication. Some teams find themselves waiting for the stand-up rather than talking to someone when they need to. If you find this happening, eliminating the stand-up for a while may actually improve communication.

Beware of leaders who dominate the stand-up. As reviewer Jonathan Clarke so aptly put it, the ideal facilitator is “a charismatic but impatient colleague who will hurry and curtail speakers.” The team—and the stand-up—is a gathering of peers. No one person should dominate.

Indicators

When you conduct daily stand-up meetings well:

  • The team coordinates their work and makes steady progress toward completing their task plan.

  • The team is aware of when a task or story is stalled and takes action to un-block it.

  • Team members are aware of what others are working on and how it influences their work.

Alternatives and Experiments

Coordination, not status, is the underlying idea of the stand-up.

Coordination, not status, is the underlying idea of the stand-up. Teams new to Agile often have trouble with this; to them, the stand-up looks like a shorter, more frequent status meeting, but that’s missing the point.

Be careful about adding formality to the stand-up. People often experiment with adding structure—templates, or lists of questions to answer—but that structure tends to decrease collaboration rather than increase it. Instead, look for ways to improve the team’s ability to collectively own their work.

One team I worked with got so effective at walking the board they started holding very short stand-ups multiple times per day. Rather than scheduling a specific time for their stand-up, they would just get together whenever they finished their tasks. In just 30-60 seconds, they’d coordinate what to work on next and grab tasks off the board.

Further Reading

“It’s Not Just Standing Up: Patterns for Daily Standup Meetings” [Yip 2016] is a nice source of ideas for experimenting with stand-up meetings.


AoAD2 Practice: Slack


Slack

Whole Team

We deliver on our iteration commitments.

Imagine that the power cable for your workstation is just barely long enough to reach the wall receptacle. You can plug it in if you stretch it taut, but the slightest vibration will cause the plug to pop out of the wall and the power to go off. You’ll lose everything you were working on.

I can’t afford to have my computer losing power at the slightest provocation. My work’s too important for that. In this situation, I would move the computer closer to the outlet so that it could handle some minor bumps. (Then I would tape the cord to the floor so people couldn’t trip over it, install an uninterruptible power supply, and invest in a continuous backup solution.)

Your iteration plans are also too important to be disrupted by the slightest provocation. Like the power cord, they need slack.

How Much Slack?


The amount of slack you need doesn’t depend on the number of problems you face. It depends on the randomness of problems. If you always experience exactly 20 hours of problems in each iteration, your capacity will automatically compensate. However, if you experience between 20 and 30 hours of problems, your capacity will bounce up and down. You need 10 hours of slack to stabilize your capacity and to ensure that you’ll meet your commitments.
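This arithmetic can be sketched in a few lines. All the numbers here are the text's hypothetical ones (20-30 hours of problems, 10 hours of slack), plus an invented iteration size:

```python
# Illustrative arithmetic only: all numbers are hypothetical, matching
# the 20-30 hours of problems described in the text.
import random

random.seed(42)
planned_hours = 120                     # invented: total team hours per iteration
problems = [random.uniform(20, 30) for _ in range(8)]

# No slack: every hour is committed, so delivered capacity bounces
# around with the randomness of the problems.
no_slack = [planned_hours - p for p in problems]

# 10 hours of slack: commit only to what survives the worst case
# (30 hours of problems), so delivery is stable every iteration.
with_slack = [planned_hours - 30] * len(problems)

print(f"no slack:   {min(no_slack):.0f}-{max(no_slack):.0f} hours delivered")
print(f"with slack: always {with_slack[0]} hours delivered")
```

Without slack, delivery bounces anywhere between 90 and 100 hours; with 10 hours of slack, it's a steady 90, which is a commitment the team can reliably keep.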

Remember that the team decides for themselves what to commit to and whether those commitments are shared outside the team. See the “Making and Meeting Iteration Commitments” section for details.

These numbers are just for illustration. Instead of measuring the number of hours you spend on problems, take advantage of the capacity feedback loop (see the “Stabilizing Capacity” section). If your capacity bounces around a lot, stop signing up for more stories than your capacity allows. This will cause your capacity to settle at a lower number that incorporates enough slack for your team. On the other hand, if you finish everything early, including time to clean up the things you touched, reduce your slack by committing to a small extra story in the next iteration.

How to Use Slack

Used correctly, the capacity feedback loop will automatically give your team the amount of slack it needs to reliably finish every story in every iteration. But how should you use that slack?

First, only the team’s constraint needs slack. As the “Capacity” practice discusses, there will be one type of work—typically programming—that is the bottleneck for your team’s work. The team’s slack should be dedicated to relieving that constraint.

One way to do so would be to reserve the last part of your iteration for slack, and just go home early when your stories are done. That would be wasteful, of course. Another option would be to take on another story when everything is finished, but now you’re back to not having slack and just building as much as you can.

The best use of slack is to increase your actual ability to deliver.

No, the best use of slack is to increase your actual ability to deliver. The right way to do so depends on your constraint. Here are three good choices. Improving internal quality, in particular, is a must-have for nearly every team.

Improving internal quality

The team’s performance is directly tied to the quality of their code, tests, automation, and infrastructure. Together, they’re the software’s internal quality.

Even the best teams inadvertently accumulate internal quality problems. Although you should always make your software as clean as you can, even good work eventually gets out of sync with your needs.


If your constraint is programming, improving internal quality is a surefire way to increase your capacity. Every iteration, rather than doing the minimum necessary to create clean code, look for opportunities to make existing code better, too. Make it part of your moment-to-moment work. If you find yourself scratching your head over a variable or method name, change it. If you see code that’s no longer in use, delete it.

In addition to these small improvements, look for opportunities to make larger changes. Perhaps the code is missing a test, or using primitives instead of a custom type, or a module has too many responsibilities. Maybe a test is unreliable due to a race condition or global variable. Perhaps a build step is slow, or a deployment fails randomly, or new servers have to be configured manually. When these problems affect your work, incrementally improve them.

Make improvements every day, throughout the iteration.

Don’t batch up your improvements. Make improvements every day, throughout the iteration: an hour encapsulating a structure here, two hours fixing a deploy script there. Each improvement should address a specific, relatively small problem. Sometimes you’ll only be able to fix part of a larger problem—that’s okay, as long as it makes the code better. You’ll have another chance to improve the next time you work on that part of the system.

Task Planning

For these bigger, hour-or-two improvements, take a look at the task board before starting and compare it to the amount of time that has passed. Are there a lot of tasks done compared to the amount of time that’s elapsed? The team is ahead of schedule, so you can go ahead and clean things up. Does it instead seem like the team is falling behind? Shrug your shoulders and focus on your iteration commitment instead. You’ll have another opportunity next iteration. By varying the amount of time you spend on internal quality, you can ensure that most iterations come in exactly on time.

Always leave your code and other systems a little bit better than you found them, no matter how much time you have. You’re not choosing between “sloppy” and “clean;” you’re choosing between “slightly cleaner” and “a lot cleaner.” Always make time to do good work. Messy work will cost you more time than it saves.

Focus your improvements on the code, tests, and other systems that you’re actually working on. If you do, the things you work with most will see the most changes. It’s a simple feedback loop that magically directs your cleanup efforts right where they’ll do the most good.

Develop customer skills
Whole Team

Although Agile teams should be whole, cross-functional teams, a lot of organizations skimp on people with customer skills. If your team is constrained by lack of knowledge about customers, users, and business needs, use your slack to learn more. Study the domain. Join your product manager in meetings. Interview users and talk to stakeholders.

As with improving internal quality, spread this time throughout the iteration and use your team’s progress to judge how much time you can spend.

Dedicate time to exploration and experimentation

Developers tend to be naturally curious and must continually improve their skills. Given time to indulge their curiosity, they will often learn things that enhance their work on the team.

Dedicated time for exploration and experimentation, also called research time, is an excellent way to encourage learning while also adding slack into your iterations. Unlike the other techniques, it’s a half-day chunk set aside at the end of the iteration. If you end up running late, you can eat into the research time to meet your commitments.

Calculate your capacity when research time is scheduled to start.

If you use research time, calculate your capacity based on the stories that are finished when research time is scheduled to start, not when the iteration is scheduled to end. That way, if you do end up eating into your research time, your capacity will automatically decrease so you don’t need to do so next iteration. Research time gives you a buffer, but it shouldn’t be something you rely on.

Each team member uses the research time block to conduct self-directed exploration into a topic of their choice. It can be research into a new technology, studying an obscure section of the code, trying a new practice, exploring a new product idea, or anything else that interests them. There’s only one rule: don’t work on any stories or commit any production code.

If you’re concerned about people goofing off, provide lunch the next day and ask that people share what they’ve learned through informal peer discussion. This is a great way to cross-pollinate ideas anyway.

I’ve introduced research time to several teams, and it’s paid dividends each time. Two weeks after introducing research time at one organization, the product manager told me that research time was the most valuable time the team spent, and suggested that we double it.

Team members, for research time to be effective, you must focus and treat it as real work. Half a day can go by very quickly. It’s easy to think of research time as a catch-all for postponed meetings. Be strict about avoiding interruptions. Ignore your email, turn off text messages, block the time on your calendar, and restrict your web browsing to your actual research.

When you first adopt research time, you might have trouble deciding what to work on. Think about what’s puzzled you recently. Would you like to learn more about the details of your UI framework or code? Is there a programming language you’ve wanted to try, but your organization doesn’t use? Has real-time networking always fascinated you?

Spike Solutions

As you do your research, create spike solutions—small, standalone programs—that demonstrate what you’ve learned. If you’re experimenting with the production code, create a throwaway branch. Don’t try to make anything that’s generally useful; that will reduce the amount of time available to pursue core ideas. Just do enough to prove the concept, then move on to your next subject.

The role of overtime
Energized Work

Overtime doesn’t come from the capacity feedback loop, but it is a source of slack. Use it with caution. If you want to voluntarily work a bit extra to finish up some story or task, that’s okay. Don’t make a habit of it, though, and don’t work more than an hour or so extra on any given day. You need time to recharge if you’re going to be productive the next day. Pay attention to your energy and never use overtime as an excuse to lower your team’s standards.


If our commitment is at risk, shouldn’t we temporarily stop pair programming, refactoring, test-driven development, etc.? Meeting our commitment is most important, right?

With experience, these practices should speed you up, not slow you down, but they do have a learning curve. It’s true that setting them aside might make it easier for you to meet your commitments early on.

But you still shouldn’t use them as a source of slack. These practices maintain your capability to deliver high-quality code. If you don’t do them, the resulting decrease in internal quality will immediately slow you down. You may meet this iteration’s commitments, but you’ll do so at the expense of the next iteration.

If you don’t have enough slack to meet your commitments, modify your plans.

If you don’t have enough slack to meet your commitments, don’t lower your standards. Modify your plans instead, as discussed in the “Making and Meeting Iteration Commitments” section.

Should we pair or mob program during research time?

Mobbing is typically overkill. Pairing can be nice, if you want to collaborate on a topic, but it isn’t necessary.

How does slack relate to clean-up stories?

Clean-up stories are special stories just for improving internal quality (see the “Clean-Up Stories” section). To be honest, they’re kind of a mistake. The team should use their slack to constantly improve their code and other systems. Clean-up stories shouldn’t be needed.

But sometimes you inherit software that would get a speed boost from extra clean-up. In those cases, on-site customers might choose to prioritize a clean-up story. But they should never be mandatory. They’re always at the discretion of the on-site customers, who trade off the benefits of extra clean-up with the benefits of other work the team can do. This is in contrast to clean-up performed using slack, which is at the discretion of the developers.


The risk of slack is that it can lead people to think that activities such as improving internal quality and developing customer skills aren’t important. They’re actually vital, and a team that doesn’t do them will slow down over time. They’re just not time-critical like your iteration commitment is. Make sure you have enough slack to steadily improve. If you don’t, reduce your capacity a bit so that you do.

In addition, never do sloppy work in the name of slack. If you can’t meet your iteration commitments while following your chosen process, revise the iteration plan instead.


When your team incorporates slack into your iterations:

  • You consistently meet your iteration commitments.

  • You rarely, if ever, need overtime.

  • Your internal quality steadily improves, making work easier and faster.

Alternatives and Experiments

It’s a clever little feedback loop that uses teams’ weaknesses to make them stronger.

On its face, slack appears to be about meeting commitments, and that is an important part of it. But the real innovation is using slack to fix the problems that caused the need for slack in the first place. Together with capacity, this forms a clever little feedback loop that uses teams’ weaknesses to make them stronger.

Many organizations are so stressed about productivity that they pressure their teams to maximize their capacity number. Their teams push to increase their capacity in every iteration, so they don’t introduce slack. Ironically, this prevents them from improving their actual capacity, and it makes it difficult for them to meet their commitments... which in turn leads to increased pressure, not to mention a lot of unpleasant thrashing around.

As you experiment with slack, keep the clever little feedback loop in mind. Don’t just look for ways to add slack; look for ways to use that slack in a way that improves your team’s capability.

Further Reading

Slack: Getting Past Burnout, Busywork, and the Myth of Total Efficiency [DeMarco 2002] provides a compelling case for providing slack throughout the organization.

The Goal [Goldratt 1992] and Critical Chain [Goldratt 1997] are two business novels that make the case for using slack (or “buffers”), instead of padding estimates, to protect commitments and increase throughput.


AoAD2 Practice: Capacity


We know how much work we can sign up for.

Teams using iterations are supposed to finish every story, every iteration. But how do they know how much to sign up for? That’s where capacity comes in. Capacity is a prediction of how much the team can reliably accomplish in a single iteration.

Capacity is only for predicting what you can include in your next iteration. If you need to predict when a particular set of stories will be released, see the “Forecasting” practice instead.

If you’re using continuous flow rather than iterations, you don’t need to worry about capacity. You’ll just start new stories when the previous ones are finished.

Capacity was originally called “velocity.” I don’t use that term any more because “velocity” implies a level of control that doesn’t exist. Think of a car: it’s easy to increase the velocity; just press the pedal. But if you want to increase the car’s capacity, you need to make much more drastic changes. Team capacity is the same. It’s not easily changed.

Yesterday’s Weather

Capacity can be a contentious topic. Customers want the team to deliver more every week. Developers don’t want to be rushed or pressured. Because customers often have the ear of the team’s sponsor, they tend to win... in the short term. In the long term, when teams are pressured to commit to more than they can deliver, everyone loses. Reality prevails and development ends up taking longer than expected.

To avoid these problems, measure your capacity. Don’t guess. Don’t hope. Just measure. It’s easy: you can assume you’ll get the same amount done this week that you did last week. This is also known as yesterday’s weather, because you can predict today’s weather by saying it’s likely to be the same as yesterday’s.

Your capacity is the number of stories you started and completely finished in the previous iteration.

More specifically, your capacity is the number of stories that you started, and completely finished, in the previous iteration. Partially-done stories don’t count. For example, if you started seven stories last iteration and finished six of them, your capacity is six, and you can choose six stories next iteration.

Don’t average multiple iterations. Just use the previous iteration. The “Stabilizing Capacity” section explains how to create a stable capacity without averaging.

Counting stories only works if your stories are all about the same size. You can split and combine stories to get the size “just right.” Over time, your team will learn how to make stories the same size.

Your stories probably won’t all be the same size, at first. In that case, you can estimate your stories instead, as I’ll describe in a moment. To measure your capacity when using estimates, start with the stories that you started and completely finished last iteration. (Partially-done stories still don’t count.) Add up their estimates. That’s your capacity.

For example, if you finished six of the stories you started last iteration, and their estimates were “1, 3, 2, 2, 1, 3,” your capacity is 1 + 3 + 2 + 2 + 1 + 3 = 12. Next iteration, you can choose any stories you like, so long as the total of their estimates is twelve.
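This bookkeeping is simple enough to sketch in a few lines of Python. The story data and field names here are invented for illustration:

```python
# Yesterday's weather: capacity is the sum of the estimates of the stories
# that were started AND completely finished last iteration.
# Partially-done stories don't count.
def capacity(last_iteration):
    return sum(
        story["estimate"]
        for story in last_iteration
        if story["finished"]  # "done done" by the iteration deadline
    )

# The example from the text: six finished stories, plus one that was
# only partially done (and is therefore excluded).
stories = [
    {"estimate": 1, "finished": True},
    {"estimate": 3, "finished": True},
    {"estimate": 2, "finished": True},
    {"estimate": 2, "finished": True},
    {"estimate": 1, "finished": True},
    {"estimate": 3, "finished": True},
    {"estimate": 2, "finished": False},
]
print(capacity(stories))  # → 12
```

If your stories are all the same size, the same code works with every estimate set to one: capacity is just a count of finished stories.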

Yesterday’s Weather is a simple yet surprisingly sophisticated tool. It’s a feedback loop, which leads to a magical effect: if the team underestimates their workload, and are unable to finish all their stories by the iteration deadline, their capacity decreases, and they sign up for less work next time. If they overestimate their workload, and finish early, they take on more stories, their capacity increases, and they sign up for more work.


It’s an extremely effective way of balancing the team’s workload. Combined with slack, capacity allows you to reliably predict how much you can finish in every iteration.

Capacity and the Iteration Timebox

Yesterday’s Weather relies upon a strict iteration timebox. To make capacity work, never count stories that aren’t “done done” by the end of the iteration. Never allow the iteration deadline to slip, even by a few hours.

Artificially amplifying your capacity number will just make it harder to meet commitments.

You may be tempted to cheat a bit and delay the iteration deadline, or count a story that’s almost done. Don’t! It will increase your capacity number, sure, but it will disrupt the feedback loop. You’ll sign up for more than your team can actually accomplish, amplifying the problem for next time and making it even harder to meet your commitments.

One project manager I worked with wanted to add a few days to the beginning of an iteration so his team could “hit the ground running” and he could have a more impressive capacity number to share with his manager. (This is one of the reasons I prefer “capacity” to “velocity.” It’s not as impressive sounding.) By doing so, he set his team up for failure: they couldn’t keep up the pace in the following iteration. Remember that capacity is for predicting how much you can fit in an iteration. It doesn’t represent productivity.

Capacity tends to be unstable when teams first form, and when they’re first learning to be Agile. Give it three or four iterations to stabilize. After that point, you should have the same capacity every iteration, unless there’s a holiday. Use your iteration slack to ensure that you consistently finish every story. If the team’s capacity changes more than once or twice per quarter, look for deeper problems, and consider asking a mentor for help.

Stabilizing Capacity

Whenever your team fails to finish everything it had planned, your capacity should go down. This will give you more time to finish your work in the next iteration, which will cause your capacity to stabilize at the new, lower level.

Only try to increase capacity when you have enough time to clean as you go.

But how does your capacity go back up? Counterintuitively, you should be quick to decrease your capacity, and slow to increase it. Only try to increase your capacity when you not only finished all the stories you planned, but also had time to clean as you go: you cleaned up rough spots in the code you touched, improved automation and infrastructure, and took care of other important but non-urgent tasks related to the stories you worked on.

If you had enough time to clean as you go, you can take on an additional story. If you finish it before the end of the iteration, your capacity will go up.
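The “quick to decrease, slow to increase” rule can be sketched as a tiny function. This is a simplified illustration with invented names, not a prescribed formula; the “+ 1” stands in for signing up for one small extra story, which only raises your measured capacity if you actually finish it:

```python
def next_capacity(planned, finished, had_time_to_clean):
    """Yesterday's weather with the stabilizing rule applied.

    planned  -- the capacity the team signed up for last iteration
    finished -- total estimates of stories completely finished
    had_time_to_clean -- True only if every story was finished AND the
                         team also had time to clean as they went
    """
    if finished < planned:
        return finished       # quick to decrease
    if had_time_to_clean:
        return planned + 1    # slow to increase: one small extra story
    return planned            # otherwise, hold steady

print(next_capacity(12, 10, False))  # → 10
print(next_capacity(12, 12, True))   # → 13
print(next_capacity(12, 12, False))  # → 12
```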

I work with a lot of teams, and one of the most common problems I see is excessive schedule pressure. Excessive schedule pressure universally reduces teams’ performance. It causes them to rush, take shortcuts, and make mistakes. Those shortcuts and mistakes hurt their internal quality—code, automation, and infrastructure quality—and that poor quality causes everything to take longer, ironically giving them less time to do their work. It’s a vicious cycle that further increases schedule pressure and decreases performance.

The most effective way to improve the performance of teams in this situation is to reduce their schedule pressure. Capacity will do this automatically, if you let it. The “Stabilizing Capacity” figure illustrates how.

A graph with “Time” on the X axis and “Capacity” on the Y axis. The graph shows two lines. A thin line labelled “capacity without slack” shows random changes in capacity over time. A thick line labelled “capacity with slack” follows the thin line, but only when it goes down. It stabilizes at the lowest value of the thin line fairly quickly. The peaks between the thin line and thick line are shaded and labelled “slack: extra time for absorbing and resolving problems.” Halfway across the “time” axis, the thin line stops and the thick line increases in capacity in an “S“ curve. This part of the line is labelled “increase in capacity due to fewer problems.”

Figure 1. Stabilizing capacity

The thin, jagged line shows the team’s “high pressure” capacity. This is their capacity if they rush as fast as they can. You can see that it’s highly variable. Some weeks, everything goes smoothly. Other weeks, they run into bugs and internal quality problems.

The thick, smooth line shows the team’s “low pressure” capacity. This is the result of following the “quick to decrease, slow to increase” rule. You can see that, whenever the team failed to deliver everything planned, they decreased their capacity, and they didn’t increase it again for quite some time.


The shaded peaks represent the team’s slack: the difference between their “low pressure” capacity and the amount of time they needed to finish their stories. Some weeks, they had a lot of slack. Others, very little. When the team has a lot of slack, they use it to improve internal quality and address issues that slow them down.

Over time, that extra effort builds up. Because the team isn’t rushing as fast as they can, they gradually improve their internal quality and fix problems. Eventually, they feel relaxed, in control, and in possession of more time than they need for cleanup. That’s when they increase their capacity. The result is better capacity, and more enjoyable work, than teams that rush as fast as they can.

Slack is your best option for improving how much work your team can do.

This graph illustrates my actual experience, not some abstract theory. I’ve seen variations on this theme play out, on real teams, time and time again. It can be hard to stabilize your capacity when your team is under a lot of pressure, but it’s worth it. It’s your best option for actually improving the amount of work your team can do.

Estimating Stories

Yesterday’s Weather depends on consistency, but your team may have trouble creating consistently-sized stories. That’s okay. You can use estimates instead.

It doesn’t matter how accurate your estimates are, so long as they’re consistent.

It doesn’t actually matter how accurate your estimates are, so long as they’re consistent. (See the “Why Estimate Accuracy Doesn’t Matter” sidebar.) That’s a good thing, because programmers tend to be terrible at estimating. One team I worked with measured the actual time their stories took. We did this for 18 months. The estimates were never accurate: they averaged about 60% of the actual time required.

But you know what? It didn’t matter, because their estimates were consistent, at least in aggregate. That team had a stable capacity and consistently finished every story for months on end.

So, to estimate your stories, don’t worry about accuracy. Just focus on consistency. Here’s how.

  • Only estimate the constraint. One type of work—typically programming—will be the bottleneck for your team. Estimate all your stories in terms of that work only, because your constraint determines your schedule. (There will be occasional exceptions, but they’ll be absorbed by your iteration slack.)

  • Let experts estimate. How long do the team members who are most qualified to do the work think the story will take?

  • Estimate in “ideal” hours or days. How long will the story take if one of your most qualified team members does it, they experience no interruptions, can ask questions of anyone else on the team, don’t have to wait for people outside the team, and everything goes well?

  • Think of tasks. If you’re having trouble estimating, mentally break the story down into the tasks it involves, then add up the time required for each one.

  • Round into three “buckets.” Anything larger than your largest bucket needs to be split; anything smaller than your smallest needs to be combined. (See the “Combining and Splitting Stories” section.) To choose your buckets, divide your capacity by 12 and round off; that’s your smallest bucket. Multiply it by two and three for the other two. For example, if your capacity is 15, then your buckets should be 1, 2, and 3. This will result in about 4-12 stories per iteration, six on average, which is just right.

This approach will give you an estimate in ideal hours or days. The real work will take much longer, but that doesn’t matter: you’re going for consistency, not accuracy. To avoid people accidentally interpreting your estimate as a commitment, call the number “points,” not “hours” or “days.”
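The bucket arithmetic looks like this as a Python sketch (the helper name is invented, and I’ve assumed ordinary rounding and a minimum bucket of one point):

```python
def estimate_buckets(capacity):
    """Choose three estimate buckets: capacity divided by 12 and rounded
    off, then that number doubled and tripled."""
    base = max(1, round(capacity / 12))  # assume at least a one-point bucket
    return (base, base * 2, base * 3)

print(estimate_buckets(15))  # → (1, 2, 3)
print(estimate_buckets(24))  # → (2, 4, 6)
```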

Once you’ve had some experience, these techniques work even better:

  • Match other stories. What did you say for other stories like this one? Use the same estimate.

  • Compare to other stories. Is this story about twice as much work, or half as much, as another story? Use double or half the estimate.

  • Go with your gut. Use whatever number feels right.

I have two types of estimating sessions you can use: conversational estimating and affinity estimating. In either case, everybody who’s qualified to do the work (the “estimators”) and at least one on-site customer should participate. Other team members are also welcome—the discussions can be informative—but they aren’t required.

Conversational estimating

In conversational estimating, the team estimates one story at a time. It can be tedious, but it’s a good way to get everyone on the same page about what needs to be done.

An on-site customer starts each estimate by choosing a story and providing a brief explanation. Estimators can ask questions, but only if the answer would change their estimate. As soon as any estimator feels they have enough information, they suggest an estimate. Allow this to happen naturally—the person who is most comfortable should speak first, as this is often the person who’s most qualified to make the estimate.

If the suggested estimate doesn’t sound right, or if you don’t understand where it came from, ask for details. Alternatively, if you’re an estimator, provide your own estimate instead and explain your reasoning. The ensuing discussion will clarify the estimate. When the estimators are in agreement, write the estimate on the story card. The “A Conversational Estimate” sidebar has an example.

At first, different team members will have differing ideas of how long something should take. This will lead to inconsistent estimates. Talk it through, and if you can’t come to agreement, use the lowest estimate. (Remember, you only need consistency, not accuracy.) As your team continues to make estimates together, your estimates will synchronize, typically within three or four iterations.

If participants understand the stories and underlying technology, they should be able to estimate each story in less than a minute. If they need to discuss the technology, or ask questions of customers, then estimating may take longer. I look for ways to bring discussions to a close if an estimate takes longer than five minutes. If every story requires detailed discussion, something is wrong—see the “When Estimating Is Difficult” section.

Some people like to use planning poker1 for estimating. In planning poker, participants secretly choose a card with their estimate, reveal their estimates simultaneously, then discuss. It sounds fun, but it tends to result in a lot of unnecessary discussion. It’s useful if people are having trouble speaking up, but otherwise, it’s usually faster to just allow the person who’s most comfortable to speak first.

1Planning poker was invented by James Grenning in 2002 [Grenning 2002] and was later popularized by Mike Cohn in [Cohn 2005]. Cohn’s company, Mountain Goat Software, LLC, has trademarked the term.

Affinity estimating

Affinity estimating is a great technique for estimating a lot of stories quickly.2 It’s particularly useful when you have long planning horizons.

2Affinity estimating was invented by Lowell Lindstrom in the early days of Extreme Programming.

Affinity estimating is a variant of mute mapping (see the “Work Simultaneously” section). An on-site customer puts a pile of story cards to estimate on a table or virtual whiteboard. One end of the table is identified as “smallest” and the other end is identified as “largest.” Then estimators arrange the story cards along that spectrum, grouping them into clusters of similar size. Cards that need additional clarification from customers go into a separate cluster off to the side, as do cards that need a spike story (see the “Spike Stories” section).

All this work is done silently. Estimators can move cards if they disagree with where they’re placed, but they can’t discuss them. Doing the work silently avoids the sidetracking that tends to occur when discussing estimates. As a result, affinity estimating is very fast. One person I taught it to told me their team estimated 60 stories in 45 minutes the first time they tried it.

After all the stories are grouped, the team labels each cluster with an estimate. The actual numbers aren’t important, so long as their relative sizes are correct. In other words, a story in a cluster labelled “2” should take about twice as long as a story in a cluster labelled “1.” For consistency with conversational estimates, though, it can be useful to estimate in ideal hours or days. Estimating the clusters should only take a minute or two.

Finally, choose three clusters that match your estimate “buckets” (described previously). For example, if your capacity is 15, you’d choose the clusters estimated at 1, 2, and 3. The stories in larger clusters will need to be split, and the stories in smaller clusters will need to be combined.

All the cards in your final three buckets are done. Write their estimate on each one. The remaining cards need to be split, combined, discussed, or spiked, depending on which cluster they’re in. That can be done simultaneously (it’s easiest if you’re in-person), followed by another affinity estimating session, or it can be done one at a time using conversational estimating.

When Estimating is Difficult

When your team first forms, estimating will probably be somewhat slow and painful. It will get better with practice.

One common cause of slow estimation is inadequate preparation by on-site customers. At first, estimators are likely to ask questions that customers haven’t considered. In some cases, customers will disagree on the answer and need to work it out.

A customer huddle—in which the customers briefly discuss the issue, come to a decision, and return—is one way to handle this. While they huddle, estimators continue estimating stories they already understand.

Another option is to put the question on a sticky note and attach it to the card. The customers take the card and work out the details at their own pace, then bring it back for estimating in a later session.

Developer inexperience can also slow estimation. If estimators don’t understand the stories well, they will need to ask a lot of questions before they can make an estimate. If they don’t understand the technology, though, just create a spike story (see the “Spike Stories” section) and move on.

Some estimators try to figure out all details of a story before making an estimate, which slows things down. Remember that the only details that matter, during estimating, are the ones that would put the estimate in a different bucket. Practice focusing on the details that would change the estimate and saving the rest for later.

This sort of overattention to detail sometimes occurs when an estimator is reluctant to make estimates. It’s common among programmers who’ve had their estimates used against them in the past. They’ll try to make their estimates perfectly accurate, rather than aiming for consistency that’s “good enough.”

Estimator reluctance can be a sign of organizational difficulties or excessive schedule pressure, or it may stem from past experiences that have nothing to do with the current team. In the latter case, estimators usually come to trust the team over time.

To help address these issues during estimation, you can ask leading questions. For example:

  • Customers having trouble: Do we need a customer huddle on this question? Should we put this question on the story and come back to it later?

  • Estimators uncertain about technology: Should we make a spike story for this one?

  • Estimators asking a lot of questions: Do we have enough information to estimate this story? Will the answer to that question change your estimate?

  • A story taking more than five minutes: Should we come back to this story later?

Defending Estimates

It’s almost a law of nature: on-site customers and stakeholders are invariably disappointed with their teams’ capacity. Sometimes they express their disappointment in disrespectful ways. Team members with good social skills can help defuse the situation. Often, the best approach is to ignore people’s tone and treat comments as straightforward requests for information.

In fact, a certain amount of back-and-forth is healthy. As the “How to Win the Planning Game” section discusses, questions about estimates can lead to better stories that focus on the high-value, low-cost aspects of customers’ ideas.

Politely and firmly refuse to change your estimates when pressured.

Be careful, though: questions can cause estimators to doubt their estimates. Developers, your estimates are likely correct, or at least consistent, which is what really matters. Only change your estimate if you learn something genuinely new. Don’t change it just because you feel pressured. You’re the ones who will be implementing the stories, and you’re the ones most qualified to make estimates. Be polite, but firm:

I’m sorry you don’t like these estimates. We believe they’re correct, but if they’re too pessimistic, our capacity will automatically increase to compensate. We have a professional obligation to you and this organization to give you the best estimates we know how, even if they’re disappointing, and that’s what we’re doing.

If a stakeholder reacts with disbelief or browbeats you, they may not realize how disrespectful they’re being. Sometimes making them aware of their behavior can help:

I’m getting the impression you don’t respect or trust our professionalism. Is that what you intended?


Stakeholders may also be confused by the idea of estimating in points. I tend to avoid sharing capacity and estimates outside the team for that reason. I report the stories and increments we’re working on instead. But if an explanation is needed, I start with a simplified one:

A point is an estimation technique that focuses on consistency. It allows us to make short-term predictions based on measured results. Our measured capacity is twelve points, which means we finished twelve points of work last week. Therefore, we predict that we can finish twelve points of work this week.

If they want more details, you can show them this book.
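The arithmetic behind that explanation is simple enough to sketch in a few lines of Python. (The `Story` record and its field names are hypothetical, purely for illustration; the rules they encode come from the text.)

```python
from collections import namedtuple

# Hypothetical story record: an estimate in points, plus whether the story
# ended the iteration "done done".
Story = namedtuple("Story", ["points", "done"])

def measured_capacity(last_iteration):
    """Yesterday's Weather: this week's capacity is the number of points
    finished last week. Partially-done stories are never counted."""
    return sum(story.points for story in last_iteration if story.done)

last_iteration = [Story(3, True), Story(2, True), Story(1, False), Story(3, True)]
print(measured_capacity(last_iteration))  # 8: the unfinished 1-point story doesn't count
```

Because next iteration’s prediction is simply last iteration’s measurement, too-pessimistic estimates automatically raise capacity, and too-optimistic ones lower it.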

Sometimes, people will argue against measuring capacity. “If your team has six programmers and there are five days in an iteration, shouldn’t your capacity be 30 points?” You can try to explain how ideal time estimates work, but that has never worked for me. Now I just offer to provide detailed information:

Capacity is based on measurements and is expected to be lower than person-days. If you like, we can perform a detailed audit of our work next week and tell you exactly where the time is going. Would that be helpful?

Your interlocutor will usually back off at this point, but if they say “yes,” go ahead and track everyone’s time in detail for a week. It’s annoying, but should defuse concerns, and you can use the same report again the next time someone asks.


These sorts of questions tend to dissipate as stakeholders gain trust in the team’s ability to deliver. If they don’t, or if the lack of trust is particularly bad, ask your manager or a mentor for help.

Your Initial Capacity

When you plan your first iteration, you won’t have any history, so you won’t have a capacity or estimate buckets.

Partially-done work is never counted.

Start out by using one-week iterations and estimate buckets of ½ day, one day, and 1½ days. Work on one story at a time, as the “Your First Week” section discusses. At the end of the first iteration, you’ll have a capacity you can use for your next iteration. Remember not to count stories that weren’t complete. Throw them away and make new stories, with new estimates, representing the amount of work that remains. (Yes, that means you won’t count the partially-done work. Partially-done work is never counted.)

If you finished fewer than four stories, cut the estimate buckets in half (use two-, four-, and six-hour buckets) for your next iteration. If you finished more than twelve stories, double the buckets (one, two, and three days). Continue in this way until your capacity stabilizes.
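The halve-or-double rule can be written as a tiny function. (Python for illustration; the bucket sizes and thresholds come directly from the text, expressed here in hours.)

```python
def adjust_buckets(buckets_in_hours, stories_done):
    """Adjust estimate buckets based on how many stories finished last iteration.
    Fewer than four done: halve the buckets. More than twelve: double them."""
    if stories_done < 4:
        return [b / 2 for b in buckets_in_hours]
    if stories_done > 12:
        return [b * 2 for b in buckets_in_hours]
    return buckets_in_hours  # capacity is stabilizing; leave the buckets alone

# Starting buckets: half a day, one day, one and a half days (4, 8, 12 hours).
print(adjust_buckets([4, 8, 12], 3))   # [2.0, 4.0, 6.0]: two-, four-, six-hour buckets
print(adjust_buckets([4, 8, 12], 13))  # [8, 16, 24]: one-, two-, three-day buckets
```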

Your capacity should stabilize after about four iterations. With more experience, you’ll eventually be able to size stories so you finish the same number every iteration. When that becomes second nature, you can stop estimating entirely, and just count stories. But you’ll still need to talk stories over with customers to make sure they’re the right size.

How to Improve Capacity

Stakeholders always want more capacity. It is possible... but it’s not free. There are several options:

Improve internal quality

The most common capacity problem I see is poor internal quality: crufty code, slow and unreliable tests, poor automation, and flaky infrastructure. It’s also called “technical debt.”

Internal quality has a greater impact on team capacity than any other factor. Make it a priority and your capacity will improve dramatically. However, this isn’t a quick fix. Teams with internal quality problems often have months, or even years, of cleanup ahead of them, although you’ll see improvements before then.


Rather than stopping work to fix the problems, improve quality incrementally, using slack, as described in the “Stabilizing Capacity” section. Establish a habit of continuously improving everything you touch. Be patient: although you should see a morale increase almost immediately, you may not see an improvement in capacity for several months.

Improve customer skills
Whole Team

If your team doesn’t include on-site customers, or if they aren’t available to answer questions when developers need them, developers have to either wait or make guesses about the answers. Both of these reduce capacity. Improving developers’ customer skills can reduce their reliance on on-site customers.

Support energized work
Energized Work

Tired, burned-out developers make costly mistakes and don’t put forth their full effort. If your organization has been putting a lot of pressure on the team, or if developers have worked a lot of extra hours, shield them from organizational pressure and consider instituting a no-overtime policy.

Offload duties

The team members who can work on the constraint—often, it’s programmers—should hand off any work that others can do. Find ways to excuse them from unnecessary meetings, shield them from interruptions, and have somebody else take care of organizational bureaucracy such as time sheets and expense reports. You could even assign an assistant to the team.

Support the constraint

People who can’t contribute to constraint-related tasks will have some discretionary time available. Although they should make sure that people who do work on the constraint never have to wait for them, they shouldn’t work too far ahead. That will just create extra work-in-progress inventory. (See “Key Idea: Minimize Work in Progress”.)

Use your extra time to reduce the burden on the constraint.

Instead, use the extra time to reduce the burden on the constraint. A classic example is testing. Some teams need so much manual testing that the final days of every iteration are dedicated to testing the software. Rather than moving on to the next set of features, programmers can use that time to write automated tests and reduce the testing burden.

Provide needed resources

Most teams have all the resources they need. (Remember, “resources” refers to equipment and services, not people.) However, if team members complain about slow computers, insufficient RAM, or inappropriate tools, get those resources for them. It’s always surprising when a company nickel-and-dimes its software teams. Does it make sense to save $5,000 in equipment costs if it costs your team half an hour per person per day? A team of six people will recoup that cost within a month. And what about the opportunity costs of releasing more slowly?
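That back-of-the-envelope calculation is worth spelling out. The $75 loaded cost per person-hour below is an assumption chosen for illustration; the team size, time lost, and equipment cost come from the text.

```python
# Back-of-the-envelope: how long until slow equipment costs more than replacing it?
hourly_cost = 75          # assumed loaded cost per person-hour (illustrative)
people = 6                # team size from the text
hours_lost_per_day = 0.5  # half an hour per person per day
equipment_cost = 5_000

daily_waste = people * hours_lost_per_day * hourly_cost   # $225 per day
workdays_to_recoup = equipment_cost / daily_waste
print(round(workdays_to_recoup, 1))  # 22.2 workdays: about a month
```

Even at a modest loaded cost, the equipment pays for itself in roughly a month of workdays, before counting the opportunity cost of slower releases.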

Add people (carefully)
Pair Programming
Mob Programming
Collective Code Ownership
Team Room

Capacity is related to the number of people that can work on your team’s constraint, but unless your team is woefully understaffed and experienced personnel are readily available, adding people won’t make an immediate difference. As [Brooks 1995] famously observed, “adding manpower to a late software project makes it later.” Expect new team members to take a month or two to be productive. Close collaboration can help reduce that time.

Likewise, adding people to large teams can cause communication challenges that decrease productivity. Six programmers is my preferred number for teams using pair programming, and I readily add good programmers to reach that point. Past six, I’m cautious about adding programmers, and only increase past eight on rare occasions. Other skills are proportional, as the “Whole Team” practice describes.

Capacity Is Not Productivity

Capacity isn’t a measure of productivity.

One of the most common mistakes I see organizations make is to confuse capacity with productivity, as the “Gotta Go Fast” sidebar illustrates. Let me be clear: capacity isn’t a measure of productivity. It’s a prediction tool. It’s influenced by productivity changes, sure, but it doesn’t measure them, and even then, the relationship is tenuous. In particular, capacity can’t be compared across teams.

The capacity number is an amalgamation of many factors: the number of people working on the constraint; the number of hours they work; the ratio of their estimates to actual time; their software’s internal quality; the amount of time they spend waiting for people; the amount of time they spend on organizational overhead; the number of shortcuts they take; and the amount of slack they have.

These factors are different for every team, so you can’t use capacity to compare two teams. If one team has twice the capacity number of another, it could mean that they have less overhead... but it’s more likely that they just have a different approach to estimating.

Teams also don’t have control over most of the things that affect capacity. In the short term, they can only control the number of hours they work and the number of shortcuts they take. So a team that’s judged on its capacity can only respond to that pressure by working extra hours, doing sloppy work, or cutting their slack. That may lead to a short-term boost in their capacity numbers, but it will reduce their actual ability to deliver.

Don’t share capacity numbers outside the team. If you’re a manager, don’t track, reward, or even talk about capacity, other than to encourage a stable capacity. And never, ever call it productivity.

To understand what to do instead, see the “Reporting” practice.


How should we count partially-done stories?

Partially-done stories don’t count. At the end of the iteration, if you have any partially-done stories, create a new story for the work remaining and give it a new estimate, if you’re using estimates. (See the “Incomplete Stories” section for details.) The part done in this iteration doesn’t count toward your capacity, which means your capacity will go down.

This may sound harsh, but if you’re using iterations, capacity, and slack correctly, partially-done stories should be extremely rare. If you have partially-done stories, something has gone wrong. Reducing your capacity will give your team the slack you need to resolve the problem.

How do we change our capacity if we add or remove people?

If you add or remove only one person, try leaving your capacity unchanged and see what happens. Another option is to adjust your capacity proportionally to the change. Either way, your capacity will adjust to the correct number after another iteration.

How can we have a stable capacity? People take vacations, get sick, and so on.

Your iteration slack should handle minor variations in people’s availability. If a large percentage of the team is away, as during a holiday, your capacity may go down for an iteration. This is normal. You can reset it in the next iteration.

If you have a small team, you might find that even one day of absence is enough to destabilize your capacity. In this case, you may wish to use two-week iterations. See the “Task Planning” practice for a discussion of the tradeoffs.

Isn’t it a waste of time for everyone to estimate stories together?

It does take a lot of time for people to estimate together, but this isn’t wasted time. Estimating sessions aren’t just for estimation—they’re also a crucial first step in communicating and clarifying what needs to be done. Developers ask questions and clarify details, which often leads to ideas on-site customers haven’t considered. Sometimes this collaboration reduces overall cost, as the “How to Win the Planning Game” section describes.

All the developers need to be present to ensure they understand what they will be building. Having them estimate together also improves consistency.

What if we don’t ask the right questions of the customer and miss something?

It happens. Information obvious to customers isn’t always obvious to developers. In the “A Conversational Estimate” sidebar, if Inga hadn’t asked Elissa about the “age” column, the team would have been surprised about it later.

Although you can’t prevent these mistakes, you can reduce them. If developers estimate together, they’re more likely to ask the right questions. Mistakes will also decrease as the team becomes more experienced with working together. Customers will learn which details to provide, and developers will learn which questions to ask.

In the meantime, don’t worry unless you encounter these surprises often. Address unexpected details when they come up, as described in the “Making and Meeting Iteration Commitments” section.

If unexpected details frequently surprise you, and the problem doesn’t improve with experience, ask a mentor for help.

Isn’t it risky to estimate based on the most-qualified team member? Shouldn’t we use the average team member, or least-qualified for extra safety?

The “Yesterday’s Weather” feedback loop eliminates the need for estimate accuracy, as the “Why Estimate Accuracy Doesn’t Matter” sidebar describes, so all of these approaches are equally safe. What’s important is consistency, and thinking in terms of ideal time and the most-qualified team member is the easiest way to be consistent.

When should we re-estimate our stories?

Because story estimates need to be consistent with each other, you shouldn’t re-estimate stories unless their scope changes. Even then, don’t re-estimate stories after you’ve started working on them, because you’ll know too many implementation details to make a consistent estimate.

On the other hand, if your constraint changes, and different people start making estimates, both your estimates and capacity have to start over from scratch.

To make our estimates, we made some assumptions about the technical design. What if the design changes?

Agile assumes you’re building your design incrementally, and improving the whole design over time. As a result, your estimates will usually remain consistent with each other.

How do we deal with technical dependencies in our stories?

With proper incremental design, technical dependencies should be rare, although they can happen. I typically make a note along with the estimate: “6 (4 if story Foo done first).”

Evolutionary Design

If you find yourself making more than a few of these notes, something is wrong with your approach to incremental design. Evolutionary design can help; so can asking a mentor.


Task Planning

Capacity assumes the use of iterations and requires slack to smooth out minor problems and inconsistencies.

Estimating requires trust: developers need to believe they can give accurate estimates without being attacked, and customers and stakeholders need to believe the developers are providing honest estimates. That trust often isn’t present at first, and if it isn’t, you need to work on developing it.

Regardless of your approach to estimating and capacity, never use capacity numbers or incorrect estimates to attack developers. This is a quick and easy way to destroy trust.


When you use capacity well:

  • Your capacity is consistent and predictable each iteration.

  • You make iteration commitments and meet them reliably.

  • Estimation is fast and easy, or not required at all.

  • You can size most stories in a minute or two.

Alternatives and Experiments

The central idea of capacity is Yesterday’s Weather: focusing on consistency, rather than accuracy; basing predictions on past measurements; and using that to create a feedback loop that automatically corrects itself.


There are countless approaches to estimation and prediction. Yesterday’s Weather has the advantage of being simple and reliable. It’s not perfect, though, and relies on slack to cover its imperfections. Other approaches add a lot of complexity in an effort to be more precise. Despite that added complexity, I’ve yet to see any come close to working as well as the Yesterday’s Weather + Slack feedback loop.

You’re welcome to experiment with better ways of determining capacity, but don’t do it right away. First, get good at using the approach in this book to reliably finish iterations, and stick with it for several months. The ripple effects of changing capacity planning are profound, and hard to see without experience.

One of the most popular alternatives I see is to base capacity on the average of prior iterations, rather than just the past iteration. Another approach is to include stories that were started in one iteration and finished in another. I think both approaches are misguided: they’re based on a desire to increase capacity, but they increase the capacity number without increasing the team’s actual ability to deliver. They just make the team more likely to have trouble meeting its commitments. It’s better to bite the bullet, plan for a lower capacity, and use the resultant slack to increase the team’s actual, real-world ability to deliver.

Another popular alternative is the #NoEstimates movement, which sidesteps estimation entirely. There are two approaches to #NoEstimates, and I’ve included both in this book. The first is to count stories rather than estimate them, as described in this practice. Some teams use very small stories—more than a dozen per iteration—to help make that work. The second is to not use iterations at all, and instead use continuous flow, as described in the “Task Planning” practice. Both of these ideas are worth trying, after you’ve mastered the basics.

Further Reading

Agile Estimating and Planning [Cohn 2005] describes a variety of approaches to Agile estimation.

Software Estimation: Demystifying the Black Art [McConnell 2006] provides a comprehensive look at traditional approaches to estimation.


Share your feedback about this excerpt on the AoAD2 mailing list! Sign up here.

For more excerpts from the book, or to get a copy of the Early Release, see the Second Edition home page.

AoAD2 Practice: Task Planning

Book cover for “The Art of Agile Development, Second Edition.”

Second Edition cover

This is a pre-release excerpt of The Art of Agile Development, Second Edition, to be published by O’Reilly in 2021. Visit the Second Edition home page for information about the open development process, additional excerpts, and more.

Your feedback is appreciated! To share your thoughts, join the AoAD2 open review mailing list.

This excerpt is copyright 2007, 2020, 2021 by James Shore and Shane Warden. Although you are welcome to share this link, do not distribute or republish the content without James Shore’s express written permission.

Task Planning

Whole Team

We understand how we’re going to work together.

If you follow the practices described in the “Planning” chapter, you’ll end up with a visual plan with multiple levels of detail: valuable increments that could possibly be done in the future, small valuable increments that are likely to be done in the intermediate-term, and specific stories that will be done in the near-term.

That plan turns into action through task planning: breaking down stories into tasks and tracking the team’s progress. Because Agile teams are self-organizing (see “Key Idea: Self-Organizing Teams”), task creation, assignment, and tracking is done entirely by the team, not by managers.

There are three parts to task planning: cadence, creating tasks, and visual tracking.


Cadence is the frequency of your task planning. There are two common approaches in the Agile community: iterations (also called “Sprints”1) and continuous flow (also called “Kanban”).

1“Sprint” is a misleading name. Software development is more like a marathon than a series of sprints. You need to work at a pace that you can keep up indefinitely.

Iterations are fixed-length timeboxes lasting a week or two. At the beginning of every iteration, you choose a set of stories to complete, and by the end, you expect them to all be done. Continuous flow, in contrast, is an unending stream of stories. You choose a new story whenever the previous one is finished.

Teams new to Agile should use iterations.

Teams new to Agile should use iterations. Not because they’re easier—they’re actually harder—but because the strict iteration cadence provides important feedback about how the team needs to improve. More importantly, when used correctly, your iteration capacity gives you the slack to make those improvements.

Continuous flow doesn’t have the same built-in opportunities for improvement that iterations do. It’s harder to notice when your team is going off the rails, and harder to justify spending time on improvements. That said, continuous flow is less stressful and many teams prefer it.


Software development dies in inches. At first everything’s fine: “I’ll be done with this task once I finish this test.” Then you’re limping: “I’ll be done as soon as I fix this bug.” Then gasping: “I’ll be done as soon as I research this API flaw... no, really.” Before you know it, it’s taken you two days to finish a task that you expected to take two hours.

Death by inches sneaks up on a team. Each problem only takes hours or a day, so it doesn’t feel like a problem, but they multiply across the hundreds of tasks in a release. The cumulative effects blindside teams and their stakeholders.

Done Done

Iterations allow you to detect problems early. They’re strictly timeboxed: when the time is up, the iteration is over. At the beginning of the iteration—each is typically a week or two in length—you predict your capacity and choose stories to match that capacity. At the end of the iteration, all the stories should be “done done.” If they’re not, you know something went wrong. Although this doesn’t prevent problems, it reveals them, which gives you the opportunity to fix the underlying issues.

Stakeholder Demos
Continuous Deployment

Iterations follow a consistent schedule:

  1. Demonstrate results of previous iteration to stakeholders (up to half an hour)

  2. Hold retrospective on previous iteration (one hour)

  3. Plan iteration tasks (half an hour)

  4. Develop stories (remainder of iteration)

  5. Deploy, if not using continuous deployment (automated)

Many teams start their iterations on Monday morning, but I prefer iterations that start on Wednesday or Thursday morning. This allows team members to take a long weekend without missing important events. It also reduces the desire to work on the weekend.

Iterations can be of any length, but most teams use one- or two-week iterations. For teams new to Agile, one-week iterations are best. That’s because teams develop their understanding of Agile based on how many iterations they’ve undertaken, not how many weeks they’ve experienced. Shorter iterations result in more rapid improvement.2

2I’ve taught classes where students develop real software in 90-minute iterations. They experienced the same improvement I’ve seen from teams using week-long iterations.

Energized Work

On the other hand, one-week iterations put more pressure on the team. This makes energized work more difficult and can dissuade people from refactoring. Capacity is harder to predict in one-week iterations, too, because interruptions and holidays take up a proportionally larger share of the iteration. So, once your team is able to reliably finish all its stories every iteration, go ahead and experiment with two-week iterations.

Iterations longer than two weeks are usually a mistake. Teams use longer iterations when they feel they need more time to get their work done, but that’s just papering over problems. Longer iterations won’t change the amount of time you have; they only change how often you check your progress.

If you have trouble finishing stories, use shorter iterations and make your stories smaller.

If you have trouble finishing everything by the end of the iteration, it’s not because you need more time; it’s because you need more practice working incrementally. Shorten your iteration length, make your stories smaller, and focus on solving the problems that prevent you from finishing stories.

Continuous flow

Continuous flow is just what it sounds like: a continuous flow of stories with no particular start or end. Rather than predicting what your team can do each week, establish a “work-in-progress limit” for how many stories your team will work on at once. One to three is best. The fewer, the better. (See “Key Idea: Minimize Work in Progress”.) Once the limit is reached, no more stories can be added. When a story is “done done,” it’s removed, making room for a new story to be added.
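The work-in-progress rule is easy to express in code. This minimal sketch (class and method names are hypothetical) enforces the limit: a story can only start when a slot is free, and finishing a story frees a slot.

```python
class ContinuousFlowBoard:
    """A minimal sketch of a continuous-flow task board with a WIP limit."""

    def __init__(self, wip_limit=3):
        self.wip_limit = wip_limit  # one to three is best; the fewer, the better
        self.in_progress = set()
        self.done = []

    def start(self, story):
        # Once the limit is reached, no more stories can be added.
        if len(self.in_progress) >= self.wip_limit:
            raise ValueError("WIP limit reached; finish a story first")
        self.in_progress.add(story)

    def finish(self, story):
        # A "done done" story is removed, making room for a new story.
        self.in_progress.remove(story)
        self.done.append(story)

board = ContinuousFlowBoard(wip_limit=2)
board.start("login page")
board.start("password reset")
board.finish("login page")   # frees a slot...
board.start("audit log")     # ...so a new story can start
```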

In theory, continuous flow is less wasteful than iterations, because you don’t need to predict capacity or make stories fit within the iteration timebox. In practice, I haven’t found that to be true. A strict iteration timebox keeps the team focused on completing stories. Teams using continuous flow don’t have the same urgency to fix problems and cut scope. I recommend that teams new to Agile master iterations before trying continuous flow.

That said, continuous flow can be a good fit for teams with a lot of small, unpredictable stories, such as teams doing a lot of maintenance and bug-fixing work. If your plans are changing so often that even a one-week iteration is too long, continuous flow is a good choice.

Creating Tasks

Visual Planning
The Planning Game

Start your task planning by choosing stories. If you use iterations, choose stories based on your iteration capacity: for example, six stories, or 12 points. (See the “Capacity” practice for details.) If you use continuous flow, choose stories according to your work-in-progress limit, then plan a new story whenever a story is finished. Either way, only choose stories that are ready to be completed: third-party dependencies have either been resolved or the third party is ready to participate.
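The text doesn’t prescribe an algorithm for filling an iteration, but one simple interpretation is a greedy, priority-order selection: take stories from the top of the plan until the next one would exceed your measured capacity. (The story names and point values below are hypothetical.)

```python
def choose_stories(prioritized_stories, capacity):
    """Greedily fill the iteration: take (name, points) pairs from the top of
    the visual plan until the next story would exceed the team's capacity."""
    chosen, points_remaining = [], capacity
    for name, points in prioritized_stories:
        if points > points_remaining:
            break  # keep the plan in priority order; don't cherry-pick
        chosen.append(name)
        points_remaining -= points
    return chosen

plan = [("login", 3), ("password reset", 2), ("audit log", 8), ("search", 1)]
print(choose_stories(plan, 12))  # ['login', 'password reset']: audit log doesn't fit
```

Stopping at the first story that doesn’t fit, rather than skipping ahead, is a design choice that keeps the plan strictly in priority order; on-site customers may prefer to swap in a smaller high-priority story instead.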

On-site customers choose stories by taking the highest-priority stories from the visual plan. They spread them out on a table, or virtual whiteboard, and explain them to the rest of the team. This should only take a moment: the team should already have seen the stories during the planning game.

Next, use simultaneous brainstorming (see the “Work Simultaneously” section) to come up with the tasks needed to complete each story. Each task should only be a few hours of work. Write each one on a card, or virtual equivalent, and put it next to the story it relates to.

Tasks can be anything you like. Everything needed to finish the story should be included. Examples include “update build script,” “add Customer class,” and “mock-up billing form.” Most tasks will be created by developers, but anybody can participate.

Task planning is a design activity.

Task planning is a design activity. It’s a way of getting the whole team on the same page. If everybody has the same ideas about how to develop the software, it should go quickly. If not, it’s a great opportunity for discussion before development begins.

You don’t need to go into a lot of detail on each task. Task planning is about getting everybody on the same page, not exhaustively deciding what to do. Leave room for people to work out details when they do the work. For example, if one of your tasks is to “add Customer class,” you don’t need to say what methods will be in that class, so long as the programmers all understand, in general, how the Customer class fits into the plan.

As you work, developers may have questions about the details of each story. Make sure team members with customer skills are on hand to answer those questions.

Creating your task plan should take between 10 and 30 minutes. If it takes longer, you’re probably going into too much detail. If there’s a question you’re getting stuck on, don’t solve it during the meeting. Instead, add it as a task of its own. For example, if people are arguing about which authentication library to use, add a task that says “Choose authentication library.”

Stand-Up Meetings

Once all the tasks are ready, review the plan and double-check that it has everything the team needs to finish its stories. Ask the team if the plan is achievable. It usually will be, but if it isn’t, remove or replace stories until it is.

Finally, conduct a consent vote (see the “Seek Consent” section). When the team consents to the plan, you’re ready to go. Set up your task tracking board, hold a brief stand-up to decide what people will do first, and get to work.

Visual Tracking

Agile teams share ownership of their work, as “Key Idea: Collective Ownership” describes. Tasks aren’t assigned to specific people. Instead, when somebody’s ready to start working on a task, they look at the tasks available and choose the next one that they can contribute to or learn from. It’s the whole team’s responsibility to keep track of progress and help out where needed.

Team Room
Informative Workspace

It’s easy to fixate on your own tasks at the expense of the team’s overall progress. Remember to stay aware of the broader picture, and to put the success of the team over finishing your individual tasks. Stand-up meetings will help you step back and think about the big picture, but an even more important tool is your task tracking board. It’s a central part of your informative workspace.

My favorite tool for task tracking is a large magnetic whiteboard. I like to use a six-foot whiteboard on wheels. I put the visual plan on one side and the task plan on the other. If your team is remote, you’ll use a virtual whiteboard. It’s often most convenient to put your task plan and release plan on the same virtual board.

The task board is the nerve center of your team room, whether physical or virtual. It makes your progress visible at all times. Be sure to keep it up to date. When you start working on a task, mark your name or initials on the board. Many teams I’ve worked with have created custom magnets with fun pictures for this purpose.

In a physical team room, bring the task card back to your desk. The act of physically moving to the board and taking a card will help the rest of the team maintain situational awareness. Just seeing people move around is a powerful tool for understanding the team’s status.

In a remote team, you can have the same impact on situational awareness by giving team members a tablet to use for their virtual whiteboards. (It’s a good idea anyway. Tablets are inexpensive and make whiteboard sketches much easier.) Leave the tablet powered on and logged in to the task board so you can see changes out of the corner of your eye. You can always turn it off if it’s distracting.

A planning tool will only get in your way.

The best way to visualize your task plan is whatever works best for your team. Here, I present two options. Feel free to experiment with others. Keep it visual and lightweight, though, using a whiteboard or virtual equivalent. So-called Agile planning tools, such as Jira, are inflexible and add too much friction. Your task board is a visual representation of your process. As an Agile team, you own your team’s process and you should be constantly experimenting with improvements and new ways of working. A planning tool will only get in your way.

Task grid

The task grid has been a hit with every team I’ve introduced it to. It’s simple and compact. To create it, arrange your stories vertically, with the highest priority story on top. To the right of each story, arrange the tasks associated with that story in a horizontal line. Put the tasks in whatever order seems most natural. It’s okay to complete tasks out of order.

Done Done

When people are ready to work on a task, they take whichever card they’re prepared to work on, starting from the top-left corner. As each task is finished, mark the card in some way: circle it with a green marker, mark it with a green magnet, or change the color of a virtual card. (Avoid writing on the card. Sometimes tasks need to be revisited.) When all the tasks for a story are done, the final task is to review the team’s definition of done, make any final changes needed, and mark the story card green as well.

Task grids work particularly well for teams using iterations. The “A Task Grid” figure shows an example.

A diagram of a whiteboard. The top of the board is labelled “Iteration 32.” On the left side of the board is a column of six cards labelled “Stories.” To the right of each story is a row of cards, ranging from two to six cards in length, collectively labelled “Tasks.” The first two story cards and their tasks all have a box drawn around them. On the third row, the second and third task cards are boxed, but the first and fourth have been replaced with the circled initials “NS/SB” (for the first card) and “RAH/SR” (for the fourth card). On the fourth row of task cards, only the second card has been boxed. The third card has been replaced with the circled initials “JS/SW,” and the remaining task cards are unmarked. The final two rows are all unmarked, indicating that work has not yet begun on those stories. Also on the board is a “Done Done” checklist and a note about an upcoming microbrew-fest.

Figure 1. A task grid

Detectives’ whiteboard

You know how, in crime dramas, there’s a whiteboard with all the information about the crime? Suspect mug shots, evidence, arrows going from one part to another? That’s a detectives’ whiteboard, and that’s exactly how this visualization works.3 Every story gets its own board, or part of a board, and everything related to the story is put on the board. Tasks, mock-ups, documents... everything. They’re grouped in whichever way makes sense to the team.

3Arlo Belshee was the first person to introduce me to the detectives’ whiteboard, as part of his Naked Planning process. Later, Ron Quartel independently created a similar idea for his FAST process, where he called it a “feature board.”

When a task or piece of information is done, or no longer relevant, the team removes it from the board. When they realize something else could be helpful, they add it. The story is done when the board is empty.

Detectives’ whiteboards work particularly well for teams using continuous flow. The “A Detectives’ Whiteboard” figure shows an example.

A diagram of a whiteboard. The top of the board is labelled “WIP Limit: 2.” The board is divided into two halves. Each half has a story card at the top. On the left-hand side, there are several clusters of task cards, a handwritten list labelled “Perf criteria,” and a document with a UI mock-up on it. On the right-hand side, there is a column of task cards and a large diagram of a statechart. There is a task card next to each node in the chart.

Figure 2. A detectives’ whiteboard

Cross-Team Dependencies

Some stories will depend on work from people outside the team. Because stories are small—typically, only a day or so of work, if the team works together—it’s best to wait until these third-party dependencies are resolved. Similarly, if a story requires somebody to join the team temporarily, wait until that person is available. Otherwise, you’ll end up with partially-completed work. (See “Key Idea: Minimize Work in Progress”.)

Don’t choose stories with unfulfilled dependencies.

To be specific: when choosing stories for task planning, don’t choose stories with unfulfilled dependencies. If you’re using iterations, they have to wait until the next iteration. If you’re using continuous flow, they have to wait until the next slot opens up.

If you start work on a story and discover that it has a dependency, it’s okay to leave it in your plan. But put a short timebox on it, such as two or three days. If the dependency isn’t resolved by then, take it out of your plan and replace it. I mark such tasks in red, with an expiration date.

Some stories might need work from your team, work from another team, and then work from your team again. Split those stories into two stories: first, a story to prepare for the other team, and second, a story that you start after the other team has made their contribution. Remember to keep them customer-centric, as described in the “Stories” practice.

Whole Team

If your team faces a lot of delays due to cross-team dependencies, something is wrong. You might not have a whole team, or your organization might have poor cross-team architecture (see the “Scaling Agility” chapter). Ask a mentor for help.

Making and Meeting Iteration Commitments

Making weekly commitments to stakeholders is an incredible way to build trust.

Iterations are a powerful tool for improving your team’s ability to deliver software reliably. To take full advantage of them, treat your iteration plan as a commitment: something that you are going to do your utmost to achieve.

At first, you’ll have trouble achieving your iteration plans, so make your commitment privately, within the team. Commit to yourselves, not stakeholders. With practice, though, you’ll be consistent enough that you can share your commitments with stakeholders, too—and that is an incredible way to build trust.

Commitments are a choice the team makes for itself.

No matter what, though, commitments are a choice the team makes for itself. Agile teams own their work. Managers, never force commitments on your teams. It doesn’t end well.

Of course, owning your commitments still doesn’t mean you’ll always finish everything as planned. Things will go wrong. Yes, commitment is about doing what’s necessary, within reason, to finish the iteration’s stories on time, but it’s also about working around problems and being clear and honest in your communication when problems come up that you can’t work around.

Stand-Up Meetings

To meet your commitments, you need to be aware of problems before it’s too late. During your daily stand-up meeting, review your team’s progress. Is there a task that’s been in progress since the last stand-up? It could be a problem. If you’re halfway through the iteration, are about half the task cards marked green? If not, you might not finish everything on time. Are half the stories also marked green? If tasks are done, but stories aren’t, you could be blindsided by extra work to get stories “done done” on the last day.

When you discover a problem that threatens your iteration commitment, see if there’s any way you can change your plan so that you still meet your commitments. Would using some of your slack help? Is there a task you can simplify or postpone? Discuss your options as a team and revise your plan.

Sometimes the problem will be too big to absorb. In this case, you’ll usually need to reduce the scope of the iteration. Typically, this involves splitting a story and postponing part of the work, or removing a story entirely. As a team, discuss your options and make the appropriate choice.

Always stay in control of your iteration, even if you have to remove a story to do so. Any iteration that delivers all the stories in your current plan—even if that plan is smaller than it was at the beginning—is a success. But under no circumstances should you change the iteration deadline. Always end the iteration on time. It’ll keep you from fooling yourselves.

Incomplete Stories

Done Done

At the end of the iteration, every story should be “done done.” Partially-completed stories should be rare. That said, they will happen occasionally, particularly while you’re still learning.

Incomplete code is harmful. If you don’t plan on finishing a story immediately in the next iteration, strip its code out of the codebase and put the story back in the visual plan. If you do plan on finishing the work, create a new story that represents the work that’s left to be done. If you’re using estimates, give it a new estimate. You don’t want the partially-completed work to count towards your capacity, because then you’ll just end up signing up for too much work again.

Sometimes, despite your best efforts, you may have a bad week and end up with nothing that’s completely done. Some teams declare a lost iteration when this happens. They roll back their code and start over as if the iteration never happened. Although this sounds harsh, it’s a good practice. Iterations are short, so you won’t throw away much code, and you’ll retain everything you learned when you wrote it the first time. The second attempt will produce better code.

Partially-complete work should be rare. If you’re having trouble finishing stories, change your approach. Reduce your planned capacity, split your stories smaller, and coordinate as a team on finishing each story before moving on to the next. If that doesn’t help, something is wrong. Ask a mentor for advice.

Emergency Requests

It’s inevitable: you’re in the middle of finishing a story, everything is going well, and then a stakeholder says, “We really need to get this new story in.” What do you do?

First, decide as a team whether the story is really an emergency. Your next task planning meeting will typically be just days away. Rather than inserting chaos into the team’s work, maybe the new story can wait until the next opportunity. Team members with the most business expertise and political savvy should lead that decision.

If you do decide to prioritize the emergency story, your approach depends on whether you’re using iterations or continuous flow.

For iterations, you can remove any story that hasn’t been started and replace it with a story of the same size. (The removed story goes back in the visual plan.) If all of your stories have already been started, you can still remove stories, but you’ll have to guess about how many to remove, and you’ll have to take out their code. That way you don’t have incomplete code gumming up the works.

Teams using continuous flow often create a separate work-in-progress limit just for emergency stories. Keep the limit very small—one emergency slot is often best. If there’s a second emergency, and it can’t wait, you can remove an existing story, but you have to take out its code.

If you have a steady trickle of small emergencies, you can treat them as overhead rather than stories. Put them on your task board, but don’t count them toward your capacity. Your capacity will automatically adjust to give you enough time to deal with the emergencies, at the cost of less time to work on stories.

If you have a lot of emergency requests, or other ongoing support needs, you can reserve a developer (or several) for taking care of those requests. In between requests, they can work on anything that’s not frustrating to interrupt—which typically excludes working on stories. Rotate a new person into this role every day or week to prevent burn-out.

Your First Week

When your team first tries Agile, expect the first month or two to be pretty chaotic. During the first month, on-site customers will be figuring out the visual plan, developers will be establishing technical infrastructure, and everyone will be learning how to work together and use Agile practices.

Some people think the best way to overcome this chaos is to take a week or two up-front to work on planning and technical infrastructure. (This is often called “Sprint Zero.”) Although there’s some merit to this idea, Agile teams work on planning and technical infrastructure iteratively and continuously for the entire life of the team. Starting with real work on the first day helps establish this good habit.

Adaptive Planning
Visual Planning
The Planning Game

Start out by using one-week iterations, and start your first day by planning your first iteration. Normally, this involves selecting stories from the visual plan, but you won’t have a plan yet. Instead, think of one valuable increment that will definitely be part of your first release and conduct a miniature planning game session for that increment. Come up with 10-20 “just right” stories that everyone understands well.

These first stories should sketch out a “vertical stripe” of your software, also known as a “walking skeleton.” They should build a tiny piece of every technology needed for your first increment, so you can see the software working for real. If the increment involves user interaction, create a story to display the initial screen or web page. If it includes a database, create a story to query a small amount of data. If it includes reporting, create a story for a bare-bones report.

The Planning Game

Don’t expect much from your initial stories. Developers will need to establish their technical infrastructure. As a result, the stories should be very small. The initial screen might have nothing more than your logo on it. The database query might have hard-coded parameters. The report might display headers and footers, but no line items.

No Friction
Done Done

Once you have your initial stories, you’re ready for task planning. You won’t know your capacity, so start by creating tasks for just one or two stories. Remember to create tasks for setting up your technical infrastructure: version control, automated build, and so forth. Just do the minimum for now.

During the iteration, focus on just one or two stories at a time and check your progress every day. Continue planning one or two stories at a time, focusing on getting each completely done, until the iteration ends. The stories that are done will establish your capacity for the next iteration. The ones that aren’t can be rolled back or turned into new stories, as described in the “Incomplete Stories” section.

Mob Programming

It’s a good idea to have programmers and operations work on the first few stories as a group, even if you don’t plan on using mob programming long-term. Set up a projector or shared screen so everybody can participate while people take turns controlling the keyboard. (Make sure they’re collaborative about it.) You don’t have to use a formal mob programming approach, but it could be helpful.

Working on your first stories as a group helps reduce the chaos that occurs when people start working together. It will help you jointly establish initial conventions, such as directory structure, filenames and namespaces, basic design choices, and infrastructure decisions. Individual developers (or pairs) can peel off to take care of some necessary issue, such as setting up version control or programming workstations, but for the most part you should work as a team.

Adaptive Planning
Visual Planning

While programmers and operations are working together, on-site customers and testers should work on the visual plan. If you don’t have a draft purpose yet, start with that. Other team members can work with customers or developers as they see fit.

Each subsequent week will go a little smoother. Developers will learn how to split stories and customers will have a visual plan ready to pull from. The team’s capacity will stabilize. The feelings of chaos will subside and the team will begin to work in a steady, predictable rhythm.


How can task planning take less than 10-30 minutes? It always takes us much longer.

The Planning Game

The trick to effective task planning is to only use it for task planning. A lot of teams use their task planning session to estimate and break down stories, but it’s better to do that in a separate planning game session. Task planning should be focused on tasks. Your stories should be ready to go before you begin.

The other trick is to work simultaneously, using a free-form approach, instead of a dedicated planning tool. (See the “Work Simultaneously” section.) Teams that use a dedicated planning tool tend to bottleneck behind a single person controlling the tool, and that slows everything down.

With those two tricks, and some practice, your team should be able to easily finish task planning in less than 30 minutes. If it’s still slow, it may be because people are having trouble coming to agreement about what to do. Remember to create tasks to resolve disagreements and questions rather than trying to resolve them in the task planning meeting. If that doesn’t help, ask a mentor for help.

How should we schedule time for fixing bugs?

Every time you find a bug, even if it’s not related to the stories in your plan, on-site customers should make a “fix” or “don’t fix” decision for that bug. If a bug needs to be fixed, add tasks for fixing it to your plan, regardless of whether it’s related to your current stories. These bug-fixing tasks are part of your overhead, and they don’t count toward calculating your capacity.

No Bugs

Some bugs will be too big to absorb into your current iteration. Create story cards for them and schedule them into your next iteration. Fixing bugs immediately will help reduce the number of bugs you face.

If you have a legacy codebase with a lot of bugs, go through your bug database and make a “fix” or “don’t fix” decision for your next release. Close or defer the “don’t fix” bugs and turn the remainder into stories.

All the tasks in our plan depend on code that other people are still working on. What should I do?

You can write code that depends on unfinished code. Talk to the people who have the other task and come to an agreement about module, class, and method names. Decide who’s going to do what. Then, for your code, create a copy of their module, class, or method, but don’t implement it. Just stub in a hard-coded return value.

When it’s time to merge the code, replace your stub code with their real code and make sure the tests still pass. Whoever merges second can do it.
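For example, suppose your task depends on a discount calculation that another pair is still writing. The module and function names here, and the hard-coded value, are hypothetical, chosen only to illustrate the stubbing approach:

```python
# Stub standing in for the other pair's unfinished pricing code.
# The interface (name, parameters, return type) is what you agreed
# on together; the body just returns a hard-coded value so your
# code and tests can run in the meantime.
def calculate_discount(customer_id: str, order_total: float) -> float:
    """Stub: replace with the real implementation when merging."""
    return 10.0  # hard-coded; not real business logic

# Your code, written against the agreed-upon interface.
def invoice_total(customer_id: str, order_total: float) -> float:
    return order_total - calculate_discount(customer_id, order_total)
```

When the real `calculate_discount` is ready, delete the stub, call the real code, and confirm your tests still pass.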

Another option is to pair or mob with the people who are working on the other task. You’ll help them finish faster and gain a better understanding of how their code works.


Iterations and continuous flow both depend on small stories—about a day each, if the team works together. Larger stories make it easy for things to go wrong without being noticed.

Evolutionary Design
Evolutionary Architecture

Every story that the team finishes should make progress that on-site customers can recognize—if not in production, then at least in a staging environment. This requires that stories be customer-centric and that technical infrastructure be built incrementally. Typically, that means evolutionary design and architecture. Teams that don’t work this way are likely to have technical quality problems in the future, and may have trouble making their stories small enough.


Consistently meeting iteration commitments requires basing your capacity on measured reality. Never artificially inflate your team’s capacity. Even then, things will go wrong, so your iteration must include slack to absorb those problems.

Never use commitment as a club. Don’t force team members to commit to a plan that they don’t agree with. Don’t disclose commitments outside the team until you have a track record of meeting them.


When you plan your tasks well:

  • The whole team understands what needs to be done to finish their stories.

  • The team works together to accomplish their plan.

  • The team is aware of when things are going well, and when they’re not, and takes action to correct problems.

When you use iterations well:

  • Your team has a consistent, predictable capacity.

  • Stakeholders know what to expect from the team and trust that it will deliver on its iteration commitments.

  • The team discovers mistakes quickly and is usually able to deal with them without impacting their iteration commitments.

Alternatives and Experiments

The standout difference between Agile and non-Agile task planning is collective ownership. Not only are Agile teams in charge of their own planning, they work together to finish their plan. On non-Agile teams, tasks are typically assigned by managers, and people focus on their individual tasks.

Another aspect of Agile task planning that stands out, of course, is its iterative and incremental nature. Using small stories means that teams make steady, incremental progress, and they show that progress with working software every week or two. They use that software to get feedback, which in turn causes them to iterate their plans.

As you think about ways of experimenting with task planning, be sure to keep those core differences in mind. Don’t be too eager to experiment, though: there are a lot of subtleties to task planning, particularly iterations, so focus on getting really good at making and meeting one-week iteration commitments before trying alternatives. Give it several months, at least.

When you’re ready to experiment, one obvious experiment is to try continuous flow rather than iterations. You can also experiment with iteration length and story size. Some teams prefer to use very small stories that only take a few hours to complete. For these teams, tasks aren’t necessary. The stories are so small, they act as tasks on their own.

Your task visualization can and should change any time you have ideas for improving your process.

One area where you can start experimenting right away is your task board visualization. As a visual representation of your team’s process, it can and should change any time you have ideas for improving your process.

One common task visualization is to create vertical “swim lanes” that show stories’ progress through various phases of development. I prefer to avoid this approach myself, because Agile works best when you work on all “phases” simultaneously—but, admittedly, that depends on Delivering zone practices. (See Part III.) For teams who aren’t developing Delivering fluency, a swim-lane diagram can be helpful.

Further Reading

Agile Estimating and Planning [Cohn 2005] and Planning Extreme Programming [Beck and Fowler 2000] each provide alternative ways of approaching iteration planning.

XXX Kanban book

Share your feedback about this excerpt on the AoAD2 mailing list! Sign up here.

For more excerpts from the book, or to get a copy of the Early Release, see the Second Edition home page.

AoAD2 Practice: Real Customer Involvement

Book cover for “The Art of Agile Development, Second Edition.”

Second Edition cover

This is a pre-release excerpt of The Art of Agile Development, Second Edition, to be published by O’Reilly in 2021. Visit the Second Edition home page for information about the open development process, additional excerpts, and more.

Your feedback is appreciated! To share your thoughts, join the AoAD2 open review mailing list.

This excerpt is copyright 2007, 2020, 2021 by James Shore and Shane Warden. Although you are welcome to share this link, do not distribute or republish the content without James Shore’s express written permission.

Real Customer Involvement


We understand the goals and frustrations of our customers and users.

I once worked with a team that was building software that made it easier for chemists to analyze certain types of molecules. The team’s domain expert was a chemist whose previous job involved analyzing those molecules using the company’s old software. She was invaluable, full of insight about what did and didn’t work with the old product. We were lucky to have her as a member of the team. Thanks to her, we created a more valuable product.

Whole Team

In an Agile team, on-site customers—team members with the skill to represent customer, user, and business interests—are responsible for choosing and prioritizing stories. The value of the team’s work is in their hands. This is a big responsibility. As an on-site customer, how do you know what to choose?

Some of that knowledge comes from your expertise as a product manager, domain expert, or user experience designer. You can’t think of everything, though. Your daily involvement with the team, although crucial, includes the risk of tunnel vision. You can get so caught up in daily details that you lose track of your real customers’ interests.

To widen your perspective, involve real customers and users.

To widen your perspective, you need to involve real customers and users. The best approach to doing so depends on who you’re building the software for.

Personal Development

In personal development, which I include mainly for completeness, the development team is its own customer. They’re developing software for their own use. There’s no need to involve anyone else; the team is the real customer.

Platform Development

In a multi-team development effort, some teams will build software solely for the other teams to use. The real customers of this sort of platform development are those client teams.

Client development teams need flexibility, autonomy, and ownership, not magic.

All too often, platform development falls into the trap of making tools and libraries that are “easy to use.” But that’s not what your client teams need. They need flexibility, autonomy, and ownership, not magic. They need to be able to do their work without depending on your team to make changes. In general, that means that you should prioritize simple programming interfaces with clear responsibilities, minimal side effects, and an “escape hatch” that allows teams to dig into details when they need to.

Some organizations divide their teams into senior developers, who build a platform, and junior developers, who customize it to build products. Avoid this approach. Too often, it leads to an ivory-tower platform that tries to make customization “easy” but actually requires inexperienced developers to constantly work around its gaps. The result is a hard-to-maintain mess.

Be sure to work closely with representatives from the teams you serve when designing APIs and deciding on capabilities. Focus on giving your customers the ability to solve problems on their own, so you don’t end up as a bottleneck to their work. One way to improve communication and understanding is to conduct “exchange programs” in which one of your developers trades places with a client team’s developers for several weeks.

If your team builds software to help developers in general, rather than supporting specific teams, see the “Vertical-Market Software” section instead.

In-House Custom Development

In-house custom development occurs when your team is building something for your organization’s own use. This is classic IT development. It may include writing software to streamline operations, automation for your company’s factories, or producing reports for accounting.

In this environment, the team has multiple customers to serve: the executive sponsor who pays for the software and the end-users who use the software. Their goals may not be in alignment. In the worst case, you may have a committee of sponsors and multiple user groups to satisfy.

Turn your real customers into on-site customers.

Despite this challenge, in-house custom development makes it easy to involve real customers because they’re easily accessible. The best approach is to bring your customers onto the team—to turn your real customers into on-site customers.

Rather than asking customers to join your team, it may be easier to move your team to sit near your customers.

To do so, recruit your executive sponsor or one of their trusted lieutenants to be your product manager. The product manager will make decisions about priorities, reflecting the desire of the executive sponsor to create software that provides value to the organization.

Also recruit some end-users of the software to act as domain experts. As with the chemist mentioned in the introduction, they will provide valuable information about how real people use the software. They will reflect the end-users’ desire to use software that makes their lives better.

Stakeholder Demos

To avoid tunnel vision, the product manager and on-site customers should solicit feedback from their colleagues by conducting stakeholder demos and sharing roadmaps.

If you have trouble getting your sponsors or users to join the team, see the discussion of outsourced development in the next section. If you have multiple sponsors or user groups, see the “Vertical-Market Software” section.

Outsourced Custom Development

Outsourced custom development is similar to in-house development, but you may not have the connections that an in-house team does. As a result, you may not be able to recruit real customers to act as the team’s on-site customers.

Still, you should try. One way to recruit real customers is to move your team to your customer’s offices rather than asking them to join you at yours.

Visual Planning
Stakeholder Demos
The Planning Game

If you can’t bring real customers onto the team, make an extra effort to involve them in other ways. Meet in person with your real customers for the first week or two of the project so you can discuss your purpose, context, and visual plan, and get to know each other. If you’re located near each other, meet again for each stakeholder demo and planning session, as well as occasional retrospectives.

If you’re far enough apart that regular visits aren’t feasible, stay in touch via videoconference and phone conferences. If you have a remote team, consider giving your customers access to your virtual team room. Try to meet at least once per month to discuss plans. Even if you have an in-person team, consider using a virtual whiteboard for your visual plan, so you can more easily share and discuss plans.

Vertical-Market Software

Unlike custom development, vertical-market software is developed for many organizations. Like custom development, though, it’s built for a particular industry, and it’s often customized for each buyer. Most software-as-a-service (SaaS) products fall into this category.

Because vertical-market software has multiple customers, each with their own needs, you have to be careful about giving real customers too much control over the direction of the product. You could end up making a product that fits your on-site customers’ needs perfectly, but alienates your remaining customers.

Stakeholder Demo

Instead, your team should include a product manager who understands the needs of your real customers impeccably. Their job—and it’s a tough one—is to take into account all your real customers’ needs and combine them into a single, compelling purpose. This includes balancing the desires of people who buy the product with the needs of people who actually use the product. For vertical-market software, their goals are often different, and can even be in conflict.

Create opportunities to solicit feedback from real customers.

Rather than involving real customers as members of the team, create opportunities to solicit their feedback. Some companies create a customer review board filled with their most important customers. They share their release plans with these customers and provide stakeholder demos for customers to try.

Depending on your relationship with your customers, you may be able to ask them to donate real users to join the team as on-site domain experts. Alternatively, as with the chemist in the introduction, you may wish to hire previous users to be your domain experts.

In addition to the close relationship with your customer review board, you may also solicit feedback through trade shows and other traditional sources.

Horizontal-Market Software

Horizontal-market software is the visible tip of the software development iceberg: software that’s intended to be used across a wide range of industries. Consumer web sites fall into this category, as do games, many mobile apps, office software, and so on.

As with vertical-market software, it’s best to set limits on the control that real customers have over the direction of horizontal-market software. Horizontal-market software needs to appeal to a wide audience, and real customers aren’t likely to have that perspective. A product manager who creates a compelling purpose and go-to-market strategy based on all customers’ needs is particularly important for horizontal-market software.

Horizontal-market organizations may not have the close ties with customers that vertical-market organizations do. Thus, a customer review board may not be a good option. Instead, find other ways to involve customers: focus groups, user experience testing, community previews, early access and beta releases, and so forth.


We’re creating a web site for our marketing department. What kind of development is that?

At first glance, this may seem like custom development, but because the actual audience for the web site is the outside world, it’s closer to vertical-market or horizontal-market development, depending on your industry. The product manager should come from the marketing department, if possible, but you should also solicit feedback from the people who will be visiting the site.



One danger of involving real customers is that they won’t necessarily reflect the needs of all your customers. Be careful that they don’t steer you toward creating software that’s only useful for them. Use your team’s purpose as its north star. Customer desires inform the purpose, and may even change it, but ultimately team members with product management skills hold final responsibility for the team’s direction.

End-users should be involved but not in control.

Similarly, users often think in terms of improving their existing way of working, rather than in terms of finding completely new ways of working. This is another reason why end-users should be involved but not in control. If innovation is important to your team, give innovative thinkers—such as a visionary product manager or user experience designer—prominent roles on your team.


When you involve real customers and users:

  • You improve your knowledge of how customers use the software in practice.

  • You have a better understanding of customers’ goals and frustrations.

  • You use customers’ feedback to revise your plans and software.

  • You increase your chances of delivering a truly useful and successful product.

Experiments and Alternatives

Feedback is essential, but direct involvement by real customers isn’t. Sometimes the best software comes from people who have a strong vision and pursue it vigorously. The resulting software tends to be either completely new or a strong rethinking of existing products.

Still, feedback from real customers is always informative, even if you choose to ignore it. This practice is about getting that real-world feedback. The goal is to create software that really meets customer and user needs, not just your team or organization’s imagination of their needs.

As you think of ways to experiment with this practice, focus on communication and feedback. How can you get better insights about how your software is perceived in the real world? How can you decrease the time between having an idea and getting feedback? How can you make better decisions based on feedback? The more information you have, the better decisions your team can make.

Further Reading

XXX Martin Fowler: Whenever I talk to people about product management, I always like to point people to the work of Kathy Sierra. Her "Badass" book is excellent, and there's tons of gold still there in her old Creating Passionate Users blog <>.

XXX Luiza Nunes: Recommend Team Topologies, Specification by Example

Share your feedback about this excerpt on the AoAD2 mailing list! Sign up here.

For more excerpts from the book, or to get a copy of the Early Release, see the Second Edition home page.