AoAD2 Chapter: Quality (introduction)
This is an excerpt from The Art of Agile Development, Second Edition. Visit the Second Edition home page for additional excerpts and more!
This excerpt is copyright 2007, 2021 by James Shore and Shane Warden. Although you are welcome to share this link, do not distribute or republish the content without James Shore’s express written permission.
Quality
For many people, “quality” means “testing,” but Agile teams treat quality differently. Quality isn’t something you test for; it’s something you build in. Not just into your code, but into your entire development system: the way your team approaches its work, the way people think about mistakes, and even the way your organization interacts with your team.
This chapter has three practices to help your team dedicate itself to quality:
The “No Bugs” practice builds quality in.
The “Blind Spot Discovery” practice helps team members learn what they don’t know.
The “Incident Analysis” practice focuses your team on systemic improvements.
Quality Sources
The ideas in No Bugs come from Extreme Programming.
Blind Spot Discovery is a collection of several techniques: Validated Learning, which comes from Eric Ries’s Lean Startup; Exploratory Testing, an approach spearheaded by Cem Kaner, although my description is based on Elisabeth Hendrickson’s work [Hendrickson2013]; Chaos Engineering, which originated with Greg Orzell and his colleagues at Netflix;1 and Penetration Testing and Vulnerability Assessment, which are well-established security techniques.
1I haven’t been able to find a definitive source for the origins of Chaos Engineering. It was formalized by Casey Rosenthal’s “Chaos Team” at Netflix in 2015, but the underlying ideas predate that team by several years. The original tool was Chaos Monkey, which [Dumiak2021] attributes to “Orzell and his Netflix colleagues.” US patent US20120072571A1, applied for in 2010, lists Greg Orzell and Yury Izrailevsky as the inventors.
My approach to Incident Analysis combines material from human factors and system safety research (specifically, Behind Human Error [Woods2010] and The Field Guide to Understanding ‘Human Error’ [Dekker2014]) with my understanding of effective retrospectives and facilitation, which owes a great deal to what I’ve learned from working with Diana Larsen. I learned about the human factors connection to incident analysis from Ward Cunningham, but I believe it stems from the Chaos Engineering community, particularly Nora Jones.