
Forces Affecting Continuous Integration

13 Aug, 2008

Commentary on The Art of Agile Development's Continuous Integration practice.

Ultimately, continuous integration is about a very simple business idea:

Mr. Customer, you can release your software at any time and receive value from it reflecting your investment to date.

paraphrased from Planning Extreme Programming, by Beck and Fowler

From that core idea, we get the two rules of continuous integration:

  1. The code in the version control repository always builds, passes its tests, and is ready to release.
  2. The code in the repository represents all of our work up to a few hours ago.

It's not about integration servers, or automated builds, or any of that stuff. That's all incidental... technique, not essence. The essence is simple. Release the software at any time. Recoup your investment.

With that core idea firmly in mind, let's explore some of the forces that affect continuous integration. I have to warn you: this list is long, aimed at advanced readers, and not particularly well written. It's sort of a rambling brain dump.

Business Priorities

Ideally, the code in the repository is always ready to release. But even though we can make it technically capable of release (nearly bug-free, builds, passes tests), it's not always going to be ready to release from a business perspective.

There are two scales here. First, you have iteration cycles. During the iteration, the team's allowed to have the software in pieces. It still needs to be technically capable of release, but features can be half-done and bugs can be unfixed. Everything is supposed to be wrapped up by the end of the iteration--stories "done done" or removed, bugs fixed, and so forth.

Second, you have release cycles. Although all stories are supposed to be "done done" at the end of each iteration, those stories may not represent complete minimum marketable features (MMFs). So the software may not hang together well, or it may not have the polish necessary to succeed in the market, and so forth. That stuff is supposed to be wrapped up by the end of the release.

Really great teams don't have these problems. Their repository really is ready to release at any time. I think part of the reason is that they tend to be mature teams with a lot of experience... and they have a mature codebase, too, which allows them to add small features rather than big ones. It's a lot easier to release daily when you're improving existing features rather than creating a brand new piece of software from scratch.

That said, it is possible to practice real continuous integration even with a brand-new project. It requires that the continuous integration mindset extend to product planning as well as programming. In the past, I've described it as building a mansion by first building a doghouse... then stre-e-e-tching it out into a lean-to... then stre-e-e-tching it into a shed... then adding an out-house... and so on. That works really well in the Web 2.0 era, but wouldn't work so well for a company like Apple that likes to release polished, seamless obelisks.

Of all of the ways that continuous integration can be compromised, this is the one that I'm most comfortable with. Sometimes, it just doesn't make sense to release a piece of software because it isn't functionally ready. As long as release cycles are kept short and the team keeps producing live demos, I'm okay with that, for the most part.

Number of Concurrent Code Streams

By "concurrent code streams," I mean the number of development efforts you have going on at any given time. For example, if you have a classic XP environment with ten programmers, and they're all working, you'll have five code streams. The programmers will be working in pairs at five different workstations. If you aren't pairing, you'll have ten code streams.

Each of those code streams is, in fact, a short-lived branch of the code. In a distributed version control system, that fact is made explicit. In traditional version control systems, the branches are called "sandboxes," but they're still a sort of branch. When we integrate, what we're actually doing is merging the branch back in with the mainline.

The number of code streams is a force on continuous integration because the more code streams you have, the more likely you are to have non-trivial integrations. (A trivial integration is one where nobody else has checked anything in, hence nothing to integrate with.) In other words, more code streams means more integration effort and more potential for integration problems.

The number of concurrent code streams interacts with...

Frequency of Integration

The more frequently you integrate, the more likely you are to have a trivial integration, which is a good thing. Think of it this way. Imagine you and Bob the Schlub from down the hall are the only programmers. Bob integrates every two weeks and you integrate ten times a day. When you integrate, 99 out of 100 times, Bob won't have checked in and you'll only be integrating with yourself. (Kinky.)

Frequent integration also means there's less code to merge each time, which reduces the likelihood of conflicts and makes your integrations easier. So, all in all, frequent integration is a Good Thing.
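Back of the envelope, the Bob scenario works out like this. (A hypothetical sketch; I'm assuming ten working days in Bob's two weeks.)

```python
# Odds of a trivial integration in the "you vs. Bob" scenario.
# Assumption: "every two weeks" means once per 10 working days.

my_integrations_per_day = 10
bobs_integrations_per_day = 1 / 10  # once every 10 working days

# The fraction of my integrations that land after one of Bob's check-ins:
nontrivial_fraction = bobs_integrations_per_day / my_integrations_per_day
trivial_fraction = 1 - nontrivial_fraction

print(f"trivial integrations: {trivial_fraction:.0%}")  # 99%
```

That's where the "99 out of 100 times" comes from: Bob checks in once for every hundred of your integrations.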

It's not all beer and pork rinds. Frequent integrations interact with the number of code streams and...

Speed of Build

By "speed of build," here, I really mean "the amount of time it takes to build the software and run all of the tests."

A slow build leads to all kinds of evils. One of the things it does is to interact with the number of concurrent code streams to control the frequency of your integrations. It works like this: if you have ten concurrent code streams and people check in every hour, then you will average ten check-ins per hour. That means that your build has to take (60 minutes divided by ten equals) six minutes, max. If your build takes longer than six minutes, then your integrations will pile up.

In actuality, people don't check in perfectly evenly like that; integrations arrive irregularly, which makes this an application of queueing theory. (I can't check the details right now, but if I remember correctly, we need about 30% slack for the queue to flow smoothly, assuming frequent, small check-ins.) So in this scenario, our maximum build time is actually closer to four minutes.

You can see how this interacts with all kinds of XP practices. Pair programming cuts the number of concurrent code streams in half, for free. The ten-minute build keeps builds small. XP teams tend to integrate every two to four hours, and XP recommends teams that are no larger than 10-12 programmers.

It works out perfectly: 12 programmers pairing and integrating every two hours is six concurrent code streams. Do the math (120 min / 6 streams * 70% utilization) and you get a maximum build length of 14 minutes. Cool.
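All three of those numbers fall out of one little formula. This is a sketch of my back-of-the-envelope model, not rigorous queueing theory; the 70% utilization figure is the rough slack assumption from above.

```python
def max_build_minutes(integration_period_min, code_streams, utilization=0.7):
    """Longest build that keeps the integration queue flowing.

    Each code stream integrates once per period, so builds must fit into
    (period / streams) minutes; the utilization factor leaves queueing slack.
    """
    return integration_period_min / code_streams * utilization

# Ten streams checking in hourly, no slack: six minutes max.
print(round(max_build_minutes(60, 10, utilization=1.0), 1))  # 6.0
# The same scenario with ~30% slack: closer to four minutes.
print(round(max_build_minutes(60, 10), 1))                   # 4.2
# Twelve programmers pairing (six streams), integrating every two hours.
print(round(max_build_minutes(120, 6), 1))                   # 14.0
```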

Most teams I work with have very slow builds (it's because of their tests), so they compensate by using...

Multi-Stage Builds

A multi-stage build was one of the original workarounds presented for slow builds. (Continuous integration servers were another.) It breaks my heart, because rather than fixing their build problems, teams invest in these workarounds. Sigh... "easy" wins out over "good" once again.

That's not to say a multi-stage build is always bad. Like many easy solutions, though, it's over-used. The idea is quite simple: if your tests are slow, split your build into two parts: the "fast" build which runs whenever you integrate, and the "slow" build which runs daily, weekly, or manually just prior to release.

Sometimes the slow build is a manual regression test carried out by armies of disgruntled testers. (You'd be disgruntled, too, if you had to do manual script-based regression testing every day.) This takes days or even weeks. One company I worked with took days just to set up the test environment.

A multi-stage build provides fast feedback, but it isn't actually continuous integration. Remember the core idea: we want to be able to release software that reflects our investment. The software isn't ready to release until it's passed all of the tests in the slow build, so from a CI perspective, the frequency of integration is actually determined by the frequency of the least frequent build.

If you can't release until you've had an army of disgruntled testers run through an array of manual test scripts, then you aren't practicing continuous integration. You're integrating frequently, sure, but you aren't verifying that the integration succeeded. You're only doing half of it.
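The fast/slow split, and the way the slow stage still gates releasability, can be sketched like this. (The suite paths and the pytest command are assumptions--substitute whatever your tools use.)

```python
import subprocess

# Hypothetical two-stage build. "fast" runs on every integration;
# "slow" runs nightly or just prior to release.
STAGES = {
    "fast": ["pytest", "tests/unit"],        # seconds to minutes
    "slow": ["pytest", "tests/acceptance"],  # minutes to hours
}

def run_stage(name):
    """Run one build stage; True means the stage passed."""
    return subprocess.call(STAGES[name]) == 0

def releasable(results):
    """Ready to release only when *every* stage has passed."""
    return all(results.get(name) for name in STAGES)
```

Note that `releasable` demands the slow stage too: passing the fast build alone tells you nothing about whether the integration actually succeeded.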

Slow builds force another common choice...

Synchronous vs. Asynchronous Integration

Even if a team works around their slow builds by using multi-stage builds, their "fast" build is often quite slow--thirty minutes is common. This build is run after every check-in, which means that the team has to use asynchronous integration.

I talked about synchronous and asynchronous integration in the book, so I'll just summarize: asynchronous integration delays build feedback until after the person or pair has started another task. As a result, build failures force a context switch and interrupt flow. In practice, team members tend to delay fixing these build failures.

Asynchronous integration requires an automated CI tool, and it so happens that most of these tools do not require that the build pass before code is checked into the mainline. As a result, build errors propagate to the entire team and lead to delays and cascading failures.

In a team practicing CI well, the team's build should never break. An individual person might have trouble checking in, but those build failures should never affect the rest of the team. Never! (Well, once every month or two, maybe, due to cross-machine incompatibilities.) It's trivially easy to do with synchronous integration, and much more difficult with asynchronous integration.
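Here's why synchronous integration makes a clean mainline trivially easy: the build runs, and you wait for it, *before* anything reaches the mainline. A minimal sketch, assuming git and a `./build.sh` wrapper (both placeholders for whatever your project uses):

```python
import subprocess

def run(cmd):
    """Run a command; True means it exited successfully."""
    return subprocess.call(cmd) == 0

def integrate(execute=run):
    """Synchronous integration: mainline only changes if the build passes."""
    if not execute(["git", "pull", "--rebase"]):
        return "merge conflict -- fix it in your sandbox"
    if not execute(["./build.sh"]):
        return "build failed -- the mainline never saw it"
    if not execute(["git", "push"]):
        return "mainline moved -- pull and try again"
    return "integrated"
```

A failure at any step stays in your sandbox; the rest of the team never sees a broken build.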

Number of Collectively-Owned Codebases

Up until now, when I've been talking about a "team," I've been referring to a group of programmers that are working on the same codebase and using Collective Code Ownership.

There's another option, of course, and that's code ownership models. On agile teams, these are most prevalent when you have large groups working on a single product. For example, you might have five teams collaborating on a single codebase, with each team responsible for a particular part of the code. This is a common approach, but it's not required. Some groups choose to use collective ownership, even with larger groups.

In theory, this sort of code ownership model has clearly defined responsibilities and interfaces, allowing each team to work in their own area without worrying about integrating with the rest of the product. The reality is a bit less genteel. APIs never behave quite as expected, so continuous integration is still a good idea.

Continuous integration in this environment runs into the problems that occur with large numbers of concurrent code streams. However, it's possible to give each team its own branch for local integrations. The branches are integrated with the mainline on a regular basis--every two hours, for example.

By doing this--and assuming that code ownership prevents many of one team's changes from affecting the others--each team acts as a single code stream. That keeps the number of concurrent changes from overwhelming the integration queue.

Code ownership has its downside. It requires that modules, integration points, and APIs be clearly defined, which requires a level of up-front design. Teams will be organized around the design, which will make it very difficult to change... and some of that design will, inevitably, be wrong. This calcification of design is why some organizations prefer to have collectively-owned code, even with large teams.

End Transmission

So... some of the forces affecting continuous integration:

Easier                                    Harder
mature feature set                        new product
small number of concurrent code streams   large number of concurrent code streams
frequent integration                      infrequent integration
fast build                                slow build
synchronous integration                   asynchronous integration
one team                                  many teams

And, as I've discussed, each of these forces affects the others in various ways.

In the end, though, continuous integration is about one simple idea: we should be able to release our software at any time and get value proportional to our investment. That's the forest. Don't let this meander through the trees (or the shiny baubles that are CI tools) distract you from that underlying goal.