Chris Wheeler has a very nice article on using test-driven development to create a report. It's a great example of how TDD can drive design.
When I first started reading this article, I got a little impatient. "Come on, that's not good enough!" I thought. "Nobody's going to want horizontal lines made up of dashes! Let's get to the meat!" I should have known better. Chris did a great job of using small, incremental steps to tackle a very tough TDD problem. Through baby steps and continued testing, he broke the problem into manageable pieces, then used refactoring and principles of good design as he accumulated tests and learned about the problem. An initial, poor solution turned into a great solution.
At the end, Chris waved his hands and said, "And then we manually test it from here." Because he had already factored out and tested much of the design complexity, I was okay with that. It is possible to go one step further, though, and I would have done so.
Chris ended up with a single class that communicated with Windows' printer interface. In the future, he might choose to augment that with a class that communicates with Microsoft Word, or generates PDFs. Or not. It's a good, well-encapsulated design, and that's the point.
The part Chris waved his hands over was the class that communicates with Windows' printer interface. He didn't test it directly. Instead, he created an integration test that uses Windows' "print preview" to allow a manual verification that the report prints properly.
This solution makes me a little uncomfortable because I know manual tests won't be run very frequently. How will future readers of the printing class have confidence that it works?
The one step further I think I would take would be to use something like the NMock framework to test the printing class. Now, these tests wouldn't actually do anything significant. All they could do is assert that certain Windows methods are called in a certain order. The test would look remarkably like the implementation code... it would almost be duplication. So why bother?
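To make the idea concrete, here's a sketch of what such a call-order test might look like. I'm using Python's unittest.mock as a stand-in for NMock, and the printer-API method names (start_doc, draw_text, and so on) are invented for illustration — they aren't the actual calls Chris's class makes:

```python
from unittest import mock

class ReportPrinter:
    """Hypothetical wrapper around a Windows-style printing API."""

    def __init__(self, printer_api):
        self.api = printer_api

    def print_report(self, lines):
        # This is the "implementation-like" sequence the test pins down.
        self.api.start_doc("Report")
        self.api.start_page()
        for line in lines:
            self.api.draw_text(line)
        self.api.end_page()
        self.api.end_doc()

def test_print_report_calls_api_in_order():
    api = mock.Mock()
    ReportPrinter(api).print_report(["Sales Report", "----"])
    # Assert the exact sequence of calls -- the mock-object equivalent
    # of NMock's ordered expectations. Note how closely this mirrors
    # the implementation above; that near-duplication is the point.
    assert api.mock_calls == [
        mock.call.start_doc("Report"),
        mock.call.start_page(),
        mock.call.draw_text("Sales Report"),
        mock.call.draw_text("----"),
        mock.call.end_page(),
        mock.call.end_doc(),
    ]

test_print_report_calls_api_in_order()
```

The test restates the implementation almost line for line, which is exactly the "almost duplication" described above — and exactly why it works as a statement of confidence rather than a design driver.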
The most important reason, for me, would be to communicate to future readers of the code: "We carefully considered how to implement this functionality. We researched it, wrote spike solutions, and tested it manually. We're confident that these methods should be called in this order... so confident, that we've set that confidence in stone by writing this test."
Seeing that test would give me confidence that the people writing the code had done their job properly. If I wondered about a bug, I wouldn't think, "Is this code right?" I would think, "Is this the right way to use the Windows API?" It's a subtle but important difference, because to troubleshoot it, I would go off and write simple spike solutions to explore the Windows API. If I didn't have confidence in the code, I might be fooled into trying to modify the code and test it manually... and that would be a much slower and more error-prone test cycle.
This is a very subtle point I'm trying to make here, and I'm not sure that I've expressed myself clearly. It will have to do, though. In many cases, we use TDD to help us design our code. In this case, I would use TDD to communicate with the programmers who would follow.