Art of Agile Development in Korean

Book cover for the Korean translation of “The Art of Agile Development, Second Edition” by James Shore. The title reads, “[국내도서] 애자일 개발의 기술 2/e” (Korean for “The Art of Agile Development 2/e”; the bracketed prefix is a retailer’s “domestic book” label). It’s translated by 김모세 and published by O’Reilly. Other than the translated text, the cover is the same as the English edition, showing a water glass containing a goldfish and a small sapling with green leaves.

I’m pleased to announce that the Korean translation of The Art of Agile Development is now available! You can buy it here.

Many thanks to 김모세 for their hard work on this translation.

Art of Agile Development in India and Africa (English)

Book cover for the Indian edition of “The Art of Agile Development, Second Edition” by James Shore. It’s the same as the normal edition, showing a water glass containing a goldfish and a small sapling with green leaves, except that the publisher is listed as SPD as well as O’Reilly. There’s also a black badge labelled “Greyscale Edition” that reads, “For Sale in the Indian Subcontinent and Selected Countries Only (refer back cover).”

I’m pleased to announce that there’s a special edition of The Art of Agile Development available in the Indian subcontinent and Africa! (It’s in English.) You can buy it here.

Many thanks to Shroff Publishers & Distributors Pvt. Ltd. (SPD) for making this edition available.

AI Chronicles #7: Configurable Client

In this weekly livestream series, Ted M. Young and I build an AI-powered role-playing game using React, Spring Boot, and Nullables. And, of course, plenty of discussion about design, architecture, and effective programming practices.

Watch us live every Monday! For details, see the event page. For more episode recordings, see the episode archive.

In this episode...

We turn to parsing the response returned by the “say” API, which will contain the OpenAI response to a message. To do that, we add the ability to configure the nullable HttpClient so it returns predetermined responses in our tests. We discover that the HTTP library’s Response object provides a default Content-Type, which we don’t want in our tests, and deal with the window vs. globalThis implementations of fetch().
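
For readers who haven’t seen the pattern, here’s a minimal sketch of what a configurable nullable HttpClient can look like. The names and shapes (createNull(), the endpoint-to-response map, getAsync()) are illustrative assumptions, not the episode’s actual code:

```typescript
// Hypothetical sketch of a configurable nullable HTTP client.
// Names and shapes are illustrative, not the episode's actual code.

interface ConfiguredResponse {
  status?: number;
  headers?: Record<string, string>;
  body?: string;
}

type EndpointConfig = Record<string, ConfiguredResponse>;

class HttpClient {
  // Production instances use the real fetch()...
  static create(): HttpClient {
    return new HttpClient(globalThis.fetch.bind(globalThis));
  }

  // ...while "nulled" instances use a stub that returns configured
  // responses instead of touching the network.
  static createNull(endpoints: EndpointConfig = {}): HttpClient {
    return new HttpClient(stubbedFetch(endpoints));
  }

  private constructor(private readonly _fetch: typeof fetch) {}

  async getAsync(url: string): Promise<{ status: number; body: string }> {
    const response = await this._fetch(url);
    return { status: response.status, body: await response.text() };
  }
}

function stubbedFetch(endpoints: EndpointConfig): typeof fetch {
  return async (url) => {
    const configured = endpoints[String(url)] ?? {};
    // Beware: new Response() supplies a default "text/plain" content type
    // when none is given, so the stub sets one explicitly.
    return new Response(configured.body ?? "", {
      status: configured.status ?? 200,
      headers: configured.headers ?? { "content-type": "application/json" },
    });
  };
}

// In a test:
// const client = HttpClient.createNull({
//   "https://example.com/say": { body: `{"response":"Hello"}` },
// });
```

The key property is that createNull() swaps out only the fetch() call at the very edge of the system; everything between the test and that edge is the same code that runs in production.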

After we get everything working, we add types to make the TypeScript type checker happy. With that done, we're ready for the next episode, where we'll return to the Spring Boot back-end and implement the “say” API endpoint.

Contents

  • Ted spoke at the Kansas City Developer Conference (0:16)
  • Lack of Java-focused conferences in the USA (1:19)
  • Conferences in the USA vs. Europe/rest of world (2:10)
  • Trying to aim talks at the right level for the audience (4:38)
  • Ted's research to prepare for the AssertJ talk (6:05)
  • AssertJ assertions for Joda Money (7:23)
  • Joda Money project vs. JSR-354 Money & Currency (8:45)
  • Programming language cultures (10:34)
  • Checked exceptions and API design (12:20)
  • Language convention vs. enforced rules (14:02)
  • Why we wrap third-party libraries and objects (17:49)
  • Primitive Obsession (20:16)
  • Exploring and learning your tools (22:51)
  • Learning design from Martin Fowler's "Refactoring" book (23:25)
  • Small changes, small steps (24:39)
  • Loss of awareness of design? (25:32)
  • Reading books in a group (29:12)
  • Refactorings and their trade-offs (30:29)
  • James talks about CTO vs. VP Engineering (31:56)
  • Reviewing where we left off in the code (34:30)
  • Sidebar: forgetting what you were doing in a project (39:27)
  • Planning and doing one thing at a time (41:01)
  • Context-switching in a heavy pull-request environment (41:48)
  • Feedback loops and eXtreme Programming (42:55)
  • Testing parsing of responses in the BackEndClient (45:43)
  • Sidebar: who holds the state? (49:06)
  • Configuring the answer for the BackEndClient (50:05)
  • Test-driving HttpClient's default response (54:45)
  • Where is that text/plain content type coming from? (1:05:18)
  • Sidebar: differencing in test output, and coding in VB and QB (1:06:10)
  • Should our stubbed fetch() return content length? (1:09:06)
  • Configuring HttpClient's response for an endpoint (1:10:48)
  • Discovered need to specify full endpoint URL, not just path (1:14:58)
  • Test failed as expected, on to implementation (1:17:37)
  • Who has fetch()? Window vs. Global vs. globalThis (1:21:26)
  • Using optional chaining and nullish coalescing (1:29:23)
  • Troubleshooting "headers.entries" (1:30:09)
  • Specifying content-type in configured response (1:36:30)
  • Generalize to allow partially configured response (1:42:10)
  • Sidebar on readability of "advanced" syntax in code (1:48:40)
  • Allowing multiple endpoints to be configured (1:52:13)
  • Avoiding real-world values in configuration tests (1:58:42)
  • Spiking some attempts at improving code (1:59:20)
  • Adding types to make TypeScript type checker happy (2:05:10)
  • Defining own type often easier than reusing library types (2:14:24)
  • Back to the BackEndClient failing test (2:16:12)
  • Refactor test code now that it passes (2:20:36)
  • Reviewing the test refactor (2:30:45)
  • BackEndClient is done: updated the plan and integrated (2:31:32)
  • Next time we'll start with the Spring Boot back-end endpoint (2:33:10)
  • Review our work (2:34:05)
  • Downside of sociable vs. isolated tests with mocks (2:35:02)
  • The Rubber Chicken (2:38:18)

Source code

Visit the episode archive for more.

AI Chronicles #6: Output Tracker

In this weekly livestream series, Ted M. Young and I build an AI-powered role-playing game using React, Spring Boot, and Nullables. And, of course, plenty of discussion about design, architecture, and effective programming practices.

Watch us live every Monday! For details, see the event page. For more episode recordings, see the episode archive.

In this episode...

We continue working on our front end. After some conversation about working in small steps, we turn our attention to BackEndClient, our front-end wrapper for communicating with the back-end server. We start out by writing a test to define the back-end API, then modify our HttpClient wrapper to track requests. By the end of the episode, we have the back-end requests tested and working.
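
The request tracking follows the “Output Tracker” pattern: the wrapper emits an event for every output it produces, and tests collect those events through a tracker object instead of mocking the wrapper. A minimal sketch, with illustrative names rather than the episode’s exact code:

```typescript
// Minimal sketch of the Output Tracker pattern. Names are illustrative.
import { EventEmitter } from "node:events";

interface RequestData {
  url: string;
  method: string;
  body: string;
}

class OutputTracker<T> {
  private readonly _data: T[] = [];

  constructor(emitter: EventEmitter, event: string) {
    emitter.on(event, (datum: T) => this._data.push(datum));
  }

  get data(): T[] {
    return this._data;
  }
}

class HttpClient {
  private readonly _emitter = new EventEmitter();

  // Tests call this to observe requests without replacing the client.
  trackRequests(): OutputTracker<RequestData> {
    return new OutputTracker(this._emitter, "request");
  }

  async postAsync(url: string, body: string): Promise<void> {
    this._emitter.emit("request", { url, method: "POST", body });
    // ...the real fetch() call goes here (skipped when nulled)...
  }
}

// In a test:
// const client = new HttpClient();
// const requests = client.trackRequests();
// await client.postAsync("/say", "hello");
// assert.deepEqual(requests.data,
//   [{ url: "/say", method: "POST", body: "hello" }]);
```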

Contents

  • Program Note (0:12)
  • Multi-Step Refactorings (2:26)
  • Work in Small Steps (6:31)
  • Evaluating Complexity (11:11)
  • Collaborative Development (19:04)
  • Continuous Improvement (25:24)
  • Fixing the Typechecker (28:02)
  • Today’s Plan (31:11)
  • James Shore’s Housecleaning Tips (33:03)
  • Build the BackEndClient (35:59)
    • Sidebar: Delaying Good Code (38:48)
    • End sidebar (41:41)
  • Make HttpClient Nullable (51:02)
    • Sidebar: Tuple (58:48)
    • End sidebar (1:00:29)
  • Stubbing the fetch() Response (1:09:24)
    • Sidebar: In-Browser Testing (1:26:37)
  • Build OutputListener (1:28:00)
  • Request Tracking (1:55:04)
    • Sidebar: Lint Error (2:12:54)
    • End sidebar (2:16:50)
  • Back to the BackEndClient (2:33:51)
  • Debrief (2:44:23)

Source code

Visit the episode archive for more.

AI Chronicles #5: fetch() Wraps

In this weekly livestream series, Ted M. Young and I build an AI-powered role-playing game using React, Spring Boot, and Nullables. And, of course, plenty of discussion about design, architecture, and effective programming practices.

Watch us live every Monday! For details, see the event page. For more episode recordings, see the episode archive.

In this episode...

It’s an eventful episode as we start off with a discussion of event sourcing, event-driven code, event storming, and more. Then we return to working on our fetch() wrapper. We factor our test-based prototype into a real production class, clean up the tests, and add TypeScript types.

Contents

  • Event Sourcing (0:22)
  • Event-Driven Code (11:11)
  • Event Storming (23:30)
  • The Original Sin of Software Scaling (27:23)
  • Refactoring Events (28:45)
  • Naming Conventions (32:47)
  • Java 21 (42:08)
  • Inappropriate Abstractions (44:25)
  • Let’s Do Some Coding (53:01)
  • Design the fetch() Wrapper (56:34)
  • Factor Out HttpClient (1:17:41)
  • Add TypeScript Types (1:23:40)
  • Node/TypeScript Incompatibility (1:39:52)
  • Clean Up the Tests (1:58:36)
  • Close SpyServer with Extreme Prejudice (2:05:06)
  • Back to Cleaning Up Tests (2:11:08)
  • Debrief (2:32:22)

Source code

Visit the episode archive for more.

AI Chronicles #4: fetch() Quest

In this weekly livestream series, Ted M. Young and I build an AI-powered role-playing game using React, Spring Boot, and Nullables. And, of course, plenty of discussion about design, architecture, and effective programming practices.

Watch us live every Monday! For details, see the event page. For more episode recordings, see the episode archive.

In this episode...

It’s an “all rants, all the time” episode—at least for the first hour. We start out talking about the role of engineering leadership in a company. Then it’s a discussion of evolutionary design and the costs of change. Then teaching through pairing. Finally, we buckle down to work and make solid progress on a prototype integration test for the front-end fetch() wrapper.
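
That prototype test drives fetch() against a “spy server”: a real HTTP server on a random port that records what it receives, so a narrow integration test can verify what was actually sent. Here’s a minimal sketch of the idea (the names are illustrative, not the stream’s exact code); note the forced-close step, which relates to the “server isn’t closing” problem in the contents below:

```typescript
// Hypothetical sketch of a spy server for narrow integration tests.
import * as http from "node:http";
import { AddressInfo } from "node:net";

class SpyServer {
  private _server?: http.Server;
  lastRequest?: { method?: string; url?: string; body: string };

  async startAsync(): Promise<number> {
    this._server = http.createServer((req, res) => {
      let body = "";
      req.on("data", (chunk) => (body += chunk));
      req.on("end", () => {
        this.lastRequest = { method: req.method, url: req.url, body };
        res.end("spy server response");
      });
    });
    // Port 0 asks the OS for any free port.
    await new Promise<void>((resolve) => this._server!.listen(0, () => resolve()));
    return (this._server!.address() as AddressInfo).port;
  }

  async stopAsync(): Promise<void> {
    // fetch() holds keep-alive connections open, which stalls close();
    // closeAllConnections() (Node 18.2+) forces them shut.
    this._server!.closeAllConnections();
    await new Promise<void>((resolve, reject) =>
      this._server!.close((err) => (err ? reject(err) : resolve())),
    );
  }
}

// In a test:
// const server = new SpyServer();
// const port = await server.startAsync();
// await fetch(`http://localhost:${port}/say`, { method: "POST", body: "hi" });
// assert.equal(server.lastRequest?.body, "hi");
// await server.stopAsync();
```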

Contents

  • Engineering Leadership (0:13)
  • Where We Left Off (23:53)
  • What We’re Going To Do Today (28:05)
  • Evolutionary Design (37:42)
  • Teaching Through Pairing (53:16)
  • WTF: Wholesome Test Framework (57:36)
  • Create a Spy Server (1:01:48)
  • fetch() (1:16:25)
  • The Server Isn’t Closing (1:26:06)
  • Look At the fetch() Response (1:40:29)
  • Factor Out the Server (1:49:41)
  • Compilation Error (1:58:08)
  • SpyServer.lastRequest (2:16:23)
  • SpyServer.setResponse() (2:30:34)
  • Prepare for Production (2:41:29)
  • Debrief (2:47:50)

Source code

Visit the episode archive for more.

Last Chance to Sign Up for “Testing Without Mocks” Training

If you’re interested in my Nullables testing technique, this is your last chance to sign up for my “Testing Without Mocks” course. Ticket sales close this Thursday morning at midnight GMT and I don’t plan to offer it again until October at the earliest.

Learn more and sign up here.

AI Chronicles #3: Fail Faster

In this weekly livestream series, Ted M. Young and I build an AI-powered role-playing game using React, Spring Boot, and Nullables. And, of course, plenty of discussion about design, architecture, and effective programming practices.

Watch us live every Monday! For details, see the event page. For more episode recordings, see the episode archive.

In this episode...

In a coding-heavy episode, we wrap up our OpenAiClient wrapper. Along the way, a confusing test failure inspires us to make our code fail faster. Then, in the final half hour of the show, we implement a placeholder front-end web site and make plans to connect it to the back end.
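
“Failing faster” here means replacing a confusing downstream failure, such as a TypeError several calls away from the real problem, with a guard clause that reports the problem at its source. A hypothetical sketch of the idea, using the general shape of an OpenAI chat response (shown in TypeScript for consistency with the other sketches; the actual OpenAiClient is part of the Spring Boot back end):

```typescript
// Hypothetical sketch of "fail fast" via a guard clause.
interface OpenAiResponseBody {
  choices?: Array<{ message: { content: string } }>;
}

function parseResponse(json: string): string {
  const body = JSON.parse(json) as OpenAiResponseBody;
  // Guard clause: fail here, with a clear message, rather than letting
  // 'undefined' propagate into unrelated code and fail confusingly later.
  if (body.choices === undefined || body.choices.length === 0) {
    throw new Error(`OpenAI response had no 'choices' field: ${json}`);
  }
  return body.choices[0].message.content;
}
```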

Contents

  • Intrinsic vs. Extrinsic Motivation (0:14)
  • Coaching Teams (14:26)
  • OpenAiClient Recap (20:36)
  • Failing OpenAiClient Test (27:54)
  • Parse the OpenAI Response (38:55)
  • OpenAiResponseBody DTO (54:37)
  • OpenAI Documentation (1:04:43)
  • Self-Documenting OpenAI (1:10:27)
  • Back to Parsing the OpenAI Response (1:13:48)
  • A Confusing Test Failure (1:21:07)
  • Fail Faster (1:26:47)
  • Guard Clauses (1:54:38)
  • Manually Test OpenAiClient (2:00:47)
  • The DTO Testing Gap (2:14:35)
  • Our Next Story (2:19:43)
  • Front-End Walkthrough (2:25:47)
  • Placeholder Front-End (2:28:01)
  • Integrate (2:41:31)
  • Debrief (2:45:29)

Source code

Visit the episode archive for more.

AI Chronicles #2: Faster Builds

In this weekly livestream series, Ted M. Young and I build an AI-powered role-playing game using React, Spring Boot, and Nullables. And, of course, plenty of discussion about design, architecture, and effective programming practices.

Watch us live every Monday! For details, see the event page. For more episode recordings, see the episode archive.

In this episode...

It’s a two-fer! In the first half, we look at the work James did on speeding up the front-end build, including a questionable choice to use a custom test framework. We debug a problem with the incremental build and end up with a nice, speedy build.

In the second half, we continue working on the OpenAiClient wrapper. The code POSTs to the OpenAI service, but it doesn’t parse the responses. In order to implement that parsing, we modify JsonHttpClient to return POST responses and add the ability to configure those responses in our tests.

Contents

  • A Confession (0:11)
  • Buy vs. Build (12:42)
  • Incremental Compilation (24:47)
    • Sidebar: Why We Write Tests (45:26)
    • Sidebar: What Pairing is Good For (48:37)
    • End sidebar (51:08)
  • Clean Up the Build (54:10)
  • Failing the Build (58:32)
    • Sidebar: Using Booleans (1:05:50)
    • End sidebar (1:07:53)
  • Compilation Helper (1:22:08)
  • Bespoke Tooling (1:26:14)
  • Back to OpenAiClient (1:29:20)
    • Sidebar: TDD Reduces Stress (1:39:27)
    • End sidebar (1:40:23)
  • Reformatting and Merge Conflict (1:42:59)
  • Configuring the POST Response (1:52:28)
  • Refactor the ExampleDto (1:54:44)
  • Return Configured Values (2:02:35)
    • Sidebar: Repeating Yourself (2:03:27)
    • End sidebar (2:09:57)
  • Fine-Tuning the JsonHttpClient Tests (2:17:57)
    • Sidebar: Test Everything, or Just Enough? (2:24:56)
    • End sidebar (2:27:01)
    • Sidebar: When to Stop Pondering Design (2:33:31)
    • End sidebar (2:35:10)
  • Spring Complaints (2:36:03)
  • Frameworks vs. Libraries (2:40:50)
  • Microservices and Team Size (2:47:57)
  • Debrief (2:50:40)

Source code

Visit the episode archive for more.

The AI Chronicles #1

In this weekly livestream series, Ted M. Young and I build an AI-powered role-playing game using React, Spring Boot, and Nullables. And, of course, plenty of discussion about design, architecture, and effective programming practices.

Watch us live every Monday! For details, see the event page. For more episode recordings, see the episode archive.

In this episode...

Our new stream! We explain the goals of the project—to create an AI-powered role-playing game—then get to work. Our first task is to create a Nullable wrapper for the OpenAI service. The work goes smoothly, and by the end of the episode, we have an OpenAiClient that sends POST requests to the service.
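
For context, here’s an illustrative sketch of the kind of POST involved (shown in TypeScript for consistency with the other sketches; the actual OpenAiClient lives in the Spring Boot back end). The endpoint is OpenAI’s standard chat completions API; the model name is a placeholder, and OPENAI_API_KEY must be supplied:

```typescript
// Illustrative sketch of a manual POST to the OpenAI service.
const response = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: "gpt-3.5-turbo", // placeholder model name
    messages: [{ role: "user", content: "Greet the adventurer." }],
  }),
});
console.log(await response.json());
```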

Contents

  • About the Project (0:14)
  • What We’re Building (2:03)
  • Outside-In vs. Bottom-Up Design (14:17)
  • Structure of the Code (41:08)
  • Fake It Once You Make It (44:41)
  • Manual POST to OpenAI (47:47)
  • Start the OpenAiClient (1:01:24)
  • Import HttpClient (1:08:32)
  • Back to the OpenAiClient (1:21:19)
    • Sidebar: Configuration vs. Constants (1:23:04)
    • End sidebar (1:24:59)
  • Support HTTP headers (1:27:44)
    • Sidebar: Why Wrappers and Nullables? (1:43:32)
    • End sidebar (1:50:38)
    • Sidebar: Documenting APIs (2:03:50)
    • Sidebar: LLMs and Documentation (2:10:39)
    • End sidebar (2:13:07)
  • Tracking HTTP headers (2:17:02)
  • Finish the OpenAiClient POST (2:27:30)
    • Sidebar: New Hotness Syndrome (2:38:33)
    • Sidebar: TypeScript (2:41:07)
    • End sidebar (2:43:46)
  • Conclusion (2:46:44)

Source code

Visit the episode archive for more.

How Are Nullables Different From Mocks?

One of the most common questions I get about Nullables is, “How is that any different than a mock?” The short answer is that Nullables result in sociable, state-based tests, and mocks (and spies) result in solitary, interaction-based tests. This has two major benefits:

  1. Nullables catch bugs that mocks don’t.
  2. Nullables don’t break when you refactor.

Let’s dig deeper.

  1. Why They’re Different
  2. Nullables Catch More Bugs
  3. Nullables Don’t Break When You Refactor
  4. Conclusion

Why They’re Different

Imagine you have a class named HomePageController. It has a dependency, Rot13Client, that it uses to make calls to an external service. Rot13Client in turn depends on HttpClient to make the actual HTTP call to the service.

A class diagram for the example. HomePageController has an arrow pointing to Rot13Client, which has an arrow pointing to HttpClient. HttpClient has a jagged arrow pointing to Rot13Service.

Example design

A mock-based test of HomePageController will inject MockRot13Client in place of the real Rot13Client. It validates HomePageController by checking that the correct methods were called on the MockRot13Client.

The example design has been expanded with a test class pointing at HomePageController. The connection to Rot13Client has been x’d out and replaced with a connection to MockRot13Client. Rot13Client and all its dependencies are greyed out.

A mock-based test

This mock-based test is a “solitary, interaction-based test.” It’s solitary because the HomePageController is isolated from its real dependencies, and it’s interaction-based because the test checks how HomePageController interacts with its dependencies.
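
In code, a solitary, interaction-based test looks something like this sketch (the class shapes and method names are illustrative assumptions, not quoted from any particular codebase):

```typescript
// Sketch of a solitary, interaction-based test. The names are illustrative.
import assert from "node:assert";

interface Rot13Client {
  transformAsync(text: string): Promise<string>;
}

class HomePageController {
  constructor(private readonly _rot13Client: Rot13Client) {}

  async postAsync(text: string): Promise<string> {
    return await this._rot13Client.transformAsync(text);
  }
}

// Hand-rolled mock: records calls and returns a canned value.
const calls: string[] = [];
const mockRot13Client: Rot13Client = {
  async transformAsync(text) {
    calls.push(text);
    return "uryyb";
  },
};

const controller = new HomePageController(mockRot13Client);
await controller.postAsync("hello");

// Interaction-based assertion: did the controller call the right
// method with the right argument?
assert.deepEqual(calls, ["hello"]);
```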

In contrast, a Nullable-based test of HomePageController will inject a real Rot13Client. The Rot13Client will be “nulled”—it’s configured not to talk to external systems—but other than that, it’s the exact same code that runs in production. The test validates HomePageController by checking its state and return values.

The example design has been expanded with a test class pointing at HomePageController. There is no mock class; instead, HomePageController depends on Rot13Client, which depends on HttpClient. Each of these connections is marked “nulled.” The jagged connection between HttpClient and Rot13Service has been x’d out. Rot13Service is greyed out.

A Nullable-based test

This is a “sociable, state-based test.” It’s sociable because the HomePageController talks to its real dependencies, and they talk to their real dependencies, and so on, all the way to the edge of the system. It’s state-based because the test checks HomePageController’s state and return values, not its interactions.
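
The same scenario as a sociable, state-based test might look like the following sketch, reusing the shapes from the sketch above and adding a conventional createNull() factory (again, the specific names are illustrative):

```typescript
// Sketch of a sociable, state-based test. The real Rot13Client runs;
// only its innermost network access is stubbed out.
import assert from "node:assert";

class Rot13Client {
  // A production factory would wire in a real HTTP client instead.
  // createNull() wires in a stub at the system's edge.
  static createNull(response = "nulled response"): Rot13Client {
    return new Rot13Client(async () => response);
  }

  private constructor(
    private readonly _httpPost: (text: string) => Promise<string>,
  ) {}

  async transformAsync(text: string): Promise<string> {
    // Exactly the same code path as production, minus the network.
    return await this._httpPost(text);
  }
}

class HomePageController {
  constructor(private readonly _rot13Client: Rot13Client) {}

  async postAsync(text: string): Promise<string> {
    return await this._rot13Client.transformAsync(text);
  }
}

// Sociable: the controller talks to the real Rot13Client.
const controller = new HomePageController(Rot13Client.createNull("uryyb"));

// State-based: assert on the return value, not on interactions.
assert.equal(await controller.postAsync("hello"), "uryyb");
```

Because the assertion names only the result, not the methods used to produce it, this test keeps passing when Rot13Client’s API is refactored, a point the “Nullables Don’t Break When You Refactor” section below returns to.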

Nullables Catch More Bugs

Bugs tend to live at the boundaries. Imagine that someone intentionally changes the behavior of Rot13Client, not realizing that HomePageController relies on the old behavior. Now HomePageController doesn’t work properly. A well-meaning change to Rot13Client has introduced a bug in HomePageController.

Solitary tests, such as mock-based tests, can’t catch that bug. HomePageController’s tests don’t run the real Rot13Client, so they don’t see that its behavior has changed. The tests continue to pass, even though the code has a bug.

The “mock-based test” diagram has been annotated. It says, “A change here (Rot13Client) has an unexpected side effect here (HomePageController) and the mock (MockRot13Client) hides it. (Crying face emoji.)”

How mocks hide bugs

Sociable tests, including Nullable-based tests, do catch that bug. That’s because HomePageController’s tests run the real Rot13Client. When its behavior changes, so do the test results. The tests fail, revealing the bug.

The “Nullables-based test” diagram has been annotated. It says, “A change here (Rot13Client) has an unexpected side effect here (HomePageController) and it’s caught here (the test). (Celebration emoji.)”

How Nullables reveal bugs

Nullables Don’t Break When You Refactor

Imagine that you need to change the Rot13Client API to support cancelling requests. You change its API, and when you do, you also update HomePageController to use the new API.

Interaction-based tests, such as mock-based tests, will break when you make this change. They’re expecting HomePageController to call the old API, and now it calls the new API.¹

¹ Automated refactoring tools can prevent this problem, but not in every case.

The “mock-based test” diagram has been annotated. It says, “A design change here (Rot13Client) causes a failure here (the test) until the change is duplicated here (MockRot13Client).”

How mocks prevent refactoring

State-based tests, in contrast, won’t break when you refactor a dependency. The test checks the output of the HomePageController, not the methods it calls. As long as the code continues to return the correct value, the test will continue to pass.

The “Nullable-based test” diagram has been annotated to show that a design change to Rot13Client doesn’t cause a test failure, because the test checks HomePageController’s return values rather than its method calls.

How Nullables support refactoring

Conclusion

Although Nullables and mocks seem similar at first glance, they take opposite approaches to testing. Nullables are sociable and state-based; mocks are solitary and interaction-based. This allows Nullable-based tests to catch more bugs and support more refactorings.

“Testing Without Mocks” Training

To be notified about future “Testing Without Mocks” training courses, join the mailing list here (requires Google login).

For private training, contact me directly.

Nullables Livestream #19: For the Win

In this weekly livestream series, I pair up with Ted M. Young, aka jitterted, to look at Nullables and A-Frame Architecture as an alternative to Mocks, Spies, and Hexagonal Architecture. Each episode combines a healthy dose of architecture and design discussion with hands-on coding.

In this episode...

The final episode of the season! We finish converting the Yacht code from mocks, stubs, and fakes to Nullables. Doing so reveals a bug in our database code: we had missed a test of our deserialization logic, but a VueController test triggered the same bug anyway. It was only caught because we used Nullables. Sociable tests for the win!

In addition to finishing up our work on Yacht, we also have our usual conversations about software development and design. This week, we discuss estimation, testing styles, deadlines, and more.

And that ends the season. We’ll be back in two weeks with a brand new codebase and an interesting new problem.

Visit the episode archive for more.

What I Learned From the First “Testing Without Mocks” Course

Earlier this month, I hosted my “Testing Without Mocks” course for the first time. It’s about a novel way of testing code. I’ve delivered part of this course at conferences before, but this was the first time I had delivered it online, and I added a ton of new material. At the risk of navel-gazing, this is what I learned.

tl;dr? Skip to the improvements.

The Numbers

Attendees: 17 people from 13 organizations. (One organization sent four people; another sent two people.)

Evaluations: 10 out of 17 (59%) people filled out the evaluation form. That’s a much lower ratio than my in-person courses, but unsurprising for an online course.

Net Promoter Score: 60. That’s an excellent score, and in line with my typical ratings.

“Net Promoter Score” bar chart. The question reads, “How likely are you to recommend this workshop to a friend or colleague?” There are five “10” responses, one “9” response, two “8” responses, and two “7” responses.

Attendees had the choice of working solo, in a pair, or in a team. Going into the course:

  • Solo: 1 person
  • Pairing: 10 people in 5 pairs; 6 people paired with colleagues they already knew
  • Team: 6 people in two 3-person teams

During the course, two people chose to switch to solo work.

Qualitative Feedback

Feedback form. There are two free-form questions. The first reads, “How could we make it a 10? (If you rated it a 10, what would make it even better?)” The second reads, “What should we be sure to keep for next time?”

Seven people (70%) commented that we should keep the exercise-focused format of the course. It was a free-form question, so this is an unusually high degree of alignment. Two people (20%) specifically called out the structure of the code as a positive, and two others liked the clear explanations.

The most common request was for more time. Three people (30%) said the course would be improved by having more time. One person suggested having a longer break between sessions, and another would have liked it to be a better fit for their time zone.

There were several suggestions about programming languages. Two people wanted the course to be available in Java, C#, or Go. Two others suggested providing more information about Node.js in advance.

The course allowed you to work alone, in a pair, or in a team. Two people said it was a highlight. Another called out the friendly environment and knowledgeable colleagues. One person said their pairing partner didn’t work out, but appreciated my quick action to move them to solo work.

Accolades

I gave people the opportunity to provide a testimonial. Six people (60%) did so:

I recommend this workshop because it will give you very effective tools to improve how you build software. It has certainly done it for me.

Cristóbal G., Senior Staff Engineer

[We recommend this workshop because] it shows there is a better way to test than using mocks.

Dave M. and Jasper H., .Net Developers

I recommend this workshop because it gives you a new perspective on how to deal with side effects in your unit tests.

Martin G., Senior IT Consultant

I recommend this workshop because it presents a new approach to dealing with the problems usually addressed by using mocks. It's the first such approach I've found that actually improves on mocks.

John M., Software Engineer II

I recommend this workshop because it taught me how to effectively test infrastructure without resorting to tedious, slow, and flaky integration tests or test doubles. Testing Without Mocks will allow me to fully utilize TDD in my work and solve many of the pains my clients and I experience. The course was efficient, realistic, and thorough, with generous resources and time with James.

David L., Independent Consultant

I would recommend this workshop for any developer looking to improve their tests and add additional design tools to their tool belt. James gives out a lot of instruments and approaches that help you deal with tricky testing situations, make your test more robust and help you isolate your system from outside world in a way that is simple and straightforward to add to an existing system without having to do a massive redesign.

anonymous

Analysis

The exercise-focused structure of the course was a hit, and the specific exercises worked well. Nothing’s perfect, though, and there were a few rough spots that could be cleaned up. I’ll definitely keep the exercises and continue to refine the exercise materials.

On the other hand, I felt that the course was rushed, a feeling that was supported by the feedback. The first module was particularly squeezed. I also wasn’t able to spend as much time as I wanted on debriefing each module. I need more time.

A lot of attendees were located in central Europe. The course started at 5pm their time and lasted for 4½ hours, and then had optional office hours for another few hours after that. That meant very late nights for those attendees. I would like to create a schedule that’s more CEST-friendly.

Nearly everybody chose to work collaboratively. I put a lot of effort into matching people’s preferences and skills, but even then, two people ended up deciding to switch to independent work. It went smoothly overall, and received positive feedback, but putting strangers together feels risky. I plan to support pairing and teaming again, but I’ll evaluate it with a critical eye.

I spent the week before the course communicating about logistics and providing access to course materials. During the course, everybody’s build and tooling worked the first time, even for the pairs and teams. That’s unusually smooth. I’ll keep the same approach to logistics.

A screenshot of the course materials. The headline reads, “Hints.” Below it are two hints labelled “1” and “2”. The first hint says, “Your test will need a HomePageController.” The second hint says, “The HomePageController needs a Rot13Client and a Clock.” Each hint is followed by a circled plus icon.

I put a lot of effort into the course materials, which have collapsible API documentation and progressive hints, but people largely ignored the materials, especially at the beginning of the course. This resulted in people getting stuck on questions that were answered in the materials. I need to provide more guidance on how to use the course materials.

I had each group work in a separate Zoom breakout room and share their screen. This allowed me to observe people’s progress without interrupting them. It worked well, so I’ll keep the same approach to breakout rooms.

For the most part, the use of JavaScript (or TypeScript) and Node.js wasn’t a problem, but a few people new to JavaScript struggled with it. I want to provide more support for people without JavaScript and Node.js experience.

Improvements

I’m very happy with how the course turned out, and I see opportunities to make it better. This is what I plan to change for the next course:

  1. Improved schedule. I’ll switch to four 3-hour sessions spread over two weeks. I expect this to make the biggest difference: I’ll be able to spend more time on introducing and debriefing the material without sacrificing the exercises; people will have time between sessions to consolidate their learning and review upcoming material; and the course will end earlier for people in Europe.

  2. Course material walkthrough. Before the first set of exercises, I’ll walk through the course materials and highlight the exercise setup, API and JavaScript documentation, and hints. I’ll demonstrate how to use the hints to get guidance without spoiling the answer. I’ll also show them where to find the exercise solutions, and tell them to double-check their solutions against the official answers as they complete each exercise.

  3. Exercise expectations. The course materials have more exercises than can be finished in a single sitting. I’ll modify the materials to make clear the minimum that needs to be completed, and remind people to use the hints if they’re falling behind schedule.

  4. Exercise improvements. I took notes about common questions and sources of confusion. I’ll update the source code and exercise guides to smooth out the rough spots.

  5. JavaScript and Node.js preparation. For people new to JavaScript and Node.js, I’ll provide an introductory guide they can read in advance. It will explain key concepts such as concurrency, promises, and events. I’ll also provide a reference for each module so people have the option to familiarize themselves with upcoming material.

Thanks for reading! If you’re interested in the course, you can sign up here.

Nullables Livestream #18: Dangerous Developers

In this weekly livestream series, I pair up with Ted M. Young, aka jitterted, to look at Nullables and A-Frame Architecture as an alternative to Mocks, Spies, and Hexagonal Architecture. Each episode combines a healthy dose of architecture and design discussion with hands-on coding.

In this episode...

It’s a rant-heavy episode, with conversations ranging from team structure to dangerous developers, assertion APIs, the “Extract Class” refactoring, and more. Somehow we manage to get a bit of coding in, too. We finish migrating our YachtController tests to use our new fixture- and Nullables-based approach. All that’s left is to turn off the in-memory fake and replace it with our Nullable GameDatabase.

Visit the episode archive for more.

“All the Remote Things” Podcast

I appeared on the “All the Remote Things” podcast with Tony Ponton recently. We had a wonderful, wide-ranging conversation covering topics including Extreme Programming, changing the constraints of an organization, the Agile Fluency® Model, making software process investment tradeoffs, FAST, and schedule chicken! It’s fast-paced and worth watching. You can find it embedded below or watch it on YouTube.