The AI Chronicles

In this weekly livestream series, Ted M. Young and I started to build an AI-powered role-playing game using React, Spring Boot, and Nullables. Unfortunately, life intervened, and we weren’t able to finish the series. But it’s still a great source of discussion about design, architecture, and effective programming practices!

The source code is on GitHub.


#1: The AI Chronicles

Our new stream! We explain the goals of the project—to create an AI-powered role-playing game—then get to work. Our first task is to create a Nullable wrapper for the OpenAI service. The work goes smoothly, and by the end of the episode, we have an OpenAiClient that sends POST requests to the service.
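The idea behind that wrapper can be sketched in a few lines. This is a rough illustration of the Nullables pattern under hypothetical names (`HttpPoster`, `StubbedHttpPoster`, `sendPrompt`), not the stream’s actual code: the production class exposes a `createNull()` factory that injects an embedded stub in place of the real infrastructure, so tests exercise the real logic without touching the network.

```typescript
// Sketch of the Nullables pattern; names are illustrative, not the stream's code.

// The slice of HTTP behavior the client depends on.
interface HttpPoster {
  post(url: string, body: unknown): Promise<{ status: number; body: string }>;
}

// Embedded stub: same interface, but never touches the network.
class StubbedHttpPoster implements HttpPoster {
  async post(_url: string, _body: unknown) {
    return { status: 200, body: "{}" };
  }
}

class OpenAiClient {
  // Production instance: would be given a real HTTP implementation.
  static create(httpPoster: HttpPoster): OpenAiClient {
    return new OpenAiClient(httpPoster);
  }

  // "Nulled" instance for tests: real logic, stubbed infrastructure.
  static createNull(): OpenAiClient {
    return new OpenAiClient(new StubbedHttpPoster());
  }

  private constructor(private readonly _httpPoster: HttpPoster) {}

  async sendPrompt(prompt: string): Promise<number> {
    const response = await this._httpPoster.post(
      "https://api.openai.com/v1/chat/completions",
      { messages: [{ role: "user", content: prompt }] },
    );
    return response.status;
  }
}
```

Unlike a mock, the nulled client still runs all of `OpenAiClient`’s own code; only the lowest-level infrastructure is swapped out.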

#2: Faster Builds

It’s a two-fer! In the first half, we look at the work James did on speeding up the front-end build, including a questionable choice to use a custom test framework. We debug a problem with the incremental build and end up with a nice, speedy build.

In the second half, we continue working on the OpenAiClient wrapper. The code POSTs to the OpenAI service, but it doesn’t parse the responses. To implement that parsing, we modify JsonHttpClient to return POST responses and add the ability to configure those responses in our tests.

#3: Fail Faster

In a coding-heavy episode, we finish our OpenAiClient wrapper. Along the way, a confusing test failure inspires us to make our code fail faster. Then, in the final half hour of the show, we implement a placeholder front-end website and make plans to connect it to the back end.

#4: fetch() Quest

It’s an “all rants, all the time” episode—at least for the first hour. We start out talking about the role of engineering leadership in a company. Then it’s a discussion of evolutionary design and the costs of change. Then teaching through pairing. Finally, we buckle down to work and make solid progress on a prototype integration test for the front-end fetch() wrapper.
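A prototype like that can be surprisingly small. Here is a hedged sketch of a narrow integration test for a fetch() wrapper, assuming Node 18+ and invented names (`postViaFetch`, `main`): it spins up a real HTTP server on an ephemeral port, calls fetch() against it, and checks both what the server received and what the wrapper returned.

```typescript
// Narrow integration test sketch for a fetch() wrapper (illustrative names).
import { createServer } from "node:http";
import { once } from "node:events";

// The wrapper under test: the only code allowed to call fetch() directly.
async function postViaFetch(url: string, body: string) {
  const response = await fetch(url, { method: "POST", body });
  return { status: response.status, body: await response.text() };
}

async function main() {
  // Real server on an ephemeral port: record the request body, send a reply.
  let receivedBody = "";
  const server = createServer((request, response) => {
    request.on("data", (chunk) => (receivedBody += chunk));
    request.on("end", () => response.end("server says hi"));
  });
  server.listen(0);
  await once(server, "listening");
  const { port } = server.address() as { port: number };

  const result = await postViaFetch(`http://localhost:${port}/say`, "hello");

  server.close();
  return { receivedBody, result };
}
```

Because the test talks to a real socket, it verifies the wrapper’s actual assumptions about fetch() rather than a simulation of them.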

#5: fetch() Wraps

It’s an eventful episode as we start off with a discussion of event sourcing, event-driven code, event storming, and more. Then we return to working on our fetch() wrapper. We factor our test-based prototype into a real production class, clean up the tests, and add TypeScript types.

#6: Output Tracker

We continue working on our front end. After some conversation about working in small steps, we turn our attention to BackEndClient, our front-end wrapper for communicating with the back-end server. We start out by writing a test to define the back-end API, then modify our HttpClient wrapper to track requests. By the end of the episode, we have the back-end requests tested and working.
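The request-tracking idea can be sketched with an event emitter. All names here (`OutputTracker`, `trackRequests`, `say`) are hypothetical, not the stream’s code: the wrapper emits an event for every request it sends, and `trackRequests()` returns a tracker that accumulates those events so tests can assert on them.

```typescript
// Output-tracking sketch (illustrative names); assumes Node's EventEmitter.
import { EventEmitter } from "node:events";

interface RequestData {
  url: string;
  body: string;
}

// Accumulates every event of one type so tests can inspect them later.
class OutputTracker<T> {
  private readonly _data: T[] = [];

  constructor(emitter: EventEmitter, event: string) {
    emitter.on(event, (data: T) => this._data.push(data));
  }

  get data(): T[] {
    return [...this._data];
  }
}

class BackEndClient {
  private readonly _emitter = new EventEmitter();

  trackRequests(): OutputTracker<RequestData> {
    return new OutputTracker(this._emitter, "request");
  }

  async say(message: string): Promise<void> {
    // Emit the outgoing request so trackers can observe it...
    this._emitter.emit("request", { url: "/api/say", body: message });
    // ...then make the actual HTTP call (elided in this sketch).
  }
}
```

Tests call `trackRequests()` before exercising the client, then assert on `tracker.data` instead of stubbing out the client’s internals.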

#7: Configurable Client

We turn to parsing the response returned from the “say” API, which will carry the OpenAI response to a message. To do that, we add the ability to configure the nullable HttpClient so it returns predetermined responses in our tests. We discover that the HTTP library’s Response object provides a default Content-Type header, which we don’t want in our tests, and deal with the window vs. global implementations of fetch().
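Configurable responses can be sketched as a parameter to `createNull()`. This is an illustration under assumed names (`FetchLike`, `ConfiguredResponse`), not the stream’s code; note that the stub returns a plain object instead of a real `Response`, which sidesteps the default Content-Type header problem.

```typescript
// Configurable nulled client sketch (illustrative names).
interface ConfiguredResponse {
  status: number;
  body: string;
}

// Only the slice of fetch() the client actually uses.
type FetchLike = (
  url: string,
  options: { method: string; body: string },
) => Promise<{ status: number; text(): Promise<string> }>;

class HttpClient {
  // Production instance: adapts the global fetch() to our narrow interface.
  static create(): HttpClient {
    const realFetch: FetchLike = async (url, options) => {
      const response = await fetch(url, options);
      return { status: response.status, text: () => response.text() };
    };
    return new HttpClient(realFetch);
  }

  // Nulled instance: plays back the configured responses in order.
  static createNull(
    responses: ConfiguredResponse[] = [{ status: 200, body: "" }],
  ): HttpClient {
    let nextIndex = 0;
    const stubbedFetch: FetchLike = async () => {
      // Repeat the last configured response once the list runs out.
      const next = responses[Math.min(nextIndex++, responses.length - 1)];
      // A plain object avoids the default Content-Type header a real
      // Response object would add.
      return { status: next.status, text: async () => next.body };
    };
    return new HttpClient(stubbedFetch);
  }

  private constructor(private readonly _fetch: FetchLike) {}

  async post(url: string, body: string) {
    const response = await this._fetch(url, { method: "POST", body });
    return { status: response.status, body: await response.text() };
  }
}
```

A test can then script a failure case with something like `HttpClient.createNull([{ status: 500, body: "oops" }])` and assert on how the caller reacts.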

After we get everything working, we add types to make the TypeScript type checker happy. With that done, we’re ready for the next episode, where we’ll return to the Spring Boot back end and implement the “say” API endpoint.