
Object Playground: The Definitive Guide to Object-Oriented JavaScript

27 Aug, 2013

Let's Code: Test-Driven JavaScript, my screencast series on rigorous, professional JavaScript development, recently celebrated its one-year anniversary. There are over 130 episodes online now, covering topics ranging from test-driven development (of course!), to cross-browser automation, to software design and abstraction. It's incredibly in-depth and I'm very proud of it.

To celebrate the one-year anniversary, we've released Object Playground, a free video and visualizer for understanding object-oriented programming. The visualizer is very cool: it runs actual JavaScript code and graphs the object relationships created by that code. There are several preset examples and you can also type in arbitrary code of your own.

[Figure: Object Playground in action, showing an example visualization]
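
For a sense of what the visualizer graphs, here's a small sketch (my example, not one of the presets) that builds a simple prototype chain:

```javascript
// A classic ES5 inheritance setup. Object Playground graphs the links
// this code creates between instances and prototypes.
function Animal(name) {
  this.name = name;
}
Animal.prototype.speak = function() {
  return this.name + " makes a sound";
};

function Dog(name) {
  Animal.call(this, name);   // chain the constructor
}
Dog.prototype = Object.create(Animal.prototype);
Dog.prototype.constructor = Dog;

var rex = new Dog("Rex");
// rex -> Dog.prototype -> Animal.prototype -> Object.prototype
console.log(rex.speak());                                   // "Rex makes a sound"
console.log(Object.getPrototypeOf(rex) === Dog.prototype);  // true
```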

Understanding how objects and inheritance work in JavaScript is a challenge even for experienced developers, so I've supplemented the tool with an in-depth video illustrating how it all fits together. The feedback has been overwhelmingly positive. Check it out.

Estimation and Fluency

25 Feb, 2013

Martin Fowler recently asked me via email if I thought there might be a relationship between Agile Fluency and how teams approach estimation. This is my response:

I definitely see a relationship between fluency and estimation. I can't say it's clear cut or that I have real data on it, but this is my gut feel:

  1. "Focus on Value" teams tend to fall into two camps: either "estimation is bad Agile" or "we're terrible at estimating." These statements are the same boy dressed by different parents. One star teams can't provide reliable estimates because their iterations are unpredictable, typically with stories drifting into later iterations (making velocity meaningless), and they have a lot of technical debt (so even if they took a rigorous approach to "done done" iterations, there would be wide variance in velocity from iteration to iteration, so their predictions' error bars would be too wide to be useful).

  2. "Deliver Value" teams tend to take a "we serve our customer" attitude. They're very good at delivering what the customer asks for (if not necessarily what he wants). Their velocity is predictable, so they can make quite accurate predictions about how long it will take to get their current backlog done. Variance primarily comes from changes to the backlog and difficulty discussing needs with customers (leading to changes down the road), but those are manageable with error bars. Some two-star teams retain the "estimation is bad Agile" philosophy, but any two-star team with a reasonably stable backlog should be capable of making useful predictions.

  3. "Optimize Value" teams are more concerned with meeting business needs than delivering a particular backlog. Although they can make predictions about when work will be done, especially if they've had a lot of practice at it during their two-star phase, they're more likely to focus on releasing the next thing as soon as possible by reducing scope and collaboratively creating clever shortcuts. (E.g., "I know you said you wanted a subscriber database, but we think we can meet our first goal faster if we just make REST calls to our credit card processor as needed. That has ramifications x, y, z; what do you think?"). They may make predictions to share with stakeholders, but those stakeholders are higher-level and more willing to accept "we're working on business goal X" rather wanting than a detailed timeline.

  4. I'm not sure how this would play out with "Optimize for System" teams. I imagine it would be the same as three-star fluency, but with a different emphasis.
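
Here's the velocity-variance sketch I mentioned in point 1. It's a hypothetical illustration (not something from the original email) of how you might turn iteration history into a forecast range, and why high variance makes the range useless:

```javascript
// Forecast how many iterations a backlog will take, quoting a range of
// +/- two standard deviations around mean velocity. Illustrative only.
function forecastIterations(velocities, backlogPoints) {
  var n = velocities.length;
  var mean = velocities.reduce(function(a, b) { return a + b; }, 0) / n;
  var variance = velocities.reduce(function(sum, v) {
    return sum + (v - mean) * (v - mean);
  }, 0) / n;
  var stdDev = Math.sqrt(variance);
  return {
    optimistic: backlogPoints / (mean + 2 * stdDev),
    expected: backlogPoints / mean,
    pessimistic: backlogPoints / (mean - 2 * stdDev)
  };
}

// A team with stable velocity gets a usable range:
forecastIterations([20, 21, 19, 20], 100);
// { optimistic: ~4.7, expected: 5, pessimistic: ~5.4 } iterations

// A one-star team with erratic velocity gets error bars so wide they're
// meaningless -- the pessimistic bound is literally negative:
forecastIterations([5, 30, 12, 25], 100);
// { optimistic: ~2.6, expected: ~5.6, pessimistic: ~-51 }
```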

Analysis of Hacker News Traffic Surge on Let's Code TDJS Sales

25 Feb, 2013

A few weeks ago, my new screencast series, Let's Code: Test-Driven JavaScript, got mentioned on Hacker News. Daniel Markham asked that I share the traffic and conversion numbers. I agreed, and it's been long enough for me to collect some numbers, so here we are.

To begin with, Let's Code TDJS is a bootstrapped venture. It's a screencast series focused on rigorous, professional JavaScript development. It costs money. $19.95/month for now, $24.95/month starting March 1st, with a seven-day free trial.

Let's Code TDJS isn't exactly a startup--I already have a business as an independent Agile consultant--but I'm working on the series full-time. I've effectively "quit my day job" to work on it. I'm doing this solo.

I launched the series last year on Kickstarter. Thanks in large part to a mention on Hacker News, and even more to Peter Cooper's JavaScript Weekly, the Kickstarter was a huge success. It raised about $40K and attracted 879 backers. That confirmed the viability of the screencast and also gave me runway to develop the show.

(By the way, launching on Kickstarter was fantastic. I mean, sure, it was nail-biting and hectic and scary, and running the KS campaign took *way* more work than I ever expected, but the results were worth it. As a platform for confirming the viability of an idea while simultaneously giving you seed funding, I can't imagine anything better for the bootstrapper.)

Anyway, the Kickstarter got a reasonable amount of attention. I tossed up a signup page for people who missed it, and by the time I was ready to release the series to the public, I had a little over 1500 people on my mailing list.

I announced the series' availability on February 4th. First on Sunday via Twitter, and then on Monday morning (recipient's time) via the mailing list. That's about 4,200 Twitter followers and 1,500 mailing list recipients.

Before we get into the numbers, I should tell you that I don't use Google Analytics. I don't track visitors, uniques, pageviews, none of that. I'm not sure what I would do with it. What I do track is 1) number of sales and 2) conversions/retention. That's it.

So, I announced on the weekend of the fourth. There was a corresponding surge in sales. Here's how many new subscriptions I got each day, with Monday the 4th counting as "100x." (So if I got 100,000 subscriptions on Monday--which, since you don't see me posting from a solid gold throne, you can assume I didn't--then I would have gotten 48,000 on Sunday.)

  • Sunday: 48x
  • Monday: 100x
  • Tuesday: 33x
  • Wednesday: 25x
  • (Total: 206x.)

82% of these subscribers converted to paying customers. 15% cancelled the trial, and 3% just didn't pay: although I collect credit card information at signup, those subscribers' cards never processed successfully.

A week and a half later, just after midnight my time (PST) on Wednesday the 13th, the series was posted to Hacker News by Shirish Padalkar. It was on the front page for about a day, and was near the top of the front page for the critical morning hours of every major European and US time zone. It peaked at #7. That led to a shorter, sharper surge.

  • Wednesday: 140x
  • Thursday: 23x
  • (Total: 163x, or 79% of the email's surge.)

83% of these subscribers converted from the free trial. 14% cancelled and 4% didn't pay. The only real difference between the two surges was that a lot more of the HN subscribers cancelled at the last moment. About half actually cancelled after their credit card failed to process and they got a dunning email. It makes me wonder if HN tire-kickers are more inclined to "hack the system" by putting in a credit card that can be authorized but not charged. A debit card with insufficient funds would do the trick.

Another interesting data point is that "background" subscriptions--that is, the steady flow of subscriptions since February 2nd, other than traffic surges--average 10x per day. (That is, on an average day, I get one tenth the subscriptions I got on Monday the 4th.) However, the conversion rate for those "background" subscriptions is 95%. I'm not sure why it's so much higher. Perhaps those subscriptions are the result of word-of-mouth recommendations? That would make sense, since I'm not advertising or doing any real traffic generation yet.
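
Since I quote equivalences in my conclusions below, here's a quick sketch of the arithmetic behind them (the daily figures are the normalized values above):

```javascript
// Normalized daily subscription figures from above (Monday the 4th = 100x).
var emailSurge = 48 + 100 + 33 + 25;   // 206x over four days
var hnSurge = 140 + 23;                // 163x over two days
var ratio = hnSurge / emailSurge;      // ~0.79

// Scale the announcement's reach by that ratio:
console.log(Math.round(1500 * ratio)); // 1187 -- roughly 1,200 mailing list readers
console.log(Math.round(4200 * ratio)); // 3323 -- roughly 3,330 Twitter followers
```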

My conclusions:

  • For this service, a mention on HN is about equivalent to a mailing list of 1,200 interested potential customers and 3,330 Twitter followers. That's actually less than I would have expected.

  • The conversion behavior of HN'ers is about the same as the mailing list. I would have expected the mailing list to convert at a higher rate, since they've already expressed interest. But it's essentially the same.

  • Both surges converted at a significantly lower rate than word-of-mouth subscribers. Don't get me wrong--from my research, I'm led to believe that ~83% is excellent. But 95% is frikkin' amazing.

  • HN'ers are more likely to cancel a free trial at the last moment and use credit cards that authorize but cannot be charged.

Finally--thanks to everyone who subscribed! I hope this was interesting. You can discuss this post on Hacker News. I'm available to answer questions.

(If you're curious about the series, you can find a demo video here.)

Let's Code: Test-Driven JavaScript Now Available

11 Feb, 2013

I'm thrilled to announce that Let's Code: Test-Driven JavaScript is now open to the public!

I've been working on this project for over nine months. Over a thousand people have had early access, and reactions have been overwhelmingly positive. Viewers are saying things like "truly phenomenal training" (Josh Adam, via email), "highly recommended" (Theo Andersen, via Twitter), and "*the* goto reference" (anonymous, via viewer survey).

Up until last week, the show was only available to Kickstarter backers. Now I've opened it up to everyone. Come check it out! There's a demo video below.

About Let's Code: Test-Driven JavaScript

If you've programmed in JavaScript, you know that it's an... interesting... language. Don't get me wrong: I love JavaScript. I love its first-class functions, the intensive VM competition between browser makers, and how it makes the web come alive. It definitely has its good parts.

It also has some not-so-good parts. Whether it's browser DOMs, automatic semicolon insertion, or an object model with a split personality, everyone's had some part of JavaScript bite them in the ass at some point. That's why test-driven development is so important.
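
To pick one example from that list: automatic semicolon insertion can silently break a function, and it's exactly the kind of thing a good test suite catches immediately.

```javascript
// The parser inserts a semicolon after the bare `return`, so the object
// literal below becomes dead code and the function returns undefined.
function makeConfig() {
  return
  {
    debug: true
  };
}

console.log(makeConfig());   // undefined, not { debug: true }
```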

Let's Code: Test-Driven JavaScript is a screencast series focused on rigorous, professional web development. That means test-driven development, of course, and also techniques such as build automation, continuous integration, refactoring, and evolutionary design. We support multiple browsers and platforms, including iOS, and we use Node.js on the server. The testing tools we're using include NodeUnit, Mocha, expect.js, and Testacular.
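
If you haven't seen Mocha and expect.js together, here's a minimal sketch of what a test in that style looks like. (The add() function is my own hypothetical example, not code from the series.)

```javascript
// Run with `mocha` after `npm install mocha expect.js`.
var expect = require('expect.js');

// Hypothetical production code under test.
function add(a, b) {
  return a + b;
}

describe("add()", function() {
  it("adds two numbers", function() {
    expect(add(1, 2)).to.be(3);
  });

  it("concatenates strings--one of those JavaScript surprises", function() {
    expect(add("1", "2")).to.be("12");
  });
});
```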

You can learn more on the Let's Code TDJS home page. If you'd like to subscribe, you can sign up here.

Come Play TDD With Me at CITCON

18 Sep, 2012

CITCON, the Continuous Integration and Testing Conference, is coming up this weekend in Portland, and I'm going to be there recording episodes of Let's Play TDD! I'm looking for people to pair with during the conference.

If you're interested, check out the current source code and watch a few of the most recent videos. Then, at the conference, come to my "Let's Play TDD" session and volunteer to pair! It should be a lot of fun.

There are still a few slots open at the conference, so if you haven't registered, there's still time. I hope to see you there!

Acceptance Testing Revisited

08 Sep, 2012

I recently had the chance to reconsider my position on acceptance testing, thanks to a question from Michal Svoboda over on the discussion forums for my Let's Code: Test-Driven JavaScript screencast. I think this new answer adds some nuances I haven't mentioned before, so I'm reproducing it here:

I think "acceptance" is actually a nuanced problem that is fuzzy, social, and negotiable. Using tests to mediate this problem is a bad idea, in my opinion. I'd rather see "acceptance" be done through face-to-face conversations before, after, and during development of code, centering around whiteboard sketches (earlier) and manual demonstrations (later) rather than automated tests.

That said, we still need to test the behavior of the software. But this is a programmer concern, and can be done with programmer tests. Tools like Cucumber shift the burden of "the software being right" to the customer, which I feel is a mistake. It's our job as programmers to (a) work with the customer, on the customer's terms, so we build the right thing, and (b) make sure it actually works, and keeps working in the future. TDD helps us do both, and it's our responsibility, not the customer's.

I don't know if this is very clear. To rephrase: "acceptance" should be a conversation, and it's one that we should allow to grow and change as the customer sees the software and refines her understanding of what she wants. Testing is too limited, and too rigid. Asking customers to read and write acceptance tests is a poor use of their time, skill, and inclinations.

I have more here.

Lack of Fluency?

10 Aug, 2012

Dave Nicolette has written a thoughtful critique of my article with Diana Larsen, Your Path through Agile Fluency. I'm going to take a moment to respond to his points here.

Dave lays out his core criticism thusly:

The gist of the article appears to be that we can effect organizational improvement in a large company by driving change from the level of individual software development teams.

He spends the rest of his article elaborating on this theme, particularly the point that most software is built in IT (rather than product) organizations, where software teams have little ability to drive change.

It's a well-made argument, and I agree with it. Organizational change does require top-down support, and software teams in IT have little ability to drive bottom-up change.

Just one problem: I don't see why Dave presents this as a criticism. Our article isn't about using software teams to drive organizational change.

Our essay describes how teams progress through stages of Agile fluency. It's based on what we've seen in 13 years of applying Agile and observing others' Agile efforts. We've developed the fluency model over the last year and a half. Along the way, we've reviewed it with dozens of people in all sorts of roles--team members, managers, executives, and consultants. The feedback has been clear: the model reflects their experiences. I doubt it's perfect, but the fluency model reflects reality to the best of our ability.

This model isn't about how organizations grow. It's about how teams grow. (There's probably room for an article about organizational Agile fluency, but that's for another time.) And this is what we see:

1. Teams learning Agile first get good at focusing on the business perspective. User stories and the like. (That's not to say that every team using user stories is successfully focusing on the business perspective!) This requires some cultural changes at the team level.

2. If the team has been working on improving their development skills, they get good at this next. It takes longer than the first stage because there's a lot to learn. I'm talking TDD, refactoring, and so forth. Some teams don't bother.

3. Typically, by this point, the team has been feeling the pain of poor business involvement for a while. Sometimes, in organizations that support it, the team gets good at the business side of things next. It takes longer than the previous stage because "getting good at it" means involving people outside the team. Most organizations are set up with "business" and "software" in different parts of the org chart, and this "third star" of fluency typically (but not always) requires them to be merged into cross-functional teams.

We don't say how to make this happen, just that it's a prerequisite for three-star fluency, and that these sorts of changes to organizational structure are difficult and require spending organizational capital. It can be top-down or bottom-up. (Really, it has to be both.) We also say that it may not be worth making this investment. Two-star fluency could be enough.

4. Finally, in a few companies, the team's focus extends beyond its product/projects to helping to optimize the overall system. This requires an organizational culture that likes its teams to advise on whole-company issues. Again, we don't say how to achieve this, just that it's a prerequisite for four-star fluency, and that we've only seen it happen in small companies, and typically companies that have this as an organizational value from the beginning. We again emphasize that more stars aren't necessarily worth the investment.

So, to recap: Dave argues that the individual software teams in IT cannot drive bottom-up organizational change. I agree. Organizational change must occur if you want to achieve three- or four-star fluency, but our article doesn't describe how to do so. It just says it's necessary.

Your Path through Agile Fluency

07 Aug, 2012

Agile methods are solidly in the mainstream, but that popularity hasn't been without its problems. Organizational leaders are complaining that they're not getting the benefits from Agile they expected. This article presents a model of Agile fluency that will help you achieve Agile's benefits. Fluency evolves through four distinct stages, each with its own benefits, costs of adoption, and key metrics.

Your Path through Agile Fluency

A Brief Guide to Success with Agile

by Diana Larsen and James Shore

For over twelve years, we’ve been leading and helping teams transition to Agile. The industry has changed a lot in that time. When we started in 1999, methods with names like Scrum, Extreme Programming, and Crystal were gaining visibility under the banner “lightweight methods.” Programmers looking for faster, simpler, and more effective ways of working were the primary drivers.

Throughout the next decade, Agile grew. In 2001, prominent members of the “lightweight methods” community met in Utah, coined the term “Agile” and created the Agile Manifesto. In 2005, the XP/Agile Universe and Agile Development conferences merged to form the Agile Alliance’s “Big” Agile conference.

The community grew, too: from a programmer-centric, Extreme Programming focus in the early days, to a more inclusive approach in the mid-2000s, to a project management and Scrum focus in more recent years. What was once a grassroots effort among early adopters is now solidly in the mainstream.

Growth hasn’t been without its problems. Programmers, once the drivers of Agile adoption, are increasingly turning away from what they see as a bloated, ineffective project management methodology. Agile luminaries are posting articles such as Martin Fowler’s “Flaccid Scrum” (2009). Organizational leaders are complaining that they’re not getting the benefits from Agile that they expected.

We’ve been helping teams transition to Agile since the beginning. We’ve learned a lot over the years about what it takes to achieve the benefits promised by Agile. In this paper, we share what we’ve learned.

Read the rest of this article at MartinFowler.com.

Let's Play TDD #201: From Mock-Based to State-Based

28 Jun, 2012

The source code for this episode is available here. Visit the Let's Play archive for more episodes!

Many thanks to Danny Jones for figuring out the HD YouTube embed code.

Let's Play TDD #200: To Kill a Mock

26 Jun, 2012

The source code for this episode is available here. Visit the Let's Play archive for more episodes!

Many thanks to Danny Jones for figuring out the HD YouTube embed code.

Let's Play TDD #199: Constructor Cleanup

21 Jun, 2012

The source code for this episode is available here. Visit the Let's Play archive for more episodes!

Many thanks to Danny Jones for figuring out the HD YouTube embed code.

Let's Play TDD #198: Removing Getters and Setters

19 Jun, 2012

The source code for this episode is available here. Visit the Let's Play archive for more episodes!

Many thanks to Danny Jones for figuring out the HD YouTube embed code.

Let's Play TDD #197: It's Like a Horror Movie

12 Jun, 2012

The source code for this episode is available here. Visit the Let's Play archive for more episodes!

Many thanks to Danny Jones for figuring out the HD YouTube embed code.

Let's Play TDD #196: I Don't Know If This is a Good Idea

07 Jun, 2012

The source code for this episode is available here. Visit the Let's Play archive for more episodes!

Many thanks to Danny Jones for figuring out the HD YouTube embed code.

Test-Driven JavaScript Kickstarter Ends Today

05 Jun, 2012

Today is the last day for my Let's Code: Test-Driven JavaScript Kickstarter. It's been a great success, easily reaching its funding goal and nearly half a dozen stretch goals as well.

The funding closes today at 5pm PDT, so today's your last chance to participate! Thanks to the stretch goals, there are nearly 100 episodes' worth of content coming to backers... an amazing amount that will never again be available at this price.

Anyway, if you're interested in test-driven development, JavaScript, or web development, check it out. There are just a few hours left.