Pragmatic TDD

TDD is simple. It involves the following steps in a loop:

  • Write a failing test.
  • Write code that makes the test pass.
  • Refactor the code.

In addition to proving your code works, you also prove your tests work, which means you can trust them to catch regressions. Because you know the tests work, you can freely refactor the code to improve it.
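
As a minimal sketch of one pass through the loop (MSTest-style syntax; the StringUtils class and its Reverse() method are hypothetical examples, not code from a real project):

// 1. Write a failing test. It fails (it won’t even compile) until Reverse() exists.
[TestMethod]
public void Reverse_ReturnsCharactersInReverseOrder()
{
    Assert.AreEqual("cba", StringUtils.Reverse("abc"));
}

// 2. Write just enough code to make the test pass.
public static string Reverse(string s)
{
    var chars = s.ToCharArray();
    Array.Reverse(chars);
    return new string(chars);
}

// 3. Refactor freely. The passing test catches any regression you introduce.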

TDD is simple, but it’s not easy. It requires supreme discipline. There’s always the temptation to just write large chunks of code at once and then write the tests later (usually to fulfill a minimum code coverage requirement imposed from above). When you do that, you don’t really have proof that your tests work. To put it simply, you can’t trust a test that you haven’t seen fail.

I’m a pragmatic TDD practitioner. Coding in the real world is messy, and there’s always time pressure. In general, I believe it’s a good idea to follow the TDD process as prescribed, but it’s even better to know when to bend the rules. In this article, I’ll discuss a few pragmatic TDD ideas.

You don’t always need to write automated tests

Automated tests are only necessary if you’re writing production code that will stick around for a long time.

If you’re doing exploratory coding (such as building a prototype), forget about writing automated tests. The point of exploratory coding is to go fast and learn.

Let’s say you’re going to be working on a project where you have to integrate with the Twitter API and you’ve never used this API before. Be pragmatic. Explore the API by writing a simple console app that hardcodes requests and has no error handling at all. The goal is to learn about the requests and responses.
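
Here’s a rough sketch of what that throwaway console app might look like (the endpoint is Twitter’s v2 recent search, and the bearer token is a placeholder you’d fill in yourself):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class TwitterApiExplorer
{
    static async Task Main()
    {
        // Hardcoded request, no error handling - this code only exists to learn the API.
        var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", "YOUR_BEARER_TOKEN");

        var response = await client.GetAsync(
            "https://api.twitter.com/2/tweets/search/recent?query=csharp");

        // Dump the raw JSON so you can see exactly what the requests and responses look like.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}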

There are other scenarios where writing automated tests may not be worth the effort. Use your own judgement (discuss with your teammates!). Be pragmatic, not dogmatic. Just because you follow TDD most of the time doesn’t mean you need to handcuff yourself to the idea of perfect execution.

For example, I don’t bother writing automated tests for the UI. I run the application and visually verify it. If automated tests are required, I believe that’s a job best left to automation QAs.

Don’t look at one requirement at a time

In real world software development, we are always dealing with incomplete information. We never know all of the requirements at once. We implement the software based on the known requirements, and the software evolves over time as more requirements are discovered.

When you’re doing TDD, it may be tempting to zoom in and only deal with one requirement at a time.

Don’t lose sight of the forest for the trees. Zoom out. Look at the big picture. Look at all of the requirements available and come up with a high-level design.

It’s well known that Big Design Up Front is a bad idea. Too little design is also a bad idea: it wastes time because you didn’t plan ahead at all.

Don’t implement obviously wrong solutions

You’ve been given the task of implementing a method that adds two integers and returns the sum.

You write a test:

Assert.AreEqual(4, Add(2, 2));

You go to implement the method, so you hardcode the result the test is looking for:

return 4;

Don’t do this. This is obviously the wrong solution. The Add() method needs to solve the problem in general, not just for this specific test case. Be pragmatic and don’t waste effort like this.
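
Instead, write the obvious general solution right away, something like:

public static int Add(int a, int b)
{
    // Solve the general problem, not just the 2 + 2 test case.
    return a + b;
}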

Writing tests after the code is OK sometimes

The TDD process has you writing tests first, before writing any code. The reason for doing this is that you can prove your test actually works. If you don’t see your test failing, you can’t be 100% sure your code is what made it start passing.

Doing a test-first approach is good in general, but sometimes you may find it necessary or desirable to write the code first.

In my experience, this usually arises when I need to do exploratory coding, and I want to do it in the production code instead of in a throwaway prototype project. Usually what I’ll do is try a few different approaches and test the code manually. When I’m satisfied with the approach, I comment out the code and write tests, and then use the commented out code as a reference to write the actual production code.

What I described there is actually a test-first approach, just with a detour to figure out the code first.

If you do a test-after approach, my only advice is to make sure to do small iterations. Don’t wait to write tests until big chunks of code have been written; otherwise, it’s very likely that you’ll miss a test case. Furthermore, when you do a test-after approach, you have no proof that the tests actually work. You can use pure reasoning to decide that your test really works and that your code is what made it pass, but I wouldn’t suggest relying on that too much.

TDD and the code coverage metric

If you’re doing TDD, then you’re guaranteeing all of your code is covered by tests. Therefore, you don’t need the code coverage metric.

Not everyone on your team may be doing TDD, though. Have you ever heard: “I’m done with the code. Now I just need to write the tests”? I’ve heard this countless times. TDD is definitely not widespread. Neither is unit testing. It wasn’t long ago that I had to fight for devs on my teams to do unit testing at all.

This is where the code coverage metric comes in. To force devs to write unit tests, organizations start by imposing a minimum code coverage requirement, usually something like 80%, sometimes enforced with a quality gate.
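
For example, in a .NET project that uses the coverlet.msbuild package, the build can be made to fail when line coverage drops below the threshold (80% here is just the common example mentioned above):

dotnet test /p:CollectCoverage=true /p:Threshold=80 /p:ThresholdType=line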

The code coverage metric by itself isn’t enough. Devs who are forced to unit test will usually tack on bare minimum unit tests afterwards (just to meet the code coverage requirement). This leads to low quality tests:

  • No proof the tests work (because the tests were added after the code was written).
  • Shallow level of testing (usually only the happy path is covered; see the sketch after this list).
  • Completely untested requirements (which increases your chances of bugs).
  • Huge tests that test too much.
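
To illustrate the shallow-testing point, here’s a hedged sketch: a hypothetical Divide() method whose coverage-driven test suite stops at the happy path, leaving the divide-by-zero requirement completely untested.

[TestMethod]
public void Divide_ReturnsQuotient()
{
    Assert.AreEqual(2, Calculator.Divide(10, 5));
}

// Never written: a test for Calculator.Divide(10, 0). That requirement stays untested,
// even though the coverage number looks fine.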

My only advice here is to review the unit tests during code review. Meanwhile, continue to practice TDD yourself and maybe other devs will come around to doing it. Until then, be pragmatic and realize that you may need to live with the imperfect code coverage metric.
