Elaborating on “it depends”

Following the discussion on “When should I test?”, I followed up with a conversation:

When it provides value.

When is that?

It depends.

And it truly does depend. But upon what? That’s trickier to answer – there is no absolute, concrete, prescriptive guidance that will tell you, in a given situation, whether writing a test will provide value.

I started out my TDD experience writing unit tests all the time. Chasing “everything is unit tested” didn’t provide ultimate value. I started writing other tests as well. I went test-first, test-after, and test-when-I-feel-like-it. I’ve done unit tests, integration tests, subcutaneous tests, UI tests, functional tests, acceptance tests, and pretty much everything in-between. I’ve used mocking frameworks and sworn them off, used auto-mocking containers and sworn those off, used test generators and sworn those off too.

After all this time and all these tests, I’ve come to the conclusion that ultimately I was chasing the wrong goal. My goal is to make my customers successful. Not write software.

If my solution does require software, then I write software that provides value. If my software changes over time, then I write software in a way that enables change.

Sometimes that involves tests, sometimes it doesn’t. How do I know when to write tests? When I can see that not having them will hamper me in providing value. How do I know when not to write tests? When I can see that having them will hamper me in providing value.

It took me a long time to get to this point, so for folks new to testing, it’s important to build the experience needed to recognize when tests provide value. Zero tests in all situations – not a good idea. 100% coverage in all situations – also not a good idea.

But not everyone should take the same 5-10 year journey to get at this point. So where should we start? In a codebase with no tests, end-to-end tests as a security blanket provide the most value.

From there, add tests when you feel like they add value. Not before.
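One way to picture that “security blanket” advice is a characterization (golden-master) test: before touching an untested codebase, pin down the current end-to-end output so refactoring can’t silently change behavior. A minimal sketch, with hypothetical names and a made-up legacy routine:

```python
# Hypothetical legacy routine we dare not change without a safety net.
def legacy_invoice_text(customer: str, items: list[tuple[str, float]]) -> str:
    lines = [f"Invoice for {customer}"]
    for name, price in items:
        lines.append(f"  {name}: ${price:.2f}")
    lines.append(f"Total: ${sum(p for _, p in items):.2f}")
    return "\n".join(lines)

# Characterization test: capture the output exactly as it is today.
# If any later refactoring changes this string, the blanket catches it.
expected = (
    "Invoice for Acme\n"
    "  Widget: $9.99\n"
    "  Gadget: $15.01\n"
    "Total: $25.00"
)
assert legacy_invoice_text("Acme", [("Widget", 9.99), ("Gadget", 15.01)]) == expected
```

The point is not that this output is “right” – only that it is what the system does now, which is exactly the guarantee a security-blanket test needs to provide.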

About Jimmy Bogard

I'm a technical architect with Headspring in Austin, TX. I focus on DDD, distributed systems, and any other acronym-centric design/architecture/methodology. I created AutoMapper and am a co-author of the ASP.NET MVC in Action books.
This entry was posted in Agile, TDD.
  • Nice.

    Sometimes I find that I need to write a lot of end-to-end tests to cover all the variations of a use case – usually different combinations of inputs.

    In these cases I will *often* find a good compromise in adding just one or two end-to-end tests for the use case, plus lots of unit tests to cover all the combinations and edge cases – simply because the unit tests are a lot faster.

    I think that aligns with the test pyramid philosophy.
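A minimal sketch of that compromise, using a hypothetical discount rule: the fast unit tests sweep the input combinations, leaving only one or two end-to-end tests (not shown) to exercise the full use case.

```python
# Hypothetical business rule under test (illustrative only).
def discount(is_member: bool, order_total: float) -> float:
    if order_total <= 0:
        return 0.0
    rate = 0.10 if is_member else 0.0  # members get 10%
    if order_total > 100:
        rate += 0.05                   # large orders get an extra 5%
    return round(order_total * rate, 2)

# Fast unit tests cover the combinations and edge cases.
cases = [
    (True, 50.0, 5.0),     # member, small order: 10%
    (True, 200.0, 30.0),   # member, large order: 10% + 5%
    (False, 200.0, 10.0),  # non-member, large order: 5% only
    (False, 50.0, 0.0),    # non-member, small order: nothing
    (True, 0.0, 0.0),      # edge case: empty order
]
for is_member, total, expected in cases:
    assert discount(is_member, total) == expected
```

Running dozens of variations like this takes milliseconds, where the same sweep through an end-to-end harness would be far slower – which is the whole argument of the pyramid.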

  • RobAshton

    And just like that…

  • I too have come a long way in my TDD journey, starting in probably the same place but perhaps ending in a different place than you. I think tests always add value (for new code, at least) but they need to be the right tests, done the right way. The reason for this is that I focus on the “design” part of TDD, not the “test” part. Writing tests encourages me to use good design because testing becomes painful if you have too much coupling between your classes. Testing doesn’t solve the problems, but it usually drives me in the right direction.

    I also write tests first because I feel that thinking about how I would know if I was successful in writing the code (and also what could possibly go wrong with it) is better than coding without consideration, or with consideration of only the happy path. In my mind, more thinking and less coding up front is a win. YMMV.

    Where I have “loosened” the reins is in what I test and how I test. Now I try to test for actual features (and error conditions) rather than try to drive the code line by line. I try really hard not to test the framework or constrain things that aren’t necessary to the feature, though they may be one way to implement it. For example, I use mocking, but I no longer force validation of calls unless they are critical to a feature. I’m much looser in how I let the mocks respond so that my tests don’t rely on a particular implementation of dependent calls.
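That looser mocking style can be sketched with Python’s `unittest.mock` (all names here are hypothetical): only the interaction that is critical to the feature gets verified, while the incidental dependency is a plain permissive mock with no expectations attached.

```python
from unittest.mock import Mock

# Hypothetical checkout: charging the card is critical to the feature;
# sending a receipt is an incidental implementation detail.
def checkout(gateway, notifier, amount):
    gateway.charge(amount)
    notifier.send_receipt(amount)  # incidental side effect
    return "ok"

gateway = Mock()
notifier = Mock()  # loose: no expectations set on how it is called

result = checkout(gateway, notifier, 42.0)

assert result == "ok"
# Verify only the call that is critical to the feature...
gateway.charge.assert_called_once_with(42.0)
# ...and deliberately assert nothing about the notifier, so the test
# doesn't couple itself to that particular implementation choice.
```

If `checkout` later batches its notifications or drops them entirely, this test keeps passing – which is the decoupling the comment above is describing.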

    The other thing that I do more of, and I should have been doing more all along, is refactoring within my tests.

    So, for me, fewer, more targeted tests with less coupling to particular implementations, but still defaulting to writing tests unless there is a compelling reason not to.
