Yet another reason to practice TDD
Things have been a little slow on the blog lately because this week I started a new job at a new company. I’m still getting a feel for things, and of course one of the first things I’m trying to get folks interested in is TDD. I have a usual list of advantages and points that I use to get across to folks the benefits of practicing TDD. Most of them center around the fact that TDD is primarily a design tool, and that the nice set of automated regression tests you get is just a pleasant side effect. TDD is not about testing!
But today, after looking at some existing code, I thought of another example that illustrates one way in which the “Test” part of TDD is very beneficial.
(For my fellow experienced TDD practitioners, this will be very obvious. So this is mainly for those who are still struggling with “why do I need TDD/Unit Testing?”…)
Non TDD/Unit Testing Approach
Let’s assume for a sec that you’re not using TDD, and not even TAD (Test-After Development). A typical set of steps to write a piece of code might be:
1. Look at the requirements/use case/user story to figure out what feature needs to be implemented
2. Use a modeling tool (or even better, a whiteboard) to design what you’re getting ready to code
3. Write the code as you think it should be (hopefully not generated, but that’s a whole ‘nother discussion)
4. Perform a series of manual steps to verify that your code works
5. Repeat steps 3 and 4 until your manual testing has verified the code works as you expect

Now, ask yourself a few questions:
- “What is my confidence level that changes to this code by me won’t introduce bugs?”
- “What is my confidence level that changes to this code by others won’t introduce bugs?”
- “How long is it going to take to test future changes in this code?”

Chances are, you have a fairly high confidence level that you won’t introduce bugs, since you wrote the code initially (although that’s faulty logic as well). But what if someone else needs to make changes to that code? You’re probably not all that confident now, because you know that verifying this piece of code means running through a series of manual steps (probably in several variations). And if you didn’t document those manual steps for anyone else, well, you can see where I’m going with this.
But let’s just say you have been a good little developer and you have documented the manual steps needed to verify that a particular piece of code works. Running through those manual steps over and over is not going to be fast, and it will quickly become tedious. And how often do you think folks are going to try to improve their design by refactoring if they have to run through a series of tedious manual steps just to verify they didn’t break anything?!? Probably never!
TAD (Test-After Development) / Standard Unit Testing Approach
Contrast that with an approach where you write unit tests after you implement the code, automating the steps needed to verify its behavior.
1. Look at the requirements/use case/user story to figure out what feature needs to be implemented
2. Use a modeling tool (or even better, a whiteboard) to design what you’re getting ready to code
3. Write the code as you think it should be (hopefully not generated, but that’s a whole ‘nother discussion)
4. Write unit test(s) that verify the code works as you expect
5. Fix your code if necessary and re-run your (hopefully fast) unit tests until everything is verified

Now this is definitely “better than nothing”, as they say. But you’ll probably find that writing simple unit tests for code that’s already been written turns out not to be so simple in a lot of cases. The reason is that the code probably wasn’t written with testability in mind and is tightly coupled to the rest of the system. That usually means really complex setup code is needed just to get your unit tests to run correctly, and at that point your tests are probably nothing short of end-to-end integration tests. Integration tests definitely have their place, but not during code design.
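To make that concrete, here’s a minimal sketch (in Python, with made-up names, since the language doesn’t matter for the point) of what test-after often runs into: the class constructs its own dependency, so the test has no seam to isolate the unit under test.

```python
import unittest

# Hypothetical example of code written without testability in mind:
# OrderProcessor hard-wires its own dependency, so the "unit" test
# exercises everything the class touches.

class SalesTaxService:
    def rate_for(self, state):
        # Imagine this hitting a real web service or database.
        return 0.07

class OrderProcessor:
    def __init__(self):
        self.tax_service = SalesTaxService()  # dependency created internally

    def total(self, subtotal, state):
        return subtotal * (1 + self.tax_service.rate_for(state))

class OrderProcessorTests(unittest.TestCase):
    def test_total_includes_sales_tax(self):
        processor = OrderProcessor()
        # No way to substitute the tax service from here, so this test
        # quietly depends on whatever SalesTaxService actually does.
        self.assertAlmostEqual(processor.total(100.0, "TN"), 107.0)

if __name__ == "__main__":
    unittest.main()
```

If `SalesTaxService` really did hit a web service, this “unit” test would be slow and fragile, an integration test in disguise.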
But this is at least better in that you have a set of automated tests that can be run (hopefully at any time) to verify the code still works after any changes.
TDD (Test-Driven Design/Development) Approach
As you can probably guess, this is the approach I prefer for all of my development.
1. Look at the requirements/use case/user story to figure out what feature needs to be implemented
2. Use a whiteboard (if necessary) to get a very rough idea of the interaction the new code will need with the rest of the system
3. Write a unit test (think executable specification), mocking out any dependencies, if necessary
4. Get the test to compile and pass
5. Refactor to remove duplication and improve design
6. Repeat steps 3-5 until the feature is implemented

I usually find that making changes to code that was implemented using TDD is a joy. The tests are likely to run very fast, since each one exercises only a small block of code without hitting a database or a service, and the code’s coupling to other parts of the system is probably pretty low. Now my confidence level in making changes is very high, because I have that safety net of executable specifications verifying the feature works as expected. And even better, they are repeatable and can be run at any time.
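Here’s the same hypothetical `OrderProcessor` from the earlier sketch, reworked the way test-first tends to push you. Because the spec comes first, the dependency ends up injected, and the test can mock it out (using Python’s `unittest.mock` here as one illustration):

```python
import unittest
from unittest import mock

# Hypothetical sketch: the spec is written first, which pushes
# OrderProcessor toward accepting its dependency instead of creating it.

class OrderProcessor:
    def __init__(self, tax_service):
        self.tax_service = tax_service  # injected, so tests have a seam

    def total(self, subtotal, state):
        return subtotal * (1 + self.tax_service.rate_for(state))

class OrderProcessorSpecs(unittest.TestCase):
    def test_total_includes_sales_tax(self):
        # The real tax service (a slow web service, say) is mocked out,
        # so this spec runs fast and exercises only OrderProcessor.
        tax_service = mock.Mock()
        tax_service.rate_for.return_value = 0.07

        processor = OrderProcessor(tax_service)

        self.assertAlmostEqual(processor.total(100.0, "TN"), 107.0)
        tax_service.rate_for.assert_called_once_with("TN")

if __name__ == "__main__":
    unittest.main()
```

Notice that the design benefit falls out of the workflow: the injected seam exists because the test demanded it, not because anyone remembered a guideline.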
Conclusion
Use those CPU cycles for what they’re good at… automation! One of the biggest time savers is automating as much of your tedious work as possible. Testing and deployment are prime candidates for automation.
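For instance, sticking with the Python sketches above (and assuming the tests live in a hypothetical `tests/` directory), a few lines give you a one-command, CI-friendly way to run the whole suite any time:

```python
# Minimal sketch of "run everything, any time": unittest's discovery
# finds and runs every test_*.py file under the given directory.
import sys
import unittest

suite = unittest.defaultTestLoader.discover("tests")
result = unittest.TextTestRunner(verbosity=2).run(suite)
# Non-zero exit code lets a build/deploy script fail fast on broken tests.
sys.exit(0 if result.wasSuccessful() else 1)
```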
(I realize that my duplication of the list of steps in each section of this post, with slight modifications, clearly violates the DRY principle. Perhaps I could have introduced the Template Method pattern to consolidate the common items or a sprinkle of the Strategy pat… ok, I’m getting carried away… hehe…)