A testing survey on a large project


A while back, I noted that a mature team doesn’t celebrate test count milestones any more, and I threw out some numbers on our tests.  Those numbers have grown since then:

  • 2591 unit tests
  • 681 integration tests
  • 280 UI/acceptance/regression tests

Over the past nine months, our testing strategy has evolved quite a bit from where we started.  On past teams, I never got close to these numbers.  The most automated tests on any team I’d worked with was around 1100 or so, and that was unit and integration tests combined.  But the numbers tell only a small part of the story of our team’s journey on this project, and I’d like to look back at some of the more important lessons we’ve learned.

One of the first hard lessons I learned was to keep a consistent testing style.  When I joined the team at about sprint 5, I had been writing context/spec style tests for about a year.  The existing team of about a half-dozen developers hadn’t.  Continuing to write tests in my own style caused friction early on, just as it would have if I’d chosen a different coding style or naming convention.  We settled on the normal test-class-per-class style, breaking out to test-class-per-fixture, per-feature, and even per-area as needed, and let the pain of our tests be our guide.  We still practiced “when…should”, but expressed it in the name of a single test instead (Should blah blah blah When blah blah blah).  Switching testing styles isn’t something to be taken lightly, especially in the middle of a project.  Context/spec is a style I still use personally, but I shouldn’t have dropped the team’s existing style without everyone agreeing on the change.
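As a rough illustration (the domain types, test names, and even the choice of NUnit here are all invented; our real tests looked different), a test-class-per-class fixture with “Should…When” test names might look something like this:

    using NUnit.Framework;

    // Hypothetical domain types, just to keep the example self-contained.
    public class Order
    {
        public decimal Subtotal { get; set; }
    }

    public class OrderCalculator
    {
        // 10% discount on orders of 100 or more.
        public decimal CalculateTotal(Order order)
        {
            return order.Subtotal >= 100m ? order.Subtotal * 0.9m : order.Subtotal;
        }
    }

    // One test class per production class, with each test named in the
    // "Should ... when ..." form rather than a nested context/spec hierarchy.
    [TestFixture]
    public class OrderCalculatorTests
    {
        [Test]
        public void Should_apply_no_discount_when_the_subtotal_is_below_the_threshold()
        {
            var total = new OrderCalculator().CalculateTotal(new Order { Subtotal = 50m });

            Assert.AreEqual(50m, total);
        }

        [Test]
        public void Should_apply_the_volume_discount_when_the_subtotal_meets_the_threshold()
        {
            var total = new OrderCalculator().CalculateTotal(new Order { Subtotal = 500m });

            Assert.AreEqual(450m, total);
        }
    }

The point isn’t the particular assertions; it’s that anyone on the team can find the tests for a class by name, and the “Should…When” sentence carries the same intent the context/spec nesting used to.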

For us, the line between “unit” and “integration” tests is simple: unit tests are fast and test one thing.  Integration tests can be slow unit tests (such as tests against a repository) or larger, more involved tests.  The next hard lesson we learned was to balance black-box, end-to-end integration tests with unit tests for maximum coverage.  I’m not referring to code coverage, but rather the kind of coverage that enables large-scale refactorings.  We found that whenever we made a large-scale refactoring, we wound up tossing quite a few unit tests along the way.  With large-scale end-to-end tests in place, we could take on bigger refactorings; without that safety net, we introduced regression bugs without really knowing it.  These end-to-end tests don’t go through the UI, but start one layer right below it, simulating a UI message all the way down and all the way back.  We don’t try to cover every single scenario, but it’s scary to change a bunch of plumbing without a black-box, high-level test backing us up.
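To make the shape of those tests concrete, here is a minimal sketch.  Every type in it is invented for illustration, and an in-memory store stands in for the real database and container wiring just to keep the example self-contained; the real tests resolve the actual handlers and hit real infrastructure.  The test starts from the same message the UI would post, runs it through the layer just below the UI, and asserts on what comes back out:

    using System.Collections.Generic;
    using System.Linq;
    using NUnit.Framework;

    // Hypothetical message the UI would post down to the application layer.
    public class SubmitOrderForm
    {
        public string CustomerName { get; set; }
        public decimal Amount { get; set; }
    }

    public class SubmittedOrder
    {
        public string CustomerName { get; set; }
        public decimal Amount { get; set; }
    }

    // In-memory stand-in for the persistence the real test would exercise.
    public class OrderStore
    {
        public List<SubmittedOrder> Orders = new List<SubmittedOrder>();
    }

    // The same handler the controller would hand the posted form to.
    public class SubmitOrderHandler
    {
        private readonly OrderStore _store;

        public SubmitOrderHandler(OrderStore store)
        {
            _store = store;
        }

        public void Handle(SubmitOrderForm form)
        {
            _store.Orders.Add(new SubmittedOrder { CustomerName = form.CustomerName, Amount = form.Amount });
        }
    }

    [TestFixture]
    public class SubmitOrderEndToEndTests
    {
        [Test]
        public void Should_record_an_order_when_the_form_is_submitted_from_just_below_the_UI()
        {
            var store = new OrderStore();
            var handler = new SubmitOrderHandler(store);

            // Start from the same message the UI would send down...
            handler.Handle(new SubmitOrderForm { CustomerName = "ACME", Amount = 100m });

            // ...and assert on what comes back out the other side.
            var saved = store.Orders.Single(o => o.CustomerName == "ACME");
            Assert.AreEqual(100m, saved.Amount);
        }
    }

Because the test only knows about the message going in and the result coming back, everything in between can be reshaped without the test changing, which is exactly the safety net a large-scale refactoring needs.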

As our system grew larger, we saw the importance of retrofitting designs throughout the system to keep the architecture consistent.  As our test coverage grew, we would find four ways of doing the same thing.  Our chief architect at Headspring now keeps a list of “design consensus and pain points” to ensure we don’t head in five different directions.  Without our end-to-end integration tests, we wouldn’t be able to do any architectural retrofitting.  Without the unit tests in place, we would lose the insight into duplicated behavior.

Early on, our UI testing started out as something of an experiment, as none of us had done any large-scale UI testing before.  We had created some UI tests here and there, but never made them part of our normal development process.  With the guidance of John Heintz, we were able to slowly build a repeatable, solid UI testing strategy.  What we eventually found was that a solid UI testing strategy is essential for a solid application.  There are some bugs that unit and integration testing alone will never find, and we didn’t want to shell out for a dedicated tester when good ones are so hard to find.  Don’t get me wrong: manual exploratory testing is still critical, but we can’t rely on a human to run through a suite of regression tests, no matter what continent they’re on.

As we added more and more end-to-end integration tests and UI tests, it became clear that design for testability applies at every layer and level.  From our low-level entity unit tests to integration tests and on up to UI tests, software is not inherently testable; it has to be designed that way.  At the unit level, our team had years and years of experience designing for testability.  At the UI layer, hardly any at all.  There we had to employ techniques such as sharing presentation models with the UI tests, adding descriptive IDs and classes for selecting sections, and adding tools like NAAK to automate Section 508 and XHTML compliance.  None of these things existed in our application before we started, but they are now essential for maintainable UI tests.  Every data-driven UI element is surrounded by a SPAN tag whose class is generated from an expression.  Every data-driven form element has a selector driven from an expression.  From all of this we learned the importance of designing test hooks into your software to enable testing later.  Design for testability can’t happen after the fact, either; it has to be built in.
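One way to picture the expression-driven hooks (this is a rough sketch of the idea, not our actual helpers, and the model types are hypothetical) is a small helper that derives a stable CSS class from a presentation-model expression, so the view that renders the SPAN and the UI test that selects it both compute the same class from the same expression:

    using System;
    using System.Collections.Generic;
    using System.Linq.Expressions;

    public static class TestHooks
    {
        // Turns m => m.Customer.Name into "Customer_Name".
        public static string ClassFor<TModel, TValue>(Expression<Func<TModel, TValue>> expression)
        {
            var parts = new List<string>();
            var member = expression.Body as MemberExpression;
            while (member != null)
            {
                parts.Insert(0, member.Member.Name);
                member = member.Expression as MemberExpression;
            }
            return string.Join("_", parts);
        }

        // In the view, wrap a data-driven value in a SPAN carrying that class.
        public static string SpanFor<TModel, TValue>(TModel model, Expression<Func<TModel, TValue>> expression)
        {
            var value = expression.Compile()(model);
            return string.Format("<span class=\"{0}\">{1}</span>", ClassFor(expression), value);
        }
    }

    // Hypothetical presentation models, just to show the shape of a call.
    public class OrderSummaryModel
    {
        public CustomerModel Customer { get; set; }
    }

    public class CustomerModel
    {
        public string Name { get; set; }
    }

A UI test can then build its selector as "span." + TestHooks.ClassFor((OrderSummaryModel m) => m.Customer.Name), so renaming a model property breaks the view and the test at compile time instead of leaving a silently stale selector behind.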

Finally, we’ve found that a solid testing strategy does not mean defects don’t exist.  Software is complex, far more complex than anything we can keep in our heads, and we don’t always understand the full implications and side-effects of a change.  But with automation in place, we can be sure that any issue we do find never happens again.  With the investment in testing and automation, the cost of a bug is low enough that we know our project isn’t sunk if one is found: from the time of a commit to the time all tests pass (unit, integration, and UI) and our build is deployed is only around an hour.  Bugs are still much more expensive than features, but I no longer live with the fear and dread of “will my changes break something?” that I learned to accept on past projects and teams.

We’re about halfway through this project, and I’m positive our testing strategy will continue to evolve as our technologies change and our codebase grows.  As an agile team, we ensure that our process includes regular and meaningful feedback, giving us confidence that we will deliver a solid product.
