More on Quality


I started typing a comment to John Teague’s post about Creating a Culture of Quality, but it got a little long, so I decided to make it a post. If you would be so kind, please read John’s post first (linked in the previous sentence) and then proceed with this one.

Here are some of my additional thoughts:

  1. There are three overarching principles that form the basis for everything John said and everything I’m about to say: 1.) fail fast, 2.) keep a quick feedback cycle, and 3.) automate it.
  2. Fail fast – To John’s point (‘Make it Easy’), failing fast is crucial to avoid a long delay between a change and its corresponding failure. If something is wrong and it’s detected, fail as soon as possible so it can be fixed. Don’t allow problems to persist.
  3. Quick feedback cycle – Whether it’s a success or a failure, I need to know very fast (minutes, not hours). If your build/test cycle is necessarily long, consider breaking it up: run the fastest, most critical tests first, and do the lengthy setup and tests as part of a cascading build that runs only a few times a day.
  4. Automate it – Whatever you’re doing for quality, automate it to the maximum extent possible. Leave the team and any future maintainers with automated paths to success. Eliminate magic and the bubblegum/toothpicks/duct tape in your process ASAP. These things will continue to creep in during your project, so be vigilant: constantly scan for them and eliminate them every time they appear.

  5. Automate as much as possible. If you’re doing anything manually more than a few times, you’re a.) wasting time and b.) introducing variables and instability into a process that could and should be automated. Even if you think something will be hard to automate, if you plan on doing it manually more than a few times, I’ve observed that it’s always cheaper to bite the bullet and automate it. To anyone who doubts whether quality-first practices (i.e. TDD, BDD, etc.) slow down a project: that’s premature optimization, because I can guarantee that on most projects MUCH more time is being wasted in other areas than would ever be affected by TDD/BDD/etc. The interesting thing, though, is that you probably don’t realize how many of the things you’re currently doing manually could be automated. Until I was on a team with someone who was automation-infected (Jeremy Miller), I didn’t realize just how much time I had been wasting on things I didn’t think were automatable! (Side note: I plan on blogging more about real-world examples of this in the next few months, so if you’re thinking ‘this guy is BS’, please check back later.)

    • Add tests for conventions and tribal-wisdom sorts of things. If you hear team members saying “Everyone: please make sure that none of your controllers has a method called Floogle(), because that will mess up XYZ”, that should be a big clue. Sure, it’s a contrived example, but you know what I mean. Add tests to enforce that convention. When new developers come onto the team and don’t yet have the tribal knowledge, they’ll be protected from harm this way. When a convention test breaks, you have to choose: change the convention or fix your code. Changing the convention requires everyone on the team to be aware and in agreement. For your reference, Glenn Block was working on a project with Ayende, and they both wrote about a situation where they did this and it worked out really well; you can read Glenn’s post and Ayende’s post at their respective links.

    • John Teague mentioned this in his post, but I want to reiterate: don’t get too focused on coverage. Tests should enforce that the code does what it should and, in most cases, not necessarily that everything is perfect. Use tests to help you flush out design issues. If tests are hard to write, that’s a design smell. If lots of tests break when you change a small aspect of your design, that’s either a design smell or, more likely, a smell that your tests are too brittle and too envious of the code. Tests should be there to ensure that the basic requirements and acceptance criteria are met, not to make sure that every line in the code actually executes (a common TDD beginner mistake — I did it a lot 😉 ). This is why I tend to shy away from code coverage as a metric: it tends to encourage code envy from your tests.
    • No one leaves the building without having committed their code for the day. If the code’s not ready to integrate, create a branch and commit it there. Don’t walk out with your laptop and then drop it and lose a day’s worth of work. Frequent check-ins are a must, must, must!
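
The fast-first, cascading build described in point 3 can be sketched in a few lines. This is only an illustrative sketch, not any particular build tool’s API: the stage names and check functions are hypothetical placeholders for your real suites, and the point is simply that the cheap checks run (and fail fast) before the expensive ones are ever started.

```python
# Sketch of a two-stage, fail-fast build: cheap checks run first, and the
# expensive suite runs only when the fast stage is green. All names here
# are hypothetical placeholders, not a real build tool's API.

def run_stage(name, checks):
    """Run each check in order; fail fast by stopping at the first failure."""
    for check in checks:
        if not check():
            print(f"{name}: FAILED in {check.__name__}")
            return False
    print(f"{name}: passed")
    return True

def run_build(fast_checks, slow_checks):
    # Fast stage gives feedback in minutes, not hours.
    if not run_stage("fast stage", fast_checks):
        return False  # don't even start the slow stage
    # In a real setup the slow stage would be a separate, scheduled
    # cascading build (a few times a day); it's run inline here only
    # for illustration.
    return run_stage("slow stage", slow_checks)
```

In practice the “slow stage” would be triggered by your CI server on a schedule rather than inline, but the ordering principle is the same.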
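
The convention-test idea from the bullet above can be made concrete with a small sketch. This assumes nothing about your real codebase: `Controller` and `Floogle` are stand-ins from the contrived example, and the subclass walk is just one simple way to discover the classes you want to check.

```python
# Minimal convention-test sketch: assert that no controller defines a
# forbidden method. "Controller" and "Floogle" are hypothetical names
# taken from the contrived example, not a real framework's API.

FORBIDDEN_METHOD = "Floogle"

class Controller:
    """Hypothetical base class that all controllers inherit from."""

def all_controllers():
    # A real suite might scan your assemblies/modules; here we simply
    # walk the direct subclasses of the base class.
    return Controller.__subclasses__()

def test_no_controller_defines_floogle():
    offenders = [c.__name__ for c in all_controllers()
                 if FORBIDDEN_METHOD in vars(c)]
    assert not offenders, (
        f"These controllers define {FORBIDDEN_METHOD}(), "
        f"which breaks XYZ: {offenders}")
```

When a new developer adds a `Floogle()` method without knowing the tribal rule, this test fails with a message naming the offending class, which is exactly the fast, automated feedback the rest of this post argues for.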