I don’t trust me


Where I’m Coming From

[Image: HAL 9000 - I can't allow you to do that, Dave.]

I’ve learned that, in general, I can’t trust humans’ judgement, knowledge, or experience when working on software projects, among other things. I’m not saying that humans are bad; I’m just saying that humans are creatures and subject to mistakes and failure. Quite prone to them, as a matter of fact.

For the past couple of years I have worked in predominantly old-style software development modes. I have seen success in spite of everything: in spite of the non-developers on the team, in spite of the politics, in spite of the command/control management, in spite of the waterfall-esque project structure. Despite all these things, we were usually able to get something out and help the customer somewhat, but not nearly as effectively as we could have with a highly motivated team, focused on a goal, working towards total success with minimal interference in the creative process known as ‘software development’.

This frustrated me, and it has made clear, in my mind, the value of the processes I preach/endorse and attempt to practice in this environment (occasionally strides are made, but they’re hard to sustain).

Managing Human Weaknesses

Ultimately, what I’ve come to realize is that software development is really all about the people. With good people and processes that enable them to work effectively, you will have success to one degree or another (usually a good degree). So the goal is really to make sure that everyone on the team (including non-developers) is properly motivated and shares the goal of the project. If not, they shouldn’t be on the team, or the project manager (or someone in a similar role) needs to work with that person and persuade them to cooperate (find out why they’re not cooperating and work with them to resolve the issues). Once you have a good team with everyone interested in accomplishing the goal, the next task is to implement processes that appeal to their higher nature and set them up for success rather than crush them with the threat of looming failure.

There are many processes out there, and I have found the following to be very effective in my own practice and through observing other teams who have been practicing them. The only failures I have seen are when the team is not properly motivated, has conflicting goals, or has personality issues that are not properly managed by the manager.

These processes (detailed below) can be summed up with this statement:

Setting Yourself up for Success

I don’t trust the customer

I don’t trust the customer or the target consumer for the software we’re building. I don’t trust them to know thoroughly what their problem is. I don’t trust them to be able to communicate effectively to me what picture of a solution they have in their mind. I don’t trust them to know, beforehand, everything that they would need to have their problem solved to complete satisfaction. They will change their minds, remember things they missed mid-way through the project, realize that what they asked for earlier on was wrong and needs to be corrected, and so on.

So we put in processes to help achieve better, more structured communication (but not too much, and the kind of structure that facilitates dialogue and interaction rather than lengthy 500-page Word documents). We talk with the customer more often (every few weeks) and show them what we’ve got so far, to help them coalesce in their minds what it is we’re all trying to accomplish and what steps we need to take to finally accomplish it. Finally, we (the team) hold ourselves accountable by keeping tabs on what we understood from the customer, what we promised them, and how long we said it would take us to accomplish it. We help achieve better accuracy by promising smaller things and promising them more often, since bigger promises over longer periods of time virtually guarantee failure.

I don’t trust the team, in general

I don’t trust the team, in general (including myself). So we have daily stand-up meetings to keep each other apprised of our situations and to keep tabs on any problems that may be brewing. It’s also a chance to allow the manager to get a feel for what roadblocks are hampering development (including non-technical ones like personnel issues).

I don’t trust the team (including myself) to deliver on what we promise, so we break promises into more manageable chunks and estimate them the best we can. After the chunks are done, we review our promises and estimates and grade ourselves on how well we did. We use this information to get better at promising and estimating in the future, and also to help us plan how close we are to being ‘finished’ based on our accuracy.
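As a rough sketch of the kind of bookkeeping this implies (the numbers, the class name, and the C# here are purely illustrative, not from any particular project), you can grade past estimates against actuals and use the resulting ratio to project the remaining work:

    using System;
    using System.Linq;

    // Illustrative only: compare past estimates to actuals, then use the
    // resulting accuracy factor to project the remaining, still-estimated work.
    public static class EstimationAccuracy
    {
        public static void Main()
        {
            // Hypothetical completed chunks: (estimated days, actual days).
            var completed = new[]
            {
                (Estimated: 3.0, Actual: 4.0),
                (Estimated: 5.0, Actual: 6.5),
                (Estimated: 2.0, Actual: 2.0),
            };

            // Greater than 1.0 means we tend to under-estimate; less than 1.0 means we over-estimate.
            double accuracyFactor = completed.Sum(c => c.Actual) / completed.Sum(c => c.Estimated);

            double remainingEstimatedDays = 20.0;
            double projectedRemainingDays = remainingEstimatedDays * accuracyFactor;

            Console.WriteLine("Accuracy factor: {0:F2}", accuracyFactor);                         // 1.25
            Console.WriteLine("Projected remaining effort: {0:F1} days", projectedRemainingDays); // 25.0
        }
    }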

I don’t trust the developers

I don’t trust the developers (including myself). I don’t trust them to:

  • make sure they don’t lose their work
  • not overwrite each other’s work
  • not make changes that break the build and cause a work stoppage among the developers
  • be able to manage the complexity of building new software while maintaining an existing, production branch of the software

So I use source control/revision control/version control (whatever you want to call it). I, personally, have found Subversion to be most effective at addressing all of these problems, but there are other similar products that are also effective. I have found the Microsoft Visual SourceSafe product to be inadequate at addressing the above concerns, and I would recommend not using Microsoft VSS for a team development project like the one I have been describing.

I don’t trust the developers, including myself, to write code:

  • that, first and foremost, accomplishes the acceptance criteria
  • that is well tested and has good code coverage (where ‘good’ is subjective and relative)
  • that works as expected with all the other code in the system
  • that conforms to the coding standards/policies we have set as a team
  • that doesn’t break the build and cause a work stoppage among all the other developers
  • that works on a system that is not a developer workstation (no more ‘It works on my box!’)

So I use an automated build process with continuous integration. The build process compiles the code on a non-developer workstation/server that doesn’t have all the developer tools on it (only the bare minimum necessary to compile). The build process then executes the tests (unit, integration, acceptance, etc.) to ensure the fitness and working condition of the software, as well as its cohesiveness as an entire unit. The build process then runs code coverage, complexity, and policy analysis to determine whether the code meets the quality standards we have set for ourselves as a team. If any of these steps does not meet our high standards, the build fails and our continuous integration software alerts us to this fact. Personally, I have used NAnt and MSBuild as build tools, CruiseControl.NET as the continuous integration software, FxCop as the policy analysis tool, and NCover as the code coverage analysis tool. I have heard good things about Rake and FinalBuilder as build tools and JetBrains’ TeamCity as a continuous integration server.

I don’t trust me or any other developer, individually

I don’t trust my ability to estimate, so I track my estimation accuracy as the project progresses. I don’t trust my ability to understand the requirements placed before me, so I encourage the customer not to write big, long requirements specifications, but rather to discuss the requirements with me using conversation starters like User Stories. I get a greater understanding of the problem and the desired solution (including the technical component of that solution) and participate in defining the specification for that requirement WITH the customer. We then develop the actual specification and codify it in documentation, the code, and any other necessary artifacts (e.g. auditor documentation for later review).

I don’t trust my ability to actually accomplish the requirement, even if I understand it completely. But I know that I am not likely to ever understand any requirement completely (nor is the customer likely to understand it completely himself), so I make sure to design my code such that it can be easily tested and easily refactored later. I make sure that I don’t code too many of my assumptions in one place, because they’ll be harder to unravel later. I also write lots of tests that assert my assumptions and understanding of the problem. I write integration tests to ensure that the code I write works well within the entire system (and not just the specific unit in which I’m working). I write acceptance tests at a higher level that serve as a customer-driven sanity check of my code, one which isn’t concerned with how the code is implemented as much as with the end result of its function (i.e. when I say ‘debit from account’, it really debits from the account).
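To make that ‘debit from account’ idea concrete, here is a minimal sketch of what such tests might look like, assuming a C#/NUnit codebase (the Account class here is hypothetical, included only so the example stands on its own):

    using System;
    using NUnit.Framework;

    // Hypothetical domain class, written only to keep the sketch self-contained.
    public class Account
    {
        public decimal Balance { get; private set; }

        public Account(decimal openingBalance)
        {
            Balance = openingBalance;
        }

        public void Debit(decimal amount)
        {
            if (amount > Balance)
                throw new InvalidOperationException("Insufficient funds.");
            Balance -= amount;
        }
    }

    [TestFixture]
    public class AccountDebitTests
    {
        // Asserts my understanding of the requirement: debiting really reduces the balance.
        [Test]
        public void Debit_ReducesTheBalance_ByTheRequestedAmount()
        {
            var account = new Account(100.00m);

            account.Debit(25.00m);

            Assert.AreEqual(75.00m, account.Balance);
        }

        // Asserts an assumption that would otherwise live only in my head:
        // overdrafts are rejected rather than silently allowed.
        [Test]
        public void Debit_OfMoreThanTheBalance_IsRejected()
        {
            var account = new Account(10.00m);

            Assert.Throws<InvalidOperationException>(() => account.Debit(50.00m));
        }
    }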

I don’t trust our tests

I don’t trust my testing discipline enough to ensure that I will achieve acceptable coverage of the code unit and cover all the edge cases and any other important scenarios worth testing. I don’t trust my discipline to avoid writing a bunch of code that isn’t directly necessary to accomplishing the requirement. I don’t trust myself not to go code-happy and write a bunch of unnecessary code just for the pure joy of writing ‘cool code’.

So I write my tests first. I write the tests first while I’m fresh and not burnt out on writing code. I write the tests to get some of the non-interesting, non-cool stuff out of the way first. I do this to ensure that when I get to the ‘fun’ coding, I can feel comfortable knowing that I’ve boxed my creativity and directed it into the areas where I want it to go and where it will provide the most value for the client. Now I don’t have the dark specter of having to come back and test my code after the fact, which is not very enjoyable. I also know that my code is inherently more testable because it was written to be testable in the first place and necessarily must be so. Even when trying to write code with tests in mind, I have found that I can never quite do it 100%, and, when writing the tests afterwards, I end up having to refactor the main code a little to get it to work right with the tests. So writing the tests first makes my life a lot easier. It also helps me ensure that I’ve met all the requirements and achieved quality up front instead of afterwards.
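As a rough illustration of that test-first rhythm (again assuming C# and NUnit, with hypothetical names), the tests below are written first and fail until just enough production code exists to satisfy them:

    using NUnit.Framework;

    [TestFixture]
    public class DiscountCalculatorTests
    {
        // Step 1: written first, while I'm fresh. It fails (or won't even
        // compile) until the calculator below exists and handles this case.
        [Test]
        public void OrdersOverOneHundredDollars_GetATenPercentDiscount()
        {
            var calculator = new DiscountCalculator();

            decimal discount = calculator.DiscountFor(150.00m);

            Assert.AreEqual(15.00m, discount);
        }

        [Test]
        public void SmallerOrders_GetNoDiscount()
        {
            var calculator = new DiscountCalculator();

            Assert.AreEqual(0.00m, calculator.DiscountFor(99.00m));
        }
    }

    // Step 2: only enough code to make the tests above pass; anything fancier
    // would be 'cool code' that no test (and no requirement) is asking for.
    public class DiscountCalculator
    {
        public decimal DiscountFor(decimal orderTotal)
        {
            return orderTotal > 100.00m ? orderTotal * 0.10m : 0.00m;
        }
    }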

Even after all that, I still don’t trust my tests

Doing the tests first, up front, helps a lot, but it still requires some discipline and creativity to come up with test scenarios. The temptation is to take it easy and not test every case you can think of. Or maybe one day you’re tired, after a big lunch, and you just don’t feel like it. The project will suffer.

So I pair with another developer. We keep each other honest. We work in a friendly but adversarial/competitive mode, where each of us writes tests for the other developer and asks them to implement the code to pass the test. This keeps things interesting and adds some incentive to write good tests as well as good code.

Finally, I don’t trust the testers or the code release itself

I don’t trust the testers to test everything they should, properly. I don’t trust that they won’t also succumb to human nature and fail to test everything every time. So I work with them to automate things to the maximum extent possible and give them the tools they need to set themselves up for success, like we have on the development side. Monkey testing (banging on the keyboard or mouse) is tedious and soul-sucking; few people in this world enjoy it. So we try to minimize monkey testing and, instead, give the testers tools to automate things and add their own test cases into the automated suite. Then they only have to monkey test as a last resort, for some very specific, complicated case. These are usually somewhat interesting to set up and discover (but not always), so this helps keep the testers focused and interested.
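One way I have seen this work is to give the testers a data-driven test, where adding a new case is just adding a row; the sketch below assumes NUnit’s TestCase attribute and a hypothetical shipping calculator:

    using NUnit.Framework;

    // Hypothetical class under test, included only to keep the sketch self-contained.
    public class ShippingCalculator
    {
        public double RateFor(double weightKg, string destination)
        {
            double baseRate = destination == "US" ? 4.00 : 8.00;
            return baseRate + weightKg * 1.00;
        }
    }

    [TestFixture]
    public class ShippingRateTests
    {
        // Testers add new rows here (weight, destination, expected rate) straight
        // from the agreed rate card, without touching any of the plumbing below.
        [TestCase(1.0, "US", 5.00)]
        [TestCase(1.0, "CA", 9.00)]
        [TestCase(25.0, "US", 29.00)]
        public void ShippingRate_MatchesTheAgreedRateCard(double weightKg, string destination, double expectedRate)
        {
            var calculator = new ShippingCalculator();

            Assert.AreEqual(expectedRate, calculator.RateFor(weightKg, destination), 0.001);
        }
    }

The plumbing stays with the developers; the rows become something the testers own and grow over time.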

Once the testers are done and the software is ready for release, I still don’t trust that it’s ready, or that any human involved at this point won’t introduce a problem at the last minute. So I make sure that releases are automated, and I test this release process with the testers and other team members so we can trust the automated release packaging system. For this, I usually use NAnt or MSBuild to automate the final packaging of the build, the documentation, licenses, or any other ‘final build’ type tasks that need to be done. I try to avoid doing anything after the testers, but, at least in my case, I have never been able to avoid having to do SOMETHING to gather up the final package for pressing onto a CD or pushing out to a download server and sending out notifications to customers.
