Unstated Requirements

I was formulating some thoughts for a whitepaper today and found myself comparing two different ways to accomplish a particular task. Both ways accomplished the task, but one of them was clearly better. I pondered why, exactly, I thought it was better.

I’ve talked before about “good” or “better” software design being an objective reality and not merely a subjective one. I guess this blog post is merely an addendum to that post, because there was one specific point that popped into my mind and which I wanted to share with you, dear reader: “Good” or “better” solutions solve not just the stated requirements (anyone can do that); they also solve the “unstated requirements.”

But you should have known that!

In every stated requirement (no matter how detailed that “Requirements Specification Document” is), there is at least one, if not many, unstated requirements. Some of these are self-evident, but some are not. Some, if not conceived of by the developer, product manager, or anyone else beforehand, will come back to haunt or thwart the product team going forward. One example of an unstated requirement that is not always self-evident is “and this requirement should continue working even in future versions of the software.” You might think that this is self-evident, but it is not. I say it’s not because many software development teams do not take precautions to ensure that a given requirement remains intact as the rest of the system continues changing. Some call these problems “regression issues” and take steps to mitigate them, but they may not be doing enough.

“Professional”

Where am I going with all this? Looking back at the history of most of my philosophical blog posts (the ones that aren’t about solving a particular coding or architecture problem), they all seem to come from the same presupposition: that if you’re a “professional,” you’ll want to do these things. It’s a backhanded insult to those developers who don’t do those things, because it implies that they are not “professional.” That was probably my intention in most of those posts, I’m afraid to admit, but I did so because I really believed (and still do) that those things are extremely important for good software. Each post was an attempt to illustrate a particular facet of what I consider responsible, “professional” software engineering. But I was never able, in any of those posts, to elucidate how they all fit together in a consistent philosophy. I could argue why practice X or principle Y was important and which benefits it offered, but I couldn’t really tie them all together into a coherent whole.

I think I’ve finally figured it out…

The Primary Unstated Requirement: It works and keeps working

It seems simple. It seems stupid to have to say it, really. But it is surprisingly (and frighteningly) easy to forget this simple unstated requirement. Don’t believe me? Some software teams don’t have testers at all. Some developers don’t consider automated testing one of their top priorities (as important as implementation itself – for what good is implementing something if you can’t verify it actually works?). They bang out features, give them a quick once-over, and then mark them “done.” In the past, I might have characterized this behavior as “sloppy” or “unprofessional,” or at the least “hasty” and “cowboy coding.” To be sure, I’m as guilty of this behavior as anyone; I’m not claiming moral high ground here. A drunk can say that getting drunk is wrong and still be speaking the truth. But now, instead of calling people names (i.e. “sloppy”), I can clearly explain why this behavior is contrary to good practice: because it fails to meet the unstated requirements of “it works” and “it keeps working.”

Prove It

Without proper testing (both by a QA person and by automated tests at multiple levels [unit, integration, acceptance]), you cannot honestly say that the “it works” requirement is fulfilled. You must prove it.

Without a proper bevy of automated tests, or at the very least a clear manual test plan (one that actually gets executed by a trained QA professional – that point is important!), you cannot honestly say that the “it keeps working” requirement is fulfilled. You must prove it.

What’s more, you must KEEP proving it.  Just because it worked yesterday, or even before the last commit to source control, doesn’t mean it works NOW. Prove it.
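
To make “prove it” concrete, here is a minimal sketch of what an executable proof might look like at the unit level. The feature and names (a hypothetical DiscountCalculator) are invented purely for illustration; the point is that the expectation is code that can be re-run on every commit. This is NUnit-style C#:

```csharp
using NUnit.Framework;

// Hypothetical feature, used only for illustration.
public class DiscountCalculator
{
    // Stated requirement: orders over $100 get a 10% discount.
    public decimal DiscountFor(decimal orderTotal)
    {
        return orderTotal > 100m ? orderTotal * 0.10m : 0m;
    }
}

[TestFixture]
public class DiscountCalculatorTests
{
    [Test]
    public void Orders_over_100_dollars_get_a_ten_percent_discount()
    {
        var calculator = new DiscountCalculator();

        // The requirement expressed as an executable expectation.
        Assert.AreEqual(15m, calculator.DiscountFor(150m));
    }

    [Test]
    public void Small_orders_get_no_discount()
    {
        var calculator = new DiscountCalculator();

        Assert.AreEqual(0m, calculator.DiscountFor(50m));
    }
}
```

Run a suite like that on every commit and the “it keeps working” half of the bargain is being re-proven continuously, not asserted once and then forgotten.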

A common objection I hear, when people implore others to adopt Test-Driven Development, SOLID design principles, and the like, is that these are wastes of time and that delivery of functionality is the most important aspect of coding. That is, the argument goes, meeting requirements is the only real deliverable that matters, and TDD or efforts toward SOLID designs are out of scope and therefore wastes of time. To this I can now make a good argument back and say: oh yeah? So you say you’ve accomplished the requirement at hand – prove it! And then, in six months, I’ll come back again and say: remember that requirement that you said works? Does it still work? Prove it!


About Chad Myers

Chad Myers is the Director of Development for Dovetail Software, in Austin, TX, where he leads a premier software team building complex enterprise software products. Chad is a .NET software developer specializing in enterprise software designs and architectures. He has over 12 years of software development experience and a proven track record of Agile, test-driven project leadership using both Microsoft and open source tools. He is a community leader who speaks at the Austin .NET User's Group and the ADNUG Code Camp, and participates in various development communities and open source projects.
  • http://www.lostechies.com/members/louissalin/default.aspx Louis Salin

    Good post, Chad!
    I’d like to bounce a few ideas off you and see where that takes us. I’ve been starting to form my own opinions on automated tests, and I must say that I’m either doing them totally wrong, or they’re just not that useful. Researching the subject, I’ve come across arguments that 1) automated tests only test the “happy” path through a feature, and 2) a passing automated test does not prove anything.

    If you couple that with the incredibly high maintenance cost, are they truly useful? I think it was Brian Marick who observed that automated tests only allow you to find 30% of the defects in your code base, whereas spending that time doing more traditional QA would result in finding 70% of the defects instead. In my own experience, I find that 99% of automated test failures are the result of an intentional change in the behavior of our app, not that something broke.

    Bob Martin says UI testing should be at no more than 20% coverage, with unit tests and integration tests covering the rest.

    However, I do agree that we need to prove our features work. The question I’m having is: how? Keep in mind that a passing test doesn’t prove anything. (By the way, I might not do TDD or BDD as often as I ought to, but I fully agree with those methodologies and SOLID principles. Feel free to call me out on those :))

  • http://www.lostechies.com/members/chadmyers/default.aspx chadmyers

    @Louis:

    Note that I talked about testing in layers (unit, integration, acceptance). “100% coverage” at the acceptance level involves a lot fewer tests than 100% coverage at the unit level, because at the unit level there are more seams (interactions) to test.

    If you’re seeing “incredibly high” maintenance costs with tests, then, frankly, you’re doing it wrong. I’m not sure what tech you’re using, but if it’s using WatiN, Watir, or Selenium directly, then you’re doing it wrong. Those technologies are important, but they should be driven by something higher level (an acceptance testing framework) like StoryTeller, FitNesse/Slim, etc.

    “Keep in mind that a passing test doesn’t prove anything” – I disagree with this statement categorically, unless you mean a poorly written test. Tests are not useful simply because they’re tests; I agree that tests must be written correctly (testing the right thing for the right reason with the right expectation). With that established, tests prove lots of things and are of great value.

    We were able to make rather stark and major changes to our infrastructure and all levels of our application (from redoing our data access entirely, to changing out our MVC framework entirely, to changing out our grid technology entirely, etc) with high confidence due to our bevy of StoryTeller/Selenium tests.

    This confidence in change and accuracy of delivery is of immeasurable value to us. Any “incredibly high maintenance cost[s]” were entirely justified the first time we had to make a major readjustment of our infrastructure. We’ve now done it several times and are reaping the dividends.

    I see the automated acceptance tests as something akin to seat belts or railings on stairs. Sure they can be a pain and get in your way, but the first time you need them and don’t have them, you will sorely regret it. Likewise if you need them and have them, all pain or cost becomes moot.

    “I find that 99% of automated test failures are the result of an intentional change in the behavior of our app, not that something broke” – Us, too (though maybe not 99%, more like 80%). If this is the case, then your testing framework should make it easy to change the guts of the test to match the new reality.

    We express our (StoryTeller) tests in business language and not implementation language. The implementation is behind the scenes, in code, close to the actual code of the feature. If the feature changes slightly, so does the underlying implementation of the test. Thus, the essence of the test is still expressed in StoryTeller, and only a slight code change needs to happen.

    If you’re using record/playback mechanics like Selenium, then you’re in for a world of hurt as even a small change to the application can involve long, tedious changes to the web tests. Thus you should separate the essence of the test (what to test) from the implementation of the test (how to test it). This is what, among other things, StoryTeller does for us.
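
    As a minimal sketch of that layering (invented names; this is not the actual StoryTeller API, and it assumes Selenium’s WebDriver C# bindings), the fixture carries the essence of the test in business terms while the driver owns the Selenium mechanics:

```csharp
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;

// Driver layer: the only place that knows about Selenium and the DOM.
public class OrderScreenDriver
{
    private readonly IWebDriver _browser;

    public OrderScreenDriver(IWebDriver browser) { _browser = browser; }

    public void ChooseShippingMethod(string method)
    {
        var dropdown = new SelectElement(_browser.FindElement(By.Id("shipping-method")));
        dropdown.SelectByText(method);
    }

    public void SubmitOrder()
    {
        _browser.FindElement(By.Id("submit-order")).Click();
    }

    public string ConfirmationMessage()
    {
        return _browser.FindElement(By.Id("confirmation")).Text;
    }
}

// Fixture/grammar layer: the essence of the test, in business language.
public class OrderFixture
{
    private readonly OrderScreenDriver _screen;

    public OrderFixture(OrderScreenDriver screen) { _screen = screen; }

    public bool CustomerCanPlaceAnOrderWithOvernightShipping()
    {
        _screen.ChooseShippingMethod("Overnight");
        _screen.SubmitOrder();
        return _screen.ConfirmationMessage().Contains("Thank you");
    }
}
```

    If the shipping control changes from a plain select box to something fancier, only ChooseShippingMethod changes; the fixture, which carries the intent of the test, stays put.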

  • http://www.lostechies.com/members/louissalin/default.aspx Louis Salin

    Our entire UI is in Silverlight, which requires us to create extra methods on our view classes that can then be called by our Ruby scripts through Watir. This is where the maintenance cost is high, since we need to revisit those methods whenever the behavior of our app changes. Maybe it’s easier to maintain when you’re working with something as widely used and supported as the DOM, but in Silverlight’s case, UI testing hasn’t been streamlined yet (or I don’t know about a possible solution).

    Thanks for your input! I’m really on the fence and still accumulating arguments from both sides. I’m glad to hear that you guys are reaping benefits from your automated tests.

  • http://www.lostechies.com/members/louissalin/default.aspx Louis Salin

    The more I think about it, the more I see how Silverlight is not testable. I’ve been fighting all day to change the value on a combo box through our test. When you’re dealing with HTML, it’s fairly easy to do. Change the value and everything will be fine when you submit the form. With Silverlight, changing the value doesn’t necessarily trigger the right events to update the view model underneath, which forces me to go update the view model directly instead of doing it through the UI. But now, whenever the view models change, tests start breaking…

    It’s just a pain.

  • http://www.lostechies.com/members/chadmyers/default.aspx chadmyers

    @Louis:

    The idea is to keep the essence of the test (the feature you hope to assert) separate from the mechanics of the test (the clicking of buttons, selecting drop-downs, clicking links, etc.).

    So if you’re mixing them both in one test, you’ve got a problem. In our web testing, we have different layers, from the “driver” layer (the thing that directly interacts with Selenium) to the fixture/grammar level, which is the StoryTeller stuff (and which contains the essence of the test expressed in as business-like a language as we can get).

    So basically, we keep Selenium as far away from our testing code as possible so that we can change how, say, drop-down/selectboxes work in our app without breaking all our tests.

    So if you can find a way to isolate how your combo boxes work, then put that in your driver layer. That way your *test* can just say: “Select this value for this field” and your driver will know that “that field” is a combo box and it’ll know how to deal with combo boxes (change values, get the current value, get the list of all the available values, etc).
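
    As a rough sketch of that idea (all names invented for illustration; the Silverlight- or HTML-specific plumbing would live behind the handler interface), the test speaks in terms of fields and values, and the driver decides how each field is manipulated:

```csharp
using System.Collections.Generic;

// One handler per kind of control; the UI-technology-specific plumbing
// (Silverlight, HTML, etc.) lives behind this interface.
public interface IFieldHandler
{
    void SetValue(string value);
    string GetValue();
}

// Driver layer: knows which field is a combo box, a text box, and so on.
public class ScreenDriver
{
    private readonly Dictionary<string, IFieldHandler> _fields =
        new Dictionary<string, IFieldHandler>();

    public void RegisterField(string fieldName, IFieldHandler handler)
    {
        _fields[fieldName] = handler;
    }

    // All the test ever needs to say: "select this value for this field."
    public void SelectValueFor(string fieldName, string value)
    {
        _fields[fieldName].SetValue(value);
    }

    public string CurrentValueOf(string fieldName)
    {
        return _fields[fieldName].GetValue();
    }
}
```

    When the combo box’s event-wiring quirks change, only the handler for that control changes; the tests keep saying “select this value for this field.”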

    If that’s simply not possible in Silverlight, then that’s a real problem. I find that kind of hard to believe, but I don’t know much about Silverlight.

  • http://www.twitter.com/VitalyStakhov Vitaly Stakhov

    Unfortunately, in some cases The Primary Unstated Requirement is considered impossible to achieve, and constantly finding and fixing the same bugs is treated as a natural part of software development.

    I think that in many cases it would be hard for somebody who is NOT a professional to understand that TPUR is achievable at all.

  • Elroy

    Nice post Chad.

    But, the fact remains that in most cases this will never get through to all people and things will still remain the way they are. I’ve tried hard to make my co-devs write automated unit/functional tests, showing them the benefits time and again. Doesn’t work for the most part. But sometimes I do run across people who share these thoughts and it feels great working with them.

  • Carsten

    Shouldn’t your conclusion be that you should first make it a “Stated Requirement”?

    If you build or change software, it needs to be tested. That’s not breathtaking news. Testing can be done in many different ways; automated testing is one way and it might be a very efficient way depending on the individual case.

    If you say testing is a requirement, then you need to treat it as a requirement like any other feature request. It’s not something that you simply do because you want to be “professional.” Instead, you should choose your testing strategy depending on aspects of the requested feature such as criticality (risk), triviality (trivial things don’t need to be tested extensively), or maintainability. In any case, you would need to make the requirement “stated.”

    I also tend to agree with Louis saying that “a passing automated test does not prove anything.” I have seen projects that put a hell of a lot of effort into increasing their code coverage but finally found out that what they tested was not what was required. I am probably close to those dirty hackers saying that functionality is the first thing that matters. Having said that, I don’t intend to say that testing doesn’t matter. It matters, and it is an important aspect of software in general. Due to its importance, it needs to be made transparent; it needs to become a “Stated Requirement.”