How To Produce Bug-Free Software

Many are resigned to the fact that all software is destined to contain some “bugs”, but did you know it’s possible (and arguably pretty easy) to always produce “bug-free” software?  In this article, I’ll explain how.

Terms

To begin, let’s consider the definition of a software “bug”.  Merriam-Webster’s dictionary defines a bug as follows:

bug – “an unexpected defect, fault, flaw, or imperfection”

This definition may be fine for casual use, but you certainly wouldn’t want to use it as the basis for any contractual obligation.  The problem with this definition is that it lacks objectivity.  The terms ‘defect’, ‘fault’, ‘flaw’, and ‘imperfection’ are all relative terms, but relative to what?  Under this definition, these designations mark some deviation from an unstated set of expectations, but upon what are those expectations based?  What if different expectations are held independently by your users, testing groups, managers, product owners, and yourself?  Should every difference between any of these expectations and the actual behavior be considered a bug?  Clearly, a more objective definition is needed if we are ever to produce bug-free software.

To this end, I submit the following definition:

bug – “a deviation from an objective set of specifications set forth at the outset of a development effort.”

In order for a development team to consistently write bug-free software, an objective set of specifications must exist by which the software may be measured prior to delivery.  Unfortunately, such specifications are rarely created … at least not in any objective form.  This problem often stems from the application of an incorrect process control model.

Process Control Theory

There are two primary approaches to controlling processes: the defined process control model and the empirical process control model.

The defined process control model is applicable to processes where all the work involved is completely understood at the outset of production and the result of each stage of production is predictable and repeatable.  An assembly-line production of automobiles is an example where a defined process model might be applied.

The empirical process control model is applicable to processes where the product of the work has a high degree of unpredictability and/or the product’s specifications aren’t completely defined and understood at the outset of production.  This model is characterized by the use of frequent inspection and adaptation during the production process.  The research and initial formulation of medications to treat diseases are examples where an empirical process control model would be applied.

Unfortunately, the defined process control model has been the predominant approach to software development throughout its history.  Its most prevalent manifestation has been the Waterfall model, a process characterized by sequential stages of development: requirements gathering, analysis, design, implementation, and verification.  Fortunately, some within the industry have come to understand that software development requires an empirical process control model, and the rest of the industry appears to be slowly coming around1.

In the book Agile Software Development with Scrum, Ken Schwaber recounts the feedback he received after consulting process theory experts about his lack of success in applying commercial methodologies within his company:

They inspected the systems development processes that I brought them.  I have rarely provided a group with so much laughter.  They were amazed and appalled that my industry, systems development, was trying to do its work using a completely inappropriate process control model.  They said systems development had so much complexity and unpredictability that it had to be managed by a process control model they referred to as “empirical.”  They said this was nothing new, and all complex processes that weren’t completely understood required the empirical model.  They helped me go through a book that is the Bible of industrial process control theory, Process Dynamics, Modeling and Control [Tunde] to understand why I was off track.

In a nutshell, there are two major approaches to controlling any process.  The “defined” process control model requires that every piece of work be completely understood.  Given a well-defined set of inputs, the same outputs are generated every time. … Tunde told me the empirical model of process, on the other hand, expects the unexpected.  It provides and exercises control through frequent inspection and adaptation for processes that are imperfectly defined and generate unpredictable and unrepeatable outputs.

When software development is correctly understood to be an inherently empirical process in need of frequent inspection and adjustment, many naturally conclude that an objective, repeatable inspection process is needed.  Enter executable specifications …

Executable Specifications

In the Scientific Method, empirical data is collected through repeatable processes to guard against mistakes or confusion by experimenters.  To ensure our software is defect-free, we need to employ a similar set of repeatable processes.  Not only should such processes guard against mistakes or confusion during the development effort, but they can and should themselves be the agreed-upon specifications.  In the software development world, these sets of controlled processes can be fulfilled by Executable Specifications.

Simply put, executable specifications are a set of automated tests which encapsulate the context, actions, and observable outcomes defined and agreed upon between a customer and a development team.  Without such specifications, no objective measure exists by which to judge the integrity of a software system.  With them, the development team has an objective measure against which to weigh the software’s behavior before it is delivered to a customer … thus equipping the team with the ability to always produce bug-free software.
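To make this concrete, here is a minimal sketch of what an executable specification might look like.  The domain (a hypothetical `Account` with a withdrawal rule) and all names are invented for illustration; the article itself prescribes no particular language or framework, so this uses Python’s standard `unittest` module, with each test spelling out the agreed-upon context (Given), action (When), and observable outcome (Then):

```python
import unittest


class Account:
    """Hypothetical system under test: an account with a simple withdrawal rule."""

    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        # The agreed-upon rule: withdrawals may not exceed the current balance.
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount


class WithdrawalSpecification(unittest.TestCase):
    """Executable specification: each test encodes a context (Given),
    an action (When), and an observable outcome (Then) agreed upon
    between customer and development team."""

    def test_withdrawal_reduces_balance(self):
        account = Account(balance=100)         # Given an account holding 100
        account.withdraw(30)                   # When 30 is withdrawn
        self.assertEqual(account.balance, 70)  # Then the balance is 70

    def test_overdraft_is_rejected(self):
        account = Account(balance=100)         # Given an account holding 100
        with self.assertRaises(ValueError):    # Then the request is refused
            account.withdraw(150)              # When 150 is requested


if __name__ == "__main__":
    unittest.main()
```

Because the suite is automated, it serves as the repeatable inspection described above: any behavior deviating from these tests is, by the definition given earlier, a bug; any behavior passing them is, by agreement, correct.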

 


1. ‘Agile’ Development Winning Over ‘Waterfall’ Method – http://itmanagement.earthweb.com/entdev/article.php/3841636/Agile-Development-Winning-Over-Waterfall-Method.htm

About Derek Greer

Derek Greer is a consultant, aspiring software craftsman and agile enthusiast currently specializing in C# development on the .Net platform.