Simplest versus first thing that could possibly work

One of the core XP practices that resonated with me quite early on was the concept of simple design.  When I learned TDD, this practice was further refined with the concept of doing the “simplest thing that could possibly work”.  To make a test pass, I would code the simplest thing that could possibly work.  It takes quite a bit of discipline to adhere to this mantra of simplicity, fighting a constant urge to design something more complex than the problem at hand requires.

Browsing the XP wiki, you can find a lot of discussion of what exactly this means.  TDD calls for “Red, Green, Refactor”, which might lead you to wonder why you would need to refactor after doing the simplest thing that could possibly work.  It seems that the consensus formed around first performing the fewest steps, then refactoring to the fewest pieces or components.  But in our quest for simplicity, I notice a second, more subtle mistake: confusing the first thing that happens to work with the simplest thing that could possibly work.  If I choose the first thing that happens to work, my refactoring step often leads me only to a tidier version of that first solution, not to the simplest or most elegant design overall.
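To make the distinction concrete, here is a hypothetical sketch (the function names and the “is this list sorted?” requirement are my own invention, not from the post): two implementations that both pass the same tests, one being the first thing that happened to work, the other the simplest thing that could possibly work.

```python
# Hypothetical illustration: two passing implementations of the same
# small requirement -- "is this list sorted in ascending order?"

def is_sorted_first_thing(xs):
    # The first thing that happened to work: index bookkeeping,
    # an early-exit flag, and an edge-case guard that crept in
    # while chasing a green bar.
    if len(xs) < 2:
        return True
    result = True
    i = 0
    while i < len(xs) - 1:
        if xs[i] > xs[i + 1]:
            result = False
            break
        i += 1
    return result

def is_sorted_simplest(xs):
    # The simplest thing that could possibly work, found by pausing
    # to weigh a second design: state the property directly.
    return all(a <= b for a, b in zip(xs, xs[1:]))
```

Both make the tests pass; refactoring the first version tends to produce only a cleaner loop, while the second came from stopping to consider an alternative before committing.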

The difference between the two is easy to address: it just requires thinking!  Thinking about possible solutions, weighing different designs, vetting alternate paths to the goal.  This can be accomplished through pair programming, whiteboarding, or just about any exercise that requires us to think of at least two possible solutions before picking the winner.  I don’t see this needing to happen for every design decision, however.  But some of the most awkward designs I’ve created seem to stem from just picking the first thing that worked, without doing a little thinking first.

Which is quite sad, as a little effort invested in investigating multiple designs pays off many-fold in the long run.  Evolutionary design works best when we’re not stumbling in the dark, but making informed decisions with as many options as possible on the table.  If this sounds something like the idea of concurrent or set-based engineering, it should!  Except in this case, we perform it in the micro, at the point of every non-trivial design decision.  Simplicity is not automatic; it often comes from choosing the best design from a few options.  But we can’t be lazy about it, as taking just five minutes to think of a different approach can save hours (or even days) further down the road.

About Jimmy Bogard

I'm a technical architect with Headspring in Austin, TX. I focus on DDD, distributed systems, and any other acronym-centric design/architecture/methodology. I created AutoMapper and am a co-author of the ASP.NET MVC in Action books.
This entry was posted in Design.
  • Well put. We go wrong here when we base our decision on the easy->hard continuum rather than the simple->complex continuum. The easiest path rarely leads to the simplest solution.

  • Excellent post. I will also add that code reviews (even reviewing your own code after a day or two) have a great impact on the quality of the design. While you are working on a problem you may get mentally fixated on one or two solutions, and never see an alternative.
    Changing context (working on something else, or going to sleep) goes a long way toward breaking that “writer’s block”.

  •

    I absolutely agree. This is the same problem with YAGNI, especially when you do, in fact, know you are going to need it, yet write it as though you didn’t, and then have to all but rip out the original implementation to make it work later. It’s a nice phrase to emphasize the point, but it goes a bit overboard.

    Everyone wants a single definition of “simple”, but it just doesn’t exist. You have to be thoughtful as well if you want to ensure quality throughout the entire system. Again, well said!