Limits of performance optimization

Back in college, where I was an Electrical Engineering undergrad, I had an especially difficult professor for my microcontrollers course. In that course, we would hand-roll assembly language instructions and upload them to a 68HC12 test board. (Side note: I never, EVER want to hand-roll assembly language again. Or hand-compile C code to assembly.)

In microcontrollers, onboard memory is a huge limiting factor. Modern computers have plenty of available memory, but on an embedded device, the size of your ROM pretty much decides how complicated your program can be.

As part of the course, we were graded on how compact our code was, rather than merely on how correct it was. And this makes sense: compactness leaves room for more features and decisions.

So we all made reasonable efforts to compact our code using common tricks, like replacing multiplication with shift operations. What we didn't know, however, was that our professor had spent 20-30 years optimizing assembly code for compactness, and our efforts were being graded against his. Any deviation from his solution was a deduction from our grade.

After receiving poor marks, and seeing why, we reviewed his solution as a class. And wouldn't you know: while compact, code optimized to its maximum is nearly impossible to understand or maintain. No one in the class, viewing the code for the first time, could decipher what it actually did.

Long-term maintainability

Since we change code far more often than we write code, optimizing solely for performance can make it difficult or impossible to change that code in the future. In the case of our college course, we were being held to standards that were nearly impossible to reach, let alone understand. Performance isn’t an accomplishment, it’s a feature.

It’s a feature that needs to be balanced against all the other constraints, like the ability to maintain the code in the future. Highly optimized code often becomes harder to understand, making it difficult to tweak or refactor later.

So when looking at performance optimization, which is often a necessary endeavor, always keep an eye on its true goal. How much more optimized does the code need to be? What is the threshold for success?

Performance optimization without a clear definition of success just leads down the path of obfuscation and unmaintainability. Optimization does have an upper limit, not only in terms of gains, but of losses in maintainability.

About Jimmy Bogard

I'm a technical architect with Headspring in Austin, TX. I focus on DDD, distributed systems, and any other acronym-centric design/architecture/methodology. I created AutoMapper and am a co-author of the ASP.NET MVC in Action books.
This entry was posted in Architecture.
  • Gene Hughson

    Bravo. Doggedly eking out every ounce of a given QoS criterion (performance, security, etc.) without thought to cost and need is as detrimental as sloppy work.

  • Sometimes you can make beautiful code that is easy to maintain and looks great. But your data-access and network are abstracted 10 layers deep. 

    Try and optimise that son. Or even better – diagnose it.

    A bit of awareness is all I’m saying, based on experience of facades, wrapping, controllers, wrapping god knows what, with 50 database and web service calls scattered from top to bottom of the stack.

    • Jason Meckley

      Jimmy’s not suggesting there should be 10 layers of abstraction. The point is to balance performance with maintainability. Too far in either direction creates a problem.

      • haha yep. That’s what I was saying Jason. A bit of awareness of the hole you’re digging for yourself.
