I suppose it’s time for the obligatory weigh-in on the latest bit o’ reckless software advice from Joel Spolsky on the merits of the “Duct Tape Programmer”.
I think being a duct tape programmer is a bit like being an alcoholic. Once you become one, you are one, and when you want to stop, you have to constantly be vigilant against backsliding. Oh, and the first step is admitting you have a problem.
Hi, I’m Scott, and I’m a recovering duct tape programmer.
I don’t want to get too deep in the weeds on Joel’s article, because the simple fact is that it doesn’t warrant as much discussion as it’s getting. On further review, he doesn’t say much more than “be like this and get things done, but you’re probably not smart enough to be like this, so…”. I will, however, be pulling out a few key parts of his article to make my points.
What I do want to talk about is this notion that somehow being a craftsman and striving for appropriate, maintainable, and dare I say elegant solutions is in opposition to getting things done.
There’s a major problem in this industry (and the sub-industry of blogging/speaking/writing about this industry) of trying to remove contextual analysis from what we say. When we do that we get ourselves in trouble. Every product is different, every organization is different, and the needs of each vary wildly. Therefore it’s impossible to prescribe one action over another free of context. The other side of that same coin is that the vast majority of people taking this sort of context-free advice don’t recognize the lack of context and run with it because some hero they worship said it. Then you get people who say things like “I don’t want to write tests because Jeff Atwood and Joel Spolsky don’t” or “I just want to code all night in a cave and ‘get shit done’ because that’s what Joel said is valuable to business”. This is a major problem.
There’s also this problem of the definition of “quality” in software. Some people tend to take a narrow view that the software quality issue is about patterns and clean code. This is part of it, to be sure, but it’s not the whole thing. Quality is part of solving the problem. Solving the right problem, with the right solution, that works for the business, in an amount of time that ensures value was provided — that’s part of quality. Ensuring that the code is at the very least *correct* is also part of quality. Ensuring that code is maintainable can also be part of quality. Each of these parts of quality has a cost associated with it, and business decisions have to be made about the associated ROI tradeoffs. The problem is that the business rarely makes these decisions, because the developers either don’t know how to raise the issue and educate, or don’t want to.
Joel’s hero of the day was Jamie Zawinski, one of the original developers at Netscape. Joel holds him up as a bastion of getting things done, and I won’t begin to deny Jamie’s talent. But I do want to break down the straw men set up by Joel and add some context.
Just what is “important code”?
“Remember, before you freak out, that Zawinski was at Netscape when they were changing the world. They thought that they only had a few months before someone else came along and ate their lunch. A lot of important code is like that.”
Yes, that was back in the glory wild west days of the internet and in the formative years of what the software industry has become. And yes, a lot of people are under the gun to get something to market before their funding runs out or someone bypasses them. But let’s be real. The vast majority of software development out there is on the more “mundane” systems…you know…the ones that run the world, rather than the fancy Web 2.0 products. Do you want a duct tape programmer writing the software that controls your bank, or the lab equipment processing your mother’s biopsy, or your insurance company’s claims system, or the air traffic control system? I don’t know the numbers, but I’m willing to bet that more people working in corporate IT on systems that have real impact on lives read Joel’s blog than do people like Jamie. Just a guess.
For me, the code running my bank is more important code than web browser code. Calling it “important code” is a huge red flag for me of the developer/geek bias in Joel’s opinions. Important in this case means approximately “cool and new”. My definition of important code makes me afraid that the people who write such code will follow Joel’s advice.
“Yeah,” he says, “At the end of the day, ship the fucking thing! It’s great to rewrite your code and make it cleaner and by the third time it’ll actually be pretty. But that’s not the point—you’re not here to write code; you’re here to ship products.”
Yes, you’re here to ship. I don’t think that’s a question in anyone’s mind. But let’s talk about the meat of this quote. The assumption here is that developers of the non-duct-tape variety are focused on “pretty” code. Coupled with other sentiments in the article, it would seem that Joel (and Jamie) is saying that a testing practice and the idea of putting together a readable, maintainable codebase are the enemy of getting things done. This isn’t a binary issue. Being for a maintainable codebase doesn’t mean being against shipping. Being for code quality doesn’t mean being for cleverness or complexity for the sake of cool (quite often it means the opposite).
Again, this statement lacks the analysis of the context of different project and organization needs. If you’re trying to produce the world’s first web browser then yeah, get it out there and get it done. If you’re trying to produce a long-lasting insurance claims application that will be the backbone of a corporation’s business, then maintainability should probably be on your radar. Again, include the business in the decision making process. You shouldn’t go off on your own in either direction, sacrificing quality for speed or vice-versa.
Zawinski didn’t do many unit tests. They “sound great in principle. Given a leisurely development pace, that’s certainly the way to go.”
Now this is just crap, and it assumes and perpetuates the flawed idea that a mature testing practice costs you time. Yes, I said mature. Getting started with TDD will be slow at first, but the payoff is huge. And let’s not forget that time to market is total time to market, not just time to code features. Those features have to be implemented and tested before release (hopefully), and a practice that eschews testing for the sake of speed is often going to result in costly rework that will ultimately cost more time. Again, context being what it is, I’m not saying that you can’t do quality work and avoid rework without testing; what I am saying is that you can’t claim testing is only for those with “a leisurely development pace”. My team cranks out features at an alarming rate sometimes, and manages to do so while practicing TDD and testing all the way up and down the stack.
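To make concrete what that test-first rhythm actually looks like (and how little ceremony it requires), here’s a minimal sketch in Python. Everything in it is hypothetical and invented for illustration — the `Claim` type, the `patient_copay` function, and the coinsurance rule are stand-ins, not anyone’s real claims system. The point is the order of operations: the tests were written first, and the function is the simplest code that makes them pass.

```python
# Hypothetical example for illustration only -- not from any real claims system.
from dataclasses import dataclass

@dataclass
class Claim:
    amount: float         # billed amount in dollars
    deductible_met: bool  # has the patient met their deductible?

def patient_copay(claim: Claim, rate: float = 0.2) -> float:
    """Patient pays the full billed amount until the deductible is met,
    then a fixed coinsurance rate of the billed amount."""
    if not claim.deductible_met:
        return claim.amount
    return round(claim.amount * rate, 2)

# These assertions were written *before* patient_copay existed; the
# function above is the simplest implementation that satisfies them.
def test_patient_copay():
    assert patient_copay(Claim(100.0, deductible_met=False)) == 100.0
    assert patient_copay(Claim(100.0, deductible_met=True)) == 20.0
    assert patient_copay(Claim(250.0, deductible_met=True)) == 50.0

test_patient_copay()
print("all tests pass")
```

Nothing about that cycle is “leisurely” — the tests took about as long to write as the function, and they’re what lets you rework the function later without fear.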
The point I’m getting at here is that we need to evaluate each project on its own merits and decide on the practices appropriate to produce the value needed in the time required, and that we absolutely have to be careful about tossing potentially harmful advice to our large audiences, advice that will likely lead to some really crappy software being produced. Matt Hinze said it best: “…never take software advice from a bug tracking system salesman”.
Oh and let’s not forget how bad Netscape’s browser ultimately became, and what a bloated piece of garbage it was by the time it died. Guess all that duct tape caught up with them.