Design And Testability

In the line-of-business applications that I build, it’s considered good practice to use a test-first approach; Test-Driven Development, Behavior-Driven Development, or whatever you want to call it. Write a test, verify that it fails for the right reasons, make it pass, then refactor the code to ensure it’s up to all required standards. How a person goes about implementing the tests, and the code that fulfills them, depends largely on the platform, language, and testing tools used. Each platform has different needs and different ways of approaching the idea of “testability” in code. Some languages require specific design decisions to enable testable code, while other languages pretty much guarantee that your code will be testable – even if some designs are easier to test than others.


Design And Testability In Ruby

If I were writing some code in Ruby, I could easily test this:

    class Foo
      def bar
        baz = Baz.new
        baz.do_something
      end
    end


There are a number of options for testing the behavior of the Foo class’s .bar method. I could use the nature of Ruby’s open classes and just replace the initializer and do_something method on the Baz class; I could use RSpec and its built-in mocking syntax; I could use the not-a-mock gem (which is my preference) to stub the methods; etc.
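The open-class option, for instance, requires no gem at all. A minimal sketch of that approach (the `$do_something_called` flag and the return values are invented for illustration):

```ruby
# Production code, as in the post (what Baz#do_something really does is assumed).
class Baz
  def do_something
    "real work"
  end
end

class Foo
  def bar
    baz = Baz.new
    baz.do_something
  end
end

# Test code: reopen Baz and swap in a recording stub -- no mocking library needed.
$do_something_called = false
class Baz
  def do_something
    $do_something_called = true
    "stubbed"
  end
end

result = Foo.new.bar   # => "stubbed", and $do_something_called is now true
```

Tools like RSpec and not-a-mock automate this kind of replacement and, importantly, restore the original methods after each example.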

We essentially get testability in Ruby for “free” – it’s built into the dynamic nature of the language. Other dynamic languages, such as Python, also give us testable code by nature of the language. We are not required to do anything special to create code that is “testable”. Now that doesn’t mean all code is easily tested, though. There are still design principles and paradigms that will make your code easier to test, which also tend to lead to code that is easier to understand. The point, though, is that you don’t have to do anything special to isolate the behavior of the Foo class from the implementation of the Baz class in the example above.


Design And Testability In C#

Looking at the equivalent code in C#, we would say that this code is not “testable” from the perspective of unit tests:

    public class Foo
    {
        public void Bar()
        {
            var baz = new Baz();
            baz.DoSomething();
        }
    }

By all the principles, practices, and design standards that we preach in C# / .NET, this code is not testable because of the hard dependency on the Baz object and its implementation.

There are a significant number of principles that are being violated in these few lines of executable behavior, and we would need to change the code in a very significant way to create something that is “testable”. We would need to introduce an abstraction over Baz – but ensure that the Foo class owns the abstraction so we don’t violate the Dependency Inversion principle. And we would need to introduce Inversion Of Control and some form of Dependency Injection to ensure that Foo is not directly dependent on Baz’s implementation (neither the constructor nor the DoSomething method’s implementation). The resulting code, to be “testable” by all accounts, would look something like this:

    public class Foo
    {
        private readonly IDoSomething doSomething;

        public Foo(IDoSomething doSomething)
        {
            this.doSomething = doSomething;
        }

        public void Bar()
        {
            doSomething.DoSomething();
        }
    }

    public interface IDoSomething
    {
        void DoSomething();
    }

    public class Baz : IDoSomething
    {
        public void DoSomething()
        {
            // ... whatever this does ...
        }
    }

(Note: I included the shell of the implementation for Baz in this example – but those extra few lines of code are not what accounts for the expansion of the rest of the code. I included it to show that the Baz class must now implement the interface.)


Angels And Demons

As a person who dabbles in Ruby and its community, I get the sense that we applaud Matz for the open nature of Ruby, allowing great minds like David Chelimsky to develop tools like RSpec with its built-in mocking capabilities. We have the freedom to express the intent of our code without the significant ceremony of the abstraction, dependency inversion, and “testable” code that we say is required in C#. These people are the heroes – the angels – of the Ruby community, held in high esteem because they have made the art of “testable” code approachable by anyone who can write code. And they deserve our applause for these efforts, without question. The tools and capabilities in Ruby and RSpec are quite wonderful and I enjoy working with them.

Why, then, do we demonize companies with tools like Telerik’s JustMock, Typemock’s various offerings, and Microsoft “Pex and Moles” for providing the same capabilities in C# / .NET? Why do we attack people like Roy Osherove and dismiss his contributions to the community? Have we become so dogmatic about our “principles” and “standards” that we no longer have a sense of pragmatism or exploration and questioning? Has the “” community become “”, “”, or “” as so many others have suggested, for so many years? What value do we truly gain – other than the admiration and awe of the people that wish they were “smart enough” to point out the “flaws” – through this continuous disregard for what is a valid perspective and approach to software development in .NET?

(Edit: the above content created a whirlwind of comments that would have been better off on another communication channel. I should not have taken the tone and stance that I did with this section. The LosTechies community should not be a place where I rant and say these types of incendiary things. As such, I’ve decided to moderate the comments on this post and strike out the above section. Please do not comment on this section on this blog anymore. I’ll remove the comments. Please continue commenting on the rest of the post, though, as I believe it is still valid.)


What’s The Point?

I honestly ask – why? … or, why not? If I can write this test in RSpec:

    Baz.should_receive(:do_something)

or write this test in TypeMock:

    var fake = Isolate.Fake.Instance<Baz>();
    Isolate.Swap.NextInstance<Baz>().With(fake);
    // ... run the foo.Bar method, here
    Isolate.Verify.WasCalledWithAnyArguments(() => fake.DoSomething());

why shouldn’t I write that one in TypeMock? Why should we applaud the Ruby community for its contributions and not the .NET community that has given us the same core capabilities? Is it because the capabilities to do this are not “free” in a static language? Is it because we’re afraid of the profiling API that is required to do this in .NET? Is it because we’ve become dogmatic instead of pragmatic? Is it because TypeMock is expensive? Or is there a legitimate reason that we have emotional reactions and cry foul at the possibilities that these tools introduce?
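For what it’s worth, the “swap the next instance” trick can be approximated in plain Ruby precisely because classes are open. A rough sketch, where the `__real_new` alias and the `$fake` global are my own scaffolding rather than any library’s API:

```ruby
# Production code: Foo news up Baz internally, as in the post's example.
class Baz
  def do_something
    "real"
  end
end

class Foo
  def bar
    Baz.new.do_something
  end
end

# A hand-rolled fake, standing in for Isolate.Fake.Instance<Baz>().
$fake_called = false
fake = Object.new
def fake.do_something
  $fake_called = true
  "faked"
end

# "Swap the next instance": redefine Baz.new to hand back the fake.
$fake = fake
class << Baz
  alias_method :__real_new, :new
  def new(*)
    $fake
  end
end

result = Foo.new.bar   # the fake's do_something runs, not the real one

# Restore the real constructor afterwards.
class << Baz
  alias_method :new, :__real_new
end
```

The profiler API is what lets TypeMock do this same substitution in .NET without the language’s cooperation.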


Searching …

I don’t know the answers. I’m asking because I want to find the answers. And yes, I recognize that I still have an attachment to the abstractions and interfaces. I’m not going to go spend the $ on TypeMock or JustMock today, but at least I’m asking the question in an open and honest manner. I hope the rest of the <divisive-name>.NET community will join in and begin to question everything we hold sacred. We might actually learn something if we do.

About Derick Bailey

Derick Bailey is an entrepreneur, problem solver (and creator? :P ), software developer, screencaster, writer, blogger, speaker and technology leader in central Texas (north of Austin). He runs - the amazingly awesome podcast audio hosting service that everyone should be using, and where he throws down the JavaScript gauntlets to get you up to speed. He has been a professional software developer since the late 90's, and has been writing code since the late 80's. Find me on twitter: @derickbailey, @mutedsolutions, @backbonejsclass Find me on the web: SignalLeaf, WatchMeCode, Kendo UI blog, MarionetteJS, My Github profile, On Google+.
This entry was posted in .NET, Behavior Driven Development, C#, Community, Pragmatism, Principles and Patterns, RSpec, Ruby, Telerik, Tools and Vendors, Unit Testing.
  • Because it’s not about testability. It’s about bad design.

    Just because Ruby lets you test a bad design easily doesn’t mean that we should praise the heavens and bow down to Typemock. If we want to swap out Baz, then the method shouldn’t be creating it. It should be getting an instance of it passed in, or at least getting it from a factory.

    Testing tools which make this easy do nothing but hide you from the pain of the design. Which is fine if you are disciplined and using this as a refactoring step. But if you think it means that you can just code like this now…
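    In Ruby, the “getting an instance of it passed in” design this comment describes is a one-line change. A sketch (the `fake` duck type is an invented stand-in):

```ruby
class Baz
  def do_something
    "real work"
  end
end

# Foo no longer creates its collaborator; it receives one.
class Foo
  def initialize(baz)
    @baz = baz
  end

  def bar
    @baz.do_something
  end
end

# Production wiring:
production_result = Foo.new(Baz.new).bar

# Test wiring -- any object that responds to do_something will do:
fake = Object.new
def fake.do_something
  :done
end
test_result = Foo.new(fake).bar
```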

  • Cory, what about the useful classes Dir and File in Ruby? They have static methods that I use all over the place. RSpec lets me override their functionality easily. Should I start injecting them into every class that uses them? Wouldn’t that be overkill?

    This is the reason we C# developers preach not to use static methods: they make their users hard to test. In Ruby, they are quite testable! Is it bad design to use static methods, or did we decide it’s bad design because it’s hard to test in C#?

    Maybe some of these tools have their use. And without a concrete example that doesn’t involve Foos and Bars, we can’t really decide what’s good design and what’s bad design.
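    Hand-rolled, overriding Dir for a test looks roughly like this – the `Lister` class is an invented example, and RSpec-style stubs automate the save/replace/restore dance shown here:

```ruby
# An invented class that uses Dir's class methods directly.
class Lister
  def visible_entries(path)
    Dir.entries(path).reject { |e| e.start_with?(".") }
  end
end

# Stub the class method for the test, keeping the original under an alias.
class << Dir
  alias_method :__real_entries, :entries
  def entries(_path)
    [".", "..", "a.txt", "b.txt"]
  end
end

result = Lister.new.visible_entries("/does/not/exist")

# Restore the real method so the rest of the suite sees the true Dir.
class << Dir
  alias_method :entries, :__real_entries
end
```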

  • Roco

    Cory – The Ruby example is not bad design. The nature of Ruby allows you to do things without relying on interfaces to satisfy a compiler. Different approach != bad design.


  • Yuriy Solovyov

    Following TDD is key here. It *forced* me to write better code. The fact that .NET is not as “testable” as Ruby actually made me a better developer. The code sample in the article only emphasizes this point, as you were forced to write better-designed code in C#. Maybe it’s a blessing that we can’t write tightly-coupled code and make the code testable at the same time.

  • Peter

    I remember reading your fellow LosTechies discussing this previously. Ironically, the Google brought up a post with an identical title:

    Then there’s:

    The answer’s in the details…details which have been covered previously.

  • It isn’t about testability. Compare your Ruby solution and your C# TypeMock solution. In real code (NOT a testing scenario), you can substitute new functionality in the Ruby solution, because of open classes. The code is still flexible.
    If you rely on TypeMock to enable your testing in new C# code (as opposed to legacy code, where TypeMock could be valuable), you may end up with an inflexible design where you cannot easily substitute functionality.
    If you don’t know the answers, then why the sidetrack rant in the “demon” paragraph where you let your imagination run wild?

  • I agree with Joshua. Open classes were the first thing that came to mind when I saw the Ruby example.

    That said, I do agree that tests lead to better code design in .NET.

  • thank you all for the discussion so far. this is what i love to see…

    now – for all of the “it’s about design” people – yes, it absolutely is. but until you have the context, goals, objects, and requirements on the code in question, you’ll never be able to correctly judge whether or not the design is bad. … unfortunately, from that perspective, what I’ve posted here is a giant wall of stupid because the design can’t be judged – there is no context.

    backing down from that position, though, i do agree that the code i’ve shown here is not “great” design, but i don’t know that it’s “bad” design either.

    with all that being said – i would like all of the “it’s about design” people to show me what you would consider “good” design for these examples. and i have one request for the C# example – show me a “good” design that does not require superfluous abstractions to facilitate testing.

  • I think this comment thread is an interesting example of part of the problem. For some people, the “best practices” have become so ingrained that they can’t imagine that making a given class’s references to other classes “swap-able” might not always be a good thing.

    True, in a static language you *have to* make it swap-able to mock it out (unless you use TypeMock). But if you *didn’t have to* make it swap-able to test it, and you had no “business” requirement for it to be swap-able, why would you design it that way?!

  • Roco

    Ah, I think Kevin is close to the heart of the issue. The answer for me is no – if there is only one implementation and therefore no need to “swap” anything, then I would prefer to not use an abstraction. However, I do it anyway – for testing.

  • I’m not saying either of the code examples is bad design. I was responding to the question of “if this is ok in ruby, why is this not ok with typemock” (which I thought to be the point of the post).

    The Ruby code doesn’t raise the same alarm bells, because you know you aren’t losing any flexibility. The C# code raises alarms because you can instantly see that your code is completely coupled and inflexible. If that flexibility is not needed, it is not bad design.

  • Ha, I was just trying to answer a question on this very topic on Stack Overflow:

    The question deals with the issue of how a C# developer transitions from the TDD workflow they are used to when moving over to Ruby. Perhaps you can weigh in with your thoughts. Thanks for a great post!

  • Flexibility is a fundamental idea in Ruby. In a way, *some* of the design principles (like SOLID) that many of us have been talking about for a while are so very valuable because they help us claw back some of the flexibility we’re trading for using a static language. Some of the techniques we need to use to achieve that flexibility (interfaces on just about everything, for instance, DI) look like ridiculous cruft to a dynamic language programmer… it’s not that they don’t value the flexibility that we do (read the Zen of Python, and you’ll see quite a few familiar ideas) it’s just that they value simplicity to accompany that flexibility more than safety. You might even hear something from dynamic language programmers like “I’m an adult, why should I accept that much useless complexity so the language can treat me like I’m a child?”

  • Nice post. Good questions.

    I agree with a lot of people saying “good design”. Unit tests really never test anything; their main purpose is to promote good design.

    BUT, I strongly believe the good design comes from breaking apart things into their responsibilities, not from making them (for lack of a better word) swap-out-able.

    More and more I have been not liking Dependency Injection. It is so overused that suddenly code is bad if it doesn’t use it. I hate creating interfaces for no reason. I feel absolutely retarded when I put an interface into the class definition file alongside the only class that will ever implement it.

    I think you hit onto something really big here. In the .NET world we need to get away from dependency injection for solving all of our problems.

    Striving to reduce the number of dependencies of any class or method is good, creating interfaces for no reason is bad.

    I like the idea of using TDD to drive out highly cohesive design.

    Let us not create interfaces and use dependency injection until we actually have a 2nd class that will implement the functionality and an interface is required to make the solution polymorphic.

    Let us not be afraid to call new. Calling new is ok, we can use tools like TypeMock to fake calls.

    Thanks for writing this… it seems like YAGNI gets thrown out the window for dependency injection and IoC.

  • Derick,

    “Testability” doesn’t now – nor ever has – meant “mocking” or “mockability”. And Inversion of Control doesn’t require shunts to be punched into classes – it only means this in languages like C# that don’t have language-level support for Inversion of Control, allowing programmers only to punch holes in class design to enable tool-based support for Inversion of Control.

    TDD is about evolving an ever greater understanding of the single responsibility of an abstraction. In other words, its cohesion. In other words, its modularity.

    Structural design is always and only about modularity. The raw materials you use most often are going to shape the very perspectives you have on the principles of modularity.

    I wouldn’t use TypeMock or any other runtime interception framework in .NET just to achieve isolation. I would create isolation the way it should be created: by intentional modularity.

    The bigger problem with .NET isn’t modularity – modularity can be had in any programming language. The bigger problem is the limited number of options available to the .NET developer to Invert Control.

    Ruby has language-level support for Inversion of Control. And this, in a nutshell, is why Ruby and TypeMock are not the same. The client syntax can appear the same, but they will always only ever be appearances.

    TypeMock provides one flavor of Inversion of Control, one that is only available by putting the software into a specific mode in a specific environment. This form of Inversion of Control is not a pervasive language feature and has no impact on how we implement the mechanisms of modularity, which is not the case with Ruby.

    Structural design in Ruby is inextricable from language-level support for Inversion of Control. It’s not just something we use in the context of testing and mocking. It’s something used in every aspect of structural design in Ruby – and because of this, and as a side-effect, it can also be found in the context of testing and mocking.

    If your understanding of the principle of Inversion of Control is shaped by working in C#, what I’m saying here might not make any sense. You might have an overly-narrow perspective of Inversion of Control – one that is informed only by its limited manifestations in that language. More time in Ruby will make it more natural.

    TypeMock calls itself an “isolation framework”. This nomenclature betrays a deep lack of experience and understanding of highly-effective modularity in structural software design. Isolation can never be created by a tool. It’s a quality of a software’s geometry rather than its runtime. And that, in turn, is a reflection of the designer’s mind and experience. It’s a quality of human cognition, and can no more be created by machinery than any other artifact of creative cognition.

    Personally, I don’t think that the purveyors of TypeMock have gone far enough in their own journey of understanding of the massive increases in productivity that come from a profound understanding of structural design, and have settled for something much lesser. But we’re talking about the Microsoft culture here. Settling for something much lesser can be a rather lucrative option in the Microsoft space. It doesn’t work out as well in a more meritocratic culture like Ruby’s.

  • First, no one should be demonising TypeMock or the people that work on it. TypeMock does some awesome stuff; if people want to use it then great! If they don’t then that’s fine too. There’s no problem debating the merits of various approaches, but to attack a tool or the devs that work on it seems ridiculous; if you don’t like it then don’t use it.

    Now as to why opening up classes for testing is considered fine in Ruby, but less fine in C#, I completely agree with Joshua’s comments. This is built into Ruby; you can use it anywhere to extend and modify your code. With C# you can only do this via the profiler, so you are flexing your test code in ways you cannot flex your production code. (Unless you want to hook into the profiler API for your production code too.)

    This is fine of course. There is no reason you can’t use TypeMock to do this; you are just not going to get the feedback as to the flexibility of your design. If you don’t need that feedback, or you get it from other sources, or you don’t need the flexibility, then there’s no problem.

    I ranted a bit about this a while back:


  • Should stay out of this… but … too opinionated :)

    First, the dependency injection with the interface passed in is preferred, regardless of OO language imo.

    public Foo(IDoSomething doSomething)

    is preferred and isn’t just about .net

    “The bigger problem with .NET isn’t modularity – modularity can be had in any programming language. The bigger problem is the limited number of options available to the .NET developer to Invert Control.

    Ruby has language-level support for Inversion of Control. And this, in a nutshell, is why Ruby and TypeMock are not the same. The client syntax can appear the same, but they will always only ever be appearances.”

    Totally agree here with Scott.

    (Same applies to ‘Grails’ – uses the underlying Spring to inject)

  • Scott,

    thanks for the insight. of all the comments, you’ve touched on a few points that really get down to what I’m asking.

    “If your understanding of the principle of Inversion of Control is shaped by working in C#, what I’m saying here might not make any sense. You might have an overly-narrow perspective of Inversion of Control – one that is informed only by its limited manifestations in that language. More time in Ruby will make it more natural.”

    yeah, you’ve hit the nail on the head with that one. my time spent in ruby is one of the largest reasons i’m questioning all this stuff and exploring the boundaries of what is reasonable in the principles themselves and in the languages we implement with.

    i think i need to pick up another language or two to get more perspective, while also continuing to work on my ruby skills. maybe python next.

  • Steve,

    > First, the dependency injection with the interface passed in is
    > preferred, regardless of OO language imo

    An “interface” in Object-Oriented design is a class signature. You’re saying “interface” but your code example is an interface *type*:

    > public Foo(IDoSomething doSomething)

    This is in fact NOT preferred in Object-Orientation at large and is only a preference in Class-Oriented languages like C#, Java, etc.

    In essence, interface types in languages like C# are not really “interfaces” at all from the perspective of Object-Oriented design. They’re “protocols”. And every object doesn’t need a protocol. Often, an object’s interface is sufficient.

  • nieve

    Going back to your request to ‘show me a “good” design that does not require superfluous abstractions to facilitate testing’- I was just wondering whether using functional programming for inversion of control could be seen as a “good” design that would require less superfluous abstractions to facilitate testing? So that your example would look like:
    public class Foo
    {
        Baz _baz;

        public Foo(Baz baz)
        {
            _baz = baz;
        }

        public void Bar()
        {
            _baz.DoSomething();
        }
    }

    public class Baz
    {
        private Action _doSomething = () => Console.Write("doing something");
        // or throw NotImplemented...

        public Baz() {}

        public Baz(Action doSomething)
        {
            _doSomething = doSomething;
        }

        public void DoSomething()
        {
            _doSomething();
        }
    }

  • Nieve,

    > superfluous abstractions

    Somehow, somewhere along the line, we got distracted by a few influencers’ declarations that only interfaces should be mocked. Many otherwise-smart people got caught up in this mindless orthodoxy and most of them have since abandoned it.

    An interface type in a static language signals the strongest kind of irreversibility of interaction possible between two early-bound objects: a protocol. Objective-C at least gets this right by specifically calling its “pure” virtual objects “protocols”.

    Your example at least doesn’t fall into the trap of believing that every interaction requires a protocol. But the example also introduces a pass-through proxy, the “Baz” class, that doesn’t add anything new to the solution. If instead of assigning Baz to Foo, you assigned that instance of the Action to Foo, you’d in fact be removing a superfluous abstraction.

    Inevitably, what you end up with is a command pattern that makes use of callable objects, rather than creating an extra class to represent the command, which is what is required in languages that don’t have callable objects. Using a callable object is the common idiomatic way of implementing command patterns in languages like Ruby and JavaScript that have first-class support for callable objects (Proc and Lambda).

    With the advent of callable objects in C# (Action, Predicate, and friends), command patterns can also be implemented using callable objects. However, there are years of habit-forming class-oriented implementation of command patterns backed up in idiomatic C#, and often command patterns fall back to the class-oriented habit even though there are now callable object alternatives.

    It’s hard to see the “superfluous” in C# if you’re not looking in from the outside, but with an insider’s eye. Baz is a “superfluous abstraction”. It’s a common enough pattern in C#, but so once was putting protocols between every object interaction.
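    A sketch of the callable-object version in Ruby, where the command is just a lambda (the `log` probe is an invented test double):

```ruby
# The command is a callable object; no Baz class, no interface type.
class Foo
  def initialize(do_something)
    @do_something = do_something
  end

  def bar
    @do_something.call
  end
end

# Production: pass the real behavior as a lambda.
log = []
foo = Foo.new(-> { log << :did_something })
foo.bar
# A test passes any callable and then asserts it was invoked.
```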

  • nieve

    That was spot on! how much do i owe you for the diagnosis?
    I think it is definitely time for me to plunge into ruby properly, any suggestions where to start?

    BTW, this reminds me an article from ny times:

  • Nieve,

    A good place to start for someone coming from C# is probably a look at something that both languages have in common but do differently. Since the example in your message above was patterns-related, you might try the book, Design Patterns in Ruby:

    Dave Thomas’ screencast series is quite enlightening as well. It goes from very basic ideas in Ruby through some rather mind-boggling examples of meta programming:

    There’s also a great Socratic-style book on Ruby from Pragmatic Programmers that is a good page-turner from beginner and beyond:

    Here’s a video of a presentation I did at the NDC about Ruby for .NET devs, but it is mostly a regurgitation of Dave Thomas’ videos (but with a dose of shock value to wake up those lethargic .NET identities in the audience):

  • Louis and Roco –

    As Scott has pointed out above, testability != good design. If you have Dir calls sprinkled all over your code, then you have an abstraction problem. The nature of the dynamic language makes it easier to test, true, but that doesn’t excuse the design of it.


  • Cory,

    I’m not sure I’m willing to go along with that, or if that’s what I said.

    To be precise, I’m saying that modularity is a higher order concern. Testability is a reflection of modularity. Someone who doesn’t understand modularity can learn it from TDD, as long as he doesn’t use crutches like profiler interception to mask root causes and only solve testability problems rather than solving higher order modularity problems. The ability to test is better than no ability to test, but modularity will deliver the added productivity of testing along with other forms of productivity that come from mistake-proofing and the elimination of re-learning – benefits that come from modularity but aren’t afforded by merely having the ability to test.

    Testability is often good design. But the ability to test is not the same thing as testability. Testability is a design quality that is a reflection of certain kinds of geometries in structural design. Using profiler interception might give you the ability to test, but it doesn’t give you “testability”.

    I think anyone using tools like TypeMock and conflating the mere ability to test with the design quality of testability is doing the entire community a great disservice and is being incredibly dishonorable. But unfortunately, I’ve seen this very thing happen. It means that rather than achieve the very human goal of testability, self-interested elements in the community are willing to re-define “testability” so that it means something much less. In other words, rather than rise up, they have no issue with bringing everyone else down to their level – no matter how much progress it destroys.

    PS: The Dir class is a reasonable example of modularity. Personally, I have no issues using it directly. But then, I’m talking about the Dir class in a language that has first-class support of Inversion of Control as a language feature. And ultimately, I’ve been trained by ten years of TDD to create modular code instinctually.

    I think we’re largely saying the same thing, but I want to be perfectly clear about what I see as some finer points.

  • Greg Young


    This is a side discussion that probably should be had elsewhere (I have posted about it before) but I believe that you have a fairly narrow view on what modern static typing can offer. Modern static typing is moving towards theorem proving which is a completely different beast and has a very different value statement than say static typing in java or C++. Static typing was needed to reach these goals but they are the end goals of the idea of static typing, static typing is about limiting problem spaces with the hope of being able to automatically verify them to be correct.

    There are many theorem provers available, including an early production version from Microsoft. These theorem provers will be game changers for many types of code, though that does not mean it’s appropriate for all code.



  • Roco

    >There are many theorem provers available including an early production version from microsoft.

    Shocking, this came from an MVP. Please, just don’t somehow bring CQRS into this discussion.

  • Thanks for the explanation above Scott (on interface) – guess I’ve just been in the C# world for too long :)

    Passing in interfaces to constructors (ctor DI) and using IoC to handle object creation has been my MO for awhile now.

    But again, I’ve only really done this within Grails, C#, Java

  • Greg,

    I don’t have a narrow view of what modern static typing can offer, and I’m aware of the continuing efforts toward fulfilling the dream of fully safe code. I also know that the pragmatism of self-modifying code is lost in this pursuit.

    And that ultimately, this pursuit is a pursuit of an obsession with purity that mistakes wholly-optimized software development effort for locally-optimized programmer expediency. As such, it remains valueless but to those psychologies who fail to recognize having been shaped for years by geek idées fixes.

    It’s not something I’m interested in because it delivers far less holistic productivity than it trades for localized expediency. If “correctness” were the only bottleneck, I would value it much more. And ultimately, it remains a non-actionable point while it remains largely hypothetical.

    When this particular geek obsession delivers on all pillars of product development optimization both on the procedural and organizational fronts, I’ll value it. Until then, I’ve got larger fish to fry – fish that are even bigger than any sense of entitlement to premature, next-generation tools by any group of people who have become overly-specialized in their favorite work step.

    I don’t have a narrow view of static typing, I’ve got a dim view of it. That view comes from having chosen to get immersive, in-depth experience with technologies that challenged my own unrecognized fixation with static typing. What I learned from spending a couple of years in this immersion is that I previously didn’t have an informed basis to evaluate one against the other. Now I understand why static typing fails to contribute as much to product development productivity – except in a few exceptional cases. I have a much better understanding of when to use either, and that understanding is fed by knowledge and concerns that go far beyond the realm of what programmers tend to become distracted by.

  • Steve,

    I haven’t worked with Grails for more than a year, but at the time it was based on Spring. If this hasn’t changed, then it remains a framework that doesn’t take advantage of language strengths inherent in Groovy, and maintains biases – whether recognized or not – toward tool-based Inversion of Control and dependency injection.

    Ultimately, I found Grails to be just more Java and C# style ceremony in language that didn’t always necessitate it. It was like trying to make a leap to a new paradigm by leaving one foot firmly glued to the floor, and not really ending up in any particularly good place.

    Grails to me is like a limbo – an intermediate place that is neither one thing nor the other. It’s often the kind of thing that happens when people aren’t sufficiently at-ease with the full extent of the necessity for the commitment of our attention when making a significant leap.

    The leap from Spring MVC to Rails was too big of a gap for many Java developers to commit themselves to. Grails is the nether world where they are currently stewing in the fixative power of that karma.

    The same could be said about .NET developers’ susceptibility to the fixative power of ASP .NET MVC, and friends.

    These aren’t technologies driven by an indomitable personal drive for excellence, they’re technologies driven by only so much pursuit of excellence that doesn’t cause us to face the momentary discomfort of paradigm shift and the momentary loss of personal social status that we’ve built up in the culture surrounding the previous paradigm.

    At some point, we’re called to go all-in or else we only end up re-learning and reinforcing things that we already know and are already comfortable with.

  • Cory, why is using the Dir class in your code a bad design? The Dir class is a great abstraction of low-level directory manipulation. Why encapsulate the encapsulation? What is the chance that you’ll want to use something else? And if you do end up using something else, how hard is it going to be to replace the Dir class? We are talking about something that is pretty much set in stone here. I doubt the way we use files and folders will change, which makes using the Dir class pretty safe, in my opinion.

    We need to think about this before making broad “Bad Design” declarations.

    In our code base, at work, we use dependency injection everywhere. It’s great. I love it. But the reason I love it is because our container does all the work for us. In the year and a half I’ve been here, I have yet to swap a class with another. It has never happened! If it wasn’t for the container and the features it gives us that have nothing to do with DI, I’d seriously question its use. So far, the only advantage I see is that when changing the constructor on a class, I don’t have to go edit anything else, because we don’t new-up anything.

    I do understand the flexibility that DI gives us. It’s great. But to say that DI should apply to everything might be short sighted in my humble opinion.
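Louis’s point about constructor injection can be sketched in the post’s own Ruby, with no container at all. The names here (Deployer, FakeFS, the "staging" directory) are invented for illustration, not taken from anyone’s code base: the class receives its file-system collaborator instead of calling Dir directly, so a test can hand it a fake.

```ruby
# A minimal constructor-injection sketch (hypothetical names).
# In production, the default collaborator is the real Dir class;
# in a test, any object that responds to #entries will do.
class Deployer
  def initialize(filesystem = Dir)
    @filesystem = filesystem
  end

  # List staging areas, ignoring the "." and ".." entries.
  def staging_areas
    @filesystem.entries("staging").reject { |e| e.start_with?(".") }
  end
end

# A tiny hand-rolled fake for tests -- no container, no mocking gem.
class FakeFS
  def initialize(listing)
    @listing = listing
  end

  def entries(_path)
    @listing
  end
end

deployer = Deployer.new(FakeFS.new(["build-1", "build-2", ".", ".."]))
```

This is the whole of DI at its smallest: a default argument and a seam. A container only becomes interesting when wiring many of these by hand gets tedious, which is consistent with Louis’s observation that the container’s other features carry most of the value.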

  • Steve

    So…did Chad’s comment get removed? Seems to be a gap in the comments there.

    I was reading something Giles Bowkett (a Ruby guy, for those who have no idea who he is) had written similar to what Scott is talking about (my apologies, I can’t find the link).

    It was more about how all the effort to add safety to static languages has just made developers worse as opposed to improving code. Not exactly the same discussion, but it touches on similar topics.

  • Steve – Chad requested his comment be removed, so I did. Yeah, it does leave a hole in the comments… I wasn’t sure what to do about that. I didn’t want to appear that I was censoring anyone, but then the hole in the comment stream is odd…

  • Scott – Yes, we’re on the same page, if what you are saying is that the goal isn’t testability, it’s modular design.

    Louis – The Dir class forces your code to rely on a concrete implementation, instead of an abstraction. That’s not good design. But, it may be /acceptable/ design. Sometimes the extra abstraction step isn’t worth it, especially if the calls are all centralized in an encapsulated module.

    def CreateNewStagingArea(name)

    Is that Good Design or Bad Design? The answer is all about context. But my general guidance to someone showing me that is, no, it’s not.
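One way to read Cory’s point: if Dir must be used, centralize the calls behind the meaningful name so the concrete dependency is at least encapsulated in one place. Below is a minimal sketch of that idea; the original comment only shows the signature, so the module name, the body, and the root-directory parameter are all invented for illustration.

```ruby
require "fileutils"
require "tmpdir"

# A sketch of the encapsulation being discussed: every Dir/FileUtils
# call lives behind one meaningfully named method, so the rest of the
# code depends on "create a staging area" rather than on the concrete
# file-system API.
module Staging
  def self.create_new_staging_area(name, root = Dir.tmpdir)
    path = File.join(root, name)
    FileUtils.mkdir_p(path)  # the only place that touches the real FS
    path
  end
end
```

Callers depend on `create_new_staging_area`, not on Dir; if the storage mechanism ever did change, only this one module would have to.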

  • Cory,

    It’s good design if it’s elegant, and it’s elegant if it’s simple and clear. A call to Dir likely has some meaning within the context that it’s used, like your example of “CreateNewStagingArea” above. Adding meaning adds clarity, but adding another class abstraction isn’t a slam dunk, as you’ve also pointed out.

    I’m using the definition of “elegance” that Edsger Dijkstra was fond of, one that mathematicians use. This is a commentary of his about abstraction: “the purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise.”

    The point I was making earlier is that I’m not terribly worried about using calls to Dir wherever they’re necessary, because they will likely be represented by a higher-level signature that is already there to add meaning.

    Strictly from a testing perspective, if the conditions are right, and elegance is not lost in test design, I’d functionally test the “CreateNewStagingArea” method at least once, and not necessarily need coverage over it subsequently from other tests. I might replace the method if the mocking framework added to the clarity of what I was documenting with the test, or I might just let it go. I would only mock the call to Dir when directly testing the CreateNewStagingArea if it was valuable to do so, and I wouldn’t make that decision based on black and white orthodoxy until I had more of the rest of the factors in play in my possession.

    And I would likely name the method “create_new_staging_area” as an exercise to understand Ruby’s native idioms, in hope that I would assimilate more of them through practice, rather than reinforce my existing C# biases :)
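The kind of targeted stub Scott describes needs no mocking gem at all, which is the point the post’s intro makes about Ruby’s open types. The sketch below temporarily replaces Dir.mkdir to observe how a hypothetical create_new_staging_area touches the file system, without creating real directories; both the helper and the method under test are invented for illustration.

```ruby
# Hand-rolled stubbing via Ruby's open classes: swap Dir.mkdir for a
# recording lambda for the duration of a block, then restore it.
def with_stubbed_mkdir(recorded_calls)
  original = Dir.method(:mkdir)
  Dir.define_singleton_method(:mkdir) { |path| recorded_calls << path; 0 }
  yield
ensure
  Dir.define_singleton_method(:mkdir, original)  # always restore
end

# Hypothetical method under test, calling Dir directly.
def create_new_staging_area(name)
  Dir.mkdir(File.join("staging", name))
end

calls = []
with_stubbed_mkdir(calls) { create_new_staging_area("build-42") }
# calls now records the path that would have been created
```

Whether this is clearer than a mocking framework’s syntax is exactly the judgment call Scott describes: use the replacement only when it adds to what the test documents.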

  • Anonymous

    Testability as a design concept is right in line with this kind of thinking. Testability means being able to easily create rapid, effective, and focused feedback cycles around your code with automated tests.

  • Warren LaFrance

    Interesting read, and the simple C# example really puts several concepts in plain sight for newbies to see and understand. I found you via Julie Lerman’s post about you in her “Automated Testing for Fraidy Cats Like Me” course at Pluralsight.