Where TDD fails for me


TDD is by far the sharpest tool in my belt.  The simplicity of client-driven design combined with the safety net of unit tests allows me to build software at a remarkably constant pace.  But at the edges of most of the applications I’ve worked on are areas where TDD has completely fallen flat on its face (for me).  It’s a little disheartening that these areas are always around frameworks I can’t change.  These are areas where adding unit tests provides coverage, but completely fails in the “tests as documentation” category.  Or, it’s an area where testing is difficult or impossible, regardless of the tools at my disposal.

My current TDD failures are around NHibernate and ASP.NET MVC, but they both center on a common theme – I’m deep in the extensibility points of someone else’s framework.  These frameworks offer great extensibility points, but often at the cost of the final result making any sense whatsoever.  Perhaps it’s how I practice TDD, as I like to start at the outermost visible behavior and let client-driven code direct the design underneath.  Often the outermost behavior leaves little point in TDD’ing internal implementation details.  Other times, the outermost visible behavior takes voodoo to set up, and the verification is impossible for the next developer to understand.

Example 1 – Extensibility through inheritance

In NHibernate, mapping from types to the database at the property level is done through a set of IType implementations.  These mappings provide the logic to map between, say, a System.Decimal and whatever comes out of an IDataReader.  Often, we need to provide custom mapping types to do things like map values from enumerations, handle custom Value Object types, or deal with legacy databases.  NHibernate is fantastic in that regard, as there has not been a problem I’ve thrown at it that I haven’t been able to solve with an obvious extensibility point.  Side note – this is common with good OSS frameworks – feedback from the community funnels back in to further refine the design.

The one issue with these extensibility points is that it’s completely non-obvious how to unit test one of these implementations.  Here’s one example of an implementation:

using System.Data;
using NHibernate.SqlTypes;
using NHibernate.Type;

public class DummyCustomType : PrimitiveType
{
    public DummyCustomType()
        : base(new SqlType(DbType.String))
    {
    }

    // Pull the raw value out of the reader and convert it to what the entity expects
    public override object Get(IDataReader rs, int index)
    {
        var o = rs[index];
        var value = o.ToString();
        return value;
    }

    public override object Get(IDataReader rs, string name)
    {
        int ordinal = rs.GetOrdinal(name);
        return Get(rs, ordinal);
    }

    /* etc etc etc */
}

Blah blah blah.  I can TDD individual pieces, but notice that I’m inheriting from PrimitiveType – an NHibernate extensibility point.  But are unit tests valuable here?  I think not, as the true test of whether this custom type works is in the context where it is used – NHibernate.  Instead of testing the individual class, I’ll define the behavior I want against NHibernate, usually by loading up entities that match the scenarios I’m interested in.
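
Something along these lines – a sketch, assuming NUnit for the test plumbing, where the Customer entity, its Status property, and the configuration behind the session factory are stand-ins for whatever actually uses the custom type:

using NHibernate;
using NHibernate.Cfg;
using NUnit.Framework;

[TestFixture]
public class CustomTypeMappingTests
{
    private ISessionFactory _sessionFactory;

    [TestFixtureSetUp]
    public void SetUp()
    {
        // Build the factory from the real mappings – the custom type only
        // proves itself when NHibernate is actually driving it
        _sessionFactory = new Configuration().Configure().BuildSessionFactory();
    }

    [Test]
    public void Should_round_trip_a_property_mapped_with_the_custom_type()
    {
        object id;

        using (var session = _sessionFactory.OpenSession())
        using (var tx = session.BeginTransaction())
        {
            id = session.Save(new Customer { Status = "Active" });
            tx.Commit();
        }

        using (var session = _sessionFactory.OpenSession())
        {
            // Loading in a fresh session exercises DummyCustomType.Get() against a real IDataReader
            var loaded = session.Get<Customer>(id);
            Assert.AreEqual("Active", loaded.Status);
        }
    }
}

// Hypothetical entity whose Status property is mapped through DummyCustomType
public class Customer
{
    public virtual int Id { get; set; }
    public virtual string Status { get; set; }
}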

So it’s very rare that I TDD an extensibility point driven through inheritance.  The voodoo going on underneath in the base type would put too much knowledge into the test – knowledge of an implementation I often can’t see without Reflector – so where is the value?  I don’t really see any.  For some of the pieces I have to implement, I don’t even know why they’re needed, so I leave “throw new NotImplementedException” in and wait for my application to blow up and tell me why I need that piece.  It’s another reason I see 100% coverage as a goal that has to be balanced against other concerns.

In cases of extensibility through inheritance, it’s only the macro behavior I care about.  I couldn’t care less how the specific extensibility point is used – all I care about is that the eventual result of my cog in the giant machine works as specified.

Example 2 – Thorny observations

How do I TDD a custom WebForms control?  Or an ASP.NET MVC ActionFilterAttribute?  I know I can create the crazy dependencies required – such as the ActionExecutingContext – but what exactly is that telling me?  The true verification is the action filter plugged into the complete pipeline.  Otherwise, I’m only verifying that it’s doing exactly what I told it to do – not what I need it to do.  In domain model specifications, I describe how I want the model to behave under specific conditions.  I could TDD this:

using System.Web.Mvc;

public class AdminRoleFilterAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // Bounce anyone who is not an administrator before the action runs
        if (!filterContext.HttpContext.User.IsInRole("Administrator"))
        {
            filterContext.Result = new RedirectResult("/unauthorized");
        }
    }
}

But honestly, only after a spike.  And if I did spike this, how would I get back here?  TDD would lead to a big disconnect between what I was verifying (that some Result is set under certain conditions) and what I actually want to happen.  TDD’ing this part requires knowledge of how the final result is used, leaving the eventual maintainer of the code to guess whether that Result actually has the desired effect.
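
The test itself would look something like this – a sketch, assuming Moq for the HttpContextBase and NUnit for the assertions, and all it proves is that the Result property gets set the way I said it should:

using System.Web;
using System.Web.Mvc;
using Moq;
using NUnit.Framework;

[TestFixture]
public class AdminRoleFilterAttributeTests
{
    [Test]
    public void Should_set_a_redirect_result_for_non_administrators()
    {
        // Fake out just enough of the framework to satisfy the filter
        var httpContext = new Mock<HttpContextBase>();
        httpContext.Setup(c => c.User.IsInRole("Administrator")).Returns(false);

        var filterContext = new ActionExecutingContext
        {
            HttpContext = httpContext.Object
        };

        new AdminRoleFilterAttribute().OnActionExecuting(filterContext);

        // This only verifies I set a RedirectResult the way I told myself to –
        // not that a real request ever ends up at /unauthorized
        var redirect = filterContext.Result as RedirectResult;
        Assert.IsNotNull(redirect);
        Assert.AreEqual("/unauthorized", redirect.Url);
    }
}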

What I’d really like to verify is that a user gets redirected when they are not an administrator.  However, I have to poke around a lot of framework pieces to get that.  I understand those hoops are part of using a framework, but it’s up to the next developer to make the leap of faith that I did my homework and that setting that Result to that value will produce the desired end behavior.  End behavior I can’t test easily, as it’s really a browser-level interaction test.
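
That browser-level test would look something like this – a slow, end-to-end sketch assuming Selenium WebDriver and NUnit, with made-up URLs, form field names, and credentials:

using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

[TestFixture]
public class UnauthorizedRedirectTests
{
    [Test]
    public void Non_administrator_is_redirected_to_unauthorized()
    {
        using (IWebDriver browser = new FirefoxDriver())
        {
            // Hypothetical login page: sign in as a user without the Administrator role
            browser.Navigate().GoToUrl("http://localhost/login");
            browser.FindElement(By.Name("username")).SendKeys("plain.user");
            browser.FindElement(By.Name("password")).SendKeys("secret");
            browser.FindElement(By.Name("password")).Submit();

            // Hit an action decorated with [AdminRoleFilter]
            browser.Navigate().GoToUrl("http://localhost/admin/reports");

            // The behavior I actually care about: the browser ends up at /unauthorized
            StringAssert.Contains("/unauthorized", browser.Url);
        }
    }
}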

I usually still TDD these scenarios, but only after a spike.  And all I’m really doing is TDD’ing back to the point where I knew things were working in my prototype.  At that point, I still do manual, one-time-only verifications that what I did actually created the behavior I wanted.  The unit tests didn’t really do that – they just verified I’m using the framework in the way I specified.

Whack-a-mole

In the end, I’ve noticed my tests around framework interactions have the least value.  They definitely have value, but the act of TDD is severely stunted as I’m playing by someone else’s rules.  I can verify my little cog behaves exactly how I specify in isolation, but that’s pointless if it fails when inserted back into the big machine.  Sometimes I can verify the output of the big machine, as with NHibernate, and sometimes it’s very difficult, or slow, as with the ASP.NET MVC example (and other web frameworks).

Not only do these tests have less value, but they tend to be far more brittle than tests against POCOs, or even services where I have the Dependency Inversion Principle in play.  I create some custom NHibernate type, hooray!  But what about the dozen other scenarios for legacy data I don’t know about?  Or the cases where the only true verification of an ASP.NET MVC implementation detail is manual or slow?

The unit tests themselves still provide value, but even tools like TypeMock wouldn’t solve the issue.  The issue isn’t that I can’t mock the framework; it’s that the framework in action is the only true verification of the intended behavior.  I’m not getting nearly as many, if any, of the client-driven design benefits when TDD’ing framework extensibility points.
