Moving containers beyond testability

In Derick Bailey’s two recent posts on containers, I found a lot of déjà vu in his sentiments.  They’re quite similar to the issues I ran into a while back when trying to move beyond top-down design.

I had become a little disenchanted with my container usage.  I created top-level classes, abstracted dependencies in the form of interfaces, and then filled in implementations.  It works well for test-driven development, as an interface in C# is still the easiest way to provide the shape of what’s needed before it’s actually implemented.  Classes with virtual members work too, but they’re kludgier and introduce extra steps in the process.

The problem I was running into was that a strictly top-down approach of building classes, creating interfaces, and driving the design downwards still tended to create shapeless architectures.  The issue I finally settled on was that I had been focusing on features instead of driving out concepts.

The real power of a container

For me, the use of a container really became useful once I moved past strict top-down design and I embraced the container as a conduit for application composition.  Containers provide two things well:

  • Dependency injection
  • Inversion of control

Dependency injection is rather simple.  A class specifies what it needs to work simply by exposing those dependencies as constructor arguments.  You can go one step further by using interfaces to represent dependencies, helping with the Dependency Inversion Principle.
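
A minimal sketch of constructor injection (all of the types here are hypothetical, purely for illustration):

```csharp
// Hypothetical types, for illustration only.
public class Order { }

public interface IOrderRepository
{
    void Save(Order order);
}

public class OrderProcessor
{
    private readonly IOrderRepository _repository;

    // The dependency is exposed as a constructor argument, behind an
    // interface; OrderProcessor never constructs it itself.
    public OrderProcessor(IOrderRepository repository)
    {
        _repository = repository;
    }

    public void Process(Order order)
    {
        _repository.Save(order);
    }
}
```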

Inversion of control is a bit larger in its scope.  Instead of just talking about exposing dependencies, IoC means that we remove responsibility for wiring up dependencies from the service requesting or needing them.  We can also go one step further, where we remove responsibility for controlling lifecycle from the service.  This means that if the lifecycle of the dependency needs to change, we don’t modify the design of the service.  We don’t modify the service from needing a dependency to suddenly needing an IFooFactory.  We don’t want to let the design of an implementation of a dependency affect the service.

Finally, we can fully embrace the container by designing our application around architectural concepts, and letting the container wire everything up as needed.  This last piece isn’t helpful unless we start to explore architectural concepts.

Named instances

Named instances are fantastic when you have a common engine with different plugged in instances.  For example, we have a common batch agent executor.  We have a lot of functionality around running batch jobs, such as logging, monitoring etc.  But the batch agents themselves are not aware of all this.  The IBatchJob interface itself is really just the command pattern:

public interface IBatchJob
{
    void Execute();
}

Very simple, but we can configure StructureMap to use named instances for different implementations.  From the command-line, I can just pass in the name of the instance, “batchjob.exe SomeNamedInstance”, and that specific job will execute.
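
A sketch of what the configuration and entry point might look like (the registry and job names here are hypothetical, and the exact DSL varies between StructureMap versions):

```csharp
public class BatchJobRegistry : Registry
{
    public BatchJobRegistry()
    {
        // Each concrete job is registered as a named instance of IBatchJob.
        For<IBatchJob>().Add<NightlyExportJob>().Named("NightlyExport");
        For<IBatchJob>().Add<InvoiceSyncJob>().Named("InvoiceSync");
    }
}

public class Program
{
    public static void Main(string[] args)
    {
        // "batchjob.exe NightlyExport" resolves and runs that one job.
        var job = ObjectFactory.GetNamedInstance<IBatchJob>(args[0]);
        job.Execute();
    }
}
```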

I’ve separated out the execution of the batch job from the actual work being done, allowing each orthogonal design vector to grow as needed.  When you can completely change the design of one aspect without affecting another, that’s an orthogonal design.  It’s easy for me to add any additional logging, exception handling, health monitoring and so on without touching any of the batch jobs executed.

Dependency lifecycle

Dependency lifecycle can be tricky.  In many applications, some dependencies need to be tied to a certain scope, whether it’s:

  • Per-request
  • Http context
  • Singleton
  • Per-call
  • Contextual

If I had to modify every single class that used a dependency whenever that dependency’s lifecycle changed, I’d run into some real problems.  Most often it’s not the abstraction itself that needs a specific lifecycle, but a single implementation.  One common example is a unit of work, or NHibernate’s ISession.  Depending on the context (in a test, on the web, in a batch job), I have many different needs for the lifecycle of ISession.

However, I don’t want to change the design of the service just because I have different needs for the implementation of ISession.  Providing some kind of IFooFactory for the complications of lifecycle management leaks the concerns of specific implementations into the service.
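
Keeping lifecycle decisions in container configuration means a change of scope touches a single registration rather than every consumer.  A sketch in StructureMap 2.x style (the scoping method names vary by version):

```csharp
// ISession consumers just take an ISession constructor argument.
// Only this registration knows (or cares) about lifecycle; swapping
// to a different scope touches this code and nothing else.
For<ISession>()
    .HybridHttpOrThreadLocalScoped()
    .Use(ctx => ctx.GetInstance<ISessionFactory>().OpenSession());
```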

Abstracting procedural code

One common concept in applications is the idea of many things needing to run at startup.  Whether it’s defining routes, loading up NHibernate configuration, or scanning for MVC areas, these are all things that only happen once per AppDomain.

Instead of having a bunch of procedural code in the application startup area, we can instead define the concept of a startup task:

public interface IStartupTask
{
    void Execute();
}

It’s our old friend the command pattern again.  But this time, instead of requesting a specific named instance, we’ll ask our container for all instances.  Then it’s just a matter of looping through the instances and executing them one by one.
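
The bootstrap code can then be as small as this sketch (assuming a configured container in scope):

```csharp
// Ask the container for every registered IStartupTask implementation
// and execute each one exactly once, at application startup.
foreach (var task in container.GetAllInstances<IStartupTask>())
{
    task.Execute();
}
```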

Similar to the batch job example, each startup task is very atomic in its responsibilities.  If we need to enhance the concept of executing startup tasks, we again do not need to modify each task.

Pluggable strategies

One common pattern we get a lot of mileage out of is self-selecting strategies.  We did this in our implementation of input builders and model binders, where we defined a very simple interface:

public interface IInputBuilder
{
    bool IsMatch(InputBuilderContext context);
    string Build(InputBuilderContext context);
}

The first input builder that matched was the one that got built.  Unlike the MVC implementation, there was no need to hard-code the conventions or rules on which input builder was chosen.  Instead, the first one that matched was chosen, and we only needed to define the precedence in a single list in our container configuration.

If we want all strategies to have a crack at processing, that’s the chain of responsibility pattern, easily accomplished with a container.  Instead of finding the first service that matches, we just loop through them all, executing them all in turn.
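
Both variations are only a few lines once the container hands us the configured list.  A sketch (the host class and its injected collection are hypothetical):

```csharp
public class InputBuilderRunner
{
    // Populated by the container, in the precedence order configured there.
    private readonly IEnumerable<IInputBuilder> _builders;

    public InputBuilderRunner(IEnumerable<IInputBuilder> builders)
    {
        _builders = builders;
    }

    // Self-selecting strategy: the first matching builder wins.
    public string Build(InputBuilderContext context)
    {
        return _builders.First(b => b.IsMatch(context)).Build(context);
    }

    // Chain of responsibility: every matching builder gets a crack.
    public void BuildAll(InputBuilderContext context)
    {
        foreach (var builder in _builders.Where(b => b.IsMatch(context)))
        {
            builder.Build(context);
        }
    }
}
```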

Enrichment with decorators

One issue we ran into recently was a message handling execution engine that didn’t have a plugin point for exception logging.  It did, however, allow for a plugin point for instantiating the handlers.  With a single line of configuration code, we were able to add exception logging to all implementations of IHandler&lt;T&gt;, without needing to change any implementation.

The logging handler was just a simple try-catch, executing the inner composed handler using a decorator pattern.  But because the logging handler had its own dependencies, we were again able to take advantage of the container for wiring everything up.
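
The decorator itself might look something like this sketch (ILogger is an assumption here, and the enrichment registration, along the lines of StructureMap’s EnrichWith mechanism, varies by version):

```csharp
// Wraps any IHandler<T> in try-catch exception logging, delegating
// the real work to the inner, composed handler.
public class LoggingHandler<T> : IHandler<T>
{
    private readonly IHandler<T> _inner;
    private readonly ILogger _logger;

    public LoggingHandler(IHandler<T> inner, ILogger logger)
    {
        _inner = inner;
        _logger = logger;
    }

    public void Handle(T message)
    {
        try
        {
            _inner.Handle(message);
        }
        catch (Exception ex)
        {
            _logger.Error(ex);
            throw;
        }
    }
}
```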

This is another example of allowing different orthogonal design vectors to change without affecting each other.  None of the handlers needed to change, but using the container to instantiate them allowed me to enrich their behavior with decorators without modifying each individual handler.

Wrapping it up

Containers provide a fantastic pinch point for composing applications together.  When I started harnessing design patterns through the container, I felt I really achieved that “Inversion of Control” sweet spot that truly allowed for orthogonal design.  It wasn’t anything very different in the structure of my code, I still programmed against interfaces.  But by combining design patterns with container usage, I grew my use of the container far beyond just the “enabling testability” that dependency injection initially allows.


About Jimmy Bogard

I'm a technical architect with Headspring in Austin, TX. I focus on DDD, distributed systems, and any other acronym-centric design/architecture/methodology. I created AutoMapper and am a co-author of the ASP.NET MVC in Action books.
This entry was posted in Dependency Injection.
  • http://lozanotek.com/blog Javier Lozano

    Great post, Jimmy!

    Some of the topics you cover are the core concepts MVC Turbine was built on. Think of the application as an entry point for composition processes to happen, thus pushing all concerns that don’t apply to composition out of scope.

  • http://www.cauthon.com Darren Cauthon

    I totally agree, IoC adds a layer of “composability” to our applications that we’re not all taking advantage of. I like to think of it as a way to finally *use* the seams that we’ve been creating between our classes.

    Personally, I’ve had success using different contexts to resolve dependencies. It’s like lifetime management, but based more on a business situation than the lifetime of the object.

    For example, I might have an ICatalogContext interface with one method, GetProducts, that returns the products that appear on the page. If different prices should be shown depending on whether a person is logged in, I’ll make two implementations:

    GuestCatalogContext : ICatalogContext
    MemberCatalogContext : ICatalogContext

    and then create a class that the IoC container uses to decide which type to resolve when a request for ICatalogContext is made. So instead of writing one big CatalogContext implementation that has a bunch of logic and if’s built-in, I have two small classes that do one thing for one situation. Perfect for maintainability later.
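
    A sketch of what that selection might look like as a StructureMap registration (IUserSession and the exact wiring are hypothetical):

    ```csharp
    // Hypothetical: resolve ICatalogContext based on the current user,
    // instead of one big implementation full of if's.
    For<ICatalogContext>().Use(ctx =>
        ctx.GetInstance<IUserSession>().IsLoggedIn
            ? (ICatalogContext)ctx.GetInstance<MemberCatalogContext>()
            : ctx.GetInstance<GuestCatalogContext>());
    ```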

  • http://ampgt.com Scott Bellware

    Jimmy,

    The only real reason for moving lifecycle management away from a consumer is if the complexity of various lifecycle strategies reduces the cohesion of the consumer to a point where it’s no longer possible to gain understanding of the service at a glance.

    If the lifecycle of the dependency changes, then it’s very likely that the semantics of the service will change. If those semantics are not part of the service abstraction, then it’s possible that there’s knowledge missing from that abstraction. This affects the design elegance of the abstraction. This is an essential consideration for the usability of code and for reducing relearning. And those are productivity and human workflow concerns, and these concerns, like testability, have huge influence on the shape of development cost depending on how they’re tuned for the specific circumstances of a project.

    When you say “we don’t want to let the design of an implementation of a dependency affect the service” I hear robotic thinking. You can say it well enough, but there’s no real design principle that will universally serve the goals of software product development that supports that statement.

    I’m not saying that this kind of separation is necessarily bad, but it’s also not necessarily good. There are circumstances that are served by this pattern, and circumstances dis-served by the pattern. It isn’t the universal truth that static language pattern orthodoxy suggests.

    Inversion of Control doesn’t mean that we remove responsibility for wiring up dependencies from the service requesting or needing them. That’s what “Inversion of Control Framework” means. That’s more of a definition of dependency injection than Inversion of Control. Inversion of Control means that the control of creation of a dependency is removed from the module that uses the dependency. Lifetime management can certainly be combined in a framework with autowiring features, but that’s not specifically what Inversion of Control means as a design quality.

    As a side note, Inversion of Control tools and Inversion of Control principles seem to be increasingly mashed up into a single glob, and I think that the conglomeration of these two separate concerns creates some muddy water. You can see this in colloquialisms in the community where “IoC” has become synonymous with “IoC tool” or “IoC framework”.

    Lifecycle management and Inversion of Control share some concerns, but they’re not necessarily the same thing. Some frameworks smush these two concerns together, and do so to good effect. Some frameworks don’t smush these two concerns together, and do so to good effect. Either approach can be as easily influenced by the way that the programming language works as by the way that the tool works. And the way that the tool works will reflect its operating environment.

    This strikes me as an example of tools occluding principles. Not that you’re not getting benefit from it in your work, but there’s something being lost in not recognizing the precision in the subtlety in the difference between Inversion of Control and Inversion of Control Tool, and the tool’s uses and practices, and the reasons for its design. If the tool informs your whole concept of the principle, the principle’s whole meaning and power risks becoming something lesser than its entire potential.

  • http://www.lostechies.com/members/bogardj/default.aspx bogardj

    @Scott

    Thanks for the insight! I do agree that lifecycle management shouldn’t just be a switch that’s flipped on or off. Perhaps IoC is not the right name for everything that’s going on in containers these days.

  • http://isaiahperumalla.wordpress.com isaiah
  • Scott

    @Jimmy

    > Then it’s just a matter of looping through the instances and executing them one by one.

    I have used a similar approach – letting the container grab all instances of a type (in my case events) and executing them one by one. What do you do to control the order of these tasks, if necessary? I could see the need for running startup tasks in a specific order.

  • http://www.drrandom.org Casey Kramer

    This actually ties in nicely with what Udi Dahan has to say about “Making Roles Explicit” (http://www.infoq.com/presentations/Making-Roles-Explicit-Udi-Dahan). The overarching concept is that you explicitly define the roles in your application (which can, but don’t have to be, associated with a use case) and lean on the compositional aspects of IoC to build the correct behavior for a role. The “Profile” concept in the NServiceBus Generic Host is a perfect example of this. It allows you to specify a profile when running the service, and that profile dictates which mechanisms are used when the Message Bus is constructed. So for example, there is a built-in production profile which turns on persistent subscription storage (via a DB), among other things. The design of NServiceBus is such that you can define your own profiles, and utilize the profiles within your service in a simple, straightforward way (let’s hear it for OCP and SRP, huh?). I think our industry needs more of this kind of thinking.

  • http://ampgt.com Scott Bellware

    Casey,

    Is Udi’s counsel here on using an IoC tool due to the supremacy of an IoC tool for this job, or that this is just the way that Udi does things in that he’s been a static language developer for a good part of his career?

    Ultimately, is this some kind of best way to do it, or just the way we currently have available to us in C#?

  • http://ampgt.com Scott Bellware

    Scott and Jimmy,

    Why loop through a container rather than just loop through the subclasses of a class?

    Of course, it’s a loaded question because looping through the subclasses of a class is a non-trivial problem in .NET. Building a list of subclasses of a parent class suggests that something dynamic is going to be done with those classes.

    This is a case where using the assembly-scanning abilities of a container makes a lot of sense, but in the end, it makes sense because there really aren’t many better choices.

    Here’s how I’d do this in a language that has this feature as a first class capability.

    BusinessEvent.subclasses.each { |clazz| clazz.do_something }

    One implementation for this can be seen at: http://gist.github.com/591695

    This is possible because class definitions are runtime code rather than frozen at some point between the editor and runtime.

    Another way to do this would be to query the object space at runtime: http://gist.github.com/591707

    The object space is the active memory of a Ruby program. Every object in memory is available through the object space. Since every class in Ruby is an object, a program’s classes are also available through the object space. Note, that Ruby objects are garbage collected, so take this into consideration when using the object space.

    In either case, the code to collect subclasses can be generalized to a module and included into any base class that offers this service. It can also be added to the Class class, extending the service to every class in an app.

    This is a good example of a reasonable argument against container frameworks on the grounds of protecting design elegance from low cohesion components like composition containers that have to take on all of the dynamic programming patterns under the sun because static languages aren’t amenable to this style of programming out of the box.

    Ultimately, the responsibility for providing a list of subclasses is the responsibility of the base class of interest. If the environment doesn’t allow for this, then we have to fall back to workarounds.

    Nonetheless, it’s a darned good use of a framework in a static language. I just don’t think that these – as I’ve called them – “bionic crutches” should be celebrated when really they are signaling some fairly significant limitations that require remediation tools before they can be really useful. No doubt, they’re useful when needed, but the other way around this is to change the environment so that these needs are no longer concerns.

  • http://ampgt.com Scott Bellware

    Just as an aside, the reason that static object containers reduce design elegance is that they move knowledge that should be part of (or close to) the abstractions that the knowledge concerns to a more remote knowledge location. The immediacy of knowledge is reduced, affecting simplicity and clarity, and this is ultimately an effect on the ability for humans to understand the code under their noses by looking at the code under their noses. The more removed that an abstraction’s knowledge is, the less elegance it has, and the more error-prone the code will be for humans, and the more frustrating to learn or remember.

    If all you have is an environment where you must use something like an IoC framework to get by, then it’s a good thing that they exist. But when there are better alternatives available, it’s as much an imperative to master them as it was to master IoC frameworks back in the day before you had realized the value of IoC frameworks.

    When I talk about the learning trajectory pointing out of the .NET sphere, this is an example of what I mean.

    For me, I want to make the most of my abilities, and work in an environment that complements my understanding and ability to the absolute utmost – just as I had done when I started day-to-day use of an IoC framework in 2005 (admittedly late to the IoC party).

    The skills that we’ve learned from the programming patterns that are in the same sphere as IoC tools are vastly more effective once the constraints that require an IoC tool are lifted. I want all of that ability and all of that productivity – regardless of whether it runs on open source on Microsoft technology or open source on non-Microsoft technology.

    .NET is not my bottleneck. I hope that it doesn’t persist in being the bottleneck in this community, because there’s enough smarts and ability in this community to be really lit up by removing these same constraints that I mentioned.

    I wonder what people in this community could achieve if the fullest potential of their abilities were unleashed from mere frameworks and language constraints.

    Just a thought…

  • http://www.drrandom.org Casey Kramer

    Scott,
    Naturally I can’t speak for Udi directly, but my feeling after attending this particular talk was that he was presenting you with a new way of solving your problems, using tools that you have available right now, and with only a small amount of work (defining some interfaces and setting up the container to resolve them). The examples given were C# (maybe there was some Java too, don’t remember now), and so the details of the proposed solution were specific to C#/Java.

    Part of the reason for using generics and interfaces to define your roles, I think, was because it was semantically pleasant to work with in C#. So, if you have a class that implements an interface like IUseLazyLoading, then it is clear to the person looking at it what is going on. You actually see a lot of this in NServiceBus, where you have combinations of interfaces like IConfigureThisEndpoint and AsA_Publisher.

  • http://ampgt.com Scott Bellware

    Casey,

    Those semantic markers are available in most programming languages. It’s nothing special to C# or Java. Unfortunately though, these markers in C# and Java, and languages like them, introduce unnecessary constraints on code construction.