Los Techies Welcomes Derik Whittaker


    Los Techies would like to introduce, and extend a welcome to Derik Whittaker. Derik is a C# MVP, member of the AspInsiders group, community speaker, and Pluralsight author. Derik was previously a contributor at CodeBetter.com. Welcome, Derik!

    Ditch the Repository Pattern Already


    One pattern that still seems particularly common among .Net developers is the Repository pattern. I began using this pattern with NHibernate around 2006 and only abandoned its use a few years ago.

    I had read several articles over the years advocating abandoning the Repository pattern in favor of other approaches, and for a few years they served as a pebble in my shoe, but there were a few design principles whose application seemed to keep motivating me to use the pattern.  It wasn’t until a change of tooling and a shift in thinking about how these principles should be applied that I finally felt comfortable ditching the use of repositories, so I thought I’d recount my journey to provide some food for thought for those who still feel compelled to use the pattern.

    Mental Obstacle 1: Testing Isolation

    What I remember being the biggest barrier to moving away from the use of repositories was writing tests for components which interacted with the database.  About a year or so before I actually abandoned use of the pattern, I remember trying to stub out a class derived from Entity Framework’s DbContext after reading an anti-repository blog post.  I don’t remember the details now, but I remember it being painful and even exploring use of a 3rd-party library designed to help write tests for components dependent upon Entity Framework.  I gave up after a while, concluding it just wasn’t worth the effort.  It wasn’t as if my previous approach was pain-free, as at that point I was accustomed to stubbing out particularly complex repository method calls, but as with many things we often don’t notice friction to which we’ve become accustomed for one reason or another.  I had assumed that doing all that work to stub out my repositories was what I should be doing.

    Another principle that I picked up from somewhere (maybe the big xUnit Test Patterns book? … I don’t remember) that seemed to keep me bound to my repositories was that you shouldn’t write tests that depend upon dependencies you don’t own.  I believed at the time that I should be writing tests for Application Layer services (which later morphed into discrete dispatched command handlers), and the idea of stubbing out either NHibernate or Entity Framework violated my sensibilities.

    Mental Obstacle 2: Adherence to the Dependency Inversion Principle

    The Dependency Inversion Principle seems to be a source of confusion for many, which stems in part from the similarity of wording with the practice of Dependency Injection as well as from the fact that the principle’s formal definition reflects the platform from which the principle was conceived (i.e. C++).  One might say that the abstract definition of the Dependency Inversion Principle was too dependent upon the details of its origin (ba dum tss).  I’ve written about the principle a few times (perhaps my most succinct being this Stack Overflow answer), but put simply, the Dependency Inversion Principle has as its primary goal the decoupling of the portions of your application which define policy from the portions which define implementation.  That is to say, this principle seeks to keep the portions of your application which govern what your application does (e.g. workflow, business logic, etc.) from being tightly coupled to the portions of your application which govern the low level details of how it gets done (e.g. persistence to a SQL Server database, use of Redis for caching, etc.).

    A good example of a violation of this principle, which I recall from my NHibernate days, was that once upon a time NHibernate was tightly coupled to log4net.  This was later corrected, but at one time the NHibernate assembly had a hard dependency on log4net.  You could use a different logging library for your own code if you wanted, and you could use binding redirects to use a different version of log4net if you wanted, but at one time if you had a dependency on NHibernate then you had to deploy the log4net library.  I think this went unnoticed by many due to the fact that most developers who used NHibernate also used log4net.

    When I first learned about the principle, I immediately recognized that it seemed to have limited advertised value for most business applications in light of what Udi Dahan labeled The Fallacy Of ReUse.  That is to say, properly understood, the Dependency Inversion Principle has as its primary goal the reuse of components, keeping those components decoupled from dependencies which would prevent them from being easily reused with other implementation components, but your application and business logic isn’t something that is likely to ever be reused in a different context.  The takeaway is basically that the advertised value of adhering to the Dependency Inversion Principle is really more applicable to libraries like NHibernate, Automapper, etc. and not so much to that workflow your team built for Acme Inc.’s distribution system.  Nevertheless, the Dependency Inversion Principle had the practical value of enabling an architecture style Jeffrey Palermo labeled the Onion Architecture.  Specifically, in contrast to traditional 3-layered architecture models, where the dependency of the Business Layer upon the Data Access Layer precluded using something like Data Access Logic Components to encapsulate an ORM and map data directly to entities within the Business Layer, inverting the dependency between the Business Layer and the Data Access Layer allowed the application to interact with the database while seemingly abstracting away the details of the data access technology used.

    While I always saw the fallacy in strictly trying to apply the Dependency Inversion Principle to invert the implementation details of how I got my data from my application layer so that I’d someday be able to use the application in a completely different context, it seemed the academically astute and in vogue way of doing Domain-driven Design at the time, seemed consistent with the GoF’s advice to program to an interface rather than an implementation, and provided an easier way to write isolation tests than trying to partially stub out ORM types.

    The Catalyst

    For the longest time, I resisted using Entity Framework.  I had become fairly proficient at using NHibernate, the early versions of Entity Framework were years behind in features and maturity, it didn’t support Domain-driven Design well, and there was a fairly steep learning curve with little payoff. A combination of things happened, however, that began to make it harder to ignore. First, a lot of the NHibernate supporters (like many within the Alt.Net crowd) moved on to other platforms like Ruby and Node. Second, despite it lacking many features, .Net developers began flocking to the framework in droves due to its backing and promotion by Microsoft. So, eventually I found it impossible to avoid, which led to me trying to apply the same patterns I’d used before with this newer-to-me framework.

    To be honest, once I adapted my repository implementation to Entity Framework everything mostly just worked, especially for the really simple stuff. Eventually, though, I began to see little ways I had to modify my abstraction to accommodate differences in how Entity Framework did things from how NHibernate did things.  What I discovered was that, while my repositories allowed my application code to be physically decoupled from the ORM, the way I was using the repositories was in small ways semantically coupled to the framework.  I wish I had kept some sort of record every time I ran into something, as the only real example I can recall now was the motivation, with certain design approaches, to expose the SaveChanges method for Unit of Work implementations. I don’t want to make more of the semantic coupling argument against repositories than it’s worth, but observing little places where my abstractions were leaking, combined with the pebble in my shoe from developers who I felt were far better than me saying I shouldn’t use them, led me to begin rethinking things.

    More Effective Testing Strategies

    It was actually a few years before I stopped using repositories that I stopped stubbing them out.  Around 2010, I learned that you can use Test-Driven Development to achieve 100% test coverage for the code for which you’re responsible, but that things may still not work when you plug your code in for the first time alongside a team that wasn’t designing to the same specification and wasn’t writing any tests at all.  It was then that I got turned on to Acceptance Test Driven Development.  What I found was that writing high-level subcutaneous tests (i.e. skipping the UI layer, but otherwise end-to-end) was overall easier, was possible to align with acceptance criteria contained within a user story, provided more assurance that everything worked as a whole, and was easier to get teams on board with.  Later on, I surmised that I really shouldn’t have been writing isolation tests for components which, for the most part, are just specialized facades anyway.  All an isolation test for a facade really says is “did I delegate this operation correctly”, and if you’re not careful you can end up just writing a whole bunch of tests that basically just validate whether you correctly configured your mocking library.

    So, by the time I started rethinking my use of repositories, I had long since stopped using them for test isolation.

    Taking the Plunge

    It was actually about a year after I had become convinced that repositories were unnecessary, useless abstractions that I started working with a new codebase I had the opportunity to steer.  Once I eliminated them from the equation, everything got so much simpler.   Having been repository-free for about two years now, I think I’d have a hard time joining a team that had an affinity for them.

    Conclusion

    If you’re still using repositories, and you don’t have some other hangup to get over first (like writing unit tests for your controllers or application services), then give the repository-free lifestyle a try.  I bet you’ll love it.

    On Migrating Los Techies to Github Pages


    We recently migrated Los Techies from a multi-site installation of WordPress to Github Pages, so I thought I’d share some of the more unique portions of the process. For a straightforward guide on migrating from WordPress to Github Pages, Tomomi Imura has published an excellent guide available here that covers exporting content, setting up a new Jekyll site (what Github Pages uses as its static site engine), porting the comments, and DNS configuration. The purpose of this post is really just to cover some of the unique aspects that related to our particular installation.

    Step 1: Exporting Content

    Having recently migrated my personal blog from WordPress to Github Pages using the aforementioned guide, I thought the process of doing the same for Los Techies would be relatively easy. Unfortunately, due to the fact that we had a woefully out-of-date installation of WordPress, migrating Los Techies proved to be a bit problematic. First, the WordPress to Jekyll Exporter plugin wasn’t compatible with our version of WordPress. Additionally, our installation of WordPress couldn’t be upgraded in place for various reasons. As a result, I ended up taking the rather labor-intensive path of exporting each author’s content using the default WordPress XML export and then, for each author, importing it into an up-to-date installation of WordPress on the hosting service I had previously used for my personal blog, exporting the posts using the Jekyll Exporter plugin, and then deleting the posts in preparation for the next iteration. This resulted in a collection of zipped, mostly ready posts for each author.

    Step 2: Configuring Authors

    Our previous platform utilized the multi-site features of WordPress to facilitate a single site with multiple contributors. By default, Jekyll looks for content within a special folder in the root of the site named _posts, but there are several issues with trying to represent multiple contributors within the _posts folder. Fortunately, Jekyll has a feature called Collections which allows you to set up groups of posts, each with their own associated configuration properties. Once each author’s posts were copied to corresponding collection folders, a series of scripts were written to create author-specific index.html, archive.html, and tags.html files which are used by a custom post layout. Additionally, due to the way the WordPress content was exported, the permalinks generated for each post did not reflect the author’s subdirectory, so another script was written to strip out all the generated permalinks.

    Step 3: Correcting Liquid Errors

    Jekyll uses a language called Liquid as its templating engine. Once all the content was in place, any post containing double curly braces had that content interpreted as Liquid commands, which ended up breaking the build process. To address this, each offending post had to be edited to wrap the content in the Liquid directives {% raw %} … {% endraw %} to keep it from being interpreted by the Liquid parser. Additionally, there were a few other odd things causing issues (such as posts with non-breaking space characters) for which more scripts were written to modify the posts into non-offending content.

    Step 4: Enabling Disqus

    The next step was to get Disqus comments working for the posts. By default, Disqus uses the page URL as the page identifier, so as long as the paths match, enabling Disqus should just work. The WordPress Disqus plugin we were using, however, utilized a unique post id and guid as the Disqus page identifier, so the Disqus JavaScript had to be configured to use these properties. These values were preserved by the Jekyll exporter, but unfortunately the generated id property in the Jekyll front matter was being internally overridden by Jekyll, so another script had to be written to rename the properties used for these values in all the posts. Properties were added to the Collection configuration in the main _config.yml to designate the Disqus shortname for each author and to allow authors to toggle whether Disqus is enabled or disabled for their posts.
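
    To give a rough idea of what this involves, the following sketch shows the general shape of the Disqus embed configuration when the page identifier is supplied from values carried in the post’s front matter. The URL, identifier, and shortname values shown are placeholders rather than the actual ones used:

    // Sketch of the Disqus embed configuration (placeholder values shown).
    // In the real layout, the identifier is populated from the renamed front
    // matter properties carried over from the WordPress Disqus plugin.
    var disqus_config = function () {
      this.page.url = 'https://lostechies.com/some-author/some-post/';
      this.page.identifier = '12345 https://lostechies.com/?p=12345';
    };
    (function () {
      var d = document, s = d.createElement('script');
      s.src = 'https://EXAMPLE-SHORTNAME.disqus.com/embed.js'; // per-author shortname
      s.setAttribute('data-timestamp', +new Date());
      (d.head || d.body).appendChild(s);
    })();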

    Step 5: Converting Gists

    Many authors at Los Techies used a Gist WordPress plugin to embed code samples within their posts. Github Pages supports a jekyll-gist plugin, so another script was written to modify all the posts to use Liquid syntax to denote the gists. This mostly worked, but there were still a number of posts which had to be manually edited to deal with different ways people were denoting their gists. In retrospect, it would have been better to use JavaScript rather than the Jekyll gist plugin due to the size of the Los Techies site. Every plugin you use adds time to the overall build process which can become problematic as we’ll touch on next.

    Step 6: Excessive Build-time Mitigation

    The first iteration of the conversion used the Liquid syntax for generating the sidebar content which lists recent site-wide posts, recent author-specific posts, and the list of contributing authors. This resulted in extremely long build times, but it worked and who cares once the site is rendered, right? Well, what I found out was that Github has a hard cut off of 10 minutes for Jekyll site builds. If your site doesn’t build within 10 minutes, the process gets killed. At first I thought “Oh no! After all this effort, Github just isn’t going to support a site our size!” I then realized that rather than having every page loop over all the content, I could create a Jekyll template to generate JSON content one time and then use JavaScript to retrieve the content and dynamically generate the sidebar DOM elements. This sped up the build significantly, taking the build from close to a half-hour to just a few minutes.
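
    As a rough sketch of the approach (the JSON path and element id below are hypothetical), the pre-generated JSON is fetched once when the page loads and the sidebar lists are built client-side:

    // Sketch: fetch JSON produced once at build time by a Jekyll template and
    // build the "recent posts" sidebar list in the browser, instead of having
    // Liquid loop over all the content for every page.
    fetch('/feeds/recent-posts.json')
      .then(function (response) { return response.json(); })
      .then(function (posts) {
        var list = document.getElementById('recent-posts');
        posts.forEach(function (post) {
          var item = document.createElement('li');
          var link = document.createElement('a');
          link.href = post.url;
          link.textContent = post.title;
          item.appendChild(link);
          list.appendChild(item);
        });
      });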

    Step 8: Converting WordPress Uploaded Content

    Another headache that presented itself was how WordPress represented uploaded content. Everything that anyone had ever uploaded to the site for images and downloads used within their posts was stored in a cryptic folder structure. Each folder had to be interrogated to see which of the files contained therein matched which author, the folder structure had to be reworked to accommodate the nature of the Jekyll site, and more scripts had to be written to edit everyone’s posts to change paths to the new content. Of course, the scripts only worked for about 95% of the posts; a number of posts had to be edited manually to fix issues such as non-printable characters being used in file names.

    Step 9: Handling Redirects

    The final step to get the initial version of the conversion complete was to handle redirects which were formerly handled by .htaccess. The Los Techies site started off using Community Server prior to migrating to WordPress, and redirects were set up using .htaccess to maintain the paths to all the previous content locations. Github Pages doesn’t support .htaccess, but it does support a Jekyll redirect plugin. Unfortunately, the plugin requires adding a redirect property to each post requiring a redirect, and we had several thousand, so I had to write another script to read the .htaccess file and figure out which post went with each line. Another unfortunate aspect of using the Jekyll redirect plugin is that it adds overhead to the build time which, as discussed earlier, can become an issue.

    Step 10: Enabling Aggregation

    Once the conversion was complete, I decided to dedicate some time to figuring out how we might be able to aggregate posts from external feeds. The first step was finding a service that could aggregate feeds together. You might think there would be a number of things that do this, and while I did find at least a half-dozen services, there were only a couple that allowed you to maintain a single feed and add/remove source feeds while preserving the aggregated feed. Most seemed to only allow a one-time aggregation. For this I settled on a site named feed.informer.com. Next, I replaced the landing page with JavaScript that dynamically builds the page from the aggregated feed, did the same for the recent author posts section, and created a special external template capable of making an individual post look like it’s actually hosted on Los Techies. The final result is a site that displays a mixture of local content and aggregated content.

    Conclusion

    Overall, the conversion was way more work than I anticipated, but I believe it was worth the effort. The site is now much faster than it used to be, and we aren’t having to pay a hosting service to host it.

    Hello, React! – A Beginner’s Setup Tutorial


    React has been around for a few years now and there are quite a few tutorials available. Unfortunately, many are outdated, overly complex, or gloss over the configuration needed to get started. Tutorials which side-step configuration by using jsfiddle or code generators are great when you just want to focus on the framework itself, but many leave beginners struggling to piece things together when they’re ready to create a simple React application from scratch. This tutorial is intended to help beginners get up and going with React by manually walking through a minimal setup process.

    A Simple Tutorial

    This tutorial is merely intended to help walk you through the steps to getting a simple React example up and running. When you’re ready to dive into actually learning the React framework, a great list of tutorials can be found here.

    There are several build, transpiler, and bundling tools from which to select when working with React. For this tutorial, we’ll be using Node, NPM, Webpack, and Babel.

    Step 1: Install Node

    Download and install Node for your target platform. Node distributions can be obtained here.

    Step 2: Create a Project Folder

    From a command line prompt, create a folder where you plan to develop your example.

    $> mkdir hello-react
    

    Step 3: Initialize Project

    Change directory into the example folder and use the Node Package Manager (npm) to initialize the project:

    $> cd hello-react
    $> npm init --yes
    

    This results in the creation of a package.json file. While not technically necessary for this example, creating this file will allow us to persist our packaging and runtime dependencies.

    Step 4: Install React

    React is broken up into a core framework package and a package related to rendering to the Document Object Model (DOM).

    From the hello-react folder, run the following command to install these packages and add them to your package.json file:

    $> npm install --save-dev react react-dom
    

    Step 5: Install Babel

    Babel is a transpiler, which is to say it’s a tool for converting one language or language version to another. In our case, we’ll be converting EcmaScript 2015 to EcmaScript 5.
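
    To give a rough sense of what this means (the output below is illustrative of the idea rather than Babel’s exact output), a transpiler takes EcmaScript 2015 source like the first snippet and produces equivalent EcmaScript 5 like the second:

    // EcmaScript 2015 input: const, arrow function, template literal.
    const greet = name => `Hello, ${name}!`;

    // Roughly equivalent EcmaScript 5 output produced by a transpiler.
    var greet = function (name) {
      return 'Hello, ' + name + '!';
    };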

    From the hello-react folder, run the following command to install babel:

    $> npm install --save-dev babel-core
    

    Step 6: Install Webpack

    Webpack is a module bundler. We’ll be using it to package all of our scripts into a single script we’ll include in our example Web page.

    From the hello-react folder, run the following command to install webpack globally:

    $> npm install webpack --global
    

    Step 7: Install Babel Loader

    Babel loader is a Webpack plugin for using Babel to transpile scripts during the bundling process.

    From the hello-react folder, run the following command to install babel loader:

    $> npm install --save-dev babel-loader
    

    Step 8: Install Babel Presets

    Babel presets are collections of plugins needed to support a given feature. For example, the latest version of babel-preset-es2015 at the time of this writing will install 24 plugins which enable Babel to transpile ECMAScript 2015 to ECMAScript 5. We’ll be using presets for ES2015 as well as presets for React. The React presets are primarily needed for processing JSX.
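
    For a rough sense of why the React preset is needed (again, illustrative rather than Babel’s exact output), JSX elements are transpiled into plain function calls:

    // JSX input:
    //   const element = <div className="greeting">Hello, React!</div>;
    //
    // is transpiled into a plain JavaScript call along the lines of:
    var element = React.createElement(
      'div',
      { className: 'greeting' },
      'Hello, React!'
    );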

    From the hello-react folder, run the following command to install the babel presets for both ES2015 and React:

    $> npm install --save-dev babel-preset-es2015 babel-preset-react
    

    Step 9: Configure Babel

    In order to tell Babel which presets we want to use when transpiling our scripts, we need to provide a babel config file.

    Within the hello-react folder, create a file named .babelrc with the following contents:

    {                                    
      "presets" : ["es2015", "react"]    
    }                                 
    

    Step 10: Configure Webpack

    In order to tell Webpack we want to use Babel, where our entry point module is, and where we want the output bundle to be created, we need to create a Webpack config file.

    Within the hello-react folder, create a file named webpack.config.js with the following contents:

    const path = require('path');
     
    module.exports = {
      entry: './app/index.js',
      output: {
        path: path.resolve('dist'),
        filename: 'index_bundle.js'
      },
      module: {
        rules: [
          { test: /\.js$/, loader: 'babel-loader', exclude: /node_modules/ }
        ]
      }
    }
    

    Step 11: Create a React Component

    For our example, we’ll just be creating a simple component which renders the text “Hello, React!”.

    First, create an app sub-folder:

    $> mkdir app
    

    Next, create a file named app/index.js with the following content:

    import React from 'react';
    import ReactDOM from 'react-dom';
     
    class HelloWorld extends React.Component {
        render() {
              return (
                      <div>
                        Hello, React!
                      </div>
                    )
            }
    };
     
    ReactDOM.render(<HelloWorld />, document.getElementById('root'));
    

    Briefly, this code imports the react and react-dom modules, defines a HelloWorld class whose render method returns an element containing the text “Hello, React!” expressed using JSX syntax, and finally renders an instance of the HelloWorld component (also using JSX syntax) to the DOM.

    If you’re completely new to React, don’t worry too much about trying to fully understand the code. Once you’ve completed this tutorial and have an example up and running, you can move on to one of the aforementioned tutorials, or work through React’s Hello World example to learn more about the syntax used in this example.

    Note: In many examples, you will see the following syntax:

    var HelloWorld = React.createClass( {
        render() {
              return (
                      <div>
                        Hello, React!
                      </div>
                    )
            }
    });
    

    This syntax is how classes were defined in older versions of React and will therefore be what you see in older tutorials. As of React version 15.5.0 use of this syntax will produce the following warning:

    Warning: HelloWorld: React.createClass is deprecated and will be removed in version 16. Use plain JavaScript classes instead. If you’re not yet ready to migrate, create-react-class is available on npm as a drop-in replacement.

    Step 12: Create a Webpage

    Next, we’ll create a simple html file which includes the bundled output defined in step 10 and declare a <div> element with the id “root” which is used by our react source in step 11 to render our HelloWorld component.

    Within the hello-react folder, create a file named index.html with the following contents:

    <html>
      <div id="root"></div>
      <script src="./dist/index_bundle.js"></script>
    </html>
    

    Step 13: Bundle the Application

    To convert our app/index.js source to ECMAScript 5 and bundle it with the react and react-dom modules we’ve included, we simply need to execute webpack.

    Within the hello-react folder, run the following command to create the dist/index_bundle.js file referenced by our index.html file:

    $> webpack
    

    Step 14: Run the Example

    Using a browser, open up the index.html file. If you’ve followed all the steps correctly, you should see the following text displayed:

    Hello, React!
    

    Conclusion

    Congratulations! After completing this tutorial, you should have a pretty good idea about the steps involved in getting a basic React app up and going. Hopefully this will save some absolute beginners from spending too much time trying to piece these steps together.

    Exploring TypeScript


    A proposal to use TypeScript was recently made within my development team, so I’ve taken a bit of time to investigate the platform.  This article reflects my thoughts and conclusions on where the platform is at this point.

     

    TypeScript: What is It?

    TypeScript is a scripting language created by Microsoft which provides static typing and a class-based object-oriented programming paradigm and transpiles to JavaScript.  In contrast to other compile-to-JavaScript languages such as CoffeeScript and Dart, TypeScript is a superset of JavaScript, which means that TypeScript introduces syntax enhancements on top of the JavaScript language.

     

    Recent Rise In Popularity

    TypeScript made its debut in late 2012 and reached its 1.0 release in April 2014.  Community interest has been fairly marginal since its debut, but has shown an increase since the announcement that the next version of Google’s popular Angular framework would be written in TypeScript.

    The following Google Trends chart shows the interest parallel between Angular 2 and TypeScript from 2014 to present:

     

    The Good

    Type System

    TypeScript provides an optional type system which can aid in catching certain types of programming errors at compile time.  The information derived from the type system also serves as the foundation for most of the tooling surrounding TypeScript.

    The following is a simple example showing a basic usage of the type system:

    interface Person {
        firstName: string;
        lastName: string;
    }
    
    class Greeter {
        greeting: string;
        constructor(message: string) {
            this.greeting = message;
        }
        greet(person: Person) {
            return this.greeting + " " + person.firstName + " " + person.lastName;
        }
    }
    
    let greeter = new Greeter("Hello,");
    let person = { firstName: "John", lastName: "Doe" };
    
    document.body.innerHTML = greeter.greet(person);
    

    In this example, a Person interface is declared with two string properties: firstName and lastName.  Next, a Greeter class is created with a greet() function which is declared to take a parameter of type Person.  Then an instance of Greeter and an object literal satisfying the Person interface are created, and the Greeter instance’s greet() function is invoked with the person object.  At compile time, TypeScript is able to detect whether the object passed to the greet() function conforms to the Person interface and whether the values assigned to the expected properties are of the expected type.
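
    For example, the following hypothetical additions to the listing above show the kind of error the compiler will catch:

    // Hypothetical additions to the listing above.
    greeter.greet({ firstName: "Jane", lastName: "Doe" }); // OK: satisfies Person

    // The following call would be rejected at compile time because the object
    // literal is missing the lastName property required by the Person interface:
    // greeter.greet({ firstName: "Jane" });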

    Tooling

    While the type system and programming paradigm introduced by TypeScript are its key features, it’s really the tooling facilitated by the type system that makes the platform shine.  Being notified of syntax errors at compile time is helpful, but it’s really the productivity that stems from features such as design-time type checking, intellisense/code-completion, and refactoring that make TypeScript compelling.

    TypeScript is currently supported by many popular IDEs including Visual Studio, WebStorm, Sublime Text, Brackets, and Eclipse.

    EcmaScript Foundation

    One of the differentiators of TypeScript from other languages which transpile to JavaScript (CoffeeScript, Dart, etc.) is that TypeScript builds upon the JavaScript language.  This means that all valid JavaScript code is valid TypeScript code.

    Idiomatic JavaScript Generation

    One of the goals of the TypeScript team was to ensure the TypeScript compiler emitted idiomatic JavaScript.  This means the code produced by the TypeScript compiler is readable and generally follows normal JavaScript conventions.

     

    The Not So Good

    Type Definitions and 3rd-Party Libraries

    TypeScript requires type definitions to be created for 3rd-party code in order to realize many of the benefits of the tooling.  While the DefinitelyTyped project provides type definitions for the most popular JavaScript libraries used today, there will probably be the occasion where the library you want to use has no type definition file.

    Moreover, interfaces maintained by 3rd-party sources are somewhat antithetical to their primary purpose.  Interfaces should serve as contracts for the behavior of a library.  If the interfaces are maintained by a 3rd-party, however, they can’t be accurately described as “contracts” since no implicit promise is being made by the library author that the interface being provided accurately matches the library’s behavior.  It’s probably the case that this doesn’t prove to be much of an issue in practice, but at minimum I would think relying upon type definitions created by 3rd parties would eventually lead to the available type definitions lagging behind new releases of the libraries being used.
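
    For those unfamiliar with type definitions, the following is a minimal sketch of what a declaration file for a hypothetical library might look like (the module name, function, and options type are all made up for illustration):

    // greeting-lib.d.ts -- a hypothetical declaration file describing the shape
    // of a plain JavaScript library so that TypeScript tooling can type-check
    // calls into it and offer code completion.
    declare module "greeting-lib" {
      export interface GreetingOptions {
        punctuation?: string;
      }
      export function greet(name: string, options?: GreetingOptions): string;
    }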

    Type System Overhead

    Introducing a type system is a bit of a double-edged sword.  While a type system can provide a lot of benefits, it also adds syntactical overhead to a codebase.  In some cases this can result in the code you maintain actually being harder to read and understand than the code being generated.  This can be illustrated using Anders Hejlsberg’s example presented at Build 2014.

    The TypeScript source in the first listing shows a generic sortBy method which takes a callback for retrieving the value by which to sort while the second listing shows the generated JavaScript source:

    interface Entity {
    	name: string;
    }
    
    function sortBy<T>(a: T[], keyOf: (item: T) => any): T[] {
    	var result = a.slice(0);
    	result.sort(function(x, y) {
    		var kx = keyOf(x);
    		var ky = keyOf(y);
    		return kx > ky ? 1: kx < ky ? -1 : 0;
    	});
    	return result;
    }
    
    var products = [
    	{ name: "Lawnmower", price: 395.00, id: 345801 },
    	{ name: "Hammer", price: 5.75, id: 266701 },
    	{ name: "Toaster", price: 19.95, id: 400670 },
    	{ name: "Padlock", price: 4.50, id: 560004 }
    ];
    var sorted = sortBy(products, x => x.price);
    document.body.innerText = JSON.stringify(sorted, null, 4);
    
    function sortBy(a, keyOf) {
        var result = a.slice(0);
        result.sort(function (x, y) {
            var kx = keyOf(x);
            var ky = keyOf(y);
            return kx > ky ? 1 : kx < ky ? -1 : 0;
        });
        return result;
    }
    var products = [
        { name: "Lawnmower", price: 395.00, id: 345801 },
        { name: "Hammer", price: 5.75, id: 266701 },
        { name: "Toaster", price: 19.95, id: 400670 },
        { name: "Padlock", price: 4.50, id: 560004 }
    ];
    var sorted = sortBy(products, function (x) { return x.price; });
    document.body.innerText = JSON.stringify(sorted, null, 4);
    

    Comparing the two signatures, which is easier to understand?

    TypeScript

    function sortBy<T>(a: T[], keyOf: (item: T) => any): T[]

    JavaScript

    function sortBy(a, keyOf)

    It might be reasoned that the TypeScript version should be easier to understand given that it provides more information, but many would disagree that this is in fact the case.  The reason for this is that the TypeScript version adds quite a bit of syntax to explicitly describe information that can otherwise be deduced fairly easily.  In many ways this is similar to how we process natural language.  When we communicate, we don’t encode each word with its grammatical function (e.g. “I [subject] bought [past tense verb] you [indirect object] a [indefinite article] gift [direct object].”)  Rather, we rapidly and subconsciously make guesses based on familiarity with the vocabulary, context, convention and other such signals.

     In the case of the sortBy example, we can guess at the parameters and return type for the function faster than we can parse the type syntax.  This becomes even easier if descriptive names are used (e.g. sortByKey(array, keySelector)).  Sometimes implicit expression is simply easier to understand.

    Now to be fair, there are cases where TypeScript is arguably going to be more clear than the generated JavaScript (and for similar reasons).  Consider the following listing:

    class Auto{
      constructor(public wheels = 4, public doors?){
      }
    }
    var car = new Auto();
    car.doors = 2;
    
    var Auto = (function () {
        function Auto(wheels, doors) {
            if (wheels === void 0) { wheels = 4; }
            this.wheels = wheels;
            this.doors = doors;
        }
        return Auto;
    }());
    var car = new Auto();
    car.doors = 2;
    

    In this example, the TypeScript version results in less syntax noise than the generated JavaScript version.   Of course, this is a comparison between TypeScript and its generated syntax rather than the following syntax many may have used:

    wheels = wheels || 4;

    Community Alignment

    While TypeScript is a superset of JavaScript, this deserves some qualification.  Unlike languages such as CoffeeScript and Dart which also compile to JavaScript, TypeScript starts with the EcmaScript specification as the base of its language.  Nevertheless, TypeScript is still a separate language.

    A team’s choice to maintain an application in TypeScript over JavaScript isn’t quite the same thing as choosing to implement an application in C# version 6 instead of C# version 5.  TypeScript isn’t the promise: “Programming with the ECMAScript of tomorrow ... today!”.  Rather, it’s a language that layers a different programming paradigm on top of JavaScript.  While you can choose how much of the feature superset and programming paradigm you wish to use, the more features and approaches peculiar to TypeScript that are adopted the further the codebase will diverge from standard JavaScript syntax and conventions.

    A codebase that fully leverages TypeScript can tend to look far more like C# than standard JavaScript.  In many ways, TypeScript is the perfect front-end development environment for C# developers as it provides a familiar syntax and programming paradigm to which they are already accustomed.  Unfortunately, developers who spend most of their time in C# often struggle with JavaScript syntax, conventions, and patterns.  The same might be expected to be true for TypeScript developers who utilize the language to emulate object-oriented development in C#.

    Ultimately, the real negative I see with this is that (at least right now) TypeScript doesn’t represent how the majority of Web development is being done in the community.  This has implications on the availability of documentation, availability of online help, candidate pool size, marketability, and skill portability.

    Consider the following chart which compares the current job openings available for JavaScript and TypeScript:

    Source: simplyhired.com - August 2016

    Now, the fact that there may be far less TypeScript jobs out there than JavaScript jobs doesn’t mean that TypeScript isn’t going to be the next big thing.  What it does mean, however, is that you are going to experience less friction in the aforementioned areas if you stick with standard EcmaScript.  

    Alternatives

    For those considering TypeScript, the following are a couple of options you might consider before converting just yet.

    ECMAScript 2015

    If you’re  interested in TypeScript and currently still writing ES5 code, one step you might consider is to begin using ES2015.  In John Papa’s article: “Understanding ES5, ES2015 and TypeScript”, he writes:

    Why Not Just use ES2015?  That’s a great option! Learning ES2015 is a huge leap from ES5. Once you master ES2015, I argue that going from there to TypeScript is a very small step. In many ways, taking the time to learn ECMAScript 2015 is the best option even if you think you’re ready to start using TypeScript.  Making the journey from ES5 to ES2015 and then later on to TypeScript will help you to clearly understand which new features are standard ECMAScript and which are TypeScript … knowledge you’re likely to be fuzzy on if you move straight from ES5 to TypeScript.

    Flow

    If you’ve already become convinced that you need a type system for JavaScript development or you’re just looking to test the waters, you might consider a lighter-weight alternative to the TypeScript platform: Facebook’s Flow project.  Flow is a static type checker for JavaScript designed to gain static type checking benefits  without losing the “feel” of coding in JavaScript and in some cases it does a better job at catching type-related errors than TypeScript.

    For the most part, Flow’s type system is identical to that of TypeScript, so it shouldn’t be too hard to convert to TypeScript down the road if desired.  Several IDEs have Flow support including WebStorm, Sublime Text, Atom, and of course Facebook’s own Nuclide.  As of August 2016, Flow also supports Windows (https://flowtype.org/blog/2016/08/01/Windows-Support.html).  Unfortunately this support has only recently become available, so Flow doesn’t yet enjoy the same IDE support on Windows as it does on OSX and Linux platforms.  IDE support can likely be expected to improve going forward.

    Test-Driven Development

    If you’ve found the primary appeal of TypeScript to be the immediate feedback you receive from the tooling, another methodology for achieving this (which has far greater benefits) is the practice of Test-Driven Development (TDD). The TDD methodology not only provides a rapid feedback cycle, but (if done properly) results in duplication-free code that is more maintainable by constraining the team to only developing the behavior needed by the application, and results in a regression-test suite which provides a safety net for future modifications as well as documentation for how the system is intended to be used. Of course, these same benefits can be realized with TypeScript development as well, but teams practicing TDD may find less need for TypeScript’s compiler-generated error checking.

     

    Conclusion

    After taking some time to explore TypeScript, I’ve found that aspects of its ecosystem are very compelling, particularly the tooling that’s available for the platform.  Nevertheless, it still seems a bit early to know what role the platform will play in the future of Web development.

    Personally, I like the JavaScript language and, while I see some advantages of introducing type checking, I think a wiser course for now would be to invest in learning EcmaScript 2015 and keep a watchful eye on TypeScript adoption going forward.

    Git on Windows: Whence Cometh Configuration


    I recently went through the process of setting up a new development environment on Windows which included installing Git for Windows. At one point in the course of tweaking my environment, I found myself trying to determine from which config file a particular setting originated. The command ‘git config --list’ showed the setting, but ‘git config --list --system’, ‘git config --list --global’, and ‘git config --list --local’ all failed to reflect the setting. Looking at the options for config, I discovered you can add a ‘--show-origin’ switch, which led to a discovery: Git for Windows has an additional location from which it derives your configuration.

    It turns out that, since the last time I installed git on Windows, a change was made for the purposes of sharing git configuration across different git projects (namely, libgit2 and Git for Windows) whereby a Windows-specific location is now used as the lowest setting precedence (i.e. the default settings). This is the file C:\ProgramData\Git\config. It doesn’t appear git added a way to list or edit this file as a well-known location (e.g. ‘git config --list windows’), so it’s not particularly discoverable aside from knowing about the ‘--show-origin’ switch.

    So the order in which Git for Windows sources configuration information is as follows:

    1. C:\ProgramData\Git\config
    2. system config (e.g. C:\Program Files\Git\mingw64\etc\gitconfig)
    3. global config (e.g. %HOMEPATH%\.gitconfig)
    4. local config (repository-specific .git/config)

    Perhaps this article might help the next soul who finds themselves trying to figure out from where some seemingly magical git setting is originating.

    Separation of Concerns: Application Builds & Continuous Integration


    I’ve always had an interest in application build processes. From the start of my career, I’ve generally been in the position of establishing the solution architecture for the projects I’ve participated in and this has usually involved establishing a baseline build process.

    My career began as a Unix C developer while I was still in college, where many of my responsibilities involved writing tools in both C and various Unix shell scripting languages which were deployed to other workstations throughout the country. From there, I moved on to Unix C-CGI Web development and worked a number of years with Make files. With the advent of Java, I began using tools like Ant and Maven for several more years before switching to the .Net platform, where I used open source build tools like NAnt until Microsoft introduced MSBuild with its 2.0 release. Upon moving to the Austin, TX area, I was greatly influenced by what was the early seat of the Alt.Net movement. It was there that I abandoned what in hindsight has always been a ridiculous idea … trying to script a build using XML. For the next 4-5 years, I used Rake to define all of my builds. Starting last year, I began using Gulp and associated tooling on the Node platform for authoring .Net builds.

    Throughout this journey of working with various build technologies, I’ve formed a few opinions along the way. One of these opinions is that the Build process shouldn’t be coupled to the Continuous Integration process.

    A project should have a build process which exists and can be executed independent of the particular continuous integration tool one chooses. This allows builds to be created and maintained on the developer’s local machine. The particular build steps involved in building a given application are inherently part of its ontology. What compilers and preprocessors need to be used, how dependencies are obtained and published, when and how configuration values are supplied for different environments, how and where automated test suites are run, how the application distribution is created … all of these are concerns whose definition and orchestration are particular to a given project. Such concerns should be encapsulated in a build script which lives with the rest of the application source, not as discrete build steps defined within your CI tool.

    Ideally, builds should never break, but when they do it’s important to resolve the issue as quickly as possible. Not being able to run a build locally means potentially having to repeatedly introduce changes until the build is fixed. This tends to pollute the source code commit history with comments like: “Fixing the build”, “Fixing the build for realz this time”, and “Please let this be it … I’m ready to go home”. Of course, there are times when a build can break because of environmental issues that may not be mirrored locally (e.g. lack of disk space, network related issues, 3rd-party software dependencies, etc.), but encapsulating as much of your build as possible goes a long way to keeping builds running along smoothly. Anyone on your team should be able to clone/check-out the project, issue a single command from the command line (e.g. gulp, rake, psake, etc.) and watch the full build process execute including any pre-processing steps, compilation, distribution packaging and even deployment to a target environment.
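
    As a sketch of what this looks like in practice (assuming Gulp 4’s task API, with placeholder task bodies rather than a real build), the entire orchestration lives in a gulpfile committed alongside the source, and running gulp from the command line executes the same pipeline locally or on the CI server:

    // gulpfile.js -- a minimal sketch of a build entry point (placeholder tasks).
    const { series } = require('gulp');

    function clean(done) {
      // remove previous build output
      done();
    }

    function compile(done) {
      // invoke compilers / preprocessors
      done();
    }

    function test(done) {
      // run the automated test suites
      done();
    }

    function packageDist(done) {
      // assemble the application distribution
      done();
    }

    exports.default = series(clean, compile, test, packageDist);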

    Aside from being able to run a build locally, decoupling the build from the CI process allows the technologies used by each to vary independently. Switching from one CI tool to another should ideally just require installing the software, pointing it to your source control, defining the single step to issue the build, and defining the triggers that initiate the process.

    The creation of a project distribution and the scheduling mechanism for how often this happens are separate concerns. Just because a CI tool allows you to script out your build steps doesn’t mean you should.

    Survey of Entity Framework Unit of Work Patterns


    Earlier this year I joined a development team which chose Entity Framework for the persistence needs of a new greenfield project. While I’ve worked on a few projects which used Entity Framework here and there over the years, the bulk of my experience has been with NHibernate and, more recently, Dapper.Net. As a result, there hasn’t been all that much occasion for me to explore it in any level of depth until this year.

    One area I recently took some time to research is how the Unit of Work pattern is best implemented within the context of using Entity Framework. While the topic is still relatively fresh on my mind, I thought I’d use this as an opportunity to create a catalog of various approaches I’ve encountered and include some thoughts about each approach.

    Unit of Work

    To start, it may be helpful to give a basic definition of the Unit of Work pattern. A Unit of Work can be defined as a collection of operations that succeed or fail as a single unit. Given a series of operations which need to be executed in response to some interaction with an application, it’s often necessary to ensure that none of the operations cause side-effects if any one of them fails. This is accomplished by having participating operations respond to either a commit or rollback message indicating whether the operation performed should be completed or reverted.

    A Unit of Work can consist of different types of operations such as Web Service calls, Database operations, or even in-memory operations, however, the focus of this article will be on approaches to facilitating the Unit of Work pattern with Entity Framework.

    With that out of the way, let’s take a look at various approaches to facilitating the Unit of Work pattern with Entity Framework.

    Implicit Transactions

    The first approach to achieving a Unit of Work around a series of Entity Framework operations is to simply create an instance of a DbContext class, make changes to one or more DbSet instances, and then call SaveChanges() on the context. Entity Framework automatically creates an implicit transaction for changesets which include INSERTs, UPDATEs, and DELETEs.

    Here’s an example:

    public Customer CreateCustomer(CreateCustomerRequest request)
    {
      Customer customer = null;
    
      using (var context = new MyStoreContext())
      {
        customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
        context.Customers.Add(customer);
        context.SaveChanges();
        return customer;
      }
    }
    

    The benefit of this approach is that a transaction is created only when necessary and is kept alive only for the duration of the SaveChanges() call. Some drawbacks to this approach, however, are that it leads to opaque dependencies and adds a bit of repetitive infrastructure code to each of your application services.

    If you prefer to work directly with Entity Framework then this approach may be fine for simple needs.

    TransactionScope

    Another approach is to use the System.Transactions.TransactionScope class provided by the .Net framework. When any of the Entity Framework operations are used which cause a connection to be opened (e.g. SaveChanges()), the connection will enlist in the ambient transaction defined by the TransactionScope class and close the transaction once the TransactionScope is successfully completed. Here’s an example of this approach:

    public Customer CreateCustomer(CreateCustomerRequest request)
    {
      Customer customer = null;
    
      using (var transaction = new TransactionScope())
      {
        using (var context = new MyStoreContext())
        {
          customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
          context.Customers.Add(customer);
          context.SaveChanges();
          transaction.Complete();
        }
    
        return customer;
      }
    }
    

    In general, I find using TransactionScope to be a good general-purpose solution for defining a Unit of Work around Entity Framework operations, as it works with ADO.Net, all versions of Entity Framework, and other ORMs, which makes it possible to use multiple libraries within the same Unit of Work if needed. Additionally, it provides a foundation for building a more comprehensive Unit of Work pattern which would allow other types of operations to enlist in the Unit of Work.

    Caution should be exercised when using TransactionScope, however, as certain operations can implicitly escalate the transaction to a distributed transaction causing undesired overhead. For those choosing solutions involving TransactionScope, I would recommend educating yourself on how and when transactions are escalated.

    While I find using the TransactionScope class to be a good general-purpose solution, using it directly does couple your services to a specific strategy and adds a bit of noise to your code. While it’s a viable choice, I would recommend inverting the concerns of managing the Unit of Work boundary as shown in approaches we’ll look at later.

    ADO.Net Transactions

    This approach involves creating an instance of DbTransaction and instructing the participating DbContext instance to use the existing transaction:

    public Customer CreateCustomer(CreateCustomerRequest request)
    {
      Customer customer = null;
    
      var connectionString = ConfigurationManager.ConnectionStrings["MyStoreContext"].ConnectionString;
      using (var connection = new SqlConnection(connectionString))
      {
        connection.Open();
        using (var transaction = connection.BeginTransaction())
        {
          using (var context = new MyStoreContext(connection))
          {
            context.Database.UseTransaction(transaction);
            try
            {
              customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
              context.Customers.Add(customer);
              context.SaveChanges();
            }
            catch (Exception e)
            {
              transaction.Rollback();
              throw;
            }
          }
    
          transaction.Commit();
          return customer;
        }
      }
    }
    

    As can be seen from the example, this approach adds quite a bit of infrastructure noise to your code. While not something I’d recommend standardizing upon, this approach provides another avenue for sharing transactions between Entity Framework and straight ADO.Net code which might prove useful in certain situations. In general, I wouldn’t recommend such an approach.

    Entity Framework Transactions

    The relative newcomer to the mix is the new transaction API introduced with Entity Framework 6. Here’s a basic example of its use:

    public Customer CreateCustomer(CreateCustomerRequest request)
    {
      Customer customer = null;
    
      using (var context = new MyStoreContext())
      {
        using (var transaction = context.Database.BeginTransaction())
        {
          try
          {
            customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
            context.Customers.Add(customer);
            context.SaveChanges();
            transaction.Commit();
          }
          catch (Exception e)
          {
            transaction.Rollback();
            throw;
          }
        }
      }
    
      return customer;
    }
    

    This is the approach recommended by Microsoft for achieving transactions with Entity Framework going forward. If you’re deploying applications with Entity Framework 6 and beyond, this will be your safest choice for Unit of Work implementations which only require database operation participation. Similar to a couple of the previous approaches we’ve already considered, the drawbacks of using this directly are that it creates opaque dependencies and adds repetitive infrastructure code to all of your application services. This is also a viable option, but I would recommend coupling this with other approaches we’ll look at later to improve the readability and maintainability of your application services.

    Unit of Work Repository Manager

    The first approach I encountered when researching how others were facilitating the Unit of Work pattern with Entity Framework was a strategy set forth by Microsoft’s guidance on the topic here. This strategy involves creating a UnitOfWork class which encapsulates an instance of the DbContext and exposes each repository as a property. Clients of repositories take a dependency upon an instance of UnitOfWork and access each repository as needed through properties on the UnitOfWork instance. The UnitOfWork type exposes a SaveChanges() method to be used when all the changes made through the repositories are to be persisted to the database. Here is an example of this approach:

    public interface IUnitOfWork
    {
      ICustomerRepository CustomerRepository { get; }
      IOrderRepository OrderRepository { get; }
      void Save();
    }
    
    public class UnitOfWork : IDisposable, IUnitOfWork
    {
      readonly MyContext _context = new MyContext();
      ICustomerRepository _customerRepository;
      IOrderRepository _orderRepository;
    
      public ICustomerRepository CustomerRepository
      {
        get { return _customerRepository ?? (_customerRepository = new CustomerRepository(_context)); }
      }
    
      public IOrderRepository OrderRepository
      {
        get { return _orderRepository ?? (_orderRepository = new OrderRepository(_context)); }
      }
    
      public void Dispose()
      {
        if (_context != null)
        {
          _context.Dispose();
        }
      }
    
      public void Save()
      {
        _context.SaveChanges();
      }
    }
    
    public class CustomerService : ICustomerService
    {
      readonly IUnitOfWork _unitOfWork;
    
      public CustomerService(IUnitOfWork unitOfWork)
      {
        _unitOfWork = unitOfWork;
      }
    
      public void CreateCustomer(CreateCustomerRequest request)
      {
        var customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
        _unitOfWork.CustomerRepository.Add(customer);
        _unitOfWork.Save();
      }
    }
    

    It isn’t hard to imagine how this approach was conceived given it closely mirrors the typical implementation of the DbContext instance you find in Entity Framework guidance where public instances of DbSet are exposed for each aggregate root. Given this pattern is presented on the ASP.Net website and comes up as one of the first results when doing a search for “Entity Framework” and “Unit of Work”, I imagine this approach has gained some popularity among .Net developers. There are, however, a number of issues I have with this approach.

    First, this approach leads to opaque dependencies. Because classes interact with repositories through the UnitOfWork instance, the client interface doesn’t clearly express the inherent business-level collaborators it depends upon (i.e. any aggregate root collections).

    Second, this violates the Open/Closed Principle. To add new aggregate roots to the system requires modifying the UnitOfWork each time.

    Third, this violates the Single Responsibility Principle. The single responsibility of a Unit of Work implementation should be to encapsulate the behavior necessary to commit or roll back a set of operations atomically. The instantiation and management of repositories, or of any other component which may wish to enlist in a unit of work, is a separate concern.

    Lastly, this results in a nominal abstraction which is semantically coupled with Entity Framework. The example code for this approach sets forth an interface to the UnitOfWork implementation which isn’t the approach used in the aforementioned Microsoft article. Whether you take a dependency upon the interface or the implementation directly, however, the presumption of such an abstraction is to decouple the application from using Entity Framework directly. While such an abstraction might provide some benefits, it reflects Entity Framework usage semantics and as such doesn’t really decouple you from the particular persistence technology you’re using. While you could use this approach with another ORM (e.g. NHibernate), this approach is more of a reflection of Entity Framework operations (e.g. its flushing model) and usage patterns. As such, you probably wouldn’t arrive at this same abstraction were you to have started by defining the abstraction in terms of the behavior required by your application prior to choosing a specific ORM (i.e. following the Dependency Inversion Principle). You might even find yourself violating the Liskov Substitution Principle if you actually attempted to provide an alternate ORM implementation. Given these issues, I would advise people to avoid this approach.

    Injected Unit of Work and Repositories

    For those inclined to make all dependencies transparent while maintaining an abstraction from Entity Framework, the next strategy may seem the natural next step. This strategy involves creating an abstraction around the call to DbContext.SaveChanges() and requires sharing a single instance of DbContext among all the components whose operations need to participate within the underlying SaveChanges() call as a single transaction.

    Here is an example:

    public class CustomerService : ICustomerService
    {
      readonly IUnitOfWork _unitOfWork;
      readonly ICustomerRepository _customerRepository;
    
      public CustomerService(IUnitOfWork unitOfWork, ICustomerRepository customerRepository)
      {
        _unitOfWork = unitOfWork;
        _customerRepository = customerRepository;
      }
    
      public void CreateCustomer(CreateCustomerRequest request)
      {
        var customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
        _customerRepository.Add(customer);
        _unitOfWork.Save();
      }
    }
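
    For completeness, the IUnitOfWork being injected in this example is typically little more than a thin wrapper around DbContext.SaveChanges(). A minimal sketch of what that might look like (the implementation isn’t shown in the original example, so the names here are illustrative):

    public interface IUnitOfWork
    {
      void Save();
    }
    
    public class UnitOfWork : IUnitOfWork
    {
      readonly DbContext _context;
    
      // the same DbContext instance must also be injected into the repositories,
      // typically by registering it with a per-request or per-scope lifetime
      public UnitOfWork(DbContext context)
      {
        _context = context;
      }
    
      public void Save()
      {
        _context.SaveChanges();
      }
    }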
    

    While this approach improves upon the opaque design of the Repository Manager, there are several issues I find with it as well.

    Similar to the first example, this UnitOfWork implementation is still semantically coupled to how Entity Framework is urging you to think about things. Entity Framework wants you to call SaveChanges() whenever you’re ready to flush any INSERT, UPDATE, or DELETE operations you’ve issued against the database and this abstraction basically surfaces this behavior. If you were to use an alternate framework that supported a different flushing model (e.g. NHibernate), you likely wouldn’t end up with the same abstraction.

    Moreover, this approach has no definitive Unit of Work boundary. With this approach, you aren’t defining a logical Unit of Work, but are merely injecting a UnitOfWork you can participate within. When you invoke the underlying DbContext.SaveChanges() method, it isn’t explicit what work will be committed.

    While this approach corrects a few design issues I find with the Repository Manager, overall I like this approach even less. At least with the Repository Manager approach you have a defined Unit of Work boundary which is kind of the whole point. My recommendation would be to avoid this approach as well.

    Repository SaveChanges Method

    The next strategy is basically a variation on the previous one. Rather than injecting a separate type whose sole purpose is to provide an indirect way to call the SaveChanges() method, some developers merely expose this method through the repository:

    public class CustomerService : ICustomerService
    {
      readonly ICustomerRepository _customerRepository;
    
      public CustomerService(ICustomerRepository customerRepository)
      {
        _customerRepository = customerRepository;
      }
    
      public void CreateCustomer(CreateCustomerRequest request)
      {
        var customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
        _customerRepository.Add(customer);
        _customerRepository.SaveChanges();
      }
    }
    

    This approach shares many of the same issues with the previous one. While it reduces a bit of infrastructure noise, it’s still semantically coupled to Entity Framework’s approach and still lacks a defined Unit of Work boundary. Additionally, it lacks clarity as to what happens when you call the SaveChanges() method. Given the Repository pattern is intended to be a virtual collection of all the entities within your system of a given type, one might suppose a method named “SaveChanges” means that you are somehow persisting any changes made to the particular entities represented by the repository (setting aside the fact that doing so is really a subversion of the pattern’s purpose). On the contrary, it really means “save all the changes made to any entities tracked by the underlying DbContext”. I would also recommend avoiding this approach.

    Unit of Work Per Request

    A pattern I’m a bit embarrassed to admit has been characteristic of many projects I’ve worked on in the past (though not with EF) is to create a Unit of Work implementation which is scoped to a Web application’s request lifetime. Using this approach, whatever mechanism is used to facilitate a Unit of Work is configured with a DI container using a per-HttpRequest lifetime scope; the Unit of Work boundary is opened the first time a component is injected with the UnitOfWork, and the work is committed or rolled back when the HttpRequest lifetime scope is disposed by the container.

    There are a few different manifestations of this approach depending upon the particular framework and strategy you’re using, but here’s a pseudo-code example of how configuring this might look for Entity Framework with the Autofac DI container:

    builder.RegisterType<MyDbContext>()
            .As<DbContext>()
            .InstancePerRequest()
            .OnActivating(x =>
            {
              // start a transaction
            })
            .OnRelease(context =>
            {
              try
              {
                // commit or rollback the transaction
              }
              catch (Exception e)
              {
                // log the exception
                throw;
              }
            });
    
    public class SomeService : ISomeService
    {
      public void DoSomething()
      {
        // do some work
      }
    }
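
    To make the pseudo-code above slightly more concrete, here is one way the registration might be fleshed out using the Entity Framework 6 transaction API (a sketch only, not a recommendation; note that by the time OnRelease runs there is no reliable way to know whether the request actually succeeded, which is precisely the problem discussed below):

    builder.RegisterType<MyDbContext>()
            .As<DbContext>()
            .InstancePerRequest()
            .OnActivating(x => x.Instance.Database.BeginTransaction())
            .OnRelease(context =>
            {
              var transaction = context.Database.CurrentTransaction;
    
              try
              {
                // by this point the response has already been returned to the caller,
                // so a failure here can no longer be reported back to the client
                transaction.Commit();
              }
              catch (Exception)
              {
                transaction.Rollback();
                throw;
              }
              finally
              {
                transaction.Dispose();
                context.Dispose();
              }
            });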
    
    

    While this approach eliminates the need for your services to be concerned with the Unit of Work infrastructure, its biggest issue arises when an error occurs. When the application can’t successfully commit the transaction for whatever reason, the rollback occurs AFTER you’ve typically relinquished control of the request (e.g. you’ve already returned results from a controller). When this occurs, you may end up telling your customer that something succeeded when it actually didn’t, and your client state may end up out of sync with the actual persisted state of the application.

    While I used this strategy without incident for some time with NHibernate, I eventually ran into a problem and concluded that the concern of transaction boundary management inherently belongs to the application-level entry point for a particular interaction with the system. This is another approach I’d recommend avoiding.

    Instantiated Unit of Work

    The next strategy involves instantiating a UnitOfWork implemented using either the .Net framework TransactionScope class or the transaction API introduced by Entity Framework 6 to define a transaction boundary within the application service. Here’s an example:

    public class CustomerService : ICustomerService
    {
      readonly ICustomerRepository _customerRepository;
    
      public CustomerService(ICustomerRepository customerRepository)
      {
        _customerRepository = customerRepository;
      }
    
      public void CreateCustomer(CreateCustomerRequest request)
      {
        using (var unitOfWork = new UnitOfWork())
        {
          try
          {
            var customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
            _customerRepository.Add(customer);
            unitOfWork.Commit();
          }
          catch (Exception)
          {
            unitOfWork.Rollback();
            throw;
          }
        }
      }
    }
    

    Functionally, this is a viable approach to facilitating a Unit of Work boundary with Entity Framework. A few drawbacks, however, are that the dependency upon the Unit Of Work implementation is opaque and that it’s coupled to a specific implementation. While this isn’t a terrible approach, I would recommend other approaches discussed here which either surface any dependencies being taken on the Unit of Work infrastructure or invert the concerns of transaction management completely.
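
    For reference, a minimal sketch of what the instantiated UnitOfWork might look like when implemented with the TransactionScope class (illustrative only; the example above doesn’t prescribe a particular implementation):

    // requires a reference to System.Transactions
    public class UnitOfWork : IDisposable
    {
      readonly TransactionScope _scope = new TransactionScope();
    
      public void Commit()
      {
        // marks the ambient transaction as complete
        _scope.Complete();
      }
    
      public void Rollback()
      {
        // intentionally a no-op: not calling Complete() causes the ambient
        // transaction to roll back when the scope is disposed
      }
    
      public void Dispose()
      {
        _scope.Dispose();
      }
    }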

    Injected Unit of Work Factory

    This strategy is similar to the one presented in the Instantiated Unit of Work example, but makes its dependence upon the Unit of Work infrastructure transparent and provides a point of abstraction which allows for an alternate implementation to be provided by the factory:

    public class CustomerService : ICustomerService
    {
      readonly ICustomerRepository _customerRepository;
      readonly IUnitOfWorkFactory _unitOfWorkFactory;
    
      public CustomerService(IUnitOfWorkFactory unitOfWorkFactory, ICustomerRepository customerRepository)
      {
        _customerRepository = customerRepository;
        _unitOfWorkFactory = unitOfWorkFactory;
      }
    
      public void CreateCustomer(CreateCustomerRequest request)
      {
        using (var unitOfWork = _unitOfWorkFactory.Create())
        {
          try
          {
            var customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
            _customerRepository.Add(customer);
            unitOfWork.Commit();
          }
          catch (Exception)
          {
            unitOfWork.Rollback();
            throw;
          }
        }
      }
    }
    

    While I personally prefer to invert such concerns, I consider this to be a sound approach.

    As a side note, if you decide to use this approach, you might also consider utilizing your DI container to just inject a Func<IUnitOfWork> to avoid the overhead of maintaining an IUnitOfWorkFactory abstraction and implementation.
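
    With Autofac, for instance, no hand-rolled factory is needed for this: the container’s implicit relationship types can supply a Func<IUnitOfWork> for any registered IUnitOfWork. A sketch of how that might look (error handling omitted for brevity, and assuming the UnitOfWork implementation is IDisposable):

    // registration: Autofac will provide Func<IUnitOfWork> automatically
    builder.RegisterType<UnitOfWork>().As<IUnitOfWork>();
    
    public class CustomerService : ICustomerService
    {
      readonly ICustomerRepository _customerRepository;
      readonly Func<IUnitOfWork> _createUnitOfWork;
    
      public CustomerService(Func<IUnitOfWork> createUnitOfWork, ICustomerRepository customerRepository)
      {
        _customerRepository = customerRepository;
        _createUnitOfWork = createUnitOfWork;
      }
    
      public void CreateCustomer(CreateCustomerRequest request)
      {
        using (var unitOfWork = _createUnitOfWork())
        {
          var customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
          _customerRepository.Add(customer);
          unitOfWork.Commit();
        }
      }
    }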

    Unit of Work ActionFilterAttribute

    For those who prefer to invert the Unit of Work concerns as I do, the following approach provides an easy-to-implement solution for those using ASP.Net MVC and/or Web API. This technique involves creating a custom action filter which can be used to control the boundary of a Unit of Work at the controller action level. The particular implementation may vary, but here’s a general template:

    public class UnitOfWorkFilter : ActionFilterAttribute
    {
      public override void OnActionExecuting(ActionExecutingContext filterContext)
      {
        // begin transaction
      }
    
      public override void OnActionExecuted(ActionExecutedContext filterContext)
      {
        // commit/rollback transaction
      }
    }
    

    The benefits of this approach are that it’s easy to implement and that it eliminates the need for introducing repetitive infrastructure code into your application services. The attribute can be registered with the global action filters or, for the more discriminating, placed only on actions resulting in state changes to the database. Overall, this would be my recommended approach for Web applications: it’s simple and it keeps your code clean.
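
    As a point of reference, a filled-in version of the filter might look something like the following, assuming ASP.Net MVC, Entity Framework 6, and a DbContext registered with a per-request lifetime in the DI container (the details here are illustrative rather than prescriptive):

    public class UnitOfWorkFilter : ActionFilterAttribute
    {
      public override void OnActionExecuting(ActionExecutingContext filterContext)
      {
        // resolve the per-request DbContext and open the Unit of Work boundary
        var context = DependencyResolver.Current.GetService<DbContext>();
        context.Database.BeginTransaction();
      }
    
      public override void OnActionExecuted(ActionExecutedContext filterContext)
      {
        var context = DependencyResolver.Current.GetService<DbContext>();
        var transaction = context.Database.CurrentTransaction;
    
        if (transaction == null) return;
    
        // commit when the action completed successfully, otherwise roll back
        if (filterContext.Exception == null)
        {
          transaction.Commit();
        }
        else
        {
          transaction.Rollback();
        }
      }
    }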

    Unit of Work Decorator

    A similar approach to the use of a custom ActionFilterAttribute is the creation of a custom decorator. This approach can be accomplished by utilizing a DI container to automatically decorate specific application service interfaces with a class which implements a Unit of Work boundary.

    Here is a pseudo-code example of how configuring this might look for Entity Framework with the Autofac DI container, which presumes that some form of command/command-handler pattern is being utilized (e.g. frameworks like MediatR, ShortBus, etc.):

    // DI Registration
    builder.RegisterGenericDecorator(
         typeof(TransactionRequestHandler<,>), // the decorator instance
         typeof(IRequestHandler<,>), // the types to decorate
        "requestHandler", // the name of the key to decorate
         null); // the name of the key to this decorator
    
    
    
    public class TransactionRequestHandler<TRequest, TResponse> : IRequestHandler<TRequest, TResponse> where TResponse : ApplicationResponse
    {
      readonly DbContext _context;
      readonly IRequestHandler<TRequest, TResponse> _decorated;
    
      public TransactionRequestHandler(IRequestHandler<TRequest, TResponse> decorated, DbContext context)
      {
        _decorated = decorated;
        _context = context;
      }
    
      public TResponse Handle(TRequest request)
      {
        TResponse response;
    
        // open a transaction (shown here using the Entity Framework 6 transaction API)
        using (var transaction = _context.Database.BeginTransaction())
        {
          try
          {
            response = _decorated.Handle(request);
            transaction.Commit();
          }
          catch (Exception)
          {
            transaction.Rollback();
            throw;
          }
        }
    
        return response;
      }
    }
    
    
    public class SomeRequestHandler : IRequestHandler<SomeRequest, ApplicationResponse>
    {
      public ApplicationResponse Handle(SomeRequest request)
      {
        // do some work
        return new SuccessResponse();
      }
    }
    

    While this approach requires a bit of setup, it provides an alternate means of facilitating the Unit of Work pattern through a decorator which can be used by consumers of the application layer other than ASP.Net (e.g. Windows services, CLI, etc.). It also provides the ability to move the Unit of Work boundary closer to the point of need for those who would rather perform any error handling prior to returning control to the application service client (e.g. the controller actions), as well as giving more control over the types of operations decorated (e.g. IQueryHandler vs. ICommandHandler). For Web applications, I’d recommend trying the custom action filter approach first, as it’s easier to implement and doesn’t presume upon the design of your application layer, but this is certainly a good approach if it fits your needs.

    Conclusion

    Out of the approaches I’ve evaluated, there are several that I see as sound approaches which maintain some minimum adherence to good design practices. Of course, which approach is best for your application will be dependent upon the context of what you’re doing and to some extent the design values of your team.

    Introducing NUnit.Specifications



    I recently started working with a new team that uses NUnit as their testing framework. While I think NUnit is a solid framework, I don’t think the default API and style lead to effective tests.

    As an advocate of Test-Driven Development, I’ve always appreciated how context/specification-style frameworks such as Machine.Specifications (MSpec) allow for the expression of executable specifications which model how a system is expected to be used rather than the typical unit-test style of testing which tends to obscure the overall purpose of the system. 

    To facilitate a context/specification-style API, I created a base class which makes use of the hooks provided by the NUnit testing framework to emulate MSpec.  I’ve published this code under the project name NUnit.Specifications.

    The following is an example NUnit test written using the ContextSpecification base class from NUnit.Specifications together with the Should assertion library:

    [Image: an example specification written with NUnit.Specifications]
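
    A rough sketch of the style, assuming the MSpec-style Establish/Because/It delegates the library emulates (the SecurityService subject and its members here are purely illustrative):

    public class When_authenticating_a_registered_user : ContextSpecification
    {
      static SecurityService _subject;
      static UserToken _result;
    
      Establish context = () => _subject = new SecurityService();
    
      Because of = () => _result = _subject.Authenticate("username", "password");
    
      It should_issue_a_user_token = () => _result.ShouldNotBeNull();
    }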

    One nice benefit of building on top of NUnit is the widespread tool support available. Here is the test as seen through various test runners:

    Resharper Test Runner:

    [Screenshot: spec results in the ReSharper test runner]

    TestDriven.Net: (see notes below)

    [Screenshot: spec results in the TestDriven.Net runner]

    NUnit Test Runner:

    [Screenshot: spec results in the NUnit test runner]

    NUnit Test Adaptor for Visual Studio:

    [Screenshot: spec results in the NUnit Test Adaptor for Visual Studio]

     

    One caveat I discovered with the TestDriven.Net runner is its failure to recognize tests unless the specification references types from the NUnit.Framework namespace (e.g. TestFixtureAttribute, CategoryAttribute, use of Assert, etc.). That is to say, it didn’t seem to be enough that the spec inherited from a base type with NUnit attributes; something in the derived class had to reference a type from the NUnit.Framework namespace for the test to be recognized. Therefore, the TestDriven.Net results shown above were actually achieved by annotating the class with [Category("component")] explicitly.

     

    Other Stuff

    As a convenience, NUnit.Specifications also provides attributes for denoting categories of Unit, Component, Integration, Acceptance, and Subcutaneous as well as a Catch class (as provided by the MSpec library) for working with exceptions.
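
    For example, the Catch class follows the familiar MSpec usage of capturing an exception within the Because delegate so it can be inspected by the assertions. A small sketch (the Account type and its Withdraw method are purely illustrative):

    public class When_withdrawing_a_negative_amount : ContextSpecification
    {
      static Account _subject;
      static Exception _exception;
    
      Establish context = () => _subject = new Account();
    
      // Catch.Exception captures the thrown exception rather than letting it fail the spec
      Because of = () => _exception = Catch.Exception(() => _subject.Withdraw(-1m));
    
      It should_reject_the_withdrawal = () => _exception.ShouldNotBeNull();
    }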

    You can obtain NUnit.Specifications from NuGet, or grab the source from GitHub.

    Expected Objects Custom Comparisons


    ExpectedObjects is a testing library I developed a few years ago to facilitate using the Expected Objects pattern within my specifications to avoid obscure tests.  You can find the original introduction to the library here.

    As of version 1.1.0, the ExpectedObjects library has been updated to include a feature called Custom Comparisons.  The standard behavior of the library is to traverse a strategy chain (which is itself configurable) to determine which comparison strategy is to be used for each type of object encountered within the object graph.  The Custom Comparisons feature allows you to override this behavior for specific properties.

    For example, let’s say we’re writing an end-to-end test which validates a Receipt class as follows:

    public class Receipt
    {
        public string Name { get; set; }
        public DateTime TransactionDate { get; set; }
        public string VerificationCode { get; set; }
    }

     

    Given this class, the VerificationCode property would probably not be a value you could anticipate. In such a case, while you can’t verify that the property has a specific value, you may care that it at least has some value. This is where the Custom Comparisons feature can help. We can verify that the actual Receipt received matches the expected receipt structure using the following expected object configuration:

    var expected = new
    {
        Name = "John Doe",
        TransactionDate = DateTime.Today,
        VerificationCode = Expect.NotNull()
    }.ToExpectedObject();

    var actual = new Receipt
    {
        Name = "John Doe",
        TransactionDate = DateTime.Today,
        VerificationCode = "ABC123"
    };

    expected.ShouldMatch(actual);

    In the event that the VerificationCode property is null, the library will raise an exception with the following message:

    For Receipt.VerificationCode, expected a non-null value but found [null].

    The ExpectedObjects library currently provides a static Expect class which includes convenience methods to check for null, not null, and an Any comparison for checking that an object is of a specific type (e.g. Expect.Any()). To supply your own comparisons, simply implement the IComparison interface, which defines the custom comparison along with the text to include within any exception messages raised (e.g. “For SomeType.SomeProperty, expected [the text you supply here] but found ‘42’.”).
