I started my career as a programmer developing on Unix platforms, primarily writing applications in ANSI C and C++. Due to a number of factors, including the platform dependency of C/C++ libraries, the low-level nature of the language and the immaturity of the Internet, code reuse in the form of reusable libraries wasn’t as prevalent as it is today. Most of the projects I developed back then didn’t have a lot of external dependencies and the code I reused across projects was checked out and compiled locally as part of my build process. Then came Java.
When I first started developing in Java, I remember being excited by the level of community surrounding the platform. The Java platform inspired numerous open source projects, due both to the platform’s architecture and the increasing popularity of the Internet. The Apache Jakarta Project in particular served as a repository for many of the most popular frameworks at the time. The increase in the use of open source libraries during this time, along with some conditioning from the past, helped forge a new approach to dependency management.
The Unix development community had long since established best practices around the use of source control, and one practice long discouraged was checking in binaries and assets generated by your project. Helping facilitate this practice was Apache’s Ant framework, an XML-based Java build tool. One of the tasks provided by Ant was <get>, which allowed for the retrieval of files over HTTP. A typical scenario was to set up an internal site which hosted all the versioned libraries shared by an organization and to use Ant build files to download the libraries locally (if not already present) when compiling the application. The task used for retrieving the dependencies effectively became the manifest for what was required to reproduce the build represented by a particular version of an application. The shortcoming of this approach, however, was the lack of standards around setting up distribution repositories and dealing with caching.

Enter Maven. Maven was a second-generation Java build framework which standardized the dependency management process. Among other things, Maven introduced a schema for denoting project dependencies, local caching, and recommendations around repository setup and versioning conventions.
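The typical scenario described above might look something like the following Ant build file. The repository URL and library shown here are illustrative placeholders, not from an actual project:

```xml
<project name="myapp" default="compile" basedir=".">
  <property name="lib.dir" value="lib"/>
  <property name="repo.url" value="http://build.example.com/libs"/>

  <!-- Download shared libraries locally if not already present;
       usetimestamp skips files that are already up to date -->
  <target name="get-deps">
    <mkdir dir="${lib.dir}"/>
    <get src="${repo.url}/commons-logging-1.0.4.jar"
         dest="${lib.dir}/commons-logging-1.0.4.jar"
         usetimestamp="true"/>
  </target>

  <target name="compile" depends="get-deps">
    <javac srcdir="src" destdir="build">
      <classpath>
        <fileset dir="${lib.dir}" includes="*.jar"/>
      </classpath>
    </javac>
  </target>
</project>
```

The list of `<get>` calls doubles as the record of exactly which library versions a given build required.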
After developing on the Java platform for several years, I landed in a group which decided to rewrite the project I was assigned to from Java to .Net. After some reorganization, I found myself working alongside new team members whose background was primarily in Microsoft-based technologies. I soon discovered that the typical practice within the Microsoft community was to check in any dependencies needed by a project. This certainly added a level of convenience for getting projects set up, but no strategy existed for effectively managing versioned distributions of common libraries or for easily discovering which versions of which dependencies a project used.
Around this time, Microsoft released beta 2 of the .Net framework and my team decided to upgrade our fledgling project to the new version. Along with the 2.0 version of the framework came MSBuild, Microsoft’s new build engine. While a port of Ant was available for the .Net framework at the time, my team decided to go with MSBuild since Visual Studio used it as its underlying build solution. Unfortunately, MSBuild didn’t provide tasks for downloading dependencies, so I set out to write my own set of tasks which allowed us to manage dependencies “Maven-style”. While these new tasks provided the desired capability, the strategy proved to be too foreign a concept for the rest of my team, resulting in a return to just checking in all dependencies. Several years later, I made another attempt at introducing dependency management to a different .Net team, this time using NAnt, though I believe the group decided to return to using MSBuild and checking everything in again after I left the company.
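Conceptually, the MSBuild approach mirrored the Ant scenario. The sketch below shows the general shape of such a setup; the `GetDependency` task, its assembly, and the repository URL are hypothetical stand-ins, not the actual tasks I wrote:

```xml
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- Hypothetical custom task assembly providing dependency retrieval -->
  <UsingTask TaskName="GetDependency" AssemblyFile="build\DependencyTasks.dll"/>

  <ItemGroup>
    <!-- The dependency list doubles as the build's manifest -->
    <Dependency Include="log4net">
      <Version>1.2.10</Version>
    </Dependency>
  </ItemGroup>

  <Target Name="ResolveDependencies">
    <GetDependency Packages="@(Dependency)"
                   Repository="http://build.example.com/libs"
                   DestinationFolder="lib"/>
  </Target>

  <Target Name="Build" DependsOnTargets="ResolveDependencies">
    <!-- Normal compilation targets would follow here -->
  </Target>
</Project>
```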
Around mid-2010, I heard that a new .Net package management system was in the works named “OpenWrap”. It wasn’t ready at the time, but I was excited that the community seemed to be moving in the right direction. Not too long after the announcement of OpenWrap came an announcement from Microsoft that they had joined forces with an existing .Net package management project called Nubular (Nu). The Nu project was a command line .Net package management system built upon RubyGems. Nu was rewritten to remove the Ruby dependency and re-branded as NuPack, which was shortly thereafter re-branded again as NuGet.
NuGet was first released in January of 2011 and seems to have been well-received by many in the .Net community. Its reception is likely due to the fact that it was designed to accommodate how the majority of .Net developers were already working. Primarily designed as a Visual Studio extension, NuGet adds a new item to the project ‘References’ context menu for referencing packages, along with a Package Manager Console for PowerShell integration and (as of version 1.4) a Package Visualizer which provides graphical diagrams for visualizing dependencies. The NuGet team also provides a separate command-line utility (NuGet.exe) which adds the ability to create and publish your own packages.
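Package creation with NuGet.exe starts from a .nuspec manifest describing the package. A minimal example might look like the following, where the package id, file paths, and other values are placeholders:

```xml
<?xml version="1.0"?>
<package>
  <metadata>
    <id>MyCompany.Utilities</id>
    <version>1.0.0</version>
    <authors>MyCompany</authors>
    <description>Shared utility library.</description>
  </metadata>
  <files>
    <!-- Map build output into the package's lib folder -->
    <file src="bin\Release\MyCompany.Utilities.dll" target="lib\net40"/>
  </files>
</package>
```

Running `NuGet.exe pack` against the .nuspec produces a .nupkg file, which `NuGet.exe push` can then publish to a package feed.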
The availability of a good .Net dependency management tool has been long overdue and NuGet addresses this need in a way palatable to most .Net development teams. That said, there are some dependency management scenarios I wish the NuGet team had put more emphasis on, namely build-time retrieval of dependencies and application level management independent of Visual Studio integration.
NuGet works a little differently from the other approaches I’ve used in the past in that its primary focus isn’t to facilitate the build-time retrieval of dependencies, but rather to make it easy to add, update, and remove project references to external libraries from within Visual Studio. When using NuGet, it’s expected that you’ll still be checking in any dependencies referenced by your project (though solutions have been set forth to facilitate source-only commits). While the NuGet.exe command line tool can be used to facilitate a more traditional approach to dependency management, the NuGet team’s focus on Visual Studio integration imposes some limitations on what can be done without a bit of supplemental infrastructure and perhaps a bit of compromise.
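That more traditional approach revolves around NuGet’s packages.config file, which serves as the per-project dependency manifest. The package ids and versions below are illustrative:

```xml
<?xml version="1.0" encoding="utf-8"?>
<packages>
  <!-- Each entry pins a package to an exact version -->
  <package id="log4net" version="1.2.10" />
  <package id="NHibernate" version="3.1.0.4000" />
</packages>
```

With such a file checked in, `NuGet.exe install packages.config -OutputDirectory packages` retrieves the listed packages to a local folder without any Visual Studio involvement, which hints at the build-time workflow discussed here.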
While I appreciate the value offered through NuGet’s Visual Studio integration (without which the tool may have suffered in its reception), I would have preferred the team had started with the following key scenarios:
Provide a command line tool to retrieve, update, and remove assets along with their transitive dependencies, independent of Visual Studio.
Use a single, plain-text manifest file for listing dependencies to retrieve.
Allow transitive dependencies to be retrieved from any specified source.
Provide options for extracting to versioned or non-versioned destination folders as well as a single target destination folder (e.g. “lib”).
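To make the second scenario concrete, the kind of manifest I have in mind could be as simple as the following. This format is entirely hypothetical, and the package names, versions, and source URL are placeholders:

```
# dependencies.txt - one dependency per line: id, version, optional source
log4net       1.2.10
NHibernate    3.1.0.4000   http://build.example.com/feed
```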
Support for these scenarios would certainly have influenced the evolution of NuGet’s Visual Studio integration, but while the underlying implementation may have differed, I believe a similar user experience could still have been achieved.
In my next article, I’ll show how my team is currently leveraging NuGet’s command line tool to facilitate dependency management needs apart from the tool’s Visual Studio integration. Stay tuned!