Project anti-pattern: Many projects in a Visual Studio Solution File

I’ve been hearing from several colleagues about how many projects their Visual Studio solution files contain (more than 10, usually more than 30 — in one case, more than 100!).  So far, none of them has been able to give me any good explanation for why, and most of them hate it but they can’t change it because their architect/lead/whatever won’t let them.

I’m hoping that by getting this discussion going in the greater community, we can discourage people from having lots of projects in a solution.

Why are lots of projects in a single solution not good?

Aside from some of the more obvious arguments about performance, runtime optimization, PDB and assembly size, etc — actually, wait. These are obvious, right? Anyone who’s ever loaded a VS solution file with more than 20 projects knows exactly what I’m talking about.  And if you’ve made the mistake of kicking off a build in Visual Studio with such a solution, you know that you’re in for a 1-5 minute sit-on-your-hands party.  And also — I could be wrong about this, but it was true as of .NET 2.0 — the JIT cannot optimize code across assembly boundaries (or at least it can’t do ALL of its optimizations, such as inlining).  Then there’s the inherent overhead of loading each DLL file and its assembly metadata, not to mention the extra overhead of having so many PDBs/symbols loaded in Debug mode.  If you need more proof of the performance problems caused by lots of assemblies, let me know and we’ll go deep. I’m hoping that these facts are well established in the wide, wide world of .NET.

Ok, now that we’re hopefully past the obvious arguments, let’s get to some of the subtler ones.  Why do you need so many assemblies? Is it namespace control? Why not put everything in one big assembly and use namespaces there?  Is it strong naming? Ok, I’ll give you that one; strong naming does throw a wrench in things sometimes, but I’d still challenge whether you need 30+ assemblies in your solution just because of strong naming.  Is it licensing? Security?  All of these problems have better solutions that usually don’t require more assemblies.

One common argument I’ve heard is ‘dependency’ management. That is, ‘I don’t want my XYZ.Foo assembly to reference System.Web’ or something like that. My counter is: Why not? What does it matter? It’s usually an aesthetic argument with little real merit from a business-value perspective.  In fact, I can usually counter that more business value is gained by making things easier to use and package than by worrying about dependencies for dependency’s sake.  System.Web is in the GAC just as much as System or mscorlib are. You’re not saving yourself any problems by having an assembly that references all of those.

Another argument is that I don’t want my different ‘layers’ all in the same assembly. Why not, I ask?  Sometimes there’s a valid argument here because you need to deploy these things separately to separate physical layers. Ok, I’ll grant you that one, but remember, we’re talking 3-4 assemblies here, TOPS. If you’re over 20, something is probably seriously wrong. It’s a smell, not a sure sign of fire, so your mileage may vary here, but 20 is definitely a line that I would try very hard not to cross. In fact, 10 is probably pushing it.

What are some exceptions to consolidating assemblies?

Utility/console application projects. Unit test projects. Integration/longer-running test projects, which might do well in their own separate project.  Interface assemblies for remoting/serialization/integration purposes. Plug-in or frequently-changing assemblies, resource assemblies, etc.

In the case of utility/console application assemblies and things like resource or satellite assemblies, you might consider a separate solution, since they are likely not built or used as often as the main-line code.  You can have multiple SLN files reference a single project, so you can mix and match your SLN files. Be careful, though, as the management of these things can get out of hand, so make sure you always have a core SLN file that you trust as the definitive source for what ‘works’ in your project.
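If you do split things across multiple SLN files, it helps to be able to inventory which projects each one pulls in. As an illustrative sketch (the file names below are hypothetical, and this isn’t any particular tool), here’s a script that parses the Project entries a .sln file uses to reference its .csproj files:

```python
import re

# A .sln file references each project with a line like:
# Project("{FAE04EC0-...}") = "Core", "Core\Core.csproj", "{11111111-...}"
# (the first GUID is the project type, the second identifies the project)
PROJECT_RE = re.compile(
    r'^Project\("\{[^}]+\}"\)\s*=\s*'
    r'"(?P<name>[^"]+)",\s*"(?P<path>[^"]+)",\s*"\{[^}]+\}"'
)

def list_projects(sln_text):
    """Return (name, path) pairs for every project entry in a solution file."""
    projects = []
    for line in sln_text.splitlines():
        m = PROJECT_RE.match(line.strip())
        if m:
            projects.append((m.group("name"), m.group("path")))
    return projects
```

Running this over each of your SLN files quickly shows which projects are shared between solutions and whether any solution has drifted from your definitive one.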

Also, consider an automated build and test process (NAnt, Rake, Bake, etc) that can independently build the code and verify its fitness by running the tests, so that you stay honest.
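For instance, a minimal NAnt build file along these lines (the solution, assembly, and path names are made up for illustration) compiles the solution and runs the tests in one shot:

```xml
<?xml version="1.0"?>
<project name="Acme" default="ci">
  <!-- Compile the whole solution via MSBuild -->
  <target name="compile">
    <exec program="msbuild">
      <arg value="Acme.sln" />
      <arg value="/p:Configuration=Release" />
    </exec>
  </target>

  <!-- Run the unit tests against the compiled output -->
  <target name="test" depends="compile">
    <exec program="nunit-console">
      <arg value="build\Acme.Tests.dll" />
    </exec>
  </target>

  <target name="ci" depends="test" />
</project>
```

Wiring this into a continuous integration server means nobody has to sit through the full solution build in the IDE just to know whether the code still works.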

About Chad Myers

Chad Myers is the Director of Development for Dovetail Software, in Austin, TX, where he leads a premier software team building complex enterprise software products. Chad is a .NET software developer specializing in enterprise software designs and architectures. He has over 12 years of software development experience and a proven track record of Agile, test-driven project leadership using both Microsoft and open source tools. He is a community leader who speaks at the Austin .NET User's Group and the ADNUG Code Camp, and participates in various development communities and open source projects.
This entry was posted in .NET.
  • Great post!

    Our project currently has 10+ projects, but these are composed of 4 different services, and each service has its own SLN with around 4 projects. We only work within our own SLNs so that we don’t have to load the projects we don’t need for our particular service.

  • Hi Chad,

    This is something I often struggle with myself. I subscribe to the dependency management argument, and I do not think your example of System.Web is a good one. I know you are a fan of persistence ignorance – is it okay for your domain assembly to reference NHibernate?

    I think much of this issue hinges on what type of team you have (or expect to have once the project is in maintenance). I think we both believe that defensive coding is a good idea – what about defensive architecture?

    As I said, this is certainly something I struggle with. I would be more than happy to be convinced that dependency management is unnecessary, but to do so you would need to look at some better examples than the System.Web strawman.

  • Kim

    We usually divide our solutions into projects (assemblies) based on deployment boundaries. What needs to be deployed separately is in a separate assembly.

    On another note, I’m not familiar with cross-assembly JIT optimization, but I question the applicability of this argument. If you have a perf problem, I think that an improvement due to a different JIT optimization is a borderline case.

    Great blog!

  • I agree with the thrust of this — each Project is 1:1 with an assembly (barring some post-compile MSIL aggregation), so each Project needs to be justified by one of two primary reasons:
    1) a deployment unit (e.g., it will be deployed somewhere other than the other assembly or assemblies)
    2) a logical re-use unit (e.g., it will be referenced by two or more assemblies that in turn will eventually be deployed to two or more different locations)

    Typically we use different projects for:
    1) unit tests vs. production code (since our tests aren’t USUALLY deployable)
    2) services (taken in the broad conceptual sense to include DAL repositories, webservices, logging facilities, whatever)
    3) DTO classes
    4) etc.

    I too have seen the hyper-normalized project-bloat anti-pattern a lot, and it’s usually accompanied by “this sln takes 5+ minutes to compile but we just live with it” — often the result of people just not understanding what effect this has on the performance of the VS compile-build process.

    Hopefully posts like yours will help — I usually recommend the highly-underused ‘create-new-solution-folder’ VS feature as a secondary organizing principle rather than a huge number of projects. While the ‘we want to enforce SOC and SRP’ argument SORT OF makes sense to me, it seems like most solutions pay a terrible price for using this kind of BIG STICK approach to trying to ensure proper design principles are followed when a more comprehensive code review could often achieve the same thing without such a constant negative impact on the whole rest of the project team.

  • @Chad
    I somewhat agree but I think you are writing off some arguments too quickly.

    For instance on dependencies, as a reader of Robert Martin you’ll have read his package dependency patterns. Admittedly I don’t follow those patterns rigidly, but they do make a lot of sense to me, not least because VS enforces dependencies at the project level (not, for example, the namespace level). Thus if I don’t want my domain to have a dependency on NHibernate, I either need to use separate projects or try to use something like NDepend and configure it to point out any dependencies we want to avoid.

    The optimal number of projects also depends on the system you are developing. So in my last project we had a lot of common code, we had multiple domain models with their own dependencies, we had multiple applications using those domain models, we had shared common Web code between applications, we had lots of common test code…trying to do all this in a few projects would have been very difficult. What I’m really saying is in a lot of cases reuse changes the way you structure things and that might be the reason for the larger solutions you describe.

    Also on the complexity argument, I’ve worked with very large projects and I’ve worked with lots of smaller projects and (ignoring performance) I’m not sure one is naturally superior when it comes to working out what you need to change and where.

    Also another good link on this discussion:

    On the solution folders, not sure what you mean as they are external to projects.

  • “…in one case, more than 100!”

    My ears are burning!

  • @Kim: I don’t believe that the JIT can optimize (i.e. inline methods, etc) code across assembly boundaries, or if it can, it’s very limited. So you’re sacrificing a LOT just to have your assemblies neatly organized.

    @Steven: Most of those are good exceptions, though I usually don’t have a problem with deploying too much code to a server if it makes all the other things in my process easier. If you have one big “Whatever.Core” assembly that has most of your code in it, but “Server A” is only going to use a portion of it, does it bother you that you’re deploying a little too much? It used to bother me, but I got over it because it makes everything else much simpler.

    I’ve found that one of the excuses for breaking out assemblies is so that different versions can be rolled out independently to different servers, etc. This almost always ends in disaster or a maintenance nightmare scenario. So I’ll happily trade too-much-code being deployed with consistent version leveling over micro-deployment specification.

    @Colin: Why does it matter that your domain assembly has a ref to NHibernate? I used to have a problem with it, but I found it was largely a vanity argument (i.e. it didn’t ‘look’ right) but there was no real technical argument there.

    I end up usually having a big Foo.Core assembly that has folders/namespaces where everything is properly separated.

    I usually don’t pay attention to assembly references unless I have to. It’s just taken care of for me by my build process; I deploy everything as a unit and break things out into other assemblies only where I have to (i.e. ASP.NET web projects in Visual Studio, interface/remoting assemblies, satellite assemblies, etc.).

  • I’m with you on this one. However another reason you might want to break them apart is to allow others to inherit your assemblies without much baggage.

    We have a core framework assembly that has all kinds of good stuff in it, but it’s also tied directly to SubSonic. Which means you need configuration and a database before you can even use it.

    We’ve toyed with separating those out so that other teams can use our framework utilities, UI controls, and other stuff without the baggage of SubSonic.

  • Chad, you touched a subject very important to me.

    I have written a lot on this topic: preferring namespaces over assemblies, managing dependencies across namespaces with NDepend, and the poor performance of having multiple projects:

    Getting a good, easy-to-use way of managing this at the namespace level would certainly be nice, but I’m not 100% convinced NDepend is the whole answer right now (at least it wasn’t for me). To me it’s the sort of thing the VS team should be looking at too, as right now the way you reference/import entire projects is certainly limiting.

    Mind you, getting to a situation where VS can handle more projects without slowing down so much would also be cool…

  • The project I’m working on now has 26 dependencies. I desperately want to bring that number down but I honestly don’t know how to.

    In my case, each assembly is a reusable software component that we can (and do!) share between multiple applications (solutions).

    Most projects can’t be combined because each one can be used without the others. If we combined them, we’d be forced to release code that wasn’t being used. Apart from this just not feeling right, there are also business reasons to not release unused code.

    We could build these projects and include them as dlls instead of project references… But there’s a problem with shared assemblies. That is, we have a core project that is used by every other project. If we build Shared Assembly A with Core v.1 and then try to use Shared Assembly A in Application A which also directly references Core… you get version conflicts. I go into this problem in more detail here:

    If anyone can lend any advice or tips on ways to work around this I would certainly love to hear it!
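    For reference, the usual mechanism for forcing such a conflict to resolve to a single version (it only applies to strong-named assemblies) is an assembly binding redirect in the consuming application’s config file. A sketch (the assembly name, version numbers, and publicKeyToken below are placeholders):

    ```xml
    <configuration>
      <runtime>
        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
          <dependentAssembly>
            <!-- placeholder identity: substitute your shared Core assembly -->
            <assemblyIdentity name="Acme.Core"
                              publicKeyToken="0123456789abcdef"
                              culture="neutral" />
            <!-- send every older reference to the version actually deployed -->
            <bindingRedirect oldVersion="1.0.0.0-1.9.9.9" newVersion="2.0.0.0" />
          </dependentAssembly>
        </assemblyBinding>
      </runtime>
    </configuration>
    ```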

  • I had one that was cresting 50, but after some major refactoring it’s down to 13: three unit test projects (internal classes, API classes, and integration), an installer, a designer, a service, a core dll (i.e., our implementations of providers and addins), and our API projects (sliced into areas of interest for extenders). Originally, I started out with a unit test solution per namespace, and we were using application blocks for our providers. That was over thirty individual projects. When the P&P group ditched application blocks for Unity, I figured it’d be a good time to refactor all that nonsense out (it also coincided with our move from 3.0 -> 3.5 (2k5 to 2k8)).

    Hey, it’s a learning experience, k?

  • Wow, I guess we are the underdogs. We have had 1 project and recently split into 4 (Presentation, DAL, BLL, types). We do have a separate solution for our generic/utility code; we just copy the dlls from the utility code into our user apps.

  • There’s good discussion in the comments of Ayende’s two-project-solution linked above (see Udi’s dissenting viewpoint).

    Additional discussion on this subject was kicked off by Scott Hanselman a year ago:

  • We generally just have a few.

    I used to work with a guy that wanted many projects so that if it was deployed in an on-demand web application (XBAP?), it would only download the assemblies it needed.

    However, I disagreed because of the connection and assembly overhead. He ended up with assemblies with 1 class in them. It reminds me of having dozens of JavaScript files in a webpage: you’re asking for slow load times, especially on a high-latency connection, all to save a second for that one user who only needs the one page that doesn’t require all the other JavaScript files.

  • Russell

    30? 100? Child’s play! We have a top level solution file with 185 projects. I am not joking.

    Of course, the IDE is pretty much unusable at that point. Everyone else just deals with it, but it was pissing me off so I wrote a tool that parses the sln file, finds project dependencies, and can build a new .sln with just the things you need in it. Again, I am not joking.

    The obvious answer is to merge most of them together, but I’ve got people who want to make a new assembly for every bloody class. I’m not about to fight that battle.

  • Sam

    Good post, well written, nice links :)

  • Wow, I’m floored. I had no idea this was such a hot topic. Great points, everyone. Indeed there are several really good excuses for having more than 10 projects but it appears really easy to abuse.

    Part of me wishes VS had better support for lotsa projects to help with the scenarios where it’s valid (or otherwise unavoidable), but part of me is glad it doesn’t, because it would just give abusers the ability to abuse it all the more readily.

    I’ll try to group these together into some sort of positive guidance. I hope that you all think it over and come up with some principles of your own. Please post them back here or send ‘em to me and I can help collate them.

    Please post on your blogs about your experience and send me the link (you don’t have to trackback, I’m not link whoring here, I’m genuinely interested in getting good guidance on this).

  • This is one really great post about those things that we all take for granted. The more obvious it is, the more room for mistakes.

  • This will be a bit rambling…

    One thing I’m now convinced of is that there is no such thing as a single right answer for project/folder structures.

    As an example, my colleagues and I went through an exercise of changing the entire structure of our main domain projects, changing the folder structure on disk and the structure in the projects/solution(s). Although we all felt we had improved it, the result was still frustratingly far from perfect. I tried to explain one part of the restructuring in this blog post (which, interestingly, doesn’t show up well in FireFox, oh well):

    Project structure is even more difficult than folder structure as you have so many different factors. Reuse of code, dependencies, build time etc. With this in mind I actually think that we need another way of organizing codebases and much better guidance.

    I’m not even sure projects/folders are enough. I often want a more virtualized view: for example, if I’m working on Login I want to see LoginController, its view, the tests, the login parts of the domain model, LoginService, etc. I don’t want to see everything else. So does that mean I put everything in one project and organize my folders by task/feature? Not sure; I’ve tried that approach and I’m not sure it’s great, especially when you have re-use (between applications/teams or even just within a codebase).

    Oh and here’s another good link:

    “@Colin: Why does it matter that your domain assembly has a ref to NHibernate? I used to have a problem with it, but I found it was largely a vanity argument (i.e. it didn’t ‘look’ right) but there was no real technical argument there.”

    So my domain classes never ever access NHibernate, and since dependencies are at the project level, that’s how I’d manage it. I could be convinced to weaken on the issue, but I’ve yet to feel a compelling need.

    Let’s say I don’t mind on that front, though; it’s still possible that the domain/controls/utilities are shared between apps. Of course, the argument that apps never share large sections of code (especially domain code) is one some people seem to be going for these days; I don’t necessarily agree 100% with this, which shapes my thinking.

    “I end up usually having a big Foo.Core assembly that has folders/namespaces where everything is properly separated.”

    Yeah, I’ve worked on such a project. In that case we had a reasonable number of developers over a reasonable period of time, and in my view the resulting project ended up quite confusing. Not saying it always would, though; this project was confusing in general :)

    Another thing to consider is where you have multiple apps and code sharing between them. In those cases, bringing in one Foo.Core becomes a bit of an issue (Uncle Bob’s reuse/release arguments and so on). I know it’s all manageable, but…

  • To Russell w/ 185 projects: only one response can be made to this: OMFG…

    I guess you need some serious ILMerge after that…

  • @Colin
    “I actually think that we need another way of organizing codebases and much better guidance.”

    “I’m not even sure projects/folders are enough, I often want a more virtualized view for example if I’m working on Login I want to see LoginController, its view, the tests, the login parts of the domain model, LoginService etc. I don’t want to see everything else. So does that mean I put everything in one project and organize my folders by task/feature?”

    See for an interesting teaser of what the future may look like. Can’t wait for PDC to see what this is all about….

  • sreenivas.k

    We don’t have a single solution containing more than 5 projects… but, I must confess (even at the risk of becoming a legend amongst you), we have around 500 solutions!

    Wait! This is a comprehensive enterprise-wide application covering distributed business processes. It consists of 11 modules. There were around 50 developers organized into 10 teams. The final application is deployed across an (albeit small) nation on 30 servers, and contains around 2000 assemblies with 1.9m LOC in C# (excluding empty lines and comments) plus 900 aspx pages.

    I believe that the (extreme) partitioning into small projects helped minimize the need for coordination among the teams. It did help in managing the project, but the technical challenges became more and more complex: from having to build a tool that analyzes the dependencies and generates NAnt build files using NVelocity templates, to dealing with memory fragmentation.

    The project has completed pilot stage, and the team size has gone down to 15; now we are thinking of consolidating the projects into 30 (3 in each module: UI, Services and Data Access).

    I feel embarrassed to confess here about the number of projects. But I asked myself what would I do in the next project of the same size. If I were not forced to confess again on the internet, I would follow the same pattern. [Or perhaps, we won't be as lucky on the next project in terms of performance requirements?]

    I am wondering if the comments above missed this aspect of partitioning a system: to support concurrent development.

  • With multi-core CPUs, wouldn’t separate assemblies (~2-4) speed up the compile-test-debug cycle?

  • Mark

    I guess Java did it right with just namespaces; assemblies are the wrong substitute for .jar files (which are just packaging, not part of the build).

  • Mark

    Considering converting to few-large-assemblies. Found 2 serious problems:

    1. Visual Studio enforces a default namespace (just in the GUI; a manual tweak in the .csproj seems to work)

    2. The DataSet designer also uses intermediate directories when creating its namespace (can be worked around by reorganizing the source tree)

  • Mark

    The only ones who should be embarrassed about a large number of VS projects are the Visual Studio/MSBuild and Windows file system developers.

    I’d consider it an advantage for incremental building, but it comes at a terrible price for a full build.

    My results: ~2x faster full build when reducing the number of projects from ~80 to ~10.

    IMO, the C#/.NET assembly model is broken anyway for incremental builds.

  • Thomas Eyde

    @Kevin: A solution to the versioning problem I often promote is this: if the code belongs to someone else, they should build and deploy the assemblies to a well-known location. Then each project should acquire its own local copy of the needed assemblies and reference those. The GAC is forbidden, as is referencing custom assemblies outside of the local pool.

    Do you believe something like this will work for you?