Evolutionary Project Structure

I used to care quite a bit about project structure in applications. Trying to enforce logical layering through physical projects, I would start off by default with an app split into at least two projects, if not more. Something like:

  • Core
  • UI
  • Infrastructure
  • Data Access
  • etc etc

I’ve since moved completely away from this sort of project structure, mostly because it tends to devolve into arguments about what the right dependency directions are and so on. For most systems I’ve seen, agonizing over project structure is a waste of time and energy. Folders are just fine for layering, and you can always introduce projects when the need arises.

Instead, I LOVE the project structure of RaccoonBlog:
[screenshot: the RaccoonBlog solution in Visual Studio — a single web project]


So what’s the project structure here? None!

Instead of layering using project structure, we just use folders. The domain model is in the Models folder. Infrastructure components are in the Infrastructure folder. Dogs and cats, living together, mass hysteria! Insane, right? Not at all.
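For reference, the single-project layout looks roughly like this (folder names as described in this post; the exact RaccoonBlog solution may differ slightly):

```
RaccoonBlog.Web/
├── Controllers/
├── Models/          // the domain model
├── Infrastructure/  // 3rd-party extensions, each component in its own folder
├── ViewModels/
├── Views/
├── Helpers/
└── Global.asax
```

One project, with folders standing in for what would otherwise be assembly boundaries.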

Flatten your layers

What really makes this work is that there are no pointless abstractions like repositories or even DI containers to get in the way. It’s not a simple application, but it’s not terribly complex either.

One of the reasons why this works well is that the underlying architecture (RavenDB) clearly separates reads from writes. Writes go against aggregates realized as documents, and reads are clearly separated into RavenDB indexes. When your data access layer clearly separates things out this way, it makes it much easier to build a layered architecture on top without needing to resort to enforcement via projects. It’s called the “pit of success”.
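To make that read/write split concrete: a read in RavenDB goes through an index definition. AbstractIndexCreationTask is RavenDB’s actual base class for this, but the particular index below is an illustrative sketch, not RaccoonBlog’s code:

```csharp
// Illustrative RavenDB index: reads query this, while writes simply
// store Post documents through the session.
public class Posts_ByPublishAt : AbstractIndexCreationTask<Post>
{
    public Posts_ByPublishAt()
    {
        // Map Post documents into the fields the read side can query on
        Map = posts => from post in posts
                       select new { post.PublishAt, post.Tags };
    }
}
```

Writes touch documents, reads touch indexes — the separation is built into the database client rather than enforced by project references.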

So what does a typical action look like? Let’s look at the action displaying posts:
[screenshot: the PostsController action, querying RavenDB directly]


*gasp* It’s data access directly in the controller! Where’s the repository? Who cares! This query is used in exactly one place: this controller action. Any common query logic is abstracted into extension methods (like the WhereIsPublicPost method).
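The code in that screenshot is along these lines — a reconstruction, so the names and paging details may not match RaccoonBlog exactly, though WhereIsPublicPost is the real extension method mentioned above:

```csharp
// PostsController action: querying RavenDB directly, no repository in sight.
// RavenSession and PageSize come from a base controller (an assumption here).
public ActionResult Index(int page)
{
    var posts = RavenSession.Query<Post>()
        .WhereIsPublicPost()                       // shared filter, see below
        .OrderByDescending(post => post.PublishAt)
        .Skip((page - 1) * PageSize)
        .Take(PageSize)
        .ToList();

    return View(posts);
}

// The shared query logic lives in an extension method rather than a
// repository (the filter body is a guess at the actual conditions):
public static IQueryable<Post> WhereIsPublicPost(this IQueryable<Post> query)
{
    return query.Where(post => post.PublishAt < DateTimeOffset.Now && !post.IsDeleted);
}
```

The extension method gives you reuse where reuse is actually needed, without a repository interface standing between the action and the query.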

This action is perfectly testable, as RavenDB supports an embedded mode. And it won’t force us to create repositories that are essentially violations of both the Interface Segregation Principle and command-query separation principle.
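The embedded mode mentioned here looks something like this in a test. EmbeddableDocumentStore and RunInMemory are real pieces of the RavenDB client; the test body itself, including the settable RavenSession property, is a sketch:

```csharp
[Test]
public void Index_returns_only_public_posts()
{
    // In-memory embedded store: no server, no disk, fast enough for unit-ish tests
    using (var store = new EmbeddableDocumentStore { RunInMemory = true }.Initialize())
    {
        using (var session = store.OpenSession())
        {
            session.Store(new Post { Title = "Published", PublishAt = DateTimeOffset.Now.AddDays(-1) });
            session.SaveChanges();
        }

        using (var session = store.OpenSession())
        {
            var controller = new PostsController { RavenSession = session };
            var result = (ViewResult)controller.Index(page: 1);
            // assert against result.Model...
        }
    }
}
```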

By eliminating unnecessary abstractions, we don’t have to invent physical layering to support those abstractions. I never liked the repository pattern, but my desire to abstract things forced me into building things like Query objects to abstract the above query. I argue that abstracting the above query merely obfuscates it with indirection.

What about ViewModels? There’s a purpose-built ViewModel folder to hold those. What about infrastructure-specific usages/extensions? Inside the Infrastructure folder we have those extensions for each 3rd-party component in its own folder.

Explicit component usage

One mistake I see teams make over and over again is trying to abstract components. I find it OK to put facades in front of things like 3rd-party web services, but when it comes to infrastructure components, there’s really no need to abstract.

In the method above, RavenDB is used directly inside the controller action. If we wanted to move this code somewhere else, we would need to invent an abstraction to do so, likely a repository. Why misdirect? Just consume the component directly, so that you have the full usage of the component at hand, and you don’t tie one hand behind your back by pretending important and valuable features don’t exist.

Where this falls down is when a component doesn’t support a given layering/architecture. But even with NHibernate, I just use the ISession directly in the controller action these days. Why make things complicated?

Even worse would be to put an abstraction around ISession and pretend NHibernate doesn’t exist. That’s a quick path to a lot of additional infrastructure code where you have to re-invent features already present in your 3rd-party component.
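For comparison, “just use the ISession directly” means nothing fancier than this — a hedged sketch, with a made-up entity and action:

```csharp
public class CustomersController : Controller
{
    private readonly ISession _session;

    // NHibernate's ISession comes in as-is — no ICustomerRepository wrapper
    // re-inventing Get, Load, QueryOver, futures, cache hints, and so on.
    public CustomersController(ISession session)
    {
        _session = session;
    }

    public ActionResult Show(int id)
    {
        var customer = _session.Get<Customer>(id);
        return View(customer);
    }
}
```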

However, if using the 3rd-party component is difficult (like ADO.NET code), then by all means provide a façade. Defer abstraction decisions until the need presents itself, and you’ll find yourself with a much more flexible application.

When complexity arises

This approach goes a long way until your application actually has enough complex behavior to warrant additional layering. We have two options here:

  • Reduce complexity in requirements
  • Realize complexity explicitly in your model

If your application is complex, you might be able to work to reduce the complexity in the actual requirements. This is the ideal solution, as complexity in business requirements is often a smell that the requirements themselves need simplifying. I often see this when business owners can’t explain a feature to me the same way twice.

If that fails, model the complexity explicitly. This is where techniques in the DDD book and patterns books help. It’s still a flat structure, but I might have to do a little more organization to keep rules separate from the rest of the application.

And if two applications need to use the same code, then by all means, introduce projects to share!

Layering versus structure

For me, it all comes down to logical versus physical architecture. Architectural styles aren’t project structure layouts. They’re simply a way to describe roles, responsibilities, and layers. Forcing ourselves into project structures and component ignorance just gives rise to more and more code needed to prop up our invented abstractions.

So check out RaccoonBlog and enjoy its elegance and simplicity. You might disagree with certain decisions, but it’s certainly easy to understand, extend, and evolve.

About Jimmy Bogard

I'm a technical architect with Headspring in Austin, TX. I focus on DDD, distributed systems, and any other acronym-centric design/architecture/methodology. I created AutoMapper and am a co-author of the ASP.NET MVC in Action books.
This entry was posted in Architecture, DomainDrivenDesign.
  • Anonymous

    My bet is that 99% of MVC apps can be solved this way. Unnecessary DLL layering just gets in the way.

    Also, I gave up the Repository pattern about a year ago. I started coding directly against NHib’s ISession and haven’t looked back.

    • Exactly my thoughts – don’t make me link to my old blog posts to prove it haha :P

    • Alberto Basile

      I cannot agree more. I tried the repository pattern with NH and it was a mess. It still makes me laugh when I read people asking questions about NH + the repository pattern on StackOverflow.

    • James Morcom

      We all still use the repository pattern. We all represent our data as collections of objects in code, which is essentially what the repository pattern is about.

      ISession represents a repository!

      What’s changed is we now have an incredibly powerful and expressive common language/tool for defining queries that we didn’t have before, and it’s common to EF, NH and Raven, as well as the standard .NET collection classes. This means we don’t have to build abstractions to hide them or enable us to test them anymore.

      The tool is Linq.

      • Anonymous

        Linq works for simple cases, but not for more complex ones. For example, you can’t do fetching or multi-queries, and the LINQ providers across ORMs are pretty far apart in what they support in expressions.

        LINQ looks like it’s the same, but it’s really not. You can choose to implement IQueryProvider or whatever however you like. At least in NHibernate, forcing all data access to use LINQ queries can be pretty restrictive, imo. I use LINQ, but I always have the ISession available to do whatever I want, depending on the context.

        • James Morcom

          You’re absolutely right of course :)

          I still think it’s a big step forward though!

          In my experience, yes, you do need to add the extra Fetch (NH) and Include (EF) extension methods when you come to optimisation, along with maybe some preloading stuff, but the code still works the same logical way when you pass in a List of T for testing, if that makes sense?

          The key point being you no longer need to define interfaces for querying and therefore have no reason to have a separate assembly to contain them.

          • James Morcom

            Where I work we tend to separate our commands and queries using “commands” and “view model builders”, both of which have access to an “IDbContext”.

            Our unit tests inject fake IDbContexts that return Lists of T whereas in production those collections are implemented by Session.Query (NH) or DbSet (EF). The LINQ statements within those bits of code return the same result in any of those environments, even though the implementation can be quite different, and there may be some implementation specific pre-fetching/joining code that just gets ignored in the unit test case.

            So far we’ve gotten on OK with that, and our NH projects read very similarly to our EF ones. I dare say if we moved to Raven (as we would like to) we wouldn’t see a huge shift in what we’re used to, and that’s all thanks to the wonders of LINQ!
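            A sketch of the shape being described — IDbContext is the commenter’s name for it; the bodies below are assumptions about how such a setup typically looks:

            ```csharp
            public interface IDbContext
            {
                IQueryable<T> Query<T>() where T : class;
            }

            // Production: delegate to NHibernate's LINQ provider (EF's DbSet works similarly)
            public class NHibernateDbContext : IDbContext
            {
                private readonly ISession _session;
                public NHibernateDbContext(ISession session) { _session = session; }

                public IQueryable<T> Query<T>() where T : class
                {
                    return _session.Query<T>();
                }
            }

            // Tests: the same LINQ statements run against plain in-memory lists
            public class FakeDbContext : IDbContext
            {
                private readonly Dictionary<Type, object> _sets = new Dictionary<Type, object>();

                public void Set<T>(List<T> items) where T : class { _sets[typeof(T)] = items; }

                public IQueryable<T> Query<T>() where T : class
                {
                    return ((List<T>)_sets[typeof(T)]).AsQueryable();
                }
            }
            ```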

  • Kenny Eliasson

    I’ve also started using the ISession directly, but it can get harder to test thanks to all the extension methods, which aren’t easily testable. Any thoughts about this?

    • Anonymous

      Since I’m using NHibernate, it’s quite easy to have a test bootstrapper that creates the schema, populates the database with a set of known values, and then tests against an actual database. If you can use SQLite, you can still run the tests in-memory, so it’s really fast. I’ve worked on one project where an NHibernate.Linq Query() worked successfully against SQL Server, but did not work against SQLite. It was an ugly query, though, so the problem was most likely me, not SQLite.

      • Kenny Eliasson

        I usually resort to creating an in-memory SQLite db, but it’s still slower than mocking/faking it out.

  • Anonymous
    • Make sure you read the comments :)

  • Paul H

    I still like to use an ultra-lightweight generic repository wrapper over my ORM, allowing my controllers to be unit tested rather than integration tested, which is essentially what you are doing with the embedded RavenDB mode.

    • Mock the session then

  • Joseph Daigle

    I too have recently come to share the same conclusions you have found. It is no coincidence that much of my influence also comes from how Oren architects products.

  • Great Post.

    Was thinking about writing a post on this for a while – IMO chasing holy grail architectures is the primary cause of over engineering, code bloat and eventual rewrites.
    I’m not sure why this practice is so prevalent (P&P guidelines?), as I find it’s rarely suffered by the most experienced hackers.

  • Is there a way to unit test the static extension methods? Obviously I can mock out ISession but how can I mock out the extension calls; I think I would be forced to buy TypeMock licenses to achive this

    • Chris B

      My two cents… There isn’t much point in mocking ISession or trying to unit test the extension methods. Any testing that goes against the ISession is already an integration test, since ISession is your proxy for the database. If you mock it, you are cutting off a lot of the value of the test. You are better off with a test database (possibly generated at runtime) with known values in it that you use as a launching point. It does make your testing slightly less deterministic, in the sense that the test is not in direct control of the implicit inputs (values in the database), but you can get a lot more value out of a test since you know that your mappings are correct, you can read/write the db, etc…

    • akshay

      Why do you need to do this? Don’t they just operate on the data in-memory? I think mocking Session would do the trick

  • Great post Jimmy. Great to me because I have these beliefs although most do not.

    I hate seeing the “method-per-class” anti-pattern where there is an abstraction for every method.

    It’s harder to understand and it means more test coverage, and thus more friction when refactoring.

  • What about holding off on architectural decisions until the last moment possible? You know, Uncle Bob & Clean Architecture and all that? Seriously, for a guy still trying to grasp “good architecture” following the various schools of thought is overwhelming.

    • Sometimes you have to think for yourself jesse. You will find many conflicting opinions, and it’s best to acknowledge what they say but use your own judgement.

      Don’t just follow the leader.

  • It’s interesting that so many people stray from the simple folder structure you’ve presented here, because (with the addition of a couple extra folders), this is pretty much the default MVC template when you create a new project in Visual Studio.

  • This is Ayende all over again!!!


    I used this kind of structure in my last project and I’ve got to say it’s perfect (MVC 3 + EF 4)! The only downside is that your unit tests are more like integration tests, so they might be harder/longer to code.

  • Pingback: The Morning Brew - Chris Alcock » The Morning Brew #1179

  • Betty

    I typically move the services/helpers/datalogic into the infrastructure folder, but otherwise do pretty much the same thing.

    Only bit that bugs me is the View/Models/Controllers folders always start feeling a bit odd after you add an area or two.

    • You can always change your view engine behavior. I’ve done that and have only an Areas directory, where I have Default (aka the old Views) and of course directories for areas. :)

  • Anonymous

    The moment you need to add a background service or a command line utility, you have to restructure the whole dang solution to avoid unnecessary dependencies across projects. For that reason alone, this architecture fails. Also, writing unit tests directly against ISession is impractical (there’s no difference between unit tests and integration tests at that point). I can’t believe someone so influential in the community would actually advocate taking three steps back like this. Very disappointing.

    • George

      You are right – I’ve seen many applications designed like this, with a “practical” mindset, and 4 years later, when somebody else was asked to replace the ASP.NET web application with a Silverlight or WinForms front-end, they had to manually and painfully extract the classes that were not dependent on ASP.NET into a separate project.

      Separate projects and assemblies might not be necessary initially from a technical point of view, but they enforce a discipline – if everything is in the same assembly, it’s just too easy for a lazy developer to use an HttpContext call inside the service or data access classes.

      • That’s your fault for being lazy then :P

        • Anonymous

          Most developers ARE lazy though. You yourself alluded to this when you said “At a cost of extra work”. It’s not that much more work, and most routine abstraction can be summarized in a set of base classes and interfaces that can be reused across projects. The only laziness here is being too lazy to allow for (even if not coding for) future expansion.

          • Not really lazy. I’d prefer to be adding business value than doing unnecessary work.

            Future expansion is always a contentious point. Generally I follow YAGNI unless there is a good reason not to.

            A problem where I work now is that re-use across projects has caused a mess of coupling. Something I’d prefer to avoid at the cost of duplicating code (and not concepts).

            Sorry to disagree on all 3 of your points lol. Have a good weekend.

      • Jason Christian

        And how many other times did the need to replace the application with another front end never arise?

    • So you create abstractions for every part of your application “in case” “some” of the logic “might” need to be reused?

      At a cost of extra work and extra complexity, I don’t think it’s as clear cut as you make it. And from my experience this trade-off of abstractions never pays you back.

      Maybe you can elaborate more on the mocking of ISession being an integration test? I don’t see it myself, but I’d like to hear your point.

      • Guest

        There is nothing really stopping you from using DI or unit testing… But I think that you can learn a lot about designing simpler and more concise applications from examples like RaccoonBlog.

        I personally like to keep my Models and some other classes in a different project as well, along with most Helpers, but I’m never going back to using the Repository pattern again on small projects. It’s just such overkill.

        From my personal experience, you have to keep in mind that one of the more important aspects of being a developer is knowing the right tool for the job. Sometimes it’s Raven, sometimes it’s SQL. But writing a lot of code to help you in case you need to change is just going to make a mess.

    • Anonymous

      Take a step back, and ask – why did I take this position, after so many years of talking about unit tests, mocks etc.? I have a reason – dig deeper, read again, and ask *why* did I rethink my position?

    • Anonymous

      I understand the desire to reduce complexity here, and generally speaking, that’s a good thing. But best practices and design patterns and the like are not singular in the benefit they offer. Sure, you may think your app is never going to have a need to use anything other than RavenDB or Log4net, but what if? Using a model such as this, you can’t simply try something else. Those dependencies are so deeply woven into the code that it’s coupled for life (or at least, the lifetime of a simple application).

      One advantage of using dependency injection and unit testing and all these other great things is that it has taught me to evaluate design decisions earlier in the process and catch myself doing things that might cause headaches later. It also allows me to better isolate individual functions and classes in my code and test them appropriately, rather than looking at the app as one big conglomerate where all the pieces are interdependent on each other.

      I’m not saying doing things this way is *always* wrong, but beyond relatively simple single-faceted web apps of this nature, I’m not quite sure I agree with it.

    • I love YAGNI and I hate BDUF. You are the opposite it seems. If at some point business decides they want a Silverlight or WPF app then they will pay the cost of doing the refactoring/redesign then. Why make them pay now for setting up a complex architecture with maybe many deployable components and all the overhead that comes with it if you are never going to need it?

      • Shorthand for Silverlight or WPF is making a REST API on top of this app. :)

  • Go one step further and stop creating Models, Controllers, etc. folders. Arrange most of your business code along the lines of vertical slices; this means that to understand or add a feature, you open that feature’s folder and look at the small amount of files that make up the feature.

  • me

    If there is no need for some special deployment (physical separation via DLLs) then you should go with as few assemblies as possible. We have done this for the last 10 years. Logical separation is the key, not physical. Sometimes performance issues might be a good reason to have many DLLs, but that is a rare scenario.

    A question: have you *really* seen as many teams as you claim that go for unneeded abstraction of internal components? I have *never* seen such a thing, though I have worked for different companies. Usually developers avoid complex abstractions.

  • I kind of like this idea. For the longest time I was an advocate of having separate projects for the data access, the infrastructure, the models, etc. but this is so much cleaner. Now I think it depends on the context of the project. If you have a few web services that expose the models, and two web clients (let’s say a customer-facing ecommerce part and a fulfillment site) then obviously you don’t do this, you put it in a separate library. But I would wager for most single client projects, this is perfect. Needless complexity, remember?

  • Bill Sorensen

    Several years ago, our department standardized on Gentle.NET (an ORM) for data access. We coupled applications, shared libraries, and internal frameworks to this. Now Gentle.NET is no longer maintained and is showing its age. We’re moving to Entity Framework. This is difficult and time-consuming due to the coupling. We’re now trying to make our libraries persistence-agnostic. For another example, a different group was looking at Velocity (now AppFabric) for caching and has now settled on MongoDB. My point isn’t the frameworks we chose, it’s that we changed our minds over time. What if we want to use a Micro-ORM for performance in one case? What if the database access will be replaced by a web service? Read section 2.1 (“Doing it wrong”) in the book Dependency Injection in .NET for a great example. Are you sure that your company will *always* be using RavenDB and NHibernate? Also, what may work well for a particular web application, team, and company may not work for a fat client application in a different environment.

  • Adam Tuliper

    I’m still not getting it. I greatly respect your prior postings, so I have to imagine it’s either a joke or I just don’t get it yet. “What really makes this work is that there are no pointless abstractions like repositories or even DI containers to get in the way.” DI containers serve a solid and well known purpose, including providing object lifetime management and helping force separation so you don’t shoot yourself in the foot for testing. In the project structure above, for many apps it’s extremely easy to let your dependencies and concerns bleed into what should otherwise be separate layers. For a basic web app – sure, this works fine. For anything beyond that I can think of several reasons this can easily cause problems.

    • Anonymous

      I’m not saying DI is pointless, but for the new application, it is. Wait until the system proves through its emergent complexity that it NEEDS these abstractions and infrastructure, but not before. I’ve surprised myself how long I can go without things like this, provided that I’ve built my app with the appropriate fundamental design choices. I’m not saying don’t use these tools. Don’t use them until you need them, and not before.

  • Jimmy, How would you go about testing your controller logic when using NHibernate ISession within the controller itself? Do you mock out the ISession? Or is your actual BL in a testable Interface?

    • Anonymous

      I’m working on a post for that – options for testing with data access. Stay tuned!

  • Anonymous

    Interesting points. I wholeheartedly agree that a lot of the .NET world jumped the shark and dove way too far into onion architectures way too fast. There’s really no reason for lots of the complexity that is suggested across the web.

    That said, I think a bit of overreaction is starting in some places. Do we need 42 assemblies for a relatively simple web project? Probably not. But I’d suggest you at least want to start with:

    MyProject.Core — the core logic, main data access and other hoo ha. No reference to System.Web to keep it real clean and focused on your own testable code.
    MyProject.Core.Tests — test the core
    MyProject.Web — your web app

    Helps keep things focused and portable while not introducing nightmarish levels of complexity to the project.

    • Anonymous

      So why even start with Core? Why not wait until you outgrow the one project, complexity-wise?

      • Anonymous

        As I noted, it is a happy compromise between overly simplified and overly complex. I failed to mention it helps focus in my experience — if you start in a web project things can get pretty ugly and intertwined quickly.

  • Pingback: Testing controller logic that uses ISession directly | DIGG LINK

  • Paul Hadfield

    I see your next article is “Why I’m done with Scrum”, so I do wonder if you’re gunning for the low-hanging fruit of just taking a polarised-argument approach. Anyway, you say you’ve replaced a project structure with a folder structure! Well done you – your folder could be a project, your project could be a folder – no real difference or overhead.

    • Anonymous

      What’s polarizing about starting small? I didn’t mean to say that projects are bad – just that you can EVOLVE into multiple projects, but not start there.

      But folders vs. projects does matter – there is an overhead to projects instead of folders, especially around compile-time. It’s also easy to move folders around, but projects are a lot “stickier” – there’s not even a “Delete Project” command in VS – only Remove!

      What I’m trying to challenge is the assumption that when we start a new project, we automatically need to build out other projects (Core, Infrastructure etc.) to support it. I think that’s a decision that can be deferred until the need presents itself.

      • Paul Tiseo

        Side bar: I would say that once you’ve built them [structures] out, they are often reusable. (How many times have I seen the redo of StringExtensions.cs. Ugh.)

    • Paul Tiseo

      I agree. I’m also asking myself what the fundamental conceptual/logical differences are between a project-based separation into {UI, Infrastructure, Core, etc} and a folder-based separation into {Views, Infrastructure, Helpers, etc} as per the RaccoonBlog example?

      • Anonymous

        1) Flexibility. Folders are much easier to rename/move/rearrange than projects. With projects, renaming the project doesn’t even rename the folder! Nor does removing a project delete anything.

        2) Compile time is quicker. With multiple projects, you have I/O time with files copied around (unless you use a common output folder, but at that point, what does it matter?)

        Having a Core project assumes you need a Core layer. Having an Infrastructure project assumes you need an Infrastructure layer. With folders, my options are wide open!

        If you’ve followed my blog, you’ve seen my opinion change on this over the years. Highly organized, medium organized, to now, just one project/deployed app, with more projects built for shared components.

        What’s funny is that a lot of this comes from a project I’m on with 41 projects (about two-dozen discrete deployed apps). All started with a single project, more added _only_ as needed.

  • Paul Tiseo

    I think it’s just as bad to tell people to start super-simple as it is to say “always start with a five-layer, multi-DLL solution for every project”.

    I think that starting every project with a super-simple structure presupposes that *nobody* can anticipate the initial *overall* complexity of their project. Now, if you’ve lived through enough projects, you start being able to gauge a project (along a simple SM-MED-LG-XLG granularity) right from the get-go and use the appropriate project structure, toolsets, etc.

    In fact, if you are really, really smart, you’ll have the scaffolding already setup for those various sizes, pret-a-porter! Whoa! :)

    • Anonymous

      So… what’s the middle ground? I’ve started even medium-size sites with just 2 projects (Core and UI), and still found that a folder could have sufficed in the beginning.

      I just haven’t seen that those decisions need to be made up front. It’s SO easy to add a project, but quite difficult to take one away. I’d rather focus on my refactoring skills to guide me in the right direction based on emergent complexity.

      But, not every team is mature/experienced enough to be able to know when to pivot. So I can see why you’d need to add guardrails for teams that haven’t shown these kinds of skills.

      • Paul Tiseo

        I’m not sure I get your question. And, I’m guessing we may be debating partly on semantics.

        You say your folder approach could suffice in the beginning for almost all projects. That is very true, esp. if your project truly is a black box and you have no idea what the near-future state of your solution would look like.

        But, most people have a 10K view of what they will build most of the time. (Agreed not every time.) And, in that view, some pivots can be anticipated as whens, not ifs. Others are ifs, but with high probability.

        I guess what I am trying to say is that there’s a difference between guessing an unknown future state and knowing a likely (but still not guaranteed) future state. Should you build for some uncertain, presumed event? No. Pivot if and when it happens.

        But, I think in an effort to appear “Look Ma! I’m supa-Agile”, we tend to overemphasize the former (assuming everything is unlikely) to the detriment of the latter. A senior developer doesn’t manage risk by *always* deferring the mitigation of it to the last minute.

        At the risk of getting political, don’t fall into the Quayle Quandary of wanting to ignore “fuzzy math”. :)

        • Anonymous

          This is the way I look at it – too often we decide what the boundaries should be well before it’s clear what they ought to be. We make decisions to do DDD, to put in a domain model, when in a lot of cases, it’s just over-complicating things.

          I give talks on crafting wicked domain models, but I also have a system that has a data set as its model, and transaction scripts to handle operations. Repository? It’s one class! Domain model? It’s a table of data, with one or two calculations! All this alongside a system that does have complex domain models.

          I’m not saying don’t anticipate what’s coming down the pipe – but rather don’t make premature decisions that limit your options. Don’t make decisions using speculation, use evidence. Besides, as long as it’s covered by good functional/acceptance tests, you can refactor the bejeezus out of the insides to tackle any eventual complexity.

  • jodomofo

    This is a great thread and I am very happy to see the momentum towards removing abstraction that adds no value.

    For all the naysayers questioning when the business will need to move to a new UI or new data technology or whatever, ask yourself: what is the realistic lifetime of an application? Is the application you’re architecting today really going to be what the business and/or developer wants in 3, 4, 5 years? Probably not; we just can’t see that far into the future. Build what you need now and get over it.

    Your greatest weapon in making your application not restrict your growth (and profits) is partitioning of concerns. Partition business capabilities to the finest grain possible (within reason), and create testable, intention-revealing code. This will allow you to refactor as needed and evolve When Needed!


    In light of these revelations I have to ask: when are you going to write the article titled, “Stop using AutoMapper, its

    Not trying to break your balls, but I have seen too many projects get burned by this needless complexity. If you can “AutoMap” one object to another, maybe you need to ask what you are really accomplishing and re-evaluate your use of layers, and/or just deploy the code you need where you need it? Do we just have a DTO here? It doesn’t sound like a behavior object. If you can’t “AutoMap” your objects and need to configure one property to another, what are you saving? You’re definitely adding complexity! I thought AutoMapper was a cool tool years ago as well, but then I was lost in the nLayer abstraction sauce as well!



    • jbogard

      Ha! Well, I still use AutoMapper, that’s why! It’s still great for when I have layer boundaries and very similar objects. I still use it, but for different reasons and different places (at times).

      It’s why AutoMapper doesn’t reference System.Web or anything – it’s not meant to be an MVC-only thing. I use it quite a bit now for just translating messages from one layer to the other (form to command message etc).

      What I might need is a post about AutoMapper, 3 years later or something.

      • jodomofo

        I would argue it’s just better to explicitly write mapping code where you need it, rather than use a tool that adds obfuscation and complexity. But better yet, try to alleviate the need to write mapping code. I have been exploring a pattern of putting my command on my view model. So far so good, and it removes the mapping need. Works with a task-based UI.

        • jbogard

          Like I said – works well when it gets rid of code you’re already writing (and it’s code you want to delete too). What do you mean by putting your command on your view model? Curious to know what other people do here. I never felt it was complex or obfuscated things, but that’s just me.

          • jodomofo

            Say you have a CreateShippingAddress view. Your ViewModel could be:

                public class CreateShippingAddressViewModel
                {
                    public CreateShippingAddress Command { get; set; }
                }

            Your command is serializable and you can also put your DataAnnotations on it:

                public class CreateShippingAddress : ICommand
                {
                    public string Street { get; set; }
                    public string City { get; set; }
                }

            If you’re using ASP.NET MVC you have the framework binding your Command with data, so you are cutting out work and eliminating mapping code.

          • jbogard

            OK I think this goes beyond what can fit in a blog comment – I’d love if you blogged about your approach and linked back here when you do! There are so many approaches to these things, I love seeing how other folks tackle this problem. Thanks for sharing!

      • jodomofo

        It was almost exactly three years ago that I discovered AutoMapper and introduced it to my team thinking it was the bees knees, lol! What it revealed to me over time was that the desire for such a tool stemmed from the fact that I was doing stupid stuff: pointless nLayer abstraction.

  • dario-g

    This is a pretty example of a very simple application, and the conclusions are obvious, but what about something more complicated (when you know it will become more complicated)? What structure do you recommend?

    • jbogard

      In what way will things get more complicated?

  • Md.Ibrahim

    I was searching for architecture articles for a web project and I stumbled on this article. I like the structure here; simple and effective. But what happens when another project, such as a WebAPI or desktop app, needs to be added to this? Wouldn’t it get messy then, as I would have to reference the web project in my API and desktop apps?

    • jbogard

      That’s the evolutionary part – only when you have the need to pull common behavior/code out to a shared project do you do so. Not before, when it’s just a guess.

      • Md.Ibrahim

        But what if I know from beforehand and not guessing? I would have to implement some structure to share code between projects. What do you suggest then?

        • jbogard

          The purpose of this post is not to prescribe a specific starting point, but to recommend starting with the simplest solution possible and only when faced with additional complexity to evolve in the face of change. Does that help?

          • Md.Ibrahim

            Yeah, I understand. Thanks.

  • Pingback: Organizing an Application – Layering, Slicing, or Dicing? | Form Follows Function