CQRS and user experience

CQRS as a concept is relatively easy to grasp, as it’s really just two objects where there was once one (plus all the stuff underneath the covers to make that happen). Where I see most teams struggle to apply these concepts is when they get to building the user experience around a CQRS system.
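The "two objects where there was once one" idea can be sketched in a few lines. This is illustrative only – the names (`ChangeCustomerAddress`, `CustomerQueries`) and the in-memory dictionary are hypothetical stand-ins, not anything from a real system:

```csharp
using System.Collections.Generic;

// A command: an intent to change something. Illustrative name only.
public record ChangeCustomerAddress(int CustomerId, string NewAddress);

// What was once a single "CustomerService" becomes two objects:

// 1. The command side: accepts writes against the authoritative records.
public class ChangeCustomerAddressHandler
{
    private readonly Dictionary<int, string> _store;
    public ChangeCustomerAddressHandler(Dictionary<int, string> store) => _store = store;

    public void Handle(ChangeCustomerAddress cmd) => _store[cmd.CustomerId] = cmd.NewAddress;
}

// 2. The query side: serves reads, shaped for the screen that needs them.
public class CustomerQueries
{
    private readonly Dictionary<int, string> _store;
    public CustomerQueries(Dictionary<int, string> store) => _store = store;

    public string GetAddress(int customerId) => _store[customerId];
}
```

Note that both sides here still share a single store; whether the query side instead reads from a separate, denormalized store is exactly the separate decision this post is about.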

In a typical N-Tier architecture, commands and queries are served by the same persistent records. When you do this, barring any kind of back-end replication, users see changes to what they’re modifying immediately. Often, the workflow presented is something like:


When moving to CQRS, the “View” side of things lives in a separate store from the “Form” side of things. I elaborated earlier on UI designs for eventual consistency, but there is a choice to make beforehand. In cases where we’re introducing CQRS, it can be fairly difficult to wrest the above synchronicity away from users and try to replace every screen with:


This works when the user actually expects some sort of “background” work to happen, and when we present that background work to the user in a meaningful way.

But when doing CQRS, eventual consistency is an orthogonal choice. They are two completely separate concerns. Going back to our new CQRS design:


We have many choices here on what should be synchronous and what should not. It can all be synchronous, or all asynchronous – it’s a separate decision.

What I have found, though, is that if we build asynchronous denormalization in the back end but try to mimic synchronous results in the front end, we’re really just choosing async where it’s not needed. Not in all cases, of course, but in most of the ones I’ve seen.

Some async-sync mimicry I’ve seen includes:

  • Using Ajax from the client to poll the read store to see if denormalization is “done”
  • Using SignalR to notify the client when the denormalization is “done”
  • Writing to the read store synchronously, but then allowing eventual consistency to fix any mistakes
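The first mimicry pattern above amounts to a poll loop: after issuing a command, the client keeps asking the read store whether the denormalized view has caught up. A rough sketch of that shape (all names hypothetical, and the version check simulated in-process rather than over Ajax):

```csharp
using System;
using System.Threading;

public static class DenormalizationPoller
{
    // Polls readModelVersion until it reaches the version the command produced,
    // or gives up after the timeout. This is the "is it done yet?" dance the
    // UI ends up doing when the back end is async but the screen pretends not to be.
    public static bool WaitForVersion(Func<long> readModelVersion, long expectedVersion, TimeSpan timeout)
    {
        var deadline = DateTime.UtcNow + timeout;
        while (DateTime.UtcNow < deadline)
        {
            if (readModelVersion() >= expectedVersion)
                return true;          // the read side has caught up
            Thread.Sleep(50);         // back off, then poll again
        }
        return false;                 // still stale: the UI has to cope somehow
    }
}
```

Note that the caller must still handle the timeout branch – which is precisely the complexity the synchronous approach below avoids.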

All of these seem a little bizarre to me, as there’s a clear difference in my mind between managing an asynchronous process that is temporally decoupled from the UI and being able to supply synchronous updates to the user.

That’s why I start with a synchronous denormalizer in CQRS systems – where users already expect to see their results immediately. When the user expects immediate results, jumping through hoops to mimic this expectation is just swimming upstream.
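A synchronous denormalizer just runs in-line with the command: by the time the request returns, the read model already reflects the change, and no mimicry is needed. A minimal sketch, assuming hypothetical names and in-memory stand-ins for the two stores:

```csharp
using System.Collections.Generic;

// Illustrative command; the name is hypothetical.
public record SubmitOrder(int OrderId, string Product);

public class SubmitOrderHandler
{
    private readonly Dictionary<int, string> _writeStore; // stand-in for the domain store
    private readonly Dictionary<int, string> _readModel;  // stand-in for the view store

    public SubmitOrderHandler(Dictionary<int, string> writeStore, Dictionary<int, string> readModel)
    {
        _writeStore = writeStore;
        _readModel = readModel;
    }

    public void Handle(SubmitOrder cmd)
    {
        // Command side: persist the authoritative record.
        _writeStore[cmd.OrderId] = cmd.Product;

        // Denormalize synchronously, in the same unit of work: the user's
        // very next query sees the result immediately.
        _readModel[cmd.OrderId] = $"Order {cmd.OrderId}: {cmd.Product}";
    }
}
```

In a real system both writes would typically share one transaction; the point is the timing, not the storage.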

Synchronicity of the web

The web is inherently synchronous request/response. When we’re building interactions in a CQRS system, we have to work within those boundaries. With every action the user takes, there is some synchronous activity that takes place.

We just need to make sure that our interaction design respects this fundamental constraint, and doesn’t confuse users. If the expectation is fire-and-forget, that’s fine, but often we will still need some way for the user to see the status of their request (just like Amazon orders).

Eventual consistency in CQRS can be a powerful tool, but it can’t confuse users. Build the user experience around whatever path you choose.

About Jimmy Bogard

I'm a technical architect with Headspring in Austin, TX. I focus on DDD, distributed systems, and any other acronym-centric design/architecture/methodology. I created AutoMapper and am a co-author of the ASP.NET MVC in Action books.
  • Joseph Daigle

    Well said. This single post could easily be the most simple and straightforward introduction to what CQRS is really about.

  • Charlie Barker

    Thought provoking stuff.
    I had always assumed that you need to process commands asynchronously to get a lot of the benefits that CQRS offers, but as soon as you do that, your UI has to start doing the async-sync mimicry. So do I understand correctly that you’re saying it is a valid option to process commands synchronously and update denormalised views at the same time?

    • Anonymous

      Yep, that’s right, just update it all at once. It’s that whole physical vs. logical separation thing, really.

      • Charlie Barker

        Makes sense.
        So it seems if you take this approach you’re going to use a thread on your web server whilst you wait for the command and denormalised update to complete. The result is that if the system has peak periods that exceed the IO capacity of your storage, then pages will take longer to load whilst waiting for a thread to execute on. The same would also be true if you have to call out to third-party services that are slow to respond – in effect you have temporal coupling. In certain situations this would make me nervous, because if one of the third-party services you use started responding slowly, the result could be your site being unable to serve new requests.

        • Anonymous

          “So it seems if you take this approach you’re going to use a thread on your web server whilst you wait for the command and denormalised update to complete.”

          Not necessarily. You could use ASP.NET MVC 4’s support for “async” on controller actions, for example.
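          For context, an MVC async action has the shape `public async Task<ActionResult> Submit(...)`; while the command (and any synchronous denormalization) is awaited, the request thread goes back to the pool instead of blocking. A framework-free sketch of that shape, with a `Task.Delay` standing in for the real I/O and all names hypothetical:

```csharp
using System.Threading.Tasks;

public static class AsyncCommandPipeline
{
    // In ASP.NET MVC 4+ this would be the body of an async controller action.
    // While the await is in flight, no worker thread is blocked.
    public static async Task<string> SubmitAsync(string commandName)
    {
        await Task.Delay(100); // stands in for command handling + denormalization I/O
        return $"done: {commandName}";
    }
}
```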

          • Anonymous

            What is the user doing during this time too?

          • Charlie Barker

            So my concern is not that the affected users have to wait; it is that the site remains available and responsive to unaffected users.

          • Charlie Barker

            Async threads are a smart choice, as they’ll just consume memory on the server whilst they wait for a third-party response or IO to complete, but they are not free. My experience has been that when pages become unresponsive, customers start refreshing their browsers and starting fresh applications – in effect they become mini DOS attackers. The problem only becomes noticeable when enough customers are on the site that your server becomes starved of resources to the point it can no longer serve new pages; the killer thing is that you are most likely to see this behaviour when your site is at its busiest. If you’re selling something, it is likely that this is the worst time for your site to be offline or responding slowly. Not all developers need to be concerned by this; low-traffic sites that have servers with lots of RAM will never reach this state.

    • I would say a majority of the benefits from CQRS are really on the read (query) side more so than the write (command) side. Users will be more understanding of latency when saving work (writes) than they will be when hitting a page to see data (reads).

  • Chad Lee

    I’ve found that using something like knockout.js here has really helped to mitigate issues like this. We treat logical portions of our app as a “single-page application.” When using knockout, when a user makes a change to the model, it updates the JS view model immediately providing instant user feedback. On the backend, an ajax request is made to actually send the commands. But as far as the JS view model is concerned, the change has already happened even if the read-model hasn’t quite updated yet.

    I guess this falls into the category of synchronous mimicry, but for us in many situations it works very well.

    • Anonymous

      I think that’s OK, as long as it’s obvious to the end-user if the back-end commands fail/are rejected for some reason.

      • I would expect validation to occur synchronously at least, so you basically have a 99.9% chance of the command actually succeeding later on (even when it’s asynchronous).

  • Totally agree. This whole async UI wrestling has had a nasty smell about it to me for a long time. We have just started a new app at work using CQRS with RavenDB and, just as you describe, we are updating the UI in a synchronous manner. It works really well, and it’s also much easier for other team members to grasp when done this way.

    • With Raven you actually ARE doing async UI updates (the indices are eventually consistent).

      Command -> Document
      Query -> Index
      Ajax Poll for updates -> WaitForNonStaleResults***

      It’s ‘easier to understand’ because the infrastructure is taking care of it for you.

      • If you’re using WaitForNonStaleResults in more than a few edge cases, you’re not using Raven properly. Heavy dependence on WaitForNonStaleResults shows a failure at transaction boundary modeling.

        • Anonymous

          Yeah, and perhaps the aggregate boundaries too?

        • I agree with this. I think my intention did not get through – there are supposed to be carriage returns there! I was trying to show the parallels between an eventually consistent CQRS system and the eventually consistent Raven indices.


  • t0PPy

    I don’t have much experience, but from a “purist” standpoint I like the “fake it on the client” approach described by Chad Lee. And as Roy points out, client-side validation helps ensure that commands generally will not fail.

    • Anonymous

      So at that point, you really are writing to 2 stores, no?

  • seankearon

    First sentence, last paragraph: shouldn’t “..can’t confuse users…” be “..can confuse users…”?

    • Anonymous

      Yeah, I guess I meant, “can’t confuse users if we want it to be successful”. I no good with words.

  • Anonymous

    Can you guys please recommend a good example with src code?


  • This “needs” to be one of the first things mentioned in every beginner CQRS article. You’re right, Jimmy, CQRS is not all that difficult. But when ‘experts’ start diluting the concept with eventual consistency, async, etc., it becomes pretty confusing.

    Like Chad Lee mentions, there are obvious scenarios where async makes total sense. For example, on Reddit, if you post a comment you get immediate feedback. However, it can take up to 10 seconds for that comment to truly persist.

  • Radu

    I don’t know if this is still open, but I have a quick question.

    How did you implement the SignalR part? I mean, the denormalizer is doing some background work and when it finishes it notifies the client.

    In this case, is the denormalizer a SignalR client connected to the hub along with everybody else, or is it raising events that the SignalR hub picks up? (And if so, how does it do that?)


    • jbogard

      Man that was like 5 years ago. Check out the CQRS journey from MS patterns and practices.