NServiceBus and concurrency

A while back, Andreas posted on NServiceBus sagas and concurrency. In that post, he described what to consider when choosing a concurrency model in NServiceBus, how to change it, and how it relates to sagas.

One thing that comes as a surprise to those new to NServiceBus (it certainly was to me) is that the default transaction isolation level System.Transactions uses is the highest level, Serializable. For a lot of applications, this is overkill.

Just to review the different IsolationLevel values:

Serializable
Volatile data can be read but not modified, and no new data can be added during the transaction.
RepeatableRead
Volatile data can be read but not modified during the transaction. New data can be added during the transaction.
ReadCommitted
Volatile data cannot be read during the transaction, but can be modified.
ReadUncommitted
Volatile data can be read and modified during the transaction.
Snapshot
Volatile data can be read. Before a transaction modifies data, it verifies if another transaction has changed the data after it was initially read. If the data has been updated, an error is raised. This allows a transaction to get to the previously committed value of the data.
Chaos
The pending changes from more highly isolated transactions cannot be overwritten.
Unspecified
A different isolation level than the one specified is being used, but the level cannot be determined. An exception is thrown if this value is set.
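To see how this plays out in code, here's a minimal sketch using System.Transactions directly. A parameterless TransactionScope runs at Serializable; passing TransactionOptions lets you opt in to a less restrictive level (the class and structure here are illustrative, not from NServiceBus itself):

```csharp
using System;
using System.Transactions;

class IsolationLevelDemo
{
    static void Main()
    {
        // By default, new TransactionScope() runs at Serializable.
        // Pass TransactionOptions to choose a less restrictive level.
        var options = new TransactionOptions
        {
            IsolationLevel = IsolationLevel.ReadCommitted,
            Timeout = TransactionManager.DefaultTimeout
        };

        using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
        {
            // Transactional work goes here; a SqlConnection opened inside
            // this block enlists in the ambient transaction automatically.
            Console.WriteLine(Transaction.Current.IsolationLevel); // ReadCommitted

            scope.Complete();
        }
    }
}
```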

This is slightly more generic than the SQL Server description of these isolation levels. Looking at those statements, Serializable is likely not what we want. It’s the safest of all the levels, achieved by sacrificing concurrency. But with NServiceBus, we have the ability to scale our endpoints both up and out. In order to scale, we’ll need to revisit our concurrency strategy.

Choosing an isolation level

The nice thing about isolation levels in NServiceBus is that you can tune these on an endpoint basis. If you have messages that require different concurrency needs, you’re better off putting those in their own endpoint as those messages likely have different SLAs than ones with other concurrency needs. To override the isolation level for a given endpoint, just use the IsolationLevel configuration method:

using System.Transactions;
using NServiceBus;

public class IsolationLevelConfigurer : IWantCustomInitialization
{
    public void Init()
    {
        Configure.Instance.IsolationLevel(IsolationLevel.ReadCommitted);
    }
}

But I wouldn’t choose an isolation level at random; it’s something we should carefully consider. In fact, if you expect to have any sort of concurrent users against a single entity, it’s wise to be explicit about your concurrency model. Having gone through the exercise of choosing an isolation level, I start to wonder if it shouldn’t always be explicit, no matter what your application (unless handled for you automatically by the underlying frameworks).

In my system, concurrent users do access the same entities. Read Committed is a sensible default for these scenarios, as it prevents my statements from reading rows that another transaction has modified but not yet committed (preventing dirty reads). From the SQL Server documentation:

Specifies that statements cannot read data that has been modified but not committed by other transactions. This prevents dirty reads. Data can be changed by other transactions between individual statements within the current transaction, resulting in nonrepeatable reads or phantom data. This option is the SQL Server default.

However, if we’re using an ORM, we can go even further and use its built-in concurrency models to complement our isolation levels.

With optimistic locking at the application level, paired with relaxed transaction isolation levels, we were able to boost the performance of our system fairly easily. In our case, we went with Read Uncommitted and an optimistic concurrency strategy using a SQL rowversion column (only for entities that were truly mutable), allowing us to increase the number of concurrent threads in our endpoints from 1 each to 4 each.
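The rowversion technique can be sketched without an ORM at all: include the version value you originally read in the UPDATE’s WHERE clause, and treat zero affected rows as a concurrency conflict. The table and column names below are illustrative, not from the system described above:

```csharp
using System.Data;
using System.Data.SqlClient;

public class OrderRepository
{
    // Attempts an optimistic update: succeeds only if the row still has
    // the rowversion value we read earlier. Returns false on conflict.
    public bool TryUpdateStatus(SqlConnection conn, int orderId,
        string newStatus, byte[] originalRowVersion)
    {
        const string sql =
            @"UPDATE Orders
              SET Status = @status
              WHERE Id = @id AND RowVersion = @rowVersion";

        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@status", newStatus);
            cmd.Parameters.AddWithValue("@id", orderId);
            cmd.Parameters.Add("@rowVersion", SqlDbType.Timestamp).Value = originalRowVersion;

            // SQL Server bumps the rowversion column automatically on every
            // write, so zero rows affected means another transaction changed
            // the row after we read it.
            return cmd.ExecuteNonQuery() == 1;
        }
    }
}
```

On a conflict, the caller can re-read the entity and retry, or surface the conflict to the user, depending on the business rules.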

Choosing a concurrency strategy requires careful analysis and planning. Changing the model is quite easy; choosing one is the difficult part.


About Jimmy Bogard

I'm a technical architect with Headspring in Austin, TX. I focus on DDD, distributed systems, and any other acronym-centric design/architecture/methodology. I created AutoMapper and am a co-author of the ASP.NET MVC in Action books.

  • Carlos Ribas

    Jimmy, a heads-up: we’ve seen SQL Server 2008 behave unexpectedly with Read Committed isolation level. Specifically, NHibernate was doing hundreds of delete/insert/updates in a transaction and we saw duplicate rows that would be impossible if Read Committed worked as expected. We found that it is possible to have two transactions modifying the same rows and for them to see each other’s modifications before the transactions were committed. This was of course SQL Server functioning as-designed, but not as one might expect with this isolation level.

    We still went with Read Committed for performance reasons, but we cleaned up NHibernate’s database interactions so that it does set operations using set commands instead of running hundreds of delete/insert/update statements. After we did that, SQL Server’s Read Committed functionality worked as expected for us.

    On another note — NServiceBus is changing their default isolation level to Read Committed in a future release.

    • jbogard

      Ha, wow! For me, I never use NH to do more than update an aggregate or entity. If I’m updating things in sets, going just raw SQL (still through ISession) works a lot better. More control, too!

      Thanks!

      • Carlos Ribas

        Yeah, that’s what I changed it to. We were updating entity’s collection(s) of other entities for a set of roots in this case. I was actually pretty shocked this could happen but I reproduced it easily with two query analyzer windows and then found some blog posts about it. SQL Server is taking some shortcuts to look better in benchmarks IMO ;) Anyway it works fine if you don’t do stupid things like NH was doing in this case, lol
