Event sourcing revisited

Over the last few years, Event Sourcing (ES) has become one of my favorite architectural patterns when implementing a complex line of business (LOB) application or a complex component that is part of one.

Attention – paradigm change!

It took me some time though to make the full mental switch that is required to really understand what ES is and what its implications are. I openly admit that the first two iterations I was responsible for were sub-optimal, to say the least. But then, failing is not a problem as long as we learn from our failures and do better next time.
Interestingly, the theory behind ES seems to be really easy to understand. A lot of developers I have personally taught or mentored quickly assured me that “they got it” when in reality they still remained very much stuck in the classical stateful thinking.

The classical, stateful world

In the classical mainstream architecture, the applications we design and build are stateful. What does that mean? It means that we always store a snapshot of the objects we deal with in our application in a data store. The snapshot represents the state the object was in after its last modification. We continuously overwrite the previous state, or snapshot, with the newest version in the data store. For simple applications this might be more than sufficient, since we are not really interested in what came before or in how we got to the point where we currently are. Such applications are only interested in the here and now.
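
To make this concrete, here is a minimal sketch in Python (the store and field names are made up for illustration): each save simply overwrites the previous snapshot, and the history of changes is lost.

    store = {}  # our "data store": object id -> latest snapshot

    def save(object_id, snapshot):
        # Overwrites whatever state was there before; no history is kept.
        store[object_id] = snapshot

    save("account-john", {"balance": 2535.45})
    save("account-john", {"balance": 5455.10})
    print(store["account-john"])  # {'balance': 5455.1} - the old state is gone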

Let me give you an example of what I am talking about. Let’s assume that John Doe has a bank account. Today he wants to know the current balance of his account. With his bank’s mobile app he can access this information and finds that the balance is $2535.45 as of today. A week later John again wants to know the newest balance, and he’s told that it is now $5455.10. John is very happy that he now has more money in his account than a week before. John is an easy-going man and doesn’t worry too much about details as long as the big picture looks OK; in this case, since the balance is clearly positive, he’s satisfied.

We want more insight

Laura, John’s wife, on the other hand, is a little more worried and wants to know more details. She is interested in how the balance of their account could change so much in only one week. Thus she drills into the details and sees the following:

Date        Description       Debit     Credit    Balance
05/01/2015  HEB Round Rock    125.24              5455.10
05/02/2015  Loewe Hutto        25.00              5430.10
05/03/2015  Check #181        335.00              5095.10
05/03/2015  Payroll …                   3145.25   8240.35
…
05/26/2015  Fire Bowl Cafe      9.20              2696.35

This is the list of transactions executed on the account. It carefully lists, in detail, each change that has been applied to the account. We can easily see where the account has been credited and where it has been debited, when it happened, and what the reason for the change was. This is a journal of financial transactions. We can call each line of this journal an event that happened, each event telling us what happened to the account at a specific date and/or time.

When we look at the above table we see that we have a stream of events that, when applied to the account, results in the balance of $2696.35 as of today, 05/26/2015. While John Doe is only interested in this last number, his wife now knows so much more. She can reason about all the events that happened during the current month. At any time she can ask questions (and get answers to them) like: “what was the balance 10 days ago?”, “how come the balance increased dramatically on May 3rd?”, or “how much did we spend at HEB this month?”.

Exactly this kind of deeper insight can be provided to the users of an application that uses event sourcing as an architectural pattern. Instead of storing the current state of “things” in the data store, we store for each object a stream of events which represents what happened to this particular object over time. Once this stream of events is persisted somewhere, we can replay it to generate the state of the respective object as of today, or as of yesterday, or a week ago, or… The possibilities are endless.
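
As a rough sketch (in Python, using only the journal rows shown above plus an assumed opening balance), replaying the events up to a cutoff date answers exactly such point-in-time questions:

    from datetime import date

    # Each event records when it happened, what happened, and the delta
    # (positive = credit, negative = debit). Only the rows shown above
    # are included; the elided ones are omitted.
    events = [
        (date(2015, 5, 1), "HEB Round Rock", -125.24),
        (date(2015, 5, 2), "Loewe Hutto", -25.00),
        (date(2015, 5, 3), "Check #181", -335.00),
        (date(2015, 5, 3), "Payroll", +3145.25),
        (date(2015, 5, 26), "Fire Bowl Cafe", -9.20),
    ]

    def balance_as_of(events, cutoff, opening_balance):
        """Replay all events up to the cutoff date to derive the state."""
        return opening_balance + sum(
            amount for when, _, amount in events if when <= cutoff)

    # "What was the balance on May 16th?" is just a replay with a cutoff.
    print(round(balance_as_of(events, date(2015, 5, 16),
                              opening_balance=5580.34), 2))
    # -> 8240.35, the balance right after the payroll credit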

Events are immutable

Another very interesting fact is that events that have happened are immutable. An event describes something that happened in the past and thus cannot be undone. Consequently, the storage mechanism that we use to persist our events becomes very simple. It is basically an append-only log. We continuously append new events at the end. Existing events are never touched again. No update or delete operation is defined; only append operations are ever possible.
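
A minimal sketch of such a store in Python (illustrative only, not a real event store product) makes the point: the only write operation is an append.

    class EventStore:
        """Append-only storage: events can be appended and read,
        never updated or deleted."""

        def __init__(self):
            self._streams = {}  # stream id -> ordered list of events

        def append(self, stream_id, event):
            self._streams.setdefault(stream_id, []).append(event)

        def read(self, stream_id):
            # Return a copy so callers cannot rewrite history.
            return list(self._streams.get(stream_id, []))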

Mistakes always happen

If for some reason we made a mistake and added a wrong entry to our transaction log, then we can fix this by adding a compensating transaction. That is, I credit, say, the $24 back to the account that I had previously debited by mistake.
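
Using the store sketched above (the account id and the $24 are made up, echoing the example in the text), the fix is itself just another appended event:

    store = EventStore()
    store.append("account-4711", {"type": "AccountDebited", "amount": 24.00})
    # Oops, that debit was a mistake. We never remove the wrong event;
    # instead we append a compensating event that credits the amount back.
    store.append("account-4711", {"type": "AccountCredited", "amount": 24.00,
                                  "reason": "compensation for wrong debit"})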

Names are important

When we use event sourcing in a LOB application, we give our events meaningful names. The name of the event describes the exact context. Since an event describes something that has happened in the past, the name should always be written in past tense with the verb at the end.

The payload of the event (its properties) is the delta of what has changed.
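
A hypothetical event in Python might look like this: named in past tense, immutable, and carrying only the delta (the amount of the change), not the resulting balance or the whole account state.

    from dataclasses import dataclass

    @dataclass(frozen=True)  # frozen: an event never changes once created
    class AccountDebited:
        account_id: str
        amount: float       # the delta, not the new balance
        description: str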

Another example

If we work with the same domain as in my previous posts about DDD (see here, here, here, here and here) – the loan application – we could have events such as

  • ApplicationStarted
  • PersonalInfosApplied
  • FinancialInfosApplied
  • ApplicationSubmitted
  • OffersGenerated
  • OfferAccepted
  • ApplicationApproved
  • LoanBoarded
  • etc.

From the chosen names it should be pretty evident what happened to the loan application object in each step. A new loan application is started by the user. She first provides her personal infos and then continues to provide some additional financial infos. Finally she submits the application. The system then generates some loan offers for her. The user selects one offer and accepts it. The system performs some more credit checks and finally approves the loan application. Now the loan can be boarded, which means that the funds are transferred to the user’s account.
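
Here is a minimal sketch (in Python, with illustrative event and field shapes) of how the current state of a loan application can be rebuilt by replaying its event stream; replaying only a prefix of the stream yields the state as of any earlier point in time.

    def rehydrate(events):
        """Fold the event stream into the current state of the application."""
        state = {"status": None, "offers": [], "accepted_offer": None}
        for event in events:
            kind = event["type"]
            if kind == "ApplicationStarted":
                state["status"] = "started"
            elif kind == "PersonalInfosApplied":
                state["personal_infos"] = event["infos"]
            elif kind == "FinancialInfosApplied":
                state["financial_infos"] = event["infos"]
            elif kind == "ApplicationSubmitted":
                state["status"] = "submitted"
            elif kind == "OffersGenerated":
                state["offers"] = event["offers"]
            elif kind == "OfferAccepted":
                state["accepted_offer"] = event["offer_id"]
            elif kind == "ApplicationApproved":
                state["status"] = "approved"
            elif kind == "LoanBoarded":
                state["status"] = "boarded"
        return state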

In our data store we will now find such a stream of events for each loan application that has been made over time. No LoanApplication table or anything similar is needed.

The read model

Now, this is all straightforward and relatively easy to implement. But what about queries? So far we have talked about operations that change objects. I call this the write side. But a normal LOB application also needs a read side, represented by the queries that are executed on the data to display something on screen or print it out on paper. A data store for events – called an event store – is not at all suited for (complex) queries. For this reason we need a store which contains the data in a shape that best suits our needs for display. We call this store the read model. True to the spirit of CQRS, read concerns should be handled completely separately from write concerns.

The read model is in most cases a denormalized view of the current state of objects. It can be provided by a relational database, a document database, or a full-text index, to name just a few. The read model is constantly updated using the events that are generated by the system as discussed above. This constant updating of the read model can happen either synchronously or asynchronously. In the latter case we say that the read model is eventually consistent with the write model, since there is a tiny time gap (usually milliseconds) between the moment when the write model records a change and the moment when the read model has been updated with that change too.

The read model should be designed in a way that all necessary queries triggered by the application can be executed with as few I/O operations as possible. The query logic should be as simple as possible. Data should be pre-aggregated where needed, etc.

It is important to note that the read model is pre-prepared for queries. The update of the read model happens when something in the system changes, which in turn is represented by the events. This makes total sense, since write operations that cause a change are much, much less frequent in a typical LOB application than read operations.
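
As a rough sketch (in Python, with view and field names invented for illustration), a projection is just a set of small event handlers that keep a flat, query-friendly view up to date; it can run synchronously with the write or asynchronously from a queue:

    loan_applications_view = {}  # application id -> flat row, ready to display

    def project(event):
        """Update the denormalized view from a single event."""
        kind = event["type"]
        if kind == "ApplicationStarted":
            loan_applications_view[event["application_id"]] = {
                "status": "started", "accepted_offer": None}
        elif kind == "OfferAccepted":
            row = loan_applications_view[event["application_id"]]
            row["status"] = "offer accepted"
            row["accepted_offer"] = event["offer_id"]
        # ... one small handler per event type this view cares about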

About Gabriel Schenker

Gabriel N. Schenker started his career as a physicist. Following his passion and interest in the stars and the universe, he chose to write his Ph.D. thesis in astrophysics. Soon after, he dedicated all his time to his second passion, writing and architecting software. Gabriel has been working for over 25 years as a consultant, software architect, trainer, and mentor, mainly on the .NET platform. He is currently working as senior software architect at Alien Vault in Austin, Texas. Gabriel is passionate about software development and tries to make the lives of developers easier by providing guidelines and frameworks to reduce friction in the software development process. Gabriel is married and the father of four children; in his spare time he likes hiking in the mountains, cooking, and reading.
This entry was posted in architecture, DDD, design, Event sourcing, How To.

  • Halvard Hagesaether

    I got it :)

  • Joel Warburton

    I’ve always had trouble with the read model. For example, let’s say I have an OrderCreated event; this event includes various order-related things, including CustomerId. In my read model I require a listing of all my current orders and the name of the customer for each order. What would be the best way to get the customer name, as it wouldn’t be included in the event? Cheers.

    • gabrielschenker

      As always, it depends. The easiest solution is to keep the customer ID as a key in the orders view and, when querying, join the order with the customer view.

      • Matt Goodwin

        I’m interested in how you would ‘join’ these two together. In a QueryHandler maybe? Would your two joined views then make another new view model?

        • gabrielschenker

          It really depends on what data store you use. If it is a relational DB then joins are straightforward, but if you’re using e.g. a document DB which doesn’t support joins natively, then you probably would want to write a “provider” or “query handler” that does the join in-memory.
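
          Roughly, such an in-memory join could look like this (a sketch in Python; the view names and shapes are made up):

              def orders_with_customer_name(orders_view, customers_view):
                  # Join each denormalized order row with its customer row
                  # in memory; both views are keyed by id.
                  for order in orders_view.values():
                      customer = customers_view.get(order["customer_id"], {})
                      yield {**order,
                             "customer_name": customer.get("name", "unknown")}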

    • Matt Goodwin

      I too have the same problem at times and can’t seem to get my head around fat events (including the customer name in the OrderCreated event) or having a join like what @gabrielschenker:disqus mentioned.

      The problem I find with storing the customer name in the same read model as your current orders is that if the customer name ever changes, you would have to update your models, so a lot of I/O is required, which I guess is why having a separate view is the easiest solution.

  • Nice article, and as you have pointed out it takes some time to start seeing your system as a series of events. Any thoughts on how you would decide whether to go for an event-sourced system? Though the answer is again ‘it depends’, is there any thought process that you apply?

    • jbogard

      My process is “does the business think of and model their problem as a series of events”, and not the developers. If so, then they’re already thinking of tools like StreamInsight and the like, and a transition to an event store is natural. They might recognize events, but that doesn’t mean they accept that events are the source of truth. So be very, very careful here – such a huge paradigm shift in the fundamental building blocks of your system, in which the data will live far beyond your application, is not a decision to make lightly.

      If not, well, just don’t. If the half-dozen event sourcing rescue projects I’ve been a part of over the years have taught me anything, it’s that choosing nascent technology with almost no production support and tooling is beyond adjectives for a “bad idea”.

      • gabrielschenker

        Event sourcing done wrong can indeed lead to loads of problems, but the same is true for any technology. If I use a hammer when a screwdriver would be more appropriate, then I have an issue. If I use an RDBMS when a document DB would be a better fit, then “I’m screwed”. In my long career in the IT business there is not a single software architecture that hasn’t been totally abused, done wrong, or failed. It’s the people (developers, product owners, stakeholders, etc.) and their skills (and personal agendas!) that decide whether or not a project is going to succeed.
        I personally have had very good experiences with event sourcing…
        But as always… It depends

      • My concern with event sourcing…regardless of tooling preferences (Udi’s just-add-two-date-fields-to-a-table approach, or Greg’s full-on storage tooling)…is that there seems to be the notion that things like backup and “replay all” to populate new features are easy, quick, and well-understood things to do. But handing a software project off to a customer and their dev team to support usually introduces such a high learning curve for the dev team and the devops/IT team that they don’t know where to begin. It is totally awesome if you sell the complete story up front to the customer…and they buy it – but to put it into a project under the flag that “development will go faster” never seems to be the appropriate answer. A “solution” should consider all aspects of a customer’s/client’s needs – never just code structure.

        I will take the agnostic stance here. An “it depends” approach. I treat event sourcing as any other mechanism to do my job…a tool in the toolbox. I agree with Gabriel from time to time (we are building an internal dev-team-supported product that will benefit from the historical data). But I also agree with Jimmy in many other cases. I don’t think we have a single client that we are currently working with that would directly benefit from event sourcing (at the moment), so to put it in for the better dev story wouldn’t be right or fair to the customer.

        • Marco

          After all this hype surrounding ES, people will come back to earth and realize that yes, there are a lot of benefits to using ES, but not as the main source of truth. It’s madness if you just think about it for a minute: having to replay a couple of events just to get the latest state of an object. And it doesn’t really matter if you take a snapshot for every two events you receive, because just the fact that you have to load more than one is plain wrong. Sure, you can go the easy way and keep the whole thing in faster memory, but that’s just trying to justify something which isn’t justifiable. ES is amazing for auditing and for keeping track of the changes in your system, but that’s about it IMO.

          • gabrielschenker

            No offense, but I think you got something wrong here. When using ES you never query the event store. You always have a (denormalized) read model that represents the current state of the entities or views you care about. Actually, you shape the read model in a way that allows you to retrieve the data you need, e.g. to display on screen, with the least amount of I/O necessary. The read model is created synchronously or asynchronously from the events that the domain aggregates create.
            But if you are talking about re-hydrating the aggregate from the stream of events stored in the event store, then that is a totally different topic. According to Greg Young (the “father” of ES and creator of GetEventStore), retrieving a stream of several hundred or even several thousand events from GetEventStore is extremely fast (a few milliseconds), and rebuilding the current state of the aggregate from this stream, which happens in memory, is a matter of microseconds. It is very rare to have aggregates whose life cycle is such that they produce more than a few hundred or a few thousand events. Performance to re-hydrate the aggregates has never ever been a problem in the projects I was responsible for.

          • Marco

            No offense taken, but I have been working with CQRS/ES for a couple of years now. I’m not talking about the read models; I’m talking about the domain side of things. That’s exactly what I meant: you can make it so fast that something that is wrong IMO becomes right. You see, you’re right about the read models being denormalized; we want things to be quick enough. So, forgetting about the eventual consistency side of things, we can always query for the last state of an object, whereas in the actual domain, to get that same state, you need to replay events. I simply don’t care if the “father” of this or that says things are faster if you’re using a specific tool, even if the concept does look right on paper. Like I said before, ES is a must in all the projects I’m responsible for, unless I’m not allowed to use it, because it is the best auditing mechanism I’ve ever seen so far; I’m just against using it as the sole source of truth. In any case, I like the way you’ve put things together in the article, even if I don’t agree with the way some of us use ES.

          • gabrielschenker

            If speed is a concern then we can always cache the aggregates in memory (in a single-node scenario this is easy, and in a distributed system we can e.g. use Redis). We have done this with success in a large enterprise application.

          • Marco

            Hi Gabriel, the problem isn’t just the speed. For me it feels wrong being able to get the latest state of an object directly in the read models but not when working with the aggregates, as we have to replay all the events. As I said, snapshots do help, as do all the other techniques you mention, caching included, to speed up the process, but I just don’t really like using ES as the main source of truth, and neither do any of the DBAs I have worked with. So although I use it (albeit in a different way) very often, it will always be as an auditing solution only. One of the companies I worked for had an average daily figure of around 1.5 billion requests, so not even foreign keys are allowed in the databases. I wonder, if I were to come to those guys and introduce ES as a main source of truth, how they would take it.

  • This is a great introduction to the concepts behind this whole architectural pattern. Great job!

  • Henry Ho

    Thanks for such an insightful article.

    What are the common options for keeping the read model updated, and what would be their pros & cons? Would a Memory-Optimized Table be the right tool for the read model? Is it considered OK for the system to use the read model for business decisions, or should business logic only consult the events as the single source of truth?

    • gabrielschenker

      I will blog about this topic shortly: what are my options to create a read model? Stay tuned!

  • Matt Goodwin

    Great explanation, I feel like I’ve learnt so much from your DDD series so far :-)

    Sometimes in my domain we have events that have happened and we have to model behaviour around that. For example, an external company informs us that a candidate didn’t turn up for their interview. This is something that’s happened that the business is interested in; do we model this as an event (i.e. CandidateDidntTurnUpToInterview) which has a ViewModel projector listener to update the read model?

  • Marco

    The “burden” necessary to get the latest state of an object is the reason why I won’t use ES as my main data storage. I know there are ways to make sure you don’t have to load a lot of events (let’s call those “optimizations”) to get the latest state of an object, but that’s just something I can’t live with; it feels so wrong IMO. I do see a lot of benefits in using ES for auditing though, and that’s the only place where I use it. One might argue that keeping two sources of truth isn’t ideal, but to those I have two words to say: eventual consistency.

    • Just watched this; maybe it will be useful in your case?
      https://www.youtube.com/watch?v=JHGkaShoyNs

      • Marco

        Thanks for the video Vahagn, but I know ES and I’ve in fact used it many times (though in fairness not in the way many people do). I’m talking about something related to how you get the latest state of an object and the reason why I wouldn’t use ES as my main source of truth.

        If you watch that, maybe you should also watch Greg’s 6-hour classic one: https://www.youtube.com/watch?v=whCk1Q87_ZI

  • John Nickerson

    The thing I wonder about ES is forward compatibility or migrations. When you get a new version of an event long after you’ve been recording the old version, what’s the best way to have the two versions of the same event live side by side, to replay the history? Is it best to add a whole new event to handle the new version? Do you write all your events to be forwards-compatible so they can be handled by the newer code? Do you take a new snapshot to work just with the new event from then on?

    I’m sure I’m missing something here. It just seems that, over time, the history is going to get pretty hairy and harder to handle, as the requirements and capabilities of the system change.

    • gabrielschenker

      Whenever an event changes, this is a contract change. And as in real life, changing a contract is not a trivial thing and should be avoided *IF* possible. If the change is absolutely required then there are two main categories of changes: a) adding a new property, or b) removing or changing an existing property. The former is simple: just provide meaningful defaults for the new property such that when you deserialize an old event it gets the correct defaults set. The latter is more involved: here you should write up-converters that convert your old events to new ones whenever you read them from the event store. The point is that your code should then only have to deal with the new version of the event(s). I will write about this in more detail in my upcoming ES-related posts.
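
      Roughly, such an up-converter could look like this (a sketch in Python; the event shape, the version field, and the renamed property are made up):

          def upconvert(event):
              """Convert old event versions to the newest one on read, so
              the rest of the code only ever sees the latest version."""
              if event["type"] == "OfferAccepted" and event.get("version", 1) == 1:
                  event = dict(event)  # don't mutate the stored event
                  # Hypothetical v2 change: 'user' was renamed to 'accepted_by'.
                  event["accepted_by"] = event.pop("user", None)
                  event["version"] = 2
              return event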

    • Harry McIntyre

      I’ve tackled this in my Sourcery project by having rebuild interceptors which can modify the raw JSON of events as they are read from the immutable store.

  • Noor

    Hi, how will you ensure, while withdrawing an amount from an account, that the account has a sufficient balance?

    • jbogard

      How does the real world do this? Hint: not with transactions, but with policies. A very common misconception is that ACH is ACID. It’s not; instead you have balance checks, but if something goes off you can have negative balances or even an overdraft.

      • gabrielschenker

        Thanks Jimmy for replying to that. I agree, it’s not ACID. In real life, hardly anything that needs to scale is ACID. So yes, you have processes in place that deal with temporary violations of business rules. In this case compensating actions will be triggered, e.g. a transfer of money from another account (maybe savings) to the overdrawn account.

        • Noor

          What about modeling the account aggregate as a parallel model (http://martinfowler.com/eaaDev/ParallelModel.html) on the write side itself? This will not only help in easily restoring the current state, but also with undo and the audit trail. Moreover, any invariants that need to be ensured at the aggregate level can also be achieved easily. I have tried this in one of my projects and so far I am quite satisfied with this approach.