Clean Tests: Database Persistence
Other posts in this series:
- A Primer
- Building Test Types
- Isolating Internal State
- Isolating the Database
- Isolation with Fakes
- Database Persistence
A couple of posts ago, I walked through my preferred solution of isolating database state using intelligent database wiping with Respawn. Inside a test, we still need to worry about persisting items.
This is where things can get a bit tricky. We have to worry about transactions, connections, ORMs (maybe), lazy loading, first-level caches and more. When deciding which direction to go in setting up a test environment, I tend to default to matching production behavior. Too many times I've been burned by bizarre test behavior, only to find my test fixture/environment doesn't match any plausible or possible production scenario. It's one thing to simplify and isolate; it's another to operate in a bizarro world.
In production environments, I deal with a single unit of work per request, whether that request is a command in a thick client app, a web API call, or a server-side MVC request. The world is built up and torn down on every request, creating a lovely stateless environment.
The kicker is that I often need to deal with ORMs, or, barring that, some sort of unit of work mechanism, even if it's just a PetaPoco DB object. When I set up state, I want nothing shared between that Setup part and the Execute step of my test:
Each of these steps is isolated from the others. With my apps, the Execute step is easy to put inside an isolated unit of work since I'm using MediatR, so I'll just need to worry about Setup and Verify.
I want something flexible that works with different styles of tests, without something implicit like a Before/After hook in my tests. It needs to be completely obvious that "these things are in a unit of work". Luckily, I have a good hook to do so with that Fixture object I use as a central point of my test setup.
At the setup portion of my tests, I’m generally only saving things. In that case, I can just create a helper method in my test fixture to build up a DbContext (in the case of Entity Framework) and save some things:
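A minimal sketch of such a helper, assuming EF 6 and a hypothetical `SchoolContext` as the application's DbContext (both names are invented for illustration):

```csharp
// Hypothetical fixture helper: run an action against a fresh context,
// inside an explicit transaction that commits on success and rolls back on failure.
public static void Txn(Action<SchoolContext> action)
{
    using (var context = new SchoolContext())
    using (var transaction = context.Database.BeginTransaction())
    {
        try
        {
            action(context);
            context.SaveChanges();
            transaction.Commit();
        }
        catch
        {
            transaction.Rollback();
            throw;
        }
    }
}
```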
We create our context, open a transaction, perform whatever action and commit/rollback our transaction. With this method, we now have a simple way to perform any action in an isolated transaction without our test needing to worry about the semantics of transactions, change tracking and the like. We can create a convenience method to save a set of entities:
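Such a convenience method might look like this sketch, building on the `Txn` helper (setting `Entry(...).State` works with both EF 6 and EF Core, without needing the entity's compile-time type):

```csharp
// Hypothetical fixture helper: persist a batch of entities in one transaction.
// Txn handles the context, transaction and SaveChanges.
public static void Save(params object[] entities)
{
    Txn(context =>
    {
        foreach (var entity in entities)
        {
            context.Entry(entity).State = EntityState.Added;
        }
    });
}
```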
And finally in our tests:
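Setup then collapses to a one-liner. A hypothetical example, where `Speaker` is an invented entity:

```csharp
var speaker = new Speaker { Name = "Jane" };

Fixture.Save(speaker);

// speaker is persisted, the context is disposed,
// and the entity is now detached from any ORM
```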
We still have our entities to be used in our tests, but they’re now detached and isolated from any ORMs. When we get to Verify, we’ll look at reloading these entities. But first, let’s look at Execute.
As I mentioned earlier, for most of the apps I build today, requests are funneled through MediatR. This provides a nice uniform interface and jumping-off point for any additional behaviors/extensions. A side benefit is that the Execute step in my tests is usually just a Send call (unless it's unit tests against the domain model directly).
In production, there's a context set up, a transaction started, and a request made and sent down to MediatR. Some of these steps, however, are embedded in extension points of the environment, and even if extracted out, they're started from extension points. Take transactions, for example: I hook these up using filters/modules. To use that exact execution path, I would need to stand up a dummy server.
That’s a little much, but I can at least do the same things I was doing before. I like to treat the Fixture as the fixture for Execute, and isolate Setup and Verify. If I do this, then I just need a little helper method to send a request and get a response, all inside a transaction:
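A sketch of such a helper, assuming a StructureMap-style child container (`GetNestedContainer`/`GetInstance`) and the older synchronous MediatR `Send`; `Container` and `SchoolContext` are hypothetical fixture members:

```csharp
// Hypothetical fixture helper: send a MediatR request inside its own
// child container, context and transaction, mimicking a production request.
public static TResponse Send<TResponse>(IRequest<TResponse> request)
{
    TResponse response = default(TResponse);

    using (var scope = Container.GetNestedContainer())
    {
        // The handler's DbContext comes from the same child container,
        // so the request runs in a single, isolated unit of work.
        var context = scope.GetInstance<SchoolContext>();
        using (var transaction = context.Database.BeginTransaction())
        {
            try
            {
                var mediator = scope.GetInstance<IMediator>();
                response = mediator.Send(request);
                context.SaveChanges();
                transaction.Commit();
            }
            catch
            {
                transaction.Rollback();
                throw;
            }
        }
    }

    return response;
}
```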
It looks very similar to the "Txn" method I built earlier, except I'm treating the child container as part of my context and retrieving all items from it, including any ORM class. Sending a request like this ensures that when Send returns in my test method, everything is completely done and persisted:
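In a test that might look like the following sketch, where `Speaker` and `ChangeNameCommand` are invented names:

```csharp
var speaker = new Speaker { Name = "Jane" };
Fixture.Save(speaker);

// The request runs in its own unit of work; by the time Send returns,
// the handler's transaction has committed.
Fixture.Send(new ChangeNameCommand
{
    SpeakerId = speaker.Id,
    NewName = "Janet"
});
```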
My class under test now routes through this handler:
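A hypothetical handler, using the older synchronous MediatR handler signature (current versions use a Task-returning `Handle`); `ChangeNameCommand` and `Speaker` are invented names:

```csharp
public class ChangeNameCommand : IRequest<Unit>
{
    public int SpeakerId { get; set; }
    public string NewName { get; set; }
}

public class ChangeNameCommandHandler : IRequestHandler<ChangeNameCommand, Unit>
{
    private readonly SchoolContext _context;

    // The same child-container-scoped context the Send helper wraps in a transaction
    public ChangeNameCommandHandler(SchoolContext context)
    {
        _context = context;
    }

    public Unit Handle(ChangeNameCommand message)
    {
        var speaker = _context.Set<Speaker>().Find(message.SpeakerId);

        speaker.Name = message.NewName;

        // No SaveChanges here; the surrounding unit of work commits.
        return Unit.Value;
    }
}
```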
With my Execute built around a uniform interface with reliable, repeatable results, all that’s left is the Verify step.
Failures around Verify typically arise because I'm verifying against in-memory objects that haven't been rehydrated. A test might pass or fail because I'm asserting against the result returned from a method, when in actuality a user makes a POST, something mutates, and a subsequent GET retrieves the new information. I want to reliably recreate this flow in my tests without going through all the hoops of making real requests. I need to make a fresh request to the database, bypassing any caches, in-memory objects and the like.
One way to do this is to reload an item:
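A sketch of such a helper, again building on `Txn` (the ID is read before the lambda because C# doesn't allow `ref` parameters to be captured in closures):

```csharp
// Hypothetical fixture helper: replace an in-memory entity with a fresh
// copy loaded from the database in a new context and transaction.
public static void Reload<TEntity>(ref TEntity entity, Func<TEntity, object> getId)
    where TEntity : class
{
    var id = getId(entity);
    TEntity reloaded = null;

    Txn(context => reloaded = context.Set<TEntity>().Find(id));

    entity = reloaded;
}
```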
I pass in an entity I want to reload, and a means to get the item’s ID. Inside a transaction and fresh DbContext, I reload the entity and set it as the ref parameter in my method. In my test, I can then use this reloaded entity as what I assert against:
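Putting the three steps together, a full test might look like this sketch (`Speaker`, `ChangeNameCommand` and the Shouldly-style `ShouldBe` assertion are all assumptions):

```csharp
var speaker = new Speaker { Name = "Jane" };
Fixture.Save(speaker);

Fixture.Send(new ChangeNameCommand { SpeakerId = speaker.Id, NewName = "Janet" });

Fixture.Reload(ref speaker, s => s.Id);

// Asserting against the freshly loaded row, not the stale in-memory object
speaker.Name.ShouldBe("Janet");
```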
In this case, I tend to prefer the “ref” argument rather than something like “foo = fixture.Reload(foo, foo.Id)”, but I might be in the minority here.
With these patterns in place, I can rest assured that my Setup, Execute and Verify are appropriately isolated and match production usage as much as possible. When my tests match reality, I’m far less likely to get myself in trouble with false positives/negatives and I can have much greater confidence that my tests actually reduce bugs.