Refactoring to tests is no easy task. The biggest problem you run into is moving code around with no feedback on whether you are breaking other modules, short of doing very high-level user/functional testing directly against the UI. That makes the first step of refactoring to tests the most difficult one to take.
Before I begin describing the technique at hand, I would like to preface it with a disclaimer and a warning: DO NOT use this to replace actual unit tests. This type of testing should ONLY be used as a baseline so that you can start doing larger refactorings that could potentially break portions of code. It gives you the start of a safety net, albeit a pretty flimsy one.
A while back Joe Ocampo mentioned a technique that Michael Feathers called ‘vice’ testing. I recently came across an article on InfoQ where Ian Roughley calls it Logging Seam Testing. At the core it’s the same thing, just named a little differently. The InfoQ article is in Java, so I created a utility to do the same thing in C# that you can find in my GitHub repository here. Here is a direct download link to a zip archive.
This is quite an interesting technique. One thing to note: I have yet to use it myself, so please take it with a grain of salt. It seems like it could work well for extremely tightly coupled, monolithic code, but it won’t necessarily fit every scenario. I could see it being a good fit while trying to reduce cyclomatic complexity or break large methods/classes into smaller ones. That being said, let’s dive in.
The basic idea is that you want to get some form of tests into your code, but it’s much too difficult to break dependencies and create seams to get unit tests in. That’s where vice testing comes in. It’s actually very simple and straightforward. The process goes like this:
- Determine what it is that you want to test; in this sample it’s the amount of deductions and the amount paid to an employee via an Employee.Pay method
- Insert logging statements in your production code that simply log the internal state of an object. In the example that is the deduction total, the salary, and ‘completed’
- Write a test that sets up the class under test, initializes the log4net logger, and sets the expectations to be met
- Execute the method on the class under test and assert that all expectations were met via the ViceAppender class
First let’s look at the actual test code that would do the work for us:
```csharp
[TestFixture]
public class EmployeeTests
{
    private ViceAppender _viceAppender;

    [TestFixtureSetUp]
    public void TestFixtureSetUp()
    {
        _viceAppender = new ViceAppender(new log4net.Layout.SimpleLayout());
        log4net.Config.BasicConfigurator.Configure(_viceAppender);
    }

    [Test]
    public void employee_is_paid_expected_amount()
    {
        _viceAppender.AddVerification("deduction total=" + 70m);
        _viceAppender.AddVerification("salary=" + 330m);

        // deductions summing to 70; the specific values here are assumed
        IList<decimal> _deductions = new List<decimal> { 20m, 50m };

        var employee = new Employee(400m, _deductions);
        employee.Pay();

        Assert.IsTrue(_viceAppender.Verify(), "Expectations not met");
    }
}
```
In the example you can see that I am initializing a log4net logger and setting some configuration on it. Then you set up verifications on the ViceAppender class that match the logging statements in your class under test, along with the expected internal state that will result from the input.
Here is the Pay method from the Employee class:
```csharp
private ILog _log = LogManager.GetLogger(typeof(Employee));

public void Pay()
{
    // bunch of code outside the scope of this test

    foreach (var deduction in _deductions)
        _salaryPerWeek -= deduction;

    _log.Info("deduction total=" + _deductions.Sum());
    _log.Info("salary=" + _salaryPerWeek);

    // more code we don't want to test right now
}
```
Now that we have the expected output, you can assert on what was dumped to the log and determine whether your tests pass. This is achieved pretty easily with a simple Expectation class and a derived Appender class called ViceAppender. The Verify method iterates over the verifications, using the logged message as a key, and checks whether each expected message actually reached the logger. It returns a boolean indicating whether all expectations were met during the test run. Because the message is used as the key on a dictionary, it needs to match exactly in order to satisfy an expectation.
```csharp
public bool Verify()
{
    bool result = true;
    foreach (string key in _verifications.Keys)
    {
        if (!_verifications[key].WasCalled)
            result = false;
    }
    return result;
}
```
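The Expectation class and the appender’s Append override aren’t shown in the post, but based on the description above, a minimal sketch might look like the following. Everything beyond the names AddVerification, Verify, and WasCalled is my assumption; in the real utility this logic would live in a subclass of log4net’s AppenderSkeleton, with OnMessageLogged being the body of the overridden Append(LoggingEvent) method.

```csharp
using System.Collections.Generic;

// A minimal guess at the Expectation class the post refers to:
// it only needs to remember whether its message was ever logged.
public class Expectation
{
    public bool WasCalled { get; set; }
}

// Sketch of the verification half of ViceAppender (log4net wiring omitted).
public class ViceAppenderSketch
{
    private readonly IDictionary<string, Expectation> _verifications =
        new Dictionary<string, Expectation>();

    public void AddVerification(string message)
    {
        _verifications[message] = new Expectation();
    }

    // Called for every message that reaches the appender; the rendered
    // message is the dictionary key, so it must match exactly.
    public void OnMessageLogged(string renderedMessage)
    {
        Expectation expectation;
        if (_verifications.TryGetValue(renderedMessage, out expectation))
            expectation.WasCalled = true;
    }

    public bool Verify()
    {
        bool result = true;
        foreach (string key in _verifications.Keys)
        {
            if (!_verifications[key].WasCalled)
                result = false;
        }
        return result;
    }
}
```

Wired into log4net, OnMessageLogged would receive loggingEvent.RenderedMessage from the Append override, so every _log.Info call in the production code flows through the exact-match dictionary lookup.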
Also on the ViceAppender I have the PrintExpectations() method that outputs the expectations to the Console:
```csharp
public void PrintExpectations()
{
    foreach (string message in _verifications.Keys)
    {
        var sb = new StringBuilder();
        sb.Append("Logger called '")
          .Append(message)
          .Append("'. Was Called? '")
          .Append(_verifications[message].WasCalled)
          .Append("'");
        Console.WriteLine(sb.ToString());
    }
}
```
Produces the following output:
```
Logger called 'deduction total=70'. Was Called? 'True'
Logger called 'salary=330'. Was Called? 'True'
```
The benefit of this type of testing is that almost no modification of the class under test needs to take place. All you add are logging statements and a field for getting at the ILog instance. This gives you a solid foundation to begin refactoring.
This could be viewed as integration testing as much as unit testing, because your class under test may leak into other portions of the code. Keep the scope of each test as narrow as possible to combat that problem. This is a good place to start, but use it wisely. It’s not meant to be a long-term solution and can be abused just like any other technique.
After typing out some code and playing with it for a while, I realized you could get this to work without even using a logging utility, by just making one yourself. The Java example on InfoQ used the Java equivalent of log4net, so I did the same here. One other thing I left out: Ian’s InfoQ article also had the ability to fail on log messages that occurred when not expected. While this is a good feature, I decided it was a little overkill for a simple example and left it out for brevity.
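To illustrate the logger-free variant, here is a hedged sketch of what rolling your own might look like: the production code writes its internal state to a tiny static recorder instead of log4net, and the test inspects it afterwards. All of the names here (StateRecorder, Info, WasLogged, Reset) are hypothetical, not from the original article or utility.

```csharp
using System.Collections.Generic;

// Hypothetical stand-in for log4net: production code calls Info(),
// tests check what was recorded. No external library required.
public static class StateRecorder
{
    private static readonly List<string> _messages = new List<string>();

    // Drop-in replacement for _log.Info(...) in the code under test.
    public static void Info(string message)
    {
        _messages.Add(message);
    }

    // Exact-match check, mirroring the dictionary-key matching above.
    public static bool WasLogged(string expected)
    {
        return _messages.Contains(expected);
    }

    // Call between tests so recordings don't bleed across test cases.
    public static void Reset()
    {
        _messages.Clear();
    }
}
```

In this variant, Pay would call StateRecorder.Info("salary=" + _salaryPerWeek) instead of _log.Info, and the test would assert StateRecorder.WasLogged("salary=330"). Once the refactoring is done and real unit tests are in place, the recorder calls come back out, just like the logging statements would.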