Effective Tests: Introduction


This is the first installment of a series discussing topics surrounding effective automated testing. Automated testing can be instrumental to both the short- and long-term success of a project. Unfortunately, it is too often overlooked, due either to a lack of knowledge of how to incorporate tests into an existing process or to a lack of recognition of the deficiencies within an existing process. As with any new pursuit, learning how to use automated testing effectively within a development process can take time. The goal of this series is to help those new to the practice of automated testing by gradually introducing concepts which aid in the creation of working, maintainable software that matters.

This introduction will start things off by discussing some of the fundamental types of automated tests.


Unit Testing

Unit tests are perhaps the most widely recognized form of test automation. A Unit Test is a test which validates behavior within an application in isolation from any of its dependencies. Unit tests are typically written in the same language as the software being tested, take the form of a method or class designed for a particular testing framework (such as JUnit, NUnit, or MSTest), and are generally designed to validate the behavior of individual methods or functions within an application. The behavior of a component being tested can be isolated from the behavior of its dependencies using substitute dependencies known as Test Doubles. By testing each component’s behavior in isolation, failing tests can be used to more readily identify which components are causing a regression in behavior.
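To illustrate, here is a minimal sketch of a unit test written with NUnit. The InvoiceCalculator, ITaxRateProvider, and StubTaxRateProvider types are invented for this example; the hand-rolled stub serves as the Test Double which isolates the calculator from any real tax rate service:

using NUnit.Framework;

// Hypothetical dependency contract (invented for this example)
public interface ITaxRateProvider
{
    decimal GetRate(string region);
}

// Hypothetical component under test
public class InvoiceCalculator
{
    private readonly ITaxRateProvider _taxRates;

    public InvoiceCalculator(ITaxRateProvider taxRates)
    {
        _taxRates = taxRates;
    }

    public decimal Total(decimal subtotal, string region)
    {
        return subtotal + (subtotal * _taxRates.GetRate(region));
    }
}

// A Test Double standing in for the real tax rate service
public class StubTaxRateProvider : ITaxRateProvider
{
    public decimal GetRate(string region)
    {
        return 0.10m;
    }
}

[TestFixture]
public class InvoiceCalculatorTests
{
    [Test]
    public void Total_applies_the_tax_rate_to_the_subtotal()
    {
        var calculator = new InvoiceCalculator(new StubTaxRateProvider());

        decimal total = calculator.Total(100m, "TX");

        Assert.AreEqual(110m, total);
    }
}

Because the stub always returns a known rate, a failure of this test points directly at InvoiceCalculator rather than at the behavior of one of its collaborators.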

The primary goal of traditional unit testing is verification. By establishing a suite of unit tests, the software can be tested each time modifications are made to ensure it still behaves as expected. To ensure that existing tests are always run when modifications are introduced, the tests can also be run at regular intervals or triggered as part of a check-in policy using a process known as Continuous Integration.


Integration Testing

While unit tests are useful for verifying that the encapsulated behavior of a component works as expected within a known context, they often fall short of anticipating how the component will interact with other components used by the system. Tests which verify that components behave as expected with all, or a subset of, their real dependencies are often categorized as Integration Tests. Of particular interest are interactions with third-party libraries and external resources such as file systems, databases, and network services. This is because the behavior of such dependencies may not be fully known or controlled by the consuming development team, or may change in unexpected ways when new versions are introduced.
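As a sketch of what such a test might look like, the following example (the FileAuditLog type is invented for this example) exercises a component against its real file system dependency rather than a Test Double:

using System;
using System.IO;
using NUnit.Framework;

// Hypothetical component which depends upon the real file system
public class FileAuditLog
{
    private readonly string _path;

    public FileAuditLog(string path)
    {
        _path = path;
    }

    public void Record(string entry)
    {
        File.AppendAllText(_path, entry + Environment.NewLine);
    }

    public string[] ReadAll()
    {
        return File.ReadAllLines(_path);
    }
}

[TestFixture]
public class FileAuditLogIntegrationTests
{
    private string _path;

    [SetUp]
    public void CreateTempFile()
    {
        _path = Path.GetTempFileName();
    }

    [TearDown]
    public void DeleteTempFile()
    {
        File.Delete(_path);
    }

    [Test]
    public void Entries_written_to_disk_can_be_read_back()
    {
        var log = new FileAuditLog(_path);

        log.Record("user logged in");

        Assert.AreEqual("user logged in", log.ReadAll()[0]);
    }
}

Note the setup and teardown required to manage the external resource; this sort of housekeeping is typical of integration tests and contributes to their slower execution.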

Integration tests often require more setup and/or rely upon communication with external processes, and are therefore usually slower than unit test suites. For this reason, separate strategies are often used to ensure regular feedback from integration tests. In some cases, slow-running tests can be mitigated by the use of “almost-real” substitutes, such as in-memory file systems and databases, which are known to adequately represent the functionality expected of the real dependencies.
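As a sketch of the in-memory database approach, the following test uses SQLite’s “:memory:” mode in place of a real database server (this assumes the Microsoft.Data.Sqlite package; the table and SQL are invented for the example):

using Microsoft.Data.Sqlite;
using NUnit.Framework;

[TestFixture]
public class CustomerSqlTests
{
    [Test]
    public void Inserted_rows_can_be_queried_back()
    {
        // ":memory:" creates a throw-away database which lives only
        // as long as the connection remains open
        using (var connection = new SqliteConnection("Data Source=:memory:"))
        {
            connection.Open();

            var create = connection.CreateCommand();
            create.CommandText = "CREATE TABLE Customers (Name TEXT)";
            create.ExecuteNonQuery();

            var insert = connection.CreateCommand();
            insert.CommandText = "INSERT INTO Customers (Name) VALUES ('Ada')";
            insert.ExecuteNonQuery();

            var query = connection.CreateCommand();
            query.CommandText = "SELECT Name FROM Customers";

            Assert.AreEqual("Ada", (string)query.ExecuteScalar());
        }
    }
}

Because the database lives entirely in memory and disappears when the connection closes, such tests avoid both the speed penalty and the cleanup burden of a real database server.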

While the term “integration test” is often applied to any test verifying the behavior of collaborating components, it can be useful for test organization and the development of testing strategies to draw a distinction between tests which verify integration with disparate systems and those which verify that a collection of classes correctly provides a logical service. In the book xUnit Test Patterns: Refactoring Test Code, Gerard Meszaros refers to tests for such collaborating classes as Component Tests. While both test the interaction of multiple components, Component Tests ask the question “Does this logical service perform the expected behavior?” while Integration Tests ask the question “Do these components work together?”
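To make the distinction concrete, the following sketch (all types are invented for this example) is a Component Test: every collaborator is a real, in-process class, so the test verifies a logical service without crossing any system boundary:

using NUnit.Framework;

// Hypothetical collaborating classes making up one logical service
public class DiscountPolicy
{
    public decimal Apply(decimal subtotal)
    {
        return subtotal >= 100m ? subtotal * 0.9m : subtotal;
    }
}

public class OrderService
{
    private readonly DiscountPolicy _discounts;

    public OrderService(DiscountPolicy discounts)
    {
        _discounts = discounts;
    }

    public decimal PriceOrder(decimal subtotal)
    {
        return _discounts.Apply(subtotal);
    }
}

[TestFixture]
public class OrderServiceComponentTests
{
    [Test]
    public void Orders_of_one_hundred_or_more_receive_a_ten_percent_discount()
    {
        // Real collaborators, no Test Doubles: the subject under test
        // is the behavior of the logical service as a whole
        var service = new OrderService(new DiscountPolicy());

        Assert.AreEqual(90m, service.PriceOrder(100m));
    }
}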


Acceptance Testing

Acceptance tests, or end-to-end tests, verify that particular use cases of the system work as a whole. Such tests are characterized by a focus on how the system will be used by the customer, exercising as much of the real system as possible. While finer-grained tests such as unit and component tests help ensure the functional integrity of the individual components, acceptance testing ensures that the components function correctly together.
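As an illustration, the following sketch drives a hypothetical login page through a real browser using Selenium WebDriver and NUnit (the URL and element ids are invented for the example, and the application is assumed to be running locally):

using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
public class LoginAcceptanceTests
{
    private IWebDriver _browser;

    [SetUp]
    public void LaunchBrowser()
    {
        _browser = new ChromeDriver();
    }

    [TearDown]
    public void CloseBrowser()
    {
        _browser.Quit();
    }

    [Test]
    public void A_registered_user_can_log_in()
    {
        // Exercises the real user interface from the customer's
        // point of view, end to end
        _browser.Navigate().GoToUrl("http://localhost:8080/login");
        _browser.FindElement(By.Id("userName")).SendKeys("derek");
        _browser.FindElement(By.Id("password")).SendKeys("secret");
        _browser.FindElement(By.Id("logIn")).Click();

        StringAssert.Contains("Welcome", _browser.PageSource);
    }
}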

Although the purpose of acceptance testing is to verify that the system works as a whole, it may still be necessary in some cases to substitute portions of the system where full end-to-end testing isn’t cost-effective or practical. For instance, some external services may not provide integration testing environments, or may place limits on their use. In other cases, the user interface technology used may not lend itself to test automation, in which case tests may be written against a layer just below the user interface layer. This is referred to as Subcutaneous Testing.
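The following sketch illustrates the subcutaneous approach: rather than automating the user interface, the test exercises a hypothetical RegistrationService representing the application layer the UI would otherwise call:

using System.Collections.Generic;
using NUnit.Framework;

// Hypothetical application service sitting just below the UI layer
public class RegistrationService
{
    private readonly List<string> _members = new List<string>();

    public bool Register(string userName)
    {
        if (_members.Contains(userName))
            return false;

        _members.Add(userName);
        return true;
    }
}

[TestFixture]
public class RegistrationSubcutaneousTests
{
    [Test]
    public void Duplicate_registrations_are_rejected()
    {
        // Drives the same entry point the UI would call,
        // bypassing the UI technology itself
        var service = new RegistrationService();
        service.Register("derek");

        Assert.IsFalse(service.Register("derek"));
    }
}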


Classifications

The terms unit test, integration test, and acceptance test classify tests in terms of their utility. Another way to classify tests is in terms of their audience. In Extreme Programming (XP), tests are broken down into the categories of Customer Tests and Programmer Tests. Customer Tests are synonymous with User Acceptance Tests and are focused on the external quality of the system. Programmer Tests are similar to Unit Tests in that they are written by programmers and are generally focused on the internal quality of the system, but they are less prescriptive about their level of test isolation. Programmer Tests will be discussed in more detail later in our series.


Conclusion

This article presents only a brief introduction to some of the classifications of automated tests. We’ll continue to explore these and others throughout the series. Next time, we’ll take a look at a traditional approach to writing unit tests.
