I notice there’s a lot of confusion among people who are just starting to explore automated testing for applications, whether they use TDD or not. There are many kinds of tests, the terms get thrown around a lot, and I’ve seen this confuse many developers.
So here’s an overview of the most common types of testing and what their goals are.

Unit testing

As the name already states, a unit test tests a single unit. What’s a unit? The smallest thing you could test; in object orientation, that would be a method.
It’s common to create a fixture per class, and one or more tests per method. You’ll probably create one test for the happy path, and several tests for the failure cases.

You should always start out with unit tests. If they’re failing, there is no use in starting with other types of tests. It’s simple: build up the dependencies you need in the wanted state (valid or invalid, depending on the goal of the test), perform the operation you’re testing, and verify that the result is correct.
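A minimal sketch of that build/perform/verify flow, using Python’s unittest purely for illustration (the OrderCalculator class is made up):

```python
import unittest


class OrderCalculator:
    """Hypothetical unit under test: computes an order total with a discount."""

    def total(self, prices, discount):
        if not 0 <= discount <= 1:
            raise ValueError("discount must be between 0 and 1")
        return sum(prices) * (1 - discount)


class OrderCalculatorTests(unittest.TestCase):
    def test_total_applies_discount(self):
        # Build the dependencies in the wanted (valid) state
        calculator = OrderCalculator()
        # Perform the operation you're testing
        result = calculator.total([10.0, 20.0], discount=0.1)
        # Verify the result is correct
        self.assertAlmostEqual(result, 27.0)

    def test_total_rejects_invalid_discount(self):
        # Same flow, but with the input in an invalid state
        calculator = OrderCalculator()
        with self.assertRaises(ValueError):
            calculator.total([10.0], discount=2.0)


if __name__ == "__main__":
    unittest.main()
```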

Unit testing also introduces a whole new world of curious little objects such as mocks and stubs. I’ve been using Rhino.Mocks as my mocking framework for a few weeks now. Mocking is a subject that deserves a post of its own (or maybe even more). My copy of xUnit Test Patterns arrived yesterday, so you can expect some test-related posts in the future :) .
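Just to illustrate the idea (my own examples are on .NET with Rhino.Mocks, but the concept is the same everywhere), here is a tiny sketch with Python’s unittest.mock, where the mail gateway is an invented dependency we don’t want to hit in a unit test:

```python
import unittest
from unittest.mock import Mock


class OrderService:
    """Hypothetical class that depends on an external mail gateway."""

    def __init__(self, mail_gateway):
        self.mail_gateway = mail_gateway

    def confirm(self, order_id):
        # ...confirm the order, then notify someone...
        self.mail_gateway.send(f"Order {order_id} confirmed")


class OrderServiceTests(unittest.TestCase):
    def test_confirm_sends_notification(self):
        # The mock stands in for the real mail gateway
        gateway = Mock()
        service = OrderService(gateway)

        service.confirm(42)

        # Verify the interaction instead of any real side effect
        gateway.send.assert_called_once_with("Order 42 confirmed")


if __name__ == "__main__":
    unittest.main()
```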

Integration testing

I think integration testing can be summarized in the following line:
Testing of (sub)systems that interact (or integrate) with expensive or external resources

Thus, integration testing aims to test the combination of different software modules. Some examples:
- Test CRUD operations on your datastore
- Test import and export of data (system talks to the file system)
- Test systems that connect to webservices
- Test the integration of two modules with valid dependencies
- …

The typical example for demonstrating an integration test is the repository example.
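As a rough sketch of what such a repository test could look like, here is one against an in-memory SQLite database via Python’s sqlite3 module (the ProductRepository is made up; a real integration test would talk to your actual datastore):

```python
import sqlite3
import unittest


class ProductRepository:
    """Hypothetical repository wrapping a real database connection."""

    def __init__(self, connection):
        self.connection = connection

    def add(self, name):
        cursor = self.connection.execute(
            "INSERT INTO products (name) VALUES (?)", (name,))
        return cursor.lastrowid

    def get(self, product_id):
        row = self.connection.execute(
            "SELECT name FROM products WHERE id = ?", (product_id,)).fetchone()
        return row[0] if row else None


class ProductRepositoryIntegrationTests(unittest.TestCase):
    def setUp(self):
        # The "expensive" external resource; in-memory SQLite keeps the test fast
        self.connection = sqlite3.connect(":memory:")
        self.connection.execute(
            "CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT)")
        self.repository = ProductRepository(self.connection)

    def tearDown(self):
        self.connection.close()

    def test_add_and_get_product(self):
        product_id = self.repository.add("Keyboard")
        self.assertEqual(self.repository.get(product_id), "Keyboard")


if __name__ == "__main__":
    unittest.main()
```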

These tests are a lot more expensive than unit tests (since they connect to external resources or build expensive objects), but you should still try to keep them fast. In my opinion, your integration tests should run at least once a day.

Acceptance testing

Acceptance tests come in two flavours: user acceptance tests and automated acceptance tests.

If you’ve got dedicated users testing the system after each iteration, you’re doing user acceptance testing, or better said, your users are doing acceptance tests.
It’s common practice, and a good one, to deploy the application into a user acceptance environment several times during development. This has several advantages:
- Bugs are discovered sooner, and are thus easier to fix.
- Misconceptions in analysis are discovered sooner (since you’ve got user feedback)
- Usability is tested sooner
- User adaptiveness can be estimated better
- Deployment problems are discovered and solved before going to a production environment

Automated acceptance tests are just like user acceptance tests, only automated. They also perform user interface actions, like clicking buttons and filling in forms. These tests go through the whole cycle, just like a normal user would: they create a new product (for example) by clicking the “New product” menu item, fill in the data, and finally click the save button. You should also create acceptance tests that fail when required information is missing, and assert that an error message is shown. Admittedly, it feels a bit weird the first time you do this, but it has great advantages.
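A sketch of what that “New product” scenario might look like when automated with Selenium WebDriver (the URL and element ids are invented; any UI automation tool follows the same pattern):

```python
import unittest

from selenium import webdriver
from selenium.webdriver.common.by import By


class NewProductAcceptanceTests(unittest.TestCase):
    def setUp(self):
        # Drives a real browser against a deployed test environment
        self.driver = webdriver.Chrome()
        self.driver.get("http://localhost:8080/products")  # hypothetical URL

    def tearDown(self):
        self.driver.quit()

    def test_create_product_through_the_ui(self):
        driver = self.driver
        driver.find_element(By.ID, "menu-new-product").click()  # hypothetical ids
        driver.find_element(By.ID, "product-name").send_keys("Keyboard")
        driver.find_element(By.ID, "save").click()
        self.assertIn("Keyboard", driver.page_source)

    def test_missing_name_shows_error_message(self):
        driver = self.driver
        driver.find_element(By.ID, "menu-new-product").click()
        driver.find_element(By.ID, "save").click()  # save without filling anything in
        error = driver.find_element(By.ID, "error-message").text
        self.assertIn("required", error.lower())


if __name__ == "__main__":
    unittest.main()
```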

You miss a few of the advantages you get when your users test, but there are ways to minimize that. For starters, let your users write your acceptance tests (not the code, but the intent), and try to make them cover each scenario. Have them review the tests whenever they want to change functionality. Demo the application after each iteration, so they can give feedback about usability.

Smoke testing

Smoke testing can be defined as a set of tests that need to pass before new or modified functionality can be included in the entire system.
To give you a specific example, let’s consider the ordering story. Assume we’ve just added functionality to cancel an order.
The corresponding smoke tests to commit this functionality could be:
- Can I still add a new order?
- Can I still update an order?
- Can I confirm and update a canceled order (these should fail)?
- Can I still delete an order, unless canceled?
- Can I still search for orders?

You’re not testing the whole application; you’re testing only the functionality connected to what was just built or repaired.
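To make that concrete, here is a sketch of how part of such a checklist could be automated as a small test suite (the OrderService facade is invented, and this only covers the add/update/cancel checks):

```python
import unittest


class OrderService:
    """Hypothetical facade over the ordering subsystem."""

    def __init__(self):
        self.orders = {}
        self.cancelled = set()
        self.next_id = 1

    def add(self, description):
        order_id = self.next_id
        self.next_id += 1
        self.orders[order_id] = description
        return order_id

    def cancel(self, order_id):
        self.cancelled.add(order_id)

    def update(self, order_id, description):
        if order_id in self.cancelled:
            raise ValueError("cannot update a cancelled order")
        self.orders[order_id] = description


class OrderSmokeTests(unittest.TestCase):
    """High-level checks that must pass before committing the cancel feature."""

    def test_can_still_add_an_order(self):
        service = OrderService()
        self.assertIsNotNone(service.add("10 keyboards"))

    def test_can_still_update_an_order(self):
        service = OrderService()
        order_id = service.add("10 keyboards")
        service.update(order_id, "12 keyboards")

    def test_cannot_update_a_cancelled_order(self):
        service = OrderService()
        order_id = service.add("10 keyboards")
        service.cancel(order_id)
        with self.assertRaises(ValueError):
            service.update(order_id, "12 keyboards")


if __name__ == "__main__":
    unittest.main()
```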

If you’re doing continuous integration (and you should), this is actually covered by the build that starts after each commit, so smoke tests aren’t usually set up separately (at least not in my experience).

I’ve come across the use of smoke tests in legacy applications that don’t even have automated tests. No automated tests means no quality assurance, so developers had to run a set of manual smoke tests before releasing the application. They kept a list of test cases in an Excel file, each with a few steps to verify when a subsystem was changed. These were high-level tests; they didn’t cover complex functionality, but they had to pass, or you weren’t allowed to release the application for user testing.

Regression testing

Regression testing is the term used for rerunning old tests to check if any functionality was broken by added or modified functionality. The ideal way of doing this is rerunning the entire test suite (unit tests as well as integration tests) after each change to the system.
Sometimes changes are coming in too fast to keep this up, but it’s important to at least run all your unit tests after each change (and if even this seems to take a lot of time, you should try to make your tests faster).
We don’t really talk about regression testing; it’s something that happens invisibly when you’re doing automated testing, and especially when doing continuous integration :) . If you’ve got users that are testing, it’s common that they re-execute all the tests they did on the previous release, and then test the new/added functionality, so the whole system gets covered. In that case, it’s more common to explicitly talk about regression tests.
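Rerunning everything is usually a one-liner; for example, with Python’s unittest you could discover and run the whole suite like this (the tests directory name is an assumption):

```python
import unittest

# Rerun the entire suite after every change: discover everything under tests/
suite = unittest.defaultTestLoader.discover("tests")
unittest.TextTestRunner(verbosity=2).run(suite)
```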

Performance testing

There are several types of performance testing. I’m covering the most common ones with a brief description of their goals.

Load testing
How will my application behave under a load of 50 users performing 5 transactions per minute?
Load testing is applied to systems that have a specified performance requirement. If performance is an important requirement, the tests should run at least once a day, so the impact of changes on performance can be noticed immediately.
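To make the “50 users performing 5 transactions per minute” idea concrete, here is a very naive load-test sketch; the endpoint is hypothetical, and in practice you’d typically reach for a dedicated load-testing tool:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://localhost:8080/orders"  # hypothetical endpoint
USERS = 50
TRANSACTIONS_PER_MINUTE = 5


def simulate_user(user_index):
    """One user performing 5 transactions spread over a minute."""
    timings = []
    for _ in range(TRANSACTIONS_PER_MINUTE):
        start = time.time()
        urlopen(URL).read()
        elapsed = time.time() - start
        timings.append(elapsed)
        # Pace the requests so they are spread over the minute
        time.sleep(max(0.0, 60 / TRANSACTIONS_PER_MINUTE - elapsed))
    return timings


if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=USERS) as pool:
        results = list(pool.map(simulate_user, range(USERS)))
    all_timings = [t for user_timings in results for t in user_timings]
    print(f"average response time: {sum(all_timings) / len(all_timings):.3f}s")
```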

Stress testing
How much load can my application carry?
This type of testing is usually applied to check at what load the application will crash. It’s good to know the maximum load your app can carry. If it’s very close to the performance requirement set by the customer, it’s time to do some serious profiling :) .

Endurance testing
Will my application be able to carry 50 users even after running x hours-days-…?
Endurance tests are used to evaluate if your application is able to run under a normal work load (users and transactions) during a prolonged time.

Wrapup

This post turned out to be longer than I thought, but I think it gives you a nice overview of what each test type intends to do. If I forgot anything, feel free to add!
