
Test types and Continuous Integration - 02/23/09

Martin Fowler defines continuous integration as follows:

“Continuous Integration is a software development practice where members of a team integrate their work frequently, usually each person integrates at least daily – leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible. Many teams find that this approach leads to significantly reduced integration problems and allows a team to develop cohesive software more rapidly.”

I’m writing this post as a follow-up to my previous one, types of testing. I’ll talk about how each type of test fits into a continuous integration process.

Introduction

You can read all about continuous integration in Martin Fowler’s paper here. A nice addition (and one that’s lying on my bookshelf, like many others) is the book Continuous Integration: Improving Software Quality and Reducing Risk, by Paul Duvall, Steve Matyas and Andrew Glover.

I’ll be talking about two types of builds. I’ll refer to them as the commit build and the secondary build. The primary-stage build (aka the commit build) automatically runs whenever someone commits changes to the repository (see Every build should build the mainline on an integration machine). When any test in this build fails, the build fails. This is a show-stopper: the broken tests must be repaired as soon as possible to fix the build. The secondary-stage build is a build that runs whenever possible; in my opinion, at least once a day. It can be run manually, or it can run as a nightly build from a script that grabs the latest executables and runs the specific test suites. If this build fails, developers can carry on working. Don’t get me wrong, this build has to be fixed too, but it doesn’t have the same priority as a broken commit build.
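To make the split concrete, here’s a minimal sketch of how it could look in a .NET project, assuming NUnit is used; the OrderCalculator class and the "Integration" category name are made up for the example. Slow tests get a category, so the commit build can exclude them and the secondary build can include them explicitly (with the NUnit console runner that would be the /exclude:Integration and /include:Integration switches, if I remember correctly).

```csharp
using System.Collections.Generic;
using NUnit.Framework;

// Fast, isolated test: no category, so the commit build picks it up by default.
[TestFixture]
public class OrderCalculatorTests
{
    [Test]
    public void Total_Is_The_Sum_Of_The_Line_Amounts()
    {
        Assert.AreEqual(30m, new OrderCalculator().Total(new[] { 10m, 20m }));
    }
}

// Slow test: tagged so the commit build can exclude it and the
// secondary (nightly) build can include it explicitly.
[TestFixture, Category("Integration")]
public class OrderRepositoryTests
{
    [Test]
    public void Can_Load_An_Order_From_The_Database()
    {
        // ...talks to a real test database, see the integration tests below...
    }
}

// Tiny made-up class so the sketch compiles on its own.
public class OrderCalculator
{
    public decimal Total(IEnumerable<decimal> lineAmounts)
    {
        decimal sum = 0m;
        foreach (decimal amount in lineAmounts)
            sum += amount;
        return sum;
    }
}
```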

Unit tests

Unit tests are the most important part of your continuous integration process (in the sense that these tests are run the most). After each commit to the repository, the build executes all unit tests. Your unit tests should run within the commit build and make the build fail if any test fails.

It’s very important to keep these tests focused, and especially fast. You must realize that each commit will execute the tests, and it’s important to have immediate feedback. You can’t be waiting half an hour just to commit some changes, right?! That’s why unit tests use test double patterns (use a test double for each of the SUT’s expensive dependencies). I’ve only read a few pages in Meszaros’ book, but I know it contains a chapter that covers these patterns (can’t wait to get there!).
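As an illustration, here’s a minimal sketch of such a unit test, assuming NUnit and Rhino.Mocks; IOrderRepository, Order and OrderService are hypothetical names, not from any real project. The repository, which would normally hit the database, is replaced by a stub so the test stays fast:

```csharp
using NUnit.Framework;
using Rhino.Mocks;

// Hypothetical, expensive dependency that would normally hit the database.
public interface IOrderRepository
{
    Order GetById(int id);
}

public class Order
{
    public int Id { get; set; }
    public bool Cancelled { get; set; }
}

// Hypothetical system under test.
public class OrderService
{
    private readonly IOrderRepository repository;

    public OrderService(IOrderRepository repository)
    {
        this.repository = repository;
    }

    public bool CanCancel(int orderId)
    {
        var order = repository.GetById(orderId);
        return order != null && !order.Cancelled;
    }
}

[TestFixture]
public class OrderServiceTests
{
    [Test]
    public void CanCancel_Returns_False_For_An_Already_Cancelled_Order()
    {
        // Arrange: replace the expensive repository with a stub.
        var repository = MockRepository.GenerateStub<IOrderRepository>();
        repository.Stub(r => r.GetById(1)).Return(new Order { Id = 1, Cancelled = true });
        var service = new OrderService(repository);

        // Act
        bool result = service.CanCancel(1);

        // Assert
        Assert.IsFalse(result);
    }
}
```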

Integration tests

Integration tests run within the secondary build. These tests are normally slower than unit tests since they test the integration of several components, and thus use (and set up) actual dependencies. That makes them slower, but we should still try to keep them relatively fast. Running these tests is also very important, but since it’s an expensive operation, we do it far less often than running the unit tests. In my opinion, they should run at least once a day. These tests normally include testing access to your database, so I try to run them after each database change, for example. If they fail, you’ve probably broken an NHibernate mapping, a typed DataSet, or some code using an ugly magic string somewhere. My rule is: run them at least once a day, and every time you’ve made a change that directly affects the integration of your code with an external component.
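For example, a persistence test against a real test database could look roughly like this; a minimal sketch, assuming NUnit and NHibernate, where the Product entity, its mapping file and the hibernate.cfg.xml pointing at a dedicated test database are all assumptions made for the example:

```csharp
using NHibernate;
using NHibernate.Cfg;
using NUnit.Framework;

[TestFixture]
public class ProductPersistenceTests
{
    private ISessionFactory sessionFactory;

    [TestFixtureSetUp]
    public void FixtureSetUp()
    {
        // Reads hibernate.cfg.xml, which should point at a dedicated test database.
        sessionFactory = new Configuration().Configure().BuildSessionFactory();
    }

    [Test]
    public void Can_Save_And_Reload_A_Product()
    {
        object id;

        using (ISession session = sessionFactory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            id = session.Save(new Product { Name = "Keyboard", Price = 25m });
            tx.Commit();
        }

        // Use a fresh session so we really hit the database instead of the first-level cache.
        using (ISession session = sessionFactory.OpenSession())
        {
            var reloaded = session.Get<Product>(id);
            Assert.AreEqual("Keyboard", reloaded.Name);
        }
    }
}

// Hypothetical mapped entity; the .hbm.xml mapping is omitted here.
public class Product
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
    public virtual decimal Price { get; set; }
}
```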

Acceptance testing

If you’re using automated acceptance testing, these tests can also be executed automatically within your integration process. I think it’s a good habit to run these tests daily, but it can be very annoying while you’re developing your user interface. Whenever you need to add some textbox somewhere, you’ll have some failing tests (hopefully, remember TDD). In that case, I tend to keep the general rule of having them all pass at the end of the iteration; that’s the final deadline. If you choose to do so, it might be a good idea to set up a third build, or to just run them manually as part of your iteration (a bit like regression tests in this sense). If you just run them at the same level as your integration tests, your secondary build will be failing during the whole iteration, which is not a good thing.

If you’re doing user acceptance testing, you should have your CI process deploy your application to the UAT-environment automatically (we do this after each iteration).

Performance testing

I’ve heard of projects where the secondary build also includes performance tests. Usually I don’t think this is necessary, except in applications where performance is absolutely critical. If a certain level of performance is a requirement, including these tests in your continuous integration process gives you the advantage of constant feedback, and makes it easier to identify which part of your code might contain a memory leak and needs some investigation or rolling back.

I’d use these rules to make up my mind:
1) Do I really need performance tests?
2) Do I really need constant feedback on my application’s performance?
3) Can I have these tests executed by an independent build (not in the commit build, nor in the secondary build)?

Smoke testing and regression testing

I have left these two types of tests from my previous post out of the list above, because in the long run they are just unit tests, integration tests, acceptance tests or performance tests. The difference in naming basically comes from when they are executed. And in a continuous integration process, that would be during the commit build, or during the secondary build (or any other build), depending on the type of test :D .

Wrapup

I think this post gives a nice overview of what tests to put in what build within a continuous integration process. Maybe this approach isn’t the best one, so if you’ve got any other ideas, be sure to leave them in the comments :) .

Types of testing - 02/18/09

I notice that there’s a lot of confusion among people who are just starting to explore automated testing for applications, whether they’re using TDD or not. There are many kinds of tests, all the terms are used extensively, and I’ve seen this cause confusion for many developers.
So here’s an overview of the most common types of testing and what their goals are.

Unit testing

As the name already states, a unit test tests a single unit. What’s a unit? The smallest thing you can test; in object orientation, that would be a method.
It’s common to create a fixture per class, and one or more tests per method. You’ll probably create one test for the valid, expected path, and several tests for failing scenarios.

You should always start out with unit tests. If they’re failing, there’s no use starting with other types of tests. It’s simple: build up the dependencies you need in the wanted state (valid or invalid, depending on the goal of the test), perform the operation you’re testing, and verify that the result is correct.
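Put into code, a unit test following that build up / perform / verify flow could look like this; a minimal sketch assuming NUnit, where the PriceCalculator class is made up for the example:

```csharp
using System;
using NUnit.Framework;

// Hypothetical class under test.
public class PriceCalculator
{
    public decimal ApplyDiscount(decimal price, int percentage)
    {
        if (percentage < 0 || percentage > 100)
            throw new ArgumentOutOfRangeException("percentage");
        return price - (price * percentage / 100m);
    }
}

[TestFixture]
public class PriceCalculatorTests
{
    [Test]
    public void ApplyDiscount_Subtracts_The_Given_Percentage()
    {
        // Arrange: set up the object in the wanted state.
        var calculator = new PriceCalculator();

        // Act: perform the operation under test.
        decimal result = calculator.ApplyDiscount(100m, 10);

        // Assert: verify the result is correct.
        Assert.AreEqual(90m, result);
    }

    // NUnit 2.x style exception assertion for a failing scenario.
    [Test, ExpectedException(typeof(ArgumentOutOfRangeException))]
    public void ApplyDiscount_Refuses_A_Negative_Percentage()
    {
        new PriceCalculator().ApplyDiscount(100m, -5);
    }
}
```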

Unit testing also introduces a whole new world of curious little objects such as mocks and stubs. I’ve been using Rhino.Mocks as my mocking framework for a few weeks now. Mocking is a subject that deserves a post on its own (or maybe even more). My xUnit Test Patterns book arrived yesterday, so you can expect some test-posts in the future :) .
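To give a quick taste of the difference (a minimal sketch assuming NUnit and Rhino.Mocks; IMailSender and WelcomeMailer are made-up names): a stub just returns canned values, as in the example above, while a mock lets you verify afterwards that an interaction really happened:

```csharp
using NUnit.Framework;
using Rhino.Mocks;

// Hypothetical outgoing dependency.
public interface IMailSender
{
    void Send(string to, string subject);
}

// Hypothetical system under test.
public class WelcomeMailer
{
    private readonly IMailSender sender;

    public WelcomeMailer(IMailSender sender)
    {
        this.sender = sender;
    }

    public void Welcome(string emailAddress)
    {
        sender.Send(emailAddress, "Welcome!");
    }
}

[TestFixture]
public class WelcomeMailerTests
{
    [Test]
    public void Welcome_Sends_A_Mail_To_The_New_User()
    {
        // A mock is used for behaviour verification: we check the interaction itself.
        var sender = MockRepository.GenerateMock<IMailSender>();
        var mailer = new WelcomeMailer(sender);

        mailer.Welcome("user@example.com");

        sender.AssertWasCalled(s => s.Send("user@example.com", "Welcome!"));
    }
}
```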

Integration testing

I think integration testing can be summarized in the following line:
Testing of (sub)systems that interact (or integrate) with expensive or external resources

Thus, integration testing aims to test the combination of different software modules. Some examples:
- Test CRUD operations on your datastore
- Test import and export of data (system talks to the file system)
- Test systems that connect to webservices
- Test the integration of two modules with valid dependencies
- …

The typical example for demonstrating an integration test is the repository example.

These tests are a lot more expensive than unit tests (since they connect to external resources, or build expensive objects), but you should still try to keep them fast. In my opinion, your integration tests should run -at least- once a day.

Acceptance testing

Acceptance tests come in two flavours: user acceptance tests and automated acceptance tests.

If you’ve got dedicated users testing the system after each iteration, you’d be doing user acceptance testing, or better said, your users would be doing acceptance tests.
It’s a common best practice to deploy an application into a user acceptance environment several times during the development process. This has several advantages:
- Bugs are discovered sooner, and thus are easier to fix
- Misconceptions in analysis are discovered sooner (since you’ve got user feedback)
- Usability is tested sooner
- How well users will adapt to the application can be estimated better
- Deployment problems are discovered and solved before going to a production environment

Automated acceptance tests are just like user acceptance tests, only automated. They also perform user interface actions, like clicking buttons and filling in data in forms. These tests go through the whole cycle, just like a normal user would. They create a new product (for example) by clicking the “New product” menu item, they fill in the data, and finally click the save button. You should also create acceptance tests that fail when required information is missing, and assert that you’re showing an error message. I’ll admit it feels a bit weird the first time you do this, but it has great advantages.
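Here’s a sketch of what such a test could look like, assuming the Selenium RC .NET client is used to drive a browser; the URLs, element locators and messages are all made up for the example, and in a desktop application you’d use a UI automation library instead:

```csharp
using NUnit.Framework;
using Selenium; // Selenium RC .NET client

[TestFixture]
public class NewProductAcceptanceTests
{
    private ISelenium selenium;

    [SetUp]
    public void StartBrowser()
    {
        // Assumes a Selenium RC server running locally and the application deployed
        // to a test environment; host, port and URL are made up for this sketch.
        selenium = new DefaultSelenium("localhost", 4444, "*firefox", "http://localhost/myapp");
        selenium.Start();
    }

    [TearDown]
    public void StopBrowser()
    {
        selenium.Stop();
    }

    [Test]
    public void Creating_A_Product_Shows_It_In_The_Product_List()
    {
        selenium.Open("/products");
        selenium.Click("link=New product"); // hypothetical locators
        selenium.Type("Name", "Keyboard");
        selenium.Type("Price", "25");
        selenium.Click("Save");
        selenium.WaitForPageToLoad("30000");

        Assert.IsTrue(selenium.IsTextPresent("Keyboard"));
    }

    [Test]
    public void Saving_Without_A_Name_Shows_A_Validation_Error()
    {
        selenium.Open("/products");
        selenium.Click("link=New product");
        selenium.Type("Price", "25");
        selenium.Click("Save");
        selenium.WaitForPageToLoad("30000");

        Assert.IsTrue(selenium.IsTextPresent("Name is required"));
    }
}
```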

You miss a few of the advantages you get when your users test, but there are ways to minimize that loss. For starters, let your users write your acceptance tests (not the code, but their intent), and try to make them cover each scenario. Have them review the tests whenever they want to change functionality. Demo the application after each iteration, so they can give feedback about usability.

Smoke testing

Smoke testing can be defined as a set of tests that need to pass in order to include the new or modified functionality into the entire system.
To give you a specific example, let’s consider the ordering story. Assume we’ve just added functionality to cancel an order.
The corresponding smoke tests to commit this functionality could be:
- Can I still add a new order?
- Can I still update an order?
- Can I confirm and update a canceled order (these should fail)?
- Can I still delete an order, unless canceled?
- Can I still search for orders?

You’re not testing the whole application; you’re testing only the functionality that is connected to what was just built or repaired.
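In code, a smoke suite for that story could be a small fixture mirroring the checks above; a minimal sketch assuming NUnit, with a made-up in-memory OrderBook standing in for the real order functionality so the example compiles on its own:

```csharp
using System;
using System.Collections.Generic;
using NUnit.Framework;

// Hypothetical, in-memory stand-in for the real order functionality.
public class OrderBook
{
    private readonly Dictionary<int, string> orders = new Dictionary<int, string>();
    private readonly HashSet<int> cancelled = new HashSet<int>();
    private int nextId = 1;

    public int Add(string description) { orders[nextId] = description; return nextId++; }
    public void Cancel(int id) { cancelled.Add(id); }
    public bool Exists(int id) { return orders.ContainsKey(id); }

    public void Confirm(int id)
    {
        if (cancelled.Contains(id))
            throw new InvalidOperationException("Order is cancelled.");
    }
}

[TestFixture]
public class OrderCancellationSmokeTests
{
    [Test]
    public void Can_Still_Add_A_New_Order()
    {
        var orders = new OrderBook();
        Assert.IsTrue(orders.Exists(orders.Add("Keyboard")));
    }

    [Test, ExpectedException(typeof(InvalidOperationException))]
    public void Confirming_A_Cancelled_Order_Fails()
    {
        var orders = new OrderBook();
        int id = orders.Add("Keyboard");
        orders.Cancel(id);
        orders.Confirm(id);
    }

    // ...and so on for updating, deleting and searching orders.
}
```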

If you’re doing continuous integration (and you should), this is actually covered by the build that starts after each commit, so smoke tests aren’t specifically set up (at least not in my experience).

I’ve come across the use of smoke tests in legacy applications that don’t even have automated tests. No automated tests means no quality assurance. That’s why developers had to run a set of manual smoke tests before releasing the application. They stored a list of test cases in the form of an Excel file containing a few steps to verify when a subsystem was changed. These were high-level tests; they didn’t test complex functionality, but they had to pass, or you weren’t allowed to release the application for user testing.

Regression testing

Regression testing is the term used for rerunning old tests to check whether any functionality was broken by added or modified functionality. The ideal way of doing this is rerunning the entire test suite (unit tests as well as integration tests) after each change to the system.
Sometimes changes come in too fast to keep this up, but it’s important to at least run all your unit tests after each change (and if even that seems to take a lot of time, you should try to make your tests faster).
We don’t really talk about regression testing; it’s something that’s invisible when you’re doing automated testing, and especially when doing continuous integration :) . If you’ve got users that are testing, it’s common that they re-execute all the tests they did for the previous release, and finally test the new/added functionality, to cover the whole system. In that case, it’s more common to explicitly talk about regression tests.

Performance testing

There are several types of performance testing. I’m covering the most common ones with a brief description of their goals.

Load testing
How will my application behave under a load of 50 users performing 5 transactions per minute?
Load testing is applied to systems that have a specified performance requirement. If performance is an important requirement, the tests should run at least once a day, so the impact of changes on performance can be noticed immediately.
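To give an idea of what an automated load test could look like in code, here’s a minimal sketch assuming NUnit; the 50 users, 2 second limit and the SearchOrders call are all made up, and a real test would call into the deployed application instead of sleeping:

```csharp
using System;
using System.Diagnostics;
using System.Threading;
using NUnit.Framework;

[TestFixture]
public class OrderSearchLoadTests
{
    // Hypothetical requirement: 50 concurrent users, each search answered within 2 seconds.
    private const int Users = 50;
    private const int SearchesPerUser = 5;
    private static readonly TimeSpan MaxDuration = TimeSpan.FromSeconds(2);

    [Test]
    public void Fifty_Concurrent_Users_Get_Search_Results_Within_Two_Seconds()
    {
        TimeSpan slowest = TimeSpan.Zero;
        object slowestLock = new object();
        var threads = new Thread[Users];

        for (int i = 0; i < Users; i++)
        {
            threads[i] = new Thread(() =>
            {
                for (int j = 0; j < SearchesPerUser; j++)
                {
                    var watch = Stopwatch.StartNew();
                    SearchOrders("keyboard"); // hypothetical call into the system under test
                    watch.Stop();

                    lock (slowestLock)
                    {
                        if (watch.Elapsed > slowest) slowest = watch.Elapsed;
                    }
                }
            });
            threads[i].Start();
        }

        foreach (var thread in threads) thread.Join();

        Assert.IsTrue(slowest <= MaxDuration, "Slowest search took " + slowest);
    }

    private static void SearchOrders(string term)
    {
        // In a real load test this would call a service or web page of the application.
        Thread.Sleep(10); // placeholder so the sketch runs on its own
    }
}
```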

Stress testing
How much load can my application carry?
This type of testing is usually applied to check at what load the application will crash. It’s good to know the maximum load your app can carry. If it’s very close to the performance requirement set by the customer, it’s time to do some serious profiling :) .

Endurance testing
Will my application be able to carry 50 users even after running x hours-days-…?
Endurance tests are used to evaluate if your application is able to run under a normal work load (users and transactions) during a prolonged time.

Wrapup

This post turned out to be longer than I thought, but I think it gives you a nice overview of what each test type intends to do. If I forgot anything, feel free to add!
