Things I liked most out of TFS2008 - 03/2/09

Last week I took a TFS2008 course, since we’re planning to upgrade soon (finally!). I thought I’d list the most important improvements, or rather, the improvements I liked most.

Out-of-the-box continuous integration

It goes without saying that this is the feature I love most in the new TFS… Previously we ran our integration builds manually on commit, which isn’t the same as having them automated: it requires a lot more discipline (I’m still seeing check-ins that don’t build, or that have failing unit tests…). But those days are over now: you can create a new build definition and configure it to run after each check-in.

TFS also offers the following option:

Accumulate check-ins until the prior build finishes (Build no more often than every xx minutes)

I have very mixed feelings about this option.
1) If you accumulate check-ins and build them together, it becomes harder to detect what broke the build when the failing build contains several changesets => you lose the fast feedback and the isolation of errors, which are among the nicest features of continuous integration.
2) If you feel the need to put a periodicity on your builds, that simply means there’s something wrong with your build.

For both options the conclusion is the same: your commit build should run very fast, and if it’s not fast enough to keep up with your check-ins, take a look at the build itself instead of using this option. Address the origin of your problem instead of just patching it with fewer builds… You now have the possibility to create several builds and make them run at different times, based on different criteria, so use it! Personally, I’d create an extremely fast commit build, a daily secondary build (containing integration tests and performance tests), and another daily build that deploys your application to a production-like environment. You can also opt to use a single build instead of two for the last two mentioned above; it depends on the situation and the size of your application.

New check-in policies

There’s a new check-in policy available stating that you cannot check in unless the previous continuous integration build succeeded, and I personally think this is a valuable option. Imagine the following sequence of events:
- Developer1 checks in and a commit build is triggered
- Developer1’s commit build fails
- Developer2 checks in changes (that will break the build for another reason)
- Developer2’s build fails for several reasons

I hope you can see the problem here. If Developer2 is able to commit his changes to the repository, the build breaks for several reasons at once, so error detection gets harder. And that’s where this policy becomes interesting: only commit your changes if the previous build succeeded.
The only thing I ask myself is whether TFS also prevents queued builds from running after the broken one… and resumes them once the broken build is fixed… That would be a great addition to the policy!

Running unit tests without test lists

Finally we can drop the annoying VSMDI file and tell the build to run all tests in the specified assemblies. If you have several test projects, you can work with wildcards to include, for example, all assemblies whose name ends with Tests. This is much better than test lists: it doesn’t require any maintenance (adjusting the vsmdi file every time you add tests) and ensures that all tests are always run. You can’t cheat anymore by excluding your failing tests from the test list to keep the build running :-) , and that’s a good thing.
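
For reference, this is roughly what that looks like in the generated TFSBuild.proj. This is a sketch from memory, so double-check the commented examples in your own file; the assembly naming pattern here (*Tests.dll) is just the convention I described above.

```xml
<PropertyGroup>
  <!-- Tell Team Build to run tests at all. -->
  <RunTest>true</RunTest>
</PropertyGroup>

<ItemGroup>
  <!-- Run every assembly whose name ends in "Tests", no .vsmdi needed.
       %2a is an escaped "*", so MSBuild doesn't try to expand the
       wildcard before $(OutDir) exists at build time. -->
  <TestContainer Include="$(OutDir)\%2aTests.dll" />
</ItemGroup>
```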

Other nice additions

Build queuing
You can now queue your builds, so your build won’t be rejected if a co-worker checks in a second before you do. It will just be queued and will run when the previous build finishes. What’s also nice is that you can prioritize your builds; I can imagine that in some cases this is a valuable option.

Build retention policies
Finally you can automatically delete build outputs that are xx days/weeks old, and you can even apply a different retention policy at the following levels:
- Succeeded builds
- Partially succeeded builds
- Failed builds
- Stopped builds

About partially succeeded builds, by the way… I don’t think this option is very valuable. Why would you want builds that can partially succeed? A build succeeds or it fails, period. This is a black-or-white situation; grey is not a possibility. I’ve heard people defend this option by stating that a build can partially succeed if the solution builds but the tests don’t run, or when the solution builds and the tests run, but deployment fails…
Pfffff, all excuses. If your tests don’t run, your build fails, period. Otherwise you’re not doing continuous integration. And if you want to separate deployment from the rest of your build, create a separate build for it instead of relying on this option.

Version control improvements
I like the Get-latest on check-out option. I always have to remind myself to do a get latest every time I want to check out something, so this option is very welcome. Some developers dislike it because it forces you to integrate with changes your teammates made even if you don’t want to. I don’t agree. Why wouldn’t you want to integrate with their changes? You’ve got a whole test suite that’s backing you up ;-) .
There are flaws to this option though.
1) It only performs a get latest of the file you’re checking out. If that file uses new types or methods, you’ll still be forced to do a complete get latest at project, or even solution, level. It would be nicer if that could be automated.
2) The one and only case in which I don’t want to integrate with the latest changes is when the commit build is failing.

Apart from that, there are some nice UI improvements that make dead-simple actions a lot easier:
- Save attachment from workitem to disk
- Drag and drop features in the source control explorer and in workitem attachments
- Go to Windows explorer from source control explorer
- Improved help in command line (tf.exe)

TFS Power tools

The TFS power tools are a set of tools that you can download separately and use on top of TFS, and they always include very cool features. Here are the ones I appreciate most:

Shell extensions
It’s been available for ages with tools such as TortoiseSVN for Subversion, but now we can finally perform source control operations on our files directly from Windows Explorer with TFS as well.

Search
Improved search capability using wildcards and paths, but my favourite certainly is searching by status:
- Files that are checked-out
- Files checked out to user x

Build notification application
A little monitoring application that polls the build server looking for builds that are queued, started or completed. It notifies you even if the build was started by another team member, and displays a nice Outlook-like popup containing the build status and a direct link to it (or even to the drop location).

Alerts editor
With this nice addition you can subscribe to alerts at 3 different levels: work item, check-in or build. My favourites: only getting the build e-mail when a build fails, and getting an alert when a work item is assigned directly to me.

Test types and Continuous Integration - 02/23/09

Martin Fowler defines continuous integration as follows:

“Continuous Integration is a software development practice where members of a team integrate their work frequently, usually each person integrates at least daily – leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible. Many teams find that this approach leads to significantly reduced integration problems and allows a team to develop cohesive software more rapidly.”

I’m writing this post as a follow-up to my previous one, types of testing. I’ll talk about how each type of test fits into a continuous integration process.

Introduction

You can read all about continuous integration in Martin Fowler’s paper here. A nice addition (and one that’s lying on my bookshelf, like many others) is the book Continuous Integration: Improving Software Quality and Reducing Risk, by Paul Duvall, Steve Matyas and Andrew Glover.

I’ll be talking about two types of builds, which I’ll refer to as the commit build and the secondary build. The primary-stage build (aka the commit build) automatically runs whenever someone commits changes to the repository (see Every build should build the mainline on an integration machine). When this build has tests that fail, the build fails too. A broken commit build is a show-stopper: the failing tests must be repaired as soon as possible to fix the build. The secondary-stage build is a build that runs whenever possible; in my opinion, at least once a day. It can be started manually, or it can run as a nightly build from a script that grabs the latest executables and runs their specific test suites. If this build fails, developers can carry on working. Don’t get me wrong, this build has to be fixed too, but it doesn’t have the same priority as a broken commit build.

Unit tests

Unit tests are the most important part of your continuous integration process (in the sense that these tests are run the most). After each commit to the repository, the build executes all unit tests to finalize the commit. In other words, your unit tests should run within the commit build and make the build fail if any test fails.

It’s very important to keep these tests focused, and especially fast. You must realize that every commit will execute the tests, and it’s important to get immediate feedback; you can’t be waiting half an hour just to commit some changes, right?! That’s why unit tests use test double patterns (use a test double for each of the SUT’s expensive dependencies). I’ve only read a few pages of Meszaros’ book, but I know it contains a chapter that covers these patterns (can’t wait to get there!).
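
To make that concrete, here’s a minimal sketch of what I mean. The names (InvoiceService, IExchangeRateProvider, the fixed-rate stub) are made up for the example: instead of letting the test call a real web service or database, you hand the SUT a cheap hand-rolled test double, so the test runs in milliseconds and can live in the commit build.

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical SUT and dependency, just to illustrate the pattern.
public interface IExchangeRateProvider
{
    decimal GetRate(string fromCurrency, string toCurrency);
}

public class InvoiceService
{
    private readonly IExchangeRateProvider rateProvider;

    public InvoiceService(IExchangeRateProvider rateProvider)
    {
        this.rateProvider = rateProvider;
    }

    public decimal ConvertTotal(decimal total, string from, string to)
    {
        return total * rateProvider.GetRate(from, to);
    }
}

// The test double: a hard-coded stub that replaces the expensive
// dependency (in the real implementation this would be a web service call).
public class FixedExchangeRateStub : IExchangeRateProvider
{
    public decimal GetRate(string fromCurrency, string toCurrency)
    {
        return 1.5m;
    }
}

[TestClass]
public class InvoiceServiceTests
{
    [TestMethod]
    public void ConvertTotal_UsesTheExchangeRate()
    {
        var service = new InvoiceService(new FixedExchangeRateStub());

        decimal converted = service.ConvertTotal(100m, "EUR", "USD");

        Assert.AreEqual(150m, converted);
    }
}
```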

Integration tests

Integration tests run within the secondary build. These tests are normally slower than unit tests since they test the integration of several components, and thus they use (and set up) actual dependencies. This makes them slower, but still, we should try to keep them relatively fast. Running these tests is also very important, but since it’s an expensive operation, we do it far less often than running the unit tests. In my opinion, they should run at least once a day. These tests normally include testing access to your database, so I also try to run them after each database change, for example. If they fail, you’ve probably broken an NHibernate mapping, a typed DataSet, or some code using an ugly magic string somewhere. My rule is: run them at least once a day, and every time you’ve made a change that directly affects the integration of your code with an external component.
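
As an example of the kind of cheap-but-real integration check I have in mind: a test that builds the NHibernate session factory from the actual configuration and mapping files, so a broken mapping makes the secondary build go red instead of blowing up at runtime. This is just a sketch; it assumes a hibernate.cfg.xml is available on the test output path.

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;
using NHibernate.Cfg;

[TestClass]
public class NHibernateMappingTests
{
    [TestMethod]
    public void AllMappingsCanBeCompiledIntoASessionFactory()
    {
        // Reads hibernate.cfg.xml and the referenced .hbm.xml mappings.
        // If a mapping points at a renamed property or a missing class,
        // building the session factory throws and the build fails.
        var configuration = new Configuration().Configure();

        using (var sessionFactory = configuration.BuildSessionFactory())
        {
            Assert.IsNotNull(sessionFactory);
        }
    }
}
```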

Acceptance testing

If you’re using automated acceptance testing, these tests can also be executed automatically within your integration process. I think it’s a good habit to run them daily, only it can be very annoying while you’re developing your user interface. Whenever you need to add some textbox somewhere, you’ll have some failing tests (hopefully - remember TDD). In that case, I tend to keep the general rule of having them all pass at the end of the iteration; that’s the final deadline. If you choose to do so, it might be a good idea to set up a third build, or to just run them manually as part of your iteration (a bit like regression tests in this sense). If you just run them at the same level as your integration tests, you’ll have your secondary build failing during the whole iteration, which is not a good thing.
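
Just to illustrate what such an automated acceptance test can look like in a .NET shop, here’s a sketch that assumes WatiN (not something the course covered, just one possible tool) and a hypothetical login page with made-up field names. Note that WatiN needs an STA thread, which usually means a small tweak to your test run configuration.

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;
using WatiN.Core;

[TestClass]
public class LoginAcceptanceTests
{
    [TestMethod]
    public void ValidUserCanLogIn()
    {
        // Drives a real browser against a deployed test environment,
        // which is exactly why these tests don't belong in the commit build.
        using (var browser = new IE("http://localhost/myapp/login"))
        {
            browser.TextField(Find.ByName("userName")).TypeText("demo");
            browser.TextField(Find.ByName("password")).TypeText("secret");
            browser.Button(Find.ByValue("Log in")).Click();

            Assert.IsTrue(browser.ContainsText("Welcome"));
        }
    }
}
```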

If you’re doing user acceptance testing, you should have your CI process deploy your application to the UAT-environment automatically (we do this after each iteration).

Performance testing

I’ve heard of projects where the secondary build also includes performance tests. Usually I don’t think this is necessary, except in applications where performance is absolutely critical. If a certain level of performance is a requirement, including these tests in your continuous integration process gives you the advantage of constant feedback, and of easily identifying which part of your code might contain a memory leak and needs some investigation or rolling back.

I’d use these rules to make up my mind:
1) Do I really need performance tests?
2) Do I really need constant feedback on my application’s performance?
3) Can I have these tests executed by an independent build (not in the commit build, nor in the secondary build)?

Smoke testing and regression testing

I left these two types of tests out of my initial list in my previous post, because in the long run they are just unit tests, integration tests, acceptance tests or performance tests. The big difference in the naming basically comes down to when they are executed. And in a continuous integration process, that would be during the commit build or during the secondary build (or any other build), depending on the type of test :D .

Wrapup

I think this post gives a nice overview of what tests to put in what build within a continuous integration process. Maybe this approach isn’t the best one, so if you’ve got any other ideas, be sure to leave them in the comments :) .
