Thoughts about the Design Master Class - 10/13/09

So, after a lot of impatient waiting, last week my colleagues and I attended Dino Esposito’s Design Master Class, organized by Pieter’s new company, Sparkles.

First of all, we had a lot of fun! We had a great group, and I also bumped into some ex-colleagues from Compuware, which is always amusing, to say the least :-) . The food was great, each day a different experience :) . The location was a bit too far away for me personally, but I didn’t spend too much time in traffic jams, so that turned out OK too.

Now let me share my thoughts about this class with you…

I don’t know if you know which topics this class covered, but you should certainly take a look at the table of contents (which you can find here) before you continue reading.

As you’ve seen, the table of contents looks very promising and interesting; it covers all the “hot topics” everybody is reading and writing about lately: OO design principles and patterns, unit testing and design for testability, IoC, domain-centric design versus procedural patterns, persistence ignorance, ORM, presentation patterns, and so on…

It was a 5-day class, during which each of these concepts was covered in depth, but in a rather theoretical context. Dino provided us with lots of examples and demos, but every example and/or demo only covered the specific subject we were handling at the moment. This gave me more detailed knowledge (in isolation) about these topics, but in my opinion, it missed a more practical approach.

Now, to be honest, my expectations for this class were a bit different from what we actually did (and maybe that’s just my mistake). I’ve been reading a lot about these subjects for about a year and a half now, and I’ve also applied them whenever and however I could. The problem with these concepts is that they are quite “easy” to understand in isolation and to apply to simple problems; it tends to get difficult once you’re writing a big enterprise application. That’s where the use of all these concepts shines, but that’s also when they’re hard to apply. You always have to keep both eyes open in order to keep your model clean. So, in that sense, I expected we’d briefly cover these topics in theory, and then walk through a full enterprise application to see how these concepts finally integrate into a complete solution. I was hoping for some pair-analysis and pair-programming sessions, so we could compare and discuss our results in the group afterwards and gain greater insight while doing so. A little more the agile way, I guess…

So what’s my final thought about this class? If your team needs to write complex enterprise solutions that require a domain-centric and object-oriented approach, this is the material you need to master to succeed. But generally speaking, I’d recommend this class to a junior developer who needs an introduction to all these topics, rather than to developers or architects who already have the basic knowledge and need to gain insight into how to apply it in the real world. Gaining this insight is very hard, and I don’t doubt that it’s also very hard to teach. Still, I’m convinced that theory only helps to introduce the concepts; practice is how we gain insight and refine our knowledge. There’s no silver bullet for knowing when to apply certain principles/patterns; there are even cases in which it would be crazy to try. That’s why I’m convinced that the only thing that can help you here is practice…

Now what’s next? I know a bunch of people who attended the Architect’s Master Class, taught by Juval Lowy last year… Pieter did it again and will probably be organizing this class next year. I’m already subscribed to the Sparkles blog to keep an eye on this course, and I’ll go whining to my boss the day registration opens :) !


Going to Esposito’s Design Master Class in October! - 06/4/09

I’m very excited about this, so I thought I’d share it with you guys!

The Design Master Class will be held here in Belgium in October (5-9) this year. Our team just signed up, so I can finally sleep well now: I’ll be there ;-) .

I don’t know if you’ve heard about this class, so let me give you some background information. It’s a five-day workshop given by Dino Esposito, one of IDesign’s trainers. The course includes topics such as object-oriented principles and patterns, class design, domain-based design, persistence ignorance, and so on…

My ex-colleague Pieter Gheysens, who just founded his very own company, Sparkles, was kind enough to organize this course for us here in little Belgium! I’m convinced that this is one of the best courses you can attend here in Belgium, and I don’t know about you, but I’d be willing to give up attending any other event or course to follow this one!

Just take a look at the full course details here and you’ll see what I mean! And by the way, don’t let the price tag scare you off because of the financial crisis that’s going on… Remember it’s a 5-day course, and remember what it’s about. I’m sure it will be worth the investment. Think about the impact of such a workshop on your team’s development… ;-)

Congratulations Pieter! I’m looking forward to it… and hope to see you all there :-)


Introduction to test-doubles - 03/16/09

As soon as you start unit-testing or test-driving your development, you’ll sooner or later learn about test-doubles and how they can make your tests lightning-fast. And if you set up a continuous integration process (which you should) and you have more than 5 unit tests, you’ll probably have to know about test-doubles sooner instead of later :-) .

What is a test-double?

Gerard Meszaros defines a test-double as follows:

“Sometimes it is hard to test the system under test (SUT) because it depends on other components that cannot be used in the test environment. This could be because they aren’t available, they will not return the results needed for the test or because executing them would have undesirable side effects. In other cases, our test strategy requires us to have more control or visibility of the internal behavior of the SUT. When we are writing a test in which we cannot (or choose not to) use a real depended-on component (DOC), we can replace it with a Test Double. The Test Double doesn’t have to behave exactly like the real DOC; it merely has to provide the same API as the real one so that the SUT thinks it is the real one! “

The concept is very easy to understand, but if you’ve never heard of them, I assume that what a test-double looks like is still a bit blurry. First I have to say that you have to use them with caution and only when appropriate. Apart from that, there are several types of test-doubles. You can find a list of all types in Meszaros’ book and an enumeration of them here.

Why use test-doubles?

I think I can summarize the need for test-doubles in one line: use a test-double to keep your tests focused and fast. If you’re doing CI and TDD, you’ll have a very big test suite after a while, and it’s critical to keep it running in a few minutes. If you don’t, you’ll end up giving up CI, and you’ll lose the continuous feedback it offers you.

If your SUT depends on a component that needs a lot of setup code or expensive resources, you don’t want to be doing all that in a simple test. Your SUT shouldn’t care how the component it depends on needs to be configured. If you do care about that, you’re writing integration or even acceptance tests that go through the whole system… That’s why replacing a DOC with a fake can come in very handy. Test your SUT in isolation, that’s the goal. The DOC-components will have tests of their own, and you’ll have integration tests on top of it all.

Expectations, verifications, and stuff like that

Before I get to mocks and stubs, you need to understand the expectation-verification thing.

First of all, a mock or a stub is just an object that looks like the real DOC, but is actually a fake which you can use to get the test passing, or to record the calls your SUT makes to it. When using such a mock/stub, you can set expectations on it. An expectation is a statement in which you explicitly expect a call to a particular method or property, with particular parameters, and even a particular return value. After you’ve set the expectations you consider important, you can verify that these calls actually took place, thus verifying that your SUT is doing what you expected.

What is a stub?

A stub is an object that you use just to get your code passing. When you don’t really care about how the interaction with the DOC happens, you can use a stub to replace the real dependency. A stub can be an empty implementation or a so-called “dumb” implementation: instead of performing a calculation, it could just return a fixed value.
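To make that concrete, here’s a minimal hand-rolled stub (a sketch; the ITaxCalculator dependency is a hypothetical example, not from a real project):

    public interface ITaxCalculator
    {
        decimal CalculateTax(decimal amount);
    }

    // A "dumb" stub: instead of performing the real calculation,
    // it always returns the same fixed value.
    public class TaxCalculatorStub : ITaxCalculator
    {
        public decimal CalculateTax(decimal amount)
        {
            return 10m; // good enough to let the SUT do its work
        }
    }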

Creating stubs with a mocking framework is way easier than hand-coding an extra class to act as a stub for your test. How you do this exactly is for an upcoming post, but the good news is that you don’t need to write the stub manually.

What is a mock?

You’ll use mocks when you really want to test the behavior of the system. You can set expectations on a mock: methods/properties to be called with specified parameters and/or return values. The final part of the test is then always the verification of the expectations that were set. If they were not satisfied, your test fails. This is especially interesting when you need to be completely sure that certain calls actually happened. Just imagine an overly simplified profit calculator: you can never calculate your profits (or losses) if you haven’t calculated your revenues and expenses first, can you? Well, you could set expectations that these are calculated first.
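To show what that verification looks like without a framework, here’s a hand-rolled mock for the profit example (a sketch; all the types are hypothetical, and NUnit syntax is assumed):

    using NUnit.Framework;

    public interface IRevenueCalculator
    {
        decimal CalculateRevenue();
    }

    // The SUT: a trivial profit calculator with one depended-on component.
    public class ProfitCalculator
    {
        private readonly IRevenueCalculator revenue;

        public ProfitCalculator(IRevenueCalculator revenue)
        {
            this.revenue = revenue;
        }

        public decimal CalculateProfit()
        {
            return revenue.CalculateRevenue() - 400m; // expenses hard-coded for brevity
        }
    }

    // A hand-rolled mock: it records the calls the SUT makes,
    // so the test can verify that the expected interaction took place.
    public class RevenueCalculatorMock : IRevenueCalculator
    {
        public int CallCount { get; private set; }

        public decimal CalculateRevenue()
        {
            CallCount++;
            return 1000m;
        }
    }

    [TestFixture]
    public class ProfitCalculatorTests
    {
        [Test]
        public void CalculateProfit_CalculatesRevenueFirst()
        {
            var revenueMock = new RevenueCalculatorMock();
            var calculator = new ProfitCalculator(revenueMock);

            calculator.CalculateProfit();

            // Verification: fail the test if the expected call never happened.
            Assert.AreEqual(1, revenueMock.CallCount);
        }
    }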

What is a fake?

A fake is a class or method that’s implemented just for the sake of testing. It has the same goal as the other variations of test-doubles: replacing the depended-on component to avoid slow or unreliable tests. The classic example is replacing a repository that accesses the database with an in-memory repository that just returns objects from a collection it holds. That way you’ve got data you can test the SUT against, without the overhead of communicating with expensive or external components.

The database is just an example; you can perfectly well use a fake object to hide complex processing of some kind and just return the data you need to continue (the data that the real processing would produce in production code). It will make your tests focus better on the SUT, and they will be a lot faster.
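A minimal sketch of such an in-memory fake (the Product and IProductRepository types are hypothetical):

    using System.Collections.Generic;
    using System.Linq;

    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public interface IProductRepository
    {
        Product GetById(int id);
        void Save(Product product);
    }

    // A fake: a real, working implementation, but backed by an
    // in-memory collection instead of a database.
    public class InMemoryProductRepository : IProductRepository
    {
        private readonly List<Product> products = new List<Product>();

        public Product GetById(int id)
        {
            return products.FirstOrDefault(p => p.Id == id);
        }

        public void Save(Product product)
        {
            products.Add(product);
        }
    }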

Wrapup

It’s almost impossible to unit test without using mocking techniques; without them, your tests become extremely slow once you have a lot of them, and the continuous feedback loop is lost.
Mocking is a very powerful technique, but beware of misusing it. I actually try to avoid mocks. Just think: do I need to verify how my SUT interacts with the DOC? If not, don’t use a mock, use a stub. With lots and lots of mocks, your tests can become brittle. Just imagine refactoring your DOC and breaking 20 tests, only to find that 17 of the 20 broke because the expectations set on the mocked DOC aren’t correct anymore. That’s something you really should avoid. Keep your tests focused ;-) .

Recommended readings

Mocks aren’t stubs by Martin Fowler
Test doubles by Martin Fowler
xUnit Test Patterns by Gerard Meszaros (also check out the website)
Test Doubles: When (not) to use them by Davy Brion

I’ll follow up with a post on how to create these types of test-doubles using a mocking framework like Rhino.Mocks as soon as I get the chance ;-) .


Test-driven development with code generation - 03/5/09

As I’ve mentioned before, I’ve been involved with some code generation lately. The software factory generates production code on one side, and tests to verify the generated code on the other. So that’s all good and well, but…

The software factory also generates a set of tests for each layer, which are there to give you a starting point. I’m having trouble with this for several reasons. Since the generated code is fully data-driven (based on stored procedures), the data layer and its tests are OK. They don’t need much adjustment. The tests test what they should, and they pass, thus verifying that the generation went well.

But the data layer is not the only one with generated tests. The business layer and presentation layer also have generated tests. As I’ve already said, the generated tests provide developers with a starting point, but all the tests pass when they are generated (while they don’t test any logic). And that bothers me a lot. They should all fail! This may be a good system for developers doing TDD, but if you’ve got developers on your team who don’t do TDD, you’re doomed to have passing tests that don’t test anything useful or are empty. That’s why, in my opinion, all tests should fail after generation. That way the developer will be forced to fix them. And if I see an Assert.IsTrue(true); then I’ll just have a valid reason to kill ;-) .

If you have your software factory generating failing tests (I mean only the tests that aren’t complete), that has its downsides too. For starters, it will take you a while to be completely green again. And as Beck says, the time you’re red should be minimal, for motivation reasons. Still, I prefer failing generated tests, since I’ve seen a lot of misuse (useless tests) when they pass. And if you can’t live with that, then don’t generate tests that aren’t finished. You could generate parts; for example, generate the test class itself, containing a setup and a teardown method, and if it’s obvious the code will need some mocking or stubbing, initialize a mock repository in the setup method, as sketched below. But avoid tests that are half-tests or no tests at all. Those should be written by the developers. At least then you’ll have useful tests.
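To illustrate, a generated skeleton along those lines might look something like this (a hypothetical sketch in MSTest style, with a Rhino.Mocks repository initialized in the setup method; the class name is made up):

    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using Rhino.Mocks;

    [TestClass]
    public class OrderServiceTests
    {
        private MockRepository mocks;

        [TestInitialize]
        public void Setup()
        {
            // Generated: a mock repository, ready for the developer to use.
            mocks = new MockRepository();
        }

        [TestCleanup]
        public void Teardown()
        {
            mocks = null;
        }

        // No generated half-tests here: the developer writes the real tests,
        // starting from a failing (red) state.
    }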

Having passing but useless tests just leads to a maintenance problem after a while, and you’re completely missing the safety net that your tests should provide. The test safety net can contain holes, which you’ll discover through bugs, but those get fixed, because you fix a bug by writing a new test. The net should still be a net, though, and not some spider web that isn’t going to hold your weight when you fall.

What do you think about test generation? Let me know!


Going to TechDays 2009 @Metropolis - 03/5/09

Just wanted to note that I’ll be attending TechDays Belgium 2009 next week.
It’s nice to get a short introduction to some things I haven’t been following too closely lately.

I might still change my mind, but this is my choice of sessions for now (you can find the whole agenda here):

March 10:
- The future of C#: a first look at C# 4.0
- Silverlight 2.0 CoreCLR: Bringing the power
- Code contracts, Pex, CHESS, 3 tools for 1 talk
- Lean principles, Agile techniques and Team System
- .NET Services: Infrastructure building in the cloud

March 12:
- Building workflow services in .NET 3.5
- ASP.NET MVC for smart people
- Pex – Automated white box testing for .NET
- ASP.NET 4.0 what is coming? How do I prepare my app?
- .NET CLR 4: Working better together, faster with fewer bugs

I will also attend the pre-conference and follow the “Software + Services: the convergence of SOA, SaaS, Cloud computing and Web 2.0” track.

If you’re going, I’ll see you there ;-) !



Things I liked most out of TFS2008 - 03/2/09

Last week, I followed a TFS2008 course, since we’re planning to upgrade soon (finally!). I thought I’d list the most important improvements, or better said, the improvements I liked most.

Out-of-the-box continuous integration

Needless to say, this is the feature I love most in the new TFS… Previously we did our integration builds manually on commit, which is not the same as having them automated: it requires more discipline (I was still seeing check-ins that don’t build, or that have failing unit tests…). But now those days are over. You can create a new build definition and configure it to run after each check-in.

TFS also offers the following option:

Accumulate check-ins until the prior build finishes (build no more often than every xx minutes). I have very mixed feelings about this option:
1) If you accumulate check-ins and build them together, it will be harder to detect what broke the build if it contains several changesets. You lose feedback and isolation of errors, while that’s one of the nicest features of continuous integration.
2) If you choose to put a periodicity on your builds, it means there’s just something wrong with your build.

In both cases, the conclusion is the same: your commit build should run very fast, and if it’s not fast enough to keep up with your check-ins, take a look at the build instead of using this option. Address the origin of your problem instead of just patching it with fewer builds… You’ve now got the possibility to create several builds that run at different times and based on different criteria, so use it! Personally, I’d create an extremely fast commit build, a daily secondary build (containing integration tests and performance tests), and another daily build that deploys your application to a production-like environment. You can also opt to use one build instead of two for the last two mentioned above; it depends on the situation and the size of your application.

New check-in policies

There’s a new check-in policy available that states you cannot check in unless the previous continuous integration build succeeded, and I personally think this is a valuable option. Imagine the following sequence of events:
- Developer1 checks in and a commit build is triggered
- Developer1’s commit build fails
- Developer2 checks in changes (that will break the build for another reason)
- Developer2’s build fails for several reasons

I hope you can see the problem here. If Developer2 were able to commit his changes to the repository, the build would break for several reasons, so error detection gets harder. That’s where this policy becomes interesting: only commit your changes if the previous build succeeded.
The only thing I ask myself is whether TFS also prevents queued builds from running after the broken one… and resumes them when the broken build is fixed… That would be a great addition to the policy!

Running unit tests without test lists

Finally we can drop the annoying VSMDI file and tell the build to run all tests in the specified assemblies. If you have several test projects, you can work with wildcards to include, for example, all assemblies whose names end with Tests. This is much better than test lists: it doesn’t require any maintenance (adjusting the vsmdi file every time you add tests) and it ensures that all tests are always run. You can’t cheat anymore by excluding your failing tests from the test list to keep the build running :-) , and that’s a good thing.

Other nice additions

Build queuing
You can now queue your builds, so your build won’t be rejected if a co-worker checks in a second before you do. It will just be queued and will run when the previous build finishes. What’s also nice is that you can prioritize your builds. I can imagine that in some cases this is a valuable option.

Build retention policies
Finally you can automatically delete build outputs that are xx days/weeks old, and you can even apply a different retention policy at each of the following levels:
- Succeeded builds
- Partially succeeded builds
- Failed builds
- Stopped builds

About partially succeeded builds, by the way… I don’t think this option is very valuable. Why would you want builds that can partially succeed? A build succeeds or it fails, period. This is a black-or-white situation; grey is not a possibility. I’ve heard people defend this option by stating that a build can be partially succeeded if the solution builds but the tests don’t run, or when the solution builds and the tests run, but deployment fails…
Pfffff, all excuses. If your tests don’t run, your build fails, period. Otherwise you’re not doing continuous integration. And if you want to separate deployment from the rest of your build, create a separate build for it instead of using this option.

Version control improvements
I like the get-latest on check-out option. I always have to remind myself to do a get-latest every time I want to check something out, so this option is very welcome. Some developers dislike it because it forces you to integrate with changes your teammates made even if you don’t want to. I don’t agree. Why wouldn’t you want to integrate with other changes? You’ve got a whole test suite that’s backing you up ;-) .
There are flaws to this option though.
1) It only performs a get-latest on the file you’re checking out. If this file uses new types or methods, you’ll be forced to do a complete get-latest at project, or even solution, level. It would be nicer if this could be automated.
2) The one and only case in which I don’t want to integrate with the latest changes is when the commit build fails.

Apart from that, there have been nice UI improvements that make dead-simple actions a lot easier:
- Save an attachment from a work item to disk
- Drag-and-drop support in the Source Control Explorer and in work item attachments
- Go to Windows Explorer from the Source Control Explorer
- Improved help in the command line (tf.exe)

TFS Power tools

The TFS Power Tools are a set of tools that you can download separately and use on top of TFS, and they always include very cool features. Here are the ones I appreciate most:

Shell extensions
It’s been available for ages with tools such as Subversion and TortoiseSVN, but now we can finally perform source control operations on our files directly from Windows Explorer with TFS too.

Search
Improved search capability using wildcards and paths, but my favourite certainly is searching by status:
- Files that are checked-out
- Files checked out to user x

Build notification application
A little monitoring application that polls the build server for builds that are queued, started or completed. It notifies you even if the build was started by another team member, and displays a nice Outlook-like popup containing the build status and a direct link to the build (or even to the drop location).

Alerts editor
With this nice addition you can subscribe to alerts at 3 different levels: work item, check-in or build. My favourites: only getting the build e-mail when a build fails, and getting an alert when a work item is assigned directly to me.


Starting with Test-driven development - 02/26/09

I received some great how-to-start-with-TDD advice from Davy a few months ago. I’m still a TDD newbie, that’s for sure, but I sure have learned a lot during these past months. Since I think Davy’s advice gave me a great head start, I thought I’d share it with all the developers who want to start with TDD. Thanks to Davy!

Start small, VERY small

My problem when I began was that I started waaaay too big. I was worrying about how to test complex scenarios even before I had written the very simple tests, test-first.

When you’re just starting, try not to think about those problems yet. Start out very simple, with code that’s easy to test and doesn’t even require more advanced techniques such as test doubles. By the time you get to the more complex scenarios, you’ll already be a step further.
One rule I always keep in mind: if the code I want to write is so complex that I don’t even know how to test it, it’s time to split it up into smaller, testable pieces.

Read about it

You’ll find lots and lots of information on blogs out there, but I think you should start with Kent Beck’s Test-Driven Development: By Example (more about this book below). I’m not saying the info on blogs isn’t valuable, I’m just saying there are better resources out there for learning the basics. Reading blogs about TDD without knowing the basics well caused me a lot of confusion, for two reasons:
First, TDD posts mostly assume some level of TDD knowledge, and usually more than knowing the red-green-refactor cycle. Second, at the end of the day, posts are personal opinions and personal perceptions, which aren’t always completely correct, so you’ll find contradictory opinions and won’t know which one is right. Once you understand the TDD basics, you’ll get much more value out of all the information TDD’ers are sharing with us on the net.

TDD by Example is a great book to get started with. It’s about 200 pages long and it’s extremely easy to read. Once you’ve read it, you’ll be able to start practicing TDD, talking about it and understanding its value. What I also enjoyed a lot is that Kent writes the book just like you’re thinking now (with a non-TDD mentality) => “Let’s implement this. We should call this method, and then perform a calculation… Shame on me! I should start test-first!”. I loved it!

After this one, you can read xUnit Test Patterns, a more advanced book that describes the patterns you’ll commonly use during testing. I’ve (finally) just started reading it and I’m already enjoying it. It’s not going to be a quick read (900 pages), but I know it’ll be very valuable.

Apart from the books, I’d also recommend reading Martin Fowler’s paper about the difference between mocks and stubs, which you can find here. It’s a quick read and it’s very clarifying.

Don’t give up

It won’t be easy; it’s a complete mind-shift. I’ve often been in the situation where I began test-first and ended up writing everything test-after. The key is not giving up, or as Davy might say: be stubborn! The advantage I had is that I felt guilty whenever I was testing after, so whenever I got to new functionality, I began test-first again.
Even now, when I’ve just implemented something I’ve tested and I’m green again, I catch myself starting to type some new method somewhere. At that point, I try to be very strict with myself, delete what I just typed, and get back to my test suite to write a new test. Remember red-green-refactor :) !
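To illustrate the cycle, here’s a minimal example (a hypothetical StringCalculator, NUnit syntax):

    using NUnit.Framework;

    [TestFixture]
    public class StringCalculatorTests
    {
        // Red: write the test first; it fails because Add doesn't exist yet.
        [Test]
        public void Add_EmptyString_ReturnsZero()
        {
            var calculator = new StringCalculator();
            Assert.AreEqual(0, calculator.Add(""));
        }
    }

    // Green: write just enough code to make the test pass.
    public class StringCalculator
    {
        public int Add(string numbers)
        {
            return 0; // the simplest thing that could possibly work
        }
    }

    // Refactor: clean up duplication while staying green,
    // then start the cycle again with the next failing test.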

Apart from discipline, I think the easiest way to hold on while learning TDD is having an experienced TDD’er in your direct environment. But that’s just not always possible. When I got into TDD, I was working with people who didn’t see any value in testing at all (not even in tests after coding), so don’t let that stop you!

A final word

In my limited and humble experience, I can say that I’ve only experienced advantages from the TDD approach. It may be hard in the beginning, while you’re learning it (and I still haven’t completely passed that stage), but the value you get out of it (fewer bugs, reliability, clear design, more focused and cleaner code) far outweighs the time you invest in it.


Test types and Continuous Integration - 02/23/09

Martin Fowler defines continuous integration as follows:

“Continuous Integration is a software development practice where members of a team integrate their work frequently, usually each person integrates at least daily – leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible. Many teams find that this approach leads to significantly reduced integration problems and allows a team to develop cohesive software more rapidly.”

I’m writing this post as a follow-up to my previous one, Types of testing. I’ll talk about how each type of test fits into a continuous integration process.

Introduction

You can read all about continuous integration in Martin Fowler’s paper here. A nice addition (and one that’s lying on my bookshelf like many others) is the book Continuous Integration: Improving Software Quality and Reducing Risk, by Paul Duvall, Steve Matyas and Andrew Glover.

I’ll be talking about two types of builds; I’ll refer to them as the commit build and the secondary build. The primary-stage build (aka the commit build) runs automatically whenever someone commits changes to the repository (see “Every build should build the mainline on an integration machine”). When this build has tests that fail, the build fails too. The broken tests must be repaired as soon as possible to fix the build: this is a show-stopper. The secondary-stage build is a build that runs whenever possible; in my opinion, at least once a day. It can be triggered manually, or it can run as a nightly build from a script that grabs the latest executables and runs their specific test suites. If this build fails, developers can carry on working. Don’t get me wrong, this build has to be fixed too, but it doesn’t have the same priority as a broken commit build.

Unit tests

Unit tests are the most important part of your continuous integration process (in the sense that they are run the most often). After each commit to the repository, the build executes all unit tests to validate the commit. Your unit tests should run within the commit build and make the build fail if any test fails.

It’s very important to keep these tests focused, and especially fast. Realize that every commit will execute the tests, and it’s important to have immediate feedback; you can’t be waiting half an hour just to commit some changes, right?! That’s why unit tests use test-double patterns (use a test double for each of the SUT’s expensive dependencies). I’ve only read a few pages of Meszaros’ book, but I know it contains a chapter that covers these patterns (can’t wait to get there!).

Integration tests

Integration tests run within the secondary build. These tests are normally slower than unit tests, since they test the integration of several components and thus use (and set up) actual dependencies. That makes them slower, but we should still try to keep them relatively fast. Running these tests is also very important, but since it’s an expensive operation, we do it far less often than running the unit tests. In my opinion, they should run at least once a day. These tests normally include testing access to your database, so I also try to run them after each database change, for example. If they fail, you’ve probably broken an NHibernate mapping, a typed DataSet, or some code using an ugly magic string somewhere. My rule is: run them at least once a day, and every time you’ve made a change that directly affects the integration of your code with an external component.
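One possible way to wire this split into your builds is to tag the slower tests, for example with NUnit’s Category attribute, and have each build include or exclude that category (a sketch; the classes and the category name are made up):

    using NUnit.Framework;

    public class Order
    {
        private readonly decimal[] linePrices;

        public Order(params decimal[] linePrices)
        {
            this.linePrices = linePrices;
        }

        public decimal Total
        {
            get
            {
                decimal total = 0m;
                foreach (decimal price in linePrices)
                    total += price;
                return total;
            }
        }
    }

    [TestFixture]
    public class OrderTests
    {
        // No category: a fast, isolated test that belongs in the commit build.
        [Test]
        public void Total_SumsLinePrices()
        {
            Assert.AreEqual(30m, new Order(10m, 20m).Total);
        }
    }

    [TestFixture]
    [Category("Integration")]
    public class OrderRepositoryTests
    {
        // Tagged as integration: excluded from the commit build and executed
        // by the secondary build, for example:
        //   nunit-console MyTests.dll /exclude:Integration   (commit build)
        //   nunit-console MyTests.dll /include:Integration   (secondary build)
        [Test]
        public void Save_PersistsOrder()
        {
            // would set up a real database connection here
        }
    }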

Acceptance testing

If you’re using automated acceptance testing, these tests can also be executed automatically within your integration process. I think it’s a good habit to run these tests daily, only it can be very annoying when you’re developing your user interface: whenever you need to add some textbox somewhere, you’ll have some failing tests (hopefully; remember TDD). In that case, I tend to keep the general rule of having them all pass at the end of the iteration; that’s the final deadline. If you choose to do so, it might be a good idea to set up a third build, or to just run them manually as part of your iteration (a bit like regression tests in this sense). If you just run them at the same level as your integration tests, you’ll have your secondary build failing during the whole iteration, which is not a good thing.

If you’re doing user acceptance testing, you should have your CI process deploy your application to the UAT environment automatically (we do this after each iteration).

Performance testing

I’ve heard of projects where the secondary build also includes performance tests. Usually I don’t think this is necessary, except in applications where performance is absolutely critical. If a certain level of performance is a requirement, including these tests in your continuous integration process gives you the advantage of constant feedback, and makes it easy to identify which part of your code might contain a memory leak and needs some investigation or rolling back.

I’d use these questions to make up my mind:
1) Do I really need performance tests?
2) Do I really need constant feedback on my application’s performance?
3) Can I have these tests executed by an independent build (not in the commit build, nor in the secondary build)?

Smoke testing and regression testing

I left these two types of tests out of my initial list in my previous post, because in the long run they are just unit tests, integration tests, acceptance tests or performance tests. The big difference in naming basically comes from when they are executed. In a continuous integration process, that would be during the commit build or during the secondary build (or any other build), depending on the type of test :D .

Wrapup

I think this post gives a nice overview of which tests to put in which build within a continuous integration process. Maybe this approach isn’t the best one, so if you’ve got any other ideas, be sure to leave them in the comments :) .


Types of testing - 02/18/09

I notice that there’s a lot of confusion among people who are just starting to explore automated testing, be it using TDD or not. There are many kinds of tests, all the terms are used extensively, and I’ve seen this cause confusion for many developers.
So here’s an overview of the most common types of testing and their goals.

Unit testing

As the name already states, a unit test tests a single unit. What’s a unit? The smallest thing you can test; in object orientation, that would be a method.
It’s common to create a fixture per class, and one or more tests per method. You’ll probably create one test for the valid scenario, and several for failing scenarios.

You should always start out with unit tests. If they’re failing, there’s no use in starting with other types of tests. It’s simple: build up the dependencies you need in the wanted state (valid or invalid, depending on the goal of the test), perform the operation you’re testing, and verify that the result is correct.
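To make that structure concrete, a minimal unit test could look like this (a hypothetical Account class, NUnit syntax):

    using System;
    using NUnit.Framework;

    public class Account
    {
        public decimal Balance { get; private set; }

        public void Deposit(decimal amount)
        {
            Balance += amount;
        }

        public void Withdraw(decimal amount)
        {
            if (amount > Balance)
                throw new InvalidOperationException("Insufficient funds");
            Balance -= amount;
        }
    }

    [TestFixture]
    public class AccountTests
    {
        [Test]
        public void Withdraw_WithSufficientFunds_ReducesBalance()
        {
            // Build up the state you need (valid in this case)
            var account = new Account();
            account.Deposit(100m);

            // Perform the operation you're testing
            account.Withdraw(40m);

            // Verify that the result is correct
            Assert.AreEqual(60m, account.Balance);
        }

        // The invalid state gets its own test.
        [Test]
        [ExpectedException(typeof(InvalidOperationException))]
        public void Withdraw_WithInsufficientFunds_Throws()
        {
            new Account().Withdraw(40m);
        }
    }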

Unit testing also introduces a whole new world of curious little objects such as mocks and stubs. I’ve been using Rhino.Mocks as my mocking framework for a few weeks now. Mocking is a subject that deserves a post of its own (or maybe even more than one). My xUnit Test Patterns book arrived yesterday, so you can expect some test posts in the future :) .

Integration testing

I think integration testing can be summarized in the following line:
Testing of (sub)systems that interact (or integrate) with expensive or external resources

Thus, integration testing aims to test the combination of different software modules. Some examples:
- Test CRUD operations on your datastore
- Test import and export of data (system talks to the file system)
- Test systems that connect to webservices
- Test the integration of two modules with valid dependencies
- …

The typical example for demonstrating an integration test is the repository example.

These tests are a lot more expensive than unit tests (since they connect to external resources, or build expensive objects), but you should still try to keep them fast. In my opinion, your integration tests should run at least once a day.
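For the repository example, such a test might look like this (a sketch; ProductRepository, Product and the connection string are hypothetical, and the repository is assumed to talk to a real test database):

    using NUnit.Framework;

    [TestFixture]
    public class ProductRepositoryIntegrationTests
    {
        private ProductRepository repository;

        [SetUp]
        public void Setup()
        {
            // Expensive setup against a real (test) database is exactly
            // what makes this an integration test rather than a unit test.
            repository = new ProductRepository(
                "Server=testdb;Database=ShopTests;Integrated Security=SSPI");
        }

        [Test]
        public void Save_ThenGetById_RoundTripsTheProduct()
        {
            var product = new Product { Id = 1, Name = "Keyboard" };

            repository.Save(product);
            Product loaded = repository.GetById(1);

            Assert.AreEqual("Keyboard", loaded.Name);
        }
    }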

Acceptance testing

Acceptance tests come in two flavours: user acceptance tests and automated acceptance tests.

If you’ve got dedicated users testing the system after each iteration, you’re doing user acceptance testing, or better said, your users are doing acceptance tests.
It’s a common best practice to deploy an application to a user acceptance environment several times during the development process. This has several advantages:
- Bugs are discovered sooner. And thus are easier to fix.
- Misconceptions in analysis are discovered sooner (since you’ve got user feedback)
- Usability is tested sooner
- User adaptiveness can be estimated better
- Deployment problems are discovered and solved before going to a production environment

Automated acceptance tests are just like user acceptance tests, only automated. They also perform user interface actions, like clicking buttons and filling in data in forms. These tests go through the whole cycle, just like a normal user would: they create a new product (for example) by clicking the “New product” menu item, they fill in the data, and finally they click the save button. You should also create acceptance tests that fail when required information is missing, and assert that you’re showing an error message. Let’s say it’s a bit weird the first time you do this, but it has great advantages.
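For illustration, with a browser automation library such as WatiN, such a test might look roughly like this (a sketch from memory; treat the exact API, the URL and all element names as assumptions):

    using NUnit.Framework;
    using WatiN.Core;

    [TestFixture]
    public class NewProductAcceptanceTests
    {
        [Test]
        public void SavingAProductWithoutAName_ShowsAnErrorMessage()
        {
            // Drive the real user interface, just like a user would.
            using (var browser = new IE("http://localhost/shop/products/new"))
            {
                browser.TextField(Find.ByName("productName")).TypeText("");
                browser.Button(Find.ByValue("Save")).Click();

                // Required information is missing, so we expect an error message.
                Assert.IsTrue(browser.ContainsText("Product name is required"));
            }
        }
    }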

You miss a few of the advantages you get when your users test, but there are ways to minimize that. For starters, let your users write your acceptance tests (not the code, but their intent), and try to make them cover each scenario. Have them review the tests whenever they want to change functionality. Demo the application after each iteration, so they can give feedback about usability.

Smoke testing

Smoke testing can be defined as a set of tests that need to pass in order to include new or modified functionality into the entire system.
To give you a specific example, let’s consider an ordering story. Assume we’ve just added functionality to cancel an order.
The corresponding smoke tests for committing this functionality could be:
- Can I still add a new order?
- Can I still update an order?
- Can I confirm and update a canceled order (these should fail)?
- Can I still delete an order, unless canceled?
- Can I still search for orders?

You’re not testing the whole application; you’re testing only the functionality that is connected to the functionality that was built or repaired.

If you’re doing continuous integration (and you should), this is actually covered by the build that starts after each commit, so smoke tests aren’t specifically set up (at least not in my experience).

I’ve come across the use of smoke tests in legacy applications that don’t even have automated tests. No automated tests means no quality assurance. That’s why developers had to run a set of manual smoke tests before releasing the application. They stored a list of test cases in an Excel file, containing a few steps to verify whenever a subsystem was changed. These were high-level tests; they didn’t test complex functionality, but they had to be run, or you weren’t allowed to release the application for user testing.

Regression testing

Regression testing is the term used for rerunning old tests to check whether any functionality was broken by added or modified functionality. The ideal way of doing this is rerunning the entire test suite (unit tests as well as integration tests) after each change to the system.
Sometimes changes come in too fast to keep this up, but it’s important to at least run all your unit tests after each change (and if even that seems to take a lot of time, you should try to make your tests faster).
We don’t really talk about regression testing explicitly; it’s something that’s invisible when you’re doing automated testing, and especially when doing continuous integration :) . If you’ve got users testing, it’s common for them to re-execute all the tests they did on the previous release, and finally test the new/added functionality, so the whole system gets tested. In that case, it’s more common to explicitly talk about regression tests.

Performance testing

There are several types of performance testing. I’m covering the most common ones with a brief description of their goals.

Load testing
How will my application behave under a load of 50 users performing 5 transactions per minute?
Load testing is applied to systems that have a specified performance requirement. If performance is an important requirement, these tests should run at least once a day, so the impact of changes on performance is noticed immediately.

Stress testing
How much load can my application carry?
This type of testing is usually applied to find out at what load the application will crash. It’s good to know the maximum load your app can carry. If it’s very close to the performance requirement set by the customer, it’s time to do some serious profiling :) .

Endurance testing
Will my application still be able to carry 50 users after running for x hours/days/…?
Endurance tests are used to evaluate whether your application is able to run under a normal workload (users and transactions) for a prolonged time.

Wrapup

This post turned out to be longer than I thought, but I think it gives you a nice overview of what each test type intends to do. If I forgot anything, feel free to add it in the comments!


Testing websites in different browsers - 02/16/09

It’s a pain when your web application needs to look the same in several browsers. And I’m not really talking about the pain of supporting IE7 and Firefox; I’m talking about serious headaches when you need to support IE6.

I’m referring to CSS hell. Actually, it’s not CSS that’s hell (although you need to get used to it), it’s the damn browsers that don’t render as they should! And on top of that, ASP.NET also does a good job of screwing up your user interface sometimes (with all the hidden (not always so hidden) divs it renders).

I recently had to do some serious CSS work on a friend’s website, and I had a rough time.

How I get it to look the same in all browsers

1) Start out with Firefox
If everything looks as it should in Firefox, you’re further along than you’d think. Firefox is my default web browser, and it definitely should be yours too, especially for web development.
2) Follow with IE7 and IE8
You’ll probably have to fix some issues, but they won’t keep you up at night.
3) Finish off with IE6 hell
As a developer, you wouldn’t even think of it, but there are a loooooooot of people out there who are still using IE6. And I’m not only talking about individuals, but also about whole organizations. Depending on the target audience of your application, you can’t ignore IE6.

How I structure my CSS

Generally speaking, I use several CSS files within a web application. I separate the styles I use on my master page from the styles I use on my pages, just to keep the CSS focused.
Apart from that, I have a separate CSS file that contains IE fixes (mostly for IE6), and I only include it if the client is running IE6:

<!--[if IE 6]>
   <link rel="stylesheet" type="text/css" href="CSS/ie6Fix.css" media="screen" />
<![endif]-->

Apart from that, I have this rule: never put style tags in your pages.
Fix everything in your CSS files; don’t mix the two. You’ll avoid weird CSS bugs and keep a clean separation between your HTML and your CSS ;) .

How to test in different browsers?

I use Firefox and IE7 on my computer. You could install IE6 using Virtual PC, but I prefer another way. I recently came across an interesting tool that allows you to test a web application in the most important IE versions. It’s called IETester. It’s freeware and it allows you to test in IE5.5, IE6, IE7 and IE8! It’s still an alpha release, but it sure made my job easier. To be completely sure, I always try to test the web app on a PC actually running IE6 as well. If the application still contains a rendering bug, that means you’ve got a CSS bug, and that one is your responsibility. No excuses.

Debugging tools (Updated 17/02/09)

Prajwal Tuladhar just reminded me in his comment that I didn’t mention the tools I use for CSS debugging.
Here they are:
- FireBug for Firefox
- Web developer toolbar for Firefox
- Internet explorer developer toolbar for IE7 and IE8
