The use of partial classes - 02/2/09

Yesterday, I came across some code using partial classes. I couldn't help thinking a little more about the subject.

Partial classes were introduced in the .NET Framework 2.0, and here you can read what Microsoft has to say about them.

As we all know, we can define partial classes in C# to split a class across two (or more) files. These are the prerequisites and consequences (a minimal sketch follows the list):
- All parts use the partial keyword in the class definition
- All parts must be in the same namespace
- All parts should have the same visibility
- You can still only specify one base class
- You can still implement many interfaces
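
To make that concrete, here's a minimal sketch (the Customer class and the file names are made up for the example):

// Customer.Part1.cs
public partial class Customer
{
	public string Name { get; set; }
}

// Customer.Part2.cs
public partial class Customer
{
	public string GetDisplayName()
	{
		// members from the other part are available as if it were one class
		return "Customer: " + Name;
	}
}

The compiler merges both parts into a single Customer type, so there is no runtime difference with a normal class.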

Now, that's obvious… Why am I telling you this? Well, because I've seen a lot of developers misusing partial classes.

Microsoft states that splitting a class definition is desirable:
- When working on large projects, spreading a class over separate files allows multiple programmers to work on it simultaneously.
- When working with automatically generated source, code can be added to the class without having to recreate the source file. Visual Studio uses this approach when creating Windows Forms, Web Service wrapper code, and so on. You can create code that uses these classes without having to edit the file created by Visual Studio.

I only agree with the second usage. We all like the fact that we don't have our "Windows designer generated code" wandering around in our clean classes. That pile of disturbing code is moved to a partial designer class, and our classes remain clean, without the code bloat needed to do the visual positioning stuff. The same applies to ASP.NET development: we've got all our HTML in the .aspx file, while the code-behind in the .cs file delegates all the work (to the controller, in the MVC story).

If you're generating code, say with a software factory, you should also use partial classes. Imagine a class is generated and it works fine, but now you need some more functionality or business rules, and you add them to the class. If you later need to regenerate your code because of a major change, you would lose all your custom development if you hadn't put it in a partial class.

So, these are the cases where I agree and even strongly recommend the use of partial classes.

But let me get back to point number one in which MS states that splitting a class is desirable when working on large projects so multiple programmers can work on it simultaneously.
I do not agree at all.

I can't help remembering a codebase I worked on some months ago. It contained classes that were huge! They contained methods for all sorts of things: parsing XML files, saving and retrieving data, executing calculations on data, checking business rules on data, … I automatically feel sick when I think about it.
They had split the class into two partial classes for two reasons:
1) so multiple programmers could work on it (as MS encourages)
2) since the class was so big, it wasn't handy to work in (too much scrolling)

I hope you haven't thrown up by now and you're still with me.

If a class is so big that you feel the need to split it up just to keep it bearable to read, you've seriously been messing around with the SOLID principles, the DRY principle, and God knows what more… Thus, it's time to think and refactor.

If your class contains methods that do CRUD operations, parse XML files, execute calculations and check on business rules, it’s obvious several developers will be needing that class at the same time. But splitting the class into two (or more) partial classes is not a solution.

It's time to refactor the code and look for the places where you've violated the Single Responsibility Principle (to start with). That'll be a lot of places, if you have such spaghetti code!
The parsing should be handled by another class, the CRUD operations should be handled by a repository (again, read the blue bible and also the follow-up book), the calculations should be handled in their own classes, and you should consider the Specification pattern to perform business validation. Roughly along these lines:
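
(All the names below are mine, just to sketch where the responsibilities could land.)

// each former responsibility of the god class gets its own home
public interface IOrderXmlParser
{
	Order Parse(string xmlContent);
}

public interface IOrderRepository
{
	Order GetById(Guid id);
	void Save(Order order);
	void Delete(Order order);
}

public interface IDiscountCalculation
{
	decimal Calculate(Order order);
}

// business rules become specifications
public interface ISpecification<T>
{
	bool IsSatisfiedBy(T candidate);
}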

If you’ve got clean code, that follows the SOLID principles, you won’t feel the need to use partial classes.

I guess I have a set of rules to check before you decide to make use of partial classes:
1) If you need to be working with several developers on one class and you're not working on the same functionality, you should check whether you haven't violated SRP (and I'm sure you have, since a class should only have one reason to change, remember?).
2) If you're sure you haven't violated SRP, check it with a co-worker to make sure.
3) If SRP was not violated, check how OCP-, LSP- and DIP-friendly your code is (if needed, again with co-workers). If these rules are broken, you'll be making changes when you shouldn't be making them. You'll need to refactor your code to allow extensibility without changing existing classes.
4) If two developers need to access the same class to work on the same functionality, you've got some serious communication issues. It's important to know what your colleagues are doing, so even if you're not scrumming, you could gather at the coffee break and discuss it. Maybe then you'll decide to pair-program that piece of functionality.
5) Is it really that important to change that class right now? Can't you just wait until your co-worker is done? It's not going to take that long. Remember the class is only solving one responsibility ;)

If you still need to be accessing the same class, at the same moment, for another reason, consider shared-checkout, and merging your changes afterwards. Merging mechanisms are still improving day by day, so just give it a try before using partial classes.

I’m not against partial classes or anything. I only want to say you shouldn’t misuse them.

My thoughts about IRepository&lt;T&gt; - 01/22/09

As we all know, the blue bible states that we should use repository classes that provide access to our objects and encapsulate the actual CRUD operations performed on the datastore. And if you didn't know this, it's time to read Eric Evans's great Domain-Driven Design book.

There's been a lot of fuss lately about generic repositories. Is a generic repository the right thing to do? Its goal is to avoid the duplication of code for common operations, such as saving and deleting entities. NHibernate provides an API that makes it very easy to create a generic IRepository class that satisfies all the basic needs of any repository we would be implementing. You can see how this is done in Ayende's Rhino.Commons, and on top of that, Davy wrote a great post that explains the how and the why. Read about it here.

There is only one thing I would change about Davy's example. I would define the interface as IRepository&lt;EntityType, IdType&gt; instead of IRepository&lt;EntityType&gt;. You'll see in his example that the Get operation uses object as the type of the ID. Working with object will work perfectly, but I changed it in favour of type safety.
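
A minimal sketch of the interface I have in mind (my own variation, not a copy of Davy's code; DetachedCriteria lives in NHibernate.Criterion):

public interface IRepository<TEntity, TId>
{
	TEntity Get(TId id);	// a type-safe ID instead of object
	void Save(TEntity entity);
	void Delete(TEntity entity);
	TEntity FindOne(DetachedCriteria criteria);
}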

What does Evans say about repositories?

Let me quote him exactly (Extract from Chapter 6):

“For each type of object that needs global access, create an object that can provide the illusion of an in-memory collection of all objects of that type. Set up access through a well-known global interface. Provide methods to add and remove objects, which will encapsulate the actual insertion or removal of data in the data store. Provide methods that select objects based on some criteria and return fully instantiated objects or collections of objects whose attribute values meet the criteria, thereby encapsulating the actual storage and query technology. Provide REPOSITORIES only for AGGREGATE roots that actually need direct access. Keep the client focused on the model, delegating all object storage and access to the REPOSITORIES.”

Just take a second look at what I put in bold. Every type of object that needs global access should have its own repository. Which means that the generic repository isn't the way to go, right?
As most of you already know, I'm very passionate about enforcing the DRY principle. If we create a repository for each object we need to CRUD data for, then we'll have some code duplication, since the way we get, save and delete our objects with NHibernate is always the same. And that's why we introduced IRepository&lt;T&gt;.

The way I see it

Create a repository for each aggregate root you need access to, and expose its methods through an interface. I didn't say you shouldn't be using the IRepository, though. I'm just saying not to expose it to your clients. Let them deal only with your specific repositories.

Give me some code, please…

Define your IRepository interface, and implement it in a Repository class. Define an interface per specific repository, and implement it. It’s the specific repository’s implementation that will be accessing the generic Repository class.
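
Here's a rough sketch of such a generic Repository on top of NHibernate. Note that SessionProvider is a made-up helper standing in for however you manage your ISession (Rhino.Commons' UnitOfWork, or your own ISessionFactory.GetCurrentSession()):

using NHibernate;
using NHibernate.Criterion;

public class Repository<TEntity, TId> : IRepository<TEntity, TId>
{
	// SessionProvider is hypothetical: swap in your own session management
	protected ISession Session
	{
		get { return SessionProvider.CurrentSession; }
	}

	public TEntity Get(TId id)
	{
		return Session.Get<TEntity>(id);
	}

	public void Save(TEntity entity)
	{
		Session.SaveOrUpdate(entity);
	}

	public void Delete(TEntity entity)
	{
		Session.Delete(entity);
	}

	public TEntity FindOne(DetachedCriteria criteria)
	{
		// attach the detached criteria to the current session and run it
		return criteria.GetExecutableCriteria(Session).UniqueResult<TEntity>();
	}
}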

The client will only access IUserRepository directly.

public interface IUserRepository
{
	/// <summary>
	/// Retrieves a User by his e-mail address
	/// </summary>
	/// <param name="email">The User's e-mail address</param>
	/// <returns>The matching User, or null when no User has that e-mail address</returns>
	User GetByEmail(string email);
}

public class UserRepository : Repository<User, Guid>, IUserRepository
{
	/// <summary>
	/// Retrieves a User by his e-mail address
	/// </summary>
	/// <param name="email">The User's e-mail address</param>
	/// <returns>The matching User, or null when no User has that e-mail address</returns>
	public User GetByEmail(string email)
	{
		// DetachedCriteria and Expression come from the NHibernate.Criterion namespace
		var criteria = DetachedCriteria
			.For<User>()
			.Add(Expression.Eq("Email", email));

		return base.FindOne(criteria);
	}
}

The way clients will use the repository is obvious:

IUserRepository userRepository = new UserRepository();
userRepository.GetByEmail("me@noctovis.net");

As you can see, the code remains clean, readable and short, and you're not repeating any code. If you're doing basic Save and Delete operations, you'll have to write methods that just wrap the generic Repository's methods, but still, you won't have code duplication.
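
In fact, with the inheritance approach from the example above, that wrapping can even be free: a public member inherited from the generic Repository satisfies the interface, so exposing Save requires no extra code at all (a sketch):

public interface IUserRepository
{
	User GetByEmail(string email);

	// satisfied by the inherited Repository<User, Guid>.Save(User),
	// so no wrapper method is needed in UserRepository
	void Save(User user);
}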

Why I don’t want to expose IRepository<T> directly to clients

There are a few downsides to exposing IRepository<T> to your clients.

1) NHibernate's Criteria API will be exposed to all your clients. The Repository pattern's goal is to encapsulate the way you do your data access. If you offer IRepository&lt;T&gt; to your clients, they are going to have to construct Criteria objects to query what they want. So, again, you're forcing them to know how the Criteria API works, which conflicts with the goal of a repository.

2) You gain more flexibility. How? Well, if the client used IRepository&lt;T&gt; directly, he would have to deal with a lot more stuff.

Let me give you an example. Imagine it's a requirement not to actually delete your entities from the database when a user clicks 'Delete'. You only have to set the entity's Active bit to false and set the ChangedOn and ChangedBy properties. When using IRepository&lt;T&gt; directly, this responsibility shifts towards the client. That's not good. If you use a custom repository, you can have this handled by UserRepository (from my example above): have the UserRepository's Delete method adjust the properties you want and call the Save method. Your client doesn't have to know about this. In his little world, he deleted an entity, and that's it.
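
A sketch of what that could look like in UserRepository, assuming the User entity has Active, ChangedOn and ChangedBy properties (invented for this example, just like currentUserName):

public void Delete(User user)
{
	// soft delete: the row stays in the database
	user.Active = false;
	user.ChangedOn = DateTime.Now;
	user.ChangedBy = currentUserName;	// however you track the current user

	// persist the flags instead of actually deleting the entity
	base.Save(user);
}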

The consequence is, again, flexibility. If one day the product owner says it's unnecessary to keep deleted Product entities around, you can change your ProductRepository class to actually delete the products, without affecting any client code.

This is just one example; I could come up with more scenarios where this way of working would be interesting. Just imagine you need to validate entities before saving them to the datastore, or you need to track changes and store every change a user makes in a separate datastore…

Note that it's not my intention to make you put all the logic for validation, change tracking, or whatever other requirements you have, in your specific repository classes. If you need to do validation, have that handled by a Specification, and if you need to track changes, have that handled by a different class too. Conclusion: never forget to apply SRP!

How code generation breaks YAGNI - 01/19/09

My post-title was getting too long, so I’ll just finish it here:

… and what you should do about it!

Actually, this is a bit of a follow-up to my post about design principles. I felt I had something more to say about YAGNI in combination with software factories. Not very long ago, I talked about a software factory I laid my hands on.
Now, I've had the chance to mess around with it a little more.

Imagine starting a new project with a software factory. It generates a lot of code. Some of it useful, some of it not so much.
I'm asking two questions:
1) What code is actually useful? => It depends on the case
2) What code is not useful at all? => It depends on the case

According to YAGNI, we shouldn't be writing code that we're not going to use. But in this case, a lot of code we're not going to use has already been generated by the software factory. So the software factory breaks the YAGNI principle.
I don't find it abnormal that the code generated by an SF breaks the YAGNI principle, but that doesn't mean it's OK to leave the code that breaks YAGNI in place when using an SF!

Why does a SF break the principle?

A software factory solves a common problem. But as we all know, every problem domain has exceptions. A software factory will only be able to solve what's -generally- the same over all cases, so the solution offered by the SF will not be entirely correct in all cases.
Since we're not starting from scratch with the codebase, the developer can't avoid breaking the principle; it's the SF that breaks it. But as I said, that doesn't mean it's OK. Where normally it's the developer's responsibility not to break the principle, now it's the developer's responsibility to clean up the mess the SF made.

Time to refactor the software factory?

If the SF is generating code that breaks YAGNI, doesn't that mean it's time to refactor? Well, yes and no. The SF is nothing more than an existing codebase, just like any other one. So it evolves with time, and like every project, it needs its refactoring love once in a while. We all know refactoring is a good thing.
During your refactoring time, you could take into account what code to add, but most importantly -since I'm talking about YAGNI here- what code to remove.
That's not an easy decision, since some code is useful in one project, but not in another. The only code you can delete without thinking about it twice is the code that's just -never- used.
The code that deserves some serious brainstorming, headaches and analysis is the code that is used in less than 80% of the cases.

The SF guys might be thinking: 80%??? Then you're taking away 20% of value? And I'll say: no, I'm not at all. I'm taking away code that is an added value for 20% of your projects, but an added misvalue (I had to give the kid a name) for the other 80% of the projects! I must say the developers that created this SF thought about YAGNI carefully. If only about 20% of the code generated by an SF is YAGNI code, I think we can say we've got ourselves a powerful thingie here.

But still, you've got that overhead: the 20% of code that's only used in special cases.
You can't delete it, yet it bothers the hell out of you. And more importantly, it breaks the YAGNI principle! It makes your codebase too big for nothing, and sometimes very confusing to work with.

Things I think should not be generated in the SF

- method overrides that only call the base implementation: override them when you need them, typing "modifier override MethodName" is not that hard, is it?
- tests that don't actually test anything but only prepare your mocks and variables: you could still have these partly handled by the SF, by creating a recipe for them

I’m sure I can add things, but these are the most obvious ones I’ve seen until now.

So what do we do about it?

Well, my suggestion would be to just leave it in there… But not until the end of time!
You have to pay off your technical debt, and in this case that means you have to remove all YAGNI code! Plan some time to do this after you've finished a user story. If that turns out to be too often and you're hardly removing any code, you could choose to do it after each iteration instead. But don't postpone it longer than that, or you'll end up not doing it at all, since it will take too much time to do it all in one go.

You'll think: but what if I need that code in another user story, or an upcoming iteration? That's where you're breaking the principle. No excuses. After you've finished a user story or an iteration and you haven't used the code, the chance that you will use it at all decreases a lot.
If you need it anyway in the end, just add it manually; it won't kill you, I promise.

Important note!

If for some reason you need to regenerate your code (after executing DSLs, for example), you should review all generated code for unused code again! And not after the iteration: in this case I think it's something you should do instantly, at the same moment you regenerate your code. Consider it part of regenerating your code. If you notice you're regenerating a lot of code in different places while your change is small, I would advise taking a look at your SF. It should offer you a way to regenerate parts of the code without affecting other ones.

How to identify YAGNI code?

ReSharper already makes it easy to see YAGNI code (it will look gray), such as:
- unused overrides
- unused private methods
- unused parameters
- redundant code

But that won't get you all the way through the cleanup. ReSharper only detects these redundancies locally (I hear this will become available solution-wide in a next version), and it also gets harder with unused types. There are other tools that help you out with those problems.

FxCop is a great tool offered by Microsoft to perform static code analysis on compiled assemblies. On the other hand -and I personally think this is the most powerful code analysis tool on the market- we've got NDepend. It analyzes your code and lets you examine dependencies, complexity, and a bunch of other stuff using code metrics, graphs, other visual representations and -last but certainly not least- CQL queries. NDepend already executes some CQL queries during the analysis, but afterwards you can execute even more specific CQL queries to analyze your codebase.

In our case, we’re looking for unused code. This CQL query is included in the analysis, so it’s veeeery easy! Just take a look at the results, and refactor them away.

For your information, here are the basic CQL queries NDepend uses for detection of dead code:

WARN IF Count > 0 IN SELECT TOP 10 TYPES WHERE
TypeCa == 0 AND     // Ca=0 -> No Afferent Coupling -> The type is not used in the context of this application.
!IsPublic AND       // Public types might be used by client applications of your assemblies.
!NameIs "Program"   // Generally, types named Program contain a Main() entry-point method and this condition avoid to consider such type as unused code.

WARN IF Count > 0 IN SELECT TOP 10 FIELDS WHERE
FieldCa == 0 AND  // Ca=0 -> No Afferent Coupling -> The field is not used in the context of this application.
!IsPublic AND     // Although not recommended, public fields might be used by client applications of your assemblies.
!IsLiteral AND    // The IL code never explicitely uses literal fields.
!IsEnumValue AND  // The IL code never explicitely uses enumeration value.
!NameIs "value__" // Field named 'value__' are relative to enumerations and the IL code never explicitely uses them.

WARN IF Count > 0 IN SELECT TOP 10 METHODS WHERE
MethodCa == 0 AND            // Ca=0 -> No Afferent Coupling -> The method is not used in the context of this application.
!IsPublic AND                // Public methods might be used by client applications of your assemblies.
!IsEntryPoint AND            // Main() method is not used by-design.
!IsExplicitInterfaceImpl AND // The IL code never explicitely calls explicit interface methods implementation.
!IsClassConstructor AND      // The IL code never explicitely calls class constructors.
!IsFinalizer                 // The IL code never explicitely calls finalizers.

If you haven't looked at NDepend yet, I strongly advise you to do so!

Then, you’ll have a non-YAGNI codebase again.

Design principles - 01/15/09

Recently, I was asked which patterns and principles I would use in an OO-project.

As I started summing them up, I noticed the person who asked getting a StackOverflowException :) . Obviously, I had a lot to tell about the subject…
I’ll add a description (I’ll try to keep it short) to each one of them, but if you don’t know the meaning, just read the papers, blogs, or books. I’ll link to them (also check the titles, they may be links).

Feel free to add anything I forgot!

 

Single Responsibility principle (The S in SOLID principles)

This principle states that an object should only have one responsibility. Why? So a class only has one reason to change. Why? Because when a class has several responsibilities, they become coupled. So? If you change one responsibility of the class, it could have consequences for the other responsibilities, and you'd have to retest all of them.
In the beginning it's a bit confusing to recognize a responsibility. Uncle Bob says a responsibility within SRP can be defined as a reason to change. That should get you started!
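
A tiny made-up example: this class has two reasons to change (the report's content and the way it's persisted), so SRP tells us to split it:

// before: two reasons to change in one class
public class ReportService
{
	public string BuildReport() { return "..."; }	// changes when the content changes
	public void SaveToDisk(string report) { }	// changes when the storage changes
}

// after: one reason to change each
public class ReportBuilder
{
	public string BuildReport() { return "..."; }
}

public class ReportWriter
{
	public void SaveToDisk(string report) { }
}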

Open Closed principle (The O in SOLID principles)

The OCP states that software entities should be open for extension, but closed for modification. What? You should be able to extend a software entity without changing it. Why? If you don't change it, you won't break it; you will only have to test what you extended. How? Abstract away the functionality that could be implemented in different ways. If you have calculations, create an interface ICalculation, and add an implementation per calculation type. The consequence is that your calculation is closed for modification (a calculation calculates something and that's it) but open for extension (if you have an addition but also need a subtraction, just add a class that implements ICalculation and you're done). This is also a perfect example of the Strategy pattern.
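
In code, the calculation example looks something like this (a minimal sketch):

public interface ICalculation
{
	double Calculate(double left, double right);
}

public class Addition : ICalculation
{
	public double Calculate(double left, double right) { return left + right; }
}

public class Subtraction : ICalculation
{
	public double Calculate(double left, double right) { return left - right; }
}

// needing a multiplication later means adding a class, not modifying an existing one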

Liskov Substitution principle (The L in SOLID principles)

This principle states that if you have a base class Base and two subclasses SubC and SubD, you should always be referring to them as Base and not as their specific implementations SubC and SubD. What? Let's get back to the calculation example. If you have an AdditionCalculation and a SubtractionCalculation, you refer to both of them as calculations. Why? Well, imagine that apart from the addition and subtraction calculation, you create a multiplication, a division, an exponent, … The code where you refer to addition and subtraction specifically has to be modified to also know how to handle multiplications and divisions… Consequence => your code explodes into unreliable, unmaintainable, unreadable spaghetti… If you had referred to addition and subtraction as a calculation in the first place, you would never have needed to make a modification.

Interface Segregation principle (The I in SOLID principles)

The ISP states that a client should not be forced to implement members it won't use. I know the example is a very common one for this principle, but since I've been personally confronted with it, I'm going to use it as clarification. Just take a look at it, and you'll get the point :D
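
The gist of it, in a made-up sketch of my own: instead of one fat interface that forces a read-only client to implement Save and Delete anyway, split it into role interfaces:

// fat: every implementer is forced to deal with all members
public interface IFatRepository<T>
{
	T Get(Guid id);
	void Save(T entity);
	void Delete(T entity);
}

// segregated: clients depend only on what they actually use
public interface IReader<T>
{
	T Get(Guid id);
}

public interface IWriter<T>
{
	void Save(T entity);
	void Delete(T entity);
}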

Dependency Inversion principle (The D in SOLID principles)

This principle states that:
a) High-level modules should not depend on low-level modules; both should depend on abstractions.
b) Abstractions should not depend upon details, but details upon abstractions.

That's a mouthful.
Just imagine calculating a wage. Here in Belgium, we deduct RSZ (social security) and taxes. Of course this is oversimplified for the sake of the example. The RSZ is always 13.07% of the gross salary, but the base it's applied to depends on your statute:
- for laborers, the percentage is applied to 108% of the gross salary
- for employees, it's applied to 100% of the gross salary.
The WageCalculator shouldn't be making this distinction. Why? Well, reread SRP and OCP if you don't know why… How? Well, if you just pass an IRszCalculator parameter to the WageCalculator class, you can call its Calculate method without worrying about the implementation, as in the sketch below. You can imagine that calculating a wage is a bit more complicated than this, and eventually you can end up with quite a few parameters, including IRszCalculator. That's where Dependency Injection comes into the picture.
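
Here's the wage example in code (the class names are mine, and the calculation is of course oversimplified):

public interface IRszCalculator
{
	decimal Calculate(decimal grossSalary);
}

public class LaborerRszCalculator : IRszCalculator
{
	public decimal Calculate(decimal grossSalary)
	{
		// for laborers, the 13.07% is applied to 108% of the gross salary
		return grossSalary * 1.08m * 0.1307m;
	}
}

public class EmployeeRszCalculator : IRszCalculator
{
	public decimal Calculate(decimal grossSalary)
	{
		// for employees, it's applied to 100% of the gross salary
		return grossSalary * 0.1307m;
	}
}

public class WageCalculator
{
	// the high-level module depends on the abstraction, not on a statute
	public decimal CalculateNetWage(decimal grossSalary, IRszCalculator rszCalculator)
	{
		decimal rsz = rszCalculator.Calculate(grossSalary);
		return grossSalary - rsz;	// taxes left out for the example
	}
}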
What? The client calling the Calculate method of the WageCalculator class is now responsible for passing the correct IRszCalculator to the method, and when you've got several of these dependencies, the burden on the client grows. This can be solved with Dependency Injection. How? Several containers exist that resolve the dependencies for you, such as Windsor, Unity, StructureMap, Spring.NET… How you do that deserves its own post, but Davy already covered that one for me:
- Intro to Dependency Injection
- Intro to DI with Windsor

Chris Brandsma also gives a great overview of all DI-containers out there. Check it out.

DRY principle

DRY stands for Don't Repeat Yourself. The goal of this principle is to avoid repetitive code. In other words: copy and paste is forbidden. Why? I'll just tell you a story.
A while ago, I was working on an existing codebase. I got a work item assigned that told me to fix an issue regarding the processing of incoming data. OK, no problemo! I fixed it, tested it, and it worked. I released a new version and got an angry e-mail: this issue isn't fixed, the processing bug is still there! I ran the processing method again to check what was wrong. Nothing was wrong! OK, let's try again. Eventually, I called a business analyst to my side, to show that it DID work. We went through the steps to trigger the processing, and it worked. How can this be possible??? It doesn't work for me!!! (Note the frustration and anger in the exclamation marks.) Finally I said: I don't have magic hands, I promise. Do it yourself, and you'll see it works too. He started the application, and started the processing. BA: You see? There it is! It doesn't work! Me: Could you please repeat what you just did? The BA repeated what he did, and again, there was the error. The way I triggered the processing and the way he triggered it were different… Normally that shouldn't matter, but it was obvious that in this case, it did. I went back to my computer, opened the codebase, and confirmed what I already knew: the processing method had been copied.
The DRY principle shouldn't only be applied to code. Apply it in your DB, tests, documentation, …, thus in your whole system.
I rest my case.

YAGNI principle

I’ll tell you about this one with another story. Some time ago, I had the chance to work out a project on my own. I analyzed the domain and built up a model. The data I had to create, needed to satisfy a couple of rules, so I wrote them out in specifications. I created the data I needed, and compared it to my spec as a validation (Read more about using specifications for validation in the blue bible). I added tests that checked if my specs worked. All was good.
The tests passed, I had great code coverage, and even the BA didn't find functional bugs. Great! I had only one question: how healthy (or unhealthy) was my codebase? NDepend to the rescue. I performed an analysis, and everything looked OK. But then the result of a particular CQL query got my attention: potentially unused methods = 2. How could that be?
After digging into my codebase (which you can do with a double click from NDepend!), I found out that 2 specs I wrote in the beginning weren't being used. They turned out to be unnecessary; the problem I thought they would be solving was handled in another way. They existed without a reason. So? Well, in this case I'm talking about two small spec classes, but if you don't keep an eye on this, it can result in code bloat. I also had tests that covered the specs, so that's -again- unnecessary code to maintain.
This is what YAGNI aims to prevent. Don't write code you -think- you are going to need in the future. Only write code you need right now. If in the end you need the other code too, so be it. If not, you will not have wasted your time writing it, nor maintaining it.

Law of Demeter

I’ll try to demonstrate this one with a bit of dummy code.
This code violates LoD:

public class Class1
{
	Class2 class2Instance = new Class2(); 

	public Class1()
	{
		// violates the Law of Demeter, since Class1 needs to access Class3 and does that through Class2
		class2Instance.Class3.DoSomething();
	}
} 

public class Class2
{
	// we also need some serious information hiding here!
	public Class3 Class3 { get; set; }

	public Class2()
	{
		Class3 = new Class3();
	}
}

public class Class3
{
	public void DoSomething()
	{}
}

This is wrong because Class1 needs to know about the internal structure of Class2 to be able to do something.

This code fixes the problem above:

public class Class1
{
	Class2 class2Instance = new Class2(); 

	public Class1()
	{
		// doesn't violate Law of Demeter since Class2 propagates Class1's request to Class3
		class2Instance.DoSomething();
	}
} 

public class Class2
{
	private Class3 class3Instance = new Class3();

	public void DoSomething()
	{
		// Class2 forwards the request to Class3, hiding its internal structure
		class3Instance.DoSomething();
	}
}

public class Class3
{
	public void DoSomething()
	{ }
}

Another solution could be:

public class Class1
{
	Class2 class2Instance = new Class2();
	Class3 class3Instance = new Class3(); 

	public Class1()
	{
		// doesn't violate the Law of Demeter, since Class1 now calls Class3 directly instead of through Class2
		class3Instance.DoSomething();
	}
} 

public class Class2
{
} 

public class Class3
{
	public void DoSomething()
	{ }
}

You should decide what the best option is depending on the situation, of course.

Roundup

This turned out to be a longer post than I thought, sorry! But hey, I'm covering a lot of stuff here. I think I covered the most important principles for designing and developing an object-oriented application. There's a lot more information about these principles out there. My examples were pretty brief and without lots of concrete detail. Just Google them, and you'll find everything you need to know. If that's not enough, read Agile Principles, Patterns and Practices in C#. I haven't read it yet, but plan to in the near future (it sure has some great reviews out there). I know it covers the topics I wrote about in this post; just take a look at the table of contents.

Fluent interfaces - 01/3/09

I'll be talking about Fluent NHibernate in the near future. Fluent NHibernate is a great new open source framework that allows us to create our NHibernate mappings in code (meaning: no XML!) using a fluent API.

More about Fluent NHibernate later. The goal of this post is to bring some light into the dark around fluent interfaces, since Fluent NHibernate uses this style.

The term has been around for a while; just notice the date that article was written… But to be honest, I didn't actually know it until I ran into Fluent NHibernate myself!

Fluent interfaces… Come again?

Fluent interfaces are not a very common style, but they sure are very nice to work with.

Imagine baking a pizza (that’s what I’m eating right now :p ):

Pizza myFluentPizza = Pizza.Create()
                           .OfSize(PizzaSize.Medium)
                           .WithBorderFilling(PizzaFilling.Cheese)
                           .WithTopping("Tuna")
                           .WithTopping("Onions")
                           .WithTopping("Mozarella")
                           .Bake();

instead of how we do it now:

Pizza pizza = new Pizza();
pizza.Size  = PizzaSize.Medium;
pizza.BorderFilling = PizzaFilling.Cheese;
Topping tunaTopping = new Topping("Tuna");
pizza.AddTopping(tunaTopping);
Topping onionsTopping = new Topping("Onions");
pizza.AddTopping(onionsTopping);
Topping mozarellaTopping = new Topping("Mozarella");
pizza.AddTopping(mozarellaTopping);
pizza.Bake();

Just notice the complexity of the code we normally write to achieve this. It makes you create all the toppings yourself, while with the fluent interface, the Pizza worries about creating the topping; you just worry (if that's even worrying at all) about adding it to your pizza, thus wiring it up.

There’s a lot less noise in the code, just with a very quick look you immediately know what kind of pizza you’re baking here.

How did you…?

Well actually, I'm sure you'll think "d'oh" when you see it; it's quite simple:

public class Pizza
{
   public PizzaSize Size { get; private set; }
   public PizzaFilling Filling { get; private set; }
   public IList<Topping> Toppings { get; private set; }

   public static Pizza Create()
   {
      Pizza pizza = new Pizza { Toppings = new List<Topping>() };
      return pizza;
   }

   public Pizza OfSize(PizzaSize size)
   {
      Size = size;
      return this;
   }

   public Pizza WithBorderFilling(PizzaFilling filling)
   {
      Filling = filling;
      return this;
   }

   public Pizza WithTopping(string topping)
   {
      Toppings.Add(new Topping(topping));
      return this;
   }

   public Pizza Bake()
   {
      // bake the pizza somehow
      return this;
   }
}

I’ll spare you the enums and the Topping class, since it just contains a constructor accepting a string parameter…

Notice that the Bake method is meant to be the end of the chain: once a pizza is baked, it's a bit useless to add more toppings or adjust the border filling, don't you think? As written above, Bake still returns the Pizza instance (so you can assign the end result), which means nothing technically stops a caller from chaining on after it.
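
If you want to enforce the end of the chain at compile time, one option (not implemented above) is to have Bake return a separate type that simply doesn't expose the builder methods. BakedPizza is hypothetical, just to show the idea:

// the chain can't continue after Bake(), because BakedPizza
// has no OfSize/WithTopping/... members
public BakedPizza Bake()
{
   // bake the pizza somehow
   return new BakedPizza(this);
}

public class BakedPizza
{
   private readonly Pizza pizza;

   public BakedPizza(Pizza pizza)
   {
      this.pizza = pizza;
   }

   // only expose what makes sense for a baked pizza (Eat(), Slice(), ...)
}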

Simpler than you thought, isn’t it?

Method chaining

I mentioned that the Bake() method above was the end of the chain for this example. End of what chain? Well, when implementing a fluent interface, you'll notice you're using a lot of method chaining.

First let me give you an example of method chaining we can find in the .NET framework to clarify what it actually is:

StringBuilder builder = new StringBuilder();
builder.Append("Did you ")
       .Append("like my ")
       .AppendLine("pizza?");

string test = "Hello, I'm here to clarify method chaining";
string chained = test.Substring(1, 7)
                     .ToUpper()
                     .Trim(); // each call returns a new string, which is what makes the chaining possible

Don’t tell me you’ve never used this syntax before? It makes your code very straightforward and easy to read.

How this works is very simple:
Just make your method return the object you're calling the method on, and there you go! Doing the same for properties means giving up automatic setters in favour of With...-style methods, but it's still very straightforward.

Roundup

Fluent interfaces introduce a new style of coding, one that's more readable and to-the-point. It does require some more coding, but in some cases, offering a fluent interface is simply worth the extra development time.

Expect some posts about Fluent NHibernate in the near future!
