Jon Kruger
TDD

TDD Boot Camp upcoming events – coming to a city near you

If you read my blog at all, you know that I’m passionate about test-driven development. A few months ago, I announced that I was developing a TDD training course called TDD Boot Camp that will teach you everything that you need to know to do test-driven development on real world .NET projects.

I currently have two upcoming events scheduled: July 13-15 in Columbus and August 18-20 in Detroit. You can find out more details on what we’ll cover and how to register at the TDD Boot Camp website. If you can’t make any of the events, or if you would like me to come to your company, send me an email and we can work something out so that I can come to you.

If the whole idea of test-driven development is new to you, or you’re not sure why you should care, come to the Path to Agility conference in Columbus on May 27, where I’m doing a talk on Test-Driven Development In Action. I’ll show you how TDD works and how it can help you deliver IT projects with higher quality, lower maintenance costs, and more peace of mind. Actually, you should go to the conference either way, because there is an excellent lineup of sessions for anyone involved in IT.

April 28, 2010 by Jon Kruger
development process

Dividing and conquering

When we work on software development projects, we break the project up into features. But what do you do when that feature is assigned to you?

Something that has helped me is to break the feature up into as many small tasks as possible. This helps me deliver working software incrementally and do it in an orderly fashion.

Let me put this another way. Let’s say that you want to make a meal for dinner, and you have to go to the grocery store to buy the ingredients. You could just go to the store and try to remember what you need to buy. I’ve done this before. I need to get 5 things, and I’m constantly repeating over and over in my head what I need to buy. It’s mentally tiring! Instead, I’ve learned to make a list of things that I need to buy, and I bring a pen along so that I can cross things off after I’ve put them in my cart.

The same idea applies to software development, and there are many benefits.

Figuring out everything that needs to be done

Before I start working on a feature, I stop and think about everything I’m going to do to complete the feature. I try to break this down into the smallest units of work that produce some working functionality. So I’m not writing down things like “Put Save button on screen”; I’m writing things like “Save a new order”. Now that I have this list, I can work through it in an orderly fashion, and after several days, when I’m starting to get tired of the feature, I don’t run the risk of missing something just because my brain is tired, bored, or frustrated.

Knowing where to look when you break something

If you add two lines of code and something breaks, I think you know where to look! If you spend a day writing code and then realize that something is broken, it’s a lot harder to figure out what you did to break something.

Focusing on one thing at a time

In my grocery store example, trying to remember my list while shopping was difficult because mentally I was trying to do several things at once. If you are working on a single task like “save a new order”, you’re not thinking about loading orders or saving existing orders, you only have to think about one task. Mentally, this is much easier to do (and less stressful).

Knowing how far along you are

It’s fairly easy to get a rough estimate of how far along you are on a feature by looking at your list of tasks and seeing how many you have completed.

Feeling good

It feels good to cross things off on a list. At the end of the day, you can look back and see everything that you got accomplished. You may not have completed the entire feature, but you can have the good feeling that comes with crossing things off. I’ve seen people get quite flustered after several days on the same feature because they don’t feel like they’ve accomplished anything.

How I do this

You can use an online tool, a text file, pen and paper, or whatever else you like to keep track of tasks. I use Agile Zen to keep track of my tasks on a feature (Agile Zen can also keep track of your features in an online Kanban board). I used to use pen and paper, but my desk started getting cluttered with papers. Do whatever works best for you.

It also helps to have a source control system that helps you out. I like doing branch-per-feature, where you create a branch in your source control system for each feature that you work on, and then you merge that branch back into the main development branch once you complete the feature. In this case, you’re the only one working on your branch, so checking in is easier because you don’t have to do the usual check in dance. Now you can check in as often as you like with very little effort, and if you set up your CI server with a build that points to your branch, it will run the tests for you every time. This way, if you go down the wrong path with your feature or realize that you broke something at some point, you will have lots of checkpoints to roll back to.

In order to do branch-per-feature, you will need a source control system that makes branching easy. The one that I recommend (which I use) is Git. If you’ve heard about Git and haven’t learned how to use it yet, go watch this video. Mercurial is another one that is similar to Git. I’ve done branch-per-feature with Subversion, but every time you create a branch or check in, it has to make calls to the server, which means that it’s slower. Git and Mercurial do everything locally on your machine (so it’s fast), and then you push your commits and branches up to the server when you decide to.

If you’re stuck using a source control system that doesn’t make branch-per-feature easy, you can use Git locally for feature branches and then just check into your team’s source control when you complete a feature. I’m doing this on my current project and it’s not that hard to set up.

Regardless of how you do it, I think breaking features up into small tasks will reduce your stress level and make it easier to develop working software incrementally. Give it a try and see what you think!

April 5, 2010 by Jon Kruger
.NET, Rhino Mocks, TDD, unit testing

How to use Rhino Mocks – documented through tests

I wanted to come up with a way to show people how to use Rhino Mocks (other than telling them to read the documentation). What better way to do this than by showing you how it works through a bunch of simple unit tests that document how Rhino Mocks works?

So that’s what I did. You can view the code here, or if you want to download the whole project and run the tests, you can get the whole thing here.

(If you’re interested in Moq, how it compares to Rhino Mocks, and to see Moq documented through tests, check out Steve Horn’s post.)

UPDATE: Fixed the test that was incorrectly showing how to use Expect() and VerifyAllExpectations(). Thanks to Sharon for pointing this out.
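For readers who haven’t seen the pattern, here is a rough sketch of what Expect()/VerifyAllExpectations() usage looks like in Rhino Mocks’ AAA syntax. The IEmailSender interface is my own illustrative example, not from the linked project:

```csharp
using System;
using NUnit.Framework;
using Rhino.Mocks;

// Illustrative interface, not from the linked code:
public interface IEmailSender
{
	void Send(string address, string message);
}

[TestFixture]
public class Expect_and_VerifyAllExpectations_example
{
	[Test]
	public void VerifyAllExpectations_fails_the_test_if_an_expected_call_never_happens()
	{
		// Arrange: record the expectation.
		var sender = MockRepository.GenerateMock<IEmailSender>();
		sender.Expect(s => s.Send("jon@example.com", "hi"));

		// Act: exercise the code that should make the call.
		sender.Send("jon@example.com", "hi");

		// Assert: throws ExpectationViolationException if Send() was never called.
		sender.VerifyAllExpectations();
	}
}
```

The key difference from Stub() is that a stubbed call never fails the test if it isn’t made, while VerifyAllExpectations() checks every Expect() you recorded.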

March 12, 2010 by Jon Kruger
Uncategorized

Improving your validation code — a refactoring exercise

Today we’re going to talk about validation. Most people have some concept where they validate an entity object before it is saved to the database. There are many ways to implement this, and I’ve finally found my favorite way of writing validation code. But what I think is really interesting is the thought process behind the many years of refactoring that led me to how I do validation today. A lot of this has to do with good coding practices that I’ve picked up over time and little tricks that allow me to write better code. This is a really good example of how I’ve learned to write better code, so I thought I’d walk you through it (and maybe you’ll like how I do validation too).

In the past, I would’ve created one method called something like Validate() and put all of the validation rules for that entity inside that method. It ended up looking something like this.


public class Order
{
	public Customer Customer { get; set; }
	public IList<Product> Products { get; set; }
	public string State { get; set; }
	public decimal Tax { get; set; }
	public decimal ShippingCharges { get; set; }
	public decimal Total { get; set; }
	
	public ValidationErrorsCollection Validate()
	{
		var errors = new ValidationErrorsCollection();
		if (Customer == null)
			errors.Add("Customer is required.");
		if (Products.Count == 0)
			errors.Add("You must have at least one product.");
		if (State == "OH")
		{
			if (Tax == 0)
				errors.Add("You must charge tax in Ohio.");
		}
		else
		{	
			if (ShippingCharges > 0)
				errors.Add("You cannot have free shipping outside of Ohio.");
		}
		
		return errors;
	}
}

The problem with this approach is that it’s a pain to read the Validate() method. If you have a large object, this method starts getting really cluttered really fast and you have all kinds of crazy if statements floating around that make things hard to figure out. This method may be called Validate(), but it’s not telling much about how the object is going to be validated.

So how can we make our validation classes more readable and descriptive? First, I like to do simple validation using attributes: whether a field is required, checking for null, checking for min/max values, etc. I know that a certain percentage of the population despises attributes on entity objects because they feel like attributes clutter up the class. I like using attributes because they’re easy, they reduce duplication, and they describe a property and give me more information about it than just its type. On my project, I’m using NHibernate.Validator to give me these attributes, and I’ve also defined several new validation attributes of my own (just open NHibernate.Validator.dll in Reflector and see how the out-of-the-box ones are written, and you’ll be able to create your own attributes with no problems). Now my class looks more like this:


public class Order
{
	[Required("Customer")]
	public Customer Customer { get; set; }
	[AtLeastOneItemInList("You must have at least one product.")]
	public IList<Product> Products { get; set; }
	public string State { get; set; }
	public decimal Tax { get; set; }
	public decimal ShippingCharges { get; set; }
	public decimal Total { get; set; }
	
	public ValidationErrorsCollection Validate()
	{
		var errors = new ValidationErrorsCollection();
		if (State == "OH")
		{
			if (Tax == 0)
				errors.Add("You must charge tax in Ohio.");
		}
		else
		{	
			if (ShippingCharges > 0)
				errors.Add("You cannot have free shipping outside of Ohio.");
		}
		return errors;
	}
}

When I put these attributes on properties, I don’t write unit tests for that validation. If I was really concerned about whether or not I put an attribute on a property, I could spend 2 seconds going and actually checking to see if that attribute was on the property instead of spending 2 minutes writing a test. It’s just so easy to use an attribute that it’s hard to screw it up. I haven’t been burned by this yet. So already I’ve eliminated some validation code that was cluttering up my validation methods and I eliminated some tests that I would’ve otherwise written.

But my Validate() method still looks messy, and it’s doing a bunch of different validations. The method name sure isn’t telling me anything about the type of custom validation that is being done.

Let’s write some tests and see where our tests might lead us.


[TestFixture]
public class When_validating_whether_tax_is_charged
{
	[Test]
	public void Should_return_error_if_tax_is_0_and_state_is_Ohio()
	{
		var order = new Order {Tax = 0, State = "OH"};
		order.Validate().ShouldContain(Order.TaxValidationMessage);
	}

	[Test]
	public void Should_not_return_error_if_tax_is_greater_than_0_and_state_is_Ohio()
	{
		var order = new Order { Tax = 3, State = "OH" };
		order.Validate().ShouldNotContain(Order.TaxValidationMessage);
	}

	[Test]
	public void Should_not_return_error_if_tax_is_0_and_state_is_not_Ohio()
	{
		var order = new Order { Tax = 0, State = "MI" };
		order.Validate().ShouldNotContain(Order.TaxValidationMessage);
	}

	[Test]
	public void Should_not_return_error_if_tax_is_greater_than_0_and_state_is_not_Ohio()
	{
		var order = new Order { Tax = 3, State = "MI" };
		order.Validate().ShouldNotContain(Order.TaxValidationMessage);
	}
}

[TestFixture]
public class When_validating_whether_shipping_is_charged
{
	[Test]
	public void Should_return_error_if_shipping_is_0_and_state_is_not_Ohio()
	{
		var order = new Order {ShippingCharges = 0, State = "MI"};
		order.Validate().ShouldContain(Order.ShippingValidationMessage);
	}

	[Test]
	public void Should_not_return_error_if_shipping_is_0_and_state_is_Ohio()
	{
		var order = new Order { ShippingCharges = 0, State = "OH" };
		order.Validate().ShouldNotContain(Order.ShippingValidationMessage);
	}

	[Test]
	public void Should_not_return_error_if_shipping_is_greater_than_0_and_state_is_Ohio()
	{
		var order = new Order { ShippingCharges = 5, State = "OH" };
		order.Validate().ShouldNotContain(Order.ShippingValidationMessage);
	}

	[Test]
	public void Should_not_return_error_if_shipping_is_greater_than_0_and_state_is_not_Ohio()
	{
		var order = new Order { ShippingCharges = 5, State = "MI" };
		order.Validate().ShouldNotContain(Order.ShippingValidationMessage);
	}
}

These tests are testing all of the positive and negative possibilities of each validation rule. Notice that I’m not checking just that the object was valid, I’m testing for the presence of a specific error message. If you just try to test whether the object is valid or not, how do you really know if your code is working? If you test that the object should not be valid in a certain scenario and it comes back with some validation error, how would you know that it returned an error for the specific rule that you were testing unless you check for that validation message?

I could just leave the validation code in my implementation class as is. But I’m still not happy with the Validate() method because it has a bunch of different rules all thrown in one place, with the potential for even more to get added. If I just refactor the code in the method into smaller methods, it’ll read better. So now I have this:


public class Order
{
	public const string ShippingValidationMessage = "You cannot have free shipping outside of Ohio.";
	public const string TaxValidationMessage = "You must charge tax in Ohio.";

	public Customer Customer { get; set; }
	public IList<Product> Products { get; set; }
	public string State { get; set; }
	public decimal Tax { get; set; }
	public decimal ShippingCharges { get; set; }
	public decimal Total { get; set; }

	public ValidationErrorsCollection Validate()
	{
		var errors = new ValidationErrorsCollection();
		ValidateThatTaxIsChargedInOhio(errors);
		ValidateThatShippingIsChargedOnOrdersSentOutsideOfOhio(errors);
		return errors;
	}

	private void ValidateThatShippingIsChargedOnOrdersSentOutsideOfOhio(
            ValidationErrorsCollection errors)
	{
		if (State != "OH" && ShippingCharges == 0)
			errors.Add(ShippingValidationMessage);
	}

	private void ValidateThatTaxIsChargedInOhio(ValidationErrorsCollection errors)
	{
		if (State == "OH" && Tax == 0)
			errors.Add(TaxValidationMessage);
	}
}

That’s better. Now when you read my Validate() method, you have more details about what validation rules we are testing for. This is a more natural way of writing the code when you write your tests first because it just makes sense to create one method for each test class.

Then your boss comes to you with a new rule — an Ohio customer only gets free shipping on their first order. This is a little bit trickier to test because now I’m dealing with data outside of the object that is being validated. In order to test this, I’m going to have to call out to the database to see if this customer already has an order with free shipping. I don’t really care how that lookup is implemented in this example; I just know that I’m going to have some class that determines whether a customer has an existing order with free shipping.

One of the cardinal rules of writing unit tests is that I need to stub out external dependencies (like a database), and in order to do that, I need to use dependency injection and take those dependencies in as interface parameters in my constructor. But another rule is that I shouldn’t inject dependencies into entity objects. This means that I’m going to have to split the validation code out of my entity object. I’ll move it into a class called OrderValidator, and it’ll look like this:


public class OrderValidator : IValidator<Order>
{
	private readonly IGetOrdersForCustomerService _getOrdersForCustomerService;
	public const string ShippingValidationMessage = "You cannot have free shipping outside of Ohio.";
	public const string TaxValidationMessage = "You must charge tax in Ohio.";
	public const string CustomersDoNotHaveMoreThanOneOrderWithFreeShippingValidationMessage =
		"A customer cannot have more than one order with free shipping.";

	public OrderValidator(IGetOrdersForCustomerService getOrdersForCustomerService)
	{
		_getOrdersForCustomerService = getOrdersForCustomerService;
	}

	public ValidationErrorsCollection Validate(Order order)
	{
		var errors = new ValidationErrorsCollection();
		ValidateThatTaxIsChargedInOhio(order, errors);
		ValidateThatShippingIsChargedOnOrdersSentOutsideOfOhio(order, errors);
		ValidateThatCustomersDoNotHaveMoreThanOneOrderWithFreeShipping(order, errors);
		return errors;
	}

	private void ValidateThatCustomersDoNotHaveMoreThanOneOrderWithFreeShipping(Order order, ValidationErrorsCollection errors)
	{
		var ordersForCustomer = _getOrdersForCustomerService.GetOrdersForCustomer(order.Customer);
		if (order.IsNew)
			ordersForCustomer.Add(order);
		else
		{
			ordersForCustomer = ordersForCustomer.Where(o => o.Id != order.Id).ToList();
			ordersForCustomer.Add(order);
		}
		if (ordersForCustomer.Count(o => o.ShippingCharges == 0) > 1)
			errors.Add(CustomersDoNotHaveMoreThanOneOrderWithFreeShippingValidationMessage);
	}

	private void ValidateThatShippingIsChargedOnOrdersSentOutsideOfOhio(Order order, ValidationErrorsCollection errors)
	{
		if (order.State != "OH" && order.ShippingCharges == 0)
			errors.Add(ShippingValidationMessage);
	}

	private void ValidateThatTaxIsChargedInOhio(Order order, ValidationErrorsCollection errors)
	{
		if (order.State == "OH" && order.Tax == 0)
			errors.Add(TaxValidationMessage);
	}
}

Notice that the OrderValidator class implements IValidator<Order>. This interface is pretty simple and looks like this:


public interface IValidator<T>
{
    ValidationErrorsCollection Validate(T obj);
}

Now that the validation class has been moved outside of the entity object, I need to know which IValidator<T> classes I need to run when I want to validate an object of a certain type. No worries, I can just create a class that will register validator objects by type. When the application starts up, I’ll tell my registration class to go search the assemblies for classes that implement IValidator<T>. Then when it’s time to validate, I can ask this registration class for all IValidator<T> types that it found for the type of entity that I need to validate and have it do the validation.
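Here is a rough sketch of what such a registration class could look like. ValidatorRegistry and its members are hypothetical names of my own, and a real implementation would probably hand instantiation off to an IoC container rather than storing types directly:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

// Stand-ins repeated here so the sketch compiles on its own:
public class ValidationErrorsCollection : List<string> { }
public interface IValidator<T> { ValidationErrorsCollection Validate(T obj); }

// Hypothetical registration class; names are illustrative.
public static class ValidatorRegistry
{
	private static readonly Dictionary<Type, List<Type>> _validatorTypes =
		new Dictionary<Type, List<Type>>();

	// Call once at application startup for each assembly you want scanned.
	public static void RegisterValidatorsIn(Assembly assembly)
	{
		foreach (var type in assembly.GetTypes().Where(t => t.IsClass && !t.IsAbstract))
		{
			// Find every IValidator<T> the class implements.
			var validatorInterfaces = type.GetInterfaces()
				.Where(i => i.IsGenericType &&
				            i.GetGenericTypeDefinition() == typeof(IValidator<>));
			foreach (var validatorInterface in validatorInterfaces)
			{
				var validatedType = validatorInterface.GetGenericArguments()[0];
				if (!_validatorTypes.ContainsKey(validatedType))
					_validatorTypes[validatedType] = new List<Type>();
				_validatorTypes[validatedType].Add(type);
			}
		}
	}

	// Ask for all validator types found for a given entity type.
	public static IEnumerable<Type> GetValidatorTypesFor(Type entityType)
	{
		List<Type> found;
		return _validatorTypes.TryGetValue(entityType, out found)
			? (IEnumerable<Type>)found
			: Enumerable.Empty<Type>();
	}
}
```

At validation time you would resolve each returned type (through your container) and run its Validate() method, collecting the errors.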

I think we could take this one step further. Currently our OrderValidator class is doing three different validations. You could argue that this violates the Single Responsibility Principle because this class is doing three things. But you might also be able to argue that it doesn’t violate the Single Responsibility Principle because OrderValidator is only doing one type of thing. Does it really matter?

What if you got a new validation rule that says that the State property on the Order can only contain one of the lower 48 U.S. states (any state other than Alaska or Hawaii). We also want to add this rule to a bunch of other entity objects that only should use the lower 48 states for their State property. Ideally, I would like to write this validation rule once and use it for all of those objects.

In order to do this, I’m going to create an interface first:


public interface IHasLower48State
{
	string State { get; }
}

I’ll put this interface on the Order class and all of the other classes that have this rule. Now I’ll write my validation code (after I write my tests, of course!). The only problem is that it doesn’t really fit inside OrderValidator anymore, because I’m not necessarily validating an Order; I’m validating an IHasLower48State, which could be an Order, but could also be something else.

What I really need now is a class for this one validation rule. I’m going to give it an uber-descriptive name.


public class Validate_that_state_is_one_of_the_lower_48_states : IValidator<IHasLower48State>
{
	public const string Message = "State must be one of the lower 48 states.";
	
	public ValidationErrorsCollection Validate(IHasLower48State obj)
	{
		var errors = new ValidationErrorsCollection();
		if (obj.State == "AK" || obj.State == "HI")
			errors.Add(Message);
		return errors;
	}
}

Some of you are freaking out because I put underscores in the class name. I put underscores in test class names, and it just seemed natural to use a very descriptive English class name that describes exactly what validation is being performed here. If you don’t like the underscores, then call it something descriptive without using underscores.

Now I change my registration class so that when you ask for all of the validation classes for an entity, it also checks for validation classes for interfaces that the entity implements.
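That change might look something like this. Register, GetValidatorTypesFor, and _validatorTypes are hypothetical names for illustration; only the lookup differs from a plain by-type dictionary:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical fragment of the registration class; names are illustrative.
public static class ValidatorRegistry
{
	// Entity or interface type -> validator types registered against it.
	private static readonly Dictionary<Type, List<Type>> _validatorTypes =
		new Dictionary<Type, List<Type>>();

	public static void Register(Type validatedType, Type validatorType)
	{
		if (!_validatorTypes.ContainsKey(validatedType))
			_validatorTypes[validatedType] = new List<Type>();
		_validatorTypes[validatedType].Add(validatorType);
	}

	// Return validators registered for the entity's own type, plus any
	// registered against interfaces that the entity implements.
	public static IEnumerable<Type> GetValidatorTypesFor(Type entityType)
	{
		var typesToCheck = new List<Type> { entityType };
		typesToCheck.AddRange(entityType.GetInterfaces());

		foreach (var type in typesToCheck)
		{
			List<Type> found;
			if (_validatorTypes.TryGetValue(type, out found))
				foreach (var validatorType in found)
					yield return validatorType;
		}
	}
}
```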

What’s great about this is that if an entity object implements IHasLower48State, it will now pick up this validation rule for free. My registration class has auto-wired it for me, so I don’t have to configure anything. I get functionality for free with no extra work! I’m creating cross-cutting validation rules where I’m validating types of entities.

Conclusion

If you made it to this point, you’re a dedicated reader after making it through all of that (or you just skipped to the end). I wrote all of this not only to show how I do validation, but also to show you the thought process I go through and the hows and whys behind how I refactor things and find better ways to write code.

March 11, 2010 by Jon Kruger
TDD

A response to the SWE101 attendees who tried to solve the TDD problem without TDD

This weekend we did some live TDD at the Software Engineering 101 event and broadcast it over LiveMeeting, which was a lot of fun. However, some people have posted solutions (here and here – see first comment) on how they solved the Greed scoring problem without using TDD.

I think you missed the point. The point was to help you learn the TDD thought process and how to put TDD into practice. There are lots of benefits to TDD, and I’ll use these non-TDD examples to illustrate.

How do you know your code works?

Those of you who solved the problem without TDD probably did some sort of manual testing in order to prove that your code is working. You could probably do that with a simple example like the one we had (there were only a few scoring rules). But what happens as we add rules (and in real life we’re always adding rules) and you have to manually test 16 rules? For the record, I took the solution posted here and ran my tests against it.

[Image: the failing test results]

Tests are documentation

Tests document what the code is supposed to do. Since we wrote our test methods as sentences that read like English, you can figure out the rules just from reading our tests (as you can see in the image above). This is not documentation in Word format which becomes stale, this is living, breathing executable documentation of what the code is supposed to do.

Readability is important

I feel that the solution that we ended up with was very readable. Here is our implementation code:


public class GreedScorer
{
    public double Score(params Die[] dice)
    {
        var score = 0;

        score += ScoreASetOfThreeOnes(dice);
        score += ScoreASetOfThreeTwos(dice);
        score += ScoreASetOfThreeThrees(dice);
        score += ScoreASetOfThreeFours(dice);
        score += ScoreASetOfThreeFives(dice);
        score += ScoreASetOfThreeSixes(dice);
        score += ScoreEachOneThatIsNotAPartOfASetOfThree(dice);
        score += ScoreEachFiveThatIsNotAPartOfASetOfThree(dice);
        return score;
    }

    private int ScoreASetOfThreeOnes(Die[] dice)
    {
        if (dice.Count(die => die.Value == 1) >= 3)
            return 1000;
        return 0;
    }

    private int ScoreASetOfThreeTwos(Die[] dice)
    {
        if (dice.Count(die => die.Value == 2) >= 3)
            return 200;
        return 0;
    }

    private int ScoreASetOfThreeThrees(Die[] dice)
    {
        if (dice.Count(die => die.Value == 3) >= 3)
            return 300;
        return 0;
    }

    private int ScoreASetOfThreeFours(Die[] dice)
    {
        if (dice.Count(die => die.Value == 4) >= 3)
            return 400;
        return 0;
    }

    private int ScoreASetOfThreeFives(Die[] dice)
    {
        if (dice.Count(die => die.Value == 5) >= 3)
            return 500;
        return 0;
    }

    private int ScoreASetOfThreeSixes(Die[] dice)
    {
        if (dice.Count(die => die.Value == 6) >= 3)
            return 600;
        return 0;
    }

    private int ScoreEachOneThatIsNotAPartOfASetOfThree(Die[] dice)
    {
        if (dice.Count(die => die.Value == 1) < 3)
            return (dice.Count(die => die.Value == 1) * 100);
        if (dice.Count(die => die.Value == 1) > 3)
            return ((dice.Count(die => die.Value == 1) - 3) * 100);

        return 0;
    }

    private int ScoreEachFiveThatIsNotAPartOfASetOfThree(Die[] dice)
    {
        if (dice.Count(die => die.Value == 5) < 3)
            return (dice.Count(die => die.Value == 5) * 50);
        if (dice.Count(die => die.Value == 5) > 3)
            return ((dice.Count(die => die.Value == 5) - 3) * 50);

        return 0;
    }
}

Look how readable our code is. Read the score method. Notice how it tells you exactly what it’s doing. Contrast that with one of the other solutions:


static int Score(int[] numbers)
{
    int valueToReturn = 0;

    for (int numberIndex = 0; numberIndex < numbers.Length; numberIndex++)
    {
        switch (numbers[numberIndex])
        {
            case 1:
                if (
                    (numberIndex + 1 < numbers.Length && numbers[numberIndex + 1] == 1) &&
                    (numberIndex + 2 < numbers.Length && numbers[numberIndex + 2] == 1)
                   )
                {
                    valueToReturn += 1000;
                    numberIndex += 2;
                }
                else
                {
                    valueToReturn += 100;
                }
                break;

            case 5:
                if (
                    (numberIndex + 1 < numbers.Length && numbers[numberIndex + 1] == 5) &&
                    (numberIndex + 2 < numbers.Length && numbers[numberIndex + 2] == 5)
                   )
                {
                    valueToReturn += 500;
                    numberIndex += 2;
                }
                else
                {
                    valueToReturn += 50;
                }
                break;

            default:
                if (
                    (numberIndex + 1 < numbers.Length && numbers[numberIndex + 1] == numbers[numberIndex]) &&
                    (numberIndex + 2 < numbers.Length && numbers[numberIndex + 2] == numbers[numberIndex])
                   )
                {
                    valueToReturn += 100 * numbers[numberIndex];
                    numberIndex += 2;
                }
                break;

        }
    }

    return valueToReturn;
}

To me, this code is not very readable. If you had to implement a new rule in this method, it would be hard to do (plus you have no tests to tell you that you broke an existing rule).

The point of this exercise was not to find a clever way to solve a brain teaser. If you want to do a brain teaser, try and write code that will output a Fibonacci sequence using one LINQ expression. That's the kind of geek stuff that you might do at night for fun. But when you're writing real code, we care about things like readability and maintainability. When you're writing implementation code, use common language and write methods whose names tell you what they do.

TDD leads to well-designed code

As we were writing our first test, the test told us that we needed some class that would score the Greed game. So we named it GreedScorer, a name that tells you exactly what the class does. This class will most likely end up following the single responsibility principle because we gave it a very specific name.

But you guys took so long!

We weren't trying to finish as fast as possible. We were not doing a coding competition. We were practicing and learning the TDD mindset and trying to explain things as we went. As you get better at TDD, it becomes more natural and you learn tricks that help you go faster (for example, sometimes writing a whole bunch of tests, watching them all fail, and then making them all pass is faster than writing one test at a time and making them pass one at a time).

The bottom line is that we came out with some good, readable, maintainable code (with props to Sirena who helped write it). We were able to prove that our code is working. We don't have any code that we don't need. We could easily respond to change and implementing new scoring rules would be pretty easy. Implementing the Score method was fairly easy because we were building it incrementally (solving lots of little problems is easier than trying to solve a big problem all at once). This is why we do TDD!

March 2, 2010 by Jon Kruger
TDD

Code from Software Engineering 101 TDD session + more practice

Thanks to everyone who tuned in to watch our live TDD session. Here is the completed code that we ended up with. If you want to try it for yourself, you can get the rules for the Greed game here.

Here are some other “kata” exercises that you can use to help with your TDD practice. These are other simple examples similar to the Greed game.

String Calculator
Bowling scoring
Tennis scoring

Happy testing!

February 27, 2010 by Jon Kruger
Uncategorized

Software Engineering 101 – Sat., Feb. 27 – Nashville and online!

Leon Gersing, Jim Holmes, and I are putting on our Software Engineering 101 event this Saturday, February 27 in Nashville. If you’re not in Nashville (and you probably aren’t), you can watch the entire event online on LiveMeeting!

We’ll cover topics like object-oriented programming, the SOLID principles, and code metrics and we’ll spend the afternoon doing some hands-on test-driven development. This event was a lot of fun the first time we did it and I’m really looking forward to it.

You can register for this FREE event here (you need to register if you want to watch online to get the LiveMeeting info).

February 24, 2010 by Jon Kruger
Quality, TDD, unit testing

The automated testing triangle

Recently I had the privilege of hearing Uncle Bob Martin talk at the Columbus Ruby Brigade. Among the many nuggets of wisdom that I learned that night, my favorite part was the Automated Testing Triangle. I don’t know if Uncle Bob made this up or if he got it from somewhere else, but it goes something like this.

[Image: the automated testing triangle]

At the bottom of the triangle we have unit tests. These tests are testing code, individual methods in classes, really small pieces of functionality. We mock out dependencies in these tests so that we can test individual methods in isolation. These tests are written using testing frameworks like NUnit and use mocking frameworks like Rhino Mocks. Writing these kinds of tests will help us prove that our code is working and it will help us design our code. They will ensure that we only write enough code to make our tests pass. Unit tests are the foundation of a maintainable codebase.
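To make the bottom layer concrete, here is a small sketch of a unit test that stubs out a dependency with Rhino Mocks. ITaxService and OrderTotalCalculator are illustrative names I made up, not from the talk:

```csharp
using NUnit.Framework;
using Rhino.Mocks;

// Illustrative types, not from the talk:
public interface ITaxService
{
	decimal GetTaxRate(string state);
}

public class OrderTotalCalculator
{
	private readonly ITaxService _taxService;

	public OrderTotalCalculator(ITaxService taxService)
	{
		_taxService = taxService;
	}

	public decimal Total(decimal subtotal, string state)
	{
		return subtotal * (1 + _taxService.GetTaxRate(state));
	}
}

[TestFixture]
public class When_calculating_an_order_total
{
	[Test]
	public void Should_apply_the_tax_rate_from_the_tax_service()
	{
		// Stub out the dependency so the calculator is tested in isolation --
		// no real tax lookup, no database, just this one class.
		var taxService = MockRepository.GenerateStub<ITaxService>();
		taxService.Stub(t => t.GetTaxRate("OH")).Return(0.07m);

		var calculator = new OrderTotalCalculator(taxService);

		Assert.AreEqual(107m, calculator.Total(100m, "OH"));
	}
}
```

Because the dependency is stubbed, this test runs in milliseconds and fails only when the calculator’s own logic is wrong.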

But there will be situations where unit tests don’t do enough for us because we will need to test multiple parts of the system working together. This means that we need to write integration tests — tests that test the integration between different parts of the system. The most common type of integration test is a test that interacts with the database. These tests tend to be slower and are more brittle, but they serve a purpose by testing things that we can’t test with unit tests.
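
Here is a minimal sketch of such a test in Python, using an in-memory SQLite database (the repository class is invented for illustration). The test talks to a real database, which is exactly what makes it an integration test rather than a unit test:

```python
import sqlite3
import unittest

class SqliteUserRepository:
    """Hypothetical repository; the database here is real, not mocked."""
    def __init__(self, conn):
        self.conn = conn

    def save(self, username):
        self.conn.execute("INSERT INTO users (username) VALUES (?)", (username,))

    def find(self, username):
        row = self.conn.execute(
            "SELECT username FROM users WHERE username = ?", (username,)
        ).fetchone()
        return row[0] if row else None

class UserRepositoryIntegrationTests(unittest.TestCase):
    def setUp(self):
        # A real connection makes this slower than a unit test,
        # but it proves the SQL and the schema actually work together.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE users (username TEXT)")

    def test_saved_user_can_be_found(self):
        repo = SqliteUserRepository(self.conn)
        repo.save("alice")
        self.assertEqual(repo.find("alice"), "alice")

# run with: python -m unittest <this file>
```
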

Everything we’ve discussed so far tests technical behavior, but doesn’t necessarily test functional business specifications. At some point we might want to write tests that read like our functional specs so that we can show that our code is doing what the business wants it to do. This is when we write acceptance tests. These tests are written using tools like Cucumber, FitNesse, StoryTeller, and NBehave, usually as plain-text sentences that a business analyst could write, like this:

As a user
When I enter a valid username and password and click Submit
Then I should be logged in

At this point, we are no longer just testing technical aspects of our system; we are testing that our system meets the functional specifications provided by the business.
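
Tools like Cucumber work by matching each plain-text sentence against registered step definitions. The mechanism can be sketched in miniature (Python; this toy registry is an illustration of the idea, not any real tool’s API):

```python
import re

step_registry = []
context = {}

def step(pattern):
    """Register a function to run when a spec sentence matches the pattern."""
    def decorator(fn):
        step_registry.append((re.compile(pattern), fn))
        return fn
    return decorator

@step(r"When I enter a valid username and password and click Submit")
def submit_valid_credentials():
    context["logged_in"] = True  # a real step would drive the application

@step(r"Then I should be logged in")
def assert_logged_in():
    assert context.get("logged_in"), "user was not logged in"

def run(spec_text):
    """Execute each sentence that matches a registered step; skip the rest."""
    for line in spec_text.strip().splitlines():
        for pattern, fn in step_registry:
            if pattern.fullmatch(line.strip()):
                fn()

run("""
As a user
When I enter a valid username and password and click Submit
Then I should be logged in
""")
```

The business analyst writes the sentences; a developer writes each step definition once, and the sentences can then be reused and rearranged across many scenarios.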

By now we should be able to prove that our individual pieces of code are working, that everything works together, and that it does what the business wants it to do — and all of it is automated. Now comes the manual testing. This is for all of the random stuff — checking to make sure that the page looks right, that fancy AJAX stuff works, that the app is fast enough. This is where you try to break the app, hack it, put weird values in, etc.

The un-automated testing triangle

I find that the testing triangle on most projects tends to look more like this one. There are some automated integration tests, but these tests don’t use mocking frameworks to isolate dependencies, so they are slow and brittle, which makes them less valuable. An enormous amount of manpower is spent on manual testing.

Lots of projects are run this way, and many of them are successful. So what’s the big deal? The big deal is that what really matters is the total cost of ownership of an application over its entire lifetime. Most applications need to be changed quite often, so there is much value in doing things that will allow the application to be changed easily and quickly.

Many people get hung up on things like, “I don’t have time to write tests!” This is a short term view of things. Sometimes we have deadlines that cannot be moved, so I’m not denying this reality. But realize that you are making a short term decision that will have long term effects.

If you’ve ever worked on a project that had loads of manual testing, then you can at least imagine how nice it would be to have automated tests that would test a majority of your application by clicking a button. You could deploy to production quite often because regression testing would take drastically less time.

I’m still trying to figure out how to achieve this goal. I totally buy into Uncle Bob’s testing triangle, but it requires a big shift in the way we staff teams. For example, it would really help if QA people knew how to use automated testing tools (which may require basic coding skills). Or maybe we have developers writing more automated tests (beyond the unit tests that they usually write). Either way, the benefits of automated testing are tremendous and will save loads of time and money over the life of an application.

February 8, 2010 by Jon Kruger
TDD

Announcing TDD Boot Camp – comprehensive test-driven development training in .NET

If any of you have tried to learn test-driven development, you’ve probably discovered that it is not easy to learn. It’s not something where you can just go read a book or find a few good blog posts and start doing it tomorrow. You have to learn how to write tests first, how to use testing frameworks, how to write testable code, how to do dependency injection, how to use mocking frameworks, and on and on.

People have asked me for advice on how to learn TDD, and I really haven’t had a good answer for them. I’ve tried doing lunch and learn sessions on TDD, and other people have done half-day sessions. There’s definitely value in these sessions, but there’s no way to cover everything that you would need to understand in order to do TDD on real world projects in such a short amount of time. When I did my lunch and learn session on TDD, I felt like people went away more confused, newly aware of all the things they didn’t understand.

That’s why I’m developing the TDD Boot Camp, a comprehensive, three day training course that will cover everything that you need to know in order to do TDD on real world .NET projects. It will be very hands-on, with a lot of coding exercises that will help you understand all of the hows and whys of test driven development in .NET. My goal is to teach all of the concepts, tools, and techniques that you need to know to do TDD so that, with some practice, you will be able to do TDD effectively (and hopefully teach others how to do it too!).

I’m working on scheduling some events, so keep an eye on the website as I get things set up. I can also come out to your site if that would work better. Hopefully this will fill in the TDD learning gap so that more people can start realizing the benefits of test driven development.

February 2, 2010 by Jon Kruger
TDD

The business value of test-driven development

Most businesses are creating software for one primary reason — to make money. In order to make money, we need software that meets the needs of the business and can be developed and maintained in a reasonable amount of time with a high level of quality. Test-driven development is a discipline that will help you achieve these goals.

Test-driven development involves writing automated unit tests to prove that code is working. The test code is written before the implementation code is written. By writing the tests first, you will know when the code is working once the tests pass. Test names are written as sentences in plain English so that the tests describe what the code is supposed to do. Over time, you will end up with a large suite of automated tests which you can run in a short amount of time. These tests will prove to you that your code is working and will continue to work as you modify or refactor the code base.
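
For example, plain-English test names and a test-first workflow might produce something like this (a Python sketch; the discount rule and all the names are invented for illustration):

```python
import unittest

def apply_discount(total, is_member):
    """Written only after the tests below were red; just enough to pass."""
    return round(total * 0.9, 2) if is_member else total

class DiscountTests(unittest.TestCase):
    # Test names read as plain-English sentences describing behavior.
    def test_members_get_a_ten_percent_discount(self):
        self.assertEqual(apply_discount(100.0, is_member=True), 90.0)

    def test_non_members_pay_full_price(self):
        self.assertEqual(apply_discount(100.0, is_member=False), 100.0)

# run with: python -m unittest <this file>
```

Reading the test names alone tells you what the code is supposed to do, which is what makes the suite double as documentation.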

Most software applications are intended to be used for many years, and throughout most of their existence, someone will be changing them. The total cost of ownership of an application goes far beyond the cost of building the initial version of the software. The first release is the easy part — you can build the application from the ground up, you don’t have many hindrances, and developers feel very productive. But as time goes on, productivity tends to decrease due to complexity, developer turnover, poor software design, and any number of other reasons. This is where software development really becomes expensive. So much focus is placed on the original cost of building an application without considering how the original development effort will affect the cost of maintaining that application over several years.

Test-driven development can reduce the total cost of ownership of an application. New developers on the team will be able to change the code without fear of breaking something important. The application will have fewer defects (and far fewer major defects), reducing the need for manual QA testing. Your code will be self-documenting because the tests will describe the intended behavior of the code. All of this leads to flexible, maintainable software that can be changed in less time with higher quality.

Software is intended to deliver business value, and test-driven development will help you write higher quality, more maintainable software that can be changed at the fast pace of the business. Test-driven development will lead to successful software projects and enable you to write software that will withstand the test of time.

January 25, 2010 by Jon Kruger

About Me

I am a technical leader and software developer in Columbus, OH, currently working as a Senior Engineering Manager at Upstart. Find out more here...