Jon Kruger -
  • About Me
  • Blog
  • Values
  • Presentations
Jon Kruger
unit testing

Why mocking is good

As I said in my last post, unit testing is hard, and it’s something that can be hard to learn.

I’ve been fortunate on my current project to be working with guys who introduced me to mocking frameworks (in our case, Rhino Mocks). I had heard of Rhino Mocks before, but I never really looked into it. I thought, if I’m not testing against a database, how is that a valid test of what will actually happen in the application?

I can think of a very simple example that I think will show the value of mocks. Let’s say that it’s your first day on a brand new, green field project where you’re writing a system that deals with claims. You create a simple Claim object and you want to write a unit test to make sure that your validation code checks to see if the ClaimNumber property has a value. Your tests (not using mocks) might look like this:


[TestMethod]
public void SaveClaimPositiveTest()
{
    ClaimLogic logic = new ClaimLogic();
    Claim c = new Claim();
    c.ClaimNumber = "12345";
    logic.Save(c);
    
    Claim loadedClaim = logic.Get(c.Id);
    Assert.IsTrue(loadedClaim != null);
    Assert.IsTrue(loadedClaim.ClaimNumber == c.ClaimNumber);
}

[TestMethod]
public void SaveClaimNegativeTest()
{
    ClaimLogic logic = new ClaimLogic();
    Claim c = new Claim();
    c.ClaimNumber = null;
    try
    {
        logic.Save(c);
        Assert.Fail("Expected exception was not thrown.");
    }
    catch (MyValidationException ex)
    {
        Assert.IsTrue(ex.Property == "ClaimNumber", "Validation for ClaimNumber did not occur.");
    }
}

These are fairly simple tests that I’ve written many times in the past. What is wrong with these tests? Nothing right now. But that is all going to change.

What happens when someone adds a new property to the Claim object and adds a validation rule that says that the new field is required? My tests break. Now I have to go back and add more code to the beginning to create a valid Claim object that I can save.

What happens when someone adds some more logic to the Save method that kicks off some complicated workflow? Now my simple tests are getting a lot more complicated, and I’m going to have to deal with all kinds of side effects and errors just to test a really simple validation rule.

I’m also cluttering up the database with test data. So I need to write some cleanup code to delete the Claim object I inserted. Later on someone will add some foreign key relationships to the table, and I’ll have to come back and add more code to clean up those related objects too. Someone might add some triggers too, and I’ll have to account for those. Those triggers might make it hard (or impossible) to clean up this test data.

These tests are also going to be pretty slow because they have to interact with the database. Yes, it’s a simple test, but what happens when I have 2000 of these tests? It’s going to take a long time to run them all!

All I wanted to do was verify whether my validation code was written correctly! Why do I have to actually try and save the object to the database to do this?

What happened on past projects where I wrote these kinds of tests was this: the unit tests take 5 minutes to run, so no one runs them when they check in. Eventually someone does something that breaks the test (usually harmless stuff like adding validation rules), but no one takes time to fix it. Before you know it, half of the unit tests are broken, but it’s getting close to your release date, so you don’t have time to fix them. Then, sometime after the release, someone will spend a week updating the tests so that they all work again.

This is where mocking comes in.

I’m not going to go in depth into mocking because plenty of people have written in-depth posts on how to do it. But the basic idea is that I am going to “mock out” external dependencies by telling the mocking framework what those dependencies are going to do and return. In this case, I’ll mock out the code that would actually save to the database, because all I’m testing is the validation.
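To make that concrete, here is roughly what the negative test from earlier might look like with the data access mocked out. Treat this as a sketch: it assumes ClaimLogic has been refactored to take an IClaimRepository dependency, which is an interface I made up for illustration, and it uses the Rhino Mocks record/replay API.

```csharp
[TestMethod]
public void SaveClaimNegativeTestWithMocks()
{
    MockRepository mocks = new MockRepository();
    // IClaimRepository is a made-up interface that wraps the database
    // calls; the mocked version never touches a real database.
    IClaimRepository repository = mocks.DynamicMock<IClaimRepository>();
    mocks.ReplayAll();

    ClaimLogic logic = new ClaimLogic(repository);
    Claim c = new Claim();
    c.ClaimNumber = null;
    try
    {
        logic.Save(c);
        Assert.Fail("Expected exception was not thrown.");
    }
    catch (MyValidationException ex)
    {
        Assert.IsTrue(ex.Property == "ClaimNumber", "Validation for ClaimNumber did not occur.");
    }
}
```

Now the test passes or fails based only on the validation logic; new columns, triggers, or workflow added to the save path don’t affect it.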

Yesterday I wrote a unit test for a method that returned a list of users in the application who were in certain security roles, and the list of users returned depended on the current user’s security role (if you’re in certain roles, I don’t want the list of users to include people in certain other roles). The list of users is stored in a database table, and the security roles are stored in Active Directory groups.

In this case, it would be pretty much impossible to test this without mocks. Think about what I have to do:

1) Test that the list of users returned will be correct depending on the user’s security role
2) Test that users in each security role will be returned at the proper time

The problem is that I don’t know what users are in the database and I don’t know what users are in each role in Active Directory. I don’t have a way to write code to insert users into Active Directory groups. Even if I could do that, I would then have to write code to insert a bunch of test users into the database (or even worse, write a test that would expect certain actual users to be in the database). Then I’d have to clean everything up when the test finished.

I don’t need a database or Active Directory to test my logic that would return the correct list of users. I can mock out the database by just using a list of User objects instead, and I can mock out Active Directory by just telling the mock framework what users to return for each role.
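Here’s a sketch of what that setup can look like in Rhino Mocks. The IUserRepository and IRoleProvider interfaces (and the role name) are invented for illustration; the point is that the test gets to dictate exactly what the “database” and “Active Directory” return.

```csharp
MockRepository mocks = new MockRepository();
// Hypothetical interfaces wrapping the database and Active Directory.
IUserRepository userRepository = mocks.Stub<IUserRepository>();
IRoleProvider roleProvider = mocks.Stub<IRoleProvider>();

// The "database" is just a list of User objects...
SetupResult.For(userRepository.GetAllUsers())
    .Return(new List<User> { adminUser, regularUser });
// ...and "Active Directory" returns whatever roles we say it does.
SetupResult.For(roleProvider.GetRolesForUser(adminUser))
    .Return(new[] { "Administrators" });
mocks.ReplayAll();
```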

I wrote 16 tests around this section of code and they all run in about 5 seconds! I don’t have to insert any test data anywhere, I don’t have to clean it up, I don’t have to worry about people adding new validation rules, and I don’t have to worry about people adding other external code that would break my test (even though it wouldn’t otherwise have an effect on the code I was testing).

I will say this — mocking is not easy. It really is an art. It is not something that I learned overnight. Even though I understood the concept, I didn’t really get good at it until recently. But now that I understand it, I write much better unit tests, I write more unit tests, and I can do it pretty quickly. Test Driven Development (writing tests first) is also an art, and it’s something that I’m just starting to get into.

When I first started programming, I didn’t care about unit tests because I just wanted my programs to work. Sadly, in my four years of college, no one even mentioned a unit test. I didn’t even know what one was when I started my first job! Trying to reprogram yourself to work in a TDD way is hard because you’re breaking old habits that you’ve had since the day you first started programming.

Mocking and TDD will revolutionize the way that you write unit tests, but like I said, it can feel like you’re learning a foreign language. If you know someone who is good at this stuff, beg them, plead with them, take them out to lunch, or do whatever else you have to do to get them to sit with you and pair program for a day or two so you can learn from them. You won’t want to go back!

Also, if you don’t have a continuous integration build that runs all of your unit tests every time you check in, get one. We’re using TFS 2008 and I created a CI build using TFS in less than 5 minutes using the TFS build definition wizard. Then make sure that you fix broken tests as soon as they break!

Happy mocking!

June 30, 2008 by Jon Kruger
unit testing

Why write unit tests?

Unit testing is always a hot topic on every project. Usually in the form of, “Why should I write unit tests?” or “Do we have enough code coverage?”

This is not an indictment of anyone that I’ve worked with. Lots of people can write code, but writing unit tests is definitely a skill that takes a while to learn. So if I come across people who don’t know how to write unit tests, I don’t get too bent out of shape, because it takes time to get good at it.

Let’s go back to the original question — Why should I write unit tests?

How many of you have ever had to debug, maintain, or replace legacy software? I’m in the process of doing that right now. Now not all of the code that we’re replacing is bad, and some of it is still useful. The problem is that we can’t change any of it because we don’t know what our changes will break. That’s because we don’t know what the intentions of the original developers were, and we have no way of verifying whether our changes were successful (what is the acceptance criteria?). As a result, we end up having to rewrite code that otherwise might be perfectly good code because we have to make some minor change to it. Since we can’t modify it safely, we have to rewrite the code and everything that depends on it.

Legacy software is the obvious example. Let’s think about your own code.

On my project, there are 15 developers. 15 developers is a lot of people, and we’re stepping on each other’s toes all the time. Sure, I can write code and just step through it in the debugger and verify that it’s working, but what happens when someone writes code tomorrow that breaks my code? How will they know that they broke it, and how will I know that they broke it? Heck, I break my own code all the time; how is someone else supposed to not break it?

Unit tests take time to write, but they will usually save time in the end. In many cases, you’ll find bugs in your code before it gets to QA, so QA doesn’t have to spend time testing it, writing up the bug, and retesting it. Then when you have to change your code later on (or if someone else has to change it), the unit tests can tell you if you broke something, saving more QA time. Unit testing also makes you really think about your code and all of the possible edge cases. Sometimes it’s really hard (or impossible) to recreate an edge case when testing through the app, and it’s much easier to recreate it in a unit test. Remember, if your code allows for a certain edge case to happen, you should write unit tests around it even if you can’t recreate that edge case when running your application. Just because you can’t do it now doesn’t mean that someone won’t try and use your code to do it later.
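As a contrived example of an edge case that’s trivial to hit in a unit test but awkward to produce through the UI (ClaimLogic and this behavior are hypothetical, borrowed from the post above for illustration):

```csharp
[TestMethod]
public void GetClaimWithInvalidIdReturnsNull()
{
    ClaimLogic logic = new ClaimLogic();
    // The screens never pass a negative ID today, but nothing stops
    // other code from doing it later.
    Assert.IsNull(logic.Get(-1));
}
```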

How many projects have you been on where you’ve had to make a big change just before the app went into production or while the app was in production? I’ve had to do this many times. Unit tests probably won’t catch everything that can go wrong in this situation, but it sure is nice being able to run 2000 tests to see what I might have broken.

The important thing to remember is that we should be writing software that is going to last as long as possible. This is really hard to keep in mind when you have a deadline looming, or you’re trying to figure out a particular problem.

Nothing lasts forever – business requirements change, programming languages change, development teams change, everything changes. But it’s my responsibility to write software that will last as long as possible. I want my software to get replaced because changes in technology or the business environment allowed them to write something much better than what I wrote. I don’t want them to replace my software because they couldn’t maintain it.

June 24, 2008 by Jon Kruger
.NET

Best WCF Blog Ever

I’ve used WCF on the last few projects that I’ve worked on. While WCF is pretty cool, it’s not easy, and knowing how to configure it can be tough since you can customize anything and everything.

If you are doing anything at all with WCF, J.D. Meier’s blog is a definite must read. He (and his team) have been posting tons of WCF guidance articles on how to configure WCF in lots of different situations. These articles have been invaluable for me and now I actually feel confident that I’m doing everything correctly with WCF.

June 7, 2008 by Jon Kruger
Uncategorized

How to automatically back up your personal files

Most of us have lots of pictures, music, and other stuff on our home computers that we can’t afford to lose.

We’ve all heard many times that we need to back our stuff up, and other people have posted about this before. Unfortunately I ignored all the warnings and was met with a “disk read error” when I booted up my laptop last week. Crap.

Luckily I had been backing up a lot of stuff to CDs, but I had slacked off over the last year. Most of the pictures that I really wanted I could get back from other people, so I didn’t lose too much.

(Side note: seeing “disk read error” has an upside — the wife turns to me and says, “I think we need to buy a new computer.” The Wife Acceptance Factor will never get any higher than that.)

My Backup Strategy

I need something that is automated so that it doesn’t rely on me having to manually go and burn CDs, upload files, etc. because I will forget to do it. I need something that happens frequently because with a newborn in the house I will be taking lots of pictures, and I can’t afford to lose them. I didn’t want to back up to another computer in my house either (because I don’t want to maintain it, and to protect against some unlikely event like a house fire or someone breaking in and stealing everything). Here’s how it all works:

I downloaded and installed WinSCP, which is an FTP client that has a very powerful scripting language and has built-in functions to help you synchronize data. I downloaded the 4.1.3 beta because I needed some of the scripting capabilities that they added in the later releases.

With WinSCP installed, the next step was to write my script. Luckily the WinSCP scripting language has some pretty good documentation. I have two files: a batch file that I will run, and a WinSCP script file that is called from the batch file. I am going to synchronize files from my machine to my web hosting provider (which I use to host my blog), which is running on Linux.

Here are the two files (I have them in my C:\autobackup directory):

winscp backup batch file.bat:

@echo off
:waitloop
echo Waiting for wireless....
Ping jonkruger.com -n 2 |find /i "Request timed out" > nul
if %errorlevel% ==0 goto waitloop
:connected

rem Stall for time so that I'm sure that the wireless is connected
Ping 127.0.0.1 -n 20

@echo on

echo connected to wireless
"C:\program files\winscp\winscp.com" /console /script="c:\autobackup\winscp backup script.txt" /log="C:\autobackup\backup detailed log.txt" > "C:\autobackup\backup log.txt"
echo done

winscp backup script.txt:

# Automatically answer all prompts negatively not to stall
# the script on errors
option batch on

# Disable overwrite confirmations that conflict with the previous
option confirm off

# Exclude files that I don't care about
option exclude "*.db; *.ini; *.tmp;"

# Connect to the server (replace with your username, password, domain)
open username:password@mydomain.com

# Do the work
synchronize remote -delete -mirror "C:\Documents and Settings\all users\Documents\My Pictures" "/pictures-backup"

# Close and exit
close
exit

Note that in winscp backup batch file.bat, the stuff at the top of the file is checking to make sure that I’ve connected to my wireless network before I try and connect to the server.

You can read more about the options for the WinSCP “synchronize” script command here. Basically what I’m doing is synchronizing the remote FTP server to have the same files that I have on my local machine. I’m only doing the synchronization one way (meaning that changes on the remote FTP server will not be synced back to my local machine), but WinSCP will allow you to do two-way synchronization if you want to.

Now I set up a scheduled task in Windows to run my “winscp backup batch file.bat” file. I checked the box that says “Wake the computer to run this task”.
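If you prefer the command line, you can probably create an equivalent task with schtasks. The task name and the 2:00 AM start time here are my own choices, not from the setup above, and the wake-the-computer option still has to be set in the Task Scheduler UI:

```shell
schtasks /create /tn "Nightly picture backup" ^
    /tr "\"C:\autobackup\winscp backup batch file.bat\"" ^
    /sc daily /st 02:00
```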

Just like that, all of my pictures are backed up every night to a remote server, without any interaction from me, and I can see the results in a log file. I can easily update my script file to back up other directories too.

There are probably lots of other backup solutions out there, including Mozy.com, which allows you to back up 2 GB worth of data for free, or unlimited data for $4.95 a month. Web sites like this are probably worth looking into… I created my own solution because I already had the web hosting space available and WinSCP made it pretty easy.

So now that I’ve done all the work for you, you have no excuse! Don’t wait to back up your stuff or you might end up with nothing left to back up!

May 23, 2008 by Jon Kruger
Architecture

In my former life, I got to write code

I used to write about LINQ and writing code and other developer stuff. That was back when I got to write code for a living.

My role on my current project started out as a lead developer role but has since morphed into non-coding architect/project manager. I have to admit that I got a little sad when I took my name off of the feature wall because I had worked on one feature in the last 3 months. But that’s life in the consulting world, and even though I’m not writing as much code as I used to, I’m still having fun and I’m still learning a lot.

We just went live with our first release last week and it was a really big deal at the client site. It was a big deal for me too since this is the first time I have been in charge of a project. Having a solid development team has made the whole process pretty painless, but I feel like what has made me successful personally are things that I’ve learned over time on previous projects.

Maintaining realistic expectations is crucial

Probably the worst thing that I can do is not be realistic. We have to give the business a realistic expectation of what we can get done and how much time it will take. I have to be realistic about how much I tell myself I can get done and how much I need to delegate to others. The project sponsors need to be realistic with the rest of the business and not promise them something that isn’t realistic (this is done really well at the client that I’m at right now). I have to be realistic about how I balance work stuff and non-work stuff. In other words, don’t lie to others and don’t lie to yourself.

Estimating features is crucial

One of the most important things that I (and the rest of the development team) have to do is estimate how long features will take to complete. If we are unable to come up with accurate estimates, we’ll end up behind on our schedule, we’ll start to cheat on good development practices like unit testing, and the project will probably end up over budget. I’ve seen it happen many times.

The longer I’m in this business, the better I’m getting at estimating features. When I first started as a developer, I underestimated everything because I never considered all of the factors. On each project, I feel like I add a little more to my estimates. Hopefully this doesn’t mean that I’m becoming a slower developer!

Actually, what it means is that I’m learning everything that goes into completing a feature. When I first started estimating, I would just estimate the time it would take to complete the first pass at the feature. I never included the time that it will take to manually test the feature, write unit tests, fix the average number of bugs for that type of feature, design the feature with other developers, and even chase down requirements. All of those things affect how long it will take me to complete a feature.

People skills are crucial

You here this all the time, but it’s true. The reason I’m not writing code is that I spend a good portion of my day talking to people. On a normal day I might be chasing down requirements, making sure that the app is running OK in production, planning for future releases, working with the infrastructure team to make sure they can meet our needs, answering QA questions, helping out the training staff, helping other developers, discussing other projects, and sitting in numerous meetings. This is why I’m still having a lot of fun even though I don’t get to write code. I really enjoy working with people, and it’s even more fun when the people you are working with are doing a good job.

Part of my job is taking business requirements and translating them into technical requirements. The hardest part of doing this is that everyone speaks a different language. Every business has its own lingo, and every industry has its own lingo. Project managers, BAs, QA people, architects, developers, and DBAs each have their own lingo. So not only do I have to ask the right questions, I have to know how to interpret what someone else is saying and translate it into terms that someone in another role can understand. I feel like I’m getting better at this, but it’s not easy.

Keeping everyone on the same page is of the utmost importance. Everything might be fine with our application, but if we broke someone else’s application in the process, we didn’t really succeed. So it’s my job to make sure that I know about everything and everyone involved in the process and then make sure that they feel and know that they are a part of the process.

That’s why I think it’s important to make people feel like they know me and feel like they are a part of my team. Otherwise, they won’t be as willing to work with us because they’ll probably feel like I’m being a burden to them. I would rather have them feel like they’re helping out a friend.

People skills apply to anything collaborative in life. It doesn’t matter if you’re a developer, an architect, a project manager, a football player, a student, a doctor, or a spouse. So don’t think that you can ignore this skill, because sooner or later you’re going to end up needing it.

April 22, 2008 by Jon Kruger
.NET, LINQ

LINQ to SQL talk stuff

Here is the sample project from my talk at the Columbus .NET Users’ Group last night. If you open the project, you’ll notice a Northwind.sql file. This is my modified version of the Northwind database (I had to add a timestamp column to the Employees table to get it to cooperate with LINQ to SQL).

Like I mentioned yesterday, if you’re new to LINQ to SQL, a great place to start is Scott Guthrie’s series of blog posts on LINQ to SQL. Here they are:

Part 1: Introduction to LINQ to SQL
Part 2: Defining our Data Model Classes
Part 3: Querying our Database
Part 4: Updating our Database
Part 5: Binding UI using the ASP:LinqDataSource Control
Part 6: Retrieving Data Using Stored Procedures
Part 7: Updating our Database using Stored Procedures
Part 8: Executing Custom SQL Expressions
Part 9: Using a Custom LINQ Expression with the <asp:LinqDatasource> control

February 29, 2008 by Jon Kruger
.NET, LINQ

LINQ to SQL In Disconnected/N-Tier scenarios: Saving an Object

LINQ to SQL is a great tool, but when you’re using it in an n-tier scenario, there are several problems that you have to solve. A simple example is a web service that allows you to retrieve a record of data and save a record of data when the object that you are loading/saving has child lists of objects. Here are some of the problems that you have to solve:

1) How can you create these web services without having to create a separate set of entity objects (that is, we want to use the entity objects that the LINQ to SQL designer generates for us)?
2) What do we have to do to get LINQ to SQL to work with entities passed in through web services?
3) How do you create these loading/saving web services while keeping the amount of data passed across the wire to a minimum?

I created a sample project using the Northwind sample database. The first thing to do is to create your .dbml file and then drag tables from the Server Explorer onto the design surface. I have something that looks like this:

[Screenshot: the Northwind tables on the .dbml design surface]

I’m also going to click on blank space in the design surface, go to the Properties window, and set the SerializationMode to Unidirectional. This will put the [DataContract] and [DataMember] attributes on the designer-generated entity objects so that they can be used in WCF services.

[Screenshot: setting SerializationMode to Unidirectional in the Properties window]

Now I’ll create some web services that look something like this:

public class EmployeeService
{
	public Employee GetEmployee(int employeeId)
	{
		NorthwindDataContext dc = new NorthwindDataContext();
		Employee employee = dc.Employees.FirstOrDefault(e => e.EmployeeID == employeeId);
		return employee;
	}

	public void SaveEmployee(Employee employee)
	{
		// TODO
	}
}

I have two web service methods — one that loads an Employee by ID and one that saves an Employee.

Notice that both web service methods are using the entity objects generated by LINQ to SQL. There are some situations when you will not want to use the LINQ to SQL generated entities. For example, if you’re exposing a public web service, you probably don’t want to use the LINQ to SQL entities because that can make it a lot harder for you to refactor your database or your object model without having to change the web service definition, which could break code that calls the web service. In my case, I have a web service where I own both sides of the wire and this service is not exposed publicly, so I don’t have these concerns.

The GetEmployee() method is fairly straightforward: just load up the object and return it. Let’s look at how we should implement SaveEmployee().

In order for the DataContext to be able to save an object that wasn’t loaded from the same DataContext, you have to let the DataContext know about the object. How you do this depends on whether the object has ever been saved before.

How you make this determination is based on your own convention. Since I’m dealing with integer identity primary keys that start at 1, I can assume that if the primary key value is less than 1, the object is new. Let's create a base class for our entity objects called BusinessEntityBase and have that class expose a property called IsNew. This property returns a boolean value based on the primary key value of the object.

namespace Northwind
{
	[DataContract]
	public abstract class BusinessEntityBase
	{
		public abstract int Id { get; set; }

		public virtual bool IsNew
		{
			get { return this.Id <= 0; }
		}
	}
}

Now we have to tell Employee to derive from BusinessEntityBase. We can do this because the entities that LINQ to SQL generates are partial classes that don't derive from any class, so we can define that in our half of the partial class.

namespace Northwind
{
	public partial class Employee : BusinessEntityBase
	{
		public override int Id
		{
			get { return this.EmployeeID; }
			set { this.EmployeeID = value; }
		}
	}
}

Now we should be able to tell if an Employee object is new or not. I'm also going to do the same thing with the Order class since the Employee object contains a list of Order objects.

namespace Northwind
{
	public partial class Order : BusinessEntityBase
	{
		public override int Id
		{
			get { return this.OrderID; }
			set { this.OrderID = value; }
		}
	}
}

OK, let's start filling out the SaveEmployee() method.

public void SaveEmployee(Employee employee)
{
	NorthwindDataContext dc = new NorthwindDataContext();
	if (employee.IsNew)
		dc.Employees.InsertOnSubmit(employee);
	else
		dc.Employees.Attach(employee);
	dc.SubmitChanges();
}

Great. So now I can call the GetEmployee() web method to get an employee, change something on the Employee object, and call the SaveEmployee() web method to save it. But when I do it, nothing happens.

The problem is with this line:

public void SaveEmployee(Employee employee)
{
	NorthwindDataContext dc = new NorthwindDataContext();
	if (employee.IsNew)
		dc.Employees.InsertOnSubmit(employee);
	else
		dc.Employees.Attach(employee); // <-- PROBLEM HERE
	dc.SubmitChanges();
}

The Attach() method attaches the entity object to the DataContext so that the DataContext can save it. But the overload that I called just attached the entity to the DataContext and didn't check to see that anything on the object had been changed. That doesn't do us a whole lot of good. Let's try this overload:

public void SaveEmployee(Employee employee)
{
	NorthwindDataContext dc = new NorthwindDataContext();
	if (employee.IsNew)
		dc.Employees.InsertOnSubmit(employee);
	else
		dc.Employees.Attach(employee, true); // <-- UPDATE
	dc.SubmitChanges();
}

This second parameter is going to tell LINQ to SQL that it should treat this entity as modified so that it needs to be saved to the database. Now when I call SaveEmployee(), I get an exception when I call Attach() that says:

An entity can only be attached as modified without original state if it declares a version member or does not have an update check policy.

What this means is that my database table does not have a timestamp column on it. Without a timestamp, LINQ to SQL can't do its optimistic concurrency checking. No big deal; I'll go add timestamp columns to the Employees and Orders tables in the database. I'll also have to go into my DBML file and add the column to the table there. You can either add a new property to the object by right-clicking on the object in the designer and selecting Add / Property, or you can just delete the object from the designer and then drag it back on from the Server Explorer.

Now the DBML looks like this:

[Screenshot: the updated .dbml showing the new Timestamp columns]

Now let's try calling SaveEmployee() again. This time it works. Here is the SQL that LINQ to SQL ran:


UPDATE [dbo].[Employees]
SET [LastName] = @p2, [FirstName] = @p3, [Title] = @p4, [TitleOfCourtesy] = @p5, [BirthDate] = @p6, [HireDate] = @p7, [Address] = @p8, [City] = @p9, [Region] = @p10, [PostalCode] = @p11, [Country] = @p12, [HomePhone] = @p13, [Extension] = @p14, [Photo] = @p15, [Notes] = @p16, [ReportsTo] = @p17, [PhotoPath] = @p18
WHERE ([EmployeeID] = @p0) AND ([Timestamp] = @p1)

SELECT [t1].[Timestamp]
FROM [dbo].[Employees] AS [t1]
WHERE ((@@ROWCOUNT) > 0) AND ([t1].[EmployeeID] = @p19)
-- @p0: Input Int (Size = 0; Prec = 0; Scale = 0) [5]
-- @p1: Input Timestamp (Size = 8; Prec = 0; Scale = 0) [SqlBinary(8)]
-- @p2: Input NVarChar (Size = 8; Prec = 0; Scale = 0) [Buchanan]
-- @p3: Input NVarChar (Size = 6; Prec = 0; Scale = 0) [Steven]
-- @p4: Input NVarChar (Size = 13; Prec = 0; Scale = 0) [Sales Manager]
-- @p5: Input NVarChar (Size = 3; Prec = 0; Scale = 0) [Mr.]
-- @p6: Input DateTime (Size = 0; Prec = 0; Scale = 0) [3/4/1955 12:00:00 AM]
-- @p7: Input DateTime (Size = 0; Prec = 0; Scale = 0) [10/17/1993 12:00:00 AM]
-- @p8: Input NVarChar (Size = 15; Prec = 0; Scale = 0) [14 Garrett Hill]
-- @p9: Input NVarChar (Size = 6; Prec = 0; Scale = 0) [London]
-- @p10: Input NVarChar (Size = 0; Prec = 0; Scale = 0) [Null]
-- @p11: Input NVarChar (Size = 7; Prec = 0; Scale = 0) [SW2 8JR]
-- @p12: Input NVarChar (Size = 2; Prec = 0; Scale = 0) [UK]
-- @p13: Input NVarChar (Size = 13; Prec = 0; Scale = 0) [(71) 555-4848]
-- @p14: Input NVarChar (Size = 4; Prec = 0; Scale = 0) [3453]
-- @p15: Input Image (Size = 21626; Prec = 0; Scale = 0) [SqlBinary(21626)]
-- @p16: Input NText (Size = 448; Prec = 0; Scale = 0) [Steven Buchanan graduated from St. Andrews University, Scotland, with a BSC degree in 1976.  Upon joining the company as a sales representative in 1992, he spent 6 months in an orientation program at the Seattle office and then returned to his permanent post in London.  He was promoted to sales manager in March 1993.  Mr. Buchanan has completed the courses "Successful Telemarketing" and "International Sales Management."  He is fluent in French.]
-- @p17: Input Int (Size = 0; Prec = 0; Scale = 0) [2]
-- @p18: Input NVarChar (Size = 37; Prec = 0; Scale = 0) [http://accweb/emmployees/buchanan.bmp]
-- @p19: Input Int (Size = 0; Prec = 0; Scale = 0) [5]

Notice that it passed back all of the properties in the SQL statement, not just the one that I changed (I only changed one property when I made this call). But isn't it horribly inefficient to save every property when only one property changed?

Well, you don't have much choice here. There is another overload of Attach() that takes the original version of the object instead of the boolean parameter. In other words, it compares your object with the original version, sees which properties are different, and then updates only those properties in the SQL statement.

Unfortunately, there's no good way to use this overload in this case, nor do I think you would want to. I suppose you could load up the existing version of the entity from the database and then pass that into Attach() as the "original", but now we're doing even more work -- we're doing a SELECT that selects the entire row, and then we're doing an UPDATE that only updates the changed properties. I would rather stick with the one UPDATE that updates everything.
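To make the trade-off concrete, here is a hedged sketch of the two Attach() overloads. NorthwindDataContext and the Employee entity are stand-ins for whatever your DataContext and entities are; this is illustration, not the code from our project:

```csharp
// Overload 1: attach as modified. LINQ to SQL has no baseline to diff
// against, so the generated UPDATE sets every mapped column.
public void SaveAllColumns(Employee modified)
{
    using (var dc = new NorthwindDataContext())
    {
        dc.Employees.Attach(modified, true); // requires a timestamp/version column
        dc.SubmitChanges();
    }
}

// Overload 2: supply an "original" to diff against. Here we pay for an
// extra SELECT to fetch that original (from a non-tracking context so it
// can be handed to Attach), and the UPDATE then touches only the diffs.
public void SaveChangedColumnsOnly(Employee modified)
{
    Employee original;
    using (var readContext = new NorthwindDataContext())
    {
        readContext.ObjectTrackingEnabled = false;
        original = readContext.Employees
            .Single(e => e.EmployeeID == modified.EmployeeID);
    }

    using (var dc = new NorthwindDataContext())
    {
        dc.Employees.Attach(modified, original);
        dc.SubmitChanges();
    }
}
```

As the post says, the second version trades a full-row SELECT for a smaller UPDATE, which is usually not a win.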

February 10, 2008 by Jon Kruger
Uncategorized

Filtering Intellisense lists in the WF RuleSetDialog

Recently on our project we’ve been diving into Windows Workflow Foundation, particularly the rules engine. This process is relatively painless since Microsoft was kind enough to expose the RuleSetDialog class so that you can use the WF rule set editor in your own application. It’s as easy as doing something like this:

// Create a RuleSet that works with Orders (just another .net Object)
RuleSetDialog ruleSetDialog = new RuleSetDialog(typeof(Order), null, null);

// Show the RuleSet Editor
ruleSetDialog.ShowDialog();

// Get the RuleSet after editing
RuleSet ruleSet = ruleSetDialog.RuleSet;

That’s how simple it is to include the RuleSetDialog in your application. The problem is that the Intellisense dropdowns in the RuleSetDialog expose private and protected members of your class, and Microsoft doesn’t give you any way to filter the Intellisense list. So you end up with stuff like this:

Intellisense with private and protected members

Microsoft is aware of this issue, but they haven’t committed to doing anything about it.

When you’re writing a commercial application or something that non-developers are going to use, you don’t want this kind of cryptic stuff in the list. I don’t want to expose all of the private members of my classes to the user, just as you wouldn’t expose private members of a class in a public API.

One way to filter the list is to create an interface and pass the interface type in as the first parameter to the RuleSetDialog constructor. This way you won’t have all of the private and protected members of the class in the Intellisense, because an interface only exposes public members. So now your constructor looks like this:

// Create a RuleSet that works with Orders (just another .net Object)
RuleSetDialog ruleSetDialog = new RuleSetDialog(typeof(IOrder), null, null);

This is a decent solution, but it still has problems:

  • You have to create the interface.
  • System.Object members like Finalize(), GetHashCode(), and Equals() are still exposed.

Like I said before, in my commercial application, I don’t want users to have to see all of this extra stuff. I only want to show them the things that I want to show them.

Well, thanks to Reflector, I was able to come up with a way to let you filter the list. In my example, I can filter out all of the protected and private members, filter out static types, only display members decorated with an attribute, or completely override the list to only display strings that I’ve added. So now you can easily get something that looks more like this:

Filtered Intellisense

Much better!

Now I must warn you. This solution is making extensive use of reflection to get at private and internal methods and events that Microsoft didn’t feel like exposing to us. So I felt a little dirty while I was writing it, but it gets the job done!

Here is the code. Please leave a comment if you find anything wrong with it.

Here are some other good posts about the WF Rules Engine:

Execute Windows Workflow Rules without Workflow
Introduction to the Windows Workflow Foundation Rules Engine
External Ruleset Demo

Enjoy!

February 3, 2008 by Jon Kruger
.NET, LINQ

LINQ to SQL: a three-month checkpoint

We are now about 3 months into our project using LINQ to SQL. Our project is a Winforms app using SQL Server 2005 (LINQ to SQL only works with SQL Server). We are planning on moving to an n-tier system with a WCF service layer, but for now our application talks directly to the database. Even though we don’t have the service layer in there yet, we’re architecting the system as if we did, so we’re hitting many of the same issues that we will have when we actually add the service layer.

Microsoft hasn’t always been known for stellar 1.0 releases (e.g. Vista, the Zune, etc.). When it comes to something that’s in the .NET Framework, I had a little more faith, because it’s much harder for them to go back and fix something if they screw it up. I figured that because of that, they would make sure to get it right.

LINQ to SQL is not complete. There are some issues that Microsoft knows about that LINQ to SQL doesn’t currently handle. None of these issues are show-stoppers. While we’ve had to jump through some hoops to get around these issues, we’ve been able to do everything that we’ve needed to do. I’ll get into more detail on the hoop-jumping in later posts.

Even with all of these issues, I give LINQ to SQL a rousing endorsement. I’ve always been an ORM fan, and I’ve used NHibernate on several projects.

Getting Started

When we started our project, we inherited a legacy database that has been developed over the last 10 years. There are some interesting things in the database, such as numerics being used as primary keys, tables that aren’t normalized, and spotty referential integrity.

For the first week, three of us dragged all of the tables onto the LINQ to SQL designer and renamed all the properties to more friendly names. This was a fairly painless process. Now we had all of our entity objects created and ready to go.

Well, almost. We created a BusinessEntityBase class and all of the entity objects derive from this class. We do this by creating partial classes that match up with the classes generated by LINQ to SQL (all of the classes generated by LINQ to SQL are partial classes) and specifying that those classes derive from BusinessEntityBase. We don’t have much in the BusinessEntityBase class — the main thing in there is an abstract Id property that each entity must override to specify the value of the primary key. We use this to keep track of whether an entity object is unsaved or not.
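A minimal sketch of that pattern follows. The Claim entity and its ClaimId key are hypothetical stand-ins for our real entities; only the BusinessEntityBase idea comes from the post:

```csharp
// The shared base class. Each entity overrides Id to expose its
// primary key so we can tell whether it has been saved yet.
public abstract class BusinessEntityBase
{
    public abstract object Id { get; }

    public bool IsNew
    {
        get { return Id == null || Id.Equals(0); }
    }
}

// The designer-generated class is partial, so we can add the base
// class in a separate file without touching generated code.
public partial class Claim : BusinessEntityBase
{
    public override object Id
    {
        get { return ClaimId; } // ClaimId = the generated key property
    }
}
```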

At this point, we were ready to start working! All of our entity objects were generated for us. Contrast this with NHibernate, where we had to write (or generate) all of our entity objects and the NHibernate mapping files. It takes most people a long time to figure out how to write those mapping files!

Working with LINQ

“LINQ” is the general term for the syntax that we now use to write queries. These queries can be executed against a database (LINQ to SQL), a collection (LINQ to Objects), and various other things (LINQ to Amazon).
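For example, the same query syntax runs unchanged against a plain in-memory array via LINQ to Objects (a self-contained sketch):

```csharp
// LINQ to Objects: querying an array with the same syntax you'd
// use against a database table.
int[] numbers = { 5, 3, 8, 1 };

IEnumerable<int> small = from n in numbers
                         where n < 5
                         orderby n
                         select n;

// small enumerates 1, then 3
```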

The LINQ syntax and particularly lambda expressions were very foreign concepts at first. You’re just not used to using those kinds of things in C# code. Then one day it just clicks, and you start discovering all kinds of new ways to use LINQ queries and lambda expressions.

Personally, I think lambda expressions are more revolutionary than the LINQ syntax. They don’t provide you with anything that you couldn’t do in .NET 2.0 with anonymous delegates, but now the syntax is much more concise. You can do what you want to do in fewer lines of code, which also makes for more readable code. Here’s an example of why I like lambda expressions.
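As a quick side-by-side (my own minimal sketch, using List&lt;T&gt;.FindAll), the lambda is just terser syntax for the .NET 2.0 anonymous delegate:

```csharp
List<string> names = new List<string> { "Steven", "Jon", "Sue" };

// .NET 2.0: anonymous delegate
List<string> viaDelegate = names.FindAll(delegate(string n)
{
    return n.StartsWith("S");
});

// .NET 3.5: lambda expression -- same predicate, one line
List<string> viaLambda = names.FindAll(n => n.StartsWith("S"));

// both lists contain "Steven" and "Sue"
```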

Let’s say that I’m working with everyone’s favorite sample database (Northwind) and I want to find an Employee by first name, last name, or both. In the past, you probably wrote a stored procedure that looked like this:

create procedure EmployeeSearch
     @FirstName varchar(20),
     @LastName varchar(20)
as
select EmployeeID, FirstName, LastName, Title, TitleOfCourtesy, 
    BirthDate, HireDate, Address, City, Region, PostalCode, Country, 
    HomePhone, Extension, Photo, Notes, ReportsTo, PhotoPath
from Employees
where (@FirstName is null or FirstName = @FirstName)
and (@LastName is null or LastName = @LastName)

That worked fine, but having to check if the parameters are null is a performance hit in the stored procedure, and someone had to write the stored procedure in the first place.

With lambda expressions and LINQ to SQL, you can now do something like this and build your query incrementally:

public IQueryable<Employee> SearchEmployees(string firstName, string lastName)
{
    NorthwindDataContext dc = new NorthwindDataContext();

    // We'll start with the entire list of employees.
    IQueryable<Employee> employees = dc.Employees;

    if (!string.IsNullOrEmpty(firstName))
    {
        // Filter the employees by first name
        employees = employees.Where(e => e.FirstName == firstName);
    }

    if (!string.IsNullOrEmpty(lastName))
    {
        // Filter the employees by last name
        employees = employees.Where(e => e.LastName == lastName);
    }

    return employees;
}

Why is this better?

  • We didn’t have to write a separate stored procedure.
  • The generated SQL code won’t have to check for NULL parameters passed into a stored procedure.
  • This code is much more testable and easier to read than a stored procedure (IMO).
  • This is compiled, type safe code!

Here is the SQL code that LINQ to SQL runs as a result of this method:

SELECT [t0].[EmployeeID], [t0].[LastName], [t0].[FirstName], [t0].[Title], [t0].[TitleOfCourtesy], [t0].[BirthDate], [t0].[HireDate], [t0].[Address], [t0].[City], [t0].[Region], [t0].[PostalCode], [t0].[Country], [t0].[HomePhone], [t0].[Extension], [t0].[Photo], [t0].[Notes], [t0].[ReportsTo], [t0].[PhotoPath]
FROM [dbo].[Employees] AS [t0]
WHERE [t0].[LastName] = @p0
-- @p0: Input NVarChar (Size = 6; Prec = 0; Scale = 0) [Kruger]

I’m not saying that stored procedures are obsolete. There will still be cases where you have a query that is so complex that it’s easier to do it in a stored procedure, or it may not be possible to do it in LINQ at all. But LINQ to SQL is allowing me to scrap many of the stored procedures that I used in the past.

More to come…

Over the next few weeks, I’ll post in more detail about how we are using LINQ to SQL and some of the things we’ve had to do to make it work.

January 19, 2008 by Jon Kruger
Uncategorized

Why I should’ve gone to CodeMash last year

I went to CodeMash this year. Last year I did not. Sure, I knew that there would be a lot of good talks, but couldn’t I learn the same information by reading books and blogs, listening to podcasts, etc.?

Now I see why I was wrong. People I work with talk about the value of being involved in the .NET community, and now I see why they are right.

Sure, the talks were great. But by far the best part is being able to sit down with people who know way more than me and ask them about problems that I’m having right now on my current project. That kind of free advice is invaluable.

In the technology world, there is always tons of new stuff out there, and there’s no way that I can keep up with it all (especially with a wife and a kid on the way). If I want to be someone who can make good architectural decisions, how can I do that without knowing what’s out there? Since I can’t keep up with it all myself, it helps to know other people who can.

So I plan on trying to be more involved in the local .NET community (user groups, blogging, etc.), and I’m really excited about it. Hopefully I can make some worthwhile contributions of my own while I’m at it.

January 13, 2008 by Jon Kruger

About Me

I am a technical leader and software developer in Columbus, OH, currently working as a Senior Engineering Manager at Upstart. Find out more here...