Jon Kruger
Technical Leadership / Software Solutions / Agile Coaching and Training
Architecture

In my former life, I got to write code

I used to write about LINQ and writing code and other developer stuff. That was back when I got to write code for a living.

My role on my current project started out as a lead developer role but has since morphed into a non-coding architect/project manager role. I have to admit that I got a little sad when I took my name off the feature wall because I had worked on only one feature in the last three months. But that’s life in the consulting world, and even though I’m not writing as much code as I used to, I’m still having fun and I’m still learning a lot.

We just went live with our first release last week and it was a really big deal at the client site. It was a big deal for me too, since this is the first time I have been in charge of a project. Having a solid development team has made the whole process pretty painless, but I feel like what has made me successful personally is what I’ve learned over time on previous projects.

Maintaining realistic expectations is crucial

Probably the worst thing that I can do is not be realistic. We have to give the business a realistic expectation of what we can get done and how long it will take. I have to be realistic about how much I can get done myself and how much I need to delegate to others. The project sponsors need to be realistic with the rest of the business and not promise them something that isn’t realistic (this is done really well at the client that I’m at right now). I have to be realistic about how I balance work stuff and non-work stuff. In other words, don’t lie to others and don’t lie to yourself.

Estimating features is crucial

One of the most important things that I (and the rest of the development team) have to do is estimate how long features will take to complete. If we are unable to come up with accurate estimates, we’ll end up behind on our schedule, we’ll start to cheat on good development practices like unit testing, and the project will probably end up over budget. I’ve seen it happen many times.

The longer I’m in this business, the better I’m getting at estimating features. When I first started as a developer, I underestimated everything because I never considered all of the factors. On each project, I feel like I add a little more to my estimates. Hopefully this doesn’t mean that I’m becoming a slower developer!

Actually, what it means is that I’m learning everything that goes into completing a feature. When I first started estimating, I would just estimate the time it would take to complete the first pass at the feature. I never included the time it takes to manually test the feature, write unit tests, fix the average number of bugs for that type of feature, design the feature with other developers, and even chase down requirements. All of those things affect how long it will take me to complete a feature.

People skills are crucial

You hear this all the time, but it’s true. The reason I’m not writing code is that I spend a good portion of my day talking to people. On a normal day I might be chasing down requirements, making sure that the app is running OK in production, planning for future releases, working with the infrastructure team to make sure they can meet our needs, answering QA questions, helping out the training staff, helping other developers, discussing other projects, and sitting in numerous meetings. This is why I’m still having a lot of fun even though I don’t get to write code. I really enjoy working with people, and it’s even more fun when the people you are working with are doing a good job.

Part of my job is taking business requirements and translating them into technical requirements. The hardest part of doing this is that everyone speaks a different language. Every business has its own lingo, and every industry has its own lingo. Project managers, BAs, QA people, architects, developers, and DBAs each have their own lingo. So not only do I have to ask the right questions, I have to know how to interpret what someone else is saying and translate it into terms that someone in another role can understand. I feel like I’m getting better at this, but it’s not easy.

Keeping everyone on the same page is of the utmost importance. Everything might be fine with our application, but if we broke someone else’s application in the process, we didn’t really succeed. So it’s my job to make sure that I know about everything and everyone involved in the process, and to make sure that they know they are a part of it.

That’s why I think it’s important to make people feel like they know me and feel like they are a part of my team. Otherwise, they won’t be as willing to work with us because they’ll probably feel like I’m being a burden to them. I would rather have them feel like they’re helping out a friend.

People skills apply to anything collaborative in life. It doesn’t matter if you’re a developer, an architect, a project manager, a football player, a student, a doctor, or a spouse. So don’t think that you can ignore this skill, because sooner or later you’re going to end up needing it.

April 22, 2008 by Jon Kruger
.NET, LINQ

LINQ to SQL talk stuff

Here is the sample project from my talk at the Columbus .NET Users’ Group last night. If you open the project, you’ll notice a Northwind.sql file. This is my modified version of the Northwind database (I had to add a timestamp column to the Employees table to get it to cooperate with LINQ to SQL).

Like I mentioned yesterday, if you’re new to LINQ to SQL, a great place to start is Scott Guthrie’s series of blog posts on LINQ to SQL. Here they are:

Part 1: Introduction to LINQ to SQL
Part 2: Defining our Data Model Classes
Part 3: Querying our Database
Part 4: Updating our Database
Part 5: Binding UI using the ASP:LinqDataSource Control
Part 6: Retrieving Data Using Stored Procedures
Part 7: Updating our Database using Stored Procedures
Part 8: Executing Custom SQL Expressions
Part 9: Using a Custom LINQ Expression with the <asp:LinqDatasource> control

February 29, 2008 by Jon Kruger
.NET, LINQ

LINQ to SQL In Disconnected/N-Tier scenarios: Saving an Object

LINQ to SQL is a great tool, but when you’re using it in an n-tier scenario, there are several problems that you have to solve. A simple example is a web service that allows you to retrieve a record of data and save a record of data when the object that you are loading/saving has child lists of objects. Here are some of the problems that you have to solve:

1) How can you create these web services without having to create a separate set of entity objects (that is, we want to use the entity objects that the LINQ to SQL designer generates for us)?
2) What do we have to do to get LINQ to SQL to work with entities passed in through web services?
3) How do you create these loading/saving web services while keeping the amount of data passed across the wire to a minimum?

I created a sample project using the Northwind sample database. The first thing to do is to create your .dbml file and then drag tables from the Server Explorer onto the design surface. I have something that looks like this:

[Screenshot: the Northwind tables on the .dbml design surface]

I’m also going to click on blank space in the design surface, go to the Properties window, and set the SerializationMode to Unidirectional. This will put the [DataContract] and [DataMember] attributes on the designer-generated entity objects so that they can be used in WCF services.

[Screenshot: setting SerializationMode to Unidirectional in the Properties window]
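To give an idea of what this does to the generated code, here’s a trimmed-down sketch of an entity in Unidirectional mode (the real generated file is much longer and the attribute arguments will vary):

[DataContract()]
public partial class Employee : INotifyPropertyChanging, INotifyPropertyChanged
{
	private string _LastName;

	[Column(Storage="_LastName", DbType="NVarChar(20) NOT NULL", CanBeNull=false)]
	[DataMember(Order=2)]
	public string LastName
	{
		get { return this._LastName; }
		set
		{
			// The real generated setter also raises the
			// PropertyChanging/PropertyChanged events.
			this._LastName = value;
		}
	}
}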

Now I’ll create some web services that look something like this:

public class EmployeeService
{
	public Employee GetEmployee(int employeeId)
	{
		NorthwindDataContext dc = new NorthwindDataContext();
		Employee employee = dc.Employees.FirstOrDefault(e => e.EmployeeID == employeeId);
		return employee;
	}

	public void SaveEmployee(Employee employee)
	{
		// TODO
	}
}

I have two web service methods — one that loads an Employee by ID and one that saves an Employee.

Notice that both web service methods are using the entity objects generated by LINQ to SQL. There are some situations where you will not want to use the LINQ to SQL generated entities. For example, if you’re exposing a public web service, you probably don’t want to use the LINQ to SQL entities, because doing so makes it a lot harder to refactor your database or your object model without changing the web service definition, which could break code that calls the web service. In my case, I have a web service where I own both sides of the wire and the service is not exposed publicly, so I don’t have these concerns.

The GetEmployee() method is fairly straightforward — just load up the object and return it. Let’s look at how we should implement SaveEmployee().

In order for the DataContext to be able to save an object that wasn’t loaded from the same DataContext, you have to let the DataContext know about the object. How you do this depends on whether the object has ever been saved before.

How you make this determination is based on your own convention. Since I’m dealing with integer primary keys with identity insert starting at 1, I can assume that if the primary key value is < 1, this object is new. Let's create a base class for our entity object called BusinessEntityBase and have that class expose a property called IsNew. This property will return a boolean value based on the primary key value of this object.

namespace Northwind
{
	[DataContract]
	public abstract class BusinessEntityBase
	{
		public abstract int Id { get; set; }

		public virtual bool IsNew
		{
			get { return this.Id <= 0; }
		}
	}
}

Now we have to tell Employee to derive from BusinessEntityBase. We can do this because the entities that LINQ to SQL generates are partial classes that don't explicitly derive from any class, so we can specify the base class in our half of the partial class.

namespace Northwind
{
	public partial class Employee : BusinessEntityBase
	{
		public override int Id
		{
			get { return this.EmployeeID; }
			set { this.EmployeeID = value; }
		}
	}
}

Now we should be able to tell if an Employee object is new or not. I'm also going to do the same thing with the Order class since the Employee object contains a list of Order objects.

namespace Northwind
{
	public partial class Order : BusinessEntityBase
	{
		public override int Id
		{
			get { return this.OrderID; }
			set { this.OrderID = value; }
		}
	}
}

OK, let's start filling out the SaveEmployee() method.

public void SaveEmployee(Employee employee)
{
	NorthwindDataContext dc = new NorthwindDataContext();
	if (employee.IsNew)
		dc.Employees.InsertOnSubmit(employee);
	else
		dc.Employees.Attach(employee);
	dc.SubmitChanges();
}

Great. So now I can call the GetEmployee() web method to get an employee, change something on the Employee object, and call the SaveEmployee() web method to save it. But when I do it, nothing happens.

The problem is with this line:

public void SaveEmployee(Employee employee)
{
	NorthwindDataContext dc = new NorthwindDataContext();
	if (employee.IsNew)
		dc.Employees.InsertOnSubmit(employee);
	else
		dc.Employees.Attach(employee); // <-- PROBLEM HERE
	dc.SubmitChanges();
}

The Attach() method attaches the entity object to the DataContext so that the DataContext can save it. But the overload that I called just attached the entity as unmodified, so the DataContext didn't think that anything on the object had changed. That doesn't do us a whole lot of good. Let's try this overload:

public void SaveEmployee(Employee employee)
{
	NorthwindDataContext dc = new NorthwindDataContext();
	if (employee.IsNew)
		dc.Employees.InsertOnSubmit(employee);
	else
		dc.Employees.Attach(employee, true); // <-- UPDATE
	dc.SubmitChanges();
}

The second parameter tells LINQ to SQL to treat this entity as modified so that it will be saved to the database. Now when I call SaveEmployee(), I get an exception from Attach() that says:

An entity can only be attached as modified without original state if it declares a version member or does not have an update check policy.

What this means is that my database table does not have a timestamp column on it. Without a timestamp, LINQ to SQL can't do its optimistic concurrency checking. No big deal, I'll go add timestamp columns to the Employees and Orders tables in the database. I'll also have to add the column to the table in my DBML file. You can either add a new property to the object by right-clicking on the object in the designer and selecting Add / Property, or just delete the object from the designer and drag it back on from the Server Explorer.
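The important detail in the regenerated code is that the new column gets mapped with IsVersion=true -- that's the "version member" the exception message was asking for. The generated property looks roughly like this (attribute arguments abbreviated):

[Column(Storage="_Timestamp", AutoSync=AutoSync.Always, DbType="rowversion NOT NULL", CanBeNull=false, IsDbGenerated=true, IsVersion=true)]
[DataMember()]
public Binary Timestamp
{
	get { return this._Timestamp; }
	set { this._Timestamp = value; } // the real setter also raises change events
}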

Now the DBML looks like this:

[Screenshot: the updated .dbml with Timestamp columns on Employee and Order]

Now let's try calling SaveEmployee() again. This time it works. Here is the SQL that LINQ to SQL ran:


UPDATE [dbo].[Employees]
SET [LastName] = @p2, [FirstName] = @p3, [Title] = @p4, [TitleOfCourtesy] = @p5, [BirthDate] = @p6, [HireDate] = @p7, [Address] = @p8, [City] = @p9, [Region] = @p10, [PostalCode] = @p11, [Country] = @p12, [HomePhone] = @p13, [Extension] = @p14, [Photo] = @p15, [Notes] = @p16, [ReportsTo] = @p17, [PhotoPath] = @p18
WHERE ([EmployeeID] = @p0) AND ([Timestamp] = @p1)

SELECT [t1].[Timestamp]
FROM [dbo].[Employees] AS [t1]
WHERE ((@@ROWCOUNT) > 0) AND ([t1].[EmployeeID] = @p19)
-- @p0: Input Int (Size = 0; Prec = 0; Scale = 0) [5]
-- @p1: Input Timestamp (Size = 8; Prec = 0; Scale = 0) [SqlBinary(8)]
-- @p2: Input NVarChar (Size = 8; Prec = 0; Scale = 0) [Buchanan]
-- @p3: Input NVarChar (Size = 6; Prec = 0; Scale = 0) [Steven]
-- @p4: Input NVarChar (Size = 13; Prec = 0; Scale = 0) [Sales Manager]
-- @p5: Input NVarChar (Size = 3; Prec = 0; Scale = 0) [Mr.]
-- @p6: Input DateTime (Size = 0; Prec = 0; Scale = 0) [3/4/1955 12:00:00 AM]
-- @p7: Input DateTime (Size = 0; Prec = 0; Scale = 0) [10/17/1993 12:00:00 AM]
-- @p8: Input NVarChar (Size = 15; Prec = 0; Scale = 0) [14 Garrett Hill]
-- @p9: Input NVarChar (Size = 6; Prec = 0; Scale = 0) [London]
-- @p10: Input NVarChar (Size = 0; Prec = 0; Scale = 0) [Null]
-- @p11: Input NVarChar (Size = 7; Prec = 0; Scale = 0) [SW2 8JR]
-- @p12: Input NVarChar (Size = 2; Prec = 0; Scale = 0) [UK]
-- @p13: Input NVarChar (Size = 13; Prec = 0; Scale = 0) [(71) 555-4848]
-- @p14: Input NVarChar (Size = 4; Prec = 0; Scale = 0) [3453]
-- @p15: Input Image (Size = 21626; Prec = 0; Scale = 0) [SqlBinary(21626)]
-- @p16: Input NText (Size = 448; Prec = 0; Scale = 0) [Steven Buchanan graduated from St. Andrews University, Scotland, with a BSC degree in 1976.  Upon joining the company as a sales representative in 1992, he spent 6 months in an orientation program at the Seattle office and then returned to his permanent post in London.  He was promoted to sales manager in March 1993.  Mr. Buchanan has completed the courses "Successful Telemarketing" and "International Sales Management."  He is fluent in French.]
-- @p17: Input Int (Size = 0; Prec = 0; Scale = 0) [2]
-- @p18: Input NVarChar (Size = 37; Prec = 0; Scale = 0) [http://accweb/emmployees/buchanan.bmp]
-- @p19: Input Int (Size = 0; Prec = 0; Scale = 0) [5]

Notice that it passed back all of the properties in the SQL statement -- not just the one that I changed (I only changed one property when I made this call). But isn't it horribly inefficient to save every property when only one property changed?

Well, you don't have much choice here. There is another overload of Attach() that takes the original version of the object instead of the boolean parameter. LINQ to SQL will compare your object with the original version, figure out which properties are different, and then only update those properties in the SQL statement.

Unfortunately, there's no good way to use this overload in this case, nor do I think you would want to. I suppose you could load up the existing version of the entity from the database and then pass that into Attach() as the "original", but now we're doing even more work -- we're doing a SELECT that selects the entire row, and then we're doing an UPDATE that only updates the changed properties. I would rather stick with the one UPDATE that updates everything.
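If you did want to go that route anyway, here's a sketch of what it might look like (SaveEmployeeWithOriginal is a hypothetical method; I'm turning off object tracking on the second DataContext so the "original" comes back detached):

public void SaveEmployeeWithOriginal(Employee employee)
{
	// Load the current database version to use as the "original" --
	// this is the extra SELECT of the entire row.
	Employee original;
	using (NorthwindDataContext readContext = new NorthwindDataContext())
	{
		readContext.ObjectTrackingEnabled = false;
		original = readContext.Employees.FirstOrDefault(e => e.EmployeeID == employee.EmployeeID);
	}

	// Attach(current, original) diffs the two objects and only updates
	// the properties that differ.
	NorthwindDataContext dc = new NorthwindDataContext();
	dc.Employees.Attach(employee, original);
	dc.SubmitChanges();
}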

February 10, 2008 by Jon Kruger
Uncategorized

Filtering Intellisense lists in the WF RuleSetDialog

Recently on our project we’ve been diving into Windows Workflow Foundation, particularly the rules engine. This process is relatively painless since Microsoft was kind enough to expose the RuleSetDialog class so that you can use the WF rule set editor in your application. It’s as easy as doing something like this:

// Create a RuleSet that works with Orders (just another .net Object)
RuleSetDialog ruleSetDialog = new RuleSetDialog(typeof(Order), null, null);

// Show the RuleSet Editor
ruleSetDialog.ShowDialog();

// Get the RuleSet after editing
RuleSet ruleSet = ruleSetDialog.RuleSet;

That’s how simple it is to include the RuleSetDialog in your application. The problem is that the Intellisense dropdowns in the RuleSetDialog expose private and protected members of your class, and Microsoft doesn’t give you any way to filter the Intellisense list. So you end up with stuff like this:

[Screenshot: Intellisense list cluttered with private and protected members]

Microsoft is aware of this issue, but they haven’t said anything definite about fixing it.

When you’re writing a commercial application or something that non-developers are going to use, you don’t want this kind of cryptic stuff in the list. I don’t want to expose all of the private members of my classes to the user, just like you don’t expose private members of a class in a public API.

One way to filter the list is to create an interface and pass the interface type in as the first parameter in the RuleSetDialog constructor. This way you won’t have all of the private and protected members of the class in the Intellisense because an interface only exposes public members. So now your constructor looks like this:

// Create a RuleSet that works with Orders (just another .net Object)
RuleSetDialog ruleSetDialog = new RuleSetDialog(typeof(IOrder), null, null);
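
For illustration, a hypothetical IOrder interface would expose only the members that you want rule authors to see:

// All of these member names are made up for the example.
public interface IOrder
{
	int OrderId { get; }
	decimal Total { get; }
	void ApplyDiscount(decimal percent);
}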

This is a decent solution, but it still has problems:

  • You have to create the interface.
  • System.Object members like Finalize(), GetHashCode(), and Equals() are still exposed.

Like I said before, in my commercial application, I don’t want users to have to see all of this extra stuff. I only want to show them the things that I want to show them.

Well, thanks to Reflector, I was able to come up with a way to let you filter the list. In my example, I can filter out all of the protected and private members, filter out static types, only display members decorated with an attribute, or completely override the list to only display strings that I’ve added. So now you can easily get something that looks more like this:

Filtered Intellisense

Much better!

Now I must warn you: this solution makes extensive use of reflection to get at private and internal methods and events that Microsoft didn’t feel like exposing to us. I felt a little dirty while I was writing it, but it gets the job done!

Here is the code. Please leave a comment if you find anything wrong with it.

Here are some other good posts about the WF Rules Engine:

Execute Windows Workflow Rules without Workflow
Introduction to the Windows Workflow Foundation Rules Engine
External Ruleset Demo

Enjoy!

February 3, 2008 by Jon Kruger
.NET, LINQ

LINQ to SQL: a three-month checkpoint

We are now about 3 months into our project using LINQ to SQL. Our project is a Winforms app using SQL Server 2005 (LINQ to SQL only works with SQL Server). We are planning on moving to an n-tier system with a WCF service layer, but for now our application talks directly to the database. Even though we don’t have the service layer in there yet, we’re architecting the system as if we did, so we’re running into many of the same issues that we will have when we actually have the service layer.

Microsoft hasn’t always been known for stellar 1.0 releases (e.g. Vista, the Zune, etc.). When it comes to something that’s in the .NET Framework, I had a little more faith because it’s a little harder for them to go back and fix something if they screw it up. I figured that because of that, they’d make sure they got it right.

LINQ to SQL is not complete. There are some issues that Microsoft knows about that LINQ to SQL doesn’t currently handle. None of these issues are show-stoppers. We’ve had to jump through some hoops to get around them, but we’ve been able to do everything that we’ve needed to do. I’ll get into more detail on the hoop-jumping in later posts.

Even with all of these issues, I give LINQ to SQL a rousing endorsement. I’ve always been an ORM fan, and I’ve used NHibernate on several projects.

Getting Started

When we started our project, we inherited a legacy database that has been developed over the last 10 years. There are some interesting things in the database, such as numerics being used as primary keys, tables that aren’t normalized, and spotty referential integrity.

For the first week, three of us dragged all of the tables onto the LINQ to SQL designer and renamed all the properties to more friendly names. This was a fairly painless process. Now we had all of our entity objects created and ready to go.

Well, almost. We created a BusinessEntityBase class and all of the entity objects derive from this class. We do this by creating partial classes that match up with the classes generated by LINQ to SQL (all of the classes generated by LINQ to SQL are partial classes) and specifying that those classes derive from BusinessEntityBase. We don’t have much in the BusinessEntityBase class — the main thing in there is an abstract Id property that each entity must override to specify the value of the primary key. We use this to keep track of whether an entity object is unsaved or not.

At this point, we were ready to start working! All of our entity objects were generated for us. Contrast this with NHibernate, where we had to write (or generate) all of our entity objects and the NHibernate mapping files. It takes most people a long time to figure out how to write those NHibernate mapping files!

Working with LINQ

“LINQ” is the general term for the syntax that we now use to write queries. These queries can be executed against a database (LINQ to SQL), a collection (LINQ to Objects), and various other things (LINQ to Amazon).

The LINQ syntax and particularly lambda expressions were very foreign concepts at first. You’re just not used to using those kinds of things in C# code. Then one day it just clicks, and you start discovering all kinds of new ways to use LINQ queries and lambda expressions.

Personally, I think lambda expressions are more revolutionary than the LINQ syntax. They don’t provide you with anything that you couldn’t do in .NET 2.0 with anonymous delegates, but the syntax is much more concise. You can do what you want to do in fewer lines of code, which also makes for more readable code. Here’s a quick look at the syntax difference, and then an example of why I like lambda expressions.
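
To see just the syntax difference, here’s the same trivial filter written both ways (a minimal sketch using List&lt;T&gt;.FindAll on an in-memory list):

List<Employee> employees = new List<Employee>(); // pretend this is populated

// .NET 2.0: an anonymous delegate
List<Employee> oldWay = employees.FindAll(
    delegate(Employee e) { return e.LastName == "Kruger"; });

// C# 3.0: the same filter as a lambda expression
List<Employee> newWay = employees.FindAll(e => e.LastName == "Kruger");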

Let’s say that I’m working with everyone’s favorite sample database (Northwind) and I want to find an Employee by first name, last name, or both. In the past, you probably wrote a stored procedure that looked like this:

create procedure EmployeeSearch
     @FirstName varchar(20),
     @LastName varchar(20)
as
select EmployeeID, FirstName, LastName, Title, TitleOfCourtesy, 
    BirthDate, HireDate, Address, City, Region, PostalCode, Country, 
    HomePhone, Extension, Photo, Notes, ReportsTo, PhotoPath
from Employees
where (@FirstName is null or FirstName = @FirstName)
and (@LastName is null or LastName = @LastName)

That worked fine, but having to check if the parameters are null is a performance hit in the stored procedure, and someone had to write the stored procedure in the first place.

With lambda expressions and LINQ to SQL, you can now do something like this and build your query incrementally:

public IQueryable<Employee> SearchEmployees(string firstName, string lastName)
{
    NorthwindDataContext dc = new NorthwindDataContext();

    // We'll start with the entire list of employees.
    IQueryable<Employee> employees = dc.Employees;

    if (!string.IsNullOrEmpty(firstName))
    {
        // Filter the employees by first name
        employees = employees.Where(e => e.FirstName == firstName);
    }

    if (!string.IsNullOrEmpty(lastName))
    {
        // Filter the employees by last name
        employees = employees.Where(e => e.LastName == lastName);
    }

    return employees;
}

Why is this better?

  • We didn’t have to write a separate stored procedure.
  • The generated SQL code won’t have to check for NULL parameters passed into a stored procedure.
  • This code is much more testable and easier to read than a stored procedure (IMO).
  • This is compiled, type safe code!

Here is the SQL code that LINQ to SQL runs as a result of this method:

SELECT [t0].[EmployeeID], [t0].[LastName], [t0].[FirstName], [t0].[Title], [t0].[TitleOfCourtesy], [t0].[BirthDate], [t0].[HireDate], [t0].[Address], [t0].[City], [t0].[Region], [t0].[PostalCode], [t0].[Country], [t0].[HomePhone], [t0].[Extension], [t0].[Photo], [t0].[Notes], [t0].[ReportsTo], [t0].[PhotoPath]
FROM [dbo].[Employees] AS [t0]
WHERE [t0].[LastName] = @p0
-- @p0: Input NVarChar (Size = 6; Prec = 0; Scale = 0) [Kruger]

I’m not saying that stored procedures are obsolete. There will still be cases where you have a query that is so complex that it’s easier to do it in a stored procedure, or it may not be possible to do it in LINQ at all. But LINQ to SQL is allowing me to scrap many of the stored procedures that I used in the past.

More to come…

Over the next few weeks, I’ll post in more detail about how we are using LINQ to SQL and some of the things we’ve had to do to make it work.

January 19, 2008 by Jon Kruger
Uncategorized

Why I should’ve gone to CodeMash last year

I went to CodeMash this year. Last year I did not. Sure, I knew that there would be a lot of good talks, but can’t I learn the same information by reading books and blogs, listening to podcasts, etc.?

Now I see why I was wrong. People I work with talk about the value of being involved in the .NET community, and now I see why they are right.

Sure, the talks were great. But by far the best part is being able to sit down with people who know way more than me and ask them about problems that I’m having right now on my current project. That kind of free advice is invaluable.

In the technology world, there is always tons of new stuff out there, and there’s no way that I can keep up with it all (especially with a wife and a kid on the way). If I want to be someone who can make good architectural decisions, how can I do that without knowing what’s out there? Since I can’t keep up with it all myself, it helps to have other people who can.

So I plan on trying to be more involved in the local .NET community (user groups, blogging, etc.), and I’m really excited about it. Hopefully I can make some worthwhile contributions of my own while I’m at it.

January 13, 2008 by Jon Kruger
Uncategorized

Minor batch file tricks

Just so I don’t forget how to do these things…

Remove quotes:
SET Line="C:\Program Files\"
SET Line=%Line:"=%
echo %Line%
REM outputs: C:\Program Files\

Remove a trailing backslash:
SET Line=C:\Windows\
IF "%Line:~-1%"=="\" SET Line=%Line:~0,-1%
echo %Line%
REM outputs: C:\Windows

January 7, 2008 by Jon Kruger
.NET

Singing the praises of continuous integration

One thing that almost every project seems to struggle with is unit testing. Not so much the writing of the tests as the fact that inevitably you will get really busy, no one will run all of the unit tests for a week or so, and then all of a sudden you decide to run them and you find out that half of them are failing.

Many times these are easy to fix, but when there are so many failing, you don’t have time to dive in and fix them all. So no one fixes them, and the unit tests become pretty much worthless because you can’t count on any of them.

At this point, many people stop writing unit tests because they’re not used to running them anymore, and they forget about writing them altogether.

Someone will eventually decide that the unit tests need to be fixed and will spend an entire week fixing them all up. But by then the code coverage is lacking because people had stopped writing tests (see above), and you don’t have time to add more because you’re a week behind from fixing the old ones.

For the first time I am working on a project where none of this is happening. Much of the credit goes to my co-workers, who do a great job of writing lots of good tests and keeping them up. But what is going to keep everything going is using continuous integration with TFS 2008.

I know, some people are getting ready to click the comment button and say that they’ve been doing CI with CruiseControl for years. CruiseControl is great, I won’t deny that.

Much to my surprise, it took me no more than 10 minutes to set up our CI build using TFS 2008. Now I can look at a dashboard screen and see that all 684 of our unit tests have been passing all day. If a check-in causes a test to break, everyone gets an email saying so and TFS automatically creates a bug for the person that broke it. So we stop and fix the tests right away and we get back to work.

Next up is to figure out how we can configure our USB Missile Launcher to automatically shoot someone when they break the tests!

December 20, 2007 by Jon Kruger
Architecture

Frameworks and layers

I’ve recently started on a new project where we are rewriting a legacy app basically from scratch. This means that the first coding task is to set up all of the architecture, frameworks, and layers that we will use for the rest of the project.

I think that this is the most crucial time in the life of the project. The work that is done now will affect many things, including:

  • How long it will take to develop the rest of the application
  • How long it will take for other developers to understand the architecture (learning curve) so that they can be productive
  • How easy it will be to add new features and make changes several years later (after many of the original developers are no longer on the project)
  • Whether or not the project ultimately succeeds

Those are some important issues! So we better get it right!

Many of the problems I have seen have come from having too many unnecessary layers to deal with in the project. Let’s look at some common rationalizations for layers and tiers in project architecture.

“By separating the business entities and logic from the database code, we can insulate the business code so that it can work with any database.”

I would hope that almost every application does this one. I was on a project that used NHibernate and we were able to rip out Oracle and replace it with SQL Server in a day or so. This separation is fairly easy to implement, especially if you use an O/R mapper. It also lets you write unit tests against the business layer so that you can make sure that database changes don’t break your application.
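
As a minimal sketch of the kind of seam I mean (the names here are hypothetical), the business code talks to an abstraction and never to the database directly:

// The business layer depends on this interface; the NHibernate (or
// LINQ to SQL, or hand-rolled ADO.NET) implementation lives behind it.
public interface IEmployeeRepository
{
    Employee GetById(int employeeId);
    void Save(Employee employee);
}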

“We’ll have a separate UI layer and business layer and keep the validation code out of the UI.”

This is good on many fronts — you are able to write a new UI and use the existing business layer (e.g. put a web front end on top of a business layer that was used in a Winforms app, or maybe you’re really brave and you move from Winforms to WPF!). Also, in most cases you will need certain pieces of validation and logic code that can’t just live in one screen on the UI, and putting validation logic in the UI makes it impossible to unit test.

“Let’s write our data layer so that we could plug in any database with any schema so that it will work without our business layer knowing that anything changed.”

I really have to question this one. Let’s say that there is a possibility that we could have another database schema that we might have to plug in. But let’s think about this some more.

  • Honestly, what are the chances that this will ever happen? If we’re going to invest a significant amount of time in order to have this flexibility, we better be pretty sure that we’re going to have to plug in another database someday.
  • Even if we did someday have to plug in a different database schema, would we even be able to get it working? Can you really switch to a drastically different database schema and have the app be fast enough? What if the new database has different constraints and foreign keys?
  • Would it be easier to write some kind of integration that would bring the data from the new database into our database? No one likes to write integration code, but it gets the job done and it won’t affect the performance of the application.
  • Having a bulkier architecture to deal with will affect how fast you can develop pretty much any feature that you want to add to the application. Also, when a new developer joins your team, it will affect how fast they can get up to speed (and they might be bitter at you for making their life more painful).

We need to always remember why we are writing code in the first place — to provide business value. Sure, writing applications can be fun, but they’re not paying us just to have fun!

Frameworks and layers are meant to serve us… if you feel like you’re a slave to your framework, maybe you need to rethink how your project is structured.

I like to keep things simple. Use existing code libraries whenever possible (such as Enterprise Library, an O/R mapper, etc.). Don’t create some extra layer just because of what we might have to do in the future, when we don’t even know if we’re going to have to do it. Design your application to be flexible so that you can adapt to change, but don’t burden yourself with something that is just going to make everything more difficult in the process.

November 5, 2007 by Jon Kruger
.NET

Creating a custom handler for the Policy Injection Application Block

I’ve thought for a while that the Policy Injection Application Block looked interesting, and now I’ve finally had a chance to use it. The basic idea is that you can wrap a method call with a “handler” that executes custom code before and after the actual method is executed. The block comes with a bunch of handlers out of the box, but you can also add custom handlers that you can use either by putting a custom attribute on a method or by adding to the configuration file. This post explains in more detail how to use the Policy Injection Application Block.

I’ve taken the Policy Injection Quick Start solution and added a custom handler as an example. Here’s a quick overview of what I did:

  • Added four files:
    • MyHandler.cs – this is the file where you write the custom code that you want to execute before and after the actual method call (see the sketch after this list).
    • MyHandlerAssembler.cs – creates a MyHandler object from a configuration object.
    • MyHandlerAttribute.cs – attribute that creates a MyHandler object when placed on a class, method, or property.
    • MyHandlerData.cs – stores data from custom attributes when handlers are created in the configuration file.
  • There are two ways to add a handler, and either way will accomplish the same purpose.
    • Place a [MyHandler] attribute on a method, property, or class. In the BankAccount.cs class, I decorated the Deposit() method with a [MyHandler] attribute. If you run the application, click the Deposit button, enter a value, and click OK, code in MyHandler.Invoke() will be executed and you’ll see some stuff in the output window.
    • Add to the <policyInjection> section of the app.config file. If you open the app.config file and search for “My Custom Stuff” you will find the section that I have added. If you run the app and click the “Balance Inquiry” button, code in MyHandler.Invoke() will be executed and you’ll see some stuff in the output window.
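
As a rough sketch of the interesting part of MyHandler.cs (based on the EntLib 3.x ICallHandler interface; the details may differ between versions):

public class MyHandler : ICallHandler
{
	public IMethodReturn Invoke(IMethodInvocation input, GetNextHandlerDelegate getNext)
	{
		// Custom code that runs before the actual method call.
		Debug.WriteLine("Entering " + input.MethodBase.Name);

		// Invoke the next handler in the pipeline (and eventually the
		// real method).
		IMethodReturn result = getNext()(input, getNext);

		// Custom code that runs after the actual method call.
		Debug.WriteLine("Leaving " + input.MethodBase.Name);

		return result;
	}
}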

Here is the quick start project with my changes included.

Hope this helps!

November 4, 2007 by Jon Kruger

About Me

I am a software developer and technical leader in Columbus, OH, specializing in software solutions, project leadership, and Agile coaching and training in a wide range of industries and environments. Find out more here...
