software solutions / project leadership / agile coaching and training

LINQ to SQL is NOT dead!

Posted on June 6, 2009

Ever since Microsoft announced that the Entity Framework was their ORM of choice, people everywhere have been saying, “LINQ to SQL is dead!” A lot of people feel like they’re not allowed to use LINQ to SQL anymore and that they have to use Entity Framework instead.

In fact, LINQ to SQL is not only alive and well, Microsoft has even announced LINQ to SQL improvements in .NET 4.0, including finally letting you use ITable<T> for tables instead of Table<T>, which makes it much easier to test. Combine that with this open source tool that will create an IDataContext interface for you and you’re on your way to testable LINQ to SQL. So no, LINQ to SQL is not dead!!
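The idea looks something like this (a minimal sketch; the interface and member names are assumed for illustration, and the interface that tool actually generates will differ):

```csharp
// Hypothetical interface over the generated DataContext (names assumed).
// Depending on this interface instead of the concrete DataContext lets
// unit tests substitute fake ITable<T> implementations.
public interface INorthwindDataContext : IDisposable
{
    ITable<Employee> Employees { get; }
    ITable<Order> Orders { get; }
    void SubmitChanges();
}

// The designer-generated DataContext is a partial class, so the interface
// can be bolted on in our half of the partial class:
public partial class NorthwindDataContext : INorthwindDataContext { }
```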

Look, just because Microsoft says you should do something a certain way doesn’t mean that you have to do everything that they say. For example, Microsoft will almost never recommend using an open source software package over something that they’ve built (I say almost never because I’m sure they’ve done it at some point, but I’ve never seen it done). That certainly doesn’t mean that the Microsoft product is better than every open source tool.

For example, if you’re going to use an ORM, LINQ to SQL can get the job done for you, but if you’re looking for an ORM with more features, you owe it to yourself to check out Fluent NHibernate.

Helping everyday developers succeed

Posted on January 22, 2009

Software development is hard. You have to know how to use so many different tools and you have to try and keep up with it all.

Lately I’ve been hearing some griping about certain tools that people feel are trading good software design principles for ease of use. Many of these tools usually have some kind of drag-and-drop designer functionality that is designed to help you get things done faster. The griping comes from the fact that these tools can cause developers to make poor design decisions because the drag-and-drop tool led them in the wrong direction. I can’t say that I disagree with these complaints.

There are two sides to every coin, so let’s swing the pendulum the other way. Many open source libraries that I’ve seen take a different approach to design, and the people that write them say that they’re not writing them for the novice user and that you will have to be familiar with advanced design patterns in order to use these tools. I think this is a good thing because software development is complicated and it’s OK to make a library more complicated so that it fits in with good design patterns (like the need for an IoC container in FubuMVC).

Sometimes I wonder if we could do more to make these complicated tools easier to use. Take ORMs for example. Most people out there would agree that NHibernate is the most fully-functional ORM that we have. Yet I was on a project a couple years ago where developers floundered around trying to get their NHibernate mapping files to work (usually because of stupid stuff like typos, not making the .hbm.xml files Embedded Resources, not fully qualifying an assembly name, not making something virtual, etc.). Eventually we got better at it and we learned the tricks, but we wasted a ton of time trying to fix stupid mapping file issues.

For the next project I was on, we chose LINQ to SQL over NHibernate (this was in early 2008 when LINQ to SQL had just come out). I had an existing database with my tables in it, so I dragged them all into the LINQ to SQL designer and started writing business logic. It was amazing! It all just worked!

Now that led me into other pitfalls because now I had to map my objects 1:1 with my database tables, and crap from the database concerns cluttered up my model and caused a lot of grief. But even with that, we got work done much faster with LINQ to SQL than with NHibernate. Does this mean that LINQ to SQL is “better” than NHibernate? I wouldn’t say that, yet I was able to be more successful with LINQ to SQL. This is an important thing to note.

Open source tools like NHibernate are usually written by really smart people. Really smart people have no trouble figuring out complicated stuff. Sure, they may experience the same pain at first, but they pick it up quickly. All of the other really smart people that they talk to understand it too.

Not everyone is really really smart, unfortunately. I hear a lot of people harp on NHibernate and say it’s way too hard to use, that it’s not worth it, that there has to be a better way, etc. Most of these opinions come from a bad experience that they had with NHibernate.

I can’t discount these opinions that people may have. While these same people may agree that NHibernate is the most fully-featured ORM out there, for them it is not the ORM that will make them the most successful. I know a lot of people in this camp and they are not stupid people and they aren’t too lazy to learn NHibernate. It’s just that NHibernate did not lead them into the pit of success.

The fact is that most developers that are going to be using tools like NHibernate are your common, average, everyday developer — not bad developers by any stretch of the imagination, but they’re not as smart as the really smart people. I’m certainly not saying that we should sacrifice functionality and good design patterns to make things easier to use. I just think that more effort should be spent on making sure that average developers can figure out how to use something correctly, whether through documentation, good examples, or coming up with easier ways to do things. Really smart people don’t need these things to be successful with these tools, but the average developer will benefit immensely.

For me, this is what Fluent NHibernate did for NHibernate – it abstracted away a lot of what was difficult about NHibernate. Recently I’ve been able to use it on a project and it totally turned the tables. Now I get the power of NHibernate but I can use conventions to auto-wire my mappings, and the mappings are all in code instead of in XML. For me this is a huge deal. Now I get the power of NHibernate and I’m not stuck doing all of the crap that slowed me down before.
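To give a flavor of what this looks like, here’s a minimal Fluent NHibernate class map. This is a sketch only; the Employee entity and its property names are assumed for illustration:

```csharp
// A hand-written Fluent NHibernate mapping (sketch; entity and property
// names assumed). This replaces an equivalent .hbm.xml file with compiled,
// refactoring-friendly C#.
public class EmployeeMap : ClassMap<Employee>
{
    public EmployeeMap()
    {
        Id(x => x.EmployeeID);
        Map(x => x.FirstName);
        Map(x => x.LastName);
        HasMany(x => x.Orders)
            .Inverse()
            .Cascade.All();
    }
}
```

And with the auto-mapping conventions I mentioned, most of the per-property lines can disappear entirely.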

Another example of this is the reams and reams of documentation that the Microsoft patterns & practices team has written up for writing secure applications with WCF. WCF is very complex and powerful, but Microsoft did not leave us hanging and wrote up what to do in almost every scenario imaginable. They didn’t just give us a tool, they gave us what we needed to be successful with it.

We need more stuff like this.

LINQ to SQL talk stuff

Posted on February 29, 2008

Here is the sample project from my talk at the Columbus .NET Users’ Group last night. If you open the project, you’ll notice a Northwind.sql file. This is my modified version of the Northwind database (I had to add a timestamp column to the Employees table to get it to cooperate with LINQ to SQL).

Like I mentioned yesterday, if you’re new to LINQ to SQL, a great place to start is Scott Guthrie’s series of blog posts on LINQ to SQL. Here they are:

Part 1: Introduction to LINQ to SQL
Part 2: Defining our Data Model Classes
Part 3: Querying our Database
Part 4: Updating our Database
Part 5: Binding UI using the ASP:LinqDataSource Control
Part 6: Retrieving Data Using Stored Procedures
Part 7: Updating our Database using Stored Procedures
Part 8: Executing Custom SQL Expressions
Part 9: Using a Custom LINQ Expression with the <asp:LinqDatasource> control

LINQ to SQL In Disconnected/N-Tier scenarios: Saving an Object

Posted on February 10, 2008

LINQ to SQL is a great tool, but when you’re using it in an n-tier scenario, there are several problems that you have to solve. A simple example is a web service that lets you retrieve a record of data and save a record of data when the object that you are loading/saving has child lists of objects. Here are some of those problems:

1) How can you create these web services without having to create a separate set of entity objects (that is, we want to use the entity objects that the LINQ to SQL designer generates for us)?
2) What do we have to do to get LINQ to SQL to work with entities passed in through web services?
3) How do you create these loading/saving web services while keeping the amount of data passed across the wire to a minimum?

I created a sample project using the Northwind sample database. The first thing to do is to create your .dbml file and then drag tables from the Server Explorer onto the design surface. I have something that looks like this:

[LINQ to SQL designer diagram]
I’m also going to click on blank space in the design surface, go to the Properties window, and set the SerializationMode to Unidirectional. This will put the [DataContract] and [DataMember] attributes on the designer-generated entity objects so that they can be used in WCF services.
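Roughly, that setting causes the designer to emit something like this on each entity (a sketch; the real generated code is more involved):

```csharp
// Sketch of the effect of SerializationMode = Unidirectional: the generated
// entities are decorated with WCF data contract attributes so they can be
// serialized across service boundaries.
[DataContract]
public partial class Employee
{
    [DataMember(Order = 1)]
    public int EmployeeID { get; set; }

    [DataMember(Order = 2)]
    public string LastName { get; set; }
}
```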


Now I’ll create some web services that look something like this:

public class EmployeeService
{
	public Employee GetEmployee(int employeeId)
	{
		NorthwindDataContext dc = new NorthwindDataContext();
		Employee employee = dc.Employees.FirstOrDefault(e => e.EmployeeID == employeeId);
		return employee;
	}

	public void SaveEmployee(Employee employee)
	{
		// TODO
	}
}

I have two web service methods — one that loads an Employee by ID and one that saves an Employee.

Notice that both web service methods are using the entity objects generated by LINQ to SQL. There are some situations when you will not want to use the LINQ to SQL generated entities. For example, if you’re exposing a public web service, you probably don’t want to use the LINQ to SQL entities because that can make it a lot harder for you to refactor your database or your object model without having to change the web service definition, which could break code that calls the web service. In my case, I have a web service where I own both sides of the wire and this service is not exposed publicly, so I don’t have these concerns.

The GetEmployee() method is fairly straightforward — just load up the object and return it. Let’s look at how we should implement SaveEmployee().

In order for the DataContext to be able to save an object that wasn’t loaded from the same DataContext, you have to let the DataContext know about the object. How you do this depends on whether the object has ever been saved before.

How you make this determination is based on your own convention. Since I’m dealing with integer primary keys with identity insert starting at 1, I can assume that if the primary key value is < 1, this object is new.

Let's create a base class for our entity objects called BusinessEntityBase and have that class expose a property called IsNew. This property will return a boolean value based on the primary key value of the object.

namespace Northwind
{
	public abstract class BusinessEntityBase
	{
		public abstract int Id { get; set; }

		public virtual bool IsNew
		{
			get { return this.Id <= 0; }
		}
	}
}

Now we have to tell Employee to derive from BusinessEntityBase. We can do this because the entities that LINQ to SQL generates are partial classes that don't derive from any class, so we can define that in our half of the partial class.

namespace Northwind
{
	public partial class Employee : BusinessEntityBase
	{
		public override int Id
		{
			get { return this.EmployeeID; }
			set { this.EmployeeID = value; }
		}
	}
}

Now we should be able to tell if an Employee object is new or not. I'm also going to do the same thing with the Order class since the Employee object contains a list of Order objects.

namespace Northwind
{
	public partial class Order : BusinessEntityBase
	{
		public override int Id
		{
			get { return this.OrderID; }
			set { this.OrderID = value; }
		}
	}
}

OK, let's start filling out the SaveEmployee() method.

public void SaveEmployee(Employee employee)
{
	NorthwindDataContext dc = new NorthwindDataContext();
	if (employee.IsNew)
	{
		dc.Employees.InsertOnSubmit(employee);
	}
	else
	{
		dc.Employees.Attach(employee);
	}
	dc.SubmitChanges();
}

Great. So now I can call the GetEmployee() web method to get an employee, change something on the Employee object, and call the SaveEmployee() web method to save it. But when I do it, nothing happens.

The problem is with this line:

public void SaveEmployee(Employee employee)
{
	NorthwindDataContext dc = new NorthwindDataContext();
	if (employee.IsNew)
	{
		dc.Employees.InsertOnSubmit(employee);
	}
	else
	{
		dc.Employees.Attach(employee); // <-- PROBLEM HERE
	}
	dc.SubmitChanges();
}

The Attach() method attaches the entity object to the DataContext so that the DataContext can save it. But the overload that I called just attached the entity to the DataContext and didn't check to see that anything on the object had been changed. That doesn't do us a whole lot of good. Let's try this overload:

public void SaveEmployee(Employee employee)
{
	NorthwindDataContext dc = new NorthwindDataContext();
	if (employee.IsNew)
	{
		dc.Employees.InsertOnSubmit(employee);
	}
	else
	{
		dc.Employees.Attach(employee, true); // <-- UPDATE
	}
	dc.SubmitChanges();
}

This second parameter is going to tell LINQ to SQL that it should treat this entity as modified so that it needs to be saved to the database. Now when I call SaveEmployee(), I get an exception when I call Attach() that says:

An entity can only be attached as modified without original state if it declares a version member or does not have an update check policy.

What this means is that my database table does not have a timestamp column on it. Without a timestamp, LINQ to SQL can't do its optimistic concurrency checking. No big deal, I'll go add timestamp columns to the Employees and Orders tables in the database. I'll also have to add the column to the table in my DBML file. You can either add a new property to the object by right-clicking on the object in the designer and selecting Add / Property, or you can just delete the object from the designer and drag it back on from the Server Explorer.

Now the DBML looks like this:

[Updated DBML with the new Timestamp columns]

Now let's try calling SaveEmployee() again. This time it works. Here is the SQL that LINQ to SQL ran:

UPDATE [dbo].[Employees]
SET [LastName] = @p2, [FirstName] = @p3, [Title] = @p4, [TitleOfCourtesy] = @p5, [BirthDate] = @p6, [HireDate] = @p7, [Address] = @p8, [City] = @p9, [Region] = @p10, [PostalCode] = @p11, [Country] = @p12, [HomePhone] = @p13, [Extension] = @p14, [Photo] = @p15, [Notes] = @p16, [ReportsTo] = @p17, [PhotoPath] = @p18
WHERE ([EmployeeID] = @p0) AND ([Timestamp] = @p1)

SELECT [t1].[Timestamp]
FROM [dbo].[Employees] AS [t1]
WHERE ((@@ROWCOUNT) > 0) AND ([t1].[EmployeeID] = @p19)
-- @p0: Input Int (Size = 0; Prec = 0; Scale = 0) [5]
-- @p1: Input Timestamp (Size = 8; Prec = 0; Scale = 0) [SqlBinary(8)]
-- @p2: Input NVarChar (Size = 8; Prec = 0; Scale = 0) [Buchanan]
-- @p3: Input NVarChar (Size = 6; Prec = 0; Scale = 0) [Steven]
-- @p4: Input NVarChar (Size = 13; Prec = 0; Scale = 0) [Sales Manager]
-- @p5: Input NVarChar (Size = 3; Prec = 0; Scale = 0) [Mr.]
-- @p6: Input DateTime (Size = 0; Prec = 0; Scale = 0) [3/4/1955 12:00:00 AM]
-- @p7: Input DateTime (Size = 0; Prec = 0; Scale = 0) [10/17/1993 12:00:00 AM]
-- @p8: Input NVarChar (Size = 15; Prec = 0; Scale = 0) [14 Garrett Hill]
-- @p9: Input NVarChar (Size = 6; Prec = 0; Scale = 0) [London]
-- @p10: Input NVarChar (Size = 0; Prec = 0; Scale = 0) [Null]
-- @p11: Input NVarChar (Size = 7; Prec = 0; Scale = 0) [SW2 8JR]
-- @p12: Input NVarChar (Size = 2; Prec = 0; Scale = 0) [UK]
-- @p13: Input NVarChar (Size = 13; Prec = 0; Scale = 0) [(71) 555-4848]
-- @p14: Input NVarChar (Size = 4; Prec = 0; Scale = 0) [3453]
-- @p15: Input Image (Size = 21626; Prec = 0; Scale = 0) [SqlBinary(21626)]
-- @p16: Input NText (Size = 448; Prec = 0; Scale = 0) [Steven Buchanan graduated from St. Andrews University, Scotland, with a BSC degree in 1976.  Upon joining the company as a sales representative in 1992, he spent 6 months in an orientation program at the Seattle office and then returned to his permanent post in London.  He was promoted to sales manager in March 1993.  Mr. Buchanan has completed the courses "Successful Telemarketing" and "International Sales Management."  He is fluent in French.]
-- @p17: Input Int (Size = 0; Prec = 0; Scale = 0) [2]
-- @p18: Input NVarChar (Size = 37; Prec = 0; Scale = 0) [http://accweb/emmployees/buchanan.bmp]
-- @p19: Input Int (Size = 0; Prec = 0; Scale = 0) [5]

Notice that it passed back all of the properties in the SQL statement -- not just the one that I changed (I only changed one property when I made this call). But isn't it horribly inefficient to save every property when only one property changed?

Well, you don't have much choice here. There is another overload of Attach() that takes in the original version of the object instead of the boolean parameter. In other words, it will compare your object with the original version, see which properties are different, and then only update those properties in the SQL statement.

Unfortunately, there's no good way to use this overload in this case, nor do I think you would want to. I suppose you could load up the existing version of the entity from the database and then pass that into Attach() as the "original", but now we're doing even more work -- we're doing a SELECT that selects the entire row, and then we're doing an UPDATE that only updates the changed properties. I would rather stick with the one UPDATE that updates everything.
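For completeness, here's roughly what using that overload would look like. This is a sketch only (and not something I'd recommend here, for the reasons above); the separate read-only context is an assumption, used so the saving context doesn't already track an entity with the same key:

```csharp
public void SaveEmployeeWithOriginal(Employee employee)
{
	// Load the "original" copy with a throwaway, read-only context.
	Employee original;
	using (NorthwindDataContext readDc = new NorthwindDataContext())
	{
		readDc.ObjectTrackingEnabled = false;
		original = readDc.Employees.First(e => e.EmployeeID == employee.EmployeeID);
	}

	using (NorthwindDataContext dc = new NorthwindDataContext())
	{
		// Attach(current, original) diffs the two copies and generates an
		// UPDATE containing only the changed columns.
		dc.Employees.Attach(employee, original);
		dc.SubmitChanges();
	}
}
```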

LINQ to SQL: a three-month checkpoint

Posted on January 19, 2008

We are now about 3 months into our project using LINQ to SQL. Our project is a Winforms app using SQL Server 2005 (LINQ to SQL only works with SQL Server). We are planning on moving to an n-tier system with a WCF service layer, but for now our application talks directly to the database. Even though we don’t have the service layer in there yet, we’re architecting the system as if we did, so we’re running into many of the same issues that we will have when we actually have the service layer.

Microsoft hasn’t always been known for stellar 1.0 releases (e.g. Vista, the Zune, etc.). When it comes to something that’s in the .NET Framework, I had a little more faith because it’s a little harder for them to go back and fix something if they screw it up. I figured that because of that, they’d make sure they got it right.

LINQ to SQL is not complete. There are some issues that Microsoft knows about that LINQ to SQL doesn’t currently handle. None of these issues are show-stoppers. We’ve had to jump through some hoops to get around them, but we’ve been able to do everything that we’ve needed to do. I’ll get into more detail on the hoop-jumping in later posts.

Even with all of these issues, I give LINQ to SQL a rousing endorsement. I’ve always been an ORM fan, and I’ve used NHibernate on several projects.

Getting Started

When we started our project, we inherited a legacy database that has been developed over the last 10 years. There are some interesting things in the database, such as numerics being used as primary keys, tables that aren’t normalized, and spotty referential integrity.

For the first week, three of us dragged all of the tables onto the LINQ to SQL designer and renamed all the properties to more friendly names. This was a fairly painless process. Now we had all of our entity objects created and ready to go.

Well, almost. We created a BusinessEntityBase class and all of the entity objects derive from this class. We do this by creating partial classes that match up with the classes generated by LINQ to SQL (all of the classes generated by LINQ to SQL are partial classes) and specifying that those classes derive from BusinessEntityBase. We don’t have much in the BusinessEntityBase class — the main thing in there is an abstract Id property that each entity must override to specify the value of the primary key. We use this to keep track of whether an entity object is unsaved or not.

At this point, we were ready to start working! All of our entity objects were generated for us. Contrast this with NHibernate, where we had to write (or generate) all of our entity objects and the NHibernate mapping files. It takes most people a long time to figure out how to write those NHibernate mapping files!

Working with LINQ

“LINQ” is the general term for the syntax that we now use to write queries. These queries can be executed against a database (LINQ to SQL), a collection (LINQ to Objects), and various other things (LINQ to Amazon).
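For instance, the same query syntax runs against a plain in-memory array with LINQ to Objects. A self-contained sketch:

```csharp
using System;
using System.Linq;

class LinqToObjectsDemo
{
    static void Main()
    {
        string[] names = { "Nancy", "Andrew", "Janet", "Anne" };

        // Same query syntax as a database query, but this one executes
        // entirely in memory against the array.
        var aNames = from n in names
                     where n.StartsWith("A")
                     orderby n
                     select n;

        Console.WriteLine(string.Join(", ", aNames.ToArray())); // Andrew, Anne
    }
}
```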

The LINQ syntax and particularly lambda expressions were very foreign concepts at first. You’re just not used to using those kinds of things in C# code. Then one day it just clicks, and you start discovering all kinds of new ways to use LINQ queries and lambda expressions.

Personally, I think lambda expressions are more revolutionary than the LINQ syntax. They don’t provide you with anything that you couldn’t do in .NET 2.0 with anonymous delegates, but now the syntax is much more concise. You can do what you want to do in fewer lines of code, which also makes for more readable code. Here’s an example of why I like lambda expressions.
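To isolate the syntax difference, compare a .NET 2.0 anonymous delegate with the equivalent C# 3.0 lambda (a self-contained sketch):

```csharp
using System;

class LambdaDemo
{
    static void Main()
    {
        // .NET 2.0: an anonymous delegate
        Predicate<int> isEvenOld = delegate(int n) { return n % 2 == 0; };

        // C# 3.0: the same predicate as a lambda expression
        Predicate<int> isEven = n => n % 2 == 0;

        Console.WriteLine(isEvenOld(4)); // True
        Console.WriteLine(isEven(7));    // False
    }
}
```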

Let’s say that I’m working with everyone’s favorite sample database (Northwind) and I want to find an Employee by first name, last name, or both. In the past, you probably wrote a stored procedure that looked like this:

create procedure EmployeeSearch
     @FirstName varchar(20),
     @LastName varchar(20)
as
select EmployeeID, FirstName, LastName, Title, TitleOfCourtesy, 
    BirthDate, HireDate, Address, City, Region, PostalCode, Country, 
    HomePhone, Extension, Photo, Notes, ReportsTo, PhotoPath
from Employees
where (@FirstName is null or FirstName = @FirstName)
and (@LastName is null or LastName = @LastName)

That worked fine, but having to check if the parameters are null is a performance hit in the stored procedure, and someone had to write the stored procedure in the first place.

With lambda expressions and LINQ to SQL, you can now do something like this and build your query incrementally:

public IQueryable<Employee> SearchEmployees(string firstName, string lastName)
{
    NorthwindDataContext dc = new NorthwindDataContext();

    // We'll start with the entire list of employees.
    IQueryable<Employee> employees = dc.Employees;

    if (!string.IsNullOrEmpty(firstName))
    {
        // Filter the employees by first name
        employees = employees.Where(e => e.FirstName == firstName);
    }

    if (!string.IsNullOrEmpty(lastName))
    {
        // Filter the employees by last name
        employees = employees.Where(e => e.LastName == lastName);
    }

    return employees;
}

Why is this better?

  • We didn’t have to write a separate stored procedure.
  • The generated SQL code won’t have to check for NULL parameters passed into a stored procedure.
  • This code is much more testable and easier to read than a stored procedure (IMO).
  • This is compiled, type safe code!

Here is the SQL code that LINQ to SQL runs as a result of this method:

SELECT [t0].[EmployeeID], [t0].[LastName], [t0].[FirstName], [t0].[Title], [t0].[TitleOfCourtesy], [t0].[BirthDate], [t0].[HireDate], [t0].[Address], [t0].[City], [t0].[Region], [t0].[PostalCode], [t0].[Country], [t0].[HomePhone], [t0].[Extension], [t0].[Photo], [t0].[Notes], [t0].[ReportsTo], [t0].[PhotoPath]
FROM [dbo].[Employees] AS [t0]
WHERE [t0].[LastName] = @p0
-- @p0: Input NVarChar (Size = 6; Prec = 0; Scale = 0) [Kruger]

I’m not saying that stored procedures are obsolete. There will still be cases where you have a query that is so complex that it’s easier to do it in a stored procedure, or it may not be possible to do it in LINQ at all. But LINQ to SQL is allowing me to scrap many of the stored procedures that I used in the past.

More to come…

Over the next few weeks, I’ll post in more detail about how we are using LINQ to SQL and some of the things we’ve had to do to make it work.

IQueryable<T> vs. IEnumerable<T> in LINQ to SQL queries

Posted on October 19, 2007

I ran into something interesting today when working with LINQ to SQL. Take a look at these three code snippets:


// #1
NorthwindDataContext dc = new NorthwindDataContext();
IEnumerable<Product> list = dc.Products
     .Where(p => p.ProductName.StartsWith("A"));
list = list.Take(10);

// #2
NorthwindDataContext dc = new NorthwindDataContext();
IEnumerable<Product> list2 = dc.Products
     .Where(p => p.ProductName.StartsWith("A"))
     .Take(10);

// #3
NorthwindDataContext dc = new NorthwindDataContext();
IQueryable<Product> list3 = dc.Products
     .Where(p => p.ProductName.StartsWith("A"));
list3 = list3.Take(10);

What would you expect the generated SQL statements of all of these to be? I was expecting something like this:

SELECT TOP 10 [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock], [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued]
FROM [Products] AS [t0]
WHERE [t0].[ProductName] LIKE @p0

Which is what you get in #2 and #3, but not in #1, where you get this:

SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock], [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued]
FROM [Products] AS [t0]
WHERE [t0].[ProductName] LIKE @p0

(notice the “TOP 10” is missing)

I’m not exactly sure what is going on under the hood, but it appears that when you do a query (or a portion of a query in this case), you cannot save the result as an IEnumerable<T>, add to the query in a later statement (as I did in #1), and have LINQ to SQL know that it needs to combine the statements into one query. However, if you store the first part of your query as IQueryable<T>, then you can add onto that query later (before it actually gets executed, of course).
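This lines up with how extension methods bind: the compiler chooses Queryable.Take or Enumerable.Take based on the variable's static type, not the object's runtime type. A self-contained sketch:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class StaticTypeDemo
{
    static void Main()
    {
        IQueryable<int> q = new[] { 1, 2, 3, 4 }.AsQueryable().Where(n => n > 1);
        IEnumerable<int> e = q; // same object, weaker static type

        var q2 = q.Take(2); // binds to Queryable.Take: stays composable
        var e2 = e.Take(2); // binds to Enumerable.Take: runs in memory later

        Console.WriteLine(q2 is IQueryable<int>); // True
        Console.WriteLine(e2 is IQueryable<int>); // False
    }
}
```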

Bottom line — if you have a LINQ to SQL query and you can save it as an IQueryable<T> instead of an IEnumerable<T>, do so, so that you keep the flexibility to add onto that query later.
