Jon Kruger
Uncategorized

Minor batch file tricks

Just so I don’t forget how to do these things…

Remove quotes:
SET Line="C:\Program Files\"
SET Line=%Line:"=%
echo %Line%
This will output: C:\Program Files\

Remove a trailing backslash:
SET Line=C:\Windows\
IF "%Line:~-1%"=="\" SET Line=%Line:~0,-1%
echo %Line%
This will output: C:\Windows

January 7, 2008 by Jon Kruger
.NET

Singing the praises of continuous integration

One thing that almost every project seems to struggle with is unit testing. Not so much the writing of the tests as the fact that inevitably you will get really busy, no one will run all of the unit tests for a week or so, and then all of a sudden you decide to run them and find out that half of them are failing.

Many times these are easy to fix, but when there are so many failing, you don’t have time to dive in and fix them all. So no one fixes them, and the unit tests become pretty much worthless because you can’t count on any of them.

At this point, many people aren’t in the habit of running the unit tests anymore, so they forget about writing new ones altogether.

Someone will eventually decide that the unit tests need to be fixed, so someone spends an entire week fixing them all up. But by then the code coverage is lacking because people had stopped writing tests (see above), and you don’t have time to add new ones because you’re already a week behind from fixing the old ones.

For the first time I am working on a project where none of this is happening. Much of the credit goes to my co-workers, who do a great job of writing lots of good tests and keeping them up. But what is going to keep everything going is using continuous integration with TFS 2008.

I know, some people are getting ready to click the comment button and say that they’ve been doing CI with CruiseControl for years. CruiseControl is great, I won’t deny that.

Much to my surprise, it took me no more than 10 minutes to set up our CI build using TFS 2008. Now I can look at a dashboard screen and see that all 684 of our unit tests have been passing all day. If a check-in causes a test to break, everyone gets an email saying so and TFS automatically creates a bug for the person that broke it. So we stop and fix the tests right away and we get back to work.

Next up is to figure out how we can configure our USB Missile Launcher to automatically shoot someone when they break the tests!

December 20, 2007 by Jon Kruger
Architecture

Frameworks and layers

I’ve recently started on a new project where we are rewriting a legacy app basically from scratch. This means that the first coding task is to set up all of the architecture, frameworks, and layers that we will use for the rest of the project.

I think that this is the most crucial time in the life of the project. The work that is done now will affect many things, including:

  • How long it will take to develop the rest of the application
  • How long it will take for other developers to understand the architecture (learning curve) so that they can be productive
  • How easy it will be to add new features and make changes several years later (after many of the original developers are no longer on the project)
  • Whether or not the project ultimately succeeds

Those are some important issues! So we better get it right!

Many of the problems I have seen have come from having too many unnecessary layers to deal with in the project. Let’s look at some common rationalizations for layers and tiers in project architecture.

“By separating the business entities and logic from the database code, we can insulate the business code so that it can work with any database.”

I would hope that almost every application does this one. I was on a project that used NHibernate and we were able to rip out Oracle and replace it with SQL Server in a day or so. This separation is fairly easy to implement, especially if you use an O/R mapper. It also lets you write unit tests against the business layer so that you can make sure that database changes don’t break your application.
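
As a rough sketch of what that separation can look like (the interface and class names here are made up for illustration, not from any particular project), the business code talks to an interface and never to the database directly:

// Hypothetical example: the business layer depends only on this interface.
// An NHibernate-backed implementation (or an in-memory fake in a unit test)
// gets plugged in from the outside, so swapping databases doesn't touch
// business code.
public interface IOrderRepository
{
    Order GetById(int orderId);
    void Save(Order order);
}

public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public class OrderService
{
    private readonly IOrderRepository _orders;

    public OrderService(IOrderRepository orders)
    {
        _orders = orders;
    }

    public void ApplyDiscount(int orderId, decimal percent)
    {
        // Pure business logic; it has no idea whether the data came from
        // Oracle, SQL Server, or a stub in a unit test.
        Order order = _orders.GetById(orderId);
        order.Total -= order.Total * (percent / 100m);
        _orders.Save(order);
    }
}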

“We’ll have a separate UI layer and business layer and keep the validation code out of the UI.”

This is good on many fronts — you are able to write a new UI and use the existing business layer (e.g. put a web front end on top of a business layer that was used in a Winforms app, or maybe you’re really brave and you move from Winforms to WPF!). Also, in most cases you will need certain pieces of validation and logic code that can’t just live in one screen on the UI, and putting validation logic in the UI makes it impossible to unit test.
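
As a small illustration (the Customer class and its rules are hypothetical, just to show the shape of it), validation that lives on the business object can be reused by any UI and exercised directly by a unit test:

using System.Collections.Generic;

// Hypothetical example: the rules live in the business layer, not behind a
// button click, so WinForms, web, or WPF front ends can all reuse them and a
// plain unit test can assert on the result.
public class Customer
{
    public string Name { get; set; }
    public string Email { get; set; }

    public IList<string> Validate()
    {
        List<string> errors = new List<string>();

        if (string.IsNullOrEmpty(Name))
            errors.Add("Name is required.");

        if (string.IsNullOrEmpty(Email) || !Email.Contains("@"))
            errors.Add("Email address is not valid.");

        return errors;
    }
}

The UI just displays whatever Validate() returns.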

“Let’s write our data layer so that we could plug in any database with any schema so that it will work without our business layer knowing that anything changed.”

I really have to question this one. Let’s say there is a chance that someday we might have to plug in another database schema. But let’s think about this some more.

  • Honestly, what are the chances that this will ever happen? If we’re going to invest a significant amount of time in order to have this flexibility, we better be pretty sure that we’re going to have to plug in another database someday.
  • Even if we did someday have to plug in a different database schema, would we even be able to get it working? Can you really switch to a drastically different database schema and have the app be fast enough? What if the new database has different constraints and foreign keys?
  • Would it be easier to write some kind of integration that would bring the data from the new database into our database? No one likes to write integration code, but it gets the job done and it won’t affect the performance of the application.
  • Having a bulkier architecture to deal with will affect how fast you can develop pretty much any feature that you want to add to the application. Also, when a new developer joins your team, it will affect how fast they can get up to speed (and they might be bitter at you for making their life more painful).

We need to always remember why we are writing code in the first place — to provide business value. Sure, writing applications can be fun, but they’re not paying us just to have fun!

Frameworks and layers are meant to serve us… if you feel like you’re a slave to your framework, maybe you need to rethink how your project is structured.

I like to keep things simple. Use existing code libraries whenever possible (such as Enterprise Library, an O/R mapper, etc.). Don’t create some extra layer just because of what we might have to do in the future, when we don’t even know if we’re going to have to do it. Design your application to be flexible so that you can adapt to change, but don’t burden yourself with something that is just going to make everything more difficult in the process.

November 5, 2007 by Jon Kruger
.NET

Creating a custom handler for the Policy Injection Application Block

I’ve thought for a while that the Policy Injection Application Block looked interesting, but now I’ve finally had a chance to use it. The basic idea is that you can wrap a method call with a “handler” which will execute custom code before and after the actual method is executed. The block comes with a bunch of handlers out of the box, but you can also add custom handlers that you can use either by putting a custom attribute on a method or by adding to the configuration file. This post explains in more detail how to use the Policy Injection Application Block.

I’ve taken the Policy Injection Quick Start solution and added a custom handler as an example. Here’s a quick overview of what I did:

  • Added four files:
    • MyHandler.cs – this is the file where you will write the custom code that you want to execute before and after the actual method call.
    • MyHandlerAssembler.cs – creates a MyHandler object from a configuration object.
    • MyHandlerAttribute.cs – attribute that creates a MyHandler object when placed on a class, method, or property.
    • MyHandlerData.cs – stores data from custom attributes when handlers are created in the configuration file.
  • There are two ways to add a handler, and either way will accomplish the same purpose.
    • Place a [MyHandler] attribute on a method, property, or class. In the BankAccount.cs class, I decorated the Deposit() method with a [MyHandler] attribute. If you run the application, click the Deposit button, enter a value, and click OK, code in MyHandler.Invoke() will be executed and you’ll see some stuff in the output window.
    • Add to the <policyInjection> section of the app.config file. If you open the app.config file and search for “My Custom Stuff” you will find the section that I have added. If you run the app and click the “Balance Inquiry” button, code in MyHandler.Invoke() will be executed and you’ll see some stuff in the output window.

Here is the quick start project with my changes included.
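
If you just want the gist without downloading the solution, the heart of it is MyHandler.Invoke(). Here’s a simplified sketch of the shape of it (not the exact code from the sample, and the namespace for ICallHandler and friends varies a bit between Enterprise Library versions):

using System.Diagnostics;
// Note: the exact namespace for these types depends on your Enterprise Library version.
using Microsoft.Practices.EnterpriseLibrary.PolicyInjection;

public class MyHandler : ICallHandler
{
    public IMethodReturn Invoke(IMethodInvocation input, GetNextHandlerDelegate getNext)
    {
        // Custom code that runs before the actual method call.
        Debug.WriteLine("Entering " + input.MethodBase.Name);

        // Invoke the rest of the handler pipeline (and eventually the real method).
        IMethodReturn result = getNext()(input, getNext);

        // Custom code that runs after the actual method call.
        Debug.WriteLine("Leaving " + input.MethodBase.Name);

        return result;
    }
}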

Hope this helps!

November 4, 2007 by Jon Kruger
LINQ

IQueryable<T> vs. IEnumerable<T> in LINQ to SQL queries

I ran into something interesting today when working with LINQ to SQL. Take a look at these three code snippets:

#1:

NorthwindDataContext dc = new NorthwindDataContext();
IEnumerable<Product> list = dc.Products
     .Where(p => p.ProductName.StartsWith("A"));
list = list.Take(10);
Debug.WriteLine(list.Count());

#2:

NorthwindDataContext dc = new NorthwindDataContext();
IEnumerable<Product> list2 = dc.Products
     .Where(p => p.ProductName.StartsWith("A"))
     .Take(10);
Debug.WriteLine(list2.Count());

#3:

NorthwindDataContext dc = new NorthwindDataContext();
IQueryable<Product> list3 = dc.Products
     .Where(p => p.ProductName.StartsWith("A"));
list3 = list3.Take(10);
Debug.WriteLine(list3.Count());

What would you expect the generated SQL statements of all of these to be? I was expecting something like this:

SELECT TOP 10 [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock], [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued]
FROM [Products] AS [t0]
WHERE [t0].[ProductName] LIKE @p0

Which is what you get in #2 and #3, but not in #1, where you get this:

SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock], [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued]
FROM [Products] AS [t0]
WHERE [t0].[ProductName] LIKE @p0

(notice the “TOP 10” is missing)

I’m not exactly sure what is going on under the hood, but it appears that when you do a query (or a portion of a query in this case), you cannot save the result as an IEnumerable<T>, add to the query in a later statement (as I did in #1), and have LINQ to SQL know that it needs to combine the statements into one query. However, if you store the first part of your query as IQueryable<T>, then you can add onto that query later (before it actually gets executed, of course).
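
My best guess is that it comes down to which Take extension method you end up calling. When the variable is declared as IEnumerable<T>, Take binds to Enumerable.Take and runs in memory after the database query executes; when it’s declared as IQueryable<T>, Take binds to Queryable.Take and gets folded into the expression tree that LINQ to SQL translates into SQL. Something like this:

NorthwindDataContext dc = new NorthwindDataContext();

IQueryable<Product> query = dc.Products
     .Where(p => p.ProductName.StartsWith("A"));

// Still IQueryable<Product>: Take binds to Queryable.Take, becomes part of the
// expression tree, and shows up in the generated SQL (TOP 10).
IQueryable<Product> inSql = query.Take(10);

// Declared as IEnumerable<Product>: Take binds to Enumerable.Take, so the WHERE
// runs on the server and the Take(10) happens in memory on the client.
IEnumerable<Product> sequence = query;
IEnumerable<Product> inMemory = sequence.Take(10);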

Bottom line: if you have a LINQ to SQL query and you can save it as an IQueryable<T> instead of an IEnumerable<T>, do so, so that you keep the flexibility to continue adding onto that query.

October 19, 2007 by Jon Kruger
Uncategorized

I’ve been published!

A couple of my old posts have been published on ASPAlliance.com!

Getting the most out of Windows Forms Data Binding
Handling Windows Forms Data Binding Errors

July 24, 2007 by Jon Kruger
.NET, NHibernate

How to step into NHibernate code

In 10 minutes you can be stepping into NHibernate code to see exactly what it’s doing with your project. Here’s how to do it:

1) Go download the NHibernate source code from here. As I’m writing this, the latest release version is 1.2.0.GA. (I’m assuming here that you already have NHibernate set up to work with your application.)
2) Unzip the source to a location on your machine.
3) Open the NHibernate.sln file in Visual Studio that corresponds to the version of the .NET framework that you are using (1.1 or 2.0). If you really want to build everything, including the Iesi.Collections assemblies and all the unit tests, then open the NHibernate-Everything.sln file, but for me this is overkill because I usually only want to step into the NHibernate code. If you build NHibernate-Everything.sln, you’ll have to have NUnit 2.2.8 installed to build the unit test projects.
4) Make sure the Debug configuration is selected and build the entire solution.
5) If you open the \src\NHibernate\bin\Debug-2.0 (or Debug-1.1) folder, you will see all of the assemblies along with the .pdb files (if you just build the NHibernate-1.1/-2.0.sln file, you will only see one .pdb — NHibernate.pdb). Copy all of the .dll and .pdb files to the location where you currently have your NHibernate .dll files in your project.

Now when you debug your project, you should be able to step into NHibernate code!

Obvious caveat: we’re assuming that the source code that you downloaded is the exact same code that they used to build the release assemblies. I don’t see any reason why this wouldn’t be the case, but I’ve run into this problem before with other third party packages (like Infragistics NetAdvantage controls). So it might not be a bad idea to remove the debug NHibernate dlls when you’re done stepping into them and put the release dlls back in so that you don’t get burned.

Hope this helps! If you encounter any problems with this process, please let me know.

June 29, 2007 by Jon Kruger
.NET, WPF

My First WPF App

12 hours in the car coming back from the Outer Banks means that I finally had time to delve into WPF. It seems like every other week Microsoft is telling us about some new technology that is coming out. I figured that rather than try and learn them all at once, I’ll concentrate on just one. I picked WPF because WPF will be used extensively in Silverlight and because all of the WPF apps I’ve seen so far look awesome.

WPF is a big change from past desktop application platforms in that it encourages you to design a rich user experience. WinForms apps almost always have the same look and feel — buttons look the same in every app, everyone uses the same menus, the same toolbars, etc., and it’s really difficult to change the look and feel of them. Now Microsoft is pushing you to use your imagination and creativity to do whatever you want, while giving you the tools you need to do so.

This is both exciting and challenging. It’s really hard to put aside all of the familiar UI concepts, layouts, colors, etc. and think outside the box. With WPF, you can do stuff that you would’ve never dreamed of attempting in Windows Forms, and it’s not that hard to do. Coming up with the ideas is the hard part!

Here’s what you need to get started:

  • .NET Framework 3.0
  • Visual Studio 2005 extensions for .NET Framework 3.0 – right now this is still a CTP (Nov. 06)
  • Expression Blend – you can download a fully featured 30 day trial

These installs all take a long time, so allow plenty of time. :)

Here’s where I learned everything that I know:

  • Reading Windows Presentation Foundation Unleashed, by Adam Nathan. I’m only about halfway through, but this book does an excellent job of walking you through everything. Normally I don’t buy dead tree books, but when you are learning something completely new, it’s nice to have someone walk you through everything in the correct order.
  • Downloaded the Family.Show app. This app was written in WPF as a showcase containing examples of just about everything you might want to do in a WPF app.
  • Watched the Family.Show guys talk about their app in this video.
  • Watched this video on designing rich client experiences with Expression Blend and WPF.

For my application I chose to solve a problem that I have — I can’t keep track of car repairs and when they need to be done. So I’m going to create a simple application that will allow me to record car repairs that I do and let me know when I’m due to have the repairs done again.

Most WPF apps that you’ve seen out there do lots of crazy animation and 3D rotating of panels. I tried to stay away from that for now and just do the basic stuff that I would do in a normal application. In WPF, the basic stuff is actually harder in some cases than the animation and 3D rotations.

Some thoughts so far:

  • I did more work in Expression Blend than I did in Visual Studio. Anyone doing WPF will almost always use Blend and Visual Studio side by side. You can even compile and run in Blend.
  • Blend is a really powerful tool and it allows you to do all the styling, layout, and animation. Timelines and event handling make it easy to make your controls react to button clicks, hovering over something with the mouse, etc. The use of data binding is really encouraged in WPF, more than it was in Windows Forms, and Blend makes it easy to use data binding in all kinds of places.
  • The learning curve of Blend wasn’t as bad as I expected it to be. Watching the MIX videos that I mentioned earlier really helped me learn all of the basic stuff. When you do get stuck and can’t find something in Blend, you can go edit the XAML directly and then go back into Blend and see what changed.
  • The Visual Studio Extensions are supposed to install the “Cider” designer (which is still in CTP). I couldn’t get anything to show up at all in the designer. I don’t know if this was a problem with my environment or a bug, but it doesn’t really matter because there’s no reason to do anything in the VS designer when you have Blend.
  • There is also Expression Design, which is essentially MS’s version of Adobe Illustrator. I haven’t really used either, but the nice thing about Design is that it also outputs XAML, so you can bring stuff from Design right into Blend. You can import all of your Illustrator files into Design.
  • Right now it’s taking me a long time to do relatively simple stuff, like figuring out how to get stuff laid out inside a list box item. Then again, I was in the car doing this so I was sans-Internet, and I’m sure there are lots of examples out there by now of how to do certain things.

I’ll try and post more updates as I continue to try and figure this all out.

June 19, 2007 by Jon Kruger
.NET

Ode to third party software

Third-party tools can be very useful, if not essential, when developing applications. But what I find really interesting is the way people talk about and criticize third-party products. Here are some things I’ve noticed over time:

Most developers think that if they had the time, they could do a better job of developing a third party product than the people who actually wrote it. OK, I’ve been guilty of this one too. This is probably the same reason that most developers, when given an application or module that they didn’t write, will usually recommend that it be completely refactored and rewritten. We don’t always understand code that we didn’t write. This, however, doesn’t make the code bad.

Have you ever tried to develop a base control for your team to use? It’s hard! You create it and people start using it, but then you find something that you could’ve done better, and going back and changing it is really hard because now all of these other people are using it and you might break their code. So I have lots of respect for people who try to write controls that 10,000 people are going to use.

Also, most developers are much quicker to point out the faults of a third party package than they are to point out its good aspects. In my opinion, just because you have issues with someone’s coding style, their overuse of reflection or generics, or whatever it may be, doesn’t mean that you should completely ignore a third party product and go off and write your own! The real question you need to ask is whether or not the software is going to provide business value and help you accomplish your task better, faster, and cheaper.

So next time you have to evaluate third party software, ask yourself these questions:
– Is this software going to provide business value and help me accomplish my task?
– Is something in this software package going to prohibit me from doing what I need to do?

March 8, 2007 by Jon Kruger
.NET

Policy Injection Application Block

The patterns & practices group at Microsoft just announced the Policy Injection Application Block. The basic idea is that you can define a set of policies and handlers that will execute before and after certain policy-enabled methods. This will allow you to perform such tasks as validation, exception handling, authorization, etc. without having to write the same sections of code over and over in every method.
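
As a rough illustration (the BankAccount class and the amount here are hypothetical stand-ins, not code from the block itself), you create or wrap objects through the block’s PolicyInjection facade so it can intercept the calls:

using System;
using Microsoft.Practices.EnterpriseLibrary.PolicyInjection;

// Hypothetical domain class. The block intercepts calls through a proxy, so
// the type derives from MarshalByRefObject (using an interface also works).
public class BankAccount : MarshalByRefObject
{
    private decimal balance;

    public decimal Balance
    {
        get { return balance; }
    }

    public void Deposit(decimal amount)
    {
        balance += amount;
    }
}

public class Example
{
    public static void Run()
    {
        // Create the object through the block so matching policies can wrap its methods.
        BankAccount account = PolicyInjection.Create<BankAccount>();

        // Handlers from any matching policy run before and after this call.
        account.Deposit(100m);
    }
}

As I understand it, the interception works through a proxy, which is why the target type generally needs to derive from MarshalByRefObject or be used through an interface.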

The main benefit (as I see it) is the separation of concerns. How many times have you worked on a project where someone told you, “Make sure you call this method at the beginning of every method that does _____.” or, “Make sure you put this specific code around your code to handle exceptions.”? It’s hard for everyone on a team to remember all of these little tricks, and you also run the risk that people on the team won’t always do everything correctly. My boss always says that we as developers need to “get out of the plumbing business” and write code that actually does something meaningful from a business perspective, and this will definitely help accomplish that goal.

But in reality, the first thing I thought of when I heard about this was that many people will probably find many ways to misuse and abuse this application block. By giving people more power, you’re also giving them more power to screw things up. I’m sure there will be some interesting anti-patterns with this application block.

March 3, 2007 by Jon Kruger

About Me

I am a technical leader and software developer in Columbus, OH, currently working as a Senior Engineering Manager at Upstart. Find out more here...