Jon Kruger
  • About Me
  • Blog
  • Values
  • Presentations

Quality, TDD

Confidence and testing… what does “confidence” mean anyway?

I tweeted a quote today that I found to be very insightful.

A poignant reminder of why I'm here. pic.twitter.com/H9Nkbnm4Qt

— Jon Kruger (@JonKruger) April 8, 2014

You could take this two different ways. I take it to mean that I need to do just the right amount of testing in order to be confident that my code works and will continue to work.

You could also see it another way, as pointed out by some others:

@JonKruger I think the line which follows that quote is also important; as an industry, that level of confidence might not be too high.

— Steven Harman (@stevenharman) April 8, 2014

@JonKruger #coding pic.twitter.com/CV98xQuKDe I admire KBeck but disagree.. Absent good #Tests coder confidence is myth.. #OverConfident = Bugs

— SeattleFan4Dan4 (@TJWilk_WA) April 8, 2014

There are those out there that think that testing is a waste of time and I could see some of them taking a quote like this and saying, “I’m a good enough developer that I don’t need tests in order to write code that works!” (I heard someone say those exact words once, sadly.)

@JonKruger Most developers do the same thing… they just have way too much confidence in their coding skills. ;)

— jared richardson (@jaredrichardson) April 8, 2014

The discrepancy is due to what the word “confidence” means in this quote. If it’s just confidence that it works now, then you might be fine without tests, but if it’s confidence that your code will still be working a year from now after someone else has had to modify it, then you probably want tests. (Heck, I want tests so that I can change my own code tomorrow!) I find that the problem is that on many teams, developers define confidence as “good enough that QA can look at it” and then achieving true “confidence” is someone else’s problem.

Maybe we should change the pronouns.

We (the team) get paid for code that works, so our philosophy is to test as little as possible to reach a given level of confidence.

Now this changes things. What’s the best way to test as little as possible? Automate all the tests that should be automated. Let QA spend time doing valuable things like exploratory testing instead of doing loads of manual regression tests. Funny how you can turn everything on its head when you take what is perceived to be an individual’s problem and make it everyone’s problem!

April 8, 2014 by Jon Kruger
Quality

A new kind of developer testing kata

There are lots of code katas out there that are typically used to help developers practice test driven development and writing well-designed code. I have a new kind of kata for you today.

In my last post I talked about how developers should be able to test their own code, but as developers we need to get better at defining all of the test cases. The only way to get better is to practice.

Testing a four-function calculator

Let’s say you need to test a basic four-function calculator like this one.

[Image: a basic four-function calculator]

What are all of the testing scenarios? Keep in mind that just from looking at the picture, you might not know the exact requirements of how the calculator works without asking some questions.

Some basic details (for those of you who are too young to ever have used one of these calculators):

  • Pressing M+ will add the current value to the value stored in memory (or 0 if there is no memory value), but not change the value on the screen
  • Pressing M- will subtract the current value from the value stored in memory (or 0 if there is no memory value), but not change the value on the screen
  • Pressing MR will load the current memory value and display it on the screen
  • Pressing MC will clear the current memory value, but not change the value on the screen

If you don’t know how something should work, write down the question that you need to ask or record the assumption that you made about how it should work (so that you could validate the assumption later). You’re also allowed to try things on your computer’s calculator to see how something should work (in real life, we often look at other implementations of our software to get ideas about how it should work), but remember to write down your assumptions so that you can validate them.

Come up with your test cases, questions, and assumptions and post your answers in the comments. You can use any format for the test cases that you would like. Let’s see what you can come up with.
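
To give a feel for one possible format, here is a minimal sketch of two of the memory test cases written as NUnit tests. The Calculator class, its method names, and the tiny implementation below are all made up so the sketch stands on its own (the kata is about the test cases, not this code), and the second test records the assumption that recalling cleared memory shows 0.

using NUnit.Framework;

// A made-up, minimal calculator used only so this sketch is self-contained.
public class Calculator
{
    private decimal memory;
    public decimal Display { get; private set; }

    public void Enter(decimal value) { Display = value; }
    public void MemoryPlus() { memory += Display; }   // M+: add the display to memory, display unchanged
    public void MemoryMinus() { memory -= Display; }  // M-: subtract the display from memory, display unchanged
    public void MemoryRecall() { Display = memory; }  // MR: show the memory value on the display
    public void MemoryClear() { memory = 0; }         // MC: clear memory, display unchanged
}

[TestFixture]
public class CalculatorMemoryTests
{
    [Test]
    public void MPlus_adds_the_current_value_to_memory_without_changing_the_display()
    {
        var calc = new Calculator();
        calc.Enter(5m);
        calc.MemoryPlus();
        Assert.AreEqual(5m, calc.Display);   // the display is unchanged by M+

        calc.MemoryRecall();
        Assert.AreEqual(5m, calc.Display);   // memory now holds 5
    }

    [Test]
    public void MC_clears_memory_but_does_not_change_the_display()
    {
        var calc = new Calculator();
        calc.Enter(7m);
        calc.MemoryPlus();
        calc.MemoryClear();
        Assert.AreEqual(7m, calc.Display);   // the display is unchanged by MC

        calc.MemoryRecall();
        Assert.AreEqual(0m, calc.Display);   // assumption: recalling cleared memory shows 0
    }
}

Plenty of cases remain (division by zero, chained operations, repeated presses of the equals key, and so on), which is exactly the point of the exercise.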

November 20, 2013 by Jon Kruger
Agile, ATDD, QA, Quality, TDD, unit testing

Can developers test their own code?

There are those who believe that a developer should not test their own code. This may sound logical, but I’m not sure I’m buying it.

This statement typically refers to QA testing, and doesn’t mean that a developer shouldn’t write unit tests. The thinking here is that a second person testing the features that you’ve developed might think of things a different way and find a problem that you didn’t think of when you wrote the code.

There are lots of commonly accepted statements and ideas like this in software development. But I’ve found that in many cases, these ideas are usually based on certain assumptions, and if you can challenge those assumptions, you might open yourself up to things that you didn’t think were possible.

The assumption that I see here is that a developer writing the code is not sufficiently capable of thinking of all of the test cases. Imagine you write code for a feature, and now you have to test it. At this point, you’ve gone through a certain mental thought process when you implemented the feature. This makes it much harder to think outside of the box to come up with the edge cases. Not to mention that when you see that the feature appears to be working overall, it’s really, really tempting to do some basic manual testing and then move on to the next feature without really doing your due diligence. An independent QA tester, however, will look at the feature objectively because their thought process isn’t clouded by past experience of having written the code.

OK, so what if the developer figured out all of the test cases before writing the code? Now their thinking isn’t clouded by the implementation of the feature because they haven’t written the code yet. Maybe a QA person helps define the test cases, but this post is about developers testing their own code, so let’s assume that QA people aren’t involved. I would argue that now we’ve removed the main reason that developers are not good at testing their own code (thinking of the test cases after writing the code), so they should be able to think of test cases just as well as a QA tester, and therefore they should be able to test their own code.

Don’t misconstrue what I’m saying there – I’m not saying that we don’t need our QA teams. I’m saying that developers need to be responsible for testing. QA teams can add more testing help, but developers need to be responsible for their own code.

This opens you up to new possibilities. It enables developers to be confident about the quality of their code. It limits the wasted time you incur when you have the back-and-forth that comes with QA finding bugs and developers having to go back and fix things. It can reduce the amount of “checking” that QA people need to do because they might be comfortable knowing that developers are writing quality code.

If you’re a developer, this is something you can start doing today. Before you implement a feature, come up with all of the test scenarios. If you have a QA team, have them review your test cases to see if you’ve missed anything. Then go write some bug-free code!

November 18, 2013 by Jon Kruger
Agile, ATDD, Quality

Responsible software development

How much of the responsibility for the software that you are creating falls on your shoulders?

There are two ways of looking at responsibility on a team. One view would be to say that if there are 10 people on the team, then I’m responsible for 10% of the software development process. I don’t need to be concerned with the quality of the requirements because that’s the business analysts’ job. I don’t need to worry about testing my code because we have testers to do that.

The other view is that even in a team environment, I am still 100% responsible for the software I’m creating. That means that I’m not just implementing the requirements, I’m trying to understand the requirements to make sure that they’re correct and that we’re building things in the best way possible that will meet the needs of the business. And that certainly means that I’m never ever going to send code to QA that I haven’t tested just because I think they’re going to test it for me.

I need to be responsible for communicating with my team members the best that I can. I would much rather be clear about who is doing what rather than assuming. If I make an assumption, it’s probably going to be wrong – either I will assume someone is going to do something when they’re not and something will slip through the cracks, or I will assume someone is not going to do something when they are, and now we’ve duplicated effort.

This is what I see happen often on software teams, especially when it comes to testing. Developers assume that QA is going to do the testing, so they give code to QA without completely knowing if it’s working. QA assumes that developers are going to do this and don’t know what was tested and what wasn’t, so they think they have to test 100% of the functionality of the feature. In reality, the developer probably wrote decent code that mostly works, and they probably did test it to some extent (whether manual or automated), so the tester is duplicating some of the effort, but if they don’t know what was already done, they don’t have a choice.

This is where communication is key. If I as a developer am going to write automated tests to prove that my code works, then I want QA to be involved in the writing of the tests (even if that’s just reviewing what I’m doing). That way QA doesn’t have to duplicate effort because they can know that developers have already done some of the testing, and developers can start giving code to QA when they are confident that it works (with actual proof to back it up). QA and developers can work together to decide who is going to test what, what is going to be tested, and how it’s going to be tested.

In this case, everyone is 100% responsible for the quality of the software. Instead of expecting others to cover for us, we work together with others to make sure everything is covered. This requires people to move past their traditional roles, trust each other, and work together. In the end, we won’t duplicate effort and we won’t let things slip through the cracks.

September 25, 2013 by Jon Kruger
ATDD, QA, Quality

Modifying production code to help testing

We had a week-long debate a while back about whether or not it’s OK to modify your production code in order to enable automated acceptance testing. I’m not talking about using dependency injection, interfaces, etc. to allow you to mock things in unit tests. I’m talking about modifying application code solely to help your automated acceptance tests.

There are many ways this can be done, some of which we’ve done:

  • Creating a SystemTime class, which is like DateTime except that we can set what “Now” is, so we can change time in tests (see the sketch after this list)
  • Adding optional parameters to stored procedures solely so that we can have them only operate on a subset of data in an acceptance test instead of operating on the entire set of data in the database
  • Adding extra HTML attributes so that automated tests can find elements on a page easily
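
To illustrate the first bullet, here is a minimal sketch of the SystemTime idea. It’s the common pattern rather than our exact implementation: production code reads SystemTime.Now instead of DateTime.Now, and a test can freeze the clock.

using System;

// Production code calls SystemTime.Now instead of DateTime.Now,
// so tests can control what "Now" is without touching the system clock.
public static class SystemTime
{
    private static Func<DateTime> now = () => DateTime.Now;

    public static DateTime Now
    {
        get { return now(); }
    }

    // Called from a test to freeze time at a known value.
    public static void Set(DateTime fixedTime)
    {
        now = () => fixedTime;
    }

    // Called from test teardown to go back to the real clock.
    public static void Reset()
    {
        now = () => DateTime.Now;
    }
}

A test can call SystemTime.Set(new DateTime(2012, 10, 19)) to pretend it’s a particular day and SystemTime.Reset() when it’s done; the only change to production code is reading SystemTime.Now everywhere it used to read DateTime.Now.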

To me, modifying production code to help us do automated testing is no big deal. First, if our goal is quality, I don’t think it matters how we get there. After all, we own the code base and tests so there aren’t any real restrictions on what we can do with the code or the tests as long as the end product is good.

Second, developers and QA are on the same team, and we work together quite closely, so we should do what we can to help each other out. So if we can make a minor change to the application code to save us a lot of time developing or running automated tests, then to me it makes sense to do so.

This goes back to my assertion that we need to stop thinking of QA like external auditors that have to take the application just as it is without talking to the developers and act as the independent quality police. We need to all work together to ensure quality, both developers and QA. Developers are just as responsible for quality as QA. If we place all of the responsibility for quality on QA, then developers will care less and less about quality, and you end up with shoddy code with lots of bugs (and usually no tests). I’d rather treat testing as a whole-team activity and structure the application to make testing as easy as possible.

October 19, 2012 by Jon Kruger
Agile, ATDD, Cucumber, Quality, TDD, unit testing

Does a whole team approach to testing change how developers should test?

Lately I’ve been thinking about a whole team approach to testing, where we decide as a team how features will be tested and where we use the skillsets of the whole team to automate testing. We do this on our project, and this has led to a regression testing suite of ~2500 SpecFlow acceptance tests that automate almost all scripted QA testing and regression testing for our application.

We didn’t always do this. Originally there was no automated acceptance testing, but developers were diligently writing unit tests. Those unit tests are still around, but we don’t write many unit tests anymore. We start with acceptance tests now, and the acceptance tests cover all of the testing scenarios that need to be covered. Our application has well-defined design patterns that we follow, so the idea of TDD driving the design of our code doesn’t really apply. If the unit tests fail, we often just delete them because it’s not worth fixing all of the mocks in the unit tests that are causing them to fail, and we have acceptance testing coverage around all of it.

This approach does not line up with the conventional wisdom on automated testing, which says that you’re supposed to write lots of unit tests that run really fast to give you fast feedback, help design your code, and ensure the internal quality of your code. In the past, this is how I’ve always done it. In fact, many of the people behind that conventional wisdom dislike Cucumber.

Cucumber makes no sense to me unless you have clients reading the tests. Why would you build a test-specific parser for English?

— DHH (@dhh) March 29, 2011

While TDD isn’t as mainstream as I would like, TDD is nothing new. Kent Beck was writing about it 10 years ago, and the original XP guys valued such things as unit testing, the SOLID principles, and things like that.

Automated acceptance testing still feels like a relatively new phenomenon. I’m sure people were doing it 10 years ago, but back then we didn’t have Cucumber and SpecFlow and the Gherkin language. Now I see a lot more people using tools like that to automate QA testing in a way that uses business language and more maintainable code, rather than the old “enterprise” solutions like QTP.

Here’s what I’m getting at – I wasn’t there 10 years ago when Kent Beck was writing his books and the XP movement was starting, but it seems to me to have been primarily an effort by developers to ensure the quality of their own code. I see very little talk of where QA fits into that process. There is some mention of QA for sure, but the general gist seems to be that developers need to write tests in order to ensure quality, and the best way to do that is to write unit tests. QA people will typically say that unit testing isn’t enough because it doesn’t test end-to-end, so then what do they do? Manually test? Use QTP?

My question is this – if we think of testing as a whole-team activity and not just a QA activity or a developer activity, will we arrive at the same conclusions as we did before?

I’m not ready to discount unit testing as a valuable tool, and I’m also not ready to say that everyone should do it my way because it worked for us on one project. But we have largely abandoned unit testing in favor of acceptance testing, and other teams in our department are doing it too. I write unit tests for things like extension methods and some classes that have important behavior on their own and I want to ensure that those classes work independent of the larger system.

We have 3 Amigos meetings in which one of the things we do is develop a set of acceptance tests for a feature before any code is written. We usually decide at this point that most or all of these scenarios will be automated. We write the acceptance tests in SpecFlow, I watch them all fail, and then I write the code to make them pass. I follow the patterns and framework that we have set up in our application, so there aren’t many design decisions to make. When my acceptance tests pass, I am done.
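
To make that concrete, here is a hedged sketch of what one of these scenarios and its SpecFlow step definitions might look like. The feature, the step wording, and the Account class are invented for illustration; they’re not from our actual suite, which tests web services.

Scenario: Depositing money increases the balance
    Given an account with a balance of 100.00
    When I deposit 25.00
    Then the account balance should be 125.00

And the matching step definitions:

using NUnit.Framework;
using TechTalk.SpecFlow;

// A made-up domain class so the sketch stands on its own.
public class Account
{
    public decimal Balance { get; private set; }
    public Account(decimal openingBalance) { Balance = openingBalance; }
    public void Deposit(decimal amount) { Balance += amount; }
}

[Binding]
public class AccountBalanceSteps
{
    private Account account;

    [Given(@"an account with a balance of (.*)")]
    public void GivenAnAccountWithABalanceOf(decimal balance)
    {
        account = new Account(balance);
    }

    [When(@"I deposit (.*)")]
    public void WhenIDeposit(decimal amount)
    {
        account.Deposit(amount);
    }

    [Then(@"the account balance should be (.*)")]
    public void ThenTheAccountBalanceShouldBe(decimal expected)
    {
        Assert.AreEqual(expected, account.Balance);
    }
}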

Where do unit tests fit in there? If my acceptance tests pass, then I’m done, so why spend more time writing duplicate tests? Also, with acceptance tests, I’m not dealing with mocks, and more importantly, I’m not fixing broken unit tests because of broken mocks. If you follow the Single Responsibility Principle (which we try to do), you end up with lots of small classes, and unit tests for those classes would be mostly useless because those classes do so little that it’s hard to write bugs and each class does such a small part of the larger activity.

There is an obvious trade-off here – my acceptance tests are not fast. I’m just testing web services (no driving a browser), so all ~2500 tests will run in about an hour. But we accepted this trade-off because we were able to get things done faster by just writing the acceptance tests, which we were going to do anyway to automate QA testing. The end result is high quality software with few bugs, not just because we have tests, but also because we communicate as a team and decide on the best way to test each feature and what it is that needs to be tested, and then we find the best way to automate the testing as a team.

Again, I’m not ready to say that this way is the best way for every project, and I’ve seen each approach work extremely well. I just wonder if the conventional wisdom on testing would be the same if we thought of it from the perspective of the whole team.

October 15, 2012 by Jon Kruger
Quality, TDD, unit testing

The automated testing triangle

Recently I had the privilege of hearing Uncle Bob Martin talk at the Columbus Ruby Brigade. Among the many nuggets of wisdom that I learned that night, my favorite part was the Automated Testing Triangle. I don’t know if Uncle Bob made this up or if he got it from somewhere else, but it goes something like this.

[Image: the Automated Testing Triangle]

At the bottom of the triangle we have unit tests. These tests are testing code, individual methods in classes, really small pieces of functionality. We mock out dependencies in these tests so that we can test individual methods in isolation. These tests are written using testing frameworks like NUnit and use mocking frameworks like Rhino Mocks. Writing these kinds of tests will help us prove that our code is working and it will help us design our code. They will ensure that we only write enough code to make our tests pass. Unit tests are the foundation of a maintainable codebase.
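
As a sketch of what that kind of test might look like (the interface and classes here are invented for illustration, not taken from a real project), an NUnit test can use Rhino Mocks to stub out a dependency so that only one small piece of logic is exercised:

using NUnit.Framework;
using Rhino.Mocks;

public interface IShippingRateService
{
    decimal GetRatePerPound(string destinationZip);
}

public class ShippingCalculator
{
    private readonly IShippingRateService rateService;

    public ShippingCalculator(IShippingRateService rateService)
    {
        this.rateService = rateService;
    }

    public decimal CalculateCost(string destinationZip, decimal weightInPounds)
    {
        return rateService.GetRatePerPound(destinationZip) * weightInPounds;
    }
}

[TestFixture]
public class ShippingCalculatorTests
{
    [Test]
    public void Cost_is_the_rate_for_the_destination_times_the_weight()
    {
        // Stub out the rate service so the test isolates ShippingCalculator
        var rateService = MockRepository.GenerateStub<IShippingRateService>();
        rateService.Stub(r => r.GetRatePerPound("43215")).Return(2m);

        var calculator = new ShippingCalculator(rateService);

        Assert.AreEqual(10m, calculator.CalculateCost("43215", 5m));
    }
}

No database, no UI, just one class and a stubbed dependency, so tests like this run in milliseconds.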

But there will be situations where unit tests don’t do enough for us because we will need to test multiple parts of the system working together. This means that we need to write integration tests — tests that test the integration between different parts of the system. The most common type of integration test is a test that interacts with the database. These tests tend to be slower and are more brittle, but they serve a purpose by testing things that we can’t test with unit tests.

Everything we’ve discussed so far will test technical behavior, but doesn’t necessarily test functional business specifications. At some point we might want to write tests that read like our functional specs so that we can show that our code is doing what the business wants it to do. This is when we write acceptance tests. These tests are written using tools like Cucumber, FitNesse, StoryTeller, and NBehave. These tests are usually written in plain text sentences that a business analyst could write, like this:

As a user
When I enter a valid username and password and click Submit
Then I should be logged in

At this point, we’re no longer just testing technical aspects of our system, we are testing that our system meets the functional specifications provided by the business.

By now we should be able to prove that our individual pieces of code are working, that everything works together, and that it does what the business wants it to do — and all of it is automated. Now comes the manual testing. This is for all of the random stuff — checking to make sure that the page looks right, that fancy AJAX stuff works, that the app is fast enough. This is where you try to break the app, hack it, put weird values in, etc.

[Image: the un-automated testing triangle]

I find that the testing triangle on most projects tends to look more like this triangle. There are some automated integration tests, but these tests don’t use mocking frameworks to isolate dependencies, so they are slow and brittle, which makes them less valuable. An enormous amount of manpower is spent on manual testing.

Lots of projects are run this way, and many of them are successful. So what’s the big deal? Because what really matters is the total cost of ownership of an application over the entire lifetime of the application. Most applications need to be changed quite often, so there is much value in doing things that will allow the application to be changed easily and quickly.

Many people get hung up on things like, “I don’t have time to write tests!” This is a short term view of things. Sometimes we have deadlines that cannot be moved, so I’m not denying this reality. But realize that you are making a short term decision that will have long term effects.

If you’ve ever worked on a project that had loads of manual testing, then you can at least imagine how nice it would be to have automated tests that would test a majority of your application by clicking a button. You could deploy to production quite often because regression testing would take drastically less time.

I’m still trying to figure out how to achieve this goal. I totally buy into Uncle Bob’s testing triangle, but it requires a big shift in the way we staff teams. For example, it would really help if QA people knew how to use automated testing tools (which may require basic coding skills). Or maybe we have developers writing more automated tests (beyond the unit tests that they usually write). Either way, the benefits of automated testing are tremendous and will save loads of time and money over the life of an application.

February 8, 2010 by Jon Kruger
Design, Quality

I’m speaking on SOLID at QSI Tech Night on Wed., Sept. 16

I’ll be speaking on the SOLID software design principles at the Quick Solutions office (440 Polaris Pkwy., Suite 500, Westerville) on Wednesday, Sept. 16 from 5:30pm to 7pm. In this talk, we’ll go through Uncle Bob’s “SOLID” software design principles, separation of concerns, a brief overview of inversion of control containers like StructureMap and Ninject, and more object-oriented goodness that will help you write better code.

There’s free food too. If you’re coming, please RSVP to amorey at quicksolutions dot com so that we can have enough food.

If you’re going to the Software Engineering 101 conference, this is basically the same talk that I’ll be giving there. So if you’re going to go to one, go to the Software Engineering conference since you’ll get lots of other good stuff there too.

September 6, 2009 by Jon Kruger
Design, Quality

Don’t cheat on those small apps!

At some point, we all have to write some small apps. I’m talking about things like…

  • Some small utility or diagnostic app
  • Something to help with your deployment process
  • Other small applications or websites (i.e. something that takes a month or less)

In these cases, we often throw good software design principles out the window. We say that we don’t need to write unit tests, we don’t need dependency injection, we can put data access code in our code-behinds, and things like that. Since it’s a small app, we think we can get away with it.

Just because your app didn’t take you long to write doesn’t mean that you get off easy. The consequences of your decisions just aren’t as severe, but that doesn’t mean that the pain is gone!

How many times have you written some small utility to help with something, and then your boss sees it, really likes it, and asks you to add more functionality to it? Eventually you’ve spent a couple months on the app. If you cheated at the beginning, that code is going to be hard to change and it’s going to start becoming more of a pain.

Maybe you’re writing a console app to help with your deployment process. Now this code better work, because if it doesn’t, your app might not deploy correctly. This is very important code! Doesn’t code this important warrant some extra attention (i.e. tests)?

Look, I’m not saying that every little app has to be this big, blown-out, enterprise application. I’m just saying that you should be careful when you cheat, because you don’t want that to come back to haunt you.

Many times you can take simple steps to make things easier to change. Take dependency injection, for example. It is really easy to set up an IoC container like StructureMap, it doesn’t overly complicate your code, and you don’t have to write tons of extra code to use it. But if you want to write tests for your code, dependency injection will make it a lot easier. You’re just putting code in better places.
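
As a rough sketch of how little code that takes (the interface and classes here are placeholders for whatever your small app actually does, not a recommended design), StructureMap setup can be as simple as this:

using StructureMap;

public interface IEmailSender
{
    void Send(string to, string subject, string body);
}

public class SmtpEmailSender : IEmailSender
{
    public void Send(string to, string subject, string body)
    {
        // real SMTP code would go here
    }
}

public class ReportNotifier
{
    private readonly IEmailSender emailSender;

    // The dependency comes in through the constructor,
    // so a test can hand in a fake IEmailSender instead.
    public ReportNotifier(IEmailSender emailSender)
    {
        this.emailSender = emailSender;
    }

    public void NotifyReportReady(string to)
    {
        emailSender.Send(to, "Report ready", "Your report is ready.");
    }
}

public class Program
{
    public static void Main()
    {
        // One-time container setup: map the interface to a concrete type
        var container = new Container(x =>
        {
            x.For<IEmailSender>().Use<SmtpEmailSender>();
        });

        var notifier = container.GetInstance<ReportNotifier>();
        notifier.NotifyReportReady("someone@example.com");
    }
}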

Again, there are times to use dependency injection and there are times where it’s superfluous. There are times when unit testing is essential and times when you can get away without it (personally I always like writing tests if I can, that way I know my code works). But you need to be careful. Software is software, and many of the same development principles that apply to big apps still apply to small apps.

March 31, 2009 by Jon Kruger
Quality

Why does our industry have such a low standard of quality?

As I was writing my post on aspiring to be a Software Craftsman, I was thinking about why we have such a low standard of quality in our industry. Clearly there is a problem when there are so many software projects out there that are rewrites of previous projects, and businesses are hamstrung by software that is too difficult to change.

I’m very much an optimist most of the time, honest! But I can’t help notice all kinds of things that don’t sit well with me.

Poor university training

I had 5 classes in my 4 years of college that were relevant to software development. I graduated in 2002. They started us out on C++, which was a fair choice in 1998. I was not able to take a database course until my senior year. During my last semester they finally introduced an intro to Java course which I was able to take. No courses on .NET, no courses in PHP, no courses in software design, no courses on techniques that could’ve made me successful (like unit testing).

The software industry is constantly changing, and many university programs are not keeping up. When you graduate you know just enough to be dangerous but not enough to really know what you’re doing. Four years is a long time; you should be able to learn a lot during that time.

Post-college training is not encouraged

If you want to learn about software development, there is always a user group you can go to, a conference you can attend, thousands of blogs you can read, podcasts to listen to, and on and on. There is a wealth of information out there, but the average developer is not taking advantage of it. Some of these people don’t care to learn. But I think that most people care about doing a good job. The problem is that their employer does not encourage it. If someone doesn’t want to spend their time outside of work reading blogs and attending conferences, that’s their choice. There is nothing wrong with leaving your work at work and spending time with your family or doing other things. But most employers don’t offer enough training during work hours, whether that is organizing lunch and learns, paying for their devs to go to CodeMash, bringing in guest speakers, etc. It costs a lot of money to build software. Shouldn’t employers be willing to spend a little more money to train their employees so that they can build better quality software?

People don’t know what they don’t know

There are a lot of developers out there who don’t know software design techniques and concepts like unit testing, what SOLID stands for, why you should use an IoC container, and why you should design with interfaces. I would guess that a majority of these developers would love to know this stuff, but they don’t know that they don’t know it. A year ago I had never used Rhino Mocks, and I didn’t know what SOLID stood for or what an IoC container was. I thought I was a good developer, and I was in charge of a fairly large project, but I didn’t know that I didn’t know these things. Now that I’ve learned all that stuff I am a much better developer than I was a year ago. But now I’m wondering what other important stuff I don’t know.

My point is that I’m sure that it would really help developers if there was some way that they could go through some kind of training that would teach advanced software development topics like the ones that I mentioned. I see a lot of junior developers take Microsoft certification tests. It is unlikely that Microsoft certification tests will teach you any of the concepts that I mentioned. You will, on the other hand, memorize a lot of stuff that you can find on Google. I’m not saying that studying for and taking those tests won’t teach you anything, but they’re not going to teach you how to write quality software.

Low business expectations

Many IT managers and executives have very low expectations for their software. Having to rewrite software is widely accepted as normal these days. Wouldn’t it be so much better and cheaper to change your existing software when you need it to do something different?

The problem is that so much of our software is too difficult to change due to bad design, lack of testing, and many other factors. Many businesses do very little to address the root problem — that their developers don’t have the adequate training or don’t have the ability to write good software that can last.

Many IT managers believe that the best that they can do is this disposable software, so they decide that they might as well pay less money if they’re going to get low quality, so they turn to offshoring. Either way, they’re still getting the same low quality software and they’re wasting money rewriting their software every several years.

Short term thinking

Many businesses get sucked into short term thinking. I’m sure it plays out in many different ways, but here’s one way I can see this happening.

  • Executives want to show results because they’re worried about their stock price. They demand results this year.
  • IT Manager is under pressure to deliver something this year, possibly with a limited budget.
  • Developers are under pressure to deliver something and have really tight deadlines.
  • Developers throw something together and finish the project. IT Managers show it to executives who like what they see. Executives show it at a trade show and shareholders are happy.
  • Two years later the software does not meet the needs of the business because the app is unmaintainable and they start budgeting money to bring in consultants to rewrite it.

It would’ve made much more sense to invest more time and money into the project the first time so that it wouldn’t have to be rewritten two years later. But because the business needs to show progress to the shareholders they won’t do it this way. This is a tough situation, and I can understand why executives make some of these decisions, but it doesn’t seem to make sense to me.

Little or no accountability

If you’re a developer and you write bad code, you probably aren’t going to lose your job unless there is a recession, you do a really really bad job, or just quit showing up for work. If a civil engineer builds a bridge that fails after 5 years, I guarantee you he won’t be working on bridges anymore, at least not at the same company. There is very little pressure in the software industry to write quality software.

In closing…

I don’t know what the solution to this problem is because it is a problem on so many fronts. But I do have control over what’s around me. I can aspire to become a software craftsman — someone who designs and writes quality software. I can hold myself accountable and make sure that I have high standards. And I can try and teach other people to do the same.

January 14, 2009 by Jon Kruger

About Me

I am a technical leader and software developer in Columbus, OH, currently working as a Senior Engineering Manager at Upstart. Find out more here...