Jon Kruger
Agile, ATDD, QA, Quality, TDD, unit testing

Can developers test their own code?

There are those who believe that a developer should not test their own code. This may sound logical, but I’m not sure I’m buying it.

This statement typically refers to QA testing, and doesn’t mean that a developer shouldn’t write unit tests. The thinking here is that a second person testing the features that you’ve developed might think of things a different way and find a problem that you didn’t think of when you wrote the code.

There are lots of commonly accepted statements and ideas like this in software development. But I’ve found that these ideas are often based on certain assumptions, and if you can challenge those assumptions, you might open yourself up to things that you didn’t think were possible.

The assumption that I see here is that the developer writing the code is not sufficiently capable of thinking of all of the test cases. Imagine you write code for a feature, and now you have to test it. At this point, you’ve gone through a certain mental thought process when you implemented the feature, which makes it much harder to think outside the box and come up with the edge cases. Not to mention that when the feature appears to be working overall, it’s really, really tempting to do some basic manual testing and then move on to the next feature without doing your due diligence. An independent QA tester, however, will look at the feature objectively because their thought process isn’t clouded by the experience of having written the code.

OK, so what if the developer figured out all of the test cases before writing the code? Now their thinking isn’t clouded by the implementation of the feature because they haven’t written the code yet. Maybe a QA person helps define the test cases, but this post is about developers testing their own code, so let’s assume that QA people aren’t involved. I would argue that we’ve now removed the main reason that developers are not good at testing their own code (thinking of the test cases after writing the code), so they should be able to think of test cases just as well as a QA tester, and therefore they should be able to test their own code.

Don’t misconstrue what I’m saying there – I’m not saying that we don’t need our QA teams. I’m saying that developers need to be responsible for testing. QA teams can add more testing help, but developers need to be responsible for their own code.

This opens you up to new possibilities. It enables developers to be confident about the quality of their code. It cuts down on the wasted time of the back-and-forth that comes with QA finding bugs and developers having to go back and fix things. It can reduce the amount of “checking” that QA people need to do, because they can be comfortable knowing that developers are writing quality code.

If you’re a developer, this is something you can start doing today. Before you implement a feature, come up with all of the test scenarios before you write your code. If you have a QA team, have them review your test cases to see if you’ve missed anything. Then go write some bug-free code!
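
Here’s a rough idea of what that up-front scenario list can look like if you capture it as test stubs before writing any production code (just a sketch using NUnit with a made-up discount feature, not code from a real project):

```csharp
// Scenarios enumerated (and reviewed with QA) before the implementation exists.
// Each one starts out inconclusive and gets filled in as the code is written.
using NUnit.Framework;

[TestFixture]
public class DiscountCalculationTests
{
    [Test]
    public void No_discount_is_applied_below_the_minimum_order_total() =>
        Assert.Inconclusive("Not implemented yet - write the code to make this pass");

    [Test]
    public void Standard_discount_is_applied_at_the_minimum_order_total() =>
        Assert.Inconclusive("Not implemented yet - write the code to make this pass");

    [Test]
    public void Expired_discount_codes_are_rejected() =>
        Assert.Inconclusive("Not implemented yet - write the code to make this pass");

    [Test]
    public void A_discount_is_never_larger_than_the_order_total() =>
        Assert.Inconclusive("Not implemented yet - write the code to make this pass");
}
```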

November 18, 2013 by Jon Kruger
Agile, ATDD, Quality

Responsible software development

How much of the responsibility for the software that you are creating falls on your shoulders?

There are two ways of looking at responsibility on a team. One view would be to say that if there are 10 people on the team, then I’m responsible for 10% of the software development process. I don’t need to be concerned with the quality of the requirements because that’s the business analysts’ job. I don’t need to worry about testing my code because we have testers to do that.

The other view is that even in a team environment, I am still 100% responsible for the software I’m creating. That means that I’m not just implementing the requirements, I’m trying to understand the requirements to make sure that they’re correct and that we’re building things in the best way possible that will meet the needs of the business. And that certainly means that I’m never ever going to send code to QA that I haven’t tested just because I think they’re going to test it for me.

I need to be responsible for communicating with my team members the best that I can. I would much rather be clear about who is doing what rather than assuming. If I make an assumption, it’s probably going to be wrong – either I will assume someone is going to do something when they’re not and something will slip through the cracks, or I will assume someone is not going to do something when they are, and now we’ve duplicated effort.

This is what I see happen often on software teams, especially when it comes to testing. Developers assume that QA is going to do the testing, so they give code to QA without completely knowing whether it works. QA assumes that developers are going to do this, and they don’t know what was tested and what wasn’t, so they think they have to test 100% of the functionality of the feature. In reality, the developer probably wrote decent code that mostly works, and they probably did test it to some extent (whether manually or with automation), so the tester is duplicating some of the effort. But if they don’t know what was already done, they don’t have a choice.

This is where communication is key. If I as a developer am going to write automated tests to prove that my code works, then I want QA to be involved in the writing of the tests (even if that’s just reviewing what I’m doing). That way QA doesn’t have to duplicate effort because they can know that developers have already done some of the testing, and developers can start giving code to QA when they are confident that it works (with actual proof to back it up). QA and developers can work together to decide who is going to test what, what is going to be tested, and how it’s going to be tested.

In this case, everyone is 100% responsible for the quality of the software. Instead of expecting others to cover for us, we work together with others to make sure everything is covered. This requires people to move past their traditional roles, trust each other, and work together. In the end, we won’t duplicate effort and we won’t let things slip through the cracks.

September 25, 2013 by Jon Kruger
ATDD, QA, Quality

Modifying production code to help testing

We had a week-long debate a while back about whether or not it’s OK to modify your production code in order to enable automated acceptance testing. I’m not talking about using dependency injection, interfaces, etc. to allow you to mock things in unit tests. I’m talking about modifying application code solely to help your automated acceptance tests.

There are many ways this can be done, some of which we’ve done:

  • Creating a SystemTime class, which is like DateTime except that we can set what “Now” is, so we can change time in tests (see the sketch after this list)
  • Adding optional parameters to stored procedures solely so that we can have them only operate on a subset of data in an acceptance test instead of operating on the entire set of data in the database
  • Adding extra HTML attributes so that automated tests can find elements on a page easily
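
As an illustration, here’s roughly what a SystemTime class like that tends to look like (a generic sketch of the pattern, not necessarily our exact implementation):

```csharp
using System;

// Production code calls SystemTime.Now instead of DateTime.Now,
// and tests can pin "now" to whatever value a scenario needs.
public static class SystemTime
{
    private static Func<DateTime> _now = () => DateTime.Now;

    public static DateTime Now
    {
        get { return _now(); }
    }

    // Called from test setup to freeze time at a known value.
    public static void Set(DateTime fixedNow)
    {
        _now = () => fixedNow;
    }

    // Called from test teardown to go back to the real clock.
    public static void Reset()
    {
        _now = () => DateTime.Now;
    }
}

// In an acceptance test step:
// SystemTime.Set(new DateTime(2012, 12, 31, 23, 59, 0));
// ... run the scenario that depends on the date ...
// SystemTime.Reset();
```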

To me, modifying production code to help us do automated testing is no big deal. First, if our goal is quality, I don’t think it matters how we get there. After all, we own the code base and tests so there aren’t any real restrictions on what we can do with the code or the tests as long as the end product is good.

Second, developers and QA are on the same team, and we work together quite closely, so we should do what we can to help each other out. So if we can make a minor change to the application code to save us a lot of time developing or running automated tests, then to me it makes sense to do so.

This goes back to my assertion that we need to stop thinking of QA like external auditors that have to take the application just as it is without talking to the developers and act as the independent quality police. We need to all work together to ensure quality, both developers and QA. Developers are just as responsible for quality as QA. If we place all of the responsibility for quality on QA, then developers will care less and less about quality, and you end up with shoddy code with lots of bugs (and usually no tests). I’d rather treat testing as a whole-team activity and structure the application to make testing as easy as possible.

October 19, 2012 by Jon Kruger
Agile, ATDD, Cucumber, Quality, TDD, unit testing

Does a whole team approach to testing change how developers should test?

Lately I’ve been thinking about a whole team approach to testing, where we decide as a team how features will be tested and where we use the skillsets of the whole team to automate testing. We do this on our project, and this has led to a regression testing suite of ~2500 SpecFlow acceptance tests that automate almost all scripted QA testing and regression testing for our application.

We didn’t always do this. Originally there was no automated acceptance testing, but developers were diligently writing unit tests. Those unit tests are still around, but we don’t write many unit tests anymore. We start with acceptance tests now, and the acceptance tests cover all of the testing scenarios that need to be covered. Our application has well-defined design patterns that we follow, so the idea of TDD driving the design of our code doesn’t really apply. If the unit tests fail, we often just delete them because it’s not worth fixing all of the mocks in the unit tests that are causing them to fail, and we have acceptance testing coverage around all of it.

This approach does not line up with the conventional wisdom on automated testing, which says that you’re supposed to write lots of unit tests that run really fast to give you fast feedback, help design your code, and ensure the internal quality of your code. In the past, this is how I’ve always done it. In fact, many proponents of that conventional wisdom dislike Cucumber.

Cucumber makes no sense to me unless you have clients reading the tests. Why would you build a test-specific parser for English?

— DHH (@dhh) March 29, 2011

While TDD isn’t as mainstream as I would like, it’s nothing new. Kent Beck was writing about it 10 years ago, and the original XP guys valued things like unit testing and the SOLID principles.

Automated acceptance testing still feels like a relatively new phenomenon. I’m sure people were doing it 10 years ago, but back then we didn’t have Cucumber, SpecFlow, and the Gherkin language. Now I see a lot more people using tools like these to automate QA testing in a way that uses business language and more maintainable code, rather than the old “enterprise” solutions like QTP.

Here’s what I’m getting at – I wasn’t there 10 years ago when Kent Beck was writing his books and the XP movement was starting, but it seems to me to have been primarily an effort by developers to ensure the quality of their own code. I see very little talk of where QA fits into that process. There is some mention of QA for sure, but the general gist seems to be that developers need to write tests in order to ensure quality, and the best way to do that is to write unit tests. QA people typically don’t say that unit testing is enough because it doesn’t test end-to-end, so then what do they do? Manually test? Use QTP?

My question is this – if we think of testing as a whole-team activity and not just a QA activity or a developer activity, will we arrive at the same conclusions as we did before?

I’m not ready to discount unit testing as a valuable tool, and I’m also not ready to say that everyone should do it my way because it worked for us on one project. But we have largely abandoned unit testing in favor of acceptance testing, and other teams in our department are doing it too. I still write unit tests for things like extension methods and classes that have important behavior on their own, where I want to ensure that they work independently of the larger system.
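
Here’s the kind of thing I mean (a made-up extension method and test, not code from our actual project):

```csharp
using NUnit.Framework;

// A small, self-contained piece of code whose behavior is worth pinning down
// independently of the larger system.
public static class StringExtensions
{
    public static string Truncate(this string value, int maxLength)
    {
        if (string.IsNullOrEmpty(value) || value.Length <= maxLength)
            return value;
        return value.Substring(0, maxLength);
    }
}

[TestFixture]
public class StringExtensionsTests
{
    [Test]
    public void Returns_the_original_string_when_it_is_already_short_enough()
    {
        Assert.That("abc".Truncate(5), Is.EqualTo("abc"));
    }

    [Test]
    public void Cuts_the_string_down_to_the_maximum_length()
    {
        Assert.That("abcdefgh".Truncate(3), Is.EqualTo("abc"));
    }

    [Test]
    public void Leaves_null_and_empty_strings_alone()
    {
        Assert.That(((string)null).Truncate(3), Is.Null);
        Assert.That("".Truncate(3), Is.EqualTo(""));
    }
}
```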

We have 3 Amigos meetings in which one of the things we do is develop a set of acceptance tests for a feature before any code is written. We usually decide at this point (before any code is written) that most or all of these scenarios will be automated. We write the acceptance tests in SpecFlow, I watch them all fail, and then I write the code to make them pass. I follow the patterns and framework that we have set up in our application, so there aren’t many design decisions to make. When my acceptance tests pass, I am done.
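
To make that concrete, here’s a rough sketch of what one of those scenarios and its SpecFlow step bindings can look like (the feature, the step wording, and the application classes are hypothetical, not taken from our actual suite):

```csharp
// Feature file (Gherkin), written with QA before any code exists:
//
//   Scenario: Overdue invoices are flagged
//     Given an invoice that was due 10 days ago
//     When the nightly billing job runs
//     Then the invoice is marked as overdue

using System;
using NUnit.Framework;
using TechTalk.SpecFlow;

// Minimal stand-ins for what would be real application code.
public class Invoice
{
    public DateTime DueDate { get; set; }
    public bool IsOverdue { get; set; }
}

public class BillingJob
{
    public void Run(Invoice invoice)
    {
        if (invoice.DueDate < DateTime.Today)
            invoice.IsOverdue = true;
    }
}

// SpecFlow step bindings that drive the scenario above.
[Binding]
public class OverdueInvoiceSteps
{
    private Invoice _invoice;

    [Given(@"an invoice that was due (\d+) days ago")]
    public void GivenAnInvoiceThatWasDueDaysAgo(int daysAgo)
    {
        _invoice = new Invoice { DueDate = DateTime.Today.AddDays(-daysAgo) };
    }

    [When(@"the nightly billing job runs")]
    public void WhenTheNightlyBillingJobRuns()
    {
        new BillingJob().Run(_invoice);
    }

    [Then(@"the invoice is marked as overdue")]
    public void ThenTheInvoiceIsMarkedAsOverdue()
    {
        Assert.That(_invoice.IsOverdue, Is.True);
    }
}
```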

Where do unit tests fit in there? If my acceptance tests pass, then I’m done, so why spend more time writing duplicate tests? Also, with acceptance tests, I’m not dealing with mocks, and more importantly, I’m not fixing unit tests that break because of broken mocks. If you follow the Single Responsibility Principle (which we try to do), you end up with lots of small classes, and unit tests for those classes would be mostly useless because each class does so little that it’s hard to write bugs, and each one handles such a small part of the larger activity.

There is an obvious trade-off here – my acceptance tests are not fast. I’m just testing web services (not driving a browser), so all ~2500 tests run in about an hour. But we accepted this trade-off because we were able to get things done faster by just writing the acceptance tests, which we were going to write anyway to automate QA testing. The end result is high-quality software with few bugs, not just because we have tests, but also because we communicate as a team, decide on the best way to test each feature and what it is that needs to be tested, and then find the best way to automate that testing as a team.

Again, I’m not ready to say that this way is the best way for every project, and I’ve seen each approach work extremely well. I just wonder if the conventional wisdom on testing would be the same if we thought of it from the perspective of the whole team.

October 15, 2012 by Jon Kruger
ATDD

What’s the best way to write requirements?

If you’re a developer, you probably aren’t extensively involved with requirements gathering, but they affect you dramatically because you have to read requirements and turn them into code. So how would you like to have requirements written so that you have all the details that you need to implement the feature and easily write acceptance tests (hopefully automated)?

I have ideas but not many answers, which is why we’re talking about this on Thursday at the Columbus ATDD Developers Group. We’ll be talking about questions like these, among other things:

  • How can we structure our requirements to help us come up with our acceptance criteria?
  • What requirements should we have other than our acceptance criteria?
  • What can we do to help BAs make sure that they have all of the details that we need to write tests and write the code?

It’s a fishbowl discussion so we all get to figure it out together. I think it will be fun. I feel like there are some dots that I’m having trouble connecting when it comes to requirements gathering so hopefully we’ll turn on some light bulbs.

The meeting will be Thursday, August 2 from 11:30-12:30 at the Quick Solutions office, 440 Polaris Pkwy., Suite 500 in Westerville. Please RSVP so that we know how many people are coming.

July 30, 2012 by Jon Kruger

About Me

I am a technical leader and software developer in Columbus, OH, currently working as a Director of Engineering at Upstart. Find out more here...