Developing a testing strategy
As soon as you write code, bad things can happen. Your code can be defective, and it can break existing code. It’s our job to make sure that we don’t break things.
Not breaking things is harder than it sounds. Even if only one bug sneaks through into production, that can be one too many.
The team I’m on is currently trying to figure out what our plan is for testing and QA. We have two concerns:
- How can we write code without defects?
- How can we make sure that our code changes don’t break existing code?
The second one is proving to be the difficult one. Our application is four years old, and a good chunk of the code was written by people who are no longer on the team. It’s really difficult to avoid breaking something when you don’t know the system well enough to understand the ramifications of what you’re about to do.
When I develop new code, I try to figure out how we can validate that everything is working and complete. When possible, I want to write automated tests; in the Ruby world, I’m using RSpec and Cucumber to do this. I want to make sure that all of the code I write is covered by a test if possible. Thankfully, RSpec and Cucumber don’t make this very hard at all, and we tend to test the crap out of things as a result.
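For instance, a minimal RSpec spec might look like this (the `Order` class and its discount rule are made up, just to show the shape of a spec):

```ruby
# order_spec.rb -- a minimal RSpec sketch; run it with `rspec order_spec.rb`.
require 'rspec'

class Order
  def initialize(subtotal)
    @subtotal = subtotal
  end

  # Orders of $100 or more get a 10% discount.
  def total
    @subtotal >= 100 ? @subtotal * 0.9 : @subtotal
  end
end

RSpec.describe Order do
  it 'applies a 10% discount to orders of $100 or more' do
    expect(Order.new(100).total).to eq(90.0)
  end

  it 'charges full price for smaller orders' do
    expect(Order.new(50).total).to eq(50)
  end
end
```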
But some things are hard to validate. For example, if you’re pulling a subset of data from an external database using an SQL query, how do you know that you’re pulling back the correct data? It’s hard (if not impossible) to write an automated test for this, and in some cases you have to coordinate with someone on the team that owns the database in order to test it. It’s really important not to ignore these tests or write them off as too hard. Unless you’re OK with the feature being broken, you have to validate it.
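One approach that sometimes helps: seed a throwaway local database with a handful of known rows, run the real query against it, and assert on the result. That validates the query’s logic, though not the external data itself. A sketch using the sqlite3 gem (the table and query here are made up for illustration):

```ruby
# report_query_spec.rb -- a sketch, not our real schema: seed known rows,
# run the query under test, and assert we get back exactly the right subset.
require 'sqlite3'
require 'rspec'

RSpec.describe 'active customers query' do
  it 'returns only customers flagged as active' do
    db = SQLite3::Database.new(':memory:')
    db.execute('CREATE TABLE customers (name TEXT, active INTEGER)')
    db.execute("INSERT INTO customers VALUES ('Alice', 1), ('Bob', 0)")

    rows = db.execute('SELECT name FROM customers WHERE active = 1')

    expect(rows).to eq([['Alice']])
  end
end
```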
As a developer, don’t assume that figuring out how to test things is solely your QA team’s responsibility. Certainly your QA team should be involved in coming up with the plan for testing, but you might know of edge cases that a QA person wouldn’t, and you might know of technical constraints that make testing hard. Too many defects are created because developers put the onus on QA to find everything. Remember, we’re all on the same team.
So now you go into production and move on to something else, but you’ve created a lot of features that will need to be regression tested. This is where I think it gets really difficult. How do you keep these regression tests up to date? For example:
- When you add a new security role to the system, do you update old test plans to check whether the new security role should be able to do something? (One way to keep automated role checks in sync is sketched after this list.)
- If you change a validation rule, do you update old test plans that could be affected by the new validation rule? (This could mean adding new tests to an old test plan because you might now have two paths through something where you might’ve only had one test before.)
- When things change like this, how do you even know where you might need to update test plans?
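For the security-role case, one thing that can help is driving the permission specs from a single table of roles, so adding a role forces you to declare what it can and cannot do. A sketch (the `ROLE_PERMISSIONS` table and `can_edit_reports?` helper are hypothetical):

```ruby
# permissions_spec.rb -- generate one spec per role from a single table,
# so a new role can't be added without declaring its expected permissions.
require 'rspec'

ROLE_PERMISSIONS = {
  admin:  { edit_reports: true },
  editor: { edit_reports: true },
  viewer: { edit_reports: false }
}.freeze

def can_edit_reports?(role)
  ROLE_PERMISSIONS.fetch(role).fetch(:edit_reports)
end

RSpec.describe 'report permissions' do
  ROLE_PERMISSIONS.each do |role, perms|
    expected = perms[:edit_reports]
    it "#{expected ? 'allows' : 'denies'} report editing for #{role}" do
      expect(can_edit_reports?(role)).to eq(expected)
    end
  end
end
```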
I’d be interested in hearing how you all handle these situations. This is where I’m really glad that I have automated tests (yesterday I was doing a big refactoring and my Cucumber tests pointed out a whole bunch of things that I had broken). But automated tests only solve the problem when you have a test for something. If you create new paths through the code that don’t yet have a test, you need a way to find those scenarios and write tests for them.
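One way to surface those untested paths is a coverage tool. In Ruby, SimpleCov is a common choice; a minimal setup (it has to be required before your application code loads) looks something like this:

```ruby
# spec/spec_helper.rb -- minimal SimpleCov setup; require this first so
# coverage is tracked for all application code loaded afterwards.
require 'simplecov'
SimpleCov.start do
  add_filter '/spec/'  # don't count the specs themselves as covered code
end
```

After a test run, the generated coverage/index.html report highlights lines that nothing exercised, which makes a decent starting list of scenarios that still need tests.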