When unit tests are better than acceptance tests
In my last post, I talked about how acceptance tests can be more important than unit tests. But of course, there is another side to that coin.
The conventional wisdom is that you should have more unit tests than acceptance tests. This is how I’ve done it in the past (on pretty much every project other than my current one).
A common scenario for me is a web application that uses an ORM for data access, with as few stored procedures as possible. Because I'm using an ORM, my data layer has very little code. I'll probably write a few basic unit tests and integration tests for what little data layer I have, and integration tests for the stored procs (which I usually only need for queries that require custom SQL).
My data layer primarily contains a really simple repository class that lets me do CRUD operations and exposes some sort of IQueryable.
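Something along these lines is what I have in mind (this sketch is just illustrative, not code from an actual project; the NHibernate usage could be swapped for any ORM session or context):

```csharp
using System.Linq;
using NHibernate;
using NHibernate.Linq;

// A rough sketch of the kind of repository I mean: plain CRUD plus an IQueryable.
public interface IRepository<T> where T : class
{
    T Get(object id);
    void Save(T entity);
    void Delete(T entity);
    IQueryable<T> Query();
}

public class Repository<T> : IRepository<T> where T : class
{
    private readonly ISession _session;

    public Repository(ISession session)
    {
        _session = session;
    }

    public T Get(object id) { return _session.Get<T>(id); }
    public void Save(T entity) { _session.SaveOrUpdate(entity); }
    public void Delete(T entity) { _session.Delete(entity); }
    public IQueryable<T> Query() { return _session.Query<T>(); }
}
```

There's no business logic in here at all, which is exactly the point.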
The business layer is going to contain the meat of the code. This is where I’m going to have a lot of unit tests. Because I kept my data layer so simple, I’m not going to have as many classes to stub out, and when I do, I’m stubbing out simple methods like Save() and Get(), which have no business logic in them.
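To make that concrete, here's a hypothetical business-layer class and test. The OrderService, Order, and the business rule are all made up for illustration; the test uses Moq and NUnit and stubs the IRepository&lt;T&gt; interface from the sketch above.

```csharp
using Moq;
using NUnit.Framework;

// Hypothetical business-layer code and test, just to show how little stubbing
// is needed when the repository only exposes dumb Save()/Get() methods.
public enum OrderStatus { Open, Cancelled }

public class Order
{
    public int Id { get; set; }
    public OrderStatus Status { get; set; }
}

public class OrderService
{
    private readonly IRepository<Order> _orders;

    public OrderService(IRepository<Order> orders)
    {
        _orders = orders;
    }

    public void Cancel(int id)
    {
        // The business rule under test lives here, not in the data layer.
        var order = _orders.Get(id);
        order.Status = OrderStatus.Cancelled;
        _orders.Save(order);
    }
}

[TestFixture]
public class OrderServiceTests
{
    [Test]
    public void Cancelling_an_order_marks_it_cancelled_and_saves_it()
    {
        var order = new Order { Id = 1, Status = OrderStatus.Open };
        var repository = new Mock<IRepository<Order>>();
        repository.Setup(r => r.Get(1)).Returns(order);

        new OrderService(repository.Object).Cancel(1);

        Assert.That(order.Status, Is.EqualTo(OrderStatus.Cancelled));
        repository.Verify(r => r.Save(order), Times.Once);
    }
}
```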
I also have controllers, which take information from the business layer and return some sort of view model. I'll write unit tests that verify that the translation happens correctly.
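Again, a made-up example of what those controller tests look like (the CustomerController, ICustomerService, and view model are all hypothetical):

```csharp
using Moq;
using NUnit.Framework;

// Hypothetical controller test: the controller asks the business layer for data
// and maps it to a view model; the test only verifies that translation.
public class Customer
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class CustomerViewModel
{
    public string DisplayName { get; set; }
}

public interface ICustomerService
{
    Customer GetCustomer(int id);
}

public class CustomerController
{
    private readonly ICustomerService _customers;

    public CustomerController(ICustomerService customers)
    {
        _customers = customers;
    }

    public CustomerViewModel Details(int id)
    {
        var customer = _customers.GetCustomer(id);
        return new CustomerViewModel
        {
            DisplayName = customer.LastName + ", " + customer.FirstName
        };
    }
}

[TestFixture]
public class CustomerControllerTests
{
    [Test]
    public void Details_builds_the_display_name_from_the_customer()
    {
        var service = new Mock<ICustomerService>();
        service.Setup(s => s.GetCustomer(42))
               .Returns(new Customer { FirstName = "Jane", LastName = "Doe" });

        var viewModel = new CustomerController(service.Object).Details(42);

        Assert.That(viewModel.DisplayName, Is.EqualTo("Doe, Jane"));
    }
}
```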
If I have some hardcore JavaScript, I’ll write unit tests for that. If it’s really simple stuff like, “when this radio button is clicked, disable this text box”, I don’t write tests for that. But when you build something that has more logic, you’ll want to test it.
At this point, I’ve tested pretty much everything very extensively. I could add some acceptance tests, which would help me communicate with the business and automate some QA testing. But I’ve also been on projects where we had excellent unit test coverage and no acceptance tests, and everything came out just fine. That particular project was a two-person effort with no QA team, so we trusted our coding abilities and accepted the risk. The point is not that you should fire your QA team; it’s just an example of how you can structure your code so that unit tests alone will get it right pretty much all the time.
This is just one example of one project. Your project might be different, with a different architecture, a different team, and a different business. You need to figure out what works best for you.
This is a two-parter:
I’ve run into issues with the LINQ providers for ORMs before, where something works in Linq2Objects but not in Linq2NH/Entities. Yes, usually it’s a complex query, but even if it isn’t, do you write tests injecting the ORM (maybe using something like SQLite) just to make sure the LINQ actually works?
Secondly, how do you keep that repository from becoming a leaky abstraction if you want to use something like NH’s caching/futures/other features?
Thanks Jon!
Usually I write the tests first and make the query work with Linq2Objects. Then when I run it against the database, I might modify the LINQ to make the generated SQL better, but the tests still tell me whether it works conceptually. I might write an integration test to prove it, but that’s of limited value because I can’t write a test that guarantees my queries don’t generate horrendous SQL. I just have to watch for that manually.
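As a rough sketch of that workflow (the Invoice entity and the query are made up): the query lives in a method that takes an IQueryable, so the unit test can run it against an in-memory list (Linq2Objects) before the same expression ever hits the ORM's LINQ provider.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

// Hypothetical entity and query, tested against an in-memory list.
public class Invoice
{
    public int Id { get; set; }
    public DateTime DueDate { get; set; }
    public bool IsPaid { get; set; }
}

public static class InvoiceQueries
{
    public static IQueryable<Invoice> Overdue(this IQueryable<Invoice> invoices, DateTime asOf)
    {
        return invoices.Where(i => !i.IsPaid && i.DueDate < asOf);
    }
}

[TestFixture]
public class OverdueInvoiceQueryTests
{
    [Test]
    public void Only_unpaid_invoices_past_their_due_date_are_overdue()
    {
        var invoices = new List<Invoice>
        {
            new Invoice { Id = 1, DueDate = new DateTime(2011, 1, 1), IsPaid = false },
            new Invoice { Id = 2, DueDate = new DateTime(2011, 3, 1), IsPaid = false },
            new Invoice { Id = 3, DueDate = new DateTime(2011, 1, 1), IsPaid = true }
        }.AsQueryable();

        var overdue = invoices.Overdue(new DateTime(2011, 2, 1)).ToList();

        Assert.That(overdue.Single().Id, Is.EqualTo(1));
    }
}
```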
About the leaky abstractions, I think I’ve had some of those on every project. As much as you try to avoid it, there are some places where you just can’t. So I try to minimize it as much as makes sense, but I don’t fight it too much. I’ll usually make a wrapper class to handle things like that (e.g. an IUnitOfWorkManager class that handles transactions).
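Something like this, for example (the NHibernate implementation shown is just one way it could look):

```csharp
using System;
using NHibernate;

// A sketch of the kind of wrapper I mean: callers hand it the work to do,
// and it takes care of the transaction so they never touch ITransaction directly.
public interface IUnitOfWorkManager
{
    void Execute(Action action);
}

public class NHibernateUnitOfWorkManager : IUnitOfWorkManager
{
    private readonly ISession _session;

    public NHibernateUnitOfWorkManager(ISession session)
    {
        _session = session;
    }

    public void Execute(Action action)
    {
        // Wrap the work in a transaction and commit when it finishes.
        using (var transaction = _session.BeginTransaction())
        {
            action();
            transaction.Commit();
        }
    }
}
```

Callers just do something like `unitOfWork.Execute(() => repository.Save(order));` and the transaction details stay in one place.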