Jon Kruger
Quality, TDD

Confidence and testing… what does “confidence” mean anyway?

I tweeted a quote today that I found to be very insightful.

A poignant reminder of why I'm here. pic.twitter.com/H9Nkbnm4Qt

— Jon Kruger (@JonKruger) April 8, 2014

The quote, from Kent Beck: "I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence."

You could take this two different ways. I take it to mean that I need to do just the right amount of testing in order to be confident that my code works and will continue to work.

You could also see it another way, as pointed out by some others:

@JonKruger I think the line which follows that quote is also important; as an industry, that level of confidence might not be too high.

— Steven Harman (@stevenharman) April 8, 2014

@JonKruger #coding pic.twitter.com/CV98xQuKDe I admire KBeck but disagree.. Absent good #Tests coder confidence is myth.. #OverConfident = Bugs

— SeattleFan4Dan4 (@TJWilk_WA) April 8, 2014

There are those out there who think that testing is a waste of time, and I could see some of them taking a quote like this and saying, "I'm a good enough developer that I don't need tests in order to write code that works!" (I heard someone say those exact words once, sadly.)

@JonKruger Most developers do the same thing… they just have way too much confidence in their coding skills. ;)

— jared richardson (@jaredrichardson) April 8, 2014

The discrepancy is due to what the word “confidence” means in this quote. If it’s just confidence that it works now, then you might be fine without tests, but if it’s confidence that your code will still be working a year from now after someone else has had to modify it, then you probably want tests. (Heck, I want tests so that I can change my own code tomorrow!) I find that the problem is that on many teams, developers define confidence as “good enough that QA can look at it” and then achieving true “confidence” is someone else’s problem.
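
To make that second kind of confidence concrete, here's a minimal sketch in Python (the function and all the names are made up for illustration): a handful of tests that pin down today's behavior so that whoever modifies this code next, including me, finds out immediately if they broke it.

    import unittest

    # Hypothetical function standing in for "code that works today."
    def calculate_late_fee(days_overdue, daily_rate=0.50, max_fee=10.00):
        """Fee grows per day overdue but is capped, and is never negative."""
        return min(max(days_overdue, 0) * daily_rate, max_fee)

    class LateFeeTests(unittest.TestCase):
        def test_typical_case(self):
            self.assertEqual(calculate_late_fee(4), 2.00)

        def test_fee_is_capped(self):
            # The edge cases are what a future change is most likely to break.
            self.assertEqual(calculate_late_fee(365), 10.00)

        def test_negative_days_charge_nothing(self):
            self.assertEqual(calculate_late_fee(-3), 0.00)

    if __name__ == "__main__":
        unittest.main()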

Maybe we should change the pronouns.

We (the team) get paid for code that works, so our philosophy is to test as little as possible to reach a given level of confidence.

Now this changes things. What's the best way to test as little as possible? Automate all the tests that should be automated. Let QA spend time doing valuable things like exploratory testing instead of doing loads of manual regression tests. Funny how you can turn everything on its head when you take what is perceived to be an individual's problem and make it everyone's problem!

April 8, 2014 by Jon Kruger
Design, Uncategorized

Modularity and testing

When we write automated tests, we want to do it in the simplest way possible while achieving the highest level of quality for the time we invest in our tests. Unfortunately, this is easier said than done.

Writing tests can be hard. It’s hard figuring out what kinds of automated tests to write. We can write unit tests, integration tests, functional/acceptance tests, performance tests, or just do manual testing. We have to decide if developers are writing tests or if QA people are doing it (or both). It’s really complicated!

Let’s look at some ways that people sometimes tackle the testing problem and then discuss some different ways we can approach it.

The black box

On some projects, the team is testing a big black box. Usually this is where QA people are doing all the testing and developers aren’t writing automated tests. If you read my blog at all you know that I do not like this approach (unless time to market is really that important) because it leads to lots of bugs and makes refactoring anything really scary. In this scenario, QA testers typically only control the inputs and outputs into the black box (which is usually a user interface of some kind). This leads to problems like having to jump through a bunch of hoops just to test something that’s farther down the line of some complicated workflow.
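
Here's a sketch of what that feels like in test code (the OrderSystem below is hypothetical, standing in for whatever sits inside the black box): because the tester only controls the outer surface, checking anything late in the workflow means replaying everything that comes before it.

    import unittest

    # A stand-in "black box": the test may only use the application's outer
    # surface, with no access to internal classes or state.
    class OrderSystem:
        def __init__(self):
            self._orders = {}
            self._next_id = 1

        def create_order(self, sku, qty):
            oid = self._next_id
            self._next_id += 1
            self._orders[oid] = {"sku": sku, "qty": qty, "status": "created"}
            return oid

        def pay(self, oid):
            self._orders[oid]["status"] = "paid"

        def ship(self, oid):
            self._orders[oid]["status"] = "shipped"

        def status(self, oid):
            return self._orders[oid]["status"]

    class BlackBoxTest(unittest.TestCase):
        def test_shipped_status(self):
            system = OrderSystem()
            # To verify the *last* step of the workflow, the test must walk
            # through every step before it -- the hoop-jumping described above.
            oid = system.create_order("ABC-123", qty=2)
            system.pay(oid)
            system.ship(oid)
            self.assertEqual(system.status(oid), "shipped")

    if __name__ == "__main__":
        unittest.main()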

Tiny slices

If you're a fan of test-driven development, you will write a lot of unit tests. In this case, you're slicing your application up into very tiny, testable units (like at a class level) and mocking and stubbing the dependencies of each class. This gives you faster, less brittle tests. You still probably have QA people testing the black box as well, so now we're using two different approaches, which is good.
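
A minimal sketch of one of those tiny slices, assuming a hypothetical ShippingCalculator whose rate-lookup collaborator is stubbed with Python's unittest.mock: the real dependency (perhaps a slow web call) never runs, so the test is fast and isolated.

    import unittest
    from unittest.mock import Mock

    # Hypothetical "tiny slice": one class, with its collaborator stubbed out.
    class ShippingCalculator:
        def __init__(self, rate_service):
            self.rate_service = rate_service

        def cost(self, weight_lbs, zip_code):
            rate = self.rate_service.rate_per_pound(zip_code)
            return round(weight_lbs * rate, 2)

    class ShippingCalculatorTests(unittest.TestCase):
        def test_cost_multiplies_weight_by_rate(self):
            rate_service = Mock()
            rate_service.rate_per_pound.return_value = 1.25
            calc = ShippingCalculator(rate_service)
            self.assertEqual(calc.cost(4, "43215"), 5.00)
            # The real rate service never runs: fast and isolated.
            rate_service.rate_per_pound.assert_called_once_with("43215")

    if __name__ == "__main__":
        unittest.main()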

I’m not satisfied

While each of the previous two approaches has its positives, they both have their negatives. I've already talked about the negatives of all-manual QA black box testing, so I won't go into that again (although I'm always up for a good rant). But writing lots of unit tests has its problems as well. For example:

  • Tests with lots of mocks and stubs, and tests that fail because someone refactored a class and now my stubs and mocks are all hosed. You know the feeling. (There's a sketch of this failure mode after the list.)
  • Testing an individual class or method is great, but my tests aren’t always testing something that has business value on its own.
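
Here's a sketch of that first failure mode, with all names hypothetical. The test stubs the repository method the class used to call; after a harmless rename inside the class, the test blows up even though the observable behavior never changed.

    import unittest
    from unittest.mock import Mock

    # The test below is coupled to HOW the class talks to its collaborator,
    # not to the observable result.
    class InvoiceService:
        def __init__(self, repo):
            self.repo = repo

        def total_due(self, customer_id):
            # Originally this called self.repo.find_open_invoices(customer_id).
            # A harmless refactor renamed the repository method:
            invoices = self.repo.open_invoices_for(customer_id)
            return sum(inv["amount"] for inv in invoices)

    class BrittleTest(unittest.TestCase):
        def test_total_due(self):
            repo = Mock()
            # The stub still configures the OLD method name, so running this
            # test now errors (open_invoices_for returns a bare Mock, not a
            # list) even though total_due's behavior never changed.
            repo.find_open_invoices.return_value = [{"amount": 10}, {"amount": 5}]
            self.assertEqual(InvoiceService(repo).total_due(42), 15)

    if __name__ == "__main__":
        unittest.main()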

No limits

You own your application. Your application does not own you (at least it shouldn’t). We also own the testing of our application and should be able to test it in any way that we can think of. The goal is high quality, low cost of maintaining the test suite, and speed to market. How can we best achieve this goal? (And don’t just give me the first textbook answer that pops in your head.)

There is no one-size-fits-all method for testing applications, and there isn’t even one single best way to test a single application. So what if we used many different approaches and broke our system up into chunks so that we can use each testing method to its fullest potential?

In this case, I'm dividing up my application into separate modules, each with its own purpose, function, and business value. Some may have a user interface component and others might just do some task behind the scenes. I can decide how to best test each module individually. Maybe some modules are done with all black-box acceptance testing, and other modules are done with lots of unit tests. Even within the black-box modules, I might still write unit tests. Of course, I'm still going to have some end-to-end tests (manual and/or automated) that test the whole system working together, but I don't have to test the majority of the functionality this way.

My favorite kinds of tests are ones that test a system working together because I can specify the inputs and outputs and I don't have to deal with tons of stubs and mocks. Now if you try to test the whole application end to end this way, it can be a bit cumbersome. But if you have a smaller module that you can test end-to-end, now you can have clean, readable, well-defined tests that don't have tons of mocks, and the tests define some business function that the module needs to perform. My module might still work independently of the UI or the database, so I might still be able to stub those out and have fast tests. This feels like the kinds of tests I've always wanted – tests that test a module end-to-end but are able to run fast because I can still isolate dependencies.
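
Here's a rough sketch of that shape, with a made-up TransferModule: the test drives the module end-to-end through its public API, with only an in-memory fake standing in for the database boundary, so it stays fast while still exercising real logic working together.

    import unittest

    class InMemoryAccountStore:
        """Fake standing in for the database at the module's boundary."""
        def __init__(self):
            self.balances = {}

        def get(self, account_id):
            return self.balances.get(account_id, 0)

        def put(self, account_id, balance):
            self.balances[account_id] = balance

    class TransferModule:
        """The module under test: real logic, injected storage."""
        def __init__(self, store):
            self.store = store

        def deposit(self, account_id, amount):
            self.store.put(account_id, self.store.get(account_id) + amount)

        def transfer(self, src, dst, amount):
            if self.store.get(src) < amount:
                raise ValueError("insufficient funds")
            self.store.put(src, self.store.get(src) - amount)
            self.store.put(dst, self.store.get(dst) + amount)

    class TransferModuleTests(unittest.TestCase):
        def test_transfer_moves_money(self):
            module = TransferModule(InMemoryAccountStore())
            # Exercise the module through its public API only -- no mocks of
            # its internals, just a fake at the database boundary.
            module.deposit("A", 100)
            module.transfer("A", "B", 30)
            self.assertEqual(module.store.get("A"), 70)
            self.assertEqual(module.store.get("B"), 30)

    if __name__ == "__main__":
        unittest.main()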

Hey, look, it turns out that modular applications are a good idea in general! It's way easier to deal with lots of smaller applications that work together than to deal with one monolithic application. Those of you with large solution files and long compilation times (I'm raising my hand) know the pain of dealing with large applications.

The emerging blob

We like to talk about "emergent design" and how we can write tests first and let that drive the design of our code. That is true, but left to itself, your codebase will evolve into a monolithic (albeit well-tested) blob of an application that assimilates all code into its large collective.

The only way you're going to have a modular application is if you draw the lines in the sand up front. This can be really hard to do when you have a newer application and you don't have a ton of insight to tell you how to keep things separate. Compounding the confusion is the fact that you might have a single database for the application as a whole, which I think is fine. You can have multiple modules that use the same database, even the same tables. Sure, it would be better if you could keep the tables in separate databases, but sometimes that's not possible or realistic.

You might start out with certain modules and then realize that you created a separation that is too painful to maintain. That's OK; it's much easier to combine two modules than it is to try to separate things into modules after the fact!

Once you’ve defined your modules, now you can decide how to test them (QA and devs should work together on this!).

This feels better

  • Cleaner tests with fewer mocks that test mini-systems that provide some function or business value
  • More modularity means I can change code without potentially breaking as many things
  • Smaller solution files!

I really like how this sounds.

April 8, 2014 by Jon Kruger

About Me

I am a technical leader and software developer in Columbus, OH, currently working as a Senior Engineering Manager at Upstart. Find out more here...