12 July 2010

Where and how to test what

J.B. Rainsberger wrote in the fall of 2009 about why he thinks typical programmer use of "Integrated Tests" leads to a vicious cycle of denial and suboptimal behavior.

His overall ideas were summarized well by Gabino Roche, Jr.

And there are good uses of integration tests.

I was about to hit the delete button on this post, because I thought all I had to say had already been said. But there was still something to say: How do I personally work in a way that avoids the Vortex of Doom?

The key idea that helps me personally is to pause and ask the following question:
What is the responsibility of this test?
and then to consider the answer to the related question:
What is the responsibility of the class being tested?

Of course, those are fairly basic OO questions. However, when you're writing tests along with the code, there is a situation that is easy to get stuck in: having so many things in mind at once that you get confused about the purpose of the test, and even of the software you are working to create.

There are at least three things that tend to compete for mind space:
  1. What observable behavior do you want out of your software?
  2. How do you think you might implement that?
  3. How does what you are building interact with the rest of your system?
And when #2 gets the top spot in my mind, I find myself forgetting about #3 and resorting to copy/paste for #1 (from other, similar tests). However, when I focus on #1 and, by extension, #3, I find myself getting new ideas about how to actually implement the new behavior.

In addition, I find that these new ideas are reorienting in nature. The new stuff I'm working on ends up either modifying an existing concept in a novel way, or introducing a new concept that collaborates with the existing system in a certain way. Then the test I thought I was going to write ends up being a straightforward unit test on a new class (or on new methods of an existing class), plus a couple of collaboration tests that make sure the new behavior actually gets used.
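For instance, here is a minimal sketch of that shape in Python. The domain (LateFeeCalculator, Checkout) is made up purely for illustration; the point is the split between a focused unit test on the new concept and a collaboration test that uses a mock to prove the existing code actually calls it.

    import unittest
    from unittest.mock import Mock


    class LateFeeCalculator:
        """The new concept: one responsibility, easy to unit test."""

        def fee_for(self, days_late):
            return 0 if days_late <= 0 else days_late * 0.50


    class Checkout:
        """An existing concept, extended to collaborate with the new one."""

        def __init__(self, fee_calculator):
            self.fee_calculator = fee_calculator

        def total_due(self, base_price, days_late):
            return base_price + self.fee_calculator.fee_for(days_late)


    class LateFeeCalculatorTest(unittest.TestCase):
        """Does the new behavior work? (unit tests)"""

        def test_no_fee_when_on_time(self):
            self.assertEqual(0, LateFeeCalculator().fee_for(0))

        def test_fifty_cents_per_late_day(self):
            self.assertEqual(1.50, LateFeeCalculator().fee_for(3))


    class CheckoutCollaborationTest(unittest.TestCase):
        """Is the new behavior hooked up? (collaboration test)"""

        def test_total_includes_late_fee_from_calculator(self):
            calculator = Mock()
            calculator.fee_for.return_value = 2.00
            checkout = Checkout(fee_calculator=calculator)

            self.assertEqual(12.00, checkout.total_due(10.00, days_late=4))
            calculator.fee_for.assert_called_once_with(4)


    if __name__ == "__main__":
        unittest.main()

Note that the collaboration test knows nothing about how the fee is computed; it only verifies the conversation between the two objects.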

In the end, there are a few questions that need to get answered:
  1. Does the new behavior work? (unit tests will tell you this, 80% of tests)
  2. Is the new behavior hooked up? (collaboration tests will tell you this, 15% of tests)
  3. Does the whole thing hold together? (automated deploy to a production-style test site with system-level smoke tests will tell you this, 5% of tests)
And the system-level smoke tests are only responsible for making sure that certain itemized flows work, not all permutations of state that go through those flows.
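At that level, a smoke test might look something like the following sketch. The deploy URL and the flows are hypothetical; each test exercises one named flow end to end against the deployed test site, rather than exploring its state space.

    import unittest
    from urllib.request import urlopen

    BASE_URL = "http://test-site.example.com"  # hypothetical deploy target


    class SmokeTest(unittest.TestCase):
        """Does the whole thing hold together? (system-level smoke tests)"""

        def test_home_page_responds(self):
            with urlopen(BASE_URL + "/") as response:
                self.assertEqual(200, response.status)

        def test_search_flow_returns_results_page(self):
            # One representative flow, not every permutation of search state.
            with urlopen(BASE_URL + "/search?q=smoke") as response:
                self.assertEqual(200, response.status)
                body = response.read().decode("utf-8")
            self.assertIn("results", body.lower())


    if __name__ == "__main__":
        unittest.main()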

Hopefully this is a useful addition to the conversation that started in 2009.
