28 July 2010

Playing catch-up

The world is changing all the time, which means constant learning is required just to keep current. In other words, you are always behind.

If you accept the fact that you are always going to be behind, in one way or another, the problem then becomes more tractable. How, in a limited amount of time, can you bootstrap yourself into a learning environment where you can catch up enough to get something working?

The core questions are:
  1. What to learn? (because of the limited time, you know you have to be selective)
  2. How to go about it? (because of the limited time, and the constant churn, you have to be a quick learner/applier)

Andy Hunt wrote a book about pragmatic learning. Clayton Christensen wrote several books about disruptive innovation. The Wikipedia contributors wrote an article about the term "learning curve". The ideas in these sources can be instructive.

I have my own opinion about the matter.

My answers to the core questions are:
  1. Look around and get creative about how you can apply about-to-be-stable newer technology to the software problem at hand.
  2. Climb the dynamic learning curve by becoming an "early adopter".

Being an "early adopter" is a productive approach to bootstrapping yourself into a rich learning environment. The key to quality learning is keeping it real & experiential. And trying new technology out and trying to apply it to the task at hand is certainly real & experiential.

The part that makes this whole learning equation possible is that the "innovators" actually need the "early adopters" in order to gain traction and stability. In an open world, that means that you can use early adoption as a means by which it is always possible to inject yourself into a rich and productive learning environment.

After I play this game for a while and become skilled at it, I'm guessing there will be a point at which I will want to have a talk with Paul Graham about a startup. Or maybe I will care about being an innovator in my family more than being famous in a technical sphere. Who knows.

Counterpoint

Donald Knuth is the classic example of someone whose life mission specifically excludes playing month-to-month catch-up. Oh, and by the way, that page returns the following HTTP header (in 2010):
Last-Modified: Fri, 23 Sep 2005 04:39:22 GMT

Even the innovation-encouraging Paul Graham wrote an article about addiction that cautions against blanket acceptance of technical improvements.

A Theist Balance?

I recently came across Clayton Christensen's article in the Washington Post: A Theist Balance on the Court.

I liked the pun on a-theist.

But, more seriously, I think that the topic of "voluntary obedience to unenforceable laws" is core to the American experiment.

There is a point at which people cannot be governed by law except to their own detriment. I believe that this point is reached when individuals decide to trample each other's rights and attempt to defend themselves, by legal or illegal means, in doing so.

And I very much like Clayton's reframing of the problem: If the "religions of atheism and secularism offer us no institutions whose mission is to inculcate in the next generation of Americans the instinct to obey unenforceable laws," then it is improper to marginalize the contribution of any and all of the institutions that DO inculcate such an instinct.

12 July 2010

How to rewrite a complex test

An Integrated Test is suboptimal for asserting basic functionality and basic sub-component collaboration behavior.

So if you have a massive Integrated Test, how is it possible to rewrite that test into some number of the following kinds of tests?
  • focused unit test
  • focused collaboration test (how one class collaborates with another)
  • systems-level integration test (load balancer behavior, queuing system behavior)
I think it comes down to the following activities (a concrete sketch in code follows the list):
  • enumerate the different permutations of state
  • enumerate the different permutations of flow
  • for each permutation of state: create one focused unit test
  • for each permutation of flow: decide whether 1) the permutation of flow devolves into a sub-component collaboration test, or 2) into a systems-level integration test
  • create the required focused collaboration tests
  • create any required systems-level integration tests (usually very rare)
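
As a concrete sketch of that decomposition (the class names, the pricing rule, and the JUnit 4 / Mockito style are all my own inventions, purely for illustration): suppose the big Integrated Test had been exercising a PricingService that computes a discount and then tells a Notifier about the result. The two permutations of state become two focused unit tests, and the single permutation of flow becomes one focused collaboration test.

  import static org.junit.Assert.assertEquals;
  import static org.mockito.Mockito.mock;
  import static org.mockito.Mockito.verify;

  import org.junit.Test;

  // Hypothetical production code, sketched only as far as the tests need it.
  interface Notifier { void priceComputed(double price); }

  class PricingService {
      private final Notifier notifier;
      PricingService(Notifier notifier) { this.notifier = notifier; }

      // State logic: orders of 10 or more items get a 10% discount.
      double priceFor(int quantity, double unitPrice) {
          double total = quantity * unitPrice;
          double price = (quantity >= 10) ? total * 0.9 : total;
          notifier.priceComputed(price);
          return price;
      }
  }

  public class PricingServiceTest {

      // Focused unit tests: one per permutation of state.
      @Test
      public void smallOrderGetsNoDiscount() {
          PricingService service = new PricingService(mock(Notifier.class));
          assertEquals(50.0, service.priceFor(5, 10.0), 0.001);
      }

      @Test
      public void largeOrderGetsTenPercentDiscount() {
          PricingService service = new PricingService(mock(Notifier.class));
          assertEquals(90.0, service.priceFor(10, 10.0), 0.001);
      }

      // Focused collaboration test: one per permutation of flow.
      @Test
      public void computedPriceIsPassedToTheNotifier() {
          Notifier notifier = mock(Notifier.class);
          new PricingService(notifier).priceFor(5, 10.0);
          verify(notifier).priceComputed(50.0);
      }
  }

Each test now has exactly one responsibility, which is what makes it easy to find, cheap to run, and obvious about what broke when it fails.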
An interesting smell comes out of the activity of creating these tests: there may already be an existing test responsible for asserting the focused behavior, but because it isn't in the right place, it is hard to tell whether it exists at all. In that case, the act of "create focused test" also implies the act of "move focused test into its rightful home" (so others, including yourself, can find it later).

In a meeting in Feb. 2010, I wrote the following about problems I've experienced with a test suite at work:
  • Inability to run a test context-free => high re-run costs and downstream delays
  • Too much custom test infrastructure => high maintenance costs
  • Risk of centralized integration => waiting on central integration before shipping
The approach I suggested was to find the costliest 20% of tests, and focus on those.

I suggested measuring "costliest tests" using a combination of the following criteria (a rough scoring sketch follows the list):
  • How many superfluous assertions in this test?
  • How many superfluous failures has this test generated in the last 6 months?
  • How long does it take to run this test?
  • How many "permutations of state" is this test trying to cover?
  • How many "permutations of flow" is this test trying to cover?
  • How far away from the code is this test?
  • Is there a place closer to the code where those "permutations of state and flow" can be adequately tested?
  • Are there ways to ensure all the "permutations of flow" can be covered without having to mix the test with trying to test all the "permutations of state" at the same time?
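
To turn the quantifiable criteria into an actual ranking, a simple weighted score per test is enough. The weights and the TestStats fields below are made up purely for illustration; the last few criteria on the list stay as judgment calls rather than numbers.

  // A minimal sketch of ranking tests by cost; weights and fields are assumptions.
  class TestStats {
      String name;
      int superfluousAssertions;   // assertions unrelated to this test's responsibility
      int failuresLastSixMonths;   // failures that did not point to a real defect
      double runtimeSeconds;
      int statePermutations;       // permutations of state the test tries to cover
      int flowPermutations;        // permutations of flow the test tries to cover

      // Higher score = costlier test = better candidate for rewriting first.
      double costScore() {
          return 1.0 * superfluousAssertions
               + 3.0 * failuresLastSixMonths
               + 0.5 * runtimeSeconds
               + 2.0 * statePermutations
               + 2.0 * flowPermutations;
      }
  }

Sort the suite descending by costScore() and the top 20% becomes the rewrite backlog.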
The whole idea is to simulate expensive parts of our tests in a way that still gives us the confidence that the test is valid and covers the desired integration case.
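
One way to do that (again with invented names) is to hide the expensive piece of infrastructure behind a small interface and substitute an in-memory fake in tests, so the real publishing logic still runs but nothing slow or shared is involved:

  import static org.junit.Assert.assertEquals;

  import java.util.ArrayList;
  import java.util.List;

  import org.junit.Test;

  // Hypothetical port to the expensive queuing infrastructure.
  interface MessageQueue { void publish(String message); }

  // In-memory fake: cheap, deterministic, runs anywhere, no shared environment.
  class InMemoryQueue implements MessageQueue {
      final List<String> published = new ArrayList<String>();
      public void publish(String message) { published.add(message); }
  }

  // The class under test talks to the port and never knows it got a fake.
  class OrderPublisher {
      private final MessageQueue queue;
      OrderPublisher(MessageQueue queue) { this.queue = queue; }
      void orderPlaced(String orderId) { queue.publish("order-placed:" + orderId); }
  }

  public class OrderPublisherTest {
      @Test
      public void publishesAnOrderPlacedMessage() {
          InMemoryQueue queue = new InMemoryQueue();
          new OrderPublisher(queue).orderPlaced("42");
          assertEquals("order-placed:42", queue.published.get(0));
      }
  }

A small number of systems-level integration tests can then confirm, separately and rarely, that the real queue honors the same contract.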

Where and how to test what

J.B. Rainsberger wrote in the fall of 2009 about why he thinks typical programmer use of "Integrated Tests" leads to a vicious cycle of denial and suboptimal behavior.

His overall ideas were summarized well by Gabino Roche, Jr.

And there are good uses of integration tests.

I was about to hit the delete button on this post, because I thought all I had to say had already been said. But there was still something to say: How do I personally work in a way that avoids the Vortex of Doom?

The key idea that has helped me personally is to pause, and ask the following question:
What is the responsibility of this test?
and then to consider the answer to the related question:
What is the responsibility of the class being tested?

Of course, those are fairly basic OO questions. However, when you're writing tests along with the code, there is a situation that is easy to get stuck in: having so many things in mind at once that you get confused about the purpose of the test, and even about the software you are working to create.

There are at least three things that tend to compete for mind space:
  1. What observable behavior do you want out of your software?
  2. How do you think you might implement that?
  3. How does what you are building interact with the rest of your system?
And, when #2 gets the top spot in my mind, I find myself forgetting about #3, and resorting to copy/paste for #1 (from other similar tests). However, when I focus on #1 and, by extension, #3, I find myself getting new ideas about how to actually implement the new behavior.

In addition, I find that these new ideas are reorienting in nature. The new stuff I'm working on ends up either modifying an existing concept in a novel way, or introducing a new concept that collaborates with the existing system in a certain way. Then the test I thought I was going to write ends up being a straightforward unit test on a new class, or on new methods of an existing class, plus a couple of collaboration tests that make sure the new behavior actually gets used.

In the end, there are a few questions that need to get answered:
  1. Does the new behavior work? (unit tests will tell you this, 80% of tests)
  2. Is the new behavior hooked up? (collaboration tests will tell you this, 15% of tests)
  3. Does the whole thing hold together? (automated deploy to a production-style test site with system-level smoke tests will tell you this, 5% of tests)
And the system-level smoke tests are only responsible for making sure that certain itemized flows work, not all permutations of state that go through those flows.
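
A system-level smoke test for one itemized flow can be as small as the following; the test-site URL and the checkout path are placeholders, and in practice this runs against the production-style deployment rather than against code in the same process.

  import static org.junit.Assert.assertEquals;

  import java.net.HttpURLConnection;
  import java.net.URL;

  import org.junit.Test;

  public class CheckoutSmokeTest {

      // Placeholder address of the production-style test site.
      private static final String BASE_URL = "http://test-site.example.com";

      // One itemized flow: the checkout page is reachable and responds successfully.
      // The permutations of state behind checkout are covered by unit tests, not here.
      @Test
      public void checkoutPageResponds() throws Exception {
          URL url = new URL(BASE_URL + "/checkout");
          HttpURLConnection connection = (HttpURLConnection) url.openConnection();
          connection.setRequestMethod("GET");
          assertEquals(200, connection.getResponseCode());
      }
  }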

Hopefully this is a useful addition to the conversations already started in 2009.