Being intentional about exploratory testing

This is the second post in a (to be) three-part series about my statement “The difference between a test case and a requirement is the moment of discovery.”

In the previous post I distinguished test cases that are translated requirements from ones that aren’t. This is something I learned from James Lyndsay. As he describes in “Why Exploration has a Place in any Strategy”:

Some tests are designed to find risks. They’re made on-the-fly and run once. Some are designed to tell us about retained value. They’re made once, and run forever after. You need both: they tell you different things.

The tests with a focus on value are based on requirements, on things we know we want; they are prescribed (as in: written beforehand). The tests with a focus on risks are exploratory: they are based on our decisions in the moment, as we look for surprises and decide how we feel about those surprises.
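To make that distinction concrete, here is a minimal sketch (my illustration, not from the original post; `calculate_vat` is a hypothetical function included so the example runs) of a value-focused, requirement-based test: written once, then run on every build after.

```python
# A prescribed, requirement-based test: it encodes something we know we
# want ("21% VAT on a 100.00 order is 21.00") and is meant to be written
# once and run forever after, guarding retained value.

def calculate_vat(amount: float, rate: float = 0.21) -> float:
    # Toy implementation, only here to make the example self-contained.
    return round(amount * rate, 2)

def test_vat_on_standard_order():
    assert calculate_vat(100.00) == 21.00
```

Its exploratory counterpart leaves no such artifact: it might be a one-off call like `calculate_vat(-5)` typed into a REPL to see whether the result surprises us, made on the fly and run once.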

One thing I’ve noticed through the years is that a lot more exploratory testing is happening than we give credit for. It’s hidden, a required but implicit part of the work. We do it, but we’re not intentional about it.

Today I want to argue that it pays to be more intentional about exploratory testing. Before I get there, however, I want to explain what exploratory testing is, because there are still plenty of misconceptions going around.

Read more…

The Fluxx ensemble exercise

Earlier this week I ran a full-day workshop at the excellent HUSTEF conference on working in an ensemble (aka mob programming/testing or software teaming). As part of the workshop I tried out a new exercise, in which participants were allowed to change the rules of the ensemble. The goal was to experience why the basic rules of ensembling are the way they are and what happens when you change them.

Since the participants really liked the exercise, I figured I’d write about it and name it: the Fluxx ensemble exercise. For those not familiar with Fluxx: it is a card game in which changing the rules is a key part of the game. It’s one of my favourite games.

Before I go into the exercise, though, I’ll first need to explain the basic rules of ensembling.

Read more…

What do you fix when you fix a test?

You ran the tests - or a pipeline did it for you - and some of them failed. Time to fix the tests! But what is it exactly that needs fixing?

There are quite a few things that might make a test fail:

  1. an issue with the build
  2. an issue with the pipeline (if that’s where the test runs)
  3. an issue in the environment the code under test is running on
  4. an issue in the environment the test code is running on
  5. a bug in the code under test
  6. a mistake in the test code
  7. a mistake in what the test should test

Arguably, only the last three describe a test that fails. The test did its job by detecting a problem. In the first four, we didn’t even get that far: the issue prevented the test from doing its job. So in those cases, it’s not the test(s) as such that need fixing.
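Test frameworks already hint at this split. As a rough sketch (my example, not from the post; the names are hypothetical), pytest reports an exception raised during fixture setup as an error, distinct from an assertion failure, which roughly separates the first four causes from the last three:

```python
import pytest

# `database` stands in for any environment the test depends on
# (causes 1-4); the arithmetic stands in for the code under test.

@pytest.fixture
def database():
    # An exception here makes pytest report an ERROR: the test never ran,
    # so it told us nothing about the product. That's not a failing test.
    raise ConnectionError("environment is down")

def test_transfer_updates_balance(database):
    # An AssertionError here would be reported as a FAILURE: the test ran
    # and did its job by detecting a problem (causes 5-7).
    assert 100 - 30 == 70
```

In a CI report it pays to treat the two differently: errors point at the build, pipeline, or environment; failures point at the product, the test code, or our understanding of what the test should test.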

Read more…