Comparing counterstring implementations in TypeScript

In my previous post, “Using ‘fake it till you make it’ to implement counterstring”, I mentioned that the implementation I included there wasn’t my initial implementation:

I did something less performant involving reversing an array, because I had looked at PerlClip’s source code. How that came about and what I learned from it is for another blog post.

This is that blog post.

As a matter of fact, I currently have 9 different implementations of counterstring in TypeScript, including two that are not mine: one is PerlClip’s, translated to TypeScript by me, the other is EvilTester’s implementation. There are some interesting lessons to take, both from comparing the code of the different implementations and from comparing the differences in performance. The performance part of the comparison will have to wait for my next post, though.
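To give a flavour of the array-reversal approach mentioned in the quote above, here is a minimal sketch in TypeScript. It only illustrates the general idea (the function name is mine), and it is not one of the nine implementations compared in the post:

```typescript
// Illustrative sketch only: build the counterstring back to front in an
// array of characters, then reverse it at the end. Each chunk is
// "<position>*", written right to left, so running out of room near the
// start of the string simply drops the leading digits.
function counterstringReversed(length: number): string {
  const reversed: string[] = [];
  let position = length;
  while (position > 0) {
    const chunk = `${position}*`;
    // Push the chunk's characters right to left: the asterisk first,
    // then the digits of its position.
    for (let i = chunk.length - 1; i >= 0 && position > 0; i--) {
      reversed.push(chunk[i]);
      position--;
    }
  }
  return reversed.reverse().join("");
}

console.log(counterstringReversed(12)); // "*3*5*7*9*12*" (12 characters)
```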

Read more…

Using “fake it till you make it” to implement counterstring

Last week I implemented PerlClip’s counterstring in TypeScript. A counterstring is a string that tells you how long it is. For example, a counterstring of length 9 looks like this: *3*5*7*9*. Each number tells you the position of the asterisk that follows it. My main goal with this project is to learn more about front-end development.
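To make that definition concrete, here is a minimal sketch of a generator in TypeScript. It’s an illustration only, not necessarily the implementation from the post: it works backwards from the target length, prepending chunks of “&lt;position&gt;*”:

```typescript
// A minimal counterstring generator: work backwards from the target
// length, prepending "<position>*" chunks, where <position> is the
// 1-based position of the asterisk that closes the chunk.
function counterstring(length: number): string {
  const chunks: string[] = [];
  let position = length;
  while (position > 0) {
    let chunk = `${position}*`;
    if (chunk.length > position) {
      // Near the start of the string there is no room for the full
      // chunk, so keep only its right-most characters.
      chunk = chunk.slice(chunk.length - position);
    }
    chunks.unshift(chunk);
    position -= chunk.length;
  }
  return chunks.join("");
}

console.log(counterstring(9));        // "*3*5*7*9*"
console.log(counterstring(9).length); // 9
```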

Before I could start doing any front-end stuff, however, I needed to write a function that correctly generates counterstrings. Since I approached it in a way I really enjoyed, inspired by Llewellyn Falco’s “Fake it till you make it”, I figured it would make a good first post about this project.

The idea behind “fake it till you make it” is simple. Start with an implementation covering a single case (“fake it”) and then pull it apart little-by-little until it becomes an actual program (“make it”). As Llewellyn explains in the video, the value of this technique is that it’s a lot easier to start from a working example and proceed from there than it is to get complete requirements.

I did approach counterstring from the opposite direction, though, compared to how Llewellyn approaches FizzBuzz in the video. Llewellyn starts with FizzBuzz length 20, a case covering all the logic, and then refactors it using different techniques, such as separation and encapsulation. I started with counterstring length 0, the simplest case, and then worked my way up to larger lengths.
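To sketch what that can look like in code (an illustration of the idea, not my actual steps from the post): the first “fake it” version only has to satisfy length 0, and each next version handles one more case while keeping the earlier checks passing:

```typescript
// Step 1, "fake it": the implementation only covers the simplest case.
function counterstringV1(_length: number): string {
  return "";
}

// Step 2, "make it" little by little: handle one more case with the
// smallest change that keeps the earlier case working.
function counterstringV2(length: number): string {
  if (length === 0) {
    return "";
  }
  return "*";
}

// Each step is pinned down by a small check before moving on.
console.assert(counterstringV1(0) === "");
console.assert(counterstringV2(0) === "");
console.assert(counterstringV2(1) === "*");
```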

Read more…

Your default response should be a safe one

In his training DVD Ukemi from the Ground Up, Ellis Amdur explains how your default response should be a safe one. In the context of the video it’s about what your action should be in response to an Aikido technique like kote gaeshi: Do you jump and do a breakfall? Or do you roll? The breakfall is the safe option. The roll is the comfortable one, except for the times you should have done the breakfall. Then you break your wrist…

Unfortunately, the choice between breakfall and roll is not up to you. Kote gaeshi is a throw executed through a wrist lock1 and it’s up to the person applying the technique what kind of throw it will be. Either they gently apply the wrist lock, guide you to the ground, and you can roll. Or they apply the technique more dynamically and there’s no time to roll. In that case you have to jump and turn over your arm to fall safely on your side/back. That’s what’s called a breakfall.

As you can imagine, there’s not always a lot of time to think and decide between roll and breakfall. And if there’s no time to think, whatever your default response is, that’s what your body will do. That’s why your default response to kote gaeshi should be the breakfall, the response that’s safe in both circumstances. The worst-case scenario is that you take a breakfall you didn’t need to, while the alternative, defaulting to the roll even when you should have done the breakfall, comes with significantly worse consequences.

Read more…

The nine skills of exploratory testing

Exploratory testing is a learned skill, as I claimed in my previous post “Being intentional about exploratory testing”. In that post I mentioned the importance of two skills: noticing what there is to notice and deciding what to do next. It turns out it’s not the first time I’ve mentioned that pair of skills. In a post about how to teach Agile, I quoted John Mason’s “Researching Your Own Practice: The Discipline of Noticing”:

“All professional development could be described as changes in sensitivity to notice and accumulation of alternative actions to initiate.” (p. 147)

That does raise the question whether the skills of exploratory testing can be made a little more specific. After giving it some thought, I came up with seven additional skills, making a total of nine. For some reason they ended up as questions rather than nouns. I like how that makes this post less of a checklist and more of a tool for self-reflection. Each skill could be its own blog post, so I’m going to focus on one key element of each skill.

Read more…

Being intentional about exploratory testing

This is the second post in a (to be) three-part series about my statement “The difference between a test case and a requirement is the moment of discovery.”

In the previous post I distinguished test cases that are translated requirements from ones that aren’t. This is something I learned from James Lyndsay. As he describes in “Why Exploration has a Place in any Strategy”:

Some tests are designed to find risks. They’re made on-the-fly and run once. Some are designed to tell us about retained value. They’re made once, and run forever after. You need both: they tell you different things.

The tests with a focus on value are based on requirements, on things we know we want; they are prescribed (as in: written before). The tests with a focus on risks are exploratory: they are based on our decisions in the moment; we look for surprises and decide how we feel about those surprises.

One thing I’ve noticed through the years is that a lot more exploratory testing is happening than we give credit for. It’s hidden, a required but implicit part of the work. We do it, but we’re not intentional about it.

Today I want to argue that it pays to be more intentional about exploratory testing. Before I get there, however, I want to explain what exploratory testing is, because there are still plenty of misconceptions going around.

Read more…

The Fluxx ensemble exercise

Earlier this week I ran a full-day workshop at the excellent HUSTEF conference on working in an ensemble (aka mob programming/testing or software teaming). As part of the workshop I tried out a new exercise, in which participants were allowed to change the rules of the ensemble. The goal was to experience why the basic rules of ensembling are the way they are and what happens if they are different.

Since the participants really liked the exercise, I figured I’d write about it and name it: the Fluxx ensemble exercise. For those not familiar with Fluxx: it is a card game in which changing the rules is a key part of the game. It’s one of my favourite games.

Before I go into the exercise, though, I’ll first need to explain the basic rules of ensembling.

Read more…

What do you fix when you fix a test?

You ran the tests1 - or a pipeline did it for you - and some of them failed. Time to fix the tests! But what is it exactly that needs fixing?

There are quite a few things that might make a test fail:

  1. an issue with the build
  2. an issue with the pipeline (if that’s where the test runs)
  3. an issue in the environment the code under test is running on
  4. an issue in the environment the test code is running on
  5. a bug in the code under test
  6. a mistake in the test code
  7. a mistake in what the test should test

Arguably, only the last three describe a test that fails. The test did its job detecting a problem. In the first four we didn’t even get that far: the issues prevented the test from doing its job. So in those cases, it’s not the test(s) as such that need fixing.

Read more…

The difference between a test case and a requirement is the moment of discovery

There are several straightforward ways to distinguish a test case from a requirement. A test case tells you how to check some kind of thing about the application, a requirement tells you that the application should do some kind of thing. A test case is written by a tester, a requirement by a business analyst. A test case takes the shape of an action and an evaluation of the result, a requirement takes the form of a sentence like “product ABC shall do XYZ.”1

A less straightforward but more interesting way to distinguish a test case from a requirement is this:

The difference between a test case and a requirement is the moment of discovery.2

In this post I want to explore the meaning of that statement. In the next post I’ll explore how looking at requirements and test cases in this way can help us do better testing. So this post will be a bit more philosophical, the next one more practical.

Read more…

Two short checklists for Scrum

checklist no.1

  • Do you add acceptance criteria and story points to each ticket before planning?
  • Do you have daily team meetings where people provide updates on their progress?
  • After each iteration, do you report to stakeholders what work was done and what will be planned next?

checklist no.2

  • Is the team protected during the sprint from stakeholders trying to interfere?
  • Is a sprint focused on achieving a goal, and is how that goal is achieved left sufficiently open?
  • Does the team address impediments as soon as they are discovered?

The difference between a dead and an alive Agile Manifesto

One of my favorite books on leadership is “Extreme Ownership” by Jocko Willink and Leif Babin. I can imagine some people bouncing off of the book because of the Navy SEAL angle, but to be honest I’m a bit of a sucker for the whole military leadership genre.

The second part of “Extreme Ownership” covers four critical leadership concepts, the “Laws of Combat”. Curiously enough, you can map these to the four values in the Agile Manifesto. These four concepts do come in a specific order, so you have to shuffle the Agile values around a little bit:

  • Cover and Move maps to customer collaboration over contract negotiation.
  • Simple maps to working software over comprehensive documentation.
  • Prioritize and Execute maps to responding to change over following a plan.
  • Decentralized Command maps to individuals and interactions over processes and tools.

To me this mapping is interesting in two ways. It sheds a different light on the four Agile values. And it’s an example of how I think we should be engaging with the Agile Manifesto, in a way that keeps it alive.

Read more…