Flatland: A Romance of Many Dimensions is a novella set in a world whose citizens are aware of only two dimensions; the third is a secret. After many years of observing the way that organisations approach software testing, I have an ever strengthening belief that testing is hindered by a failure to recognise the dimensions along which layered approaches should be used. Testing is treated as a discipline of anonymous, uniform, interchangeable tests, and managers think in only two dimensions: effort and schedule. These Flatland-style limitations lead to testing that is both ineffective and inefficient.
So, after that philosophical introduction, what am I really getting at? There are a number of things about the way testing is generally approached, resourced and executed that lack a layered approach (layering denoting a dimension) and that suffer as a result. In this post I will describe the main ones that we repeatedly find in the organisations we work with. Later I hope to make time to explore each in more detail. The four recurring themes are:
People. There are testers and, well, there are testers; that is it. Compare this with enterprise-level development organisations, where we see architects, lead end-to-end designers, platform architects, platform designers, lead developers and developers. This is not necessarily anything to do with line or task management structures; it is a matter of people with different levels of skill and experience being matched to the different challenges faced in delivering the work. Compare testing again, where organisations generally think in terms of a flat, interchangeable population of testers. A source of problems or not; what do you think?
Single-step test set creation. At one point there is nothing other than a need to have some tests, usually very quickly; then there are several hundred test cases, often described as a sequence of activities to be executed. Any idea how we got from A to B? Any idea whether B is anywhere near the right place, never mind whether it is optimal? Any chance of figuring it out retrospectively? No; not a chance. It's like starting off with a high-level wish for a system, coding like mad for two weeks and expecting to get something of value (actually, come to think of it, isn't there something called Agile…). Seriously, an effective test set is a complex, optimised construct, and complex constructs generally do not become coherent and optimised without a layered process of decomposition and design. In most places test set design lacks any layered, systematic approach and has no transparency; it depends on the ability and the on-the-day performance of the individual tester. And once it is done, it is done; you cannot review and inspect quality into something that was never in the right place to start with.
Tiers of testing. Many places and projects have separate testing activities; for example system testing, end-to-end testing, customer experience testing, business testing and acceptance testing. How often is the theoretical distinction clear, and how often does the reality match the theory? Take a look and in many cases you will see that the tests are all similar in style and coverage. There is a tendency to converge on testing that the system does what it says it does, and to do so in the areas and ways that are easy to test. This can lead to a drive to merge the testing into one homogeneous mass to save time and cost; given that the tests have already become indistinguishable, it is a drive that is hard to resist. Distinct, tiered testing has high value, but the lack of clear recognition of what makes the tiers different is the start of the road to failure.
The focus of tests. When you see a test, can you tell what sort of errors it is trying to find? Is it designed to find reliability problems, to ensure user anomalies are handled, to ensure a user always knows what is going on, or to check that a sale is reflected correctly in the accounting system? A different focus requires a different type of test. Yet generally there are just tests and more tests: no concept of a specific focus for a particular group of tests, and little concept of different types of test serving different purposes. Testers lack clear guidance on what the tests they are designing need to do, and so produce generic tests that deliver generic test results.
These four themes demonstrate a common lack of sophistication in the way that testing is approached. A view of testing as a set of uniform activities to be exercised by standardised people in a single-step process is the downfall of many testing efforts. It is a Flatland approach, and testing practice needs to break out and spread along these other dimensions for testing to become more effective and valued. Hopefully I will be able to provide some ideas on how to escape from Flatland at a later date.