Maintaining Focus
If you want testing to be effective, and manageable in the wider sense of the word (understood by others, amenable to peer and expert review, and controllable), then everything has to be focussed. Each constituent part of the effort needs a clear purpose, and this has to extend down to quite a fine-grained level. Macro-level building blocks such as Functional Test, Performance Test and Deployment Test don't do it. What is required is to break the work into a set of well-defined, heterogeneous testing tasks, each one focussing on certain risks.
This approach originated when a guy called Stuart Gent and I were working through the challenge of shaping and scoping the testing for a major telecommunications system programme. We had a team of twelve analysts simply trying to understand what needed to be tested. We had already divided the work into twelve workstreams but recognised we needed something more. We also had the experience of not using an adequate analysis approach on preceding releases of the system. These were far smaller and less complex than this one, but we had learnt the dangers of inadequate independent analysis: of tending to follow the development-centric requirements, specifications and designs, of testing what was highlighted by these, and of missing less obvious but vitally important aspects.
Out of this challenge the concept of Focus Area based test management emerged. The name isn't ideal but it serves its purpose. The fundamental approach is that test activity should be divided up into a number of packages, each being a Focus Area. Each has a tight, well-defined remit. There can be quite a few Focus Areas on large projects; we are not talking single digits. Inventories exceeding a hundred, possibly approaching two hundred, have been known.
A key thing is that a Focus Area is coherent and people can understand what it aims to cover and what it does not cover. This enables far clearer assessment of whether a group of tests is adequate; because the focus is clear, it is a tractable intellectual challenge to judge whether the tests do the job; divide and conquer. Looking from the other end of the telescope, how well are the overall risks of the system covered? If you have one thousand test cases with no way of telling what they really do, other than reading them, then you haven't got a chance of finding the gaps. If you have forty-three well-defined Focus Areas around which the tests are structured then you are in much better shape.
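To make that concrete, here is a minimal sketch (in Python, with invented Focus Area names and test identifiers) of why keying every test to a Focus Area turns gap-finding into a mechanical check rather than a reading exercise:

```python
# A minimal sketch: the Focus Area names and test identifiers are hypothetical.
# Once every test declares the Focus Area it serves, finding uncovered areas
# is a simple set difference rather than a trawl through a thousand scripts.

focus_areas = {
    "FA-012 Customer notification content",
    "FA-027 Order amendment handling",
    "FA-031 Billing cut-over",
}

# Each test case is tagged with the single Focus Area it was written against.
test_inventory = {
    "TC-0456": "FA-012 Customer notification content",
    "TC-0457": "FA-012 Customer notification content",
    "TC-0912": "FA-027 Order amendment handling",
}

covered = set(test_inventory.values())
gaps = focus_areas - covered
print("Focus Areas with no tests:", sorted(gaps))
# -> Focus Areas with no tests: ['FA-031 Billing cut-over']
```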
What makes up a Focus Area definition? This is something that flexes and depends on how formal you want to be, but there are some basic things that should always be present: (a) The aspects of the system's behaviour to be covered. (b) Distinct from this, the conditions and scenarios that behaviour is being exercised under. (c) The sorts of malfunctions in this behaviour that we are trying to make sure aren't there, or at least that we need to catch before they get into the wild. (d) Any particular threats to be exercised. (e) Whether we are after hard faults or intermittent ones that don't always manifest themselves even when the things we do to provoke them appear the same.
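For those who like to pin such a definition down as data, the sketch below shows one possible shape for it. This is illustrative only; the record and its field names are my own invention and simply mirror points (a) to (e):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FocusArea:
    """One testing package with a tight, well-defined remit.

    A sketch only; the fields mirror points (a) to (e) above and are not
    part of any prescribed format.
    """
    name: str
    behaviour: str                                          # (a) aspects of behaviour covered
    conditions: List[str] = field(default_factory=list)     # (b) conditions and scenarios exercised
    failure_modes: List[str] = field(default_factory=list)  # (c) malfunctions we are trying to catch
    threats: List[str] = field(default_factory=list)        # (d) particular threats to exercise
    hard_faults_only: bool = True                           # (e) consistent failures only, or also intermittent ones
    out_of_scope: List[str] = field(default_factory=list)   # explicit exclusions, covered elsewhere
```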
Look at how this works. If you don't apply a Focus Area approach and simply ask a team to create tests for some system, then what is it that you are actually doing? Well, putting this situation into our basic Focus Area form, you are saying:
“(a) Test all aspects of the system’s behaviour. (b) Do this under arbitrary conditions and usage scenarios. (c) Whilst you are at it look for anything that could possibly go wrong. (d) We aren’t telling you what particular things have a high probability of breaking it. (e) We are not highlighting whether things that may manifest themselves as reliability issues need to be caught.”
That is a lot of ground to cover, both in area and in types of terrain. Thinking will be difficult as there are lots of different concerns all mixed in together. Our experience is that you will tend to get homogeneous testing, using a small number of patterns, that focuses on primary behaviour. Much of the terrain will not get tackled, particularly the stuff that is harder to traverse. Also, as discussed above, it is very difficult to review a set of tests covering such wide concerns, and when you do you will probably find gaps all over the place.
Alternatively, experienced people can define a number of Focus Areas to shape the work. An example high-level brief for a Focus Area might be:
“(a) Test the generation of ‘keep the customer informed’ messages sent to the customer during order handling. (b) Test this for straightforward orders and for orders that the customer amends or cancels (don’t cover internal order fulfilment situations as they are covered elsewhere). (c) Testing should check for the occurrence of the message and the accuracy of the dynamic content of the message. Testing should check for spurious messages. Static content and presentation need not be checked. The latency of the message issue mechanism is outside the scope of this package. (d) Particular concerns are orders for multiple products and orders where the customer has amended contact rules after placing the order. The impact of load on operation is outside the scope of this package. (e) It is accepted that this package should provide reliable detection of consistent failures and will not be implemented to detect issues that manifest themselves as reliability failures.”
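To show how such a brief maps onto the record shape sketched earlier, here is the same Focus Area written out as plain data. The identifier and wording are illustrative, and the keys mirror the hypothetical FocusArea fields above:

```python
# The order-handling notification brief above as structured data.
# A sketch only: the identifier and wording are illustrative, and the keys
# mirror the hypothetical FocusArea record sketched earlier.
customer_messages_fa = {
    "name": "FA-012 'Keep the customer informed' messages during order handling",
    "behaviour": "Generation of customer-facing progress messages while an order is handled",
    "conditions": [
        "Straightforward orders",
        "Orders the customer amends",
        "Orders the customer cancels",
    ],
    "failure_modes": [
        "Expected message not issued",
        "Inaccurate dynamic content",
        "Spurious messages issued",
    ],
    "threats": [
        "Orders for multiple products",
        "Contact rules amended after the order was placed",
    ],
    "hard_faults_only": True,  # reliable detection of consistent failures only
    "out_of_scope": [
        "Internal order fulfilment situations (covered elsewhere)",
        "Static content and presentation",
        "Latency of the message issue mechanism",
        "Impact of load on operation",
    ],
}
```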
A definition like this helps to focus the minds of the test designers; it should help to shape the pattern of testing so as to most effectively cover the ground. It should ensure there are fewer gaps around its target, and it should make reviewing more effective. The overall set of well-thought-out Focus Areas allows the Test Architect to shape the overall coverage delivered by the testing exercise.
Personally I would never consider even reviewing a set of tests without first having my Focus Areas to hand.