Programmes suffer poor testing outcomes because too little care is given to tailoring “what is done”, “when it is done”, “how it is approached” and “how much is done”. Generally, this is left to chance, influenced by historical precedent (what people have done previously) and, possibly, by the activities identified in some delivery model.
A plan emerges, generally owned by a test manager, that lists activities. However, these tend to be “big buckets”, bounded more by schedule or lines of responsibility than by purpose and content. The actual content of each activity tends to be very loosely defined.
Even if what is intended is spot on, the intent can be distorted by communication noise and differing interpretations, or quietly ignored in the heat of battle. There is rarely a recorded explanation of how this intent was arrived at, or of which external factors must remain constant for it to remain valid.
Does this matter? Well, contrary to many people’s perception, testing anything but the smallest change to the simplest system is complex. Take a look at The Testing Activity Landscape to get a feel for the variety of activities that may need to be considered and tailored.
How much of each of these types of activity should be done, when should it be done and how should it be approached? These are big questions. The answers are not fixed. They need to be addressed by a disciplined, and recorded, risk-driven analysis and test activity synthesis practice. This is Test Inception, a discipline at the heart of Test Architecture work.
As new work joins or comes to the front of the queue, and as the nature of in-flight work changes, the person or team owning the Test Architecture must respond.
They have to understand the nature of the changes being proposed to the solution estate, or to the way this estate is used. They need to assess how it is being developed and the “track record” of components of the system in equivalent situations. They need to identify the points at which changes will have the potential to affect the “real world”. This knowledge enables a risk analysis to identify where testing must focus to assure quality.
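To make this concrete, the factors above (nature of the change, track record in equivalent situations, real-world impact) can be combined into a simple risk ranking that indicates where testing should focus first. The scoring formula, scales and component names below are illustrative assumptions for the sketch, not a prescribed Test Inception method.

```python
# Hypothetical sketch: ranking components by risk to focus testing effort.
# The 1-5 scales, the averaging formula and the component names are
# assumptions made for illustration only.

def risk_score(change_novelty: int, track_record: int, real_world_impact: int) -> float:
    """Combine three risk factors into a single score.

    change_novelty    : 1-5, how new or unusual the proposed change is
    track_record      : 1-5, 5 = component has failed often in
                        equivalent situations
    real_world_impact : 1-5, severity if the change misbehaves in use
    """
    likelihood = (change_novelty + track_record) / 2
    return likelihood * real_world_impact

components = {
    "payment-gateway": risk_score(4, 3, 5),        # 3.5 * 5 = 17.5
    "report-formatter": risk_score(2, 1, 2),       # 1.5 * 2 = 3.0
    "integration-service": risk_score(5, 4, 4),    # 4.5 * 4 = 18.0
}

# Highest-risk components first: these are where testing must focus.
focus_order = sorted(components, key=components.get, reverse=True)
print(focus_order)  # ['integration-service', 'payment-gateway', 'report-formatter']
```

In practice the scoring model would be richer, but recording even a simple model like this makes the reasoning behind the test focus visible and revisitable, which is the point of the discipline.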
On top of this, there will be demands to surface information, via testing, to maintain the project team’s situational awareness. This may mean testing lower-risk things to confirm they have actually been done. There will also be specific demands for the evidence and information needed to secure end user and stakeholder buy-in to bringing the system into use.
These additional demands all need to be blended with the quality-focused test activities to create the overall set of things to be done. This needs to be recorded, along with the reasoning behind it and evidence that the proposed approach is “fit for purpose”. This information is brought together in a Test Inception Document. Test planning activities need to be driven from the inception document. When this is not done, matters are being left far too much to chance.
Example Test Inception Document (Integration Service Upgrade)