Test Architecture applies analysis to the “what”, “when” and “how” of testing. Starting with the need for “Test Inception” activities, it aims to bring a more disciplined approach to shaping what testing does. More can be found in the Concept of Test Inception section.
On the surface, writing tests is simple: define what will happen to the system, express the expected system response and define the cumulative overall outcome of the steps. This is the commonly recognised “good practice” pattern, action plus expected result, for writing tests, and it has been known for years. Then again, “for years” is the problem: it has not evolved in years despite its limitations.
Its weakness lies in merging the definition of the test with the details of how the test will be executed. These are two distinct things best kept separate: the Test Definition describes the sequence of events and the outcomes that matter, while the execution detail describes how those events are brought about on a particular system.
If the distinction is still not clear, consider this: you should be able to use a Test Definition to sort through random real-world traces of a system in operation and locate instances of the test occurring there. This is done by filtering the real-world activity, using the definition of the event sequence, to locate cases where the test has been “run”, and then checking what actually occurred to see whether it “passed”.
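To make the idea concrete, here is a minimal sketch of a Test Definition used as a filter over a recorded trace. Everything here (the names TestDefinition, Event and find_runs, the fixed-size matching window) is an illustrative assumption, not part of any particular tool:

```python
# Illustrative sketch only: a trace is modelled as an ordered list of event
# dicts, and a Test Definition is a pair of predicates over event sequences.
from dataclasses import dataclass
from typing import Callable, Iterator

Event = dict  # e.g. {"type": "rate_change", "asset": "BOND-1"}

@dataclass
class TestDefinition:
    """Describes a test independently of how it will be executed."""
    name: str
    # Recognises an event sequence that means "this test has been run".
    matches_run: Callable[[list[Event]], bool]
    # Checks what actually occurred in that sequence to decide whether it "passed".
    passed: Callable[[list[Event]], bool]

def find_runs(definition: TestDefinition,
              trace: list[Event],
              window: int) -> Iterator[tuple[int, bool]]:
    """Slide over a real-world trace, locate occurrences of the test and check each one."""
    for start in range(len(trace) - window + 1):
        segment = trace[start:start + window]
        if definition.matches_run(segment):
            yield start, definition.passed(segment)
```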
Too often tests “fail” when there is nothing actually wrong with what is being tested. Test teams need to adopt the concept of a test Faulting rather than Failing. A test Faults when it cannot be completed due to some cause that is not related to the reason the test was included in the first place. This presupposes that the purpose of that particular test is known, but issues in that area are a separate topic. Assuming the purpose is known, then, when applying this principle, the test only Fails if the issue seen is of a kind that purpose is concerned with.
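A small sketch may help. The Outcome enum and the purpose_covers predicate below are our own illustrative assumptions; the point is simply that only an issue the test's purpose is concerned with can turn a result into a Fail:

```python
# Illustrative sketch of the Passed / Failed / Faulted distinction.
from enum import Enum, auto
from typing import Callable, Optional

class Outcome(Enum):
    PASSED = auto()
    FAILED = auto()   # the kind of problem this test exists to detect was seen
    FAULTED = auto()  # the test could not be concluded for an unrelated reason

def classify_outcome(issue_seen: Optional[str],
                     purpose_covers: Callable[[str], bool]) -> Outcome:
    """Only an issue the test's purpose is interested in can make it Fail."""
    if issue_seen is None:
        return Outcome.PASSED
    return Outcome.FAILED if purpose_covers(issue_seen) else Outcome.FAULTED
```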
An example? Suppose we have tests that are there to check the revaluation of some asset when a financial condition changes. When an attempt is made to run the test, the report of values needed to check the results cannot be triggered due to a user access permission problem. Has the test failed? Hopefully, having read this far, it is clear that it has not. In our terms it has Faulted; it has not been concluded. Now, being honest, what would you expect in practice? How many times out of ten do you think you would be told “the test has failed”? Too many, we suspect.
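Reusing the classify_outcome sketch above, the revaluation example would be recorded like this (the issue labels are, again, purely illustrative):

```python
# The test exists to check asset revaluation, not user access permissions.
revaluation_purpose = lambda issue: issue in {
    "wrong_revalued_amount",
    "revaluation_not_triggered",
}

outcome = classify_outcome(
    issue_seen="report_blocked_by_permissions",
    purpose_covers=revaluation_purpose,
)
assert outcome is Outcome.FAULTED  # Faulted, not Failed: the test was never concluded
```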
So what if we see a real system problem but, by our rules, the test is Faulted and should not be Failed? Again, our position would be that tests designed for purpose X are not failing. Nothing indicates that the area these tests address has a problem, so saying they “fail” would give a misleading indication that a problem had been spotted in that area. In this case the defect must be raised independently; it is not linked to the test being executed at the time. A specific reproduction test case could be created and linked, but no failure should be logged against the original test.
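As a sketch of that reporting rule, again using invented record types rather than any real defect tracker, a problem observed during a Faulted run produces an independent defect and leaves the original test untouched:

```python
# Illustrative only: Defect and TestResult are invented types, not a tracker API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Defect:
    summary: str
    reproduction_test: Optional[str] = None  # a new, specific test case may be linked later

@dataclass
class TestResult:
    test_name: str
    outcome: Outcome  # from the earlier sketch

def report(result: TestResult, problem_seen: Optional[str]) -> Optional[Defect]:
    """Raise any real problem independently; never log it as a failure of a Faulted test."""
    if result.outcome is Outcome.FAULTED and problem_seen:
        return Defect(summary=problem_seen)  # deliberately not linked to result.test_name
    return None
```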