30 March 2012 by Neil Hudson
We have just hit something that occurs time and time again: testing the interface without looking at what is going on below the surface. It is often in the depths that the really important stuff occurs, and yet this is where we often find no one is looking.
19 February 2011 by Neil Hudson
Despite the trend in software development that regards requirements as old hat, many people still work with requirements and always will. This fact makes the observations that follow relevant, and likely to remain so.
18 January 2011 by Neil Hudson
When I get into the client’s office this morning I have to do a conference call to look at how well the integration of two of their systems (via a third) is going. This follows a call last Thursday, and there is one already scheduled for tomorrow. The original plan was to hand this over to the unit I am setting up at the end of December to be tested. I had that plan changed in early December to one where the construction team would actually integrate it (see article here) before handing it over for test. We also defined a set of basic integration tests we wanted the construction team to demonstrate at the time of handover. Four weeks were allowed for the integration prior to the handover.
28 December 2010 by Neil Hudson
When should you classify a test as Failed? This sounds like such a simple question, and you may think the answer is obvious; however, there are some factors that mean a well thought out approach can have significant benefits for the test manager.
15 December 2010 by Neil Hudson
If you want testing to be effective, and want it to be manageable in the wider sense of the word (understood by others, amenable to peer and expert review, and controllable), then everything has to be focussed. Each constituent part of the effort needs a clear purpose, and this has to extend down to quite a fine-grained level. Macro-level building blocks such as Functional Test, Performance Test and Deployment Test don’t achieve this. What is required is to break the work into a set of well-defined, heterogeneous testing tasks, each one focussing on certain risks.
03 December 2010 by Neil Hudson
I have just encountered an old friend of mine; one that I see most places I go. My friend is that recurring defect - the different date format bug. In its most common and insidious form it is a mix of DD/MM/YYYY and MM/DD/YYYY representations of dates as strings. Date format clashes of any sort cause defects, but this is the worst one because in many cases it appears to work, waiting to create problems in the future or corrupting data that passes through it.
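A minimal Python sketch (not from any particular client system) shows why this bug is so insidious: any date with a day of 12 or less parses successfully under both formats, silently producing two different dates, and only a day greater than 12 forces one interpretation to fail.

```python
from datetime import datetime

def parse_both(s):
    """Try the same date string under both common formats.

    Returns a dict mapping each format to the parsed date,
    or None where the string does not fit that format.
    """
    results = {}
    for fmt in ("%d/%m/%Y", "%m/%d/%Y"):
        try:
            results[fmt] = datetime.strptime(s, fmt).date()
        except ValueError:
            results[fmt] = None
    return results

# Ambiguous: both formats accept it, but disagree on the date.
print(parse_both("03/04/2021"))
# A day greater than 12 exposes the clash: only one format accepts it.
print(parse_both("13/04/2021"))
```

This is why the defect "appears to work": test data drawn from the first twelve days of the month passes cleanly through both interpretations, and the corruption only surfaces later in production.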
26 November 2010 by Neil Hudson
We have recently started working with a new client on changes to their testing and delivery practice. The aim is to increase the throughput of development while at the same time accelerating delivery and maintaining quality. This has been running for a few weeks now, and enough time has elapsed for us to start hearing stories about previous projects and what went well and what was problematic.
20 November 2010 by Neil Hudson
I heard a comment recently; it went something along the lines of “if they can’t deliver testing to us then they won’t be able to do anything”. Was I surprised to hear this coming from a senior test manager? Well actually no; I wasn’t surprised. It illustrates that even people with many years in senior testing posts can fail to understand what first-class testing is, how different it is from run-of-the-mill work, and how complex and difficult it is to do first-class testing well and at speed. This was not the first time I have come across this view and I doubt it will be the last.
16 November 2010 by Neil Hudson
Here are two interesting propositions. Number one: test managers should focus on getting as quickly as possible to a state where it is obvious that further testing offers little benefit compared with finding out how the system survives in the wild. Number two: it is easier to make the decision to release a system when delaying the release to permit further testing is unlikely to put you in any better position than you are already in.
13 November 2010 by Neil Hudson
After doing a fair bit of performance testing and troubleshooting, we have seen the effects of performance only receiving attention at the end of the project. We encounter teams making herculean efforts to wring acceptable performance out of systems; we encounter systems that do not reach, and never will reach, acceptable levels; we encounter cancellations.