
Performance by request.

by Neil Hudson


After a fair bit of performance testing and troubleshooting, we have seen the effects of performance only receiving attention at the end of a project. We encounter teams making herculean efforts to wring acceptable performance out of systems; we encounter systems that do not reach, and never will reach, acceptable levels; we encounter cancellations.

Few organisations spend much time and effort worrying about performance at the start of a project. Many spend an awful lot of time and money at the end dealing with the consequences. This pattern is not limited to naive first offenders; there are major organisations, ones that most people would expect to have sophisticated performance risk controls, that fall foul of this problem. It would be safe to say that, in general, the software industry doesn’t do performance engineering; it does performance make do and mend.

What makes this madness is that simple techniques can make things a lot better; there is no need to turn to rocket science. These techniques may not deliver the performance certainty required by an air traffic control system, but they can certainly reduce the risk for your average web application. Some thought and a little effort can produce a major reduction in performance risk. The first trick is to ask for what you want.

Ask and you might receive.

This may sound obvious, but if it is so obvious then why is it not done? The people who need the system have to ask the people supplying the system to deliver a certain level of performance. Once that has been done, you can look them in the eye and ask them to provide evidence that convinces you it will be achieved. This is founded on the adage “if you don’t ask, you don’t get”: if you never ask for something, what is the chance you will get it?

For this to work well, two things are necessary. Firstly, the people doing the asking have to understand what they need and have to express it in an organised way. Secondly, they have to be sensible and avoid asking for the impossible; ask for the impossible and you won’t get it, you won’t be taken seriously, and you may end up with something worse than you could have had.

How to describe what you need.

Has anyone seen a performance requirement of the form “all response times must be less than 3 seconds”? How much difference do you think that makes to the way developers approach the implementation of individual features? Not a jot; it has no real influence on the end game whatsoever. How can this be done better? Three techniques provide the right framework.

  1. Recognise that the amount of time a user can wait for a response, without it becoming a usability or throughput issue, depends on what the user is doing and what they are waiting for. Reflect these different needs as separate performance requirements with a different and appropriate target for each. Differentiation between types of response is essential.

  2. Accept that, generally, real systems go slower when busy. With no one on, a system may be lightning fast; on a normal day it may be quick; during the busiest period of the year it will almost inevitably be slower. Think about the different loads the system will run under and set distinct targets for each one. The limits may be close together, or you may relax them at your busiest time; whichever it is, be explicit about it. This discipline avoids a covert interpretation that your targets apply only under ‘normal’ load, with an unstated assumption that slower responses are acceptable in more extreme periods. It can also point the architectural design in the right direction: trade-offs become possible, particularly when some responses must remain constant under all conditions while others can slow down under heavier loads.

  3. Don’t use a single limit; it has strange side effects. You might pick the number that reflects the speed you want in the vast majority of cases but specify it as a maximum; its origins are likely to make it too challenging to achieve in every case, and if that is glaringly obvious the requirement is discredited. Alternatively you might pick the worst acceptable duration; now you have not constrained the middle ground, and if every response comes in just under that limit the requirement is met yet the system feels slow. Targets should be percentile distributions, not single upper limits nor single percentile limits.

In summary: identify things, or classes of things, with different response requirements; set distinct targets for different load periods; and use percentile distribution profiles to define each target. The sketch below shows what such a requirement might look like.
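To make that concrete, here is a minimal sketch of such a requirement expressed as a data structure. It is illustrative only: the response classes, load conditions and all the numbers are assumptions, not targets taken from any real system.

```python
# A minimal sketch of a structured performance requirement.
# All response classes, load conditions and numbers are illustrative
# assumptions, not targets from any real system.

# Each target is a percentile profile: "p50" means 50% of responses
# must complete within the stated number of seconds, and so on.
REQUIREMENTS = {
    # Interactive navigation: users are mid-task and waiting.
    "page_navigation": {
        "normal_load": {"p50": 1.0, "p90": 2.0, "p99": 4.0},
        "peak_load":   {"p50": 2.0, "p90": 4.0, "p99": 8.0},  # relaxed at peak
    },
    # Search: users expect some delay, so limits are looser.
    "search": {
        "normal_load": {"p50": 2.0, "p90": 5.0, "p99": 10.0},
        "peak_load":   {"p50": 3.0, "p90": 8.0, "p99": 15.0},
    },
    # Payment confirmation: must stay constant under all conditions.
    "payment_confirmation": {
        "normal_load": {"p50": 2.0, "p90": 3.0, "p99": 5.0},
        "peak_load":   {"p50": 2.0, "p90": 3.0, "p99": 5.0},  # no relaxation
    },
}
```

Note how payment confirmation keeps the same profile at peak while navigation is allowed to slow down: exactly the kind of trade-off point 2 describes.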

Remaining realistic.

The second trick is to ask for things that you stand a chance of getting. Base your requirements on what the technology you are mandating, or are willing to pay for, is able to deliver. Web technology has its strengths, but it will not deliver 95% of interface updates for zone selection in under 0.5 seconds under any circumstances. The targets you set have to be achievable or they will be ignored.

Reflect on what is plausible given the technology and the environment it is used in. What does this mean if you have an activity that must complete in a time that is unrealistic? It means you have to step back and reassess the concept: redesign the interaction and the task structure to reduce the time criticality, or ask whether you have chosen the right technology. A back-of-envelope latency budget, like the sketch below, is often enough to show when a target is implausible.
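For example, here is a rough latency budget for a single web interaction. Every figure is an assumed, illustrative value; substitute measurements from your own environment before drawing conclusions.

```python
# Back-of-envelope latency budget for one web interaction.
# Every figure below is an assumed, illustrative value.
budget_s = {
    "dns_and_tcp_setup":  0.10,  # connection establishment
    "request_round_trip": 0.10,  # one network round trip
    "server_processing":  0.15,  # application and database work
    "response_transfer":  0.10,  # payload over the wire
    "browser_render":     0.10,  # parse, layout, paint
}

total = sum(budget_s.values())
print(f"Optimistic total: {total:.2f}s")  # 0.55s, already past a 0.5s target
```

If even the optimistic sum exceeds the target before any contingency is added, the target is implausible and the interaction or the technology needs rethinking.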

Targets have to be achievable; unachievable ones will either be ignored or will consume a great deal of resources and attention and then fail anyway. Where the tasking and interaction design mandates targets that cannot be met, you need to redesign the tasks or reassess the technology options.

Concluding

One of the biggest mistakes you can make is to fail to put enough thought and care into specifying performance requirements. If you don’t ask, or if your request is nonsense, then you risk getting something far removed from what you need. When you do decide to ask properly, you have to really understand your need, define it in the right structure and ensure that what you ask for is possible. Once this framework is in place, developers have something to work to and you have a firm basis for performance assurance activities, such as checking measured response times against each percentile profile.
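As a final sketch, here is one way such an assurance check might look: comparing measured response times against a percentile profile of the kind defined earlier. The function and the sample figures are hypothetical.

```python
import math

def meets_profile(samples_s, profile):
    """Return True if the samples satisfy every percentile limit.

    Uses the nearest-rank method: the pth percentile is the value at
    1-based rank ceil(p/100 * n) in the sorted samples.
    """
    ordered = sorted(samples_s)
    for name, limit_s in profile.items():
        pct = float(name.lstrip("p"))  # "p90" -> 90.0
        rank = max(1, math.ceil(pct / 100.0 * len(ordered)))
        if ordered[rank - 1] > limit_s:
            return False
    return True

# Hypothetical measurements from a load test run, in seconds.
measured = [0.8, 0.95, 0.9, 1.6, 2.0, 1.0, 3.9, 1.3, 0.7, 1.9]
print(meets_profile(measured, {"p50": 1.0, "p90": 2.0, "p99": 4.0}))  # True
```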