It is important that response time is clearly defined, and that the response time requirements (or expectations) are stated in such a way as to ensure that unacceptable performance is flagged during the load and performance testing process.
One simple suggestion is to state an average and a 90th percentile response time for each group of transactions that are time critical. In a set of 100 values sorted from best to worst, the 90th percentile is simply the 90th value in the list; 90% of the values are at or better than it. The specification is as follows:
Time to display order details
    Average time to display order details         | less than 5.0 seconds
    90th percentile time to display order details | less than 7.0 seconds
The above specification, or response time service level agreement, is a reasonably tight specification that is easy to validate against.
For example, suppose a customer 'display order details' transaction was executed 20 times under similar conditions, with response times in seconds, sorted from best to worst, as follows:
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 4, 10, 10, 10, 20

Average = 4.45 seconds; 90th percentile (the 18th of the 20 sorted values) = 10 seconds
The above test would fail when compared against the above stated criteria, as too many transactions were slower than seven seconds, even though the average was less than five seconds.
If the performance requirement were a simple "average must be less than five seconds", then the test would pass, even though every fifth transaction took ten seconds or longer.
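The check above is straightforward to automate. The following sketch (illustrative only; the nearest-rank percentile method and variable names are assumptions, not part of any particular testing tool) reproduces the worked example:

```python
import math

def nearest_rank_percentile(values, p):
    """Return the p-th percentile using the nearest-rank method:
    the value at position ceil(p/100 * n) in the sorted list."""
    ordered = sorted(values)
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[rank - 1]

# The 20 sample response times (seconds) from the example above.
response_times = [2] * 10 + [3] * 5 + [4] + [10] * 3 + [20]

average = sum(response_times) / len(response_times)
p90 = nearest_rank_percentile(response_times, 90)

print(average)  # 4.45
print(p90)      # 10

# Validate against the stated SLA: average < 5.0 s AND 90th percentile < 7.0 s.
sla_met = average < 5.0 and p90 < 7.0
print(sla_met)  # False: the average passes, but the 90th percentile fails
```

Checking both criteria together is what catches this data set: the average alone would pass, but the 10-second 90th percentile exceeds the 7-second limit.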
This simple approach can easily be extended to include the 99th percentile, or other percentiles as required, for even tighter response time service level agreement specifications.