Performance Test Run Report Template

You have gathered requirements from clients, identified the critical business scenarios, analyzed the workload distribution, established performance goals and requirements, selected a load-generation tool, set up the test environment, designed the scripts, executed the performance run, collected the raw data and analyzed it. What next? Send the information to stakeholders? No! Wait! It first needs to be formatted, customized for the different stakeholders and reported accordingly.

This article suggests one way to report performance run results to a client, especially when the performance testing activity has been outsourced to your company.  It walks through a performance report template and, more importantly, the reasons for including each item in the report.
The template covers information collected from the client side only; it does not discuss server-side data such as processor utilization.
Broadly, the performance report is divided into three major categories –
  1. General Context – It sets the context for the performance run
  2. Performance Testing Run Input – The information that has been used / considered for the performance run
  3. Performance Testing Run Output – The result collected, represented and analyzed after the run


Revision History
  • It should include the names, email IDs and contact numbers of the people responsible for authoring and modifying the report, so that they can be contacted whenever clarification is required.
  • It should include the date of each modification and the sections that were modified, making it easy for reviewers or stakeholders to trace changes.
  • It should also include the reasons for each revision, so that stakeholders get the proper context while reviewing.
Content Index
  • It should list all sections with their hierarchy maintained, so that a reviewer can jump directly to a particular section of interest if required.
  • In an MS Word document this is called a 'Table of Contents' and can be added from 'Insert → Reference → Index and Tables'.  It allows the page numbers to be updated automatically with a single click.
1.  Introduction
  • It should briefly provide the context of the performance testing run.  Why is the run required? What are the expectations from this run?  How will the results of the performance test run be used?
  • The context is required so that expectations about the performance testing run's results are set in advance.
2.  Definitions and Acronyms
  • It should provide definitions of key terms whose meanings are ambiguous within the community (e.g. Load Testing).  The definitions should be given in the context of the performance testing run.
  • It should expand and describe all non-common acronyms (e.g. SUT, PT) used in the performance test report, to help stakeholders review the report easily.
3.  Document References
  • It should list all documents that are 'nice to refer to' (e.g. Test Plan) before reviewing the performance test report.  The referenced documents should not be mandatory reading for reviewing the report.
  • It should not list documents or materials (e.g. raw data, calculations) that were used while preparing the performance test report; those belong in the 'References' section at the end.
4.  Description of the Application
  • It should briefly describe the purpose of the application from the end users' perspective.  This information is usually carried forward from earlier project artifacts – proposal, test plan, test strategy etc. – and it makes sense to repeat it in the performance test report rather than forcing readers to refer to other documents.
4.1. Technologies
  • If the system / physical / logical architecture diagram of the application is provided here, that is more than enough for the performance testing project context.  The architecture diagram helps while setting up the environment and later during performance analysis.

4.2. Functionalities
  • List all functionalities / scenarios here with explicit actions / steps, exactly as they were used for designing the scripts.  It shows the key scenarios exercised in the performance run.

5.  Performance Test Goals and Requirements
  • It should state why the performance run is being conducted.  Is it to create a baseline? To report against some benchmark? To identify bottlenecks?
  • It should list performance goals and requirements in terms of response time, if provided by the client.
  • It should list performance goals and requirements in terms of throughput, if provided by the client.
  • It should list performance goals and requirements in terms of the error percentage associated with each scenario, if provided by the client (see the sketch after this list).
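
Where such numbers are available, it can also help to capture them in a small machine-readable form so that the pass / fail checks in later sections can be automated.  A minimal Python sketch, with purely hypothetical scenario names and thresholds:

    # Hypothetical goals per scenario step; values are illustrative only.
    GOALS = {
        "Login":    {"p90_response_ms": 2000, "max_error_pct": 1.0},
        "Search":   {"p90_response_ms": 3000, "max_error_pct": 2.0},
        "Checkout": {"p90_response_ms": 5000, "max_error_pct": 0.5},
    }

    def verdict(step, p90_ms, error_pct):
        """Return PASS / FAIL for a step against its stated goals."""
        goal = GOALS[step]
        ok = p90_ms <= goal["p90_response_ms"] and error_pct <= goal["max_error_pct"]
        return "PASS" if ok else "FAIL"

    print(verdict("Login", 1850, 0.4))  # PASS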

6.  Test Environment
  • It should briefly discuss how the test environment was set up.  Are there any known gaps / limitations in the test environment? Are any assumptions being made about it? What dependencies are associated with it? How many agents are being used to generate the load? Is the test environment a production replica or a scaled-down version of production? This matters so that, while reviewing, stakeholders can either ask further questions or at least be aware of the limitations.
6.1. Test Data Details
  • It should list all information related to test data, whether gathered from the client or derived indirectly from stakeholders' comments, for example:
  • What workload distribution needs to be considered for this run?
  • How many requests per second need to be considered for this run?
  • What initial database size, in terms of disk space used and number of rows, needs to be considered for this run?
  • How many concurrent users need to be considered for this run?
  • How many total users need to be considered for this run? (Missing numbers can often be derived from the others – see the sketch below.)
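
When the client supplies only some of these numbers, the remaining ones can often be derived.  For instance, Little's Law relates concurrent users, throughput and the time a user spends per iteration; a minimal sketch with assumed figures (all numbers below are illustrative):

    # concurrent_users = iteration_arrival_rate * iteration_duration (Little's Law)
    target_rps = 50.0           # requests per second expected by the client
    requests_per_iteration = 5  # requests one user sends per business scenario
    response_time_s = 2.0       # assumed average response time per request
    think_time_s = 8.0          # assumed think time between requests

    iterations_per_sec = target_rps / requests_per_iteration
    iteration_duration_s = requests_per_iteration * (response_time_s + think_time_s)
    concurrent_users = iterations_per_sec * iteration_duration_s

    print(f"Concurrent users needed: {concurrent_users:.0f}")  # 500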

6.2. Machine Configuration Details
  • It should list the configuration of every machine used in the test environment.
  • It adds value to include a system / physical / logical diagram of the test environment so that it can be reviewed against what was expected.
  • The configuration should be given both in terms of hardware (RAM, disk etc.) and software (OS, service pack, application version etc.).
7.  Performance Run Detail
  • What is the type of the performance test? Is it a load test, a spike test etc.?
  • What duration of the run is being considered for reporting?
  • What is the schedule of the performance run?
  • What ramp-up / ramp-down is being used in this run?
  • What sleep time and / or think time is being used in this run?
  • It would be nice to show the expected performance run chart here, in terms of ramp-up / ramp-down and duration (a sketch for generating it follows this list).
  • If the expected requests / sec for each scenario are dynamic and changing, then the corresponding expected graph can also be shown here.
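
The expected run chart can be generated from the planned schedule rather than drawn by hand.  A minimal matplotlib sketch, assuming a simple ramp-up / steady-state / ramp-down profile (durations and user counts are illustrative):

    import matplotlib.pyplot as plt

    # Illustrative profile: 10-minute ramp-up to 200 users, 40 minutes steady
    # state, 10-minute ramp-down.
    ramp_up_s, steady_s, ramp_down_s, peak_users = 600, 2400, 600, 200

    def users_at(t):
        """Expected concurrent v-users at elapsed second t."""
        if t < ramp_up_s:
            return peak_users * t / ramp_up_s
        if t < ramp_up_s + steady_s:
            return peak_users
        end = ramp_up_s + steady_s + ramp_down_s
        return max(0, peak_users * (end - t) / ramp_down_s)

    timeline = range(0, ramp_up_s + steady_s + ramp_down_s + 1, 10)
    plt.plot([t / 60 for t in timeline], [users_at(t) for t in timeline])
    plt.xlabel("Elapsed time (minutes)")
    plt.ylabel("Expected concurrent v-users")
    plt.title("Expected performance run profile")
    plt.savefig("expected_run_profile.png")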

8.  Performance Run Validations
  • If the performance metrics of the application under test are very poor, is that really an application issue, or could it be an issue with the script design? This section targets such questions.  Before starting to analyze the performance run results, it first needs to be validated that the performance run went as expected.
8.1. V-Users Ramping
  • Have we ramped the users up / down as expected? To validate this, graphs of the expected ramping and the actual ramping can be placed side by side.
  • Could the ramping of users be correct while some users simply idle or fail to send any requests to the server? To validate this, plot actual v-user ramping vs. actual total requests / sec.  The total requests / sec should follow the pattern of the increasing concurrent v-users.
  • Could the v-users be sending requests but somehow start sending bad requests (e.g. 0 KB file size) to the server? To address this, plot actual total requests / sec vs. actual request size (sent bytes).  It should follow the pattern of the increasing requests / sec (see the sketch after this list).
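
These checks can be scripted from the raw results instead of being eyeballed.  A minimal sketch, assuming a JMeter-style results CSV named results.csv with timeStamp (epoch ms), allThreads and sentBytes columns; the file name and column names are assumptions and will differ per tool:

    import csv
    from collections import defaultdict

    # Bucket results per 10-second window: active v-users, requests/sec and
    # average sent bytes should all follow the expected ramping pattern.
    BUCKET_S = 10
    buckets = defaultdict(lambda: {"requests": 0, "threads": 0, "sent": 0})

    with open("results.csv", newline="") as f:
        for row in csv.DictReader(f):
            key = int(row["timeStamp"]) // 1000 // BUCKET_S
            b = buckets[key]
            b["requests"] += 1
            b["threads"] = max(b["threads"], int(row["allThreads"]))
            b["sent"] += int(row.get("sentBytes", 0) or 0)

    start = min(buckets)
    for key in sorted(buckets):
        b = buckets[key]
        print(f"t={(key - start) * BUCKET_S:>6}s  vusers={b['threads']:>4}  "
              f"req/s={b['requests'] / BUCKET_S:>6.1f}  "
              f"avg_sent_bytes={b['sent'] / b['requests']:>8.0f}")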

8.2. Workload Distribution
  • Is the workload distribution as expected? To check, find out the percentage of each scenario actually called in the run and verify it against the required numbers (a sketch follows this list).
  • Is the number of requests / sec for each scenario as expected?
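
The actual mix can be computed directly from the results file and compared with the planned distribution.  A minimal sketch, again assuming a JMeter-style results.csv with a label column per scenario step; the expected percentages are illustrative:

    import csv
    from collections import Counter

    EXPECTED_MIX = {"Login": 20, "Search": 50, "Checkout": 30}  # percent of requests

    counts = Counter()
    with open("results.csv", newline="") as f:
        for row in csv.DictReader(f):
            counts[row["label"]] += 1

    total = sum(counts.values())
    for label, expected_pct in EXPECTED_MIX.items():
        actual_pct = 100.0 * counts.get(label, 0) / total if total else 0.0
        print(f"{label:<10} expected {expected_pct:>5.1f}%  actual {actual_pct:>5.1f}%")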

8.3. Errors
  • Are any errors being reported by the performance testing tool? If yes, are these errors related to the tool or to the application? For example, in JMeter you might get heap memory exceptions; in that case it is better to fix those exceptions first before analyzing the performance run results.  If the exceptions cannot be fixed, then it must be ensured that they do not affect the performance run in any way.


8.4. Network Bandwidth
  • Is the network bandwidth capable of supporting the number of requests and responses per second? Is the network choking due to large response sizes? This is important to consider because network latency at the load-generation end can inflate the overall response time of a transaction (a quick check is sketched below).
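
A back-of-the-envelope calculation is usually enough to spot this risk; all figures below are assumptions:

    # Rough bandwidth check at the load-generation end.
    requests_per_sec = 200
    avg_response_kb = 60   # assumed average response size
    avg_request_kb = 2     # assumed average request size
    available_mbps = 100   # assumed link capacity at the load generator

    required_mbps = requests_per_sec * (avg_response_kb + avg_request_kb) * 8 / 1024
    print(f"Required ~{required_mbps:.0f} Mbps of {available_mbps} Mbps available")
    # If the requirement approaches link capacity, response times will include
    # network queuing delay that has nothing to do with the application.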

9.  Client Side Statistics
  • This is where the performance run results should be shown and analyzed.  It comes at the very end of the report, but I guess it flows with the content.  In any case, a reviewer can always jump directly to this section for the performance run results and analysis.
9.1. Throughput Statistics
  • As the HTTP response already contains the bytes sent by the web server, there is no need to collect this information from the web servers; it can be reported directly from the performance testing tool.
  • A graph can be plotted of Elapsed Time vs. Throughput (in KB) for further analysis (see the sketch after this list).
  • All relevant observations related to this metric should be listed in this section.
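
A minimal sketch of such a graph, assuming a JMeter-style results.csv with timeStamp (epoch ms) and bytes columns:

    import csv
    from collections import defaultdict
    import matplotlib.pyplot as plt

    # Aggregate received bytes per 10-second bucket and plot throughput in KB/sec.
    BUCKET_S = 10
    kbytes = defaultdict(float)

    with open("results.csv", newline="") as f:
        for row in csv.DictReader(f):
            kbytes[int(row["timeStamp"]) // 1000 // BUCKET_S] += int(row["bytes"]) / 1024

    start = min(kbytes)
    x = [(b - start) * BUCKET_S / 60 for b in sorted(kbytes)]
    y = [kbytes[b] / BUCKET_S for b in sorted(kbytes)]

    plt.plot(x, y)
    plt.xlabel("Elapsed time (minutes)")
    plt.ylabel("Throughput (KB/sec)")
    plt.title("Elapsed Time vs. Throughput")
    plt.savefig("throughput.png")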

9.2. Response Time Statistics
  • Response time is one of the most important metrics in performance testing, as it is directly associated with end-user perception.  Most performance requirements and goals are set around this metric.

9.2.1 Aggregated Response Time Statistics
  • A table of response times (expected as well as actual) for all scenario steps should be shown.  The actual response time should include Min, Max, Average and 90th percentile values.  The 90th percentile can be compared against the expected response time and pass / fail decided accordingly (a sketch of the aggregation follows this list).
  • All relevant observations related to this metric should be listed in this section.
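
The aggregation itself is straightforward to script.  A minimal sketch, assuming a JMeter-style results.csv with label and elapsed (ms) columns:

    import csv
    from collections import defaultdict
    from statistics import mean, quantiles

    # Min / Max / Average / 90th percentile response time per scenario step.
    samples = defaultdict(list)
    with open("results.csv", newline="") as f:
        for row in csv.DictReader(f):
            samples[row["label"]].append(int(row["elapsed"]))

    for label, values in sorted(samples.items()):
        p90 = quantiles(values, n=10)[-1] if len(values) > 1 else values[0]
        print(f"{label:<12} min={min(values):>6}  max={max(values):>6}  "
              f"avg={mean(values):>8.1f}  p90={p90:>8.1f} ms")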

9.2.2 Dynamic Response Time Statistics
  • A table can only represent the aggregation; for further analysis it is also necessary to see how requests are responded to over time, i.e. against 'Elapsed Time'.  For this, plot Elapsed Time vs. Response Time for the steps of each scenario.  The graphs should be plotted per scenario to avoid cluttering the information (see the sketch after this list).
  • All relevant observations should be listed for each scenario's graphs.
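
A sketch of the per-scenario plot, under the same results.csv assumption (timeStamp in epoch ms, elapsed in ms, label per step):

    import csv
    from collections import defaultdict
    import matplotlib.pyplot as plt

    # One Elapsed Time vs. Response Time chart per scenario step.
    points = defaultdict(list)
    with open("results.csv", newline="") as f:
        for row in csv.DictReader(f):
            points[row["label"]].append((int(row["timeStamp"]), int(row["elapsed"])))

    for label, data in points.items():
        data.sort()
        start = data[0][0]
        x = [(ts - start) / 60000 for ts, _ in data]  # minutes since first sample
        y = [rt for _, rt in data]
        plt.figure()
        plt.plot(x, y, linewidth=0.5)
        plt.xlabel("Elapsed time (minutes)")
        plt.ylabel("Response time (ms)")
        plt.title(f"Response time over time: {label}")
        safe = "".join(c if c.isalnum() else "_" for c in label)
        plt.savefig(f"response_time_{safe}.png")
        plt.close()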

9.3. Checkpoint Related Error Statistics
  • Response time is undoubtedly a key metric from a performance perspective, but what about the case where performance looks very good in terms of latency / response time while most of the requests are resulting in errors – quite possibly the very reason for the good response times?

9.3.1 Aggregated Error Statistics
  • A table of error percentages (expected as well as actual) for all scenario steps should be shown.  The actual error statistics should include total requests, failed requests and the percentage of failed requests.  The failure percentage can be compared against the expected error percentage and pass / fail decided accordingly (a sketch follows this list).
  • All relevant observations related to this metric should be listed in this section.
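
A minimal sketch of the error aggregation, assuming a JMeter-style results.csv with label and success (true / false) columns:

    import csv
    from collections import defaultdict

    # Total, failed and failure-percentage per scenario step.
    totals, failures = defaultdict(int), defaultdict(int)

    with open("results.csv", newline="") as f:
        for row in csv.DictReader(f):
            totals[row["label"]] += 1
            if row["success"].strip().lower() != "true":
                failures[row["label"]] += 1

    for label in sorted(totals):
        pct = 100.0 * failures[label] / totals[label]
        print(f"{label:<12} total={totals[label]:>7}  failed={failures[label]:>6}  "
              f"error%={pct:>6.2f}")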

9.3.2 Dynamic Error Statistics
  • A table can only represent the aggregation; for further analysis it is also necessary to see how errors occur over time, i.e. against 'Elapsed Time'.  For this, plot Elapsed Time vs. Errors / sec for the steps of each scenario.  The graphs should be plotted per scenario to avoid cluttering the information.
  • All relevant observations should be listed for each scenario's graphs.


9.4. HTTP Status Code Related Error Statistics
  • Checkpoint errors give the error picture in terms of functionality, but what about the case where an action completed successfully yet with a few minor details missing – for example, a login request succeeded but one of the images could not be downloaded because it is absent from the web server? In such a case, either the request for the image should not be issued at all or it should succeed.

9.4.1 Aggregated Error Statistics
  • A pie chart should be drawn of all HTTP requests and their corresponding status codes.  It provides at-a-glance information on which failure types dominate and what should be fixed first (a sketch follows this list).
  • All relevant observations related to this metric should be listed in this section.
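
A sketch of the pie chart, assuming a JMeter-style results.csv with a responseCode column:

    import csv
    from collections import Counter
    import matplotlib.pyplot as plt

    # Count every HTTP status code returned during the run.
    codes = Counter()
    with open("results.csv", newline="") as f:
        for row in csv.DictReader(f):
            codes[row["responseCode"]] += 1

    labels = [f"{code} ({count})" for code, count in codes.most_common()]
    plt.pie([count for _, count in codes.most_common()], labels=labels,
            autopct="%1.1f%%")
    plt.title("HTTP status code distribution")
    plt.savefig("status_codes_pie.png")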

9.4.2 Dynamic Error Statistics
  • A pie chart only represents the aggregation; it gives no hint whether the errors were happening consistently or within some particular time slice, nor which types of errors occur during which periods.  For further analysis it is necessary to see how errors occur over time, i.e. against 'Elapsed Time'.  For this, plot Elapsed Time vs. HTTP status codes / sec, with all requests plotted on one chart.
  • All relevant observations related to this metric should be listed in this section.

10.  Observations
  • All observations related to the performance run should be summarized here.
  • It should not simply repeat the observations already written in every section; rather it should analyze them and present them briefly, so that someone who does not want to go through the complete report can still find the relevant observations in this section.

11.  Recommendations
  • After analyzing all the data, what recommendations do you have for the development team or the stakeholders?
12.  Conclusions
  • What are the conclusions of the performance run? Was the performance run conducted successfully? Do further performance runs need to be conducted?
13.  References
  • It should include references (links or locations) to any raw data that was used directly or indirectly for preparing this report.
