Blog

3 Key Performance Testing Metrics Every Tester Should Know

Dive into the core of performance testing metrics and discover the importance of accurate analysis in ensuring optimal system performance. Through simple explanations, we will guide you toward making informed decisions in your testing endeavors and identifying performance bottlenecks.

Illustrative image - 3 Key Performance Testing Metrics Every Tester Should Know

Unlocking the full potential of performance testing means delving into the performance metrics that matter. It’s not enough to run tests and gather data; the real power lies in accurate analysis, empowering you to make informed decisions and boost your system’s performance.

Embarking on performance testing unveils three crucial metrics: Average, Standard Deviation, and Percentiles. Each offers unique insights, painting a comprehensive picture of system performance.

Through a thoughtful analysis of these metrics, we lay the foundation for enhanced system responsiveness.

Looking for a Performance Testing Partner? Explore our Performance Testing Services! Our global client reviews on Clutch speak for themselves.

Making Sense of The Average, Standard Deviation, and Percentiles in Performance Testing Reports

There are certain performance testing metrics that are essential to understand properly in order to draw the right conclusions from your test results. These performance metrics require some basic understanding of math and statistics, but nothing too complicated.

The issue is that if you don’t understand what each one means or what it represents, you can reach some very wrong conclusions.

In this post, we want to focus on average response time, standard deviation, and percentiles. Without going into a lot of math, we’ll discuss test metrics and their usefulness when analyzing performance results.

Want to learn all about performance testing? Don’t miss our Continuous Performance Testing Comprehensive Guide

The Importance of Analyzing Data as a Graph

The first time we thought about this subject was during a course that Scott Barber gave in 2008 (when we were just starting up Abstracta), on his visit to Uruguay. He showed us a table with values like this:

Performance testing data chart

He asked us which data set we thought had the best performance, which is not quite as easy to discern as when you display the data in a graph:

Performance testing metrics: Data Set A

In Set A, you can tell there was a peak, but then it recovers.

Performance testing metrics: Data Set B

In Set B, it seems that it started out with a very poor response time, and probably 20 seconds into testing, the system collapsed and began to respond to an error page, which then got resolved in a second.

Performance testing metrics: Data Set C

Finally, in Set C, it’s clear that as time passed, the system performance continued to degrade.

Barber’s aim with this exercise was to show that it’s much easier to analyze information when it’s presented in a graph. In addition, in the table, the information is summarized, but in the graphs, you can see all the points. Thus, with more data points, we can gain a clearer picture of what is going on.

Interested in data analysis? We invite you to read this article: Data Observability: What It Is and Why It Matters.

Understanding Key Performance Testing Metrics

Okay, now let’s see what each of these performance testing metrics means, one by one, and why each matters for analysis. Understanding related metrics like server CPU usage or CPU capacity utilized can also provide insight into how efficiently the system is processing requests.

Average Response Time

To calculate the average, simply add up all the values of the samples and then divide that number by the quantity of samples.

Let’s say we do this and our resulting average response time is 3 seconds. The problem is that, at face value, it gives you a false sense that all response times are about three seconds, some a little more and some a little less, but that might not be the case.

Imagine we had three samples, the first two with a response time of one second, the third with a response time of seven:

1 + 1 + 7 = 9

9/3 = 3

This is a very simple example that shows that three very different values could result in an average of three, yet the individual values may not be anywhere close to 3.
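The example above can be sketched in a few lines of Python (a minimal illustration; the sample values are the hypothetical ones from the text):

```python
from statistics import mean

# Three hypothetical response times in seconds: two fast requests, one slow.
samples = [1, 1, 7]

# The average is 3 seconds, even though no single request took ~3 seconds.
print(mean(samples))  # 3
```

The single slow request pulls the average up to a value that describes none of the actual measurements, which is exactly why the average alone can be deceiving.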

Fabian Baptista, co-founder and member of Abstracta’s board, made a funny comment related to this:

“If I were to put one hand in a bucket of water at -100 degrees Fahrenheit and another hand in a bucket of burning lava, on average, my hand temperature would be fine, but I’d lose both of my hands.” 

So, when analyzing average response time, it’s possible to have a result that’s within the acceptable level, but be careful with the conclusions you reach.

That’s why it is not recommended to define service level agreements (SLAs) using averages; instead, have something like “The service must respond in less than 1 second for 99% of cases.” We’ll see more about this later with the percentile metric.

Don’t miss this Quality Sense Podcast episode about why observability is so relevant in software testing, with Federico Toledo and Lisa Crispin.

Standard Deviation

Standard deviation is a measure of dispersion around the average: how much the individual values vary from the mean, or how spread out they are.

If the value of the standard deviation is small, this indicates that all the values of the samples are close to the average, but if it’s large, then they are far apart and have a greater range.

To understand how to interpret this value, let’s look at a couple of examples.

If all the values are equal, the standard deviation is 0. If the values are scattered, the standard deviation grows: for example, for 9 samples with values from 1 to 9 (1, 2, 3, 4, 5, 6, 7, 8, 9), the standard deviation is ~2.6 (you can use this online calculator to verify it).
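Both cases can be checked with Python’s standard library (a small sketch; `pstdev` computes the population standard deviation, which matches the ~2.6 figure above):

```python
from statistics import pstdev

# If all samples are equal, there is no dispersion at all.
print(pstdev([3, 3, 3]))  # 0.0

# Scattered samples from 1 to 9: population standard deviation is ~2.6.
print(round(pstdev(range(1, 10)), 2))  # 2.58
```

Note that tools sometimes report the sample standard deviation instead (dividing by n-1 rather than n), which would give ~2.74 here, so it is worth checking which variant your tool uses.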

Although reporting the standard deviation alongside the average makes the average far more meaningful, percentile values are more useful still.

Percentiles: p90, p95, and p99

Understanding percentiles is crucial for accurate system performance analysis.

Let’s break down what percentiles like the 90th percentile (p90), p95, and p99 mean and how they can be used effectively in performance tests.

What Are Percentiles?

A percentile is a valuable performance testing metric that indicates the value below which a given percentage of the sample falls. This helps in understanding the distribution of response times and other performance metrics.

The 90th Percentile (p90)

The 90th percentile (p90) indicates that 90% of the sample values are below this threshold, while the remaining 10% are above it. This is useful for identifying the majority of user experiences and ensuring that most users have acceptable response times.

The 95th Percentile (p95)

The 95th percentile (p95) shows that 95% of the sample values fall below this threshold, with the remaining 5% above it. This provides a more stringent measure of performance, ensuring that nearly all users have a good experience.

The 99th Percentile (p99)

The 99th percentile (p99) represents the value below which 99% of the sample falls, leaving only 1% above it. This is particularly useful for identifying outliers and ensuring that even the worst-case scenarios are within acceptable limits.
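These three percentiles can be computed by hand with the nearest-rank method, sketched below in Python (the response times are illustrative values, not real measurements; tools like JMeter or Gatling may use interpolation methods that give slightly different results):

```python
import math

def percentile(samples, p):
    # Nearest-rank method: the smallest sample value such that
    # at least p% of all samples are less than or equal to it.
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[max(rank - 1, 0)]

# Illustrative response times in milliseconds.
times = [120, 130, 135, 140, 150, 160, 180, 210, 350, 900]

print(percentile(times, 90))  # 350
print(percentile(times, 95))  # 900
print(percentile(times, 99))  # 900
```

Notice how one outlier (900 ms) dominates p95 and p99 while leaving p90 untouched, which is why looking at several percentiles together gives a fuller picture.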

Why Use Multiple Percentiles?

Analyzing multiple percentile values, such as p90, p95, and p99, provides a more detailed view of system performance. Tools like JMeter and Gatling include these in their reports, allowing teams to calculate percentile scores using different methods. This comprehensive approach helps in identifying performance bottlenecks and understanding how the system behaves under various conditions.

Complementing Percentiles with Other Metrics

To get a complete picture, teams should complement percentiles with other metrics like minimum, maximum, and average values. For example:

  • p100: Represents the maximum value (100% of the data is below this value).
  • p50: Known as the median (50% of the data is below and 50% is above).

Establishing Acceptance Criteria

Teams often use percentiles to establish acceptance criteria. For instance, requiring that 90% of the samples fall below a certain value rules out outliers while still ensuring consistent performance for most users. This is particularly useful when investigating issues related to memory utilization and other critical performance aspects.
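Such an acceptance criterion is straightforward to automate. The sketch below is a hypothetical check (the function name, threshold, and sample values are illustrative assumptions, not part of any tool’s API):

```python
# Hypothetical acceptance check: pass if at least 90% of samples
# are below the agreed threshold.
def meets_criteria(samples, threshold, fraction=0.90):
    within = sum(1 for t in samples if t < threshold)
    return within / len(samples) >= fraction

times_ms = [120, 130, 135, 140, 150, 160, 180, 210, 350, 900]

print(meets_criteria(times_ms, 400))  # True: 9 of 10 samples are below 400 ms
print(meets_criteria(times_ms, 200))  # False: only 6 of 10 are below 200 ms
```

A check like this can run after each load test in a CI pipeline so that a regression in the slowest responses fails the build rather than going unnoticed in an average.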

By focusing on the percentile score, teams can make more informed decisions and optimize their performance tests to achieve better results.

Need help with percentiles? Explore our Performance Testing Services! Our global client reviews on Clutch speak for themselves.

Careful with Performance Testing Metrics

Before you go analyzing your next performance test’s results, make sure to remember these key considerations:

1. Avoid Averages

Never consider the average as “the” value to pay attention to, since it can be deceiving, as it often hides important information.

2. Check Standard Deviation

Consider the standard deviation to know just how useful the average is: the higher the standard deviation, the less meaningful the average becomes.

3. Use Percentile Values

Observe the percentile values and define acceptance criteria based on that, keeping in mind that if you select the 90th percentile, you’re basically saying, “I don’t care if 10% of my users experience bad response times”.

If you are interested in learning about the best continuous performance testing practices for improving your system’s performance, we invite you to read this article.

What other considerations and performance issues do you have when analyzing performance testing metrics? Let us know!

Reaching for open-source software, or a free performance load testing tool? Get to know JMeter .Net DSL, one of the leading open-source performance testing tools, bridging JMeter and .NET. It enhances software quality, efficiency, and reliability.

How We Can Help You

With over 16 years of experience and a global presence, Abstracta is a leading technology solutions company specializing in end-to-end software testing services and AI software development.

We believe that actively bonding ties propels us further and helps us enhance our clients’ software. That’s why we’ve forged robust partnerships with industry leaders like Microsoft, Datadog, Tricentis, and Perforce, empowering us to incorporate cutting-edge technologies.

We craft strategies meticulously tailored to your unique needs and goals, aligning with your core values and business priorities. Our holistic approach enables us to support you across the entire software development life cycle.

Embrace agility and cost-effectiveness. Visit our Performance Testing Services page and contact us to discuss how we can help you improve your system’s performance.

Follow us on Linkedin & X to be part of our community!

Recommended for You

TOP 10 Best Performance Testing Tools

API Load Testing

Cost vs. Value: Analyzing the ROI of Outsourcing Application Testing Services
