Which should we choose more carefully: the tool we use, or the person who runs the performance tests?

Does the tool or the tester matter more for a good performance test? One aspect we always stress in performance testing is that the tools are circumstantial, at least to a certain degree.

What does this mean?

It’s true that different tools enable us to test different functions and produce varying results. But anyone experienced in performance testing will be knowledgeable about the concepts, the types of performance tests, the communication protocols, and monitoring. They will therefore be capable of managing any kind of tool, with more or less difficulty, because despite the differences in scripting languages and in the way things are done from one tool to another, the foundation is always the same.

Scott Barber, one of the greatest gurus in performance testing, once said, “Enterprise grade load generation tools are designed to look easy in sales demos. Don’t be fooled.”

So, what is more important… the tool or the tester?

The Tester is More Important Than the Tool

At Abstracta, we always say this, and one of our recent consultancy projects actually let us show that it’s true. I had never handled the tool they used, but I had the support of other team members, whose help, in the end, I didn’t need. Upon reviewing the scripts, I figured out how to use the new tool by mapping its concepts to those of the tools I was already familiar with. In coordination with the team in charge of running the tests, we got all the scripts working simply by applying the concepts of a proven method and by following best practices.

Best Practices


  • Verify that think times are not included in the response time measurements.
  • Parameterize tests, and verify that correlations function correctly.
  • Test the script with different data, users, etc., to guarantee that the parameterization works correctly (even checking how the data affects the database).
  • Perform validations at each step so that, when executing the tests, there is enough information to analyze which users failed, at which step of which test case, and with which data. That information is then used to verify that the script is working and that the application functions with that data. If everything works in isolation, then the error occurred only under concurrency.
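The first three practices can be sketched in a few lines of tool-agnostic Python (all names here, such as `send_request`, are hypothetical stand-ins, not the API of any real load tool): think time sits outside the timed section, the same script runs with different data, and every step is validated and recorded with its user, step, and data.

```python
import time

# Hypothetical stand-in for the HTTP call a real load tool would make.
def send_request(step, data):
    return {"status": 200, "body": f"ok:{data}"}

def run_test_case(user_id, steps, data):
    results = []
    for step in steps:
        start = time.perf_counter()            # measure only the request...
        response = send_request(step, data)
        elapsed = time.perf_counter() - start  # ...never the think time below
        # Validate each step so a failure can be traced to user/step/data.
        ok = response["status"] == 200 and data in response["body"]
        results.append({"user": user_id, "step": step, "data": data,
                        "time": elapsed, "ok": ok})
        time.sleep(0.01)  # think time, deliberately outside the measurement
    return results

# Parameterization: the same script runs with different data per user.
test_data = ["alice", "bob", "carol"]
all_results = []
for user_id, data in enumerate(test_data):
    all_results.extend(run_test_case(user_id, ["login", "search", "logout"], data))

failures = [r for r in all_results if not r["ok"]]
print(f"{len(all_results)} steps executed, {len(failures)} failures")
```

The exact mechanics differ per tool, but every mainstream load tool exposes equivalents of these three pieces: a timed transaction, a data source, and a per-step assertion.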


  • Verify that the load generators are not saturated. If they are, the generated load will differ from the one programmed, and the response times will not be measured properly.
  • Analyzing the results in the load generation tool is not enough; that is just half of the task. The most interesting part comes from cross-referencing those results with the behavior of each component of the infrastructure: observing monitored indicators such as CPU, memory, and network, and using specific tools to detect bottlenecks and opportunities for improvement, such as profilers, database tracing, traffic analyzers, log analysis, etc.
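The saturation check above amounts to a simple cross-reference between resource samples taken on the load generator and the response times measured during the same interval. A minimal Python sketch, using invented sample data purely for illustration:

```python
# Hypothetical monitoring samples from the load generator:
# (timestamp_seconds, cpu_percent), alongside measured (timestamp, response_time).
cpu_samples = [(0, 35), (10, 60), (20, 97), (30, 98), (40, 55)]
response_times = [(0, 0.8), (10, 0.9), (20, 3.5), (30, 4.1), (40, 1.0)]

CPU_SATURATION = 90  # above this, the generator itself may distort results

def suspect_measurements(cpu, times, threshold=CPU_SATURATION):
    """Flag response times recorded while the load generator was saturated."""
    saturated = {t for t, pct in cpu if pct >= threshold}
    return [(t, rt) for t, rt in times if t in saturated]

suspect = suspect_measurements(cpu_samples, response_times)
print(f"{len(suspect)} measurements taken under generator saturation: {suspect}")
```

Measurements flagged this way should be discarded or the test repeated with more generators, because slow responses recorded by a saturated generator say nothing about the system under test.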

We could generally conclude that, nowadays, we use tools for programming, but it’s still necessary to learn good programming practices, including the good practices of each specific tool. The same goes for automated functional testing tools, performance testing tools, and monitoring tools.

What is your experience with different testing tools?
