Performance tests simulate load on the system under test in order to analyze its performance (response
times and resource usage) and find bottlenecks and opportunities for improvement.
There are dedicated tools for this simulation which automate the actions that generate the load, for example,
interactions between the user and the server. In order to simulate many users with little testing
infrastructure, interactions are automated at the protocol level, which makes the automation more complex (in
terms of the necessary preparation work) than automated scripts at the graphical interface level.
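To make the idea of protocol-level simulation concrete, here is a minimal sketch in Python: each "virtual user" is just a thread issuing a raw HTTP request and measuring its response time, with no browser or GUI involved. The local server, endpoint, and user count are illustrative assumptions, not part of any particular tool.

```python
import http.client
import http.server
import socketserver
import threading
import time
from concurrent.futures import ThreadPoolExecutor

# A minimal local server so the example is self-contained.
class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request logging
        pass

server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), Handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

def virtual_user() -> float:
    """One protocol-level interaction: open a connection, send a GET,
    read the response, and return the elapsed time in seconds."""
    start = time.perf_counter()
    conn = http.client.HTTPConnection("127.0.0.1", port, timeout=5)
    conn.request("GET", "/")
    conn.getresponse().read()
    conn.close()
    return time.perf_counter() - start

# Simulate 20 concurrent users, each performing one request.
with ThreadPoolExecutor(max_workers=20) as pool:
    timings = list(pool.map(lambda _: virtual_user(), range(20)))

server.shutdown()
print(f"requests: {len(timings)}, max response time: {max(timings):.4f}s")
```

Because each virtual user is only a thread and a socket, a single machine can generate far more load this way than it could by driving real browsers, which is precisely the trade-off described above.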
Two approaches to performance tests can be distinguished: testing early in development (testing the
performance of units, components or services) and testing before going into production (in acceptance testing
mode). The most important takeaway here is that both approaches are essential. It is necessary to
simulate the expected load before going live, but not everything should be left until the last minute, since
any problems found then will be more complex to solve. Conversely, testing each component frequently
reduces the cost of corrections, but there is no guarantee that everything will work properly once it is
integrated, installed on a server, and placed under the expected load.
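The early-testing approach can be as simple as a unit test with a performance budget: it fails when a function exceeds an agreed response-time threshold, so regressions surface during development rather than before go-live. The function, data size, and threshold below are illustrative assumptions.

```python
import time

def find_duplicates(items):
    """Function under test (a stand-in for any unit whose
    performance we want to track from early in development)."""
    seen, dupes = set(), []
    for item in items:
        if item in seen:
            dupes.append(item)
        seen.add(item)
    return dupes

def test_find_duplicates_performance():
    data = list(range(50_000)) * 2  # 100,000 items, every value duplicated
    start = time.perf_counter()
    result = find_duplicates(data)
    elapsed = time.perf_counter() - start
    # Performance budget: an illustrative threshold, tuned per project.
    assert elapsed < 1.0, f"took {elapsed:.3f}s, budget is 1.0s"
    assert len(result) == 50_000

test_find_duplicates_performance()
```

A budget like this catches a unit that suddenly gets slower, but, as the text notes, it says nothing about how the integrated system behaves under concurrent load, so it complements rather than replaces the pre-production load test.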
This is where DevOps engineers are needed, as they can analyze the different components of the
application, operating system, databases, etc., and use various monitoring tools to locate possible
bottlenecks, adjusting settings as necessary. It is also imperative to involve developers in these tasks,
since automation is a programming task, and the improvements that must be made are often at the data level
(SQL queries, schemas) or at the logic level (algorithms, code, etc.).
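A typical data-level improvement of this kind is adding an index to a column the application filters on. The sketch below uses SQLite's `EXPLAIN QUERY PLAN` to show the query going from a full table scan to an index search; the table and column names are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 1000, i * 1.5) for i in range(10_000)],
)

query = "SELECT total FROM orders WHERE customer_id = ?"

# Without an index, SQLite scans the whole table for every lookup.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchone()[-1]

# The fix: add an index on the filtered column.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchone()[-1]

print(plan_before)  # e.g. "SCAN orders"
print(plan_after)   # e.g. "SEARCH orders USING INDEX idx_orders_customer ..."
```

Finding that this particular query is the bottleneck is the monitoring work described above; applying the schema change is the developer's part, which is why both roles need to be involved.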
To execute automated performance tests frequently, an important problem is keeping the test scripts
maintainable. Because automation is done at the protocol level (for web systems, for example, at the HTTP
level), the scripts are highly susceptible to changes in the application. Two tools that have emerged to
overcome this problem, and that we use at Abstracta, are Taurus and Gatling.
Although they take different approaches, both offer a simple scripting language and seek to reduce script
complexity. For instance, Gatling allows applying test design patterns like Page Object, which can reduce
the impact of changes and increase maintainability.
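The Page Object idea applied to load scripts can be sketched as follows (in plain Python for illustration; these names are not Gatling's actual API): each "page" centralizes its URL and request details, so when the application changes, the fix is made in one place instead of in every scripted user journey.

```python
# Page Object-style encapsulation for a protocol-level load script.
# Each page class owns its path and request shape; scripted journeys
# only compose page objects, never raw URLs.

class LoginPage:
    PATH = "/login"  # if the application's login URL changes, edit here only

    @staticmethod
    def request(user: str, password: str) -> dict:
        return {"method": "POST", "path": LoginPage.PATH,
                "body": {"user": user, "password": password}}

class SearchPage:
    PATH = "/search"

    @staticmethod
    def request(term: str) -> dict:
        return {"method": "GET", "path": f"{SearchPage.PATH}?q={term}"}

def user_journey(user: str, password: str, term: str) -> list:
    """One virtual user's scripted flow, built from page objects."""
    return [LoginPage.request(user, password), SearchPage.request(term)]

journey = user_journey("alice", "secret", "gatling")
print([step["path"] for step in journey])
```

With dozens of journeys reusing the same pages, a renamed endpoint becomes a one-line change, which is exactly the maintainability gain the pattern is meant to provide.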
This goes to show that, before selecting a tool, it is extremely important to define the objectives of the
performance tests, in order to choose the one that best addresses your needs and challenges. Each tool comes
with its own advantages, disadvantages, and features that address different needs.
For more on performance testing tools and approaches, visit our performance engineering blog.
Consider performance testing for the acceptance of a product, simulating the
expected production load, as well as accompanying the whole development process with unit performance tests to
detect problems as early as possible, when they are cheaper to fix.