What practices are relevant in continuous performance testing today? We talked about this with an expert panel made up of Roger Abelenda, Andréi Guchin, Sofia Palamarchuk, Paul Holland, Andy Hohenner, and Eric Proegler.
Two important software testing conferences, WOPR29 and Quality Sense, will take place in Uruguay next December, and both will make a great contribution to the field of continuous performance testing. What makes this so important? Events like these allow software experts from around the world to network and share ideas, contributing to the development of quality software.
The small South American country of Uruguay, with a population of about 3.5 million, ranks among the top software exporters in the world. The Financial Times reported that “more than 1,000 software development companies” operate in Uruguay, generating almost $1bn in exports, mostly to the US, making it one of the world’s leading software exporters per capita.
Referring to Uruguay as the “Silicon Valley of South America” is becoming increasingly common. According to the Digital Riser Report 2021, Uruguay was the top Digital Riser of the last three years in Latin America and the Caribbean. With progressive policies and remarkable opportunities, the country has earned international recognition.
The report, prepared by the European Center for Digital Competitiveness, ranks the digital competitiveness of 140 countries over a three-year period, drawing on data from the World Economic Forum, the World Bank, and the International Telecommunication Union.
There is an ever-growing audience worldwide interested in learning more about performance testing and software quality. So here we are, digging into all of it through this saga that we named “Performance Testing In-Depth.”
As one of the most reliable testing companies in the region, Abstracta is proud to take on this task, aiming to build bridges and broaden knowledge about this far-reaching practice, which is enabling the creation of better and better software.
In this article, we focused on continuous performance testing practices, with the opinions of a panel of experts made up of Paul Holland, Andy Hohenner, and Eric Proegler, organizers of WOPR29; Roger Abelenda, Chief Technology Officer at Abstracta; Andréi Guchin, Lead of Performance Hub at Abstracta; and Sofia Palamarchuk, a member of Abstracta’s management team and Apptim’s CEO.
– WOPR29’s website says that synthetic usage injection can still provide predictive benefits that are difficult to achieve in any other way. In this context, what practices are relevant in continuous performance testing today?
Andy Hohenner: That’s part of what I hope to explore at this WOPR. There are practices like observability testing, instrumenting test automation, synthetic testing/monitoring, etc. But none of those give us the forward-looking, at-scale performance data that old-school load tests did.
Eric Proegler: For software under development, we want to know if we are making things better, or at least not making them worse. Calibrated (as opposed to “realistic”) loads can confirm what we expect and surface new information, and we can get that information safely, without risking brand and revenue. Verifying new architectures and component reworking at an early stage can be very valuable for baking good performance into the system.
In terms of scalability, load injection and assembled-system testing can help us discover problems we don’t want to find later in production. At a certain level of reliability, “Service Restart Engineering” can get us to (an average of) two nines of uptime, but that’s not enough for every context. Even when it is, point-load effects should be explored, autoscaling is not immediate, and operational systems need verification as well.
Roger Abelenda: There are many relevant practices that can be used to ensure the performance of a system under different loads.
Generating synthetic load is still relevant because it allows us to verify an application’s performance in anticipation of potential future loads (for example, an upcoming event like Black Friday or a particular promotion), to analyze known load profiles with proper preparation and monitoring in place, and to reproduce those loads in a simple way when evaluating configuration adjustments or code fixes. In the context of continuous performance testing, generating synthetic loads lets us verify that a system’s behavior remains within acceptable ranges under a well-known scenario while the system itself changes on each iteration (whether through code, configuration, or infrastructure changes).
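For illustration, here is a minimal sketch of such a well-known, repeatable scenario, written as a k6 script (an assumed tool choice; the endpoint and load figures are hypothetical). Because the workload model is fixed in code, the exact same scenario can be replayed on every iteration and the results compared:

```typescript
// A fixed, well-known load scenario that can be replayed on every iteration
// to compare behavior across code, configuration, or infrastructure changes.
// The URL and load figures are illustrative only.
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  scenarios: {
    steady_load: {
      executor: 'constant-vus', // same workload model on every run
      vus: 50,                  // hypothetical concurrency for the known profile
      duration: '10m',
    },
  },
};

export default function () {
  // Hypothetical endpoint representing a known user flow.
  const res = http.get('https://test.example.com/checkout');
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1); // think time approximating real user pacing
}
```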
Additionally, other types of tests may be applied, such as resiliency testing (through chaos engineering), measuring how quickly infrastructure scales up and down, etc.
I think the key practice in continuous performance testing is versioning performance scripts, and the closer that versioning is to the code under test, the better (ideally, the performance tests live in the same repository as the code under test). Keeping performance test versioning close to the code under test makes it easy to track changes in both places, roll back if an issue appears, and apply a fix to production code and verify it with the right tests for that version. Moreover, it fosters collaboration between developers and performance testers (for example, through the code review practice), creating a truly cross-functional team rather than two teams trying to sync on sprints.
Andréi Guchin: To complement Roger’s answer, another practice that is relevant in continuous performance testing, and well aligned with microservices architectures, is running performance tests against each system both in isolation and in an integrated fashion. As with functional tests, it is essential to test each component in isolation, using mocks or virtual services, to verify that it works properly and to get fast feedback loops. It is also important, in later steps, to test the integration with other systems, to verify that functionality and performance are not affected in the integrated environment due to networking, scenarios only exercised in integration, or other factors. A minimal sketch of the isolation side follows.
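As a sketch of that isolation approach (an assumed setup, not taken from the panel), the component under test is pointed at a small stub of its downstream dependency, so that any slowness observed during the load test comes from the component itself rather than the network or its neighbors:

```typescript
// A stub of a downstream dependency (e.g., a pricing service) so the
// component under test can be load-tested in isolation. The port and
// payload are hypothetical.
import http from 'node:http';

const PORT = 8081; // port the component under test is configured to call

const server = http.createServer((_req, res) => {
  // Canned payload with a small, predictable latency: responses are stable,
  // so variations in test results reflect the component under test.
  setTimeout(() => {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ price: 42, currency: 'USD' }));
  }, 20);
});

server.listen(PORT, () => {
  console.log(`Stubbed dependency listening on :${PORT}`);
});
```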
Also, having representative and reproducible environments in which to run such performance tests continually is of special importance. Infrastructure as code and disposable environments play a much bigger role here than they did in the past, when performance tests were done in a one-off fashion. In contrast to “traditional” performance testing, for continuous performance testing it is not so important to have a production-like environment as it is to be able to reproduce exactly the same environment throughout the different sprints, so that results can be compared effectively.
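One way to get such disposable, reproducible environments (an assumed implementation, using the Testcontainers library for Node.js) is to pin every dependency to an exact version and throw the environment away after each run:

```typescript
// A disposable test environment: the same pinned image starts fresh for
// every run, so results are comparable across sprints. The image version
// and credentials are illustrative.
import { GenericContainer, StartedTestContainer } from 'testcontainers';

async function withDatabase(run: (host: string, port: number) => Promise<void>): Promise<void> {
  // Exact same database version on every run: for continuous performance
  // testing, reproducibility matters more than production-like scale.
  const container: StartedTestContainer = await new GenericContainer('postgres:15.3')
    .withEnvironment({ POSTGRES_PASSWORD: 'test' })
    .withExposedPorts(5432)
    .start();
  try {
    await run(container.getHost(), container.getMappedPort(5432));
  } finally {
    await container.stop(); // the environment is discarded after the test
  }
}
```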
Finally, having an appropriate, user-friendly results report, with all the important information needed to draw useful performance-related conclusions, is something to keep in mind when running this kind of testing. Most of the time, no one will be monitoring the tests as they run, so a good report is essential to understand afterward what happened.
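As a small example of that idea (again assuming k6 as the tool), a post-run hook can persist a summary artifact that the CI job archives, so results can be reviewed and compared long after the run finished:

```typescript
// k6 post-run hook: write a machine-readable summary artifact that CI can
// archive and diff across builds, since no one watches the run live.
export function handleSummary(data: unknown) {
  return {
    'perf-summary.json': JSON.stringify(data, null, 2), // hypothetical file name
  };
}
```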
Sofía Palamarchuk: In terms of performance testing during the SDLC (Software Development Life Cycle), there are different types of checks that can be done to ensure performance regressions are caught in time and addressed.
Which type of performance testing is most relevant in CI depends on the end goal of the software under test and on what is most critical for the business.
The minimum would be to automatically check the response time of all requests and the user experience of the main user flows. This requires analyzing performance across multiple environments (browsers, mobile devices, network connections, etc.). Because this is done in a continuous fashion (every time a new version of the software is released and goes through the CI pipeline), it is critical that these tests are automated, run in the same controlled environment, and are fast. As a result, we can detect performance degradation (regressions) earlier and take appropriate action.
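A common way to turn those checks into an automatic gate (a sketch under assumed tooling, k6; the thresholds are illustrative) is to declare pass/fail thresholds in the test itself. If a threshold fails, the tool exits with a non-zero code and the CI pipeline step fails, surfacing the regression immediately:

```typescript
// An automated regression gate: the run fails if the error rate or response
// times exceed the declared thresholds. Figures are illustrative.
import http from 'k6/http';

export const options = {
  vus: 20,
  duration: '2m',
  thresholds: {
    http_req_failed: ['rate<0.01'],   // fewer than 1% failed requests
    http_req_duration: ['p(95)<500'], // 95th percentile under 500 ms
  },
};

export default function () {
  http.get('https://test.example.com/'); // hypothetical main user flow
}
```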
Find more about the “Performance Testing In-Depth” saga here.
Would you like to join WOPR29? Find all the details and apply here.
If you want to know more about Uruguay as a Digital Hub, we suggest you read this article.
In need of a testing partner? Abstracta is one of the most trusted companies in software quality engineering. Learn more about our solutions, and contact us to discuss how we can help you grow your business.
Follow us on LinkedIn & Twitter to be part of our community!