Blog

WOPR29 is already here!

What will be the importance of performance testing in 5 years? The prestigious performance testing event WOPR29 is in full swing in Montevideo, and we interviewed a panel of experts to talk about this topic. They are Roger Abelenda, Andréi Guchin, Sofia Palamarchuk, Paul Holland, Andy Hohenner, and Eric Proegler.

By Natalie Rodgers

It’s time! WOPR29 (Workshop On Performance and Reliability) is taking place in Montevideo, Uruguay, through December 8, 2022, and we are the first company in Latin America to host it!

WOPR is a crucial event to deepen the knowledge of performance testing with leaders in the field from around the world. It is a workshop for 20 to 25 people where participants can discuss various topics.

In the past, it was hosted by giant companies like Google, Microsoft, Facebook, Salesforce, eBay, and BlazeMeter, among others, in different places across the US, Canada, and Europe. After 14 years of existence, it is our turn! It is really happening, and we couldn’t be more proud!

Today we share insight into the future of performance testing, with a panel of experts made up of Paul Holland, Andy Hohenner, and Eric Proegler, organizers of WOPR29; Roger Abelenda, CTO of Abstracta; Andréi Guchin, Head of Abstracta’s Performance Hub; and Sofia Palamarchuk, member of Abstracta’s management team and CEO of Apptim.

– Given the current trends, how do you envision performance testing in 5 years? 

Paul Holland: I imagine there will be tools that make performance testing a little easier to do, but the essence of performance testing is to find system limitations, see how the system behaves under load, locate the bottlenecks, and predict when the system will fail. All of those things will likely remain the same.
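To make that essence concrete, here is a toy sketch of the core exercise: step up concurrency against a system and watch response times for the point where it stops keeping up. The target URL and load levels are placeholders, and real load testing tools add pacing, think time, and percentile reporting on top of this idea.

```python
# Toy step-load sketch: increase concurrency and watch average latency.
# The URL and user counts are placeholders, not a real test plan.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://example.com/"  # placeholder target system

def timed_get(_):
    """Issue one request and return its wall-clock duration in seconds."""
    start = time.perf_counter()
    requests.get(URL, timeout=30)
    return time.perf_counter() - start

for users in (1, 5, 10, 25, 50):  # step the load up
    with ThreadPoolExecutor(max_workers=users) as pool:
        latencies = list(pool.map(timed_get, range(users * 4)))
    avg_ms = sum(latencies) / len(latencies) * 1000
    print(f"{users:>3} users: avg {avg_ms:.0f} ms")
```

The knee in that output, where average latency starts climbing faster than the load, is the kind of limit Paul is describing.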

Andy Hohenner: In the last 10 years, performance testing practices have changed pretty significantly. I expect that they will continue to evolve and will not look like what we do today, just as today’s practices don’t look like those of 10 years ago. Tools will continue evolving and supporting new approaches, along with new open-source solutions to problems we haven’t seen or thought of yet.

Eric Proegler: I think performance testing will continue to evolve to keep up with changes in software development. I think the two greatest opportunities for increasing its value are in scaling other kinds of test automation to bring more of the user experience under measurement, and in improving the ability to replay/duplicate/merge production traffic.

For most contexts these days, front-end performance accounts for the majority of user response time and is at least as complex as the n-tier applications that load testing was developed for. Replaying a handful of transaction types solely against a backend does not solve as much of the problem as it used to, and there is a vast body of automated tests just waiting to be leveraged under load to make them even more informative. I was very focused on this problem (essentially, scaling Selenium to thousands) a few years ago, but never got as far as some others we’ve heard from at previous WOPRs.
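As an illustration of bringing the front end under measurement, here is a minimal sketch of the per-browser building block: a Selenium session that reads the browser’s own Navigation Timing data, so the numbers include rendering rather than just backend response time. The URL is a placeholder, and running thousands of such sessions in parallel is exactly the scaling problem Eric refers to.

```python
# Minimal sketch: one headless browser measuring real front-end timing.
# The URL is a placeholder; scaling this to thousands of sessions
# is the hard part, not shown here.
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--headless=new")
driver = webdriver.Chrome(options=options)
try:
    driver.get("https://example.com/")  # placeholder URL
    # Navigation Timing entry reported by the browser itself, so the
    # measurement covers parsing and rendering, not just the backend.
    nav = driver.execute_script(
        "return performance.getEntriesByType('navigation')[0].toJSON();"
    )
    print(f"DOM content loaded: {nav['domContentLoadedEventEnd']:.0f} ms")
    print(f"Full page load:     {nav['loadEventEnd']:.0f} ms")
finally:
    driver.quit()
```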

The traffic shaping/replay problem has always been a very interesting way to move past crude models that almost always said more about the time available for scripting than about the desired fidelity of the model. For at least a decade, the most observant, nimble, and bravest have added synthetic load to production, or replayed or split parts of production traffic against alternate versions of code, to learn more. This still hasn’t been democratized.
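For illustration only, here is a minimal sketch of the replay idea: re-issue requests from a production access log against an alternate version of the code. The log format and staging URL are assumptions, and a real replay system would also preserve request timing, headers, and session state, which this sketch omits.

```python
# Minimal traffic-replay sketch, assuming an access log of
# "METHOD path ..." lines and a hypothetical staging environment.
import requests

CANDIDATE_BASE = "https://staging.example.com"  # alternate code version

with open("access.log") as log:
    for line in log:
        parts = line.split()
        if len(parts) < 2:
            continue  # skip malformed lines
        method, path = parts[0], parts[1]
        if method == "GET":  # replay only idempotent traffic
            resp = requests.get(CANDIDATE_BASE + path, timeout=10)
            print(resp.status_code, resp.elapsed.total_seconds(), path)
```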

Roger Abelenda: I envision it being a lot more assisted and automated, allowing the performance tester to focus more effort on higher-order problems, while at the same time having to know more about particular systems.

I expect more and better tools to help with test calibration, automation, root cause analysis, auto-fix recommendations, configuration auto-tuning based on performance test metrics, trend analysis, and alerting, as well as collaborative tools to share common patterns and fixes and spread more knowledge across the performance community.

Regarding performance testing, I think more concepts and knowledge will be shared and explored, opening new niches for specialized performance testing, much as chaos engineering is today.

Andréi Guchin: As technology evolves, quality standards also evolve and users become more demanding, particularly in terms of the speed, stability, reliability, and user experience of apps and systems. I think that performance testing will continue evolving to meet these demands, with tools and practices that are a better fit for how teams work.

Regarding tools specifically, I see a trend toward open-source platforms where people can share knowledge and connect with others to find better solutions to common problems.

Therefore, the challenge for us is to keep learning and staying on top of these trends in order to succeed and remain competitive.

Sofia Palamarchuk: Performance will be a critical bottleneck for companies trying to remain competitive in the next few years as users’ expectations of digital experiences rise.

Performance testing will become a widespread practice in many industries, with quality teams and developers empowered with tools to test performance earlier in the SDLC. As AI learns from production data, it will help identify which tests should be automated, assist with root cause analysis, and automatically detect code changes that introduce performance regressions.
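As a rough illustration of that kind of automated analysis, here is a minimal sketch (with invented numbers) that flags a possible regression by statistically comparing response time samples from two builds. A Mann-Whitney U test is a simple stand-in for the AI-assisted detection Sofia describes.

```python
# Minimal regression-detection sketch: compare latency samples from a
# baseline build and a candidate build. The sample data is invented.
from scipy.stats import mannwhitneyu

baseline_ms = [102, 98, 110, 105, 99, 101, 97, 108]      # previous build
candidate_ms = [131, 140, 128, 135, 129, 138, 133, 126]  # new build

# H1: baseline latencies are stochastically lower than the candidate's,
# i.e., the new build is slower.
stat, p_value = mannwhitneyu(baseline_ms, candidate_ms, alternative="less")
if p_value < 0.01:
    print(f"Possible performance regression (p={p_value:.4f})")
else:
    print("No significant change detected")
```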

Don’t miss Quality Sense Conf! Organized by Abstracta, it will be divided into 14 sessions focused on a variety of software testing topics. The event will take place just after WOPR29 in Montevideo, Uruguay, and you’ll have the opportunity to meet many of WOPR29’s speakers face-to-face, as they will also be there. Register here.

Find out more about this saga, Performance Testing In-Depth, here.

Follow us on LinkedIn & Twitter to be part of our community!

