How was the Workshop on Performance and Reliability (WOPR) born? What is the vision behind it? Find out in this article, featuring Eric Proegler and Paul Holland.
By Natalie Rodgers
WOPR is still making significant contributions even after 29 editions. The first one was held in New York in 2003. Since then, it has tremendously impacted the worldwide testing community and IT industry.
It has been hosted by major companies like Google, Microsoft, Facebook, Salesforce, eBay, and BlazeMeter, in locations across the US, Canada, and Europe. At Abstracta, we are delighted to be hosting the 29th edition of the event in Uruguay this year.
The last workshop was held in Marseille, France, in 2019. This year’s WOPR will be the first since the pandemic hit. “We held off having a WOPR during the first two years of the pandemic. Now that people are more comfortable again with traveling and meeting with others, it is time for WOPRs to come back”, said Paul Holland, one of its organizers.
“We’re still convinced WOPR can’t be virtual because the bandwidth in the room is one of the things that help produce the magic”, emphasized Eric Proegler, another organizer.
It is an honor for Abstracta to bring the workshop to Latin America, specifically to Uruguay, and to host it. But why is Latin America a good place to run it after so long?
Eric answered this way: “We found the right partners at the right time, people we respect and trust. The globalization of IT is not really about labor costs, it’s about sourcing talent and new ideas everywhere.”
“There is great work being done in Latin America, and I am honored we were asked to come to Montevideo. WOPR29 brings us to our fourth continent and eighth country,” outlined Eric.
Through this article, we will explore the workshop’s history, vision, and motivations. To make this happen, we interviewed Eric Proegler and Paul Holland, two of WOPR’s current organizers and protagonists.
– How was WOPR born? When, under which context?
Paul: In early 2003, Ross Collard (software testing consultant and teacher) reached out to Scott Barber (well-known performance tester) to start a peer conference called WOPR, following the LAWST-style of peer conferences.
LAWST is the Los Altos Workshop on Software Testing. LAWST created a set of facilitation rules to promote sharing experiences, conversations, and discussions that were designed to allow everyone to participate equally.
Facilitation is a crucial aspect of these peer conferences as it prevents one person from dominating the conversation and allows for a logical flow of discussions. James Bach facilitated the first WOPR which was held in New York City in October 2003.
Eric: The first time I attended WOPR was in 2004, when it was in its 3rd edition (WOPR3). I was early in my career and had been assigned to learn performance testing. I had taken a course in Performance Testing from Ross Collard and was invited by him to attend. I came to learn that I was not the first or last vouched for by Ross – but that invitation changed my career.
At that time, WOPR was run by a collection of independent test consultants, and many of the attendees were too. They had formed WOPR to share experiences and made the incredible and still relevant decision to focus on Experience Reports. They facilitated untimeboxed discussions. Those were inherited from the LAWST framework, but they are the magic of WOPR.
At WOPR3, I met Dawn Haynes, Cem Kaner, Rob Sabourin, Michael Kelly, Karen Johnson, Bret Pettichord, and even the illustrious Richard Leeke! They were all veteran consultants who had seen some things. I didn’t know about their reputations at that time, but I knew they had a lot to teach me. There were several other really smart people in the room that I also learned a lot from.
Due to a hurricane in Florida that week, some people arrived late or didn’t attend at all. Paul Holland stepped in as Facilitator and essentially invented peer workshop conference facilitation there.
For my part, I sat there with an open notebook and wrote down every single term or technology I had never heard before, so I wouldn’t be distracted from hearing these brilliant people relate their experiences and describe their methods.
– What goals did you have with WOPR at the beginning?
Paul: I became an organizer in its 5th edition (WOPR5), after I had started facilitating at WOPR3. The vision of WOPR was very different for Scott and Ross.
I think that difference ultimately caused Scott to step away from an organizing role. Scott wanted WOPR to generate new insights that would be impactful to the performance testing field and publish papers with those discoveries.
Ross wanted to create a community of performance testers who could share stories and learn a lot from each other while also creating a network of peers. Ross’ vision is the one that became WOPR’s vision. The one that Eric Proegler, Andy Hohenner, Mais Tawfik Ashkar, and I (Paul Holland) try to maintain.
Eric: At first, I just wanted to keep up. After a few WOPRs, I was invited to help organize them and was happy to do so. Paying forward the debt for what WOPR meant for me was a strong motivator. At that point, I wanted WOPR to continue being held twice a year, with a seat guaranteed for me in return for my efforts. It’s still a great deal to me.
– Has it changed over the years? How?
Paul: The early WOPRs had some growing pains. The difference in vision caused some heated exchanges during the conference. Those were sorted out in the first few years. Ross used to send out a survey after each WOPR. We would take the feedback very seriously and attempt to address any concerns.
The facilitation has improved by using K-cards to allow participants to indicate that they have a question or comment. The flow of the workshop organization has improved since Eric took over. He has the organization down to an art and ensures that everything is lined up for each conference. In general, the conferences are mostly the same – just small tweaks to make subtle improvements as needed.
Eric: The composition of attendees has moved away somewhat from cranky consultants towards more people with full-time work in the field. We had access to Ross’ contact list for the first 20 WOPRs or so. Since then, our networks and word of mouth have taken things forward. We still get a pretty good mix of people.
– WOPR29 is an event for a few people. What is the vision behind it?
Paul: The numbers are small because that is what experience has shown to be effective. The smallest WOPR was only 12 people. The largest was 29 people. The best number for a LAWST-style conference is between 15 and 22 people. Any fewer, and discussions may not have the desired depth. Any more, and the facilitation tends to fall apart. The one conference with 29 people was too many. We now cap the number of attendees at 27.
Eric: We limit the overall headcount, and we use an application process to carefully select who attends. This is to create the best conditions for the WOPR Magic (which is the LAWST Magic, more or less) to occur. There are three main ingredients to this Magic.
First, a group of experienced practitioners is assembled through an application process. They propose the experience they want to share at WOPR, and describe their experience as practitioners. We often know them from conferences or other experiences where we may have met them. We want to choose people equipped to participate in and add to a deep conversation about performance and reliability testing.
Hopefully, they are humble, willing to be vulnerable about what they know and what they don’t know, and will show up ready to learn from each other. People who are not aligned with these values are definitely not invited back, and, if we can help it, not invited in the first place.
The second ingredient is Experience Reports. These are prepared before the meeting but distilled on-site, by an experienced practitioner talking with other people who already share a great deal of the same context. The person speaking does not present generalized conclusions or instructions to the room as would occur at a typical conference.
The people in the room know the terminology, tooling, history, and goals of the subject matter. We talk about performance testing with a group of people who have all created, run, and reported on performance tests. This means communication bandwidth is extremely high.
The third ingredient is Facilitated Discussion, sometimes called Open Season. This is a threaded discussion model, with no time limit. All attendees can propose new threads and respond to any thread, with the Facilitator handling the sequencing.
This facilitation model helps prevent individuals from dominating the conversation and makes sure everyone can be heard. Paul does this better than anyone in the world and is asked to Facilitate a large number of workshops and conferences all around the world.
– What has been your biggest surprise so far at WOPR?
Paul: That WOPRs are still being held. There have been more WOPRs than any other LAWST-style peer conference. We are about to host the 29th edition. I think LAWST is the next most successful (by the number of meetings) at 18 (or so).
Eric: People who are often isolated, working under difficult circumstances, keep synthesizing new ways to do things. I hear about novel approaches and new techniques at every WOPR. It’s actually very invigorating to know that there is so much room for new ideas and approaches in what some people might consider a dry subject.
– What are your expectations about the future of WOPR?
Paul Holland: I hope that we keep having WOPRs, but I’m not sure about the future of performance testing as a specific role in software testing. Recently, I have seen performance testing done by various other roles. The tools are easier to apply and use.
The results tend to be easier to evaluate with the movement to microservices and larger, more capable computers. But there is likely still a need for performance testers, and as long as that is the case, WOPR will hopefully be around to give practitioners a place to meet and have meaningful discussions with peers from around the world.
Eric Proegler: I definitely want to have a 30th WOPR in 2023, preferably in New York if we can, since the 1st, 10th, and 20th WOPRs were held there. I still have Vienna on my list because of tool vendors located there.
I feel like Performance Testing is coming back into vogue to some extent in the last couple of years, but I still have the sense that performance work is more of an observability game than a testing one these days. I definitely don’t meet as many people who specialize full-time the way I did in the 2010s.
Iterative Development is the biggest impact on the field – no surprise our theme, “Iterative Performance Testing”, is related. But changes in public cloud and container deployment technologies have also had a real impact on the field. Our attendees are showing up with DevOps and SRE titles these days. Perhaps we will explore these modern contexts more.
Don’t miss Quality Sense Conf! Organized by Abstracta, it will focus on a variety of software testing topics. The event will take place just after WOPR29 in Montevideo, Uruguay, where you can take the opportunity to meet many of WOPR29’s speakers face-to-face, as they will also be there. Register here.
Read more articles in our “Performance Testing In-Depth” series here.
Follow us on LinkedIn & Twitter to be part of our community!