Blog

Quality Sense Podcast: Rob Sabourin – Testing Under Pressure (Part 2)

Learn about the 5 principles of Rob Sabourin’s Just-in-Time Testing methodology to help test under pressure

Listen to this episode of the Quality Sense Podcast in which Federico Toledo continues his two-part interview with Robert Sabourin, Adjunct Professor of Software Engineering at McGill University and President of AmiBug.Com.

What’s the Episode About?

As you’ve probably felt yourself, teams are under greater pressure during today’s global health crisis. How can testers adapt to shorter release cycles yet help maintain business continuity? Rob has developed five principles as part of his Just-in-Time Testing methodology that hold some of the answers.

In part one, Rob shared the first two principles: purposeful testing and active context listening. In part two, he covers the other three: flexible decision making, ruthless triage, and always knowing the last best build.

Learn why he thinks testers are actually quite used to turbulence, why purpose is now more important than ever, how to actively search for more context for better testing, and more!

Listen Here:

Episode Transcript

Rob:

Third principle: decision-making workflows.

I like to know how people are going to decide… Now, specifically for testing, there are three of them: how do you decide what requirements matter, how do you decide what bugs matter, and how do you decide what tests matter?

Now, this is something that is individual to every company. Whether you’re agile or not, whatever methods you have, and whatever projects you’re working on, there are going to be different ways you make decisions about things.

Before I start testing, I want to know how they decided that this requirement was important. I don’t want just the result of the decision; I want to know how they came to their decision, because I want to know the reasoning behind it. When I have a question while I’m learning, I’m not going to be able to go back to a book, some sort of “Bible of requirements”, and look up the answer. I’m going to have to figure out a lot of things myself, and knowing the thinking behind the decision helps. Say this is important because it’s related to this database that’s being changed from an old version to a new version: now I know the reasoning for the prioritization, and I can use that to guide me.

So I want to know the reason for prioritizing requirements. Even though testing is not about prioritizing requirements, if I know why this requirement is important, that helps me. I know in agile they say that the product owner prioritizes things, but I want to know why this is important.

This is high priority, I’m going to do it, okay, but why is this high priority and that one low priority? I want to know that.

Federico:

This also helps you to have visibility into the purpose, right? Is it connected to the first principle [purposeful testing]? Because if you try to understand the decision making process, you also understand the motivation for making those decisions, right?

Rob:

In a good company, in a healthy company, yes. But sometimes you’re in a chaotic company that’s changing, and so there’s a business purpose to it that’s not always reflected in the decision workflow. So you have to watch it.

And when you find a conflict, you’ve found the best bug of your life, right? When you find the workflow is working against the purpose, you win.

Like that’s the prize for a tester, to find that, because people are working at cross purposes. And you can find bugs like that. So yes, in a good company, the workflows derive from the purpose.

I find in a lot of companies, the workflows derive sometimes from history, from “we’ve always done it that way” and there is an inertia to continue with the process. And also politics comes into play. Like if I ask you the question, “Who decides when you close a bug?”

Now, you might think that the ultimate answer is that the tester who found the bug closes it. Okay, fine. But that means the tester who found it is the most important person on your project team, because you can’t ship unless you close the bug, and that makes closing the bug the ultimate decision for shipping. I would like to suggest that in a healthy workflow, you probably have a pair of people involved in that decision, not one person.

To me, it’s very important to understand how the decisions are going to be made. And if you know how they’re made, then you can use the decision process, if you will, as a guide. And this is real people now. Literally, if I know how requirements are prioritized, how tests are prioritized, how bugs are prioritized, I could actually have a Slack channel open while I’m testing, and contact the key decision maker about the requirements when a question comes up in the middle of testing a feature. That’s the person I want to ask whether this element had any bearing on why it’s important or not.

And if I know how the decisions are made, I can do that. If I don’t know, I’m in a black box. So I want to see and have access to these people. And again, no one is paid to do this; the tester, for me, has to learn about this stuff.

Federico:

Also, understanding who is going to make the decision and why gives you more guidance to provide the right information so that person can make the best decision they can.

Rob:

Exactly. And that’s where I get the feedback. Right? This testing is providing feedback to help make a decision to ship or not ship. Ultimately that’s what it seems to be. Again, maybe you’re just testing for fun or maybe you’re testing academically, I think I’m testing usually because we have to get something out the door, and give feedback to people saying, “Hey, is it good enough to do that?”

So it’s definitely, for me… if I’m working under turbulence, I don’t have time to find some procedure that someone wrote with 18 pages of steps to go through. I’ve got to talk to someone.

Federico:

Totally.

Rob:

The three things [decision workflows], again, it’s not just requirements: requirements, tests, and bugs. And it could be the same people; some companies have the three amigos concept. Well, if you have a three amigos team, maybe that’s the team that does all this stuff. Or maybe requirements are totally a business function, and so you have someone there who doesn’t even know what testing is who might be able to guide you.

Certainly people who make these decisions might not even be aware of how that can help you. So that’s the decision making workflows. Then if we know that, we can certainly use it. And what I want to do is I want to triage. So I’m testing under pressure, imagine you’re testing under pressure. You’re sitting at home, your kids are screaming, they gave you a build, they want to ship, they want to deploy that build at three in the afternoon, it’s 12:15, what do you do? What do you do?

So I said okay, I know the purpose of the project, yes, I know the context factors, I’ll double-check the context factors: am I in sync with the context? Okay, I’m in sync with the context, I know the requirements and how they were prioritized. [For the fourth principle, triage] Now, I’m going to think of as many relevant, in-scope test ideas as I can. I’m not going to start testing the first thing I think of. I’m going to try to identify a rich set of testing ideas.

Myself personally, I can sit there and in about four or five minutes, come up with easily 20-30 testing ideas that are all possible starting points, that are all things that I might want to test. I want to think of more types of testing ideas than I will ever be able to do.

I want to think of ideas about capabilities, about failure modes, about quality factors, about usage scenarios. I want to use creative ideas, look at cross-feature interference. I want to look at white box things, black box things. I want a whole catalog of types of ideas, so I collect a list. For this list, I very often use colored index cards. I always walk around with them; if you see me at conferences, I have thousands of these index cards with me, not hundreds, thousands of them. I put one idea per card, shuffle them out, and look at them. And then I try to take those ideas and triage them very quickly. Triage is a decision-making process where I’m going to look at every single test idea I came up with. And there are no wrong ideas. I want them in scope, related to the purpose, right? I want them to help, but there are no wrong ideas.

I go through them, and then for each idea I ask: what’s the benefit of doing it, and what’s the consequence of not doing it? The benefit of doing it is usually learning something that helps us know if we’re on purpose. The consequence could be opportunity, time, money, cost; a consequence could even be opening a can of worms, diving into something we don’t want to dive into. For each one of my ideas, I try to guesstimate the impact and consequence. And from that deck of cards, one of them will bubble up to the top. One of them will be more important than the other ones. That’s what I’m going to start with.

And I’m going to deal with that one for a while. As I learn about it, I take note. What’s my sampling rate? It depends: if I start at 12:15 and have to ship at 3:00, I’ll probably give it about 15 minutes, and then I’ll ask, what did I learn in these 15 minutes? Did I learn enough, should I go deeper? Am I using tools, am I automating? What am I doing? And I might change what I want to do, because as I test, I get new ideas. So what I ask is that while you’re testing, you look for new ideas.

If someone gives you one little feature and says, we made a change to this feature and we’re going to ship it, what do you test? Well, I start by making a list of testing ideas. Some of them are related to that feature, but some are cross-feature interference. Some are risk, some are data. I have these ideas and I prioritize them myself; it’s my business, they’re my ideas, there are no wrong ideas, but you can’t test everything. You choose the one that you feel has the best value with the least consequence, and you move on. You work on that, you look at what you learned after a period of time, and since you know you could go on forever, you have to stop: you time-box.

I like to time-box myself using session-based testing at about 90 minutes. But if you’re talking about something that has to ship in an hour, you use something shorter than that. Then look at what you learned, and decide what you do next based on what you learned.
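
As a rough illustration of the loop Rob describes (collect ideas, guesstimate benefit against consequence, work the top card, then stop and re-triage), here is a minimal Python sketch. The class names, scores, and timings are our own illustrative assumptions, not something taken from Rob’s Just-in-Time Testing materials.

```python
from dataclasses import dataclass

@dataclass
class TestIdea:
    """One index card: a single test idea, scored during triage."""
    title: str
    benefit: int       # value of what we'd learn (a guesstimate, 1-5)
    consequence: int   # cost or risk of pursuing it (a guesstimate, 1-5)
    done: bool = False

def triage(deck):
    """Ruthless triage: rank the open ideas so one bubbles to the top.

    Highest benefit first; ties broken by lowest consequence.
    """
    return sorted((i for i in deck if not i.done),
                  key=lambda i: (-i.benefit, i.consequence))

def session(deck, sample_minutes=20, total_minutes=165):
    """Time-boxed loop: work the top card, then stop, share, re-triage.

    165 minutes is the 12:15-to-3:00 window from Rob's example;
    sample_minutes would be closer to 90 for a normal session.
    """
    elapsed = 0
    while elapsed < total_minutes and triage(deck):
        idea = triage(deck)[0]
        print(f"[{elapsed:3d} min] working on: {idea.title}")
        # ...test, note what you learned, add new cards to the deck...
        idea.done = True
        elapsed += sample_minutes

# An example deck of "index cards" (contents invented for illustration)
deck = [
    TestIdea("Changed feature: happy path", benefit=5, consequence=1),
    TestIdea("Cross-feature interference with search", benefit=4, consequence=2),
    TestIdea("Old-version data loads after the DB migration", benefit=5, consequence=2),
    TestIdea("Stress the import with very large files", benefit=2, consequence=4),
]
session(deck)
```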

Federico:

Do you repeat this triage every 90 minutes or after each card?

Rob:

No. If I’m truly under pressure like I just described, where I have three hours before they want to ship, then every time I sample, I review my ideas, because as I learn, my view of what’s important changes. Some ideas I had before become more important, some become less important, and I add new ideas.

So you just think about this, like having the cards in your pocket here, right. And as you go, you get a new idea, you write on a card, put it in your pocket. Then every so many minutes, 20 minutes, a buzzer goes off, you stop, you take a look at what you learned, you look at the pockets, and maybe the best thing to do is to continue. Right? Or maybe now because you learned something different, the best thing to do is to move to something different.

And if you’re not alone on the project, you want to share this information with people. So every time you sample, you don’t keep the information private; you share it, you tell your stakeholders, “Okay, this claim, we’ve demonstrated it’s true. We think there might be some flakiness in this area.” You can share that information. If you keep it to yourself, it’s useless, right? You have to share it.

And so when I’m stopping, I make sure I share the information, make sure I review what I should be doing. Should I continue or move on?

Now, if it’s not a crisis like that, if you’re still testing under pressure but you have a few days, sampling at more like 90 minutes is what I would recommend. They call that a session, a testing session: about 90 minutes of uninterrupted testing in one go.

And don’t be afraid to change direction. This is triage, and I say ruthless triage. It’s tough, but you’ve got to do it, because you can’t test everything. This also means that I expect people to be able to do exploratory testing and to use tools to help them test. I don’t necessarily say you want to automate on the fly, but if you’ve got tools, maybe you can do data-driven test automation and scripting to help you get things done.

Certainly I’m not building a regression test suite when I’m doing this; I’m building test tools to help me. But they’re only good now. Next week they might not work anymore, but right now they’re helping me. So this is called good enough test automation.

And I’m basically dealing with reality. I’m saying I can’t test everything, but at least I’m going to test the most important things, the ones with the most value, and I’m going to share what I learn as I go and just keep at it. I’ll never finish everything, but after a couple of hours, if I’m sampling every 20 minutes, I have six test ideas that I have exercised, that I’ve tried out. I’ve learned about them, and I may have found some important bugs. If I didn’t find bugs, I might have a certain confidence that the behavior is acceptable. And you move on.

It’s incomplete, but that’s what ruthless triage is all about.
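
On the “good enough test automation” point, a throwaway data-driven check can be as small as the sketch below. The function under test and the data table are hypothetical stand-ins; the idea is a disposable script that helps right now, not a regression suite.

```python
# A throwaway, data-driven check: "good enough" automation that helps now
# and can be discarded next week. parse_amount is a hypothetical stand-in
# for whatever function or endpoint you are probing.

def parse_amount(text: str) -> float:
    """Hypothetical code under test: parse a currency string into a float."""
    return float(text.replace("$", "").replace(",", ""))

cases = [          # (input, expected) pairs; extend the table as ideas come up
    ("$1,000.50", 1000.50),
    ("0", 0.0),
    ("$0.99", 0.99),
]

for raw, expected in cases:
    got = parse_amount(raw)
    verdict = "ok" if abs(got - expected) < 1e-9 else "LOOK HERE"
    print(f"{verdict}: parse_amount({raw!r}) = {got}, expected {expected}")
```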

Federico:

A question that comes to mind: what happens when you are in a team of testers? How do you manage or distribute the tasks? I think there are two different scenarios: doing the triage together, maybe, or dividing it into different components and working independently.

Rob:

The COVID-19 example was about testing on your own.

Federico:

Okay, yeah.

Rob:

So that’s on your own. If you have a test lead or an agile team, you’d have a bucket of tests, a backlog of test ideas. And this backlog would be prioritized periodically. In Scrum, the natural thing would be to prioritize it daily. So every day, you take a look and you say which ideas are still important, right? We add new ideas as we test, and we take out ideas if they become out of scope. But once a day you sample that list. And then the whole team, when they’re doing test activities or test charters, will take the highest priority ones off that backlog. And of course we’re talking only about the ones that you can actually do.

Don’t forget that if you’re doing Scrum, if a programmer hasn’t checked in the code, it might be too early to do certain types of tests, even though you know it’s the most important one. So you have to not just prioritize based on the impact, but also based on dependencies. And the dependency sometimes has to do with the technical work other team members are doing.

I would argue… I know you have a lot of experience in non-functional testing, performance testing, and you would probably say: even if performance testing is the most important test with the best benefit, I wouldn’t do it until I can prove the functionality works, or demonstrate the functionality works. And therefore you would say that perhaps the priority should be to do a couple of sessions of functional testing before we do a session of performance testing. Right?

So that’s a technical dependency. You want to be aware of that. Don’t start doing it before it’s able to be done; that’s wasteful. I’ve seen people do this. I’ve seen people go and do performance testing on something that has functional bugs in it, and when the functional bugs are fixed, the performance is totally different.

Federico:

Yeah, maybe you changed the way you-

Rob:

Query the database or something.

Federico:

… Yes, exactly. I was going to say that example, because it happened to me once.

Rob:

Change your cache by like 10%, and the whole world is different from a performance perspective. I don’t want to say performance testing isn’t important. I’m just saying that sometimes in a team, you’ll have activities, charters that come out of your triage and bubble to the top, but there’s a dependency. So we want to be aware of that.
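
One way to picture the dependency rule Rob describes: filter the team’s backlog down to charters whose prerequisites are already met before ranking by benefit. The sketch below is illustrative, with assumed names and fields; note how the performance charter loses to a lower-benefit functional session until the functionality has been demonstrated.

```python
from dataclasses import dataclass, field

@dataclass
class Charter:
    """A test charter in the team's backlog of test ideas."""
    title: str
    benefit: int                                  # triage guesstimate
    depends_on: set = field(default_factory=set)  # prerequisite facts

def next_charter(backlog, facts):
    """Pick the highest-benefit charter whose dependencies are satisfied."""
    actionable = [c for c in backlog if c.depends_on <= facts]
    return max(actionable, key=lambda c: c.benefit) if actionable else None

# What the team currently knows to be true (illustrative)
facts = {"checkout code checked in"}

backlog = [
    Charter("Performance: checkout under load", benefit=5,
            depends_on={"checkout code checked in",
                        "checkout functionality demonstrated"}),
    Charter("Functional session: checkout flow", benefit=4,
            depends_on={"checkout code checked in"}),
]

print(next_charter(backlog, facts).title)
# Prints the functional session, despite its lower benefit, because
# the performance charter's dependency is not yet met.
```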

Federico:

Yeah, sure.

Rob:

So that’s my ruthless triage. But there’s one more.

Federico:

I remember.

Rob:

My top five list here, so there’s one more. The last principle is: always know the last best build. I learned this so long ago that it’s part of my DNA. When I’m testing under pressure, there’s often a time limit, right? They say, “Rob, we have to ship at three!” And I say, “Are you going to ship at three?” Yes, we’re going to ship at three. Okay.

Let me help you decide what to ship at three. And that’s the last best build. So as I’m testing, I know that we’re under the gun. I know that there’s a crisis, that we have to hit a date. And what I’m given to test has different strengths and weaknesses, and every build is different. What I find many testers are inclined to do, which is a bad practice in my opinion, is to take a build from the programmers a few minutes before shipment, when they say, “We’ve only changed one thing.” You try to test that thing, and then you deploy it, and the whole world collapses.

I prefer to look at it this way: while I’m testing, I’m always testing different builds, always. And what I want to know at any time, any time during the whole project, is: what’s the best build? So if I had to ship now, I would say, ship the build from last Tuesday. That’s the last best build.

The most recent build is rarely the last best build. Sometimes it is. Sometimes the most recent build is the last best build. But very often it’s a build of a couple of days ago.

And it’s interesting, people sort of say, well, wait a sec, the developers weren’t finished programming. And I say, “No, you know what, the bugs we know about are sometimes easier to deal with than the bugs we don’t know about.” Right?

If you have a long project, the last best build is really cool, because you can present to management periodically: hey, if we shipped today, here’s what would be working for the customers. And management gets really excited and says, what? We can ship early? I say, no, if you ship today, this is what you get. And you can make a dashboard that represents that, that shows people: if we ship now, here’s what we’re confident in and what we’re not confident in. And of course, if you’re using an appropriate architecture, like a microservice architecture, you can actually have confidence in some services that are very, very strong, even though there are other services you don’t know about.

And then you can tie this to the expectations or claims people make about the product. And say, okay, you made these 50 claims; 40 of them we can say are true in this build, but we can’t say that about the build after it. So people like this notion, knowing that, hey, you could actually ship early if you had to, and this is the last best build.

So instead of working with the pressure and time squeezing and squeezing and squeezing, we’re just saying hey, I have a list of all these builds for the last five days, we did 14 builds, and you rank them, this is the last best build.

So if I have to ship now, at least I always have something I can ship.
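
A minimal way to model “always know the last best build”: record, for each build, which claims about the product you have demonstrated, and rank builds on that record rather than on recency. The fields and the ranking rule below are illustrative assumptions, not part of Rob’s method.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Build:
    built_on: date
    demonstrated_claims: set   # claims shown to be true on this build
    known_blockers: int        # show-stopper bugs we know about

def last_best_build(builds):
    """Rank builds: most demonstrated claims, then fewest known
    blockers, then recency. The newest build is rarely the winner."""
    return max(builds, key=lambda b: (len(b.demonstrated_claims),
                                      -b.known_blockers,
                                      b.built_on))

builds = [
    Build(date(2020, 5, 12), {"login", "checkout", "export"}, known_blockers=0),
    Build(date(2020, 5, 14), {"login", "checkout"}, known_blockers=2),
]

best = last_best_build(builds)
print(f"If we had to ship right now: the build from {best.built_on}, "
      f"with {len(best.demonstrated_claims)} claims demonstrated.")
```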

Federico:

This is something the whole team should be paying attention to: version management, right?

Rob:

This is more of a team thing, yes. I’ve seen it done beautifully at some very famous companies. There are banks in the United Kingdom that do this all the time, and there are network management companies that I’ve seen do this. It’s just a very practical way to turn the whole software engineering process around.

Instead of killing yourself to squeeze everything into the last day, you say, okay, we’re going to ship the best [build] we can at that time. And hopefully the best one you have is relatively recent and has a reasonable set of features that meets the needs of your business.

Federico:

Yeah. Rob, I think I could continue talking with you for hours.

Rob:

Hours, hours, I love talking to you about testing stuff.

Federico:

Yeah, but I have a couple of final questions. Well, I can see behind you that you have a lot of books. So do you have any recommendations, any specific book our listeners should pay attention to?

Rob:

Yeah, I have. If you ask what one book I think people who are starting in the field would benefit the most from, I would start with Raving Fans by Ken Blanchard. Raving Fans is a management book, not a testing book, told in parable style, so it teaches by storytelling, an approach I love. And it’s about trying to do your job so that the people who benefit from your work become your biggest fans. Like if you were a superstar, they’re your fans, they love your work.

And it shows how to look at your job as a service, and how to deliver that service in a way that the people who are your stakeholders really benefit from it. In testing, this works beautifully in many different contexts. The style of testing is called service-oriented testing. It works in any lifecycle; it doesn’t care about lifecycles. And it helps people focus on what matters and help their customers out, perhaps exceeding expectations a bit, and building consistency in that.

So Raving Fans by Ken Blanchard is the book I’d suggest. Of course I’ve got many, many technical books on testing, and that probably depends on what space you’re in. Learn as much about test design as you can. As for technical books on test design, if you’ve never read one, read Lee Copeland’s A Practitioner’s Guide to Software Test Design. That’s my starting point for people in test design. Again, that’s a book that’s technology independent and lifecycle independent, but it teaches about 14 different test design techniques, which I urge everybody in our field to master. I don’t mean just know them, I mean master them.

When you’re testing under pressure, you don’t have time to perfect the skill, right? You got to be able to do it.

Federico:

The last question: I truly believe you can change a lot of your results by changing small things that you do many times, like the habits we have. So, do you have any habits you recommend people adopt, or avoid?

Rob:

Well, there’s a lot of them, and I think thematically today I was talking a lot about proactive learning of things, and there’s certain habits in learning that I’ve picked up over the years.

One is that I really want you to take pride in your ignorance, to honor ignorance. It’s important to have an open mind and not prejudice things with biases. So I would say the habit is to not be afraid to walk into something ignorant about it, to learn and apply knowledge without biasing things. The habit I have, for example, is that when I’m learning from someone, I will basically never use a name for an object unless they use that name. I’m not going to force my terminology on someone else; I’m going to build on their terminology.

And in terms of testing, when I’m describing what I’m going to do, I try as much as possible to avoid using the word test as a verb. This is a habit that’s hard to get into for testers, because a lot of people say, well, I’m going to test this feature, and did you test that feature? You have this word test everywhere. And the word test means nothing to anybody except you.

Instead of talking about testing, talk about what you’ll learn. I want to study, I want to learn, I want to explore risks and express them in a way that isn’t just saying I’m going to test this or test that.

Now, maybe among testers, between your peers, you can use whatever terminology you want, but when you’re going outside of the sphere of people who have studied the craft of testing, I think it’s very important to avoid using the word test as a verb.

Federico:

That could be very challenging for me.

Rob:

It’s challenging for everybody. I have to do it deliberately, but I can, and it’s a habit. What is interesting is now you’re going to get rid of this abstraction layer and really talk about what matters to the business or what matters to the technology.

And believe me, if you say I’m going to test this feature, and I say I’m going to test the feature, that means two totally different things. So instead let’s talk about what we want to learn about. I want to learn if this function behaves this way, so that this customer can use it for this purpose. Okay. That’s very specific. I can do that.

And when I give you information, you know what I’m giving information on, it’s not some abstract list of passed/failed check marks in a dashboard, right? It’s knowledge that’s expressed in a way that is actionable.

Federico:

It’s about changing our assumptions about what we understand.

Rob:

I get passionate about stuff like this. But that’s hard to do. So you’re asking me about habits, it is hard, it’s hard to do. But it’s making a huge difference.

Federico:

Totally. Very interesting. Thank you so much, Rob. One last thing: do you have anything you want our listeners to do, access your site or…?

Rob:

Well, I don’t really have, how do I put it, resources that I make publicly available, but I share with everybody all the things that I do, as you know. I have an article on the subject of today’s talk that I’m happy to share a PDF of. I have resources related to this field of just-in-time testing in a Dropbox folder that I can share. But I prefer people to send me an email at [email protected]. Ask for the link, and I’ll send it to you.

Federico:

Okay, excellent, excellent.

Rob:

I mean I’m on Twitter, I’m on LinkedIn, I don’t know how to use social media. I rely on people like you, Federico, to help me.

And if anybody listening has any questions, they should feel very welcome to just zap me an email, and I’ll be happy to give them more insights into these different principles and ideas.

Federico:

I really enjoyed this conversation, and as I say I think I could continue for hours. So we will-

Rob:

Yeah, and let’s do it again.

Federico:

… Yeah, for sure.

Rob:

It’s really fun.

Federico:

Thank you so much Rob-

Rob:

It’s been an honor.

Federico:

Stay safe.

Rob:

Okay, you too, stay safe.


We hope you enjoyed this podcast episode! Stay tuned for more.

Check out part one of the conversation here.


Recommended for You

How Can You Optimize the Cost of Software Testing?
Video: Managing Your Fully Remote Team in Times of Crisis
