
Quality Sense Podcast: Eric Proegler – Testing Mission-Critical Software

Figuring out if certain applications, like aircraft software, are ready to ship with confidence

In this episode of Quality Sense, Federico Toledo invites Eric Proegler for a fun conversation on some of the bigger questions in software testing.

Eric Proegler has over 20 years of experience in software, mostly split between performance testing and managing testing. He's currently a staff test engineer at Credit Karma in San Francisco. He's also an organizer of WOPR, the Workshop on Performance and Reliability, and the President of the Association for Software Testing.

Highlights

  • How systems in certain industries are tested, like contact-tracing apps for COVID-19 or aircraft software (think the Boeing 737 MAX disaster)
  • Software quality certifications: When are they needed and are they really useful?
  • Test documentation, context-driven testing, and test automation vs. manual testing
  • How Eric discovered he’d rather be a software tester than a developer
  • A super important skill for testers: learning how to disagree politely

Listen Here:

Episode Transcript

(Lightly edited for clarity.)

Federico:

Hey! Welcome, Eric… If anyone needs recommendations about good places to visit on a night out in San Francisco, I think here we have the man.

Eric:

It’s really nice to get the chance to talk to you under any circumstances, Federico.

Federico:

So to start, can you tell me how you ended up in software testing? I know that you have a story behind that.

Eric:

In 1998, I was working for the Indiana State Department of Health on a migration of an immunization tracking system. 

My first real programming job was working on this immunization system, basically the kind where your kid gets shots and then you have to make sure those are registered so that you can get into school, that kind of thing.

And so I was doing the programming and it was okay. I mean, it's a lot of sitting there and staring at screens and trying to get into the zone. And then I got to this part where I was trying to test this function where you would upload your results, all the new shots you'd given that day, to a central store so that the state could have an index.

So it turned into this whole thing about assessing how these shots that were given got uploaded to a central data store. At the same time, the program we had was supposed to mark the kids as "ready to go" or not, based on certain kinds of logic. Had they had all the right shots? And some of these shots are combinations: you can get three of them at once if you're three months old, or you might've only gotten two of them when you were two months old.

So it was super complicated to figure out how all this came together. And I spent like maybe a week trying to sketch out all the different combinations for how this went.
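
(Editor's note: as a rough illustration of the combinatorial problem Eric describes, here is a minimal sketch, not from the episode, that enumerates hypothetical age and dose combinations as test cases for a made-up eligibility rule. The vaccine names, ages, and rule are assumptions for the example, not the real system.)

```python
# Minimal, hypothetical sketch: enumerate age/dose combinations to see how
# quickly the test space for an immunization eligibility rule grows.
from itertools import product

ages_in_months = [2, 3, 4, 6, 12]      # assumed checkpoint ages
doses_per_vaccine = {                   # assumed vaccines and possible dose counts
    "DTaP": range(0, 4),
    "Polio": range(0, 3),
    "HepB": range(0, 4),
}

def ready_for_school(age, doses):
    """Toy stand-in for the real 'ready to go' logic."""
    return age >= 12 and doses["DTaP"] >= 3 and doses["Polio"] >= 2 and doses["HepB"] >= 3

# One candidate test case per combination of age and dose counts.
cases = [
    (age, dict(zip(doses_per_vaccine, combo)))
    for age in ages_in_months
    for combo in product(*doses_per_vaccine.values())
]

print(f"{len(cases)} combinations to consider")
for age, doses in cases[:3]:
    print(age, doses, "->", "ready" if ready_for_school(age, doses) else "not ready")
```

Even this toy version produces 240 combinations, which hints at why sketching out the real rules can take days.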

It was the most fun I’d had programming. And that’s when I realized that I really enjoyed testing more than I enjoyed working in the code mine.

I was probably destined to be a tester since that’s what I enjoyed the most, but that’s when professionally I left coding full time and became a tester.

Federico:

Anyone who thinks that testing is something easy is probably doing it in a very wrong way. Right?

Eric:

Very shallow, confirmatory testing, like I did exactly what I was supposed to do and the system did not explode in my hands. Ship it. 

I still am fighting people like that. I still find people who are years into a career in software who still see testing that way.

I think there’s this essay I read called “Everything is Broken” and it was written by a security researcher.

But the thing that stuck with me from that essay was that almost all software "ships the moment it doesn't obviously break," or something along those lines, because by the time software ships, everyone's tired of it.

They want to be working on something else. And there’s a lot of pressure to make money with it because it’s not making any money while you’re testing it. And those business drivers make sense, but…

Federico:

This is a great introduction to the topic we wanted to address, which is related to an article you spoke to a journalist for not long ago. It was about the obligations of software professionals in making and testing critical systems.

And the article starts with a very interesting question, which is who is making sure airplanes are safe?

This is kind of scary just to think about, but can you summarize it for us?

Eric:

Oh, sure. Yeah, sure. Yeah. 

So I like airline examples and a lot of people who work in testing like them, because they’re a way to laugh at ourselves when we claim we’re doing engineering. 

I had actually just done a talk at TestBash in San Francisco, and I got this connection on LinkedIn. I'm like, "Oh, it's somebody who just saw my talk at the conference."

And I look at it and it’s this journalist person. And it turns out she was writing an article about what the certification process for airplanes is like, because of the Boeing 737 MAX issue that had come up in… I guess it would be 2018 first and then it became serious sometime early in 2019. And they shut down the line for a while and grounded a bunch of planes.

And it was a pretty big deal, but essentially there was a software bug that would cause the airplane to behave in an unsafe way. 

What struck me was not so much the details of how it happened, but the fact that when I was reading about what the software was supposed to do, it was at the point where the software was necessary for the plane to fly properly. There was something about the way the control surfaces of the plane worked that the software was needed to correct for.

Like they were getting farther and farther away from the idea that the pilot could flip a switch to manual and just fly the plane. I think one of the reasons airplanes are safe is that there is a trained person who doesn't need the software, who can operate the plane if necessary.

So I talked to her about this and what she actually really wanted to ask me about was the process of doing some sort of software certification. How do we make sure that this is working? 

So it turns out that, at least in the American airline industry, it's a bit like other regulated industries I've worked in, like pharma, where the regulator, meaning the government agency, just has this list of requirements for documentation and things you have to sign off on to say things work, and that's supposed to make quality happen somehow.

Like if you hand over a big, giant stack of paper that says we did all of these things, then it must work, as opposed to doing any sort of deeper examination of the software, looking for ways it could fail.

And one of the things that came out of the conversation I thought was pretty interesting was that it was not really in Boeing’s interest to find a problem other than to avoid future liability and not kill people. 

They were trying to get something to market, and the time they were spending testing the software was time they weren't selling. And that was very much like what my day job felt like at that time, where there was a lot of pressure to get things onto the market. We were talking about the idea of some sort of independent verifier of software. Like in the US we have this thing called UL, Underwriters Laboratories, which certifies that all of these electrical appliances have been properly examined to be safe and unlikely to burn down your house, things like that. Could such a thing exist for software?

Federico:

But today there is nothing like this running, right?

Eric:

I think that there are people who will buy something like, “Hey, I brought in this consulting firm and they tested our software and they said it was all good. So look, I did what I was supposed to do.” But are they managing liability or are they trying to do everything they can to make sure the software works correctly? 

It got to the point, when I was working in one regulated context, where I would tell my team that we were delivering the part of the product that was necessary to sell in that industry, which was a big stack of paper. Without this big stack of paper, the customers weren't allowed to buy the software. So we had to do that as part of the product delivery.

So we avoided a lot of angst about whether we were testing properly, because we did that first; after we got some confidence, then we would generate this paper.

But the process of generating the paper did not produce much confidence because it was this matter of very prescriptive test steps. I did X, then Y, then Z, and saw the expected result. That was what was in the paperwork. 

But I haven’t found that kind of testing to be as helpful for me in building confidence that software’s working correctly. 

I kind of need to challenge software and maybe take an adversarial approach to see if I can trick it.


Federico:

Do you think there is any type of coverage that can be useful as a metric to measure the quality of our testing?

Eric:

Well, I don’t think one measurement is sufficient. Like I do think about coverage when I think about how I would assess what kind of testing we’ve done. 

Have we covered as many of the common use cases as we can? Have we covered as many of the functions of the system as we can?

That's definitely a way to think about whether I've done enough testing. But I think any one approach, like "I have this percentage covered by unit tests" or "this percentage of lines of code has been touched by my automation," is just a measurement in one place.

When I think about designing a test strategy, security has this great idea of layers: I have this kind of testing, and this kind of testing, and this kind of testing. And if I design them really well, it's not a straight vertical stack of doing the same thing over and over again and proving that, yep, the happy path still works. It means I'm looking for different kinds of risks with different kinds of tests.

But getting things right on stuff like unit tests or regression frameworks frees people up to find really interesting stuff. So it's kind of frustrating to hear "manual or automated." That's a stupid discussion. You need both. What I'm actually more interested in is how clever you are at getting better coverage, spreading those resources out in a way that reduces your exposure.

Federico:

Another thing that's related to "what's the quality that we need?"… Because quality is the sum of different factors, right? Accessibility, functionality, performance, security, different things. Depending on the context, one is more important than another.

So how can we combine this idea of regulations with the context-driven approach? Is there a way?

Eric:

Well, the true context-driven roots are what I really align with. When I first heard them, they said out loud, better than I ever had, everything I'd ever thought about testing.

They're basically saying to adapt the testing that you are doing to the circumstances that you find yourself in.

So I believe a true picture of a tester testing an airplane would show a very methodical, very thorough approach to how they test. The point of context-driven testing is that you find practices that suit the context you're in, and you do those, as opposed to believing that there's one approach to testing that works for many different contexts.

Like the stuff I do now, where there's A/B testing with users, and because we have millions of them, we'll show 1% of them something and see if it works or not, that's a very different context than an airplane. There's no 1% experiment on an airplane; it just has to be ready.

I think that context-driven testing has gotten this rap as being like anti-automation, which I don’t think is necessarily true, but a lot of the rhetoric that’s come out around it does seem to be pushing back on claims made about automation. 

Trying to find the right place in the middle of, "Yeah, tool-assisted testing is awesome, and test automation does make it so I can spend less time doing shallow testing and come up with more interesting testing."

There's this project I looked at lately for contact tracing. It was one of these ideas where they want to make a framework that keeps track of everyone you come within three feet of, using a mobile phone, so it can be used for contact tracing if somebody tests positive for the virus, because we all want to get out of our houses and be able to have beers in person, right?

Federico:

Yeah.

Eric:

But it’s the Wild West. There is no actual standard for that. 

Somebody came onto Twitter and said, "Hey, we're making this thing. We would like some help testing it." And in the corner of the testing world I align with, people just jumped on to gripe with them about how they weren't thinking about privacy.

And they were legitimate criticisms. I think the project was pretty naive about its privacy concerns. But I don't think it helped that project any to just be told, "Yeah, you suck."

Well… what exactly should we do?

Federico:

And who should pay attention to that?

Eric:

Well, I mean, that’s a good question, right? If we’re talking about something like an airplane, there’s a standard you have to meet where the FAA signs off on your airplane and you can’t sell any until you have it. But once you have it, you can sell airplanes basically anywhere in the world because so many other countries accept the FAA’s approval as sufficient or have minimal requirements in addition to it. 

So if I think about it that way, if the Elon Musk of airplanes shows up, I’m going to have a pretty hard time getting through all of that. And that’s just building the capacity to generate the giant stack of paper. 

Right? What would I tell them about what they need to do? These are all the things you need to test to be able to ship your product.

It would take hundreds of people a long time, probably a couple of years, just to define that program. Right?

Does that mean enough testing happened? I mean, enough paper generation happened. 

What I would want to know is what do we tell people who are consumers of software about how to figure out whether their software’s any good or not? This is a huge problem. 

If you think about a government project that's going to touch the lives of, in the US, 335 million people, what shopping tools can I give somebody who doesn't work in software every day to find out whether software's any good or not?

And yeah, I mean, yeah, hire an expert. I'll give you my hourly rate if you want it. But how would you go about setting policy about these things?

Policy around standards for software quality for critical systems, like aircraft

How would you go about making an informed decision about buying something?

The first time I really engaged with this was when I went to CAST in 2014. I saw two talks that just blew my mind. The first one was James Christie from Scotland, who came and started talking about the concept of using certification and regulations around what kind of software testing you did as a way to limit competition, as a way to have an advantage.

The term "rent seeking" got used, which is pretty accusatory; set that aside for a moment. But regulatory capture was the part I thought was interesting. Now, here in America, we have the best government money can buy. So when there's a big project, it's the usual suspects who get to carve up the big allocation of money for it.

That leads me to the second talk I saw at CAST 2014 in New York, which was Ben Simo's.

He was talking about how he had been attempting to use the newly launched healthcare.gov website to try to get insurance for his children and grandchildren, and about the process of testing it. It's been a few years now, but it's one of those big face-plants in software quality history, where a system came out that was going to change many people's lives and how they bought healthcare.

And I mean, it was crashing all over the place. 

It was a big mess. 

It was a federal contractor, one of the usual ones, that had come onto this job. There were layers and layers and layers of subcontractors. And there were people running that project who didn't know software and were going to make the date no matter what. The results were actually fairly predictable. You know how all of that turned out.

We already have software having such a huge impact on our lives, whether it's airplanes, or, I think a lot about this, the algorithms that trade stocks and how they've affected the market and hurt people's retirements and things like that.

How there are now all these circuit breakers that turn off the market, because people are scared about an algorithm that goes rogue and starts buying stock and selling it automatically before anyone knows what’s up. 

And we don’t even have the robots yet! So what are we going to do about this? 

How are we going to engage with the problem with software quality in a meaningful way to set public policy and to help people make decisions?

Federico:

Yeah. That reminds me… When you're using an app or webpage for some service or whatever, you put in a username and password, you forget your password, you ask to recover it, and they send you your password in plain text to your email. Right?

So this is a very silly example, but I've seen it many times. And I understand there is a risk associated with that because I am a computer engineer, but I know that most people wouldn't see any problem with it. Okay, I have my password, I can access it again.

Eric:

Well, they wouldn’t notice. I mean, it takes a computer professional to even realize that as we [inaudible 00:19:14] 

Everyone else, it’s like, “Oh, that’s cool. I can copy and paste it now. Thank you.”

Federico:

Exactly. And like this, there are so many problems related to software that we are facing. In some cases the users realize the problems or the risks, and in some cases we live with them and we don't even know there is a problem.

We need more education for users, for people, to understand the risks associated with software, in order to be better consumers and to make better decisions that will, in the future, force companies to develop with better quality. Right?

Eric:

I mean, we're hoping. I remember a time when a breach of user data was considered a potentially company-ending event. And that was not that long ago.

And now there've been breaches from almost every major online retailer, banks, places like that. I don't even want to go look at where my accounts might be compromised, but I'm sure my information has been spilled multiple times. No weird charges have shown up on my credit report; that's the best I can do, because otherwise I would be chasing after one breach after another constantly.

People are just getting comfortable with living in a world where everything’s pretty much broken, at least a little.


Federico:

You mean that the user accepts "you need to review your machine," or gets used to that. Right?

Eric:

Well, so there are the people who learn how to operate these systems; you have digital natives, young people who grew up this way. They learn, "Oh, I remove this. Sometimes it works. Sometimes it doesn't."

And that’s just like how things work. They’ve come to terms with that. 

There are people who are less technically savvy, who are more frustrated, and they're stuck. Like every 18 months, I need to re-image my mom's computer because she's installed so much adware and so many games that it stops working. And I tell her where it comes from, and it still happens.

The thing is that I'm privileged because I work in tech, and basically everyone I know, I know from work or from professional associations and things like that. So just about everybody I know is technically savvy. That's not how most of the world is!

Most of the world is not able to debug a web browser that isn't working; restarting the web browser and clearing the cookies would be something they wouldn't know how to do.

Someone would have to walk them through it. But I still, in my professional life, have seen lots of bugs just sit in the backlog because they could be solved by flushing your cookies… so we're not going to fix them.

Federico:

And there is another thing related to the startup culture, which is that you shouldn’t be proud of your product, because if you are proud of your product, you released it too late.

Eric:

You spent too much time on it. And you fell in love with your own code or the smell of your own farts or whatever. 

You spent all this time trying to make something perfect when it should have been out making you money two months ago. What’s the problem? 

So that approach to delivering technology bleeds over, though. Like the example of Uber, which just went into business in cities without talking to the local government there and was like, "Regulators, if you dare, whatever. We're doing this now." And then hiding things from governments to get things done.

I mean, not everything in life is something waiting for a 25-year-old to optimize with software. But the majority of the financial power in software is pushing toward that. Do the minimum possible, get this out. It's okay if it's a little bit crappy.

Well, I went downstairs and bought expensive beer to have with my friend Federico. 

I didn’t buy Budweiser. 

Not everything should be Budweiser. Sometimes I want to buy a good beer. Is there software made for people like me who can feel that way where I’m willing to pay for a little bit of quality? Doesn’t feel like it.

Federico:

Yeah. So, Eric, I think we could continue talking about this for hours. I think I will move on to the final questions I have for you. 

What habits do you have that you can suggest people adopt, or maybe avoid?

Eric:

Yeah. So I think that for testers, I would say that you should learn how to disagree politely, and then also learn when you’re going to have to let it go. 

Because there are going to be times where you’ll describe a risk very perfectly and explain what the potential downside is, and people will still choose to take that risk. 

And earlier in my career, I was very emotionally involved in that. I would get super pissed off when people would not take my bugs seriously, and it made my life difficult. 

I’m less stressed now that I’ve learned, “Hey, I reported it. My test report is awesome. I did a great job! But I won’t be invested in what happens to that bug.”

Federico:

I really like that you mentioned that, because I was talking a couple of days ago with someone on our team and he was asking, "Should I try to be politically correct all the time? Should I be formal in the way I communicate?"

Eric:

No. Somewhat, if it helps.

Federico:

Yeah. But it’s sort of related to how you make the other person feel. It’s like having empathy or respect for the other person. It’s like paying attention to the way you communicate.

Eric:

This is a professionalism thing. I think if you’re a software tester, there are a couple of things that you’re charged with. 

One of them is to be a really excellent communicator about what you know, what you fear, what you've done to address what you fear, and what you would do if you had more time to test.

Being able to explain all those things well is part of being a professional. Doing it in a way that's not insulting or upsetting to the people you work with is kind of part of being a good human, and it's kind of part of being a professional, because once you piss people off, they don't listen to you.

However, the reason you’re there is somewhat adversarial. You are supposed to think of the things that the rest of the team has not thought of yet. You’re going to have to share ideas that the rest of the team might not like. 

So your presentation is super important. You said politically correct and I was dismissive of that. I think I was wrong, because the idea with being politically correct is just to treat people with respect and to have good manners.

And I think that you should be doing that, but you shouldn’t be afraid to say, “I don’t agree with the thing that you said. I think the assertion that we’re ready is incorrect and here’s why.” Just make sure you always have the “and here’s why.”

Federico:

Yeah, it doesn’t mean don’t say what you really believe, but it’s the way you say it.

Eric:

One of the things I do is work with a nonprofit organization, the Association for Software Testing, and our constituents are software testers. And they are very opinionated, but to be honest, the only time I've ever really been disappointed is when somebody tells me something like, "This whole experience is terrible. The software is awful."

And it's because what you've just told me is not actionable. It's not being a very good tester to just say it all sucks.

I can’t turn in a test report that says this all sucks. I have to be able to say a little bit more than that. 

So when I see software testers who have one bad experience and then tell everybody how crappy this thing is… Well, we live in bugs all day. We swim in them. We can't overreact to them and decide it's all… you know.

Federico:

Do you have any books to recommend?

Eric:

Well, the book that made me feel like testing wasn’t completely full of itself was “Lessons Learned in Software Testing.” 

Everything I'd read about testing up to that point was all about formalized quality and said things about quality that I just knew were bullshit.

Like this idea that I would sit down and I would write 172 test cases and that would be exactly all I needed to do. I just knew that that was wrong. 

I knew that it wasn't supportable when there's maintenance happening to code and things like that. And then I read that book, and that's probably why I still call myself context-driven, because that kind of thinking about testing really opened up my mind to what I was supposed to be accomplishing: to reveal the characteristics of the thing that I'm testing, not to own the quality, not to be the gatekeeper, and not to ruin everybody's day.

Federico:

Excellent. Is there anything you’d like to invite the listeners to do?

Eric:

The majority of the professional work I do outside of my day job these days is for the Association for Software Testing. There are not many nonprofits that work in that area.

We’re run by testers for testers, but we’re this little tiny nonprofit, just trying to figure out how we can help people be better testers and provide some education, like the black box software testing courses, to help people connect with other testers and become better testers. So that might be something to look at.

And I mentioned Test Automation University earlier. I have not seen a better resource for finding your footing. I do know that basically every tester I’ve ever talked to feels they need to know more about automation. Whether that’s true or not, that is where I would send them to learn more about how to do it and how to think about it.

Federico:

Excellent. Thank you so much, Eric. I really enjoyed talking with you.

Eric:

It’s great to talk with you, Federico.

Federico:

See you soon.

Eric:

Bye. Have a good day.


