
Quality Sense Podcast: Vaishali Padghan – Test Automation

Welcome to the second-to-last episode of this season of the Quality Sense podcast! We’re getting closer to the end of this cycle with an interview with Vaishali Padghan, who is responsible for different quality initiatives aimed at delivering quality products faster at J.P. Morgan, where she has worked for almost a decade. She empowers teams to continuously improve by leading automated testing strategies and quality transformation.

If you are working in test automation or defining a test strategy for your team, I think this conversation will be very useful for you, as it brings up real, high-risk scenarios where test automation is key to the success of J.P. Morgan’s software development teams.

Get comfortable and let’s get into the episode.

Episode Highlights

  • How Vaishali got into software testing.
  • Her experience working with agile teams.
  • Her advice on adding test automation to your strategy.
  • Vaishali’s take on building accountability in test automation.

Relevant Links:

Follow Vaishali on LinkedIn
Vaishali’s Book Recommendation

Listen Here

Episode Transcript

Federico:   

Hello, Vaishali, I’m so glad to have you here on the show. Welcome. How are you doing today?

Vaishali:

I’m doing good. Thank you. Thank you for giving me this opportunity. I’m really excited.

Federico:   

Amazing. So the first question I have for you is, I want to learn more about your story. I would like to know even how you ended up working in software testing.

Vaishali: 

Yeah, I’ve been in the software industry for almost 20 years. I started working as a software developer right after college, and over my career I’ve worked in different roles and various settings: small firms, large firms. I worked at a technology company, which was very good for career growth, because you get to learn a lot being technology-focused. I joined a startup, a travel portal, for a short while, because I love to travel, but I soon realized that employees, especially software developers, don’t get to travel and fix code on a beach. Eventually I joined JPMorgan, around nine years back. I love the complexity involved in finance, and there are always new things, new roles, and new lines of business to support. In my current role, I’m leading quality and driving quality initiatives; my job is to enable teams to deliver quality-focused products faster. So that summarizes my job here.

Federico: 

Well, testing professionally in a company dedicated to the finance sector, I guess there are so many challenges there. So today we wanted to talk specifically about test automation, which I think is part of how we deliver products with good quality, and faster, as you just mentioned. To start on the topic: is it possible to work in an Agile team successfully without considering test automation as part of your test strategy? What’s your experience?

Vaishali: 

It is possible if it’s a one-person team, though I don’t think that counts as a team anyway. If there is more than one person, there is always a need to collaborate and communicate. If you have end users, expectations are set: you’re trying to build something as part of a team, and Agile helps you deliver what was promised. Test automation then ensures that it works every time, that it is delivered faster and works reliably. So they complement each other. It’s not that one is better than the other; they actually support each other’s success. Test automation goes hand in hand with Agile. So if you have a team of more than two people, you definitely need both.

Federico:

Do you remember when you started working on test automation with your team? Was it from the beginning, or something you adopted after experiencing only manual testing?

Vaishali: 

So when I moved on from being a software developer, I was more curious: I wanted to understand what the end users are trying to do, how to support them better, how to deliver better products. That itself shifted me toward the quality and automation side, where you’re close to development, but at the same time you’re getting the bigger picture of the outcome you’re looking for, not just limiting yourself to what is delivered or what’s going into this particular release. So I had experience working in automation. But when I joined the firm, it was mostly a QA organization, with dedicated QA teams and manual testers driving testing late in the cycle. Luckily for me, the organization did value and did see the change coming: if you want to move faster, you cannot have a separate team testing at the end of the release, so they wanted to move testing left. That’s where the shift left happened, and where I trained quality engineers who were manual testers how to write, understand, and read code, and also trained developers how to test, because they come from a totally different mindset, and having that balance within a team definitely helps. So moving from manual testing to automation is a shift, a mindset shift and a culture shift, but it’s always about trying to get better, faster, and more stable applications.

Federico:

Well, you mentioned many things that I really liked, because you’re talking about combining different skills and having a balance: teaching developers the skill they’re missing, which is very useful when they’re developing, a kind of testing mindset; and also training testers in development skills so they can do a better job automating, right? That’s amazing. Now, I think many companies fail in their efforts to implement test automation. In your experience, what are the most common reasons that make automation fail?

Vaishali: 

So the main challenges you face when you’re trying to automate are around expertise and execution. Either you don’t have the right tool for your automation, or you don’t have the bandwidth, or you don’t have the priority to do it. And a lot of times teams assume it’s like a magical tool you can bring in that will do everything for you on day one. That’s where expertise comes into the picture. It’s not about the fastest tool, or the one giving you the maximum number of tests written; it’s about the type of tests you’re running and what you’re testing. So it’s not just the tool, it’s what you’re asking the tool to test, and that’s actually very important. That only comes with experience; you have to have the right mindset, and that’s where I really admire people who try to break the system and won’t stop until they break it. You definitely need a testing mindset for that. Then there’s the execution side: teams who do have a setup, who have probably had automated test frameworks running for years. Eventually, at some point, it becomes more complex and more unstable, because you now have thousands and thousands of lines of code running for every release, and it will break at some point. And we don’t realize it: when we have some flaky tests, we say, “Oh, I know why this is failing. Just ignore that one, it always fails. Let’s move on to the next one.” By saying that, you’re actually asking people not to have confidence in your tests. So if a test is breaking, if it is failing, you either fix it or you delete it. That is the execution side: you need both expertise and execution to ensure you’re building the right thing, and when you’re executing it, you keep maintaining it the right way.
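The “fix it or delete it” rule can be made mechanical. Here is a minimal sketch of a flaky-test triage helper; all names (`TestRecord`, `triage`, the 20% threshold) are illustrative assumptions, not part of any real framework:

```python
from dataclasses import dataclass, field

@dataclass
class TestRecord:
    """Recent run history for one automated test (True = pass, False = fail)."""
    name: str
    results: list = field(default_factory=list)

def triage(record: TestRecord, flaky_threshold: float = 0.2) -> str:
    """Return 'keep' for trustworthy tests, 'fix-or-delete' for ones that erode confidence."""
    runs = record.results
    if not runs:
        return "keep"
    failure_rate = runs.count(False) / len(runs)
    # Intermittent failures above the threshold mean the test is flaky:
    # either make it deterministic or remove it from the suite.
    if 0 < failure_rate < 1 and failure_rate >= flaky_threshold:
        return "fix-or-delete"
    if failure_rate == 1:
        return "fix-or-delete"  # always failing: same rule applies
    return "keep"
```

Run over a suite’s CI history, a helper like this turns the “just ignore that one” habit into an explicit, reviewable decision.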

Federico:

Yeah, it’s better to have fewer tests that are useful than tests that are lying to you, giving you confusing results, or results you have to explain away: “it says something is wrong, but actually we shouldn’t be paying attention to it.” So yeah, absolutely.

Vaishali:

Absolutely. And the 80/20 rule definitely works on execution: roughly 20% of your tests should cover 80% of the functionality, all the complex scenarios. You don’t have to have a thousand tests covering every possible scenario. If you look at what’s complex, what’s important, what’s higher risk, that is the flow you need to validate every time, and then when tests fail for a certain reason, you know whether it’s important or not.
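One way to picture the 80/20 idea is to rank scenarios by risk and keep only the small core that covers most of it. This is a hypothetical sketch; the scenario names, weights, and the `core_suite` helper are all made up for illustration:

```python
def core_suite(scenarios: dict, coverage_target: float = 0.8) -> list:
    """Pick the highest-risk scenarios until ~coverage_target of total risk is covered."""
    ranked = sorted(scenarios.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(scenarios.values())
    picked, covered = [], 0.0
    for name, risk in ranked:
        if covered / total >= coverage_target:
            break  # the remaining scenarios are the long tail
        picked.append(name)
        covered += risk
    return picked

# Hypothetical risk weights for an application's flows:
flows = {"checkout": 50, "login": 30, "profile-edit": 10, "theming": 5, "help-pages": 5}
```

With these weights, two of the five flows already cover 80% of the risk, which is exactly the “validate the critical flow every time” point above.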

Federico:

Okay, I understand that you have experience starting with test automation in your team. So for those people listening to the podcast now, what advice could you give them, or what should they take into account in their test strategy when starting with test automation?

Vaishali: 

So number one is: start small, but smart. You don’t have to build the entire, 100% automated test suite on day one; you don’t have to target building the whole thing. You have to build it right, so that you can actually scale it. At the same time, you have to be smart about how you build it, so you can easily pivot: you run a small test, it works, you build on top of it. But then if you realize it’s not going to scale, it’s not the right test, it’s not going to be easy to maintain, you should be able to improve it and make it better. You should be tool-agnostic and independent of person-specific tests; it should be a process that’s in place. And it should be simple: it shouldn’t take people a long time to understand what you’re trying to test. So: simple, straightforward, smart, and small.

Federico:

Very clear, very easy to follow. Well, very easy to say, at least, but maybe it requires some experience to follow all those ideas, right?

Vaishali: 

Yeah, absolutely. You won’t get it right on day one, and that’s why you need to start small and then try to see how you can improve every time. Every iteration, you should see how you can make it better, and eventually you’ll get to a place where it’s more stable and you can be confident about what to test.

Federico: 

It’s a learning process, right?

Vaishali:

Absolutely. Yeah. Cool.

Federico: 

I remember when we met, about a month ago, at the Starry’s conference. Even though I didn’t have the chance to attend your talk, I know you were presenting different models related to the testing you’re doing in your team now, and there was one in particular that caught my attention: the test maturity model. I don’t know if you can explain a little bit about that model to us.

Vaishali:

Absolutely. It’s pretty much a one-pager that summarizes what teams should be doing and what the definition of good is. The way it started: I was working with a lot of teams, and each team was different; they were at different stages of their testing journey as well. A lot of the time, if I talked to them and asked, “Do you have automated testing?” they would say, “Yes, our unit test coverage is 100%, we don’t have to worry about anything.” So I had to put this model in place for them to understand what it really means when you say you have automated tests, and those who were only doing some end-to-end tests and some integration tests needed to move toward continuous testing. The test maturity model is like a tree: it has three columns, and it’s color-coded; dark blue is a must-have, and light blue is a good-to-have. So basically, when you look at it: if you’re doing unit testing, you should have at least 50 to 70% coverage. If you’re doing integration tests, you should have at least one or two tests covering your upstream or downstream systems. But if you want to be in the automated testing band, you should have at least 70% unit test coverage, basic integration tests for 80% of the APIs you’re writing, and an end-to-end test. So it summarizes the should-haves, which place you in a category that tells you where you stand in the entire maturity model, and continuous testing is where you need to be. It’s a version we keep iterating on.
In the first round, we just said, “Okay, do some amount of performance testing,” because it’s too overwhelming for teams to look at this big chart and think, “Oh, we have to do all of these different types of tests.” We said: add your performance testing and you can move to the continuous testing band. Then eventually, as teams move from automated to continuous testing, we can add more details: now you do soak tests, stress tests, maybe spike tests, some chaos engineering experiments. The idea was for this model to show: where are the risks? How can you reduce them? How can you improve the quality? And how can you control the costs? That’s the main objective. I also have a list of different types of tests, and a lot of people ask, “I’m doing component tests; do integration tests count as end-to-end tests? Can they be acceptance tests?” The bottom line is: don’t get hung up on the terminology. It’s your application, your end users; you know where the risk is, and you have to focus on that level of testing. Integration tests can be end-to-end tests can be acceptance tests, as long as you know what you’re trying to test there. So it basically shows you, like a chart, how to move from basic testing to continuous testing.
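The bands and thresholds mentioned above could be encoded as data so a team can check where it stands. This is only a rough sketch: the band names and the 50%/70%/80% numbers follow the conversation, while the metric keys and the `maturity_band` function are assumptions:

```python
# Must-have criteria per band, ordered from basic to continuous.
BANDS = [
    ("basic",      {"unit_coverage": 0.50}),
    ("automated",  {"unit_coverage": 0.70, "api_integration_coverage": 0.80,
                    "e2e_tests": 1}),
    ("continuous", {"unit_coverage": 0.70, "api_integration_coverage": 0.80,
                    "e2e_tests": 1, "performance_tests": 1}),
]

def maturity_band(metrics: dict) -> str:
    """Return the highest band whose must-have criteria the team's metrics meet."""
    achieved = "pre-basic"
    for name, criteria in BANDS:
        if all(metrics.get(key, 0) >= minimum for key, minimum in criteria.items()):
            achieved = name
        else:
            break  # bands are cumulative; a gap here blocks the later bands
    return achieved
```

Teams could then plot a quarter-by-quarter roadmap simply by choosing which missing metric to raise next.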

Federico: 

Yeah, I guess it also works like a roadmap: trying to understand where each team is right now and what they can do to improve in the different areas, right?

Vaishali: 

Absolutely, yes. In fact, a lot of teams took it as their roadmap, and they would put milestones across each band: “this is where we want to be in Q1, we should be moving into this band by Q2 and Q3, and eventually, by the end of Q4, we should be at this stage.” So they were able to chart out their roadmap based on it, and it helped them gauge how much effort and work they needed to put into all those different factors, like which levels of the test pyramid they needed to add to.

Federico: 

I think it’s very useful. We’ve been working on something similar, though not specifically for automation, but for the different areas of testing and the different factors of quality, because as testers we have to pay attention to different quality factors: security, accessibility, functionality of course, performance. And for each team, or each context, at different moments, we have to weigh every factor differently. So learning where we stand each month on each factor, and understanding what’s important for our business, helps us build a plan, as you just explained. That’s why I find this model very interesting: it goes one step further, specifically for test automation, showing how we can improve in the different areas, in the different layers of the pyramid, as you said. Amazing job.

Vaishali: 

Thank you, thank you for the feedback, and I would love to see your plan as well. I refer to your website a lot; I’m a big fan of your blog.

Federico: 

Thank you. Thank you so much. One more question for you: what’s your approach to building accountability in test automation?

Vaishali:

So I think it’s going to be visibility: you need to show what is failing, and you need to share what you’re testing. A lot of times, when there are failures or problems, when we know there are risk areas, you don’t have to wait for a P1 in production for that to be exposed. You need to be upfront: “I think we don’t have enough coverage for this area, this section.” Having that transparency builds confidence, and at the same time, when you share what you’re testing, it also helps from the end-user point of view: “Okay, I know this is important, and yes, you are testing it.” Because a lot of times, if you’ve spent a lot of time testing something that is not important, they can come and point it out: “I think this one is okay; it’s okay if it fails, but this is what we want covered more.” So having that visibility across the board is definitely helpful, and it builds your accountability, because they’ve seen the effort and what you’re trying to test. If anything is missed, they’re not going to come and hold it against you, because they’ve seen your progress, they’ve seen the way you think, they’ve seen what you’ve thought through and the scenarios you wrote. So it’s a team effort. Yeah.

Federico: 

Yeah, absolutely. I think we always have this challenge of how to show our value, right? How to show what we are doing, and make other people, other roles, decision makers, understand it, so that they keep investing in the team and in the activities we should be carrying out.

Vaishali: 

Right, yeah. And it’s not on day one that you can show the value, and that’s where the challenge comes in. The end result may only be visible six months down the line. So you have to let people know upfront that they won’t see the change today or tomorrow; once you implement it, it adds up toward that six-month milestone.

Federico: 

Yeah, absolutely. And I think that also helps our own careers, right? Because if we can learn and improve what we are doing, and give visibility to that, we’re also helping ourselves and our team to grow and keep progressing in our careers. Right?

Vaishali: 

Yes, absolutely.

Federico: 

So, my final question to wrap up this interview: if you had to recommend a book, about software testing or whatever topic you like, which one would it be?

Vaishali: 

There is this book, Accelerate, that we’ve been reading. It talks about how to have metrics that matter; it talks about the different DORA metrics, and it shows you how you can add value. It’s difficult to measure automated testing, because a bug in production doesn’t mean everything is failing, and that’s not the only way you can quantify quality. So it’s a good book about the different ways you can analyze the stability and quality of an application.

Federico: 

We’ll share the link to the book in the episode notes. It’s a great book; I also really like the way it’s written, because it has a lot of data to support everything they share in it. So…

Vaishali: 

Absolutely, and the data makes it more conclusive about what you should do, how you should do it, and what you should look for to measure.

Federico: 

Yeah, exactly. So Vaishali, thank you again for accepting the invitation to participate in the podcast. It was great to hear about all your experience and your thoughts on test automation. I hope to see you soon.

Vaishali: 

Thank you. I’m really excited about this opportunity, and thank you for it. I really enjoyed talking to you. Keep up the good work you do on your website; it’s helping a lot of us, and we keep going back to refer to it. Thank you so much.

Federico: 

I really appreciate it. Thank you. Bye bye.
