Blog

Webinar: Crisis-Proof Your Software Testing Budget with Federico Toledo

Sharing ways to cut costs, but not quality (and save yourself some precious time)

Due to the economic impact of the coronavirus pandemic, companies are experiencing an increased sense of urgency to react quickly to their customers’ needs, reduce waste, and uphold business continuity. Fortunately, you don’t have to let the quality of your products suffer just because of new constraints.

Based on his 15 years of experience helping clients do the same, Federico will share ways to cut costs associated with software testing and QA in the development process. The session is aimed at CTOs, Directors of Software Development, Product Managers, and anyone else responsible for testing.

Learning takeaways:

  • Ways to reduce the cost of software testing, infrastructure and training
  • How to find and eliminate costly wastes in your processes
  • The importance of shift-left and shift-right testing for better quality outcomes
  • How major brands like Shutterfly and PedidosYa employ cutting-edge testing practices that help save costs down the line and eliminate risk

Watch Here

Or watch it on YouTube.

Also, you can view the slides here.

Webinar Transcription

Kalei White:

Hello, everybody, and welcome to today’s webinar. My name is Kalei White, and I’m the CMO of Abstracta. Today our speaker is Federico Toledo, our COO and co-founder. We’ll be talking about ways you can reduce and optimize costs related to software testing and development. Don’t worry, this session is being recorded, and if you have any questions at all, I’ll be monitoring them as we go and we can chat about them at the end. I’m going to let Federico take over from here.

Federico Toledo:

Excellent. Thank you so much. Welcome, everyone. I’ve been working in software testing for 15 years now. Given the current situation, with the crisis and everything, I’m pretty sure that many companies are trying to reduce costs, so I’m going to share some ideas for reducing costs specifically in software testing.

I have to clarify, maybe make a disclaimer here: I’m not proposing that you should reduce costs in software testing, but this is the area where I have the most experience, so all my ideas are going to be about software testing and how to reduce the costs associated with those activities. But it’s not only about money, it’s also about time: how to make better use of your time. In some cases you have a budget and you look for ways of reducing it.

But in other cases, at other times, what we need is to make better use of our time. So we are going to talk about both things: money and time.

I organized the talk in four different parts. I will start specifically with testing costs. Then I will talk about infrastructure and tooling costs, and costs associated with training. And at the end, we’ll take a broader view, looking to optimize costs in the whole development process.

So to start off… I’ll say something that may seem contradictory, but if you want to optimize the cost associated with software testing, you should start testing earlier.

It seems contradictory because if you start earlier, it seems like you would be testing for a longer amount of time, but actually, by doing this (which is called shift-left testing), you’re going to be more efficient. You are going to have better results, mainly because you’re going to see benefits in several different ways.

If you involve testers in earlier stages, like when you are gathering information for your requirements, they’re going to start asking questions about those requirements at the very beginning. In that way, you’re going to improve the requirements, which prevents errors. You’ll also reduce re-work, and not only for testers… for the whole team, because the developers are going to code a better solution when the requirements are better.

And there’s another way to obtain benefits from this approach: if you involve testers before releasing the feature or user story they have to test, they’re going to understand the business rules and everything related to those requirements beforehand. So when they receive the version to test, they are going to be able to provide you with feedback earlier, which is very beneficial.

I wanted to mention a couple of projects where we were helping companies to shift testing left, not only functional testing but also automation and performance, involving the developers in these activities.

What I want to highlight here is that if there is any quality factor that is important for you, you should start earlier in order to test more efficiently. For example, if security is important for you, start security testing earlier. 

And just as I say we should shift testing to the left of our process, we should also shift it to the right. This doesn’t mean that you should only test in production. But what it does mean is that if you don’t test in production, your users will. So I guess you’d prefer to do the testing in production yourself, because finding errors before your users do is much better!

For me this is also very related to the definition of testing. 

Testing is gathering information about the quality of our product in order to make a decision. We shouldn’t only pay attention to and gather information from the test environment, because in the production environment there is much richer information to gather.

There we have our users, from whom we can learn how they behave and use the system. For example, we can understand which kinds of issues they face using tools like Google Analytics, or by reviewing the logs of your application you can understand which platforms, devices, and operating systems they use, and which functionalities they use the most.

With this information, you can improve your testing. Also, there are some tools that are very useful for defining alerts and notifications, so every time there is an error or an exception in any layer of your system, you get a notification and you can start working on those problems before the users report that something is wrong.

You can also apply this to different metrics associated with performance.
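As a rough illustration of the kind of alerting Federico describes, here is a minimal sketch in Python: it tails an application log and pings the team when error lines appear. The log path, keywords, and webhook URL are invented placeholders; in practice a monitoring or APM tool would do this for you.

```python
# Minimal sketch: poll an application log and raise an alert when new errors appear.
# The log path, keywords, and webhook URL are illustrative placeholders, not real endpoints.
import time
import requests

LOG_PATH = "/var/log/myapp/app.log"                       # hypothetical application log
ALERT_WEBHOOK = "https://chat.example.com/hooks/alerts"   # hypothetical chat webhook
KEYWORDS = ("ERROR", "EXCEPTION")

def notify(line: str) -> None:
    """Send the offending log line to the team channel."""
    requests.post(ALERT_WEBHOOK, json={"text": f"Production alert: {line.strip()}"}, timeout=5)

def watch(path: str = LOG_PATH, interval: float = 10.0) -> None:
    """Tail the log file and alert on every new line containing an error keyword."""
    with open(path, "r", encoding="utf-8") as log:
        log.seek(0, 2)  # start at the end of the file: only new entries matter
        while True:
            line = log.readline()
            if not line:
                time.sleep(interval)
                continue
            if any(keyword in line for keyword in KEYWORDS):
                notify(line)

if __name__ == "__main__":
    watch()
```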

I wanted to tell a story about a project I was involved in a couple of years ago with a company here in the Bay Area. They have an eCommerce site, and when I started to work with them, I asked, “Hey, you don’t have any performance testing… you don’t do performance testing at all?” and they said, “We don’t need it, because we do continuous delivery.”

They used to deploy every day or every two days, so with every deploy there was very little risk, because there were only a few changes going into production each time. And they were monitoring production, so every time they deployed, they could see if there was a performance degradation associated with that deploy.

And if there was a problem, a degradation, or some metric showing, for example, that the application used more memory or more CPU or something like that, they knew which lines of code had changed between one version and the other.

And it was easier to find the root cause of this problem. 

So shifting testing to the right means getting information from production about the quality of your product. In that way, you can optimize and improve your testing process, reduce costs, and make better use of your time.

Another important way to optimize your testing to reduce costs is by testing less. We must accept that we can’t test everything. So we should focus only on the important things: the riskiest things.

And this is why risk-based testing is a great approach to apply when we need to reduce costs. 

Risk has two different components: the probability of an error occurring and the impact, the negative impact that the problem can cause. Considering those, we can analyze the return on investment of our testing, prioritize, and find the 20% that gives us 80% of the benefit (applying the Pareto principle).

I really like this graph for showing the idea of analyzing and prioritizing according to ROI. It basically shows that as you invest more testing effort, your testing costs increase. But at the same time, greater testing effort does not necessarily guarantee that the system’s quality improves.

There are more important things than that. But just for the sake of the idea here, let’s assume that with more testing effort, the costs associated with failures are going to decrease. So the theoretical goal we should try to achieve is to be where these lines intersect.

If we invest more effort in testing than that intersection point, it means it’s more expensive to prevent the error than the actual negative impact it would have.

There is another, more practical way to look at this, which is the risk matrix. Remember, risk is probability and impact. So if we divide between low probability and high probability, and low impact and high impact, we have four quadrants.

We can think of low probability and low impact as low risk: there is a low probability that you will fall into this hole, and if you do fall, nothing will happen because it’s very shallow.

Then, in this case, you have high probability: it’s really likely that you’re going to fall into this hole, but it’s shallow, so probably nothing will happen.

Then we have low probability and high impact: it’s really easy to jump over this one, so the probability of falling is low, but if we do fall we are going to die, so the impact is really high.

And then, at the end, we also have this one, where it is really likely that we are going to fall and the impact is very high.

So what does MoSCoW stand for here? The M is for “must”: we must test this, this is the high-risk area. Then we have the S for “should”: we should test this, which has a low probability, but if it happens, it’s going to hurt.

Then we have the C for “could”: if we still have time, we could test this, because it’s really likely to happen, but the negative impact is low.

And lastly, the W is for “won’t”: we won’t test here, because we shouldn’t invest time testing something that has a low probability of occurring and a low impact. This is where we have the cases I mentioned, where it’s more expensive to prevent an error than to let it occur in production.

Again, this is something to keep in mind, something theoretical, but I think it’s a good approach for understanding that we shouldn’t try to test everything. We have to think about the risk associated with what we are testing.
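To make the idea concrete, here is a minimal sketch in Python of risk-based prioritization: score each item as probability × impact and map it to a MoSCoW bucket. The feature names, scores, and threshold are all invented for the example.

```python
# Minimal sketch of risk-based prioritization: risk = probability x impact,
# then map each item to a MoSCoW bucket. All features and scores are invented examples.

FEATURES = {
    "checkout payment": {"probability": 0.7, "impact": 0.9},
    "search filters":   {"probability": 0.8, "impact": 0.3},
    "password reset":   {"probability": 0.2, "impact": 0.8},
    "footer links":     {"probability": 0.1, "impact": 0.1},
}

def moscow(probability: float, impact: float, threshold: float = 0.5) -> str:
    """Classify an item using the same quadrants as the risk matrix."""
    if probability >= threshold and impact >= threshold:
        return "Must test"
    if impact >= threshold:
        return "Should test"   # unlikely, but it hurts if it happens
    if probability >= threshold:
        return "Could test"    # likely, but the damage is small
    return "Won't test"        # cheaper to accept the failure than to prevent it

# Rank by risk so the limited testing budget goes to the riskiest items first.
ranked = sorted(FEATURES.items(), key=lambda kv: kv[1]["probability"] * kv[1]["impact"], reverse=True)
for name, r in ranked:
    risk = r["probability"] * r["impact"]
    print(f"{name:18} risk={risk:.2f} -> {moscow(r['probability'], r['impact'])}")
```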

Okay, there are more ways to reduce costs, and here is one associated with documentation.

The problem with documentation is not only that you invest time in creating it, but also in maintaining it.

The more documentation you have, the more effort is required to maintain this documentation. 

So my advice is to have simpler documentation, or to use mind maps, which are an excellent option.

But for me, the important thing here is that we shouldn’t make assumptions. We should go to the people who consume this documentation and ask them what’s useful for them. What’s the minimum documentation you can have that is still useful for them? That’s how you reduce the cost associated with maintenance.

I really like this graphic from Jon Bach and the way he shows this idea. It’s specifically about functional testing and the documentation associated with it. At one extreme, we have pure scripted tests. At the other extreme, we have freestyle exploratory testing (which doesn’t mean that you don’t do any documentation at all, but it’s lighter).

What I want to highlight here is that there is a whole continuum, a huge spectrum of options, so we should understand which one is better for us, keeping in mind that we should reduce the effort required to maintain the documentation. We could also have a combination of options: perhaps in some cases we need a very detailed script, and in other cases we are fine with just a freestyle exploratory testing approach.

We are talking about the cost associated with maintenance, and something else that comes to my mind related to that is the cost of automation. Again, automation is great, and with automation you can make better use of your testing time. But the problem is that the more automation you have, the more effort is required to maintain it.

So my suggestion here, which is not always easy or feasible to apply, is, if possible, to put more focus on the API layer. This is not only applicable to functional testing, but also to performance. The reason is that you can run the scripts sooner and more frequently, which means more benefits from the automation, because you’re going to get more feedback, faster.

I’m pretty sure that everyone knows about this pyramid. On the left side, you have the antipattern, and on the right side you have the good practice, which is basically having a lot of unit tests, some API automation, and a few GUI tests, because GUI tests are the most expensive to maintain.

The unit layer is lower level, so the benefit of each test is not as great compared to the ones at the API layer. So one possible strategy that we push in our client projects is to have a flow at the GUI level, then analyze which endpoints are called from that flow and improve the test coverage using different data at the API level.
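As a sketch of that strategy, a single GUI flow can be complemented with data-driven checks against the endpoints it calls. The base URL, endpoint, parameters, and expected status codes below are hypothetical placeholders, not the setup of any specific project.

```python
# Minimal sketch: cover one endpoint exercised by a GUI flow with several data sets.
# The base URL, endpoint, and expected status codes are hypothetical examples.
import pytest
import requests

BASE_URL = "https://api.example.com"  # placeholder

@pytest.mark.parametrize("query, expected_status", [
    ({"country": "UY", "category": "books"}, 200),
    ({"country": "US", "category": ""}, 400),        # missing category should be rejected
    ({"country": "XX", "category": "books"}, 404),   # unknown country
])
def test_search_endpoint(query, expected_status):
    """Same endpoint the GUI flow calls, exercised with different data at the API layer."""
    response = requests.get(f"{BASE_URL}/v1/search", params=query, timeout=10)
    assert response.status_code == expected_status
```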

So I think this way you can greatly improve the return on investment of your automation. And again, I want to mention the projects where, as I said before, we were running performance tests in a continuous integration environment. The important thing here is that all the test cases were automated at the API layer; it was actually part of the teams’ definition of done. Every endpoint has a performance test in the Jenkins pipeline, running every day. The advantage of these test cases is that they are easier to maintain and they run faster, and you get continuous information about performance and the evolution of the performance metrics associated with each endpoint.
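In those projects this was done with JMeter scripts in the pipeline. Purely to illustrate the idea of a per-endpoint performance check that runs on every build, here is a minimal Python sketch; the URL, sample count, and response-time budget are invented values.

```python
# Minimal sketch: a lightweight response-time check for one API endpoint, run on every build.
# In practice this would be a JMeter/Taurus test in the pipeline; URL and budget are placeholders.
import time
import requests

ENDPOINT = "https://api.example.com/v1/search?country=UY"  # placeholder endpoint
BUDGET_MS = 300   # assumed response-time budget for this endpoint
SAMPLES = 20

def measure_once() -> float:
    """Return the response time of a single request, in milliseconds."""
    start = time.perf_counter()
    requests.get(ENDPOINT, timeout=10)
    return (time.perf_counter() - start) * 1000

def test_endpoint_stays_within_budget():
    """Fail the build if the 90th percentile response time exceeds the agreed budget."""
    timings = sorted(measure_once() for _ in range(SAMPLES))
    p90 = timings[int(0.9 * (SAMPLES - 1))]
    assert p90 <= BUDGET_MS, f"p90 {p90:.0f} ms exceeds budget of {BUDGET_MS} ms"
```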

Well, let’s move on to the second part of the presentation, which is about infrastructure and tooling costs. I think many people who are already paying for licenses for testing tools are considering migrating to open source.

This is tricky, because the advantage is mainly that you will see a reduction in costs in the long term. In the short term, you have to invest the initial effort to migrate what you already have in the commercial tool to the open source one.

I suggest a BlazeMeter article I published just last week, where I talk specifically about open source tools for performance testing, which I consider amazing. There are excellent tools for test automation and for performance testing, with a lot of options and really good quality. So I think this is a great idea, but the problem in many cases is that the initial investment can be very high.

Because of this effort, there are tools that help you do the migration more easily. For example, if you have LoadRunner and you want to migrate your performance testing to JMeter, which is the most popular open source tool for performance testing, Abstracta developed a script converter with the BlazeMeter team. It’s available for free here. Basically, you upload a LoadRunner script and you download a JMeter file. It’s not magic, it doesn’t convert everything, but it can help you save a lot of time in the migration.

So check it out if you are planning to do this type of migration. And as I said, maybe you can reduce costs by reducing the number of licenses that you are paying for. There are also some cases where, by paying for certain licenses, you can reduce costs associated with something else.

For example, if you consider BlazeMeter or similar tools, you can reduce the cost associated with the infrastructure you need for performance tests. Or there are other tools, like Testim, that can help you reduce the maintenance cost of test automation for web applications.

And there are other tools that maybe you can implement like Apptim for mobile app functional and performance testing (which is free to use and in open beta today).  

Basically, what I’m trying to say when it comes to paid and free tools is that we should analyze the different alternatives, open source, commercial, and everything in between, and figure out, in the near and long term, what’s the best approach to reduce costs.

But there’s another way to reduce the costs associated with infrastructure, mainly thinking about production: you could do some performance engineering, where you may find that you can give your users the same response times and the same throughput, but with fewer servers and fewer resources.

So how do you do that? Well, we can use APM or other monitoring tools to analyze the system and work together as a team of performance engineers, developers, DevOps, everyone, trying to understand how to make the system work better, with better performance. In that way, maybe you can reduce the number of servers you need. Regarding testing environments, you might consider using test doubles, like mocks, in order to reduce the infrastructure in your test environments.
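As a tiny sketch of that last point, an external dependency can be replaced with a test double so the test environment doesn’t need the real system at all. The service class, URL, and amounts below are invented purely for illustration.

```python
# Minimal sketch: replace an external payment provider with a test double,
# so the test environment doesn't need that infrastructure. Names are invented examples.
from unittest import mock

import requests

class CheckoutService:
    """Tiny example service that depends on an external payment provider."""
    PAYMENTS_URL = "https://payments.example.com/charge"  # placeholder third-party API

    def charge(self, amount: float) -> bool:
        response = requests.post(self.PAYMENTS_URL, json={"amount": amount}, timeout=10)
        return response.status_code == 200

def test_charge_succeeds_without_real_payment_provider():
    # The real provider is never called: requests.post is stubbed with a fake 200 response.
    fake_response = mock.Mock(status_code=200)
    with mock.patch("requests.post", return_value=fake_response) as fake_post:
        assert CheckoutService().charge(9.99) is True
        fake_post.assert_called_once()
```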

I also think there are some questions we should be asking ourselves, such as: do we need to run all the test cases, always, on all the browsers, on all the devices, on all the combinations of operating systems, and so on? Here we can also apply risk-based testing: understand which platforms (or whatever else) are the riskiest and try to focus our testing efforts on those. In that way, we can also reduce testing platform costs.

Now, the third point, optimizing training costs.

So there is something that… I think it’s in our DNA at Abstracta, but lately, with the crisis and the lockdown, we are trying to promote this value much more, which is sharing what we know, trying to teach other people and to learn from others. We do this by having internal webinars or workshops, sharing experiences, best practices, and also our failures. This is very important, especially in these times, because it’s a way to stay more connected and to continue growing and learning. I think this is really, really important. And it’s also a way to reduce costs associated with training.

Also, there are a lot of platforms with excellent content, for example, BlazeMeter University and Test Automation University from Applitools. They have excellent content for free. There are also a lot of courses on Coursera. And we have our own Abstracta Academy.

There are also many webinars like this one, and conferences that are going virtual, going online, in some cases for free. In the article linked here, we keep an updated list of which conferences are free or online; some were canceled or postponed, but there are many that you can now access for free. And you should pay attention not only to the conferences and webinars happening now: on YouTube and other platforms you can find a lot of videos with great content from previous events, which I think is another amazing way to keep your team trained. Lastly, don’t forget podcasts!

Now, for the last part of the presentation, I wanted to talk about how to optimize processes. 

For this, I want to start by talking about two very important metrics: lead time and cycle time. Those metrics are amazing because if you want to improve them, you have to work as a team. It’s not something you can improve by yourself; you need to work as a team to think about how to deliver a better service to the business, because these are the most important metrics for the business: how can we put a new idea, a new requirement, into production, into the user’s hands, as fast as possible, without, of course, compromising quality? For this we have our own model that we apply in our projects:

This is the most important thing. We look for wastes and try to keep the processes as lean as possible. 

We are looking for the eight wastes that lean methodologies propose. They came from the manufacturing industry, but they are really applicable to software development as well.

For example, looking for bottlenecks: parts of our process where someone is waiting for the output of another task. If we find these bottlenecks, we can improve our processes and, in that way, reduce costs.

Also, extra inventory: are we piling up test cases or documentation? Remember what I said before, the more documentation or scripts you have, the more effort is required to maintain those artifacts.

Another waste could be extra processing. For example, if making a decision requires the approval of three different people, maybe we are investing too much time in this.

And of course, there is always waste coming from the sources of errors.

We should find where the errors originate and try to solve the problems at their root in order to avoid re-work.

For that, we have a couple of activities that we like to do as a group, as a team: discussing, again, trying to find ways of collaborating better in order to improve the lead time, the metric I mentioned before.
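Since lead time and cycle time drive this whole part of the talk, here is a minimal sketch of how both can be computed from a ticket history, assuming lead time runs from request to production and cycle time from work started to production. The dates are invented examples.

```python
# Minimal sketch: lead time (requested -> in production) and cycle time (work started -> in production)
# computed from a ticket history. All dates below are invented examples.
from datetime import date
from statistics import mean

tickets = [
    {"requested": date(2020, 4, 1), "started": date(2020, 4, 6),  "deployed": date(2020, 4, 10)},
    {"requested": date(2020, 4, 2), "started": date(2020, 4, 3),  "deployed": date(2020, 4, 8)},
    {"requested": date(2020, 4, 5), "started": date(2020, 4, 12), "deployed": date(2020, 4, 20)},
]

lead_times  = [(t["deployed"] - t["requested"]).days for t in tickets]
cycle_times = [(t["deployed"] - t["started"]).days for t in tickets]

print(f"average lead time:  {mean(lead_times):.1f} days")   # how long the business waits for an idea
print(f"average cycle time: {mean(cycle_times):.1f} days")  # how long the team spends once work starts
```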

I also have to mention that there are other ways of optimizing costs, which is thinking about different engagement models. You can have your testing team in-house, you can consider outsourcing (this is what we do at Abstracta, which is why I have to mention it), and there is also the possibility of crowdsourcing. What I wanted to say here is, basically, consider all the options you have in order to optimize the use of your time and your budget.

And as a summary, to wrap up this presentation, I think everything I mentioned comes down to these three points:

  • Try to do more with what you already have: with the time or the money you already have, try to do more.
  • Identify and reduce the waste in your processes in everything you do. 
  • And also, take advantage of the free stuff: open source tools, free webinars, conferences and everything. 

Here you have a couple of extra resources. You can find a lot of information related to software testing on the Abstracta blog, and I also have my own blog, which is in Spanish. Recently, I started the Quality Sense Podcast, where I share interviews with different experts in the software testing community. For example, I’ve interviewed Rob Sabourin, Janet Gregory, and Refael Botbol. So please listen to the podcast and give me your feedback, as I plan to continue doing more interviews!

And one final thought: a smooth sea never made a skillful sailor. I think this is very important nowadays, in this time of crisis. We have to get ready, get stronger, get together, and help each other in order to overcome this situation, and I think we can do it. Thank you so much, everyone, for your time. I hope to answer any questions you have, or anything you want to continue talking about.

Kalei White:

Yeah. Thank you so much, Federico. That was really illuminating. And it makes you really think about how you can go back, look at everything you’re doing today and do it better.

We have an audience question. Out of all of the online conferences happening these days, is there one that you recommend the most?

Federico Toledo:

Well, I would suggest one which I really like, because I was part of its creation from the beginning: TestingUY, from Uruguay, my country. They are organizing different webinars in English with experts like Michael Bolton and Janet Gregory, and many other people who are real experts in their areas. But there are so many that I recommend checking our blog post about that. Yeah.

Kalei White:

Well, that’s about all the time we have today on how to optimize your software testing budget. Thanks, Federico!

Federico Toledo:

Feel free to contact me here. You have my contact info (@fltoledo on Twitter). Please reach out if you have any questions or anything you want to keep discussing about this topic. I am passionate about software testing, and I really enjoy discussing it and sharing with anyone. So, please do.

Kalei White:

Thank you all for joining us. Bye!

Federico Toledo:

Enjoy. Have a nice day. Be safe.


