Losing and Regaining Confidence in Automated Tests

Why we lose faith in automated tests and what to do about it

Every now and then we meet clients who have lost faith in the usefulness of automated tests. Not necessarily because of the quality of the tests themselves, but because, many times, nobody clearly and continuously conveys the value of the tests and what they contribute to the project. In this post, we'll share some strategies that we have used to regain confidence in automated tests, both on our part and the client's.

When Is Confidence Lost?

In some situations, we have seen a team come to question whether to continue with their automated tests at all. Having already invested the time and effort to create the tests, they must decide whether to keep investing in their maintenance or toss them out.

But how does a team get to that point?

A common factor in all these projects is that the automation work focuses more on the creation of test cases than on frequently analyzing the value of that work and aligning it with the expectations of the team and what the business needs at any given moment.

Many times, the goal of automated testing is something like, "We want to automate the entire user interface." Approached that way, the set of test cases grows week after week, and with it the time required to maintain the automation, analyze the results, and so on. This is even worse when all sorts of test cases are implemented indiscriminately, from simple ones to complex cases that are difficult to maintain.

For example, we participated in a project in which the team ended up with a suite of more than 600 automated test cases. They weren't clear about the utility of each one, and with such a high maintenance cost, the tests often produced false positives because they weren't adjusted often enough to keep up with the system.

Another complication of the "let's automate everything" approach is the dynamic it generates: automation comes to be seen as a project apart from development, not as an integrated activity aligned with the same objectives. Meanwhile, without maintenance, the earliest automated test cases begin to become obsolete. The push to automate everything leaves no room for the work needed to keep the test architecture healthy and stable.

What happens in the face of all this? Well, confidence in automation is lost. At that point, it's best to pause the automation and define a strategy to reverse the situation.

Lessons Learned

From experiences in which we had to "fight" to regain confidence in automated tests, we'll share some of the most interesting lessons we learned and the strategies that proved useful.

Know the Objectives and Prioritize Accordingly

Perhaps one of the most important lessons: always aim to generate value. We must keep the objectives of the project and of the business itself in mind in order to adjust the objectives of the automation work. One technique that has been a great help in aligning on objectives is the use of mind maps. They help us outline the modules and functionalities of the system under test and thus, graphically, order priorities and define a plan.

In order to generate value, it is essential to be one more member of the team, rather than turning the automation project into something that runs parallel to and independent of development. Something that has worked well for us is getting involved in the client's Scrum team, participating in the daily meetings and other events, and aligning day by day. In meetings like these, where the whole team is involved, it's possible to understand the priorities of the project as well as the functionalities planned for each Sprint. As expected, this information is valuable input for prioritizing the automation work.

One crucial thing to remember is to weigh the ROI of automating each module separately, according to its particularities. There will be cases where, due to the technology used, environment difficulties, data, and so on, reliable automation is so hard to achieve that it's better not to automate at all. This is why we don't recommend starting out with the objective of "automate everything."
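To reason about that per-module ROI concretely, here is a minimal back-of-the-envelope sketch in Python. The module names and every cost figure are hypothetical, not data from a real project; the idea is simply to count how many automated runs it takes before building and maintaining the tests beats running the same checks manually.

```python
# Rough per-module payback heuristic. All numbers below are hypothetical:
# hours to build the automation, maintenance hours per run, and hours the
# same checks would take to run manually.

def payback_runs(build_hours, maintenance_hours_per_run, manual_hours_per_run):
    """Runs needed before automating a module beats testing it manually."""
    saved_per_run = manual_hours_per_run - maintenance_hours_per_run
    if saved_per_run <= 0:
        return None  # maintenance costs more than manual testing
    return build_hours / saved_per_run

modules = {
    # module: (build hours, maintenance hours/run, manual hours/run)
    "login": (8, 0.1, 0.5),       # stable UI, cheap to maintain
    "reporting": (40, 2.0, 1.5),  # flaky charts: maintenance exceeds manual cost
}

for name, costs in modules.items():
    runs = payback_runs(*costs)
    if runs is None:
        print(f"{name}: never pays off -- better left manual")
    else:
        print(f"{name}: breaks even after ~{runs:.0f} runs")
```

A module whose maintenance cost per run exceeds the manual cost never breaks even; that is exactly the kind of case where it's preferable not to automate.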

Make the Results and Value of the Tests Visible

It’s key to give visibility to the benefits that the tests bring to the team. As testers, we can achieve this by giving our status in the dailies so that everyone knows what we are working on and how it affects others and the product. That way we also build up the idea that the test cycle brings something real to the product and is part of the development activity. Ideally, we would even reach the point where automation is not seen as a separate task but is part of the “Definition of Done” of the features we want to cover with automation. That way, the team will focus on everything necessary for the automation to be available on time, in order to complete the tasks committed to the Sprint.

Tools such as Jenkins make it easier to give visibility to the tests by constantly reporting their status, but if false positives are not kept under control, confidence is immediately lost and the information those tools report is dismissed.

Faced with this type of situation, a possible strategy is to have Jenkins run only those cases that are high priority and run consistently, failing only when errors actually occur in the system. In this sense, I liked a tweet from Katrina Clokie about an Agile Testing Days 2017 session:

[Embedded tweet from Katrina Clokie]

In one project where we applied this strategy, there were initially about 80 test cases running in Jenkins, of which more than 50 failed. After making the change, we had more than 50 cases running correctly in Jenkins, thus recovering confidence in the test report.
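As an illustration of how such a selection can be wired up (a minimal sketch, not the setup from that project; the marker names are our own hypothetical convention), tests can be tagged by priority and stability so that CI runs only the trusted subset:

```python
# tests/test_examples.py -- tag tests by priority and stability so CI runs
# only the trusted subset. The markers ("high_priority", "stable",
# "quarantine") are illustrative and would be registered in pytest.ini
# to avoid unknown-marker warnings.
import pytest

@pytest.mark.high_priority
@pytest.mark.stable
def test_user_can_log_in():
    assert True  # placeholder for a check that only fails on real defects

@pytest.mark.quarantine  # flaky: kept out of the CI report until stabilized
def test_dashboard_chart_renders():
    assert True  # placeholder
```

The Jenkins job would then invoke something like `pytest -m "high_priority and stable"`, so a red build points to a real product problem rather than to a flaky script.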

The important thing here is that automation contributes to the objective of testing: to provide relevant information to all interested parties about the possible problems that may arise, the risks, and ultimately, about the quality of the system.

Generate Opportunities for Knowledge Exchange

It’s immensely helpful to ask the opinion of other people working on automation projects and to create a space for exchange that broadens the vision with which we approach our own project. Every time we’ve done this exercise we have benefitted, gathering from everyone involved, experts included, a set of good practices and a combination of tools that improve the stability of the test cases.

Takeaways

In summary, these are the takeaways from our experiences that we wanted to share with you:

  • Know what the tests are for and whom they serve. Without this, the project becomes the development of many test cases that may not actually have any business value.
  • Prioritize modules and functionalities. Know where to start, plan the work according to the needs of the business, and stay agile: if something with a higher priority comes up, for example a new functionality, adapt to it.
  • Generate real interest among the developers in the results and the value that the tests provide. This is linked to the above: always aim at generating value, and for this it is essential to be one more member of the team rather than running the automation project parallel to and independent of development. One way to achieve this is to include automation tasks in the planning of each functionality, that is, to make automating its tests part of the “Definition of Done” of the feature.
  • Consult with other teams about how they manage their automation projects, in order to apply a proven set of good practices.

And you, have you faced a similar situation? Have you ever had to help others (or yourself) regain confidence in automated tests? What strategies did you use?

Leave us a comment!

Thank you, Matias Fornara, for helping me with this post!


