When to Automate a Test?

Useful guidelines for deciding whether a test case is worth the time and energy of automating

By: Charles Rodriguez and Alejandro Berardinelli

“Let’s automate testing as much as possible.” That always sounds like a good idea, right? It’s the way the world is going in general, isn’t it? In software testing, automation can be a huge productivity enhancer, but only in certain contexts.

In this post, we’ll present an approach to test automation aimed at recognizing its feasibility according to the context of the project. It’s very useful for a tester to understand what automation is and to be clear about when something is automatable. Testers should be mindful of how they can optimize their work, whether by collaborating with colleagues and developers or by trying out an automation tool themselves.

We’ll cover some concepts that are fundamental when you don’t yet have experience with automation, and evaluate their importance and benefits in relation to manual testing.

What is Test Automation?

Historically, automation arose to reduce the human effort required in activities that could be replicated by a programmable system or machine, with the aim of simplifying onerous, repetitive or complex work, making it effective and more productive. In this way, it’s possible to save energy, time and costs, while freeing people up to focus on other tasks.

In software development, this practice can be approached in the same way by automating certain efforts that are done manually. The steps followed by humans are translated into repeatable scripts, reducing execution times and freeing people to focus their energy on tasks that provide greater value. In some cases, automation allows us to perform tests that a human could not, especially considering the limit on the number of executions we can do in a given period of time.

One of the most common questions that arises when a tester thinks about automating is, “When is something automatable?”

Knowing if something should be automated involves assessing the potential investment, approach, benefits, and most importantly, the current knowledge of the manual process.

We first have to fully understand and become experts in the manual process; only then is it possible to automate. Complete knowledge of the manual process is the foundation for knowing when something is automatable, which implies that manual testing is not completely substitutable. (There is often a debate about the impending death of manual testing. It simply cannot die!)

Automation Myths

Automation has its advantages and disadvantages, depending on the project, time, cost, quality and methodology.

Based on the above, another very important point is that, beyond automating or not, you have to understand the context: everything you do should be aimed at fulfilling the objectives in the best possible way, selecting and applying the appropriate methods, tools, and skills.

Avoid falling into these common myths about test automation:

  • You can automate everything
  • Automating always leads to better software quality
  • Automated testing is better than manual
  • Automating brings a rapid return on investment

Of course, there may be times when one of these myths is actually true, but it would be an exception to the rule.

The Context-Driven Testing school offers the following seven principles, which help to understand the goal of testing, whether manual or automated:

  • The value of any practice depends on its context.
  • There are good practices in context, but there are no best practices.
  • People, working together, are the most important part of any project’s context.
  • Projects are not static and often take unpredictable paths.
  • The product is a solution. If the problem is not solved, the product will not work.
  • Good software testing is a challenging intellectual process.
  • Only through judgment and skill, practiced cooperatively throughout the project, will we be able to do the right things at the right time to test our products effectively.

These principles were proposed by Cem Kaner, James Bach, and Bret Pettichord in their book, “Lessons Learned in Software Testing: A Context-Driven Approach,” and they help us grasp the importance of being able to adapt to the current project situation.

Manual vs Automated

When getting started, we might want to automate everything, but the cost of developing and maintaining the scripts for automated tests is not something to take lightly.

When a project bets on automation, it should ideally build a solid base starting with the unit test cases, preventing as many bugs as possible with immediate feedback, and then continue successively up through the other layers. That way, manual and exploratory testing deliver the most value at the UI level, focusing on what cannot be automated.

This concept is explained by Mike Cohn’s test automation pyramid:

[Image: the test automation pyramid]

On the left, we see how automation is commonly done and on the right, we can see the ideal way, where unit tests carry the most weight in the pyramid.
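To make the base of the pyramid concrete, here is a minimal unit test sketch in Python; the function under test and its values are hypothetical, but they illustrate the kind of small, fast check that should carry the most weight:

```python
# A hypothetical function under test: the kind of small, fast check
# that belongs at the base of the test automation pyramid.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_apply_discount():
    # Unit tests like these run in milliseconds, so hundreds of them
    # can execute on every commit, giving immediate feedback.
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(50.0, 0) == 50.0


test_apply_discount()
```

Because tests at this level need no browser or deployed environment, they are the cheapest to write and maintain, which is exactly why the ideal pyramid puts most of the effort here.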

Although there are differences between automated and manual testing, they aren’t mutually exclusive; rather, they are complementary tasks in the search for better software quality.

If we think about the return on investment of testing, testing a new functionality manually lets you quickly learn more about the application at a low cost. As knowledge is acquired, the inventory of tests grows, and consequently the cost of manual testing grows with it. Automation, on the other hand, has a higher initial cost that decreases as the project progresses. This behavior can be seen in the following graph:

[Image: graph of automated vs. manual testing costs over time]

Analyzing this, we see that automation requires a large initial investment until the break-even point, after which it begins to have a positive impact on long-term costs compared to manual testing. From this we can conclude that the two activities are fully compatible, generating both short- and long-term benefits.
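As a rough sketch of this break-even behavior (all cost figures below are hypothetical, chosen only to show the shape of the curves):

```python
# Hypothetical cost model: manual testing has a constant per-run cost;
# automation has a high up-front scripting cost but a much lower
# per-run cost thereafter.
MANUAL_COST_PER_RUN = 8      # e.g., hours per manual regression cycle
AUTOMATION_SETUP = 80        # one-time scripting investment
AUTOMATED_COST_PER_RUN = 1   # maintenance and review per cycle


def break_even_run() -> int:
    """First run at which cumulative automated cost drops below manual."""
    run = 1
    while AUTOMATION_SETUP + AUTOMATED_COST_PER_RUN * run >= MANUAL_COST_PER_RUN * run:
        run += 1
    return run


print(break_even_run())  # → 12
```

With these numbers, automation only pays off after a dozen regression cycles, which is why short-lived projects with few repeated executions are often poor candidates for it.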

What to Automate?

Now that we are aware of the importance and benefits of automation, we have to identify the cases that we can automate. For this, we must take into account the objective that is being pursued and at what level, as we saw in the Cohn pyramid.

Try to answer the following:

What’s the Objective?

The first thing we are looking for is always a higher level of software quality, while analyzing whether automation “fits” within the project.

To answer the question, it would be advisable to carry out a feasibility analysis in relation to the objectives.

The following scenarios are some in which it will most likely make sense to automate:

  • There’s technical debt to eliminate
  • Regression tests are time-consuming
  • The project is highly complex and long term

Which Test Cases Should We Automate?

As we have seen, not everything is automatable in every context, which is why it’s important to know which cases suit our purpose. At the code level, on the developer side, unit tests are the easiest to script. On the tester side, we usually automate the regression cases at the UI and API levels, starting with the most critical and complex flows.

The following types of test cases are good candidates for automation:

Regression Tests

Given a situation in which we already have a defined test suite that must be executed periodically after each product release, running it manually becomes repetitive and takes time away from tasks that are not automatable and where we can provide more value.

These regression test cases are highly automatable and particularly convenient to integrate into a CI/CD model. This adds value in cost and time, since the scripts can run unattended while the team performs other activities.
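Here is a minimal sketch of a regression suite that a CI pipeline can run unattended, using Python’s standard `unittest` runner; the individual checks are hypothetical placeholders:

```python
import unittest


class CheckoutRegressionTests(unittest.TestCase):
    """Regression checks re-run automatically after every release.

    In a CI/CD pipeline these run unattended (e.g., triggered on each
    merge), so the team only looks at them when a check fails.
    """

    def test_discount_is_applied(self):
        # Hypothetical pricing rule: 10% off a 100.0 cart.
        self.assertAlmostEqual(100.0 * (1 - 0.10), 90.0)

    def test_empty_cart_total_is_zero(self):
        self.assertEqual(sum([]), 0)


# CI invokes the suite on every build; we can run it the same way locally.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(CheckoutRegressionTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The same suite can be wired into any CI server as a build step, so each release candidate is regression-checked before anyone spends manual testing time on it.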

High-Risk Tests

These cases are usually agreed upon by the stakeholders, with the emphasis placed on checking the high-priority, critical functions that, if they fail, greatly affect the business. This is why this approach is called “risk-based testing.”

Automating the cases that cover these functionalities helps to find, almost immediately after each release, the incidents that require quick action and that can block that release from reaching production.
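One simple way to rank candidates for this kind of automation is a risk score (likelihood of failure × business impact); the test cases and weights below are hypothetical:

```python
# Hypothetical risk-based ranking: likelihood and impact on a 1-5 scale,
# as agreed with the stakeholders.
test_cases = [
    {"name": "checkout payment flow", "likelihood": 4, "impact": 5},
    {"name": "profile avatar upload", "likelihood": 2, "impact": 1},
    {"name": "login with 2FA", "likelihood": 3, "impact": 5},
]


def risk_score(case: dict) -> int:
    """Risk = likelihood of failure x impact on the business."""
    return case["likelihood"] * case["impact"]


# Automate the highest-risk flows first.
ranked = sorted(test_cases, key=risk_score, reverse=True)
print([c["name"] for c in ranked])
# → ['checkout payment flow', 'login with 2FA', 'profile avatar upload']
```

However the scores are assigned, making them explicit turns “what should we automate first?” into a conversation the whole team can have with the stakeholders.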

Complex and/or Time-Consuming Tests

A project may contain cases that are complex to reproduce manually; turning them into a script makes it easier to execute them in an automated way. With a form that takes a lot of data, for example, a tester is more error-prone, especially when testing the same form with many variants of data. Automating reduces that probability of error.

Repetitive Test Cases

In the same way that regression testing becomes repetitive, there may be particular cases where automation is worthwhile. For example, manually testing a large amount of data for the same flow would take a considerable amount of time, and having to repeat it makes it tedious. By automating this flow, we can parameterize the data and forget about testing each value manually. This is also known as data-driven testing, where an automated test is parameterized and fed with data from a data source, such as a file or a database.
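A minimal data-driven sketch: one test function fed with rows from a data source. Here an inline CSV string stands in for an external file or database table, and the validation rule under test is hypothetical:

```python
import csv
import io

# Stands in for an external data source (CSV file or database table).
RAW_DATA = """email,expected_valid
alice@example.com,True
bob@,False
@example.com,False
carol@example.org,True
"""


def is_valid_email(address: str) -> bool:
    """Hypothetical validation rule under test."""
    local, sep, domain = address.partition("@")
    return bool(local) and bool(sep) and "." in domain


def run_data_driven_test() -> int:
    """Run the same check once per data row; return the failure count."""
    failures = 0
    for row in csv.DictReader(io.StringIO(RAW_DATA)):
        expected = row["expected_valid"] == "True"
        if is_valid_email(row["email"]) != expected:
            failures += 1
    return failures


print(run_data_driven_test())  # → 0
```

Adding a new variant is now a one-line change to the data source rather than a new manual test execution, which is the essence of the data-driven approach.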

Tool Selection

Now that we know what to automate, we can move on to selecting the tool we are going to use. This activity can be one of the most complex to analyze initially given the number of tools available, but the decision will have to consider the project, budget, knowledge, and experience of those involved.

There are several open-source, commercial, and custom tools which vary in their limitations and possibilities. To select the correct tool, you must be clear about the requirements that must be met to continue the cost-benefit analysis of its use.

At Abstracta, we select from a wide variety of tools according to the context of the project, but we often use Selenium, Appium, Cucumber, Ghost Inspector, and GXtest given the flexibility they provide.


Here is a brief overview of our favorite test automation tools:

  • Selenium: An open-source tool, it’s widely accepted around the world for testing web applications on different browsers and platforms.
  • Appium: Another open-source framework (based on Selenium) that you can use to automate tests mainly on mobile devices for iOS and Android.
  • Cucumber: This tool is part of the BDD (Behavior Driven Development) approach. Cucumber’s main advantage is its ease of use, since it’s very intuitive, providing a wide variety of features and is also open-source.
  • Ghost Inspector: The most remarkable thing about this tool is that it allows us to automate without knowing how to code, which makes it great for beginners. On the other hand, this tool is commercial and only allows 100 free executions per month. See our full review of Ghost Inspector.
  • GXtest: At Abstracta, we developed GXtest to enable automation for applications developed with Genexus in a simple way (the only one of its kind). It also allows for integrating tests in a CI/CD model.

It’s important to note that there is no best tool for all cases. We can choose among those that offer more flexibility, but the choice will always depend on the application under test and the decision-making criteria.


In general, there are different ways to guide our automation efforts, but the most important thing is to have a well-defined purpose and objective. The context of the application under test is no minor detail, and we should remember that not everything is automatable; the return on investment depends on a good feasibility analysis.

In any case, if you do automate, it’s highly recommended to do so at all levels, with the greatest effort at the lowest levels, such as unit and API tests, and not only at the UI level.

Taking the above into account, we’ll be able to prevent a greater number of bugs and complement manual testing, which relies on the skill of the testers, while avoiding an excessive load of repetitive tasks that can be programmed.

More Resources

To delve deeper into automation and to know when to automate a test, here are some excellent resources:

  • Read our functional test automation ebook
