The Complete Beginner’s Guide to Functional Test Automation
In this guide, Abstracta team members, Federico Toledo and Matias Fornara, are going to share with you the who, what, where, when, why, and how of functional test automation. You’ll discover the IT and business value of test automation as well as when it makes sense to automate. Lastly, we’ll share some industry-leading practices and approaches that will help you bring about maximum effectiveness.
By the end of this post, you should be able to confidently present a plan to your team for functional test automation and act on it.
Getting Acquainted with Test Automation
As we are focusing on functional test automation in this guide, we first have to discuss regression tests. Although they're one of the most popular types of tests to automate, they're not the only use for test automation.
Regression tests are a subset of scheduled tests chosen to be executed periodically, for instance, before a product release. Their aim is to verify that the product hasn’t suffered any regressions. There are three types of tests that you generally automate which we will touch upon later: unit tests, API tests, and UI tests.
Why are they called regression tests?
At first, we believed the name simply meant going back and executing the same tests, given that it's related to that idea. After a while, we realized the concept is actually about verifying that what is being tested has no regressions. We imagined that "not having regressions" meant no regressions in quality or functionality, but we've heard that the term comes from the following situation: if users have version N installed, and you install N+1, and the latter has bugs, you will be tormented by having to go back to the previous version, to regress to version N. You want to avoid these regressions! And that is why these tests are carried out.
It is incorrect to think that regression tests are limited to verifying that the reported bugs were fixed, as it’s just as important to see if what used to work is still working properly.
Generally speaking, when the tests for certain functionalities are designed, a decision has already been made about which tests will be considered part of the regression test set, such as the ones that will be executed before every new product release or in each development cycle. Running regression tests consists of executing the previously designed tests all over again.
Some will argue that by having a checklist of steps to follow and things to observe, one is not really testing, but simply checking. James Bach and Michael Bolton, two experts in the field of testing, often discuss the differences between testing and checking. Testing is where you use creativity and focus, search for new paths to take, and ask yourself, "How else can this break?" When checking, you simply follow a list thought up by someone else.
Here’s the thing with regression tests: they can be boring. Boredom fosters distraction. Distraction leads to mistakes. Regression tests are tied to human error.
We are not saying that testing is dull! We love it! But routines can be monotonous and, therefore, prone to error. Moreover, we techies tend to see things that can be automated and wonder how we could program them so we don't have to do them manually. This is when automated testing comes to the rescue, given that robots don't get bored!
Test automation consists of a machine executing the test cases automatically, somehow reading their specifications, which could be scripts in a general-purpose programming language or a tool-specific language, or come from spreadsheets, models, etc.
Here’s what some stakeholders within the development process might say they want to accomplish by automating regression tests:
“I want to make changes to the application, but I am afraid I might break other things. Testing them all over again would be too much work. Executing automated tests gives me peace of mind by knowing that despite the changes I’ve made, things that have been automated will continue working correctly.”
“While I automate I can see if the application is working as required and afterwards I know that whatever has been automated has already been reviewed, giving me the opportunity to dedicate my time to other tests, therefore obtaining more information about my product’s quality.“
“When I have a huge system in which changes in one module could affect many functionalities, I hold back from innovating in fear of breaking things.”
“When I’m given a new version of the application, nothing’s worse than finding that what used to work no longer does. If the error is in something new, then it’s understandable, but, when it has to do with something that usually worked and now doesn’t, then it’s not as easy to forgive.”
When Can You See the Results?
It’s often believed that the moment a test finds an error is when you reap the benefits and the most important metric is the number of bugs found. In reality, the benefits immediately appear from the moment you start modeling and specifying the tests to be carried out in a formal way. Afterwards, the information resulting from the execution of the tests also provides great value.
Detecting an error is not the only useful result; confirming that the tests verify what they should is useful as well. An article in Methods and Tools states that a large number of bugs are found while automating test cases. When automating, you have to explore the functionalities, test different data, and so on. Generally, fiddling around with the functionality to be automated takes a little while. Afterwards, you execute it with different data to prove that the automation went well. By that point, a rigorous testing process is already taking place.
Note: if you automate tests in one module and consider those tests good enough, do you stop testing? The risk here is that the automated tests aren't covering all the functionalities (as in the pesticide paradox). It depends on the quality of the tests. You could have a thousand tests and therefore believe you have a solid amount of testing, but those tests might not be verifying enough; they may be superficial, too similar to one another, or not cover the most important functionalities in the system.
The value of automating is not in the number of tests or the frequency in which they are executed, but in the information they provide.
Why Automate and to What End?
If you take the traditional definition of automation from industrial automation, you can say it refers to a technology that can automate manual processes, bringing about several other advantages:
- Improvement in quality, as there are fewer human errors
- Improvement in production performance, given that more work can be achieved with the same amount of people, at a higher speed and larger scale
This definition also applies perfectly to software test automation (or checking).
Now, we would like to bring the “zero accumulation” theory forward. Basically, the features keep growing as time goes on (from one version to the next) but the tests do not grow (we haven’t heard of any company that hires more testers as it develops more functionalities).
As a product grows, you have to choose what to test and what not to test, leaving many things untested.
The fact that the features grow with time means that the effort put into testing should grow in a proportionate manner. Here lies the problem of not having enough time to automate, given that there’s not even time for manual testing!
Ernesto Kiszkurno, an Argentine consultant at a firm specializing in quality and process engineering, says that the hardest (aka most expensive) things in testing are design and execution. You could consider design to be cumulative, given that you design the tests and record them in spreadsheets or documents. The difficulty is that test executions are not cumulative. Every time a new version of the system is released, it's necessary (well, it's desirable, but it should be considered necessary) to test all the accumulated functionalities, not just the ones from the latest addition. This is because it's possible that some of the functionalities implemented in previous versions change their desired behavior due to the new changes.
The good news is that automation is cumulative. It’s the only way to make testing constant (without requiring more effort as time goes by and as the software to be tested grows). The challenge is to perform testing efficiently, in a way that pays off, where you can see results and in a way that it adds value.
What Is the Return on Investment of Test Automation?
From our experience, it’s safe to say that there is a high ROI of test automation. We’ve seen that the cost of a single defect in most organizations can offset the price of one or more tool licenses. Also, coding defects found post-release can cost up to 5x more to fix than those found during unit testing.
The Business Value
- Improve software quality
- Avoid operational problems
- Maintain a good customer image
- Avoid legal problems and minimize risk
- Decrease the cost of fixing bugs (up to 5x)
The IT Value
- Test in parallel, in an unattended manner, on different platforms
- Simplify routine tasks
- Run more tests without increasing costs in the same amount of time
- Increase scope of coverage
- Find the hard-to-detect defects earlier, when they are easier to fix
- Improve overall software quality
Check out this post where we go into an in-depth breakdown of the math behind the potential ROI of test automation.
What to Automate and What Not to Automate?
As previously mentioned, after designing the tests, it’s necessary to execute them every time there’s a change in the system (like before the release of a new version). Even though the benefits are well known, it can also be argued that it requires a certain effort to automate and maintain regression tests. Almost all automation tools provide the possibility of recording the tests and executing them later, which is known as record and playback.
In the past, this paradigm would have only been used for simple tests or when learning how to use a tool. Nowadays there are powerful script-less tools such as Testim.io or TestProject, which not only allow you to record your tests in a browser or on a mobile device, but also leverage AI-based features such as auto-healing. This means they can find a new selector if a test fails because something has changed in the target system. This has been a quality breakthrough for teams that only had manual testers with no room for an automator, or, as you might have heard it said, "we've no time for automation."
However, sometimes you need to carry out more complex tests, or maybe you just feel more comfortable with a code-based framework because you know it will be more flexible and better suited to your needs in terms of handling test data sets, managing test environments, testing databases, and so on. Regardless of how you do it, once these aspects are taken care of, you can execute the tests as many times as you want with very little effort.
The best tests to automate are the ones which are highly repetitive, given that it’s necessary to execute them many times.
If tests for a development cycle are automated, the automated tests for the next cycle can once again check everything that has already been automated with little effort, allowing the testing team to increase the volume of the set of tests, therefore increasing coverage. Otherwise, you would end up having test cycles that are longer than the development cycles (and they’d keep getting longer every time) or you’d leave things untested, accepting the risk that that involves.
One of the most significant factors is the amount of times you’re going to have to repeat the execution of a test case.
Note that the cost of the first execution is greater in the automated case, since it includes the effort of automating the test. The graph below represents this hypothetically. Where the lines cross is the inflection point at which one option makes more sense cost-wise than the other. If the test case is executed fewer times than that number, it's better not to automate. Conversely, if you are going to test more than that number of times, then it's better to automate.
The number of times you execute a test is determined by many things:
- The number of versions of the application that we want to test
- The different platforms we will be executing on
- The data (Does the same test case have to be run several times with different data?)
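To make the inflection point concrete, here is a minimal Python sketch; all the costs are made-up numbers, purely for illustration, not benchmarks:

```python
# Hypothetical break-even estimate: every cost below is an assumption.
def break_even_runs(manual_cost_per_run, automation_setup_cost, automated_cost_per_run):
    """Smallest number of executions at which automating becomes cheaper."""
    if manual_cost_per_run <= automated_cost_per_run:
        return None  # automation never pays off on per-run cost alone
    runs = 0
    while runs * manual_cost_per_run <= automation_setup_cost + runs * automated_cost_per_run:
        runs += 1
    return runs

# Example: 2 hours per manual run, 20 hours to automate, 0.5 hours per automated run
print(break_even_runs(2.0, 20.0, 0.5))  # -> 14
```

In this made-up scenario, automation starts paying for itself at the 14th execution; with your own numbers the inflection point will of course land elsewhere.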
Basic Principles of Test Automation
The Automation Pyramid
Many agilists adopt automation as it helps speed up testing and the entire development process. If you want to understand more about agile environments, you can find a good explanation here. In non-agile software development, many people end up inadvertently falling into the "ice cream cone anti-pattern" for testing by putting more emphasis on automating at the UI level than at any other level.
It's more advantageous to follow the practice that flips that ice cream cone upside down. Made popular by Mike Cohn, the agile test automation pyramid below gives you the most bang for your automation buck, improving the ROI of automation.
When most of your efforts are focused on automation at the UI level, the focus is on finding bugs, whereas with the agile pyramid, the idea is to prevent them.
The pyramid is a stronger, more beneficial, and more cost-effective way to implement test automation because it provides a solid testing base in the unit testing phase upon which to build further testing in the integration and UI phases.
API Automation Approaches
Another decision is about the strategy, combining different approaches: isolated and integrated tests. In the isolated (unit test) approach, you create different kinds of tests for each specific endpoint. Integration tests require a different approach, more like a "test case/flow" approach, where you simulate what happens when you run a manual test case (a user flow), mapping it to API calls. This is an important decision, because it will shape how your tests and test suites will look, and how easy they will be to grow, maintain, and for someone new to the team to understand.
UI Automation Approaches
There are several automation approaches, and for different contexts, some will be more useful than others. It’s good to bear them in mind when selecting the test strategy and even more for selecting the adequate tools.
Let's now turn to three approaches that, based on our experience, are the most common and most beneficial (typically when thinking of automating at the user interface level).
Scripting
This is one of the most common approaches to test automation. Tools usually provide a language in which you can specify the test cases as a sequence of commands that execute actions on the system being tested.
These languages can be tool-specific, as in the case of Selenese for the Selenium tool, or they can be a library or API for a general-purpose language, like JUnit for Java.
The type of commands provided by the tool will vary according to the level of the test case. Some tools work at the graphical interface level, so you'd have commands that execute actions like clicks or data input in fields. Others work at a communication protocol level, so you'd have actions related to that protocol; for example, at the HTTP level, like the HttpUnit tool, which gives you the possibility of executing GETs and POSTs at the protocol level.
Imagine the following example: a JUnit test invokes a piece of the system's functionality directly on the object being tested, using certain input values for its parameters, and the output is checked. In this case, the execution happens on an object. With Selenium, by contrast, the parameters are loaded into the inputs that exist on the web page and the functionality is executed by pressing the corresponding submit button: first the values are entered into two inputs with the "type" command, and then you click the button that sends the form.
In order to prepare automated tests following this approach, it’s necessary to program the scripts. For this you need to know the language or API of the tool and the different elements of the system you’re interacting with. For example, it could be the buttons on a website, the methods of the logic you want to execute, or the parameters you need to send in a GET request of a web system.
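As a sketch of the object-level variant of this approach, here is a minimal scripted test using Python's unittest module (the analogue of JUnit); the Calculator class is a hypothetical system under test, invented for illustration:

```python
import unittest

# Hypothetical class under test; its name and behavior are assumptions.
class Calculator:
    def add(self, a, b):
        return a + b

# Unit-level scripting: the test invokes the object directly with input
# values and checks the output, with no user interface involved.
class CalculatorTest(unittest.TestCase):
    def test_add_returns_sum(self):
        self.assertEqual(Calculator().add(2, 3), 5)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.TestLoader().loadTestsFromTestCase(CalculatorTest)
)
print(result.wasSuccessful())  # -> True
```

A UI-level script would follow the same sequence-of-commands idea, but its commands would type into fields and click buttons instead of calling methods.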
Record and Playback
Given that programming the scripts is an expensive endeavor, the "record and playback" paradigm has traditionally allowed you to create (at least the basic structure of) the scripts in a simple way.
The point is for the tool to capture the user's actions on the system (record) and later turn them into a script that can be reproduced (playback). Let's break this process down into three parts:
- The user manually executes actions on the system being tested
- At the same time, the tool captures those actions
- It creates a script that can later be executed against the same system
Without this sort of functionality, it would be necessary to write the test cases manually, and to do so, as previously mentioned, in-depth knowledge of the application and the tool's scripting language would be essential.
As we already mentioned, there has been some agitation in the record and playback field, which won back some of the people who used to think this paradigm was only meant for PoCs and rookies. But why?
Typically, with tools such as Selenium IDE or Katalon Recorder, the scripts created from recording the user's actions have to be modified, so you have to know the language and the elements of the system being tested. However, even though it's much easier to edit a pre-generated script than to program one from scratch, that's not enough. You may need to parametrize the tests so that the script uses different test data (following the data-driven testing approach) or add certain logic to the scripts, for instance, structures like if-then-else in case different workflows need to be handled, or loops.
They reshaped the paradigm by asking: what's more important? Having a flexible framework which you can maintain, modularize, and customize so you can react to changes in the system? Or just being able to record something repeatable with the least human intervention possible? They concluded: the latter!
This mindset change brought about features such as auto-healing, user friendly interfaces, versioning without the necessity of ever having used Git, and so on.
Model-Based Testing / Model-Driven Testing
The next level of automation implies automating not just the test execution but also its design. We suggest following a model-based approach, whose models can come from two different sources:
- Models created specifically for testing purposes
- Models created specifically for development purposes
On the one hand, this approach can rely on the tester somehow developing a specific model for test creation, for example, a state machine or any other type of model with information on how the system being tested should behave. On the other hand, you can generate tests from the information of the application or the different artifacts that you have produced during the development process.
The results obtained will depend on each tool, but generally speaking, they will be test cases specified in a certain language, test data, or automated test scripts that directly execute the generated test cases.
This way, the tests are based on an abstraction from reality via a model. This allows for working at a higher degree of abstraction, without having to deal with the technical difficulties, focusing only on the model of the problem, making the tests easier to understand and maintain.
In this post, we'll continue discussing mainly automation with scripting, relying on record and playback tools that allow you to parametrize their actions in order to follow a data-driven testing approach. In addition, we'll make suggestions related to test design and different aspects of the automation environment, considering that the design will be done manually, not necessarily with model-based testing tools.
The Most Important Test Automation Pattern: Page Object
As we mentioned before, it’s super important to work with maintainable code for the test automation, otherwise your test cases won’t be considered useful as they won’t produce the expected ROI or will simply die out because they’ll be too expensive to maintain.
What can you do in order to have maintainable test code? Well, the same things you hopefully do to have maintainable code for your applications, such as paying attention to different internal quality metrics and using proper design patterns.
Design patterns are a well-known solution for this problem. They’re adaptable to different contexts so you don’t have to reinvent the wheel every time a problem comes up.
As you can imagine, creating and updating test code in an efficient way is a very common pain point. The solution mainly focuses on the abstraction layers, trying to encapsulate the application in different objects that absorb the impact of the changes that the system under test could suffer during its development. It’s pretty typical that the user interface gets modified from its structure to its elements or its attributes. So, your test framework should consider how these elements could potentially change and be prepared for that.
What can you do for this? Well, the page object pattern proposes having an adaptation layer made up of specific objects that manage the interaction between the test cases and the application under test. To do so, you need to store the different element locators in a very organized way. For example, you could have a class for each web page (if your system has a web interface, which is the most common situation when applying this pattern) and different attributes for each element the tests interact with. Additionally, of course, you could create a hierarchy among the classes that represent the pages in order to modularize and encapsulate the code even more, since you may find that certain elements repeat across the different web pages.
If you want to see some good examples, check this.
Which problem are you solving by having maintainable code? If you have 100 test cases that interact with a certain button and the development changed its element locator, then you would need to maintain 100 test cases or at least 100 lines of code! The solution for that is very simple: encapsulation. You have to have the element locator defined in one single place and reference the element from the 100 test cases. Then, if the change happens, you only need to maintain one line of code and all your test cases will work properly.
Going one step further, even more encapsulation and abstraction could be added to your test architecture. You could define different test methods in the page objects, including common actions. The most basic example is when you have the login page, you could have a login method executing the following steps:
- Access the URL
- Type the username
- Type the password
- Press the button
- Verify if the login was successful
Again, if you do not do that, then you will have many lines of duplicate code, undermining maintainability.
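Here is a minimal sketch of the idea in plain Python. The LoginPage class and the fake driver are illustrative stand-ins (not a real Selenium API), just to show how locators and common actions live in a single place:

```python
class FakeDriver:
    """Stands in for a browser driver so the example is self-contained."""
    def __init__(self):
        self.url = None
        self.fields = {}
        self.logged_in = False

    def get(self, url):
        self.url = url

    def type(self, locator, value):
        self.fields[locator] = value

    def click(self, locator):
        # Pretend the backend accepts exactly one user, for illustration.
        self.logged_in = (self.fields.get("#user") == "admin"
                          and self.fields.get("#pass") == "secret")

class LoginPage:
    # Locators live in ONE place: if the UI changes, only these lines change,
    # not the 100 test cases that use them.
    URL = "https://example.test/login"
    USER_INPUT = "#user"
    PASS_INPUT = "#pass"
    SUBMIT_BUTTON = "#submit"

    def __init__(self, driver):
        self.driver = driver

    def login(self, username, password):
        """Common action encapsulated once: access, type, type, press, verify."""
        self.driver.get(self.URL)
        self.driver.type(self.USER_INPUT, username)
        self.driver.type(self.PASS_INPUT, password)
        self.driver.click(self.SUBMIT_BUTTON)
        return self.driver.logged_in

page = LoginPage(FakeDriver())
print(page.login("admin", "secret"))  # -> True
```

Every test case calls the page object's methods; none of them repeats locators or login steps, so a UI change is absorbed in one place.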
Therefore, when you find yourself starting an automation project and designing your test framework, take into consideration at least the two following things as the basis:
- Page object pattern
- Data-driven testing
It's well known that Page Object is not the only pattern you can use. For example, you can achieve the same goals by using Screenplay, and of course any other design patterns that could be used within the framework to improve the maintainability of the code.
Design Tests According to Your Goals
As with everything in life, you must have an objective in mind for test automation. Think about what you want the automated tests to be used for and act accordingly. You will have to make certain decisions about going one way or another, selecting certain test cases instead of others and designing them with a certain approach or strategy.
Every team and company may have different goals for test automation. Here are just some of the possibilities:
- Consistent and repeatable testing, making sure the same actions are always being executed, with the same data, and verifying every single thing that has to be verified, both at the interface level and at the database level
- Run test cases in an unsupervised way
- Find regression errors at a lower cost and at an earlier stage
- Run test cases more often (for instance, after every commit to the code repository)
- Improve the software's quality (more testing = more chances for improvement), thereby increasing user confidence
- Measure performance
- Test different operating systems, browsers, settings, DBMS (Database Management Systems), and so on without doubling the execution cost
- Reduce the release time to market/run tests faster
- Improve tester morale, executing boring, routine tasks automatically
- Follow a continuous integration approach, thereby detecting bugs earlier, and run test cases at night
- Have a set of test cases to run before every new product version
- Have basic tests such as smoke tests or sanity checks to know if the version released for testing is valid or catches on fire easily
- Make sure that the incidents reported don’t come back to the client
Even though some of these might look similar, it is important to ask yourself which of these objectives you want to accomplish and how to measure whether or not those targets are being hit before beginning with any automation project.
None of these objectives are mutually exclusive, they even complement each other.
It is just as important to keep the objectives in mind as it is to define the objectives. A possible danger could be, for example, when the person in charge of automating is a programmer and they end up finding the tool challenging and fun. That’s great, but it could lead to a scenario in which they end up automating a lot of functionalities without having analyzed beforehand how relevant they actually are in order to reach their goals.
With an objective in mind, it will be easier to determine which test cases to automate. For this, you can practice “risk-based testing.” This test strategy gives higher priority to testing the elements that are most at risk of failing, whereas, if said failures were to occur, they would carry the greatest negative consequences.
With this in mind, it’s paramount to run a risk analysis to decide which test cases to automate, taking into account different factors:
- How important or critical the functionality is for running the business
- The potential financial impact of the errors
- The probability of failure (it would be a good idea to ask developers who would know, for example, which module had to be finished in less time than others)
- Service Level Agreements (SLA)
- If there is money or lives are at stake (it may seem dramatic, but we know that many systems deal with highly sensitive information)
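As a rough illustration, a risk analysis like this can be as simple as scoring each candidate by impact times probability and sorting; the functionalities and scores below are entirely made up:

```python
# Toy risk scoring to prioritize what to automate first.
# Both the factors and the 1-5 scales are illustrative assumptions.
test_candidates = [
    {"name": "checkout payment", "impact": 5, "probability": 4},
    {"name": "profile avatar upload", "impact": 1, "probability": 2},
    {"name": "login", "impact": 5, "probability": 2},
]

for t in test_candidates:
    # Classic risk formula: impact x likelihood of failure
    t["risk"] = t["impact"] * t["probability"]

by_priority = sorted(test_candidates, key=lambda t: t["risk"], reverse=True)
print([t["name"] for t in by_priority])
# -> ['checkout payment', 'login', 'profile avatar upload']
```

In practice the scores would come from conversations with the business and the developers, but even a crude ranking like this beats automating whatever happens to be fun.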
For more on risk-based testing, check out this post.
How to Automate
Let's say you already have your test cases designed. You'll start by checking the functionality inventory (or backlog, or wherever you store this information) and assigning a level of priority to each. Afterwards, you should assign a priority to each test case prepared for each of the different functionalities. This organizing and prioritizing will help divide the work (in case there's a group of testers) and put it in order, given that grouping the test cases by some criteria, for example by functionality, is highly recommended.
Test case designs for automated testing are better off being defined at two levels of abstraction. On one hand, you have what we will call abstract or parametric test cases, and on the other, the so-called specific or concrete test cases.
Let's review these concepts and apply them to this particular context. Abstract test cases are test scripts that, when indicating what data will be used, do not refer to concrete values, but to equivalence classes, or a valid set of values, such as "number between 0 and 18", "string of length 5", or "valid client ID".
On the other hand, there are concrete test cases, where the abstract test cases are given specific values, such as the number "17", the string "abcde", or "1.234.567-8", which could be a valid identifier. These are the ones you can actually execute, and that's why they're also called "executable test cases".
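One possible way to sketch the jump from abstract to concrete test cases is to draw values from the equivalence classes; the spec format below is our own invention, purely for illustration:

```python
import random
import string

# Turn an abstract (parametric) test case's data spec into a concrete
# (executable) value by sampling from its equivalence class.
def concrete_value(abstract_spec):
    kind = abstract_spec["class"]
    if kind == "int_range":          # e.g. "number between 0 and 18"
        return random.randint(abstract_spec["min"], abstract_spec["max"])
    if kind == "string_of_length":   # e.g. "string of length 5"
        return "".join(random.choices(string.ascii_lowercase,
                                      k=abstract_spec["length"]))
    raise ValueError(f"unknown equivalence class: {kind}")

age = concrete_value({"class": "int_range", "min": 0, "max": 18})
name = concrete_value({"class": "string_of_length", "length": 5})
assert 0 <= age <= 18 and len(name) == 5
```

The same abstract case can produce many concrete cases, which is exactly what makes it the natural input for data-driven testing.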
It is important to make the distinction between these two “levels” as you will be working with them at different stages of the automation process in order to follow a data-driven testing approach, which differs greatly from simple scripting.
For automated tests scripts, data-driven testing implies testing the application by using information from a data source, like a CSV file, spreadsheet, database, etc., instead of having the same data hardcoded in the script.
In other words, you parametrize the test case, allowing it to run with different data. The main goal is to be able to add more test cases by simply adding more lines to the test data file.
In addition, consider the test oracle. When a test case is designed, the tester expresses the actions and data to be used in the execution, but what happens with the oracle? How do you determine if the result of the test case is valid or invalid? It is necessary to define the validation actions that allow you to capture an oracle capable of determining whether the behavior of the system is correct or incorrect. You have to add sufficient validations to reach a verdict while, step by step, avoiding false positives and false negatives.
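Putting data-driven testing and the oracle together, here is a minimal self-contained Python sketch; the login function and the CSV columns are hypothetical, and the CSV is inlined only to keep the example runnable:

```python
import csv
import io

# Test data lives in a CSV (normally a separate file); the "expected"
# column acts as the oracle for each row.
TEST_DATA = """username,password,expected
admin,secret,ok
admin,wrong,error
,"",error
"""

def login(username, password):
    """Hypothetical function under test."""
    return "ok" if (username, password) == ("admin", "secret") else "error"

failures = []
for row in csv.DictReader(io.StringIO(TEST_DATA)):
    actual = login(row["username"], row["password"])
    if actual != row["expected"]:  # oracle check: a verdict per data row
        failures.append(row)

print(f"{len(failures)} failing rows")
```

The payoff is the one described above: adding a new test case is just adding a line to the data file, with no changes to the script.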
Test Suite Design
Tools typically allow you to group test cases in order for them to be organized and run them all together. The organization can be defined by different criteria such as:
- Module or functionality: grouping all test cases that act on the same functionality.
- Criticality: You could define test cases that must always be run (in every build), given that they are the most important ones; then a medium level (not as critical) to run less frequently (or perhaps only when changes occur in particular functionalities); and one of lesser importance that you'd run if there's enough time (or when a development cycle ends and you want to run all possible tests).
These approaches could even be combined by using crossed or nested criteria.
Defining dependencies between suites can be very useful, given that some functionalities, if they fail, directly invalidate other tests. It makes no sense to waste time running tests that you know will fail. In other words, why run them if they don't bring any new information to the table? It's better to stop everything when a problem arises, attack it head on, and then run the tests again until everything is working properly (this follows the Jidoka methodology).
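With Python's standard unittest module, suite grouping by criticality and the "stop at the first failure" idea can be sketched like this; the test classes and their checks are placeholders, invented for illustration:

```python
import unittest

class SmokeTests(unittest.TestCase):
    """Critical tests: run in every build."""
    def test_login_page_loads(self):
        self.assertTrue(True)  # stand-in for a real critical check

class InvoiceModuleTests(unittest.TestCase):
    """Module-level tests: run less frequently."""
    def test_create_invoice(self):
        self.assertTrue(True)  # stand-in for a real module check

def full_suite():
    """Critical tests first, then the rest."""
    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    suite.addTests(loader.loadTestsFromTestCase(SmokeTests))
    suite.addTests(loader.loadTestsFromTestCase(InvoiceModuleTests))
    return suite

# failfast stops the run at the first failure, in the spirit of Jidoka:
# if a basic test fails, don't waste time on tests that depend on it.
runner = unittest.TextTestRunner(failfast=True, verbosity=0)
result = runner.run(full_suite())
print(result.wasSuccessful())  # -> True
```

Ordering the critical suite first means a broken login stops the run before any invoice test is even attempted.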
Important Considerations to Take into Account
Start Off with Things Clear
Over a product's lifetime, it's important to maintain the automated tests. If it's a small set of tests, it won't be difficult to find the right test to tweak when necessary, but when the set of tests begins to grow, it can get quite messy. That's why it's essential to clearly define the organization and naming conventions to be used, which will help deal with a large set of tests in a simple way in the future.
It's important to define a naming convention for test cases and folders (or whatever the automation tool provides to organize the tests). Even though this practice is simple, it yields great benefits.
Some style recommendations:
- Name test cases so that they give your peers as much information as possible. This shouldn’t be taken lightly: there’s a trade-off to make when naming tests, because names should be rich in information, yet not collide with your styling policy or the linter you’re using, and shouldn’t be too long either. Most importantly, they should have your peers’ approval, since you might be working in a team that will reuse the tests.
- It’s useful to define a folder structure that separates the general test cases (typically login, menu access, etc.) from the different test case modules. The aim is to promote the reuse of test cases, designed in such a way that it is easy to include them in other cases.
- On many occasions there is also a need for temporary test cases, which could be named with a common prefix such as pru or tmp.
How you define the naming convention depends on your preferences and specific needs. Have it clear before preparing scripts and before the repository begins to grow in a disorganized manner.
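For example, a folder layout along these lines (names are purely illustrative) keeps general cases, module-specific cases, and temporary cases apart:

```
tests/
  common/            # general cases reused everywhere (login, menu access)
    login_valid_user
    login_invalid_password
  invoicing/         # module-specific cases
    invoice_create_single_line
    invoice_create_multiple_lines
  tmp/               # temporary cases, prefixed for easy cleanup
    tmp_explore_new_discount_rules
```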
Comments and Descriptions
Every test case and datapool can have a description that summarizes its objective. On top of that, you can include comments that illustrate the steps each test case follows. Inside the datapools, you can also add an extra column for comments indicating the objective of each concrete piece of data, telling you what it’s trying to test.
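For instance, a datapool with a comment column could look like this (columns and data are hypothetical). The comment travels with the data, so a failing row explains itself:

```python
# Sketch: a datapool with an extra "comment" column documenting the intent
# of each data row. File contents and column names are hypothetical.
import csv
import io

datapool = io.StringIO(
    "username,password,expected,comment\n"
    "alice,secret123,ok,happy path\n"
    "alice,,error,empty password must be rejected\n"
    "alice,' OR 1=1--,error,SQL-injection attempt\n"
)

rows = list(csv.DictReader(datapool))
for row in rows:
    # A report entry like "SQL-injection attempt -> error" needs no digging.
    print(row["comment"], "->", row["expected"])
```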
“Read Me” File
In public speaking, there is a well known concept that is, “Nobody ever complained about a too simple speech.” Why? Well, because you can understand it and that is just great! Same thing with test frameworks, you can be super smart in writing tests and creating the most accurate comments for them, but people sometimes need more context, and even more if your framework is going to be public and accessible to a lot of people.
That being said, it’s always a value-add for your framework to include a “Read Me” containing:
- Motivation and goal of the framework
- Technologies used
- Dependencies and how to install them
- Basic explanation of the directory structure
- How to run the tests
- How to generate the reporting
- How to collaborate (if necessary)
Link Between the Test Case and Automated Script
How should scripts be done with the tool? One for every test case? Could a script be made that tests different test cases at the same time?
There are two main options. On one hand, a single script can run several test cases: when it runs, it analyzes various options based on the data or the system state and, according to that evaluation, decides to execute one test case or another. On the other hand, a test case can be modularized into different scripts: several smaller test cases run by one script that includes and manages them all.
As with anything in software engineering, it all depends on the test case. Some propose thinking about how many logical branches the test case presents. From our point of view, the best way is a modular approach: have different modules (scripts) that carry out different parts of the test, and then a script that manages all of those parts. This way, you can reuse the small parts and compose them into different tests in various ways.
In that case, the relationship is one test case made up of several scripts. This approach has several advantages:
- Easier to maintain
- Modules can be reused
- The flow can be changed at different levels
- The test case script is clearer, as you can see the whole flow going through the “bigger picture,” and then dive deeper into the interesting parts
- It’s easier to manage the flow of the test case
- For example, it’s easier to make a certain module repeat itself a certain number of times (fixed or variable). In the typical example of an invoice, if you modularize the part where you enter a line of the invoice with a certain product and quantity, you can make that part execute a certain number of times, with the aim of testing the invoice with different lines.
- It’s easier to analyze the reports
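The modular approach described above can be sketched as follows (the invoice flow and all function names are invented):

```python
# Sketch of the modular approach: small scripts (functions) for each part
# of the flow, plus a top-level script that composes them.

def open_invoice(invoice):
    invoice["lines"] = []

def add_line(invoice, product, qty, price):
    # Reusable module: can be called once per invoice line.
    invoice["lines"].append({"product": product, "qty": qty, "price": price})

def close_invoice(invoice):
    invoice["total"] = sum(l["qty"] * l["price"] for l in invoice["lines"])

def test_invoice_with_three_lines():
    # Top-level script: the whole flow is visible at a glance.
    invoice = {}
    open_invoice(invoice)
    for product, qty, price in [("A", 1, 10.0), ("B", 2, 5.0), ("C", 3, 1.0)]:
        add_line(invoice, product, qty, price)  # repeat the module N times
    close_invoice(invoice)
    assert invoice["total"] == 23.0

test_invoice_with_three_lines()
```

The `add_line` module is the part that repeats a variable number of times, exactly as in the invoice example; other tests can reuse the same modules in different combinations.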
If you have documentation of the test cases (if they used to be manually executed for instance), a good practice would be to have a matrix that connects all the test cases with the different scripts involved. This helps you to know what to verify when certain requirements change that have an impact on tests and consequently, on some scripts.
An alternative is to design test cases in a linear manner when the results are deterministic, and only add different flows when there is variability that cannot be determined beforehand. But the best option is to keep things simple and sequential. Coming from a programming background, we often tend to make very generic test cases (that cover all cases), and they end up being too complex.
If a single test case is designed to contemplate all options, it will probably be harder to comprehend: you have to analyze what is being checked at each decision (branch), what each flow does, and so on, unless you’re very careful and fill the test case with comments to simplify that analysis. A sequential test case with a descriptive name, on the other hand, tells you what it is and what it does.
Moreover, if one day you decide to add a new case, where should you add it? How do you add the branch? How do you handle the data associated with it? If instead you create a new, sequential test case with its own datapool, the task becomes much simpler.
Avoiding False Positives and False Negatives
When dealing with automation, one of the most delicate areas is results that lie, otherwise known as false positives and false negatives. Those of us who have automated before know this to be an issue, and those of you who are about to begin, let us give you fair warning: you will encounter this problem! What can you do to avoid it? What can you do so that the test case does what it is supposed to do? Doesn’t that sound like testing?
We’ve explained in detail in this post how to avoid false positives and negatives in your test automation.
System Tests that Interact with External Systems
What happens if your application communicates with other applications through complex mechanisms? What happens if it uses web services exposed to other servers? What happens if its logic is very complex? Can you automate more tests in these situations?
Imagine the following: A button in the application being tested executes a complex logic, there’s communication between several external applications, and a rocket is launched!
The automation tools (at least the ones we are focusing on here) aim to reproduce the user’s interaction with the system; therefore, these background complexities hardly matter.
Once the user presses a button, the logic being executed due to that action could be simple or complex, but to the tool, this is hidden (just as hidden as it is for the user). It doesn’t matter if you shoot a rocket or something else, what’s important to automate is the user interface in this case.
Sometimes the test case requires other actions that cannot be performed in the browser or the graphical user interface of the system being tested, for example, consulting a database, copying files from a specific place, etc. For these actions, the tools generally offer a way to perform them, either through special built-in functionality or by programming in a general purpose language.
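For example, a validation step outside the UI might query the database directly to confirm a side effect. Here is a minimal sketch using Python’s built-in sqlite3 as a stand-in for the real database (table and data are invented):

```python
# Sketch: a validation outside the UI, querying the database directly
# to confirm that a UI action produced the expected record.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
# Pretend the UI flow under test inserted this row:
conn.execute("INSERT INTO orders (status) VALUES ('confirmed')")

# After driving the UI, the script verifies the record landed in the DB.
status = conn.execute("SELECT status FROM orders WHERE id = 1").fetchone()[0]
assert status == "confirmed"
```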
The fact that an application with complex logic doesn’t add difficulties to the automation process does not mean it doesn’t add difficulties at the time of thinking about and designing the tests. Two aspects that can become the most complicated are the data preparation and the simulation of the external services used. Regarding the latter, there are times in which it would be preferable for the system being tested to actually connect to the external service and other times when it would be better to simulate the service and even test the interaction with it.
The component that mimics the external service is generally known as a mock service, and there are tools to implement it with ease. For example, if the service is a web service, you could consider SoapUI, Postman, or any other tool with mock-server capabilities. These tools have user-friendly interfaces for creating mock services and for testing web services as well.
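To illustrate the idea, here is a minimal mock service sketched with Python’s standard library (the endpoint and the canned response are invented); dedicated tools give you the same thing with a friendlier interface:

```python
# Sketch of a mock service: a local HTTP server returning a canned response,
# so the system under test can be exercised without the real external service.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class MockHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"rate": 42.5}).encode()  # canned answer
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), MockHandler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The system under test would be configured to point at this URL instead
# of the real service:
url = f"http://127.0.0.1:{server.server_port}/rates"
response = json.loads(urlopen(url).read())
server.shutdown()
assert response == {"rate": 42.5}
```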
Thinking of Automation When Planning the Project
A lot of people still believe that testing is something that should be left for the end of the software development life cycle, if there’s time to spare. However, it’s a task that should be well thought out and planned from the beginning, even before planning development. And more than that, it should be considered part of the same activity: development.
When it comes to automation, these are a few of the tasks you need to plan ahead for:
- Verifying and reporting bugs
- Fixing detected bugs
Whether you are in an agile team or not, you must decide when to start automating (from the beginning or after a certain stage in which a stable version of the application exists) and consider the upkeep it will incur. This is inevitably linked to the tool you choose and the conveniences it offers.
Running Your Automated Tests
Using record-and-playback tools sounds easy, but as we’ve already seen, several things must be taken into account before playback happens. Now we will also look at some important aspects of the playback itself.
Managing Test Environments
It is of paramount importance to properly manage the test environments, especially if you are integrating the automated tests into your CI/CD pipeline. To do so, consider the many elements that make up an environment:
- The sources and executables of the application being tested
- The test devices and the data they use
- The database schema and data of the system being tested, which must correspond to the test environment
- If you are using Docker, the images or dockerfiles you need for your testing purposes
Let’s add the complication that you might have different tests to run with different settings, parameters, etc. For this, you either have more than one test environment, or one environment and many database backups, one per set of tests. That adds the extra complexity of maintaining each backup (for example, every time the system changes in a way that modifies the database, those changes must be applied to every backup).
If one of these elements is out of sync with the rest, the tests will likely fail and you will waste resources. It’s important that every time a test reports an error, it’s due to an actual bug and not a false alarm.
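As a toy sketch of the “one backup per set of tests” idea (paths and file contents are invented), restoring the right snapshot before each suite keeps the environment in sync with the data the suite expects:

```python
# Sketch: one database backup per test suite, restored before the run.
# A temp directory stands in for the real environment; paths are invented.
import shutil
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())
backups = root / "backups"
backups.mkdir()
(backups / "invoicing.db").write_text("invoicing snapshot")  # per-suite backup
active_db = root / "test.db"  # the database the environment actually uses

def restore_backup(suite_name):
    # Copy the suite's snapshot over the active database before running it.
    shutil.copyfile(backups / f"{suite_name}.db", active_db)

restore_backup("invoicing")
assert active_db.read_text() == "invoicing snapshot"
```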
How to Execute the Tests, Where, and by Whom
Now, let’s move to another topic that doesn’t have to do with the “technical” side of testing: Planning. It’s necessary to plan the executions, but not just that. Ideally, testing would be considered from the beginning (Yes, we are repeating ourselves, but it needs to be clear!) and if you’re going to automate, think about the tasks associated with it from the start.
When to Run?
The first answer that comes to mind is as often as possible. However, resources may be slim and depending on the quantity of automated tests, the time it takes to run them could be quite long. Don’t forget that the goal is to get early feedback on the most risky aspects of the application, so, the decision could be made following this heuristic:
If you don’t have a lot of tests or they run in a short time span, then execute all of them.
If it takes too long to execute all of them, select what to run:
- Consider priority based on risk
- Take into account impact analysis (based on the changes of the new version to test)
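The heuristic above could be sketched as follows (risk scores, durations, and module names are purely illustrative):

```python
# Sketch of the selection heuristic: run everything if it fits the time
# budget; otherwise prefer tests impacted by the changes, then higher risk.

tests = [
    {"name": "login",   "risk": 9, "minutes": 5,  "modules": {"auth"}},
    {"name": "invoice", "risk": 7, "minutes": 20, "modules": {"billing"}},
    {"name": "reports", "risk": 3, "minutes": 30, "modules": {"reporting"}},
]

def select(tests, changed_modules, budget_minutes):
    if sum(t["minutes"] for t in tests) <= budget_minutes:
        return [t["name"] for t in tests]  # cheap enough: run all of them
    # Impacted tests first, then by descending risk, while the budget lasts.
    ranked = sorted(
        tests,
        key=lambda t: (not (t["modules"] & changed_modules), -t["risk"]),
    )
    chosen, used = [], 0
    for t in ranked:
        if used + t["minutes"] <= budget_minutes:
            chosen.append(t["name"])
            used += t["minutes"]
    return chosen

print(select(tests, changed_modules={"billing"}, budget_minutes=30))
# ['invoice', 'login']
```

With a 30-minute budget, the billing change pulls in the invoice tests first, and the remaining time goes to the riskiest of the rest.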
Know that the greater the number of executions, the higher the return on investment (ROI) you will see. It is not enough to test; you also have to make improvements, and the time it takes to do so must be considered when planning.
Besides planning when, don’t forget to consider where. Usually you would aim to have separate environments. For example:
- Development environment (for each developer)
- Development integration environment
- Testing environment (within the development team)
- Pre-production environment (testing in customer testing facilities)
- Production environment
The set of tests and their frequency in each of these environments might be different.
For instance, in development, agility matters most: you would want to run the tests more frequently, after every major change, before committing to the code repository. For that, it’s convenient to run only the necessary tests. The aim of these tests is to give the developer quick feedback.
Once the developer releases his or her module, or it moves to the consolidation stage, integration tests would run. Ideally, they would run automatically with your CI engine, maybe at night, so in the morning when the developers arrive back at work, they have a list of possible issues to solve and feedback on the changes introduced the day before. The less time between changes and the test results, the faster they will fix things (one of the main benefits of Continuous Integration). This would prevent things that don’t work from moving on to the testing stage. They would act like smoke tests in a way.
Then, when the application is passed on to testing, a larger set of regression tests should be run to validate that the bugs that were reported and marked as fixed are no longer present in this new version. This set of tests might take longer to run. They don’t have to be periodic; they can adjust to the project’s schedule alongside the foreseen release dates.
When the deliverable version of the application is achieved (approved by the testing team), it is released to the client. The client would generally also test it in a pre-production environment, which should mirror the production configuration exactly (this is what distinguishes it from the development team’s testing environment). Having an automated set of tests at this point would also add value.
As you may have noticed throughout this section, UI automated tests, even when done according to best practices, still take time to execute. This demonstrates the importance of the testing pyramid: think of how much regression time is saved if the majority of the tests run at the API level.
What Skills Does a Tester Need to Automate?
According to James Bach, you don’t need special conditions to be able to automate. Ideally, the same testers who have been responsible for the traditional functional testing should address the task of automation because they already know the requirements and the business function of the application. This would prevent the automation from falling into the lap of someone who only knows how to use the tool but is unfamiliar with the application. These testers are better suited for the task for several reasons:
- No competition would be generated between manual and automated testing
- It would help ensure the correct choice of which tests to automate
- The automated testing tools could also be of service in generating data for test cases
- Letting the manual testers perform automation would also eliminate any reason for them to fear being replaced
Things a test automator should know about:
- The application and business domain
- The automation tool
- The platform with which they are working (for typical technical problems)
- Testing (techniques for generating test cases)
Each skill will add value in different ways, making your profile move in different areas of knowledge shown in the figure below. Clearly, the closer you get to the center, the more capacity you will have to add value to the development of a product, but you might want to specialize in one of these areas or any special intersection. This is also known as having “T-shaped” skills.
Finally, note that test automation is not something that can be done in spare moments; it requires dedicated time. In parallel with the manual work, you should begin training with an automation tool, as well as reading material such as this about automated testing methodology and experiences in general.
Now, Time to Automate!
After reading this, we hope you feel confident in getting started with your functional test automation efforts. We’d like to share one last thought….
We highly agree. Automation can’t replace a tester’s job, but it can make it better and more encompassing.
Not everything should be automated, and you shouldn’t try to completely replace manual testing; there are things that cannot be automated, and sometimes it is easier to execute something manually than to automate it. In fact, if every execution could be run manually, it would probably be much better, in the sense that by executing manually, other things can be found at the same time. Do remember that automated tests check, but don’t test. The problem is that some things take more time to check manually, and that is why it’s convenient to automate (what’s worth automating)!
We hope this guide helps you in your endeavor to automate functional testing. Please feel free to comment below if you have any questions or visit our test automation services page to learn more about how our testers can help.
Download a PDF version of our Functional Test Automation Guide here.
About the Authors
Federico Toledo is COO and Co-founder of software testing company, Abstracta. He holds a PhD in Computer Science from UCLM, Spain and has over 15 years of experience in test automation and performance engineering, working hands-on with companies like Shutterfly and Broadcom to improve application performance. Dedicated to the community, he wrote one of the first Spanish books on testing and co-founded Abstracta Academy. Today, he hosts the Quality Sense podcast where he interviews different thought leaders in testing from around the world.
Matias Fornara is a native of a small town in coastal Uruguay named Juan Lacaze, a fan of the Uruguayan soccer team, Peñarol, and is a Senior Technician in IT. He has more than six years of experience in tech, working at Abstracta since 2017, with experience in development, automation, performance testing and Scrum. Matias has also worked as a functional test automation instructor at the BIOS institute in Uruguay. In his spare time, he enjoys playing sports, particularly swimming and soccer, as well as listening to music and playing his guitar.