Make sure you can trust the results of your test automation
One of the most delicate issues in test automation is results that lie, otherwise known as false positives and false negatives. Those who have already automated know this to be a problem, and to those who are about to begin, let us give you fair warning: you will encounter it. What can we do to avoid false positives and negatives in test automation? What can we do so that each test case does what it is supposed to do? Doesn’t that sound like testing?
These definitions come from the medical field:
- False Positive: an examination indicates a disease when there is none.
- False Negative: an examination indicates everything is normal when in fact the patient is sick.
If one were to translate this to our field, we could say the following:
- False Positive: a test reports an error (a “disease”) even though the system is behaving correctly. This adds a lot of cost, as the tester will search for a non-existent bug.
- False Negative: a test execution shows no failures even though there is a bug in the application. Like the false positive, this can be caused by an incorrect initial state of the database or by problems with the test environment setup.
If the false positive is a problem because of the extra cost it generates, the false negative is worse: the errors are there, but we are not aware of them and we feel at ease! We trust that all functionalities are covered and being tested, and therefore they must be free of mistakes.
We obviously want to avoid results lying to us! No one likes lies. Automated test case results are expected to be reliable so that we can be assured that we aren’t wasting time checking whether the results are correct or not.
The only choice is to carry out a proactive analysis, checking the quality of our tests and anticipating possible errors. We must be actually thinking about the test and not simply doing a record and playback.
To lower the risk of environment or data problems, we should have a controlled environment that is accessible only through the automated tests. This alone avoids some major headaches: if the data is constantly changing, we will not be able to reproduce the problems the tests detect, nor determine their cause.
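As a sketch of what a controlled environment can look like at the smallest scale (assuming Python and an in-memory SQLite database, purely for illustration; a real project would apply the same idea to its own environment setup):

```python
import sqlite3

def fresh_db():
    """Build an isolated database in a known state for a single test run."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT UNIQUE)")
    return conn

def test_create_user():
    db = fresh_db()  # every run starts from exactly the same state
    db.execute("INSERT INTO users (name) VALUES ('alice')")
    assert db.execute("SELECT name FROM users").fetchone() == ("alice",)
    db.close()
```

Because nothing else can touch this database, a failure here is reproducible: rerunning the test replays exactly the same conditions.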
Moreover, we should check the actual test cases! Because who can assure us they are programmed correctly?
And who better than us testers to test them?
In Search of False Positives
If the software is “healthy,” and we don’t want it to report any errors, we must make sure the test is testing what it is meant to test. So we must verify the starting conditions just as much as the final ones. That is, a test case executes a given set of actions with certain input data in order to verify the output data and the final state, but it is also highly important (especially when the system under test uses a database) to make sure the initial state is what we expected it to be.
For example, if the test creates an instance of a particular entity in the system, it should first verify whether that record already exists before executing the actions under test. If it does, the test will fail (with a duplicate-key error or similar), but the problem is not in the system; it is in the test data. We have two options: check whether the record exists and, if so, use that existing data; or end the test with an “inconclusive” result (or are pass and fail the only possible results for a test?).
If we make sure all the things that could affect our result are in place, just as expected, then we will reduce the percentage of errors that aren’t errors.
In Search of False Negatives
If the software is “sick,” the test must fail! One way of detecting false negatives is to insert errors into the software and verify that the test case finds them; this is in line with mutation testing. However, inserting errors into the system is very difficult without working directly with the developer, and it is also quite expensive to prepare each error, compile it, deploy it, and then verify that the test catches it. In many cases, a cheaper alternative is to vary the test data or configuration. For example, if I have a plain text file as input, I can change something in its content in order to force the test to fail and verify that the automated test case detects the error. In a parameterizable application, the same can be achieved by modifying a parameter.
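A sketch of that idea in Python (the line-counting “system under test” and the file contents are invented for illustration):

```python
import os
import tempfile

def count_records(path):
    """Hypothetical system under test: counts non-empty lines in a file."""
    with open(path) as f:
        return sum(1 for line in f if line.strip())

def check_finds_three_records(path):
    """The automated check whose alertness we want to verify."""
    return count_records(path) == 3

def run_with_content(content):
    fd, path = tempfile.mkstemp(text=True)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(content)
        return check_finds_three_records(path)
    finally:
        os.remove(path)

# Healthy input: the check should pass.
healthy = run_with_content("a\nb\nc\n")
# Mutated input (one record removed): the check must fail now; if it still
# passed, our test would be blind to this class of error.
mutated = run_with_content("a\nb\n")
```

If `mutated` came back `True`, we would know the test cannot detect this kind of fault and needs stronger validations.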
The idea is to verify that the test case notices the mistake, which is why we try to make it fail with these alterations. At the very least, we should ask ourselves: if the software fails at this point, will this test case notice it, or should we add some other validation?
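As an illustration of what “adding another validation” buys us, consider a test that only checks a function’s return value versus one that also checks the final state (the `save_user` functions below are a hypothetical system under test, including a simulated fault):

```python
def save_user(store, name):
    """Hypothetical system under test: stores a user and reports success."""
    store.append(name)
    return True

def buggy_save_user(store, name):
    """Simulated fault: reports success but never actually saves."""
    return True

def weak_test(impl):
    # Only checks the return value.
    return impl([], "alice") is True

def strong_test(impl):
    # Also validates the final state after the call.
    store = []
    ok = impl(store, "alice")
    return ok is True and "alice" in store
```

Against the faulty implementation, the weak test still passes (a false negative), while the extra validation in the strong test catches the bug.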
Both strategies will give us more robust test cases, but keep in mind that they may also be more difficult to maintain later. Of course, this will not be done to every test case we automate, only to the most critical ones, the ones really worth the effort, or perhaps the ones we know will stir up trouble for us every now and again.
What do you do to prevent false positives and false negatives in test automation?