Detecting errors that automated functional tests often miss
Generally, when we think of bugs in software, failures and unexpected behavior caused by errors in the system logic come to mind. By running (and automating) functional tests, we aim to detect these errors as early as possible. However, we sometimes miss other types of errors that are far more obvious and recognizable to a user, such as visual errors. This raises the question: how important is it to detect these errors within our tests?
In this post, I’ll explain how to achieve this with automated visual regression testing, the advantages and disadvantages of the practice, and some tips to make it more worthwhile.
The Impact of Visual Errors on the User Experience
When talking about visual errors, we’re not referring to the aforementioned errors in the system logic, but to those aesthetic defects that cause interfaces to be displayed incorrectly, thus worsening the user experience.
Here is a clear example of a visual error in Amazon’s mobile experience:
The “Filter” bar is following us in this #GUIGoneWrong from @amazon! Visual #UI bugs sneak into production regularly, but Applitools Eyes is here to help 🛠️ Reach out to learn how visual #AI #testautomation can keep your apps and sites visually flawless pic.twitter.com/IG4oVNC8iv— Applitools (@Applitools) November 12, 2018
As you can see, the filter bar overlaps the product description, which would be annoying for any shopper who wants to read it. In this case the error is an inconvenience, but there are other cases in which a visual error makes an application unusable, for example, when a visual component moves or disappears altogether.
Unfortunately, automated UI tests aren’t meant for visual validation, so they’d miss this kind of error. As you can see by looking at the Amazon example, even tech giants aren’t immune to these bugs.
To sum up thus far:
- Automated UI tests don’t detect visual errors, so an application can ship with them
- It’s important to detect these bugs as they can be very detrimental to the usability and accessibility of an app, directly impacting the UX
Now that we’ve covered what visual errors are and why they matter, let’s see how to detect them earlier in the development cycle.
Visual Regression Testing
Visual regression tests validate that the appearance of an application hasn’t undergone any unexpected changes. They work by first capturing a screenshot of the application (the baseline) and then, after each test run, checking whether the current appearance has changed with respect to that baseline. If it has, the test is marked as failed.
You can think of it like a game of “spot the difference” where you try to find differences between similar images:
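The core comparison can be sketched in a few lines. This is a minimal illustration only, assuming each “screenshot” is a 2D grid of RGB tuples rather than a real captured image; actual tools work on rendered screenshots and are far more sophisticated.

```python
# Minimal sketch of the baseline-comparison idea behind visual regression
# testing. Each "screenshot" here is just a 2D list of RGB tuples so the
# logic is easy to follow; real tools diff actual rendered screenshots.

def diff_images(baseline, current):
    """Return the (row, col) positions where the two images differ."""
    diffs = []
    for y, (base_row, cur_row) in enumerate(zip(baseline, current)):
        for x, (base_px, cur_px) in enumerate(zip(base_row, cur_row)):
            if base_px != cur_px:
                diffs.append((y, x))
    return diffs

WHITE, RED = (255, 255, 255), (255, 0, 0)

baseline = [[WHITE, WHITE], [WHITE, WHITE]]
current  = [[WHITE, WHITE], [RED,   WHITE]]  # one pixel changed

changed = diff_images(baseline, current)
print(changed)             # [(1, 0)]
print(len(changed) == 0)   # False -> the visual check would fail
```

If `diff_images` returns anything at all, the test fails, exactly like spotting a single difference between two near-identical pictures.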
Visual regression tests are especially useful for detecting whether visual errors were introduced by changes to the system. They can be integrated into existing automated testing frameworks (such as Selenium or WebdriverIO) using tools that support visual regression. In addition, many of these tools let you apply different image comparison logics (for example, comparing the structure of an interface or its content), which gives you flexibility in testing and helps you obtain deterministic results.
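To make the “comparison logics” idea concrete, here is a hypothetical sketch contrasting a layout check (structure only) with a content check (visible text). Real visual testing tools compare rendered screenshots; parsing HTML with the standard library is just a simple way to show why the two logics give different verdicts.

```python
# Illustrative sketch: a "layout" comparison looks only at structure,
# while a "content" comparison looks at the visible text. A copy change
# passes the layout check but fails the content check.
from html.parser import HTMLParser

class Snapshot(HTMLParser):
    def __init__(self):
        super().__init__()
        self.layout, self.content = [], []

    def handle_starttag(self, tag, attrs):
        self.layout.append(tag)          # record structure only

    def handle_data(self, data):
        if data.strip():
            self.content.append(data.strip())  # record visible text

def snapshot(html):
    snap = Snapshot()
    snap.feed(html)
    return snap

before = snapshot("<div><h1>Sale</h1><p>20% off</p></div>")
after  = snapshot("<div><h1>Sale</h1><p>30% off</p></div>")

print(after.layout == before.layout)    # True  -> layout check passes
print(after.content == before.content)  # False -> content check fails
```

Choosing the right logic per screen is what keeps the results deterministic: a marketing page with rotating copy might only need a layout check, while a pricing table needs a content check.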
Should You Incorporate Automated Visual Regression Tests?
Now that we have a better understanding of what visual regression tests consist of, let’s compare the advantages and disadvantages of incorporating them into an automated testing process.
For more on test automation, read “When to Automate a Test.”
Advantages of Automatic Visual Regression Testing
- Increase the scope of your existing automated tests
- Detect visual errors early and quickly
Disadvantages of Automatic Visual Regression Testing
- Added maintenance costs, since you need to keep screenshots of web interfaces for each of their many variations: different browsers, devices, operating systems and more
- Image comparisons can return false positives, leading to unreliable results and time wasted analyzing failures that don’t correspond to real errors
Taking these points into account, these tests clearly help solve the problem of detecting visual errors in a timely way. However, their high implementation and maintenance cost can make them unprofitable, to the point where a team ends up discarding them. So it’s key to implement them in a way that mitigates these disadvantages as much as possible.
Good Practices for Automated Visual Regression Tests
- Target the application interfaces where visual regression testing would provide the most valuable results (where the highest priority or most critical cases are). This will help limit the number of images per case and focus on only the most important ones.
- Avoid exact (pixel-by-pixel) comparisons, since they’re more prone to false failures. Instead, use a comparison logic suited to your needs (layout, content, etc.). This will help you obtain more deterministic and reliable results.
- Consider baselines for devices where the application is most often used, and similarly, in their respective most used browsers. This will help to cover only the most important variants.
- In recent years, codeless testing tools have evolved to support visual regression testing; most of them offer visual test recorders for end-to-end tests. Read more in the article The Case of Codeless Testing by Progress.
By implementing these good practices, it’s possible to make the most of the test results and reduce both maintenance costs and the probability of encountering false positives.
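One common way to soften strict pixel comparisons is a tolerance-based check. The sketch below is an assumption-laden illustration (the function name, tolerance values, and mismatch budget are all hypothetical, not from any specific tool): it ignores small per-channel differences such as anti-aliasing noise, and only fails when the share of truly mismatched pixels exceeds a budget.

```python
# Sketch of a tolerance-based comparison: tolerate small per-channel
# differences (e.g. rendering noise) and fail only when the fraction of
# mismatched pixels exceeds a configurable budget.

def images_match(baseline, current, channel_tolerance=10, max_mismatch_ratio=0.01):
    total = mismatched = 0
    for base_row, cur_row in zip(baseline, current):
        for base_px, cur_px in zip(base_row, cur_row):
            total += 1
            # A pixel counts as mismatched only if some channel differs
            # by more than the tolerance.
            if any(abs(b - c) > channel_tolerance for b, c in zip(base_px, cur_px)):
                mismatched += 1
    return (mismatched / total) <= max_mismatch_ratio

baseline = [[(200, 200, 200)] * 100 for _ in range(100)]
noisy    = [[(205, 198, 203)] * 100 for _ in range(100)]  # slight rendering noise
broken   = [[(0, 0, 0)] * 100 for _ in range(100)]        # a real visual change

print(images_match(baseline, noisy))   # True  -> noise tolerated, no false positive
print(images_match(baseline, broken))  # False -> real change still caught
```

Tuning the two thresholds per interface is part of the maintenance work, but it pays off by filtering out the rendering noise that causes most false positives.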
Visual Regression Tools
Last, but not least, let’s look at some of the tools that can allow you to incorporate these tests.
Check out this episode of the Quality Sense podcast, where our COO, Federico Toledo interviews Anand Bagmar, Quality Evangelist and Solution Architect at Applitools.
Percy by BrowserStack is a visual testing platform that, like Applitools, provides SDKs that integrate with UI-level tests in various languages. Its screenshot stabilization features help you avoid false positives caused by font rendering or animated images. For more information, check out Percy’s documentation.
Improve UX with Automated Visual Regression Testing
Ultimately, visual regression tests have a lot of potential: they let us test applications with a visual approach that we can’t achieve with other types of tests, covering errors that directly affect the user experience and that we wouldn’t normally detect. I’d definitely recommend incorporating these tests to any tester, precisely because of how much their results matter to the user experience.
Have you tried automated visual regression testing or plan to? Tell me about your experience in the comments!