Detecting errors that automated functional tests often miss
Generally, when we think of bugs in software, failures and unexpected behavior caused by errors in the system logic come to mind. By running (and automating) functional tests, we seek to detect these errors as early as possible. However, we sometimes miss other types of errors that are more obvious and recognizable to a user, such as visual errors, which raises the question: how important is it to detect these errors within our tests?
In this post, I’ll explain how to achieve this with automated visual regression testing, cover the advantages and disadvantages of the practice, and share some tips to make it more worthwhile.
The Impact of Visual Errors on the User Experience
When talking about visual errors, we’re not referring to the aforementioned errors in the system logic, but to those aesthetic defects that cause interfaces to be displayed incorrectly, thus worsening the user experience.
Here is a clear example of a visual error in Amazon’s mobile experience:
The “Filter” bar is following us in this #GUIGoneWrong from @amazon! Visual #UI bugs sneak into production regularly, but Applitools Eyes is here to help 🛠️ Reach out to learn how visual #AI #testautomation can keep your apps and sites visually flawless pic.twitter.com/IG4oVNC8iv
— Applitools (@Applitools) November 12, 2018
As you can see, the filter bar overlaps with the product description, which would be annoying for any shopper who wants to read it. In this case, the error is an inconvenience, but there are other cases in which a visual error makes an application unusable, for example, when a visual component moves or disappears altogether.
Unfortunately, automated UI tests aren’t meant for visual validation, so they’d miss this kind of error. As you can see by looking at the Amazon example, even tech giants aren’t immune to these bugs.
To sum up thus far:
- Since automated UI tests don’t detect them, an application can reach production with visual errors
- It’s important to detect these bugs as they can be very detrimental to the usability and accessibility of an app, directly impacting the UX
Now that we’ve covered what visual errors are and why they matter, let’s see how to detect them earlier in the development cycle.
Visual Regression Testing
Visual regression tests validate that the appearance of an application hasn’t undergone any unexpected changes. They first capture a screenshot of the application (a baseline) and then, after each test run, check whether the current appearance has changed with respect to that baseline. If it has, the test is marked as failed.
You can think of it like a game of “spot the difference” where you try to find differences between similar images:
Visual regression tests are especially useful for detecting whether visual errors were introduced after making changes to the system. They can be integrated into existing automated test frameworks (such as Selenium or WebdriverIO) using tools that add visual comparison capabilities. In addition, many of these tools let you apply different image comparison strategies (for example, comparing the layout of an interface or its content), which adds flexibility and helps produce more deterministic results.
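To make the baseline-and-compare workflow more concrete, here is a minimal sketch in TypeScript that takes a screenshot with WebdriverIO and compares it against a stored baseline using the pixelmatch and pngjs libraries. The URL, file paths, and tolerance are illustrative assumptions, and dedicated visual testing tools replace this raw pixel comparison with smarter strategies:

```typescript
import { existsSync, readFileSync, writeFileSync } from 'fs';
import { PNG } from 'pngjs';
import pixelmatch from 'pixelmatch';
import { remote } from 'webdriverio';

// Illustrative values: adjust the URL, paths, and tolerance to your project.
const BASELINE_PATH = './baselines/home.png';
const TOLERANCE = 0.001; // allow up to 0.1% of pixels to differ

async function visualCheck(): Promise<void> {
  const browser = await remote({ capabilities: { browserName: 'chrome' } });
  await browser.url('https://example.com');

  // Capture the current state of the viewport as a PNG.
  const current = PNG.sync.read(Buffer.from(await browser.takeScreenshot(), 'base64'));
  await browser.deleteSession();

  // First run: there is no baseline yet, so store the screenshot and accept it.
  if (!existsSync(BASELINE_PATH)) {
    writeFileSync(BASELINE_PATH, PNG.sync.write(current));
    console.log('Baseline created; review and commit it.');
    return;
  }

  // Later runs: compare against the stored baseline, pixel by pixel.
  // Note: both images must have the same dimensions for pixelmatch to work.
  const baseline = PNG.sync.read(readFileSync(BASELINE_PATH));
  const { width, height } = baseline;
  const diff = new PNG({ width, height });
  const mismatched = pixelmatch(baseline.data, current.data, diff.data, width, height, {
    threshold: 0.1, // per-pixel color sensitivity
  });

  if (mismatched / (width * height) > TOLERANCE) {
    writeFileSync('./diff.png', PNG.sync.write(diff));
    throw new Error(`Visual regression: ${mismatched} pixels differ (see diff.png)`);
  }
}

visualCheck();
```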
Should You Incorporate Automated Visual Regression Tests?
Now that we have a better understanding of what visual regression tests consist of, let’s compare the advantages and disadvantages of incorporating them into an automated testing process.
For more on test automation, read “When to Automate a Test.”
Advantages of Automated Visual Regression Testing
- Increase the scope of your existing automated tests
- Detect visual errors early and quickly
Disadvantages of Automated Visual Regression Testing
- Added maintenance costs, since baseline screenshots must be kept up to date across the many variations of an interface (different browsers, devices, operating systems, and more)
- Image comparisons can return false positives, leading to unreliable results and time wasted analyzing failures that don’t correspond to real errors
Taking these points into account, we can see that these tests help solve the problem of detecting visual errors in a timely way. However, the implementation and maintenance costs they imply can make them unprofitable. If that happens, a team could end up discarding them, so it’s key to implement them in a way that mitigates these disadvantages as much as possible.
Good Practices for Automated Visual Regression Tests
- Target the application interfaces where visual regression testing provides the most valuable results (the highest-priority or most critical cases). This helps limit the number of images per case and focus only on the most important ones.
- Avoid exact (pixel-by-pixel) comparisons, since they’re more prone to false positives. Instead, use a comparison strategy better suited to your needs (layout, content, etc.). This helps obtain more deterministic and reliable results.
- Keep baselines only for the devices where the application is most often used and, similarly, for the most used browsers on those devices. This helps cover only the most important variants.
- In recent years, codeless testing tools have been evolving to support visual regression testing, and most of them offer visual test recorders for end-to-end tests. Read more in the article The Case of Codeless Testing by Progress.
By implementing these good practices, it’s possible to make the most of the test results and reduce both maintenance costs and the probability of encountering false positives.
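As a rough illustration of the first and third practices, the hypothetical configuration below limits visual checks to a handful of high-priority pages and to the most used browser/viewport combinations. The names, fields, and values are made up for the example and don’t correspond to any particular tool:

```typescript
// Hypothetical configuration: the types, keys, and values are illustrative only.
interface VisualCheckTarget {
  name: string;       // logical name of the page under test
  path: string;       // route to open before taking the screenshot
  matchLevel: 'layout' | 'content' | 'exact'; // comparison strategy (avoid 'exact' when possible)
}

// Only the most critical flows get a baseline, keeping maintenance manageable.
export const visualTargets: VisualCheckTarget[] = [
  { name: 'home', path: '/', matchLevel: 'layout' },
  { name: 'product-detail', path: '/products/123', matchLevel: 'content' },
  { name: 'checkout', path: '/checkout', matchLevel: 'layout' },
];

// Restrict baselines to the browser/viewport combinations that your analytics
// show are actually used the most, instead of every possible variant.
export const environments = [
  { browser: 'chrome', width: 1366, height: 768 },
  { browser: 'chrome', width: 390, height: 844 }, // a common mobile viewport
];
```

A test runner can then iterate over visualTargets and environments, so adding or removing a baseline becomes a one-line change.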
Visual Regression Tools
Last but not least, let’s look at some of the tools that allow you to incorporate these tests.
Applitools
Applitools is a visual testing platform that provides a library called Applitools Eyes, available for Java, JavaScript, Python, C#, Ruby, and PHP, which can be integrated into automated tests at the UI level. Through this library, you can add various functionalities to your tests to visually validate your application. Above all, its massive community and extensive library of learning material make this tool stand out.
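As a rough sketch of what that integration can look like with Selenium in TypeScript (based on the Applitools Eyes SDK for JavaScript; confirm package names and method signatures against the current Applitools documentation), a test opens an Eyes session, takes visual checkpoints, and closes it:

```typescript
import { Builder } from 'selenium-webdriver';
import { Eyes, Target } from '@applitools/eyes-selenium';

// Sketch only: verify the package name and API against the current Applitools docs.
async function run(): Promise<void> {
  const driver = await new Builder().forBrowser('chrome').build();
  const eyes = new Eyes();
  eyes.setApiKey(process.env.APPLITOOLS_API_KEY as string);

  try {
    // Start a visual test: the app name and test name identify the baseline.
    await eyes.open(driver, 'Demo App', 'Home page looks right');

    await driver.get('https://example.com');

    // Take a visual checkpoint of the whole window; Eyes compares it to the baseline.
    await eyes.check('Home page', Target.window());

    // Close the test and fetch the results.
    await eyes.close();
  } finally {
    await eyes.abortIfNotClosed(); // safety call in case the test exits early
    await driver.quit();
  }
}

run();
```

The API key comes from an Applitools account, and the baseline for each app/test name pair is stored and compared in the Applitools cloud.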
Check out this episode of the Quality Sense podcast, where our COO, Federico Toledo, interviews Anand Bagmar, Quality Evangelist and Solution Architect at Applitools.
Percy
Percy by BrowserStack is a visual testing platform that, like Applitools, provides SDKs that can be integrated with UI-level tests in various languages. Its snapshot stabilization features help you avoid false positives caused by font rendering or animated images. For more information, check out Percy’s documentation.
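As a small, hedged example of how Percy’s SDKs typically plug into an existing UI test (here with the Selenium SDK in TypeScript; confirm the package name and usage against Percy’s documentation), each snapshot call uploads a screenshot that Percy compares against its stored baseline:

```typescript
import { Builder } from 'selenium-webdriver';
// Percy's Selenium SDK exposes a single snapshot helper.
import percySnapshot from '@percy/selenium-webdriver';

async function run(): Promise<void> {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://example.com');

    // Each named snapshot becomes a comparison point in the Percy dashboard.
    await percySnapshot(driver, 'Home page');

    await driver.get('https://example.com/checkout');
    await percySnapshot(driver, 'Checkout page');
  } finally {
    await driver.quit();
  }
}

run();
```

The test command is normally wrapped with `npx percy exec --` and a `PERCY_TOKEN` environment variable so the SDK can upload the snapshots to the Percy dashboard.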
Oculow
Oculow is an innovative visual testing tool that offers a web-based test management platform and libraries for several programming languages (Java, JavaScript, and Python). It also offers a convenient way to manage baselines using artificial intelligence: you can let Oculow decide on its own whether a captured image contains a visual bug or whether the interface has simply changed. You can read more about it in the Oculow documentation.
Improve UX with Automated Visual Regression Testing
Ultimately, visual regression tests have a lot of potential, since they let us test applications from a visual angle that other types of tests can’t cover. In this way, we gain an important level of coverage against errors that we wouldn’t normally detect and that directly affect the user experience. Incorporating these tests is a practice I’d definitely recommend to any tester, precisely because of how much their results matter for the user experience.
Have you tried automated visual regression testing or plan to? Tell me about your experience in the comments!