Mobile Application Testing Strategy: How to Integrate Mobile Performance Validations in a CI/CD Pipeline

When it comes to creating quality software, it is essential to understand why it is important to integrate mobile performance validations into a CI/CD pipeline, and specifically how to do it, as part of your Mobile Application Testing Strategy. Apptim has a CLI tool that allows you to run automated performance validations of your app, integrate them into CI/CD, and set pass/fail criteria for the most important KPIs that affect user experience. Find out more in this article, which includes an interview with Apptim’s CEO, Sofía Palamarchuk.

The importance of Continuous Integration and Continuous Delivery (CI/CD) in today’s world cannot be overstated. These practices enable IT professionals to deliver better software faster, which is crucial in an era in which software quality is critical.

It also deeply impacts the growth, sustainability, and scalability of companies. As you can see, it is key to keep it in mind for your Mobile Application Testing Strategy.

According to CISQ’s 2020 report, “The Cost of Poor Software Quality in the US,” poor software quality cost the US a total of $2.08 trillion in 2020.

As Abstracta’s Chief Technology Officer Roger Abelenda put it, “An efficient continuous integration pipeline is key to creating quality software and remaining competitive in today’s technological landscape. Why? Because it allows teams to focus on the changes delivered, providing value to users sooner, trying ideas faster, avoiding regression problems, and streamlining software delivery”.

Long story short: when it comes to software testing, understanding how to integrate mobile performance validations into a CI/CD pipeline is essential. And Apptim is a truly useful tool for achieving this goal.

How Can Apptim Help With Your Mobile Application Testing Strategy?

Matías Reina 
CEO at Abstracta

Apptim was our first spin-off at Abstracta! It is a mobile app performance testing solution used by more than 250 companies worldwide. One of them is Playtika, an Israel-based company that is among the world’s largest mobile gaming companies.

As Matías Reina, CEO of Abstracta, explained, “Apptim measures device-side performance, as opposed to the server-side performance that most tools focus on. It is true that some of these metrics are affected by the server, but that is not Apptim’s focus”.

To illustrate this point, Matías gave an example: Apptim measures from the moment a user taps the product search button until all the products are actually displayed. “It involves two different processing methods, one on the server and one on the mobile phone.”

Matías went deeper: “The problem comes when you need to simulate high demand, with large amounts of load and thousands of users, and you use tools like JMeter instead of Apptim. In these cases, the analysis is usually split in two: scripts/automation focused on the server (with JMeter) and scripts/automation focused on the client (with Apptim).”

And he continued: “On the other hand, client-side processing depends a lot on the type of device, so it is good to be able to measure on different types of devices to ensure that all target users have a good experience.”

With all this in mind, it is clear that Apptim can be a great ally when it comes to mobile performance testing, as part of your Mobile Application Testing Strategy. But how can you automate these validations on mobile? And how can you truly take advantage of all the benefits offered by Apptim? We talked about this and more with Sofía Palamarchuk, CEO at Apptim.

Sofía Palamarchuk
CEO at Apptim | Partner at Abstracta

– How can people automate mobile performance validations?

Sofía Palamarchuk: You first need to find a way to run automated profiling of your app. The easiest way to get started is by reusing automated functional tests that were created for functional validation purposes. These tests run at the UI level on a packaged version of the app that can be installed on a real device or emulator, and they simulate a real user journey. While such a test runs, you can capture performance data on what’s happening “under the hood”.
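
The idea of capturing data “under the hood” while a test runs can be sketched in plain Python: a background thread samples memory while a workload executes. This is a minimal, illustrative sketch, not Apptim’s implementation; `run_test_step` is a hypothetical stand-in for a real UI test action, and the sampler uses Python’s own `tracemalloc` in place of device-level profiling.

```python
import threading
import time
import tracemalloc

def sample_memory(samples, stop, interval=0.05):
    """Background sampler: records heap usage while the 'test' runs."""
    while not stop.is_set():
        current, _peak = tracemalloc.get_traced_memory()
        samples.append(current)
        time.sleep(interval)

def run_test_step():
    """Hypothetical stand-in for a UI test step; allocates memory like a real app action."""
    data = [bytearray(1024) for _ in range(5000)]
    time.sleep(0.3)
    return len(data)

tracemalloc.start()
samples, stop = [], threading.Event()
sampler = threading.Thread(target=sample_memory, args=(samples, stop))
sampler.start()
run_test_step()      # the "user journey" executes while sampling continues
stop.set()
sampler.join()
print(f"collected {len(samples)} memory samples, peak {max(samples)} bytes")
```

The same pattern applies on a device: the functional test drives the app, and a separate profiling process records CPU, memory, FPS, and battery data in parallel.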

Most mobile teams already have some type of automated UI tests running in their CI pipeline. If your app is in beta or new to the market, you may be thinking about adding them in the near future. This is the best time to think about how to include mobile performance validations in your app release process, always focusing on your Mobile Application Testing Strategy. 

For example, here’s what happens in an Appium test that runs a typical user journey in an e-commerce app: a user searches for a product, selects the product from a list, adds it to a cart, navigates to the cart, and completes the checkout. This functional test might check that the correct product was added to the cart, that the product quantity is correct, or that the checkout works properly. At the same time, we can validate the response time of a simple action like tapping the “add to cart” button, as well as the memory usage when the action is performed several times. Will it cause an OutOfMemory error?
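
The two checks described above can be expressed as simple pass/fail logic. In this sketch, the memory readings and thresholds are hypothetical numbers standing in for data a profiler would capture during the repeated “add to cart” action; steadily rising memory across iterations is the warning sign for a potential OutOfMemory error.

```python
# Hypothetical per-iteration memory readings (MB) captured while the
# "add to cart" action runs 8 times in a row.
memory_mb = [182, 190, 201, 214, 223, 235, 248, 260]

def memory_grows_steadily(samples, min_growth_mb=5):
    """Flag a potential leak: every iteration uses notably more memory than the last."""
    deltas = [b - a for a, b in zip(samples, samples[1:])]
    return all(d >= min_growth_mb for d in deltas)

def response_time_ok(ms, threshold_ms=1000):
    """Pass/fail check for the action's end-user response time (threshold is illustrative)."""
    return ms <= threshold_ms

print("possible leak:", memory_grows_steadily(memory_mb))
print("response time ok:", response_time_ok(850))
```

A stable app would show memory plateauing after a few iterations, so the same check would come back clean.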

If your team doesn’t have any automated functional tests, we strongly recommend you automate a small and valuable use case to start measuring performance over time. For example, measure the app’s startup time or test the login user experience. 

– Should people run these validations using emulators or real devices?

Sofía Palamarchuk: It depends. If what you’re most interested in is comparing or benchmarking different performance metrics of your app (like v2.6 versus v2.5), you should have test environments that are as similar as possible. In particular, the devices used to test should be the same.

As part of your Mobile Application Testing Strategy, you’ll want to minimize the noise in the data that comes with using different environments and look at differences in the measured performance on each version. For this purpose, emulators can be of great help because you can specify the hardware and OS version of the emulated device and use the same emulator for benchmarking. It’s also a cost-effective alternative to using real devices if you run frequent benchmarks. 

On the other hand, if you’re looking to evaluate the real user experience, you need to be as close as possible to real-world conditions. This means testing on real hardware. In addition to looking for noticeable performance differences from one app version to the other, you’ll want to make sure the app’s performance is acceptable on specific devices.

You can do this by defining thresholds per device. For example, memory usage cannot exceed 300 MB on a specific device. Or, you can get notified if the FPS is lower than 10 on any screen (and probably fail the build pipeline).
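
A per-device threshold check like the one described can be sketched as follows. The device names, memory caps, and measured metrics here are all hypothetical; in practice the metrics would come from a test run on each device.

```python
# Hypothetical per-device limits, plus a global minimum FPS for any screen.
DEVICE_THRESHOLDS = {
    "Pixel 4a":   {"max_memory_mb": 300},
    "Galaxy S10": {"max_memory_mb": 450},
}
MIN_FPS_ANY_SCREEN = 10

def check_device(device, metrics):
    """Return a list of threshold violations for one device's measured metrics."""
    violations = []
    limits = DEVICE_THRESHOLDS.get(device, {})
    if "max_memory_mb" in limits and metrics["memory_mb"] > limits["max_memory_mb"]:
        violations.append(
            f"{device}: memory {metrics['memory_mb']} MB exceeds {limits['max_memory_mb']} MB"
        )
    low_fps = [s for s, fps in metrics["fps_per_screen"].items() if fps < MIN_FPS_ANY_SCREEN]
    if low_fps:
        violations.append(f"{device}: FPS below {MIN_FPS_ANY_SCREEN} on {', '.join(low_fps)}")
    return violations

result = check_device("Pixel 4a", {
    "memory_mb": 320,
    "fps_per_screen": {"home": 58, "checkout": 8},
})
for v in result:
    print(v)
```

Any non-empty violation list would then be used to fail the build in the pipeline.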

– What pass and fail criteria should professionals use for their Mobile Application Testing Strategy? 

Sofía Palamarchuk: This is one of the most common questions we get asked and, arguably, the most difficult to answer. Google and Apple provide some best practices for pass/fail criteria. For example, an app rendering at 60 FPS provides the best experience to the end user. Does this mean you have a performance issue if your app renders at 30 FPS? Well, it depends on what type of app you have. A mobile game or an app that has heavy graphics will have higher FPS requirements.

Transactional apps may not need high levels of FPS because knowing how fast certain transactions are completed is more important. Measuring the end-user response time of the login page or an action like adding an item to a cart is a good way to measure transaction speed.

Our recommendation is to define pass/fail criteria with the whole development team as non-functional requirements. This can be the number of crashes or errors, the average percentage of CPU usage (like under 50%), or the app startup time (like under 3 seconds). The end goal is to have more confidence in the quality of every build. If you’re meeting your pass/fail targets every time, you’ll have more certainty regarding the end-user experience. 
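
Defining those non-functional requirements as explicit pass/fail criteria might look like the following sketch, using the example targets above (zero crashes, average CPU under 50%, startup under 3 seconds). The report keys and numbers are illustrative, not a real Apptim output format.

```python
# Pass/fail criteria expressed as non-functional requirements (illustrative values).
CRITERIA = [
    ("crashes",         lambda v: v == 0,  "no crashes or errors"),
    ("avg_cpu_percent", lambda v: v < 50,  "average CPU usage under 50%"),
    ("startup_seconds", lambda v: v < 3.0, "app startup time under 3 seconds"),
]

def build_verdict(report):
    """Evaluate a build's measured KPIs; any failed criterion fails the build."""
    failures = [desc for key, ok, desc in CRITERIA if not ok(report[key])]
    return ("PASS" if not failures else "FAIL"), failures

verdict, failed = build_verdict(
    {"crashes": 0, "avg_cpu_percent": 42, "startup_seconds": 2.4}
)
print(verdict, failed)
```

Running this on every build gives the team the “more confidence in every build” the interview describes: a green verdict means every agreed-upon target was met.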

– How can people use Apptim in CI/CD?

Sofía Palamarchuk: We have a CLI tool that allows mobile developers to run automated performance validations of their app, integrate them into CI/CD, and set pass/fail criteria for the most important KPIs that affect user experience. Those interested in a demo can reach out to our team at [email protected].

Would you like to contact Apptim? Over 10,000 teams have used Apptim for mobile performance testing! Get in touch here.

Are you looking for a partner to perform software testing? Abstracta is one of the most reliable companies in software quality engineering. Get to know our solutions, and contact us to talk about how we can help you grow your business.

Follow us on LinkedIn & Twitter to be part of our community!
