Blog

Performance Testing of a Socket.IO Application

If you’re trying to performance test a system that uses the Socket.IO library, one of the best approaches is to combine Locust, Taurus, and BlazeMeter. To learn more about this topic, read this article by December Labs, one of our Quality Sense Conf gold sponsors.

By December Labs

We know that JMeter is the most widely used performance testing tool today, and you’ll quickly find plenty of information about it. It’s the first tool that comes to mind when you think about performance testing, but for this specific scenario it can end up wasting your time, money, and resources. We’re here to offer you better options.

Without further ado, we’ll tell you about the experience we had:

Nowadays, chat applications are one of the primary communication tools: they are simple, effective, and used by a wide range of people of all ages. For that reason, it is crucial to find the performance testing tool that best ensures the application’s stability and a good user experience.

Real-time applications (RTA) are becoming more and more popular in our field. These applications operate within a time frame that feels instantaneous, so users don’t need to refresh the app or website to receive new messages. Famous examples of these types of applications are instant messengers and chats like WhatsApp, Telegram, and Facebook Messenger.

The Socket.IO library became very popular for implementing this type of app because it helps improve the user experience. Big organizations such as Microsoft Office, Yammer, Zendesk, Trello, and many others rely on this JavaScript library to build solid real-time systems.

The popularity of Socket.IO presents a significant challenge for testers: creating robust benchmark tests for these types of systems. The idea is to share a little bit of our experience and the journey of developing this type of test.

Our initial objective was to verify that the application could support 10,000 concurrent users with response times within an acceptable range, and to find bottlenecks if any existed.

First Round with JMeter

We started the performance testing using JMeter as our primary tool because it is well known and has the largest community, popularity, and documentation. However, we immediately started to face different limitations:

❌Client-server communication could not be restored after a drop.
❌Interacting with the server’s responses was complex.
❌There were problems scaling the tests.
❌The plugins didn’t work as we expected.

After struggling with these limitations for some time, we realized there had to be better options than JMeter. At this point we began researching new technologies to accomplish our objective, and finally we found tools that allowed us to develop a testing script for the Socket.IO application.

Second Round with Locust + Taurus + BlazeMeter

We discovered Locust, a command-line tool written in Python. With this tool, load tests are distributable and scalable, making it possible to quickly ramp up a test simulating thousands of users.

Locust has the advantage of not requiring large configuration files, and testing the Socket.IO application is simpler than with JMeter. For testers who prefer a command-line interface, Locust is faster and easier to configure, enabling testing code to be easily reused between projects. Tests are written in Python, so they can be stored in your version control system along with your project, making it easy to re-run the same tests in the future.
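As a rough illustration of that command-line workflow, a distributed headless run can be started like this (the file name, load figures, and addresses are all illustrative, not from our actual setup):

```shell
# Start a master that coordinates the run: 10,000 users,
# spawned at 100 per second, stopping after 30 minutes.
locust -f locustfile.py --master --headless -u 10000 -r 100 -t 30m &

# Start workers (on the same machine or on others) that generate the load.
locust -f locustfile.py --worker --master-host=127.0.0.1 &
locust -f locustfile.py --worker --master-host=127.0.0.1 &
```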

The first step to creating our test with Locust was to develop a Socket.IO client that could simulate the flows of our final users and contribute value to our performance testing. As the final goal of our test was to support a high number of concurrent users, we had to use BlazeMeter to take advantage of its infrastructure and run our tests in the cloud.

BlazeMeter is a popular SaaS-based performance testing platform, fully compatible with many open-source load testing tools. We decided to use BlazeMeter because of its advantages, for instance:

✔️A simple way to maintain and execute the scripts from one location.
✔️The ability to generate up to 1,000,000 virtual users – no need to worry about infrastructure cost and setup.
✔️Real-time monitoring and reporting.
✔️Easy access to historical reports.

To be able to run our tests in BlazeMeter, we had to customize the image by adding the Socket.IO library. Once we finished the BlazeMeter execution, you might wonder (if you’re facing the same situation) whether it’s possible to generate BlazeMeter reports or graphical response times, and the answer might still be no, or at least it was six months ago. Therefore, we decided to build the latest version of our tests without BlazeMeter, using just Locust and Taurus.

Taurus (Test Automation Running Smoothly) is a free and open automation tool created by BlazeMeter that provides an abstraction layer over your test scripts, giving us a friendly way to develop and configure them.

The initial idea with Taurus was to use it as a mapper between the Locust script and the BlazeMeter infrastructure, since its configuration languages (JSON or YAML) make that integration easy to build.
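For reference, a Taurus YAML along these lines points the built-in Locust executor at the script and describes the load (the script name, address, and load figures below are illustrative):

```yaml
execution:
- executor: locust
  concurrency: 10000   # target virtual users
  ramp-up: 10m         # time to reach full load
  hold-for: 30m        # time to sustain it
  scenario: socketio-chat

scenarios:
  socketio-chat:
    script: locustfile.py                    # the Locust test file
    default-address: http://localhost:3000   # hypothetical chat server
```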

Some Advantages of Taurus:

✔️Easy installation and setup.
✔️Tests are written using JSON or YAML.
✔️Ability to run existing test scripts.
✔️Merge multiple test scripts easily into a single test run.
✔️Real-time reporting.
✔️Integration with the BlazeMeter reporting dashboard.

Following the initial idea of testing the app with 10,000 concurrent users, now with BlazeMeter out of the game and just Locust and Taurus left, we started by ensuring we had a computer with plenty of available resources. We also created a docker-compose setup with the required configuration to run the Locust and Taurus scripts, making it possible to execute them in parallel or on different computers if more virtual users need to be simulated.
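A sketch of such a setup, assuming the official blazemeter/taurus image and a Taurus config named test.yml sitting next to the Locust script (both file names are illustrative):

```yaml
# docker-compose.yml
services:
  taurus:
    image: blazemeter/taurus    # official Taurus image
    volumes:
      - ./:/bzt-configs         # mount the Locust script and Taurus config
    command: test.yml           # config file Taurus should run
```

Starting more copies of this service (or running it on several machines) is what lets the load be generated in parallel.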

Our Conclusion

In the end, we were able to run a test with the desired number of virtual users. We also accomplished our goals by working together with the development team, which continuously fixed the issues found during our testing process until we reached our final goals: good response times, the removal of every possible bottleneck, and, most importantly, support for 10,000 concurrent virtual users.

Studying the available tools before starting each test will always be necessary to ensure you have suitable means to achieve your final purpose. You can get the best results by analyzing your situation and choosing the right tools to create robust, scalable, and reliable tests.

If you are about to start performance testing with Socket.IO, we hope this article helps you!  
