How Soon is Too Soon to Carry Out Performance Tests?
Many developers do not know when to carry out performance tests for optimum results. It's very common for them to leave performance testing until the end, as an afterthought to be done only if there is enough time. In other cases, they run a quick, informal check in their spare time to see how the system is holding up. Some even hire a firm that provides testing services a few days before rollout, just to verify that everything is running smoothly.
Those are indeed some ways to go about testing, but they expose you to risk. As with many things, testing takes time. In some cases, performance testing is simple (think of a system accessed by only two or three users), but most of the time it is complex, and we must plan ahead both to run the tests and to make the necessary corrections.
As Scott Barber puts it, if we leave performance testing for the end, it’s like we’re “requesting a blood test for a patient who has already died.” The longer you wait, the more costly the implementation of corrections and the greater the risk.
Okay. So, how soon should we start testing?
If we run performance tests when the system is unstable, after it has failed functional tests, for example, then we run the risk of:
- Encountering functional errors as we prepare the performance tests, which will hinder automation or cause errors during test execution.
- Detecting functional problems after the system has already been tuned for performance, whose fixes may then change the final performance in unpredictable ways.
It is a best practice to test a specific module or functionality as soon as a stable version of it is available. Intermediate tests run in parallel with development are necessary, as is a full test at the end of the project that simulates the entire load the system will need to support. This leads to better results in final testing, with less likelihood of having to make major changes.
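To make the idea of early, module-level testing concrete, here is a minimal sketch of the kind of small concurrent check you can run the moment a module's build is stable, long before the full end-of-project load test. It uses only the Python standard library; `process_order()` is a hypothetical stand-in for whatever module you are testing, and the user counts and latency budget are illustrative assumptions, not recommendations.

```python
# Minimal sketch of an early, module-level performance check.
# process_order() is a hypothetical placeholder for the module under test
# (in practice it might be an HTTP request, a DB query, or a service call).
import time
from concurrent.futures import ThreadPoolExecutor
from statistics import mean, quantiles


def process_order():
    # Simulated work standing in for the real module call.
    time.sleep(0.01)


def timed_call(_):
    # Measure the latency of a single call to the module.
    start = time.perf_counter()
    process_order()
    return time.perf_counter() - start


def run_load_test(users=10, calls_per_user=20):
    """Simulate `users` concurrent callers, each making several calls,
    and return simple latency statistics."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        latencies = list(pool.map(timed_call, range(users * calls_per_user)))
    return {
        "calls": len(latencies),
        "avg_s": mean(latencies),
        "p95_s": quantiles(latencies, n=20)[18],  # 95th percentile
    }


if __name__ == "__main__":
    results = run_load_test()
    print(results)
    # Fail fast: flag a regression as soon as this module's build is stable.
    # The 0.5 s budget here is an assumed example threshold.
    assert results["p95_s"] < 0.5, "p95 latency budget exceeded"
```

Even a crude check like this, repeated on each stable build, gives you a performance trend per module, so a regression surfaces while the change that caused it is still fresh, rather than during the final full-load test.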