Blog

Software Performance Testing Fallacies Part 1

Don’t fall for these common performance testing fallacies

It’s always interesting to discover the many ways in which we can be wrong. In this post, we want to point out several software performance testing fallacies that we have seen lead to the use of poor methods, which end up costing a lot more money down the road. This is our first post on the topic; for part two, click here.

In his book, “Perfect Software and Other Illusions About Testing,” Jerry Weinberg explained a number of fallacies regarding testing in general. In this post, we’ll cover five performance testing fallacies.

The Planning Fallacy

We often think that performance tests take place only at the end of a development project, just before rollout, as a bit of fine-tuning to make sure everything goes smoothly. Seen that way, performance testing is a solution to performance problems, when in fact it’s about detecting and anticipating problems in order to start working on their solutions. The greatest risk is that when we consider performance testing only at the end of the project, we end up encountering very serious problems whose solutions involve much higher costs. It is best to consider performance from the early stages of development and to carry out intermediate tests in order to detect the most important problems that might arise.

The “Just Add More Hardware” Fallacy

It’s typical to hear that performance testing is not necessary because any problems detected can be solved by simply adding more hardware: additional servers, more memory, etc. Consider the case of a memory leak. If we add more memory, we might keep the server up for five hours instead of three, but we won’t have solved the problem. Nor does it make sense to increase infrastructure costs when we could be more effective with what we already have and reduce fixed costs in the long run. In short, adding more hardware is not a good substitute for performance testing.
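
To make the memory leak example concrete, here is a minimal, hypothetical Java sketch of the kind of defect that extra hardware only postpones: a cache that grows with every request and is never evicted (the class name and sizes are invented for illustration).

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical request handler with a cache that is never evicted: heap usage
    // grows for as long as the server runs, so doubling the memory (-Xmx) only
    // delays the OutOfMemoryError, it does not fix it.
    public class SessionCache {

        private static final Map<String, byte[]> CACHE = new HashMap<>();

        public static void handleRequest(String sessionId) {
            // roughly 1 MB of per-session data kept forever, even after the session ends
            CACHE.put(sessionId + "-" + System.nanoTime(), new byte[1024 * 1024]);
        }

        public static void main(String[] args) {
            // Simulate a steady stream of requests: run long enough and the JVM
            // exhausts its heap no matter how much memory it was given.
            for (long i = 0; ; i++) {
                handleRequest("session-" + i);
            }
        }
    }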

The Testing Environment Fallacy

There is another hardware-related fallacy: that we can run tests in an environment that only loosely resembles the actual production environment, for example, testing for a client on Windows and assuming that the application will function just as well for another client who will install the system on Linux. We must make sure to test in an environment as similar to the production environment as possible, because many elements of that environment affect a system’s performance, including the hardware components, the operating system settings, and the other applications executing at the same time.

Even the database is an important aspect of the performance testing environment. Some think that performance tests can be carried out with a small test database, but with one, problems with SQL queries may go unnoticed: a query that responds instantly against a few hundred records can perform terribly against a production database with thousands or millions of records if it was never optimized. This is why it’s important to keep the test environment, data volumes included, as similar to the real environment as possible.
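
As a rough illustration, here is a small Java sketch (it assumes the H2 in-memory database is on the classpath; the table, column, and row counts are invented) of how the same query that is instant on a tiny test database behaves against a production-sized table, with and without an index.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class QueryTimingSketch {

        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:perf")) {
                try (Statement st = conn.createStatement()) {
                    st.execute("CREATE TABLE orders(id INT PRIMARY KEY, customer VARCHAR(64))");
                }
                insertRows(conn, 500_000); // production-like volume, not a 100-row test database

                timeQuery(conn, "unindexed"); // full table scan on the customer column
                try (Statement st = conn.createStatement()) {
                    st.execute("CREATE INDEX idx_customer ON orders(customer)");
                }
                timeQuery(conn, "indexed");   // same query, now an index lookup
            }
        }

        private static void insertRows(Connection conn, int count) throws SQLException {
            try (PreparedStatement ps = conn.prepareStatement("INSERT INTO orders VALUES (?, ?)")) {
                for (int i = 0; i < count; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "customer-" + i);
                    ps.addBatch();
                    if (i % 10_000 == 0) {
                        ps.executeBatch();
                    }
                }
                ps.executeBatch();
            }
        }

        private static void timeQuery(Connection conn, String label) throws SQLException {
            long start = System.nanoTime();
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(
                     "SELECT * FROM orders WHERE customer = 'customer-499999'")) {
                while (rs.next()) { /* drain the result set */ }
            }
            System.out.printf("%s: %d ms%n", label, (System.nanoTime() - start) / 1_000_000);
        }
    }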

The Comparison Fallacy

It’s one thing to assume that you can use a performance testing environment that does not resemble production; it’s another to draw conclusions about one environment based on another. We should never extrapolate results. You cannot duplicate the servers and expect to duplicate the speed, nor simply increase memory to increase the number of users supported. These assertions are simply mistaken. In general, there are numerous elements affecting overall performance, and the chain breaks at its weakest link: if we strengthen two or three links, the rest remain just as fragile. In other words, if we improve one of the elements that restricts a system’s performance, the bottleneck simply moves to another element along the chain. The only way to know for sure is to keep testing performance.
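
Here is a back-of-the-envelope Java sketch of that weakest-link point, with invented tier capacities: end-to-end throughput is capped by the slowest tier, so duplicating a tier that is not the bottleneck buys nothing.

    // Back-of-the-envelope sketch (the capacities are invented): end-to-end
    // throughput is capped by the weakest link, so duplicating a tier that is
    // not the bottleneck does not duplicate speed.
    public class WeakestLink {

        static long systemThroughput(long... tierCapacitiesPerSecond) {
            long min = Long.MAX_VALUE;
            for (long capacity : tierCapacitiesPerSecond) {
                min = Math.min(min, capacity);
            }
            return min;
        }

        public static void main(String[] args) {
            long webTier = 400, appTier = 500, database = 300; // requests per second

            System.out.println("Baseline:         " + systemThroughput(webTier, appTier, database));
            // Doubling the app tier changes nothing: the database is still the bottleneck.
            System.out.println("App tier doubled: " + systemThroughput(webTier, appTier * 2, database));
            // Relieving the actual bottleneck helps, and then the web tier becomes the next one.
            System.out.println("Database doubled: " + systemThroughput(webTier, appTier, database * 2));
        }
    }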

Extrapolating in the other direction is not valid either. Imagine a client with 1,000 users running on an AS/400 that performs perfectly. We cannot simply scale that down to work out the minimum hardware necessary to support ten users; we must verify it through testing.

The Thorough Testing Fallacy

Thinking that one performance test will prevent all problems is itself a problem. When we performance test, we aim (given time and resource restrictions) to detect the riskiest problems, the ones with the greatest potential negative impact. We usually limit the number of test cases (often to no more than 15) because it is very costly to carry out a performance test covering every functionality, alternative flow, data set, etc. This means there will always be untested situations that could produce, for instance, blocking in the database or response times longer than acceptable. The main thing is to cover the main cases, the riskiest ones first. Every time a problem is detected, we must try to apply its solution to each part of the system where it could have an impact. For example, if we detect that database connections are managed inappropriately in the functionalities under test, then once a solution is found, it should be applied at every point where connections are involved. Solutions are often global, such as the configuration of a pool’s size or the memory assigned to the Java Virtual Machine (JVM).
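
As one hedged illustration of such a global fix, the sketch below uses HikariCP as an example of a connection pooling library; the JDBC URL and the numbers are invented and would normally be derived from the load test results.

    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;

    // Sketch of a "global" fix: one pool-size setting affects every functionality
    // that touches the database. HikariCP is used only as an example of a pooling
    // library; the JDBC URL and the numbers are invented.
    public class PoolConfigSketch {

        public static HikariDataSource buildPool() {
            HikariConfig config = new HikariConfig();
            config.setJdbcUrl("jdbc:postgresql://db-host:5432/appdb"); // hypothetical URL
            config.setMaximumPoolSize(20);      // sized from load test results, not guessed
            config.setConnectionTimeout(3_000); // fail fast instead of queueing forever (ms)
            return new HikariDataSource(config);
        }

        // JVM memory is the other global knob mentioned above; it is set at startup
        // rather than in code, for example: java -Xms1g -Xmx2g -jar app.jar
    }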

Another valid and reassuring approach is to monitor the system under production conditions in order to detect any problems that arise because they were outside the scope of the tests, so that they can be corrected promptly. Remember, running a performance test does not completely clear you of every possible problem, but there are several ways to minimize that risk.

What fallacies have you heard of or dealt with? Comment below!

For more performance testing fallacies, continue on to part two.

Comments

  1. August 21, 2015 at 10:58 am

    Great Article.

    One other fallacy that I can think of:
    Short-duration performance tests should be good enough – most teams design tests for 20 or 30 minutes and feel everything is fine if they do not encounter any issues. They do not consider issues that might only be encountered over a long duration (a few of them are listed below, with a sketch of one after the list).
    1. Serious memory leaks that would eventually result in a memory crisis
    2. Failure to close connections between tiers of a multi-tiered system under some circumstances, which could stall some or all modules of the system
    3. Failure to close database cursors under some conditions, which would eventually result in the entire system stalling
    4. Issues that might surface after the maximum queue depth is reached
    5. Gradual degradation of response time of some functions as internal data structures become less efficient during a long test
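
    A hedged Java sketch of point 2 (the DataSource and the query are hypothetical): on the error path the connection is never returned to the pool, so a 20- or 30-minute test rarely notices, but a multi-hour test eventually stalls every module that needs a connection.

        import java.sql.Connection;
        import java.sql.ResultSet;
        import java.sql.Statement;
        import javax.sql.DataSource;

        public class LeakyDao {

            private final DataSource dataSource; // hypothetical pooled data source

            public LeakyDao(DataSource dataSource) {
                this.dataSource = dataSource;
            }

            // Leaky version: if createStatement() or executeQuery() throws,
            // close() is never reached and the connection is lost from the pool.
            public int countOrders() throws Exception {
                Connection conn = dataSource.getConnection();
                Statement st = conn.createStatement();
                ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM orders");
                rs.next();
                int count = rs.getInt(1);
                conn.close();
                return count;
            }

            // Safe version: try-with-resources closes the connection on every path,
            // including exceptions, so long-duration tests behave like short ones.
            public int countOrdersSafely() throws Exception {
                try (Connection conn = dataSource.getConnection();
                     Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM orders")) {
                    rs.next();
                    return rs.getInt(1);
                }
            }
        }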

    1. August 21, 2015 at 2:17 pm

      Hi Soma,

      Thank you for your contribution.

      And yes – short tests do not take into account many problems you might find when running an endurance test, for example. Check out my post on the different types of performance tests and the differences between them: http://www.abstracta.us/types-of-performance-tests/.

      Have a great day!
      Sofia

  2. August 11, 2015 at 8:21 am

    Great article! We have dealt with fallacies too; here are some of them.

    Load testing in an early development stage is sometimes not possible. A specific feature may require an entire chain of small features to be completely working and testable. You could argue that testing each component to get the best performance out of it is a good practice, but experience has shown that a local optimum does not necessarily lead to a global optimum.

    Software development is an incredibly complex task requiring both human management and technical skills. Maybe this is the reason why software development has not been automated yet.

    I would instead recommend doing POCs (proofs of concept) to try out how the model works and to performance test it. This is what we did when we designed our Live Test Reporting. We trashed the entire code about 7 times! Here is our blog post on this:
    https://jellly.io/blog/2015/06/12/real-time-analytics-with-elasticsearch/

    Another Fallacy: Pursuing Performance at All Costs

    The pursuit of optimal performance usually leads to a poorly maintainable code base. Optimized code paths are often hard to read, and consequently harder to maintain. But great improvements can usually be made without sacrificing too much code quality. An IBM study cited in Clean Code showed that improving less than 5% of the code through profiling can lead to more than a 50% performance improvement.

    One should know when to stop trying to improve the application performance.

    The Developer Fear Fallacy

    It follows from the planning fallacy. The project is highly advanced and almost deliverable when the customer discovers through load testing that the performance is poor. The customer tells the development team to fix those issues and deliver a new version, but the development team may refuse to modify the code base any further. They fear breaking things and having to go through an entire functional testing session again.

    This usually happens when there is a lack of unit and functional testing during development and features have been tested manually. Manual testing is like throwing money out the window: it takes a lot of time every time you need to test a feature, and while people are testing, they aren’t developing. They don’t want to create unit tests because they think they should focus on developing features. Some even think that they are experienced enough to know how to avoid bugs.

    The Tested Fallacy

    Doing tests doesn’t prove the absence of bugs, nor the absence of performance issues. It only proves that the tested case works as expected:
    http://c2.com/cgi/wiki?TestsCantProveTheAbsenceOfBugs

    1. August 11, 2015 at 2:11 pm

      Hi Jelly,

      Great feedback! I totally agree with your “Tested” Fallacy… that is what we regard as “Excessive Confidence.”

      There is a belief that the systems where we will encounter problems are those developed by programmers who made mistakes and lack experience, among other things, and that because “my engineers are all quite experienced, there is no need for me to test performance.”

      We must not forget that programming is a complex activity, and regardless of how experienced we may be, it is common to make mistakes.

      Look out for Part 2 next week!
