{"id":1826,"date":"2015-08-10T21:49:46","date_gmt":"2015-08-10T21:49:46","guid":{"rendered":"http:\/\/www.abstracta.us\/?p=1826"},"modified":"2025-05-05T21:21:38","modified_gmt":"2025-05-05T21:21:38","slug":"software-performance-testing-fallacies-part-1","status":"publish","type":"post","link":"https:\/\/abstracta.us\/blog\/performance-testing\/software-performance-testing-fallacies-part-1\/","title":{"rendered":"Software Performance Testing Fallacies Part 1"},"content":{"rendered":"<h1><span style=\"font-weight: 400; color: #333333;\">Don&#8217;t fall for these common performance testing fallacies<\/span><\/h1>\n<p><span style=\"font-weight: 400; color: #333333;\">It\u2019s always interesting to find out the many ways in which we can be wrong.\u00a0Here we want to point out\u00a0the software performance testing fallacies we have seen lead to poor methods, which end up costing a lot more money down the road. This is our first post on the topic; for part two, click <a href=\"http:\/\/www.abstracta.us\/software-peformance-testing-fallacies-part-2\/\">here<\/a>.<\/span><\/p>\n<p><span style=\"font-weight: 400; color: #333333;\">In his book,<\/span> \u201c<span style=\"color: #00b674;\"><em style=\"color: #00b674;\"><a href=\"http:\/\/www.amazon.com\/Perfect-Software-Other-Illusions-Testing\/dp\/0932633692\" target=\"_blank\" rel=\"noopener\">Perfect Software and Other Illusions About Testing<\/a>,<\/em><\/span><span style=\"font-weight: 400; color: #333333;\">\u201d Jerry Weinberg explained a number of fallacies regarding testing in general. 
In this post, we\u2019ll cover five <strong>performance testing fallacies<\/strong>.\u00a0<\/span><\/p>\n<h2><span class=\"ez-toc-section\" id=\"The_Planning_Fallacy\"><\/span>The Planning Fallacy<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p><span style=\"font-weight: 400; color: #333333;\">We often think that performance tests only take place at the end of a development project, just before rollout, in order to do some fine-tuning and make sure everything goes smoothly. In this view, performance testing is merely a solution to performance problems. In fact, it&#8217;s about detecting and anticipating problems in order to start working on their solutions. The greatest danger is that when we leave performance testing for the end of the project, the serious problems we encounter will be far more costly to solve. It is best to consider performance <strong>from the early stages of development<\/strong> and carry out intermediate tests in order to detect the most important problems that might arise.<\/span><\/p>\n<h2><span class=\"ez-toc-section\" id=\"The_%E2%80%9CJust_Add_More_Hardware%E2%80%9D_Fallacy\"><\/span>The &#8220;Just Add More Hardware&#8221; Fallacy<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p><span style=\"font-weight: 400; color: #333333;\">It&#8217;s typical\u00a0to hear that performance testing is not necessary because any problems detected can be solved by simply adding more hardware: additional servers, memory, etc. Consider\u00a0the case of a memory leak. If we add more memory, we might keep the server up for five hours instead of three, but we won\u2019t have solved the problem. It also doesn\u2019t make sense to increase infrastructure costs when we can be more effective with what we already have and reduce fixed costs in the long run. 
In short, <strong>adding more hardware is not a good substitute for performance testing.<\/strong><\/span><\/p>\n<h2><span class=\"ez-toc-section\" id=\"The_Testing_Environment_Fallacy\"><\/span>The Testing Environment Fallacy<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p><span style=\"font-weight: 400; color: #333333;\">Another hardware-related fallacy asserts that we can perform tests in an environment that does not resemble the actual production environment. For example, testing on Windows for one client and assuming that the application will perform just as well for another client who will deploy the system on Linux. <strong>We must make sure to test in an environment as similar to the production environment as possible.<\/strong> Many elements of the environment affect a system\u2019s performance, including the hardware components, the operating system settings, and the other applications running at the same time.<\/span><\/p>\n<p><span style=\"font-weight: 400; color: #333333;\"><strong>Even the database is an important aspect of the performance testing environment<\/strong>. Some think that performance tests may be carried out with a small test database, but in employing one, problems with SQL queries might go unnoticed. In production, where the database holds thousands of records, those unoptimized queries would surely cause tremendous issues. 
This is why it\u2019s important to keep the test environment as similar to the actual environment as possible.<\/span><\/p>\n<h2><span class=\"ez-toc-section\" id=\"The_Comparison_Fallacy\"><\/span>The Comparison Fallacy<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p><span style=\"font-weight: 400; color: #333333;\">It\u2019s one thing to assume that you can use a performance testing environment that does not resemble the actual production environment, but it\u2019s another to draw conclusions about one environment based on another. <strong>We should <em>never<\/em> extrapolate any results.<\/strong> For instance, you cannot double the number of servers and expect to double the speed. Nor can you simply increase memory to proportionally increase the number of users supported. These assertions are simply mistaken. In general, numerous elements have an impact on overall performance. A chain breaks at its weakest link, so if we strengthen two or three links, the rest remain just as fragile. In other words, if we improve some of the elements that restrict a system\u2019s performance, the bottleneck will simply move to another element along the chain. The only way to be sure is to keep on testing performance.<\/span><\/p>\n<p><span style=\"font-weight: 400; color: #333333;\">Extrapolating in the other direction is not valid in performance testing either. Imagine the case of a client with 1,000 users whose system runs perfectly on an AS400. We cannot deduce from this the minimum hardware necessary to support ten users. We must verify it through testing.<\/span><\/p>\n<h2><span class=\"ez-toc-section\" id=\"The_Thorough_Testing_Fallacy\"><\/span>The Thorough Testing Fallacy<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p><span style=\"font-weight: 400; color: #333333;\">Thinking that a single performance test will prevent all problems is itself a problem. 
When we go about performance testing, due to time and resource restrictions, we aim to detect the riskiest problems, those that may have the greatest negative impact. We usually limit the number of test cases (often to no more than 15) because it is very costly to carry out a performance test covering all functionalities, alternative flows, data, etc. This means there will always be untested situations that may produce, for instance, blocking in the database or response times longer than acceptable. The main thing is to cover the most important and riskiest cases. Every time a problem is detected, we must try to apply its solution to every part of the system where it could have an impact. For example, if we detect that database connections are managed inappropriately in the functionalities being tested, then once a solution is found, it should be applied at every point where connections are involved. Solutions are often global, such as the configuration of a connection pool\u2019s size or the memory assigned to the Java Virtual Machine (JVM).<\/span><\/p>\n<p><span style=\"font-weight: 400; color: #333333;\">Another valid, reassuring approach is to monitor the system under production conditions in order to detect any problems that fall outside the scope of the tests, so that they may be corrected promptly. <strong>Remember, just by running a performance test, you are not always completely clear of any possible problems,<\/strong> but there are several ways to minimize that risk.<\/span><\/p>\n<p><span style=\"font-weight: 400; color: #333333;\">What fallacies have you heard of or dealt with? 
Comment below!<\/span><\/p>\n<p><strong><span style=\"color: #333333;\">For more performance testing fallacies, continue on to\u00a0<\/span><span style=\"color: #00b674;\"><a href=\"http:\/\/abstracta.us\/blog\/performance-testing\/software-performance-testing-fallacies-part-2\/\">part\u00a0two<\/a><\/span>.<\/strong><\/p>\n<p>&nbsp;<\/p>\n<hr \/>\n<h2><span class=\"ez-toc-section\" id=\"Recommended_for_You\"><\/span><strong>Recommended for You<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p><a href=\"http:\/\/abstracta.us\/blog\/performance-testing\/top-10-factors-impacting-application-performance\/\" target=\"_blank\" rel=\"noopener\">Top 10 Factors Impacting Application Performance<\/a><br \/>\n<a href=\"http:\/\/abstracta.us\/blog\/performance-testing\/cloud-performance-challenges\/\" target=\"_blank\" rel=\"noopener\">Cloud Performance Challenges<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Don&#8217;t fall for these common performance testing fallacies It\u2019s always interesting to find out the many ways in which we can be wrong.\u00a0Here we want to point out\u00a0the various software performance testing fallacies\u00a0that we have seen that have\u00a0led to the use of\u00a0poor methods, which end&#8230;<\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[32],"tags":[50],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v14.0.2 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Software Performance Testing Fallacies Part 1 | Abstracta<\/title>\n<meta name=\"description\" content=\"We&#039;ll discuss test planning, testing environment, test comparison, and other software performance testing fallacies that may be hindering development.\" \/>\n<meta name=\"robots\" content=\"index, follow\" \/>\n<meta name=\"googlebot\" content=\"index, follow, max-snippet:-1, 
max-image-preview:large, max-video-preview:-1\" \/>\n<meta name=\"bingbot\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/abstracta.us\/blog\/performance-testing\/software-performance-testing-fallacies-part-1\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Software Performance Testing Fallacies Part 1 | Abstracta\" \/>\n<meta property=\"og:description\" content=\"We&#039;ll discuss test planning, testing environment, test comparison, and other software performance testing fallacies that may be hindering development.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/abstracta.us\/blog\/performance-testing\/software-performance-testing-fallacies-part-1\/\" \/>\n<meta property=\"og:site_name\" content=\"Blog about AI-powered quality engineering for teams building complex software | Abstracta\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/AbstractaQA\/\" \/>\n<meta property=\"article:published_time\" content=\"2015-08-10T21:49:46+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-05-05T21:21:38+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/abstracta.us\/wp-content\/uploads\/2016\/07\/True_of_False-min.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"420\" \/>\n\t<meta property=\"og:image:height\" content=\"236\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@AbstractaUS\" \/>\n<meta name=\"twitter:site\" content=\"@AbstractaUS\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebSite\",\"@id\":\"https:\/\/abstracta.us\/blog\/#website\",\"url\":\"https:\/\/abstracta.us\/blog\/\",\"name\":\"Blog about AI-powered quality engineering for teams building 
complex software | Abstracta\",\"description\":\"AI-powered quality engineering\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":\"https:\/\/abstracta.us\/blog\/?s={search_term_string}\",\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/abstracta.us\/blog\/performance-testing\/software-performance-testing-fallacies-part-1\/#webpage\",\"url\":\"https:\/\/abstracta.us\/blog\/performance-testing\/software-performance-testing-fallacies-part-1\/\",\"name\":\"Software Performance Testing Fallacies Part 1 | Abstracta\",\"isPartOf\":{\"@id\":\"https:\/\/abstracta.us\/blog\/#website\"},\"datePublished\":\"2015-08-10T21:49:46+00:00\",\"dateModified\":\"2025-05-05T21:21:38+00:00\",\"author\":{\"@id\":\"https:\/\/abstracta.us\/blog\/#\/schema\/person\/78cd0dcae50ce820b25e86d3330e9762\"},\"description\":\"We'll discuss test planning, testing environment, test comparison, and other software performance testing fallacies that may be hindering development.\",\"breadcrumb\":{\"@id\":\"https:\/\/abstracta.us\/blog\/performance-testing\/software-performance-testing-fallacies-part-1\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/abstracta.us\/blog\/performance-testing\/software-performance-testing-fallacies-part-1\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/abstracta.us\/blog\/performance-testing\/software-performance-testing-fallacies-part-1\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"item\":{\"@type\":\"WebPage\",\"@id\":\"https:\/\/abstracta.us\/blog\/\",\"url\":\"https:\/\/abstracta.us\/blog\/\",\"name\":\"Home\"}},{\"@type\":\"ListItem\",\"position\":2,\"item\":{\"@type\":\"WebPage\",\"@id\":\"https:\/\/abstracta.us\/blog\/performance-testing\/\",\"url\":\"https:\/\/abstracta.us\/blog\/performance-testing\/\",\"name\":\"Performance 
Testing\"}},{\"@type\":\"ListItem\",\"position\":3,\"item\":{\"@type\":\"WebPage\",\"@id\":\"https:\/\/abstracta.us\/blog\/performance-testing\/software-performance-testing-fallacies-part-1\/\",\"url\":\"https:\/\/abstracta.us\/blog\/performance-testing\/software-performance-testing-fallacies-part-1\/\",\"name\":\"Software Performance Testing Fallacies Part 1\"}}]},{\"@type\":[\"Person\"],\"@id\":\"https:\/\/abstracta.us\/blog\/#\/schema\/person\/78cd0dcae50ce820b25e86d3330e9762\",\"name\":\"Sof\\u00eda Palamarchuk, Co-CEO at Abstracta\",\"image\":{\"@type\":\"ImageObject\",\"@id\":\"https:\/\/abstracta.us\/blog\/#personlogo\",\"inLanguage\":\"en-US\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/222e8b1136482564fe25acc4de2b9b7a?s=96&d=blank&r=g\",\"caption\":\"Sof\\u00eda Palamarchuk, Co-CEO at Abstracta\"},\"description\":\"Co-Chief Executive Officer at Abstracta\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","_links":{"self":[{"href":"https:\/\/abstracta.us\/blog\/wp-json\/wp\/v2\/posts\/1826"}],"collection":[{"href":"https:\/\/abstracta.us\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/abstracta.us\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/abstracta.us\/blog\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/abstracta.us\/blog\/wp-json\/wp\/v2\/comments?post=1826"}],"version-history":[{"count":12,"href":"https:\/\/abstracta.us\/blog\/wp-json\/wp\/v2\/posts\/1826\/revisions"}],"predecessor-version":[{"id":17536,"href":"https:\/\/abstracta.us\/blog\/wp-json\/wp\/v2\/posts\/1826\/revisions\/17536"}],"wp:attachment":[{"href":"https:\/\/abstracta.us\/blog\/wp-json\/wp\/v2\/media?parent=1826"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/abstracta.us\/blog\/wp-json\/wp\/v2\/categories?post=1826"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/abstracta.us\/blog\/wp-json\/wp\/v2\/tags?post=1826"}],"curies":[{"name":"wp","href":"https:\/\/api.w.o
rg\/{rel}","templated":true}]}}