{"id":1868,"date":"2015-08-17T17:49:38","date_gmt":"2015-08-17T17:49:38","guid":{"rendered":"http:\/\/www.abstracta.us\/?p=1868"},"modified":"2025-05-05T21:21:38","modified_gmt":"2025-05-05T21:21:38","slug":"software-performance-testing-fallacies-part-2","status":"publish","type":"post","link":"https:\/\/abstracta.us\/blog\/performance-testing\/software-performance-testing-fallacies-part-2\/","title":{"rendered":"Software Performance Testing Fallacies Part 2"},"content":{"rendered":"<h1><span style=\"font-weight: 400; color: #333333;\">Software Performance Testing Fallacies Continued<\/span><\/h1>\n<p><span style=\"font-weight: 400; color: #333333;\">Continuing with the<\/span> <span style=\"font-weight: 400; color: #00b674;\"><a href=\"https:\/\/abstracta.us\/blog\/performance-testing\/software-performance-testing-fallacies-part-1\/\" target=\"_blank\" rel=\"noopener\">previous post<\/a><\/span> <span style=\"font-weight: 400; color: #333333;\">about <strong>software performance testing fallacies<\/strong>, we will take another look at common ways in which many of us are mistaken about performance testing. We will discuss some that are very common in testing, technology, and infrastructure management.<\/span><\/p>\n<h2><span class=\"ez-toc-section\" id=\"Technology_Fallacies\"><\/span><strong><span style=\"color: #00b674;\">Technology Fallacies<\/span><\/strong><span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p><span style=\"font-weight: 400; color: #333333;\">What is one of the main advantages of modern languages like Java or C#? We don\u2019t need explicit memory management, thanks to the garbage collection strategy implemented by the frameworks on which applications are executed. So, <strong>Java or C# guarantee that there will be no memory leaks<\/strong>. 
I regret to say that this is <em>FALSE<\/em>. There are two main situations in which we can produce memory leaks. One is maintaining references to structures we no longer use, such as a list of elements to which we always add but never remove. The other is a bug in the framework itself, whether the JVM or Microsoft\u2019s .NET Framework; for example, we ran into one in a string concatenation operation in one of our projects. This is why it is important to pay attention to memory management and use tools that detect these situations, such as the Java profilers available<\/span> <a href=\"http:\/\/java-source.net\/open-source\/profilers\" target=\"_blank\" rel=\"noopener\">here<\/a><span style=\"color: #333333;\">.<\/span><\/p>
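The first leak pattern is easy to reproduce. Below is a minimal, hypothetical Java sketch (all names and sizes are illustrative) of a collection that is only ever added to: every object in it remains reachable, so the garbage collector can never reclaim it, no matter how sophisticated the GC is.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the first leak pattern: a list we always add to but
// never remove from. Everything in it stays reachable, so the GC cannot
// reclaim it even though the application no longer uses it.
public class LeakSketch {
    // Hypothetical application-wide registry that is never cleaned up.
    static final List<byte[]> COMPLETED_REQUESTS = new ArrayList<>();

    static void handleRequest() {
        byte[] payload = new byte[1024]; // per-request buffer
        // ... process the request ...
        COMPLETED_REQUESTS.add(payload); // leak: reference kept after the request is done
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10_000; i++) {
            handleRequest();
        }
        // All 10,000 buffers are still reachable; a heap profiler would show
        // them accumulating with every request instead of being collected.
        System.out.println("retained buffers: " + COMPLETED_REQUESTS.size());
    }
}
```

A profiler pointed at a process like this would show the registry growing without bound, which is exactly the signal to look for.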
But this is achieved by adjusting configuration parameters, not by explicitly invoking the GC\u2019s execution from code.<\/span><\/p>\n<p><span style=\"font-weight: 400; color: #333333;\">We also saw that <strong>it\u2019s commonplace to think that using any<\/strong> <strong>cache is an easy and quick way of optimizing an application<\/strong>. Therefore, we mistakenly think we will improve our application by simply setting some SQL queries in the cache, without even evaluating other options first. The cache is something quite delicate, and if we are not careful, it could even add more points of failure. When the cache is lost, a non-optimized operation could lead to instability. We must carry out a functional verification on the application to confirm that configuring the cache in a certain way does not change the system\u2019s expected behavior, since queries will no longer provide us with fully updated data. We must measure the cache hit\/miss ratio, as well as the refresh and update costs, in order to analyze which queries will bring more benefits than complications.<\/span><\/p>\n<h2><span class=\"ez-toc-section\" id=\"Test_Design_Fallacies\"><\/span>Test Design Fallacies<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p><span style=\"font-weight: 400; color: #333333;\"><strong>When we test the parts, we are testing the whole.<\/strong> This fallacy has been examined by<\/span> <a href=\"http:\/\/www.amazon.com\/Perfect-Software-Other-Illusions-Testing\/dp\/0932633692\" target=\"_blank\" rel=\"noopener\">Jerry Weinberg<\/a> <span style=\"font-weight: 400; color: #333333;\">and it\u2019s clear to us that this assertion is incorrect. In performance testing, we cannot build a simulation that ignores the overall processes and operations, focusing only on unit tests with no concurrency between different activities. 
We might test a \u201cmoney withdrawal\u201d operation with 1,000 users, and in another test, a \u201cmoney deposit\u201d operation with 1,000 users, and since we don\u2019t expect more than 500 concurrent users in total, we will be satisfied. With this method, we are not guaranteeing that the concurrence of the two transactions will be problem-free. If the two imply some kind of blocking between them, then a total of just 10 users might already cause serious problems in response times.<\/span><\/p>\n<p><span style=\"font-weight: 400; color: #333333;\">There are two almost opposite performance testing fallacies here, and what we consider \u201ccorrect\u201d or \u201cmore adequate\u201d is to find the middle ground. There are those who believe that when we test hundreds of users doing \u201csomething,\u201d probably all of them doing the same thing, we are implementing a good test. And there are those who consider it necessary to include all the functionalities that the system is capable of executing. Neither position is valid. The first one is too simplistic, leaving aside numerous situations that might be the cause of problems. The other has an associated cost that is too high, as it is focused on running a \u201cperfect test\u201d. We must aim <strong>at implementing the best test possible, within the time and resources available, <\/strong>so as to avoid as many complications as possible. This also includes (when time and resources allow) the simulation of cases that might occur in reality, such as deleted caches, a disconnected server, noise in the communications, etc. 
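The withdrawal\/deposit scenario above can be sketched in a few lines of Java. In this hypothetical example (all names and timings are illustrative), both operations serialize on the same account lock, so twenty concurrent users take at least twenty times the critical section\u2019s duration no matter how many threads are available; testing each operation in isolation would never reveal this.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch: "withdraw" and "deposit" both serialize on the same
// account lock, so concurrent users queue up behind one another.
public class ContentionSketch {
    static final Object ACCOUNT_LOCK = new Object();

    static void operate() {
        synchronized (ACCOUNT_LOCK) {
            try {
                Thread.sleep(10); // stand-in for the work done while holding the lock
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    // Runs the operation once per simulated user, all submitted at once,
    // and returns the total elapsed time in milliseconds.
    public static long runConcurrent(int users) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(users);
        CountDownLatch done = new CountDownLatch(users);
        long start = System.nanoTime();
        for (int i = 0; i < users; i++) {
            pool.submit(() -> {
                try {
                    operate();
                } finally {
                    done.countDown();
                }
            });
        }
        done.await();
        pool.shutdown();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        // The critical sections serialize: 20 users x 10 ms means at least
        // 200 ms of total elapsed time, regardless of the thread count.
        System.out.println("elapsed ms with 20 users: " + runConcurrent(20));
    }
}
```

A mixed-load test surfaces this kind of serialization immediately; two separate single-operation tests never do.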
It\u2019s clear that we cannot test every possible situation, but we cannot ignore things either.<\/span><\/p>\n<h2><span class=\"ez-toc-section\" id=\"The_Neighbor_Fallacy\"><\/span>The Neighbor Fallacy<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p><span style=\"font-weight: 400; color: #333333;\">We tend to think that applications that others use without complications will not cause us any problems when we decide to use them ourselves. Why should we carry out performance tests, when our neighbor has been using the same product and it works just fine for them? As mentioned in our<\/span> <span style=\"font-weight: 400; color: #333333;\"><a href=\"https:\/\/abstracta.us\/blog\/performance-testing\/software-performance-testing-fallacies-part-1\/\" target=\"_blank\" rel=\"noopener\">previous post<\/a><\/span><span style=\"font-weight: 400; color: #333333;\">, we shouldn&#8217;t extrapolate any results. Even when the system works with a given load of users, we must tune it, adjust the platform, ensure the correct configuration of the various components, and provide for good performance under the use that our own users will make of that system.<\/span><\/p>\n<h2><span class=\"ez-toc-section\" id=\"Overconfidence_Fallacies\"><\/span>Overconfidence Fallacies<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p><span style=\"font-weight: 400; color: #333333;\">There is a belief that the systems in which we will encounter problems are those developed by programmers who lack experience and make mistakes, among other things. Some managers believe that their <strong>engineers are all quite experienced and so there is no need to test performance, <\/strong>especially if they have developed large-scale systems before without any issues. Of course, it will work out fine. Right? No. We must not forget that programming is a complex activity, and regardless of how experienced we may be, it is common to make mistakes. 
This is even more so when we develop systems that are exposed to multiple concurrent users (the most common case), where performance is affected by many variables. In those cases, we must consider the environment, the platform, the virtual machine, shared resources, hardware failures, and so on.<\/span><\/p>\n<p><span style=\"font-weight: 400; color: #333333;\">Another problem we encounter when we are excessively confident occurs during the implementation of performance tests. In general, it is recommended that tests be carried out in an incremental manner. So, we start by executing 20% of the total load that we want to simulate in order to attack the most serious problems first, and then scale up the load as the problems found are fixed. But there are those who prefer to work with the full load from the very start in order to find the problems faster. The trouble with that approach is that all the problems come up at once, making it harder to focus on each one of them and arrive at efficient solutions.<\/span><\/p>\n<h2><span class=\"ez-toc-section\" id=\"Automation_Fallacies\"><\/span>Automation Fallacies<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p><span style=\"font-weight: 400; color: #333333;\">Since we are uncovering fallacies related to the tests themselves, another common fallacy that generates high costs is thinking that <strong>changes in an application subject to testing that are not noticed on screen will not affect automation, <\/strong>meaning the scripts that we have prepared to simulate the system\u2019s load. In general, when changes are introduced in the system, even when they don\u2019t affect the graphical interface, we must verify that the test scripts we have prepared continue to correctly simulate the execution by an actual user. Otherwise, we could arrive at the wrong conclusions. 
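One cheap safeguard is to have the load scripts assert on response content, not just status codes, so that a silently changed backend surfaces as a script failure rather than a wrong conclusion. A minimal, hypothetical Java sketch (the expected marker is illustrative):

```java
// Hypothetical sketch: validate a load-test response by its content, not
// only its status code. An error page can still come back with HTTP 200;
// checking the body catches it, along with protocol or payload changes.
// The "balance" marker is illustrative.
public class ResponseCheck {
    public static boolean looksLikeRealFlow(int status, String body) {
        return status == 200 && body.contains("\"balance\"");
    }

    public static void main(String[] args) {
        System.out.println(looksLikeRealFlow(200, "{\"balance\": 120.5}"));
        System.out.println(looksLikeRealFlow(200, "<html>Oops, an error occurred</html>"));
    }
}
```

Checks like this, embedded in each simulated step, are what keep a script honest when the system changes underneath it.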
When parameters are changed (for example, the way in which certain data is processed, or the order in which methods are invoked), the simulated behavior may no longer be in accordance with the actions that a real user applies to the system being tested.<\/span><\/p>\n<p><span style=\"font-weight: 400; color: #333333;\">That wraps up our posts about software performance testing fallacies. Can you think of any others that you have come across? Let us know!<\/span><\/p>\n<p><strong><span style=\"color: #333333;\">For more performance testing fallacies, read<\/span> <span style=\"color: #00b674;\"><a href=\"https:\/\/abstracta.us\/blog\/performance-testing\/software-performance-testing-fallacies-part-1\/\">part one<\/a><span style=\"color: #333333;\">.<\/span><\/span><\/strong><\/p>\n<hr \/>\n<h2><span class=\"ez-toc-section\" id=\"Recommended_for_You\"><\/span><strong>Recommended for You<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p><span style=\"font-weight: 400; color: #00b674;\"><a href=\"https:\/\/abstracta.us\/blog\/performance-testing\/why-performance-testing-is-necessary\/\">Why Performance Testing is Necessary<\/a><\/span><br \/>\n<a href=\"http:\/\/abstracta.us\/blog\/performance-testing\/performance-testing-production\/\">Performance Testing in Production<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Software Performance Testing Fallacies Continued Continuing with the previous post about software performance testing fallacies, we will take another look at common ways in which many of us are mistaken about performance testing. 
We will discuss some that are very common in testing, technology, and&#8230;<\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[32],"tags":[50],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v14.0.2 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Software Performance Testing Fallacies Part 2 | Abstracta<\/title>\n<meta name=\"description\" content=\"In line with the previous post about software performance testing fallacies, we will take another look at common ways in which many of us are mistaken.\" \/>\n<meta name=\"robots\" content=\"index, follow\" \/>\n<meta name=\"googlebot\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<meta name=\"bingbot\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/abstracta.us\/blog\/performance-testing\/software-performance-testing-fallacies-part-2\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Software Performance Testing Fallacies Part 2 | Abstracta\" \/>\n<meta property=\"og:description\" content=\"In line with the previous post about software performance testing fallacies, we will take another look at common ways in which many of us are mistaken.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/abstracta.us\/blog\/performance-testing\/software-performance-testing-fallacies-part-2\/\" \/>\n<meta property=\"og:site_name\" content=\"Blog about AI-powered quality engineering for teams building complex software | Abstracta\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/AbstractaQA\/\" \/>\n<meta property=\"article:published_time\" content=\"2015-08-17T17:49:38+00:00\" \/>\n<meta 
property=\"article:modified_time\" content=\"2025-05-05T21:21:38+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/abstracta.us\/wp-content\/uploads\/2016\/07\/True_of_False-min.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"420\" \/>\n\t<meta property=\"og:image:height\" content=\"236\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@AbstractaUS\" \/>\n<meta name=\"twitter:site\" content=\"@AbstractaUS\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebSite\",\"@id\":\"https:\/\/abstracta.us\/blog\/#website\",\"url\":\"https:\/\/abstracta.us\/blog\/\",\"name\":\"Blog about AI-powered quality engineering for teams building complex software | Abstracta\",\"description\":\"AI-powered quality engineering\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":\"https:\/\/abstracta.us\/blog\/?s={search_term_string}\",\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/abstracta.us\/blog\/performance-testing\/software-performance-testing-fallacies-part-2\/#webpage\",\"url\":\"https:\/\/abstracta.us\/blog\/performance-testing\/software-performance-testing-fallacies-part-2\/\",\"name\":\"Software Performance Testing Fallacies Part 2 | Abstracta\",\"isPartOf\":{\"@id\":\"https:\/\/abstracta.us\/blog\/#website\"},\"datePublished\":\"2015-08-17T17:49:38+00:00\",\"dateModified\":\"2025-05-05T21:21:38+00:00\",\"author\":{\"@id\":\"https:\/\/abstracta.us\/blog\/#\/schema\/person\/78cd0dcae50ce820b25e86d3330e9762\"},\"description\":\"In line with the previous post about software performance testing fallacies, we will take another look at common ways in which many of us are 
mistaken.\",\"breadcrumb\":{\"@id\":\"https:\/\/abstracta.us\/blog\/performance-testing\/software-performance-testing-fallacies-part-2\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/abstracta.us\/blog\/performance-testing\/software-performance-testing-fallacies-part-2\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/abstracta.us\/blog\/performance-testing\/software-performance-testing-fallacies-part-2\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"item\":{\"@type\":\"WebPage\",\"@id\":\"https:\/\/abstracta.us\/blog\/\",\"url\":\"https:\/\/abstracta.us\/blog\/\",\"name\":\"Home\"}},{\"@type\":\"ListItem\",\"position\":2,\"item\":{\"@type\":\"WebPage\",\"@id\":\"https:\/\/abstracta.us\/blog\/performance-testing\/\",\"url\":\"https:\/\/abstracta.us\/blog\/performance-testing\/\",\"name\":\"Performance Testing\"}},{\"@type\":\"ListItem\",\"position\":3,\"item\":{\"@type\":\"WebPage\",\"@id\":\"https:\/\/abstracta.us\/blog\/performance-testing\/software-performance-testing-fallacies-part-2\/\",\"url\":\"https:\/\/abstracta.us\/blog\/performance-testing\/software-performance-testing-fallacies-part-2\/\",\"name\":\"Software Performance Testing Fallacies Part 2\"}}]},{\"@type\":[\"Person\"],\"@id\":\"https:\/\/abstracta.us\/blog\/#\/schema\/person\/78cd0dcae50ce820b25e86d3330e9762\",\"name\":\"Sof\\u00eda Palamarchuk, Co-CEO at Abstracta\",\"image\":{\"@type\":\"ImageObject\",\"@id\":\"https:\/\/abstracta.us\/blog\/#personlogo\",\"inLanguage\":\"en-US\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/222e8b1136482564fe25acc4de2b9b7a?s=96&d=blank&r=g\",\"caption\":\"Sof\\u00eda Palamarchuk, Co-CEO at Abstracta\"},\"description\":\"Co-Chief Executive Officer at Abstracta\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","_links":{"self":[{"href":"https:\/\/abstracta.us\/blog\/wp-json\/wp\/v2\/posts\/1868"}],"collection":[{"href":"https:\/\/abstracta.us\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/abstracta.us\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/abstracta.us\/blog\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/abstracta.us\/blog\/wp-json\/wp\/v2\/comments?post=1868"}],"version-history":[{"count":13,"href":"https:\/\/abstracta.us\/blog\/wp-json\/wp\/v2\/posts\/1868\/revisions"}],"predecessor-version":[{"id":17535,"href":"https:\/\/abstracta.us\/blog\/wp-json\/wp\/v2\/posts\/1868\/revisions\/17535"}],"wp:attachment":[{"href":"https:\/\/abstracta.us\/blog\/wp-json\/wp\/v2\/media?parent=1868"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/abstracta.us\/blog\/wp-json\/wp\/v2\/categories?post=1868"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/abstracta.us\/blog\/wp-json\/wp\/v2\/tags?post=1868"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}