Both types of testing are necessary for building quality software. In this article, we focus on their differences in order to reach a thorough understanding of both, this time through an interview with Alejandro Aires.
By Natalie Rodgers
WOPR29 is just around the corner, and the big news is that this world-renowned event is finally coming to Latin America, specifically to Uruguay. This is a great opportunity for professionals in the region, but what it really represents is a deeper and more significant idea: the importance of extending access to knowledge to promote global development.
In this spirit of extending access to knowledge, and as hosts of WOPR29, at Abstracta we are carrying out this series of articles called “Performance Testing In-Depth”, in which we address everything from basic concepts to advanced topics that are significant for the industry, all in relation to performance testing.
In this article, we intend to focus on the differences between functional and non-functional testing, of which performance testing is a part, in order to achieve a thorough understanding of both and to be able to delve into them from different angles and perspectives.
Before going into the differences, it is important to highlight that both types of testing are necessary for the creation of quality software. They focus on different aspects but are ultimately complementary.
Functional testing consists of verifying that the system and its functionalities work as expected and do what they were developed to do. “To bring it down to earth, in a web system such as an online store, it is essential that the user is able to make purchases, as this is what sustains the business. For this, it is necessary to perform functional testing, focused on verifying that the purchase flow fulfills its objective, its functionality”, exemplified Alejandro Aires, software tester at Abstracta and member of the Performance team, who has experience in both functional and non-functional testing.
Non-functional testing, on the other hand, focuses on cross-cutting factors related to the user experience. Examples include usability, security, accessibility, and performance.
Alejandro Aires proposed an example to better illustrate the subject: “Let’s suppose we are testing a bank transfer. Functional testing is in charge of testing that the transfer is made under the previously established requirements (that it arrives at the correct destination, with the correct amount, etc.), while non-functional testing can verify other issues, such as that the system supports many simultaneous transactions or that it is user-friendly”.
There is another interesting case to analyze: the airbag. If an airbag does not deploy fast enough, that is, with good performance, then it does not fulfill its function. Examples like this show that the separation is not always so clear.
Another difference is in the requirements to start testing. “In general, non-functional tests usually require that the system has a certain stability, i.e. that it has already advanced with functional tests and adjustments,” specified Alejandro. “Even so, this depends on the system, it is not always that way,” he added.
In short, these tests complement each other in the pursuit of greater quality from a holistic view, taking different quality factors into account.
– What types of testing exist within functional testing?
There are many approaches and classifications. For example, we could start by talking about exploratory versus scripted testing. There are different functional testing techniques for designing relevant test cases or test data, which can be used in both an exploratory and a scripted approach, although they are usually associated more with scripted testing, since they are often called test case design techniques.
The particularity of exploratory testing is that its external structure is simple to describe: during a pre-established period of time (no more than two hours), a tester interacts with a product to fulfill the objective of a mission. The aim is then to present and report the results of the process, which the rest of the project stakeholders will use to make well-informed decisions.
On the other hand, we can classify according to the level at which we perform the tests or the moment in the development cycle. So we can distinguish between unit testing, component testing, smoke testing, integration testing, system testing, regression testing, user acceptance testing, and more. I will focus here on some of them:
✔️In unit testing, different units are tested individually. Generally, these are code-level units. These are very specific tests, for which it is necessary to isolate the fragment of code that corresponds exclusively to what you want to test. In the vast majority of cases, developers are responsible for carrying out this type of testing.
Performing this type of testing seeks to detect errors early and prevent them from escalating.
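As an illustration, here is a minimal sketch of a unit test in Python with pytest; the `add_item` function is a hypothetical stand-in for a real unit:

```python
# cart.py -- the isolated unit under test (hypothetical example)
def add_item(cart, item, price):
    """Add an item to the cart dict, rejecting non-positive prices."""
    if price <= 0:
        raise ValueError("price must be positive")
    cart[item] = price
    return cart

# test_cart.py -- unit tests that exercise this single unit in isolation
import pytest

def test_add_item_stores_price():
    assert add_item({}, "book", 25) == {"book": 25}

def test_add_item_rejects_non_positive_price():
    with pytest.raises(ValueError):
        add_item({}, "book", 0)
```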
✔️We perform integration tests when we integrate functionality into the system and need to verify it together with the rest of the components, in order to evaluate whether they work correctly together. This can be at the level of integrating units of code, at the service level (at the API level of a layered system), or even at the system level, when we are integrating changes to different components.
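For example, a service-level integration test might look like this sketch, where the base URL and the `/orders` endpoint are hypothetical:

```python
# test_orders_api.py -- service-level (API) integration test
import requests

BASE_URL = "https://api.example.com"  # hypothetical system under test

def test_created_order_can_be_fetched():
    # One component creates the order...
    created = requests.post(
        f"{BASE_URL}/orders", json={"product_id": "XYZ", "qty": 1}, timeout=10
    )
    assert created.status_code == 201
    order_id = created.json()["id"]

    # ...and another component must return it consistently.
    fetched = requests.get(f"{BASE_URL}/orders/{order_id}", timeout=10)
    assert fetched.status_code == 200
    assert fetched.json()["product_id"] == "XYZ"
```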
✔️Regression tests are a subset of planned tests that are selected to run periodically, for example at each new product release. They are intended to verify that the product has not regressed.
In other words, and in line with the previous point, if we want to verify that the integrations we perform have not affected previously tested functionality, we need to perform regression testing. These tests are essential to verify that what worked well before is still working now.
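With pytest, one common way to implement this is to tag the planned subset with a marker; here is a minimal sketch (the marker name and the tests are assumptions):

```python
# test_checkout.py -- tagging the planned regression subset
import pytest

@pytest.mark.regression  # marker registered in pytest.ini ("markers = regression")
def test_purchase_total_is_unchanged():
    cart = {"book": 25, "pen": 5}
    assert sum(cart.values()) == 30

def test_feature_still_in_development():
    # Not tagged: not part of the periodic regression run yet.
    assert True
```

The regression subset can then be run on each release with `pytest -m regression`.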
– What techniques can be used to design test cases in functional testing?
There are many. There are well-known techniques such as equivalence partitions, boundary values, pairwise combinations or decision tables. There are also more advanced techniques such as those involving the use of state machines.
Instead of going into detail on these techniques (for that, you can review Federico Toledo’s book), I would like to distinguish how they are classified based on different criteria, looking at what white-box and black-box approaches, specific and abstract test cases, and the data-driven testing approach are all about.
✔️White-Box:
If we rely on information internal to the system we are testing, such as the code or the database schema, to design tests, then we are said to be following a white-box approach. The most common practice is to base the design on the source code in order to achieve certain levels of coverage. To give an idea, the simplest of these levels would involve trying to cover all lines of code.
Sometimes, people talk about “transparent-box” instead of white-box. This refers to the fact that the goal of this strategy is to be able to look at what is inside the box we are testing.
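A minimal sketch of what line coverage means in practice, using pytest with the pytest-cov plugin (the `discount` function is a hypothetical unit):

```python
# discount.py -- hypothetical unit under test
def discount(total):
    if total > 100:
        return total * 0.9  # covered only if a test passes a total above 100
    return total            # covered only if a test passes 100 or less

# test_discount.py -- together, these two tests cover every line above
def test_discount_applied_above_threshold():
    assert discount(200) == 180

def test_no_discount_at_or_below_threshold():
    assert discount(50) == 50
```

Running `pytest --cov` reports which lines the tests executed; dropping either test leaves one branch uncovered.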
✔️Black-Box:
Contrary to white-box, the expression “black box” implies that the system is a box whose interior cannot be observed from the outside. In this way of working, we rely solely on the observation of inputs and outputs of the system. This can be at the system, unit, component, or API level, among others.
We could say that with white-box we are concerned with what happens inside the box, and with black-box we are concerned with what happens outside it. Many times the boundary is not clear, or perhaps we are following a black-box approach, but since we know something about what is going on inside, we take advantage of that information. In this sense, some people also talk about “gray-box”, which is simply when both approaches are combined.
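Continuing the hypothetical `discount` example above: a black-box version of those tests would be designed only from the specification (“orders above $100 get 10% off”), without looking at the code, which naturally leads to probing the boundary:

```python
# test_discount_black_box.py -- cases derived from the spec alone
from discount import discount  # hypothetical module from the white-box sketch

def test_above_threshold_gets_ten_percent_off():
    assert discount(150) == 135

def test_exactly_at_threshold_gets_no_discount():
    # The spec says "above $100", so $100 itself should be unchanged.
    assert discount(100) == 100
```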
✔️Abstract and specific test cases:
This is a classification that refers to the specificity with which the test case is detailed.
An abstract test case is characterized by not having determined values for the expected inputs and outputs; instead, variables are used, and the properties they must fulfill are described with logical operators.
A specific test case is an instance of an abstract test case, in which specific values are determined for each input variable and for each expected output. It is not the same to specify a test case where a product with a value greater than $100 is added to the shopping cart as it is to specifically indicate that the product with identifier XYZ, which is already known to have a set cost greater than $100, should be added.
Each abstract test case can be instantiated with different values (following the example, with any product worth more than $100), so there may be different specific test cases at the time of execution or low-level design: a specific value is assigned to each variable, both input and output, according to the properties and logical constraints that have been determined.
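In code, pytest’s parametrization maps nicely onto this idea: the test function plays the role of the abstract test case, and each parameter row is one specific instance of it (the product identifiers and prices below are made up):

```python
# test_expensive_products.py
import pytest

# Abstract case: "a product costing more than $100 is added to the cart".
@pytest.mark.parametrize("product_id, price", [
    ("XYZ", 150.00),   # specific case 1
    ("ABC", 100.01),   # specific case 2, just over the boundary
])
def test_add_expensive_product(product_id, price):
    assert price > 100            # the logical property of the abstract case
    cart = {product_id: price}    # stand-in for the real "add to cart" step
    assert cart[product_id] == price
```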
✔️Data-Driven Testing:
It is a technique for building test cases in which, basically, the input and output data are separated from the flow that is executed in the application. In other words, the test cases are parameterized. To do this, the flow (the series of steps to execute the test case) is represented once, and the expected input and output data are stored in an external source (typically a CSV file, spreadsheet, or database).
This allows adding new test cases easily, by entering new expected input and output data that serve to execute the same flow.
The application flow is defined with abstract test cases, which when executed with a specific set of data somehow become specific test cases.
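A minimal sketch of this approach in Python, where the flow is a pytest function and the data rows live in a CSV file (the fee rule and the file layout are assumptions for illustration):

```python
# transfer.py -- hypothetical rule under test
def transfer_fee(amount):
    return 0.0 if amount <= 1000 else round(amount * 0.01, 2)

# test_transfer_ddt.py -- one flow, many data rows
import csv
import pytest

def load_cases(path="fee_cases.csv"):
    # CSV columns: amount, expected_fee  (e.g. "500,0.0" or "2000,20.0")
    with open(path, newline="") as f:
        return [(float(r["amount"]), float(r["expected_fee"]))
                for r in csv.DictReader(f)]

@pytest.mark.parametrize("amount, expected_fee", load_cases())
def test_fee_for_each_row(amount, expected_fee):
    assert transfer_fee(amount) == expected_fee
```

Adding a new test case is then just a matter of appending a row to the CSV; the flow itself never changes.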
– What types of testing exist within non-functional testing?
Each type of testing is associated with a quality factor. Some of the most prominent are:
✔️Usability Testing:
This type of testing evaluates the degree to which the system can be used by specific users effectively, efficiently, and with satisfaction in a specific context of use. There are several techniques for analyzing usability, which seek to detect possible improvements in features associated with the user experience, such as making a system more intuitive and easier to use, among other things.
✔️Accessibility Testing:
It is part of usability testing, but its focus is that all people can use the system, including, in every case, those who have some kind of disability, whether contextual, temporary, or permanent.
This type of testing helps to detect errors and barriers that may exist in the software but are not easily detectable unless specific tests are performed to find them. Accessibility experts talk about the importance of incorporating accessibility throughout the software development lifecycle.
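Automated checks can catch a small subset of these barriers. As a hedged illustration, this sketch flags images without alternative text, one common WCAG issue (the URL is a placeholder, and a real accessibility audit needs far more than this):

```python
# test_alt_text.py -- checks a single accessibility criterion: images need alt text
import requests
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def images_missing_alt(url):
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return [img for img in soup.find_all("img") if not img.get("alt")]

def test_all_images_have_alt_text():
    offenders = images_missing_alt("https://example.com")  # placeholder URL
    assert not offenders, f"{len(offenders)} images lack alt text"
```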
👉For more information on this topic, we recommend you read these articles.
✔️Security Testing:
The objective here is to look for possible vulnerabilities or threats that may affect the protection, availability, and integrity of data or system functionality. Security testing is important above all as a preventive mechanism.
However, it is not possible to prevent everything and it is crucial to have specific protocols to be used in case of possible computer attacks.
According to the World Economic Forum’s Global Risks Report 2022, malware increased by 358% in 2020, while ransomware increased by 435%.
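As a small, hedged example of a preventive check: a test that probes a login form with a classic SQL injection string (the endpoint and expected responses are assumptions, and a single probe like this is no substitute for a proper security assessment):

```python
# test_login_injection.py -- one basic security probe
import requests

BASE_URL = "https://example.com"  # hypothetical application under test

def test_login_rejects_sql_injection_payload():
    payload = {"username": "admin' OR '1'='1' --", "password": "anything"}
    resp = requests.post(f"{BASE_URL}/login", data=payload, timeout=10)
    # The application must never authenticate this input.
    assert resp.status_code in (400, 401, 403)
```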
👉You can read more about cybersecurity in this article.
✔️Performance Testing:
Both response times and resource consumption are analyzed. For this, multiple concurrent users are simulated and the performance of the application under test is analyzed. The more exposed an application is, and the more users and variables there are around it, the more important it becomes to perform performance testing.
To analyze these behaviors, the system is usually put under a large number of concurrent users while resource usage is measured in search of bottlenecks. Within this category, we can find load, volume, and stress tests among others.
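For instance, here is a minimal load test sketch with Locust, an open-source Python load-testing tool (the host and endpoint are placeholders):

```python
# locustfile.py -- simulates many concurrent users hitting one endpoint
from locust import HttpUser, task, between

class ShopUser(HttpUser):
    wait_time = between(1, 3)  # each simulated user pauses 1-3 s between requests

    @task
    def browse_catalog(self):
        self.client.get("/catalog")  # placeholder endpoint
```

Running `locust -f locustfile.py --host https://example.com` ramps up concurrent users while response times and failures are recorded, so bottlenecks can be observed as the load grows.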
For more information about performance testing, we recommend reading this article by Roger Abelenda.
To conclude, our COO Federico Toledo argued that the ideal is always to design a strategy in which both types of testing are combined from early phases (shift-left), sustained throughout the process (continuous testing), and complemented by monitoring in production, using that information to improve the quality of the software and the quality of our tests (shift-right).
Don’t miss WOPR29! It will be held in Montevideo, Uruguay, from December 6-8, 2022, and is intended for between 20 and 25 professionals. This is a prestigious global event that seeks to deepen the knowledge of performance testing, with experts from around the world.
Would you like to join WOPR29? Find all the details and apply here.
If you are interested in this topic, we recommend you read all the articles in our saga “Performance Testing In-Depth”.
Very soon we will publish a new article with the opinion of Alejandro Aires, in which he will tell us all about his personal journey from functional testing to performance testing, so you can learn more about the different possible paths.
Follow us on Linkedin & Twitter to be part of our community!