Select one or none
We have source code versioning in place with SVN or Git.
There is some technical debt in the code, and we are working on it.
We perform code review / pair review.
We have a CI/CD engine in place and working.
We have at least three separate environments: integration/development, staging/UAT and production.
We test in different environments, covering various browsers, devices, and operating systems.
We have test data management for each environment.
We use virtual machines, containers, and/or service virtualization.
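A minimal sketch of what containers can look like in testing, assuming the testcontainers-python and SQLAlchemy packages and a local Docker daemon (none of which this checklist prescribes): a disposable database spun up per test run.

```python
# A throwaway Postgres instance per test run; requires Docker plus the
# testcontainers and sqlalchemy packages (with a Postgres driver installed).
from testcontainers.postgres import PostgresContainer
import sqlalchemy

def test_database_is_reachable():
    with PostgresContainer("postgres:16") as pg:
        engine = sqlalchemy.create_engine(pg.get_connection_url())
        with engine.connect() as conn:
            # Sanity query against the disposable database.
            assert conn.execute(sqlalchemy.text("SELECT 1")).scalar() == 1
```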
We have defined a template/standard for bug reporting that includes everything developers need to address problems as quickly as possible.
We have a bug tracking tool.
We do defect causal analysis.
We have full traceability between features, issues, and code.
We have someone in charge of test management.
We have a list/document of our testing goals and risks, and we decide what to test accordingly (a small risk-based prioritization sketch follows this group).
We have a test plan with a clear testing strategy.
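As one illustration of deciding what to test from goals and risks, here is a tiny risk-based prioritization sketch; the likelihood/impact scoring is a common convention rather than anything this checklist mandates, and the feature names and numbers are hypothetical.

```python
# Rank features by risk score (likelihood x impact); test the riskiest first.
features = [
    {"name": "checkout", "likelihood": 4, "impact": 5},  # hypothetical scores
    {"name": "search",   "likelihood": 3, "impact": 3},
    {"name": "profile",  "likelihood": 2, "impact": 2},
]

for f in sorted(features, key=lambda f: f["likelihood"] * f["impact"], reverse=True):
    print(f"{f['name']}: risk score {f['likelihood'] * f['impact']}")
```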
We are agile, with testing and development being a fully unified team. Testing is a part of design and development.
We have a prioritized functionality inventory (backlog/list/document), along with test cases, checklists, and/or exploratory testing sessions.
It is easy for everyone on the test team to know what to test given the time and goals of a specific sprint / time slot.
We apply formal test design techniques and keep our tests prioritized (a boundary-value sketch follows this group).
We have a coverage strategy and feel confident about our functional tests.
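One example of a formal test design technique is boundary value analysis; the sketch below applies it with pytest's parametrize. The discount rule and its threshold are hypothetical, chosen only to show the technique.

```python
import pytest

def discount(order_total: float) -> float:
    # Hypothetical business rule: 10% off orders of 100 or more.
    return order_total * 0.9 if order_total >= 100 else order_total

@pytest.mark.parametrize("total,expected", [
    (99.99, 99.99),                    # just below the boundary
    (100.00, 90.00),                   # exactly on the boundary
    (100.01, pytest.approx(90.009)),   # just above the boundary
])
def test_discount_boundaries(total, expected):
    assert discount(total) == expected
```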
We have automation working, following best practices (e.g., page objects, data-driven tests); a page-object sketch follows this group.
The automated checks run fast and are easy to maintain. We have automated unit and API tests.
Our most important and critical use cases are automated at the UI level.
We’ve mastered automation, with unit, API, and UI automated checks running continuously in our CI/CD pipeline.
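A minimal page-object sketch, assuming Selenium with a local Chrome driver; the URL, locators, and post-login title are hypothetical, meant only to show the pattern of keeping locators out of the test itself.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginPage:
    """Encapsulates the login page so tests never touch raw locators."""
    def __init__(self, driver, base_url):
        self.driver = driver
        self.base_url = base_url

    def open(self):
        self.driver.get(f"{self.base_url}/login")
        return self

    def login(self, user, password):
        self.driver.find_element(By.ID, "username").send_keys(user)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "submit").click()

def test_valid_login():
    driver = webdriver.Chrome()  # assumes a local Chrome/chromedriver setup
    try:
        LoginPage(driver, "https://example.test").open().login("user", "secret")
        assert "Dashboard" in driver.title  # hypothetical post-login title
    finally:
        driver.quit()
```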
We have analyzed our performance requirements, know our workload model, and have designed our performance tests accordingly.
We run client-side performance tests before go-live.
We run load tests that hit the server side with concurrency (using load simulators) to see how it behaves (using monitoring tools); a sketch follows this group.
We run performance tests continuously, catching performance degradations immediately.
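As one way to drive concurrent load from a simulator, here is a minimal Locust sketch (the checklist does not name a tool, so Locust is an assumption, and the endpoints and task weights are hypothetical). It could be run with `locust -f loadtest.py --host https://example.test`.

```python
from locust import HttpUser, task, between

class VisitorUser(HttpUser):
    wait_time = between(1, 3)  # think time between requests, in seconds

    @task(3)  # weighted 3:1 against search
    def browse_home(self):
        self.client.get("/")

    @task(1)
    def search(self):
        self.client.get("/search", params={"q": "test"})
```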
We know the security risks we face, and we have planned security testing to discover vulnerabilities.
We test against the OWASP Top 10 and perform penetration testing.
We run automated security checks in our CI engine and sleep soundly at night.
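A small sketch of one automated security check a CI job could run: verifying hardening headers on responses. This covers only a sliver of what the OWASP Top 10 demands; the URL and header list are illustrative.

```python
import requests

REQUIRED_HEADERS = (
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "Content-Security-Policy",
)

def test_security_headers_present():
    resp = requests.get("https://example.test", timeout=10)  # hypothetical URL
    missing = [h for h in REQUIRED_HEADERS if h not in resp.headers]
    assert not missing, f"missing security headers: {missing}"
```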
We collect user feedback about usability.
We run beta/acceptance/user testing, with usability analysis considered in the test strategy.
We do usability testing conducted by testers with expertise in usability heuristics and best practices.
We analyze the accessibility of our system.
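One automated accessibility check that is easy to start with is verifying that images carry alt text; the sketch below uses only the standard library (a full audit would use a dedicated tool such as axe). The URL is illustrative, and UTF-8 encoding is assumed.

```python
from html.parser import HTMLParser
from urllib.request import urlopen

class ImgAltChecker(HTMLParser):
    """Collects <img> tags that lack an alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.missing.append(attrs.get("src", "<unknown>"))

def test_images_have_alt_text():
    html = urlopen("https://example.test").read().decode("utf-8")
    checker = ImgAltChecker()
    checker.feed(html)
    assert not checker.missing, f"images missing alt text: {checker.missing}"
```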