Defect Metrics – An Indicator of the Quality of the Product Under Test

In an earlier blog a few weeks back, I talked about the defect metrics that help determine the quality of the test effort. In this blog, I will discuss the defect metrics that serve as good indicators of the quality of the product under test. While both sets of metrics are equally important to track and to base test management decisions on, this second set is all the more important in helping a test manager decide “whether the test team is ready to sign off on the product under test”. These metrics bring objectivity into that decision, making it more accurate and reliable than one based on subjective factors alone. As a best practice, I would also recommend using these metrics on an ongoing basis (say, once every 15 days or so) to keep track of how product quality is shaping up. This helps test management proactively recommend corrective actions to the product team rather than reactively accept poor quality later in the game. Here we go, with the set of metrics I have been alluding to.

1. Number and timing of high priority and severity bugs reported – High priority and high severity bugs typically indicate a major crash or failure in product functionality. They are strong indicators of the health of the product regardless of when in the product life cycle they are reported, and they warrant an especially close look at product quality when found late in the life cycle. The product triage team pays very close attention to high priority and severity bugs throughout the life cycle for exactly these reasons. In addition, if the test manager notices patterns or trends, such as high priority/severity bugs concentrated in one module, showing up too frequently, or showing up too late in the game, it is worthwhile to analyze them further and report the findings to the product triage team.
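One lightweight way to spot such concentrations is to pull the raw bug list out of the tracker and tally it per module. The sketch below is only illustrative; the CSV export and its column names (module, severity, priority, reported_on) are assumptions about how your particular bug tracker exposes its data.

    import csv
    from collections import Counter
    from datetime import datetime

    # Illustrative sketch: tally high priority/severity bugs per module from a
    # hypothetical bug-tracker CSV export with columns "module", "severity",
    # "priority" and "reported_on" (YYYY-MM-DD). Lower numbers mean higher
    # priority/severity here; adjust the thresholds to your tracker's scale.
    def high_pri_sev_by_module(csv_path, max_severity=2, max_priority=2):
        counts = Counter()
        latest = {}
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                if int(row["severity"]) <= max_severity or int(row["priority"]) <= max_priority:
                    module = row["module"]
                    counts[module] += 1
                    reported = datetime.strptime(row["reported_on"], "%Y-%m-%d").date()
                    latest[module] = max(latest.get(module, reported), reported)
        return counts, latest

    counts, latest = high_pri_sev_by_module("bugs.csv")
    for module, n in counts.most_common():
        print(f"{module}: {n} high pri/sev bugs, most recent reported {latest[module]}")

A module that tops this list late in the cycle is a good candidate to bring to the triage team.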

 

2. Number of regressions – The higher the number of regressions, the more attention product quality needs. Regression counts often reflect the performance of the development team, but more importantly they are directly indicative of product quality: regressions are strong indicators of how comprehensive and robust the bug fixes being checked in really are. This is a very good discussion point the test manager can take to the round table to highlight the need to improve product quality.
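To make that round-table discussion concrete, regressions can be expressed as a rate against the fixes checked in over the same period. A minimal sketch, assuming your tracker lets you tag each closed fix as a regression (the is_regression flag below is hypothetical):

    # Illustrative sketch: regression rate for a reporting period. The list of
    # fixes and the "is_regression" flag are assumptions about how closed bugs
    # are tagged in your tracker.
    def regression_rate(fixes):
        regressions = sum(1 for fix in fixes if fix.get("is_regression"))
        return regressions, (regressions / len(fixes) if fixes else 0.0)

    fixes = [
        {"id": 101, "is_regression": False},
        {"id": 102, "is_regression": True},
        {"id": 103, "is_regression": False},
        {"id": 104, "is_regression": False},
    ]
    count, rate = regression_rate(fixes)
    print(f"{count} regressions out of {len(fixes)} fixes checked in ({rate:.0%})")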

 

3. Number of build breaks or Build Verification Test (BVT) failures – These are basic tests performed by either the development or the test team to qualify a build for further testing, and they typically represent the product’s core functionality or the bare minimum “must run” scenarios. Once built at the beginning of the product life cycle, these tests remain quite static and only grow with the additional functionality that gets developed. Given that they are run over and over with every newly released build, the expectation is a 100% BVT pass rate on every build. Frequent build breaks not only waste the product team’s time in resolving them but, more importantly, indicate poor build quality that needs immediate attention.
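Tracking this comes down to computing a pass rate per build and flagging anything short of 100%. The sketch below is illustrative; the per-build pass/total counts would come from whatever your build or test-run system reports.

    # Illustrative sketch: BVT pass rate per build, flagging any build that
    # falls short of the expected 100%. The bvt_runs structure stands in for a
    # hypothetical export from your build or test-run system.
    bvt_runs = {
        "build_1021": {"passed": 48, "total": 48},
        "build_1022": {"passed": 45, "total": 48},
        "build_1023": {"passed": 48, "total": 48},
    }

    for build, run in sorted(bvt_runs.items()):
        rate = 100.0 * run["passed"] / run["total"]
        status = "OK" if rate == 100.0 else "INVESTIGATE"
        print(f"{build}: {rate:.1f}% BVT pass rate [{status}]")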

 

4. Number of functional, performance and security bugs – Bugs of any and all types are valuable in helping improve product quality. However, the ones most indicative of product quality are those reported against product functionality, performance and security. Cosmetic issues such as UI and usability bugs are also important to look at, but these can often be lived with, whereas the category of bugs discussed here cannot be compromised on, or will at least need serious collaborative consensus before product release. The test manager should specifically focus on these bugs and further dissect the data from various views: the number present in a given module, the combination of these bug types in a given module, their priority/severity, when in the product life cycle they show up, the potential regressions they may introduce, and so on.
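A simple cross-tabulation of open bugs by module and category is often enough to make these views visible. Again, this is only a sketch; the inline bug list and its field names are assumptions standing in for your tracker’s export.

    from collections import defaultdict, Counter

    # Illustrative sketch: cross-tabulate open bugs by module and category so
    # that functional, performance and security bugs stand out per module.
    # The inline bug list stands in for a hypothetical tracker export.
    bugs = [
        {"module": "checkout", "category": "functional"},
        {"module": "checkout", "category": "security"},
        {"module": "search", "category": "performance"},
        {"module": "search", "category": "ui"},
    ]

    critical = {"functional", "performance", "security"}
    by_module = defaultdict(Counter)
    for bug in bugs:
        if bug["category"] in critical:
            by_module[bug["module"]][bug["category"]] += 1

    for module, categories in sorted(by_module.items()):
        breakdown = ", ".join(f"{cat}: {n}" for cat, n in sorted(categories.items()))
        print(f"{module}: {breakdown}")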

 

5. If the product is past v1, the number of post-release patches and bugs – If the product under test is not a v1 product, there are several useful data points from past releases that the test manager can leverage to track product quality in the current release: how the product fared in past releases (both pre- and post-release), the number and kind of emergency patches that were required after release, defect metrics from past releases, and so on, as a starting point.

Above, I have discussed core metrics that can be tracked and acted upon to monitor and improve product quality. This is by no means an exhaustive list, but it is a great start toward moving the test team into the driver’s seat, where it proactively plays an important role in owning product quality. Feel free to send me your comments and experiences from your test and/or development assignments that helped you ship a product of great quality.
