How to improve the defect management process?

In this article, we’ll talk about the difference between bugs and defects, and how to build a proper defect report. We’ll also look at the entire defect management process and walk through its individual steps.

But let’s start with two simple questions:


What is a bug?

In very simple terms, a bug is the result of bad programming: an error introduced into the code itself.

 

What is a defect?

A defect is a deviation in the behaviour, appearance, functionality, etc. of the actual product from its original requirements, seen from a business perspective.

 

It’s interesting to note that there’s a fine line between the two definitions: a bug doesn’t necessarily surface in the actual product, but in both cases we’re dealing with errors that need to be fixed.

 

The defect report

When a tester performs a test, they may come across results (of certain functionalities, for example) that are completely different from the expected ones, i.e. those specified in the documentation (whether for the product or the user interface) or reported by the client. Such deviations are referred to as software defects. Different companies may use different names for them; for example, they can simply be called “problems”, “issues”, “bugs” or “incidents”.

 

The defect report for the developer should contain the following information (a short code sketch follows the list):

Defect Identifier – A unique identification number for a given defect.

Description – As much detail as possible, with information about the module/environment in which the defect was found.

Version – The version of the application in which the error was found.

Steps to Reproduce – The subsequent steps (preferably supported by screenshots) that will help the developer reproduce the defect.

Date Raised – The date the defect was found.

Reference – A place where you can indicate the correct behaviour of the module with the defect. You can attach, e.g., a screenshot of the platform documentation, the available business requirements for the product, etc.

Detected By – The name, codename or ID of the tester who found/reported the defect.

Status – The field for the defect status (in progress/to be done/under review/etc.).

Fixed By – The name/ID of the developer who fixed the error.

Date Closed – The date on which the issue/defect was closed.

Severity – A short description of the impact the defect has on the software/application.

Priority – Describes how urgently the defect needs to be fixed within a given timeframe/sprint. Priority can be set to High/Medium/Low, which refers directly to the impact of the defect on the application and how urgent the fix is.
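
To make this structure concrete, here’s a minimal sketch of such a report as a Python data class. The field names follow the list above; the example values, defaults and identifiers (DEF-0042, QA-07, BR-17, etc.) are illustrative assumptions, not a fixed standard:

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class DefectReport:
    defect_id: str                  # Defect Identifier
    description: str                # details, incl. module/environment
    app_version: str                # version where the error was found
    steps_to_reproduce: List[str]   # steps (screenshots attached separately)
    date_raised: date               # the date the defect was found
    reference: str                  # pointer to the documented correct behaviour
    detected_by: str                # codename/ID of the tester who found it
    severity: str                   # impact on the software/application
    priority: str                   # High / Medium / Low
    status: str = "to be done"      # to be done / in progress / under review / closed
    fixed_by: Optional[str] = None  # developer who fixed the error
    date_closed: Optional[date] = None  # the date the defect was closed

# Hypothetical example entry:
report = DefectReport(
    defect_id="DEF-0042",
    description="Login button unresponsive in the checkout module (staging).",
    app_version="2.3.1",
    steps_to_reproduce=["Open the checkout page", "Click 'Log in'"],
    date_raised=date(2020, 4, 16),
    reference="Business requirement BR-17: login must open the auth modal.",
    detected_by="QA-07",
    severity="Blocks checkout for all users",
    priority="Critical",
)
```

Whatever tool you use, the point is the same: every field from the list above has one unambiguous place to live, so nothing is left to verbal communication.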

 

Imagine the following situation as a Test Manager:

 

Your team finds bugs when testing project x:

Tester: “So far we’ve found 56 defects in project x.”

Test Manager: “Okay, I will inform the development team.”

 

After a week, the developers respond to Test Manager:

Developer: “We’ve fixed 45 defects.”

Test Manager: “Good, I will inform my Test Team about that.”

 

After another week, the tester responds:

Tester: “Indeed, these 45 defects are patched, but we’ve found an additional 10.”

 

Based on the example above, you can see that if the communication between the two teams is conducted verbally, things quickly become very complicated, and it’s only a matter of time before the defects pile up.

To effectively control and manage bugs, it’s recommended that teams adopt the so-called DLC (defect life cycle).
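
As a preview of the stages described below, here is a minimal sketch of how defect statuses could be tracked through such a life cycle in code. The status names mirror the report’s status field above; the transition rules are an illustrative assumption, not a fixed standard:

```python
from enum import Enum

class DefectStatus(Enum):
    TO_BE_DONE = "to be done"      # accepted and waiting for a developer
    IN_PROGRESS = "in progress"    # being fixed (Resolution)
    UNDER_REVIEW = "under review"  # fix reported, awaiting re-testing (Verification)
    CLOSED = "closed"              # repair verified (Closure)
    REOPENED = "reopened"          # verification failed, back to the Dev Team

# Which status changes are legal (illustrative):
ALLOWED_TRANSITIONS = {
    DefectStatus.TO_BE_DONE: {DefectStatus.IN_PROGRESS},
    DefectStatus.IN_PROGRESS: {DefectStatus.UNDER_REVIEW},
    DefectStatus.UNDER_REVIEW: {DefectStatus.CLOSED, DefectStatus.REOPENED},
    DefectStatus.REOPENED: {DefectStatus.IN_PROGRESS},
}

def transition(current: DefectStatus, target: DefectStatus) -> DefectStatus:
    """Move a defect to a new status, rejecting invalid jumps."""
    if target not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"Cannot move from {current.value!r} to {target.value!r}")
    return target
```

Most error-tracking tools enforce a similar rule set, so a defect can’t silently jump from “to be done” straight to “closed”.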

 

Defect Management Process

[Image: the cycle of the defect management process]

 

1. Discovery

At the discovery stage, the Test Team tries to detect as many defects as possible before they can reach the end-user. A detected and reported defect changes its status to accepted once the developer informed about it accepts it in the team’s project or error-tracking tool.

 

In the Test Manager scenario above, the testers detected 56 defects in project x.

[Image: an invalid process of reporting bugs between departments]

 

Let’s look at the situation below: your Test Team has detected errors in project x. These errors cause defects and are reported to the Dev Team, but a conflict arises:

[Image: an argument between staff members caused by wrong process management]

In this situation, the Test Manager should take on the role of a referee and decide whether the problem is a defect or not. The solution they propose should lead to resolving the conflict.

 

2. Categorization

Defect categorization helps developers prioritize their tasks. This means that they can decide which defects to address first.

 

Critical – Defects that must be repaired as soon as possible because they can cause serious damage to the resulting product.

High – Defects that can affect the main features of the product.

Medium – Defects that cause minor deviations from the documented requirements of the resulting product.

Low – Defects that have a minor impact on product quality.

 

Defects are usually categorized by the Test Manager. Here’s an example of how they could be prioritized:

 

Description: Website performance is too slow.

Priority: High.

Explanation: The resulting defect may cause inconvenience to the end-user when using the product.

 

Description: Login functionality is not working properly.

Priority: Critical.

Explanation: Logging in is one of the most important functionalities on a website (e.g. a bank’s page).

 

Description: The website GUI is not displayed correctly on smartphones.

Priority: Medium.

Explanation: The defect occurs only among users who use the product via smartphones.

 

Description: The website does not remember the login session.

Priority: High.

Explanation: This is a serious defect: the user will be able to log in but won’t be able to perform any further operations available to a logged-in account.

 

Description: Some of the hyperlinks are not working properly (the anchors don’t contain valid links).

Priority: Low.

Explanation: This is a very simple defect to fix, and the user still has access to the website.
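
As a quick illustration, the Dev Team’s work queue can be derived directly from such a categorization. Here’s a minimal sketch that sorts the example defects above by priority; the numeric ranking is an assumption:

```python
# Illustrative ranking: lower number = more urgent.
PRIORITY_RANK = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3}

defects = [
    ("Website performance is too slow", "High"),
    ("Login functionality is not working properly", "Critical"),
    ("The GUI is not displayed correctly on smartphones", "Medium"),
    ("The website does not remember the login session", "High"),
    ("Some of the hyperlinks are not working properly", "Low"),
]

# Sort the queue so the Dev Team starts with the most urgent defects.
for description, priority in sorted(defects, key=lambda d: PRIORITY_RANK[d[1]]):
    print(f"[{priority}] {description}")
```

Run against the examples above, this would put the Critical login defect first and the broken hyperlinks last.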

 

3. Resolution

Once the defects have been accepted by the Dev Team and categorized, the next steps are taken to fix them:

 

[Image: the resolution phase in the defect management process]

 

Assignment: Assigning the task to a specific developer.

Schedule fixing: Developers create a concrete repair plan (level of difficulty, time needed, etc.) for the accepted defects, depending on their priority.

Fix the defect: While the Dev Team fixes the defects, the Test Manager tracks the entire process against the created repair plan.

Report the resolution: The Test Manager receives a report from the Dev Team once the defects are eliminated.

 

4. Verification

After the Dev Team fixes the defects and returns the repair report to the Test Manager, the Test Team verifies whether the repaired errors have really been eliminated.

 

5. Closure

If a defect has been eradicated and the repair is verified, the defect status changes to closed. Otherwise, the Test Manager sends it back to the Dev Team to re-check the specific defect.
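
Putting steps 3 to 5 together, these transitions could be recorded directly on the DefectReport sketched earlier. The helper functions below are hypothetical illustrations, not a prescribed API:

```python
from datetime import date

def assign(report: DefectReport, developer_id: str) -> None:
    """Resolution, assignment: hand the accepted defect to a developer."""
    report.fixed_by = developer_id
    report.status = "in progress"

def report_resolution(report: DefectReport) -> None:
    """Resolution, reporting: the Dev Team marks the fix as done."""
    report.status = "under review"  # now awaiting verification by the Test Team

def verify_and_close(report: DefectReport, fix_confirmed: bool) -> None:
    """Verification and Closure: close the defect or send it back for a re-check."""
    if fix_confirmed:
        report.status = "closed"
        report.date_closed = date.today()
    else:
        report.status = "reopened"  # back to the Dev Team
```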

 

6. Reporting

The management board has the right to know the status of defect management and fixing in a project. They need to understand the entire defect management process to support the Test Manager in the project. The Test Manager, in turn, should report the current situation to them to get quick feedback.

 

Important defect metrics

Back to the previous scenario: the Dev and Test Teams review each other’s reported defects. Here’s what an example discussion might look like:

[Image: typical communication mistakes that lead to misunderstandings]

How to measure and assess the quality of testers’ work?

This is the question that the Test Manager has to know the answer to. There are two options to consider:

[Image: graph showing the defect rejection ratio and the defect leakage ratio]

In the first variant, the Test Manager can calculate the defect rejection ratio (DRR): the share of reported defects that were rejected as not being actual defects. With 84 reported defects of which 20 were rejected, DRR = 20/84 = 0.238 (23.8%).

 

 

Another variant is the defect leakage ratio (DLR). Let’s assume that 64 defects were present in the product, but the Test Team detected only 44 of them, i.e. they missed 20 defects, which were then detected, for example, by the developers. We calculate the DLR as the share of missed defects: 20/64 = 0.3125 (31.2%).

 

Ultimately, we get two percentage values determining the quality of the testers’ work:

Defect rejection ratio (DRR) = 23.8%

Defect leakage ratio (DLR) = 31.2%
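
Both ratios are simple enough to compute directly. A minimal sketch using the numbers from this example:

```python
def defect_rejection_ratio(rejected: int, reported: int) -> float:
    """DRR: share of reported defects that were rejected as not being defects."""
    return rejected / reported

def defect_leakage_ratio(missed: int, total_defects: int) -> float:
    """DLR: share of actual defects that the Test Team failed to detect."""
    return missed / total_defects

print(f"DRR = {defect_rejection_ratio(20, 84):.1%}")  # DRR = 23.8%
print(f"DLR = {defect_leakage_ratio(20, 64):.1%}")    # DLR = 31.2%
```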

 

The lower the DRR and DLR percentages, the better. What values of these two parameters are acceptable? The acceptable range is defined based on the values from previous projects with a similar scope.

In this example project, DRR and DLR values within 5-10% are considered acceptable, which means that the quality of the testers’ work here is poor. The Test Manager’s solutions in this situation can be:

• Improving the testing skills of the Test Team members.

• Spending more time on test execution, especially on reviewing its results.

 

To sum up, in order to improve the defect management process, it’s important to create proper defect reports instead of communicating defects verbally, and to follow the defect life cycle. If not all reported defects turn out to be actual defects, assess the quality of testing using the defect rejection ratio and the defect leakage ratio. Either way, it’s essential to continuously improve the team’s testing skills and to spend more time on test execution.

 

 


Filip Skwierczyński

Published: 04/16/2020
