
Bug Life Cycle in Software Testing: Stages, States, and What Goes Wrong

Most QA documentation lists the stages of the bug life cycle but stops short of explaining what actually breaks down between them. This article covers every stage of the bug life cycle in software testing, what each state means in practice, how severity and priority interact, and where experienced teams still get it wrong – with a real healthcare IT scenario to ground the concepts.

What Is the Bug Life Cycle in Software Testing

The bug life cycle – also called the defect life cycle – is the structured sequence of states a defect moves through from the moment a tester discovers it to the moment it gets formally closed. It is not just a tracking mechanism. It is a communication framework between QA, development, business analysis, and project management. Every state transition carries an implicit handoff. When that handoff is unclear, bugs get lost, fixed incorrectly, or closed prematurely.

The ISTQB Foundation Level Syllabus treats defect management as a core QA competency. It defines a defect report as a document that records a deviation from expected behavior and includes: steps to reproduce, actual result, expected result, severity, and priority. The bug life cycle gives that report a formal path through the system. Without the lifecycle, a defect report is just a complaint with no ownership.

Bug and defect are often used interchangeably in practice, and this article does the same. Technically, a bug is a coding error; a defect is any variance from expected behavior – which includes requirement gaps, design flaws, and configuration errors. In production environments, the distinction rarely changes how the lifecycle operates.

Bug Life Cycle Stages: Every State Explained

The lifecycle varies slightly by organization and tooling. The core states are universal. Understanding what each state means – and who owns the transition – is what separates teams that manage defects effectively from teams that manage chaos.

Bug Life Cycle – State Flow

New → Assigned → Open → Fixed → Retest → Closed

Alternate states: Reopened, Deferred, Rejected, Duplicate. Alternate states appear at triage or after retest. A bug can move to Reopened from Closed if the fix doesn’t hold.
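Expressed as a state machine, the flow above looks like this – a minimal sketch, with a transition map that is an illustrative assumption; real trackers like Jira let teams configure their own workflows:

```python
# Sketch of the bug life cycle as a state machine. The state names come
# from the article; the exact transition map is illustrative -- most
# trackers let teams configure their own workflow.

TRANSITIONS = {
    "New":       {"Assigned", "Rejected", "Duplicate", "Deferred"},
    "Assigned":  {"Open"},
    "Open":      {"Fixed", "Rejected", "Deferred"},
    "Fixed":     {"Retest"},
    "Retest":    {"Closed", "Reopened"},
    "Closed":    {"Reopened"},   # a fix that doesn't hold reopens the defect
    "Reopened":  {"Assigned"},   # goes back to a named owner
    "Deferred":  {"Assigned"},   # picked up in a later release
    "Rejected":  set(),
    "Duplicate": set(),
}

def can_transition(current: str, target: str) -> bool:
    """Return True if the workflow allows moving current -> target."""
    return target in TRANSITIONS.get(current, set())
```

Note that `can_transition("Fixed", "Closed")` is False in this map: skipping Retest is exactly the shortcut the article warns against.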

New

New is the entry state. A tester or an automated test run discovers a defect and logs it in the tracking tool. The bug exists in the system but nobody has acted on it yet. The quality of what gets logged here determines the quality of everything downstream. A defect report missing steps to reproduce, environment details, or expected result will bounce back from the developer, wasting two to three days before the actual fix work starts.

Per ISTQB guidelines, a defect report at this stage should include: a unique ID, the date, the tester’s name, the software version and environment, a description precise enough to reproduce the failure, and the actual versus expected result. Some teams add a preliminary severity classification at this point. Others set severity during triage. Either approach works – what matters is consistency.
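Those fields can be modeled directly. A minimal sketch, assuming illustrative field names rather than any particular tracker’s schema:

```python
from dataclasses import dataclass
from datetime import date

# Sketch of the ISTQB-style defect report fields listed above.
# Field names are illustrative; real trackers use their own schemas.

@dataclass
class DefectReport:
    defect_id: str
    reported_on: date
    tester: str
    build_version: str
    environment: str              # e.g. DEV, QA, UAT, PROD
    steps_to_reproduce: list[str]
    actual_result: str
    expected_result: str
    severity: str = "Unclassified"  # some teams classify at triage instead

    def is_actionable(self) -> bool:
        """True if a developer can act on this without follow-up questions."""
        return bool(self.steps_to_reproduce
                    and self.actual_result
                    and self.expected_result
                    and self.environment)
```

A report that fails `is_actionable()` is the kind that bounces back from the developer and costs the two to three days described above.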

Assigned

After triage – or after an initial review by the QA lead or project manager – the defect gets assigned to a developer or a configuration analyst. Assigned means ownership is established. The person receiving it is responsible for the next state transition. Without a named owner, defects sit in a New state indefinitely.

On Agile programs running Scrum, assignment often happens during sprint planning or a mid-sprint triage session. In SAFe programs with multiple teams, assignment may involve routing to the correct component team. If the routing is wrong – defect goes to Team A when the root cause lives in Team B’s code – the assignment state can cycle multiple times before the right owner takes it.

Open

Open means the assigned developer has accepted the defect and is actively investigating or fixing it. This is where root cause analysis happens. On a complex system integration, the root cause is rarely where the symptom appeared. A defect that surfaces in the UI may trace back to a database query, a middleware transformation, or a configuration rule in a downstream service.

The Open state also covers situations where the developer needs more information before they can proceed. In practice, defects return to the tester for clarification more often than workflow diagrams suggest. The developer can’t reproduce the issue. The steps to reproduce assume a specific test dataset that wasn’t documented. The defect description uses QA terminology the developer doesn’t recognize. Each of these delays adds time to the cycle that a well-written initial report would have prevented.

Fixed

The developer moves the defect to Fixed after implementing the resolution. Fixed does not mean verified. It means the developer believes the defect is resolved based on their own testing. The fix still needs independent verification by QA before the defect can advance toward Closed.

On programs running a CI/CD pipeline, Fixed often means the fix has been merged, a new build is available in the QA environment, and the pipeline has run its automated regression suite without failing on the affected test. That automated check is valuable, but it doesn’t replace targeted retesting. Automated regression catches what it was written to catch. An experienced QA analyst retesting a fix will also check the adjacent behavior – the scenarios one step removed from the original defect that might have been affected by the fix.

Retest

Retest is the QA verification stage. The tester who originally logged the defect – or a designated peer – retests the fix against the original steps to reproduce and the expected result. They also check for regression: did the fix break something that was working before?

Skipping the Retest state and moving directly to Closed is the single most common bug life cycle failure, even on experienced teams. Teams under release pressure mark bugs Closed as soon as the developer marks them Fixed. Defects escape to UAT or production. The cost to fix them there is significantly higher than it would have been in QA – not because the fix is more complex, but because the impact is broader and the stakeholder response is louder.

The Software Testing Life Cycle is explicit about verification as a required phase, not an optional one. ISTQB’s cost-of-quality principle states that defects found later in the SDLC cost exponentially more to fix. The Retest state is the last inexpensive gate before a defect reaches UAT.

Closed

Closed means the QA analyst has independently verified the fix, confirmed the defect no longer reproduces in the target environment, and is satisfied that no regression was introduced. The defect is done. The audit trail is complete.

On regulated programs – healthcare, financial services, government – Closed is not just a workflow state. It’s a compliance artifact. HIPAA audit trails require documentation of what was found, who fixed it, and who verified it. A defect that was marked Closed without a recorded verification step is a gap an auditor will flag.

Reopened

Reopened means the fix didn’t work, didn’t fully work, or introduced a regression that makes the original problem relevant again. The defect goes back to Assigned. Every Reopened ticket represents a failed handoff somewhere between Fixed and Closed. Either the developer tested the wrong scenario, the QA analyst retested the wrong build, or the fix was correct but a subsequent change broke it again.

High Reopened rates on a project are a leading indicator of deeper process problems: unclear acceptance criteria, inadequate regression coverage, or a development cycle too short to allow proper root cause analysis. Track this metric. If more than 15-20% of Fixed defects get Reopened, the fix process – not the testers – needs attention.

Deferred, Rejected, and Duplicate

These are the alternate exit states from the active lifecycle. Deferred means the defect is valid but won’t be fixed in the current release – typically because it’s low risk, low frequency, or the fix cost outweighs the impact before the release window closes. The triage team, not the developer or QA analyst individually, makes this call. Deferred defects need a documented rationale and a target release for resolution. Without that documentation, deferred is just “forgotten” with a politer name.

Rejected means the developer or the triage team has determined the reported behavior is not a defect. Either it matches the intended design, it reproduces only in an unsupported environment, or the requirement was misinterpreted. Rejected defects should include a clear reason. A Rejected ticket with no explanation creates friction with the QA analyst and invites the same defect to be re-reported by someone else.

Duplicate means the same defect was already reported under a different ticket number. Duplicates are unavoidable on large programs with multiple testers. The accepted practice is to link the duplicate to the original ticket, mark the newer one as Duplicate, and continue tracking resolution under the original. Closing duplicates without linking them removes traceability.

Severity vs. Priority: The Distinction the Bug Life Cycle Depends On

Severity and priority are not synonyms. Confusing them is the most frequent source of defect triage conflict on any project. The ISTQB Glossary defines them precisely.

Severity is the degree of impact that a defect has on the operation of the system. It’s a technical assessment. The QA analyst who finds the defect assigns severity based on what the defect does to the software. Priority is the business urgency to fix it. The Product Owner or triage team sets priority based on who is affected, when they’re affected, and what fixing or not fixing costs the business. A developer prioritizes work based on priority, not severity. A QA analyst classifies defect impact based on severity, not priority.

| Scenario | Severity | Priority | Why |
| --- | --- | --- | --- |
| Hospital name misspelled on every patient discharge summary | Low – cosmetic | High – P1 | No functional impact, but visible on every printed document. HIPAA audit would flag it immediately. |
| System crash when a user submits a lab result with a specific ICD-10 code combination | Critical | High – P1 | Functional failure in a clinical workflow. Both measures align – fix immediately. |
| Date field accepts invalid entries on a legacy admin screen used by two internal staff members annually | Major | Low – P3 | Functionally a real problem, but frequency and user impact are minimal. Defer to next release. |
| Broken link on a Help page | Low | Low – P4 | No operational impact. Log it, fix it when convenient. |
| API endpoint returns incorrect claim amount only when payer ID starts with “00” | Major | High – P1 | Rare trigger condition but financial accuracy is non-negotiable in a payer-provider integration. |

The triage process exists to reconcile these two dimensions. Per ISTQB Advanced Test Manager guidance, the triage team – typically QA lead, developer representative, BA, and Product Owner – reviews all open defects, validates severity assignments, sets priority in the context of business impact, and determines which bugs get fixed in the current sprint versus deferred. Product Owners set priority; QA analysts set severity. That division of responsibility prevents both over-fixing low-impact issues and under-fixing critical ones.
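That division of responsibility can be made concrete in code. A sketch of a triage queue ordered by priority first, with severity as the tiebreaker – the numeric ranks and sample defects are illustrative assumptions:

```python
# Sketch: developers pull work by priority, not severity. Severity still
# breaks ties so a Critical P1 lands ahead of a cosmetic P1. The numeric
# scales and sample defects below are illustrative assumptions.

PRIORITY_RANK = {"P1": 1, "P2": 2, "P3": 3, "P4": 4}
SEVERITY_RANK = {"Critical": 1, "Major": 2, "Minor": 3, "Low": 4}

def triage_order(defects):
    """Order a defect queue: priority first, severity as tiebreaker."""
    return sorted(defects,
                  key=lambda d: (PRIORITY_RANK[d["priority"]],
                                 SEVERITY_RANK[d["severity"]]))

queue = [
    {"id": "D-101", "severity": "Low",      "priority": "P1"},  # misspelled name
    {"id": "D-102", "severity": "Critical", "priority": "P1"},  # crash on submit
    {"id": "D-103", "severity": "Major",    "priority": "P3"},  # legacy date field
]
```

Sorting this queue puts the Critical P1 first, the cosmetic P1 second, and the Major P3 last – the low-severity misspelling still outranks the functionally worse but deferred defect, because priority leads.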

Healthcare IT Scenario: Bug Life Cycle on an EHR Integration Project

A health system is running system integration testing for a new EHR module that receives inbound lab results from five laboratory partners as HL7 v2 ORU messages, which the integration layer maps into FHIR R4 Observation resources. During a test run, a QA analyst executes a test case that sends a message whose OBX segment carries a result value flagged as abnormal. The EHR’s alert logic should fire a clinical notification. It doesn’t.

The analyst logs the defect as New in Jira. The report includes: environment (QA, build 4.3.2), steps to reproduce with the exact test message payload, the expected result (alert notification displays in the provider’s worklist), the actual result (no notification), and an attached screenshot of the empty worklist. Severity is set to Critical because a missed abnormal result alert in a clinical setting is a patient safety issue with regulatory reporting implications. The defect goes to the QA lead for triage.

At triage, the integration architect joins the call. The team confirms the defect is valid. Priority is set to P1. The defect is assigned to the integration developer who owns the HL7 FHIR message parsing component. It moves to Assigned, then to Open within the hour.

The developer investigates and finds that the alert trigger reads the interpretation field from the FHIR R4 Observation resource. The field is populated as “A” (abnormal) in the inbound message, but the trigger logic was written to match only “H” (high) – a different code from the same interpretation vocabulary. The logic never matched “A” and never fired. The developer updates the trigger logic to handle all abnormal interpretation codes defined in the FHIR R4 specification, marks the defect Fixed, and deploys the new build to QA.
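The fix described above amounts to widening a membership check. A hedged sketch, assuming a subset of the HL7 ObservationInterpretation codes that FHIR R4 binds to `Observation.interpretation` – verify the full value set before relying on it:

```python
# Sketch of the corrected trigger logic. The original code matched only
# "H"; the fix matches the family of abnormal interpretation codes.
# The set below is a subset of the HL7 ObservationInterpretation
# vocabulary used by FHIR R4 -- check the current value set before
# relying on it in real trigger logic.

ABNORMAL_CODES = {"A", "AA", "H", "HH", "L", "LL"}

def should_fire_alert(interpretation_code: str) -> bool:
    """Fire a clinical notification for any abnormal interpretation."""
    return interpretation_code in ABNORMAL_CODES
```

With only `"H"` in the set, the original defect reproduces exactly: an inbound `"A"` never fires the alert.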

The QA analyst retests using the original test message, confirms the alert fires correctly, and tests four additional scenarios: a result with status “H,” a result with status “L” (low), a normal result (no alert expected), and a malformed OBX segment (error handling path). All pass. No regression detected in adjacent test cases. The defect moves to Closed. The fix is documented in the configuration control log for the HIPAA audit trail.

Three days later, during a broader regression suite run, automated tests flag a failure: the alert fires twice when a result message contains two OBX segments. The original defect is Reopened. The developer investigates, finds that the updated trigger logic runs once per OBX segment rather than once per observation result group, and applies a secondary fix. The defect moves through Fixed and Retest again before reaching Closed for the second time.

This Reopened scenario is not exceptional. It’s standard on complex integration programs. The original fix was correct for the reported scenario. The regression introduced a new behavior that only appeared under a different message structure. The QA team’s automated regression suite caught it before UAT. Without that automated coverage, the double-alert would have reached clinical users.

Bug Life Cycle in Agile vs. Waterfall: Practical Differences

The bug life cycle operates the same way in both delivery models. The timing and velocity differ significantly.

| Dimension | Agile / Scrum | Traditional Waterfall |
| --- | --- | --- |
| Triage Cadence | Continuous or daily during active sprint; formal triage at sprint boundaries | Weekly or bi-weekly defect review meetings |
| Fix Turnaround | Critical bugs fixed within the sprint; sprint velocity adjusts | Critical bugs may trigger a formal change request before fix approval |
| Deferred Defects | Go to backlog; Product Owner prioritizes for future sprint | Logged against next release phase; Change Advisory Board reviews |
| Retest Window | Same sprint if fix is early enough; next sprint if late | Dedicated retest cycle within the SIT or UAT phase |
| Lifecycle Visibility | Sprint board in Jira shows all active defects in real time | Defect summary report distributed at fixed intervals |
| Release Gate | Definition of Done includes zero open Critical/High defects per sprint | Formal sign-off from QA lead and project sponsor before phase exit |

In Agile programs, defects logged during a sprint compete with user stories for developer capacity. A Critical defect found on Day 6 of a two-week sprint may not get fixed and verified within that sprint if the team is already at capacity. The Product Owner decides whether to pull a story to make room for the defect fix or defer the defect to the next sprint. This is a scope negotiation, not a QA failure. The Product Owner makes that call with awareness of the risk.

Writing a Defect Report That Actually Gets Fixed

Every preventable delay in the bug life cycle traces back to a poorly written defect report. The developer can’t reproduce the issue. The expected result wasn’t documented. The environment wasn’t specified. These aren’t minor inconveniences – they add days to a lifecycle that should take hours.

A defect report that requires no follow-up questions contains: the unique ticket ID and date, the software version and environment (DEV, QA, UAT, PROD), the test case ID that exposed the defect, step-by-step reproduction instructions in numbered sequence using concrete values – not generic placeholders, the exact actual result including any error messages or screenshots, the expected result sourced directly from the acceptance criterion or requirement, the severity classification with brief justification, and any relevant test data or log file attachments.

“Button not working on Claims page” is not a defect report. “Submit button on the Claim Entry form (Claims module v4.3, QA environment) returns HTTP 500 after clicking when the Diagnosis Code field contains a 7-character ICD-10 code and the payer ID field is populated with a value starting with ‘00’” is a defect report. The second version can go directly to the developer. The first requires a three-email exchange before the developer can reproduce it.

This standard is consistent with what BABOK v3 describes in the Requirements Life Cycle Management knowledge area: every defect should be traceable back to a specific requirement or acceptance criterion. If a Business Analyst wrote the acceptance criteria with enough precision, linking the defect is straightforward. If the acceptance criteria were vague, the defect report carries the ambiguity forward.

Bug Life Cycle Metrics That Predict Release Readiness

Tracking defects without measuring the lifecycle produces reports. Measuring the lifecycle produces decisions. These are the metrics that matter.

Defect Density by Module – total defects found divided by functional size or test cases per module. High density in a specific area signals where regression testing should concentrate before UAT. It also signals where requirements may have been under-specified.

Mean Time to Resolution (MTTR) by Severity – average time from New to Closed, segmented by severity level. If Critical defects average 14 days to close and the sprint is 10 days, something in the process is broken. Either triage is slow, developer capacity is insufficient, or the retest cycle is too long.

Defect Escape Rate – the percentage of defects found in UAT or production versus defects found in QA. A healthy escape rate depends on the organization, but anything above 10-15% into UAT on a mature program warrants a test strategy review. This metric directly reflects the effectiveness of the testing approach used in QA.

Reopened Rate – percentage of Fixed defects that move to Reopened. As noted earlier, a rate consistently above 15-20% signals a root cause problem in the fix process.

Deferred Defect Aging – how long deferred defects have been sitting without a target fix release. Deferred defects older than two releases without a documented decision are technical debt accumulating quietly. On a regulated program, they’re also an audit risk.
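Several of these metrics fall out of a single pass over exported tracker data. A sketch, assuming an illustrative record layout rather than any specific tracker’s export format:

```python
from datetime import date

# Sketch: computing MTTR by severity and the Reopened rate from exported
# defect records. The record layout is an assumption -- adapt the field
# names to your tracker's export format.

defects = [
    {"severity": "Critical", "opened": date(2024, 3, 1),
     "closed": date(2024, 3, 4),  "was_reopened": False},
    {"severity": "Critical", "opened": date(2024, 3, 2),
     "closed": date(2024, 3, 9),  "was_reopened": True},
    {"severity": "Major",    "opened": date(2024, 3, 1),
     "closed": date(2024, 3, 11), "was_reopened": False},
]

def mttr_days(records, severity):
    """Mean time from New to Closed, in days, for one severity level."""
    times = [(d["closed"] - d["opened"]).days
             for d in records if d["severity"] == severity]
    return sum(times) / len(times)

def reopened_rate(records):
    """Fraction of resolved defects that were reopened at least once."""
    return sum(d["was_reopened"] for d in records) / len(records)
```

On the sample data, Critical MTTR is 5.0 days and the Reopened rate is one in three – above the 15-20% threshold discussed earlier, which would flag the fix process for attention.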

These metrics align with Six Sigma’s DMAIC framework. Measure the baseline, analyze the patterns, improve the process, and control the result. A QA lead who presents defect metrics with this level of structure in a program status meeting gives project leadership data they can act on – not just a list of open bugs.

Pull your last sprint’s Jira defect data and calculate the average time from New to Assigned for Critical and High defects. If it exceeds 24 hours, your triage process – not your testing – is the bottleneck. Triage latency compounds through every subsequent lifecycle stage. Fix the handoff time at the front of the cycle, and MTTR across all severity levels will improve without changing anything else.
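The triage-latency check described above is a few lines against exported data. A sketch with illustrative timestamps standing in for a real Jira export:

```python
from datetime import datetime

# Sketch of the triage-latency check: average hours from New to Assigned
# for Critical/High defects. The records below are illustrative
# stand-ins for an exported sprint's defect data.

records = [
    {"severity": "Critical", "new": datetime(2024, 3, 1, 9, 0),
     "assigned": datetime(2024, 3, 1, 15, 0)},   # 6 hours
    {"severity": "High", "new": datetime(2024, 3, 2, 10, 0),
     "assigned": datetime(2024, 3, 3, 16, 0)},   # 30 hours
]

def avg_triage_hours(rows):
    """Average hours from New to Assigned across the given defects."""
    hours = [(r["assigned"] - r["new"]).total_seconds() / 3600 for r in rows]
    return sum(hours) / len(hours)
```

Here the average is 18 hours – under the 24-hour threshold overall, but the 30-hour High defect would still warrant a look at who held the handoff.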


Suggested External References:
1. ISTQB Foundation Level Syllabus – Defect Management (istqb.org)
2. BABOK v3 – Requirements Life Cycle Management, IIBA (iiba.org)
