Developer Denies the Defect: What QA Does Next
You logged a defect. The developer rejected it. Now it sits in a “Rejected” or “Not a Bug” state while the release clock ticks. This situation is not rare – it is a daily reality in cross-functional teams, and how you handle it defines your credibility as a QA professional. This article breaks down why developers deny defects, which denials are legitimate, and the exact process to push back when they are not.
Why Developers Deny Defects: The Core Disconnect
Most defect disputes are not about malice. They stem from a gap – between what the requirements document says, what the developer built, and what the tester expected. That gap has several common shapes.
Per ISTQB’s Foundation Level syllabus, a defect is formally defined as “an imperfection or deficiency in a work product where it does not meet its requirements or specifications.” That definition matters. If no documented requirement covers the behavior in question, the developer’s rejection may be technically correct – even if the behavior is objectively wrong from a user perspective.
There are four rejection categories you will encounter repeatedly:
- Works as Designed (WAD) – The behavior matches the specification, even if the specification was poorly written.
- Cannot Reproduce (CNR) – The developer could not replicate the issue using the steps provided.
- Duplicate – The defect was already logged under a different ticket.
- Out of Scope – The behavior falls outside the current sprint or release boundary.
Each of these has a correct QA response. They are not the same situation.
Defect Rejection Reasons: Legitimate vs. Disputed
Not every rejection deserves a fight. Some are valid. The first thing a QA professional must do after receiving a rejection is assess whether the developer is right.
A legitimate rejection means the tester either misread the spec, tested against the wrong build, used an environment with unresolved configuration issues, or logged an item that is genuinely a change request – not a defect. Chasing a legitimate rejection up the chain damages your credibility faster than any missed bug ever would.
A disputed rejection is different. It means the behavior clearly deviates from documented requirements or user acceptance criteria, the defect report is complete and reproducible, and the developer’s stated rationale does not hold under review.
The table below maps the most common rejection reasons to the correct QA response:
| Rejection Reason | Is It Legitimate? | QA Next Step |
|---|---|---|
| Works as Designed | Sometimes – check the spec | Cite the requirement. If the requirement is missing, flag a gap and loop in the BA. |
| Cannot Reproduce | Sometimes – check your steps | Attach a screen recording. Specify OS, browser, build number, test data used. |
| Duplicate | Often legitimate | Link to the original ticket. Confirm it is actually the same root cause before accepting. |
| Out of Scope | Depends on sprint agreement | Verify sprint scope in the backlog. If it affects release quality, raise in triage. |
| Enhancement, Not Bug | Sometimes | If the acceptance criteria cover it, it is a defect. Attach the AC and reopen. |
When a Developer Denies the Defect: The QA Escalation Process
Escalation is not the same as complaining. It is a structured process. Done incorrectly, it creates political friction that makes future collaboration harder. Done correctly, it produces a documented resolution and protects both teams.
The standard defect lifecycle, as described in ISTQB’s defect management framework, provides a clear path: New → Assigned → Open → Fixed → Retest → Closed. Rejection sends the defect sideways – not dead. The defect goes back to the tester for review, and the tester must then decide whether to accept the rejection or contest it.
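The lifecycle above can be sketched as a simple transition map. This is an illustrative model, not any tracker's actual workflow engine; the state names follow the article, and the "Rejected" branch routes the defect back to the tester for review rather than closing it outright.

```python
# Illustrative sketch of the defect lifecycle described above.
# State names and transitions are a simplified model, not a real
# tracker's configuration.
ALLOWED_TRANSITIONS = {
    "New":      {"Assigned"},
    "Assigned": {"Open"},
    "Open":     {"Fixed", "Rejected"},   # developer fixes or rejects
    "Fixed":    {"Retest"},
    "Retest":   {"Closed", "Open"},      # fails retest -> back to Open
    "Rejected": {"Open", "Closed"},      # tester contests or accepts
}

def transition(current: str, new: str) -> str:
    """Return the new state if the move is allowed, else raise."""
    if new not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"Illegal transition: {current} -> {new}")
    return new

# A contested rejection travels sideways, then back into the flow:
state = "New"
for step in ("Assigned", "Open", "Rejected", "Open", "Fixed", "Retest", "Closed"):
    state = transition(state, step)
```

The point of the sketch is the `"Rejected"` row: it has outgoing transitions. A rejection is a state the defect passes through, not a terminal verdict.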
Here is the process to follow when you are contesting a developer denial:
Step 1: Strengthen the Defect Report Before Doing Anything Else
A rejected defect is often a poorly documented defect. Before you escalate, verify that your report contains:
- Exact reproduction steps (numbered, not paragraph form)
- Expected vs. actual result, clearly separated
- Screenshots or a screen recording
- Build number and environment details
- The specific requirement or acceptance criterion being violated
- Severity and priority, with justification
Incomplete reports are the most common and most preventable cause of defect rejection. Per the ISTQB glossary, a defect report must contain enough information for the assignee to reproduce and understand the issue independently. “Bug in login” is not a report. It is a note to yourself.
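The mandatory-field list above can be enforced mechanically before a defect ever leaves QA. A minimal sketch, with field names that are illustrative rather than tied to any real tracker's schema:

```python
# Hypothetical completeness check for a defect report before submission.
# Field names are illustrative; map them to your own tracker's schema.
REQUIRED_FIELDS = (
    "title", "steps_to_reproduce", "expected_result", "actual_result",
    "build_number", "environment", "requirement_ref", "severity", "priority",
)

def missing_fields(report: dict) -> list[str]:
    """Return the mandatory fields that are absent or empty."""
    return [f for f in REQUIRED_FIELDS if not report.get(f)]

report = {
    "title": "Login fails with valid credentials after password reset",
    "steps_to_reproduce": ["1. Reset password via email link",
                           "2. Log in with the new password"],
    "expected_result": "User is logged in",
    "actual_result": "HTTP 500 error page",
    "build_number": "2.14.3",
    "environment": "Chrome 122 / Windows 11 / QA2",
    "requirement_ref": "AC-104",
    "severity": "High",
    "priority": "High",
}
assert missing_fields(report) == []   # complete report, ready to submit
```

A check like this is trivially wired into a submission form or a pre-triage script, and it turns "incomplete report" from a rejection reason into a non-event.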
Step 2: Request a Live Reproduction Session
If the developer claims they cannot reproduce the issue, schedule a brief call and reproduce it in front of them. This is not confrontational – it is efficient. Environment mismatches are real. “Cannot Reproduce” sometimes genuinely means the defect only appears in a specific data state or browser configuration that the developer did not replicate.
In healthcare IT projects, this comes up constantly. A QA engineer testing an EHR interface may trigger a defect by navigating through a specific patient record type – say, a record with both a primary and secondary insurance payer attached. The developer testing with a single-payer record sees nothing wrong. The environment is not broken. The test data is different. That distinction matters for triage.
Step 3: Involve the BA or Product Owner – Not Your Manager
If the dispute is about whether the behavior is intentional, that is a requirements question – not a QA question and not a development question. The Business Analyst owns the requirements. The Product Owner owns the acceptance criteria. Bring them in to clarify the intended behavior.
BABOK v3 defines requirements traceability as a key BA responsibility – specifically the ability to link requirements to test cases and verify that implemented behavior aligns with documented business needs. When a developer says “this is working as designed,” the BA’s job is to confirm whether the design actually intended that behavior. Often it did not.
Step 4: Route Through Defect Triage
Most mature teams have a defect triage committee – a cross-functional group that reviews disputed or high-priority defects. Per the ISTQB Glossary, the defect triage committee is defined as “a cross-functional team of stakeholders who manage reported defects from initial detection to ultimate resolution.” This is the correct forum for a contested rejection.
Bring the defect to triage with your documentation complete. Let the committee decide. This removes the dispute from the bilateral QA-dev dynamic and puts it in a structured governance process. That is the point.
Step 5: Escalate to QA Lead or Test Manager Only After Triage
If triage does not resolve the issue – or if your team lacks a formal triage process – the next step is escalation to the QA Lead or Test Manager. This is not “going over someone’s head.” It is following the defined defect management workflow.
The QA Lead can review the evidence objectively and either support the developer’s rejection or formally document the dispute. Either outcome is acceptable. An undocumented rejected defect that ships to production is not.
Real Scenario: Defect Denial in a Healthcare IT Integration Project
A QA team is validating a payer-provider API integration during an EHR implementation. The test case covers HL7 FHIR-formatted claim submission to a clearinghouse. The tester logs a defect: the system accepts a claim with a missing NPI field without returning a validation error. The expected behavior, documented in the acceptance criteria, is a 422 response with a field-level error message.
The developer rejects it as “Works as Designed,” citing that NPI validation is handled downstream by the clearinghouse – not by the application layer.
This is a classic WAD dispute with a requirements ambiguity at its core. The acceptance criterion said "validation error returned" but did not specify where in the stack the validation must occur. Both sides have a defensible position.
The correct path: the QA engineer documents the dispute, attaches the acceptance criterion, and requests clarification from the BA and the compliance officer. In a HIPAA-covered entity context, failing to validate NPI at the application layer creates an audit risk. The compliance angle changes the priority. The defect gets confirmed, reclassified from Medium to High, and assigned for a fix in the current sprint.
Without the escalation process, that defect ships. During a HIPAA audit, missing NPI validation in a claim submission flow is not a cosmetic issue – it is a compliance gap that affects claim adjudication and provider identity verification.
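The application-layer check the tester expected could be sketched as follows. This is a deliberately simplified payload shape, not real FHIR, and the handler is a hypothetical illustration of the 422-with-field-level-error behavior in the acceptance criterion.

```python
# Minimal sketch of the application-layer validation the tester expected:
# reject a claim payload missing the NPI field with a 422-style response
# and a field-level error message. Payload shape is simplified, not FHIR.
def validate_claim(claim: dict) -> tuple[int, dict]:
    """Return (status_code, body) for a claim submission."""
    npi = claim.get("provider", {}).get("npi")
    if not npi:
        return 422, {"errors": [{"field": "provider.npi",
                                 "message": "NPI is required"}]}
    return 202, {"status": "accepted"}

status, body = validate_claim({"provider": {"name": "Dr. Smith"}})
# status == 422, body names the exact missing field
```

Whether validation also happens downstream at the clearinghouse is irrelevant to this check; the compliance argument in the scenario is that the application layer must fail fast and audibly.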
The Defect Report Quality Problem
The Rejected Defect Ratio (RDR) is a QA metric worth tracking. A consistently high RDR – meaning a large percentage of your reported defects get rejected – signals one of two things: the QA team is logging items that are not defects, or the defect reports are too incomplete to process. Both are fixable. Neither is acceptable long-term.
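The metric itself is a one-line calculation. A minimal sketch, with status labels that are illustrative:

```python
# Rejected Defect Ratio (RDR) sketch: the share of reported defects
# that ended in a rejected state. Status labels are illustrative.
def rejected_defect_ratio(defects: list[dict]) -> float:
    """RDR = rejected defects / total reported defects."""
    if not defects:
        return 0.0
    rejected = sum(1 for d in defects if d["status"] == "Rejected")
    return rejected / len(defects)

history = [{"status": s} for s in
           ("Closed", "Rejected", "Closed", "Rejected", "Fixed")]
ratio = rejected_defect_ratio(history)   # 2 of 5 -> 0.4
```

What counts as a "good" RDR is team-specific; the value of the metric is its trend, tracked sprint over sprint.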
Karl Wiegers, in Software Requirements (3rd ed.), notes that ambiguous or missing requirements are the single largest source of downstream defects. He is right, but the corollary is equally true: ambiguous defect reports create the same kind of friction on the resolution side that ambiguous requirements create on the development side. Precision in both directions saves time and avoids the “not a bug” standoff.
If your team’s Software Testing Life Cycle does not include a defect report template with mandatory fields, fix that before the next sprint starts.
Defect Denial in Agile Teams: SAFe and Scrum Contexts
In Scrum and SAFe environments, the “not a bug” dispute has an additional layer. If the behavior is not covered by the sprint’s Definition of Done or acceptance criteria, the developer’s rejection may be procedurally correct even if the behavior is wrong from a quality standpoint.
The right move in an Agile context is not to fight the rejection in the sprint – it is to surface the item in the sprint retrospective and ensure it gets logged as a backlog item for the next sprint if it falls outside current scope. Forcing a contested defect into an active sprint without product owner authorization creates scope creep and disrupts velocity.
In SAFe, System Demo is the formal checkpoint where integrated behavior is validated. Defects that surface during System Demo have a different triage path than mid-sprint defects. If a developer denies a defect first raised at System Demo level, that dispute goes to the Release Train Engineer, not the sprint team alone.
The comparison below covers how defect denial resolution works differently across delivery models:
| Delivery Model | Who Resolves Denial? | Where It Gets Logged |
|---|---|---|
| Waterfall / SDLC | QA Manager → Project Manager | Defect tracking tool (Jira, Bugzilla, Azure DevOps) |
| Scrum | Product Owner → Triage meeting | Sprint backlog or next sprint backlog |
| SAFe | Release Train Engineer + Product Management | Program backlog; reviewed at PI Planning |
| Kanban / Continuous Delivery | Team lead + QA lead | Bug queue with WIP limits applied |
What QA Should Not Do When a Developer Denies the Defect
There are patterns that consistently make defect disputes worse. Avoid them.
Do not immediately escalate to your manager. Going over the developer’s head before attempting a direct resolution is inflammatory and usually unnecessary. It signals that you cannot resolve issues at the team level – which reflects on you, not the developer.
Do not close the defect to “keep the peace.” A dismissed valid defect is a deferred production risk. If it ships and causes an incident, you will need to explain why it was closed before resolution. “The developer said it was fine” is not a documented rationale.
Do not log the same defect under a new ticket number after rejection. That approach produces duplicate tickets, wastes triage time, and is easily spotted. It damages credibility more than any disputed defect ever could.
Do not make defect disputes personal. The developer is not your adversary. The defect is. Most developers who reject valid defects are working under time pressure, incomplete specs, or both. Approach the conversation with evidence, not emotion.
Connecting Defect Management to the Broader QA Role
Defect management is one pillar of a complete QA function. How a team handles denied defects reflects the maturity of its entire testing process – from requirement traceability through the Software Development Life Cycle to post-release monitoring.
Teams that track their Rejected Defect Ratio over time find patterns. A spike in WAD rejections usually means requirements need to be written more precisely. A spike in CNR rejections usually means reproduction steps and environment documentation are insufficient. Neither pattern fixes itself without deliberate process adjustment.
In healthcare IT specifically – where a missed defect can affect claim accuracy, medication ordering logic, or HIPAA-protected data flows – letting a disputed defect die quietly is not a harmless team habit. It is a liability. The types of testing applied (functional, regression, integration, compliance) all produce defects that eventually face the same triage process. The process has to hold under pressure.
Document Everything – Especially When the Answer Is No
When a defect is ultimately rejected after triage and escalation, document the outcome with the rationale. Capture who made the decision, why, and what risk was accepted. This is not bureaucratic overhead. It is the paper trail that protects the QA team if the behavior surfaces post-release.
In Six Sigma terms, an undocumented risk acceptance is not risk management. It is risk ignorance. The difference matters when the production incident report asks why a known issue was not fixed before release.
A disputed defect that gets formally rejected – with a documented risk acceptance signed off by the product owner or project manager – is a closed loop. You did your job. The organization made its decision. That is how mature teams operate.
The one thing to walk away with: when a developer denies the defect, your evidence must be stronger than their opinion. Build the report, trace it to a requirement, bring it to triage, and document the outcome – regardless of which way it goes.
Suggested external references:
- ISTQB Glossary: Defect Triage Committee – official ISTQB definition used in this article
- IIBA BABOK v3 – requirements traceability and stakeholder communication framework
