Software Testing Life Cycle (STLC) and Software Development Life Cycle (SDLC)

STLC vs SDLC: How the Software Testing Life Cycle Fits Inside Software Development

Most teams understand the Software Development Life Cycle as the project roadmap – from requirements to deployment. What gets treated as an afterthought is the Software Testing Life Cycle (STLC), the structured process that runs parallel to development and governs every testing decision. Treating STLC as a late-stage checklist is exactly how critical defects reach production. This article breaks down both cycles, where they overlap, and how to run them in sync on real projects.

6 – STLC phases
10x – cost to fix a bug in production vs. during requirements
~40% – fewer post-release defects with structured STLC
RTM – links every test to a requirement; non-negotiable in healthcare IT

What the Software Development Life Cycle Actually Covers

SDLC is the full end-to-end framework for producing software. It starts the moment a business need is identified and ends only when the system is decommissioned. Most implementations follow six core phases: requirement analysis, system design, development, testing, deployment, and maintenance.

Each phase produces specific artifacts. Requirements analysis produces a Business Requirements Document (BRD) or user stories. Design produces architecture documents, data flow diagrams, and system specs. Development produces working code. Every artifact from an earlier phase becomes the input for the next. That dependency chain is exactly why gaps in requirements create test failures three sprints later.

SDLC does not prescribe a single methodology. Waterfall runs phases sequentially. Agile compresses and repeats them in short iterations. SAFe distributes them across multiple teams with coordinated Program Increments. The underlying phase logic stays consistent – only the sequencing and cadence change. If you work in SAFe and need a deeper breakdown of the planning layer, the Scrum framework overview covers how individual team ceremonies feed into larger release trains.

What the Software Testing Life Cycle Is – and Is Not

STLC is a structured sequence of testing activities that governs how testing is planned, designed, executed, and closed. It is a subset of SDLC, but “subset” does not mean “smaller importance.” STLC is the process that determines whether the software SDLC produced actually meets requirements.

A common misconception: STLC only begins after development finishes. That view belongs in 2005. In modern practice, STLC kicks off the moment requirements are documented. QA analysts review the BRD during requirement analysis to identify ambiguous, untestable, or missing acceptance criteria before a single line of code is written. That early engagement is what Karl Wiegers describes in Software Requirements as testability review – requirements that cannot be tested are not complete requirements.

What STLC is not: a list of test cases, a sprint ceremony, or something that belongs only to the QA team. It is a governance framework with defined entry and exit criteria at every phase. For more on what the QA discipline actually covers, see What Is QA.

STLC Phases: What Happens at Each Stage

Phase 1 – Requirement Analysis

QA reviews functional and non-functional requirements to assess testability. The key output is a Requirements Traceability Matrix (RTM) – a document that maps every requirement to at least one test case. If a requirement has no test case, it will not be verified. If a test case maps to no requirement, it is testing scope that was never agreed on. Neither outcome is acceptable on a regulated project.
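The RTM's two failure modes described above can be checked mechanically. The sketch below is illustrative only, with hypothetical requirement and test case IDs; real projects typically pull this data from a test management tool.

```python
# Minimal RTM coverage check (IDs are hypothetical).
# Flags requirements with no test case, and test cases mapped to no requirement.
rtm = {
    "REQ-101": ["TC-001", "TC-002"],
    "REQ-102": [],                    # untested requirement: will not be verified
}
all_test_cases = {"TC-001", "TC-002", "TC-099"}  # TC-099 tests unagreed scope

untested = [req for req, tcs in rtm.items() if not tcs]
mapped = {tc for tcs in rtm.values() for tc in tcs}
orphaned = sorted(all_test_cases - mapped)

print(untested)  # ['REQ-102']
print(orphaned)  # ['TC-099']
```

Either list being non-empty is a blocking finding on a regulated project, not a warning.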

Phase 2 – Test Planning

This is the most strategically consequential phase. The test plan defines scope, testing types (functional, regression, performance, security, UAT), resource allocation, timeline, risk register, and the automation feasibility assessment. It also sets entry and exit criteria for each subsequent phase. Skipping this or treating it as a template-fill exercise is how teams end up discovering mid-sprint that nobody owns the test environment or that the automation tooling doesn’t support the tech stack.

Phase 3 – Test Case Development

Testers write detailed test cases, define test data, and – where applicable – build automation scripts. Each test case should include preconditions, steps, expected results, and pass/fail criteria. Vague expected results (“system should behave correctly”) are not test cases – they are wishful thinking. For a structured look at how test cases differ from test scenarios, the STLC deep dive covers the distinction with examples.
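The required fields of a well-formed test case can be made explicit as a data structure. The following is a sketch, not a standard schema; the vague-phrase check is a deliberately naive assumption standing in for a real review step.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    id: str
    requirement_id: str
    preconditions: list
    steps: list
    expected_result: str

    def is_well_formed(self) -> bool:
        # A vague expected result like "behaves correctly" is not verifiable.
        vague = {"system should behave correctly", "works as expected"}
        return bool(self.steps) and self.expected_result.lower() not in vague

tc = TestCase(
    id="TC-001",
    requirement_id="REQ-101",
    preconditions=["User has an active Coverage record"],
    steps=["Submit a clean claim via POST /Claim", "Poll for ClaimResponse"],
    expected_result="ClaimResponse.outcome == 'complete' within 5 seconds",
)
print(tc.is_well_formed())  # True
```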

Phase 4 – Test Environment Setup

Hardware, software, network configuration, and access credentials are provisioned. The test environment should mirror production as closely as budget and timeline allow. In healthcare IT, this means de-identified patient data, HL7 FHIR-compliant interfaces, and environments that replicate EHR integration points. Testing against a sanitized dataset that doesn’t reflect real message volumes will miss performance defects that only show up under actual clinical load.

Phase 5 – Test Execution

Test cases run against the build. Defects are logged with severity, priority, reproduction steps, and screenshots. Failed tests trigger defect reports routed to development. Retesting happens after fixes. Regression suites validate that the fix didn’t break adjacent functionality. On Agile projects, this phase is continuous – not a single end-of-sprint event. For a broader view of testing disciplines covered during this phase, see types of testing.

Phase 6 – Test Cycle Closure

The team evaluates exit criteria: defect density, test coverage percentage, number of open critical/high defects, and UAT sign-off status. A closure report is published documenting what was tested, what was not, and any known risks going into production. This document matters in audits – including HIPAA audits, where demonstrating a testing chain of custody is part of demonstrating due diligence under the Security Rule.
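Exit criteria only work if they are evaluated as a hard gate rather than a discussion. A minimal sketch of such a gate is below; the thresholds are assumptions for illustration, not universal standards, and real projects set them in the test plan.

```python
# Illustrative closure gate; thresholds are project-specific assumptions.
def exit_criteria_met(metrics: dict) -> bool:
    return (
        metrics["coverage_pct"] >= 95.0   # executed test coverage
        and metrics["open_critical"] == 0  # no open critical defects
        and metrics["open_high"] <= 2      # documented, accepted high defects
        and metrics["uat_signed_off"]      # formal UAT sign-off on file
    )

cycle = {"coverage_pct": 97.5, "open_critical": 0,
         "open_high": 1, "uat_signed_off": True}
print(exit_criteria_met(cycle))  # True
```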

SDLC vs STLC: Side-by-Side Comparison

The two cycles serve different purposes but are tightly coupled. Here is how they compare across the dimensions that matter on real projects:

Dimension | SDLC | STLC
Primary goal | Deliver working software that meets business requirements | Verify that the delivered software meets quality and acceptance criteria
Scope | Full project lifecycle – requirements through retirement | Testing activities only – from requirement review to test closure
Who owns it | Project Manager / Program Manager / Product Owner | QA Lead / Test Manager
Key inputs | Business needs, stakeholder requirements, budget, timeline | Requirements documents, design specs, test strategy
Key outputs | Deployed, maintained software product | RTM, test cases, defect reports, test closure report
Start trigger | Business case or product idea approved | Requirements baseline established
End trigger | Software decommissioned | Test closure report signed off
Methodology impact | Waterfall = sequential phases; Agile = iterative sprints | Waterfall = testing after dev; Agile = continuous testing per sprint
Compliance relevance | Governs change control, release management, risk management | Governs audit trail, test evidence, defect traceability

STLC and SDLC in Healthcare IT: A Payer-Provider Integration Scenario

Consider a mid-size health plan implementing a new claims adjudication module that must exchange data with provider EHR systems via HL7 FHIR R4. The SDLC begins with a business analysis phase: the BA team documents the payer’s adjudication rules, maps the data flows, and defines acceptance criteria. Those criteria feed directly into the STLC requirement analysis phase, where QA identifies which FHIR resource types (Claim, ClaimResponse, Coverage) must be validated, and what constitutes a failed transaction under CMS rules.

This is where the edge case emerges: the requirements document specifies behavior for “clean claims.” It says nothing about duplicate submissions, claims with unsupported ICD-10 codes, or FHIR messages that are schema-valid but semantically incorrect. None of those scenarios appear in the BRD. If QA does not raise them during requirement analysis, they will not have test cases. And they will happen in production – because providers will submit them.
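The missing negative paths can be sketched as a stand-in triage function. Everything here is hypothetical – the claim IDs, the supported-code set, and the rules themselves – but it shows the shape of the test cases QA should demand during requirement analysis.

```python
# Stand-in for adjudication intake logic; codes and rules are illustrative,
# not real CMS or payer policy.
SUPPORTED_ICD10 = {"E11.9", "I10"}

def triage_claim(claim: dict, seen_ids: set) -> str:
    if claim["id"] in seen_ids:
        return "duplicate"                 # duplicate submission path
    if claim["icd10"] not in SUPPORTED_ICD10:
        return "unsupported-code"          # unsupported ICD-10 path
    return "accepted"                      # the only path the BRD covered

seen = {"CLM-001"}
print(triage_claim({"id": "CLM-001", "icd10": "I10"}, seen))    # duplicate
print(triage_claim({"id": "CLM-002", "icd10": "Z99.9"}, seen))  # unsupported-code
print(triage_claim({"id": "CLM-003", "icd10": "E11.9"}, seen))  # accepted
```

Each non-"accepted" branch is a requirement gap: if there is no documented expected behavior for it, there is nothing to test against.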

During test planning, the QA lead defines four testing types required before go-live: functional testing of the adjudication logic, integration testing of the FHIR API endpoints, performance testing under peak claim volume, and security testing aligned with HIPAA Security Rule requirements for PHI in transit and at rest. Each type gets its own entry and exit criteria. The exit criterion for security testing is not “no critical defects” – it is a signed security assessment report, because the compliance team needs evidence for the HIPAA audit trail.

When the test environment setup phase reveals that the sandbox FHIR server does not support R4 – only DSTU2 – that is not a QA failure. It is an SDLC design phase gap that STLC surfaced before it became a production incident. This is the exact value of treating both cycles as parallel processes with real handshake points, not sequential steps where QA takes whatever development hands over.

For teams working in the BA space on projects like this, the business analyst role guide covers how BABOK v3 frames requirements elicitation and traceability in ways that directly support test planning.

How STLC Runs Inside an Agile SDLC

In Waterfall, STLC and SDLC run sequentially. Development completes, then testing begins. The risk is that defects found late are expensive and release delays compound. Most teams have moved away from this model – or claim to have.

In Agile, the STLC phases compress and repeat inside each sprint. Requirement analysis happens during backlog refinement. Test planning happens during sprint planning. Test case development runs alongside development – not after it. Test execution happens within the sprint. Closure happens at the sprint review. The key principle is shift-left testing: the earlier in the SDLC you introduce testing activities, the cheaper and faster defect resolution becomes.

In SAFe, this scales further. Program-level regression suites run at the end of each PI. System Demos at PI boundaries serve as a cross-team exit gate equivalent to STLC closure. The RTM must span stories across multiple teams if the feature crosses system boundaries – and in healthcare IT, it almost always does.

Who Does What Across Both Cycles

Business Analyst
Owns requirements testability. Bridges BRD to acceptance criteria. Reviews RTM for coverage gaps. Per BABOK v3, traces requirements through to business outcomes.
QA Lead / Test Manager
Owns the test plan, test strategy, and test closure report. Manages entry/exit criteria. Escalates environment and data blockers before they delay execution.
Developer
Delivers code that meets the definition of done, including unit test coverage. Supports QA during defect triage. Does not unilaterally decide that a defect is "not a bug."
Product Owner
Signs off on UAT. Prioritizes defect backlog with business risk in mind. Owns the go/no-go decision at test closure. Not a passive stakeholder.

Where STLC Breaks Down on Real Projects

The ideal scenario – QA involved from day one, stable requirements, adequate test environment, sufficient time – exists on fewer projects than anyone would admit. Here is what actually happens, and what to do about it.

Changing requirements mid-cycle. Requirements change – especially in Agile. The risk is test cases written against an earlier version. The mitigation is a living RTM updated with every requirement change, not reconciled at the end of the sprint. If your project management tool doesn’t enforce this, your RTM will drift.
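RTM drift is detectable if requirements and test cases carry version numbers. The sketch below assumes a simple integer versioning scheme, which is an illustration rather than a feature of any particular tool.

```python
# Flag test cases written against a stale requirement version.
# Version numbers and IDs are hypothetical.
requirements = {"REQ-101": 3, "REQ-102": 1}              # current versions
test_cases = {"TC-001": ("REQ-101", 2),                   # written against v2
              "TC-002": ("REQ-102", 1)}                   # up to date

stale = [tc for tc, (req, ver) in test_cases.items()
         if requirements[req] != ver]
print(stale)  # ['TC-001']
```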

No dedicated test environment. Sharing a test environment with development creates race conditions – testers validating builds that developers are actively changing. The fix is a separate, locked test environment for each test cycle, with a formal promotion process. In cloud-based projects, this is inexpensive. On legacy on-premise systems, it requires explicit SDLC-level planning.

Exit criteria not enforced. Releasing with open critical defects because “the business decided to accept the risk” without documenting that decision is not risk acceptance – it is risk deferral with no paper trail. Exit criteria exist precisely to force that conversation before the release, not after the production incident.

STLC treated as a QA-only process. When development, business analysis, and product ownership disengage from STLC activities outside of defect triage, test coverage narrows to whatever QA can infer from incomplete documentation. BABOK v3 explicitly frames requirements validation as a shared responsibility – not something delegated entirely to QA at the end of the cycle.

The One Practice That Changes Both Cycles

If you take one operational change from this article, make it this: require your BA or QA lead to review every new requirement for testability before it enters the sprint backlog. Not after design. Not before release. Before it is committed to. A requirement with no measurable acceptance criterion will generate a test case with no pass/fail definition, which will generate a defect dispute that wastes two days of everyone’s time. Testability review at the source is the cheapest defect prevention technique available – and it requires no tools, no budget, and no process overhaul. It requires someone in the room who knows how to ask: “How will we know this is done?”
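"How will we know this is done?" can even be partially automated as a first-pass screen. The heuristics below are crude assumptions – a keyword filter is no substitute for a human reviewer – but they catch the most common unmeasurable phrasings before a requirement enters the backlog.

```python
import re

# Naive testability screen for acceptance criteria (heuristics are
# illustrative assumptions, not a validated rule set).
def looks_testable(criterion: str) -> bool:
    vague = re.search(r"\b(user.friendly|fast|intuitive|appropriate|correctly)\b",
                      criterion, re.IGNORECASE)
    measurable = re.search(r"\d|==|within|at least|no more than",
                           criterion, re.IGNORECASE)
    return measurable is not None and vague is None

print(looks_testable("Page loads fast"))                          # False
print(looks_testable("Search returns results within 2 seconds"))  # True
```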


Authoritative references: BABOK v3 (IIBA) – Chapter 7 covers requirements analysis and validation techniques including testability assessment. HL7 FHIR R4 specification (hl7.org/fhir) – the normative standard for healthcare API integration testing. Karl Wiegers, Software Requirements, 3rd ed. – Chapter 17 covers requirements validation and acceptance criteria formulation.
