Software Testing Life Cycle (STLC): Phases, Entry/Exit Criteria, and What Actually Happens on Real Projects

Most teams treat testing as something that happens after development. That decision is where defects get expensive. The software testing life cycle (STLC) exists to move testing left – starting at requirements, not at deployment. This article breaks down each STLC phase with its entry and exit criteria, real-world constraints, and the decisions that separate solid QA practice from checkbox compliance.

At a glance:
6 STLC phases
10x cost of fixing a defect post-release vs. catching it at the requirements phase
RTM as the key deliverable across all phases
STLC ≠ SDLC: a subset of it, not a duplicate

What Is the Software Testing Life Cycle (STLC)?

The software testing life cycle is a defined sequence of phases that a QA team follows from the moment requirements land to the moment the product ships – and beyond. Each phase has specific goals, activities, entry criteria (what must be true before the phase starts), and exit criteria (what must be delivered before moving forward).

STLC sits inside the broader Software Development Life Cycle (SDLC). SDLC covers everything from business analysis through deployment. STLC governs only the testing thread running through it. Understanding the distinction matters when you’re scoping a QA engagement or defending your team’s role in a sprint planning argument.

Without a defined STLC, testing becomes reactive – testers get handed a build and told to “find bugs.” That approach misses architectural flaws, misunderstood acceptance criteria, and integration failures that were baked in long before the first line of code was written.

STLC vs. SDLC: The Distinction That Matters

| Attribute | SDLC | STLC |
| --- | --- | --- |
| Scope | Full product lifecycle | Testing thread only |
| Owned by | Dev leads, PMs, architects | QA leads, test managers |
| Primary goal | Build working software | Validate software quality |
| Starts when | Project is initiated | Requirements are available |
| Key deliverables | Architecture docs, source code, release builds | Test plan, RTM, defect reports, test closure report |
| Compliance relevance | System design, data architecture | Traceability, evidence, audit trails |

In regulated industries like healthcare IT, this distinction is not academic. HIPAA and CMS compliance audits ask for testing evidence – not just that the system was built, but that it was validated. The STLC produces that evidence systematically.

The 6 Phases of the Software Testing Life Cycle

Most organizations follow six core phases. The names vary slightly by organization, but the logic is the same across the industry. Each phase feeds the next. Skip one and you will feel it downstream.

Phase 1 – Requirement Analysis

This is where QA earns its seat at the table – or loses it. The testing team reviews all available requirements: Business Requirements Documents (BRD), Functional Requirement Specifications (FRS), user stories, and use cases. The goal is not to rewrite requirements. The goal is to identify what is testable, what is ambiguous, and what is missing.

A QA analyst reviewing requirements for an EHR implementation, for example, may spot that a user story says “the system shall display patient allergies” without specifying data source, display format, or behavior when the HL7 FHIR feed is unavailable. That ambiguity needs resolution before a single test case gets written. Catching it here costs a meeting. Catching it in UAT costs a sprint.

The Requirement Traceability Matrix (RTM) begins here. Every requirement gets an ID. Every test case written later maps back to that ID. This is the foundation of traceability – which BABOK v3 identifies as a core competency for quality assurance in business analysis contexts.
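
To make the mapping concrete, here is a minimal sketch of an RTM entry as a data structure – Python for illustration only, since most teams hold this in a spreadsheet or a test management tool. The requirement IDs and field names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str          # assigned during requirement analysis, e.g. "REQ-014"
    description: str
    test_case_ids: list[str] = field(default_factory=list)  # linked during Phase 3

# RTM draft at the end of Phase 1: every testable requirement has an ID,
# but no test cases are linked yet.
rtm = [
    Requirement("REQ-014", "Display patient allergies from the FHIR feed"),
    Requirement("REQ-015", "Show a fallback message when the feed is unavailable"),
]
```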

Entry criteria: Requirements documents available (BRD, FRS, or user stories), stakeholder access for clarification.
Exit criteria: RTM draft created, all testable requirements identified, ambiguities documented and assigned for resolution.

Phase 2 – Test Planning

The test plan is not a formality. It is the document that governs every decision made during testing – scope, out-of-scope, approach, tools, roles, schedule, risks, and sign-off process. A weak test plan produces scope creep, missed coverage, and disagreements about what “done” means.

Test planning also includes the automation feasibility decision. Not every feature is worth automating. In a SAFe environment, this conversation happens during PI Planning or sprint 0. In a regulated healthcare project, the test plan often doubles as a validation plan – referencing the system’s intended use, risk classification, and the regulatory framework it operates under (HIPAA, 21 CFR Part 11, or CMS guidelines).
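
One way to keep that feasibility call from being purely subjective is to score candidates on a few factors. The sketch below is a hypothetical heuristic, not a standard formula – the factors and weights are assumptions for illustration.

```python
def automation_score(runs_per_release: int, ui_stability: float, manual_minutes: int) -> float:
    """Rough automation-value heuristic: cases that run often, sit on stable UI,
    and are expensive to execute manually score highest. Weights are illustrative."""
    return runs_per_release * ui_stability * (manual_minutes / 10)

candidates = {
    "login_flow": automation_score(runs_per_release=40, ui_stability=0.9, manual_minutes=5),
    "annual_report_export": automation_score(runs_per_release=2, ui_stability=0.5, manual_minutes=30),
}
# login_flow scores far higher -> automate it; the rare, unstable export stays manual.
```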

The test strategy sets the approach at the program level. The test plan executes that strategy for a specific project or release. The two terms are often confused and often collapsed into one document on smaller teams. Both need to exist, whether as separate documents or distinct sections.

Entry criteria: Signed-off or stable requirements, RTM draft available, high-level design documented.
Exit criteria: Test plan reviewed and approved, tool selection finalized, effort and timeline estimated, risks logged.

Phase 3 – Test Case Development

This is execution preparation. Testers write detailed test cases, prepare test data, and – where in scope – develop automation scripts. Each test case links back to one or more requirements in the RTM. If a test case cannot be mapped to a requirement, it needs justification or removal.

Good test cases cover positive paths, negative paths, and boundary conditions. Each documents pre-conditions, steps, expected results, and a field for actual results. The actual results column stays empty until execution. If your test cases don’t have expected results written before execution, you’re not testing – you’re exploring.
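
A minimal pytest sketch of that discipline: expected results are fixed before execution, and positive, negative, and boundary paths are all explicit. The function under test and its limits are hypothetical.

```python
import pytest

def validate_copay(amount: float) -> bool:
    """Hypothetical system under test: copays must be between 0 and 500 inclusive."""
    return 0 <= amount <= 500

@pytest.mark.parametrize("amount,expected", [
    (25.00, True),    # positive path: typical value
    (-1.00, False),   # negative path: invalid input
    (0.00, True),     # boundary: lower limit
    (500.00, True),   # boundary: upper limit
    (500.01, False),  # boundary: just past the limit
])
def test_copay_validation(amount, expected):
    assert validate_copay(amount) == expected  # expected result written before execution
```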

Test data preparation is consistently underestimated. In healthcare IT, this means de-identified patient data that mirrors production distributions – including edge cases like patients with multiple insurance plans, ICD-10 codes with decimal precision, or claims that span fiscal year boundaries. Synthetic data generation tools help, but the QA team still has to define the data model. For a deeper look at how test cases differ from test scenarios, see the site’s breakdown of types of testing.
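
Defining that data model can be as simple as a generator function. The sketch below is entirely synthetic – field names and distributions are assumptions, deliberately skewed toward the edge cases above.

```python
import random

def synthetic_patient(patient_id: int) -> dict:
    """Generate a synthetic (non-PHI) patient record covering the edge cases
    mentioned above: multiple insurance plans and fiscal-year-spanning claims."""
    return {
        "patient_id": f"TEST-{patient_id:05d}",
        "insurance_plans": random.choice([1, 1, 1, 2, 3]),             # mostly one plan, like production
        "diagnosis_code": random.choice(["E11.9", "I10", "J45.909"]),  # ICD-10 codes with decimal precision
        "claim_spans_fiscal_year": random.random() < 0.05,             # rare, but must be represented
    }

test_patients = [synthetic_patient(i) for i in range(1000)]
```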

Entry criteria: Approved test plan, stable and signed-off requirements, high-level design available.
Exit criteria: Test cases reviewed and baselined, test data prepared, RTM updated with test case IDs, automation scripts ready (if in scope).

Phase 4 – Test Environment Setup

This phase is the one that blows timelines on nearly every project. The test environment should mirror production as closely as possible. In practice, it rarely does – and the gap is where false positives and missed defects live.

Environment setup includes server provisioning, test tool installation and configuration, database setup, network access, and third-party integration stubs or sandboxes. On a payer-provider integration project using HL7 FHIR APIs, the QA team may need access to a sandbox FHIR server, mock authorization endpoints, and a test patient registry. Coordinating that across three vendors and two internal teams – while staying on schedule – is a real project management challenge, not a technical one.

Smoke testing happens here. Before any formal test cycle begins, the team runs a lightweight smoke suite to confirm the environment is stable enough to test. A failed smoke test sends the build back to dev. Running a full regression against a broken environment wastes everyone’s time.
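
A minimal smoke suite sketch, assuming a hypothetical environment URL and health endpoints. The point is a handful of fast availability checks that gate the formal cycle, not functional coverage.

```python
import pytest
import requests

BASE_URL = "https://qa-env.example.internal"  # hypothetical test environment

@pytest.mark.smoke
def test_app_is_reachable():
    assert requests.get(f"{BASE_URL}/health", timeout=5).status_code == 200

@pytest.mark.smoke
def test_fhir_sandbox_responds():
    # Confirms the FHIR sandbox is up -- not that it behaves correctly.
    assert requests.get(f"{BASE_URL}/fhir/metadata", timeout=5).status_code == 200

@pytest.mark.smoke
def test_login_page_loads():
    assert requests.get(f"{BASE_URL}/login", timeout=5).status_code == 200
```

Run it with pytest -m smoke; any failure sends the build back to dev before the full suite starts.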

Entry criteria: Environment setup plan defined, test data ready, smoke test cases prepared.
Exit criteria: Environment operational and validated, smoke test passed, QA team has access, defect tracking tool configured.

Phase 5 – Test Execution

Test execution is the most visible phase. It is not the most important one. By this point, everything that makes execution effective – or ineffective – has already been decided.

Testers execute cases, record actual results, and log defects for any deviation from expected behavior. Defects go into a tracking system (Jira, Azure DevOps, or similar) with enough detail for a developer to reproduce without asking follow-up questions. That means: steps to reproduce, environment details, build version, test data used, expected result, actual result, and supporting evidence (screenshots, logs, API response payloads).
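
A sketch of a defect record carrying those fields, shaped loosely like a tracker payload. The field names are illustrative – not the actual Jira or Azure DevOps schema.

```python
defect = {
    "summary": "Allergy panel renders blank when the FHIR feed times out",
    "steps_to_reproduce": [
        "Open patient TEST-00042 in the chart view",
        "Throttle the FHIR sandbox to force a timeout",
        "Observe the allergy panel",
    ],
    "environment": "QA-2",
    "build_version": "4.7.1-rc3",
    "test_data": "patient TEST-00042",
    "expected_result": "Fallback message: allergy data temporarily unavailable",
    "actual_result": "Panel renders blank with no message",
    "attachments": ["allergy-blank.png", "fhir-timeout-response.json"],
    "severity": "High",
}
```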

The RTM is updated throughout execution. Every test case status – pass, fail, blocked – gets recorded. Blocked test cases are particularly important to flag. A blocked case means something outside the test itself is preventing execution – missing data, environment outage, or an upstream dependency that hasn’t shipped yet. Blocked cases need owners and deadlines, not passive waiting.
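
That rule is easy to enforce mechanically. A sketch with hypothetical fields: scan the execution log for blocked cases that have no owner or are past their unblock date.

```python
from datetime import date

blocked_cases = [
    {"tc_id": "TC-118", "reason": "upstream claims API not deployed",
     "owner": "dev-lead", "unblock_by": date(2025, 3, 14)},
    {"tc_id": "TC-131", "reason": "test data missing", "owner": None, "unblock_by": None},
]

def needs_escalation(case: dict) -> bool:
    """A blocked case with no owner, or one past its unblock date, gets escalated."""
    overdue = case["unblock_by"] is not None and case["unblock_by"] < date.today()
    return case["owner"] is None or overdue

for case in filter(needs_escalation, blocked_cases):
    print(f"ESCALATE {case['tc_id']}: {case['reason']}")
```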

In a SAFe Agile context, execution runs in parallel with development iterations. The QA team is testing sprint N-1 work while dev completes sprint N. Defect cycles run within the sprint cadence. This is where QA roles and dev roles need clear handoff agreements, or work in progress piles up at the boundary.

Entry criteria: Stable test environment, smoke test passed, test cases and test data ready, defect tracking tool accessible.
Exit criteria: All test cases executed (or formally deferred), defect report generated, RTM updated with pass/fail status, critical defects resolved or risk-accepted.

Phase 6 – Test Cycle Closure

Closure is where teams extract learning – and most skip it. The test closure report summarizes what was tested, what was found, what was fixed, what was deferred, and what defect trends emerged. Test metrics are compiled: total cases executed, pass rate, defect count by severity, defect fix rate, and open defects by priority.
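
Compiling those metrics is mechanical once execution status is recorded. A sketch with made-up results:

```python
from collections import Counter

executions = [  # illustrative execution log: (test case ID, final status)
    ("TC-101", "pass"), ("TC-102", "pass"), ("TC-103", "fail"),
    ("TC-104", "blocked"), ("TC-105", "pass"),
]
defect_severities = ["Critical", "High", "High", "Medium", "Low"]

statuses = Counter(status for _, status in executions)
executed = statuses["pass"] + statuses["fail"]
print(f"Executed: {executed}/{len(executions)}, pass rate: {statuses['pass'] / executed:.0%}")
print("Defects by severity:", Counter(defect_severities))
```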

This report is not just internal documentation. In healthcare IT, the test closure report (or validation summary report) is the evidence submitted to compliance teams and, in some cases, to regulators. For systems subject to 21 CFR Part 11 or HIPAA Security Rule requirements, the test closure documentation must demonstrate that the system was formally validated before production use.

Lessons learned belong here, too. Which phase caused the most rework? Were test cases underspecified? Did environment setup take twice as long as planned? These observations feed the next project’s test plan.

Entry criteria: Test execution complete, all critical defects resolved or formally deferred with sign-off, defect report available.
Exit criteria: Test closure report delivered and approved by stakeholders, test artifacts archived, lessons learned documented.

STLC in Practice: A Healthcare IT Scenario

A regional health plan is implementing a new prior authorization module that integrates with their core claims platform via HL7 FHIR R4 APIs. The timeline is 18 weeks. Compliance with CMS interoperability rules is mandatory.

During Phase 1, the QA lead reviews the FRS and flags three requirements with no defined error-handling behavior for FHIR timeout scenarios. These go back to the business analyst for resolution before test planning begins.

In Phase 2, the test plan designates a separate performance test cycle for API response time thresholds, in addition to functional and integration testing. The plan explicitly calls out HIPAA audit log testing as a required validation track – because it is a contractual deliverable, not an optional quality activity.

By Phase 4, the team discovers the FHIR sandbox provided by the EHR vendor doesn’t support the R4 version required by the spec. That takes two weeks to resolve. This is a real scenario, not a hypothetical – vendor environment mismatches are one of the most common causes of STLC schedule overruns in healthcare IT integration projects.

By Phase 6, the validation summary report provides the compliance team with full traceability: every HIPAA-relevant control mapped to a requirement, a test case, a pass/fail status, and a tester sign-off. Without the RTM discipline established in Phase 1, producing that report under deadline pressure would be nearly impossible.

STLC in Agile and SAFe Environments

STLC was originally conceived for waterfall. In Scrum and SAFe environments, the phases don’t disappear – they compress and iterate. Requirement analysis happens at story refinement. Test planning happens during sprint planning. Test case development runs in parallel with development within the sprint.

The phases still exist. They just operate at a different cadence. What changes is the formality of the deliverables, not the logic behind them. A SAFe team still needs to know what they’re testing before they test it. They still need a stable environment. They still need to track defects. The discipline is the same; the paperwork is lighter.

One genuine tension in agile STLC: test closure. When stories ship continuously, there is no single “end” that triggers a formal test cycle closure. Teams handle this through release-level test reports that aggregate sprint-level data, or through definition-of-done criteria that include closure activities at the story level. Either approach works if it is consistent and documented.

The Requirement Traceability Matrix: Backbone of the STLC

The RTM is the single artifact that ties the entire STLC together. It maps each business requirement to the test cases that validate it, and tracks execution status against each. A fully populated RTM answers the question every stakeholder eventually asks: “Are we sure we tested everything the business asked for?”

Karl Wiegers, in “Software Requirements,” describes traceability as essential for managing scope, validating completeness, and supporting change impact analysis. An RTM is the operational implementation of that principle in a QA context.

In practice, RTMs get neglected. Requirements change and the matrix doesn’t follow. New test cases get added without linking back to requirements. By test closure, the RTM looks nothing like the actual scope. Maintaining RTM discipline is a team discipline problem, not a tool problem. Any spreadsheet can hold an RTM. The challenge is the process around updating it.
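
When the RTM is current, that stakeholder question has a mechanical answer. A sketch of the coverage check, using the same hypothetical IDs as the earlier examples:

```python
def coverage_gaps(rtm: dict[str, list[str]], results: dict[str, str]) -> list[str]:
    """Return requirement IDs that are not fully validated: either no linked
    test cases, or at least one linked case that has not passed."""
    gaps = []
    for req_id, case_ids in rtm.items():
        if not case_ids or any(results.get(tc) != "pass" for tc in case_ids):
            gaps.append(req_id)
    return gaps

rtm = {"REQ-014": ["TC-101", "TC-103"], "REQ-015": ["TC-105"], "REQ-016": []}
results = {"TC-101": "pass", "TC-103": "fail", "TC-105": "pass"}
print(coverage_gaps(rtm, results))  # ['REQ-014', 'REQ-016'] -- not ready to ship
```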

Where STLC Breaks Down on Real Projects

The ideal STLC assumes clean handoffs between phases. Real projects do not deliver clean handoffs. Requirements arrive late, partially, or continue changing into test execution. Environment setup competes with development for infrastructure resources. Stakeholders approve test plans without reading them, then dispute scope during execution.

The most common failure pattern: skipping Phase 1. Teams under timeline pressure move straight from requirement receipt to test case writing without formally analyzing testability. The result is test cases built on misunderstood requirements that pass cleanly – and a system that fails in production against user expectations.

A second pattern: treating entry and exit criteria as paperwork checkboxes rather than quality gates. Exit criteria exist to prevent phase bleed – the condition where a team nominally closes a phase while leaving critical deliverables unfinished. When entry criteria for the next phase aren’t met, the team discovers the gap mid-execution, under pressure, with no good options.

A third pattern: legacy system complexity. In payer organizations running COBOL-based claims adjudication platforms alongside modern microservices, test environment parity is a known impossibility. The STLC has to acknowledge that gap explicitly in the test plan – with documented risk acceptance and mitigation – rather than pretend the environment is production-equivalent when it clearly is not.

Who Does What in the STLC

QA Lead / Test Manager: owns the test strategy, test plan, resource planning, risk management, and the test closure report.
QA Engineer / Tester: writes test cases, prepares test data, executes tests, logs defects, and updates the RTM.
Business Analyst: provides and clarifies requirements, resolves ambiguities flagged in Phase 1, and supports UAT sign-off.
DevOps / Infrastructure: provisions and maintains test environments, manages build deployments, and supports pipeline integration.

In smaller teams, one person wears multiple hats. A QA analyst doing both test planning and execution is common. What matters is that each function exists – even if the same person performs more than one. The STLC roles are about accountability, not headcount.

On projects with a Product Owner involved in UAT sign-off, the PO’s formal acceptance at test closure is the business confirmation that the software meets its intended purpose. That sign-off should be captured in the closure report, not just communicated verbally in a standup.

Applying the STLC in Regulated and High-Stakes Contexts

Healthcare IT and financial systems add compliance requirements that intensify every STLC phase. In healthcare, this means HIPAA Security and Privacy Rule coverage, HL7 FHIR conformance testing, and CMS-mandated interoperability validations for certain system types. The STLC framework doesn’t change – the test coverage scope and documentation standards do.

For financial systems subject to SOX or PCI-DSS, test closure documentation is part of the audit trail. Auditors don’t care about your team’s velocity. They want to see that requirements were tested, defects were tracked and resolved, and the system was validated before it touched production data.

The STLC provides the structure that makes that evidence producible on demand, rather than scrambled together the week before an audit.


One thing to implement this week

If your team does not have a live RTM for your current project, build one before your next test case gets written. It doesn’t have to be complex – a spreadsheet with requirement ID, description, linked test case IDs, and execution status is enough. Starting mid-project is better than never starting. The RTM is the one artifact that makes everything else in the STLC traceable, defensible, and auditable.
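
A starter sheet can be generated in a few lines – columns as described above, IDs illustrative:

```python
import csv

# Minimal starter RTM -- four columns are enough to begin with.
rows = [
    ["requirement_id", "description", "test_case_ids", "status"],
    ["REQ-014", "Display patient allergies", "TC-101;TC-103", "in progress"],
    ["REQ-015", "Fallback message on feed outage", "TC-105", "pass"],
]
with open("rtm.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```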


Authoritative references:
CMS.gov – Interoperability and Patient Access Rule documentation
HL7 FHIR – Official specification and implementation guidance
