What’s the Sequence to Write a Test Case?

Most QA professionals know what a test case is. Fewer follow a consistent sequence when writing one — and that gap shows up in execution failures, traceability gaps, and defects that slip through to production. This article walks through the correct sequence to write a test case, explains why the order matters, and grounds each step in how it applies to real-world projects.

Key takeaways:

  • ~40% of defects trace back to poorly defined test steps or missing preconditions.
  • Step 1: start with the requirement, not the UI. Requirement-first writing cuts rework.
  • Traceability: every test case must map to a requirement ID, especially in regulated environments.

Why the Sequence to Write a Test Case Matters

Writing test cases out of order is one of the most common mistakes in QA. It produces cases that are hard to execute, impossible to trace, and nearly useless for regression. The sequence isn’t arbitrary — it follows the logical dependency chain of a test case’s own structure.

You cannot define expected results before you understand preconditions. You cannot write test steps before you know the test data. Each element depends on the one before it. Skip or reverse that order and you introduce ambiguity that costs time later — usually during execution when the defect is most expensive to fix.

In the context of the Software Testing Life Cycle (STLC), test case writing is a formal phase — not an afterthought. It sits between test planning and test execution, and the quality of that phase determines the reliability of everything downstream.

The Full Sequence to Write a Test Case — Step by Step

The sequence below applies to both manual and automated test cases. In automation, each element maps directly to test script logic. In manual testing, it maps to what the tester reads and executes. The format is the same; only the execution medium changes.

Step 1: Identify the Requirement or User Story

Every test case starts with a source — a requirement, user story, or acceptance criterion. Without this anchor, you’re writing tests against assumptions, not specifications. Pull the requirement directly from your project tracking system (Jira, Azure DevOps, or a formal BRS). Note the requirement ID. You’ll need it for traceability later.

In SAFe environments, this source is typically the acceptance criteria defined on the Story card. In waterfall-adjacent projects, it comes from the functional specification. In either case, it must exist in writing before you open a test case template.

If the requirement is ambiguous — “the system should process claims correctly” tells you nothing testable — flag it before writing. Ambiguous requirements produce untestable cases. That’s a BA problem before it’s a QA problem. Business analysts and QA engineers should be aligning on acceptance criteria at the same time, not sequentially.

Step 2: Define the Test Objective

Before anything else on the case template, write one sentence describing what this test is meant to verify. Keep it narrow. One test case, one objective.

Bad: “Test the login functionality.”
Good: “Verify that a registered user with valid credentials can log in and is redirected to the dashboard.”

The objective determines scope. If you can’t write the objective in one sentence, the case is probably covering too much. Split it.
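The one-sentence rule can even be linted mechanically before a case enters review. A minimal Python sketch; the verb list and the sentence-counting heuristic are illustrative assumptions, not a standard:

```python
# Illustrative verbs a well-formed objective tends to start with.
OBJECTIVE_VERBS = ("verify", "confirm", "validate", "check")

def objective_is_well_scoped(objective: str) -> bool:
    """Heuristic lint: one sentence, starting with a verification verb.

    A multi-sentence objective usually means the case covers too much
    and should be split into separate cases.
    """
    text = objective.strip()
    # Count sentence terminators; a single trailing period is allowed.
    sentences = [s for s in text.rstrip(".").split(".") if s.strip()]
    starts_with_verb = text.lower().startswith(OBJECTIVE_VERBS)
    return len(sentences) == 1 and starts_with_verb
```

Run against the two examples above, the "Good" objective passes and the "Bad" one fails, because "Test the login functionality" neither names a verification verb nor states a verifiable outcome.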

Step 3: Assign a Unique Test Case ID

Every test case needs a unique identifier. This supports traceability, test management, and defect linking. Most teams use a structured format: TC-[module]-[sequence number]. For example: TC-AUTH-001, TC-AUTH-002.

In healthcare IT projects subject to HIPAA audit trails or ONC certification, test case IDs must map to specific functional requirements in the RTM (Requirements Traceability Matrix). That mapping is not optional. Auditors will ask for it.
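The TC-[module]-[sequence] convention is easy to generate and validate programmatically, which keeps IDs consistent across a team. A small Python sketch of that convention; the three-digit padding and helper names are assumptions:

```python
import re

# Pattern for the TC-[module]-[sequence] convention described above.
TC_ID_PATTERN = re.compile(r"^TC-[A-Z]+-\d{3}$")

def next_test_case_id(module: str, existing_ids: list[str]) -> str:
    """Generate the next sequential ID for a module, e.g. TC-AUTH-003."""
    prefix = f"TC-{module.upper()}-"
    numbers = [int(i.rsplit("-", 1)[1]) for i in existing_ids if i.startswith(prefix)]
    return f"{prefix}{max(numbers, default=0) + 1:03d}"
```

A validation pattern like this can run as a pre-commit or import check in the test management process, so malformed IDs never reach the RTM.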

Step 4: Write the Test Title / Name

The title should tell a tester — at a glance, with no additional context — exactly what the case tests. Use the format: [Action] + [Object] + [Condition].

Example: “Verify successful login with valid username and password on Chrome.”

Not: “Login test 1.” That tells no one anything.

Step 5: Define Preconditions

Preconditions are the state the system and test environment must be in before the first test step runs. This is where most junior QA writers cut corners, and it causes the most execution failures.

Preconditions should include: environment (UAT, Staging, Prod-mirror), test account state (active user, specific role, data loaded), and any configuration dependencies (feature flag enabled, API mock available, test DB seeded).

In EHR testing, for instance, a precondition might be: “Patient record MRN-10482 exists in the system with at least one active encounter and a valid insurance plan linked.” If a tester picks any random patient record, the test will fail or produce a false pass — and no one will know why until the defect surfaces in production.
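Preconditions written as data can be checked before the first step runs, which turns "the tester picked the wrong record" into an explicit, visible failure instead of a false pass. A sketch, assuming preconditions are modeled as simple key/value expectations against observed environment state:

```python
def check_preconditions(env: dict, required: dict) -> list[str]:
    """Return a list of unmet preconditions (empty list = safe to execute).

    `env` is the observed environment state; `required` is the documented
    precondition set. Both shapes are illustrative assumptions.
    """
    failures = []
    for key, expected in required.items():
        actual = env.get(key)
        if actual != expected:
            failures.append(f"{key}: expected {expected!r}, found {actual!r}")
    return failures
```

For the EHR example, `required` would include the environment name, the specific MRN, and any feature-flag state the case depends on.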

Step 6: Prepare Test Data

Test data is the specific input values the test needs to run. It belongs in its own field, not buried inside step descriptions. Keeping it separate makes the case reusable and reduces the effort of re-execution with different datasets.

Identify both valid and invalid data sets. A test case for a date-of-birth field, for example, should have a valid date (within acceptable range), an invalid format (letters instead of numbers), an out-of-range date (future date), and a boundary value (exact minimum age).

In financial IT, test data often requires masking or synthetic generation to meet PCI-DSS requirements. You cannot use real cardholder data in a test environment — ever. That constraint must be acknowledged in the test case itself, not assumed.
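The four data-set categories for the date-of-birth example can be kept in one structure, separate from the steps that reference them. A Python sketch; the minimum-age rule, the fixed "today," and the toy validator are assumptions standing in for the system under test:

```python
from datetime import date

MIN_AGE_YEARS = 18            # assumed business rule, not from the requirement
TODAY = date(2024, 6, 1)      # fixed "today" so the data sets stay deterministic

# One named entry per category: valid, invalid format, out of range, boundary.
DOB_DATA_SETS = {
    "valid":            {"value": "1990-05-14", "expect_accepted": True},
    "bad_format":       {"value": "May 14th",   "expect_accepted": False},
    "future_date":      {"value": "2030-01-01", "expect_accepted": False},
    "boundary_min_age": {"value": "2006-06-01", "expect_accepted": True},
}

def dob_is_valid(value: str) -> bool:
    """Toy validator standing in for the system under test."""
    try:
        dob = date.fromisoformat(value)
    except ValueError:
        return False
    age = TODAY.year - dob.year - ((TODAY.month, TODAY.day) < (dob.month, dob.day))
    return 0 <= age and age >= MIN_AGE_YEARS
```

Because the data lives in its own named structure, re-executing the same case against a different data set is a lookup, not a rewrite.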

Step 7: Write the Test Steps

Test steps are the core of the test case. Each step is a single, atomic action. The tester should be able to execute each step without interpretation.

Rules for writing test steps:

  • Start each step with an action verb: click, enter, select, navigate, verify.
  • One action per step. Not “click Submit and wait for the confirmation screen.”
  • Be specific about UI elements: “Click the blue Submit Claim button in the lower-right corner of the Claims Entry form” — not “click submit.”
  • Reference test data by name, not inline: “Enter [Test Data: Valid Email].”
  • Number sequentially. No branches inside a test case. If branching is needed, split into two cases.

The goal is that a new team member with no project context can execute the case correctly on the first try. If that’s not possible, the steps need more detail.
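Several of these rules are mechanically checkable before the case is ever executed. A heuristic lint sketch in Python; the verb list and the "and" check are rough assumptions, not a complete grammar:

```python
# Illustrative action verbs from the rules above.
STEP_VERBS = {"click", "enter", "select", "navigate", "verify", "open", "submit"}

def lint_step(step: str) -> list[str]:
    """Flag violations of the step-writing rules (heuristics only)."""
    problems = []
    words = step.split()
    first_word = words[0].lower() if words else ""
    if first_word not in STEP_VERBS:
        problems.append("does not start with an action verb")
    if " and " in step.lower():
        problems.append("looks like two actions in one step")
    return problems
```

A lint like this catches the "click Submit and wait for the confirmation screen" anti-pattern automatically during test case review.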

Step 8: Define the Expected Result

The expected result tells the tester what the system should do after each step — or after the final step, depending on the testing approach. This is not a description of what you hope to see. It’s the verifiable, observable outcome tied directly to the requirement.

Weak: “The user should log in successfully.”
Strong: “The user is redirected to the /dashboard URL. The top navigation displays the user’s name. A session token is issued and visible in browser storage.”

Each expected result must be measurable. If you can’t determine pass or fail without judgment calls, the expected result is underspecified.
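In automation, a measurable expected result translates directly into assertions. A sketch of the "strong" login example above, assuming a hypothetical `result` dict captured by the tester or harness after the final step:

```python
def check_login_expected_result(result: dict) -> None:
    """Each line is one measurable pass/fail check from the expected result.

    `result` is a hypothetical observation record; the keys are assumptions,
    not a real framework's API.
    """
    assert result["url"].endswith("/dashboard"), "not redirected to dashboard"
    assert result["nav_user_name"], "top navigation does not show the user's name"
    assert result["session_token"], "no session token in browser storage"
```

If an expected result cannot be written as an assertion like this without judgment calls, it is underspecified in the manual case too.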

Step 9: Define Postconditions

Postconditions describe the state of the system after the test completes — including cleanup. This is critical for test independence. If one test case modifies shared data and the next case depends on that data being in its original state, you have a hidden dependency that will cause intermittent failures.

Postconditions might include: “Test patient record is deleted from the staging environment.” Or: “Submitted form data is rolled back via test DB reset script.” In automated pipelines, postconditions are typically handled by teardown methods. In manual testing, they fall to the tester, and a tester will only perform the cleanup steps that are actually documented.
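The precondition/postcondition pairing maps naturally onto a setup/teardown wrapper, so cleanup runs even when the test fails partway through. A Python sketch using a context manager, with an in-memory dict standing in for the staging database:

```python
from contextlib import contextmanager

@contextmanager
def seeded_patient_record(db: dict, mrn: str):
    """Precondition: seed the record. Postcondition: delete it, always.

    `db` is a stand-in for the staging database; the record shape is
    illustrative.
    """
    db[mrn] = {"mrn": mrn, "encounters": 1}
    try:
        yield db[mrn]
    finally:
        del db[mrn]  # postcondition: no test data persists
```

In pytest this same pattern is a yield fixture; in JUnit it is the @Before/@After pair. The point is that the postcondition executes regardless of the test outcome.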

Step 10: Set Priority and Link to Requirements

Assign a priority (High / Medium / Low) based on business impact and risk — not just how easy the case is to execute. High-priority cases cover critical paths: authentication, data submission, core workflows, regulatory compliance checkpoints.

Then link the test case to its source requirement. In Jira, this is a linked issue. In a test management tool (Zephyr, TestRail, qTest), it’s a coverage link. In a regulated environment, this link is your audit evidence. In a typical SDLC, the RTM should show 100% requirement coverage before sign-off.
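Once the RTM exists as data, the 100%-coverage check before sign-off is a one-line query. A sketch assuming the RTM is a mapping of requirement IDs to linked test case IDs:

```python
def uncovered_requirements(requirements: list[str], rtm: dict) -> list[str]:
    """Return requirement IDs with no linked test case.

    `rtm` maps requirement ID -> list of test case IDs. Sign-off requires
    this to return an empty list (100% coverage).
    """
    return [req for req in requirements if not rtm.get(req)]
```

Most test management tools expose this as a coverage report, but running the check yourself from an export catches requirements that were never entered into the tool at all.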

Test Case vs. Test Scenario: Know the Difference Before You Write

| Attribute | Test Scenario | Test Case |
|---|---|---|
| Granularity | High-level — describes a condition to test | Low-level — specific steps, data, and expected results |
| Created by | QA lead, BA, or product owner | QA engineer or manual tester |
| Input required | Requirements or user stories | Test scenario + test data + acceptance criteria |
| Reusability | Broad — one scenario covers multiple cases | Narrower — designed for repeatable, specific execution |
| Traceability | Maps to epics or feature-level requirements | Maps to individual requirement IDs or story acceptance criteria |
| Used in | Test planning, scope estimation | Test execution, defect reporting, regression cycles |

The distinction matters in practice. Confusing the two leads to test plans that look comprehensive but produce shallow coverage. A test scenario says: “Verify that a patient can be admitted through the registration portal.” A test case says exactly what data to enter, which screen to navigate, what button to click, and what the system must display — for one specific path through that scenario.

This is covered in more detail in the article on types of testing, where the relationship between test levels and documentation depth is broken down by testing phase.

A Real-World Scenario: Healthcare IT — EHR Prior Authorization

A health insurance company is implementing a new prior authorization (PA) workflow in their payer portal. The QA team receives a user story:

“As a provider, I want to submit a prior authorization request for a specialty drug so that I receive an approval or denial decision within 72 hours.”

Here’s how the sequence plays out in practice:

The tester pulls the acceptance criteria from the story: three criteria exist — the form must accept NDC and NPI codes, a decision status must appear within 72 hours, and denied requests must trigger an automated denial letter compliant with HIPAA 835 transaction standards.

Preconditions: a test provider account with NPI 1234567890 exists in the UAT environment; test drug NDC 0069-3060-30 is loaded in the formulary database; the 72-hour mock timer is configured in test mode.

Test data: valid NPI, valid NDC, valid diagnosis code (ICD-10: J45.51), and a second set with an expired NPI to test the negative path.

Steps flow from form navigation through field entry through submission through status check. Expected result on the positive path: the portal displays “Pending” status immediately and transitions to “Approved” within the simulated window. On the negative path with the expired NPI: form submission is blocked with a specific error message tied to the requirement.

Postcondition: all submitted PA requests are voided in the UAT database. No test data persists into the next sprint’s regression cycle.
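Pulled together, the positive-path case from this scenario contains all ten elements of the sequence in one record. A Python sketch; the field names, the story ID, and the record shape are illustrative, not a real test management tool's schema:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """All ten elements of the sequence in one record (fields illustrative)."""
    requirement_id: str
    objective: str
    case_id: str
    title: str
    preconditions: list
    test_data: dict
    steps: list
    expected_results: list
    postconditions: list
    priority: str

pa_positive = TestCase(
    requirement_id="US-PA-101",  # assumed story ID for illustration
    objective="Verify a provider can submit a PA request for a specialty drug "
              "and sees an Approved decision within the simulated 72-hour window.",
    case_id="TC-PA-001",
    title="Verify PA submission with valid NPI and NDC shows Approved status",
    preconditions=[
        "Provider account with NPI 1234567890 exists in UAT",
        "Drug NDC 0069-3060-30 loaded in formulary database",
        "72-hour mock timer configured in test mode",
    ],
    test_data={"npi": "1234567890", "ndc": "0069-3060-30", "icd10": "J45.51"},
    steps=[
        "Navigate to the PA request form",
        "Enter [Test Data: npi] in the NPI field",
        "Enter [Test Data: ndc] in the drug field",
        "Enter [Test Data: icd10] in the diagnosis field",
        "Click Submit",
    ],
    expected_results=[
        "Status displays 'Pending' immediately after submission",
        "Status transitions to 'Approved' within the simulated window",
    ],
    postconditions=["Submitted PA request is voided in the UAT database"],
    priority="High",
)
```

In practice these fields live in TestRail, Zephyr, or qTest records; the point is that every field is filled, in order, before execution begins.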

This level of specificity is not optional in regulated healthcare IT. CMS auditors, OIG reviewers, and internal compliance teams will ask for executed test cases when validating that a payer’s PA system meets HIPAA and ACA Section 1557 requirements.

Edge Cases the Sequence Won’t Automatically Handle

The sequence above is the correct starting point. Real projects add friction.

Incomplete requirements at sprint start. In most Agile teams, acceptance criteria aren’t finalized when sprint planning happens. Write the test case structure (ID, title, objective) against the draft requirement, flag it as “draft,” and complete preconditions and steps once the story is groomed. Don’t wait until the last day of the sprint.

Shared test data environments. When multiple QA engineers run tests against the same staging environment, preconditions and postconditions must be coordinated. If two testers both need MRN-10482 in a clean state, you have a conflict. Document data ownership or use isolated test data per case.

Legacy system black boxes. On integration projects — especially in healthcare IT connecting to legacy HL7 v2 interfaces — the expected result may not be fully deterministic. Document the expected behavior range, not a single expected value. And log it as a known limitation in the test case notes field.

Regulatory checkpoints mid-case. Some test cases in HIPAA or SOC 2 environments require audit log verification as a step, not just functional validation. Build that into the expected results explicitly: “The system audit log records the user ID, timestamp, and action performed on the PA record.”

How This Applies to the QA Analyst vs. BA Role

QA Analyst
  • Owns the full test case sequence
  • Writes steps, test data, and expected results
  • Maintains the RTM linkage
  • Flags requirement gaps before writing
  • Executes and records actual results
Business Analyst
  • Defines acceptance criteria (Step 1 input)
  • Reviews test objectives for alignment
  • Validates expected results against business rules
  • Signs off on UAT test cases
  • Participates in defect triage

The BA-QA handoff is where test case quality is won or lost. BABOK v3 identifies Business Analysis as a discipline that supports validation activities — which means acceptance criteria written by a BA should be specific enough to derive test cases from without guesswork. If a BA’s criteria require the QA analyst to interpret business intent, the criteria need revision.

In SAFe, this collaboration happens in PI Planning and Story refinement. The QA analyst is not a downstream consumer of requirements — they’re a participant in shaping them. That’s the model that produces testable cases from day one.

For teams where QA’s role in the development process is still being defined, the test case writing sequence is actually a useful forcing function. It makes the requirement quality problem visible immediately — you can’t complete Step 2 if Step 1 is vague.

The Sequence Applied to Automated Test Cases

Automation doesn’t change the sequence — it changes where each element lives. The test objective becomes the test method name. Preconditions become setup or @Before methods. Test steps become driver instructions or API calls. Expected results become assertions.
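That mapping can be made concrete with a minimal unittest sketch. The class and method names, and the fake in-memory app, are assumptions standing in for a real driver or API client:

```python
import unittest

class TestLoginValidCredentials(unittest.TestCase):
    """Test objective -> class/method name; everything below is illustrative."""

    def setUp(self):
        # Preconditions: a registered user exists and the app is reachable.
        self.app = {"users": {"dana": "s3cret"}, "session": None}

    def test_valid_login_redirects_to_dashboard(self):
        # Test steps: enter credentials and submit (simulated).
        user, password = "dana", "s3cret"
        if self.app["users"].get(user) == password:
            self.app["session"] = {"user": user, "page": "/dashboard"}
        # Expected results become assertions.
        self.assertIsNotNone(self.app["session"])
        self.assertEqual(self.app["session"]["page"], "/dashboard")

    def tearDown(self):
        # Postcondition: session state cleared so cases stay independent.
        self.app["session"] = None
```

Nothing about the sequence changed here: requirement, objective, preconditions, steps, expected results, and postconditions all have an explicit home in the code.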

The failure mode in automation is skipping Steps 1-3 and writing code directly against the UI. That produces brittle scripts with no traceability, no requirement linkage, and no clear pass/fail criteria beyond “it didn’t throw an error.” That is not a test case. It’s a script.

The sequence enforces that automation stays anchored to requirements — which is the only way automated regression provides actual coverage evidence rather than execution noise.

If your team is still evaluating where automation fits into your testing lifecycle, start by getting the manual test case structure right. Automation should be a translation of that structure, not a replacement for it.


The one thing to do differently starting today: Before writing a single test step, confirm that the requirement exists in writing and the acceptance criteria are specific enough to derive a measurable expected result. That one check – done consistently – eliminates more test case defects than any template or tool.


References:
1. IIBA BABOK v3 — Business Analysis Body of Knowledge (acceptance criteria and validation standards)
2. HL7 FHIR Overview (healthcare IT integration test case context)
