Is QA Dying? What the Data Says About the Future of Software Testing

The question surfaces every few years, and right now it’s louder than ever: is QA dying, or just changing faster than most teams can track? AI-generated test scripts, shift-left mandates, and developer-owned quality have convinced some organizations to dissolve dedicated QA functions entirely. That decision is often a mistake – and the data shows why.

This article breaks down what’s actually happening to QA roles, where manual testing is being replaced versus where it’s becoming more critical, and what a mid-level or senior QA professional needs to do to stay ahead of the curve.


Is QA Dying? The Short Answer Is No – But the Role Is Splitting in Two

The U.S. Bureau of Labor Statistics projects 10% job growth for QA Analysts and Testers between 2024 and 2034 – faster than the national average for all occupations. The national median salary for the occupation was $102,610 per May 2024 BLS data, with the 90th percentile reaching $166,960. That is not the profile of a dying field.

What is actually happening is a structural split. The QA function is dividing into two distinct tracks, and only one of them has a strong future.

Track | Primary Activities | AI Impact | Outlook
Legacy Manual QA | Scripted regression, click-through test cases, spreadsheet tracking | High – AI tools automate most of this directly | Shrinking
Quality Engineering | Test strategy, risk analysis, API/pipeline testing, compliance validation, exploratory testing | Low – judgment and domain knowledge required | Expanding

The professionals who primarily executed scripted manual test cases are being displaced – not by AI replacing human judgment, but because that particular work rarely required human judgment in the first place. Automation tools have always been better at deterministic, repeatable tasks. AI just lowered the barrier to implementing them.

What AI Is Actually Replacing in QA

Accuracy matters here. AI-driven testing frameworks can now analyze requirements, record user flows, and generate test scripts with minimal human input. According to the 2024/25 World Quality Report, 64% of organizations are either actively using AI for QA or building an implementation roadmap. Only 4% have no plans to explore AI testing at all.

The tasks most vulnerable to displacement are specific and predictable:

  • Regression suite execution on stable UI flows
  • Test data generation for non-sensitive environments
  • Basic API contract validation
  • Visual regression comparison (pixel-level diffing)
  • Load and performance script execution

OutSystems and KPMG found that AI in software testing reduced development time by up to 50% in the projects they studied. That number sounds alarming for QA headcount – until you understand where the reduction happened. It was in execution time, not in the work that requires understanding context, compliance risk, or user behavior.

What AI cannot reliably do is interpret ambiguous requirements, identify which test coverage gaps represent actual business risk, or understand why an HL7 FHIR message failing a validation rule matters differently in a payer integration versus a lab results workflow. That distinction requires domain knowledge and judgment – neither of which can be extracted from a prompt.

Is QA Dying in Regulated Industries? Exactly the Opposite

Healthcare IT makes this concrete. Consider a mid-size regional health system implementing a new EHR platform while migrating from a legacy system. The technical work involves HL7 v2.x interface adapters, FHIR R4 API endpoints, and ICD-10 code mapping across clinical documentation modules. None of that can be validated by an AI-generated test suite alone – not because the tools lack power, but because the testing decisions require regulatory interpretation.

HIPAA compliance testing, for example, requires a QA professional who understands what Protected Health Information (PHI) looks like in a test log, which edge cases trigger an OCR audit flag, and how access control role mapping intersects with the Minimum Necessary standard. A QA engineer with that domain background commands a measurable salary premium. Per KORE1’s 2026 salary guide, healthcare and fintech QA engineers who carry regulatory domain expertise earn substantially more than automation generalists with identical technical skills – and that gap is widening, not shrinking.
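What "PHI in a test log" looks like can be made concrete. A hedged sketch of a log-scrubbing check, using two hypothetical patterns (a real HIPAA scrub covers the full set of identifier classes, not two regexes):

```python
import re

# Hypothetical patterns for two PHI shapes: US SSNs and a made-up
# MRN token format. Illustrative only; real PHI detection is broader.
PHI_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN-REDACTED]"),
    (re.compile(r"\bMRN-\d{6,}\b"), "[MRN-REDACTED]"),
]

def scrub(line: str) -> tuple[str, bool]:
    """Mask PHI-shaped tokens; return (clean_line, phi_was_found)."""
    found = False
    for pattern, mask in PHI_PATTERNS:
        line, n = pattern.subn(mask, line)
        found = found or n > 0
    return line, found

clean, flagged = scrub("login ok for patient MRN-004521 ssn=123-45-6789")
assert flagged
assert clean == "login ok for patient [MRN-REDACTED] ssn=[SSN-REDACTED]"
```

The code is trivial; knowing which identifiers count as PHI, and which log surfaces an auditor will actually examine, is the part that requires the domain expertise described above.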

The same pattern holds in financial services. A Selenium suite can validate that a form submits. It cannot validate that a trade confirmation workflow complies with Regulation NMS reporting windows under stress conditions. A human with domain knowledge sets that test strategy. The tools execute it.
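The "humans set strategy, tools execute" split is visible in the code itself: once a compliance analyst has decided what window and what clock matter, the executable check is nearly trivial. A sketch with an illustrative 10-second window – the actual Regulation NMS timing requirements are a compliance determination, not a constant in a test file:

```python
from datetime import datetime, timedelta

# Illustrative window only. The real regulatory reporting window is
# a compliance-analysis question, decided by a human, not by this file.
REPORTING_WINDOW = timedelta(seconds=10)

def confirmation_on_time(executed_at: datetime, confirmed_at: datetime) -> bool:
    """True if the confirmation landed within the required window."""
    return timedelta(0) <= (confirmed_at - executed_at) <= REPORTING_WINDOW

t0 = datetime(2025, 3, 1, 14, 30, 0)
assert confirmation_on_time(t0, t0 + timedelta(seconds=9))
assert not confirmation_on_time(t0, t0 + timedelta(seconds=11))
```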

In an analysis of 400 QA job listings across North America and Europe in 2025, structured test management tools like TestRail appeared in 45% of all postings – specifically flagged as critical in regulated industries including finance and healthcare. The job market is not signaling that documentation and traceability are optional. It is signaling the opposite.

Shift-Left Did Not Kill QA – It Relocated It

One of the more durable misreadings of Agile and DevOps is that “everyone owns quality” means QA specialists are redundant. What shift-left actually means is that quality validation moves earlier in the Software Development Life Cycle – not that it disappears.

In a Scrum environment running two-week sprints, a QA engineer who waits until the end of the sprint to begin testing creates a bottleneck that breaks release cadence. But a QA engineer embedded in story refinement, reviewing acceptance criteria, identifying testability gaps before a single line of code is written – that professional is adding value that no automation framework can replicate.

BABOK v3 is explicit that Business Analysis and quality validation are not separate from each other. Eliciting verifiable acceptance criteria, identifying missing edge cases in use case specifications, and flagging ambiguous requirements before development begins are activities that sit at the intersection of BA and QA work. Senior professionals who can operate across that boundary are more valuable in an Agile context, not less.

The Software Testing Life Cycle has not shortened in modern delivery environments. It has been compressed and distributed differently across the team. That requires QA professionals with stronger communication skills, broader system awareness, and the ability to make fast, defensible decisions about test coverage – not junior testers running scripted cases.

The Skills Gap That’s Actually Driving Layoffs

When organizations do cut QA headcount, the pattern is consistent. They are almost never eliminating quality validation as a function. They are eliminating QA professionals whose skill profiles no longer match the work the team actually needs.

An analysis of QA job postings in 2025 found that Selenium appeared in 85% of automation roles, Playwright and Appium at 55%, and CI/CD pipeline integration (Jenkins and equivalents) in 60%. ISTQB certification moved from 50% to 55% of listings year-over-year. The market is not moving away from structured testing. It is moving away from manual-only profiles.

Automation Skills
  • Selenium / Playwright / Cypress
  • API testing (REST, FHIR endpoints)
  • CI/CD pipeline integration
  • TestNG / JUnit / Allure reporting
Strategy Skills
  • Risk-based test planning
  • Acceptance criteria authoring
  • Defect triage and root cause analysis
  • Compliance testing documentation
Domain Skills
  • HIPAA / HL7 FHIR knowledge
  • Regulatory traceability
  • EHR workflow validation
  • SAFe / Agile delivery fluency

The professionals at risk are those whose value proposition is purely execution – running predefined test cases against known requirements in predictable environments. That work is automatable. The professionals building test strategy, owning quality gates in CI/CD pipelines, and validating complex integrations against regulatory standards are not at risk. They are being hired at higher rates and paid more.

Where Manual Testing Is Still Non-Negotiable

It is worth being specific here, because the “manual testing is dead” narrative overgeneralizes from a real trend.

Exploratory testing – structured, time-boxed, hypothesis-driven investigation of system behavior – cannot be automated. Not because the tools lack sophistication, but because exploratory testing is fundamentally about using human judgment to identify what questions to ask, not just executing predetermined queries. Karl Wiegers in Software Requirements (3rd ed.) makes a related point about requirements: unverified assumptions and unstated edge cases are the source of the most expensive defects. Exploratory testing is the primary mechanism for surfacing those assumptions after development.

Usability testing, accessibility validation, and the assessment of whether a clinical workflow actually makes sense to a nurse using it under real conditions – none of that is automatable in any meaningful sense. AI tools can tell you a button exists and is clickable. They cannot tell you whether a patient portal’s medication reconciliation flow is coherent to someone managing five chronic conditions on a mobile device.

The types of testing that require human cognition are not a shrinking subset of QA work. In complex domains, they represent the most critical testing activities in the entire release cycle.

The Quality Engineering Transition: What It Actually Requires

The industry consensus term for the evolved QA role is “Quality Engineering” – and the transition is not merely cosmetic. A QA analyst who validates stories against acceptance criteria is performing a different cognitive job than a quality engineer who designs a test architecture for a microservices-based claims processing system.

The transition requires three concrete shifts:

From execution to strategy. Instead of running test suites, quality engineers design them – determining coverage scope, identifying risk-weighted test priorities, and defining what “done” means for a release. This is closer to the work described in BABOK v3’s Solution Evaluation knowledge area than to traditional test case management.
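Risk-weighted prioritization, one of the strategy activities named above, is mechanical once a human has assigned the weights. A sketch with a hypothetical likelihood-times-impact scoring model and made-up test names:

```python
# Hypothetical risk model: likelihood-of-failure x business impact,
# both assigned by human judgment. The machinery just sorts.
tests = [
    {"name": "login_smoke",         "likelihood": 2, "impact": 5},
    {"name": "claims_adjudication", "likelihood": 4, "impact": 5},
    {"name": "ui_theme_toggle",     "likelihood": 3, "impact": 1},
]

def prioritize(tests: list[dict]) -> list[str]:
    """Order tests by risk score, highest first."""
    return [
        t["name"]
        for t in sorted(
            tests, key=lambda t: t["likelihood"] * t["impact"], reverse=True
        )
    ]

assert prioritize(tests) == ["claims_adjudication", "login_smoke", "ui_theme_toggle"]
```

The valuable judgment lives in the likelihood and impact numbers, not in the sort – which is precisely why this work resists displacement.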

From output to pipeline. Quality engineers own testing within CI/CD workflows. They write or configure the automated checks that run on every commit. They define the quality gates that block promotion between environments. In a SAFe context, this aligns with the DevSecOps practices in the Continuous Delivery Pipeline – not something a strictly manual tester can contribute to.
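A quality gate that blocks promotion can be as simple as a threshold check wired into the pipeline. A sketch with made-up thresholds and metrics – real gates typically read tool-generated reports rather than hard-coded dicts:

```python
# Hypothetical gate: block promotion if coverage drops below a floor
# or any critical-severity test failed. Thresholds are illustrative.
GATE = {"min_coverage": 0.80, "max_critical_failures": 0}

def gate_passes(metrics: dict) -> bool:
    """Decide whether a build may promote to the next environment."""
    return (
        metrics["coverage"] >= GATE["min_coverage"]
        and metrics["critical_failures"] <= GATE["max_critical_failures"]
    )

assert gate_passes({"coverage": 0.86, "critical_failures": 0})
assert not gate_passes({"coverage": 0.86, "critical_failures": 1})
assert not gate_passes({"coverage": 0.74, "critical_failures": 0})
```

Choosing the thresholds, and deciding which metrics belong in the gate at all, is the quality-engineering work; the check itself is a few lines.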

From generalist to domain specialist. The professionals who are most insulated from displacement are those who combine technical testing skills with deep domain knowledge. A QA engineer who understands HIPAA’s Security Rule well enough to design a test matrix for access control validation in an EHR system is not interchangeable with a generalist automation engineer. That specialization took years to develop and cannot be replicated quickly by AI tooling or a junior hire.

For professionals currently in a QA role, the question is not whether to make this transition. It is how fast.

The Organizational Mistake That’s Fueling the “QA Is Dead” Narrative

Some organizations have genuinely dismantled their QA teams, distributing testing responsibility to developers and relying on AI tooling to cover the gap. This is not a forward-thinking quality strategy. It is a cost-cutting decision rationalized with industry jargon.

The consequences are predictable. Developer-written tests are optimized to pass, not to find failure modes. Developers write tests that confirm the code does what they intended it to do – not tests that challenge whether it does what the user needs or what the compliance framework requires. These are structurally different questions, and answering them requires a different perspective.

The “everyone owns quality” principle from Agile is not a license to eliminate dedicated quality expertise. It is a directive to embed quality thinking earlier and more broadly across the team. Those two things are not the same. Organizations that confuse them discover the difference after a production incident, a failed audit, or a data breach that a structured test suite would have caught.

In healthcare IT specifically, the cost of that mistake is not abstract. A misconfigured role-based access control rule that exposes PHI across a multi-tenant EHR platform is an OCR investigation, a HIPAA breach notification obligation, and potential civil monetary penalties. The QA function that would have caught it during integration testing was not overhead. It was risk management.

Is QA Dying for Entry-Level Professionals?

This is where the answer is more complicated, and honesty matters more than reassurance.

The entry point into QA has changed significantly. Junior roles that consisted primarily of executing manually scripted test cases are contracting. AI tools can generate those test cases and execute them faster and more consistently than a junior tester following a checklist. Organizations know this, and hiring reflects it.

But the entry point has not closed. It has shifted toward technical fluency earlier in a career. Someone entering QA in 2025 needs to be comfortable with at least one test automation framework, understand API testing fundamentals, and be able to contribute to CI/CD pipeline configuration. The days of entering QA with zero technical background and gradually picking up skills are largely over at established organizations.

For career switchers and early-career professionals, this is a higher bar – but not an insurmountable one. Domain expertise from adjacent fields (clinical experience for healthcare IT, finance background for fintech) combined with foundational automation skills creates a genuinely competitive profile. The combination of domain knowledge and technical capability is more valuable than pure technical depth without context.

Mid-level and senior QA professionals who have built domain expertise, can own test strategy, and can operate within Agile delivery frameworks are not facing displacement. They are facing an upgrade in what is expected of them.


Stop Asking Whether QA Is Dying. Start Asking Whether Your Skills Are

The function of quality assurance – validating that software behaves correctly, complies with applicable standards, and meets user needs – is not going away. The market data, the regulatory environment, and the structural limitations of current AI tooling all confirm this. What is going away is the version of QA that consisted primarily of executing predetermined scripts against known requirements.

The most actionable step a QA professional can take right now is a skills audit against current job postings in their target domain. If the gap is primarily in automation tooling, that is addressable in months. If the gap is in domain knowledge – understanding the regulatory environment, the integration patterns, the failure modes that matter in a specific industry – that takes longer to develop but creates far more durable protection against displacement.

QA is not dying. The professionals who treat it as a static role defined by what they did five years ago are the ones with the actual problem.
