Epic EHR UAT and Dress Rehearsal: How End Users Test and What Analysts Own Before Go-Live
Epic UAT most often fails not because end users fail to show up, but because analysts did not prepare them for what they were supposed to test, how to log what they found, or what “pass” and “fail” actually mean in a clinical workflow context. Dress rehearsal fails when teams treat it as a repeat of integrated testing rather than a simulation of go-live day operations. This article covers what UAT actually requires from end users, what analysts own in supporting it, and what a dress rehearsal must accomplish before a go-live cutover decision can be made with confidence.
- What Epic UAT Is – and What It Is Not
- UAT vs IVT: Different Gates, Different Owners
- How End Users Test During Epic UAT
- What Analysts Own During UAT
- UAT Script Design: What Makes a Good End-User Test
- UAT Defect Process: From Finding to Resolution
- UAT Sign-Off: The Acceptance Gate
- Dress Rehearsal: What It Covers and Why It Is Different
- The Cutover Decision: What Has to Be True Before Go-Live
What Epic UAT Is – and What It Is Not
User Acceptance Testing (UAT) in Epic is the phase where end users – nurses, physicians, pharmacists, schedulers, billing staff – work through defined clinical and operational scenarios in the system and confirm that the build meets their workflow requirements. It is the acceptance gate that sits between integrated validation testing (IVT) and go-live. UAT answers the question IVT cannot: does the system work the way the people who use it every day need it to work?
The BABOK v3 (Business Analysis Body of Knowledge) defines acceptance and evaluation testing as the validation that a solution meets stakeholder requirements. UAT is exactly that. It is not a technical test. It is a requirements verification exercise conducted by the people who defined the requirements. Build analysts are not the acceptance authority in UAT. End users are.
What UAT is not: it is not a re-run of IVT by different people. IVT validates that the integrated modules work together technically. UAT validates that the integrated system works operationally for the clinical and administrative staff who will use it. A workflow that passed IVT can still fail UAT – not because the system is broken but because the build does not reflect how the department actually works. A flowsheet that is technically correct may still be wrong for the nursing unit that uses it if the build analyst made assumptions about workflow sequence that nurses do not follow. Understanding the broader testing landscape – including the distinction between BAT and UAT – is covered in the BAT vs UAT guide in detail.
UAT vs IVT: Different Gates, Different Owners
| Dimension | IVT (Integrated Validation Testing) | UAT (User Acceptance Testing) |
|---|---|---|
| Primary question | Do the modules work together technically? | Does the system work for the people who use it? |
| Who leads testing | Build analysts + integration analyst | End users (super-users) + clinical SMEs |
| Who is the acceptance authority | Project manager / build team | Clinical and operational leadership |
| Test focus | Interface connections, cross-module data flow | Workflow usability, clinical appropriateness, role fit |
| Defect type found | Interface failures, misconfigured routing, missing triggers | Workflow sequence errors, missing required fields, wrong defaults |
| Environment | Integrated test environment | Integrated test environment (same, later in timeline) |
| ISTQB / BABOK equivalent | System integration testing (ISTQB) | Acceptance testing (BABOK v3, ISTQB) |
| Exit produces | Cycle sign-off, defect resolution confirmation | Clinical / operational leader acceptance signatures |
IVT and UAT run sequentially in most Epic implementations – IVT completes first (typically three cycles), then UAT runs on the stabilized build. Some organizations run them partially overlapping, with UAT beginning while late IVT cycle defects are still being resolved. This is a risk management decision. If IVT has not achieved a stable state, end users will encounter technical failures during UAT that are not related to workflow design – which undermines their confidence and wastes their participation time.
How End Users Test During Epic UAT
End users in Epic UAT are typically super-users – clinical staff from each department who received earlier Epic training and who represent their peers in the testing process. They are not professional testers. They are nurses, physicians, pharmacists, schedulers, and billing staff who are being asked to work through defined scenarios in a test system and report what does and does not match their workflow expectations.
What Super-Users Do During UAT
Super-users execute UAT scripts – step-by-step scenario guides that walk them through a clinical or operational workflow. A nursing super-user follows a script that starts with a patient assignment, moves through medication administration, vital signs documentation, care plan completion, and shift handoff. At each step, the script has a description of what they should do and what they should observe as a result. If the system does not do what the script says it should, they log a defect.
The most important thing super-users do during UAT is not executing the script mechanically. It is applying their clinical knowledge to what the system shows them. A super-user who is an experienced nurse will notice that the BCMA scan requires an extra confirmation step that was not there in their prior system – and will flag it even if the script did not explicitly test for it. That observation is exactly what UAT is for. Super-users are the subject matter experts. Their job is to notice when the system does not match reality, not just when it fails to match the script.
Super-User Participation in Practice: A Field Example
During UAT at a 250-bed regional medical center, the nursing super-users arrived for their scheduled UAT sessions with no prior briefing on what UAT meant or what they were expected to do. The project team had assumed that super-user training (which covered how to use Epic) also prepared them for UAT (which requires evaluating whether Epic matches their workflow). It does not. The nurses spent the first day asking the module analysts to show them how to use the system instead of testing whether the system matched their workflow. Only on day two – after an emergency briefing on the difference between training and testing – did productive UAT begin. Three days of UAT time were effectively lost. The defects found during the remaining two days included a nursing flowsheet that loaded the wrong vital signs sequence for the ICU, a medication administration barcode requirement that conflicted with the unit’s existing medication preparation process, and a shift handoff template that was missing three required fields the charge nurse needed. None of these would have been found by the build team in IVT.
What Analysts Own During Epic UAT
Analysts are not passive observers during UAT. They have specific, active responsibilities that determine whether UAT produces useful results or becomes an expensive exercise in confusion. The analyst’s job during UAT is to enable end users to test effectively – not to test for them.
Pre-UAT: Analyst Preparation Responsibilities
Before UAT begins, each module analyst must confirm that the integrated test environment is stable. All IVT critical defects must be resolved. The test environment must have valid test patient data – registration records, coverage information, active orders, and results – that allow super-users to start their workflows without needing to build the test setup themselves. A super-user who has to spend 30 minutes setting up a test patient before they can test their actual workflow is wasting UAT time.
Each analyst must also brief their super-users on the specific workflows they are expected to test, what a “pass” looks like for each step, and how to log defects when they find something wrong. This briefing is not training – it is orientation for testing. There is a fundamental difference between showing someone how to use a feature and asking them to evaluate whether that feature matches their workflow. Analysts must make that distinction explicit.
During UAT: Analyst Support Responsibilities
During UAT sessions, each module analyst is present to answer questions about build intent – not to fix defects in real time or to guide the super-user through how to use the system. When a super-user gets stuck on a step and asks for help, the analyst’s response should be: “What did you expect the system to do here?” If the expectation matches the build intent but the system is not responding correctly, it is a defect. If the expectation does not match the build intent, the analyst must determine whether the build was correct or whether it needs to change.
Analysts must resist the impulse to resolve UAT observations immediately by changing the build while super-users are in the session. Ad hoc build changes during UAT introduce instability and may fix one issue while breaking another. All UAT observations should be logged, triaged, and resolved through the formal defect process before the next UAT session runs.
The nursing documentation workflows that super-users evaluate during UAT are closely tied to how clinical documentation build was designed – the EpicCare Inpatient ClinDoc guide covers the build decisions that most commonly surface as UAT findings. Similarly, CPOE workflow expectations that providers test in UAT relate directly to order set and routing configurations described in the Epic EHR Orders and CPOE guide.
UAT Script Design: What Makes a Good End-User Test
UAT scripts for Epic must be written differently from IVT test scripts. IVT scripts are technical – they specify which module to open, which interface to monitor, which Clarity table to check. UAT scripts are operational – they describe a clinical scenario in terms that a nurse, physician, or pharmacist recognizes from their daily work.
UAT Script Structure for Clinical Scenarios
A well-designed UAT script starts with a scenario description in clinical language – “You are a night shift nurse assigned to patient room 412. Your patient has an order for IV antibiotics due at 2200. Complete the medication administration.” The script then walks through each workflow step in the sequence the nurse would naturally follow: check the eMAR, gather the medication from the ADC, scan the patient wristband, scan the medication barcode, document the administration. Each step has an expected outcome in clinical terms, not technical terms.
The expected outcome for a BCMA scan step should not say “system displays the BCMA confirmation dialog.” It should say “system confirms the right patient, right drug, right dose, right route, and right time, and allows you to proceed with administration.” That is what the nurse recognizes as the correct outcome. The technical description of the confirmation dialog is meaningless to the super-user, but the clinical check is immediately meaningful.
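One way to picture this script structure is as plain data – a scenario in clinical language, then steps whose expected outcomes are also stated in clinical terms. This is an illustrative sketch only; the class and field names are assumptions, not an Epic artifact or any test-management tool's schema:

```python
from dataclasses import dataclass

@dataclass
class ScriptStep:
    action: str              # what the super-user does, in clinical language
    expected_outcome: str    # what they should observe, in clinical terms
    result: str = "not run"  # later set to "pass" or "fail"

@dataclass
class UatScript:
    scenario: str  # scene-setting in the user's own vocabulary
    role: str
    steps: list    # list[ScriptStep]

# The night-shift medication administration scenario from the text
bcma_script = UatScript(
    scenario=("You are a night shift nurse assigned to patient room 412. "
              "Your patient has an order for IV antibiotics due at 2200."),
    role="Inpatient Nurse",
    steps=[
        ScriptStep("Check the eMAR for the 2200 antibiotic dose",
                   "Dose appears as due, with drug, dose, and route visible"),
        ScriptStep("Scan the patient wristband, then the medication barcode",
                   "System confirms right patient, drug, dose, route, and "
                   "time, and allows you to proceed"),
        ScriptStep("Document the administration",
                   "eMAR shows the dose as given, with your credentials "
                   "and the administration time"),
    ],
)
```

Note what is absent: no click paths, no menu names. Each step names a workflow action and a clinically recognizable outcome, which is the distinction the section draws.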
What UAT Scripts Should Not Do
UAT scripts should not walk the super-user through every click and menu navigation. If the script says “click the green button in the top left of the eMAR screen” it has become a training guide, not a test. Super-users should know how to navigate Epic from their training. UAT scripts test whether the system they know how to navigate produces the right clinical outcomes – not whether they can follow step-by-step navigation instructions.
UAT scripts should also include negative scenarios – workflows where the system should block or warn the user. A UAT script for medication administration should include a scenario where the wrong patient wristband is scanned. The expected outcome is that the system stops the administration and displays a patient mismatch error. If the system allows the administration to proceed in this scenario, that is a critical patient safety defect – one that a super-user with clinical experience will immediately recognize as dangerous.
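The blocking behavior that the negative scenario tests for can be written out as decision logic. This is an illustrative sketch of the expected outcome, not Epic's actual BCMA implementation; the order fields are hypothetical:

```python
def bcma_check(order, scanned_patient_id, scanned_med_code):
    """Expected behavior under test: a wrong-patient or wrong-drug scan
    must block administration, never silently proceed."""
    if scanned_patient_id != order["patient_id"]:
        return ("BLOCK", "Patient mismatch: administration stopped")
    if scanned_med_code != order["med_code"]:
        return ("BLOCK", "Medication mismatch: administration stopped")
    return ("PROCEED", "Patient and medication confirmed")
```

If the system's real behavior in the wrong-wristband scenario is the equivalent of `PROCEED`, that is the critical patient safety defect the section describes.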
UAT Defect Process: From Finding to Resolution
Every UAT observation – whether it is a system failure, a workflow mismatch, or a clinical concern raised by a super-user – must be captured, classified, and resolved through a managed process. Verbal observations that are not logged do not get fixed. UAT observations that are logged but not prioritized create a long list that does not get resolved before go-live.
Classifying UAT Observations
Not every UAT observation is a defect. Some observations are training gaps – the super-user does not know how to use a feature that is correctly built. Some are preference differences – the super-user wants the system to work differently from how it was designed, but neither way is wrong. Some are genuine defects – the system does not perform as designed or does not match clinical workflow requirements. Analysts must triage each observation into one of these categories before deciding on the response.
Training gaps are addressed through additional super-user training sessions – not through build changes. Preference differences are escalated to clinical and operational leadership for a decision – the super-user’s preference may be clinically justified and warrant a build change, or it may be a personal preference that does not need to change the system. Genuine defects are logged in the defect tracking system, assigned to the owning module analyst, and tracked to resolution.
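The triage logic in the two paragraphs above can be sketched as a small decision function. The observation keys and routing targets are illustrative assumptions about how findings get recorded, not a real tracker schema:

```python
def triage(observation):
    """Sort a UAT observation into one of the three categories above."""
    if not observation["system_behaves_as_designed"]:
        return "defect"        # system does not perform as designed
    if not observation["matches_workflow_requirement"]:
        return "defect"        # built as designed, but the design is wrong
    if not observation["user_knew_how_to_use_feature"]:
        return "training_gap"  # correct build; knowledge gap
    return "preference"        # neither way is wrong

# Each category routes to a different owner, never straight to a build change
ROUTING = {
    "training_gap": "additional super-user training session",
    "preference":   "clinical/operational leadership decision",
    "defect":       "defect tracker, assigned to owning module analyst",
}
```

The point the sketch makes is that only one of the three paths ends at the build team, and even that path runs through the formal defect process.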
UAT Defect Severity vs IVT Defect Severity
UAT defect severity classification requires clinical input in a way that IVT defect classification does not. A UAT defect where the BCMA system allows a wrong-patient scan to proceed is a Critical patient safety defect – but it takes a nurse to recognize why. A build analyst looking at the same scenario might classify it as High (workflow issue) rather than Critical (patient safety) if they do not understand the clinical consequence. Super-user input on severity classification is essential for UAT defects.
During UAT at a community hospital, nursing super-users identified that the shift handoff documentation template required a free-text field for “patient education provided” rather than a structured checkbox field. The nursing super-users classified this as a Critical defect – documentation completeness requirements for Joint Commission accreditation required structured data capture, not free text, to demonstrate compliance. The build analyst who received the defect classified it as Medium (cosmetic field type change) because from a build perspective, both free text and a checkbox captured the data. The disagreement escalated to the CNO within two days. The CNO confirmed the nursing super-users were correct – Joint Commission survey preparation required structured data. The field was rebuilt as a structured checkbox. The lesson: UAT defect severity must be assessed with clinical leadership input, not build analyst judgment alone.
UAT Sign-Off: The Acceptance Gate Before Dress Rehearsal
UAT sign-off is the formal confirmation by clinical and operational leadership that the system meets the requirements for go-live. It is not a declaration that every super-user is happy with every workflow. It is a declaration that the system is clinically safe, operationally functional, and meets the minimum requirements for patient care delivery.
UAT sign-off requires named signatures from clinical leadership – the CNO for nursing workflows, the CMO or department chiefs for physician workflows, the pharmacy director for medication management workflows. These are not rubber-stamp signatures. They represent the organization’s formal acceptance that the system is safe to use for patient care. If a clinical leader is not willing to sign off on UAT, the concern they have must be addressed before go-live proceeds.
UAT acceptance criteria must be defined before UAT begins – not negotiated after the fact based on how many defects are outstanding. Typical UAT acceptance criteria include: all Critical defects resolved, scenario pass rate above a defined threshold (commonly 90%), all service lines represented in testing (not just the most engaged departments), and sign-off obtained from leadership for each major workflow category.
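Those acceptance criteria are mechanical enough to check programmatically. A hedged sketch, assuming simple dictionaries for scenario results and defects – the data shapes and the 90% threshold are illustrative, and as the text says, the real criteria must be fixed before UAT begins:

```python
def uat_exit_ready(scenarios, open_defects, signoffs,
                   required_service_lines, pass_threshold=0.90):
    """Evaluate the four acceptance criteria named above.
    Returns (ready, per-criterion detail)."""
    executed = [s for s in scenarios if s["result"] in ("pass", "fail")]
    checks = {
        "no_open_criticals": not any(d["severity"] == "Critical"
                                     for d in open_defects),
        "pass_rate_met": bool(executed) and (
            sum(s["result"] == "pass" for s in executed) / len(executed)
            >= pass_threshold),
        "all_service_lines_tested":
            required_service_lines <= {s["service_line"] for s in executed},
        "leadership_signed_off":
            required_service_lines <= set(signoffs),
    }
    return all(checks.values()), checks
```

A gate function like this only works if its inputs were agreed on up front; computing it after the fact with negotiated thresholds defeats the purpose.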
Dress Rehearsal: What It Covers and Why It Is Different
Dress rehearsal is not a repeat of UAT in the production environment. It is a simulation of go-live day operations. The distinction matters. UAT validates that workflows work. Dress rehearsal validates that the organization can execute go-live – including data conversion, interface activation, user access provisioning, command center operations, and downtime procedures – as a coordinated event.
What Dress Rehearsal Covers That UAT Does Not
| Activity | Covered in UAT? | Covered in Dress Rehearsal? | Why It Matters |
|---|---|---|---|
| Clinical workflow scenarios | Yes – primary focus | Regression only | Dress rehearsal assumes UAT passed – no new scenarios |
| Data conversion validation | No | Yes – primary focus | Migrated patient records must be verified in production before go-live |
| Interface activation sequence | No | Yes – timed rehearsal | Interfaces must activate in the right order with no gaps |
| User access at scale | No | Yes – all roles tested | User accounts must work for every role type across every department |
| Downtime procedure practice | No | Yes – clinical staff drill | Staff must know what to do if Epic is unavailable post-go-live |
| Command center readiness | No | Yes – simulated activation | Command center communication, escalation paths, and tools must be tested |
| Cutover timing validation | No | Yes – timed execution | Each cutover step has a time window – dress rehearsal proves the window is achievable |
Data Conversion Validation During Dress Rehearsal
Data conversion validation is one of the most time-intensive dress rehearsal activities. Patient demographic records, coverage information, appointment histories, and, in some implementations, clinical history must be migrated from the legacy system to Epic. Dress rehearsal runs a full data conversion cycle – extract from legacy, transform, load into Epic’s production environment – and validates the results.
Validation checks include record count reconciliation (same number of patients in legacy and Epic), field-level accuracy for a statistically significant sample, and super-user spot checks of specific high-visibility patient records. A patient record that has been seen at the health system for 20 years and has 15 encounter types should be spot-checked by a registration super-user who knows what that record should contain. The sample must be representative – not just the cleanest records from the legacy system.
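The three validation checks can be sketched as small helpers. These are illustrative, assuming flat record dictionaries; real conversion validation works from the conversion team's reconciliation reports:

```python
import random

def counts_reconcile(legacy_count, epic_count):
    """Same number of patient records on both sides, or the load lost data."""
    return legacy_count == epic_count

def spot_check_sample(patient_ids, size, seed=None):
    """Draw a random, representative sample for super-user spot checks --
    deliberately not a hand-picked set of the cleanest legacy records."""
    rng = random.Random(seed)
    return rng.sample(list(patient_ids), min(size, len(patient_ids)))

def field_accuracy(record_pairs, fields):
    """Fraction of sampled (legacy, epic) record pairs agreeing on every
    checked field."""
    if not record_pairs:
        return 0.0
    exact = sum(all(legacy.get(f) == epic.get(f) for f in fields)
                for legacy, epic in record_pairs)
    return exact / len(record_pairs)
```

The sampling helper is the important one: seeding a random draw over the full ID population is what keeps the spot-check sample representative rather than convenient.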
Interface Activation Sequence
During dress rehearsal, the integration analyst executes the interface activation sequence as it will happen on go-live day – in the correct order, with each interface tested after activation to confirm it is transmitting correctly before the next one is activated. This is a timed exercise. If activating all interfaces takes 4 hours and go-live is scheduled at midnight, the activation must begin no later than 8:00 PM. Dress rehearsal proves that the activation sequence fits within the cutover window.
Interface activation order matters. ADT interfaces must be active before clinical workflow begins – without ADT, no downstream system knows patients exist. Lab and radiology interfaces must be active before orders are placed. Pharmacy ADC interfaces must be active before medication administration begins. Getting the activation sequence wrong on go-live day creates a cascade of downstream failures that is very difficult to diagnose and resolve under real-time pressure.
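The timing arithmetic behind the cutover window is worth making explicit. A sketch assuming strictly sequential activation with verification between steps – the interface list and durations here are invented for illustration and in practice come from your own cutover plan:

```python
from datetime import datetime, timedelta

# Hypothetical sequence: each entry is (interface, activation + verify time)
ACTIVATION_SEQUENCE = [
    ("ADT",          timedelta(minutes=60)),  # must be first: downstream
                                              # systems need patients to exist
    ("Lab",          timedelta(minutes=45)),  # before orders are placed
    ("Radiology",    timedelta(minutes=45)),
    ("Pharmacy ADC", timedelta(minutes=90)),  # before med administration
]

def latest_start(go_live, sequence=ACTIVATION_SEQUENCE):
    """Sequential activation means durations sum; work backward from
    the go-live timestamp to the latest safe start."""
    total = sum((d for _, d in sequence), timedelta())
    return go_live - total

# With these durations (4 hours total), a midnight go-live means the
# ADT activation must begin by 8:00 PM -- the example from the text.
```

Dress rehearsal replaces these assumed durations with timed, measured ones, which is exactly what makes the cutover window defensible.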
User Access Validation at Scale
Dress rehearsal tests that every user account type can log in to the production environment and access the correct modules with the correct permissions. This is not tested in UAT – UAT uses a small set of super-user accounts in the test environment. On go-live day, 500 or 2,000 users may attempt to log in simultaneously. User provisioning problems that are invisible with 10 test accounts become visible at scale.
A sample of users from every role type and every department must be tested during dress rehearsal. The test confirms: they can log in, they see the correct Epic home page for their role, they can access the patient records and departments appropriate to their role, and they cannot access areas outside their permission scope. Role-based access control failures found during dress rehearsal are much easier to fix than access issues reported by 200 nurses at 12:05 AM on go-live day.
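The per-user check reduces to comparing granted access against the role's expected access, in both directions. A minimal sketch with an assumed role matrix – all names are illustrative:

```python
def check_access(user, role_matrix):
    """Confirm a sampled user sees exactly what their role allows:
    nothing missing, nothing beyond their permission scope."""
    allowed = set(role_matrix[user["role"]])
    granted = set(user["granted_areas"])
    return {
        "missing": allowed - granted,  # cannot reach what the role needs
        "excess":  granted - allowed,  # reaches outside permission scope
        "ok":      granted == allowed,
    }
```

Run over a sample spanning every role type and department, the `excess` set is the one that surfaces the security findings – missing access gets reported by users, excess access does not.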
The Cutover Decision: What Has to Be True Before Go-Live
The go-live cutover decision is a formal organizational decision – not a project management declaration. It requires input from clinical leadership, IT leadership, and operations leadership. Each group must confirm their specific readiness criteria are met. No single group can declare go-live readiness unilaterally.
Clinical readiness means: UAT sign-off obtained from all service lines, zero unresolved Critical patient safety defects, super-users trained and confirmed, downtime procedures practiced. Technical readiness means: all IVT cycles complete, all interfaces activated and stable, data conversion validated, production environment performance tested. Operational readiness means: command center staffed, escalation paths documented, legacy system read-access maintained, patient communication completed.
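The three readiness categories make a natural go/no-go checklist. A sketch – the criterion names paraphrase the lists above and are not a formal standard:

```python
READINESS = {
    "clinical": ["uat_signoff_all_service_lines",
                 "zero_critical_safety_defects",
                 "super_users_trained_and_confirmed",
                 "downtime_procedures_practiced"],
    "technical": ["ivt_cycles_complete",
                  "interfaces_activated_and_stable",
                  "data_conversion_validated",
                  "production_performance_tested"],
    "operational": ["command_center_staffed",
                    "escalation_paths_documented",
                    "legacy_read_access_maintained",
                    "patient_communication_completed"],
}

def go_no_go(confirmed):
    """No group declares readiness unilaterally: every criterion in every
    group must be confirmed. Returns (go, outstanding gaps per group)."""
    gaps = {group: [c for c in criteria if c not in confirmed]
            for group, criteria in READINESS.items()}
    return all(not outstanding for outstanding in gaps.values()), gaps
```

Returning the gaps alongside the boolean matters: a no-go decision is only actionable when each leadership group can see exactly which of its own criteria are unmet.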
The go-live support model that activates at cutover depends on the work done during IVT and UAT. Analysts who participated in testing know the system’s failure modes better than anyone. The go-live command center structure, analyst shift assignments, and escalation procedures are described in the Epic EHR Go-Live Support framework. The full Epic implementation lifecycle that UAT and dress rehearsal sit within is covered in the Epic EHR Learning Hub.
Brief every super-user cohort separately before their UAT sessions with a 30-minute orientation that distinguishes testing from training, explains what a pass and fail look like, and demonstrates how to log a defect. Thirty minutes per cohort – nursing, pharmacy, physicians, registration, billing. That investment prevents the most common UAT failure mode: super-users spending test time learning the system instead of evaluating whether the system matches their workflow. The observations they produce in properly oriented UAT sessions are worth more than any amount of additional analyst build review.
Authoritative References
- IIBA BABOK v3 – Business Analysis Body of Knowledge: Acceptance and Evaluation Testing, Stakeholder Engagement
- ISTQB – Certified Tester Foundation Level: User Acceptance Testing, Acceptance Criteria, and Test Management
