Epic Reporting Workbench: Building Report Templates, Filters, and Scheduled Reports for Operational Use
Epic Reporting Workbench is the reporting tool that most Epic analysts use every day – and the one most teams configure badly. Poorly designed Workbench reports produce incorrect metrics, proliferate into hundreds of unmanaged duplicates, and train department managers to distrust Epic data. This article covers how to build Reporting Workbench report templates that produce accurate operational metrics, how to configure filters that make reports useful without making them dangerous, and how to set up scheduled delivery that replaces manual reporting cycles without creating compliance gaps.
- How Epic Reporting Workbench Works: Architecture and Data Flow
- Data Models: The Foundation of Every Reporting Workbench Report
- Building Reporting Workbench Report Templates
- Configuring Filters and Report Parameters in Reporting Workbench
- Scheduled Reports: Automated Delivery Configuration
- Reporting Workbench vs SlicerDicer: When to Use Which
- Report Governance: Certification and Library Management
- Security, PHI, and Access Controls in Reporting Workbench
- Validating Reporting Workbench Reports Before Go-Live
- Common Reporting Workbench Problems and Fixes
- Downloads
How Epic Reporting Workbench Works: Architecture and Data Flow
Epic Reporting Workbench (RWB) is a structured reporting interface built on top of Clarity – Epic’s relational reporting database. It runs inside the Epic application and provides clinical and operational staff with a way to build, run, and share reports without writing SQL. The Workbench report engine queries Clarity tables through pre-configured data models that expose specific columns and relationships in a user-accessible interface.
Understanding the data flow is essential before building any Workbench report. Clinical data is entered in Epic’s operational environment and stored in Chronicles (Epic’s proprietary database). The Clarity ETL process extracts data from Chronicles and loads it into Clarity on a defined schedule – typically nightly for most data, with near-real-time extracts for select operational tables. Workbench reports run against Clarity. They reflect data as of the last ETL run, not the current moment. This latency is fundamental – a Workbench report run at 9:00 AM shows data through the prior evening for most data elements. The full Cogito architecture – including the relationship between Clarity, Caboodle, and Workbench – is covered in the Epic Cogito Reporting and Analytics guide.
Workbench reports do not require Clarity database access or SQL knowledge from the report author. They require understanding of the data model structure, the clinical meaning of the columns being selected, and the filter logic that scopes the report to the correct patient population. An analyst who understands the clinical workflow that generates the data – and who has built several Workbench reports – can produce operational metrics faster than an analyst who knows SQL but not the clinical context.
Data Models: The Foundation of Every Reporting Workbench Report
A data model in Reporting Workbench is a pre-configured view of Clarity data. It defines which Clarity tables are included, which columns are exposed for reporting, how tables are joined, and what the unit of analysis is (one row per encounter, one row per medication administration, one row per order). When a report author opens Workbench and selects a data model, they see a column picker showing the available data elements for that model. They never see the underlying SQL or table structure.
Epic-Shipped Data Models
Epic ships hundreds of pre-built data models covering every clinical domain. Patient Encounter data models expose encounter-level columns for patient volume and LOS reporting. Medication Administration data models expose eMAR-level columns for medication compliance reporting. Order data models expose CPOE order columns for utilization and turnaround reporting. Build analysts configure which data models are available to which user roles and departments during implementation.
Epic’s shipped data models are not always configured optimally for every organization’s use case. A data model that works for a general medicine department may expose columns that are clinically irrelevant for an oncology department, and may lack columns the oncology team needs. Build analysts can customize shipped data models by adding or removing columns, or create custom data models that pull from organization-specific Clarity views. Custom data model creation requires Clarity database knowledge and should be documented with the same rigor as any build object.
Choosing the Right Data Model for Your Report
| Reporting Use Case | Correct Data Model Type | Unit of Analysis | Common Mistake |
|---|---|---|---|
| Patient volume by department | Patient Encounter | One row per encounter | Using a Medication model – one row per med order inflates encounter count |
| Medication administration compliance | Medication Administration (MAR) | One row per admin event | Using Order model – orders do not confirm medication was given |
| Lab order turnaround time | Order Results | One row per result | Using Encounter model – no result timestamp available at encounter level |
| Diagnosis case mix | Patient Encounter + Diagnoses | One row per diagnosis (filter to primary) | Not filtering to primary diagnosis – one encounter = multiple rows |
| Provider productivity / RVUs | Charges / Provider Billing | One row per charge transaction | Using Encounter model – no RVU data at encounter level in most data models |
| CMS quality measure | Measure-specific shipped template | Varies by measure | Not validating shipped report denominator against official CMS spec |
The unit of analysis in the data model determines how the report counts. A Medication Administration data model has one row per administration event. A report built on this model that counts rows counts administrations, not patients, not encounters. If you need to count unique patients who received a medication, you need either a data model that aggregates at the patient level or a Workbench aggregation function that counts distinct patient IDs across the administration rows. Getting this wrong produces inflated counts that look plausible but are clinically meaningless. Understanding how clinical documentation generates the underlying data is described in the EpicCare Inpatient ClinDoc guide.
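The distinction between counting rows and counting distinct patients can be sketched in a few lines. This is an illustration of the counting logic only, not Epic's API; the field names and data are hypothetical stand-ins for Medication Administration data model rows:

```python
# Each dict stands in for one row of a Medication Administration data model:
# one row per administration event (hypothetical fields, not Epic's schema).
mar_rows = [
    {"patient_id": "P1", "encounter_id": "E1", "med": "heparin"},
    {"patient_id": "P1", "encounter_id": "E1", "med": "heparin"},  # second dose
    {"patient_id": "P2", "encounter_id": "E2", "med": "heparin"},
]

# Counting rows counts administration events, not patients.
admin_count = len(mar_rows)

# Counting unique patients requires a distinct count on the patient ID.
patient_count = len({r["patient_id"] for r in mar_rows})

print(admin_count, patient_count)  # 3 administrations, 2 patients
```

A report that labels `admin_count` as "patients who received heparin" would overstate the number by 50% in this tiny example, which is exactly the inflated-but-plausible failure mode described above.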
Building Reporting Workbench Report Templates
A Workbench report template is a reusable report structure that defines the columns, filters, groupings, and output format. When a user runs a report based on the template, they may apply additional parameter filters (like a date range or department selection) but the core structure remains consistent. Templates are the correct mechanism for operational reports that multiple people need to run repeatedly.
Step 1: Define the Report’s Clinical Question Before Opening Workbench
The most common Workbench build mistake is opening the report editor and starting to add columns before defining the clinical question the report must answer. A report built without a clear question produces a data dump that no stakeholder knows how to use. Before building, answer these questions in writing: What is being counted? What is being measured? What population is included? What population is excluded? What time period does it cover? What does the stakeholder do with the result?
“How many patients did we see in the ED last month, broken down by ESI triage level and disposition?” is a defined clinical question. It specifies the patient population (ED encounters), the metric (patient count), the breakdown dimensions (ESI level, disposition), and the time period (last month). Every element of this question maps to a specific Workbench configuration decision. Without the question, you cannot make those decisions correctly.
Step 2: Select Columns That Answer the Question – Nothing More
Workbench column selection is a HIPAA minimum necessary exercise as much as a reporting exercise. Every column you add to a report expands the PHI exposure when someone runs it. A patient volume report by department and ESI level does not need patient name, date of birth, or MRN. Adding those columns because “they might be useful” is a HIPAA minimum necessary violation. Add only the columns required to answer the defined clinical question.
There is an exception: some reports are specifically designed for patient-level review – a department manager reviewing specific patients who had long LOS, for example. In those cases, identifiable columns are necessary. Access to these patient-level columns should be restricted through Workbench security settings to users who have a clinical need for them. The privacy officer must review and approve any report template that returns patient-identifiable information to a broad user audience.
Step 3: Configure Groupings and Aggregations
Workbench supports grouping columns (organizing rows by a categorical dimension like department or encounter type) and aggregation functions (counting rows, summing values, calculating averages). For an ED volume report, the grouping dimensions are ESI level and disposition. The aggregation is COUNT of encounter IDs. The report should show one row per ESI-disposition combination, not one row per encounter.
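The group-then-count shape of that ED volume report can be sketched as follows. The encounter dicts and field names are hypothetical illustrations of data model rows, not Epic structures:

```python
from collections import Counter

# Hypothetical ED encounter rows: one row per encounter.
encounters = [
    {"esi": 2, "dispo": "Admit"},
    {"esi": 2, "dispo": "Discharge"},
    {"esi": 2, "dispo": "Admit"},
    {"esi": 3, "dispo": "Discharge"},
]

# Group by (ESI level, disposition) and count encounters in each group.
volume = Counter((e["esi"], e["dispo"]) for e in encounters)

for (esi, dispo), count in sorted(volume.items()):
    print(esi, dispo, count)
# One output row per ESI-disposition combination, not one per encounter.
```

The grouped result has three rows for four encounters, which is the intended shape: the grouping dimensions define the rows, and the aggregation fills in the counts.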
Calculated columns in Workbench allow you to create derived metrics – a percentage column that divides admits by total encounters, a LOS column that subtracts discharge time from admission time. Workbench calculated columns work differently from SQL calculated columns. They apply to the aggregated result, not the row-level data. A calculated average LOS in Workbench calculates the mean of the LOS values in the result set, which may differ from a mean calculated directly in SQL depending on how the data model pre-aggregates.
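The difference between a mean computed over aggregated results and a mean computed over row-level data is easy to demonstrate with deliberately lopsided hypothetical numbers:

```python
# Hypothetical LOS values (hours) for two departments of very unequal size.
los_by_dept = {"ED": [4.0, 6.0], "ICU": [100.0]}

# Row-level (pooled) mean across all encounters, as SQL over raw rows would compute.
all_values = [v for values in los_by_dept.values() for v in values]
pooled_mean = sum(all_values) / len(all_values)          # (4 + 6 + 100) / 3

# Mean of the per-department means, which is what a calculation applied to an
# already-aggregated result can produce.
dept_means = [sum(v) / len(v) for v in los_by_dept.values()]
mean_of_means = sum(dept_means) / len(dept_means)        # (5 + 100) / 2 = 52.5

print(round(pooled_mean, 2), mean_of_means)
```

The two values (roughly 36.7 versus 52.5) answer different questions, and neither is wrong in itself; the error is reporting one while the stakeholder expects the other. Document which one the calculated column produces.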
Six months after Epic go-live at a regional health system, the Cogito analytics team audited the Reporting Workbench report library and found 312 reports. Of these, 47 were variations of the same daily census count – each built by a different department manager who did not know that a census report already existed, or who found the existing one “didn’t look right” and built their own. The census counts across these 47 reports differed by up to 12% due to different encounter type filters, different date range logic, and different department scope definitions. The CMO had been receiving a census report from the ED medical director and a different census report from the CNO. The two reports had never matched. The resolution required: establishing a canonical census metric definition approved by clinical leadership, building a single certified census report using that definition, communicating to all department managers that this was the authoritative report, and deactivating 46 of the 47 duplicates. The root cause was the absence of a report governance process before go-live.
Configuring Filters and Report Parameters in Reporting Workbench
Filters in Reporting Workbench narrow the data returned by the report to the relevant population. There are two types: fixed filters that are embedded in the report template and cannot be changed by the user, and parameter filters that prompt the user to provide a value (like a date range or department) each time they run the report. The distinction matters significantly for report security and accuracy.
Fixed Filters: What the Report Always Restricts
Fixed filters define the scope of the report’s intended population and should never be changeable by the user. An ED census report should have a fixed filter on the ED department list – if a user could change the department filter, they might inadvertently run the report against inpatient departments and get wrong numbers. A charge capture report should have a fixed filter on active (non-voided) charges. A quality measure report should have fixed filters that implement the measure’s denominator population definition exactly.
Fixed filters for encounter type are particularly important. A report that does not filter by encounter type includes office visits, phone calls, inpatient encounters, and ED encounters in the same count. Each encounter type has a different clinical meaning, and mixing them produces a meaningless metric. The encounter type filter should always be fixed unless the report is explicitly designed to span encounter types.
Parameter Filters: What Users Control
Parameter filters prompt the user to select a value before running the report. Date range is the most common parameter filter – the report asks the user to enter a start and end date, or to select from predefined relative periods like “last 30 days” or “current month.” Department selection is another common parameter, allowing a user to scope the report to their specific department without access to other departments’ data.
Date range parameter filters must include a default value for the period. A report with no default date range requires the user to enter dates every time – which increases the likelihood they will enter an incorrect range. Setting the default to “last full calendar month” ensures that users who run the report without thinking about the date range still get a meaningful result. Warn users in the report description that changing the date range may affect metric calculation if the metric has a specific period requirement (like a 30-day readmission rate that must cover a full month).
Filter Interaction and Population Scope
Multiple filters in a Workbench report interact through AND logic by default – all filter conditions must be true for a record to appear. A report with a filter on encounter type = ED AND a filter on admission date within the date range AND a filter on ESI level >= 3 returns only ED encounters within the date range with ESI 3 or higher. This is usually correct. The edge case where AND logic breaks things is when filters are designed to include multiple values of the same category – like “include ESI 1 OR ESI 2 OR ESI 3.” In Workbench, this is configured as an IN filter (ESI level IN {1, 2, 3}), not as multiple separate filters. Separate filters on the same column with different values result in an impossible filter that returns zero rows (ESI level = 1 AND ESI level = 2 can never both be true for the same record).
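The AND-versus-IN distinction can be shown directly. The rows below are hypothetical encounter records, and the list comprehensions stand in for Workbench's filter evaluation:

```python
# Hypothetical encounter rows; "esi" is the triage level.
rows = [{"esi": 1}, {"esi": 2}, {"esi": 3}, {"esi": 4}]

# Two separate equality filters on the same column combine with AND.
# No single row can equal 1 and 2 at once, so the result is always empty.
wrong = [r for r in rows if r["esi"] == 1 and r["esi"] == 2]

# An IN-style filter is the correct way to include multiple values.
right = [r for r in rows if r["esi"] in {1, 2, 3}]

print(len(wrong), len(right))  # 0 rows vs 3 rows
```

A report that silently returns zero rows because of an impossible filter is especially dangerous when a zero count is plausible for the metric, which is why filter logic belongs in validation testing.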
Scheduled Reports: Automated Delivery Configuration
Scheduled Workbench reports run automatically on a defined schedule and deliver results to configured recipients without requiring anyone to manually run the report. This replaces manual reporting workflows where an analyst runs a report every morning, copies the output to a spreadsheet, and emails it to the department manager. Scheduled reports reduce analyst effort, ensure consistency, and eliminate the risk of a report not being sent because the analyst was busy or on leave.
Scheduling Configuration: Time, Frequency, and Output Format
Scheduled reports in Workbench are configured with a run time, a frequency (daily, weekly, monthly), an output format (displayed in Epic, exported to Excel, delivered via InBasket message or email), and a recipient list. The run time should be set after the Clarity ETL completes – typically 6:00 AM to 8:00 AM for organizations with overnight ETL runs. A scheduled report that runs at 5:00 AM before the ETL finishes will deliver stale data from the prior ETL run under today’s timestamp, which is both wrong and misleading.
Output format selection depends on what the recipient does with the report. A department manager who reviews a daily census number needs an InBasket summary or a dashboard tile – not an Excel attachment they have to open manually. A quality analyst who needs to manipulate the data for further analysis needs an Excel export. An accreditation coordinator who needs to file a report needs a formatted output that can be saved and referenced. Configure the output format for the actual workflow, not for the convenience of the report builder.
HIPAA Considerations for Scheduled Report Delivery
Scheduled reports that contain PHI must be delivered only to recipients who are authorized to receive that PHI. The recipient list must be reviewed and approved before the schedule is activated. A scheduled report that delivers patient-level data to an email distribution list that includes staff without a clinical need for that data is a HIPAA breach risk.
The safest scheduled report delivery for PHI-containing reports is within Epic’s InBasket system – the recipient accesses the report within the Epic environment where access controls are already enforced. Email delivery of PHI-containing reports requires that the email system is covered by the organization’s HIPAA security policies and that the recipient’s email address is an organizational (not personal) email account. External email delivery of PHI is generally not acceptable without encryption and an explicit business justification.
A community hospital quality team configured a scheduled daily Workbench report that delivered surgical site infection (SSI) data to an email distribution list. The report included patient names, MRNs, procedure dates, and SSI classification. The distribution list had been copied from an older committee meeting invite and included three physicians who had left the organization six months earlier. Their hospital email accounts had been deactivated, but the email system automatically forwarded messages from those accounts to their personal email addresses. The SSI report containing PHI for 22 patients had been delivered to personal email accounts for six months before a routine access audit identified the forwarding rules. The privacy officer classified the incident as a potential HIPAA breach requiring investigation. The fix required removing auto-forwarding from deactivated accounts, switching the SSI report to InBasket delivery, and auditing all other scheduled Workbench reports for similar distribution list problems.
Reporting Workbench vs SlicerDicer: When to Use Which
| Dimension | Reporting Workbench | SlicerDicer |
|---|---|---|
| Primary use | Recurring operational reports, scheduled delivery, regulatory metrics | Ad hoc population exploration, cohort identification, hypothesis testing |
| Who builds it | Reporting analyst or trained user with data model knowledge | End users directly – no IT involvement required |
| Output | Tabular reports, scheduled delivery, Excel export | Interactive charts, population counts, patient lists |
| Best for | Daily census, quality metrics, scheduled leadership dashboards | Quick cohort counts, “how many patients with X condition” questions |
| Template reuse | Yes – templates run repeatedly by multiple users | Sessions are ad hoc – limited template saving |
| Governance requirement | High – reports should be certified and owned | Medium – slicer population scope must be privacy-reviewed |
| Appropriate for CMS submission? | Yes – if validated against measure specification | No – ad hoc tool, not for regulatory submission |
The practical test for which tool to use: if the same question will be asked again next month with the same structure, build a Workbench report template. If the question is exploratory – “I wonder how many patients over 65 had a sepsis diagnosis AND received antibiotics within 3 hours” – use SlicerDicer to explore the population first. If the exploration produces a useful metric that will be tracked on an ongoing basis, then build a Workbench report that captures it formally. The broader Epic analytics landscape and how these tools relate is covered in the Epic EHR Learning Hub.
Report Governance: Certification and Library Management
Without report governance, Reporting Workbench becomes an unmanaged data landfill within months of go-live. Every analyst and every department manager builds their own version of the same metric. The versions produce different results. Leadership loses trust in Epic data. Clinical and operational decisions get made on the wrong numbers. This is the most common and most damaging Epic analytics failure pattern.
Report Certification Workflow
Report certification is the process of formally marking a Workbench report as validated, production-quality, and the authoritative source for its metric. Certified reports are distinguished from draft or personal reports in the Workbench interface. Users can trust certified reports to use correct metric definitions and validated data logic. Uncertified reports are clearly marked as personal or in-development.
The certification process for an operational report should include: report logic review by the data analytics team, metric definition confirmation by the relevant clinical or operational leadership, validation against a known reference dataset (parallel period data or a prior system’s output), and a named report owner who is accountable for the report’s accuracy going forward. A report that cannot be assigned a named owner should not be certified – it belongs in the personal/draft tier until an owner accepts accountability for it. The BAT and UAT frameworks that apply to clinical system testing also apply to report validation – confirmed in the BAT vs UAT guide.
Naming Conventions and Folder Organization
Workbench report library organization requires naming conventions that make reports discoverable and purpose-obvious. A functional naming convention includes: department prefix, metric name, period scope, and governance tier. For example: “ED – Door-to-Provider Median Time – Monthly – CMS OP-18 [CERTIFIED]” tells the user the department (ED), the metric (door-to-provider median), the period (monthly), the regulatory context (CMS OP-18), and the governance status (certified) from the report name alone.
Folder organization in Workbench should group reports by organizational function, not by the analyst who built them. A quality department folder, an ED operations folder, a revenue cycle folder – each managed by a specific team. Personal experiment reports live in individual user folders and never appear in organizational folders. The organizational folders contain only certified or under-review reports with named owners. This structure must be established and communicated before go-live – retrofitting it after 300 reports accumulate in a flat list is significantly harder.
Security, PHI, and Access Controls in Reporting Workbench
Reporting Workbench access is governed by Epic’s security model – the same role-based access control framework that governs clinical access. A user’s Workbench access is determined by their Epic security template, which defines which data models they can access, which reports they can view and run, and whether they can create reports or only consume existing ones. Build analysts configure these security settings in coordination with the privacy officer.
Data model access should be scoped to the user’s role and data need. A registration staff member who needs patient volume counts does not need access to the medication administration data model. A billing analyst who needs charge data does not need access to clinical documentation data models. Role-based data model access prevents both accidental and intentional exposure of PHI beyond the minimum necessary scope.
Column-level security restricts which columns appear in a data model for a given user role. The PATIENT table in Clarity contains MRN, name, date of birth, and SSN. A data model built for operational reporting may expose MRN and name but restrict SSN even for users who have access to the model. Column-level restrictions are configured in the data model definition and apply to all reports built on that model for users in the restricted role.
Validating Reporting Workbench Reports Before Go-Live
Workbench report validation must happen before any report is delivered to stakeholders as an operational metric. This includes shipped Epic reports – they are not pre-validated for your organization’s data, clinical protocols, or metric definitions. The validation process has two components: technical validation (does the report return the correct data) and clinical validation (does the metric the report produces match the intended clinical question).
Technical Validation Steps
Run the report against a date range where you know the expected result from another source – the ADT system’s daily census log, the prior reporting system’s output, or a manually reviewed sample. Compare the Workbench count against the reference count. Discrepancies greater than 1% require investigation. Common sources of discrepancy: the report includes encounter types that should be excluded, the date filter logic uses admission date when discharge date is the correct period anchor, or the report double-counts encounters due to a data model join that produces multiple rows per encounter.
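The 1% discrepancy check above is easy to make explicit. The counts below are hypothetical; the point is to compute the discrepancy the same way every time rather than eyeballing two numbers:

```python
def discrepancy_pct(workbench_count: int, reference_count: int) -> float:
    """Percent difference of the Workbench count versus the trusted reference."""
    return abs(workbench_count - reference_count) / reference_count * 100

# Example: Workbench census vs. the ADT system's daily log (hypothetical numbers).
pct = discrepancy_pct(workbench_count=1042, reference_count=1030)
needs_investigation = pct > 1.0

print(f"{pct:.2f}% discrepancy, investigate: {needs_investigation}")
```

Recording the computed discrepancy for each validation run, rather than just a pass/fail note, also gives the report owner a baseline to compare against when the numbers drift later.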
For CMS quality measure reports, validation requires opening the CMS measure specification and comparing the Workbench report’s denominator population and numerator criteria against the official spec field by field. The go-live support framework for managing report issues in the post-go-live period is described in the Epic EHR Go-Live Support guide.
Common Reporting Workbench Problems and Fixes
| Problem | Root Cause | Fix |
|---|---|---|
| Report counts are higher than expected | Data model join produces multiple rows per encounter (fan-out). No deduplication in aggregation. | Switch to a data model where the unit of analysis matches what you are counting. Or add a DISTINCT count on the encounter/patient ID column. |
| Report returns no rows for expected period | Date filter is using admission date but encounters have a discharge date filter that excludes open admits. Or the encounter type filter is too restrictive. | Confirm which date column the filter applies to. Run without the date filter temporarily to see if any rows return. Check encounter type filter values. |
| Report data does not reflect today’s activity | Report uses a Clarity table on nightly ETL. Near-real-time data is not available for this data element. | Switch to a data model that uses a near-real-time Clarity extract. Confirm with Epic technical team which tables have near-real-time extracts configured. |
| Two different users get different counts for same report | Row-level security applies a different department scope per user. Each user sees only their authorized data. | Intended behavior if users should see different departments. If they should see the same data, review security template department assignments for each user role. |
| Scheduled report delivers to wrong recipients | Recipient list was not reviewed after staff turnover. Deactivated accounts may have email forwarding. | Audit all scheduled report recipient lists quarterly. Switch PHI-containing scheduled reports to InBasket delivery to avoid email forwarding risk. |
| Shipped CMS report differs from submitted data | Epic’s shipped report uses a slightly different denominator or timestamp than the CMS measure specification. | Validate the shipped report against the official CMS measure specification before using for submission. Rebuild the affected logic if needed. |
Establish one canonical metric definition for every operational metric before building any Workbench report for that metric. Document the definition – what is counted, what is excluded, which encounter types are included, which date column anchors the period. Get it approved by the relevant clinical or operational leader. Build exactly one certified Workbench report that implements that definition. Communicate to all stakeholders that this is the authoritative report for that metric. Do this for the 10 most important operational metrics at go-live. The discipline of doing it before go-live prevents the 47-version census report problem from ever starting.
Authoritative References
- CMS – Hospital Outpatient Quality Reporting (OQR) Measure Specifications: Official Denominator and Numerator Definitions
- HHS OCR – HIPAA Minimum Necessary Standard: Guidance for Report Access and PHI Disclosure
