January 9, 2025 · 12 min read

7 EHR Integration Patterns That Actually Work

Practical approaches to integrating with Epic, Cerner, and other EHR systems without losing your mind.

Valyou · Principal Engineer


Integrating with Electronic Health Record systems is notoriously difficult. The promise of interoperability runs headfirst into the reality of proprietary APIs, inconsistent data models, and Byzantine certification processes.

After integrating with Epic, Cerner, Allscripts, and various smaller EHRs, here are the patterns that actually work in production.

The Landscape: Understanding What You're Up Against

Before we get to patterns, understand what makes EHR integration uniquely challenging:

Vendor fragmentation: Epic dominates with ~35% market share, but there are hundreds of EHR vendors with different APIs, data models, and integration approaches.

Certification requirements: Many integrations require formal certification processes that can take months. Some require annual recertification.

Hospital IT gatekeeping: Even with a certified integration, each health system has its own IT team that must approve, configure, and enable your connection.

Data model complexity: Healthcare data is messy. HL7v2, HL7 FHIR, CDA, proprietary formats, sometimes all from the same system depending on the data type.

HIPAA everywhere: Every data exchange is a potential compliance event. You need BAAs with everyone in the chain.

Now, the patterns that work despite all this.


Pattern 1: FHIR-First, Fallback Second

The approach: Build your integration layer around FHIR (Fast Healthcare Interoperability Resources) as the primary interface, with fallbacks to legacy protocols when FHIR isn't available.

Why it works:

FHIR has become the mandated standard for patient data exchange in the US (21st Century Cures Act). Major EHR vendors now support FHIR APIs. Starting with FHIR means you're building on the foundation that will become universal.

Implementation:

  1. Define your internal data model based on FHIR resource types (Patient, Observation, MedicationRequest, etc.)
  2. Build a FHIR client that handles:
     • OAuth 2.0 / SMART on FHIR authentication
     • Resource retrieval and pagination
     • Search parameters
     • Bundle handling
  3. For systems without FHIR support, build adapters that transform legacy formats (HL7v2, CDA) to your FHIR-based internal model
  4. Your application code only deals with the internal model, never raw EHR data
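A minimal sketch of this layering, assuming a hypothetical internal `Patient` model. The FHIR paths are standard R4 Patient fields, but the HL7v2 fallback here is simplified to parsing a single PID segment:

```python
from dataclasses import dataclass

# Hypothetical internal model, shaped after the FHIR Patient resource.
@dataclass
class Patient:
    patient_id: str
    family_name: str
    source_system: str

def from_fhir(resource: dict) -> Patient:
    """Primary path: map a FHIR R4 Patient resource to the internal model."""
    return Patient(
        patient_id=resource["id"],
        family_name=resource["name"][0]["family"],
        source_system="fhir",
    )

def from_hl7v2_pid(pid_segment: str) -> Patient:
    """Fallback adapter: parse a pipe-delimited HL7v2 PID segment."""
    fields = pid_segment.split("|")
    # PID-3 carries the patient identifier, PID-5 the name (family^given).
    return Patient(
        patient_id=fields[3].split("^")[0],
        family_name=fields[5].split("^")[0],
        source_system="hl7v2",
    )
```

Either path produces the same `Patient`, so application code never branches on where the data came from.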

The advantage: As more systems support FHIR natively, your fallback adapters become unnecessary one by one. You're future-proofing without waiting for the future.

Gotcha: FHIR implementations vary. Epic's FHIR API behaves differently than Cerner's FHIR API. You still need vendor-specific handling, but it's at the edges, not throughout your code.


Pattern 2: The Canonical Data Model

The approach: Transform all incoming EHR data to a single internal model immediately, before it touches your application logic.

Why it works:

You'll integrate with multiple EHRs over time. Each has different field names, data types, and structures. If your application code handles these differences, you're writing conditional logic forever. If you transform at the boundary, your application code is clean.

Implementation:

```
EHR A (Epic)   -> Transformer A --\
EHR B (Cerner) -> Transformer B ----> Canonical Model -> Application
EHR C (Custom) -> Transformer C --/
```

Each transformer is responsible for:

  • Field mapping (Epic's "PAT_MRN_ID" to your "patient_id")
  • Type coercion (string dates to ISO 8601)
  • Code system mapping (ICD-10 from one system, SNOMED from another)
  • Handling missing data (default values, null handling)
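A single vendor's transformer might look like this sketch. "PAT_MRN_ID" comes from the example above; the other field names ("BIRTH_DATE", "SEX") are illustrative, not Epic's actual export schema:

```python
from datetime import datetime

def transform_epic(record: dict) -> dict:
    """Hypothetical Epic adapter: vendor fields in, canonical model out."""
    return {
        # Field mapping: vendor name -> canonical name
        "patient_id": record["PAT_MRN_ID"],
        # Type coercion: US-style date string -> ISO 8601
        "birth_date": datetime.strptime(
            record["BIRTH_DATE"], "%m/%d/%Y"
        ).date().isoformat(),
        # Missing-data handling: explicit "unknown" instead of None
        "sex": record.get("SEX") or "unknown",
        # Provenance: keep the source system and original identifier
        "source": {"system": "epic", "raw_id": record["PAT_MRN_ID"]},
    }
```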

The canonical model:

Design it around your application's needs, not around any single EHR's structure. Include:

  • Consistent identifiers (your internal ID plus original source IDs)
  • Standardized code systems (pick one, map everything to it)
  • Clear null/unknown handling
  • Provenance (where did this data come from, when)

Gotcha: Resist the urge to make your canonical model "complete." Include only what your application actually uses. A huge canonical model that's mostly empty is harder to work with than a focused one.


Pattern 3: Webhook + Polling Hybrid

The approach: Use real-time notifications when available, but always have polling as a fallback.

Why it works:

Some EHRs support real-time notifications (ADT feeds, subscription APIs). Some don't. Some support them but they're unreliable. You need both mechanisms.

Implementation:

  1. Register for webhooks/subscriptions where available
  2. On webhook receipt, fetch the full record and process it
  3. Run polling jobs that check for changes since last sync
  4. Deduplicate at the application layer (you'll get the same change via both paths)
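The deduplication step can be as simple as keying every change on a stable event identity, regardless of which path delivered it. A sketch, where the event-ID scheme and in-memory set are assumptions (production systems would persist seen IDs):

```python
def process_change(event_id: str, payload: dict, seen: set) -> bool:
    """Apply a change exactly once, whether it arrived via webhook or polling.

    Returns True if the change was processed, False if it was a duplicate.
    """
    if event_id in seen:
        return False  # already handled via the other path
    seen.add(event_id)
    # ... fetch the full record, transform, persist ...
    return True
```

The same `process_change` sits behind both the webhook handler and the polling job, so neither path needs its own duplicate logic.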

The hybrid advantage:

Real-time when it works, eventually-consistent when it doesn't. You get low latency where possible without sacrificing reliability.

Reconciliation:

Periodically run full reconciliation jobs that compare EHR state to your state. This catches:

  • Missed webhooks
  • Polling gaps
  • Data corrections in the EHR
  • Integration bugs that caused bad transforms

Think of real-time updates as optimization and periodic reconciliation as the source of truth.
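A reconciliation pass can be sketched as a three-way diff over (record ID → version hash) maps; the versioning scheme is an assumption:

```python
def reconcile(ehr_state: dict, local_state: dict) -> dict:
    """Compare EHR records (id -> version hash) to local copies.

    Returns what needs re-syncing: records we never received,
    records whose content drifted, and local records the EHR no longer has.
    """
    missing = [rid for rid in ehr_state if rid not in local_state]
    stale = [rid for rid in ehr_state
             if rid in local_state and local_state[rid] != ehr_state[rid]]
    orphaned = [rid for rid in local_state if rid not in ehr_state]
    return {"missing": missing, "stale": stale, "orphaned": orphaned}
```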


Pattern 4: Integration Engine as Middle Layer

The approach: Put an integration engine (Mirth Connect, Rhapsody, Azure FHIR Server) between your application and EHRs.

Why it works:

Integration engines are purpose-built for healthcare data transformation. They handle HL7v2 parsing, FHIR conversion, message routing, and error handling out of the box. You don't have to build this infrastructure.

When to use this pattern:

  • You're integrating with multiple EHRs
  • You need to handle legacy protocols (HL7v2 over TCP/MLLP)
  • Your team doesn't have deep healthcare integration experience
  • You want to separate integration concerns from application logic

Implementation:

  1. Integration engine receives data from EHRs (HL7v2, FHIR, CDA, whatever)
  2. Engine transforms to standardized format (usually FHIR)
  3. Engine forwards to your application via clean REST API
  4. Your application only ever sees the clean API, never raw EHR data

The tradeoff:

Integration engines add infrastructure complexity. They're another system to deploy, monitor, and maintain. For simple integrations, this overhead may not be worth it. For complex multi-EHR environments, it's often essential.

Open source option: Mirth Connect is free and widely used in healthcare. It has a learning curve but handles the weird edge cases of healthcare integration.


Pattern 5: Staged Rollout with Feature Flags

The approach: Enable integration features gradually using feature flags, not big-bang deployments.

Why it works:

Healthcare integrations fail in unexpected ways. A data format that worked in testing breaks with real data. A high-volume clinic overwhelms your rate limits. Edge cases you didn't anticipate cause errors.

Staged rollout lets you catch these problems with limited blast radius.

Implementation:

  1. Deploy integration code to production but disabled
  2. Enable for internal testing accounts
  3. Enable for one friendly clinic/provider
  4. Monitor for a week (minimum)
  5. Enable for a cohort of early adopters
  6. Monitor, fix issues, repeat
  7. Enable broadly when stable

Feature flag granularity:

  • By organization/clinic
  • By data type (demographics enabled, lab results disabled)
  • By direction (read enabled, write disabled)
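A flag lookup at this granularity can be sketched as follows. The flag store and key shape are assumptions; in practice you'd usually back this with a feature-flag service or database so flips don't require a deploy:

```python
# Hypothetical flag store: (org, data_type, direction) -> enabled.
FLAGS = {
    ("clinic_a", "demographics", "read"): True,
    ("clinic_a", "lab_results", "read"): False,
}

def integration_enabled(org: str, data_type: str, direction: str) -> bool:
    """Look up the flag for this exact scope; anything unset is disabled."""
    return FLAGS.get((org, data_type, direction), False)
```

Defaulting unset combinations to disabled means new clinics and new data types are off until someone deliberately turns them on.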

Rollback capability:

If problems emerge, disable the feature flag instantly. Don't rely on deployments for rollback. They're too slow.

What to monitor during rollout:

  • Error rates by EHR, by data type
  • Latency (are some EHRs slow?)
  • Data quality (are transforms producing valid output?)
  • User-reported issues


Pattern 6: Idempotent Operations Everywhere

The approach: Design every integration operation to be safely retryable without side effects.

Why it works:

Healthcare integrations are unreliable. Networks fail. EHRs timeout. Your application crashes mid-operation. If operations aren't idempotent, retries create duplicate data or worse.

Implementation:

For reads: Idempotent by nature. Reading the same data twice is fine.

For writes: Include unique operation identifiers that the receiving system uses to deduplicate.

```
POST /appointments
{
  "idempotency_key": "apt_create_12345_20240115T143000",
  "patient_id": "...",
  "...": "..."
}
```

If the same idempotency_key is seen twice, return the result of the first operation without creating a duplicate.

For state changes: Check current state before applying change. If the change is already applied, succeed without re-applying.

```
// Instead of: "cancel appointment"
// Do: "set appointment status to cancelled if currently active"
// Second call succeeds (appointment already cancelled) without error
```

Why healthcare specifically: EHR systems often have "fire and forget" notification patterns. You might receive the same ADT message multiple times. If creating a new internal record for each message, you get duplicates. Idempotency based on source message IDs prevents this.


Pattern 7: Comprehensive Error Handling with Human Escalation

The approach: Automate error handling where possible, but build clear escalation paths for problems that need human judgment.

Why it works:

Healthcare data is messy. You will encounter data that doesn't fit your model, codes you don't recognize, and edge cases your transforms don't handle. You can't crash or silently drop data. It's PHI and it matters.

Error handling tiers:

Tier 1 - Automatic recovery:

  • Transient network errors: retry with exponential backoff
  • Rate limiting: back off and retry
  • Timeout: retry, then queue for later

Tier 2 - Automatic handling with logging:

  • Unknown code systems: map to "unknown," log for review
  • Missing optional fields: use defaults, log for quality tracking
  • Malformed but recoverable data: best-effort parse, flag for review

Tier 3 - Human review queue:

  • Missing required fields: can't process without human input
  • Conflicting data: two sources disagree, need resolution
  • Completely unparseable: manual investigation needed

Tier 4 - Alert/escalation:

  • Integration completely failing: page on-call
  • Unusual error patterns: alert team for investigation
  • PHI exposure risk: immediate escalation
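The tiering can be made explicit as a routing function. A sketch, where the error kinds are illustrative labels; the one deliberate choice shown is that unrecognized errors default to human review rather than being dropped:

```python
def route_error(error: dict) -> str:
    """Classify an integration error into a handling tier."""
    kind = error.get("kind")
    if kind in ("network_timeout", "rate_limited"):
        return "tier1_retry"          # automatic recovery
    if kind in ("unknown_code_system", "missing_optional_field"):
        return "tier2_log"            # handle automatically, log for review
    if kind in ("missing_required_field", "conflicting_data", "unparseable"):
        return "tier3_review_queue"   # needs human judgment
    if kind in ("integration_down", "phi_exposure_risk"):
        return "tier4_alert"          # page/escalate immediately
    # Unknown error shapes get human eyes by default - never a silent drop.
    return "tier3_review_queue"
```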

Implementation:

Build a review queue into your application from day one. When the integration encounters data it can't handle automatically, it goes to the queue with context (what failed, why, raw data). Humans can fix, skip, or escalate.

Track queue depth and resolution time. A growing queue indicates systemic problems, either with EHR data quality or your integration logic.


The Meta-Pattern: Expect Change

EHR integrations aren't "done" after initial implementation. EHR vendors release updates. Health systems change configurations. New data types need to be supported. Regulations change requirements.

Design for change:

  • Configuration over code where possible (field mappings, code system maps)
  • Version your data models and transformers
  • Maintain backward compatibility in your internal APIs
  • Monitor integration health continuously
  • Build tooling for investigating integration issues
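Configuration-over-code for field mappings can be sketched as follows. "PAT_MRN_ID" is from the earlier example; the other vendor field names are illustrative. The point is that a vendor renaming a field becomes a config edit, not a code change:

```python
import json

# Field mappings live in config (JSON here), not code.
MAPPING_CONFIG = json.loads("""
{
  "epic":   {"PAT_MRN_ID": "patient_id", "BIRTH_DATE": "birth_date"},
  "cerner": {"person_id":  "patient_id", "birth_dt":   "birth_date"}
}
""")

def apply_mapping(vendor: str, record: dict) -> dict:
    """Rename vendor fields to canonical names per the config."""
    mapping = MAPPING_CONFIG[vendor]
    return {canonical: record[src]
            for src, canonical in mapping.items() if src in record}
```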

Relationship management:

Your integration depends on cooperation from EHR vendors and health system IT teams. Maintain these relationships:

  • Respond quickly to support requests
  • Document your integration clearly
  • Notify partners before making changes
  • Participate in certification programs seriously

The technical patterns matter, but so does the human side. A technically excellent integration that health system IT teams hate won't succeed.


Building a healthcare application that needs EHR integration? [Let's talk architecture](/contact).
