January 13, 2025 · 11 min read

HIPAA Compliance Checklist: 10 Technical Requirements for Patient Portals

The specific technical controls healthcare applications need to meet HIPAA requirements and pass audits.

Valyou, Principal Engineer


Building healthcare software is different. The same code patterns that work for a SaaS startup become compliance violations when they touch patient data. HIPAA isn't optional, and the penalties for getting it wrong range from $100 to $50,000 per violation, with a maximum of $1.5 million per year for repeated violations.

This guide covers the technical controls you need for a HIPAA-compliant patient portal. Not the administrative requirements (policies, training, BAAs) but the actual code and infrastructure decisions.

Understanding the Scope

HIPAA's Security Rule requires three types of safeguards:

- Administrative: Policies, procedures, training
- Physical: Facility access, workstation security
- Technical: Access controls, audit controls, transmission security

This guide focuses on technical safeguards: the things you actually build.

The key concept is Protected Health Information (PHI): any individually identifiable health information. This includes obvious things like diagnoses and medications, but also demographic data, appointment times, and even the fact that someone is a patient at all.


1. Access Control: Authentication That Exceeds Minimum Standards

The requirement: Implement technical policies and procedures that allow only authorized persons to access PHI.

What this means technically:

Unique user identification is mandatory. No shared accounts, no generic logins. Every access must be traceable to a specific person.

Minimum implementation:

- Unique username/email per user
- Strong password requirements (12+ characters, complexity rules)
- Account lockout after failed attempts
- Session timeout for inactivity (15 minutes is common)

What auditors expect:

- Multi-factor authentication (MFA) for all users accessing PHI
- Role-based access control (users only see what they need)
- Automatic session termination
- Emergency access procedures (break-glass accounts with extra logging)

Implementation guidance:

Use an established auth provider (Auth0, Okta, AWS Cognito) that supports MFA out of the box. Don't build authentication yourself. The compliance burden is too high.

Session management should be server-side with short-lived tokens. JWTs are fine for API auth, but pair them with a session revocation capability: when a user logs out or their access is revoked, their sessions must terminate immediately across all devices.

Code pattern to avoid:

```javascript
// BAD: Long-lived token with no revocation
const token = jwt.sign({ userId }, secret, { expiresIn: '7d' });
```

Better approach:

Short-lived access tokens with refresh token rotation and server-side session tracking that can be revoked instantly.


2. Audit Controls: Logging Everything That Touches PHI

The requirement: Implement hardware, software, and procedural mechanisms to record and examine access to PHI.

What this means technically:

Every access to PHI must be logged. Who accessed what, when, from where, and what they did with it.

What to log:

- User ID and timestamp for every action
- IP address and user agent
- Resource accessed (which patient, which record)
- Action taken (view, create, update, delete, export)
- Success or failure of the action

Log retention: HIPAA doesn't specify a retention period, but 6 years is common (matching the document retention requirement). Some states require longer.

Implementation guidance:

Use structured logging (JSON) from day one. Every log entry should be machine-parseable for audit queries.

```json
{
  "timestamp": "2024-01-15T14:30:00Z",
  "userId": "user_123",
  "action": "view",
  "resourceType": "patient_record",
  "resourceId": "patient_456",
  "ipAddress": "192.168.1.100",
  "userAgent": "Mozilla/5.0...",
  "result": "success"
}
```

Log storage must be tamper-evident. Use append-only log stores or ship logs to a separate system where application code can't modify them. AWS CloudWatch, Datadog, or a dedicated SIEM all work.

What auditors will ask:

- Show me all access to patient X's records in the last 90 days
- Show me everything user Y accessed last month
- Show me all failed login attempts this week
- How long are logs retained?
- Can logs be modified after creation?
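With structured JSON logging, the first of those questions reduces to a filter over log entries. A minimal sketch (the entry shape follows the JSON example earlier; in practice this query would run in your log platform, e.g. CloudWatch Logs Insights or a SIEM, not in application code):

```javascript
// Sketch: "show me all access to patient X's records in the last 90 days"
// expressed as a filter over structured log entries.
function accessesToPatient(entries, patientId, now = Date.now()) {
  const cutoff = now - 90 * 24 * 60 * 60 * 1000; // 90 days back
  return entries.filter(e =>
    e.resourceType === 'patient_record' &&
    e.resourceId === patientId &&
    Date.parse(e.timestamp) >= cutoff
  );
}
```

The point of requiring machine-parseable entries from day one is exactly this: audit questions become queries instead of grep archaeology.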


3. Integrity Controls: Preventing Unauthorized Modification

The requirement: Implement policies and procedures to protect PHI from improper alteration or destruction.

What this means technically:

PHI must not be silently modifiable. Changes should be tracked, and unauthorized modifications should be detectable.

Implementation approaches:

Checksums and hashing: Store hashes of critical records. Periodically verify that records match their hashes. Any mismatch indicates modification.

Immutable audit trails: Never delete or modify audit logs. Use append-only data stores.

Version history: For clinical records, maintain full version history. Don't overwrite. Create new versions. This is often a regulatory requirement beyond HIPAA anyway.

Database-level integrity: Use transactions for multi-step operations. Implement referential integrity. Use database-level constraints.

What auditors look for:

- Can you prove a record hasn't been modified since creation?
- If a record was modified, can you show what changed, when, and by whom?
- Are audit logs protected from modification?


4. Transmission Security: Encryption in Transit

The requirement: Implement technical security measures that guard against unauthorized access to PHI transmitted over electronic networks.

What this means technically:

All data transmission must be encrypted. This isn't optional.

Requirements:

- TLS 1.2 or higher for all connections (TLS 1.3 preferred)
- Valid certificates from trusted CAs
- Strong cipher suites (no deprecated algorithms)
- HSTS headers to prevent downgrade attacks
- Certificate pinning for mobile apps

Common mistakes:

- Allowing TLS 1.0/1.1 for "compatibility"
- Self-signed certificates in production
- Not encrypting internal service-to-service communication
- Exposing PHI in URLs (which appear in logs)

Implementation guidance:

Use a reverse proxy (Cloudflare, AWS ALB) that handles TLS termination with modern defaults. Don't configure TLS yourself unless you have expertise.

For mobile apps, implement certificate pinning and fail closed if the certificate doesn't match. This prevents man-in-the-middle attacks even if the device has a compromised certificate store.

Internal traffic between services should also be encrypted. In AWS, use VPC with TLS between services. Don't assume network-level isolation is sufficient.
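The HSTS piece is small enough to show directly. A minimal sketch in plain Node (with Express you'd typically reach for the helmet middleware instead; the one-year max-age and subdomain coverage below reflect common guidance, not a HIPAA-mandated value):

```javascript
// Sketch: building a Strict-Transport-Security header value.
// HSTS tells browsers to refuse plain-HTTP connections to the site,
// which blocks protocol-downgrade attacks.
function hstsHeader(maxAgeSeconds = 31536000, includeSubDomains = true) {
  let value = `max-age=${maxAgeSeconds}`;
  if (includeSubDomains) value += '; includeSubDomains';
  return value;
}

// In a request handler:
// res.setHeader('Strict-Transport-Security', hstsHeader());
```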


5. Encryption at Rest: Protecting Stored Data

The requirement: While HIPAA calls encryption "addressable" rather than "required," in practice, it's expected for any modern system handling PHI.

What this means technically:

All PHI stored on any medium should be encrypted.

Database encryption:

- Enable encryption at rest for your database (RDS encryption, MongoDB encryption, etc.)
- This protects against physical theft and certain attack vectors
- It doesn't protect against application-level access (that's what access controls do)

File storage encryption:

- S3: Enable default encryption (SSE-S3 or SSE-KMS)
- Any attached volumes: Enable EBS encryption
- Backups: Must be encrypted (they contain PHI too)

Key management:

- Use a proper key management service (AWS KMS, HashiCorp Vault)
- Don't store encryption keys in code or config files
- Implement key rotation
- Audit key access

Application-level encryption:

For highly sensitive fields (SSN, certain diagnoses), consider application-level encryption in addition to storage encryption. This provides defense in depth: a database breach doesn't expose readable PHI if it's encrypted at the application layer.


6. Automatic Logoff: Session Management

The requirement: Implement electronic procedures that terminate an electronic session after a predetermined time of inactivity.

What this means technically:

User sessions must timeout. This protects against the scenario where someone walks away from a logged-in terminal.

Implementation requirements:

- Server-side session timeout (not just client-side)
- Timeout after 15 minutes of inactivity (common standard)
- Clear all PHI from screen on timeout
- Require re-authentication to resume

Common mistakes:

- Client-side only timeout (can be bypassed)
- Very long timeout periods
- Timeout that doesn't clear sensitive data from view
- Different timeout behavior on different endpoints

Implementation guidance:

Track last activity time server-side. On each request, update the timestamp. Before processing any request, check if the session has timed out.

For single-page applications, implement a heartbeat that tracks user activity. No mouse movement, no keystrokes, no scrolling for 15 minutes = force logout.
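The server-side half of that guidance can be sketched as follows. As before, the in-memory `Map` is illustrative; the timestamps must live in a shared server-side store so the check can't be bypassed by a modified client:

```javascript
// Sketch: server-side inactivity timeout. Update the last-activity
// timestamp on every request; reject requests once the gap exceeds
// the limit and force re-authentication.
const INACTIVITY_LIMIT_MS = 15 * 60 * 1000;

const lastActivity = new Map(); // sessionId -> epoch ms

function touch(sessionId, now = Date.now()) {
  lastActivity.set(sessionId, now);
}

// Call before processing any request; false means the session timed out.
function isActive(sessionId, now = Date.now()) {
  const last = lastActivity.get(sessionId);
  if (last === undefined || now - last > INACTIVITY_LIMIT_MS) {
    lastActivity.delete(sessionId); // require re-authentication
    return false;
  }
  return true;
}
```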


7. Unique User Identification: No Shared Accounts

The requirement: Assign a unique name and/or number for identifying and tracking user identity.

What this means technically:

Every user has their own account. Period. No shared accounts, no role-based generic logins, no "department" accounts.

Why this matters:

Audit trails are meaningless if you can't identify who took an action. "The front desk account" doesn't tell you which of five receptionists accessed a patient record.

Implementation:

- Every human user gets a unique account
- Service accounts for system-to-system access are also unique and identifiable
- Shared credentials = audit failure

What about shared workstations?

Users log in with their own credentials. Fast user switching with PIN after initial authentication is common in clinical settings. The key is that every action is tied to a specific individual.


8. Emergency Access Procedure: Break Glass Functionality

The requirement: Establish procedures for obtaining necessary PHI during an emergency.

What this means technically:

There must be a way to access PHI when normal access controls would prevent it, in genuine emergencies.

Implementation:

"Break glass" accounts that bypass normal access restrictions but generate high-priority alerts and extensive logging.

Requirements:

- Access is available but not convenient (multi-step confirmation)
- Every use generates immediate alerts to security/compliance
- Extensive logging of all actions taken
- Required justification before or immediately after access
- Regular audit of break-glass usage

What this is NOT:

A backdoor for convenience. If break-glass accounts are used regularly, your normal access controls are broken.
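The shape of a break-glass entry point can be sketched as follows. Everything here is illustrative: `alertSecurityTeam` and `auditLog` are hypothetical hooks into your alerting and audit pipeline, and the justification-length check stands in for whatever confirmation flow your compliance team requires.

```javascript
// Sketch: break-glass access that requires a justification, alerts
// security immediately, and writes a high-priority audit entry
// before any PHI is returned.
function breakGlassAccess(userId, patientId, justification, hooks) {
  if (!justification || justification.trim().length < 10) {
    throw new Error('Break-glass access requires a written justification');
  }
  // Alert fires before access is granted, not after the fact.
  hooks.alertSecurityTeam({ userId, patientId, justification });
  hooks.auditLog({
    action: 'break_glass_access',
    userId,
    resourceId: patientId,
    justification,
    timestamp: new Date().toISOString(),
  });
  return { granted: true };
}
```

The friction is deliberate: if this path is easy, it becomes the backdoor the section warns against.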


9. Person or Entity Authentication: Verifying Identity

The requirement: Implement procedures to verify that a person or entity seeking access to PHI is the one claimed.

What this means technically:

You must confirm that users are who they claim to be, not just that they have valid credentials.

For patients accessing their own data:

- Identity verification during account creation (knowledge-based verification, ID document verification)
- MFA for ongoing access
- Account recovery that verifies identity (not just email access)

For staff:

- HR-verified identity tied to account
- MFA required
- Credential management (rotation, revocation on termination)

Implementation guidance:

Patient identity verification is often the hardest part. Options include:

- Match against demographic data already on file
- Integration with ID verification services (Jumio, Onfido)
- In-person verification with photo ID
- Verification code sent to address on file

The verification method should match the sensitivity of access being granted.


10. Technical Documentation: Proving Your Controls

The requirement: Maintain documentation of security policies and procedures.

What this means technically:

You must document what controls you have, how they work, and evidence that they're functioning.

Documentation needed:

- System architecture diagrams showing where PHI flows
- Access control policies and how they're implemented
- Encryption specifications (algorithms, key management)
- Audit log specifications (what's logged, retention, access)
- Incident response procedures

Evidence of functioning:

- Regular access reviews (who has access, is it still appropriate?)
- Penetration test reports
- Vulnerability scan results
- Audit log samples showing controls working
- Incident reports and resolutions

Implementation guidance:

Treat documentation as code. Store it in version control. Update it when systems change. Review it quarterly.

Automate evidence collection where possible. Compliance tools (Vanta, Drata) can automatically collect evidence of controls functioning.


Beyond the Checklist: The Compliance Mindset

Meeting these 10 requirements gets you to baseline compliance. But HIPAA is a minimum standard, not a target. The actual standard is "reasonable and appropriate" safeguards, which an auditor may interpret differently than you do.

Principles that matter:

Defense in depth: Don't rely on single controls. Encryption at rest AND in transit AND access controls AND audit logging AND monitoring. Layers.

Least privilege: Users access only what they need. Not "all patient data" but "patients on their panel." Not "all fields" but "fields relevant to their role."

Assume breach: Design as if you'll be breached. What limits the damage? Encryption limits data exposure. Logging enables investigation. Segmentation contains blast radius.

Continuous compliance: Compliance isn't a point-in-time achievement. It's ongoing. Regular audits, continuous monitoring, updated policies as systems change.


Building a healthcare application and need help with HIPAA compliance? [Let's talk technical architecture](/contact).

End Transmission
