
The EHR doesn’t “go down” politely. It waits for your busiest morning—two walk-ins, a vaccine kiddo crying like a tiny opera singer, and a provider asking, “Can someone print the med list?” Today, you’ll build a HIPAA-aligned contingency plan that actually survives real life: a one-page downtime workflow, a print-ready downtime packet, and a disaster recovery plan with proof you can restore. No legal lecture. No vague “be prepared.” Just a practical checklist that keeps care moving, keeps ePHI protected, and keeps you audit-ready.
This is general information, not legal advice. HIPAA requirements can vary by your setup, state laws, payor contracts, and your EHR/vendor terms. For an incident involving ransomware, suspected breach, or patient harm risk, involve qualified HIPAA counsel and security/IT professionals promptly.
A HIPAA contingency plan for small clinics should cover EHR downtime operations (how you safely register, document, prescribe, order labs, and bill without the system) and disaster recovery (how you restore systems and data within defined time targets). Start with a one-page downtime workflow, a printed “downtime packet,” role assignments, vendor contacts, backup/restore proof, and a simple drill schedule. The goal: care continues, data stays protected, and recovery is provable.
1) What HIPAA means: the “contingency plan” in plain English
If you’re time-poor (you are), here’s the clean translation: HIPAA’s Security Rule expects you to plan for “something bad happens” scenarios—system failure, fire, vandalism, flood, ransomware, power outage—so you can protect electronic protected health information (ePHI) and keep critical operations running. In practice, that means your plan isn’t a binder that impresses nobody; it’s a set of implemented procedures you can execute with tired humans on a Monday.
The HIPAA contingency plan standard is usually discussed as a bundle of five pieces: a data backup plan, a disaster recovery plan, an emergency mode operation plan, plus testing and revision procedures and an applications/data criticality analysis. Some parts are required, some are “addressable” (more on that in a second), but the spirit is the same: you can’t rely on hope as a control.
Required pieces clinics forget to bundle together
- Downtime operations: how care continues without the EHR (registration → documentation → orders → meds → billing).
- Recovery: how systems/data come back, with time targets you can defend.
- Security in emergency mode: you don’t get a “HIPAA holiday” just because the Wi-Fi is angry.
- Evidence: restore logs, training logs, drill notes, and version-controlled forms.
Addressable ≠ optional (and why auditors still expect a decision)
“Addressable” does not mean “skip it.” It means you must assess the safeguard in your environment and either implement it, implement an equivalent alternative, or document why it’s not reasonable and appropriate (with rationale). If your justification is basically “we didn’t get around to it,” that’s not a justification—it’s a future headache with better stationery.
Proof beats promises: what “implemented” looks like on paper
“Implemented” looks like: your staff can locate the downtime packet in under 60 seconds, your provider knows how to document safely on paper, your biller knows how charges get captured, and your IT/MSP can show that backups are retrievable and restores are tested. In other words: the plan exists in behavior, not in a PDF folder named “HIPAA_FINAL_FINAL_v7.”
- Downtime workflow tells staff what to do today.
- Disaster recovery tells IT what to restore first and how fast.
- Evidence proves it wasn’t theoretical.
Apply in 60 seconds: Write down your “top 3 must-work” functions (eRx, charting, scheduling) on a sticky note. That’s your restoration priority seed.
Money Block: “Do we need the full plan?” (Eligibility checklist)
- Yes if you create/store ePHI in an EHR, practice management system, imaging portal, or patient messaging tool.
- Yes if you e-prescribe (especially controlled substances) or rely on electronic lab orders/results.
- Yes if you use remote access, cloud apps, or an MSP for IT support.
- Yes if downtime would cause medication risk, delayed critical results, or billing disruption.
Neutral next step: If you checked 2+ “Yes,” build the one-page downtime workflow first—then layer DR and evidence.
2) Who this is for: small clinics that can’t pause care
This is for outpatient clinics, private practices, urgent care, specialty groups, PT/OT clinics, and small behavioral health offices where “downtime” doesn’t mean “we’ll reschedule.” It means the waiting room keeps filling, phones keep ringing, and someone eventually asks, “Do we have a way to verify allergies?”
For you if… outpatient clinics, private practices, urgent care, specialty groups
You’re the owner, practice manager, clinic admin, compliance lead, or the “accidental IT person” who inherited this job because you can reset a router. You want something staff can actually follow—and something you can hand to an auditor or payor without sweating through your shirt.
Not for you if… hospitals/health systems with enterprise IR/BCP teams
If you have a full Business Continuity Program office, a dedicated Security Operations Center, and a DR site with automated failover, bless you. You still need a plan, but this checklist is tuned for smaller teams and smaller budgets.
If you’re outsourced: how MSPs/IT vendors should use this outline
If you’re an MSP supporting a clinic, this is your “common language” document. It prevents the classic mismatch: IT thinks “backup” is done because software is installed, while the clinic thinks “backup” means “we can chart again within a few hours.” This plan forces clarity: who does what, what gets restored first, and how you prove it.
- Clinic: workflows, paper forms, staff behavior, minimum necessary rules.
- MSP/IT: backups, restore testing, access controls, incident response activation.
- EHR vendor/cloud: platform availability, downtime mode features, contractual SLAs, support escalation.
A quick reality check: even if your EHR is “cloud,” your clinic still has dependencies—workstations, identity/logins, internet, eRx connectivity, printers/scanners, and the human factor. Cloud reduces some risks. It does not delete your responsibilities.

3) Downtime starts here: the one-page workflow staff can follow
Your one-page workflow is the difference between “messy but controlled” and “everyone improvises in parallel.” It should fit on a single page. If it needs scrolling, it’s already too long for the moment when the EHR won’t load and the phone is flashing.
Registration → visit → orders → meds → billing (a simple chain)
Think of downtime like a relay race. If the baton gets dropped at registration, everything downstream becomes guesswork. Your workflow should define the baton at each step: patient ID verification, reason for visit, vitals, clinical note, orders, meds, charges.
- Front desk: confirm identity + demographics, print/complete downtime encounter cover sheet.
- MA/RN: vitals + med/allergy verification process, flag critical risks (anticoagulants, allergies, pregnancy, etc.).
- Provider: paper note with required elements, orders on downtime slips, prescriptions per downtime policy.
- Checkout: follow-up scheduling, patient instructions, payment/estimate handling per policy.
- Biller: charge capture method, later reconciliation and claim submission timeline.
Role cards: front desk, MA/RN, provider, biller, practice manager
People don’t read policies under stress. They follow roles. Create role cards—half-page each—that answer: What do I do first? What do I do if I’m stuck? What do I absolutely not do?
- Front desk “first 3 minutes”: downtime packet location, identity verification, labeling rules.
- Clinical staff “no surprises”: medication/allergy verification, critical results protocol, documentation minimums.
- Provider “safe prescribing”: how to confirm meds/allergies, how to handle controlled substances, when to pause and escalate.
- Practice manager “traffic control”: downtime start timestamp, communication script, vendor escalation, incident log start.
Pattern interrupt: Let’s be honest… your downtime plan can’t live in someone’s inbox.
If your plan requires searching email, opening a shared drive, and remembering which file is “the latest,” it will fail precisely when you need it. Your plan needs a physical home: a labeled binder or drawer in each clinical area, plus a controlled digital copy for updating.
- Define the chain from registration to billing.
- Give each role a “first actions” card.
- Start an incident log the moment downtime begins.
Apply in 60 seconds: Add one line to your workflow: “Downtime begins at ___ (time). Practice manager starts incident log.”
4) The downtime packet: what to print before you “need it”
The downtime packet is your paper-based “minimum viable EHR.” It’s not every possible form. It’s the smallest set that keeps care safe and lets you reconcile later without a scavenger hunt.
Minimum set: encounter note, consent, meds/allergies, order slips, charge capture
- Downtime encounter cover sheet (patient identifiers, visit date/time, clinician, encounter type, barcode/label area if you use it).
- Clinical note template (SOAP or your specialty’s equivalent; prompts for allergies, problem list, meds, assessment/plan).
- Medication & allergy verification form (with a clear “source” field: patient report, med bottles, pharmacy call, previous printout, etc.).
- Order slips (labs, imaging, referrals) with patient identifiers repeated on each page.
- Consent / privacy acknowledgement (only what you truly need during downtime; keep it simple).
- Charge capture sheet (CPT/HCPCS/ICD fields or your clinic’s internal superbill format).
- Patient instructions sheet (follow-up, red flags, contact info for results).
Controlled copies: version date, storage spot, reprint cadence
This is where clinics quietly win. Each packet should be a controlled copy: version date on every form, a single “official” storage spot, and a reprint cadence that matches staff turnover and process drift.
- Version date on the bottom right of every form (e.g., “v2026-01”).
- Storage map: “Front desk drawer A,” “Nurse station cabinet,” “Provider workroom shelf.”
- Reprint cadence: quarterly refresh or after any workflow change (new EHR module, new eRx vendor, new payer requirement).
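To make the reprint cadence checkable rather than aspirational, a tiny script can flag forms whose version stamp has drifted past the refresh window. This is a minimal sketch assuming the "vYYYY-MM" stamp convention above; the form names and the quarterly threshold are illustrative placeholders, not your real inventory.

```python
from datetime import date

# Hypothetical inventory of downtime forms and their printed version stamps
# (the "vYYYY-MM" convention is the one suggested above; adjust to yours).
FORMS = {
    "encounter_cover_sheet": "v2025-07",
    "med_allergy_verification": "v2026-01",
    "charge_capture_sheet": "v2025-03",
}

def version_to_date(stamp: str) -> date:
    """Turn a 'vYYYY-MM' stamp into the first day of that month."""
    year, month = stamp.lstrip("v").split("-")
    return date(int(year), int(month), 1)

def stale_forms(forms: dict, today: date, max_age_months: int = 3) -> list:
    """Return form names whose version stamp is older than the reprint cadence."""
    stale = []
    for name, stamp in forms.items():
        printed = version_to_date(stamp)
        age_months = (today.year - printed.year) * 12 + (today.month - printed.month)
        if age_months > max_age_months:
            stale.append(name)
    return sorted(stale)

print(stale_forms(FORMS, today=date(2026, 1, 15)))
# → ['charge_capture_sheet', 'encounter_cover_sheet']
```

Run it at the start of each quarterly refresh; anything it prints gets reprinted and restocked at its mapped storage spot.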
Curiosity gap: The “one form” auditors ask for first (and why it’s never where you think).
It’s usually the thing that proves you can track what happened: an incident log or a downtime event record that shows dates, times, who declared downtime, what systems were affected, and what you did to protect ePHI while operating manually. The form itself is boring. The absence of it is expensive.
Money Block: Quote-prep list (what to gather before you talk to your vendor/MSP)
- Where ePHI lives (EHR, imaging, patient portal, billing, phone system, shared drives).
- Top 3 “must restore first” functions (eRx, charting, scheduling, claims, etc.).
- Your current backup method and last known restore test date (even if the answer is “unknown”).
- Vendor support contacts + escalation paths + after-hours policy.
- List of endpoints that touch ePHI (workstations, laptops, tablets, scanners, servers, cloud apps).
Neutral next step: Bring this list to a 30-minute call and ask one question: “Show me the last successful restore.”
5) Paper-to-EHR re-entry: the chain-of-custody clinics don’t document
Downtime isn’t over when the EHR returns. The dangerous part is the “after”: re-entering notes, reconciling meds, uploading labs, and making sure nothing disappears into the black hole of “we’ll scan it later.”
Reconciliation rule: how you prevent “lost notes” and duplicate meds
The clinic-safe approach is to treat downtime documentation like a controlled handoff. You need a reconciliation rule that answers: What counts as the official record? What must be entered as discrete data? Who verifies completion?
- Single owner for reconciliation (often the practice manager or lead nurse).
- Two lists: “Needs discrete entry” (meds, allergies, problems, immunizations, orders) vs “scan as image/PDF” (signed consents, narrative notes if allowed).
- Duplicate prevention: a method to mark each paper item as “entered” with date/initials (and a place to store it until QA is complete).
Scanning + indexing: file naming that survives audits
Your scanning process should be boring and consistent. Audits love boring. The goal is that any scanned downtime document can be retrieved quickly and linked to the correct encounter.
- Standard file naming: LASTNAME_FIRSTNAME_DOB_YYYYMMDD_DOWNTIME_NOTE.pdf (or your EHR’s preferred convention).
- Indexing rule: always attach to the correct date of service and visit type.
- Storage control: scanned files live in a restricted folder until they’re in the EHR; then they are purged per policy.
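The naming rule above is easier to keep boring if a small helper builds the filename instead of whoever happens to be scanning. This is a sketch, not your EHR's required convention: it assumes both DOB and date of service are rendered as YYYYMMDD and that the document-type suffix is free text.

```python
import re
from datetime import date

def downtime_filename(last: str, first: str, dob: date,
                      dos: date, doc_type: str = "NOTE") -> str:
    """Build a scan filename following the
    LASTNAME_FIRSTNAME_DOB_YYYYMMDD_DOWNTIME_TYPE.pdf pattern described above.
    DOB and date of service are both formatted YYYYMMDD (an assumption here)."""
    def clean(s: str) -> str:
        # Strip spaces, apostrophes, hyphens, etc., and uppercase the rest.
        return re.sub(r"[^A-Za-z]", "", s).upper()
    return (f"{clean(last)}_{clean(first)}_{dob:%Y%m%d}_"
            f"{dos:%Y%m%d}_DOWNTIME_{doc_type.upper()}.pdf")

print(downtime_filename("O'Brien", "Mary Ann", date(1984, 3, 9), date(2026, 1, 12)))
# → OBRIEN_MARYANN_19840309_20260112_DOWNTIME_NOTE.pdf
```

The payoff is retrievability: a consistent pattern means anyone can locate a downtime scan by name months later, without opening files one by one.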
Curiosity gap: Where downtime charts go to die—and how to stop that.
They die in “temporary” stacks: the back counter, the provider’s bag, the scan pile that never gets indexed, or a shared drive folder with no naming convention. The fix is unglamorous: a single physical bin labeled “DOWNTIME → RECONCILE,” a daily owner, and a checklist that ends with “bin empty.”
- All downtime encounters logged (count matches appointment list).
- Meds/allergies updated as discrete data where required.
- Orders/results tracked to completion (no “missing” labs).
- Charges entered and reconciled to superbills.
- Paper docs scanned/indexed and stored per policy.
6) Disaster recovery math: RTO/RPO without enterprise budgets
Disaster recovery (DR) gets intimidating because people think it requires a second data center and a staff of twenty. Small clinics don’t need enterprise theatrics. They need clear targets and tested restores. Two numbers do most of the work: RTO and RPO.
RTO (how fast you must restore) vs RPO (how much data you can lose)
- RTO (Recovery Time Objective): How long you can be down before the impact becomes unacceptable.
- RPO (Recovery Point Objective): How much data you can afford to lose, measured in time (e.g., 4 hours of charting).
Here’s the grounding thought: RTO is about patient flow. RPO is about data integrity. You set them by function, not by fear.
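The RPO side of this reduces to simple arithmetic: if a function's backup interval is longer than its RPO, you have a gap. Here is a sketch with made-up targets; the hour values are placeholders for illustration, not recommendations for any clinic.

```python
# Illustrative RPO targets (max tolerable data loss, in hours) and the
# backup interval actually configured for each function. All numbers are
# sample assumptions, not recommended values.
RPO_HOURS = {"eRx": 1, "charting": 4, "scheduling": 8, "billing": 24}
BACKUP_INTERVAL_HOURS = {"eRx": 0.5, "charting": 6, "scheduling": 6, "billing": 24}

def rpo_gaps(rpo: dict, interval: dict) -> dict:
    """Return functions whose backup interval exceeds the allowed data-loss
    window, mapped to the size of the shortfall in hours."""
    return {fn: interval[fn] - rpo[fn]
            for fn in rpo
            if interval.get(fn, float("inf")) > rpo[fn]}

print(rpo_gaps(RPO_HOURS, BACKUP_INTERVAL_HOURS))
# → {'charting': 2}
```

In this sample, charting is backed up every 6 hours against a 4-hour RPO, so a crash could lose up to 2 more hours of notes than the clinic decided it could tolerate. Either tighten the backup interval or honestly loosen the RPO and document why.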
Set targets by function: scheduling, clinical notes, eRx, billing, imaging
Small clinics can set realistic targets by ranking functions into tiers. The exact numbers vary by specialty and risk, but the logic is stable: some functions can be manual longer; some functions create immediate safety or revenue risk.
- Tier 1 (paper-ready): Printed downtime packet + role cards + incident log. Manual charting allowed with safeguards.
- Tier 2 (targets defined): Defined RTO/RPO per function + backup ownership + restore test schedule.
- Tier 3 (provable): Drills + training logs + restore evidence + vendor responsibility matrix signed off.
- Tier 4 (hardened): MFA, least privilege, segmented devices touching ePHI, hardened backups, tested recovery runbooks.
- Tier 5 (resilient): Measured uptime goals, tabletop + downtime drills, fast restore of critical systems, continuous improvement.
Neutral next step: Pick your current tier honestly—then choose the smallest step to move up one tier.
Pattern interrupt: Here’s what no one tells you… backups you’ve never restored don’t count.
This is not a motivational quote. It’s a practical truth. A backup that has never been restored is a belief, not a capability. HIPAA expects retrievable copies and implemented procedures. “We think it’s backing up” is not the same as “we restored last month.”
Show me the nerdy details
If you want to keep this simple but solid, borrow structure from NIST-style contingency planning without adopting enterprise complexity. Write a short “recovery runbook” that includes: (1) restoration order, (2) who can authorize restores, (3) how you verify integrity, (4) what “done” looks like (e.g., test login, sample chart retrieval, eRx connectivity check), and (5) where evidence is stored. Keep it to 2–4 pages. The power is not length; it’s repeatability.
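The "what does done look like" step of that runbook can be captured as a short script whose output doubles as a restore-log evidence line. The three checks below are stubs standing in for real verification steps (their names come from the examples in the paragraph above); a fixed timestamp is used only so the example is reproducible, and in practice you would use the current time.

```python
from datetime import datetime

# Each "done" criterion is a named check returning True/False. These bodies
# are placeholders; real checks would actually exercise the restored system.
def check_test_login() -> bool:
    return True   # replace with a real login attempt against the restored system

def check_sample_chart_retrieval() -> bool:
    return True   # replace with pulling one known test chart

def check_erx_connectivity() -> bool:
    return False  # replace with a transmission test to your eRx network

CHECKS = {
    "test login": check_test_login,
    "sample chart retrieval": check_sample_chart_retrieval,
    "eRx connectivity": check_erx_connectivity,
}

def run_restore_verification(checks: dict) -> str:
    """Run every 'done' check and return one evidence line for the restore log."""
    results = {name: fn() for name, fn in checks.items()}
    status = "PASS" if all(results.values()) else "FAIL"
    failed = [n for n, ok in results.items() if not ok]
    stamp = datetime(2026, 1, 15, 14, 30).isoformat()  # use datetime.now() in practice
    suffix = f" (failed: {', '.join(failed)})" if failed else ""
    return f"{stamp} restore verification {status}{suffix}"

print(run_restore_verification(CHECKS))
# → 2026-01-15T14:30:00 restore verification FAIL (failed: eRx connectivity)
```

The point is not automation for its own sake; it is that "restore succeeded" becomes a dated, named, pass/fail record instead of a verbal assurance.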

7) Vendor reality check: EHR, cloud, MSP—who does what, exactly
This is where small clinics accidentally fall into a trap: they assume the EHR vendor “handles backups,” the MSP assumes the EHR vendor “handles backups,” and nobody is actually testing restores for the clinic’s reality. Your fix is a one-page responsibility matrix.
Responsibility matrix: backup, encryption, access logs, restore testing
Make a table with rows (backup, restore testing, encryption, access logging, MFA, endpoint protection, patching, incident response, downtime procedures) and columns (clinic, MSP, EHR vendor, other vendors like eRx/labs). Put one owner per cell. No shared ownership unless you write what “shared” means.
| Control | Clinic owner | MSP/IT owner | EHR/vendor owner | Evidence stored where? |
|---|---|---|---|---|
| Backup (what data, how often) | Approves scope | Implements/monitors | Platform-level as applicable | Artifacts binder / secure folder |
| Restore testing (frequency, success criteria) | Witness/approves | Runs restores | Supports/escalates | Restore logs + screenshots |
| Access controls (least privilege, shared login prevention) | Policy + onboarding | Technical enforcement | Role setup in app | Access review log |
| Downtime mode (read-only/manual procedures) | Workflow + training | Enable local tools | Documents features | Downtime binder |
BAA sanity check: what it should cover (and what it often doesn’t)
Your Business Associate Agreement (BAA) is not a magic shield. It’s a contract that defines responsibilities. At minimum, it should be clear about: safeguarding ePHI, reporting security incidents, breach notification responsibilities, and what happens when you need data access or exports during an outage or termination.
- Clarity check: Who is responsible for backups of what data? (EHR database? scanned docs? exports?)
- Incident reporting: How quickly will the vendor notify you of a security incident affecting your data?
- Access during emergencies: How do you retrieve records if the vendor is unavailable?
- Data portability: What are your export options, formats, and timelines?
Downtime mode: “read-only,” local cache, or manual—pick your truth
Some systems offer read-only access during outages. Some offer limited offline workflows. Some offer nothing. The point isn’t which vendor is “best.” The point is: your clinic must know what you actually have, and your downtime plan must match that truth.
Curiosity gap: Your EHR vendor won’t save you from your workflow.
Even with the most reliable platform, the risk is often on the clinic side: unclear roles, missing forms, shared logins, “temporary” texting, and no reconciliation process. Vendors can support; they can’t run your front desk.
- Define who backs up what, and who tests restores.
- Align BAA language with operational reality.
- Write downtime workflows that match your actual system capabilities.
Apply in 60 seconds: Ask your vendor/MSP: “When was the last restore test, and what did ‘success’ mean?”
8) Common mistakes: what turns a downtime event into an audit headache
Most downtime failures aren’t dramatic. They’re quiet. They look like “temporary” workarounds that never get unwound, missing paperwork that makes billing messy, and re-entry that creates medication duplication.
Mistake #1: “We’ll figure it out” (no assigned roles, no printed tools)
When nobody owns the first five minutes, everyone tries to help—and chaos multiplies. Assign roles and print tools. Downtime is not the moment for a team brainstorming session.
Mistake #2: Backups exist—but nobody owns restore testing
If restore testing is nobody’s job, it becomes nobody’s evidence. And in a real event, it becomes nobody’s ability. Someone must own the calendar, the success criteria, and the proof.
Mistake #3: Shared logins during downtime (and the access-control fallout)
Under stress, people share passwords “just for today.” The problem is that audit logs then become meaningless. If something goes wrong—wrong chart, wrong prescription, wrong disclosure—you lose traceability when you need it most.
Mistake #4: Re-entering data without reconciliation (med errors + billing disputes)
Re-entry without a checklist creates duplicates: duplicate meds, duplicate problems, duplicate charges. Worse, it can hide missing items: the lab you never ordered, the referral you never sent, the consent you never scanned.
- Clinic-safe habit: reconcile counts (appointments vs downtime encounters vs entered charts).
- Billing-safe habit: reconcile charges to encounters before claims go out.
- Patient-safe habit: reconcile meds/allergies with a second set of eyes when possible.
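The count-reconciliation habit above can be sketched in a few lines of set arithmetic: compare three ID lists, and whatever is left over is your remaining work. The encounter IDs here are invented for illustration.

```python
# Three lists you already have after a downtime event: the day's appointment
# list, the paper downtime encounters in the bin, and the charts re-entered
# into the EHR. Sample IDs are made up.
appointments = {"A101", "A102", "A103", "A104"}
downtime_encounters = {"A101", "A102", "A104"}
entered_in_ehr = {"A101", "A104"}

def reconciliation_report(appts: set, paper: set, entered: set) -> dict:
    """Flag appointments with no paper note, and paper notes not yet re-entered."""
    return {
        "missing_paper_note": sorted(appts - paper),
        "awaiting_reentry": sorted(paper - entered),
        "done": sorted(paper & entered),
    }

print(reconciliation_report(appointments, downtime_encounters, entered_in_ehr))
# → {'missing_paper_note': ['A103'], 'awaiting_reentry': ['A102'], 'done': ['A101', 'A104']}
```

Whether you do this in a script or on a whiteboard, the logic is the same: the downtime event isn't closed until both leftover lists are empty.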
9) Don’t do this: outage behaviors that create breach risk
Downtime can tempt people into “quick fixes” that are fast and also… very reportable. HIPAA doesn’t stop applying because the EHR is down. In fact, emergency mode is exactly when minimum necessary and access control matter most.
Don’t improvise texting PHI to personal phones or unapproved apps
If your clinic has an approved secure messaging tool with proper safeguards, use it. If you don’t, do not invent one mid-outage using personal texting, personal email, or screenshots. The convenience is real. The exposure is also real.
Don’t email spreadsheets of PHI “just for today”
“Just for today” is how permanent risk gets created. Spreadsheets spread: they get forwarded, saved, synced, printed, and forgotten. If you must track something, use controlled paper forms and store them in a controlled location.
Don’t disable security controls to “get through the day”
Turning off endpoint protection, sharing admin credentials, bypassing MFA—these shortcuts can turn a downtime event into a cybersecurity incident. If you need emergency access, use a documented break-glass process with logging and time limits.
If you must go manual: the safest “minimum necessary” rules
- Only print what you need for that encounter (not whole patient rosters).
- Label every page with patient identifiers and date of service.
- Control physical access: keep paper in staff-only areas; shred per policy.
- Track movement: bin system + sign-off for re-entry and scanning.
A small clinic hits an outage at 9:12 a.m. The first instinct is heroic improvisation: someone texts a photo of the med list to a provider, a staff member starts a spreadsheet of patients “so we don’t lose track,” and a front-desk team member writes passwords on a sticky note because “the system is slow.” It works—kind of—for two hours. Then a patient calls: the prescription that was “sent” never arrived. The spreadsheet has two versions.
One paper note is missing. Everyone is tired and slightly defensive, because it feels like they did their best. The clinic didn’t fail because people didn’t care. It failed because the plan didn’t exist where stress lives. The next month, they rebuild: one-page workflow, printed packet, a single bin for downtime charts, and a policy that bans personal texting of PHI during outages. Quiet systems, fewer heroics, safer care.
10) Drills + documentation: how to make the plan provable
A plan becomes “real” when you test it. Not with a week-long simulation. With short, repeatable drills that fit clinic life. The best drill schedule is the one you’ll actually do.
Tabletop drill: 30 minutes, quarterly, with a script
A tabletop drill is a guided conversation: “The EHR is down. What do we do in the first 10 minutes? Who calls whom? How do we document? How do we protect ePHI?” It’s low drama, high learning.
- Minute 0–5: Declare downtime, start incident log, locate downtime packet.
- Minute 5–10: Walk the workflow for one sample patient (front desk → MA/RN → provider → checkout).
- Minute 10–20: Identify failure points (missing forms, unclear roles, unsafe workarounds).
- Minute 20–30: Assign 3 fixes with owners + deadlines; update version date on forms.
Downtime drill: one half-day per year (low drama, high learning)
Once a year, do a planned downtime drill for half a day (or a controlled window). You don’t need to shut everything off. The goal is to practice the steps that are hardest to do under pressure: labeling, paper flow, re-entry, reconciliation, and communication.
Artifacts binder: policies, training logs, restore evidence, incident notes
Make an “artifacts binder” (physical or secure digital) that contains:
- Downtime workflow + role cards (current version).
- Downtime packet master forms (current version).
- Drill logs (date, participants, what you learned, what changed).
- Restore evidence (screenshots/logs, what was restored, how integrity was verified).
- Vendor contact list + escalation steps.
- Incident logs for real events (with remediation notes).
Audit-ready checklist: what to review every 90 days
- Downtime packet location + inventory (is it complete?).
- Staff role coverage (new hires trained?).
- Vendor contacts current (no dead phone numbers).
- Backup monitoring reviewed (alerts acknowledged).
- Any workflow changes reflected in forms and training.
- Quarterly tabletops catch drift early.
- Annual downtime drills reveal the real friction points.
- Artifacts binder makes your plan provable.
Apply in 60 seconds: Put a recurring 30-minute meeting on the calendar titled “Downtime Tabletop + Fix 3 Things.”
Money Block: Mini calculator (Downtime impact estimator)
This is not a perfect financial model. It’s a quick way to put a number next to “we should probably test restores.”
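In the spirit of that mini calculator, a back-of-envelope estimator might look like the sketch below. Every input is an assumption you replace with your clinic's numbers, and the decomposition (lost visits + idle labor + re-entry rework) is one reasonable way to frame it, not a financial model.

```python
def downtime_cost(hours_down: float,
                  visits_per_hour: float,
                  revenue_per_visit: float,
                  staff_count: int,
                  loaded_hourly_wage: float,
                  rework_hours_per_downtime_hour: float = 0.5) -> dict:
    """Rough downtime impact: lost visit revenue, idle staff labor, and the
    re-entry/reconciliation rework that follows. All inputs are estimates."""
    lost_revenue = hours_down * visits_per_hour * revenue_per_visit
    idle_labor = hours_down * staff_count * loaded_hourly_wage
    rework_labor = hours_down * rework_hours_per_downtime_hour * loaded_hourly_wage
    return {
        "lost_revenue": round(lost_revenue, 2),
        "idle_labor": round(idle_labor, 2),
        "reentry_rework": round(rework_labor, 2),
        "total": round(lost_revenue + idle_labor + rework_labor, 2),
    }

# Sample assumptions: a 4-hour outage, 6 visits/hour, $120/visit,
# 8 staff at a $35 loaded hourly rate.
print(downtime_cost(hours_down=4, visits_per_hour=6, revenue_per_visit=120,
                    staff_count=8, loaded_hourly_wage=35))
# → {'lost_revenue': 2880, 'idle_labor': 1120, 'reentry_rework': 70.0, 'total': 4070.0}
```

Even with conservative inputs, a single morning outage usually costs more than an afternoon spent testing a restore.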
Neutral next step: Use the estimate to justify one action: schedule a restore test and document the evidence.
11) When to seek help: the “call now” thresholds
Some downtime events are inconvenient. Others are dangerous. Your plan should clearly define “call now” triggers so staff don’t debate for 45 minutes while risk grows.
Ransomware or suspected intrusion (containment comes first)
If ransomware is suspected, treat it as a security incident and prioritize containment: isolate affected devices, stop propagation, and activate your incident response plan. Avoid improvising. Bring in qualified security/IT professionals and counsel as appropriate.
Patient safety risk (meds, allergies, labs, critical results)
If you cannot reliably verify allergies, medication history, or critical results for high-risk patients, you need an escalation path: limit services temporarily, use verified external sources as policy allows, and document clinical decisions. Patient safety beats schedule purity.
Possible breach: incident response + legal counsel + notification timelines
If ePHI may have been accessed, acquired, used, or disclosed improperly, treat it seriously and document decisions. HIPAA breach analysis and notification requirements can be complex—especially under ransomware scenarios—so this is a “call counsel” moment.
Vendor failure: contract escalation, data export options, contingency EHR steps
If the vendor is the bottleneck, escalate per contract, document timelines, and confirm your options for access to records and exports. This is also the moment to test what you can do without the vendor (paper workflow, local documentation, patient instructions, safe prescribing rules).
- Declare: timestamp + incident log + roles assigned.
- Operate: downtime packet + minimum necessary + no unsafe texting.
- Restore: restore order per criticality + verify integrity + document proof.
- Re-enter: discrete data first (meds/allergies/orders) + scanning/indexing rules.
- Improve: drill notes → form updates → training refresh.
12) FAQ
1) What is a HIPAA contingency plan for a small medical practice?
It’s a set of policies and procedures that help you keep critical operations running while protecting ePHI during emergencies (system outages, disasters, cyber incidents). Practically, it includes downtime workflows, backup and recovery procedures, emergency mode operations safeguards, and a testing/revision cadence with documentation.
2) What should an EHR downtime procedure include for outpatient clinics?
At minimum: a one-page workflow (registration → care → orders → meds → billing), role assignments, a printed downtime packet, a secure way to document and store paper forms, and a reconciliation plan for re-entry. It should also state what staff must not do (personal texting, emailing PHI spreadsheets, shared logins).
3) Does HIPAA require disaster recovery testing or just backups?
HIPAA expects you to have procedures for backups and restoration as part of contingency planning. A backup you never test is a risk, because you can’t prove it’s retrievable or that your recovery process works. Clinics should implement periodic testing and document outcomes.
4) How often should a clinic run downtime drills for HIPAA compliance?
A practical cadence for small clinics is quarterly 30-minute tabletop drills and an annual controlled downtime drill window. The more your workflows change (new staff, new EHR features, new vendors), the more valuable short, frequent tabletop drills become.
5) What’s the difference between RTO and RPO for a small clinic?
RTO is how quickly you need systems restored to avoid unacceptable impact. RPO is how much data you can afford to lose (in time). Set both by function (eRx, charting, scheduling, billing) and document your rationale.
6) Can we use paper charts temporarily during an EHR outage under HIPAA?
Yes—using paper temporarily can be part of a safe downtime procedure. The key is to protect the information (minimum necessary, secure physical storage, controlled access) and to have a plan for re-entry and reconciliation so records aren’t lost or duplicated.
7) What documentation proves we followed our downtime plan?
Keep an incident log with timestamps, a record of what workflows were used, who declared downtime, what systems were affected, re-entry/reconciliation checklists, and evidence of restore tests and training/drills. “We did it” is not as persuasive as “here’s the log.”
8) Does our EHR vendor handle backups and restores automatically?
Sometimes vendors provide platform-level resilience—but clinics still need clarity on scope and responsibility. Confirm exactly what is backed up, how restores work, what you must do on your side (devices, access control, local documents), and how you prove recoverability. Put it in a responsibility matrix.
9) When does an EHR outage become a reportable HIPAA breach?
An outage alone isn’t automatically a breach. The breach question depends on whether ePHI was impermissibly accessed, acquired, used, or disclosed (for example, via ransomware, unauthorized access, or unsafe workaround behaviors). If you suspect a security incident or compromise, involve qualified professionals and document your analysis.
10) What’s the fastest “good enough” downtime packet to start with?
Start with: encounter cover sheet, clinical note template, meds/allergies verification form, order slips, charge capture sheet, and a patient instructions sheet—plus an incident log. Make it a controlled copy with version dates and a known physical location.
13) Next step: one concrete action you can do today
Remember that curiosity loop from the beginning—“What do we do when the EHR goes down on the busiest morning?” The honest answer is: you don’t rise to the occasion. You fall to the level of your preparation. So your best next step is not “write the perfect policy.” It’s to create one page your staff can follow.
Print a one-page downtime workflow + role cards and store it at every workstation (then schedule a 30-minute tabletop drill within 14 days)
- Draft the one-page workflow (registration → visit → orders → meds → billing) using the chain above.
- Create 5 role cards (front desk, MA/RN, provider, biller, practice manager). Keep each under half a page.
- Print and place controlled copies in known locations (label them).
- Run a 30-minute tabletop drill and write down 3 fixes. Update the version date when you apply them.
Money Block: Decision card (When A vs B)
Option A — paper-first downtime kit:
- Best when: you need something staff can execute immediately.
- Time cost: a few hours to draft/print/train.
- Trade-off: re-entry effort is real; needs reconciliation discipline.

Option B — technology/DR hardening:
- Best when: you have stable IT support and budget for hardening and testing.
- Time cost: ongoing configuration, monitoring, and restore testing.
- Trade-off: still needs a downtime workflow—tech doesn’t replace humans.
Neutral next step: Choose Option A today (paper-first) even if you pursue Option B later.
- Print tools where downtime actually happens.
- Assign roles so decisions aren’t made by panic.
- Document drills and restore tests so your plan is provable.
Apply in 60 seconds: Decide where the downtime binder lives—then physically label that spot today.
Last reviewed: 2026-01.