RCM Automation for Telehealth: How Foresight Does It
There’s no prize for picking a side. There is a prize for getting paid on the first try.
Telehealth unlocked access. It also exploded the number of billing edge cases: home vs. clinic, audio‑only vs. audio‑video, multi‑state licensure, plan quirks, and denial codes that read like alphabet soup. If you’ve tried to automate revenue cycle management (RCM) with only rigid edits or only AI, you’ve met the limits: brittle rules that break on real‑world variability, or “smart” systems that can’t show their work when an auditor asks why.
The highest‑performing teams do something simpler and smarter: they pair deterministic rules for the black‑and‑white, machine intelligence for the gray, and human review where the stakes are high. That’s how we designed Foresight RCM. Here’s the playbook.
The False Binary That’s Costing You Revenue
Every week we hear a version of: “Should we use rules or AI for our revenue cycle?” Wrong question. The right one is: “Which parts of our flow are objective and repeatable (rules), which parts are messy and textual (AI), and where do we want a human to be the circuit breaker?”
A quick example: A 32‑minute video visit with an established patient at home. The encounter metadata alone can set POS 10 and apply a payer‑appropriate telehealth modifier (many commercial plans still use 95; audio‑only (where allowed) uses 93; some institutional contexts still require GT/GQ). Those are rules. But extracting the ICD‑10 from a narrative note (“increased levodopa; pill‑rolling tremor”) or proposing the denial fix for an oddball remark code? That’s where AI earns its keep—with guardrails.
When Rules Win: Deterministic Logic You Can Trust
Use rules aggressively anywhere the inputs are structured and the policy is unambiguous.
Telehealth POS & Modifiers (what’s actually stable)
- POS: Professional claims in 2025 use 02 (telehealth, patient not at home) or 10 (telehealth, patient at home). We encode this directly from encounter location and program type.
- Modifier 95: Payer‑specific. Many commercial plans want 95 for audio‑video telehealth. For Medicare professional claims, POS 02/10 is generally sufficient and 95 is not broadly required; however, some institutional outpatient therapy scenarios still call for 95. Our rules apply it only when the payer + setting combination requires it.
- Modifier 93 (audio‑only): When audio‑only is permitted (e.g., many behavioral health situations in 2025), we add 93 and force human review if the documentation hints at ambiguous modality.
- FQ (RHC/FQHC audio‑only) and edge modifiers appear in specialty programs; our rule packs include these as payer‑ and site‑of‑service‑aware exceptions (the sketch below shows the core pattern).
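Here's a minimal sketch of that core pattern in Python, assuming a simple encounter shape; the payer table, names, and enum values are illustrative, not our production rule pack:

```python
from dataclasses import dataclass
from enum import Enum

class Modality(Enum):
    AUDIO_VIDEO = "audio_video"
    AUDIO_ONLY = "audio_only"

# Hypothetical lookup; real rule packs key this off payer + setting.
PAYERS_REQUIRING_95 = {"ExampleCommercialPlan"}

@dataclass
class Encounter:
    patient_at_home: bool
    modality: Modality
    payer: str

def pos_and_modifiers(enc: Encounter) -> tuple[str, list[str]]:
    """Deterministic POS and telehealth-modifier selection from encounter data."""
    pos = "10" if enc.patient_at_home else "02"   # 2025 professional claims
    mods: list[str] = []
    if enc.modality is Modality.AUDIO_ONLY:
        mods.append("93")                         # audio-only, where permitted
    elif enc.payer in PAYERS_REQUIRING_95:
        mods.append("95")                         # AV telehealth, payer-specific
    return pos, mods

# pos_and_modifiers(Encounter(True, Modality.AUDIO_VIDEO, "ExampleCommercialPlan"))
# -> ("10", ["95"])
```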
Time‑Based E/M (established patients; time selected)
We enforce the correct ranges (not “≥” shortcuts):
- 99212 → 10–19 mins
- 99213 → 20–29 mins
- 99214 → 30–39 mins
- 99215 → 40–54 mins (use G2212/99417 per payer policy for prolonged time)
Rules also log what determined the code (time vs. MDM) and why (documented minutes, captured MDM).
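A sketch of that enforcement, using the ranges above; the function shape and return dict are illustrative:

```python
# Established-patient, time-based E/M selection. Exact ranges, not ">=" shortcuts.
EM_TIME_RANGES = {
    "99212": range(10, 20),   # 10-19 mins
    "99213": range(20, 30),   # 20-29 mins
    "99214": range(30, 40),   # 30-39 mins
    "99215": range(40, 55),   # 40-54 mins
}

def em_code_from_time(minutes: int) -> dict | None:
    """Returns the code plus the 'why', so the decision trail is auditable."""
    for code, window in EM_TIME_RANGES.items():
        if minutes in window:
            return {"code": code, "basis": "time", "documented_minutes": minutes}
    return None   # under 10 mins, or prolonged time (G2212/99417 per payer policy)
```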
Interstate Licensure & Credentialing
Boolean, a simple yes‑or‑no, but easy to get wrong. We validate clinician license state ↔ patient location ↔ payer rules, and we block submission if any link in that chain fails.
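A minimal sketch of the chain check, with placeholder inputs standing in for real credentialing data:

```python
def licensure_chain_ok(license_states: set[str],
                       patient_state: str,
                       payer_allowed_states: set[str]) -> bool:
    """Clinician license state <-> patient location <-> payer rules.
    If any link fails, the claim is blocked from submission."""
    return (patient_state in license_states
            and patient_state in payer_allowed_states)
```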
Timely Filing & Required Fields
Date math, ID format checks, NPI/TaxID validation, and required supervising/ordering provider presence. Deterministic, explainable, auditable.
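One concrete example: the NPI check digit is a Luhn checksum computed over the NPI prefixed with 80840, so validation is a few deterministic lines (function names are ours):

```python
def luhn_ok(digits: str) -> bool:
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:        # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def npi_ok(npi: str) -> bool:
    """NPIs are 10 digits; the check digit is Luhn over '80840' + NPI."""
    return npi.isdigit() and len(npi) == 10 and luhn_ok("80840" + npi)

# npi_ok("1234567893") -> True (the sample NPI from the CMS check-digit spec)
```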
Bottom line: Rules do the boring, essential things perfectly, 100% of the time.
When AI Shines: Handling Ambiguity at Scale (with Guardrails)
Modern models are excellent at reading messy notes and surfacing the billing‑relevant facts, if you constrain them.
- ICD‑10 from narrative: Extract diagnoses like Parkinson’s disease without dyskinesia even when the exact code isn’t written, and surface the evidence snippet.
- CPT/E/M from narrative: Propose an E/M level from the note, reconcile with captured time, and explain the rationale.
- Denial reasoning: Read the payer memo and suggest plausible causes and next steps—paired with the lines that support the suggestion.
- ePA clinical summaries: Pull prior therapies, contraindications, and stability statements from longitudinal notes to pre‑fill prior auth.
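Here's the shape of that constraint in practice. `call_llm` is a placeholder for whatever model client you use, and the schema hint and regex are illustrative; the point is that the model drafts and deterministic checks gate what survives:

```python
import json
import re

SCHEMA_HINT = ('Respond with JSON only: '
               '{"code": "<ICD-10>", "evidence": "<verbatim snippet from the note>"}')

def extract_primary_dx(note: str, call_llm) -> dict | None:
    """The model drafts; deterministic checks gate what survives."""
    raw = call_llm("Extract the primary ICD-10 diagnosis code.\n"
                   + SCHEMA_HINT + "\nNote:\n" + note)
    out = json.loads(raw)
    # Rough ICD-10-CM format gate (real systems validate against the full code set).
    if not re.fullmatch(r"[A-Z]\d[0-9A-Z](\.[0-9A-Z]{1,4})?", str(out.get("code", ""))):
        return None
    if out.get("evidence", "") not in note:
        return None   # evidence must be a verbatim snippet, or it routes to review
    return out        # e.g. {"code": "G20.A1", "evidence": "Pill-rolling tremor at rest."}
```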
The Guardrails That Make AI Safe in RCM
- Provenance labels on every field: Rule, LLM, or Playbook, plus a confidence score.
- Confidence‑based routing: High‑confidence fields flow through; low‑confidence values route to a workbench with citations.
- Deterministic validation: Code‑set checks, payer policy gates, type/format checks, and specialty‑specific edit packs.
- Few‑shot prompts & LLM‑as‑a‑judge: First model drafts; a second model critiques for consistency and policy compliance on high‑impact fields.
- Keyword tripwires: Words like “suicidal ideation,” “suspected abuse,” or “end‑of‑life” force human review regardless of confidence.
- No auto‑upcoding, ever: Any AI suggestion that raises level or adds services is hard‑stopped for approval.
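Two of those guardrails, the keyword tripwires and the no‑auto‑upcoding stop, fit in a few lines; the phrase list and names are illustrative:

```python
TRIPWIRES = ("suicidal ideation", "suspected abuse", "end-of-life")

def forces_human_review(note: str, suggested_em: str, baseline_em: str) -> bool:
    """Tripwires and level increases always route to a human."""
    if any(phrase in note.lower() for phrase in TRIPWIRES):
        return True                      # regardless of confidence
    if suggested_em > baseline_em:       # string compare works for 9921x codes
        return True                      # no auto-upcoding, ever
    return False
```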
The Hybrid Architecture: Engineering Trust
We’ve borrowed the best ideas from safety‑critical systems and applied them to RCM.
Event‑driven flow
1) Draft (ingest encounter + payer rules) → 2) Validate (type checks, policy guards) → 3) Enrich (AI extraction, summaries) → 4) Adjudicate (rule/AI conflicts resolved; LLM‑as‑judge on risky fields) → 5) Route (auto‑submit if clean; otherwise send to workbench) → 6) Submit & Listen (watch 277CA/835 responses) → 7) Learn (feed denials back into playbooks and prompts).
Confidence model
We keep field‑level confidence (e.g., icd10_primary=0.91, pos=0.99) and an aggregate score used only for routing. Your program sets the threshold (e.g., 0.88) and the exceptions (e.g., “never auto‑submit if new‑to‑plan or audio‑only”).
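In code, the routing might look like this, using the minimum field confidence as the aggregate (one reasonable choice among several; the flag names are examples):

```python
def route(field_conf: dict[str, float], flags: set[str],
          threshold: float = 0.88) -> str:
    """Auto-submit only when every field clears the bar and no exception fires."""
    never_auto = {"new_to_plan", "audio_only"}   # program-level exceptions
    if flags & never_auto:
        return "workbench"
    if min(field_conf.values()) >= threshold:    # assumes at least one field
        return "auto_submit"
    return "workbench"   # low-confidence fields go to review, with citations

# route({"icd10_primary": 0.91, "pos": 0.99}, flags=set())          -> "auto_submit"
# route({"icd10_primary": 0.91, "pos": 0.99}, flags={"audio_only"}) -> "workbench"
```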
Full auditability
Every automated field stores the inputs, the method (Rule/LLM/Playbook), and the decision trail. If a CFO or auditor asks why, you can literally click into how.
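One way to persist that trail per field; the dataclass shape is illustrative, not our storage schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class FieldAudit:
    name: str            # e.g. "pos"
    value: str           # e.g. "10"
    method: str          # "rule" | "llm" | "playbook"
    confidence: float
    inputs: dict         # the exact inputs the decision saw
    evidence: str = ""   # citation/snippet for LLM-derived values
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```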
End‑to‑End Example (Telehealth Neurology)
Encounter: 32‑minute video visit, patient at home (established patient).
Clinician note (excerpt): “Increased levodopa. Pill‑rolling tremor at rest. Bradykinesia noted. No dyskinesias. Stable mood.”
Rules (certainty):
- Modality + location ⇒ POS 10; apply modifier 95 only if the payer requires it for AV telehealth; strip it if not.
- Time captured ⇒ enforce E/M range and consistency with documentation.
- Licensure/credentialing check ⇒ provider is valid in patient state and plan.
AI (ambiguity):
- Extracts G20.A1 as the primary diagnosis with a source snippet; flags depression as a secondary diagnosis only if explicitly documented.
- Suggests 99214 (30–39 min) based on time; if note is inconsistent, routes to workbench with side‑by‑side evidence.
Routing & submission:
- If all field confidences ≥ 0.88 and no tripwires fire, auto‑submit now. Otherwise, reviewer gets a compact diff view with provenance and citations.
Result: A clean, explainable claim—fast.
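Internally, the adjudicated claim from this walkthrough might look like this, with provenance on every field (values from the example above; the structure is illustrative):

```python
claim = {
    "pos":           {"value": "10",     "method": "rule", "confidence": 1.00},
    "modifiers":     {"value": ["95"],   "method": "rule", "confidence": 1.00,
                      "why": "payer requires 95 for AV telehealth"},
    "em_code":       {"value": "99214",  "method": "rule", "confidence": 0.97,
                      "inputs": {"documented_minutes": 32}},
    "icd10_primary": {"value": "G20.A1", "method": "llm",  "confidence": 0.91,
                      "evidence": "Pill-rolling tremor at rest. Bradykinesia "
                                  "noted. No dyskinesias."},
}
```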
Denials: Use Playbooks First, AI for the Weird Stuff
The fastest ROI comes from auto‑resolving the common denials and giving humans superpowers on the rest.
Codified playbooks we ship:
- CARC 5 / 58 (POS or setting mismatch)
  Symptoms: POS 02/10 mix‑ups, or telehealth modifier missing where required.
  Auto‑fix: Recompute POS from encounter data; apply/strip modifiers based on payer rules; attach a short modality note when payers need proof.
- CARC 96 (Non‑covered charge[s])
  Symptoms: True benefit exclusion, plan limit reached, or doc gap.
  Auto‑assist: Verify coverage; if documentation’s thin, generate a checklist and block resubmission until complete.
- CARC 197 (Precert/authorization absent)
  Auto‑fix: If ePA is approved, fetch the auth number/dates from the prior‑auth record and resubmit; else route to the ePA queue with a pre‑built clinical summary.
For non‑standard denials (e.g., diagnosis inconsistent with procedure), AI proposes likely causes + next steps with linked snippets, but human approval is required before resubmission.
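A dispatch‑table sketch of how codified playbooks hang together; the handlers are stubs standing in for the fixes above, and the claim keys are examples:

```python
def fix_pos_or_setting(claim: dict) -> dict:
    claim["pos"] = "10" if claim["patient_at_home"] else "02"  # recompute from encounter
    return claim

def verify_coverage(claim: dict) -> dict:
    claim["doc_checklist_required"] = True   # block resubmission until complete
    return claim

def attach_auth(claim: dict) -> dict:
    claim["auth_number"] = claim.get("epa_record", {}).get("auth_number")
    return claim

PLAYBOOKS = {
    "5": fix_pos_or_setting, "58": fix_pos_or_setting,   # POS/setting mismatch
    "96": verify_coverage,                               # non-covered charges
    "197": attach_auth,                                  # missing precert/auth
}

def handle_denial(carc: str, claim: dict):
    handler = PLAYBOOKS.get(carc)
    if handler is None:
        return "ai_assist_with_human_approval"   # the weird stuff
    return handler(claim)
```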
Safety, Compliance & Operations (Designed‑in, not bolted‑on)
- Transparency: Every field shows provenance and confidence.
- Guardrails: No auto‑upcoding; human approval required for level increases or added services.
- Operator control: Adjustable thresholds by program; specialty/payer edit packs; user‑level permissions (SSO/RBAC); full audit log.
- Telehealth‑aware: POS 10/02 rules, modality‑aware modifiers (95/93/FQ), and licensure validation for multi‑state groups.
- Submit & Listen: We track submitted → 277CA → accepted/paid/denied and feed outcomes back into rules and prompts so the system actually gets smarter.
What Results Look Like (and how we measure them)
Teams running this hybrid pattern in specialty telehealth see:
- ~92% first‑pass acceptance (clean on first submission)
- ~85% of claims auto‑handled end‑to‑end
- Sub‑day time‑to‑submit from date of service
- Lower cost per claim from fewer reworks and shorter denial loops
We track the boring KPIs that make finance happy—First‑Pass, Denial Rate, Days to First Touch, % Auto‑Handled—and we show exactly which edits moved the needle (e.g., “POS mismatch fix,” “modifier 95 applied when required”).
Implementation Blueprint (4 Weeks to Lift‑Off)
- Connect your EHR/PM feed (FHIR/HL7/CSV) + eligibility and plan data.
- Map programs (specialty, payer mix, state footprint, modality policy).
- Turn on rule packs (telehealth core + specialty add‑ons) and set confidence thresholds.
- Pilot on 1–2 programs; reviewers get the workbench, operators get dashboards.
- Tune thresholds, denial playbooks, and prompts using real 277CA/835 feedback.
- Expand to remaining lines; add ePA if you want prior auth and claim automation under one roof.
FAQ (the questions real billers ask)
Do you auto‑submit everything the model suggests? No. High‑confidence, guardrail‑clean claims auto‑flow; anything ambiguous routes to a compact review with citations.
How do you prevent upcoding? We hard‑stop any level increase and require human sign‑off. We also record the evidence used for every level decision (time and/or MDM).
Can I see why a claim changed? Yes. Every field carries provenance (Rule/LLM/Playbook), inputs, and decision trail so you can drill into “why” in seconds.
Why This Approach Builds Trust
Leaning only on rules is brittle. Leaning only on AI is opaque. A hybrid approach gives you:
- Predictability (rules) + coverage of messy reality (AI) + oversight (humans)
- Explainability for auditors and CFOs
- Operator control via thresholds, playbooks, and specialty packs
- Continuous improvement from adjudication feedback loops
Ready to See It Live?
If you’re running telehealth in neurology, addiction medicine, behavioral health—or any specialty with complex narratives—we’d love to show you how this works end‑to‑end.
Contact: jj@have-foresight.com