|
The Compliance Signal
|
Issue #002
|
AI regulation in healthcare — what moved, what it means, what to do about it.
|
|
This Week
|
01
FDA is building the post-market surveillance machine for your AI
|
|
02
EU foundation model rules just made your LLM vendor a compliance problem
|
|
03
CISA says your Intune config is a breach waiting to happen
|
|
04
GuardDog got caught reading charts it had no business reading
|
|
05
TEFCA just plugged Social Security into the health data grid
|
|
|
01
FDA is quietly building the post-market surveillance machine for your AI. The public comment period is your only chance to shape it.
FDA opened public comment on measuring AI-enabled medical device performance in real-world settings. The language is polite and exploratory. The intent is not. They're asking how to track your AI's performance after deployment: drift detection, outcome measurement, reporting frameworks. This is regulatory roadmapping disguised as a request for feedback.
Right now, your AI/ML-enabled SaMD has minimal post-market obligations beyond adverse event reporting. That's about to change. FDA is telegraphing mandatory continuous monitoring, performance benchmarks, and potentially algorithmic audits. They don't open public comment periods for academic curiosity — they open them when the guidance is already half-written.
|
Read between the lines
FDA rarely asks for input unless they're already drafting guidance. The structure of the questions tells you what the requirements will look like: continuous real-world performance monitoring, drift detection protocols, and standardized reporting. If you don't comment, the requirements get written without your operational constraints on the record.
|
Comments close 90 days from publication. That window is your only opportunity to get your resource constraints, technical limitations, and operational realities into the regulatory record before these become enforceable requirements.
|
What to do this week
Assign someone to draft comments. Detail your current post-market monitoring capabilities, what's technically feasible, and what isn't. Be specific about resource constraints — FDA needs to hear from mid-market companies, not just the Medtronics of the world. Position your company as a collaborative partner now so you're not a compliance laggard later.
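If "what's technically feasible" turns into a blank page internally, a rough sketch can anchor the conversation about what real-world performance monitoring would actually take. The sketch below is ours, not FDA's: the metric, column names, and thresholds are placeholders for whatever your device actually tracks.

```python
# Illustrative only: a rolling real-world performance check of the kind FDA's
# questions point toward. Column names, the metric, and thresholds are placeholders.
import pandas as pd

BASELINE_SENSITIVITY = 0.92   # from premarket validation (placeholder)
ALERT_MARGIN = 0.05           # how far below baseline before you investigate (placeholder)

def monthly_drift_report(events: pd.DataFrame) -> pd.DataFrame:
    """events: one row per scored case, with a datetime 'scored_at', a bool
    'model_flagged' (model called it positive), and a bool 'confirmed_positive'
    (ground truth from downstream clinical follow-up)."""
    events = events.assign(month=events["scored_at"].dt.to_period("M"))
    # Sensitivity per month = share of confirmed positives the model actually flagged
    confirmed = events[events["confirmed_positive"]]
    report = confirmed.groupby("month")["model_flagged"].agg(sensitivity="mean", positives="size")
    report["drift_alert"] = report["sensitivity"] < (BASELINE_SENSITIVITY - ALERT_MARGIN)
    return report

# Usage: print(monthly_drift_report(pd.read_parquet("inference_outcomes.parquet")))
```

The point isn't this particular script; it's being able to describe, concretely, what continuous monitoring looks like at your scale when you draft those comments.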
|
|
|
|
02
EU foundation model guidelines just made your LLM vendor a compliance dependency you didn't plan for.
EU AI Act | Action Required
The European Commission published draft guidelines for General Purpose AI models under the EU AI Act. If your healthtech company uses ChatGPT, Claude, Gemini, or any foundation model for clinical decision support, patient communication, or operational workflows — and you touch EU patient data or serve EU markets — this applies to you. Not your vendor. You.
The guidelines clarify which GPAI providers face direct obligations (the big ones) and what obligations cascade downstream to companies deploying these models in healthcare. The Code of Practice creates risk management, transparency, and documentation requirements that flow down the AI supply chain. Your vendor's compliance posture is now part of your compliance posture.
|
The supply chain problem
Most healthtech companies treat their LLM provider like a cloud vendor — sign the BAA, check the box, move on. The EU AI Act treats it like a component in a regulated product. If you can't document the risk management, testing, and transparency of the foundation model you're building on, you can't demonstrate compliance for the product you're selling. Your vendor's opacity becomes your regulatory exposure.
|
|
What to do this week
Audit your foundation model usage across every product and internal workflow. Document: which models, for what purposes, what data flows through them, and whether any of those workflows touch EU patient data or EU market delivery. Start building the risk management documentation the AI Act will require. The August 2026 high-risk deadline is five months away.
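One way to keep that audit from sprawling is a single inventory record every team fills in the same way. The schema below is a suggestion, not an AI Act requirement; the field names and file name are ours.

```python
# A sketch of a per-use-case inventory record for foundation model usage.
# Fields are suggestions for your own documentation, not AI Act-mandated items.
from dataclasses import dataclass, field, asdict
import csv

@dataclass
class ModelUse:
    product: str                       # product or internal workflow
    model: str                         # which foundation model and version
    provider: str                      # e.g. OpenAI, Anthropic, Google
    purpose: str                       # clinical decision support, patient comms, ops, ...
    data_categories: list[str] = field(default_factory=list)  # what actually flows through it
    eu_patient_data: bool = False
    eu_market: bool = False
    vendor_docs_on_file: bool = False  # model card, testing and risk docs from the provider

    def in_scope(self) -> bool:
        return self.eu_patient_data or self.eu_market

def export_inventory(uses: list[ModelUse], path: str = "gpai_inventory.csv") -> None:
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(asdict(uses[0]).keys()))
        writer.writeheader()
        writer.writerows(asdict(u) for u in uses)
```

Any row where in_scope() is true and vendor_docs_on_file is false is your gap list.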
|
|
|
|
03
CISA says your Intune config is a breach waiting to happen. After Stryker, they're probably right.
CISA | HIPAA | Action Required
CISA issued hardening guidance for Microsoft Intune after a data-wiping attack hit Stryker. The attack demonstrated that endpoint management platforms — the tools you use to secure your devices — can become the attack vector that destroys your data at scale.
Intune is everywhere in healthcare. Clinical workstations, mobile devices, IoT medical devices — if you're managing endpoints in a health system, there's a good chance Intune is the backbone. Configurations that were considered reasonable last month now have a CISA advisory saying otherwise. And here's the compliance angle: if your Intune config doesn't meet CISA's updated guidance, and you get hit, OCR will ask why you didn't follow publicly available hardening recommendations.
|
The HIPAA connection
HIPAA's Security Rule requires implementation of security measures "sufficient to reduce risks and vulnerabilities to a reasonable and appropriate level." When CISA publishes specific hardening guidance for a tool you use, that guidance defines what "reasonable and appropriate" means. Ignoring it after it's public isn't a risk decision; it's an argument you'll lose when OCR asks for your documentation.
|
|
What to do this week
Pull CISA's Intune hardening guidance and compare it against your current configuration, line by line. If you outsource endpoint management, send the advisory to your vendor with a 48-hour deadline for a gap assessment. Update your vendor management documentation to require compliance with CISA guidance for any endpoint management tool touching systems with PHI access.
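If pulling your current configuration means clicking through the Intune portal, Microsoft Graph can give you an export to mark up against the guidance instead. A minimal sketch, assuming you already have an app registration and a token with DeviceManagementConfiguration.Read.All; it lists classic device configuration profiles, and other policy types live under separate Graph endpoints.

```python
# Sketch: list Intune device configuration profiles via Microsoft Graph so each
# profile can be mapped to the relevant items in CISA's hardening guidance.
# Assumes a valid access token with DeviceManagementConfiguration.Read.All.
import os
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def list_device_configurations(access_token: str) -> list[dict]:
    headers = {"Authorization": f"Bearer {access_token}"}
    url = f"{GRAPH}/deviceManagement/deviceConfigurations"
    profiles = []
    while url:
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()
        body = resp.json()
        profiles.extend(body.get("value", []))
        url = body.get("@odata.nextLink")  # follow paging, if present
    return profiles

if __name__ == "__main__":
    for p in list_device_configurations(os.environ["GRAPH_TOKEN"]):
        print(p.get("displayName"), "| last modified:", p.get("lastModifiedDateTime"))
```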
|
|
|
|
04
GuardDog Telehealth got caught reading patient charts it had no reason to open. OCR noticed.
GuardDog Telehealth admitted to improperly accessing patient medical records. This wasn't a ransomware attack. This wasn't a misconfigured S3 bucket. This was employees accessing PHI they had no treatment, payment, or operations reason to view. The most boring kind of HIPAA violation — and the kind OCR is now specifically hunting for in telehealth companies.
OCR is shifting enforcement focus from external breaches to internal access control failures. The question isn't just "was the data stolen?" anymore — it's "should that person have been able to see it in the first place?" For telehealth platforms, where clinicians and staff often have broad system access to function efficiently, that question has uncomfortable answers.
|
What to do this week
Pull your telehealth platform's access logs for the last 90 days. Look for access patterns that can't be justified by treatment, payment, or operations: staff viewing records for patients they never treated, access outside of scheduled appointment windows, bulk record views. If your system can't generate that report, that's the real finding — you need audit logging that actually works before OCR asks for it.
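If the logs export to anything tabular, the first pass is simple enough to script. A rough sketch with made-up column names, file names, and thresholds; map them to whatever your platform actually emits.

```python
# Rough first pass over access logs: flag record views with no appointment for that
# user/patient pair within a window, and users with unusually broad daily access.
# Column names, file names, and thresholds are illustrative.
import pandas as pd

WINDOW = pd.Timedelta(days=7)   # "near a scheduled appointment" window (placeholder)
BULK_THRESHOLD = 50             # distinct patients per user per day (placeholder)

access = pd.read_csv("access_log.csv", parse_dates=["accessed_at"])      # user_id, patient_id, accessed_at
appts = pd.read_csv("appointments.csv", parse_dates=["appointment_at"])  # user_id, patient_id, appointment_at

# Views with no appointment for that user/patient pair within +/- WINDOW
merged = access.merge(appts, on=["user_id", "patient_id"], how="left")
merged["near_appt"] = (merged["accessed_at"] - merged["appointment_at"]).abs() <= WINDOW
unjustified = (merged.groupby(["user_id", "patient_id", "accessed_at"])["near_appt"]
                     .any().reset_index().query("not near_appt"))

# Users viewing an unusually large number of distinct patients in a single day
daily = access.assign(day=access["accessed_at"].dt.date)
bulk = (daily.groupby(["user_id", "day"])["patient_id"].nunique()
             .reset_index(name="patients_viewed")
             .query("patients_viewed > @BULK_THRESHOLD"))

print(unjustified.head())
print(bulk.head())
```

Neither list is a finding by itself; it's the worklist for the access reviews you should already be documenting.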
|
|
|
|
05
TEFCA just plugged Social Security into the health data grid. Your interop obligations just expanded.
ONC announced TEFCA implementation for government benefits determination with the Social Security Administration. Translation: health data exchange pathways now extend beyond traditional healthcare into federal disability claims processing. If your platform generates clinical data that gets used in disability determinations, you just inherited new interoperability obligations you didn't have last month.
This is the first TEFCA use case that reaches beyond exchange between healthcare organizations into another federal agency's benefits process. It won't be the last. Every new TEFCA implementation creates compliance obligations for data sharing, format standardization, and access controls that didn't exist before. The government is building health data infrastructure, and they're not asking permission — they're building it and expecting you to connect.
|
The bigger picture
TEFCA started as a healthcare interoperability framework. It's becoming a government health data access framework. Each new use case — SSA today, VA tomorrow, CMS next quarter — expands who can request data from systems connected to the network. If your platform is on a TEFCA-connected network (or will be), your data sharing obligations grow every time ONC announces a new use case.
|
|
What to do this week
Determine whether your platform generates clinical data relevant to disability determinations — if you do anything in musculoskeletal, pain management, behavioral health, or chronic conditions, the answer is probably yes. Review your FHIR implementation readiness. If you're on a TEFCA-connected network, check with your QHIN about what the SSA use case means for your data sharing obligations.
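On the FHIR readiness piece, the fastest sanity check is your server's CapabilityStatement. The sketch below uses a placeholder base URL and a placeholder resource list; swap in whatever your QHIN or the SSA use case's implementation guide actually requires.

```python
# Quick FHIR readiness check: fetch the server's CapabilityStatement and compare the
# resource types it claims to support against the ones you expect to need.
# The base URL and resource list are placeholders, not SSA requirements.
import requests

FHIR_BASE = "https://fhir.example.org/r4"   # placeholder
RESOURCES_TO_CHECK = ["Patient", "Condition", "Observation", "DocumentReference", "Procedure"]

def supported_resources(base_url: str) -> set[str]:
    resp = requests.get(f"{base_url}/metadata",
                        headers={"Accept": "application/fhir+json"}, timeout=30)
    resp.raise_for_status()
    capability = resp.json()
    supported = set()
    for rest in capability.get("rest", []):
        for resource in rest.get("resource", []):
            supported.add(resource.get("type"))
    return supported

if __name__ == "__main__":
    have = supported_resources(FHIR_BASE)
    for r in RESOURCES_TO_CHECK:
        print(f"{r}: {'supported' if r in have else 'MISSING from CapabilityStatement'}")
```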
|
|
Your three-item punch list this week
|
Submit comments on FDA's AI post-market surveillance framework. 90-day window. Shape the requirements before they're written without your input. Assign an owner by Friday.
|
|
Review your Intune configuration against CISA's hardening guidance. After Stryker, these recommendations define "reasonable and appropriate" under HIPAA. Don't be the company that ignored a public advisory.
|
|
Audit your foundation model usage and EU data flows. The EU AI Act high-risk deadline is August 2026. Five months to document every LLM dependency, data flow, and risk management gap. Start now.
|
|
|
The Compliance Signal — compliancesignal.io
AI regulation in healthcare — tracked, analyzed, and translated into action.
Questions? Reply to this email or contact jay@compliancesignal.io
You received this because you subscribed at compliancesignal.io. Unsubscribe.
|
|