The Compliance Signal

March 20, 2026

Issue #002

AI regulation in healthcare — what moved, what it means, what to do about it.

This Week

01   FDA just told half its digital health guidances to get lost. Here's what that actually means.
02   FDA is building the post-market surveillance machine for your AI
03   EU foundation model rules just made your LLM vendor a compliance problem
04   CISA says your Intune config is a breach waiting to happen
05   GuardDog got caught reading charts it had no business reading

01

FDA just told half its digital health guidances to get lost. Here's what that actually means for your compliance program.

FDA Major Shift Action Required

On January 6, Commissioner Makary published revised final guidances on Clinical Decision Support software and General Wellness products — and announced at CES that FDA plans to eliminate at least half of its existing digital health guidances and roll out an entirely new risk-based AI framework. His words: "smarter, more forward-thinking," moving in a "deregulatory direction."

The concrete changes are significant. The revised CDS guidance introduces enforcement discretion for software that provides a single recommended treatment option — something the previous guidance explicitly did not allow. If your CDS tool outputs one clinically appropriate recommendation and meets the other Non-Device CDS criteria (transparent logic, clinician-reviewable), FDA now intends to look the other way. The previous requirement to engineer around single-output constraints to avoid device classification is gone.

On the wearables side, the revised General Wellness guidance expands enforcement discretion to cover non-invasive consumer devices reporting blood pressure, oxygen saturation, and glucose-related signals — as long as they're intended solely for general wellness and paired with non-diagnostic notifications.

This sounds like deregulation. It isn't — not exactly.

Enforcement discretion is not exemption. FDA is choosing not to enforce — for now, under these conditions, for these categories. CDS software that predicts time-critical events or analyzes medical images still requires oversight regardless of single-output status. And the promised "new AI framework" doesn't exist yet. You're in a gap period: old rules loosened, new rules unwritten. Companies that treat this as a green light to stop building compliant systems are making a bet that the new framework will be friendlier than the old one. That's not a compliance strategy — it's a guess.

What to do this week

Map every product feature against the revised CDS criteria. If you have tools that were engineered to produce multiple outputs solely to avoid device classification, evaluate whether the single-recommendation enforcement discretion changes your architecture decisions. But do not dismantle existing compliance infrastructure. Document what changed, what's now discretionary, and what still requires oversight. The new AI framework is coming — you want to be positioned to comply with it, not scrambling to rebuild what you tore down.

 

02

FDA is quietly building the post-market surveillance machine for your AI. The public comment period is your only chance to shape it.

FDA Action Required

FDA opened public comment on measuring AI-enabled medical device performance in real-world settings. The language is polite and exploratory. The scope is not. They're asking how to track your AI's performance after deployment — performance drift detection, outcome measurement, reporting frameworks. When the FDA asks these questions publicly, they're building the evidentiary record for future rulemaking.

Right now, your AI/ML-enabled SaMD has minimal post-market obligations beyond adverse event reporting. That's the current floor, not the ceiling. This comment period signals the FDA is building toward more structured post-market oversight — likely including continuous monitoring requirements, performance benchmarks, and possibly algorithmic audits. FDA doesn't open public comment periods casually. The specificity of these questions — particular metrics, triggers, infrastructure requirements — suggests the agency already has a framework in mind and is stress-testing it.

Read between the lines

Those questions sketch the likely shape of the framework: continuous real-world performance monitoring, drift detection protocols, and standardized reporting. If you don't comment, the requirements get written without your operational constraints on the record.

Comments close 90 days from publication. That window is your only opportunity to get your resource constraints, technical limitations, and operational realities into the regulatory record before these become enforceable requirements.

What to do this week

Assign someone to draft comments. Detail your current post-market monitoring capabilities, what's technically feasible, and what isn't. Be specific about resource constraints — FDA needs to hear from mid-market companies, not just the Medtronics of the world. Position your company as a collaborative partner now so you're not a compliance laggard later.

 

03

EU foundation model guidelines may have just made your LLM vendor a compliance dependency you didn't plan for.

EU AI Act Action Required

The European Commission published draft guidelines for General Purpose AI models under the EU AI Act. If your healthtech company uses ChatGPT, Claude, Gemini, or any foundation model for clinical decision support, patient communication, or operational workflows — and you serve EU markets — the compliance obligations may not stop at your vendor. Depending on how your product integrates and classifies under the Act, they extend to you.

The guidelines clarify which GPAI providers face direct obligations (the big ones) and what obligations cascade downstream to companies deploying these models in healthcare. The Code of Practice creates risk management, transparency, and documentation requirements that flow down the AI supply chain. Your vendor's compliance posture is becoming part of your compliance posture. For high-risk deployments under the Act, it already is.

The supply chain problem

Most healthtech companies treat their LLM provider like a cloud vendor — sign the BAA, check the box, move on. The EU AI Act treats it like a component in a regulated product. If you can't document the risk management, testing, and transparency of the foundation model you're building on, you can't demonstrate compliance for the product you're selling. If you can't document what your foundation model does and how it was tested, regulators won't distinguish between your product's shortcomings and your vendor's. The opacity problem is yours to solve, even if you didn't create it.

What to do this week

Audit your foundation model usage across every product and internal workflow. Document: which models, for what purposes, what data flows through them, and whether any of those workflows touch EU patient data or EU market delivery. Start building the risk management documentation the AI Act will require. The August 2026 high-risk deadline is five months away.
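If you're starting that inventory from scratch, even a lightweight structured record beats a spreadsheet of free text. A minimal sketch of one reasonable entry shape — the field names are illustrative, not the AI Act's required schema, and the model/vendor values are placeholders:

```python
from dataclasses import dataclass

# One plausible shape for a foundation-model inventory entry.
# Field names are illustrative -- map them to your own
# risk-management documentation, not to any official schema.
@dataclass
class ModelUsage:
    model: str                  # whichever model you actually deploy
    vendor: str
    purpose: str                # clinical decision support, patient comms, ops
    data_categories: list       # what flows through the model
    touches_eu_data: bool       # EU patient data or EU market delivery?
    high_risk_candidate: bool   # plausibly high-risk under the AI Act?
    vendor_docs_on_file: bool   # testing/transparency docs from the provider?

inventory = [
    ModelUsage(
        model="example-llm",          # placeholder
        vendor="ExampleAI",           # placeholder
        purpose="patient intake triage summaries",
        data_categories=["PHI", "free-text symptoms"],
        touches_eu_data=True,
        high_risk_candidate=True,
        vendor_docs_on_file=False,
    ),
]

# Surface the entries that need attention first: EU exposure,
# plausibly high-risk, and no vendor documentation on file.
gaps = [u for u in inventory
        if u.touches_eu_data and u.high_risk_candidate
        and not u.vendor_docs_on_file]
```

The `gaps` list is your priority queue for vendor due-diligence outreach.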

 

04

CISA says your Intune config is a breach waiting to happen. After Stryker, they're probably right.

CISA HIPAA Action Required

CISA issued hardening guidance for Microsoft Intune after a data-wiping attack hit Stryker. The attack demonstrated that endpoint management platforms — the tools you use to secure your devices — can become the attack vector that destroys your data at scale.

Intune is everywhere in healthcare. Clinical workstations, mobile devices, IoT medical devices — if you're managing endpoints in a health system, there's a good chance Intune is the backbone. Configurations that were considered reasonable last month now have a CISA advisory saying otherwise. And here's the compliance angle: if your Intune config doesn't meet CISA's updated guidance, and you get hit, OCR will ask why you didn't follow publicly available hardening recommendations.

The HIPAA connection

HIPAA's Security Rule requires implementation of security measures "sufficient to reduce risks and vulnerabilities to a reasonable and appropriate level." When CISA publishes specific hardening guidance for a tool you use, that guidance defines what "reasonable and appropriate" means. Ignoring it after it's public isn't a risk decision — it's a documentation problem you'll lose.

What to do this week

Pull CISA's Intune hardening guidance and compare it against your current configuration, line by line. If you outsource endpoint management, send the advisory to your vendor with a 48-hour deadline for a gap assessment. Update your vendor management documentation to require compliance with CISA guidance for any endpoint management tool touching systems with PHI access.
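If you're doing that line-by-line comparison in-house, it's worth scripting so you can rerun it after every config change. A minimal sketch, assuming you've exported your Intune policies to JSON and translated CISA's recommendations into a checklist of expected settings — the setting names below are illustrative placeholders, not actual Intune policy keys:

```python
# Illustrative hardening checklist: setting name -> expected value.
# These keys are placeholders -- map them to the actual settings
# named in CISA's Intune hardening guidance.
HARDENING_BASELINE = {
    "require_mfa_for_admin_roles": True,
    "block_unmanaged_device_enrollment": True,
    "wipe_requires_second_approver": True,
    "audit_logging_enabled": True,
}

def gap_assessment(config: dict) -> list:
    """Return one finding per deviation from the baseline."""
    findings = []
    for setting, expected in HARDENING_BASELINE.items():
        actual = config.get(setting)  # None if the setting is absent
        if actual != expected:
            findings.append(
                f"{setting}: expected {expected!r}, found {actual!r}"
            )
    return findings

# Example run against a hypothetical exported config.
example_config = {
    "require_mfa_for_admin_roles": True,
    "audit_logging_enabled": False,  # deviates from baseline
}
for finding in gap_assessment(example_config):
    print(finding)  # one line per gap to fix or document
```

Missing settings surface as findings too, which matters: an absent control is a gap, not a pass.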

 

05

GuardDog Telehealth got caught reading patient charts it had no reason to open. OCR noticed.

HIPAA

GuardDog Telehealth admitted to improperly accessing patient medical records. This wasn't a ransomware attack. This wasn't a misconfigured S3 bucket. This was employees accessing PHI they had no treatment, payment, or operations reason to view. The most boring kind of HIPAA violation — and the kind OCR is now specifically hunting for in telehealth companies.

OCR is shifting enforcement focus from external breaches to internal access control failures. The question isn't just "was the data stolen?" anymore — it's "should that person have been able to see it in the first place?" For telehealth platforms, where clinicians and staff often have broad system access to function efficiently, that question has uncomfortable answers.

What to do this week

Pull your telehealth platform's access logs for the last 90 days. Look for access patterns that can't be justified by treatment, payment, or operations: staff viewing records for patients they never treated, access outside of scheduled appointment windows, bulk record views. If your system can't generate that report, that's the real finding — you need audit logging that actually works before OCR asks for it.
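The two easiest patterns to screen for mechanically are accesses with no treatment relationship and bulk chart opens. A minimal sketch, assuming you can export access logs as records with staff ID, patient ID, and timestamp — the field names and threshold are assumptions to adapt to your platform:

```python
from collections import Counter
from datetime import datetime

BULK_THRESHOLD = 20  # chart opens per staff member per day worth a second look

def flag_accesses(accesses: list, treatment_panel: dict) -> list:
    """Flag accesses outside a treatment relationship and bulk viewing.

    accesses: records like {"staff": ..., "patient": ..., "ts": ISO-8601 str}
    treatment_panel: staff ID -> set of patient IDs they have a
    treatment/payment/operations relationship with.
    """
    findings = []
    opens_per_day = Counter()
    for rec in accesses:
        ts = datetime.fromisoformat(rec["ts"])
        if rec["patient"] not in treatment_panel.get(rec["staff"], set()):
            findings.append(
                f"{rec['staff']} viewed {rec['patient']}: no treatment relationship"
            )
        opens_per_day[(rec["staff"], ts.date())] += 1
    for (staff, day), n in opens_per_day.items():
        if n >= BULK_THRESHOLD:
            findings.append(f"{staff} opened {n} charts on {day}: possible bulk access")
    return findings

# Hypothetical log export -- field names are illustrative.
accesses = [
    {"staff": "s1", "patient": "p1", "ts": "2026-03-01T10:05"},
    {"staff": "s1", "patient": "p2", "ts": "2026-03-01T10:06"},
    {"staff": "s2", "patient": "p1", "ts": "2026-03-02T03:15"},
]
treatment_panel = {"s1": {"p1"}, "s2": {"p1"}}
flags = flag_accesses(accesses, treatment_panel)
```

Every flag is a starting point for review, not a violation finding — the point is that your logging can produce this report at all before OCR asks for it.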

 

Your three-item punch list this week

FDA's revised CDS guidance loosens single-recommendation restrictions, but enforcement discretion is not exemption. Map your products against the new criteria. Do not tear down existing compliance infrastructure — the new AI framework isn't written yet and the gap period won't last forever.
Review your Intune configuration against CISA's hardening guidance. After Stryker, these recommendations define "reasonable and appropriate" under HIPAA. Don't be the company that ignored a public advisory.
The EU AI Act creates supply-chain compliance obligations that reach your LLM vendor choice. If you're serving EU markets and building on a foundation model, vendor due diligence is no longer optional. August 2026 high-risk deadline — five months out.

The Compliance Signal — compliancesignal.io
AI regulation in healthcare — tracked, analyzed, and translated into action.

Questions? Reply to this email or contact jay@compliancesignal.io

You received this because you subscribed at compliancesignal.io. Unsubscribe.
