The Compliance Signal

Issue #004 — Week of April 3, 2026
March 30, 2026

AI regulation in healthcare — what moved, what it means, what to do about it.

This Week

01   ONC proposed eliminating 34 health IT certification criteria — and expanding what counts as information blocking.
02   FDA consolidated seven adverse event databases into one. Your reporting workflows are about to change.
03   The LDT rule is dead. AI diagnostics just landed in a regulatory gap.
04   State AI laws are now in effect: Texas, California, and Illinois. Colorado arrives June 30.
05   FTC is rewriting how it measures data harm. The Health Breach Notification Rule is still loaded.

01

ONC proposed eliminating 34 health IT certification criteria and expanding the definition of information blocking to cover autonomous AI. The rule is not yet final.

ONC/ASTP · Major Shift · Action Required

On December 29, 2025, HHS published the HTI-5 proposed rule — “Health Data, Technology, and Interoperability: ASTP/ONC Deregulatory Actions To Unleash Prosperity.” The public comment period closed February 27, 2026. This is the most significant restructuring of health IT certification since the program began.

What the proposed rule would change: 34 of 60 certification criteria would be eliminated — 24 immediately upon finalization, the rest by January 1, 2027. Targeted for removal: clinical decision support certification, family health history, multifactor authentication, and audit report requirements. The Biden-era requirement for AI “model cards” disclosing training data sources and attributes for CDS algorithms would be fully removed. The criteria that survive would be reoriented around FHIR-based APIs. ONC estimates 1.4 million compliance hours saved industry-wide in year one.

Why it matters — the information blocking expansion: While the proposed rule would shrink certification requirements, it would simultaneously expand the information blocking definition. “Access,” “use,” and “exchange” would explicitly encompass “automated means of accessing, exchanging or using electronic health information, including autonomous AI.” If your AI system restricts EHI access, exchange, or use — through scheduling logic, access controls, records routing, or coding analytics — you may have new information blocking exposure.

Our read: The proposed removal of model card requirements is not a green light to stop documenting AI models. FDA’s AI/ML action plan, state AI transparency laws (California AB 489, Colorado SB 205), and potential litigation discovery all still require robust model documentation. ONC would stop certifying against it — but the obligation doesn’t disappear, it just moves to other regulators. Similarly, removing MFA certification does not change HIPAA Security Rule requirements. Do not confuse a lighter certification program with a lighter regulatory burden.

What to do

Audit information blocking exposure now. Even before HTI-5 is finalized, review every AI system that touches EHI for actions that could restrict access, exchange, or use. Scheduling, records routing, and coding analytics are the highest-risk areas. Maintain AI model documentation internally — even if ONC drops the requirement, FDA, FTC, and state laws haven’t. Accelerate FHIR investment — the proposed certification program is FHIR-first.
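If "accelerate FHIR investment" means anything concrete for engineering teams, it starts with speaking FHIR's RESTful search syntax. A minimal sketch — the base URL (`fhir.example.org`) is a hypothetical placeholder for your EHR vendor's R4 endpoint, and real deployments add OAuth/SMART authentication on top:

```python
from urllib.parse import urlencode

# Hypothetical FHIR R4 base URL -- substitute your EHR vendor's endpoint.
FHIR_BASE = "https://fhir.example.org/r4"

def fhir_search_url(resource: str, **params: str) -> str:
    """Build a FHIR R4 RESTful search URL for a resource type
    (e.g. Patient, Observation), per the standard [base]/[type]?[params] shape."""
    query = urlencode(params)
    return f"{FHIR_BASE}/{resource}?{query}" if query else f"{FHIR_BASE}/{resource}"

# e.g. fhir_search_url("Observation", patient="123", category="laboratory")
```

The point of isolating the base URL and building queries through one helper is that a FHIR-first certification program rewards exactly this kind of uniform API surface over bespoke per-interface integrations.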

Sources: HTI-5 Proposed Rule (Federal Register, Dec 2025) · HTI-5 Overview (HealthIT.gov) · HHS Press Release


02

FDA consolidated seven adverse event databases into one. MAUDE migrates by May. Your reporting workflows need to catch up.

FDA · Post-Market · Action Required

On March 11, 2026, FDA launched the Adverse Event Monitoring System (AEMS), replacing seven fragmented databases with a single platform. FAERS (drugs), VAERS (vaccines), and general adverse event systems have already migrated. MAUDE — the medical device adverse event database — migrates by end of May 2026.

Why it matters: Two changes hit medical device companies immediately. First, adverse event reports will be published in real time instead of quarterly. Safety signals for AI medical devices will surface publicly faster — visible to regulators, competitors, plaintiffs’ attorneys, and journalists the same day they’re filed. Second, the consolidated dataset and new analytics APIs signal FDA’s intent to apply more sophisticated pattern detection across product categories — something that was nearly impossible with seven siloed systems processing 6 million reports annually.

Our read: This is infrastructure modernization, not a new regulatory requirement. But the practical effect is that post-market surveillance becomes more transparent and more immediate. Companies that have been lax about timely Medical Device Reports should tighten up — late or incomplete reports are now visible in real time rather than buried in a quarterly data dump.

What to do

Prepare for MAUDE migration by end of May. Update any internal systems or automated workflows that submit to or query MAUDE. Review MDR timeliness — with real-time publication, filing deadlines matter more. Update your PMS plan to reference AEMS instead of legacy database names.
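As a sketch of what "update your automated workflows" can look like in practice: the helper below builds a query against the existing openFDA device-event endpoint. FDA has not published AEMS API details here, so the base URL is kept in a single constant for a one-line swap later; the brand name and date in the example are illustrative only:

```python
from urllib.parse import urlencode

# Current openFDA device-event endpoint. AEMS replacement endpoints are not
# yet assumed -- keeping the base URL in one place makes the migration a
# one-line change once FDA publishes AEMS API details.
OPENFDA_DEVICE_EVENT = "https://api.fda.gov/device/event.json"

def build_event_query(brand_name: str, since: str, limit: int = 100) -> str:
    """Build an adverse-event query URL for reports on a device brand
    received since a YYYYMMDD date, using openFDA search syntax."""
    search = f'device.brand_name:"{brand_name}"+AND+date_received:[{since}+TO+30000101]'
    # urlencode would escape the +/:" characters openFDA's search syntax
    # expects, so only the simple parameters go through it.
    params = urlencode({"limit": limit})
    return f"{OPENFDA_DEVICE_EVENT}?search={search}&{params}"
```

Auditing your pipeline for every place a URL like this is hard-coded — submission scripts, signal-detection dashboards, competitor monitoring — is the concrete version of "prepare for MAUDE migration."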

Sources: FDA AEMS Overview · AEMS Public Dashboard · MAUDE Database


03

The LDT rule is dead. A Texas court killed it, HHS didn’t appeal, and AI diagnostics just landed in a regulatory gap.

FDA Diagnostics · Major Shift

In May 2024, FDA published a final rule declaring that laboratory-developed tests (LDTs) are medical devices subject to FDA regulation, with a phased enforcement timeline running through November 2027. In March 2025, a Texas district court vacated the rule entirely, holding that LDTs are professional services, not “devices” under the FFDCA. The court relied on the Supreme Court’s Loper Bright decision, which eliminated Chevron deference, and concluded that FDA had exceeded its statutory authority. HHS chose not to appeal. In September 2025, FDA formally reverted the regulation to its pre-May 2024 text.

Why it matters for AI diagnostics: AI algorithms that analyze lab data — genomic sequencing, pathology image analysis, biomarker detection — sit at the intersection of LDTs and software as a medical device (SaMD). With the LDT rule vacated, clinical laboratories developing AI-based tests in-house face no FDA device oversight, absent a separate SaMD classification. This creates a regulatory asymmetry: commercial AI diagnostic products face full FDA review, while functionally identical lab-developed AI tests do not.

What could change: Legislative proposals including a modified VALID Act continue to circulate in Congress, with the FDA user-fee reauthorization process (due by September 2027) as a potential vehicle. If enacted, it would create a new regulatory framework for all in vitro clinical tests, potentially including AI-based laboratory tests. The timeline and specific requirements remain uncertain — user-fee consultations are still ongoing and won’t be formally submitted to Congress until January 2027.

Our read: The gap is real but likely temporary. Building to FDA device standards voluntarily is expensive but de-risks against the VALID Act passing. For companies currently in the gap, the strategic move is to build modular compliance infrastructure that can be activated without rebuilding from scratch. And CLIA compliance is your primary federal oversight mechanism in the meantime — do not neglect it.

What to do

Classify your products. Determine whether your AI diagnostic tools qualify as LDTs, SaMD, or both — the regulatory obligations differ dramatically. Track the VALID Act through the user-fee reauthorization process. Ensure CLIA compliance is current — it’s the primary federal oversight mechanism for lab tests with FDA out of the picture.

Sources: FDA LDT Page · LDT Vacatur Implementation (Federal Register, Sep 2025) · LDT Final Rule (Federal Register, May 2024)


04

Three state AI laws took effect January 1: Texas disclosure, California licensure restrictions, and Illinois employment AI rules. Colorado’s comprehensive AI Act arrives June 30.

State Laws · Action Required

Three states have healthcare-specific AI provisions that took effect January 1, 2026:

Texas TRAIGA (Responsible Artificial Intelligence Governance Act): Healthcare providers must disclose to patients when AI is used in their treatment or service. The law also prohibits AI for “restricted purposes,” including unlawful discrimination, and establishes a regulatory sandbox with 36-month test periods. Low compliance burden, high litigation risk if missed.

California AB 489: Prohibits AI systems from using terms, letters, phrases, or design elements that indicate or imply the AI holds a healthcare license. If your patient-facing AI interface includes clinical-sounding titles, white-coat imagery, or anything suggesting licensure, you have a compliance issue effective now.

Illinois HB 3773: Prohibits employer use of AI that results in discrimination against protected classes. Separately, the Illinois Wellness and Oversight for Psychological Resources Act restricts AI in therapy to licensed professionals with required disclosures and human oversight.

Colorado SB 205 hits June 30 — delayed from February 1 by SB 25B-004. Developers must exercise “reasonable care” to prevent algorithmic discrimination, produce technical documentation, and publish public statements. Deployers must adopt risk management policies and perform impact assessments. The 2026 session may further amend requirements before the effective date.

The federal wildcard: A December 2025 Executive Order established a DOJ AI Litigation Task Force to challenge state AI laws in federal court. The Commerce Department was directed to identify “potentially unconstitutional” state laws by March 11, 2026. Our read: Do not bank on preemption — it faces significant constitutional obstacles and the state laws are effective now.

What to do

Texas: Add AI disclosure language to patient consent forms and provider-facing interfaces. California: Audit all patient-facing AI interfaces for language or design elements implying healthcare licensure. Colorado: Begin impact assessment and risk management policy development, but design for modularity — amendments are likely before June 30. Illinois: Review AI employment tools for discriminatory impact.

Sources: King & Spalding: State AI Laws Effective Jan 2026 · Akerman: Healthcare AI Laws Now in Effect · Morgan Lewis: Texas AI Governance


05

The FTC is rewriting how it measures data harm. The Health Breach Notification Rule is still loaded.

FTC · Health Data

On February 26, 2026, the FTC held a workshop on “Consumer Injuries and Benefits in the Data-Driven Economy.” Chairman Ferguson stated that Section 5 “is not an obstacle to innovation” and should promote technological advancement. Bureau of Consumer Protection Director Mufarrige characterized prior efforts to impose substantive data practice limitations as “misguided,” reorienting toward a “notice and consent framework.”

Our read: The Ferguson FTC signals a shift from prescriptive data regulation to a deception-focused enforcement model. Companies that accurately disclose their data practices and obtain proper consent would face lower FTC risk under this approach. Companies whose actual practices diverge from their stated practices face the same or higher risk — because “substantial injury” in healthcare is easier to prove than in most sectors.

The HBNR is the enforcement tool to watch. The Health Breach Notification Rule — updated July 2024 — covers health apps and AI tools not subject to HIPAA. “Breach” includes unauthorized disclosures (not just security breaches). Penalties run up to $53,088 per violation (inflation-adjusted as of January 2025). FTC has already used it against GoodRx and Premom. If your AI product handles health data outside HIPAA coverage — consumer health apps, wellness tools, wearables — the HBNR is your primary federal exposure.

What to do

Audit privacy policy accuracy. Ensure your stated data practices exactly match your actual practices — any gap is enforcement bait under a deception-focused FTC. Run an HBNR compliance review if your product handles health data outside HIPAA: the rule imposes a 60-day notification deadline, requires FTC notification when 500 or more individuals are affected, and defines “breach” to include unauthorized third-party disclosures.
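The two thresholds above (60 calendar days, 500 individuals) are simple enough to encode in an incident-response runbook. A minimal sketch — not legal advice, and the HBNR's FTC-notice timing has additional nuances the rule text spells out:

```python
from datetime import date, timedelta

# Thresholds from the FTC Health Breach Notification Rule (16 CFR Part 318):
# individual notice no later than 60 calendar days after discovery, and
# FTC notice when 500 or more individuals are affected.
INDIVIDUAL_NOTICE_DAYS = 60
FTC_THRESHOLD = 500

def hbnr_deadlines(discovered: date, affected: int) -> dict:
    """Return the key HBNR date and flag for a discovered breach."""
    return {
        "individual_notice_by": discovered + timedelta(days=INDIVIDUAL_NOTICE_DAYS),
        "ftc_notice_required": affected >= FTC_THRESHOLD,
    }
```

For example, a breach discovered March 1, 2026 affecting 600 users yields an individual-notice deadline of April 30, 2026 with FTC notice required.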

Sources: FTC Health Breach Notification Rule · FTC: Complying with the HBNR · HBNR Final Rule (Federal Register, May 2024)


Your three-item punch list this week

1.   Review information blocking exposure for AI systems. HTI-5 proposes expanding the definition to cover autonomous AI that restricts EHI access, exchange, or use. Scheduling, records routing, and coding analytics are the highest-risk areas.
2.   Prepare for MAUDE migration to AEMS by end of May. Update any automated workflows that submit to or query MAUDE. Ensure your MDR submissions are timely and complete — adverse events are now published in real time.
3.   Implement state AI disclosure requirements. Texas patient disclosure, California AI licensure prohibition, and Illinois employment AI provisions are all effective now. Colorado arrives June 30.

What’s the one compliance question you wish someone would answer this week? Hit reply.

The Compliance Signal — compliancesignal.io
AI regulation in healthcare — tracked, analyzed, and translated into action.

Questions? Reply to this email or contact support@compliancesignal.io

You received this because you subscribed at compliancesignal.io. Unsubscribe.
