What Just Happened
On January 22, 2026, South Korea's AI Basic Act took effect, the first comprehensive national AI law outside the EU [1]. It requires risk assessments for high-impact AI, mandates transparency for AI-generated content, and imposes fines for non-compliance. The signal is hard to miss: comprehensive AI regulation is becoming the norm, not the exception.
The EU Timeline Uncertainty
Here's where it gets complicated. The EU AI Act's full application was set for August 2, 2026. But in November 2025, the European Commission proposed the "Digital Omnibus"—a simplification package that could push high-risk AI compliance deadlines to late 2027 or even 2028 [2]. The omnibus isn't passed yet, so the August 2026 deadline technically still stands [3].
Should you wait? No. Here's why.
Governments Aren't Waiting
While Brussels debates timelines, national governments have been acting. In July 2025, the Czech National Cyber and Information Security Agency (NÚKIB) rated DeepSeek a "high threat" and banned it from government systems [4]. Italy's data protection authority blocked it under the GDPR in early 2025 [5]. The Netherlands, France, and Germany have opened investigations. This isn't future risk; it's current enforcement.
What This Means for European Businesses
The practical impact depends on how you use AI:
If you only use AI tools (ChatGPT, Copilot, etc.): You're likely in "minimal risk" territory. But you still need to know what tools employees are using, ensure data isn't going to banned providers, and have basic usage guidelines.
If AI affects customer decisions (recommendations, scoring, personalization): You may face "limited risk" requirements—primarily transparency obligations. Customers must know when they're interacting with AI.
If AI touches hiring, credit, or critical services: You're in "high risk" territory with documentation and assessment requirements. Most SMEs won't be here, but check your vendors—their AI might put you in scope.
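The three tiers above can be turned into a simple internal triage checklist. A minimal sketch in Python; the tier names and yes/no questions are illustrative shorthand for the categories described in this article, not the EU AI Act's formal risk taxonomy, and this is not legal advice:

```python
from dataclasses import dataclass

# Illustrative triage based on the three tiers described above.
# The questions are a rough screen, not the Act's legal definitions.

@dataclass
class AIUseCase:
    name: str
    affects_customer_decisions: bool       # recommendations, scoring, personalization
    touches_hiring_credit_or_critical: bool  # hiring, credit, critical services

def risk_tier(use_case: AIUseCase) -> str:
    """Map a use case onto the rough tiers from the article."""
    if use_case.touches_hiring_credit_or_critical:
        return "high risk: documentation and assessment requirements"
    if use_case.affects_customer_decisions:
        return "limited risk: transparency obligations"
    return "minimal risk: inventory tools, set usage guidelines"

if __name__ == "__main__":
    for uc in [
        AIUseCase("CV screening", False, True),
        AIUseCase("Product recommendations", True, False),
        AIUseCase("Internal drafting assistant", False, False),
    ]:
        print(f"{uc.name}: {risk_tier(uc)}")
```

Running every AI use case, including those embedded in vendor products, through even a crude screen like this is what surfaces the "your vendor's AI might put you in scope" problem early.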
Why This Matters for Your Business
The DeepSeek case illustrates the real risk. Its privacy policy states that user data is stored on servers in China, where the law requires companies to share data with authorities on request. Security researchers at Feroot Security found code in the app linking to China Mobile, a state-owned telecom the US has designated as a military-linked entity [6]. Governments didn't ban it for political reasons; the data risks are documented.
If your employees are using tools like DeepSeek—and some probably are—you have a compliance gap today. The question isn't whether regulation is coming; it's whether you'll be ready when clients ask about your AI governance.
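Closing that gap starts with an inventory: compare the AI tool domains actually in use (from proxy logs, SSO dashboards, or expense reports) against providers your regulators have flagged. A minimal sketch; the blocklist entries are illustrative placeholders that your compliance team would maintain, not official guidance:

```python
# Illustrative shadow-AI inventory check. The blocklist below is a
# placeholder example, not an authoritative list of banned providers.

BANNED_PROVIDERS = {"deepseek.com"}  # example entry only

def flag_banned_tools(domains_in_use: set[str]) -> set[str]:
    """Return in-use domains that match a blocked provider,
    including subdomains (e.g. chat.deepseek.com)."""
    return {
        d for d in domains_in_use
        if any(d == b or d.endswith("." + b) for b in BANNED_PROVIDERS)
    }

if __name__ == "__main__":
    in_use = {"chat.openai.com", "chat.deepseek.com", "copilot.microsoft.com"}
    for domain in sorted(flag_banned_tools(in_use)):
        print(f"flagged: {domain}")
```

Even a crude report like this gives you an answer when a client asks what AI tools your staff use and whether any are on a government blocklist.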