Humans In The Loop -- Monday, April 28, 2026
Happy Monday. The AI industry just blew up its most famous friendship, your employees are already using AI tools you didn't approve, and regulators are coming for your industry whether you're ready or not. Grab a coffee. This one's dense.
The most consequential tech partnership of the AI era just got restructured. Microsoft and OpenAI announced Monday that Microsoft's exclusive right to sell OpenAI's models is over. Going forward, OpenAI can sell to customers on any cloud, including Amazon Web Services and Google Cloud. Microsoft keeps a non-exclusive license to OpenAI's intellectual property through 2032, and remains OpenAI's primary cloud partner, but the moat is gone.
For a non-tech CEO, here is what matters: if your company runs on AWS or Google Cloud, you can now buy OpenAI's models directly through those platforms instead of having to touch Microsoft Azure. That means more vendor choice, more price competition, and less lock-in. The deal also signals that OpenAI is sprinting toward a near-$1 trillion IPO valuation and needs enterprise customers everywhere, not just inside Microsoft's ecosystem.
- Microsoft stops paying OpenAI a revenue share; OpenAI continues paying Microsoft a 20% revenue share through 2030, now subject to a cap.
- Amazon CEO Andy Jassy celebrated on X and said OpenAI models will appear on AWS Bedrock within weeks.
- Analysts at Barclays called it a win for both companies, noting Microsoft can now redirect data center capital to its own Copilot products.
Looking ahead, with OpenAI now free to sell everywhere, the enterprise AI price war between Azure, AWS, and Google Cloud is about to get very interesting for your procurement team.
At Google Cloud Next last week, Google announced the Gemini Enterprise Agent Platform, which replaces its old Vertex AI developer service. Plain English: instead of a toolkit for coders, Google now offers a full platform to build, run, and govern AI agents across your entire company. It includes identity controls, security policies, and the ability to watch what your AI is actually doing at all times.
This matters for non-tech companies because Deloitte is already deploying it at scale: 25,000 Deloitte professionals have Gemini Enterprise access today, with plans to roll out to 100,000 licenses. Deloitte also launched a dedicated agentic AI practice with over 1,000 pre-built, industry-specific agents ready for healthcare, financial services, manufacturing, and government clients.
- Google also unveiled its eighth-generation AI chips (TPUs), split into two types: one for training models and one for running them at scale.
- Google's new 'Agentic Data Cloud' gives AI agents governed access to your company's data without letting them touch everything they shouldn't.
- Deloitte's 2026 State of AI report found that AI tools are now available to workers at about 60% of surveyed organizations, a shift from pilots to real deployment.
Looking ahead, if your company is a Deloitte audit or consulting client, expect AI agents showing up in your next engagement whether you asked for them or not.
Shadow AI — meaning AI tools employees use without IT approval — is now a confirmed, widespread enterprise risk. A new Lenovo survey of 6,000 employees found that more than 70% use AI weekly, with up to one-third operating completely outside IT oversight. A separate HiddenLayer security report found that 76% of organizations now cite shadow AI as a definite or probable problem, up from 61% just last year.
The board-level concern here is simple: when an employee pastes company data into an unauthorized AI tool, your data security and compliance guarantees go out the window. Gartner also warned this week that by 2028, 25% of all enterprise AI applications will experience at least five security incidents per year. The risk is not theoretical: 1 in 8 companies in the HiddenLayer report has already had an AI breach linked to autonomous AI agents.
- AI-related cyberattacks have increased nearly 490% year over year, per recent SaaS security research.
- 61% of IT leaders report rising cybersecurity threats tied to AI, but only 31% feel confident managing those risks.
- The biggest current attack surface is not the AI model itself but the permissions and data access the AI inherits from your existing software.
Looking ahead, the one action your board can take today: ask your IT team for a list of every AI tool currently in use across the company, approved or not.
The U.S. regulatory picture is a genuine mess right now, and that is not an insult — it is just the reality. New York revised its AI law (the RAISE Act) in late March to mirror California's framework, shifting toward transparency and reporting requirements instead of outright restrictions. Meanwhile, the White House's National Policy Framework, released March 20, urges Congress to create one national standard and override state laws it considers too burdensome. Those two things cannot both win.
In Europe, the EU AI Act's high-risk AI rules were set to kick in this August, but the European Commission is now actively pushing to delay key deadlines to 2027 or 2028. Back in the U.S., several states passed laws this year specifically prohibiting insurers from using AI as the sole basis for denying health claims, a real concern for any company in employee benefits, healthcare, or insurance. If your company touches any of these areas, you need outside counsel reviewing your AI use cases now, not after an enforcement action.
- Indiana, Utah, and Washington all enacted new laws this year barring health insurers from using AI alone to deny claims.
- A 42-state attorney general coalition is coordinating AI enforcement pressure, with settlement activity against AI deployers already rising.
- Cyber insurers are now adding AI Security Riders to policies — meaning your insurance could be voided if you cannot document your AI controls.
Looking ahead, even if Congress passes a federal AI law that preempts some state rules, enforcement of existing state laws continues until courts say otherwise.
Goldman Sachs data reported this week puts AI-linked net job losses at about 16,000 per month in the U.S. right now. The sharpest pain is hitting entry-level workers: data entry, customer service, legal support, and billing roles are being automated first. Gen Z is bearing a disproportionate share, with entry-level hiring at top tech companies down 25% since 2023. A separate MIT study found that AI can handle 65% of text-based tasks at an acceptable level today, rising to an estimated 80% to 95% by 2029.
But here is what CEOs actually need to hear before their next all-hands: Gallup's fresh survey of 23,000 U.S. employees found that 65% of workers inside AI-adopting companies say AI has improved their productivity. Morgan Stanley and BCG both published research this month reinforcing that AI reshapes more jobs than it eliminates, and that the pace of change more closely resembles a slow tide than a sudden flood. The practical implication: invest in reskilling now, before your top performers leave for a company that already has a reskilling program in place.
- Snap laid off 16% of its workforce on April 15, with CEO Evan Spiegel explicitly citing AI reductions in repetitive work as the driver.
- Over 150,000 tech jobs have been eliminated across 500-plus companies since January 2026, per layoff tracking data.
- Only 18% of all U.S. employees say it is very likely their job will be eliminated in five years due to AI — fear is real but still not the majority view.
Looking ahead, the companies that tell employees clearly what AI will and will not change about their roles are the ones retaining the talent they need to execute the transition.
Q1 2026 broke every venture capital record ever kept. Investors poured $330.9 billion into startups globally in the quarter, more than double the prior quarter, with AI companies absorbing 80% of that total. Just four deals — OpenAI ($122B), Anthropic ($30.6B), xAI ($20B), and Waymo ($16B) — together totaled $188.6 billion, more than all of 2024's global venture funding.
The M&A story is just as revealing. Vertical AI companies — ones building AI specifically for healthcare, legal, financial services, or manufacturing — are commanding premium acquisition prices. The logic: proprietary data plus industry expertise is something a generic AI tool cannot replicate. If your company has accumulated decades of specialized operational data, that data is now an M&A asset, even if you have never thought of it that way.
- Amazon made a fresh $5 billion investment in Anthropic this week, with up to $20 billion more on the table.
- OpenEvidence, an AI platform used by over 700,000 physicians for clinical decision support, raised $250 million at a $12 billion valuation.
- Legora, an AI platform for law firms and legal teams, raised $550 million in Series D funding at a $5.55 billion valuation, its cap table a who's who of top-tier VCs.
Looking ahead, if your board asks about AI strategy and you do not have an answer that includes proprietary data as a competitive asset, it is time to build one.