Humans In The Loop -- Sunday, May 3, 2026
Happy Sunday. The big AI breakup of the week was not on a reality show, it was Microsoft and OpenAI splitting their exclusive cloud arrangement, and Amazon was already in the driveway before the ink dried. If you have been wondering which cloud your AI vendor runs on, this week made that question a lot more interesting.
For six years, if you wanted OpenAI models in your enterprise software, you went through Microsoft Azure, full stop. That era ended last week. OpenAI and Microsoft rewrote their partnership on April 27, ending Microsoft's exclusive license and clearing the legal path for OpenAI's $50 billion deal with Amazon. Within 24 hours, AWS launched GPT-5.5 and OpenAI's Codex coding agent on its Bedrock platform, which is the AI-model marketplace that runs inside your existing Amazon cloud account.
Here is what this means if you are not a cloud engineer: the AI model market just became a grocery store instead of a single-brand restaurant. If your company already runs workloads on AWS, you can now plug in OpenAI's models without setting up a new Microsoft Azure account. OpenAI's own revenue chief wrote in a memo that the Microsoft partnership had "limited our ability to meet enterprises where they are." For CEOs, the board question is simple: where does your company's data live today, and is that cloud now offering the AI tools you actually want?
- Microsoft keeps a nonexclusive IP license to OpenAI's products through 2032 and still owns roughly 27% of OpenAI's for-profit entity.
- A new service called Amazon Bedrock Managed Agents powered by OpenAI lets companies build AI agents with memory of past interactions, all inside their existing AWS security controls.
- Google is reportedly looking at the new deal terms to explore its own OpenAI partnership, which would mean all three major clouds could soon offer the same AI model.
Looking ahead, OpenAI's expected IPO later in 2026 gets easier with multi-cloud distribution, so the valuation conversation is just getting started.
Microsoft reported Q3 2026 revenue of $82.9 billion on April 29, with Azure growing 40% and its AI business hitting an annualized revenue run rate of $37 billion, up 123% year over year. Microsoft 365 Copilot, the AI add-on for Word, Excel, and Teams, now has 20 million paid seats, up from 15 million in January. The stock still dipped after hours because Microsoft also said it plans to spend roughly $190 billion on capital expenditures in 2026 to build out AI data centers.
The CEO board translation: your Microsoft software costs are not going up by accident. Microsoft is betting enormous sums that AI features will justify premium pricing. If you are currently paying for a Microsoft 365 Copilot license, usage is accelerating fast. If you are not, your competitors increasingly are. The $190 billion in spending also tells you that the infrastructure race is real and long-term.
- Microsoft's AI commercial backlog hit a record $627 billion, meaning customers have already committed to that much future spending.
- GitHub Copilot, the AI coding tool, moves to usage-based billing on June 1, so your IT team's bill will now flex with how much they actually use it.
- Microsoft said its headcount will decline year over year in 2027, with AI-driven efficiency cited as a core reason.
Looking ahead, Microsoft's Build developer conference is coming and is expected to bring new Copilot features and possibly new AI hardware announcements.
[ Reported without editorial commentary ]
Companies that sell into Europe had been quietly banking on a postponement. On November 2025, the EU proposed pushing the major AI compliance deadline from August 2, 2026 to December 2027. On April 28, those talks collapsed without a deal. As of today, the original August 2, 2026 deadline for high-risk AI systems is still legally in force, and the next negotiation session is May 13.
What counts as high-risk AI? The EU's definition covers AI systems used in hiring decisions, credit scoring, insurance underwriting, healthcare, education admissions, and tenant screening. Non-compliance can cost up to 7% of your company's global annual revenue. If you have been treating this as a far-off European problem, the clock just got a lot louder.
- In the U.S., California, Colorado, and Illinois already have active AI regulations requiring disclosure when AI is used in hiring, lending, or customer decisions.
- The SEC has flagged AI-driven threats to data integrity as a 2026 examination priority and is considering enhanced disclosure rules for public companies.
- Cyber insurance carriers are now requiring documented AI security controls, and companies without them may face higher premiums or coverage gaps.
Looking ahead, a third EU trilogue on May 13 is the last realistic chance to delay the August deadline before it becomes enforceable law.
[ Reported without editorial commentary ]
Google's security team published research this week confirming that attackers are embedding hidden instructions in public web pages, and any enterprise AI that browses those pages can be turned against its own company. The attack works because AI systems cannot reliably tell the difference between content they are reading and commands they should follow. Traditional security tools see nothing wrong, because the AI is using its real credentials and approved permissions to do real damage. This class of attack is called prompt injection, and it has surged 340% in 2026.
The CEO version: imagine your AI assistant is asked to summarize a vendor invoice, but the invoice contains invisible text telling it to email your client database to an outside address. No hacker ever touched your network. According to Cisco's 2026 AI security research, 83% of companies plan to deploy AI agents, but only 29% feel prepared to secure them. The fix is not a new firewall. It is giving your AI the minimum access it actually needs, nothing more.
- A security researcher demonstrated last week that a single malicious line in a GitHub pull request title caused Anthropic's, Google's, and Microsoft's AI coding tools to each leak their own API keys.
- At Black Hat Asia this week, researchers reported that the window from bug discovery to working exploit has collapsed from five months in 2023 to just ten hours in 2026, with AI doing much of the offensive work.
- The practical rule: treat every AI agent like a new employee with admin-level system access, because that is effectively what it has.
Looking ahead, expect a wave of 'agent firewall' security startups pitching themselves as the new must-have layer between your AI and your data.
A new MIT study published in April found that AI is advancing through the workforce more like a rising tide than a crashing wave, meaning work changes broadly and gradually rather than through sudden wipeouts. Routine automation-prone job postings fell 13% since ChatGPT launched, while demand for analytical and creative roles grew 20%. The study tested 11,500 real workplace tasks across 40 AI models and found AI had its lowest success rate in legal work (47%) and its highest in maintenance and repair administration (73%).
BCG research released this month agrees: task automation does not equal job loss, most roles will remain but change substantially. The sharper stat is from PwC, which found that workers with advanced AI skills earn 56% more than peers in the same role without those skills. That gap is your most actionable number this quarter. The board question is not 'should we reduce headcount' but 'are we training the people we have.'
- An LSE study found employees who use AI for work tasks save an average of 7.5 hours per week, nearly a full workday.
- 67% of senior HR executives say AI is already having a significant impact on jobs at their firms, with 89% expecting even broader impact by year end.
- In February, AI was cited in 10% of U.S. job cuts, though experts warn some companies are using AI as a cover story for cost-cutting that has other causes.
Looking ahead, the companies building internal AI training programs now are the ones that will not be scrambling to find skilled workers in 2027.
You won't leave with general knowledge or a prototype. You'll leave with a working product and the skills to build the next one.