Humans In The Loop -- Tuesday, April 21, 2026
Happy Tuesday. In honor of Earth Day tomorrow, here's an AI haiku: 'Models train all night / GPUs run hot and bright / Who turned off the fan?' Today's edition: Cerebras goes public, OpenAI pivots hard to enterprise, US labs form a united front against Chinese model theft, and Goldman Sachs delivers some uncomfortable news about Gen Z. Let's get into it.
Nvidia has a formidable new challenger -- and it has a chip 58 times bigger than Nvidia's B200. On April 17, AI chipmaker Cerebras Systems filed its S-1 registration statement with the SEC, targeting a Nasdaq listing under the ticker CBRS at a valuation of $22 to $25 billion, aiming to raise approximately $2 billion. The company reported $510 million in revenue for 2025, up 76% year over year, and is targeting a mid-May listing window.
The headline-grabbing anchor of the S-1 is a multi-year contract with OpenAI valued at more than $20 billion to deliver 750 megawatts of compute capacity through 2028, with options for nearly 3 gigawatts more by 2030. OpenAI even lent Cerebras $1 billion to help fund the buildout and received warrants for 33 million near-free shares -- making the relationship less 'customer and vendor' and more 'financier and favored supplier.' Cerebras has also inked a deal with Amazon Web Services (AWS) for inference distribution, broadening its customer base beyond its current UAE-heavy concentration (two customers still accounted for 86% of 2025 revenue, an asterisk that will dominate earnings calls for the foreseeable future).
For enterprise technology leaders, the strategic play is clear: Cerebras is positioning itself as the inference-speed specialist for AI-first companies that can't afford the latency trade-offs baked into GPU clusters. Its Wafer Scale Engine 3 claims inference up to 15 times faster than leading GPU-based solutions. This is Cerebras' second IPO attempt -- it withdrew its 2024 filing after a CFIUS review of UAE investor G42's stake, which was resolved last October. The window is open, but as one analyst put it, 'this valuation could be much harder to achieve six months from now' if market sentiment shifts.
- Cerebras generated $510M in revenue in 2025 (+76% YoY), with a $24.6 billion remaining performance obligation backlog -- 43% of which is expected to be recognized in 2028-2029, tied heavily to the OpenAI compute contract.
- The WSE-3 chip boasts 900,000 AI-optimized cores, 44 GB of on-chip SRAM, and 2,625x more memory bandwidth than Nvidia's B200. It is fabricated on TSMC's 5nm process -- on a single wafer the size of a large pizza.
- Customer concentration is the single biggest red flag: just two UAE-based entities (G42 and the Mohamed bin Zayed University of Artificial Intelligence) accounted for 86% of 2025 revenue. Enterprise buyers considering Cerebras for inference should watch this diversification story closely.
Looking ahead… Cerebras is targeting a mid-May 2026 Nasdaq debut; if public investors accept the wafer-scale inference story, it could set a precedent that the next phase of AI infrastructure will reward specialized architectures -- not just whoever has the most Nvidia GPUs.
OpenAI has officially declared that the 'side quest' era is over. In a March all-hands meeting, Chief of Applications Fidji Simo told staff the company needed to stop being distracted and 'pivot aggressively toward coding and business users.' The strategic reframing is aimed squarely at the institutional investors who will price the offering: a Q4 2026 IPO targeting a valuation of approximately $1 trillion. CFO Sarah Friar called it 'good hygiene' for an $852 billion company to 'look and feel and act like a public company.'
The enterprise bet is already paying off in measurable ways. Enterprise now makes up more than 40% of OpenAI's revenue and is on track to reach parity with consumer by the end of 2026, according to Chief Revenue Officer Denise Dresser. OpenAI has crossed $25 billion in annualized revenue, with 1 million enterprise customers and 9 million paying business users as of February. Meanwhile, OpenAI's Frontier platform β designed to help customers like Oracle, State Farm, and Uber build and deploy agents company-wide β is being positioned as the company's core infrastructure play.
But here's the uncomfortable subplot: Anthropic may already be winning the enterprise war. According to new data, Anthropic's annualized revenue has surged past $30 billion, overtaking OpenAI, driven almost entirely by enterprise adoption and the explosive growth of Claude Code, which holds a 54% market share in AI programming tools with over $2.5 billion in annualized revenue. Among U.S. businesses tracked by Ramp Economics Lab, Anthropic's share of combined enterprise spend with OpenAI has gone from roughly 10% at the start of 2025 to over 65% by February 2026. OpenAI's enterprise LLM API share has fallen from 50% in 2023 to 25%. The race to the IPO podium just got a lot more interesting.
- OpenAI's Codex tool has surpassed 3 million users -- from 'almost zero' at the start of the quarter, according to CFO Sarah Friar -- validating the coding-focused enterprise pivot.
- Anthropic raised a $30 billion Series G in March 2026 at a $380 billion post-money valuation, and is evaluating an IPO as early as October 2026 that could raise over $60 billion.
- PwC's new AI Performance study found that 74% of AI's economic value is captured by just 20% of organizations -- the AI haves and have-nots gap is widening faster than most boards realize.
Looking ahead… OpenAI is nearly doubling its workforce to 8,000 employees by year-end, hiring for a new 'technical ambassador' program designed to drive enterprise adoption -- a direct counter to Anthropic's enterprise sales dominance.
In a move with no real precedent in Silicon Valley, OpenAI, Anthropic, and Google announced on April 6 that they have begun sharing intelligence through the Frontier Model Forum -- an industry nonprofit they co-founded with Microsoft in 2023 -- to detect and block so-called adversarial distillation attempts by Chinese AI firms. The three companies, which compete fiercely for the same enterprise contracts, are now piping threat detection data into a shared system. That is not a normal Tuesday in Big Tech.
The catalyst was documented and expensive. Anthropic identified three Chinese AI laboratories -- DeepSeek, Moonshot AI, and MiniMax -- that collectively generated over 16 million exchanges with Claude via roughly 24,000 fraudulent accounts, extracting the model's capabilities to train and improve their own systems. OpenAI separately accused DeepSeek of attempting to distill its models 'through new, obfuscated methods' and submitted a formal memo to the House Select Committee on China making this case. Google's Threat Intelligence Group disrupted model extraction activity involving more than 100,000 prompts targeting Gemini's reasoning capabilities.
For enterprise security and compliance leaders, the downstream effects are real and imminent. Frontier model providers are moving toward a more controlled distribution model β tighter terms of service, more aggressive usage restrictions, and greater scrutiny of high-volume API usage. Companies relying on Claude or OpenAI APIs at scale should expect new compliance requirements. Perhaps more critically: when a Chinese lab distills from a frontier model, it does not copy the safety filters. The alignment work, the refusal training, the harm-reduction layers β none of it transfers. Stripped-down copies running without safety guardrails are the threat US officials say extends well beyond IP law.
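To make the 'greater scrutiny of high-volume API usage' concrete, here is a minimal sketch of the kind of heuristic a provider (or an enterprise auditing its own API keys) might apply: flag accounts that combine extreme request volume with highly templated prompts, the signature of scripted extraction. The thresholds and function names are illustrative assumptions, not anything the providers have published.

```python
from collections import defaultdict

# Hypothetical thresholds -- providers publish no such numbers.
MAX_DAILY_REQUESTS = 10_000      # volume ceiling per account per day
MIN_DUPLICATE_RATIO = 0.8        # templated prompts suggest scripted extraction

def flag_distillation_suspects(request_log):
    """request_log: iterable of (account_id, prompt) pairs for one day.

    Returns the set of account ids whose usage resembles bulk extraction:
    very high volume combined with low prompt diversity.
    """
    counts = defaultdict(int)
    unique_prompts = defaultdict(set)
    for account_id, prompt in request_log:
        counts[account_id] += 1
        unique_prompts[account_id].add(prompt)

    suspects = set()
    for account_id, n in counts.items():
        duplicate_ratio = 1 - len(unique_prompts[account_id]) / n
        if n > MAX_DAILY_REQUESTS and duplicate_ratio > MIN_DUPLICATE_RATIO:
            suspects.add(account_id)
    return suspects
```

Real detection systems are far more sophisticated (embedding-based prompt clustering, cross-account correlation to catch the 24,000-account pattern Anthropic described), but the compliance takeaway is the same: legitimate high-volume users should expect to document why their traffic looks the way it does.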
- This is the first time the Frontier Model Forum has been activated as an active threat-intelligence operation -- shifting from a safety research and policy body to an operational defense coalition.
- The Trump administration's AI Action Plan calls for a formal information-sharing center to address adversarial distillation as a national security issue, giving this coalition political tailwinds for export control legislation.
- Anthropic said the threat 'extends beyond any single company or region' and poses a national security risk, since distilled models often lack safety guardrails -- a point that will become a serious enterprise procurement consideration as the regulatory pressure builds.
Looking ahead… expect formal lobbying for AI distillation export controls and terms-of-service litigation against the named Chinese firms within 6 to 12 months -- backed by documented evidence that now has a name, a coalition, and a Congressional audience.
A new Goldman Sachs analysis has put a specific number on AI's growing impact on labor markets: 16,000 US jobs are being displaced per month by AI, with young workers -- those under 30 -- bearing a disproportionate share of the burden. The findings, from a note by economist Elsie Peng, represent one of the most granular attempts yet to separate AI's two competing effects on employment: substitution (AI replaces workers) versus augmentation (AI makes workers more productive and may expand hiring).
Gen Z workers are concentrated in exactly the types of roles AI automates most aggressively: data entry, customer service, legal support, billing. Without the accumulated experience and specialized judgment that insulate senior workers, they have little buffer against displacement. BCG's parallel analysis found that 43% of US jobs involve tasks that are at least 40% automatable -- and that companies cutting workforces beyond AI's actual current ability to replace those workers will see productivity drop, institutional knowledge disappear, and critical talent walk away.
Despite the grim headlines, enterprise leaders should keep a few calibrating data points in mind. A Gallup survey of 23,717 US employees found that 65% of workers in organizations that have adopted AI say it has improved their productivity and efficiency. A Federal Reserve Bank of Atlanta study of 750 corporate executives found 'little evidence of near-term aggregate employment declines,' with larger companies anticipating AI-driven workforce reductions while smaller firms expect modest gains. The story isn't 'AI is eliminating work.' It's 'AI is eliminating the first rung of the ladder for people just entering it.'
- Goldman's framework scores occupations on 'substitution risk' (AI handles most tasks, like insurance claims clerks) vs. 'augmentation potential' (human judgment remains essential, like lawyers and construction managers). Most enterprise roles fall somewhere in the messy middle.
- BCG warns that companies whose products are in high demand will continue to hire over time as productivity gains translate to growth -- but that 'those who fail to dramatically rethink work will see their competitors grow faster and more profitably.'
- IBM's Institute for Business Value estimates that AI will require 40% of the global workforce to acquire new skills within three years -- creating a 'skills premium' where workers who can leverage AI tools see wage growth, while those who cannot face stagnation.
Looking ahead… the Atlanta Fed study found that 'routine clerical roles are declining and demand for skilled technical roles is increasing' -- which is a diplomatic way of saying your reskilling budget line item is no longer optional.
Salesforce gave Slack its most ambitious overhaul since the $27.7 billion acquisition in 2021. At an event in San Francisco on March 31, CEO Marc Benioff unveiled more than 30 new AI-powered capabilities for Slackbot -- transforming it from a chat assistant into a full agentic system that can transcribe meetings across any video platform, monitor users' desktop activity, execute tasks through third-party tools via the Model Context Protocol (MCP), and serve as a lightweight CRM. The new features run on Anthropic's Claude and are live now for Business+ and Enterprise+ subscribers.
The strategically significant move: starting summer 2026, Slack will be automatically bundled with every new Salesforce customer account. That removes the question of whether enterprise buyers will pay for a separate AI layer -- they won't need to, because it arrives with the CRM they already purchased. Slackbot now functions as an MCP client, meaning it can connect to Agentforce, Google Workspace, Microsoft 365, Notion, Workday, ServiceNow, and more than 6,000 other applications in the Salesforce ecosystem without human intervention.
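For readers unfamiliar with what 'functions as an MCP client' means mechanically: MCP is built on JSON-RPC 2.0, and a client invokes a server-exposed tool with a `tools/call` request. The sketch below shows only the message shape; the tool name and arguments are invented for illustration, and a real client like Slackbot would also handle transport, capability negotiation, and responses.

```python
import json
from itertools import count

# JSON-RPC 2.0 requires unique request ids; a simple counter suffices here.
_request_ids = count(1)

def mcp_tool_call(tool_name, arguments):
    """Serialize a JSON-RPC 2.0 request invoking one tool on an MCP server.

    The "tools/call" method name comes from the MCP specification; everything
    passed in by the caller (tool name, arguments) is application-specific.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_request_ids),
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# e.g. asking a hypothetical calendar server to schedule a meeting:
msg = mcp_tool_call("calendar.create_event",
                    {"title": "Q2 review", "when": "2026-06-01T10:00"})
```

The practical upshot of this design is that any application exposing tools over MCP becomes reachable from Slackbot without a bespoke integration, which is what makes the '6,000 other applications' claim architecturally plausible.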
The competitive implications for Microsoft Teams are hard to miss. Salesforce has roughly 1 million businesses running on Slack, and the bundling strategy removes the friction of a separate purchasing decision entirely. The new system is anchored on a FedRAMP-certified model, addressing the security and compliance bar that enterprise buyers increasingly demand. Whether it's enough to shift accounts that have already standardized on Microsoft 365 is a different question -- but Benioff is clearly betting that making Slack the default AI front door to the entire Salesforce ecosystem changes the calculus.
- Slackbot can now transcribe meetings, generate summaries with action items, draft emails, schedule meetings, and monitor deals and calendars -- all while routing tasks to Agentforce or any MCP-connected enterprise app without human oversight.
- Salesforce reported 'about a million businesses running on Slack' and 2.5x revenue growth since the 2021 acquisition. The new bundling deal starting summer 2026 should accelerate penetration significantly.
- Slack's integration of Claude via MCP deepens Anthropic's footprint in everyday enterprise workflows -- following Claude Opus 4.6's deployment as an add-in inside Microsoft PowerPoint and Excel earlier this year, the company is now embedded in multiple competing platforms simultaneously.
Looking ahead… the shift toward agent orchestration -- where a single interface coordinates multiple AI systems across different enterprise applications -- is defining the competitive landscape for enterprise software in 2026 more than any other technical development, and Salesforce just made its most aggressive move to own that layer.