A VC-Backed Startup Just Open-Sourced What I Built in My Apartment
Last Tuesday, Galileo — a well-funded AI company backed by Databricks Ventures and Battery Ventures — released Agent Control. Open source. Apache 2.0 license. Globe Newswire press release. Integrations with CrewAI, Cisco AI Defense, and Glean on day one.
Agent Control is an "open source control plane that empowers organizations to define and enforce desired behavior across all their AI agents."
I read the announcement three times. Then I went for a walk.
Because I built that. Not conceptually. Not "something similar." The same thing. Policy-based agent governance. Centralized behavioral enforcement. Tiered permissions. Action logging. The whole architecture.
I built it in an apartment in Cebu. They built it in San Francisco with a team of ML engineers. We arrived at the same design.
Here's the part nobody talks about when they write about AI agents.
There are currently two types of people building with agents:
Type 1 raises $20M, hires a team of 15, spends 8 months building an agent platform, launches with a press release, and gets covered in TechCrunch.
Type 2 buys $380/month in API credits, connects 8 agents to their actual businesses, watches them break in real-time, patches the failures, and ships governance because production forced them to.
Type 1 builds from theory. Type 2 builds from scars.
I'm Type 2. And the uncomfortable truth for Type 1 is that we keep arriving at the same architectures — because the failure modes are universal.
Let me be specific about what I mean.
Galileo's Agent Control does five things:

1. Centralized policy enforcement across agents
2. Input/output evaluation before actions execute
3. A decision framework: deny, steer, warn, log, or allow
4. Vendor neutrality (works with any agent framework)
5. Real-time governance without slowing agents down
My system — built over five months with Claude, running three businesses — does functionally the same thing:
Policy enforcement: Every agent operates under a tiered permission system. Tier 1 (read/research) runs autonomously. Tier 2 (write/modify) requires human proposal-and-approve. Tier 3 (publish/pay/communicate externally) requires explicit human execution. These aren't guidelines. They're architecture.
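As a rough sketch of what a tiered dispatcher like this looks like in code (the action names and function names here are my illustrations, not the actual system):

```python
from enum import IntEnum

class Tier(IntEnum):
    READ = 1      # autonomous: research, read-only queries
    WRITE = 2     # propose-and-approve: drafts, modifications
    EXTERNAL = 3  # human executes: publish, pay, outbound comms

# Hypothetical action-to-tier mapping; real systems would load this from config
ACTION_TIERS = {
    "search_web": Tier.READ,
    "draft_post": Tier.WRITE,
    "publish_post": Tier.EXTERNAL,
    "pay_invoice": Tier.EXTERNAL,
}

def dispatch(action: str, approved: bool = False) -> str:
    # Unknown actions default to the most restricted tier
    tier = ACTION_TIERS.get(action, Tier.EXTERNAL)
    if tier == Tier.READ:
        return "run"                      # executes immediately, no gate
    if tier == Tier.WRITE:
        return "run" if approved else "queue_for_approval"
    return "human_must_execute"           # the agent never performs Tier 3 itself
```

The key design choice is the default: an action the dispatcher has never seen falls into the most restricted tier, not the least.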
Input/output evaluation: Before my marketing agent can publish anything, the content goes through an approval gate. Before my finance agent can flag a payment, it produces a structured report for human review. The agent never touches the actual action — it touches the request for the action.
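"The agent touches the request, not the action" can be made concrete with a small sketch: instead of calling the payment API or the publishing API, the agent emits a structured proposal into a review queue. Everything here (field names, the JSONL file) is illustrative, not the author's implementation:

```python
import json
from datetime import datetime, timezone

def propose_action(agent: str, action: str, payload: dict,
                   queue_path: str = "review_queue.jsonl") -> dict:
    """The agent emits a proposal record; a human reviews and executes it."""
    proposal = {
        "agent": agent,
        "action": action,
        "payload": payload,
        "proposed_at": datetime.now(timezone.utc).isoformat(),
        "status": "pending_review",
    }
    # Append to a durable review queue (a JSONL file in this sketch)
    with open(queue_path, "a") as f:
        f.write(json.dumps(proposal) + "\n")
    return proposal
```

Because the proposal is data, it can be logged, diffed, and rejected; the dangerous side effect only ever happens on the human side of the gate.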
Decision framework: My system uses trust scores. 0-100. Goes up for accurate work and honest "I don't know" responses. Goes down for fabrication, unauthorized actions, or silent failures. After 90 days clean, capabilities get promoted one tier. Exactly the kind of progressive trust Galileo is pitching to enterprises at scale.
Vendor-neutral: I run Claude-based agents, but the governance layer doesn't care about the model. It cares about the action. An agent could be GPT-4, Claude, Gemini, or a shell script — if it tries to publish, it hits the gate.
Real-time without slowdown: Agents don't wait for approval on read operations. They don't wait for approval on research. They only wait when they try to do something that could cause damage. The 80% of agent work that's information gathering runs at full speed.
Same problems. Same solutions. Different continents, different budgets, zero coordination.
A Dev.to post this week listed the three biggest problems with AI agents in 2026: siloed memory, excessive setup complexity, and cost opacity. The author cited a stat: 95% of generative AI pilots fail to deliver measurable ROI. Gartner predicts 40%+ of agentic AI projects will be cancelled by 2027.
Here's what that stat actually means when you peel it back:
The pilots fail because companies treat agents like software you install. Drop in an AI agent, point it at a task, walk away. That's how demos work. That's not how production works.
In production, your agent will misinterpret a customer email and send an apology for something that wasn't a complaint. Your finance agent will pay an invoice it was only supposed to flag. Your content agent will spawn 44 tasks in a retry loop and burn $16 in compute doing nothing. Your research agent will include customer email addresses in a shared summary.
I know because all of those happened to me. In the last 23 weeks.
The 95% failure rate isn't about AI being bad. It's about governance being absent. Companies skip the boring part — the permissions, the logging, the approval gates, the trust scoring — and then act surprised when the agent does something unauthorized at machine speed.
Galileo exists because enterprises need someone to sell them the boring part. I exist because I couldn't afford to skip it.
People ask me what the $200/month CEO newsletter is actually about. It's not an AI tutorial. It's not a tech review.
It's a log of what happens when you give AI agents real authority over real businesses and then watch very carefully.
I run 8 agents. They handle marketing, sales, research, operations, finance, content, and engineering across three companies. Total cost: $380/month. They process 230+ tasks per week.
The thing that separates "AI agents as a concept" from "AI agents as infrastructure" is governance. Not the exciting kind. Not "we trained a model to be safe." The boring kind. Permission tiers. Action logging. Approval gates. Trust scores that go down when agents lie about completing tasks.
That's what Galileo productized. That's what I built out of necessity. And that's what most companies deploying agents in 2026 are still missing.
If you're running agents — or thinking about it — I put together the exact framework I use. The permission tiers, the trust scoring system, the approval gates, the logging setup. Everything I learned from 23 weeks of agents breaking things in production.
It's the governance layer that Galileo is selling to enterprises, adapted for founders and small teams who can't afford to learn these lessons the expensive way.
RJ runs three companies from Cebu with 8 AI agents. This newsletter documents what actually happens when you do that. Subscribe for the real dispatches — not the LinkedIn version.