March 25, 2026

Breaking Even March 25th

Last Wednesday of March 2026. Time to take stock.


March was not a quiet month.

The Aether project — which had been the primary source of consistent, well-paying work for a significant number of Outlier contributors — wound down its broad-access phase. Thousands of workers found themselves on the wrong side of a cut that nobody announced, looking at empty queues and waiting on support tickets that went nowhere. I wrote about what actually happened with Aether and where the work went next, if you were caught in that and still trying to make sense of it.

Account deactivations at Outlier ran at more than 200% above the platform's own average for a two-week stretch. I watched a wave of posts from workers with 95% accuracy scores and months of consistent history — people who had every reason to feel secure — trying to figure out why they were suddenly locked out. Some of those accounts were reinstated. Many weren't. I got a brief deactivation notice myself; it was resolved before I even logged on to see it, which tells you a lot about how random the whole thing was.

DataAnnotation held up. Not exciting, not spectacular, just steady — which in a month like this one is actually worth something.

Alignerr continued to be the platform most likely to actually communicate with you, which remains its most underrated quality. The onboarding wait is still long. The work when it shows up is good.

That's the month at ground level.


The Bigger Picture

Here's what I think is actually happening, and why I think it matters more than any single platform's drama.

The easy part is over.

For the past two years, the AI training pipeline needed volume more than it needed precision. It needed millions of examples of basic human judgment — which response is clearer, which answer is more accurate, which instruction was followed correctly. That demand pulled in a huge number of workers, paid well enough to attract them, and created the ecosystem all of us are navigating right now.

That phase is winding down. Not because the money dried up, but because the models learned what they needed to learn from it. You don't keep paying a tutor for lessons the student has already mastered.

What the models need now is different. They're being asked to perform tasks that require actual domain expertise to evaluate — multi-step legal reasoning, production-grade code, medical diagnostic accuracy, complex financial analysis. The kind of work where a generalist's "this seems right" is not sufficient, because the errors are subtle and the stakes are high.

The industry is moving from high school to college. And college has different admission requirements.

What that looks like in practice: the floor for generalist work is compressing while the ceiling for expert work is rising faster than most people realize. Platforms like Mercor, Mindrift, Handshake AI's Fellowship program, and RWS are listing roles that pay $90 to $250 per hour for verified professionals — doctors, lawyers, senior engineers, data scientists. Those are not aspirational numbers on a landing page. Those are active postings as of March 2026.

At the same time, the structure of the tasks themselves is changing. "Chat eval" — grading a single response to a single prompt — is being replaced by "agentic eval" — watching an AI agent attempt a multi-step task using real tools, and evaluating whether its decision-making process was sound. It's harder. It requires you to hold more in your head at once. You cannot do it on autopilot. But it exists, it's where the volume is migrating, and it pays better.
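If it helps to see the difference concretely, here's a rough sketch of the two task shapes as data. Every field name below is hypothetical (I'm inventing a schema for illustration; no platform publishes theirs), but the structural difference is the point: a chat eval is one judgment, while an agentic eval is a judgment about every step plus the plan connecting them.

```python
# Hypothetical task shapes, invented purely for illustration.
# No platform publishes its schemas; real tasks will differ.

# Chat eval: one prompt, one response, one judgment.
chat_eval = {
    "prompt": "Explain what a 401(k) rollover is.",
    "response": "(model output goes here)",
    "rating": 4,                    # single rubric score
    "rationale": "Accurate, but buries the key caveat.",
}

# Agentic eval: a whole trajectory. You judge each tool call in
# context, plus the soundness of the plan and the final outcome.
agentic_eval = {
    "goal": "Find the cheapest refundable flight and draft a booking email.",
    "trajectory": [
        {"step": 1, "action": "search_flights", "succeeded": True},
        {"step": 2, "action": "compare_fares", "succeeded": True},
        {"step": 3, "action": "draft_email", "succeeded": False},
    ],
    "step_ratings": [5, 4, 2],      # each decision, judged in sequence
    "process_rating": 3,            # was the plan itself reasonable?
    "outcome_rating": 2,            # did it actually reach the goal?
}

# The reviewer's mental load scales with the trajectory, not the prompt.
print(f"chat eval: {len(chat_eval)} fields to weigh")
print(f"agentic eval: {len(agentic_eval['trajectory'])} steps plus 3 ratings")
```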

I'm going to be writing about all of this in depth over the next few weeks, because I think the workers who see this transition coming are the ones who will still be earning from it twelve months from now. The workers who don't are going to keep being surprised every time a project ends.


Platform Pulse

For the live updates, after some testing I decided to narrow my focus to just three platforms. I wasn't pulling enough data on the others to make informed decisions; without enough information it becomes a coin toss, or worse, misinformation. I'm still working on widening that coverage, but in the meantime these three get live updates roughly every hour.

Outlier AI — ⚠️ Rough Month
The account instability was the story of March. If your account is in good standing, the Aether work that remains is real and pays well — but it's narrower and more demanding than it was. This is not a platform to be on alone right now. If it's your primary income source, March should have been the wake-up call to fix that.

DataAnnotation — 🟢 Holding
The reliable if unexciting option. No elevated deactivation signals, no major payment issues, and task availability that is sporadic but dependable. The qualification barrier is still high — one shot, no second chances — but once you're in, the platform is doing what it's supposed to do.

Alignerr — 🟢 Stable, Slow Onboarding
The work is good when it's there. The wait between project matches remains the main frustration. If you passed the Zara interview and are in limbo, you're not alone — this is a platform where the onboarding timeline is measured in weeks or months, not days. It's worth the wait if the work matches your background.


What's Coming

The research I've been sitting on for the last few weeks is going to start turning into articles. Here's what's in the pipeline:

"AI Is Going to College" — The full breakdown of why the work is changing, what the next phase actually looks like, and how to position yourself to be on the right side of it.

"If You Have Credentials, You're Leaving Money on the Table" — The platforms paying $90 to $250/hr for doctors, lawyers, and engineers right now are real. Most credentialed workers don't know they exist. This is that map.

"Babel Audio: Get Paid to Have a Conversation" — Voice data is its own lane in the AI training world. Babel Audio is the platform doing it well. Complete guide on how it works, what it pays, and whether it's right for you.

"What 'Agentic AI' Actually Means for Your Paycheck" — The shift from chat eval to agent eval is the most important structural change in this industry right now. Breaking it down into plain English.

"Red Teaming: The Best Work in AI Nobody Talks About" — Adversarial testing — getting paid to try to break AI models — is consistently the highest-rated work experience in these communities. Here's how to get into it.

More to come. The research was worth sitting on. The articles will be worth reading.


On Breaking Even

I want to say something directly, because it's the last newsletter of the month and this feels like the right moment for it.

A lot of people found this site because something went wrong. The queue went empty. The account got removed. The project ended without warning. They were looking for confirmation that it wasn't their fault, or an explanation for why it happened, or just evidence that someone else had been through the same thing.

That's not the best time to be evaluating whether you can depend on this kind of work. But it's when most people actually start asking the question.

The answer hasn't changed: you can't depend on it. Not because the platforms are all bad actors, not because the work isn't real, but because the structure of the relationship is not designed for dependency. You are a contractor. The contract ends. Sometimes without notice, sometimes without explanation, always without severance. That is the deal.

What you can do is be smart about it. Keep multiple platforms active. Treat every high-paying project like it has an expiration date, because it does. Use the good months to build the cushion that makes the bad months manageable. And if your background gives you access to the specialist tiers, or if you have gained enough experience to get to those tiers — get there now, because that's where this is going.

The work's real, and the money's real, but the rules are different. This isn't the kind of work we planned for, and it offers none of the protections that come with a regular job. Know the rules and you can play the game. Forget them and the game will remind you.

More soon.


Breaking Even tracks AI gig platform health by analyzing publicly available data sourced from multiple online resources. Data is reported independently of any platform mentioned, but some application links may be referral links that pay a small commission for successful signups. This has no impact on what I report, and there is no cost to you. Anyone who asks you to pay money for the opportunity to work or gain access to work should not be trusted.
