🌚 I ruined my weekends teaching a 1B model to interpret dreams on a phone
Hey,
I published a new post — and yes, the title is accurate.
Over the past several months I've been building Sandman, a dream journaling app for Android with fully on-device AI. The idea was simple: fine-tune a small model to interpret dreams without ever sending data to a server. The execution was… less simple.
In the post I walk through:
- Choosing a model — why I landed on Gemma 3 1B and what I ruled out
- Data and first mistakes — 22K training examples, overfitting, and losing JSON output after fine-tuning
- One model, multiple jobs — getting a single checkpoint to handle tagging, sentiment, and interpretation
- Getting it on a phone — the SafeTensors → TFLite → `.task` MediaPipe pipeline that nearly broke me
- What I'd do differently — honest post-mortem from a frontend engineer who had no business doing any of this
It's a learning-in-public post, not a tutorial. If you've been curious about on-device ML or fine-tuning small models, I think you'll find something useful (or at least relatable) in it.
The model is also up on Hugging Face at `mujo-labs/sandman-gemma3-1b-multitask` if you want to poke at it.
Thanks for reading,
Jacob