💡 Happy New Year! Here's how I AI (and more)
Recent posts on using AI for Product work, building TL;DL for podcast summaries, and lessons from a music discovery side project.
Hi friends, and Happy New Year! I’ve been quite busy on the blog during the break, so I wanted to send y’all an update so you can check it out if you’d like. Here’s a summary of all the posts from the last two weeks…
Full Posts
How I Use AI for Product Work
A breakdown of my approach to using LLMs as a thinking partner for Product work. The system combines opinionated prompts, personal context files, and MCP servers that connect to real data sources. The philosophy: give the model context (who you are, what you're working on) and constraints (what "good" looks like) so it can push back on weak reasoning instead of saying "Great idea!" to everything. I use it for document review, stress-testing ideas, and understanding technical concepts well enough to hold my own in architecture reviews.
How My AI Product "Second Brain" Evolved
An update on the AI workflow system from the post above. The setup has shifted from Windsurf to Claude Code/OpenCode, with slash commands replacing manual @ mentions. The key additions: /today generates end-of-day summaries from filesystem activity, /weekly rolls those up for manager updates, and /briefing pulls Google Calendar data to prep for the next day. The philosophy has evolved too—less "sparring partner," more "capable colleague who handles the work around the work."
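For readers wondering what a slash command actually is here: in Claude Code, a custom command is just a markdown file whose contents get used as the prompt when you type its name. A hypothetical, simplified version of a /today-style command might look like this (the filename and prompt text are illustrative, not the actual setup from the post):

```markdown
<!-- .claude/commands/today.md — invoked by typing /today -->
Review today's activity in this workspace (e.g. git log and file changes
since this morning), then write a short end-of-day summary: what shipped,
what's in progress, and any open questions. Keep it under 200 words.
```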
Introducing TL;DL: AI-Powered Podcast Summaries
A weekend project that generates AI summaries from podcast episodes. Three templates: Key Takeaways for actionable insights, Narrative Summary for interviews and story-driven content, and ELI5 for technical topics. The ELI5 version passed the real test—my wife's clinical psychology podcasts became comprehensible to me. Built on Cloudflare Workers with Durable Objects solving the eventual consistency problem for real-time status updates.
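The Durable Objects point, for the curious: each episode's processing job gets exactly one object instance, so every status read and write goes through the same place, whereas an eventually consistent store can serve a stale status to a polling client. A minimal sketch of that single-owner pattern in TypeScript (class and function names are illustrative, not TL;DL's actual code, and real Durable Objects would persist state via the runtime's storage API rather than in memory):

```typescript
// Sketch of the single-owner pattern a Durable Object provides: one
// instance per job ID, so status reads always see the latest write.
class EpisodeJob {
  private status = "queued";

  advance(next: "transcribing" | "summarizing" | "done"): void {
    this.status = next; // single writer: no conflicting replicas to reconcile
  }

  current(): string {
    return this.status; // read hits the same instance, never a stale copy
  }
}

// One object per episode ID, mimicking how a Durable Object namespace
// routes every request for a given ID to the same instance.
const jobs = new Map<string, EpisodeJob>();
function jobFor(episodeId: string): EpisodeJob {
  let job = jobs.get(episodeId);
  if (!job) {
    job = new EpisodeJob();
    jobs.set(episodeId, job);
  }
  return job;
}
```

The contrast is with an eventually consistent store like Workers KV, where a write propagates over time and a status poll can briefly return an old value; funneling all of a job's state through one addressable object sidesteps that.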
Building a music discovery app (and what I learned about Product)
A reflection on building Listen To More, a music discovery platform that combines Last.fm listening data with AI-generated context. The project has gone through three iterations and now includes MCP servers for Last.fm and Discogs. The PM takeaway: side projects are low-stakes learning environments, there's no substitute for using your own product, and building with your company's tools gives you practical knowledge that transfers to customer conversations.
Link Posts
- What's Actually Working with AI — Natalia Quintero on why AI adoption doesn't spread like other software: your prompts and workflows are built around your context, so peer-to-peer training beats top-down rollouts.
- Humans make mistakes, and so does AI. It's fine. — Will Larson on the double standard: when colleagues write confusing docs we ask them to clarify, but when agents produce something off we declare the technology broken.
- Where Do the Children Play? — Eli Stark-Elster reframes the "kids and screens" debate: digital space may be the only place left where children can exist without adult supervision.
- Building MCP servers in the real world — Internal MCP servers are where the real value is—giving users access to complex systems that previously required special skills or documentation to navigate.
- Measuring AI's Impact on Shipping Speed and Code Quality — Will Larson on the risk of AI adoption becoming optics over productivity. The proposed solution: correlate team-level AI usage with health metrics instead of chasing per-commit attribution.
That's it for now! I hope to share a little more frequently this year, but you know how New Year's resolutions go... at least I'm off to a great start.