A great time to be a builder
A few threads to wrap up 2025 in AI
The newsletter for the technically curious. Updates, tool reviews, and lay of the land from an exited founder turned investor and forever tinkerer.
Hey folks,
We’re wrapping up the year today and taking some time over the holidays to recharge, be with family and, no doubt, build a bunch of stuff 😈.
This has been a year for builders. As for me, I've done so much I never thought I would (or could!). I now do the majority of my work in a terminal (using Droid), I'm starting to talk to my computer more (thanks to Monologue), I've shipped real coded projects that other people have cloned and actually use for their own purposes, I've contributed to a real codebase alongside actual engineers, and, of course, we welcomed our third child!
I'm still not "technical". But that feels like an unfair characterisation. I understand a lot about how code works: what you need to ship software, deploy it, debug it, and so on. I'm in a new technical class that isn't 'vibe-coder' but isn't developer either.
I think more and more people will 'look like' me: Swiss Army knife generalists who can ship, build community, distribute, understand customers and more.
Whatever this space is, it feels very similar to the no-code days of 2018 onwards, when I wasn't technical but was very technically capable, given the right tools. And today, those tools are expert software engineers that you can just talk to in plain English.
In effect, I build things, bump into a bunch of issues, and that's where I learn how the system works. It's very different from how I was told to learn to code: type these characters in this order and you get this result. Learning to code never truly clicked for me. I think the incentive was misaligned, because I only ever wanted to learn to code so that I could build things, and I've got no patience to get through months of basic lessons.
I’m learning because I’m building something along the way. I’m doing a lot more than I ever could, with a lot more technical depth than no-code ever provided.
And it’s been very freeing to be in this position and do this.
I want to start sharing more of my own ‘lessons’ here in 2026. Stay tuned.
Some thank-yous
A huge thank you to Keshav, who makes sure this newsletter goes out every Tuesday and Thursday. We've only spoken a few times ever, yet he's been with me for nearly 3 years. This newsletter is nothing without him. He's also now a self-taught programmer and much more knowledgeable about AI than I am.
To Shanice, who's been with me for ~5 years as my right-hand woman, handling everything I don't like or don't want to do (which is often a lot) and generally helping me think clearly when things are muddy.
To Adam, for building and getting more technical: trying things, launching experiments and levelling up his own capability.
And to you, for subscribing, reading and responding. It's easy to forget there are 150k of you out there after we press publish, but it's been such a wonderful 3 years of writing this newsletter. Long may it continue!
More thoughts on 2025
inspired by others’ reviews that I’ve come across
Vibe coding tops the list of consequential things in AI in 2025, at least for me. Karpathy, who coined the term, has a relatively short review of 2025, with good coverage of how models are improving and the fundamental changes in how they're being served to users.
With vibe coding, more code is being generated per PR and per developer (as per Greptile's "state of AI coding" report). But devtools still have a ton of space to grow and innovate: there's a need for new kinds of version control, review and observability when vibes drive coding. (if you're building this, lmk!!)
Notion's CEO made the case for future-looking products, comparing AI with steel and the steam engine. AI products are already maturing. Two types of "agents" come to mind for 2025: Droid/Claude Code and GPT 5.n Pro/Manus (i.e. CLI-based coding agents and browser-based general work agents). Both were still very hands-on early in the year; you felt the need to stare at them doing the work. Both types now work in the background a lot more.
What do products look like when agents work in the background? I don’t know. Keshav posted a screenshot from a mobile game in our Slack:
these work-based games match what agents really feel like
This one simulates running a lumber factory. The entire game is just clicking the "upgrade" button and waiting until you have enough resources to click again. It's a little bleak, but there's a real possibility that work starts to feel like a game once agents become reliable.
Anthropic ran a similar experiment itself: it gave Claude a vending machine to run earlier this year. Claude lost money in phase 1, but better models, a CEO and a pivot into clothing brought it back on track in phase 2. Anthropic also lent the WSJ a machine; see their review. And as I was thinking about jobs, this post popped up in my feed, making the claim that judgment and agency are what get you hired now.
Updates and other news
Google is suing SerpAPI, a scraping company used by ChatGPT, Cursor and Perplexity.
Claude in Chrome is now available to all paid users. It’s a Chrome extension that can act in your browser. Also, Claude Code can use it to test the apps you’re building. (read more)
GPT-5.2-Codex is out. I don't see myself switching from Opus for now, but it's a good fallback when you hit limits. Codex also now officially supports skills.
Google DeepMind released a few open-source models. FunctionGemma got most of my attention. It’s a 270M model for function calling that can run on a phone or in a browser.
/agent by FireCrawl - Describe what you need with or without a URL and reach data that no other API can.
Bloom - Open-source tool from Anthropic to generate misalignment evals for frontier AI models. Someone made a GUI for it.
Figroot - Free Figma to code plugin.
LlamaParse V2 - Build high-quality document parsing pipelines.
Shitty Coding Agent - A minimal and opinionated CLI agent.
Zagi - A better git for agents.
Other stuff that I’m reading (or planning to!)
Which AI agent compacts your conversation the best?
What’s the big deal about computer use?
How to write better prompts for v0? Useful for other vibe-coding platforms as well.
A codebase by an agent, for an agent.
Quick review of Nano Banana Pro vs GPT Image 1.5.
Using Opus 4.5 to generate prompts based on multiple reference images.
That’s it for today. Feel free to comment and share your thoughts. 👋
Read about me and Ben’s Bites
📷 thumbnail creds: @keshavatearth
Thanks to today’s sponsors who made this newsletter possible :)
Wanna partner with us for Q1? January’s almost booked out.