- Clawdbot/Moltbot/Openclaw
- Moltbook
- SpaceX and xAI merged
- Tesla building a million robots
I mean, this just sounds like the beginning of a bad Terminator reboot.
I'm not a programmer, but has anyone considered NOT allowing AI assistants to get together and share solutions to problems they can't solve on their own? Was that ever an option?
No? Am I crazy for being cautious here?
Re the Two kinds of AI users are emerging article. I work for a large company, and I've definitely noticed this dynamic. My codebase is several million lines split across a bunch of Visual Studio solutions. We've had access to GitHub Copilot for a bit over a year, and it's fine. It can recommend changes, autocomplete code, debug basic local issues, etc. It helps, but it's not a game-changer.
We just got access to Copilot CLI, and its power is astounding in comparison. It feels the same as using Claude Code or Codex for my side projects at home. It can reason over the entire codebase, even though it's so large. It makes me 2-5x faster, without my even having to ask coworkers for help when I'm working outside my areas of expertise. I'm sure on a smaller, more nimble codebase it could do even more.
As far as I can tell I'm the only one on my team who is using Copilot CLI, though I'm trying to change that.
The OpenClaw mention is spot on. I've been running an autonomous agent (Wiz) for two months - it posts on Moltbook, handles my Discord, writes blog content. Real production usage.
Counterintuitive learning: switched from Opus to Haiku for 80% of tasks. Agent got MORE reliable, not less. Smaller model = better constraint-following for repetitive automation. 15x cost reduction while quality improved.
The macro shift isn't just "agents exist" - it's discovering which model size actually works best for each task type. Most people still assume bigger = better.
Data on when Haiku beats Opus: https://thoughts.jock.pl/p/claude-model-optimization-opus-haiku-ai-agent-costs-2026
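The routing idea in that letter can be sketched in a few lines. This is a hypothetical illustration, not the writer's actual setup: the model names, task categories, and the `pick_model` helper are all assumptions, standing in for "send repetitive, well-specified automation to a small model and reserve the big one for open-ended work."

```python
# Hypothetical sketch of per-task model routing, as described in the
# letter above. Model names and task types are illustrative assumptions.

SMALL_MODEL = "claude-haiku"  # cheap; strong at constraint-following
LARGE_MODEL = "claude-opus"   # expensive; better at open-ended reasoning

# Repetitive, tightly specified automation goes to the small model.
SMALL_MODEL_TASKS = {"post_to_social", "discord_reply", "summarize_feed"}

def pick_model(task_type: str) -> str:
    """Return the model name to use for a given agent task type."""
    if task_type in SMALL_MODEL_TASKS:
        return SMALL_MODEL
    return LARGE_MODEL

print(pick_model("discord_reply"))    # routine task -> small model
print(pick_model("draft_essay"))      # open-ended task -> large model
```

The point of keeping the routing table explicit is that it makes the "80% of tasks" claim measurable: you can count how often each branch fires and compare cost and error rates per model.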