Discussion about this post

Pawel Jozefiak

Shipping at inference speed - this captures exactly what it feels like to build with autonomous agents now. I just shipped a complete Slay the Spire 2 card database in days, not weeks, because the iteration cycle compressed from hours to minutes.

The key insight: when agents can read code, understand context, make decisions, and execute changes autonomously, the constraint shifts from "how long does coding take" to "how quickly can I validate and decide." I could review and approve changes faster than traditional development cycles.

What's wild: the agent (Wiz) didn't just build what I asked for - it extended itself with new skills mid-project when it encountered gaps. Needed image processing? Created a skill. Needed deployment automation? Built it. This self-extension is what makes agentic coding fundamentally different from traditional tools.

I wrote about this rapid-build experience here: https://thoughts.jock.pl/p/slay-the-spire-2-everything-we-know-card-database-2026 - we're not just shipping faster; we're operating at a fundamentally different tempo.

JP

That Steipete piece about not reading code anymore resonated. I've been shipping with Claude Code at a similar pace and the bottleneck really is inference speed now, not typing. The Cerebras partnership with OpenAI is the most concrete move towards fixing that. Codex-Spark on wafer-scale silicon does 1,000+ tokens/s, which is fast enough that the tool stops feeling like a background worker and starts feeling like a pair programmer. Wrote a breakdown of the chip architecture and why it matters: https://reading.sh/chatgpt-and-codex-are-about-to-get-helluva-lot-faster-51ad25a7eed0
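A quick back-of-envelope sketch of why that throughput jump changes how the tool feels. The 1,000 tokens/s figure comes from the comment above; the 50 tokens/s baseline and the 800-token response length are illustrative assumptions, not measured numbers:

```python
# Rough latency math: decode rate vs. perceived interactivity.
# 1,000 tok/s is the Codex-Spark figure cited above; the baseline
# rate and response size are assumptions for illustration.
def response_latency(tokens: int, tokens_per_sec: float) -> float:
    """Seconds to stream a full response at a given decode rate."""
    return tokens / tokens_per_sec

RESPONSE_TOKENS = 800  # a typical multi-file code edit (assumed)

slow = response_latency(RESPONSE_TOKENS, 50)    # assumed GPU baseline
fast = response_latency(RESPONSE_TOKENS, 1000)  # wafer-scale figure above

print(f"at 50 tok/s:   {slow:.1f} s")   # 16.0 s - long enough to tab away
print(f"at 1000 tok/s: {fast:.1f} s")   # 0.8 s - conversational latency
```

Sub-second full responses sit inside the latency window of a live conversation, which is why the tool stops reading as a background job and starts reading as a pair programmer.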
