
AI's New Power Law: The People Who Treat It Like Engineering Are Lapping Everyone Else

From a 37-year-old SimCity codebase ported in 4 days without reading a line of code, to meta-prompting loops that turn a 72B model into a GPT-4 beater, to Karpathy running 100 ML experiments overnight on one GPU — a small cohort is treating AI as a compounding engineering discipline while 95% of vertical markets sit untouched.


Karpathy’s autoresearch repo ran 83 ML experiments overnight on a single GPU, kept 15 improvements, and discarded the rest, all without a human touching the code. The human writes a Markdown spec; the agent edits the training code, trains for exactly 5 minutes per run, scores the result against a fixed metric, and loops. Three files, roughly 100 experiments, one night. The pattern running through every post in this issue crystallizes here: design the arena, then let the AI iterate.

Mar 08, 2026 · 7 min
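The keep-or-discard loop described above can be sketched as a simple hill-climbing harness. This is an illustrative stand-in, not code from the autoresearch repo: `run_experiment` plays the role of a 5-minute training run scored on a fixed metric, and a random perturbation plays the role of the agent's proposed edit.

```python
import random

def run_experiment(config):
    # Stand-in for "train for exactly 5 minutes, score on a fixed metric".
    # A toy objective keeps the sketch runnable; higher is better.
    return -(config["lr"] - 0.1) ** 2 - (config["dropout"] - 0.3) ** 2

def autoresearch_loop(n_experiments=100, seed=0):
    rng = random.Random(seed)
    best = {"lr": 0.5, "dropout": 0.5}   # the human-written starting point
    best_score = run_experiment(best)
    kept = 0
    for _ in range(n_experiments):
        # The agent proposes a change; here, a random tweak of the config.
        candidate = {k: v + rng.uniform(-0.05, 0.05) for k, v in best.items()}
        score = run_experiment(candidate)
        if score > best_score:
            # Keep the improvement...
            best, best_score, kept = candidate, score, kept + 1
        # ...otherwise discard it, exactly as in the overnight sweep.
    return best, kept

best, kept = autoresearch_loop()
```

The real system swaps the toy objective for an actual training run and lets a coding agent, not a random perturbation, propose each edit. The arena design is the point: a fixed time budget, a fixed metric, and automatic keep/discard are what make unattended overnight iteration safe.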

A meta-prompted 72B open-source model beat GPT-4 on real tasks. The technique: feed your actual inputs and outputs into an LLM, have it write a prompt, critique the output, fold the feedback back in, and repeat. Garry’s YouTube prompt is on version 27. Mitchell Hashimoto found that the same prompt run on a more powerful model can produce worse results without this loop. The compounding isn’t in the model; it’s in the prompt-engineering cycle.

Feb 25, 2026 · 9 min
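The run-critique-rewrite cycle is straightforward to wire up. A hedged sketch, assuming a generic `llm(prompt) -> str` call you would replace with a real model client; the stub below just echoes its input so the loop is runnable as-is:

```python
def llm(prompt: str) -> str:
    # Stub standing in for a real model call (e.g., a 72B open-weights model).
    # Replace with your API client of choice.
    return f"[model output for: {prompt[:48]}...]"

def meta_prompt_cycle(prompt: str, examples: list[tuple[str, str]],
                      rounds: int = 3) -> str:
    for _ in range(rounds):
        # 1. Run the current prompt over your actual inputs.
        outputs = [llm(f"{prompt}\n\nInput: {inp}") for inp, _ in examples]
        # 2. Have the model critique the outputs against expected results.
        pairs = "\n".join(f"got: {o}\nwanted: {want}"
                          for o, (_, want) in zip(outputs, examples))
        critique = llm(f"Critique these outputs and list concrete "
                       f"prompt fixes:\n{pairs}")
        # 3. Fold the critique back into a rewritten prompt, and repeat.
        prompt = llm(f"Rewrite this prompt, applying the critique.\n"
                     f"PROMPT:\n{prompt}\n\nCRITIQUE:\n{critique}")
    return prompt

v_next = meta_prompt_cycle("Summarize this transcript in 5 bullets.",
                           [("transcript text...", "five tight bullets")])
```

Each pass through the loop is one prompt "version"; version 27 is just this cycle run against real examples until the critiques stop finding anything to fix.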

AI research is producing results so simple they look like science-fair projects. Sean Goedecke’s analogy: imagine discovering that rubber-band cars can match combustion engines if you soak the bands in maple syrup; suddenly a million easy questions are open, and LLMs are that moment. No PhD required, just curiosity and compute. The “gentleman science” window won’t stay open forever, but right now amateurs with systematic approaches are publishing real discoveries.

Feb 16, 2026 · 4 min

Ira Glass spent 8 years at NPR making work he found embarrassing. The post reframes AI not as a replacement for creative struggle but as a tool that compresses the gap between taste and ability — the gap where most people quit. The throughline: AI rewards people willing to do reps and iterate, the same people meta-prompting and running overnight experiments. The skill is still showing up; the tooling just got radically faster.

Feb 14, 2026 · 4 min

Christopher Ehrlich pointed OpenAI’s Codex at SimCity’s 1989 C codebase, assembly ported to C and full of bit-shifts and unreadable math, and had it running in a browser in 4 days. He never read a single line. The trick: property-based tests comparing the TypeScript output to the original C, so the AI iterated against a verification layer. 25% of YC startups now have codebases that are 95% AI-written. The leverage isn’t hypothetical anymore.

Feb 10, 2026 · 4 min
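The verification layer is the transferable idea: generate many inputs, run both implementations, and treat any divergence as a concrete failing case the agent must fix. A minimal sketch with hypothetical stand-in functions (the real project compared TypeScript output against the original C; nothing below is from that codebase):

```python
import random

def original_routine(x: int) -> int:
    # Stand-in for a 1989-era C routine: multiply by 10 via bit-shifts.
    return (x << 3) + (x << 1)

def ported_routine(x: int) -> int:
    # Stand-in for the AI-written port, expressed readably.
    return x * 10

def property_check(n_cases: int = 1000, seed: int = 0):
    # The agent iterates against this layer: any mismatch is a concrete
    # failing input to fix, so no human ever has to read either codebase.
    rng = random.Random(seed)
    for _ in range(n_cases):
        x = rng.randrange(-(2 ** 16), 2 ** 16)
        if ported_routine(x) != original_routine(x):
            return x   # counterexample handed back to the agent
    return None        # no divergence found

counterexample = property_check()
```

Libraries like Hypothesis automate the input generation and shrink failing cases to minimal ones; the design choice that matters is that the tests, not a human reader, define what "correct" means.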

A university endowment’s engineers panicked after seeing Claude Code — and that panic is the tell. The post argues fear of AI scales inversely with ambition: if your plan is to keep doing what you’re doing, the machine is terrifying. If your plan is to 10x output, it’s the best tool ever built. This frames the entire arc — the gap between AI-as-threat and AI-as-lever is a mindset gap, and it’s widening.

Feb 07, 2026 · 4 min