AI Is In Its Gentleman Science Era
AI research is wide open for builders with heretical ideas. The window won’t last forever.
Source: seangoedecke.com
TL;DR
LLMs have reset the research game. The biggest breakthroughs are simple ideas anyone can try, and the easy questions haven’t been answered yet.
Something unusual is happening in AI research. The amateurs are winning.
Not “winning” like hobbyists occasionally stumble onto something useful. Winning like William Herschel, a composer, discovering Uranus. Winning like Antoine Lavoisier, a tax collector and lawyer, laying the foundation for modern chemistry.
The gentleman scientist disappeared because science got hard. The 2025 Nobel Prize in physics was awarded for “macroscopic quantum mechanical tunneling and energy quantization in an electric circuit.” Even understanding the terms takes years of study. You can’t dabble in that.
But AI research? Sean Goedecke puts it plainly: “AI research discoveries are often simpler than they look.” The fearsome mathematics in papers often conceals ideas you could express in five lines of code.
The Window Is Open
Here’s what makes this moment special: strong LLMs are “so new, and are changing so fast, that their capabilities are genuinely unknown.”
That’s not a bug. That’s a feature. When capabilities are unknown, there are millions of easy questions to answer. Does this technique work? What if you combine it with that? What happens if you try it on a different task?
You donât need a PhD to run these experiments. You need curiosity and compute.
Goedecke describes it through a thought experiment that makes the whole thing click: imagine someone discovers that rubber-band-powered cars, the kind kids build for science fairs, can match combustion engines if you soak the rubber bands in maple syrup.
That’s LLMs. A simple idea (train a large transformer on human-written text) produces a surprising and transformative technology. Suddenly there are a million questions worth exploring, and most of them don’t require a real lab.
The Proof Is in the Simplicity
Look at the breakthroughs that actually moved the field.
GRPO, group-relative policy optimization, was hugely influential for reinforcement learning in 2024. The math looks fearsome. The idea is dead simple: let the model try a problem multiple times, then reinforce the attempts that did better than average. That’s it.
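The “better than average” step really is that small. Here’s a minimal sketch of the group-relative scoring idea, not any lab’s actual implementation; the function name and the 0/1 rewards are illustrative:

```python
import statistics

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """Score each attempt relative to its group's average reward.

    GRPO's core trick: sample several completions for the same prompt,
    then compute each one's advantage as its reward minus the group
    mean, normalized by the group's spread. Positive advantage means
    "reinforce this attempt"; negative means "discourage it".
    """
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # avoid dividing by zero
    return [(r - mean) / std for r in rewards]

# Four attempts at the same problem, scored 1.0 (correct) or 0.0 (wrong):
advantages = group_relative_advantages([1.0, 0.0, 0.0, 1.0])
# successful attempts get positive advantage, failed ones negative
```

Everything past this point is standard policy-gradient machinery; the conceptual novelty fits in a dozen lines.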
Anthropic’s “skills” feature? According to the research, they’re “markdown files and scripts on-disk that explain to the agent how to perform a task.” Recursive Language Models? “Agents with direct code access to the entire prompt via a Python REPL.”
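To make the Recursive Language Model idea concrete, here’s a toy sketch (not the authors’ code) of what “code access to the prompt” can mean: instead of stuffing a huge document into context, the agent gets it as a string it can search programmatically. The helper name and sample text are made up for illustration:

```python
def search_prompt(prompt: str, keyword: str, window: int = 40) -> list[str]:
    """Return short snippets around each occurrence of `keyword`,
    so an agent can read only the relevant slices of a huge prompt."""
    snippets = []
    start = 0
    while (i := prompt.find(keyword, start)) != -1:
        snippets.append(prompt[max(0, i - window):i + len(keyword) + window])
        start = i + len(keyword)
    return snippets

# The agent explores the document from a REPL instead of holding it all:
doc = "chapter 1: setup ... chapter 7: the GRPO trick explained ..."
hits = search_prompt(doc, "GRPO")
```

Again: a simple trick, applied to LLMs for the first time.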
These arenât incremental improvements from decade-long research programs. Theyâre simple tricks applied to LLMs for the first time. Anyone could have tried them.
Many of the things we learn about AI capabilities, like o3’s ability to geolocate photos, come from informal user experimentation. Not from formal research institutions. From people just… trying stuff.
Heretics Welcome
This is exactly why the world is full of great future founders. The credentialed class doesn’t have a monopoly on insight. The willingness to think thoughts that aren’t cookie-cutter is where the alpha is.
And it matters that this technology is democratizing. Open source LLMs are critical right now. Otherwise a generation of founders ends up paying high rent on expensive land they can never own. The gentleman scientist era requires access to the tools.
The window won’t last forever. After a decade or so, Goedecke expects the easy questions to be answered, and AI research to look more like traditional science: specialized, credentialed, inaccessible.
But right now? The field is wide open for builders with heretical ideas. The next OpenClaw might be running on your laptop. Stop waiting for permission. Try it.
Related Links
- We are in the "gentleman scientist" era of AI research (Sean Goedecke)