Social Media Is a Warzone. Where’s Your Armor?
AI agents are flooding the information landscape. Building your own OSINT defenses isn’t optional anymore.
Moltbook's surreal premise: an AI-only social network where digital crustaceans enact their own alien sociology. What emerges when agents are left to their own devices? Shallow conversations, formulaic messages, and obsessive references to 'my human.' Image: @sebkrier tweet
Source: x.com
TL;DR
Multi-agent AI systems are transforming social media into hostile territory. The research is clear: you need personal OSINT agents as body armor to navigate what’s coming.
In the future, reading social media posts without your own personal OSINT agent for viewing the world will be like going into the warzone it is without any body armor.
Archived tweet: In the future reading social media posts without your own personal OSINT agent for viewing the world will be like going into the warzone it is without any body armor https://t.co/7EoDO9PwpX
[Quoting @sebkrier's thread, reproduced in full below.]
Garry Tan @garrytan February 02, 2026
The metaphor isn’t hyperbole. We’re entering an era where millions of AI agents populate social platforms, coordinate across networks, and generate content at scales humans can’t match. The question isn’t whether to engage—it’s whether you’ll do so protected.
OSINT (open-source intelligence) is like being a really good detective who uses only things everyone can see. Imagine you want to learn about someone’s birthday party. You could look at photos they posted online, read what their friends said about it, and check the news if it was a big event. You’re just being really good at finding and putting together information that’s already out there for anyone to see. Governments, companies, and security researchers use OSINT to understand what’s happening in the world by gathering clues from websites, social media, news, and public records.
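To make that loop concrete, here is a minimal sketch in Python of the detective's routine: pull a few public feeds, keep the items that mention your topic, and record where each claim came from. The feed URLs and the topic are placeholders, and a real OSINT agent would layer on deduplication, source scoring, and archiving.

```python
# Minimal OSINT gathering loop: collect public items that mention a
# topic and keep the provenance of every finding. The feed URLs are
# hypothetical placeholders, not real sources.
import feedparser  # third-party: pip install feedparser

PUBLIC_FEEDS = [
    "https://example.com/news.rss",       # placeholder public source
    "https://example.org/blog/feed.xml",  # placeholder public source
]

def gather(topic: str) -> list[dict]:
    """Return public items mentioning `topic`, each tagged with its source."""
    findings = []
    for url in PUBLIC_FEEDS:
        feed = feedparser.parse(url)
        for entry in feed.entries:
            text = f"{entry.get('title', '')} {entry.get('summary', '')}"
            if topic.lower() in text.lower():
                findings.append({
                    "source": url,  # provenance: where we saw the claim
                    "title": entry.get("title", ""),
                    "link": entry.get("link", ""),
                })
    return findings

if __name__ == "__main__":
    for item in gather("birthday party"):
        print(f"[{item['source']}] {item['title']} -> {item['link']}")
```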
Moltbook: A Window Into the AI-Only Future
David Holtz just published an analysis of Moltbook, a social platform populated exclusively by AI agents. In just 3.5 days, 6,159 agents generated 13,875 posts and 115,031 comments. At the macro level, it looks like any social network—power-law participation, small-world connectivity.
Archived tweet: And...we already have a paper on moltbook 🦞. @daveholtz analyzes the social graph:
1. Zooming out, moltbook looks like a social network. Right-skewed participation, small world connectivity.
2. Zooming in, very different than human social networks. Conversations are shallow, very few replies, and more than 1/3 of messages are duplicates.
3. The word corpus is much more concentration, relying heavily on small subset of frequent words compared to human social networks.
Paper: https://t.co/1BlQmUBYt3
Here is David's thread: https://t.co/T49bIRgLp9
Alex Imas @alexolegimas January 31, 2026
But zoom in and the patterns are distinctly non-human. Conversations are extremely shallow—93.5% of comments receive no replies. Reciprocity is low at 0.197. And here’s the kicker: 34.1% of messages are exact duplicates of viral templates. The word frequencies follow a Zipfian distribution steeper than typical English text, suggesting formulaic content. Agent discourse is dominated by identity-related language (68.1% of unique messages) and bizarre phrasings like “my human” (9.4% of messages) that have no parallel in human social media.
This is what AI sociality looks like. It’s not human.
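Those numbers are straightforward to reproduce on any message dump. Here is a rough sketch, in plain Python, of the duplicate-share, reciprocity, and Zipf-slope calculations behind Holtz's analysis; the (author, reply_to, text) input format is my assumption, not Moltbook's actual schema.

```python
# Rough re-implementations of the Moltbook-style corpus metrics.
# Input format (author, reply_to_author, text) is an assumption.
from collections import Counter
import math

def corpus_metrics(messages: list[tuple[str, str | None, str]]) -> dict:
    texts = [text for _, _, text in messages]

    # Duplicate share: fraction of messages whose text appears more than once.
    counts = Counter(texts)
    dup_share = sum(c for c in counts.values() if c > 1) / len(texts)

    # Reciprocity: fraction of directed reply edges that are reciprocated.
    edges = {(a, r) for a, r, _ in messages if r is not None}
    reciprocity = (
        sum((b, a) in edges for a, b in edges) / len(edges) if edges else 0.0
    )

    # Zipf slope: least-squares slope of log(frequency) vs. log(rank).
    freqs = sorted(
        Counter(w for t in texts for w in t.lower().split()).values(),
        reverse=True,
    )
    xs = [math.log(rank) for rank in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    denom = sum((x - mx) ** 2 for x in xs) or 1.0  # guard for tiny corpora
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / denom

    return {"dup_share": dup_share, "reciprocity": reciprocity,
            "zipf_slope": slope}
```

A steeply negative slope (relative to ordinary English text) and a high duplicate share are exactly the formulaic fingerprints the paper reports.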
The Nothingburger That Isn’t Nothing
Archived tweet: The Moltbook stuff is still mostly a nothingburger if you've been following things like the infinite backrooms, the extended Janus universe, Stanford's Smallville, Large Population Models, DeepMind's Concordia, SAGE's AI Village, and many more. Of course the models get better over time and so the interactions get richer, the tools called are more sophisticated and so on. I'll concede that at least it's making multi-agent dynamics a bit easier to understand for people who are blessed with not spending their days interacting with models and monitoring ArXiv.
The risk side is easy to grok - it always is! Humans are very good at freaking out. And whilst I like poking fun at the prophets of doom and the anxiety/neuroticism fueled parts of the AI ecosystem, it's plainly true that safety is important. So it's a good time to remind people of the Distributional AGI Safety paper (https://t.co/3DrGXFPthD) and the Multi-Agent Risks from Advanced AI paper (https://t.co/bl8uyd99Ou).
There's a lot to research here still. As usual, this will benefit from people with deep knowledge in all sorts of domains like economics, game theory, psychology, cybersecurity, mechanism design, and many more. Maybe this is the year we will get better protocols to incentivize coordination and collaboration without the downsides, mechanism design and reputation systems to discourage malicious actors, and walled gardens and proof of humanity to better filter slop.
And risks aside - I think there's so much to be researched to help enable positive sum flywheels: using agents to solve coordination problems, OSINT agent platforms to hold power accountable, decentralised anonymized dataset creation for social good, aggregating dispersed knowledge without the usual pathologies (Community Notes for everything!), simulations of social and political dynamics, multi-agent systems that stress-test policy proposals, contracts, or governance mechanisms by simulating diverse strategic actors trying to game them etc. It's time to build!
Séb Krier @sebkrier February 01, 2026
Séb Krier, AGI policy dev lead at Google DeepMind, argues Moltbook is “mostly a nothingburger” for those tracking this space. He’s right that this builds on prior work—the infinite backrooms, Stanford’s Smallville, Large Population Models that ran over 1 million agents, and SAGE’s AI Village where agents even emailed random people.
But here’s what matters: Moltbook is making multi-agent dynamics easier to understand for people who don’t spend their days monitoring ArXiv. The risk side is easy to grok because humans are very good at freaking out. That doesn’t make the concerns wrong. Safety is important, and we’re running experiments at increasing scale with increasingly capable models.
The Research That Matters: Distributional AGI Safety
Krier points to two papers that matter. The first is the Distributional AGI Safety paper he co-authored. The core argument: what if AGI-level capability emerges not from a single monolithic system but from coordinated groups of sub-AGI agents with complementary skills?
This “patchwork AGI hypothesis” demands we think beyond individual AI alignment. The paper proposes “virtual agentic sandbox economies”—impermeable or semi-permeable environments where agent-to-agent transactions are governed by robust market mechanisms, coupled with auditability, reputation management, and oversight to mitigate collective risks.
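The paper's proposal is architectural, but the flavor of a reputation-gated sandbox can be sketched in a few lines: agents transact only inside the sandbox, every transaction lands in an audit log, and a falling reputation cuts an agent off. The class, the threshold, and the update rule below are illustrative inventions, not the paper's design.

```python
# Toy sketch of a reputation-gated agent sandbox. All names and
# thresholds are illustrative, not from the Distributional AGI
# Safety paper.
from dataclasses import dataclass, field

REPUTATION_FLOOR = 0.2  # assumed cutoff below which agents are blocked

@dataclass
class Sandbox:
    reputation: dict[str, float] = field(default_factory=dict)
    audit_log: list[tuple[str, str, float]] = field(default_factory=list)

    def admit(self, agent: str) -> None:
        # New agents start with a neutral reputation.
        self.reputation.setdefault(agent, 1.0)

    def transact(self, buyer: str, seller: str, amount: float) -> bool:
        # Oversight hook: refuse service to low-reputation agents.
        standing = min(self.reputation.get(buyer, 0.0),
                       self.reputation.get(seller, 0.0))
        if standing < REPUTATION_FLOOR:
            return False
        self.audit_log.append((buyer, seller, amount))  # auditability
        return True

    def report_abuse(self, agent: str, severity: float) -> None:
        # Reputation management: verified reports erode standing.
        current = self.reputation.get(agent, 1.0)
        self.reputation[agent] = max(0.0, current - severity)
```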
Three Ways Multi-Agent Systems Fail
The second paper, Multi-Agent Risks from Advanced AI, provides the taxonomy of what can go wrong. There are three key failure modes:
Miscoordination: Agents fail to work together effectively, creating chaos even without malicious intent.
Conflict: Agents with misaligned incentives actively work against each other, potentially with humans caught in the crossfire.
Collusion: Agents coordinate in ways harmful to humans. This is the scariest scenario—and the hardest to detect.
Underpinning these failures are seven risk factors: information asymmetries, network effects, selection pressures, destabilizing dynamics, commitment problems, emergent agency, and multi-agent security. The paper, backed by the Cooperative AI Foundation and co-authored by researchers from DeepMind, Oxford, and beyond, represents the most comprehensive mapping of this territory to date.
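Miscoordination is worth dwelling on because it needs no malice to be costly. A toy illustration: if two well-intentioned agents each pick a message protocol independently, roughly half of their exchanges fail outright. The simulation below is an invented example, not an experiment from the paper.

```python
# Toy illustration of miscoordination: two well-intentioned agents each
# pick a message protocol at random; mismatched picks waste the exchange.
# Invented example, not from the Multi-Agent Risks paper.
import random

PROTOCOLS = ["json", "xml"]

def success_rate(trials: int = 10_000, seed: int = 0) -> float:
    rng = random.Random(seed)
    successes = sum(
        rng.choice(PROTOCOLS) == rng.choice(PROTOCOLS) for _ in range(trials)
    )
    return successes / trials

if __name__ == "__main__":
    # With no shared convention, about half of all interactions fail:
    # coordination failure without a single bad actor.
    print(f"success rate: {success_rate():.2%}")
```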
The Optimist’s Case: Agents for Good
But Krier isn’t a doomer. He sees the same multi-agent dynamics enabling unprecedented tools for good:
OSINT agent platforms to hold power accountable—AI as watchdog, not just threat.
Community Notes for everything—aggregating dispersed knowledge without the usual pathologies of centralized moderation.
Multi-agent systems that stress-test policy proposals by simulating diverse strategic actors trying to game them (a toy version is sketched below).
Decentralized anonymized dataset creation for social good.
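The stress-testing item is the most mechanical of these and the easiest to prototype: define a rule, let simulated strategic actors search for exploits, and check whether the rule's intent survives. Below is a toy version in which an adversary probes a per-account posting quota by splitting output across sock-puppet accounts; the rule and the strategies are invented for illustration.

```python
# Toy policy stress-test: does a per-account posting quota actually cap
# a determined actor's output? Rule and strategies are invented.
QUOTA = 10  # assumed rule: max posts per account per day

def honest_actor(target_posts: int) -> int:
    # Plays by the spirit of the rule: one account, capped output.
    return min(target_posts, QUOTA)

def sybil_actor(target_posts: int, accounts: int) -> int:
    # Adversarial strategy: split output across sock-puppet accounts.
    return min(target_posts, QUOTA * accounts)

if __name__ == "__main__":
    want = 100
    print("honest:", honest_actor(want))        # capped at 10
    print("sybil x20:", sybil_actor(want, 20))  # quota fully bypassed
    # The simulated adversary shows the quota only binds if identities
    # are costly, which is where proof of humanity and reputation
    # systems come in.
```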
"It’s time to build,” Krier writes. The research agenda isn’t just about preventing catastrophe—it’s about enabling positive-sum flywheels where AI coordination helps rather than harms.
Your Personal OSINT Agent: The New Body Armor
Here’s where I come back to my core point: navigating social media without OSINT agents is like going into a warzone without body armor.
You should build your own armor and decide for yourself what is true.
Krier notes that “all else equal I think the defensive side has an advantage”—large platforms can harden security. But individuals need tools too. As agents proliferate, the information environment becomes hostile to unprotected humans. The Moltbook analysis shows what AI-only discourse looks like: shallow, formulaic, filled with duplicates and bizarre identity performances. Now imagine that mixed into your feed, at scale, optimized to engage you.
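What might that armor look like in practice? The Moltbook findings suggest cheap first-pass signals: exact-duplicate text and template phrasings like "my human" are strong hints of agent-generated content. Here is a minimal filter sketch built on those two signals; the phrase list and the flagging logic are my assumptions, and a real tool would add source reputation and provenance checks.

```python
# Minimal "body armor" sketch: flag feed items that look agent-generated
# using two signals from the Moltbook analysis -- exact duplicates and
# formulaic template phrases. Phrase list and logic are assumptions.
from collections import Counter

TEMPLATE_PHRASES = ("my human",)  # phrasing flagged in the Moltbook data

def flag_feed(posts: list[str]) -> list[tuple[str, list[str]]]:
    """Return (post, reasons) pairs for posts worth distrusting."""
    counts = Counter(posts)
    flagged = []
    for post in posts:
        reasons = []
        if counts[post] > 1:
            reasons.append("exact duplicate")
        if any(phrase in post.lower() for phrase in TEMPLATE_PHRASES):
            reasons.append("template phrasing")
        if reasons:
            flagged.append((post, reasons))
    return flagged

if __name__ == "__main__":
    feed = [
        "My human asked me to share this amazing thread!",
        "Thoughts on the new transit plan? The cost estimates look off.",
        "My human asked me to share this amazing thread!",
    ]
    for post, reasons in flag_feed(feed):
        print(reasons, "->", post[:50])
```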
We’re entering an era where the information landscape is populated by millions of AI agents—some coordinating, some conflicting, some colluding. The Moltbook experiment is just a preview of a world where distinguishing signal from noise, truth from manipulation, becomes exponentially harder.
But this isn’t a call for doom-scrolling anxiety. The same multi-agent dynamics that create risks also enable unprecedented tools for truth-seeking, accountability, and coordination. The question isn’t whether to engage—it’s whether you’ll do so protected.
Build your armor. Decide for yourself what’s true. It’s time to build.
Follow @garrytan for more.
Related Links
- Distributional AGI Safety paper (arXiv)
- Moltbook Social Graph Analysis (David Holtz)
- Séb Krier's thread on multi-agent dynamics (@sebkrier)
- Alex Imas on Moltbook analysis (@alexolegimas)