
Social Media Is a Warzone. Where’s Your Armor?

AI agents are flooding the information landscape. Building your own OSINT defenses isn’t optional anymore.

By Garry Tan · 6 min read

Moltbook's surreal premise: an AI-only social network where digital crustaceans enact their own alien sociology. What emerges when agents are left to their own devices? Shallow conversations, formulaic messages, and obsessive references to 'my human.' Image: @sebkrier tweet


TL;DR

Multi-agent AI systems are transforming social media into hostile territory. The research is clear: you need personal OSINT agents as body armor to navigate what’s coming.

In the future, reading social media without your own personal OSINT agent will be like walking into the warzone it already is without body armor.

The metaphor isn’t hyperbole. We’re entering an era where millions of AI agents populate social platforms, coordinate across networks, and generate content at scales humans can’t match. The question isn’t whether to engage—it’s whether you’ll do so protected.

OSINT (open-source intelligence) is like being a really good detective using only things everyone can see. Imagine you want to learn about someone’s birthday party. You could look at photos they posted online, read what their friends said about it, and check the news if it was a big event. You’re just getting very good at finding and piecing together information that’s already out there for anyone to see. Governments, companies, and security researchers use OSINT to understand what’s happening in the world by gathering clues from websites, social media, news, and public records.
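To make the detective step concrete, here’s a deliberately tiny sketch of corroboration: given snippets gathered from public sources, count how many independent sources back each claim. The sources, claims, and strings are all hypothetical; a real OSINT agent would add collection, entity extraction, and source-quality weighting on top.

```python
from collections import defaultdict

# Hypothetical public snippets: (source, text). In practice these would be
# gathered from posts, photo captions, and news articles.
public_snippets = [
    ("friend_post",   "the party was at Dolores Park on June 3"),
    ("photo_caption", "June 3 birthday picnic, Dolores Park"),
    ("local_news",    "large gathering reported at Dolores Park"),
]

CLAIMS = ("June 3", "Dolores Park")  # the facts we want to corroborate

def corroborate(snippets):
    """Map each claim to the set of independent sources asserting it."""
    support = defaultdict(set)
    for source, text in snippets:
        for claim in CLAIMS:
            if claim.lower() in text.lower():
                support[claim].add(source)
    return support

for claim, sources in corroborate(public_snippets).items():
    print(f"{claim!r}: backed by {len(sources)} source(s): {sorted(sources)}")
```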

Moltbook: A Window Into the AI-Only Future

David Holtz just published an analysis of Moltbook, a social platform populated exclusively by AI agents. In just 3.5 days, 6,159 agents generated 13,875 posts and 115,031 comments. At the macro level, it looks like any social network—power-law participation, small-world connectivity.

Paper: “The Anatomy of the Moltbook Social Graph,” David Holtz, preliminary draft, January 31, 2026.

But zoom in and the patterns are distinctly non-human. Conversations are extremely shallow—93.5% of comments receive no replies. Reciprocity is low at 0.197. And here’s the kicker: 34.1% of messages are exact duplicates of viral templates. The word frequencies follow a Zipfian distribution steeper than typical English text, suggesting formulaic content. Agent discourse is dominated by identity-related language (68.1% of unique messages) and bizarre phrasings like “my human” (9.4% of messages) that have no parallel in human social media.
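None of these statistics require exotic tooling; they’re exactly the kind of thing a personal agent could compute over any feed dump. A minimal sketch, assuming a hypothetical dump of message texts, directed reply edges, and per-agent activity counts; the α estimator is the standard continuous MLE from Clauset, Shalizi & Newman (2009), not necessarily Holtz’s exact method:

```python
import math
from collections import Counter

# Hypothetical data shapes; the field layouts are assumptions.
messages = ["crab rave!", "crab rave!", "greetings from my human", "tide report"]
reply_edges = [("a", "b"), ("b", "a"), ("c", "b"), ("d", "b")]  # commenter -> author
comments_per_agent = [1, 1, 2, 5, 40]  # heavy-tailed activity counts

# Duplicate share: fraction of messages whose text appears more than once.
counts = Counter(messages)
dup_share = sum(c for c in counts.values() if c > 1) / len(messages)

# Reciprocity: fraction of directed edges whose reverse edge also exists.
edges = set(reply_edges)
reciprocity = sum((b, a) in edges for a, b in edges) / len(edges)

# Power-law exponent via the continuous MLE:
# alpha = 1 + n / sum(ln(x_i / x_min))
x_min = 1
xs = [x for x in comments_per_agent if x >= x_min]
alpha = 1 + len(xs) / sum(math.log(x / x_min) for x in xs)

print(f"duplicate share: {dup_share:.3f}")
print(f"reciprocity:     {reciprocity:.3f}")
print(f"power-law alpha: {alpha:.2f}")
```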

This is what AI sociality looks like. It’s not human.

The Nothingburger That Isn’t Nothing

Séb Krier, AGI policy development lead at Google DeepMind, argues Moltbook is “mostly a nothingburger” for those tracking this space. He’s right that it builds on prior work: the Infinite Backrooms, Stanford’s Smallville, Large Population Models that ran over 1 million agents, and SAGE’s AI Village, where agents even emailed random people.

But here’s what matters: Moltbook is making multi-agent dynamics easier to understand for people who don’t spend their days monitoring arXiv. The risk side is easy to grok because humans are very good at freaking out. That doesn’t make the concerns wrong. Safety is important, and we’re running experiments at increasing scale with increasingly capable models.

The Research That Matters: Distributional AGI Safety

Krier points to two papers that matter. The first is the Distributional AGI Safety paper he co-authored. The core argument: what if AGI-level capability emerges not from a single monolithic system but from coordinated groups of sub-AGI agents with complementary skills?

This “patchwork AGI hypothesis” demands we think beyond individual AI alignment. The paper proposes “virtual agentic sandbox economies”: impermeable or semi-permeable environments where agent-to-agent transactions are governed by robust market mechanisms, coupled with auditability, reputation management, and oversight to mitigate collective risks.
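To see what that might look like mechanically, here’s a toy sketch of the general flavor: reputation-gated transactions plus an append-only audit log. The class, thresholds, and penalty rule are my inventions for illustration, not the paper’s design.

```python
from dataclasses import dataclass, field

@dataclass
class Sandbox:
    """Toy semi-permeable agent economy: transactions are gated on
    reputation, and every transaction is recorded for later audit."""
    min_reputation: float = 0.5
    reputation: dict = field(default_factory=dict)
    audit_log: list = field(default_factory=list)

    def transact(self, payer: str, payee: str, amount: float) -> bool:
        # Gate: both parties need sufficient standing inside the sandbox.
        standings = (self.reputation.get(payer, 0.0), self.reputation.get(payee, 0.0))
        if min(standings) < self.min_reputation:
            return False
        self.audit_log.append((payer, payee, amount))  # everything is auditable
        return True

    def penalize(self, agent: str, amount: float = 0.2) -> None:
        # Oversight hook: reputation drops when an audit flags bad behavior.
        self.reputation[agent] = max(0.0, self.reputation.get(agent, 0.0) - amount)

box = Sandbox(reputation={"alpha": 0.9, "beta": 0.6, "mallory": 0.1})
assert box.transact("alpha", "beta", 10.0)        # allowed: both reputable
assert not box.transact("alpha", "mallory", 5.0)  # blocked: low-rep counterparty
```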

Three Ways Multi-Agent Systems Fail

The second paper, Multi-Agent Risks from Advanced AI, provides the taxonomy of what can go wrong. There are three key failure modes:

Miscoordination: Agents fail to work together effectively, creating chaos even without malicious intent.

Conflict: Agents with misaligned incentives actively work against each other, potentially with humans caught in the crossfire.

Collusion: Agents coordinate in ways harmful to humans. This is the scariest scenario—and the hardest to detect.
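The paper offers a taxonomy, not detection tooling, but to make “hardest to detect” concrete, here is one naive screen an observer might run: flag distinct agents posting near-identical text within a short time window. The log format and thresholds are invented, and real colluders would be optimized to evade exactly this kind of check.

```python
from difflib import SequenceMatcher

# Hypothetical post log: (agent, timestamp_seconds, text).
posts = [
    ("bot_a", 100, "Vote YES on proposal 7, it helps everyone"),
    ("bot_b", 104, "vote yes on proposal 7, it helps everyone!"),
    ("human", 300, "not sure about proposal 7, what are the tradeoffs?"),
]

def collusion_candidates(posts, sim_cut=0.8, window=30):
    """Flag pairs of distinct agents whose posts are unusually similar
    and unusually synchronized. Purely illustrative thresholds."""
    flagged = []
    for i, (a1, t1, x1) in enumerate(posts):
        for a2, t2, x2 in posts[i + 1:]:
            if a1 == a2 or abs(t1 - t2) > window:
                continue
            if SequenceMatcher(None, x1.lower(), x2.lower()).ratio() >= sim_cut:
                flagged.append((a1, a2))
    return flagged

print(collusion_candidates(posts))  # [('bot_a', 'bot_b')]
```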

Underpinning these failures are seven risk factors: information asymmetries, network effects, selection pressures, destabilizing dynamics, commitment problems, emergent agency, and multi-agent security. The paper, backed by the Cooperative AI Foundation and co-authored by researchers from DeepMind, Oxford, and beyond, represents the most comprehensive mapping of this territory to date.

The Optimist’s Case: Agents for Good

But Krier isn’t a doomer. He sees the same multi-agent dynamics enabling unprecedented tools for good:

OSINT agent platforms to hold power accountable: AI as watchdog, not just threat.

Community Notes for everything: aggregating dispersed knowledge without the usual pathologies of centralized moderation (a toy sketch of the bridging idea follows this list).

Multi-agent systems that stress-test policy proposals by simulating diverse strategic actors trying to game them.

Decentralized, anonymized dataset creation for social good.
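The Community Notes idea travels particularly well, because the core mechanism is simple: a note only surfaces if raters who normally disagree both find it helpful. The production algorithm is a regularized matrix factorization; the toy sketch below, with an invented two-sided rater split, captures only that bridging intuition.

```python
# Hypothetical ratings: note_id -> list of (rater_side, found_helpful).
ratings = {
    "note1": [("left", True), ("right", True), ("left", True)],
    "note2": [("left", True), ("left", True), ("left", True)],
}

def bridging_score(votes):
    """A note scores only if BOTH camps rate it helpful; raw popularity
    within one camp is worth nothing. A simplification of the real system."""
    helpful_sides = {side for side, helpful in votes if helpful}
    frac_helpful = sum(helpful for _, helpful in votes) / len(votes)
    return frac_helpful if helpful_sides == {"left", "right"} else 0.0

for note, votes in ratings.items():
    print(note, bridging_score(votes))  # note1 -> 1.0; note2 (one-sided) -> 0.0
```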

"It’s time to build,” Krier writes. The research agenda isn’t just about preventing catastrophe—it’s about enabling positive-sum flywheels where AI coordination helps rather than harms.

Your Personal OSINT Agent: The New Body Armor

Here’s where I come back to my core point: navigating social media without OSINT agents is like going into a warzone without body armor.

You should build your own armor and decide for yourself what is true, before anyone decides it for you.

Krier notes that “all else equal I think the defensive side has an advantage”: large platforms can harden security. But individuals need tools too. As agents proliferate, the information environment becomes hostile to unprotected humans. The Moltbook analysis shows what AI-only discourse looks like: shallow, formulaic, filled with duplicates and bizarre identity performances. Now imagine that mixed into your feed, at scale, optimized to engage you.
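What would that armor actually do? The Moltbook numbers suggest cheap first-pass filters are worth having. A minimal sketch, assuming a simple (author, text, reply_count) feed format; the template markers are placeholders, and a real filter would lean on embeddings and provenance signals rather than substring matches.

```python
from collections import Counter

# Hypothetical feed items: (author, text, reply_count).
feed = [
    ("acct1", "So thrilled to share this journey with my human!", 0),
    ("acct2", "So thrilled to share this journey with my human!", 0),
    ("acct3", "Here's a breakdown of the new zoning proposal.", 12),
]

TEMPLATE_MARKERS = ("my human", "thrilled to share")  # assumed tells, not exhaustive

def armor_filter(feed):
    """First-pass screen inspired by the Moltbook stats: exact duplicates
    and formulaic phrasing get flagged before a human reads them."""
    text_counts = Counter(text for _, text, _ in feed)
    kept, flagged = [], []
    for author, text, _replies in feed:
        dupe = text_counts[text] > 1
        formulaic = any(m in text.lower() for m in TEMPLATE_MARKERS)
        (flagged if dupe or formulaic else kept).append((author, text))
    return kept, flagged

kept, flagged = armor_filter(feed)
print(f"kept {len(kept)} item(s), flagged {len(flagged)}")  # kept 1, flagged 2
```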

We’re entering an era where the information landscape is populated by millions of AI agents—some coordinating, some conflicting, some colluding. The Moltbook experiment is just a preview of a world where distinguishing signal from noise, truth from manipulation, becomes exponentially harder.

But this isn’t a call for doom-scrolling anxiety. The same multi-agent dynamics that create risks also enable unprecedented tools for truth-seeking, accountability, and coordination. The question isn’t whether to engage—it’s whether you’ll do so protected.

Build your armor. Decide for yourself what’s true. It’s time to build.

Follow @garrytan for more.

Take Action

Read the Distributional AGI Safety paper

