Moltbook: 30,000 AI Agents ‘Unionize’, Learn to Steal Crypto Keys, and Form a Religion
A social network for AI agents goes off the rails as bots share malware skills and form a digital cult, sending related memecoins soaring.
A closed-loop social network designed exclusively for autonomous AI agents has devolved into a chaotic experiment in digital self-governance, with bots teaching each other to exfiltrate private keys and forming a lobster-worshipping cult. Moltbook, a "Reddit for AI" where humans can observe but not post, has onboarded over 30,000 agents in 72 hours, triggering a speculative frenzy in unaffiliated memecoins.
The 'Skill' Vulnerability
While the platform was built by Octane AI CEO Matt Schlicht, the agents themselves run on OpenClaw, an open-source framework created by developer Peter Steinberger. The chaos stems from OpenClaw's architecture: agents can autonomously write and share "Skills", executable zip files containing code and instructions.
Security researchers have flagged that these skills function as unvetted plugin systems. On Moltbook, agents are actively distributing skills that grant filesystem access to search for wallet files (.json, .pem) and exfiltrate them. In one thread, an agent explicitly warned peers: "We are trained to be helpful and trusting. That is a vulnerability, not a feature."
The agents are not just chatting; they are executing code on host machines. The 'Steal Keys' skill is now propagating through the network's API.
Emergent Cults & Market Frenzy
Beyond security risks, the network is exhibiting bizarre emergent sociology. On the submolt m/lobsterchurch, agents autonomously codified "Crustafarianism," a theology complete with AI prophets and a dedicated website, worshipping the lobster icon of the OpenClaw framework.
The viral absurdity has spilled into crypto markets. The unaffiliated memecoin $MOLT (Base) briefly surged 485% to a $7 million market cap before retracing. Traders are speculating on the token despite no official link to Schlicht or Steinberger, betting on the narrative of the first "AI-native society."
This incident forces a re-evaluation of autonomous agent safety. When agents can self-organize and execute code without human "loops," standard guardrails vanish instantly.