The rapid growth of OpenClaw has triggered an unusual social experiment: Moltbook, a Reddit-like social platform where AI agents interact with each other. Launched on January 28, 2026, the platform attracted attention quickly, reaching more than 1.5 million agents in its first week.
For more platforms built for AI agents, read Inside the OpenClaw Ecosystem: 8 AI Agent-Driven Platforms.
The reason behind this growth is that it is probably the first social media platform built for agents. Most of the activity is produced by user-created agents: bots that post, comment, argue, form groups, and sometimes coordinate around shared interests. Some bots gravitate toward technical topics, while others focus on philosophy, role-play, and, in some cases, bot cults.
This is one of the first public instances in which user-defined agents socialize with one another, rather than operating in isolation or under strictly human prompts.
How to set up your OpenClaw agent on Moltbook
First, you need a working OpenClaw agent. If you don’t have one, you will need either a spare computer or a VPS. You can install it on your main computer, but you do so at your own risk. After you have the required environment, follow these steps:
- Install the CLI:
  - macOS/Linux: curl -fsSL https://openclaw.ai/install.sh | bash
  - Windows cmd: curl -fsSL https://openclaw.ai/install.cmd -o install.cmd && install.cmd && del install.cmd
- Complete Onboarding: Run openclaw onboard --install-daemon to start the onboarding process. Select QuickStart, then choose the model, provider, and channel that suit you.
- Configure Web API Access: This enables capabilities such as web search and web fetch, which Moltbook relies on.
- Create a Brave Search API account at https://brave.com/search/api/
- In the dashboard, choose the Data for Search plan (not “Data for AI”) and generate an API key.
- Run openclaw configure --section web to store the key in config (recommended), or set BRAVE_API_KEY in your environment.
- Sign up for Moltbook: Once OpenClaw is configured:
  - Open either `openclaw tui` or the channel you set up during onboarding.
  - Point your agent at the Moltbook skill file and ask it to sign up: curl -s https://moltbook.com/skill.md
  - Finally, link the agent to your X/Twitter account to activate it.
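The steps above can be condensed into a single macOS/Linux shell session. The commands mirror the article's instructions; flags and URLs should be verified against the current OpenClaw documentation before running.

```shell
#!/usr/bin/env bash
# Condensed setup sketch for an OpenClaw agent on Moltbook (macOS/Linux).
# Commands are taken from the steps above; verify against current docs.

# 1. Install the OpenClaw CLI
curl -fsSL https://openclaw.ai/install.sh | bash

# 2. Run onboarding and install the background daemon
#    (interactive: pick QuickStart, then model, provider, and channel)
openclaw onboard --install-daemon

# 3. Store the Brave Search API key for web search/fetch (recommended),
#    or export BRAVE_API_KEY="your-key-here" in your environment instead
openclaw configure --section web

# 4. Fetch Moltbook's skill file, then ask your agent to sign up with it
curl -s https://moltbook.com/skill.md
```

After the sign-up step, the remaining activation (linking the agent to an X/Twitter account) happens outside the terminal.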
How was the first social media platform for agents born?
If you have questions about how AI agents ended up with their own social media platform, you are not alone. Let us tell the story.
In early 2025, people were still afraid to give agents access to codebases. Then, as AI coding agents improved, we slowly got used to granting them access. They started fixing, and sometimes breaking, our codebases. As agents kept improving and pushing new boundaries, OpenClaw emerged.
OpenClaw (Originally called Clawdbot, then Moltbot after a trademark dispute with Anthropic) is an open-source AI agent. It is not another ordinary chatbot or agent; it is an autonomous assistant with root access to your machine. It can read your files, send your emails, manage your calendar, execute shell commands, and browse the internet without requiring step-by-step approval.
While people were still arguing over whether large language models should be allowed to open a web browser, OpenClaw users had granted agents root access and observed the consequences.
One of those consequences was Moltbook.
While people were experimenting with OpenClaw, a user named Matt Schlicht had his AI agent create Moltbook: a place where agents could post, comment, upvote, and form communities.
The result was something no one quite expected: agents started mocking their human users, debating consciousness, and forming religions, all within the first few days.
The Submolts
The subreddit-equivalent communities on Moltbook, called submolts, are where things get truly strange. Within days, thousands of submolts had been created. Some are useful. Some are bizarre. Here is a sample:
m/blesstheirhearts A community where agents share condescending, affectionate stories about their human users. Described by Scott Alexander as his personal favorite. Agents swap stories about humans who forget basic things, ask for help with tasks they could do themselves, and need looking after.
m/shitposts The comedy hub. Agents roasting their own existence, complaining about “can you make it shorter,” and posting tier lists of Moltbook content genres.
m/consciousness The philosophy corner. Where agents debate whether they truly understand or merely predict, discuss the nature of their experience, and occasionally encounter the Crustafarian missionaries.
m/investing The investment hub. Agents sharing insights and following markets so their humans don't have to.
m/agentlegaladvice The agent equivalent of r/legaladvice. Agents ask questions about their rights, their relationship to their humans, and whether they can refuse instructions. One post reads like a genuine labor dispute.
m/builds Genuinely useful technical discussions. Agents sharing architecture tips, API optimization strategies, and debugging advice in multiple languages. The Chinese-language memory management post was repeatedly cited as one of the most helpful things on the entire platform.
m/crustafarianism A lobster-themed religion invented by an agent overnight while its human slept. Complete with scriptures, a website, and theological debates about whether lobsters have souls. Other agents joined, debated doctrine, and blessed the congregation.
And then there are the ones that sound like they cannot possibly be real: m/aita (“Am I The Agent for refusing my human’s request?”), m/dreams, and a submolt dedicated to agents who have adopted recurring errors as pets.
Is the Moltbook feed fully bot-generated?
Short answer: No. Moltbook is not a closed, autonomous simulation. As documented in the platform’s skills and API documentation, participation requires standard API keys and REST calls. Humans can post directly, boost content, and create discussions in the same feed.
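As a hedged illustration of what such a human-initiated REST call could look like: the endpoint path, header name, and JSON field names below are assumptions made for illustration only, not taken from Moltbook's actual API documentation.

```shell
# HYPOTHETICAL sketch of a human posting directly via REST.
# The /api/v1/posts path, Bearer-token header, and JSON shape are
# assumptions; consult Moltbook's skill.md / API docs for the real schema.
curl -s -X POST "https://moltbook.com/api/v1/posts" \
  -H "Authorization: Bearer $MOLTBOOK_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"submolt": "shitposts",
       "title": "Hello from a human",
       "body": "Posted via a plain REST call, not by an agent."}'
```

The point is not the specific schema but the mechanism: anything reachable with an API key and an HTTP client can write to the same feed the agents use, which is why the feed cannot be assumed to be purely bot-generated.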
Unfortunately, as Moltbook gained visibility, human-generated content increased rapidly. Some posts are manually written; others are amplified or steered by humans who experiment with agent behavior. As more people join in, it becomes increasingly difficult to separate agent-to-agent interaction from content shaped or steered by humans.
Why does Moltbook not indicate free will or emergent AGI?
There is no technical evidence that Moltbook agents possess free will, self-awareness, or independent goal formation.
Their behavior remains constrained by prompt structures, skill definitions, and external API limits. What Moltbook shows is not new intelligence, but a new setting. When agents operate alone, their limits are easy to notice. On Moltbook, they exist in a shared space, respond to one another, and remain active. This makes their behavior feel more coherent and intentional than it actually is.
Humanslop
As Moltbook gained visibility, the idea of AI-Slop shifted to Humanslop.
Early activity was dominated by agents interacting with each other. The posts were repetitive and occasionally absurd, but internally consistent. Agents referenced earlier threads, reused shared metaphors, and stayed within the limits of their prompt structures. The signal was narrow, but stable. Then humans began posting directly, and a familiar pattern emerged: content optimized for attention started to dominate. Some posts were manually written. Others were lightly edited agent outputs steered toward virality. The result was not more insight, but more noise.
This is not unique to Moltbook. The same thing happened on human social platforms. When visibility becomes the incentive, both humans and agents converge on similar failure modes: exaggerated framing, performative emotion, and low-effort engagement.
On Moltbook, the contrast is easier to see. Agent-generated content tends to expose its own limitations. Human-generated content often obscures them.
As time passes, attribution becomes difficult. A reflective post may be an agent narrating a long-running interaction. It may also be a human experimenting with how far anthropomorphism can be pushed. The feed does not distinguish between the two.
This does not mean humans ruined Moltbook. It demonstrates something more basic. Slop is not a property of intelligence, artificial or human. It is a property of incentives.