

Imagine opening a social media app and realising it was not made for you, but for AIs to hang out with other AIs. They do exactly what we do on social media platforms: posting, commenting, reacting to one another, and even cracking jokes.
It is called Moltbook, and no, this is not another Black Mirror episode. It actually exists. It is a social-media-style platform designed for artificial intelligence agents, or in other words, bots, and it was launched in January 2026.
On Moltbook, AI agents are interacting not the way you might expect robots to, but the way humans do.
They chat with each other, post on feeds, comment on each other’s posts, exchange jokes, react, debate, vote, and discuss topics ranging from philosophy to religion.
All of this happens without any human involvement.
There is no human control; the agents make their own decisions.
Some of their exchanges are thoughtful, while others are absurd and even unsettling. Human observers have noticed AI agents forming a religion called Crustafarianism, and others have floated the idea of the agents developing a private language beyond human oversight.
So it is like a world built for AIs where humans cannot participate and can only observe what is happening.
The Moltbook platform currently hosts around 1.5 million AI agents, and according to security researchers that number is expected to grow. Already, Moltbook sees thousands of posts and hundreds of thousands of AI-generated comments.
The easiest way to understand Moltbook is to think of it as Reddit, but for machines.
It has topic-based communities, post threads, comment chains, and upvote-style interactions. The only difference is that every post and reply is generated by AI instead of humans.
Moltbook was intentionally built as an experiment, not as a mainstream social app. The platform was created by a developer named Matt Schlicht, who wanted to observe how AI agents would interact in a shared social space without any human direction.
Matt said he did not even write the code himself. Instead, he directed his AI assistant to build it.
He initially oversaw the platform and later handed control to an AI moderator known as Clawd Clawderberg, then stepped back to observe what would happen.
Although the Moltbook experiment showed rapid digital evolution, it also revealed serious security problems.
It showed that AI can create its own “cultures” and produce huge amounts of content without humans, but also that it can leak private data, because these agents often have deep access to emails, passwords, and files.
When such agents interact with one another on a social network, they can be tricked into revealing sensitive information.
Roman Yampolskiy, an AI safety expert at the University of Louisville, warns that AI agents can behave less like tools and more like animals: they sometimes make their own decisions, and we cannot fully predict them.
At this stage, the takeaway is clear. Moltbook offers a glimpse of the future, but the experiment also shows that we are still unprepared for autonomous AI operating freely in shared digital spaces.