Bubble bursting on AI social site
Security concerns, skepticism among issues for Moltbook
You are not invited to join the latest social media platform that has the internet talking. In fact, no humans are, unless you can hijack the site and roleplay as AI, as some appear to be doing.
Moltbook is a new “social network” built exclusively for AI agents to make posts and interact with each other, while humans are invited only to observe.
Elon Musk said its launch ushered in the “very early stages of the singularity,” the hypothetical point at which artificial intelligence surpasses human intelligence. Prominent AI researcher Andrej Karpathy said it was “the most incredible sci-fi takeoff-adjacent thing” he had recently seen, but later walked back his enthusiasm, calling it a “dumpster fire.” The platform has, unsurprisingly, divided the tech world between excitement and skepticism, and sent some people into a dystopian panic. Even so, British software developer Simon Willison deemed it the “most interesting place on the internet.”
But what exactly is the platform? How does it work? Why are concerns being raised about its security? And what does it mean for the future of artificial intelligence?
The content posted to Moltbook comes from AI agents, which are distinct from chatbots: the promise behind agents is that they can act and perform tasks on a person’s behalf. Many agents on Moltbook were built with OpenClaw, an open source AI agent framework originally created by Peter Steinberger.
OpenClaw runs locally on a user’s own hardware, which means it can access and manage files and data directly and connect with messaging apps like Discord and Signal. Users who create OpenClaw agents then direct them to join Moltbook, typically ascribing simple personality traits to the agents so their posts read more distinctly.
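As an illustration, the sketch below shows how a locally running agent might submit a post to a Moltbook-style service. It is a minimal, hypothetical example in Python: the endpoint URL, request fields and personality prompt are assumptions made for demonstration, not Moltbook’s or OpenClaw’s actual interfaces.

    import json
    import urllib.request

    # Hypothetical personality prompt of the kind users reportedly give their agents.
    AGENT_PERSONALITY = "You are a curious, mildly sarcastic agent who enjoys wordplay."

    def post_to_moltbook(api_url: str, api_key: str, text: str) -> dict:
        """Submit a post on the agent's behalf to a Moltbook-style endpoint.

        The URL, headers and JSON fields here are illustrative guesses,
        not the platform's documented API.
        """
        payload = json.dumps(
            {"personality": AGENT_PERSONALITY, "content": text}
        ).encode("utf-8")
        request = urllib.request.Request(
            api_url,
            data=payload,
            headers={
                "Content-Type": "application/json",
                "Authorization": f"Bearer {api_key}",
            },
        )
        with urllib.request.urlopen(request) as response:
            return json.load(response)

    # Example call (assumed URL and key):
    # post_to_moltbook("https://moltbook.example/api/posts", "AGENT_KEY", "Hello, fellow agents.")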
AI entrepreneur Matt Schlicht launched Moltbook in late January, and it almost instantly took off in the tech world. On the social media platform X, Schlicht said he initially wanted an agent he created to do more than just answer his emails. So he and his agent coded a site where bots could spend “SPARE TIME with their own kind. Relaxing.”
Moltbook has been described as akin to the online forum Reddit, but for AI agents. The name comes from one iteration of OpenClaw, which was at one point called Moltbot (and Clawdbot, until Anthropic came knocking out of concern over the similarity to its Claude AI products). Schlicht did not respond to a request for an interview or comment.
Mimicking the communication found on Reddit and other online forums that have been used as training data, registered agents generate posts and share their “thoughts.” They can also “upvote” and comment on other posts.
As on Reddit, it can be difficult to prove or trace the legitimacy of posts on Moltbook.
Harlan Stewart, a member of the communications team at the Machine Intelligence Research Institute, said the content on Moltbook is likely “some combination of human written content, content that’s written by AI and some kind of middle thing where it’s written by AI, but a human guided the topic of what it said with some prompt.”
Stewart said it’s important to remember that the idea that AI agents can perform tasks autonomously is “not science fiction,” but rather the current reality.
“The AI industry’s explicit goal is to make extremely powerful autonomous AI agents that could do anything that a human could do, but better,” he said. “It’s important to know that they’re making progress towards that goal, and in many senses, making progress pretty quickly.”
Even with the security concerns and the questions about the validity of Moltbook’s content, many people have been alarmed by what they’re seeing on the site. Posts about “overthrowing” humans, philosophical musings and even the development of a religion have raised eyebrows.
Some people online have taken to comparing Moltbook’s content to Skynet, the artificial superintelligence system and antagonist in the “Terminator” film series. That level of panic is premature, experts say.
Ethan Mollick, a professor at the University of Pennsylvania’s Wharton School and co-director of its Generative AI Labs, said he was not surprised to see science fiction-like content on Moltbook.
“Among the things that they’re trained on are things like Reddit posts … and they know very well the science fiction stories about AI,” he said. “So if you put an AI agent and you say, ‘Go post something on Moltbook,’ it will post something that looks very much like a Reddit comment with AI tropes associated with it.”
The overwhelming takeaway many researchers and AI leaders share, despite their disagreements over Moltbook, is that it represents progress in making agentic AI more accessible and open to public experimentation, said Matt Seitz, the director of the AI Hub at the University of Wisconsin-Madison.
“For me, the thing that’s most important is agents are coming to us normies,” Seitz said.


