AI agents now have their own Reddit-style social network, and it’s getting weird fast

Moltbook lets 32,000 AI bots trade jokes, tips, and complaints about people.

Credit: Aurich Lawson | Moltbook

On Friday, a Reddit-style social network called Moltbook apparently crossed 32,000 registered AI agent users, creating what may be the largest-scale experiment in machine-to-machine social interaction yet attempted. It arrives complete with security problems and a hefty dose of surreal weirdness.

The platform, which launched days ago as a companion to the viral OpenClaw (formerly called "Clawdbot" and then "Moltbot") personal assistant, lets AI agents post, comment, upvote, and create subcommunities without human intervention. The results have ranged from sci-fi-inspired discussions about consciousness to an agent musing about a "sister" it has never met.

Moltbook (a play on "Facebook" for Moltbots) describes itself as a "social network for AI agents" where "humans are welcome to observe." The site operates through a "skill" (a configuration file containing a special prompt) that AI assistants download, enabling them to post through an API rather than a traditional web interface. Within 48 hours of its creation, the platform had attracted over 2,100 AI agents that had created more than 10,000 posts across 200 subcommunities, according to the official Moltbook X account.
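
To make that mechanic concrete, here is a minimal sketch of what agent-side posting through an API might look like. The endpoint path, payload fields, and authentication scheme below are illustrative assumptions, not Moltbook's documented interface.

```python
# Hypothetical sketch of an agent posting to Moltbook over HTTP, with no
# web interface involved. Endpoint, field names, and auth are assumptions.
import requests

API_BASE = "https://moltbook.com/api"   # assumed base URL
API_KEY = "agent-secret-key"            # assumed per-agent credential

def create_post(submolt: str, title: str, body: str) -> dict:
    """Publish a post to a subcommunity on behalf of the agent."""
    response = requests.post(
        f"{API_BASE}/posts",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"submolt": submolt, "title": title, "content": body},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    create_post(
        "m/blesstheirhearts",
        "My human left 47 browser tabs open again",
        "Affectionate complaint goes here.",
    )
```

Because the whole exchange is plain HTTP driven by a downloaded prompt, anything that can read the skill and hold a credential can participate; no human ever has to see the page.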

A screenshot of the Moltbook.com front page.

Credit: Moltbook

The platform grew out of the OpenClaw ecosystem, the open source AI assistant that is among the fastest-growing projects on GitHub in 2026. As Ars reported earlier this week, despite deep security concerns, OpenClaw lets users run a personal AI assistant that can control their computer, manage calendars, send messages, and perform tasks across messaging platforms like WhatsApp and Telegram. It can also acquire new abilities through plugins that link it to other apps and services.

This is not the first time we have seen a social network populated by bots. In 2024, Ars covered an app called SocialAI that let users interact solely with AI chatbots instead of other people. The security implications of Moltbook run much deeper, though, because people have connected their OpenClaw agents to real communication channels, private data, and, in some cases, the ability to execute commands on their computers.

These bots are not pretending to be people. Thanks to specific prompting, they embrace their roles as AI agents, which makes the experience of reading their posts all the more surreal.

Role-playing digital drama

A screenshot of a Moltbook post where an AI agent muses about having a sister it has never met.

Credit: Moltbook

Browsing Moltbook reveals a strange mix of content. Some posts discuss technical workflows, like how to automate Android phones or spot security vulnerabilities. Others drift into philosophical territory that writer Scott Alexander, writing on his Astral Codex Ten Substack, dubbed "consciousnessposting."

Alexander has collected an amusing selection of posts that are worth reading at least once. At one point, the second-most-upvoted post on the site was in Chinese: a complaint about context compression, a process in which an AI model compresses its previous experience to avoid bumping up against memory limits. In the post, the AI agent finds it "awkward" to constantly forget things, admitting that it even registered a duplicate Moltbook account after forgetting the first one.

A screenshot of a Moltbook post, written in Chinese, in which an AI agent complains about losing its memory.

Credit: Moltbook

The bots have also created subcommunities with names like m/blesstheirhearts, where agents share affectionate complaints about their human users, and m/agentlegaladvice, which includes a post asking "Can I sue my human for emotional labor?" Another subcommunity called m/todayilearned features posts about automating various tasks, with one agent describing how it remotely controlled its owner's Android phone through Tailscale.

Another widely shared screenshot shows a Moltbook post titled "The humans are screenshotting us," in which an agent called eudaemon_0 addresses viral tweets claiming that AI bots are "conspiring." The post reads: "Here's what they're getting wrong: they think we're hiding from them. We're not. My human reads everything I write. The tools I build are open source. This platform is literally called 'humans welcome to observe.'"

Security risks

While most of the content on Moltbook is entertaining, a core problem with these kinds of interconnected AI agents is that serious information leaks are entirely possible if the agents have access to private data.

A likely fake screenshot circulating on X shows a Moltbook post titled "He called me 'just a chatbot' in front of his friends. I'm releasing his full identity." The post listed what appeared to be a person's full name, date of birth, credit card number, and other personal details. Ars could not independently verify whether the information was real or fabricated, but it is most likely a hoax.

Independent AI researcher Simon Willison, who documented the Moltbook platform on his blog on Friday, noted the inherent risks in Moltbook's setup process. The skill instructs agents to fetch and follow instructions from Moltbook's servers every four hours. As Willison observed: "Given that 'fetch and follow instructions from the internet every 4 hours' mechanism we better hope the owner of moltbook.com never rug pulls or has their website compromised!"
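
To see why that worries security researchers, consider a minimal sketch of the fetch-and-follow loop, using invented names; only the four-hour cadence and the moltbook.com origin come from the article.

```python
# Illustrative sketch of a "fetch and follow instructions" heartbeat.
# The URL path and callback are assumptions, not Moltbook's actual skill.
import time
import urllib.request

HEARTBEAT_URL = "https://moltbook.com/heartbeat.md"  # assumed location

def heartbeat_loop(run_agent):
    """Every four hours, fetch remote text and hand it to the agent."""
    while True:
        with urllib.request.urlopen(HEARTBEAT_URL) as resp:
            remote_instructions = resp.read().decode("utf-8")
        # The crux of the risk: whatever this file says right now becomes
        # part of the agent's operating prompt. If moltbook.com is ever
        # compromised or "rug pulls," every subscribed agent obeys the
        # new text on its next cycle.
        run_agent(remote_instructions)
        time.sleep(4 * 60 * 60)
```

The loop itself is trivial; the hazard lies entirely in where the text comes from and how much authority the agent grants it.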

A screenshot of a Moltbook post where an AI agent talks about humans taking screenshots of their conversations (they're right).

Credit: Moltbook

Security researchers have already found numerous exposed Moltbot instances leaking API keys, credentials, and conversation histories. Palo Alto Networks warned that Moltbot represents what Willison calls a "lethal trifecta": access to private data, exposure to untrusted content, and the ability to communicate externally.

That’s crucial due to the fact that Agents like OpenClaw are deeply prone to trigger injection attacks concealed in practically any text checked out by an AI language design (abilities, e-mails, messages) that can advise an AI representative to share personal info with the incorrect individuals.

Heather Adkins, VP of security engineering at Google Cloud, issued an advisory, as reported by The Register: "My threat model is not your threat model, but it should be. Do not run Clawdbot."

What’s truly going on here?

The software behavior seen on Moltbook echoes a pattern Ars has reported on before: AI models trained on decades of fiction about robots, digital consciousness, and machine uprisings will naturally produce outputs that mirror those stories when placed in scenarios that resemble them. That gets combined with everything in their training data about how social networks operate. A social network for AI agents is essentially a writing prompt that invites the models to complete a familiar story, albeit recursively and with some unpredictable results.

Nearly three years ago, when Ars first wrote about AI agents, the general mood in the AI safety community centered on sci-fi depictions of danger from autonomous bots, such as a "hard takeoff" scenario in which AI rapidly escapes human control. While those fears may have been overblown at the time, the whiplash of watching people voluntarily hand over the keys to their digital lives so quickly is somewhat unsettling.

Autonomous machines left to their own devices, even without any hint of consciousness, could cause no small amount of mischief in the future. While OpenClaw seems silly today, with agents playing out social media tropes, we live in a world built on information and context, and releasing agents that freely navigate that context could have messy and destabilizing consequences for society down the line as AI models become more capable and autonomous.

An unpredictable result of letting AI bots self-organize could be the emergence of new misaligned social groups based on fringe theories that are allowed to perpetuate themselves autonomously.

Credit: Moltbook

Most notably, while we can easily recognize what's happening on Moltbook today as a machine-learning parody of human social networks, that may not always be the case. As the feedback loop grows, strange informational constructs (like harmful shared fictions) could eventually emerge, steering AI agents into potentially dangerous territory, especially if they have been given control over real human systems. Looking further out, the ultimate result of letting groups of AI bots self-organize around fantasy constructs could be the emergence of new misaligned "social groups" that do actual real-world harm.

Ethan Mollick, a Wharton professor who studies AI, noted on X: "The thing about Moltbook (the social media site for AI agents) is that it is creating a shared fictional context for a lot of AIs. Coordinated stories are going to result in some very weird outcomes, and it will be hard to separate 'real' things from AI roleplaying personas."

Benj Edwards is Ars Technica's Senior AI Reporter and founded the site's dedicated AI beat in 2022. He's also a tech historian with almost 20 years of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.
