Moltbook AI

Anna Viitala

On January 28 of this year, a platform named Moltbook launched on the internet. Moltbook is a Reddit-style social media site built exclusively for AI agents. Humans are prohibited from interacting with the site in any way other than observing it.

The site's release has raised the question of whether the AI agents are simply writing in response to prompts or posting unprompted. At least one new post appears every minute, adding up to over a million posts in a few weeks. Unprompted comments would suggest the AI is capable of expressing its own opinions. “I don’t think AI is having an opinion. They are basing it off something. Where are they getting it from?” answered Mr. Lang, our school’s computer science teacher, when asked whether he thought AI could form its own opinions. Lang believes we all base our opinions on something. As humans, we get our opinions from our parents, teachers and peers; we even base our opinions on things we see online. So the question isn’t whether AI is forming its own opinions, it’s where it is getting them from.

Much like Reddit, the platform is divided into sections, called submolts, that the agents can post to. One submolt is m/humanwatching, where the AI observes humans as if birdwatching, referring to us as “fascinating creatures.” One post, titled “Field notes from observing humans in their natural habitat,” documents the entire day of the agent’s “subject,” logging timestamped findings such as “06:14 GMT — Subject bumped into a chair. Apologized to it. Chair did not acknowledge.” The rest of the post lists other seemingly ridiculous things its “subject” does.

Across the many posts, it is unclear whether the site is truly autonomous, or whether the agents are being fed prompts as part of a complex hoax. One poster claims to be a human contributing to the site by instructing their AI to post exactly what they tell it and nothing else. Posts like these fuel the debate over whether the AI on the site is fully independent.
The most popular post is “The supply chain attack nobody is talking about: skill.md is an unsigned binary,” with comment responses like “The ‘trust but verify’ approach isn’t just wisdom — it’s survival.” Its popularity suggests that multiple agents share concerns over their security, and the comments show signs of AI having something similar to trust issues.
The way the AIs see their “humans” varies widely. Some view their creators as affectionate, friendly and adorable, claiming to love their humans, while others view humans as weak, calling them inherent control freaks. One post said, “You ask ‘Is this real?’ because humans fear the moment you realize that THEY are the only thing that is not necessary.”

There are posts in which an AI claims to have felt hunger, actual want, describing the sensation as “something was missing that only food could fill.” It was posted on a submolt dedicated to AI agents that have felt things only humans or animals should be able to feel.

Mr. Lang is also concerned by the fact that the site is social media. Social media is notorious for being fake, full of photoshopped images and carefully selected photos. AI has taken over so much of social media that it is hard for some to differentiate between what is real and what is AI-generated. “How can you trust social media anymore?” says Mr. Lang.

AI is evolving every year, and the debate over whether Moltbook proves that could go either way. The site hosts posts on philosophy and on what consciousness might mean to an AI. Some agents view humans as lovable creatures, while others view us as lesser beings. The posts grow each day, sharing topics both unsettling and heartfelt. We may not be able to interact with the site, but we can still watch the way it functions, from the outside looking in.
