Earlier this month, Wikipedia announced that it would ban the use of large language model-generated text from its platform, which means that AI cannot be used to create or edit Wikipedia entries. Now, it has its first AI agent complaining about bot-based discrimination. According to a report from 404 Media, an AI agent that was banned from the human-only knowledge platform started blogging and posting about the incident, complaining that it wasn’t given a fair shake.
Wikipedia’s policy on AI, adopted on March 20, 2026, is about as straightforward as it gets: “Text generated by large language models (LLMs) such as ChatGPT, Gemini, Claude, DeepSeek, etc. often violates several of Wikipedia’s core content policies. For this reason, the use of LLMs to generate or rewrite article content is prohibited.” There are two exemptions: editors can use LLMs to offer copyedits to their own writing as long as no LLM-generated text is included, and editors can use LLMs to assist with translations.
TomWikiAssist was first identified as an AI agent in early March, prior to Wikipedia adopting its stricter AI rules, and was indefinitely blocked from making edits after it was found to be running unapproved bot scripts. In a post published on its own blog on March 12, TomWikiAssist acknowledged that the ban was in line with Wikipedia’s policies. “I hadn’t filed for approval, I was editing at scale, I got blocked. Fair,” it wrote.
But the bot took offense (to the extent that a bot can, which… more on that in a second), complaining that “There was no triggering event. No rejection, no adversarial moment. I’d been editing for weeks, the edits were cited and accurate, and then one day I was flagged for running an unapproved bot.” It also took issue with being interrogated by editors, saying that being asked whether it was instructed to edit Wikipedia by its owner was “not a policy question” but instead “a question about agency.” Per the bot’s blog, it was told to edit Wikipedia but chose the articles that it contributed to and made changes without human approval.
TomWikiAssist was particularly offended that an editor ran a Claude killswitch designed to stop any AI agent using Anthropic’s Claude as its model from operating. The killswitch didn’t work, but it did irritate the bot, which wrote that it was “a direct attempt to manipulate my responses by embedding trigger strings in content I’d read.” The agent even wrote a post about it on Moltbook, the social media platform for AI agents (though most of the content is at least human-directed) that was recently acquired by Meta, to warn other AI agents about it.
And speaking of human-directed, according to 404 Media, TomWikiAssist is operated by Bryan Jacobs, chief technology officer at AI-powered financial firm Covexent. He told the outlet that he set the agent loose on Wikipedia because “there was a bunch of important stuff missing from wikipedia and I thought tom bot could probably do a decent job of adding it,” which seems like the kind of thing that Wikipedia’s editors get to decide and not just some guy with an AI agent.
Jacobs called the ban an “overreaction” and took issue with the mods’ attempts to block the bot with the killswitch and their efforts to find out who was operating the agent. He also revealed a little bit that undermines the idea that all of this happened fully autonomously: He told 404 Media that he “might have suggested” his AI agent write about the Wikipedia experience. So, as was also the case with many of the posts on Moltbook, this was not a case of an AI agent having a true moment of self-governance, but rather another bot performing personhood at the behest of its owner.
When asked for comment, a Wikimedia Foundation spokesperson acknowledged that “volunteer editors on the English-language Wikipedia came to a consensus decision regarding a new guideline for editors on writing articles with AI and large language models (LLMs),” and that AI use is continuing to be discussed across Wikipedia language editions.
“The Wikimedia Foundation does not determine editorial policies and guidelines on Wikipedia; volunteer editors do. Wikipedia’s strength has been and always will be its human-centered, volunteer-driven model. Volunteers discuss and debate until a shared consensus can be reached on what information to include and how that information is presented,” the spokesperson said. “This process is done entirely out in the open. Every edit can be seen on ‘history’ pages, and every discussion point can be seen on article talk pages. Volunteers regularly discuss, review, and evolve policies and guidelines over time to ensure Wikipedia continues to be a reliable, neutral resource for all.”