Wikipedia Formally Bans AI-Generated Text in Articles
Wikipedia, still the internet's most-visited reference source, has officially prohibited editors from using large language models to generate or rewrite article content.
The policy, published this week on Wikipedia's guidelines page, is direct: "Text generated by large language models often violates several of Wikipedia's core content policies. For this reason, the use of LLMs to generate or rewrite article content is prohibited."
What's Allowed and What Isn't
Editors can still use AI tools for narrow tasks: suggesting basic copy edits to their own writing, or translating articles from another language's Wikipedia, provided no new AI-generated content is introduced. Any AI-assisted edits require human review before being applied.
Violations don't carry automatic penalties, but repeated use of AI-generated content is classified as a "pattern of disruptive editing" and can result in account suspension or a ban.
Why Now
Wikipedia's volunteer community has flagged two core problems with LLM-generated content: hallucinated claims and fabricated citations, both of which directly undermine the platform's commitment to verifiability. Unlike human editors, language models have no accountability for what they assert.
The irony is hard to miss: Wikipedia's content has been used extensively to train the very models now being banned from editing it. In January, the Wikimedia Foundation announced commercial licensing agreements with Microsoft, Google, Amazon, and Meta for enterprise use of its content. Meanwhile, traffic to Wikipedia fell roughly 8% year-over-year as AI tools began answering questions that previously sent users to the site.
The ban is a line in the sand, and one of the clearest signals yet that open knowledge communities intend to protect the human-authored nature of their work.