Wikipedia bans the use of AI-generated content


Wikipedia recently published guidelines prohibiting the use of AI to generate or rewrite articles, with two exceptions related to editing and translation. The guidelines acknowledge that AI-generated content cannot be reliably identified by style signals alone, and they offer no further guidance on how LLM-based content will be detected.

Violations of Wikipedia’s core content policies

The new guidelines banning the use of LLMs state that AI use violates several of Wikipedia’s core content policies, without naming them. A review of those policies makes it reasonably clear which ones are meant: the policy on verifiability, the ban on original research, and perhaps the requirement for a neutral point of view.

The verifiability policy requires that any content likely to be challenged must be attributable to a reliable, published source that readers can check. LLMs generate text without explicitly citing sources, and they also tend to hallucinate facts.

The policy on original research states:

“Wikipedia does not publish original thought: all material contained in Wikipedia must be attributable to a reliable, published source. Articles cannot contain any new analysis or synthesis of published material that serves to advance a position that is not clearly advanced by the sources.”

LLMs, by design, generate a synthesis of published sources. As for the neutral point of view, an LLM may give more weight to dominant viewpoints to the detriment of minority ones. Most SEOs are aware that asking an LLM about SEO consistently produces answers reflecting the dominant, but not necessarily the most correct, point of view.

The new guidelines make two exceptions:

  1. “Editors are allowed to use LLMs to suggest basic edits to their own writing and to incorporate some after human review, provided the LLM does not introduce content of its own. Caution is advised, as LLMs may go beyond what you ask of them and change the meaning of the text in such a way that it is not supported by the cited sources.”
  2. “Editors are allowed to use LLMs to translate articles from another language’s Wikipedia to the English Wikipedia, but must follow the instructions presented at Wikipedia:LLM-assisted translation.”

As for identifying AI-generated content, the new guidelines suggest examining how well the content conforms to the core content policies and reviewing the recent edit history of the editor whose changes are suspected.

Featured image by Shutterstock/JarTee


