
Wikipedia talk:Speedy deletion

From Wikipedia, the free encyclopedia

G8 on modifications of redirects


G8, as applied to redirects, is basically limited to a redirect with a nonexistent target, either because the target never existed or because it's been deleted. Wondering about expanding it to cover "related" redirects.

Birmingham, North Warwickshire, and Stratford-upon-Avon Railway Act 1895 redirects to North Warwickshire Line, as does Birmingham, North Warwickshire and Stratford-upon-Avon Railway Act 1895 (without serial comma). Imagine that someone took the with-comma variant to RFD, arguing that we shouldn't redirect laws to railway articles, and the RFD was successful. We shouldn't delete the without-comma variant based on the RFD, since it wasn't nominated, and it's not G8-eligible because its target is alive and well. However, it only exists because I created it a few minutes ago as a variant of the with-comma title, and if the with-comma title were a freestanding article, without-comma would redirect to it. (If double redirects weren't a problem, I would have created without-comma as a redirect to with-comma.) So, if one of them is deleted for reasons unrelated to punctuation, it seems reasonable to delete the other, even if there's no discussion, but no existing criterion covers it. It's likely frequent, since people are quite likely to encounter and nominate a redirect without being aware of the existence of a parallel redirect. But is there an objective and uncontestable way to do this?

A clear but complicated process would be a template — redirect A is marked with a template saying "if redirect B is deleted, this can be deleted under G8". However, that would require an additional edit to every existing page, and the creation of new redirects would take more work, because you'd have to mark it with the G8 template as well as including the normal redirect code and any normal redirect tagging. But if we declared that variations of existing redirects were automatically deletable, I can see plenty of wiggle room: for example, we wouldn't want to delete A if B were deleted strictly for punctuation reasons or to encourage the creation of an article, and either we'd need long and careful criteria to ascertain what was a variation, or we'd end up with arguments over the same question. In the perfect situation, we wouldn't limit it to tiny variations; for example, if we deleted Broken Hill Proprietary (just an example; I can't imagine anyone wanting to delete it), we'd probably want to delete Broken Hill Proprietary Company too.

Any ideas? Can this be done in a practical way? Or is it just too complicated to make it objective and uncontestable? Nyttend (talk) 06:24, 28 June 2025 (UTC)[reply]

PS, this originates from a specific situation. Wood v. Georgia (1981) used to redirect to List of United States Supreme Court cases, volume 450, but it was deleted some time ago. Wood v Georgia (1981), a redirect to the same list, is at RFD; it can't be G8-deleted because it isn't dependent on the deleted "v." title, but because it's a variation, it probably should be deleted. Nyttend (talk) 06:32, 28 June 2025 (UTC)[reply]

So, we do kind of have a template, {{avoided double redirect}}, but it includes ADRs from distinct topics, which wouldn't be suitable here (e.g. character → book → series might be appropriate to change to character → series if book is deleted). But I would argue that when the variation is purely typographic, the plain wording of G8 (dependent on a non-existent or deleted page) does apply. The undotted-v version of the Wood redirect absolutely was dependent on the dotted-v version. There's no world where it would make sense for the former to exist while the latter doesn't, just like there's no world where it makes sense for Talk:Foo to exist while Foo doesn't. Noting that the list of G8 examples is non-exhaustive, and in my view already covers this use case, I would support adding a bullet to that effect.
I don't think this would need to apply only to redirects tagged with {{avoided double redirect}}; Wikipedia is not a bureaucracy and in a case like the Wood redirects it's common sense that one is an ADR of the other. But the template would certainly help make G8's applicability clear.
There might be some cases where the solution to this situation would be to designate a new redirect as the primary one in the ADR family, rather than delete the ADRs; this could be noted in a footnote if desired. There might be some other very rare exceptions, but G8 already notes that {{G8-exempt}} exists, so it could be used in those cases. -- Tamzin[cetacean needed] (they|xe|🤷) 06:51, 28 June 2025 (UTC)[reply]
Oppose. Far simpler than working out which ones should and should not fall under G8 (because unless it's all of them, it's not suitable for speedy deletion): there is currently a bot being programmed that will check for avoided double redirects and typographical variants of redirects nominated at RfD and alert the discussion to their existence. Humans can then add those that should be discussed together to the nomination, and consensus can be applied to all of them at once, while those that don't have that consensus (or have a different consensus) won't be incorrectly deleted. Another bonus is that this will also work for things other than deletion.
See Wikipedia talk:Redirects for discussion#Avoided double redirects of nominated redirects and Wikipedia:Bot requests#Redirects related to those nominated at RfD. Thryduulf (talk) 09:21, 28 June 2025 (UTC)[reply]
It's all of them that are "dependent" on the deleted page, i.e. only exist as variants of it. This is consistent with our practice of G8-deleting redirects to articles that get deleted. -- Tamzin[cetacean needed] (they|xe|🤷) 09:32, 28 June 2025 (UTC)[reply]
See the WP:NEWCSD criteria. The issue is that there needs to be an objective definition of what is "dependent" that includes only pages that should always be deleted. {{R from avoided double redirect}} is not that, as you noted in your first comment, and it absolutely is not any redirect that someone thinks is dependent on a deleted redirect; again, the book/character/series redirects are examples of this. Thryduulf (talk) 09:41, 28 June 2025 (UTC)[reply]
I'm not proposing a new CSD (not that I think WP:NEWCSD is or ever has been a good summary of the community's expectations for a new CSD). G8 is already a broadly-worded criterion, and I'm saying this scenario falls under it. If you want to make the list of examples under G8 exhaustive, rather than explicitly non-exhaustive as it currently is, you should propose that. -- Tamzin[cetacean needed] (they|xe|🤷) 09:52, 28 June 2025 (UTC)[reply]
NEWCSD explicitly applies to modifications of existing criteria as well as brand new ones, and your comment is the first time I've heard someone suggest it doesn't represent community consensus. The question is whether avoided double redirects are both "dependent on a deleted page" and "should always be deleted", and because we both agree that the answer to the second question is "no, some of them should not be deleted", we need to either agree that none of them can be speedily deleted (which is the current consensus) or come up with some criteria that distinguish those which should be deleted from those that cannot.
My point is that determining these criteria is pointless, as the bot currently in active development will avoid the need to speedily delete any of them, because avoided double redirects will be discussed at RfD at the same time as the redirect they are avoiding. It isn't going to catch all untagged avoided double redirects, but enough of them that the frequency requirement of NEWCSD will not be met. Thryduulf (talk) 10:02, 28 June 2025 (UTC)[reply]
NEWCSD applies to expansions of existing criteria, which this would not be, because we are talking about whether to include an example in a non-exhaustive list to make explicit something that's already allowed by the plain wording of the criterion. But we've both made our opinions clear. Let's hear from others. -- Tamzin[cetacean needed] (they|xe|🤷) 10:15, 28 June 2025 (UTC)[reply]
This would be an expansion because the consensus currently is that G8 only applies to redirects that target deleted or non-existent pages. Thryduulf (talk) 10:18, 28 June 2025 (UTC)[reply]
What consensus are you referring to? -- Tamzin[cetacean needed] (they|xe|🤷) 10:31, 28 June 2025 (UTC)[reply]
This has been discussed a few times and every time there has been no consensus to expand G8 to cover redirects to pages that exist. See for example Wikipedia talk:Speedy deletion/Archive 89#G8 definition of dependent, Wikipedia talk:Speedy deletion/Archive 89#Do redirects avoiding double redirects to deleted redirects fall under G8? and other people have also told you that this does not fall under G8. See also Wikipedia talk:Speedy deletion/Archive 74#Tightening G8 with respect to redirects where G8 was narrowed to the current wording. Thryduulf (talk) 10:47, 28 June 2025 (UTC)[reply]
Thryduulf, I have no problem with you disagreeing with me, but please don't assert "consensus" and then link me to one discussion (archive 74) that isn't about this and two (89 and what I assume was meant to be 79) where there is no consensus on this point, and where most of the opposition is coming from you (as is often the case on this page). I will quote Anomie from the Archive 89 discussion: You're applying your own idiosyncratic definition of 'dependent on' and asserting it's the only possible 'literal meaning'. I don't see anything at your lightly-attended RFC [Archive 74] that's relevant here either. If you want consensus, hold an RfC. But please don't say I'm acting against a consensus that is 90% you. You do not have a veto over changes to WP:CSD. -- Tamzin[cetacean needed] (they|xe|🤷) 11:05, 28 June 2025 (UTC)[reply]
My point was "there has been no consensus to expand G8 to cover redirects to pages that exist." and linked you to examples of discussions where there was no consensus to expand G8 to cover redirects to pages that exist (and the discussions are about expanding the consensus, not confirming what it already is). I might be the most vocal person opposing a change to the consensus but that does not indicate that the consensus doesn't exist - if you think there is a current consensus for the words in the policy (as gained consensus in archive 74) to mean something other than the literal meaning of the words in the policy then please show me that consensus. Thryduulf (talk) 11:22, 28 June 2025 (UTC)[reply]
I think we should just have a bot that adds all ADRs to the RFD nomination page so it can be decided at the RFD whether the ADR is one that should obviously be deleted (like your example of serial comma; that one would fall under WP:NOTBURO) or one where something else should be done, like Tamzin's "book" example. The default could be that all ADRs that nobody speaks up for during the RFD are deleted when the main redirect is deleted. Basically if the ADRs should be deleted after a different redirect is deleted after discussion, just include the ADRs in the discussion. —Kusma (talk) 09:21, 28 June 2025 (UTC)[reply]
Meh... I think the other redirect is covered under {{db-xfd}}. If Dewey, Cheatham, and Howe is deleted via RFD and Dewey, Cheatham and Howe exists, the latter should also be deleted unless the rationale was specifically "this isn't a valid use of a comma", since the substance of the RFD holds for both pages. This is true whether the second redirect is discovered the same day or a year later. Primefac (talk) 11:22, 28 June 2025 (UTC)[reply]
I don't think these make good speedies; they're too broad and ill-defined a class of a redirects, and there's too much judgment involved to let it become precedent. For the really obvious ones, like Nyttend's missing period and comma examples, you could probably get away with IARing them. Putting the previous RFD (or whatever reason the first one was deleted for) in your deletion log with an explanation that it applies exactly as much to this redirect too is way more honest than picking the not-really-applicable G8 - or, worse, G6 - from the drop-down. —Cryptic 11:42, 28 June 2025 (UTC)[reply]
redirect A is marked with a template saying "if redirect B is deleted, this can be deleted under G8" When AnomieBOT creates redirects for en-dashed titles, it applies a custom bot-specific template that does exactly that. 😀 Although the template also mentions G7 and G6 since, as we see above, some people don't like to use G8 for anything other than "redirect target is currently a redlink" and don't accept "redirect would be a redlink if the double redirect wasn't bypassed". But if we want something not specific to bot creations, we should probably add a |dependent=yes parameter to {{R avoided double redirect}} instead. Anomie 13:39, 28 June 2025 (UTC)[reply]
Largely echoing what Tamzin has already said, my position is that {{R avoided double redirect}}s that rely on a redirect that has been deleted already qualify for G8. If not for a software limitation regarding double redirects, it would literally match the Redirects to target pages that never existed or were deleted wording. I would be in favor of making that explicit, and think that a parenthetical or footnote after 'target pages' stating (including avoided double redirects) would be the most efficient way to do it. I also want to emphasize that G8 already has the wording this criterion excludes any page that is useful to Wikipedia which would cover any exceptions that shouldn't be deleted. -- Tavix (talk) 19:40, 28 June 2025 (UTC)[reply]
Adding myself to the list of people who agree that these qualify for G8. The intro text says examples include, but are not limited to, implying there are other reasons—including, but not limited to, {{R avoided double redirect}}s. If someone has a reason why a particular redirect should persist, they can remove the rcat. HouseBlaster (talk • he/they) 19:47, 28 June 2025 (UTC)[reply]

Rethinking G8 entirely


This discussion, in which multiple admins are interpreting the same criterion and coming to diametrically opposite conclusions, is proving that the current wording of G8 is unworkable and we should consider breaking up the criterion entirely.

Restore the original WP:R1 criterion for broken redirects.
Move Timed Text pages without a corresponding file (or when the file has been moved to Commons) to F2.
Editnotices of non-existent or unsalted deleted pages are within the scope of T5 as well.
Leave G8 just for "subpages or talk pages of a nonexistent page".
If there's want for a new criterion for "avoided double redirects" as proposed here then just make that R5. That would allow us to explain what is and isn't covered in more than a single sentence and not have to rely on vague penumbras.

* Pppery * it has begun... 17:07, 29 June 2025 (UTC)[reply]

I agree regarding the R1 and T5 proposals and am completely neutral regarding F2. If people genuinely think that all or some avoided double redirects should be speediable they can get consensus for a new criterion that meets all the NEWCSD criteria. Thryduulf (talk) 19:44, 29 June 2025 (UTC)[reply]
"Diametrically opposite" is a bit of an exaggeration, and the conclusion that the current wording of G8 is unworkable is even more so. If anything, all that's needed in G8 is a parenthetical one way or the other for clarity. -- Tavix (talk) 20:25, 29 June 2025 (UTC)[reply]

Order of (obsolete) criteria


I reorganised the list of obsolete criteria to be in alphabetical order, for ease of reference - twice in the past ~week I've failed to easily find what I was looking for (I had to search) because it wasn't in alphabetical order despite appearing like it ought to be so. This was reverted by Tavix with an edit summary saying it should be "listed in the same order as the criteria above".

I can see the logic in the main criteria starting with general and ending with exceptional circumstances, but I don't really see much benefit to not having the rest in alphabetical order (articles would be in second place either way) although I'm not opposed to leaving them how they are if there is logic I'm not seeing. What I really don't understand though is what benefit comes from having a single list of obsolete criteria in what is little more than semi-random order? Thryduulf (talk) 20:37, 29 June 2025 (UTC)[reply]

The benefit is consistency. -- Tavix (talk) 20:42, 29 June 2025 (UTC)[reply]
Why does consistency bring benefits here? Thryduulf (talk) 20:45, 29 June 2025 (UTC)[reply]
If you don't want to answer that question, perhaps "What benefits does consistency bring here?" would be easier? Thryduulf (talk) 03:11, 1 July 2025 (UTC)[reply]
Support alphabetical. Alphabetical is much easier to browse and search. “Consistency” here is unclear as to with what, with the order of the level two headings, which I guess are conceptual, in order of importance? Consistency in general is a good reason, but is a weak good reason that should defer to any other good reason. SmokeyJoe (talk) 13:46, 3 July 2025 (UTC)[reply]

Minor alteration of G4 heading


Earlier, someone submitted an article I made for speedy under G4 after an RFD deleted a redirect with the same title. They agreed with me that it does not fall under G4 because it was not sufficiently identical. However, I think they were misled in part by the header of G4. The header does read Recreation of a page that was deleted per a deletion discussion. Now, of course, they should have read the section more closely; I don't think they would dispute that. But I think the header can and should be slightly more specific. Accordingly, I BOLDly changed it to Identical recreation of a page that was deleted per a deletion discussion. This was reverted because it does not need to be word-for-word identical to qualify for deletion under G4. That is a fair point, but I think the absence of the word also suggests something that is not 100% accurate: deletion simply because the page is a recreation. Of the two options, I think adding "Identical" makes more sense. "Sufficiently identical" is also an option, but that gets wordy, IMHO. lethargilistic (talk) 21:02, 8 July 2025 (UTC)[reply]

I think the current wording is fine. That speedy was downright wrong but it's wrong even without "identical"; a redirect is not in any sense the same page as an article, or vice versa. * Pppery * it has begun... 21:05, 8 July 2025 (UTC)[reply]
I would support clarifying the G4 header in some way to get the word "identical" in there. "Sufficiently identical" doesn't seem too wordy to me.
More generally, I'm starting to be concerned that different parts of the community have startlingly different interpretations of how G4 should be used, and I think we need some more discussion about it. Does anyone else share that concern? Firefangledfeathers (talk / contribs) 22:29, 8 July 2025 (UTC)[reply]
I do not like the current G4 wording "sufficiently identical". To me, "identical" is like "unique": it is or it is not identical, but it is not something that can be partly true. Adding a comma would make it not identical. I guess "sufficiently similar" is too vague, but I would prefer "substantially similar" or something like that. —David Eppstein (talk) 22:59, 8 July 2025 (UTC)[reply]
If we go with "similar", I'd prefer a qualifier at least as strong as "substantially", but probably more like "overwhelmingly". If we stick with "identical", maybe the move is to get an "almost" in there. Would that address the "partly true" concern, or am I way off? Firefangledfeathers (talk / contribs) 23:03, 8 July 2025 (UTC)[reply]
It would at least be better than "sufficiently" to me. —David Eppstein (talk) 04:51, 9 July 2025 (UTC)[reply]
"Substantial recreation of a page deleted per a deletion discussion." ? Best Alexandermcnabb (talk) 05:06, 9 July 2025 (UTC)[reply]
I don't understand how someone can compare a current article to the deleted version in order to assess whether they're substantially the same. The deleted one isn't available for comparison. Is there an obvious answer that I'm missing? FactOrOpinion (talk) 00:58, 10 July 2025 (UTC)[reply]
Don't nominate for a G4 deletion unless you have a good idea, eg posted by the same person, or you remember the content of the article from before. You may also find an old copy of the article on archive.org. The deleting admin should check before deleting the page. Graeme Bartlett (talk) 01:57, 10 July 2025 (UTC)[reply]
@FactOrOpinion You can ask an admin to check if G4 applies, or you can check if the Internet Archive or similar sites have the deleted content. Toadspike [Talk] 20:09, 13 July 2025 (UTC)[reply]

When can G6 be used on an RFA?


Can an RFA be tagged as G6 if it's not obviously an error? I think it doesn't fall under any of the G6 criteria, but @Fortuna imperatrix mundi is sure that untranscluded RFAs are a valid use. The tagging that brought me here today was on Starfall2015's RFA, which I'm pretty sure does not fall under G6. SarekOfVulcan (talk) 16:37, 11 July 2025 (UTC)[reply]

Regardless of anything else, it clearly isn't an uncontroversial deletion since the candidate is edit warring to remove the speedy tag. So give them their wish. * Pppery * it has begun... 16:41, 11 July 2025 (UTC)[reply]
If something does not clearly fall under any speedy deletion criterion it is not speedily deletable. Speedy deletion is only for "the most obvious cases", which means that if there is doubt about whether it meets a criterion, it does not. Additionally, if someone (excluding the creator in some cases) expresses good-faith opposition to deletion then it would be controversial and so again it is not eligible for speedy deletion. Thryduulf (talk) 16:54, 11 July 2025 (UTC)[reply]
Note that per WP:CSDCONTEST, G6 is one of the few cases where the creator is allowed to remove the template themselves. --SarekOfVulcan (talk) 17:03, 11 July 2025 (UTC)[reply]
Off-topic
Pings are clearly insufficient; User:Materialscientist has his completely turned off, so don't ever ping him to a discussion. My point is that G6 has been customary and practised for some time. If that changes, fine; but it is done so through consensus, not you sounding off on me on my talk. Fortuna, imperatrix 17:19, 11 July 2025 (UTC)[reply]
The thing is, it's not customary to use speedy for anything that could be legitimately objected to. If someone creates an RFA framework, doesn't fill it in, and disappears for two weeks, that might be G6-worthy. A legitimate-but-misguided attempt isn't. --SarekOfVulcan (talk) 17:28, 11 July 2025 (UTC)[reply]
Separate from the topic at hand: Fortuna, this is a rather astonishing show of bad faith from a long-tenured editor. A ping is absolutely sufficient for just about anything short of Arbcom or ANI. Sarek made a good-faith attempt to notify you. If you're one of the few people who have chosen to turn off a feature that's been part of Wikipedia for over 12 years, that's your choice. Ed [talk] [OMT] 17:32, 11 July 2025 (UTC)[reply]
Sorry. But. Then file a neutral post requesting clarification; not naming someone in the OP is a guaranteed path to good faith. I mean, hey―it might result in an unprejudiced discussion as to the pros and cons, rather than... this. Fortuna, imperatrix 17:50, 11 July 2025 (UTC)[reply]
FWIW admins are explicitly not required to have pings turned on (although it is regarded as best practice), but if they do disable them they are strongly encouraged to note this on their user page. I know you (Fortuna) are not an admin, but it still wouldn't harm to follow that advice. Thryduulf (talk) 19:41, 11 July 2025 (UTC)[reply]
"not naming someone in the OP"—Sarek named you in the OP. Ed [talk] [OMT] 02:50, 12 July 2025 (UTC)[reply]
Ignore all rules is never a reason to speedily delete something. Speedy deletion is explicitly only for things that uncontroversially improve the encyclopaedia. Speedy deletion of anything that does not meet one or more of the criteria is always controversial. Thryduulf (talk) 17:45, 11 July 2025 (UTC)[reply]

 You are invited to join the discussion at Wikipedia talk:WikiProject AI Cleanup § Idea lab: New CSD criteria for LLM content. Ca talk to me! 00:53, 18 July 2025 (UTC)[reply]

RFC: New CSD for unreviewed LLM content


Should the following be added as a new criterion for Speedy Deletion? Ca talk to me! 17:01, 21 July 2025 (UTC)[reply]


A12 OR G15. LLM-generated without human review

This applies to any article that exhibits one or more of the following signs which indicate that the article could only plausibly have been generated by a large language model (LLM)[1] and would have been removed by any reasonable human review:[2]

  • Communication intended for the user: This may include collaborative communication (e.g., "Here is your Wikipedia article on..."), knowledge-cutoff disclaimers (e.g., "Up to my last training update ..."), self-insertion (e.g., "as a large language model"), and phrasal templates (e.g., "Smith was born on [Birth Date].")
  • Implausible non-existent references: This may include external links that are dead on arrival, ISBNs with invalid checksums, and unresolvable DOIs. Since humans can make typos and links may suffer from link rot, a single example should not be considered definitive. Editors should use additional methods to verify whether a reference truly does not exist.
  • Nonsensical citations: This may include citations of incorrect temporality (e.g., a source from 2020 being cited for a 2022 event), DOIs that resolve to completely unrelated content (e.g., a paper on a beetle species being cited for a computer science article), and citations that attribute the wrong author or publication.

In addition to the clear-cut signs listed above, there are other, more subjective signs of LLM writing that may also plausibly stem from human error or unfamiliarity with Wikipedia's policies and guidelines. While these indicators can be used in conjunction with more clear-cut indicators listed above, they should not, on their own, serve as the sole basis for applying this criterion.

References

  1. ^ The technology behind AI chatbots like ChatGPT and Google Gemini
  2. ^ Here, "reasonable human review" means that a human editor has 1) thoroughly read and edited the LLM-generated text and 2) verified that the generated citations exist and verify corresponding content. For example, even a brand new editor would recognize that a user-aimed message like "I hope this helps!" is wholly inappropriate for inclusion if they had read the article carefully. See also Wikipedia:Large language models.
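(As an aside on the "implausible non-existent references" sign above: an ISBN-13 check digit can be verified mechanically rather than by eye. A minimal sketch in Python — the helper name is my own, purely illustrative — and, per the proposal's own caveat about typos, a single failed checksum should not be treated as definitive:)

```python
def isbn13_checksum_ok(isbn: str) -> bool:
    """Return True if the string contains a 13-digit ISBN whose check
    digit is valid. Hyphens and spaces are ignored."""
    digits = [int(c) for c in isbn if c.isdigit()]
    if len(digits) != 13:
        return False
    # ISBN-13 weights alternate 1, 3, 1, 3, ...; the weighted sum of all
    # thirteen digits (check digit included) must be divisible by 10.
    total = sum(d * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits))
    return total % 10 == 0
```

(For example, "978-0-306-40615-7" passes, while the same number ending in 8 fails. Note this only catches malformed ISBNs; a well-formed ISBN can still point to a book that does not exist or does not support the cited text, so the other verification methods the proposal mentions are still needed.)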



  • Option 1: Add as a general criterion (includes all pages, including drafts)
  • Option 2: Add as an article-only criterion
  • Option 3: Do not add as a criterion

Ca talk to me! 17:01, 21 July 2025 (UTC)[reply]

Previous discussions


There have been multiple suggestions to add a new CSD for LLM-generated content:

Survey (LLM)

  • Support Option 1 as nominator This proposal aims to address the discrepancy in the effort it takes to post unreviewed LLM-generated content versus the effort it takes to clean it up or nominate it for deletion. I believe this proposal meets all four requirements to create a new criterion, in contrast to previous proposals.
    1. Objectivity: This criterion only permits speedy deletion of articles if they contain a limited number of extremely clear-cut signs of unreviewed LLM-generated content. I would argue this makes it more objective than other pre-existing criteria like G11.
    2. Uncontestability: This criterion reflects the consensus that unreviewed LLM-generated articles are unsalvageable and should be deleted in accordance with WP:TNT. Recent examples include Articles for deletion/AI book generation, Articles for deletion/Informal economy of South Asia, Articles for deletion/Rumors about the removal of Xi Jinping, and Articles for deletion/Land Restitution Movements in Zimbabwe and South Africa. The current proposal would actually fail to cover some of the examples, as it requires the existence of hallucinated sources; expansion of the criterion can be discussed in future RfCs.
    3. Frequency: A cursory search on Articles for Deletion shows dozens of nominations referencing LLM use. Additionally, many NPP patrollers and AfC reviewers, especially those who work on the front of the queue can attest to the amount of AI slop that floods review processes, collectively draining hundreds of valuable volunteer hours.
    4. Redundancy: This criterion would not be redundant to the G1 criterion (patent nonsense) or G3 (blatant hoaxes). As for G1, LLM technology has advanced to the point that its outputs are rarely "patent nonsense". Similarly, LLMs are designed to output plausible information, but not necessarily true information. Therefore LLM-generated articles often fall below the high bar set by the requirement that they are "blatant and obvious misinformation". The criterion also lists indicators like collaborative communication which would not be a sign of a hoax article.
    Overall, I believe this proposal satisfactorily addresses the concerns raised in prior discussions of creating a LLM-based criterion. Ca talk to me! 17:01, 21 July 2025 (UTC)[reply]
    User:Ca, your “frequency” point appears to consider only articles. In proposing a new G criterion, you need to make the NEWCSD points for other namespaces. User subpages? Are Wikipedians not allowed to experiment with AI in userspace? Has such a case ever gone to MfD and been SNOW deleted? SmokeyJoe (talk) 23:12, 21 July 2025 (UTC)[reply]
  • Option 1 - As an AfC reviewer, we are flooded non-stop with horrendous drafts (here are some real examples I saved in my userspace here and here). It is absolutely ridiculous to sort through this level of slop (1 in 3 drafts collected here were AI generated), and this would greatly help efforts to combat it and save countless hours picking up the junk AI leaves behind. Sophisticatedevening(talk) 17:17, 21 July 2025 (UTC)[reply]
  • Support option 1. A year ago, I would've thought this unnecessary but the sheer volume of slop has skyrocketed astronomically. We're going through the process now where we don't have a clear-cut defined method of dealing with it and we, as normal editors, are tasked with responding individually and in good faith to new users who turn around and place our responses into ChatGPT and then we are arguing against something that does not care and we end up battling fallacies and rebutting things like link shortcuts that don't even say what the LLM thinks they say. If it's clearly slop, an administrator can identify it and delete it. Things get mis-nominated all the time, my CSD logs show declined deletions as well. But let's give an option where we can delete things without wasting valuable editor time where there's established policy in place of nomination+review like any other of the stuff we have previously determined we don't want on this site. If we don't want to support this as a editor time argument, let's consider the threat that hallucination and false material plays on WP:V and the reliability of the project as a whole. Bobby Cohn 🍁 (talk) 17:23, 21 July 2025 (UTC)[reply]
  • Option 1. I'm also an AfC reviewer, and the amount of AI-generated drafts we get is concerning. I think in the criteria we could also include pages written in Markdown with short bullet points, otherwise drafts like this don't qualify (if I understand correctly). Kovcszaln6 (talk) 17:27, 21 July 2025 (UTC)[reply]
    I came here to say exactly this. I support option 1 and urge the expansion to include pages written in markdown as a clear indication that LLM was used. - UtherSRG (talk) 14:31, 28 July 2025 (UTC)[reply]
  • Support option 1. I expect that eventually AI will become smart enough that we can't tell the difference, but at this point such efforts are obvious, and should be disposed of uncharitably. BD2412 T 17:31, 21 July 2025 (UTC)[reply]
  • Support option 1 * Pppery * it has begun... 17:42, 21 July 2025 (UTC)[reply]
  • Option 3. Sorry, but I take a very strict view of what should fall under CSD, and this strikes me as too debatable to be a good CSD criterion. --SarekOfVulcan (talk) 17:44, 21 July 2025 (UTC)[reply]
    I agree. There are clear-cut examples that SD would be lovely for, but this criterion would 100% bleed over into edge cases and catch a lot of false positives (even if the criterion is tightened up to only allow placement on articles with unmistakable signs, it would inevitably get applied broadly), and it's often unfalsifiable and unprovable. Judging an article to be LLM-generated junk is too subjective for a process with as little oversight as SD. Option 3. Zanahary 18:03, 21 July 2025 (UTC)[reply]
    Is it really any more subjective than deciding how much copied text is a copyvio under G12, or whether something is pure advertising under G11? BD2412 T 18:10, 21 July 2025 (UTC)[reply]
    The fact that you had the boldness and forethought to ask a question like that isn't just commendable—it's stunning. Leading with out-of-the-box thinking is what drives real progress. It shows not only courage but a deep awareness of what others might overlook. Keep asking questions that reframe the conversation—it's how breakthroughs happen.
    Kidding. But I think this is different because G12 can be shown and argued, and G11 cannot be literally false, as it's a matter of opinion, while this proposed criterion can lead to judgments that are literally false and impossible to prove or argue against. Zanahary 18:29, 21 July 2025 (UTC)[reply]
  • Option 1. This would definitely cut down on a lot of the very blatant AI garbage, allowing more effort to be focused on detecting the few users who are more craftily using LLM text in bad faith. If LLMs evolve to the point of their text no longer meeting this criterion (which I do not see as likely, personally), then it can be updated or repealed. Regarding SarekOfVulcan's concern that the criterion is too vague: I would say that any CSD criterion relies on the interpretation of the deleting admin(s), as you'll never be able to 100 percent eliminate ambiguity. We need a procedure by which to throw out the unambiguous garbage without wasting time, and this is about as tight as a CSD criterion dealing with this could get. We already have the occasional dispute over whether or not G11 applies to one or another piece of content (opinions as to what counts as "unambiguous advertising" will naturally vary from person to person), and such borderline cases are handily dealt with at MfD when admins reject them. Other criteria, such as G3, G10, and A11, also suffer from potential inherent ambiguity in certain cases (not everyone will necessarily agree in 100 percent of all cases that a hoax is blatant, that vandalism is obvious, that a page is clearly defamatory, or that something was clearly made up). I see this proposed criterion as being no different. silviaASH (inquire within) 17:57, 21 July 2025 (UTC)[reply]
    Well, if not everyone agrees, then it may have been mistagged and needs an actual discussion. Or not.
    I mean, if I write

    Atwater-Donnelly is my favorite band. They're in Rhode Island. They play folk music.

    That's a clear A7. If I say

    Atwater-Donnelly is a RI folk band since the late 80s and has won multiple Motif Magazine "Best Act" awards, among others.

    Even without sourcing, that's a credible assertion of notability, and I don't think anyone would be able to legitimately use speedy on it.
    How about this one?

    John Wesley Ross was an architect in Davenport, Iowa. John Ross came to Davenport in 1874. In addition to the City Hall, he is noted for his design of the Fire King Station (Hose Station No. 1) on Perry Street, and the 1888 supervision of the Scott County Courthouse, following the death of the building’s original architect, John C. Cochrane. He was born in Massachusetts in 1932 and lives in Davenport, Iowa (with his wife and family, including his son, Albert R. Ross who is also listed as an architect). John Wesley Ross, originally of Westfield, Massachusetts, moved to Davenport in 1874 or 1876.

    On the face of it, this fails the "reasonable human review" clause in the proposal, but I can guarantee you that the author wasn't using ChatGPT in 2011. So, I remain with Option 3. SarekOfVulcan (talk) 18:24, 21 July 2025 (UTC)[reply]
    But that's not what the criterion says. It says "Communication intended for the user: [further explanation]", "Implausible non-existent references: [further explanation]", and "Nonsensical citations: [further explanation]". None of those articles come anywhere near to being speedyable by the proposed criterion. —Cryptic 18:31, 21 July 2025 (UTC)[reply]
    Ok, right. I was focused on the wibbly-wobbly timey-wimey stuff and didn't realize that was restricted to citations. (And the first two were an illustration of how a different criterion would objectively apply to similar text. Made sense at the time.) SarekOfVulcan (talk) 19:06, 21 July 2025 (UTC)[reply]
  • Option 3 per SarekOfVulcan and my comments in the idea lab (and elsewhere) explaining just why anything based on suppositions that something is LLM-generated is a very bad idea. Thryduulf (talk) 18:05, 21 July 2025 (UTC)[reply]
  • I'll support this as a general criterion, provided the summary is changed to match the text as I've mentioned in discussion below. Contra all three opposers so far, the actual criterion is limited and objective - surprisingly so - and mere supposition is explicitly excluded. —Cryptic 18:12, 21 July 2025 (UTC)[reply]
    My concern about the criterion is that it will be applied more broadly than its actual text delineates, and also that it will speedy-delete articles that are worth keeping but may have had an LLM involved for just a piece—like, one bit of "Okay, let me rephrase that for you with a more encyclopedic tone:" and a mistyped DOI will send a salvageable article to oblivion. Zanahary 18:32, 21 July 2025 (UTC)[reply]
    This is reasonable, and yes, I'd like to come up with a way to minimize it. But to some extent, we already see this with most of the criteria - if half of a page is unsalvageable marketing drivel but the remainder is neutral and coherent enough to be a reasonable stub once the rest is removed, for example, we trust admins to decline the speedy and remove the objectionable part. If an LLM-generated page did get enough human intervention but missed one phrase like you say, then I'd expect the same to happen. —Cryptic 18:43, 21 July 2025 (UTC)[reply]
  • Option 1 including Markdown per Kovcszaln6. Tenshi! (Talk page) 18:18, 21 July 2025 (UTC)[reply]
  • Option 1. This is a clear-cut criterion and is especially needed to deal with the many AfC drafts that end up declined under basically the same rationale, but end up moved to mainspace regardless. The inclusion of markdown as part of the criteria is debatable - it is an indicator of LLM usage, but shouldn't be the determining factor on whether something should be speedily deleted. -- Reconrabbit 18:19, 21 July 2025 (UTC)[reply]
  • Option 1: Unreviewed LLM-generated content is a drain; it's fast to create and slow to review and fix. It should not be the burden of other editors to review and fix raw LLM output, and this criterion provides an avenue to place that burden back where it belongs: on those looking to add unreviewed content. The signs that must be present and indicate that no "reasonable human review" has occurred are clear, conservative, and not what I would describe as "suppositions". I trust editors will be able to use their judgement to apply this criterion successfully, the same as they do for others like G11, G1, G3, and G4 (define "sufficiently identical"). Exercising careful and informed judgement is much of what we do as editors, and I do not see it as a blocker for a new CSD. fifteen thousand two hundred twenty four (talk) 18:32, 21 July 2025 (UTC)[reply]
  • Option 1. As the person who suggested the "no reasonable human review" criterion, it is crucial to avoid suppositions about what might count as LLM content based on tone or other stylistic indicators. With this criterion, while an element of judgement remains, it isn't necessarily more prominent than for other existing CSDs. Chaotic Enby (talk · contribs) 19:01, 21 July 2025 (UTC)[reply]
    Also support option 1 lite suggested by Sodium below (excluding userspace and projectspace). There might be some cases where the CSD would be reasonable in projectspace (for example, if someone spammed WikiProject "advice pages"), but I don't think these are common enough to warrant an exception to an exception. Chaotic Enby (talk · contribs) 16:14, 25 July 2025 (UTC)[reply]
    Any spamming of existing pages is not covered by the speedy deletion policy, because the entire page is not being deleted. In most of the rare occasions of this happening, simply reverting would be sufficient; where it isn't, Revision deletion criterion 3 "Purely disruptive material" would apply. It is very likely that any spamming of new pages in project space would count as vandalism and be speedily deleteable under G3. It's possible that G5 and/or G11 would cover some instances too. What's left is going to be far too infrequent to justify a speedy deletion criterion. Thryduulf (talk) 16:51, 25 July 2025 (UTC)[reply]
    I meant spamming by creating new AI-generated "advice pages", sorry if I wasn't clear. But yeah, as we both said, this is way too infrequent, although I'm not sure G3 or G11 would automatically cover them.
    To clarify my exception above: "userspace" shouldn't include user sandboxes tagged with {{AfC submission}}. While reviewers should usually move them to draftspace, moving them just to tag them for speedy deletion would be redundant, and it could be helpful to clarify this in advance. Chaotic Enby (talk · contribs) 23:08, 26 July 2025 (UTC)[reply]
  • Option 1, and include Markdown. A human editor who's actually paying attention would immediately notice that Markdown did not render in the way that they expected, so failing to correct that would be a sign of not only LLM generation, but also of lacking any meaningful human review. And well done for actually creating criteria which are objective enough to reasonably be a CSD reason. Seraphimblade Talk to me 19:18, 21 July 2025 (UTC)[reply]
  • Option 1. Per numerous discussions here and elsewhere. JoelleJay (talk) 19:19, 21 July 2025 (UTC)[reply]
  • Option 1 per all above 𐩣𐩫𐩧𐩨 Abo Yemen (𓃵) 19:28, 21 July 2025 (UTC)[reply]
  • Option 1. Something has to be done here. 331dot (talk) 19:29, 21 July 2025 (UTC)[reply]
  • Option 1, with trepidation about overuse of the second and third bullets. I'm convinced this is needed. I'm convinced that this will benefit the encyclopedia. I'm also convinced that this will lead to a lot of out of process deletes referencing this criterion. Overall, I'm willing to take the tradeoff, but I'm not fully comfortable. Tazerdadog (talk) 19:34, 21 July 2025 (UTC)[reply]
  • Option 2, with Option 1 as backup. I'm not strongly opposed to unambiguous LLM content in draftspace but I am strongly opposed to a mechanism (ie Option 3) that leaves it in articlespace with no easy way to remove it. No actual objection to Option 1 though if that's favored. In May, I was part of an effort to clean up several pages with AI-hallucinated citations, and several of them had to go to AfD (see Wikipedia:Articles for deletion/Michael D. Martinez, Wikipedia:Articles for deletion/Apor Györgydeák, Wikipedia:Articles for deletion/The Paradox (American band), Wikipedia:Articles for deletion/Somerset Academy Canyons), where much volunteer time was wasted discussing patently unsuitable content. See also John Hurley (Florida judge), Draft:Randy Brooks (biologist), Steven Roper, which didn't qualify for deletion but required extensive editing, draftification, and/or TNT to remove hallucinated citations. An editor can take seconds to post hallucinated AI-generated content, and without a criterion for speedy deletion, removing such patently unsuitable content through our current processes can take hours if not days with multiple volunteers' time required to achieve consensus. We should aim to match the speed of disruption with the speed of response, considering this problem is only likely to grow. Dclemens1971 (talk) 19:39, 21 July 2025 (UTC)[reply]
  • Option 1 per all of the above; it's getting frankly out of hand at AfC and NPP. CoconutOctopus talk 20:00, 21 July 2025 (UTC)[reply]
  • Option 1. I'm satisfied, even after reading the comments of editors who prefer Option 3, that this will be a net positive, as well as being sorely and even urgently needed. To some degree, the objections strike me as logical conclusions based on how Wikipedia has always done things in the past. But this isn't about the past. AI presents an unprecedented challenge that will be with us for the foreseeable future, and we need to take approaches that match the challenge we face. Wikipedia has come a long way, and we are past the point where we need to prioritize the creation of new content so highly that it outranks the importance of us not publishing junk. This is a reasonable proposal, and it does not overreach. --Tryptofish (talk) 20:13, 21 July 2025 (UTC)[reply]
  • Option 1 per nom and the previous comments. Paprikaiser (talk) 20:31, 21 July 2025 (UTC)[reply]
  • Option 1. The easier the tools for removing LLM slop, the better. —  HELLKNOWZ  TALK 20:49, 21 July 2025 (UTC)[reply]
  • Option 1. I consider that the criteria are objective, uncontestable, frequent and nonredundant, and that permitting speedy deletion for items that meet them will reduce the asymmetry of swift LLM content generation versus time-consuming clean up / XFD deletion processes, and allow editors to focus on decent content generation and improvement rather than having to deal with the superficially plausible bollocks of unchecked LLM output. I spent about an hour removing hallucinated references and content from three articles yesterday before nominating them at AfD with a recommendation to WP:TNT. It would have taken me about a tenth of the time to assess and nominate them for speedy deletion under these criteria, which characterise the articles perfectly. Cheers, SunloungerFrog (talk) 22:43, 21 July 2025 (UTC)[reply]
  • Option 2. Not all namespaces. Not userspace, not projectspace. It is not clearly objective, and no cases have gone through MfD to establish evidence of rationale, need, or frequency. For draftspace, it would be better to tag the page, for the purpose of education of the author. SmokeyJoe (talk) 22:59, 21 July 2025 (UTC)[reply]
    For drafts, can some send a dozen or so examples to MfD to test that the proposal applies to drafts in draftspace? SmokeyJoe (talk) 23:14, 21 July 2025 (UTC)[reply]
    They're not MfDs, and there's surprisingly no category despite having a dedicated {{AfC submission}} decline parameter, but I expect some large portion of these will qualify. There's currently just short of 2000 of them. —Cryptic 23:24, 21 July 2025 (UTC)[reply]
    Thanks Cryptic. On the basis of what I see there, the proposal fails NEWCSD for drafts. Nearly none of them would be justified as MfD nominations, because they have not been rejected. Those that I do see rejected have not been resubmitted after rejection.
    If MfD would be premature, speedy deletion is way premature. Draft reviewers should learn the difference between Decline and Reject. "Decline" means the reviewer implies to the author that improvements can be made to fix the noted problems, and that they should then resubmit.
    Possibly, someone should suggest at WP:AfC that LLM-generated content should be a boxed Reject reason. Skipping that and coming here is irresponsibly premature.
    Draft Rejects should not be rapidly deleted, because the confused author needs time to be able to come back and read the reason for rejection. Deletion after six months is appropriate. Speedy deletion of Rejected LLM content in draftspace is objectionable and redundant with G13. It fails WP:NEWCSD. No solid argument has been attempted that Option 1 meets NEWCSD as a General criterion, and Option 1 supporters are not articulating why they support it for all namespaces. Only Option 2 is solid. Articles only. Yes, articles. SmokeyJoe (talk) 11:39, 22 July 2025 (UTC)[reply]
  • Option 1, with an added bullet point for Markdown. A year or two ago, I recall there being discussions about whether we needed to have more policy guidance on LLM use, and at the time, the consensus was that it was unnecessary because we weren't being flooded with AI slop at the time. As an AfC reviewer (and active participant in the June 2025 backlog drive), I can independently confirm that is no longer true. About 25% of my declines last month were for LLM output (and that's just the more clear-cut cases — there were plenty more where I left comments saying something to the effect of "this might be AI-generated"). With the current flood of AI slop at AfC, I strongly support creating a CSD criterion to reduce the time wasted by serious editors on this crap. There are other giveaways we can consider adding later, like excessive use of bold text and bulleted lists, but this proposal is a good starting point. pythoncoder (talk | contribs) 00:05, 22 July 2025 (UTC)[reply]
    Markdown is a good addition, especially since we now have an edit filter to catch it (although it should of course be manually reviewed to avoid false positives). Chaotic Enby (talk · contribs) 02:06, 22 July 2025 (UTC)[reply]
  • Option 1 AI-slop is a blight on Wikipedia, and we should not be spending more than the bare minimum amount of brain power to fight it. What is mindlessly created should be mindlessly deleted. Headbomb {t · c · p · b} 00:33, 22 July 2025 (UTC)[reply]
  • Option 1: This is a serious situation which we shouldn't treat lightly. The perfect should not be an enemy of the good. I'm fine with subtlety in policy and guideline, but if the entire pedia is discredited while we debate subtleties here, we've failed even while we were paying attention. BusterD (talk) 01:13, 22 July 2025 (UTC)[reply]
  • Option 1 - if the volunteers charged with reviewing new pages are saying that this is a big problem, then this is a big problem. Wikipedia is not a free web host, not a repository of indiscriminate information, not a testing ground for immature technologies, and not a place for people to dump their garbage and leave it for others to clean up. We ought to spend no more time disposing of this trash than the "editors" who "created" it. Ivanvector (Talk/Edits) 01:23, 22 July 2025 (UTC)[reply]
  • Support Option 1 – The ever-growing pile of slop at AfC is demoralising and makes it difficult for reviewers to find the good drafts that improve the overall quality of the encyclopaedia. ClaudineChionh (she/her · talk · email · global) 02:08, 22 July 2025 (UTC)[reply]
  • Option 1 Per others, and noting that an unreviewed LLM edit could be consider similar to a WP:MEATBOT edit, which is already not allowed in policy. Jumpytoo Talk 02:19, 22 July 2025 (UTC)[reply]
  • Option 1. Something like this is needed. LLMs output fluent-sounding text that is full of lies and fake references, and takes an incredible amount of experienced editor time to clean up. Better to just delete the obvious cases. Let's get something LLM-related into the CSDs, then we can refine it as necessary. –Novem Linguae (talk) 02:23, 22 July 2025 (UTC)[reply]
  • Option 1 I am persuaded that this criterion relies upon the most clear-cut indications available. Stepwise Continuous Dysfunction (talk) 02:36, 22 July 2025 (UTC)[reply]
  • Option 1 – a suitable weapon to cut through the slop. Cremastra (talk · contribs) 03:03, 22 July 2025 (UTC)[reply]
  • Option 1: Nearly 95% of draft creations I see while patrolling recent changes are blatantly AI-generated, with bad formatting, hallucinated sources, and promotion that would take a significant rewrite to fix. The rewrite would, in my opinion, have to be so significant that it is far easier to recreate the article from scratch. Additionally, people who use AI to generate their drafts are generally SPAs who only seek to promote their subject, not help build an encyclopedia. These people would keep resubmitting the same draft over and over again without making any changes to the article, or worse, putting it back through the LLM leading to even more hallucinated content. Having a CSD criterion for this would save significant reviewer time and effort towards useful articles that improve Wikipedia. Children Will Listen (🐄 talk, 🫘 contribs) 06:14, 22 July 2025 (UTC)[reply]
  • Option 1 supported, with option 2 as a distant second choice. Both on the grounds that most of it is slop, and that copyright status of LLM works is highly questionable. Stifle (talk) 08:31, 22 July 2025 (UTC)[reply]
  • Option 1 or 2: generating low-quality content with AI takes very little time and effort, so there should also be an easy way to remove it. But given the difficulties in identifying AI content, this should only apply to obvious cases. Phlsph7 (talk) 08:48, 22 July 2025 (UTC)[reply]
  • Option 1. My reaction when reading the name of the proposal was "at last!". This is very badly needed. I find the wording great and the criteria clear-cut as well. Choucas0 🐦‍⬛💬📋 09:13, 22 July 2025 (UTC)[reply]
  • Option 1 obviously as there is a core need for it. scope_creepTalk 09:26, 22 July 2025 (UTC)[reply]
  • Option 1 and good riddance. Someone SNOW-close this already. I am tired of people making bald-faced lies and having to be dragged across multiple consensus-building venues before we're able to put an end to their nonsense. Toadspike [Talk] 09:33, 22 July 2025 (UTC)[reply]
  • Option 1: This is very obvious. AFD is often flooded by those AI garbage articles. Warm Regards, Miminity (Talk?) (me contribs) 10:11, 22 July 2025 (UTC)[reply]
  • Option 1 Close and implement with a whoop. Best Alexandermcnabb (talk) 10:14, 22 July 2025 (UTC)[reply]
  • Option 1 but create an exception for userpages. It is very clear that quicker cleanup of AI/LLM slop is necessary, but I don't think the CSD should be expanded to userpages. If a user is writing about themselves using AI, we should not be deleting those. SunDawn Contact me! 11:18, 22 July 2025 (UTC)[reply]
    Weak support, provided that it is still eligible for userspace drafts submitted to AfC for review. ~/Bunnypranav:<ping> 13:51, 22 July 2025 (UTC)[reply]
  • Option 2: I understand that AfC reviewers are fed up with people submitting AI generated drafts to them but I think that starting with an AI draft and fixing it is a valid, even if inefficient approach to writing an article. Also, as SmokeyJoe pointed out, no MfDs of AI generated drafts were presented here so making a G criterion is premature. Warudo (talk) 11:47, 22 July 2025 (UTC)[reply]
  • Option 1 per the many comments above. -- LCU ActivelyDisinterested «@» °∆t° 13:45, 22 July 2025 (UTC)[reply]
  • Option 1: I do not think a single draft completely generated by AI can ever make it to mainspace. AI has become so widespread, that it is probably one of the top 3-5 reasons for declines at AfC. ~/Bunnypranav:<ping> 13:51, 22 July 2025 (UTC)[reply]
  • Option 1 - quality over quantity. Better than Option 2 as this will drastically reduce the workload on draft reviewers. While I can see how this might not fully pass the "objectivity" criterion, problematic AI-generated content is so common it's better to do something than nothing. 123957a (talk) 14:12, 22 July 2025 (UTC)[reply]
  • Option 1, including the Markdown bullet point other editors have proposed; I'd also accept Option 2 as a second choice. While I'm not active in areas that would give me firsthand experience here, users' comments in this RFC and candidates' statements in the ongoing admin elections make it clear that there's a real problem with AI-generated articles being created. This criterion would make it much easier to address that influx. Sure, the criterion may not be perfectly objective—but that's also the case for other longstanding criteria such as G11, A7, or U5. I trust admins to do their due diligence before firing off deletes. ModernDayTrilobite (talkcontribs) 14:24, 22 July 2025 (UTC)[reply]
  • Option 1. ~Darth StabroTalk • Contribs 15:37, 22 July 2025 (UTC)[reply]
  • Option 3. Detecting whether something is or is not AI-generated is still a difficult problem (cf. the Artificial intelligence content detection article), and many tools suffer from a high false positive or negative rate. I'd support this as a draftspace-only CSD at first, but I think that, since AI detection is so unreliable, it's better if we argue about it at AfD. Plus, Category:Articles containing suspected AI-generated texts isn't a terribly big category anyway. Duckmather (talk) 16:25, 22 July 2025 (UTC)[reply]
    To clarify, this CSD isn't about using AI detection tools, or even about removing all presumed AI-generated content, but only about the blatant unreviewed kind with phrases like "As a large language model". Chaotic Enby (talk · contribs) 17:22, 22 July 2025 (UTC)[reply]
    This is exactly why I have pointed out time and again that using the phrase "LLM" anywhere near a CSD criterion is harmful. It will lead only to misunderstandings, misapplications and accusations of bad faith. Thryduulf (talk) 18:50, 22 July 2025 (UTC)[reply]
  • Option 1. This is badly needed, since modern AI allows people to flood Wikipedia with things very quickly; without a way to respond to them just as quickly, we risk being overwhelmed. There's already someone accused of doing this at ANI right now. The extreme difficulty we've had dealing with the comparable Lugnuts spam also shows how tricky it can be to reverse such actions - the WP:FAIT nature of them and the inherent disparity between the ease of creating and deleting articles is dangerous. As far as concerns about detecting AI go - administrators are experienced editors and will not delete things blindly; often, AI-generated stuff is obvious at a glance. In other cases it may be necessary to have a discussion, but with multiple articles from the same editor to examine, it would not be difficult to determine that it was AI-generated, and once it is determined that they're using AI blindly, it is necessary for us to have a CSD if we want to remove them quickly. --Aquillion (talk) 16:37, 22 July 2025 (UTC)[reply]
  • Option 2 with the restriction that the article has no usable sources (i.e. there are no sources, or all of the ones present are either hallucinated or unreliable), otherwise one could just stubify the article using those usable sources (and/or PROD or AfD it if it is not notable). Deleting drafts (especially with a giant red CSD notice) is too BITEy when you could just decline it. (I would also support including markdown in the criterion.) OutsideNormality (talk) 16:39, 22 July 2025 (UTC)[reply]
  • Option 1 per Ivanvector "Wikipedia is not a place for people to dump their garbage and leave it for others to clean up." It is a waste of editor time when articles are created using LLMs, then other editors have to stubify them and do a complete rewrite (see edit summary) as well as remove copyvios introduced by the LLM (see edit summary). Also tired of seeing "utm_source=chatgpt.com" as part of most or all references cited when I review Afc articles. Option 2 as a back-up.--FeralOink (talk) 19:05, 22 July 2025 (UTC)[reply]
  • Option 4 - fake references CSD Option 3 - I'm gonna be in the minority on this one! The proposed second criterion ("Implausible non-existent references") and third criterion ("Nonsensical citations") are actually the same criterion: fake references. Which means there are really only two criteria: (1) LLM "tells" ("Here is your Wikipedia article on..." or "Up to my last training update ...") and (2) fake references. If an article meets the second criterion (fake references), it's already CSD-able under existing CSD, like WP:G3 (hoaxes). If an article meets both criteria--LLM tells and fake references--it's still G3able, and we don't need a new CSD for that. That leaves articles that meet just the first criterion, LLM tells. Imagine an article that has real references, is not promotional, is about a notable subject, and otherwise meets all policies, but has an LLM tell in it (it says "Here is your Wikipedia article on ..." at the top). Are we seriously going to delete the entire article instead of just removing the inappropriate line? I'm not in favor of that. When LLMs make poor content, our existing content policies and CSD criteria are adequate to handle it. When LLMs make good content, we should keep it. Broader point: by fighting against LLMs, instead of working with LLMs and teaching people how to properly use them to write proper articles, we are pissing into the wind. This is not going to go away, folks. We need to embrace and work with the new technology, because fighting against it is futile. Levivich (talk) 19:17, 22 July 2025 (UTC)[reply]
    The issue is that often when these things are tagged as G3, admins reject them because G3 is for blatant hoaxes and they don't feel the LLM made a blatant enough hoax. silviaASH (inquire within) 19:24, 22 July 2025 (UTC)[reply]
    I think the essence of this RFC is we're asking the community to affirm that we as a whole disagree with that decline rationale. -- LWG talk 17:21, 31 July 2025 (UTC)[reply]
    Not in the slightest. The RFC is asking the community whether material that meets the criteria set out in the proposal should be speedily deletable going forwards. It is not asking whether it should previously have been deleted for being a blatant hoax (if it was, the RFC would have explicitly asked whether material that an editor suspects might be LLM-generated should be covered by the existing wording of G3).
    While there is some overlap between blatant hoaxes and material that appears to be LLM-generated, that overlap is only a small proportion of both sets. Even simply being supported by fake references does not make something a hoax; for example, it might be completely true and/or might have been submitted in impeccably good faith. Thryduulf (talk) 17:41, 31 July 2025 (UTC)[reply]
    That's fair, I figured you'd be along to say something like that but I was at lunch and didn't have time to be more verbose. You are correct that this RFC is different in that unreviewed LLM content, unlike deliberate hoaxes, lacks the intent to misinform. here is a more verbose understanding of what it appears to me that we are being asked to affirm by this RFC:
  1. Raw LLM content should be treated as inherently unverified/unreliable until/unless it is reviewed in its entirety by a human (including checking any cited sources to ensure they actually support the content they are being used to support).
  2. Certain specific tells are so egregious that their presence serves as a sufficient heuristic for us to assume that no reasonable human review has taken place for a piece of content.
  3. Most importantly, in the presence of such egregious issues, reviewing editors are not obligated to perform an in-depth review of the bad content before removing it.
I view this situation in general as analogous to our response to the hypothetical situation where people were wholesale copy-pasting content from some other open-licensed user-generated reference site to enwiki without reviewing it. The only reason we are talking about LLM content specifically rather than copy-pasted unreviewed content generally is 1) the general case is hypothetical, but the LLM situation is actually happening, and 2) LLM content is superficially harder to identify and (in the view of many commenters here) requires disproportionate effort to clean up relative to the effort required to create it. -- LWG talk 20:17, 31 July 2025 (UTC)[reply]
  • G3 I think is tricky because the criterion is "intended to misinform" and is lumped with vandalism. A new editor creating a draft or article using AI where the topic is plausible but, because they used AI, the references are fake is not vandalism or "intending" to misinform. They just don't know better. Somewhat similar with User sandboxes. Is someone playing with AI in their sandbox "intending" to misinform? I don't think so. S0091 (talk) 19:27, 22 July 2025 (UTC)[reply]
    Hmm, interesting points about G3, thank you. I would be in favor of a "fake references" CSD, whether that means modifying the criteria of G3 so it's used more often for fake references, or creating a new CSD altogether. So I guess that means I'd be OK with a CSD that is just for the second/third criteria in this proposal (fake references). Levivich (talk) 19:31, 22 July 2025 (UTC)[reply]
    I agree, and for articles only, not drafts or user space, because that is where folks learn. Also, the issue with CSD is non-admins cannot scrutinize the deleted content to gauge if the CSD criteria might be too permissive toward deletion. As written, this would allow CSD for any reasonable hint AI was used. S0091 (talk) 20:13, 22 July 2025 (UTC)[reply]
    Good idea. I would be much more comfortable with supporting that CSD than this one. (This is not to say that I oppose this proposal.) Janhrach (talk) 20:37, 22 July 2025 (UTC)[reply]
    I've updated my !vote per the above discussion to support a CSD for fake references. Levivich (talk) 21:15, 22 July 2025 (UTC)[reply]
  • Option 1 I recall the speedy deletion of TSW Playhouse per G3. The subject of the article was a TV show which supposedly aired in the 1980s, and at first glance, it looked like a typical article in the subject area. But all the references were fake and seemed to be LLM-fabricated, and the people and companies named in the article also did not appear to exist. Levivich is correct, though, that the nom's writeup of the criterion includes redundant points on nonexistent citations.
    We should not focus solely on fake references, since LLM-generated article content is often unencyclopedic in tone and also frequently contains identifiable contradictions or falsehoods. I've seen LLM-generated article content that had no references, such as User:Mrpoopbenji/sandbox (which was a userspace draft about US military ranks and was speedy deleted per G3), and the same user also added LLM-generated content to Iron Lung (film) (content about the iron lung, the medical device, rather than the film).
    In addition, AI-generated files hosted locally should be included in the criterion, unless the images themselves or generative AI are the subject of discussion, per precedents at Wikipedia:Large_language_models#Policy_discussions. These are easier to identify — characteristics for images and videos include unintelligible or misspelled text, various kinds of wonky physics and geometry, errors in human anatomy, and that characteristic art style with cubism-like shading. –LaundryPizza03 (d) 20:21, 22 July 2025 (UTC)[reply]
  • Option 1 Something that takes no time to create should take no time to delete. That being said, I came across an article at AfD recently that looked like it had been formatted by an LLM, but the prose was over a decade old, so we should at least require some sort of verification on the part of the nominator. SportingFlyer T·C 22:08, 22 July 2025 (UTC)[reply]
  • Option 1. I think previous !voters have explained enough. Any way that it's easier to delete AI nonsense, the better. --JackFromWisconsin (talk | contribs) 22:35, 22 July 2025 (UTC)[reply]
  • Option 1+Markdown. I'm confused by the editors above who think accusing someone of using AI is an accusation of bad faith—it shouldn't be taken as one. About the "but drafts haven't gone through MFD!" objection: one, NEWCSD is a descriptive banner about how most new CSDs are proposed; it is not a policy or guideline. But two, nowhere does NEWCSD say "there must be a history of appropriate discussions". It just says the community has to agree that such pages should be deleted, and a SNOWy discussion publicized at TM:CENT and VPP obviously means that has been satisfied. Finally, markdown is another objective tell, so that should also qualify. HouseBlaster (talk • he/they) 00:28, 23 July 2025 (UTC)[reply]
    NEWCSD is a rule of custom, not Policy, true, but it is a proven useful guide for making informed decisions.
    The desire for MfD evidence that the community supports frequent deletion of LLM drafts seems pretty reasonable on its face, doesn’t it? SmokeyJoe (talk) 07:52, 23 July 2025 (UTC)[reply]
    Community support can be expressed in other ways, such as a widely publicized and widely supported RfC. Seraphimblade Talk to me 07:56, 23 July 2025 (UTC)[reply]
    NEWCSD is not a rule, much less a "rule of custom". Asking for MFD discussions is asking for some of our most precious resource—volunteer time. So it seems like a very unreasonable request in the face of this SNOWy discussion. HouseBlaster (talk • he/they) 22:07, 23 July 2025 (UTC)[reply]
  • Option 1 Per above. Let'srun (talk) 02:23, 23 July 2025 (UTC)[reply]
  • Option 1 per the arguments of Ivanvector and Novem Linguae. We should reduce the amount of time taken to deal with LLM nonsense. TarnishedPathtalk 04:20, 23 July 2025 (UTC)[reply]
  • Option 1 in principle but in practice it has gotten harder to distinguish AI slop from poorly written text from a human. It is even hard to judge unequivocal cases. I think PROD covers our ground here, but there probably should be a change that if a PROD is disputed it must immediately be taken to XfD rather than cancelled forever. Aasim (話すはなす) 08:11, 23 July 2025 (UTC)[reply]
  • Option 1 in principle, as a precaution, at least until LLMs and AI use become more reliable and don't keep producing sources and content which isn't verifiable. LLMs must be used responsibly.♦ Dr. Blofeld 10:47, 23 July 2025 (UTC)[reply]
  • Option 1 per BusterD and Novem Linguae. Zzz plant (talk) 11:57, 23 July 2025 (UTC)[reply]
  • Option 1 plus Markdown. Limiting this CSD to objective tells of unreviewed, unedited LLM usage (including Markdown, per HouseBlaster) makes it a no-brainer to support. WindTempos they (talkcontribs) 14:46, 23 July 2025 (UTC)[reply]
  • Option 1. Personally, I don't think anything with fake sources and other issues that AI might create would take any less effort to fix up than to rewrite from scratch. Weeklyd3 (talk) 17:21, 23 July 2025 (UTC)[reply]
  • Option 1 - It's time we made sure we are as well equipped as possible to address the oncoming tidal wave of AI-generated trash. However, I would also extend this to any automatically-generated article created without permission for mass-creation. FOARP (talk) 18:03, 23 July 2025 (UTC)[reply]
  • Option 2 or 3. I do agree with @FOARP that we should also have a CSD for mass created articles without permission, or speedy draftification. However, if an LLM is used to assist in draftwriting, then I honestly don't see an issue with it. As long as it is not in article space, policyspace, or "important space", I'm good. I am concerned that legitimate articles could too easily be CSD'd too. InvadingInvader (userpage, talk) 20:58, 23 July 2025 (UTC)[reply]
  • Option 1 Currently I am tagging articles as hoaxes that have fake references, but it is better to have a more customised criterion, as the pages for this proposal are not created in bad faith, just laziness. Graeme Bartlett (talk) 22:18, 23 July 2025 (UTC)[reply]
  • Option 1 because User:Rusalkii's comment in the discussion section makes it clear to me that this is needed for draftspace, not just article space. —David Eppstein (talk) 22:43, 23 July 2025 (UTC)[reply]
  • Option 1, with 2 a good second choice... We end up arguing with LLM all the time at AfD, and frankly it's a waste of our time. The work of reviewing articles still needs to get done, without having a wall of text thrown at us. Option 2 seems to limit what can be done. Oaktree b (talk)
Anecdotal evidence, but I've dealt with two AfD cases in as many weeks where they either used LLM to draft the article or to write their responses to the AfD, and both individuals got blocked for bludgeoning the discussion... In my limited experience those using LLM seem to have an attitude that they are somehow superior to the rank and file editors, and anything we can do to help prevent this from spreading would be appreciated. Oaktree b (talk) 13:47, 30 July 2025 (UTC)[reply]
  • Option 2 to address the actual problem which is live articles, but no real objections to Option 1. Gnomingstuff (talk) 06:25, 24 July 2025 (UTC)[reply]
  • Option 1. By listing only the most obvious indicators that are most strongly associated with LLM usage, this proposal is cautious and conservative, which is in line with what is expected from a speedy deletion criterion. I would make the text "Large Language Models" lowercase, as it is not a proper noun. — Newslinger talk 10:22, 24 July 2025 (UTC)[reply]
  • Option 3 This seems like a handful for the deleting/declining admin to investigate. Perhaps a speedy deletion for articles with numerous/all references being fake would be more useful, purely on a cost/benefit basis and also because while fake citations are a problem, "LLM-generated" is objectively speaking not. Jo-Jo Eumerus (talk) 10:26, 24 July 2025 (UTC)[reply]
    LLM-generated content is not inherently an issue. LLM-generated content without any human review whatsoever certainly is, because these tools have a tendency to confidently spit out utter nonsense. This CSD doesn't call for deep investigative work on behalf of the nominator or deleting/declining admin, just the identification of fake citations or unambiguous, obvious LLM tells. WindTempos they (talkcontribs) 08:18, 25 July 2025 (UTC)[reply]
  • Option 1: We've simply got to do this. To insist on our full, volunteer-time-intensive deletion process to remove machine-generated text would display the most callous disregard for our NPP volunteers' time and skills, and would in my view lead, rapidly and inevitably, to a NPP crisis.—S Marshall T/C 16:20, 24 July 2025 (UTC)[reply]
  • Option 1 for G15 for articles and drafts and userspace. See my comments below. Robert McClenon (talk) 01:02, 25 July 2025 (UTC)[reply]
  • Option 1, support the addition of G15 covering mainspace, drafts, and userspace, per Rusalkii's arguments in discussion. ♠PMC(talk) 02:03, 25 July 2025 (UTC)[reply]
  • Option 1, irrespective of any further refinements. The one caveat for me would be with criterion number 3 (nonsensical refs) because this is the one that actually may happen by accident - pasting the wrong DOI often falls under butterfingers rather than LLM use. But at this point, I am entirely fine with going a little overboard in aggressively fighting back against LLM text. We need to stem this flood now before it permeates the fabric of the encyclopedia. --Elmidae (talk · contribs) 10:10, 25 July 2025 (UTC)[reply]
  • Oppose option 1. Over the years I've had various "tracking" pages in my userspace, e.g. the subpages of User:Nyttend/Ohio NRHP used to contain lists of Ohio historic sites without articles. Let's say I want to create another such list, and I use an LLM to compile it and paste the results in my userspace. Who cares? It's an internal project page, not an encyclopedia article and not planned to become one. Ditto if a wikiproject wants to do something and someone uses an LLM to generate an internal project page. I'm uncomfortable with 1-or-2 because it's not an unambiguous criterion, but uncomfortable with 3 because I understand there are heaps of these pages that need to be deleted. But if we enact such a criterion, it really needs to be limited to mainspace and draftspace, plus userspace pages tagged for the AFC process. Nyttend (talk) 12:34, 25 July 2025 (UTC)[reply]
  • Option 1 lite (all namespaces except userspace and Wikipedia space). -- Sohom (talk) 15:31, 25 July 2025 (UTC)[reply]
  • Option 1 - let's not waste volunteer time on coping with this any more than we have to. Andrew Gray (talk) 22:41, 25 July 2025 (UTC)[reply]
  • Option 1 works so long as userspace is excluded. JavaHurricane 15:38, 26 July 2025 (UTC)[reply]
    Some users work on and submit drafts to afc from userspace and not draftspace, should those pages be excluded? It's the same content causing the same problems, just in a different namespace. fifteen thousand two hundred twenty four (talk) 15:41, 26 July 2025 (UTC)[reply]
    An AfC-tagged page in userspace can be dealt with as a draft, which is fine. But in general I see userspace as being more of a personal sandbox where a wider latitude is provided to editors - which is why I'd rather see userspace excluded. JavaHurricane 09:33, 27 July 2025 (UTC)[reply]
  • Option 3 because "detection" of LLM-generated content is unreliable, and therefore deletion of unreliably assessed content would be controversial. Some "detectors" claim that content I personally wrote is LLM-generated. However, I'd favor a sticky prod or adding it to the ordinary AFD reasons. Lousy LLM content IMO should be deleted; I just don't want the process to be a single passing admin's personal guess about whether it's LLM-generated, without any opportunity for discussion. WhatamIdoing (talk) 16:45, 26 July 2025 (UTC)[reply]
    Have you even read the proposed criteria? No one is advocating using, or would ever need to use, automated LLM detectors to recognize these objective tells. JoelleJay (talk) 20:17, 26 July 2025 (UTC)[reply]
    Yes, this. We're only using clear, objective evidence of LLM use, not the flaky AI "detectors". Cremastra (talk · contribs) 20:46, 26 July 2025 (UTC)[reply]
    I don't agree that fat-fingering the ISBN, such that it "objectively" fails its checksum, is "clear, objective evidence of LLM use". WhatamIdoing (talk) 18:47, 27 July 2025 (UTC)[reply]
    Since humans can make typos and links may suffer from link rot, a single example should not be considered definitive. Editors should use additional methods to verify whether a reference truly does not exist. JoelleJay (talk) 21:29, 27 July 2025 (UTC)[reply]
  • Option 1, piling on. LLM boils my urine. In particular, hallucinations can take a lot of work to detect, similar to a well-constructed WP:HOAX, and are just as damaging. Narky Blert (talk) 16:12, 27 July 2025 (UTC)[reply]
  • Option 1, but with an exception for userspace, as editors should be able to have more freedom in their userspaces. However, this can apply to drafts created in userspace tagged as such. Other than that, I think this would be very useful. element 20:48, 27 July 2025 (UTC)[reply]
  • Support Option 1 and add Markdown; this is much faster than PROD (without this CSD, it takes much longer to delete slop than to create it), more specific than G3 and G11 (which I have seen used to delete slop before), and more convenient for cleaning up the growing backlog of AI-generated articles. I also think that there should be no exception for userspace; if there was such an exception, it would exacerbate the problems as editors would just create slop in userspace instead of draft space. Also add Markdown formatting as a criterion because there's already an edit filter for it. SuperPianoMan9167 (talk) 21:37, 27 July 2025 (UTC)[reply]
    Additional comment: As discussed on ANI, User:Abhichartt created ten articles in about an hour and a half, but each of those articles will take seven days to delete via PROD since they don't fully qualify for any existing CSD. LLM-generated content should be able to be removed as quickly as it can be created. SuperPianoMan9167 (talk) 21:53, 27 July 2025 (UTC)[reply]
    I think you are allowed to draftify most of their articles. Children Will Listen (🐄 talk, 🫘 contribs) 04:17, 28 July 2025 (UTC)[reply]
  • Option 1 - AI slop has no place on Wikipedia. I'm sympathetic to editors with impairments who need it to assist in their editing, but if it comes straight from the AI to Wikipedia, it needs to go straight from Wikipedia to the round file. - The Bushranger One ping only 22:13, 27 July 2025 (UTC)[reply]
  • Support Option 1. We need a quick and easy way to get this AI-generated crap off the site. An administrator patrolling the queues can ultimately determine whether the article merits immediate deletion. Bgsu98 (Talk) 22:22, 27 July 2025 (UTC)[reply]
  • Option 1, given AI's issues with hallucinations and its use for promo. Lavalizard101 (talk) 22:48, 27 July 2025 (UTC)[reply]
  • Option 1 I'd want to go even stronger against LLM generally given that it's become a massive timesink for editors and administrators in everywhere from junk articles to long ANI fights through massive walls of garbage AI text. But this would at least put some of the AI slop properly into the waste bin. CoffeeCrumbs (talk) 04:11, 28 July 2025 (UTC)[reply]
  • Option 1; the amount of clearly AI-generated drafts showing up at AfC is going to become insupportable soon. A way to swiftly remove them is necessary. Meadowlark (talk) 04:37, 28 July 2025 (UTC)[reply]
  • Option 1: AI generated articles/drafts often don't follow policies/guidelines and have little to no thought put into them. Rather than wasting human effort on reviewing and attempting to cleanup such articles with nonsensical citations and constant WP:PUFFERY, just flat out speedy delete them. I see no benefit in the former; if the article really MUST be created, the creator can simply not use AI, or make an article request if they lack time. jolielover♥talk 06:25, 28 July 2025 (UTC)[reply]
  • Option 1, to capture the mainspace, draft and userspace AfC submission onboarding routes. While recognising the discomfort about the element of subjectivity in identifying LLM cases, I feel that the review by both a nominator and an admin acting on the CSD should be sufficient to capture mis-nominations. The LLM problem is largely a matter of editor conduct, which might be assisted by enhancing the Help:Your first article advice to cover use of LLMs and this CSD consequence, and the "Before creating an article" advice above the page creation area should link to that advice. AllyD (talk) 07:59, 28 July 2025 (UTC)[reply]
  • Option 1. Seeing so much unfiltered LLM stuff is annoying, and having no quick method of getting rid of it sucks. 45dogs (they/them) (talk page) 15:54, 28 July 2025 (UTC)[reply]
  • Option 1, with no prejudice against Option 2 for Draftspace: I originally went into this thread assuming I would go with Option 1. However, Zanahary makes a very good point and one that I had not considered when I first came across this discussion. How do we deal with possible false positives, especially when the evidentiary proof is so subjective? I know that we have tools at our disposal to help us cut through some of the slop (the em-dashes and other tells) just like we have tools to help us ascertain whether an article is a G12 copyright violation (Earwig, etc.). But even when people use these tools with good intentions, the risk of the false positive still exists (especially in the G12 case, where WP mirrors and Earwig false positives often show up). That being said, the ability of LLMs to conjure up hallucinated citations and the fact that it has been trained on our language in some cases makes it more insidious. Yes, it could be argued that an article with imaginary sources using LLM content could just be speedy'd under db-hoax or db-vandalism, but I think the real key is to target those who are exploiting LLMs to spam and advertise on WP. As several editors here have mentioned, we come across a lot of this in Draftspace, so I would not be opposed to a restriction that the CSD is only used in draftspace to clear unusable draft articles. Does that punish new users who may or may not have a chance to fix their changes in time or defend themselves against charges of LLM use? Perhaps. I don't know if there is a good answer here, but I do know that something should be done about it. Bkissin (talk) 01:33, 29 July 2025 (UTC)[reply]
  • Option 1. The only rational way to deal with what is rapidly becoming a monstrous time sink. I see no need to distinguish between article space and other spaces in regard to how we deal with this issue - such material doesn't belong anywhere. AndyTheGrump (talk) 03:12, 30 July 2025 (UTC)[reply]
  • Option 1 For the reasons mentioned above, plus reinforced by the concept of commensurate effort. It should not take a large amount of scarce volunteer time to undo something that somebody created in 30 seconds. North8000 (talk) 13:23, 30 July 2025 (UTC)[reply]
  • Option 1 is fine. Those fearing a lack of objectivity/uncontestability of the new criterion should have a look at "A7. No indication of importance", which in my opinion is far easier to misinterpret than this proposed one. ~ ToBeFree (talk) 16:29, 30 July 2025 (UTC)[reply]
  • Support Option 1 or 2 I'm not as familiar with the situation in AFC and Draft space, but we need a tool to rapidly and conveniently remove obvious AI garbage from mainspace without consuming disproportionate amounts of experienced editor-hours chasing AI-hallucinated sources that either don't exist or require more effort to verify than it would take to rewrite the article from scratch. The specific criteria mentioned above are a strong enough heuristic for further review being a waste of time that I think the small chance of an occasional false positive is acceptable. -- LWG talk 16:48, 30 July 2025 (UTC)[reply]
  • Option 1, per all above. We need all the tools available to stop the AI generated content. Alexcalamaro (talk) 21:39, 30 July 2025 (UTC)[reply]
  • Option 1 gets my full support. The wave of slop has certainly created a need for more aggressive tools in this area. David Palmer//cloventt (talk) 22:06, 30 July 2025 (UTC)[reply]
  • Option 2 LLM-generated content is simply unacceptable in mainspace. I'm willing to give some leeway in draftspace as a work-in-progress. Curbon7 (talk) 22:06, 30 July 2025 (UTC)[reply]
  • Option 1 including markdown per Kovcszaln6, Seraphimblade and others above. 🌸⁠wasianpower⁠🌸 (talk • contribs) 15:33, 31 July 2025 (UTC)[reply]
  • Option 1 but keep an eye on if someone finds a valid use outside the article space.©Geni (talk) 16:36, 31 July 2025 (UTC)[reply]
  • Option 1, not adding Markdown. The virtue of the current list is that they are unacceptable and LLM-generated on their face. Enough humans use Markdown that it's at the next 'tier' of tell. ~ L 🌸 (talk) 18:30, 31 July 2025 (UTC)[reply]
  • Option 1. This should've been done earlier. --FaviFake (talk) 20:23, 31 July 2025 (UTC)[reply]
  • Option 1. The only thing maintaining the standards of Wikipedia is volunteer effort from editors who are dedicated to upholding those standards. LLM-generated articles disproportionately consume those editors' resources by producing masses of low-quality, unverifiable content.
  • Option 1 Agree with the above sentiment that LLM generated content creates an unnecessary burden for other editors to clean up. If people want LLM content they can go to the source directly. Having to fix LLM slop demotivates volunteer editors to contribute to the project. ~ BlueTurtles | talk

Discussion (LLM)

[edit]
  • Pinging idea lab participants: User:Thryduulf, User:Jumpytoo, User:SunloungerFrog, User:Chaotic Enby, User:Chipmunkdavis, User:LWG, User:Newslinger, User:fifteen thousand two hundred twenty four Ca talk to me! 17:01, 21 July 2025 (UTC)[reply]
  • The body of the proposed criterion is objective. The title is not. "Human review" can mean a lot of things, including "Yes, I saw that the references didn't match up, but I chose not to fix them right away and will get back to them [sometime next decade]." This is more critical than it might seem; we don't want to have another situation like U5 where the overwhelming majority of deletions don't qualify by the text of the criterion while arguably sort of meeting the subjective one-line summary that is all that many taggers seem to read. Maybe "LLM-generated content with insufficient human improvement", or even "Blatantly LLM-generated" though that's probably going too far in the other direction. —Cryptic 17:58, 21 July 2025 (UTC)[reply]
    How about "with no evidence of human review" or "no apparent human review"? Some phrasing or another that places the focus on what is evident to readers, and not what we assume an editor did on the other side of the screen. silviaASH (inquire within) 18:01, 21 July 2025 (UTC)[reply]
    How about with "obvious AI hallucinations indicating no human review"? From what I gather, it's the fake sources and content that is the larger issue. S0091 (talk) 18:16, 21 July 2025 (UTC)[reply]
    "Unambiguous LLM-generated content with specific tells"? —Cryptic 18:43, 21 July 2025 (UTC)[reply]
    "Specific tells" might be interpreted as including stylistic issues like emdashes or recurrent words. "Blatantly LLM-generated with no apparent human review" should do the job. Chaotic Enby (talk · contribs) 19:05, 21 July 2025 (UTC)[reply]
    Why should I care if an article was generated by AI if it otherwise meets all the PAGs? I am an AfC reviewer so I get it and it is a drain. I do not want to waste time reviewing a draft that was obviously created by AI because often the sources and content are shit, but we have a decline for that, so why delete the draft if the editor might be able to fix the issues? New editors do not know the issues with AI so we should inform them, which is what the decline does. For articles, if they are obvious AI then move them to draft with a similar explanation, and only if they move it back without addressing the major issues, such as source-to-text integrity, should a CSD apply. S0091 (talk) 19:21, 21 July 2025 (UTC)[reply]
    Why should I care if an article was generated by AI if it otherwise meets all the PAGs? That is a valid point, but not the scope of the criterion, which concerns articles where issues are blatant enough that they could not have escaped a reasonable human review.
    For articles, if they are obvious AI then move them to draft with a similar explanation. The issue is that this ends up clogging AfC even more, especially when there is little subsequent effort by authors to fix the AI-generated content. Chaotic Enby (talk · contribs) 19:48, 21 July 2025 (UTC)[reply]
    "Clearly LLM-generated" perhaps? Tazerdadog (talk) 19:27, 21 July 2025 (UTC)[reply]
    I could go for that, with or without "...with no apparent human review". Stifle (talk) 08:43, 22 July 2025 (UTC)[reply]
    No. Define “clearly”, objectively. SmokeyJoe (talk) 11:27, 22 July 2025 (UTC)[reply]
    I've repeatedly mocked the A7 wording that appears in the admin dropdown box when deleting pages ("[[WP:CSD#A7|A7]]: No credible indication of importance (individuals, animals, organizations, web content, events)") as being useless to the articles' authors in the deletion log, never seen by csd taggers, and the wrong place to educate admins, but maybe explicitly enumerating the three subcriteria like that here is merited. —Cryptic 18:58, 22 July 2025 (UTC)[reply]
  • Extensive use of red-linked non-existent categories? Bobby Cohn 🍁 (talk) 18:22, 21 July 2025 (UTC)[reply]
    Eh, you also see those when some editors translate from other Wikipedias and directly translate the categories too. And they leave them either because they don't notice they're redlinked (mobile view hides categories), they don't know what redlinked categories mean (newbies gonna be newbies), or because they're used to projects where redlinked categories are acceptable, like Commons, and don't know that enWiki doesn't like them. Not an unambiguous LLM tell. GreenLipstickLesbian💌🦋 18:25, 21 July 2025 (UTC)[reply]
    That's fair. No need to explicitly mention it then and should rather fall under the While these indicators can be used in conjunction with more clear-cut indicators listed above, they should not, on their own, serve as the sole basis for applying this criterion. Bobby Cohn 🍁 (talk) 18:38, 21 July 2025 (UTC)[reply]
  • Running this through the standard new-CSD-Criterion checks:
    On Objectivity, the three bullet points I think are reasonably objective. The AI tell phrases are hard to fully lay out, but most good faith editors will know it when they see it. The dead link and hallucinated reference ones may raise issues with uncontestability rather than objectivity, though.
    On Uncontestability - if we can show content was LLM generated, then we should kill it with fire and prejudice. I'm even OK with a decent level of false positives if we can trade that for a higher fraction of LLM content quickly removed. I think that the first bullet (AI keyphrases) is good. I have concerns about the second and third being confused with linkrot and typos frequently.
    On Frequency, I'm going to punt and ask for help and defer to other editors on this one - how often does this come up? Is this clogging XfDs? Are people IAR deleting these or shoehorning other criteria that don't fit? Or is this something that comes up a couple times per month and XfD can easily handle as part of its normal load?
    On Nonredundancy, I think this is reasonably clear of the other criteria. I can't think of any criterion that could be easily modified or expanded to include these, although I'm open to any creative ideas.
    I see no reason this should be limited to articles, and I would especially like this to apply in draft-space where we can nuke these instead of waiting for LLM hallucinations to be papered over. Finally, I will advise caution to make sure all versions of the page are speedyable - if someone LLM dumps into an existing article, roll it back instead of deleting. This already applies to all criteria, but is going to be important for this one. Tazerdadog (talk) 19:21, 21 July 2025 (UTC)[reply]
    I can tell you I see an LLM unblock request every day, often more than one. (I realize this won't affect that) I imagine it's pretty much the same with everything. 331dot (talk) 19:32, 21 July 2025 (UTC)[reply]
    And revdel because of possible copyright issues in the LLM, or just rollback? SarekOfVulcan (talk) 19:33, 21 July 2025 (UTC)[reply]
    At this time, the US Copyright Office still refuses to copyright LLM content, and most major LLMs are programmed (albeit imperfectly) to refuse to generate copyrighted text. Unless it's either a blatant copy of a known existing copyrighted work, or fulfills the RD criteria for other reasons, there should be no need to revdel LLM revisions. silviaASH (inquire within) 19:45, 21 July 2025 (UTC)[reply]
    And if it were a blatant copyvio, then you should be looking at {{db-copyvio}} instead of claiming that an LLM might have been involved. WhatamIdoing (talk) 16:48, 26 July 2025 (UTC)[reply]
    I can tell you that from looking at the AfC logs in my userspace alone, I'm at 50 this year to date and had a total of just 3 last year combined. I'm seeing similar or larger numbers from other reviewers whose logs I've just now checked. There are more prolific reviewers who I think do an outstanding job who are in the 100s alone. Bobby Cohn 🍁 (talk) 20:06, 21 July 2025 (UTC)[reply]
    "Check for previous appropriate revisions" has always been part of any CSD criterion. It's not uncommon that I see someone has turned an article about Example Corporation into "Example Corp. is the greatest, buy our crap today!". There was previously a reasonably neutral article about them, but someone just saw the current revision and G11 tagged it. In that case, you decline the speedy and revert to the last good revision (and if copyvio, revision delete as necessary). The same would apply to this one. Seraphimblade Talk to me 22:08, 21 July 2025 (UTC)[reply]
    Is it clogging up XfD? I don’t recall ever seeing LLM mentioned at MfD. SmokeyJoe (talk) 11:27, 22 July 2025 (UTC)[reply]
  • One thing I think might be helpful to include, if we go for a general criterion, is to specifically exclude AI content kept in userspace for testing, as I (for one) often need something real from an AI to test against when developing scripts, and I'm sure others do too. Sophisticatedevening(talk) 23:41, 21 July 2025 (UTC)[reply]
    Full agree – I hope it can be considered common sense that intentional examples are okay given the "reasonable human review" part, but it could be great to have it explicitly spelled out. Chaotic Enby (talk · contribs) 23:43, 21 July 2025 (UTC)[reply]
    Any LLM usage that is human reviewed would be allowed; so declaring LLM was used in your edit & that you reviewed it would exempt you from this CSD. Jumpytoo Talk 02:00, 22 July 2025 (UTC)[reply]
    If it was reviewed correctly it would not have the problems that are the criteria for selecting these for deletion. So an edit summary should not be required, but for proving good faith, authors should not claim LLM output as their own work. Graeme Bartlett (talk) 10:10, 22 July 2025 (UTC)[reply]
    This thread demonstrates that, for userspace, the proposal fails WP:NEWCSD requirements #1 (Objective) and #2 (Uncontestable). And with no cases having ever been sent to MfD, there is no evidence-based case for it to apply to userspace.
    G13 and U5 were both implemented only after demonstration of frequent SNOW deletions at MfD.
    People being frustrated with LLM new pages is an emotional basis for decision making. SmokeyJoe (talk) 11:25, 22 July 2025 (UTC)[reply]
    The problem with an A-only CSD is that a major area where LLM content is causing issues is AfC, where submissions are in draft or userspace. An AfC rejection is almost equivalent to deletion, in that a rejected draft is prevented from entering mainspace by consensus of several editors due to running seriously afoul of policy; we have uncountable examples of this, which I see as evidence enough that this should be a General criterion. Toadspike [Talk] 12:36, 22 July 2025 (UTC)[reply]
    I don't know if there's a more elegant way to phrase this, but could the criterion apply to "Content that is intended to be reader-facing", which includes almost everything in the draft/article namespaces, userspace drafts of articles, and maybe some templates, but excludes almost all talk page discussions, project space examples, and userspace essays? Tazerdadog (talk) 00:03, 23 July 2025 (UTC)[reply]
    How about "Material which is never intended to be part of the encyclopedia and is created for testing or demonstration purposes is excluded under this criterion"? Seraphimblade Talk to me 01:24, 23 July 2025 (UTC)[reply]
    Sigh. Look. This isn't necessary. Admins aren't dolts. Stick a template that looks vaguely similar to {{User:Sophisticatedevening/AIbox}} at the top and no admin will speedy it under this criterion. And if one does anyway, bring it to DRV and it'll be snow-restored, and in the current climate, that admin will probably get recalled. —Cryptic 01:29, 23 July 2025 (UTC)[reply]
    Some admins have been dolts who reflexively deleted anything tagged for deletion.
    Maybe there will never again be dolt admins? But maybe it is a contributing problem that CSDs are not as objective as imagined.
    DRV is not a good forum for catching bad speedy deletions. It takes a brave and committed Wikipedian to write a good DRV nomination. A newcomer falsely accused of posting LLM content, who probably did post poor content and cannot access that content because it is now deleted, will have nearly no chance of making DRV work usefully for them.
    - SmokeyJoe (talk) 00:21, 26 July 2025 (UTC)[reply]
    If this is really such a big deal, then the criterion can be modified to, instead of just "content", say "article or AfC submission", to make it clear that raw LLM generated content is only to be deleted once you're making it someone else's problem. This would naturally and very simply exempt drafts and userpages that have not been submitted to AfC or moved to mainspace. silviaASH (inquire within) 01:43, 23 July 2025 (UTC)[reply]
    Drafts are "intended" to be reader-facing. Submitted drafts more so.
    The advantage of AfC-Rejection, instead of speedy deletion, is that it is more easily done. Reviewers could reject drafts on their subjective judgement that the draft is time-wasting LLM generated content. If they are wrong, it can be resolved by a discussion. The rejected content stays live for six months to give an author time to consider and respond to a mistaken judgement, and after six months it gets deleted by WP:G13. If there is a disagreement, MfD welcomes disputes over draft rejections.
    There seems to be a problem with AfC not using rejection when it should be used, perhaps from reviewers not realising that declining means explicit encouragement to edit and try again is posted to the author's usertalk page. SmokeyJoe (talk) 00:15, 26 July 2025 (UTC)[reply]
  • The one thing here that gave me pause is phrasal templates (e.g., "Smith was born on [Birth Date]."), as I didn't initially catch that the example given was an example of a phrasal template rather than a use of one. Perhaps the wording should be expanded a bit for clarity, maybe like unfilled phrasal templates (e.g. "Smith was born on [Birth Date]." rather than the person's actual birth date or name). Anomie 12:34, 22 July 2025 (UTC)[reply]
    Agreed! I misinterpreted it the same way too. 123957a (talk) 14:03, 22 July 2025 (UTC)[reply]
    Yes. Cremastra (talk · contribs) 17:05, 22 July 2025 (UTC)[reply]
    I think that's a good clarification to make. Ca talk to me! 17:16, 22 July 2025 (UTC)[reply]
  • Perhaps we should add a note that unfilled phrasal templates may sometimes be sourced from existing article templates (such as Wikipedia:Artist biography article template/Preload) rather than chatbots. There is at least one instance of a reviewer confusing them: Draft:Tilak Raja Sahu, and we don't want to accidentally delete drafts or articles made from these templates. OutsideNormality (talk) 18:17, 22 July 2025 (UTC)[reply]
    I agree with @OutsideNormality. In fact, the unfilled boilerplate probably indicates that it's not LLM-generated text. WhatamIdoing (talk) 16:50, 26 July 2025 (UTC)[reply]
  • The World Destubathon Contest recently ended. Contest sponsor Dr. Blofeld's final remarks in response to contestant requests to "fully outlaw" use of LLMs/AI are relevant in the context of supporting Options 1 or 2 here: "...If editors are producing great content with LLMs and are using them responsibly, so that most people are happy with them, it is tough to ban that. It is inevitable that soon most editors will be using AI tools..." Seven hours later, Dr. Blofeld: "I did a few tests with LLMs with content and to be fair it would still be time consuming sorting it out into decent articles. I know if I was competing I would find it more time consuming actually going through what an AI wrote and trying to verify it and then format the references. I would find it easier to simply write the text myself straight off and not have to chase anything up..." Dr. Blofeld is a long-time editor who knows the importance of article quality. Not so much for many editors submitting to AfC, especially COI or paid editors. (Unsure how the 8,550 de-stubbed articles will be checked, but that's out of scope here.) I'm less concerned about false positives for LLM use in draft articles if the CSD criteria include most of the identifiers listed in this discussion. Sorry for verbosity; am striving to be better.--FeralOink (talk) 22:36, 22 July 2025 (UTC)[reply]
    Basically, the rule there would be "If I can't tell you're using AI, who cares?". If someone does so much work with whatever junk an AI spits out that they fix it into a usable form, check all the references to make sure they exist and say what they're claimed to say and correcting it when they don't, etc., well—at that point, the editor more or less wrote the article themself anyway, just using the AI as a prompt or starting point. In that case, I neither know nor care whether they used AI. Conversely, if it's clearly bot spew, with stilted wording, fabricated sources, etc., then that's a problem, and of course if an editor wrote something themself with fake sources, that's a problem too. Ultimately, the problem is not "Someone used an AI anywhere in the process ever", it's "Someone used an AI to generate the text, and copied and pasted that in without verifying and correcting it." Seraphimblade Talk to me 23:09, 22 July 2025 (UTC)[reply]
    I agree with you, Seraphimblade. We aren't banning the use of LLMs in AfC submissions, but rather trying to weed out submissions entirely or mostly drawn from LLMs without adequate human review by the submitting editors. I'm sorry if my example with the Destubathon is opaque; I was trying to compare this situation with something similar (a high volume of articles; editors tempted to glop in unedited slop that has the characteristics of LLM output, without review, just to get it into Wikipedia). Go ahead and hat this and my initial comment to the Discussion section if you want.--FeralOink (talk) 23:40, 22 July 2025 (UTC)[reply]
    Yes, my feeling is that LLMs aren't currently reliable enough for large scale heavy Wikipedia use without proofing, I found them to be tedious and actually more time consuming to verify the material. But if people have found a way to use them to genuinely be more efficient and aren't producing slop I don't want to interfere with that. I do think AI has massive potential and have no doubt it is the future. It could greatly help with things like drawing up instant citations and make editing more efficient for a start. It's still very early days and will get more reliable I'm sure. At present I do agree with the need to regulate content which hasn't been human verified, thank you to those who are trying to deal with it. ♦ Dr. Blofeld 04:49, 23 July 2025 (UTC)[reply]
  • I'm inclined to support this, but I'm guessing we'll get some complaints along the lines of "I only used AI for a small part of it", "I used AI as a starting point and revised from there, but didn't think to check the references", etc. A lot of newcomers don't realize just how important references are and why it's necessary to base an article on references rather than personal knowledge. Helpful Raccoon (talk) 01:20, 23 July 2025 (UTC)[reply]
  • Question on applicability to drafts. Is option 1 intended for any page in draftspace, or only for submitted drafts? SmokeyJoe (talk) 07:56, 23 July 2025 (UTC)[reply]
    I guess as it is written it would be for any page in draftspace. Following the discussion and your own input @SmokeyJoe I think that there is an argument for it to only be used for drafts that have been submitted, declined as LLM-generated per WP:DRAFTREASON #2, and then resubmitted without improvement such that the draft still meets the criteria. The idea being to strike a balance between AfC volunteers' scarce time on the one hand, and not wishing to WP:BITE good faith new editors on the other. That said, I don't work the AfC queue - I'm more NPP - so I'd probably defer to those who encounter it more often there. Cheers, SunloungerFrog (talk) 09:14, 23 July 2025 (UTC)[reply]
  • I have extremely mixed feelings about the "shoot on sight" attitude towards LLM-generated content (I know that this criterion is about a subset of it). On one hand, in an ideal world the source of the draft/article shouldn't matter - we should engage just with issues with the content, and the question of whether it was or wasn't LLM generated shouldn't come up except as a suggestion to the editor ("this seems to have fake sources: note that LLMs will often do this, so you should check every claim" not "this is LLM generated and so hallucinated garbage, go back and rewrite yourself"). On the other hand, as someone who reviews from the front of the AfC queue, we need something to deal with these drafts. There is an absolutely ridiculous number of drafts that are obviously LLM generated but on the face of it otherwise maybe fine, but as soon as you start fact checking them the whole thing falls apart: the links are real, but they are only sort of tangentially related to the claim they are supposed to support. You send this draft back to sender with the comment that they should fact check it. They fix the one issue you actually specifically point out to them (often with another, inadequate LLM-suggested source or phrasing) and don't make a single other change. You check the next source. It's still a disaster. You tell them to check all the sources. They swap out one link for another that isn't actually any better and resubmit. And on and on and on and on this goes until you run out of all patience for assuming any amount of good faith of any sort towards these drafts, because no one is doing any fact checking or for that matter any thinking at any point in the process, and if we're expected to strip them down to usable sources or treat them as good-faith drafts from real people then these (usually COI) editors are getting hours of free editor work, essentially rewriting the draft from scratch, for the price of entering one sentence into an LLM and copy-pasting.
I have put in my time in the COI edit request mines; as far as it goes, I think I'm unusually inclined towards handholding COI editors who are making even a quarter of an effort at actually meeting us halfway, but this is completely unsustainable. Sorry for the rant. This is only kind of on topic for the CSD criteria, but a couple of people mentioned wanting the AfC perspective, and I think the AfC perspective, collectively, is $#$#&%, so have a sample. Rusalkii (talk) 20:39, 23 July 2025 (UTC)[reply]
  • So, I've been looking at drafts that link to Large language model (as both {{AfC submission|d|ai}} and {{ai-generated}} cause) and seeing if they met the proposed criterion as of their first submission. Slow going, because I'm distracted and not at home; I've only found time to get through about a dozen. While most did look like they were written by LLMs at least in part, I have yet to find one that would be unambiguously speedyable under this criterion. The closest it's come are three that each had a single dead and unarchived reference, and they might not even have been dead at the time of submission. I was already sort of concerned that the specific subcriteria would quickly become obsolete; now I'm worried that they already are. —Cryptic 00:24, 24 July 2025 (UTC)[reply]
    I think we would get more if Markdown is added to the CSD criteria. Children Will Listen (🐄 talk, 🫘 contribs) 03:24, 24 July 2025 (UTC)[reply]
    Getting some empirical data is a good idea; so I took a look just now at the last 9 WP:AFCH AI declines (via the edit tag) (was going to be 10, but 10th got G11ed while I was writing this):
    I'm actually surprised how high the hit rate was, only 1 article is completely not eligible. Jumpytoo Talk 03:40, 24 July 2025 (UTC)[reply]
    They're faking VRTS tickets now? That's mind-blowing. Oaktree b (talk) 13:19, 24 July 2025 (UTC)[reply]
    That's terrifying. Bobby Cohn 🍁 (talk) 13:21, 24 July 2025 (UTC)[reply]
    Apparently, because "the recently submitted archive of newspaper clippings, verified via the Wikipedia VRTS ticket #2025060510000932, confirms WKLR-FM’s historical significance,." This is the first time I've heard something like this, assuming it's real and not just more AI slop. Children Will Listen (🐄 talk, 🫘 contribs) 16:28, 24 July 2025 (UTC)[reply]
    Is anyone on the VRT team able to comment on the existence or relevance of VRTS ticket # 2025060510000932? (Update: requested on their noticeboard.) Bobby Cohn 🍁 (talk) 16:33, 24 July 2025 (UTC)[reply]
    Ticket:2025060510000932 does not represent verification of the file submitted to Commons, nor does it confirm "WKLR-FM’s historical significance". In fact it rejects the ability of the uploader to release the uploaded file contents under a Commons-approved license. Geoff | Who, me? 17:05, 24 July 2025 (UTC)[reply]
    Ok, but a ticket that is valid and relevant but doesn't say what the editor wants it to say is not the sort of fake VRTS ticket that I would treat as a red flag of LLM use. —David Eppstein (talk) 17:11, 24 July 2025 (UTC)[reply]
    I think this is a weird article history, and an evaluation of it would be beneficial for better understanding some of the hypothetical edge cases that arise with this tag:
    1. [1] It is clear that the first version of the draft used markdown, known to be used by AI when generating fake articles
    2. [2] Later revision introduces further curly quotes, citation templates outside of <ref> tags
    3. [3] Removes citation templates entirely in favour of regular formatting. {{subst:submit}} at the bottom of the article with a signature.
    4. [4] This is the re-write that adds VRTS ticket references along with weird special characters. Any indication of references is removed. Specifically, I'd note: "Though the station no longer operates, its cultural footprint endures in the memories of its listeners and the pages of Northwest Ohio’s media history. The recently submitted archive of newspaper clippings, verified via the Wikipedia VRTS ticket #2025060510000932, confirms WKLR-FM’s historical significance, community impact and independent secondary coverage, satisfying Wikipedia’s notability criteria for standalone inclusion." screams of AI. My best guess is that the author used an AI chat while feeding information and their VRTS ticket into the LLM and used the output, as evidenced by the overwrite among other things.
    5. [5] The {{AFC submission}} template is altered (a known characteristic of AI) and curly quotes are again introduced, special characters are substituted for [ and ] square brackets and formatting is reintroduced.
    6. [6] After another declination, the page is wiped and reformatted again and an incorrect {{AFC submission}} is again applied, again indicating LLM use.
    So based on the above and how this article would factor into the CSD criteria, I would say it is highly likely that steps 1, 2, and 3 above used AI. I think there might be some discretion here, like there is with any subjective criterion. 4 through 6 are clearly AI-guided; I don't know how this would factor in. So it's clear that 4 through 6 are junk. Does a hypothetical admin who sees a CSD tag on #6 revert to the last best revision, number 3, like they would if they saw a vandalized corporate page, instead of deleting it? Is #3 even that good, or just last best?
    And you can make an argument about how much this author conducted a "reasonable human review". I have to say, I'm strongly in favor of viewing that as not having been done. With revision #6 above, it is clear the author meant to submit it, but instead they left it sitting in a state of "big red decline message" and walked away. Bobby Cohn 🍁 (talk) 18:58, 24 July 2025 (UTC)[reply]
    Since this message, it was resubmitted. I declined and tried to offer advice. This was met with this message, which is flat-out wrong based on the previous information from the VRT member, and which I'm also 99% sure was written by an AI. So add my hours here to the tally of wasted effort by volunteer reviewers. I have to view this as further evidence that these AI writings are simply not being read at all; they're being glanced over as they're copied and pasted between web pages, and people are no further along because of it. This should be factored in when we're weighing true, valuable human review. Time to log out. Bobby Cohn 🍁 (talk) 19:51, 24 July 2025 (UTC)[reply]
    You wrote on their talk page “You are encouraged to edit the submission to address the issues raised and resubmit…”
    - SmokeyJoe (talk) 21:15, 24 July 2025 (UTC)[reply]
    I get the opposite impression from this. First of all, @Glane23, it doesn't say anything about "submitted to Commons". It says the author submitted scans of the newspaper articles to WP:VRTS, which means they e-mailed them to volunteers. @David Eppstein confirms that the person actually did e-mail these newspapers to the volunteers. Their conclusion is significantly overstated, but it pretty much proves that they're working from real printed-on-actual-paper-in-the-newspaper sources and that none of this is being hallucinated by some chatbot. WhatamIdoing (talk) 16:58, 26 July 2025 (UTC)[reply]
    I was going only on what was said here; someone else checked the ticket. I do not have VTRS access. —David Eppstein (talk) 17:38, 26 July 2025 (UTC)[reply]
    Thank you for correcting me. WhatamIdoing (talk) 18:48, 27 July 2025 (UTC)[reply]
  • Another name suggestion: Obviously unreviewed LLM output. Since it seems we're having trouble agreeing on one, I think this would work. Could swap "unambiguous" for "obvious", but both have precedent in CSD headers and I'd prefer the shorter one. Toadspike [Talk] 21:56, 24 July 2025 (UTC)[reply]
    I like this for being concise and for emphasizing that it's not a criterion for all LLM output. ~ L 🌸 (talk) 18:23, 31 July 2025 (UTC)[reply]
  • User:SmokeyJoe writes: I don’t recall ever seeing LLM mentioned at MfD. I do. It isn't common, but I have seen it. See Wikipedia:Miscellany for deletion/Draft:Romane Dasse and Wikipedia:Miscellany for deletion/Draft:Lucas Grillo. Having seen the comments of the more active AFC reviewers, I will guess that they are Rejecting the LLM input because that is an alternative to sending it to MFD. Since LLM drafts are frequently being Rejected, that qualifies as Frequent. Robert McClenon (talk) 01:18, 25 July 2025 (UTC)[reply]
    It only counts as frequent if the pages are frequently being deleted. Are they? Or are they just being rejected? If the latter, then we have no evidence for NEWCSD point 2 as well as point 3. Thryduulf (talk) 01:26, 25 July 2025 (UTC)[reply]
    If the reviewers have an option to delete the pages without having to go through the bureaucracy that is MfD, then yes, I bet they would be deleted frequently. Children Will Listen (🐄 talk, 🫘 contribs) 01:27, 25 July 2025 (UTC)[reply]
    What we don't know is whether all the pages would always be deleted according to consensus if they were actually nominated. It's far from impossible that some would be while others would have other outcomes but unless and until we know we have no evidence. Speedily deleting something without evidence that it should be deleted is extremely damaging to the encyclopaedia. Thryduulf (talk) 01:35, 25 July 2025 (UTC)[reply]
    A rejection almost always leads to a G13 deletion or, if it is moved to mainspace anyways (I don't have numbers, but this is rare), deletion at AfD. So yes, they are being deleted. Toadspike [Talk] 10:04, 25 July 2025 (UTC)[reply]
    Pages being deleted under G13 indicates that a new criterion would be redundant, especially if there are some pages that are improved post rejection which would mean that not everything should be deleted. Thryduulf (talk) 16:52, 25 July 2025 (UTC)[reply]
    Drafts getting rejected does not qualify as frequent for NEWCSD for the same reason we don't have a CSD for deleting rejected drafts. Rejection is up to the AfC reviewer, so up to just one person. MfD on the other hand is a process where the community reviews a draft and chooses whether to delete or not. Deleting just based on the opinion of a single AfC reviewer is something we don't do. Warudo (talk) 17:03, 27 July 2025 (UTC)[reply]
  • I'm of two minds about this. A question for @SarekOfVulcan and Thryduulf, with whom I generally agree that we don't seem to benefit from adding specifically anti-LLM guidelines. It seems to me, though, that the distinction in this particular area is relative volume. I think we all agree that an assuredly LLM-generated, unsupervised article submission is not worth other editors' or readers' time to keep around as a draft. The problem is, unlike with unsupervised additions to existing articles and replies in talk page discussions, there's no existing criterion that can be substituted to do away with a timewaster. And these timewasters, as I hinted, are steadily a larger and larger chunk of the workload for editors at and around AfC. I understand the wariness about a CSD with a hazy perimeter that will likely only grow hazier over time, but I guess I'm curious what we're meant to do going forward to deal effectively with the obvious spike in tedious LLM-generated volume. Far be it from me to suggest yet another deletion mechanism, but this feels like something that needs a level of friction between SD and PROD. Remsense 🌈  16:57, 25 July 2025 (UTC)[reply]
    My issue is not with deleting obvious junk, my issues are making sure that it is only pages that are objectively junk that get deleted and making sure that we set the criteria based on the things that are actually objectively relevant to why it is junk rather than using vague terms like "LLM" or "AI".
    I think at least most people can agree on the following:
    1. We don't want to delete content that isn't junk
    2. We do want to delete junk that isn't LLM/AI-generated
    3. We don't want to accuse people of using LLM/AI when they haven't done so
    4. Time spent investigating whether obvious junk is LLM/AI-generated junk, human junk or junk that is a combination of both is almost always wasted time
    The intent of this proposal broadly aligns with these principles (although it seems unconcerned with the third), but I remain unconvinced that, as written, it will actually achieve any of these aims. Thryduulf (talk) 17:11, 25 July 2025 (UTC)[reply]
    If this passes, maybe we should run another Wikipedia:Newbie treatment at Criteria for speedy deletion, to see how many non-LLM-generated articles get deleted as being LLM-generated. WhatamIdoing (talk) 17:05, 26 July 2025 (UTC)[reply]
  • LLMs often generate fake URLs or fake metadata for real sources. In these cases where all the sources exist but several of the URLs are wrong, would such pages be eligible for speedy deletion? Helpful Raccoon (talk) 21:00, 25 July 2025 (UTC)[reply]
    I'm also concerned about simple copy/paste errors. I have pasted the wrong ISBN/DOI/URL into an article more times than most people will ever edit. Missing the first or last digit when I'm copying is a particular specialty of mine, but I've got a solid track record of pasting URL #2 when I thought I had URL #1 in the paste buffer. But now these sorts of ordinary typos, copy/paste mixups, and the like are supposed to be uncontestable proof that it's all LLM-generated?
    I had to look up the status on a package I shipped the other day, and it took me at least six tries to figure out what the tracking number is. CSD proposals based on "let's delete anything with typos" are bad, but if we're going to do them, maybe we should restrict voting to those of us who need reading glasses and therefore have sympathy with people who struggle to read tiny little numbers. WhatamIdoing (talk) 17:04, 26 July 2025 (UTC)[reply]
    The proposed criterion is "Implausible non-existent references". If the error can be easily explained as a mistyped DOI and the actual source can be found, it wouldn't meet "implausible" or "non-existent", so it would not be eligible for deletion. But if an article has a pattern of broken or irrelevant DOIs indicative of unreviewed LLM content, then it is a waste of time to spend more than a minute or two searching for each source just to verify that none of them exist. Once plausibility has been strained, it's more efficient to delete. -- LWG talk 22:44, 30 July 2025 (UTC)[reply]
Novice editors occasionally "correct" URLs, for example here: Special:Diff/1302477951. One would have to check the page history to ensure the URLs are actually fake. But that shouldn't be hard in regards to drafts, which usually have a fairly short editing history. Sumanuil. (talk to me) 07:47, 26 July 2025 (UTC)[reply]
  • To User:JoelleJay and User:Cremastra - I disagree with User:WhatamIdoing and think that AI-generated material should be speedy-deleted, but I think that she has an arguable point about the difficulty of AI detection. I disagree, but it isn't necessary to ask if she has read the proposal. Robert McClenon (talk) 18:05, 27 July 2025 (UTC)[reply]
  • Generally, I've found that the sites claiming to detect AI-generated text are unlikely to be accurate. They seem to manage when ChatGPT is involved, possibly due to its popularity, but if anything else was used they often don't even rate a 50% chance of the text being AI, and frequently report 0%, as if other models don't exist. So at the very least this process wouldn't be speedy, and at worst whatever is decided on won't be accurate. (Although, at least in the few times I tested it, it didn't seem to detect AI when there wasn't any, so as far as false positives go...) ~Lofty abyss 08:58, 29 July 2025 (UTC)[reply]

F3 and the GFDL

[edit]

F3 said in relevant part: "Files uploaded after 1 August 2021 licensed under versions of the GFDL earlier than 1.3, without allowing for later versions or other licenses, may be deleted." This is obviously intended as the enforcement mechanism for WP:NOMOREGFDL, but this wording gets a lot wrong about that policy:

  1. NOMOREGFDL applies regardless of the GFDL version(s)
  2. The licensing date is what counts, not the upload date (or creation date)
  3. The policy has an exception for content extracted from a real GFDL software manual
  4. The policy only applies to content [which] is primarily a photograph, painting, drawing, audio or video

That wording both prohibits deletion of unacceptable media (e.g. a GFDL v1.3-only licensed photograph) and permits deletion of acceptable media (e.g. there are no exceptions for PDFs or extracted images).

I don't think this is a controversial change, so I have WP:PGBOLDly changed this to "Files which do not meet the criteria for using the GNU Free Documentation License may be deleted under this criterion", which is both clearer and future-proofs the wording in case NOMOREGFDL is amended. Feel free to revert if you object to the change, but remember that PGBOLD says "you should not remove any change solely because there was no discussion indicating consensus for the change before it was made. Instead, give a substantive reason for challenging it either in your edit summary or on the talk page." HouseBlaster (talk • he/they) 16:19, 27 July 2025 (UTC)[reply]

The purpose of F3 is not to enforce the image use policy, but to speedily delete improperly licenced content. The CSD should not be based on licence dates, as these dates are not always obvious to the tagger or the reviewing admin. Using the upload date is much simpler, as every file has an upload date, and a file uploaded with a GFDL only license before the cutoff date must have been licensed on or before the cutoff date (if someone tried to revoke a validly placed license on a post-cutoff file to make the content GFDL only, that revocation has no effect and can simply be reverted). Any dispute about licence date vs upload date that is material can be handled at xFD. IffyChat -- 19:50, 27 July 2025 (UTC)[reply]
I agree with most of what you wrote. My main objection would be that prohibited GFDL content is improperly-licensed content—even though that prohibition lives inside the image use policy. I also agree that anything uploaded before the cutoff date is obviously acceptable. And I agree that the licensing date is not always obvious. But I think it is almost always obvious: most uploads have an {{information}} with a date, {{self}} can use the upload date, if an external source is specified that probably has a date, etc. And like you said, anything where there is a dispute or it is uncertain goes to FFD by the normal rule of CSD being used only for obvious cases. HouseBlaster (talk • he/they) 20:55, 27 July 2025 (UTC)[reply]

Comment

[edit]

How do I speedy delete userpages without waiting 4 days to skip the abuse filter? 23.162.200.217 (talk) 08:49, 31 July 2025 (UTC)[reply]

Why have you taken an interest in speedy deleting user pages? I declined the one you put for TrainDows as the links are not "excessive" per WP:UP#PROMO. 331dot (talk) 08:58, 31 July 2025 (UTC)[reply]
(edit conflict) You appear to have successfully nominated User:TrainDows for deletion via the talk page, and it has been declined. I don't see any abuse filter preventing you from editing another person's userpage. Special:AbuseLog/41685962 is relevant to others reviewing this request. What message did you get when you tried to edit the userpage? Stifle (talk) 09:03, 31 July 2025 (UTC)[reply]
WP:UPROT quasi-semiprotects all userpages that are not deliberately unlocked. IPs can't edit the vast majority of userpages. —Kusma (talk) 09:28, 31 July 2025 (UTC)[reply]
This is an interesting case. User:TrainDows has made one edit in en.wiki, which was creation of their user page two years ago, meaning 100% of edits could fall weakly on that self-promotion/webhost scale. Should this go to MFD, or is this too harmless?--☾Loriendrew☽ (ring-ring) 13:41, 31 July 2025 (UTC)[reply]
Loriendrew, that's the sort of page that should probably just be blanked (or even left alone, seeing as it's not particularly harmful – and also not indexed, but I'm not sure of this); unless the other editor objects and reverts the blanking, it's probably not worth sending to WP:MFD... —  Salvio giuliano 14:19, 31 July 2025 (UTC)[reply]