Wikipedia talk:WikiProject AI Cleanup

I wanted to share a helpful tip for spotting AI-generated articles on Wikipedia


If you look up several buzzwords associated with ChatGPT and limit the results to Wikipedia, it will bring up articles with AI-generated text. For example, I looked up "vibrant" "unique" "tapestry" "dynamic" site:en.wikipedia.org and found some (mostly) low-effort articles. I'm actually surprised that most of these are articles about cultures (see Culture of Indonesia or Culture of Qatar). 95.18.76.205 (talk) 01:54, 2 September 2024 (UTC)Reply

Thanks! That matches with Wikipedia:WikiProject AI Cleanup/AI Catchphrases, feel free to add any new buzzwords you find! Chaotic Enby (talk · contribs) 02:00, 2 September 2024 (UTC)Reply
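For the technically inclined: the same buzzword search can be run against Wikipedia's own search API instead of an external search engine, since quoting each term requires all of them to appear. A minimal sketch (the word list is taken from the tip above; the helper names are just illustrative):

```python
import json
from urllib import parse, request

API = "https://en.wikipedia.org/w/api.php"
BUZZWORDS = ["vibrant", "unique", "tapestry", "dynamic"]  # from the tip above

def search_params(words, limit=50):
    # Quote every term so the search requires all of them, mirroring the
    # external  "word" "word" site:en.wikipedia.org  trick.
    return {
        "action": "query",
        "format": "json",
        "list": "search",
        "srsearch": " ".join(f'"{w}"' for w in words),
        "srlimit": str(limit),
    }

def titles(response):
    # Pull article titles out of a list=search API response.
    return [hit["title"] for hit in response.get("query", {}).get("search", [])]

# Live query (network access required):
# with request.urlopen(API + "?" + parse.urlencode(search_params(BUZZWORDS))) as r:
#     print(titles(json.load(r)))
```

The hits would still need human review, of course; the search only surfaces candidates.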

A new WMF thing


Y'all might be interested in m:Future Audiences/Experiment:Add a Fact. Charlotte (Queen of Heartstalk) 21:46, 26 September 2024 (UTC)Reply

Is it possible to specifically tell LLM-written text from encyclopedically written articles?


The WikiProject page says "Automatic AI detectors like GPTZero are unreliable and should not be used." However, those detectors are full of false positives because LLM-written text stylistically overlaps with human-written text. But Wikipedia doesn't seek to cover the full breadth of human writing, only a very narrow strand (encyclopedic writing) that is very far from natural conversation. Is it possible to specifically train a model on (high-quality) Wikipedia text vs. average LLM output? Any false positive would likely be unencyclopedic and need fixing regardless. MatriceJacobine (talk) 13:29, 10 October 2024 (UTC)Reply

That would definitely be a possibility, as the two output styles are stylistically different enough to be reliably distinguished most of the time. If we can make a good corpus of both (from output of the most common LLMs on Wikipedia-related prompts on one side, and Wikipedia articles on the other), which should definitely be feasible, we could indeed train such a detector. I'd be more than happy to help work on this! Chaotic Enby (talk · contribs) 14:50, 10 October 2024 (UTC)Reply
That is entirely possible, a corpus of both "Genuine" Articles and articles generated by LLMs would be better though, as the writing style of for example ChatGPT can still vary depending on prompting. Someone should collect/archive articles found to be certainly generated by Language Models and open-source it so the community can contribute. 92.105.144.184 (talk) 15:10, 10 October 2024 (UTC)Reply
We do have Wikipedia:WikiProject AI Cleanup/List of uses of ChatGPT at Wikipedia and User:JPxG/LLM dungeon which could serve as a baseline, although it is still quite small for a corpus. A way to scale it would be to find the kind of prompts being used and use variations of them to generate more samples. Chaotic Enby (talk · contribs) 15:24, 10 October 2024 (UTC)Reply
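For what it's worth, the classifier idea can be prototyped with nothing more than word statistics. The sketch below trains a Laplace-smoothed unigram Naive Bayes model on two tiny toy corpora standing in for the real ones discussed above (encyclopedic text vs. typical LLM output); the sample sentences are made up for illustration, and a real detector would need the large corpora described here:

```python
import math
from collections import Counter

# Toy stand-ins for real training data: Wikipedia-style prose on one side,
# buzzword-heavy LLM-style prose on the other. Purely illustrative.
WIKI = [
    "the fih was founded in 1924 in paris in response to field hockey's omission from the olympics",
    "the organisation has been based in lausanne switzerland since 2005",
]
LLM = [
    "the vibrant culture forms a rich tapestry of dynamic and unique traditions",
    "in conclusion this unique heritage underscores a vibrant and dynamic legacy",
]

def train(docs):
    # Unigram counts and total token count for one class.
    counts = Counter()
    for d in docs:
        counts.update(d.split())
    return counts, sum(counts.values())

def log_prob(text, counts, total, vocab_size):
    # Laplace-smoothed log-likelihood of the text under one class.
    return sum(
        math.log((counts[w] + 1) / (total + vocab_size))
        for w in text.split()
    )

VOCAB_SIZE = len({w for d in WIKI + LLM for w in d.split()})
WIKI_COUNTS, WIKI_TOTAL = train(WIKI)
LLM_COUNTS, LLM_TOTAL = train(LLM)

def classify(text):
    wiki_score = log_prob(text, WIKI_COUNTS, WIKI_TOTAL, VOCAB_SIZE)
    llm_score = log_prob(text, LLM_COUNTS, LLM_TOTAL, VOCAB_SIZE)
    return "llm" if llm_score > wiki_score else "wiki"

print(classify("a vibrant tapestry of dynamic traditions"))  # prints: llm
```

A production version would obviously use a stronger model and much larger corpora, but the two-corpus framing is exactly what's proposed above, and the false-positive caveats discussed in this thread still apply.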

GPTZero etc


I have never used an automatic AI detector, but I would be interested to know why the advice is "Automatic AI detectors like GPTZero are unreliable and should not be used."

Obviously, we shouldn't just tag/delete any article that GPTZero flags, but I would have thought it could be useful to highlight to us articles that might need our attention. I can even imagine a system like WP:STiki that has a backlog of edits sorted by likelihood to be LLM-generated and then feeds those edits to trusted editors for review.

Yaris678 (talk) 14:30, 11 October 2024 (UTC)Reply

It could indeed be useful to flag potential articles, assuming we keep in mind the risk that editors might over-rely on the flagging as a definitive indicator, given the risk of both false positives and false negatives. I would definitely support brainstorming such a backlog system, but with the usual caveats – notably, that a relatively small false positive rate can easily be enough to drown true positives. Which means, it should be emphasized that editorial judgement shouldn't be primarily based on GPTZero's assessment.
Regarding the advice as currently written, the issue is that AI detectors will lag behind the latest LLMs themselves, and will often only be accurate on older models on which they have been trained. Indeed, their inaccuracy has been repeatedly pointed out. Chaotic Enby (talk · contribs) 14:54, 11 October 2024 (UTC)Reply
How would you feel about changing the text to something like "Automatic AI detectors like GPTZero are unreliable and should only ever be used with caution. Given the high rate of false positives, automatically deleting or tagging content flagged by an automatic AI detector is not acceptable." Yaris678 (talk) 19:27, 15 October 2024 (UTC)Reply
That would be fine with me! As the "automatically" might be a bit too restricted in scope, we could word it as "Given the high rate of false positives, deleting or tagging content only because it was flagged by an automatic AI detector is not acceptable." instead. Chaotic Enby (talk · contribs) 19:46, 15 October 2024 (UTC)Reply
I'd argue that's an automatic WP:MEATBOT, but there's no harm in being clearer. jlwoodwa (talk) 16:21, 16 October 2024 (UTC)Reply
I support that wording. I use GPTZero frequently, after I already suspect that something is AI-generated. It's helped me avoid some false positives (human-generated text that I thought was AI), so it's pretty useful. But I'd never trust it or rely on it. jlwoodwa (talk) 16:15, 16 October 2024 (UTC)Reply
I don't really see the need for detectors at this point, as it's usually pretty clear when an editor is generating text. As you say, the worry is false positives, not false negatives; these are pretty quickly rectified upon further questioning of the editor. Remsense ‥  00:08, 6 November 2024 (UTC)Reply

I have edited the wording based on my suggestion and Chaotic Enby's improvement. Yaris678 (talk) 06:54, 19 October 2024 (UTC)Reply

404 Media article


https://www.404media.co/email/d516cf7f-3b5f-4bf4-93da-325d9522dd79/?ref=daily-stories-newsletter Seananony (talk) 00:44, 12 October 2024 (UTC)Reply

Question about To-Do List


I went to 3 of the articles listed, Petite size, I Ching, and Pension, and couldn't find any templates in the articles about AI generation. Is the list outdated? Seananony (talk) 02:21, 12 October 2024 (UTC)Reply

Seananony: The to-do list page hasn't been updated since January. Wikipedia:WikiProject AI Cleanup § Categories automatically catches articles with the {{AI-generated}} tag. Chaotic Enby, Queen of Hearts: any objections to unlinking the outdated to-do list? — ClaudineChionh (she/her · talk · contribs · email) 07:08, 12 October 2024 (UTC)Reply
Fine with me! Chaotic Enby (talk · contribs) 11:14, 12 October 2024 (UTC)Reply

What to do with OK-ish LLM-generated content added by new users in good faith?


After opening article Butene, I noticed the headline formatting was broken. Then I read the text and it sounded very GPT-y but contained no apparent mistakes. I assume it has been proofread by the human editor, Datagenius Mahaveer who registered in June and added the text in July.

I could just fix the formatting and remove the unnecessary conclusion, but decided to get advice from more experienced users here. I would appreciate it if you put some kind of brief guide for such cases (which, I assume, are common) somewhere, BTW! Thanks in advance 5.178.188.143 (talk) 13:57, 19 October 2024 (UTC)Reply

Hi! In that case, it is probably best to deal with it the same way you would deal with any other content, although you shouldn't necessarily assume that it has been proofread and/or verified. In this case, it was completely unsourced, so an editor ended up removing it. Even if it had been kept, GPT has a tendency to write very vague descriptions, such as polybutene finds its niche in more specialized applications where its unique properties justify the additional expense, without specifying anything. These should always be reworded and clarified, or, if there are no sources supporting them, removed. Chaotic Enby (talk · contribs) 15:24, 19 October 2024 (UTC)Reply
I very much agree with the idea of putting up a guide, by the way! Thanks a lot! Chaotic Enby (talk · contribs) 15:26, 19 October 2024 (UTC)Reply
I already have two guides on my to-do list, so I'll pass this to someone else, but I made a skeleton of a guide at Wikipedia:WikiProject AI Cleanup/Guide and threw in some stuff from the main page of this project, in an attempt to guilt someone else (@Chaotic Enby?) into creating one. -- asilvering (talk) 19:08, 19 October 2024 (UTC)Reply
Great, now I've been guilt-tripped and can't refuse! I'll go at it, should be fun – and thanks for setting up the skeleton! (I was thinking of also having a kind of flow diagram like the NPP one) Chaotic Enby (talk · contribs) 19:10, 19 October 2024 (UTC)Reply
Oh, that would be a great idea! I just can't really guilt you into it by making a half-finished svg. -- asilvering (talk) 19:19, 19 October 2024 (UTC)Reply
Do we need a separate guide page? A lot of the content currently in Wikipedia:WikiProject AI Cleanup/Guide is copied from or paraphrases Wikipedia:WikiProject AI Cleanup#Editing advice. I think it would make sense not to have a separate page for now (the usual issues with forking) and instead expand Wikipedia:WikiProject AI Cleanup#Editing advice. If that section gets too big for the main page of this WikiProject, then we can copy it to Wikipedia:WikiProject AI Cleanup/Guide and leave a link and summary at Wikipedia:WikiProject AI Cleanup#Editing advice. Yaris678 (talk) 12:51, 23 October 2024 (UTC)Reply
For now, the "guide" is mostly just the skeleton that Asilvering set up, I haven't gotten to actually writing the bulk of the guide yet. Chaotic Enby (talk · contribs) 13:54, 23 October 2024 (UTC)Reply
Sure. But what I am saying is: rather than expand on that skeleton, expand Wikipedia:WikiProject AI Cleanup#Editing advice. Yaris678 (talk) 16:55, 23 October 2024 (UTC)Reply
These links were useful. Thanks!
I suggest centralizing them all under Wikipedia:WikiProject AI Cleanup/Guide and simply linking it from WikiProject page. Symphony Regalia (talk) 17:33, 23 October 2024 (UTC)Reply
Expanding the guides a bit for corner cases would be useful. Symphony Regalia (talk) 17:31, 23 October 2024 (UTC)Reply

Flagging articles up for examination


Hi Folks!! I'm looking to catch up to the current state. I reviewed an article during the last NPP sprint that an IP editor had flagged with the LLM tag. I couldn't say for sure whether it was generated or not, so I'm behind. I sought advice and was pointed here. It was generated, in fact. So I'm looking for any flagged articles that you happen to come across, so I can take a look, learn the trade, chat about it, and so on, so to speak. I've joined the group as well. Thanks. scope_creepTalk 14:01, 24 October 2024 (UTC)Reply

The pre-ChatGPT era


We may want to be more explicit that text from before ChatGPT was publicly released is almost certainly not the product of an LLM. For example, an IP editor had tagged Hockey Rules Board as being potentially AI-generated when nearly all the same text was there in 2007. (The content was crap, but it was good ol' human-written crap!) Maybe add a bullet in the "Editing advice" section along the lines of "Text that was present in an article before December 2022 is very unlikely to be AI-generated." Apocheir (talk) 00:57, 25 October 2024 (UTC)Reply

This is probably a good idea. I'm sure they were around before then, but definitely not publicly. Symphony Regalia (talk) 01:42, 25 October 2024 (UTC)Reply
Definitely a good idea, also agree with this. Just added a slightly edited version of it to "Editing advice", feel free to adjust it if you wish! Chaotic Enby (talk · contribs) 01:59, 25 October 2024 (UTC)Reply
So far, I haven’t seen anything that I thought could be GPT-2 or older. But I did run into a few articles that seem to make many of the same mistakes as ChatGPT, except a decade earlier.
If old pages like that could be mistaken for AI because they make the mistakes we look for in AI text, that still means it's a problematic find; maybe we should recommend other cleanup tags for these cases. 3df (talk) 22:53, 25 October 2024 (UTC)Reply
I think that's very likely an instance of "bad writing". Human brains have very often produced analogous surface-level results! Remsense ‥  23:05, 25 October 2024 (UTC)Reply
Yes, I have to say, ChatGPT's output is a lot like how a lot of first- or second-year undergraduate students write when they're not really sure if they have any ideas. Arrange some words into a nice order and hope. Stick an "in conclusion" on the end that doesn't say much. A lot of early content on Wikipedia was generated by exactly this kind of person. (Those people grew out of it; LLMs won't.) -- asilvering (talk) 00:31, 26 October 2024 (UTC)Reply
I ran this text from the 2017 version. GPTZero said 1% chance of AI.
FIH was founded on 7 January 1924 in Paris by Paul Léautey, who became the first president, in response to field hockey's omission from the programme of the 1924 Summer Olympics. First members complete to join the seven founding members were Austria, Belgium, Czechoslovakia, France, Hungary, Spain and Switzerland. In 1982, the FIH merged with the International Federation of Women's Hockey Associations (IFWHA), which had been founded in 1927 by Australia, Denmark, England, Ireland, Scotland, South Africa, the United States and Wales. The organisation is based in Lausanne, Switzerland since 2005, having moved from Brussels, Belgium. Map of the World with the five confederations. In total, there are 138 member associations within the five confederations recognised by FIH. This includes Great Britain which is recognised as an adherent member of FIH, the team was represented at the Olympics and the Champions Trophy. England, Scotland and Wales are also represented by separate teams in FIH sanctioned tournaments. Graywalls (talk) 00:03, 6 November 2024 (UTC)Reply

AI account


Special:Contributions/Polynesia2024. Their contribution pattern is suspicious: no matching edit summaries, and content dumps of thousands of bytes minutes apart across many articles. Some of the content they inserted tests as high as 99% AI, such as what they inserted into Ford. What is the current policy on AI-generated content without disclosure? Perhaps it could be treated as account sharing (because the person who has the account isn't the one who wrote it) or as adding content you did not create. Graywalls (talk) 23:53, 25 October 2024 (UTC)Reply

There isn't technically any policy on not disclosing AI content yet, even in obvious cases like this one. However, the user who publishes the content is still responsible for it, whether it is manually written or AI-generated, so this would be treated the same as rapid-fire disruptive editing, especially given their unresponsiveness. Chaotic Enby (talk · contribs) 00:25, 26 October 2024 (UTC)Reply
Also being discussed at Wikipedia_talk:WikiProject_Spam#Possible_academic_boosterism_ref_spamming. Flounder fillet (talk) 00:57, 26 October 2024 (UTC)Reply

Ski Aggu is potentially stuffed with fake sources that do not work, or sources that may not directly support the content. The CSD request was denied. I'm not going to spend the time manually checking everything, but am putting it out there for other volunteers to look at. Unfortunately, AI spam bots can apparently churn out tainted articles and publish them, yet there are more procedural barriers to their removal than to their creation. Graywalls (talk) 16:19, 26 October 2024 (UTC)Reply

I'll check the first ref block and if it is, I'll Afd it. scope_creepTalk 16:40, 26 October 2024 (UTC)Reply
The whole first block is two passing mentions, a couple of YouTube videos and many Discogs-style album listing sites. There is nothing for a BLP. Several of them don't mention him. They are fake. scope_creepTalk 16:49, 26 October 2024 (UTC)Reply

Editor with 1000+ edit count blocked for AI misuse


User:Jeaucques Quœure. See [1]. I do wonder if a WP:CCI-like process for poor AI contributions could be made. Ca talk to me! 13:02, 26 October 2024 (UTC)Reply

Wow, I think that would be a quagmire if we were specifically looking for LLM text, as detection would be slow and ultimately questionable in many instances. We could go through and verify that the info added in those edits is verifiable, but I wouldn’t go beyond that, nor do I think there is a need to go beyond that. — rsjaffe 🗣️ 14:28, 26 October 2024 (UTC)Reply
I checked the last 50 edits, and the problematic edits appear to have been taken care of. Ca talk to me! 14:55, 26 October 2024 (UTC)Reply

Media Request from Suisse Radio (French-speaking)


Hello

A journalist from Swiss public radio (RTS) has contacted Wikimedia CH. He is looking to speak with French-speaking Swiss contributors who fight fake articles written with generative AI. https://en.wikipedia.org/wiki/Wikipedia:WikiProject_AI_Cleanup / https://fr.wikipedia.org/wiki/Discussion_Projet:Observatoire_des_IA.

Here are some English-language articles on the subject: https://www.extremetech.com/internet/people-are-stuffing-wikipedia-with-ai-generated-garbage?utm_source=pocket_saves

https://www.404media.co/the-editors-protecting-wikipedia-from-ai-hoaxes/

Is there a (Swiss) contributor willing to talk about the arrival of AI-generated text/photos on Wikipedia and about your experience? He has to deliver his piece by tomorrow, Tuesday 29 October at 12:00.

Please get in touch with me. I am the head of outreach & communication.

Regards

Kerstin Kerstin Sonnekalb (WMCH) (talk) 08:07, 28 October 2024 (UTC)Reply

Greghenderson2006 created articles


We and Our Neighbors Clubhouse came up high on GPT Zero check, as did a number of other articles created by this editor. Graywalls (talk) 18:59, 30 October 2024 (UTC)Reply

Can you mark the specific section? I think the tagging will be more useful that way.
Unless you're implying the entire article.
Edit: Upon reading it, it probably is the entire article. Symphony Regalia (talk) 16:03, 3 November 2024 (UTC)Reply
@Symphony Regalia Multiple of his articles. He even got caught responding to dispute discussions with AI-generated text. Graywalls (talk) 20:37, 5 November 2024 (UTC)Reply

Images


What's the current perspective on AI-generated images? See Yairipok Thambalnu. Nikkimaria (talk) 03:43, 1 November 2024 (UTC)Reply

See WP:WPAIC/I. I'm inclined to remove this one since it doesn't have a historical basis, but am away from my computer. Thanks, charlotte 👸♥ 03:47, 1 November 2024 (UTC)Reply
Removed it, it really didn't add anything to the article except a generic, anime-style young woman drowning in a river. It is possible that Illustrated Folk Tales of Manipur might have a suitable replacement, if a Meitei speaker wants to go through it. Chaotic Enby (talk · contribs) 14:56, 1 November 2024 (UTC)Reply

Wiki EDU


FULBERT's Research Process and Methodology - FA24 at New York University.

GPT Zero: 100% AI: https://en.wikipedia.org/w/index.php?title=Food_security&diff=1255448969&oldid=1251183250 No disclosure of AI use, yet multiple students' work is showing as 100% AI. Is this supposed to be some kind of Wikipedia experiment to see if we'll catch on? Graywalls (talk) 20:36, 5 November 2024 (UTC)Reply

Besides the GPTZero results, there's also the fact that this was added in a section it had nothing to do with, and doesn't actually make sense in context. In WikiEdu classes, I wouldn't be surprised if there was a certain amount of students using AI even if not told to do it, whether looking for a shortcut or simply not knowing it's a bad idea. It could be good to ask the instructor and staff to explain to students more clearly that they shouldn't use AI to generate paragraphs. Chaotic Enby (talk · contribs) 20:46, 5 November 2024 (UTC)Reply
As @Chaotic Enby suggested, @JANJAY10 and FULBERT:, perhaps you two would like to explain the content not making sense in context and the 100% AI score in GPTZero. Graywalls (talk) 02:01, 6 November 2024 (UTC)Reply
@Graywalls @Chaotic Enby Thank you both for your assistance in this matter, and I appreciate your help with teaching new users about editing Wikipedia. I will speak with @JANJAY10 about this. FULBERT (talk) 02:29, 6 November 2024 (UTC)Reply
Thank you for the feedback. JANJAY10 (talk) 03:35, 6 November 2024 (UTC)Reply

@FULBERT:, we just went over this regarding one of your students. @Ian (Wiki Ed):, I've cleaned up after a different student of yours in Job satisfaction and now another editor MrOllie is having to clean up after them again. When multiple editors have to clean up after WikiEDU student edits multiple times, it is disruptive. Graywalls (talk) 20:52, 7 November 2024 (UTC)Reply

I agree Graywalls, you (and other volunteers) shouldn't have to clean this up. Ping me when you discover the problems and I'll do as much as I can to help. Ian (Wiki Ed) (talk) 21:00, 7 November 2024 (UTC)Reply
this edit at Test anxiety. The AI detection site says "Possible AI paraphrasing detected. We are highly confident this text has been rewritten by AI, an AI paraphraser or AI bypasser." with a 100% score. It talks about tests having been done and things being examined, but I see nothing in the source about test anxiety, academic exams, or anything else of relevance to test anxiety. The source is open access with full PDF access. How does this edit look to you all? @MrOllie, Chaotic Enby, Ian (Wiki Ed), and FULBERT: Graywalls (talk) 02:54, 9 November 2024 (UTC)Reply
Gaining insight into this interaction can aid in creating more potent support plans for academic anxiety management. - This definitely feels LLM generated. Sohom (talk) 02:58, 9 November 2024 (UTC)Reply
Download the PDF from the link that student cited at https://translational-medicine.biomedcentral.com/articles/10.1186/s12967-024-05153-3
Not one mention of "test anxiety" or anything to do with school/academic setting in general. Graywalls (talk) 03:18, 9 November 2024 (UTC)Reply

And this in unemployment from FULBERT's Research Process and Methodology - FA24 - Sect 200 - Thu at New York University again. Graywalls (talk) 19:05, 9 November 2024 (UTC)Reply

@Graywalls @Chaotic Enby @Ian (Wiki Ed) @Sohom @MrOllie This is all very appreciated. I have spoken with my students (in general) in our last class session and will do so again when we meet today to reiterate this. Additionally, I have met individually with several students whose edits were identified above. I have also revised some of my own (saved) verbiage to use when reviewing edits, to account for the concerns you have all helpfully raised above. While I am glad for this support for my students, I am also concerned on a more general level about this behavior happening across all our projects (and workplaces), and think these discussions, along with the verbiage we use to address the issues as instructional opportunities, present us with challenges that go beyond the concerns many of us initially faced with plagiarism alone. Please let me know if you detect this with any of my students again this term, and I thank you all again for your help and guidance here. FULBERT (talk) 17:42, 14 November 2024 (UTC)Reply
Thanks a lot for your help in this, and for speaking with your students about this issue. There are definitely challenges to be addressed, and this is the reason behind this project. Chaotic Enby (talk · contribs) 19:04, 14 November 2024 (UTC)Reply
Solidarity. -- asilvering (talk) 19:55, 14 November 2024 (UTC)Reply

Greetings from German project and a question


Hello, our German project KI ("AI" in German) and Wikipedia is starting now, and your project was featured in a major news outlet here. We will study your experiences carefully. I have a little introductory question. You write: "To identify text written by AI, and verify that they follow Wikipedia's policies. Any unsourced, likely inaccurate claims need to be removed." You write later that a source can be completely hallucinated, or may not support the content – which matches our experience too. As AI texts become better, formal (style-based) identification of "AI-generated" will get more difficult: is there an alternative to checking everything? We have "Sichten" (reviewing all edits from new users) with a backlog of 18,520 edits, waiting up to 55 days – and that checks only against vandalism. If a deeper check becomes necessary, I see problems regarding resources. Use AI itself for it? Thanks for any hints regarding future development. --Wortulo (talk) 17:36, 9 November 2024 (UTC)Reply

@Wortulo Hello! Congratulations for setting up the project – it looks to be very well-organized, and I would be happy if our teams could work together to take inspiration from each other's experiences.
What you mention is indeed an issue we've been keeping in mind. Source checking is indeed something that might become necessary in the future to a larger extent, although for now it's something that we have only been doing in cases where we already have suspicions.
Regarding the new edit queue, while we don't have a direct equivalent here, we do have the Wikipedia:New pages patrol, where source checks of new pages are indeed performed. We do not yet use AI tools to assist us in this, as they are quite unreliable, although we are experimenting with potentially using some in the future. Chaotic Enby (talk · contribs) 17:51, 9 November 2024 (UTC)Reply
Thank you very much for your quick and constructive reply. I have described your project in more detail with the relevant links. We have our first online meeting on 27 November and I will also suggest something like this as an option. So far, it has been organised on the basis of individual initiatives here. I agree with you that tools are still too unreliable for recognising text as "AI-generated" at the moment (false positives when AI has been used to improve style, for example). They will also continue to evolve (perhaps we can help) and will become necessary if things develop as I suspect. Hallucinations will perhaps one day be prevented by AI itself, but the danger that they will be less easily recognised will probably come sooner. An exchange of information would be greatly appreciated. Thank you in advance, and I would be happy about this. Wortulo (talk) 08:47, 10 November 2024 (UTC)Reply
@Wortulo Thanks a lot! Don't hesitate to ask me if you have any more questions. I would also be interested in how the meeting goes, if any points that are brought up could also apply here!
I've been reorganizing the English Wikipedia's project these last few days – if you have any comments or advice, do let me know! Chaotic Enby (talk · contribs) 23:42, 12 November 2024 (UTC)Reply

I will do so. I have seen your new "Resources" page and have linked the website. Allow me to ask about 2 typical examples from November:

Background: when an article has been identified in de:WP, a long discussion often follows, and then a deletion or not (it is necessary to convince an admin to delete). When we start, we want an easy and consistent solution. Do you have any hints? The second question: I am sure you know DeepL for translation and DeepL Write for improving texts (or similar tools). In my experience, such tools often lead to false-positive identification as AI-generated. The differences in formulation will probably decrease, but the hallucination percentage will remain (due to the technology itself). I also have no solution, but what do you think about this? An obligation to declare? Wortulo (talk) 07:47, 13 November 2024 (UTC)Reply

Hi!
Like many maintenance categories, it is a hidden category, and isn't visible from the article page. Regarding the {{llm}} template, it is an alias for {{ai-generated}} – there is no separate LLM-exclusive project, and the essay Wikipedia:Large language models is in the scope of the current project.
Apart from articles that are clearly AI-generated hoaxes (which we can speedily delete), articles often have to be rewritten (as Wikipedia:Articles for deletion is rarely appropriate for cleanup issues), although Wikipedia:Draftification is usually an option for new creations.
Regarding tools like DeepL Write, I wouldn't necessarily call it a "false positive" – rewriting using AI can still bring issues of non-neutral tone and implied synthesis. In the latest discussion (nearly a year ago), there wasn't a consensus to require declaring the use of such tools, although consensus can change. Chaotic Enby (talk · contribs) 12:17, 13 November 2024 (UTC)Reply
Thanks. I am a "DAU" (German for the silliest conceivable user – common shorthand meaning that all explanations must be as comprehensible as possible) ;-) I tried to understand what to do if I were to contribute. The alias {{llm}} is not in your new "Resources" list; when it is used, a message is visible to the reader (with {{ai-generated}} it is not). So it's unclear to me when to use which of the two. You should explain this? And as a DAU I was also unable to find the hidden category at all when editing (is it set on the article or the talk page?). Drużyna coat of arms is another example in your November list where I do not find the hidden category. I have some experience in our Paid editing project. There we have templates with clear text on the discussion page – also connected to a hidden category. It explains in general what happens, and makes it possible to explain the specific issues and then discuss them. I see our adaptation of your good idea (!!!) going in this direction. KISS (Keep it smart and simple) is a goal. Wortulo (talk) 06:53, 14 November 2024 (UTC)Reply
PS: Situational sexual behaviour no longer has the hidden category, but the problem and its solution remain documented. --Wortulo (talk) 07:00, 14 November 2024 (UTC)Reply
Greetings from dewiki! A while ago (and with some LLM help), I wrote a Python script that looks at checksums in ISBN references. I was able to identify 5 articles with hallucinated references. Please note that there can be many other causes for checksum failures in ISBN references besides honest mistakes and transposed digits (clueless publishers, for example). The file can be found at github. There is also a list of articles from the English-language Wikipedia with failed checksums in ISBN references. That list is online at github, too. For example, I have some concerns about the articles Battle of Khosta and Spanish military conspiracy of 1936. I would like to ask someone more familiar with the English language and the subject matter to look into it. Thank you in advance. -- Mathias Schindler (talk) 09:35, 14 November 2024 (UTC)Reply
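For anyone wanting to try the same check by hand, the two ISBN check-digit rules are simple enough to sketch in a few lines. This is a generic re-implementation of the standard ISBN-10/ISBN-13 checksums, not the actual script linked above:

```python
def isbn13_valid(isbn: str) -> bool:
    # ISBN-13: weights alternate 1, 3, 1, 3, ...; the weighted sum
    # must be divisible by 10.
    digits = [int(c) for c in isbn if c.isdigit()]
    if len(digits) != 13:
        return False
    return sum(d * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits)) % 10 == 0

def isbn10_valid(isbn: str) -> bool:
    # ISBN-10: weights run 10 down to 1; "X" (value 10) is allowed only
    # as the final check digit; the weighted sum must be divisible by 11.
    chars = [c for c in isbn.upper() if c.isdigit() or c == "X"]
    if len(chars) != 10 or "X" in chars[:9]:
        return False
    values = [10 if c == "X" else int(c) for c in chars]
    return sum(v * (10 - i) for i, v in enumerate(values)) % 11 == 0

print(isbn13_valid("978-0-7146-3431-9"))  # prints: True
print(isbn10_valid("0714640369"))         # prints: False (fails the checksum)
```

As noted above, a checksum failure only flags a reference for human review; it cannot prove a citation was hallucinated.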
Greetings! Thanks for the link, it would be great to incorporate this script in the New Pages Patrol workflow if possible. I'm wondering if there would also be a way to check if existing ISBNs match the given book title?
Looking at the first example, one of the books cited (The Circassians: A Handbook) does exist, but with a completely different ISBN. The Circassian Genocide appears to exist, but with at least two different ISBNs, both similar to the given one but ending in ...87 or ...94 instead of ...82. The other ISBNs aren't present on the page anymore, with 0714640369 being (wrongly) used to refer to Muslim Resistance to the Tsar: Shamil and the Conquest of Chechnia and Daghestan, which has since been fixed to 9780714634319.
The article definitely reads like it has been AI-written, with the "Aftermath" section trying to emphasize how the event demonstrates such-and-such like a typical ChatGPT conclusion: demonstrating their capability to repel, underscored the difficulty Russia faced, highlighted the effectiveness of Circassian guerrilla tactics... Chaotic Enby (talk · contribs) 11:59, 14 November 2024 (UTC)Reply

Article written based on an AI-generated source


Sykes's nightjar relies heavily on https://animalinformation.com, which appears to be entirely AI-generated (look at the privacy policy, the articles, etc.; see my post on the talk page). I tagged it as using unreliable sources, but I couldn't find a tag for AI-generated sources, so I'm posting a notification here instead. Mrfoogles (talk) 23:37, 9 November 2024 (UTC)Reply

IIRC there used to be a {{AI-generated sources}} but it was deleted as redundant to {{Unreliable sources}}. charlotte 👸♥ 23:44, 9 November 2024 (UTC)Reply
Adding to what Queen of Hearts said, while the template was an article-wide message box, the deletion discussion brought up the alternative of creating an inline tag for that purpose, although that hasn't been done yet as far as I know. Another thing you can do is to report it to the reliable sources noticeboard if you think further discussion is needed. Chaotic Enby (talk · contribs) 23:49, 9 November 2024 (UTC)Reply
Don't think further discussion is needed particularly, thanks for the suggestion though. Do you know if there is a tag for incorrect information? I think in combination with the unreliable sources tag that basically does what is necessary. Mrfoogles (talk) 00:12, 10 November 2024 (UTC)Reply
i would just remove all the information cited to AI-generated sources, and see if you can find anything reliable to expand it back out with. ... sawyer * he/they * talk 00:30, 10 November 2024 (UTC)Reply
AI or not, that site is clearly short of WP:RS standards. Graywalls (talk) 03:02, 10 November 2024 (UTC)Reply
It looks like five other articles use it as a source: Fiordland penguin, Xerotyphlops syriacus, Diploderma flavilabre, Makatea fruit dove and Alpine chipmunk. It should probably be removed from them too, and very likely at least briefly discussed at WP:RSN for future cases. Chaotic Enby (talk · contribs) 03:10, 10 November 2024 (UTC)Reply
I've removed it from Sykes's nightjar, Alpine chipmunk and Makatea fruit dove, and posted it to RSN. And per Graywalls, it's definitely short of WP:RS standards even if it weren't AI-generated. scope_creepTalk 08:15, 13 October 2024 (UTC)Reply
I removed it on Fiordland penguin, Xerotyphlops syriacus and Diploderma flavilabre (where the material cited to it described the lizard as several times longer than it actually is), so all the removals are done now. Flounder fillet (talk) 17:05, 13 November 2024 (UTC)Reply

Searching for AI Commons images in use on Wikipedia


Commons is pretty good at categorising the AI-affected images that get uploaded there, filing them into subcategories of Category:AI-generated images and Category:Upscaling.

Is there an easy way to generate a list of which images in all of those subcategories are currently in use in mainspace here on Wikipedia? (I'm currently using the VisualFileChange script which can highlight global usage of images in a category, but that's across all Wikipedia projects rather than just the English one, and also includes non-mainspace user page and Signpost usage.) Belbury (talk) 14:30, 13 November 2024 (UTC)Reply

I don't know if there's an easy way to do it, it might be possible with a query script but I'll ask more knowledgeable editors on this topic! Chaotic Enby (talk · contribs) 12:00, 14 November 2024 (UTC)Reply
@Belbury @Chaotic Enby One option is Glamorous - put in 'AI-generated images' for the category, search depth of 3 seems reasonable. Select 'Show details' to generate a big table of images and where they're used. Sam Walton (talk) 12:07, 14 November 2024 (UTC)Reply
Thanks, that's very useful! A pity that there's no built-in way to limit the search to just enwiki, but I can work with it. Looks like the version at https://glamtools.toolforge.org/glamorous/ has a slightly cleaner interface for the generated table. Belbury (talk) 12:36, 14 November 2024 (UTC)Reply
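If a scripted alternative to GLAMorous is ever wanted, the same data is available from the Commons API: generator=categorymembers walks the files in a category, and prop=globalusage (with guprop=namespace) lists where each file is used. A rough sketch – the exact shape of the `ns` field and the pagination handling are assumptions worth checking against the live API:

```python
import json
from urllib import parse, request

API = "https://commons.wikimedia.org/w/api.php"

def usage_params(category, cont=None):
    # One page of files in the category, with global usage attached.
    params = {
        "action": "query",
        "format": "json",
        "generator": "categorymembers",
        "gcmtitle": category,       # e.g. "Category:AI-generated images"
        "gcmtype": "file",
        "gcmlimit": "50",
        "prop": "globalusage",
        "guprop": "namespace",
        "gulimit": "500",
    }
    if cont:
        params.update(cont)  # pass back the API's "continue" block to paginate
    return params

def enwiki_mainspace_usage(response):
    # Keep only uses on the English Wikipedia in article space (ns 0).
    hits = {}
    for page in response.get("query", {}).get("pages", {}).values():
        uses = [
            u["title"]
            for u in page.get("globalusage", [])
            if u.get("wiki") == "en.wikipedia.org" and str(u.get("ns")) == "0"
        ]
        if uses:
            hits[page["title"]] = uses
    return hits

# Live query (network access required):
# url = API + "?" + parse.urlencode(usage_params("Category:AI-generated images"))
# with request.urlopen(url) as r:
#     print(enwiki_mainspace_usage(json.load(r)))
```

Unlike GLAMorous's search-depth option, this walks only one category at a time, so subcategories would need to be enumerated separately.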