Wikipedia talk:Using neural network language models on Wikipedia
Not internet connected
As the bot says itself in response to certain questions that might require looking up information via the internet, it is a language model only and is not connected to the internet (as of this writing). One of the most useful things an AI bot could provide for me would be help in finding sources on certain topics, and it already knows a good deal about sources just from its language model input, but it doesn't know everything. Still, it may be useful already in that vein; I asked it for a list of sources for the major figures in medieval French poetry, and it did a good job. Then I refined, asking just for authors from North Carolina (not something one would do in reality, but I did that just to gauge its depth), and it gave me three or four. I consider it one more tool to add to the set of tools I can use to find sources. Mathglot (talk) 04:04, 24 December 2022 (UTC)
Good idea
I think having a guideline against using ChatGPT for factual info, or at least against trusting it without proper sourcing and verification, is a good idea. While it's very cool and possibly very helpful, it also contains known inaccuracies, and it can't cite its sources effectively. It's a very dangerous thing, too: as this technology becomes more widespread, Google results are going to be flooded with information of dubious accuracy. On the other hand, an AI-assisted tool with the proper safeguards and limitations could be very useful for research and article-writing.
For now, though, I think ChatGPT qualifies as "user-submitted database data" and so is unreliable, much like other user-submitted and user-curated data (like citing Wikipedia itself). Super cool piece of tech, though, and very powerful for sure. But a little scary for our civilization's data quality. Definitely something that should be used with caution on Wikipedia. Andre🚐 06:22, 27 December 2022 (UTC)
- @Andrevan Couldn't agree more. On an unrelated note, using AI on collaborative projects is not unheard of; for example, OpenStreetMap uses MapWithAI to quickly add missing roads to the map. Perhaps we can learn from OSM contributors. CactiStaccingCrane (talk) 11:37, 15 January 2023 (UTC)
Translation and foreign sources
I've conversed with ChatGPT in foreign languages, and it's scary good (and responds in the language I type in). One thing I haven't tried is using it instead of DeepL or Google Translate, which I might combine with a request to fix any problems I saw in the original language to make it read more smoothly in English. A fun side project might be something like: "please find sources in English roughly equivalent to these French sources about the Second Empire: A, B, C...". Like CS Crane, I sometimes get back invented sources, but another time, when I was asking about medieval French poetry, I got some legit sources. I might run some tests and report back. Mathglot (talk) 10:03, 14 January 2023 (UTC)
- I should make a new section in the essay about this :) CactiStaccingCrane (talk) 11:39, 15 January 2023 (UTC)
Writing a lead
ChatGPT is very good at sticking to a target length and style. I haven't tried this yet, but it might be interesting to throw a body section from an article at it and ask it to summarize it in a sentence or two, extracting just the most important information; i.e., do what the WP:LEAD is supposed to do. It can also write for a particular target audience, in case an existing lead is too jargony, for example. Mathglot (talk) 10:07, 14 January 2023 (UTC)
- Mathglot, I noticed that without a proper prompt, ChatGPT tends to generalize things too much or give insufficient depth to the topic. In my opinion, ChatGPT is most effective when you take a good but long lead and try to condense it. CactiStaccingCrane (talk) 11:36, 15 January 2023 (UTC)
Assuming that the free ChatGPT version uses GPT-3.5 and Bing Chat uses GPT-4, one should get better results using the latter. The easiest way I have found so far: use the Edge browser with integrated Bing Chat, open the Wikipedia article you want to improve, open the side panel with Bing Chat, and paste the following prompt:
Rewrite the lead section of the current Wikipedia article using wiki markup. The lead section should summarize the most important points of the article and provide context for the reader. Do not add any new information that is not already in the article and do not search the internet. Follow the guidelines on the page https://en.wikipedia.org/wiki/Wikipedia:Manual_of_Style/Lead_section to decide what to include and how to format the lead section.
--DixonD (talk) 16:43, 9 April 2023 (UTC)
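For anyone who would rather drive this kind of prompt through an API than the Bing side panel, below is a minimal sketch assuming the OpenAI Python client; the model name and temperature are illustrative assumptions, and the output needs the same human review discussed below.

<syntaxhighlight lang="python">
# Minimal sketch, assuming the OpenAI Python client (pip install openai).
# The model name is a placeholder; the output is a draft for human review.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

INSTRUCTIONS = (
    "Rewrite the lead section of the following Wikipedia article using wiki markup. "
    "The lead section should summarize the most important points of the article and "
    "provide context for the reader. Do not add any new information that is not "
    "already in the article. Follow the guidelines at "
    "https://en.wikipedia.org/wiki/Wikipedia:Manual_of_Style/Lead_section."
)

def draft_lead(article_wikitext: str) -> str:
    """Return a draft lead section; every claim still needs manual checking."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; use whatever model you have access to
        messages=[
            {"role": "system", "content": INSTRUCTIONS},
            {"role": "user", "content": article_wikitext},
        ],
        temperature=0.2,  # low temperature for more conservative summarization
    )
    return response.choices[0].message.content
</syntaxhighlight>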
- @DixonD: can you please show some examples of what that prompt has produced, ideally for articles where you noticed a deficient lead, used that prompt, reviewed the output, and edited in the proposed revision without changes? Sandizer (talk) 04:28, 10 April 2023 (UTC)
- Here is an example. The original article is Shevchenko National Prize, and the lead section without any modifications that Bing Chat has produced can be found here: User:DixonD/sandbox. The section still requires some editing before modifying the actual article, but I would say it is already a helpful tool. DixonD (talk) 07:52, 10 April 2023 (UTC)
- @DixonD: that looks good, except Oleksandr Honchar is a disambiguation page, and should be Oles Honchar. Sandizer (talk) 08:02, 10 April 2023 (UTC)
List of uses at Wikipedia
Unless new policy clamps down on it, it's inevitable that usage of ChatGPT will increase at Wikipedia. (At least, so long as the free model is available, which may not be for long.) Having a place to record instances of use at Wikipedia would be helpful for a variety of reasons, not least of which would be to provide some real data to examine if and when policies and guidelines start to be written about its use here. It may also inform construction of this page to have a variety of examples coming from different editors, who bring their unique backgrounds and experience to bear, and may pose questions that editors of this page hadn't considered.
How would you feel about creating a subpage, something like /List of uses of ChatGPT at Wikipedia, with a list-article-like format providing a link and maybe a few words or a sentence of description? Here's one entry to start you off:
- WP:Ref Desk – user question about legal basis for claims of illegal occupation of the West Bank by Israel
Some of the conventions or templates of glossary pages or list articles might be helpful to look at. Cheers, Mathglot (talk) 01:20, 16 January 2023 (UTC)
- @Mathglot Feel free to do so :) CactiStaccingCrane 10:05, 16 January 2023 (UTC)
- Created. Mathglot (talk) 10:36, 29 March 2023 (UTC)
Meta or mediawiki
This topic transcends just en-wiki (see above). Not sure if you hang out at MediaWiki or meta: much, but I'm starting to think that a parallel page should be raised there, because this affects every project; most directly, every one in which ChatGPT knows the language, but actually all of them, as someone could extract a GPT response and translate it into Tok Pisin or their favorite target language. I'm not super-experienced in either of those projects, but I have put my toe in the waters, so if you want to jump in, I think you shouldn't hesitate. I can collaborate on it with you if needed, although I'm pretty oversubscribed at the moment. Guessing that meta: is the place to start. Mathglot (talk) 01:34, 16 January 2023 (UTC)
- Based on my tests, I found that ChatGPT is extremely crappy at writing in a foreign language, but it is pretty decent at reading and interpreting foreign-language sources. This is perhaps another concern we should be worrying about: people spamming small wikis with crap AI-generated translations. CactiStaccingCrane 10:07, 16 January 2023 (UTC)
- Yes, and we should link or cross-post on en-wiki at WP:TRANSLATION and maybe say something at WP:PNT, and probably from Meta's translation project (will link later). Mathglot (talk) 10:10, 16 January 2023 (UTC)
- Mathglot I've made a stub at meta:Using neural network language models. Feel free to expand on it as I have very limited knowledge on the use of AI on Wikimedia. CactiStaccingCrane 10:26, 16 January 2023 (UTC)
Page at WP:Artificial intelligence
FYI, there is a page at Wikipedia:Artificial intelligence with a wider lens on AI efforts on Wikipedia/Wikidata, in case folks are interested. - Fuzheado | Talk 17:50, 20 January 2023 (UTC)
Copyvio?
Apologies if this has already been discussed, but do obvious text dumps from ChatGPT need to be revdelled as copyvio? I just removed this nonsense from Jaroslav Bašta, which looks more than a bit LLMish. – filelakeshoe (t / c) 🐱 09:58, 30 January 2023 (UTC)
- @Filelakeshoe No one knows really, not even OpenAI, and that's extremely scary. CactiStaccingCrane 14:16, 1 February 2023 (UTC)
- Funniest thing I ever read on Wikipedia. DFlhb (talk) 14:31, 1 February 2023 (UTC)
- @DFlhb: We should name this guideline WP:BASTOPIA. 〜 Festucalex • talk 20:09, 6 July 2023 (UTC)
Good stuff
I'm beginning to like this page more than WP:LLM, thanks to its simplicity. This could be a proposed policy or guideline, and the transcripts page could be transformed into a "how-to" proposal (since people will use LLMs, they should know how to use them effectively and with the least risk). DFlhb (talk) 14:11, 31 January 2023 (UTC)
My thoughts on this
I did a lot of thinking about this, which can be found here.
Feel free to use what I wrote for the purpose of policy creation but give credit somewhere on the policy page that is created.
Keep in mind that not everything I typed out applies right now.
Copyvio paraphrasing?
One thing that occurred to me which doesn't seem to have been discussed is using ChatGPT etc. to paraphrase discovered copyright violations instead of deleting them outright. I don't have any experience dealing with copyvios, but it seems like such a shame when lots of important content has to be deleted, so I'm wondering what people who do have that experience think about the possibility. Sandizer (talk) 06:57, 20 February 2023 (UTC)
Quoting the project page: "Summarizing a reliable source. This is inherently risky, due to the likelihood of an LLM introducing original research or bias that was not present in the source, as well as the risk that the summary may be an excessively close paraphrase, which would constitute plagiarism."
Are there actual examples of introducing OR or bias when being asked to summarize? Or paraphrasing excessively closely? Sandizer (talk) 08:10, 24 February 2023 (UTC)
Tables and structured data formats
Unfortunately, it turns out that ChatGPT and other LLMs are also bad at interpreting and creating tables, even when directly given the schema. I wouldn't classify something like this as "lesser risk." Gnomingstuff (talk) 07:54, 5 March 2023 (UTC)
- Thanks, I added your link to the relevant paragraph. IMHO, he might have set the temperature too high (how is it even set in the ChatGPT interface? I use nat.dev, where there is an explicit field). Ain92 (talk) 23:41, 26 March 2023 (UTC)
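For what it's worth, the ChatGPT web interface does not expose a temperature control at all; the parameter can only be set through the API or playground-style frontends such as nat.dev. A minimal sketch, assuming the OpenAI Python client, with the model name as a placeholder:

<syntaxhighlight lang="python">
# Minimal sketch: temperature=0 makes the model as deterministic as possible,
# which tends to help with structured output such as tables.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "Convert this CSV to a MediaWiki wikitable:\nname,year\nAlice,1999\nBob,2004",
    }],
    temperature=0,
)
print(response.choices[0].message.content)
</syntaxhighlight>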
Reading templates and modules
I've noticed that, although ChatGPT and GPT-4 can't really be trusted to create templates, they are incredibly good at reading them. I've gone on a bit of a TemplateData creation spree recently, and I've found that ChatGPT and GPT-4 are both very helpful for understanding more convoluted templates (although GPT-4 is a bit better); you can dump the template's wikitext and ask for aliases, brief descriptions, and types of each parameter, and it'll reliably give you a mostly accurate breakdown. It's also much easier to double-check GPT's analysis against corresponding pieces of the template than it is to manually read through the entire template. {{Lemondoge|Talk|Contributions}} 17:43, 29 March 2023 (UTC)
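As a concrete illustration of that workflow, here is a minimal sketch assuming the requests library; the prompt wording is only an example, and the last step is a paste into whatever chat interface you use rather than a real helper function.

<syntaxhighlight lang="python">
# Minimal sketch, assuming the `requests` library (pip install requests).
import requests

def fetch_template_wikitext(title: str) -> str:
    """Fetch the raw wikitext of a template via the MediaWiki API."""
    r = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "parse",
            "page": title,
            "prop": "wikitext",
            "format": "json",
            "formatversion": 2,
        },
        headers={"User-Agent": "TemplateDataHelper/0.1 (example; not a real tool)"},
    )
    r.raise_for_status()
    return r.json()["parse"]["wikitext"]

wikitext = fetch_template_wikitext("Template:Infobox person")
prompt = (
    "Read the following template wikitext and list every parameter with its "
    "aliases, a one-line description, and a TemplateData type. Flag anything "
    "you are unsure about so I can check it against the source:\n\n" + wikitext
)
# Paste `prompt` into ChatGPT/GPT-4 and double-check the answer against the wikitext.
</syntaxhighlight>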
"Micro-hallucinations"
There are a lot of stories of rather large-scale "hallucinations" (lying/fiction) on the part of ChatGPT, etc., but it's become clear to me, just from experimenting a bit, that every single alleged fact, no matter how trivial or plausible-looking, has to be independently verified. I asked the current version of ChatGPT at https://chat.openai.com/ to simply generate a timeline of Irish history (i.e., do nothing but a rote summarization of well-established facts it can get from a zillion reliable sources), and it produced the following line item:
- 1014 CE: Battle of Clontarf in Ireland, in which High King Brian Boru defeats a Viking army and secures his rule over Ireland
That's patent nonsense. Brian Boru and his heirs died in the Battle of Clontarf, though his army was victorious. In the larger timeline, it also got several dates wrong (off by as much as 5 or so years).
We need to be really clear that nothing an AI chatbot says can be relied upon without independent human fact-checking. — SMcCandlish ☏ ¢ 😼 05:00, 6 May 2023 (UTC)
Opportunity for summarizing a source
There are valid risks from using an LLM, and they are described very well here. However, I think LLMs have the potential to summarize a reliable source much better than a human, and to adhere to Wikipedia's policies better than a human. The key is to build the prompt correctly, providing a lot of guidance on how to craft the answer to meet Wikipedia's policies. An enterprising Wikipedian with strong prompt-engineering skills could build a robust prompt. It could then be recommended to editors as a standard prompt for summarizing reliable sources.
ChatGPT may not yet be up to the task, but these LLMs are developing rapidly. It won't be long before they are better at summarizing than humans. We humans have our own biases and skill levels, and it is clear that human bias is a big problem with human-generated content on Wikipedia. MensaGlobetrotter (talk) 13:30, 26 July 2023 (UTC)
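To make the idea concrete, here is one sketch of what such a policy-aware prompt template might look like; the wording is illustrative, not a vetted standard prompt.

<syntaxhighlight lang="python">
# Sketch of a policy-aware summarization prompt; the constraints mirror the
# risks discussed on this page and would need community review before being
# recommended as any kind of "standard prompt".
SUMMARIZE_PROMPT = """\
Summarize the source text below for use in a Wikipedia article.
Constraints:
- Use only facts stated in the source; do not add outside knowledge or inference.
- Write in a neutral, encyclopedic tone.
- Do not copy or closely paraphrase sentences from the source.
- If the source is ambiguous on a point, omit that point rather than guess.
- Keep the summary under {max_words} words.

Source text:
{source_text}
"""

def build_prompt(source_text: str, max_words: int = 150) -> str:
    return SUMMARIZE_PROMPT.format(source_text=source_text, max_words=max_words)
</syntaxhighlight>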
Can I use an AI to modify an existing image?
Here's the question in a nutshell. I have a draft underway at Draft:Philip Banks (The Fresh Prince of Bel-Air). I'd like to use this photograph of the actor (on Commons under a CC-BY-SA-2.0 license that allows modification), as it much more closely resembles his age while portraying the character in the series. However, I have used some AI magic to remove the cigarette to create the image below. Is this a permissible use of AI for this image? BD2412 T 02:54, 28 August 2023 (UTC)
- @BD2412: From a legal perspective, since the image is licensed via CC BY-SA 2.0, your modification is fine if your change is clearly noted, you attribute the original, and you release your edit under the CC BY-SA 2.0 license. The real question (IMHO) is whether this is 'cool' by the ethical/policy standards of Wikipedia. I have no hard answer to that. Right now, I think this would violate the spirit of the policies laid out here, namely the bit about images "not be[ing] changed in ways that materially mislead the viewer." (The fact that you edited out a cigarette could also raise some WP:NOTCENSORED and WP:NPOV questions, even though it sounds like you did this for aesthetic reasons rather than political ones, etc.) Either way, this brings up some good questions about how far modifications to photographs can go.--Gen. Quon[Talk] 13:59, 28 August 2023 (UTC)
- With respect to the policies, I do not think this is quite so clear. The image is not "misleading", in the sense that it accurately depicts the usual appearance of the character (although the actor, James Avery, may have been at least an occasional smoker, most pictures of Avery do not show him smoking; moreover, the fictional character, Philip Banks, was never depicted as a smoker, as far as I recall). The article for which the image is proposed to be used is on the character rather than the actor. BD2412 T 15:02, 28 August 2023 (UTC)
- @BD2412: This doesn't answer your original question, and I do not have an answer to your original question, but I noticed that the edited image is much grainier than the original. The edit added a lot of noise to the whole image. If you are using Stable Diffusion, you can use inpainting to change only a selected area of the photo and leave the rest alone (see the sketch at the end of this thread).--Orgullomoore (talk) 07:28, 24 September 2023 (UTC)
- I see no reason for AI to make a difference to whether this picture is allowed or not. The original copyright is cleared. The derivative version in this case doesn't add another uncleared copyright. AI also has nothing to do with deciding whether the picture properly represents the subject or whether it's better than some other picture. If it deceives, it deceives regardless of whether it was retouched or not. If the retouching deceives, it deceives, regardless of whether it was retouched with Photoshop or old-fashioned darkroom magic. Similarly, if it's poorer than some other, it's poor. Whom to blame? Doesn't matter; the judgement call is about how well it represents the subject. And yes, it's better to have good retouching than bad, and if someone does a better retouch, then better is better regardless. Jim.henderson (talk) 01:01, 26 September 2023 (UTC)
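For illustration, below is a minimal sketch of the inpainting approach Orgullomoore describes, assuming the Hugging Face diffusers library and a CUDA GPU; the model ID, file names, and replacement prompt are placeholders. Only the white area of the mask is regenerated, so the rest of the photograph keeps its original grain.

<syntaxhighlight lang="python">
# Minimal sketch, assuming `diffusers`, `torch`, and `Pillow` are installed.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # placeholder inpainting model
    torch_dtype=torch.float16,
).to("cuda")

image = Image.open("original.png").convert("RGB")
mask = Image.open("mask.png").convert("RGB")  # white = area to repaint

result = pipe(
    prompt="a man's hand, photograph",  # describe what should fill the masked area
    image=image,
    mask_image=mask,
).images[0]
result.save("retouched.png")
</syntaxhighlight>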
Is this AI?
I ran it through ZeroGPT, but the percentage output depends on which part of it I paste in, ranging from 40% to 100%. Something doesn't seem quite right, because this was suddenly dumped into an article by a one-hour-old, likely single-purpose account: Special:Diff/1177551781. I initially reverted it, but undid myself: it came up 90% copyvio, but that was a false alarm caused by pre-existing content matched against a Wikipedia mirror. Graywalls (talk) 04:03, 28 September 2023 (UTC)
"Wikipedia:ChatGPT" listed at Redirects for discussion
editThe redirect Wikipedia:ChatGPT has been listed at redirects for discussion to determine whether its use and function meets the redirect guidelines. Readers of this page are welcome to comment on this redirect at Wikipedia:Redirects for discussion/Log/2023 October 2 § Wikipedia:ChatGPT until a consensus is reached. - CHAMPION (talk) (contributions) (logs) 10:22, 2 October 2023 (UTC)
Systematically evaluated prompts for LLM proofreading/copyediting
@CactiStaccingCrane Has anybody been evaluating and curating prompts for proofreading/copyediting Wikipedia articles?
I have been using the Earth and Antarctica article lead sections for systematic testing and evaluation of proofreading/copyediting prompts included with the free open-source software https://copyaid.it.
It occurs to me that some folks at Wikipedia might be interested in collaborating on curating Wikipedia article test cases, curating proofreading/copyediting prompts, and documenting their performance. These prompts can be used via copy-and-paste, or with CopyAid, or maybe with a new MediaWiki feature that enables server-side proofreading/copyediting.
The (poorly documented) testing software is at https://gitlab.com/castedo/copyblaster and the test cases and results are saved at https://gitlab.com/castedo/copyblast. Castedo (talk) 21:37, 10 April 2024 (UTC)
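For anyone curious what a minimal evaluation harness along these lines could look like, here is a sketch; the copyedit helper, file layout, and similarity scoring are assumptions for illustration, not how copyblaster actually works.

<syntaxhighlight lang="python">
# Minimal sketch of a prompt-evaluation loop over curated test cases.
import difflib
import pathlib

def copyedit(prompt: str, text: str) -> str:
    """Placeholder: send prompt + text to your LLM of choice, return its output."""
    raise NotImplementedError

def score(candidate: str, reference: str) -> float:
    """Crude similarity between model output and a hand-edited reference."""
    return difflib.SequenceMatcher(None, candidate, reference).ratio()

# Assumed layout: prompts/*.txt hold prompts; cases/*.txt hold inputs,
# each with a matching cases/*.ref hand-edited reference.
prompts = {p.stem: p.read_text() for p in pathlib.Path("prompts").glob("*.txt")}
cases = [(c.read_text(), c.with_suffix(".ref").read_text())
         for c in sorted(pathlib.Path("cases").glob("*.txt"))]

for name, prompt in prompts.items():
    scores = [score(copyedit(prompt, text), ref) for text, ref in cases]
    print(f"{name}: mean similarity {sum(scores) / len(scores):.3f}")
</syntaxhighlight>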
AI-written citations?
I was adding an event to an article (Special:Diff/1220193358) when I noticed that the article I was reading as a source, and planning to cite, was tagged as being written by AI on the news company's website. I've looked around a bit, skimmed Wikipedia:Using neural network language models on Wikipedia, WP:LLM, WP:AI, WP:RS and this Wikimedia post, but couldn't find anything directly addressing whether it's OK to cite articles written by AI. The closest I could find is here on WP:RS, tentatively saying "ML generation in itself does not necessarily disqualify a source that is properly checked by the person using it", and here on WP:LLM, which clearly states "LLMs do not follow Wikipedia's policies on verifiability and reliable sourcing.", but in a slightly different context, so I'm getting mixed signals. I also asked Copilot and GPT-3.5, which both said AI-written citations are neither explicitly banned nor permitted, with varying levels of vaguery.
For my specific example, I submitted it but put "(AI)" after the name. I wanted to raise this more broadly, though, because I'm not sure what to do. My proposal is what I did: use them, but tag them as AI in the link. I'm curious to hear other suggestions.
I've put this on the talk pages in Wikipedia:Using neural network language models on Wikipedia and Wikipedia:Reliable sources. SqueakSquawk4 (talk) 11:35, 22 April 2024 (UTC)
Worst idea ever
This is why I keep paper books 2600:1005:B072:3443:DD19:B3B5:A6C7:D201 (talk) 21:40, 9 May 2024 (UTC)