
DYK bot

Can someone make a bot to automatically update Wikipedia:List of Wikipedians by number of DYKs, just like Wikipedia:List of Wikipedians by featured list nominations and Wikipedia:List of Wikipedians by featured article nominations? Thanks. ~~ CAPTAIN MEDUSAtalk 18:37, 15 June 2020 (UTC)

This is something I'd be interested to work on, if it would be useful. Do you know how this list is currently updated? Pi (Talk to me!) 06:08, 20 June 2020 (UTC)
Manually, by its participants.
As an aside, two users on that list are combining totals from old and new accounts. I'm not on the list because I never bothered, but I would also be combining from two accounts. Is there a way for your proposed bot to handle this? The Squirrel Conspiracy (talk) 17:49, 21 June 2020 (UTC)
That shouldn't be a problem, I'd just have to put the list somewhere of all the accounts that needed adding up. I'll look into the feasibility of it tomorrow. Pi (Talk to me!) 05:28, 22 June 2020 (UTC)

  Coding... - I'm just making the script to get the data. Once that's working I'll look at making the bot to update the table. Pi (Talk to me!) 17:53, 22 June 2020 (UTC)

@CAPTAIN MEDUSA: I've made some progress with getting the list of nominations, and getting the article creators is relatively simple, but I'm not sure where to find the data for who is credited with the expansion of the article or promotion to GA. Does DYK as a process have a policy on this, and is the data recorded anywhere? Pi (Talk to me!) 22:54, 22 June 2020 (UTC)
Pi, here but you have to manually search for a user.
You can also find the user by going through nominations. It would usually say, Created, Improved to GA, Moved to main space, 5x expanded, and nominated by..... ~~ CAPTAIN MEDUSAtalk 23:32, 22 June 2020 (UTC)
This is coming along OK, I should have a prototype in a couple of days. Pi (Talk to me!) 04:32, 23 June 2020 (UTC)
Category:DYK/Nominations: this category is quite useful. ~~ CAPTAIN MEDUSAtalk 12:26, 23 June 2020 (UTC)

category watch and notification bot

Hi. Is there a bot which can monitor a category, such as the category that {{helpme}} requests are added to, and leave notifications of each new addition on-wiki at a specified target page, such as my personal talk page? Just checking as I am looking for something similar for AfC WikiProject, and I suspect that it might be already implemented. Prior discussion one, prior discussion two. Can look at implementing it in Python or Nodejs or Perl, but I hope that perhaps there is an existing bot for such a task. Thank you in advance for your advice. --Gryllida (talk) 05:13, 29 June 2020 (UTC)

@Gryllida: This might end up with you getting a lot of talk page spam. Worth bearing that in mind. Naypta ☺ | ✉ talk page | 09:11, 29 June 2020 (UTC)
This is for testing of the bot - the spam will eventually need to track {{article start request}} tracking category, and go to an AfC help desk wiki page. As this template is currently not in use, testing out a bot on existing and often used {{helpme}} sounds easier. Gryllida (talk) 20:56, 29 June 2020 (UTC)
I made a tool that kind of does this but instead of posting to a page it emails. It monitors backlinks for a given page, generates a list of the backlinks (configurable, e.g. transclusions to userspace only), checks again the next day or however often you like, diffs the two lists and emails any differences. It can send additions, subtractions or both. It correctly compensates for the bug where, when a vandal briefly blanks a page, the backlink database takes a few days to re-register it. It is a single-file GNU awk script and will work on any unix machine, with cron. on git. -- GreenC 21:16, 29 June 2020 (UTC)
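A rough sketch of the kind of on-wiki notifier being requested, assuming pywikibot; the state file and target page below are placeholders, and the category is the tracking category populated by {{helpme}}:

<syntaxhighlight lang="python">
# Compare the current members of a tracking category against those seen on
# the previous run, and append any new ones to a notification page.
import json
import pywikibot

site = pywikibot.Site('en', 'wikipedia')
STATE_FILE = 'seen_members.json'                    # placeholder local state file
CATEGORY = 'Category:Wikipedians looking for help'  # populated by {{helpme}}
TARGET = 'User:Example/helpme notifications'        # placeholder target page

def run_once():
    try:
        with open(STATE_FILE) as f:
            seen = set(json.load(f))
    except FileNotFoundError:
        seen = set()
    members = {page.title() for page in pywikibot.Category(site, CATEGORY).articles()}
    new = sorted(members - seen)
    if new:
        target = pywikibot.Page(site, TARGET)
        target.text += '\n' + '\n'.join('* [[%s]] ~~~~~' % title for title in new)
        target.save(summary='Report %d new help request(s)' % len(new))
    with open(STATE_FILE, 'w') as f:
        json.dump(sorted(members), f)
</syntaxhighlight>

This could run from cron (as with the awk tool above) or as a continuous loop watching the category.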

Request for bot for medical templates to move ICD data to Wikidata, and remove from view

Hi all, hope you are well in this crazy time period. I am seeking a bot that will:

  1. Go through all templates in this category: Category:Disease_and_disorder_templates
  2. In the Wikidata entries associated with the templates insert the relevant ICD9 and ICD10 codes
  3. Then, remove the associated data from the template header

This is per the discussion here: Wikipedia_talk:WikiProject_Medicine#Proposal_to_remove_ICD_codes_from_templates, essentially the reasons being that they clutter the titles and don't help editors.

An example of this would be here:

The codes are: "C44.L40–L68/D23.L15–49, 173/216"; each is linked to a respective ICD9 and ICD10 category; Wikidata would need to be updated and then these removed from the title. We did this a few years ago within the Anatomy space; ping to Nihlus who was very helpful then. Please let me know if there's any additional information that I can provide to help. Many thanks, --Tom (LT) (talk) 23:43, 15 July 2020 (UTC)

Tom (LT), for clarification, is the slash part of the same ICD10 code?
eg if Template:Tumors of skin appendages were being added to wikidata, would it be:
  • ICD-10 = C44.L40–L68/D23.L15–49
  • ICD-9 = 173/216
Similarly, is it the "ICD-10" property that's desired, or "ICD-10-CM" / "ICD-10-PCS"? ProcrastinatingReader (talk) 00:54, 16 July 2020 (UTC)
The slash represents two ICD ranges (C44.L40-68, and D23.L15-49), but beyond that I don't know. Ping to some editors who might though: DePiep, Tobias1984, Was a bee. --Tom (LT) (talk) 02:10, 16 July 2020 (UTC)
More importantly: as WhatamIdoing noted, template {{Medical resources}} already reads ICDs from WD, with an option to override locally (at enwiki). Why is this not considered? -DePiep (talk) 07:18, 16 July 2020 (UTC)
@DePiep correct me if I am wrong, but {{Medical resources}} is for use on articles. What I am requesting is for the relevant wikidata for TEMPLATES to be moved and the template titles stripped. How do you propose we use the medical resources template in this instance to remove the template ICD codes? --Tom (LT) (talk) 07:44, 16 July 2020 (UTC)
I still don't understand this proposal. Is it:
1. Add the codes to Wikidata (but to which WD item exactly? C44.L40–L68/D23.L15–49, 173/216 must be added to skin cancer (Q192102), right? and/or to WD-items listed in this navbox, like papillary eccrine adenoma (Q7132983)?)
2. Remove code from the enwiki template titlebar;
3. How should that WD property be used (shown) in enwiki (infobox? {{Medical resources}}), in which articles/templates? -DePiep (talk) 08:29, 16 July 2020 (UTC)
I'm thinking this: (1) Add the range codes to the relevant wikidata item for each template so they are preserved (2) remove the code, and (3) remove them from view completely. There was consensus and no objections at WP:MED when I proposed this. They don't (especially the older codes) contribute to navigation or organisation of the navboxes in any meaningful way. --Tom (LT) (talk) 08:52, 16 July 2020 (UTC)
That's for example here Template:Tumors of skin appendages (Q20346485) then. But I do have the impression that in WD it should be tied to the navbox topic (skin cancer (Q192102)), not the navbox template. Maybe WD people can make a suggestion re this. I'll leave it here. -DePiep (talk) 09:31, 16 July 2020 (UTC)
DePiep I forgot about the bot request process on Wikidata - thanks for reminding me, it is the appropriate venue for me to go first, which I've now done: Wikidata:Wikidata:Bot_requests#Move_ICD_codes_on_medical_navboxes_to_wikidata. --Tom (LT) (talk) 03:14, 17 July 2020 (UTC)

Please consider this request to be suspended / closed until I get the Wikidata component sorted. Many thanks --Tom (LT) (talk) 03:14, 17 July 2020 (UTC)

I think it might be easier to have one bot to do the lot, filing BRFAs both here and at Wikidata for approval to run. It's not really much more work to do both tasks with the same bot in one clean sweep. ProcrastinatingReader (talk) 12:39, 21 July 2020 (UTC)
If that is not too difficult, that does sound easier. --Tom (LT) (talk) 20:21, 21 July 2020 (UTC)
I've done the Wikidata part (summary). I'll do the remaining Wikipedia part (removing links) semi-manually with AWB. --Was a bee (talk) 14:41, 13 August 2020 (UTC)
  Done About 400 data items were exported to Wikidata. I've moved all the IDs I could find; only some CPT links from templates[1] haven't been processed, because there is no corresponding property at Wikidata. (I think moving them to each template's Wikipedia talk page would be a good option.) If anything is left undone, notify me and I'll process it.--08:53, 14 August 2020 (UTC)
Many thanks! --Tom (LT) (talk) 07:06, 17 August 2020 (UTC)

Template:Date is used directly in articles and shouldn't be

Template:Date is supposed to be used only in templates, but there are more than a few uses in articles. A simple "subst" won't work, as a significant portion of the uses are inside <ref> tags, where subst does not work.

A bot to process these would be appreciated. --Izno (talk) 00:15, 4 July 2020 (UTC)

I am not sure that there would be consensus for such edits. I have wondered aloud on the template's talk page whether that guidance is valid. I did not get a satisfactory answer. – Jonesey95 (talk) 01:18, 4 July 2020 (UTC)
Jonesey95, agreed. I wonder whether the template might be potentially useful for integration with Wikidata or similar applications. I'd want to see matters like that explored before mass-substituting. {{u|Sdkb}}talk 05:37, 7 July 2020 (UTC)

Amalthea (bot)

Since this bot is down, there is a request to replace one of its functions at Wikipedia:Bots/Requests for approval/ProcBot 3. However, there is a second task it does: updating Wikipedia:Sockpuppet investigations/Cases/Overview. We may need someone to create a bot to fill that function. Ping Amalthea, ProcrastinatingReader and Xaosflux --- C&C (Coffeeandcrumbs) 14:17, 22 July 2020 (UTC)

Doesn't DeltaQuadBot have similar functionality?  Majavah talk · edits 14:21, 22 July 2020 (UTC)
Looks like it's already running anyway, at User:AmandaNP/SPI case list. Should be possible to just swap it out with the existing one not being updated? Ping @AmandaNP to see if she's okay with that / has any comments? ProcrastinatingReader (talk) 09:38, 23 July 2020 (UTC)
It should already be transcluding to the main SPI page. My bot has always been the backup until the other one is alive + I use for my personal formatting. I see no need to deviate from that. -- Amanda (aka DQ) 20:47, 23 July 2020 (UTC)

Automatically format TV run dates

Hey geniuses, I was looking at this version of Bigg Boss Tamil 3 and noted that

| first_aired          = 23 June 2019
| last_aired           = 6 October 2019

was problematic, because these dates should be properly formatted for Template:Infobox television. So I wondered if there was a bot that could look at these parameters, then look to see if there is one of the {{Use DMY dates}} or {{Use MDY dates}} templates on the page, and adjust accordingly, with a result of:

| first_aired          = {{Start date|df=y|2019|06|23}}
| last_aired           = {{End date|df=y|2019|10|06}}

or

| first_aired          = {{Start date|2019|06|23}}
| last_aired           = {{End date|2019|10|06}}

Depending on whatever date format it finds.

Also, could this be incorporated into an existing bot? Don't we have maintenance bots that could be looking for stuff like this?

Thanks! Cyphoidbomb (talk) 18:44, 6 July 2020 (UTC)
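A rough sketch of the conversion being asked for (not the implementation that was eventually approved below); it assumes the raw value is in one of a few common formats and uses the article's {{Use dmy dates}}/{{Use DMY dates}} tag to decide whether to add |df=y:

<syntaxhighlight lang="python">
# Rewrite a raw |first_aired= / |last_aired= value as a {{Start date}} or
# {{End date}} call, leaving anything unparseable untouched.
import re
from datetime import datetime

def templatize_date(raw_date, page_text, template='Start date'):
    for fmt in ('%d %B %Y', '%B %d, %Y', '%Y-%m-%d'):
        try:
            parsed = datetime.strptime(raw_date.strip(), fmt)
            break
        except ValueError:
            continue
    else:
        return raw_date  # leave values we can't parse for a human to check
    dmy = re.search(r'\{\{\s*[Uu]se (dmy|DMY) dates', page_text) is not None
    df = '|df=y' if dmy else ''
    return '{{%s%s|%d|%02d|%02d}}' % (template, df, parsed.year, parsed.month, parsed.day)

# templatize_date('23 June 2019', '{{Use dmy dates|date=June 2020}} ...')
# -> '{{Start date|df=y|2019|06|23}}'
</syntaxhighlight>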

@Primefac: since using {{start date}} and {{end date}} is advised by Template:Infobox television, for a couple of good reasons, would there be a need for a wider discussion for consensus before filing a BRFA for this? ProcrastinatingReader (talk) 18:37, 17 July 2020 (UTC)
Not Primefac, but looks clearly uncontroversial to me.   Doing... Will file a BRFA for this soon. SD0001 (talk) 20:35, 17 July 2020 (UTC)
SD0001, I've already made up a bot for this, just not sure if it's eligible for BRFA. If you're already finished coding it as well, feel free to just file yours, since you're more familiar with the bot approval process. ProcrastinatingReader (talk) 20:41, 17 July 2020 (UTC)
@ProcrastinatingReader: Well in fact, I have only done a data collection step and saw there are about 17000 affected pages. I didn't write any code for making changes yet (and that does seem a bit tricky), so if you're done with that, please go ahead. SD0001 (talk) 20:51, 17 July 2020 (UTC)
SD0001, assuming my script is correct (it looks to be making the correct replacements, and ignoring when it can't make sense of the data) I believe I've got the changes part down. I've looked at ~75 replacements locally and it made the correct one for all.
But for the data collection, I got a result below 17k (though only accounting for first_aired and last_aired currently, and not other poorly formulated date params). Turns out, after asking in #-discovery, Special:Search doesn't allow regex lookaheads, and the engine messes up with a not after a star, so I had to use a simpler query to fetch results which are possibly outdated and then do a better check in my script with a lookahead. This does work, but it generates too many false positives initially (also, I got about 1k needing update, not 17k), which isn't a problem since I check fetches locally, but it is annoying. I wanted to use the lookahead to ignore results which aren't already using a template as the value. Curious what search query you used? ProcrastinatingReader (talk) 21:37, 17 July 2020 (UTC)
@ProcrastinatingReader: Yes, search doesn't support lookaheads, and also it can display only the first 10k results (via either the UI or the API). I didn't use a search query. I used my self-written bot framework to load all 48,000 articles with the template, parse the templates using my template parser, and pass the values in the first_aired and last_aired fields to the JS Date() function, which would be able to make sense of raw dates but not the ones within a template. There seem to be 17000 articles where JS Date() is able to make sense of the dates, which implies they're all raw dates that need to be templatized. SD0001 (talk) 05:25, 18 July 2020 (UTC)
If it would help, a tracking category can always be added to the infobox code. Gonnym (talk) 06:54, 18 July 2020 (UTC)
I'm not sure how extra tracking cats work, but they may be helpful? I've run a manual search along the lines suggested above and got about 20-25k across both templates for both applicable params. Quite a bit larger than Special:Search. ProcrastinatingReader (talk) 23:39, 19 July 2020 (UTC)
I would set up a category along the lines of {{#ifexpr: {{Str find|{{lc:{{{first_aired|}}}}}|start date}} > 1 | [[Category:Pages using infobox television with nonstandard dates]]}} with a similar tracker for end date as well. Primefac (talk) 00:40, 20 July 2020 (UTC)
@Primefac: I might be testing it incorrectly, but that doesn't seem to work for me. Maybe it has to do with the fact that "start date" is a template name? If I check "dtstart" which appears in the class name it does work. --Gonnym (talk) 08:48, 20 July 2020 (UTC)
I looked at some examples and think it's meant to be in the template itself? Also < 1 (since it returns -1 for no match). I added Primefac's example to sandbox here: Template:Infobox television/sandbox, and you can see it flagging here. Might need some slight tweaks; it currently flags empty strings, and ideally it should ignore some strings like "present". It can probably be made more specific using the template, but I wonder if it's better to just stash this into a module and do it more cleanly, and not have to repeat? Alternatively, a check to see if the param is not empty, and begins with either a number or a letter, excl words like "present" would probably do it (all of these should be non-standard). Good idea on the category btw, saves having to query the API for all 40k+ transclusions (and their contents) for every run. ProcrastinatingReader (talk) 16:37, 20 July 2020 (UTC)
Done using a switch, seems to work. Special:Diff/968642472. ProcrastinatingReader (talk) 16:45, 20 July 2020 (UTC)
Your example does not work. Use the valid template and the tracking category still appears. --Gonnym (talk) 16:53, 20 July 2020 (UTC)
Good catch, I only tested the ones that should flag (and the false positives); I forgot to check start date itself. {{Str find|{{lc:{{{first_aired|}}}}}|may}}}} seems to return "1" for example, which makes me think |first_aired= is already passed through the start date template by the time it's evaluated here. Will read through some docs. ProcrastinatingReader (talk) 17:05, 20 July 2020 (UTC)
Uhh, I have a solution, but it's disgusting. Looks like it works, though? No template: Special:Permalink/968658413. Has template: Special:Permalink/968659176. I imagine there's a far neater way to do this, though. ProcrastinatingReader (talk) 18:36, 20 July 2020 (UTC)

Yeah, you're right, Gonnym, I didn't realize that it would parse the {{start date}} template before it hit the infobox call. In that case, you'll be wanting {{#if:{{{first_aired|}}}|{{#ifexpr: {{Str find|{{{first_aired|}}}|dtstart }} < 1 | [[Category:Pages using infobox television with nonstandard dates]]}}}} and using dtend for the end date. Really nice, actually because it means that you don't have to worry about template redirects. I've tested it in the sandbox and it looks good to me, but if someone else wants to run it through the paces before we go live let me know. Primefac (talk) 21:42, 20 July 2020 (UTC)

Neater than my module idea. I tried that dtstart concept in Special:Diff/968653168 but I guess the nowiki broke it. I think it might be worth retaining the switch/adding some form of check, so 'present' (for last_aired) isn't added to the tracking cat ('present' is a valid value for that param) ProcrastinatingReader (talk) 21:53, 20 July 2020 (UTC)
Well, the easiest way would be to assume that if they're using the proper template for start they'll use the proper template for end, and only check the |first_aired param. Primefac (talk) 21:57, 20 July 2020 (UTC)
Possibly a dangerous assumption, eg could be different editors at different times who last touched either of the params. I've done a quick data collection run with a script. For {{Infobox television}}, 19399 templates with first_aired being improper, 21122 templates with last_aired being improper. For {{Infobox television series}}, 2032 and 2104. These could overlap, of course (templates with one improper param may also have a second improper param). But I think that's (at least) 1795 templates where last_aired isn't valid while first_aired is? ProcrastinatingReader (talk) 22:48, 20 July 2020 (UTC)
Fair enough. Go for it. Primefac (talk) 22:52, 20 July 2020 (UTC)
Am I being silly or is something fishy going on with last_aired? See User:ProcrastinatingReader/sandbox3 with both first_aired and last_aired, and Special:Permalink/968696281 for last_aired only. Both seem to flag up as nonstandard? This is without my edit. ProcrastinatingReader (talk) 23:09, 20 July 2020 (UTC)
Got the sandbox tracking working correctly now (I believe). --Gonnym (talk) 00:32, 21 July 2020 (UTC)
Was going to say, looks good to me... and for what it's worth, I've turned the sandbox into a [[:Category... just so it's a bit more obvious if/when it triggers. Obviously will need to have the : removed when it goes live. Primefac (talk) 00:33, 21 July 2020 (UTC)
LGTM now, as well. ProcrastinatingReader (talk) 08:53, 21 July 2020 (UTC)
If the infobox is supposed to use those two templates, then it wouldn't be controversial to enforce that. Primefac (talk) 22:21, 17 July 2020 (UTC)
Also, just to note, quite a lot of dates are not using the proper templates. See Elizabeth I for an example of the dates in Template:Infobox person not being properly used. Probably a bunch of uncontroversial cleanup which can be done by a bot here. Baby steps, I suppose. ProcrastinatingReader (talk) 00:07, 18 July 2020 (UTC)

This is now Done by User:ProcBot. Archiving. ProcrastinatingReader (talk) 16:42, 9 September 2020 (UTC)

Land use of the municipalities in Switzerland

The links of sources for the land uses of the municipalities (within the geography section) in Switzerland point to a web page that is no longer available (e.g. Bulle) and only some of them have been linked to the Wayback Machine. Can someone link the rest of them to the Wayback Machine?--Horizon Sunset (talk) 17:44, 22 July 2020 (UTC)

May wish to post this at WP:URLREQ. ProcrastinatingReader (talk) 10:16, 23 July 2020 (UTC)
For the record, editor moved this to URLREQ, discussion: Special:Permalink/975505684#Land_uses_of_municipalities_of_Switzerland. Archiving. ProcrastinatingReader (talk) 16:44, 9 September 2020 (UTC)

DetectiveBot

Can I get technical support for creating DetectiveBot? Nihaal The Wikipedian (talk) 13:07, 2 September 2020 (UTC)

Nihaal The Wikipedian, you're going to have to be more specific. What is this bot, what does it do, and why do we need it? Primefac (talk) 15:11, 2 September 2020 (UTC)

Primefac I want this bot to be at least 1.5x faster than ClueBot NG. Detect, revert, report and also block when needed. This bot is very likely to have false positives too, so help might be needed. Nihaal The Wikipedian (talk) 05:42, 3 September 2020 (UTC)

You'll likely need consensus to create (or have someone create) that sort of bot, especially since we already have ClueBot. Primefac (talk) 14:00, 3 September 2020 (UTC)
A bot that will "block when needed" in combination with "is very likely to have false positives too" is a very bad idea. Also, you don't explain how it will be "1.5x faster than ClueBot NG". --Redrose64 🌹 (talk) 16:57, 3 September 2020 (UTC)
I think we should have a VilenskiBot that solves world hunger. I need tech support - it might need some support everyone only gets potatoes. Best Wishes, Lee Vilenski (talkcontribs) 17:10, 3 September 2020 (UTC)

@Redrose64 and Primefac: Then there is a messaging bot which helps people properly ping and send messages to people. Free for everyone, a simple tool. Nihaal The Wikipedian (talk) 05:41, 4 September 2020 (UTC)

Are these just ideas, or something actually in development? Do you have a working demo? --Redrose64 🌹 (talk) 19:31, 4 September 2020 (UTC)

I need help for that. My idea. Nihaal 03:48, 9 September 2020 (UTC)

Redirects from Townname to Townname, Statename [US]

Look at the two most recent redirects that I have created. There were communities at the name with the state disambiguator, but the base name was a redlink. Is there any way to do what I just did for every article that is in the form of "[anything], [state/province/country]" with a corresponding redlink? It would mostly need to run only once, but it could run again for a minor update every 3 months or so. HotdogPi 11:20, 20 July 2020 (UTC)

I'm currently working through bad links of this type, e.g. to Villa Park where Villa Park, California was intended. I'm not finding many redlinks. I am finding plenty of duplicate names. For example, one bad link, despite mentioning California nearby, actually related to Villa Park, Illinois. Certes (talk) 11:34, 20 July 2020 (UTC)
When I say "redlink", I just mean that there's nothing there. The two I created were found by typing in the URL bar, not by clicking a link. HotdogPi 12:33, 20 July 2020 (UTC)
I mean the same. I'm creating lists of targets by removing ", State" from the article titles. Most such abbreviated titles are redirects to the correct city (no action needed) or disambiguation pages (incoming links will be caught and fixed elsewhere). Several are articles on another topic (or a primary redirect thereto), and I'm fixing links to such pages. Very few are redlinks, and those which are may be duplicated in other states or countries. Certes (talk) 12:57, 20 July 2020 (UTC)
I can see this being messy if done by bot. Who gets priority in the case of naming conflicts? Just the first one to be processed by the bot? ProcrastinatingReader (talk) 11:38, 20 July 2020 (UTC)
@HotdogPi: I've worked through redlinks for cities in U.S. states A–G as a by-product of my other work. I've created one dab (Cherokee Village), one redirect (McRae–Helena, a new city) and have two outstanding where I'm undecided between those approaches: Greers Ferry (Greers Ferry, Arkansas is unique but has a dam and a lake) and Clarkedale (Clarkedale, Arkansas is unique but the reader may want a Clarkdale or a Clarksdale). I've skipped places without city status, but some of these "cities" have only a few hundred residents. I'm also checking for clashes with place names beyond the U.S., as many American cities are named after places in Britain and elsewhere. Although I'm using semi-automated tools, I doubt that we could specify this task tightly enough to deploy a bot. Certes (talk) 15:50, 22 July 2020 (UTC)
@HotdogPi: Here is a full list of pages matching "$foo,_$state" where "$foo" does not exist. Some of these obviously should not be redirected because they're titles of works or whatever (I didn't feel like adding a category filter). I would say that this is   Not a good task for a bot. As Certes noted, these decisions would rely heavily on context. Hopefully the list will be helpful to you though. --AntiCompositeNumber (talk) 18:44, 22 July 2020 (UTC)

Fix talk page to mainspace redirects

Query 46704 lists all of the talk namespace redirects that point to an article. Usually, if "A" redirects to "B" and "Talk:A" is also a redirect, then "Talk:A" should redirect to "Talk:B", not "B".

So, I think that we should have a bot that lists all of the talk page to mainspace redirects on a single page (perhaps a user subpage for the bot, or a "database reports subpage"). After that, the bot will find all of the redirects that do not include a slash (slashes indicate subpages), and fix them to point to the talk page of the mainspace target instead. If "Talk:A" happens to redirect to "A", then the (admin)bot would delete "Talk:A" because otherwise, it would redirect to itself. There are currently 1292 talk namespace redirects that point to articles (plus possibly some more due to a database replication lag). GeoffreyT2000 (talk) 20:46, 30 July 2020 (UTC)

Excluding subpages gets us down to just about 1,000 pages: https://quarry.wmflabs.org/query/47087. There's a few that shouldn't be redirects, and a few where fixing it naively would cause a double redirect. I don't think any of these pages should be deleted, since they're likely to have some sort of history. All in all, it wouldn't be a very difficult bot task. It may be worth putting in a warn edit filter for top-level ns1 pages being redirected to a page in a non-talk namespace. --AntiCompositeNumber (talk) 22:18, 30 July 2020 (UTC)
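A rough sketch of the fix, assuming pywikibot and a list of talk-page titles taken from the Quarry query; self-redirects and would-be double redirects are skipped rather than edited:

<syntaxhighlight lang="python">
# Retarget "Talk:A" -> "Talk:B" when "Talk:A" currently redirects to the
# article "B". Cases needing deletion or human judgement are left alone.
import pywikibot

site = pywikibot.Site('en', 'wikipedia')

def fix_talk_redirect(talk_title):
    talk_page = pywikibot.Page(site, talk_title)
    if not talk_page.isRedirectPage():
        return
    target = talk_page.getRedirectTarget()
    if target.namespace() != 0:
        return  # already points outside the article namespace
    new_target = target.toggleTalkPage()
    if new_target == talk_page or new_target.isRedirectPage():
        return  # self-redirect or would create a double redirect
    talk_page.text = '#REDIRECT [[%s]]' % new_target.title()
    talk_page.save(summary='Retarget talk-page redirect to the talk page of its target')
</syntaxhighlight>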

Getting Rid of Old IP Warnings

I see a lot of users like BD2412 going around and just blanking IP talk pages with the {{OW}} template and I thought "Boy, isn't that a tedious job", and then I thought "well, let's get a bot to do that". Here's my suggestion if this hasn't already been introduced or already given to a bot as a task:

A bot that would go around and search for old IP warnings/blocks (ones more than a month old [excluding block templates, which would need more time]) and get rid of them by replacing them with {{OW}}.

Best, P,TO 19104 (talk) (contribs) 01:39, 7 August 2020 (UTC).

Pretty sure this is a CONTEXT issue. Primefac (talk) 01:44, 7 August 2020 (UTC)
I've asked before and been told it can't be done, though the reasoning is somewhat opaque to me. All we need, I think, is a list of non-blocked IP editor talk pages from which no edits have emanated for ten years, and on which no edits have been made in that period. BD2412 T 02:25, 7 August 2020 (UTC)
Perhaps it's a question of timing. Ten years: yes. One month: no. Somewhere between lies a happy medium. I'd also add: no blocks expiring during the period, so the page remains marked when an IP resumes editing after a long block. Certes (talk) 08:09, 7 August 2020 (UTC)
One month may be sufficiently long for a dynamic IP address where you pull a different one out of the hat each time you power up. I have come across people who change their IP whenever they pour another coffee - see these eight posts which I am certain were all the same person. But one month is certainly far too short when it comes to static IP addresses where I have people being disruptive, stopping when served a level 3 notice, waiting a few weeks and beginning again. --Redrose64 🌹 (talk) 09:50, 7 August 2020 (UTC)
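A rough sketch of the listing step BD2412 describes, assuming pywikibot; it only decides whether a single IP talk page qualifies, leaving the choice of cutoff and the actual blanking to whatever consensus emerges:

<syntaxhighlight lang="python">
# Does this user talk page belong to an IP that has not edited, and whose
# talk page has not been edited, within the cutoff, and that is not blocked?
from datetime import datetime, timedelta
import ipaddress
import pywikibot

site = pywikibot.Site('en', 'wikipedia')
CUTOFF = datetime.utcnow() - timedelta(days=3650)  # ten years, per the suggestion above

def is_stale_ip_talk(page):
    name = page.title(with_ns=False)
    try:
        ipaddress.ip_address(name)
    except ValueError:
        return False                                 # not an IP talk page
    if page.latest_revision.timestamp > CUTOFF:
        return False                                 # talk page edited recently
    user = pywikibot.User(site, name)
    contribs = list(user.contributions(total=1))
    if contribs and contribs[0][2] > CUTOFF:
        return False                                 # the IP itself edited recently
    return not user.is_blocked()
</syntaxhighlight>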

It seems this issue has been discussed before in the following places:

It also seems that in the past this was a very controversial issue, so it was necessary to go to the VP. P,TO 19104 (talk) (contribs) 13:39, 7 August 2020 (UTC)

If you look at the VP you can obviously see that there was a lot of support for the idea, but the proposal seemed to go nowhere. So I guess the question here is, is this feasible, as we may already have the support we need? P,TO 19104 (talk) (contribs) 13:48, 7 August 2020 (UTC)

Bot to subcategorize fair use images by date

I am surprised to find that our hundreds of thousands of images falling under the Category:Fair use images structure have no categorization by date. This is important because all of these images will eventually fall into the public domain, based on the passage of time. Most images that have been uploaded have a "date" field, and although a large subset of these are filled out as "unknown", that also should be categorized. In short, I would like a bot to parse the images falling under this category and create and populate all needed subcategories for, e.g., Category:Fair use images created in 1952. BD2412 T 15:34, 7 August 2020 (UTC)
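A rough sketch of the categorisation step, assuming pywikibot and mwparserfromhell and that the file description page carries a "date"/"Date" template parameter (the exact category naming is discussed further below):

<syntaxhighlight lang="python">
# Pick the category to add to a fair-use file page based on its date field.
import re
import mwparserfromhell
import pywikibot

def year_category(file_page):
    code = mwparserfromhell.parse(file_page.text)
    for template in code.filter_templates():
        for param_name in ('date', 'Date'):
            if template.has(param_name):
                match = re.search(r'\b(1[89]\d\d|20\d\d)\b',
                                  str(template.get(param_name).value))
                if match:
                    return '[[Category:Fair use images created in %s]]' % match.group(1)
    # fallback category suggested later in this thread
    return '[[Category:Fair use images missing date of publication]]'
</syntaxhighlight>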

That would be great, although what would be even better is thoughtful use of the Category:Out of copyright in... tree. (t · c) buidhe 19:14, 7 August 2020 (UTC)
@Buidhe: There are a number of variables to when something goes out of copyright, including the country of publication, whether the "author" is a person or a corporate entity, and where the author is a person, the date of their death. However, we have to start somewhere, so knowing when it was published (and having a sense of what proportion of works are missing that information) would be a good start. Of course, images without a known publication date should be in a Category:Fair use images missing date of publication. BD2412 T 05:59, 8 August 2020 (UTC)
BD2412, whatever scheme is used should clearly distinguish creation and publication dates, which are not necessarily the same year. Unfortunately, the date parameter may be used for either and it's not clear which. Therefore, a bot scheme should categorize into Category:Fair use images dated 1952, because the image might have been created or published that year. (t · c) buidhe 06:02, 8 August 2020 (UTC)
That... should be changed in the upload process. Well, we're stuck with what we have, so I agree that we will need to treat those dates as ambiguous for classification purposes. I would think that we could at least presume our ~200,000 album covers to be publication dates. BD2412 T 06:08, 8 August 2020 (UTC)
Yes, and most photographs are dated by the creation date. (t · c) buidhe 06:29, 8 August 2020 (UTC)
Request
External links
Website | Links
British Thoracic Society | 15
MeSH | 14
... | ...

This might need two steps (per WAID, below) - generating a list of articles, and then generating a list of external links that have been used.

Goal
  • Use this information to request a bot to run through article space and tally commonly used links and link templates
Reason

Make maintenance easier, by:

  1. Helping resolve the issue of dead links in bulk (done for existing templates, see for example 2014: Category_talk:Anatomy_external_link_templates)
  2. Helping future template discussions. For example, I recently completed a survey of templates used in medical space (Wikipedia_talk:WikiProject_Medicine/Archive_139#Request_for_some_more_eyes_at_TfD) which revealed a number of templates which linked to sites no longer fit for use, either because of a change in our standards, paywalls, or deterioration or change in the website's standards.
  3. Helping future template discussions that may, for example, result in a change in location of that link to an authority control template or Wikidata.
Discussion
Discussions as to appropriate venue, and one not relevant to request
User:Tom (LT), I don't know if this needs a bot; it might be possible to do it as some sort of query.
In terms of which articles to search, I think it would be a good idea to exclude articles tagged by Wikipedia:WikiProject Biography, Wikipedia:WikiProject Hospitals, Wikipedia:WikiProject Business, and anything with the WPMED tag set to |society=yes. WPMED currently tags a very large number articles that are primarily about people and organizations. WhatamIdoing (talk) 16:13, 29 August 2020 (UTC)
That's a good point regarding excluding some articles, although if that's particularly difficult I have no strong opinion if they got included. I'm all ears if you have a suggestion as to what that sort of query you mention might be, otherwise as it involves running through some text in articles I thought a bot might be most appropriate. --Tom (LT) (talk) 23:38, 29 August 2020 (UTC)
I believe the information you want is at Wikipedia:Request a query. You'd probably want two queries: Please give me a list of the right articles, and then please give me a list of the links. WhatamIdoing (talk) 02:02, 30 August 2020 (UTC)
Thanks, will give it a go. I have made slight alterations to the text based on the venue change. --Tom (LT) (talk) 07:22, 30 August 2020 (UTC)
Right, that did not work well. Moving back to BOTREQ as suspected. Thank you for your good intentions when you gave the advice above. --Tom (LT) (talk) 23:57, 1 September 2020 (UTC)
    • Cryptic and also WhatamIdoing is there a way just to get the links contained within the "External links" section of articles? --Tom (LT) (talk) 08:55, 31 August 2020 (UTC)
      • Not even in principle without a full database dump. Even then, you'd probably have to render all those articles and scrape them. I suppose you could try to parse them, but any existing templates there would make it difficult, even before considering the ones pulling from Wikidata. —Cryptic 09:22, 31 August 2020 (UTC)
        A full database dump isn't required. And it isn't that difficult provided you use a good bot framework. I see only about 70,000 articles in the categories combined. The API can pull the texts of 500 articles at a time, which means we're done in 140 API calls. As for the parsing, a good bot framework (like pywikibot) provides ways to extract text from a section and also to parse external links and templates from them. You'd probably want to get the url= param of any templates whose names begin with "cite ", along with the raw external links. That may not cover everything, but I guess it's good enough. – SD0001 (talk) 10:31, 31 August 2020 (UTC)
  • Excluding articles whose talk pages are in any of a list of other categories as mentioned above in passing, on the other hand, is easy. I just need the list. Filtering out ones with a given template parameter usually isn't, but it looks like this one adds Category:Society and medicine task force articles. (Setting importance/quality just adds more categories and doesn't replace that one, right?) —Cryptic 09:22, 31 August 2020 (UTC)
    Right. We are verbose with WikiProject categories. WhatamIdoing (talk) 17:59, 31 August 2020 (UTC)
  • Thanks all! To be clear I don't want to look at citation links, just those within the "external links" section of the articles. This is because it's a single section that, in my experience, is often unloved and would benefit from an update / oversight :). --Tom (LT) (talk) 22:51, 31 August 2020 (UTC)
    • @Tom (LT): If you're interested only in the links produced by the [external links] syntax, it's a very straightforward problem which can be solved using pywikibot (to fetch the articles) and mwparserfromhell (to parse the links). These are fairly common libraries used by bot developers. If you post at WP:BOTREQ, there's a chance someone will get to it soon.
    • Database queries (which is what this page is for) are not helpful since they can't differentiate between links by the position on the page or the way they're present (external link, citation template, etc). – SD0001 (talk) 14:05, 1 September 2020 (UTC)
  • @Tom (LT): Although it's not the same as your original request, I ran the bot and surveyed how many times each medical external link template is used in article space (summary). I did this in the spirit that something is better than nothing. --Was a bee (talk) 12:51, 13 September 2020 (UTC)
Medical external template usage

Transclusion counts of templates under en:Category:Medicine external link templates. Main space (article space) only. Templates transcluded through a Lua module were not (could not be) counted.

No. Template Transclusion count Article list
1 en:Template:Medical resources 6574 [2]
2 en:Template:MeSH name 5867 [3]
3 en:Template:ICD10 4180 [4]
4 en:Template:ICD9 4001 [5]
5 en:Template:FMA 3229 [6]
6 en:Template:Gray's 1778 [7]
7 en:Template:PMID 1671 [8]
8 en:Template:OMIM 716 [9]
9 en:Template:WhoNamedIt 656 [10]
10 en:Template:EMedicine 618 [11]
11 en:Template:ICD9proc 548 [12]
12 en:Template:NormanAnatomy 512 [13]
13 en:Template:EMedicine2 500 [14]
14 en:Template:SUNYAnatomyLabs 488 [15]
15 en:Template:ATC 485 [16]
16 en:Template:DorlandsDict 454 [17]
17 en:Template:ClinicalTrialsGov 405 [18]
18 en:Template:NormanAnatomyFig 334 [19]
19 en:Template:SUNYAnatomyFigs 318 [20]
20 en:Template:PMC 274 [21]
21 en:Template:NCBIBook2 238 [22]
22 en:Template:BUHistology 217 [23]
23 en:Template:MedlinePlusEncyclopedia 207 [24]
24 en:Template:ICDO 196 [25]
25 en:Template:UMichAtlas 158 [26]
26 en:Template:MeSH number 138 [27]
27 en:Template:TerminologiaEmbryologica 137 [28]
28 en:Template:OPS301 135 [29]
29 en:Template:Office of Rare Diseases 117 [30]
30 en:Template:LoyolaMedEd 102 [31]
31 en:Template:SUNYAnatomyImage 100 [32]
32 en:Template:LOINC 66 [33]
33 en:Template:DartmouthHumanAnatomy 65 [34]
34 en:Template:ViennaCrossSection 64 [35]
35 en:Template:BrainMaps 64 [36]
36 en:Template:NINDS 59 [37]
37 en:Template:DukeOrtho 56 [38]
38 en:Template:Chorus 49 [39]
39 en:Template:OklahomaHistology 49 [40]
40 en:Template:EmbryologyUNC 48 [41]
41 en:Template:BrainInfo 48 [42]
42 en:Template:UCDavisOrganology 46 [43]
43 en:Template:NLM 43 [44]
44 en:Template:UIUCHistologySubject 43 [45]
45 en:Template:MedicalMnemonics 41 [46]
46 en:Template:DiseasesDB 40 [47]
47 en:Template:EmbryologySwiss 40 [48]
48 en:Template:Cite GPnotebook 36 [49]
49 en:Template:AnatomyAtlasesMicroscopic 33 [50]
50 en:Template:ICD10PCS 32 [51]
51 en:Template:DermNet 31 [52]
52 en:Template:KansasHandKinesiology 31 [53]
53 en:Template:KansasHistology 31 [54]
54 en:Template:NICE 29 [55]
55 en:Template:GeneTests 26 [56]
56 en:Template:DermAtlas 25 [57]
57 en:Template:MerckManual 23 [58]
58 en:Template:MeSH PharmaList 22 [59]
59 en:Template:BiowebUW 22 [60]
60 en:Template:MerckHome 20 [61]
61 en:Template:UMichAnatomyModule 18 [62]
62 en:Template:PSUAnatomy 17 [63]
63 en:Template:EmbryologyUNSW 16 [64]
64 en:Template:MedlinePlusDrugInfo 16 [65]
65 en:Template:EmbryologyTemple 14 [66]
66 en:Template:DrugBank 12 [67]
67 en:Template:BrainstemWisconsin 12 [68]
68 en:Template:EatonHand 11 [69]
69 en:Template:MuscleUWash 11 [70]
70 en:Template:NeuroanatomyWisc 10 [71]
71 en:Template:DECIPHER 9 [72]
72 en:Template:Gray page 9 [73]
73 en:Template:HCPCSlevel2 7 [74]
74 en:Template:SUNYRadiology 7 [75]
75 en:Template:WhoNamedIt2 7 [76]
76 en:Template:GeorgiaImmunology 6 [77]
77 en:Template:ICD11 6 [78]
78 en:Template:UTGlucagon 6 [79]
79 en:Template:CDCDiseaseInfo 5 [80]
80 en:Template:MedlinePlusImage 5 [81]
81 en:Template:MedlinePlusOverview 5 [82]
82 en:Template:SUNYCrossSection 5 [83]
83 en:Template:WOROI 5 [84]
84 en:Template:DailyMed 4 [85]
85 en:Template:CNX A&P 4 [86]
86 en:Template:Medicinenet 3 [87]
87 en:Template:Orphanet 3 [88]
88 en:Template:SearchLOINC 3 [89]
89 en:Template:ECDC 2 [90]
90 en:Template:NHS 2 [91]
91 en:Template:AnatomyAtlases 2 [92]
92 en:Template:GNF GO 1 [93]
93 en:Template:Locus 1 [94]
94 en:Template:MedlinePlus2 0 [95]
95 en:Template:Terminologia Anatomica 0 [96]
96 en:Template:TerminologiaHistologica 0 [97]
97 en:Template:Gray's Anatomy link 0 [98]
98 en:Template:TA98 0 [99]

Thanks :) Close enough. I withdraw this request for the moment so that bot editors can focus on more worthy targets --Tom (LT) (talk) 23:52, 18 September 2020 (UTC)

Replace template with another and remove where the other already exists re; Template:Germanic philology

Sorry for the messy entry, I'm not familiar with bot procedures but I wanted to flag this as it seems to have gone under the radar.

It would appear that a few months ago the content at Template:Germanic philology was merged into Template:Germanic languages, and the former was redirected to the latter. However, many pages included both templates, and so now they instead contain two copies of the same template (as the philology template simply reproduces the content of the languages template). For instance: Fingallian, Germanic philology. I am unsure how many pages this may affect.

To fix this, it seems it would be worthwhile to instruct a bot to:

  1. Convert all instances of the "Germanic philology" template to the "Germanic languages" template.
  2. Subsequently remove duplicate instances of the template within a single article.

Thanks. BlackholeWA (talk) 02:44, 27 August 2020 (UTC)

I'll go through with my bot tomorrow and remove any duplicate uses. Primefac (talk) 02:48, 27 August 2020 (UTC)
Thanks. It also belatedly occurs to me that it might also be worthwhile opening a TfD for the seemingly now defunct philology template. (Would do so myself now, but it's 3am here) BlackholeWA (talk) 02:51, 27 August 2020 (UTC)
Not sure what you mean, as it's still used and is clearly a useful redirect. I mean, I won't stop you from taking it to RFD, but it seems like a waste of time. Primefac (talk) 10:37, 27 August 2020 (UTC)
  Done, about 10 pages edited. Primefac (talk) 14:07, 27 August 2020 (UTC)

OpenSourceSyncBot

Consider open source software developed for the Wikipedia/Wikimedia movement. Usually the tasks are tracked in systems designed by and for software developers, such as GitHub Issues or Phabricator, but the major audience, i.e. the people who care about the progress the most and whose feedback most needs to be solicited, is on Wikipedia.

I hereby propose creating an OpenSourceSyncBot to sync a Wikipedia page, e.g. mw:ORES/Synced_tasks, with a search criterion in its relevant tracking system, e.g. the ORES component on Phabricator.

Phase 1: for any tasks in the tracking system, sync them onto the subpage on Wikipedia, giving people more transparency and visibility into development progress. The format could be a wiki table with "task title, progress, reporter, assignee". The bot would also periodically re-sync to update this information.

Phase 2: for any new row added to the Wikipedia table by a Wikipedia user, the bot will create a new task in the external tracking system.

Proposer: xinbenlv Talk, Remember to "ping" me 23:06, 14 August 2020 (UTC)
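A rough sketch of Phase 1, assuming Phabricator's Conduit API (maniphest.search) and pywikibot; the project identifier and target page are placeholders:

<syntaxhighlight lang="python">
# Pull tasks for one Phabricator project and write them to a wiki table.
import requests
import pywikibot

PHAB_API = 'https://phabricator.wikimedia.org/api/maniphest.search'

def fetch_tasks(api_token, project):
    response = requests.post(PHAB_API, data={
        'api.token': api_token,
        'constraints[projects][0]': project,   # project PHID (placeholder)
    })
    response.raise_for_status()
    return response.json()['result']['data']

def build_table(tasks):
    rows = ['{| class="wikitable"', '! Task !! Status']
    for task in tasks:
        rows.append('|-\n| [[phab:T%d|%s]] || %s' % (
            task['id'], task['fields']['name'], task['fields']['status']['name']))
    rows.append('|}')
    return '\n'.join(rows)

def sync(api_token, project, page_title):
    page = pywikibot.Page(pywikibot.Site('en', 'wikipedia'), page_title)
    page.text = build_table(fetch_tasks(api_token, project))
    page.save(summary='Sync task list from Phabricator')
</syntaxhighlight>

Phase 2 (creating tasks from new table rows) would go in the other direction, via maniphest.edit, and would need more care around authentication and abuse prevention.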

This seems like a waste of time. External software, including Phabricator, is easy to learn and follow, as well as log in (which seems to be a hump for some people). What driving need is there for this request? --Izno (talk) 01:03, 15 August 2020 (UTC)

Examples of Usage

For example, WP:Twinkle developers could better reach their users at Wikipedia:Twinkle#Reporting bugs or requesting features

(This is not a request for a bot. This is a request for a sanity check.)

WP:ELN is discussing the ==External links== section of Mary Tyler Moore. It contains (in part) this list:

* {{NYTtopic|people/m/mary_tyler_moore/}}
* {{IMDb name|1546}}
* {{tcmdb name|id=134771|name=Mary Tyler Moore}}
* {{iBDB name|023123}}
* {{findagrave|175697586}}

This is not an unusual set of links for BLP articles. Obviously, the exact list of links and the order they're presented in varies. Most of them use external link templates.

Imagine a future in which we developed a consensus that some/all of this "standard link dump" should be combined into a single template, perhaps similar to Template:Authority control. Am I correct that it would (if that magical future arrives) be a relatively simple matter for a bot to remove some of these (existing) items from this list and transform them into the new template, in at least most articles? If it's harder than it sounds, then I'd rather know that in advance. (Please ping me.) WhatamIdoing (talk) 17:48, 19 July 2020 (UTC)

I take it you're saying that you're envisioning some sort of template where when someone calls {{ELinksTemplate|Mary Tyler Moore}} it spits out the five templated links you mention? If so, I'm not sure how feasible it would be to do that, because you would need a HUGE module to account for the millions of names and links that would be required. Unless I'm mistaken on your future vision, the rest of the discussion is a rather moot point.
As I typed out the above, I thought about having this magical template be basically a wrapper for the links you mention, so you would set (for example) |imdb=1546 to have it kick out the IMDb link. I suppose that could be doable, but I don't think you'll ever get consensus to basically turn five templates into "five templates plus a wrapper template for them all". Primefac (talk) 18:26, 19 July 2020 (UTC)
This idea was discussed briefly at Template talk:Authority control in 2014. I remember a more recent discussion, but I don't recall where it happened. – Jonesey95 (talk) 18:49, 19 July 2020 (UTC)
Interesting. As mentioned in that discussion, it would be a nightmare to get consensus on what to include/exclude in such a template. Not to say it can't be done, just a little tedious. Primefac (talk) 18:53, 19 July 2020 (UTC)
I fear that this idea would have a similar problem to Authority control: no one would agree on a "standard" set of external links. For example, if a TV/film actor had a minor off-broadway role, they would be listed in iMDb and IOBDB, and both would likely be represented in Wikidata (because of course we'd use wikidata for this template). However, some editors might not want to link to the IOBDB page because it doesn't provide much more information, especially if there are already many external links. That would mean implementing overrides and having protracted discussions about what sites are suitable for general external links. --AntiCompositeNumber (talk) 18:59, 19 July 2020 (UTC)
Primefac, what I want is for the bot to take that list and turn it into something like {{new thing |NYTtopic=people/m/mary_tyler_moore/ |IMDb name=1546 |tcmdb name=134771 |iBDB name=023123 |findagrave=175697586}} and have the template display the same links more compactly. WhatamIdoing (talk) 22:18, 19 July 2020 (UTC)
I can't see why that'd be useful, personally. That template would call these under the hood, so the only thing that eliminates is writing out the bullets. imo this single template idea would only make sense if the data was to be sourced from Wikidata, perhaps some kind of {{links|imdb|tcmmb|ibd}} which sources the info from Wikidata? ProcrastinatingReader (talk) 22:36, 19 July 2020 (UTC)
I think having it display in a standardized, compact format, similar to Template:Medical resources or Template:Authority control would be beneficial. My question for this group is whether it's feasible to have the bot convert the articles, given that not all articles will use the same templates, place them in the same order, etc. WhatamIdoing (talk) 22:54, 19 July 2020 (UTC)
Technically speaking, sure. It's possible to parse the vast majority, yes, despite those display differences. The order doesn't really cause issues with parsing, neither does them being bullets or newlines or something else. Displaying them again might need more design thought if those differences are to be retained whilst using a wrapper template. ProcrastinatingReader (talk) 23:48, 19 July 2020 (UTC)
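A rough sketch of that parsing step, assuming mwparserfromhell; "new thing" is just the placeholder name from the example above, and only the five templates from the Mary Tyler Moore list are recognised here:

<syntaxhighlight lang="python">
# Fold a bulleted list of external-link templates into one wrapper call.
import mwparserfromhell

KNOWN = {'nyttopic', 'imdb name', 'tcmdb name', 'ibdb name', 'findagrave'}

def combine_links(section_wikitext):
    parts = []
    for template in mwparserfromhell.parse(section_wikitext).filter_templates():
        name = str(template.name).strip()
        if name.lower() not in KNOWN:
            continue
        if template.has('id'):
            value = str(template.get('id').value).strip()
        elif template.params:
            value = str(template.params[0].value).strip()
        else:
            value = ''
        parts.append('|%s=%s' % (name, value))
    return '{{new thing %s}}' % ' '.join(parts)
</syntaxhighlight>

A real run would also have to remove the old bullets and cope with ordering, extra parameters and unrecognised templates, which is where the design questions above come in.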
Thanks, I appreciate all the responses and the time people took to understand my question. There might (someday, not soon, possibly a couple of months from now) be a request for a bot to do this. WhatamIdoing (talk) 21:48, 4 August 2020 (UTC)
@WhatamIdoing: I don't have anything to add around the ability of a bot to make those changes, but in terms of your idea of grouping the links I suggest you check out {{Sports links}} and the underlying Module:External links. Sports links does exactly what you're talking about, for standard external links for athletes. That template and the underlying module are apparently based on similar template/module in Norwegian wiki, where they also have templates for other groups like film/art/astronomy. A202985 (talk) 17:29, 19 August 2020 (UTC)
Thank you for telling me about this template, A202985. I'm hoping ultimately for something that is more compact than what's displayed at Mary Docter#External links (all seven items in that list are generated through the template). I don't know how people would feel about pulling the content from Wikidata. I suspect that in some cases, they already are, though. It might work very well with just a change to the formatting. WhatamIdoing (talk) 18:24, 20 August 2020 (UTC)

Civil parish bot

This isn't (straight away) another bot request but rather (at the moment) only a request to see if anyone has the skills to create the code for it. I placed a request at Wikipedia:Bot requests/Archive 79#Civil parish bot and there was discussion at Wikipedia:Village pump (proposals)/Archive 160#Civil parish bot and User talk:DannyS712/Archive 12#Json format that coding was needed. The basic format is at User:Crouch, Swale/Bot tasks/Civil parishes (current)/Simple and I have attempted the coding at User:Crouch, Swale/Bot tasks/Civil parishes (current)/Coded. I don't have the skills to do the JSON bit, so I'm wondering if anyone does? If not then this can be archived and I can get on with looking at creating them manually, thanks. Crouch, Swale (talk) 20:58, 23 September 2020 (UTC)
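One possible reading of "the JSON bit", with entirely hypothetical field names (the real formats are on the user subpages linked above): turn a list of parish records into a JSON object mapping article titles to stub wikitext, ready for a mass-creation script.

<syntaxhighlight lang="python">
# Build {article title: stub wikitext} from structured parish data.
import json

STUB = ("'''{name}''' is a civil parish in the district of {district}, "
        "{county}, England. In the 2011 census it had a population of {population}.")

def build_stub_json(records):
    stubs = {record['name']: STUB.format(**record) for record in records}
    return json.dumps(stubs, indent=2, ensure_ascii=False)

# build_stub_json([{'name': 'Examplewick', 'district': 'Babergh',
#                   'county': 'Suffolk', 'population': 123}])
</syntaxhighlight>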

Do you have any sort of consensus that this bot is a good idea, which was asked for last year? You seem to be skirting around this again. Spike 'em (talk) 10:14, 24 September 2020 (UTC)
There did appear to be consensus for the parishes themselves, and the main point is that if we can get the technical bit done then the approval might be easier, since that was the problem last time. Even if the bot isn't actually approved, this would be useful in detecting missing parishes. Crouch, Swale (talk) 16:21, 24 September 2020 (UTC)
Could you please explain exactly where you think there was consensus to mass create these articles, as I can't see any in the links you've provided? You have been asked multiple times to gain that consensus and then get the bot sorted, not the other way round. Spike 'em (talk) 16:53, 24 September 2020 (UTC)
At Wikipedia talk:WikiProject UK geography/Archive 18#Bot created articles there was a small amount but yes you're right there hasn't otherwise been much consensus either way. @Spike 'em: do you suggest that I start an RFC to deal with this? since there are also some related questions at User:Crouch, Swale/England. Crouch, Swale (talk) 17:22, 24 September 2020 (UTC)

Double redirect bot

The current way of dealing with double redirects is slow and inefficient. A far simpler way to deal with them would be to simply have a bot that detects when a new redirect is created, either from a merger, or as a new page. If it finds a double redirect, it will fix it instantly. The current system is slow, and redirects can take several days to fix. If sinebot is able to sign posts in talk and user talk namespaces almost instantly, how hard can it be for a bot to fix double redirects faster? I-82-I | TALK 07:48, 29 August 2020 (UTC)

Surely we already have several bots which fix double redirects? In any case, fixing a double redirect "instantly" could cause chaos in the event of a malicious edit. --Redrose64 🌹 (talk) 10:15, 29 August 2020 (UTC)
Xqbot and others do this. It's not instant but I agree that an instant fix can cause problems. There are times when I'd have had to leave a set of pages inconsistent awaiting admin (or page mover) help if a bot had intervened to mess up my carefully planned sequence of moves. Certes (talk) 10:43, 29 August 2020 (UTC)
This has been discussed recently, and if I recall there are two active (and four approved) bots that deal with double redirects. They both work in different ways but they essentially do work as described by the OP. Primefac (talk) 12:31, 29 August 2020 (UTC)

Name hatnotes

  • Hey everyone, I was busy over the last few days adding some custom name hatnotes. I then got the idea: why not let a bot add those hatnotes instead of a human? However, the problem here is that some name hatnotes, like the Philippine, the Indonesian, the Icelandic or the Chinese ones, cover multiple naming customs within their country. Is it possible to let a bot add name hatnotes for the simpler naming customs, like Dutch, Eastern Slavic, Germanic, Japanese, Burmese, Malay, Mongolian, Renaissance Florentine, Okinawan, Portuguese, Slavic, Spanish (including the Basque, Galician and Catalan) and Turkic names? Those countries or peoples should normally have one style of naming custom. Hopefully this'll be done easily. Cheers. CPA-5 (talk) 20:09, 13 July 2020 (UTC)
    CPA-5, hatnotes should be for disambiguation, not explanatory notes on naming conventions; I hate the fact that we still do this so widely. To give a tweaked version of an old comment I made:
    WP:HAT states in its first paragraph that "their purpose is to help readers locate a different article if the one they are at is not the one they're looking for". Despite that, the use of hatnotes for surname clarification does go way back to the 2000s. The basic argument against it is that putting it in a hatnote, the very first thing readers see after the title, is positioning way too prominent for what is basically trivia.
    Concerns over this issue have been brought up at the village pump at least twice — in 2011 here, and last year by me here — and both times there was interest in making a change. I introduced {{efn Chinese name}} and a few similar templates, with discussion here and here, and I hope they'll become increasingly widely adopted and eventually the old style deprecated. {{u|Sdkb}}talk 23:39, 30 August 2020 (UTC)
A complication is that, for many of these pages, the surname occupying the base name is nowhere near being a primary topic. I suspect that the vast majority of readers reaching Schoenberg, Braun or Wills have no interest in the surname, and that most of their incoming links were intended for other articles. (I've fixed several thousand such errors this summer.) We should really address that issue before claiming that Schoenberg is a {{German name}}, when 90% of uses intend the composer. Certes (talk) 09:55, 31 August 2020 (UTC)

Tagging empty categories with Template:Db-c1 after some elapsed time

The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


I'm not sure if such a bot already exists, but shouldn't there be an automated script that tags categories under C1 if they remain empty for an allotted time (e.g. six hours)? ToThAc (talk) 22:04, 7 October 2020 (UTC)

ToThAc, some categories should not be deleted, even if empty. How would we avoid marking them for speedy deletion? Heart (talk) 06:23, 9 October 2020 (UTC)
We have {{Possibly empty category}} for categories like Lua templates with errors, which should be kept because it will be populated automatically and provide a useful warning next time we mess up a template. However, some empty categories without that tag may still be useful and should not be deleted without human consideration. Certes (talk) 10:15, 9 October 2020 (UTC)
@Certes: Shouldn't there be another template dealing with that situation as well? ToThAc (talk) 19:33, 9 October 2020 (UTC)
There probably is. The experts at WT:WikiProject Categories should be able to advise whether it is safe to delete empty categories which have none of a certain list of tags. Certes (talk) 23:36, 10 October 2020 (UTC)
QEDKbot used to do this. It ran into troubled waters (actually, the whole of Wikipedia talk:Bots/Requests for approval at the moment is discussion about QEDKbot!) and so its operator appears to have disabled it for now. – SD0001 (talk) 19:53, 9 October 2020 (UTC)
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Move editnotices when underlying page is moved

Usually editors moving pages don't move the editnotice attached to them. Either they forget, or they aren't admin/TE so they can't do it (or both). {{Editnotice/notice}} categorises such cases into Category:Editnotices whose targets are redirects (which I've been updating for a while) but it often takes months for the job queue to go over transclusions and add moved pages into this cat (see this on VPT), which makes it hard to even do this manually. I'm thinking a bot would (a) be able to do this sooner and get around that issue and (b) actually just do the move automatically, suppressing the redirect. One way would be to listen to Special:Log/move and check if an editnotice for the page exists; this could be done continuously. Another is to loop over all transclusions of {{Editnotice}} (or Special:PrefixIndex/Template:Editnotices/) daily and do the moves. There are <20k, so this is feasible, I think, but it would leave a gap of up to 24 hours (ideally, an editnotice shouldn't just disappear for a day, especially when it's one required for DS etc.). Thoughts on these options, or other alternative methods? ProcrastinatingReader (talk) 16:41, 9 September 2020 (UTC)
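A minimal sketch of the log-watching option, assuming Pywikibot. The Template:Editnotices/Page/ prefix is the standard location for per-page editnotices, but the log-entry attribute names, polling window and edit summary are illustrative assumptions, not a finished design:

import pywikibot

site = pywikibot.Site('en', 'wikipedia')

# Poll recent page moves and follow up on any editnotice left behind at the old title.
for entry in site.logevents(logtype='move', total=50):
    old_title = entry.page().title()          # source of the move
    new_title = entry.target_page.title()     # destination (attribute name assumed)
    old_notice = pywikibot.Page(site, 'Template:Editnotices/Page/' + old_title)
    if not old_notice.exists() or old_notice.isRedirectPage():
        continue
    # Needs an account that can edit editnotices and suppress the redirect.
    old_notice.move('Template:Editnotices/Page/' + new_title,
                    reason='Moving editnotice to follow page move',
                    noredirect=True)

The daily alternative would instead walk Special:PrefixIndex/Template:Editnotices/Page/ (or the tracking category) and move any editnotice whose underlying page has become a redirect.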

  BRFA filed ProcrastinatingReader (talk) 11:39, 18 September 2020 (UTC)
Done by ProcBot's task 4. ProcrastinatingReader (talk) 09:16, 12 October 2020 (UTC)

Yobot wikiproject tagging request

This is a formal request to recruit @Yobot: to tag talk pages under WikiProject Phoenicia. Please tag the pages under Category:Phoenicia; no auto-rating. Thanks ~ Elias Z. (talkallam) 13:34, 1 September 2020 (UTC)

Yobot has not edited since 2018. – Jonesey95 (talk) 15:12, 1 September 2020 (UTC)
I seem to recall User:AnomieBOT performs this task. Primefac (talk) 15:59, 1 September 2020 (UTC)
@Elie plus: In any case, you need to supply an explicit list of categories to process - unless you only want the 20 pages presently in Category:Phoenicia to be tagged, those can be done as an AWB job. --Redrose64 🌹 (talk) 22:34, 1 September 2020 (UTC)

Bot for updating U.S. college admissions statistics

For many years, U.S. college articles were using manually updated tables like this to represent admissions statistics. Following a WikiProject Higher Education discussion, we've begun replacing them with {{Infobox U.S. college admissions}}, which uses data available from the Common Data Set (and I think also IPEDS) for everything (except the optional historical test score parameters). Symbols for historical data are chosen automatically using the new {{Fluctuation formatter}} I created.

Would anyone be interested in starting work on a bot that could gather the data and use it to update the templates automatically every year? Given the number of colleges in the U.S., doing so will likely save hundreds of hours of editor work per year. {{u|Sdkb}}talk 20:06, 29 August 2020 (UTC)

Would it not make more sense to have a template or module to store all of the data, call it from the templates, and only have to update one page per year? A bot could still be used to import and format it, but it would save a lot of edits; what are there, something like 500 universities in the US? Primefac (talk) 20:21, 29 August 2020 (UTC)
Primefac, that could certainly be an approach, since yeah, there's more like 2000 institutions. The centralized data storage approach isn't working that well for college colors, but it could perhaps be done better or done at Wikidata rather than here. I'll mostly leave those sorts of decisions to whoever wants to actually do the coding, so long as the system they set up is reasonably robust and durable. {{u|Sdkb}}talk 23:28, 30 August 2020 (UTC)
If it's being updated by a bot I don't really see an issue with it being unable to format its updates as a Lua table. The proposal at that link also kinda misses the point imo, people can always suggest updates on the talk if they can't edit the module, and even if they make a TPER with the new data (& sources, if applicable), I don't think TEs would mind adding it into the module's /data, so knowledge of Lua isn't really required. It sounds like people just aren't submitting updates? As for this thing, if statistics are to be updated by bot, people would only need to amend if the Common Data Set values were wrong, somehow. ProcrastinatingReader (talk) 23:14, 1 September 2020 (UTC)

Opt-in service to notify discussion of past AfDs when an article is renominated

Participation at AfD often requires considerable research and debate to find consensus. It's therefore understandable that people get frustrated when, sometime after it's closed, the article is renominated without them knowing. Given how few participants many AfDs have, it sometimes happens that a well attended AfD is overturned by a much smaller group. But even when that doesn't happen, the second (or subsequent) nomination loses out on the efforts of those who researched before.

Anyone willing to make a bot that would look for "nomination)" in the title (or some other method of determining renominations) and, based on an opt-in list, notify past participants (if they want)? — Rhododendrites talk \\ 04:11, 30 August 2020 (UTC)
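In case it helps whoever picks this up, a rough sketch of one possible shape for the bot, assuming Pywikibot. The opt-in page name, its one-username-per-line format, and the signature-scanning way of finding past participants are all assumptions for illustration:

import re
import pywikibot

site = pywikibot.Site('en', 'wikipedia')
OPT_IN = 'Wikipedia:AfD renomination notifications/Opt-in'   # hypothetical page name

def past_participants(afd_title):
    # Collect usernames that signed the earlier discussion.
    text = pywikibot.Page(site, afd_title).text
    return set(re.findall(r'\[\[User(?: talk)?:([^|\]#]+)', text))

def notify_if_renominated(new_afd):
    m = re.match(r'(Wikipedia:Articles for deletion/.+) \(\d+(?:st|nd|rd|th) nomination\)$',
                 new_afd.title())
    if not m:
        return
    opted_in = set(pywikibot.Page(site, OPT_IN).text.splitlines())
    for user in past_participants(m.group(1)) & opted_in:
        talk = pywikibot.Page(site, 'User talk:' + user)
        talk.text += ('\n\n== Article you discussed at AfD has been renominated ==\n'
                      'You took part in an earlier deletion discussion of the article now at '
                      '[[' + new_afd.title() + ']]. ~~~~')
        talk.save(summary='Opt-in notification: AfD renominated')

For a 3rd or later nomination, participants of the intermediate discussions would also need checking, which is left out here.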

Not to ask the dumb question, but if one is concerned enough about the topic, could they not just watchlist the page? Primefac (talk) 12:50, 30 August 2020 (UTC)
Well, I recently wiped my watchlist, but even before that I had about 20k pages on it. It's easy to miss a renomination if there's no notification. Especially if it's a topic I'm only marginally interested in. Participation in an AfD isn't necessarily interest in watching the article or being "concerned enough about the topic"; it's about the effort and the process. I've spent a good amount of time digging up sources (or trying and failing to do so) on topics I'm not super interested in, because that's just what AfD needs sometimes. If the same exact discussion is going to happen again, there's a good argument, I think, for not losing that effort (or not replicating it, or not risking a redo where nobody puts in that effort). — Rhododendrites talk \\ 15:11, 30 August 2020 (UTC)
Fair enough. Primefac (talk) 15:16, 30 August 2020 (UTC)
I could probably do this soon, if nobody beats me to it. TheTVExpert (talk) 19:05, 31 August 2020 (UTC)
Great! — Rhododendrites talk \\ 19:25, 31 August 2020 (UTC)

Just to clarify, in case it's unclear, when I say "opt-in" I intended that to mean opt-in for the service, and not on the level of the individual AfD. i.e. "I want this in general" rather than "if this specific article is renominated, I want to be notified". — Rhododendrites talk \\ 19:25, 31 August 2020 (UTC)

Tangentially related: WP:Bots/Requests for approval/SDZeroBot 6. – SD0001 (talk) 21:22, 4 September 2020 (UTC)

WikiProject Curling template changes

Hello! I originally posted this on WP:AWBREQ, but a bot makes more sense. Currently, articles about curlers use various combinations of {{Sports links}}, {{WCT}}, {{WCF}}, {{CurlingZone}}, and other templates for external links, but they can all be simplified to just {{Sports links}}, which would standardize our templates moving forward. Could a bot check all pages that use {{WCF}}, {{WCT}}, and {{CurlingZone}}; remove those templates in the external links section (but not other article sections), along with {{SR/Olympics profile}}, {{IOC profile}}, {{COC profile}}, {{USOPC profile}}, {{Olympedia}}, and {{Olympic Channel}} (all of which are redundant with {{Sports links}}); and then add {{Sports links}} if it's not already there? Thanks! Allthegoldmedals (talk) 11:59, 20 August 2020 (UTC)
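To make the shape of the edit concrete, a hedged sketch using mwparserfromhell. The split on the External links heading is simplified (spacing around the heading varies), template-name redirects are not handled, and leftover empty bullets after removal would still need cleaning up:

import mwparserfromhell

REDUNDANT = {'WCF', 'WCT', 'CurlingZone', 'SR/Olympics profile', 'IOC profile',
             'COC profile', 'USOPC profile', 'Olympedia', 'Olympic Channel'}

def clean_external_links(wikitext):
    # Only touch the External links section; earlier sections keep their templates.
    head, sep, tail = wikitext.partition('==External links==')
    if not sep:
        return wikitext
    code = mwparserfromhell.parse(tail)
    for tpl in code.filter_templates():
        if tpl.name.strip() in REDUNDANT:
            code.remove(tpl)
    tail = str(code)
    if 'Sports links' not in tail:
        tail = '\n{{Sports links}}' + tail
    return head + sep + tail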

Allthegoldmedals, could you give example(s) of what sort of changes would be performed, either here or by diff link? Helps me to better visualise the complexity of the task. Primefac (talk) 15:26, 20 August 2020 (UTC) (please do not ping on reply)
Here are a few: [100], [101], [102], [103], [104]. As far as I can tell, most should be like the first two. Allthegoldmedals (talk) 16:52, 20 August 2020 (UTC)
The recommended use of Sports links produces suboptimal HTML. Since it generates list items, it really shouldn't be preceded by a list bullet. Just as a note. --Izno (talk) 16:44, 20 August 2020 (UTC)
@Izno: Could you expand on this? As far as I can tell, if {{Sports links}} isn't preceded by a list bullet then the first link isn't bulleted, though the subsequent links are. A202985 (talk) 18:49, 20 August 2020 (UTC)
I will try, but I can't guarantee it is the case as I said (code review on mobile is hard). I'm simply deeply suspicious that what is happening is that there is an empty list item being generated due to the implementation of sports links because a template cannot get out of a list item that has started outside the template, so far as I know. Templates should generally strive not to do that. --Izno (talk) 21:19, 23 August 2020 (UTC)
Despite the fact that sports links has a minor HTML issue (which can be fixed on the template side), we at WikiProject Curling have discussed and decided that we'd still like to carry out this template change, because many of the external link templates currently point to dead links, which is a more pressing issue. I'd like to follow up on this bot request. Allthegoldmedals (talk) 22:41, 8 September 2020 (UTC)

DYK blurb filling bot

More or less the same thing as Wikipedia:Bots/Requests for approval/DYKHousekeepingBot, which Shubinator says they don't have time to revive. The idea is to crawl Category:Pages using DYK talk with a missing entry, find the missing DYK blurbs, and add |entry= to these articles' {{DYK talk}} templates on their talk pages.

For instance, 1st Polish Light Cavalry Regiment of the Imperial Guard has the DYK blurb (found in Wikipedia:Recent additions/2009/April)

(note Polish 1st Light Cavalry Regiment of the Imperial Guard redirects to 1st Polish Light Cavalry Regiment of the Imperial Guard)

In this case, Talk:1st Polish Light Cavalry Regiment of the Imperial Guard should be updated with {{DYK talk|...|entry=... that [[light cavalry|light-cavalrymen]] of the '''[[Polish 1st Light Cavalry Regiment of the Imperial Guard]]''' saved [[Napoleon I of France|Napoleon]]'s life at least three times?}}

Headbomb {t · c · p · b} 04:40, 9 October 2020 (UTC)
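A rough outline of the crawl, assuming Pywikibot and mwparserfromhell. Finding the blurb in the Wikipedia:Recent additions archives (matching a bolded link to the article or one of its redirects, as in the example above) is the hard part and is only stubbed out here:

import mwparserfromhell
import pywikibot

site = pywikibot.Site('en', 'wikipedia')
cat = pywikibot.Category(site, 'Category:Pages using DYK talk with a missing entry')

def find_blurb(article):
    # Titles to look for: the article itself plus redirects pointing at it.
    titles = {article.title()} | {r.title() for r in article.backlinks(filter_redirects=True)}
    ...  # walk the relevant Wikipedia:Recent additions/... archive and return the matching "... that ..." line
    return None

for talk in cat.articles():
    blurb = find_blurb(talk.toggleTalkPage())
    if not blurb:
        continue
    code = mwparserfromhell.parse(talk.text)
    for tpl in code.ifilter_templates(matches=lambda t: t.name.strip() == 'DYK talk'):
        if not tpl.has('entry'):
            tpl.add('entry', blurb)
    talk.text = str(code)
    talk.save(summary='Adding missing |entry= to {{DYK talk}}')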

  Coding...  Majavah talk · edits 15:05, 9 October 2020 (UTC)
  BRFA filed!  Majavah talk · edits 15:31, 9 October 2020 (UTC)

Bot for fixing italicizations of movie/newspaper/etc. names

This might be something that'd more have to be done with AWB (in which case I'd appreciate advice on how), but to lay it out: I fairly often come across instances of e.g. Star Trek: The Next Generation that are not italicized. I can think of very few instances where this wikitext (including the link) would show up but we would not want to italicize it. Would it be possible to get a bot to go around and identify instances of missing italicizations and fix them? (Italicization obviously isn't the most pressing issue facing the 'pedia, but since it is visible to readers, I don't think WP:COSMETICBOT applies.) {{u|Sdkb}}talk 21:10, 7 September 2020 (UTC)

"I can think of very few instances where this wikitext would show up" - are there any? If so, it might not be a good task for a bot, as it wouldn't be able to differentiate here. ProcrastinatingReader (talk) 16:47, 9 September 2020 (UTC)
ProcrastinatingReader, giving this some more thought, the only instance outside of citations that I can think of is within a block of italicized text, in which case double italicization=no italicization. It also occurs to me that this goes beyond just titles to include any page at all that has an italic title but is linked to from another page not using italics. What would be the best way to address this? Should we set up a maintenance category and hand it over to the typo team or something? {{u|Sdkb}}talk 03:42, 14 September 2020 (UTC)
Newspaper titles often show up in citation templates which dislike italics. Certes (talk) 17:02, 9 September 2020 (UTC)
Certes, are bots able to ignore content within citations? {{u|Sdkb}}talk 01:57, 11 September 2020 (UTC)
I'm sure a bot could easily be programmed to do that. WP:OABOT#Resources mentions a bot which uses that technique for a different purpose, and the functionality may even come out of the box with mwparserfromhell. However, the few bots I've written have been trivial read-only hacks, so an expert may be more helpful. Certes (talk) 11:00, 11 September 2020 (UTC)
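A minimal sketch of the skip-the-citations idea with mwparserfromhell. The set of titles needing italics is assumed to come from elsewhere (e.g. pages with italic titles), and text already inside italics or quotation marks is not handled here:

import mwparserfromhell
from mwparserfromhell.nodes import Template

ITALIC_TITLES = {'Star Trek: The Next Generation'}   # assumed input list

def italicize_links(wikitext):
    code = mwparserfromhell.parse(wikitext)
    for link in code.filter_wikilinks():
        if str(link.title).strip() not in ITALIC_TITLES:
            continue
        # Skip anything nested inside a template, which covers citation templates.
        if any(isinstance(parent, Template) for parent in code.get_ancestors(link)):
            continue
        code.replace(link, "''" + str(link) + "''")
    return str(code)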

This would be a bot to remove 404 links (the ones that appear red). — Preceding unsigned comment added by Moouser (talkcontribs) 22:33, 16 November 2020 (UTC)

WP:Red links are entirely valid. This is a   Not done. --Izno (talk) 22:39, 16 November 2020 (UTC)

Replacing invalid values in currency templates

A recent change to the MediaWiki software has started assigning a tracking category, Category:Pages with non-numeric formatnum arguments, to pages that contain invalid input to formatnum, which is supposed to be given only numeric input. I have edited a few templates to get the article count down from about 150,000 to the current 31,000, but there are some instances of errors within articles that need to be corrected.

One of the errors is invalid input to currency templates, including {{US$}}, {{CAD}}, and other templates in Category:Currency templates. The invalid input often looks like {{US$|75 million}}, which should be written {{US$|75}}{{nbsp}}million. Here's a sample fix.

This search shows some of the 500+ articles that have invalid text in {{US$}}. The "insource" regex in the search shows the most common construction of the invalid text, and creating a regex to fix the affected articles should be easy. The tricky part is doing the same fix for about 50 templates and their redirects.

Is there anyone here who would be willing to work with me to fix these errors? I can create a list of probable articles and templates that are involved (although I don't know how to create a list of all of the possible redirects). I estimate that the affected article count is between 1,000 and 3,000. – Jonesey95 (talk) 15:29, 29 September 2020 (UTC)
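A hedged sketch of the regex side of such a fix, assuming Pywikibot for the editing. The template list is abbreviated, redirects aren't handled, and the pattern only covers the common "{{US$|75 million}}" shape described above:

import re
import pywikibot

CURRENCY_TEMPLATES = [r'US\$', r'CAD', r'AUD']   # abbreviated; ~50 templates plus redirects in practice

# {{US$|75 million}}  ->  {{US$|75}}{{nbsp}}million
PATTERN = re.compile(
    r'\{\{(' + '|'.join(CURRENCY_TEMPLATES) + r')\|([\d,.]+)\s+(million|billion|trillion)\}\}')

def fix_currency(wikitext):
    return PATTERN.sub(r'{{\1|\2}}{{nbsp}}\3', wikitext)

site = pywikibot.Site('en', 'wikipedia')
query = r'insource:/\{\{US\$\|[0-9.,]+ million\}\}/'
for page in site.search(query, namespaces=[0], total=50):
    new_text = fix_currency(page.text)
    if new_text != page.text:
        page.text = new_text
        page.save(summary='Fix non-numeric input to currency templates')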

Do we know what is behind this change? It would help both existing and future uses if {{formatnum:CAD 1234 squillion}} produced "CAD 1,234 squillion" and, in practice, it does. If the parser function is being changed to remove this useful feature, we might be better off writing a template to reimplement the current behaviour of formatnum: and changing {{US$}} etc. to use that template, rather than editing thousands of articles. Certes (talk) 17:46, 29 September 2020 (UTC)
This MW help page explains that sending non-numeric values to formatnum can produce "unreliable output". It looks like the MW developers have deprecated and started tracking this non-numeric input (see T237467 and T263592) as of sometime in the last week, so we either need to fix existing uses or write a new template. It would be great to have a new template that does what formatnum does; if you start developing such a template (it should have a better name than the poorly chosen "formatnum"), ping me and I'll help with QA. – Jonesey95 (talk) 18:19, 29 September 2020 (UTC)
@Jonesey95: I've knocked something together in my sandbox (actual code in sandbox2). It almost works with a single #invoke:String|replace, but unfortunately the formatnum: executes before its argument gets replaced, so it formats the placeholder ("%1") rather than the actual number. Unless someone has a clever fix, we need to jump through some hoops with three String calls (or write some Lua). I've not assumed a name for the new template. {{Formatnum}} is currently a dummy "use a colon instead" warning but could be hijacked. Certes (talk) 19:41, 29 September 2020 (UTC)

Rcat templates specifying printworthiness

A lot of rcat templates specify the printworthiness of redirects through the |printworthy= parameter of {{Redirect template}}. All of these have, in their documentation, a notice asking editors to also add {{R printworthy}} or {{R unprintworthy}} (as appropriate) to redirects categorised by the template, if in the mainspace. However, very few editors actually take notice of this instruction, so how about a bot to do this instead?

The bot would be implemented (I hypothesise; I've never actually done this myself) by running through Category:Printworthy redirects and Category:Unprintworthy redirects, checking if each page includes {{R printworthy}} or {{R unprintworthy}}, and adding the relevant template if the answer is no (within an {{Rcatshell}} if there is one).

Any thoughts? WT79 (speak to me | editing patterns | what I been doing) 17:13, 9 September 2020 (UTC)
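For anyone picking this up, a bare-bones sketch of the category walk, assuming Pywikibot and mwparserfromhell. Redirects of the two rcat template names aren't handled, and placement inside an existing {{Rcatshell}} is left as a comment since that is the fiddly part:

import mwparserfromhell
import pywikibot

site = pywikibot.Site('en', 'wikipedia')

JOBS = [('Category:Printworthy redirects', 'R printworthy'),
        ('Category:Unprintworthy redirects', 'R unprintworthy')]

for cat_name, rcat in JOBS:
    for page in pywikibot.Category(site, cat_name).articles(namespaces=[0]):
        names = {t.name.strip() for t in mwparserfromhell.parse(page.text).filter_templates()}
        if rcat in names:
            continue
        # A fuller version would insert the tag inside {{Rcatshell}} /
        # {{Redirect category shell}} when one is present, rather than appending.
        page.text = page.text.rstrip() + '\n{{' + rcat + '}}\n'
        page.save(summary='Tag printworthiness to match category membership')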

Not to ask the really dumb question, as I'm not heavily involved in that project, but if an "r from..." template includes the option to mark a redirect as unprintworthy, why do you then need another template to do the same thing? Primefac (talk) 22:20, 9 September 2020 (UTC)
I think it's meant as a visual thing, as the other templates only add categories. Fair point though. WT79 (speak to me | editing patterns | what I been doing) 07:01, 10 September 2020 (UTC)
After a bit further thinking, I think the reason is as follows: {{Redirect template}}'s |printworthy= parameter adds the pages to Category:Printworthy redirects, if they are in the main namespace. However, {{R printworthy}} / {{R unprintworthy}} are the standard Rcat templates used to mark redirects as printworthy / unprintworthy; these are used separately from other templates on the redirect. They may be used alongside other rcats which specify printworthiness, so the marking isn't just part of {{Redirect template}}, which is only supposed to be used as a meta-template. If their functionality was merged into {{Redirect template}}, and {{R printworthy}} and {{R unprintworthy}} replaced with '{{Redirect template|printworthy=<!--yes or no as appropriate-->}}', a reverse problem would be caused, as {{R printworthy}}/{{R unprintworthy}} would need to be removed from pages where printworthiness is already specified, to avoid duplication. WT79 (speak to me | editing patterns | what I been doing) 14:23, 14 September 2020 (UTC)(edited 16:16, 12 October 2020 (UTC))
If a redirect has already been placed in the appropriate printworthiness category, then the job is done and there's no need to further add separate tags for printworthiness. Right? – Uanfala (talk) 20:12, 29 October 2020 (UTC)

Bot that automatically fixes spacing after periods

Scrolling through Wikipedia:Typo Team/moss/E, I noticed that a majority of typos marked are incorrect spacing after periods.As an example, I would like to name the typo I just made between "periods" and "as". Now, to qualify for correction, the words would have to:

  • Not be in links i.e:
  • Not be in the same string, without a space, as "www", "http", "com", "org", etc.
  • Not be part of a reference.
  • Be a correct word in the language. (A metric for this could be having a page on Wiktionary.) "Periods" and "as" would meet this condition
  • And have been on an article longer than a certain time period (in the case that someone misspells a link).

Interested to hear what you think. Opalzukor (talk) 16:11, 16 September 2020 (UTC)
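To make those conditions concrete, a sketch of what a supervised (not fully automatic) check could look like. The Wiktionary lookup is deliberately stubbed out, and the line-level skipping is over-conservative:

import re

# a lowercase word, a period, then a capitalised word with no space, e.g. "periods.As"
MISSING_SPACE = re.compile(r'\b([a-z]{2,})\.([A-Z][a-z]+)')

def candidates(wikitext):
    for line in wikitext.splitlines():
        # Crude exclusions: skip whole lines containing links, URLs or references.
        if any(marker in line for marker in ('http', 'www.', '.com', '.org', '<ref', '[[')):
            continue
        for m in MISSING_SPACE.finditer(line):
            # A real run would also check both words against a dictionary (e.g. Wiktionary)
            # and confirm the text has been stable in the article for some time.
            yield m.group(0), m.group(1) + '. ' + m.group(2)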

Since both "i" and "e" have entries at Wiktionary, your use of "i.e" above would presumably be (incorrectly) modified by such a bot. I think that a supervised bot task of this sort might be possible with considerable effort and a lot more conditions to avoid false positives, but an unsupervised bot is unlikely to be possible. – Jonesey95 (talk) 17:37, 16 September 2020 (UTC)
WT:AutoWikiBrowser/Typos also works in this area. It already fixes many types of wrong punctuation but may be able to help further. Certes (talk) 17:42, 16 September 2020 (UTC)
A dedicated AWB user could have a field day fixing specific patterns, like "(letter).The", which currently has more than 5,000 hits in article space (including false positives in URLs and similar things that should not be fixed). Someone looking for and fixing common patterns like this could take a considerable load off of the Typo Team. – Jonesey95 (talk) 17:45, 16 September 2020 (UTC)
Jonesey95, wow, that is a field day in waiting indeed! {{u|Sdkb}}talk 18:04, 16 September 2020 (UTC)
  Not a good task for a bot. This is technically feasible, but it would have tons of mistakes. Programming for every edge case would be almost impossible and you would take up huge amounts of resources to locate and fix these things. BJackJS talk 20:53, 10 November 2020 (UTC)

A bot to optimize talk page archiving periods

I very often come across talk pages that are archiving either way too aggressively or (less frequently) not at all aggressively enough. Since the frequency of new talk page threads is something quantifiable, I'd think it'd be possible to use an algorithm to determine when this is happening and automatically adjust the archiving period. I envision that this would be only for mainspace talk pages, since non-mainspace pages have differing desires for how long old threads ought to stick around. Integrating with the current manual adjustment system would be tricky, but this could eventually save a bunch of editor effort and make talk pages function better. {{u|Sdkb}}talk 04:29, 13 October 2020 (UTC)
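One way the adjustment could be estimated, as a very rough sketch; the "keep roughly the ten most recent comments" target and the signature-counting heuristic are invented for illustration, and English-language timestamps are assumed:

import re
from datetime import datetime

SIG_DATE = re.compile(r'(\d{2}:\d{2}, \d{1,2} \w+ \d{4}) \(UTC\)')

def suggested_archive_age(talk_wikitext, min_days=30, max_days=365):
    dates = [datetime.strptime(d, '%H:%M, %d %B %Y') for d in SIG_DATE.findall(talk_wikitext)]
    if len(dates) < 2:
        return max_days
    span_days = max((max(dates) - min(dates)).days, 1)
    per_comment = span_days / len(dates)           # crude: treats every signature as new activity
    # aim to keep roughly the ten most recent comments' worth of discussion
    return int(min(max(10 * per_comment, min_days), max_days))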

I agree there's an issue with archive periods. Glaring issue imo, if one tries to solve this with a bot, is: how do you tell editors on local articles that they're setting the archive time wrong, and that the bot's value should be followed? Wouldn't it just result in edit warring with the bot? I mean, I remember EEng was once frogmarched off to ANI for changing 3 days to 7 days on an archive timer (or something along those lines). ProcrastinatingReader (talk) 15:16, 13 October 2020 (UTC)
ProcrastinatingReader, yeah, that's definitely the challenge. I think the way to handle it would be to use a gradual introduction. So first introduce the option to set |age=auto at User:ClueBot III/ArchiveThis and make it suggested/default, which handles the issue for new pages going forward. Then, once that's been established for a while, we could start considering mass switches for existing pages, but even then I'd assume we'd want to allow opting out. {{u|Sdkb}}talk 23:44, 13 October 2020 (UTC)
If you want support for |age=auto, that's really something to ask at User:ClueBot III, or whatever archiving bot you want to support this. But I don't see how or why existing archiving periods should be overriden by bots. Headbomb {t · c · p · b} 23:55, 13 October 2020 (UTC)
  Not a good task for a bot. I agree with Headbomb, this is not something a bot should take care of, and definitely something that would require consensus to even implement in the first place. Primefac (talk) 14:44, 16 October 2020 (UTC)

Related to this: a bot to keep automatic archival information templates such as {{archives}} or {{auto archiving notice}} in synch with actual bot parameters. That is, if we change |algo=old(60d) to |algo=old(90d) (this example uses User:Lowercase sigmabot III syntax) a bot could come in and change |age=60 to |age=90 of such a template, if present. CapnZapp (talk) 17:05, 21 October 2020 (UTC)

CapnZapp, that would be very nice. It'd be even better if we were able to adjust the way the bot/banners work so that it wouldn't be necessary to list that setting in two places. {{u|Sdkb}}talk 19:27, 8 November 2020 (UTC)
Actually, that would be rather simple to implement for {{User:MiszaBot/config}}, all that would be necessary is a parameter that switched whether {{archives}} was called, which could then pass the |algo= value. Primefac (talk) 19:34, 8 November 2020 (UTC)

Bot to purge/null edit pages

On certain pages, it would be useful to have a bot automatically do null edits after a certain period. I'm thinking of placing a {{Bot purge}} template, something like

{{Bot purge}}                  <!-- Purges every day (00:00:01 UTC)-->
{{Bot purge|12 hours}}         <!-- Purges every 12 hours (00:00:01 UTC; 12:00:01 UTC)-->
{{Bot purge|1 hour|mode=null}} <!-- Null edits every 1 hour (00:00:01 UTC; 01:00:01 UTC; 02:00:01 UTC...)-->
{{Bot purge|15 minutes}}       <!-- Purges every 15 minutes (00:00:01 UTC; 00:15:01 UTC; 00:30:01 UTC...)-->
{{Bot purge|UTC=20:00:00}}     <!-- Purges at 20:00:00 UTC every day-->

on a page, and then the bot taking its instructions from there. Headbomb {t · c · p · b} 18:54, 22 October 2020 (UTC)
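A skeleton of how the bot side could read those instructions, assuming Pywikibot. The scheduling (tracking when each page is next due) is omitted, and the purge keyword is passed straight through to the API:

import mwparserfromhell
import pywikibot

site = pywikibot.Site('en', 'wikipedia')
tracker = pywikibot.Page(site, 'Template:Bot purge')

for page in tracker.getReferences(only_template_inclusion=True):
    code = mwparserfromhell.parse(page.text)
    for tpl in code.ifilter_templates(matches=lambda t: t.name.strip() == 'Bot purge'):
        mode = str(tpl.get('mode').value).strip() if tpl.has('mode') else 'purge'
        # Interval parsing and per-page scheduling would go here; this just acts immediately.
        if mode == 'null':
            page.touch()                        # null edit: refreshes links and categories
        else:
            page.purge(forcelinkupdate=True)    # plain purge, with link table update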

If we do this, let's not mix up the separate meanings of WP:PURGE and WP:NULLEDIT by giving the template or process the wrong name. – Jonesey95 (talk) 19:20, 22 October 2020 (UTC)
Btw, may be of interest, see Joe's Null Bot, which was slightly similar (but didn't use a template), but is not currently active. ProcrastinatingReader (talk) 19:39, 22 October 2020 (UTC)
Some portals include a display relevant to the day, week or month (selected anniversary, etc.) and could benefit from this functionality. A syntax such as cron's would be more flexible at the cost of being more cryptic. Purging would suffice for that usage; I don't think a null edit is needed. Certes (talk) 19:47, 22 October 2020 (UTC)
I'm rather indifferent to the purge/null edit distinction in the name of the template, but null edits are more powerful/give the expected results in all cases, whereas purges don't. I suppose there could always be |mode=purge vs |mode=null for the cases where it matters. I know that for the usages I have in mind, purges are insufficient. Ultimately it doesn't really matter, as long we have a scalable user-friendly way to get bots to purge/null edit certain pages. Headbomb {t · c · p · b} 20:28, 22 October 2020 (UTC)
Updated the template doc (and above example) to have a |mode=purge by default, and a |mode=null override. Headbomb {t · c · p · b} 20:32, 22 October 2020 (UTC)
This does sound like it'd be useful. If it requires the bot to make an actual edit, though, it'd make any page with it really annoying to watchlist. Is there any way to make the bot not show up on watchlists by default? {{u|Sdkb}}talk 22:01, 22 October 2020 (UTC)
@Sdkb: Null edits don't show in watchlists, since nothing is recorded (you can verify this by going in your watchlist, I just did a null edit on your talk page, you won't see it). They are basically hard purges, which will update backlinks (e.g. Special:WhatLinksHere) and categories, on top of everything a normal purge does. Headbomb {t · c · p · b} 22:21, 22 October 2020 (UTC)
I also think this could be useful. We may want to think in advance about guidelines for where it can be deployed. We would not want it on a template with lots of transclusions, for example. Maybe limited to certain namespaces and transclusion limits, at least at the beginning? The bot and the template could be programmed to ignore usages outside of at least some of those limits. – Jonesey95 (talk) 22:36, 22 October 2020 (UTC)
Agree it should probably be limited. I was thinking it should only apply on the page where {{Bot purge}} is actually used. So if you have it on e.g. User:AAlertBot/Status2, then only that page would get bot-purged, and not the pages that transclude User:AAlertBot/Status2. But I'm spitballing ideas here, it could be handy to have transclusions get purged too. Perhaps |scope=transclusions / |scope=this page? Limiting to metaspace (i.e. not articles, not mainspace talk) would also likely be a good initial limitation. Headbomb {t · c · p · b} 22:54, 22 October 2020 (UTC)
Re only that page would get bot-purged, and not the pages that transclude...: That is not how the job queue works, AFAIK. Pages that are null-edited get put in the job queue to have their transclusions null-edited as well (eventually). I think a purge runs only on the purged page, though, with no downstream effects. – Jonesey95 (talk) 23:00, 22 October 2020 (UTC)
Well the job queue can do what it wants. The point of the bot purges/null-edits would be to get ahead of the job queue on user-specified pages. If null-edits cause major downstream effects, they could be limited to pages with fewer than X transclusions, where X is a small enough number (100, 50, 25 or whatever the community feels is reasonable). Purges shouldn't cause any downstream effects though, since that only refreshes that one page. Headbomb {t · c · p · b} 23:09, 22 October 2020 (UTC)
As noted above, Joe's Null Bot (talk · contribs) did this. It is no longer able to perform its design task. Although its operator, Joe Decker (talk · contribs), has not edited in six months, the bot had ceased to function well over a year earlier (possibly November 2018), for technical reasons - see the archives of WP:VPT. So if Joe's Null Bot can't do it, I don't see that another bot would be able to do it either. --Redrose64 🌹 (talk) 20:49, 23 October 2020 (UTC)
I believe the blocker was T210307 which was resolved. I don't know why the bot stopped after its resolution though, because you can definitely make null edits. With AWB, I use {{subst:void}} I believe, and that certainly works. Headbomb {t · c · p · b} 21:11, 23 October 2020 (UTC)
See also LourdesBot. Certes (talk) 22:30, 23 October 2020 (UTC)
The problem was not the ability to make null edits; as I recall the difficulty was in sending forcelinkupdate requests through the API. --Redrose64 🌹 (talk) 10:15, 24 October 2020 (UTC)
Yes, see Wikipedia:Village_pump_(technical)/Archive_181#Sending_a_POST_to_the_API_to_purge which may be relevant (RR pointed this out to me when I ran into a similar issue trying to do this). Perhaps this only causes issues with certain types of purges (eg if you want to refresh cats), though. Null edits may still work. And I guess non-linkupdate purges, where that's appropriate. ProcrastinatingReader (talk) 12:25, 24 October 2020 (UTC)
  • I'd be happy to do something like this, but I feel we should probably have consensus somewhere first? --TheSandDoctor Talk 18:23, 27 October 2020 (UTC)
  • Pages are cached for very good reasons and giving people the ability to subvert caching would need even better reasons. What is the purpose of this proposal? What page would need purging more than once per day? Most BLP articles use an age template which gives an off-by-one error when the person's birthday comes around. We're not supposed to worry about performance, but we're not supposed to fight it either, for example, by purging every BLP daily. Johnuniq (talk) 22:53, 27 October 2020 (UTC)
    • Re: "Very good reasons": Pages are cached to avoid overloading the servers when a change to a big template is made. An example of a page that would need to be purged more than once a day would be a page which is transcluded and which happens to be updated more than once a day. For example, User:AAlertBot/Status2 detects if the bot has crashed during its run, and is transcluded on WP:AALERTS as well as on my user talk page (User talk:Headbomb). It depends on User:AAlertBot/Status, which is normally edited twice daily (around 8:00 UTC). Ideally, we'd want WP:AALERTS (and a few other pages) to be purged at around 8:30 AM, so that if the bot has crashed, an error message is shown on my userpage, notifying me of an issue with the bot, and at WP:AALERTS, also letting people know that the bot crashed.
      This one is an example of a page that would need to be purged once daily. But there are many other pages that are updated more often, or less often. For example, a user might want to have {{Vandalism information}} on his talk page purged every hour. I also really don't see what this has to do with WP:BLPs. Headbomb {t · c · p · b} 23:34, 27 October 2020 (UTC)
      • I'm just saying there are a lot of articles (mostly biographies, but also others) that use Module:Age templates and the point of caching would be eroded if anyone could stick a template on any page specifying that it be purged every day (let alone every 15 minutes). For special pages such as the one you mention, a purge bot might work from a configuration list that links to each page to be regularly purged, along with when-to-purge parameters. Purging user pages to show decorations would be very undesirable. Johnuniq (talk) 02:21, 28 October 2020 (UTC)
        • That wouldn't be the point of this bot. I mean in theory it could be used for something like purging an article every 21 seconds, but in practice things could be limited to a maximum frequency / specific namespaces. Headbomb {t · c · p · b} 03:04, 28 October 2020 (UTC)
          • Not that I'm necessarily against a central list. It might very well be a better implementation. Headbomb {t · c · p · b} 03:10, 28 October 2020 (UTC)
            • From a risk/reward standpoint I have to agree with John on this; if you can immediately come up with two or three scenarios of easy (even if unintentional) misuse, it might be better to have some form of centralized page from which to pull these more-heaviy-refreshed pages; I always wished Joe's Null bot would have something like that so one wasn't beholden to the timetable of the bot op. Primefac (talk) 10:16, 28 October 2020 (UTC)
              • I'm also happy to code up a centralised version of this. I guess two lists, a purge list and a null edit list (for when upstream changes / cat changes are desired)? ProcrastinatingReader (talk) 10:37, 28 October 2020 (UTC)
                • I'd push for a third list, one that would trigger Null-bot-like edits of all pages transcluding a template (which from a TE perspective can sometimes be highly relevant). Primefac (talk) 11:01, 28 October 2020 (UTC)
                  • Although important, that sounds like an ad hoc task rather than a regular one. As well as daily etc., would it be useful to have a frequency of "purge/edit once, asap", with the bot either changing the frequency to "done (ignore)" or removing the list entry/template entirely after complying with the request? Certes (talk) 11:09, 28 October 2020 (UTC)
                    • I was thinking a "one and done" type list, wherein a user adds the transcluded template to the page, the bot runs through its transclusions, and then removes the template from the list; it wouldn't be any sort of every-day thing (hence the third list). Primefac (talk) 11:20, 28 October 2020 (UTC)
                  • YES, what Primefac said. It's very important in TfD to check if there are pages still transcluding a template which is being deleted, and having these sometimes take over a month to update (actual situation) is very time-consuming. --Gonnym (talk) 11:11, 28 October 2020 (UTC)
  • Scheduling : When considering the trigger points, a fixed date would be a useful option. Public holidays in the United Kingdom#Dates in England, Northern Ireland and Wales needs a purge each new year as it includes, for each movable holiday, a calculation of the day on which it falls in the current year. Cabayi (talk) 12:24, 28 October 2020 (UTC)
    Hmm. I wonder what the best way to technically structure this is (haven't thought too deep about it yet). As far as I can see, there's four main technical requirements. 1. Support absolute dates (eg 1 January, every year). 2. Support regular dates (eg every 1 hour). 3. Need to keep it updating from a central list regularly on wiki as entries are added/removed/changed (without resetting long regular jobs). 4. Support lots of entries. In a cron-based approach, a couple of these become harder to do, so a continuous scheduler seems to work better, although still needs to support (3). ProcrastinatingReader (talk) 12:37, 28 October 2020 (UTC)
    Actually, a cron-based approach fixes (3), although makes (4) a bit of a mess. ProcrastinatingReader (talk) 12:39, 28 October 2020 (UTC)
    Okay, easier than originally figured. Got purging/null edits down. As for the TfD use case above, mw:API:Purge in theory with "forcerecursivelinkupdate" should do the same trick, I'd think (& per mw:Manual:Purge#Null_edits). Found it was broken before when doing it via the interface; will skim MediaWiki's source to see if the API also suffers from the same issue. If not, it does greatly simplify the "ad hoc" use case. Or I'll just file a BRFA so I can test it live, I guess.   BRFA filed ProcrastinatingReader (talk) 14:03, 28 October 2020 (UTC)

For the record, I speedily approved this, and leave implementation details to @ProcrastinatingReader: and the community in general. If anyone has a problem with that, I can rescind approval. Headbomb {t · c · p · b} 18:32, 29 October 2020 (UTC)

Thanks @Headbomb. I've setup some documentation for User:ProcBot/PurgeList. For the "absolute dates" section I've just made it once a day. I may add in the ability to choose the time of day, too, if that's desired. Some parts of the list I may tidy up depending on how it's used, to simplify syntax (eg if some pages add many entries for themselves, may just collapse multiple dates into one template for ease). Feel free to add to that page. A few notes: regarding multiple lists, I now figure I may just have two separate lists. This (as a list anyone can edit), and for category members/upstream purges (eg the TfD use case above) a separate list, which may be better to protect since the bot is exempt from purge rate limits? Regarding the TfD use case, I think the "forcerecursivelinkupdate" may be bust (asked in IRC yesterday; will ask on phab). Regarding null edits, I've commented out that part for a bit; I think purge with "forcelinkupdate" (which does seem to work in my test) should do mostly the same functionality, which causes the "What links here" on that page to be updated. That's the default setting used for bot purges. It doesn't queue for upstream modifications, but then again that just adds to a slow job queue anyway. @Headbomb since you mentioned this use case above, can you try a purge on the list and see if it updates the links as you'd like? ProcrastinatingReader (talk) 19:34, 29 October 2020 (UTC)
Time of day is definitely desired here for select WP:AALERTS pages. I'll try using your purgelist and see how that works. Headbomb {t · c · p · b} 19:46, 29 October 2020 (UTC)
I'd also probably make it a hard limit that nothing can be purged more often than once per 15 minutes, or possibly once per hour. Headbomb {t · c · p · b} 19:53, 29 October 2020 (UTC)
@Headbomb I can add in a hard limit. BTW I believe your test worked (I don't have the refresh thing on my browser, and saw your page timestamp updating at regular 10 minute intervals the couple times I looked), not sure about the "what links here" part though (didn't check). ProcrastinatingReader (talk) 11:50, 30 October 2020 (UTC)
Well, glad to know it worked. My test pages didn't care about the 'what links here' functionality, so null edits weren't needed on them. Just the purge once or twice a day at specific times. Headbomb {t · c · p · b} 16:56, 30 October 2020 (UTC)

Double space bot

Hello. I'm not too familiar with Wikipedia bots, but I'm wondering if one exists that eliminates double spaces in pages ("[][]" instead of "[]"). I do a lot of control-F work to eliminate these spaces, but I think this is the kind of task that would be best completed by a bot. Thank you, KidAd talk 23:31, 2 November 2020 (UTC)

I think this would fail WP:CONTEXTBOT, I'm afraid. It's best suited for semi-automated editing, perhaps using AWB? ProcrastinatingReader (talk) 00:30, 3 November 2020 (UTC)
Agreed about using AWB, as long as there is a substantive edit to be made at the same time (see WP:AWBRULES, #4). As a specific example of why this requires context parsing, double spaces inside infobox templates typically should not be replaced, as multiple spaces are often used to align the equals signs for easy reading of parameters and values. If someone can figure out a "safe" context-sensitive way to remove these spaces, it might be a valid task for the proposed Cosmetic Bot Day. – Jonesey95 (talk) 00:34, 3 November 2020 (UTC)
This is a mix of WP:CONTEXTBOT, since oftentimes whitespace is used to align elements in a template, table, etc., and WP:COSMETICBOT, since using 343 spaces or 2 or 1 yields the same visual output. I'll often do a regex find _+ replace with _ (where _ is a space), but that does require human review a lot of the time. Headbomb {t · c · p · b} 00:50, 3 November 2020 (UTC)
Besides the WP:COSMETICBOT arguments put forth above (which I agree with), this is pointless. All web browsers will collapse multiple spaces to a single space on the rendered page except in certain circumstances - edit boxes, within a <pre>...</pre> element, where an element is styled with the declaration white-space:pre;. So opening up an edit to reduce those spaces in the wikisource is simply a waste of time. --Redrose64 🌹 (talk) 21:04, 4 November 2020 (UTC)
Some of us prefer to leave two spaces between sentences. I wouldn't double an existing single space, even manually whilst doing other edits, but I would oppose the removal of such deliberate spacing. Certes (talk) 22:03, 4 November 2020 (UTC)
I frequently remove double spaces (with AWB or with WikEd) using Regex "([a-z])\s\s+([a-z])" to "$1 $2". This will not affect spaces between sentences or around equals signs, as in template parameters. Some articles contain a lot of double or triple spaces that are very distracting for editors, so I do this along with substantial edits so the next editor won't have to handle them. Chris the speller yack 17:53, 13 November 2020 (UTC)

Guard (American and Canadian football)

A football position article of Guard (American and Canadian football) was moved on 27 August 2019‎ to Guard (gridiron football) per [105] with an edit summary of "moved page Guard (American and Canadian football) to Guard (gridiron football): to match Tackle (gridiron football position) and Center (gridiron football)"

AWB currently matches 1575 links to Guard (American and Canadian football).

Can we search and replace to bypass redirects in two capitalization formats, like:

1. [[Guard (American and Canadian football)|Guard]] --> [[Guard (gridiron football)|Guard]]
2. [[Guard (American and Canadian football)|guard]] --> [[Guard (gridiron football)|guard]]

Any "missed" links/redirects should be few and I can manually (or AWB) correct them. UW Dawgs (talk) 20:58, 6 December 2020 (UTC)

This sounds like a cosmetic change that isn't needed because the links are not broken. If it really is needed, it might be a good job for Usernamekiran's bot 4. Certes (talk) 21:20, 6 December 2020 (UTC)
  Not a good task for a bot. No reason to bypass the redirect. Also, for the record, Kiran's task is for moves that result in DABs needing fixed, not just generic link updating.Primefac (talk) 21:25, 6 December 2020 (UTC)
Withdrawn. Done via AWB which also picked up other known/desired/helpful fixes. UW Dawgs (talk) 02:35, 7 December 2020 (UTC)

Removing Redirect-Class from articles that aren't redirects

According to quarry:query/49607, there are currently 2,585 articles tagged as Redirect-Class by at least one WikiProject that are not actually redirects, and 2,179 if you further exclude disambiguation pages. These incorrectly tagged articles are likely to receive less attention from the WikiProject as a result, and I can't imagine any good reason why a project would want to leave non-redirects tagged as Redirect-Class. Thus, two questions:

  • Would the task of placing these articles back into the unassessed category be suitable for any of the existing autoassessment bots?
  • Barring that, I believe I have the technical knowledge to create a bot to fix this. Are there any unexpected edge cases I should consider?

Vahurzpu (talk) 17:57, 9 November 2020 (UTC)

With the sole exception of {{WikiProject Military history}}, there should never be any need to explicitly set |class=redirect (or equivalent), because all WikiProjects that provide Redirect-Class (other than Military history) also have code in their banners that will autodetect that a talk page is that of a redirect. So either altering it to the valueless form |class=, or removing the parameter entirely, will both work. In my opinion, the first method is best for pages in the main Talk: space, since an explicit value (stub, start, etc.) will need to be added later on; but the second method is more suited to all other talk spaces, because the namespace is autodetected so the page will be automatically placed in Template-Class or similar, as applicable.
WikiProject Military history is a special case, because it uses a large number of non-standard techniques, amongst which is the lack of autodetection for redirects, so that talk pages of redirects need an explicit class setting; and the recognised values are non-standard as well. They are |class=rdr, |class=red, |class=redirect (all case-insensitive) - it does not recognise |class=redir that the others all allow. So talk pages having {{WikiProject Military history}} and one of those three values for |class= will need to be individually checked to see if the talk page is that of a redirect - if it is, the value in |class= will need to be left alone. --Redrose64 🌹 (talk) 11:08, 10 November 2020 (UTC)
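A sketch of how that advice might translate into a text transform (regexes illustrative; the Milhist case is simply skipped for manual review, and banner-name redirects such as shortcuts to the Milhist banner aren't handled):

import re

MILHIST = re.compile(r'\{\{\s*WikiProject Military history', re.IGNORECASE)
CLASS_REDIRECT = re.compile(r'\|\s*class\s*=\s*redir(?:ect)?\s*(?=[|}])', re.IGNORECASE)

def fix_redirect_class(talk_title, wikitext):
    if MILHIST.search(wikitext):
        return wikitext                          # leave for individual checking
    if talk_title.startswith('Talk:'):
        # main Talk: blank the value so a real class can be filled in later
        return CLASS_REDIRECT.sub('|class=', wikitext)
    # other talk namespaces: drop the parameter entirely (namespace is autodetected)
    return CLASS_REDIRECT.sub('', wikitext)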
In that case,   Coding... Vahurzpu (talk) 16:36, 11 November 2020 (UTC)
Now   BRFA filed Vahurzpu (talk) 22:43, 15 November 2020 (UTC)
Now  Y Done Vahurzpu (talk) 20:08, 16 November 2020 (UTC)

Mass undo of about 200 erroneous message deliveries

{{resolved}} This is a request for a mass undo of about 200 messages delivered at 18:43 and 18:44, 8 December 2020 (UTC time) by the MediaWiki message delivery service. See its recent contributions and Special:Log/massmessage. I queued a message for delivery about 15 hours before that time stamp, and all messages were delivered, and then through some apparent hiccup, a subset of editors received the message again, 15 hours later.

I don't know of an easy way to undo those 200 edits (194 to be precise, I believe). Is there someone with some sort of script/bot/privilege who is able to quickly and easily undo them? Thanks in advance. – Jonesey95 (talk) 19:29, 8 December 2020 (UTC)

@Jonesey95: A rollbacker or admin can use User:Writ Keeper/Scripts/massRollback.js. DannyS712 I believe also has a bot approval for fixing massmessage errors. – SD0001 (talk) 19:37, 8 December 2020 (UTC)
SD0001, can't roll back using the tool because it would remove both postings. Primefac (talk) 22:33, 8 December 2020 (UTC)
I did them one-by-one with Twinkle. I hope this bug never strikes again, though. Nasty. – Jonesey95 (talk) 16:39, 9 December 2020 (UTC)

Clean up introduction sample pages

Related discussions:

A change back in June to the introduction shown to new users has resulted in new pages being created as subpages of Draft:Sample page when users complete the introduction without logging in (see Special:PrefixIndex/Draft:Sample_page). These are essentially individualized sandboxes, and should be routinely deleted - they're test pages by definition so WP:G2 applies, and they often contain material that qualifies for deletion under other speedy criteria. Can someone code an adminbot that will look for these subpages and delete them, maybe if they have not been edited in a few days? Ivanvector (Talk/Edits) 14:45, 30 September 2020 (UTC)

Thanks for creating the botreq here. As the main editor who set up the sample page system, I endorse this request. Hopefully it should be pretty easy to code, since there are no subpages of Draft:Sample page other than the random numbered ones, all of which should be deleted after a period of inactivity. We have a fairly long standard period for most drafts (6 months), but I have no issue with the period here being much shorter (after all, this is only applying to IP editors; editors who have logged in create their sample page in their userspace). Please feel free to ping me if there are any questions about the system. {{u|Sdkb}}talk 21:23, 3 October 2020 (UTC)
Well, there are only 9 currently. So could just delete by hand for now, periodically. A bot could be made to G2 delete inactive 8 digit subpages of that name, but at this rate seems like more effort than it's worth, and we'd also need an admin volunteer to run it. Also, some editors could use it as a draft page for writing an actual article. A bot wouldn't be able to differentiate. So maybe best to just rely on the regular G13 system? ProcrastinatingReader (talk) 11:16, 12 October 2020 (UTC)

Sorting on Wikipedia:Translators available

A bot would be useful on Wikipedia:Translators available for the sorting the lists of users, in each section, by the date of last edit (descending). I'd suggest running the task monthly. – SD0001 (talk) 12:19, 28 September 2020 (UTC)
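In case it saves some effort, a small sketch of the data-gathering side with Pywikibot; how usernames are extracted from, and written back to, the page's list syntax is left out, since that depends on the page's exact formatting:

import pywikibot

site = pywikibot.Site('en', 'wikipedia')

def last_edit(username):
    # Timestamp of the user's most recent contribution, or None if they have none.
    for _page, _revid, timestamp, _summary in pywikibot.User(site, username).contributions(total=1):
        return timestamp
    return None

def sort_users(usernames):
    # Most recently active first; users with no edits sort to the end.
    return sorted(usernames,
                  key=lambda u: last_edit(u) or pywikibot.Timestamp.min,
                  reverse=True)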

  Coding... Taking a shot at this. —{Canucklehead} FKA Cryptic Canadian 01:45, 8 October 2020 (UTC)
  Still doing... Had to step away from this over the weekend, still need to figure out a few things before a BRFA. —{Canucklehead} 00:17, 13 October 2020 (UTC)

DEFAULTSORT bot

Is there a bot that adds the {{DEFAULTSORT}} magic word to articles that need it but don't have it? I have a list of over 1k television "List of episodes" articles that don't have DEFAULTSORT. Cheers. -- /Alex/21 09:52, 2 October 2020 (UTC)

Do those articles share some template such as an infobox which could apply the DEFAULTSORT word? Certes (talk) 10:35, 2 October 2020 (UTC)
"List of episodes" articles don't use infoboxes, no. -- /Alex/21 11:26, 2 October 2020 (UTC)
Please give an example of one of the articles which lack a defaultsort, and also suggest a sortkey that would be desirable for that article. --Redrose64 🌹 (talk) 22:16, 3 October 2020 (UTC)
List of Star Trek: Enterprise episodes, "Star Trek: Enterprise episodes, List of". -- /Alex/21 03:11, 5 October 2020 (UTC)
The page already has
[[Category:Star Trek: Enterprise episodes| ]]
[[Category:Star Trek episode lists|Enterprise episodes]]
[[Category:Lists of American science fiction television series episodes|Star Trek: Enterprise]]
each with a different sortkey. Only one of them - Lists of American science fiction television series episodes - would benefit from having the sortkey "Star Trek: Enterprise episodes, List of", so I don't see how making that the defaultsort would be an improvement. --Redrose64 🌹 (talk) 09:52, 5 October 2020 (UTC)

There are three main cases to consider:

  • For categories consisting entirely of lists:
    • "List of" should be either omitted or moved to the end of each sortkey. The (purely cosmetic) effect should be to alphabetically distribute the list titles according to the first letter of the third word (instead of grouping them all under "L" for "List") when viewing the category. This should never actually affect their sort order.
  • For all other categories (containing both lists and non-lists):
    • A single list whose scope exactly matches that of the category should have a whitespace sortkey so it appears before "0–9" and "A" even.
    • Any other list article should remain sorted under "L" for "List of".

Note that the lists and categories in question might not use identical phrasing for some strange "local consensus" reason, so determining which case applies probably wouldn't be a good bot task. ―cobaltcigs 08:28, 17 October 2020 (UTC)

A bot that can copy articles from non-diffusing subcategories into the appropriate parent categories.

I was going to copy all films from the American television films category into the American films category (using Cat-a-lot), because the template on the American films category specifically tells editors to do this. However, an administrator objected because he did not want his watchlist to be full of hundreds of minor edits. He then requested that I get a bot to transfer articles from non-diffusing subcategories into the appropriate parent categories. I have taken into account the fact that such a bot may transfer categories that were inappropriately placed, though. Is this still an acceptable proposal? Scorpions13256 (talk) 20:31, 14 September 2020 (UTC)
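To illustrate the shape of the edits being asked about, a minimal sketch of the single-category version with Pywikibot (the sort key and the placement of the added category line are ignored here):

import pywikibot

site = pywikibot.Site('en', 'wikipedia')
source = pywikibot.Category(site, 'Category:American television films')
target = pywikibot.Category(site, 'Category:American films')

for page in source.articles(namespaces=[0]):
    if target in set(page.categories()):
        continue                                   # already in the parent category
    page.text = page.text.rstrip() + '\n[[Category:American films]]\n'
    page.save(summary='Also adding parent category (non-diffusing subcategory)')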

@Scorpions13256: Just to clarify, is this a request for all items in the sub-cats of American films, or just American television films? Mdann52 (talk) 21:57, 19 October 2020 (UTC)

Sorry that I was unclear. This was my first bot request. I was initially talking about all non-diffusing subcategories in general. however, I now think that the best move would be just to transfer all films from American television films to American films. My other proposal seems a bit drastic now that I think of it. What do you think? Scorpions13256 (talk) 23:26, 19 October 2020 (UTC)
  Doing... - Yep, probably something that needs to be looked at more widely to gain consensus, as that's potentially millions of changes. I'll try and get a BRFA together in a few days to do a one-time run to get them moved - given this is potentially thousands of changes, I think that would be best. Mdann52 (talk) 08:49, 22 October 2020 (UTC)

Hi all, there are about 650 articles which were previously peer reviewed. However, because of article moves, the links to the reviews are now broken. Category:Pages using Template:Old peer review with broken archive link. See for example Talk:Battle of the Catalaunian Plains. I'm seeking bot help repairing the 650 links. Essentially, the bot will need to go through each article in that category, determine what the name of the article was when the peer review was closed, and then update the template {{Old peer review}} on the current article talk page by adding |reviewedname=the old name. Extra useful if the date can be found and inserted too (|date=date the review was closed). --Tom (LT) (talk) 00:25, 19 September 2020 (UTC)
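A rough sketch of one way to do the lookup, assuming Pywikibot and mwparserfromhell. The old-name heuristic here (former titles usually survive as redirects, and the matching Wikipedia:Peer review/<old name>/archive1 page exists) is an assumption, and the |date= part is omitted:

import mwparserfromhell
import pywikibot

site = pywikibot.Site('en', 'wikipedia')
cat = pywikibot.Category(site, 'Category:Pages using Template:Old peer review with broken archive link')

def old_review_name(article):
    # Try redirects to the article as candidate former titles.
    for redirect in article.backlinks(filter_redirects=True, namespaces=[0]):
        name = redirect.title()
        if pywikibot.Page(site, 'Wikipedia:Peer review/' + name + '/archive1').exists():
            return name
    return None

for talk in cat.articles():
    name = old_review_name(talk.toggleTalkPage())
    if not name:
        continue
    code = mwparserfromhell.parse(talk.text)
    for tpl in code.ifilter_templates(matches=lambda t: t.name.strip() == 'Old peer review'):
        tpl.add('reviewedname', name)
    talk.text = str(code)
    talk.save(summary='Repair {{Old peer review}} link with |reviewedname=')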

  Coding... This is definitely doable. I will start making it. I think this could be extended to other broken links like this one. BJackJS talk 17:43, 21 October 2020 (UTC)
@Tom (LT) Still in progress but   BRFA filed. I expect the bot to be done by the end of the week. BJackJS talk 19:59, 21 October 2020 (UTC)
Great, thanks! --Tom (LT) (talk) 20:53, 22 October 2020 (UTC)

I would like a bot to help me with reverting vandalism.

Hi. I would like to own a bot to give me assistance with reverting vandalism and warning users who have vandalised Wikipedia pages. I would like the bot to be called OverriddenBot, since my username is Overridden and that’s what I chose for the bot. Thank you. Overridden (talk) 08:26, 19 December 2020 (UTC)

Have you considered looking into Twinkle? It will probably do what you’re after. You can use a separate (non-bot) account for this as outlined at WP:SOCK if you don’t want to fill up your edit history. If you’re after something like ClueBot, that’s something you’d need to develop yourself. ProcrastinatingReader (talk) 11:44, 19 December 2020 (UTC)
WP:Huggle would also be a good choice. Primefac (talk) 13:09, 19 December 2020 (UTC)

A bot to move biographical articles with state-only disambiguators.

We have numerous articles with titles like Thomas Williams (Alabama), using the state alone as a disambiguator. These are, as it turns out, disfavored because the person who is the subject of the article is not an example of an Alabama. However, it's a pain in the ass to dig them out and fix them manually. What I would ideally like is a bot to find all biographical articles with titles that are Person's name (State) and replace them with Person's name (State profession) (in the above case, it would be Thomas Williams (Alabama politician)), and then update all incoming links to that as well. I recognize that this can be tricky, because many people have multiple professions and it may require a human eye to choose the best disambiguator, but I think there are some broad categories that can be done automatically. For example (again, as with Thomas Williams (Alabama)), anyone who has served in the United States Congress can almost certainly be disambiguated with "politician" for their profession. Since many of the issues will be with members of Congress auto-generated at these titles in the first place, that should handle a good number of them. BD2412 T 03:03, 13 October 2020 (UTC)

I did a quick data collection run. Got bored typing out each state name, so it's not complete, but still. See Quarry for results. It's complicated slightly by the fact that not all people are in Category:Living people (as they probably should be), just using the births cat to get around it, but it could miss usages if that isn't categorised either. Also worth noting only about 118 results popped up, so assuming the data for all states is ~ < 500, possibly much less. Might be easier to adjust by hand, in that case. ProcrastinatingReader (talk) 17:10, 13 October 2020 (UTC)
ProcrastinatingReader, see here for a full list. Mr. Heart (talk) 17:23, 13 October 2020 (UTC)
You need to use underscores (_) instead of spaces.  Majavah talk · edits 17:31, 13 October 2020 (UTC)
Majavah, updated. Mr. Heart (talk) 17:34, 13 October 2020 (UTC)
I found a few others here who evaded the Quarry query by not having been born. Certes (talk) 20:39, 13 October 2020 (UTC)
Certes, unfortunately it's not foolproof. If I scrap the category check it will result in the output being mostly non-people (places, waterfalls, all that stuff). But looking at your list, Thomas M. Allen (Georgia) for example, none of the cats are really general people cats (eg births, deaths, etc). I suppose more cases could be caught by allowing "Year of birth missing" (ie just 'birth' and 'death'), but those are manual cats so also probably won't catch every case. Nevertheless, added that at this Quarry (should be done running in a few mins). Not sure if there's a better way to do it, but I don't think we could easily identify what is a bio without relevant cats. ProcrastinatingReader (talk) 21:06, 13 October 2020 (UTC)
Amended further to also check talk page for WikiProject Biographies. I think this is as close as we can get it. See new Quarry, if its execution ever finishes. If not, #49017 (with ~650 results) is the closest I've got. Given that the profession also needs to be determined, probably best to clean these by hand, rather than by bot. ProcrastinatingReader (talk) 21:32, 13 October 2020 (UTC)
I believe there are some fairly large subsets for which profession can be determined by a bot. Anyone who served in the U.S. House or Senate, or as a state governor, can be at ([State] politician); anyone who served as a U.S. federal judge can be at ([State] judge). There may be cases where a more precise descriptor would also serve, but at least moving to these would be better than just the state. BD2412 T 21:36, 13 October 2020 (UTC)
Alternately, drop a list of links on a project page and I'll do it by hand, probably around ten or twelve a day for sixty days. BD2412 T 21:40, 13 October 2020 (UTC)
BD2412 and others: Here's a table of linked articles generated from quarry 49017 above. – Jonesey95 (talk) 21:51, 13 October 2020 (UTC)
I set PetScan loose on grandchildren of American politicians by party and state, but this task may not be limited to politicians. We can't even filter by eye: it's far from obvious that Amos Marsh (Vermont) is of interest but Charlotte Lake (California) isn't. If it helps, this list is all politicians, though some may also be notable for other reasons. I can't help thinking that a non-diffusing Category:Dead people would make many such tasks much easier. Certes (talk) 22:21, 13 October 2020 (UTC)
I've added descendent categories of American people by state by occupation and removed the ones we already know about to make a second list. The few people I found who were missed elsewhere are now divided into 15 politicians and 26 others. Certes (talk) 11:46, 17 October 2020 (UTC)

Interstate 635 (Texas), Colorado River (Texas), Toyota Stadium (Texas), etc. are obviously not "examples of a Texas" either. Why are "state-only disambiguators" only "disfavored" for politicians' names? ―cobaltcigs 08:01, 17 October 2020 (UTC)

WP:NCPDAB disambiguates with occupations etc.; WP:PLACEDAB with states etc. Colorado River (Texas river) and Toyota Stadium (Texas stadium) would violate WP:CONCISE. By contrast, adding the last word to John Doe (Texas politician) adds useful information, and might help the reader distinguish him from other John Does who aren't politicians. Adding the field of notability is not only for politicians, it's just that politicians are more likely to have omitted it and to need moving. Certes (talk) 11:03, 17 October 2020 (UTC)
Yes, I agree that adding extra words to those would be redundant and silly.
Also I specifically said "politicians" because multiple Americans with the same name and same non-political occupation are typically disambiguated by [[... (OCCUPATION born YEAR)]] (plus or minus a comma, and unless their birth years are unknown), because non-politicians tend not to be strongly associated with a particular state.
Meanwhile let's also look overseas and survey how we handle multiple politicians of the same name and same non-U.S. nationality. Surely these must exist. Feel free to prove me wrong, but I doubt any analogous constructs like [[... (Bavaria politician)]] and [[... (Baden-Württemberg politician)]] are used anywhere—because we assume readers know U.S. states but not German states. Falling back on the birth year is probably most common for them as well.
So maybe applying a consistent chain of rules to everybody, including U.S. politicians, would be better. This would have the side effect of avoiding the "this person is not an instance of his/her home state" complaint altogether—with the added bonus of not pretending our readers are silly enough to interpret it that way, as suggested above. Or if we don't want to do that, we can just stick to using as few words as possible.
In any case I'd rather see an RFC about this than a bot request. ―cobaltcigs 16:58, 17 October 2020 (UTC)
Just a clarification, state-only disambiguators are not only disfavored for politicians' names, but for people generally. Politicians are just more likely to be associated with a particular state than are people of many other occupations. A given John Smith may be a politician, or even a Vermont politician, but he isn't a Vermont. BD2412 T 21:03, 23 October 2020 (UTC)
We also have a small number of non-state place qualifiers, such as John E. Johnson (Brandon). I'm not sure how to catch those. Certes (talk) 17:41, 21 October 2020 (UTC)

Hi, I'm looking to get a bot to update the Template:IMDb episodes links in TV season articles, by adding |season=x to the link template with x being the season number, so the link will directly point to that respective season on IMDb; when |season=x is not specified in the template, it just links to the most current season. I'm guessing the bot can just grab the season number from the article title? This would only need to be done for season articles; example, Fargo (season 4) while the IMDb links for Fargo (TV series) and List of Fargo episodes can remain unchanged. Thanks. Drovethrughosts (talk) 16:26, 27 October 2020 (UTC)

An example: {{Imdb episodes|2802850|Fargo}} would be changed to {{Imdb episodes|2802850|Fargo|season=4}}
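A rough sketch of the change, assuming the season number can indeed be read off the article title as suggested (add_season is just an illustrative helper name):
<syntaxhighlight lang="python">
import re
import mwparserfromhell

def add_season(title: str, wikitext: str) -> str:
    match = re.search(r'\(season (\d+)\)$', title)
    if not match:
        return wikitext                       # not a season article, leave unchanged
    code = mwparserfromhell.parse(wikitext)
    for tpl in code.filter_templates(matches=r'[Ii][Mm][Dd]b episodes'):
        if not tpl.has('season'):
            tpl.add('season', match.group(1))
    return str(code)

# add_season('Fargo (season 4)', '{{Imdb episodes|2802850|Fargo}}')
# returns '{{Imdb episodes|2802850|Fargo|season=4}}'
</syntaxhighlight>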

WikiProject redirects

A common mistake is to type "Wikiproject" instead of "WikiProject" to get to pages like Template:WikiProject Physics or Wikipedia:WikiProject Physics. So a bot that would automatically create those would be really useful.

This should only be the base pages, not the subpages like Wikipedia:WikiProject Physics/Quality Control. Headbomb {t · c · p · b} 16:51, 31 July 2020 (UTC)
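If this went ahead, the creation itself would be trivial; a minimal sketch with pywikibot, assuming the list of banner titles comes from somewhere like a database report (the two titles below are only placeholders) and that {{R from miscapitalisation}} is the appropriate rcat:
<syntaxhighlight lang="python">
import pywikibot

site = pywikibot.Site('en', 'wikipedia')
banners = ['Template:WikiProject Physics', 'Template:WikiProject Chemistry']   # placeholder input

for title in banners:
    typo_title = title.replace('WikiProject', 'Wikiproject', 1)
    redirect = pywikibot.Page(site, typo_title)
    if redirect.exists():
        continue
    redirect.text = f'#REDIRECT [[{title}]]\n\n{{{{R from miscapitalisation}}}}'
    redirect.save(summary=f'Create redirect for common miscapitalisation of [[{title}]]')
</syntaxhighlight>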

There was a deliberate action some time ago to remove WikiProject template redirects to make it easier to maintain them. I am not entirely certain that part of this request would have consensus. --Izno (talk) 17:05, 31 July 2020 (UTC)
The redirects that people got rid of were those that were very weird/non-standard ("WikiProject Phys"). This would be a systematic creation for very common typos very often made by newbies. Headbomb {t · c · p · b} 17:10, 31 July 2020 (UTC)
I'd be opposed to this template redirect creation as I find it useless (and template redirects always have a hidden downside later on). The templates are used exactly once per page. It's ok if it takes you 2 seconds more to type in the correct "P". --Gonnym (talk) 17:20, 31 July 2020 (UTC)
The point is newbies don't know that and make that mistake often. WP:CHEAP applies here. There is no downside to those redirects, and many upsides. Headbomb {t · c · p · b} 17:21, 31 July 2020 (UTC)
Newbies, and others, should be directed to User:Evad37/rater if you find them having problems with these templates. --Izno (talk) 18:00, 31 July 2020 (UTC)
Newbies at AFC should not be directed to scripts. Headbomb {t · c · p · b} 18:11, 31 July 2020 (UTC)
Newbies at AFC should also not be directed to add rubbish to talk pages.
Using a bot to create redirects for variant capitalisation will not help much when a given miscapitalisation is rarely used a second time. Look through the history of Wikipedia:Database reports/Broken WikiProject templates beginning at this revision to see the sheer variety that the newbies come up with. The last column of the report tells you how many instances existed at the time that the report ran: it's rarely above 1. --Redrose64 🌹 (talk) 18:42, 31 July 2020 (UTC)
So? That's where WP:CHEAP applies. This doesn't fix every "mistake" someone can make, but it fixes a good bunch of them. Headbomb {t · c · p · b} 19:12, 31 July 2020 (UTC)
A good solution to the common-mistake-by-newbies problem would be to provide a javascript-based form for them to add project tags. This can be done via mw:Snippets/Load JS and CSS by URL so that they don't have to install any script on their end. SD0001 (talk) 14:08, 19 August 2020 (UTC)
  • As Headbomb and Redrose said: newbies, whether at AfC or anywhere else, should not be directed to scripts. Regarding WikiProject redirects and shortcuts: they tend to be (a little bit of a) headache. I had discussed a similar issue regarding WikiProject shortcuts on Primefac's talk page a few weeks ago. For example, {{WPCannabis}} is not recognised by AWB/JWB as a WikiProject. Same goes for {{Uk-crime}}. If a new or experienced editor is creating a new article and looking for talk page banners, they can copy-paste from somewhere, or at least would be able to see it somewhere. Repeating/copying it is not a difficult thing. While I have strong opinions on shortcuts, I am flexible with case-sensitive redirects. —usernamekiran (talk) 13:43, 15 September 2020 (UTC)
  • Adding to the chorus above, I strongly agree that this is worthwhile. We should prioritize beginner-friendliness; the stuff for advanced users can and will all be figured out in due time, but there's no way to fix after the fact when a new user gets frustrated trying to tag a talk page and just gives up. {{u|Sdkb}}talk 20:08, 24 September 2020 (UTC)
  • Oh yes, this definitely sounds like a good thing – the project banners use wonky capitalisation, and I occasionally have trouble getting it right from the first try if I haven't tagged recently. – Uanfala (talk) —Preceding undated comment added 20:18, 29 October 2020 (UTC)

Redirects identified as disambiguation pages

Category:Redirects tagged as disambiguation pages contains lots of talk pages of redirects which are incorrectly tagged with {{WikiProject Disambiguation}} (or one of its 45 redirects). Please could a bot:

  1. Remove {{WikiProject Disambiguation}} (or redirect) from each talk page.
  2. Remove |class=disambig from any other project banner on the page.

Thank you — Martin (MSGJ · talk) 17:55, 29 October 2020 (UTC)

Many talk pages consist only of {{WikiProject Disambiguation}}. Should those be blanked? Certes (talk) 18:16, 29 October 2020 (UTC)
Yes, I think that would be best. — Martin (MSGJ · talk) 18:28, 29 October 2020 (UTC)
Is there anything wrong with a redirect talk page having the banner? If it's got any other sort of content – other project banners, discussions, etc. – then I don't see a problem with it. On the other hand, if the DAB banner is all there is on the page, then the whole talk page should be deleted (rather than blanked) – the problem with these talk pages is not that they have the banner, the problem is that they exist and have the potential to get in the way. I think it'll be best to have a bot delete all talk pages that meet the following criteria: a) have no other content than the dab banner, b) have only a single edit in their history (we want to be careful not to delete any meaningful history), and c) the corresponding article is either a dab page, or a redirect to one (because if it's not, then a human will probably need to have a look). – Uanfala (talk) 18:57, 29 October 2020 (UTC)
I believe WikiProject Disambiguation want to keep track of disambiguation pages and not redirects; that is why the banner should be removed. I don't have any particular opinion on blanking vs deleting, but was hoping not to restrict the task to adminbots, because it would take longer to get approval. — Martin (MSGJ · talk) 19:34, 29 October 2020 (UTC)
WikiProject Disambiguation tracks dab pages through Template:Disambiguation, which is found at the bottom of all dab pages. That's why the talk page banner is not necessary for tracking. It may be useful if the page is not a dab page (or a redirect to one) – if it is a former dab page, a potential future dab page, or a page that serves a similar function to a dab page (or a redirect to any of these), then I imagine it's conceivable the wikiproject might want to track it with a banner, though I can't vouch for that. – Uanfala (talk) 19:47, 29 October 2020 (UTC)
A couple of points re c) above. Firstly, we should keep banners on talk pages where the corresponding mainspace page is an actual dab rather than a redirect: that's exactly where the banner is meant to appear. Secondly, we can remove the banner from talk pages whose mainspace counterparts redirect to something other than a dab (i.e. an article): it's doubly inappropriate there. Certes (talk) 19:55, 29 October 2020 (UTC)
we should keep banners on talk pages where the corresponding mainspace page is an actual dab – this is precisely where the banner is not needed. If the talk page has other content, the banner doesn't hurt. But if the banner is all that is there, then, as the template's documentation makes clear, the page shouldn't have been created in the first place. – Uanfala (talk) 20:22, 29 October 2020 (UTC)
No. Redirects to disambiguation pages are not incorrect tags; only redirects to disambiguated pages are. Two different things, even though neither are disambiguation pages. Emir of Wikipedia (talk) 20:29, 29 October 2020 (UTC)
Sorry, I don't get your point, Emir of Wikipedia. Would you mind clarifying what you meant? – Uanfala (talk) 20:37, 29 October 2020 (UTC)
Imagine there are 2 redirects "A" and "B" both tagged with the WikiProject Disambiguation tag. "A" redirects to "a (disambiguation)" and "B" redirect to "Bee". It would be correct to tag A WikiProject Disambiguation tag as the redirect target is a disambiguation page, but it would not be correct to tag "B" with it as the redirect target is not a disambiguation page. Even though both "A" and "B" are not disambiguation pages, they are treated differently based on the redirect target. Sorry if you still don't get my point, it is a bit difficult to explain unless you give me some time to find some examples. Emir of Wikipedia (talk) 20:48, 29 October 2020 (UTC)
OK, that makes sense. So, to sum up: talk page banner for "A", which is a redirect to "A (disambiguation)", is correct according to you, incorrect in the opinion of MSGJ, and unnecessary in my view. Tagging "B", which is redirect to "Bee", is universally perceived as incorrect, but my point above was that here we would not necessarily want automatic actions here, because "Bee" may well turn out to be a page that the project will want to track (such as a WP:BCA, though this will need to be probed with more participants of the project). – Uanfala (talk) 21:03, 29 October 2020 (UTC)
Wouldn't that just mean we want a bot with an exemptions list, such that "B" (→ "Bee") is fixed, but "BDAB" (→ "Wikipedia:Guidelines on disambiguating the page 'B'") would be manually exempted? WT79 (speak to me | editing patterns | what I been doing) 21:31, 29 October 2020 (UTC)

Apparently this is a lot more complicated/controversial than I envisioned it would be, so I withdraw the request for now. If there is a reliable way of determining whether the target of these redirects is a dab page then I may return. — Martin (MSGJ · talk) 08:07, 30 October 2020 (UTC)

@MSGJ re "If there is a reliable way of determining whether the target of these redirects is a dab page" can we not just check for presence of category "All disambiguation pages"? All dab pages should be tagged with a dab template which places this category ProcrastinatingReader (talk) 11:48, 30 October 2020 (UTC)

Bot that marks redirects with R to/from diacritics

This task should be relatively simple. Find all cases of redirects whose title has no diacritics but whose target title does (the plain-ASCII spelling pointing to the accented one), and mark them with {{R to diacritics}}.

Then find all cases of redirects whose title has diacritics but whose target title does not, and mark them with {{R from diacritics}}.

Obviously pages that are already tagged should be skipped. Headbomb {t · c · p · b} 03:45, 17 September 2020 (UTC)

If the redirect already has similar templates, such as {{R move}}, then these and the new template should go within a new or existing {{Redirect category shell}}. Certes (talk) 07:55, 17 September 2020 (UTC)
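The matching step is simple enough to sketch: assuming we already have the redirect and target titles, stripping combining marks with unicodedata tells us which of the two rcats (if either) applies. Wrapping the result in {{Redirect category shell}} alongside any existing rcats, as noted above, would be a separate step.
<syntaxhighlight lang="python">
import unicodedata
from typing import Optional

def strip_diacritics(text: str) -> str:
    decomposed = unicodedata.normalize('NFKD', text)
    return ''.join(ch for ch in decomposed if not unicodedata.combining(ch))

def diacritics_rcat(redirect_title: str, target_title: str) -> Optional[str]:
    if redirect_title == target_title:
        return None
    if strip_diacritics(redirect_title) == target_title:
        return '{{R from diacritics}}'     # the redirect carries the diacritics
    if redirect_title == strip_diacritics(target_title):
        return '{{R to diacritics}}'       # the target carries the diacritics
    return None
</syntaxhighlight>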

Delete redirects to draftspace from mainspace

A common PMR request is to delete redirects following draftification. I believe this is covered under WP:R2. See Quarry - we only have 6 of these pages currently, so they do usually get suppressed or tagged. I imagine it's been discussed before, but I couldn't find it in BOTREQ archives: why, instead of a lot of manual deletions and PMR reqs, does a simple bot (well, adminbot) not just delete these auto after a bit of time elapses? With a basic check to ensure the redirect has no real history. ProcrastinatingReader (talk) 09:01, 2 November 2020 (UTC)

We have to come up with something so that vandals won't be able to delete valid articles. —usernamekiran (talk) 10:56, 2 November 2020 (UTC)
Hmm, I was thinking about that with the after a bit of time elapses part. When vandals draftify articles, those drafts are reverted in minutes/hours usually by patrollers. So if there's a 12 hour delay I don't think it could be abused? ProcrastinatingReader (talk) 11:06, 2 November 2020 (UTC)
For the record, Category:Candidates for speedy deletion as inappropriate cross-namespace redirects is the category - no need for a quarry every time you want to check. I also don't think this is really an issue that a bot needs to deal with; R2s are the easiest to deal with, and it's not the end of the world if an XNR sits around for a few hours. Primefac (talk) 11:07, 2 November 2020 (UTC)
Doesn't that category only list the ones actually nominated? My Quarry lists all redirects in mainspace -> draftspace (tagged or not - although I tagged the 3 without histories on the Quarry and they are now deleted). ProcrastinatingReader (talk) 11:27, 2 November 2020 (UTC)
Ah, misunderstood the point of your quarry. I would be okay with a bot nominating R2s for XNRs after a certain period of time, but not deleting them outright; as kiran said, too much potential for vandalism. Primefac (talk) 11:49, 2 November 2020 (UTC)
I don't see any vandalism problem. An important part would be the ensure the redirect has no history (I would have the threshold very low, as maybe 2 revisions or less). I don't see how you could then use the bot to vandalise; if you move an article to draftspace (and the redirect gets deleted), the article can just get moved back; if you just replace an article with a redirect to draftspace, it does not get deleted because it has some history. Seems like a lose-lose (for the vandal) situation. WT79 (speak to me | editing patterns | what I been doing) 16:15, 2 November 2020 (UTC)
I don't see any vandalism problem. Example: user moves a low-watched but perfectly satisfactory page to the draft space, which is then R2'd by the bot. The page was written five years ago and all of the original editors are inactive. Thus, no one notices, and it's deleted after six months per WP:G13. I can think of a half-dozen others, but per WP:BEANS I'll just stick with the most obvious. Primefac (talk) 17:18, 2 November 2020 (UTC)
I have similar concerns. I also thought of 2-3 scenarios where misuse could easily occur. The first scenario that came to my mind has already been explained by Primefac. Once LTAs find out about it, getting auto-confirmed isn't difficult at all. Any unwatched or WP:Forgotten Articles can be moved to draft/user space, so the redirect would have no revisions/history. The bot would perform R2 on the redirect, and the moved/new location would eventually get G13'ed. —usernamekiran (talk) 04:13, 3 November 2020 (UTC)

Correct outdated name of reference source.

Some years ago, I edited a number of articles about the Civil rights movement. I cited Civil Rights Movement Veterans (https://www.crmvet.org) as an information source. Other Wiki editors also cited that website as a source in their articles and edits. Last year it changed its name to "Civil Rights Movement Archive," but all their URLs remained the same.

By hand, I edited the Civil rights movement article to change the name of the source from "Veterans" to "Archive." My computer skills are primitive and it was far too time consuming. I did a Wikipedia search for "Civil Rights Movement Veterans" (note quotes) which returned 108 wiki articles.

Could someone run a bot to automatically change "Civil Rights Movement Veterans" references to "Civil Rights Movement Archive" (leaving all URLs unchanged)?

Thanks

Brucehartford (talk) 18:31, 15 November 2020 (UTC)
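For what it's worth, the replacement itself is safe to express as a plain string swap, since the citations link to the domain crmvet.org rather than to the spelled-out name; a sketch assuming pywikibot and the built-in search:
<syntaxhighlight lang="python">
import pywikibot

site = pywikibot.Site('en', 'wikipedia')
OLD = 'Civil Rights Movement Veterans'
NEW = 'Civil Rights Movement Archive'

for page in site.search(f'"{OLD}"', namespaces=[0]):
    if OLD not in page.text:
        continue
    # URLs are untouched: citations link to crmvet.org, not to the spelled-out name
    page.text = page.text.replace(OLD, NEW)
    page.save(summary=f'Update source name: "{OLD}" is now "{NEW}"')
</syntaxhighlight>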

Looks like there are only about 100 instances, could very easily be done with AWB and likely not worth a bot run. Primefac (talk) 19:14, 15 November 2020 (UTC)

No doubt you're correct. Unfortunately, my computer skills are not up to AWB. I looked at it, and it was beyond me. Thanks for checking into it though. Brucehartford (talk) 00:15, 18 November 2020 (UTC)

@Brucehartford:   Doing... with AWB. GoingBatty (talk) 23:15, 28 December 2020 (UTC)
@Brucehartford:   Done! GoingBatty (talk) 00:49, 29 December 2020 (UTC)

Subst:ing Template:Anchor in section titles

This is a fairly self-explanatory task; I checked with AnomieBOT, which currently does most template substing, a few days ago, but it was deemed too difficult to determine which transclusions were in section headers. See the template's documentation for details on why substitution is necessary. WT79 (speak to me | editing patterns | what I been doing) 16:52, 6 November 2020 (UTC)

There's no consensus for anchors to be substed. I know I would object to that myself. Headbomb {t · c · p · b} 18:15, 6 November 2020 (UTC)
I removed the consensus just this week (due to other reasons). Why do you believe there isn't consensus to subst anchors? --Izno (talk) 20:20, 6 November 2020 (UTC)
If the purpose of this is just to avoid broken section links, I think this is made moot by Wikipedia:Bots/Requests for approval/Cewbot 6 (which is far superior to anchors in section headers, imo) ProcrastinatingReader (talk) 21:05, 6 November 2020 (UTC)
{{Anchor}} is already used a lot, including in a large number of section headers. Per Template:Anchor/doc#Limitations, it is preferable to substitute it when using it in a heading: the edit summary for section edits of that section is generated as /* {{anchor|Foo}}Bar */ [rest of summary], even though Foo is normally invisible to readers, and the entire string, with curly brackets, does not work as a section link. This is not generated by the raw HTML, which is ignored by the summary generator. We need to clean up these existing uses. WT79 (speak to me | editing patterns | what I been doing) 21:42, 6 November 2020 (UTC)
It does pollute the wikitext, though, and make it harder to edit. It looks pretty obscure, and makes it harder to search and find these too. The real solution imo is not to make existing usages more firm, but to look at the transclusions in section headers and start removing them safely, letting Cewbot maintain them instead. Perhaps a bot to do that instead should be proposed. ProcrastinatingReader (talk) 22:07, 6 November 2020 (UTC)
Agreed with ProcrastinatingReader that most uses of anchors in section titles (namely, those for old titles that have been changed) should be removed once Cewbot makes it safe to do so. I can think of some instances where we might still want them present, though (e.g. one word anchors for complex titles that are likely to change repeatedly over time). {{u|Sdkb}}talk 19:33, 8 November 2020 (UTC)

Full rigged ship

There are about a thousand articles that contain the parameter

Ship sail plan = Full rigged ship

but should be

 Ship sail plan = Full-rigged ship

according to well-known dictionaries and common understanding of compound modifiers. About a hundred or so are unlinked, and it wouldn't hurt to link them while we're at it. There may or may not be spaces on either side of the equals sign. Chris the speller yack 17:37, 13 November 2020 (UTC)
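A sketch of the substitution (regex only, allowing optional spaces around the equals sign and linking unlinked values as requested); piped links such as [[Full rigged ship|full-rigged]] would need separate handling:
<syntaxhighlight lang="python">
import re

PATTERN = re.compile(r'(\|\s*Ship sail plan\s*=\s*)(\[\[)?Full rigged ship(\]\])?')

def fix_sail_plan(wikitext: str) -> str:
    return PATTERN.sub(r'\1[[Full-rigged ship]]', wikitext)

# '| Ship sail plan = Full rigged ship'     -> '| Ship sail plan = [[Full-rigged ship]]'
# '|Ship sail plan=[[Full rigged ship]]'    -> '|Ship sail plan=[[Full-rigged ship]]'
</syntaxhighlight>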

This is AWBable. --Izno (talk) 21:05, 16 November 2020 (UTC)
Maybe even reasonably added to the list of misspellings just for en. --Izno (talk) 21:06, 16 November 2020 (UTC)
@Chris the speller: As a first step, I changed the redirect Full rigged ship so it now contains {{R from misspelling}}. GoingBatty (talk) 06:09, 28 December 2020 (UTC)
@Chris the speller:   BRFA filed GoingBatty (talk) 23:13, 28 December 2020 (UTC)
@Chris the speller:   Doing... GoingBatty (talk) 06:08, 7 January 2021 (UTC)
@Chris the speller:   Done! GoingBatty (talk) 13:22, 7 January 2021 (UTC)

Fixing broken shortcuts to sections

When someone changes a section name, there's no indication that someone else somewhere on Wikipedia might have created a link to that section that will be broken by the name change. I occasionally come across instances of such broken anchor links. Is there any bot patrolling for this and changing links (or, if that would be disruptive in some cases, adding an {{anchor}} to the destination page)? If not, I'd think we'd want to set that up. {{u|Sdkb}}talk 05:53, 20 September 2020 (UTC)

Exactly what I was thinking of a few days back. We don't even have the automation to get a list of broken section redirects, let alone to fix them. We have no fewer than 730,000 redirects pointing to sections, in the mainspace alone. The only way AFAIK to check accurately if such a redirect is valid is to look at the rendered HTML of the target page and see if it has an element with the given ID. But the API allows for parsing only one page in a single call, which means we would have to make 730,000 API calls, and repeat the process every month or so??
A best-effort way would be to look at the wikitexts of the target pages (which the API lets us fetch 500 pages at a time) and guess whether the redirect is valid by checking the section names and {{anchor}} tags. Even then we need 730,000 / 500 = 1,460 API calls, but that's reasonable I guess. – SD0001 (talk) 10:10, 20 September 2020 (UTC)
It might take fewer calls if we group the redirects by target page, so Foo#bar and Foo#baz can be handled together. But how do we fix them once we have a list? We could add anchors for old versions of headings, at the considerable cost of retrieving multiple old versions, but I expect too many of those 730,000 redirects would need manual attention. Adding anchors automatically after future changes might be more useful. Certes (talk) 10:44, 20 September 2020 (UTC)
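A best-effort sketch of that approach, grouping the section redirects by target and scanning each target's wikitext once for headings and {{anchor}} calls (anchors emitted by other templates, and headings containing markup, will produce false results; preloading targets in batches, e.g. with pywikibot's PreloadingGenerator, would cut the number of API calls):
<syntaxhighlight lang="python">
import re
from collections import defaultdict
import pywikibot

site = pywikibot.Site('en', 'wikipedia')
REDIRECT_RE = re.compile(r'#\s*redirect\s*\[\[([^\[\]|#]+)#([^\[\]|]+)', re.IGNORECASE)

def anchors_in(wikitext: str) -> set:
    found = set()
    for heading in re.findall(r'^=+\s*(.*?)\s*=+\s*$', wikitext, flags=re.MULTILINE):
        found.add(heading)
    for anchor in re.findall(r'\{\{\s*[Aa]nchor\s*\|([^{}]*)\}\}', wikitext):
        found.update(part.strip() for part in anchor.split('|'))
    return found

def broken_section_redirects(redirect_pages):
    by_target = defaultdict(list)
    for redirect in redirect_pages:                      # pywikibot Page objects
        match = REDIRECT_RE.match(redirect.text)
        if not match:
            continue
        target, fragment = match.group(1).strip(), match.group(2).strip()
        by_target[target].append((redirect, fragment))
    for target, entries in by_target.items():
        known = anchors_in(pywikibot.Page(site, target).text)
        for redirect, fragment in entries:
            if fragment not in known:
                yield redirect, target, fragment
</syntaxhighlight>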
There is/was a bot that handles this, yes. I don't remember which. --Izno (talk) 14:25, 20 September 2020 (UTC)
Hmm, interesting. It should probably be linked from WP:ANCHOR if it's identified. And if it used to exist but has stopped working, then it's part of the larger problem of quiet bot retirements, which I know some editors are tired of hearing about but which really needs to emphasized until it's addressed. {{u|Sdkb}}talk 14:47, 20 September 2020 (UTC)
Are you sure? Just a few days back, I fixed a broken section redirect that had been broken for 4 years. – SD0001 (talk) 20:10, 20 September 2020 (UTC)
Oh boy, we do have Wikipedia:Database reports/Broken section anchors. – SD0001 (talk) 20:16, 20 September 2020 (UTC)
Also found Wikipedia:Bots/Requests for approval/BrokenAnchorBot and Wikipedia:Bots/Requests for approval/SteveBot 5. Neither of these have been active anytime in the recent past. Both needed to be run manually (not fully automatic). – SD0001 (talk) 20:25, 20 September 2020 (UTC)
Hmm, 2010 and 2011. Could either of those be mined for code/techniques that we could use to get this running again? And while it'd be better to have something manual than nothing, I think ideally we should set up a bot that runs automatically and throws up a prominent error message if it ever stops functioning, since this is an issue that needs persistent work. Courtesy pinging Steven Crossin who is still active (as of last month). {{u|Sdkb}}talk 21:36, 20 September 2020 (UTC)
It seems a challenge to me. I am trying to code this task... --Kanashimi (talk) 09:34, 8 October 2020 (UTC)
This sounds like a bot-assisted semi-automated process like DisamAssist or AWB. It may be worth mentioning in the manual that links are occasionally intended for a different article, as here: [[C#major]] usually means C-sharp major, etc. Certes (talk) 11:18, 8 October 2020 (UTC)
@Sdkb, SD0001, Certes, Izno, and Steven Crossin:   BRFA filed --Kanashimi (talk) 10:45, 10 October 2020 (UTC)
@Kanashimi: Came across Dexbot (BRFA) today, which is I guess what Izno must have been referring to. It's also fixing broken section redirects, though the two don't seem to conflict. – SD0001 (talk) 08:26, 10 November 2020 (UTC)
I actually have a more complete list; please see the "Fix broken anchor" section in User:Cewbot#Tasks. However, I checked the edits cewbot made recently, and it still seems good, including fixing a wrongly capitalized section title like this. --Kanashimi (talk) 09:50, 10 November 2020 (UTC)

Bot for fixing failed pings/misspellings of usernames

I recently remarked on the discord that people fairly frequently misspell my username, sometimes resulting in missed pings, and several others chimed in that they have the same or a similar issue. It would be nice to have a bot that could work off a whitelist of common misspellings of usernames, and fix them/ping the editor. We'd probably want a little oversight of the list to prevent abuse, but otherwise it'd hopefully be pretty straightforward. We might have it append some smalltext, similar to {{Unsigned}}. {{u|Sdkb}}talk 21:55, 9 November 2020 (UTC)

Seems like a helpful and relatively simple bot. Since it's not that high priority maybe I could take a stab at it?
Anyway, some possibilities:
  • Correct the misspelling and append something inline. For example, "@Skdb:" will be turned into "@Sdkb: (corrected from Skdb by Ovinus (talk))" on talk pages. It would ignore edits by the account Skdb.
  • Post a notification on the correct user's talk page, something like

Hi! An editor probably tried to mention you (link to diff) on page (link), but misspelled your account name. (Sent in error? Report here.) Ovinus (talk) 07:17, 10 November 2020 (UTC)

  • Gently notify the user who used the wrong name on their talk page.
The main issue with the first one is there's not a good way to distinguish between the few intended links to User:Skdb and a mistake. Plus, it gets more complicated when there's a ping of multiple people and one of them has to be changed. Because there's a redirect anyway, I think just the second and maybe third steps would be fine. Perhaps the third step would only happen if an editor repeatedly uses the wrong name (say, twice in the span of 48 hours).
In terms of the bot itself, I think a template editor-protected page with a table of the names would be sufficient. To be included the user would have to prove that they own the misspelled account (to prevent abuse). This could be done in one step by requiring the misspelled account to add their name or request their name be added to the list. Sincerely, Ovinus (talk) 07:17, 10 November 2020 (UTC)
Ovinus Real, thanks for taking this on!
I lean toward corrections/pings rather than talk page messages because the latter seems a little too intrusive (especially for the misspeller), since it requires an extra step (since the correction ping summons you to your talk page, not the source of the failed ping), and since correcting at the source allows others in the discussion to navigate to the user's page (having a redirect like I do at Skdb is not the norm).
Thinking about it, really the only parties that need to be notified are the misspeller and misspellee, so having the bot making an edit that pings both with something like Correcting misspelled username of C0mpl1c8tD NamE on behalf of BadSpellr (report error) would be sufficient. That would get around any trickiness with pings of multiple users, etc., since it wouldn't append any smalltext.
Regarding distinguishing between intended and non-intended uses of misspelled usernames, so long as the bot only works going forward (i.e. timestamps after it was set up, rather than digging through histories), I can think of very few scenarios where someone would want to intentionally link a misspelled username. For those rare cases, we could set up a simple {{Escaped wikilink}} template that would create links the bot would ignore.
Regarding vetting the whitelist, a template-protected table sounds good. Verifying ownership isn't really the criterion, though—I don't need to own Skdb for it to be a misspelling of my username. We'd just need to make sure someone doesn't enter "Ovinus Real" as a misspelling of "Ovinus Totally Fake" to make the bot mass-vandalize your signatures. Cheers, {{u|Sdkb}}talk 09:26, 10 November 2020 (UTC)
I think notifying via edit summary is sensible, and yeah, there's not much reason to link to a nonexistent user. As to ownership, I think it should be a requirement because otherwise someone—in good faith—could create an account under that name and have their pings gobbled up. For example, the account Ovinus is registered but has no edits; should they be considered a failed ping of me? I think not. (Though my usurp request for that account will be done in a few days, hopefully!) Anyways, perhaps we can get a list of templates–not including their redirects–which the bot should consider besides a direct user link. For example, should {{Noping}} be changed? If so, it should be changed without notifying the linked user. Ovinus (talk) 09:40, 10 November 2020 (UTC)
Sounds good to me. {{u|Sdkb}}talk 07:31, 17 November 2020 (UTC)
I would strongly suggest that even with the whitelist, the bot checks that the "wrong" username is not used before making a correction. The obvious advantage is that nobody has to actively patrol the whitelist vs. the new username logs, or even check before asking for an addition. There is a small disadvantage due to trolling: if Alice asks for an Aleece → Alice entry in the whitelist because many people make that mistake, Moriarty can register "Aleece" to break the corrections - "Aleece" will be blocked for impersonation but the username will be in use. However (1) this may not be a big issue, and (2) if it turns out to be there will always be time to look for solutions (e.g. correct X → Y as long as (X is not a registered username) OR (X blocked indef AND X has less than N edits)). TigraanClick here to contact me 11:04, 17 November 2020 (UTC)
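Tigraan's safety check is cheap to express; a sketch assuming pywikibot, with a simplified version of the fallback rule (it checks blocked-at-all rather than blocked indefinitely):
<syntaxhighlight lang="python">
import pywikibot

site = pywikibot.Site('en', 'wikipedia')

def safe_to_correct(misspelling: str, max_edits: int = 10) -> bool:
    """Only treat a whitelist entry as correctable if the 'wrong' name is not in use."""
    user = pywikibot.User(site, misspelling)
    if not user.isRegistered():
        return True
    # fallback proposed above: a blocked impersonation account with almost no edits
    return user.is_blocked() and user.editCount() <= max_edits
</syntaxhighlight>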

Remove deprecated parameters from Template:Infobox dog breed

Following discussion in the first half of 2020, Template:Infobox dog breed underwent a minor redesign to reduce the focus on kennel clubs from English speaking countries [108]. As a result a number of parameters were deprecated but remain in many of the 627 transclusions. It is requested that a bot be tasked to remove the following deprecated parameters from these pages:

| patronage = | fcigroup = | fcisection = | fcinum = | akcgroup = | akcstd = | akcstd1 = | akcstd2 = | akcfss = | akcmisc = | ankcgroup = | ankcstd = | ankcstd1 = | ankcstd2 = | ckcgroup = | ckcstd = | ckcstd1 = | ckcstd2 = | ckcmisc = | kcukgroup = | kcukstd = | kcukstd1 = | kcukstd2 = | nzkcgroup = | nzkcstd = | nzkcstd1 = | nzkcstd2 = | ukcgroup = | ukcstd = | ukcstd1 = | ukcstd2 = | otherstd =

This is not a war stopper, but it may cause some confusion for unknowing editors in the future. Kind regards, Cavalryman (talk) 22:46, 7 January 2021 (UTC).
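For reference, the removal itself would be a few lines with mwparserfromhell (parameter names taken verbatim from the list above):
<syntaxhighlight lang="python">
import mwparserfromhell

DEPRECATED = {
    'patronage', 'fcigroup', 'fcisection', 'fcinum', 'akcgroup', 'akcstd', 'akcstd1',
    'akcstd2', 'akcfss', 'akcmisc', 'ankcgroup', 'ankcstd', 'ankcstd1', 'ankcstd2',
    'ckcgroup', 'ckcstd', 'ckcstd1', 'ckcstd2', 'ckcmisc', 'kcukgroup', 'kcukstd',
    'kcukstd1', 'kcukstd2', 'nzkcgroup', 'nzkcstd', 'nzkcstd1', 'nzkcstd2',
    'ukcgroup', 'ukcstd', 'ukcstd1', 'ukcstd2', 'otherstd',
}

def strip_deprecated(wikitext: str) -> str:
    code = mwparserfromhell.parse(wikitext)
    for tpl in code.filter_templates(matches='Infobox dog breed'):
        for param in list(tpl.params):
            if str(param.name).strip().lower() in DEPRECATED:
                tpl.remove(param)
    return str(code)
</syntaxhighlight>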

Likely too small for a bot run, but I set up a tracking category in order to remove the deprecated parameters (when that's had a chance to filter through all 600-odd transclusions I can manually remove these params). I would suggest (at any point, really) setting up an unknown parameter check just to keep future maintenance to a minimum. Primefac (talk) 16:15, 8 January 2021 (UTC)
Primefac & Jonesey95, thank you very much, that will make the task much easier. The unknown parameters check will be of assistance also, it’s quite common to find some funny ones people have tried to insert. Kind regards, Cavalryman (talk) 22:13, 8 January 2021 (UTC).
I have no doubt. I'll let the system refresh itself overnight and I'll try to get to the param clearing at some point over the weekend. Primefac (talk) 22:44, 8 January 2021 (UTC)
Many thanks, I have cleared all in the category so far. Kind regards, Cavalryman (talk) 10:19, 9 January 2021 (UTC).
  Done as far as the deprecated params go. Primefac (talk) 19:10, 10 January 2021 (UTC)
Thank you very much for everything you have done, that’s an amazing job. Kind regards, Cavalryman (talk) 21:17, 10 January 2021 (UTC).

Article History template script

Hi all, I was wondering if anyone was interested in developing a script for talk pages to automatically roll templates like DYK, GA and PR into an {{ArticleHistory}} format? I occasionally Wikignome, and it occurs to me such a script would likely be very useful for myself and many other editors, by automating a fairly time consuming manual process. The benefits will be more readable and organised talk pages, as well as a more comprehensive history for some articles. What a noble goal! --Tom (LT) (talk) 06:44, 27 July 2020 (UTC)

I posted at village pump earlier and didn't get any responses, so I assume such a script doesn't exist, therefore I thought I'd ask here :). --Tom (LT) (talk) 06:44, 27 July 2020 (UTC)

The MilHistBot has this capability. It normally does so when articles are promoted to A-class or FAC. Hawkeye7 (discuss) 18:37, 27 July 2020 (UTC)
@Hawkeye7 that's great! Is there a way to get it as a user script? --Tom (LT) (talk) 04:50, 28 July 2020 (UTC)
Unfortunately, it would have to be rewritten. As far as I know, scripts have to be written in JavaScript. Hawkeye7 (discuss) 10:06, 28 July 2020 (UTC)
I have written code that can parse {{Article History}}. You may use the code to modify {{Article History}}, and it is in JavaScript. But I think the request could be executed automatically? --Kanashimi (talk) 11:37, 28 July 2020 (UTC)
Thanks Kanashimi! I don't understand the second half of what you have said though. How will the script happen automatically? (Is it possible that I can choose for it to execute, like with most scripts via a button added to the "More" menu?) --Tom (LT) (talk) 00:52, 29 July 2020 (UTC)
For example, we may use a bot to merge the DYK, GA and PR marks into {{Article History}}. But I don't know if there is such a mark... --Kanashimi (talk) 08:47, 29 July 2020 (UTC)
I see Kanashimi. But how can I do that using a script that I can initiate? --Tom (LT) (talk) 00:53, 2 August 2020 (UTC)
Are there some sample edits for DYK, GA and PR, so we can know more clearly what to do? --Kanashimi (talk) 01:08, 2 August 2020 (UTC)
EG @Kanashimi see Talk:FC_Bayern_Munich. Merge the "Old peer review" template into the "Article history" section. If there are separate DYK and GA templates, do the same thing.--Tom (LT) (talk) 00:34, 29 August 2020 (UTC)
Second example is Talk:Sonic the Hedgehog - would merge the GA and PR templates into a single "Article history" template. --Tom (LT) (talk) 00:36, 29 August 2020 (UTC)
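For what such a merge might look like, here is a sketch only, assuming mwparserfromhell; the {{Old peer review}} parameters (|archive=, |reviewedname=, |date=) and the {{Article history}} actionN parameter names are assumptions that should be double-checked against the live template documentation:
<syntaxhighlight lang="python">
import mwparserfromhell

def merge_old_pr(talk_wikitext: str, article_title: str) -> str:
    code = mwparserfromhell.parse(talk_wikitext)
    history = next(iter(code.filter_templates(matches='Article history')), None)
    old_pr = next(iter(code.filter_templates(matches='Old peer review')), None)
    if history is None or old_pr is None:
        return talk_wikitext
    # find the next free actionN slot
    n = 1
    while history.has(f'action{n}'):
        n += 1
    archive = str(old_pr.get('archive').value).strip() if old_pr.has('archive') else '1'
    name = str(old_pr.get('reviewedname').value).strip() if old_pr.has('reviewedname') else article_title
    history.add(f'action{n}', 'PR')
    if old_pr.has('date'):
        history.add(f'action{n}date', str(old_pr.get('date').value).strip())
    history.add(f'action{n}link', f'Wikipedia:Peer review/{name}/archive{archive}')
    history.add(f'action{n}result', 'reviewed')
    code.remove(old_pr)
    return str(code)
</syntaxhighlight>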
@Tom (LT): Thank you. I think it is possible to use a bot to combine {{Old peer review}} into {{Article History}}. But I wonder if we really want a bot to do this. Also @KingSkyLord: please give some comments, thank you. --Kanashimi (talk) 12:25, 3 September 2020 (UTC)
@Kanashimi: A bot which automatically changes the {{Article history}} gets my full support! KingSkyLord (talk | contribs) 13:07, 3 September 2020 (UTC)
It is a requirement for FAC, and the FACBot does so. It is trickier than the GA and DYK merges though, due to different formats of peer review used over time, and page moves. Hawkeye7 (discuss) 20:16, 3 September 2020 (UTC)
Thanks all. Just wondering if at least a user script could be developed (or a bot if there is community consensus!)--Tom (LT) (talk) 23:55, 3 September 2020 (UTC)
@Tom (LT): Since we could do this automatically, I think it will be better not to bother humans. @Hawkeye7: I see some pages transcluding {{Old peer review}}. Will FACBot clean them in the future? --Kanashimi (talk) 22:04, 4 September 2020 (UTC)
No, not unless they become featured articles. Hawkeye7 (discuss) 01:57, 20 September 2020 (UTC)

@Hawkeye7 could FACBot or MilHist bot be customised to run on new good articles or peer reviews every so often? --Tom (LT) (talk) 04:42, 19 September 2020 (UTC)

To merge the article history? Sure. That is doable. Hawkeye7 (discuss) 01:56, 20 September 2020 (UTC)
This is EnterpriseyBot task 7, but I haven't run it in a while. I could set it back up, though. Enterprisey (talk!) 02:14, 20 September 2020 (UTC)
Either is OK! Options are each sweep going through all articles in Category:Old requests for peer review, or simply going through recent peer reviews (they are always placed in categories titled in the pattern Category:June 2020 peer reviews). A full sweep will be needed at least once. I will leave how you assess WP:GA to you, and also whether you want to do the same for WP:DYK and so on. --Tom (LT) (talk) 23:47, 20 September 2020 (UTC)
@Hawkeye7 and Enterprisey - would it be possible to activate one of your bots bot on those pages? --Tom (LT) (talk) 23:59, 2 October 2020 (UTC)
Working on it. It seems the Python code I wrote is sort of disgusting, so I'm translating it to Rust, which might take a few more days. Enterprisey (talk!) 08:39, 19 October 2020 (UTC)
Great, thanks! It pleases me to know beautiful code will be used. --Tom (LT) (talk) 09:53, 19 October 2020 (UTC)
Tom (LT), looking good. I'll have it run daily. Enterprisey (talk!) 10:47, 24 October 2020 (UTC)
Thanks Enterprisey. I'll keep an eye on the change log of your bot for the next few days to see how it goes. Will you be running it through all Category:Old requests for peer review to start off with? --Tom (LT) (talk) 22:27, 24 October 2020 (UTC)
It doesn't support the peer review templates yet. It might be a bit tricky to implement that. For example, on Talk:Isaac Asimov, there are two peer reviews, one finished and one not started. The dates of both would have to be auto-detected from the page history, which is probably doable but would be a pain. I will, however, implement {{Old XfD multi}}, and that, combined with ITN, OTD, and DYK, should cover most cases pretty well. Enterprisey (talk!) 06:34, 25 October 2020 (UTC)
Ok I see. What information do you need? Is it the revision date only, or do you also definitely need a revision ID too? Also... regarding the current bot - apart from improving your bot's code, will it be running in new areas (Such as ITN, OTD, DYK, GA)? --Tom (LT) (talk) 07:03, 25 October 2020 (UTC)
The article history template requires a date, and I don't think it requires a revision ID. It's just that I would have to dig through the history looking for an edit that removes the peer review template... ah, I'm clearly just making excuses because that would be trickier to write  . For your other question, currently it only supports ITN, OTD, DYK, and XfD. {{GA}} shouldn't be too hard. Enterprisey (talk!) 09:05, 25 October 2020 (UTC)
If you did expand it to GA, I think that would be a good thing to help tidy up those talk pages too. At some future date I'll request a bot to insert the dates into the peer reviews. Then I'll ping you and your bot can churn through the 20,000 or so review-related talk pages to clean them up.--Tom (LT) (talk) 02:01, 27 October 2020 (UTC)

Sorry, I was just made aware of this discussion: please see Taming talk clutter from 2008 and read this discussion at Village Pump Technical.

Gimmetrow and Dr Pda designed the article milestones. Gimmetrow's old bot (Gimmebot) rolled EVERY content review template into the article milestones, so it can be done-- that is GA, PR, FA, everything. But more, he ordered the events logically and sensibly, and I have been going through and trying to fix at least the October FAs, since a) all templates are no longer rolled in by bot, b) some GA passes use faulty templates, c) many DYK noms do not identify the nom page, d) some processes are not providing oldids, and e) OTD is off doing their own thing, dumping clutter on to talk pages outside of the article milestones.

No, oldid is not REQUIRED for proper display, but neither is it hard to find. Dr pda also used to have a script that returned an oldid based on any timestamp. ALL OF THIS was accomplished more than a decade ago, so I'm sure it can be now. And the point of the milestones is to always be able to click back on any date and see what the article looked like at the time of that event.

GimmeBot processed every GA and every FA and every PR. If any one is going to take this on, please try to return sensible ordering of the milestones as they used to be and as I have been correcting them, eg, here. Separate each event, in order, and put the rest of the important stuff at the bottom. And get OTD and ITN on board, and figure out why DYK isn't providing nom pages. Happy to help if someone is going to take this on; as of now, I am repairing all FACs and FARs manually. See my contribs. SandyGeorgia (Talk) 21:19, 1 November 2020 (UTC)
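On the missing-oldid point: the API can already return the last revision at or before any given timestamp, which is essentially what Dr pda's script did, so a replacement is not hard to sketch (plain API call shown; timestamps in ISO form):
<syntaxhighlight lang="python">
from typing import Optional
import requests

def oldid_at(title: str, timestamp: str) -> Optional[int]:
    """Revision id of `title` as of `timestamp` (e.g. '2020-11-01T00:00:00Z')."""
    data = requests.get('https://en.wikipedia.org/w/api.php', params={
        'action': 'query', 'prop': 'revisions', 'titles': title,
        'rvlimit': 1, 'rvdir': 'older', 'rvstart': timestamp,
        'rvprop': 'ids|timestamp', 'format': 'json', 'formatversion': 2,
    }).json()
    revisions = data['query']['pages'][0].get('revisions', [])
    return revisions[0]['revid'] if revisions else None
</syntaxhighlight>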

Thanks for all the information. My bot task can currently format the parameters properly (all the actions in order at the top, etc) and can figure out the DYK nom page. I concur that it's not too difficult to figure out the oldid, and will put that in soon. It can also figure out the OTD template correctly, but the procedure it uses is simple and I gather there's more complexity I'm missing? As of this comment, I still haven't added GA/FA/PR, but I've set it up to add more actions to the template, and plan to add them all before I run the bot. (So if you don't really want to repair the FACs and FARs, the bot will get to them.) Speaking of running it, it seems that the task has expanded in scope enough that it'll merit another BRFA, which I'll open when I'm done coding. Finally, I notice that in some combinations, the article history looks worse than the individual templates, such as in this revision. I will probably make it combine templates only when there are three or more other templates. Enterprisey (talk!) 11:13, 2 November 2020 (UTC)
Enterprisey Glad you are on board, have hated seeing such a long labor over many decades lost and talk pages converting back to clutter where we cannot locate old events ... ipad typing now, I will be back in a few hours with more info for you. SandyGeorgia (Talk) 11:35, 2 November 2020 (UTC)
I feel like that revision is probably something that needs to be redesigned in Module:Article history, as I agree that looks ugly. ProcrastinatingReader (talk) 16:02, 2 November 2020 (UTC)
Thanks all. I think these are almost three separate tasks. Thanks Enterprisey for keeping chip away at this. Having a bot actually use the template is very useful. The second task is fixing up the template and I 100% agree latter-day additions are pretty ugly. The third task is potentially simplifying the back-end, something that is too complex for me but does seem like a job that is worth doing it it makes maintenance going forward easier. I look forward to hearing more progress from enterprisey on this eventually :) --Tom (LT) (talk) 10:18, 18 November 2020 (UTC)

Brain dump

Enterprisey starting over with a brain dump of everything I know that is going wrong with Template:Article history, and things that might be done to fix the issues. Historically, when GimmeBot was doing everything, there was very little manual intervention. Since the demise of GimmeBot, we have different processes going different ways, nothing standardized, and some editors intervening manually and causing errors. This will be partially an exercise in getting everyone back on the same page.

In no particular order of priority:

  1. See Wikipedia:Wikipedia Signpost/2008-03-24/Dispatches re the script from Dr pda we used to be able to activate from tools to find a missing oldid. Now I have to go trawling manually through articlehistory to find them.
  2. Historically, the template or the bot (I dunno, don't speak the language) was programmed to kick out errors, that I watched daily with a link on my userpage. The error category was supposed to be red when there were no errors. Maralia and I kept up with errors daily, and I can't recall what changed there. My memory could be faulty, but I think someone objected to how the error cat was being used and did away with that. I could be wrong. Also, as Gimmebot got better and better, there weren't really any errors anyway ... it had gotten to where any time an error popped up, it was talk page vandalism.
  3. I don't know what is going on with DYK. First, a whole ton of DYKBOT entries do not include the nomination page. Typically, finding them is not hard-- except when there have been article name changes. This one, for example, was really hard to find because of multiple name changes.
    There is another problem going on with DYKbot, which is explained in this discussion. I don't get it, but when the date is at the end of the month, we end up with a completely useless link in article milestones because it goes to the wrong page entirely (the next month). This will require some fixing with DYKbot.
  4. OTD, I don't know what's up there. Look at this sample. They are dumping in a string of dates, outside of the article milestones, that uses a different format. Syncing the formats would be nice! OTD uses date2= oldid2= while article history uses otd2date= otd2oldid= SO when I have to fix these manually, it's a mess of irritating typing.
  5. FACbot does not move the other information (other than events) to the bottom of the list, so we end up with a jumble. Technically, everything still works, but the reason that concerns me is that when we have a mess in there, regular editors will never learn to understand, watch and correct article milestone errors, because nothing is standardized. That is, look at this compared to this after I cleaned it up.
  6. MANY editors are not understanding that it is the ENDING date of an event that matters, and that the point of the oldid is that we should be able to click through article history and see how an article looked as it finished any event. (Look at this edit and the two after it.) And because we have gotten away from the standardized approach, with many bots and editors doing different things, there are now messes everywhere that need cleanup, separately from ongoing tasks.
    Here's a fine and dandy mess as a sample: [109] SandyGeorgia (Talk) 13:52, 2 November 2020 (UTC)
  7. I disagree with your revert reasoning on your bot work here, because you have to think long term goals :) We want editors to go back to understanding a standardized approach to talk page templates. And remember that, as soon as the next process hits and another event is added, we want the template in place already because three events on one page is clutter.
  8. Separately, features have been added to Template:Article history that aren't working, eg here. SandyGeorgia (Talk) 13:54, 2 November 2020 (UTC)
  9. Then we have errors like this, which seem to result because FACbot is not using a script like Dr Pda's to detect when the event occurred, rather it is pulling the last date from the peer review, which is when the PR page was moved. (Separately, the PR page should not have been moved, because the article history template was designed to use the name under which the event occurred ... @Wehwalt: ... so that the title is now red-linked in the PR.) SandyGeorgia (Talk) 21:15, 2 November 2020 (UTC)

I will add to this list as I recall other things ... SandyGeorgia (Talk) 13:38, 2 November 2020 (UTC)

AFC drafts that have titles identical to titles on other Wikipedias

From user talk:SD0001:

... would there be a way to do a bot report of pending articles that have titles identical to titles on other Wikipedias, with links to those foreign-language Wikipedia pages? If an article exists on another Wikipedia, it's a good indication that the draft should be approved. Thanks, Calliopejen1 (talk) 18:19 pm, 11 November 2020 (UTC)

I don't think using wikidata is an option since AFC drafts are very unlikely to have been linked to wikidata. Is there another way this could be done? – SD0001 (talk) 06:33, 13 November 2020 (UTC)

Agreed. On an only-slightly related note, how feasible would it be to get a report of all draft pages with a corresponding fully-protected/salted article? Primefac (talk) 13:36, 13 November 2020 (UTC)
@Primefac: like this? (for protected redirects: [110]) ProcrastinatingReader (talk) 17:15, 13 November 2020 (UTC)
Pretty much. Something on-wiki might be nice for slightly-easier tracking purposes. Primefac (talk) 17:53, 13 November 2020 (UTC)
Is it not possible to search Wikidata for an identical title for the Wikidata item (or identical title for the Wikidata link to a foreign-language Wikipedia), even if no links to Wikidata exist? Obviously this would not be perfect, but I think it could still be very useful. Calliopejen1 (talk) 19:40, 16 November 2020 (UTC)
It might be possible to do that, but I'm not sure how well that would translate to a bot being able to keep a list of drafts-with-the-same-name-as-existing-WD-pages updated. Primefac (talk) 13:34, 17 November 2020 (UTC)
Yes. Directly querying the database for an exact match is too slow (6+ minutes for a single page itself). Simply using the wikidata search and processing the result seems like the way to go. It's more flexible, but there could be some caveats. – SD0001 (talk) 18:02, 17 November 2020 (UTC)
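A sketch of that search-based route (wbsearchentities on the Wikidata API, keeping only hits whose English label matches the draft title exactly, case-insensitively; alias-only matches are dropped here, which may or may not be desirable):
<syntaxhighlight lang="python">
import requests

def wikidata_label_matches(draft_title: str) -> list:
    data = requests.get('https://www.wikidata.org/w/api.php', params={
        'action': 'wbsearchentities', 'search': draft_title,
        'language': 'en', 'type': 'item', 'limit': 10, 'format': 'json',
    }).json()
    return [hit['id'] for hit in data.get('search', [])
            if hit.get('label', '').lower() == draft_title.lower()]
</syntaxhighlight>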

Using a SQL query, this would be a cross-database join. My bot "shadows" does something similar: it looks for File: pages that have the same name on Commons and enwiki. It's pretty fast, even though Commons has 60 million File: pages, which exceeds the total of all mainspace pages in all wikis by a fair amount. The problem is that Wikitech is redesigning the SQL servers and cross-database joins will soon no longer be available. A Phab ticket is open to try and find a solution. If you would like to follow developments, see Phab T267992. -- GreenC 19:03, 17 November 2020 (UTC)

@GreenC and SD0001: How about just a report for drafts where the draft title matches the wikidata English-language label, if one exists? Would that be easier? It seems like most WD items have English labels, and this simpler (?) report would probably cover most of what I'm interested in. Calliopejen1 (talk) 18:51, 20 November 2020 (UTC)
@Calliopejen1: I just put together User:VahurzpuBot/Drafts matching non-English Wikipedias. The current code for this only looks at Category:AfC pending submissions by age/0 days ago and selects Wikidata items based on a case-insensitive but exact match on labels or aliases. Additionally, it doesn't update automatically yet. What other features would be useful? Vahurzpu (talk) 23:17, 20 November 2020 (UTC)
@Vahurzpu: Thanks! This is exactly what I was thinking of, but to be honest it's less helpful than I expected (still a lot of junk in here). Could you run it for one or two more days worth of AFC submissions, so we can see if this was a fluke or not? Thanks! Calliopejen1 (talk) 18:50, 24 November 2020 (UTC)

WP:TALKORDER issues

I come across way too many article talks, like Talk:Jennifer Lawrence, where the {{Archives}} causes that ugly overlap. It happens whenever the template isn't at the bottom of the list of talk banners (view source to see what I mean). To fix, we'd need a continuous bot to make sure this template keeps getting moved to the bottom of talk page banners. I don't think a CSS fix is really possible for this, and a JS fix would not be preferable to just having a bot maintain talk pages. I started a discussion on the template's talk page last week; see Redrose's response there for useful info as well (perhaps a broader bot for that purpose should be considered). It reminds me of another issue we see: DS templates are constantly in the wrong order, even though the template itself, and WP:TALKORDER, advise placing them below the talk header. Yet they seem to be scattered randomly. We also commonly have random whitespace, and thus random newlines, between talk page banners. Really, a bot to clean all this up and enforce the order (except when opted out, I suppose) would be a good idea. ProcrastinatingReader (talk) 22:04, 2 September 2020 (UTC)

You'll have to see Special:PermaLink/973120366 for the version PR refers to, since I went ahead and removed the {{archives}} (there's already a {{talk header}} so it makes it redundant). Primefac (talk) 22:19, 2 September 2020 (UTC)
@ProcrastinatingReader: I was just thinking about this the other day. WP:Talk page layout gives a fairly consistent indication of what the order for talk page banners should be, but they regularly end up more random. It'd be very nice to have a bot fixing that, and over time as people get used to a certain order, it could make the maze of talk banners easier to navigate.
Programming will be a fairly big task, though. You'd have to go through every talk page banner available and assign it an order. You'd also probably want to automate things like when to introduce collapsing of WikiProjects or {{banner holder}}. And we'd need to discuss what should happen when someone creates and adds a new banner that isn't part of the queue, or how to handle custom notice banners. I could also see complaints that if the bot operates too frequently, it's just making edits without a strong purpose. All those obstacles are possible to overcome, however, and I think if we did it'd make talk pages a lot nicer. {{u|Sdkb}}talk 20:00, 24 September 2020 (UTC)
I do think this would make talk pages a fair bit more friendly, to be honest, just by improving the consistency. I think the first part of dealing with this may be to get a consensus and/or a more complete list on what talk order is preferred. Headbomb as someone who edited WP:Talk page layout, might you have any thoughts on this proposal? ProcrastinatingReader (talk) 21:13, 30 October 2020 (UTC)
This is a pretty damn tricky task, because WP:TPL is descriptive (observations) more than prescriptive (thou shall do this or else the cable gremlins will make you regret it). There's certainly more than a few things in there that are tricky and iffy. The only thing, AFAICT, that I'd consider 'safe' to do by bot is to put the archives at the very bottom, put banners in a {{WPBS}}, and put whatever can be shoved in {{Article history}} in {{Article history}}. Headbomb {t · c · p · b} 23:03, 30 October 2020 (UTC)
For what it's worth (having just done an unrelated-but-made-me-think-of-it bot run), AWB has a set "order" that defines the order of talk page banners. Given that one of our central programs already has a metric, I'm not really sure how "tricky" this would be. Of course, how necessary is another question entirely, since any changes my bot makes are largely incidental to the overall task that it's performing.
Of course, the additional issue is that it will add yet another bot that will hide updates to a page due to a bot edit, but it's rather unlikely that bug will be fixed any time soon. Primefac (talk) 21:37, 1 November 2020 (UTC)
One challenge that AWB has with ordering talk page banners is dealing with redirects. You may want to look at the AWB custom module User:Magioladitis/WikiProjects, which probably needs some updating. GoingBatty (talk) 05:40, 28 December 2020 (UTC)
ProcrastinatingReader and Headbomb, the above prompted me to add the Template:Banner holder#Choosing banners to collapse documentation section. If you're interested, additional input/expansion would be welcome, and might eventually lead to enough standardization that a bot could take over the task. {{u|Sdkb}}talk 19:24, 8 November 2020 (UTC)
Slightly separate but you also have pages using the graphs directly, eg Talk:Robert Hunter (lyricist), rather than via their talk page wrappers. Imagine new users stumbling across this - looks a mess. ProcrastinatingReader (talk) 03:25, 21 November 2020 (UTC)

Clearing the category "Wikipedia usernames with possible policy issues"

Have a bot remove a user from the category Category:Wikipedia usernames with possible policy issues when they have been inactive for over one year or have been blocked indefinitely. Heart (talk) 03:15, 9 October 2020 (UTC)

I don't work with that category, but would it make sense for a bot to move those pages to a corresponding "inactive user" category so that the usernames could still be tracked? – Jonesey95 (talk) 05:47, 9 October 2020 (UTC)
Jonesey95, well, the notice explicitly states that users who haven't been active in a week can be removed. I think that rule is ludicrous and have extended it to a year to give time to change names, or to come back from a wikibreak. So I would see no need for the category, but it is up to the user who creates the bot to decide this. Heart (talk) 06:21, 9 October 2020 (UTC)
  Doing... Good idea, I'll get working on this. BJackJS talk 18:15, 4 December 2020 (UTC)
TheSandBot 6 is already approved for removing blocked users from that category. – SD0001 (talk) 19:48, 4 December 2020 (UTC)
Damn. BJackJS talk 20:32, 4 December 2020 (UTC)
Ping TheSandDoctor to see if he can do a run? ProcrastinatingReader (talk) 12:00, 5 December 2020 (UTC)
@ProcrastinatingReader and BJackJS: I really need to make that a daily cron job... running inside 30 min. Thanks for the ping, ProcrastinatingReader. --TheSandDoctor Talk 23:18, 5 December 2020 (UTC)

Need a bot to add remove contents to wiki pages

Hey I need a simple bot that could be able to add words to the links I send it. Maybe have the option where to add the text, but also have an option to remove all the text that you put in the bot once it comes across one of the words on the links. Might've not expressed myself the best but I hope you guys got my message. — Preceding unsigned comment added by JokerLow (talkcontribs) 23:51, 5 January 2021 (UTC)

I didn't. What are you trying to accomplish with this bot? Primefac (talk) 00:23, 6 January 2021 (UTC)

Article Alert for WP:WILDFIRE

Hello, I'm here to request a bot to make an article alerts page for the WP:WILDFIRE WikiProject, like WP:CALI and WP:USA have. --🔥LightningComplexFire🔥 17:51, 8 January 2021 (UTC)

Follow the instructions at Wikipedia:Article alerts/Subscribing Majavah (talk!) 17:54, 8 January 2021 (UTC)

Fixing punctuation before citations

  Resolved

Per MOS:REFPUNCT, citations are supposed to go after punctuation like periods and commas, not before it. This is already included in GENFIXes, but I think it's noticeable enough to readers that it'd be good to have a bot working on it; it's not really WP:COSMETICBOT to my reading. Yobot has an approved task for doing this, but given how many pages I've come across with this issue, I'm guessing it's no longer working. {{u|Sdkb}}talk 20:29, 10 January 2021 (UTC)

Have you asked the botop why the task is not running? Primefac (talk) 20:33, 10 January 2021 (UTC)
I gave them a ping above. {{u|Sdkb}}talk 22:27, 10 January 2021 (UTC)
I can work with it. The task was stopped because there were comments on some bugs pending. -- Magioladitis (talk) 21:51, 11 January 2021 (UTC)
Thanks for doing that. Probably helps that it's now part of the genfixes. Primefac (talk) 21:52, 11 January 2021 (UTC)
It always was. But to run properly it has to run with general fixes, which means the edit is sometimes lost within other minor fixes, giving the impression the bot is doing nothing worthwhile. -- Magioladitis (talk) 21:53, 11 January 2021 (UTC)

I resumed the bot task. If there is any problem, please report it immediately. -- Magioladitis (talk) 09:44, 14 January 2021 (UTC)

The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


Sansoni (publisher) is an old and important Italian publisher, whose page was recently created.

There are hundreds of pages with Cite book templates for works published by Sansoni.

It would be useful to link them to the publisher page.

So my proposal is that a bot should look for instances of {{Cite book}} where there is one of these parameters:

|publisher=G. C. Sansoni
|publisher=G.C. Sansoni
|publisher=Sansoni

And replace it respectively with:

|publisher=[[Sansoni (publisher)|G.C. Sansoni]]
|publisher=[[Sansoni (publisher)|G.C. Sansoni]]
|publisher=[[Sansoni (publisher)|Sansoni]]

The replacement should only be done on the first instance on each page, of course, to avoid excessive wikilinks.

Thank you in advance!

--Lou Crazy (talk) 02:21, 14 January 2021 (UTC)
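A minimal sketch of the first-occurrence-only replacement being requested, assuming plain regex over the wikitext (an AWB find-and-replace rule would amount to the same thing); the function name is only illustrative.

<syntaxhighlight lang="python">
import re

# Match |publisher=G. C. Sansoni, |publisher=G.C. Sansoni or |publisher=Sansoni,
# but only when it is the whole parameter value (the lookahead requires the
# next pipe or closing braces), so already-linked values are left alone.
PATTERN = re.compile(r"\|\s*publisher\s*=\s*(G\.\s?C\.\s?Sansoni|Sansoni)(?=\s*[|}])")

def link_first_sansoni(wikitext):
    def repl(match):
        name = "G.C. Sansoni" if match.group(1).startswith("G") else "Sansoni"
        return "|publisher=[[Sansoni (publisher)|" + name + "]]"
    # count=1 keeps the "only the first instance on each page" rule.
    return PATTERN.sub(repl, wikitext, count=1)
</syntaxhighlight>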

Out of curiosity, when you say "hundreds", is that low-hundreds or high-hundreds? Just looking for a ballpark figure. Primefac (talk) 02:26, 14 January 2021 (UTC)
Probably around 200 or slightly more. Searching for the word "Sansoni" yields 395 pages, and many of them use the Cite book template. Some mention other people by that surname, and some use alternate names for this publisher, so I'd guess about half of that total would be replaced by looking for those strings. Opening a few pages at random confirms my estimate. --Lou Crazy (talk) 02:31, 14 January 2021 (UTC)
Okay. Generally speaking (and I do use even that loosely) a bot run is not really necessary for <500 edits. That of course doesn't preclude someone from filing it, but for something small a) by the time trials etc are done the task is basically finished, and b) AWB has loads of users who would be willing to do this. I'll leave this open for a few days but if you don't get any takers I suggest making a request at Wikipedia:AutoWikiBrowser/Tasks. Primefac (talk) 03:19, 14 January 2021 (UTC)
Since no one volunteered here, I'm following your suggestion. Thank you! --Lou Crazy (talk) 03:02, 18 January 2021 (UTC)
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Please remove all files from this category, because it is no longer necessary (all files in this category have been out of copyright since this year). 185.172.241.184 (talk) 09:24, 15 January 2021 (UTC)

  Not done, if I interpret this category correctly, it's meant to mark which pages need their copyright status updated. If and when that happens, the cat will be empty and then it can be deleted. Primefac (talk) 11:47, 15 January 2021 (UTC)

Bot to change all instances of "wife of" to "married to".

I've been doing this by hand today and I thought maybe a bot could? To avoid complications we could start doing this with a category that has ~100 articles and check for mistakes.

More info here: https://en.wikipedia.org/wiki/Wikipedia:Writing_about_women

Samiwamy (talk) 18:35, 18 January 2021 (UTC)

@Samiwamy: This sounds like a textbook case of WP:CONTEXTBOT. — The Earwig talk 18:43, 18 January 2021 (UTC)
To provide more detail for Samiwamy, this would not work as specified. "Jane was the wife of John" would become "Jane was the married to John". If you replaced "the wife of" with "married to", that would work in the previous sentence, but it would make an error on "Jane murdered the wife of John" and many other valid constructions. – Jonesey95 (talk) 18:52, 18 January 2021 (UTC)


Thanks everyone for the info Samiwamy (talk) 18:59, 18 January 2021 (UTC)

To add to the above, Wikipedia:Writing about women is a personal essay not any kind of policy. Without even trying, I can think of numerous occasions when "wife of" would undoubtedly be preferable to any potential alternative ("Katherine Howard was the fifth wife of Henry VIII"). ‑ Iridescent 19:15, 18 January 2021 (UTC)
Yes, there are plenty of exceptions. According to Jewish views on marriage, The Torah obligates a man to not deprive his wife of food. Hume Cronyn appeared alongside Jessica Tandy, his wife of over fifty years. More controversially, some people actually are notable mainly for being the wife/husband/son/mother/whatever of someone more famous. Certes (talk) 19:24, 18 January 2021 (UTC)
I have started doing some of these manually to see if I can spot some replicable patterns that a bot can change. I think the entire task can actually be done manually, with a bit of time. BD2412 T 20:03, 18 January 2021 (UTC)
That would be nice User:BD2412. I do think I will keep doing it manually. Let me know what happens. Samiwamy (talk) 20:44, 18 January 2021 (UTC)
I think that we can more broadly have a bot change instances of "is the wife of" and "was the wife of" to "is married to" and "was married to", respectively, although we need to watch out for strings of text in direct quotes. BD2412 T 21:23, 18 January 2021 (UTC)
I just started working on this, and I don't think that a bot change would be appropriate (even for "is/was the wife of"), mainly because of people like first ladies who are mostly known because they were wives, or for mythical figures mainly known because they were wives. (And from my work so far, there isn't a universal term like "first lady" that's always used in articles to be used as a filter.) Calliopejen1 (talk) 21:56, 18 January 2021 (UTC)
There are far too many valid uses of "wife of" for this to be done in any sort of automated way. And for those of you who are embarking on this crusade, you might want to check for husbands too. – Jonesey95 (talk) 22:35, 18 January 2021 (UTC)
(ec) Yes, we also have 88,482 cases of "husband of", some but not all of which could be replaced by "married to". However, prose such as Prince Bernhard of Lippe-Biesterfeld, husband of Queen Juliana of the Netherlands, unveiled the [Statue of Maria van Riebeeck] is correct to imply that the prince is mainly notable for being married to the better known queen. Would "married to" be an improvement there? Certes (talk) 22:41, 18 January 2021 (UTC)

adding "nobots", and "category:wikipedians who opt out of message delivery" to indef blocked users

I have seen many users who have been blocked indefinitely for various reasons (socking, disruptive editing, CIR, and what not), but who still receive many newsletters and other notifications. Currently, there is User:Yapperbot/Pruner to remove inactive users from lists (WikiProject membership, FRS, etc), notifying the removed users appropriately. I am not sure what the extent of this task would be. Would it be feasible to spend resources on creating a bot task to add {{nobots}} and "category:wikipedians who opt out of message delivery" to the talk pages of users who have been blocked indefinitely and have not had an {{unblock}} request on their talk page for more than 30 days? That way, resources can be conserved by avoiding new bot messages being posted and later archived. In case the user returns after a while, or after the standard offer, they can simply remove the "nobots" and the category. Opinions are welcome. Regards, —usernamekiran (talk) 13:22, 15 September 2020 (UTC)

FWIW, I created a custom module, which did the edit(s) successfully: special:diff/978555212. I tested the module under different scenarios, and I also tested it on a few (talk) pages from Category:Indefinitely blocked Wikipedians. I couldn't find any errors, as it is a fairly basic task. I didn't save these edits, just previewed them. —usernamekiran (talk) 16:30, 15 September 2020 (UTC)
bump. —usernamekiran (talk) 10:03, 8 November 2020 (UTC)
Well, doing this to all indef blocked users is a bad idea, because we have 1 million indef blocked users. Doing it to all indef blocked users subscribed to a newsletter may be feasible, if all newsletter user lists are categorised in some way. Worth making a feature request to Naypta? ProcrastinatingReader (talk) 18:01, 27 November 2020 (UTC)
Yeah, that's what I meant: blocked users with subscriptions. Apologies for the vagueness. Maybe we can run the bot through the mailing lists, looking for users who fit the criteria of being indeffed with no unblock request (instead of going through the indef blocked category). —usernamekiran (talk) 18:39, 27 November 2020 (UTC)
Alternatively, this could be implemented at the newsletter delivery bot-level, where those bot could check if a user has been indef blocked for over a month and not deliver the newsletter if so. Headbomb {t · c · p · b} 21:22, 27 November 2020 (UTC)

Find all six tags and remove all six of them

My specific need is to find all talk pages with the following six tags and remove all six of them. I would think that, if the "table" mechanism is generalized, then it could be used by others, so my preference would be a BOT named FindAllTheseTags_ThenRemoveAll (long name, but more descriptive than FindALLremoveALL).

My list of tags is:

  • Food and drink articles needing attention to referencing and citation
  • Food and drink articles needing attention to coverage and accuracy
  • Food and drink articles needing attention to structure
  • Food and drink articles needing attention to grammar
  • Food and drink articles needing attention to supporting materials
  • Food and drink articles needing attention to accessibility

To ensure clarity of the spec: Only Talk pages with ALL SIX are to be fixed.

The reason for this BOT is to counteract the still-existing after-effects of a BOT that, back in 2008, tagged talk pages with THE ABOVE SIX tags. My BOTREQ request is on the basis of my HelpDesk request, which directed me here. Pi314m (talk) 12:33, 24 January 2021 (UTC)

I do not know what you mean by tags. If you mean raw categories, then this search finds a total of 5 possibles. If you mean something else, please specify. --Izno (talk) 18:44, 24 January 2021 (UTC)
TALK, not actual articles. For example, Talk:1-800-Flowers has Food and drink articles needing attention to structure and the other 'needing's.
Another example is Talk:1898 Canadian prohibition plebiscite.
The count is 15,644 for Category:Food and drink articles needing attention to structure.
The count is 15,673 for Category:Food and drink articles needing attention to supporting materials.
The count is 15,654 for Category:Food and drink articles needing attention to accessibility.

A glance at the first 200 overlapping article titles suggests that the bot will reduce these counts to far fewer. Pi314m (talk) 00:43, 25 January 2021 (UTC)
These categories are added by {{WikiProject Food and drink}}, you can ask for help on that template's talk page. There's nothing bots can do here. Headbomb {t · c · p · b} 00:48, 25 January 2021 (UTC)
I understood your concern as being about categories regardless; the point I was making was that there are only 5 pages of possible interest for raw categories. As Headbomb says, you will need to sort out what to do on the talk page of that WikiProject template. --Izno (talk) 04:54, 25 January 2021 (UTC)

This discussion has now progressed to Template talk:WikiProject Food and drink#2008 hangover: six tags, 15,000 cases. --Redrose64 🌹 (talk) 16:23, 25 January 2021 (UTC)

Referenced sections with "unreferenced" cleanup tags

I frequently find sections with "unreferenced" tags that do have references, like this one. Is there a bot that can replace these tags with {{refimprove}}? Jarble (talk) 19:54, 22 January 2021 (UTC)

@Jarble:   Doing... (although replacing with {{more citations needed}}, since {{refimprove}} redirects there). GoingBatty (talk) 01:17, 26 January 2021 (UTC)
@Jarble:   Done, and I'll do it monthly. GoingBatty (talk) 23:46, 27 January 2021 (UTC)
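For reference, a rough sketch of the check involved, assuming a page-level {{unreferenced}} tag and <ref>-style citations; section-level tags and other citation styles would need extra handling, and this is not the code actually being run.

<syntaxhighlight lang="python">
import re

REF_TAG = re.compile(r"<ref[\s>]", re.IGNORECASE)
UNREFERENCED = re.compile(r"\{\{\s*[Uu]nreferenced\s*(\|[^{}]*)?\}\}")

def retag_if_referenced(wikitext, date="February 2021"):
    """If the page is tagged {{unreferenced}} but actually contains <ref> tags,
    swap the tag for {{More citations needed}}. Sketch only."""
    if REF_TAG.search(wikitext) and UNREFERENCED.search(wikitext):
        return UNREFERENCED.sub(
            "{{More citations needed|date=%s}}" % date, wikitext, count=1)
    return wikitext
</syntaxhighlight>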

Automatically report highly reverted pages for page protection

The bot would scan recent reverts and inspect the page history. It would then analyse the number of reverts against pre-set thresholds. If one of these thresholds is met, it would file an automatic report at WP:RPP requesting page protection.

Example thresholds could be:

  • 4 reverts in the last 3 days by non-registered users/non-auto confirmed users - request semi-protection
  • 10 reverts in the last 3 days by non-registered users/non-auto confirmed users - request semi-protection
  • 3 reverts in the past 24 hours by more than 2 different users where all users are autoconfirmed/extended confirmed - request full protection

I have no programming experience with Wikipedia so unfortunately I won't be able to program this. Eyebeller (talk) 07:59, 18 November 2020 (UTC)

We do have filters for similar kinds of edit warring, for example 249. In many cases it's a single editor who needs an individual sanction, and they are reported to AIV by User:DatBot. Is there evidence that multi-user edit wars happen by new editors, are not caught by filter 249 (and hence not reported), and that someone doesn't report them to WP:RFPP manually? Similarly for the third bullet: is there evidence that in such a case someone doesn't just go to WP:RFPP manually if needed? Additionally, this overlaps strongly with WP:ANEW, and a bot would not be able to distinguish between genuine content disputes and conduct issues / enforcing consensus etc. Those limits are too close to WP:3RR to allow for flexibility. And I'm not sure a high false positive rate is good; given that RFPP's backlogs already often creep up, a high wrong-venue rate would not be ideal. ProcrastinatingReader (talk) 10:53, 18 November 2020 (UTC)
I want to point out that the thresholds are examples and could be changed. Eyebeller (talk) 15:25, 18 November 2020 (UTC)
The main point is long-term vandalism and abuse by multiple users. I should point out that, to cut down on false positives, we can do a more detailed analysis of those reverts to make sure that they were reverts of disruption/vandalism. This could include analysing whether the revert was made by Cluebot NG, whether the revert edit summary contained keywords like "vandalism"/"disruptive", or whether it was made with the default Huggle revert summary. Eyebeller (talk) 15:40, 18 November 2020 (UTC)
I didn't realise this was here, but there's a parallel discussion at AN. Primefac (talk) 15:17, 1 December 2020 (UTC)

Talk page notifications when topic equivalent is promoted to quality status on another project?

Hello! I posted a comment over at the Village Pump and was directed here, so I'll copy here:

I think it'd be cool if a bot could be designed to add Talk page notifications when the subject's article is promoted to Quality status at another Wikipedia project. To pick an arbitrary example, a notification could have been added to en:Talk:G.U.Y. when hu:G.U.Y. was promoted to quality status.

Added benefits could be editors comparing different language versions, encouraging translation efforts, and more editors becoming familiar with Wikidata, depending on the notification's text and bot design. I could also see notifications being posted to WikiProject talk pages, etc.

Thoughts? Concerns? Other feedback? Sorry if this idea has been brought to the table before. ---Another Believer (Talk) 15:38, 24 November 2020 (UTC)

Another Believer, is it possible to run a bot over multiple wikis? Mr. Heart (talk) 15:39, 24 November 2020 (UTC)
HeartatSchool, I have no idea! I don't know anything about bots or how they operate on/across projects. I'm just an editor who thinks this would be helpful. Actually, I think so. Notifications are posted to English Wikipedia article Talk pages when an image from Wikimedia Commons has been nominated for deletion, so, this is similar, no? ---Another Believer (Talk) 15:41, 24 November 2020 (UTC)
What other wikis have article review processes (eg GA) other than enwiki, and I suppose huwiki? ProcrastinatingReader (talk) 17:40, 24 November 2020 (UTC)
I believe (almost) every wiki has an FA and/or GA process. If an article is an FA/GA on another wiki, an FA/GA icon is shown in the languages section next to the link. – SD0001 (talk) 17:46, 24 November 2020 (UTC)
SD0001, Right, and that's helpful, but I'd also love to know when an article I'm watchlisting has been promoted in another language. Talk page notifications is one way for this to appear in my watchlist. ---Another Believer (Talk) 18:01, 24 November 2020 (UTC)
Great idea, I support it. Ludost Mlačani (talk) 23:49, 29 November 2020 (UTC)

I think this is technically difficult to do using a bot. The only reasonable approach I can think of: if we knew the name of the GA template on a given wiki (given that, although we use Legobot, other wikis probably do it manually with differently named templates), we could patrol its recent changes, check for the addition of the template, and then look up the Wikidata link to find the enwiki article and add a talk page message. Otherwise, this is probably better as a userscript with some kind of "Check other wikis for GA status" button in the toolbar. ProcrastinatingReader (talk) 12:04, 5 December 2020 (UTC)

Wikidata also keeps quality information as "link badges"; you could also listen for changes in those. Majavah (talk!) 14:16, 5 December 2020 (UTC)
Hmm, is that automatic or manually added? Also does Wikidata allow listening to changes to a specific property only? ProcrastinatingReader (talk) 14:20, 5 December 2020 (UTC)

Generating category redirects for species common names

When uploading images to Wikimedia Commons, I often notice that there are no category redirects for the common names of most species, so a large number of redirects would need to be created manually. Is there a bot that could create these missing redirect pages, using data from Wikispecies or Wikidata? For example: commons:Category:Red fox is {{category redirect|Vulpes vulpes}}. Jarble (talk) 18:23, 10 December 2020 (UTC)

Beware that some common names are ambiguous and require disambiguation pages listing multiple species and/or other meanings (or at least a hatnote from the primary topic). Many such pages exist but some may be missing. Certes (talk) 00:48, 12 December 2020 (UTC)
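If anyone picks this up, a rough sketch of how candidate redirects could be generated from Wikidata, assuming the "taxon name" (P225) and "taxon common name" (P1843) properties and Pywikibot; the ambiguity problem mentioned above means the candidates would still need a human check before saving, and the function name is only illustrative.

<syntaxhighlight lang="python">
import pywikibot

commons = pywikibot.Site("commons", "commons")
repo = commons.data_repository()

def propose_common_name_redirects(qid):
    """Yield (category page, redirect text) pairs for one taxon item.
    Sketch only: ambiguous common names must be filtered out by hand."""
    item = pywikibot.ItemPage(repo, qid)
    item.get()
    taxon_claims = item.claims.get("P225", [])        # taxon name, e.g. "Vulpes vulpes"
    if not taxon_claims:
        return
    target = taxon_claims[0].getTarget()
    for claim in item.claims.get("P1843", []):        # taxon common name
        name = claim.getTarget()                      # monolingual text value
        if name.language != "en":
            continue
        title = "Category:" + name.text[:1].upper() + name.text[1:]
        cat = pywikibot.Page(commons, title)
        if not cat.exists():                          # never touch existing categories
            yield cat, "{{Category redirect|%s}}" % target
</syntaxhighlight>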

Make archive bots assume standard naming

At Wikipedia talk:Moving a page#Updating archive bot settings when moving a page you can learn that PrimeHunter has recently created Category:Pages where archive parameter is not a subpage, and that by far the biggest reason for pages to end up there is that editors move pages without updating the talk page's archival bot instructions.

But why should humans have to do menial tasks like that at all?

I assume that when the bots were created there were no real standards and practices regarding auto-archiving, but now there are. It seems to me we can avoid needless administration (and a lot of pages that don't archive properly) if we change the code of the two main archival bots to assume the standard naming as the default. If the |archive=User talk:Example/Archive %(counter)d parameter (Lowercase Sigmabot III) and the |archiveprefix=User talk:Example/Archive parameter (ClueBot III) could be made optional, we could remove them from the standard instructions while still allowing manual override for the (few) cases where it's needed. This would mean that moving a page would no longer break auto-archiving.

Of course, if there were a good reason this wasn't implemented back when, feel free to enlighten your audience :) CapnZapp (talk) 09:59, 7 January 2021 (UTC)

A page move sometimes fails to move existing archives. It could be messy if archiving automatically starts over with new archive names. PrimeHunter (talk) 10:11, 7 January 2021 (UTC)
Agree with PrimeHunter; we should not assume that a talk page's archives were moved along with the talk page. Primefac (talk) 10:33, 7 January 2021 (UTC)
Quite, normal confirmed editors do not have the "Move subpages (up to 100)" option that is provided to admins and page movers, and they may overlook some of the directions at the "Please clean up after your move" page that is displayed following the move. --Redrose64 🌹 (talk) 13:14, 7 January 2021 (UTC)
Well, the easy solution to this is to check if the value of |archive= (minus the subpage) is a redirect, and if it has any subpages matching the subpage pattern that are non-redirects. So in that way it could be automated. For ones that don't meet the criteria, it's likely post-move cleanup is needed and it could build a report. ProcrastinatingReader (talk) 13:17, 7 January 2021 (UTC)
The suggestion was to make |archive= optional. A bot cannot check a parameter if it isn't there. It would have to look for moves in those cases. Moves aren't logged at the target name so it would have to examine the page history or incoming redirects. If somebody copy-pastes the talk page instead of moving then there might be no trace. Not demanding a subpage name will also increase the number of poor archive parameters when somebody copies the archive parameters from a random page with very different activity. PrimeHunter (talk) 22:39, 7 January 2021 (UTC)
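A sketch of the first half of the redirect check suggested above, assuming Pywikibot; the second half, confirming that no real archives already exist under the old name, is left out for brevity, and the function name is only illustrative.

<syntaxhighlight lang="python">
import pywikibot

site = pywikibot.Site("en", "wikipedia")

def archive_base_is_redirect(archive_value):
    """True if the page named in |archive=, minus its '/Archive ...' subpage
    part, is now a redirect - the usual sign that the talk page was moved
    without its archiving settings being updated."""
    base_title = archive_value.rsplit("/", 1)[0]   # drop the subpage component
    base = pywikibot.Page(site, base_title)
    return base.exists() and base.isRedirectPage()

# e.g. archive_base_is_redirect("Talk:Old name/Archive %(counter)d")
</syntaxhighlight>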

Thank you all for your consideration so far, @PrimeHunter, Primefac, Redrose64, and ProcrastinatingReader: Are you saying the occasional "overarchiving" (or whatever you feel is an appropriate name for the issue you have brought up) is deemed more disruptive than the (presumably) much larger load on human administration? That a big reason the bot writers mandated the archive name was so nothing was ever archived in the wrong place, even though it added a workload on humans that (from the layperson's perspective) is unnecessary? Perhaps a suggestion of this nature has been discussed previously? Cheers PS. If this is the wrong venue for taking a holistic approach, and discussion here should be limited to only unproblematic suggestions, please direct me to a more appropriate venue, and thank you for your time. CapnZapp (talk) 10:29, 8 January 2021 (UTC)

There are large backlogs in every area requiring human attention. There tends to be skepticism to automating these, in fear of some false positives or errors, and Prime makes a good point above as to possible pitfalls here. Here it seems like you're not requesting a new bot, but rather a tweak to the existing archive bots? In that case, you'd need to communicate with those botops and get them to implement the desired change in their bot. ProcrastinatingReader (talk) 10:31, 8 January 2021 (UTC)
I don't know the original reasoning for demanding the parameter. I just think there are valid reasons for doing it. Category:Pages where archive parameter is not a subpage currently has 2875 pages (including 710 in userspace) but the tracking was added only a week ago and some of the wrong parameters are more than 10 years old. If maintenance editors with knowledge of archiving get it down to zero and monitor it then wrong parameters should be fixed quickly, often with better results than an archive bot ignoring the wrong parameter. Many of the pages are tiny or empty and don't even need any archiving like [111] PrimeHunter (talk) 10:55, 8 January 2021 (UTC)

The website airdisaster.com appears to be used in several articles about aviation accidents, but now links to a spam site/domain hoarder, which seems very undesirable for readers. Can someone get the direct links removed and, where possible, linked to an archived page? In particular where it is linked as an external link; occurrences in references appear to have been fixed already. Pieceofmetalwork (talk) 16:07, 9 January 2021 (UTC)

@Pieceofmetalwork: Are you suggesting adding {{webarchive}} like this edit? GoingBatty (talk) 18:46, 10 January 2021 (UTC)
Yes, that would be a good solution. Pieceofmetalwork (talk) 18:48, 10 January 2021 (UTC)
@Cyberpower678: Could these links be replaced by the Internet Archive Bot? Jarble (talk) 20:18, 22 January 2021 (UTC)
Suggest trying WP:URLREQ, Jarble. ProcrastinatingReader (talk) 16:04, 2 February 2021 (UTC)
Yes this is URLREQ since it should also toggle |url-status=unfit. -- GreenC 21:36, 3 February 2021 (UTC)

It's done. Example edits: [112][113][114][115], etc. -- GreenC 03:16, 4 February 2021 (UTC)

uploading book cover should automatically fill in two tags

When you upload an image and choose the option on the list that it is a book cover, it adds the book cover tag to the Licensing section, but you then have to manually add two things to the Summary. It should automatically set Use = Infobox, since there is no possible chance there would be anything else. The other required field is Article; the wizard could easily see which article the image was just placed in, and if none is found, show a message reminding people to add one. Dream Focus 14:23, 14 February 2021 (UTC)

@Dream Focus: Why is this a BOTREQ matter? --Redrose64 🌹 (talk) 23:41, 14 February 2021 (UTC)
Where else do I request it? Also, a bot could look at files that don't have that filled out yet but are book covers, and just do this for them automatically, then eliminate any deletion notice tag some other bot already put up there. Dream Focus 00:17, 15 February 2021 (UTC)
@Dream Focus: If you are using Wikipedia:File Upload Wizard to upload the images, then Wikipedia talk:File Upload Wizard would be the right place to discuss how to improve the wizard. GoingBatty (talk) 01:38, 15 February 2021 (UTC)

Redundant template pairs

The following pairs of cleanup templates:

should not be used on the same article, but often are.

We need a bot, please, to remove the first template in each of the pairs named above.

The bot should not do this when the templates are section-specific (e.g. {{One source|section|date=October 2020}})

The bot should remove {{Multiple issues}}, where appropriate.

The bot needs to take into account common redirects (for example, {{More citations needed}} is often used via {{Refimprove}}; {{More footnotes needed}} as {{More footnotes}}, etc.).

This can be done as a one-off and then either run occasionally, or added to one of the regular clean-up tasks.

Other such pairs might be identified in future.

Prior discussion is here . Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 20:51, 21 October 2020 (UTC)

Bot or other process to keep categories and page renderings up to date

Two related proposals on the Community Wishlist survey have been rejected as out of scope, so I am putting this note here in case there is anyone interested in taking on a project to keep Wikipedia pages and categories up to date.

Basically, pages on Wikipedia are not refreshed often enough, which means that it can take weeks, months, or longer for category membership to update, or for things like age calculation in infoboxes to work correctly.

When a change is made to a template or module that involves category membership, pages that transclude that template or module require a null edit in order to update their category membership. Because of delays in the job queue, such category membership changes can take weeks, or even months. Even worse, changes to the underlying MediaWiki software that apply categories (e.g. those in Special:TrackingCategories) do not force pages into the job queue, which means that category membership for affected pages can take months, years, or forever.

These delays cause outdated information, missing information, and outright errors to be rendered for readers, and cause editors who are working on fixing problems identified by maintenance categories to be delayed in applying those fixes. When a maintenance category should be populated but is empty, it gives editors the false impression that all affected articles are working properly.

One proposed solution/workaround is to set up a background process that tracks all pages based on their last edit time stamp, including null edits. That tracking could be used to make a list of needed null edits for "stale" pages. There is some detail in the phab links below about how to generate such lists and (possibly) how to force pages into the job queue so that a null-edit bot might not be needed.

For details and links to phabricator tickets, see meta:Community Wishlist Survey 2021/Archive/Set maximum delay in updating category membership and meta:Community Wishlist Survey 2021/Archive/Correct wrong tenure lengths. (Actually, I'll just put the phab links here: T132467, T135964, T157670, T159512.) – Jonesey95 (talk) 16:34, 7 December 2020 (UTC)

I haven't dived deep into the phab tickets. Is the problem here that MediaWiki doesn't have the server resources to keep those millions of pages fresh, or is it that the resources exist but there are no algorithms in MediaWiki to do the purges automatically? Or something in between? – SD0001 (talk) 13:23, 11 December 2020 (UTC)
It's the latter. There are good comments at T157670 from February 2017 that show tables of pages and their last refresh date. If we could somehow make that report, list the pages, and "expire" or refresh/null-edit (not purge) the most out-of-date pages, that would be a start. We would have to be aware of the effect on the job queue, but I think it would be manageable. – Jonesey95 (talk) 14:15, 11 December 2020 (UTC)
Perhaps there could be a second job queue, processed only when the main one empties, containing all pages ordered by last refresh date. This would keep the process busy when and only when it has nothing more urgent to do. (In practice, I expect we'd do some sort of "find stalest 1000" query rather than actually maintaining a queue of length 40 million.) Certes (talk) 14:43, 11 December 2020 (UTC)
I'd be interested in seeing the current results of Legoktm's query (select count(*), SUBSTR(page_links_updated, 1,6) from page group by SUBSTR(page_links_updated, 1,6) order by SUBSTR(page_links_updated, 1,6) desc;), and probably some variations on it, including that same query limited to article and template space. If we could get a reasonable list of the stalest articles and templates, a bot could null-edit them systematically. – Jonesey95 (talk) 16:35, 11 December 2020 (UTC)
We may also be interested in any pages where page_links_updated IS NULL and page_touched is old. They won't have been re-parsed since creation. Unfortunately, the page table does not seem to be indexed on those columns and I don't see a relevant alternative view. Certes (talk) 17:16, 11 December 2020 (UTC)
@Jonesey95: see https://people.wikimedia.org/~legoktm/T157670/ - let me know what other queries you want me to run. Legoktm (talk) 18:25, 11 December 2020 (UTC)
Those queries are helpful. From the NS0 query, it appears to me that we have about 15 million pages in article space (although {{NUMBEROFARTICLES}} gives me 6 million pages, so if someone could explain that, please do), of which 8 million have been refreshed in the last two months. That leaves about 7 million "stale" article pages, if I understand the report (which I clearly do not). If we refresh one article per second with a bot, which doesn't seem like a heavy load, we can do 2.6 million articles every 30 days. How do we get this process started? I think we would need to generate a list of the names of the stale articles somehow.
If we can get this working for articles, we can look at expanding it to other namespaces. – Jonesey95 (talk) 20:43, 11 December 2020 (UTC)
The other 9 million are non-article pages such as redirects and dabs. Further reading: Wikipedia:Database reports/Page count by namespace. Certes (talk) 21:31, 11 December 2020 (UTC)
Thanks! That might make it even easier. If we can get a list of the X thousand most stale articles (non-redirect, non-dabs) and feed them to a null-edit bot at one per second, we might be able to get the whole (actual) article space refreshed in less than a month, and then keep it that way with a background process that null-edits newly stale articles. – Jonesey95 (talk) 23:20, 11 December 2020 (UTC)
We can go beyond main namespace but need to be a bit careful. (Refreshing Template:Pagetype might take a while!) Certes (talk) 23:48, 11 December 2020 (UTC)
FWIW, you can purge the links of a page at a rate of around 20/request (each request every 5-10 secs incl delay). Any more and the request times out. So it's closer to 2-4 pages per second you can update. ProcrastinatingReader (talk) 14:32, 19 December 2020 (UTC)
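To make the mechanics concrete, here is a minimal sketch of the refresh step being discussed, assuming the standard action=purge API with forcelinkupdate and the roughly 20-titles-per-request limit noted above; a production bot would log in, handle errors, and watch the job queue, and the function name is only illustrative.

<syntaxhighlight lang="python">
import time
import requests

API = "https://en.wikipedia.org/w/api.php"
session = requests.Session()
session.headers["User-Agent"] = "stale-page-refresh-sketch/0.1"

def refresh(titles, batch_size=20, delay=5):
    """Queue link-table updates (the effect of a null edit) for stale pages."""
    for i in range(0, len(titles), batch_size):
        batch = titles[i:i + batch_size]
        session.post(API, data={
            "action": "purge",
            "titles": "|".join(batch),
            "forcelinkupdate": 1,   # re-parse the pages and update categories/links
            "format": "json",
        })
        time.sleep(delay)           # stay within the rate mentioned above
</syntaxhighlight>

The list of stale titles would come from a query like the one linked above, ordered by page_links_updated.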

Don't know how to get an archive bot's assistance

Hello, the Illinois Historic Preservation Agency recently took down their website because it was based on Adobe Flash, breaking lots of links of the format http://gis.hpa.state.il.us/pdfs/XXXXXX.pdf (where X represents a numeral). I just checked a random one, and it was in IA, so the archive bots could run with these URLs, but how do I ask that they work on them? Nyttend (talk) 13:12, 16 February 2021 (UTC)

@Nyttend: WP:URLREQ if you don't have access to bot jobs on the IABot management website. --Izno (talk) 17:45, 16 February 2021 (UTC)
If you wish, you can also run the IABot yourself here. ƒirefly ( t · c ) 17:46, 16 February 2021 (UTC)
URLREQ request filed. There are lots of these, so I don't want to run it myself on every single URL if it can be automated. Thanks! Nyttend (talk) 19:29, 16 February 2021 (UTC)

Periodically assemble a list of articles under a disambiguated title that are not accessible from the base title

We have many articles that have a disambiguated title that are not linked to from a hatnote and are not listed on a disambiguation page. Either editors forgot to add the page to the disambiguation page, or the hatnote was removed in an act of vandalism. Sourdough, Montana (created in 2009) was not accessible from the base title Sourdough until Sourdough (disambiguation) was created in 2020; Drought (disambiguation) was inaccessible from 2018 to 2020.

I'm wondering if this is something that would be worth keeping an eye on by periodically assembling a list. I have no idea if such a list would be too large for anyone to want to go through; maybe an invisible tag similar to {{orphan}} could be added to these articles?

Thjarkur (talk) 12:39, 19 January 2021 (UTC)

MOS:SMALL and/or MOS:POINTS fixes in infobox

One of the things I like to do is make infoboxes compliant with MOS:SMALL and MOS:POINTS using AWB. For example, [116] and [117]. The SMALL fixes are easy: for HTML tags I just find <small> and </small> and leave the "replace with" window blank. For {{small}} and {{midsize}}, I use regex: find ({{small\|)(.*?)(}}) and replace with $2.

The MOS:POINTS are a bit more challenging. I basically hard-coded a bunch of find and replace rules using regex for common degrees. This way, it doesn't matter if it's typed as "M.B.A." or "M. B. A.", it'll still get changed to MBA.

The problem with AWB is that it's not versatile enough for me, at least with my rudimentary skills. For example, in order to limit the find-and-replace to infoboxes, I set the rule as "inside templates", so I still have to make sure it doesn't make any changes to URLs in any of the CS1 templates. Another issue is related to my regex for PhD and PhB. For PhD: (P)(\.?)(\s?)(h)(\.?)(\s?)(d)(\.?). This means, though, that in the infobox for Marcel Lettre, "Joseph D. Kernan" becomes "JosePhD Kernan". I'd like for this task to be done by a bot so that I can make other edits and not have to waste time making sure that these issues don't come up.  Bait30  Talk 2 me pls? 01:49, 22 January 2021 (UTC)

For PhD, you can check with \b for a word boundary at the start so it matches "Jose ph D." but not "Joseph D.". Certes (talk) 11:30, 22 January 2021 (UTC)
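For example, a sketch of the adjusted rule, shown as a Python regex for clarity (an AWB rule with case-insensitive matching, as the original rule appears to use, would take the same pattern):

<syntaxhighlight lang="python">
import re

# \b requires a word boundary before the "P", so the pattern no longer fires
# inside "Joseph D." while still catching "Ph.D.", "Ph. D.", "P.h.d.", etc.
PHD = re.compile(r"\bP\.?\s?h\.?\s?D\.?", re.IGNORECASE)

print(PHD.sub("PhD", "Ph. D. in History"))     # -> "PhD in History"
print(PHD.sub("PhD", "Joseph D. Kernan (BA)")) # -> unchanged
</syntaxhighlight>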
A bot might not be able to remove small from infoboxes and navboxes. It would have to be able to avoid removing instances that were wrapping already-enlarged text, typically found in the |name= or similar parameters. – Jonesey95 (talk) 16:30, 22 January 2021 (UTC)
Do you have any examples of that? It's hard for me to imagine a scenario where putting {{small}} in the name parameter would be the best option. On a more technical note, would it be possible to create a bot task that would only apply to certain parameters? Because what I've been doing is almost entirely exclusive to |education= anyways.  Bait30  Talk 2 me pls? 21:28, 22 January 2021 (UTC)
Sure. Colombia uses a {{small}} template in the infobox's |native_name= parameter; because |native_name= is rendered larger than the normal infobox text, the text inside the {{small}} template ends up rendered at 93.5% of normal, which is perfectly fine and should not be enlarged. – Jonesey95 (talk) 22:32, 22 January 2021 (UTC)

Bot request for Wikipedia to turn to Uncyclopedia

I request a bot that shall replace all Wikipedia links and the logo with their respective Uncyclopedia links.[April Fools!] Wikitrumpets (talk) 04:10, 1 April 2021 (UTC)

Bot request to post a notice to AN about "a RfPP backlog" every 5 minutes

I remember seeing this idea mentioned once, but for some reason it was never coded. Let's be honest here, this bot will be pretty accurate.[April Fools!] Pahunkat (talk) 07:54, 1 April 2021 (UTC)

The off switch can be an RfPP for AN itself. When it gets to the top of the queue, it will prevent the bot from posting. Certes (talk) 10:24, 1 April 2021 (UTC)

Bot to remind editors that most ideas are bad

The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


Per WP:Most ideas are bad, most ideas are bad. But editors often forget this. Therefore, I propose a bot to remind them. This bot would use the latest advances in neural network language processing to automatically detect when someone is proposing an idea. It would then leave a message on their talk page something along the lines of "Hi! I'm User:BadIdeasBot. I noticed that you recently suggested an idea. Please remember that most ideas are bad. On the off chance that your idea is not bad, please disregard this message. Thank you." What do you all think? Surely this is idea is one of the good ones, right? - {{u|Sdkb}}talk 00:24, 1 April 2021 (UTC)[April Fools!]

  • Sdkb, did you ever think that your bad idea to request a Bad Idea Bot to tell editors their bad ideas are bad ideas is a bad idea? D🐶ggy54321 (let's chat!) 00:42, 1 April 2021 (UTC)
    🤯. {{u|Sdkb}}talk 00:47, 1 April 2021 (UTC)
    • If only there was a bot to remind Sdkb about what a terrible idea the proposed bot is then this entire thread could have been prevented! Spirit of Eagle (talk) 00:52, 1 April 2021 (UTC)
      • Exactly. Maybe we need to create a User:SdkbBadIdeaBot so that Sdkb stops with the bad ideas. Although, making a bad idea bot for the user who badly ideated a bad idea bot might be a bad idea. Maybe I should consult User:BadIdeaBot. No, that’s a bad idea. BadIdeaBot was suggested by Sdkb, so BadIdeaBot might have the bad idea to have a conflict of interest, and will most definitely bad-idea me since they want to protect their bad-ideated creator, who will then report me to ANI for making a personal attack saying they were bad-ideated. Maybe we need a bad idea bot... D🐶ggy54321 (let's chat!) 01:05, 1 April 2021 (UTC)
We obviously need this bot. If we had already got it, it would have stopped Sdkb from proposing its creation. REDMAN 2019 (talk) 10:59, 1 April 2021 (UTC)
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.