Wikipedia:Bot requests/Archive 66


One-time error checking help

I've created new maps for almost every city, village, and township in Ohio, and I'm about halfway done with adding them to their articles. This normally works fine, but I periodically make errors, and it would help if a bot could check all 2200 of these pages for errors. In some cases, I've added the map of one place to the article about a different one; for example, here I copy/pasted one article's map into another article. In other cases, I simply haven't switched all the elements of the description correctly; for example, here I used the correct map, but I made a mistake in the caption, since the city's in Trumbull County, not Mahoning County as suggested by the description.

All maps follow a rigid naming convention: Map of COUNTYNAME County Ohio Highlighting PLACE TYPE.png. "Place" is simply the community or township name, and "Type" is City, Village, or Township (note the capital letter). Likewise, all captions follow the same convention: Location of PLACE in COUNTYNAME County, although "Township" is part of the PLACE section for townships (see the caption for Beaver Township, Mahoning County, Ohio). Given this convention, I expect that the bot can handle the situation easily. I'm imagining the following (collapsed so this request doesn't appear so massive):

Extended content
  • Bot goes to each county template (either the Ohio section of User:Nyttend/County templates, or the entirety of User:Nyttend/County templates/OH/1 and User:Nyttend/County templates/OH/2) and opens each link in the "Cities", "Villages", and "Townships" line [note that some counties have just one "City", and some have none at all]
  • Upon opening a page, bot checks for {{Infobox settlement}}. If the article doesn't have that infobox, the bot logs the page and then just goes on to the next page.
  • Upon finding that the page has {{Infobox settlement}}, bot looks for "County Ohio Highlighting" and then looks at the word immediately before it. If the word is the same as the county template that the bot just opened, everything's good (for example, when opening links from {{Fairfield County, Ohio}}, all pages should read "Map of Fairfield County Ohio Highlighting"); if not, log it. Bonus points if the bot knows to look for two words in the case of {{Van Wert County, Ohio}}.
  • After finding this word, bot checks the caption field; if COUNTYNAME is the same in both places, good, and if not, log it.
  • Now check for PLACE in the filename. If it's the same as the first part of the page name, good; if not, log it. For example, if Map of Carroll County Ohio Highlighting Magnolia Village.png is in Minerva, Ohio, it should be logged; if Map of Carroll County Ohio Highlighting East Township.png is in East Township, Carroll County, Ohio, it's good.
  • Now check for PLACE in the caption. If it's the same as in the filename, or if it's the same as in the filename plus "Township", good. If not, log it.
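Since the conventions are rigid, the per-article comparison in the steps above reduces to a pair of regular expressions. Below is a minimal, hypothetical Python sketch of that check (function and parameter names are mine; in a real run the image and caption values would be read out of {{Infobox settlement}} with a framework such as pywikibot, and any returned problems would go to the log):

```python
import re

# Filename convention: Map of COUNTYNAME County Ohio Highlighting PLACE TYPE.png
# (the greedy .+ for the county lets multi-word names like Van Wert match)
FILE_RE = re.compile(
    r"Map of (?P<county>.+) County Ohio Highlighting "
    r"(?P<place>.+?)(?: (?P<type>City|Village|Township))?\.png$")
# Caption convention: Location of PLACE in COUNTYNAME County
CAPTION_RE = re.compile(r"Location of (?P<place>.+) in (?P<county>.+) County$")

def check_map(county, image_map, map_caption):
    """Return a list of detected problems for one article (empty = OK)."""
    fm = FILE_RE.match(image_map)
    if fm is None:
        # No "County Ohio Highlighting" map at all (the Seven Hills case):
        # move on without logging anything.
        return []
    problems = []
    if fm["county"] != county:
        problems.append(f"filename county {fm['county']!r} != {county!r}")
    cm = CAPTION_RE.match(map_caption)
    if cm is None:
        problems.append("caption does not follow the convention")
    else:
        if cm["county"] != county:
            problems.append(f"caption county {cm['county']!r} != {county!r}")
        ok_places = {fm["place"]}
        if fm["type"] == "Township":
            # Townships keep "Township" in the caption (Beaver Township case)
            ok_places.add(fm["place"] + " Township")
        if cm["place"] not in ok_places:
            problems.append(f"caption place {cm['place']!r} != {fm['place']!r}")
    return problems
```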

Final notes: (1) When an article already had a good detailed map, I didn't add the new one. Therefore, if the bot doesn't find "County Ohio Highlighting" at all (this will be the case at Seven Hills, Ohio, for example), it should go to the next page without logging anything at all, because in most or all such cases, there's no problem. (2) Since the project isn't done, I'd appreciate it if you didn't run the full check until I tell you that I'm done. (3) Some municipalities are in multiple counties, so it's possible that the map link would be "wrong"; the bot would find an error when opening Adena, Ohio from {{Guernsey County, Ohio}}, for example. Coding to avoid this error might require a lot of work, but these situations are rare, so don't worry about it. (4) Finally, since the bot's just logging pages that might be wrong, WP:CONTEXTBOT shouldn't apply.

Nyttend (talk) 18:12, 18 August 2015 (UTC)

@Nyttend: Though this is a bit old, I assumed it was still needed and went ahead and coded it up. Have you done all the replacements yet? BMacZero (talk) 07:02, 8 October 2015 (UTC)
Yes, I finished everything a few days after posting this request. Thanks a lot; I assumed that this had disappeared into the eternally-forgotten-requests archive. Nyttend (talk) 22:17, 8 October 2015 (UTC)
@Nyttend: I wandered over here from Commons again, and this seemed pretty quick. Here you are [1]. It looks like it came out right to me, but let me know if you notice any major problems. BMacZero (talk) 06:16, 10 October 2015 (UTC)
Thanks. I work on Saturdays now, so I won't be able to check it immediately; could you leave it there for a bit? Nyttend (talk) 11:02, 10 October 2015 (UTC)
BMacZero, I just looked at the list; thank you. Could you have the bot redo its survey? There are a lot of false positives, because (as far as I can tell) it didn't just look at cities, villages, and townships: I'm seeing results for places like Bentonville, Ohio and Nova, Ohio, which are a census-designated place and an unincorporated community respectively. Could you tell the bot to ignore links in the lines labelled "unincorporated community", "unincorporated communities", "CDP", and "CDPs" please? Nyttend (talk) 03:38, 11 October 2015 (UTC)
PS, at the same time, I expect this to be quite useful. I found a random problem report and checked the page, and indeed there was a problem until I fixed it. This page was precisely the reason that I requested help from a bot operator: it's a simple and obvious mistake, but when you're editing three thousand low-traffic articles, you'll not know the difference when one or two of them have a mistake, and humans might well not notice. Nyttend (talk) 03:44, 11 October 2015 (UTC)
That explains all the missing infoboxes. Here's a revised, much more manageable, output: [2]. The ones that say "Location of split failed" just had the lines in a different order than expected - I'd fix it, but there aren't too many to check manually. The multiple-county ones seem to be more prevalent; they're probably okay if both the map_caption and image_map are the same, which you can spot easily in the log. BMacZero (talk) 22:50, 11 October 2015 (UTC)
Great! I've checked all of them. Indeed, the majority were multi-county communities that had nothing wrong, and most of the rest were either missing infoboxes or pages with minor aberrations like St. Paris, Ohio (I spelled out abbreviations, so the map is "Saint Paris") where nothing was really wrong. Those "Location of split" situations were all Hancock County locations where someone else had previously uploaded similar maps with similar names, and apparently the bot got confused because the names don't include the municipality type at all: they're of the format File:Map of Hancock County Ohio Highlighting Arcadia.png. All of them were fine; this was the same situation as the Seven Hills example that I mentioned above. And finally, I found eleven pages with actual errors, including this one where the map showed one township, the caption mentioned another, and both of them were wrong. Thanks so much for the help; it's immensely useful to know that everything else is good. Nyttend (talk) 00:40, 12 October 2015 (UTC)
Cool. I guess I should hit this with a  Y Done. BMacZero (talk) 17:07, 12 October 2015 (UTC)

Feasibility of using a bot to determine and move a large number of pages for updated guideline

I am proposing a redrafting of the Wikipedia:Naming conventions (ships) guideline, at user:Saberwyn/Proposed ship naming and disambiguation conventions update. One of the major changes follows the outcome of a Request for Comment on the matter of ship article disambiguation. The current form of the proposal is that all ship articles requiring disambiguation will be disambiguated in the form "(yyyy)", where yyyy = year of launch. If the proposed guideline is accepted, over 26,000 ship articles (number determined by transclusions of {{infobox ship career}} - selected to include articles with infoboxes on specific ships, as opposed to class articles) will need to be checked for compliance with the guideline, and if not compliant, moved to a compliant title. Could a bot be used to check the titles of all ship articles (as determined per number above), check that year of launch is the method of disambiguation, and if not, move the article?

The bot will have to:

  • Read the article title to determine if it ends in a parenthetical disambiguator (Is the last character in the article title a ")"?)
    • If not, no action needed by bot... if necessary, humans can move these as they are encountered
  • Check for year of launch (possible options are by reading the date in the "|ship launched=" field of the infobox, or by reading the date in "Category:YYYY ship" if present)
    • If not found or not comprehensible, list for human attention
  • If the final characters of the title do not correspond to the year of launch, move the article to a new title with the year as the parenthetical disambiguator
    • If unable to move, list for human attention
  • As it is assumed that the previous title was a valid alternate title, deletion of the old title or updating of article links will not be required
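On the "how difficult would it be" question: the per-title decision logic in the steps above is only a few lines. A hedged sketch (names are mine, and `get_launch_year` is a stand-in for fetching the article and parsing |ship launched=, which per the reply below should only happen after the cheap title check):

```python
import re

YEAR_PAREN = re.compile(r"\((\d{4})\)$")  # title already ends in "(YYYY)"

def classify(title, get_launch_year):
    """Return an (action, title) pair for one ship article."""
    if not title.endswith(")"):
        return ("skip", title)        # no disambiguator; humans handle later
    if YEAR_PAREN.search(title):
        return ("skip", title)        # already year-disambiguated
    year = get_launch_year(title)     # parse |ship launched=, or None
    if year is None:
        return ("log", title)         # not found: list for human attention
    # Swap the existing parenthetical for the launch year
    return ("move", re.sub(r"\s*\([^)]*\)$", f" ({year})", title))
```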

A second bot operation (or both passes, if the above is not possible) could generate a list of articles that use a civilian ship prefix (which, under the proposal, will also be deprecated as part of an article title in most cases), so that humans can review article titles and move those necessary to a date-disambiguated title. I'm reluctant to suggest using a bot to move this group, as the prefix may be part of the common name for the subject.

So, theoretically (because the proposal may not pass in this form, or at all), is it possible to create/adapt a bot to do this, how difficult would it be, and what technical problems would have to be surmounted? Any opinions on the appropriateness of the proposed method of disambiguation should be directed to the proposal. -- saberwyn 03:45, 13 September 2015 (UTC)

  Needs wider discussion. given that the proposal is still under development. This is feasible, yes. I would propose a change in the workflow though. It makes more sense to check if the parenthetical disambiguator contains the year launched before you check what that year is in the article; it reduces the work/processing time when an article can be skipped. A bot should just check whether a four digit number is present within a set of parentheses in the title. If so, skip it. If not, then look for the |ship launched= parameter. I wouldn't use Category:YYYY ship, because it's possible mistakes have been made in applying that category (listing year built instead of year launched, for instance). The parameter is more explicit in what the date means, so it's less likely to contain errors. You will need specific consensus on what the disambiguator should be, as this was not covered in the RfC you linked. For instance, should we use (1936) or (launched 1936)? ~ RobTalk 16:25, 13 September 2015 (UTC)
As stated above, the current form of the proposal is for (YYYY) only, but that is subject to change. As for the exact methodology, I'm useless at programming, so any suggestions for improving the hypothetical bot run(s) to make it easier for the programmer/operator would be greatly appreciated. Thanks for the info. -- saberwyn 09:01, 15 September 2015 (UTC)

Per-day redirects

Typing a specific date, for example "15 February 2013", into our search box returns articles that merely contain that date in references or maintenance templates ahead of the events which happened on that date. Therefore, for every page like Portal:Current events/2013 February 15, we should have redirects from, at least:

and possibly:

and others. Once the backlog is done, we'd need a maintenance bot to create each new day's set.

We also need to decide what to do for dates where no Portal:Current events page exists (mostly pre-1999). Perhaps redirect to the relevant year? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 14:02, 8 September 2015 (UTC)
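For the maintenance bot, generating each day's set is a small date-formatting exercise. A sketch, noting that the exact list of redirect formats was left open above, so the two shown are only illustrative:

```python
from datetime import date

def day_redirects(d):
    """Target subpage and candidate redirect titles for one day."""
    month = d.strftime("%B")                  # e.g. "February" (C locale)
    target = f"Portal:Current events/{d.year} {month} {d.day}"
    titles = [f"{d.day} {month} {d.year}",    # DMY: 15 February 2013
              f"{month} {d.day}, {d.year}"]   # MDY: February 15, 2013
    return target, titles
```

A daily run would call this with date.today() once the portal subpage exists; pre-1999 days with no subpage would need the year-redirect decision raised above.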

Firstly, mass creating redirects is again something that needs consensus from the community, though personally I don't see much harm in creating a bot for this. Secondly, since my bot already creates the pages, it would be a matter of adding a few lines of code to implement the requested redirects.—cyberpowerChat:Limited Access 16:00, 8 September 2015 (UTC)

Discussion seems to have stalled. How can we take this forward? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 09:07, 15 September 2015 (UTC)

Redirects away from mainspace have always been the exception and not the rule. If you want to change this with thousands of systematic redirects then I think an RFC is in order with notifications in many places, for example Wikipedia:Village pump (proposals), Wikipedia talk:Redirect, Wikipedia talk:WikiProject Redirect, Portal talk:Current events, Wikipedia talk:Cross-namespace redirects, Wikipedia talk:WikiProject Days of the year. One of these would be a better place for an RFC than here. Without a widely advertised discussion in advance, created redirects risk being mass deleted. Note that Portal:Current events/2013 February 15 is transcluded in mainspace at February 2013#2013 February 15. If we make redirects on dates then I think this would be a much better and less controversial target. PrimeHunter (talk) 13:42, 15 September 2015 (UTC)

Bot to undelete 400,000 old IP talk pages

It has been proposed here that an adminbot be used to restore some 400,000 old IP talk pages that were deleted by user MZMcBride. It is to be noted that about 10,000 such pages have already been restored by user:MusikBot per this BRFA. See the BN discussion and the BRFA for further details. 103.6.159.68 (talk) 19:11, 17 September 2015 (UTC)

What benefit to the project is to be achieved by restoring an extremely large number of pages full of stale warnings? DavidLeighEllis (talk) 23:24, 17 September 2015 (UTC)
The stale warnings would not be restored. The pages would be templated with a notice that older warnings or other discussions can be seen in the page history. The benefit is to primarily make the edit history of the page reviewable by the typical editor who may want to know if new activity from the IP address is related to previous activity from the same IP address. bd2412 T 23:34, 17 September 2015 (UTC)
I'm not convinced of the utility either way. In theory, restoring is beneficial (i.e., ideally they wouldn't have been deleted in the first place), but there's also the practical matter of 400,000 new log entries and 400,000 new edits for new transclusions of {{OW}}. There's also the danger of restoring some problematic history, and half a million pages are far too many for human review. I know this problem was present with the initial restorations, and we can avoid restoring pages with multiple deletions, but with so many pages, there can easily be some awful things lurking that we won't be able to detect. — Earwig talk 00:13, 18 September 2015 (UTC)
I agree. I'd like to see consensus for this before doing this. Unless there's a clear and obvious need for them, they can always be restored immediately, on-demand via WP:UNDELETE if someone truly needs to review ancient templated warnings... and let's be honest, the number of people who need to do that with any regularity is practically zero. --slakrtalk / 01:43, 18 September 2015 (UTC)
I'm also going to have to agree. I'm all for transparency and historical value, but restoring this many pages is a massive undertaking. I don't think we'll get undelete requests for these pages; most people won't even notice or bother to look that prior revisions were deleted. It's true that if we (a) target restoring revisions with summary "deleting stale IP talk page" (or whatever is consistent) and (b) skip any pages with multiple entries in the deletion log, we'll probably bypass any deletions done in response to abuse. Nonetheless, I'm quite frankly not comfortable with the idea without broad consensus. The bot has restored a good 10,000 pages, and from the looks of it there are several thousand more deleted by Tawkerbot strictly under the "stale IP talk page" rationale. I can continue to finish restoring those for you and more, no problem, however from a bot operator standpoint I'm more than content with calling it quits at this point MusikAnimal talk 05:25, 18 September 2015 (UTC)
As noted earlier, all MZM deletions use exactly the same summary: Old IP talk page. BTW Tawkerbot's deletion log looks empty. 103.6.159.88 (talk) 05:55, 19 September 2015 (UTC)

I think it'd be alright to try to get a consensus here, as there's no need to fragment the discussion to more pages. I have left a note at WP:VPR and WP:VPT. 103.6.159.88 (talk) 06:07, 19 September 2015 (UTC)

  • Support: The best reason in support of the undeletion of these pages is that there is no reason why content that does not meet the deletion policy should remain visible only to admins. 103.6.159.88 (talk) 06:07, 19 September 2015 (UTC)
    • What value is there in templated "you shouldn't vandalize Wikipedia" messages that are ten years old? Have you looked through the recent MusikBot restorations? Which of these pages do you think contribute to building an encyclopedia? These pages add a lot of clutter to database dumps and database queries for almost no benefit at all. There's no "content" being hidden here from non-admins or anyone else. --MZMcBride (talk) 15:07, 21 September 2015 (UTC)
  • Happy to support this, provided reasonable care is taken, i.e. only undeleting pages with one log entry, only undeleting pages with the specific rationale of "[[WP:OLDIP|Old IP talk page]]". That said, we need an admin willing to take on the task who has the requisite skill at running a bot (not me) and it would probably be worthwhile to check in with a dev to make sure it they're OK with the performance side of things. Also might be a good idea to ask MZMcBride what he thinks of this, in the case of the initial undeletions, apparently the fact that bd2412 was asking for his own deletions to be undone was relevant. Ideally these deletions should not have happened and I see no real benefit in leaving all these edits to be viewable to admins only. Jenks24 (talk) 06:25, 19 September 2015 (UTC)
  • Weak Oppose. I agree with The Earwig's arguments. Moreover, undeleting will cause confusion if the IP owner has changed.
  • Instead we could just have a WP:REFUND on request. AFAIK no one has ever asked for it. Graeme Bartlett (talk) 09:19, 19 September 2015 (UTC)
    • What do you think is to be achieved by increasing the work for admins at WP:RFUD, rather than having the bot restore them all in one go? As elaborated by others, there is negligible risk of inappropriate content being restored (since pages with multiple deletion log entries would be skipped). 103.6.159.77 (talk) 12:37, 20 September 2015 (UTC)
  • Comment: If the only objections are that it would flood the logs, then do spread the job out over time. Doing these in bursts of 1000 at a time, with 1 burst per day over 400 days, might be better than doing them all at once. davidwr/(talk)/(contribs) 21:37, 19 September 2015 (UTC)
  • Alternative proposal Instead of undeleting 400,000 pages, add a message to all 400,000 pages saying "Historical versions of this page prior to [date] were deleted. To request un-deletion click here [button to click on goes here]" with a similar message including the date of the last deleted edit in the edit summary, then have another admin-bot undelete pages on request. This way only pages that people care about will be un-deleted. davidwr/(talk)/(contribs) 21:37, 19 September 2015 (UTC)
    • While the notice is probably a good compromise, a second admin bot doing automatic undeletion would be a very bad idea, as there would be no way to determine whether restored edits had inappropriate content on them, such as outing, personal information, or grossly offensive content. It may be better to just say to post at WP:REFUND. I doubt this would overwhelm them; how often is someone going to request an old and unused IP talk page be undeleted? ~ RobTalk 21:46, 19 September 2015 (UTC)
      • I seem to recall at least part of the motivation for deleting the pages was so that new users were not bombarded with irrelevant messages about ten-year old edits as, usually, the IP has been reassigned or is used by different users. I doubt any messages delivered to the current users of these IP addresses would be seen as being of any relevance to them. Some may even be a bit intimidated by them. -- zzuuzz (talk) 21:59, 19 September 2015 (UTC)
    • What do you mean by "pages that people care about"? When non-admins can't see the content or history of deleted pages, how do they know which pages they care about? Anyway, the process is too much WP:BURO-creep. 103.6.159.77 (talk) 12:37, 20 September 2015 (UTC)
  • Comment Any pages that are more than several years old that are restored should probably be blanked or replaced by a templated statement saying that old content was removed but that it still exists in the page's edit history. Doing this will prevent the issue of confusing new editors. davidwr/(talk)/(contribs) 05:28, 20 September 2015 (UTC)
    • That seems like a pre-requisite, and it's been mentioned a few times. However I'd suggest that leaving 400,000 new messages, on dynamic IP talk pages for example, will result in confusion. -- zzuuzz (talk) 09:42, 20 September 2015 (UTC)
    • FYI, all the pages that would be restored will have their contents replaced with template {{OW}}. And since it's a bot doing them, the IPs won't see the "You have new messages" box. 103.6.159.77 (talk) 12:42, 20 September 2015 (UTC)
  • I too am opposed to this. Lots of work for no benefit in virtually every case. WP:REFUND indeed is the right place to go; if you want to see the deleted content, I'll happily restore it (assuming no major problems with the page), but mass restoration is a bad idea. Nyttend (talk) 14:52, 20 September 2015 (UTC)
    • Agreed. --MZMcBride (talk) 15:03, 21 September 2015 (UTC)
      • Assuming that restoration can be done with no work beyond clicking a button on a bot that already has its script, and no messages will be created for the IPs, I don't see the problem at all. We have hundreds of thousands of existing IP talk pages with warnings or test posts or whatever, with little rhyme or reason separating those from deleting pages with similar content. We'll never be rid of them all, so we may as well provide some uniformity to their treatment. bd2412 T 17:26, 21 September 2015 (UTC)
  •   Needs wider discussion. This discussion really doesn't belong on this page, either way. It will need wider consensus from the community. ~ RobTalk 04:21, 26 September 2015 (UTC)
  • Rather than restore, I suggest we delete all IP talk pages from IPs that haven't edited or been blocked in the last three years. Such pages have no value, as the human behind the IP is unlikely to be the same. But there are drawbacks to keeping such pages. As pointed out above, they are off-putting to new IP editors. We've recently had RFA opposes based on a high percentage of userspace edits; restoring 400,000 IP talk pages would make some editors more vulnerable to such opposes, whilst deleting the odd million stale IP pages would reduce many editors' userspace edit percentage. When I do new page patrol I focus on redlinked talk pages, as these are newbie edits and the STiki and Huggle users pay them little heed. Some will be vandalism, but most are good faith, though not always helpful. The vandals I can forget about, because once I've warned them the Huggle and STiki users will be watching their next edits. Deleting a million or so stale IP talk pages would make it easier to spot when new IP editors start to edit from those IPs. Obviously the bot would need to ignore any moved pages. ϢereSpielChequers 20:22, 4 October 2015 (UTC)

Fix T44616

A bot should fix all remaining instances of T44616. MZMcBride has already fixed some in December 2012; at the time it was Bug 42616. This will make the moves revertible only by administrators. GeoffreyT2000 (talk) 16:20, 24 September 2015 (UTC)

I'm not clear what the point of this is. What's broken by having an extra newline that we need a bot to fix it? — Earwig talk 19:34, 24 September 2015 (UTC)
  Needs wider discussion. This almost certainly falls afoul of WP:COSMETICBOT, and it would need community consensus prior to being carried out. ~ RobTalk 04:16, 26 September 2015 (UTC)

Tagging of cat talkpages

Requesting bot assistance to tag talkpages of all categories and subcategories within Category:Establishments in Rivers State by year and Category:History of Rivers State by period with {{NigeriaProject|Rivers State=yes}}. Thanks. Stanleytux (talk) 11:28, 23 September 2015 (UTC)
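The text edit itself is trivial and idempotent; the real work is walking the subcategory trees and saving pages, which a framework such as pywikibot would handle. A sketch of just the tagging step (names are illustrative, and it also checks for the redirect name so it never double-tags):

```python
BANNER = "{{WikiProject Nigeria|Rivers State=yes}}"

def tag_talk_page(talk_wikitext):
    """Prepend the banner unless the page already carries it, under either
    its standard name or the NigeriaProject redirect."""
    lowered = talk_wikitext.lower()
    if "{{wikiproject nigeria" in lowered or "{{nigeriaproject" in lowered:
        return talk_wikitext          # already tagged; make no edit
    return BANNER + "\n" + talk_wikitext
```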

Only the category talk pages? Also, generally new tagging should use the standard name of the template, in this case {{WikiProject Nigeria}}, rather than a redirect. Anomie 12:32, 23 September 2015 (UTC)
Just the category talk pages for now. No problem, the project template still works this way without the redirect: {{WikiProject Nigeria|Rivers State=yes}}. Stanleytux (talk) 13:10, 23 September 2015 (UTC)
Anomie please tag the talkpages of the above categories and all of its subcategories with {{WikiProject Nigeria|Rivers State=yes}}. Thanks. Stanleytux (talk) 13:24, 23 September 2015 (UTC)
All requested category talkpages have been tagged. Feel free to remove this request or archive it. Thank you. Stanleytux (talk) 16:13, 10 October 2015 (UTC)

Processing Shadows Commons

Recently I did an edit like this to potentially de-eclipse an image Shadowed at Commons.

https://en.wikipedia.org/w/index.php?title=File%3ABohr_model.jpg&type=revision&diff=682440062&oldid=634253010

It occurred to me that this could be done automatically by a bot for most of the images here: Category:Wikipedia files that shadow a file on Wikimedia Commons.

If bots are allowed to move files, then the entire process can be automated, apart from the eventual F2 deletion.

For all items identified in the category:
  • Step 1. Identify image tagged as {{ShadowsCommons}}.
  • Step 1a. Check to see if it is actually shadowing.
  • Step 2. Rename filename.ext to filename (uploadtimestamp).ext.
  • Step 3. Remove {{ShadowsCommons}} from the renamed file.
  • Step 4. Replace ALL transclusions and non-discussion-page links to the file.
  • Step 5. Tag the created redirect as F2.
  • Step 6. Repeat until the category is empty (or only protected generic images remain).

Sfan00 IMG (talk) 19:20, 23 September 2015 (UTC)
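A sketch of the mechanical parts of the proposal, the rename pattern in step 2 and the link retargeting in step 4 (helper names and the timestamp format are made up, and as the replies note, a human-chosen name is often preferable to a timestamp):

```python
import re

def timestamp_name(filename, upload_ts):
    """Step 2: insert the upload timestamp before the extension,
    e.g. 'Bohr model.jpg' + '20141117' -> 'Bohr model (20141117).jpg'."""
    stem, dot, ext = filename.rpartition(".")
    return f"{stem} ({upload_ts}){dot}{ext}"

def retarget_links(wikitext, old, new):
    """Step 4: repoint [[File:...]] / [[Image:...]] uses at the new name."""
    pattern = re.compile(r"\[\[(File|Image):%s" % re.escape(old),
                         re.IGNORECASE)
    # A replacement function avoids backslash-escaping issues in `new`
    return pattern.sub(lambda m: f"[[{m.group(1)}:{new}", wikitext)
```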

Simply appending the upload timestamp does not strike me as a very useful way to disambiguate images. In many cases, surely there is a better name that a human could come up with. For example – to pick a random image from the category – File:Sleepwalker.jpg would be more appropriately renamed to File:Sleepwalker (comics).jpg or File:Sleepwalker (Marvel).jpg than File:Sleepwalker (20080424).jpg — Earwig talk 20:52, 23 September 2015 (UTC)
(edit conflict) Bots do not have the filemove userright. Although it would be possible to give a bot the "file mover" user right. Currently, there are no bots with this additional flag. Avicennasis @ 20:57, 10 Tishrei 5776 / 20:57, 23 September 2015 (UTC)
True, but I see no reason that we couldn't give a bot +filemover (if people supported a bot doing this). I mean, we have bots with +admin, +templateeditor, +rollback, etc. — Earwig talk 21:14, 23 September 2015 (UTC)
I know. Merely pointing it out. (We even have a bot with +autopatrol for some reason, which is redundant to +bot) Avicennasis @ 21:46, 10 Tishrei 5776 / 21:46, 23 September 2015 (UTC)
Besides the example above, there are other issues, e.g. File:BBC Four.svg. There's no need to rename this file on EnWp - it's essentially a duplicate from Commons, so the best case here is to just delete the local copy. Stuff like this needs human review. Avicennasis @ 21:46, 10 Tishrei 5776 / 21:46, 23 September 2015 (UTC)
  • Oppose A suitable file name needs to be carefully determined by a file mover. Appending a timestamp is at best confusing, as it suggests to people that the timestamp refers to the time when the picture was created, and should be avoided. In some cases, it may be better to nominate the file for deletion on Wikipedia or Commons. --Stefan2 (talk) 11:11, 24 September 2015 (UTC)

TAFIbot

I request a bot that places a tag on the talk page of each article that has finished its week as the TAFI selected article, as this is not consistently done today. --BabbaQ (talk) 23:40, 26 September 2015 (UTC)

ShadowsCommons Autotagger

User:Stefan2 has a query here - https://quarry.wmflabs.org/query/950 - which identifies files that have the same name on Commons and locally but may not be the same media.

Would it be possible for a bot task to run through the list periodically and tag the local files with whatever template CSD F8 uses, unless the local copy is already tagged or carries a {{Keep Local}} template? Sfan00 IMG (talk) 14:52, 3 October 2015 (UTC)

  • Well, I wrote a bot for this purpose some time ago and tagged a few hundred files (see Wikipedia:Bots/Requests for approval/Stefan2bot), but I found that there were so many cases where it was better to do things on Commons instead (such as correcting someone's uploads of Wikipedia thumbnails to Commons, or nominating files for deletion on Commons), so I thought that it was better to go through the files manually instead. In doing so, I also solve lots of the filename conflicts myself. In the end, someone will need to go through the files manually anyway, and having them in list form doesn't make any great difference. --Stefan2 (talk) 15:11, 3 October 2015 (UTC)

Bot Request

I am requesting that you please accept my bot request. I am a Wikipedia editor and have now created 30 pages, so please give me a bot to check them for mistakes and correct them. Thank you.--Productable Khan (talk) 16:42, 9 October 2015 (UTC)

 N Not done Read this: Wikipedia:Creating a bot. – Jonesey95 (talk) 17:46, 9 October 2015 (UTC)

Citation overkill

Wikipedia has a Citation overkill policy, which says that you do not need "more than a couple" citations to back up a single claim. I think a bot could easily recognize when five or more citations are mixed together (like this: [1][2][3][4][5]) and address the problem by adding a "too many citations" tag after the mix of 5 or more citations (like this: [1][2][3][4][5][too many citations]). I believe it would be very helpful because the bot can make more users aware of this policy and it would be less likely to happen in the future.--Proud User (talk) 18:06, 9 October 2015 (UTC)
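Detecting such runs is indeed mechanically easy, which is a separate question from whether tagging them is wise. A hypothetical detector over wikitext (the inline tag name is invented for illustration; nothing here decides whether five refs are actually excessive):

```python
import re

REF = r"<ref(?:\s[^>]*)?(?:/>|>.*?</ref>)"                # one <ref>, either form
RUN = re.compile(rf"(?:{REF}\s*){{4,}}{REF}", re.DOTALL)  # five or more in a row

def flag_runs(wikitext, tag="{{too many citations}}"):
    """Append an inline cleanup tag after every run of >= 5 adjacent refs."""
    return RUN.sub(lambda m: m.group(0) + tag, wikitext)
```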

That is (a) an essay, not a policy, and (b) not as clear-cut as you make out. It's always a judgement call, which bots are notoriously bad at. Relentlessly (talk) 18:12, 9 October 2015 (UTC)

Day pages

An adminbot needs to revert pages for days in 2003 and 2004 to their last non-redirect version and move them to their corresponding Portal:Current events page (e.g. January 1, 2003 to Portal:Current events/2003 January 1). If the corresponding Portal page already exists, a history merge will be performed (this is why an adminbot is needed). If the day page has history only as a redirect, it can simply be deleted (another reason for an adminbot). Some users that have previously done such moves are Fram, AnomieBOT (a bot), and Waldir. GeoffreyT2000 (talk) 19:31, 24 September 2015 (UTC)

How are the history merges to be done? I mean, using a bot. In my experience it often requires some subjective judgment to decide which revisions to keep for a clean final result. --Waldir talk 20:06, 24 September 2015 (UTC)
Ping @GeoffreyT2000 --Waldir talk 18:31, 17 October 2015 (UTC)

GOCEbot

I request a bot that places the GOCE tag on the talk pages of articles that have been through a completed copy edit by the GOCE project. A GOCEbot, perhaps.--BabbaQ (talk) 23:38, 26 September 2015 (UTC)

@BabbaQ: Could you elaborate on the procedure? How would the bot know when GOCE has signed off on an article, and what template should it use on the article talk page? Is this currently being done now by hand? MusikAnimal talk 04:50, 30 September 2015 (UTC)
{{GOCE}} is sometimes added by copy editors to indicate that a WP:GOCE copy edit was requested and completed. See the last line of the "Instructions for copy editors" at Wikipedia:WikiProject Guild of Copy Editors/Requests. I'm a big fan of BabbaQ's work, but I don't think that this task has consensus. I suggest that BabbaQ bring it up at a place like Wikipedia talk:WikiProject Guild of Copy Editors. – Jonesey95 (talk) 06:03, 30 September 2015 (UTC)

Alright, let's go with   Needs wider discussion. MusikAnimal talk 16:21, 30 September 2015 (UTC)

Sort

A bot needs to remove transclusions of Template:Sort from thousands of pages. The template itself can then be deleted. GeoffreyT2000 (talk) 00:28, 3 October 2015 (UTC)

Looking at it, most of the uses are in other templates. It cannot just be removed from them; that would break them. But it may be directly included in only a small number of pages, where it could in theory be replaced by inlining the code. I write “in theory” as it would be a massive undertaking to identify and fix them. Here’s one if you want to look at it yourself: {{player2}}.--JohnBlackburnewordsdeeds 01:05, 3 October 2015 (UTC)
@JohnBlackburne: Instead of removing the transclusions, replace them with instances of data-sort-value. GeoffreyT2000 (talk) 01:36, 3 October 2015 (UTC)
If you start with {{Fb team}}, you'll take care of 19,000 of the 64,000 transclusions (including about 90% of the transclusions in other templates). There are probably a few more templates that are used in thousands of articles; the aforementioned {{player2}} has 1,352 transclusions, which will probably take care of 90% of the remaining transclusions in templates. Then you'll be left with articles and a few stray templates. I don't see how this can be done with a bot, but there are some smart people around here. – Jonesey95 (talk) 01:44, 3 October 2015 (UTC)
GeoffreyT2000, do you have idiot-proof instructions for replacing this template? I see the "replace it with data-sort-value" instructions, but that's not enough help for me. Also, can you please provide a link to the discussion that led to the deprecation of this template? Thanks. – Jonesey95 (talk) 02:00, 3 October 2015 (UTC)
Seems to me we should avoid the use of the HTML directly and instead use templates. So even if {{sort}} is deprecated, I would say from my opinion alone there isn't consensus for a mass-replacement. --Izno (talk) 03:44, 3 October 2015 (UTC)
  • Strongly opposed. I'm seeing no discussion at talk or elsewhere, and the doc page's replacement guidelines require more complexity than the simple template. Anyone who starts to replace 64,000 templates with more-complicated coding needs to be blocked immediately to prevent disruption and confusion. Nyttend (talk) 12:22, 9 October 2015 (UTC)
There is no apparent consensus for this task. If one is reached in the future, this task can be resubmitted with clear instructions on how to implement the consensus decision. – Jonesey95 (talk) 13:14, 9 October 2015 (UTC)

  Needs wider discussion.

A special kind of archiving bot

This request concerns the page Wikipedia:Articles for creation/Redirects. As each request on this page is dealt with, it is encapsulated with the code {{afc-c|a}}/{{afc-c|d}} and {{afc-c|b}}. This closes it and places it in a box, ready for archival. At the moment, we haven't got a bot which can recognise that a request has been closed and move it to the month's archive page. Conventional "archive after x days" bots are unsuitable because some requests can sit on the page for up to a month while they are deliberated over and discussed. There is no typical time for requests to remain there: some are done within a day and others a few days or weeks later, depending on what the reviewers do.

Because of this, archival has been done exclusively by hand, picking out the closed requests and cut-pasting them into the archive. Because there is a high turnover of requests, this task is required daily, at about the same time; this means that it can get tiring, and real-life reasons to stop editing can cause a pileup of old requests. Because I know we have bots which can detect things about sections on pages such as WP:RFPP and WP:AIV, I don't see why one can't be developed and run on this page. Here's the simple breakdown of how the task should run:

  • At a specified time each day, scan AfC/R for sections closed with {{afc-c}}
  • Select which of those sections were last edited over 24 hours ago (this allows initial requesters to see the result, and appeals)
  • Move them to the bottom of Wikipedia:Articles for creation/Redirects/YYYY-MM
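The first step of the breakdown above, spotting closed sections, could be sketched as follows. This is a rough illustration only: a real bot would use a proper wikitext parser, and would check each section's last-edited timestamp via the revision history for the 24-hour rule.

```python
import re

CLOSE_TOP = re.compile(r"\{\{afc-c\|[ad]\}\}")   # accepted or declined
CLOSE_BOTTOM = "{{afc-c|b}}"

def closed_sections(page_text):
    """Yield (heading, body) for sections closed with the afc-c templates."""
    # Split on level-2 headings; crude, but enough for a sketch.
    parts = re.split(r"(^==[^=].*?==\s*$)", page_text, flags=re.M)
    for heading, body in zip(parts[1::2], parts[2::2]):
        if CLOSE_TOP.search(body) and CLOSE_BOTTOM in body:
            yield heading, body

sample = (
    "== Request one ==\n{{afc-c|a}}\nAccepted.\n{{afc-c|b}}\n"
    "== Request two ==\nStill under discussion.\n"
)
closed = list(closed_sections(sample))
print(len(closed))  # 1
```

Each yielded section would then be appended to the appropriate Wikipedia:Articles for creation/Redirects/YYYY-MM page and removed from AfC/R.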

Who will take on such a task? Rcsprinter123 (remark) 19:45, 12 October 2015 (UTC)

I've been meaning to implement more complex directives for lowercase sigmabot III. Maybe this could become a reality. Σσς(Sigma) 21:57, 12 October 2015 (UTC)

Multiple Shared Ip notices on Talk pages

The following exchange:

There can be many shared ip notices on talk pages as seen here: https://en.wikipedia.org/w/index.php?title=User_talk:165.72.200.11&oldid=672279975
This can be confusing and looks bad. Only one is needed, at the bottom. TheMagikCow (talk) 14:36, 20 July 2015 (UTC)

I've set up automated archiving for that page. Perhaps a bot could do so for others? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 19:13, 20 July 2015 (UTC)
  Redundant We already have archiving bots that do a great job of archiving.—cyberpowerChat:Online 20:17, 27 August 2015 (UTC)

This seems to have been closed and archived in error; my suggestion was not that bots do the archiving, but that a bot be used to add the instructions for a bot to do so to the affected talk pages. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 11:32, 15 October 2015 (UTC)

We could also have Twinkle add the archive template, but there's really no sense in archiving that stuff, and it seems like a waste of resources. Instead, Twinkle could blank warnings that are X years old and add {{OW}}. MusikAnimal talk 04:59, 17 October 2015 (UTC)

Template:Resolved archiver

Is there a bot that automatically archives conversations marked with this template? I know Commons has something like this and I imagine we at least had something at some point, but it's hard to find anything about it. Or will a page already configured with User:MiszaBot/config automatically archive {{Resolved}} sections as if they were User:ClueBot III/ArchiveNow? czar 23:22, 17 October 2015 (UTC)

Look at the header for User talk:Citation bot, which works the way that you want. Here's a copy:
{{User:ClueBot III/ArchiveThis
|archiveprefix=User talk:Citation bot/Archive
|format=%%i
|maxarchsize=20
|age=960000
|index=no
|archivenow={{tl|resolved}},{{tl|Resolved}},{{tl|wontfix}},Fixedin,fixedin,{{tl|notabug}},{{tl|fixed}}
}}
That should work for you. – Jonesey95 (talk) 00:03, 18 October 2015 (UTC)

Redirect Fixing

Maybe a bot that follows links and fixes redirects in Wikipedia links? Possible set-up for fixing redirects: if link [[X]] has been redirected to page [[Y]], then change the link source code from [[X]] to [[Y|X]] (fix redirects). This request makes more sense when viewed in the source code. Thanks!

Badmonkey717 (talk) 03:40, 18 October 2015 (UTC)Badmonkey717

See WP:NOTBROKEN. – Jonesey95 (talk) 04:52, 18 October 2015 (UTC)

  Declined Sometimes a link to the redirect is intentional per WP:NOTBROKEN, as Jonesey95 wrote. -- Magioladitis (talk) 09:12, 20 October 2015 (UTC)

Protect articles with copyvios

An adminbot should protect all articles in Category:Articles tagged for copyright problems for an expiry time of 7 days. GeoffreyT2000 (talk) 22:14, 16 October 2015 (UTC)

Why? Is there consensus to do this? — Earwig talk 04:10, 17 October 2015 (UTC)

Removal of duplicated citations

I suggest a bot that can remove duplicated citations. If you look at the source code, you can see what I mean by "duplicated citations". Qwertyxp2000 (talk) 23:41, 6 April 2015 (UTC)

Markup Renders as
===Without duplicated citations===
Lorem ipsum dolor sit amet, consectetuer adipiscing elit.<ref name="random thingy" group="example ref1">[https://www.google.com Random citation] Google. Retrieved at "random date".</ref> Aenean commodo ligula eget dolor. Aenean massa. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Donec quam felis, ultricies nec, pellentesque eu, pretium quis, sem. Nulla consequat massa quis enim. Donec pede justo, fringilla vel, aliquet nec, vulputate eget, arcu. In enim justo, rhoncus ut, imperdiet a, venenatis vitae, justo. Nullam dictum felis eu pede mollis pretium. Integer tincidunt. Cras dapibus..<ref name="random thingy" group="example ref1" /> Vivamus elementum semper nisi. Aenean vulputate eleifend tellus. Aenean leo ligula, porttitor eu, consequat vitae, eleifend ac, enim. Aliquam lorem ante, dapibus in, viverra quis, feugiat a.

====Dummy refs====
{{reflist|group="example ref1"}}

{{tick}} This is acceptable


===With duplicated citations===

Lorem ipsum dolor sit amet, consectetuer adipiscing elit.<ref group="example ref2">[https://www.google.com Random citation] Google. Retrieved at "random date".</ref> Aenean commodo ligula eget dolor. Aenean massa. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Donec quam felis, ultricies nec, pellentesque eu, pretium quis, sem. Nulla consequat massa quis enim. Donec pede justo, fringilla vel, aliquet nec, vulputate eget, arcu. In enim justo, rhoncus ut, imperdiet a, venenatis vitae, justo. Nullam dictum felis eu pede mollis pretium. Integer tincidunt. Cras dapibus..<ref group="example ref2">[https://www.google.com Random citation] Google. Retrieved at "random date".</ref> Vivamus elementum semper nisi. Aenean vulputate eleifend tellus. Aenean leo ligula, porttitor eu, consequat vitae, eleifend ac, enim. Aliquam lorem ante, dapibus in, viverra quis, feugiat a.

====Dummy refs====
{{reflist|group="example ref2"}}

{{cross}} This is not acceptable

Without duplicated citations

Lorem ipsum dolor sit amet, consectetuer adipiscing elit.[example ref1 1] Aenean commodo ligula eget dolor. Aenean massa. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Donec quam felis, ultricies nec, pellentesque eu, pretium quis, sem. Nulla consequat massa quis enim. Donec pede justo, fringilla vel, aliquet nec, vulputate eget, arcu. In enim justo, rhoncus ut, imperdiet a, venenatis vitae, justo. Nullam dictum felis eu pede mollis pretium. Integer tincidunt. Cras dapibus..[example ref1 1] Vivamus elementum semper nisi. Aenean vulputate eleifend tellus. Aenean leo ligula, porttitor eu, consequat vitae, eleifend ac, enim. Aliquam lorem ante, dapibus in, viverra quis, feugiat a.

Dummy refs

  1. ^ a b Random citation Google. Retrieved at "random date".

 Y This is acceptable

With duplicated citations

Lorem ipsum dolor sit amet, consectetuer adipiscing elit.[example ref2 1] Aenean commodo ligula eget dolor. Aenean massa. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Donec quam felis, ultricies nec, pellentesque eu, pretium quis, sem. Nulla consequat massa quis enim. Donec pede justo, fringilla vel, aliquet nec, vulputate eget, arcu. In enim justo, rhoncus ut, imperdiet a, venenatis vitae, justo. Nullam dictum felis eu pede mollis pretium. Integer tincidunt. Cras dapibus..[example ref2 2] Vivamus elementum semper nisi. Aenean vulputate eleifend tellus. Aenean leo ligula, porttitor eu, consequat vitae, eleifend ac, enim. Aliquam lorem ante, dapibus in, viverra quis, feugiat a.

Dummy refs

  1. ^ Random citation Google. Retrieved at "random date".
  2. ^ Random citation Google. Retrieved at "random date".

 N This is not acceptable

@Qwertyxp2000: AWB's general fixes will do this - see the page for more details. GoingBatty (talk) 01:10, 7 April 2015 (UTC)
GoingBatty, thank you for finding the right page. I will soon be changing the {{Duplicated citations}} tag. Qwertyxp2000 (talk) 01:15, 7 April 2015 (UTC)
@Qwertyxp2000: You might want to have the template link to WP:REFNAME instead of the AWB page. GoingBatty (talk) 01:20, 7 April 2015 (UTC)
@Qwertyxp2000: You might want to have a comment in the documentation saying that AWB may be used to fix the issue, and provide the link to the AWB page. GoingBatty (talk) 01:23, 7 April 2015 (UTC)
Why can't you do all this? Then I can see what you are thinking. Qwertyxp2000 (talk) 01:31, 7 April 2015 (UTC)
@Qwertyxp2000: Apparently some people think that duplicate citations are acceptable. GoingBatty (talk) 01:41, 7 April 2015 (UTC)
Looking at the second scenario, if I have referenced the first and last sentences of a paragraph to the same source but not the middle of it, or perhaps the middle is cited to another source, then if someone comes along and removes a "duplicate" cite, I would revert that as vandalism. We encourage people to use inline citations and multiple sources, but we don't limit people to only citing one statement from each source they use. ϢereSpielChequers 05:58, 7 April 2015 (UTC)
I think you are misunderstanding, WereSpielChequers. No one is saying a statement can only be referenced once. Qwertyxp2000 wants a bot to fix references which are duplicated (rather than merely used twice or more). Duplicated references produce two entries for the same thing in the list of references, whereas a reference used multiple times will have one entry with multiple uses (the little "^ a b" you see next to the example reference in the first example). I recently manually combined a bunch of duplicate references here (I also normalized the references so they could be referred to multiple times). Maybe that will help clarify things. ···日本穣? · 投稿 · Talk to Nihonjoe · Join WP Japan! 18:07, 24 May 2015 (UTC)
So is this a good idea for a bot or is it already fulfilled by AWB? Because I am looking to start working on a bot (something I've been putting off for two years). :P Sn1per (talk)(edits) 13:51, 27 June 2015 (UTC)
@Sn1per: AWB's general fixes will also do this in some circumstances - see the page for more details. GoingBatty (talk) 22:57, 28 June 2015 (UTC)
@GoingBatty: I see this problem a lot though and perhaps it would be a good idea to have it actively fixed by a bot to take some work off of AWB users? Sn1per (talk)(edits) 15:07, 29 June 2015 (UTC)
@Sn1per: I don't object to a bot task to do this using AWB's rules. How would you create a list of articles to be fixed? GoingBatty (talk) 15:38, 29 June 2015 (UTC)
@Sn1per and GoingBatty: A database scan for <ref>([^\<]+)</ref>.+<ref>\1</ref> will find about 20,000 candidates. The first 1% or so are listed at User:John of Reading/Sandbox. But remember that the AWB general fixes will only combine duplicate citations if the article already has at least one named reference, to avoid changing the citation style (AWB documentation). -- John of Reading (talk) 16:09, 29 June 2015 (UTC)
@John of Reading: Thanks John! You might want to tweak your regex to also include named references. GoingBatty (talk) 16:28, 29 June 2015 (UTC)
@GoingBatty: That would take more than a "tweak", but I'll think about it. If two references have the same name, the software will use only the first definition whether or not the definitions are identical. So the search would have to be for references that have identical content but different names, or one named and one unnamed. -- John of Reading (talk) 16:40, 29 June 2015 (UTC)
@John of Reading: Thanks for the regex, I tend to be terrible at those :P I would assume that my bot should follow the same behavior as AWB to comply with the same policy? Sn1per (t)(c) 22:54, 29 June 2015 (UTC)
@Sn1per: Definitely, and even then you may run into objections. See this 2011 thread. Hint: it's surprisingly long. -- John of Reading (talk) 06:04, 30 June 2015 (UTC)
@John of Reading: I think I tweaked the regex, not sure if it works all the time but should be accurate enough to quickly select pages for further scrutiny. Are there any obvious errors? <ref(.|\n)*?>([^\<]+)<\/ref>.+<ref(.|\n)*?>\2<\/ref> Sn1per (t)(c) 18:18, 2 July 2015 (UTC)
(still working on improving the regex) Sn1per (t)(c) 18:33, 2 July 2015 (UTC)
@Sn1per: The ".+" in the middle will only work if the regex is run with the "singleline" option turned on - the dot/period needs to match newlines - so the "(.|\n)" can be simplified to just ".". That's <ref.*?>([^\<]+)<\/ref>.+<ref.*?>\1<\/ref>. -- John of Reading (talk) 18:47, 2 July 2015 (UTC)
@John of Reading: Thanks for the advice. Here is an improved regex: <ref([^\>]*)?>([^\<]*)</ref>.*?<ref(?!\1)[^\>]*?>\2</ref> It (should) be able to find two refs, where at least one has no attributes i.e. name="", or if both have different attributes. Sn1per (t)(c) 19:15, 2 July 2015 (UTC)
(note that I am using python regexs, where a "." doesn't seem to match newlines) Sn1per (t)(c) 19:18, 2 July 2015 (UTC)
Never mind, I'm an idiot. Just saw your note about singleline mode. Sn1per (t)(c) 19:19, 2 July 2015 (UTC)
@Sn1per: Yes, that regex does the job - neat! Now, a quick reality check: on my laptop it will take 20 hours to run this against a database dump. My latest dump is from mid-May; I think it's not worth tying up the machine for so long to produce a list that is several weeks out of date. But I can produce a partial list of a hundred or so articles for testing purposes fairly easily. -- John of Reading (talk) 19:45, 2 July 2015 (UTC)
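For anyone following along, the simplified pattern from earlier in the thread can be exercised like this in Python (re.DOTALL is the "singleline" option mentioned above; the sample wikitext is made up for the demonstration):

```python
import re

# John of Reading's simplified pattern; DOTALL lets "." span newlines.
DUP = re.compile(r"<ref.*?>([^\<]+)<\/ref>.+<ref.*?>\1<\/ref>", re.DOTALL)

page = (
    "Fact one.<ref>Smith 2010, p. 5</ref>\n"
    "Fact two.<ref>Smith 2010, p. 5</ref>\n"
)
m = DUP.search(page)
print(m.group(1))  # Smith 2010, p. 5
```

Two refs with different contents do not match, so the scan only surfaces candidates whose citation text is byte-for-byte identical.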
  Coding... Well, it seems like a good idea to me, so I'll start working on it. But if you guys strongly disagree, leave a message on my talk page so I don't waste too much effort. The bot will probably take a few days to work on given that I am new to the field. Sn1per (talk)(edits) 17:16, 28 June 2015 (UTC)
@Sn1per: Why code it? Why not use the AWB library that already contains the code? - X201 (talk) 15:33, 29 June 2015 (UTC)
@X201: I was thinking of making a pywikibot-based robot hosted on the WMF Tool Labs servers so that the bot can run all the time, rather than relying on me to run it off my PC, given that the problem is pretty large. Sn1per (t)(c) 22:54, 29 June 2015 (UTC)

Woken up

GA Cup

On behalf of myself and Figureskatingfan, we are looking at the possibility of having a bot assist us in the GA Cup. We held the first competition at the end of 2014/beginning of 2015 and, after its success, we are currently planning a second competition, hoping for it to be an even bigger success. In the first competition, some of the participants expressed their frustration with how inefficient the submission process for their Good article reviews was. For the upcoming competition, we were wondering if it would be possible to have a bot scan the Good article nomination page for reviews being conducted by participants and add the appropriate review links to a page.

More specifically, the bot would ideally scan the nomination page and, if it saw that (say) BenLinus1214 was reviewing an article, add it under the appropriate header.

If anyone is interested in helping us I would be glad to have you on board and will be more than happy to answer any questions!--Dom497 (talk) 23:25, 11 May 2015 (UTC)

@Dom497: Doesn't seem that hard or complicated. I'll test around a bit but I'm not going to promise anything yet. -24Talk 21:39, 28 May 2015 (UTC)
@Negative24: Thank you so much for trying! I would have done it myself but I don't know enough code to do it!--Dom497 (talk) 22:35, 28 May 2015 (UTC)
@Negative24: Hey, just curious to see if you've had any positive results. Thanks again! :) --Dom497 (talk) 02:30, 14 June 2015 (UTC)
@Dom497: Sorry for the delay. I've been exploring everything related to bots and this being my first time even trying to make a bot, things are going a bit slow. I'm going to be going on a month long Wikibreak (related to a project in life that needs my time) and so I may not be able to code anything up before the 2015 GA cup. I'm going to leave this open for anyone to pickup (feel free to do so if you're interested) but I'm not able to fully start on something at the moment. I will resume this project when I have the time and if someone hasn't picked it up by then. Sorry, -24Talk 03:08, 19 June 2015 (UTC)
@Negative24: No problem at all! Thanks for trying! :) --Dom497 (talk) 19:26, 19 June 2015 (UTC)
  Question: While I am good with botwork, my experience with the GA process is non-existent. It would help to have some clarification. Maybe you can show me a scenario, and show me the diffs that need to be done?—cyberpowerChat:Online 22:02, 26 August 2015 (UTC)
@Cyberpower678: If a GA nomination is being reviewed, the list of noms at WP:GAN lists their name (see this diff). With the GA Cup there are submission pages (see here for mine). On these I put all the articles I'm reviewing using a special template. This currently has to be done manually, and people forget. What I think Dom497 is looking for is a bot that will periodically look at WP:GAN, see if a GA Cup competitor is reviewing an article, and add that article to their respective submission page using the template. Hope that helps, if something isn't clear, just ping me as I'm not watching this page. Wugapodes (talk) 03:46, 28 August 2015 (UTC)
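The scanning step could be sketched roughly as below. To be clear, the line format here is a made-up stand-in, not the real WP:GAN markup, and the competitor set would come from the Cup's signup pages; this only illustrates the "match reviewers against competitors" idea.

```python
import re

COMPETITORS = {"Wugapodes", "BenLinus1214"}  # assumed, for illustration

def reviews_by_competitors(gan_text):
    """Return (article, reviewer) pairs where the reviewer is a competitor."""
    pairs = re.findall(
        r"'''\[\[(.+?)\]\]'''.*?\[\[User:([^|\]]+)", gan_text
    )
    return [(a, r) for a, r in pairs if r in COMPETITORS]

sample = (
    "'''[[Example article]]''' Review in progress by [[User:Wugapodes|Wugapodes]]\n"
    "'''[[Other article]]''' Review by [[User:SomeoneElse]]\n"
)
print(reviews_by_competitors(sample))  # [('Example article', 'Wugapodes')]
```

Each matched pair would then be appended to that competitor's submission page using the Cup's template, if not already listed.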
I asked Legoktm if he could do it since he already runs bots in that area, but he says he's too busy, so I guess I'll do it. Keep an eye on this request.—cyberpowerChat:Online 23:02, 28 August 2015 (UTC)
@Dom497 and Cyberpower678: This would definitely be helpful in future cups, but this probably shouldn't be given an extremely high priority. Let's say Cyberpower works like lightning and gets this bot ready in a week from now. At the current rate bot approvals is moving, the approval will likely take another 2-3 weeks. That would give you maybe 3 weeks of use of this bot in the final round, and that's a best case scenario if Cyberpower creates the bot rapidly. It might be best to code and trial this bot during the "offseason" to prepare it for full use in the next cup. That would also be less likely to cause frustration for the contestants due to a late switch in how things are done. This could be trialled by popping any few regular GA reviewers in as "contestants" for the bot and letting it do its thing so it's ready to go out of the gate for the next cup. ~ RobTalk 14:07, 9 September 2015 (UTC)

  Doing... For reasons explained here (among others), internet traffic should be encrypted. Recently, Wikimedia decided to use HTTPS by default, which raises the question of why we do not also convert external links to HTTPS (wherever this is an option). For instance, one of the most-linked websites on Wikipedia, the Internet Archive, has actually encouraged HTTPS inbound links ever since 2013, yet most of the external links on Wikipedia to it still use insecure HTTP. Also, all Google services offer HTTPS access, and Google encourages one to use it, but there are still thousands of links to Google Books, Google News, and YouTube with HTTP. Long story short, what I am asking for is a simple search-and-replace bot, to convert:

http://[wayback.|web.|*]archive.org/ → https://[wayback.|web.|*]archive.org/
http://[news.|books.|*]google[.com|.co.uk|.ca|...]/ → https://[news.|books.|*]google[.com|.co.uk|.ca|...]/
youtube ... you get the idea.
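The substitutions above could be sketched as a simple rewrite pass. The host list here is an illustrative subset only, not a complete or vetted whitelist of HTTPS-capable sites:

```python
import re

# Hosts known to support HTTPS (illustrative subset, not exhaustive).
HTTPS_OK = re.compile(
    r"http://((?:[\w-]+\.)?(?:archive\.org|google\.[\w.]+|youtube\.com))"
)

def upgrade_links(wikitext):
    """Rewrite plain-HTTP links to HTTPS for hosts on the list."""
    return HTTPS_OK.sub(r"https://\1", wikitext)

text = "See http://web.archive.org/web/x and http://example.com/page."
out = upgrade_links(text)
print(out)
```

Links to hosts not on the list (example.com above) are left untouched, which is the conservative behaviour a search-and-replace bot would need.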

Is it possible to have this done by a bot? --bender235 (talk) 17:43, 27 June 2015 (UTC)

bender235, does the HTTP to HTTPS conversion for external links have consensus? I recall some reactions in the past but I can't find any link to a discussion about it. -- Magioladitis (talk) 17:50, 27 June 2015 (UTC)
We had a discussion on VPP that concluded that we should use protocol-relative links for whichever sites support both HTTP and HTTPS equally. However, since Wikipedia moved to HTTPS by default permanently, protocol-relative links make little sense. --bender235 (talk) 18:40, 27 June 2015 (UTC)
JFYI: this is why we do this: “Since the Internet Archive site uses HTTPS by default for its connections, Russian ISPs are unable to identify which page is being requested by their users, and thus whether it is the one subject to the new ban.” ISPs can no longer interfere with a site's traffic. All they can do is block the entire domain, which sooner or later will cause public protest. --bender235 (talk) 05:53, 29 June 2015 (UTC)
I think for YouTube it is better to convert to {{Youtube}}? -- Magioladitis (talk) 17:50, 27 June 2015 (UTC)
Yes, raw Youtube links should be converted to {{Youtube}}, but those inside {{cite web}} or similar templates should just be converted to https. --bender235 (talk) 18:40, 27 June 2015 (UTC)
@Bender235: In early 2014 we discussed using protocol-relative links instead. GoingBatty (talk) 18:35, 27 June 2015 (UTC)
It appears that {{Google books}} uses protocol-relative links while {{YouTube}} and {{Wayback}} use https. GoingBatty (talk) 18:39, 27 June 2015 (UTC)

I pointed out to Bender235 that, over and above altering links from "http:" to "https:", changing from a country-specific address (such as co.nz) to .com may deny access to some people, as sometimes there appears to be a restriction on access to text in one country but not another. Bender235 wants proof of this; as I have not kept records of it and I make a lot of edits, I will provide one when I come across it, but in the meantime I see no need to change the country domain along with the connection type.

@PBS: As I have repeatedly explained to you, this claim makes no sense at all. If book X is restricted (by copyright) to be seen in country Y, Google will not determine this by the top-level domain you browse to, but by the IP address you are currently browsing from. This concept is called geo-blocking. Please read the article. --bender235 (talk) 19:18, 6 September 2015 (UTC)

It has been pointed out that this sort of edit can easily mask vandalism (see User talk:Bender235#https), so as it is not a change that needs expediting, that must be weighed in deciding whether this is a suitable candidate for automation (rather than, for example, adding it to a process like AWB to be done when other more specific changes are made). See also User talk:Bender235#AWB; apparently Bender235's AWB access was removed by user:Materialscientist on 2 July 2015 (it has not been restored). When discussing this on Bender235's talk page, it was suggested by Bender235 that the discussion Wikipedia:Village pump (technical)/Archive 138#HTTPS by default was relevant to this and so should probably be included in this conversation.

-- PBS (talk) 09:50, 13 July 2015 (UTC)

Images tagged for Commons Transfer when the image is already at Commons...

This CatScan query:

http://tools.wmflabs.org/catscan3/catscan2.php?depth=10&categories=Copy+to+Wikimedia+Commons%0D%0AWikipedia+files+with+the+same+name+on+Wikimedia+Commons&ns[6]=1&sortby=uploaddate&ext_image_data=1&file_usage_data=1

Is there a way for a bot to handle this periodically? Namely, removing the {{Copy to Wikimedia Commons}} tag, so people aren't confused about what ACTUALLY does need to be reviewed and transferred? Sfan00 IMG (talk) 10:43, 2 October 2015 (UTC)

  • Waste of time and unnecessary watchlist clutter. If the file has been tagged with {{subst:ncd}}, it is usually deleted on Wikipedia within a few days. There should be no problem if the "copy to Commons" template remains on the file information page during those few days. --Stefan2 (talk) 11:11, 2 October 2015 (UTC)

WikiProject Pakistan

I'd like to request a bot to tag all articles, categories, subcategories, and templates under the parent Category:Pakistan with Template:WikiProject Pakistan. It's been a while since bot-assisted WP:PAK tags were added en masse (the last time was in early 2012), and I know that there are hundreds of pages that need tagging. A big thanks and a complimentary barnstar await any bot operator who could take the initiative. Many thanks, Mar4d (talk) 02:49, 10 October 2015 (UTC)

How deep into the category tree are you intending to go? Tagging "all subcategories" is generally a bad idea. For example,
HTH. Anomie 14:38, 10 October 2015 (UTC)
@Anomie: Thanks for the analysis. In that case, is it not possible to restrict to the main subcategories (eg. where the subcategory at least has the word 'Pakistan' in its name) or other easily identifiable relevant descriptor? Mar4d (talk) 19:09, 10 October 2015 (UTC)
@Mar4d: I suggest you follow the recommendations at User:Yobot#WikiProject tagging for this request, even if Yobot isn't going to do the tagging. GoingBatty (talk) 01:21, 15 October 2015 (UTC)

Updating T:TDYK every new day

Every day, the Template talk:Did you know page is updated by moving the Current nominations level 2 section header to one newer day, and adding a new level 3 section header for articles created/expanded on that day. This task is currently done manually by a human. Examples: September 26, September 25, September 22, September 1.

I think this once-a-day task may be done better by a bot. Note that this is my first bot request, so please let me know if I have made any mistakes. sstflyer 15:34, 26 September 2015 (UTC)

  Doing... Seems simple enough. This could run exactly at 00:00 UTC if we want. Happy to implement this, I don't think it will be hard MusikAnimal talk 04:42, 30 September 2015 (UTC)

@Allen3 and Mandarax: Any input on this? MusikAnimal talk 04:43, 30 September 2015 (UTC)
The common case of moving the "Current nominations" header down one day and adding a header for the new day should be straightforward. The hard part is dealing with the exceptions. The most common exception is probably when someone living east of Greenwich adds a header for a new day before 00:00 UTC. When a new date header is added early, the "Current nominations" header is almost never moved. Template talk:Did you know being actively edited by humans also means date headers and their associated comments are occasionally mangled. --Allen3 talk 09:11, 30 September 2015 (UTC)
Allen3 has covered the key exceptions: the bot will need to know that the new day may already be there, and that there should be eight consecutive days (seven prior days plus the new day) in the "Current nominations" section. The bot should probably be written so as to allow for a different number of days in the "Current nominations" section—it hasn't been that long since we went from five prior days to seven, and it's conceivable that the number could change again in the future. (I would expect such changes to occur years apart.) BlueMoonset (talk) 16:07, 30 September 2015 (UTC)
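The eight-day window described above can be sketched as follows. The header wording and the default of seven prior days are taken from this discussion; a real bot would read the day count from its config page rather than hard-code it.

```python
from datetime import date, timedelta

def expected_headers(today, prior_days=7):
    """Date headers the "Current nominations" section should contain:
    the prior days plus the new day, oldest first."""
    days = [today - timedelta(days=n) for n in range(prior_days, -1, -1)]
    return [f"Articles created/expanded on {d:%B} {d.day}" for d in days]

headers = expected_headers(date(2015, 9, 30))
print(len(headers))   # 8
print(headers[-1])    # Articles created/expanded on September 30
```

Comparing this expected list against the headers actually on the page handles the "new day already added early" case: the bot only adds what is missing.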
@Allen3, SSTflyer, and BlueMoonset: Got it. My thoughts: First off, add an editnotice and perhaps some embedded comments saying this process is automated now, so that people don't bother trying to do it by hand. They should simply wait until the bot comes by and does it. If the bot detects that a heading for the new day was added but the "current nominations" header wasn't moved, it will only do the latter, and vice versa. Additionally, the bot will produce a daily report of any errors it encountered when trying to parse the page. I do a similar thing with the WP:PERM pages, see Special:PermaLink/682155886. We could transclude the report at the top of Template talk:Did you know; it would contain only errors and otherwise be blank, so that problems are easily seen and swiftly fixed. The bot will continue to try to do its task until the errors are fixed, or until it detects the job has already been done by hand. I can have a wiki page specify how many days after the current nominations a heading should exist, so you can change it whenever you need to without my assistance. How does that sound? MusikAnimal talk 16:16, 30 September 2015 (UTC)
If there's a wiki page that specifies the number of days, then it has to be fully protected so that only an admin can change it, much like the queue pages at DYK are protected. This is not a number that should be changed without an RfC being run first. BlueMoonset (talk) 05:28, 2 October 2015 (UTC)
  Coding... No problemo. I am going to assume my development approach is sane and proceed with coding soon. MusikAnimal talk 20:12, 2 October 2015 (UTC)
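For those following along, the header-maintenance logic described above can be sketched as a small decision function. This is only a sketch with hypothetical names; the real bot also has to parse the page, handle the mangled-header cases Allen3 mentions, and produce the error report.

```python
from datetime import date, timedelta

def plan_dyk_update(existing_headings, today, prior_days=7):
    """Decide which maintenance steps remain for T:TDYK on a given day.

    existing_headings -- set of dates that already have a section heading
    prior_days -- how many days before the new day belong under
                  "Current nominations" (configurable on-wiki)
    Returns (headings_to_add, boundary): the date headings the bot still
    needs to add, and the oldest date that should sit under the
    "Current nominations" header.
    """
    headings_to_add = [] if today in existing_headings else [today]
    boundary = today - timedelta(days=prior_days)
    return headings_to_add, boundary
```

If an editor east of Greenwich has already added the new day's heading, headings_to_add comes back empty and the bot only moves the "Current nominations" header, matching the behavior discussed above.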

()   BRFA filed Sorry for the delay, got held up with other technical work MusikAnimal talk 01:55, 9 October 2015 (UTC)

The bot is in trial, by the way, and seems to be doing fine. While monitoring its activity, I noticed we are manually removing nominations that have been accepted/declined [3]. The corresponding templates appear to be wrapped in <noinclude>...</noinclude>. Given that we have the list of nominations at T:TDYK, it shouldn't be terribly difficult for the bot to check each one and, if it has been closed, remove it from the list. How do you feel about automating this process? T:TDYK is a very large page with lots of transclusions and can take quite a while to load at times. If the bot automated removing redundant transclusions to keep the page tidy, it might speed things up for us overall. For performance/efficiency, it would only check entries in "Older nominations". Pinging @Allen3, SSTflyer, and BlueMoonset: who might be interested MusikAnimal talk 00:09, 15 October 2015 (UTC)
The problem with this idea is that "accepted" nominations are sometimes found to be wanting—it's been happening more and more lately—and are removed from prep or queue, at which point the template is reverted to before it was substituted; the "noincludes" go away. At that point, they automatically show up again on the T:TDYK page. If the bot has removed the template transclusion entry, then it won't reappear, and will be lost from view unless someone notices that the template has gone AWOL. Having a less crowded page would be welcome in terms of speed and other issues, but I think the risks would be too high that nominations would slip between the cracks if they run into trouble. BlueMoonset (talk) 04:27, 15 October 2015 (UTC)
Got it. I figured there was something to it if you were leaving all of those blank transclusions on the page! I'm not itching to have my bot control the page or anything, just tossing around some ideas to improve the process... what about category-driven automation? I see that when you open a DYK nomination it has the category "Pending DYK nominations". The bot could go off of that to build T:TDYK, but the nominator would need to indicate in the nomination what date heading it should go under. Maybe this idea is going a little overboard, but it might make it easier to keep track of nominations, and eliminate maintenance at T:TDYK. This is comparable to WP:GAN, which is entirely maintained by a bot MusikAnimal talk 15:05, 15 October 2015 (UTC)
@BlueMoonset: Pinging in case you hadn't read my new idea. This would clearly require broader discussion, but I'm curious what your thoughts were. Thanks MusikAnimal talk 16:20, 22 October 2015 (UTC)

Bold and italic

A bot should replace * * with ''' ''' and _ _ with '' ''. GeoffreyT2000 (talk) 00:02, 21 October 2015 (UTC)

Why?—cyberpowerTrick or Treat:Online 00:10, 21 October 2015 (UTC)
And how? It is probably impossible for a bot to do this without generating many false positives, as underscores and asterisks have many other uses. I have never seen this convention used in articles.--JohnBlackburnewordsdeeds 00:17, 21 October 2015 (UTC)

Draft articles without an AFC banner

There used to be a category (and a bot that forced articles into the category) that kept track of Draft-class articles without an AFC submission banner of any type. I've also seen some lost into the ether because the submit substitution was screwed up somehow. Could a bot create a list of all draft-space articles without a call to template:AFC submission? Depending on the volume created, this may be worth doing regularly (monthly?) as a backlog at Wikipedia:WikiProject Articles for creation or something. -- Ricky81682 (talk) 19:53, 26 October 2015 (UTC)

What is the rationale for such a category? I thought Draft space was, by definition, a place where people could work on drafts of articles before submitting them to the AFC process or moving them to article space. Is there a requirement that Draft articles have certain tags? I must be missing something. – Jonesey95 (talk) 20:00, 26 October 2015 (UTC)
You're correct. I worded this wrong for what I'm looking for. -- Ricky81682 (talk) 21:15, 26 October 2015 (UTC)
  Not done Ricky81682 Draft namespace was created as a unified draft location that anybody could work on, merging Abandoned Drafts and Articles for Creation. In the formative discussions, there was a suggestion of putting the non-AFC pages in draft space into some sort of categorization scheme, so that we could track drafts that were sitting out there (in supposed WP:WEBHOST violation) never being improved; that suggestion was struck down. I believe this is not the first or second time I've explained this difference. Before this goes any further, can you please look into proposing a RFC at WT:DRAFTS or at WP:VPR to establish that there is a consensus to do this? I doubt there is a consensus to do this, but if a bot is to do this, there needs to be an ironclad consensus for it. I see that you've asked before (1) and didn't get the answer you wanted. Hasteur (talk) 20:22, 26 October 2015 (UTC)
Yes, and I agree that a mass move would be improper. I was actually looking for old drafts and knew that the lack of a header would simplify it. How about a request for all draft-space articles that have not been edited in, say, two years? I don't know how the API works, but I guess asking for the lack of a header would be an extra computing call. It would not be G13 eligible because of the lack of a header. What has happened is that I found an old user, listed the junk for deletion, found a possibly useful old stale draft (say User:World Cinema Writer/National Treasure 3) and 'adopted' it, moved that to Draft:National Treasure 3 and added a new banner so that it's checked once in a while. What if I found an old draft article and wanted to work on it the same way? That, or take them to MFD in bulk, I guess. I think seeing two-year-old stale drafts would be perfect for the Abandoned drafts project to work through. -- Ricky81682 (talk) 21:15, 26 October 2015 (UTC)
I'm sure that there is a good idea in there somewhere. If you take it to WT:DRAFTS, you'll get some help refining and defining the need. Once that need is defined, you can bring a request back here for implementation. I imagine that it wouldn't be hard, for example, for a bot to tag Draft articles that had not been edited in a while (except by bots), and then automatically remove that tag after a human editor makes a change to the page. That should not be discussed on this page, however. – Jonesey95 (talk) 21:44, 26 October 2015 (UTC)
I'm just asking for a list right now, not necessarily a category. I think this is the place to ask for something like that. A list can be checked by humans then. I'll ask there too. If there's interest, it may be a bot task to review periodically. Thanks. - Ricky81682 (talk) 04:03, 27 October 2015 (UTC)

() See this Quarry. Assuming my SQL is right, there are around 1026 draft pages that have not been edited in the past six months. Most of these look like test pages, vandalism, or WP:WEBHOST violations. I even just deleted an attack page. Furthermore, nearly all that I've checked have fewer than 5 edits made to them. I suppose the lack of articles makes sense, as many content creators would have instead found their way into the draftspace via article creation links, which insert an AfC template. Either way, it looks like there's a lot of stuff to review here. I can make a tool to make interacting with this data easier MusikAnimal talk 05:57, 27 October 2015 (UTC)
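The Quarry query described above boils down to a timestamp cutoff. A rough stand-alone equivalent, with hypothetical names and operating on (title, last_edited) pairs pulled from the database or API, might look like:

```python
from datetime import datetime, timedelta

def stale_drafts(pages, now, max_age_days=183):
    """Return titles of draft pages untouched for roughly six months.

    pages -- iterable of (title, last_edited_datetime) pairs
    max_age_days -- staleness threshold; 183 days approximates six months
    """
    cutoff = now - timedelta(days=max_age_days)
    return [title for title, last_edited in pages if last_edited < cutoff]
```

The same function works for the two-year threshold Ricky81682 suggests by passing max_age_days=730.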

Just updated the Quarry to also show the page length in bytes. If you put that in descending order you're more likely to find articles. There are a fair number, it turns out. MusikAnimal talk 06:09, 27 October 2015 (UTC)

Ricky81682 (RE to 21:15, 26 Oct 2015 UTC) I would not bulk MFD them, as the argument you're using ("that they're stale and haven't been touched") was rejected multiple times for non-AFC draftspace pages. I would strenuously suggest you go round up a consensus at WT:Drafts prior to nominating for MFD. Getting the consensus also has the side benefit of stirring the community up to support your MFD nominations. Once you can satisfy the CSD requirements (objective, uncontestable, frequent, non-redundant), there'll be a wonderful case for using CSD to vaporize the poor drafts. Hasteur (talk) 14:26, 27 October 2015 (UTC)

I have no intention to. Please give me some credit here. I've brought this up at WP:DRAFTS and Abandoned Drafts, as I'd rather it be done as an Abandoned Drafts backlog to work on, something to give that project a push, I think. Although 1000 pages is nothing compared to the 49k backlog of old userspace drafts. -- Ricky81682 (talk) 21:38, 27 October 2015 (UTC)

hate

Please keep the changes that I have made. What is your problem? Sir, please do this. — Preceding unsigned comment added by Aamir rodaba (talkcontribs) 18:46, 19 December 2015 (UTC)

Template:YouTube

Since I've been the only one active on Template talk:YouTube for the past two months, I am going to claim consensus for my proposed changes to the template. But before I rewrite the template, I need a bot to go to every page using it, and replace the channel parameter with user. Thanks, 117Avenue (talk) 00:57, 9 November 2015 (UTC)
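The substitution requested above is a bounded find-and-replace. A naive sketch follows; it assumes the template call is unnested and the parameter is written exactly as |channel= with no extra spacing, and a production run should use AWB or a real wikitext parser instead:

```python
import re

def rename_channel_param(wikitext):
    """Rename |channel= to |user= inside {{YouTube ...}} calls only,
    leaving any |channel= text elsewhere on the page untouched."""
    def fix(match):
        return match.group(0).replace("|channel=", "|user=")
    # Matches a single, non-nested {{YouTube ...}} transclusion.
    return re.sub(r"\{\{\s*[Yy]ouTube\b[^{}]*\}\}", fix, wikitext)
```

Restricting the replacement to the matched transclusion is what keeps stray |channel= strings in prose or other templates from being touched.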

This could probably be done with AWB. It looks like there are only a few hundred articles in Category:Articles using YouTube with deprecated parameters. – Jonesey95 (talk) 06:29, 9 November 2015 (UTC)
I've been trying to avoid learning AWB. 117Avenue (talk) 02:55, 11 November 2015 (UTC)
@117Avenue:   Doing... with AWB. Kharkiv07 (T) 02:45, 12 November 2015 (UTC)
@117Avenue:  Y Done Kharkiv07 (T) 03:27, 12 November 2015 (UTC)
Thanks, 117Avenue (talk) 02:37, 14 November 2015 (UTC)

Reducing the load of WP:TAFI unofficial-manager Northamerica1000

Please take part in the ongoing discussion at: Wikipedia:Village pump (technical)#Reducing the load of WP:TAFI unofficial-manager Northamerica1000 to make our lives over at WP:TAFI that little bit easier. :)--Coin945 (talk) 15:27, 30 October 2015 (UTC)

Started a thread at WT:TAFI#Bot automation MusikAnimal talk 01:39, 9 November 2015 (UTC)

Give out Deletion to Quality Awards

Is there a way a bot could give out WP:Deletion to Quality Awards?

Here's what it would have to do:

  1. Check Category:Deletion to Quality Award candidates
  2. Find out who the FA, FL, or GAN nominator was.
  3. Place the corresponding Banner Award from Wikipedia:Deletion_to_Quality_Award#Banner_awards on their user talk page, linking to the article and the AFD page as the two parameters in those Banner Awards.

The award message can say it is given on behalf of Cirt and WP:Deletion to Quality Awards.

And also, is there any way a bot could update the "Hall of Fame" table at Wikipedia:Deletion_to_Quality_Award#Deletion_to_Quality_Award_Hall_of_Fame?

Thoughts?

Any help would be most appreciated,

Cirt (talk) 05:04, 21 October 2015 (UTC)

Note: a one-time run would be totally acceptable. :) — Cirt (talk) 09:20, 21 October 2015 (UTC)
Can anyone help me out with above, please? — Cirt (talk) 07:59, 29 October 2015 (UTC)
Any help would be appreciated, please? — Cirt (talk) 03:38, 4 November 2015 (UTC)
Too much effort required. Most other awards (such as WP:1M or WP:FOUR) do not have dedicated bots. sst✈discuss 08:45, 4 November 2015 (UTC)
@SSTflyer: Not asking for a dedicated bot. Just a one-time run, please? — Cirt (talk) 08:47, 4 November 2015 (UTC)
For a one-time run you could always use AWB. sst✈discuss 08:48, 4 November 2015 (UTC)
@SSTflyer: I'm not personally that familiar with how to use AWB like that, perhaps you could help? — Cirt (talk) 08:52, 4 November 2015 (UTC)
I suspect that the problem is figuring out who should get "credit". Once you have a list of usernames, then Special:MassMessage can do the delivery (if the message is identical for everyone) or a simple script (if the message should say "Thanks User:Example for saving This Named Article"). WhatamIdoing (talk) 19:31, 4 November 2015 (UTC)
Using Special:MassMessage to give out awards seems too impersonal. Similar argument as for why we don't have a welcome bot. Compiling a list of the (user, article, award_type) tuples is fine, updating the WP:DQUAL list with it is fine, and manually giving out awards from that list is fine, but I'm wary of an automated thing. More on the higher-level merits of the task, I note many of these AfDs were closed with strong, even speedy, keep rationales—I wonder if those should be exempt. — Earwig talk 19:49, 4 November 2015 (UTC)
@WhatamIdoing and The Earwig: Thank you very much for your helpful input! I can try to give out the awards myself, with AWB, if there's an easier way to do it. Is there a way to generate these lists you speak of? — Cirt (talk) 22:08, 4 November 2015 (UTC)
I don't know how to do that, other than manually copying and pasting everything into a spreadsheet. WhatamIdoing (talk) 00:16, 7 November 2015 (UTC)
No time at the moment, Cirt, but if you still need a list by this coming weekend, let me know. — Earwig talk 10:38, 8 November 2015 (UTC)

Neelix redirects

An adminbot should delete all redirects created by Neelix, many of which are currently at RfD. GeoffreyT2000 (talk) 17:43, 5 December 2015 (UTC)

We've had seemingly endless discussions about this. The consensus arrived at was that admins, human admins, can use their judgement and delete any that seem silly under G6 and speedily close any RFDs. As tempting as I find this idea, consensus was already established that some of these redirects are not utter garbage. If we could just blindly delete them en masse, it would have already happened. Beeblebrox (talk) 18:58, 5 December 2015 (UTC)

Convert deprecated parameter "or" for template:s-rel

Would someone be ever-so-kind as to set up a bot to convert a deprecated parameter? The total number of articles would be about 340, with one edit in each article. The lists are at Template talk:S-rel/oc lists, with the new parameter for each. For example, "Change these {{s-rel|oc}} to {{s-rel|chal}}". The discussion was/is at Template talk:S-rel#Introduce two new parameters. tahc chat 03:59, 15 November 2015 (UTC)

Replacement of Template:Infobox Country World Championships in Athletics

Hello. Could I hire a bot to substitute all transclusions of {{Infobox Country World Championships in Athletics}}, per the outcome of this TfD? Alakzi (talk) 13:12, 20 June 2015 (UTC)

Same with {{Infobox China station}} and {{Infobox Japan station}}, but using the sandbox version. Alakzi (talk) 17:34, 20 June 2015 (UTC)
{{Infobox Country World Championships in Athletics}} done - thanks Plastikspork. Alakzi (talk) 16:00, 25 June 2015 (UTC)
@Alakzi: is this still pending or done? Mdann52 (talk) 18:18, 28 August 2015 (UTC)
China and Japan station are pending. Alakzi (talk) 18:20, 28 August 2015 (UTC)
Hi. I'm working on eliminating the backlog of requests, one request at a time. Unless, someone else takes this one, I will hopefully get to it soon.—cyberpowerChat:Online 00:37, 29 August 2015 (UTC)
@Cyberpower678: Mind if I steal this one, if you haven't started on it yet? I'm working on clearing WP:TFD/H and there's a handful of templates that can be handled in one BRFA, including the two remaining here. ~ RobTalk 16:06, 3 September 2015 (UTC)
Sure. As long as I haven't plastered a doing or coding template, I haven't taken it yet.—cyberpowerChat:Limited Access 16:10, 3 September 2015 (UTC)
  Doing... Thanks. ~ RobTalk 16:15, 3 September 2015 (UTC)
  BRFA filed ~ RobTalk 04:28, 26 September 2015 (UTC)
China is done. Japan still has a few thousand transclusions to go. — Earwig talk 21:13, 22 December 2015 (UTC)
 Y Done — Earwig talk 03:26, 24 December 2015 (UTC)

WikiProject Mountains banner update

  Resolved

After the recent update of the Wikipedia:WikiProject Mountains banner (Template:WikiProject Mountains) to include two new parameters for mountains in the Alps (see discussion here), I would like to update the talk page of every article concerned (all in Category:Mountains of the Alps, no subcategories) by adding:

 |alps=yes | alps-importance= 

to:

 {{WikiProject Mountains | class= | importance= }}

result:

 {{WikiProject Mountains | class= | importance= | alps=yes | alps-importance=[same as "importance"] }}

ZachG (Talk) 18:50, 16 November 2015 (UTC)
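The transformation requested above can be sketched as a small string rewrite. The function below is hypothetical: it assumes a flat banner ending in }} with no nested templates, and copies the existing |importance= value into |alps-importance= per the request.

```python
import re

def add_alps_params(banner):
    """Append |alps=yes and |alps-importance= (copying |importance=)
    to a {{WikiProject Mountains}} banner."""
    if "alps=" in banner:
        return banner  # already tagged; keep the edit idempotent
    # Grab the value of |importance= (but not |alps-importance=).
    m = re.search(r"\|\s*importance\s*=\s*([^|}]*)", banner)
    importance = m.group(1).strip() if m else ""
    tail = f" | alps=yes | alps-importance={importance} }}}}"
    return banner[:-2].rstrip() + tail
```

The idempotence check matters for a bot: re-running the task over pages it has already edited must not duplicate the parameters.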

@Zacharie Grossen: What about cases where {{WikiProject Mountains}} is not on the talk pages of articles in Category:Mountains of the Alps? Would you want that added (and if so, how), or just skipped? Talk:Acherkogel is an example.  Hazard SJ  07:34, 21 November 2015 (UTC)
You made a very good point. I guess the best option is to add a blank template like this one:
 {{WikiProject Mountains | class= | importance= | alps=yes | alps-importance= }}
or, if the article is a stub:
 {{WikiProject Mountains | class=stub | importance= | alps=yes | alps-importance= }}
Would it be possible? ZachG (Talk) 16:45, 21 November 2015 (UTC)
@Zacharie Grossen: Definitely! I've coded and trialed this (my bot is already approved for this sort of task). As you can see in the trial, there was an issue where the parameter was set as alps-importance=importance=(value) rather than just alps-importance=(value). That's been fixed. The second issue I noticed was where the only edit was the addition of an empty alps-importance= parameter (see this and this), which would be undesirable (I'm gonna prevent adding that parameter unless there's actually a value, or unless I'm also adding alps=yes). Otherwise, I hope everything else is okay in the sample run I made?  Hazard SJ  02:40, 22 November 2015 (UTC)
Glad to hear that! I'm definitely not an expert on the matter but if the issues you mentioned are fixed then I think it's ok to run the bot. Thank you for your help. ZachG (Talk) 13:23, 22 November 2015 (UTC)

Hazard-SJ I can help with the task. For instance, in this one the WikiProject banner should have been below the other template. This can be done if you enable general fixes in AWB. You should also enable this module to normalise all WikiProject banners and avoid placement problems. -- Magioladitis (talk) 16:23, 24 November 2015 (UTC)

@Magioladitis: I just made changes to better determine where to place the template (see example from above). Additionally, I'm not opposed to your offer to help; did you have anything specific in mind (so our bots don't spill oil in each other's way ;) )? P.S. If you haven't noticed, I'm not using AWB, I'm using Python.  Hazard SJ  09:43, 25 November 2015 (UTC)

Hazard-SJ my mistake. I thought you were using AWB. What I can do is to ensure the correct placement of the banners etc. You can do the rest. -- Magioladitis (talk) 12:32, 28 November 2015 (UTC)

All done here. -- Magioladitis (talk) 09:43, 29 November 2015 (UTC)

BOT request

I would like to have a bot named 'KNOWLEDGEBOT'. I want a bot so that I could edit pages more speedily than I can now, and to help everyone here. I hereby accept the bot policy and take all responsibility for the bot: I won't allow it to violate anything, and I will oversee its way of commenting and communicating. It won't do any harm or edit too speedily; I will supervise the bot. I request that you create this bot with me as its operator. I am responsible for all of its acts, repairs, communication language, etc. I will supervise my bot and it will be in my control. Regards BOTFIGHTER (talk) 13:57, 3 December 2015 (UTC)

  •   Not done. 1. This is the place to request that a bot do a task, not to request a bot. 2. I want a bot so that I could edit pages more speedily than I can and to help everyone here. Urm... Edit faster? Unless you have some precognition in your pocket, you can't do that. 3. Your username is exceedingly concerning given you want to fight bots and also want to have a bot. No. Hasteur (talk) 14:13, 3 December 2015 (UTC)
I have kept this name because this was the first high-rated game I programmed. I would not make the bots fight; I didn't find any other username available, and I searched for many. I really won't make the bots fight. BOTFIGHTER (talk) 14:24, 3 December 2015 (UTC)

Roulette bot

Could someone create a bot that follows a betting method on a roulette website, please? The website is www.csgoskins.net.

- step 1: Bet 1/1023 of the credits I have on black
- step 2: - if I won: bet 1/1023 of the credits I have on red
          - if I lost: bet 1/511 of the remaining credits I have on black
            - if I lost again: bet 1/255 of the credits I have on black
              - if lost again: 1/127 of the credits on black
                - if lost again: 1/63 of the credits on black
                  - if lost again: 1/31 of the credits on black
                    - if lost again: 1/15 of the credits on black
                      - if lost again: 1/7 of the credits on black
                        - if lost again: 1/3 of the credits on black
                          - if lost again: all of the remaining credits on black

So basically, if I win, restart the method on the other color. If I lose, double the bet on the same color until it wins, then start again on the other color. I have no idea how difficult this kind of bot is to make, since I don't have any programming experience, but I would appreciate it if someone would do it for me. Thank you for the help. — Preceding unsigned comment added by Neate (talkcontribs) 18:23, 30 December 2015 (UTC)
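As an aside on the math: the staking plan above is the classic Martingale progression in disguise. Each bet after a loss is exactly double the previous one, and ten straight losses consume the whole bankroll. A minimal sketch (hypothetical function name) makes that concrete:

```python
from fractions import Fraction

def martingale_bets(bankroll, max_rounds=10):
    """Bet sizes for the staking plan described above: after k straight
    losses, stake 1/(2**(max_rounds - k) - 1) of the remaining credits."""
    bets, remaining = [], Fraction(bankroll)
    for k in range(max_rounds):
        bet = remaining / (2 ** (max_rounds - k) - 1)
        bets.append(bet)
        remaining -= bet
    return bets
```

With a 1023-credit bankroll the bets come out as 1, 2, 4, ..., 512: every loss doubles the stake, and the ten bets together sum to the entire bankroll, so a run of ten losses (roughly one sequence in a thousand on a fair even-money bet, and more often once the house edge is counted) wipes you out.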

  Not done This page is for requesting bots to do work on this site. Hasteur (talk) 18:26, 30 December 2015 (UTC)
@Neate: would be well advised to read Martingale (betting system) and keep their money. All the best: Rich Farmbrough, 18:35, 30 December 2015 (UTC).
Beat you to it![4] HighInBC 18:37, 30 December 2015 (UTC)

Removal of {{Start date}} from {{Singles}} template

It has become common practice in album articles to use {{Start date}} in the {{Singles}} add-on to {{Infobox album}}. Per Template:Start date/doc: "This purpose of the {{start date}} template is to return the date (or date-time) that an event or entity started or was created. It also includes duplicate, machine-readable date (or date-time) in the ISO date format (which is hidden by CSS), for use inside other templates (or table rows) which emit microformats. It should only be used once in each such template and should not be used outside such templates." i.e. {{Start date}} should only be used in album articles for the album release date, not single release dates. It would be nice to have a bot to clean this up, as this error is currently in who knows how many articles. Chase (talk | contributions) 16:44, 5 July 2015 (UTC)

While we're at it, the bot that would do this should also remove {{Start date}} from AltDate in {{Episode list}}. nyuszika7h (talk) 19:32, 5 September 2015 (UTC)
@Chasewc91 and Nyuszika7H:   BRFA filed  Hazard SJ  03:46, 16 December 2015 (UTC) (Edit: Pinged  Hazard SJ  16:54, 16 December 2015 (UTC))

Dead links in external-links sections are useless; we provide the links for additional reading, not for citations, so if you can't access them, they're pointless — they always need to be fixed or removed. Could a bot go through Category:All articles with dead external links and record ones with dead links in the EL sections, either by adding a new category (e.g. Category:Articles with dead links in External Links sections, or something of the sort) or by listing them on a tracking page? I'm imagining that it opens each page, finds each occurrence of {{dead link}} or redirects thereto, and records the ones in which one or more of these templates appears below ==External links== (or == External links ==) and above the next set of equals signs. I'm asking that the bot only record these pages, without doing anything else, because fixing or removing these links is a CONTEXTBOT situation. Nyttend (talk) 01:14, 27 November 2015 (UTC)
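The per-page check described above (find {{dead link}} between the ==External links== heading and the next heading) can be sketched like this. The function name is hypothetical, and a real bot would also need to resolve redirects to {{dead link}} rather than match only the canonical name:

```python
import re

def has_dead_el_link(wikitext):
    """Return True if a {{dead link}} tag appears in the
    ==External links== section, before the next heading."""
    heading = re.search(r"^==\s*External links\s*==\s*$", wikitext,
                        re.MULTILINE | re.IGNORECASE)
    if not heading:
        return False
    section = wikitext[heading.end():]
    # Truncate at the next set of equals signs (any heading level).
    nxt = re.search(r"^==", section, re.MULTILINE)
    if nxt:
        section = section[:nxt.start()]
    return re.search(r"\{\{\s*dead link", section, re.IGNORECASE) is not None
```

Pages where this returns True would simply be recorded (categorized or listed), leaving the actual fixing to humans per CONTEXTBOT.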

Actually, Cyberbot is approved to do more than that, and will plow through the dead links and try to fix them. It's still in development, but the functionality has been tested and approved.—cyberpowerChat:Online 23:55, 29 December 2015 (UTC)

Category:AfD debates relisted 3 or more times removal

Per the discussion (and background) at Wikipedia:Administrators'_noticeboard#Category:AfD_debates_relisted_3_or_more_times, can we get a bot set up to check Category:AfD debates relisted 3 or more times and remove the category from closed discussions? I had been doing this every few days using AWB, but would prefer to have something automated do it. There was talk of getting an AfD closing script to do it; however, not everyone uses the same script, or a script at all. Much obliged. --kelapstick(bainuu) 21:20, 3 December 2015 (UTC)

  BRFA filed  Hazard SJ  06:25, 4 December 2015 (UTC)