User talk:WereSpielChequers/Invisible flagged revisions

Deferred changes

Happy New Year, WereSpielChequers. You'll be aware of Deferred changes, which is on its way, although I haven't seen a timetable for implementation. It's related to your idea in that possibly problematic edits are put into Pending Changes for review, and can be deferred either "actively", like pending changes are now, or "passively", meaning they go live but are reviewed afterwards, like your "invisible" concept. Edits by anybody will be marked for deferred changes by edit filters, by bots such as ClueBot, or by ORES ratings. Your idea is different, of course, in that all edits by IPs and new editors would come up for post hoc review. I liked the aspect that patrollers could see where an edit had already been checked and found OK. However, I wonder whether it wouldn't be best to hold back until deferred changes comes on stream and we see how that's working out: Noyster (talk), 23:19, 1 January 2017 (UTC)
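
For readers unfamiliar with the distinction, a minimal sketch of that active/passive triage in Python follows; the signal names and thresholds are invented for illustration, not taken from the deferred-changes design:

```python
from enum import Enum

class Deferral(Enum):
    NONE = "none"        # edit goes live with no review queued
    PASSIVE = "passive"  # edit goes live but is queued for post-hoc review
    ACTIVE = "active"    # edit is held back until a reviewer approves it

def triage(filter_hit: bool, bot_flagged: bool, ores_damaging: float) -> Deferral:
    """Choose a deferral mode from automated signals (edit filters,
    bots such as ClueBot, ORES ratings). Thresholds are placeholders."""
    if filter_hit or bot_flagged or ores_damaging > 0.9:
        return Deferral.ACTIVE
    if ores_damaging > 0.5:
        return Deferral.PASSIVE
    return Deferral.NONE
```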

Trusted users

I don't understand how 'trusted users' are marked. How do we know someone is a trusted user? Is it something you apply for, something you earn by number of edits, or what? Peter Damian (talk) 18:44, 14 March 2017 (UTC)

My proposal is the second column in User:WereSpielChequers/Invisible_flagged_revisions#Userrights - Reviewers, Autopatrollers and admins. Those are the three userrights where referencing of content is now expected. OK, some admins have been grandfathered in, but if an admin who only got the userright in 2003-2007 as a "good vandalfighter" now started creating problematic content, they'd probably be desysopped. Additional Reviewers could easily be appointed. There is a potential gap for someone whose own content was impeccable but who didn't want, or wasn't trusted with, the reviewer right. But I'm loath to propose yet another userright, and I quite like the idea that if you create referenced content we should by default trust you to approve referenced content by others. I didn't envisage it being something one applied for, more that admins would be on the lookout for potential reviewers if this came in and we had a backlog of edits needing review. I think that the status should not be something you earn simply by number of edits, as a raw edit count could just be vandalism reversion. I would like to have a mechanism whereby anyone with a certain ratio of edits verified by others is automatically deemed a trusted user, and I've added such a proposal to the draft. ϢereSpielChequers 06:55, 31 March 2017 (UTC)
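
As a rough sketch of that ratio mechanism in Python: the function name, minimum sample size and thresholds below are all invented for illustration rather than part of the proposal.

```python
def is_auto_trusted(verified_edits: int, reviewed_edits: int,
                    min_reviewed: int = 50, min_ratio: float = 0.95) -> bool:
    """Deem an editor trusted once enough of their reviewed edits have
    been flagged as verified by existing reviewers.

    Requiring a minimum number of reviewed edits stops the ratio being
    gamed with a handful of trivial edits; both thresholds are placeholders.
    """
    if reviewed_edits < min_reviewed:
        return False
    return verified_edits / reviewed_edits >= min_ratio
```
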
The proposed system seems to be gameable, but maybe I missed something. Is there a way for a trusted user to lose that rating by being caught adding bad information? This is tricky to identify automatically, because it is easy to find a reference for POV pushing, spam and other garbage, and also easy to add references which do not support the statement they are used with. Also, a reference may sometimes support a claim in a way that is not immediately obvious, so recognising that the claim is supported may require some understanding of the topic. · · · Peter (Southwood) (talk): 07:22, 16 April 2018 (UTC)
Everyone can make a mistake, and an honest mistake or two doesn't, or shouldn't, lose the community's trust. But userrights such as Reviewer, Autopatroller and admin can all be revoked, and sometimes have to be taken away. However, it is quite rare for such a trusted user to lose our trust for adding bad information. By the time you have one of those three discretionary rights you have had to make good contributions to the pedia - that isn't as easy to game as confirmed or even extended confirmed. You might still be non-neutral, an edit warrior, or just very uncivil, but those are not problems that this proposal addresses. This proposal addresses two separate questions: "is this edit vandalism?" and "can I verify this edit?". I think that if we implement it we will be able to close a loophole in our current anti-vandalism efforts, but as for verifying all new edits as factual, the best this could achieve would be to reduce the size of the problem, to quantify it, and hopefully to focus our attention on people who are adding lots of information but who haven't yet passed the test of having a certain number of reviewers flag one of their edits as verified. ϢereSpielChequers 11:52, 16 April 2018 (UTC)
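
Keeping those two questions separate suggests a per-revision record with two independent flags. A minimal sketch in Python, with all names invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class RevisionReview:
    """Review state for one revision, tracking the proposal's two
    separate questions independently."""
    rev_id: int
    not_vandalism_by: set = field(default_factory=set)  # "is this edit vandalism?"
    verified_by: set = field(default_factory=set)       # "can I verify this edit?"

    def checked_for_vandalism(self) -> bool:
        return bool(self.not_vandalism_by)

    def verified(self) -> bool:
        return bool(self.verified_by)
```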

Totally viable proposal

There are times when users or circumstances need extra scrutiny. Since this proposal was drafted, Wikimedia projects have begun to use much more automated review. Automated review keeps getting better, and there are now multiple kinds of it; it can also detect cases where human review is merited. A flagging system can tag items for further consideration.
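
For example, one form of automated review is machine-learned edit-quality scoring. A minimal sketch of using such a score to tag an edit for human consideration, assuming the ORES v3 REST response shape for the English Wikipedia "damaging" model (ORES is being superseded by Lift Wing, so treat the endpoint as illustrative; the threshold is arbitrary):

```python
import requests

ORES_URL = "https://ores.wikimedia.org/v3/scores/enwiki/"

def flag_for_human_review(rev_id: int, threshold: float = 0.7) -> bool:
    """Return True when the 'damaging' model thinks a revision is
    likely enough to be damaging that a human should look at it."""
    resp = requests.get(ORES_URL, params={"models": "damaging", "revids": rev_id})
    resp.raise_for_status()
    score = resp.json()["enwiki"]["scores"][str(rev_id)]["damaging"]["score"]
    return score["probability"]["true"] >= threshold
```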

Flags can also be a social step toward inviting in editors with dubious profiles, such as those who use VPNs. Many VPN users are vandals, but some legitimately need privacy, and right now we have no way of screening good conduct from bad. Thanks for posting this idea. Bluerasberry (talk) 18:25, 1 August 2024 (UTC)

Thanks, I'm a bit snowed under in real life at the moment and trying not to take on a big commitment like steering this through an RFC. But every time I go looking for old vandalism I find examples that have obviously slipped past us because our current system doesn't tell us which IP edits have been looked at twenty times and which have not been patrolled at all. ϢereSpielChequers 19:42, 1 August 2024 (UTC)
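
The bookkeeping that comment implies is small: record who has looked at each revision, then surface the revisions nobody has. A minimal sketch in Python, with all names hypothetical:

```python
from collections import defaultdict

# rev_id -> set of patrollers who have looked at that revision
patrols = defaultdict(set)

def record_patrol(rev_id: int, patroller: str) -> None:
    patrols[rev_id].add(patroller)

def never_patrolled(rev_ids):
    """Surface the edits nobody has checked, instead of letting twenty
    patrollers independently re-check the same popular diffs."""
    return [r for r in rev_ids if not patrols[r]]
```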