Wikipedia talk:Community health initiative on English Wikipedia
This is the talk page for discussing improvements to the Community health initiative on English Wikipedia page.
Dealing with clueless reports
This is a welcome initiative, but it is likely that some issues will be reported that a quick investigation should show are baseless. Standard community procedures include using WP:HERE as a reference point. It would be lovely if all users could be welcomed and cosseted; however, experience shows that encouraging some users is very disruptive to community health, as it often wastes a great deal of useful editors' time and energy. I have seen several cases where editor X is reported for abusive language directed against editor Y, and a superficial glance confirms that X was abusive and Y was polite. However, looking at the underlying issues may show that Y should be indefinitely blocked for exhausting the patience of many editors with clueless attempts to subvert the purpose of Wikipedia, and it was regrettable but quite understandable that X exploded with frustration because the community often cannot properly handle civil POV pushing. Johnuniq (talk) 00:28, 3 May 2017 (UTC)
- You said that very well. I get a lot of this thrown at me. This initiative is important and very well-intentioned but the execution could go wrong in so many ways. Here's to hoping it is executed well and wisely. Jytdog (talk) 18:13, 3 May 2017 (UTC)
- +1 Although 'clueless' reports doesn't seem like quite the right section title.
- Sometimes complainers are simple victims of abuse, sometimes complainers are the ones being abusive because their edits are rejected for policy reasons, and sometimes good editors can be driven to heated frustration by polite but aggressively disruptive individuals. When you see a complaint, you really need to start with a mindset that presumes a 50-50 chance as to which side is the problem.
- The WP:HERE / WP:NOTHERE links are important. We cite NOTHERE a lot. Someone's reason for being here is an overriding factor in whether they will be successful or disruptive. People here to fix the world often march towards a block, and may be on either end of a complaint. Alsee (talk) 12:43, 16 May 2017 (UTC)
- @Johnuniq and Alsee: IMHO a bigger problem is dealing with clueless replies. I recently reported a series of personal attacks to AN (a certain editor publicly accused me of bad faith, and told me to f** off), and the first admin reply was essentially 'you provoked a bully, so it is your fault, leave the discussion, let him win and don't waste our time'. A few more people commented, and nobody bothered to issue so much as a warning. It is my conclusion, based on 10 years here and talking to many people about their experiences, that most people who are harassed do not bother to report it because they are fed up with admins/the community ignoring their complaints. There is an assumption that complaints about harassment are unlikely to generate even so much as a warning; they are just a stressful waste of time where the victim complains and the community/admins criticize them for not having a thick enough skin. The complaints may get some traction if it is some IP/red link doing the harassing, but the more established the harasser, the less willing people are to ruffle their feathers. --Piotr Konieczny aka Prokonsul Piotrus| reply here 05:14, 17 May 2017 (UTC)
- I take it that you are referring to this thread. If so, it is interesting reading with regard to the mission of this initiative. Lots of things like that happen. Lugnuts' behavior was subpar, but not actionable, which is why nothing came of it. Turning to the folks doing this initiative... if this initiative is going to try to eliminate the kind of behavior that Piotrus complains of, that would be... mmm, extremely controversial. Reading this would probably be instructive. Jytdog (talk) 05:43, 17 May 2017 (UTC)
Scope questions
A few questions about the intended scope. S Philbrick(Talk) 16:15, 3 May 2017 (UTC)
- Hello S Philbrick,
- Our intention is for the work of the Anti-Harassment Tools team to happen in partnership with the Wikimedia community, so we welcome your questions and thoughts. Anti-Harassment Tools team 22:03, 5 May 2017 (UTC)
People
The initiative reads as if the people in scope are editors, both registered and unregistered, both instigators and recipients. It sounds like subjects of biographies are not in scope. At OTRS, we often get emails in which the main complaint is harassment of the subject of an article. Is this in scope? S Philbrick(Talk) 16:15, 3 May 2017 (UTC)
- The team’s work will include looking at harassment as defined by WMF and community policies. So, yes, subjects of BLPs who are harassed on wiki (with images or text) would be included, like other people, as we design tools for detection, reporting, and evaluation of harassment. Anti-Harassment Tools team 22:03, 5 May 2017 (UTC)
Nexus
The initiative talks about "Wikipedia and other Wikimedia projects". While it is obvious that any harassment within the walls of Wikimedia projects is in scope, and there are examples of harassment that are clearly not in scope (the baseball player taunted by fans), there are some gray areas. It might be helpful to clarify the boundaries. For example:
- Twitter - Seems obvious it should be out of scope, yet one of the more famous incidents involves harassment of a Wikipedia editor on Twitter
- Wikimedia Mailing lists - I haven't seen a lot of harassment there but it may be useful to know whether this is in scope or out of scope
- IRC - I'm not a very active IRC user but I know some incidents have occurred there. I think IRC is officially considered not part of Wikimedia but that doesn't preclude the possibility that it could be in scope for this initiative
- Email - Might have to distinguish between at least three categories: emails sent as part of the "email this user" functionality on the left sidebar, email sent where the sender identified an email address from an editor's user page, and email sent in which there is no obvious nexus to Wikimedia
- Facebook - I assume Wikipedia Weekly is not officially part of Wikimedia, but it may have a quasi-official status.
S Philbrick(Talk) 16:15, 3 May 2017 (UTC)
- A primary area of emphasis for the Community health initiative will be WMF wikis at the moment, because there is a significant amount of catching up we need to do, specifically in the four focus areas — Detection, Reporting, Evaluation & Blocking. In addition to the work on English Wikipedia, another area of emphasis will be a review of the global tools used by functionaries (stewards, checkusers, oversighters, etc.)
- As for harassment that occurs off wiki — it’s on everyone’s minds — our team has talked about this topic multiple times every week. What is our responsibility? How can we equip our users with the resources to protect and defend themselves elsewhere? This topic will continue to be discussed, including in future community consultations.
- Some areas outside of general wiki space where harassment work is happening now are:
- MediaWiki technical spaces (including IRC, Phabricator, and official mailing lists) are covered by the Wikimedia Code of Conduct, which was ratified just last month. We may build software to cover these issues as well but have nothing planned.
- The EmailUser feature has been highlighted by the community as a means for harassment. We are definitely looking into how we can improve the EmailUser feature to better prepare users who may currently be unaware of how it operates.
- Wikimedia Foundation affiliated events (conferences, hackathons, etc.) are covered by the Wikimedia Foundation Friendly Space Policy. There is nothing currently planned, but the Anti-harassment team is working closely with the Support and Safety team, and there may be work on some type of tools to aid in reporting or investigating harassment at these off-wiki events.
- Anti-Harassment Tools team 22:03, 5 May 2017 (UTC)
Timing
It is understandable that the initial scope may be limited, but is this envisioned as a "walk before we run", with the possibility that the scope may be expanded eventually, or are the answers to the scope questions intended to be semi-permanent? (Meaning, of course, that things can change over time, but is expansion part of the planning or not?) --S Philbrick(Talk) 16:15, 3 May 2017 (UTC)
- As we make it more difficult for harassment to occur in its current forms, we know bad actors will adapt and find new ways to push their agendas and attempt to drive people away from the community. We’re funded through FY18-19 with the Newmark Foundation Grant but don’t have a firm understanding of where this work will go past that. The 2030 Movement Strategy will certainly inform our plans!
- Well-prepared software development teams should always be open to adjusting any plans along the way. What we learn with every feature we build will inform our future decisions, roadmap, and projects. And we say this in all sincerity — we need the community to help us with this learning and with roadmap adjustments.
- Again, thank you for your questions and comments, Caroline, Sydney, & Trevor of the Anti-Harassment Tools team. (delivered by SPoore (WMF) (talk), Community Advocate, Community health initiative (talk) 22:03, 5 May 2017 (UTC))
- Thanks to all for the detailed answers to my questions.--S Philbrick(Talk) 17:51, 8 May 2017 (UTC)
ProcseeBot
As part of the detection section we have "Reliability and accuracy improvements to ProcseeBot". I'd like to comment on this as an admin who deals with a lot of proxies on enwiki. ProcseeBot is actually incredibly reliable and accurate. There's always going to be room for improvement in relation to the input sources to be checked, but its accuracy and reliability in doing those checks cannot be disputed. The only time it really comes to admins' attention is unblock requests, where typically the block has lasted a bit too long.
ProcseeBot is invaluable but limited to detecting HTTP proxies, and its main value is really in blocking zombie proxies. These are often used by spambots in particular, but they are not the main types of proxies used for abuse. There's an increasingly wide range of VPNs, cloud services, web proxies, and other anonymity services which are often almost impossible to detect automatically. Perhaps due to ProcseeBot's effectiveness, but perhaps not, these are the types of proxies we see causing the most anonymity abuse.
On enwiki, as well as globally at meta, we tend to block whole ranges of them when they become apparent. This can be done fairly easily using whois and reverse DNS for some addresses. For others, such as Amazon, The Planet, Leaseweb, OVH, and Proxad, this is not so straightforward, as it will often hit a lot of collateral. More concerning is the recent rise of centrally organised dynamic VPN proxies, such as vpngate, which operate in a similar way to Tor but without the transparency.
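For illustration only, the reverse DNS part of that triage can be sketched in a few lines of Python; the provider keywords below are placeholder assumptions, not an actual blocklist, and real investigation would also lean on whois data.

```python
import socket
from ipaddress import ip_address

# Hostname fragments that suggest a hosting/VPN provider rather than a
# residential ISP. Illustrative placeholders only -- a real list would be
# community-maintained and cross-checked against whois.
HOSTING_HINTS = ("amazonaws", "leaseweb", "ovh", "proxad", "theplanet")

def looks_like_hosting(ip: str) -> bool:
    """Return True if the IP's reverse DNS suggests a hosting provider."""
    ip_address(ip)  # raises ValueError on malformed input
    try:
        hostname, _aliases, _addrs = socket.gethostbyaddr(ip)
    except socket.herror:
        return False  # no PTR record; inconclusive
    return any(hint in hostname.lower() for hint in HOSTING_HINTS)

if __name__ == "__main__":
    for candidate in ("203.0.113.7", "198.51.100.23"):  # documentation-range IPs
        print(candidate, looks_like_hosting(candidate))
```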
The team could always look at nl:Gebruiker:RonaldB/Open proxy fighting, which (last time I looked) is active on multiple wikis and pro-active in blocking - and unblocking - various types of proxies which are not detected by ProcseeBot. Its output at WP:OPD is a good source for finding trolls.
I'm going to ignore for now the question of whether enwiki wants to totally ban all web servers and anonymising proxies from editing - policy says not. And a lot of harassment comes from hugely dynamic ranges and not proxies, where what is really needed is WMF pressure on ISPs. Anyway, good luck thinking about this, just don't focus too much on ProcseeBot and checking for HTTP proxies. -- zzuuzz (talk) 19:26, 3 May 2017 (UTC)
- @Zzuuzz: Thanks for the feedback. This is really useful. For ISPs that frequently re-assign IPs, I wonder if there's really much we can do. We recently rolled out cookie blocking, but it's trivial to work around. There has been talk of user-agent searching in CheckUser, but at a certain point we start getting into creepy user-tracking/profiling/fishing territory. If you have any further thoughts on that, please let us know. Ryan Kaldari (WMF) (talk) 23:34, 4 May 2017 (UTC)
- If I were looking at this, with infinite resources, I'd look at reverse DNS (or AS) and geolocation in relation to page- or topic-level admin controls. What I'd really like to hear about (but not necessarily now) is attempts from the WMF to put pressure on ISPs in relation to ToS-breakers and outright illegal abusers. -- zzuuzz (talk) 18:26, 5 May 2017 (UTC)
Tools - not just for admins
The page M:Community_health_initiative/User_Interaction_History_tool was written exclusively in terms of administrators, but keep in mind that tools have various valuable uses for non-admins. One of those uses is to compile evidence to present to admins. Another is to review suspected sock puppets, which may or may not be harassment related. Alsee (talk) 08:55, 16 May 2017 (UTC)
- That's an interesting proposal for a tool, and it will be interesting to see what response there is to your edits on that page. There has not been much response here. I hoped for at least an acknowledgment that my first post above was read. Johnuniq (talk) 09:59, 16 May 2017 (UTC)
- Johnuniq, I'll support your 'clueless reports' section above, and ping SPoore (WMF) to hopefully comment there and/or my Dashboard section below. I'm not much concerned with a response on this section. Alsee (talk) 11:16, 16 May 2017 (UTC)
- Hi Johnuniq and Alsee, we definitely read your first posts. Before we change anything or add new tools we will have discussions with the community. One of my primary jobs is to identify the various stakeholders and to make sure that a broad group of people give input into the work of the Anti-harassment tool team. We'll look at your comments and then give you a more substantive answer about the Dashboard and Interaction History tool. Cheers, SPoore (WMF) (talk) , Community Advocate, Community health initiative (talk) 13:55, 16 May 2017 (UTC)
Dashboard
The main page here mentions that you are evaluating "a dashboard system for wiki administrators or functionaries to help them manage current investigations and disciplinary actions", and there's a link about some of our existing dashboards.
Were you considering programming some sort of dashboard? Tools like the Interaction Analyzer let us mine for information, and we pull everything into wikitext for community processing. We've created an entire integrated wiki-ecosystem where we create, modify, and abandon workflows on the fly. If you look at all of the existing "dashboard" examples, they are all simply wikipages. If you're building data-retrieval tools like the Interaction Analyzer and trying to develop policy and social innovations, great. But please ping me if you were thinking about coding some sort of dashboard or "app" to manage the workflow. That is a very different thing. I'd very much like to hear what you had in mind, and discuss why it (probably) isn't a good idea. Some day we may develop a great M:Workflows system, but trying to build a series of one-off apps for various tasks is the wrong approach. Alsee (talk) 10:42, 16 May 2017 (UTC)
- Hi Alsee, no firm decisions have been made yet. We're looking for feedback from people who use the current tools in order to make them more useful. The team is still hiring developers. Look for invitations to discussions as we begin prioritizing our work and making more concrete plans. SPoore (WMF) (talk) , Community Advocate, Community health initiative (talk) 14:16, 16 May 2017 (UTC)
- I fear this is going to become one of those things where "we have hired people and we need to do something with them." Jytdog (talk) 15:08, 16 May 2017 (UTC)
- I invite and request that anyone ping me if there is discussion of building any "internal" tools like the NewPagePatrol system or software-managed "dashboards", rather than external tools like ToolLabs. I would very much want to carefully consider what is proposed to be built. Normally I closely track projects like this, but my time is already strained on various other WMF projects. Alsee (talk) 23:40, 16 May 2017 (UTC)
Invitation to test and discuss the Echo notifications blacklist
Hello,
To answer a request from the 2016 Community Wishlist for more user control of notifications, the Anti-harassment tools team is exploring changes that would add a per-user blacklist to Echo notifications. This feature allows more fine-tuned control over notifications and could curb harassing notifications. We invite you to test the new feature on beta and then discuss it with us. For the Anti-harassment tools team, SPoore (WMF) (talk) , Community Advocate, Community health initiative (talk) 15:18, 2 June 2017 (UTC)
Anti-Harassment Tools prioritization
Good Tuesday, Wikipedia!
We owe you an update on what the Community health initiative and Anti-Harassment Tools team have been working on over the past month:
- We now have a developer!
David Barratt joined us on May 30 as our Full Stack Developer. He’s already tearing through all the onboarding tasks and is looking forward to building tools for you to use!
- Echo notifications blacklist
We’ve posted this in a few places already but we again wanted to share a new feature that found legs in the 2016 Community Wishlist and Vienna hackathon last month: Echo notifications blacklist. Have a test in our beta environment and share your thoughts!
- Prioritizing our efforts
The meat of what I’d like to talk about today is how the Anti-Harassment Tools team will prioritize our work.
Now that David is on board, we’re nearly ready to start putting the digital pen to digital paper and building some tools to help the Wikipedia community better deal with harassment.
There are a lot of opportunities to explore, problems to solve, and projects to tackle. Part of my job as product manager is to prioritize our backlog of opportunities, problems, and projects in a logical and efficient way. We’re using Phabricator to track our work on the Anti-Harassment Workboard in the “Prioritized Project/Opportunity/Problem Backlog” column. It is prioritized from top to bottom, and it’s natural for items near the top to be more fleshed out while lower items may just be a few words or stray thoughts.
I take many things into consideration during this prioritization process: What is designed, defined, and ready for development? What will provide the most value to our users? What has momentum and strong support? What can we accomplish given our time frame and developer capacity? I’ve made a full list of all prioritization considerations at Wikipedia:Community_health_initiative#Prioritization.
The English Wikipedia community’s input will be extremely important in this process. We need to know if there’s a more logical order to our prioritization. We need to know if we’re forgetting anything. We need to know if the community is ready for what we’re planning on building.
Here’s how we invite you to participate:
- At the beginning of every quarter we’ll reach out for input here on English Wikipedia (amongst other places) to discuss the top 5-10 items prioritized for the next three months.
- Outside of this, if you’d like to make a recommendation for reprioritization, please bring it to us in whatever method you’re most comfortable: on a talk or user talk page, via email, or on our Phabricator tickets.
- If you’d like to propose a new feature or idea, leave us a note on wiki, send me an email, or create a Phabricator ticket (tagging it with Anti-Harassment.)
Currently the top few items on our backlog include some tools for the WMF’s Support and Safety team (T167184), the Notifications Blacklist (T159419), and AbuseFilter (T166802, T166804, & T166805.) This is likely enough work to take us through October, but, time permitting, we’ll look into building page and topic blocks (T2674) and the User Interaction History tool (T166807.)
So please take a look at our backlog and let us know what you think. Is the sequencing logical? Are there considerations we’ve missed? Are there more opportunities we should explore?
Thank you!
— TBolliger (WMF) (talk) 20:44, 13 June 2017 (UTC) on behalf of the Anti-Harassment Tools team
- What's "a bukkteam"? MER-C 23:47, 13 June 2017 (UTC)
- 😆 No idea! A rogue typo that made it through. Thanks for pointing it out — I've updated the sentence to use human English. — TBolliger (WMF) (talk) 23:50, 13 June 2017 (UTC)
- According to this article, 'Wikipedia' is trying to adapt Google's (Jigsaw division) 'Perspective' AI software to 'tackle personal attacks against its volunteer editors'. I have done a brief search but I can't find a mention of this. Is it one of the anti-harassment tools under a different guise? Only in death does duty end (talk) 15:13, 14 June 2017 (UTC)
- @Only in death: Our Research department has been working with Google's Jigsaw on their Perspective AI since mid-2016. You can see the background about the work on meta:Research:Detox and read about some findings on this WMF blog post. We see their research as a great opportunity to learn more about the feasibility of extending ORES functionality to evaluate user interactions. This could manifest as a component of several tools: an extension of the New filters for edit review, a condition for AbuseFilter to check against, or potentially indicators on a User Interaction History tool. — TBolliger (WMF) (talk) 17:14, 14 June 2017 (UTC)
- Sounds reasonable. I'd go for the abuse filter because it works across the board. I would also like to add something like phab:T120740 or custom, callable functions -- we have quite a few filters that ask whether a user is autoconfirmed and editing in the main namespace for instance. The software should perform this check once only per edit. MER-C 02:23, 15 June 2017 (UTC)
- @MER-C: Thank you, and I totally agree about AbuseFilter callable functions. It certainly doesn't make sense to run all filters for non-confirmed users on edits performed by confirmed users (and likewise for other checks, such as namespace, account age, etc.) Our first step is to measure performance so we actually know how much we need to improve it. — TBolliger (WMF) (talk) 18:48, 15 June 2017 (UTC)
User page protection on all WMF projects
@TBolliger (WMF): Re "propose a new feature", I do not know if you are familiar with some of the long-term harassment from LTAs. I have had extensive communication with a couple of editors who have received egregious threats via Wikipedia email and from editing of user or user talk pages by socks of one LTA with an identity known to the WMF. Another request has just been made here for a way to have user pages protected on all WMF sites. Many more details are available. Per WP:DENY it might be best to minimize discussion at that talk. Johnuniq (talk) 01:24, 15 June 2017 (UTC)
- @Johnuniq: Yes, I read over the LTA pages a few months ago and I definitely see some of our backlog items as addressing this: T166809 is to build "Cross-wiki tools that allow stewards to manage harassment cases across wiki projects and languages", while we could empower individual users to protect their pages or mute specific users via T164542, "General user mute/block feature". I see that there might be a need for both tactics, as individual users will be able to react more quickly than stewards or admins in small-scale cases. — TBolliger (WMF) (talk) 18:48, 15 June 2017 (UTC)
Exploring how the Edit filter can be used to combat harassment
The Edit filter (also known as AbuseFilter) is a feature that evaluates every submitted edit, along with other logged actions, and checks them against community-defined rules. If a filter is triggered, the edit may be rejected, tagged, or logged, and the filter may display a warning message and/or revoke the user’s autoconfirmed status.
Currently there are 166 active filters on English Wikipedia. One example is filter #80, “Link spamming”, which identifies non-autoconfirmed users who have added external links to three or more mainspace pages within a 20-minute period. When triggered, it displays this warning to the user but allows them to save their changes. It also tags the edit with ‘possible link spam’ for future review. It’s triggered a dozen times every day and it appears that most offending users are ultimately blocked for spam.
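The filter itself is written in AbuseFilter's own rule language, but purely as an illustration of the logic just described (non-autoconfirmed user, external links added to three or more distinct mainspace pages inside 20 minutes), here is a rough Python sketch. The function and data structures are assumptions for the example, not anything in the AbuseFilter codebase.

```python
import re
import time
from collections import defaultdict, deque

LINK_RE = re.compile(r"https?://", re.IGNORECASE)
WINDOW_SECONDS = 20 * 60   # the 20-minute window described above
PAGE_THRESHOLD = 3         # three or more distinct mainspace pages

# username -> recent (timestamp, page_title) pairs for link-adding edits;
# a stand-in for AbuseFilter's built-in throttling.
recent_link_edits = defaultdict(deque)

def is_possible_link_spam(user, user_groups, page_ns, page_title, added_text, now=None):
    """Approximate the logic of a 'link spamming' throttle filter."""
    now = time.time() if now is None else now
    if "autoconfirmed" in user_groups or page_ns != 0:
        return False                      # only non-autoconfirmed mainspace edits
    if not LINK_RE.search(added_text):
        return False                      # no external link added
    history = recent_link_edits[user]
    history.append((now, page_title))
    while history and now - history[0][0] > WINDOW_SECONDS:
        history.popleft()                 # drop edits outside the window
    return len({title for _, title in history}) >= PAGE_THRESHOLD
```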
AbuseFilter is a powerful tool for handling content issues and we believe it can be extended to handle more user conduct issues. The Anti-Harassment Tools software development team is looking into three major areas:
- 1. Improving its performance so more filters can run per edit
We want to make the AbuseFilter extension faster so more filters can be enabled without having to disable other useful filters. We’re investigating its current performance in T161059. Once we better understand how it is currently performing we’ll create a plan to make it faster.
- 2. Evaluating the design and effectiveness of the warning messages
There is a filter — #50, “Shouting” — which warns when an unconfirmed user makes an edit to a mainspace article consisting solely of capital letters. (You can view the log if you’re curious about what types of edits successfully trip this filter.) When the filter is tripped, it displays a warning message to the user above the edit window.
These messages help dissuade users from making harmful edits. Sometimes requiring a user to take a brief pause is all it takes to avoid an uncivil incident.
We think the warning function is incredibly important but are curious whether the presentation could be more effective. We’d like to work with any interested users to design a few variations so we can determine which placement (above the edit area, below, as a pop-up, etc.), visuals (icons, colors, font weights, etc.), and text most effectively convey the intended message for each warning. Let us know if you have any ideas or if you’d like to participate!
- 3. Adding new functionality so more intricate filters can be crafted.
We’ve already received dozens of suggestions for functionality to add to AbuseFilter, but we need your help to winnow this list so we can effectively build filters that help combat harassment.
The first filter I propose would warn users when they publish blatantly aggressive messages on talk pages. The user would still be allowed to publish their desired message but the warning message would give them a second chance to contemplate that their uncivil words may have consequences. Many online discussion websites have this functionality, to positive effect. The simple version would be built to detect words from a predefined list, but if we integrated with ORES machine learning we could automatically detect bad faith talk page edits. (And as a bonus side effect, ORES could also be applied to content edit filters.)
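To make the "predefined list" version concrete, here is a toy Python sketch of such a check; the word list, namespaces, and function name are placeholders for illustration, and a production filter (or an ORES-based classifier) would be far more nuanced.

```python
import re

# Placeholder terms -- in practice the community would maintain this list,
# and an ORES-style classifier could replace or supplement it.
AGGRESSIVE_TERMS = ("idiot", "moron", "shut up", "get lost")
TALK_NAMESPACES = {1, 3, 5}  # Talk, User talk, Wikipedia talk

def needs_civility_warning(page_ns: int, added_text: str) -> bool:
    """Return True when a talk-page addition matches the aggressive word list."""
    if page_ns not in TALK_NAMESPACES:
        return False
    text = added_text.lower()
    return any(re.search(r"\b" + re.escape(term) + r"\b", text)
               for term in AGGRESSIVE_TERMS)

# The editor would see a warning but could still publish the message.
print(needs_civility_warning(3, "shut up and stop reverting me"))  # True
```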
Another filter I propose would log, warn on, or prevent 3RR violations. This filter has been proposed twice before ([1], [2]) but was rejected due to lack of discussion and because AbuseFilter cannot detect reverts. The Anti-Harassment Tools team would build this functionality, as we believe this filter would be immensely useful in preventing small-scale harassment incidents from boiling over.
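AbuseFilter cannot see reverts today, but exact reverts are commonly identified by content hash: a new revision whose SHA-1 matches an earlier revision of the same page restores that earlier state. Here is a rough external sketch of counting such reverts; the input format is an assumption (e.g. rows such as those returned by the API's prop=revisions with rvprop=ids|user|timestamp|sha1), and partial reverts and the policy's exemptions would still need human judgment.

```python
from datetime import timedelta

def count_exact_reverts(revisions, user, window=timedelta(hours=24)):
    """Count `user`'s revisions in the window that restore an earlier
    revision of the page (identical SHA-1), i.e. exact reverts.

    `revisions` is an oldest-first list of dicts with keys
    'user', 'timestamp' (datetime), and 'sha1'.
    """
    if not revisions:
        return 0
    cutoff = revisions[-1]["timestamp"] - window
    seen_hashes = set()
    reverts = 0
    for rev in revisions:
        if rev["sha1"] in seen_hashes and rev["user"] == user and rev["timestamp"] >= cutoff:
            reverts += 1
        seen_hashes.add(rev["sha1"])
    return reverts

# More than three exact reverts by one user in 24 hours would suggest a
# 3RR problem worth flagging for human review rather than auto-blocking.
```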
There are countless other filters that could be created. If you wanted to create a filter that logged, warned, or prevented harassing comments, what would it be? And what functionality would you add to AbuseFilter? Join our discussion at Wikipedia talk:Community health initiative on English Wikipedia/Edit filter.
Thank you, and see you at the discussion!
— The Anti-Harassment Tools team (posted by TBolliger (WMF) (talk) 23:15, 21 June 2017 (UTC))
- Isn't edit warring well out of scope for a project focused on harassment? Jytdog (talk) 00:30, 22 June 2017 (UTC)
- @Jytdog: Not entirely. Many cases of harassment originate from content disputes and edit wars. So this would be a potential way to solve the root cause and not the symptom. — TBolliger (WMF) (talk) 21:52, 22 June 2017 (UTC)
- On 3RR: first, I don't think you could adequately identify the exceptions to the 3RR rule algorithmically, and frankly those exceptions are more important than the rule (BLP violations, blatant vandalism, etc.). As for warning, it's a mixed bag. On the one hand, it is definitely positive to warn people who may inadvertently be about to violate 3RR with no intention to edit war at all. But on the other hand, you would want to be very careful that it doesn't encourage the idea that edit warring without violating 3RR is acceptable. Personally, I would focus efforts on long-term abuse that existing filters are proving ineffective against, particularly when it comes to long-term harassment aimed at particular editors. Monty845 02:42, 22 June 2017 (UTC)
- @Monty845: You're right, detecting 3RR is very nuanced, which is why a tag might be more appropriate than a warning. Which filters do you believe are ineffective? (You may want to email me instead of posting them here.) — TBolliger (WMF) (talk) 21:52, 22 June 2017 (UTC)
- I'm genuinely intrigued by what ORES can offer in the way of detecting aggressive talk page edits. However, I recall the mixed results from Filter 219 (as transferred to Filter 1 in August 2009)[3]. Even detecting "fuck off" is not unproblematic. If you take out the aggression, this site is still so different from many other forums and websites. -- zzuuzz (talk) 21:53, 14 July 2017 (UTC)
Interaction review
So this is an idea that I recently considered and it's quite simple:
- You can score another editor on certain qualifications. Picking these is crucial, but some examples could be:
- civility
- expertise
- helpfulness
- visibility/participation
- The scores are 3 levels (down vote, neutral, up vote)
- These scores are private (only the person giving a score can see how he scored a fellow contributor)
- You can change your score
- The scores by others are averaged/eased over time and presented to the user on his user page or something (if you have enough scores, you might be able to go into a timeline and see how the way you are being perceived has changed over time). The score view would be low profile, but be present consistently and continuously (but no pings, etc.).
- The system might sometimes poke you to give a review (you recently interacted with 'PersonB'; can you qualify your experience with this editor?)
The idea would be that this would create insight for the contributor around his own behaviour and the community's perception of said behaviour, hopefully encouraging self-steering/correcting, without introducing blocking, public shaming, etc. I would also add some "course" material alongside the presented score, for those who need help interpreting their score and becoming 'better' community members. —TheDJ (talk • contribs) 11:50, 22 June 2017 (UTC)
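One way to read "averaged/eased over time" is an exponential moving average, so older ratings fade as newer ones arrive. A minimal sketch, with an arbitrary smoothing factor chosen purely for illustration:

```python
def eased_score(ratings, alpha=0.2):
    """Exponentially weighted average of ratings given in chronological order.

    Each rating is -1 (down vote), 0 (neutral), or +1 (up vote); newer
    ratings count more, so the score drifts as perception changes.
    """
    score = 0.0
    for rating in ratings:
        score = (1 - alpha) * score + alpha * rating
    return score

# Trends downward as recent votes turn negative:
print(eased_score([1, 1, 0, -1, -1]))
```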
- BTW, there might be downsides to this. For instance, a troll could use it as a 'success' measure. Or influenceable/unstable people might become distraught by their 'score'. These are important elements that need to be taken into account when designing something. —TheDJ (talk • contribs) 11:55, 22 June 2017 (UTC)
- Hello The DJ, the Anti-harassment tool team is definitely looking for more ideas, so thank you for posting this one. Have you seen it (or something similar) in use on another website? I'm also thinking that it could be something that people opt in to, but it could be socialized to be used if it was found to be useful. Also, in order to minimize the bad effects of trolling, it could limit who could do the reviews, and maybe include a blacklist for people who have interaction bans, etc. I'm interested in hearing other thoughts. SPoore (WMF) (talk) , Community Advocate, Community health initiative (talk) 19:22, 22 June 2017 (UTC)
- Haven't seen it anywhere else. It came from a train of thought I had: "Most sites allow you to block and friend other people, but that doesn't really match the bazaar-type of interaction that usually occurs on Wikipedia. So if you cannot ban a user, what can you do?" It's my opinion that the 'forced' coming together of people and their ideas as on Wikipedia is essential to the Wikipedia model, and trying not to break that is one of the bigger challenges when we want to deal with community health. —TheDJ (talk • contribs) 20:21, 23 June 2017 (UTC)
- As I have been reviewing these suggestions for interventions I keep thinking about medicine. There is no intervention that doesn't have adverse effects - for all of these things the potential benefits are going to have to be weighed against potential harms, and there will be unexpected things (good and bad) when it gets implemented - some kind of clinical trial should be conducted in a controlled way to learn about them before something is actually introduced, and afterwards there should be what we call postmarketing surveillance/pharmacovigilance, as sometimes adverse effects don't emerge until there is really widespread use. Jytdog (talk) 15:23, 22 June 2017 (UTC)
- Jytdog, we (the Anti-harassment tool team) agree that there needs to be a variety of types of testing and analysis both before and after the release of a new feature. Luckily we have a Product Analyst/Researcher as part of our team. Our team is still new and we are currently developing best practices and workflows. You can expect to see documentation about the general way that we will research, test, and analyze, as well as specific plans and results for particular features. Right now we are thinking about post-release analysis and data collection for the Echo notifications blacklist feature. T168489.
- Additionally, after the final members of the Community Health Initiative are onboarded (two are starting in early July) we plan to have larger community discussion(s) about definitions, terminology, and policy about harassment (specific to English Wikipedia) as it pertains to this team's work. And we will work with the community to identify measures of success around community health that will inform our work and help determine the type of research, testing, and analysis that we need to do. SPoore (WMF) (talk) , Community Advocate, Community health initiative (talk) 19:22, 22 June 2017 (UTC)
- Great! Jytdog (talk) 22:14, 22 June 2017 (UTC)
YouTube videos and comments get this kind of rating, but for determining what content is promoted. Is the suggestion that we can rate editors across all behavior just once? That seems problematic, as behavior shifts over time and no one has a complete idea of any user's contributions. Perhaps look at how the Thanks system could be tweaked to get to a similar goal. If we could both Thank and Dislike edits, and that translated into a score, that would provide a lot of feedback to users who make unpopular edits or post stupid opinions. Maybe make the Dislikes anonymous and set a filter to catch users that just dislike everything someone does to kill their score. Or allocate dislikes equal to but not more than the number of Thanks given. Something a bit like this happens at geocaching.com, where you can only Favorite 1 in 10 of the caches you find. Legacypac (talk) 20:14, 23 June 2017 (UTC)
- In my idea, you can change your score (re-score) at any time. YouTube is not really the same, since "promotion" is a public effect and my idea would only give a 'private' effect, and that is a crucial part of it. I've considered all these kinds of 'public' feedback, but the problem is that I think you would get a large group of people heavily opposing such a system (the sort that prioritises content/contribution quality over interaction quality). When the system is private and instead focuses on 'awareness', I think that might be an interesting and novel approach that has not been tried very often before and which could be an interesting avenue for research. It could probably even be partly automated with ORES-like AI scoring (then the user's scoring could maybe be used for training the AI, and the AI result might be mixed with the score that is visible to the user being scored in the final graph or something). Just some crazy thoughts. —TheDJ (talk • contribs) 16:06, 26 June 2017 (UTC)
- I struggle with this scoring of other users. In my view this will likely become a tool to quantitate wikipolitics (already a bane of this place) to make them appear "objective". This would be used in all kinds of unattractive ways, including bragging rights. As an example, people run around touting how many GAs or FAs they have been involved with, and the GA/FA process gets distorted by people collecting badges this way. This proposed system has the potential to be abused similarly and with worse effect, especially with regard to negatively rating people. I understand that a goal of this initiative is to quantitate behavior relevant to harassment, and I get that, but making it user-generated is problematic. Jytdog (talk) 17:00, 26 June 2017 (UTC)
- A private badge can easily be faked, because others cannot verify it. As such it is a pointless bragging feature, because the first thing everyone would say is: "sure, but you can fake that". At least, that's my view on it. —TheDJ (talk • contribs) 15:45, 27 June 2017 (UTC)
- I believe there is definitely room in the MediaWiki software to support and strengthen the positive community interactions already occurring on Wikipedia, both to provide constructive user feedback and to allow users to take pride in their accomplishments. And I agree that if we can channel a user's frustration with another into a constructive interaction as opposed to a neutral or incivil interaction, the experience of everybody involved (and the encyclopedia itself) is better off. I also believe a collection of these accomplishments/accolades is a better measure of a user's conduct than just an edit count. The Teahouse, Thanks, barnstars, wikilove, and manually written messages of appreciation show that there is an appetite to celebrate good contributions. The Anti-Harassment Tools team is looking into existing dispute resolution workflows at the moment, but I look forward to exploring preventative tools that encourage these positive interactions. — TBolliger (WMF) (talk) 17:06, 27 June 2017 (UTC)
- It is unclear whether the result is only presented to the user, or if it is publicly visible. If it's publicly visible then the rating inputs need to be tightly controlled, and you're de facto building a website-defining social governance engine. I don't think we want to go there. If the result is only visible to the user, submitting ratings will largely be a waste of time. Ratings will be dominated by people who are motivated (angry) over some particular conflict, and/or people compulsively wasting time on mostly useless ratings. Receptiveness to feedback is almost a defining characteristic of a positive participant vs. a problem individual. Positive participants who see negative ratings will either wisely ignore them, or they will be overly sensitive to them. Problem individuals will likely see bad ratings as more evidence that they're being unfairly attacked. The idea is swell in theory, but it jumps badly between a poor time sink and a de facto governance engine. Alsee (talk) 21:40, 28 June 2017 (UTC)
- I oppose any kind of scoring system as Wikipedia is an encyclopedia not a meter of approval. Esquivalience (talk) 18:28, 18 August 2017 (UTC)
Changes we are making to the Echo notifications blacklist before release & Release strategy and post-release analysis
Hello,
I've posted #Changes we are making to the blacklist before release and #Release strategy and post-release analysis for those interested in the Echo notifications blacklist feature. Feedback appreciated! — TBolliger (WMF) (talk) 18:41, 23 June 2017 (UTC)
Finding edit wars
There is currently a discussion at WP:Village pump (proposals)#Request for new tool which is of interest here.
Problem summary: Edit wars can be stressful for everyone involved. In many cases the individuals involved may not know how to request intervention, or they may be so absorbed in the conflict that they fail to request intervention. The discussion is about creating a tool or bot which would find likely edit wars in progress and automatically report them for human investigation. Alsee (talk) 21:53, 1 July 2017 (UTC)
- Thank you User:Alsee! I've left a comment in that discussion. We'll definitely be looking into edit war detection with our Edit filter work but it may also be better as a separate tool. — Trevor Bolliger, WMF Product Manager 🗨 17:10, 3 July 2017 (UTC)
Our goals through September 2017
I have two updates to share about the WMF’s Anti-Harassment Tools team. The first (and certainly the most exciting!) is that our team is fully staffed to five people. Our developers, David and Dayllan, joined over the past month. You can read about our backgrounds here.
We’re all excited to start building some software to help you better facilitate dispute resolution. Our second update is that we have set our quarterly goals for the months of July-September 2017 at mw:Wikimedia Audiences/2017-18 Q1 Goals#Community Tech. Highlights include:
- Implementing performance and anti-spoof improvements to Edit filter
- Researching and planning the User Interaction History feature
- Consulting the Wikipedia community about Page or topic blocks
- Finishing and releasing the Mute feature.
I invite you to read our goals and participate in the discussions occurring here, or on the relevant talk pages.
Best,
— Trevor Bolliger, WMF Product Manager 🗨 20:29, 24 July 2017 (UTC)
Tech news this week
I see the following in this week's tech news in the Signpost:
- It will be possible to restrict who can send you notifications on a wiki. This new feature will be accessible in your preferences, in the Notifications tab. Please see the documentation. (Phabricator task T150419)
That was discussed here - this arises from this initiative, right? Jytdog (talk) 18:20, 5 August 2017 (UTC)
- Hi @Jytdog: — While the original work was done by volunteers outside our regular prioritization, my team (the Anti-Harassment Tools team) will be making some final changes before we release it on more wikis. Right now it is only enabled on Meta Wiki. — Trevor Bolliger, WMF Product Manager 🗨 21:21, 7 August 2017 (UTC)
Examining the edits of a user
User story: I want to examine and understand the sequence of edits of another user. Maybe they are stalking another user, maybe they are pushing a conflict across multiple pages, maybe they are an undisclosed paid editor.
When I view the history of a page, each diff has very helpful links at the top for next-edit and previous-edit. That's great for walking through the history of that page.
When I view the contribution history of a user, opening one of the edit links will open a diff of the target page. As noted above, that diff page has links for next-edit-to-that-page and previous-edit-to-that-page. In most cases that is exactly what we want. However, those next & previous links are useless when I'm trying to walk through the edits of a particular user. In that case what I really want is next & previous links for edits by that user.
Working from a user's contribution history page is possible, but very awkward. Either I have to go down the list opening each edit in a new tab, or I have to use the browser's back button to continually reload the contribution history page.
I find it hard to picture a good user-interface solution for this use case. All of the options I can think of would either be wrong for the more common case, or they would unduly clutter the user interface. It would be great if you could come up with a good solution for this. Alsee (talk) 13:43, 9 August 2017 (UTC)
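Until something better exists in the interface, the underlying data can already be pulled in order from the API's list=usercontribs module (a real module; the username below is a placeholder). A rough Python sketch that walks one user's edits oldest-to-newest and prints a diff link for each:

```python
import requests

API = "https://en.wikipedia.org/w/api.php"

def iter_user_edits(username, session=None):
    """Yield a user's edits oldest-to-newest via list=usercontribs."""
    session = session or requests.Session()
    params = {
        "action": "query",
        "list": "usercontribs",
        "ucuser": username,
        "ucprop": "ids|title|timestamp|comment|sizediff",
        "ucdir": "newer",   # oldest first
        "uclimit": "500",
        "format": "json",
    }
    while True:
        data = session.get(API, params=params).json()
        yield from data["query"]["usercontribs"]
        if "continue" not in data:
            break
        params.update(data["continue"])  # standard API continuation

for edit in iter_user_edits("ExampleUser"):
    print(edit["timestamp"], edit["title"],
          f"https://en.wikipedia.org/w/index.php?diff={edit['revid']}")
```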
- @Alsee: Oh, that's an interesting idea. We've thought about how to show this type of information in an easy-to-understand format for 2+ users, but haven't thought about it for a single user. It should be straightforward to build a new tool for this, so I've created T172893 to keep track of this idea. — Trevor Bolliger, WMF Product Manager 🗨 14:55, 9 August 2017 (UTC)
Need input on warning templates
It was great to meet some of the anti-harassment team members at Wikimania 2017. Following up on my presentation there, I could use some input on crafting new warning templates for anonymous and new editors who attempt to leave personal attacks on others' user pages. Funcrunch (talk) 15:51, 16 August 2017 (UTC)
- Thank you for the notification, Funcrunch. I'm pleased to see you moving forward with more ideas. SPoore (WMF), Community Advocate, Community health initiative (talk) 20:05, 16 August 2017 (UTC)
- Thank you, Funcrunch! I have my own thoughts and will voice my opinion shortly. I would also suggest that you ping the Village Pump or Wikipedia:WikiProject_Templates to get more people to participate in the conversation. Best of luck; I 100% agree that 'vandalism', 'graffiti', or 'test edit' are too weak to describe some of these messages. — Trevor Bolliger, WMF Product Manager 🗨 21:52, 18 August 2017 (UTC)
- @TBolliger (WMF): Thanks, I pinged WP Templates on the discussion. I couldn't figure out where on the Village Pump would be the right place for a link; if you have one in mind feel free to ping them too (or let me know where it should go). Funcrunch (talk) 23:10, 18 August 2017 (UTC)
- @Funcrunch: I usually post on Wikipedia:Village_pump_(miscellaneous), but this topic could also be pertinent to Wikipedia:Village_pump_(proposals). @SPoore (WMF):, your thoughts? — Trevor Bolliger, WMF Product Manager 🗨 16:50, 21 August 2017 (UTC)
- I would suggest posting at Wikipedia:Village_pump_(miscellaneous), too. SPoore (WMF), Community Advocate, Community health initiative (talk) 15:26, 23 August 2017 (UTC)
Update and request for feedback about User Mute features
editHello Wikipedians,
The Anti-harassment Tool team invites you to check out the new User Mute features under development and to give us feedback.
The team is building software that empowers contributors and administrators to make timely, informed decisions when harassment occurs.
With community input, the team will be introducing several User Mute features to allow one user to prohibit another specific user from interacting with them. These features equip individual users with tools to curb harassment that they may be experiencing.
The current notification and email preferences are either all-or-nothing. These mute features will allow users to receive purposeful communication while ignoring non-constructive or harassing communication.
Notifications mute
With the notifications mute feature, on-wiki Echo notifications can be controlled by an individual user in order to stop unwelcome notifications from another user. At the bottom of the "Notifications" tab of user preferences, a user can mute on-site Echo notifications from individual users by typing their username into the box.
The Echo notifications mute feature is currently live on Meta Wiki and will be released on all Echo-enabled wikis on August 28, 2017.
Try out the feature and tell us how well it is working for you and your wiki community. Suggest improvements to the feature or documentation, and let us know if you have questions about how to use it: Wikipedia talk:Community health initiative on English Wikipedia/User Mute features
Email Mute list
Soon the Anti-harassment tool team will begin working on a feature that allows one user to stop a specific user from sending them email through Wikimedia special:email. The Email Mute list will be placed in the 'Email options' section of the 'User profile' tab of user preferences. It will not be connected to the Notifications Mute list; it will be an entirely independent list.
This feature is planned to be released to all Wikimedia wikis by the end of September 2017.
For more information, see Community health initiative/Special:EmailUser Mute.
Let us know your ideas about this feature.
Open questions about user mute features
See Wikipedia:Community health initiative on English Wikipedia/User Mute features for more details about the user mute tools.
Community input is needed in order to make these user mute features useful for individuals and their wiki communities.
Join the discussion at Wikipedia talk:Community health initiative on English Wikipedia/User Mute features or the discussion on Meta. Or if you want to share your ideas privately, contact the Anti-harassment tool team by email.
For the Anti-harassment tool team, SPoore (WMF), Community Advocate, Community health initiative (talk) 20:17, 28 August 2017 (UTC)
Anti-harassment tools team's Administrator confidence survey closing on Sept 24
Hello, The Wikimedia Foundation Anti-harassment tools team is conducting a survey to gauge how well tools, training, and information exist to assist English Wikipedia administrators in recognizing and mitigating things like sockpuppetry, vandalism, and harassment. This survey will be integral for our team to determine how to better support administrators.
The survey should only take 5 minutes, and your individual response will not be made public. The privacy policy for the survey describes how and when Wikimedia collects, uses, and shares the information we receive from survey participants and can be found here: https://wikimediafoundation.org/wiki/Semi-Annual_Admin_Survey_Privacy_Statement
To take the survey, sign up here and we will send you a survey form. Survey submissions will be closed on September 24, 2017 at 11:59 pm UTC. The results will be published on wiki within a few weeks.
If you have questions or want to share your opinions about the survey, you can contact the Anti-harassment tool team at Wikipedia talk:Community health initiative on English Wikipedia/Administrator confidence survey or privately by email.
For the Anti-harassment tools team, SPoore (WMF), Community Advocate, Community health initiative (talk) 16:29, 22 September 2017 (UTC)
Invitation to participate in a discussion about building tools for managing Editing Restrictions
The Wikimedia Foundation Anti-Harassment Tools team would like to build and improve tools to support the work done by contributors who set, monitor, and enforce editing restrictions on Wikipedia, as well as to build systems that make it easier for users under a restriction to avoid the temptation of violating a sanction and to remain constructive contributors.
You are invited to participate in a discussion that documents the current problems with using editing restrictions and details possible tech solutions that can be developed by the Anti-harassment tools team. The discussion will be used to prioritize the development and improvement of tools and features.
For the Wikimedia Foundation Anti-harassment tools team, SPoore (WMF), Community Advocate, Community health initiative (talk) 20:47, 25 September 2017 (UTC)
Help us decide the best designs for the Interaction Timeline feature
Hello all! In the coming months the Anti-Harassment Tools team plans to build a feature, called the Interaction Timeline, that we hope will allow users to better investigate user conduct disputes. In short, the feature will display, in one chronological timeline, all edits by two users on pages where they have both contributed. We think the Timeline will help you evaluate conduct disputes in a more time-efficient manner, resulting in more informed, confident decisions on how to respond.
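To illustrate the concept (this is not the team's actual implementation), merging two users' edits on shared pages into one chronological list can be sketched like this; the edit records are assumed to be rows such as those returned by the API's list=usercontribs.

```python
from itertools import chain

def interaction_timeline(edits_a, edits_b):
    """Merge two users' edits into one chronological list, restricted to
    pages where both have contributed.

    Each edit is a dict with at least 'user', 'title', and 'timestamp'
    (ISO 8601 strings from the API sort correctly as text).
    """
    shared = {e["title"] for e in edits_a} & {e["title"] for e in edits_b}
    timeline = [e for e in chain(edits_a, edits_b) if e["title"] in shared]
    return sorted(timeline, key=lambda e: e["timestamp"])

# Each entry then becomes one row of the timeline: timestamp, page, user, summary.
```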
But — we need your help! I’ve created two designs to illustrate our concept and we have quite a few open questions which we need your input to answer. Please read about the feature and see the wireframes at Wikipedia:Community health initiative on English Wikipedia/Interaction Timeline and join us at the talk page!
Thank you, — CSinders (WMF) (talk) 19:42, 3 October 2017 (UTC)
Anti-Harassment Tools quarterly update
Happy October, everyone! I'd like to share a quick summary of what the Anti-Harassment Tools team accomplished over the past quarter (and our first full quarter as a team!) as well as what's currently on the docket through December. Our Q1 goals and Q2 goals are on wiki, for those who don't want emoji and/or commentary.
Q1 summary
📊 Our primary metric for measuring our impact for this year is "admin confidence in resolving disputes." This quarter we defined it, measured it, and are discussing it on wiki. 69.2% of English Wikipedia admins report that they can recognize harassment, while only 39.3% believe they have the skills and tools to intervene or stop harassment and only 35.9% agree that Wikipedia has provided them with enough resources. There's definitely room for improvement!
🗣 We helped SuSa prepare a qualitative research methodology for evaluating Administrator Noticeboards on Wikipedia.
⏱ We added performance measurements for AbuseFilter and fixed several bugs. This work is continuing into Q2.
⚖️ We've begun on-wiki discussions about Interaction Timeline wireframes. This tool should make user conduct investigations faster and more accurate.
🤚 We've begun an on-wiki discussion about productizing per-page blocks and other ways to enforce editing restrictions. We're looking to build appropriate tools that keep rude yet productive users productive (but no longer rude.)
🤐 For Muting features, we've finished & released Notifications Mute to all wikis and Direct Email Mute to Meta Wiki, with plans to release to all wikis by the end of October.
Q2 goals
⚖️ Our primary project for the rest of the calendar year will be the Interaction Timeline feature. We plan to have a first version released before January.
🤚 Let's give them something to talk about: blocking! We are going to consult with Wikimedians about the shortcomings in MediaWiki’s current blocking functionality in order to determine which blocking tools (including sockpuppet, per-page, and edit throttling) our team should build in the coming quarters.
🤐 We'll decide, build, and release the ability for users to restrict which user groups can send them direct emails.
📊 Now that we know the actual performance impact of AbuseFilter, we are going to discuss raising the filter ceiling.
🤖 We're going to evaluate ProcseeBot, the cleverly named tool that blocks open proxies.
💬 Led by our Community Advocate Sydney Poore, we want to establish communication guidelines and cadence which encourage active, constructive participation between Wikimedians and the Anti-Harassment Tools team through the entire product development cycle (pre- and post-release.)
Feedback, please!
To make sure our goals and priorities are on track, we'd love to hear if there are any concerns, questions, or opportunities we may have missed. Shoot us an email directly if you'd like to chat privately. Otherwise, we look forward to seeing you participate in our many on-wiki discussions over the coming months. Thank you!
— The Anti-Harassment Tools team (Caroline, David, Dayllan, Sydney, & Trevor) Posted by Trevor Bolliger, WMF Product Manager 🗨 20:56, 4 October 2017 (UTC)
- Trevor Bolliger, a majority of those emoji rendered as garbage-boxes for me, including the one in your signature. They approximately resemble 0911F0. It's probably best to avoid nonstandard characters. Alsee (talk) 23:57, 12 October 2017 (UTC)
- @Alsee: Oh, bummer. I'll update my signature. Thanks for the heads up. — Trevor Bolliger, WMF Product Manager (t) 00:17, 13 October 2017 (UTC)
- I think that the complaint was not about the signature, but the text headings (📊, 🤖, 💬, etc). —PaleoNeonate – 01:21, 13 October 2017 (UTC)
- I'll avoid using emojis in future updates. (crying emoji). I like to try to add some personality to otherwise sterile posts, but if it's working against me, it's probably for the best. — Trevor Bolliger, WMF Product Manager (t) 19:05, 13 October 2017 (UTC)
Submit your ideas for Anti-Harassment Tools in the 2017 Wishlist Survey
The WMF's Anti-Harassment Tools team is hard at work on building the Interaction Timeline and researching improvements to Blocking tools. We'll have more to share about both of these in the coming weeks, but for now we'd like to invite you to submit requests to the 2017 Community Wishlist in the Anti-harassment category: meta:2017 Community Wishlist Survey/Anti-harassment. Your proposals, comments, and votes will help us prioritize our work and identify new solutions!
Thank you!
— Trevor Bolliger, WMF Product Manager (t) 23:58, 6 November 2017 (UTC)
Anti-Harassment Tools team goals for January-March 2018
Hello all! Now that the Interaction Timeline beta is out and we're working on the features to get it to a stable first version (see phab:T179607), our team has begun drafting our goals for the next three months, through the end of March 2018. Here's what we have so far:
- Objective 1: Increase the confidence of our admins for resolving disputes
- Key Result 1.1: Allow wiki administrators to understand the sequence of interactions between two users so they can make an informed decision by adding top-requested features to the Interaction Timeline.
- Key Result 1.2: Allow admins to apply appropriate remedies in cases of harassment by implementing more granular types of blocking.
- Objective 2: Keep known bad actors off our wikis
- Key Result 2.1: Consult with Wikimedians about shortcomings in MediaWiki’s current blocking functionality.
- Key Result 2.2: Keep known bad actors off our wikis by eliminating workarounds for blocks.
- Objective 3: Reports of harassment are higher quality while less burdensome on the reporter
- Key Result 3.1: Begin research and community consultation on English Wikipedia for requirements and direction of the reporting system, for prototyping in Q4 and development in Q1 FY18-19.
Any thoughts or feedback, either about the contents or the wording I've used? I feel pretty good about these (they're aggressive enough for our team of 2 developers) and feel like they are the correct priority of things to work on.
Thank you! — Trevor Bolliger, WMF Product Manager (t) 22:40, 7 December 2017 (UTC)
Anti-Harassment Tools status updates (Q2 recap, Q3 preview, and annual plan tracking)
Now that the Anti-Harassment Tools team is 6 months into this fiscal year (July 2017 - June 2018) I wanted to share an update on where we stand with both our 2nd quarter goals and our Annual Plan objectives, as well as provide a preview of our 3rd quarter goals. There's a lot of information, so you can read the in-depth version at meta:Community health initiative/Quarterly updates or just these summaries:
- Annual plan summary
The annual plan was decided before the full team was even hired and is very eager and optimistic. Many of the objectives will not be achieved, due to our actual team velocity and to changed priorities. But we have still delivered some value and anticipate continued success over the next six months. 🎉
Over the past six months we've made some small improvements to AbuseFilter and AntiSpoof and are currently in development on the Interaction Timeline. We've also made progress on work not included in these objectives: some Mute features, as well as allowing users to restrict which user groups can send them direct emails.
Over the next six months we'll conduct a cross-wiki consultation about (and ultimately build) Blocking tools and improvements and will research, prototype, and prepare for development on a new Reporting system.
- Q2 summary
We were a bit ambitious, but we're mostly on track for all our objectives. The Interaction Timeline is on track for a beta launch in January, the worldwide Blocking consultation has begun, and we've just wrapped some stronger email preferences. 💌
We decided to stop development work on AbuseFilter, but we are ready to enable ProcseeBot on Meta wiki if the global community wants it. We've also made strides in how we communicate on-wiki, which is vital to all our successes.
- Q3 preview
From January-March our team will work on getting the Interaction Timeline to a releasable shape, will continue the blocking consultation and begin development on at least one new blocking feature, and begin research into an improved harassment reporting system. 🤖
Thanks for reading! — Trevor Bolliger, WMF Product Manager 🗨 01:29, 20 December 2017 (UTC)
Reporting System User Interviews
The Wikimedia Foundation's Anti-Harassment Tools team is in the early research stages of building an improved harassment reporting system for Wikimedia communities, with the goals of making reports higher quality while lessening the burden on the reporter. Interest in building a reporting tool has been expressed in surveys, IdeaLab submissions, and on-wiki discussions, from movement people requesting it to our team seeing a potential need for it. Because of that, Sydney Poore and I have started reaching out to users who over the years have expressed interest in talking about harassment they have experienced and faced on Wikimedia projects. Our plan is to conduct user interviews with around 40 individuals, in 15-30 minute interviews. We will be conducting these interviews until the middle of February, and we will then write up a summary of what we've learned.
Here are the questions we plan to ask participants. We are posting them for transparency; if there are any major concerns we are not highlighting, please let us know.
- How long have you been editing? Which wiki do you edit?
- Have you witnessed harassment and where? How many times a month do you encounter harassment on wiki that needs action from an administrator? (blocking an account, revdel edit, suppression of an edit, …?)
- Where do you receive reports of harassment or related issues? (e.g. arbcom-l, checkuser-l, functionaries mailing list, OTRS, private email, IRC, AN/I, ….?)
- Volume per month
- Where do you report harassment or related issues? (e.g. emergency@, susa@, AN/I, arbcom-l, ….?)
- Volume per month
- Has your work as an admin handling a reported case of harassment resulted in you getting harassed?
- Follow-up question about how often and for how long
- Have you been involved in different kinds of conflict and/or content disputes? Were you involved in the resolution process?
- What do you think worked?
- What do you think are the current spaces that exist on WP:EN to resolve conflict? What do you like/dislike? Do you think those spaces work well?
- What do you think of a reporting system for harassment inside of WP:EN? Should it exist? What do you think it should include? Where do you think it should be placed/exist? Who should be in charge of it?
- What kinds of actions or behaviors should be covered in this reporting system?
- examples could include doxxing, COI, vandalism, etc.
New user preference to let users restrict emails from brand new accounts
Hello,
The WMF's Anti-Harassment Tools team introduced a user preference which allows users to restrict which user groups can send them emails. This feature aims to equip individual users with a tool to curb harassment they may be experiencing.
- In the 'Email options' of the 'User profile' tab of Special:Preferences, there is a new tickbox preference with the option to turn off receiving emails from brand-new accounts.
- For the initial release, the default for new accounts (once their email address is confirmed) is ticked (on), meaning they will receive emails from brand-new users.
- Example use case: A malicious user is repeatedly creating new socks to send Apples harassing emails. Instead of disabling all emails (which would also prevent Apples from receiving useful emails), Apples can restrict brand-new accounts from contacting them (a rough illustration of this check follows below).
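(To make the behaviour concrete, here is a minimal Python sketch of the kind of check this preference implies before an email is delivered. The seven-day threshold, the function name, and the flag are invented for illustration; they are not MediaWiki's actual definition of a brand-new account or its real code.)

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Hypothetical cut-off for what counts as a "brand new" account.
NEW_ACCOUNT_AGE = timedelta(days=7)

def may_email(sender_registered: datetime,
              recipient_allows_new_accounts: bool,
              now: Optional[datetime] = None) -> bool:
    """Return True if the sender may email the recipient.
    recipient_allows_new_accounts mirrors the new tickbox preference:
    ticked (the default) means emails from brand-new accounts are accepted."""
    now = now or datetime.now(timezone.utc)
    sender_is_new = (now - sender_registered) < NEW_ACCOUNT_AGE
    return recipient_allows_new_accounts or not sender_is_new

# Apples has unticked the preference, so a day-old sock cannot email them,
# while a long-established account still can.
sock = datetime.now(timezone.utc) - timedelta(days=1)
veteran = datetime.now(timezone.utc) - timedelta(days=400)
assert may_email(sock, recipient_allows_new_accounts=False) is False
assert may_email(veteran, recipient_allows_new_accounts=False) is True
```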
The feature to restrict emails on wikis where a user had never edited (phab:T178842) was also released the first week of 2018 but was reverted the third week of 2018 after some corner-case issues were discovered. There are no plans to bring it back at any time in the future.
We invite you to discuss the feature, report any bugs, and propose any functionality changes on the talk page.
For the Anti-Harassment Tools Team SPoore (WMF), Community Advocate, Community health initiative (talk) 00:47, 9 February 2018 (UTC)
AN/I Survey Update
During December, the WMF's SuSa and Anti-Harassment Tools teams ran a survey targeted at experienced users and admins about AN/I and how reports of harassment and conflict are handled there. Throughout January we have been analyzing the quantitative and qualitative data from this survey. Our timeline for publishing a write-up of the survey is:
- February 16th: rough draft, with feedback from SuSa and Anti-Harassment team members
- February 21st: final draft, with edits
- March 1st: release the report and publish data from the survey on wiki
We are keen to share our findings with the community and wanted to provide an update on where we are with the survey analysis and report.
Auditing Report Tools
The Wikimedia Foundation’s Anti-Harassment Tools team is starting research on the ways harassment reports are made across the internet, with a particular focus on Wikimedia projects. We are planning to do 4 major audits.
Our first audit focuses on reporting on English Wikipedia. We found 12 different ways editors can report, and divided these into two groups: on-wiki and off-wiki reporting. On-wiki reporting tends to be very public, while off-wiki reporting is more private. We've decided to focus on 4(ish) spaces for reporting, broken into two buckets: 'official ways of reporting' and 'unofficial ways of reporting.'
Official Ways of Reporting (all are maintained by groups of volunteers, some more ad hoc than others, e.g. AN/I):
- Noticeboards: 3RR, AN/I, AN
- OTRS
- ArbCom email listserv
- We've already started user interviews with ArbCom
Unofficial Ways of Reporting:
- Highly followed talk pages (such as Jimmy Wales's)
Audit 2 focuses on Wikimedia projects such as Wikidata, Meta, and Wikimedia Commons. Audit 3 will focus on other open source organizations and projects, like Creative Commons and GitHub. Audit 4 will focus on social media companies and their reporting tools, such as Twitter, Facebook, etc. We will focus on how these companies interact with English-speaking communities and on their policies for English-speaking communities, specifically because policies differ from country to country.
Auditing Step-by-Step Plan:
- Initial audit
- Write up of findings and present to community
- This will include design artifacts like user journeys
- On-wiki discussion
- Synthesize discussion
- Takeaways, bullet points, and feedback posted for further on-wiki discussion
- Move forward to next audit
- Parameters for the next audit, drawn from community input and from technical/product considerations
We are looking for feedback from the community on this plan. We expect to gain a deeper understanding of the current workflows on Wikimedia sites so we can begin identifying bottlenecks and other potential areas for improvement. We are focusing on what works for Wikimedians while also understanding what other standards and ways of reporting exist elsewhere.
Research results about Administrators' Noticeboard Incidents
Hello all,
Last fall, as part of the Community Health initiative, a number of experienced En.WP editors took a survey capturing their opinions on the AN/I noticeboard. They recorded where they thought the board was working well, where it wasn't, and suggested improvements. The results of this survey are now up; these have been supplemented by some interesting data points about the process in general. Please join us for a discussion of the results.
Regards, SPoore (WMF), Community Advocate, Community health initiative (talk) 20:07, 5 March 2018 (UTC)
Datetime picker for Special:Block
Hello all,
The Anti-Harassment Tools team has improved Special:Block by adding a calendar datetime selector, so a specific day and hour in the future can be chosen as the expiry time of a block. The new feature first became available on de.wp, Meta, and mediawiki.org on 05/03/18. For more information, see the de.WP discussion Improvement of the way the time of a block is determined, or phab:T132220. Questions, or want to give feedback? Leave a message on meta:Talk:Community health initiative/Blocking tools and improvements, on Phabricator, or by email. SPoore (WMF), Trust & Safety, Community health initiative (talk) 20:17, 15 May 2018 (UTC)
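(As a small worked example, and not a description of the actual implementation: a value chosen in such a calendar widget ultimately has to become an absolute expiry string. The Python sketch below formats a selected date and hour as an ISO 8601 UTC timestamp, one common way an absolute block expiry can be expressed; the function name is invented for the sketch.)

```python
from datetime import datetime, timezone

def expiry_from_picker(year: int, month: int, day: int, hour: int, minute: int = 0) -> str:
    """Turn a day and hour chosen in a calendar picker into an absolute
    expiry string (ISO 8601, UTC)."""
    chosen = datetime(year, month, day, hour, minute, tzinfo=timezone.utc)
    return chosen.strftime("%Y-%m-%dT%H:%M:%SZ")

# Block until 1 June 2018, 18:00 UTC:
print(expiry_from_picker(2018, 6, 1, 18))  # -> 2018-06-01T18:00:00Z
```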
How can the Interaction Timeline be useful in reporting to noticeboards?
We built the Interaction Timeline to make it easier to understand how two people interact and converse across multiple pages on a wiki. The tool shows a chronological list of edits made by two users, only on pages where they have both made edits within the provided time range.
We're looking to add a feature to the Timeline that makes it easy to post statistics and information to an on-wiki discussion about user misconduct. We're discussing possible wikitext output on the project talk page, and we invite you to participate! Thank you, — Trevor Bolliger, WMF Product Manager (t) 22:10, 14 June 2018 (UTC)
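(To illustrate the idea, here is a Python sketch of how simple Timeline statistics could be rendered as wikitext for pasting into a noticeboard thread. The layout is invented for this example; the real output format is exactly what is being discussed on the project talk page.)

```python
from typing import Dict, List

def timeline_summary_wikitext(user_a: str, user_b: str, edits: List[Dict]) -> str:
    """Format basic Interaction Timeline statistics as a wikitext snippet.
    Each edit is assumed to be a dict with at least a 'page' key."""
    pages = sorted({e["page"] for e in edits})
    lines = [
        f"Interaction between [[User:{user_a}|{user_a}]] and [[User:{user_b}|{user_b}]]:",
        f"* {len(edits)} overlapping edits across {len(pages)} page(s)",
    ]
    lines += [f"** [[{page}]]" for page in pages]
    return "\n".join(lines)

# Hypothetical usage:
print(timeline_summary_wikitext("Apples", "Bananas", [{"page": "Talk:Kale"}, {"page": "Kale"}]))
```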
Partial blocks are coming to test.wikipedia by mid-October
Hello all,
The Anti-Harassment Tools team is nearly ready to release the first feature set of partial blocks (the ability to block a user from ≤10 pages) on the beta environment and then on test.wikipedia by mid-October.
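(For readers curious what "partial" means in practice, here is a minimal Python sketch of the check involved, under the assumption stated above that a first-release block can list at most 10 pages. The names and data shapes are illustrative only, not the MediaWiki implementation.)

```python
from typing import Set

MAX_PARTIAL_BLOCK_PAGES = 10  # first release: a partial block may list up to 10 pages

def is_edit_allowed(page: str, blocked_pages: Set[str]) -> bool:
    """With a partial block, edits are prevented only on the listed pages;
    the user can continue editing everywhere else."""
    assert len(blocked_pages) <= MAX_PARTIAL_BLOCK_PAGES
    return page not in blocked_pages

# Hypothetical partial block covering two pages:
block = {"Kale", "Talk:Kale"}
assert not is_edit_allowed("Kale", block)
assert is_edit_allowed("Spinach", block)
```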
In other news, due to technical complexity, multiple blocks (phab:T194697) has been de-prioritized and removed from this project. Our first focus will be to make sure page, namespace, and upload blocking work as expected and actually produce meaningful impact. I'll share the changes to the designs when they are updated. SPoore (WMF), Trust & Safety, Community health initiative (talk) 00:10, 25 September 2018 (UTC)
Proposal for talk page health rater template
There is a discussion related to this project area at the village pump. The topic is a suggested optional talk page template which allows users to rate talk page discussion health. Edaham (talk) 02:54, 4 January 2019 (UTC)
- @Edaham: Thank you for sharing that idea and inviting input. My only 2¢ is that the 'health' of a discussion could vary on a topic/section basis, so a future version could operate at the section level rather than the page level. I look forward to seeing where your IdeaLab submission goes! — Trevor Bolliger, WMF Product Manager (t) 21:03, 4 January 2019 (UTC)
- Thanks for the quick reply - let's continue this discussion at the village pump. Have a happy new year. Edaham (talk) 03:11, 5 January 2019 (UTC)