User talk:Colin/A large scale student assignment – what could possibly go wrong?

Latest comment: 12 years ago by Colin in topic Class Spring 2012

Thoughts


This essay is very comprehensive and accurate in its assessment of the student assignment, and I agree with most of what it says. One detail I would change, however, is the recommendation to deduct extra credit for poor edits. This would discourage students from editing, and from what I saw there were a few "gems" in the class who made valuable contributions to Wikipedia. A better solution to discourage bad edits by students would be simply not to reward students for poor edits. Peter.C • talk • contribs 00:21, 3 January 2012 (UTC)

We need to deduct marks for poor edits. This will hopefully get people associated with the class reviewing all the changes that they (the students) make. I agree that currently these classes cause overall harm to Wikipedia.
I am willing to support one more try with the changes proposed if those associated with the class review the edits in question and deduct marks for poor edits/copy and paste. I too will be reviewing things. If this is not possible, I agree that it would be best not to attempt it again.--Doc James (talk · contribs · email) 05:42, 3 January 2012 (UTC)
Peter, the proposal to deduct 1% for bad edits isn't mine: that's the change proposed by the professor. Do you think the text is confusing here? I agree that expressing the scoring in a positive manner is more likely to be motivational than a list of negatives. A flaw in the original score was that students could write nonsense and choose an inappropriate source, and yet still get marks. And many of them thought that copy/paste of the source was acceptable too. I also don't think students should get a mark for creating an account and listing their name. It is a necessary step but not really worth awarding marks for.
Ultimately, we want the students to be motivated towards and capable of doing a good job, and the institution to have the resources and capability to review and fix any poor work. The actual scoring system isn't really our concern other than the overall motivational effect, as you mention. Colin°Talk 08:43, 3 January 2012 (UTC)
The idea to deduct marks was mine and the prof has agreed to it. Basically they will get three marks for good edits and will lose three marks for bad edits. If they do nothing they get zero marks. We need people to take this seriously, and if they might lose marks they will. When I was in university we had some exams where you would get a mark if you got the answer right and lose a mark if you got the answer wrong. If you did not know the answer you would leave the question blank. It gets rid of people guessing, as guessing is not good enough. Doc James (talk · contribs · email) 16:50, 3 January 2012 (UTC)
James, are you sure that if they carry out this assignment badly they will end up with negative marks? For a start, the username thing gets them one mark they can't really lose. It isn't clear whether the two edit marks are voided if the edit is bad or whether the negative mark can be applied twice for two bad articles. Colin°Talk 09:10, 4 January 2012 (UTC)

If they do not respond to their talk page, do not format properly, and copy-paste, that would be minus 3 marks. But I guess if they get one mark for signing up, and two for making an edit, they may just end up with zero. Maybe they should get one mark for responding to the talk page / lose one mark for not responding to the talk page, gain one mark for formatting properly / lose one mark for not formatting properly, lose one mark for copy and paste / gain one mark for accurate paraphrasing? Will propose. I am willing to review / supervise / trial one more semester of this. Doc James (talk · contribs · email) 05:55, 5 January 2012 (UTC)

I still think repeating this with these minor changes will not fix the fundamental problems with the exercise. Let's assume all the students now take care over the work and write a sensible sentence, reasonably sourced and paraphrase their source. Let's also assume 1500 students take part. This would be the best-case scenario from the experimenter's point of view. We'd now have 3000 random factoids spread over about 2000 articles (based on previous spread, though actually likely to be far fewer articles). Has that improved Wikipedia's quality? Would repeatedly performing this eventually produce any good articles? No and no. Would the material be retained should someone take the article to FA? Quite possibly not.
For a start, WP:WEIGHT is unlikely to get a look-in, so we'd have any old fact added even if irrelevant. Take one example I'm digging out of memory: some famous psychologist's bio. One student added the fact that he chose to study chemistry (or whatever it was) at university because he was persuaded there was no money/jobs in psychology. In fact the psychologist changed his mind on entering university and did study psychology. So the article gave the misleading impression that this famous psychologist actually studied chemistry at uni when he actually studied psychology. The whole chemistry thing is pretty irrelevant trivia, but adding it actually misled.
Think about the sort of mess we'd get into with disease/drug articles. Every rare side effect or symptom would be added with no concern for its importance. And whereas the typical newbie would add this unsourced and so could be reverted, this student newbie would supply a source and take some persuading, and perhaps some evidence, to show that it wasn't important. If someone is stubborn enough, digging up enough secondary sources to prove a negative (that nobody mentions this side effect or symptom) can be tiring and an utter timesink. These students weren't stubborn because being wrong didn't affect their mark. But if marked negatively they may become defensive of their edit.
These relatively brainless additions to Wikipedia are like junk food: empty calories. I'm still not convinced on the large-scale/no-supervision model at all -- because it doesn't seem to create Wikipedians. I don't think repeating the experiment with minor tweaks will generate useful new data. However, if the task was changed, such as to the one below, that would generate new data. There's more thinking involved in that task and the writing can have no detrimental effect on Wikipedia. It could even be coupled with some off-wiki questions for the students such as "How does WP's sourcing differ from that used in an academic review paper? Why?", "How do WP's writers differ from those of an academic textbook? What effect does this have on policy?", "How do WP's psychology articles compare to your textbooks for accuracy/comprehensiveness?" ...
Colin°Talk 08:33, 5 January 2012 (UTC)

Suggestions for alternative assignments


Perhaps we could brainstorm some ideas for alternative assignments that would require minimal supervision and the least chance of harm. We need to keep them somewhat focussed on psychology (there are assignments elsewhere that are suitable for students doing an arts degree where their writing skills are practiced on WP).

  • 1a. Read WP:V and WP:MEDRS. Choose a psychology topic (not a biography) from the list that is not at GA or FA. Research to locate one or more ideal sources for the article that are not already in use (e.g., if they read MEDRS they will know that they can search PubMed for reviews on the article topic). Ideally, the source should cover the whole article topic or a significant aspect of it. On the article talk page, start a new section and list the source(s) with a full citation (see WP:CITE and WP:MEDMOS for help with medical citation formatting, though care should be taken to avoid the <ref> tags).
  • 1b. Read the source (if a book, then a suitable chapter or section) and the article. Review the article to see if information is missing, incorrect or could otherwise be improved using the source. Add comments on this to the article talk page. A paragraph or two would suffice. If commenting on missing information, consider that a related article may already contain this information and be a more appropriate location.
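To make assignment 1a concrete, a student's talk-page post might look like the sketch below; the section title, source and rationale are invented here purely for illustration, not taken from any real student's work:

```wikitext
== Suggested source for this article (course assignment) ==
A possible source for expanding this article, cited in full and
deliberately without <ref> tags since this is a talk page:
* Doe, J. (2010). "Working memory and attention". ''Annual Review of Psychology'', 61, 123–145.
I chose this because it is a recent review that covers the whole
article topic rather than a single primary study. ~~~~
```

The `~~~~` at the end is the standard wikitext shorthand that expands to the poster's signature and timestamp when the page is saved.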

Colin°Talk 08:43, 3 January 2012 (UTC)

Yes, I would support that. What we need to do is create lists of possible assignment formats for different topic areas, to be presented to profs when they sign on to the Global Education Program. Doc James (talk · contribs · email) 16:53, 3 January 2012 (UTC)

Comments


This is a very helpful analysis. I've read it through a couple of times; here are some comments.

  • The suggestion that graduate students would do better seems rational but it should be noted that WP:MMM, which was by a long way the most successful student editing project on Wikipedia, was composed of undergraduates. My own suggestion is that the engagement and understanding of the professor, and the ratio of students to online mentors, are much more important. By those metrics one would have expected this project to be a dismal failure.
  • If this class is to be repeated, I think adding penalty marks is a good idea, as it might discourage those students who know they're not doing that good a job. However, presumably there are educational goals here as well; it makes little sense for a class to be deliberately structured to discourage 95% of the students from participating in an activity. If the exercise has educational value, shouldn't it be done in such a way that the students learn something? I can't imagine that more than a handful of these 1500 students learnt anything from this exercise.
  • The value to the community has to be measured in terms of gaining editors and gaining content; the cost is the effort in mentoring. If Joordens learned nothing from this (perhaps he did, but since he didn't edit actively it's possible he did not) then there's no gain in terms of expertise in the classroom; next semester will be just the same. As it stands the costs are far too high for the almost zero benefits received.

I'm very pessimistic about the chances of success with such a large class. To be honest, I don't think the experiment should be repeated; instead we should focus on classes where there is some expectation of engagement with the professor online, and where the number of students won't overwhelm the limited number of helpers. Mike Christie (talk - contribs - library) 00:10, 4 January 2012 (UTC)

Thanks for the comments. I'm not sure I suggested that graduate students would necessarily do better at editing Wikipedia. However, I think they would be less likely to make subject-error mistakes than the beginner class at university, and so need less supervision for those kinds of errors. As you say, a well supervised class could contribute good material in a subject they are only just learning. Having graduated in a subject shows some interest or ability in it, whereas I guess most of the undergraduates taking this course are probably graduating in something else (I saw several neuroscience comments on user pages, which suggests this is a module that could be taken by many undergraduates). That means there will be students who (a) chose this thinking it was an easy subject rather than one they are enthusiastic about and (b) made a mistake choosing this module. I totally agree with you on the prerequisites for success. Colin°Talk 08:10, 4 January 2012 (UTC)

Data question


I'd like to better understand the data, especially for Doc James' assessments.

Consider 1993kid (talk · contribs), which was assessed as "Content added without refs. Than ref added in wrong spot." Is this counted as a user who "added unsourced content, or gave a source that wasn't appropriate"? What about Lisa.I.B (talk · contribs), whose work was summarized as "Refs need work"? WhatamIdoing (talk) 18:53, 6 January 2012 (UTC)

I looked at the first one just there. The student added some unsourced text which was nearly instantly reverted by Dicklyon (talk · contribs), who left a message on the user's talk page along with a welcome template. I don't know enough about the subject but I suspect the text put forward a naive view of colour vision and how the brain processes it. Dicklyon asked for a source but it seems that simultaneously the student was adding a general ref. Such a ref, in a 6000-word article already using inline refs, is useless. Dicklyon reverted this and asked them to learn how to do an inline ref. Perhaps that could have been handled differently but the outcome would perhaps have still been that the text wasn't right. Googling for some of the text turns up some online notes for this psychology textbook that makes me willing to put money on the added text not being sufficiently original. But I don't have the book to check. Needless to say the student didn't respond to Dicklyon and went on to do much the same to another article. Colin°Talk 20:34, 6 January 2012 (UTC)
That student added information, and—less than seven minutes later—a bibliographic citation, which s/he struggled to figure out how to format in wikicode, although the citation itself was complete, right down to the oft-omitted details of publisher's location and page number.
Any experienced editor could have fixed that formatting problem in ten seconds. And that's all it was: a simple formatting problem. From where I sit, that interchange is an example of uncollegial, uncollaborative editing and a violation of WP:PRESERVE to boot. If I had encountered that sort of problem, I would have fixed it and dropped the diff on the user's talk page so they could see how I fixed it. "Go read some unspecified help pages until you figure it out" is not helpful.
More to the point, it is not IMO an example of adding unsourced information. It is an example of using two edits to achieve what you and I probably would have done in one edit. We do not have a rule that says the sources must be added in the very same edit as the information (and it's a good thing that we don't, given that some very good editors, like SandyGeorgia, habitually use multiple small edits). WhatamIdoing (talk) 21:16, 6 January 2012 (UTC)
If we assume for the moment that the text was actually "information", then I agree the student could have been helped to turn the general ref (which I sincerely hope we agree is useless to WP) into an inline citation. This would have been much friendlier than just to revert. It wasn't done by one of us so I can't answer for it, other than to guess that because it actually wasn't "information" the reverter wasn't inclined to do so. Again, this could have been explained differently rather than, as you say, pointing them unspecifically towards help. If you look at the timeline, the first revert was done in the belief that the text was unsourced. Yes, from Doc James's later viewpoint it wasn't strictly "unsourced", so you could nitpick with his summary. Believe me, when you've spent several hours looking at student edits, you are losing the will to live, never mind worrying that WhatamIdoing might come along and pick your sentence apart :-). However, the student didn't appear to realise their text had been removed and so added the general ref for the now absent text. The student would have had a big orange bar for their talk page by now, so why didn't they look at it and respond? Even aggressively, as one might if feeling misjudged.
This leads me to consider something that is based on the facts behind the experiment (because this really was an experiment, that's not a figure of speech, and they just had a poster presentation and plan to publish later). I think the students were told that it didn't matter if their edits were reverted, so not to worry about it. How else can you explain the 100% lack of interaction? If I had to add a sentence to WP to gain a mark, I'd make damn sure WP kept the sentence and possibly become a bit bloody minded about reverting any stupid wikidoc who reverts me. After all, I've never read about the 3RR.
But if we come back to the text. The more I think about it, the more I think Dicklyon was right that "Talking in terms of wavelengths is not so useful here". The eye doesn't see wavelengths so thinking that somehow the brain synthesises these different wavelengths to get brown or yellow is just naive thinking. The textbook appears to make a comparison with auditory sensory perception being the opposite (analytical) which is also wrong if you know anything about psychoacoustics. So basically, the psychology textbook is talking bollocks and WP is right not to have this text. Finally, of course, there's the issue of whether the text is original. If we had the source text we'd know for sure, but I'm very doubtful.
Ultimately, in terms of the experiment and benefit to WP, the edit wasn't kept (for good or bad). This makes it a waste of time for the student and for those monitoring the pages. You can complain that some editors may have handled it better but we have the editor population we have. We can't base our planning of running student assignments on some mythical community of loving, caring, helpful editors with tonnes of free time on their hands and nothing better to do (because they've written all the GAs they ever want to write). Colin°Talk 21:56, 6 January 2012 (UTC)

If we look at Lisa, she added a link to U of T's intranet (many students did this; she also added text to the ref in question) [1]. Yes, her edits are some of the best ones and I did not revert them. I gave her a 2 on the 1-to-5 scale. With respect to 1993kid: he added text here [2], then added the ref at the end [3]. As the ref was not connected to the text, it was not really helpful. Yes, it could have been fixed rather than reverted. Doc James (talk · contribs · email) 22:18, 6 January 2012 (UTC)

And you're counting the (unnecessary) inclusion of a convenience link as an "inappropriate source", even when accompanied by a perfectly good citation to a peer-reviewed journal article? There's no problem with the source there: there's only a problem with trivial, unimportant formatting—a problem of exactly the sort that CITE says, "While you should try to write citations correctly, what matters most is that you provide enough information to identify the source. Others will improve the formatting if needed." We should not be pretending that such problems are in the same class as providing zero citation information or citing someone's blog. WhatamIdoing (talk) 00:12, 7 January 2012 (UTC)
Yes, these are edits that others need to come in and clean up afterward. It does not stand alone. At least with newbies editing on their own time there appears to be a greater chance they will stick around and may receive feedback, becoming a positive for the project as a whole. Here it felt like I was typing into the wind. I did not revert either of the editors you mentioned as I did not feel these edits were sufficiently poor to warrant it, but neither were they stellar. Feel free to redo my analysis. Doc James (talk · contribs · email) 02:56, 7 January 2012 (UTC)
So instead of "71% have sourcing problems", we really mean "29% were perfect, and 71% had problems ranging from catastrophic down to trivial"? WhatamIdoing (talk) 05:17, 7 January 2012 (UTC)
Great, can you show me the 29% within my group that you consider added perfect content? This user, Lisa.I.B (talk · contribs), was counted in my 18% that added okay content. I do not remember even seeing one that added perfect content (properly paraphrased, referenced to a high-quality source, with proper formatting style). But if you can prove me wrong I will be happy to change my stance. This was my favorite edit [4] because one of our bots took care of it. Doc James (talk · contribs · email) 06:06, 7 January 2012 (UTC)

WhatamIdoing, the three sets of reviews are subjective and you know Doc James is probably the fussiest of us. If you feel this part of the analysis is weak then rather than us spend time arguing over one or two students, why don't you pick a sample and perform the same task with the lists here. I can quickly generate a random list of n students who added content using the little database I've got.

In terms of whether to revert, fix or engage, the nature of the assignment had an influence. Often we don't have the sources so can't easily fix a confusing sentence by re-reading the source. The assignment was over by the time we started our analysis so the students had already gone from WP. So we can't ask the student to try again or work with them to fix it. And as you can see, even when editors did leave talk messages at the very time the student was still editing, they just got ignored. Running edit-and-go assignments is not a good idea if you expect those edits to need work or engagement with the editor. Colin°Talk 12:26, 7 January 2012 (UTC)

Review workload


If it's true "That the plagiarism and poor content was reverted or fixed is almost wholly down to the extraordinary efforts of three Wikipedians", why would it "be impossible for one professor and a few helpers to review the work in detail"? Surely they are no less capable of doing what three volunteers did. WhatamIdoing (talk) 18:56, 6 January 2012 (UTC)

The prof in question does not appear interested and neither do the ambassadors involved. As Colin mentioned, they have made very few edits to Wikipedia themselves. If they were willing, that would be great and I would have no issues. Doc James (talk · contribs · email) 19:40, 6 January 2012 (UTC)
I don't think it is feasible to do what we did for 1500 students. They are more capable in that they know the subject (but need to take care then that they check sourcing rather than just knowing if the facts are correct) but less capable in that they know very little about Wikipedia. I doubt the lecturer knows how to write an inline citation, how summary-style affects articles or what a dab page is. But mainly, as James says, it is for the same reason the students didn't do a wonderful job: the motivation and inclination to spend a lot of time on this isn't there. Each article edit is worth one mark in one of many modules the students are taking.
Reviewing the work isn't just a case of looking at the diffs and saying:
  • Is it a piece of info on psychology? Tick.
  • Is a source given, and is it reliable? Tick.
  • Is the text original (not copied)? Tick.
  • Is the citation correct? Tick.
Look at the article as a whole and then see if the text is added in the right place (or, most commonly, whether this is even the right article for it). Does the text make sense? Is it just repeating what was already there? Is it correct/supported by the source (you need to read the source to find this out)? Is the text written in an appropriate style for Wikipedia? Have they broken anything with their edits (e.g., a broken tag causes citations to get screwed up)? If they removed or moved stuff, is that OK? If they did more than was called for, are those edits OK too?
Then... fix it. I think that's the big thing that was wrong here. The experiment thought Wikipedia was self-healing. It didn't assume those running it would have to roll up their sleeves and write or edit. The mistakes in the edit either have to be fixed or reverted. To fix them is really hard because you might need another source or to move the text somewhere else, or add more text so it makes sense. If the citation is rubbish, you need to track down the actual source used. And reading those sources takes time. And even though we were reviewing just days after the original edits, some articles had been edited multiple times since then. You end up spending a while looking through the revision history to work out what has happened to the text.
Here's one example of a mistake that can be really tricky to fix and I don't know if we have any guidelines warning about it. We have a paragraph of great text that is reliably sourced at the end of the paragraph. This is all fine and FA quality. Now the student adds a sentence to the middle of the paragraph and adds their ref on the end of their sentence. The paragraph has been spliced. It now appears that the first few sentences of the paragraph are sourced to the new ref rather than the old one. You come along to review this. If you decide to keep the text, then you could fix this by repeating the original ref just before the student's sentence. But now you are claiming those first few sentences really did come from the original ref. Perhaps you're not so sure. This is a crummy psychology article after all, not an FA. But you don't have access to that original source. What to do? Perhaps you do nothing and someone comes along and splits the paragraph in two. Then moves the first paragraph somewhere else. This is of course something that happens on a wiki normally and is a problem. But it shows that reviewing even a sentence or two and then fixing it up afterwards can turn out to be a bigger job than just getting the red pen out and making a tick or a cross here and there.
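The splice problem can be shown in a few lines of wikitext; the sentences and citations here are invented purely to illustrate the mechanics:

```wikitext
<!-- Before: one ref at the end of the paragraph supports all of it -->
Sentence A. Sentence B.<ref>Original source, p. 12.</ref>

<!-- After the student's edit: Sentence A now appears to be supported
     by the student's ref rather than by the original source -->
Sentence A. Student's new sentence.<ref>Student's source.</ref> Sentence B.<ref>Original source, p. 12.</ref>
```

Nothing in the diff looks wrong on its face, which is exactly why the splice is so easy to miss in review.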
Perhaps you think these edits don't need to be fixed by the institution. That might be a reasonable action if they were largely good and rarely very bad. Unfortunately it is the opposite. Colin°Talk 20:23, 6 January 2012 (UTC)
I agree that the specific people in question are not likely to choose to do this work, but I disagree that it is impossible for them to do it. I think it entirely within their capabilities, even if all 1500 students participated. Checking diffs for 1500 students is no harder or more time-consuming than grading 1500 two-page essays, and I consider that to be well within the capabilities of any course instructor.
The "impossibility" of doing it right is not why the course instructor made these choices. WhatamIdoing (talk) 21:21, 6 January 2012 (UTC)Reply
Well, nothing is impossible. But that's not really a helpful gauge of reality. The motivation and ability of the students and the professors is a key factor that needs to be taken into account. Colin°Talk 22:02, 6 January 2012 (UTC)
You're advocating for a general rule based on one guy's first effort. You are, in effect, assuming that everyone is and will always be like him. You're also assuming that this person isn't capable of reforming his own approach. Based on this, you're saying that it's absolutely hopeless and they should simply give up trying.
It's not impossible to do this right. It takes some effort, but it is actually achievable. WhatamIdoing (talk) 00:22, 7 January 2012 (UTC)
WhatamIdoing, I don't think it is helpful for you to nitpick that I said "impossible" rather than "very difficult" or "rather unlikely" or some other phrase. That was only one aspect of this assignment: a huge class run by one non-Wikipedian who had no intention of reviewing the edits by eye. In the section above you take us and others to task for not spending enough time fixing and engaging with the students. So you don't consider the hours we spent to be satisfactory. That makes it even less likely that any prof would manage to do a satisfactory job with 1500 students.
What "general rule" do you think I'm advocating based on this? Where have I assumed everyone will be like him? I don't really understand here. If a different prof had done this, he may have prepared his students better. He may have realised the scale of the task and recruited more helpers. He'd have learned how to edit himself and asked his helpers to do so too. He'd have asked the students to make their edit well before the deadline (rather than after it!) and then to come back and see whether there were problems with it. He'd have asked the students to engage with other Wikipedian's to achieve the best result. He'd have trained them how to paraphrase their sources. He'd have done an example edit in front of the class to show how it should be done. He'd have awarded more marks. He might have done his first WP assignment on a smaller scale rather than the biggest scale yet. And so on and so on. There are so many things someone could have done differently if they really wanted a class this big to edit. I don't say "it's absolutely hopeless and they should simply give up trying" because the "it" there is unhelpfully imprecise. In your last two sentences, what are you referring to? Student edits in general? Large scale student edits? Running an assignment on WP where neither the prof nor the students engage with WP in any way? .... Colin°Talk 12:45, 7 January 2012 (UTC)Reply

The Indian initiative had similar problems, as did this class. I am not saying we should not try again, just that hopefully we will learn from our previous efforts. Doc James (talk · contribs · email) 06:26, 7 January 2012 (UTC)

Editors == reviewers


The canonical way to solve the "reviewing scales with editing" problem is to make editors reviewers. Students need to learn critical reading and analysis as well as critical writing and synthesis. Ask a body of students to do two separate tasks.

  1. Find an article that could bear improvement. Improve it, citing sources. Include a talk-page section on your source analysis / how you chose the new sources you did.
  2. Review the work of someone else in your group / your matched partner. Check whether it adheres to research / writing standards.

Both tasks have a prerequisite of learning how to edit and considering quality standards for encyclopedic articles.

There should be at least an equal number of people doing each task -- so either students get to pick which one they do, or they are all asked to do both. You could imagine simple systems for ensuring either method. If you are especially concerned about "the workload" or "bad contributions" or "breaking the current conservative system" you can increase the ratio of reviewers to writers to 2:1 or 3:1. Reviewing is just as valuable a task as writing - reviewers could be asked to work through the various steps of peer review for the changes -- or for the article as a whole, posting the result of the review on the talk page.

– SJ + 04:44, 1 February 2012 (UTC)

This solution assumes that editing and reviewing are tasks that require equal ability/experience/knowledge of the wiki and of the subject one is writing about. Clearly this is not true, otherwise academia would be populated solely by students learning together and reviewing each other's work -- without any need for tutors, lecturers and professors. It makes the common mistake being made by these education programs that editors and students are equivalent. In volunteer wiki land, we are all peers and do indeed review each other's work. In academia, there is a definite hierarchy. The scenario you present above has worked for small groups (a handful) led by a teacher who was already well capable of writing encyclopaedic material and collaborating in a wiki. I agree that it has merits and I also proposed the idea of students doing reviewing further up. But this is not the only step required to make it successful on a larger scale. Colin°Talk 08:49, 1 February 2012 (UTC)

An overly conservative view of editing


I think it is great that this class attempted such a project, and that they are planning to repeat it with improvements. Please ask them to have their students directly engage in review; they are smart enough to understand that once they think of it.

This essay presents a conservative view of editing.

  1. Few people know how to edit well
  2. Those who don't know how to edit well end up detracting from the encyclopedia
  3. New contributors should go through extensive training before they start contributing, to avoid messing things up
  4. Wiki enthusiasts who don't have extensive personal experience in producing good articles shouldn't encourage others to contribute without detailed guidance

This sounds a lot like the appeal to expertise that print encyclopedias make in limiting who gets to contribute.

I agree that we want programs that develop lasting Wikipedians, not drive-by edits, but I don't think the right answer lies in discouraging edits for fear of 'doing it wrong'. Better to make reviewing and critical analysis much easier, making feedback loops and communication easier and more obvious and fun for the logged-in writer, and shifting the ratio of these tasks.

– SJ + 04:44, 1 February 2012 (UTC)Reply

This response couldn't be more wrong and reflects the continued misunderstanding by the WMF that students doing assignments are equivalent to volunteer editors. I don't take this view of volunteer editors (the 99% who built this encyclopaedia) at all. As a lay editor of medical articles, I can assure you I don't hold the view that only experts should edit articles, or that we should limit who gets to contribute. Let's look at your points (which I've numbered):
  1. I agree with this point. It is obvious. Only a tiny percentage of our readers also edit. The MediaWiki syntax is a well known hurdle and the complexities of inline citations, citation templates, dab pages, summary-style and all the MoS, guidelines and policies are a lot to pick up.
  2. Disagree strongly. The students editing in this assignment made the encyclopaedia worse but not because they couldn't edit well. Other folk can fix refs, add wikilinks, and improve prose. They made it worse because they plagiarised their sources, because they didn't understand the subject they were writing about, because they were only requested to use one source, because it was valued at so few marks that it wasn't worth a learning investment, because they were asked to do something beyond their abilities.
  3. Disagree strongly. Volunteer editors will naturally adapt their editing boldness to their ability and understanding. Students, who are under pressure whether the marks are bonus or required, shouldn't be expected to do more than their abilities allow. Push students too hard on an internal assignment and you just get lots of failed essays in a drawer. Do the same online, on a wiki, and you can screw up an encyclopaedia. This isn't a playground for students to practise their homework on. Are we not here to build an encyclopaedia? The best student assignments have taken students through graded steps, involved good introductions to WP, and been supported by knowledgeable leaders.
  4. I'm not sure where this idea came from. The prof here isn't a "wiki enthusiast". He might be "enthusiastic about the idea of a wiki". He has never edited WP. He has not engaged with the WP community on-wiki at all. How is that a "wiki enthusiast"?
I'm discouraged that your response has been entirely defensive. Where is the acknowledgement that this was a fundamentally stupid assignment driven by someone who didn't have a clue? Colin°Talk 09:16, 1 February 2012 (UTC)Reply

Update July 2012

edit

Apparently this class is up and running again, but under the radar. Students are registering an account but not linking it to any on-wiki programme or using identifying names, etc. So it is nearly impossible to track what these students are up to, and since almost nobody watchlists psychology articles, I guess the effort isn't being reviewed by many Wikipedians. The focus of the assignment is still essentially: how can I set an assignment for 1500 students and mark it with little effort. OK, I'm being cheeky, but basically there's a computer program that tracks the students' edits. Marks for registering, marks for edits on en.WP and marks for edits on another language WP. Perhaps I'm not being told the whole picture (wouldn't be the first time) and there's a squad of experienced WP editors with access to psychology journals and textbooks reviewing this work for plagiarism, accuracy, relevance, etc.

Some criticism of the above essay/analysis:

  • We made incorrect assumptions. That may be so. But since the folk running the show weren't Wikipedians, and didn't engage on Wikipedia at all, we had no way to ask questions or get points clarified.
  • Drew false conclusions. That is also possible but, as with all good research, we pointed out the limitations of the study such as we knew them, and I think the overall conclusions stand.
  • They did not insist that students register with the Wikipedia Education Program. In as much as the entire assignment was voluntary, this is true. But it is a deceptive statement. The text at Wikipedia:Canada Education Program/Courses/Introduction to Psychology, Part I (Steve Joordens) states clearly: "1% for creating a Wikipedia username and linking it to the Canadian Education Program, Registering with the APS Wikipedia Initiative, and linking your student number to your Wikipedia username as Steve highlighted in class".
  • We said that only a tiny minority could be "bothered to register". Can't find those exact words, perhaps they are elsewhere. Regardless, from the information we could gather, this was true and surprising. Why would students avoid such easy marks? It would be nice to know a reason or if actually more did register and take part in editing WP, it would be great to get the numbers. I don't expect them to be radically different as I think we'd have spotted some of these hidden students on our rounds.
  • The problems raised were exaggerated. Actually, I think we were rather conservative in our assessment of the problems. If we really did have access to the sources, we'd have spotted more plagiarism or errors, and if any of us were psychology experts, we'd have spotted more nonsense.
  • Telling [the prof] how [he] should run assignments. If you run assignments on WP, then the community should have a say and ultimately be able to impose restrictions on what you do. And if you run experiments on WP, then the community should have a say and ultimately be able to impose restrictions on what you do. This was both an assignment and an experiment. Wikipedia is a "free content" encyclopaedia. It is not a free to do with what you like wiki. Some folk don't seem to get this and it may be WP's downfall.
  • Our methodology was very suspect; we went looking for instances of problems; we weren't scientific in approach and instead were "instance grabbing" and did "brazen 'publication' of crappy analyses". Actually, I think we analysed the students as fairly as we could. We looked at every single student account we were aware of, looked at every single one of the edits, and noted whether they made good edits or bad edits. We tried to fix or remove any problems as we went along. This wasn't a cherry-picking exercise where we ignored the hundreds of good edits and focused on a few extreme ones. In my sample 75% of the students made edits that harmed Wikipedia. That's an appalling statistic.

Apart from a student marking exercise, the notionally "good work" bit is that they "disseminate research" by including facts from research papers. But WP doesn't like research papers as sources and isn't a research publication. As a tertiary source, our sources should be drawn from a higher level than basic psychology research papers. From the little new writing I've found, we've got the same sort of issues, with "A study in 2012 found that pregnant females..." rather than just stating solid, well established encyclopaedic facts with confidence.

The biggest problem here is hubris. It created all the problems we see and prevents the community from working with the educational establishment to improve things. The second biggest problem is that the focus on the assignment/experiment was not on improving Wikipedia. Nobody on Wikipedia thinks adding a random factoid taken from a random journal to a random psychology article constitutes an improvement to Wikipedia. The encyclopaedia just isn't built like that. And certainly not if your "workers" are largely ignorant of their subject, of WP rules and motivated by a different goal. Colin°Talk 07:42, 1 August 2012 (UTC)Reply

Agree having students work as teams to improve articles with direct supervision from those who know about the subject at hand is a better method. Working under the radar simply conceals any problems and prevents peer review. Doc James (talk · contribs · email) (if I write on your page reply on mine) 06:57, 9 August 2012 (UTC)Reply

Class Spring 2012

edit

The page Wikipedia:Canada Education Program/Courses/Introduction to Psychology, Part I (Steve Joordens) lists (some of) last year's students. A few of this year's students have added themselves to the list. Also some of last year's students seem to have done the exercise again this spring. Here are some notes, working up the user list from the bottom, listing only the students who have edited articles.

Added inappropriate paragraph to Appetite. I've reverted it. Added a 100% copy vio to Eye movement (sensory). I've reverted and warned user.
Added a 100% copy vio to Stress (psychological). I reverted and warned user. Added complete nonsense to Adolescence. I've reverted.
Added and then removed a short sourced sentence to Anorexia nervosa. Added unencyclopaedic text to Vision, which is a DAB page. Was swiftly reverted.
Added a paragraph to Stress (psychology) but it wasn't appropriate to the article subject. I've removed it. Added a paragraph to Vision, which is a DAB page. Was swiftly reverted. In addition to being in the wrong place, the paragraph was extremely basic.
Added an inline-cited sentence to Albert Bandura. Don't have the book so can't check the text. Added an inline-cited sentence to Alfred Adler. Unable to check it.

Once again the students' favourite source is Carlson, Neil R. and Heth, C. Donald (2010) Psychology the Science of Behaviour Ontario, CA: Pearson Education Canada. Based on what they claim to be learning from it, I suggest a new book be found for the class. Most students are also adding global refs to the textbook they cite. This is completely unhelpful as we need inline citations. Colin°Talk 12:41, 1 August 2012 (UTC)Reply

student activity through the APS portal

edit

Hi! Just wanted to leave a quick update about Joordens' classes. I talked with the people running the APS Wikipedia Initiative, who've been developing tools for pointing experts to Wikipedia articles they can help with as well as tools for organizing course work. Joordens' students seem to be using the APS portal during the current term, but not at the same scale as previously. I'm told there are 19 active users, whose edit counts range from 2 to 600+. That suggests a pattern of activity that isn't the same as before: fewer people, and (for some, at least) deeper engagement with Wikipedia. APS can't release info such as usernames, but it seems like — despite not wanting to engage here — Joordens is adapting his approach significantly.--Sage Ross (WMF) (talk) 18:04, 27 August 2012 (UTC)Reply

There are supposedly two classes, one of which is doing much better work than the single class did last year. Doc James (talk · contribs · email) (if I write on your page reply on mine) 18:15, 27 August 2012 (UTC)Reply
Ah, interesting!--Sage Ross (WMF) (talk) 18:17, 27 August 2012 (UTC)Reply
Yes, there seem to be a few postgrad-level students making efforts to take start articles up to GA and get DYKs. They have even engaged with the community, at least as far as working out how to format at DYK. This is much more promising than launching 1000 monkeys at our psychology articles every semester. Colin°Talk 21:11, 27 August 2012 (UTC)Reply