Talk:IBM Watson/GA1

GA Review

Reviewer: SpinningSpark 20:03, 10 November 2013 (UTC)

If there are no objections, I'll take this review. I'll note at the outset I've had no role in editing or creating this article. I welcome other editors at any stage to contribute to this review. I will spend a day familiarising myself with the article and then provide an assessment. Kind regards, LT910001 (talk) 01:12, 19 October 2013 (UTC)

Thanks for waiting. In conducting this review, I will:

  • Provide an assessment using WP:GARC
  • If this article does not meet the criteria, explain what areas need improvement.
  • Provide possible solutions that may (or may not) be used to fix these.

Assessment

1. Well-written:
  1a. the prose is clear, concise, and understandable to an appropriately broad audience; spelling and grammar are correct. Comment: Readability is hampered by structure. Several other minor concerns.
  1b. it complies with the Manual of Style guidelines for lead sections, layout, words to watch, fiction, and list incorporation.
2. Verifiable with no original research:
  2a. it contains a list of all references (sources of information), presented in accordance with the layout style guideline. Comment: Some sources do not have access dates; the very occasional section lacks a source.
  2b. reliable sources are cited inline. All content that could reasonably be challenged, except for plot summaries and that which summarizes cited content elsewhere in the article, must be cited no later than the end of the paragraph (or line if the content is not in prose).
  2c. it contains no original research.
3. Broad in its coverage:
  3a. it addresses the main aspects of the topic. Comment: Will evaluate after refactoring.
  3b. it stays focused on the topic without going into unnecessary detail (see summary style).
4. Neutral: it represents viewpoints fairly and without editorial bias, giving due weight to each. Comment: I recommend no pass on GAC 4 until the issues concerning involvement with medical diagnosis are addressed, as per my commentary below. However, I think it should be easy for the editors to correct this in a week or two at most, and I therefore do not recommend failing the GA nomination if it would otherwise pass, but rather placing it on hold. EllenCT (talk) 08:16, 11 November 2013 (UTC)
5. Stable: it does not change significantly from day to day because of an ongoing edit war or content dispute.
6. Illustrated, if possible, by media such as images, video, or audio:
  6a. media are tagged with their copyright statuses, and valid non-free use rationales are provided for non-free content.
  6b. media are relevant to the topic, and have suitable captions.
7. Overall assessment.

Commentary

At first blush, this article can certainly be improved to GA standard within a reasonable time span; however, there are some outstanding issues:

  • Several sources do not have access dates.
  • Readability is hampered by the structure. I suggest a structure along the lines of "Description: Hardware, software, processing", "History: Development, Jeopardy" and "Future applications".
  • There is a very large quote in the 'hardware' section, which could to some extent be integrated into the text. This might be done by explaining the nature of the components, or by using a secondary source to provide commentary on the hardware components' utility.

Kind regards, LT910001 (talk) 01:51, 20 October 2013 (UTC)

No action in a week, so I am failing this review for the reasons above. I would encourage renomination once this article's readability issues have been addressed; this may include restructuring the article and addressing the comments above. Kind regards, LT910001 (talk) 04:48, 30 October 2013 (UTC)
Ack! Oh, the endless embarrassment! This article has indeed been changed in response to my earlier comments. I've reversed the failure and put the review on hold. I'm going to go on a wikibreak and, if it's not much trouble, would ask you to find another reviewer to complete the review. Thanks for responding to the changes (I shouldn't have acted so rashly!) and I wish you all the best. LT910001 (talk) 04:56, 30 October 2013 (UTC)

Since the original reviewer seems to have gone away, I'll take a look at this one. The immediate problems that show up are:

  1. The disambiguation tool is showing two problems that need fixing
  2. The external links tool is showing two deadlinks. (added 10/11) It is not a requirement for GA, but I would strongly recommend that you archive your online sources at WebCite to protect them from linkrot and add a link to WebCite in the cite so that future editors can find them easily.
  3. A lot of the sources are blogs. Some might be ok on the "recognised expert" rule, but I need to take a closer look. (edit: detailed comments below)

SpinningSpark 19:33, 8 November 2013 (UTC)

Thanks for taking over, Spinningspark. I'm present on Wikipedia at the moment, but I'm not able to reliably devote enough time to a proper GA review, nor to guarantee timely responses. Thanks again, LT910001 (talk) 00:49, 9 November 2013 (UTC)

Further comments:

  • File:Watson Answering.jpg requires a fair use rationale for this article.
  • File:DeepQA.svg. The link in the "other versions" field is not to another version on Wikimedia, but to the source diagram. This should instead be presented as a reference, or else added to the source field, e.g. "Own work based on [...]". I also note that the diagram is very close to the source and the annotation is identical. This is uncomfortably close to being a copyvio. I don't think I am inclined to fail it for GA because of this, but it is open to challenge by others in the future.
  • fn#5 gives a quote and links to the source but does not name the source.
  • fn#7 does not link to a relevant article. Probably the page has been changed or else has gone dead. Does not seem to be necessary anyway as fn#3 is sufficient verification.
  • fn#6 fails verification. It is supposed to be verifying the choice of human contestants but is written before the selection was made and only mentions one of them. Does not seem to be necessary anyway as fn#3 is sufficient verification.
  • fn#22 needs a page number.
  • The passage beginning "To provide a physical presence..." is cited to an article by one of the contestants (fn#25). This does not strike me as a reliable source for IBM's motivation in design choices, in particular the 42 threads claim.
  • fn#41, the source is not named. It is also timing out for me and may possibly have gone dead.
  • fn#62. Why is this source considered reliable?
  • fn#69 fails verification. Probably the page has been changed.
  • fn#73 is a bare url and requires a page number for verification
  • fn#76 is essentially a marketing ad. I am not seeing what this is supposed to be verifying or what it is adding to the article.
  • fn#77 provides a link but does not name the source
  • fn#83 provides a link but does not name the source
  • fn#84 provides a link but does not name the source

SpinningSpark 21:37, 10 November 2013 (UTC) to 10:49, 11 November 2013 (UTC)

  • I have resolved most of the image use and verifiability issues that you raised in this list. —Seth Allen (discussion/contributions), Monday, November 11, 2013, 17:15 U.T.C.
  • fn#7 (new numbering) is dead
  • You have not responded concerning fn#24 and the number 42. A cite from IBM on why IBM have done something would be better. Or else attribute the claim to Jennings in-article.
  • fn#40 is still dead. This might be ok if the document exists other than online but there is not enough citation information given to be able to find it. Alternatively, if fn#14 has all the necessary information please provide page number.
  • I need to see a response from you to the issue raised by EllenCT before I can pass this. SpinningSpark 18:57, 11 November 2013 (UTC)
  • I have finished my work on this article. Here is an outline of what I did:
    • Provided access dates for the sources that lacked them.
    • Gave a fair use rationale to the non-free image that lacked one.
    • For the "DeepQA" diagram, discarded the link in the "Other versions" field and moved it to the source field ("Own work based on diagram found at http://www.aaai.org/Magazine/Watson/watson.php").
    • Converted the links that did not name their sources to the standard citation format (title, URL, author, publisher, publication date, and access date).
    • Removed the footnotes that failed verification (including the blog you said should not be considered reliable, as well as a citation to copyvio content on YouTube).
    • Provided page numbers for footnotes 22, 73, and 40, and archived copies for the two dead links.
    • Removed the Jennings claim from the citation; the article now attributes it to him in prose, in the second paragraph of the "Jeopardy preparation" section.
    • Gave the "Future applications" section a significant, if not complete, overhaul to ensure neutrality: mention of Watson's involvement in medical diagnosis was removed from pre-existing statements, and the first paragraph of the "Healthcare" section now simply states: "Despite being developed and marketed as a 'diagnosis and treatment advisor,' Watson has never been actually involved in the medical diagnosis process, only in assisting with identifying treatment options for patients who have already been diagnosed."
  • So I believe I have covered nearly all the concerns raised in this nomination. —Seth Allen (discussion/contributions), Tuesday, November 12, 2013, 21:41 U.T.C.

I'm concerned that the article repeats several IBM press releases stating that Watson was being developed to be involved in the medical diagnosis process, and in a few instances implies that it actually is so involved. However, IBM's detailed documentation makes it clear that the only implementations involved with healthcare are in "utilization management" (cost-benefit analysis concerning treatment for patients already diagnosed by an M.D.) and in simply recommending treatment options. For example, see slide 7 of this presentation, which indicates that the "Watson Diagnosis & Treatment Advisor" actually only "assists with identifying individualized treatment options for patients [already] diagnosed with cancer". This is corroborated by this case study which, though sub-headlined "IBM Watson helps fight cancer with evidence-based diagnosis and treatment suggestions," again contains no statements suggesting Watson is actually involved in the diagnosis process, and several indicating it recommends treatment options for patients already diagnosed. I think this indicates some pretty heavy-handed attempts at manipulation on the part of IBM's marketing department, bordering on outright deception, and I personally would never consider this article as passing GAC 4 (neutrality) until the statements implying that Watson performs or assists with medical diagnoses are corrected to be consistent with the details of IBM's descriptive literature. If Watson were actually to assist in medical diagnosis, the potential legal liability for diagnosis errors would probably be vast, and since the knowledge base Watson uses to interpret natural language statements is crowdsourced (including from Wikipedia editors!), indemnification against potential error and even natural language ambiguity is, I believe, a larger problem than what Watson has so far addressed. EllenCT (talk) 08:11, 11 November 2013 (UTC)

Passing for GA. Well done, a very interesting article. SpinningSpark 00:28, 13 November 2013 (UTC)