Talk:Geoffrey Hinton
A news item involving Geoffrey Hinton was featured on Wikipedia's Main Page in the In the news section on 8 October 2024.
This article must adhere to the biographies of living persons (BLP) policy, even if it is not a biography, because it contains material about living persons. Contentious material about living persons that is unsourced or poorly sourced must be removed immediately from the article and its talk page, especially if potentially libellous. If such material is repeatedly inserted, or if you have other concerns, please report the issue to this noticeboard. If you are a subject of this article, or acting on behalf of one, and you need help, please see this help page.
This level-5 vital article is rated C-class on Wikipedia's content assessment scale. It is of interest to the following WikiProjects:
godfather joke
The first paragraph claims his work has gained him the nickname "godfather of neural networks". This must be a joke. What about the other, much older pioneers of neural networks? For example, Warren McCulloch was called the godfather of neural networks - see http://soma.berkeley.edu/books/MA/MassAction.html . And there are Teuvo Kohonen, Kunihiko Fukushima, Shun'ichi Amari, Paul Werbos, David E. Rumelhart, and others who may be more deserving of such a title. I checked the source - apparently it was Andrew Ng who called Hinton that in a Wired magazine article. But Hinton and Ng both work for the same company, Google (next door to Wired magazine). This looks like a company's self-promotion in disguise. It should not appear in any biography. Putting things straight (talk) 16:15, 8 October 2013 (UTC)
- I believe it's because Prof. Hinton also does an excellent Marlon Brando. – AndyFielding (talk) 15:46, 7 April 2023 (UTC)
For future use
My father was a Stalinist and sent me to a private Christian school where I was the only person to pray every morning. From a very young age I was convinced that many of the things that the teachers and other kids believed were just obvious nonsense. That's great training for a scientist and it transferred very well to artificial intelligence. But it was a nasty shock when I found out what Stalin actually did.
— Preceding unsigned comment added by 176.199.175.30 (talk) 23:53, 10 November 2014 (UTC)
location of birth
This article says he was born in Bristol: http://www.magazine.utoronto.ca/feature/getting-smarter-computer-science-professor-geoffrey-hinton-is-helping-to-build-a-new-generation-of-intelligent-machines/ but the wiki article says London? — Preceding unsigned comment added by 82.2.180.6 (talk) 07:12, 23 June 2015 (UTC)
Why delete Alex Krizhevsky, whose breakthrough made this possible? Other co-workers are also mentioned.
User Nelson: You deleted my text on Krizhevsky and others. One cannot give Hinton sole credit for the work of Alex Krizhevsky. In fact, Hinton was resistant to Krizhevsky's idea. Why delete Krizhevsky? Other co-workers are also mentioned. The same goes for David E. Rumelhart and others. I also added Seppo Linnainmaa, the inventor of backpropagation (1970):
While a professor at Carnegie Mellon University (1982–1987), Hinton, David E. Rumelhart, and Ronald J. Williams were among the first researchers to demonstrate the use of the back-propagation algorithm (also known as the reverse mode of automatic differentiation, published by Seppo Linnainmaa in 1970) for training multi-layer neural networks, a technique that has since been widely used in practical applications.[1]
The dramatic image-recognition milestone of AlexNet, designed by his student Alex Krizhevsky[2] for the ImageNet challenge in 2012,[3] helped to revolutionize the field of computer vision. — Preceding unsigned comment added by Uf11 (talk • contribs) 19:51, 16 November 2018 (UTC)
References
- ^ Rumelhart, David E.; Hinton, Geoffrey E.; Williams, Ronald J. (1986-10-09). "Learning representations by back-propagating errors". Nature. 323 (6088): 533–536. doi:10.1038/323533a0. ISSN 1476-4687.
- ^ Dave Gershgorn (18 June 2018). "The inside story of how AI got good enough to dominate Silicon Valley". Quartz. Retrieved 5 October 2018.
- ^ Krizhevsky, Alex; Sutskever, Ilya; Hinton, Geoffrey E. (2012-12-03). "ImageNet classification with deep convolutional neural networks". Advances in Neural Information Processing Systems. 25. Curran Associates Inc.: 1097–1105.
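For context on what is actually being attributed in the passage above, here is a minimal sketch of what reverse-mode differentiation (backpropagation) computes when training a small multi-layer network. It is purely illustrative: the two-layer architecture, tanh nonlinearity, squared-error loss, learning rate, and NumPy implementation are assumptions made for the example, not anything taken from the 1986 paper or from Linnainmaa's work.

# Illustrative sketch only: a tiny two-layer network trained with
# manually coded reverse-mode differentiation (backpropagation).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))          # 4 samples, 3 input features
y = rng.normal(size=(4, 1))          # regression targets
W1 = rng.normal(size=(3, 5)) * 0.1   # first-layer weights
W2 = rng.normal(size=(5, 1)) * 0.1   # second-layer weights

for step in range(100):
    # Forward pass: two layers with a tanh nonlinearity in between.
    h_pre = x @ W1
    h = np.tanh(h_pre)
    y_hat = h @ W2
    loss = np.mean((y_hat - y) ** 2)

    # Backward pass: apply the chain rule from the loss back to each
    # weight matrix, in reverse order (outputs first, inputs last).
    d_y_hat = 2 * (y_hat - y) / y.shape[0]
    d_W2 = h.T @ d_y_hat
    d_h = d_y_hat @ W2.T
    d_h_pre = d_h * (1 - np.tanh(h_pre) ** 2)   # derivative of tanh
    d_W1 = x.T @ d_h_pre

    # Gradient-descent update using the computed gradients.
    W1 -= 0.1 * d_W1
    W2 -= 0.1 * d_W2

print(f"final loss: {loss:.4f}")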
Extra section for high-profile case of plagiarism in the backpropagation paper?
Hinton's backpropagation paper[1] (he was the second of three authors) did not mention Seppo Linnainmaa, the inventor of the method. This is actually Hinton's most highly cited paper, together with the Krizhevsky paper.[2] At the moment, the article mentions this high-profile case of plagiarism only in passing, although it probably deserves an extra section.
Uf11 (talk) — Preceding undated comment added 18:45, 14 December 2018 (UTC)
- Do you have a source that suggests either that Seppo Linnainmaa is the inventor of backpropagation or that any part of this paper was plagiarized? The backpropagation article suggests that there is no single clear inventor, i.e., that the idea has been rediscovered and refined by many researchers. This is the story presented in a journal article[1] by neural network researcher and credit assignment enthusiast Jürgen Schmidhuber. In the absence of a source suggesting otherwise, it seems plausible that the failure to credit previous work reflects multiple discovery rather than plagiarism. 72.140.47.98 (talk) 00:39, 11 March 2019 (UTC)
- We cannot look into the minds of people. But the facts are as follows. In an interview of 2018,[2] Hinton said that "David E. Rumelhart came up with the basic idea of backpropagation, so it's his invention." Your well-known reference[1] of 2015 (as well as many other references) shows that backpropagation was invented much earlier. You deleted these important facts. I undid that. In fact, Hinton himself says in the same interview: "I have seen things in the press that say that I invented backpropagation, and that is completely wrong." Uf11 (talk) 15:44, 14 April 2019 (UTC)
- I've attempted to make the article a little more unbiased, by noting in the introduction that the Rumelhart paper was not the first paper to propose backpropagation, and by moving the quote to the section that is actually about backprop. (It's a little weird to focus so much on backprop in the introduction, especially given that Hinton was a middle author on the paper and does not take credit for it himself.) If you do not agree with this, please discuss why here before changing it. 72.140.133.233 (talk) 14:35, 19 April 2019 (UTC)
Strange table in the article
Is there a reason for almost an entire section's worth of screen space to be devoted to some kind of strange table of Hinton's quotes during a TV interview, paired with interpretation and rephrasing of his (fairly straightforward) words? jp×g 19:06, 8 April 2023 (UTC)
- Came here just to see if my view on this interpretation table was already supported. It will be reduced to a proper paragraph and the table removed. JSory (talk) 15:21, 10 April 2023 (UTC)
- I'm no Wikipedia expert, but the right-side column approaches original research (or some odd attempt at translating quotes into some manifesto-style language). Very odd table indeed. Douglaswyatt (talk) 20:52, 12 April 2023 (UTC)
- Very odd and certainly unencyclopedic. I've been bold and removed it. Maybe someone can reword it into something that makes sense in an actual encyclopedia article although I doubt that's possible without it qualifying as OR. --Cyllel (talk) 14:27, 1 May 2023 (UTC)
- To the creator of the table, if you want more context for why this isn't appropriate, see Wikipedia:Manual_of_Style/Tables#Prose. It's something really more suited to a (certain kind of) blog post than an encyclopedia article. I see no pressing reason that the information from the interview can't be communicated in conventional prose; the ideas are not at all complex or difficult to handle in a way that might justify a novel presentation like a table. --Cyllel (talk) 14:34, 1 May 2023 (UTC)
Geoffrey Hinton is not the first person to receive both the Turing Award and a Nobel Prize
Herb Simon received the Turing Award in 1975[3] and the Nobel Prize in Economics in 1978.[4]
- ^ a b Schmidhuber, Jürgen (2015). "Deep learning in neural networks: An overview". Neural Networks. 61: 85–117. arXiv:1404.7828. doi:10.1016/j.neunet.2014.09.003. PMID 25462637.
- ^ Ford, Martin (2018). Architects of Intelligence: The truth about AI from the people building it. Packt Publishing. ISBN 978-1-78913-151-2.
- ^ https://amturing.acm.org/award_winners/simon_1031467.cfm
- ^ https://www.nobelprize.org/prizes/economic-sciences/1978/simon/facts/
2601:547:F80:732C:8C0D:F3D6:5F34:8C3B (talk) 14:11, 8 October 2024 (UTC)
- You are totally right. Fixed.--ReyHahn (talk) 14:23, 8 October 2024 (UTC)
First computer scientist to win a Nobel Prize?
If this is the case, I think it should be mentioned in the entry. Mistico Dois (talk) 01:08, 9 October 2024 (UTC)
- It is not the case.--ReyHahn (talk) 06:58, 9 October 2024 (UTC)
- I think Herbert A. Simon would qualify, then. This would make him the second. Were there any others? Mistico Dois (talk) 13:42, 9 October 2024 (UTC)
- It is the first Turing+PHYSICS Nobel if you want to highlight that.--ReyHahn (talk) 13:47, 9 October 2024 (UTC)
- I know that. As for other computer scientists winning a Nobel, I could point out that Demis Hassabis also won this year, for Chemistry. Mistico Dois (talk) 13:50, 9 October 2024 (UTC)