Talk:The Emperor's New Mind
This article is rated Start-class on Wikipedia's content assessment scale.
Untitled
For me, the current article misses the mark. The Emperor's New Mind does contain much interesting background material on computation, physics, mathematics, and other topics, but all this background material is simply to prepare the reader to understand Penrose's main argument. The main argument boils down to this: the human brain may exploit certain quantum mechanical phenomena, key to intelligence and/or consciousness, that effectively make the brain's activity uncomputable, and hence beyond the reach of Turing machines/classical computers. To allow for this, Penrose suggests that current models of quantum physics are flawed, and hints at how they might be modified.
Although Penrose's expertise and authority on physics are undisputed, many have found the ideas suggested in The Emperor's New Mind unconvincing and unnecessary, though admittedly plausible. Furthermore, even if Penrose turned out to be right, there is no reason why quantum computers would not be able to exploit the same quantum phenomena that the brain does, and thus become just as intelligent as humans. Thus, The Emperor's New Mind is really an argument against strong AI in classical computers, not against strong AI in artificially created systems.
Regarding the follow-up book Shadows of the Mind: in chapter 2 of that book, Penrose presents an argument that appears to prove that the reader has some insight that a computer could not have. However, there is a subtle mistake in his argument, and I vaguely remember something written by Hofstadter where he succinctly points out the mistake. I can't find a reference to what Hofstadter wrote; however, I think there have been other reviewers of Shadows of the Mind. A quick web search turned up a detailed review by David J. Chalmers at [1] MichaelMcGuffin 12:49, 3 Jul 2004 (UTC)
This article doesn't just miss the mark, it is on another planet. Most of the references are to do with the first couple of chapters and are largely irrelevant to the theme of the book. The guts of the book must either not have been read by the author of this article, or largely misunderstood. In reference to the discussion that quantum computers could exploit quantum phenomena, as the brain is claimed to: quantum computers are purely computational and deterministic, and have no net gain over classical computers other than raw speed. They simply serve to assist with the practical complexity of computation, but achieve no greater abilities in principle. The quantum nature of quantum computers is not exploiting the same concept as the quantum nature which Penrose suggests may explain human reasoning.
I seem to recall both emotions and Gödel's incompleteness theorem as well
Yeah, I had similar issues with Penrose's book. I felt that some of the chapters were completely unnecessary (he didn't really connect his famous tiles to the topic at hand), and some chapters were in subjects that were outside his expertise (the chapters on biology were unenlightening).
He also summarized his arguments at the end with 'ask a computer how it feels', which is a pretty inane argument. Don't claim to be making a scientific argument and then get philosophical!
All in all, I didn't find the arguments to be very strong. I guess it's a hard position to take, though. If you state that it is impossible for a machine to be intelligent because it cannot perform x, you then have to define x. Then someone will build a machine that specifically does x. Like playing chess.
Anyway...just my two cents
-t. [note:--just noticed I didn't sign this back when i wrote it. sorry, i was probably still learning the ropes. in any case, this is tristanreid, the same user as below]
- Penrose does make it fairly clear that if you make a machine that achieves x, you can then apply that same argument to the new machine, so that it cannot do, say, x2. If you then make a machine that achieves x and x2, then the same argument again shows it cannot achieve, say, x3. So there is no machine that can ever encompass all of the possible non-computable truths, because for every machine, there is an x which it cannot do. It is the existence of an x for every single sufficiently complex machine that forbids a sufficiently complex machine from achieving all x's. Remy B 13:54, 4 December 2005 (UTC)
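A rough schematic of the escalation described above, in informal notation (the indexing and symbols are illustrative only, not notation from the book): for each machine M_k, diagonalization yields a truth x_k that M_k cannot establish, and absorbing that truth only restarts the process:
$$\forall k \;\exists\, x_k:\ M_k \nvdash x_k, \qquad M_{k+1} := M_k + x_k \ \Longrightarrow\ \exists\, x_{k+1}:\ M_{k+1} \nvdash x_{k+1},$$
so no single machine in the sequence settles every x_k, even though each individual x_k is handled by some later machine.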
- Hi Remy. If I understand your point correctly (forgive me if wrong), it is that it is not necessary to specifically name the tasks which a particular machine cannot perform, as it can be shown that any particular machine will have at least one such task that it is unable to do, and therefore no machine could be sufficiently complex to perform all unperformable tasks. If I've gotten that, I think it's a good point! I still don't see, though, that it prevents the same argument from being made about human intelligence. For any given task that is associated with intelligence, I think there exists a human who can't perform that task. Reading, mathematical/logical reasoning, chess playing, etc. I think it's interesting to read about people who have sustained brain damage, and must relearn how to function without a formerly vital part of their mental capabilities. One more thing, in regard to a previous poster's comments about quantum computers. I think that it's probably accurate to say that quantum computers have no advantage over 'classical' computers, if we're debating whether a computer can perform the computations necessary to be classified as intelligent. On the other hand, I think there's something missing from the argument (not yours, the argument in general). I think too much of the AI debate focuses on Turing machines and computability. Has it ever been proved that humans really have some ability that transcends this? Penrose hints that it may be so because of some quantum link, but doesn't really explain what's so special about that link; he just takes it for granted that we definitely have something that computers never could. Tristanreid 19:45, 17 December 2005 (UTC)
- I think you are close to my point, but it's not quite what I meant. Penrose was not only stating that there is a truth for every machine that that machine cannot prove, but also that each of those truths is one that a human CAN prove. This means that for every machine there is some truth accessible to humans that is not accessible to that machine. Penrose uses this as his basis to state that human beings must be achieving some kind of non-computable process in accessing truths. The reason the AI debate focuses on Turing machines and computability is because AI only deals with computable processes, and Turing machines can achieve any computable process. AI has no capability to deal with non-computable processes because there are currently no known physical processes in the Universe that achieve this. Penrose suggests that further research into quantum mechanics may bring to light the physical processes that the human brain uses to achieve non-computability, but hasn't said that it has to be that in particular. However, he does assert that the fact that humans achieve non-computability means that ultimately there must be some form of non-computability in any complete physics model of the Universe, since the human brain follows the laws of physics. Remy B 12:20, 18 December 2005 (UTC)
- The reason I think the AI discussion focuses too much on computability is not because I didn't understand what the argument was; I just think we've reached the limit of how much these arguments can prove about AI, and I don't know that any new insights are being gained. Wouldn't it be more interesting to try to isolate the things that make us intelligent that computers currently can't do, and use those insights to gain self-knowledge and to enhance computers as a tool? Aside from that (back to computability), I've never seen a proof that humans achieve non-computability, only assertions that humans are definitely not sophisticated Turing machines. But let's say that humans CAN achieve non-computability. If humans have access to some method of non-computability that follows the laws of physics, why could this method never be used to create a computer that doesn't deal only with computable processes? Thanks in advance for any insight you can share with me, I enjoy this type of discussion immensely. Tristanreid 19:32, 18 December 2005 (UTC)
- Penrose draws the conclusion that humans achieve non-computability based on the assertion that humans are not representable as Turing machines. This is because Turing machines can prove all computable truths (i.e. do anything computable), and if there is a truth that humans can prove but that a given Turing machine cannot, that truth must be attained using a non-computable process. Considering your other point, you are right that if we could find and master the non-computable physics that the human mind uses, we could indeed build a machine that also reaches non-computable conclusions. The debate exists because AI proponents state that all human reasoning is computable, and that this new physics is not necessary. By this definition any man-made machine that uses non-computable processes would not be considered AI. Remy B 10:51, 19 December 2005 (UTC)
- There is still no proof that humans can prove any truth that a Turing machine cannot, just an assertion. A conclusion can't be based solely on an assertion. As to the last point, when you say "AI proponents think this", you're not accurate. I could just as easily say that "AI opponents think that humans could never build something as smart as themselves, because of Gödel's Incompleteness Theorem", which I've actually heard someone say. It's a strawman argument, ultimately. I'm an AI proponent, and I believe that if cognitive science discovered that there was some aspect of human thought that could only be achieved by using a certain type of physics, we could build a machine using that type of physics. Any intelligence created by man is 'artificial', regardless of what area of physics is used in the underlying process. Why would that area become 'out of bounds'? Further, if physics is used to compute something 'non-computable', hasn't it become computable? Tristanreid 15:34, 19 December 2005 (UTC)
- Penrose doesn't say that humans can prove ANY truth that a Turing machine cannot, but he certainly believes he has proven that humans can prove SOME truths that Turing machines cannot. He shows this in a much more rigorous and convincing manner in his follow-up book 'Shadows of the Mind'. Penrose uses a variation on the diagonal slash with respect to considering the human intellect as an algorithm (i.e. anything computable), and then demonstrates a contradiction to show that the assumption that the human intellect can be represented as an algorithm must be false. As for the statement you have heard about humans never being able to build something as smart as themselves due to Gödel's IT, that only applies to computable machines, because Gödel's IT only applies to formal systems, which are by definition computable. You cannot use Gödel's IT to show that humans cannot build a machine that uses non-computable processes, because that is out of the domain of formal systems. On the next point, if you say you are an AI proponent but would allow AI to include non-computable processes, then I guess we just have a difference of definition. My general reading of the AI community is that they follow the stricter definition that I also use, which is that AI only encompasses computable processes, but that's only semantics and doesn't really change anything. On your last point, Penrose does mention (I think in Shadows of the Mind) that human access to non-computable physics cannot be as simple as considering that physics to be an Oracle machine, because then it does indeed become computable. My interpretation of what he said on that point is that he doesn't have a good answer to your concern, but that he believes it is inevitable that the concern will be answered, because he has proven that humans are doing *something* non-computable, even if we can't yet define exactly what that is or how it makes sense. I think the philosophy of a non-computable mind is in its very early infancy, and there is a long way to go before we can get and comprehend all of the answers. Remy B 06:31, 20 December 2005 (UTC)
- Sorry, I should have typed that more clearly. When I said ANY truth, I meant "ANY AT ALL". In other words, I still don't see that Penrose has shown that humans are doing something non-computable. If you know the substance of Penrose's argument against algorithmic representation of human intellect, I'd love to see it; I read SOTM when it first came out, but don't remember the details very well. I do remember him (and others) using Cantor's diagonal slash to demonstrate the halting problem, as an example of noncomputable problems, but I don't remember an example or proof that human intellect is different, beyond his argument that a computer could never be creative. To stay focussed in this discussion, I'll concede what you said about limiting the AI discussion to computability in general. That's what my original point was about: I've always felt that the AI discussion should be expanded to include any physically deterministic process that could conceivably be used to create an artificial intelligence, but I also think there is a lot of value in trying to figure out how human intelligence works. Tristanreid 17:01, 20 December 2005 (UTC)
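For anyone reading along who wants the diagonal-slash/halting-problem connection mentioned above made concrete, here is a minimal Python sketch. The function names are hypothetical and the code is not taken from either book; it only illustrates why a universal halting decider cannot exist.

# Sketch of the halting-problem contradiction via self-reference (the diagonal idea).
# Assume, for the sake of contradiction, that a total, always-correct decider exists:

def halts(program, data):
    """Hypothetical: returns True iff program(data) eventually halts."""
    raise NotImplementedError("no such total decider can exist")

def diagonal(program):
    # Do the opposite of whatever halts() predicts about the program run on itself.
    if halts(program, program):
        while True:   # loop forever if halts() says it would halt
            pass
    else:
        return        # halt if halts() says it would loop

# Now ask what diagonal(diagonal) does:
#   if halts(diagonal, diagonal) is True, diagonal(diagonal) loops forever;
#   if it is False, diagonal(diagonal) halts.
# Either way halts() is wrong about at least one input, so it cannot exist.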
- I would like to have a go at my own wording of what I consider the convincing reasoning that Penrose made for the non-computability of human intellect. Maybe that would justify its own Wikipedia article if it was written as an NPOV article rather than an essay? I'll definitely leave a note on your user talk page if I do get around to doing that. Remy B 17:32, 20 December 2005 (UTC)
US or Brit spelling?
Since this article is about a British writer, should it use British English, hence "modelled" rather than "modeled"? I'm not so fussy as to actually go ahead and make the change and tread on anyone's toes, just interested how the language policy is generally applied in this kind of situation. — PhilHibbs | talk 16:15, 24 October 2007 (UTC)
WikiProject class rating
This article was automatically assessed because at least one WikiProject had rated the article as start, and the rating on other projects was brought up to start class. BetacommandBot 04:29, 10 November 2007 (UTC)
We can learn much from TENM, but not about the human intellect.
Penrose deserves more fame than he has enjoyed. TENM is a fine ramble through his fine mind, and the book is a smorgasbord for the intellect. In particular, chapter 6 of TENM is a rare but worthy attempt by a first-rate mind to explain quantum theory to educated lay readers. Penrose likes cooking up smorgasbords for intelligent nonspecialists, and his next effort of this nature was the monumental The Road to Reality. Once again, a wonderful ramble through a great mind, but yet again a book that failed to deliver on its promise because of the author's lack of passion for particle physics and quantum field theory. They simply aren't his strongest subjects. Penrose is no expert on theoretical computer science and the theory of algorithms either. He most definitely is not a neurologist. The central argument of TENM was anticipated in important respects by John Lucas (philosopher) in 1961; see here for a recent discussion by Lucas. 123.255.31.114 (talk) 05:38, 7 June 2009 (UTC)
"The book's thesis is considered erroneous by experts in the fields of philosophy, computer science, and robotics."
Wooooah, there. That's a massive accusation to add, unsourced, and without any discussion. There needs to be a source for this statement, not to mention an opposing view. It seems unlikely the guy would win an award for a book no one thinks is right. I'm deleting it unless someone comes up with a pretty good source. Joker1189 (talk) 20:43, 27 July 2010 (UTC)
- The source was provided with the edit: L.J.Landau (1997) "Penrose's Philosophical Error" ISBN 3-540-76163-2 http://www.mth.kcl.ac.uk/~llandau/Homepage/Math/penrose.html Spot (talk) 03:08, 31 July 2010 (UTC)
- I think that almost any person with a background in quantum information or theoretical computer science would strongly criticise Penrose's position. Scott Aaronson dedicated one lecture to discussing the topic with CS students. His exposition of the subject is great to read, yet I do not think we need Scott here to realise that Penrose does not correctly present the differences between computability and efficiency, or between quantum computers and brains. And, anyway, we all know that anything computable classically is quantum computable. It would be great if a significant number of references were added to the article, but I am personally against citing just Scott's lecture. Garrapito (talk) 20:51, 17 May 2011 (UTC)
All the time I was reading it, the first of Arthur C. Clarke's Three Laws kept running through my mind: "When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong." JHobson3 (talk) 17:45, 9 June 2015 (UTC)
Reads like a book's jacket liner.
As it stands, there isn't a critique of Penrose's theory in this article. Certainly previous statements to the effect that it has been entirely discredited are out of order. On the other hand, there are powerful thinkers who do disagree quite strenuously with his theory. For the article to have any credibility it should have a section on those criticisms and perhaps the counterarguments. I'm unqualified to provide them, but the article, as it currently stands, can't be considered authoritative.
Raymond Smullyan
Why is Raymond Smullyan in the "see also"? I haven't read any of his philosophy books, but the wiki article "Raymond Smullyan" doesn't mention any connection to the contents of this article. — Preceding unsigned comment added by 2.240.204.182 (talk) 19:04, 3 February 2016 (UTC)