Talk:Technological singularity/Archive 7
This is an archive of past discussions about Technological singularity. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Lede
The article lede goes far deeper into the substance and discussion of Technological Singularity than WP guidelines for article ledes allow. The lede to an article should be a brief explanation of what the topic is about, leaving the actual "nitty-gritty" details of what it is, detailed implications, et cætera to the main article. The lede should give the reader the gist of the topic, not the first 8 hours of Technological Singularity 101. :P — al-Shimoni (talk) 19:47, 6 February 2011 (UTC)
- agreed. What a terrible lead. Igottheconch (talk) 22:56, 15 February 2011 (UTC)
- Maybe the paragraphs specifically devoted to Vinge and Kurzweil should be cut, with their information integrated into the 'History of the idea' section? Hypnosifl (talk) 00:13, 19 February 2011 (UTC)
- I started editing but was summarily reverted almost immediately, before I got farther with it. I've sandboxed the article and intend to go over it and see if I can make some improvements. BE——Critical__Talk 23:49, 3 March 2011 (UTC)
We are already past the TS
The Industrial Revolution - when machines started making machines - was the TS. We are way past it. 92.15.5.217 (talk) 14:09, 15 March 2011 (UTC)
- Yes, to my understanding we have seen several TSs: each major revolution in our biological and cultural development, each time we turned from one species into another, and the instants at which we entered the agricultural revolution, the iron age, the industrial revolution, etc. The exponential development curve changed exponent at those instants. And we are still waiting for yet another TS caused by the information revolution. Have I misunderstood this? In that case, the article should clarify the concept.
- The article needs a more stringent mathematical definition of TS. My understanding of a TS is a singular point, a knee, or a breakpoint in the economic development curve, or a discontinuity (mathematics) of its derivative. See this illustration: http://www.chicagoboyz.net/blogfiles/2005linearlog.png . (One possible formalization is sketched after this comment.)
- I have noticed some confusion between the omega point and a TS. The term singularity is often used in the context of black holes, meaning infinite gravity, but I don't think that is an appropriate analogy in this case. Or is it? Mange01 (talk) 20:39, 30 March 2011 (UTC)
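- (A minimal sketch of the "breakpoint" reading requested above; this is an illustrative assumption, not taken from any cited source. The idea: growth stays exponential throughout, but the time constant abruptly shortens at some time t_0, so the log-scale curve has a kink and its logarithmic slope a jump.)

```latex
% Illustrative formalization of the "knee/breakpoint" reading above
% (an assumption for illustration, not from the cited sources).
\[
x(t) =
\begin{cases}
x_0\, e^{t/\tau_1}, & t < t_0 \\[4pt]
x(t_0)\, e^{(t - t_0)/\tau_2}, & t \ge t_0
\end{cases}
\qquad \tau_2 < \tau_1 .
\]
% x(t) is continuous at t_0, but the logarithmic slope jumps from
% 1/\tau_1 to 1/\tau_2: a discontinuity in the derivative of ln x,
% i.e. the "knee" visible on a log-linear plot like the one linked above.
```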
- The singularity is not "when machines started making machines", it's when the intelligence of machines exceeds that of humans, and when intelligent machines start creating even more intelligent machines, leading to an "intelligence explosion". Also, in answer to Mange01: the word "singularity" was used by Vinge only as a nontechnical analogy to black hole singularities; no mathematical quantity is literally expected to become singular at the technological singularity. Hypnosifl (talk) 22:34, 6 April 2011 (UTC)
- Citation needed. :) 18:30, 7 April 2011 (UTC)
- Are you responding to my comment? The opening section makes clear that the singularity concept is defined in terms of superhuman intelligence, and the next section mentions that Vinge chose the name in analogy with black hole singularities. Hypnosifl (talk) 19:38, 7 April 2011 (UTC)
- The Industrial Revolution was more about human limbs making machine limbs (for example, a steam shovel). An artificial limb, no matter how much more powerful than a human limb, cannot behave or create autonomously. It is a mere appendage of the human brain that creates it. The technological singularity, if it occurs, will be about human brains making machine brains, which are not mere extensions of the human brain but possible replacements and competitors to it. However, the Industrial Revolution illustrates how technological evolution has proceeded several orders of magnitude faster than biological evolution. Humans themselves have about the same sensory, cognitive, and motor capacity that they have had for tens of thousands of years, whereas in just a few centuries machines have undergone astounding improvement. --Teratornis (talk) 18:15, 7 April 2011 (UTC)
Definition of singularity still needed
Anyway, I still don't understand the article because it does not explain the word singularity in this context, and it does not link to any of the singularity-related WP articles. One of you implied that this is not about math, but I don't believe you; of course the authors of the books on this topic have some mathematical background and have chosen this word because of that. Singularity is a strictly mathematical term with several alternative meanings. I have seen sources on the internet simply implying that "technological singularity" is a "break point" in the exponential development - a sudden change to a shorter time constant or "knowledge doubling time". This has happened several times in history, for example when we became homo, when we became homo sapiens, when we learned how to read, etc. What do the main sources say about this? Mange01 (talk) 23:05, 7 June 2011 (UTC)
- I added the definition to the lead. Vinge coined the term to refer to the emergence of greater-than-human intelligence. There are other definitions, but all of them revolve around the scenario where greater-than-human intelligence emerges and triggers an intelligence explosion. The term singularity was taken from physics, treating the failure of current physical models to describe the infinite density at the center of a black hole (a singularity) as a metaphor for how conventional human understandings of the future break down when greater-than-human minds begin playing a role in that future. Abyssal (talk) 03:48, 8 June 2011 (UTC)
Singularity time line: First Vernor Vinge, then Hans Moravec, later Kurzweil and others
The current article is written as if Kurzweil's contribution to the concept of a technological singularity somehow was on the same level as Vinge's, although Kurzweil entered the game much later, long after the pioneers. This should be corrected.
Obviously Vernor Vinge introduced the concept. He wrote about the singularity and exponentially accelerating change not only for an academic audience[1] (crediting earlier related thoughts by Stanislaw Ulam[2] and I. J. Good[3]), but also popularized the concept in SF novels such as Marooned in Realtime (1986) and A Fire Upon the Deep (1992).
Another problem with the present article is that it seems to suggest that it was Kurzweil who extended Moore's law in his essay "The Law of Accelerating Returns" (2001). But it was actually computer scientist and futurist Hans Moravec who extended Moore's law in his book Robot: Mere Machine to Transcendent Mind (1998) to include technologies from far before the integrated circuit, also plotting the exponentially increasing computational power of animal brains in evolutionary history. Sir Arthur C. Clarke wrote about this book: "Robot is the most awesome work of controlled imagination I have ever encountered: Hans Moravec stretched my mind until it hit the stops."[4]. Kurzweil's essay came three years later, and was old hat by the time it was published. It mentions Moravec, but fails to make clear that Moravec did essentially the same thing much earlier. No wonder that the predictions of human-level AI by Moravec and Kurzweil were similar: 2030-2040 (Moravec) vs 2045 (Kurzweil).
- ^ Vinge, Vernor (1993). "The Coming Technological Singularity: How to Survive in the Post-Human Era"
- ^ Ulam, S., "Tribute to John von Neumann", Bulletin of the American Mathematical Society, vol. 64, no. 3, part 2, May 1958, pp. 1-49.
- ^ Good, I. J. "Speculations Concerning the First Ultraintelligent Machine", Advances in Computers, vol. 6, 1965.
- ^ ISBN 0-19-511630-5: Cover praise for Robot: Mere Machine to Transcendent Mind, by Sir Arthur C. Clarke, 1993
Quiname (talk) 18:47, 22 April 2011 (UTC)
- I think you're missing the point that the first section is not intended to be a "time line" at all, just an introduction to the main concepts of the singularity--that's why the section immediately after it is called "History of the idea"; the history section is meant to be separate from the non-historical introduction to the idea. The first paragraph is meant to introduce the idea that the singularity as presently understood by its most prominent advocates (which certainly includes Kurzweil, who's played a big part in spreading awareness of the idea) is closely tied to the idea of superintelligence, and that "the singularity" doesn't just refer to a broader notion of accelerating technological change. So it makes sense for it to talk about multiple current prominent advocates, not just Vinge. I do agree that if Moravec came up with the notion of extending Moore's law this should be mentioned in the part about the idea of "accelerating returns", though I would note that Kurzweil extended this idea even further than just computing technology, to include technologies like gene-sequencing, the resolution of brain-scanning, and others. Hypnosifl (talk) 11:31, 23 April 2011 (UTC)
- Also, it's worth pointing out that all the stuff in the first section was originally in the article's lede (the part that now just says "A technological singularity is a hypothetical event occurring when technological progress becomes so rapid and the growth of artificial intelligence is so great that the future after the singularity becomes qualitatively different and harder to predict"), but people were complaining that the lede was too long (see Talk:Technological_singularity#Lede and Wikipedia:Manual of Style (lead section)), so as a compromise the old lede (which you can see here) was moved to that first section. That might give some perspective on what the point of the first section is...if you agree that a full paragraph on Moravec wouldn't really belong in the lede, then it shouldn't go in the first section either. Hypnosifl (talk) 12:56, 23 April 2011 (UTC)
- Finally, on this comment: Kurzweil's essay came three years later, and was old hat by the time it was published. It mentions Moravec, but fails to make clear that Moravec did essentially the same thing much earlier. No wonder that the predictions of human-level AI by Moravec and Kurzweil were similar: 2030-2040 (Moravec) vs 2045 (Kurzweil). It's actually not true that Kurzweil's essay was the first place he mentioned the law of accelerating returns; as I noted in a recent edit to the article, he originally discussed it in his book The Age of Spiritual Machines, which came out on Jan. 1, 1999 (you can read his discussion of the "law of accelerating returns" on pages 30-35 here, and although it's not shown on google books, the wiki article mentions that he predicted in it that by 2029 machines would routinely pass the Turing test and claim to be conscious and petition for recognition of this)... I couldn't find the exact publication date of Moravec's Robot, but the earliest amazon reviews by people saying they've read it were from November 1998, so I don't think it's likely that Kurzweil cribbed the idea from Moravec. Hypnosifl (talk) 16:53, 23 April 2011 (UTC)
Just a Thought
Let's suppose that at some point, AI could truly simulate a human's thought. AI, then, would possess the qualities of human thinking, specifically decision-making and the ability to weigh both outcomes (the "good" one and the "bad"/evil one). If we explain crime in human behavior as the decision to indulge in criminal behavior after weighing the possibility of receiving punishment against that of escaping punishment, then the existence of superintelligence seems to cause logical issues. If a superintelligence knew that it could elude punishment from both humans and other superintelligent entities, then what would restrain this superintelligent entity from being infinitely criminal (when there exist corresponding opportunities to make gains through such conduct)? 74.195.236.12 (talk) 00:49, 25 May 2011 (UTC)
- Avoiding criminal behavior is not necessarily solely due to fear of punishment. I don't steal because it's contrary to social contract (Golden Rule, etc), because I realize that society could not survive if every person was a thief. I voluntarily sacrifice some rights, and modify some behaviors, to be consistent with a productive, functioning member of a society - precisely because I know I derive benefit from said society. My behavior is not dictated by fear of punishment, but rather from not contributing to the loss of the society that I currently benefit from. — Preceding unsigned comment added by 76.203.229.161 (talk) 03:57, 7 July 2011 (UTC)
The whole topic is a fallacy
What's all this talk about artificial intelligence taking over the world from human beings? Since when does intelligence hold power on Earth? Stupidity has been, and will always be, the most powerful force on Earth, and intelligent beings and institutions have always been subservient to it or only able, at most, to garner some fringe benefits. Enough said.
- Yet we are not ruled by chimps. -- cheers, Michael C. Price talk 17:16, 23 June 2011 (UTC)
Who says human stupidity will not gift power to AI? It's not inconceivable that humanity will place increasing reliance on AI to manage life and eventually hand over the keys to everything.
Updated term
Updated the term to this:
Technological singularity refers to the point in time where machines have become self-educating and are doing it so fast that humans are no longer able to keep up with them.[1] Since the capabilities of such a greater-than-human intelligence are too difficult for an unaided human mind to comprehend, a technological singularity is seen as an intellectual event horizon, beyond which the future becomes (too) difficult to understand or predict.
Also added link to Eureqa.
Can it also be mentioned that the technological singularity (at least in the determining of natural laws) has already been reached? According to Hod Lipson, due to Eureqa, we are already in the post-technological-singularity era.
External links tag needed?
I looked over this section, and it seems well-organized and not excessive. Could the editor who added the tag please specify their objection? Thanks, Pete Tillman (talk) 07:59, 27 November 2011 (UTC)
Bots
Should we be allowing bots to edit this article? Think about it. — Preceding unsigned comment added by M@ (talk • contribs) 14:20, 27 October 2011 (UTC)
- I, for one, welcome our new bot overlords. —WWoods (talk) 03:46, 28 October 2011 (UTC)
- "EXTERMINATE! EX-TER-MIN-ATE!" — Preceding unsigned comment added by 76.67.112.84 (talk) 08:36, 30 November 2011 (UTC)
Is this a religion?
I saw a book describing the Singularity as a new religion:
"The idea that information is alive in its own right is a metaphysical claim made by people who hope to become immortal by being uploaded into a computer someday. It is part of what should be understood as a new religion. That might sound like an extreme claim, but go visit any computer science lab and you’ll find books about "the Singularity," which is the supposed future event when the blessed uploading is to take place. A weird cult in the world of technology has done damage to culture at large."
http://www.amazon.com/You-Are-Not-Gadget-Manifesto/dp/0307389979/ref=pd_sim_b_6
Should this be noted in the article? —Preceding unsigned comment added by 24.99.60.219 (talk) 01:21, 10 May 2011 (UTC)
- Well, as already noted, the singularity has been described as "the Rapture for nerds".
- —WWoods (talk) 20:09, 10 May 2011 (UTC)
- AI heal Thyself
- Urgen (talk) 04:20, 6 June 2011 (UTC)
I think "religion" is the wrong word. The Singularity concept certainly represents a metaphysical perspective and a motivation for computer evolution that goes beyond mere short-term utility and profit. The motivation is driven rather by a much larger and longer term vision of the human condition and the profound transformations that will effect our life on this planet and in the universe. — Preceding unsigned comment added by 69.86.25.123 (talk) 17:46, 23 October 2011 (UTC)
- Religious in the sense that a greater intelligence is being created that can create a greater being itself ... how long would it take until the descended intelligence reaches "Godhood"?
-G — Preceding unsigned comment added by 76.67.112.84 (talk) 08:38, 30 November 2011 (UTC)
Please add this proof that the “singularity” model is bullshit:
Please go to Dr. Albert A. Bartlett's presentation on “Arithmetic, Population, and Energy” (part 5) and jump to 4:48 (I recommend watching all the parts up to that point first, so you can understand the implications) for proper proof of why a singularity can and will never happen, and why the whole thing is Kurzweil’s typical pseudo-scientific nonsense.
— 94.221.17.98 (talk) 23:38, 24 October 2011 (UTC)
- I'm not so sure I'd use the phrase "bull shit" on a Talk Page. (Also, it's correctly two words.) Nevertheless, please feel free to add the material you're referring to into the Article. The Mysterious El Willstro (talk) 05:20, 22 December 2011 (UTC)
- Dr. Albert A. Bartlett's presentation is very interesting, but technological progress and oil consumption are very different matters, so it's definitely not "proper proof of why a singularity can and will never happen". Obruchez (talk) 21:47, 3 February 2012 (UTC)
Repeat links
This article has a ridiculous number of WP:REPEATLINKs. Can somebody help clean it up Bhny (talk) 18:34, 29 October 2011 (UTC)
- In what Sections does the Article link plain English words or self-redirects? Those are what that policy refers to, and so far I don't see any as the Article stands. The Mysterious El Willstro (talk) 05:15, 22 December 2011 (UTC)
Generally, a link should appear only once in an article, but if helpful for readers, links may be repeated in infoboxes, tables, image captions, footnotes, and at the first occurrence after the lead. The article currently has 6 links to Ray Kurzweil, which is 5 too many. (ok I've fixed that one now, but there are more like that) Bhny (talk) 23:31, 3 February 2012 (UTC)
Clearly wrong
"In The Progress of Computing, William Nordhaus argued that, prior to 1940, computers followed the much slower growth of a traditional industrial economy, thus rejecting extrapolations of Moore's law to 19th-century computers."
The only computers that existed before 1941 were natural ones. (All biological organisms are technically computers, they just happen to be natural computers rather than man-made ones. They take information from the environment, make calculations, and return output in behavioral form. All of them process information chemically, and animals with nervous systems (including humans ourselves) process electronically as well as chemically. Bear in mind that anything processing information internally with at least one input device (sensory organs or receptor organelles in the case of organisms) and at least one output device (muscles or motor organelles in this case) is thereby defined as a computer.)
At any rate, the very first artificial computer was the ENIAC device, built by the United States Army at the advent of World War II. Its purpose was to calculate firing coordinates for cannons on tanks and ships. So, the reference in the Article to "19th-century computers" is a complete anachronism. The Morse Telegraph and Edison Light Bulb were early electronic devices, but, getting back to the definition of a computer, they did not conduct calculations (the Morse Telegraph transmitted typed dots and dashes purely at face value, without a conversion algorithm or anything like that), and the Edison Light Bulb had no input or output devices.
In short, the source by Nordhaus needs to be reanalyzed to correct and clarify the above-quoted part of the Article! The Mysterious El Willstro (talk) 05:43, 22 December 2011 (UTC)
- "Since the capabilities of such intelligence would be difficult for an unaided human mind to comprehend, the occurrence of a technological singularity is seen as an intellectual event horizon, beyond which events cannot be predicted or understood." Is this correct? Advanced capabilities are not necessarity incomprehensible. Nor does singularity mean that events thereafter cannot be understood - those are two different concepts, surely.203.184.41.226 (talk) 05:02, 31 July 2012 (UTC)
- There actually is a mathematical basis for that, a named theorem of some kind that explains why it exceeds human bounds; unfortunately, I don't know the name! :o But the reasoning goes like this: Human beings have an upper limit on the amount of information they can process; new processing capability beyond this point does not exist. That is, when measuring how much "information per second" a brain can process, there comes a point where having a higher ips rate is not possible. This is like an information consumption rate. A super-intelligence would be able to process faster than the human ips rate. A good analogy is here on wikipedia itself: the page edits being created per minute are faster and more numerous than you can review, even if you were reading 24x7x365 at the fastest human reading rate. No human can possibly comprehend reading at a higher rate than that. (Wikipedia is the Super-Intelligence.) "...beyond which events cannot be predicted or understood": This is where the speed part comes in. If the human brain does not process a certain amount of information, there is no way predictions or understanding of this unknown information can occur. Some super-intelligence might be able to process ("understand") the wikipedia article changes faster than the change rate. // Mark Renier (talk) 10:47, 31 July 2012 (UTC)
PcGnome (talk) 18:06, 21 August 2012 (UTC)
- While your contention may be true, I can see going beyond what you seem to assume are real limits. In your IPS setup - if we assume information in some sort of packet form - without increasing IPS, one might design a "super packet" where the same bit size would actually represent much larger chunks of information in the same processing slot.
- The example that applies to reading speed: I'd say the natural limits are on how fast letters, words & other representations can be processed. But if the information form is made more efficient, one can read much more than the raw reading rate would suggest. There would have to be support systems to make use of higher-content symbols. As it is, character, word and symbol are completely arbitrary. Suppose a language could be devised whereby just the shape of the information naturally engenders recognition - the ultimate "intuitive language". Training the mind to recognise small variations in squiggles causes natural modification of the pattern the brain absorbs. (A rough numeric sketch of the rate comparison follows below.)
- pcG
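- (A back-of-the-envelope sketch of the rate comparison in the two comments above; every number below is an illustrative assumption, not a measurement.)

```python
# Rough sketch of the "information per second" comparison above.
# All numbers are illustrative assumptions, not measurements.

EDITS_PER_MINUTE = 100   # assumed global Wikipedia edit rate
WORDS_PER_EDIT = 50      # assumed average size of a reviewable edit
READING_WPM = 300        # roughly a fast sustained human reading rate

incoming_wpm = EDITS_PER_MINUTE * WORDS_PER_EDIT  # words arriving per minute
shortfall = incoming_wpm / READING_WPM            # how far one reader falls behind

print(f"Incoming: {incoming_wpm} words/min; one reader manages {READING_WPM} words/min.")
print(f"Even reading 24x7, a single mind falls behind by about {shortfall:.0f}x;")
print("only a faster processor, or a denser encoding ('super packets'), keeps up.")
```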
Added Michael Crichton
Michael Crichton's science fiction book Prey addresses the issue of rapidly self improving artificial intelligence causing unintended problems, so I included it in the list of authors who have written such stories. — Preceding unsigned comment added by Eworrall (talk • contribs) 19:38, 31 January 2012 (UTC)
Mathematical definition still missing
The term Singularity is borrowed from math and/or physics. The article should provide a more stringent mathematical definition of TS, with a link to an appropriate article. Some sources seem to define it as a singular point (an infinite value) in the second derivative of the economic and/or cultural development, corresponding to a discontinuity (mathematics) in the derivative, corresponding to a knee or a breakpoint in the development curves, i.e. a sudden change of the exponent in the exponential development. See this illustration: http://www.chicagoboyz.net/blogfiles/2005linearlog.png . Such SPs have occurred after several revolutions in history, for example after we became homo sapiens, and after the agricultural and industrial revolutions. Can we show real economic statistics for this? Some expected the IT revolution to also result in an SP, but the economic growth of western society is rather slow these days compared to before the IT revolution. Why?
Other authors seem to associate it with a singularity in physics, such as a black hole, and mean that the development (asymptotically or faster?) goes towards infinity. But it always does anyway, as long as the growth is positive. I don't get that view - it must be further clarified. (A sketch of the distinction follows below.)
The article should further clarify the difference between the omega point and a TS. Mange01 (talk) 20:39, 30 March 2011 (UTC)
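- (A sketch addressing the question above, assuming nothing from the cited sources: a positive growth rate alone never produces a singularity; the growth law matters. An exponential is finite at every finite time, whereas hyperbolic growth diverges at a finite time, which is the only reading under which "singularity" is literal.)

```latex
% Exponential growth is finite at every finite time:
\[
x(t) = x_0\, e^{t/\tau} < \infty \quad \text{for all finite } t.
\]
% Hyperbolic growth, where the growth rate itself accelerates as
% dx/dt = k x^2, diverges at a finite time t_s -- a true singularity:
\[
x(t) = \frac{x_0}{1 - k x_0 t} = \frac{1/k}{t_s - t},
\qquad t_s = \frac{1}{k x_0}.
\]
```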
- I'd like to posit that a singularity is when a system ceases to behave within the bounds of said system's limits. Kind of a tipping point, although that might not be entirely what I'm trying to say. Akin to having a 2D graph where the axes are time and number of events: when the line becomes vertical, time stops as events achieve the infinite, in contravention of graphing mechanics. Probably in the real world, we may only achieve the speed of a runaway feedback reaction, the squeal of a microphone or a nuclear reaction. Cataclysmic to the previous order in any case.
- pcG
Yes, the idea is that "ever accelerating progress of technology ... gives the appearance of approaching some essential singularity", meaning that progress jumps to infinity.
The reason there's no mathematical explanation of this in the article is that it's mathematically wrong. There will never be a singularity. There will never be a point at which Moore's Law reaches an infinite number of transistors per unit area. Exponential growth does not reach infinity, it just keeps increasing and increasing over time. — Preceding unsigned comment added by 96.224.64.169 (talk) 02:30, 10 January 2013 (UTC)
- Looks like you overlooked the phrase "gives the appearance". —Tamfang (talk) 09:58, 5 March 2013 (UTC)
- In asking about this issue many times with many editors and poring over whatever free crap Kurzweil internetifies, the only response to this is "there is no math". Kurzweil himself basically says it's called Singularity because that's where the tipping point happens (cusp?) - i.e., it seems like it could be called one.
- The only well-defined data they like to use shows exponential growth -- Moore's Law, etc. -- or sometimes only quadratic growth or a very dubious logarithmic plot. They throw them into an indiscriminate list of graphs with different axes: log-log, log-linear, and linear are all in the same row. Indeed, the "singularity" is to be inferred by sight.
- tldr; there is no mathematical definition nor any claim that there was or ever will be. SamuelRiv (talk) 22:40, 5 March 2013 (UTC)
Economic consequences of AI
Google cars might well make taxi drivers (and other car- and truck-driving jobs) obsolete. So all those people will soon have to find another job. But then, once the automation of very complex tasks like car driving starts to increase at an accelerated rate, it's only logical to think that just about everyone will end up unemployed. Working will not be needed in the sense it is today. One difficulty with this is that most people don't work for fun or for advancing civilization. They work because they need food and shelter. Hence only the people that don't need to work for food and shelter will be able to keep them. If all goes well, people that need to work to live will go extinct.
Anyway, I'd like to see some academics add their own views of how the current economy might adapt well (or not so well) to the rise of complex task automation. — Preceding unsigned comment added by 82.67.178.146 (talk) 07:01, 13 July 2012 (UTC)
- There needs to be a paradigm shift. The problem is that the human being is the most efficient self-harvesting "crop" known. That's why there are so many of us. Seen from below it's about survival, but from above it's simply profiting from the harvest of as many "crops" as can be accommodated. This is why there is so much more wealth in the world than a century ago - but people still have to work to the limit to survive. The essence of crop harvesting is that you spend the minimum on "upkeep", so only what the individual needs is ever provided by the system.
- The real question is - if everything became automated and work became unnecessary - would we have seven billion in luxury, or would the "from the top" folks decide that the majority of the crop is no longer needed once it ceases to be harvestable?
- pcG
History of the Idea - Formatting
Perhaps consider putting the "History of the Idea" section in table format. The "In 1975... In 1983..." format doesn't look right to me. I think something a little more aesthetic would be better to convey chronological information. — Preceding unsigned comment added by 214.27.56.177 (talk) 03:36, 30 March 2012 (UTC)
- Table, no. Perhaps as a list:
- In 1958 Ulam wrote this:
- In 1966 Good said that:
- In 1983 Vinge formally said this other thing:
The problem, of course, is that Wikipedia hates lists, despite WP:LIST. Some dork will certainly happen by and slap on a template suggesting that it go back into paragraph story form. SBHarris 22:23, 21 August 2012 (UTC)
Singularity is not Superintelligence
Contrary to the introduction, the technological singularity is not the theoretical emergence of superintelligence through technological means. Superintelligence, in the form of collective intelligence, already exists through technological means. A single person does not design and build a modern cpu chip. The singularity theory postulates an explosion in the computational capacity available to the human race. The theory argues for an explosion of our collective superintelligence. It assumes some sort of functional equivalence between transistors and neurons and assumes that the amount of transistor-based computing power we will manufacture will vastly exceed the amount of neuronal capacity we will breed.
Contrary to the Speed Improvements section, it is not difficult to directly compare silicon-based hardware with neurons. Numerous people have proposed conversion factors between the two forms of computation. Hans Moravec uses a particularly elegant argument to conclude that the human brain is approximately equivalent to 100M MIPS and 100M megabytes.
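For scale, the quoted figures unpack as follows (plain unit conversion, not an endorsement of the estimate):

```latex
% Unit conversion of the Moravec figures quoted above:
\[
10^{8}\ \text{MIPS} \times 10^{6}\ \tfrac{\text{instructions/s}}{\text{MIPS}}
  = 10^{14}\ \text{instructions per second},
\]
\[
10^{8}\ \text{megabytes} \times 10^{6}\ \tfrac{\text{bytes}}{\text{megabyte}}
  = 10^{14}\ \text{bytes of storage}.
\]
```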
Fermi paradox
--Paul Williams (talk) 09:30, 18 September 2013 (UTC)
Hello.
Two questions here (i) and (ii)
The Fermi paradox can be a criticism of the technological singularity hypothesis. It appears in the "see also" list, but is omitted from the list of Criticisms.
The missing criticism may be found on a page already referenced from the article:
hanson.gmu.edu/vi.html
- "...the superintelligent entity resulting from the singularity would start a spherical colonization wave that would propagate into space..."
Vinge's defence is here
hanson.gmu.edu/vr.html
- "Alas, self-destruction appears to be quite a plausible scenario to me. Many people look at the 1990s as the dawn of a more peaceful age, but in retrospect it may be a short pause ... before weapons of mass destruction become very widely available. Intelligence as a self-lethal adaptation would explain Fermi's Paradox better than the Singularity".
(i) Should the above quotations be inserted to justify the Fermi reference?
If so, I would prefer to leave the write-up to more experienced editors. Could anyone please comment on this talk item and maybe do the edit on the main page.
(ii) There is only one reference to the Fermi paradox, which appears in Archive 4. The editor - Jon Cohen (the scenariste? Minority Report!) - suggests the paradox, not as a criticism of the singularity, but suggests the singularity as a solution to the paradox. His inference would seem to be that, instead of causing a spherical colonisation, the singularity would be an implosion, analogous to an (astrophysical) black hole. He refers to a personal website. Is this usable as a Wiki reference? It would make for a balanced for-and-against use of the Fermi paradox.
I have further comments, but prefer to solve the above issues before presenting them.
Paul —Preceding undated comment added 15:47, 18 September 2013 (UTC)
A tangential concept
This version of "Singularity" is the closest I could find.
It comes close when it talks of the result of computer networks being the nexus of the idea.
But I think it might help to think of humans being networked rather than the machine.
What I considered was that there is already an example of a near-perfect networked group of individuals: multi-cellular organisms in general, and the human body specifically.
Fifty trillion cells perfectly networked within each body: every cell gets the communication and resources that it needs, and dutifully performs its expected tasks.
The internet isn't the network - it's the sum of all who are connected with it.
Human society has progressed from only the few at the top having a voice, with everybody else listening, to a state where everyone gets a voice. The next step is learning how to properly listen to all voices, and then the human network will be truly off and running.
The Singularity in this case is akin to what happened when cells began to network - they had no clue, or even any ability to understand, the consciousness that would result.
There's no telling what this sort of singularity will result in. But it would seem that life itself is anti-entropic, that simplicity produces complexity, and there's no telling how large the effort will become. Seven thousand worlds of over seven billion people each would mimic the fifty trillion cells we now have, on a truly grand scale. Galactic consciousness might be as understandable to us as your mind is to one of your liver cells.
We will build it - and who knows what will come. Sure, we may become cyborgs in the process, but that doesn't seem essential to me.
pcG
PcGnome (talk) 17:59, 21 August 2012 (UTC) PcGnome (talk) 22:22, 21 August 2012 (UTC) PcGnome (talk) 23:45, 21 August 2012 (UTC)
- Just a quick note. Life is actually speeding up the decay of the Universe into entropy. For each amount of "order" created by life, a much bigger amount of "chaos" is created. If you're measuring a decrease in total entropy, you aren't measuring the whole system. --TiagoTiago (talk) 14:38, 28 December 2013 (UTC)
Anyone explain this?
"For example, if the speed of thought could be increased a million-fold, a subjective year would pass in 30 physical seconds."
If my mind can add 2+2 once a second, and another mind can do it a trillion times in a second, by what logic is time sped up relative to the super-intelligence? By every logic I can think of, if I can do more within a given time, the reference frame seems LONGER, not shorter. Increasing thought a million-fold should decrease the passage of time relative to the enjoyment or hate of what is happening, but generally would increase the time span of a given frame. Having the ability to "see" more time (consciousness of sub-second time as greatly usable) would allow a second to become a million seconds and allow the "work" done within that second to be vastly greater than that of the slower consciousness. Therefore I believe saying time will become shorter for a super-intelligence is the exact opposite of what such an entity would experience.
Input??? — Preceding unsigned comment added by 174.96.23.10 (talk) 21:05, 9 May 2014 (UTC)
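- The arithmetic behind the quoted sentence is a plain unit conversion, and the claim concerns wall-clock time, not the thinker's experience: a subjective year contains a fixed number of thought-steps, and at a million-fold speedup those run in a millionth of the physical time.

```latex
% "A subjective year in ~30 physical seconds" at a 10^6 speedup:
\[
1\ \text{subjective year} \approx 3.15\times 10^{7}\ \text{subjective seconds},
\qquad
\frac{3.15\times 10^{7}\ \text{s}}{10^{6}} \approx 31.5\ \text{physical seconds}.
\]
% The fast mind experiences a full year of thought while outside
% observers see roughly half a minute pass.
```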
Not a direct argument
This section
"Jared Diamond, in Collapse: How Societies Choose to Fail or Succeed, argues that cultures self-limit when they exceed the sustainable carrying capacity of their environment, and the consumption of strategic resources (frequently timber, soils or water) creates a deleterious positive feedback loop that leads eventually to social collapse and technological retrogression."
under the Criticism sub-header is not a direct argument against the Technological Singularity. It also isn't sourced.
It should either be sourced and reworked so as to directly relate to the article, or simply removed. — Preceding unsigned comment added by Angrinord (talk • contribs) 02:17, 12 July 2014 (UTC)
Broken reference link
Hello, I wasn't sure if there was a proper place for reporting broken links, so I figured I'd just bring it up on the talk page. Link 10, which should lead to "what is the singularity" on the singularity university website, in fact leads to a 404 page not found. It appears they've updated the website since 2009.
I'd be happy to fix it, but I'm not sure what protocol exists for such things. Unfortunately I don't remember my log in from 10 years ago, so I guess I'll have to start a new one!
Cheers
206.174.6.46 (talk) 17:02, 10 October 2014 (UTC)kirk (not sure if this does anything without being signed in..) Ah! It shows the IP address. Clever.
Request for third-party help on a recent edit
An anonymous editor using multiple IP addresses has added a rambling and irrelevant discourse that has now been reverted, first by another editor and then by me, and has been re-added both times.
The editor asserts that my reversion fails WP:NPOV because I claim to be a radical Singulatarian. Editors need not be NPOV: edits are supposed to be NPOV. Nevertheless, I recuse myself. Can other parties please evaluate this addition? Note the total set of edits appears also to have removed some prior material without explanation and the two attempts at reversion did not address this. A rollback may be appropriate. -Arch dude (talk) 21:13, 3 March 2015 (UTC)
- This is very similar to a series of disruptive edits made to One-electron universe in October, and Accelerating universe recently. Maybe others? Article protection should be sought if this continues. 07:28, 4 March 2015 (UTC)
- I've put in a request at Wikipedia:Requests for page protection. Gravitoelectromagnetism is one to watch, by the way. Grayfell (talk) 08:16, 4 March 2015 (UTC)
- I'm not the slightest bit of a singularitarian, and I concur it was synthesis, OR and incoherent. It's possible there's a point in there, but it would need not to read like it belonged on GeoCities before going on the article - David Gerard (talk) 20:38, 4 March 2015 (UTC)
External links grab-bag
The "External links" section was tagged as a likely WP:EL violation for four years. Looking through the list, I can't see any obvious candidates for "the subject's official site" or "cannot be integrated into the Wikipedia article due to copyright issues" - quite a lot appears to be blog posts or speculation - so I've removed the list from the article and pasted it here as the list may contain useful material for references. If so, the WP:RSes in the following should be digested into the article as proper references:
- Singularity-related research links, from Ray Kurzweil, author of The Singularity is Near
- Intelligence Explosion: Evidence and Import by Machine Intelligence Research Institute
- Singularities and Nightmares: Extremes of Optimism and Pessimism About the Human Future by David Brin
- A Critical Discussion of Vinge’s Singularity Concept by Robin Hanson
- Is a singularity just around the corner by Robin Hanson
- Brief History of Intellectual Discussion of Accelerating Change by John Smart
- One Half of a Manifesto by Jaron Lanier – a critique of "cybernetic totalism"
- One Half of an Argument – Ray Kurzweil's response to Lanier
- A discussion of Kurzweil, Turkel and Lanier by Roger Berkowitz
- The Singularity Is Always Near by Kevin Kelly
- The Maes-Garreau Point by Kevin Kelly
- "The Singularity – A Philosophical Analysis" by David Chalmers
- 2045: The Year Man Becomes Immortal, By Lev Grossman, time.com, Feb. 10, 2011.
- Report on The Stanford Singularity Summit
- An IEEE report on the Singularity.
- March 2007 Congressional Report on the Singularity (Alternate Link) - actual title "Nanotechnology: The Future is Coming Sooner Than You Think"
I'm going through them slowly, but obviously others doing so too would be good - David Gerard (talk) 18:48, 11 June 2015 (UTC)
NPOV in introduction
I have tagged this article for NPOV concerns in the lead due to recent edits that have been made. I have included the two versions of the lead below for reference.
Previous version:
The technological singularity is the hypothetical advent of artificial general intelligence (also known as "strong AI"). Such a computer, computer network, or robot would theoretically be capable of recursive self-improvement (redesigning itself), or of designing and building computers or robots better than itself. Repetitions of this cycle would likely result in a runaway effect — an intelligence explosion[1][2] — where smart machines design successive generations of increasingly powerful machines, creating intelligence far exceeding human intellectual capacity and control. Because the capabilities of such a superintelligence may be impossible for a human to comprehend, the technological singularity is an occurrence beyond which events may become unpredictable, unfavorable, or even unfathomable.[3]
The first use of the term "singularity" in this context was made in 1958 by the Hungarian born mathematician and physicist John von Neumann. In 1958, regarding a summary of a conversation with von Neumann, Stanislaw Ulam described "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue".[4] The term was popularized by mathematician, computer scientist and science fiction author Vernor Vinge, who argues that artificial intelligence, human biological enhancement, or brain–computer interfaces could be possible causes of the singularity.[5] Futurist Ray Kurzweil cited von Neumann's use of the term in a foreword to von Neumann's classic The Computer and the Brain.
Kurzweil predicts the singularity to occur around 2045[6] whereas Vinge predicts some time before 2030.[7] At the 2012 Singularity Summit, Stuart Armstrong did a study of artificial general intelligence (AGI) predictions by experts and found a wide range of predicted dates, with a median value of 2040. Discussing the level of uncertainty in AGI estimates, Armstrong said in 2012, "It's not fully formalized, but my current 80% estimate is something like five to 100 years."[8]
Newly edited version:
A technological singularity is a fundamental change in the nature of human civilization, technology, and application of intelligence, to the point where the state of the culture is completely unpredictable to humans existing prior to the change, nor can humans after the change relate fully to humans existing prior to the change. The first use of the term "singularity" in this context was made in 1958 by the Hungarian born mathematician and physicist John von Neumann. In the same year, Stanislaw Ulam described "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue".[1] The term "singularity" in the technological sense was popularized by mathematician, computer scientist and science fiction author Vernor Vinge.
Vinge argues that artificial intelligence, human biological enhancement, or brain–computer interfaces could be possible causes of the singularity.[2] Futurist Ray Kurzweil cited von Neumann's use of the term in a foreword to von Neumann's classic The Computer and the Brain. An artificial general intelligence (also known as "strong AI") would hypothetically be capable of recursive self-improvement (redesigning itself), or of designing and building computers or robots better than itself, with repetitions of this cycle resulting in a runaway effect — an intelligence explosion[3][4] — where smart machines design successive generations of increasingly powerful machines, creating intelligence far exceeding human intellectual capacity and control. Because the capabilities of such a superintelligence may be impossible for a human to comprehend, this singularity would be an occurrence beyond which events might become unpredictable, unfavorable, or even unfathomable.[5]
Kurzweil predicts the singularity to occur around 2045[6] whereas Vinge predicts some time before 2030.[7] At the 2012 Singularity Summit, Stuart Armstrong did a study of artificial general intelligence (AGI) predictions by experts and found a wide range of predicted dates, with a median value of 2040. Discussing the level of uncertainty in AGI estimates, Armstrong said in 2012, "It's not fully formalized, but my current 80% estimate is something like five to 100 years."[8]
The newly edited version of the lead opens with and focuses on the broader interpretation of technological singularity. I believe this gives undue weight to non-AI singularity, which is only briefly discussed in this article. Almost the entire article focuses on the narrower technological singularity involving intelligence explosion to superintelligence (besides the non-AI singularity section itself). The article contains the sourced statement, "The term 'technological singularity' was originally coined by Vinge," so why should the second sentence of the lead state that the term "singularity" was first used in the (broad) technological context by John von Neumann? This is undue weight. The lead should be a summary of key information from the article, not a chronological history of the concept's development. The last thing that I want to point out is that the non-AI singularity section itself contains the sourced statement, "Some writers use 'the singularity' in a broader way to refer to any radical changes in our society brought about by new technologies... although Vinge and other prominent writers specifically state that without superintelligence, such changes would not qualify as a true singularity." With the recent edit's focus on the broader singularity in the introduction, this is further evidence that undue weight is given in the lead. For these reasons, I believe the lead should be reverted back to the previous version. Abierma3 (talk) 17:22, 8 June 2015 (UTC)
- If the article was titled Artificial Intelligent Singularity, you would be absolutely correct - and an article about general technological singularities, with a summary about Artificial Intelligent Singularities and a cross-reference to this article, would be warranted.
- As it stands, you are in the logical position of asking that an article about "dead people" only talk about Jim Morrison (or whomever is your favorite dead person).
- Jim might be dead, but not all dead people are Jim Morrison. Likewise, the advent of trans-human strong AI is absolutely a technological singularity, but we can conceive of technological singularities which do not include strong AI (we discover derelict alien technology in the Oort Cloud - our view of our role in the universe, and our technology, are transformed in ways no one pre-contact could have predicted, with no AI being present).
- Admittedly, strong AI is what many - not all - people mean when they talk about "The Singularity"; the rest of the article and the concentration on a strong AI singularity can/should stand - but a "call out" that not all cultural/technological singularities involve strong AI needs to be made.
- The fact that there is already a section calling this out - under "manifestations" - is further evidence that this point needs to be stressed. If the lead is supposed to be a summary of the article - as you claim - this needs to stand.
- Additionally, I'm unsure that the sourced article by Vinge which is cited specifically denies that a non-AI based singularity is "real". Can you provide an exact quote in the document, please? Please note I am not denying that the quote exists (I have not yet had time to re-read the document carefully) - I'm merely asking you provide it to demonstrate that it does. However - even if it does, does this not merely indicate an opinion of Vinge? Can this not be viewed - even so - as an appeal to authority logical fallacy?
- Ironically, one might claim that an insistence on a strong AI technological singularity being the only kind that the article acknowledges could be viewed as a strong POV bias.
- Having dug into the cited Vinge article, I found the section which I think you are referring to:
From the human point of view this change will be a throwing away of all the previous rules, perhaps in the blink of an eye, an exponential runaway beyond any hope of control. Developments that before were thought might only happen in "a million years" (if ever) will likely happen in the next century. (In [4], Greg Bear paints a picture of the major changes happening in a matter of hours.)
I think it's fair to call this event a singularity ("the Singularity" for the purposes of this paper). It is a point where our models must be discarded and a new reality rules. As we move closer and closer to this point, it will loom vaster and vaster over human affairs till the notion becomes a commonplace. Yet when it finally happens it may still be a great surprise and a greater unknown. In the 1950s there were very few who saw it: Stan Ulam [27] paraphrased John von Neumann as saying: "One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue".
Von Neumann even uses the term singularity, though it appears he is still thinking of normal progress, not the creation of superhuman intellect. (For me, the superhumanity is the essence of the Singularity. Without that we would get a glut of technical riches, never properly absorbed (see [24]).)
- I believe that you are correct that to Vinge superhumanity appears to be an essential component, but it also appears quite clear that other authors who pre-date Vinge, and who used the term before Vinge did so (note that Vinge did not invent the term or the concept; he merely popularized a particular variant of the idea), did not require this.
- In short, the general term, invented and used by authors that pre-date Vinge, does not include superhuman strong AI - by Vinge's own admission. Of course, it cannot be denied that the Strong AI technological singularity variant does.
- As such - even by Vinge's own words - in an article about the general concept of a technical singularity (as evidenced by the lack of the use of Strong AI in the title), the concept of non-AI singularity needs to be addressed.
- 67.226.149.113 (talk) 19:35, 8 June 2015 (UTC)
- You are misrepresenting my argument. Nowhere have I called for this article to be only about AI Singularity. I am merely calling out the undue weight given to non-AI singularity in the lead. Per Wikipedia policy WP:RSUW, "We should not attempt to represent a dispute as if a view held by a small minority deserved as much attention as a majority view." Perhaps even more important: "Undue weight can be given in several ways, including, but not limited to, depth of detail, quantity of text, prominence of placement, and juxtaposition of statements." Notice the inclusion of prominence of placement in this list. With the recent edits, the significant minority viewpoint was moved to the most prominent place in the article (the opening sentences and first paragraph of the lead). The previous version of the article had the majority viewpoint in the opening paragraph and the significant minority viewpoint in the second paragraph. The two versions of the lead have essentially the same information, just in a different order. This is undue weight according to Wikipedia policy. Once again, I have not asked for "strong AI technological singularity [to be] the only kind that the article acknowledges." You seem well versed in logic, so you should know it is a Red Herring fallacy to accuse me of such. In my previous post, I clearly stated my belief that "the lead should be reverted back to the previous version" (which does in fact mention the non-AI singularity), and I stand by this opinion. Abierma3 (talk) 20:01, 8 June 2015 (UTC)
- 67.226.149.113 (talk) 19:35, 8 June 2015 (UTC)
- However, the new version matches the cited references, and the old version did not match the references that it itself cited - David Gerard (talk) 20:03, 8 June 2015 (UTC)
- Per the Vinge citation, the new version is actually per the cited references (which were on the article already) - the previous text did not match the references it cites.
- As the references support the new version better than the old, this would suggest that the problem you flag - "Lead gives undue weight to non-AI Singularity when nearly the entire article discusses the intelligence explosion Singular[it]y" - is one of unbalance in the article body, rather than in the intro. There's a lot of self-published citations from those who professionally deal in hypothetical AI singularities, and I suspect if those were cleared out the body would be considerably better balanced - David Gerard (talk) 19:42, 8 June 2015 (UTC)
- The reference presents two viewpoints: Neumann who did not associate the singularity with superintelligence, and Vinge who believed superintelligence was of essence to the singularity. How does this demonstrate that Neumann's view is in fact the majority viewpoint and that Vinge's view is the significant minority viewpoint? Just because one viewpoint pre-dates the other does not automatically make it the majority viewpoint. Judging by the body of the article and the large number of references, the consensus on this article for years has been that AI singularity is the majority viewpoint. You will have to be a little more specific when you state that "there's a lot of self-published citations from those who professionally deal in hypothetical AI singularities." Due weight does not mean equal balance, weight should be given based on the prominence of the viewpoint. If you have enough specific proof that the non-AI singularity is in fact the significant minority viewpoint to change the consensus, we can restructure the article accordingly. I am open for discussion, but you have yet to demonstrate anything convincing. Abierma3 (talk) 20:18, 8 June 2015 (UTC)
Semantics, meaning, and logic are not democratic. We don't vote on symbolic logic.
If a term has multiple meanings, you cannot reasonably exclude some of them merely on the basis that more people discuss others - especially if one term is a superset of the others. You admit that the article references discuss both points, yet appear to want to discard the expression of one on the basis of "popularity". The general view of a singularity is not an alternative view which distracts from the "real meaning"; it is a general view which leads into the other.
I cannot - reasonably and fairly - make reference to the Tesla Roadster without some reference to what a car is.
To turn your argument around: you have yet to demonstrate why the general term should not be defined as part of the definition of the specific term, other than the fact that you appear to revere one author's usage of the term over another's - even when Vinge makes it clear there are alternative definitions of the term, and thus he clarifies what the singularity is to him, for the purposes of his thesis.
Put another way, even Vinge defined and discussed both terms. How can the definition - and brief discussion - of the general term be "unfair", especially as the coverage of the general term is vastly overwhelmed by the discussion of the more specific popular use of the term already?
69.196.169.138 (talk) 01:53, 9 June 2015 (UTC)
- Once again, my argument is being misrepresented. Nowhere have I stated that non-AI singularity should be excluded from this article, as you are implying. I am simply demanding due weight in the lead.
- This has nothing to do with "popularity." That might be your interpretation of what I refer to as the Wikipedia Consensus Policy. It reads: "Consensus among a limited group of editors, at one place and time, cannot override community consensus on a wider scale", unless they can convince the broader community that such action is right. I do not hold strong personal views in this argument. I am merely upholding what appears to have been the community consensus on this article for many years.
- To answer your question specifically, it is "unfair" or rather, undue weight, per Wikipedia MOS:INTRO Relative Emphasis section: "According to the policy on due weight, emphasis given to material should reflect its relative importance to the subject, according to published reliable sources. This is true for both the lead and the body of the article. If there is a difference in emphasis between the two, editors should seek to resolve the discrepancy."
- I have no problem with the first sentence being a definition of the general term, but there is far too much emphasis placed on the non-AI singularity in the first paragraph. It is nearly as long as the non-AI singularity section itself, which is a glaring discrepancy of the kind the MoS describes. The strong AI technological singularity should at least be mentioned in the first paragraph, because that is what the large majority of the article focuses on. Abierma3 (talk) 15:18, 9 June 2015 (UTC)
This thread was getting long, and the replacement of the lead has been contested. So I've invoked WP:BRD, and started a new thread below to continue discussion. The Transhumanist 19:12, 11 June 2015 (UTC)
WP:BRD (Bold, revert, discuss)
The replacement of the lead paragraph was bold. I've reverted it. And now we need to discuss it. (Note that reverting the lead even further back, to the version before Strong AI was added, which was the lead for years, would be acceptable as the Strong AI addition was fairly recent).
First, we should all acknowledge that we all want what is best for the article: that it meet the standards of Wikipedia by following WP's policies and guidelines.
If the lead is not correct, based on lack of reliable sources, undue weight, etc., then we need to fix it. And the way to do this is by reaching consensus on the matter.
So let me start the discussion with a question: what is the significance of superintelligence in the definition of "technological singularity"? The Transhumanist 19:05, 11 June 2015 (UTC)
Is the advent of superintelligence the TS, or is it the invention of the thing that led to the emergence of superintelligence? The spark that set off the explosion? Or is it something else altogether? The Transhumanist 19:33, 11 June 2015 (UTC)
- I agree with your decision to revert. I've read the entire conversation and I can see both sides of the argument. On the one side, the recently edited article was weighted heavily towards the non-AI singularity and the development of the term over time, rather than a general definition of the term. On the other hand, the reverted article focuses heavily on a 'doomsday' scenario of the singularity. I believe that is what the author was trying to fix in the edited version: to put the focus on an unknowable shift rather than on a negative shift. Turing, von Neumann, and Vinge, who have all described or defined the singularity, did not focus on the 'end of the world' in a doomsday sense; they focused on the 'end of the world' in the sense of it being the end of the knowable world. It has only recently become popular to see this potential event as a doomsday. ShalonSims (talk) 18:09, 21 July 2015 (UTC)
External links modified
Hello fellow Wikipedians,
I have just added archive links to 9 external links on Technological singularity. Please take a moment to review my edit. If necessary, add {{cbignore}} after the link to keep me from modifying it. Alternatively, you can add {{nobots|deny=InternetArchiveBot}} to keep me off the page altogether. I made the following changes:
- Added archive https://web.archive.org/20141208031136/http://singinst.org/blog/2010/04/08/david-chalmers-on-singularity-intelligence-explosion/ to http://singinst.org/blog/2010/04/08/david-chalmers-on-singularity-intelligence-explosion/
- Added archive https://web.archive.org/20150524220927/http://physicsworld.com/cws/article/news/2013/dec/03/metamaterials-offer-route-to-room-temperature-superconductivity to http://physicsworld.com/cws/article/news/2013/dec/03/metamaterials-offer-route-to-room-temperature-superconductivity
- Added archive https://web.archive.org/20140328215502/http://www.acceleratingfuture.com/people-blog/2007/the-human-importance-of-the-intelligence-explosion/ to http://www.acceleratingfuture.com/people-blog/2007/the-human-importance-of-the-intelligence-explosion/
- Added archive https://web.archive.org/20150814105834/http://cliodynamics.ru/index.php?option=com_content&task=view&id=124&Itemid=70 to http://cliodynamics.ru/index.php?option=com_content&task=view&id=124&Itemid=70
- Added archive https://web.archive.org/20120615054939/http://www.growth-dynamics.com:80/articles/TedWEB.htm to http://www.growth-dynamics.com/articles/TedWEB.htm
- Added archive https://web.archive.org/20141123120819/http://www.singinst.org/AIRisk.pdf to http://singinst.org/AIRisk.pdf
- Added archive https://web.archive.org/20121019124216/http://www.sens.org/node/514 to http://www.sens.org/node/514
- Added archive https://web.archive.org/20110908014050/http://singinst.org:80/overview/whatisthesingularity/ to http://singinst.org/overview/whatisthesingularity
- Added archive https://web.archive.org/20120501043117/http://www.stat.vt.edu/tech_reports/2005/GoodTechReport.pdf to http://www.stat.vt.edu/tech_reports/2005/GoodTechReport.pdf
When you have finished reviewing my changes, please set the checked parameter below to true to let others know.
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).
- If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
- If you found an error with any archives or the URLs themselves, you can fix them with this tool.
Cheers. —cyberbot II (Talk to my owner · Online) 11:47, 28 August 2015 (UTC)
Has Moore's law ended?
Did anyone notice that current high-end laptops, though more battery-efficient, are barely faster than the ones from a few years ago? Wouldn't that be relevant evidence against continual exponential speedups? MvH (talk) 15:02, 25 March 2015 (UTC)MvH
- No, your personal opinion on a limited number of laptops you have worked with over the years has no bearing on this article. Anecdotal evidence is not acceptable in Wikipedia. Abierma3 (talk) 02:19, 7 June 2015 (UTC)
How many separate personal opinions does it take for such anecdotal evidence to be accepted as evidence? Does Wikipedia have a number for that? Is there a wikipoll for that?
No amount of "personal opinions" will work. Also, Moore's law does not state that affordable home PCs will become exponentially more powerful. — Preceding unsigned comment added by 76.31.240.221 (talk) 02:27, 13 September 2015 (UTC)
- I'm not the only one who thinks that exponential speedups are a thing of the past. Clock speeds have been around 4 GHz for years now, a number that isn't expected to change much without dramatic changes in the way chips are made http://www.eetimes.com/document.asp?doc_id=1323892 MvH (talk) 19:44, 30 October 2015 (UTC)MvH
- From 1986 to 2002, clock speeds increased 52% per year. From 2002 to 2004, it was about 20% per year, and from 2004 to 2015, clock speeds have increased about 2% per year (clock speeds reached 3.6 GHz in 2004, and are now around 4.4 to 4.6 GHz in 2015). However, this doesn't mean that computers haven't gotten better (better screens/efficiency/SSDs, etc.). To put this in perspective, if the trend continues, then a 10 GHz machine would be on the market around 2050 [of course one can't say anything so far away with any degree of confidence]. http://www.cs.columbia.edu/~sedwards/classes/2012/3827-spring/advanced-arch-2011.pdf MvH (talk) 23:24, 30 October 2015 (UTC)MvH
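A quick sanity check of those growth figures (a minimal Python sketch; the 3.6 GHz starting point and the per-period rates are taken from the comment above, and constant compound growth is assumed throughout):

def extrapolate(start_ghz, annual_rate, years):
    # Project a clock speed forward at a fixed compound annual growth rate.
    return start_ghz * (1 + annual_rate) ** years

# 2004 -> 2015 at ~2%/year reproduces the ~4.4-4.6 GHz chips quoted for 2015:
print(round(extrapolate(3.6, 0.02, 11), 2))  # 4.48

# Continuing the same ~2%/year trend from 2015 onward:
print(round(extrapolate(4.5, 0.02, 35), 2))  # 9.0, i.e. roughly 9-10 GHz by 2050

For contrast, at the 52%/year rate of the 1986-2002 era, clock speeds doubled roughly every 20 months; at 2%/year the doubling time is about 35 years, which is the flattening MvH is describing.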
Problematic quote
"The reason to believe in human agency over technological determinism is that you can then have an economy where people earn their own way and invent their own lives. If you structure a society on not emphasizing individual human agency, it's the same thing operationally as denying people clout, dignity and self-determination ... To embrace [the idea of the Singularity] would be a celebration of bad taste and bad politics." Perhaps I have misunderstood this quote but isn't this an argument from adverse (short term) consequences? — Preceding unsigned comment added by 72.10.117.195 (talk) 07:18, 11 November 2015 (UTC)
Was John von Neumann alive in 1958?
The article: "The first use of the term "singularity" in this context was made in 1958 by the Hungarian born mathematician and physicist John von Neumann"
Another wikipedia article: https://en.wikipedia.org/wiki/John_von_Neumann "John von Neumann (Hungarian: Neumann János, /vɒn ˈnɔɪmən/; December 28, 1903 – February 8, 1957)"
- Thanks for spotting this. Checking the source made clear that it was a misattribution anyway. Paradoctor (talk) 15:33, 1 December 2015 (UTC)
Religion and Singularity
The section fails to provide any reliable source connecting the two. I see the addition as merely a synthesis of two different ideas, concocted as if a reliable source connected them. Not only does it violate WP:OR but also WP:RS. I have removed the questionable addition until proper citation can be provided. Kapil.xerox (talk) 00:01, 4 December 2015 (UTC)
- (edit conflict)
- Your edit summaries leave some room for improvement. Took me a while to figure out that the actual problem here was WP:SYN rather than WP:OR, and your first edit dumped the kid along with the bathwater. Anyway, things make sense now. Paradoctor (talk) 01:05, 4 December 2015 (UTC)
- Sure Paradoctor - I wasn't able to come up with the policy name off the top of my head. Appreciate the correction.Kapil.xerox (talk) 01:11, 4 December 2015 (UTC)
- Feels nice to be appreciated. Happy editing, Paradoctor (talk) 01:21, 4 December 2015 (UTC)
Golden info revealed today
Mark Zuckerberg, CEO of Facebook, posted this today:
https://www.facebook.com/zuck/posts/10102620559534481
Supervised learning is what AI is doing right now. Unsupervised learning is what AI cannot do right now. Unsupervised learning is the key to the technological singularity. Machine learning isn't even mentioned in this article. This is golden info revealed by Zuckerberg today. Keep in mind that the info on Wikipedia is only the info that is authoritatively referenced.
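For readers unfamiliar with the distinction being drawn above, here is a minimal illustrative sketch in Python using scikit-learn (the toy data and model choices are invented for illustration; nothing here is taken from Zuckerberg's post):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X = np.array([[0.1], [0.2], [0.9], [1.0]])  # four one-feature examples

# Supervised learning: labels y are provided, and the model learns the mapping.
y = np.array([0, 0, 1, 1])
clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.15], [0.95]]))  # -> [0 1]

# Unsupervised learning: only X is given; the algorithm must find structure on its own.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_)  # two clusters recovered, but which cluster gets which label is arbitrary

Whether unsupervised learning is "the key" to the singularity is exactly the kind of claim that would need an authoritative reference before going into the article.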
External links modified
Hello fellow Wikipedians,
I have just added archive links to 5 external links on Technological singularity. Please take a moment to review my edit. If necessary, add {{cbignore}} after the link to keep me from modifying it. Alternatively, you can add {{nobots|deny=InternetArchiveBot}} to keep me off the page altogether. I made the following changes:
- Attempted to fix sourcing for http://hanson.gmu.edu/vc.html
- Added archive http://web.archive.org/web/20071129111006/http://hjem.get2net.dk/kgs/growthphysA.pdf to http://hjem.get2net.dk/kgs/growthphysA.pdf
- Attempted to fix sourcing for http://www.singinst.org/intro/whyAI-print.html
- Attempted to fix sourcing for http://www.asimovlaws.com/
- Added archive http://web.archive.org/web/20071230031316/http://www.singinst.org:80/overview/whatisthesingularity/ to http://www.singinst.org/overview/whatisthesingularity
When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at {{Sourcecheck}}).
An editor has reviewed this edit and fixed any errors that were found.
- If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
- If you found an error with any archives or the URLs themselves, you can fix them with this tool.
Cheers. —cyberbot II (Talk to my owner · Online) 16:58, 22 March 2016 (UTC)
Huh?
"The number of patents per thousand peaked..." Per thousand what, people? Mcswell (talk) 04:16, 20 June 2016 (UTC)
"References" that aren't used as references
This is the "references" section that was basically a long list of external links, violating WP:ELNO. The article is actually referenced to the sentence level with inline references; these appear to be further reading, or leftovers from previous versions of the article. I'm putting them here so they can be redigested into the article if they're WP:RSes for that purpose and particularly prove a point.
- Acceleration Studies Foundation (2007), ASF: About the Foundation, retrieved 2007-11-13
- Anonymous (18 March 2006), "More blades good", The Economist, vol. 378, no. 8469, London, p. 85
- Bell, James John (2002), Technotopia and the Death of Nature: Clones, Supercomputers, and Robots, Earth Island Journal (first published in the November/December 2001 issue of the Earth First! Journal), retrieved 2007-08-07
- Bell, James John (1 May 2003), "Exploring The "Singularity"", The Futurist, World Future Society (mindfully.org), retrieved 2007-08-07
- Berglas, Anthony (2008), Artificial Intelligence will Kill our Grandchildren, retrieved 2008-06-13
- Broderick, Damien (2001), The Spike: How Our Lives Are Being Transformed by Rapidly Advancing Technologies, New York: Forge, ISBN 0-312-87781-1
- Bostrom, Nick (2002), "Existential Risks", Journal of Evolution and Technology, 9, retrieved 2007-08-07
- Bostrom, Nick (2003), "Ethical Issues in Advanced Artificial Intelligence", Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, 2: 12–17, retrieved 2007-08-07
- Dreyfus, Hubert L.; Dreyfus, Stuart E. (1 March 2000), Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer (1 ed.), New York: Free Press, ISBN 0-7432-0551-0
- Ford, Martin (2009), The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future, CreateSpace, ISBN 978-1-4486-5981-4.
- Good, I. J. (1965), "Speculations Concerning the First Ultraintelligent Machine", Advances in Computers, 6, Academic Press: 31–88, doi:10.1016/S0065-2458(08)60418-0, ISBN 9780120121069, archived from the original on 2001-05-27, retrieved 2007-08-07
- Hanson, Robin (1998), Some Skepticism, Robin Hanson, archived from the original on August 28, 2009, retrieved 2009-06-19
- Hanson, Robin (June 2008), "Economics of the Singularity", IEEE Spectrum
- Hawking, Stephen (1998), Science in the Next Millennium: Remarks by Stephen Hawking, retrieved 2007-11-13
- Heylighen, Francis (2007), "Accelerating Socio-Technological Evolution: from ephemeralization and stigmergy to the global brain" (PDF), in Modelski, G.; Devezas, T.; Thompson, W. (eds.), Globalization as an Evolutionary Process: Modeling Global Change, London: Routledge, ISBN 978-0-415-77361-4
- Hibbard, Bill (5 November 2014), "Ethical Artificial Intelligence", arXiv:1411.1373 [cs.AI]
- Johansen, Anders; Sornette, Didier (25 January 2001), "Finite-time singularity in the dynamics of the world population, economic and financial indices" (PDF), Physica A, vol. 294, no. 3–4, pp. 465–502, arXiv:cond-mat/0002075, Bibcode:2001PhyA..294..465J, doi:10.1016/S0378-4371(01)00105-4, archived from the original (PDF) on November 29, 2007, retrieved 2007-10-30
- Moravec, Hans (January 1992), "Pigs in Cyberspace", On the Cosmology and Ecology of Cyberspace, retrieved 2007-11-21
- Schmidhuber, Jürgen (29 June 2006), "New Millennium AI and the Convergence of History", arXiv:cs/0606081
- Singularity Institute for Artificial Intelligence (2002), Why Artificial Intelligence?, archived from the original on October 4, 2006
- Singularity Institute for Artificial Intelligence (2007), What is the Singularity?, archived from the original on December 30, 2007, retrieved 2008-01-04
- Smart, John (September 2005), On Huebner Innovation, Acceleration Studies Foundation, retrieved 2007-08-07
- Ulam, Stanislaw (May 1958), "Tribute to John von Neumann", Bulletin of the American Mathematical Society, 64 (3, part 2): 1–49, doi:10.1090/S0002-9904-1958-10189-5
- Vinge, Vernor (30–31 March 1993), "The Coming Technological Singularity", Vision-21: Interdisciplinary Science & Engineering in the Era of CyberSpace, proceedings of a symposium held at NASA Lewis Research Center, NASA Conference Publication CP-10129, retrieved 2007-08-07. See also this HTML version, retrieved 2009-03-29.
- Warwick, Kevin (2004), March of The Machines, University of Illinois Press, ISBN 978-0-252-07223-9
- David Gerard (talk) 21:47, 16 July 2016 (UTC)
- I'm clipping items out of the above as I put them back, please also do so - David Gerard (talk) 16:44, 17 July 2016 (UTC)
SciFi
An article about a SciFi concept should not pretend to be describing science. --Edoe (talk) 08:33, 14 July 2016 (UTC)
- I totally take the point, but please note that the topic - while speculative - is not confined to SF writers. As such, your edit is inappropriate, and I've reversed it. --PLUMBAGO 09:20, 14 July 2016 (UTC)
- The SciFi authors most mentioned or cited in the article are Vernor Vinge and Ray Kurzweil. Consider also the very definition of Science fiction as "a genre of speculative fiction dealing with imaginative concepts ... often explores the potential consequences of scientific and other innovations ...". The lengthy text tries to avoid having the subject classified as SciFi, while it constantly reproduces the very definition of the term. --Edoe (talk) 15:48, 14 July 2016 (UTC)
- Also true (though Kurzweil might prefer "futurist"), but it's still the case — as the article includes — that the concept has a broader base than science fiction. Unsurprisingly, given the pace of advances across the sciences, it has a (questionable?) following in academia. Also unsurprisingly, given that it deals with extrapolated future technology and potential catastrophe, it is of great interest to SF authors, but it has deeper roots than this. Anyway, while you (and I, largely) feel that it is primarily a SF concept, it is also clearly a topic of interest beyond SF. --PLUMBAGO 16:37, 14 July 2016 (UTC)
- This has been a matter of some contention before - basically transhumanists insisting this SF-derived concept is so too a real-life thing, and cobbling together a previous history for it. The article needs a thorough source check, for a start - David Gerard (talk) 17:50, 14 July 2016 (UTC)
- I've rewritten the intro a bit, covering both the SF trope and its transhumanist usage. How's that? The article is still full of bad sourcing to blogs to push the transhumanist side, I'll have a go at that next. I just made an edit that removed a nonexistent unverifiable source, a blog, a seminar talk video equivalent to a self-sourced blog post, three redundant cites to the same IJ Good piece, toned down some WP:PEACOCK terms ... and that's editing two paragraphs. This article will take some effort to get up to general Wikipedia standard - David Gerard (talk) 18:17, 14 July 2016 (UTC)
- Improved, thanx. Anyhow, I'd like to read the term Science fiction somewhere in the first paragraph. The topic need not be classified 'primarily a SF concept', but mentioning the connection would be reasonable. --Edoe (talk) 13:13, 15 July 2016 (UTC)
- It's there, I just linked it :-) I'll take source-repairing the rest in slow chunks - David Gerard (talk) 15:33, 15 July 2016 (UTC)
- You mean "science fiction author Vernor Vinge", that says little about the connection, maybe he's also a pet lover, sandwich hater etc. How about?: "The term "singularity" was used in science fiction and popularized by mathematician, computer scientist and writer Vernor Vinge." --Edoe (talk) 16:53, 16 July 2016 (UTC)
- Something like that, yeah. Ideally, we'd have a cite that makes it very clear that it was popularised by Vinge primarily in science fictional circles. Basically the article needs a lot more with solid sources on the science fictional trope - David Gerard (talk) 21:35, 16 July 2016 (UTC)
- I've just done another two paragraphs of bringing stuff up to scratch. I'm proceeding slowly, to deal with objections as they come in - David Gerard (talk) 23:25, 16 July 2016 (UTC)
I've just done another section and some assorted fluff. Could others please go through the article looking for references that fail to meet any reasonable WP:RS standard? It's pretty heavy going (we can't really just delete large chunks and say "go back and do it again", we have to extract usefulness) but it's what the article needs - David Gerard (talk) 16:49, 17 July 2016 (UTC)