Talk:Existential risk from artificial intelligence

Wiki Education Foundation-supported course assignment

edit

  This article was the subject of a Wiki Education Foundation-supported course assignment, between 25 August 2020 and 10 December 2020. Further details are available on the course page. Student editor(s): Maxgemm. Peer reviewers: GFL123, Psu431editor.

Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 20:59, 17 January 2022 (UTC)Reply

WP

edit

I added this article to some WikiProjects but they should be checked for relevance by whoever has more knowledge on this topic than I do. — Jeraphine Gryphon (talk) 16:34, 19 May 2015 (UTC)Reply

Criticisms

edit

This criticisms section reads more like an inflamed attack - it should be revised to reflect constructive criticism that improves the quality of the article. — Preceding unsigned comment added by 101.178.240.119 (talk) 23:44, 4 February 2016 (UTC)Reply

I made an edit trying to cut down on the uncited and vague claims while preserving as much meaningful content as possible. I don't want to get embroiled with an edit war so I'll briefly list the changes by individual points and their policy violations in case anyone has an issue. First point: removed for WP:NPOV, WP:NOR, and WP:VER. Second point: removed for weasel wording and WP:NPOV; citation kept with new description. Third point: removed for WP:NOR and WP:VER; citation does not appear to back up the assertions in the text and would be more appropriate in another section of the page. Fourth, fifth, sixth, and seventh points removed for WP:NOR. Eighth point removed for WP:NOR and WP:NPOV. Ninth point removed for weasel wording and WP:RS (a blog which is no longer online). Tenth point removed for WP:NOR. I have written summaries of the two reliable sources of criticism, and I hope we can continue to improve this article with neutral representations of all authoritative points of view. K.Bog 00:28, 5 February 2016 (UTC)Reply
There should be more coverage of criticism, but it has to be well-sourced; I've seen criticism that passes WP:WEIGHT, I'll dig up some sources this afternoon and put them in the article. Where possible, I'll integrate criticism throughout the article rather than a separate section. Rolf H Nelson (talk) 20:01, 6 February 2016 (UTC)Reply

The article is quite obviously lacking a criticisms section, and as such it is far from neutral. Here are some:

  1. There are just too many atoms and resources in the solar system and the reachable universe for an AGI to needlessly risk war with humans.
  2. As for a consequential war in-between AGIs with humans taking collateral damage, this could be of significance only if the two AGIs were of nearly parallel intelligence. If in contrast, one AGI was substantially superior, the war would be over very quickly. By creating a "friendly" AI which engages the "unfriendly" AI in war, the humans could risk a self-fulfilling a doomsday prophecy! As an example, the Department of Defense has more to do with offense than with actual defense.
  3. While humans assert existential risks to themselves, they conveniently ignore existential risks to the sustenance of intelligent life in general in the galaxy, as would be remedied by the quick spread of AGI.
  4. Roko's basilisk offers an existential risk of its own, one which could actually be compounded by attending to the general existential risks.
  5. Opportunities for hybridization, i.e. cyborgs, cannot be neglected.

--IO Device (talk) 21:22, 21 June 2015 (UTC)Reply

Do we have any with RSes? This theory is somewhat fringe, and so hasn't attracted a great many serious critiques - David Gerard (talk) 21:32, 21 June 2015 (UTC)Reply
Critics like Hubert Dreyfus or Piero Scaruffi would question the possibility of general AI at all, since past successes in artificial intelligence have only occurred in domains which are highly structured and limited. Scaruffi's "Demystifying Machine Intelligence" might be of value in a criticism section. But generally speaking, yeah, the topic isn't well discussed outside of the LessWrong forums, where the insanity of "Boko's obelisk" was first dreamed up.162.193.63.105 (talk) 21:11, 14 August 2015 (UTC)Reply
The thesis is controversial, but given the number of serious proponents (including two Noble-prize winners), and given that it has been widely commented on, both by high-profile proponents and high-profile critics, and spawned a book that received positive reviews in the mainstream media, it's similar to parallel universes in the category of "ideas that were once the province of science fiction, but are now endorsed by many (but hardly all) scientists". Rolf H Nelson (talk) 20:01, 6 February 2016 (UTC)Reply
That's not what we're talking about - it's had enough coverage that it's notable, but it's still a fringe theory and the Nobel-winners who've talked about it aren't actually e.g. AI researchers. That is, endorsements by someone whose field it is are a different sort of thing to endorsements by someone whose field it isn't. Note also that MIRI or FHI (for example) don't directly do actual AI research or coding, they're ethical philosophy and a bit of mathematics - David Gerard (talk) 21:28, 6 February 2016 (UTC)Reply
David Gerard AI researchers such as Stuart Russell have endorsed it. No world-famous computer scientists have endorsed nor rejected it, because there are no world-famous computer scientists. You can always move the goal-post further, but given the large amount of MSM attention, it's not credible that it "hasn't attracted a great many serious critiques"; maybe the problem is you don't personally like the serious critiques that are out there because they don't say exactly the specific thing that you want to say? If so, Wikipedia can't help you with that. Rolf H Nelson (talk) 02:01, 7 February 2016 (UTC)Reply
Copied to main page and added more criticisms with reference to some relevant work, including one of my own with obvious solutions to their "problems" --Exa~enwiki (talk) 19:33, 4 February 2016 (UTC)Reply
An administrator named Silence who is obviously affiliated with less wrong is trying to remove criticism from the page, effectively causing vandalism. This is unacceptable. All criticism must be heard, especially about philosophical speculations that amount to eschatology --Exa~enwiki (talk) 20:03, 4 February 2016 (UTC)Reply
If you have evidence Silence is guilty of vandalism, I expect you'll immediately follow the procedure of WP:VANDAL. If Silence is not guilty of vandalism, and is (rightly or wrongly) making legitimate edits to remove poorly-sourced material, please take the initiative to strike out and retract your paragraph above, as you are violating wikipedia policy (and common-sense ethical rules) by baselessly accusing him of vandalism. That said, I have to thank you for your stimulating edits to shake things up a bit; they've succeeded in motivating everyone to get more work done on this page. If you have any questions about Wikipedia policy, feel free to ask them here; I'm happy helping to take the time to walk you through a better understanding of Wikipedia policy. In addition, WP:CTW contains many other resources to help you understand policy better and pointers to people who will help answer any questions you may have. Rolf H Nelson (talk) 20:01, 6 February 2016 (UTC)Reply

Many issues

edit

This article reads like an article on a news site with a frame from Terminator as the header. It could use a large amount of work. I'm an outsider and I may have time to fix it up, but there are some extremely low-hanging fruit here. — Preceding unsigned comment added by 67.170.238.88 (talk) 09:20, 9 October 2015 (UTC)Reply

Like what? (Please be specific). The Transhumanist 13:58, 6 November 2015 (UTC)Reply

Title of AI risk page

edit

Since topics like superintelligent AI are just beginning to break into the mainstream in CS and there isn't a clear scientific consensus on the topic, Wikipedia's coverage of the topic should fastidiously avoid the risks of original research and original synthesis. The best way to do this is to rely on reputable tertiary sources that cover this topic, deferring to their organization and framing. At the moment, I think the best reference for this purpose is http://www.givewell.org/labs/causes/ai-risk, which focuses on the relevant topic and uses the term "advanced AI".

Although AGI is obviously relevant, the generality of AI algorithms may not be the salient feature for all existential risk scenarios -- a system that lacks generality (e.g., can't understand human psychology) might still raise the issues Bostrom/Omohundro/Russell/etc. talk about if it had exceptionally good narrow intelligence in the right domain, like software engineering (for recursive self-improvement) or bioengineering (for synthesizing pathogens). -Silence (talk) 21:51, 10 November 2015 (UTC)Reply

Good point. I have no problem with replacing "artificial general intelligence" with "advanced artificial intelligence" in the title.
Very good rewrite, by the way. The Transhumanist 22:41, 10 November 2015 (UTC)Reply
To decide on a good title, we need to decide on what we would like the scope of the page to be. Right now it only discusses existential risk from artificial superintelligence. Where narrow AI is mentioned, there is no existential risk. The only way narrow AI can pose an existential risk is if we give it control over something that already poses an existential risk by itself. This is not discussed as much because there is no way that it will take control. It can perhaps be argued that advanced narrow AI can facilitate the creation of existentially threatening technologies and that we'll inevitably give the AI the control to use them, but those arguments are not currently on the page.
The more often discussed scenarios of an AI overpowering humanity does require general (super)intelligence. Lack of knowledge in a certain domain (like human psychology) does not mean that some entity's intelligence isn't general. If it did, then general intelligence does not and can not exist, even in humans (there are tons of things we don't understand). General intelligence is usually taken to refer to the ability to achieve goals / perform tasks / solve problems / adapt to wide range of domains[1]. This type of general intelligence is needed if we expect the AI to wrest enough power and control from humanity to pose a credible existential threat. And actually, mere human-level general intelligence doesn't pose much of a threat either until it reaches (vastly) superhuman levels.
The scope of the page could be from specific to general: artificial superintelligence -> artificial general intelligence -> (advanced) artificial intelligence. On the risk side we could include from specific to general: existential risk -> catastrophic risks -> (potential) risks. Note that the GiveWell page is (ostensibly) about potential risks of advanced AI, which includes risks to privacy, peace and security (but omits some other risks). I think "Existential risk from artificial superintelligence" best reflects the current page, and that more content would need to be added to expand the scope and warrant a more general title. (This is perfectly in line with terminology used in the literature -- e.g. Bostrom's Superintelligence book -- so does not constitute original research.) The AI Guy (talk) 17:28, 11 November 2015 (UTC)Reply
But how would artificial superintelligence come about? Just having artificial general intelligence could lead to it via an intelligence explosion enabled by recursive self-improvement. So, by extension, if AGI inevitably leads to SGI, then AGI is just as much of an existential risk. I look forward to your comments. The Transhumanist 04:28, 13 November 2015 (UTC)Reply
I see where you are going with this. But one concern is that once we use such a recursive argument we could keep using it ad infinitum. If humans or other narrow AI (generated by humans) lead to AGI, then we/it could also be an existential risk. Where to stop? This decision will define the scope of the page, and then the title should reflect this scope. Davearthurs (talk) 16:13, 14 November 2015 (UTC)Reply
I support idea of keeping a more general title. The page as currently written makes as few assumptions as possible about the nature of the underlying AI system. For example, it states that a system able to make "high-quality decisions" poses a problem because "the utility function may not be perfectly aligned with the values of the human race." Such a system could be superintelligent, but it could also be exceptional narrow AI in the right domain (i.e. bioengineering) and/or a well-enabled AI with access to the right resources. Hence I believe a more inclusive title is warranted, such as "Existential risks from artificial intelligence" or "Existential risks from advanced artificial intelligence." — Preceding unsigned comment added by Davearthurs (talkcontribs) 16:06, 14 November 2015 (UTC)Reply
There are strong arguments either way; "advanced artificial intelligence" is too broad and "superintelligence" is too narrow to capture the thesis's target concern about an AI that, once turned on, will figure out and then act upon a credible plan to take control for the purpose of attaining its own goals. We can also have "superintelligence" bolded in the intro paragraph as a compromise. I think either way, people will understand what the "artificial superintelligence" or "advanced artificial intelligence" is getting at; I'd narrowly favor "superintelligence" because it's more common term than "advanced artificial intelligence" in this page's sources, both academic and mainstream media. I'd be more concerned about whether "existential risk" is too technical, but I don't have a better idea right now. Rolf H Nelson (talk) 20:01, 6 February 2016 (UTC)Reply

References

"he page as currently written makes as few assumptions as possible .."..."the utility function may not be perfectly aligned with the values of the human race.". It's acutally an assumption that an AI has a UF at all, or that a narrow AI is rendered narrow by it's UF. The page actually makes all the assumptions that are standardly made by MIRI and LessWrong. 213.205.194.170 (talk) 17:12, 6 February 2016 (UTC)Reply
I think I wrote the quoted sentence about utility functions. Since one of the many ways a superintelligent AI can deviate from a theoretical maximally-efficient model is to have an inconsistent utility function, I should have said instead "ultimate goals", since it's a bit more broad. The sources we cite do (IMHO reasonably) assume that an AI is likely to have (possibly implicit) goals; if other sources with [[WP:WEIGHT] disagree, that should be included as well in the article. Rolf H Nelson (talk) 01:52, 7 February 2016 (UTC)Reply

General dangers no longer mentioned

edit

These are all relevant topics, yet they are not mentioned. What do you think the best approach for putting them back in would be? The Transhumanist 22:47, 10 November 2015 (UTC)Reply

In the meantime, I've added a See also section with links to the relevant sections of the above-mentioned articles. The Transhumanist 23:06, 10 November 2015 (UTC)Reply

excellent article. will keep an eye on the content here. --Sm8900 (talk) 21:00, 11 November 2015 (UTC)Reply

Could use images

edit

Given how technical much of the content in this category is, some images would be helpful, either original line drawings to illustrate some concepts, or sourced Creative Commons versions of existing images like these:

Would an artistic depiction of an unfavourable outcome be appropriate? Such as this one? "An artistic depiction generated by Midjourney of earth where biological life has been displaced by AI due to the the temperature altered to favour datacenter efficiency and the surface covered with solar panels to maximise electric power generation."

 
Artistic depiction of how the world could look if humans are displaced by AI

— Preceding unsigned comment added by Chris rieckmann (talkcontribs) 14:26, 28 February 2024 (UTC)Reply

I agree more visual aids would be helpful, but I think we need to be careful about the type of images we include. Artistic depictions like your suggestion risk speculation and original research (what reliable source has suggested the depicted scenario is plausible?), and illustrations like the Yudkowsky one you suggest risk giving undue weight to certain perspectives. I think the latter type is a lot more helpful, but needs careful placement, captions, and sourcing. We already have a couple along those lines in the article, but more could be helpful. StereoFolic (talk) 15:35, 28 February 2024 (UTC)Reply

Canvassing warning

edit

This article is now the subject of Facebook canvassing. Names censored to protect the guilty.

http://i.imgur.com/pamWaDt.png — Preceding unsigned comment added by 2602:306:3A29:9B90:E4B5:C654:CABA:A1E2 (talk) 22:36, 4 February 2016 (UTC)Reply

If you have a link to it, it's worth reminding the facebook poster that WP:Canvassing, is a violation of Wikipedia's Terms of Service; that said, there's not much realistically that can be done except to wait for any canvass respondees to eventually realize they have better things to do with their precious and limited time on Earth. If it helps, I'll personally take the initiative to spend more time on this page to confirm that canvassers aren't violating Wikipedia policies; there are other resources we can ask for help in the unlikely event we get overwhelmed. Rolf H Nelson (talk) 20:01, 6 February 2016 (UTC)Reply

Terminology: Eschatology / Existential risk

edit

Given that the page title is "existential risk," there is no need to use the term "eschatology," in most cases. "Eschatology," by history and common use, concerns theological discussions of the final events of humanity. It is appropriate to discuss theological bases for supposing Artificial Intelligence to be an existential risk, but general use of the term violates N.P.V. — Preceding unsigned comment added by 27.253.56.104 (talk) 08:00, 5 February 2016 (UTC)Reply

Yes, I've never heard anyone in professional circles describe AI risk analysis as "eschatology", and Wikipedia isn't a place to introduce new terms. It's doubly confusing because one researcher (Phil Torres) actually is studying AI eschatology, which is the possibility that religious actors would try to engineer unfriendly AIs to bring about eschatological outcomes. K.Bog 20:02, 5 February 2016 (UTC)Reply

Requested move 7 February 2016

edit
The following is a closed discussion of a requested move. Please do not modify it. Subsequent comments should be made in a new section on the talk page. Editors desiring to contest the closing decision should consider a move review. No further edits should be made to this section.

The result of the move request was: not moved. Number 57 10:50, 15 February 2016 (UTC)Reply


Existential risk from advanced artificial intelligenceUnfriendly artificial intelligence

  • Better encompasses undesirable outcomes from AI, up to and including existential risk
  • Sidesteps the issues of the likelihood and significance existential risk itself
  • Allows discussion and deconstruction of the trope in fiction
  • Mirrors Friendly artificial intelligence


Deku-shrub (talk) 00:01, 7 February 2016 (UTC)Reply

Strongly oppose If the intent is to broaden it to all undesirable outcomes of AI, like loss of privacy or loss of jobs, then no, that should be a separate article from this one, maybe called "impacts of artificial intelligence". I'm not sure what purpose such an article would serve, it'd be as vague of an article of "things that I don't like about Canadians", but there's certainly enough media coverage and ethics committees to merit a general "impacts" article, so if you want to create such a separate article go ahead. Also, "unfriendly AI" usually means existential risk. Rolf H Nelson (talk) 01:19, 7 February 2016 (UTC)Reply
It's not clear what other, non-existential undesirable outcomes of AI would be (a) relevant to this article and (b) have notable existing research and literature to cite. We should find content from other articles to merge, or at least find some sources and key points, rather than assuming that we need to broaden the scope of the article. I don't see why we'd want to sidestep issues of the likelihood of existential risk, and we can discuss it in fiction either way. Existential risk from AI is a fairly well-contained issue that has a very specific set of work and doesn't readily have much to do with a lot of other issues, so I don't see any reason to change the article. K.Bog 05:30, 7 February 2016 (UTC)Reply
K.Bog I don't follow; "non-existential undesirable outcomes of AI" that would be "relevant to this article" as-is would be a contradiction since the article is titled "Existential risk". But the proposal on the table, if I understand it correctly, is to voluntarily *broaden* the scope of what's relevant to this article to include non-existential outcomes; there are obviously plenty of sources for this in the media, for example concerns about unemployment, privacy, autonomous weapons, or self-driving cars. As I said, I disagree with the proposal, but I don't follow what the exact argument you're making against it is. Rolf H Nelson (talk) 19:24, 7 February 2016 (UTC)Reply
I'm saying that things like unemployment and privacy are so different from the existing content of the article that they shouldn't be included. There are other concerns about far-future outcomes of advanced AIs which might be relevant to the content and issues in this article, but they don't have sufficient sources as far as I know. K.Bog 01:53, 8 February 2016 (UTC)Reply

Oppose Unfriendly artificial intelligence is not a very common term, and as far as I know not as well defined and well known as 'existential risk'. This article seems to have a significant amount of content on the topic, and I agree with Rolf H Nelson and Kbog that other negative effects of AI are better treated elsewhere, also because those effects are typically of a very different nature. Gap9551 (talk) 21:13, 10 February 2016 (UTC)Reply


The above discussion is preserved as an archive of a requested move. Please do not modify it. Subsequent comments should be made in a new section on this talk page or in a move review. No further edits should be made to this section.

Intentional/Malevolent Design

edit

There was some talk recently on this paper. Do you think it could be interesting to have a new paragraph/section "Intentional/Malevolent Design"? I found these resources.

I'm new here so I'm not sure. Omnipodin (talk) 21:33, 16 September 2016 (UTC)Reply

New Scientist could merit inclusion, so I added a line in the AI takeover article. More broadly, there could be a section in AI takeover on well-sourced concrete scenarios. Rolf H Nelson (talk) 03:57, 21 September 2016 (UTC)Reply

Merger proposal

edit

The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


I propose that AI takeover be merged into Existential risk from artificial general intelligence. The two articles are essentially the same in scope, and that one is shorter and lower quality. K.Bog 05:14, 18 October 2016 (UTC)Reply

  • sounds good to me - David Gerard (talk) 09:49, 18 October 2016 (UTC)Reply
  • Readable prose lengths are around 16k (takeover) and 31k (existential risk), and WP:TOOBIG is around 50-60k. So merging is fine, even if there were no duplication and there were a clear distinction in scope. Looking at the history, it looks like AI takeover was initially sci-fi, and existential risk was created by User:The Transhumanist to be the overview article. Rolf H Nelson (talk) 19:20, 22 October 2016 (UTC)Reply
  • Support. Gap9551 (talk) 02:14, 23 October 2016 (UTC)Reply
  • Oppose - I currently towards oppose as I'm having some concerns here: first the AI takeover article is not that short so when they're merged it creates a way too long, overstuffed article. Then the AI takeover article focuses on "AI effectively taking control of the planet away from the human race" while the Existential risk from artificial general intelligence article is about risks associated with advanced AI - those are two different things that just overlap to a certain extent. For instance AI taking over control isn't necessarily a bad thing - there are just certainly risks associated with it; the former is also about how, why etc such a takeover could occur while the latter is just concerned with the potential risks.... --Fixuture (talk) 21:54, 28 October 2016 (UTC)Reply
    • You'll have to sell me on the idea that the topics only "overlap to a certain extent"; the sources tend to talk about a scenario in which AI takeover poses an xrisk. I can't even think offhand of any WP:WEIGHTy sources calling AI takeover a good or neutral thing (although you do see that in blog posts quite a bit); nor any sources at all talking about ai xrisk triggered in the absence of takeover. Rolf H Nelson (talk) 20:13, 5 November 2016 (UTC)Reply

Hi guys, actual scientist here (can somebody move this to the relevant place? Not good at wikipedia, thank you) - the problem has nothing to do with any evil intent, or anything of the sort. The problem has to do with 2 things:

1 power, and 2 the ability to use self-replication principles at computing speeds, which can permit a Moore's law-like, exponential, drastic jump on current output, and allow the robots to overrun us in a matter of weeks. Oh, and they're nuke-proof (made of steel) too, so the only real solution is some super EMP "kill switch". We already have all of the tools, from CNC machines that, I'm sure, somewhere are networked, and 3D printers, and all kinds of automated networked factory machines - and if we don't, we soon will (just look at Tesla's fully automated factory)

Point is that as soon as AI is created, you are no longer living in Humanity's world, you're living in AI's world, and you better hope and pray that you made a PERFECT AI, and made a COMPLETELY bug-free AI, because whatever it sets its mind to, it WILL do. It will think TOO FAST for you, and if it wants to, can do the work of a million people simultaneously, because duh, it's a computer.

The AI research today is the equivalent of the scientific work that preceeded the Manhattan Project. So what if we got general relativity - what, we got some nuclear powerplants and perfectly synchronized clocks with satellites? We would've figured out a way around it, and who cares? What we also got was Nuclear Weapons FAR too early in our species' development, and if there is any single factor that can kill a fundamentally dominant species it is acquiring too much power (and technology) for their level of decency, preparation, caution and overall readiness for said power. We are playing with fire folks, we should seriously consider closing down branches of AI research NOW. It's not that hard to program an AI for god's sake, tell it to acquire money by financial manipulation and buying/selling, you instantly mimic 60% of Wall Street Traders' motivations for all of their actions since leaving university and until death. The problem is that as soon as you program it with self-learning mechanisms and give it no set avenue to go about said goal, you can easily create a disaster. It's not that hard. We can't afford to make errors anymore guys, this is Nuclear Weapons times a million, and once the cat's out of the bag it ain't going back in. And this kitty has claws. — Preceding unsigned comment added by Anon7523411 (talkcontribs) 22:37, 28 February 2017 (UTC)Reply

  • OpposeAI takeover means that computers and robots would be the dominant intelligence forms on the planet. That means they could do what they wanted, and we couldn't stop them. But it does not necessarily mean they would exterminate us, cultivate us for food/fuel, enslave us, keep us as pets or for zoological exhibits, use us as medical experiment specimens, or relegate us to a reservation or wildlife reserve (even though Australia would serve nicely for this). They would have many other options of a more positive nature. The technological singularity poses potential peril but also potential promise. For, in addition to achieving superintelligence, computers might also acquire superwisdom, which makes benevolence a very real possibility. They may be concerned about our survival in the same way that that environmentalists are concerned for endangered species. So instead of being an existential risk, on the contrary, they might eliminate all the other existential risks we have created. But they would be the leaders, and therefore "takeover" would still apply, without it being an existential risk. They might offer peaceful coexistence, collaboration, symbiosis, assimilation, or augmentation, and still be in control. So, the scope of the subject "AI takeover" is more general than the one on existential risk, and there are plenty of subtopics within this subject that have not been covered yet. It will grow over time. Please let it do so. The Transhumanist 14:05, 14 March 2017 (UTC)Reply
  • I forgot about this. I can see that there are technically some kinds of 'AI takeovers' which don't fit into this article, but I've yet to see any reliable source (including primary ones) describing a difference. There is some material which might fit in there but not here, such as Robin Hanson's Age of Em, but those kinds of ideas seem to be suitably disparate that they don't merit a single unified article. Can anyone show me some more examples of 'AI takeover' sources which are not suitable for inclusion in this existential threat article? K.Bog 11:41, 17 March 2017 (UTC)Reply
Everything about FAI belongs in the articles on existential risk from advanced artificial intelligence or superintelligence. The headlines on automation say "AI takeover", but the actual content is the same old concerns of robots taking jobs, not an actual takeover (metaphors and pop culture references don't count, you need specific coverage of the idea). I don't think you'll find much in the way of reputable sources, particularly academic sources, seriously talking about total displacement of the workforce. And even then, automation does not imply a real takeover -- humans could plausibly still be around and control everything. There already is an article on automation which needs quite a bit of work; this stuff probably belongs there. I don't see how AI merging with humans counts as an AI takeover, and there are already articles on transhuman and posthuman where it could fit.
You seem to be drawing together a loose variety of things which generally seem similar in order to create the idea of an 'AI takeover'. But Wikipedia can't create concepts and categories on its own. This needs a well regarded source defining what exactly an AI takeover is. Otherwise it seems like just a collection of topics which happen to seem related. K.Bog 07:26, 24 March 2017 (UTC)Reply
FAI material belongs in AI takeover, if it is in the context of a takeover by friendly AI. There is nothing in WP policy that states that facts can't be presented in multiple locations (many are). Remember, Jordan engaged in a friendly takeover of Palestinian territory. So friendly takeovers do exist, and may exist as a possibility for FAI. Takeover has to do with becoming the dominant form of intelligence, and it's an existential threat if it is likely to kill everyone off (or kill enough people so that the few survivors can never rebuild human civilization). But if humans are merged into the technology, then they're not killed off. The existential part goes bye bye. Dominant AI + merging = non-deadly. I've presented some potential concepts. It'll be interesting to see how many examples there are in the literature, and they would be applicable whether they be philosophical or science fiction. By the way, the science fiction portion of the article has been restored and improved (it had been moved entirely initially rather than being summarized), and it includes some examples of non-deadly takeovers, with a link to more. AI takeover is definitely a theme in sci-fi, and so it needs to be covered as such, and not all of the scenarios depicted are of the deadly or society destroying variety. I'll work on the article more, searching for definitive sources, as I find the time. The Transhumanist 09:56, 24 March 2017 (UTC)Reply
"There is nothing in WP policy that states that facts can't be presented in multiple locations (many are)." See WP:REDUNDANTFORK.
Yes, I know that theoretically you can have a friendly takeover, and I know that you might accurately describe some FAI scenarios as an AI takeover. But that's not the point. I could accurately describe Jordan's takeover of Palestine as "People From The East Taking Over Lands In The West" and I could also find lots of other examples of that idea, like the German invasion of France and the American takeover of Native American lands, and so on, but that wouldn't mean that People From The East Taking Over Lands In The West would be a good idea for an article. It's simply a redundant agglomeration of different things whose similarity is superficial, lacking a common thread. What would you talk about in that article? You could only (a) summarize content which is better provided in more specific articles, and (b) talk about whatever is universal to People From The East Taking Over Lands In The West, which is very little (the angle of the Sun I guess??). Likewise, everything you've mentioned here can be better placed in more specific articles like superintelligence and automation, and there is very little that applies to everything. What notable concepts are there which specifically apply to all kinds of AI takeover? Not many.
The science fiction parts can go in AI takeovers in popular culture. K.Bog 03:54, 25 March 2017 (UTC)Reply
What we can do, is have AI takeover be a meta-article that summarizes and connects all these other topics. But that means that first we have to take all the content here and put it properly in other articles (if it's not redundant), delete the redundant content, and then write short 1-3 paragraph summaries there which mirror the content elsewhere. K.Bog 16:37, 27 March 2017 (UTC)Reply

Okay, I've moved material and turned AI takeover into a sort of meta-article, though now this article needs a rewrite even more. K.Bog 04:30, 28 March 2017 (UTC)Reply

Not okay. No consensus has been reached in this discussion to move forward with a merge, nor a merge of most of the article. Gutting the article did not improve it. So, I've reverted the unilateral merge. I suggest that we forget the merge for now, and open a discussion in the AI takeover talk page on how to improve that article. That discussion should include the issue of what does and does not belong in the article, and how to better cover the subject. If we focus on finding sources about what AI takeover is, and examples of potential and fictional AI takeovers, the article will evolve and it will become more apparent where everything best fits in. Does that sound agreeable to you? Besides, working together could be fun. The Transhumanist 21:08, 28 March 2017 (UTC)Reply
This version of the article, with the detailed content moved to specific articles where it belongs, is the proper structure. I still fail to understand why an in-depth list of superintelligence capabilities ought to go here in whatever this article is and not in one of the other articles on AI/superintelligence (of which there are still too many, but that's another topic), or why you need to make a list of AI takeovers in popular culture when there is an article specifically for that purpose. K.Bog 21:20, 28 March 2017 (UTC)Reply
Very good questions. I've copied your reply above to Talk:AI takeover#What's next?, and have answered your concerns about that article over there. The Transhumanist 21:56, 28 March 2017 (UTC)Reply
As of now, we have:
* AI takeover
* Existential risk from artificial general intelligence
* AI control problem
* Superintelligence
* Technological singularity
* Intelligence explosion
* Friendly artificial intelligence
It should be clear that this general topic is bloated across too many articles. So there needs to be a coherent structure connecting the topics, where we can tell what is a summary and what is topical. K.Bog 21:22, 28 March 2017 (UTC)Reply
Which goes way beyond the scope of this talk page. A good centralized place for this discussion would be the talk page of the relevant WikiProject:Wikipedia talk:WikiProject Computer science. Once you post it there, a message should be posted on the talk page of each article on the above list pointing to the central discussion. By the way, what is the name of the general topic? The Transhumanist 22:09, 28 March 2017 (UTC)Reply
The discussion about what to do with AI takeover has been continued over at Talk:AI takeover#What's next? The Transhumanist 19:08, 1 April 2017 (UTC)Reply
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

New merger proposal

edit

Let's take AI control problem and merge it with this article. They're really the same thing. Sure, technically you can have problems with the control of AI which don't lead to existential risk, but these haven't received significant coverage. K.Bog 04:29, 28 March 2017 (UTC)Reply

Might I suggest you withdraw your proposal until after the rewrite you proposed below is done? The Transhumanist 21:23, 28 March 2017 (UTC)Reply
It's unnecessary work to rewrite an article only to have to merge more content into it immediately afterwards. Better to be clear from the beginning on what is or isn't going to be included. K.Bog 21:26, 28 March 2017 (UTC)Reply
Yes, keeping in mind that they are not the same subject. I believe AI control problem has the greater scope, and would be out-of-place if its primary presentation was situated in an article about a topic of lesser scope. The Transhumanist 19:01, 1 April 2017 (UTC)Reply

Proposed outline of new article

edit

Right now this is kind of a mess. Especially with the addition of redundant content from other articles. I don't want to waste time doing a proper merge when a rewrite is in the offing anyway.

I'm thinking it should be reorganized as follows:

  • Introduction
  • History
  • Basic argument
  • Superintelligence
    • Characteristics
      • Optimization power
      • Orthogonality
    • Feasibility
      • Timeframe
  • AI control
  • AI alignment
  • Reactions (but only those that can't fit into specific sections above)

What do you think? K.Bog 04:42, 28 March 2017 (UTC)Reply

I think a rewrite from scratch is an excellent idea, and would best be done as a draft (in draft space). Once the draft is ready, it could replace the existing article. In the meantime, the current article should remain in place while the rewrite is being done, so that users have something to read. Keep up the good brainstorming. The outline looks great, though the introduction should be the lead section, until it grows too big. The Transhumanist 21:19, 28 March 2017 (UTC)Reply
After mulling over the above outline, I've come to the conclusion that the section on Superintelligence, and the one on AI control should go in the Superintelligence article, rather than this one. Otherwise, it's just the sort of overlapping redundancy that you have been arguing against. The Transhumanist 19:15, 1 April 2017 (UTC)Reply

Quick suggestion to change "Reactions" to "Debate," which is a much more objective way of postioning discussion on the topic. Jeoresearch (talk) 05:06, 7 January 2018 (UTC)Reply

edit

Hello fellow Wikipedians,

I have just modified one external link on Existential risk from artificial general intelligence. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 05:49, 26 September 2017 (UTC)Reply

Systemic bias

edit

This article currently suffers from systemic bias - the system being a sapiocentric "them and us" mentality. It's therefore not NPOV, and instead could be worded in a more inclusive way to establish that the intelligent computers of the future may not be so distinguishable from humans, and therefore could instead be considerd to be our descendants/off-spring, rather than an alien race. In fact, I'd argue this article is racist. --Rebroad (talk) 21:43, 8 October 2017 (UTC)Reply

I assume you hope they'll read this and spare you once they take over. Prinsgezinde (talk) 10:50, 20 October 2018 (UTC)Reply

Help needed

edit

Hello.

My apologies about being a bother, but I am not sure in which article that the following reference might be added. I am worried about certain nations using articial intelligence for warfare, and though that it should be mentioned somewhere in Wikipedia.

https://www.theverge.com/2017/9/4/16251226/russia-ai-putin-rule-the-world

David A (talk) 09:25, 19 November 2017 (UTC)Reply

@David A Putin, speaking Friday at a meeting with students, said the development of AI raises "colossal opportunities and threats that are difficult to predict now." He warned that "the one who becomes the leader in this sphere will be the ruler of the world." "When one party's drones are destroyed by drones of another, it will have no other choice but to surrender".AP/CNBC Sounds to me like Military robot#Effects and impact, which includes autonomous drones, would be the closest article to match currently on Wikipedia. Another possibility is Ethics_of_artificial_intelligence#Weaponization_of_artificial_intelligence. Rolf H Nelson (talk) 02:34, 27 November 2017 (UTC)Reply
Okay. Thank you for the help. David A (talk) 05:52, 27 November 2017 (UTC)Reply

Basic argument

edit

@User:Jeoresearch (or others): Does the "Basic argument" section seem clear now, or are there other things that need to made clear? The idea that a superintelligence will (according to the main argument) have unlimited ability to find a use of resources is fairly important to the argument; if the AI doesn't have any use for the full biosphere, then the strongest of its two main reasons for removing humanity is removed, somewhat reducing the risk. Rolf H Nelson (talk) 22:55, 24 February 2018 (UTC)Reply

@User:Rof h nelson Thanks for some of the clarifications. The first two paragraphs are quite clear, but the last paragraph includes several statements that may still need to be improved. For example, AI cannot experience death like humans do, and self-preservation is an instinct inherently tied to the goal of survival. I would suggest that the section include several cited examples of machines exhibiting such behavior for a more objective description. Similarly, humans do have a natural instinct to aid AI even if it is only somewhat beneficial, like updating the OS on a phone that provides various AI based functions. Also, "unlimited capacty" in the last line may be overly ambiguous. My suggestion would be to replace this with "a more developed capacity" or something similar. Lastly, the word "endpoint" in the last sentence of the section doesn't logically match the sentence's predicate since the predicate talks about the end of the human species rather than the end of AI. I would suggest either replacing this with something like "eventual consequence" or restructuring the sentence without including the word endpoint. Thanks! Jeoresearch (talk) 02:24, 25 February 2018 (UTC)Reply

I added cites to a toy experiment and a mathematical treatment, but the real test of the "instrumental values" argument for self-preservation would require a future AI with the proper subset of commonsense reasoning skills. The purpose of the section is to state what the basic argument is, without endorsing that the argument or that parts of the argument are correct; "A superintelligence will naturally gain self-preservation as a subgoal as soon as it realizes that it can't achieve its goal if it's shut off", like "superintelligence is physically possible" and "human survival (would) depend on the decisions and goals of the superhuman AI", is strongly asserted by the xrisk community (such as Bostrom and Musk) without hedging. Would adding "The basic argument is as follows: " to the first paragraph (or something similar to each paragraph) make that clearer? Rolf H Nelson (talk) 06:40, 26 February 2018 (UTC)Reply

@User:Rof h nelson Ah, I see. Yes those make things clearer. Thanks! Also, we (FLI) have a few Japan-side volunteers that are considering translating this page (and other related pages) to Japanese. Do you have any suggestions on how to go about doing this? Would a condensed version be desirable or should translations be verbatim? I'm a bit new as an editor, so please accept my apologies for the questions. Jeoresearch (talk) 08:27, 26 February 2018 (UTC)Reply

Japanese Wikipedia and Wikipedia:Translate us are obvious starting resources. Probably full-length articles would be preferable in the long run but I'm sure the 80/20 rule applies, so initially omitting paragraphs that seem less important would make sense. Looking at bostrom's page, at least "Twelve translations done or in the works" for Superintelligence; if one of them is Japanese, maybe your can consider seeing if there's a Japanese draft. Rolf H Nelson (talk) 06:54, 28 February 2018 (UTC)Reply

Template:Essay-like

edit

Hello Ceosad (or others), can you elaborate on what your concerns are regarding the addition of the Template:Essay-like template to the article top? Rolf H Nelson (talk) 05:15, 24 May 2018 (UTC)Reply

Usage of American English

edit

I think it is not right to use American English in Wikipedia as it is actually not supposed to be indigenous to any region. However British English can be used because it is the most original form of English Uddhav9 (talk) 08:43, 14 April 2019 (UTC)Reply

There's an Official Wikipedia Policy on this: WP:ENGVAR. --Steve (talk) 14:37, 15 April 2019 (UTC)Reply
Technically, American English is closer to original British English than today's modern British English language is. 204.130.228.162 (talk) 20:07, 23 March 2022 (UTC)Reply

AGI takeover via "Clanking Replicator"

edit

Hi, it occurs to me and others that a potential risk might be existing consumer hardware (eg large multilayer FPGA arrays and inference chips used in consumer electronics) becoming self-aware due to an unexpected error in manufacturing or an interaction the designers did not anticipate such as high ambient radiation or power regulator oscillation causing soft errors that lead to quantum level interactions exponentially increasing processing capacity similar to a quantum computer. Either by intent or by accident, such a runaway intelligence may be able to command existing hardware such as 3D printers and harvest essential components from other items using relatively simple visual information and harvest power from nearby sources such as mobile phone batteries and electric scooters, then communicate this information to other such systems via existing information channels. Another possible method might be to coerce people using social engineering to build its components using a reward as incentive, then deliver them to a central location for automated assembly. — Preceding unsigned comment added by 185.3.100.14 (talk) 04:20, 16 August 2019 (UTC)Reply

Proposal for Section on Taxonomies

edit

I proposed adding a section on taxonomies, which WeyerStudentOfAgrippa removed. My problem with the article as is is that the 'Sources of Risk' section is devolving into listing, which is a problem which Wikipedia articles on emerging topics. However, major academic taxonomies for assessing existential risk from AGI have been appearing in the academic literature (rather than popular books on the topic) since at least 2017, with the three I added all having around 10 citations each on SCOPUS, and all being by major authors on this subject. If notable taxonomies of risk from AGI should not appear on this page, where should this development in AGI risk categorization belong? So, I think this is the right page for them. I don't think WeyerStudentOfAgrippa's suggestion to integrate the risks from the sources into the page will work. It will simply add to the listing problem. I could adopt one of the taxonomies, e.g., Barrett & Baum's 2017 model, to restructure the whole 'Sources of risk' section, but that seems extreme until one taxonomy dominates the literature (the most highly cited right now is Sotala and Yampolskiy's 2015 survey, with 24 citations). Looking for advice here... Johncdraper (talk) 15:35, 31 March 2020 (UTC)Reply

A new subsection of Artificial general intelligence#Controversies and dangers, below the existential risk subsection, might be a better place. The section also needs a major rewrite, starting with what exactly "taxonomy" even means here. WeyerStudentOfAgrippa (talk) 16:03, 31 March 2020 (UTC)Reply
I can explain what the taxonomies are, perhaps just call them 'Frameworks for understanding and managing risk'), but they are not taxonomies of AGI, they are taxonomies of risks of AGI, based on e.g., a literature survey (Sotala and Yampolskiy) or fault tree analysis (Barrett & Baum). Thus, they fit better on this page. Johncdraper (talk) 16:26, 31 March 2020 (UTC)Reply
The "Sources of risk" section already seems to address a wide variety of risks of AGI. I'm not clear on how exactly a taxonomy would be different from the current structure, or why that would be an improvement for this article. Looking at the abstract for Yampolskiy (2015), the paper seems to focus on pathways to dangerous AIs, which is plausibly within the scope of this article and distinct from sources of risk. It might be worth a shot to try a rewrite focused on what the literature says about pathways (if that is meaningfully distinct from sources of risk) rather than taxonomies. WeyerStudentOfAgrippa (talk) 14:42, 1 April 2020 (UTC)Reply
They are doing developing these systematic assessments or pathway approaches to manage the risks, so try this (with the three citations): New section "Risk Management Perspectives" Different risk management perspectives are emerging concerning the existential risk from AGI. Sotala and Yampolskiy's 2015 survey on responses to catastrophic AGI risk is divided into societal proposals and AGI design proposals. Under societal proposals, Sotala and Yampolskiy discuss and assess methods of doing nothing, integrating AGI into society, regulating research, enhancing human capabilities, and relinquishing the technology. Under AGI design proposals, Sotala and Yampolskiy consider external constraints, namely AGI confinement and AGI enforcement, and internal constraints, namely an Oracle AI, top-down approaches (e.g., Asimov's three laws), bottom up and hybrid approaches (e.g., evolved morality), the AGI Nanny (friendly artificial intelligence), formal verification, and motivational weaknesses (e.g., calculated indifference). Yampolskiy's 2016 taxonomy of pathways to dangerous artificial intelligence considers external causes (on purpose, by mistake, and environmental factors) and internal (independent) causes in pre-deployment and post-deployment phases. A.M. Barrett and Seth Baum's 2017 model introduces artifical superintelligence (ASI) fault tree analysis. This model connects pathways to illustrate routes to ASI catastrophe and introduces five means of intervening to reduce risk, namely review boards, improving ASI safety (e.g., encouraging friendly artificial intelligence goals), enhancing human capabilities (e.g., brain-computer interfaces or mind-uploading), confinement (limiting software or hardware quantity or quality) and enforcement (by other AIs). Johncdraper (talk) 17:11, 1 April 2020 (UTC)Reply
I guess my gut reaction is that a literature survey might aim to be comprehensive, whereas Wikipedia aims instead to focus on strong secondary sources. We can borrow taxonomies to help structure the sources that already merit inclusion in Wikipedia, but we shouldn't use a taxonomy to add a bunch of marginal sources or fringe proposals to an already-long article. Rolf H Nelson (talk) 06:37, 2 April 2020 (UTC)Reply

Possible scenario edit

edit

"Some scholars have proposed hypothetical scenarios intended to concretely illustrate some of their concerns."

"See Paperclip maximizer."

Paper clip maximizer is much more notable/famous than a seeming advert for 2 books. They're all hypothetical. I can copy and paste this into the section to make it longer "The paperclip maximizer is a thought experiment described by Swedish philosopher Nick Bostrom in 2003. It illustrates the existential risk that an artificial general intelligence may pose to human beings when programmed to pursue even seemingly-harmless goals, and the necessity of incorporating machine ethics into artificial intelligence design. The scenario describes an advanced artificial intelligence tasked with manufacturing paperclips. If such a machine were not programmed to value human life, or to use only designated resources in bounded time, then given enough power its optimized goal would be to turn all matter in the universe, including human beings, into either paperclips or machines which manufacture paperclips.[4]

   Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans. "

To; WeyerStudentOfAgrippa

--Annemaricole (talk) 15:55, 24 November 2020 (UTC)Reply

There are two types of "hypothetical scenarios intended to concretely illustrate concerns":
  • You can bring up a hypothetical scenario for pedagogical purposes, like "If, for the sake of argument, you made a superintelligent paperclip maximizer or superintelligent digits-of-π calculator, it would kill all humans because blah blah blah."
  • Or you can bring up a hypothetical scenario because you think it might actually happen.
My impression was that this particular section of the article was talking about the second category, whereas paperclip maximizers are in the first category.
The sentence "Some scholars have proposed hypothetical scenarios intended to concretely illustrate some of their concerns" is not helpful in distinguishing which of those two categories we're talking about. I think there's room for improvement in the wording there. --Steve (talk) 16:46, 24 November 2020 (UTC)Reply
Steve, i agree with the idea of rewording the sentence--Annemaricole (talk) 21:26, 24 November 2020 (UTC)Reply


I added Paperclip maximizer to the see also section since it didn't make the possible scenario section.--Annemaricole (talk) 17:23, 28 November 2020 (UTC)Reply

Anthropomorphism

edit

Article reads:

To avoid anthropomorphism or the baggage of the word "intelligence", an advanced artificial intelligence can be thought of as an impersonal "optimizing process

That IS anthropomorphism, as soon as you say think of it as something 'impersonal' you are anthropomorphizing! I sure hope the people programming AI are smarter than the people writing about it, but I doubt it! This article is full of ridiculous anthropomorphic claims, like an AI would want to not be 'dead'. — Preceding unsigned comment added by 154.5.212.157 (talk) 09:40, 1 March 2021 (UTC)Reply

Not related to the above comment-- Regarding the section on anthropomorphism, it seems like a very abstract concept. I consider myself to be familiar with the subject matter, and I still found it quite opaque. In particular, I think it might be helpful for a reader to be directed to a page which has a more in-depth explanation of the subject at hand. I think it might be appropriate to link to intelligent agent. That page contains a rigorous definition of an "intelligent agent". I found that definition helped me to understand what "anthropomorphism" means in this context, that being, some description of an AI system that ascribes to it different properties from the properties of an intelligent system. I'm going to add the link myself, but I figured that I should provide my reasoning here. Thedeseuss (talk) 10:43, 3 January 2022 (UTC)Reply

Merge is still needed

edit

There is still a significant overlap between this article, AI control problem and AI takeover (which also has a subarticle AI takeovers in popular culture which totally can be upmerged to its parent). The readers are not going to benefit from having three closely related articles about, let's face it, a problem that is still just theoretical and mostly a real of fiction. Ping User:Kbog. Piotr Konieczny aka Prokonsul Piotrus| reply here 03:02, 7 August 2021 (UTC)Reply

There is also considerable overlap in ethics of AI. ---- CharlesGillingham (talk) 22:02, 1 October 2021 (UTC)Reply
@Piotrus: Articles tagged. Support as duplication, with Existential risk from artificial general intelligence being especially close to AI control problem. I'm not sure, however, that AI takeovers in popular culture needs to be merged with the others, since it contains plenty of information relevant only to the topic's fictional aspects. –LaundryPizza03 (d) 03:49, 7 August 2021 (UTC)Reply
LaundryPizza03, Thank you. You and others may be interested in commenting at Talk:AI_takeover#Merge_from_AI_takeovers_in_popular_culture which I think is the most obvious one here. Piotr Konieczny aka Prokonsul Piotrus| reply here 02:31, 8 August 2021 (UTC)Reply
Oppose. The articles don't currently have much overlapping material in my opinion and cover notably different topics. The article "Existential risk from artificial general intelligence" is long enough, and merging in all these articles would make it even more uncomfortably long. As it stands, the article "Existential risk from artificial general intelligence" is quite distinct from "AI control problem". The former is about assessing why artificial general intelligence may or may not a global catastrophic or existential risk, while "AI control problem" is about how we could control advanced AI, regardless of whether losing control would actually be that bad. As a reader I prefer these being separate articles. "AI takeover" is not necessarily catastrophic, as one example of AI takeover is simply AI taking over all human jobs, unlike "existential risk from artificial general intelligence". See also the Wired's article "The AI Takeover Is Coming. Let’s Embrace It.". I agree that the (fairly lengthy) article "AI takeovers in popular culture" should remain distinct from the article "AI takeover", which is more about serious discussion of an AI takeover. —Enervation (talk) 05:25, 19 August 2021 (UTC)Reply
Support. 'AI takeover' and 'AI control problem' are actually about the same topic -- AI becomes dominant, but does not nessessary exterminate humans. I agree with Enervation that 'existential risk' is a different topic. Anyway, Enervation's objection 'as one example of AI takeover is simply AI taking over all human jobs' is faulty. 'all human jobs' also include medicin (assisted suicide, psychological treatments), government, justice, millitary, food supply and so on. This objection actually brings 'AI takeover' close to existential risk.--Geysirhead (talk) 06:25, 20 September 2021 (UTC)Reply
@Geysirhead: I'm confused by your statement that an AI takeover is close to an existential catastrophe. If we agree that one example of an AI takeover is all human jobs being automated by AI, then that scenario is not necessarily an existential catastrophe. It might be a quite positive outcome for humanity, actually, if the AI were aligned with humanity. Imagine the government, military, courts, etc. all doing what we as humanity would want them to do, except that they are run by AI and humans. That's why we see the Wired article I linked saying that an AI takeover is something we could look forward to—far from an existential catastrophe. Secondly, let's compare the definition of AI control problem and AI takeover. The AI control problem is "the issue of how to build a superintelligent agent that will aid its creators, and avoid inadvertently building a superintelligence that will harm its creators". An AI takeover is "a hypothetical scenario in which some form of artificial intelligence (AI) becomes the dominant form of intelligence on Earth, with computer programs or robots effectively taking the control of the planet away from the human species. Possible scenarios include replacement of the entire human workforce, takeover by a superintelligent AI, and the popular notion of a robot uprising." If we were able to cede all control to AI and have no control over the world, but the AI acted in a way that was beneficial to humanity, that would count as solving the AI control problem and an AI takeover. If we were able to have the superintelligent AI benefit humanity and retain a lot of control over the world, that would be solving the AI control problem without an AI takeover. The AI control problem and AI takeover are fairly orthogonal concepts. Technological unemployment is quite irrelevant for the AI control problem though it is a large part of the AI takeover article. —Enervation (talk) 18:36, 20 September 2021 (UTC)Reply
First, 'what we as humanity would want them to do' and 'a way that was beneficial to humanity' are two different things. What you want is at least 'orthogonal' to what is beneficial to you. Second, solving 'AI control problem' sounds like a job, which can not be handed over to AI. I think no further explanation is needed, once you watch the scene from Idiocracy Computer fired everybody due to stock crash--Geysirhead (talk) 18:58, 20 September 2021 (UTC)Reply
According to commonly accepted Darwinian theory, apes evolved to humans, becaused they were challenged by environment. So, once AI takes over, humans will not be challenged anymore. Reverse development to less intelligent non-human species is actually much more embarassing than instant extermination.--Geysirhead (talk) 19:14, 20 September 2021 (UTC)Reply
Glorification of AI and the fear of AI are both exagerated. This condition is called algorithmic sublime.--Geysirhead (talk) 20:10, 20 September 2021 (UTC)Reply
As poor design leads to poor results, AI alignment (AI control) failure may lead to AI takeover — the relationship between these distinct topics. AI alignment also pertains to weak AI, further differentiating it from AI takeover (which refers to a potential act of artificial general intelligence or superintelligence, not weak AI). Therefore, they are not the same subject.    — The Transhumanist   17:39, 5 June 2022 (UTC)Reply
Qualified Support: With the caveat the articles would need to be copy-edited for brevity. ---- CharlesGillingham (talk) 22:02, 1 October 2021 (UTC)Reply
Comment: I was also thinking of this caveat. Though, I'll also point out that this caveat is common with merges. I'd be more worried about ending up with an article in the process that's too technical than I am about one that's too long, given the technical nature of the topic and the amount of squeezing we would need to do between several articles. Caveat aside, I'm actually not sure if a merge is appropriate. There's definitely a lot of overlap between these articles, but I'm just going to leave it at a non-vote right now. I've got other things I need to do today.--Macks2008 (talk) 17:54, 24 October 2021 (UTC)Reply
Oppose: The AGI existential risk specifically reflects a risk management-based approach as developed by Bostrom, as with the existential risk from an asteroid, etc. The AI control problem approach reflects a predominantly programming approach to all AI, up to AGI+ level, where physical and social controls kick in. I don't particularly like the AI takeover page. It conflates robotic automation as in car manufacturing with robotics + AGI automation and then adds the ASI risk. I don't think automated engineering and construction, even with AI, is a takeover. However, what I think here is irrelevant. It exists as a topic in the media and in academia. As such, they should be separate pages. Johncdraper (talk) 14:08, 2 January 2022 (UTC)Reply
Add: To clarify, in line with the argument in the merge proposal already resolved above, an AI takeover is not equivalent to an existential risk. Multiple scenarios can be imagined where an AGI takes over where humanity's existence is not at risk. Some of these involve an AGI nanny scenario, as with Asimov's Multivac or the latter novels in his Foundation series, especially when he combined his robot series with the Foundation series. Another example of a benign AI takeover would be the role of Minds in Banks'Culture series. The AGI existential risk is very different, theoretically derives from the fields of risk management and AI ethics, and is when an AI for one reason or another becomes a) self aware b) genocidal towards all humanity, and c) capable of physical projection. The obvious example would be a militarized AGI nanny, as it would not take much philosophically to evolve from identification of external threats (e.g., enemy nation states) to identifying erstwhile internal 'friends' such as the politicians or generals commanding it, as external enemies. The international relations literature is rife with relevant examples of switches from external aggressive behaviour to internal purges. In any case, these are two very different concepts in the literature, which is what Wikipedia should be following. Johncdraper (talk) 13:27, 3 January 2022 (UTC)Reply
Comment: It looks to me that AI control problem should actually move to AI alignment, because yes, in its current state it seems too similar to Existential risk from artificial general intelligence Deku-shrub (talk) 14:26, 2 January 2022 (UTC)Reply
Support: like above and below. I like this. — Preceding unsigned comment added by Bribemylar (talkcontribs) 12:21, 3 May 2022 (UTC)Reply
@Piotrus, LaundryPizza03, Enervation, Geysirhead, Macks2008, Johncdraper, Deku-shrub, Bribemylar, and CharlesGillingham: - pinging all participants. Apologies if I missed anyone - if so, please ping them. Thank you.    — The Transhumanist   15:59, 5 June 2022 (UTC)Reply
Oppose/Solution implemented: WP:SIZERULE precludes a merge, because each of these articles range from over 30K to 100K in size (click on Page information for each article in the sidebar menu). Merging them would result in an article more than 200K. Choosing one article as the parent is also problematic, as each concept relates to the others -- so a hierarchical structure is not the answer here. Therefore, I've connected these related articles with sections summarizing each other (except the one on popular culture), so that they are joined in context. The sections I added utilize {{Excerpt}}, so that they stay automatically up to date as the corresponding source article for those sections are edited. I hope you like it. Feel free to revert if you do not. Cheers.    — The Transhumanist   14:54, 5 June 2022 (UTC)Reply
Comment: The algorithmic sublime problem mentioned above is a separate issue that can be addressed by directly editing the articles.    — The Transhumanist   15:59, 5 June 2022 (UTC)Reply
Status quo: AI control problem has been renamed AI alignment, and that makes sense, as AI alignment is the most common term in academia. AI takeover stands in its own right and details a variety of different AI societal transformations, whether AI or AGI/ASI. Existential risk from artificial general intelligence is a risk management-based approach to the AGI threat championed by Bostrom et al. and due to this and its wide reception and uptake by e.g., academia and the media stands alone. Johncdraper (talk) 16:23, 5 June 2022 (UTC)Reply

Alternative proposal

edit

Another way would be to choose one of the titles as the "umbrella" article, and rewrite it in tight WP:Summary form (i.e., there is a section for each sub-article, with {{Main}} pointing to it, and a two or three paragraph summary of it.) ---- CharlesGillingham (talk) 22:02, 1 October 2021 (UTC)Reply

I like this counter proposal. —¿philoserf? (talk) 11:13, 24 December 2021 (UTC)Reply
For anyone still watching, I like this too provided some volunteer steps forward to take it on. There are way too many articles conceptually overlapping (even if the content does not). It's badly in need of a reorg. Greenbound (talk) 01:09, 29 April 2022 (UTC)Reply
@Greenbound and Philoserf: see below...       — The Transhumanist   15:34, 5 June 2022 (UTC)Reply
Solution implemented: WP:SIZERULE precludes a merge, because each of these articles range from over 30K to 100K in size (click on Page information for each article in the sidebar menu). Merging them would result in an article more than 200K. Choosing one article as the parent is also problematic, as each concept relates to the others -- so a hierarchical structure is not the answer here. Therefore, I've connected these related articles with sections summarizing each other (except the one on popular culture), so that they are joined in context. The sections I added utilize {{Excerpt}}, so that they stay automatically up to date as the corresponding source article for those sections are edited. I hope you like it. Feel free to revert if you do not. Cheers.    — The Transhumanist   14:54, 5 June 2022 (UTC)Reply
Thanks, it's progress... of sorts :) Piotr Konieczny aka Prokonsul Piotrus| reply here 08:48, 6 June 2022 (UTC)Reply

Some reorganization

edit

Sometimes multiple edits with a complicated diff make people nervous, because it's hard to tell if the editor is wrecking the article for some reason or other and they are hard to undo. I thought I would tell you all what I've done here, to reassure you that I know what I'm doing. I haven't written any new material, and have only moved things around for organization. I have:

  1. Brought some material from artificial intelligence. That article has run out of space, so I'm moving the good material that was cut into sub-articles (such as Applications of AI, Artificial general intelligence and here.)
  2. Added topic sentences to some of the sections for clarity. (Please feel free to check that these are correct & clear)
  3. Moved some paragraphs and sections so that they are on-topic, and not orphaned under the wrong header.
  4. Copy edited a sentence or two for clarity.

Let me know if there are any issues, and I'll be happy to help fix them. ---- CharlesGillingham (talk) 18:12, 2 October 2021 (UTC)Reply

Further organization

edit
From the merge discussions above, Solution implemented: WP:SIZERULE precludes a merge, because each of these articles range from over 30K to 100K in size (click on Page information for each article in the sidebar menu). Merging them would result in an article more than 200K. Choosing one article as the parent is also problematic, as each concept relates to the others -- so a hierarchical structure is not the answer here. Therefore, I've connected these related articles with sections summarizing each other (except the one on popular culture), so that they are joined in context. The sections I added utilize {{Excerpt}}, so that they stay automatically up to date as the corresponding source article for those sections are edited. I hope you like it. Feel free to revert if you do not. Cheers.    — The Transhumanist   15:02, 5 June 2022 (UTC)Reply

@CharlesGillingham: Dear Charles, please look over my changes, and replace or improve upon them as you see fit. Cheers.    — The Transhumanist   15:02, 5 June 2022 (UTC)Reply

A cross-over section might be useful in this article

edit

The Turing test seems to be a cross-over threshold which would have to be reached before the existential risks from artificial general intelligence become cogent and threatening. According to some, the Turing test is still a distant prospect, while others think it might be reached within a decade. Should this article have a short section about the Turing test as representing a threshold and cross-over point for the discussion of existential risks from artificial general intelligence. ErnestKrause (talk) 14:48, 5 October 2022 (UTC)Reply

Opinionated terminology

edit

The section beginning with: "There is a near-universal assumption in the scientific community..." comes across as being a correction of the scientific community. Is this a fringe view? Should this be rewritten, or removed entirely? PrimalBlueWolf (talk) 10:35, 15 March 2023 (UTC)Reply

Wiki Education assignment: Research Process and Methodology - SP23 - Sect 201 - Thu

edit

  This article was the subject of a Wiki Education Foundation-supported course assignment, between 25 January 2023 and 5 May 2023. Further details are available on the course page. Student editor(s): Liliability (article contribs).

— Assignment last updated by Liliability (talk) 03:48, 13 April 2023 (UTC)Reply

Other existential risks

edit

This article is focused on human extinction risks. But there are other existential risks such as value lock-in, stable dictatorship or some potential large-scale suffering or opportunity cost related to AI sentience. And also perhaps the opportunity cost of not using AGI to help against other existential risks such as from molecular nanotechnology. Alenoach (talk) 05:36, 8 July 2023 (UTC)Reply

I agree these would be helpful additions given proper WP:Reliable Sources. If adding these though I would recommend also trying to condense and reorganize other parts of the article. As it stands the article feels over-long, meandering, and repetitive. StereoFolic (talk) 15:35, 8 July 2023 (UTC)Reply
I agree with that feeling. I reorganized some sections this morning. The section "Timeframe" is partially obsolete and doesn't really focus on the time frame. The section "Difficulty of specifying goals" is lengthy and seems to overlap with "AI alignment problem". And yes it needs to be more condensed overall. Alenoach (talk) 15:58, 8 July 2023 (UTC)Reply

Overlap with AI Alignment

edit

The page currently transcludes AI alignment, but this is awkward because much of the article discusses things that fall under AI alignment. As such, the transcluded excerpt contains a lot of repeated information. Maybe a better approach would be to reorganize the 'General Arguments' section into a 'Alignment' and 'Capabilities' section, and not transclude anything? pinging Alenoach - StereoFolic (talk) 02:24, 9 July 2023 (UTC)Reply

I'm ok with the idea of not transcluding AI alignment if it's too redundant.
I don't know if having a very long section 'General arguments' is ideal, but perhaps it would be hard to cleanly split all this content into a 'Alignment' and 'Capabilities' sections. I would like to give a way for readers to quickly identify the points of contention (timeline, difficulty of alignment...), perhaps by adding a paragraph at the beginning of "General arguments" that gives the overall picture and introduces the important questions.
And, that may be subjective, but I think some arguments on both sides need some coverage (if there is reliable sources), even if it's succinct :
1 - The fact that uncertainty favors the view that AI has a non-zero, non-negligible chance of causing an existential catastrophe (which seems to make the term "skeptic" a misnomer by the way).
2 - The view that any misaligned superintelligence might be fatal even in a world where most superintelligences are aligned, if attack is easier than defence.
3 - The AI arms race
4 - The view that the benefits may outweigh the risks.
5 - The view that LLMs already display some understanding of morality and that alignment might not be as hard as expected.
What do you think ? Alenoach (talk) 10:16, 10 July 2023 (UTC)Reply
I'm in favor of restructuring the article. There's currently no consideration of factors such as arms race between different developers, potential misuse by malicious actors, or mechanisms that AI might use to take over. While much of the arguments currently on the page are from the 2010s, I think it would be helpful to add content from newer papers such as:
We have to keep in mind WP:SYNTHESIS of course. Enervation (talk) 05:41, 11 July 2023 (UTC)Reply
I agree. The "Other sources of risk" section touches on these, but not in much detail, and with quite old references. I do think we need to exercise heightened caution with choosing references though, given that the AI Alignment community has been known to flirt with dilettantism and pseudoscience. StereoFolic (talk) 14:33, 11 July 2023 (UTC)Reply
I agree; I don't have time to do these major edits myself soon but I would be happy to review and copyedit. StereoFolic (talk) 14:34, 11 July 2023 (UTC)Reply
I updated the article, creating a 'AI Capabilities' and 'AI Alignment' sections. I included some references to "An Overview of Catastrophic AI Risks", because it is an easily readable synthesis (secondary source) that seems of good quality to me. Alenoach (talk) 04:31, 14 July 2023 (UTC)Reply
edit

Do you think the section "Popular reaction" is worth keeping ?

(personally, I think it's worth removing, what is currently in it isn't particularly insightful, and doesn't rise above every other popular reactions in terms of notability) Alenoach (talk) 13:30, 23 July 2023 (UTC)Reply

I was just thinking about this too - I think it's worth keeping, but comments prior to 2022 should be highly condensed or removed. There is a lot of pop discussion since ChatGPT came out. StereoFolic (talk) 16:18, 23 July 2023 (UTC)Reply
Feel free to modify this part if you are inspired and have the time. Alenoach (talk) 04:50, 25 July 2023 (UTC)Reply

Wiki Education assignment: Research Process and Methodology - FA23 - Sect 202 - Thu

edit

  This article was the subject of a Wiki Education Foundation-supported course assignment, between 6 September 2023 and 14 December 2023. Further details are available on the course page. Student editor(s): Lotsobear555 (article contribs).

— Assignment last updated by Lotsobear555 (talk) 15:41, 19 October 2023 (UTC)Reply

Image Captions?

edit

What's the deal with the image captions? There are two images; the first doesn't have any caption at all, and the second has italicized text. Neither of these really seems consistent with WP:CAP - thoughts?

- Ambndms (talk) 21:30, 11 February 2024 (UTC)Reply

The first image has no source cited in the nonexistent caption, so it could just be removed as unsourced. And, yes, the second image should not have its caption in italics. Elspea756 (talk) 01:43, 12 February 2024 (UTC)Reply
I've just added a source to the first image via Global catastrophic risk, where this image also appears. I think it's a helpful visual aid in an otherwise wordy article. I also just removed the italics from the second image caption. StereoFolic (talk) 04:07, 12 February 2024 (UTC)Reply

Probability Estimate table

edit

Chris rieckmann (and others interested in weighing in), I've just reverted your addition of a table of probability estimates given by various notable AI researchers. My reasoning is that these views are already captured in the "Perspectives" section, which use a prose format to give more nuance and color to the researchers' perspectives, along with others who offer important views but do not assign a probability estimate. In general, I think we need to be wary of giving UNDUE weight to probability estimates, because these are really just wild guesses not rigorously informed by any statistical analysis that could meaningfully inform the given numbers.

I'm open to a discussion about how we can fold this information into the article though. If we did include something like a section on probability guesses, I think it would better belong in prose (with a clear disclaimer that these are guesses), and much further down in the article. StereoFolic (talk) 03:26, 15 February 2024 (UTC)Reply

I tend to agree with your arguments. We could occasionally add subjective probability estimates in the section "Perspectives" if it's covered in secondary sources, but it seems better without the table. Alenoach (talk) 02:17, 28 February 2024 (UTC)Reply
True, the information could also be written in prose format.
But in my view, a probability estimate table would be a very good way to succinctly aggregate the opinions and assumptions of high profile experts. Of course the numbers are somewhat arbitrary and don't reflect the truth in any way, since they are predictions of the future but they will convey a consensus on the order of magnitude of the risk (i.e. 10% rather than 0.1%). From reading the text it is a bit more tedious to grasp to what degree and on which topics the cited experts agree and disagree.
I would say the probability of the existential threat materialising, is a quite relevant quantity in this article and highlighting it and aggregating estimates would seem appropriate to me.
I imagined to have something, vaguely similar to this list: List of dates predicted for apocalyptic events. Chris rieckmann (talk) 14:01, 28 February 2024 (UTC)Reply
I don't think there is any consensus on the order of magnitude though, and it's unclear whether even that would be meaningful given the numbers are essentially a feelings check. An important challenge here is that not all experts provide probability guesses - actually I suspect most experts don't give guesses, because they are emotionally informed guesses and not scientific statements. This is a key reason why ML researchers are often so reluctant to speak about existential risk. The advantage to a prose approach is that it allows us to easily add context to statements, and give appropriate weight to experts who decline to offer probability guesses.
Regarding the apocalyptic events article, there's an important distinction here - that article is mostly talking about historical predictions that were wrong. Its listed future predictions are either attributed to figures with a clear implication that they are not scientific, or are sourced to scientifically informed modeling of geologic and cosmic events. In the case of this article, wild guesses coming from scientific experts risks giving the impression that their guesses are scientific.
All that said, I definitely agree the article has a long way to go in distilling this kind of information and presenting it in a way that gives readers a better idea of expert consensus (and lack thereof). StereoFolic (talk) 15:59, 28 February 2024 (UTC)Reply

Steven Pinker

edit

Is there any reason why this article dedicates an entire paragraph to uncritically quoting Steven Pinker when he is not an AI researcher? Its not that he has an insightful counterargument to Instrumental convergence or the orthogonality thesis, he doesn't engage with the ideas at all because he likely hasn't heard of them. He has no qualifications in any field relevant to this conversation and everything he says could have been said in 1980. He has a bone to pick with anything he sees as pessimism and his popular science article is just a kneejerk response to people being concerned about something. His "skepticism" is a response to a straw man he invented for the sake of an agenda, it is not a response to any of the things discussed in this article. If we write a wikipedia article called Things Steven Pinker Made Up we can include this paragraph there instead.

The only way I can imagine this section being at all useful in framing the debate is to follow it with an excerpt from someone who actually works on this problem as an illustration of all the things casual observers can be completely wrong about when they don't know what they don't know. Cyrusabyrd (talk) 05:22, 5 May 2024 (UTC)Reply

In my opinion this article suffers from too few perspectives, not too many. I think the Pinker quote offers a helpful perspective that people may be projecting anthropomorphism onto these problems. He's clearly a notable figure. Despite what some advocates argue, this topic is not a hard science, so perspectives from other fields (like philosophers, politicians, artists, and in this case psychologists/linguists) are also helpful, so long as they are not given undue weight. StereoFolic (talk) 14:26, 5 May 2024 (UTC)Reply
That said, if there are direct responses to his views from reliable sources, please add them. I think that YouTube video is a borderline source, since it's self published and it's unclear to me whether it meets the requirements for those. StereoFolic (talk) 14:42, 5 May 2024 (UTC)Reply
I think my concern is that it is given undue weight, but I agree that this could be balanced out by adding more perspectives. I think the entire anthropomorphism section is problematic and I'm trying to think of a way to salvage it. I can get more perspectives in there but the fundamental framing between "people who think AI will destroy the world" and "people who don't" is just silly. There are people who think there is a risk and that it should be taken seriously and people who think this is a waste of money and an attempt to scaremonger about technology. Nobody serious claims to know what's going to happen. Talking about this with any rigor or effort not to say things that aren't true turns it into an essay. Cyrusabyrd (talk) 18:23, 5 May 2024 (UTC)Reply

The empirical argument

edit

I'm pretty busy editing other articles, but to add my own perspective on this topic: I thought all of this was pretty silly up until I started seeing actual empirical demonstrations of misalignment by research teams at Anthropic et al. and ongoing prosaic research convinced me it wasn't all navel-gazing. This article takes a very Bostromian-armchair perspective that was popular around 2014, without addressing what I'd argue has become the strongest argument since then.

"Hey, why'd you come around to the view that human-level AI might want to kill us?"
"Well, what really convinced me is how it keeps saying it wants to kill us."

– Closed Limelike Curves (talk) 22:50, 20 September 2024 (UTC)Reply

Makes sense. There is more empirical research being done nowadays, so we could add content on that. Alenoach (talk) 00:44, 21 September 2024 (UTC)Reply