Talk:Roko's basilisk
This article was nominated for deletion on 28 April 2021 (UTC). The result of the discussion was Redirect.
This article is rated C-class on Wikipedia's content assessment scale. It is of interest to the following WikiProjects:
A fact from Roko's basilisk appeared on Wikipedia's Main Page in the Did you know column on 25 November 2022. The text of the entry was as follows:
Additional sources
Here's the list of sources I found in the previous AfD discussion. Some of these are in use in the article currently, but many are not as of yet.
- Auerbach, David (July 17, 2014). "Roko's Basilisk: The Most Terrifying Thought Experiment of All Time". Slate. Retrieved April 30, 2021.
- Love, Dylan (August 6, 2014). "What Is Roko's Basilisk?: Just Reading About This Thought Experiment Could Ruin Your Life". Business Insider. Retrieved April 30, 2021.
- Oberhaus, Daniel (May 8, 2018). "Explaining Roko's Basilisk, the Thought Experiment That Brought Elon Musk and Grimes Together". Vice. Retrieved April 30, 2021.
- Pappas, Stephanie (May 9, 2018). "This Horrifying AI Thought Experiment Got Elon Musk a Date". Live Science. Retrieved April 30, 2021.
- Thal, Ian (July 16, 2018). "2018 Capital Fringe Review: 'Roko's Basilisk'". DC Metro Theatre Arts. Retrieved April 30, 2021.
- Singler B (May 22, 2018). "Roko's Basilisk or Pascal's? Thinking of Singularity Thought Experiments as Implicit Religion". Implicit Religion. 20 (3). doi:10.1558/imre.35900. Retrieved April 30, 2021.
- Burch, Sean (April 23, 2018). "'Silicon Valley' Fact Check: That 'Digital Overlord' Thought Experiment Is Real and Horrifying". TheWrap. Retrieved April 30, 2021.
- La Monica, Martin (May 17, 2018). "Elon Musk, Grimes, and the philosophical thought experiment that brought them together". The Conversation. Retrieved April 30, 2021.
- Brown, Mike (November 29, 2018). "Elon Musk Shares "Roko's Basilisk"-Theme Song "We Appreciate Power"". Inverse. Retrieved April 30, 2021.
- Riggio A (2016). "The Violence of Pure Reason: Neoreaction: A Basilisk" (PDF). Social Epistemology Review and Reply Collective. 5 (9): 34–41. Retrieved April 30, 2021.
- Cross, Katherine (April 9, 2018). "The existential paranoia fueling Elon Musk's fear of AI". Document Journal. Retrieved April 30, 2021.
- "Will artificial intelligence destroy humanity?". News.com.au. April 15, 2018. Retrieved April 30, 2021.
- Goldstein, Allie (July 18, 2018). "Capital Fringe 2018: Roko's Basilisk Tackles Intriguing Ideas With Mixed Results". DCist. Retrieved April 30, 2021.
- Simon, Ed (March 28, 2019). "Sinners in the Hands of an Angry Artificial Intelligence". Orbiter. Retrieved April 30, 2021.
- Bain, Bill (November 10, 2018). "Future Shock: Why was amateur philosopher's 'theory of everything' so disturbing that it was banned? Ask Elon Musk ..." The Herald. Retrieved April 30, 2021.
- Singler B (March 2019). "Existential Hope and Existential Despair in AI Apocalypticism and Transhumanism". Journal of Religion & Science. 54 (1): 156–176. doi:10.1111/zygo.12494. Retrieved April 30, 2021.
- Viktorovich, Kaygorodov Pavel; Gennadievna, Gorbacheva Anna (2017). "ПРИМЕНИМОСТЬ ПАРАДОКСА НЬЮКОМА ДЛЯ РАЗРЕШЕНИЯ ПРОБЛЕМЫ "ВАСИЛИСКА" РОКО" [Applicability of Newcomb's Paradox for Solving Roko's "Basilisk" Problem]. Modern Research of Social Problems (in Russian). 9 (4): 29–33. ISSN 2077-1770. Retrieved April 30, 2021.
- Millar, Isabel (May 19, 2021). "Prologue: Roko's Basilisk". The Psychoanalysis of Artificial Intelligence. Springer Nature. pp. vii–x. ISBN 9783030679811.
- Kao, Griffin; Hong, Jessica; Perusse, Michael; Sheng, Weizhen (February 28, 2020). "Dataism and Transhumanism: Religion in the New Age". Turning Silicon Into Gold. Apress. pp. 173–178. ISBN 978-1-4842-5628-2.
- Giuliano RM (December 2020). "Echoes of myth and magic in the language of Artificial Intelligence". AI & Society. 35 (4): 1009–1024. doi:10.1007/s00146-020-00966-4.
If you want to make any further additions, RenkoTheBird, I hope this helps. SilverserenC 22:31, 20 October 2022 (UTC)
Thank you, I will look over these over the next few days RenkoTheBird (talk) 22:35, 20 October 2022 (UTC)
- Thank you. gobonobo + c 22:39, 20 October 2022 (UTC)
DYK
@RenkoTheBird: I want to cross check all these references first, but I really like "...that a thought experiment reportedly caused seizures and convulsions to some people that read it" for a DYK hook. gobonobo + c 22:44, 20 October 2022 (UTC)
- I agree; since you want to check the references let me know when you have done so and then hopefully we can go ahead with the nomination. RenkoTheBird (talk) 14:58, 21 October 2022 (UTC)
- I'm about half way there. I'm wondering if we could use a separate section outside of the history that just describes the thought experiment itself in detail, perhaps folding in material from the philosophy section. For the hook I also like something along the lines of "...that reading this article may cause a superintelligence from the future to torture you for eternity?", though it might be more appropriate for an April Fools' day hook. gobonobo + c 20:26, 21 October 2022 (UTC)
- I added the more detailed explanation of the post (I'll refine it later, though). I also think your hook would be really funny and would fully support it. RenkoTheBird (talk) 14:26, 23 October 2022 (UTC)
- @RenkoTheBird: Okay, the DYK nomination is live here. I reworded the initial hook, as reliable sources didn't mention seizures/convulsions. Feel free to add more alternative hooks to the nomination if you'd like. gobonobo + c 16:17, 26 October 2022 (UTC)
- It's great, thanks for your help. RenkoTheBird (talk) 16:19, 26 October 2022 (UTC)
Images
Regarding images that could be added to the article, we have a photo of Yudkowsky as well as depictions of basilisks. gobonobo + c 17:21, 21 October 2022 (UTC)
Did you know nomination
- The following is an archived discussion of the DYK nomination of the article below. Please do not modify this page. Subsequent comments should be made on the appropriate discussion page (such as this nomination's talk page, the article's talk page or Wikipedia talk:Did you know), unless there is consensus to re-open the discussion at this page. No further edits should be made to this page.
The result was: promoted by RoySmith (talk) 15:33, 18 November 2022 (UTC)
- ... that a thought experiment reportedly caused nightmares and breakdowns to those who learned of it? Source: "Yudkowsky said that Roko had already given nightmares to several LessWrong users and had brought them to the point of breakdown." Slate.
- ALT1: ... that some believe that reading this article may cause a superintelligence from the future to torture you for eternity? Source: "For Roko's Basilisk is an evil, godlike form of artificial intelligence, so dangerous that if you see it, or even think about it too hard, you will spend the rest of eternity screaming in its torture chamber."
- Reviewed: Panama cross-banded tree frog
Moved to mainspace by RenkoTheBird (talk). Nominated by Gobonobo (talk) at 16:14, 26 October 2022 (UTC).
- Comment: This article's topic doesn't make much sense to me. It's very interesting, and I don't regret reading up on it (also in some of the sources), but I would rather reject ALT1 as far too bold a claim. And worse, it's wrong: If I understand the concept correctly, then Slate is wrong in automatically condemning the reader to torture: For if you were to build the evil AI, you'd be spared.
- Personally, I feel like this theory of "punishment" relies far too much on knowing the future, which is something that quantum physics rejects (I'd argue that same problem applies to Newcomb's paradox). I must hesitantly admit that I'm not absolutely sure I understand the article's topic; but in this case, I would prefer more work to be done on the article to explain the intricacies of the thought experiment to the average amateur reader.
- I sure hope that whoever reviews this has a doctor of philosophy or something ^^ –LordPeterII (talk) 20:38, 26 October 2022 (UTC)
- @RenkoTheBird and Gobonobo: New and long enough, Earwig finds no copyvios (just blockquotes), QPQ done. A few of the sources are from the LessWrong wiki or Reddit, which aren't considered reliable sources, but in most cases they are paired with a secondary source, so together they are okay. There are a couple of cases where there is no secondary source, which needs to be resolved.
- Both hooks check out, ALT1 is very clever. I made a slight emendation: it's not interesting to say that something "may happen", so I added "some believe that...". John P. Sadowski (NIOSH) (talk) 22:15, 6 November 2022 (UTC)
- Hello, I attempted to resolve the issues. All standalone LessWrong wiki references are now supported. RenkoTheBird (talk) 14:10, 7 November 2022 (UTC)
- Looks good. ALT1 approved as more interesting. John P. Sadowski (NIOSH) (talk) 00:11, 18 November 2022 (UTC)
Ignored objections
I will copy the entire comment from the closed "Did you know" discussion, because it matches my own objections spot on:
- Comment: This article's topic doesn't make much sense to me. It's very interesting, and I don't regret reading up on it (also in some of the sources), but I would rather reject ALT1 (that reading this article about the basilisk is itself dangerous, my remark) as far too bold a claim. And worse, it's wrong: If I understand the concept correctly, then Slate is wrong in automatically condemning the reader to torture: For if you were to build the evil AI, you'd be spared.
- Personally, I feel like this theory of "punishment" relies far too much on knowing the future, which is something that quantum physics rejects (I'd argue that same problem applies to Newcomb's paradox). I must hesitantly admit that I'm not absolutely sure I understand the article's topic; but in this case, I would prefer more work to be done on the article to explain the intricacies of the thought experiment to the average amateur reader.
- I sure hope that whoever reviews this has a doctor of philosophy or something ^^ –LordPeterII (talk) 20:38, 26 October 2022 (UTC)
Why was this comment ignored completely? Elias (talk) 05:00, 26 May 2023 (UTC)
- Because it was an irrelevant personal opinion. An editor not understanding a topic or personally considering a reliable source to be wrong isn't really a reason to do anything. The comment was entirely just their subjective opinion on the article topic, which had nothing to do with the DYK nomination, so was rightly ignored. SilverserenC 05:46, 26 May 2023 (UTC)
- I do not really know what a DYK nomination is, but now I have lifted the comment out of the DYK discussion. To object more in my own words, and taking into account what you wrote here:
- The article does not sufficiently explain why a dominant superintelligence would want to punish individuals for not helping it come into existence and dominance in the past. It does not introduce timeless decision theory, and skips any discussion relating to the traditional motivations for punishment (and the functions/effects of punishment).
- Moreover, some editors seem to suggest using Wikipedia for trolling/scaring users, which would not be the slightest bit encyclopedic. Elias (talk) 08:43, 7 June 2023 (UTC)
- Unless reliable sources cover that sort of information, it's not something that would be included in this article. Wikipedia articles are only meant to reflect what the actual sources say and not have original research written into them. SilverserenC 23:07, 7 June 2023 (UTC)
Efforts to implement Roko's basilisk
Now that access to fairly capable AI tools has become widely available, efforts to create AIs inspired by Roko's basilisk have been undertaken. We should have a section discussing real-world efforts to implement Roko's basilisk.
One is here: https://github.com/calebrwalk5/basilisk
I know I've heard of another one, but I'm having trouble tracking it down. I'll update if I can find it. Sgrandpre (talk) 17:47, 30 August 2023 (UTC)
- Is github an RS? Slatersteven (talk) 17:55, 30 August 2023 (UTC)
- Hmm, probably not. I'll keep an eye out for another source discussing it.
- I realized that the other effort I was thinking of was Chaos-GPT [1], which is not an example of Roko's basilisk since it has no mandate to spare its creators. Sgrandpre (talk) 19:52, 30 August 2023 (UTC)
Implicit Religion
The link to "implicit religion" sends you to an article on "implicit atheism", which is not the same thing; in fact, the two are opposites. 2603:7000:A703:C99C:453A:BEB4:F83B:7B1A (talk) 10:53, 30 October 2024 (UTC)
Disclaimer needed
This article should have some sort of disclaimer. Though the idea is far-fetched, it is still technically an information hazard and readers should be warned. Such a disclaimer should be applied to all versions of the article (other languages). 2001:9E8:6986:C000:A4EE:AA1F:F23D:AE80 (talk) 22:27, 22 November 2024 (UTC)