Pause Giant AI Experiments: An Open Letter

Pause Giant AI Experiments: An Open Letter is the title of a letter published by the Future of Life Institute in March 2023. The letter calls on "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4", citing risks such as AI-generated propaganda, extreme automation of jobs, human obsolescence, and a society-wide loss of control.[1] It received more than 30,000 signatures, including AI researchers Yoshua Bengio and Stuart Russell, technology executives Elon Musk and Steve Wozniak, and author Yuval Noah Harari.[1][2][3]

Motivations

The letter was published a week after the release of OpenAI's large language model GPT-4. It asserts that current large language models are "becoming human-competitive at general tasks", citing a Microsoft Research paper reporting early experiments with GPT-4 that, in the authors' words, show "sparks" of artificial general intelligence (AGI).[4] The letter describes AGI as posing numerous serious risks, especially amid race-to-the-bottom dynamics that may incentivize AI labs to overlook safety in order to deploy products more quickly.[5]

The letter asks that AI research be refocused on making today's powerful AI systems "more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal". It also recommends stronger government regulation, independent audits before training AI systems, "tracking highly capable AI systems and large pools of computational capability", and "robust public funding for technical AI safety research".[1] FLI suggests using the "amount of computation that goes into a training run" as a proxy for how powerful an AI system is, and thus as a threshold, as sketched below.[6]
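FLI does not specify how training compute should be measured. As a purely illustrative sketch, the snippet below applies the common rule-of-thumb estimate of roughly 6 FLOPs per model parameter per training token; both the formula's use here and the threshold value are assumptions for illustration, not figures proposed in the letter.

```python
# Illustrative sketch of a compute-based threshold for AI governance.
# Uses the rule-of-thumb estimate of ~6 FLOPs per parameter per training
# token; the threshold below is hypothetical, not an FLI proposal.

def training_flops(n_parameters: float, n_tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6.0 * n_parameters * n_tokens

ILLUSTRATIVE_THRESHOLD_FLOPS = 1e25  # hypothetical regulatory trigger

def exceeds_threshold(n_parameters: float, n_tokens: float) -> bool:
    """True if an estimated training run crosses the illustrative threshold."""
    return training_flops(n_parameters, n_tokens) > ILLUSTRATIVE_THRESHOLD_FLOPS

# Example: a 175-billion-parameter model trained on 300 billion tokens
# yields about 3.2e23 FLOPs, below the illustrative 1e25 threshold.
print(exceeds_threshold(175e9, 300e9))  # False
```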

Reception

The letter received widespread media coverage, with support from a range of high-profile figures. As of July 2024, no pause had taken place; instead, as FLI noted on the letter's one-year anniversary, AI companies had directed "vast investments in infrastructure to train ever-more giant AI systems".[7] However, the letter was credited with generating a "renewed urgency within governments to work out what to do about the rapid progress of AI", and with reflecting the public's growing concern about the risks posed by AI.[8]

Eliezer Yudkowsky wrote that the letter "doesn't go far enough" and argued that it should call for an indefinite pause. He fears that solving the alignment problem might take several decades and that any sufficiently intelligent misaligned AI might cause human extinction.[9]

Some IEEE members gave various reasons for signing the letter, for example that "There are too many ways these systems could be abused. They are being freely distributed, and there is no review or regulation in place to prevent harm."[10] One AI ethicist argued that the letter raises awareness of issues such as voice cloning, but contended that it was neither actionable nor enforceable.[11]

The letter has been criticized for diverting attention from more immediate societal risks such as algorithmic bias.[12] Timnit Gebru and others argued that it was sensationalist, amplifying "some futuristic, dystopian sci-fi scenario" rather than the problems AI causes today.[11]

Microsoft co-founder Bill Gates chose not to sign the letter, stating that he does not think "asking one particular group to pause solves the challenges".[13] Sam Altman, CEO of OpenAI, commented that the letter was "missing most technical nuance about where we need the pause" and stated: "An earlier version of the letter claimed OpenAI is training GPT-5 right now. We are not and won't for some time."[14] Reid Hoffman argued the letter was "virtue signalling" with no real impact.[15]

List of notable signatories

Notable signatories of the letter include Yoshua Bengio, Stuart Russell, Elon Musk, Steve Wozniak and Yuval Noah Harari, among others.[1]

References

  1. "Pause Giant AI Experiments: An Open Letter". Future of Life Institute. Retrieved 2024-07-19.
  2. Metz, Cade; Schmidt, Gregory (2023-03-29). "Elon Musk and Others Call for Pause on A.I., Citing 'Profound Risks to Society'". The New York Times. ISSN 0362-4331. Retrieved 2024-08-20.
  3. Hern, Alex (2023-03-29). "Elon Musk joins call for pause in creation of giant AI 'digital minds'". The Guardian. ISSN 0261-3077. Retrieved 2024-08-20.
  4. Bubeck, Sébastien; Chandrasekaran, Varun; Eldan, Ronen; Gehrke, Johannes; Horvitz, Eric; Kamar, Ece; Lee, Peter; Lee, Yin Tat; Li, Yuanzhi; Lundberg, Scott; Nori, Harsha; Palangi, Hamid; Ribeiro, Marco Tulio; Zhang, Yi (2023-04-12). "Sparks of Artificial General Intelligence: Early experiments with GPT-4". arXiv:2303.12712 [cs.CL].
  5. "MPs warned of AI arms race to the bottom". ComputerWeekly.com. Retrieved 2023-04-13.
  6. "FAQs about FLI's Open Letter Calling for a Pause on Giant AI Experiments" (2023-03-31). Future of Life Institute. Retrieved 2023-04-13.
  7. Aguirre, Anthony (2024-03-22). "The Pause Letter: One year later". Future of Life Institute. Retrieved 2024-07-19.
  8. "Six months after call for AI pause, are we closer to disaster?". Euronews. 2023-09-21. Retrieved 2024-07-19.
  9. "The Open Letter on AI Doesn't Go Far Enough". Time. 2023-03-29. Retrieved 2023-04-13.
  10. "'AI Pause' Open Letter Stokes Fear and Controversy". IEEE Spectrum. Retrieved 2023-04-13.
  11. Anderson, Margo (2023-04-07). "'AI Pause' Open Letter Stokes Fear and Controversy". IEEE Spectrum. Retrieved 2024-07-03.
  12. Paul, Kari (2023-04-01). "Letter signed by Elon Musk demanding AI research pause sparks controversy". The Guardian. ISSN 0261-3077. Retrieved 2023-04-14.
  13. Rigby, Jennifer (2023-04-04). "Bill Gates says calls to pause AI won't 'solve challenges'". Reuters. Retrieved 2023-04-13.
  14. Vincent, James (2023-04-14). "OpenAI's CEO confirms the company isn't training GPT-5 and 'won't for some time'". The Verge.
  15. Heath, Ryan (2023-09-22). "The great AI "pause" that wasn't". Axios.