Connor Leahy is a German-American[1] artificial intelligence researcher and entrepreneur currently serving as CEO of AI safety research company Conjecture.[2] He has warned of the existential risk from artificial general intelligence, and has called for regulation such as "a moratorium on frontier AI runs" implemented through a cap on compute.[3]
Career
In 2019, Leahy reverse-engineered GPT-2 in his bedroom, and later co-founded EleutherAI to attempt to replicate GPT-3.[4]
Leahy was one of the signatories of the 2023 open letter from the Future of Life Institute calling for "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."[5][6]
In November 2023, Leahy was invited to speak at the inaugural AI Safety Summit.[2]
References
- ^ "Memes tell the story of a secret war in tech. It's no joke". ABC News. 2024-02-17. Retrieved 2024-07-01.
- ^ a b Stacey, Kiran; Milmo, Dan (2023-10-20). "Sunak's global AI safety summit risks achieving very little, warns tech boss". The Guardian. ISSN 0261-3077. Retrieved 2024-07-01.
- ^ Perrigo, Billy (2024-01-19). "Researcher: To Stop AI Killing Us, First Regulate Deepfakes". TIME. Retrieved 2024-07-01.
- ^ Smith, Tim (2023-03-29). "'We are super, super fucked': Meet the man trying to stop an AI apocalypse".
- ^ Evans, Greg (2023-03-29). "Elon Musk & Steve Wozniak Sign Open Letter Calling For Moratorium On Some Advanced A.I. Systems". Deadline. Retrieved 2024-07-01.
- ^ "Pause Giant AI Experiments: An Open Letter". Future of Life Institute. Retrieved 2024-07-01.