“What sets OpenAI apart is the ambition of its mission: ‘to ensure that artificial general intelligence — AI systems that are generally smarter than humans — benefits all of humanity.’ Many of its employees believe that this aim is within reach…perhaps one more decade (or even less)…” Kelsey Piper, “Why can’t OpenAI’s employees talk?”, Vox’s Future Perfect newsletter, 17 May 2024 (brackets original)
Statement by Sam Altman, CEO of OpenAI since 2019:
“…we are going to operate as if these risks are existential…A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that too…AGI capable enough to accelerate its own progress could cause major changes to happen surprisingly quickly…Success is far from guaranteed, and the stakes (boundless downside and boundless upside)…” (brackets original), Sam Altman, “Planning for AGI and beyond”, 24 February 2023
Holden Karnofsky is an American nonprofit executive. He is a co-founder of the research and grantmaking organization Open Philanthropy and its Director of AI Strategy.
“…the idea of AI itself going to war with humans…AI systems disempowering humans entirely, leading to a future that has little to do with anything humans value. (Like in the Terminator movies, minus the time travel and the part where humans win.)” (brackets and italics original), Holden Karnofsky, “AI Could Defeat All Of Us Combined”, 9 June 2022
“AI might form its own goals…AI…could defeat all of humanity combined, if (for whatever reason) it were pointed toward that goal. By ‘defeat,’ I don’t mean ‘subtly manipulate us’ or ‘make us less informed’ or something like that - I mean a literal ‘defeat’ in the sense that we could all be killed, enslaved or forcibly contained…if such an attack happened, it could succeed against the combined forces of the entire world.” (brackets and italics original), Karnofsky, Ibid.
“We generally don’t have a lot of things that could end human civilization if they ‘tried’ sitting around. If we’re going to create one, I think we should be asking not ‘Why would this be dangerous?’ but ‘Why wouldn’t it be?’…total civilizational defeat is a real possibility.” (italics original), Karnofsky, Ibid.
Read the full article for his answer to this question: “How can AIs be dangerous without bodies?”
Subscribe to his free newsletter to stay informed.