Industry leaders in artificial intelligence warn that AI poses an existential threat to humanity on par with nuclear war and pandemics, and urge that mitigating the risk be made a global priority.
Leading figures in artificial intelligence, including the CEOs of OpenAI and Google DeepMind, have signed a statement warning that AI systems pose an existential threat to humanity comparable to nuclear war and pandemics. The statement, released by the Center for AI Safety, declares that mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks. Concern that AI could become uncontrollable and destroy humanity has gained traction in recent months as AI capabilities have advanced rapidly. While some AI researchers and industry leaders support the statement, others argue that it distracts from immediate harms such as bias, disinformation, and concentrated corporate power. National governments are paying increasing attention to AI risks and regulation, and existential concerns are now part of those discussions.