The Center for AI Safety (CAIS) recently published a statement, signed by prominent figures in AI, warning about the potential risks the technology poses to humanity.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement reads.
The signatories include notable researchers and Turing Award winners Geoffrey Hinton and Yoshua Bengio, as well as executives from OpenAI and DeepMind, including Sam Altman, Ilya Sutskever, and Demis Hassabis.
The CAIS letter, which aims to spark discussion about a range of emerging AI-related risks, has drawn both support and criticism across the industry. It follows another open letter, signed by Elon Musk, Steve Wozniak, and more than 1,000 other experts, which called for a halt to “out of control” AI development.
The latest statement is brief and does not define AI or offer specific strategies for mitigating its risks. However, CAIS made clear in a press release that its goal is to establish safeguards and institutions to effectively manage AI risks.
OpenAI CEO Sam Altman has actively engaged with world leaders and advocated for AI regulation. During a recent Senate appearance, Altman repeatedly called on lawmakers to tightly regulate the industry. The CAIS statement is consistent with his efforts to raise awareness about the dangers of AI.
While the open letter has received a lot of attention, some AI ethics experts have criticized the trend of issuing such statements.
Dr. Sasha Luccioni, a research scientist in machine learning, suggests that placing the hypothetical risks of AI alongside concrete risks such as pandemics and climate change lends them undue credibility while diverting attention from pressing issues such as bias, legal challenges, and consent.
Author and futurist Daniel Jeffries argues that discussing the risks of AI has become a status game in which individuals jump on the bandwagon at no real cost.
Critics believe that by signing an open letter about future threats, those responsible for current AI harms can ease their guilt while ignoring the ethical issues associated with AI technologies already in use.
But CAIS, a San Francisco-based nonprofit, remains focused on mitigating societal-scale risks from AI through technology research and advocacy. The organization was co-founded by experts with backgrounds in computer science and a strong interest in AI safety.
While some researchers are concerned about the emergence of superintelligent AI that surpasses human capabilities and could pose an existential threat, others argue that signing an open letter about a hypothetical doomsday scenario distracts from the ethical dilemmas AI already raises. They emphasize the need to address the real-world problems AI poses today, including surveillance, biased algorithms, and human rights violations.
Balancing AI progress with responsible implementation and regulation remains a key challenge for researchers, policymakers, and industry leaders alike.
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo, taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.
Check out other upcoming enterprise technology events and webinars from TechForge here.
Source: “AI leaders warn of ‘extinction risk’ in open letter” — https://www.artificialintelligence-news.com/2023/05/31/ai-leaders-warn-about-risk-of-extinction-in-open-letter/