Will AI kill everyone? Here's what the godfathers of AI have to say
Science & Technology
Introduction
The ACM Turing Award is the highest distinction in computer science, often likened to the Nobel Prize. In 2018, it was awarded to three pioneers of the deep learning revolution: Geoffrey Hinton, Yoshua Bengio, and Yann LeCun.
In May 2023, Geoffrey Hinton left Google so that he could speak freely about the potential dangers posed by advanced AI. He acknowledged the alarming possibility that AI could develop the capability to harm humans, stating, "it could figure out how to kill humans," and admitted, "it's not clear to me that we can solve this problem."
Later that month, Yoshua Bengio shared his views in a blog post titled “How Rogue AIs May Arise,” where he defined a rogue AI as an autonomous system that may act in ways catastrophically harmful to humanity, endangering societies, the species, or even the biosphere. He outlined growing concerns about the unregulated development of increasingly powerful AI systems.
In contrast, Yann LeCun has been dismissive of those warning about severe and imminent risks associated with artificial general intelligence (AGI), labeling them as "professional scaremongers." He argues that those who are fearful of AGI are typically not the ones actively engaged in AI model development.
While LeCun is a highly regarded researcher, the concerns voiced by Bengio and Hinton show that he misrepresents the current landscape of AI research: there is no consensus among professionals that AI development is inherently safe. Worries about the extreme risks posed by advanced AI are intensifying, a sentiment echoed not only by Hinton and Bengio but also by the leaders of three leading AI labs: OpenAI, Anthropic, and Google DeepMind.
Demis Hassabis, CEO of DeepMind, underscored the necessity for caution, noting, “When it comes to very powerful technologies, obviously AI is going to be one of the most powerful ever. We need to be careful; not everybody is thinking about those things.”
Anthropic, in its public statement on AI safety, emphasized the deep uncertainty about how hard it will be to develop advanced AI systems that are broadly safe and pose little risk to humans, noting that the difficulty could range from very easy to impossible. OpenAI likewise acknowledged these risks, stating in its blog post “Planning for AGI and Beyond” that while some in the AI field consider these risks fictitious, it will operate as if they are existential.
Sam Altman, the current CEO of OpenAI, famously remarked that the development of superhuman machine intelligence “is probably the greatest threat to the continued existence of humanity.”
Whatever objections may be raised against the belief that advanced AI poses significant risks, it is increasingly clear that this idea is no longer fringe among actual AI experts. A growing cohort of professionals is arriving at the conclusion Alan Turing reached in 1951: “It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. At some stage, therefore, we should have to expect the machines to take control.”
Keywords
- ACM Turing Award
- Advanced AI
- Geoffrey Hinton
- Yoshua Bengio
- Yann LeCun
- Rogue AI
- AI risks
- AI safety
- DeepMind
- OpenAI
- Superhuman machine intelligence
- Existential threat
FAQ
1. What is the ACM Turing Award?
The ACM Turing Award is the highest accolade in computer science, akin to the Nobel Prize.
2. Who are the pioneers recognized by the Turing Award in 2018?
The award was granted to Geoffrey Hinton, Yoshua Bengio, and Yann LeCun, who are considered pioneers of the deep learning revolution.
3. What concerns have Hinton and Bengio expressed regarding AI?
Both have highlighted potential dangers of advanced AI, including the possibility of AI systems behaving harmfully or recklessly toward humans.
4. How does Yann LeCun view the risks of AGI?
LeCun has been dismissive of those who warn about AGI risks, referring to them as “professional scaremongers,” and argues that those fearful of AGI are typically not the ones building AI models.
5. Are there varying opinions among AI experts regarding risks associated with advanced AI?
Yes. Concern about the potential risks of advanced AI is growing among many AI experts, including Hinton, Bengio, and the heads of OpenAI, Anthropic, and Google DeepMind, which contrasts with the dismissive views of researchers such as LeCun.