AI godfather Geoffrey Hinton raises alarm over tech giants downplaying AI risks

On a recent episode of the ‘One Decision’ podcast, Hinton accused industry leaders of publicly understating AI’s dangers, while praising Google DeepMind CEO Demis Hassabis, who advocates global cooperation on safety limits and says AI systems can be taught morality, much like educating a child

BestMediaInfo Bureau

New Delhi: Geoffrey Hinton, widely regarded as the “Godfather of AI” and a 2024 Nobel Laureate in Physics, has issued a stark warning about the dangers of advanced artificial intelligence (AI) systems, accusing major tech companies of understating the risks. In a recent episode of the ‘One Decision’ podcast aired on July 24, Hinton singled out Google DeepMind CEO Demis Hassabis as a rare industry leader who truly understands and is committed to addressing these concerns. 

“Many of the people in big companies, I think, are downplaying the risk publicly,” Hinton said. 

“People like Demis, for example, really do understand the risks and really want to do something about it.”

Hinton, who spent over a decade at Google before resigning to speak freely about AI’s dangers, expressed unease about the accelerating pace of AI advancements. He highlighted that advanced AI systems are learning in ways humans don’t fully understand, posing significant risks if not properly managed. Hinton’s concerns centre on the potential for AI to be misused or to operate beyond human control, a sentiment echoed by Hassabis, who has long advocated for responsible AI development.

Hassabis, who co-founded DeepMind in 2010 and sold it to Google in 2014 for a reported $650 million, now leads Google’s premier AI research lab. DeepMind has been at the forefront of AI innovation, known for breakthroughs such as AlphaGo, which defeated a world champion at the board game Go, and AlphaFold, which cracked the decades-old problem of predicting protein structures. Hinton praised Hassabis for his focus on safety, noting that he is among the few leaders prioritising ethical governance.

Earlier this year, Hassabis warned that powerful AI systems could become difficult to control without proper oversight. In a February statement, he emphasised the need for an international regulatory framework, likening it to a “digital Geneva Convention” to prevent misuse. “A bad actor could repurpose those same technologies for a harmful end,” he said.

Hassabis’ concerns extend beyond job displacement, a common fear in AI discussions. While acknowledging that AI will disrupt roles, potentially transforming industries more profoundly than the Industrial Revolution, he believes the greater threat lies in misuse by malicious actors. 

He advocates for global cooperation to establish safety limits, suggesting AI systems can be taught morality, much like educating a child. “They learn by demonstration. They learn by teaching,” Hassabis said in a ‘60 Minutes’ interview on April 20. “And I think that’s one of the things we have to do with these systems, is to give them a value system and guidance, and some guardrails around that, much in the way that you would teach a child.”

Hinton’s critique extends to other tech leaders, whom he labelled as “oligarchs” controlling AI’s trajectory. “The people who control AI, people like Musk and Zuckerberg, they are oligarchs,” he said on the ‘One Decision’ podcast. 

When asked if he trusted them, Hinton replied, “I think when I called them oligarchs, you know the answer to that.” His remarks reflect frustration with the lack of transparency and accountability in the industry, particularly as companies race toward artificial general intelligence (AGI), AI with human-level cognitive abilities, which Hassabis predicts could emerge within five to ten years.

Hinton expressed regret for not prioritising AI safety earlier in his career, a sentiment that underscores his current urgency. Although Google asked him to stay and focus on safety, he left and has since become a vocal advocate for responsible AI development, warning that corporate leaders are aware of the risks but often avoid meaningful action.

Despite the risks, Hassabis remains optimistic about AI’s potential to transform humanity for the better. In the ‘60 Minutes’ interview, he predicted that AI could revolutionise drug discovery, reducing development timelines from years to “maybe months or maybe even weeks.” 

DeepMind’s AlphaFold, which mapped 200 million protein structures in a year, a task that would have taken “a billion years of PhD time,” exemplifies this potential. Hassabis believes AI could help cure all diseases within a decade, a vision that earned praise from Perplexity AI CEO Aravind Srinivas, who called him a “genius” and urged that he be given all necessary resources.

However, Hassabis cautions that AI still lacks imagination and nuanced understanding, describing current systems as an “average of all the human knowledge” they’re trained on. 

He advises building intelligent tools to advance neuroscience before exploring concepts like self-awareness, which he does not see as an immediate goal. “My advice would be to build intelligent tools first and then use them to help us advance neuroscience before we cross the threshold of thinking about things like self-awareness,” he told ‘60 Minutes’.

As AI advances at an “exponential” pace, Hinton and Hassabis emphasise the need for coordinated global efforts to ensure safety and ethical use. Recent protests outside DeepMind’s London office demanding greater transparency highlight public unease about AI’s trajectory. With Hassabis leading Google’s charge toward AGI and advocating for responsible stewardship, the industry faces a critical juncture. “It’s moving incredibly fast,” Hassabis said, noting that the influx of talent and resources is fuelling rapid progress.
