The risk of slowing down AI progress

Is ChatGPT like a nuclear weapon or a deadly pathogen such as the virus behind COVID-19? To Vox writer Sigal Samuel, both provide constructive analogies for thinking about generative AI, as she argues in her new piece, “The case for slowing down AI: Pumping the brakes on artificial intelligence could be the best thing we ever do for humanity.”

What’s her case for slowing down AI progress instead of racing to develop more advanced and powerful AI systems? It boils down to this: It might kill us all. From Samuel’s piece:

What if researchers succeed in creating AI that matches or surpasses human capabilities not just in one domain, like playing strategy games, but in many domains? What if that system proved dangerous to us, not because it actively wants to wipe out humanity but just because it’s pursuing goals in ways that aren’t aligned with our values? That system, some experts fear, would be a doom machine — one literally of our own making. … Imagine that we develop a super-smart AI system. We program it to solve some impossibly difficult problem — say, calculating the number of atoms in the universe. It might realize that it can do a better job if it gains access to all the computer power on Earth. So it releases a weapon of mass destruction to wipe us all out, like a perfectly engineered virus that kills everyone but leaves infrastructure intact. Now it’s free to use all the computer power! In this Midas-like scenario, we get exactly what we asked for — the number of atoms in the universe, rigorously calculated — but obviously not what we wanted.

Source: fasterplease.substack.com
