
Voluntary commitments insufficient to end race to godlike AI
Earlier this year, many artificial intelligence experts signed a letter stating that mitigating extinction risks from AI should be a global priority. Even executives of AI companies have said that they believe smarter-than-human AI systems are the ‘greatest threat to the existence of humanity’, and that AI has a 10% to 25% chance of causing a civilisation-wide catastrophe. These risks are credible and worth taking seriously.
Catastrophic risks could be mitigated if AI experts had sufficient time, patience and caution to ensure they know how to control extremely powerful systems. Right now, however, the frontier of AI development is characterised not by caution or patience but by a handful of powerful AI companies racing towards godlike AI, pouring billions of dollars into the development of increasingly powerful systems.
There are public policy solutions that could substantially slow this reckless race towards AI catastrophe. So far, however, AI companies are not advocating for them – and some evidence suggests that they are actively opposing meaningful regulation.