How AI could make the next big crisis way, way worse

There are plenty of big global problems that people are hoping AI can finally help solve: climate change, traffic deaths, loneliness.

But what if AI, faced with a sudden crisis, is actually the wrong tool to manage a big problem in real time? What if it might make a bad situation drastically worse?

That’s the bleak potential future that Anselm Küsters, a tech researcher and historian at the Center for European Policy in Berlin, explored in a research paper published last December titled “AI as Systemic Risk in a Polycrisis.”

If that last word looks unfamiliar, “polycrisis” is an idea laid out by Columbia University historian Adam Tooze to describe the slow-rolling, mutually reinforcing combination of parallel risks we’re living through — risks to climate, markets, and the security of Europe, just to name a few examples.

In that environment, Küsters argues, when something goes gravely wrong, AI systems trained on older data from a relatively “peaceful” world might be woefully ill-equipped to handle a more chaotic one.

How much should we worry about this, and is there anything we can do? I called him yesterday to discuss the origins of his project, the gulf between “data haves” and “data have-nots” in a global crisis, and what the European Union is getting right (and wrong) in the AI Act currently making its way through the European Parliament. An edited and condensed version of our conversation follows:

Continued here

Source: politico.com
