A radical new idea for regulating AI

There’s a growing push to regulate AI.

But what would that actually look like?

AI is either an idea or a tool, depending on who you ask. And as a rule, the government doesn’t just start regulating tools or ideas. It usually finds a lever: a place to intervene and, ideally, a hard rationale for doing so.

Last week, the leading futurist Jaron Lanier laid out a powerful case for both in The New Yorker. He argued for a principle called “data dignity”: the idea that “digital stuff would typically be connected with the humans who want to be known for having made it.” In practical terms, this means you or I would actually have some claim on the huge data trails we leave, and on the ways they’re being used to train powerful artificial minds like GPT-4.

For Lanier and the wider community of experts who have been exploring this idea, “data dignity” has two key pillars. First, it’s a way to keep AIs closely tethered to people, rather than spinning off on their own in terrifying ways. Second, it offers clear, practical guidelines for regulating how these systems are built and used, and who profits from them.

Which, in some ways, seems almost obvious. If these models are worthless without the stuff we post on the internet, shouldn’t we have a say over when, where, and how that stuff is used?
