In AI we trust — too much

AI systems intended to help people make tough choices — like prescribing the right drug or setting the length of a prison sentence — can instead end up effectively making those choices for them, thanks to human faith in machines.

How it works: These programs generally offer new information or a few options meant to help a human decision-maker choose more wisely.

  • But an overworked or overly trusting person can fall into a rubber-stamping role, unquestioningly following algorithmic advice.

Why it matters: Over-reliance on potentially faulty AI can harm the people whose lives are shaped by critical decisions about employment, health care, legal proceedings and more.

The big picture: This phenomenon is called automation bias. Early studies focused on autopilot for airplanes — but as automation technology becomes more complex, the problem could get much worse, with more dangerous consequences.

  • AI carries an aura of legitimacy and accuracy, burnished by overeager marketing departments and underinformed users.
  • But AI is just fancy math. Like any equation, if you give it incorrect inputs, it will return wrong answers. And if it learns patterns that don’t reflect the real world, its output will be equally flawed.

“When people have to make decisions in relatively short timeframes, with little information — this is when people will tend to just trust whatever the algorithm gives them,” says Ryan Kennedy, a University of Houston professor who researches trust and automation.

  • “The worst-case scenario is somebody taking these algorithmic recommendations, not understanding them, and putting us in a life or death situation,” Kennedy tells Axios.

Automation bias caused by simpler technologies has already been blamed for real-world disasters. And now, institutions are pushing AI systems further into high-stakes decisions.

  • In hospitals: A forthcoming study found that Stanford physicians “followed the advice of [an AI] model even when it was pretty clearly wrong in some cases,” says Matthew Lungren, a study author and the associate director of the university’s Center for Artificial Intelligence in Medicine and Imaging.
  • At war: Weapons are increasingly automated, but usually still require human approval before they shoot to kill. In a 2004 paper, Missy Cummings, now the director of Duke University’s Humans and Autonomy Lab, wrote that automated aids for aviation or defense “can cause new errors in the operation of a system if not designed with human cognitive limitations in mind.”
  • On the road: Sophisticated driver assists like Tesla’s Autopilot still require people to intervene in dangerous situations. But a 2015 Duke study found that humans lose focus when they’re just monitoring a car rather than driving it.

And in the courtroom, human prejudice mixes in.

  • In a recent Harvard experiment, participants deviated from automated risk assessments presented to them — they were more likely to decrease their own risk predictions for white defendants but increase them for black defendants.

What’s next: More information about an algorithm’s confidence level can give people clues about how much they should lean on it. Lungren says the Stanford physicians made fewer mistakes when they were given a recommendation and an accuracy estimate (a rough sketch of that kind of display follows this list).

  • In the future, a machine may adjust to a user’s behavior — say, by showing its work when a person is trusting its advice too much, or by backing off if the user seems tired or stressed, which can make people less critical.
  • “Humans are good at seeing nuance in a situation that automation can’t,” says Neera Jain, a Purdue professor who studies human–machine interaction. “[We are] trying to avoid those situations where we become so over-reliant that we forget we have our own brains that are powerful and sophisticated.”
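The kind of display Lungren describes, a recommendation paired with a confidence estimate, might look roughly like the following minimal sketch. The label names, probabilities and the 0.75 review threshold are hypothetical placeholders, not details from the Stanford study.

```python
# Minimal, hypothetical sketch: pair a model's top recommendation with a
# calibrated confidence estimate so the human reviewer knows how much weight
# to give it. The labels and the 0.75 threshold are illustrative placeholders.

def present_recommendation(probabilities: dict[str, float],
                           review_threshold: float = 0.75) -> str:
    """Format a model's top prediction with its confidence for a human reviewer."""
    label, confidence = max(probabilities.items(), key=lambda kv: kv[1])
    if confidence < review_threshold:
        return (f"Suggested: {label} (confidence {confidence:.0%}). "
                f"Low confidence: please review independently.")
    return f"Suggested: {label} (confidence {confidence:.0%})."

# Example with made-up numbers for a hypothetical chest X-ray classifier.
print(present_recommendation({"pneumonia": 0.62, "no finding": 0.38}))
# -> Suggested: pneumonia (confidence 62%). Low confidence: please review independently.
```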

 
Fighting hate with AI-powered retorts

Scientists have long tried to use AI to automatically detect hate speech, which is a huge problem for social network users. And they’re getting better at it, despite the difficulty of the task.

What’s new: A project from UC Santa Barbara and Intel takes a big step further — it proposes a way to automate responses to online vitriol.

  • The researchers cite a widely held belief that counterspeech is a better antidote to hate than censorship.
  • Their ultimate vision is a bot that steps in when someone has crossed the line, reining them in and potentially sparing the target.

The big picture: Automated text generation is a buzzy frontier of the science of speech and language. In recent years, huge advances have elevated these programs from error-prone autocomplete tools to super-convincing — though sometimes still transparently robotic — authors.

How it works: To build a good hate speech detector, you need some actual hate speech. So the researchers turned to Reddit and Gab, two social networks with little to no policing and a reputation for rancor.

  • For maximum bile, they went straight for the “whiniest most low-key toxic subreddits,” as curated by Vice. They grabbed about 5,000 conversations from those forums, plus 12,000 from Gab.
  • They passed the threads to workers on Amazon Mechanical Turk, a crowdsourcing platform, who were asked to identify hate speech in the conversations and write short interventions to defuse the hateful messages.
  • The researchers trained several kinds of AI text generators on these conversations and responses, priming them to write responses to toxic comments (a rough sketch of that training setup follows this list).
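To make that training step concrete, here is a minimal, hypothetical sketch of fine-tuning a generic pretrained text generator on pairs of toxic comments and crowd-written interventions. It is not the researchers’ code: T5 merely stands in for the “several kinds of AI text generators,” and the training pairs below are placeholders for the annotated Reddit and Gab threads.

```python
# Hypothetical sketch, not the paper's code: fine-tune a generic pretrained
# text generator (T5 as a stand-in) on pairs of toxic comments and
# crowd-written interventions. The data below are placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

# Placeholder examples standing in for the annotated Reddit/Gab threads.
pairs = [
    ("<toxic comment from a flagged thread>",
     "<short human-written intervention asking the poster to stop>"),
]

model.train()
for comment, intervention in pairs:
    inputs = tokenizer("intervene: " + comment, return_tensors="pt",
                       truncation=True, max_length=256)
    labels = tokenizer(intervention, return_tensors="pt",
                       truncation=True, max_length=64).input_ids
    loss = model(**inputs, labels=labels).loss  # standard seq2seq cross-entropy
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# At inference time, the fine-tuned model drafts an intervention for a new comment.
prompt = tokenizer("intervene: <new toxic comment>", return_tensors="pt")
generated = model.generate(**prompt, max_new_tokens=40)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```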

The results: Some of the computer-generated responses could easily pass as human-written — like, “Use of the c-word is unacceptable in our discourse as it demeans and insults women” or “Please do not use derogatory language for intellectual disabilities.”

  • But the replies were inconsistent, and some were incomprehensible: “If you don’t agree with you, there’s no need to resort to name calling.”
  • When Mechanical Turk workers were asked to evaluate the output, they preferred human-written responses more than two-thirds of the time.

Our take: This project didn’t test how effective the responses were in stemming hate speech — just how successful other people thought they might be.

  • Even the most rational, empathetic response, not to mention the somewhat robotic computer-generated ones above, could flop or even backfire — especially if Reddit trolls knew they were being policed by bots.

“We believe that bots will need to declare their identities to humans at the beginning,” says William Wang, a UCSB computer scientist and paper co-author. “However, there is more research needed [on] how exactly the intervention will happen in human-computer interaction.”

 
The roots of the deepfake threat

The threat of deepfakes to elections, businesses and individuals is the result of a breakdown in the way information spreads online — a long-brewing mess that involves a decades-old law and tech companies that profit from viral lies and forgeries.

Why it matters: The problem likely will not end with better automated deepfake detection, or a high-tech method for proving where a photo or video was taken. Instead, it might require far-reaching changes to the way social media sites police themselves.

Driving the news: Speaking at a Friday conference hosted by the Notre Dame Technology Ethics Center, deepfake experts from law, business and computer science described an entrenched problem with roots far deeper than the first AI-manipulated videos that surfaced two years ago.

  • The technology that powers them goes back to the beginning of the decade, when harmful AI-generated revenge porn or fraudulent audio deepfakes weren’t yet on the map.
  • “We as researchers did not have this in mind when we created this software,” Notre Dame computer scientist Pat Flynn says. “We should have. I admit to a failing as a community.”

But the story begins in earnest back in the 1990s, along with the early internet.

  • When web browsers started supporting images, people predictably uploaded porn with celebrities’ faces pasted on. That, it turns out, was just the beginning. Now, 96% of deepfakes are nonconsensual porn, nearly all of them targeting women.
  • “There was something much more dark coming if we sat back [in the 90s] and let people use women’s faces and bodies in ways they never consented to,” Mary Anne Franks, a law professor at the University of Miami, points out.

Part of a 1996 law, the Communications Decency Act, allows internet platforms to keep their immunity from lawsuits over user-created content even when they moderate or “edit” the postings.

  • Now, lawmakers are toying with revising it — or even (less likely) yanking it completely, Axios tech policy reporter Margaret Harding McGill reported this week.
  • The argument is that companies are not holding up their end of the bargain. “The responsibility lies with platforms. They are exploiting these types of fake content,” Franks said. “We can’t keep acting like they’re simply innocent bystanders.”

A massive challenge for platforms is dealing with misinformation quickly, before it can cause widespread damage.

  • Ser-Nam Lim, a Facebook AI research manager, described the company’s goal: an automated system that flags potentially manipulated media to humans for fact-checking (a rough sketch of that kind of triage follows this list).
  • But, as I argued on a separate panel Friday, platforms are the first line of defense against viral forgeries. Facebook’s human fact-checking can be painfully slow — in one recent case, it took more than a day and a half — and so the company’s immediate reaction, or lack thereof, carries a lot of weight.
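Such a human-in-the-loop triage, described only at a high level in Lim’s remarks, might be structured roughly like the hypothetical sketch below: an automated detector scores incoming media, and anything above a threshold is queued for human fact-checkers rather than acted on automatically. The detector, threshold and queue here are illustrative placeholders, not Facebook’s system.

```python
# Hypothetical sketch of human-in-the-loop triage for possibly manipulated media:
# an automated detector scores uploads, and likely manipulations are queued for
# human fact-checkers instead of being judged by the model alone.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class ReviewQueue:
    items: List[str] = field(default_factory=list)

    def enqueue(self, media_id: str, score: float) -> None:
        # In a real system this would notify human fact-checkers.
        self.items.append(f"{media_id} (manipulation score {score:.2f})")


def triage(media_id: str, detector: Callable[[str], float],
           queue: ReviewQueue, threshold: float = 0.5) -> str:
    """Score a piece of media and route suspicious items to human review."""
    score = detector(media_id)
    if score >= threshold:
        queue.enqueue(media_id, score)
        return "sent to human fact-checkers"
    return "no automated flag"


# Example with a stubbed-in detector that always returns a high score.
queue = ReviewQueue()
print(triage("video_123", detector=lambda _: 0.87, queue=queue))
```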

Go deeper: Social media reconsiders its relationship with the truth

Source: axios.com
