Teaching robots to see

Machine vision is a crucial missing link holding back the robotization of industries like manufacturing and shipping. But even as that field advances rapidly, there’s a larger hurdle that still blocks widespread automation — machine understanding.

Why it matters: Up against a shortage of workers, those sectors stand to benefit hugely from automation. But the people working in warehouses and factories could find their jobs changed or eliminated if vision technology sees new breakthroughs.

The big picture: Machine vision can help robots navigate spaces previously closed off to them, like a crowded warehouse floor or a cluttered front lawn. And it’s critical for tasks that require dexterity, like packing a box with oddly shaped objects.

  • Plus, AI can help make sense of the avalanche of video footage recorded daily, which far outstrips humanity’s ability to digest it.
  • Companies are scrambling to make use of that data to understand how people and vehicles move, or to check for tiny imperfections in new products.
  • The rise of AI-monitored cameras is also making surveillance inescapable at work and in public spaces.

Driving the news: In a report first shared with Axios, LDV Capital, a venture firm that invests in visual technologies, predicts an upheaval in manufacturing and logistics, driven primarily by computer vision.

  • “The majority of global factories, ports, and warehouses are understaffed and ill-equipped to meet still-rising requirements,” the report reads. Visual technologies will help change that, LDV argues.
  • In China, some “lights-out” factories have been built to operate without a single human present. But the U.S. will largely see robots employed in factories and warehouses not custom-built for robots, says Abby Hunter-Syed, VP of operations at LDV.

Yes, but: It’ll take more than just high-fidelity cameras and fast AI perception to make an intelligent robot.

  • A big unsolved challenge is imbuing robots with a deeper understanding of the world around them, so that they can interpret what they see and react to it.
  • “Domestic robots, for example, are just not going to arrive until machines can interpret scenes well,” says Gary Marcus, co-founder of robotics company Robust.ai. “You can do Roomba, but not Rosie the Robot.”

A broad understanding of the world helps us humans avoid confounding errors when we look around.

  • Even if we see a cloud perfectly shaped like a horse, we never actually think it’s a flying horse because we get how clouds work.
  • The same ability helps us handle objects easily — even ones we’ve never seen before. Humans can generally guess how to place an item on a surface so that it stays upright, for example, rather than tipping over.
  • “We’ve built physics models in our heads, and we’ve not quite been able to transfer them to robots,” says Avideh Zakhor, a Berkeley professor who studies computer vision.

The big question: How much of the problem is solvable with incremental improvements in machine vision, before robots need better common sense?

  • Evan Nisselson, a partner at LDV, argues that industry can get 85% or 90% of the way toward lucrative automation with better machine vision.
  • But that depends on how much warehouses and factories can remove variability and chaos from the areas where robots are working.

The bottom line: “The Rubicon here, which we haven’t crossed yet, is to not just be able to see objects,” says Marcus. “It’s interpreting scenes that will be the breakthrough.”

 

The hidden costs of AI

In the most exclusive AI conferences and journals, AI systems are judged largely on their accuracy: How well do they stack up against human-level translation or vision or speech?

Yes, but: In the messy real world, even the most accurate programs can stumble and break. Considerations that matter little in the lab, like reliability or computing and environmental costs, are huge hurdles for businesses.

Why it matters: Some stumbles that have tarred high-profile AI systems — like facial recognition that fails more often on darker faces, or medical AI that gives potentially harmful advice — have resulted from unanticipated real-world scenarios.

The big picture: Research often guns for incremental advances, juicing an extra percentage point of accuracy out of a previously proposed model.

  • “We feel like this is the only thing that gets rewarded nowadays in the community,” says Roy Schwartz, a researcher at the Allen Institute for Artificial Intelligence. “We want to argue that there are other things to consider.”
  • In a paper posted this summer, Schwartz and several co-authors proposed that researchers present not just the accuracy of their models but also the computing cost it took to get there — and the resulting environmental toll. (A rough sketch of that kind of reporting follows below.)
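
The reporting Schwartz and his co-authors call for boils down to publishing cost alongside quality. Here is a minimal sketch of that bookkeeping in Python; the power draw, carbon-intensity figure, and toy training function are placeholder assumptions for illustration, not numbers from the paper:

    import time

    # Placeholder figures for illustration only: average accelerator power
    # draw (watts) and grid carbon intensity (kg CO2 per kWh). Real reporting
    # would measure or look these up rather than assume them.
    ASSUMED_POWER_WATTS = 300.0
    ASSUMED_KG_CO2_PER_KWH = 0.4

    def report_run(train_fn):
        """Run a training job and report accuracy alongside its compute cost."""
        start = time.perf_counter()
        accuracy = train_fn()  # train_fn is expected to return a final accuracy
        elapsed_s = time.perf_counter() - start

        energy_kwh = ASSUMED_POWER_WATTS * elapsed_s / 3_600_000.0
        co2_kg = energy_kwh * ASSUMED_KG_CO2_PER_KWH
        print(f"accuracy={accuracy:.3f}  wall-clock={elapsed_s:.1f}s  "
              f"energy~{energy_kwh:.6f} kWh  CO2~{co2_kg:.6f} kg")

    # Toy stand-in for a real training loop: sleep briefly, "achieve" 91% accuracy.
    report_run(lambda: (time.sleep(2), 0.91)[1])
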

That’s one of several hidden considerations that can drive up the cost of creating an AI model that works in the real world. Among the other important factors accuracy doesn’t capture:

  • The cost of labeling training data, which is still mostly done by humans.
  • The reliability of a system, or, conversely, its tendency to make critical mistakes when it sees a new situation it hasn’t been trained to deal with.
  • Vulnerability to adversarial examples, a special kind of attack that can cripple certain kinds of AI systems, potentially making an autonomous car blind to a fast-approaching stop sign (a toy illustration follows after this list).
  • Bias, or whether a model’s accuracy varies depending on the kind of person or thing it is evaluating.
  • Interpretability of a model’s results, or how easy it is to understand why an AI system made a particular prediction.
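
Adversarial examples are easiest to grasp in code. The sketch below is a toy illustration, not an attack on any real system: a hand-rolled logistic-regression classifier in NumPy, with made-up weights, inputs, and attack budget, is flipped from a confident prediction to the opposite one by an FGSM-style nudge along the sign of the gradient:

    import numpy as np

    # Toy logistic-regression "classifier" with made-up weights.
    w = np.array([1.5, -2.0, 0.5])
    b = 0.1

    def predict_proba(x):
        """Probability of class 1 under the toy model."""
        return 1.0 / (1.0 + np.exp(-(w @ x + b)))

    # An input the model confidently assigns to class 1.
    x = np.array([1.0, -0.5, 0.2])
    print("clean input:    ", predict_proba(x))      # ~0.94

    # FGSM-style perturbation: the gradient of the logit with respect to x
    # is w, so stepping against sign(w) pushes the score toward class 0.
    epsilon = 0.8  # attack budget, chosen only to make the flip visible here
    x_adv = x - epsilon * np.sign(w)
    print("perturbed input:", predict_proba(x_adv))  # ~0.38, prediction flips
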

“The machine learning model is just a tiny piece of the machine learning product,” says Andrew Ng, founder of Landing.ai, a startup that helps companies set up AI processes.

  • “One of the challenges is that you ship a system and then the world changes in some weird and unpredictable way,” says Ng, who previously started Google’s and Baidu’s AI operations.
  • “Anyone can download code from GitHub, but that’s not the point,” Ng tells Axios. “You need all these other things.”

A byproduct of the hidden costs that come with increasingly accurate AI systems: they can make it hard for a new startup or a cash-strapped university lab to compete.

  • “The AI models that are being designed and developed don’t always contemplate the application of the model,” says Josh Elliot, director of AI at Booz Allen Hamilton.
  • Big companies offer pre-packaged AI programs — like one for detecting street signs, say — but users often lack the computing firepower required to tweak them.
  • That means the reigning players get to decide the important problems worth trying to solve with AI, Schwartz says, and everyone else has to go along.

What’s next: Schwartz, Ng and others propose putting more effort toward solving problems more efficiently rather than making a slightly more accurate model at an enormous cost.

Go deeper: How not to replace humans

Source: axios.com
