Forgetting to learn
In the quest to build AI that goes beyond today’s single-purpose machines, scientists are developing new tools to help it remember the right things — and forget the rest, Kaveh reports.
Getting that balance right is the difference between a machine that can trade stocks like a pro but can’t make heads or tails of a crossword puzzle, and one that learns all that plus a variety of other skills, and continually improves them — an important step toward human-like intelligence.
“AI is entirely about memory and forgetting,” says Dileep George, founder of the AI company Vicarious.
- A computer that remembers too little won’t be able to do anything that requires connecting past experiences to new ones — like understanding a pronoun in a sentence, even if the person it refers to was named just one sentence before. These memory lapses are known as “catastrophic forgetting.”
- But one that remembers too much loses the ability to see the big picture. This is called overfitting: focusing entirely on the particulars of past experiences, at the cost of the ability to extract general concepts from them.
- “A big part of learning is knowing what to learn,” says David Cox, director of the MIT–IBM Watson AI Lab. “You want to be able to forget things that are irrelevant.” This holds for humans and machines both.
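The two failure modes above can be caricatured in a few lines of Python (an illustrative sketch with invented names, not any system described in the article): a learner that memorizes every training example answers perfectly on data it has seen but fails on anything new, while one that extracts a simple rule generalizes.

```python
# Caricature of memorization vs. generalization.
# (Illustrative sketch; all names here are invented for this example.)

train = [(2, "even"), (4, "even"), (7, "odd"), (9, "odd")]

# Memorizer: stores the particulars of past experience verbatim.
memorizer = dict(train)

# Generalizer: extracts the underlying concept instead.
def generalizer(n):
    return "even" if n % 2 == 0 else "odd"

# Both are perfect on what they have already seen...
assert all(memorizer[x] == y for x, y in train)

# ...but only the generalizer handles a new case.
print(memorizer.get(6))   # None: pure memorization does not transfer
print(generalizer(6))     # "even": the extracted concept generalizes
```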
Another effect of catastrophic forgetting is that a computer learning a new task can lose the ability to do an old one — like a language learner forgetting their native tongue.
- To solve this, some researchers are adding memory modules that can set aside learned patterns, so that they don’t get overwritten by new information.
- Others, like George, are experimenting with turning specific tasks into computer programs that are walled off from others and can be combined to perform more complex jobs.
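The second approach can be sketched roughly as independent functions, each holding one skill, that compose into a bigger job (a hypothetical illustration; this is not Vicarious's actual code, and every name below is invented):

```python
# Sketch of task modularity: each skill lives in its own function,
# isolated from the others, and skills compose into bigger jobs.
# (Invented example, not any real system's implementation.)

def detect_shade(pixel):
    """A self-contained 'skill': classify a grayscale value."""
    return "dark" if pixel < 128 else "light"

def count_matches(pixels, label):
    """Another isolated skill, built by combining the first with counting."""
    return sum(1 for p in pixels if detect_shade(p) == label)

# Composing the walled-off skills into a more complex task:
image_row = [10, 200, 90, 250, 30]
print(count_matches(image_row, "dark"))  # 3
```

Because each function is sealed off, learning or fixing one skill cannot overwrite another, which is the point of the walled-off design.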
These help with AI’s forgetting problem — but they’re not how human brains work, says Blake Richards, a neuroscientist and AI researcher at the University of Toronto.
- What our brains actually do isn’t completely clear, Richards says. It’s likely that memories are stored all together — but that patterns are kept separate from each other within the tangle, warding off the overwriting problem.
- Being able to connect memories with one another may be at the root of our ability to imagine and plan — two essential qualities that AI still lacks.
A trick humans do during sleep may be key to moving AI closer to the way we learn, says Cox. At rest, we relive recent memories, and in doing so reinforce neural pathways that help us remember them.
- Machines can mimic this with a process called “experience replay,” which weaves in memories of previously learned tasks alongside new lessons.
- This helps them remember the old and the new — and because the memories are not kept separate, a computer could use parts of one to help learn the other, like a typist learning to play piano.
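Experience replay can be sketched as a buffer of old-task samples that gets interleaved with new-task data during training, so every batch rehearses both. This is a minimal illustration under invented names; the buffer, tasks, and mixing ratio are assumptions for the sketch, not a specific lab's method.

```python
import random

# Minimal experience-replay sketch: keep a buffer of samples from a
# previously learned task and mix them into each new-task batch, so
# old knowledge keeps being rehearsed instead of overwritten.
# (Illustrative only; all names and numbers are invented.)

random.seed(0)

old_task = [("typing", i) for i in range(50)]   # previously learned task
new_task = [("piano", i) for i in range(50)]    # task being learned now

replay_buffer = old_task[:]  # memories of the old task, set aside

def make_batch(new_samples, buffer, batch_size=8, replay_fraction=0.5):
    """Mix replayed old-task memories into each new-task training batch."""
    n_replay = int(batch_size * replay_fraction)
    batch = random.sample(buffer, n_replay)
    batch += random.sample(new_samples, batch_size - n_replay)
    random.shuffle(batch)
    return batch

batch = make_batch(new_task, replay_buffer)
print({label for label, _ in batch})  # both 'typing' and 'piano' appear
```

With half of every batch drawn from the replay buffer, the learner never goes long without revisiting the old task, which is what wards off catastrophic forgetting in this scheme.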
What’s next: Perfecting memory could unlock AI “that can actually make insightful predictions and imagine what’s going to happen in the future,” Richards says. That’s a crucial building block toward common sense, long a holy grail for AI researchers.
• What you may have missed
Some distractions are worth it. To catch up, here is the top of this week’s Future:
- Beating the ‘superforecasters’: A geopolitical prognostication contest
- The new sharecroppers: The hidden workforce behind the AI revolution
- Untested systems for criminal justice: Much of applied AI doesn’t work
- The race to move stuff: Amazon wants to dominate another industry
Source: axios.com




