Patent of the Week: Helping AI Systems Multitask and Remember
An artificial neural network (ANN) is a computer program designed to operate similarly to the human brain and the ways in which it processes information. However, while the brain is (most often) able to learn new tasks without forgetting those learned previously, ANNs suffer from "catastrophic forgetting": training on a new task degrades their accuracy on tasks learned earlier.
There are methods to help alleviate catastrophic forgetting, but current solutions are limited and require high computational overhead, according to researchers who have proposed a complementary method inspired by the human brain.
The novel algorithm – developed by researchers including UChicago professor of neurobiology David Freedman – combines synaptic stabilization and context-dependent gating to enable neural networks to "remember" previous tasks when trained on new ones.
“This method is easy to implement, requires little computational overhead, and allows ANNs to maintain high performance across large numbers of sequentially presented tasks, particularly when combined with weight stabilization,” the researchers explained in a paper.
The work also demonstrates the method’s applicability for both feedforward and recurrent network architectures, trained using either supervised or reinforcement-based learning. According to the researchers, “this suggests that using multiple, complementary methods, akin to what is believed to occur in the brain, can be a highly effective strategy to support continual learning.”
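The core idea of context-dependent gating can be sketched in a few lines: each task activates only a sparse, mostly non-overlapping subset of hidden units, while a stabilization penalty discourages changing weights that mattered for earlier tasks. The sketch below is a minimal illustration, not the authors' implementation – the layer sizes, the 20% gating fraction, and the EWC-style quadratic penalty are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

N_HIDDEN = 100        # hidden units (illustrative size)
GATE_FRACTION = 0.2   # fraction of units active per task (assumed value)
N_TASKS = 3

# Context-dependent gating: each task gets a fixed random binary mask
# leaving only a sparse, largely non-overlapping subset of units active.
task_masks = [
    rng.permutation(N_HIDDEN) < int(GATE_FRACTION * N_HIDDEN)
    for _ in range(N_TASKS)
]

def gated_forward(x, W, task_id):
    """Hidden-layer pass, then zero out the units gated off for this task."""
    h = np.maximum(0.0, x @ W)          # ReLU hidden activity
    return h * task_masks[task_id]      # only ~20% of units stay active

# Synaptic stabilization (an EWC-style quadratic penalty, one common choice):
# moving a weight is penalized in proportion to its importance for past tasks.
def stabilization_loss(W, W_prev, importance, c=1.0):
    return c * np.sum(importance * (W - W_prev) ** 2)
```

Because different tasks engage mostly disjoint subsets of units, the weights a new task updates overlap little with those supporting old tasks, and the stabilization term protects whatever overlap remains.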
Read more:
- Alleviating catastrophic forgetting using context-dependent gating and synaptic stabilization – Proc Natl Acad Sci USA
- Brain-inspired algorithm helps AI systems multitask and remember – UChicago News
Patent of the Week is a weekly column highlighting research and inventions from University of Chicago faculty.