
Layer-Norm Fast Weights RNN

University of Toronto, Google DeepMind, Google Brain
Digit recognition, image classification

This AI architecture rethinks how neural memory works by introducing "fast weights" for short-term storage. The model mimics the dynamics of the brain's synapses, letting the network handle digit recognition and image classification more effectively.

Until recently, research on artificial neural networks was largely restricted to systems with only two types of variables: neural activities that represent the current or recent input, and weights that learn to capture regularities among inputs, outputs and payoffs. There is no good reason for this restriction. Synapses have dynamics at many different time-scales, and this suggests that artificial neural networks might benefit from variables that change more slowly than activities but much faster than the standard weights. These "fast weights" can be used to store temporary memories of the recent past, and they provide a neurally plausible way of implementing the kind of attention to the past that has recently proved very helpful in sequence-to-sequence models. By using fast weights we can avoid the need to store copies of neural activity patterns.
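As a rough illustration of the idea, the sketch below implements one time step of a fast-weights recurrent cell with layer normalization. It follows the decaying outer-product update for the fast weight matrix (A = λA + η·h hᵀ) and an inner loop in which the fast weights "attend" to recently stored activity patterns. All names, the hyperparameter values, and the exact placement of layer norm are assumptions for illustration, not the authors' reference implementation.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize a vector to zero mean and unit variance.
    return (x - x.mean()) / (x.std() + eps)

def fast_weights_step(x, h_prev, A_prev, W_x, W_h,
                      lam=0.95, eta=0.5, inner_steps=1):
    """One step of a fast-weights RNN cell (hypothetical minimal sketch).

    A_prev is the fast weight matrix; lam (decay rate) and eta (fast
    learning rate) govern its outer-product update.
    """
    # Decay the fast weights and store the previous hidden state
    # as an outer-product "temporary memory".
    A = lam * A_prev + eta * np.outer(h_prev, h_prev)
    # Slow-weight contribution, fixed during the inner loop.
    slow = W_h @ h_prev + W_x @ x
    h = np.tanh(layer_norm(slow))
    # Inner loop: repeatedly apply the fast weights so the state is
    # pulled toward recently stored patterns (attention to the past).
    for _ in range(inner_steps):
        h = np.tanh(layer_norm(slow + A @ h))
    return h, A
```

Because each stored pattern lives inside a single matrix `A` rather than in an explicit buffer of past states, memory cost stays O(d²) regardless of sequence length, which is the point the abstract makes about avoiding stored copies of activity patterns.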
