Language model

Transformer + Simple Recurrent Unit

ASAPP, Cornell University, Google, Princeton University
Machine translation

A hybrid architecture that combines the power of the Transformer with the lightness of the Simple Recurrent Unit (SRU) to speed up computation. The model addresses the poor parallelizability of recurrent networks, delivering fast machine translation without loss of quality.

Common recurrent neural architectures scale poorly due to the intrinsic difficulty in parallelizing their state computations. In this work, we propose the Simple Recurrent Unit (SRU), a light recurrent unit that balances model capacity and scalability. SRU is designed to provide expressive recurrence, enable highly parallelized implementation, and comes with careful initialization to facilitate training of deep models. We demonstrate the effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over cuDNN-optimized LSTM on classification and question answering datasets, and delivers stronger results than LSTM and convolutional models. We also obtain an average of 0.7 BLEU improvement over the Transformer model on translation by incorporating SRU into the architecture.
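The abstract's key point is that SRU keeps an expressive recurrence while making the expensive work parallelizable: all matrix multiplications depend only on the inputs and can be batched over the whole sequence, leaving only a cheap element-wise state update in the sequential loop. A minimal NumPy sketch of that recurrence, following the gate equations from the SRU paper (the function and weight names here are illustrative, not the reference implementation; input and hidden dimensions are assumed equal so the highway connection applies directly):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sru_cell(xs, W, Wf, Wr, vf, vr, bf, br):
    """Run an SRU layer over a sequence xs of shape (T, d).

    W, Wf, Wr are (d, d) projection matrices; vf, vr, bf, br are
    (d,) parameter vectors. Returns hidden states of shape (T, d).
    """
    d = W.shape[0]
    c = np.zeros(d)
    # Input projections for every time step are computed up front;
    # this is the heavy, fully parallelizable part.
    U, Uf, Ur = xs @ W.T, xs @ Wf.T, xs @ Wr.T
    hs = []
    for t in range(len(xs)):
        c_prev = c
        f = sigmoid(Uf[t] + vf * c_prev + bf)   # forget gate
        c = f * c_prev + (1.0 - f) * U[t]       # internal state update
        r = sigmoid(Ur[t] + vr * c_prev + br)   # reset gate
        h = r * c + (1.0 - r) * xs[t]           # highway-style output
        hs.append(h)
    return np.stack(hs)
```

Only the four element-wise operations inside the loop are inherently sequential, which is why SRU parallelizes so much better than an LSTM, whose matrix multiplications depend on the previous hidden state.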

What is Transformer + Simple Recurrent Unit?
Who developed Transformer + Simple Recurrent Unit?
What tasks does Transformer + Simple Recurrent Unit solve?