Language model

SRU++ Large only 2 attention layers (k=5) (WT103)

ASAPP
Language modeling

SRU++ is a highly efficient architecture that addresses the rising cost of training large language models. By combining fast recurrence with attention, the model achieves strong results on the Wiki-103 dataset at a fraction of the usual compute.

Large language models have become increasingly difficult to train because of the growing computation time and cost. In this work, we present SRU++, a highly efficient architecture that combines fast recurrence and attention for sequence modeling. SRU++ exhibits strong modeling capacity and training efficiency. On standard language modeling tasks such as the Enwik8, Wiki-103, and Billion Word datasets, our model obtains better bits-per-character and perplexity while using 3x-10x less training cost compared to top-performing Transformer models. For instance, our model achieves a state-of-the-art result on the Enwik8 dataset using 1.6 days of training on an 8-GPU machine. We further demonstrate that SRU++ requires minimal attention for near state-of-the-art performance. Our results suggest jointly leveraging fast recurrence with little attention as a promising direction for accelerating model training and inference.
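The abstract's central design point, replacing most of a Transformer's attention computation with a fast elementwise recurrence, can be sketched in a few lines of PyTorch. The following is a minimal illustration, not ASAPP's released implementation (which relies on custom CUDA kernels); the class name SRUppLayerSketch and the parameters d_model, d_attn, and num_heads are assumptions introduced for the example.

```python
import torch
import torch.nn as nn

class SRUppLayerSketch(nn.Module):
    """Hypothetical sketch: an SRU-style elementwise recurrence whose
    input projection is computed through a small self-attention block."""

    def __init__(self, d_model: int, d_attn: int, num_heads: int = 1):
        super().__init__()
        # Attention runs in a reduced dimension d_attn to keep it cheap.
        self.down = nn.Linear(d_model, d_attn, bias=False)
        self.attn = nn.MultiheadAttention(d_attn, num_heads)
        # One projection yields the update value and both gate pre-activations.
        self.up = nn.Linear(d_attn, 3 * d_model, bias=False)
        self.v_f = nn.Parameter(torch.zeros(d_model))  # forget-gate peephole
        self.v_r = nn.Parameter(torch.zeros(d_model))  # reset-gate peephole
        self.b_f = nn.Parameter(torch.zeros(d_model))
        self.b_r = nn.Parameter(torch.zeros(d_model))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (seq_len, batch, d_model)
        L = x.size(0)
        z = self.down(x)
        # Causal mask so each position attends only to earlier positions.
        mask = torch.triu(
            torch.full((L, L), float("-inf"), device=x.device), diagonal=1
        )
        a, _ = self.attn(z, z, z, attn_mask=mask)
        u, f_in, r_in = self.up(z + a).chunk(3, dim=-1)
        # Fast recurrence: only elementwise ops per step, no matmul in the loop.
        c = torch.zeros(x.size(1), x.size(2), device=x.device)
        outs = []
        for t in range(L):
            f = torch.sigmoid(f_in[t] + self.v_f * c + self.b_f)
            c = f * c + (1.0 - f) * u[t]
            r = torch.sigmoid(r_in[t] + self.v_r * c + self.b_r)
            outs.append(r * c + (1.0 - r) * x[t])  # highway connection
        return torch.stack(outs)
```

Because the per-step loop is purely elementwise, all matrix multiplications and the attention itself are batched across the whole sequence, which is where the training-cost savings reported in the abstract come from. In the "only 2 attention layers" variant named in the title, presumably only a couple of layers include the attention sub-module, with the remaining layers using a plain linear projection instead; this matches the abstract's finding that SRU++ needs minimal attention for near state-of-the-art results.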
