Language model

Subformer (96M)

University of Tokyo, National Institute of Advanced Industrial Science and Technology (AIST)
Language modeling · Text summarization · Machine translation

Subformer (96M) is a balanced solution for text summarization and translation. Its engineers applied best practices in parameter compression while preserving the model's flexibility in handling complex sequences and long documents.

Transformers have shown improved performance compared to previous architectures for sequence processing such as RNNs. Despite these sizeable performance gains, Transformers are computationally expensive to train and carry a high parameter budget. In light of this, we explore parameter-sharing methods in Transformers with a specific focus on generative models. We analyze different parameter sharing/reduction methods and develop the Subformer. Our model combines sandwich-style parameter sharing, which overcomes the weaknesses of naive cross-layer parameter sharing in generative models, with self-attentive embedding factorization (SAFE). Experiments on machine translation, abstractive summarization, and language modeling show that the Subformer can outperform the Transformer even when using significantly fewer parameters.
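The two ideas in the abstract can be illustrated with a back-of-the-envelope parameter count: sandwich-style sharing keeps unique weights only in the outer layers and shares one set across the middle, while embedding factorization replaces the large V×d embedding table with a small V×e lookup plus an e×d projection. The sizes below are illustrative assumptions, not the actual Subformer (96M) configuration.

```python
# Hypothetical sizes for illustration only (not the real Subformer config).
V, d, e, n_layers = 32000, 512, 128, 6
per_layer = 4 * d * d  # rough stand-in for one layer's weight count

# Naive Transformer: unique parameters in every layer, full-size embedding.
naive = V * d + n_layers * per_layer

# Sandwich-style sharing: first and last layers keep unique parameters,
# all middle layers reuse a single shared parameter set.
sandwich_layers = 3 * per_layer  # first + last + one shared middle block

# SAFE-style factorized embedding: small V x e lookup plus e x d projection.
factored_emb = V * e + e * d

subformer = factored_emb + sandwich_layers
print(naive, subformer, round(subformer / naive, 3))
```

Even in this toy setup the shared-and-factorized variant needs roughly a third of the naive parameter budget, which is the kind of saving the paper exploits.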
