Language model

ULM-FiT

University of San Francisco, Insight Centre NUI Galway, Fast.ai
Text classification

ULM-FiT transformed NLP by introducing a transfer learning method for quickly adapting pretrained language models. AI developers can now reach state-of-the-art results in text classification with a minimum of labeled data and training time.

Inductive transfer learning has greatly impacted computer vision, but existing approaches in NLP still require task-specific modifications and training from scratch. We propose Universal Language Model Fine-tuning (ULMFiT), an effective transfer learning method that can be applied to any task in NLP, and introduce techniques that are key for fine-tuning a language model. Our method significantly outperforms the state-of-the-art on six text classification tasks, reducing the error by 18-24% on the majority of datasets. Furthermore, with only 100 labeled examples, it matches the performance of training from scratch on 100x more data. We open-source our pretrained models and code.
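The abstract mentions techniques that are key for fine-tuning a language model. Two of them described in the paper, the slanted triangular learning-rate schedule and discriminative fine-tuning (a separate learning rate per layer), can be sketched as below. Function names and defaults are illustrative; the formulas and the suggested factor of 2.6 follow the paper.

```python
import math

def slanted_triangular_lr(t, T, lr_max=0.01, cut_frac=0.1, ratio=32):
    """Slanted triangular schedule: a short linear warm-up to lr_max,
    then a long linear decay back down over the remaining iterations."""
    cut = math.floor(T * cut_frac)  # iteration at which the peak is reached
    if t < cut:
        p = t / cut  # warm-up phase: fraction of the way to the peak
    else:
        p = 1 - (t - cut) / (cut * (1 / cut_frac - 1))  # linear decay phase
    # ratio controls how much smaller the lowest rate is than lr_max
    return lr_max * (1 + p * (ratio - 1)) / ratio

def discriminative_lrs(lr_top, n_layers, decay=2.6):
    """Discriminative fine-tuning: each lower layer trains with the rate
    of the layer above it divided by `decay` (2.6 in the paper)."""
    return [lr_top / decay ** (n_layers - 1 - i) for i in range(n_layers)]
```

For example, with `T=100` iterations the schedule warms up for the first 10 iterations and decays for the remaining 90, and `discriminative_lrs(0.01, 3)` assigns the top layer 0.01 and each layer below it a rate 2.6 times smaller.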
