Language model

RNN for 1B words

Google
Language modeling

This benchmark from Google became a reference point for evaluating statistical language modeling algorithms on massive datasets. Built from almost one billion words of training data, it lets AI developers quickly test new architectures and compare their effectiveness on real tasks.

We propose a new benchmark corpus to be used for measuring progress in statistical language modeling. With almost one billion words of training data, we hope this benchmark will be useful to quickly evaluate novel language modeling techniques, and to compare their contribution when combined with other advanced techniques. We show performance of several well-known types of language models, with the best results achieved with a recurrent neural network based language model. The baseline unpruned Kneser-Ney 5-gram model achieves perplexity 67.6; a combination of techniques leads to 35% reduction in perplexity, or 10% reduction in cross-entropy (bits), over that baseline. The benchmark is available as a code.google.com project; besides the scripts needed to rebuild the training/held-out data, it also makes available log-probability values for each word in each of ten held-out data sets, for each of the baseline n-gram models.
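As a sanity check on the numbers above: perplexity is the exponentiated cross-entropy (PPL = 2^H, with H in bits per word), so a 35% perplexity reduction over the 67.6 baseline corresponds to roughly a 10% reduction in cross-entropy. A minimal Python sketch of that arithmetic, using only the figures quoted in the abstract:

```python
import math

# Perplexity and cross-entropy are two views of the same quantity:
# PPL = 2 ** H, where H is the average cross-entropy in bits per word.
def cross_entropy_bits(ppl: float) -> float:
    return math.log2(ppl)

baseline_ppl = 67.6                       # unpruned Kneser-Ney 5-gram baseline
combined_ppl = baseline_ppl * (1 - 0.35)  # 35% perplexity reduction -> ~43.9

h_base = cross_entropy_bits(baseline_ppl)  # ~6.08 bits/word
h_comb = cross_entropy_bits(combined_ppl)  # ~5.46 bits/word

print(f"baseline: PPL {baseline_ppl:.1f}, {h_base:.2f} bits/word")
print(f"combined: PPL {combined_ppl:.1f}, {h_comb:.2f} bits/word")
print(f"cross-entropy reduction: {1 - h_comb / h_base:.1%}")  # ~10%
```

Running this prints a cross-entropy reduction of about 10.2%, matching the abstract's "35% reduction in perplexity, or 10% reduction in cross-entropy (bits)".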
