Language model

Ring-flash-linear-2.0

Ant Group
Text generation · Question answering · Quantitative reasoning · Code generation · Mathematical reasoning

The flagship of Ant Group's lineup, Ring-flash-linear-2.0 scales the capabilities of hybrid AI to an impressive 104 billion parameters. The model delivers outstanding results on deep quantitative reasoning while remaining highly efficient thanks to its combination of linear and softmax attention.

In this technical report, we present the Ring-linear model series, comprising Ring-mini-linear-2.0 and Ring-flash-linear-2.0. Ring-mini-linear-2.0 has 16B parameters with 957M activated per token, while Ring-flash-linear-2.0 has 104B parameters with 6.1B activated. Both models adopt a hybrid architecture that effectively integrates linear attention and softmax attention, significantly reducing I/O and computational overhead in long-context inference scenarios. Compared with a 32B-parameter dense model, this series cuts inference cost to one tenth, and compared with the original Ring series, cost is reduced by more than 50%. Furthermore, through systematic exploration of the ratio between the different attention mechanisms in the hybrid architecture, we have identified the currently optimal model structure. Additionally, by leveraging Linghe, our self-developed high-performance FP8 operator library, overall training efficiency has been improved by 50%. Benefiting from close alignment between the training and inference engine operators, the models can undergo long-term, stable, and highly efficient optimization during the reinforcement learning phase, consistently maintaining SOTA performance across multiple challenging complex-reasoning benchmarks.
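To make the hybrid design concrete, here is a minimal NumPy sketch of the idea described above: most layers use a kernelized linear attention (linear in sequence length), while a minority use standard softmax attention (quadratic). The feature map `phi`, the layer pattern, and the single-head residual wiring are illustrative assumptions, not Ring's actual configuration.

```python
import numpy as np

def softmax_attention(Q, K, V):
    # Standard scaled dot-product attention: O(n^2) in sequence length n.
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def linear_attention(Q, K, V):
    # Kernelized linear attention: phi(Q) @ (phi(K)^T V), O(n) in sequence
    # length because keys/values are first reduced to a d x d_v summary.
    phi = lambda x: np.maximum(x, 0.0) + 1e-6     # positive feature map (assumed)
    Qf, Kf = phi(Q), phi(K)
    KV = Kf.T @ V                                 # (d, d_v) key/value summary
    Z = Qf @ Kf.sum(axis=0)                       # per-query normalizer
    return (Qf @ KV) / Z[:, None]

def hybrid_forward(x, layer_kinds):
    # Hybrid stack: the ratio of "linear" to "softmax" layers is the tunable
    # structural choice the report explores; this 3:1 pattern is hypothetical.
    for kind in layer_kinds:
        attn = softmax_attention if kind == "softmax" else linear_attention
        x = x + attn(x, x, x)                     # residual; projections omitted
    return x

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4))                   # 8 tokens, model dim 4
out = hybrid_forward(x, ["linear", "linear", "linear", "softmax"])
print(out.shape)
```

The efficiency gain comes from the `KV` summary: softmax attention must materialize an n×n score matrix per layer, whereas the linear layers only keep a d×d_v state, which is what shrinks I/O and compute at long context lengths.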
