The smallest model in the HGRN2 line, with 1 billion parameters, offers a strong balance between performance and efficiency. The model uses hierarchical gating for fast sequence processing, making it a good fit for tasks with limited compute resources.
Hierarchically gated linear RNN (HGRN, Qin et al. 2023) has demonstrated competitive training speed and performance in language modeling, while offering efficient inference. However, the recurrent state size of HGRN remains relatively small, which limits its expressiveness. To address this issue, inspired by linear attention, we introduce a simple outer-product-based state expansion mechanism so that the recurrent state size can be significantly enlarged without introducing any additional parameters. The linear attention form also allows for hardware-efficient training. Our extensive experiments verify the advantage of HGRN2 over HGRN1 in language modeling, image classification, and Long Range Arena. Our largest 3B HGRN2 model slightly outperforms Mamba and the LLaMA-architecture Transformer for language modeling in a controlled experiment setting, and performs competitively with many open-source 3B models in downstream evaluation while using far fewer total training tokens. In short, HGRN2 is comparable to state-of-the-art methods at the 1B scale and outperforms them at the 3B scale.
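To make the state-expansion idea concrete, here is a minimal NumPy sketch contrasting an HGRN1-style vector state with an HGRN2-style outer-product state. The shapes, gate definitions, and query-based readout are illustrative assumptions based on the description above (the same gates and inputs are reused, so no extra parameters are introduced), not the paper's exact formulation or kernels:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4   # hidden dimension
T = 6   # sequence length

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy per-step vectors standing in for learned projections of the tokens.
q = rng.standard_normal((T, d))        # query vectors for the readout
i = rng.standard_normal((T, d))        # input vectors
f = sigmoid(rng.standard_normal((T, d)))  # forget gates in (0, 1)

# HGRN1-style recurrence: the state is a single d-vector.
h = np.zeros(d)
for t in range(T):
    h = f[t] * h + (1.0 - f[t]) * i[t]

# HGRN2-style recurrence: an outer product expands the state to a
# d x d matrix, reusing the same gates/inputs (no new parameters).
S = np.zeros((d, d))
outs = []
for t in range(T):
    S = f[t][:, None] * S + np.outer(1.0 - f[t], i[t])
    outs.append(q[t] @ S)  # linear-attention-style readout with a query

outs = np.stack(outs)
print(h.shape, S.shape, outs.shape)  # (4,) (4, 4) (6, 4)
```

The key point is that the per-step cost stays linear in sequence length, while the recurrent state grows from d to d*d entries, which is the expressiveness gain the abstract refers to.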