Language model

K2 Think

Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), G42
Text generation · Question answering · Quantitative reasoning · Code generation

K2 Think is a powerful 32B-parameter language model that demonstrates that an AI system's size does not always determine its intelligence. Thanks to advanced reasoning techniques, it outperforms giants such as DeepSeek v3.1 on logic, mathematics, and coding tasks.

K2-Think is a reasoning system that achieves state-of-the-art performance with a 32B parameter model, matching or surpassing much larger models like GPT-OSS 120B and DeepSeek v3.1. Built on the Qwen2.5 base model, our system shows that smaller models can compete at the highest levels by combining advanced post-training and test-time computation techniques. The approach is based on six key technical pillars: Long Chain-of-thought Supervised Finetuning, Reinforcement Learning with Verifiable Rewards (RLVR), Agentic planning prior to reasoning, Test-time Scaling, Speculative Decoding, and Inference-optimized Hardware, all using publicly available open-source datasets. K2-Think excels in mathematical reasoning, achieving state-of-the-art scores on public benchmarks for open-source models, while also performing strongly in other areas such as Code and Science. Our results confirm that a more parameter-efficient model like K2-Think 32B can compete with state-of-the-art systems through an integrated post-training recipe that includes long chain-of-thought training and strategic inference-time enhancements, making open-source reasoning systems more accessible and affordable. K2-Think is freely available at this http URL, offering best-in-class inference speeds of over 2,000 tokens per second per request via the Cerebras Wafer-Scale Engine.
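Among the six pillars listed above, test-time scaling means spending extra compute at inference to pick a better answer, commonly by sampling several reasoning traces and letting a verifier choose among them (best-of-N). The sketch below is a toy illustration of that pattern only; the sampler and verifier are stubs standing in for a real model and a real verifiable-reward check, and are not K2-Think's actual components:

```python
from collections import Counter

def sample_candidates(prompt: str, n: int) -> list[str]:
    # Stand-in for drawing n independent samples from a model.
    # A real system would decode n chain-of-thought completions here;
    # these canned answers exist purely for illustration.
    canned = ["12", "14", "12", "13"]
    return canned[:n]

def verifier_score(answer: str) -> float:
    # Stand-in for a verifiable reward, e.g. checking a math answer
    # against ground truth or running a candidate program's tests.
    return 1.0 if answer == "12" else 0.0

def best_of_n(prompt: str, n: int = 4) -> str:
    """Sample n candidates, then return the one the verifier rates
    highest, breaking ties by majority vote among the samples."""
    candidates = sample_candidates(prompt, n)
    counts = Counter(candidates)
    return max(candidates, key=lambda a: (verifier_score(a), counts[a]))
```

With the canned samples above, `best_of_n("toy prompt")` selects `"12"`: it is both the only candidate the stub verifier accepts and the majority answer. The trade-off this illustrates is the one the abstract claims K2-Think exploits: a smaller model plus extra inference-time computation can substitute for raw parameter count.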

What is K2 Think?
Who developed K2 Think?
What tasks does K2 Think solve?