Language model

GRITLM 8x7B

Contextual AI, The University of Hong Kong, Microsoft
Text generation · Question answering · Quantitative reasoning

GRITLM 8x7B is a powerful Mixture of Experts model optimized for large-scale generation and retrieval tasks. It uses bidirectional attention for embeddings and causal attention for generation, delivering top-tier performance on complex reasoning.

An 8x7B-parameter model that uses bidirectional attention for embedding and causal attention for generation. It is finetuned from Mixtral-8x7B.

All text-based language problems can be reduced to either generation or embedding. Current models only perform well at one or the other. We introduce generative representational instruction tuning (GRIT), whereby a large language model is trained to handle both generative and embedding tasks by distinguishing between them through instructions. Compared to other open models, our resulting GritLM 7B sets a new state of the art on the Massive Text Embedding Benchmark (MTEB) and outperforms all models up to its size on a range of generative tasks. By scaling up further, GritLM 8x7B outperforms all open generative language models that we tried while still being among the best embedding models. Notably, we find that GRIT matches training on only generative or embedding data, thus we can unify both at no performance loss. Among other benefits, the unification via GRIT speeds up Retrieval-Augmented Generation (RAG) by > 60% for long documents, by no longer requiring separate retrieval and generation models. Models, code, etc. are freely available at this https URL.
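A minimal usage sketch of the unified embedding/generation workflow described above, assuming the `gritlm` Python package from the project repository; the class and method names (`GritLM`, `encode`, `generate`, the `<|embed|>` prompt marker) follow its README and may differ across versions, and the checkpoint name is given for illustration.

```python
# Sketch: one GritLM checkpoint used for both embedding and generation.
from gritlm import GritLM
from scipy.spatial.distance import cosine

# One model serves both modes; pass mode="embedding" if only embeddings are needed.
model = GritLM("GritLM/GritLM-8x7B", torch_dtype="auto")

def embed_instruction(task: str) -> str:
    # Embedding prompts end with the <|embed|> marker; a task instruction is optional.
    return f"<|user|>\n{task}\n<|embed|>\n" if task else "<|embed|>\n"

# Embedding mode: bidirectional attention over the input, pooled into a vector.
docs = ["GRIT unifies generative and embedding training via instructions."]
queries = ["What does generative representational instruction tuning do?"]
d_rep = model.encode(docs, instruction=embed_instruction(""))
q_rep = model.encode(queries, instruction=embed_instruction("Retrieve the relevant passage"))
print("similarity:", 1 - cosine(q_rep[0], d_rep[0]))

# Generation mode: causal attention with the chat template.
messages = [{"role": "user", "content": "Summarize GRIT in one sentence."}]
inputs = model.tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(model.tokenizer.batch_decode(out)[0])
```

Because the same model produces both the document embeddings and the generated answer, a RAG pipeline can keep a single model in memory and, as the abstract notes, reuse cached document representations instead of re-encoding long documents with a separate retriever.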

What is GRITLM 8x7B?
Who developed GRITLM 8x7B?
What tasks does GRITLM 8x7B solve?