LUXIA-21.4B-Alignment is a compact yet remarkably effective language model: with only 21.4 billion parameters, it manages to outperform 72B-scale giants. The model delivers best-in-class results on NLP tasks, combining high performance with excellent optimization.
We introduce LUXIA-21.4B-Alignment, a large language model (LLM) with 21.4 billion parameters that demonstrates superior performance across a variety of natural language processing (NLP) tasks. It achieves state-of-the-art results among models with fewer than 35B parameters, and it also outperforms a 72B model and a 34Bx2 MoE (Mixture of Experts) model. Please refer to the evaluation results table for details. The luxia-21.4b-alignment model is derived from the luxia-21.4b-instruct model through DPO training, and luxia-21.4b-instruct is in turn an SFT-trained version of the luxia-21.4b base model. We plan to release both the pretrained model and the instruction-tuned model soon.
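The DPO step mentioned above refers to Direct Preference Optimization, which trains the policy to prefer a chosen response over a rejected one relative to a frozen reference model (here, the SFT model). The snippet below is a minimal pure-Python sketch of the standard DPO loss for a single preference pair, not the authors' actual training code; the toy log-probabilities and the `beta` value are illustrative assumptions.

```python
import math

def dpo_loss(logp_w_policy, logp_w_ref, logp_l_policy, logp_l_ref, beta=0.1):
    """Standard DPO loss for one (chosen, rejected) response pair.

    Inputs are the summed token log-probabilities of the chosen (w) and
    rejected (l) responses under the policy and the frozen reference model.
    beta controls how strongly the policy may deviate from the reference.
    """
    # Implicit reward margin: beta * (policy/reference log-ratio of the
    # chosen response minus that of the rejected response).
    margin = beta * ((logp_w_policy - logp_w_ref) - (logp_l_policy - logp_l_ref))
    # -log(sigmoid(margin)): small when the policy prefers the chosen response.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Toy numbers (assumed): the policy already slightly prefers the chosen response.
loss = dpo_loss(-12.0, -12.5, -15.0, -14.8, beta=0.1)
```

Minimizing this loss over a dataset of preference pairs is what turns the SFT model into the alignment model; when the policy's preference for the chosen response grows, the margin increases and the loss falls toward zero.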