Pleias 1.0 1.2B is a more capable version of the "nano" model, combining a Llama-style architecture with a fully legally clean training corpus. This makes it well suited to tasks that demand fast inference while preserving answer quality and generation accuracy.
Description

Pleias-nano-1.2b-Preview is a base transformer model, pretrained entirely from scratch using an architecture similar to Llama/GPT-NeoX for easier deployment and inference. It includes the following features, which would apply to any responsibly trained variant:
- Trained only on open data under a permissive license and in compliance with the European AI Act. By design, all Pleias models are unable to output copyrighted content.
- Extensive multilingual support for the main European languages.
- A new tokenizer designed for enhanced document-processing tasks and better multilingual support.
- Extremely low levels of toxicity and problematic content.

Pleias-nano-1.2b-Preview has demonstrated unusual multilingual generation abilities for its size range. Fully supported languages include English, French, Spanish, German, Italian, Dutch, Latin, and Portuguese.

Given its size, Pleias-nano-1.2b-Preview can run on CPU without any compression loss. We provide a first GGUF variant as part of this release.
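Since the model uses a Llama/GPT-NeoX-style architecture, it can be loaded with the standard Hugging Face transformers auto classes and run on CPU. The sketch below is a minimal, hedged example: the repository id `PleIAs/Pleias-nano-1.2b-Preview` and the sampling settings are assumptions, not values confirmed by this card.

```python
# Minimal CPU inference sketch using Hugging Face transformers.
# The checkpoint id is an assumption inferred from the model name
# in this card; adjust it to the actual published repository.
MODEL_ID = "PleIAs/Pleias-nano-1.2b-Preview"  # hypothetical repo id


def generation_kwargs(max_new_tokens: int = 128) -> dict:
    """Conservative sampling settings for a small base model."""
    return {
        "max_new_tokens": max_new_tokens,
        "do_sample": True,
        "temperature": 0.7,
        "top_p": 0.9,
        "repetition_penalty": 1.1,
    }


def main() -> None:
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # At 1.2B parameters the model fits comfortably in RAM on CPU.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

    inputs = tokenizer("The main European languages are", return_tensors="pt")
    output = model.generate(**inputs, **generation_kwargs())
    print(tokenizer.decode(output[0], skip_special_tokens=True))


if __name__ == "__main__":
    main()
```

For the GGUF variant mentioned above, the same model could instead be served through llama.cpp-compatible runtimes; the transformers path shown here is simply the most direct route for the base checkpoint.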