Multimodal model, Language model, Computer vision, Speech recognition

OpenOmni

Chinese Academy of Sciences, Shenzhen Institute of Advanced Technology, University of Chinese Academy of Sciences, National University of Singapore, University of Science and Technology of China (USTC)
Speech-to-text, Speech recognition, Image captioning, Visual question answering, Text generation, Question answering, Text-to-speech (TTS), Speech synthesis

OpenOmni is a powerful open-source multimodal AI model that unifies text, images, and speech. It addresses the shortage of high-quality training data and advances emotional speech synthesis, making interaction with AI feel as natural as possible.

Recent advancements in omnimodal learning have significantly improved understanding and generation across images, text, and speech, yet these developments remain predominantly confined to proprietary models. The lack of high-quality omnimodal datasets and the challenges of real-time emotional speech synthesis have notably hindered progress in open-source research. To address these limitations, we introduce OpenOmni, a two-stage training framework that integrates omnimodal alignment and speech generation to develop a state-of-the-art omnimodal large language model. In the alignment phase, a pre-trained speech model undergoes further training on text-image tasks, enabling (near) zero-shot generalization from vision to speech, outperforming models trained on tri-modal datasets. In the speech generation phase, a lightweight decoder is trained on speech tasks with direct preference optimization, enabling real-time emotional speech synthesis with high fidelity. Experiments show that OpenOmni surpasses state-of-the-art models across omnimodal, vision-language, and speech-language benchmarks. It achieves a 4-point absolute improvement on OmniBench over the leading open-source model VITA, despite using 5x fewer training samples and a smaller model size (7B vs. 7x8B). Additionally, OpenOmni achieves real-time speech generation with <1s latency in non-autoregressive mode, reducing inference time by 5x compared to autoregressive methods, and improves emotion classification accuracy by 7.7%.
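The abstract mentions that the speech decoder is trained with direct preference optimization (DPO). As a rough illustration of the general technique (not OpenOmni's actual implementation — the function name, arguments, and beta value below are assumptions), the standard DPO objective for a single preference pair can be sketched as:

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO loss for one preference pair (illustrative sketch).

    logp_* are summed log-probabilities of the preferred/rejected
    responses under the policy being trained; ref_logp_* are the same
    quantities under a frozen reference model.
    """
    # Implicit reward of each response, measured relative to the reference
    chosen_margin = logp_chosen - ref_logp_chosen
    rejected_margin = logp_rejected - ref_logp_rejected
    # Loss = -log sigmoid(beta * (chosen reward - rejected reward))
    logits = beta * (chosen_margin - rejected_margin)
    return -math.log(1.0 / (1.0 + math.exp(-logits)))

# When the policy matches the reference, both margins are zero and
# the loss is -log(0.5) = log 2; it falls as the policy learns to
# prefer the chosen response more strongly than the reference does.
loss_at_start = dpo_loss(-10.0, -12.0, -10.0, -12.0)
```

For emotional speech, the preference pairs would presumably contrast speech outputs with better and worse emotional fidelity, but the pair-construction details are specific to the paper and not shown here.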
