Computer Vision

EVA-CLIP (EVA-02-CLIP-E/14+)

Beijing Academy of Artificial Intelligence (BAAI), Huazhong University of Science and Technology
Image classification

EVA-CLIP is an evolution of the popular CLIP architecture that substantially improves the training efficiency of vision-language models. Thanks to advanced optimization techniques, the model delivers strong results in image classification and visual-content understanding.
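At inference time, the zero-shot classification mentioned above reduces to a cosine-similarity argmax between one image embedding and a set of text-prompt embeddings. A minimal sketch with synthetic NumPy vectors (the function name and shapes are illustrative, not from the EVA-CLIP codebase):

```python
import numpy as np

def zero_shot_classify(image_emb, class_text_embs):
    """Pick the class whose text-prompt embedding is closest in cosine similarity."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = class_text_embs / np.linalg.norm(class_text_embs, axis=1, keepdims=True)
    sims = txt @ img  # one cosine similarity per class
    return int(np.argmax(sims))

# Toy example: 3 classes, the image embedding leans toward class 1
classes = np.eye(3)
image = np.array([0.1, 0.9, 0.2])
print(zero_shot_classify(image, classes))  # → 1
```

In a real pipeline the image embedding comes from the vision tower and each class embedding from encoding a prompt such as "a photo of a {class}" with the text tower.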

Contrastive language-image pre-training, CLIP for short, has gained increasing attention for its potential in various scenarios. In this paper, we propose EVA-CLIP, a series of models that significantly improve the efficiency and effectiveness of CLIP training. Our approach incorporates new techniques for representation learning, optimization, and augmentation, enabling EVA-CLIP to achieve superior performance compared to previous CLIP models with the same number of parameters but significantly smaller training costs. Notably, our largest 5.0B-parameter EVA-02-CLIP-E/14+ with only 9 billion seen samples achieves 82.0 zero-shot top-1 accuracy on ImageNet-1K val. A smaller EVA-02-CLIP-L/14+ with only 430 million parameters and 6 billion seen samples achieves 80.4 zero-shot top-1 accuracy on ImageNet-1K val. To facilitate open access and open research, we release the complete suite of EVA-CLIP to the community at this https URL.
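The contrastive pre-training objective the abstract refers to is a symmetric cross-entropy over matched image–text pairs in a batch. A minimal NumPy sketch of that loss (names and the fixed temperature are illustrative; actual CLIP/EVA-CLIP training uses a learned temperature and much larger batches):

```python
import numpy as np

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric contrastive loss: matching pairs sit on the diagonal of the logit matrix."""
    # L2-normalize so the dot product is cosine similarity
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = image_emb @ text_emb.T / temperature
    n = logits.shape[0]

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(n), np.arange(n)].mean()

    # Average the image-to-text and text-to-image directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

Perfectly aligned batches (each image embedding equal to its text embedding) yield a lower loss than mismatched ones, which is what drives the encoders toward a shared embedding space.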

What is EVA-CLIP (EVA-02-CLIP-E/14+)?
Who developed EVA-CLIP (EVA-02-CLIP-E/14+)?
What tasks does EVA-CLIP (EVA-02-CLIP-E/14+) solve?