CLIP-like model evaluation
[ICCV 2023] CLIP-Driven Universal Model; ranked first in the MSD Competition.
Run OpenAI's CLIP and Apple's MobileCLIP models on iOS to search photos.
Simple implementation of OpenAI CLIP model in PyTorch.
Code to train a CLIP model.
Infinity is a high-throughput, low-latency serving engine for text embeddings, reranking models, CLIP, CLAP, and ColPali.
FashionCLIP is a CLIP-like model fine-tuned for the fashion domain.
LLM2CLIP makes pretrained SOTA CLIP models even stronger.
Connects Segment Anything's output masks with the CLIP model; see Awesome-Segment-Anything-Works.
Easily compute CLIP embeddings and build a CLIP retrieval system with them.
CLIPxGPT Captioner is an image captioning model based on OpenAI's CLIP and GPT-2.
An open-source implementation of CLIP.
Contrastive Language-Image Forensic Search enables free-text search through videos using OpenAI's CLIP machine learning model.
Local image generation using VQGAN-CLIP or CLIP guided diffusion
CLIP+MLP Aesthetic Score Predictor
Create high-quality charts from the command line.
Search photos on Unsplash using OpenAI's CLIP model, with support for joint image+text queries and attention visualization.
CLIP + Mesh + SMPL-X
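Several entries above (e.g. clip-retrieval and the Unsplash photo search) build retrieval systems on top of CLIP embeddings. The core retrieval step is the same in all of them: L2-normalize the embeddings and rank by cosine similarity. A minimal sketch, assuming the embeddings are already computed (the random arrays here are stand-ins for real encoder outputs, not part of any project's API):

```python
import numpy as np

# Hypothetical precomputed CLIP embeddings: a small image gallery plus one
# text query. In a real system these come from CLIP's image/text encoders.
rng = np.random.default_rng(0)
image_embeddings = rng.standard_normal((4, 512)).astype(np.float32)
text_embedding = rng.standard_normal(512).astype(np.float32)

def top_k_images(query: np.ndarray, gallery: np.ndarray, k: int = 2) -> np.ndarray:
    """Return indices of the k gallery embeddings most similar to the query.

    CLIP retrieval scores by cosine similarity, so both sides are
    L2-normalized before taking dot products.
    """
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    scores = g @ q                       # cosine similarities, shape (n_images,)
    return np.argsort(scores)[::-1][:k]  # highest similarity first

ranked = top_k_images(text_embedding, image_embeddings)
print(ranked)
```

At scale, the exhaustive dot product is typically replaced by an approximate nearest-neighbor index (as clip-retrieval does), but the similarity being approximated is exactly this one.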