Submission declined on 18 April 2024 by Chaotic Enby (talk).
- Comment: Unfortunately, not enough sources. Medium is not a reliable source, and the same paper is linked 4 times via 4 different websites. Chaotıċ Enby (talk · contribs) 21:14, 18 April 2024 (UTC)
CLIP (Contrastive Language-Image Pre-training) is a machine learning model developed by OpenAI that aims to bridge the gap between visual and textual data by learning visual concepts directly from natural language descriptions. The model is distinct in its method of pre-training, which uses a large-scale dataset of image-text pairs collected from a variety of internet sources.[1]
Overview
CLIP trains two neural networks simultaneously: an image encoder and a text encoder. The image encoder processes visual input into a feature vector, while the text encoder transforms text descriptions into a corresponding vector.[1] The training objective is to maximize the agreement between the image and text vectors using a contrastive loss function. This method allows CLIP to understand and classify images based on textual descriptions, enabling zero-shot learning, in which the model can accurately predict image categories it has never explicitly seen during training.[2]
Approach
- Feature Extraction:
  - Image Encoder: I_f = image_encoder(I)
  - Text Encoder: T_f = text_encoder(T)
  Here I is the input image and T is the corresponding text description; I_f and T_f are the feature representations of the image and text, respectively.
- Embedding Projections:
  - Image Embedding: I_e = W_i · I_f
  - Text Embedding: T_e = W_t · T_f
  Here W_i and W_t are trainable projection matrices that map the feature vectors I_f and T_f into a common embedding space.
- Cosine Similarity Calculation:
  - similarity = (I_e · T_e) / (‖I_e‖ ‖T_e‖)
  This computes the cosine similarity between the image and text embeddings. During training, the model maximizes this similarity for matching pairs and minimizes it for non-matching pairs.
- Contrastive Loss Function:
  - L = −log( exp(similarity(I_e, T_e) / τ) / Σ_{j=1..N} exp(similarity(I_e, T_e_j) / τ) )
  Here τ is a temperature parameter that scales the logits, and N is the number of text candidates in the batch (the matching description plus the negatives). This loss, often called the InfoNCE loss, trains the model to distinguish the correct text description from the incorrect ones by making the matching pair's term dominate the sum.[2]
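The steps above can be sketched end-to-end in a few lines of NumPy. Everything below is a toy illustration: the "encoder outputs", batch size, dimensions, and shared projection matrix are invented stand-ins (real CLIP encoders are large neural networks trained on hundreds of millions of pairs), but the projection, normalization, similarity matrix, and InfoNCE loss follow the formulas listed above.

```python
import numpy as np

rng = np.random.default_rng(0)
N, feat_dim, emb_dim = 4, 8, 5  # toy batch size and dimensions (assumed)

# Stand-ins for encoder outputs I_f, T_f; matched pairs are made similar on purpose
I_f = rng.normal(size=(N, feat_dim))
T_f = I_f + 0.01 * rng.normal(size=(N, feat_dim))

# Trainable projections W_i, W_t into the shared embedding space
# (shared here only so the toy matched pairs stay aligned after projection)
W_i = rng.normal(size=(feat_dim, emb_dim))
W_t = W_i.copy()

I_e = I_f @ W_i
T_e = T_f @ W_t

# L2-normalize so dot products equal cosine similarities
I_e = I_e / np.linalg.norm(I_e, axis=1, keepdims=True)
T_e = T_e / np.linalg.norm(T_e, axis=1, keepdims=True)

tau = 0.07  # temperature parameter
logits = (I_e @ T_e.T) / tau  # (N, N): entry (i, j) compares image i with text j

# InfoNCE loss: for image i, the correct text is column i; all others are negatives
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
loss = -np.mean(np.diag(log_probs))
```

Because the matched pairs were constructed to be nearly identical, the diagonal of `logits` dominates each row and the loss is close to zero; with randomly shuffled texts it would be much larger, which is exactly the signal that drives training.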
Applications
CLIP's ability to interpret images through natural language makes it suitable for a broad range of applications,[3] from enhancing image search engines to facilitating advanced systems for visual question answering and automated tagging in digital asset management.[4]
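As a sketch of how such applications can use CLIP-style embeddings, the snippet below performs zero-shot classification with toy, hand-picked vectors. The class prompts, embeddings, and the `zero_shot_classify` helper are all invented for illustration; in practice the vectors would come from the trained image and text encoders, with class names wrapped in prompts such as "a photo of a dog".

```python
import numpy as np

def zero_shot_classify(image_emb, class_text_embs, class_names):
    """Return the class whose text embedding is most cosine-similar to the image."""
    image_emb = image_emb / np.linalg.norm(image_emb)
    class_text_embs = class_text_embs / np.linalg.norm(class_text_embs, axis=1, keepdims=True)
    sims = class_text_embs @ image_emb  # cosine similarity against each class prompt
    return class_names[int(np.argmax(sims))]

# Toy embeddings (invented): the image vector is closest to the "dog" prompt
class_names = ["a photo of a dog", "a photo of a cat", "a photo of a car"]
class_text_embs = np.array([[1.0, 0.1, 0.0],
                            [0.1, 1.0, 0.0],
                            [0.0, 0.0, 1.0]])
image_emb = np.array([0.9, 0.2, 0.05])
print(zero_shot_classify(image_emb, class_text_embs, class_names))  # a photo of a dog
```

Because classification reduces to a nearest-neighbor lookup in the shared embedding space, new classes can be added at inference time simply by embedding new text prompts, with no retraining.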
Significance
The development of CLIP represents a significant step towards creating more flexible and broadly applicable visual recognition systems that reduce reliance on large-scale labeled datasets.[5] Its approach aligns with the broader trend in AI research towards models that learn from a variety of data types and sources to improve generalization across different tasks.[6]
Future Directions
While CLIP shows considerable promise, its performance on standard benchmarks, though improving, still trails behind more traditional supervised learning methods. Future research could explore optimizing the balance between zero-shot learning capabilities and the robustness required for specific applications, potentially expanding the practical usability of such models in real-world scenarios.
References
- ^ a b openai/CLIP, OpenAI, 2024-04-18. Retrieved 2024-04-18.
- ^ a b Radford, Alec; Kim, Jong Wook; Hallacy, Chris; Ramesh, Aditya; Goh, Gabriel; Agarwal, Sandhini; Sastry, Girish; Askell, Amanda; Mishkin, Pamela; Clark, Jack; Krueger, Gretchen; Sutskever, Ilya (2021-07-01). "Learning Transferable Visual Models From Natural Language Supervision". Proceedings of the 38th International Conference on Machine Learning. PMLR: 8748–8763. arXiv:2103.00020.
- ^ Radford, Alec; Kim, Jong Wook; Hallacy, Chris; Ramesh, Aditya; Goh, Gabriel; Agarwal, Sandhini; Sastry, Girish; Askell, Amanda; Mishkin, Pamela; Clark, Jack; Krueger, Gretchen; Sutskever, Ilya (26 February 2021). "Learning Transferable Visual Models from Natural Language Supervision". www.semanticscholar.org. arXiv:2103.00020. Retrieved 2024-04-18.
- ^ "Paper page - Learning Transferable Visual Models From Natural Language Supervision". huggingface.co. 2023-09-15. Retrieved 2024-04-18.
- ^ "dblp: Learning Transferable Visual Models From Natural Language Supervision". dblp.org. Retrieved 2024-04-18.
- ^ Tsang, Sik-Ho (2022-12-08). "Review — CLIP: Learning Transferable Visual Models From Natural Language Supervision". Medium. Retrieved 2024-04-18.