Contrastive learning is a self-supervised learning method that trains models to distinguish between similar and dissimilar data samples. By creating positive pairs (e.g., augmented views of the same image) and negative pairs (different images), it learns to map similar samples to nearby points and dissimilar samples to distant points in an embedding space. Because it requires no manual annotation, it is widely used to pre-train image, text, and audio models, yielding rich, generalizable features that transfer to downstream tasks.
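The pull-together/push-apart objective described above is commonly implemented as a contrastive loss such as InfoNCE (used, e.g., in SimCLR-style training). Below is a minimal NumPy sketch under assumed conditions: the function name `info_nce_loss`, the toy batch, and the choice of temperature are illustrative, not from the source.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE loss: anchor i's positive is row i of `positives`;
    every other row in the batch serves as a negative."""
    # L2-normalize so dot products become cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    # logits[i, j] = similarity(anchor_i, positive_j) / temperature
    logits = a @ p.T / temperature
    # cross-entropy with the matching pair (the diagonal) as the target
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Toy batch: 4 anchors and their lightly perturbed "augmented" views
rng = np.random.default_rng(0)
anchors = rng.normal(size=(4, 8))
positives = anchors + 0.05 * rng.normal(size=(4, 8))

loss_aligned = info_nce_loss(anchors, positives)          # positives match anchors
loss_random = info_nce_loss(anchors, rng.normal(size=(4, 8)))  # unrelated "positives"
print(loss_aligned, loss_random)  # aligned pairs should yield the lower loss
```

Minimizing this loss pulls each anchor toward its own augmented view (high diagonal similarity) while pushing it away from the other samples in the batch, which is exactly the nearby/distant embedding geometry described above. The low temperature sharpens the softmax so that small similarity differences matter.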
| Alternative | Difference | Papers (with contrastive learning) | Avg viability |
|---|---|---|---|
| neural networks | — | 1 | — |