Open Access | Feature Paper | Review

A Survey on Contrastive Self-Supervised Learning

Department of Computer Science and Engineering, The University of Texas at Arlington, Arlington, TX 76019, USA
* Author to whom correspondence should be addressed.
Technologies 2021, 9(1), 2; https://doi.org/10.3390/technologies9010002
Received: 31 October 2020 / Revised: 20 December 2020 / Accepted: 23 December 2020 / Published: 28 December 2020
(This article belongs to the Collection Selected Papers from the PETRA Conference Series)
Self-supervised learning has gained popularity because of its ability to avoid the cost of annotating large-scale datasets. It adopts self-defined pseudolabels as supervision and uses the learned representations for several downstream tasks. Specifically, contrastive learning has recently become a dominant component in self-supervised learning for computer vision, natural language processing (NLP), and other domains. It aims to embed augmented versions of the same sample close to each other while pushing away embeddings from different samples. This paper provides an extensive review of self-supervised methods that follow the contrastive approach. The work explains commonly used pretext tasks in a contrastive learning setup, followed by different architectures that have been proposed so far. Next, we present a performance comparison of different methods for multiple downstream tasks such as image classification, object detection, and action recognition. Finally, we conclude with the limitations of the current methods and the need for further techniques and future directions to make meaningful progress.
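The contrastive objective described in the abstract, pulling two augmented views of the same sample together while pushing all other samples apart, can be sketched as an NT-Xent-style loss in the spirit of SimCLR. This is a hypothetical illustration for the reader, not code from the survey itself; the function name and temperature value are assumptions.

```python
# Minimal NT-Xent-style contrastive loss sketch (hypothetical illustration,
# not from the survey). Row i of z1 and row i of z2 are embeddings of two
# augmented views of sample i (a positive pair); all other rows in the
# combined batch act as negatives.
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    # L2-normalize so dot products are cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.concatenate([z1, z2], axis=0)       # (2N, D) combined batch
    sim = z @ z.T / temperature                # (2N, 2N) scaled similarities

    n = z1.shape[0]
    # Exclude self-similarity from the softmax denominator.
    np.fill_diagonal(sim, -np.inf)
    # The positive for row i is row i + n, and vice versa.
    pos_idx = np.concatenate([np.arange(n, 2 * n), np.arange(n)])

    # Cross-entropy of each row against its positive: -log softmax(sim)[i, pos].
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return -(sim[np.arange(2 * n), pos_idx] - logsumexp).mean()
```

When the two views are identical, each positive pair has the maximum similarity and the loss is small; when the second view is unrelated noise, the positive is indistinguishable from the negatives and the loss rises toward log(2N - 1).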
Keywords: contrastive learning; self-supervised learning; discriminative learning; image/video classification; object detection; unsupervised learning; transfer learning
MDPI and ACS Style

Jaiswal, A.; Babu, A.R.; Zadeh, M.Z.; Banerjee, D.; Makedon, F. A Survey on Contrastive Self-Supervised Learning. Technologies 2021, 9, 2. https://doi.org/10.3390/technologies9010002

