THESIS
2022
1 online resource (xi, 55 pages) : illustrations (chiefly color)
Abstract
Contrastive learning (CL) is popular in self-supervised learning and has achieved great success
in many tasks. Negative samples are known to play a key role in CL since they prevent
the model from collapsing. However, most works focus on designing diverse architectures
while selecting negative samples indiscriminately, which causes problems in model training.
Negative samples that are too easy prevent the model from learning good representations,
while negative samples that are too hard carry a higher risk of being false negatives,
which also undermines representation learning. In this work, we propose
Hardness-Aware Contrastive Learning (Hardness-Aware CL) to select negative samples with
specified hardness in a quantitative way, mitigating the cases in which overly easy or overly
hard negative samples are selected. Making use of an underlying exponentially increasing
relation between hardness and the false-negative rate, we further propose Hardness-Aware
Debiased Contrastive Learning (Hardness-Aware DCL) by applying a debiased contrastive
objective. Extensive experimental results on the CIFAR-10 and CIFAR-100 datasets show the
effectiveness of our proposed methods.
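The core idea of hardness-aware negative selection can be illustrated with a minimal sketch. This is not the thesis's implementation: it assumes hardness is measured as cosine similarity between the anchor embedding and each candidate embedding, and the function name and the `[low, high]` hardness band are illustrative choices, not definitions from the work.

```python
import numpy as np

def select_negatives_by_hardness(anchor, candidates, low, high):
    """Keep only candidate negatives whose hardness lies in [low, high].

    Hardness is taken here to be the cosine similarity between the anchor
    embedding and a candidate embedding: very low values correspond to
    too-easy negatives, very high values to too-hard ones (which are more
    likely to be false negatives).
    """
    # Normalize embeddings so that a dot product equals cosine similarity.
    a = anchor / np.linalg.norm(anchor)
    c = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    hardness = c @ a  # cosine similarity of each candidate to the anchor
    mask = (hardness >= low) & (hardness <= high)
    return candidates[mask], hardness[mask]

# Example: filter 100 random candidate embeddings to a mid-hardness band.
rng = np.random.default_rng(0)
anchor = rng.normal(size=8)
cands = rng.normal(size=(100, 8))
kept, h = select_negatives_by_hardness(anchor, cands, 0.2, 0.6)
```

The selected negatives would then be used in the contrastive (or debiased contrastive) objective in place of indiscriminately sampled ones.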