THESIS
2024
1 online resource (xiv, 92 pages) : illustrations (chiefly color)
Abstract
Current deep learning models are trained to fit the training set distribution. Despite the remarkable advancements attributable to cutting-edge architectural designs, these models cannot perform reliable inference on out-of-distribution (OOD) samples—instances that diverge from the training set’s scope. Unlike humans, who can naturally recognize what is unknown to them, current deep learning models lack this capability. Since it is impractical to include every object in the open world in the training set, designing an open-set recognition algorithm that detects and rejects OOD samples is essential. This thesis focuses on open-set recognition and its applications in computer vision. Initially, we introduce an open-set 3D semantic segmentation system for autonomous driving applications. We aim to detect anomalous objects that are uncommon on the road and absent from the training set, as such outliers are critical for the safety of autonomous driving systems. Subsequently, we analyze the open-set problem from the Information Bottleneck perspective and propose a prototypical similarity learning algorithm that learns more class-specific and instance-specific information for better open-set performance. Ultimately, we analyze in depth a new setting called unified open-set recognition, in which both OOD samples and in-distribution but misclassified samples should be detected, since the model’s predictions for both are wrong. In general, our work provides a new theoretical analysis perspective, a new training and evaluation setting, and a new application for the open-set recognition community.
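To make the detect-and-reject idea concrete, below is a minimal sketch of the simplest open-set rejection baseline: thresholding the maximum softmax probability so that low-confidence samples are flagged as unknown. This is an illustrative baseline only, not the prototypical similarity learning method developed in the thesis; the function name `reject_ood` and the threshold value are hypothetical choices.

```python
import numpy as np

def reject_ood(logits, threshold=0.5):
    """Predict a class per sample, but reject low-confidence samples as OOD.

    Returns class indices, with -1 marking samples rejected as unknown.
    """
    # Numerically stable softmax over the class dimension.
    shifted = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)

    confidence = probs.max(axis=1)          # maximum softmax probability
    preds = probs.argmax(axis=1)            # closed-set prediction
    preds[confidence < threshold] = -1      # reject as unknown / OOD
    return preds

# One confident in-distribution sample, one near-uniform (ambiguous) sample.
logits = np.array([[8.0, 0.0, 0.0],    # confidently class 0 -> accepted
                   [0.1, 0.0, 0.1]])   # near-uniform softmax -> rejected
print(reject_ood(logits, threshold=0.5))  # → [ 0 -1]
```

In practice, this baseline conflates uncertainty sources; methods like the prototypical similarity learning described above aim to shape the learned representation so that the separation between known and unknown samples is sharper than a raw softmax score provides.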