THESIS
2023
1 online resource (ix, 49 pages) : color illustrations
Abstract
Conceptualization, the process of forming abstractions and instantiating inferences from them, is an essential component of reasoning for both human and artificial intelligence. It has long been regarded as a key component of Natural Language Processing and Understanding for commonsense knowledge.
With the rapid development of Pre-trained Language Models, a growing number of conceptualization tasks have been proposed and evaluated, most of which concern conceptualization in context. However, current experiments focus on the traditional fine-tuning setting, in which models are fitted to the provided datasets, and overlook models' inherent conceptualization ability, which is a truer reflection of their cognitive capacity. In this work, we propose zero-shot experiments to explore how different prompts affect different models and how well prompts adapt across datasets, and we seek a challenging dataset that better examines models. The results show that a good choice of prompts yields significant improvement.
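The zero-shot setup the abstract describes can be illustrated with a small probe: present the same conceptualization instance under several prompt templates and compare the model's predictions without any fine-tuning. Below is a minimal sketch, assuming the Hugging Face transformers fill-mask pipeline with roberta-base; the templates and the apple-to-fruit example are hypothetical illustrations, not the prompts or datasets used in the thesis.

from transformers import pipeline

# Zero-shot probe: no fine-tuning, just a pre-trained masked language model.
fill_mask = pipeline("fill-mask", model="roberta-base")

# Candidate prompt templates for conceptualizing an instance; <mask> is
# where the model should predict a concept (e.g. "fruit"). These templates
# are illustrative assumptions, not taken from the thesis.
templates = [
    "An apple is a kind of <mask>.",
    "An apple is an example of a <mask>.",
    "Apple can be conceptualized as a <mask>.",
]

# Compare templates by the model's top prediction and its probability,
# showing how prompt choice alone changes zero-shot behavior.
for template in templates:
    top = fill_mask(template, top_k=1)[0]
    print(f"{template!r} -> {top['token_str'].strip()} ({top['score']:.3f})")

Running such a probe across models and datasets, and ranking templates by prediction quality, is one way to quantify the prompt sensitivity the abstract reports.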