THESIS
2022
1 online resource (xi, 49 pages) : color illustrations
Abstract
Recently, large pretrained language models have achieved compelling performance on commonsense benchmarks. Nevertheless, it remains unclear what commonsense knowledge these models learn and whether they merely exploit spurious patterns. Feature attribution, a popular family of explainability techniques, identifies input concepts that are important to a model's outputs. However, commonsense knowledge tends to be implicit and is rarely stated explicitly in the input, so these methods cannot help infer a model's implicit reasoning over the concepts it mentions.

In this thesis, we develop CommonsenseVIS, a visual explanatory system that utilizes external commonsense knowledge bases to contextualize model behavior for commonsense question answering. In particular, we extract commonsense knowledge relevant to the input as a reference for aligning model behavior with human knowledge. Our system features multi-level visualization and interactive probing of model behavior on different concepts and their underlying relations. Through case studies and a user study, we show that CommonsenseVIS helps NLP experts conduct a systematic and scalable visual analysis of models' relational reasoning over concepts in different situations.