Embarrassingly Simple Performance Prediction for Abductive Natural Language Inference
Kadiķis, Emīls; Srivastav, Vaibhav; Klinger, Roman (2022): Embarrassingly Simple Performance Prediction for Abductive Natural Language Inference, in: Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz, et al. (eds.), Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Seattle: Association for Computational Linguistics, pp. 6031–6037, doi: 10.18653/v1/2022.naacl-main.441.
Faculty/Chair:
Author:
Kadiķis, Emīls
Srivastav, Vaibhav
Klinger, Roman
Title of the compilation:
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Editors:
Carpuat, Marine
Marneffe, Marie-Catherine de
Meza Ruiz, Ivan Vladimir
Conference:
2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL), July 2022, Seattle
Publisher Information:
Association for Computational Linguistics, Seattle
Year of publication:
2022
Pages:
6031–6037
Language:
English
Abstract:
The task of natural language inference (NLI), deciding whether a hypothesis entails or contradicts a premise, has received considerable attention in recent years. All competitive systems build on top of contextualized representations and make use of transformer architectures for learning an NLI model. When somebody is faced with a particular NLI task, they need to select the best model that is available. This is a time-consuming and resource-intensive endeavour. To solve this practical problem, we propose a simple method for predicting the performance without actually fine-tuning the model. We do this by testing how well pre-trained models perform on the aNLI task when sentence embeddings are merely compared with cosine similarity, and relating this to the performance achieved when a classifier is trained on top of these embeddings. We show that the accuracy of the cosine similarity approach correlates strongly with the accuracy of the classification approach, with a Pearson correlation coefficient of 0.65. Since the similarity is orders of magnitude faster to compute on a given dataset (less than a minute vs. hours), our method can lead to significant time savings in the process of model selection.
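The abstract describes the probe in enough detail to sketch its cheap half. Below is a minimal Python sketch (not the authors' released code) of the cosine-similarity side of the comparison: embed the observation pair and both hypothesis candidates of each aNLI instance with a pre-trained sentence encoder, pick the hypothesis whose embedding is closer to the context, and report accuracy; ranking candidate models by this accuracy is the cheap proxy for how well they would do once a classifier is trained on their embeddings. The Hugging Face dataset id "art", its field names, the label encoding, and the concatenation of the two observations are assumptions.

# Sketch of a zero-shot cosine-similarity probe for aNLI model selection.
# Assumes the Hugging Face "art" dataset with fields observation_1/2,
# hypothesis_1/2 and a label of 1 or 2 marking the correct hypothesis.
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, util

def cosine_similarity_accuracy(model_name: str, split: str = "validation") -> float:
    model = SentenceTransformer(model_name)
    data = load_dataset("art", split=split)  # aNLI / ART dataset (assumed HF id)

    contexts = [f"{ex['observation_1']} {ex['observation_2']}" for ex in data]
    hyp1 = [ex["hypothesis_1"] for ex in data]
    hyp2 = [ex["hypothesis_2"] for ex in data]
    labels = [ex["label"] for ex in data]  # 1 or 2 (assumed encoding)

    ctx_emb = model.encode(contexts, convert_to_tensor=True)
    h1_emb = model.encode(hyp1, convert_to_tensor=True)
    h2_emb = model.encode(hyp2, convert_to_tensor=True)

    # Cosine similarity between each context and its own two hypothesis candidates.
    sim1 = util.cos_sim(ctx_emb, h1_emb).diagonal()
    sim2 = util.cos_sim(ctx_emb, h2_emb).diagonal()

    predictions = [1 if s1 >= s2 else 2 for s1, s2 in zip(sim1, sim2)]
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

if __name__ == "__main__":
    # Compare several candidate encoders without fine-tuning; the model with the
    # highest cosine-similarity accuracy is predicted to also perform best after
    # training a classifier on top of its embeddings. Model names are examples.
    for name in ["all-MiniLM-L6-v2", "all-mpnet-base-v2"]:
        print(name, cosine_similarity_accuracy(name))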
GND Keywords:
Computerlinguistik
Automatische Sprachanalyse
Inferenz <Künstliche Intelligenz>
Keywords:
Abductive Natural Language Inference
DDC Classification:
RVK Classification:
Peer Reviewed:
Yes
International Distribution:
Yes
Open Access Journal:
Yes
Type:
Conference object
Activation date:
March 7, 2024
Permalink
https://fis.uni-bamberg.de/handle/uniba/93889