Title: Embarrassingly Simple Performance Prediction for Abductive Natural Language Inference

Authors: Kadiķis, Emīls; Srivastav, Vaibhav; Klinger, Roman (ORCID: 0000-0002-2014-6619)

Year: 2022
Type: Conference object
Language: English
DDC class: 004
Keyword: Abductive Natural Language Inference
Deposited: 2024-03-07
Handle: https://fis.uni-bamberg.de/handle/uniba/93889
DOI: 10.18653/v1/2022.naacl-main.441
Full text: https://aclanthology.org/2022.naacl-main.441

Abstract: The task of natural language inference (NLI), deciding whether a hypothesis entails or contradicts a premise, has received considerable attention in recent years. All competitive systems build on contextualized representations and use transformer architectures to learn an NLI model. Anyone faced with a particular NLI task needs to select the best available model, which is a time-consuming and resource-intensive endeavour. To solve this practical problem, we propose a simple method for predicting a model's performance without actually fine-tuning it. We do this by comparing how well pre-trained models perform on the abductive NLI (aNLI) task when sentence embeddings are simply compared with cosine similarity, against the performance achieved when a classifier is trained on top of these embeddings. We show that the accuracy of the cosine-similarity approach correlates strongly with the accuracy of the classification approach, with a Pearson correlation coefficient of 0.65. Since the similarity is orders of magnitude faster to compute on a given dataset (less than a minute vs. hours), our method can lead to significant time savings in the process of model selection.
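As an illustration of the cosine-similarity probe the abstract describes, the following is a minimal sketch, not the authors' implementation. It assumes the sentence-transformers library for embeddings and the public aNLI instance format (two observations, two candidate hypotheses); the encoder name "all-MiniLM-L6-v2" and the helper pick_hypothesis are illustrative choices, not taken from the paper.

    # Sketch of the zero-training probe: pick the candidate hypothesis
    # whose embedding is closest (by cosine) to the observations' embedding.
    # Assumption: sentence-transformers as the embedding backend; the model
    # name below is an arbitrary example, not the one used in the paper.
    from sentence_transformers import SentenceTransformer, util

    encoder = SentenceTransformer("all-MiniLM-L6-v2")

    def pick_hypothesis(obs1: str, obs2: str, hyp1: str, hyp2: str) -> int:
        """Return 0 or 1: the index of the hypothesis whose embedding is
        closer to the embedding of the concatenated observations. No
        fine-tuning or classifier training is involved."""
        context = encoder.encode(obs1 + " " + obs2, convert_to_tensor=True)
        hyps = encoder.encode([hyp1, hyp2], convert_to_tensor=True)
        sims = util.cos_sim(context, hyps)[0]  # cosine similarities, shape (2,)
        return int(sims.argmax())

    # The accuracy of this probe over a dev set can then be correlated
    # across models against fine-tuned accuracy (e.g. with
    # scipy.stats.pearsonr), mirroring the correlation analysis reported
    # in the abstract.

Because the probe only encodes each sentence once and compares vectors, it runs in well under a minute on a typical dev set, which is what makes it usable as a cheap proxy for the hours-long fine-tuning run it predicts.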