Female, white, 27?: Bias Evaluation on Data and Algorithms for Affect Recognition in Faces

Faculty/Professorship: University of Bamberg; Explainable Machine Learning; Cognitive Systems
Author(s): Pahl, Jaspar; Rieger, Ines; Möller, Anna; Wittenberg, Thomas; Schmid, Ute
Title of the compilation: FAccT '22: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency
Conference: FAccT '22: 2022 ACM Conference on Fairness, Accountability, and Transparency, June 21-24, 2022, Seoul, Republic of Korea
Publisher Information: Association for Computing Machinery
Year of publication: 2022
Pages: 973-987
ISBN: 978-1-4503-9352-2
Language(s): English
DOI: 10.1145/3531146.3533159
Abstract: 
Artificial Intelligence (AI) algorithms now show strong performance in many use cases, making them desirable for real-world scenarios in which they inform high-impact decisions. However, a major drawback of AI algorithms is their susceptibility to bias and the resulting unfairness: they exhibit higher failure rates for certain subgroups, which strongly affects their applicability. In this paper, we focus on the field of affective computing and in particular on the detection of bias in facial expression recognition. Depending on the deployment scenario, bias in facial expression models can have a disadvantageous impact, so it is essential to evaluate a model's bias and limitations. To analyze the metadata distribution in affective computing datasets, we annotate several benchmark training datasets, covering both Action Units and categorical emotions, with age, gender, ethnicity, glasses, and beards. We show that the distributions are significantly skewed, particularly for ethnicity and age. Based on this metadata annotation, we evaluate two trained state-of-the-art affective computing algorithms. Our evaluation shows that the strongest bias is in age, with the best performance for persons under 34 and a sharp decrease for older persons. Furthermore, we observe an ethnicity bias whose direction varies with the algorithm, a slight gender bias, and worse performance when parts of the face are occluded by glasses.
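
For illustration only, the per-subgroup evaluation described in the abstract can be sketched in a few lines of Python. This is a minimal sketch, not the authors' pipeline; the file name and column names are hypothetical assumptions:

    # Minimal sketch, assuming a CSV with one row per image that holds the
    # model prediction and the post-annotated metadata (hypothetical schema).
    import pandas as pd
    from sklearn.metrics import f1_score

    df = pd.read_csv("predictions_with_metadata.csv")  # hypothetical file

    for attribute in ["age_group", "gender", "ethnicity", "glasses", "beard"]:
        print(f"--- {attribute} ---")
        for group, rows in df.groupby(attribute):
            # Macro F1 per subgroup; large gaps between subgroups for the
            # same attribute hint at a bias of the evaluated model.
            score = f1_score(rows["label"], rows["prediction"], average="macro")
            print(f"{group}: n={len(rows)}, macro-F1={score:.3f}")
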
GND Keywords: Affective Computing; Facial Expression; Emotional Expression; Categorical Data; Annotation; Bias; Fairness (Computer Science); Data Collection; Evaluation
Keywords: affective computing, action units, categorical emotions, metadata post-annotation, bias, fairness, data evaluation, algorithm evaluation
DDC Classification: 004 Computer science
RVK Classification: ST 302
Type: Conference Object
URI: https://fis.uni-bamberg.de/handle/uniba/55036
Release Date: August 8, 2022