Female, white, 27? Bias Evaluation on Data and Algorithms for Affect Recognition in Faces
Pahl, Jaspar; Rieger, Ines; Möller, Anna; et al. (2022): Female, white, 27? Bias Evaluation on Data and Algorithms for Affect Recognition in Faces, in: FAccT '22: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, Association for Computing Machinery, pp. 973–987, doi: 10.1145/3531146.3533159.
Author:
Pahl, Jaspar; Rieger, Ines; Möller, Anna; et al.
Title of the compilation:
FAccT '22: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency
Conference:
FAccT '22: 2022 ACM Conference on Fairness, Accountability, and Transparency, June 21–24, 2022, Seoul, Republic of Korea
Publisher Information:
Association for Computing Machinery
Year of publication:
2022
Pages:
973–987
ISBN:
978-1-4503-9352-2
Language:
English
Abstract:
Nowadays, Artificial Intelligence (AI) algorithms show strong performance for many use cases, making them desirable for real-world scenarios where they provide high-impact decisions. However, one major drawback of AI algorithms is their susceptibility to bias and the resulting unfairness. This has a major impact on their application, as they exhibit a higher failure rate for certain subgroups. In this paper, we focus on the field of affective computing and particularly on the detection of bias for facial expressions. Depending on the deployment scenario, bias in facial expression models can have a disadvantageous impact, and it is therefore essential to evaluate the bias and limitations of the model. In order to analyze the metadata distribution in affective computing datasets, we annotate several benchmark training datasets, containing both Action Units and categorical emotions, with age, gender, ethnicity, glasses, and beards. We show that there is a significantly skewed distribution, particularly for ethnicity and age. Based on this metadata annotation, we evaluate two trained state-of-the-art affective computing algorithms. Our evaluation shows that the strongest bias is in age, with the best performance for persons under 34 and a sharp decrease for older persons. Furthermore, we see an ethnicity bias whose direction varies with the algorithm, a slight gender bias, and worse performance for facial parts occluded by glasses.
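For illustration, a minimal sketch of the kind of disaggregated evaluation the abstract describes: computing a model's score separately per annotated metadata subgroup and comparing the groups. The DataFrame, its column names, and the metric choice are assumptions for this sketch, not details taken from the paper.

    # Hypothetical sketch of subgroup bias evaluation: disaggregate model
    # performance by annotated metadata (age, gender, ethnicity) and compare
    # per-group scores. Column names and data are illustrative assumptions.
    import pandas as pd
    from sklearn.metrics import f1_score

    def subgroup_scores(df: pd.DataFrame, group_col: str) -> pd.Series:
        """Macro-F1 of predicted vs. true emotion labels, per metadata subgroup."""
        return df.groupby(group_col).apply(
            lambda g: f1_score(g["y_true"], g["y_pred"], average="macro")
        )

    # Usage with a hypothetical annotated prediction table where df has
    # columns y_true, y_pred, age_group, gender, ethnicity; a large gap
    # between the per-group scores signals bias:
    # for col in ["age_group", "gender", "ethnicity"]:
    #     print(subgroup_scores(df, col))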
GND Keywords:
Affective Computing
Mimik (facial expressions)
Gefühlsausdruck (emotional expression)
Kategoriale Daten (categorical data)
Annotation
Bias
Fairness <Informatik>
Datenerhebung (data collection)
Evaluation
Keywords:
affective computing
action units
categorical emotions
metadata post-annotation
bias
fairness
data evaluation
algorithm evaluation
DDC Classification:
RVK Classification:
Type:
Conference object
Activation date:
August 8, 2022
Permalink
https://fis.uni-bamberg.de/handle/uniba/55036