Uncovering the Bias in Facial Expressions
Deuschel, Jessica; Finzel, Bettina; Rieger, Ines (2021): Uncovering the Bias in Facial Expressions, in: Judith Rauscher, Mona Hess, Astrid Schütz, et al. (eds.), Kolloquium Forschende Frauen 2020 - Gender in Gesellschaft 4.0 : Beiträge Bamberger Nachwuchswissenschaftlerinnen, Bamberg: University of Bamberg Press, pp. 15–42, doi: 10.20378/irb-90482.
Faculty/Chair:
Author:
Deuschel, Jessica; Finzel, Bettina; Rieger, Ines
Title of the compilation:
Kolloquium Forschende Frauen 2020 - Gender in Gesellschaft 4.0 : Beiträge Bamberger Nachwuchswissenschaftlerinnen
Conference:
Kolloquium Forschende Frauen 2020 ; Bamberg
Publisher Information:
Bamberg: University of Bamberg Press
Year of publication:
2021
Pages:
15–42
ISBN:
978-3-86309-853-7
Language:
English
DOI:
10.20378/irb-90482
Abstract:
Over the past decades, the machine learning and deep learning community has celebrated great achievements in challenging tasks such as image classification. The deep architecture of artificial neural networks, together with the plenitude of available data, makes it possible to describe highly complex relations. Yet it is still impossible to fully capture what a deep learning model has learned and to verify that it operates fairly and without creating bias, especially in critical tasks, for instance those arising in the medical field. One example of such a task is the detection of distinct facial expressions, called Action Units, in facial images.
Considering this specific task, our research aims to provide transparency regarding bias, specifically in relation to gender and skin color. We train a neural network for Action Unit classification and analyze its performance quantitatively, based on its accuracy, and qualitatively, based on heatmaps. A structured review of our results indicates that we are able to detect bias. Even though we cannot conclude from our results that lower classification performance emerged solely from gender and skin color bias, these biases must be addressed, which is why we conclude with suggestions on how the detected bias can be avoided.
GND Keywords:
Mimik
Vorurteil
Keywords:
-
DDC Classification:
RVK Classification:
Type:
Conference object
Activation date:
January 10, 2024
Permalink
https://fis.uni-bamberg.de/handle/uniba/90482