Gender Classification and Bias Mitigation in Facial Images

Citation:

Wenying Wu, Zheng Yang, Pavlos Protopapas, and Panagiotis Michalatos. 7/13/2020. “Gender Classification and Bias Mitigation in Facial Images”.

Abstract:

Gender classification algorithms have important applications in many domains today, such as demographic research, law enforcement, and human-computer interaction. Recent research has shown that algorithms trained on biased benchmark databases can produce algorithmic bias. However, to date, little research has been carried out on gender classification algorithms' bias toward gender minority subgroups, such as the LGBTQ and non-binary populations, who have distinct characteristics in gender expression. In this paper, we began by surveying existing benchmark databases for facial recognition and gender classification tasks. We found that current benchmark databases lack representation of gender minority subgroups. We extended the current binary gender classifier to include a non-binary gender class by assembling two new facial image databases: 1) a racially balanced inclusive database with a subset of the LGBTQ population, and 2) an inclusive-gender database consisting of people with non-binary gender. We worked to increase classification accuracy and mitigate algorithmic bias on our baseline model trained on the augmented benchmark database. Our ensemble model achieved an overall accuracy of 90.39%, a 38.72% increase over the baseline binary gender classifier trained on Adience. While this is an initial attempt at mitigating bias in gender classification, more work is needed to model gender as a continuum by assembling more inclusive databases.
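The abstract frames bias as an accuracy disparity across gender subgroups. As a minimal sketch of the kind of per-subgroup evaluation this implies (the function, labels, and toy data below are hypothetical illustrations, not the paper's actual method or datasets):

```python
import numpy as np

def subgroup_accuracies(y_true, y_pred, groups):
    """Compute accuracy separately for each subgroup tag in `groups`."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {
        g: float(np.mean(y_pred[groups == g] == y_true[groups == g]))
        for g in np.unique(groups)
    }

# Toy predictions from a three-class gender classifier on a small eval set.
y_true = ["female", "male", "non-binary", "non-binary", "female", "male"]
y_pred = ["female", "male", "non-binary", "female", "female", "male"]
groups = ["binary", "binary", "non-binary", "non-binary", "binary", "binary"]

accs = subgroup_accuracies(y_true, y_pred, groups)
overall = float(np.mean(np.asarray(y_pred) == np.asarray(y_true)))
gap = max(accs.values()) - min(accs.values())  # a simple disparity measure
print(f"overall={overall:.2%}, per-group={accs}, accuracy gap={gap:.2%}")
```

Reporting the gap between the best- and worst-served subgroups alongside overall accuracy makes the kind of bias the paper targets visible, rather than letting a high aggregate score mask poor performance on minority subgroups.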