Bias Analysis and Achieving Fairness for Affective Computing
Project Aim:
The Bias Analysis and Achieving Fairness for Affective Computing Project aims to:
- provide a guide by (1) offering an overview of the various definitions of bias and measures of fairness within the field of facial affective signal processing, and (2) categorizing the algorithms and techniques that can be used to investigate and mitigate bias in this field;
- propose Continual Learning (CL) as an effective strategy for enhancing fairness in Facial Expression Recognition (FER) systems, guarding against biases arising from imbalances in data distributions (see the illustrative sketch after this list);
- extend recent studies suggesting that model compression can adversely affect algorithmic fairness by amplifying existing biases in machine learning models.
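To make the CL aim above more concrete, below is a minimal sketch, and not the project's actual method, of a replay-based, domain-incremental training loop in which demographic groups arrive sequentially and a small memory of earlier groups is interleaved into training to counter distribution imbalance. The model, features, labels and group names are all synthetic placeholders.

```python
import random
import torch
import torch.nn as nn

torch.manual_seed(0)
random.seed(0)

NUM_CLASSES = 7        # e.g. the basic facial expression categories
FEATURE_DIM = 128      # placeholder for pre-extracted face features
GROUPS = ["group_a", "group_b", "group_c"]   # hypothetical demographic groups

def synthetic_group_data(n=200):
    """Stand-in for one demographic group's FER data (features + labels)."""
    x = torch.randn(n, FEATURE_DIM)
    y = torch.randint(0, NUM_CLASSES, (n,))
    return list(zip(x, y))

model = nn.Sequential(nn.Linear(FEATURE_DIM, 64), nn.ReLU(),
                      nn.Linear(64, NUM_CLASSES))
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

replay_buffer = []     # small memory of samples from previously seen groups
BUFFER_PER_GROUP = 32

for group in GROUPS:                  # demographic groups arrive sequentially
    data = synthetic_group_data()
    for _ in range(3):                # a few passes over the current group
        random.shuffle(data)
        for x, y in data:
            batch = [(x, y)]
            if replay_buffer:         # interleave a sample replayed from earlier groups
                batch.append(random.choice(replay_buffer))
            xs = torch.stack([b[0] for b in batch])
            ys = torch.stack([b[1] for b in batch])
            optimiser.zero_grad()
            loss_fn(model(xs), ys).backward()
            optimiser.step()
    replay_buffer.extend(data[:BUFFER_PER_GROUP])   # remember a few samples of this group

print(f"Trained sequentially over {len(GROUPS)} groups; buffer size = {len(replay_buffer)}")
```

The replay memory is only one of several CL mechanisms that could serve this purpose; the sketch is meant to show the general setup, not to reproduce the specific CL strategies compared in the project's papers.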
Studies Conducted:
A summary of the studies conducted (as of June 2023):
- The Hitchhiker’s Guide to Bias and Fairness in Facial Affective Signal Processing: Overview and techniques [IEEE 2021]
- Towards Fair Affective Robotics: Continual Learning for Mitigating Bias in Facial Expression and Action Unit Recognition [arXiv 2021]
- The Effect of Model Compression on Fairness in Facial Expression Recognition [arXiv 2022]
- Counterfactual Fairness for Facial Expression Recognition [ECCV-W 2022]
- Causal Structure Learning of Bias for Fair Affect Recognition [WACV-W 2023]
- Towards Gender Fairness for Mental Health Prediction [IJCAI 2023]
- "It's not Fair!" -- Fairness for a Small Dataset of Multi-Modal Dyadic Mental Well-being Coaching [ACII 2023]
Major Findings:
Major findings (as of June 2023):
- Our experiments show that CL-based methods, on average, outperform popular bias mitigation techniques, underscoring the need for further investigation into CL for the development of fairer FER algorithms.
- Our experimental results show that: (i) compression and quantisation achieve a significant reduction in model size with minimal impact on overall accuracy for both CK+DB and RAF-DB; (ii) in terms of model accuracy, the classifier trained and tested on RAF-DB appears more robust to compression than the one trained on CK+DB; (iii) for RAF-DB, the different compression strategies do not appear to widen the gap in predictive performance across the sensitive attributes of gender, race and age, in contrast with the results on CK+DB, where compression appears to amplify existing biases for gender (see the illustrative sketch after this list).
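As a rough illustration of how such a gap in predictive performance across a sensitive attribute can be measured, and not the evaluation code used in the study, the sketch below applies PyTorch's post-training dynamic quantisation to a placeholder classifier and compares per-group accuracy, and the gap between groups, before and after compression. The data, model and binary sensitive attribute are synthetic.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
FEATURE_DIM, NUM_CLASSES = 128, 7

# an (untrained) placeholder classifier standing in for a FER model
model = nn.Sequential(nn.Linear(FEATURE_DIM, 64), nn.ReLU(),
                      nn.Linear(64, NUM_CLASSES))

# post-training dynamic quantisation of the linear layers to int8,
# one simple compression route among those a study might consider
quantised = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8)

# synthetic test set annotated with a binary sensitive attribute (e.g. gender)
x = torch.randn(600, FEATURE_DIM)
y = torch.randint(0, NUM_CLASSES, (600,))
attribute = torch.randint(0, 2, (600,))

def per_group_accuracy(net):
    """Accuracy for each value of the sensitive attribute."""
    with torch.no_grad():
        preds = net(x).argmax(dim=1)
    return {g: (preds[attribute == g] == y[attribute == g]).float().mean().item()
            for g in (0, 1)}

for name, net in [("original", model), ("quantised", quantised)]:
    accs = per_group_accuracy(net)
    gap = max(accs.values()) - min(accs.values())
    print(f"{name}: per-group accuracy = {accs}, gap = {gap:.3f}")
```

In practice the comparison would be run on real FER benchmarks with trained models and would report several fairness measures rather than a single accuracy gap; the sketch only shows the shape of the per-attribute comparison.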
Project Team:
- Prof Hatice Gunes (PI, Apr 2019-present)
- Nikhil Churamani (Research Assistant, Oct 2022 – Jan 2023); now a postdoc at the AFAR Lab on another project
- - Sinan Kalkan (Visiting Academic, Oct 2019 - Sep 2020)
- - Samuil Stoychev (M.Phil Student, Oct 2020 - Jun 2021)
- Jiaee Cheong (PhD student funded by the Alan Turing Institute)