Bias Analysis and Achieving Fairness for Affective Computing

Project Aim:

The Bias Analysis and Achieving Fairness for Affective Computing Project aims to:

  1. provide a guide by (a) giving an overview of the various definitions of bias and measures of fairness within the field of facial affective signal processing, and (b) categorizing the algorithms and techniques that can be used to investigate and mitigate bias in facial affective signal processing;
  2. propose Continual Learning (CL) as an effective strategy to enhance fairness in Facial Expression Recognition (FER) systems, guarding against biases arising from imbalances in data distributions (illustrated in the sketch after this list);
  3. extend recent studies that have suggested that model compression can have an adverse effect on algorithmic fairness, amplifying existing biases in machine learning models.
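
The continual learning direction in aim 2 can be pictured with a small rehearsal-style sketch: a classifier is trained on demographic groups one after another while replaying a few stored exemplars from earlier groups, so it does not drift towards (and become biased by) the most recently seen group. This is an illustrative sketch only, using synthetic data and assumed names (make_group, REPLAY_PER_GROUP, feature dimensions), not the project's actual implementation.

    # Minimal rehearsal-based continual learning sketch (illustrative only, synthetic data).
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset, ConcatDataset

    N_CLASSES = 7          # e.g. basic facial expression categories
    FEAT_DIM = 512         # assumed pre-extracted face features
    REPLAY_PER_GROUP = 64  # exemplars kept per previously seen group

    model = nn.Sequential(nn.Linear(FEAT_DIM, 128), nn.ReLU(), nn.Linear(128, N_CLASSES))
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    def make_group(n=256):
        """Synthetic stand-in for one demographic group's (features, labels)."""
        return TensorDataset(torch.randn(n, FEAT_DIM), torch.randint(0, N_CLASSES, (n,)))

    groups = [make_group() for _ in range(3)]   # e.g. data split by a sensitive attribute
    replay_buffer = []                          # small memory of past groups

    for group_data in groups:
        # Train on the current group together with replayed samples from earlier groups.
        loader = DataLoader(ConcatDataset([group_data] + replay_buffer),
                            batch_size=32, shuffle=True)
        for _ in range(3):
            for x, y in loader:
                optimiser.zero_grad()
                loss_fn(model(x), y).backward()
                optimiser.step()
        # Keep a small random exemplar set from this group for future replay.
        keep = torch.randperm(len(group_data))[:REPLAY_PER_GROUP]
        replay_buffer.append(TensorDataset(*group_data[keep]))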

Studies Conducted:

A summary of studies with human participants (as of June 2023):

  1. The Hitchhiker’s Guide to Bias and Fairness in Facial Affective Signal Processing: Overview and techniques [IEEE 2021]
  2. Towards Fair Affective Robotics: Continual Learning for Mitigating Bias in Facial Expression and Action Unit Recognition [arXiv 2021]
  3. The Effect of Model Compression on Fairness in Facial Expression Recognition [arXiv 2022]
  4. Counterfactual Fairness for Facial Expression Recognition [ECCV-W 2022]
  5. Causal Structure Learning of Bias for Fair Affect Recognition [WACV-W 2023]
  6. Towards Gender Fairness for Mental Health Prediction [IJCAI 2023]
  7. "It's not Fair!" -- Fairness for a Small Dataset of Multi-Modal Dyadic Mental Well-being Coaching [ACII 2023]

Major Findings:

Major findings (as of June 2023):

  • Our experiments show that CL-based methods, on average, outperform popular bias mitigation techniques, strengthening the need for further investigation into CL for the development of fairer FER algorithms.
  • Our experimental results show that: (i) compression and quantisation achieve a significant reduction in model size with minimal impact on overall accuracy for both CK+ and RAF-DB; (ii) in terms of model accuracy, the classifier trained and tested on RAF-DB appears more robust to compression than the one trained on CK+; (iii) for RAF-DB, the different compression strategies do not seem to widen the gap in predictive performance across the sensitive attributes of gender, race and age, in contrast with the results on CK+, where compression seems to amplify existing biases for gender (see the sketch below).
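
As a rough illustration of the compression experiments above, the sketch below applies post-training dynamic quantisation to a small classifier and compares per-group accuracy before and after compression; a gap that widens after compression would signal amplified bias. The model, data and group names are synthetic assumptions, not the CK+ or RAF-DB pipelines used in the study.

    # Illustrative sketch only (assumed model, data and group names): post-training
    # dynamic quantisation followed by a per-group accuracy comparison across a
    # sensitive attribute (e.g. gender).
    import torch
    import torch.nn as nn

    N_CLASSES, FEAT_DIM = 7, 512
    model = nn.Sequential(nn.Linear(FEAT_DIM, 128), nn.ReLU(), nn.Linear(128, N_CLASSES))
    model.eval()

    # Post-training dynamic quantisation of the linear layers to int8 weights.
    quantised = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

    def group_accuracy(net, x, y):
        with torch.no_grad():
            return (net(x).argmax(dim=1) == y).float().mean().item()

    # Synthetic stand-ins for test splits by a sensitive attribute.
    test_groups = {g: (torch.randn(200, FEAT_DIM), torch.randint(0, N_CLASSES, (200,)))
                   for g in ("group_a", "group_b")}

    for name, net in (("original", model), ("quantised", quantised)):
        accs = {g: group_accuracy(net, x, y) for g, (x, y) in test_groups.items()}
        gap = max(accs.values()) - min(accs.values())
        # A gap that grows after quantisation would indicate amplified bias.
        print(name, accs, f"gap={gap:.3f}")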

Project Team:

  • Prof Hatice Gunes (PI, Apr 2019 – present)
  • Nikhil Churamani (Research Assistant, Oct 2022 – Jan 2023) – now a postdoc at AFAR Lab on another project
  • Sinan Kalkan (Visiting Academic, Oct 2019 – Sep 2020)
  • Samuil Stoychev (M.Phil Student, Oct 2020 – Jun 2021)
  • Jiaee Cheong (PhD student funded by the Alan Turing Institute)