March 13, 2024: Social Bias in AI Systems
Dr. Chiara Gallese
University of Turin
Virtual Event: Wednesday, March 13, 2024, 12 PM EDT (UTC−04:00)
Description
Research has shown how data sets convey social bias in AI systems, especially those based on machine learning. A biased data set is not representative of reality and may contribute to perpetuating societal biases within the model. To tackle this problem, it is important to understand how to avoid biases, errors, and unethical practices when creating data sets. In this work we offer a preliminary framework for the semi-automated evaluation of fairness in data sets, combining statistical information about the data with qualitative considerations. We address the question of how much (un)fairness can be tolerated in a data set used for machine learning research, focusing on classification tasks. To provide guidance for the use of data sets in critical decision-making contexts, such as health decisions, we identify six fundamental features (balance, numerosity, unevenness, compliance, quality, incompleteness) that can affect model fairness. We developed a rule-based approach based on fuzzy logic that combines these characteristics into a single score and enables a semi-automatic evaluation of a data set in algorithmic fairness research.
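To give a flavor of what a rule-based fuzzy aggregation of these six features could look like, here is a minimal, self-contained Python sketch. Only the six feature names come from the abstract above; the membership functions, the rule base, and the defuzzification step are illustrative assumptions, not the authors' actual framework.

# Illustrative sketch: fuzzy rule-based aggregation of six data set
# characteristics into a single fairness score. The feature names come
# from the abstract; the rules and membership functions are assumed.

def low(x):
    """Membership in 'low' for x scaled to [0, 1]."""
    return max(0.0, min(1.0, (0.5 - x) / 0.5))

def mid(x):
    """Membership in 'medium', peaking at 0.5."""
    return max(0.0, 1.0 - abs(x - 0.5) / 0.5)

def high(x):
    """Membership in 'high' for x scaled to [0, 1]."""
    return max(0.0, min(1.0, (x - 0.5) / 0.5))

def fairness_score(f):
    """f: dict with keys balance, numerosity, unevenness, compliance,
    quality, incompleteness, each scaled to [0, 1] (for unevenness and
    incompleteness, higher means worse). Mamdani-style inference:
    AND = min, OR = max; rule outputs aggregated per level by max."""
    activations = {
        "high": max(
            min(high(f["balance"]), high(f["quality"])),
            min(high(f["compliance"]), high(f["numerosity"])),
        ),
        "medium": max(
            mid(f["balance"]),
            min(high(f["quality"]), mid(f["numerosity"])),
        ),
        "low": max(
            max(high(f["incompleteness"]), high(f["unevenness"])),
            low(f["compliance"]),
        ),
    }
    # Defuzzify with a weighted average of representative centroids.
    centroids = {"low": 0.1, "medium": 0.5, "high": 0.9}
    total = sum(activations.values())
    if total == 0.0:
        return 0.5  # no rule fires: fall back to a neutral score
    return sum(activations[k] * centroids[k] for k in activations) / total

# Example: a well-balanced, high-quality data set with some missing values.
print(fairness_score({
    "balance": 0.8, "numerosity": 0.7, "unevenness": 0.2,
    "compliance": 0.9, "quality": 0.85, "incompleteness": 0.3,
}))  # ~0.7 on a 0-1 fairness scale

The appeal of this kind of design, as the abstract suggests, is that quantitative statistics (e.g., class balance, sample size) and qualitative judgments (e.g., legal compliance) can be mapped onto the same fuzzy scale and combined through human-readable rules, yielding a single interpretable score.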
More information about this work is available here.
Registration is free but required.
Register using this Google Form.