Distribution-Guided Fairness Calibration in Learning
Description
In recent years, the emphasis on model fairness in AI applications for edge devices has grown. However, traditional AI model optimization has revolved around accuracy and efficiency, formulating it as a two-objective problem. Consequently, the fairness of the model often goes unaddressed, potentially leading to unjust treatment of minority groups. To rectify this oversight, it is imperative to include fairness in model optimization, making it a three-objective optimization problem in terms of accuracy, efficiency, and fairness. By examining existing methods, we found that the weight distribution affects both efficiency and fairness, yet these two metrics are always considered separately. To address this gap, we propose a novel optimization framework, namely FAIST, which calibrates a fair model by controlling the weight distribution to optimize fairness, efficiency, and accuracy simultaneously. We first devise an optimization algorithm that guides training to generate model weights following a desired distribution (a minimal illustrative sketch follows this description). Then, we integrate the optimizer into a reinforcement learning process to identify the distribution hyperparameters that yield high performance (see the second sketch below). Evaluation on dermatology and face-attribute datasets demonstrates FAIST's simultaneous improvements, with a notable 27.24% fairness improvement on the ISIC2019 dataset.
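The abstract does not include implementation details, so the following is a minimal hypothetical sketch of how training could be guided toward a desired weight distribution. It assumes the target distribution is a Gaussian with mean mu and standard deviation sigma, and uses a simple moment-matching penalty; the function names (distribution_penalty, train_step) and the weighting factor lam are illustrative, not part of FAIST.

```python
# Hypothetical sketch, not the authors' FAIST code. Assumes the "desired
# distribution" is a Gaussian N(mu, sigma^2) whose parameters are the
# tunable hyperparameters mentioned in the abstract.
import torch
import torch.nn as nn

def distribution_penalty(model: nn.Module, mu: float, sigma: float) -> torch.Tensor:
    """Penalize deviation of each layer's weight statistics from the
    target Gaussian's mean/std (a moment-matching surrogate)."""
    penalty = torch.zeros(())
    for p in model.parameters():
        if p.dim() > 1:  # weight tensors only; skip biases
            penalty = penalty + (p.mean() - mu) ** 2 + (p.std() - sigma) ** 2
    return penalty

def train_step(model, inputs, labels, optimizer, mu, sigma, lam=0.1):
    """One training step: task loss plus the distribution-guidance term."""
    optimizer.zero_grad()
    task_loss = nn.functional.cross_entropy(model(inputs), labels)
    loss = task_loss + lam * distribution_penalty(model, mu, sigma)
    loss.backward()
    optimizer.step()
    return loss.item()
```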
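The abstract names reinforcement learning for choosing the distribution hyperparameters but gives no algorithmic details; the sketch below simplifies that search to an epsilon-greedy bandit over candidate (mu, sigma) pairs. The callback train_and_eval and the reward design (a scalar combining accuracy, efficiency, and fairness) are assumptions for illustration.

```python
# Hypothetical sketch of the hyperparameter search, simplified to an
# epsilon-greedy bandit (the paper's actual RL formulation may differ).
import random

def search_distribution_params(train_and_eval, candidates, episodes=20, eps=0.3):
    """train_and_eval(mu, sigma) -> scalar reward combining accuracy,
    efficiency, and fairness (e.g., a weighted sum of the three metrics)."""
    rewards = {c: 0.0 for c in candidates}
    counts = {c: 0 for c in candidates}
    for _ in range(episodes):
        if random.random() < eps:
            c = random.choice(candidates)                    # explore
        else:
            c = max(candidates, key=lambda k: rewards[k])    # exploit best so far
        r = train_and_eval(*c)
        counts[c] += 1
        rewards[c] += (r - rewards[c]) / counts[c]           # running-mean update
    return max(candidates, key=lambda k: rewards[k])

# Usage (assuming a train_and_eval function is defined):
# best_mu, best_sigma = search_distribution_params(
#     train_and_eval, candidates=[(0.0, 0.05), (0.0, 0.1), (0.0, 0.2)])
```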
Event Type
Work-in-Progress Poster
Time
Wednesday, June 26, 5:00pm - 6:00pm PDT
Location
Level 2 Lobby
Topics
AI
Autonomous Systems
Cloud
Design
EDA
Embedded Systems
IP
Security