Flavien Prost
Gemini: a family of highly capable multimodal models
G Team, R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, ...
arXiv preprint arXiv:2312.11805, 2023
Fairness without demographics through adversarially reweighted learning
P Lahoti, A Beutel, J Chen, K Lee, F Prost, N Thain, X Wang, E Chi
Advances in neural information processing systems 33, 728-740, 2020
Debiasing embeddings for reduced gender bias in text classification
F Prost, N Thain, T Bolukbasi
First Workshop on Gender Bias in Natural Language Processing ACL 2019, 2019
Toward a better trade-off between performance and fairness with kernel-based distribution matching
F Prost, H Qian, Q Chen, EH Chi, J Chen, A Beutel
NeurIPS 2019 Workshop on Machine Learning with Guarantees, 2019
Measuring Recommender System Effects with Simulated Users
S Yao, Y Halpern, N Thain, X Wang, K Lee, F Prost, EH Chi, J Chen, A Beutel
2nd Workshop on Fairness, Accountability, Transparency, Ethics and Society …, 2020
Understanding and improving fairness-accuracy trade-offs in multi-task learning
Y Wang, X Wang, A Beutel, F Prost, J Chen, EH Chi
Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data …, 2021
Practical compositional fairness: Understanding fairness in multi-component recommender systems
X Wang, N Thain, A Sinha, F Prost, EH Chi, J Chen, A Beutel
Proceedings of the 14th ACM International Conference on Web Search and Data …, 2021
Measuring model fairness under noisy covariates: A theoretical perspective
F Prost, P Awasthi, N Blumm, A Kumthekar, T Potter, L Wei, X Wang, ...
Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 873-883, 2021
Simpson's Paradox in Recommender Fairness: Reconciling differences between per-user and aggregated evaluations
F Prost, B Packer, J Chen, L Wei, P Kremp, N Blumm, S Wang, T Doshi, ...
arXiv preprint arXiv:2210.07755, 2022
FRAPPÉ: A Post-Processing Framework for Group Fairness Regularization
A Ţifrea, P Lahoti, B Packer, Y Halpern, A Beirami, F Prost
arXiv preprint arXiv:2312.02592, 2023
Towards A Scalable Solution for Improving Multi-Group Fairness in Compositional Classification
J Atwood, T Tian, B Packer, M Deodhar, J Chen, A Beutel, F Prost, ...
International Conference on Machine Learning (ICML) SCIS Workshop, 2023
FRAPPÉ: A Group Fairness Framework for Post-Processing Everything
A Tifrea, P Lahoti, B Packer, Y Halpern, A Beirami, F Prost
Forty-first International Conference on Machine Learning, 2024