Flavien Prost
Verified email at google.com
Title · Cited by · Year
Gemini: a family of highly capable multimodal models
G Team, R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, ...
arXiv preprint arXiv:2312.11805, 2023
Cited by 1042 · 2023
Fairness without demographics through adversarially reweighted learning
P Lahoti, A Beutel, J Chen, K Lee, F Prost, N Thain, X Wang, E Chi
Advances in neural information processing systems 33, 728-740, 2020
Cited by 322 · 2020
Debiasing embeddings for reduced gender bias in text classification
F Prost, N Thain, T Bolukbasi
First Workshop on Gender Bias in Natural Language Processing (ACL 2019), 2019
Cited by 76 · 2019
Toward a better trade-off between performance and fairness with kernel-based distribution matching
F Prost, H Qian, Q Chen, EH Chi, J Chen, A Beutel
NeurIPS 2019 Workshop on Machine Learning with Guarantees, 2019
Cited by 45* · 2019
Understanding and improving fairness-accuracy trade-offs in multi-task learning
Y Wang, X Wang, A Beutel, F Prost, J Chen, EH Chi
Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data …, 2021
Cited by 42 · 2021
Measuring Recommender System Effects with Simulated Users
S Yao, Y Halpern, N Thain, X Wang, K Lee, F Prost, A Beutel, EH Chi, J Chen
2nd Workshop on Fairness, Accountability, Transparency, Ethics and Society …, 2020
Cited by 40 · 2020
Practical compositional fairness: Understanding fairness in multi-component recommender systems
X Wang, N Thain, A Sinha, F Prost, EH Chi, J Chen, A Beutel
Proceedings of the 14th ACM International Conference on Web Search and Data …, 2021
Cited by 34* · 2021
Measuring model fairness under noisy covariates: A theoretical perspective
F Prost, P Awasthi, N Blumm, A Kumthekar, T Potter, L Wei, X Wang, ...
Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 873-883, 2021
Cited by 12 · 2021
Simpson's Paradox in Recommender Fairness: Reconciling differences between per-user and aggregated evaluations
F Prost, B Packer, J Chen, L Wei, P Kremp, N Blumm, S Wang, T Doshi, ...
arXiv preprint arXiv:2210.07755, 2022
Cited by 3 · 2022
FRAPPÉ: A Group Fairness Framework for Post-Processing Everything
A Ţifrea, P Lahoti, B Packer, Y Halpern, A Beirami, F Prost
arXiv preprint arXiv:2312.02592, 2024
Cited by 1* · 2024
Inducing Group Fairness in LLM-Based Decisions
J Atwood, P Lahoti, A Balashankar, F Prost, A Beirami
arXiv preprint arXiv:2406.16738, 2024
2024
FRAPPÉ: A Group Fairness Framework for Post-Processing Everything
A Tifrea, P Lahoti, B Packer, Y Halpern, A Beirami, F Prost
International Conference on Machine Learning, 2024
2024
Towards A Scalable Solution for Improving Multi-Group Fairness in Compositional Classification
J Atwood, T Tian, B Packer, M Deodhar, J Chen, A Beutel, F Prost, ...
International Conference on Machine Learning (ICML) SCIS Workshop, 2023
2023