About

SynthFairCLIP is a research initiative focused on fair vision–language models.

We study how to reduce bias in CLIP-style models.

What we release

GitHub – evaluation tools

If you use our resources, please cite the SynthFairCLIP project.


Acknowledgement

We acknowledge EuroHPC JU for awarding this project (ID EHPC-AI-2024A02-040) access to MareNostrum 5, hosted at BSC-CNS.