Eticas

High-performance AI without the risks. 

Externally Auditing Algorithms

Six months ago we published our first external audit of VioGén, the algorithmic system used in Spain to protect victims of domestic violence. We continue to stand for people’s rights, so we would like to share some of the projects we are working on that we believe will have a significant impact on public awareness of algorithmic accountability.

Uber 

We’re externally auditing Uber’s algorithm to check for socioeconomic bias, as irregularities in its systems have been reported. The objective is to determine whether, under similar circumstances, pricing changes with the average income of the departure area, and to examine gender bias and other data privacy concerns.

We’re already working on the third stage of this project, but we need more data to carry out the quantitative analysis. If you use Uber in Spain and want to collaborate with us, we explain here how you can help. No personal data will be processed.
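
To give a sense of what the quantitative stage could look like, here is a minimal sketch of a disparity test on crowdsourced price quotes. The file name, column names, and income grouping are illustrative assumptions, not our actual pipeline.

```python
# Minimal sketch: do per-km quoted prices differ by departure-area income?
# "uber_quotes.csv" and its columns (quoted_price, trip_km, area_income)
# are hypothetical placeholders for crowdsourced audit data.
import pandas as pd
from scipy.stats import mannwhitneyu

quotes = pd.read_csv("uber_quotes.csv")

# Normalize price by distance so rides of different lengths are comparable.
quotes["price_per_km"] = quotes["quoted_price"] / quotes["trip_km"]

low = quotes.loc[quotes["area_income"] == "low", "price_per_km"]
high = quotes.loc[quotes["area_income"] == "high", "price_per_km"]

# Non-parametric test: is the per-km price distribution the same in both areas?
stat, p_value = mannwhitneyu(low, high, alternative="two-sided")
print(f"median low-income: {low.median():.2f} eur/km, "
      f"high-income: {high.median():.2f} eur/km, p={p_value:.4f}")
```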

Banking

Bias in banking, financial, and credit entities is a huge deal: it has been found that, because of bias in the datasets used to develop these algorithms, credit scores for men and women tend to differ under the same circumstances. Through reverse engineering and algorithmic auditing, and in collaboration with bank user organizations, our team is working to uncover the truth about the AI used by banks and to stand for equality.
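
One way to make “different scores under the same circumstances” concrete is a paired (counterfactual) test: submit identical applications that differ only in the gender field and compare the resulting scores. The sketch below assumes a hypothetical scoring interface, score_applicant; it is not the actual system of any audited bank.

```python
# Minimal sketch of a paired counterfactual test against a black-box scorer.
import statistics

def score_applicant(application: dict) -> float:
    # Stand-in for the audited black-box system: in a real audit this call
    # would go to the bank's scoring interface. This dummy ignores gender.
    return (0.5 * application["income"] / 1000
            - 0.3 * application["debt"] / 1000
            + application["employment_years"])

base_profiles = [
    {"income": 32000, "debt": 4000, "employment_years": 6},
    {"income": 55000, "debt": 12000, "employment_years": 11},
]

# Identical applications, only gender differs; any systematic gap is bias.
gaps = []
for profile in base_profiles:
    score_m = score_applicant({**profile, "gender": "M"})
    score_f = score_applicant({**profile, "gender": "F"})
    gaps.append(score_m - score_f)  # > 0 would mean men scored higher

print("mean score gap (M - F):", statistics.mean(gaps))
```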

Insurance

What if having a health condition meant different pricing conditions for life insurance? That would be a flagrant act of discrimination. In collaboration with Fundació Catalana Síndrome de Down, we are conducting an external audit of the facial recognition systems used by life insurance companies to ensure that they do not discriminate against this group of people, as has been suspected.
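
A core measurement in this kind of audit is whether the system fails more often for one group than another. Below is a minimal sketch of a per-group false rejection rate comparison; the file name and columns are illustrative assumptions about the audit data, not our actual dataset.

```python
# Minimal sketch: false rejection rate of a face verification system by group.
import pandas as pd

# Illustrative schema: one row per verification attempt by a genuine user,
# with their demographic group and whether the system matched them (0/1).
results = pd.read_csv("verification_results.csv")  # columns: group, matched

# False rejection rate per group: share of genuine users the system failed
# to verify. A large gap between groups would indicate disparate performance.
frr = 1 - results.groupby("group")["matched"].mean()
print(frr)
```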

Social Media

Social media algorithms constantly affect most of the global population, and the concern surrounding them is growing enormously. These companies update their algorithms constantly, yet content is still not moderated enough, or the moderation process itself is biased. For instance, the moderation of content created by and presented to different groups of migrants is systematically biased, a problem we have tackled and studied as part of the European consortium Re:framing Migrants. This use of algorithms is a clear threat to democracy and should be addressed immediately, as AI ethics has not been applied in the development of these systems.
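
One simple way to test whether moderation rates differ systematically between groups is a contingency-table test on moderation counts. The counts below are illustrative placeholders, not findings from the Re:framing Migrants project.

```python
# Minimal sketch: are moderation rates independent of the content's group?
from scipy.stats import chi2_contingency

# Rows: content groups; columns: [moderated, not moderated] counts.
counts = [
    [120, 880],   # content about group A
    [310, 690],   # content about group B
]

chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi2={chi2:.2f}, p={p_value:.4f}")
# A small p-value indicates moderation rates differ between groups, which
# would then need qualitative follow-up to interpret.
```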

The power of social media is often underestimated because of its entertainment purpose, but we should be aware that these platforms can contribute to confirmation bias and, consequently, to social polarization.

As human rights advocates, we’re working on an external audit of the YouTube and TikTok algorithms to prevent this.