Adversarial Audit
The Impact of Facial Recognition on People with Disabilities
Advances in AI, especially facial recognition, hold great promise, but the risks to diverse users need careful consideration. This audit aims to ensure empathetic and conscientious innovation, using AI to benefit everyone.
Facial recognition · AI bias · Disability
This adversarial audit report investigates the intersection of facial recognition technology and disability, shedding light on potential biases and challenges faced by individuals with disabilities.
Through a multi-faceted approach, including qualitative interviews and rigorous experimental testing, the study exposes disparities and biases in age prediction, gender classification, and emotion recognition for individuals with disabilities.
Key recommendations include adopting transparent bias mitigation strategies, prioritizing accessibility by design, collaborating with disability organizations, and investing in research on AI and disability. The findings underscore the need for a more inclusive and equitable approach to technological advancement.
- 7.19%: error rate in age prediction for participants with Down Syndrome.
- 4.45%: error rate in age prediction for participants without Down Syndrome.
- 50%: of people with disabilities face the risk of poverty and social exclusion.
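As a point of reference, per-group error rates like the ones above can be derived from paired predicted and actual ages. The sketch below is illustrative only: the file name, column names, and the choice of mean absolute percentage error (MAPE) as the metric are assumptions, since the report does not publish its exact computation.

```python
# Illustrative only: the audit's exact metric and data format are not public.
# Assumes a hypothetical CSV with columns: actual_age, predicted_age, group.
import pandas as pd

df = pd.read_csv("age_predictions.csv")  # hypothetical file

# Mean absolute percentage error (MAPE) per group: one plausible way
# to arrive at headline figures of this kind.
df["ape"] = (df["predicted_age"] - df["actual_age"]).abs() / df["actual_age"]
error_rate = df.groupby("group")["ape"].mean() * 100

print(error_rate.round(2))  # per-group error rates, in percent
```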
Relevance of AI systems' social impact on people with disabilities
The consequences of AI decision-making devoid of personal context are particularly concerning, as such decisions can perpetuate biases and systemic discrimination. To address these challenges, experts emphasize the urgent need to cultivate AI systems that embrace diversity, incorporating physical and psychological variations to ensure inclusivity. By prioritizing these principles, we can harness AI’s potential to empower individuals with disabilities and create a more equitable future for all.
The concerns voiced by experts encompass not only the potential for misclassification due to subtle variations but also the overarching challenge of AI-driven decisions lacking human nuance.
Systemic biases, magnified by AI’s self-learning capabilities, heighten the need for ethical and inclusive AI development. Central to these conversations is the imperative to train AI systems on diverse data that accurately represent individuals with disabilities. By doing so, we can navigate the intricate social dimensions of AI, ensuring that technological progress is synonymous with progress for all members of society.
Gender Disparity & BMI Prediction Challenges
Our analysis of age predictions revealed a clear gender bias. Women’s ages were consistently underestimated, in some unsettling instances predicted as young as 5 or 8 years old, while men’s ages tended to be overestimated, exacerbating the gender divide. These findings expose an entrenched gender bias within Azul’s algorithm, one whose ramifications extend across the lives of individuals with Down Syndrome.

Azul’s algorithm demonstrated moderate efficacy in BMI prediction for individuals with Down Syndrome, but it tended to overestimate BMI values, especially for women. This trend raises justifiable concerns about equity in insurance pricing: inflated predicted BMI values can translate into disproportionately high premiums, an added burden for individuals already navigating the challenges of Down Syndrome.
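To make the under/over-estimation pattern concrete, a disparity check of this kind typically compares the signed prediction error across gender groups. The following is a minimal sketch under assumed column names (gender, actual_age, predicted_age); it is not Eticas’s actual audit pipeline.

```python
# Minimal sketch of a gender-disaggregated error check.
# Column names and data source are assumptions, not the audit's actual code.
import pandas as pd

df = pd.read_csv("age_predictions.csv")  # hypothetical file

# Signed error: negative values mean underestimation, positive overestimation.
df["signed_error"] = df["predicted_age"] - df["actual_age"]

summary = df.groupby("gender")["signed_error"].agg(["mean", "min", "max"])
print(summary)
# A strongly negative mean for women alongside a positive mean for men
# would reproduce the underestimation/overestimation pattern described above.
```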
Age Prediction Challenges for Individuals with Down Syndrome
The comparison between Azul and DeepFace brings the challenges of age prediction for individuals with Down Syndrome into sharp focus. Azul’s predictions deviated from actual ages by as much as 21 years. Such discrepancies have critical implications, particularly in insurance pricing, where misjudging age can lead to unfair premiums and financial burdens.

In the case of DeepFace, the picture was similarly challenging: the algorithm’s age predictions for Down Syndrome participants deviated from actual ages by -16 to +23 years. These disparities parallel the difficulties encountered by Azul and underscore how intricate age prediction remains for individuals with Down Syndrome.
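For reproducibility, the open-source DeepFace side of such a comparison can be approximated in a few lines of Python using the library’s public analyze API. The image path and ground-truth age below are placeholders, and the return format may vary slightly between DeepFace versions; this is a sketch, not the audit’s actual test harness.

```python
# Sketch of a DeepFace age-prediction check; not the audit's actual harness.
# pip install deepface
from deepface import DeepFace

actual_age = 34                      # placeholder ground-truth age
result = DeepFace.analyze(
    img_path="participant.jpg",      # placeholder image path
    actions=["age"],
    enforce_detection=False,         # be lenient if no face is detected
)

# Recent DeepFace versions return a list with one dict per detected face.
predicted_age = int(result[0]["age"])
deviation = predicted_age - actual_age
print(f"predicted: {predicted_age}, deviation: {deviation:+d} years")
```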
Between the lines
Keeping all these findings in mind, Eticas recommends:

- Comprehensive Evaluation: In light of biases in facial recognition, stakeholders must assess its suitability and reliability.
- Accessibility by Design: All facial recognition models, including Azul, should prioritize universal design for an inclusive experience from the start.
- Disability Advocacy Collaboration: Companies like Azul should partner with disability organizations to develop ethical AI solutions.
- Regular Third-Party Audits: Azul and others should conduct audits to ensure transparency, fairness, and user protection.