Eticas

High-performance AI without the risks. 

PRECIRE, An algorithm for ranking potential employees using voice recognition

Mitchel Ondili, November 2022

Automation in commercial services has been touted as a cost- and personnel-saving mechanism, as well as a way of enforcing ‘objective’ hiring standards. The hiring process has two stages: recruitment, for identifying potential employees, and selection, for ranking applicants. Recommender systems are often used in recruitment, primarily across social media platforms, powering targeted ads or suggestions on hiring platforms like LinkedIn. The algorithm we are highlighting this month, PRECIRE, is a similar type of system, but one aimed instead at the selection phase of the hiring process.

PRECIRE relies on affective computing, the algorithmic analysis of personality traits. The algorithm works by analyzing voice samples provided by candidates to determine whether they are a good ‘fit’ for the company, or to identify existing employees for promotion. Candidates are given 10-15 minutes to answer questions on a topic that often has no relation to the position they have applied for. Instead, the algorithm analyzes how the candidate speaks, picking out intonation, pacing, the complexity of language used, and other similar variables. A psychological profile is created, which forms the basis of the algorithm’s hiring recommendation. The algorithm was initially rolled out in call centers and temporary employment agencies, and its scope and reach have since grown.
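PRECIRE’s internal pipeline is proprietary, but the kind of acoustic analysis such systems rely on can be sketched in a few lines. The example below is a minimal, assumed illustration using the open-source librosa library; the file name, sampling rate, and thresholds are placeholders, not details of PRECIRE:

```python
# Minimal sketch of prosodic feature extraction; illustrative only,
# NOT PRECIRE's actual pipeline. Assumes a recorded answer saved as
# "candidate_answer.wav" (a placeholder file name).
import numpy as np
import librosa

y, sr = librosa.load("candidate_answer.wav", sr=16000)  # mono audio at 16 kHz

# Intonation: fundamental frequency (pitch) track via the pYIN estimator.
f0, voiced_flag, voiced_probs = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)
pitch_mean = np.nanmean(f0)  # average pitch over voiced frames
pitch_var = np.nanstd(f0)    # pitch variability ("flat" vs. "lively" delivery)

# Pacing: crude proxy from the share of the recording that is non-silent.
intervals = librosa.effects.split(y, top_db=30)  # non-silent segments (in samples)
speech_seconds = sum(end - start for start, end in intervals) / sr
speech_ratio = speech_seconds / (len(y) / sr)

print({"pitch_mean": pitch_mean, "pitch_var": pitch_var, "speech_ratio": speech_ratio})
```

Features like these would then feed a statistical model that maps them to personality or ‘fit’ scores; it is precisely that mapping, learned from past employees, that the concerns below focus on.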

Aside from the obvious effectiveness concerns, algorithms that determine hiring decisions can easily reproduce prejudiced definitions of ‘cultural fit’. The concern with hiring algorithms lies first and foremost in training models on past data, which creates a feedback loop that reinforces existing hiring biases. PRECIRE in particular boasts a vast client list, including Vodafone and Volkswagen, but the use of hiring algorithms like this poses several risks that can be viewed from two main angles: touting objectivity while enforcing subjective biases, and the potential for privacy violations and data misuse.

1. Touting objectivity while enforcing subjective biases

The efficiency promised by the company developing this hiring algorithm relies on the assumed and promised objectivity of the outcome. Upon closer inspection, however, the algorithm, like so many others, uses criteria pre-selected by the company, based on subjective categorizations and on traits of past employees taken to exemplify future ones, such as confidence in oral presentation and perceived ability to work well with clientele. Additionally, the potential for discrimination against persons with disabilities increases, as certain speech patterns may be poorly represented in the training dataset. The training dataset can also skew results in favor of one ethnicity or race over another by disproportionately lowering the scores of non-native speakers, or of speakers with differences in pronunciation, dialect, word choice, and accent.
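Whether such a skew exists is an empirical question that can only be answered by looking at the system’s outputs. As a hypothetical illustration (the column names and scores below are invented, not PRECIRE data), an auditor could start with something as simple as comparing score distributions between speaker groups:

```python
# Hypothetical score-gap check between native and non-native speakers.
# Data and column names are invented for illustration.
import pandas as pd

scores = pd.DataFrame({
    "score":          [0.81, 0.76, 0.69, 0.83, 0.55, 0.52, 0.48, 0.44],
    "native_speaker": [True, True, True, True, False, False, False, False],
})

by_group = scores.groupby("native_speaker")["score"].agg(["mean", "count"])
gap = by_group.loc[True, "mean"] - by_group.loc[False, "mean"]

print(by_group)
print(f"Mean score gap (native minus non-native): {gap:.2f}")
```

A persistent gap like this would not prove discrimination on its own, but it is exactly the kind of signal a proper audit should surface and explain.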

2. Potential privacy violations and data misuse

The use of third-party hiring tools, independent of the results they provide, also raises questions about data rights and privacy preservation when data is transferred throughout the hiring process. Hiring companies need to be compelled to respect data rights and to safeguard applicants’ personal information; in the case of PRECIRE this is especially true given the sensitive nature of voice data, which can be misused for other purposes such as unlawful identity verification. Applicants need to be informed of their Access, Rectification, Cancellation and Opposition (ARCO) rights before submitting a voice recording.

How do we mitigate harm caused by hiring algorithms?

First, by implementing audit mechanisms for quality control. This includes transparency about the demographic representation in the dataset and the repercussions of siloed representations. The audit process should include an algorithmic impact assessment, which may include a privacy impact assessment, data protection impact assessment, or human rights impact assessment (a simple example of one such check is sketched below). Secondly, recruiting algorithms should always be a complementary recruiting tool, not a primary one: time-saving mechanisms mean little in the face of impersonal, inexplicable, discriminatory outcomes produced by automated decision-making. Thirdly, by maintaining the balance of power between the applicant and the prospective employer: disclosing all selection criteria, obtaining consent, and communicating, among other things, the applicant’s ARCO rights. Finally, hiring criteria should be based on measurable requirements such as past experience and educational qualifications, instead of relying on psychological evaluations that reinforce prejudicial notions of acceptability and maintain a harmful status quo. Even without algorithms, traditional hiring standards could be improved, with additional accommodations for persons with disabilities and groups at risk of discrimination. The ‘efficiency’ offered by a hiring algorithm is of no use if it only makes discrimination in hiring more efficient.
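To make the audit point concrete, one widely used quality-control check is the adverse impact ratio behind the US ‘four-fifths rule’: the selection rate of each group divided by the selection rate of the most-selected group. The sketch below uses invented data and group labels purely for illustration:

```python
# Adverse impact ("four-fifths rule") check on selection outcomes.
# Groups and outcomes are invented for illustration only.
import pandas as pd

outcomes = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [1,   1,   0,   1,   0,   1,   0,   0],
})

selection_rates = outcomes.groupby("group")["selected"].mean()
impact_ratios = selection_rates / selection_rates.max()

print(selection_rates)
print(impact_ratios)

# A ratio below 0.8 for any group flags possible adverse impact.
flagged = impact_ratios[impact_ratios < 0.8]
print("Groups flagged for possible adverse impact:", list(flagged.index))
```

Checks like this are a floor, not a ceiling: they should accompany, not replace, the broader impact assessments and human oversight described above.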

All algorithms should be explainable, so that those affected by their use can understand how they work, and so that we can hold those responsible to account. Eticas Foundation’s OASI project is an effort in that direction. On the OASI pages you can read more about algorithms and their social impact, and in the Register you can browse an ever-growing list of algorithms sorted into different categories. And if you know about an algorithmic system that’s not in the Register and you think it should be added, please let us know.