A robot evaluating your cover letter: is this the future of human resources? It may seem that way to anyone watching how quickly artificial intelligence is being adopted by HR departments and recruitment companies. The idea sounds smart: instead of a possibly biased human, an objectively calculating robot picks out the best resumes. But is this legally allowed?

There are many software systems that can be used to screen applicants. Picking out promising resumes is the most popular application, but it is also becoming more common to analyze video interviews to judge whether someone is nervous, evasive or lying.

What all this software has in common is that it works statistically: from a dataset of previous applicants, it derives the common characteristics and their distribution, and each new application is then sorted into the matching category.
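As a purely hypothetical illustration (the feature names and all numbers below are made up, not taken from any real product), such a statistical sorter can be sketched in a few lines of Python: average the profiles of past applicants per outcome, then place a new application in the group whose average profile it most resembles. Real systems use far richer features and models, but the principle is the same.

```python
# Minimal sketch of statistical applicant screening. The features
# (years of experience, skill score) and all numbers are hypothetical.
import math

# Historical data: (feature vector, outcome)
history = [
    ((8.0, 7.5), "hired"),
    ((6.5, 8.0), "hired"),
    ((2.0, 3.0), "rejected"),
    ((1.5, 4.0), "rejected"),
]

def centroid(rows):
    """Average feature vector of one group of past applicants."""
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(len(rows[0])))

def classify(applicant, history):
    """Place a new application in the nearest historical group."""
    groups = {}
    for features, outcome in history:
        groups.setdefault(outcome, []).append(features)
    return min(groups, key=lambda o: math.dist(applicant, centroid(groups[o])))

print(classify((7.0, 7.0), history))  # -> hired
print(classify((2.0, 2.5), history))  # -> rejected
```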

objectivity

Objectivity is an important selling point for many companies. That way, you know your HR staff are not unwittingly rejecting women or applying other discriminatory criteria during selection. But is such software really that objective? After all, the data that feeds such a system comes from historical practice, and discriminatory patterns may be hidden in it.

A well-known example comes from Amazon, whose experimental recruiting AI favored men because it had been trained on years of historical resumes that came overwhelmingly from male applicants.
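The mechanism behind that case is easy to reproduce. In the toy sketch below (same nearest-group approach as above, with entirely made-up data), equally qualified women were rejected in the historical data; the "objective" system then learns that gender, not competence, is what separates hires from rejections.

```python
# Illustrative only: made-up data showing how historical bias is learned.
# Feature vector: (experience, skill_score, is_male). In this history,
# equally qualified women were rejected, so gender becomes the only
# signal that separates the two groups.
import math

history = [
    ((8.0, 8.0, 1.0), "hired"),
    ((7.0, 7.0, 1.0), "hired"),
    ((8.0, 8.0, 0.0), "rejected"),
    ((7.0, 7.0, 0.0), "rejected"),
]

def centroid(rows):
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(len(rows[0])))

def classify(applicant, history):
    groups = {}
    for features, outcome in history:
        groups.setdefault(outcome, []).append(features)
    return min(groups, key=lambda o: math.dist(applicant, centroid(groups[o])))

# Two applicants with identical competences, differing only in gender:
print(classify((8.0, 8.0, 1.0), history))  # -> hired
print(classify((8.0, 8.0, 0.0), history))  # -> rejected
```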

new legislation

Fear of such "algorithmic exclusion" is why the General Data Protection Regulation (GDPR) explicitly prohibits using artificial intelligence to select applicants. Or, more precisely: it prohibits selecting applicants on the basis of AI alone.

AI is still allowed as a tool for a recruiter who then makes their own decision. In practice, of course, it is hard to prove whether the recruiter mainly relied on the AI.

However, new and stricter legislation is on the way from Europe: the AI Act. That sci-fi name is a bit misleading, because the law covers all kinds of algorithms that automatically judge or make decisions about people. A standard decision tree, or a program that follows fixed steps and choices, falls under it just as well.
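To see how broad that scope is, consider a hypothetical hand-written screening rule set like the one below: no statistics or learning involved, yet it automatically makes judgments about people, which is exactly the kind of system the AI Act addresses.

```python
# A hypothetical fixed decision tree for screening applicants. No machine
# learning involved, yet it still makes automatic decisions about people.

def screen(years_experience: int, has_degree: bool, gap_in_cv: bool) -> str:
    if years_experience < 2:
        return "reject"
    if not has_degree and years_experience < 5:
        return "reject"
    if gap_in_cv:
        return "manual review"
    return "invite to interview"

print(screen(years_experience=1, has_degree=True, gap_in_cv=False))  # reject
print(screen(years_experience=6, has_degree=False, gap_in_cv=True))  # manual review
```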

The central criterion is the impact such a system can have on people. If that impact is negligible, the law allows its use without further rules; if the risks are unacceptable, its use is prohibited.

strict requirements

HR systems fall in between: high-risk AI, but not unacceptably so. Strict requirements apply to this category, such as risk management and clear design documentation, including built-in explanation modules so that people can understand how the system reached its decision. Its accuracy must also be demonstrated in writing, and both the supplier and the user are liable for any errors.

For now, such systems are therefore legal as an aid. The applicant does have one weapon: under the GDPR, a company must explain why it is rejecting them. "Our AI judged your competencies to be below average and your experience very limited." Not pleasant to hear, but at least it is clear.
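What such an explanation could be based on is easy to sketch. Below, a hypothetical explanation module compares the applicant's scores against the average of successful applicants and puts the shortfalls into words; the benchmark numbers are invented for illustration.

```python
# Hypothetical explanation module: turn score shortfalls into the kind
# of plain-language reason the GDPR requires. All numbers are made up.

HIRED_AVERAGES = {"competencies": 7.0, "experience": 6.0}

def explain_rejection(scores: dict) -> str:
    """Build a human-readable rejection reason from the scores."""
    reasons = []
    for criterion, benchmark in HIRED_AVERAGES.items():
        value = scores.get(criterion, 0.0)
        if value < benchmark:
            reasons.append(
                f"your {criterion} ({value}) scored below the "
                f"average of successful applicants ({benchmark})"
            )
    if not reasons:
        return "No automated grounds for rejection were found."
    return "Your application was rejected because " + " and ".join(reasons) + "."

print(explain_rejection({"competencies": 5.5, "experience": 2.0}))
```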

Text: Arnoud Engelfriet

Source: Computer Totaal
