The Regulation of AI in Recruitment

AI applications for human resources teams are becoming ubiquitous. Many of these applications assist employers and employees alike, but they are not without dangers. They throw into sharp focus how important it is for HR teams to understand data protection law, and to work closely with their DPOs and advisers to ensure that data protection is taken into account at a deep and detailed level.

As in many other areas of societal change, HR is at the sharp end, with recruitment in particular being transformed by AI-based tools.  There are tools to write job adverts and job descriptions, tools to analyse candidates’ facial expressions during interviews, tools to scrape the internet for interesting information about candidates, and a myriad of other tools aimed at improving the recruitment process.  Many of the developers behind these tools have the stated aim of automating recruitment entirely, removing human involvement from the process.

There are plenty of positives to AI recruitment.  Tentative evidence suggests that AI-produced job descriptions and adverts attract more applicants than human-written ones, and interview tools can assist in making more objective assessments of candidates who are neurodiverse or who may be disadvantaged in more traditional, human-led recruitment settings.  Candidates also benefit: AI can produce applications that boost the likely success rates of candidates who are not applying in their first language, and advanced search tools give candidates access to a wider range of vacancies.

But there is undeniably a need for specific regulation.  Algorithms can accentuate and embed bias.  Decisions are often ‘black box’ decisions, leaving candidates with no way of knowing why they have or have not been accepted for the job.

So what does early regulation look like?  There is little to go on.  First out of the blocks was New York City, whose law regulating automated employment decision tools came into force in July 2023.  It requires hirers to notify candidates when they are using AI tools, and employers must also submit their systems to annual independent audits to demonstrate that they are not biased.  California is set to follow with a more comprehensive set of regulations, possibly as early as next year.

The US courts are also starting to play a role in the regulation of AI.  Earlier this year the Equal Employment Opportunity Commission brought a claim against an online education firm whose AI software discriminated against older candidates; the claim settled for $365,000.

China has also recently introduced new laws aimed at regulating AI, although these are more focused on regulating AI-generated content than on ensuring fair employment practices.  Closer to home, however, the EU’s AI Act, due to come into force over the next few years, is likely to set the gold standard for AI regulation, in the same way that the GDPR has for data privacy.  It will significantly affect recruiters and HR teams in European states, classifying AI in employment as posing a high risk to the rights and freedoms of individuals, so we can expect significant regulation of its use.  By contrast, the UK has shown little appetite for legislating in this area, although how long this laissez-faire approach can continue is unclear.

However, proposed laws and enforcement litigation take years, while AI developments are measured in months, if not weeks.  This could create the perception that we are currently in the Wild West, but existing laws do adapt, particularly in highly regulated areas such as employment.

Any reliance upon data for decision-making creates profound data protection risks, which many employers are ignoring.  Automated decision-making triggers strong obligations under UK and European data protection legislation.  Human intervention can reduce these obligations, but that intervention must be meaningful.  Employers are responsible for ensuring that privacy by design is built into their systems at every opportunity, but there are real concerns that employers are missing this, particularly if they are simply buying an out-of-the-box or online solution.

Data protection laws potentially provide aggrieved candidates with an alternative route to the traditional employment tribunal claim.  They are arguably most potent, however, when combined with the UK’s existing equality laws.  If a particular AI recruiting tool has introduced bias into the recruitment process, many individuals may have claims for unlawful discrimination.  The employer will not be able to pass liability on to the software provider, even if the employer had no intention to discriminate, and indeed no knowledge that it had discriminated.

So, what should employers do?  HR teams increasingly need to understand the impact that data protection law, and the emerging regulation of AI, will have on their practices and initiatives.  Many of the tools being used are excellent and can genuinely improve hiring practices, but shortcuts through the complex maze of laws and regulations in this area are going to be dangerous.

Matthew and the team at Prettys run regular webinars for HR professionals covering not only employment law, but also data protection and privacy considerations for employers.

Join the mailing list for future events here: https://tinyurl.com/y5vdbt59

Expert
Matthew Cole
Partner