Written by: Nadine Mukhtar

How can AI be used in the Workplace?

Artificial Intelligence (AI) technologies have a broad range of applications in the workplace for both employers and employees. Within the recruitment and hiring process, AI technologies can be used to replace human decision making, a practice known as algorithmic decision making (ADM). Further, with the rise of generative AI technologies such as ChatGPT (defined as ‘algorithms that can be used to create new content, including audio, code, images, text…’), employees may also be able to use such tools to complete their tasks. Whilst AI technologies in the workplace can enhance efficiency, numerous concerns have been raised relating to direct and indirect discrimination, data protection, IP infringement and privacy. Absent a defined regulatory landscape on the use of AI in the workplace, employers will need to tread carefully to ensure that AI technologies do not breach existing legislation such as the Equality Act 2010 in the UK and the EU and UK General Data Protection Regulations (GDPR), amongst others.

A Closer Look at the Challenges and Opportunities of AI in the Recruitment and Hiring Process

The use of AI technologies by employers in the recruitment process is growing, with a recent study by the Society for Human Resource Management revealing that 79% of employers use AI in the recruitment process. One of the most popular uses of AI in employment is ADM, which can take the form of automated resume screening, video interviews and skills assessments. Further, AI technologies have been deployed in performance management decisions such as promotions and dismissals.

AI technologies can help employers and recruitment agencies, especially in large companies, by filtering through applications more efficiently and saving time. Likewise, AI systems can potentially outperform humans at evaluating an applicant’s personality during automated video interviews. It has also been claimed that the use of curated data sets to train AI tools can help reduce bias in recruitment and performance management decisions.

However, it has rightly been pointed out that if the data sets used to train AI tools are already biased in nature, the AI system may simply replicate the bias, as famously occurred with Amazon’s hiring system in 2018 which exhibited sex discrimination. This can pose several problems with regards to complying with existing legislation. For example, the UK’s Equality Act 2010, which protects individuals from direct and indirect discrimination based on nine characteristics, is posited to apply to any form of AI used in the workplace. Direct discrimination under the Equality Act occurs when less favourable treatment is exhibited due to a protected characteristic. On the other hand, indirect discrimination requires proof that a neutral provision, criterion or practice (PCP), which applies equally to all, nevertheless disadvantages a group with a certain protected characteristic in a manner that cannot be objectively justified.

While decisions without human intervention in the employment context are permitted under the UK GDPR, Rachel Lewis and Iman Kouchouk of Farrer & Co emphasise that an employer or recruiter using an AI system will still have to abide by anti-discrimination laws. As AI systems are perceived to be applying neutral PCPs, practitioners posit that discrimination claims against an employer are likely to focus on indirect discrimination. This would place a burden of proof on the claimant to demonstrate, on a balance of probabilities, the existence of a PCP which disadvantages the group that shares the protected characteristic. However, Daniel Gray of Mishcon de Reya highlights that an objective justification defence to an indirect discrimination claim may be open to an employer, if the employer can demonstrate that the use of an ‘AI system was a proportionate means of achieving a legitimate aim.’ Although the ease or difficulty of establishing such a justification is unclear, the rapid rise of AI systems in the workplace and their benefits in terms of efficiency, resources and costs may increase the chances of successfully demonstrating an objective justification. This is especially so considering that the largest organisations are more likely to use AI in the workplace, increasing the viability of justifications based on enhancing efficiency.

In addition to indirect discrimination claims, there is also the possibility that direct discrimination claims may arise if an AI system is configured to consider protected characteristics. Whilst the burden of proof initially rests on the claimant, an objective justification is rarely available for direct discrimination, meaning that the employer must prove that the AI system’s decision was not based on the protected characteristic. This will be a challenge, as the AI system’s black box, which refers to the invisible inputs and operations used to train an AI algorithm, will make it difficult to explain how a decision was reached. Further, it has been highlighted that the IP protection afforded to software used in AI tools is likely to compound the difficulty of explaining how an AI system arrived at a particular decision. Consequently, this may result in an adverse inference being drawn against an employer or recruiter in a discrimination claim based on the use of an AI system, opening another avenue of liability.

Generative AI Use by Employees

In addition to the use of AI by employers in the recruitment and hiring process, the sanctioned use of generative AI tools by employees has its advantages and disadvantages. Although repetitive tasks may be completed more efficiently, misuse of such tools by employees can render employers vicariously liable for employees’ actions.

Furat Ashraf of Bird & Bird has highlighted that these consequences can include reputational damage, financial liabilities, IP infringement and issues relating to confidentiality and data privacy.

Consequently, employers will need to train employees in the proper use of AI tools in the workplace to limit their potential liability, both by preventing discriminatory outcomes arising from AI tools and by ensuring compliance with IP, data protection and privacy legislation.

Next Steps

While adequate training on the use of AI tools in the workplace may be a sensible step, employers, recruiters and AI providers must still face the challenge of navigating lawful use of AI tools in other processes such as hiring and firing, absent a consensus on the regulation of AI in the workplace.

Some measures may include robust contractual arrangements between the employer/recruiter and the AI provider to apportion liability in instances of discrimination in the recruitment and hiring process. Further, practitioners highlight that commercial negotiations could relax the IP protection over the software used to configure an AI system, providing greater transparency as to how decisions are reached.

Further, with the EU AI Act on the horizon, which is expected to categorise the use of AI tools in the employment process as a ‘high-risk’ activity, employers should monitor the Act’s development and prepare to introduce human oversight of AI systems.