
Artificial intelligence – how will it impact HR in 2020?

With artificial intelligence becoming an ever more intense topic of debate amongst the HR community with each passing month, we decided to reach out to one of our partners – global law firm Norton Rose Fulbright – to get some more information about the impact it is having in the workplace. In today’s blog, Paul Griffin – Head of Employment at the London office – and Marcus Evans – EMEA Head of Data Protection – give us some insight based on what they have been seeing and the impact they think AI will have.

Artificial intelligence in the workplace

Artificial Intelligence (AI) will have an increasing impact on the workplace in the coming years – indeed, some have compared its significance to the impact the industrial revolution had on work. AI’s potential applications in the workplace seem to be limitless and are evolving all the time. However, one of the uses of particular interest to employment and in-house lawyers is for recruitment purposes, where it creates a number of legal complexities.

AI and recruitment: the issue of bias

Some of the AI used for recruitment now even evaluates candidates’ facial expressions and body language as part of a wider data set to determine their suitability for positions. However, the scientific community is not in agreement over the appropriateness of ascribing particular meanings to the expression of emotions. It appears that how people express emotions may not be innate but learned, so our perception of others’ emotions can be very inaccurate.

It is not, therefore, that human expressions of emotion as interpreted by AI have no meaning; rather, those expressions may not lend themselves to a standard interpretation. This is just one example of why purchasers of such AI applications need to be cautious.

This problem highlights a general issue with the use of AI in the workplace: to reduce the risk of liability associated with its deployment, a business is likely to need to understand at a basic level how the algorithm is working and what its fallibilities are. If it has no such understanding, how can it take steps to mitigate the risks?

For example, an employer might use a third-party developer’s AI software to reject unsuitable candidates automatically, without owning the software itself. If a candidate then claims the decision to exclude them is based on a protected characteristic (such as race, sex or disability), the employer may be vicariously liable for unlawful discrimination. It may attempt to use the statutory defence available under the Equality Act 2010 and claim that it has done everything reasonably possible in the circumstances to prevent the discrimination from occurring. Putting aside the inherent difficulties in successfully arguing the statutory defence, it could only ever be a real possibility if the employer had attempted to understand how the AI was making decisions.

It is unlikely that the software developer would allow the employer to delve into its coding in any meaningful way, as allowing access could damage its commercial interests. However, even if such access were authorised, many employers may face what is called the ‘black box’ issue. This term describes the inability of many types of AI software to explain the logic used in reaching decisions. In some cases, separate software has to be developed to run alongside the AI application to interpret, in a way that humans can understand, how decisions have been reached. Commentary in this area suggests that this solution is by no means straightforward.
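For technically minded readers, the sketch below illustrates what this kind of “alongside” interpretation software might look like in its simplest form. It is a minimal, hypothetical Python example – not any vendor’s actual product – that trains an opaque candidate-scoring model on entirely synthetic data and then uses permutation importance to produce a rough, human-readable ranking of which inputs most influence its decisions. The feature names are invented for illustration.

```python
# Hypothetical sketch: post-hoc interpretation of a "black box" scoring model.
# All data and feature names are synthetic and for illustration only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Invented recruitment-style signals; each column stands in for one input.
feature_names = ["years_experience", "test_score", "interview_score",
                 "cv_keyword_match", "video_engagement"]
X, y = make_classification(n_samples=1000, n_features=5, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box": its internal logic is not directly readable by a human.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Post-hoc explanation: shuffle each feature in turn and measure how much the
# model's accuracy drops; larger drops suggest heavier reliance on that input.
result = permutation_importance(model, X_test, y_test, n_repeats=20,
                                random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>20}: {score:.3f}")
```

Even a summary like this only indicates which signals the model leans on, not why, which is one reason the commentary referred to above treats explanation tooling as far from a complete answer.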

If a developer assumed responsibility for ensuring the AI avoided making unlawful, biased decisions, it would open itself up to significant costs. It may therefore be reluctant to warrant that the operation (and therefore the decision-making) of the AI complies with all laws. If the relevant AI uses machine learning, the developer may argue that the provider of the data sets used to teach the system ought to be responsible, not the developer itself. For example, at the client’s request, the developer may customise the AI using the client’s own data sets relating to what the business considers a successful recruitment outcome. If those data sets contain implicit biases, these may be reflected in the decision-making of the AI, and the developer might argue that it ought not to be blamed for the outcome. If, for example, the AI decides that the ideal candidate is a white, able-bodied male employee in his thirties practising what might be perceived to be a mainstream religion, this is likely to reflect the business’s inherent bias.
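One crude way a business might screen for this kind of inherited bias before rollout is to compare the model’s selection rates across a protected characteristic. The hypothetical Python sketch below applies the widely cited “four-fifths” (80%) rule of thumb to invented shortlisting outputs; the groups, numbers and threshold are illustrative only, and a ratio test like this is a rough screen rather than a legal standard under the Equality Act 2010.

```python
# Hypothetical sketch: a simple adverse-impact screen on synthetic shortlisting
# decisions. Groups, rates and the 0.8 threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Simulated model outputs: 1 = invited to interview, 0 = rejected.
group_a = rng.binomial(1, 0.40, size=500)   # candidates in group A
group_b = rng.binomial(1, 0.25, size=500)   # candidates in group B

rate_a = group_a.mean()
rate_b = group_b.mean()
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"Selection rate, group A: {rate_a:.2%}")
print(f"Selection rate, group B: {rate_b:.2%}")
print(f"Adverse impact ratio:    {ratio:.2f}")
if ratio < 0.8:
    print("Ratio below 0.8: the disparity warrants investigation before rollout.")
```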

Data protection considerations

Data protection laws, and the EU General Data Protection Regulation (GDPR) in particular, place material constraints on the roll-out and use of AI applications in the workplace. These laws are engaged because, in many AI recruitment applications, the personal data of current or past employees is used to train the algorithm, and in all applications designed to monitor, take or assist in taking decisions about applicants or employees, their personal data is used in that process.

Taking a recruitment AI application as an example, at the training phase the principal hurdle is whether the employee expected his or her application details, and information about his or her relative success or failure since recruitment, to be used for this purpose. Some employees might consider this unobjectionable; others might fear that the use of the technology could be extended to managing their own performance rather than just assessing new recruits. The content of the fair processing notices given to applicants is therefore pivotal.

Before deploying an AI application that makes decisions about people, the GDPR requires the controller of the application to undertake a thorough assessment of how the application will meet the GDPR’s requirements, including the risks of it producing errors, bias and other harms. This assessment needs to be documented and to conclude that the risk of these harms arising is low; if not, the controller must consult with, and obtain permission from, the data protection regulator before proceeding. The UK Information Commissioner has published extensive guidance on what it expects controllers to do to meet these standards. A similar but broader approach is emerging at the European level through the EU High-Level Expert Group on AI’s Trustworthy AI checklist and the EU Commission’s new white paper on AI. All of these initiatives force the controller to consider all security, liability and ethical risks, and their mitigants, in a methodical and auditable manner. The process is very technical and labour-intensive. Although an outsourced AI service provider can offer support, the employer will need to devote significant time and resources to completing this process before rolling out any AI application.

The last point of note from a data protection perspective is that where there is no human intervention in an automated decision, and that decision has a significant impact on an individual (which being denied access to a job interview would satisfy), the employer using the AI application is obliged to set out the information used in the decision-making and to explain the logic in broad terms before the decision is taken. After the decision is taken, the candidate is entitled to ask for the decision to be retaken with human intervention and/or to be provided with a more specific explanation of how the decision was arrived at in his or her case.

In the recruitment context, given that there is no other obligation to give feedback to unsuccessful candidates, employers need to be comfortable with this level of transparency before deploying any fully automated candidate sifting applications.

It is these hard legal requirements in the GDPR that have put data protection regulators and data protection lawyers at the centre of the regulation of human-centric AI applications. Evaluating the risk of harm to individuals and, to a lesser extent, society has always been part of their skill set; understanding the complex mathematical phenomena that drive the benefits and risks of AI has not, and is causing many a furrowed brow!

If you would like to talk to any member of the Norton Rose Fulbright team about how your business could be impacted by AI in relation to some of the points listed above, please feel free to reach out to Marcus or Paul. Alternatively, if you’d like to talk to the LACE Partners team about some of the trends we’re seeing from HR teams in this space, please get in contact on +44 (0) 20 8065 0310. You can also listen to the HR on the Offensive podcast, in which Aaron and Chris discuss what’s happening with AI for some of the larger HCM systems and how HR teams will be affected by changes starting in 2020. Listen here.
