Richard Justenhoven: The four main challenges to overcome when using AI in assessment

The goal of any recruitment process is to identify the right person for the job. The closer you match the individual to the requirements of the role, the more effective that person will be. You certainly don’t need Artificial Intelligence (AI) to achieve this, but AI will help you do it quicker and more efficiently.

The reality is that AI excels at two things: analysing massive amounts of data and conducting ‘narrow’ tasks – the things you might outsource to a shared service centre. AI’s narrow tasks are like household chores: you wouldn’t wash dishes in a washing machine or put clothes in a dishwasher. Each machine performs one specific task and isn’t made to be swapped – and the same is true of an AI algorithm, which is built for a particular purpose and can’t simply be repurposed.

AI helps make processes easier by providing useful information, at various stages, that will help make a final recruitment decision. It also has three other key benefits: reducing bias, improving legal defensibility and increasing candidate engagement.

However, with AI in assessment growing at a rapid pace, besides technical challenges, there are four less obvious issues that must be addressed:

Defensibility:

The process of selecting candidates – be it for entry into the organisation or promotion within it – must always be legally defensible. It must not discriminate against, or favour, candidates on the basis of gender, race or any of the other characteristics outlined in equal opportunity legislation. Individuals also have rights – including the right to be informed how their assessment information will be used. Under the General Data Protection Regulation (GDPR), you need to make sure candidates know how and why assessment is being used, including any profiling involved in making decisions. Regardless of AI, this is good practice anyway.

Bear in mind, too, that standardised ‘plug-and-play’ AI systems are available – but they won’t differentiate your employer brand. If your competitors use the same systems, you’ll all be chasing the same talent. Also, these systems utilise ‘deep learning networks’ which learn as they go. This sounds promising, but in practice it makes it very difficult to explain exactly why candidates were accepted or rejected. These systems can therefore lead you to make selection decisions that you can’t defend, which leaves you vulnerable to litigation from disgruntled candidates. Only custom AI systems offer the ability to make transparent and defensible selection decisions.

Time:

Custom AI systems mirror human behaviour and replicate the best practice of your assessors and raters. To achieve this, you have to pre-feed the system with relevant information. It can take up to six months to ‘train’ an AI system to assess candidates in exactly the same way that your assessors and raters would judge them. Managing this lead time will be a major challenge for organisations.

Chief human resources officers (CHROs) should therefore be forming project teams now to look at custom AI models for video interviewing and other recruitment processes. Otherwise you’ll always be six months behind those pioneering companies that have already invested in this technology.

Ethics:

There is an ethical question around how much support you take from an AI system. For example, are you happy for an AI system to reject your candidates? Or would you prefer it to ‘flag up’ unsuitable candidates so you can review and check their details? How to use AI ethically will be a key consideration for many employers.

AI’s role should be restricted to providing additional information and enhancing efficiency. Recruiters should always set the objectives when hiring. AI can then deliver useful information, at various stages of the selection process, that will support a final decision.

Data handling:

AI excels at analysing massive amounts of data. However, when so much data is involved, the results can be misinterpreted or even deliberately abused. Good data handling practices will be essential not just for confidentiality but also for maintaining your organisation’s reputation. AI should be used carefully and honourably to help you predict which candidates will be effective in the role – and engaged by your organisation.

With this in mind, there are four guidelines to help you get AI in assessment right.

1. Recruiters should set the initial goal and make the final hiring decision. AI is simply there to support and assist the process.

2. Interviewing remains a human activity. What impression does a candidate get from being interviewed by an avatar?

3. Standard AI systems play a limited role; custom AI systems are needed to differentiate your employer brand – and to offer transparency in your decisions.

4. Remain aware of ethical considerations, not least how much support you take from an AI system.

In summary, using AI wisely makes it possible to predict closely which people will perform best in the specific roles available. They’re also likely to be the most engaged – which supports productivity and retention. There is no doubt that AI will be used more and more in assessment, but getting it right is essential for every organisation.