New research has found that employees largely view workplace surveillance and monitoring technology unfavourably.
A new report by the TUC (Trades Union Congress) investigates workers’ attitudes towards the increase of artificial intelligence (AI) being adopted by employers.
Specifically, the report looks at the use of AI in people management, including decisions on recruitment, line management, monitoring and training, which are “increasingly being carried out by AI, instead of a person”.
Over a fifth of people surveyed (22 per cent) said artificial intelligence was used in their workplace for absence management, while 15 per cent said it was used for ratings. A further 14 per cent of respondents said AI was used for work allocation, timetabling shifts, and assessing training needs or allocations.
These figures contrast sharply with the proportion of workers who were happy to see technology making decisions about people at work: just 28 per cent, meaning almost three-quarters of respondents (72 per cent) were not.
Six in 10 employees felt that, unless carefully regulated, using technology to make people-centred decisions could increase unfair treatment in the workplace.
Another striking finding is that almost nine in 10 workers (89 per cent) said it was possible their employer had introduced AI without them being aware of it.
The TUC states that this is likely connected to a “lack of consultation and transparency” from employers regarding the use of AI at work.
The research also finds that the COVID-19 pandemic has accelerated the use of AI in the workplace: 15 per cent of employees said monitoring and surveillance at work had increased since the pandemic began.
Worryingly, this could strain the relationship between employers and their employees. Over half of workers (56 per cent) said that introducing new monitoring technologies into the workplace damages trust between employers and staff.
David Greenhalgh, specialist employment lawyer at Excello Law, said:
Covid-19 and home working have meant the workplace, and the HR function in that new ‘workplace’, needs to be redefined.
HR decision making can be open to questioning in the context of employment tribunal proceedings. An attempt to justify a decision made by AI in some cases may give an employer protection where the process is objectively based and without any element of discrimination. AI can be used as a tool to remove unconscious bias in the recruitment process.
But overuse of AI could backfire where live evidence is required about the decision making process and the factors that were taken into account. Decisions taken by HR are complex, involving a number of different factors being balanced against each other. Where an employer needs to be able to demonstrate it acted as a reasonable employer, AI is unlikely to provide the solution.
As we have also seen recently, relying solely on AI to make a wholly automated decision about an employee (or worker) based on data held about that individual could result in claims under the GDPR.
TUC General Secretary Frances O’Grady said:
Worker surveillance tech has taken off during this pandemic as employers have grappled with increased remote working.
Big companies are investing in intrusive AI to keep tabs on their workers, set more demanding targets – and to automate decisions about who to let go. And it’s leading to increased loneliness and monotony.
Workers must be properly consulted on the use of AI, and be protected from punitive ways of working. Nobody should have their livelihood taken away by an algorithm.
As we emerge from this crisis, tech must be used to make working lives better – not to rob people of their dignity.