New research surveying over 2,000 UK citizens reveals a shift in the ‘AI debate’: away from fears over job loss alone, towards questions about the role of humans in AI programming, the potential for bias and where accountability for alleviating that bias should lie*.
The findings reveal greater familiarity with AI technology amongst UK citizens (thankfully, the majority – 64 per cent – know that AI is a machine or computer system that simulates human intelligence, not a physical robot). A quarter (25 per cent) think AI is fundamentally a force for good, with almost half of those respondents (45 per cent) believing it can help solve major world issues, such as diagnosing illnesses. Only 13 per cent thought AI was a force for evil overall.
The UK public is now shifting its concerns to the role of humans in AI as the technology grows and evolves. Two in five (41 per cent) think AI in its current state is biased, and 38 per cent blame inaccurate data for this bias. Furthermore, almost a third (32 per cent) think the technology industry has a greater role to play in encouraging gender diversity in AI, while a further 31 per cent call on the government to take this responsibility. A balanced, human touch is still what the UK public wants to see from AI.
All of these findings point to the need for a new breed of AI, which we term ‘augmented intelligence’: a force that amplifies the power of human intuition with the scale and speed of machine intelligence.
Elif Tutuk, senior director of research at Qlik, comments:

“Artificial Intelligence technology is clearly ushering in a new era of exciting developments and breakthroughs – but it is encouraging to see that the UK public realises that bias is what will hold this technology back from reaching its full potential.

“Bias is often caused by incomplete data sets and, perhaps most importantly, a lack of context around those data sets. For example, when we ask a question as humans, we ask it based on a hypothesis, which makes that question inherently biased from the outset. AI needs to have context ‘built in’ so it can analyse all of the data on behalf of humans and provide more objective outcomes.

“For the AI industry to grow and flourish, it needs trust at its core. Trust is shaped by human experiences, and we need greater attention paid to the many different human experiences that can create more balanced AI for all. The industry may have already decided that Alexa is a woman and Watson is a man, but more emphasis has to be placed on the data behind AI and who is programming it. If we can empower more men and women to become more data literate – with a better understanding of how to read, analyse and understand data – we can create a more level playing field for the growth of the UK’s AI industry.”
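The point about incomplete data sets causing bias can be made concrete with a minimal sketch (a hypothetical illustration, not part of the research): if a data set only ever samples one group of a population, any statistic computed from it will systematically misrepresent the whole.

```python
import random

random.seed(0)

# Hypothetical population made of two groups whose outcomes differ.
group_a = [random.gauss(50, 5) for _ in range(1000)]  # e.g. one demographic
group_b = [random.gauss(70, 5) for _ in range(1000)]  # another demographic

population = group_a + group_b
population_mean = sum(population) / len(population)

# An "incomplete data set" that only ever sees group A:
biased_sample = group_a[:200]
biased_mean = sum(biased_sample) / len(biased_sample)

print(f"population mean ≈ {population_mean:.1f}")
print(f"incomplete-sample mean ≈ {biased_mean:.1f}")
```

No model trained on the incomplete sample can correct for what it never saw, which is why context about how the data was collected matters as much as the data itself.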
*Research by Qlik