Despite the hype, many consumers admit they don't trust artificial intelligence, with a significant proportion also expressing cynicism about the benefits it brings.
Research conducted by Herbert Smith Freehills reveals that just 5% of UK consumers are unconcerned about the growing presence of AI in everyday life. Only 20% say they have a high level of trust in AI systems.
Undertaken to mark the launch of the firm's Emerging Tech Academy, the research explored views among 1000 consumers between the ages of 18 and 80. Respondents were asked about the type of AI systems they use today, expectations about future usage, and comfort levels with the way machines gather data and operate. Key findings include:
• Manipulative machines: just over half (56%) do not accept that AI can be impartial. Additionally, more than one-third of respondents (37%) fear the outputs of AI systems could be biased against specific groups, and over half (53%) fear AI will make decisions that directly affect them based on inaccurate information.
• Responsive, but not responsible: while 60% accept that AI will make the world run more efficiently by offering solutions quickly, just over half (53%) are concerned about a lack of accountability in AI systems. Almost one-third (31%) also see AI tools' failure to meet ethical expectations as a problem.
• Modern, yet outdated: although a significant proportion (44%) accept that AI can help reduce human errors, just 16% believe AI tools give accurate information. More than one-third (38%) also fear that AI systems use out-of-date information.
"Artificial intelligence can undoubtedly benefit consumers, but there is clearly still work to do to win their trust and overcome cynicism. The AI market risks being seen as the 'wild west' so, as policymakers define their strategies to address the risks of AI, they must ensure they are creating a system that delivers certainty and confidence now, while being flexible enough to promote and account for future innovations," says Alexander Amato-Cravero, Regional Head of Herbert Smith Freehills' Emerging Technology Group.
Based on the findings and ahead of the UK hosting the first major global summit on AI safety, Herbert Smith Freehills' Emerging Tech Academy has identified three steps which, taken together, can foster an environment in which consumer and business confidence in AI will improve. These are:
• Accelerating the development and implementation of legally binding AI rules: the sooner policymakers can plug the gaps in the current patchwork of rules that apply to AI with laws, regulations, guidance, and principles that are fit for purpose and have the force of law, the sooner consumers and businesses will be comfortable engaging with AI systems.
• Increasing alignment among domestic and global policymakers on AI: the risks associated with AI are overseen by multiple regulators and authorities. A harmonised approach is needed to address gaps in the existing collection of laws and regulations. With consumers engaging with businesses around the world, this discourse must go beyond domestic policy and address global alignment and interoperability as well.
• Improving dialogue and better educating consumers and markets on AI risks: despite excitement about the possibilities of AI systems now and in the future, consumers' fear and distrust can be minimised only through balanced dialogue about both the benefits and the risks.
Amato-Cravero concludes, "The key to long-term success is dialogue rather than fanfare. It's easy to get caught up in the hype, but building confidence in AI requires cutting through the noise with sharp focus on the opportunities and risks. At the same time, policymakers must deliver certainty to consumers and businesses by clarifying the patchwork of existing laws and regulations."
The research was conducted during May and June 2023 and is based on 1000 respondents. Respondents include individuals in full-time employment and education, drawn from 11 UK regions.
Additional statistics
• Only 34% of respondents think AI is reliable.
• Those aged 55+ are less likely (21%) than those aged 35 or under (34%) to say AI helps them make better decisions.
• Fewer women (48%) than men (55%) are comfortable with the idea of companies using AI to diagnose health problems.
• More people (53%) are uncomfortable with the idea of AI being used to settle legal disputes than those who think it is a good idea (29%).
© Technews Publishing (Pty) Ltd. | All Rights Reserved.