- Almost three quarters (73%) of people in the UK have had no AI training or education,
- 72% of the UK public are unsure if online content can be trusted and only 42% are willing to trust AI more broadly,
- Almost two-thirds of UK workers report using AI at work, with at least 53% noting benefits such as increased efficiency, quality of work or innovation. However, over a third of workers using AI admit to using it in inappropriate ways.
New research from KPMG and the University of Melbourne has revealed that almost three quarters (73 per cent) of people in the UK have had no AI education or training, yet almost half (48 per cent) of the UK public still believe they can use AI tools effectively.
Trust, Attitudes and Use of Artificial Intelligence: A Global Study 2025, which gathered the views of 48,000 people, including more than 1,000 in the UK, found that of the 47 countries surveyed, the UK sits in the bottom third for AI literacy and training, based on how many people in each country have had formal or informal training in AI or related fields. Despite these lower rates, the study found the UK's results are in line with those of other G7 countries such as the USA, Canada and France.
Trust in AI
The study revealed that less than half (42 per cent) of the UK population are willing to trust AI. In addition to this, only 57 per cent of people are willing to accept or approve of the use of AI.
Concern about AI risks and AI-generated misinformation were common amongst the UK public:
- 72 per cent said they are unsure whether online content can be trusted, as it may be AI-generated,
- 78 per cent were concerned about negative outcomes from AI, such as loss of human interaction and connection, an outcome 55 per cent of respondents say they have already observed or experienced,
- 40 per cent believe the risks of AI outweigh the benefits.
With widespread concern about some of the risks of the technology, 80 per cent of people in the UK believe AI regulation is required and nine in 10 (91 per cent) want laws and action to combat AI-generated misinformation specifically.
Despite this, there were some more positive findings, with 71 per cent still expecting AI to deliver a range of benefits, such as reducing time spent on mundane or repetitive tasks.
Commenting on the findings, Leanne Allen, Head of AI at KPMG UK, said:
“The UK faces a complex AI trust problem as the technology becomes a greater part of our everyday lives. The UK's AI literacy and training rate may not be the sole factor contributing to its distrust of the technology. At the heart of the issue is likely the rapid pace of technological change, driven by a race to be "first" in AI advancements. This is contrasted by the slower development of regulatory guidelines and controls needed to mitigate the growing risks. It is understandable that it’s challenging to trust something that is evolving so quickly without adequate regulatory frameworks in place.
“To build confidence and trust in AI, we shouldn’t only be thinking about how to familiarise and train people to use AI but also about how we ensure the AI technologies we are using are ‘trusted by design’. If people know the technologies have been built in a responsible way and have controls and assurances built in, then they may be more willing to trust and use them.”
AI in the workplace
When it came to AI in the workplace in the UK, almost two-thirds (65 per cent) of workers said they intentionally use AI at work. Some 39 per cent felt they could not complete their work without the help of AI, and 44 per cent were concerned about being left behind if they don't use AI at work. Of those who reported using AI at work, at least 53 per cent noted benefits such as increased efficiency, quality of work or innovation.
While the majority of workers report using AI, complacency has already formed around the technology. More than half of workers using it (54 per cent) say they have made mistakes in their work due to AI, and 58 per cent report relying on AI output at work without evaluating its accuracy. In addition, more than a third of workers (38 per cent) admitted to using AI at work in inappropriate ways, such as uploading copyrighted information.
When workers were asked about their organisation's policies around AI, more than half (59 per cent) of those who said their organisation uses AI also said training was provided in the responsible use of AI. The same proportion reported that their organisations have policies and practices governing responsible use of the technology.
Allen added:
“For organisations already integrating AI, it's crucial to assess and manage associated risks while providing robust training on AI ethics. They need a long-term strategy that breaks ingrained habits and adopts new ways of working collaboratively with AI. Meanwhile, businesses considering AI adoption should look at embedding controls and assurances to manage risks effectively.
“Regardless of where an organisation is on its AI journey, monitoring controls and cultivating a culture that understands and encourages responsible AI use is key to maximising benefits and minimising risks.”