With generative AI (GenAI) evolving faster than the controls designed to manage it, organizations need to clearly define how they’re using it and how they’re readying their workforce for it.

Organizations are starting to experiment with generative artificial intelligence, which allows for the quick production of content such as text, images, and video with limited need for human intervention. It shows promise in summarizing meetings, drafting emails, and creating code. Many businesses in Canada and around the world see the technology as having great potential: AI, including generative AI, is seen as the most important technology to help businesses achieve their goals over the next several years, according to the latest KPMG Global Tech Report, and 63% of Canadian businesses have already adopted it. The top metrics they use to measure the success of their investment in AI and generative AI include improved return on investment and the launch of new revenue streams.

63% of Canadian businesses have adopted AI

Source: KPMG Global Tech Report 2023

Yet by feeding intellectual property into these tools, organizations could inadvertently expose confidential information and open themselves up to the risk of fraud or theft. At KPMG, our professionals tailor AI services to address the challenges organizations face in implementing AI responsibly. Take the next step and empower your organization today.

Creating proprietary generative AI tools takes time, money, and resources. As a result, most organizations are relying instead on third-party solutions such as OpenAI’s ChatGPT or Stability AI’s models. There are concerns about how such systems make decisions: 39% of Canadian respondents agree such concerns are a factor in delaying progress on this front, and the global figure is even higher (55%). But understanding the risks involved with generative AI is critical to helping guard against its misuse, and to finding ways it can lead to greater productivity and innovation. Not surprisingly, 67% of Canadian respondents in the Global Tech Report think that ethical understanding will be the most important personal attribute that technology leaders will need to have in an AI-enabled world.

In time, we may experience the ‘singularity,’ the point at which generative AI becomes smarter than humans. While it’s difficult to plan for that scenario, business leaders would do well to adopt mid- to long-term strategies that account for how generative AI will change their organizations. Short-term strategies won’t work because they will quickly become outdated.

Given these potential risks, some organizations are banning certain AI tools or quickly educating their workforce on “best practices”. Many in the industry are also calling for a halt on further development so that we can collectively reckon with the implications of AI and GPT-style models as they go mainstream at unprecedented speed. While banning generative AI tools might serve as a temporary measure as the technology evolves, a permanent ban isn’t a permanent solution; employees will always find ways around it. Here are some key considerations for adopting a longer-term strategy and readying your workforce for generative AI.

67% think that ethical understanding is the most important personal attribute technology leaders need in an AI-enabled world

Source: KPMG Global Tech Report 2023

Develop a policy—even if you’re not yet using generative AI

A well-defined policy with specific use cases and examples is a must. Organizations should be aware of how employees are using third-party generative AI solutions, and then create appropriate policies and procedures to protect sensitive data. They also need to establish trust in employees to use the technology wisely.

Understand the impact on regulatory compliance

Leaders need to fully understand the impacts of generative AI on data privacy, consumer protections, copyright infringement, and other compliance or legal requirements. That means training AI models on legally obtained data and doing so in compliance with laws such as the General Data Protection Regulation (GDPR) in the EU. Even if users are only working with internal data, you don’t want them to inadvertently expose private or proprietary information to the public—or your competitors.

Protect against security and privacy risks

An AI engine is constantly learning, so there’s a danger it could ingest confidential IP and make it available to other parties. It is important to safeguard the data used to train AI models by implementing security protocols such as access controls, encryption, and secure storage. According to KPMG in Canada’s recent Generative AI Adoption Index, amongst those who use generative AI at work, 23% have included information about their company, including its name, in prompts. Policies should guide which datasets can be fed into an AI engine to ensure they don’t violate any privacy or IP laws.

23% of those who use generative AI at work have included information about their company, including its name, in prompts.

Source: 2023 Generative AI Adoption Index
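
To make such safeguards concrete, here is a minimal Python sketch of a pre-submission redaction step. The confidential terms, patterns, and function names are hypothetical illustrations, not a prescribed implementation; a production system would draw its term list from a governed source and use dedicated PII-detection tooling rather than simple regular expressions.

```python
import re

# Hypothetical terms the organization treats as confidential; in practice
# this list would come from a governed, regularly reviewed source.
CONFIDENTIAL_TERMS = ["Acme Corp", "Project Falcon"]

# A simple pattern for email addresses; real deployments would pair this
# with dedicated PII-detection tooling rather than regexes alone.
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(prompt: str) -> str:
    """Mask confidential terms and email addresses before a prompt is
    sent to a third-party generative AI service."""
    for term in CONFIDENTIAL_TERMS:
        prompt = re.sub(re.escape(term), "[REDACTED]", prompt, flags=re.IGNORECASE)
    return EMAIL_PATTERN.sub("[EMAIL]", prompt)

print(redact("Summarize Acme Corp's Q3 plan and email jane.doe@acme.com"))
# -> Summarize [REDACTED]'s Q3 plan and email [EMAIL]
```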

Test for bias and inaccuracies

As AI engines ingest data and ‘learn,’ they can inadvertently introduce bias into the process. Identify which applications aren’t particularly vulnerable to bias (such as chatbots that route calls) and start your generative AI journey there. Designated team members should be responsible for evaluating output to help control for bias. This can involve analyzing the training data to identify potential sources of bias, testing the system on diverse populations and use cases to ensure it performs accurately and fairly for all groups, and evaluating the system’s design to identify and address any remaining sources of bias. Ultimately, testing for bias is an essential step in ensuring that AI systems are fair, equitable, and work for everyone.
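
As one hedged illustration of what that evaluation might look like in practice, the Python sketch below computes accuracy per group on a labelled test set. The records, group names, and labels are placeholder data; a real review would use a curated, representative evaluation set and a broader range of fairness metrics.

```python
from collections import defaultdict

# Placeholder evaluation records: (group, model_prediction, expected_label).
# Real testing would draw on a curated, representative evaluation set.
records = [
    ("group_a", "approve", "approve"),
    ("group_a", "deny", "approve"),
    ("group_b", "approve", "approve"),
    ("group_b", "approve", "approve"),
]

def accuracy_by_group(records):
    """Return per-group accuracy so reviewers can spot performance gaps."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, predicted, expected in records:
        total[group] += 1
        correct[group] += predicted == expected
    return {group: correct[group] / total[group] for group in total}

print(accuracy_by_group(records))
# e.g. {'group_a': 0.5, 'group_b': 1.0} -- a gap this size warrants review
```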

Upskill your workforce

Organizations also need to consider how they plan to upskill their workforce for a future enabled by generative AI. For example, virtual training environments that simulate real-world scenarios will allow learners to practise and apply their skills in a safe and controlled setting. Will an AI-generated, custom, and adaptive micro-credential certificate be more valuable than a university degree? If so, how do you adapt to this mindset? Amongst those who use generative AI at work, 81% think it will be hugely beneficial in certain industries (KPMG in Canada’s 2023 Generative AI Adoption Index).

81% of those who use generative AI at work think it will be hugely beneficial in certain industries

Source: 2023 Generative AI Adoption Index

What’s next?

It’s possible to foster a culture of experimentation while keeping your business objectives top of mind through protected sandboxes, which provide an isolated environment for testing. While sandboxing is not a new concept, it still requires a careful approach in terms of who has access and which datasets the AI engine can draw upon. Handled well, it allows users to start training AI engines with datasets that can be bounded, managed, and controlled.
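
One simple way to bound the datasets a sandboxed AI engine can draw upon is an explicit allowlist checked at load time, as in the hypothetical Python sketch below. The dataset names are illustrative; in practice, approvals would sit with a data-governance team and every access would be logged.

```python
# Hypothetical allowlist of datasets approved for the AI sandbox; in
# practice a data-governance team would own and review this list.
APPROVED_DATASETS = {"public_product_docs", "anonymized_support_tickets"}

def load_for_sandbox(dataset_name: str) -> str:
    """Refuse to load any dataset that hasn't been explicitly approved
    for use inside the isolated sandbox environment."""
    if dataset_name not in APPROVED_DATASETS:
        raise PermissionError(f"Dataset '{dataset_name}' is not approved for the sandbox")
    # Placeholder for the real load step; production code would also log access.
    return f"Loading {dataset_name} into the sandbox"

print(load_for_sandbox("public_product_docs"))
```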

Understanding the risks and ensuring protections are in place can help leaders focus on the potential benefits of generative AI, such as process improvements or enhanced customer experiences. The sweet spot for generative AI in its current iteration will be finding business opportunities with limited ethical or regulatory consequences, such as helping chatbots better route customer calls.

AI opens the door to many opportunities, but organizations must be careful to balance the pace of adoption with enterprise readiness. By setting realistic expectations, leaders can ready their workforce for generative AI and reap the benefits while mitigating risk. Find out how KPMG can help.
