It's been more than a year since generative artificial intelligence (AI) burst onto the scene, and its capabilities continue to evolve at the speed of light, making headlines daily. The rapidly expanding generative AI landscape presents organizations with unprecedented opportunities for transformative growth, provided these advancements are adequately understood, managed, and governed. While generative AI offers immense potential as a catalyst for innovation and efficiency, it also introduces inherent risks and complexities. From staff unpreparedness to data privacy concerns and operational disruptions, organizations face a wide variety of challenges associated with generative AI adoption. Alongside these internal challenges, new AI regulations are emerging, most notably the European Union AI Act, a pivotal regulatory development in the field.

As organizations navigate this rapidly evolving generative AI landscape, they must contend not only with the introduction of regulations but also with the need for their own governance and internal standards. This duality underscores the critical importance of establishing robust AI governance frameworks to ensure ethical, transparent, and accountable AI deployment within the organization. With the right governance mechanisms in place, organizations can use AI to its full potential to create positive outcomes and competitive advantage, while effectively mitigating risks.

In this new, in-depth article, we explore the significant role of risk management and internal audit functions in building trust throughout your organization’s AI transformation journey, emphasizing the need for sound and effective governance frameworks and standards to adequately navigate the complexities of AI. AI governance can be defined as the set of monitoring mechanisms, policies, procedures, and controls implemented to oversee and regulate the development, deployment, and usage of AI technologies. Its significance lies in its ability to provide a structured framework for organizations to navigate the complexities of AI adoption while mitigating risks and ensuring compliance with regulatory requirements.

Risk Management plays a crucial role in overseeing AI initiatives as part of the overall business strategy. Positioned to help protect the organization's value, Risk Management is instrumental in ensuring that AI deployments and potential in-house developments are consistent with ethical and legal principles, while promoting accountability and transparency to all stakeholders. Its main responsibilities are identifying and assessing risks, contributing to adherence to ethical and legal principles, supporting accountability and transparency, collaborating with stakeholders, and promoting an organization-wide understanding of AI.

Building on the foundation laid by Risk Management, Internal Audit serves as the next layer in ensuring robust governance and risk management practices throughout the organization's AI journey. Internal Audit provides independent and objective assurance over the effectiveness of the organization's AI governance processes and controls. Its main responsibilities at this stage are providing assurance over AI governance frameworks, acting as an advisor throughout the AI journey, assessing the effectiveness of AI governance controls, and performing continuous auditing.

The article presents KPMG's AI Governance Framework, which addresses the relevant domains covering key governance, risk management, and compliance issues to contribute to the ethical and responsible use of AI technologies. KPMG's AI Governance Framework covers nine key domains for effective AI governance within organizations: Governance & Oversight, Vision & Strategy, Policies & Procedures, AI Landscape & Inventory, AI Awareness & Literacy, AI Competences & Training, Data Management, Security & Privacy, Evaluation & Monitoring, and Documentation & Record-Keeping. For each domain, we define the role of Risk Management, which centers on embedding risk management principles into the AI strategy, while Internal Audit is responsible for providing assurance over the AI governance mechanisms, policies, procedures, and controls.

Effective governance and controls around AI deployment, development, and usage are not only desirable but imperative for organizations seeking to succeed in their AI transformation journey. By implementing adequate AI governance principles and leveraging the expertise of Risk Management and Internal Audit professionals, organizations can navigate the complexities of AI implementation with confidence. As organizations - and their Risk Management and Internal Audit functions - embark on their AI transformation journeys, KPMG stands ready to be a trusted partner in the quest for responsible AI adoption. Read on to discover our latest insights and proposed approach to generative AI governance.