Over the last 10-15 years, banks’ reliance on technology has undergone a number of changes of emphasis. Initially, technology was used to streamline and automate internal back-office processes and make them more cost-effective. Then, technology gradually began to contribute to decision-making to automate various front-line processes and to create new opportunities ranging from internet banking to algorithmic trading. Now, information technology is widely used to mediate relationships with customers and counterparties and to communicate instantly and across the globe.
The consequence of such extensive reliance on technology is that weaknesses in systems and processes have become potentially much more serious, with more profound impacts. In individual institutions, failures can damage confidence and threaten brand value. When they lead to widespread contagion, systemic disruption threatens. Reliance on technology brings its own risks, as seen most vividly when systems crash (the malfunctioning or non-functioning of a major bank’s automated teller machine (ATM) network is both a massive inconvenience to its customers and often a major news story) or generate instability (some sharp movements in stock prices have been attributed to flash trading and to automated and algorithmic trading more generally).
The dangers are magnified when increasing corporate and operational complexity means that few, if any, managers are any longer in a position to exercise judgment over the totality of business operations. So, too, is the potential for systemic errors to be introduced and go unrecognized. Technology risk has become a major component of operational risk and is a growing focus of concern for senior management and regulators alike.
There has been a significant regulatory focus on technology risk for decades. For example, the US Federal Financial Institutions Examination Council (FFIEC) was created in the 1970s to prescribe principles, standards and reporting formats for the federal examination of financial institutions, including their risk management systems and risk data infrastructures, with a strong focus on technology risk management. Basel II required that banks begin to hold capital against operational risk – which includes technology risk – as a buffer against the impact of operational failures. However, quantifying this risk has proven difficult. Most banks have relied on simpler standardized approaches rather than trying to construct models to calculate how much capital they should hold against operational risk.
Historically, IT risk has tended to be managed in the chief technology officer’s silo (and within that, often in a sub-silo such as cyber security). In recent times, the focus has been redirected to taking data risk out of its silos and integrating it into an enterprise-wide risk management framework. Operational risk (including IT risk) must truly become the ‘third leg’ of the risk stool alongside credit risk and market risk. As a result, it is now increasingly understood that IT risk is too important to be left solely to IT people. The CIO has first to be an information technologist, but the CIO also has a key role to play in informing the risk assessments of the chief risk officer. It is also important that the business line be an integral part of any technology-related project, as it is ultimately the end user.
Accordingly, regulators are increasingly examining how technology risk is being incorporated into a bank’s overall risk management framework.
Risk management is intimately dependent on issues of data: data integrity, completeness, relevance and accuracy. And even in the smallest banks, good risk management depends on the IT architecture and systems used to store and process data. But the many banks with multiple aging IT systems, or with poorly integrated systems inherited from mergers and acquisitions, find it very difficult to aggregate and report the data needed to support risk management.
The shortcomings of current practice were harshly exposed by the financial crisis. A key lesson was that large parts of the financial services industry in the US and Europe were unable to identify and aggregate risk across the financial system or to quantify its potential impact. Exposures could not easily be aggregated across trading and banking books, across geographies and across legal entities. Risk management, governance and the underlying data infrastructure were unacceptably weak. Global systemic risk was, as a result, both obscure and underestimated.
More than six years after the crisis, many of these weaknesses remain. At the end of last year, the Basel Committee published the results of a self-assessment by 30 global systemically important banks (G-SIBs) of their progress in meeting the committee’s principles for effective risk data aggregation and risk reporting. The results show the lowest reported compliance rates for data architecture and IT infrastructure, for the accuracy and integrity of data, and for banks’ ability to adapt to changing demands for data analysis and reporting. Nearly half of the banks reported material non-compliance with these principles and said that they are having to resort to extensive manual workarounds. One-third of the banks reported that they will be unable to comply fully with the principles by the 2016 deadline. A January 2014 report by the Senior Supervisors Group on data quality and management in 19 major US, Canadian and European banks reached the even more damning conclusion that:
“...firms’ progress towards consistent, timely and accurate reporting of top counterparty exposures fails to meet supervisory expectations as well as industry self-identified best practices.”
Weaknesses in systems and data management have also hampered the ability of both banks and their supervisors to run stress and scenario tests. The experience of stress-testing has revealed that systems and processes for aggregating and analyzing risk in large banks remain disturbingly inadequate. Ad hoc processes and manual intervention are still needed to produce a summary of potential risks. In turn, poor or non-existent data management infrastructure casts doubt on the reliability of the assessments that are produced. There is a long way to go before the industry can convince regulators that it has the quality of data necessary to satisfy their requirements.
Many banks appreciate the need for remedial action, but are understandably wary of the scale of the task. They face competing demands for expenditure on IT and data systems at a time when they are looking to cut costs, not least to offset the increasing costs of regulation and compliance.
Supervisors are increasingly stressing the need for improvement and, at least for systemically important banks, supervisors have already increased the intensity of their supervision in areas such as banks’ IT systems and data management. The question then becomes what actions supervisors are likely to take to drive improvement. This varies across countries, but in most countries, the supervisory toolkit will include the ability to require banks to take remedial action. And if this action is not forthcoming, then supervisors can reflect this in their overall supervisory assessment of a bank, with possible consequences for the amount of capital that the bank has to hold against its risks or for the imposition of restrictions on business expansion. In some countries, the supervisors may go further into enforcement territory, imposing fines on banks with inadequate systems and taking actions against specific individuals performing senior management functions in the bank.