The Basel Committee defines operational risk as the "risk of loss resulting from inadequate or failed internal processes, people and systems or from external events".
This definition includes human error, fraud and malice, failures of information systems, problems related to personnel management, commercial disputes, accidents, fires, floods... In other words, its scope seems so wide that its practical application is not immediately apparent.
Moreover, the concept of operational risk appears at first glance not very innovative, since banks did not wait for the Basel Committee to organise their activities in the form of procedures, or to develop internal audit departments to verify the correct application of these procedures. However, spectacular failures, like that of Barings, have drawn regulators' attention to the need to provide banks with prevention and coverage mechanisms against operational risks (through the allocation of dedicated capital).
The implementation advocated by an increasing number of studies on this subject is to consider as an actual operational risk:
Proactive management of operational risk, in addition to allowing compliance with the requirements of the Basel Committee, necessarily leads to improved production conditions: streamlining of processes which results in increased productivity, improved quality leading to a better brand image... In particular, such an approach allows the development of quantitative tools which define measurable objectives for operational teams in terms of reduction of operational risks.
Furthermore, the increasing complexity and sophistication of operations, the increased volumes and the real-time capabilities mean that "failure is not an option", since the cost of an error can quickly amount to hundreds of thousands or even millions of euros. The general environment favours greater awareness of operational risk, which, like credit risk and market risk, becomes an intrinsic component of banking activities.
The development of a method for monitoring operational risk, however, faces many internal obstacles, whether psychological or organisational:
However, the subject is gaining acceptance, and the methodological body of work is gradually growing and taking shape.
The first step in the process of monitoring operational risk is to establish a risk map. This map is based on an analysis of business processes, crossed with the typology of operational risks.
A business process is a set of coordinated tasks, which aim at providing a product or service to customers. The definition of business processes primarily corresponds to a business-oriented analysis of the activity of the bank, and not to an organisational analysis.
Determining the business processes thus starts with the identification of the different products and services, then the actors (who may belong to different entities within the organisation) and the tasks involved in providing these products.
Then, for each step of the process, we identify the incidents likely to disrupt its course and prevent it from achieving its objectives (in terms of concrete results, or in terms of time). For each event, the risk is assessed in terms of its probability of occurrence and its financial impact (severity).
Each identified risk event must be assigned to a risk category (making future data analysis easier and faster) and, in organisational terms, to the business line where the incident would occur. The Basel Committee has defined standard lists for both (see below).
The classification of risks must match the high-level view desired by management; it must allow synthetic analyses that cut across all activities, and as such it should be established by a central risk management department.
On the other hand, in order to be realistic and useful, the analysis of business processes and of incurred risks must be entrusted to relevant operational staff. They will use a rigorous framework, identical for all, but which allows them to describe their activities.
Finally, the map would not be complete without the identification of key risk indicators: quantifiable elements that may increase the likelihood of a risk occurring, such as the number of transactions processed or the absenteeism rate. This concept is at the core of the so-called "scorecard method" (see below).
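The structure described above can be sketched as a small data model. This is a minimal, illustrative sketch: the class names, fields and the example entry are hypothetical, not a standard schema.

```python
from dataclasses import dataclass, field

# Hypothetical risk-map entry: a process step carries the risk events
# likely to disrupt it, each tagged with a Basel-style category, a
# business line, and the key risk indicators that drive its likelihood.
@dataclass
class RiskEvent:
    description: str                 # incident likely to disrupt the step
    category: str                    # risk category (standard list)
    business_line: str               # where the incident would occur
    key_risk_indicators: list = field(default_factory=list)

@dataclass
class ProcessStep:
    name: str
    risks: list = field(default_factory=list)

# Illustrative entry for one step of a hypothetical settlement process.
step = ProcessStep(
    name="Trade confirmation",
    risks=[RiskEvent(
        description="Confirmation not matched before cut-off",
        category="Execution, Delivery & Process Management",
        business_line="Trading & Sales",
        key_risk_indicators=["number of transactions processed",
                             "absenteeism rate"],
    )],
)
print(step.risks[0].category)
```

Keeping category, business line and indicators on each event is what later makes the transverse, synthetic analyses possible.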
The initial identification of risks results in a "theoretical" map of activities; only experience makes it possible, first, to validate this description and, second, to identify sensitive areas of activity so that appropriate controls can be put in place. It is then time to collect observed incidents in a historical database, which makes it possible to evaluate the actual losses caused by operational risks (loss data).
Data collection usually takes place in a declarative mode. Operational staff fill out standardised forms, which are later captured in a database, or they enter the data directly into the application. For incidents such as computer breakdowns, it is possible to consider automatic or semi-automatic data collection (an automatically created "failure report" is later completed manually with the incurred loss amounts).
Such databases, fed over several consecutive years, become a valuable source of information for the management of operational risks. These data bring out an objective, quantified view of incurred risks, assuming of course that they have been collected reliably and realistically.
The collection of loss events relies on the previously established map to register and reference incidents. It also allows the map itself to be refined retroactively.
Similar databases also exist from external sources. These data usefully complement internally collected data, since historical databases by definition only register incidents that have already occurred in the bank. In order to obtain a more realistic measurement, a sampling of data obtained from other institutions is added. These data, however, require analysis and adjustment to the specific situation of the bank.
The statistical analysis of recorded loss data makes it possible to build a distribution of loss events, ranging from frequent events with limited financial impact to extremely rare events with catastrophic consequences. This distribution of risks can then be used for all kinds of sophisticated computations (see below).
The need to measure operational risk comes from the recommendations of the Basel committee, which require banks to allocate an adequate amount of capital to cover their operational risk.
In theory, this amount of capital should correspond to the maximum loss incurred due to operational risk in the bank, with a high probability (99.9%), over a given time frame (for instance, one year). It is therefore essentially a "Value at Risk" (VAR). The question is how to compute this VAR.
We focus here on the "independent" measurement methods: those that are not derived from a decision of the regulator, or more precisely those that fall in the category of "advanced methods" of the Basel committee.
Broadly, evaluation methods fall into three major families, which are not necessarily mutually exclusive, as we will see below: statistical methods, scenario-based approaches and scorecard approaches.
The most typical example of statistical methods is the "Loss Distribution Approach" (LDA). It relies on a database of loss events collected within the bank, enhanced with data from external sources.
The first step of the approach is to build, for each business line and each type of loss event, two probability distributions: one representing the frequency of loss events over a time interval (the loss frequency distribution), the other the severity of those same events (the loss severity distribution). To do so, we sort loss events by frequency on one hand and by cost on the other, and represent the results graphically as histograms.
For each of the resulting distributions, we look for the mathematical model that best fits the shape of the curve. To validate the choice of model, we compare the result (frequency or loss) predicted by the model with the curve built from real data: if the two curves overlap, the model is considered reliable.
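As a rough sketch of this fitting step, the following assumes a Poisson model for frequency and a lognormal model for severity, two common choices in the LDA literature; the loss figures are entirely illustrative.

```python
import math
import statistics

# Hypothetical internal loss data for one business line / event type.
annual_counts = [12, 9, 15, 11, 13]                    # loss events per year
loss_amounts = [5_000, 12_000, 3_500, 80_000, 7_200,
                15_000, 2_100, 40_000, 6_300, 9_800]   # EUR per event

# Frequency: Poisson model, whose single parameter (lambda) has the mean
# annual count as its maximum-likelihood estimate.
lam = statistics.mean(annual_counts)

# Severity: lognormal model, fitted by matching the mean and standard
# deviation of the log-losses.
logs = [math.log(x) for x in loss_amounts]
mu, sigma = statistics.mean(logs), statistics.stdev(logs)

print(f"Poisson lambda = {lam:.1f} events/year")
print(f"Lognormal mu = {mu:.2f}, sigma = {sigma:.2f}")
```

In practice the fitted model would then be validated against the empirical histogram, as described above, before being used any further.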
Both distributions are then combined, using a Monte-Carlo simulation, to obtain, for each business line and each type of event, an aggregated loss distribution for a given time horizon. For each of these, the Value at Risk (VAR) is the maximum loss incurred with a probability of 99.9%.
The required capital in the Basel II framework is then the sum of the calculated VARs.
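The Monte-Carlo aggregation can be sketched as follows for a single business line / event type cell; the Poisson and lognormal parameters are the hypothetical ones from the fitting step, and the figures are illustrative only.

```python
import math
import random

random.seed(42)

def simulate_annual_loss(lam, mu, sigma):
    """One Monte-Carlo trial: draw a Poisson number of loss events for
    the year, then a lognormal severity for each, and sum them."""
    # Poisson draw via Knuth's algorithm (the stdlib has no poissonvariate).
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            break
        k += 1
    return sum(random.lognormvariate(mu, sigma) for _ in range(k))

# Hypothetical parameters for one business line / event type.
lam, mu, sigma = 12.0, 9.0, 1.2
trials = sorted(simulate_annual_loss(lam, mu, sigma) for _ in range(100_000))

# VAR at 99.9%: the annual loss exceeded in only 0.1% of simulated years.
var_999 = trials[int(0.999 * len(trials))]
print(f"99.9% one-year VAR: EUR {var_999:,.0f}")
```

Repeating this for every business line / event type cell and summing the resulting VARs gives the capital figure described above.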
Scenario analysis involves systematic surveys with experts of each business line and risk management experts. The goal is to obtain from these experts an evaluation of the probability and cost of operational incidents, as identified in the analytical framework proposed by the Basel committee.
The elaboration of the scenarios combines the whole set of key risk indicators of a given activity. Simulations are then performed with varying risk indicators.
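A minimal sketch of such a simulation, assuming (hypothetically) that expert-estimated frequency scales with a transaction-volume indicator; all figures and the scaling rule are illustrative.

```python
def scenario_loss(base_frequency, cost_per_event, volume_factor):
    """Expected annual loss when the volume indicator scales the
    expert-estimated event frequency linearly."""
    return base_frequency * volume_factor * cost_per_event

base_frequency = 2.0      # expert estimate: events per year at current volume
cost_per_event = 250_000  # expert estimate: EUR per event

# Vary the key risk indicator: -20%, baseline, +50% transaction volume.
for factor in (0.8, 1.0, 1.5):
    loss = scenario_loss(base_frequency, cost_per_event, factor)
    print(f"volume x{factor}: expected annual loss EUR {loss:,.0f}")
```

Real scenario models are considerably richer (several indicators, correlated events, non-linear effects), but the principle of perturbing the key risk indicators is the same.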
This approach is a valuable complement when historical data are insufficient for a purely statistical method. In particular, it is especially useful for assessing the impact of severe risk events, or of simultaneous events. Indeed, the statistical approach described above has the drawback of treating operational incidents as completely uncorrelated, and does not take possible cumulative effects into account.
In spite of its name, the scenario analysis is not only "qualitative". It can also support mathematical models and the body of theory on the subject is quite important (see for example http://www.gloriamundi.org).
Statistical methods are somewhat biased, even dangerous, in that they can build calculations (sometimes extremely sophisticated ones) on few, scattered sample data and on a number of subjective assumptions. We are a long way from the objectivity of the computations made for market risk, or even credit risk, where the underlying data are much less open to challenge. The sophistication of the calculations gives an impression of reliability that may not always withstand a thorough examination of the underlying data!
Moreover, these methods, which rely exclusively on historical data, do not make it possible to anticipate changes in the bank's risk profile due to internal developments (new organisations, new activities) or external ones (changes in markets or competitors, emergence of new fraud techniques). They base their estimates on events that have already happened, not on events that might happen, among which are the most dreaded: those that occur rarely but with serious consequences.
In that respect, the scorecard method provides an interesting alternative, since it does not rely on actual registered loss data, but on risk indicators, which thereby support a "before the fact" vision of operational risks.
This method consists of building an assessment grid for each category of risk, made up of quantitative indicators (turnover, number of operations...) and qualitative indicators (for instance, an estimate of how fast an activity is changing). These questionnaires are designed by expert teams combining risk specialists and operational staff from each business line. They gather the criteria that govern both the probability and the potential impact of a risk.
Once the questionnaires have been designed, a first evaluation of the capital required to cover operational risk for the whole bank is made, and this is the surprising aspect of the method: to perform this evaluation, there is no other way than to use a statistical method! This first evaluation should normally be slightly overestimated, because the scorecards are afterwards only used to adjust the global amount of allocated capital.
The amount of capital is then allocated to each risk category by evaluating for each business line the relative importance of each category.
Finally, the questionnaires are distributed to the business lines and filled out. Since Basel II defines 13 risk categories, questionnaires contain at least 20 questions, and dozens of departments may be involved in a large financial institution, this results in a considerable amount of data to process.
As a result of the examination of this data, it is possible to establish a "score" for each business line in each category of operational risk, and thus allocate it its due proportion of regulatory capital.
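One simple way to turn scores into a capital allocation is proportional weighting, sketched below; the business lines, categories, scores and the allocation rule are all hypothetical, chosen only to illustrate the mechanism.

```python
# Hypothetical scorecard results: each business line is scored per risk
# category; higher scores mean higher residual risk.
scores = {
    "Retail Banking":  {"Internal Fraud": 3, "External Fraud": 7,
                        "System Failures": 4},
    "Trading & Sales": {"Internal Fraud": 6, "External Fraud": 2,
                        "System Failures": 8},
}

total_capital = 100_000_000  # initial, statistically derived envelope (EUR)

# Allocate capital to each business line in proportion to its share of
# the total score across all categories.
line_totals = {line: sum(s.values()) for line, s in scores.items()}
grand_total = sum(line_totals.values())
allocation = {line: total_capital * t / grand_total
              for line, t in line_totals.items()}

for line, cap in allocation.items():
    print(f"{line}: EUR {cap:,.0f}")
```

This sketch redistributes a fixed envelope; in the full method, as noted below, re-scoring over time can also raise or lower the global amount itself.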
Repeating this process makes it possible to adjust, over time, the amount of capital allocated to each business line. Since each evaluation is made independently of the other business lines, it is not a zero-sum game: the global amount of regulatory capital may increase or decrease depending on the scores.
The scorecard approach provides a detailed picture of the risk profile of the financial institution. It also involves operational staff in risk management, and therefore constitutes a strong incentive to reduce these risks.
The control and, if possible, the mitigation of operational risk bring us back to the risk map. We must first determine an acceptable level of risk, then identify the actions required to bring the "inherent" risk (the risk existing before any preventive measures) back to this level.
The implementation of control measures and action plans then results from a compromise between enforcement cost and obtained risk level.
The framework of risk management must evolve along with the bank activities: each project ("business" project or software project) should therefore include a risk aspect in order to:
True operational risk management should therefore be an iterative process.
One of the main innovations of the Basel II agreement compared to Basel I has been not only to require allocation of capital to cover operational risk but also to advocate for an operational risk management system.
Basel II offers banks three capital calculation methods of increasing complexity. The method chosen must be consistent within a banking group.
The choice of an advanced method initially requires a more substantial investment, but also allows capital requirements to be reduced.
Besides, the Basel committee took particular care to define a standard classification of business lines and operational risks.
Information systems occupy a central position in today's markets, and therefore are at the heart of concerns whenever operational risk control is being implemented. Any IT project should therefore consider operational risk aspects.
Furthermore we note the development of information systems dedicated to operational risk management. The available tools to monitor operational risk either incorporate the qualitative approach (risk map) or the quantitative approach (database of incidents and statistical analysis of historical data), preferably both. They generally include the following functions: