The earth is shaking, and a crack opens near you. It begins growing in your direction, and you start running, constantly glancing backwards to make sure you’re safe. But ahead of you, two new cracks are spreading from different directions, and you don’t see them until it’s too late. Had you kept your eyes forward, with only the occasional backward glance, you could have tracked the cracks’ trajectories and picked out solid ground to run to.
This analogy illustrates how operational risk management frameworks traditionally work. They rely heavily on historical events, which means the framework does not facilitate horizon scanning: backward-looking frameworks do not analyse data to predict behaviours and anticipate events that have yet to occur.
This is shaky ground. Building a framework without a well-constructed underlying data architecture and foundation fails to prepare you for transformation within your industry, which can present even greater threats. To construct a solid framework that supports you for the long term, you need to ensure your data models provide forward-looking, predictive analytics that support pre-emptive risk management.
Data models allow businesses to pull together disparate data points and better understand them in order to assess patterns. These patterns surface threats that the business can identify as potential risks to be mitigated.
The sheer volume and availability of data positions operational risk functions to identify, measure and manage the variety of operational threats present within your business. The key challenge is to ensure your data architecture and models support forward-looking risk management and the assessment of potential exposure.
To enable a more predictive framework, data models should be designed to facilitate comprehensive linkage of data sources. They should not only draw upon traditional framework elements capturing backward-looking data, but also scan for patterns that can better predict what the future may hold.
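As a minimal sketch of what such linkage could look like in practice (the class names, fields and threshold below are illustrative assumptions, not a prescribed design), framework elements can share common keys, here a process identifier, so that loss events and controls can be queried together rather than in isolation:

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative entities only; names and fields are assumptions.
@dataclass
class Control:
    control_id: str
    process_id: str  # shared key linking the control to a business process

@dataclass
class LossEvent:
    event_id: str
    process_id: str  # same key, so losses trace back to processes and controls
    occurred_on: date
    amount: float
    failed_control_ids: list[str] = field(default_factory=list)

def controls_with_recurring_losses(events: list[LossEvent],
                                   min_events: int = 2) -> set[str]:
    """Flag controls implicated in repeated loss events - a backward-looking
    pattern that can seed forward-looking review of the control environment."""
    counts: dict[str, int] = {}
    for event in events:
        for cid in event.failed_control_ids:
            counts[cid] = counts.get(cid, 0) + 1
    return {cid for cid, n in counts.items() if n >= min_events}
```

Because every record carries the same keys, a pattern found in one element (repeated losses) immediately points to related records in another (the controls that failed).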
The harm of framework silos
When building an operational risk framework, businesses often neglect the underlying data model that drives understanding of risk. The framework traditionally consists of a number of elements that may include:
- risk and control self-assessments (RCSA);
- internal loss and external loss data;
- scenario analysis;
- key risk indicators;
- control testing, governance and reporting;
- issue management; and
- policy framework.
Businesses typically neglect proactive scanning for emergent risks because they implement these elements in isolation, without the ability to create linkages such as:
- Collecting loss data in isolation, and not combining it with other framework data to derive patterns and analytical trends that challenge and predict potential future control failings (see the sketch after this list); or
- Running scenario analysis without challenging the control environment by linking historical data points across the framework to predict future outcomes.
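To make the first of these linkages concrete, here is a minimal sketch assuming two hypothetical datasets, loss events and control test results, that share a control_id key; the tables, column names and thresholds are illustrative assumptions:

```python
import pandas as pd

# Hypothetical tables; column names are assumptions for illustration.
loss_events = pd.DataFrame({
    "control_id": ["C1", "C1", "C2", "C3", "C1"],
    "loss_amount": [120_000, 45_000, 300_000, 15_000, 80_000],
})
control_tests = pd.DataFrame({
    "control_id": ["C1", "C2", "C3", "C4"],
    "tests_failed": [3, 1, 0, 0],
    "tests_run": [12, 10, 8, 9],
})

# Link the two framework elements on the shared control_id key.
summary = (
    loss_events.groupby("control_id")
    .agg(event_count=("loss_amount", "size"), total_loss=("loss_amount", "sum"))
    .reset_index()
    .merge(control_tests, on="control_id", how="right")
    .fillna({"event_count": 0, "total_loss": 0})
)
summary["fail_rate"] = summary["tests_failed"] / summary["tests_run"]

# A naive early-warning flag: controls with both repeated losses and a
# high test-failure rate are candidates for future failure.
summary["watchlist"] = (summary["event_count"] >= 2) & (summary["fail_rate"] > 0.2)
print(summary)
```

Kept in silos, neither table predicts anything; joined, they produce a simple watchlist of controls whose history suggests future failure.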
The way siloed data model elements harm risk forecasting can be seen across industries. One example is investment banking, where internal fraud events have occurred. In September 2019, a trader disguised crude oil derivatives transactions for a major corporation’s subsidiary as hedges and manipulated the risk management system so that the trades appeared to be associated with customer accounts. The event cost the company US$320 million, prompting it to dismiss the trader and alert the police. This type of financially harmful employee misconduct is not an isolated occurrence: in separate incidents, a French bank lost US$7.2 billion in 2008 and a Swiss bank lost around US$2.3 billion in 2011 in rogue-trading scandals partially caused by structural weaknesses in their risk management frameworks.
To combat such events, investment firms are setting up surveillance teams to watch for suspicious behaviours. However, these teams lack predictive tools, so they pull from multiple isolated data sources. As a result, they spend too much time consolidating and manipulating the data, and too little time analysing it and addressing the current and future exposure such events may pose.
Utilising management information techniques, artificial intelligence and sound data models allows your operational risk framework to derive meaningful patterns and behaviours in line with your unique risk challenges.
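For instance, a simple unsupervised anomaly detector can flag trades whose booking features deviate from the norm for a desk. The sketch below uses scikit-learn’s IsolationForest on synthetic data; the features and contamination rate are illustrative assumptions rather than a recommended surveillance design:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical feature matrix: one row per trade, with columns such as
# notional size, time-to-amendment and number of counterparty changes.
rng = np.random.default_rng(42)
normal_trades = rng.normal(loc=[1.0, 2.0, 0.5], scale=0.2, size=(500, 3))
suspect_trades = rng.normal(loc=[3.0, 0.1, 4.0], scale=0.2, size=(5, 3))
features = np.vstack([normal_trades, suspect_trades])

# Fit an unsupervised model; contamination is the assumed share of anomalies.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(features)   # -1 = anomaly, 1 = normal

flagged = np.where(labels == -1)[0]
print(f"{len(flagged)} trades flagged for surveillance review: {flagged}")
```

Run against a consolidated, linked dataset rather than isolated extracts, this kind of model lets a surveillance team spend its time investigating flagged behaviour instead of stitching data together.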
How to build a well-defined data model
A consolidated risk and compliance data model linked to processes and controls will improve the effectiveness of operational risk management by reducing losses, mitigating disruptions and meeting evolving regulatory requirements. The attributes of the data model should be well defined and include standardised process, risk and control taxonomies, all aligned to products, services, business divisions and legal entities.
A well-constructed data model should facilitate management information from the most granular level of detail to C-suite reporting.
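As a brief sketch of that roll-up (the taxonomy values and figures below are invented for illustration), the same granular event records, tagged against standardised division, process and risk-type taxonomies, can be aggregated at whatever level a given audience needs:

```python
import pandas as pd

# Hypothetical granular risk events, tagged against standardised taxonomies.
events = pd.DataFrame({
    "division":  ["Markets", "Markets", "Retail", "Retail", "Markets"],
    "process":   ["Trade capture", "Settlement", "Onboarding",
                  "Onboarding", "Trade capture"],
    "risk_type": ["Internal fraud", "Processing error", "Processing error",
                  "Internal fraud", "Internal fraud"],
    "loss":      [320_000, 12_000, 5_000, 90_000, 40_000],
})

# Granular view: every event retains its full taxonomy tagging.
by_process = events.groupby(["division", "process", "risk_type"])["loss"].agg(["count", "sum"])

# C-suite view: the same data rolled up one taxonomy level.
by_division = events.groupby(["division", "risk_type"])["loss"].sum().unstack(fill_value=0)

print(by_process)
print(by_division)
```

The granular view supports day-to-day risk management, while the divisional view is the kind of quantitative summary a board or C-suite report can draw on directly.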
The targeted outcomes of a well-defined data model are:
- Improved operational resilience, with fewer incidents, issues and risk events, particularly across key business services and critical functions.
- Quantitative board reporting that provides foresight into the probability and impact of materialising risks relative to board-stated risk appetite and impact tolerances.
- More proactive responses to the vulnerabilities faced by your business.
- Enhanced awareness of end-to-end processes and their interdependencies, whether with third parties or within the business.
To achieve these targeted outcomes, begin investing in your firm’s underlying data model, moving it closer to predictive analytics. Predictive analytics allows for more proactive risk management, foreseeing the problems that lie ahead, instead of reactive risk management that only addresses events of the past.