
13 October 2018

Risk Management and Heisenberg’s Uncertainty Principle

Risk management reminds me of Heisenberg’s Uncertainty Principle, which asserts a fundamental limit to the precision with which certain properties of quantum particles can be determined. Where the Uncertainty Principle is about not being able to accurately measure just two properties of a particle, risk management deals with a whole set of them. While there is universal agreement on risk being quantified as impact and likelihood, quantifying impact and likelihood to any level of accuracy depends on the discipline, context, model and taxonomy. The focus on risk management continues to increase as information security and cybersecurity standards and regulations prescribe risk assessments to determine security posture and controls.
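To make the "impact and likelihood" point concrete, here is a minimal sketch of how such a quantification often looks in practice. The 1-5 ordinal scales and the rating bands are illustrative assumptions, not taken from any particular standard.

```python
# Minimal illustration of risk quantified as impact x likelihood,
# assuming hypothetical 1-5 ordinal scales for both dimensions.
IMPACT_SCALE = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "severe": 5}
LIKELIHOOD_SCALE = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost_certain": 5}

def risk_score(impact: str, likelihood: str) -> int:
    """Return a simple ordinal risk score; real frameworks calibrate these scales per context."""
    return IMPACT_SCALE[impact] * LIKELIHOOD_SCALE[likelihood]

def risk_rating(score: int) -> str:
    """Map the score onto coarse rating bands; the band boundaries are an assumption."""
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

print(risk_rating(risk_score("major", "possible")))  # -> "medium" (4 * 3 = 12)
```

Even this toy version shows why precision is elusive: change the scales or the band boundaries and the same risk lands in a different bucket.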

Developing a Risk Management Framework (RMF) is less a science and more an exercise in consensus: one takes the basic principles most people agree on and nudges the boundaries to the point where the framework remains acceptable yet comprehensive. A lot of discussion is needed to work out the syndication elements, risk appetite and taxonomy, and to minimize the gap between your risk view and mine. In large organizations the problems are amplified, as different stakeholders view the same risks differently. A high cybersecurity risk on an IT system may be insignificant from a value perspective at the enterprise level, yet that same system may be the vector for a large data breach, so it is important that the RMF can syndicate and bubble up such risks appropriately.
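As a sketch of what "bubbling up" can mean, the snippet below escalates system-level risks into an enterprise view using a simple rule. The record fields and the escalation logic are illustrative assumptions; a real RMF would encode its own syndication criteria.

```python
# A sketch of "bubbling up" system-level risks into an enterprise register.
# The fields and the escalation rule are illustrative assumptions, not a standard.
from dataclasses import dataclass

@dataclass
class SystemRisk:
    system: str
    title: str
    rating: str          # "low" | "medium" | "high"
    asset_value: str     # standalone business value of the system
    breach_vector: bool  # could this system be an entry point for a large data breach?

def syndicate(risks: list) -> list:
    """Escalate risks that are high-rated, or that sit on a low-value system
    which could still act as a breach vector for the wider enterprise."""
    return [r for r in risks if r.rating == "high" or r.breach_vector]

risks = [
    SystemRisk("HR portal", "Unpatched web server", "high", "low", True),
    SystemRisk("Intranet wiki", "Weak password policy", "medium", "low", False),
]
for r in syndicate(risks):
    print(f"Escalate to enterprise register: {r.system} - {r.title}")
```

The point is that escalation cannot rest on asset value alone; the rule has to carry the breach-vector view up the chain as well.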

While most organizations are able to knock out the science using risk management standards such as NIST SP 800-30/800-37, ISO 27005 and COSO, the ability to deploy a risk management workflow effectively and efficiently eludes many. The problem is usually some combination of missing discipline and missing automation. In larger organizations, the automation has to go beyond simple workflow automation and include automated data integrations and an optimized risk methodology to achieve a “low-touch” factory environment.
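One way to picture "low-touch" data integration is a risk record that is pre-populated from an automated feed but still allows analyst overrides. The feed format, field names and defaults below are assumptions made purely for illustration.

```python
# Sketch of a "low-touch" integration: risk records are pre-populated from an automated
# feed, with a manual override path for cases where the source data is incomplete.
from typing import Optional

def build_risk_record(feed_item: dict, overrides: Optional[dict] = None) -> dict:
    record = {
        "asset": feed_item.get("asset_id", "unknown"),
        "finding": feed_item.get("title", "untitled"),
        "likelihood": feed_item.get("likelihood", "possible"),  # default if the feed is silent
        "impact": feed_item.get("impact", "moderate"),
        "source": "automated-feed",
    }
    if overrides:
        record.update(overrides)              # analyst judgment wins over the feed
        record["source"] = "manual-override"
    return record

print(build_risk_record({"asset_id": "crm-db", "title": "Missing encryption at rest"},
                        overrides={"impact": "major"}))
```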

Our recommendation is simple and has five steps: i) base the risk management framework on the most relevant standard; ii) conduct design workshops early with all stakeholders to build consensus on the methodology as well as the workflow, syndication and taxonomy; iii) design an automation model that identifies data sources, has a common workflow, and is both “low touch” and aligned with the real world; iv) address how legacy data is managed; and v) train and enable users.

Simple may not mean easy: the workshops and consensus building can be stressful and long drawn out. Real-world-aligned workflows require an agile-style development life cycle, as users have to “see” and “adjust”. Many times, data integrations have to be built with workarounds and overrides, as data sources may not be ready with clean and accurate data. The same applies to stakeholders, who may not be clearly identified. Risk methodology optimizations such as control rationalization buckets, a pre-defined (but editable) threat and vulnerability applicability matrix, and risk buckets (for syndication) are required for large-scale, factory-style automation of a risk management framework.
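To illustrate the last point, here is a small sketch of a pre-defined (but editable) applicability matrix feeding risk buckets used for syndication. The asset types, threat categories and bucket mappings are illustrative assumptions; a real deployment would derive them from its own taxonomy.

```python
# Pre-defined (but editable) threat/vulnerability applicability matrix and risk buckets.
# Categories and mappings below are illustrative assumptions, not a standard taxonomy.
APPLICABILITY = {
    "internet-facing web app": ["injection", "credential stuffing", "DDoS"],
    "internal database":       ["privilege misuse", "unpatched software"],
}

RISK_BUCKETS = {
    "injection": "data breach",
    "credential stuffing": "data breach",
    "DDoS": "service disruption",
    "privilege misuse": "insider threat",
    "unpatched software": "data breach",
}

def applicable_buckets(asset_type: str) -> set:
    """Return the syndication buckets relevant to an asset type via the applicability matrix."""
    return {RISK_BUCKETS[threat] for threat in APPLICABILITY.get(asset_type, [])}

print(applicable_buckets("internet-facing web app"))  # e.g. {'data breach', 'service disruption'}
```

Pre-seeding assessments this way keeps the factory moving, while leaving the matrix editable preserves the consensus that the workshops worked so hard to build.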