Risk and the universal currency

I’ve been thinking about risk for the last five years. Everywhere I look, I see bad behaviours that get in the way of genuine risk management.

This is going to be somewhat out of left field for most followers, who enjoy whimsy and occasional software engineering. I apologise for that. Skip this one.

This theory comes from working in a large organisation that wants a consistent risk appetite but cannot possibly have one, a gap that results in rigidity, stifling innovation and masking genuine issues. It also comes from a background in engineering massive software systems for government, where we think about tolerances in terms of budgets, not as numbers devoid of context.

This is a reminder that money exists because barter systems are confusing and impossible to maintain.

In short, this is my unified theory of risk.

Let risk be a function of the likelihood of a risk materialising and the consequences if it does. This is the standard in The Orange Book.

A government department may overall consider itself ‘averse’ to risk. On its risk matrix, it says its appetite is a risk of ‘2’ and its tolerance a ‘4’.
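
For concreteness, here’s a minimal sketch of how that scoring typically works, assuming the common convention that the score is a 1–5 likelihood band multiplied by a 1–5 consequence band – the example bands are mine, not any real department’s:

```python
# A conventional 5x5 matrix: both axes are bands from 1 to 5, and the
# score is their product, so the overall scale runs from 1 to 25.
def matrix_score(likelihood_band: int, consequence_band: int) -> int:
    return likelihood_band * consequence_band

DEPARTMENT_APPETITE = 2   # where the department would like to be
DEPARTMENT_TOLERANCE = 4  # where it will accept being for a short time

# A risk judged 'possible' (3) with 'moderate' consequences (3) scores 9,
# already more than double the department's stated tolerance.
score = matrix_score(3, 3)
print(score <= DEPARTMENT_APPETITE, score <= DEPARTMENT_TOLERANCE)  # False False
```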

Now, somewhere else in the organisation is a feisty team that wants to try something out. However, every time the central assurance function looks at the team’s risk register, its face falls. Every risk is way outside of tolerance. They’re running risks that come out at 15s, 16s, 20s even.

How has this happened?

I believe it’s happened because the two sides are working to different scales. They’re simply not measuring the same thing. The department is measuring consequences in the millions of pounds. Its ‘5’ is the fall of the government, or the dissolution of the department, or riots in the streets.

The start-up team – we hope – is not dealing with that level of consequence. Perhaps they should use the departmental scale?

If they do that, they lose any ability to make local risk decisions. The grain of the matrix is way too coarse: every consequence this team manifests is a 1, maybe a 2, on a departmental scale.

Equally, the department can’t use the start-up team’s scale – in that case, everything becomes a ‘5’ and there’s no way of prioritising anything at all.

Suppose instead that we use the universal currency: currency.

Let us assume that likelihood will be measured in terms of percentages and consequences in terms of monetary outcomes. This will alarm/delight the people who think that reputation is priceless.

This does not suggest that it is easy to define a monetary outcome. Only that it is necessary, and that it communicates a value better than ‘bad’, ‘really bad’, ‘calamitous’. This also does not suggest it is easy to define a likelihood, though forecasting might be an interesting approach to take here.

With these two difficult things in place, we can define a risk appetite (where we would like to be) and a risk tolerance (where we’ll accept being for a short time) in terms of money, rather than a single number.

At this level, perhaps that means the department is willing to tolerate up to £20m of expected losses: a 20% likelihood of losing £100m, or an 80% chance of losing £25m.
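
That arithmetic – likelihood times monetary consequence, compared against the tolerance – is simple enough to write down. A minimal sketch, with the figures above (the function and constant names are mine):

```python
def expected_loss(likelihood: float, consequence_gbp: float) -> float:
    """Expected monetary loss: a 0-1 likelihood times a consequence in pounds."""
    return likelihood * consequence_gbp

DEPARTMENT_TOLERANCE_GBP = 20_000_000  # £20m

# Both examples from the text sit exactly at the departmental tolerance.
for likelihood, consequence in [(0.20, 100_000_000), (0.80, 25_000_000)]:
    loss = expected_loss(likelihood, consequence)
    print(loss, loss <= DEPARTMENT_TOLERANCE_GBP)  # 20000000.0 True, both times
```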

Now to our start-up team, the prospect of losing £20m should be absolutely terrifying and also completely impossible. The team looks at its work, and the worst possible outcome, and says that the top of its scale is £1m. If there is a risk that could cause more damage than that, it’s out of their hands and someone way more senior has to decide if they want to accept it.

However: the team might also say that their appetite is ’16’ and their tolerance is ’20’. That might imply that they have a willingness to lose £0.65m in the pursuit of their work.
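
I don’t know how a team like this would calibrate its bands, but here’s one illustrative mapping that lands near that figure – the percentages and pound values below are assumptions, not anyone’s actual scale:

```python
# Illustrative calibration of the team's 5x5 matrix: likelihood bands as
# percentages, consequence bands as pounds, topping out at the team's £1m.
LIKELIHOOD = {1: 0.05, 2: 0.20, 3: 0.50, 4: 0.80, 5: 0.95}
CONSEQUENCE_GBP = {1: 50_000, 2: 200_000, 3: 500_000, 4: 800_000, 5: 1_000_000}

def money_behind_the_score(likelihood_band: int, consequence_band: int) -> float:
    return LIKELIHOOD[likelihood_band] * CONSEQUENCE_GBP[consequence_band]

# An appetite of '16' (4 x 4) comes out at about £0.64m of expected loss on
# this calibration -- close to the £0.65m figure above.
print(money_behind_the_score(4, 4))  # 640000.0
```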

Now, this should be checked. Does this team actually have the authority to take a risk that expensive? Should its risk budget be further reduced? If we aggregate these budgets across the organisation, does it turn out we’re actually risking more than we want?
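
Once everything is in the same currency, that aggregation check is mechanical. A sketch, with made-up team names and figures:

```python
# Sum each team's risk budget (the expected loss it is willing to carry)
# and compare the total against the departmental tolerance. All of the
# team names and figures here are made up.
team_budgets_gbp = {
    "start-up team": 650_000,
    "payments replatforming": 8_000_000,
    "legacy decommissioning": 14_000_000,
}
DEPARTMENT_TOLERANCE_GBP = 20_000_000

total = sum(team_budgets_gbp.values())
print(f"Total risk budget: £{total:,}")  # Total risk budget: £22,650,000
print("over tolerance" if total > DEPARTMENT_TOLERANCE_GBP else "within tolerance")
```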

If so, we can change it.

For a certain group of people this prospect is terrifying. If we put actual numbers on things, then we’re saying that we’re willing to lose absolutely gargantuan amounts of money. There are those who will say that government departments should not accept any losses at all.

This is wild to me. We must accept a degree of risk, and accept moreover that some of those risks will come to pass. If we want to make sure we have no losses, then we have to have the most stringent hiring process possible, to ensure we’re not losing money to someone who’s really only doing 35 hours a week instead of 37. We have to make sure that no project ever, ever, ever fails.

Is that reasonable? Is that realistic?

No.

When we start putting numbers on these things, we can treat them a little bit more like insurance. We can say: I have a reputation worth £1bn. How much am I willing to pay to protect it? What am I willing to go without to fund that payment?

And, in turn, it puts the ball in our court as security people to make predictions that are useful and deliver value to our senior leaders. What’s going to move the needle on our risks? What’re the three things we have to do today, tomorrow, this year to bring the likelihood of an incident down, or reduce its impact when it happens?
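
Once the numbers exist, that prioritisation stops being rhetorical. A sketch of the sort of comparison I mean – every control, cost and probability shift here is invented purely for illustration:

```python
# Rank candidate controls by how much expected loss they remove for what they
# cost. Every figure here is invented purely for illustration.
REPUTATION_GBP = 1_000_000_000   # the £1bn reputation above
BASELINE_LIKELIHOOD = 0.02       # assumed chance of a major incident this year

candidate_controls = [
    # (name, annual cost in £, assumed likelihood once the control is in place)
    ("stronger authentication everywhere", 2_000_000, 0.012),
    ("out-of-hours incident response retainer", 500_000, 0.018),
    ("wholesale legacy estate rebuild", 150_000_000, 0.005),
]

baseline_exposure = BASELINE_LIKELIHOOD * REPUTATION_GBP
for name, cost, new_likelihood in candidate_controls:
    removed = baseline_exposure - new_likelihood * REPUTATION_GBP
    print(f"{name}: removes £{removed:,.0f} of expected loss for £{cost:,.0f}")
```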

Architects used to think they could sit in an ivory tower and dictate how the enterprise would build software. The smart ones are embedded in delivery teams now, building the tracks as fast as the devs feed the engine. The others are still in their towers, wondering why nobody calls any more.

Security people have to do the same. We cannot afford to be left behind because we’re still huddled in our security working groups, doing our best Eeyore impressions, waiting for the sky to fall because nobody will buy into our narrative: that the sky is falling.

The sky is falling, friends. Are we in our communities upskilling, or are we making bunkers whence we can smugly tell the shattered world we told it so?

I know where I’ll be. I hope you’ll be there too.
