“We and our computers were supposed to make life easier; this was our promise,” wrote Peter de Jager, a computer programmer at Dylex in Toronto at the time his essay was published in Computerworld in 1993. “What we have delivered is a catastrophe.”
De Jager’s contention and the fears it prompted possessed many features of a post-apocalyptic disaster movie: critical infrastructure could shut down; planes could fall out of the sky; and the global financial system could suffer a disaster that would make Black Tuesday look like just another bad day at the office.
But there was no bad actor, or even bad act, at the heart of his warning, just operational efficiency. His theory, and the reason governments and businesses would spend over $300bn to avert the catastrophe, was that the decision to record dates in two digits rather than four in early computer code could bring everything down.
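The mechanics of the bug are simple to demonstrate. The following is a minimal sketch, not actual legacy code: it shows how date arithmetic on two-digit years, a common space-saving convention in early systems, silently breaks at the millennium rollover.

```python
def years_elapsed(start_yy: int, end_yy: int) -> int:
    """Naive two-digit year subtraction, as many legacy systems performed it."""
    return end_yy - start_yy

# An account opened in 1985 ("85") and checked in 1999 ("99") works as expected.
print(years_elapsed(85, 99))   # 14

# Checked again in 2000 ("00"), the same calculation goes negative, as if
# the account had been opened 85 years in the future.
print(years_elapsed(85, 0))    # -85
```

Any interest calculation, expiry check or scheduling routine built on arithmetic like this would produce nonsense the moment the year ticked over, which is why the fix was so laborious: every affected date field had to be found and widened to four digits.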
The Millennium Bug, or Y2K, is widely eyed with derision by the general public, much as the Millennium Dome was by the UK taxpayers who footed the £700m bill to build it. But to many of those in the know, the relatively limited number of errors it actually caused is instead vindication of the time and money invested in averting it.
Today there is no risk that the simple turning of the Gregorian calendar year will drive people to panic-buy food and arms - as they did then - but the same warning signs are there, and this time with bad acts perpetrated by bad actors.
Rising risk of ransomware
Having established itself on the cyber security radar from around 2012, ransomware - malicious code, often delivered via a seemingly legitimate email, that locks victims out of their systems or data until a ransom is paid - has grown exponentially in recent years, with the 181.5m attacks registered in 2018 marking a 229% increase. And as companies across the world have become more distributed through remote working, the risks are rising even further.
On 31 March this year, the Harris Federation of 48 schools in and around London was subjected to a ransomware attack by the Russian cyber gang REvil: financial documents were published; systems were locked down; and the email accounts of its 37,000 students were disabled.
A month later, the Colonial Pipeline, which supplies roughly 45% of all fuel consumed on the East Coast of the United States, fell victim to another ransomware attack, this one instigated by DarkSide: the pipeline was shut down; the Federal Motor Carrier Safety Administration issued an emergency declaration for 17 states and Washington DC to keep fuel supplies going; and it emerged that 100GB of data had been stolen from company servers the day before.
And on 2 July, a ransomware attack targeting US software firm Kaseya by REvil succeeded in not only penetrating and prompting the shutdown of its own systems but also those of more than 1,000 companies within its supply chain including a Swedish supermarket.
This escalation - from disabling UK students' emails to taking down a key fuel supply line and a global supply chain in barely four months - offers a particularly stark warning of the risks to businesses and governments of the growing illicit trade in cyber attacks.
But, as is so often the case in matters of risk, it is insurers - who must reimburse policyholders and finance the ransom payments demanded - that sit at the intersection of this growing concern.
Rapidly reaching a “crisis point”
Cyber insurance has been around as a product for some 15 years, but the speed at which the threat landscape has changed, and the relative paucity of historical data compared with more established lines of business such as natural catastrophe cover, have prompted many to question its effectiveness.
The Cost of Cyber Crime, published by the Office of Cyber Security and Information Assurance in the Cabinet Office, estimates the cost to the UK economy at £21bn a year and recent signals indicate this will increase.
According to the insurance broker Howden, the pricing of cyber policies across the insurance sector increased by 32% in July compared with the previous year. Combined with data from reinsurance broker Willis Re putting reinsurance rates up by 40%, it is a clear sign that the sector failed to foresee the spike in attacks.
“It is clearly reaching a bit of a crisis point,” says Kirstin Gillon, Technical Manager at the Tech Faculty of the ICAEW, on the need for businesses and their insurers to come up with workable safeguards against cyber attacks.
“Maybe it will be a turning point [where insurers say] we’re going to work out what we’re going to do and what we’re going to cover and not cover because we’re actually in a bit of an epidemic at the moment,” she says.
Based on her consultations with ICAEW members, Gillon has detected an air of cynicism as to the value of many of the policies currently available. “I think there has been quite a lot of cynicism from our members about how much the policies being offered would actually pay out or how useful they are if it comes to it,” she says.
Raising the floor on cyber security
Admitting that the lack of historical data makes it “incredibly hard for insurance companies”, Gillon suggests that one route forward is wider adoption of standards such as Cyber Essentials from the National Cyber Security Centre (NCSC).
Taking the form of a government-backed certification scheme, Cyber Essentials is designed to ensure that all those who complete the verification meet a minimum level of cyber protection, which should in turn enable the insurance sector to use it as a benchmark on which to base its policies.
Self-described cyber insurance veteran Oliver Brew, who currently serves as head of client success at CyberCube, admits that insurers could have embraced standards more fully in the past, but believes they have recently upped their game.
“Ten years ago I would have agreed wholeheartedly, but I think there has been significant progress in the last few years with the standards that have made a difference,” Brew tells ICAEW, adding that the adoption of the Payment Card Industry Data Security Standard has seen a significant increase in credit card payment security.
“It was sort of a wild west and you could run credit card payment operations with relatively lax security but today that is not the case and the insurance industry played a part in raising the floor.”
For Brew, who prior to joining CyberCube worked as chief underwriter of cyber insurance at Symantec, the value in raising the floor for basic cyber security could prove key.
“One of the truisms of cyber risk is that you don’t necessarily need to be the best protected to avoid a loss, you just need to not be the worst,” he says. “These criminal enterprises will use the same technique against hundreds of thousands of companies and those who have a vulnerability will be the ones who are exploited.”
By using the standards and the latest guidance on basic cyber security, insurers can work with policyholders to ensure their systems and networks are protected against many of the attacks being launched. But with ransom payments climbing ever higher, it is safe to assume that cyber criminals will continue to find new ways to breach the firewalls and protections put in place.
Combatting the risk with a more dynamic relationship
Brew is clear that the “brazen attack on critical infrastructure” seen in the Colonial Pipeline attack and the “systemic nature” of that carried out on Kaseya and its supply chain constitute a stark warning of what threats could emerge if insurers and businesses don’t face up to the rapidly changing risk landscape and become more proactive in their attempts to counter the threats.
Early indications of this evolution in approach can be seen in the growing range of support services that insurers are providing to help clients shore up their defences. These include enabling access to multi-factor authentication, which makes it harder for bad actors to infiltrate a business's network.
Another service increasingly offered by insurers is ‘ethical hacking’, or red-teaming, in which a team of cyber security experts attempts to break through the existing safeguards in order to expose potential weaknesses.
Smartening up its act
These services and the use of algorithmic underwriting, which uses machine learning to better assess the risk profile of a client and adapt the insurance policy to it, are part of a wider move towards a more dynamic relationship between insurer and insured that is built on the strong foundation of neither party wanting to suffer a loss.
“Insurers are transforming how work is done and how risks are assessed based on machine learning and new insights from vast quantities of data which simply weren’t available five years ago and that will only accelerate,” says Brew.
Drawing a parallel with the increasing use of smart sensors in homes to enable property insurers to better gauge and mitigate risk, Brew envisages a “much more live relationship” where carriers and clients work more closely and with greater transparency to identify and act on potential risks before it is too late.
“I like to think of the householder analogy,” he says, “where if there is a big ice storm coming, the house insurer will not only give notice to their policyholders but also activate smart thermostats to ensure the pipes don’t freeze and things like that to reduce loss in advance rather than simply clean up afterwards.”
Should insurers and their clients be able to build these more dynamic partnerships, fuelled by the sharing and analysis of far greater amounts of relevant data, then they should be able to respond more rapidly and with greater agility as new methods and types of attack emerge.
However, if they stick too tightly to the model of simply paying out for losses rather than preventing them, then De Jager’s warning of computerised catastrophe could finally come true.