Cybersecurity expert Bruce Schneier has written an excellent article examining how developments in Artificial Intelligence may completely alter the balance of power in computer systems defense:
“Artificial intelligence technologies have the potential to upend the longstanding advantage that attack has over defense on the Internet. This has to do with the relative strengths and weaknesses of people and computers, how those all interplay in Internet security, and where AI technologies might change things.”
The main distinction: humans can think outside the box, but we get bored easily. Computers, meanwhile, are good at repetitive, menial tasks but fare poorly where judgement or creativity is required.
“You can divide Internet security tasks into two sets: what humans do well and what computers do well. Traditionally, computers excel at speed, scale, and scope. They can launch attacks in milliseconds and infect millions of computers. They can scan computer code to look for particular kinds of vulnerabilities, and data packets to identify particular kinds of attacks.
Humans, conversely, excel at thinking and reasoning. They can look at the data and distinguish a real attack from a false alarm, understand the attack as it’s happening, and respond to it. They can find new sorts of vulnerabilities in systems. Humans are creative and adaptive, and can understand context.
Computers — so far, at least — are bad at what humans do well. They’re not creative or adaptive. They don’t understand context. They can behave irrationally because of those things.”
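To make the split concrete, here is a minimal sketch of the kind of work computers do well: signature-based scanning, i.e. fast, tireless pattern matching. The signatures and payloads below are invented for illustration, not drawn from any real detection product.

```python
# Known-bad byte patterns (hypothetical signatures for illustration).
SIGNATURES = {
    b"' OR '1'='1": "SQL injection attempt",
    b"../../etc/passwd": "path traversal attempt",
    b"<script>": "possible XSS payload",
}

def scan_payload(payload: bytes) -> list[str]:
    """Return the name of every signature found in the payload."""
    return [name for sig, name in SIGNATURES.items() if sig in payload]

# Fast and tireless, but context-blind: a security blog post that
# merely *quotes* an exploit string trips the same alert as a real
# attack. Deciding which is which is where the human comes in.
alerts = scan_payload(b"GET /download?file=../../etc/passwd HTTP/1.1")
```

The scan itself is trivial to run at millions of packets per second; what it cannot do is tell a genuine attack from a harmless lookalike, which is exactly the judgement call the passage above describes.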
One of the best illustrations of this difference was demonstrated by the 1983 Soviet nuclear false alarm incident, considered by some to be the closest the USA and USSR came to nuclear conflict (even more so than the Cuban Missile Crisis):
“On 26 September 1983, Stanislav Petrov, a lieutenant colonel in the Soviet Air Defense Forces, was the officer on duty at the Serpukhov-15 bunker near Moscow which housed the command center of the Soviet early warning satellites, code-named Oko. Petrov’s responsibilities included observing the satellite early warning network and notifying his superiors of any impending nuclear missile attack against the Soviet Union. If notification was received from the early warning systems that inbound missiles had been detected, the Soviet Union’s strategy was an immediate and compulsory nuclear counter-attack against the United States specified in the doctrine of mutual assured destruction.
Shortly after midnight, the bunker’s computers reported that one intercontinental ballistic missile was heading toward the Soviet Union from the United States… Petrov dismissed the warning as a false alarm, though accounts of the event differ as to whether he notified his superiors or not after he concluded that the computer detections were false and that no missile had been launched. Petrov’s suspicion that the warning system was malfunctioning was confirmed when no missile in fact arrived. Later, the computers identified four additional missiles in the air, all directed towards the Soviet Union. Petrov suspected that the computer system was malfunctioning again, despite having no direct means to confirm this. The Soviet Union’s land radar was incapable of detecting missiles beyond the horizon, and waiting for it to positively identify the threat would limit the Soviet Union’s response time to a few minutes.
It was subsequently determined that the false alarms were caused by a rare alignment of sunlight on high-altitude clouds and the satellites’ Molniya orbits, an error later corrected by cross-referencing a geostationary satellite.”
By all technical indications, he should’ve ordered a nuclear counter-strike. If the decision had been left to a computer, there is no question what it would’ve done: fired the missiles. And why not? It would simply have acted on the best information it had.
But Petrov, being human, understood context:
“In explaining the factors leading to his decision, Petrov cited his belief and training that any U.S. first strike would be massive, so five missiles seemed an illogical start. In addition, the launch detection system was new and in his view not yet wholly trustworthy, while ground radar had failed to pick up corroborative evidence even after several minutes of the false alarm.”
This is where computers have fallen far short: understanding context and making judgement calls. It’s also why the balance of power in cybersecurity is tipped toward offense, which tends to be brute-force in nature. A malicious computer can sit there all day brute-forcing passwords while sending out millions of spam emails. Cyber-defense, on the other hand, tends to require thoughtfulness, creativity, and an understanding of context, none of which are a computer’s strong suit.
But developments in AI may change that. Schneier goes on to discuss the possibilities:
“AI will allow computers to take over Internet security tasks from humans, and then do them faster and at scale. Here are possible AI capabilities:
- Discovering vulnerabilities — and, more importantly, new types of vulnerabilities in systems, both by the offense to exploit and by the defense to patch, and then automatically exploiting or patching them.
- Reacting and adapting to an adversary’s actions, again both on the offense and defense sides. This includes reasoning about those actions and what they mean in the context of the attack and the environment.
- Abstracting lessons from individual incidents, generalizing them across systems and networks, and applying those lessons to increase attack and defense effectiveness elsewhere.
- Identifying strategic and tactical trends from large datasets and using those trends to adapt attack and defense tactics.
That’s an incomplete list. I don’t think anyone can predict what AI technologies will be capable of. But it’s not unreasonable to look at what humans do today and imagine a future where AIs are doing the same things, only at computer speeds, scale, and scope.”
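Schneier’s last bullet, identifying trends from large datasets, is the one closest to current practice. A minimal sketch of the idea, using made-up hourly failed-login counts and a simple z-score cutoff (both invented for illustration):

```python
import statistics

def anomalous_hours(counts: list[int], threshold: float = 2.5) -> list[int]:
    """Return indices of hours whose count is more than
    `threshold` population standard deviations above the mean."""
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)
    return [i for i, c in enumerate(counts)
            if stdev > 0 and (c - mean) / stdev > threshold]

# Hourly failed-login counts; hour 5 is the spike worth a human look.
hourly = [12, 9, 11, 10, 13, 480, 12, 11]
flagged = anomalous_hours(hourly)
```

The statistics are the easy part; the hard part, today, is still the follow-up question a human asks: is that spike an attack, a misconfigured client, or a password reset gone wrong? The promise of AI, on Schneier’s account, is pushing machines further into that second step.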
Forecasting technological developments is notoriously difficult. As Schneier concedes, only some – or maybe even none – of these predictions may come to pass. But the bottom line is this: if computers can get better at what makes humans special – creativity, judgement, context – then cybercrooks who bank on the current imbalance of power will find themselves in an entirely new ball game.