I’m reading papers from the Cyber Dialogue 2012 conference, and this part of Christopher Bronk’s paper “A Governance Switchboard: Scalability Issues in International Cyber Policymaking” caught my attention:
> The easy-to-use attack tools require only targets with unremedied vulnerability. There is a world of unpatched, poorly configured, and badly designed IT that political hacktivists or cyber criminals can exploit to meet their objectives. The relatively unskilled are able to locate vulnerabilities in systems far more effectively than those charged with securing systems can.
>
> Because of the relative ease in locating and exploiting vulnerability, a basic asymmetry of cyber comes into relief: the offence, the attacker, he who wishes to compromise a system, holds the upper hand.
What the paper misses is that all that unpatched, poorly configured IT is out there because of a simple risk calculation that weighs two things:
- Frequency of loss events: there are so many other poorly configured IT systems out there that we most likely don’t need to spend much money securing ours, because we won’t fall victim any time soon;
- Actual loss: with so many other systems being exploited so frequently, the general public pays attention to any single breach for about half a day (if that) before wandering off to the next “shiny!”, so the loss to us ends up negligible in the long run.
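The two factors above are essentially the classic annualized-loss-expectancy arithmetic: expected yearly loss is breach frequency times loss per breach, and if that number comes in below the cost of securing the system, staying unpatched looks “rational”. A minimal sketch, with entirely hypothetical figures chosen only to illustrate the shape of the calculation:

```python
# Sketch of the risk calculation described above.
# All numbers are invented for illustration, not drawn from any real data.

def annualized_loss_expectancy(event_frequency: float, loss_per_event: float) -> float:
    """Expected yearly loss: how often we expect a breach times what each breach costs."""
    return event_frequency * loss_per_event

# Perceived inputs under the two assumptions in the list above:
perceived_frequency = 0.05   # herd effect: "one breach every ~20 years, plenty of softer targets"
perceived_loss = 50_000      # short public attention span: modest cleanup/reputation cost per breach

cost_of_patching = 20_000    # hypothetical annual cost of actually keeping the systems secured

ale = annualized_loss_expectancy(perceived_frequency, perceived_loss)
# 0.05 * 50_000 = 2_500 per year, well under the 20_000 patching budget,
# so the spreadsheet says: don't bother patching.
```

The point is not that the inputs are right (they usually aren’t), but that with these perceived inputs the arithmetic itself makes neglect look like the cheaper option.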
A discussion of whether such a simple risk calculation holds in all cases (it doesn’t), and of whether the regulatory and legal environment discourages such behaviour in certain industries (it does), is beyond the intent of this short comment.
tl;dr: yes, there are systems out there vulnerable to stock-standard exploits. That is as much by design as it is by negligence.