[First printed in ISSA Journal, February 2008.]
I was participating in a round table in Chicago a while back when one of the participants – a senior security executive from an enterprise – said that it takes a CISO-equivalent “3-5 years to implement their security program”. It’s been almost a year, and that phrase has stuck with me, but perhaps not for the reason you think. The question going through my mind is, “Given everything we know about security, how on earth can this be true?” And the answer, of course, is that we don’t really know that much.
Here is the issue with the above scenario – presumably nothing about the enterprise itself changes when a new CISO arrives that would require a new security approach. The IT resources are all the same; the business processes are the same; the executive-level risk posture is the same; so how could the security program look so different from one security professional to the next? There should be a “right” way to do things. There should be “best practices” that work. And there should be a way to tell when this is so.
Well, we do sort of evaluate the strength of security programs. It’s really very easy – everyone is presumed innocent until they are found guilty (we skip the trial part in the interest of expediency). T.J. Maxx, ChoicePoint, NextOnTheList Inc.? We know their programs were poor because they had an incident. Hogwash, pure hogwash. Anyone who has been a security professional at an organization of any magnitude (say, one of the top 10,000 companies in the world) knows that the complexities and tradeoffs in modern-day IT shops are so voluminous that it is ridiculous to pass judgment on an entire program on the basis of a single incident.
How, then, are we going to figure out which programs are strong and which ones are weak? Evidence-based security is the answer. Evidence-based security involves the integration of measurement into a security program so that risk, control effectiveness, and the overall allocation of resources can be evaluated objectively and continuously given the ongoing changes in the IT shop, the business, the industry, and the economy. Start thinking now about how you are going to provide strategic advice on your security program with the support of evidence. There isn’t much out there today, but a little effort will reap immediate rewards.
To start off on this journey, we can take a page out of work done in health care when evaluating medical test results (thanks to Dan Geer for his initial reference here). Sensitivity and specificity are two measures used to evaluate the effectiveness of a health care test. The calculations for each are based on true positives, false positives, true negatives, and false negatives – sound familiar? (see http://www.bmj.com/cgi/content/full/327/7417/716 for more details). We can leverage this model ourselves by collecting data and classifying it in these four categories, based on the results from firewalls, antivirus, intrusion prevention, authentication, (user-based) access control, and other inline controls (antispam already does this).
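As a back-of-the-envelope illustration – the counts below are invented, and in practice they would come from whichever inline control you instrument – the arithmetic looks like this:

```python
# Back-of-the-envelope sketch: the counts are invented, and the four
# categories would come from whichever inline control you instrument
# (firewall, antivirus, IPS, antispam, access control, ...).

def sensitivity(tp, fn):
    """Of all genuinely malicious events, what fraction did the control catch?"""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Of all benign events, what fraction did the control correctly allow?"""
    return tn / (tn + fp)

# Hypothetical one-week tally from an intrusion prevention sensor.
tp, fp, tn, fn = 480, 120, 985_000, 20

print(f"sensitivity = {sensitivity(tp, fn):.3f}")   # 0.960 -> caught 480 of 500 attacks
print(f"specificity = {specificity(tn, fp):.5f}")   # 0.99988 -> wrongly blocked 120 of 985,120 benign events
```

Note how lopsided the denominators are: the benign traffic dwarfs the malicious traffic, which is exactly why the next point matters.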
Perhaps the most mind-boggling part of this exercise will be the notion that evaluating data in this way requires counting the “true negative” – that is, benign events that should be allowed in an enterprise (legitimate firewall connections, non-malicious flows, etc.). Start thinking now about ways we can count network flows, sessions, program operations, and messages/transactions, because they are crucial to our understanding and calculation of risk in this form of evidence-based security. It may sound hard, but there are native and special-purpose products doing it today.
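For the counting itself, a minimal sketch along these lines is all that is needed; the flow-record format and the labels here are hypothetical, and real input would be NetFlow/sFlow exports, firewall or proxy logs, or a purpose-built flow collector:

```python
# Minimal sketch of tallying the four cells, including the true negatives
# (benign traffic that was correctly allowed). The record format and labels
# are hypothetical.
from collections import Counter

# Each record: (blocked_by_control, actually_malicious)
flows = [
    (False, False),  # allowed, benign     -> true negative
    (True,  True),   # blocked, malicious  -> true positive
    (True,  False),  # blocked, benign     -> false positive
    (False, True),   # allowed, malicious  -> false negative
    (False, False),  # allowed, benign     -> true negative
]

tally = Counter()
for blocked, malicious in flows:
    if blocked and malicious:
        tally["TP"] += 1
    elif blocked:
        tally["FP"] += 1
    elif malicious:
        tally["FN"] += 1
    else:
        tally["TN"] += 1

print(dict(tally))  # {'TN': 2, 'TP': 1, 'FP': 1, 'FN': 1}
```

With those four counts in hand, the sensitivity and specificity above fall out directly – the hard part is the instrumentation, not the math.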
“T.J. Maxx, ChoicePoint, NextOnTheList Inc.? We know their programs were poor because they had an incident. Hogwash, pure hogwash.” Hogwash indeed, but not for the reasons you say. We know TJX was insecure because we know they failed to remediate well-known weaknesses – they used WEP security on their wireless LANs and didn’t separate store networks from the HQ datacenter networks. We know that ChoicePoint was insecure because they didn’t validate need-to-know for sensitive customer data.
Evidence-based security will fail because the evidence is kept secret. It works in healthcare because the criteria for morbidity and mortality are well-defined, and there are penalties imposed on doctors ranging up to loss of license for failure to report critical cases. The best that even Dan Geer can suggest for improving reporting is for the FASB to change accounting standards to require valuation of intangible assets. Not gonna happen…
@Dean -
You’ve highlighted the insidious part of this problem. Unless you’re going to tell me that everyone who doesn’t get compromised is invulnerable, we are simply cherry-picking vulnerabilities in hindsight.
There are almost certainly other entities that didn’t get compromised yet have the exact same problems. And there are plenty of other problems to go along with them.
If everyone is some level of “insecure” then what level of insecurity is reasonable?