Well, Microsoft has released its Security Development Lifecycle for all to read. To be honest, I was underwhelmed. Of course, they deserve some credit, but most of this stuff is pretty straightforward. When I was at Coopers & Lybrand (now PricewaterhouseCoopers), we had a methodology called SQA 2000 (this was circa 1995). This doesn’t really look any different.
Given my work in metrics, I started salivating when I saw the heading for the metrics section. This is what it said:
3.3 Metrics for Product Teams
As a company, Microsoft is driven by the adage that "you can’t manage what you can’t measure." While it is very difficult to devise metrics that reliably measure the security of software, there are clearly metrics that serve as proxies for software security. These metrics range from training coverage for engineering staff (at the beginning of the development lifecycle) to the rate of discovered vulnerabilities in software that has been released to customers.
Microsoft has devised a set of security metrics that product teams can use to monitor their success in implementing the SDL. These metrics address team implementation of the SDL from threat modeling through code review and security testing to the security of the software presented for FSR. As these metrics are implemented over time, they should allow teams to track their own performance (improving, level, or deteriorating) as well as their performance in comparison to other teams. Aggregate metrics will be reported to senior product team management and Microsoft Executives on a regular basis.
Where’s the beef? I would have loved to see some real metrics that measure vulnerabilities found, software complexity, and anything else that might be useful. Nothing. That’s why I was underwhelmed. I only skimmed the very short document, so I am hoping I am missing something obvious and will be embarrassed by this post sometime soon.
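To make the complaint concrete: the kind of "real metric" I had in mind is something as simple as post-release vulnerability density tracked per release, so a team can see whether it is improving or deteriorating. Here is a minimal sketch of that idea; the release names, numbers, and field layout are hypothetical and not taken from the SDL document.

```python
# Hypothetical sketch: post-release vulnerability density per release,
# the kind of concrete, trendable metric the SDL document never actually gives.

releases = [
    # (release name, reported vulnerabilities, size in KLOC) -- made-up numbers
    ("v1.0", 42, 350),
    ("v1.1", 30, 380),
    ("v2.0", 18, 410),
]

def vuln_density(vulns, kloc):
    """Vulnerabilities per thousand lines of code."""
    return vulns / kloc

previous = None
for name, vulns, kloc in releases:
    density = vuln_density(vulns, kloc)
    trend = ""
    if previous is not None:
        trend = "improving" if density < previous else "deteriorating"
    print(f"{name}: {density:.3f} vulns/KLOC {trend}")
    previous = density
```

Nothing fancy, but even a table like that would have told product teams more than the paragraph above does.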