<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Spire Security Viewpoint &#187; Vulnerability Management</title>
	<atom:link href="http://spiresecurity.com/?cat=6&#038;feed=rss2" rel="self" type="application/rss+xml" />
	<link>http://spiresecurity.com</link>
	<description>Risk and Cybersecurity Analysis</description>
	<lastBuildDate>Fri, 14 Nov 2014 00:11:00 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.5.1</generator>
		<item>
		<title>Which is More Secure &#8211; Android or iOS?: Tale of the Tape</title>
		<link>http://spiresecurity.com/?p=1353</link>
		<comments>http://spiresecurity.com/?p=1353#comments</comments>
		<pubDate>Fri, 19 Jul 2013 16:04:13 +0000</pubDate>
		<dc:creator>Pete Lindstrom</dc:creator>
				<category><![CDATA[Economics and Risk]]></category>
		<category><![CDATA[Metrics]]></category>
		<category><![CDATA[Vulnerability Management]]></category>

		<guid isPermaLink="false">http://spiresecurity.com/?p=1353</guid>
		<description><![CDATA[Tech risk professionals love to have debates about platform security, though it used to be Windows vs. Linux (really closed vs. open source) which morphed to Windows vs. Apple and is now Android vs. iOS. In any case, there are&#8230;<p class="more-link-p"><a class="more-link" href="http://spiresecurity.com/?p=1353">Read more &#8594;</a></p>]]></description>
				<content:encoded><![CDATA[<p>Tech risk professionals love to have debates about platform security, though it used to be Windows vs. Linux (really closed vs. open source) which morphed to Windows vs. Apple and is now Android vs. iOS. In any case, there are often numbers available to support one viewpoint or another. Let&#8217;s have a look and see if we can come to some conclusions.</p>
<p>For our latest debate &#8211; Android vs. iOS &#8211; there are three sets of numbers that have recently come into play for evaluation:</p>
<ol>
<li><span style="line-height: 16px;">Number of vulnerabilities: A recent <a href="http://mobile.theverge.com/2013/7/16/4527326/android-versus-ios-security">blog post on TheVerge.com</a> highlights that iOS and its 238 vulns from 2007-2013 has 8.8x more vulnerabilities than Android&#8217;s 27 from 2009-2013.</span></li>
<li>Number of malware samples: In April, a <a href="http://www.symantec.com/content/en/us/enterprise/other_resources/b-istr_main_report_v18_2012_21291018.en-us.pdf">Symantec report [PDF]</a> pointed out that Apple&#8217;s 387 vulns in 2012 dwarf Android&#8217;s 13, and yet Android had 103 &#8220;mobile threats&#8221; (malware) compared with Apple&#8217;s 1. Importantly, they also point out that &#8220;<em>most mobile threats have not used software vulnerabilities</em>.&#8221;</li>
<li>Percent of traffic: A <a href="http://www.cc.gatech.edu/~traynor/papers/lever-ndss13.pdf">paper presented at NDSS &#8217;13 [PDF]</a> monitored actual smartphone traffic and found that a) &#8220;<em>The mobile malware found by the research community thus far appears in a minuscule number of devices in the network: 3,492 out of over 380 million (less than 0.0009%)</em>&#8221; and b) &#8220;<em>users of iOS devices are virtually identically as likely to communicate with known low reputation domains as the owners of other mobile platforms, calling into question the conventional wisdom of one platform demonstrably providing greater security than another</em>&#8221;</li>
</ol>
<p>Now, since we all know that security is the number one priority for IT decisions (heh), the CIO is waiting to hear from us on which platform is more secure. How do you answer?</p>
<p>Here&#8217;s my analysis, just using the numbers provided*</p>
<p>First, the number of vulnerabilities is often treated as a leading indicator of risk, even though we all recognize that more vulns found equals fewer vulnerabilities remaining. The perception, however, is that there are actually <em>even more</em> vulns left. Absent any other information, it is worth considering the notion that a higher number here is a measure of stronger security going forward (that is, #vulns is a lagging indicator). It doesn&#8217;t help matters that at least one of the sets of numbers inexplicably uses different time periods in its analysis. This measure would be much more useful if we had a way to normalize the numbers across platforms &#8211; the two most obvious ways would be 1) a measure of the complexity or size of the code base or 2) a measure of the person-hours expended in looking for vulns. While I favor the latter option, it is not very practical.</p>
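<p>Purely for illustration, here is a quick sketch of that normalization idea. The raw vuln counts are the figures cited above, but the code-size and person-hour denominators are invented placeholders, not real data:</p>
<pre><code># Sketch of normalizing raw vulnerability counts across platforms.
# Vuln counts are the figures cited above; MLOC and person-hour values
# are invented placeholders for illustration only.
platforms = {
    "iOS":     {"vulns": 238, "mloc": 12.0, "person_hours": 200_000},
    "Android": {"vulns": 27,  "mloc": 15.0, "person_hours": 150_000},
}

for name, p in platforms.items():
    per_mloc = p["vulns"] / p["mloc"]                    # vulns per million lines of code
    per_khrs = p["vulns"] / (p["person_hours"] / 1000)   # vulns per 1,000 search hours
    print(f"{name}: {per_mloc:.1f} vulns/MLOC, {per_khrs:.2f} vulns per 1,000 person-hours")
</code></pre>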
<p>The second measure, the number of malware samples, is interesting because it is closer to the actual compromise. In addition, as Symantec points out, many of them don&#8217;t exploit software vulnerabilities (another knock against using vuln counts). The challenge here is that there is an essentially unlimited ability to create more malware samples. Moreover, the notion of a &#8220;mobile threat&#8221; is fairly broad and not always threatening, given that legitimate apps share some of the same characteristics. Given the (somewhat) restricted methods for distributing and installing apps on smartphones, a better measure would capture the distribution of these malware apps and their accessibility to the user population. In that case, an understanding of the number of downloads would get us significantly closer to understanding the relative risk.</p>
<p>The final measure, compromised smartphones, provides a historical measure of actual infected phones. Aside from the really, really low number, we must decide whether these values are a good reflection of (future) risk or not. Since this number identifies compromised systems, it gets us closest to that which we are trying to prevent, which is useful. Ultimately, I believe this measure is the best of the three in helping us understand &#8220;risk&#8221; in the mobile world. And right now, it&#8217;s a tossup.</p>
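<p>For reference, the arithmetic behind that &#8220;really, really low&#8221; number is easy to check, using only the figures quoted from the NDSS paper above:</p>
<pre><code># Quick check of the NDSS '13 figure cited above:
# 3,492 infected devices out of more than 380 million observed.
infected = 3_492
observed = 380_000_000

rate = infected / observed
print(f"Infection rate: {rate:.4%}")                 # about 0.0009%
print(f"Roughly 1 in {round(1 / rate):,} devices")   # roughly 1 in 109,000
</code></pre>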
<p>A better measure for determining which platform is more secure, in my opinion, would involve a measure of attack surface combined with one of devices sold (as a placeholder for activity volume and popularity).</p>
]]></content:encoded>
			<wfw:commentRss>http://spiresecurity.com/?feed=rss2&#038;p=1353</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>The 7-day Itch: Ups and Downs of Google&#8217;s New Disclosure Policy</title>
		<link>http://spiresecurity.com/?p=1331</link>
		<comments>http://spiresecurity.com/?p=1331#comments</comments>
		<pubDate>Wed, 05 Jun 2013 14:13:51 +0000</pubDate>
		<dc:creator>Pete Lindstrom</dc:creator>
				<category><![CDATA[Economics and Risk]]></category>
		<category><![CDATA[Highlights]]></category>
		<category><![CDATA[Random]]></category>
		<category><![CDATA[Vulnerability Management]]></category>

		<guid isPermaLink="false">http://spiresecurity.com/?p=1331</guid>
		<description><![CDATA[Recently, members of the security team at Google made an important announcement about &#8220;real-world exploitation of publicly unknown vulnerabilities.&#8221; While it was done on the Google Online Security blog, all indications are that this is an official Google policy statement.&#8230;<p class="more-link-p"><a class="more-link" href="http://spiresecurity.com/?p=1331">Read more &#8594;</a></p>]]></description>
				<content:encoded><![CDATA[<p><span style="letter-spacing: 0.05em; line-height: 1.6875;">Recently, members of the security team at Google made an <a href="http://googleonlinesecurity.blogspot.com/2013/05/disclosure-timeline-for-vulnerabilities.html">important announcemen</a>t about &#8220;real-world exploitation of publicly unknown vulnerabilities.&#8221; While it was done on the Google Online Security blog, all indications are that this is an official Google policy statement. To wit, Google announced that &#8220;after 7 days have elapsed without a patch or advisory, we will support researchers making details available so that users can take steps to protect themselves.&#8221;</span></p>
<p><span style="letter-spacing: 0.05em; line-height: 1.6875;">This is an important announcement because it highlights the very real problem of &#8220;<a href="http://spiresecurity.com/?p=36">in-the-wild-exploits of undercover vulnerabilities</a>.&#8221; This strain of &#8220;0day&#8221; is the most significant given that active exploits are already happening when they are discovered. In these scenarios, the threats (malicious actors) and vulnerabilities have already collided in the real world and losses are being actively incurred. Thus, <strong>this type of situation is the most important type that technology risk (techrisk) managers must deal with in their environments.</strong></span></p>
<p>The announcement itself highlights some important, underappreciated aspects of the techrisk profession:</p>
<ul>
<li>That exploits/breaches/incidents are the fundamental &#8220;unwanted outcome&#8221; that we are trying to prevent. It is not uncommon for techrisk pros to focus efforts on software quality, control weaknesses, or compliance violations &#8211; all of which are useful only to the extent that they address the aforementioned incidents.</li>
<li>That techrisk professionals can identify attacks even when the vulnerability is unknown. Much of our profession&#8217;s focus revolves around the notion that we must find vulnerabilities in order to protect ourselves, yet time and again we succeed in identifying these types of attacks using behavioral analysis and other techniques. With the growth in popularity of forensic archiving, we can now also determine to what extent we have been victims in the past to assist with understanding the risks of the future.</li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;">That much of the profession&#8217;s effort associated with vulnerability management is ineffective. Our efforts to identify each vulnerability prior to exploit are simply overwhelmed by scale, as a quick thought exercise shows &#8211; consider how many vulnerabilities are created every day (in the aggregate) as compared with how many are found. Perhaps more importantly, it is worth noting that the vast majority of vulnerabilities that are found are never known to be actively exploited <a href="https://www.isecpartners.com/media/12955/eip-final.pdf">[pdf]</a>.</span></li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;">That there is a variance in how different types of attacks &#8211; namely, targeted vs. opportunistic &#8211; manifest themselves online. Google&#8217;s primary cited reason for its new policy involves political activists as victims of targeted attacks that may lead to physical harm. The history of infosec and techrisk highlight other scenarios &#8211; the NIMDA worm, WMF exploit, WebDAV, etc &#8211; that involve opportunistic exploits across a multitude of targets.</span></li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;">That the most significant way to &#8220;move the marker&#8221; in security is through the identification of exploits and not vulnerabilities. As with Code Red and Nimda in the Fall of 2001 leading to Bill Gates&#8217; well-known &#8220;<a href="http://www.microsoft.com/en-us/news/features/2012/jan12/GatesMemo.aspx">Trustworthy Computing Memo</a>,&#8221; active exploits are the best drivers of change in the techrisk profession.</span></li>
</ul>
<p><span style="letter-spacing: 0.05em; line-height: 1.6875;">While Google&#8217;s new policy offers an opportunity to assess the state of security on the Internet overall, it also demonstrates significant deficiencies in its approach:</span></p>
<ul>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;">The 7-day deadline has no risk basis. With the significant variance in number of affected parties and speed of compromise associated with opportunistic attacks versus targeted ones, the number is an arbitrary one. In the primary example cited (activists at risk of physical harm), speed is highly unlikely to have a significant impact on risk reduction.</span></li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;">The capabilities of enterprises and/or users to protect themselves can vary significantly. There are many reasons why some parties choose to remain vulnerable to certain types of attacks &#8211; system complexities, legacy support needs, lack of technical skill, competitive priorities, etc. Through the years some security researchers (including some employees of Google) have expressed disdain for those who cannot protect themselves. A company the size of Google should be held to a higher standard in its willingness to protect those online who can&#8217;t always protect themselves.</span></li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;"> </span><span style="letter-spacing: 0.05em; line-height: 1.6875;">No consideration of economics. The policy completely ignores tradeoffs like the risk of breaking systems when taking precautionary measures (e.g. patch failures), the well-known increase in exploits that occur after the disclosure of many new vulnerabilities [<a href="http://www.cs.umd.edu/~waa/pubs/Windows_of_Vulnerability.pdf">Arbaugh, McHugh, 2000 pdf</a>; <a href="http://users.ece.cmu.edu/~tdumitra/public_documents/bilge12_zero_day.pdf">Bilge, Dumitras 2013 pdf</a>], and the opportunity costs associated with new requirements. When Google says, for example, &#8220;each day an actively exploited vulnerability remains undisclosed to the public and unpatched, more computers will be compromised&#8221; they neglect the significant likelihood that computers will be compromised regardless of the state of disclosure to the public and fall back on the age-old myth that only patches can protect systems.</span></li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;"> </span><span style="letter-spacing: 0.05em; line-height: 1.6875;">It can lead to even more exploitations and incidents. Anyone paying close attention to the vulnerability research community knows that there is wide variance in how researchers disclose their information and some decisions are made based on annoyance, frustration, spite and sometimes even malice. If a vulnerability will get &#8220;noticed&#8221; more quickly, researchers may be tempted to &#8220;test&#8221; it in the wild in order to increase its priority level.</span></li>
</ul>
<p><span style="letter-spacing: 0.05em; line-height: 1.6875;">A company with the talent and resources of Google can do better. Here are some opportunities for improving the state of security on the Internet and addressing the real, significant risk associated with actively exploited 0days:</span></p>
<ul>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;">Encourage and train political activists in obfuscation and evasion techniques. It is challenging to discuss a blanket policy across all scenarios simply by highlighting arguably the most important one &#8211; that involving physical harm. It seems highly unlikely that this case is a common one, and the best way to discuss the overall implications of the policy is to set this scenario aside, as it tends to cause an emotional reaction. As many of us know, there are many ways political activists can protect themselves online that would be much more effective than a 7-day disclosure policy that comes into play only after they have been compromised.</span></li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;"> </span><span style="letter-spacing: 0.05em; line-height: 1.6875;">Increase focus on actively exploited 0days. Since these are the most important scenarios the techrisk profession has to deal with, Google should be making every effort to identify these exploits and employ or invent new ways to protect against them. Google researchers still participate in random, ineffective vulnerability research that simply distracts from this very real problem.</span></li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;"> </span><span style="letter-spacing: 0.05em; line-height: 1.6875;">Provide more insight into the &#8220;dozens&#8221; of 0days identified &#8220;through the years&#8221; that was mentioned in the blog announcement. If there is one thing Google has, it is great data. As evidenced by past reports [<a href="http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/en/us/archive/provos-2008a.pdf">Provos, 2008 pdf</a>], Google could very easily provide more specific evidence on the number of 0days they have identified, the volume of exploits, and their disposition by vendors. The fact that they haven&#8217;t yet, especially in the face of this policy announcement, is disappointing and makes it difficult to evaluate the measure.</span></li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;"> </span><span style="letter-spacing: 0.05em; line-height: 1.6875;">Take a risk-based approach to disclosure. Fast-moving worms do most of their damage in hours and days &#8211; in those cases, seven days is too long. Targeted attacks are unlikely to get repeated in a way that demands immediate attention for most environments &#8211; in those cases, seven days is too short. A risk-based approach would take into account the frequency of exploit, probability of future exploit within a target population, and impact of the exploit while evaluating the changes to these variables over time &#8211; in particular before and after disclosure (a toy illustration of such a calculation appears just after this list).</span></li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;"> </span><span style="letter-spacing: 0.05em; line-height: 1.6875;">Monitor the situation closely. Google&#8217;s unique ability to gather data in this regard is worth mentioning again as a function of its ability to assess its own policy. Collecting and publishing data on actual 0days throughout their exploit lifecycle would be a boon to the entire profession.</span></li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;"> </span><span style="letter-spacing: 0.05em; line-height: 1.6875;">Initiate or participate in discussions to create new ways to address this very real problem. Commercial, community, and government mechanisms already exist for sharing data publicly and privately that could be used as models for minimizing the risks associated with these types of attacks. For example, a (private) process similar to federal wiretap capabilities in secrecy and opportunity may be more effective in addressing targeted attacks. There are countless other approaches that could be leveraged to address this problem.</span></li>
</ul>
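<p>As promised in the list above, here is a toy, purely hypothetical sketch of what such a risk-based disclosure window might look like. The function, its weights, and the example inputs are all invented for illustration &#8211; this is not something Google (or anyone else) has proposed:</p>
<pre><code>import math

# Hypothetical risk-based disclosure window. All weights and inputs below are
# invented for illustration; this is a sketch, not a proposed policy.
def disclosure_window_days(exploits_per_day, exposed_population, impact_per_compromise):
    """Shrink the window as expected daily loss grows; clamp between 1 and 90 days."""
    expected_daily_loss = exploits_per_day * exposed_population * impact_per_compromise
    if expected_daily_loss > 1:
        window = 90 - 10 * math.log10(expected_daily_loss)
    else:
        window = 90
    return max(1, min(90, round(window)))

# A fast-moving opportunistic worm vs. a narrowly targeted attack (made-up inputs):
print(disclosure_window_days(0.05, 5_000_000, 100.0))   # worm-like: ~16 days
print(disclosure_window_days(0.001, 200, 1_000.0))      # targeted: ~67 days
</code></pre>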
<p>Make no mistake, the Google 7-day policy announcement sheds light on a <strong>real and significant issue</strong> in technology-related risk. While it highlights some of the challenges techrisk professionals face on a daily basis, it also demonstrates significant deficiencies in its approach to addressing the problem. This is a great opportunity to evaluate the existing state of the Internet from a risk and security perspective to determine where inconsistencies or weaknesses lie and map out a risk-based program that has the highest likelihood of success.</p>
]]></content:encoded>
			<wfw:commentRss>http://spiresecurity.com/?feed=rss2&#038;p=1331</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Cognitive Dissonance or Spite?</title>
		<link>http://spiresecurity.com/?p=1302</link>
		<comments>http://spiresecurity.com/?p=1302#comments</comments>
		<pubDate>Mon, 11 Feb 2013 16:56:28 +0000</pubDate>
		<dc:creator>Pete Lindstrom</dc:creator>
				<category><![CDATA[Economics and Risk]]></category>
		<category><![CDATA[Highlights]]></category>
		<category><![CDATA[Random]]></category>
		<category><![CDATA[Vulnerability Management]]></category>

		<guid isPermaLink="false">http://spiresecurity.com/?p=1302</guid>
		<description><![CDATA[I happened to see a tweet the other day that said: &#8220;If you want a bug fixed quickly, sell it on the Russian black market. It&#8217;ll be so heavily abused that the vendor will patch out of cycle.&#8221; Now, it&#8230;<p class="more-link-p"><a class="more-link" href="http://spiresecurity.com/?p=1302">Read more &#8594;</a></p>]]></description>
				<content:encoded><![CDATA[<p>I happened to see a tweet the other day that said:</p>
<blockquote><p><em>&#8220;If you want a bug fixed quickly, sell it on the Russian black market. It&#8217;ll be so heavily abused that the vendor will patch out of cycle.&#8221;</em></p></blockquote>
<p>Now, it could be that the joke&#8217;s on me and the 126 people who retweeted this message (a large number for security tweets) were in on it. Or perhaps none of them realize how ludicrous it is. In the infosec/techrisk field, this kind of thinking is not unheard of, so I will treat it as legitimate.</p>
<p>The tweet highlights just how biased people can be when they get caught up in a notion without understanding the implications. Apparently, this tweeter wants bugs fixed quickly. At first blush this seems like a simple enough concern, shared by many. But peel back one small layer and the statement often ends up being &#8220;I want the bugs that I know about (or worse, that I discovered) fixed quickly after my discovery.&#8221; It becomes easier to see how certainty bias and the focusing illusion come into play.</p>
<p>There is plenty of evidence to demonstrate that it is unlikely that the bug in question is the only bug that remains unfixed &#8211; we have any number of bugs in various stages of discovery and disclosure all the time. If we assume that the average bug takes 120 days from discovery (or at least vendor notification) to patch release, and vendors generally release patches on a monthly cycle, then there are four months of undisclosed (typically) vulns on your systems that remain unpatched.</p>
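<p>The back-of-the-envelope behind that &#8220;four months&#8221; figure, using only the assumptions stated above:</p>
<pre><code># Back-of-the-envelope behind the "four months" claim above.
avg_days_to_patch = 120      # assumed average from vendor notification to patch release
release_cycle_days = 30      # vendors roughly release patches monthly

backlog_cycles = avg_days_to_patch / release_cycle_days
print(f"~{backlog_cycles:.0f} monthly cycles of known-but-unpatched vulns at any given time")
</code></pre>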
<p>Now, you might assert that this makes the point &#8211; of course we want them patched &#8220;quickly.&#8221; But that completely ignores the tradeoffs. If your patch is prioritized, that means another one must be de-prioritized. I suppose you could say that security developers aren&#8217;t operating at capacity and therefore can absorb the workload for both bugs, but that seems farfetched to me and doesn&#8217;t scale in any case.</p>
<p>Of course, the worst part of the tweet is the part that purposely increases risk by increasing the threat of compromise. No need for a soapbox/high horse here to recognize that purposely inflating risk to get attention in spite of how detrimental it is to Internet users is certainly unprofessional and really kind of pathetic.</p>
<p>Too often, folks get caught up in some perceived solution to a problem and neglect the bigger picture. Many times, the bugfinder is sincerely concerned. But it is important to understand the cost/benefit and risk dynamics involved if you really want to positively affect Internet risk.</p>
]]></content:encoded>
			<wfw:commentRss>http://spiresecurity.com/?feed=rss2&#038;p=1302</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Vulnerability Research in the age of Embedded Systems (SCADA)</title>
		<link>http://spiresecurity.com/?p=1262</link>
		<comments>http://spiresecurity.com/?p=1262#comments</comments>
		<pubDate>Wed, 25 Jan 2012 16:05:48 +0000</pubDate>
		<dc:creator>Pete Lindstrom</dc:creator>
				<category><![CDATA[Economics and Risk]]></category>
		<category><![CDATA[Highlights]]></category>
		<category><![CDATA[Vulnerability Management]]></category>

		<guid isPermaLink="false">http://spiresecurity.com/?p=1262</guid>
		<description><![CDATA[I have a post over at the Verizon Business blog (Considering Vulnerability Disclosure in the Realm of SCADA Systems) about how vulnerability discovery and disclosure impacts risk. Although it provides a basic risk model that can be applied to any situation,&#8230;<p class="more-link-p"><a class="more-link" href="http://spiresecurity.com/?p=1262">Read more &#8594;</a></p>]]></description>
				<content:encoded><![CDATA[<p>I have a post over at the Verizon Business blog (<a href="http://securityblog.verizonbusiness.com/2012/01/24/considering-vulnerability-disclosure-in-the-realm-of-scada-systems/">Considering Vulnerability Disclosure in the Realm of SCADA Systems</a>) about how vulnerability discovery and disclosure impacts risk. Although it provides a basic risk model that can be applied to any situation, it focuses on the recent <a href="http://www.digitalbond.com/2012/01/19/project-basecamp-at-s4/">SCADA disclosures</a> by Digital Bond and Rapid7. These are some of the smartest people in our field and yet they insist (by implication) on increasing risk to make a point. I sincerely hope they reconsider their actions in the future, before any serious damage is done.</p>
]]></content:encoded>
			<wfw:commentRss>http://spiresecurity.com/?feed=rss2&#038;p=1262</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Evaluating the Oracle Security Manifesto</title>
		<link>http://spiresecurity.com/?p=1257</link>
		<comments>http://spiresecurity.com/?p=1257#comments</comments>
		<pubDate>Tue, 30 Aug 2011 15:21:48 +0000</pubDate>
		<dc:creator>Pete Lindstrom</dc:creator>
				<category><![CDATA[Highlights]]></category>
		<category><![CDATA[Random]]></category>
		<category><![CDATA[Vulnerability Management]]></category>

		<guid isPermaLink="false">http://spiresecurity.com/?p=1257</guid>
		<description><![CDATA[The cool thing about Mary Ann Davidson is she doesn&#8217;t mince her words; you know where she stands on every issue and she is willing to own it in the security world. So when I started hearing some buzz about&#8230;<p class="more-link-p"><a class="more-link" href="http://spiresecurity.com/?p=1257">Read more &#8594;</a></p>]]></description>
				<content:encoded><![CDATA[<p>The cool thing about Mary Ann Davidson is she doesn&#8217;t mince her words; you know where she stands on every issue and she is willing to own it in the security world. So when I started hearing some buzz about her latest blog post &#8211; <a href="http://blogs.oracle.com/maryanndavidson/entry/those_who_can_t_do">Those Who Can&#8217;t Do, Audit</a> &#8211; I expected some sizzle. And got it.</p>
<p>It turns out the target this time is &#8220;SASO,&#8221; a company that must be making headway in driving legislation towards third party code reviews.</p>
<blockquote><p><em>&#8220;<span> I’ve opined in previous blogs on the importance of defining what problem you want to solve, specifying what “it” is that you want to legislate, understanding costs – especially those pertaining to unintended consequences &#8211; and so on.&#8221;</span></em></p></blockquote>
<p>-&gt; Hear, hear (or is that here, here?). In any case, we should all be finding ways to understand exactly what we want. For example, do you simply want &#8220;more secure software&#8221; or do you really want &#8220;fewer incidents?&#8221; I am utterly against legislation because we haven&#8217;t defined the problem and, more importantly, we haven&#8217;t scoped out the solution.</p>
<blockquote><p><em> This includes legislative mandates on suppliers – who, as we all know – </em>[sarcasm] <em>are busy throwing crappy code over the wall with malice aforethought. Those evil suppliers simply cannot be trusted&#8230;</em>[/sarcasm]</p></blockquote>
<p>-&gt; I had to emphasize the sarcasm here because <em>some security folks actually believe this</em>. However, Mary Ann seems to insinuate that simply because developers are not <em>trying </em>to write crappy code, they aren&#8217;t creating it. And there is good reason to believe (ahem) that software ships with vulnerabilities. The code isn&#8217;t necessarily crappy, per se, but it is hard to refute the evidence that it <em>does</em> have vulnerabilities &#8211; sometimes many.</p>
<p>Though I understand the frustration and strong emotions here, on both sides, I am not a fan of setting this up as some sort of moral, &#8220;good vs. evil&#8221; argument. In my experience, most folks really are trying to &#8220;do the right thing&#8221; even though the approaches conflict.</p>
<blockquote><p><span><em>Having to plow through 1000 alleged vulnerabilities to find the three that are legitimate is way too expensive for any company to contemplate doing it.</em></span></p></blockquote>
<p>-&gt; In my opinion, this is the real problem with this entire space. The tools do not provide high-quality results, which makes security expensive. And from there I immediately segue into the facts that most vulns are never exploited and that there are no defined bounds on how long you can look for vulnerabilities.</p>
<blockquote><p><span> <em>“creating a market for themselves.”</em></span></p></blockquote>
<p>-&gt; Reference to &#8220;demand creation,&#8221; one of a handful of conflicts of interest in the security world (really, everywhere). Another is the conflict between security and shipping products.</p>
<blockquote><p><em>&#8220;<span> they analyze the binaries to do static analysis&#8221;</span></em></p></blockquote>
<p>-&gt; I wonder who this could be. Oh, that&#8217;s right, it&#8217;s &#8220;SASO.&#8221;</p>
<blockquote><p><span><em>And thus, suppliers are out of business if they screw it up, because their competitors will be ruthless. Competitors are ruthless.</em></span></p></blockquote>
<p>-&gt; This is the standard &#8220;it&#8217;s not us, it&#8217;s them&#8221; mentality, which is extremely tricky. Vendors think &#8220;competitors&#8221; are ruthless, which implies they are some sort of exception. Enterprises believe &#8220;everyone&#8221; is ruthless &#8211; there are no &#8220;competitors,&#8221; only prospective suppliers. And again, in a very ambiguous space, it is rare to find a software company that doesn&#8217;t have something to say about the quality of its security program.</p>
<p>Of course, of much more importance in all of this &#8211; and perhaps Oracle can attest to this as well &#8211; is the value the software product provides to the company.</p>
<blockquote><p><span><em>Whom do you think is more trustworthy? Who has a greater incentive to do the job right – someone who builds something, or someone who builds FUD around what others build? Did I mention that most large hardware and software companies run their own businesses on their own products so if there’s a problem, they – or rather, we – are the first ones to suffer? Can SASO say that? I thought not.</em></span></p></blockquote>
<p>-&gt; A strawman that seems a bit of a stretch based on the evidence. I agree wholeheartedly that developers really do try to write secure software, and that companies really do try to ship secure products. Unfortunately, in today&#8217;s world there is plenty of evidence that it isn&#8217;t good enough. And, to be honest, I think the answer is that SASO has more incentive to do the job &#8220;right&#8221; &#8211; it is their core business.</p>
<blockquote><p><span><em>&#8230;why SASO will never darken our code base&#8230;</em></span></p></blockquote>
<p>-&gt; I can&#8217;t help but think of a CFO asserting that external auditors will never &#8220;darken&#8221; his financial statements&#8230; umm, yeah. Moving on, now.</p>
<p>This next section is the &#8220;manifesto&#8221; part:</p>
<blockquote><p><span><em>1) We have source code and we do our own static analysis.</em></span></p></blockquote>
<p>-&gt; It is very hard not to be trite here with an &#8220;and how is that working out for the industry?&#8221; line. This is true of every significant software developer. It seems like vulnerabilities are still missed (ahem).</p>
<blockquote><p><span>2) <em>Security != testing</em></span></p></blockquote>
<p><span>-&gt; Agreed! There is much more to it. But vulnerabilities are where the rubber meets the road. The good news is that the corollary to this statement also refutes Mary Ann&#8217;s earlier point that she would never outsource &#8220;security.&#8221; It really isn&#8217;t security, it is testing that is (potentially) being outsourced.</span></p>
<blockquote><p><span>3) <em>Precedent&#8230; </em></span><span>4) <em>Fixability&#8230;</em></span></p></blockquote>
<p><span>-&gt; I worry a lot about Precedent. Just not in this case. And fixability is simply a truism that is irrelevant as far as I can tell.</span></p>
<blockquote><p><span><span>5) <em>Equality as public policy.</em></span></span></p></blockquote>
<p><span><span>-&gt; The more vulnerabilities you fix, the more every customer benefits. I don&#8217;t see how this is unfair or unequal.</span></span></p>
<blockquote><p><span>6) <em>Global practices for global markets.</em></span></p></blockquote>
<p><span>-&gt; Aha! Finally, we get to the real argument, which is that they already use Common Criteria labs to evaluate security, and Oracle believes it is more comprehensive. A much stronger argument, I believe. Buried.</span></p>
<blockquote><p><span><span>7) <em>All tools are not created equal.</em></span></span></p></blockquote>
<p><span><span>-&gt; Wow. Lots of nuances to this one. I agree that you shouldn&#8217;t mandate a tool, or even approach. That runs the risk of ambiguity and leads to the reason why there shouldn&#8217;t be legislation in this regard. But that doesn&#8217;t mean there is no value to third-party reviews. The biggest value I see is not independence as if developers are colluding in producing bad code, but independence in that another set of eyes can provide new ways to look at the code and, as has been shown by public disclosures (which I generally don&#8217;t support), find more vulnerabilities.</span></span></p>
<p><span><span>[man, this Oracle post goes on forever!]</span></span></p>
<blockquote><p><span>[A "cautionary tale"] <em>I told the product group that they absolutely, positively, needed in-house security expertise, that “outsourcing testing” would create an “outsourcing security” mentality that is unacceptable.</em></span></p></blockquote>
<p><em>-&gt;</em> If I had a dollar for every &#8220;cautionary tale&#8221; I&#8217;ve heard, I would be a rich man. The notion that there aren&#8217;t a thousand ways to address &#8220;mentality&#8221; issues is simply wrong.</p>
<blockquote><p><span><em>By way of contrast, consider another company that does static analysis as a service.</em></span></p></blockquote>
<p><em>-</em>&gt; So, that contrasting story doesn&#8217;t really contrast. It seems to indicate that you *can* outsource security testing if you do it &#8220;ethically&#8221; &#8211; which I am sure everyone would claim is how they work, and which, as with the other arguments about what is in companies&#8217; best interests, is certainly in the testing companies&#8217; best interests.</p>
<blockquote><p><span><em> I recently heard that SASO has hired a lobbyist. (I did fact check with them and they stated that, while they had hired a lobbyist, they weren’t “seriously pursuing that angle” – yet.)</em></span></p></blockquote>
<p><em>-</em>&gt; Ugh. Just ugh.</p>
<blockquote><p><span><em>I have to wonder, what are they going to lobby for?</em></span></p></blockquote>
<p><em>-</em>&gt; A great, important question that really should be answered by the industry.</p>
<blockquote><p><span><em>In my opinion, neither SASO &#8211; nor any other requirement for third party security testing &#8211; has any place in a supply chain discussion. If the concern is assurance, the answer is to work within existing international assurance standards, not create a new one. Particularly not a new, US-only requirement to “hand a big fat market win by regulatory fiat to any vendor who lobbied for a provision that expanded their markets.” Ka-ching.</em></span></p></blockquote>
<p><span><em></em></span><em>-</em>&gt; Though I think Mary Ann is a bit too confident in her in-house setup, I believe this is a reasonable approach and agree more than I disagree with it<em>.</em> And I find myself agreeing with the rest of the post (minus the book recommendations <img src='http://spiresecurity.com/blog/wp-includes/images/smilies/icon_wink.gif' alt=';-)' class='wp-smiley' /> ).</p>
<p>After reading the entire article (!) I find myself missing something. The assertions about how much security there is seem to conflict with the publicly available evidence. I am no fan of public disclosures, but that is the way the software security world operates today, and to leave the challenges in the entire software world unacknowledged is missing the point.</p>
<p>The only thing worse than the market is government. So I choose the lesser of two evils, almost every time. But it is worth noting that the amount of &#8220;demand creation&#8221; in the security space is reprehensible as well. Regardless of that, the software security profession as a whole completely ignores</p>
<p>THE MOST IMPORTANT QUESTION IN SOFTWARE TODAY: For any given application, how many vulnerabilities should be tolerated?</p>
<p>If your answer is none, please follow the yellow brick road to the emerald city. We have to get away from working to perfection and set standards as an industry that define a reasonable level of attention to the vulnerable state of software. This reasonability or vuln tolerance measure could be based on effort, code base churn, size, complexity, age, etc.</p>
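<p>Purely as a hypothetical illustration of what such a tolerance measure might look like &#8211; the inputs echo the factors above (size, churn, age, effort), but every coefficient is invented for the example and is not a proposed standard:</p>
<pre><code># Hypothetical "vulnerability tolerance" budget based on the factors named above
# (size, churn, age, effort). Every coefficient here is invented for illustration.
def tolerated_vulns_per_year(kloc, churn_kloc_per_year, years_in_service, security_hours_per_year):
    base = 0.05 * kloc + 0.2 * churn_kloc_per_year                 # bigger, faster-changing code gets more slack
    maturity_discount = min(0.5, 0.05 * years_in_service)          # older, stabler code gets less slack
    effort_discount = min(0.3, security_hours_per_year / 50_000)   # demonstrated assurance effort gets less slack
    return base * (1 - maturity_discount - effort_discount)

# A large, actively developed application with a modest security program (made-up inputs):
print(round(tolerated_vulns_per_year(2_000, 300, 4, 5_000), 1))    # 112.0 tolerated vulns/year
</code></pre>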
<p>Obviously, this is a complex problem with many options, perhaps none of which is perfect. But if we don&#8217;t want another &#8220;compliance&#8221; state regarding software, we really need to address this problem with something other than &#8220;we try really hard.&#8221;</p>
]]></content:encoded>
			<wfw:commentRss>http://spiresecurity.com/?feed=rss2&#038;p=1257</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Liability and Secure Software</title>
		<link>http://spiresecurity.com/?p=1251</link>
		<comments>http://spiresecurity.com/?p=1251#comments</comments>
		<pubDate>Mon, 22 Aug 2011 17:46:16 +0000</pubDate>
		<dc:creator>Pete Lindstrom</dc:creator>
				<category><![CDATA[Economics and Risk]]></category>
		<category><![CDATA[Highlights]]></category>
		<category><![CDATA[Vulnerability Management]]></category>

		<guid isPermaLink="false">http://spiresecurity.com/?p=1251</guid>
		<description><![CDATA[iang over at Financial Cryptography has a thought-provoking discussion of liability (ht @alexhutton) and its corresponding risks. I think I added a comment (but can&#8217;t be sure) that said this: Culture and consciousness is all a distraction and very malleable.&#8230;<p class="more-link-p"><a class="more-link" href="http://spiresecurity.com/?p=1251">Read more &#8594;</a></p>]]></description>
				<content:encoded><![CDATA[<p>iang over at Financial Cryptography has a <a href="https://financialcryptography.com/mt/archives/001331.html">thought-provoking discussion of liability</a> (ht @alexhutton) and its corresponding risks. I think I added a comment (but can&#8217;t be sure) that said this:</p>
<blockquote><p><em>Culture and consciousness is all a distraction and very malleable. What really matters at the end of the day is the relative number of vulns in the software.</em></p>
<p><em>Also, worth noting that &#8220;secure software&#8221; is a derivative goal of less risk &#8211; that is, fewer incidents. We often opt for the former in the face of the latter, which is counterproductive.</em></p></blockquote>
<p>Liability is a horrible idea. Here are some reasons why:</p>
<ol>
<li><strong>It&#8217;s unenforceable.</strong></li>
<li><strong>It will destroy innovation.</strong></li>
<li><strong>It will destroy open-source.</strong></li>
<li><strong>It will create an Xbox Internet.</strong></li>
<li><strong>It will double prices.</strong></li>
<li><strong>It will force lock-in.</strong></li>
<li><strong>And, finally &#8212; it won&#8217;t work.</strong></li>
</ol>
<p>Those come circa 2005 from my commentary here: <a href="http://www.computerworld.com/s/article/105869/Opinion_To_sue_is_human_to_err_denied">To Sue is Human; To Err Denied</a></p>
<p>Related:</p>
<p><a href="http://spiresecurity.com/?p=632">Software Liability = Our Worst Nightmare</a></p>
<p><a href="http://spiresecurity.com/?p=527">The Death of Open Source and Xboxes for Everyone</a></p>
<p><a href="http://spiresecurity.com/?p=350">Software Liability Redux</a></p>
<p><a href="http://spiresecurity.com/?p=298">Who Should be Liable?</a></p>
]]></content:encoded>
			<wfw:commentRss>http://spiresecurity.com/?feed=rss2&#038;p=1251</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Monoculture Revisited</title>
		<link>http://spiresecurity.com/?p=1200</link>
		<comments>http://spiresecurity.com/?p=1200#comments</comments>
		<pubDate>Thu, 02 Dec 2010 03:30:00 +0000</pubDate>
		<dc:creator>Pete Lindstrom</dc:creator>
				<category><![CDATA[Vulnerability Management]]></category>

		<guid isPermaLink="false">http://spiresecurity.com/?p=1200</guid>
		<description><![CDATA[It&#8217;s been eight years since the &#8220;great monoculture debate&#8221; hit the press with a storm. Bruce Schneier and Marcus Ranum take on the topic in their he says/she says column for searchsecurity, though it doesn&#8217;t appear that Schneier actually believes&#8230;<p class="more-link-p"><a class="more-link" href="http://spiresecurity.com/?p=1200">Read more &#8594;</a></p>]]></description>
				<content:encoded><![CDATA[<p>It&#8217;s been eight years since the &#8220;great monoculture debate&#8221; hit the press with a storm. Bruce Schneier and Marcus Ranum take on the topic in their he says/she says <a href="http://searchsecurity.techtarget.com/magazineFeature/0,296894,sid14_gci1522895_mem1,00.html">column for searchsecurity</a>, though it doesn&#8217;t appear that Schneier actually believes the story any more&#8230; for good reason.</p>
<p>At the time, I wrote a rebuttal in Information Security Magazine. I can&#8217;t find it at the original online link, so have copied the version I have below (this might differ slightly from the published version). Let me know what you think.</p>
<blockquote><p><em>ALL TOGETHER NOW</em></p>
<p class="MsoNormal"><em>I’m sick and tired of having to be a farmer, car manufacturer, avionics expert and biologist to do my job. This whole analogy business has gone way too far. Nowadays, we spend more time making comparisons to security than we do solving security problems. Hello! Get over it!</em></p>
<p class="MsoNormal"><em>The latest analogy everyone’s using is comparing Microsoft to a farming monoculture. This all started in late September when Dan Geer, Bruce Schneier, Becky Bace and other security mavens released “CyberInsecurity: The Cost of Monopoly,” a white paper that argued that Microsoft’s dominance in client-server computing posed a serious risk to global IT security.[1]</em></p>
<p class="MsoNormal"><em>Now, I have no idea whether monoculture is bad to farmers. I know nothing about pesticides, fertilization methods or crop rotation. But I do know that the charges waged against Microsoft in this paper are a bit silly&#8211;at most inconsequential, and potentially destructive.</em></p>
<p class="MsoNormal"><em>The authors argue that the solution to the dangers posed by monoculture is diversification. In a nutshell, a diversified computing base will limit the number of potentially vulnerable and exploitable systems, no matter what specific system (or systems) is targeted. </em></p>
<p class="MsoNormal"><em>I’d suggest that this basic philosophy sounds great in theory, but is totally impractical when you get down to specifics. </em></p>
<p class="MsoNormal"><em>The first victim of diversification is simplicity. In defining complexity, one must look at the overall computing infrastructure and all the resources in use. Every day thousands of new programmers code millions of new lines of code in an uncoordinated fashion (under competition). </em></p>
<p class="MsoNormal"><em>The integration of many varied components further increases complexity. What are the costs of supporting a diversified base of applications and platforms, or of training half of the end users in the world on new client systems? What about the productivity losses during this transition period? What about the trillions in business that doesn’t get done?</em></p>
<p class="MsoNormal"><em>OK, for the sake of argument, let’s assume for a minute that monoculture really is bad. What do we do about it? One suggested alternative is to purposely control market forces by limiting any vendor to 50 percent of the desktops in use. That brings 600 million desktops down to 300 million. Great, except that the most prevalent virus/worm to date has only affected a couple million systems. Even limiting 10 operating systems to equal market share gives any attacker a target of 60 million systems. And hackers have a history of adapting to new environments anyway. With the growing popularity in blended threats, a virus could bundle many different attacks against different platforms.</em></p>
<p class="MsoNormal"><em>Perhaps the greatest problem in the push for mandatory diversification is the fact that most IT shops have spent the last 10 years pushing for “monocultural” computing environments. Monocultural is merely a synonym for “standardized.” To suggest that the risk is too great for a standard desktop is to suggest that the 20-year effort to standardize systems and systems support processes was a bad idea. </em></p>
<p class="MsoNormal"><em>The final test of the monoculture argument is in the consequences of its adoption:</em></p>
<ul>
<li><em>Application software vendors who focus on Windows operating systems will see their markets halved and their costs doubled.</em></li>
<li><em>Enterprises double their costs in providing technical support, retraining highly skilled professionals, and modifying and supporting internal applications that work on the Windows platform.</em></li>
<li><em>Attackers will focus on another lucrative target&#8211;for example, Cisco. Don’t know about you, but I’m a lot more worried about Cisco vulnerabilities creating a “cascading failure.”</em></li>
<li><em>The government sets a precedent that it will control the Internet. Innovation dies.</em></li>
<li><em>The problem doesn’t get solved. The Internet will be just as prone to cascading failure as it is today. Are 300 million vulnerable systems really better than 600 million?</em></li>
</ul>
<p class="MsoNormal"><em>I confess that I really don’t like being a Microsoft apologist. Redmond has significant problems to address if it really wants to strengthen our computing environments. But I’m constantly surprised that security professionals let their emotions get in the way of reason and intellectual rigor.</em></p>
<p class="MsoNormal"><em>It is time to put this Microsoft bashing to bed and move on. Diversification is foolish amidst all of the other needs of an IT organization. Security professionals need to play the cards they are dealt. There are many, many different approaches to security that can be successful in securing Windows et. al. To spend life in an alternate reality that doesn’t include Microsoft is a copout. </em></p>
<p class="MsoNormal"><em>[1] See <strong>www.ccianet.org/papers/cyberinsecurity.pdf</strong>.</em></p>
<p class="MsoNormal"><strong><em> </em></strong><strong><em>PETE LINDSTROM, CISSP</em></strong><em> (<a href="mailto:petelind@spiresecurity.com">petelind@spiresecurity.com</a>), is the founder and research director of Spire Security, an IT security analyst firm. He also is a member of Information Security’s editorial board. </em></p>
<p><em>(originally published in Information Security Magazine and no longer available online, but referenced <a href="http://www.schneier.com/crypto-gram-0311.html">here</a>)</em></p></blockquote>
]]></content:encoded>
			<wfw:commentRss>http://spiresecurity.com/?feed=rss2&#038;p=1200</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Vulnerability Creation vs. Discovery vs. Fix</title>
		<link>http://spiresecurity.com/?p=1194</link>
		<comments>http://spiresecurity.com/?p=1194#comments</comments>
		<pubDate>Mon, 25 Oct 2010 15:34:53 +0000</pubDate>
		<dc:creator>Pete Lindstrom</dc:creator>
				<category><![CDATA[Economics and Risk]]></category>
		<category><![CDATA[Incidents]]></category>
		<category><![CDATA[Metrics]]></category>
		<category><![CDATA[Random]]></category>
		<category><![CDATA[Vulnerability Management]]></category>

		<guid isPermaLink="false">http://spiresecurity.com/?p=1194</guid>
		<description><![CDATA[Michael Janke at Last In, First Out is rightly concerned about the respective run rates of the vulnerability creation process and our ability to fix them individually. He asks the question &#8220;Are we creating new vulnerabilities faster than we are&#8230;<p class="more-link-p"><a class="more-link" href="http://spiresecurity.com/?p=1194">Read more &#8594;</a></p>]]></description>
				<content:encoded><![CDATA[<p><a href="http://blog.lastinfirstout.net/2010/09/are-we-creating-more-vulnerabilities.html?utm_source=feedburner&amp;utm_medium=feed&amp;utm_campaign=Feed:+LastInFirstOut+(Last+In,+First+Out)">Michael Janke at Last In, First Out</a> is rightly concerned about the respective run rates of the vulnerability creation process and our ability to fix them individually. He asks the question &#8220;<strong><em>Are we creating new vulnerabilities faster than we are fixing old ones?</em></strong>&#8221; after providing a list of publicly disclosed vulnerabilities from various time periods.</p>
<p>I am not clear whether this list of disclosed vulnerabilities is intended to represent vulnerabilities created or fixed (it is neither), but it certainly does its job in highlighting the problem. It is worth first understanding that vulnerabilities can exist in various states after creation &#8211; undiscovered/discovered; undisclosed/disclosed (publicly); and unfixed/fixed, giving us 8 different possible state combinations (though 2 are impossible) for vulnerabilities:</p>
<ul>
<li>undiscovered, undisclosed, unfixed (latent)</li>
<li>undiscovered, undisclosed, fixed (due to code upgrade, for example)</li>
<li>undiscovered, disclosed, unfixed (impossible)</li>
<li>undiscovered, disclosed, fixed (impossible)</li>
<li>discovered, undisclosed, unfixed (true zero day; undercover vulnerability)</li>
<li>discovered, undisclosed, fixed (QA and internal code review teams)</li>
<li>discovered, disclosed, unfixed (common zero day)</li>
<li>discovered, disclosed, fixed (standard)</li>
</ul>
<p><span>It may also be worth differentiating between a patch available state and a patch applied state depending on whether you are a vendor or an end-user, but this will suffice for now.</span></p>
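<p>A tiny enumeration makes the combinatorics explicit &#8211; this simply reproduces the list above in code form, with the two impossible states flagged:</p>
<pre><code>from itertools import product

# Enumerate the 2 x 2 x 2 = 8 combinations of (discovered, disclosed, fixed).
# A vulnerability cannot be publicly disclosed without having been discovered,
# which rules out exactly two of the eight states, as the list above notes.
for discovered, disclosed, fixed in product([False, True], repeat=3):
    impossible = disclosed and not discovered
    label = (("" if discovered else "un") + "discovered, "
             + ("" if disclosed else "un") + "disclosed, "
             + ("" if fixed else "un") + "fixed")
    print(f"{label:45} {'(impossible)' if impossible else ''}")
</code></pre>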
<div>Back to Michael&#8217;s question, &#8220;<strong><em>Are we creating new vulnerabilities faster than we are fixing old ones?</em></strong>&#8221; The answer is simple: Yes. The evidence is not so readily available, but the answer is logically intuitive, I believe. The thought exercise involves considering the amount of new code being created every day and determining how many vulnerabilities you think are being created. So, for example, you might determine that there are 50 million lines of code and 5 thousand vulnerabilities created every day <a href="http://spiresecurity.com/?p=189">like I did here</a>. You can then compare that number to the number we are &#8220;fixing&#8221; &#8211; using either the number being disclosed, like Michael does, or perhaps an estimate that incorporates the percentage of unpatched vulns in the world.</div>
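<p>And here is a minimal sketch of that creation-versus-fix comparison, using the illustrative figures from the paragraph above; the daily &#8220;disclosed or fixed&#8221; figure is a placeholder assumption made up for the comparison:</p>
<pre><code># Rough creation-vs-fix comparison using the illustrative figures from the text:
# ~50 million new lines of code and ~5,000 new vulnerabilities created per day.
new_loc_per_day = 50_000_000
vulns_created_per_day = 5_000
vulns_disclosed_or_fixed_per_day = 20    # assumed, purely for the sake of the comparison

implied_rate = vulns_created_per_day / (new_loc_per_day / 1_000_000)
net_new_latent = vulns_created_per_day - vulns_disclosed_or_fixed_per_day
print(f"Implied creation rate: {implied_rate:.0f} vulns per million new lines of code")
print(f"Net new (mostly latent) vulnerabilities per day: ~{net_new_latent:,}")
</code></pre>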
<div>Michael asks a great question and I think he and I come to similar conclusions, but we differ significantly in our reactions to this information. He believes activism and (presumably) regulations will solve the problem. I confess that I <a href="http://spiresecurity.com/?p=306">really</a> <a href="http://spiresecurity.com/?p=313">despise</a> the use of automobiles as some sort of analogous situation, primarily because we are talking more about atoms and molecules than we are about physical components to a car. And even more importantly, automobile safety (at least the kind in this context) does not revolve around the INTELLIGENT ADVERSARY.</div>
<div>Michael is correct that we can&#8217;t eliminate all vulnerabilities but <a href="http://spiresecurity.com/?p=350">liability is not the answer</a>. Software Safety Data Sheets coupled with continued action against attackers will do a much better job.</div>
<div>Related:</div>
<div><a href="http://spiresecurity.com/?p=194">Back of the Envelope Math &#8211; Undercover Vulnerabilities</a></div>
<div><a href="http://spiresecurity.com/?p=189">Another Envelope: Vulnerability Growth Rates</a></div>
<div><a href="http://spiresecurity.com/?p=189"></a><strong><span style="font-weight: normal;">Computerworld: <a href="http://www.computerworld.com/s/article/105869/Opinion_To_sue_is_human_to_err_denied?taxonomyId=017">To Sue is Human, To Err Denied</a> </span></strong></div>
]]></content:encoded>
			<wfw:commentRss>http://spiresecurity.com/?feed=rss2&#038;p=1194</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Why Check Point should buy RSA</title>
		<link>http://spiresecurity.com/?p=1183</link>
		<comments>http://spiresecurity.com/?p=1183#comments</comments>
		<pubDate>Mon, 13 Sep 2010 20:16:58 +0000</pubDate>
		<dc:creator>Pete Lindstrom</dc:creator>
				<category><![CDATA[Identity Management]]></category>
		<category><![CDATA[Vulnerability Management]]></category>

		<guid isPermaLink="false">http://spiresecurity.com/?p=1183</guid>
		<description><![CDATA[Well, things have changed from almost 10 years ago, but I was taking a trip down memory lane with the new HP &#8211; Arcsight acquisition and came across this. I suppose nowadays perhaps RSA (EMC) should be buying Check Point,&#8230;<p class="more-link-p"><a class="more-link" href="http://spiresecurity.com/?p=1183">Read more &#8594;</a></p>]]></description>
				<content:encoded><![CDATA[<p>Well, things have changed from almost 10 years ago, but I was taking a trip down memory lane with the new HP &#8211; Arcsight acquisition and came across this. I suppose nowadays perhaps RSA (EMC) should be buying Check Point, and clearly OPSEC is nothing like what it was back then, but I found it intriguing. It was a Hurwitz Group Trend Watch.</p>
<h4>Security Strategies &#8211; January 31, 2002</h4>
<p class="MsoNormal"><strong><span>Why Check Point should buy RSA</span></strong></p>
<p><em><span>By: Pete Lindstrom, Director &#8212; Reply to:plindstrom@hurwitz.com [not active anymore]</span></em></p>
<p class="MsoNormal"><span>It is no secret that the security space is highly fragmented. Hundreds of companies vie for market share and mindshare amidst hundreds of others, all with a bit of a unique spin – operating within the Four Disciplines of security management (Identity, Configuration, Threat, and Trust Management). Even within Operational Security (Authentication, Access Control) choices and configurations abound.<span> </span>There is no true “security” company because there is so much to do and so many ways to do it.</span></p>
<p class="MsoNormal"><strong><span>THE HURWITZ TAKE</span></strong></p>
<p class="HGPara">The company that can consolidate solutions and provide broad coverage in the areas described above will own the security market. But who will that be? Right now, Symantec has a strong story in the Threat Management and Configuration Management space, with ISS close behind. Tivoli has a strong presence in Access Control and is working on mindshare in Identity Management and Threat Management. Netegrity and Verisign have interesting plays in Access Control and Trust Management, respectively. CA has products in just about all of these areas, but no solid mindshare. That leaves Check Point and RSA.</p>
<p class="HGPara">Check Point and RSA – at its most basic level, there doesn’t seem to be too much in common. But a second look reveals plenty of similarities, in both their businesses and solutions:</p>
<p class="MsoListBullet"><span><span>n<span> </span></span></span>Both Check Point and RSA own the markets and the minds in firewalls and authentication, respectively.</p>
<p class="MsoListBullet"><span><span>n<span> </span></span></span>Both have strong indirect channels. In fact, they share many of the same resellers.</p>
<p class="MsoListBullet"><span><span>n<span> </span></span></span>There are two basic prerequisites to selling a security solution – if you support authentication, you must support RSA’s SecurID; if you have a network security solution, you must join Check Point’s OPSEC Alliance.</p>
<p class="MsoListBullet"><span><span>n<span> </span></span></span>Check Point provides Access Control at the network layer. RSA provides Authentication at the network and application layers. Authentication and access control are always linked, with the common denominator for networks being the VPN.</p>
<p class="MsoListBullet">But wait, there’s more.<span> </span>From that position, they could roll up the authentication space by adding biometrics and dedicating effort toward smart cards and single sign-on (with RSA’s RADIUS server). They can take the Securant solution that RSA acquired and integrate it with firewalls –increasingly important in the continual blend of the network and application layers.</p>
<p class="MsoListBullet">There are other reasons to consider this, but the end result is the same: A Check Point – RSA merger would result in an operational security powerhouse that could own and define the security space in years to come.</p>
]]></content:encoded>
			<wfw:commentRss>http://spiresecurity.com/?feed=rss2&#038;p=1183</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Disclosing the Elephant in the Room of the Disclosure Debate</title>
		<link>http://spiresecurity.com/?p=1172</link>
		<comments>http://spiresecurity.com/?p=1172#comments</comments>
		<pubDate>Fri, 23 Jul 2010 20:37:30 +0000</pubDate>
		<dc:creator>Pete Lindstrom</dc:creator>
				<category><![CDATA[Economics and Risk]]></category>
		<category><![CDATA[Highlights]]></category>
		<category><![CDATA[Random]]></category>
		<category><![CDATA[Vulnerability Management]]></category>

		<guid isPermaLink="false">http://spiresecurity.com/?p=1172</guid>
		<description><![CDATA[There has been a lot of discussion lately about vulnerability disclosure, with Google and Microsoft respectively weighing in with their latest opinions on the topic. There is really nothing new here, as evidenced by the Google folks referencing a 9-year-old&#8230;<p class="more-link-p"><a class="more-link" href="http://spiresecurity.com/?p=1172">Read more &#8594;</a></p>]]></description>
				<content:encoded><![CDATA[<p>There has been a lot of discussion lately about vulnerability disclosure, with <a href="http://googleonlinesecurity.blogspot.com/2010/07/rebooting-responsible-disclosure-focus.html">Google</a> and <a href="http://blogs.technet.com/b/ecostrat/archive/2010/07/22/coordinated-vulnerability-disclosure-bringing-balance-to-the-force.aspx">Microsoft </a>respectively weighing in with their latest opinions on the topic.</p>
<p>There is really nothing new here, as evidenced by the Google folks referencing a <a href="http://www.schneier.com/crypto-gram-0111.html#1">9-year-old Bruce Schneier essay</a> on the topic. I have written extensively about disclosure and the related issue of software liability in previous years (some posts are highlighted below) and get castigated quite a bit for pointing out some fairly obvious things. I believe these points are important and sometimes ignored, so I will make some of them again, as there is a big elephant in this room and I think it is the real reason that folks are constantly at odds with one another.</p>
<p>So, here&#8217;s the elephant: <strong>Vulnerability disclosure of any kind (full, responsible, irresponsible, coordinated, uncoordinated, whatever) is not working</strong> and hasn&#8217;t been working since <a href="http://www.wired.com/techbiz/media/news/2002/01/49826">Bill Gates&#8217; Trustworthy Computing memo of 2002</a>. If you think about it, the remarkable thing about that memo is that it effectively neutralized all that was to follow in disclosure (and even the debate as it stands today) because it was a major acknowledgement of the problem from a huge company. Ever since then, nobody has been able to articulate the long-term strategic benefit of vulnerability disclosure (and for good reason). Even worse, there is no evidence of benefits anywhere, other than to the bugfinders themselves (though certainly this can work both ways). Let&#8217;s face it: the truth of this matter, and the reason for all the debate, revolves around respect, fame, and competitive advantage &#8211; not around bringing about a safer Internet. Please let me explain.</p>
<p>The reason something as simple as a memo could have such an effect is that bugfinders never really had a strategic mission to begin with. Let&#8217;s face it: the only thing a bugfinder wants is for the particular bug he or she happened to find to be fixed in whatever he or she believes is a timely manner. I don&#8217;t know where this fits in Maslow&#8217;s hierarchy of needs, but it ranks very low unless, of course, you factor in self-esteem (in the form of respect and fame). In any case, finding and fixing a single vulnerability is an extremely minor exercise (relatively speaking) with a huge downside relating to the scalability of the threat.</p>
<p>Perhaps the more interesting development in this arena is that we have an independent researcher who works for a large company and therefore has heavy influence due to both his technical skill and his employment status. The circumstances where large companies target each other (and I assume that everyone agrees with the Google Security statement that even if Tavis Ormandy was working independently, they fully supported his actions) are even more complicated. The most interesting problem relates back to this lack of strategic purpose for disclosure &#8211; if a company the size of Google is spending time finding vulnerabilities in its competitors&#8217; products, it seems reasonable to me that they should have found every single vulnerability in their own products. The <a href="http://en.wikipedia.org/wiki/Comparative_advantage">principle of comparative advantage</a> should be put to work here. In addition, large companies should have a better sense of their altruistic objectives, unless there aren&#8217;t any.</p>
<p>Although we know that the lack of cohesion among participants muddies the waters for strategy (and thus we are stuck spinning our wheels dealing with the whims of bugfinders), the most obvious reason for finding vulnerabilities is to enhance software quality and increase security. In pursuit of this noble goal, the second half of that statement often gets ignored. I think this is because people assume a correlation where there is none. That is, enhanced software quality with respect to vulnerability discovery and disclosure does NOT increase security, at least in the short term. The objective is really a derivative of an interest in reducing the number of compromises.</p>
<p>So, how can disclosing a vulnerability (followed presumably by the availability of a patch) reduce security? Simple &#8211; although the opportunity exists for individuals to reduce their vulnerable state, so many people can&#8217;t or don&#8217;t do so that the increase in threat significantly multiplies the number of incidents that occur. That is, vulnerability disclosure completely ignores the threat component of the risk equation in the short run.</p>
<p>This short-run focus on vulnerabilities rather than threats might seem okay because we have much less control over that aspect of risk, but we have significant indirect influence. That is, the risk to unpatched (or otherwise unprotected) systems clearly goes way up because disclosure significantly reduces the &#8220;costs&#8221; to any attacker, and we know from history that incidents increase dramatically after disclosure. One quick aside: with the evolution of today&#8217;s technical architectures toward SaaS and other cloud-based applications, it is worth pointing out that these circumstances of increased risk do not apply to environments where one entity can guarantee that every instance of a software program has been properly patched.</p>
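<p>To make that short-run dynamic concrete, here is a minimal back-of-the-envelope sketch. Every number and variable name below is an illustrative assumption, not data from any study; the point is only that if disclosure raises the per-system probability of attack while a sizable fraction of systems never gets patched, the expected number of incidents can jump even though fewer systems remain vulnerable.</p>
<pre>
# Hypothetical back-of-the-envelope model -- every figure here is a made-up assumption
systems = 1_000_000          # installed base running the affected software
patch_rate = 0.40            # assumed fraction that patches promptly after disclosure

p_attack_before = 0.0001     # assumed per-system attack probability while the bug is undisclosed
p_attack_after = 0.0050      # assumed probability once the vulnerability (and exploit) is public

expected_incidents_before = systems * p_attack_before
expected_incidents_after = systems * (1 - patch_rate) * p_attack_after

print(expected_incidents_before)  # 100.0
print(expected_incidents_after)   # 3000.0 -- fewer vulnerable systems, far more incidents
</pre>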
<p>As for the long run, there are many more developers than there are bugfinders and every day we are creating many more vulnerabilities than we are finding. It does not appear that developers are creating fewer vulnerabilities as a result of the disclosure effort, nor does the world have fewer vulnerabilities. One approach to address this is to assert that we should significantly increase our bugfinding efforts&#8230; except that we have done that as well, with the introduction through the years of newer and better automated solutions. No, the real way to address these problems is to think outside the box for a solution &#8211; all manner of trusted computing and its derivatives, for example.</p>
<p>Perhaps the biggest failing of vulnerability disclosure is that we completely ignore the externalities in this situation &#8211; the billion or so users of these various products. This spiteful approach is often justified with a wolves-and-sheep kind of reasoning that is quickly brought to its knees by considering all the good people in our own networks of friends and neighbors who shouldn&#8217;t need to be software engineers just to surf the Internet. These users are frequently victims of the increased risk we are artificially creating in their environments, unbeknownst to them.</p>
<p>One thought exercise that might be interesting here is to try to imagine what would happen if all of a sudden nobody disclosed any vulnerabilities. In fact, nobody (at least none of the good guys) even looked for vulnerabilities. The typical response here is to suggest that software would get even shoddier and the bad guys would have their way with us and we would never even know about it. I would suggest to you that this is complete and utter rubbish.</p>
<p>My version of this thought exercise is that people would work harder to further the goals of trusted computing because the stakes were higher and more funds were available. They would develop better monitoring tools to catch even more undercover exploits than are already being caught. They would put even MORE pressure on software manufacturers when compromises were discovered. Even now, we discover and respond to &#8220;undercover exploits&#8221; more quickly than we do publicly disclosed vulnerabilities, and I think we can get even better at it. Make no mistake: given that we are only finding a small fraction of existing vulnerabilities, there is nothing keeping the bad guys from finding and exploiting unknown vulnerabilities today, so it isn&#8217;t like our current process is helping there.</p>
<p>I have the utmost respect for many bugfinders. I believe many of them have great intentions. But they are attempting to haphazardly run across the battlefield while the bad guys pick them off from sniper posts, infiltrate their ranks, or simply choose another, unoccupied battlefield. There is no chance of victory fighting the battle this way.</p>
<p>[Here is a list of previous posts, essays, and articles I have written about vulnerability disclosure. It is worth mentioning that though I stand by my facts and opinions, I am not always proud of the emotional pieces &#8211; I hold the utmost respect for a number of the folks I took shots at. I still disagree with their opinions, though <img src='http://spiresecurity.com/blog/wp-includes/images/smilies/icon_wink.gif' alt=';-)' class='wp-smiley' /> ]</p>
<p>10/11/04: <a href="http://searchsecurity.techtarget.com/tip/1,289483,sid14_gci1014528,00.html">The Folly of Vulnerability Seeking</a></p>
<p>11/13/04: <a href="http://spiresecurity.com/?p=759">The Folly of Vulnerability Seeking</a> (follow-up to my searchsecurity article of the same name)</p>
<div>1/15/05: <a href="http://spiresecurity.com/?p=684">Time to Defeat</a></div>
<p>4/1/05: <a href="http://spiresecurity.com/?p=611">The Dead Horse Lives</a></p>
<p>8/8/05: <a href="http://spiresecurity.com/?p=570">More, more, more (Vuln Research)</a></p>
<p>8/17/05: <a href="http://spiresecurity.com/?p=554">The Long-Term Impact of Vulnerability Research: Public Welfare</a></p>
<p>10/30/05: <a href="http://spiresecurity.com/?p=522">I&#8217;ll bite: Feel free not to be so helpful</a></p>
<div>11/2/05: <a href="http://www.computerworld.com/s/article/print/105869/Opinion_To_sue_is_human_to_err_denied?taxonomyName=Security&amp;taxonomyId=17">To sue is human, to err denied</a> (one of my favorite titles <img src='http://spiresecurity.com/blog/wp-includes/images/smilies/icon_wink.gif' alt=';-)' class='wp-smiley' /> )</div>
<p>11/7/05: <a href="http://spiresecurity.com/?p=513">A New Litmus Test for Security Companies</a></p>
<p>2/20/06: <a href="http://spiresecurity.com/?p=513">I waffle slightly (I think)</a></p>
<p>3/7/06: <a href="http://spiresecurity.com/?p=759">More Turtles!</a></p>
<p>3/24/06: <a href="http://spiresecurity.com/?p=449">Why Bugfinding is Irresponsible and Increases Risk</a></p>
<p>3/31/06: <a href="http://spiresecurity.com/?p=759">More on Bugfinding</a></p>
<p>8/3/06: <a href="http://spiresecurity.com/?p=400">How Microsoft Reduces Risk</a> (where I introduce a new risk equation)</p>
<p>9/6/06: <a href="http://spiresecurity.com/?p=386">It Ain&#8217;t Over &#8217;til it&#8217;s Over</a></p>
<p>9/6/06: <a href="http://spiresecurity.com/?p=385">Now it&#8217;s Over (For Now)</a></p>
<p>5/16/07: <a href="http://srmsblog.burtongroup.com/2007/05/more_sex_is_saf.html">More Sex is Safer Sex</a></p>
<p>9/24/08: <a href="http://spiresecurity.com/?p=127">On Vulnerability Rediscovery</a></p>
<p>7/13/09: <a href="http://spiresecurity.com/?p=82">Exploiting Undercover Vulnerabilities</a></p>
<p>2/25/09: <a href="http://spiresecurity.com/?p=82">The Disclosure Race Condition</a></p>
<p>3/4/09: The Other Side of Full Disclosure</p>
]]></content:encoded>
			<wfw:commentRss>http://spiresecurity.com/?feed=rss2&#038;p=1172</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
