<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: It Ain&#8217;t Over &#8217;til it&#8217;s Over</title>
	<atom:link href="http://spiresecurity.com/?feed=rss2&#038;p=386" rel="self" type="application/rss+xml" />
	<link>http://spiresecurity.com/?p=386</link>
	<description>Risk and Cybersecurity Analysis</description>
	<lastBuildDate>Wed, 21 Aug 2013 23:28:51 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.5.1</generator>
	<item>
		<title>By: Ray Lai</title>
		<link>http://spiresecurity.com/?p=386&#038;cpage=1#comment-592</link>
		<dc:creator>Ray Lai</dc:creator>
		<pubDate>Thu, 07 Sep 2006 20:24:46 +0000</pubDate>
		<guid isPermaLink="false">http://spiresecurity.com/blog/?p=386#comment-592</guid>
		<description><![CDATA[&gt; 3) bugfinding doesn&#039;t make software more secure

But it does: http://www.eecs.harvard.edu/~stuart/papers/usenix06.pdf
]]></description>
		<content:encoded><![CDATA[<p>> 3) bugfinding doesn&#8217;t make software more secure</p>
<p>But it does: <a href="http://www.eecs.harvard.edu/~stuart/papers/usenix06.pdf" rel="nofollow">http://www.eecs.harvard.edu/~stuart/papers/usenix06.pdf</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: PaulM</title>
		<link>http://spiresecurity.com/?p=386&#038;cpage=1#comment-591</link>
		<dc:creator>PaulM</dc:creator>
		<pubDate>Thu, 07 Sep 2006 18:51:16 +0000</pubDate>
		<guid isPermaLink="false">http://spiresecurity.com/blog/?p=386#comment-591</guid>
		<description><![CDATA[OK, I am starting to feel guilty posting these novellas up to your blog, Pete, but I am enjoying the discussion.

&gt; 1) Patches don&#039;t make risk go down unless every single vulnerable system everywhere in the world is patched.

Calculating risk for &quot;the whole world&quot; is a pointless endeavor.  However, calculating risk for, say, a client&#039;s network, is a legitimate exercise.

Case in point: SQL-Slammer.  Anybody with an IDS listening to unfiltered Internet traffic is still seeing this worm.  However, if they have patched and/or blocked SQL across their border points, then the risk is negligibly low.  Patches make risk go down.  Period.


&gt; 2) Detection only affects risk if you can prevent it.

Untrue.  You can&#039;t look at risk only in terms of whether or not a host has been compromised, but for how long, since business risk isn&#039;t simply whether or not you&#039;ve lost exclusive control of a machine on your network, but what information has been stolen or altered or what services have been disrupted.  Time to detect ~= time to respond ~= length of compromise.  The smaller those numbers are, the less risk the compromise represents to the business.

&gt; Since there is a finite number
&gt; of vulns on a system at any given time, an increase in the number of attackers who
&gt; know about it also increases risk.

Agreed, though since spam/phishing/worms are all highly-efficient means of delivering an exploit, I have to wonder how significant the increase is in some cases.  A handful of bot-herders with a local 0day is potentially more dangerous than 100K script kids with a published remote exploit.


&gt; 3) You assume that people will patch. They don&#039;t, and more attackers will know about
&gt; the bug. Increased risk. I don&#039;t disagree that software is being developed more securely;
&gt; I do disagree if anyone asserts this was the only way. Neither of us can prove/disprove each
&gt; other.

Funny.  That study you cited showed that nearly 40% of systems were patched within 30 days, prior to the presence of a worm that exploited that bug.


&gt; 4) I am saying that if there is no relationship between good bugfinders and bad
&gt; bugfinders, given the total number of vulns in the world, it is highly unlikely
&gt; that there will be many collisions. I use Ozment&#039;s paper as representative - you
&gt; are right that it leaves a lot to be desired in this context, but I think it is
&gt; also likely to be a best case scenario. You would need 100% overlap to succeed.

The problem is that this is very much the chicken and the egg, as you pointed out earlier.  We can correlate the overlap between researchers and criminals, but identifying whether or not disclosure causes Russian spamsploits or vice-versa is impossible.


&gt; 5) Are you suggesting that Windows 3.1 and XP (and every other flavor of Windows)
&gt; have the same exact code base?

Of course not.  But look at the WMF exploits from New Year&#039;s - pieces of code from the 3.0 days still live on in XP.  But my point was that Microsoft has learned from its experience with NT 4.0.  We all benefit from that, and it&#039;s not at all far-fetched to say that disclosure of vulnerabilities played a part in that.

Perhaps it&#039;s a little Ayn-Randian, but network security really is an objective and selfish exercise.  If disclosure helps me but hurts you because I read bugtraq while you read Penny Arcade, then maybe that&#039;s just the nature of things and maybe you&#039;ll &quot;get it&quot; eventually, probably after you experience some pain.

I believe that it is more important for an individual organization to be prepared to handle risks, even if that means that the availability of the information used by an organization to protect itself can also be used to the detriment of others.  The continuity of my sphere of influence and responsibility has to come first - that&#039;s what they pay me for.  They pay me to make sure that they&#039;re part of the 40% that aren&#039;t impacted by the next worm.


]]></description>
		<content:encoded><![CDATA[<p>OK, I am starting to feel guilty posting these novellas up to your blog, Pete, but I am enjoying the discussion.</p>
<p>> 1) Patches don&#8217;t make risk go down unless every single vulnerable system everywhere in the world is patched.</p>
<p>Calculating risk for &#8220;the whole world&#8221; is a pointless endeavor.  However, calculating risk for, say, a client&#8217;s network, is a legitimate exercise.</p>
<p>Case in point: SQL-Slammer.  Anybody with an IDS listening to unfiltered Internet traffic is still seeing this worm.  However, if they have patched and/or blocked SQL across their border points, then the risk is negligibly low.  Patches make risk go down.  Period.</p>
<p>> 2) Detection only affects risk if you can prevent it.</p>
<p>Untrue.  You can&#8217;t look at risk only in terms of whether or not a host has been compromised, but for how long, since business risk isn&#8217;t simply whether or not you&#8217;ve lost exclusive control of a machine on your network, but what information has been stolen or altered or what services have been disrupted.  Time to detect ~= time to respond ~= length of compromise.  The smaller those numbers are, the less risk the compromise represents to the business.</p>
<p>> Since there is a finite number<br />
> of vulns on a system at any given time, an increase in the number of attackers who<br />
> know about it also increases risk.</p>
<p>Agreed, though since spam/phishing/worms are all highly-efficient means of delivering an exploit, I have to wonder how significant the increase is in some cases.  A handful of bot-herders with a local 0day is potentially more dangerous than 100K script kids with a published remote exploit.</p>
<p>> 3) You assume that people will patch. They don&#8217;t, and more attackers will know about<br />
> the bug. Increased risk. I don&#8217;t disagree that software is being developed more securely;<br />
> I do disagree if anyone asserts this was the only way. Neither of us can prove/disprove each<br />
> other.</p>
<p>Funny.  That study you cited showed that nearly 40% of systems were patched within 30 days, prior to the presence of a worm that exploited that bug.</p>
<p>> 4) I am saying that if there is no relationship between good bugfinders and bad<br />
> bugfinders, given the total number of vulns in the world, it is highly unlikely<br />
> that there will be many collisions. I use Ozment&#8217;s paper as representative &#8211; you<br />
> are right that it leaves a lot to be desired in this context, but I think it is<br />
> also likely to be a best case scenario. You would need 100% overlap to succeed.</p>
<p>The problem is that this is very much the chicken and the egg, as you pointed out earlier.  We can correlate the overlap between researchers and criminals, but identifying whether or not disclosure causes Russian spamsploits or vice-versa is impossible.</p>
<p>> 5) Are you suggesting that Windows 3.1 and XP (and every other flavor of Windows)<br />
> have the same exact code base?</p>
<p>Of course not.  But look at the WMF exploits from New Year&#8217;s &#8211; pieces of code from the 3.0 days still live on in XP.  But my point was that Microsoft has learned from its experience with NT 4.0.  We all benefit from that, and it&#8217;s not at all far-fetched to say that disclosure of vulnerabilities played a part in that.</p>
<p>Perhaps it&#8217;s a little Ayn-Randian, but network security really is an objective and selfish exercise.  If disclosure helps me but hurts you because I read bugtraq while you read Penny Arcade, then maybe that&#8217;s just the nature of things and maybe you&#8217;ll &#8220;get it&#8221; eventually, probably after you experience some pain.</p>
<p>I believe that it is more important for an individual organization to be prepared to handle risks, even if that means that the availability of the information used by an organization to protect itself can also be used to the detriment of others.  The continuity of my sphere of influence and responsibility has to come first &#8211; that&#8217;s what they pay me for.  They pay me to make sure that they&#8217;re part of the 40% that aren&#8217;t impacted by the next worm.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Pete</title>
		<link>http://spiresecurity.com/?p=386&#038;cpage=1#comment-590</link>
		<dc:creator>Pete</dc:creator>
		<pubDate>Thu, 07 Sep 2006 14:52:06 +0000</pubDate>
		<guid isPermaLink="false">http://spiresecurity.com/blog/?p=386#comment-590</guid>
		<description><![CDATA[@andi - if 2000 trains are coming toward you at full speed, and somebody tells you about three of them, and you focus all of your efforts on those three, are you still dead? You DO close your eyes and you don&#039;t know it. (Btw, do you know how often &quot;common sense&quot; and &quot;conventional wisdom&quot; have been wrong in the history of mankind?)

@Paul -

&quot;Anecdotal&quot; quantification? Get real. For every instance of short-term &quot;good&quot; that&#039;s been forced upon the unsuspecting Internet user by bugfinders, there have been tens of thousands of &quot;bads&quot;.

Good points on my comments. I will elaborate:

1) Patches don&#039;t make risk go down unless every single vulnerable system everywhere in the world is patched.

2) Detection only affects risk if you can prevent it. Since there is a finite number of vulns on a system at any given time, an increase in the number of attackers who know about it also increases risk. If all systems everywhere get patched, you decrease risk. You simply choose to ignore all of the other risk associated with that target. It is not impossible to protect yourself without knowing about specific vulns.

3) You assume that people will patch. They don&#039;t, and more attackers will know about the bug. Increased risk. I don&#039;t disagree that software is being developed more securely; I do disagree if anyone asserts this was the only way. Neither of us can prove/disprove each other.

4) I am saying that if there is no relationship between good bugfinders and bad bugfinders, given the total number of vulns in the world, it is highly unlikely that there will be many collisions. I use Ozment&#039;s paper as representative - you are right that it leaves a lot to be desired in this context, but I think it is also likely to be a best case scenario. You would need 100% overlap to succeed.

5) Are you suggesting that Windows 3.1 and XP (and every other flavor of Windows) have the same exact code base?
]]></description>
		<content:encoded><![CDATA[<p>@andi &#8211; if 2000 trains are coming toward you at full speed, and somebody tells you about three of them, and you focus all of your efforts on those three, are you still dead? You DO close your eyes and you don&#8217;t know it. (Btw, do you know how often &#8220;common sense&#8221; and &#8220;conventional wisdom&#8221; have been wrong in the history of mankind?)</p>
<p>@Paul -</p>
<p>&#8220;Anecdotal&#8221; quantification? Get real. For every instance of short-term &#8220;good&#8221; that&#8217;s been forced upon the unsuspecting Internet user by bugfinders, there have been tens of thousands of &#8220;bads&#8221;.</p>
<p>Good points on my comments. I will elaborate:</p>
<p>1) Patches don&#8217;t make risk go down unless every single vulnerable system everywhere in the world is patched.</p>
<p>2) Detection only affects risk if you can prevent it. Since there is a finite number of vulns on a system at any given time, an increase in the number of attackers who know about it also increases risk. If all systems everywhere get patched, you decrease risk. You simply choose to ignore all of the other risk associated with that target. It is not impossible to protect yourself without knowing about specific vulns.</p>
<p>3) You assume that people will patch. They don&#8217;t, and more attackers will know about the bug. Increased risk. I don&#8217;t disagree that software is being developed more securely; I do disagree if anyone asserts this was the only way. Neither of us can prove/disprove each other.</p>
<p>4) I am saying that if there is no relationship between good bugfinders and bad bugfinders, given the total number of vulns in the world, it is highly unlikely that there will be many collisions. I use Ozment&#8217;s paper as representative &#8211; you are right that it leaves a lot to be desired in this context, but I think it is also likely to be a best case scenario. You would need 100% overlap to succeed.</p>
<p>5) Are you suggesting that Windows 3.1 and XP (and every other flavor of Windows) have the same exact code base?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: andi</title>
		<link>http://spiresecurity.com/?p=386&#038;cpage=1#comment-589</link>
		<dc:creator>andi</dc:creator>
		<pubDate>Thu, 07 Sep 2006 14:02:00 +0000</pubDate>
		<guid isPermaLink="false">http://spiresecurity.com/blog/?p=386#comment-589</guid>
		<description><![CDATA[Come on.. Should I close my eyes if a train is moving towards me at full speed? Would the danger be smaller? Common sense is all I need to support Ptacek&#039;s arguments.
]]></description>
		<content:encoded><![CDATA[<p>Come on.. Should I close my eyes if a train is moving towards me at full speed? Would the danger be smaller? Common sense is all I need to support Ptacek&#8217;s arguments.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: PaulM</title>
		<link>http://spiresecurity.com/?p=386&#038;cpage=1#comment-588</link>
		<dc:creator>PaulM</dc:creator>
		<pubDate>Wed, 06 Sep 2006 17:36:29 +0000</pubDate>
		<guid isPermaLink="false">http://spiresecurity.com/blog/?p=386#comment-588</guid>
		<description><![CDATA[&gt; The most interesting thing about Ptacek&#039;s points are that they are all opinions - there is no evidence at all to support his conclusions. He totally DIDN&#039;T say that!

All of the bullet points from Thomas&#039; post are quantifiable, at least anecdotally.  You can dismiss them as purely opinion, but you&#039;d be wrong.


&gt; Btw, I have the evidence on my side:
&gt; 1) risk warnings never go down with the latest disclosure of vulns;

No, they go down with the latest release of a patch.  That patch is likely the result of a found and disclosed vuln.

&gt; 2) any single infection of a disclosed vuln is evidence that risk went up;

OK, but if we&#039;re quantifying risk, is the risk to a target greater if a vuln is known to attacker and target and, if not mitigated, at least detected?  Or is the risk greater if the vuln is known only to the attacker and the target cannot mitigate or detect the attack and therefore cannot respond to it?  This one&#039;s just common sense.

&gt; 3) bugfinding doesn&#039;t make software more secure;

In an atomic instance (OpenSSL v0.9.6 vs. v0.9.7) there&#039;s a clear impact from patching the bug - a remotely exploitable buffer overflow is eliminated.  Over time, maybe it doesn&#039;t have a direct impact.

But you cannot discount the indirect impact that disclosure has on software.  Look at the evolution of NT 4.0 -&gt; 2003 Server.  Had it not been for some global incidents and a PR nightmare for Microsoft, 2003 Server might not have features like DEP or host firewalls that can be configured via Group Policy.  Had there been no research and disclosure, this process might have taken longer with different (likely worse, but that&#039;s my opinion) results.

&gt; 4) there may be about 7% overlap in bugfinding;

So is your conclusion that since, within the same community and industry, there is a 7% rediscovery rate of the same vulnerability, that there is only 7% overlap in the bugs found by infosec researchers working openly and those working privately (for the Russian Mob)?  I can&#039;t even begin to list all of the flaws in your assertion.  Clearly Ozment&#039;s study represents nothing of the kind.

&gt; 5) bugfinding may make software more secure after 7 years (i.e. about 2-4 years after its useful lifetime).

Yes, Microsoft Windows server software has run its course.  The technology is almost 15 years old.  Nobody uses it anymore.



]]></description>
		<content:encoded><![CDATA[<p>> The most interesting thing about Ptacek&#8217;s points are that they are all opinions &#8211; there is no evidence at all to support his conclusions. He totally DIDN&#8217;T say that!</p>
<p>All of the bullet points from Thomas&#8217; post are quantifiable, at least anecdotally.  You can dismiss them as purely opinion, but you&#8217;d be wrong.</p>
<p>> Btw, I have the evidence on my side:<br />
> 1) risk warnings never go down with the latest disclosure of vulns;</p>
<p>No, they go down with the latest release of a patch.  That patch is likely the result of a found and disclosed vuln.</p>
<p>> 2) any single infection of a disclosed vuln is evidence that risk went up;</p>
<p>OK, but if we&#8217;re quantifying risk, is the risk to a target greater if a vuln is known to attacker and target and, if not mitigated, at least detected?  Or is the risk greater if the vuln is known only to the attacker and the target cannot mitigate or detect the attack and therefore cannot respond to it?  This one&#8217;s just common sense.</p>
<p>> 3) bugfinding doesn&#8217;t make software more secure;</p>
<p>In an atomic instance (OpenSSL v0.9.6 vs. v0.9.7) there&#8217;s a clear impact from patching the bug &#8211; a remotely exploitable buffer overflow is eliminated.  Over time, maybe it doesn&#8217;t have a direct impact.</p>
<p>But you cannot discount the indirect impact that disclosure has on software.  Look at the evolution of NT 4.0 -> 2003 Server.  Had it not been for some global incidents and a PR nightmare for Microsoft, 2003 Server might not have features like DEP or host firewalls that can be configured via Group Policy.  Had there been no research and disclosure, this process might have taken longer with different (likely worse, but that&#8217;s my opinion) results.</p>
<p>> 4) there may be about 7% overlap in bugfinding;</p>
<p>So is your conclusion that since, within the same community and industry, there is a 7% rediscovery rate of the same vulnerability, that there is only 7% overlap in the bugs found by infosec researchers working openly and those working privately (for the Russian Mob)?  I can&#8217;t even begin to list all of the flaws in your assertion.  Clearly Ozment&#8217;s study represents nothing of the kind.</p>
<p>> 5) bugfinding may make software more secure after 7 years (i.e. about 2-4 years after its useful lifetime).</p>
<p>Yes, Microsoft Windows server software has run its course.  The technology is almost 15 years old.  Nobody uses it anymore.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Thomas H. Ptacek</title>
		<link>http://spiresecurity.com/?p=386&#038;cpage=1#comment-587</link>
		<dc:creator>Thomas H. Ptacek</dc:creator>
		<pubDate>Wed, 06 Sep 2006 05:25:38 +0000</pubDate>
		<guid isPermaLink="false">http://spiresecurity.com/blog/?p=386#comment-587</guid>
		<description><![CDATA[1. Never.

2. When vulnerability disclosure actually creates new vulnerabilities.

3. The fact that clientside vulnerabilities in IE are worth more money than serverside vulnerabilities in &lt;agent management system X&gt;.

4. We wait the minimal amount of time possible. Others wait even less time.

5. Nothing.

6. Windows XP.

7. Because their risk ratings are arbitrary and subjective.

8. Because from 1988 to 1995 there wasn&#039;t a single buffer overflow advisory, despite the fact that the Morris worm exploited one.

]]></description>
		<content:encoded><![CDATA[<p>1. Never.</p>
<p>2. When vulnerability disclosure actually creates new vulnerabilities.</p>
<p>3. The fact that clientside vulnerabilities in IE are worth more money than serverside vulnerabilities in &lt;agent management system X&gt;.</p>
<p>4. We wait the minimal amount of time possible. Others wait even less time.</p>
<p>5. Nothing.</p>
<p>6. Windows XP.</p>
<p>7. Because their risk ratings are arbitrary and subjective.</p>
<p>8. Because from 1988 to 1995 there wasn&#8217;t a single buffer overflow advisory, despite the fact that the Morris worm exploited one.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
