<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: If I were a Czar*</title>
	<atom:link href="http://spiresecurity.com/?feed=rss2&#038;p=24" rel="self" type="application/rss+xml" />
	<link>http://spiresecurity.com/?p=24</link>
	<description>Risk and Cybersecurity Analysis</description>
	<lastBuildDate>Wed, 21 Aug 2013 23:28:51 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.5.1</generator>
	<item>
		<title>By: Pete</title>
		<link>http://spiresecurity.com/?p=24&#038;cpage=1#comment-20</link>
		<dc:creator>Pete</dc:creator>
		<pubDate>Thu, 20 Aug 2009 23:14:42 +0000</pubDate>
		<guid isPermaLink="false">http://spiresecurity.com/blog/?p=24#comment-20</guid>
		<description><![CDATA[@Adam -

Part of the point is that the attacks will be &quot;random&quot; or perhaps follow some other controlled distribution. We don&#039;t need to know whether some configuration could be compromised by some specific attack - we can test that easily. What we need to know is how likely it is in the wild - taking into account volume of benign activity, time, perhaps strategic placement at various points throughout the &#039;Net, etc.

So, if we have two groups of 5, 50, 500, 5000 systems strategically placed and they have very specific configurations to test for some single variable or set of variables (e.g. aggregated patches), then we can see how long they last under those circumstances. And we can do it over and over, continuously measuring the circumstances.

This is really just a more formalized way of doing what (I think) the Internet Storm Center has been doing by measuring the time to compromise or whatever it is.

We would certainly need to think through the client vs. server and active vs. passive activity of the groups (e.g. honeymonkeys vs. honeypots).

The use case that goes through my mind the most is for enterprises considering an update (think about your patch timing paper), trying to decide when to patch by measuring the threat component of risk. The idea isn&#039;t about possibility; it is about probability.

Pete
]]></description>
		<content:encoded><![CDATA[<p>@Adam -</p>
<p>Part of the point is that the attacks will be &#8220;random&#8221; or perhaps follow some other controlled distribution. We don&#8217;t need to know whether some configuration could be compromised by some specific attack &#8211; we can test that easily. What we need to know is how likely it is in the wild &#8211; taking into account volume of benign activity, time, perhaps strategic placement at various points throughout the &#8216;Net, etc.</p>
<p>So, if we have two groups of 5, 50, 500, 5000 systems strategically placed and they have very specific configurations to test for some single variable or set of variables (e.g. aggregated patches), then we can see how long they last under those circumstances. And we can do it over and over, continuously measuring the circumstances.</p>
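<p>As a minimal sketch of the group comparison described above &#8211; assuming each deployed system reports a time-to-compromise, and using entirely hypothetical numbers and names (<code>control</code>, <code>test</code>, <code>median_survival</code>) &#8211; the single-variable experiment might be analyzed like this:</p>

```python
# Hypothetical sketch: compare time-to-compromise between a control
# group and a test group that differs in exactly one variable
# (e.g. an aggregated patch applied or not). All figures are made up.
import statistics

# Days each honeypot survived before first compromise.
control = [3.1, 5.2, 2.4, 4.8, 3.9]    # baseline configuration
test = [9.7, 12.3, 8.1, 11.0, 10.4]    # same systems, patch applied

def median_survival(times):
    """Median time-to-compromise for one group of systems."""
    return statistics.median(times)

# The gap between the two medians is the measured effect of the
# single variable under real in-the-wild attack conditions.
print(median_survival(control))  # baseline group
print(median_survival(test))     # patched group
```

Repeating the deployment over time would turn these point estimates into a continuous measurement of the threat environment, which is the "over and over" part of the proposal.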
<p>This is really just a more formalized way of doing what (I think) the Internet Storm Center has been doing by measuring the time to compromise or whatever it is.</p>
<p>We would certainly need to think through the client vs. server and active vs. passive activity of the groups (e.g. honeymonkeys vs. honeypots).</p>
<p>The use case that goes through my mind the most is for enterprises considering an update (think about your patch timing paper), trying to decide when to patch by measuring the threat component of risk. The idea isn&#8217;t about possibility; it is about probability.</p>
<p>Pete</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Adam</title>
		<link>http://spiresecurity.com/?p=24&#038;cpage=1#comment-19</link>
		<dc:creator>Adam</dc:creator>
		<pubDate>Thu, 20 Aug 2009 21:31:45 +0000</pubDate>
		<guid isPermaLink="false">http://spiresecurity.com/blog/?p=24#comment-19</guid>
		<description><![CDATA[I&#039;m still a little confused.  If the system is on the internet, how do you ensure that A &amp; B both get hit by the same attacks?

BTW, since my VP (Scott Charney) is on the touted shortlists, I&#039;m not sure if I can endorse anyone.  He might take it amiss if I endorse him, and likely would if I endorse someone else.
]]></description>
		<content:encoded><![CDATA[<p>I&#8217;m still a little confused.  If the system is on the internet, how do you ensure that A &#038; B both get hit by the same attacks?</p>
<p>BTW, since my VP (Scott Charney) is on the touted shortlists, I&#8217;m not sure if I can endorse anyone.  He might take it amiss if I endorse him, and likely would if I endorse someone else.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Pete</title>
		<link>http://spiresecurity.com/?p=24&#038;cpage=1#comment-18</link>
		<dc:creator>Pete</dc:creator>
		<pubDate>Thu, 20 Aug 2009 20:17:47 +0000</pubDate>
		<guid isPermaLink="false">http://spiresecurity.com/blog/?p=24#comment-18</guid>
		<description><![CDATA[@Adam -

The point of #2 is to deploy systems live on the &#039;Net but allow for control groups and test groups. Since with computers you can manipulate single variables (e.g. config settings), you could evaluate the impact of changing configurations using real data - evidence and outcomes.

A smart man once said the following:

&quot;Another way to put this is if you want to improve something, you have to start by measuring it. Let’s start measuring security outcomes, so we can start assessing the processes, errors or hostile acts that lead to those outcomes.&quot;

That is the goal of the lab - to create experiments where we can measure outcomes. (Perhaps not exactly what that smart man intended, but it fits well, IMO).

Btw, am I hired? ;-)

Pete
]]></description>
		<content:encoded><![CDATA[<p>@Adam -</p>
<p>The point of #2 is to deploy systems live on the &#8216;Net but allow for control groups and test groups. Since with computers you can manipulate single variables (e.g. config settings), you could evaluate the impact of changing configurations using real data &#8211; evidence and outcomes.</p>
<p>A smart man once said the following:</p>
<p>&#8220;Another way to put this is if you want to improve something, you have to start by measuring it. Let’s start measuring security outcomes, so we can start assessing the processes, errors or hostile acts that lead to those outcomes.&#8221;</p>
<p>That is the goal of the lab &#8211; to create experiments where we can measure outcomes. (Perhaps not exactly what that smart man intended, but it fits well, IMO).</p>
<p>Btw, am I hired? <img src='http://spiresecurity.com/blog/wp-includes/images/smilies/icon_wink.gif' alt=';-)' class='wp-smiley' /> </p>
<p>Pete</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Adam</title>
		<link>http://spiresecurity.com/?p=24&#038;cpage=1#comment-17</link>
		<dc:creator>Adam</dc:creator>
		<pubDate>Thu, 20 Aug 2009 17:39:09 +0000</pubDate>
		<guid isPermaLink="false">http://spiresecurity.com/blog/?p=24#comment-17</guid>
		<description><![CDATA[Thanks for picking this up!

On #2, how do you ensure that it does better than common criteria labs?
]]></description>
		<content:encoded><![CDATA[<p>Thanks for picking this up!</p>
<p>On #2, how do you ensure that it does better than common criteria labs?</p>
]]></content:encoded>
	</item>
</channel>
</rss>
