<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Spire Security Viewpoint &#187; Highlights</title>
	<atom:link href="http://spiresecurity.com/?cat=10&#038;feed=rss2" rel="self" type="application/rss+xml" />
	<link>http://spiresecurity.com</link>
	<description>Risk and Cybersecurity Analysis</description>
	<lastBuildDate>Fri, 14 Nov 2014 00:11:00 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.5.1</generator>
		<item>
		<title>Engineering vs. Economics in TechRisk: How &#8220;Stronger&#8221; Software can lead to Higher Risk</title>
		<link>http://spiresecurity.com/?p=1407</link>
		<comments>http://spiresecurity.com/?p=1407#comments</comments>
		<pubDate>Tue, 07 Jan 2014 16:10:28 +0000</pubDate>
		<dc:creator>Pete Lindstrom</dc:creator>
				<category><![CDATA[Economics and Risk]]></category>
		<category><![CDATA[Highlights]]></category>

		<guid isPermaLink="false">http://spiresecurity.com/?p=1407</guid>
		<description><![CDATA[It seems counterintuitive: how can making software &#8220;stronger&#8221; (as in reducing vulnerabilities) increase risk on the Internet (as in creating more incidents)? But it happens frequently. The trick to understanding this conundrum lies in thinking like&#8230;<p class="more-link-p"><a class="more-link" href="http://spiresecurity.com/?p=1407">Read more &#8594;</a></p>]]></description>
				<content:encoded><![CDATA[<p>It seems counterintuitive: how can making software &#8220;stronger&#8221; (as in reducing vulnerabilities) increase risk on the Internet (as in creating more incidents)? But it happens frequently. The trick to understanding this conundrum lies in thinking like an economist and not like an engineer.</p>
<p>Engineers are focused on quality, so when they hear about vulnerabilities in software, their immediate reaction is to want to fix them&#8230; all of them. Regardless of whose software it is. Regardless of where it&#8217;s deployed. In fact, some of them care so much that they go out seeking vulnerabilities simply to fix them. They are the type of people who are great at solving problems, but not at understanding the downstream implications of their actions.</p>
<p>Economists, on the other hand (get it?), look at cause and effect, actions and reactions, and, most importantly, outcomes. The root of the economic problem lies in the ultimate unwanted outcome &#8211; the breach. Economics-oriented security pros understand that everything we do is intended to thwart the breach. It is easy to lose track of unwanted outcomes in the face of compliance needs and operational activities, but even those activities are all intended to minimize damages from attacks and exploits.</p>
<p>The engineer correctly believes that fixing vulnerabilities creates high quality (&#8220;stronger&#8221;) software. If the program starts with 300 vulnerabilities and you fix one, that obviously leaves 299 &#8211; one less than when it started. More importantly, if an enterprise has 1,000 systems that all have that same vulnerability and they apply a patch to 500 of them, they have decreased their attack surface by 500 vulnerabilities. From both perspectives, the level of vulnerability is, in fact, reduced.</p>
<p><span style="letter-spacing: 0.05em; line-height: 1.6875;">But the economist knows that fewer vulnerabilities is not the ultimate objective. The ultimate objective is to reduce the likelihood of an incident.</span></p>
<p>The economist understands that there is a key missing ingredient in the engineer&#8217;s scenario &#8211; the intelligent adversary, aka the threat. And in the pursuit of higher-quality software, the vulnerability details usually get published, leading to lower attack costs for the adversary. Given the scalability of technology, this typically leads to more attackers connecting to more targets, albeit in a (somewhat) smaller population of targets.</p>
<p>That is the key observation for this discussion &#8211; a breach requires both an attacker (threat) and a target (vuln), which manifests itself in the form of a connection between source and destination. Even though the population of targets may be reduced (perhaps even significantly so), if the threat is sufficiently motivated, more connections can be made with the vulnerable targets. The only way to guarantee reduced risk is to bring one of the populations (most likely the vulnerable targets) to zero. History shows us this is not likely with commercial software in enterprises. Interestingly, the increasingly common scenario for cloud-based software (e.g. Software-as-a-Service) may be able to do just that.</p>
<p>And there you have it &#8211; given the need for both threats and vulnerabilities, the reduction in one doesn&#8217;t force a reduction overall. And if the other element is increased in the process, the marginal difference in each population must be evaluated to truly understand the impact. Historically, this has led to scenarios where the vulnerability is reduced while the risk is simultaneously increased.</p>
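<p><em>A toy calculation makes the interaction concrete (every number here is an illustrative assumption, not measured data): halving the vulnerable population while published details draw in more attackers can still multiply the expected number of incidents.</em></p>

```python
# Toy model of the threat/vulnerability interaction described above.
# All inputs are illustrative assumptions, not measured data.

def expected_incidents(attackers, vulnerable_fraction, attempts_per_attacker):
    """Expected breaches ~ attack attempts that land on a vulnerable target."""
    return attackers * attempts_per_attacker * vulnerable_fraction

# Before disclosure: few attackers know the flaw, every system is vulnerable.
before = expected_incidents(attackers=5, vulnerable_fraction=1.0,
                            attempts_per_attacker=10)

# After disclosure plus partial patching: half the systems are patched, but
# published details lower attack costs and draw in ten times the attackers.
after = expected_incidents(attackers=50, vulnerable_fraction=0.5,
                           attempts_per_attacker=10)

print(before, after)  # 50.0 250.0 -- vulnerability halved, incidents up 5x
```

<p><em>The marginal change in each population is what matters: the patching term shrank by half while the attacker term grew tenfold.</em></p>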
<p>For reference:</p>
<p><a href="http://srmsblog.burtongroup.com/2007/05/more_sex_is_saf.html">More Sex is Safer Sex…</a></p>
]]></content:encoded>
			<wfw:commentRss>http://spiresecurity.com/?feed=rss2&#038;p=1407</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Advanced Malware Protection Evaluation Criteria</title>
		<link>http://spiresecurity.com/?p=1401</link>
		<comments>http://spiresecurity.com/?p=1401#comments</comments>
		<pubDate>Thu, 24 Oct 2013 02:41:48 +0000</pubDate>
		<dc:creator>Pete Lindstrom</dc:creator>
				<category><![CDATA[AMP Firehose]]></category>
		<category><![CDATA[Highlights]]></category>
		<category><![CDATA[Random]]></category>

		<guid isPermaLink="false">http://spiresecurity.com/?p=1401</guid>
		<description><![CDATA[[Pete Lindstrom is VP of Research at Spire Security, LLC and host of the AMP Firehose 1-day Workshop (vendor bakeoff) coming up in Chicago on 10/29. Register at www.regonline.com/AMPFirehoseCHI.] I believe the folks at Gartner put a lot of research&#8230;<p class="more-link-p"><a class="more-link" href="http://spiresecurity.com/?p=1401">Read more &#8594;</a></p>]]></description>
				<content:encoded><![CDATA[<p>[<em>Pete Lindstrom is VP of Research at Spire Security, LLC and host of the AMP Firehose 1-day Workshop (vendor bakeoff) coming up in Chicago on 10/29. Register at</em> <a href="http://www.regonline.com/AMPFirehoseCHI">www.regonline.com/AMPFirehoseCHI</a>.]</p>
<p>I believe the folks at Gartner put a lot of research and effort into their Magic Quadrant analysis. That said, I can&#8217;t help but conclude that &#8220;vision&#8221; and &#8220;execution&#8221; don&#8217;t quite do it for me when it comes to identifying appropriate candidate solutions to address a problem. They just seem to be too much about marketing, which is very important to the companies but only ancillary to an enterprise&#8217;s needs. Sure, enterprises want a solution that will be viable for the long term, but beyond that, marketing strength is pretty insignificant.</p>
<p>To address this issue, I have put together a set of questions in 4+1 evaluation categories that I believe provide more insight into the important attributes of a solution. The first round of categories was introduced at AMP NYC a month ago. Here is my second revision. Opinions and advice are welcome.</p>
<p><strong>1. Company/Product Information:</strong> What level of confidence does the company information provide that the company and product will remain viable for your organization?</p>
<p>Consider:<br />
• What year was the company founded?<br />
• What is the background of the management team?<br />
• How many employees does the company have?<br />
• What is the funding status/source of finances?<br />
• What is the product name and version?<br />
• How many customers does the company have for the pertinent product?<br />
• What certifications and tests were done on the product?<br />
• What other 3rd party reviews, awards, or other supporting evidence exists about the product?<br />
• What is the pricing model for the solution?</p>
<p><strong>2. Functional Operation:</strong> What level of benefit does the functional operation of the product have?</p>
<p>Consider:<br />
• Primary operation &#8211; scan memory state, scan configuration/file system/network state, monitor/record system call activity, monitor/record network traffic, isolate memory, isolate system activity, isolate network communications.<br />
• Trigger action &#8211; detect &#8220;known good&#8221; execution, detect &#8220;known good&#8221; activity, detect &#8220;known bad&#8221; execution, detect &#8220;known bad&#8221; behavior, detect anomalous execution, detect anomalous behavior.<br />
• Response options &#8211; allow, deny execution, kill process, kill network connection, reroute network communication, log event, notify user, notify admin (alert), other.<br />
• Recovery options (post-infection) &#8211; Restore config to known good state, remove bad files/objects, identify similar issues across network, notify/update other control solutions.</p>
<p><strong>3. Architecture &amp; Administration:</strong> How well does the product&#8217;s architecture fit in with your organization&#8217;s existing security processes? How likely is it to provide benefits? What features does it have to support implementation and administration?</p>
<p>Consider:<br />
• Where/how are any product sensors or agents deployed throughout an enterprise (endpoint, network, cloud, other)? How are they protected?<br />
• Where/how does the product admin/management function work? How is it protected? (endpoint, network, cloud, other)<br />
• Where/how does the product log/data/storage function work? How is it protected? (endpoint, network, cloud, other)<br />
• How is information shared a) with the solution components; and b) with others?<br />
• How does the solution get installed/implemented in the environment?<br />
• How customizable is the configuration and interface?</p>
<p><strong>4. Technical Integration:</strong> How well does the solution integrate into the IT ecosystem? How easy will it be to implement and maintain?</p>
<p>Consider:<br />
• How does the solution integrate with other products from the same company?<br />
• How does the solution integrate with 3rd party security solutions?<br />
• How does the solution integrate into an IT architecture?<br />
• What are the prerequisites for user directories, management servers, etc?<br />
• What standards, communication protocols, platforms, languages, frameworks, etc. are supported?<br />
• How robust is the API for third party access?</p>
<p>The final category is actually a rollup of the other four, since the differentiators and value emerge from the specifics identified in the previous categories.</p>
<p><strong>Key Differentiators / Overall Value Proposition</strong><br />
When looking at the complete picture of the solution, how strong are the overall benefits derived from the individual evaluation categories?</p>
<p>I believe these evaluation categories more properly reflect the needs of the enterprise. What do you think?</p>
]]></content:encoded>
			<wfw:commentRss>http://spiresecurity.com/?feed=rss2&#038;p=1401</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Do Enterprises Need AMP? An &#8220;Advanced Malware Protection&#8221; Market Assessment</title>
		<link>http://spiresecurity.com/?p=1376</link>
		<comments>http://spiresecurity.com/?p=1376#comments</comments>
		<pubDate>Tue, 03 Sep 2013 14:58:28 +0000</pubDate>
		<dc:creator>Pete Lindstrom</dc:creator>
				<category><![CDATA[AMP Firehose]]></category>
		<category><![CDATA[Economics and Risk]]></category>
		<category><![CDATA[Highlights]]></category>
		<category><![CDATA[Incidents]]></category>
		<category><![CDATA[Threat Management]]></category>

		<guid isPermaLink="false">http://spiresecurity.com/?p=1376</guid>
		<description><![CDATA[Over the past few months I have been on an &#8220;advanced malware protection&#8221; (AMP) kick. I am fascinated by this topic because it ties together a set of market conditions that can be extremely challenging to navigate through, both for&#8230;<p class="more-link-p"><a class="more-link" href="http://spiresecurity.com/?p=1376">Read more &#8594;</a></p>]]></description>
				<content:encoded><![CDATA[<p>Over the past few months I have been on an &#8220;advanced malware protection&#8221; (AMP) kick. I am fascinated by this topic because it ties together a set of market conditions that can be extremely challenging to navigate through, both for security architects and solution providers:</p>
<ol>
<li><span style="line-height: 16px;"><strong>Need</strong>. I choose the word &#8220;need&#8221; with caution, since, as you will find out below, it does not necessarily mean there is &#8220;demand&#8221; for a better solution. However, I don&#8217;t think techrisk professionals can deny that the malware dropping attack vector is alive and well. It is highlighted as the key to the Aurora attacks that catalyzed the &#8220;advanced persistent threat&#8221; concern.</span></li>
<li><strong>Varied Solutions</strong>. There are a number of vendors that have cropped up through the years with solutions to address the malware problem, and the techniques vary significantly. Whitelisters only allow identified executables to run; sandboxes isolate malware and/or identify actions; and real-time forensics track system calls and/or configured state.</li>
<li><strong>Mature Market</strong>. Even with an identifiable need and newer interesting solutions, the most powerful security market in the world &#8211; antivirus (now rebranded as antimalware) &#8211; operates in pseudo-commodity mode and dominates in endpoint security.</li>
</ol>
<p>As an industry analyst, I have had the opportunity to interview over a dozen solution providers and even more enterprise security architects and executives on the state of antimalware in the enterprise. Here are a few of my conclusions:</p>
<ul>
<li>Companies are moderately satisfied (and perhaps complacent) with their existing antimalware solutions. They acknowledge that these solutions are not blocking all malware but believe that every solution in the category has similar problems and so are reluctant to switch.</li>
<li>The only factor that could affect existing signature-based antimalware is price &#8211; a lower-cost solution (which many agree is unlikely) could have a strong-enough value proposition. Notably, a few organizations are evaluating Microsoft&#8217;s free antimalware solution as one of these alternative options.</li>
<li>Organizations are looking to gain more benefit from their existing antimalware solutions. Many are still focused on signature-based functionality and are now looking at more advanced capabilities. In addition, organizations are considering and employing new capabilities like Microsoft&#8217;s EMET functionality.</li>
<li>For those times when malware gets through and infects a system, re-imaging is the standard approach, though some organizations are mildly reluctant to do it. Most of these malware infections are not classified as &#8220;incidents&#8221; per se &#8211; there is an ad hoc evaluation process to decide whether any infection should be escalated into being classified as an incident.</li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;">Organizations are looking at architectural changes and not product changes when it comes to endpoint client-side security. This means they are focusing on BYOD and/or VDI (or even dumb terminals) as options in their client security strategies.</span></li>
<li>Control over (physical) clients continues to relax, with certain &#8220;pockets&#8221; of exceptions (kiosks or manufacturing systems). For some, this was after a long period of control strengthening (e.g. finally taking away local administrative rights).</li>
</ul>
<p>As I mentioned at the start, the market dynamics fascinate me here. I don&#8217;t think there is a techrisk professional left who believes signature-based antimalware is &#8220;good enough,&#8221; and yet we see its dampening impact everywhere. At this stage, it has simply become the &#8220;checkbox compliant&#8221; easiest approach.</p>
<p>As someone extremely interested in cybersecurity economics I am encouraged by the attention being given to the bottom line &#8211; organizations should be very careful about cost-benefit in their security programs. While some of the organizations I interviewed had done a comprehensive analysis, it appeared to me that a number of organizations had not undergone a thorough review of their strategies.</p>
<p>I will be addressing these issues at my <a href="http://www.regonline.com/AMPFirehoseNYC">&#8220;Drinking from the AMP Firehose&#8221; workshop</a> in New York City in a couple of weeks. The workshop concept was driven by these ideas and aims to break through the logjam brought on by complacency and confusion. Regardless of the conclusions that individual organizations come to, I think the entire field will be better off for it.</p>
<p><em>Pete Lindstrom is Principal and VP of Research for Spire Security, LLC, a research and advisory firm. Learn more about Advanced Malware Protection by &#8220;Drinking from the Firehose&#8221; in New York City on 9/17/13. Details at <a href="http://www.regonline.com/AMPFirehoseNYC">www.regonline.com/AMPFirehoseNYC</a>.</em></p>
]]></content:encoded>
			<wfw:commentRss>http://spiresecurity.com/?feed=rss2&#038;p=1376</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Do you need &#8220;Advanced Malware Protection&#8221; from 0days and the APT? Key Economic Considerations</title>
		<link>http://spiresecurity.com/?p=1362</link>
		<comments>http://spiresecurity.com/?p=1362#comments</comments>
		<pubDate>Tue, 27 Aug 2013 21:49:32 +0000</pubDate>
		<dc:creator>Pete Lindstrom</dc:creator>
				<category><![CDATA[AMP Firehose]]></category>
		<category><![CDATA[Economics and Risk]]></category>
		<category><![CDATA[Highlights]]></category>

		<guid isPermaLink="false">http://spiresecurity.com/?p=1362</guid>
		<description><![CDATA[Events over the past few years have heightened attention on attackers with more serious intentions than script kiddies or casual hackers. The &#8220;advanced persistent threat&#8221; has been outed, first generally by Google and RSA, then much more explicitly by Mandiant.&#8230;<p class="more-link-p"><a class="more-link" href="http://spiresecurity.com/?p=1362">Read more &#8594;</a></p>]]></description>
				<content:encoded><![CDATA[<p><span style="letter-spacing: 0.05em; line-height: 1.6875;">Events over the past few years have heightened attention on attackers with more serious intentions than script kiddies or casual hackers. The &#8220;advanced persistent threat&#8221; has been outed, first generally by Google and RSA, then much more explicitly by Mandiant. The use of 0days in malware has been identified as a key element of the &#8220;kill chain&#8221; for attackers. Right or wrong, cybersecurity concerns are at an all-time high.</span></p>
<p>On the protection side of the equation, although antimalware solutions provide a basic (and compliant) level of protection, security professionals are well aware of the limitations of signature-based approaches. Solutions that have been around for a while, such as host intrusion prevention and whitelisting, have gained renewed interest. Other approaches like network or endpoint sandboxes for isolation and/or analysis or active forensics for near-real-time analytics are coming on strong.</p>
<p>The challenge is determining whether the additional cost is worth it, deciding whether a new solution will significantly reduce the problem, and identifying which type of solution(s) are best.</p>
<p>While it is easy for security professionals to claim they will spend &#8220;whatever it takes&#8221; to address technology-related risk, that assertion is easily deflated through extreme examples (millions? billions? trillions?). While the intentions are valiant (and I get the point), no organization has an unlimited supply of money to spend on security. Therefore, it is crucial to make good decisions about how and where to spend money.</p>
<p>Any business decision is accompanied by some sort of justification, and cybersecurity is no different. In security, we typically evaluate total cost of ownership of the solution and compare it to our notion of how much risk is reduced. At the very least, every purchasing decision is supported by a claim that the spending &#8220;is worth it.&#8221; At best, a more formal cost-benefit approach should be employed.</p>
<p>Evaluating the cost-benefit of an &#8220;advanced malware protection&#8221; solution can be extremely challenging. Dropping malware (in the form of viruses and worms) onto systems is one of the oldest methods of attacking and compromising computing environments. Because of this, all enterprises already have controls in place that attempt to protect against malware infection. In addition, there are a number of techniques that can be used to address the problem.</p>
<p>Regardless of the challenge, conducting an economic analysis of newer AMP solutions may lead to some surprising conclusions. Here are six key considerations for conducting your analysis.</p>
<p><strong>1. Ignore the &#8220;Advanced&#8221; Part of Advanced Malware Protection</strong></p>
<p>The first distinction you should make in reviewing your needs for &#8220;advanced&#8221; malware protection is that the &#8220;advanced&#8221; part is extremely nebulous &#8211; the bar keeps changing in defining exactly which techniques are advanced and which aren&#8217;t advanced. Accordingly, the first takeaway is &#8220;evaluate your AMP solutions in concert with all antimalware efforts in your organization.&#8221;</p>
<p>This should not be a radical thought.</p>
<p><strong>2. Cover All the Antimalware Bases</strong></p>
<p>Diving a bit deeper into costs, enterprises should consider the costs of all capabilities &#8211; the capital investments made on hardware and software, maintenance costs, and personnel costs. The vendor solution (capital investment) side of antimalware protection can include endpoint antimalware, email or gateway-based antimalware, intrusion detection (potentially), and secure web gateways. On the operational expense side, organizations should consider the personnel costs associated with identification, prevention, mitigation, response, and recovery activities associated with malware infections and incidents.</p>
<p><strong>3. Allocate Partial Costs of Broader Solutions</strong></p>
<p>Focusing on the costs associated with one type of threat &#8211; in this case, malware &#8211; can be challenging. Some solutions, like endpoint antimalware, focus directly on the problem while others provide varying levels of accompanying support. In my research, for example, secure web gateways were cited as a means for detecting malware infections that were undetected by endpoint antimalware solutions, but secure web gateways provide more capability than malware infection detection.</p>
<p>The key in the analysis is to allocate costs based on the proportional value provided by the broader solution. If 10% of the ongoing value of the solution comes from antimalware detection, then 10% of the future costs should be allocated to antimalware.</p>
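<p><em>As a minimal sketch of that proportional allocation (the products, dollar figures, and the 10% share below are hypothetical, not from any vendor or interview):</em></p>

```python
# Illustrative cost-allocation sketch. The product names, annual costs, and
# the 10% antimalware share are hypothetical assumptions for demonstration.

annual_costs = {
    "endpoint_antimalware": 120_000,   # fully dedicated to antimalware
    "secure_web_gateway":   200_000,   # broader product; partial credit only
}
antimalware_share = {
    "endpoint_antimalware": 1.00,  # 100% of its value is antimalware
    "secure_web_gateway":   0.10,  # assume 10% of its value is malware detection
}

# Allocate each product's cost in proportion to its antimalware value.
allocated = sum(cost * antimalware_share[name]
                for name, cost in annual_costs.items())
print(allocated)  # 140000.0 -- the antimalware-attributable annual spend
```

<p><em>The same proportional logic extends to any shared control: only the fraction of its value that addresses malware counts toward the antimalware cost base.</em></p>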
<p><strong>4. Ignore Sunk Costs</strong></p>
<p>The maturity level of antimalware makes it likely that capital investments to address the problem have already occurred. Any spending that occurred in the past should be excluded from the analysis, though any current and future operational expenses should be included. In contrast, a decision involving a future capital investment should include that amount allocated (either amortized or depreciated) over its lifetime as well as the operational costs.</p>
<p><strong>5. Factor in Employee Productivity</strong></p>
<p>Another economic issue to consider is the productivity of employees. The productivity costs associated with the impacted worker should be considered along with the costs associated with the IT triage person. If it takes four hours to recover an infected system, then four hours of the worker&#8217;s lost productivity should be included (nothing fancy here &#8211; use a single average number based on salary for all workers).</p>
<p><strong>6. Use a Breakeven Approach</strong></p>
<p>Perhaps a bigger challenge in justifying antimalware spending is in determining the amount of potential losses. That &#8220;it is worth it&#8221; decision means that the security professional spending $100,000 on a security solution believes the solution will offset at least $100,000 in risk.</p>
<p>While some cringe a bit at the realization that spending reveals the minimum expectation of risk reduction, it also provides an opportunity to conduct a standard financial breakeven analysis. Rather than attempting to figure out the exact amount of financial loss at stake, you need only consider the total amount being spent and determine whether it is less than the potential losses. So as long as you&#8217;ve properly accounted for all costs, an appropriate decision can be made.</p>
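<p><em>The considerations above can be sketched as a back-of-the-envelope calculation (all figures below are illustrative assumptions, not benchmarks): amortize any new capital spend, add operational and productivity costs, and compare the total against the losses you believe it offsets.</em></p>

```python
# Breakeven sketch following the considerations above.
# Every figure is an illustrative assumption; plug in your own numbers.

capex, lifetime_years = 150_000, 3   # new capital spend, amortized over life
annual_opex = 40_000                 # staff and maintenance
infections_per_year = 200
hours_lost_per_infection = 4         # worker plus IT triage time
avg_loaded_hourly_rate = 60          # single average rate for all workers

annual_solution_cost = capex / lifetime_years + annual_opex
annual_productivity_loss = (infections_per_year * hours_lost_per_infection
                            * avg_loaded_hourly_rate)

# The spend passes the breakeven test only if the risk it offsets (avoided
# losses, including recovered productivity) at least equals the annual cost.
breakeven_risk_reduction = annual_solution_cost
print(annual_solution_cost, annual_productivity_loss)  # 90000.0 48000
```

<p><em>If the avoided losses you can credibly claim fall below the annual solution cost, the spending fails the breakeven test.</em></p>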
<p>The AMP solution decision is not an easy one &#8211; with a handful of controls at different maturity levels already in the organization and a variety of newer solutions vying for attention. Some organizations may conclude they don&#8217;t need to augment their existing capabilities. Others will find out they really do. Regardless, enterprises should conduct the necessary analysis to make the best decision for their needs.</p>
<p><em>Pete Lindstrom is Principal and VP of Research for Spire Security, LLC, a research and advisory firm. Learn more about Advanced Malware Protection by &#8220;Drinking from the Firehose&#8221; in New York City on 9/17/13. Details at <a href="http://www.regonline.com/AMPFirehoseNYC">www.regonline.com/AMPFirehoseNYC</a>.</em></p>
]]></content:encoded>
			<wfw:commentRss>http://spiresecurity.com/?feed=rss2&#038;p=1362</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Who Do You Trust? Is it Time for a CyberSwitzerland?</title>
		<link>http://spiresecurity.com/?p=1345</link>
		<comments>http://spiresecurity.com/?p=1345#comments</comments>
		<pubDate>Wed, 12 Jun 2013 16:28:03 +0000</pubDate>
		<dc:creator>Pete Lindstrom</dc:creator>
				<category><![CDATA[Highlights]]></category>

		<guid isPermaLink="false">http://spiresecurity.com/?p=1345</guid>
		<description><![CDATA[A brief Twitter conversation with Phil Cox (@sec_prof) and Dave Piscitello (@securityskeptic) and the latest PRISM / NSA news got me thinking about trust. Phil suggested that the time is ripe for some sort of Internet &#8220;Switzerland&#8221; where a U.S.&#8230;<p class="more-link-p"><a class="more-link" href="http://spiresecurity.com/?p=1345">Read more &#8594;</a></p>]]></description>
				<content:encoded><![CDATA[<p>A brief Twitter conversation with Phil Cox (@sec_prof) and Dave Piscitello (@securityskeptic) and the latest PRISM / NSA news got me thinking about trust. Phil suggested that the time is ripe for some sort of Internet &#8220;Switzerland&#8221; where a U.S. Citizen could (presumably) store your data unfettered by FISA and the long-arm of the US legal system. He argued that &#8220;it&#8217;s been done with finances&#8221; and there is &#8220;no reason tech couldn&#8217;t do it&#8221; and further suggested that &#8220;an already &#8216;trusted&#8217; entity would need to do it.&#8221;</p>
<p>I am not so sure. (And, to be fair, I am not clear how strong Phil&#8217;s opinion is on this.)</p>
<p>The idea of some sort of &#8220;cyberSwitzerland&#8221; sounds like a direction we could head in, but we immediately run into questions of trust, oversight, and technical capability.</p>
<ul>
<li><span style="line-height: 16px;">Trust &#8211; The first step is to identify an entity (presumably in the cloud service-providing business) that we trust more than the U.S. Government (since they are the bad guys in this NSA spying scenario). This doesn&#8217;t seem particularly onerous &#8211; any of the big players might do &#8211; Amazon, Google, etc&#8230; Some privacy-supporting org like the Electronic Frontier Foundation might also consider getting into the business or endorsing some service. (Come to think of it, maybe we just need James Earl Jones or Martin Sheen to endorse a no-name. William Shatner, too <img src='http://spiresecurity.com/blog/wp-includes/images/smilies/icon_wink.gif' alt=';-)' class='wp-smiley' /> ). But the problem really comes with oversight.</span></li>
<li>Oversight &#8211; Any entity we decide to trust more than the U.S. Government would also have to be willing to snub the U.S. legal system. This is the problem area, because no large entity with U.S. operations would snub the U.S. legal system &#8211; heck, that&#8217;s probably part of the reason you trust them &#8211; they follow rules. So the linchpin problem is that any trustworthy entity also will (ultimately) obey the law and so we are right back where we started from. Like they say, there is no honor among thieves.</li>
<li>Technical Skill &#8211; The final nail in the coffin is that, even if you found an entity you trust who is willing to snub the U.S. legal system, they need to be able to protect you from the NSA. Any successful entity in this endeavor would obviously become a prime target for them. It is unclear whether this is possible in the long run, especially given the many ways to compromise systems. At the very least, it would be quite expensive.</li>
</ul>
<p>Ultimately, I believe anyone going through this analysis will come to the conclusion that a &#8220;CyberSwitzerland&#8221; cloud service provider is highly unlikely to be able to address the needs of those concerned enough to make a change (who aren&#8217;t breaking the law in some way). That is, for the average U.S. citizen, a &#8220;CyberSwitzerland&#8221; is not a way out.</p>
<p>There are ways, however, that could significantly help the average citizen concerned about privacy. The real answer here has got to be some form of obfuscation &#8211; at the very least encrypted data, perhaps augmented by more unique schemes of data dispersal and split-key techniques. And the super-paranoid might even throw in some &#8220;chaff&#8221; generation along the way to add noise to whatever analysis is putting you in the &#8216;results&#8217; list to begin with. Heck, you could even hire 20 people around the world to impersonate you and encrypt random data uploaded to random sites with a &#8220;firewall&#8221; between you and each of them (sort of joking here).</p>
]]></content:encoded>
			<wfw:commentRss>http://spiresecurity.com/?feed=rss2&#038;p=1345</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>The 7-day Itch: Ups and Downs of Google&#8217;s New Disclosure Policy</title>
		<link>http://spiresecurity.com/?p=1331</link>
		<comments>http://spiresecurity.com/?p=1331#comments</comments>
		<pubDate>Wed, 05 Jun 2013 14:13:51 +0000</pubDate>
		<dc:creator>Pete Lindstrom</dc:creator>
				<category><![CDATA[Economics and Risk]]></category>
		<category><![CDATA[Highlights]]></category>
		<category><![CDATA[Random]]></category>
		<category><![CDATA[Vulnerability Management]]></category>

		<guid isPermaLink="false">http://spiresecurity.com/?p=1331</guid>
		<description><![CDATA[Recently, members of the security team at Google made an important announcement about &#8220;real-world exploitation of publicly unknown vulnerabilities.&#8221; While it was done on the Google Online Security blog, all indications are that this is an official Google policy statement.&#8230;<p class="more-link-p"><a class="more-link" href="http://spiresecurity.com/?p=1331">Read more &#8594;</a></p>]]></description>
				<content:encoded><![CDATA[<p><span style="letter-spacing: 0.05em; line-height: 1.6875;">Recently, members of the security team at Google made an <a href="http://googleonlinesecurity.blogspot.com/2013/05/disclosure-timeline-for-vulnerabilities.html">important announcement</a> about &#8220;real-world exploitation of publicly unknown vulnerabilities.&#8221; While it was done on the Google Online Security blog, all indications are that this is an official Google policy statement. To wit, Google announced that &#8220;after 7 days have elapsed without a patch or advisory, we will support researchers making details available so that users can take steps to protect themselves.&#8221;</span></p>
<p><span style="letter-spacing: 0.05em; line-height: 1.6875;">This is an important announcement because it highlights the very real problem of &#8220;<a href="http://spiresecurity.com/?p=36">in-the-wild-exploits of undercover vulnerabilities</a>.&#8221; This strain of &#8220;0day&#8221; is the most significant given that active exploits are already happening when they are discovered. In these scenarios, the threats (malicious actors) and vulnerabilities have already collided in the real world and losses are being actively incurred. Thus, <strong>this type of situation is the most important type that technology risk (techrisk) managers must deal with in their environments.</strong></span></p>
<p>The announcement itself highlights some important, underappreciated aspects of the techrisk profession:</p>
<p>- That exploits/breaches/incidents are the fundamental &#8220;unwanted outcome&#8221; that we are trying to prevent. It is not uncommon for techrisk pros to focus efforts on software quality, control weaknesses, or compliance violations &#8211; all useful intentions to the extent that they address the aforementioned incidents.</p>
<p>- That techrisk professionals can identify attacks even when the vulnerability is unknown. Much of our profession&#8217;s focus revolves around the notion that we must find vulnerabilities in order to protect ourselves, yet time and again we succeed in identifying these types of attacks using behavioral analysis and other techniques. With the growth in popularity of forensic archiving, we can now also determine to what extent we have been victims in the past to assist with understanding the risks of the future.</p>
<ul>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;">That much of the profession&#8217;s effort associated with vulnerability management is ineffective. Our efforts to identify each vulnerability prior to exploit are simply overwhelmed by scale, as a quick thought exercise shows &#8211; consider how many vulnerabilities are created every day (in the aggregate) compared with how many are found. Perhaps more importantly, the vast majority of vulnerabilities that are found are never known to be actively exploited <a href="https://www.isecpartners.com/media/12955/eip-final.pdf">[pdf]</a>.</span></li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;">That there is variance in how different types of attacks &#8211; namely, targeted vs. opportunistic &#8211; manifest themselves online. Google&#8217;s primary cited reason for its new policy involves political activists as victims of targeted attacks that may lead to physical harm. The history of infosec and techrisk highlights other scenarios &#8211; the NIMDA worm, WMF exploit, WebDAV, etc. &#8211; that involve opportunistic exploits across a multitude of targets.</span></li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;">That the most significant way to &#8220;move the marker&#8221; in security is through the identification of exploits and not vulnerabilities. As with Code Red and Nimda in the Fall of 2001 leading to Bill Gates&#8217; well-known &#8220;<a href="http://www.microsoft.com/en-us/news/features/2012/jan12/GatesMemo.aspx">Trustworthy Computing Memo</a>,&#8221; active exploits are the best drivers of change in the techrisk profession.</span></li>
</ul>
<p><span style="letter-spacing: 0.05em; line-height: 1.6875;">While Google&#8217;s new policy offers an opportunity to assess the state of security on the Internet overall, it also demonstrates significant deficiencies in its approach:</span></p>
<ul>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;">The 7-day deadline has no risk basis. With the significant variance in number of affected parties and speed of compromise associated with opportunistic attacks versus targeted ones, the number is an arbitrary one. In the primary example cited (activists at risk of physical harm), speed is highly unlikely to have a significant impact on risk reduction.</span></li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;">The capabilities of enterprises and/or users to protect themselves can vary significantly. There are many reasons why some parties choose to remain vulnerable to certain types of attacks &#8211; system complexities, legacy support needs, lack of technical skill, competitive priorities, etc. Through the years some security researchers (including some employees of Google) have expressed disdain for those that cannot protect themselves. A company the size of Google should be held to a higher standard in its willingness to protect those online that can&#8217;t always protect themselves.</span></li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;"> </span><span style="letter-spacing: 0.05em; line-height: 1.6875;">No consideration of economics. The policy completely ignores tradeoffs like the risk of breaking systems when taking precautionary measures (e.g. patch failures), the well-known increase in exploits that occur after the disclosure of many new vulnerabilities [<a href="http://www.cs.umd.edu/~waa/pubs/Windows_of_Vulnerability.pdf">Arbaugh, McHugh, 2000 pdf</a>; <a href="http://users.ece.cmu.edu/~tdumitra/public_documents/bilge12_zero_day.pdf">Bilge, Dumitras 2013 pdf</a>], and the opportunity costs associated with new requirements. When Google says, for example, &#8220;each day an actively exploited vulnerability remains undisclosed to the public and unpatched, more computers will be compromised&#8221; they neglect the significant likelihood that computers will be compromised regardless of the state of disclosure to the public and fall back on the age-old myth that only patches can protect systems.</span></li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;"> </span><span style="letter-spacing: 0.05em; line-height: 1.6875;">It can lead to even more exploitations and incidents. Anyone paying close attention to the vulnerability research community knows that there is wide variance in how researchers disclose their information and some decisions are made based on annoyance, frustration, spite and sometimes even malice. If a vulnerability will get &#8220;noticed&#8221; more quickly, researchers may be tempted to &#8220;test&#8221; it in the wild in order to increase its priority level.</span></li>
</ul>
<p><span style="letter-spacing: 0.05em; line-height: 1.6875;">A company with the talent and resources at Google can do better. Here are some opportunities for improving the state of security on the Internet and addressing the real, significant risk associated with actively exploited 0days:</span></p>
<ul>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;">Encourage and train political activists in obfuscation and evasion techniques. It is difficult to evaluate a blanket policy across all scenarios when the debate centers on arguably the most serious one &#8211; that involving physical harm. This case seems highly unlikely to be a common one, and the best way to discuss the overall implications of the policy is to set the scenario aside, since it tends to provoke an emotional reaction. As many of us know, there are many ways political activists can protect themselves online that would be far more effective than a 7-day disclosure policy, which only kicks in after they have been compromised.</span></li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;"> </span><span style="letter-spacing: 0.05em; line-height: 1.6875;">Increase focus on actively exploited 0days. Since these are the most important scenarios the techrisk profession has to deal with, Google should be making every effort to identify these exploits and employ or invent new ways to protect against them. Google researchers still participate in random, ineffective vulnerability research that simply distracts from this very real problem.</span></li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;"> </span><span style="letter-spacing: 0.05em; line-height: 1.6875;">Provide more insight into the &#8220;dozens&#8221; of 0days identified &#8220;through the years&#8221; mentioned in the blog announcement. If there is one thing Google has, it is great data. As evidenced by past reports [<a href="http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/en/us/archive/provos-2008a.pdf">Provos, 2008 pdf</a>], Google could very easily provide more specific evidence on the number of 0days they have identified, the volume of exploits, and their disposition by vendors. The fact that they haven&#8217;t yet, especially in the face of this policy announcement, is disappointing and makes it difficult to evaluate the measure.</span></li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;"> </span><span style="letter-spacing: 0.05em; line-height: 1.6875;">Take a risk-based approach to disclosure. Fast-moving worms do most of their damage in hours and days &#8211; in those cases, seven days is too long. Targeted attacks are unlikely to get repeated in a way that demands immediate attention for most environments &#8211; in those cases, seven days is too short. A risk-based approach would take into account the frequency of exploit, probability of future exploit within a target population, and impact of the exploit while evaluating the changes to these variables over time &#8211; in particular before and after disclosure.</span></li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;"> </span><span style="letter-spacing: 0.05em; line-height: 1.6875;">Monitor the situation closely. Google&#8217;s unique ability to gather data in this regard is worth mentioning again as a function of its ability to assess its own policy. Collecting and publishing data on actual 0days throughout their exploit lifecycle would be a boon to the entire profession.</span></li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;"> </span><span style="letter-spacing: 0.05em; line-height: 1.6875;">Initiate or participate in discussions to create new ways to address this very real problem. Commercial, community, and government mechanisms already exist for sharing data publicly and privately that could be used as models for minimizing the risks associated with these types of attacks. For example, a (private) process similar to federal wiretap capabilities in secrecy and opportunity may be more effective in addressing targeted attacks. There are countless other approaches that could be leveraged to address this problem.</span></li>
</ul>
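<p>To make the risk-based point concrete, here is a toy expected-loss sketch. Every number in it is a hypothetical placeholder &#8211; the point is only that the same seven days buys very different amounts of risk in the two attack modes:</p>

```python
# Toy model: additional compromises while a 0day stays undisclosed, for an
# opportunistic worm vs. a narrowly targeted attack. All rates hypothetical.

def expected_compromises(rate_per_day: float, days: float) -> float:
    """Linear approximation: exposed hosts compromised at a constant rate."""
    return rate_per_day * days

worm_rate = 50_000       # hosts/day for a fast opportunistic exploit (made up)
targeted_rate = 0.5      # victims/day for a targeted campaign (made up)

for label, rate in [("opportunistic", worm_rate), ("targeted", targeted_rate)]:
    print(f"{label}: ~{expected_compromises(rate, 7):,.1f} compromises in 7 days")
```

<p>With these invented rates, seven days is catastrophic for the opportunistic case and nearly irrelevant for the targeted one &#8211; which is exactly why a single fixed deadline has no risk basis.</p>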
<p>Make no mistake, the Google 7-day policy announcement sheds light on a <strong>real and significant issue</strong> in technology-related risk. While it highlights some of the challenges techrisk professionals face on a daily basis, it also demonstrates significant deficiencies in its approach to addressing the problem. This is a great opportunity to evaluate the existing state of the Internet from a risk and security perspective, determine where inconsistencies or weaknesses lie, and map out a risk-based program that has the highest likelihood of success.</p>
]]></content:encoded>
			<wfw:commentRss>http://spiresecurity.com/?feed=rss2&#038;p=1331</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Cognitive Dissonance or Spite?</title>
		<link>http://spiresecurity.com/?p=1302</link>
		<comments>http://spiresecurity.com/?p=1302#comments</comments>
		<pubDate>Mon, 11 Feb 2013 16:56:28 +0000</pubDate>
		<dc:creator>Pete Lindstrom</dc:creator>
				<category><![CDATA[Economics and Risk]]></category>
		<category><![CDATA[Highlights]]></category>
		<category><![CDATA[Random]]></category>
		<category><![CDATA[Vulnerability Management]]></category>

		<guid isPermaLink="false">http://spiresecurity.com/?p=1302</guid>
		<description><![CDATA[I happened to see a tweet the other day that said: &#8220;If you want a bug fixed quickly, sell it on the Russian black market. It&#8217;ll be so heavily abused that the vendor will patch out of cycle.&#8221; Now, it&#8230;<p class="more-link-p"><a class="more-link" href="http://spiresecurity.com/?p=1302">Read more &#8594;</a></p>]]></description>
				<content:encoded><![CDATA[<p>I happened to see a tweet the other day that said:</p>
<blockquote><p><em>&#8220;If you want a bug fixed quickly, sell it on the Russian black market. It&#8217;ll be so heavily abused that the vendor will patch out of cycle.&#8221;</em></p></blockquote>
<p>Now, it could be that the joke&#8217;s on me and the 126 people who retweeted this message (a large number for security tweets) were in on it. Or perhaps none of them realizes how ludicrous it is. In the infosec/techrisk field this kind of thinking is not unheard of, so I will treat it as if it is legitimate.</p>
<p>The tweet highlights just how biased people can be when they get caught up in a notion without understanding the implications. Apparently, this tweeter wants bugs fixed quickly. At first blush this seems like a simple enough concern, shared by many. But peel back one small layer and the statement usually amounts to &#8220;I want bugs that I know about (or worse, that I discovered) fixed quickly after my discovery.&#8221; It becomes easier to see how certainty bias and the focusing illusion come into play.</p>
<p>There is plenty of evidence that the bug in question is unlikely to be the only one that remains unfixed &#8211; any number of bugs are in various stages of discovery and disclosure at all times. If we assume that the average bug takes 120 days from discovery (or at least vendor notification) to patch release, and vendors generally release patches on a monthly cycle, then your systems carry roughly four months&#8217; worth of (typically undisclosed) vulnerabilities that remain unpatched.</p>
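<p>A back-of-the-envelope sketch of that backlog argument (the 120-day latency is the assumption above; the discovery rate is purely hypothetical):</p>

```python
# Toy steady-state model of the known-but-unpatched backlog.
# Both inputs are assumptions: the 120-day latency from the paragraph above,
# and a purely hypothetical discovery/notification rate.

DAYS_TO_PATCH = 120        # assumed notification-to-patch latency
REPORTS_PER_DAY = 5        # hypothetical rate of new bugs reported to vendors

# Little's law: items in the system = arrival rate x time in the system.
backlog = REPORTS_PER_DAY * DAYS_TO_PATCH
patch_cycles_exposed = DAYS_TO_PATCH / 30      # monthly release cycle

print(f"~{backlog} bugs pending at any moment, "
      f"each exposed for ~{patch_cycles_exposed:.0f} monthly patch cycles")
```

<p>Whatever rate you plug in, the backlog never empties &#8211; prioritizing one bug just reshuffles the queue.</p>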
<p>Now, you might assert that this makes the point &#8211; of course we want them patched &#8220;quickly.&#8221; But that completely ignores the tradeoffs. If your patch is prioritized, that means another one must be de-prioritized. I suppose you could say that security developers aren&#8217;t operating at capacity and therefore can absorb the workload for both bugs, but that seems farfetched to me and doesn&#8217;t scale in any case.</p>
<p>Of course, the worst part of the tweet is the part that purposely increases risk by increasing the threat of compromise. No need for a soapbox/high horse here to recognize that purposely inflating risk to get attention in spite of how detrimental it is to Internet users is certainly unprofessional and really kind of pathetic.</p>
<p>Too often, folks get caught up in some perceived solution to a problem and neglect the bigger picture. Many times, the bugfinder is sincerely concerned. But it is important to understand the cost/benefit and risk dynamics involved if you really want to positively affect Internet risk.</p>
]]></content:encoded>
			<wfw:commentRss>http://spiresecurity.com/?feed=rss2&#038;p=1302</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>How Much did Amazon Lose in Yesterday&#8217;s Outage?</title>
		<link>http://spiresecurity.com/?p=1294</link>
		<comments>http://spiresecurity.com/?p=1294#comments</comments>
		<pubDate>Fri, 01 Feb 2013 14:55:18 +0000</pubDate>
		<dc:creator>Pete Lindstrom</dc:creator>
				<category><![CDATA[Economics and Risk]]></category>
		<category><![CDATA[Highlights]]></category>
		<category><![CDATA[Incidents]]></category>
		<category><![CDATA[Random]]></category>

		<guid isPermaLink="false">http://spiresecurity.com/?p=1294</guid>
		<description><![CDATA[One of the crucial aspects of risk management for infosec pros to learn is how to estimate consequences. It can be helpful to review incidents and create a model for thinking about losses. Amazon&#8217;s outage for an hour yesterday, is&#8230;<p class="more-link-p"><a class="more-link" href="http://spiresecurity.com/?p=1294">Read more &#8594;</a></p>]]></description>
				<content:encoded><![CDATA[<p>One of the crucial aspects of risk management for infosec pros to learn is how to estimate consequences. It can be helpful to review incidents and create a model for thinking about losses. Amazon&#8217;s hour-long outage yesterday is a good, simple example for us to play with &#8211; this is exactly the example I used to use when teaching my security metrics class because it is so clean. Or is it?</p>
<p>When estimating losses, it isn&#8217;t entirely unreasonable to do the high-level straight-line math like <a href="http://www.itworld.com/cloud-computing/339609/amazoncom-suffers-outage-nearly-5m-down-drain">IT World did here</a>:</p>
<blockquote><p><em>&#8220;Amazon.com&#8217;s latest earnings report showed that the company makes about $10.8 billion per quarter, or about $118 million per day and $4.9 million per hour.&#8221;</em></p></blockquote>
<p>It&#8217;s really quick and dirty &#8211; and in a general sense legitimate &#8211; but can we do better? There are other ways to look at this that might shed some light on impact assessment. First, the assessment above makes no mention of costs. That might be the biggest weakness since costs are more under the control of Amazon and (probably) don&#8217;t fluctuate as much as revenue.</p>
<p>Luckily for us, Amazon just released its quarterly earnings report and <a href="http://articles.marketwatch.com/2013-01-29/commentary/36613307_1_margins-tom-szkutak-fourth-quarter">this report</a> asserts that its operating margin is about 3%. So right off the bat, we could suggest that Amazon lost 97% of $5 million or $4.85 million in costs. A more conservative estimate might try to determine whether the costs were unrecoverable or not, etc. Hopefully, you get the idea. A cost-oriented approach also works well as an example in infosec since that is often a big piece of the losses we face.</p>
<p>It is important to note here that these costs are additive to the lost revenue estimate &#8211; not only did we lose the $4.85 million in operating costs, but also (presumably) we lost that initial $4.9 million in revenue, for a total of (let&#8217;s say) $10 million.</p>
<p>Now, let&#8217;s look again at that lost revenue estimate. As mentioned earlier, coarse numbers like those used in the calculation above are certainly justifiable, but we can probably do better. A quick thought exercise can help here &#8211; by imagining the experience of an &#8220;average customer&#8221; of Amazon&#8217;s, we can better assess the impact of the outage. This is harder than it sounds because we&#8217;ll have to second-guess our own biases, but let&#8217;s try anyway. Let&#8217;s call him Joe.</p>
<p>Given that the outage was simply a &#8220;denial-of-service&#8221; of sorts, the big variable we must evaluate is time. More specifically for our scenario, we need to answer the question &#8220;How timely does Joe&#8217;s interaction with Amazon need to be, or, how likely is Joe to wait an hour to complete his purchase?&#8221; At the very least, we know Joe is willing to wait two days (maybe more &#8211; not sure what the average delivery time is for Amazon) to receive whatever goods he purchases. Throw in what we might assume (my bias) about Amazon&#8217;s low prices and the corresponding brand loyalty that comes with it and it seems reasonable to conclude that Joe will wait an hour to make the purchase, and therefore the lost revenue is actually only deferred revenue to be recognized in the future.</p>
<p>But not everyone is average (usually nobody is), and so once we cover a generic case, it is useful to consider the impact of the outliers. Now, we can imagine scenarios where even though a customer can wait for delivery, she can&#8217;t wait to place the order &#8211; too many other things going on in life. Or even a case where the customer would actually lose a full day due to delivery cutoff times. These are the types of cases that warrant more attention. Certainly it is reasonable to factor these cases into a loss scenario. Let&#8217;s say this is true 10% of the time.</p>
<p>The goal here is to be conservative in our estimates (even though it is sometimes beneficial for companies to be liberal after the fact &#8211; it can hide other problems), so we should remember that these scenarios are typically useful in identifying some sort of discount factor to apply to the initial $5 million estimate. Though it is possible to come up with scenarios where there is a multiplier &#8211; maybe holiday seasons &#8211; it is less common.</p>
<p>Our lost revenue evaluation has led us to conclude that 90% of purchases will still be made in the future, so the remaining 10% of cases will discount our $5 million loss down to $500,000. Add that to our lost costs and we are back to the initial $5 million estimate, though from a different perspective. While it might be attractive to decide all was for nought, it is worth considering the situations where the costs are much lower, or the revenue is more likely to be lost to see the value in the exercise.</p>
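<p>Pulling the whole estimate together in one place &#8211; the figures are the ones used above, and the only thing added is the cost-plus-discounted-revenue structure:</p>

```python
# Rough loss model for the one-hour outage, using the figures from the post.
quarterly_revenue = 10.8e9                      # from Amazon's earnings report
hours_per_quarter = 91 * 24
revenue_per_hour = quarterly_revenue / hours_per_quarter   # ~$4.9M, matching IT World

operating_margin = 0.03                         # from the quarterly report
lost_costs = revenue_per_hour * (1 - operating_margin)     # ~$4.85M incurred regardless

deferred_fraction = 0.90                        # thought-exercise estimate: Joe waits
lost_revenue = revenue_per_hour * (1 - deferred_fraction)  # ~$0.5M truly lost

total_loss = lost_costs + lost_revenue          # back to roughly $5M
print(f"~${total_loss / 1e6:.1f}M total estimated loss")
```

<p>The value of writing it out this way is that each input &#8211; margin, deferral rate &#8211; can be challenged and revised independently instead of hiding inside one straight-line number.</p>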
<p>Now, <a href="http://erratasec.blogspot.com/2013/02/risk-analysis-v-downtime.html">some might suggest</a> (essentially) that the above analysis is really not worth it because a loss is a loss. Not only that, but Amazon&#8217;s own numbers have shown (?) that there is no discernible uptick in sales in the period following the outage. As mentioned earlier, it is easier to see how costs are fairly static and therefore turn into losses. On the revenue side, however, it is not clear at all.</p>
<p>In assessing lost revenue in this case, one must do two things: first distinguish between necessity and convenience and second evaluate the impact of buyer&#8217;s capacity. The purported lack of a noticeable uptick in sales in the short term could easily be explained if purchases are more oriented around convenience than necessity. Measures associated with shopping carts might be of assistance here (I sometimes leave items in my shopping cart for days if not weeks). Again, this information can be factored into the estimates if need be.</p>
<p>It is uncommon to consider a &#8220;buyer&#8217;s capacity&#8221; but especially with convenience purchases, one might decide that the rate of purchase is a determining factor and even though the shopper returns, she will be buying other items, etc. This justification is easier to believe in cases where capacity is high &#8211; that is, the shopper is buying at a rate where fitting in the &#8220;lost&#8221; purchases is unlikely (and when it happens is noticeable in the numbers). My assessment is that this scenario is unlikely; people are more casual in their shopping experience and will therefore wait to make their purchases. (A similar capacity limit could have an effect on the Amazon side, but that is even more farfetched).</p>
<p>My conclusion is that $5 million is a reasonable loss estimate for Amazon&#8217;s outage, but not for the reasons initially believed.</p>
]]></content:encoded>
			<wfw:commentRss>http://spiresecurity.com/?feed=rss2&#038;p=1294</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>How the Cost of Interventions provides Insight into Security Decisionmaking</title>
		<link>http://spiresecurity.com/?p=1286</link>
		<comments>http://spiresecurity.com/?p=1286#comments</comments>
		<pubDate>Thu, 31 Jan 2013 15:55:33 +0000</pubDate>
		<dc:creator>Pete Lindstrom</dc:creator>
				<category><![CDATA[Economics and Risk]]></category>
		<category><![CDATA[Highlights]]></category>
		<category><![CDATA[Metrics]]></category>
		<category><![CDATA[Random]]></category>

		<guid isPermaLink="false">http://spiresecurity.com/?p=1286</guid>
		<description><![CDATA[In 1994, Tengs et al. published the research paper &#8220;Five-Hundred Life-Saving Interventions and Their Cost-Effectiveness.&#8221; (pdf) The research reviewed 587 different interventions and calculated the &#8220;cost per life-year saved&#8221; as a normalized metric across over 200 different studies on economic costs. So,&#8230;<p class="more-link-p"><a class="more-link" href="http://spiresecurity.com/?p=1286">Read more &#8594;</a></p>]]></description>
				<content:encoded><![CDATA[<p>In 1994, Tengs et al. published the research paper <a href="http://www.ce.cmu.edu/~hsm/bca2005/lnotes/500-interventions.pdf">&#8220;Five-Hundred Life-Saving Interventions and Their Cost-Effectiveness&#8221;</a> (pdf). The research reviewed 587 different interventions and calculated the &#8220;cost per life-year saved&#8221; as a normalized metric across more than 200 different studies on economic costs.</p>
<p>So, for example, using available data they calculated that automatic fire extinguishers in airplane lavatory trash receptacles cost $16,000 per life year saved. (This was in 1993 &#8211; maybe smoking was still allowed then?)</p>
<p>Interestingly, these costs ranged from &#8220;those that save more resources than they consume to those costing more than 10 billion dollars per year of life saved.&#8221; The median cost per life year saved was $42,000. The paper also breaks down amounts by type of intervention, prevention stage, and even provides some data on proposed govt regulations by regulatory agency (FAA median $23,000; EPA median $7,600,000).</p>
<p>As a quick aside, the existence of this data helps one understand that even in circumstances where &#8220;success means nothing happened&#8221; (in this case, death didn&#8217;t happen), there is still plenty of opportunity to assess the benefit of a particular intervention.</p>
<p>These types of &#8220;revealed preference&#8221; study results can be eye-opening to those that suggest we should spend &#8220;whatever it takes&#8221; to address some particular concern. In looking at the large variance in costs, perhaps that isn&#8217;t the best course of action. It is nice to think we have unlimited resources, but at some point they run out. When they do, not only does that impact overall effectiveness, but opportunity costs come into play.</p>
<p>What does this mean for cybersecurity? Though it is no longer fair to say there is no data available to our profession, it certainly is difficult to leverage the data coming out in ways that are helpful to an organization. However, we can start thinking in terms of estimates and measures that make sense. In particular, we can evaluate and compare the costs of various controls against each other and factor in some notion of anticipated risk reduction.</p>
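<p>One way to borrow the paper&#8217;s normalization directly is &#8220;cost per expected incident averted.&#8221; A sketch &#8211; every control name, cost, and risk-reduction figure below is invented purely for illustration:</p>

```python
# Normalize security controls the way Tengs et al. normalized interventions:
# cost per unit of risk reduced, here "cost per expected incident averted".
# All numbers are hypothetical placeholders, not measurements.

controls = {
    # name: (annual cost in dollars, expected incidents averted per year)
    "patch automation": (250_000, 5.0),
    "EDR agent":        (400_000, 4.0),
    "user training":    (100_000, 0.5),
}

# Rank by cost-effectiveness, cheapest risk reduction first.
ranked = sorted(controls.items(), key=lambda kv: kv[1][0] / kv[1][1])

for name, (cost, averted) in ranked:
    print(f"{name}: ${cost / averted:,.0f} per incident averted")
```

<p>As in the Tengs data, the interesting output is the spread: with these made-up inputs the ratios span 4x, and real control portfolios likely span far more.</p>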
<p>We can learn a lot from studies like these.</p>
]]></content:encoded>
			<wfw:commentRss>http://spiresecurity.com/?feed=rss2&#038;p=1286</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>How Red Meat can make Cybersecurity Healthier</title>
		<link>http://spiresecurity.com/?p=1272</link>
		<comments>http://spiresecurity.com/?p=1272#comments</comments>
		<pubDate>Mon, 26 Mar 2012 14:16:47 +0000</pubDate>
		<dc:creator>Pete Lindstrom</dc:creator>
				<category><![CDATA[Economics and Risk]]></category>
		<category><![CDATA[Highlights]]></category>
		<category><![CDATA[Metrics]]></category>
		<category><![CDATA[Random]]></category>

		<guid isPermaLink="false">http://spiresecurity.com/?p=1272</guid>
		<description><![CDATA[Recently, the L.A. Times and other places wrote about a study done by Dr. Walter Willett of Harvard et al. regarding the impact of red meat on one&#8217;s mortality. He found that eating as little as one extra serving of red&#8230;<p class="more-link-p"><a class="more-link" href="http://spiresecurity.com/?p=1272">Read more &#8594;</a></p>]]></description>
				<content:encoded><![CDATA[<p>Recently, the <a href="http://articles.latimes.com/2012/mar/24/health/la-he-five-questions-walter-willett-20120324">L.A. Times</a> and other places wrote about a <a href="http://archinte.ama-assn.org/cgi/content/full/archinternmed.2011.2287">study done</a> by Dr. Walter Willett of Harvard et al. regarding the impact of red meat on one&#8217;s mortality. He found that eating as little as one extra serving of red meat a week contributed to a 13% or 20% increased risk of death. More specifically, they found that</p>
<p style="padding-left: 30px;">&#8220;After multivariate adjustment for major lifestyle and dietary risk factors, the pooled hazard ratio (HR) (95% CI) of total mortality for a 1-serving-per-day increase was 1.13 (1.07-1.20) for unprocessed red meat and 1.20 (1.15-1.24) for processed red meat.&#8221;</p>
<p>As with many studies about diet, lifestyle, and death, this one has sparked discussion. The Numbers Guy from the Wall Street Journal, Carl Bialik, wrote <a href="http://online.wsj.com/article/SB10001424052702304636404577297802304647434.html">two</a> <a href="http://blogs.wsj.com/numbersguy/the-risk-numbers-1128/">articles</a> on the study itself and on the difference between absolute and relative risk numbers, which often creates confusion and annoyance. Those articles led me to Dr. David Spiegelhalter&#8217;s <a href="http://understandinguncertainty.org/what-does-13-increased-risk-death-mean">fuller treatment</a>, on the always excellent Understanding Uncertainty blog, of exactly what a 13% increased risk of death actually means (dying about a year younger, in case you are wondering). It also discusses correlation/causation caveats and the practical application of the numbers.</p>
<p>All this discussion is interesting and should be useful for any IT risk professional interested in quantitative treatments of risk. But these details are not the reason I am writing this. As I was reviewing the information, it struck me just how difficult this is in the physical world. This quote from Dr. Willett in the L.A. Times article really highlights the problem:</p>
<p style="padding-left: 30px;">&#8220;In principle, the ideal study would take 100,000 people and randomly assign some to eating several servings of red meat a day and randomize the others to not consume red meat and then follow them for several decades. But that study, even with any amount of money, in many instances is simply not possible to do.&#8221;</p>
<p>What struck me was not only how hard this is, but also the rigor of the results in the face of the described obstacles. And, even more importantly, how much easier this would be for IT risk professionals in the virtual world.</p>
<p>In the virtual world, we actually <em>could</em> design and conduct a study that controlled for almost every variable to quantify risk. We could, for example, deploy 10,000 or 100,000 virtual machine clients around the Internet that were all configured exactly alike with the exception of some specified difference &#8211; patched vs. non-patched, different anti-malware solutions and/or signature updates, open vs. closed ports, other configuration changes, etc. About the hardest part would be determining how/where to deploy the VMs and coming up with a &#8220;honeymonkey&#8221; algorithm to mimic user activity.</p>
<p>Perhaps the biggest challenge would be recognizing and characterizing the intelligent adversary contribution to the variance in the numbers &#8211; the popularity of vulnerabilities, exploit techniques, 0days, etc. And that would be the good stuff, as well.</p>
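<p>In software terms, the study described above is just a randomized experiment over machine images. A sketch of the assignment step (the configuration factor, its levels, and the population size are hypothetical placeholders):</p>

```python
import random

# Randomly assign otherwise-identical VM images to one varied factor, as in
# the patched-vs-unpatched study described above. Factor values are made up.

def assign_arms(n_vms: int, factor: str, levels: list, seed: int = 0) -> list:
    """Return a reproducible random arm assignment for n identical VMs."""
    rng = random.Random(seed)
    return [{"vm_id": i, factor: rng.choice(levels)} for i in range(n_vms)]

arms = assign_arms(10_000, "patch_level", ["fully_patched", "30_days_behind"])

# With everything else held constant, the difference in time-to-compromise
# between arms estimates the causal effect of that single factor.
counts = {}
for a in arms:
    counts[a["patch_level"]] = counts.get(a["patch_level"], 0) + 1
print(counts)
```

<p>Randomization is what buys the causal claim Dr. Willett can&#8217;t get &#8211; and here the &#8220;subjects&#8221; cost pennies per hour.</p>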
<p>Conducting an experiment like this seems so easy to me that I wonder if somebody is already doing it. I am pretty sure some group (ISC?) used to do some sort of &#8220;time-to-compromise&#8221; metric for unpatched systems. And I suspect there may be others. Does anyone know of experiments/studies being done similar to this? If so, I&#8217;d love to hear about them. If not, why not?</p>
]]></content:encoded>
			<wfw:commentRss>http://spiresecurity.com/?feed=rss2&#038;p=1272</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
