<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Spire Security Viewpoint &#187; Economics and Risk</title>
	<atom:link href="http://spiresecurity.com/?cat=16&#038;feed=rss2" rel="self" type="application/rss+xml" />
	<link>http://spiresecurity.com</link>
	<description>Risk and Cybersecurity Analysis</description>
	<lastBuildDate>Fri, 14 Nov 2014 00:11:00 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.5.1</generator>
		<item>
		<title>Engineering vs. Economics in TechRisk: How &#8220;Stronger&#8221; Software can lead to Higher Risk</title>
		<link>http://spiresecurity.com/?p=1407</link>
		<comments>http://spiresecurity.com/?p=1407#comments</comments>
		<pubDate>Tue, 07 Jan 2014 16:10:28 +0000</pubDate>
		<dc:creator>Pete Lindstrom</dc:creator>
				<category><![CDATA[Economics and Risk]]></category>
		<category><![CDATA[Highlights]]></category>

		<guid isPermaLink="false">http://spiresecurity.com/?p=1407</guid>
		<description><![CDATA[It seems counterintuitive: how can it be that making software &#8220;stronger&#8221; (as in reducing vulnerabilities) can increase risk on the Internet (as in creating more incidents)? But it happens frequently. The trick to understanding this conundrum lies in thinking like&#8230;<p class="more-link-p"><a class="more-link" href="http://spiresecurity.com/?p=1407">Read more &#8594;</a></p>]]></description>
				<content:encoded><![CDATA[<p>It seems counterintuitive: how can it be that making software &#8220;stronger&#8221; (as in reducing vulnerabilities) can increase risk on the Internet (as in creating more incidents)? But it happens frequently. The trick to understanding this conundrum lies in thinking like an economist and not like an engineer.</p>
<p>Engineers are focused on quality, so when they hear about vulnerabilities in software, their immediate reaction is to want to fix them&#8230; all of them. Regardless of whose software it is. Regardless of where it&#8217;s deployed. In fact, some of them care so much that they go out seeking vulnerabilities simply to fix them. They are the type of people who are great at solving problems, but not at understanding the downstream implications of their actions.</p>
<p>Economists, on the other hand (get it?), look at cause and effect, actions and reactions, and, most importantly, outcomes. The root of the economic problem lies in the ultimate unwanted outcome &#8211; the breach. Economics-oriented security pros understand that everything we do is intended to thwart the breach. It is easy to lose track of unwanted outcomes in the face of compliance needs and operational activities, but even those activities are all intended to minimize damages from attacks and exploits.</p>
<p>The engineer correctly believes that fixing vulnerabilities creates high quality (&#8220;stronger&#8221;) software. If the program starts with 300 vulnerabilities and you fix one, that obviously leaves 299 &#8211; one less than when it started. More importantly, if an enterprise has 1,000 systems that all have that same vulnerability and they apply a patch to 500 of them, they have decreased their attack surface by 500 vulnerabilities. From both perspectives, the level of vulnerability is, in fact, reduced.</p>
<p><span style="letter-spacing: 0.05em; line-height: 1.6875;">But the economist knows that fewer vulnerabilities is not the ultimate objective. The ultimate objective is to reduce the likelihood of an incident.</span></p>
<p>The economist understands that there is a key missing ingredient in the engineer&#8217;s scenario &#8211; the intelligent adversary, aka the threat. And in the pursuit of higher-quality software, the vulnerability details usually get published, leading to lower attack costs for the adversary. Given the scalability of technology, this typically leads to more attackers connecting to more targets, albeit in a (somewhat) smaller population of targets.</p>
<p>That is the key observation for this discussion &#8211; a breach requires both an attacker (threat) and a target (vuln), which manifests itself in the form of a connection between source and destination. Even though the population of targets may be reduced (perhaps even significantly so), if the threat is sufficiently motivated, more connections can be made with the vulnerable targets. The only way to guarantee reduced risk is to bring one of the populations (most likely the vulnerable targets) to zero. History shows us this is not likely with commercial software in enterprises. Interestingly, the increasingly common scenario for cloud-based software (e.g. Software-as-a-Service) may be able to do just that.</p>
<p>And there you have it &#8211; given the need for both threats and vulnerabilities, the reduction in one doesn&#8217;t force a reduction overall. And if the other element is increased in the process, the marginal difference in each population must be evaluated to truly understand the impact. Historically, this has led to scenarios where the vulnerability is reduced while the risk is simultaneously increased.</p>
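<p>A rough back-of-the-envelope sketch makes the marginal point concrete (all numbers here are hypothetical and chosen only for illustration): model expected incidents as attack attempts multiplied by the chance that an attempt lands on a still-vulnerable system.</p>
<pre><code># Hypothetical illustration: patching shrinks the vulnerable population,
# but public disclosure lowers attacker costs and raises attack volume.
# Expected incidents = attempts * P(attempt hits a vulnerable system).

def expected_incidents(attempts, vulnerable_systems, total_systems):
    """Assumes attempts land on systems uniformly at random."""
    return attempts * (vulnerable_systems / total_systems)

total = 1000            # systems in the enterprise
before = expected_incidents(attempts=200, vulnerable_systems=1000, total_systems=total)
after  = expected_incidents(attempts=800, vulnerable_systems=500,  total_systems=total)

print(f"expected incidents before disclosure: {before:.0f}")   # 200
print(f"expected incidents after disclosure:  {after:.0f}")    # 400
# Vulnerability went down (1000 -> 500) while risk went up (200 -> 400),
# because the threat population grew faster than the target population shrank.
</code></pre>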
<p>For reference:</p>
<p><a href="http://srmsblog.burtongroup.com/2007/05/more_sex_is_saf.html">More Sex is Safer Sex…</a></p>
]]></content:encoded>
			<wfw:commentRss>http://spiresecurity.com/?feed=rss2&#038;p=1407</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>AMP: Determining the value of whitelists, sandboxes, isolation, and active forensics</title>
		<link>http://spiresecurity.com/?p=1393</link>
		<comments>http://spiresecurity.com/?p=1393#comments</comments>
		<pubDate>Wed, 11 Sep 2013 13:02:50 +0000</pubDate>
		<dc:creator>Pete Lindstrom</dc:creator>
				<category><![CDATA[AMP Firehose]]></category>
		<category><![CDATA[Economics and Risk]]></category>
		<category><![CDATA[Threat Management]]></category>

		<guid isPermaLink="false">http://spiresecurity.com/?p=1393</guid>
		<description><![CDATA[The most challenging thing about evaluating anti-malware solutions is the variety of architectures that can be employed to address the problem. Let&#8217;s look at three product categories and see how they might provide value to an organization: 1. Application Control&#8230;<p class="more-link-p"><a class="more-link" href="http://spiresecurity.com/?p=1393">Read more &#8594;</a></p>]]></description>
				<content:encoded><![CDATA[<p>The most challenging thing about evaluating anti-malware solutions is the variety of architectures that can be employed to address the problem. Let&#8217;s look at three product categories and see how they might provide value to an organization:</p>
<p>1. <strong>Application Control / Whitelisting Solutions.</strong> Whitelisting solutions change the security approach from one that allows software to install/run unless otherwise specified on a &#8220;blacklist&#8221; (&#8220;default allow&#8221;) to one that requires explicit permissions on a &#8220;whitelist&#8221; for software to be executed (&#8220;default deny&#8221;).</p>
<p>Clearly, the goal of whitelisting is to reduce the number of malware infections by preventing unidentified software from running, thus saving the aforementioned recovery costs. Given the common predisposition for organizations to consider infections separately from incidents, whitelisting solutions also are intended to reduce the likelihood of a bigger incident.</p>
<p>The tradeoff for whitelisting solutions is determining whether costs associated with false positives &#8211; legitimate software that is kept from running &#8211; will offset these additional benefits. Generally speaking, the more dynamic and decentralized an organization is, the larger the problem. Nowadays, whitelisting solutions have varying ways to deal with this known issue.</p>
<p>2. <strong>Sandboxes and Virtual Machines.</strong> Perhaps the most varied set of solutions addressing malware these days are the sandboxes and virtual machines. Some sandboxes &#8211; primarily on the network &#8211; are designed simply to provide an out-of-band (and sometimes near-real-time) environment to execute suspicious software and determine whether it is malware. As with whitelisting, the goal is to identify more malware more quickly, thereby reducing costs.</p>
<p>Other solutions &#8211; focused on the endpoint &#8211; actually isolate the production operating environment to reduce recovery costs by reducing the downtime associated with re-imaging a system, and/or reduce the impact by containing malware in an environment separate from other production resources.</p>
<p>There are some tradeoffs in the sandbox/virtual arena depending on the architecture. Network solutions may not see as much traffic in highly mobile environments. Endpoint solutions have performance considerations and/or architectural dependencies to consider.</p>
<p>3. <strong>Active Forensics.</strong> Recently, a number of solutions have arisen to offer a near-real-time approach to forensics. By recording system calls and/or scanning system state looking for anomalies, their goal is to identify malware infections within shorter time periods than existing methods can.</p>
<p>Active forensics solutions look to reduce the costs of recovery by providing detailed information on changes that were made by malware so a responder can recover more quickly. In addition, the solutions provide comprehensive information so that recovery may be possible without re-imaging. In environments where users can install their own software, this could significantly reduce end-user productivity losses associated with recovery techniques. In addition, active forensics attempt to reduce the time-to-discovery such that further exploit and escalation chances are reduced.</p>
<p>The tradeoff with active forensics is determining whether the detailed information is enough to ensure completeness of recovery so that recovery without re-imaging is a possibility. On the risk side, enterprises must determine whether the new insight provided will lead to a fast enough response time to offset the cost of the solution.</p>
<p>Each of these product categories (as well as others) has a value proposition that may provide benefits to organizations looking to augment their antimalware protection programs. The key is for companies to understand exactly what benefits they provide and decide for themselves which particular type of solution, if any, is likely to have the largest benefit.</p>
<p><em>Pete Lindstrom is Principal and VP of Research for Spire Security, LLC, a research and advisory firm. Learn more about Advanced Malware Protection by “Drinking from the Firehose” in New York City on 9/17/13. Details at <a href="http://www.regonline.com/AMPFirehoseNYC">www.regonline.com/AMPFirehoseNYC</a>. Complimentary access for those who qualify. Contact petelind@spiresecurity.com for details.</em></p>
<p>&nbsp;</p>
]]></content:encoded>
			<wfw:commentRss>http://spiresecurity.com/?feed=rss2&#038;p=1393</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Cost-Benefit Analysis for Anti-Malware Protection (AMP)</title>
		<link>http://spiresecurity.com/?p=1383</link>
		<comments>http://spiresecurity.com/?p=1383#comments</comments>
		<pubDate>Mon, 09 Sep 2013 16:54:49 +0000</pubDate>
		<dc:creator>Pete Lindstrom</dc:creator>
				<category><![CDATA[AMP Firehose]]></category>
		<category><![CDATA[Economics and Risk]]></category>

		<guid isPermaLink="false">http://spiresecurity.com/?p=1383</guid>
		<description><![CDATA[I recently wrote about key economic considerations for AMP. With those in mind, it is time to evaluate your existing anti-malware program and determine whether you should consider augmenting or otherwise addressing it. The first stage of this process is&#8230;<p class="more-link-p"><a class="more-link" href="http://spiresecurity.com/?p=1383">Read more &#8594;</a></p>]]></description>
				<content:encoded><![CDATA[<p>I recently wrote about key economic considerations for AMP. With those in mind, it is time to evaluate your existing anti-malware program and determine whether you should consider augmenting or otherwise addressing it.</p>
<p>The first stage of this process is to understand the costs and benefits of your existing program. This is a 4-step process:</p>
<ol>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;">Determine the probability and (economic) impact of being compromised by malware. This is the overall risk an organization is trying to address with anti-malware solutions.</span></li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;">Collect the total cost of ownership of *all* of your anti-malware solutions.</span></li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;">Estimate the amount of risk reduced by the current anti-malware solutions in the IT environment.</span></li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;">Compare the amount of risk reduced in step 3 to the total cost of ownership in step 2. A simple comparison should show higher benefits than costs (if not, you are doing it wrong). More advanced comparisons (division!) can provide your &#8220;risk reduced per unit cost&#8221; for your current anti-malware program.</span></li>
</ol>
<p>These four steps are notionally simple but TechRisk professionals will recognize that any non-trivial environment will have challenges developing some of the estimates. It may be worth perusing my website or contacting me directly to discuss some useful ways to do this.</p>
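<p>As a minimal, purely hypothetical sketch of the step-4 arithmetic (none of the figures below are benchmarks):</p>
<pre><code># Hypothetical figures to illustrate the four-step comparison above.
annual_loss_expectancy = 0.20 * 2_000_000   # step 1: 20% chance of a $2m malware incident
tco_antimalware        = 250_000            # step 2: total cost of ownership of current controls
risk_reduced           = 0.75 * annual_loss_expectancy   # step 3: estimate of risk the controls remove

# Step 4: simple comparison, then "risk reduced per unit cost".
net_benefit = risk_reduced - tco_antimalware
risk_reduced_per_dollar = risk_reduced / tco_antimalware

print(f"risk reduced:            ${risk_reduced:,.0f}")            # $300,000
print(f"net benefit:             ${net_benefit:,.0f}")             # $50,000
print(f"risk reduced per dollar:  {risk_reduced_per_dollar:.2f}")  # 1.20
</code></pre>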
<p>Once the assessment of the current situation is complete, it is time to review both the TCO information and the remaining (or &#8220;residual&#8221;) risk to determine if it is worthwhile to modify the program in any way. Given that there are existing costs to work with, a new solution may provide opportunities to reduce the existing TCO along with the ultimate objective to reduce the residual risk.<br />
To evaluate the value of new solutions, it is beneficial to delve a little deeper into costs and the amount of risk reduced. This is especially true since many solutions shift costs from operating expenses to a capital investment.</p>
<p>Taking a page out of the &#8220;activity-based costing&#8221; book can help an organization evaluate its cost structure more effectively. To do this, an organization should allocate its costs to a set of identified anti-malware activities. A new solution may help lower these costs by, for example, reducing the number of infections that must be cleaned over time.</p>
<p>On the risk side, new solutions may reduce the likelihood of an infection or incident by identifying more malware prior to infection. They also may provide a means to reduce the impact of an infection by lowering the response and recovery costs or addressing some other aspect of loss.</p>
<p>Understanding risk and costs is a crucial aspect of managing a security program. In addition to recognizing the value provided by an existing anti-malware program, performing this analysis may highlight areas of inefficiency or weakness.</p>
<p><em>Pete Lindstrom is Principal and VP of Research for Spire Security, LLC, a research and advisory firm. Learn more about Advanced Malware Protection by “Drinking from the Firehose” in New York City on 9/17/13. Details at <a href="http://www.regonline.com/AMPFirehoseNYC">www.regonline.com/AMPFirehoseNYC</a>. Complimentary access for those who qualify. Contact petelind@spiresecurity.com for details.</em></p>
]]></content:encoded>
			<wfw:commentRss>http://spiresecurity.com/?feed=rss2&#038;p=1383</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Do Enterprises Need AMP? An &#8220;Advanced Malware Protection&#8221; Market Assessment</title>
		<link>http://spiresecurity.com/?p=1376</link>
		<comments>http://spiresecurity.com/?p=1376#comments</comments>
		<pubDate>Tue, 03 Sep 2013 14:58:28 +0000</pubDate>
		<dc:creator>Pete Lindstrom</dc:creator>
				<category><![CDATA[AMP Firehose]]></category>
		<category><![CDATA[Economics and Risk]]></category>
		<category><![CDATA[Highlights]]></category>
		<category><![CDATA[Incidents]]></category>
		<category><![CDATA[Threat Management]]></category>

		<guid isPermaLink="false">http://spiresecurity.com/?p=1376</guid>
		<description><![CDATA[Over the past few months I have been on an &#8220;advanced malware protection&#8221; (AMP) kick. I am fascinated by this topic because it ties together a set of market conditions that can be extremely challenging to navigate through, both for&#8230;<p class="more-link-p"><a class="more-link" href="http://spiresecurity.com/?p=1376">Read more &#8594;</a></p>]]></description>
				<content:encoded><![CDATA[<p>Over the past few months I have been on an &#8220;advanced malware protection&#8221; (AMP) kick. I am fascinated by this topic because it ties together a set of market conditions that can be extremely challenging to navigate through, both for security architects and solution providers:</p>
<ol>
<li><span style="line-height: 16px;"><strong>Need</strong>. I choose the word &#8220;need&#8221; with caution, since, as you will find out below, it does not necessarily mean there is &#8220;demand&#8221; for a better solution. However, I don&#8217;t think techrisk professionals can deny that the malware-dropping attack vector is alive and well. It is highlighted as the key to the Aurora attacks that catalyzed the &#8220;advanced persistent threat&#8221; concern.</span></li>
<li><strong>Varied Solutions</strong>. There are a number of vendors that have cropped up through the years with solutions to address the malware problem, and the techniques vary significantly. Whitelisters only allow identified executables to run; sandboxes isolate malware and/or identify actions; and real-time forensics track system calls and/or configured state.</li>
<li><strong>Mature Market</strong>. Even with an identifiable need and newer interesting solutions, the most powerful security market in the world &#8211; antivirus (nee antimalware) &#8211; operates in pseudo-commodity mode and dominates in endpoint security.</li>
</ol>
<p>As an industry analyst, I have had the opportunity to interview over a dozen solution providers and even more enterprise security architects and executives on the state of antimalware in the enterprise. Here are a few of my conclusions:</p>
<ul>
<li>Companies are moderately satisfied (and perhaps complacent) with their existing antimalware solutions. They acknowledge that these solutions are not blocking all malware but believe that every solution in the category has similar problems and so are reluctant to switch.</li>
<li>The only factor that could affect existing signature-based antimalware is price &#8211; a lower-cost solution (which many agree is unlikely) could have a strong-enough value proposition. Notably, a few organizations are evaluating Microsoft&#8217;s free antimalware solution as one of these alternative options.</li>
<li>Organizations are looking to gain more benefit from their existing antimalware solutions. Many are still focused on signature-based functionality and are now looking at more advanced capabilities. In addition, organizations are considering and employing new capabilities like Microsoft&#8217;s EMET functionality.</li>
<li>For those times when malware gets through and infects a system, re-imaging is the standard approach, though some organizations are mildly reluctant to do it. Most of these malware infections are not classified as &#8220;incidents&#8221; per se &#8211; there is an ad hoc evaluation process to decide whether any infection should be escalated into being classified as an incident.</li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;">Organizations are looking at architectural changes and not product changes when it comes to endpoint client-side security. This means they are focusing on BYOD and/or VDI (or even dumb terminals) as options in their client security strategies.</span></li>
<li>Control over (physical) clients continues to relax, with certain &#8220;pockets&#8221; of exceptions (kiosks or manufacturing systems). For some, this was after a long period of control strengthening (e.g. finally taking away local administrative rights).</li>
</ul>
<p>As I mentioned at the start, the market dynamics fascinate me here. I don&#8217;t think there is a techrisk professional left that believes signature-based antimalware is &#8220;good enough&#8221; and yet we see its dampening impact everywhere. At this stage, it has simply become the &#8220;checkbox compliant&#8221; easiest approach.</p>
<p>As someone extremely interested in cybersecurity economics I am encouraged by the attention being given to the bottom line &#8211; organizations should be very careful about cost-benefit in their security programs. While some of the organizations I interviewed had done a comprehensive analysis, it appeared to me that a number of organizations had not undergone a thorough review of their strategies.</p>
<p>I will be addressing these issues at my <a href="http://www.regonline.com/AMPFirehoseNYC">&#8220;Drinking from the AMP Firehose&#8221; workshop</a> in New York City in a couple of weeks. The workshop concept was driven by these ideas and aims to break through the logjam brought on by complacency and confusion. Regardless of the conclusions that individual organizations come to, I think the entire field will be better off for it.</p>
<p><em>Pete Lindstrom is Principal and VP of Research for Spire Security, LLC, a research and advisory firm. Learn more about Advanced Malware Protection by &#8220;Drinking from the Firehose&#8221; in New York City on 9/17/13. Details at <a href="http://www.regonline.com/AMPFirehoseNYC">www.regonline.com/AMPFirehoseNYC</a>.</em></p>
]]></content:encoded>
			<wfw:commentRss>http://spiresecurity.com/?feed=rss2&#038;p=1376</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Do you need &#8220;Advanced Malware Protection&#8221; from 0days and the APT? Key Economic Considerations</title>
		<link>http://spiresecurity.com/?p=1362</link>
		<comments>http://spiresecurity.com/?p=1362#comments</comments>
		<pubDate>Tue, 27 Aug 2013 21:49:32 +0000</pubDate>
		<dc:creator>Pete Lindstrom</dc:creator>
				<category><![CDATA[AMP Firehose]]></category>
		<category><![CDATA[Economics and Risk]]></category>
		<category><![CDATA[Highlights]]></category>

		<guid isPermaLink="false">http://spiresecurity.com/?p=1362</guid>
		<description><![CDATA[Events over the past few years have heightened attention on attackers with more serious intentions than script kiddies or casual hackers. The &#8220;advanced persistent threat&#8221; has been outed, first generally by Google and RSA, then much more explicitly by Mandiant.&#8230;<p class="more-link-p"><a class="more-link" href="http://spiresecurity.com/?p=1362">Read more &#8594;</a></p>]]></description>
				<content:encoded><![CDATA[<p><span style="letter-spacing: 0.05em; line-height: 1.6875;">Events over the past few years have heightened attention on attackers with more serious intentions than script kiddies or casual hackers. The &#8220;advanced persistent threat&#8221; has been outed, first generally by Google and RSA, then much more explicitly by Mandiant. The use of 0days in malware has been identified as a key element of the &#8220;kill chain&#8221; for attackers. Right or wrong, cybersecurity concerns are at an all-time high.</span></p>
<p>On the protection side of the equation, although antimalware solutions provide a basic (and compliant) level of protection, security professionals are well aware of the limitations of signature-based approaches. Solutions that have been around for a while, such as host intrusion prevention and whitelisting, have gained renewed interest. Other approaches like network or endpoint sandboxes for isolation and/or analysis or active forensics for near-real-time analytics are coming on strong.</p>
<p>The challenge is determining whether the additional cost is worth it, deciding whether a new solution will significantly reduce the problem, and identifying which type of solution(s) are best.</p>
<p>While it is easy for security professionals to claim they will spend &#8220;whatever it takes&#8221; to address technology-related risk, that assertion is easily deflated through extreme examples (millions? billions? trillions?). While the intentions are valiant (and I get the point), no organization has an unlimited supply of money to spend on security. Therefore, it is crucial to make good decisions about how and where to spend money.</p>
<p>Any business decision is accompanied by some sort of justification, and cybersecurity is no different. In security, we typically evaluate total cost of ownership of the solution and compare it to our notion of how much risk is reduced. At the very least, every purchasing decision is supported by a claim that the spending &#8220;is worth it.&#8221; At best, a more formal cost-benefit approach should be employed.</p>
<p>Evaluating the cost-benefit of an &#8220;advanced malware protection&#8221; solution can be extremely challenging. Dropping malware (in the form of viruses and worms) onto systems is one of the oldest methods of attacking and compromising computing environments. Because of this, all enterprises already have controls in place that attempt to protect against malware infection. In addition, there are a number of techniques that can be used to address the problem.</p>
<p>Regardless of the challenge, conducting an economic analysis of newer AMP solutions may lead to some surprising conclusions. Here are six key considerations for conducting your analysis.</p>
<p><strong>1. Ignore the &#8220;Advanced&#8221; Part of Advanced Malware Protection</strong></p>
<p>The first distinction you should make in reviewing your needs for &#8220;advanced&#8221; malware protection is that the &#8220;advanced&#8221; part is extremely nebulous &#8211; the bar keeps changing in defining exactly which techniques are advanced and which aren&#8217;t advanced. Accordingly, the first takeaway is &#8220;evaluate your AMP solutions in concert with all antimalware efforts in your organization.&#8221;</p>
<p>This should not be a radical thought.</p>
<p><strong>2. Cover All the Antimalware Bases</strong></p>
<p>Diving a bit deeper into costs, enterprises should consider the costs of all capabilities &#8211; the capital investments made on hardware and software, maintenance costs, and personnel costs. The vendor solution (capital investment) side of antimalware protection can include endpoint antimalware, email or gateway-based antimalware, intrusion detection (potentially), and secure web gateways. On the operational expense side, organizations should consider the personnel costs associated with identification, prevention, mitigation, response, and recovery activities associated with malware infections and incidents.</p>
<p><strong>3. Allocate Partial Costs of Broader Solutions</strong></p>
<p>Focusing on the costs associated with one type of threat &#8211; in this case, malware &#8211; can be challenging. Some solutions, like endpoint antimalware, focus directly on the problem while others provide varying levels of accompanying support. In my research, for example, secure web gateways were cited as a means for detecting malware infections that were undetected by endpoint antimalware solutions, but secure web gateways provide more capability than malware infection detection.</p>
<p>The key in the analysis is to allocate costs based on the proportional value provided by the broader solution. If 10% of the ongoing value of the solution comes from antimalware detection, then 10% of the future costs should be allocated to antimalware.</p>
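<p>A minimal sketch of that allocation, with made-up figures:</p>
<pre><code># Hypothetical: a secure web gateway whose antimalware detection is judged
# to provide 10% of the solution's ongoing value.
gateway_annual_cost = 80_000       # ongoing (future) cost of the broader solution
antimalware_share   = 0.10         # proportional value attributed to malware detection

allocated_to_antimalware = gateway_annual_cost * antimalware_share
print(f"allocated to the antimalware analysis: ${allocated_to_antimalware:,.0f}")  # $8,000
</code></pre>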
<p><strong>4. Ignore Sunk Costs</strong></p>
<p>The maturity level of antimalware makes it likely that capital investments to address the problem have already occurred. Any spending that occurred in the past should be excluded from the analysis, though any current and future operational expenses should be included. In contrast, a decision involving a future capital investment should include that amount allocated (either amortized or depreciated) over its lifetime as well as the operational costs.</p>
<p><strong>5. Factor in Employee Productivity</strong></p>
<p>Another economic issue to consider is the productivity of employees. The productivity costs associated with the impacted worker should be considered along with the costs associated with the IT triage person. If it takes four hours to recover an infected system, then four hours of the worker&#8217;s lost productivity should be included (nothing fancy here &#8211; use a single average number based on salary for all workers).</p>
<p><strong>6. Use a Breakeven Approach</strong></p>
<p>Perhaps a bigger challenge in justifying antimalware spending is in determining the amount of potential losses. That &#8220;it is worth it&#8221; decision means that the security professional spending $100,000 on a security solution believes the solution will offset at least $100,000 in risk.</p>
<p>While some cringe a bit at the realization that spending reveals the minimum expectation of risk reduction, it also provides an opportunity to conduct a standard financial breakeven analysis. Rather than attempting to figure out the exact amount of financial loss at stake, you need only consider the total amount being spent and determine whether it is less than the potential losses. So as long as you&#8217;ve properly accounted for all costs, an appropriate decision can be made.</p>
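<p>A minimal breakeven sketch, with every figure hypothetical:</p>
<pre><code># Hypothetical breakeven check for an AMP purchase.
# Annual spend (allocated capital plus operations) is compared against the
# risk the buyer believes the solution offsets.

annual_spend = 100_000 + 40_000          # solution TCO + personnel time, per year

# Breakeven: the spend "is worth it" only if probability * loss offset is at
# least the spend. Solve for the loss implied at a given probability.
assumed_probability = 0.05               # buyer's estimate of an incident per year
breakeven_loss = annual_spend / assumed_probability

print(f"annual spend:    ${annual_spend:,.0f}")      # $140,000
print(f"breakeven loss:  ${breakeven_loss:,.0f}")    # $2,800,000
# If a plausible incident would cost less than $2.8m (at 5% likelihood),
# the spending does not break even under these assumptions.
</code></pre>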
<p>The AMP solution decision is not an easy one &#8211; with a handful of controls at different maturity levels already in the organization and a variety of newer solutions vying for attention. Some organizations may even come to the conclusion they don&#8217;t need to augment their existing capabilities. Others will find out they really do. Regardless, enterprises should be conducting the necessary analysis to make the best decision for their needs.</p>
<p><em>Pete Lindstrom is Principal and VP of Research for Spire Security, LLC, a research and advisory firm. Learn more about Advanced Malware Protection by &#8220;Drinking from the Firehose&#8221; in New York City on 9/17/13. Details at <a href="http://www.regonline.com/AMPFirehoseNYC">www.regonline.com/AMPFirehoseNYC</a>.</em></p>
]]></content:encoded>
			<wfw:commentRss>http://spiresecurity.com/?feed=rss2&#038;p=1362</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>New Workshop: Drinking from the Advanced Malware Protection Firehose</title>
		<link>http://spiresecurity.com/?p=1364</link>
		<comments>http://spiresecurity.com/?p=1364#comments</comments>
		<pubDate>Tue, 27 Aug 2013 13:52:07 +0000</pubDate>
		<dc:creator>Pete Lindstrom</dc:creator>
				<category><![CDATA[AMP Firehose]]></category>
		<category><![CDATA[Economics and Risk]]></category>

		<guid isPermaLink="false">http://spiresecurity.com/?p=1364</guid>
		<description><![CDATA[&#8220;Drinking from the Advanced Malware Protection (AMP) Firehose&#8221; is a workshop for information security architects, managers and tech-savvy executives to evaluate the ability of newer and evolving AMP solutions (whitelists, sandboxes, active forensics) to address the challenges of zero-day and&#8230;<p class="more-link-p"><a class="more-link" href="http://spiresecurity.com/?p=1364">Read more &#8594;</a></p>]]></description>
				<content:encoded><![CDATA[<p><span style="letter-spacing: 0.05em; line-height: 1.6875;">&#8220;Drinking from the Advanced Malware Protection (AMP) Firehose&#8221; is a workshop for information security architects, managers and tech-savvy executives to evaluate the ability of newer and evolving AMP solutions (whitelists, sandboxes, active forensics) to address the challenges of zero-day and Advanced Persistent Threats. Participants will create their custom risk profile and essential features scorecard based on a defined structure in collaboration with the group.</span></p>
<p><span style="letter-spacing: 0.05em; line-height: 1.6875;">Key Benefits: </span></p>
<ul>
<li>Create and use an economic/risk model to justify your need for Advanced Malware Protection (AMP).</li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;">Cut through the confusion of biased vendor presentations to identify the functional benefits of AMP solutions.</span></li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;">Evaluate vendors based on an objective model (i.e., your needs) customized to match your requirements.</span></li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;">Benefit from collaboration and feedback of your peers who face the same challenges at their organizations.</span></li>
</ul>
<p>With an economic model in hand, participants will hear from up to 10 vendors (maximum 10 minutes each) as they provide details on how their AMP solutions address current needs and conform to the requirements in the scorecards. After vendors are excused, participants will discuss and debate capabilities and ultimately assign their own scores. The process is akin to speed-dating, but with group feedback (and no alcohol).</p>
<p>Each participant takes away the proceedings, along with their economic model (a quantitative risk assessment) and vendor scorecard, which includes their unique values and scores, as well as the group summary scores.</p>
]]></content:encoded>
			<wfw:commentRss>http://spiresecurity.com/?feed=rss2&#038;p=1364</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Does &#8220;Risk = T * V * I&#8221;? Notes on Pr(t) * Pr(v) = Pr(event)</title>
		<link>http://spiresecurity.com/?p=1359</link>
		<comments>http://spiresecurity.com/?p=1359#comments</comments>
		<pubDate>Mon, 12 Aug 2013 14:07:32 +0000</pubDate>
		<dc:creator>Pete Lindstrom</dc:creator>
				<category><![CDATA[Economics and Risk]]></category>
		<category><![CDATA[Metrics]]></category>

		<guid isPermaLink="false">http://spiresecurity.com/?p=1359</guid>
		<description><![CDATA[On the SIRA mailing list, we are discussing the age-old risk equation &#8220;Risk = Threats x Vulns x Impact (or Consequences).&#8221; A number of folks think it is nonsense. Here&#8217;s why I don&#8217;t. (Email to SIRA mailing list). Before I&#8230;<p class="more-link-p"><a class="more-link" href="http://spiresecurity.com/?p=1359">Read more &#8594;</a></p>]]></description>
				<content:encoded><![CDATA[<p>On the SIRA mailing list, we are discussing the age-old risk equation &#8220;Risk = Threats x Vulns x Impact (or Consequences).&#8221; A number of folks think it is nonsense. Here&#8217;s why I don&#8217;t. (Email to SIRA mailing list).</p>
<p>Before I get into this, I should re-acknowledge that I believe there are better methods to measure/evaluate risk, and I fully subscribe to their development. However, I am looking for evolution not revolution &#8211; Geoffrey Moore pointed out the challenges of disruptive innovation in &#8220;Crossing the Chasm&#8221; many years ago and I agree wholeheartedly. Evolution to me means slightly modifying existing approaches in beneficial ways. That is why a few of us are developing the Tech Risk Mgt Maturity Model.</p>
<p>So my goal is, essentially, to be &#8220;better than existing practices in techrisk mgt&#8221; &#8211; I am looking for marginal utility.</p>
<p>I also believe that resources are scarce and that every time infosec/techrisk folks make decisions about allocating them they are revealing preferences that are measurable in very coarse ways. Even though the existing models are seen as &#8220;qualitative&#8221; we can create control horizons and conduct breakeven analysis in ways to tease out some thresholds at the very least.</p>
<p>Now, to answer the questions:</p>
<ul>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;">Pr(t): Yes, &#8220;the probability that a (sufficiently capable) threat actor will attack the system of interest&#8221; characterizes my belief well. And since I go out of my way to remind liability-minded folks that the intelligent adversary makes our situation much different from the &#8220;acts of god&#8221; kinds of hazard, I should acknowledge the non-randomness of the threat&#8230; but I am not ready to do that, exactly&#8230; for the same reasons as the &#8220;random walk down Wall Street&#8221; problem &#8211; easy to assert non-randomness yet hard to show otherwise.</span></li>
</ul>
<p style="padding-left: 30px;"><span style="letter-spacing: 0.05em; line-height: 1.6875;">Here is my thought process:</span><br />
<span style="letter-spacing: 0.05em; line-height: 1.6875;">a) If it isn&#8217;t random, it should be predictable; and</span><br />
<span style="letter-spacing: 0.05em; line-height: 1.6875;">b) if it isn&#8217;t predictable, then it approximates randomness (especially in the aggregate).</span><br />
<span style="letter-spacing: 0.05em; line-height: 1.6875;">c) Since we can&#8217;t predict threat (afaik) then we should be evaluating any model compared to random, so</span><br />
<span style="letter-spacing: 0.05em; line-height: 1.6875;">d) random is a good place to start.</span></p>
<p style="padding-left: 30px;">There are many ways to approach how to determine Pr(t) &#8211; could be degrees of belief, could be public data (real-time blacklists, etc.), could be based on historical data, could be something else. My favorite application is a simple comparison of two scenarios. I don&#8217;t even quantify &#8211; just look at the accessibility of the two &#8220;systems of interest&#8221; and determine which one is higher (compare, say, a bluesnarf attack that requires local proximity to a sql injection that can happen from anywhere; or assess the diff in wi-fi attacks btwn being in the city and in the country). I come up with higher Pr(t) for the latter and the former in my two examples. (It may also be useful to factor in attacker&#8217;s costs in the first example).</p>
<ul>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;">Pr(v): This is difficult to characterize, but I think of it more as &#8220;the probability that a system of interest will be attacked, and that the attack will succeed [within some time period].&#8221; While I agree that any non-trivial system is vulnerable in a theoretical sense, it does not appear that every system is compromised (and I think that &#8220;two kinds of orgs &#8211; those that are compromised and those that don&#8217;t know it yet&#8221; *</span><b style="letter-spacing: 0.05em; line-height: 1.6875;">is</b><span style="letter-spacing: 0.05em; line-height: 1.6875;">* closer to nonsense than r=t*v*i). Whether there is an over-abundance of targets, the attacker costs are too high, the control environment is sufficiently strong, or some other reason, not all systems are in a compromised state and so it is worthwhile to measure. It is especially important since the bulk of our defensive efforts revolve around reducing this probability.</span></li>
</ul>
<p style="padding-left: 30px;">Again, estimating Pr(v) can be done in similar ways as Pr(t). In my comparative analysis &#8211; I look at things like number of users (as vulns), size of code base, number of open ports, RASQ, etc&#8230;</p>
<ul>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;">It is worth discussing why breaking down Pr(event) into Pr(t)*Pr(v) is beneficial. For the most part, I would actually prefer to simply use Pr(event) if we have enough information (historical data). For example, I think we have pretty good data on email-borne attacks and so I wouldn&#8217;t be working too hard on assessing &#8216;t&#8217; and &#8216;v&#8217; there, though the McColo takedown can show how much of an impact a change in &#8216;t&#8217; can have.</span></li>
</ul>
<p>Maybe the biggest reason is that the respective populations are different and can change drastically. Here are some use cases:</p>
<p><span style="letter-spacing: 0.05em; line-height: 1.6875;">a) One of the better uses is to compare two scenarios/architectures. Banking from smartphone vs. laptop; moving to cloud from internal; determining risk btwn WEP vuln and remote Windows vuln; etc&#8230;</span></p>
<p>b) Acknowledge that if &#8216;t&#8217; or &#8216;v&#8217; is 0, then Pr(event) is 0. Though it is hard to conceive of a case where &#8216;v&#8217; is 0, we can see &#8216;t&#8217; approaching it in lots of PoCs.</p>
<p>c) Showing the significance of &#8216;t&#8217; or &#8216;v&#8217; as its partner approaches 1. I agree that &#8216;v&#8217; is essentially 1.0 so why do we spend all our time on it? Maybe we should be doing other things&#8230; this is also why I think the move towards threat intel is so important.</p>
<p>d) To help folks see how changes in populations of either &#8216;t&#8217; or &#8216;v&#8217; might affect each other, and ultimately risk. Like the McColo takedown, bounties (on malware writers and bugs), etc. My favorite use may be pointing out that vuln disclosure does nothing to &#8216;v&#8217; since it was already there; the impact is on &#8216;t.&#8217;</p>
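<p>A small sketch of use case (a), treating Pr(event) as Pr(t) * Pr(v); the probabilities below are coarse, illustrative degrees of belief rather than measurements:</p>
<pre><code># Hypothetical comparison of two scenarios using Pr(event) = Pr(t) * Pr(v).
# The inputs are coarse degrees of belief over some fixed time period.

def pr_event(pr_threat, pr_vuln):
    """Probability an attack occurs AND succeeds, treating the two as independent."""
    return pr_threat * pr_vuln

# Scenario 1: bluesnarf-style attack requiring local proximity.
local_attack  = pr_event(pr_threat=0.02, pr_vuln=0.60)
# Scenario 2: SQL injection reachable from anywhere on the Internet.
remote_attack = pr_event(pr_threat=0.40, pr_vuln=0.30)

print(f"local proximity attack:  {local_attack:.3f}")    # 0.012
print(f"Internet-facing attack:  {remote_attack:.3f}")    # 0.120
# The remote scenario carries roughly 10x the event probability here even
# though its Pr(v) is lower, because accessibility drives Pr(t) up.
</code></pre>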
<p>To round things out, all this is &#8220;good enough&#8221; at the level of precision we are working at, and &#8220;better than&#8221; existing practices, IMO.</p>
]]></content:encoded>
			<wfw:commentRss>http://spiresecurity.com/?feed=rss2&#038;p=1359</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>Which is More Secure &#8211; Android or iOS?: Tale of the Tape</title>
		<link>http://spiresecurity.com/?p=1353</link>
		<comments>http://spiresecurity.com/?p=1353#comments</comments>
		<pubDate>Fri, 19 Jul 2013 16:04:13 +0000</pubDate>
		<dc:creator>Pete Lindstrom</dc:creator>
				<category><![CDATA[Economics and Risk]]></category>
		<category><![CDATA[Metrics]]></category>
		<category><![CDATA[Vulnerability Management]]></category>

		<guid isPermaLink="false">http://spiresecurity.com/?p=1353</guid>
		<description><![CDATA[Tech risk professionals love to have debates about platform security, though it used to be Windows vs. Linux (really closed vs. open source) which morphed to Windows vs. Apple and is now Android vs. iOS. In any case, there are&#8230;<p class="more-link-p"><a class="more-link" href="http://spiresecurity.com/?p=1353">Read more &#8594;</a></p>]]></description>
				<content:encoded><![CDATA[<p>Tech risk professionals love to have debates about platform security, though it used to be Windows vs. Linux (really closed vs. open source) which morphed to Windows vs. Apple and is now Android vs. iOS. In any case, there are often numbers available to support one viewpoint or another. Let&#8217;s have a look and see if we can come to some conclusions.</p>
<p>For our latest debate &#8211; Android vs. iOS &#8211; there are three sets of numbers that have recently come into play for evaluation:</p>
<ol>
<li><span style="line-height: 16px;">Number of vulnerabilities: A recent <a href="http://mobile.theverge.com/2013/7/16/4527326/android-versus-ios-security">blog post on TheVerge.com</a> highlights that iOS and its 238 vulns from 2007-2013 has 8.8x more vulnerabilities than Android&#8217;s 27 from 2009-2013.</span></li>
<li>Number of malware samples: In April, a <a href="http://www.symantec.com/content/en/us/enterprise/other_resources/b-istr_main_report_v18_2012_21291018.en-us.pdf">Symantec report [PDF]</a> pointed out that Apple&#8217;s 387 vulns in 2012 dwarfs Android&#8217;s 13 and yet Android had 103 &#8220;mobile threats&#8221; (malware) compared with Apple&#8217;s 1. Importantly, they also point out that &#8220;<em>most mobile threats have not used software vulnerabilities</em>.&#8221;</li>
<li>Percent of traffic: A <a href="http://www.cc.gatech.edu/~traynor/papers/lever-ndss13.pdf">paper presented at NDSS &#8217;13 [PDF]</a> monitored actual smartphone traffic and found that a) &#8220;<em>The mobile malware found by the research community thus far appears in a minuscule number of devices in the network: 3,492 out of over 380 million (less than 0.0009%)</em>&#8221; and b) &#8220;<em>users of iOS devices are virtually identically as likely to communicate with known low reputation domains as the owners of other mobile platforms, calling into question the conventional wisdom of one platform demonstrably providing greater security than another</em>&#8221;</li>
</ol>
<p>Now, since we all know that security is the number one priority for IT decisions (heh), the CIO is waiting to hear from us on which platform is more secure. How do you answer?</p>
<p>Here&#8217;s my analysis, just using the numbers provided*</p>
<p>First, number of vulnerabilities as a measure is often thought of as a leading indicator of risk even though we all recognize that more vulns found equals fewer vulnerabilities remaining. The perception, however, is that there are actually <em>even more</em> vulns left. Absent any other information, however, it is worth considering the notion that a higher number here is a measure of stronger security going forward (that is, #vulns is a lagging indicator). It doesn&#8217;t help matters that at least one of the sets of numbers inexplicably uses different time periods in its analysis. This measure would be much more useful if we had a way to normalize the numbers across platforms &#8211; the two most obvious ways would be with 1) a measure of complexity or size of the code base or 2) a measure of the person-hours expended in looking for vulns. While I favor this latter option, it is not very practical.</p>
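<p>As a rough illustration of why normalization matters, here are the cited counts divided by the length of their (different) reporting windows, treating them as roughly 7 and 5 years; this says nothing about code size or search effort, it only removes the time-period mismatch:</p>
<pre><code># Rough normalization of the cited counts by reporting-window length.
# (2007-2013 is treated as roughly 7 years, 2009-2013 as roughly 5.)
ios_vulns, ios_years         = 238, 7
android_vulns, android_years = 27, 5

ios_per_year     = ios_vulns / ios_years
android_per_year = android_vulns / android_years

print(f"iOS:     {ios_per_year:.1f} disclosed vulns/year")      # 34.0
print(f"Android: {android_per_year:.1f} disclosed vulns/year")  # 5.4
print(f"ratio:   {ios_per_year / android_per_year:.1f}x")       # 6.3x vs. the headline 8.8x
# Even this crude adjustment moves the ratio; normalizing by code size or
# person-hours spent hunting vulns would likely move it further.
</code></pre>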
<p>The second measure, number of malware samples, is interesting because it is closer to the actual compromise. In addition, as Symantec points out many of them don&#8217;t exploit software vulnerabilities (this is another knock against using vuln counts). The challenge here is that there is essentially unlimited ability to create more malware samples. Moreover, the notion of a &#8220;mobile threat&#8221; is fairly broad and not always threatening to the extent that legitimate apps have some similar characteristics. Given the (somewhat) restricted methods for distribution and installation of apps on smartphones, a better measure would be to identify the distribution and accessibility to the population of these malware apps. In this case, getting an understanding of the number of downloads would get significantly closer to understanding the relative risk.</p>
<p>The final measure, compromised smartphones, provides a historical measure of actual infected phones. Aside from the really, really low number, we must decide whether these values are a good reflection of (future) risk or not. Since this number identifies compromised systems, it gets us closest to that which we are trying to prevent, which is useful. Ultimately, I believe this measure is the best of the three in helping us understand &#8220;risk&#8221; in the mobile world. And right now, it&#8217;s a tossup.</p>
<p>A better measure for determining which platform is more secure, in my opinion, would involve a measure of attack surface combined with one of devices sold (as a placeholder for activity volume and popularity).</p>
<p>&nbsp;</p>
<p>&nbsp;</p>
]]></content:encoded>
			<wfw:commentRss>http://spiresecurity.com/?feed=rss2&#038;p=1353</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>On Information Value and Loss; The Simplicity of Breakeven Analysis</title>
		<link>http://spiresecurity.com/?p=1350</link>
		<comments>http://spiresecurity.com/?p=1350#comments</comments>
		<pubDate>Tue, 09 Jul 2013 15:09:59 +0000</pubDate>
		<dc:creator>Pete Lindstrom</dc:creator>
				<category><![CDATA[Economics and Risk]]></category>

		<guid isPermaLink="false">http://spiresecurity.com/?p=1350</guid>
		<description><![CDATA[On the SecurityMetrics mailing list, Dan Geer wrote: We have, of course, been around the mountain several times on how to value information. There are at least these: 1. acquisition cost (worth what you paid for it) 2. replacement cost&#8230;<p class="more-link-p"><a class="more-link" href="http://spiresecurity.com/?p=1350">Read more &#8594;</a></p>]]></description>
				<content:encoded><![CDATA[<p>On the SecurityMetrics mailing list, Dan Geer wrote:</p>
<blockquote><p><em>We have, of course, been around the mountain several times on how to value information. There are at least these:</em></p>
<p><em>1. acquisition cost (worth what you paid for it)</em><br />
<em> 2. replacement cost (worth what you would pay for it)</em><br />
<em> 3. opportunity cost (downside when, say, your IP is lost)</em></p>
<p><em>There is Pete Lindstrom&#8217;s (as I recall) effective minimum:</em></p>
<p><em>4. current dollar value of IT budget</em></p>
<p><em>I&#8217;ll suggest another:</em></p>
<p><em>5. paper at (a) says that the optimal amount to invest to </em><em>protect an information asset is 1/e of its info value at risk, </em><em>so the value of the information asset, assuming optimality is </em><em>_not_ being achieved, is greater than or equal to the </em><em>product [ e*investment ]</em></p>
<p><em style="letter-spacing: 0.05em; line-height: 1.6875;">Not particularly helpful, but upper/lower bounds are often useful to think about. I know a CISO who justified a rather substantial identity management system by arguing that it would protect reputation of the firm and was a net positive return if the reputation of the firm was worth at least one basis point of the market capitalization. Needless to say, no member of the management committee would say that the reputation of the firm was smaller than one basis point so the investment went through.</em></p></blockquote>
<p>There are two aspects of information (and technology valuation) that create the biggest problem for Technology Risk Professionals:</p>
<p>First, the perception of value is not absolute &#8211; it can be affected by timing, substitute/alternative options, opportunity cost, etc. Consider that the price of a Diet Coke differs at a grocery store, a fast food outlet, a convenience store, a baseball game, etc. So we negotiate &#8220;willingness-to-pay (WTP)&#8221; and &#8220;willingness-to-accept (WTA)&#8221; throughout the course of our lives. Valuation is even more difficult for goods without a large market (e.g. high-end artists&#8217; paintings) and especially difficult for intangible assets like &#8220;information.&#8221;</p>
<p>Second, the value being driven by technology to an organization is not the same as the possible losses. For example, Coca-Cola does not &#8220;lose&#8221; the ability to manufacture and sell Diet Coke if somebody steals its proverbial formula. It MAY lose some % of revenue due to the sale of black market Diet Coke, but even that is questionable IMO (much easier to copy the can and approximate the taste, I think). Similarly, Amazon.com is unlikely to lose 1/365th of revenue due to being offline for a day. OTOH, an unrecovered transfer of $10k from one account to another at another entity IS lost. And sometimes a breach can result in greater losses (at least arguably) than the currently realized value &#8211; consider intellectual property associated with undeveloped products, for example.</p>
<p>As echoed in a couple of the workgroups at Metricon 8 this year, organizations are most comfortable reporting losses reflected by direct costs &#8211; immediate response, forensic analysis, notification, etc. The enlightened company may include economic costs &#8211; e.g. loss of productivity in other areas. It is the truly rare company that can get to the point of quantifying losses in &#8220;brand, reputation, etc.&#8221; even though <strong>losses can only be reflected in its (current and/or future) financials</strong> &#8211; higher costs, lower revenue, increased liabilities, decreased assets. After all, we are talking about an inanimate entity.</p>
<p>With all of the ambiguity, it could be that we&#8217;ll only ever be able to get consensus on value and losses using breakeven approaches to define thresholds and make decisions. On the value side, my assertion holds that the amount spent on IT reflects a minimum valuation but it is only part of the story given the missing relationship between value and loss, which is much more important to techrisk professionals. So we need to modify our breakeven approach accordingly and create a &#8220;control horizon&#8221;. Things get a bit trickier here.</p>
<p>While a million dollar purchase of IT assets reflects at least a million dollars in value, a hundred thousand dollar purchase of a security solution does not reflect its corresponding loss. We can say that $100k spent on security reflects at least $100k of <em>risk</em>, but that is not the same as loss, because risk is loss discounted by its probability of happening. What&#8217;s more, this probability also has a lot of associated ambiguity (for a number of reasons).</p>
<p>What we can do, however, is draw a line on a graph using the value pairs of probability and loss (10% of $1m; 1% of $10m; etc). Lo and behold, this creates a &#8220;control horizon&#8221; on a risk matrix &#8211; in essence, a breakeven line. If the initial risk was above the line (or should we say &#8220;up and to the right&#8221;) and it is completely addressed, then the purchase at least breaks even. Below the line means a bad decision was made somewhere along the line.</p>
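<p>A minimal sketch of that control horizon, using the $100k spend from the earlier example:</p>
<pre><code># Control horizon: the breakeven line where probability * loss equals the spend.
security_spend = 100_000   # the hypothetical security purchase discussed above

def risk_addressed(probability, loss):
    """Risk is the loss discounted by its probability of occurring."""
    return probability * loss

# The value pairs from the text both sit exactly on the $100k horizon:
print(f"{risk_addressed(0.10, 1_000_000):,.0f}")    # 100,000
print(f"{risk_addressed(0.01, 10_000_000):,.0f}")   # 100,000

# Above the line, the purchase at least breaks even if the risk is fully
# addressed; below the line, it cannot.
print(f"{risk_addressed(0.05, 5_000_000):,.0f}")    # 250,000 (above the horizon)
print(f"{risk_addressed(0.02, 2_000_000):,.0f}")    #  40,000 (below the horizon)
</code></pre>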
<p>The control horizon provides a basic way to determine whether security spending makes sense.</p>
]]></content:encoded>
			<wfw:commentRss>http://spiresecurity.com/?feed=rss2&#038;p=1350</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>The 7-day Itch: Ups and Downs of Google&#8217;s New Disclosure Policy</title>
		<link>http://spiresecurity.com/?p=1331</link>
		<comments>http://spiresecurity.com/?p=1331#comments</comments>
		<pubDate>Wed, 05 Jun 2013 14:13:51 +0000</pubDate>
		<dc:creator>Pete Lindstrom</dc:creator>
				<category><![CDATA[Economics and Risk]]></category>
		<category><![CDATA[Highlights]]></category>
		<category><![CDATA[Random]]></category>
		<category><![CDATA[Vulnerability Management]]></category>

		<guid isPermaLink="false">http://spiresecurity.com/?p=1331</guid>
		<description><![CDATA[Recently, members of the security team at Google made an important announcement about &#8220;real-world exploitation of publicly unknown vulnerabilities.&#8221; While it was done on the Google Online Security blog, all indications are that this is an official Google policy statement.&#8230;<p class="more-link-p"><a class="more-link" href="http://spiresecurity.com/?p=1331">Read more &#8594;</a></p>]]></description>
				<content:encoded><![CDATA[<p><span style="letter-spacing: 0.05em; line-height: 1.6875;">Recently, members of the security team at Google made an <a href="http://googleonlinesecurity.blogspot.com/2013/05/disclosure-timeline-for-vulnerabilities.html">important announcement</a> about &#8220;real-world exploitation of publicly unknown vulnerabilities.&#8221; While it was done on the Google Online Security blog, all indications are that this is an official Google policy statement. To wit, Google announced that &#8220;after 7 days have elapsed without a patch or advisory, we will support researchers making details available so that users can take steps to protect themselves.&#8221;</span></p>
<p><span style="letter-spacing: 0.05em; line-height: 1.6875;">This is an important announcement because it highlights the very real problem of &#8220;<a href="http://spiresecurity.com/?p=36">in-the-wild-exploits of undercover vulnerabilities</a>.&#8221; This strain of &#8220;0day&#8221; is the most significant given that active exploits are already happening when they are discovered. In these scenarios, the threats (malicious actors) and vulnerabilities have already collided in the real world and losses are being actively incurred. Thus, <strong>this type of situation is the most important type that technology risk (techrisk) managers must deal with in their environments.</strong></span></p>
<p>The announcement itself highlights some important, underappreciated aspects of the techrisk profession:</p>
<ul>
<li>That exploits/breaches/incidents are the fundamental &#8220;unwanted outcome&#8221; we are trying to prevent. It is not uncommon for techrisk pros to focus their efforts on software quality, control weaknesses, or compliance violations &#8211; all useful only to the extent that they address those incidents.</li>
<li>That techrisk professionals can identify attacks even when the vulnerability is unknown. Much of our profession&#8217;s focus revolves around the notion that we must find vulnerabilities in order to protect ourselves, yet time and again we succeed in identifying these types of attacks using behavioral analysis and other techniques. With the growth in popularity of forensic archiving, we can now also determine to what extent we have been victims in the past, which helps us understand the risks of the future.</li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;">That much of the profession&#8217;s effort associated with vulnerability management is ineffective. Our efforts to identify each vulnerability prior to exploit are simply overwhelmed by scale, as a quick thought exercise shows &#8211; consider how many vulnerabilities are created every day (in the aggregate) compared with how many are found. Perhaps more importantly, the vast majority of vulnerabilities that are found are never known to be actively exploited <a href="https://www.isecpartners.com/media/12955/eip-final.pdf">[pdf]</a>.</span></li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;">That there is variance in how different types of attacks &#8211; namely, targeted vs. opportunistic &#8211; manifest themselves online. Google&#8217;s primary cited reason for its new policy involves political activists as victims of targeted attacks that may lead to physical harm. The history of infosec and techrisk highlights other scenarios &#8211; the Nimda worm, the WMF exploit, WebDAV, etc. &#8211; that involved opportunistic exploits across a multitude of targets.</span></li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;">That the most significant way to &#8220;move the marker&#8221; in security is through the identification of exploits and not vulnerabilities. As with Code Red and Nimda in the Fall of 2001 leading to Bill Gates&#8217; well-known &#8220;<a href="http://www.microsoft.com/en-us/news/features/2012/jan12/GatesMemo.aspx">Trustworthy Computing Memo</a>,&#8221; active exploits are the best drivers of change in the techrisk profession.</span></li>
</ul>
<p><span style="letter-spacing: 0.05em; line-height: 1.6875;">While Google&#8217;s new policy offers an opportunity to assess the state of security on the Internet overall, it also demonstrates significant deficiencies in its approach:</span></p>
<ul>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;">The 7-day deadline has no risk basis. With the significant variance in number of affected parties and speed of compromise associated with opportunistic attacks versus targeted ones, the number is an arbitrary one. In the primary example cited (activists at risk of physical harm), speed is highly unlikely to have a significant impact on risk reduction.</span></li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;">The capabilities of enterprises and/or users to protect themselves can vary significantly. There are many reasons why some parties choose to remain vulnerable to certain types of attacks &#8211; system complexities, legacy support needs, lack of technical skill, competitive priorities, etc. Through the years some security researchers (including some employees of Google) have expressed disdain for those that cannot protect themselves. A company the size of Google should be held to a higher standard in its willingness to protect those online that can&#8217;t always protect themselves.</span></li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;"> </span><span style="letter-spacing: 0.05em; line-height: 1.6875;">No consideration of economics. The policy completely ignores tradeoffs like the risk of breaking systems when taking precautionary measures (e.g. patch failures), the well-known increase in exploits that occur after the disclosure of many new vulnerabilities [<a href="http://www.cs.umd.edu/~waa/pubs/Windows_of_Vulnerability.pdf">Arbaugh, McHugh, 2000 pdf</a>; <a href="http://users.ece.cmu.edu/~tdumitra/public_documents/bilge12_zero_day.pdf">Bilge, Dumitras 2013 pdf</a>], and the opportunity costs associated with new requirements. When Google says, for example, &#8220;each day an actively exploited vulnerability remains undisclosed to the public and unpatched, more computers will be compromised&#8221; they neglect the significant likelihood that computers will be compromised regardless of the state of disclosure to the public and fall back on the age-old myth that only patches can protect systems.</span></li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;"> </span><span style="letter-spacing: 0.05em; line-height: 1.6875;">It can lead to even more exploitations and incidents. Anyone paying close attention to the vulnerability research community knows that there is wide variance in how researchers disclose their information and some decisions are made based on annoyance, frustration, spite and sometimes even malice. If a vulnerability will get &#8220;noticed&#8221; more quickly, researchers may be tempted to &#8220;test&#8221; it in the wild in order to increase its priority level.</span></li>
</ul>
<p><span style="letter-spacing: 0.05em; line-height: 1.6875;">A company with the talent and resources of Google can do better. Here are some opportunities for improving the state of security on the Internet and addressing the real, significant risk associated with actively exploited 0days:</span></p>
<ul>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;">Encourage and train political activists in obfuscation and evasion techniques. It is difficult to evaluate a blanket policy across all scenarios when the discussion is anchored to arguably the most important one &#8211; the case involving physical harm. That case is highly unlikely to be a common one, and the overall implications of the policy are best discussed with this scenario set aside, since it tends to provoke an emotional reaction. As many of us know, there are many ways political activists can protect themselves online that would be far more effective than a 7-day disclosure policy that kicks in only after they have been compromised.</span></li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;"> </span><span style="letter-spacing: 0.05em; line-height: 1.6875;">Increase focus on actively exploited 0days. Since these are the most important scenarios the techrisk profession has to deal with, Google should be making every effort to identify these exploits and employ or invent new ways to protect against them. Google researchers still participate in random, ineffective vulnerability research that simply distracts from this very real problem.</span></li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;">Provide more insight into the &#8220;dozens&#8221; of 0days identified &#8220;through the years&#8221; mentioned in the blog announcement. If there is one thing Google has, it is great data. As evidenced by past reports [<a href="http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/en/us/archive/provos-2008a.pdf">Provos, 2008 pdf</a>], Google could very easily provide more specific evidence on the number of 0days they have identified, the volume of exploits, and their disposition by vendors. The fact that they haven&#8217;t yet, especially in the face of this policy announcement, is disappointing and makes the measure difficult to evaluate.</span></li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;">Take a risk-based approach to disclosure. Fast-moving worms do most of their damage in hours and days &#8211; in those cases, seven days is too long. Targeted attacks are unlikely to be repeated in a way that demands immediate attention for most environments &#8211; in those cases, seven days is too short. A risk-based approach would take into account the frequency of exploit, the probability of future exploit within a target population, and the impact of the exploit, while evaluating how these variables change over time &#8211; in particular, before and after disclosure (see the sketch after this list).</span></li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;"> </span><span style="letter-spacing: 0.05em; line-height: 1.6875;">Monitor the situation closely. Google&#8217;s unique ability to gather data in this regard is worth mentioning again as a function of its ability to assess its own policy. Collecting and publishing data on actual 0days throughout their exploit lifecycle would be a boon to the entire profession.</span></li>
<li><span style="letter-spacing: 0.05em; line-height: 1.6875;"> </span><span style="letter-spacing: 0.05em; line-height: 1.6875;">Initiate or participate in discussions to create new ways to address this very real problem. Commercial, community, and government mechanisms already exist for sharing data publicly and privately that could be used as models for minimizing the risks associated with these types of attacks. For example, a (private) process similar to federal wiretap capabilities in secrecy and opportunity may be more effective in addressing targeted attacks. There are countless other approaches that could be leveraged to address this problem.</span></li>
</ul>
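<p>As a rough illustration of the risk-based approach suggested above, the sketch below scores risk as expected loss (exploit frequency, probability of future exploit in the target population, and impact per incident) and picks a disclosure window accordingly. Everything here &#8211; the scoring formula, the thresholds, and the window lengths &#8211; is an illustrative assumption, not a proposal from Google or a settled method.</p>
<pre>
# A minimal sketch, assuming risk is scored as expected loss:
# exploits/day * probability of future exploit in the target population
# * impact per incident. Thresholds and window lengths are illustrative
# assumptions only.

def expected_loss(exploits_per_day, prob_future_exploit, impact_per_incident):
    return exploits_per_day * prob_future_exploit * impact_per_incident

def disclosure_window_days(risk_undisclosed, risk_if_disclosed):
    """Pick a window; shorten it only when disclosure reduces expected loss."""
    if risk_if_disclosed >= risk_undisclosed:
        return 30   # disclosure adds risk (e.g. copycat exploits); give vendors time
    if risk_undisclosed > 1_000_000:
        return 1    # fast-moving, opportunistic exploitation in progress
    return 7

# Opportunistic worm: many exploits per day across a broad population.
worm = expected_loss(5_000, 0.8, 1_000)
# Targeted attack on a narrow population: rare repeats, high impact per victim.
targeted = expected_loss(2, 0.05, 50_000)

print(disclosure_window_days(worm, worm * 0.4))        # 1: disclose quickly
print(disclosure_window_days(targeted, targeted * 2))  # 30: disclosure likely adds risk
</pre>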
<p>Make no mistake, the Google 7-day policy announcement sheds light on a <strong>real and significant issue</strong> in technology-related risk. While it highlights some of the challenges techrisk professionals face on a daily basis, it also demonstrates significant deficiencies in its approach to addressing the problem. This is a great opportunity to evaluate the existing state of the Internet from a risk and security perspective, determine where inconsistencies or weaknesses lie, and map out a risk-based program that has the highest likelihood of success.</p>
]]></content:encoded>
			<wfw:commentRss>http://spiresecurity.com/?feed=rss2&#038;p=1331</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
