The 7-day Itch: Ups and Downs of Google’s New Disclosure Policy

Recently, members of the security team at Google made an important announcement about “real-world exploitation of publicly unknown vulnerabilities.” While it was done on the Google Online Security blog, all indications are that this is an official Google policy statement. To wit, Google announced that “after 7 days have elapsed without a patch or advisory, we will support researchers making details available so that users can take steps to protect themselves.”

This is an important announcement because it highlights the very real problem of in-the-wild exploits of publicly unknown vulnerabilities. This strain of “0day” is the most significant given that active exploits are already underway when the vulnerabilities are discovered. In these scenarios, the threats (malicious actors) and vulnerabilities have already collided in the real world and losses are being actively incurred. Thus, these situations are the most important ones that technology risk (techrisk) managers must deal with in their environments.

The announcement itself highlights some important, underappreciated aspects of the techrisk profession:

  • That exploits/breaches/incidents are the fundamental “unwanted outcome” that we are trying to prevent. It is not uncommon for techrisk pros to focus efforts on software quality, control weaknesses, or compliance violations – all useful efforts, but only to the extent that they address the aforementioned incidents.
  • That techrisk professionals can identify attacks even when the vulnerability is unknown. Much of our profession’s focus revolves around the notion that we must find vulnerabilities in order to protect ourselves, yet time and again we succeed in identifying these types of attacks using behavioral analysis and other techniques. With the growth in popularity of forensic archiving, we can now also determine to what extent we have been victims in the past to assist with understanding the risks of the future.
  • That much of the profession’s effort associated with vulnerability management is ineffective. Our efforts to identify each vulnerability prior to exploit are overwhelmed by scale, as a simple thought exercise shows: consider how many vulnerabilities are created every day (in the aggregate) as compared with how many are found. Perhaps more importantly, the vast majority of vulnerabilities that are found are never known to be actively exploited [pdf].
  • That there is variance in how different types of attacks – namely, targeted vs. opportunistic – manifest themselves online. Google’s primary cited reason for its new policy involves political activists as victims of targeted attacks that may lead to physical harm. The history of infosec and techrisk highlights other scenarios – the Nimda worm, the WMF exploit, WebDAV, etc. – that involve opportunistic exploits across a multitude of targets.
  • That the most significant way to “move the marker” in security is through the identification of exploits and not vulnerabilities. As with Code Red and Nimda in the summer and fall of 2001 leading to Bill Gates’ well-known “Trustworthy Computing” memo, active exploits are the best drivers of change in the techrisk profession.

While Google’s new policy offers an opportunity to assess the state of security on the Internet overall, it also demonstrates significant deficiencies in its approach:

  • The 7-day deadline has no risk basis. With the significant variance in number of affected parties and speed of compromise associated with opportunistic attacks versus targeted ones, the number is an arbitrary one. In the primary example cited (activists at risk of physical harm), speed is highly unlikely to have a significant impact on risk reduction.
  • The capabilities of enterprises and/or users to protect themselves can vary significantly. There are many reasons why some parties choose to remain vulnerable to certain types of attacks – system complexities, legacy support needs, lack of technical skill, competing priorities, etc. Through the years some security researchers (including some employees of Google) have expressed disdain for those who cannot protect themselves. A company the size of Google should be held to a higher standard in its willingness to protect those online who can’t always protect themselves.
  • No consideration of economics. The policy completely ignores tradeoffs like the risk of breaking systems when taking precautionary measures (e.g. patch failures), the well-known increase in exploits that occurs after the disclosure of many new vulnerabilities [Arbaugh, McHugh 2000 pdf; Bilge, Dumitras 2013 pdf], and the opportunity costs associated with new requirements. When Google says, for example, “each day an actively exploited vulnerability remains undisclosed to the public and unpatched, more computers will be compromised” they neglect the significant likelihood that computers will be compromised regardless of the state of disclosure to the public and fall back on the age-old myth that only patches can protect systems.
  • It can lead to even more exploits and incidents. Anyone paying close attention to the vulnerability research community knows that there is wide variance in how researchers disclose their information, and that some decisions are made based on annoyance, frustration, spite, and sometimes even malice. If a vulnerability will get “noticed” more quickly, researchers may be tempted to “test” it in the wild in order to increase its priority level.

A company with the talent and resources of Google can do better. Here are some opportunities for improving the state of security on the Internet and addressing the real, significant risk associated with actively exploited 0days:

  • Encourage and train political activists in obfuscation and evasion techniques. It is difficult to debate a blanket policy across all scenarios when the discussion centers on arguably the most serious one – that involving physical harm. This case seems highly unlikely to be a common one, and the best way to discuss the overall implications of the policy is to set this scenario aside, as it tends to provoke an emotional reaction. As many of us know, there are many ways political activists can protect themselves online that would be much more effective than a 7-day disclosure policy, which takes effect only after they have been compromised.
  •  Increase focus on actively exploited 0days. Since these are the most important scenarios the techrisk profession has to deal with, Google should be making every effort to identify these exploits and employ or invent new ways to protect against them. Google researchers still participate in random, ineffective vulnerability research that simply distracts from this very real problem.
  • Provide more insight into the “dozens” of 0days identified “through the years,” as mentioned in the blog announcement. If there is one thing Google has, it is great data. As evidenced by past reports [Provos, 2008 pdf], Google could very easily provide more specific evidence on the number of 0days it has identified, the volume of exploits, and their disposition by vendors. The fact that it hasn’t yet, especially in the face of this policy announcement, is disappointing and makes it difficult to evaluate the measure.
  •  Take a risk-based approach to disclosure. Fast-moving worms do most of their damage in hours and days – in those cases, seven days is too long. Targeted attacks are unlikely to get repeated in a way that demands immediate attention for most environments – in those cases, seven days is too short. A risk-based approach would take into account the frequency of exploit, probability of future exploit within a target population, and impact of the exploit while evaluating the changes to these variables over time – in particular before and after disclosure.
  •  Monitor the situation closely. Google’s unique ability to gather data in this regard is worth mentioning again as a function of its ability to assess its own policy. Collecting and publishing data on actual 0days throughout their exploit lifecycle would be a boon to the entire profession.
  •  Initiate or participate in discussions to create new ways to address this very real problem. Commercial, community, and government mechanisms already exist for sharing data publicly and privately that could be used as models for minimizing the risks associated with these types of attacks. For example, a (private) process similar to federal wiretap capabilities in secrecy and opportunity may be more effective in addressing targeted attacks. There are countless other approaches that could be leveraged to address this problem.
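
To make the risk-based alternative above concrete, the tradeoff can be sketched as a toy expected-loss model. Everything here – the rates, the impacts, and the simple linear model itself – is a hypothetical illustration for discussion, not real data or anyone’s actual method:

```python
# Toy expected-loss model for reasoning about a disclosure window.
# All rates and impacts below are made-up illustrative numbers; a real
# risk-based policy would fit these to observed exploit data over time.

def expected_loss(days_undisclosed: float,
                  exploit_rate_pre: float,   # compromises/day while the 0day stays private
                  exploit_rate_post: float,  # compromises/day after public disclosure
                  days_to_widespread_patch: float,
                  impact_per_compromise: float) -> float:
    """Total expected loss: exposure before disclosure plus the
    post-disclosure window before most systems are patched."""
    pre = exploit_rate_pre * days_undisclosed
    post = exploit_rate_post * days_to_widespread_patch
    return (pre + post) * impact_per_compromise

# Opportunistic worm scenario: many victims per day, and disclosure
# itself spikes exploitation before patches roll out.
worm = expected_loss(days_undisclosed=7,
                     exploit_rate_pre=500, exploit_rate_post=5000,
                     days_to_widespread_patch=14, impact_per_compromise=1.0)

# Targeted-attack scenario: few victims per day, higher impact each,
# and little added post-disclosure exploitation.
targeted = expected_loss(days_undisclosed=7,
                         exploit_rate_pre=0.5, exploit_rate_post=2,
                         days_to_widespread_patch=14, impact_per_compromise=100.0)

print(f"worm: {worm:.1f}  targeted: {targeted:.1f}")
```

Even this crude sketch shows why a single 7-day number has no risk basis: in the worm scenario the per-day pre-disclosure losses dominate quickly (seven days is too long), while in the targeted scenario the daily accrual is small and a longer, coordinated window adds little risk (seven days is too short).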

Make no mistake, the Google 7-day policy announcement sheds light on a real and significant issue in technology-related risk. While it highlights some of the challenges techrisk professionals face on a daily basis, it also demonstrates significant deficiencies in its approach to addressing the problem. This is a great opportunity to evaluate the existing state of the Internet from a risk and security perspective, determine where inconsistencies or weaknesses lie, and map out a risk-based program that has the highest likelihood of success.