If I were a Czar*

Via New School we have a challenge from Michael Tanji:

You are the nation's new cyber czar/shogun/guru. You know you can't
_force_ anyone to do jack, therefore you spend your time/energy trying
to accomplish what three things via influence, persuasion, shame and
force of will?

I think pretty much every security professional should take this one point to heart – you can't force anyone to do jack – click your heels together, look in the mirror, deny it for as long as you want… but it is so true. But don't cry about it. Figure out a way around it.

Getting back to the question, here are my three things I would do as Cybersecurity Czar:

  1. Promote and engender industry support for Software Safety Data Sheets and/or Software Facts Labels, the former providing a mechanism for inline protection by host intrusion prevention and the latter providing software details to assist humans in recognizing and assessing risk factors of the software they are evaluating.
  2. Create a lab, or promote existing labs, for use in validating the strengths and weaknesses of various software applications and platforms. The lab would consist of enough systems (VMs, likely) to allow for single-variable manipulation in controlled experiments. This lab could test the effect of each individual configuration setting, or of patched vs. unpatched systems, etc.
  3. Create secret backdoors in software and a secret universal monitoring program for the federal government. (Oops, guess I blew the "secret" part, and some may also suggest I am too late, anyway.) 
  4. Since number 3 is really a joke (you did get that, right?), I have another option – find a stronger way for universal identification and authentication other than using social security numbers. Push the industry towards a better solution by publishing all SSNs.
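To make the Software Facts Label idea from item 1 concrete, here is a minimal sketch of what such a label might contain – every field name and value below is hypothetical and illustrative, not part of any proposed standard:

```python
# Hypothetical sketch of a "Software Facts" label, modeled loosely on a
# nutrition label. All field names and values are illustrative only.
label = {
    "name": "ExampleApp 2.1",
    "vendor": "Example Corp",
    "open_ports": [443],
    "runs_as_admin": False,
    "network_listeners": 1,
    "known_cves_last_12mo": 3,
    "memory_safe_language": False,
}

def render(facts):
    """Print the label in a fixed-width, nutrition-label style."""
    width = 34
    lines = ["Software Facts".center(width, "="), "-" * width]
    for key, value in facts.items():
        lines.append(f"{key:<24}{value!s:>10}")
    return "\n".join(lines)

print(render(label))
```

The point of the fixed format is the same as a nutrition label's: a human evaluating software can scan a handful of standardized risk factors at a glance, and a host intrusion prevention system could consume the same fields programmatically.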

(*sung to Beyonce's "If I were a boy" – here's hoping Hoff or Shrdlu strut their stuff and flesh this one out! Maybe that's what it takes to find a cybersecurity czar who will stick around -> Beyonce singing about it… Btw, I really don't get the song, but maybe that is the point… ;-) )

4 comments for "If I were a Czar*"

  1. Adam
    August 20, 2009 at 1:39 pm

    Thanks for picking this up!

    On #2, how do you ensure that it does better than common criteria labs?

  2. Pete
    August 20, 2009 at 4:17 pm

    @Adam -

    The point of #2 is to deploy systems live on the 'Net but allow for control groups and test groups. Since with computers you can manipulate single variables (e.g. config settings), you could evaluate the impact of changing configurations using real data – evidence and outcomes.

    A smart man once said the following:

    “Another way to put this is if you want to improve something, you have to start by measuring it. Let’s start measuring security outcomes, so we can start assessing the processes, errors or hostile acts that lead to those outcomes.”

    That is the goal of the lab – to create experiments where we can measure outcomes. (Perhaps not exactly what that smart man intended, but it fits well, IMO).

    Btw, am I hired? ;-)

    Pete

  3. Adam
    August 20, 2009 at 5:31 pm

    I’m still a little confused. If the system is on the internet, how do you ensure that A & B both get hit by the same attacks?

    BTW, since my VP (Scott Charney) is in the touted shortlists, I’m not sure if I can endorse anyone. He might take it amiss if I endorse him, and likely would if I endorse someone else.

  4. Pete
    August 20, 2009 at 7:14 pm

    @Adam -

    Part of the point is that the attacks will be "random" or will follow some other controlled distribution. We don't need to know whether some configuration could be compromised by some specific attack – we can test that easily. What we need to know is how likely it is in the wild – taking into account volume of benign activity, time, perhaps strategic placement at various points throughout the 'Net, etc.

    So, if we have two groups of 5, 50, 500, 5000 systems strategically placed and they have very specific configurations to test for some single variable or set of variables (e.g. aggregated patches), then we can see how long they last under those circumstances. And we can do it over and over, continuously measuring the circumstances.
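As a sketch of how the analysis for such an experiment might look, the snippet below compares survival times (hours until first compromise) between two hypothetical configuration groups. The group names, sample sizes, and exponential draws are all illustrative stand-ins for real field measurements:

```python
import random
import statistics

random.seed(42)

# Hypothetical data: each deployed system records how long it survived
# (hours until first compromise). The exponential draws below stand in
# for real measurements from strategically placed honeypots.
unpatched = [random.expovariate(1 / 40) for _ in range(500)]   # mean ~40h
patched = [random.expovariate(1 / 120) for _ in range(500)]    # mean ~120h

def summarize(name, times):
    """Report median survival time for one configuration group."""
    med = statistics.median(times)
    print(f"{name}: n={len(times)}, median survival = {med:.1f}h")
    return med

m_unpatched = summarize("unpatched", unpatched)
m_patched = summarize("patched", patched)
print(f"patched systems lasted ~{m_patched / m_unpatched:.1f}x longer")
```

Because only the single variable under test differs between groups, the difference in measured survival time can be attributed to that variable – which is the evidence-and-outcomes measurement the lab is meant to produce.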

    This is really just a more formalized way of doing what (I think) the Internet Storm Center has been doing with its survival time measurements – the time until an unprotected system is compromised.

    We would certainly need to think through the client vs. server and active vs. passive activity of the groups (e.g. honeymonkeys vs. honeypots).

    The use case that goes through my mind the most is for enterprises considering an update (think about your patch timing paper) trying to decide when to patch by measuring the threat component of risk. The idea isn’t about possibility, it is about probability.

    Pete

Comments are closed.