More on Null Tag

TQBF comments on my Null Tag and Zero Day Initiative posts. It is nice to know somebody is reading them and considering them fully. That said, I am compelled to respond (and couldn’t post a comment on his blog without a Blogger account):

Part of my point is that there is no point – you are welcome to choose whatever reference you’d like. I was just trying to identify the oldest known public use of the phrase. Having been in the military myself, I can say the phrase wasn’t completely unknown to me.

The other part of my point is that anyone can pick any reference point they want – words and phrases change meaning over time, and ultimately the majority rules. This means that you are welcome to differ with me, but the vendors are as well. Whether or not the warez guys bothered to be consistent, I would be completely shocked if whoever coined the phrase for their purposes was not aware of its use in the military, and not surprised at all if the meaning had changed. And I don’t really care, as long as everyone is clear about how everyone else is using it – which, as you will hopefully see from this post, is somewhat arbitrary to begin with.

I think people tend to get too caught up in terminology – look at the word “hacker”. There is a key problem, however, in the case of “zero day”: how good is the protection we are providing? If the protection is based on whether we know about the vuln or not, I believe it is only good against the artificially-created threat. If it protects us against any exploit, even one targeting a vuln currently unknown to the general security community (as HIPS is intended to do), then the protection is better.
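
To make that distinction concrete, here is a minimal C sketch. It is only an illustration – the payload list and the W+X rule are my own stand-ins, not any particular product’s logic:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Signature approach: block only what is already known. */
    static const char *known_bad[] = { "CodeRed", "Slammer" };

    static bool signature_blocks(const char *payload) {
        for (size_t i = 0; i < sizeof known_bad / sizeof *known_bad; i++)
            if (strstr(payload, known_bad[i]))
                return true;
        return false;            /* an unknown exploit sails through */
    }

    /* Behavioral (HIPS-style) approach: block the technique itself -
     * here, any request for memory that is writable and executable -
     * whether or not the specific exploit has ever been seen. */
    #define REQ_WRITE 1
    #define REQ_EXEC  2

    static bool behavior_blocks(int requested_perms) {
        return (requested_perms & REQ_WRITE) && (requested_perms & REQ_EXEC);
    }

    int main(void) {
        printf("signature blocks unknown payload: %d\n",
               signature_blocks("brand-new exploit"));    /* prints 0 */
        printf("behavior blocks W+X mapping:      %d\n",
               behavior_blocks(REQ_WRITE | REQ_EXEC));    /* prints 1 */
        return 0;
    }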

TQBF has not interpreted my two ZDI points the way I intended, but the explanation is too long to get into here, and I don’t think it would change anyone’s mind to begin with. It is rooted in some previous commentary I’ve written – most recently in ComputerWorld, and further back elsewhere; a Google search on “folly” and “Lindstrom” will turn it up.

1 comment for “More on Null Tag”

  1. A. Nonymous
    August 10, 2005 at 3:12 am

    There has been an ongoing (and at this point it’s probably safe to say age-old) debate among security researchers as to whether it’s the Operating System’s job to prevent insecure programming practices, which inevitably result in software vulnerabilities, from adversely affecting the overall state of system integrity. Any hacker worth his salt will tell you that it’s the programmer’s job to implement and use APIs in a sane, efficient, and secure manner, but in reality this is a shortcut to thinking. I have always been of the personal opinion that in order to minimize the effect that “0day/NullTag/P.reviously U.ndisclosed V.ulnerabilities” (see, you too can create security terminology – I think this plays into Lindstrom’s point) have on system integrity, the Operating Systems themselves need to protect their own f!cking memory.

    At the risk of getting too technical: is it too much to ask that programs not store procedure redirection tables for shared library routines in writable memory, at least for applications running with privileged credentials? That an access violation be thrown when a process’s runtime registers are overwritten, or that user-definable data never be placed in an area of memory from which the runtime registers can be altered? (A rough sketch of the first idea follows.)
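
    To sketch what that first request looks like in practice – roughly what “read-only relocation” (RELRO) linking does – here is a toy C program that builds a function-pointer table, then marks its page read-only so a later overwrite faults instead of hijacking a call. The table name and layout are my illustration, not any real loader’s (Linux-specific mmap flags):

        #include <stdio.h>
        #include <sys/mman.h>
        #include <unistd.h>

        typedef void (*handler_t)(void);

        static void greet(void) { puts("hello"); }

        int main(void) {
            size_t page = (size_t)sysconf(_SC_PAGESIZE);

            /* A page-aligned stand-in for a shared library's
             * procedure redirection table. */
            handler_t *table = mmap(NULL, page, PROT_READ | PROT_WRITE,
                                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (table == MAP_FAILED) { perror("mmap"); return 1; }

            table[0] = greet;               /* "resolve" the symbol */

            /* Lock the table once it is populated. */
            if (mprotect(table, page, PROT_READ) != 0) {
                perror("mprotect");
                return 1;
            }

            table[0]();   /* calling through the table still works */
            /* table[0] = something_else;   would now fault (SIGSEGV)
             *                              instead of silently
             *                              redirecting control flow */
            return 0;
        }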

    Ok, so that last case is a bit far-fetched, as the programmer is the only one who knows which memory will contain user-supplied data, but the fact remains: if the OS is doing its job, none of these conditions should result in system compromise in the first place.

    I mean, sure, the advent of Perl brought to light the concept of TMTOWTDI (there’s more than one way to do it), and any hacker will tell you that the same holds true in vulnerability exploitation. This, however, doesn’t change the fact that the majority of system compromises are the result of the same old techniques being used to exploit the same types of vulnerabilities that have existed since the inception of the modern computing base.

    The more startling realization one should come to at this point is that Operating System developers have had to lean on the crutch of PaX, grsecurity, and other system integrity validation software to “stop the bleeding”, rather than finding the motivation to redesign their memory management in a secure fashion themselves. Halvar Flake made an excellent point in one of his recent Daily Dave posts: the current Cisco IOS heap exploitation techniques are all based around a single design flaw in the memory management. That [flaws in memory management] is where 99.9999999% of the software vulnerabilities that are widely exploited stem from.
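
    To make that kind of design flaw concrete, here is a deliberately broken C toy. Real allocators (glibc’s, or the IOS heap Halvar discussed) differ in the details, but the underlying issue is the same: bookkeeping data stored inline, directly adjacent to user buffers, so a plain overflow of one allocation rewrites the allocator’s own state. The chunk layout below is a simplified stand-in:

        #include <stdio.h>
        #include <string.h>

        struct chunk {
            char   data[16];     /* user buffer                        */
            size_t size;         /* inline metadata right after it     */
            struct chunk *next;  /* a pointer the allocator will trust */
        };

        int main(void) {
            struct chunk c = { .size = 16, .next = NULL };

            /* 24 attacker-controlled bytes into a 16-byte buffer:
             * the overflow lands squarely on the inline metadata.
             * (Intentionally buggy, to make the hazard concrete.) */
            const char payload[24] = "AAAAAAAAAAAAAAAA\x40";
            memcpy(c.data, payload, sizeof payload);

            /* On a typical little-endian layout this now prints 64,
             * not 16 - the allocator's view of the chunk is gone. */
            printf("size after overflow: %zu\n", c.size);
            return 0;
        }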

    Granted, we live in a world where database contents need to be made available to publicly accessible web sites, which gives rise to input validation problems; and web-based CGI applications do need to be able to run external programs to support their functionality, and these are problems which cannot be solved at the OS level. The fact remains that, more often than not, a machine is made vulnerable more by the design and implementation of its OS and shared libraries than by the ignorance and/or incompetence (in terms of secure practices) of its application programmers, system administrators, and end users. (One way to blunt the CGI case is sketched below.)
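
    Even that CGI case can be blunted in the application, though. A minimal C sketch, assuming a hypothetical CGI handler that must run an external program on a user-supplied filename – the danger is in handing the input to a shell, not in running the program:

        #include <stdio.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void) {
            /* Hostile "filename" a user might submit to a CGI form. */
            const char *user_input = "report.txt; rm -rf /";

            /* BAD: system() hands the string to a shell, which
             * happily honors the ';' and runs the second command:
             *   char cmd[256];
             *   snprintf(cmd, sizeof cmd, "cat %s", user_input);
             *   system(cmd);
             */

            /* SAFER: exec the program directly. user_input is one
             * discrete argv element; no shell ever parses it. Here
             * cat merely fails to find the oddly named file, which
             * is exactly the safe outcome. */
            pid_t pid = fork();
            if (pid == 0) {
                char *const argv[] = { "cat", (char *)user_input, NULL };
                execv("/bin/cat", argv);
                _exit(127);               /* exec failed */
            }
            waitpid(pid, NULL, 0);
            return 0;
        }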

    With all that being said, and to summarize a very long and convoluted point: the only way to mitigate the impact that PUVs have on system integrity is through changes to the design and implementation of Operating Systems that prevent those vulnerabilities from being useful to an attacker.
