Once or twice each year, some security company trots out a “study” that counts the vulnerabilities found and fixed in widely used software products over a given period, and then crowns the worst offenders in a Top 10 list that is supposed to tell us something useful about the relative security of these programs. And nearly without fail, the security press parrots this information as if it were newsworthy.
The reality is that these types of vulnerability count reports — like the one issued this week by application whitelisting firm Bit9
— seek to measure a complex, multi-faceted problem from a single dimension. It’s a bit like trying to gauge the relative quality of different Swiss cheese brands by comparing the number of holes in each: The result offers almost no insight into the quality and integrity of the overall product, and in all likelihood leads to erroneous, even humorous, conclusions.
The Bit9 report
is more notable for what it fails to measure than for what it does, which is precious little: The applications included in its 2010 “Dirty Dozen” Top Vulnerable Applications list had to:
- Be legitimate, non-malicious applications;
- Have at least one critical vulnerability that was reported between Jan. 1, 2010 and Oct. 21, 2010; and
- Be assigned a “high” severity rating (a score of 7 to 10 on a 10-point scale, where 10 is the most severe).
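To see just how one-dimensional those criteria are, consider that the whole methodology boils down to a simple filter-and-count. The sketch below uses made-up records and illustrative field names (nothing here comes from Bit9’s actual data):

```python
from datetime import date

# Hypothetical vulnerability records; the field names and values are
# illustrative only, not drawn from the Bit9 report.
vulns = [
    {"app": "App A", "malicious": False, "cvss": 9.3, "reported": date(2010, 3, 2)},
    {"app": "App A", "malicious": False, "cvss": 7.5, "reported": date(2010, 6, 14)},
    {"app": "App B", "malicious": False, "cvss": 4.0, "reported": date(2010, 5, 1)},
    {"app": "App C", "malicious": False, "cvss": 8.1, "reported": date(2010, 11, 1)},
]

start, end = date(2010, 1, 1), date(2010, 10, 21)

def qualifies(v):
    # Mirrors the report's three criteria: a legitimate (non-malicious) app,
    # a report inside the study window, and a "high" score of 7-10 on the
    # 10-point severity scale.
    return (not v["malicious"]
            and start <= v["reported"] <= end
            and 7.0 <= v["cvss"] <= 10.0)

counts = {}
for v in filter(qualifies, vulns):
    counts[v["app"]] = counts.get(v["app"], 0) + 1

print(counts)  # raw tallies are all this methodology yields
```

Note that App C’s critical flaw vanishes from the tally simply because it was reported ten days after the cutoff — the count says nothing about how dangerous any of these flaws actually were.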
The report did not
seek to answer any of the questions that help inform how concerned we should be about these vulnerabilities, such as:
- Was the vulnerability discovered in-house — or was the vendor first alerted to the flaw by external researchers (or attackers)?
- How long did it take each vendor to fix the problem after it first discovered, or was first notified of, the flaw?
- Which products had the broadest window of vulnerability, from notification to patch?
- How many of the vulnerabilities were exploitable using code that was publicly available at the time the vendor patched the problem?
- How many of the vulnerabilities were being actively exploited at the time the vendor issued a patch?
- Which vendors make use of auto-update capabilities? And for those that do, how long does it take “n” percent of customers to be running the latest, patched version?
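Even one of the measures above — the window of vulnerability — requires data the raw counts ignore: when the vendor learned of a flaw and when it shipped a patch. A minimal sketch of that calculation, again with hypothetical vendors and dates:

```python
from datetime import date

# Hypothetical records: when each vendor was notified of a flaw
# and when it shipped a patch. All names and dates are invented.
flaws = [
    {"vendor": "Vendor X", "notified": date(2010, 2, 1), "patched": date(2010, 2, 10)},
    {"vendor": "Vendor X", "notified": date(2010, 4, 5), "patched": date(2010, 7, 20)},
    {"vendor": "Vendor Y", "notified": date(2010, 3, 3), "patched": date(2010, 3, 30)},
]

def avg_window_days(records, vendor):
    """Average number of days each flaw stayed open,
    measured from notification to patch."""
    spans = [(r["patched"] - r["notified"]).days
             for r in records if r["vendor"] == vendor]
    return sum(spans) / len(spans)

print(avg_window_days(flaws, "Vendor X"))  # one 9-day fix, one 106-day fix
```

Under a pure vulnerability count, Vendor X’s two flaws look identical; the window metric shows one was closed in nine days and the other sat open for more than three months — exactly the distinction the Bit9 report cannot make.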