The Anti-Malware Testing Standards Organization (AMTSO) is currently meeting to discuss, among other things, the adoption of guidelines for testing the False Positive (FP) rate of antivirus programs. A False Positive occurs when the antivirus utility erroneously wipes out a file that is not malicious. Protection against viruses is essential, but when that protection backfires it can cause huge problems.

The Worst False Positives
Not all FPs are created equal. If the antivirus deletes a brand-new download, you can usually mark the file as trusted and try again. But if it deletes an essential system component, as happened to some McAfee users this past April, it can bring down the whole computer.
An antivirus that erroneously wipes out a file present on just 10 computers worldwide causes far less trouble than one that kills off a file present on 10 million computers. Clearly testers should account for both factors when evaluating a product for false positives, but how do you find out a given file's prevalence?
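To make the weighting idea concrete, a tester could score each product not by its raw FP count but by the total prevalence of the files it misdetects. This is a minimal sketch, not any tester's actual methodology; the function and file names are hypothetical:

```python
# Hypothetical sketch: weight each false positive by the file's
# prevalence, so an FP on a file found on millions of machines
# counts far more than one on a file found on a handful.

def fp_impact_score(false_positives):
    """Sum of prevalence counts across all misdetected files.

    false_positives: list of (file_name, machines_file_is_found_on)
    """
    return sum(prevalence for _name, prevalence in false_positives)

# Two products with the same raw FP count (two each) score very
# differently once prevalence is factored in:
product_a = [("rare_tool.exe", 10), ("obscure.dll", 25)]
product_b = [("common_sys.dll", 10_000_000), ("updater.exe", 500)]

print(fp_impact_score(product_a))  # 35
print(fp_impact_score(product_b))  # 10000500
```

Under this kind of scoring, product B's two false positives would dominate product A's, matching the intuition that harm scales with how widespread the wrongly flagged file is.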
Many vendors automatically collect statistics about the files found on every user's system, usually after asking permission. For example, Symantec's huge Norton Insight database stores prevalence information that's integral to malware detection in Norton Antivirus 2011
. Serious testers and researchers can apply to Symantec for access to a prevalence tool that allows direct queries to the Norton Insight database.
The best source of information on a file's prevalence, then, may come from the antivirus vendors. Naturally the tester would collect and collate information from all willing vendors. Just how accurate would this information be?
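Collating prevalence figures from multiple vendors raises its own question: vendors' install bases overlap, so simply summing their counts would double-count machines. One conservative approach, sketched below under that assumption (the data structures and numbers are hypothetical), is to take the maximum count any single vendor reports for a file:

```python
# Hypothetical sketch: combine per-vendor prevalence reports.
# Vendor user bases overlap, so summing would double-count machines;
# taking the maximum across vendors yields a conservative lower bound.

def collate_prevalence(vendor_reports):
    """vendor_reports: dict mapping vendor name -> {file_id: count}."""
    combined = {}
    for report in vendor_reports.values():
        for file_id, count in report.items():
            combined[file_id] = max(combined.get(file_id, 0), count)
    return combined

reports = {
    "VendorA": {"abc123": 9_000_000, "def456": 12},
    "VendorB": {"abc123": 7_500_000, "xyz789": 40},
}
print(collate_prevalence(reports))
# {'abc123': 9000000, 'def456': 12, 'xyz789': 40}
```

Even so, the combined estimate is only as good as the vendors' own telemetry, which is exactly the accuracy question at issue.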