Thursday, July 2, 2009

Have you heard the one about the independent testing lab?

They always independently verify that their client is the best.

Independent tests these days are a joke.

In the last week, two different reports from December 2008 came to my attention: one from Cascadia Labs commissioned by Trend Micro and the other from Tolly Group commissioned by Websense. They both have sections on the effectiveness of the major web filtering companies in blocking malicious websites.

Of these two reports, the Cascadia Labs report was slightly fairer, ranking Trend Micro as able to block 53% of web threats (the highest, presumably with anti-virus enabled as well as URL filtering), followed by McAfee (42%), Blue Coat (31%), Websense (23%), and IronPort (20%). I'm ignoring the SurfControl entry (9%): since Websense bought SurfControl, the product is essentially defunct and SurfControl partners are being urged to switch to Websense.

The Tolly Group report said, "In tests with 379 URLs containing binary exploits or compromise code, Websense blocked 99% of URLs, versus other vendors who blocked between 53% to 91%." Let's look at just the results for Websense versus Trend Micro in terms of exploit detection in the two tests:

Report      Trend Micro    Websense
Tolly       53%            99%
Cascadia    53%            23%

Well, Trend Micro is consistent, but depending on who you ask, Websense is either twice as good or half as good as Trend Micro. But here's the kicker: the Tolly report says, "All the URLs tested were mined from Websense ThreatSeeker network." So what they're saying is that Websense is very good (but not perfect) at detecting exploits on URLs it already knows to have exploits.

Now here's the bottom line. A lot of folks make claims about security, but it's a hard thing to verify. eSoft, the sponsor of this blog, for example, detected 35k new malicious URLs last week and currently has over 1.5m recently verified malicious URLs in its database. The combined lists of Google, Trend Micro, Sunbelt, PayPal, Mozilla, AOL, and Consumer Reports, on the other hand, total only 318k [source: stopbadware.org]. But those 318k might include URLs not covered in the eSoft list, so the question becomes: how do you test these types of products?
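
To make that point concrete, here is a rough sketch of how you might measure the overlap between two URL lists before comparing raw sizes. The file names and the normalization rules are my own assumptions for illustration, not any vendor's actual export format.

```python
from urllib.parse import urlparse

def load_urls(path):
    """Load one URL per line, normalized to lowercase host plus path."""
    urls = set()
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            parsed = urlparse(line if "://" in line else "http://" + line)
            urls.add(parsed.netloc.lower() + parsed.path.rstrip("/"))
    return urls

# Hypothetical file names standing in for two exported blocklists.
list_a = load_urls("esoft_urls.txt")
list_b = load_urls("combined_lists.txt")

print("Only in A:", len(list_a - list_b))
print("Only in B:", len(list_b - list_a))
print("In both:  ", len(list_a & list_b))
```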

I have some thoughts on how truly independent testing could be done, including the collection and verification of malicious URLs without relying on a particular list that some vendor may already include directly, but I want to put the question out there. What testing methodology should be used in a fair comparison of different products' ability to block access to compromised, phishing, and otherwise malicious websites? Should the tests include things like malware call-home addresses? If so, where would those URLs come from? What is a fair sample size? What is a fair timeframe from first detection? Any feedback would be appreciated.
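
For what it's worth, here is a minimal sketch of the kind of test harness I have in mind, assuming the product under test sits in line as an HTTP proxy and serves an identifiable block page. Every name, address, and marker string below is a placeholder, not a real product's configuration.

```python
import random
import requests  # third-party HTTP library

# Assumed for illustration: the proxy address of the product under test
# and the text that appears on its block page.
PROXY = {"http": "http://filter-under-test:8080",
         "https": "http://filter-under-test:8080"}
BLOCK_MARKER = "Access Denied"

def is_blocked(url):
    """Return True if the filter refuses the connection or serves a block page."""
    try:
        resp = requests.get(url, proxies=PROXY, timeout=10)
    except requests.RequestException:
        return True  # a reset or refused connection counts as a block here
    return resp.status_code == 403 or BLOCK_MARKER in resp.text

# Independently verified malicious URLs, one per line (hypothetical file).
with open("verified_malicious_urls.txt") as f:
    corpus = [line.strip() for line in f if line.strip()]

# Same fixed-size random sample for every product under test.
sample = random.sample(corpus, min(1000, len(corpus)))
blocked = sum(is_blocked(u) for u in sample)
print(f"Blocked {blocked}/{len(sample)} ({100.0 * blocked / len(sample):.1f}%)")
```

A real test would also need to record when each URL was first seen, so the results could be broken out by time since first detection.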

2 comments:

Unknown said...

Cascadia Labs has a history of providing credible reviews to the endpoint security, security gateway, and data protection industries. I'd like to point out a few things about the test you mention:

* Cascadia Labs sources its URLs completely independently - vendors don't provide URLs for testing as that would discredit the results
* Our corpus, which has to be of very high quality in order to produce credible results, contains millions of URLs in over 25 categories and we routinely test with tens of thousands of randomly chosen URLs for a given test run
* We have created our own independent collection system and human review to find and categorize URLs and find the latest in-the-wild Web threat URLs
* We invite all vendors to participate in our tests
* In addition to these published reports, we routinely perform internal tests for clients that are never published - clients use our services to improve their products, drawing on an independent, unbiased, and knowledgeable third-party test lab for valuable insight

We invite feedback and criticism from vendors, companies that use these products, and the general community. This is the audience we serve, but we ask that this criticism be reasonable.

Rob Lipschutz
Partner, Cascadia Labs

Unknown said...

Without entering into the discussion of who is more effective at blocking sites, what I would really like to see is a test giving information not about how many sites were blocked, but about how many trustworthy websites are blocked by URL filtering applications.

From our experience, the major problem when using such solutions is not blocking sites that must be blocked, but trying to use sites that are not supposed to be blocked. This is the real problem.

A test like this would reveal some interesting results.

Antonio Goncalves
DMZONE, LDA