Who's No. 1? Depends Who Counts

Why is AOL the top-rated destination according to one measurement firm, and MSN the top site according to another? It's because statistics can say just about anything. By Joanna Glasner.

Whether it's ballots, beans or Internet traffic, the science of counting tends to follow the same principle:

The more folks there are compiling the numbers, the more likely it is that disagreement will arise over whose figures are correct.

That is a lesson many Americans learned in the recounting drama following last November's presidential election.

In the comparatively low-stakes business of tabulating online traffic, it is a lesson that audience measurement firms are finding reinforced every day.

From panel surveys to server logs to Internet service provider records, Web data collectors have employed an ever-increasing array of methods in recent years to assiduously track the movements of Net users.

The trouble is that websites, and the companies that buy advertising on them, aren't always pleased with the results.

"The big issue is the numbers don't match up, and that's created quite a crisis in confidence," said Jim Spaeth, president of the Advertising Research Foundation, which is working with other industry groups to develop uniform standards for measuring Web traffic.

Although standardization efforts have made some progress in the last few years, companies are still far apart on such issues as how to distinguish human visitors from automated "bots" that routinely crawl the Internet in search of data.

Last week, the dream of a one-size-fits-all measurement standard moved a little closer to fruition with the announcement from NetRatings (NTRT) -- the Web tracking firm developed by television measurement giant Nielsen Media Research -- that it would buy rival Jupiter Media Metrix (JMXI) for a hefty premium.

From a business perspective, the deal made great sense. The companies have much in common: Both employ a panel of Internet users (over 100,000 worldwide for Jupiter and 225,000 worldwide for NetRatings) who agree to have their online activities tracked.

Previously, the two were head-to-head competitors. Earlier this year, in fact, Jupiter filed a patent infringement lawsuit against NetRatings for its tracking technology. The suit is no longer being pursued.

Now, said Tim Meadows, NetRatings' vice president of products and services, the plan down the road is to adopt a single set of tracking standards and use only one panel.

But although the announcement of the acquisition sent Jupiter's stock soaring, the jury is still out on what it will mean for audience measurement. One reason is that although NetRatings and Jupiter Media Metrix are arguably the best-known names in measurement, they're facing increasing competition from a number of newer firms that employ a range of creative methods to track Web behavior.

"There are lots of different ways you can measure different things like an impression, a click, a user, a page view and an advertisement," said Stephen DiMarco, vice president of marketing at Compete Inc., a company founded last year that licenses data from ISPs to analyze online traffic patterns.

Because it collects aggregate data from an estimated 9 million ISP customers, Compete claims its analysis offers more detail than panel-style measurement.

Another rival, ComScore, has amassed a larger panel than either Jupiter or NetRatings (an estimated 1.5 million users) through an unorthodox technique: The company gives panelists a service that speeds up the loading of Web pages in exchange for having their usage data collected.

Individual sites also do much of their own traffic analysis using internal server logs, often coming up with numbers that are vastly different from those the panel services provide. A study commissioned by the Advertising Research Foundation, for example, found that server-log results could be anywhere from three times larger than panel numbers to one-fourth their size.
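To see why the same traffic can yield such different totals, consider what an in-house analysis actually does. The sketch below is a simplified example assuming Apache-style log lines; every threshold in it, from which file types count as pages to treating one IP address as one person, is a definitional choice that shifts the final numbers.

```python
import re

# A simplified in-house log analysis, assuming Apache-style "combined" log
# lines. Real sites also wrestle with cookies, sessions, proxies and caching,
# which is part of why their totals diverge from everyone else's.
LOG_LINE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?P<method>\S+) (?P<path>\S+)[^"]*" '
    r'(?P<status>\d{3}) \S+ "[^"]*" "(?P<agent>[^"]*)"'
)

STATIC_SUFFIXES = ('.gif', '.jpg', '.png', '.css', '.js')

def tally(log_lines):
    page_views = 0
    visitor_ips = set()
    for line in log_lines:
        m = LOG_LINE.match(line)
        if not m:
            continue  # skip malformed lines
        # Definitional choice: only successful, non-asset requests are "pages."
        if m['status'] != '200' or m['path'].lower().endswith(STATIC_SUFFIXES):
            continue
        page_views += 1
        # Definitional choice: treat one IP address as one visitor, even
        # though proxies can hide many users behind a single address.
        visitor_ips.add(m['ip'])
    return page_views, len(visitor_ips)

sample = [
    '1.2.3.4 - - [20/Oct/2001:10:00:00 +0000] "GET /index.html HTTP/1.0" 200 5120 "-" "Mozilla/4.0"',
    '1.2.3.4 - - [20/Oct/2001:10:00:01 +0000] "GET /logo.gif HTTP/1.0" 200 900 "-" "Mozilla/4.0"',
    '5.6.7.8 - - [20/Oct/2001:10:00:05 +0000] "GET /index.html HTTP/1.0" 200 5120 "-" "Googlebot/1.0"',
]
print(tally(sample))  # (2, 2) -- and one of those two "visitors" is a bot
```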

Experts say some of these discrepancies can be explained.

"A lot of the confusion has to do with the market not understanding that these are measuring different things," he said.

Meadows attributes some of the difference to the fact that server results often include site visits from non-human agents like bots that crawl the Web in search of data.
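Filtering those agents out is itself an inexact art. The toy example below assumes hit records reduced to IP and user-agent pairs, and its list of bot markers is a stand-in for an industry roster like the one mentioned later in this story; it shows how much a raw hit count can shrink once non-human visitors are excluded.

```python
# A toy illustration of the bot-inflation problem Meadows describes. The
# marker list is an assumption for this sketch, not any firm's actual
# filter; real filtering also needs IP blocklists, since many automated
# agents do not announce themselves in the user-agent string.
KNOWN_BOT_MARKERS = ('googlebot', 'slurp', 'crawler', 'spider', 'bot')

def is_bot(user_agent: str) -> bool:
    ua = user_agent.lower()
    return any(marker in ua for marker in KNOWN_BOT_MARKERS)

hits = [
    ('1.2.3.4', 'Mozilla/4.0 (compatible; MSIE 5.5)'),
    ('5.6.7.8', 'Googlebot/1.0'),
    ('9.9.9.9', 'Scooter/3.2 crawler'),
]

raw_count = len(hits)
human_count = sum(1 for _, ua in hits if not is_bot(ua))
print(raw_count, human_count)  # 3 raw hits, but only 1 after bot filtering
```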

Meadows said panel data was never intended to provide a detailed count of every visitor to a website. Panels are better suited to giving a company a broad overview of the kind of people who visit its site, or how it stacks up against competitors.
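The arithmetic behind a panel helps explain why. A panel projects from a sample to the whole online population, so its estimates carry statistical error bars, especially for smaller sites that may be visited by only a handful of panelists. The figures in the sketch below are assumptions for illustration, not numbers from NetRatings or Jupiter.

```python
import math

# A minimal sketch of panel-to-population projection, under assumed figures.
panel_size = 100_000           # metered panelists
panel_visitors = 1_200         # panelists who visited the site this month
online_universe = 100_000_000  # assumed total Internet population

reach = panel_visitors / panel_size           # share of the panel that visited
estimated_audience = reach * online_universe  # projected visitors sitewide

# Binomial standard error: a small site's estimate rides on a handful of
# panelists, one reason panel figures were never meant as exact head counts.
se = math.sqrt(reach * (1 - reach) / panel_size)
low = (reach - 1.96 * se) * online_universe
high = (reach + 1.96 * se) * online_universe
print(f"{estimated_audience:,.0f} visitors (95% CI roughly {low:,.0f}-{high:,.0f})")
```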

However, it's not just different methods of tracking that produce varying results. Even panels can come up with different findings.

Anyone who requires proof of how inconsistent Internet measurement data can be need only look at the lists of top Internet properties published each month by Jupiter Media Metrix and NetRatings. It's not uncommon for AOL to rank first in one index and MSN to rank first in the other.

NetRatings said some of the inconsistencies stem from the fact that measurement firms have different criteria for what they count as a page view. An example Meadows cited was the counting of "pop-under ads," such as the ubiquitous "X-10 spy camera" browser window that has irritated millions of Net users this year.

When tabulating its rankings, Media Metrix included the camera ads, reasoning that they were in fact viewed by users.

NetRatings, however, decided not to count pop-under ads as page views, since users didn't deliberately go to those windows.
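The effect of that single policy choice is easy to demonstrate. In the stylized sketch below, the same browsing session is counted under both rules; the "user_initiated" flag is a hypothetical field, since neither firm has published the details of its counting logic.

```python
# The same session, counted under two hypothetical rules. The record
# layout and flag names are assumptions for illustration only.
session = [
    {'url': 'aol.com/news',     'user_initiated': True},
    {'url': 'x10.com/popunder', 'user_initiated': False},  # pop-under ad window
    {'url': 'aol.com/mail',     'user_initiated': True},
]

# Media Metrix-style rule: every window rendered to the user is a page view.
include_popunders = len(session)

# NetRatings-style rule: only pages the user deliberately went to count.
exclude_popunders = sum(1 for view in session if view['user_initiated'])

print(include_popunders, exclude_popunders)  # 3 vs. 2 page views, same session
```

Multiply that one-page-view gap across millions of sessions, and two firms watching identical behavior will publish visibly different rankings.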

Who's right? At this stage, there isn't really a right answer, said ARF's Spaeth. Until the industry adopts a uniform set of rules for measuring traffic, deciding who ranks as the most-visited site will have to be a subjective judgment.

Spaeth's call for standards was echoed by the Internet Advertising Bureau, which recently published a glossary defining common industry terms such as "page view" and "unique user." The industry group also compiled a list of spiders and bots to aid websites in filtering out automated agents when gathering traffic data.

For companies that sell advertising online, the statistical confusion may be hurting their bottom line.

The lack of reliable audience measurement data was cited in a recent survey by the Association of National Advertisers (ANA) as the No. 3 reason why companies held back spending on online advertising. Budget constraints and lack of proof of return on investment ranked first and second.

Although more firms spent ad money online than in the previous year, they would have spent more had they been more confident about what they were getting, said Barbara Mirque, an ANA vice president.

"There isn't standardization on the metrics coming out of the Internet, so from an advertiser's perspective, you might not know which number is the right number," she said.