There has been considerable research and debate regarding the problems of online panels.

This is a post from the Voice of Vovici Blog and is one of the more candid critiques of the disadvantages of online panels.

Originally Posted by Jeffrey Henning on Fri, Sep 04, 2009
I’ve ignored many of the initiatives to improve the quality of third-party online panels. To me, these initiatives are laughable. Yes, you should…
  • Seek to identify panelists participating in the same survey multiple times under different names
  • Remove respondents who speed through their answers
  • Have a broad-based demographic representation so that you do not need to weight individual respondents
But these simply put lipstick on the piggy bank. They make it easier for organizations to continue to put cost before quality and to justify doing research on the cheap with third-party panels. “See? The panel companies are working hard to ensure consistent high quality!”
Um, a consistently high-quality convenience panel is certainly better than a low-quality convenience panel. But it’s still a pig. Er, piggy bank: a cheap alternative to a random sample.
The laws of mathematics have not been repealed: a convenience sample cannot be used to extrapolate to any target audience. A convenience sample is representative of its respondents only. This point keeps getting lost, as I saw last year at the MRA Conference during the presentation “What’s the Catch? Does Sample Sourcing Matter?”:

A pointed question from the audience noted that probability sampling was the theoretical basis for the projectability of survey research and asked what the scientific underpinnings were for assuming that Internet research was similarly representative. Melanie [the presenter] answered that replicability is emerging as the standard instead of randomization and that the results from her research were replicable.

What “irrational exuberance” was to NASDAQ, the third-party online panel is to MR.
This week, Gary Langer, director of polling at ABC News, writes in his column:

A new study led by Stanford University researchers raises doubts about the accuracy of one of the most common forms of survey research, polls done among people who sign up to fill in questionnaires via the internet in exchange for cash and gifts.

In the most extensive such analysis to date, David Yeager and Prof. Jon Krosnick compared seven non-random internet surveys with two surveys based instead on random or so-called probability samples. The non-probability internet surveys were less accurate, and customary adjustments did not uniformly improve them.

While the random-sample surveys were “consistently highly accurate,” the internet surveys based on self-selected or “opt-in” panels “were always less accurate, on average, than probability sample surveys, and were less consistent in their level of accuracy,” the researchers said. Further, they said, adjusting these samples to known population values had no effect on accuracy (and in one case even worsened it) as often as that process, known as weighting, improved it.
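The weighting the researchers tested, adjusting a sample to known population values, is usually done by post-stratification: each respondent gets a weight equal to their demographic cell's population share divided by that cell's sample share. A minimal sketch of the mechanics, using an invented single-variable example (the age groups and census shares are assumptions, not data from the study), shows why weighting can only correct for variables you measure, which is one reason it fails to rescue self-selected panels:

```python
# Post-stratification weighting sketch (hypothetical data).
# Weight = population share of the respondent's cell / sample share of that cell.

from collections import Counter

def poststratify(respondents, population_shares):
    """Return one weight per respondent so the weighted sample matches
    the known population distribution on a single demographic variable."""
    counts = Counter(r["age_group"] for r in respondents)
    n = len(respondents)
    return [
        population_shares[r["age_group"]] / (counts[r["age_group"]] / n)
        for r in respondents
    ]

# Hypothetical sample that over-represents 18-34s (6 of 10 respondents,
# versus an assumed 30% population share).
sample = [{"age_group": "18-34"}] * 6 + [{"age_group": "35+"}] * 4
known_shares = {"18-34": 0.3, "35+": 0.7}

weights = poststratify(sample, known_shares)
# 18-34 respondents are down-weighted: 0.3 / 0.6 = 0.5
# 35+ respondents are up-weighted:     0.7 / 0.4 = 1.75
```

Note that the weights fix the marginal age distribution but do nothing about self-selection bias within each cell, which is consistent with the study's finding that weighting did not uniformly improve the opt-in panels.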

Most Vovici customers are surveying house lists of customers, employees, resellers and other key constituencies. It’s very easy to do a random survey of employees when you have the email address of every employee and have empaneled the list of employees by synchronizing your HRIS. For surveys of prospects, many organizations are using the web for all lead generation and can easily field random samples of prospects. Unless you’re an e-commerce or SaaS business, though, it is more difficult to build a representative house list of customers that you can then randomly sample: check out these tips for creating and managing representative email lists of your customer base.
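Drawing the random sample itself is the easy part once the house list exists. A minimal sketch, using invented placeholder addresses, of a simple random sample without replacement, the design that gives every member of the list an equal chance of selection and so supports projecting results to the whole list:

```python
# Simple random sample from a house list (addresses are invented placeholders).

import random

def draw_sample(house_list, n, seed=None):
    """Simple random sample without replacement: every address on the
    list has an equal probability of being invited."""
    rng = random.Random(seed)  # fixed seed makes the draw reproducible
    return rng.sample(house_list, n)

employees = [f"employee{i}@example.com" for i in range(500)]
invitees = draw_sample(employees, 50, seed=42)
```

The seed is optional; fixing it lets you document and reproduce exactly which addresses were invited for a given wave.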
Putting in regular processes to build a quality house list is like setting up automatic monthly withdrawals from checking to savings: better than the panel piggy bank as a way to save research costs in the long run. Building such a house list is a sound investment towards conducting quality, representative survey research.