Random Device Engagement Experiment
Text by Sean McElwee (@SeanMcElwee), co-founder of Data for Progress
Weighted by Evan Roth Smith and Sam Haass, co-founders of Slingshot Strategies
A few weeks back, I (Sean) read an interesting post on PredictWise about Random Device Engagement (RDE), which presented it as a new, quick, and cheap way of obtaining online panels with deep penetration (the blog estimated 70 percent).
RDE engages people in an online environment and offers incentives to participate in polling research, targeting app users. It is often used in consumer research. But could it be used in politics? I asked Data for Progress senior adviser Jacob Coblentz about it, and he was excited about the cost ($2 per complete with two screeners, which is dirt cheap) and suggested we try it. I asked my friends Sam Haass and Evan Roth Smith of Slingshot Strategies to come on board, and we decided to try an experiment. We would poll TX-22 (target N=420) and GA-Gov (target N=769). We also polled some policy issues and Trump favorables. We’re presenting those results here.
Even before fielding our polls, we had a few takeaways:
It’s really fucking cheap. The full cost of both surveys combined was around $3,000, though to create longer surveys, you’d need to pay monthly fees. In the polling world, this is extraordinarily cheap. Where coverage was good (GA statewide), it was reasonably fast, comparable to an online panel. And it was easy to use. We coded up our 17 questions (the limit for a free account) and screeners in less than an hour, and the platform produces built-in topline reports.
However, we ran into a few problems with using RDE for political polling. First, there is a strict 200-character limit on questions, which required us to play around a bit with some of our wording (this does not change, even with a paid plan). Second, in some areas, coverage was sketchy. In our TX-22 poll, for instance, we didn’t get enough N (we went to field October 31st and cut off on November 6th with 128 N). Our GA Governor poll collected 656 respondents in the same period (we were refunded for non-completed N, bringing the cost down). We considered polling in IA-04, but couldn’t do it because of coverage limitations. A paid plan would speed up the process of gathering N. Another problem is quotas. RDE’s age quotas lump everyone over 55 together, and the sample skews very young. This makes it difficult to sample age accurately, and we encountered some difficulty getting statistically appropriate samples by race as well. Lastly and most importantly, the firm we used did not have voter file integration.
We don’t have one yet. We wanted this to be an experiment, so we’re putting out our results ahead of the election. In the interests of transparency, the (unweighted) dataset is here.
RDE presented some unique challenges when it came to capturing a truly random, representative sample. Because the reach of the platform is not matched to voter files, we set age and race quotas based on a pastiche of 2014 and 2016 exit polling. This still returned a sample that skewed heavily Democratic (46D, 35R, 17I), so we weighted our results based on those same exit polls to reflect a more likely partisan turnout scenario (34D, 35R, 29I). This approach is supported by the latest early vote data, which shows Republicans with a narrow turnout advantage.
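The reweighting described above amounts to simple post-stratification on party ID: each respondent is weighted by the ratio of the target partisan share to the raw sample share. Here is a minimal sketch of that calculation (not our actual weighting code, which also adjusted for age and race), using the partisan splits reported above:

```python
# Partisan post-stratification sketch: weight = target_share / sample_share.
# Shares are the ones reported above; the full weighting also used age/race quotas.

sample_shares = {"D": 0.46, "R": 0.35, "I": 0.17}   # raw RDE sample
target_shares = {"D": 0.34, "R": 0.35, "I": 0.29}   # 2014/2016 exit-poll blend

weights = {party: target_shares[party] / sample_shares[party]
           for party in sample_shares}

# Democrats get down-weighted, independents up-weighted,
# and Republicans are left essentially unchanged.
for party, w in sorted(weights.items()):
    print(f"{party}: weight = {w:.2f}")
```

Each respondent's survey answers are then multiplied by their party's weight before computing toplines, which is how the weighted sample reflects the more Republican-leaning turnout scenario.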
Yet even with a weighted turnout advantage for Republicans, Abrams holds a 5-point lead, driven by heavy support from independent and unaffiliated voters. We'll understand more about the usefulness and limits of RDE, as well as the accuracy of our turnout model, once tomorrow's results come in.