Many Hands Make Light Work

By Aaron Strauss (@aaron_strauss)

Setting aside a terrible Senate map, Democrats won a clear majority in the House (in seats and in the popular vote) and prevailed in crucial state battles across the country. Many journalists will spend the next few weeks dissecting what the Democrats did that “worked” or “didn’t work.” I run the Analyst Institute, and part of our mission is to evaluate the impact of voter contact programs (e.g., canvasses, paid digital ads, mailers). In this post, I want to help you be a smart consumer of that punditry.

Based on my experience, most programs have a small positive effect on the election outcome. Every phone call to a friend, every conversation at a stranger’s door, every handwritten letter, and yes, every TV ad (on average) helps a little bit. For instance, when MoveOn tested its digital program in Virginia and Alabama in 2014, the net impact on the Democratic candidates’ overall margins was 0.4 percentage points. A single program can be the difference in a very close race. As I write, the difference between Nelson and Scott in Florida is just 0.2 percentage points.

Further, the impacts of multiple programs combine to make the difference in more than just the closest of races. With millions of people volunteering and hundreds of organizations investing in races, many hands make light work, and the small effect that each voter contact has (whether from volunteers or paid staff) adds up to victories across the country.

As a disclaimer, I should say that some factors have huge electoral impacts. The national mood of this midterm election, for instance, was undergirded by low approval ratings for President Trump. Also, candidates matter. For example, Obama being on the ballot led to massive turnout of African Americans in 2008 and 2012. Some candidates are capable of inspiring a town, state, or nation--and winning in tough environments.

In terms of voter outreach, modest average effects mask differences in the potency and quality of individual programs. It’s important to understand the proper ways to evaluate the impacts of these programs. One terribly misguided approach is to believe that everything a winning campaign did was “the best thing since sliced bread” and that the losing campaign was “the worst campaign ever.” For example, did you know that in 2000 the Bush campaign spent millions of dollars advertising in California? Hardly a savvy move from a “winning” campaign. This correlation-implies-causation fallacy (immortalized on The West Wing as post hoc ergo propter hoc) may seem obvious, but it’s an easy trap to fall into.

Another methodological approach worthy of deep skepticism is a comparison between voters who were contacted and those who weren’t. For instance, a group might find that voters contacted at the door cast ballots at a rate of 60 percent while those who weren’t home voted at a rate of 45 percent. It would be truly remarkable if a canvass accounted for that difference. But this analysis is undoubtedly spoiled by the fact that voters who are home to be canvassed are already more likely to vote (e.g., because they haven’t moved). In fact, if you take these analyses at face value and evaluate the 2016 turnout rate of folks canvassed in 2018, you’ll likely find that the 2018 canvass time-traveled backwards two years and affected 2016 voting rates!
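
To make the selection problem concrete, here is a toy simulation (a minimal sketch in Python; the numbers are made up, not real canvass data). The canvass has zero true effect by construction, yet the contacted-versus-uncontacted comparison shows a double-digit “effect” in 2018 and, absurdly, in 2016 as well:

```python
# Toy simulation (hypothetical numbers, not real data): the trait that makes a
# voter reachable at the door (e.g., not having moved) also makes them more
# likely to vote, so a naive contacted-vs-uncontacted comparison is biased.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Latent "rootedness": stable, reachable voters are also habitual voters.
rooted = rng.random(n) < 0.5

# Being home for the canvasser depends on rootedness...
contacted = rng.random(n) < np.where(rooted, 0.6, 0.2)

# ...and so does turnout, in BOTH years. The true canvass effect here is zero.
voted_2016 = rng.random(n) < np.where(rooted, 0.70, 0.40)  # before the canvass
voted_2018 = rng.random(n) < np.where(rooted, 0.60, 0.30)  # after the canvass

def naive_gap(turnout, contacted):
    """Contacted-minus-uncontacted turnout gap, in percentage points."""
    return 100 * (turnout[contacted].mean() - turnout[~contacted].mean())

print(f"Apparent 2018 'effect': {naive_gap(voted_2018, contacted):+.1f} pts")
print(f"Apparent 2016 'effect': {naive_gap(voted_2016, contacted):+.1f} pts")
```

Both gaps come out around 12 points even though the canvass did nothing; the second line is the “time travel” check described above.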

A similar complication arises when comparing geographic areas or districts that were full of activity with those that were ignored by groups. The choice of “where to play” is not made in a vacuum; multiple organizations might decide to play in the same area. So if a Republican group says, “Republicans did better in these three districts where we ran radio ads,” ask yourself: “Why did the group pick those districts? Were they always going to be more competitive? Did other groups make similar decisions, and were they thus equally responsible for the outcome?” It’s fine for groups to run programs in places with a high likelihood of electoral victory, but they can’t then use those wins to prove their program was successful.

The proper way to measure impact is to run an experiment in which the activity is conducted among a random set of voters or within a random set of precincts. Leaving a control group uncontacted might seem like a tough ask--and it’s certainly not advisable in all cases--but many groups use randomized controlled trials to accurately measure their impact (just like in medicine). Some of the groups that have gone public with their results include Vote Forward, vote.org, Working America, HMP, and, as mentioned above, MoveOn. These organizations are committed to learning, iterating, and optimizing their programs. As a consumer of political analysis, you should be on the lookout for groups that can back up their claims with rigorous evidence from randomized controlled trials. One way to balance measurement against full coverage is to run randomized controlled experiments earlier in the campaign and then apply those findings to the entire electorate in the fall. That way, groups aren’t leaving critical voters uncontacted in October, yet they can still be confident that the fall program is having a meaningful impact.
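
To see what that kind of measurement looks like in practice, here is a minimal sketch of the difference-in-means estimate an RCT produces. Everything in it is hypothetical: the universe size, the 50 percent baseline turnout, and the 0.4-point true effect are illustrative numbers, not results from any real program.

```python
# Minimal RCT sketch: random assignment makes treatment and control comparable,
# so a simple difference in turnout rates is an unbiased estimate of the effect.
# All numbers are illustrative, not from any real program.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Randomly assign half of the target universe to receive the program.
treated = rng.random(n) < 0.5

# Simulate turnout: 50% baseline plus a true effect of +0.4 percentage points.
true_effect = 0.004
voted = rng.random(n) < (0.50 + true_effect * treated)

# Difference in means and its standard error.
p_t, p_c = voted[treated].mean(), voted[~treated].mean()
n_t, n_c = treated.sum(), (~treated).sum()
se = np.sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)

print(f"Estimated effect: {100 * (p_t - p_c):+.2f} pts (SE {100 * se:.2f} pts)")
```

Note that even with 200,000 voters in the experiment, the standard error is about half the size of the effect itself; effects this small demand very large samples, which is part of why rigorous measurement takes real discipline.
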
Because the impacts of voter outreach programs are often modest, we commend the courage of campaigns and organizations that rigorously measure their effects. Especially on the independent expenditure side, where tens of millions of dollars are often on the line, it takes a strong stomach to measure the number of votes generated by a specific program. We are heartened that more and more folks in the progressive community are holding themselves accountable and learning how to be even better campaigners by 2020.

And most importantly, if you were out there talking to voters, writing letters, making calls, sending texts: thank you! Science shows you helped make a difference on Tuesday.


Aaron Strauss (@aaron_strauss) is the executive director of the Analyst Institute, which serves as the clearinghouse for evidence-based learnings around voter contact for the progressive community.



