The way Democratic campaigns make decisions doesn’t work. It’s time for change.
With everything going on right now, it might seem an odd time to discuss the nuts and bolts of campaigning. But unless Trump finds a way to shut down voting this fall (and it’s still possible he might), Democrats need to be firing on all cylinders to have a successful 2026.
So, this week I’ll offer some thoughts on what’s wrong with the way we do polling.
These problems all stem from the same root: over the years, a creeping obsolescence has infected every element of the polling process, to the point where the research findings used to make nearly every decision in the Democratic Party are highly questionable, if not flat-out wrong. My concern here is primarily with using polling to test issue positions and messages.
Why doesn’t relying on polling to make decisions work anymore?
Tension between cost and design. Like everything else in America, from a carton of eggs to a Disneyland vacation, polling has become much more expensive over the years. Those rising prices have taken a real toll on survey quality: elements that were once considered foundational to understanding the electorate are now routinely cut to “meet budget.”
Limited idea input. Most polls use question wording created by political consultants like me, which works kind of like AI: we’re really good at knowing what “tested well” in the past or in another campaign, but we’re terrible at predicting where public opinion will be in the future, which is when the campaign will actually be using the results in advertising, door knocking, and everything else.
Stilted language. Many polling questions are written in language more appropriate to the faculty lounge than the sports bar. As a result, every election cycle we see polling presentations touting the latest and greatest finding built on words that no actual human would say out loud.
Modeling = Guessing. Polls once relied on “random sampling” to select voters to talk with, which was meant to guarantee an accurate representation of public opinion in the group you were studying (the national electorate, registered voters in Illinois, etc.). Over time, however, “response rates” (the percentage of people who answer their phone to take a poll) have plummeted, rendering random sampling obsolete. So pollsters now typically build a “model” of what they think the electorate in the next election will look like and then weight their results to fit that model. If you can perfectly predict the future, this is a great approach. If not, it’s a pretty sketchy basis for a campaign that may spend hundreds of thousands, millions, or even over a billion dollars.
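For the curious, here’s a toy sketch of what that model-fitting step looks like in practice. It’s a minimal version of the standard post-stratification weighting idea; every number below is invented for illustration, and real turnout models use many more variables than age.

```python
# A minimal sketch of post-stratification weighting (illustrative only).
# Respondent counts and turnout shares below are made up.

# Raw poll respondents by age group: older voters still answer the phone.
sample = {"18-34": 90, "35-64": 260, "65+": 650}

# The pollster's turnout model: the share of next November's electorate
# each group is *predicted* to make up. This is an assumption, not a fact.
turnout_model = {"18-34": 0.25, "35-64": 0.45, "65+": 0.30}

total = sum(sample.values())

# Each respondent's weight = (modeled share) / (share in the raw sample).
weights = {
    group: turnout_model[group] / (count / total)
    for group, count in sample.items()
}

for group, w in weights.items():
    print(f"{group}: each respondent counted {w:.2f} times")
```

Run it and each 18-to-34-year-old respondent gets counted about 2.8 times while each senior counts for less than half a person. Notice where the guesswork enters: the turnout shares are a prediction about next November, and every weighted result inherits whatever error that prediction contains.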
Statistical insignificance. Using a poll to predict attitudes is basically a math problem. You need a certain number of people of different types (younger women, older Hispanics, college-educated Independents, whatever) to make a legitimate argument about differences between groups. That’s called “statistical significance.” Back in the day when I got into politics, every poll report carried notations indicating which findings were really significant, sort of significant, or just statistical noise. Over the years those notations largely disappeared from polling reports, so polling consumers today have no idea whether a given finding is “real” (i.e., statistically significant) or just noise.
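The math behind those vanished notations isn’t exotic. Here’s the standard 95% margin-of-error calculation for a proportion; the sample sizes below are hypothetical, but they’re typical of what campaigns actually buy:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion; p=0.5 is the worst case."""
    return z * math.sqrt(p * (1 - p) / n)

# A respectable statewide poll...
print(f"Full sample (n=800): +/- {margin_of_error(800):.1%}")  # ~3.5%
# ...versus one subgroup inside it, say 70 younger women:
print(f"Subgroup (n=70):     +/- {margin_of_error(70):.1%}")   # ~11.7%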
“Yesterday” is a great song but a terrible plan. Polling usually asks people about things as they are today or were yesterday. Even when we ask voters about proposed future policies, we mostly frame them in the language and structure of the present. But people can’t tell you how they’ll think or feel about something they’ve never encountered. This approach advantages ideas already circulating in the public discourse and punishes innovative proposals: if voters don’t support a proposal because they’ve never really thought about it before, then by definition it’s a bad idea, so we don’t talk about it. Got it?
Question selection bias. By definition, polling only gives you a snapshot of how voters think about the questions you ask at a certain moment in time. You learn nothing about the questions you don’t ask. Take proposals to make life more affordable: budget and time constraints mean most polls can ask only a handful of questions on the topic, so you have to winnow your list of possible solutions down to a tiny set of options. And the ideas that make it into the poll tend to be the ones that tested well elsewhere, so, once again, poll results reinforce each other rather than measure voter opinion on the broadest range of options.
Cutting and Pasting. Once we have poll findings that seem persuasive, the next step is usually to insert them into the campaign’s communications. But cutting and pasting polling language into a TV ad or a piece of direct mail isn’t the same as using a convincing narrative to persuade voters. The value of stories and narrative when trying to get a point across (to voters or to your clueless uncle) is widely accepted today. Still, Democratic campaigns too often cram language that “tested well” into their communications without framing it with any humanity or emotional appeal. As a result, our communications with voters often read like a Social Studies textbook.
Strategic impotence. The final, and most damaging, problem with how polling is currently practiced in Democratic politics is that nobody will do anything without first checking whether it “tests well.” That’s a terrible approach for all the reasons above, and it has a more insidious effect on Democratic officeholders: it renders them powerless and inert in the face of fast-moving, unpredictable events. Kind of like the world we’re currently living in.
In a future newsletter, I’ll offer some proposals for changing the way we study public opinion.
Until then, ask me no questions and I’ll tell you no lies.

