
Sunlei

(22,651 posts)
Mon Jan 2, 2017, 01:05 PM Jan 2017

How Analytical Models Failed Clinton

National Politics |By Charlie Cook, December 30, 2016

This story was originally published on nationaljournal.com on December 27, 2016

"The Novem­ber elec­tions pit­ted Demo­crats against Re­pub­lic­ans, con­ser­vat­ives against lib­er­als, Trump-style pop­u­lists and tea parti­ers against the es­tab­lish­ment and con­ven­tion­al politi­cians. An­oth­er con­test, fol­lowed mainly by polit­ic­al afi­cion­ados, matched tra­di­tion­al poll­sters against newly fash­ion­able ana­lyt­ics wiz­ards, some of whom—pre­ten­tiously in my opin­ion—called them­selves “data sci­ent­ists.”

It was well known that tra­di­tion­al polling was hav­ing prob­lems. The numb­ing ef­fect of bil­lions of tele­market­ing calls and the ad­vent of caller ID and voice mail had re­duced re­sponse rates (the per­cent­age of com­pleted in­ter­views for every hun­dred at­tempts) from the 40s a couple of dec­ades ago to the high single di­gits. As they struggled to get truly rep­res­ent­at­ive samples, poll­sters “weighted” their data more than ever be­fore, mak­ing as­sump­tions of what the elect­or­ate would look like on elec­tion days that were weeks, months, or even a year or more away. ........

...........Ex­per­i­enced journ­al­ists might ar­gue that the over­re­li­ance by re­port­ers on both polls and ana­lyt­ics has led to a de­crease in shoe-leath­er, on-the-ground re­port­ing that might have picked up move­ments in the elect­or­ate that the polls missed. As the Michigan res­ults came in on elec­tion night, I vividly re­called that two con­gress­men from Michigan—one a Demo­crat, the oth­er a Re­pub­lic­an—had been warn­ing me for months that Michigan was more com­pet­it­ive than pub­licly thought. I wished I had listened.

The ana­lyt­ic­al mod­els for both sides poin­ted to a Clin­ton vic­tory, al­beit not a run­away. The Clin­ton cam­paign and su­per PACs had sev­er­al of the most highly re­garded polling firms in the Demo­crat­ic Party, yet in the places that ended up mat­ter­ing, very little if any polling was done. So while 2016 wasn’t a vic­tory for tra­di­tion­al polling, it cer­tainly took a lot of the luster from ana­lyt­ics. In the end, big data mattered very little. ......"

http://cookpolitical.com/story/10205
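
To illustrate the "weighting" Cook describes: the sketch below is a rough, hypothetical example of post-stratification in Python, not the procedure any particular polling firm used. The demographic cells, sample counts, and assumed turnout shares are all invented.

```python
# Minimal post-stratification sketch: re-weight a raw sample so its
# demographic mix matches an assumed electorate. All numbers are invented.

# The pollster's assumption about what the electorate will look like.
assumed_share = {"college": 0.40, "non_college": 0.60}

# Raw sample: cell -> (respondents, respondents backing Candidate A).
sample = {"college": (300, 180), "non_college": (200, 90)}

total_n = sum(n for n, _ in sample.values())

# Unweighted estimate reflects whoever happened to answer the phone.
unweighted = sum(a for _, a in sample.values()) / total_n

# Weighted estimate: each cell's support rate, weighted by its ASSUMED
# share of the electorate rather than its share of the sample.
weighted = sum(assumed_share[cell] * (a / n) for cell, (n, a) in sample.items())

print(f"unweighted support: {unweighted:.1%}")  # 54.0% -- sample is college-heavy
print(f"weighted support:   {weighted:.1%}")    # 51.0% -- matched to assumed turnout
```

The gap between the two numbers is the point: the more the sample has to be bent toward an assumed electorate, the more the poll depends on that assumption being right.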

8 replies

liberal N proud

(60,950 posts)
1. Analytical models can't account for election fraud
Mon Jan 2, 2017, 01:25 PM
Jan 2017

They can't account for hacking by a foreign government.
They can't account for FBI meddling in an election.
They can't account for a silent media on all the above.

Keeping voters misinformed destroyed the analysts' ability to read the voters.

The election was stolen, PERIOD!

Sunlei

(22,651 posts)
6. didn't take much to steal either. 90k detroit blank ballots. couple thousand rustbelt 'trump' protes
Mon Jan 2, 2017, 02:56 PM
Jan 2017

protest votes, a couple thousand Ds staying home, and several thousand D voters with the wrong address forced to use provisional ballots that got thrown away.

Rstrstx

(1,568 posts)
4. Why don't I hear Cambridge Analytica mentioned more??
Mon Jan 2, 2017, 02:38 PM
Jan 2017

Though not a traditional analytical company, there's no question they had a pronounced influence on the election.

karynnj

(59,942 posts)
8. What is strange is that the extremely low rate of response for polls - high single digits (!) means
Mon Jan 2, 2017, 03:17 PM
Jan 2017

they cannot be trusted at all. It is surprising they came in as close as they did. It is not JUST a problem that "weighting" matters more than ever; it is that polling ALWAYS had to make an implicit assumption that people in a given demographic cell who answer the phone are similar to their counterparts in the same cell who do not answer. Even when the response rate was in the 40s, that bothered me.
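
To make that assumption concrete, here is a small toy simulation in Python with invented numbers, showing what happens when the people in a cell who answer the phone are not quite like the ones who do not:

```python
# Toy simulation of the implicit assumption described above: if respondents
# within a demographic cell differ from non-respondents in the SAME cell,
# re-weighting across cells cannot remove the error. All numbers invented.
import random

random.seed(0)

N = 100_000            # voters in one demographic cell
true_support = 0.48    # actual support for Candidate A in that cell
response_rate = 0.08   # high single digits, as in the article

respondents_total = 0
respondents_for_a = 0
for _ in range(N):
    supports_a = random.random() < true_support
    # Hypothetical assumption: A supporters are a bit more likely to pick up.
    p_respond = response_rate * (1.10 if supports_a else 0.95)
    if random.random() < p_respond:
        respondents_total += 1
        respondents_for_a += supports_a

poll_estimate = respondents_for_a / respondents_total
print(f"true support in cell:  {true_support:.1%}")
print(f"poll estimate in cell: {poll_estimate:.1%}")  # overstates A by a few points
# Weighting this cell up or down against other cells cannot fix the gap,
# because the bias lives inside the cell itself.
```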

Now the wizards used the polls, aggregated and weighted as their models suggested, and pulled in other variables that they had reason to believe were relevant, in order to improve an "accuracy" that could not actually be measured.

The funny thing is that the one thing that might have given more insight was something whose value I always questioned even while I did it -- door-to-door canvassing. Having done it, I know that even if canvassers return to a neighborhood a few times, there are people who fail to come to the door or are not there, and some who would not say. (Here, it would be interesting to know whether they were willing to say in the prior year.) A few analyses I have read noted that less of this was done this year, including in the critical Rust Belt states. It should also be noted that this is something better done by people "from around here".

I hope that whoever gets the DNC job assigns a good team to look at how this is done in various places and how the data is compiled, both for use in that campaign and to help build a database so we know who our people are. Combining the voter lists, the voter record of who voted in which elections, and any response to a canvasser or phone banker in a prior election would give any future campaign - for any office - a wealth of information. (If you want to go there, they could also get various demographic information if they were willing to pay for it.)
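
As an illustration of the kind of combined database being suggested, here is a minimal Python sketch; every field name, id, and record is made up, and a real voter file would of course be far larger and messier:

```python
# Hedged sketch of the merge described above: combine a voter file,
# vote history, and canvass/phone-bank responses keyed on a voter id.
# Field names, ids, and records are all hypothetical.
voter_file = {
    101: {"name": "A. Jones", "address": "12 Elm St", "party": "D"},
    102: {"name": "B. Smith", "address": "9 Oak Ave", "party": "U"},
}

vote_history = {   # voter id -> elections in which a ballot was cast
    101: ["2012G", "2014G", "2016G"],
    102: ["2016G"],
}

contact_log = {    # voter id -> (date, method, response) from volunteers
    101: [("2016-10-12", "canvass", "definite yes")],
    102: [("2016-10-20", "phone", "undecided")],
}

def build_profile(voter_id):
    """Merge the three sources into one record for a single voter."""
    profile = dict(voter_file.get(voter_id, {}))
    profile["vote_history"] = vote_history.get(voter_id, [])
    profile["contacts"] = contact_log.get(voter_id, [])
    return profile

database = {vid: build_profile(vid) for vid in voter_file}
print(database[101])
```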

Consider how that database could give you a red flag long before an election. Say you observe that you are NOT getting the definite yeses you got in a prior election from a set of people; that would be a red flag that something may be wrong. Having someone ask them about likely issues might determine whether there is a systemic problem. Then, if the issue is something the campaign thinks is a misunderstanding, it could be addressed to try to secure those votes.
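
A tiny sketch of that red-flag check, with invented responses and an arbitrary threshold:

```python
# Sketch of the early-warning idea above: compare the "definite yes" rate
# among contacted voters against the prior cycle and flag a large drop.
# The threshold and the response lists are invented for illustration.
def definite_yes_rate(responses):
    return sum(1 for r in responses if r == "definite yes") / len(responses)

prior_cycle = ["definite yes"] * 70 + ["lean yes"] * 20 + ["undecided"] * 10
this_cycle  = ["definite yes"] * 52 + ["lean yes"] * 28 + ["undecided"] * 20

drop = definite_yes_rate(prior_cycle) - definite_yes_rate(this_cycle)
if drop > 0.10:  # arbitrary threshold for raising a red flag
    print(f"RED FLAG: definite-yes rate is down {drop:.0%} from the prior cycle")
```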

I am not naive enough to think that data collected this way is necessarily good, but it might be very good as a sanity check on a telephone poll.
