MSilbThirtyEight: Defending Poll Data and FiveThirtyEight

A Polarized America | Restoring Faith in Data | More Information

A Polarized America

The 2016 election cycle was many things.  On November 9, 2016, my one word to describe our current state is polarized.  Many Americans, myself included, went to bed Tuesday night absolutely shocked, no matter which side they identified with.  Upon waking up you could have been excited or fearful, confident or confused, faithful or faithless.  The general sentiment from my #Millennial social media feeds and friends included themes of alienation and disappointment in the other side.  We’ve all gotten the common interview question: “Tell me about a time when you had to work with a difficult person, or with someone you didn’t get along with.  Walk me through how you came to a solution.”  After the sigh and the anguish, we answer the question and, ideally, give a happily-ever-after story.  As understandably hard as it is for many, four years from now I’d like our country to have an answer to this question.

Side note: Can we please phase out the negativity around the word millennial?  We’re growing up in the information age, and those screens we all stare at are our medium for connecting with people, learning from people, and expanding our world view so we can keep an open mind.

Restoring Faith In Data

Data and analytics were hot topics as election results started to come in Tuesday night.  Deterministic projections of battleground-state winners were largely incorrect.  Blue states flipped red, toss-ups flipped red, and we had an upset on our hands.  We went in knowing that polls typically carry about a 3-point error, which feeds into the probabilistic models.  So what went wrong?  Are the models the problem?  Is polling systematically incorrect?  Was there a quiet *Insert Candidate Here* vote?  Was there even anything wrong at all?
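To put that 3-point error in perspective, here’s a minimal sketch in Python, not FiveThirtyEight’s actual model, of how a polled margin plus an assumed normally distributed polling error turns into a win probability.  The function name and the error sizes are my own illustrative assumptions.

    from statistics import NormalDist

    def leader_win_probability(polled_margin_pts: float, error_sd_pts: float = 3.0) -> float:
        """Toy model: chance the polling leader actually wins, assuming the true
        margin is the polled margin plus a Normal(0, error_sd_pts) polling error."""
        # P(true margin > 0) under the assumed error distribution
        return 1.0 - NormalDist(mu=polled_margin_pts, sigma=error_sd_pts).cdf(0.0)

    # A 3-point lead with a 3-point error SD is likely, but far from certain...
    print(round(leader_win_probability(3.0, 3.0), 3))  # ~0.841
    # ...and a wider error (more volatility, more undecideds) pulls it toward a coin flip.
    print(round(leader_win_probability(3.0, 5.0), 3))  # ~0.726

The point isn’t the exact numbers; it’s that a routine polling error leaves a lot of probability on the table for the trailing candidate.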

Remember the craziness that was the 2000 election?  The 2016 election was poised to be even crazier, with 12.5% of voters undecided or backing third parties.  Increased variability in the electorate leads to increased uncertainty, which leads to wider error ranges.  In 2000, Gore won the popular vote even though Bush was favored by 3.2% (HUGE) in national polling.  The 9.6% undecided/other voters likely played a large role, and we saw that dynamic again on election night in 2016.

Probabilistic models (those showing the probability of certain events or results) like FiveThirtyEight’s showed that both Hillary Clinton and Donald Trump had potential paths to the White House, some more likely than others.  That nuance gets lost in election-model reporting and in public opinion and reactions, and it’s driving me absolutely insane.  Make all of the annoying “xyz blew a 3-1 lead” jokes you want, but in a year when we saw improbable comebacks in both the NBA Finals (Cavaliers) and the World Series (Cubs), we have to understand that these things happen.  Numbers time: probabilities greater than 50% DO NOT mean that something will definitely happen, and probabilities below 50% DO NOT mean that something will definitely not happen.

Heading into election night, Democratic nominee Hillary Clinton had a greater probability of winning than Republican nominee Donald Trump.  According to FiveThirtyEight (polls-only model), Clinton had a 71.4% chance of victory compared to Trump’s 28.6%.  Yeah, that’s roughly 2.5-to-1 odds, we get it.
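If you want a feel for what a 28.6% chance actually means, here’s a quick simulation sketch.  It assumes, purely for illustration, that 71.4% is the true win probability and re-runs the “election” many times.

    import random

    random.seed(538)  # arbitrary seed so the sketch is reproducible

    CLINTON_WIN_PROB = 0.714  # FiveThirtyEight polls-only forecast, Election Day morning
    TRIALS = 100_000

    # Count how often the underdog wins when each run is an independent draw
    upsets = sum(random.random() >= CLINTON_WIN_PROB for _ in range(TRIALS))

    print(f"Underdog wins {upsets / TRIALS:.1%} of simulated elections")

Roughly 2 out of every 7 simulated elections end in the upset.  That isn’t a broken model; that’s just what 28.6% looks like.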

Things that have a lower probability of occurrence than Trump’s 28.6% chance of winning (a quick arithmetic check follows the list):

  • Flipping two heads in a row (25%)
  • Rolling a specific number on a die (16.67%)
  • Picking a random 2016 registered voter who turns out to be undecided or third-party (12.5%)
  • Coming back from a 3-1 deficit to win a seven-game series (12.5%*) – Sounds Familiar
    *assuming independent 50/50 odds of winning each game
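For the skeptics, the arithmetic behind those comparisons fits in a few lines of Python (the 3-1 comeback uses the same independent 50/50 assumption as the footnote above):

    # Quick check of the comparison probabilities above
    two_heads_in_a_row = 0.5 ** 2   # 25.0% -- two specific coin-flip results in a row
    specific_die_face  = 1 / 6      # 16.7% -- rolling one particular face of a fair die
    undecided_voter    = 0.125      # 12.5% -- undecided/third-party share in 2016 polling
    series_comeback    = 0.5 ** 3   # 12.5% -- three straight wins at 50/50 odds each

    trump_win_prob = 0.286          # FiveThirtyEight polls-only forecast

    for name, p in [("two heads in a row", two_heads_in_a_row),
                    ("a specific die face", specific_die_face),
                    ("an undecided/other voter", undecided_voter),
                    ("a 3-1 series comeback", series_comeback)]:
        print(f"{name}: {p:.1%} -- less likely than Trump's {trump_win_prob:.1%} chance")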

The “quiet vote” is reflected in polling.  The effect of an unconventional and controversial set of candidates is reflected in polling.  The election itself is the last poll.  Donald Trump had a limited and fairly unlikely path to 270+ electoral votes, but he still had a significant chance, and that chance was realized.  The FiveThirtyEight model isn’t broken, polling isn’t broken (though there is a lot to learn from this election), and analytics isn’t broken.  Carry on making analytical decisions, believing in the data, and trusting the process.

More Information

Here’s some more information to mull over from FiveThirtyEight’s editor-in-chief:

(He likely means 2016, not 2014)

The national popular vote looks like it will land right on the projection after all.  So, food for thought: is there a better way than our current system to decide a national election?  Connect with me on Twitter at @MSilbAnalytics.

FiveThirtyEight 2016 Projection: http://projects.fivethirtyeight.com/2016-election-forecast/

FiveThirtyEight Model explanation: http://fivethirtyeight.com/features/a-users-guide-to-fivethirtyeights-2016-general-election-forecast/

FiveThirtyEight Elections Podcast (loyal subscriber here): http://fivethirtyeight.com/tag/elections-podcast/
