I am not very interested in who Nate Silver will vote for, and I am not very enthusiastic about Newsweek's choice of title. I think that's probably by far the least worthwhile piece of information in the article.

But what I do find interesting is that he has posted an assessment of the presidential debate's impact:

He also discussed the candidates’ win probabilities following their debate on Tuesday: “Before the debate, it had been like Trump 54, Harris 46. These are not vote shares. These are win probabilities. And after, it’s 50-50,” Silver said.

“She, right now, is at 49 percent of the vote in polls,” Silver said on the podcast. “To win, she has to get to 51 percent—51 because she has a disadvantage in all likelihood in the Electoral College.”

Despite having previously shown Trump as surging in the polls, Silver’s model now has him neck and neck with Harris.

  • atzanteol@sh.itjust.works · 2 months ago

    And of course, probabilities are just that: probabilities.

    Like… Yeah. It’s crazy to me how many people think “75%” means “that person will win” and then blame the polls when they don’t. People do this all the f’ing time with the weather as well. “They said it would rain!” when, in fact, they said there was an 80% probability that it would rain.
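    A quick simulation makes this concrete (a minimal Python sketch; the 80% figure is just the rain example above):

        import random

        # Simulate 10,000 days on which the forecast said "80% chance of rain".
        # Even with a perfectly calibrated forecast, it should stay dry on
        # roughly 2,000 of those days - a dry day is not a wrong forecast.
        trials = 10_000
        rainy = sum(random.random() < 0.80 for _ in range(trials))

        print(f"rained on {rainy}/{trials} days ({rainy / trials:.1%}), "
              f"dry on {trials - rainy}")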

    • Carrolade@lemmy.world · 2 months ago

      Yeah, it’s true. Though with weather it’s more egregious: we have a huge dataset of predictions versus actual outcomes against which to test the accuracy of the models.

      Our ability to test election forecasting percentages is limited: the dataset is too small, and too different year to year, to call the models tested. So election forecasting really is much less grounded in empirical reality than weather forecasting. It’s mathematical voodoo; in my eyes, closer to numerology than science.

      • atzanteol@sh.itjust.works · 2 months ago

        Though with weather it’s more egregious: we have a huge dataset of predictions versus actual outcomes against which to test the accuracy of the models.

        You’re right - and the predictions are quite accurate for 24-48 hours out. The thing is - if you say “there is a 90% chance of rain” then you would expect it to rain 9 out of 10 times and to stay dry 1 out of 10. If it rains all 10 times, then your probability estimate was too low (the true chance of rain was closer to 100%) - at least over a large sample set.
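        To make “over a large sample set” concrete, here’s a rough sketch of a calibration check in Python (all forecasts and outcomes here are simulated purely for illustration): group forecasts by their stated probability and compare each group against how often the event actually happened.

            import random
            from collections import defaultdict

            random.seed(42)

            # Simulated forecast history: (stated probability, did it rain?).
            # A well-calibrated forecaster's 90% calls should verify ~90% of the time.
            history = [(p, random.random() < p)
                       for p in (0.1, 0.3, 0.5, 0.7, 0.9)
                       for _ in range(1000)]

            buckets = defaultdict(list)
            for stated, rained in history:
                buckets[stated].append(rained)

            for stated in sorted(buckets):
                outcomes = buckets[stated]
                observed = sum(outcomes) / len(outcomes)
                print(f"stated {stated:.0%} -> observed {observed:.1%} "
                      f"over {len(outcomes)} forecasts")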

        Nate actually goes into this quite a bit in his book The Signal and the Noise.