The forecast is now final and will no longer update.
According to an analysis comparing it with the forecasts listed in the Wikipedia article on the 2024 presidential election, 2024forecast.com was among the most accurate forecasts of this presidential election.
Out of the forecasts listed there, six made a prediction for a winner in every state. The number of correctly predicted states for each forecast is:
Decision Desk HQ: 49
2024forecast: 48
Sabato's Crystal Ball: 46
fivethirtyeight: 46
YouGov: 45
Cnalysis: 44
Out of the forecasts listed there that provided either a probability for each candidate to win the presidency or a prediction in every state, the predicted winner was:
Decision Desk HQ: Donald Trump
2024forecast: Donald Trump
Sabato's Crystal Ball: Kamala Harris
fivethirtyeight: Kamala Harris
YouGov: Kamala Harris
Cnalysis: Kamala Harris
The Economist: Kamala Harris
Out of the forecasts listed there that provided a prediction of the vote share in every state, the average error in predicting the final margin of the vote was:
2024forecast: 2.1 percent
fivethirtyeight: 2.4 percent
YouGov: 3.5 percent
This analysis was last updated based on the vote counts as of 11/15. For any forecast that provided a prediction in every state but no probability for the winner of the election, the predicted winner for the purpose of this analysis is the candidate favored in states whose electoral votes total at least 270.
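The tie-break rule above can be sketched in a few lines. This is a minimal illustration, not the analysis code itself; the state names and electoral-vote counts in the example are made up.

```python
# Sketch of the rule described above: a forecast's predicted winner is the
# candidate favored in states whose electoral votes sum to at least 270.

def predicted_winner(state_calls, electoral_votes, threshold=270):
    """state_calls maps state -> favored candidate;
    electoral_votes maps state -> electoral vote count."""
    totals = {}
    for state, candidate in state_calls.items():
        totals[candidate] = totals.get(candidate, 0) + electoral_votes[state]
    for candidate, ev in totals.items():
        if ev >= threshold:
            return candidate
    return None  # no candidate reaches the threshold

# Toy example with invented states and numbers:
calls = {"A-land": "Trump", "B-land": "Harris", "C-land": "Trump"}
evs = {"A-land": 200, "B-land": 180, "C-land": 158}
print(predicted_winner(calls, evs))  # Trump: 200 + 158 = 358 >= 270
```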
Overall, I believe my forecast provided a more accurate reflection of the state of the race than other forecasts. It was one of only two forecasts that predicted the correct winner of the election. Other prominent forecasts not listed in the Wikipedia article also predicted a Harris win, most prominently Nate Silver, Allan Lichtman and the JHK forecast. I also predicted every state correctly except Michigan and Wisconsin, Donald Trump's two closest wins, beating all but one of the other forecasts. The fact that I was able to make this forecast as a non-commercial project, with severely limited resources compared to forecasts like fivethirtyeight, makes me even prouder of this result.
Despite providing a better prediction than most other forecasts, there are still things I will improve and factors I will look into further before the next election. I believe it is important to be transparent about these aspects in order to provide a more in-depth understanding of the forecast, the predictions it made, and why they may have been wrong in some respects.
The forecast overestimated the differences between the swing states. The predicted margin difference between the most Democratic and the most Republican swing state was 7.4 percent, compared to only 4.9 percent in the election results. I will look further into why this mistake was made, but I believe the reason lies in how the forecast converted betting odds into percentage margins. The forecast calculates the probability of each candidate winning every state based on the vote margin prediction it makes and the historic errors of these predictions in past elections. Because the betting odds are an input to the vote margin prediction, the relationship the forecast assumes between a candidate's chance of winning a state and the predicted vote margin is not yet known at the time the odds are converted. The betting odds are therefore converted into vote margins using historic error data from a simplified version of the forecast. To account for the possibility that this simplified version has, on average, a higher or lower error than it should, the forecast adjusted this conversion with a modification based on a variable whose ideal value was determined by analyzing the forecast's prediction accuracy with the variable set to different values. This variable's value was influenced not only by the possible error of the simplified version of the forecast, but also by past errors of betting odds that underestimated the winning chances of candidates in states with clear results. This led the betting odds to be converted into larger margins than an adjustment based only on the simplified forecast's accuracy would have produced. Since the betting odds showed no such error in this election cycle, the forecast's estimate of their correlation with vote margins was inaccurate.
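The margin-to-probability step described above can be illustrated with a simple sketch. The normal error distribution and the error standard deviation used here are my assumptions for illustration only; the forecast's actual error model and parameters are not described in this post.

```python
# Hedged sketch: turning a predicted vote margin into a win probability using
# the spread of historic prediction errors, under the ASSUMPTION that errors
# are normally distributed around the prediction. The sigma value is made up.
from math import erf, sqrt

def win_probability(predicted_margin, historic_error_sd):
    """P(actual margin > 0) if error ~ Normal(0, historic_error_sd)."""
    return 0.5 * (1 + erf(predicted_margin / (historic_error_sd * sqrt(2))))

# A 2.1-point predicted margin with an assumed 3-point historic error spread:
print(round(win_probability(2.1, 3.0), 3))  # about 0.758
```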
A possible solution for the next forecast is to readjust the betting odds converter based on the historic prediction errors of the full forecast, then recalculate the forecast and repeat this process until it converges toward what will then be the final prediction.
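The iterative scheme proposed above is essentially a fixed-point iteration. The sketch below shows the loop structure only; the actual refit step would involve the forecast's internals, so the `refit_step` used in the example is a made-up contraction, not the real converter.

```python
# Sketch of the proposed fixed-point process: refit the converter, rerun the
# forecast, and repeat until the prediction stops changing. The refit step
# here is a stand-in; the real one is the forecast's own recalibration.

def iterate_to_convergence(initial_prediction, refit_step,
                           tolerance=1e-6, max_iters=100):
    prediction = initial_prediction
    for _ in range(max_iters):
        new_prediction = refit_step(prediction)
        if abs(new_prediction - prediction) < tolerance:
            return new_prediction
        prediction = new_prediction
    return prediction

# Toy refit step that contracts toward a fixed point at 4.9:
result = iterate_to_convergence(7.4, lambda p: 0.5 * p + 2.45)
print(round(result, 3))  # converges toward 4.9
```

Because each pass shrinks the remaining gap, the loop terminates once successive predictions agree to within the tolerance; a real refit step would need to be similarly contractive for this to converge.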
Another aspect in which I believe the forecast could be improved is how it predicts polling errors. This is not something I would call a mistake the forecast made, but rather something that is interesting to look into further in order to make a better prediction. After carefully analyzing the polling of elections going back to 2004, I am convinced there is a connection between polling errors in different election cycles. I am not sure whether this connection exists nationally, as there have only been five elections in that timespan, which is too small a number to establish a realistic connection. In addition to the raw poll errors, the forecast also considers a poll error relative to the national popular vote: for example, if the popular vote polling overestimates a party by 1 point and the polling in a state is accurate, the relative poll error in that state is counted as overestimating the other party by 1 point. Across the 77 combinations of RCP averages from back-to-back elections in the same state between 2004 and 2020, the correlation between the poll error in an election and the relative poll error in the previous election is about 0.32. This time, I modified the RCP average based on what the model anticipated the poll error to be; next time, I think it will be interesting to further explore how poll errors behave depending on how large they were in the previous election, and to then include them as a separate prediction category that may be given only a relative prediction weight rather than an absolute one.
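The relative-poll-error definition and the correlation figure above can be made concrete with a short sketch. The sign convention and the example numbers are illustrative assumptions; the actual RCP-average data behind the 0.32 figure is not reproduced here.

```python
# Sketch of the "relative poll error" described above: a state's poll error
# minus the national popular-vote poll error. Sign convention (positive =
# overestimating a given party) is my assumption for illustration.

def relative_poll_error(state_error, national_error):
    return state_error - national_error

# Example from the text: national polls overestimate a party by 1 point (+1.0)
# while the state poll is exact (0.0) -> relative error of -1.0, i.e.
# overestimating the other party by 1 point.
print(relative_poll_error(0.0, 1.0))  # -1.0

# Pearson correlation, as used for the reported 0.32 across 77 state pairs
# (the input lists would be current-cycle errors and prior-cycle relative errors):
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5
```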
Also, for the next election, the weights of certain variables will change. Any indicator that was accurate this time will likely be weighted more heavily next time, and vice versa. This is not an indication that something was done wrong this time around, but rather an improvement of the forecast based on the additional information gained.
I am not sure whether I will build a model for the next election again, but I think it is quite likely. If I do, I will try to make an even better model that takes into account more data points and explores the connections between them in more depth.
Thank you to everyone who has read this page or my posts on social media. I hope some of you found the forecast interesting and feel you have gained knowledge from it.