Rules of Engagement
Welcome, Chloë. The Council on Foreign Relationships has convened this ballot to settle a matter of considerable geopolitical importance: who is the superior Oscar forecaster?
For each of the seventeen categories below, assign a probability between 0 and 100 to every nominee. These represent your percent confidence that each nominee will take home the statuette. Your probabilities within each category must sum to exactly 100; after all, someone has to win.
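To make the rule concrete, here is a minimal sketch in Python of what one valid category entry might look like; the category and nominee names are invented for illustration, and the check simply enforces the sum-to-100 rule above.

```python
# Hypothetical entry for a single category; the film names are placeholders.
best_picture = {
    "Film A": 55,
    "Film B": 25,
    "Film C": 15,
    "Film D": 5,
}

# Probabilities within a category must sum to exactly 100.
assert sum(best_picture.values()) == 100, "Someone has to win."
```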
After the ceremony, we score your predictions using the Brier score, the gold standard for evaluating probabilistic forecasts. Developed by Glenn Brier in 1950 for weather prediction, it has since become the preferred metric everywhere that calibration matters, from intelligence analysis at the CIA to Philip Tetlock's famous Superforecasting tournaments. The reason is elegant: the Brier score doesn't merely reward picking the right winner. It rewards knowing how confident to be.
The outcome is 1 for the winner and 0 for every other nominee. Your forecast is on a 0-to-1 scale (so 70% becomes 0.70), and your score for a category is the sum, over all its nominees, of (forecast − outcome)². Consider: if you gave the actual winner 90% confidence, your penalty on that nominee is a mere (0.90 − 1)² = 0.01. But if you gave the winner only 10%, you eat (0.10 − 1)² = 0.81, and the penalties from the losers you over-weighted pile on too. We sum your Brier scores across all seventeen categories; lower is better.
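A short worked sketch of that arithmetic, again in Python with invented nominees, showing the per-category score and how well-placed confidence compares with a badly placed bet:

```python
def brier(forecast: dict[str, float], winner: str) -> float:
    """Per-category Brier score: sum of squared errors over all nominees.

    `forecast` maps nominee -> probability on a 0-to-1 scale; `winner` is
    the nominee who actually took the statuette (outcome 1; all others 0).
    """
    return sum((p - (1.0 if name == winner else 0.0)) ** 2
               for name, p in forecast.items())

# 90% on the eventual winner: (0.90 - 1)^2 = 0.01, plus tiny penalties
# from the 10% spread over the losers.
confident = {"Film A": 0.90, "Film B": 0.05, "Film C": 0.05}
print(round(brier(confident, "Film A"), 4))  # 0.015

# Only 10% on the winner: (0.10 - 1)^2 = 0.81, and the over-weighted
# losers pile on.
timid = {"Film A": 0.10, "Film B": 0.80, "Film C": 0.10}
print(round(brier(timid, "Film A"), 4))      # 1.46
```

The ballot total is simply these per-category scores added up across all seventeen categories.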
While I'm deeply confident in myself, and in us, you would be wise to hedge your forecasts. Good luck, Chloë.