A debrief on the French Open and the conclusion to this series.
Rafael Nadal won his 15th major and 10th French Open title on Sunday, as he cruised past Stan Wawrinka in straight sets. This victory felt inevitable, and yet it’s unclear how many formal modeling systems would have called Nadal’s dominant run before play began at Roland Garros. The previous article in this series adjusted one of the most popular competitive ranking systems, Elo, in an attempt to predict the outcome of this tournament. How did its predictions fare against other ranking systems and against other forms of modeling (like those discussed in part II)?
| | ATP Ranking | Elo | Adjusted Elo |
|---|---|---|---|
| % of Matches Correctly Predicted | 79.45% | 80.82% | 80.82% |
Obviously, the differences between these systems are minuscule. In fact, there was only one match predicted differently (which accounts for the 1.37% margin): the final, Stan vs. Rafa. The ATP seeding puts Wawrinka third and Nadal fourth, whereas both forms of Elo have Nadal well above Wawrinka. Though the championship match is certainly a nice one to get right, the bigger takeaway here is that, for the top players, the French was mostly unsurprising.
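For readers who haven’t followed the earlier articles, the core of how both Elo variants pick a winner is a single formula: the rating gap between two players maps to a win probability, and the higher-probability player is the prediction. Here is a minimal sketch; the ratings used below are hypothetical illustrations, not the actual 2017 values from my adjusted system.

```python
def elo_win_prob(rating_a: float, rating_b: float) -> float:
    """Standard Elo expected score: probability that player A beats player B.

    The 400-point scale means a 400-point rating gap corresponds to
    roughly 10:1 odds in favor of the higher-rated player.
    """
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

# Hypothetical ratings for illustration only: if Nadal were rated 2300
# and Wawrinka 2150, both Elo systems would predict Nadal, while the
# ATP seeding (Wawrinka 3rd, Nadal 4th) points the other way.
print(round(elo_win_prob(2300, 2150), 3))  # Nadal favored, ~0.703
```

The prediction itself is just whichever player’s probability exceeds 50%, which is why two systems with very different ratings can still agree on nearly every match.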
Of course, the tournament was not without upsets. Alexander Zverev, who has played very well in 2017 and is fifth in the unadjusted Elo rankings, got bounced in the first round by Fernando Verdasco. Nick Kyrgios (18) lost to Kevin Anderson in the second round. Jo-Wilfried Tsonga, ninth in the adjusted Elo rankings, lost in the first round to Renzo Olivo. These upsets, however, are nearly impossible to catch in any model or ranking system without qualitative interpretation by a human. No model would predict that Renzo Olivo, 91st on tour, beats Tsonga, a top-15 player. The utility of statistical models in situations like these will be discussed later on.
To wrap up the comparison of the ATP seeding, Elo, and adjusted Elo, I note that only adjusted Elo accurately predicted the winner of the tournament outright (Nadal). It also highlighted Dominic Thiem as a player who could play abnormally well given the clay surface and recent momentum. Neither of these insights is novel, as many qualitative tennis analysts said the same things. Nonetheless, building quantitative tools that confirm or deny qualitative insight is a crucial part of the forecasting process. In this case, adjusted Elo would qualify as such a tool, but standard Elo, which missed on both of these points, would not.
What about the head-to-head predictive models from earlier articles in this series? Do they offer any additional predictive capacity over adjusted Elo? In short, no. Running the Klaassen and Magnus model over these matches yields the same percentage correct as adjusted Elo. How could this be, and what does it mean?
Lies, Damned Lies, And Statistics?
If the different predictive tools I’ve built out over this series of articles all yield roughly the same results for this tournament, are they of any use or should we write them off per the old adage above? First, allow me to point out that one tournament does not constitute a large enough sample size to make any definitive claims. Second and more importantly, I emphasize that no valid predictive model is going to consistently flip common sense on its head. These tools aren’t built to predict that an unranked player will beat a ranked one or that Jelena Ostapenko, whose highest winnings were previously just a few thousand euros, would win the French Open. A good predictive model will tell you that these things are unlikely, which they are. If you make predictions strictly off the quantitative results of the model (as I have been), you will almost always miss these unlikely events.
Admittedly, the models I’ve looked at in this series are about as simple as they come, but the underlying truth still stands. Rarely will a predictive model in sports tell an analyst that something assessed qualitatively to be unlikely is probable. There are models light years ahead of what I’ve discussed that do a better job, but even those cannot reliably predict the unpredictable.
Ultimately, the utility of these statistical tools lies not solely in their ability to predict, but in their ability to predict better.
Many of the stakeholders in the business of predicting sporting events are gamblers. Gamblers make bets that imply a probability of the selected outcome. The betting lines that determine these probabilities are the results of statistical models owned by the casino/sportsbook and of the betting behavior of other gamblers. If a gambler possesses a model that can predict the outcome of an event better than the one used by the house, he has an “edge.” Edge is the sine qua non of profitable sports gambling.
One can achieve an edge with the sorts of statistical models discussed in these articles. As a matter of fact, there are sports betting hedge funds attempting to do just that. When the probability implied by the betting line differs from the probability yielded by the model, you take the bet.
The following example should crystallize the preceding discussion nicely. Suppose that the day before the French Open final, a casino or sportsbook offers the following money line:
Jelena Ostapenko (+400) // Simona Halep (-500)
This line implies a probability of victory of 83.33% for Halep (500/600, since a bettor must risk $500 to win $100) and 20% for Ostapenko (100/500, since a $100 bet wins $400). Notice that the implied probabilities sum to over 100%; this 3.33% margin is called the “vig” and is the guaranteed profit for the house.
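The conversion from an American moneyline to an implied probability is mechanical, and coding it up makes the vig visible. A minimal sketch:

```python
def implied_prob(moneyline: int) -> float:
    """Convert an American moneyline to its implied win probability.

    Positive lines (underdogs) pay $moneyline profit on a $100 stake;
    negative lines (favorites) require a $|moneyline| stake to win $100.
    """
    if moneyline > 0:
        return 100 / (moneyline + 100)
    return -moneyline / (-moneyline + 100)

ostapenko = implied_prob(+400)   # 100/500 = 0.20
halep = implied_prob(-500)       # 500/600 ≈ 0.8333
vig = ostapenko + halep - 1      # ≈ 0.0333, the house's built-in margin
```

A fair (vig-free) line would have the two probabilities sum to exactly 1; the excess is the price every bettor pays the house regardless of outcome.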
Imagine that you have a predictive model that puts Ostapenko’s chances of winning at 30%. Is it probable that she will win? No. Is there any way to imagine that this unranked Latvian will beat the world No. 1? Not really. Your model confirms common sense with its 30% win probability. Ostapenko probably will lose! However, your model also tells you that the betting line has underestimated her chances. This is the source of an edge. If your model is correct, you achieve positive expected value by betting on Ostapenko. To simplify, you have a greater chance than normal to make money because your model predicted better, not because it predicted the unpredictable.
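The arithmetic behind that edge is worth spelling out. Assuming the hypothetical model’s 30% figure is correct, the expected value of a $1 bet on Ostapenko at +400 looks like this:

```python
# Expected value per $1 staked on Ostapenko at +400, assuming the
# model's 30% win probability is correct (a hypothetical figure).
model_prob = 0.30
payout_ratio = 400 / 100  # a +400 line pays 4:1: $4 profit per $1 staked

# Win 4x the stake with probability 0.30; lose the stake with probability 0.70.
ev = model_prob * payout_ratio - (1 - model_prob) * 1
# ev is about +0.50: roughly 50 cents of expected profit per dollar bet,
# even though the bet loses 70% of the time.
```

The line’s own implied probability (20%) would give an expected value of exactly zero before the vig; the model’s extra 10 percentage points of win probability is what turns the bet positive.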
That is the essential utility of these predictive models. They compete with one another in a statistical arms race.
This is the last article of this series on statistical modeling of ATP singles matches. Hopefully, you now have a basic understanding of how these models work, why they are not designed to predict the future, and how people successfully use them anyway. Whether you’re looking for an edge over the house or just want to predict matches for your own entertainment, these rudimentary models and their more evolved descendants are a valuable tool on their own, and something akin to a crystal ball when combined with a sports fan’s intuition.