Artificial Intelligence For Fantasy Sports
I hired four people (on a contingency basis — they don’t get paid unless we win) to run my fantasy teams this year because I no longer want the season-long commitment to staying on top of all the moves. This prompted the question: would I rather just have an AI eventually run my teams instead?
I’m not against it in principle; in that case I wouldn’t have to share the winnings! But while there are some things AI might do better, there are others where I’d expect skilled humans to outperform it.
Let’s start with the strengths of AI: it could analyze player performance and (if it could take in visual input too) know more precisely which players are performing better than our limited stats otherwise show. We have metrics like average exit velocity, barrels and launch angle that give us an idea beyond the cosmetic HR, RBI, etc. that a player puts up, and the AI could aggregate that data better than we can. It could really home in on the kind of contact a hitter is making or a pitcher is giving up, and the kind of movement: beyond just a swinging strike, a swinging foul tip, for example. It could give us a much more complete picture of performance than a human can, since we can only home in on a few stats and extrapolate from a much more limited set of inputs. The AI is better at telling us what happened in the past.
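To make the aggregation idea concrete, here's a toy sketch that blends a few underlying batted-ball metrics into a single contact-quality score. Everything in it, the players, the numbers, the league ranges, and the weights, is invented for illustration; a real system would learn these from data rather than hard-coding them.

```python
def contact_quality(avg_exit_velo, barrel_rate, sweet_spot_rate):
    """Blend three underlying metrics into a single 0-100 score.

    Scales exit velocity against a rough league range (mph), then
    weights the three inputs. Both the ranges and the weights here
    are arbitrary assumptions, not real Statcast calibrations.
    """
    ev_score = (avg_exit_velo - 83.0) / (95.0 - 83.0)  # normalize to ~0-1
    ev_score = min(max(ev_score, 0.0), 1.0)            # clamp outliers
    blended = (0.4 * ev_score
               + 0.4 * barrel_rate / 0.20       # ~0.20 as an elite barrel rate
               + 0.2 * sweet_spot_rate / 0.45)  # ~0.45 as an elite sweet-spot rate
    return round(100 * min(blended, 1.0), 1)

# Two hypothetical hitters with similar surface stats but very
# different underlying contact quality.
hitters = {
    "Hitter A": (92.5, 0.14, 0.38),  # hits the ball hard, lots of barrels
    "Hitter B": (86.0, 0.05, 0.30),  # softer contact despite similar HR totals
}

for name, metrics in hitters.items():
    print(name, contact_quality(*metrics))
```

The point of the sketch is only that a machine can fold many such inputs into one view at once, where a human typically tracks two or three.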
What I suspect the AI will not be as good at is predicting which players will depart from past performance in the future. Because every year, and every month, there is someone who was playing really badly who suddenly starts playing really well. And it’s not just batted-ball luck, which implies cosmetic statistical improvement despite consistent play. I’m talking about bad play becoming good and vice-versa, sometimes injury related, sometimes skill-acquisition or adjustment related. Past performance does not guarantee future results, and this is so even accounting for injuries and luck. Things change, often seemingly inexplicably, but there is always a reason even if we don’t know it.
Now the human skill of sensing a change about to happen, or as it’s happening before there’s a sufficient sample to confirm it, might seem trivial — radical departure from past performance only happens for a small subset of players, and in any event, no human has a crystal ball, i.e., even those with the best intuition and instincts will often be wrong. Surely this small skill — if it even exists — is no match for the performance-measuring advantages of AI.
But consider that most of performance measurement is already priced in. While the AI would be more precise, it’s only the last couple percent more, and humans, even just looking at HR and batting average, let alone exit velo and launch angle, are 90-something percent of the way there — we largely know who’s playing well and who isn’t already.
Moreover, and perhaps more importantly, leagues are typically not won and lost because you correctly valued Player X as the 12th best player on the board rather than the consensus 17th. They are won and lost because you avoided taking at 14 a player who finished 155th. Or you took a chance on a former prospect who showed nothing the year prior and wound up in the Cy Young race. Profiting from (or getting rugged by) radical departure drives outcomes more than anything else.
The ability of certain humans to understand the less likely possibilities and profit from chaos won’t be easily matched by AI because it doesn’t have sense and intuition; it has only rules. And good rules of this sort tend to focus on the probable, the normal, the range of outcomes in the middle of the curve.
Of course, because the demonstrable advancements of the last 20 years are mostly in the measurement-of-past-performance area, people have trained themselves to focus on it, turning themselves into mini-AIs of sorts, and there’s an increasing cohort of stat projectors and analytics purveyors doing that work. In fact, an industry “expert” is far more likely to cite this work when informing the public than he would have 20 or even 10 years ago. It makes him sound smart, and in fact there is some intelligence and merit to this framework.
But it’s a mistake to train yourself to think like a machine — and this will be increasingly so once machine-thinking is cheap and ubiquitous. The open-minded human learns in the way the machine never can, and as such is capable of doing something the machine cannot: acting on vision instead of probability.