Despite its considerable hype, I have yet to see any tangible benefit from AI. I know others claim to, but I haven't found a use case. I don’t use it to write code because I’m not a coder, and I certainly don’t want it writing prose in its trademark flowery bullshit style.
That said, I had an idea that might work for fantasy sports:
I’m an old school player. My skill is in understanding what questions to ask in service of sorting out the signal (actionable information) from the noise (trendy jargon, oversold incremental edges, redundant or backward-looking indicators inaccurately marketed as predictive tools). It is definitely not in crunching data, generating projections, or creating good UI for people.
To that end, AI might be useful to me in that it can replace a lot of the crunching/stat-nerd work in which I have no interest, but in the end it is only as good as the questions it is asked. The more AI takes over the mental-menial labor, so to speak, the more ideas people like me have an edge. The more people like me require mental-menial labor for execution, the more the number-crunching grinders have an edge. I look at AI a bit like I would a chainsaw. It was probably a lazy person who invented it to avoid having to sweat it out swinging an axe.
A few examples of questions off the top of my head: at what number of career at-bats do hitters typically break out (defining “break out” within certain supplied statistical parameters)? At what number of innings pitched do pitchers typically need Tommy John surgery for the first time? Including and excluding minor league innings?
Who are the exceptions? What hitter broke out, but only after 3,000 major league at-bats? Who was good right away? What pitchers never needed surgery? What pitchers needed it right away?
You’re never going to get a list that works for all cases. What you want is to understand the rule and the exceptions. But mostly the exceptions. The outliers are the real signal even though most statisticians discount them as noise. They show you what’s possible, where the limits lie. And the limits often have more explanatory power than observing the pattern itself.
And you can iterate on this, figure out who is an outlier, who is more of a center-of-the-bell-curve type. You can identify who to put in the generic projections bucket (center of the bell curve player) and who to put in the hunch one (outlier). And you can develop better heuristics for distinguishing between the two.
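If I ever did hand that sorting job over to a machine, the bucketing step could be sketched roughly like this. Everything here is hypothetical: the players, the at-bat counts, the definition of a "breakout," and the z-score cutoff are all invented for illustration, not drawn from real data.

```python
# Hypothetical sketch: sort players into a "generic projections" bucket
# (center of the bell curve) or a "hunch" bucket (outlier), based on how
# far their breakout timing sits from the typical range.
# All numbers below are invented for illustration.

from statistics import mean, stdev

# (player, career at-bats before breakout) -- invented data
breakouts = [
    ("A", 950), ("B", 1100), ("C", 1200), ("D", 1050),
    ("E", 980), ("F", 3050),  # F broke out unusually late
]

abs_before = [ab for _, ab in breakouts]
mu, sigma = mean(abs_before), stdev(abs_before)

def bucket(at_bats, z_cutoff=1.5):
    """Center-of-the-bell players go to the generic projections bucket;
    anything beyond the z-score cutoff gets flagged as an outlier."""
    z = (at_bats - mu) / sigma
    return "outlier" if abs(z) > z_cutoff else "generic"

for name, ab in breakouts:
    print(name, ab, bucket(ab))
```

The cutoff itself is exactly the kind of heuristic you would iterate on: widen it and everyone looks generic, tighten it and everyone looks like a hunch.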
The entire game is figuring out the exceptions, where risk and reward are incommensurate, but the market doesn’t yet know. An AI, when prompted with the right questions, won’t hand you the answers, but maybe it can reduce the search space in which you should look.
I'm dubious of what AI spits out. This year, I was pretty amazed by how all of the 1 seeds and all of the 2 seeds advanced to the Sweet Sixteen. Off the top of my head, I couldn't recall that ever happening, and I'm a mild CBB junkie. I figured ChatGPT would be the quickest way to find out, so I decided to ask whether it had ever happened before. This is what it spit out:
"As of my last update in January 2022, the most 1 seeds and 2 seeds to advance to the Sweet Sixteen in a single NCAA Men's Basketball Tournament was in 2009. In that year's tournament, all four 1 seeds and three out of four 2 seeds advanced to the Sweet Sixteen. This means a total of seven out of the eight top-seeded teams made it to the third round. It's a notable achievement in the tournament's history, showcasing the dominance of the highest-seeded teams in that particular year."
I figured it was correct, except it wasn't. It happened as recently as 2019, and a quick query on any search engine pulls it up.
I'm starting to wonder if things such as ChatGPT are merely additional forms of data collection. To that point, I copied and pasted my old question with ease because it is saved on my profile through my Gmail account for what I can only assume is permanently.
The one solid use case for ChatGPT that I have found, as embarrassing as this is to share, is helping with your child's school project. Yeah, I know, I am not proud of it. I'm not proud of helping the child in the first place, nor of the even more egregious act of cheating while helping. But most people have been there: the project left to the last night. The time for lecturing your child about time management was before or after said night, but not during.
A question posed to ChatGPT like "Please write, in the style of a sixth grader, an essay which comprises A, B, and C, in x amount of paragraphs/words" can get the ball rolling. Except it's not even really that good at that, because it just writes how it assumes sixth graders talk, for example: "Neptune is a super cool, big blue planet, made of like gasses and stuff."
So you usually need to pose additional questions to clean it up, and you always need to bump it up a grade or three (my kid is in fourth grade) because apparently it's been coded to think that we're all pretty dumb. Then a nice copy-and-paste job and a sit-down with the kid about how you want to word this or that (so at least the kid learns some shit), and the written portion of the project is done, and it's on to the "papier-mâché," or whatever.