Proposing The Next Frontier
Two persistent ideas in AI discourse are that AI is existential and that it will widen the economic moat between adopters and holdouts. The existential claim is often misconstrued as the idea that AI will replace people and make certain jobs extinct. The more accurate framing is not that people’s work will disappear, but that the rules of the game change and the basis of advantage shifts. AI is existential because it changes how advantage compounds over time, rewarding organizations that adapt faster and learn more systematically. Baseball mirrors business: it consists of competing firms with differing payrolls, operating on different information, and subject to strong feedback loops. It therefore stands to reason that in baseball, organizations that invest in AI systems will separate in decision-making quality from those that do not.
Putting this plan into action will take discipline. Most teams likely perform decision support on their own transactions, recapping what a model said, how much was paid, and how much surplus value was believed to be gained. Fewer teams likely perform decision support on every transaction across the league. As in anything else, when you do not know exactly where you are going, it helps to understand where you are. The same holds for deciding what unstructured data is worth creating. MLB Trade Rumors and baseball news outlets will cover the basics of every transaction, but what specific knowledge does an organization have that can shed light on why a transaction took place? Was there agent familiarity that influenced the deal? Did the player train at a facility from which a certain organization regularly acquires talent? Did a high-ranking executive trust a recommendation from a former colleague with prior exposure to the player? These factors undoubtedly play a role in decision-making, yet they are difficult to quantify retrospectively given current modeling limitations.
The case for AI is not that it will lead to perfect decision-making. Its value lies in acknowledging that current systems do not capture everything, and that the more uncertainty can be reduced, the better decision-making will be in the long run. The purpose of decision support is not merely to justify decisions, but to preserve reasoning. Decision support becomes more useful when there is a systematic way to learn from past biases, especially when that process also strips out human bias. AI, and large language models in particular, represents a meaningful next step because, given the right inputs, these models can help organizations retain context, identify which situational factors repeatedly mattered, and recalibrate beliefs as evidence accumulates.
A practical way to begin would be to address one problem, such as trade deadline transactions. After each deal made at the deadline, an organization could log considerations not currently captured by models or public reporting. An LLM could already be given access to internal valuations, player archetypes, and external articles, but without deliberate capture it would lack insight into the contextual factors discussed internally. Recording perceived risks, non-model inputs, and points of internal debate does not require new workflows, only a deliberate effort to preserve information already exchanged in meetings and calls. Over time, this discipline creates an institutional record of judgment that reflects both how other organizations tend to operate and how internal decision-making evolves.
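To make the logging idea concrete, here is a minimal sketch of what a structured deadline-deal note could look like. Every field name, identifier, and value below is invented for illustration; a real front office would adapt the schema to its own systems:

```python
from dataclasses import dataclass, field, asdict
import json

# Hypothetical schema for one deadline transaction's non-model context.
@dataclass
class TransactionNote:
    transaction_id: str                  # invented internal deal identifier
    acquiring_team: str
    players_involved: list[str]
    model_surplus_value: float           # surplus value the internal model assigned ($MM)
    perceived_risks: list[str] = field(default_factory=list)
    non_model_inputs: list[str] = field(default_factory=list)  # agent ties, facility history, etc.
    internal_debate: list[str] = field(default_factory=list)   # points of disagreement in meetings

def to_record(note: TransactionNote) -> str:
    """Serialize a note to JSON so it can be stored and later retrieved for LLM context."""
    return json.dumps(asdict(note), indent=2)

note = TransactionNote(
    transaction_id="2024-DL-017",
    acquiring_team="Team A",
    players_involved=["Player X"],
    model_surplus_value=4.5,
    non_model_inputs=["Executive trusted a former colleague's read on the player"],
)
print(to_record(note))
```

The point of the sketch is not the schema itself but the discipline: a few structured fields per deal, written down while the reasoning is fresh, are enough to give a retrieval pipeline something to recalibrate against later.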