The financial services industry, and just about every industry on Earth, is set to undergo a series of major transformations brought about by the proliferation of generative artificial intelligence, and specifically large language models (LLMs). Education, research and practically every data-driven workflow are susceptible to rapid change as the full flexibility and nature of this technological breakthrough are realized. Made possible by ever-increasing semiconductor prowess and open-source efforts, LLMs have the potential to transform global markets with significantly enhanced efficiency and access.
Indeed, such breakthroughs will give all institutions the ability to obtain sophisticated real-time data analysis quickly and without prohibitive costs, thereby leveling the playing field between larger and smaller institutions globally. Imagine having access to automated transcript analysis that provides real-time sentiment and keyword reporting on every publicly reporting entity, or the ability to decipher predictive relationships in gross margin progression across every publicly traded asset globally, at an entry price that is no longer the exclusive domain of the mega institutions.
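To make the transcript-analysis idea concrete, here is a minimal sketch of keyword and sentiment reporting. It uses hypothetical, hand-picked word lists purely for illustration; a production system would instead rely on an LLM or a trained classifier, and the function and list names are assumptions, not an actual product's API.

```python
# Illustrative word lists (hypothetical; a real system would use an LLM
# or trained sentiment model rather than fixed vocabularies).
POSITIVE = {"growth", "strong", "record", "beat", "improved"}
NEGATIVE = {"decline", "weak", "miss", "headwind", "impairment"}

def analyze_transcript(text):
    """Return a naive sentiment score plus keyword hits for a transcript."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = [w for w in words if w in POSITIVE]
    neg = [w for w in words if w in NEGATIVE]
    # Net positive-minus-negative hits, normalized by transcript length.
    score = (len(pos) - len(neg)) / max(len(words), 1)
    return {"score": score, "positive": pos, "negative": neg}

sample = "Revenue growth was strong despite one headwind in Europe."
print(analyze_transcript(sample))
```

Run against thousands of earnings-call transcripts, even a toy pipeline like this hints at why automating the workflow changes the cost structure of research.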
So, what are the risks, and how should investors approach integration of LLMs into their investment processes?
Before continuing, it is important to explain what an LLM is and how recent advancements differ from previous AI frameworks. LLMs are computational models comprising millions to hundreds of billions of statistical connections, trained on large computing clusters. These connections, known as parameters, encode statistical weights and relationships for the language data fed into the model. While humans can influence the importance of certain inputs, most of the relationships among an LLM's parameters emerge from the training data itself rather than from explicit human design. Advancements in hardware have allowed these systems to grow exponentially in complexity, and while we still don't understand exactly why, the ability of these large models to acquire implicit knowledge of syntax, semantics and ontology within a language is what differentiates LLMs from the human-engineered systems that came before.
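The notion that parameters are simply learned numeric weights can be illustrated with a toy calculation. The sketch below (not an actual LLM, and the function name is an assumption for illustration) counts the parameters of a tiny fully connected network, showing how parameter counts arise from the model's layer sizes and why counts grow so quickly at scale.

```python
def count_parameters(layer_sizes):
    """Count the weights and biases of a fully connected network.

    layer_sizes: e.g. [vocab, hidden, vocab]; each consecutive pair of
    layers contributes an (n_in x n_out) weight matrix plus a bias vector.
    """
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out  # weight matrix between the two layers
        total += n_out         # bias vector for the output layer
    return total

# A miniature "model": 1,000-token vocabulary, 64-unit hidden layer.
# 1000*64 + 64 + 64*1000 + 1000 = 129,064 parameters.
print(count_parameters([1000, 64, 1000]))
```

Scaling the same arithmetic to the vocabulary sizes, layer widths and layer counts of modern architectures is what pushes parameter totals into the billions.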