The use of natural language processing (NLP) in financial technology is broad and complex, with applications ranging from sentiment analysis and named entity recognition to question answering. Large language models (LLMs) have been shown to be effective on a variety of tasks; however, no LLM specialized for the financial domain had been reported in the literature.
In this work, researchers from Bloomberg present BloombergGPT, a 50-billion-parameter language model trained on a wide range of financial data. They construct a 363-billion-token dataset from Bloomberg's extensive data sources, perhaps the largest domain-specific dataset yet, augmented with 345 billion tokens from general-purpose datasets.
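The resulting corpus composition can be checked with a little arithmetic. The sketch below uses only the token counts quoted above; the rounding and presentation are illustrative, not from the paper.

```python
# Approximate token-mix proportions for BloombergGPT's training corpus,
# using the figures quoted in this summary (363B financial, 345B general).
financial_tokens = 363e9   # domain-specific tokens
general_tokens = 345e9     # general-purpose tokens

total = financial_tokens + general_tokens
fin_share = financial_tokens / total
gen_share = general_tokens / total

print(f"total tokens: {total / 1e9:.0f}B")
print(f"financial share: {fin_share:.1%}, general share: {gen_share:.1%}")
```

Running this shows the corpus is roughly an even split, about 51% financial to 49% general-purpose data, which is consistent with the paper's goal of strong financial performance without sacrificing general capability.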
Across dozens of tasks spanning many benchmarks, a clear picture emerges: among comparison models with tens of billions of parameters, BloombergGPT performs best, and in some cases it is competitive with or exceeds the performance of much larger models (hundreds of billions of parameters). While the goal for BloombergGPT was to be a best-in-class model for financial tasks, and the researchers included general-purpose training data to support domain-specific training, the model still attains general-purpose abilities that exceed those of similarly sized models, in some cases matching or outperforming much larger ones.
In the paper, they validate BloombergGPT on standard LLM benchmarks, open financial benchmarks, and a suite of internal benchmarks that most accurately reflect Bloomberg's intended usage. Training on the mixed dataset yields a model that outperforms existing models on financial tasks by significant margins without sacrificing performance on general LLM benchmarks. Additionally, they explain their modeling choices, training process, and evaluation methodology. As a next step, the researchers plan to release training logs (chronicles) detailing their experiences in training BloombergGPT.