What If ChatGPT Were Trained On Decades Of Financial Data And News?

With a modest-sized team, the news and data giant has built an AI that outperforms its rivals on the specific information tasks the company cares about.

Bloomberg, a data company whose business is driven by terminal subscriptions, is likely the first news company out of the gate with a massive AI model of its own. On Friday, it announced BloombergGPT, a large language model trained on the vast store of knowledge and information the company houses.

Bloomberg has released a research paper detailing the development of BloombergGPT, an advanced LLM trained on financial data. The model has been specifically built to support a wide variety of natural language processing (NLP) tasks in finance.

BloombergGPT is an exciting development in AI, reflecting a shift toward LLMs tailored to complex, highly specific domains. It marks the start of an era in which the technology can be applied to fields such as finance, enabling more accurate and efficient data processing.

BloombergGPT can open up new possibilities for making use of the immense amount of information on the Bloomberg Terminal, aiding the company's customers and bringing the full potential of AI to the financial domain. It could also bolster existing financial NLP applications such as sentiment analysis, news classification, named entity recognition, and question answering.

Drawing upon Bloomberg’s own data and public resources, the team developed a 50-billion-parameter large language model purpose-built for finance, designed to support the NLP tasks associated with the domain.

The promised technical details can be found in the research paper by Bloomberg’s Shijie Wu, Ozan İrsoy, Steven Lu, Vadim Dabravolski, Mark Dredze, Sebastian Gehrmann, Prabhanjan Kambadur, David Rosenberg and Gideon Mann.

BloombergGPT stands out for its training corpus of more than 700 billion tokens, larger than the roughly 500 billion tokens GPT-3 trained on. OpenAI has not revealed how many tokens its successor, GPT-4, was trained on, citing the competitive landscape.

Bloomberg has constructed an expansive dataset – the “largest domain-specific dataset yet” – comprising over 700 billion tokens. Of these, 363 billion are drawn from the financial data that fuels its terminals, while the remaining 345 billion come from outside “general purpose datasets.”
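A quick back-of-the-envelope check in Python, using only the figures above, shows why the corpus is often described as roughly half financial:

```python
# Sanity-checking the reported training-data mix from the figures above:
# 363B financial tokens (FinPile) plus 345B general-purpose tokens.
financial_tokens = 363e9
general_tokens = 345e9
total = financial_tokens + general_tokens

print(f"Total corpus:    {total / 1e9:.0f}B tokens")       # ~708B
print(f"Financial share: {financial_tokens / total:.1%}")  # ~51.3%
print(f"General share:   {general_tokens / total:.1%}")    # ~48.7%
```

That 51.3% financial share is the “roughly 52%” figure cited below.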

Rather than selecting a single approach – a general-purpose LLM, or a small LLM trained exclusively on domain-specific data – we opted for a mixed route. General models offer strong performance across varied tasks without specialized pre-training, but results from domain-specific models show that general models cannot replace them altogether.

At Bloomberg, given the large and diverse variety of tasks our applications support, we developed a model tailored to the financial domain that achieves best-in-class results on financial benchmarks while maintaining competitive performance on general language model benchmarks.

BloombergGPT may be a harbinger of the next wave of corporate AI. Current AIs are trained mostly on web data, though companies can add their own training material; BloombergGPT's corpus, by contrast, is roughly 52% proprietary or cleaned financial data, and the model appears more efficient at finance-related tasks.

FinPile is a comprehensive corpus of English financial documents, gathered from Bloomberg's archives, SEC filings, and transcripts of Bloomberg TV, among other sources. It also pulls in news from non-Bloomberg outlets to give a more complete picture of the financial markets.

The news portion of FinPile spans numerous English-language sources covering a diverse range of financial subjects, selected with an eye toward accuracy and objectivity. Beyond Bloomberg-published content, the ‘News’ category ranges from media outlets to authoritative reports.

The Pile, an extensive corpus of data, includes sources such as YouTube captions, Project Gutenberg, and the infamous collection of Enron emails long used for AI training. The training data also includes a copy of Wikipedia current as of last July.

BloombergGPT can handle many tasks: those other LLMs can do, as well as others more closely tied to Bloomberg's needs. In particular, it can generate and restructure sentences around given keywords.

A Bloomberg-style headline could summarize news stories related to the market capitalization and earnings per share of Apple and IBM: “Apple, IBM See Increase in Market Cap, EPS.”

Input: In the latter part of 2022, figures from Redfin indicate that the US housing market experienced a $2.3 trillion shrinkage in value, a decrease of 4.9%. This was the greatest percentage decline since 2008, when values plummeted 5.8%.

Output: Home Prices See Biggest Drop in 15 Years

Input: At the G20, Janet Yellen mentioned the improvement in the global economy compared to earlier predictions. She highlighted that inflation levels are decreasing domestically, and employment is good in the US. Furthermore, she pressed for a rapid conclusion of an IMF-funded rescue package for Ukraine.

Output: Yellen Sees Global Economy More Resilient Than Expected

Input: The US and eight states are mounting a major challenge to the tech giants by suing Google for allegedly monopolizing the digital advertising market. It is the first case since 1982 in which the Department of Justice (DOJ) has sought to break up a large firm. The suit aims to break up Google’s ad-tech business.

Output: Google Sued for Monopoly in Online Ad Market
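To make the headline task concrete, here is a minimal sketch of how one might reproduce it with few-shot prompting. BloombergGPT is not publicly available, so the snippet uses an open model ("gpt2") as a stand-in; the prompt format mirrors the input/output pairs above, and a small model's results will be far rougher.

```python
# A minimal few-shot headline-generation sketch; "gpt2" is a stand-in,
# since BloombergGPT itself cannot be downloaded.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = """Generate a Bloomberg-style headline for each news summary.

Summary: In late 2022, the US housing market shrank $2.3 trillion in
value, a 4.9% decrease and the biggest percentage drop since 2008.
Headline: Home Prices See Biggest Drop in 15 Years

Summary: At the G20, Janet Yellen said the global economy has improved
versus earlier predictions, with inflation decreasing domestically.
Headline: Yellen Sees Global Economy More Resilient Than Expected

Summary: The US and eight states sue Google for allegedly monopolizing
the digital advertising market.
Headline:"""

result = generator(prompt, max_new_tokens=15, do_sample=False)
# Print only the newly generated headline, not the echoed prompt.
print(result[0]["generated_text"][len(prompt):])
```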

Businesses prefer this version of the AI for its ability to effectively address specific questions related to their operations, such as sentiment analysis, categorization, and data extraction. For instance, it performs exceedingly well at identifying the CEO of a given company.
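As an illustration of the extraction side of that list, the sketch below uses the open-source spaCy library (an assumption for illustration, not Bloomberg's tooling) to pull out the people and organizations in a sentence: the raw material for answering "who is the CEO of company X?"

```python
# A rough sketch of entity extraction with spaCy (not Bloomberg's tooling).
# Finding PERSON and ORG entities is the first step; linking the person to
# the company via the "chief executive officer" relation is where a model
# prompted directly, as a financial LLM would be, earns its keep.
import spacy

nlp = spacy.load("en_core_web_sm")  # small general-purpose English pipeline

text = "Tim Cook, chief executive officer of Apple Inc., spoke at the event."
for ent in nlp(text).ents:
    print(ent.text, "->", ent.label_)
# Expected output (approximately):
#   Tim Cook -> PERSON
#   Apple Inc. -> ORG
```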

This week marks a potentially groundbreaking development in the AI industry: Bloomberg has built a 50-billion-parameter model trained on financial data. It could point to a future of multi-faceted AI, with various players succeeding in the sector rather than Big Tech and OpenAI monopolizing it.

Regarding performance, when facing off against similarly sized models, BloombergGPT stands its ground on general tasks and comes out ahead on finance-specific ones. The paper supports these findings with comparisons to GPT-3 and other large language models (LLMs).

The testing battery includes benchmark tasks named “Penguins in a Table,” “Snarks,” “Web of Lies,” and the feared “Hyperbaton.” These carnival-game-ready terms are standard BIG-bench tasks used to assess effectiveness.

Compared to models of similar size, in the tens of billions of parameters, BloombergGPT consistently outperforms, and on certain tasks it even surpasses larger competitors with hundreds of billions of parameters. Across various benchmarks and tasks, BloombergGPT clearly excels.
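For readers unfamiliar with how such comparisons are scored, the sketch below shows the basic shape of a benchmark harness: each task is a list of items with known answers, and a model's score is its accuracy. The items and the answer() stub are hypothetical; a real harness prompts the model itself.

```python
# A simplified sketch of benchmark scoring. Two toy items in the spirit
# of BIG-bench's "Web of Lies" task; both items and answer() are stand-ins.
items = [
    {"question": "Alice tells the truth. Bob says Alice lies. Does Bob lie?",
     "answer": "yes"},
    {"question": "Carol lies. Dan says Carol lies. Does Dan lie?",
     "answer": "no"},
]

def answer(question: str) -> str:
    """Placeholder for a model call; a real harness would prompt the LLM."""
    return "yes"  # a trivial always-yes baseline

correct = sum(answer(it["question"]) == it["answer"] for it in items)
print(f"Accuracy: {correct / len(items):.0%}")  # 50% for this baseline
```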

While BloombergGPT was designed as an advanced model for financial tasks, including general-purpose training data allowed it to build general abilities that surpass similarly sized models. In certain cases, it has even matched or surpassed larger models in performance.

Picture a chat-based interface that lets senior finance professionals efficiently gather, organize, and output data themselves, work now handled by junior analysts. If GPT-style models take over those tasks, the result would be quicker, more streamlined financial workflows that don't need to observe protected Saturdays.

Penguins aside, specific use cases beyond benchmarking, both for the company's journalists and for terminal customers, can easily be envisaged. Bloomberg's announcement didn't explain what it will do with the new technology it has created.

A corpus of ~all of the world’s premium English-language business reporting — plus the universe of financial data, structured and otherwise, that underpins it — is just the sort of rich vein of information a generative AI is designed to mine. It’s institutional memory in a box.

We’re thrilled about the potential of BloombergGPT, and we're investigating ways to evaluate, refine, and apply it. We're also eager to see how our learnings can help the wider community develop their own models and applications.

All the usual warnings that come with LLMs still apply: BloombergGPT can hallucinate, and its training data can introduce biases of its own. We can, however, be reasonably sure that BloombergGPT will not be encouraging a Marxist revolution at any point in the future.

BloombergGPT's scale should inspire other news organizations: any outlet with a vast collection of digitized archives has similar raw material to apply. That is a unique edge even smaller publishers can leverage in the long run.

Imagine the Anytown Gazette training an AI on its newspaper archives from the past 100 years, along with a vast database of documents from cities, counties, and states, plus any other accessible local sources.
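What might that look like in practice? Below is a hedged sketch of continued pretraining on an archive using the Hugging Face libraries; the model choice, file paths, and hyperparameters are illustrative assumptions, not a recipe the Gazette (or Bloomberg) actually follows.

```python
# Illustrative sketch: continue pretraining an open causal LM on a
# newspaper's digitized archive. Paths and settings are hypothetical.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# One plain-text file per digitized year of the paper (hypothetical paths).
archive = load_dataset("text", data_files={"train": "archive/*.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = archive["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gazette-lm", num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```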

Unlike ChatGPT, Bloomberg has built an AI at a scale that may be more helpful in-house than to the public. However, with the pace of AI advances over the last year, consumer-facing benefits could arrive earlier than expected.

By leveraging the power of natural language processing and machine learning, a model like BloombergGPT could provide investors and analysts with valuable insights and predictions, helping to inform their investment decisions and strategies. However, AI is not a panacea: the insights it generates must be interpreted and contextualized by human experts. AI development and deployment should also be approached thoughtfully and ethically, weighing the technology's potential benefits, risks, and challenges. Done well, AI can serve the greater good and contribute to a more just and equitable society.

Source: Nieman Lab
