How Much Does It Cost To Operate ChatGPT Each Day?

OpenAI’s ChatGPT may be very expensive to run: an analyst told The Information that it could cost around $700,000 a day because of the expensive servers it relies on.

ChatGPT requires vast computing resources on costly servers to answer users’ questions.

According to The Information, Microsoft is quietly developing an artificial intelligence (AI) chip to bring those costs down.

Dylan Patel, chief analyst at semiconductor research firm SemiAnalysis, told The Information that the “pricey tech infrastructure” OpenAI uses to run ChatGPT for everything from writing cover letters and generating lesson plans to revamping dating profiles could cost up to $700,000 a day.

ChatGPT is an AI-powered tool that needs substantial computing power to generate answers from user prompts.

Patel says:

“Most of this cost is based around the expensive servers they require.”
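For a rough sense of how an estimate like that can be put together, here is a minimal back-of-envelope sketch in Python. The GPU count, hourly rate, and query volume below are illustrative assumptions, not figures from SemiAnalysis or OpenAI.

```python
# Back-of-envelope estimate of the daily cost of serving a large language model.
# Every input below is an illustrative assumption, not a figure from SemiAnalysis.

GPUS_SERVING = 25_000          # assumed number of accelerators serving traffic
COST_PER_GPU_HOUR = 1.20       # assumed all-in cost per GPU-hour (hardware, power, hosting), USD

daily_hardware_cost = GPUS_SERVING * COST_PER_GPU_HOUR * 24
print(f"Estimated daily serving cost: ${daily_hardware_cost:,.0f}")

# The same total can also be expressed per query.
QUERIES_PER_DAY = 200_000_000  # assumed daily query volume
print(f"Implied cost per query: ${daily_hardware_cost / QUERIES_PER_DAY:.4f}")
```

The point of the sketch is simply that a fleet of expensive accelerators running around the clock adds up to a six-figure daily bill, whatever the exact inputs turn out to be.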

In a phone call with Insider, Patel said his initial estimate was based on GPT-3, and that running OpenAI’s newer GPT-4 model is likely even more expensive.

OpenAI did not respond to Insider’s request for comment before publication.

While training ChatGPT’s large language models likely costs tens of millions of dollars, the operational, or inference, costs must also be considered.

SemiAnalysis says inference costs:

“far exceed training costs when deploying a model at any reasonable scale.”

“In fact, the costs to inference ChatGPT exceed the training costs on a weekly basis.”
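To illustrate the general point, the short sketch below compares a one-time training budget with recurring inference spending. The training figure is a placeholder assumption, and the daily figure is the roughly $700,000 estimate cited above; the sketch illustrates how quickly serving costs stack up, not SemiAnalysis’s exact accounting.

```python
# Rough comparison of a one-time training budget with recurring inference spending.
# The training figure is a placeholder assumption ("tens of millions"); the daily
# figure is the analyst estimate of roughly $700,000 cited above.

TRAINING_COST = 40_000_000      # assumed one-time training cost, USD
DAILY_INFERENCE_COST = 700_000  # estimated daily serving cost, USD

days_to_match_training = TRAINING_COST / DAILY_INFERENCE_COST
annual_inference_cost = DAILY_INFERENCE_COST * 365

print(f"Days of inference spending to equal the training budget: {days_to_match_training:.0f}")
print(f"Inference spending over a full year: ${annual_inference_cost:,.0f}")
```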

In 2021, Latitude, the startup behind the game AI Dungeon, was paying $200,000 a month for Amazon Web Services servers to run OpenAI’s language model and keep up with millions of user queries, a cost CEO Nick Walton said weighed heavily on the company.

To bring that bill down, Walton said he switched to a language software provider backed by AI21 Labs, which cut the company’s monthly AI costs in half, to $100,000.

Walton says:

“We joked that we had human employees and we had AI employees, and we spent about as much on each of them.”

“We spent hundreds of thousands of dollars a month on AI and we are not a big startup, so it was a very massive cost.”

Microsoft’s Secret Chip Project: What We Know So Far

The Information reports that Microsoft has been working on an AI chip, codenamed Athena, since 2019 with the aim of cutting the running costs of generative AI models. The project follows Microsoft’s earlier $1 billion deal with OpenAI, which requires OpenAI’s models to run exclusively on Microsoft’s Azure cloud servers.

Microsoft reportedly launched the effort for two reasons: it was falling behind Google and Amazon in building its own chips, and it wanted a cheaper alternative to the Nvidia GPUs its AI models currently run on, a source familiar with the matter told The Information.

Four years after the project’s launch, more than 300 Microsoft employees are working on the chip, according to sources cited by The Information, and it could be ready for internal use by Microsoft and OpenAI as early as next year.

The development and implementation of AI should be guided by a commitment to transparency, fairness, and inclusivity to ensure that it benefits society as a whole rather than just a select few.

Source: Business Insider
