The high popularity and potential of OpenAI’s ChatGPT have come at an immense cost, according to The Information. Research firm SemiAnalysis reported that the artificial intelligence technology results in a daily expenditure of up to $700,000 to keep its impressive infrastructure running.
Dylan Patel, the firm’s chief analyst, leads research and projects across the organization. He says:
“Most of this cost is based around the expensive servers they require.”
In a follow-up interview, Patel told Insider that the costs could be even higher, since his estimates were based on GPT-3, the earlier model behind the older, now-free version of ChatGPT.
GPT-4, OpenAI’s latest model, would be even more expensive to run than its predecessors, Patel noted.
ChatGPT is not alone in its costly upkeep: the specialized chips that power AI chatbots and conversational search engines are notoriously power-hungry, which can make these technologies exorbitantly expensive to operate.
Microsoft, which has invested heavily in OpenAI, is readying its own AI chip, Athena. The chip has been in development since 2019 and is now accessible to select Microsoft and OpenAI personnel, underscoring the importance Microsoft places on AI technology.
By replacing the existing Nvidia graphics processing units with its own chip, Microsoft aims to cut running costs and improve efficiency.
Patel adds:
“Athena, if competitive, could reduce the cost per chip by a third when compared with Nvidia’s offerings.”
Nvidia recently agreed to a years-long collaboration with Microsoft, and Athena marks Microsoft’s first foray into AI hardware, though the company still trails rivals Google and Amazon in in-house chips. None of this means Microsoft is looking to replace Nvidia’s AI chips across the board.
If the rumors are accurate, Athena could be an incredibly helpful addition. Even so, there is no word yet on when it will arrive; hopefully, it won’t be too long.
Sam Altman, the CEO of OpenAI, recently declared that the era of “giant AI models” is coming to an end, arguing that large language models such as ChatGPT show diminishing returns despite their great size.
OpenAI’s recently unveiled GPT-4 model is estimated to contain more than one trillion parameters, which may have pushed it to the upper limits of practical scalability, per the company’s analysis.
Patel’s analysis suggests that while larger models generally deliver more power and capability, they also cost more to train and run.
OpenAI appears to have no financial woes, as ChatGPT’s success has been overwhelming. One CEO of an AI-driven media products company even recommends using artificial intelligence to scale content production 30 to 50 times beyond current output.
The cost of running ChatGPT underscores the significant investment that AI development requires and the potential benefits the technology can bring to society. While running advanced AI systems is expensive, it’s important to recognize the technology’s value and to work toward sustainable business models that support its ongoing development.
Source: Futurism