Unlock Seamless Conversations with ChatGPT and Whisper APIs

Today, we are releasing gpt-3.5-turbo, a member of the ChatGPT model family and the same model used in the ChatGPT product itself. The new model is priced at $0.002 per 1K tokens, ten times cheaper than our existing GPT-3.5 models.

gpt-3.5-turbo is also our best model for many non-chat use cases; early testers who migrated from text-davinci-003 to gpt-3.5-turbo needed only minimal adjustments to their prompts.

Rather than taking in unstructured text, ChatGPT models consume a sequence of messages with associated metadata. (Traditional GPT models, by contrast, consume raw text represented as a sequence of tokens.)

For those interested in what happens behind the scenes: the input is still converted into a sequence of “tokens” for the model to consume, but it is first structured using a new format called Chat Markup Language (“ChatML”).
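
To make the message format concrete, here is a minimal sketch of a chat request using the openai Python library as it existed at launch (the ChatCompletion interface); the API key and message contents below are placeholders, not values from the announcement.

```python
import openai

openai.api_key = "sk-..."  # placeholder key

# Each message carries a role ("system", "user", or "assistant") and content;
# the API renders this structured conversation as ChatML and then tokens.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the Whisper API in one sentence."},
    ],
)

print(response["choices"][0]["message"]["content"])
```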

ChatGPT Model Upgrades

We continually upgrade our ChatGPT models so developers get the latest improvements. Developers who use gpt-3.5-turbo will always get our recommended stable model, while still having the option to pin a specific version.

Today, we are introducing gpt-3.5-turbo-0301, which will be supported at least through June 1st. A new stable version of gpt-3.5-turbo will be released in April, and updated information about the switchover will be posted on the models page.
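
Pinning a snapshot is simply a matter of passing the dated model name instead of the rolling alias; a brief sketch with the same library interface as above:

```python
import openai

# Rolling alias: always resolves to the currently recommended stable model.
latest = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello!"}],
)

# Pinned snapshot: behavior frozen to the March 1st release,
# supported at least through June 1st.
pinned = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0301",
    messages=[{"role": "user", "content": "Hello!"}],
)
```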

Dedicated Instances

We now offer dedicated instances for users who want deeper control over the specific model version and system performance. By default, requests run on shared compute infrastructure and are billed per request.

Our API runs on Azure; with dedicated instances, developers pay for an allocation of compute infrastructure reserved for serving their requests, billed by time period.

Developers can control the load on their instance (higher load improves throughput but makes individual requests slower), enable features such as longer context limits, and pin a model snapshot.

A dedicated instance can be cost-effective for developers running more than 450 million tokens per day. Moreover, it allows for direct workload optimization against hardware performance, which can significantly reduce expenses compared to a shared infrastructure. If you would like information about dedicated instances, please don’t hesitate to contact us.
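
For a rough sense of that threshold, the back-of-envelope arithmetic below applies the $0.002 per 1K token rate quoted above; it ignores volume discounts and says nothing about actual dedicated-instance pricing, which is only available on request.

```python
# Back-of-envelope daily spend on shared infrastructure at the quoted volume.
price_per_1k_tokens = 0.002       # USD, gpt-3.5-turbo rate
tokens_per_day = 450_000_000      # ~450M tokens/day break-even point

daily_cost = tokens_per_day / 1_000 * price_per_1k_tokens
print(f"~${daily_cost:,.0f}/day, ~${daily_cost * 30:,.0f}/month")
# -> ~$900/day, ~$27,000/month
```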

Whisper API

Whisper, the speech-to-text model we open-sourced in September 2022, has been widely adopted by developers; however, it can be difficult to set up and operate.

Our API now provides convenient, on-demand access to our large-v2 model at $0.006 per minute. Furthermore, our serving stack is highly optimized for enhanced speed compared to other services.

You can access the Whisper API through our endpoints for transcription (in the source language) and translation (into English), which support a variety of audio formats, including m4a, mp3, mp4, mpeg, mpga, wav, and webm.
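
Here is a hedged sketch of both endpoints, again using the openai Python library's interface from the time of the announcement; "whisper-1" is the API name that serves the large-v2 model, and the file names are placeholders.

```python
import openai

openai.api_key = "sk-..."  # placeholder key

# Transcription: returns text in the audio's source language.
with open("meeting.m4a", "rb") as audio_file:
    transcript = openai.Audio.transcribe("whisper-1", audio_file)
print(transcript["text"])

# Translation: returns an English translation of the audio.
with open("interview_fr.mp3", "rb") as audio_file:
    translation = openai.Audio.translate("whisper-1", audio_file)
print(translation["text"])
```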

The introduction of the ChatGPT and Whisper APIs is a significant development for businesses and developers looking to enhance their chatbot capabilities. These APIs bring advanced natural language processing (NLP) and speech recognition, enabling chatbots to provide more personalized, human-like interactions.

Additionally, integrating these APIs is straightforward and customizable, making it easy for businesses to implement and tailor their chatbot experience. As AI and NLP technology evolve, we can expect these APIs to become even more powerful and versatile, opening up new possibilities for chatbot applications across various industries.

Source: @OpenAI
