OpenAI: GPT-3.5 Turbo fine-tuning released

Starting today, you can fine-tune GPT-3.5 Turbo for custom use cases. Read more about the new fine-tuning capabilities in our latest blog post.
Fine-tuning use cases
Since the release of GPT-3.5 Turbo, developers and businesses have asked for the ability to customize the model to create unique and differentiated experiences for their users. With this launch, developers can now run supervised fine-tuning to make this model perform better for their use cases. In our early results, we have seen developers achieve:

Improved steerability
Reliable output formatting
Consistent custom tone

In addition to increased performance, fine-tuning also enables businesses to shorten their prompts while maintaining similar performance.
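To illustrate the prompt-shortening point, here is a minimal sketch comparing the request payloads before and after fine-tuning. The fine-tuned model ID, system instructions, and user message are hypothetical placeholders; once the desired behavior is learned from training examples, the lengthy system prompt (and its per-request input cost) can be dropped:

```python
import json

# Before fine-tuning: a long system prompt restates the instructions
# on every call.
baseline = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "system", "content": "You are a support agent. Always "
                                      "answer formally, in at most two "
                                      "sentences, and sign off with the "
                                      "company name."},
        {"role": "user", "content": "Where is my order?"},
    ],
}

# After fine-tuning: the behavior is baked into the model, so the prompt
# shrinks. The model ID is a placeholder for a real fine-tuned model name.
fine_tuned = {
    "model": "ft:gpt-3.5-turbo:my-org:support:abc123",
    "messages": [
        {"role": "user", "content": "Where is my order?"},
    ],
}

# The fine-tuned request sends far fewer input tokens per call.
print(len(json.dumps(baseline)) > len(json.dumps(fine_tuned)))
```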

Pricing
Fine-tuning costs fall into two buckets: an initial training cost and ongoing usage costs:

Training: $0.008 / 1K tokens
Usage input: $0.012 / 1K tokens
Usage output: $0.016 / 1K tokens

For example, a gpt-3.5-turbo fine-tuning job with a training file of 100,000 tokens that is trained for 3 epochs would have an expected cost of $2.40. You can read more on our pricing page.
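The training-cost arithmetic above (billed training tokens = file tokens × epochs) can be sketched as a small helper using the published rates:

```python
# Published fine-tuning rates, expressed in dollars per token.
TRAINING_RATE = 0.008 / 1000   # $0.008 per 1K training tokens
INPUT_RATE = 0.012 / 1000      # $0.012 per 1K input tokens at inference
OUTPUT_RATE = 0.016 / 1000     # $0.016 per 1K output tokens at inference

def training_cost(tokens_in_file: int, epochs: int) -> float:
    """Expected training cost: every token in the file is billed once per epoch."""
    return tokens_in_file * epochs * TRAINING_RATE

# The example from the post: a 100,000-token file trained for 3 epochs.
print(f"${training_cost(100_000, 3):.2f}")  # $2.40
```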

API updates
We are excited to announce an updated fine-tuning API, /v1/fine_tuning/jobs. This new endpoint offers pagination and more extensibility to support the future evolution of the fine-tuning API. It deprecates the old /v1/fine-tunes endpoint, which will be turned off on January 4th, 2024. Fine-tuning, like all OpenAI APIs, is subject to our usage policies.
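As a minimal sketch, a job on the new endpoint is created by POSTing a model name and an uploaded training-file ID to /v1/fine_tuning/jobs. The file ID below is a placeholder; you would first upload a JSONL training file via the Files API:

```python
import json

# Request body for POST https://api.openai.com/v1/fine_tuning/jobs,
# sent with an "Authorization: Bearer <OPENAI_API_KEY>" header.
payload = {
    "model": "gpt-3.5-turbo",
    "training_file": "file-abc123",  # placeholder for a real uploaded file ID
}

print(json.dumps(payload))
```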
Updated GPT-3 models
In July, we announced that the original GPT-3 base models (ada, babbage, curie, and davinci) would be turned off on January 4th, 2024. Today, we are making babbage-002 and davinci-002 available as replacements for these models, either as base or fine-tuned models. Customers can access those models by querying the Completions API.
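Since the replacement base models are served through the Completions API, here is a hedged sketch of a request to babbage-002; the prompt and parameters are illustrative:

```python
import json

# Request body for POST https://api.openai.com/v1/completions,
# sent with an "Authorization: Bearer <OPENAI_API_KEY>" header.
payload = {
    "model": "babbage-002",   # or "davinci-002"
    "prompt": "Once upon a time",
    "max_tokens": 16,
}

print(json.dumps(payload))
```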
—The OpenAI team