OpenAI Introduces Fine-Tuning Tools for GPT-4o Models for Targeted Training and Improved Output Quality
Today, OpenAI announced the release of fine-tuning tools for the GPT-4o mini model, free of charge within a daily usage limit for roughly two months. Initially, the tools are accessible only to developers at usage tiers 4 and 5 on the OpenAI API platform, with plans to gradually extend access to more developers.
From now until September 23, 2024, those developers can use up to 2 million training tokens per day for free. Usage beyond that daily limit is charged at standard API prices, and once the free period ends, all tokens are billed at the API rate.
The API pricing for the fine-tuning tools varies by model. For GPT-4o mini, input costs $0.30 per million tokens and output costs $1.20 per million tokens; through the Batch API, the rates drop to $0.15 and $0.60 per million tokens respectively.
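As a rough illustration of how those rates translate into spend, here is a small Python sketch; the rates are the ones quoted above, while the daily token counts are hypothetical.

    # Estimate daily GPT-4o mini costs at the rates quoted above.
    # Token counts below are hypothetical, for illustration only.
    INPUT_RATE = 0.30 / 1_000_000   # USD per input token (standard API)
    OUTPUT_RATE = 1.20 / 1_000_000  # USD per output token (standard API)
    BATCH_DISCOUNT = 0.5            # Batch API halves both rates

    def estimate_cost(input_tokens: int, output_tokens: int, batch: bool = False) -> float:
        """Return the estimated cost in USD for one day's traffic."""
        cost = input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
        return cost * BATCH_DISCOUNT if batch else cost

    # Example: 10 million input tokens and 2 million output tokens per day.
    print(f"Standard: ${estimate_cost(10_000_000, 2_000_000):.2f}")              # $5.40
    print(f"Batch:    ${estimate_cost(10_000_000, 2_000_000, batch=True):.2f}")  # $2.70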
Supported models for the fine-tuning tools include gpt-3.5-turbo-0125, gpt-3.5-turbo-1106, gpt-3.5-turbo-0613, babbage-002, davinci-002, gpt-4-0613 (in testing), gpt-4o-2024-05-13, and the recommended gpt-4o-mini-2024-07-18.
OpenAI states that for most developers, the gpt-4o-mini-2024-07-18 model offers sufficient capability and usability, making it the most appropriate choice for the majority of users.
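For developers trying this out, launching a fine-tuning job with the recommended model looks roughly like the following sketch using the official openai Python SDK; the training file path is a placeholder.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Upload the JSONL training file (path is a placeholder).
    training_file = client.files.create(
        file=open("training_data.jsonl", "rb"),
        purpose="fine-tune",
    )

    # Start a fine-tuning job on the recommended model.
    job = client.fine_tuning.jobs.create(
        training_file=training_file.id,
        model="gpt-4o-mini-2024-07-18",
    )
    print(job.id, job.status)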
By using the fine-tuning tools, developers can supply additional training content to improve the output quality of models such as GPT-4. Users can then get accurate responses without long prompts, which saves on token costs and reduces latency.
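That training content is supplied as a JSONL file in which each line is one example conversation in the same messages format the Chat Completions API uses. Below is a minimal Python sketch that writes such a file; the conversations themselves are hypothetical.

    import json

    # Each line of the training file is one chat example.
    # The contents below are invented, for illustration only.
    examples = [
        {"messages": [
            {"role": "system", "content": "You are a concise support assistant."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Go to Settings > Account > Reset password."},
        ]},
        {"messages": [
            {"role": "system", "content": "You are a concise support assistant."},
            {"role": "user", "content": "Can I change my billing date?"},
            {"role": "assistant", "content": "Yes, under Billing > Payment schedule."},
        ]},
    ]

    with open("training_data.jsonl", "w") as f:
        for example in examples:
            f.write(json.dumps(example) + "\n")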
OpenAI also notes that the fine-tuning tools can enhance the effectiveness of few-shot learning, in which the model is taught a task through a handful of examples supplied in the prompt. Although OpenAI models are already trained on a vast corpus of text, few-shot learning is used to further improve their output quality.
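For contrast with fine-tuning, a few-shot prompt carries its demonstrations in every request, as in this sketch; the sentiment-classification task and the examples are invented for illustration.

    from openai import OpenAI

    client = OpenAI()

    # Few-shot prompting: the demonstrations ride along in every request,
    # which is exactly the per-call token overhead fine-tuning can absorb.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Classify the sentiment as positive or negative."},
            # Two in-context demonstrations:
            {"role": "user", "content": "The battery lasts all day."},
            {"role": "assistant", "content": "positive"},
            {"role": "user", "content": "The screen cracked within a week."},
            {"role": "assistant", "content": "negative"},
            # The actual query:
            {"role": "user", "content": "Setup was quick and painless."},
        ],
    )
    print(response.choices[0].message.content)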
Interested developers are encouraged to test the fine-tuning tools on the OpenAI API platform. For those serving end users at scale, fine-tuning could significantly enhance the end-user experience.