Fine-tuning lets businesses hone ChatGPT into a more focused model that’s especially efficient at certain tasks. Supervised training produces a bot that’s unique to the client company so that it offers, say, reliable responses in a specific language or with more concise wording. Until now, business customers were limited to older models for this, like davinci-002 or babbage-002.
The model would come pre-trained on data up to September 2021 before being fed company data. OpenAI says that none of that data, nor any inputs or outputs, will be used to train models outside of the client company.
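To give a sense of what “feeding company data” looks like in practice: OpenAI’s fine-tuning workflow takes training examples as chat-formatted JSONL, one short conversation per line. The sketch below builds such a file in Python; the company name and replies are invented for illustration, and a real job would need many more examples.

```python
import json

# Each fine-tuning example is one JSON object per line (JSONL) holding a short
# conversation: a system message setting the desired behavior, a user prompt,
# and the assistant reply the model should learn to imitate.
# "Acme Corp" and the replies below are made up for illustration.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are Acme Corp's support assistant. Reply concisely, in German."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Klicken Sie auf der Anmeldeseite auf 'Passwort vergessen'."},
        ]
    },
]

# Serialize to JSONL: one compact JSON document per line, ready to upload
# as a training file.
jsonl = "\n".join(json.dumps(ex, ensure_ascii=False) for ex in examples)
print(jsonl)
```

This same format is how a company would encode the language preference or terse wording mentioned above: the assistant turns in the training file simply demonstrate the style the tuned model should adopt.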
Other uses include training the bot to mimic a brand’s voice so its output is more consistent — think ad copy or internal communications at least partially written by AI (not that we don’t). Software companies could use it for routine code like API calls, or to dependably format and complete snippets of code.
GPT-3.5 Turbo is a model family that the company said is ideal for use cases that aren’t chat-specific. It can handle 4,000 tokens at a time, which OpenAI says is double what previously offered fine-tunable models could interpret. The company added that early testers were able to shorten their prompts by 90 percent after priming GPT-3.5 with fine-tuned instructions.
Pricing for fine-tuned GPT-3.5 Turbo is $0.008 per 1,000 tokens for training, $0.012 per 1,000 tokens of input usage, and $0.016 per 1,000 tokens of the chatbot’s output.
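For a rough sense of scale, here is a back-of-the-envelope cost estimate at the per-1,000-token rates from OpenAI’s announcement ($0.008 training, $0.012 input, $0.016 output). The token counts are invented, and a real training run may pass over the data for multiple epochs, which would multiply the training cost.

```python
# Per-1,000-token rates from OpenAI's fine-tuning announcement.
RATES_PER_1K = {"training": 0.008, "input": 0.012, "output": 0.016}

def cost(tokens: int, kind: str) -> float:
    """Dollar cost for the given number of tokens of one usage kind."""
    return tokens / 1000 * RATES_PER_1K[kind]

# Hypothetical example: a 100,000-token training file processed once, then a
# month of usage with 2 million input tokens and 1 million output tokens.
total = (
    cost(100_000, "training")
    + cost(2_000_000, "input")
    + cost(1_000_000, "output")
)
print(f"${total:.2f}")  # $40.80
```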
Microsoft also offers refinable GPT-based models as part of its Azure OpenAI Service; like OpenAI’s fine-tuned bots, they’re intended to be connected to a company’s internal data so they can craft responses from a business’ knowledge base. Microsoft pitches them as a way to summarize information or generate content for email campaigns.