Table of contents
The OpenAI Dev Day was filled with exciting announcements and updates that are likely to shape the future of AI development. From new models and APIs to pricing changes, here are the key takeaways.
OpenAI unveiled GPT-4 Turbo, the next generation of its popular GPT-4 model. The new model is not only more capable but also comes with a 128K context window, allowing it to process the equivalent of more than 300 pages of text in a single prompt. Users can feed in long PDFs and books to get answers to their questions.
The model shines across tasks that require following precise instructions, such as generating specific formats. With the introduction of JSON mode, developers can ensure that the model responds with valid JSON, opening up new possibilities for data manipulation and integration.
Note: response length is capped at 4,096 output tokens.
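As a sketch of how JSON mode looks in practice, the snippet below builds a Chat Completions request payload with the new response_format parameter and parses a sample response. The message contents and the sample output are invented for illustration; the payload is kept as a plain dict so the example runs offline.

```python
import json

# Illustrative Chat Completions request enabling JSON mode.
# The key detail is the response_format parameter added for GPT-4 Turbo.
request = {
    "model": "gpt-4-1106-preview",
    "response_format": {"type": "json_object"},
    "messages": [
        # JSON mode requires the word "JSON" to appear in the prompt.
        {"role": "system", "content": "Extract the fields as JSON."},
        {"role": "user", "content": "Alice, 30, alice@example.com"},
    ],
}

# With JSON mode on, the returned message content is valid JSON,
# so json.loads can be used directly on a completed response.
sample_content = '{"name": "Alice", "age": 30, "email": "alice@example.com"}'
record = json.loads(sample_content)
print(record["name"], record["age"])
```

With a configured client, the same dict would be passed as keyword arguments to `client.chat.completions.create(**request)`.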
OpenAI introduced the Assistants API, enabling developers to build agent-like AI applications similar to character.ai. This API equips developers with the power to create assistants with specific instructions, knowledge, and model integration, making it easier to develop high-quality AI apps.
Assistants have access to built-in tools, including Retrieval (RAG over uploaded files), Code Interpreter, and function calling.
The RAG functionality may affect companies that were previously building similar solutions. Let's see how this plays out; after all, OpenAI plugins were called a failure by Sam Altman himself.
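The shape of an assistant can be sketched as below. The configuration is built as a plain dict so the example runs offline; the name and instructions are made up, and with a configured v1 SDK client the dict would be passed to `client.beta.assistants.create(**assistant_config)`.

```python
# Minimal sketch of the payload shape for creating an assistant via the
# beta Assistants API. The name and instructions are illustrative.
assistant_config = {
    "name": "Docs Helper",
    "instructions": "Answer questions using the uploaded product docs.",
    "model": "gpt-4-1106-preview",
    # Built-in tools announced at Dev Day: hosted retrieval (RAG)
    # and a sandboxed Code Interpreter.
    "tools": [
        {"type": "retrieval"},
        {"type": "code_interpreter"},
    ],
}

tool_types = sorted(t["type"] for t in assistant_config["tools"])
print(tool_types)
```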
GPT-4 Turbo's vision capabilities enable it to accept images as inputs, offering features like image captioning and detailed image analysis. DALL·E 3, now integrated into the API, enables programmatic generation of images and designs. Furthermore, the text-to-speech (TTS) API provides human-quality speech generation. These new modalities enable businesses to create innovative applications and services, such as image recognition, content generation, and voice interactions.
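Sending an image to the model uses a multi-part message format: the user message content becomes a list mixing text and image_url parts instead of a single string. The sketch below builds that payload offline; the URL and question are placeholders.

```python
# Sketch of the multi-part message format for image input to
# GPT-4 Turbo with vision. The image URL is a placeholder.
image_url = "https://example.com/chart.png"

messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "What does this chart show?"},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }
]

part_types = [part["type"] for part in messages[0]["content"]]
print(part_types)
```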
OpenAI's commitment to customer protection is evident with the introduction of Copyright Shield. As a result, OpenAI will step in and defend customers against legal claims related to copyright infringement, adding an extra layer of security for users of ChatGPT Enterprise and the developer platform. The added legal protection provides businesses with peace of mind when using AI in their applications and services, reducing potential legal risks.
While fine-tuning is less impactful on GPT-4 compared to GPT-3.5, OpenAI is working to improve its quality and safety. Developers using GPT-3.5 fine-tuning will soon have the option to transition to GPT-4 fine-tuning.
OpenAI is launching a Custom Models program, catering to organizations with domain-specific needs. This program allows for deep customization of GPT-4, including pre-training and reinforcement learning tailored to a particular domain. Pricing for custom models starts at $2-3 million, making this an offering aimed at large enterprises.
OpenAI has doubled the tokens-per-minute limit for all paying GPT-4 customers, allowing developers to scale their applications more efficiently. OpenAI has also published its usage tiers, giving developers visibility into how their rate limits will increase. Higher rate limits support the growth and scalability of AI-powered applications, allowing businesses to serve more users and data.
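Even with doubled limits, clients should still expect the occasional HTTP 429. A generic retry helper along these lines (not part of the OpenAI SDK; the exception class and flaky endpoint below are simulated for illustration) sketches exponential backoff with jitter:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for an HTTP 429 response from the API."""

def with_backoff(call, max_retries=5, base_delay=0.5):
    """Retry a callable on rate-limit errors with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            # Sleep base, 2*base, 4*base, ... plus jitter to spread retries.
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))

# Simulate an endpoint that rate-limits the first two calls.
attempts = {"n": 0}
def flaky_request():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RateLimitError()
    return "ok"

result = with_backoff(flaky_request, base_delay=0.01)
print(result)  # succeeds on the third attempt
```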
16K Context Window: By default, the new version of GPT-3.5 Turbo supports a 16K context window. This expanded context window allows the model to consider a larger amount of text when generating responses, which is particularly useful for tasks that require broader context.
Improved Instruction Following: GPT-3.5 Turbo has been enhanced to perform better on tasks that demand precise instruction following, generating more accurate responses for tasks with complex instructions.
JSON Mode: Similar to GPT-4 Turbo, GPT-3.5 Turbo now supports JSON mode, enabling the model to generate valid JSON output and reducing parsing errors.
Parallel Function Calling: GPT-3.5 Turbo now supports parallel function calling, allowing developers to invoke multiple functions in a single message. This improves the efficiency and effectiveness of function calls, reducing the need for multiple roundtrips with the model.
The new seed parameter in GPT-4 Turbo allows for reproducible outputs, making it valuable for debugging, unit testing, and improving control over model behavior.
OpenAI is set to provide log probabilities for the most likely output tokens, which is useful for features like autocomplete in search experiences or detecting hallucinations.
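Log probabilities come back as natural-log values per token, so exponentiating recovers the model's own probability for each sampled token. A simple confidence heuristic flags tokens the model found unlikely; the token/logprob pairs and the 0.5 threshold below are made-up illustrative values.

```python
import math

# Made-up (token, logprob) pairs standing in for API output.
token_logprobs = [
    ("The", -0.01),
    ("capital", -0.05),
    ("is", -0.02),
    ("Quito", -2.30),  # low confidence: exp(-2.30) is roughly 0.10
]

CONFIDENCE_FLOOR = 0.5  # arbitrary threshold for this sketch

# Convert each logprob to a probability and flag low-confidence tokens.
low_confidence = [
    (tok, round(math.exp(lp), 2))
    for tok, lp in token_logprobs
    if math.exp(lp) < CONFIDENCE_FLOOR
]
print(low_confidence)
```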
Function calling is a crucial feature for developers, allowing them to describe functions for their apps or external APIs. OpenAI is enhancing functionality by enabling the model to call multiple functions in a single message, simplifying the interaction with the AI. Furthermore, GPT-4 Turbo is now better at returning the right function parameters, making the user experience more seamless and accurate.
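Dispatching parallel function calls can be sketched as follows. The tool_calls list mirrors the array shape returned on an assistant message; the weather function, call IDs, and arguments are invented for illustration, and note that arguments arrive as a JSON string that must be parsed.

```python
import json

# Hypothetical local function the model is allowed to call.
def get_weather(city):
    fake = {"Paris": "18C", "Tokyo": "22C"}
    return fake.get(city, "unknown")

AVAILABLE = {"get_weather": get_weather}

# Hand-written stand-in for message.tool_calls on a real response,
# where the model requested two function invocations in one message.
tool_calls = [
    {"id": "call_1", "function": {"name": "get_weather",
                                  "arguments": '{"city": "Paris"}'}},
    {"id": "call_2", "function": {"name": "get_weather",
                                  "arguments": '{"city": "Tokyo"}'}},
]

results = {}
for call in tool_calls:
    fn = AVAILABLE[call["function"]["name"]]
    args = json.loads(call["function"]["arguments"])  # arguments are a JSON string
    results[call["id"]] = fn(**args)

print(results)
```

Each result would then be sent back to the model as a separate tool message keyed by its call ID, letting one round trip resolve several function calls.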
OpenAI has significantly reduced pricing for various models, making AI more affordable than ever.
OpenAI released a new major version of its Python SDK. It's a complete rewrite of the library, introducing a client-based interface, typed request and response objects, and built-in error handling and retries.
If you are using gpt-3.5-turbo as the model identifier for the latest model, it will point to the new gpt-3.5-turbo-1106 from December 11 onwards. It is recommended to evaluate your prompts before the switch happens, or pin the current version by using gpt-3.5-turbo-0613.
We're not sure yet how GPT-3.5's price reductions will impact the need for fine-tuning your own models or using vector databases. Users can now provide more context than before, but it's unclear whether the model can reliably use information buried in the middle of a long prompt. Early research indicated that both GPT-3.5-4k and GPT-3.5-16k had trouble dealing with long contexts.
Source: Lost in the Middle: How Language Models Use Long Contexts
We created a thread of all the cool demos people are building with new APIs.
OpenAI Dev Day has brought about a wave of innovation and opportunities for developers and businesses. The reduced pricing, new models, and enhanced features empower organizations to create more sophisticated and cost-effective AI-driven solutions, ultimately giving them a competitive edge in the fast-evolving AI landscape.
Request a demo to see how Galileo can help your team train, evaluate and deploy trustworthy LLM applications.