OpenAI's Big Update - How Can Your Company Leverage It?

OpenAI released a slew of updates and new pricing last week. Which updates matter most, and how can your company use them?

I'll start by saying this assessment is written from an enterprise perspective. Some of the updates that won't be covered here may be quite useful for people in a non-enterprise environment.

Here is a list of the updates; I'll describe each one's significance below:

- Assistants Tool

- Retrieval

- Code Interpreter API

- Vision API

- Knowledge Update to April 2023

- Price Cut

- Larger Context Window

Assistants

Assistants is a very approachable tool that lets OpenAI functionality be integrated into enterprise applications, or simply act as an assistant for individual employees or groups. Assistants allows GPT-4 to use multiple tools to perform tasks, including Retrieval, Code Interpreter, and Function Calling. You can set up an assistant in the OpenAI GUI in just a few minutes and leverage these tools as you see fit. Its power will become clearer as we review the tools.
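For teams that would rather work in code than in the GUI, here is a minimal sketch of creating an assistant with the openai Python package (v1.x); the name, instructions, and tool selection are illustrative, not a recommendation:

```python
# Minimal sketch: creating an assistant via the Assistants API
# (openai Python package v1.x). Name and instructions are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

assistant = client.beta.assistants.create(
    name="Finance Helper",
    instructions="You answer questions about our internal finance policies.",
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}, {"type": "code_interpreter"}],
)
print(assistant.id)  # save this ID so you can reuse the assistant later
```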

Retrieval

Retrieval allows you to upload up to 20 files of up to 512 MB each to act as your knowledge base. It's not the same as retrieval-augmented generation, which can draw on thousands of documents but only uses text "chunks" from them. It's more like loading documents into ChatGPT's immediate memory, but with up to 10 GB of files allowed it holds far more data, including spreadsheets and other tables of data for analysis.
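As a rough sketch of how this looks in practice, the snippet below uploads a file and attaches it to an assistant with the Retrieval tool enabled; the file name and instructions are placeholders:

```python
# Sketch: uploading a document for Retrieval and attaching it to an assistant.
# "policies.pdf" and the instructions are placeholders.
from openai import OpenAI

client = OpenAI()

uploaded = client.files.create(
    file=open("policies.pdf", "rb"),
    purpose="assistants",  # marks the file for use by Assistants tools
)

assistant = client.beta.assistants.create(
    name="Policy Assistant",
    instructions="Answer questions using the attached policy documents.",
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}],
    file_ids=[uploaded.id],  # up to 20 files, 512 MB each
)
```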

Code Interpreter API

Briefly, Code Interpreter can read spreadsheets and other data files and run Python code. That allows you to conduct modeling, analysis and visualization just by uploading your data files and telling Code Interpreter what you want (see this post for more info: https://www.dhirubhai.net/feed/update/urn:li:activity:7087438993960701952/ ). Code Interpreter has been around for several months but could only be used within the ChatGPT interface. Now that it's part of the Assistants API it can be integrated into applications, opening a world of possibilities for analytic automation.
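A hedged sketch of that integration might look like the following: upload a data file, create an assistant with Code Interpreter enabled, then run a request on a thread. The file name, prompt, and simple polling loop are illustrative only:

```python
# Sketch: asking Code Interpreter to analyze an uploaded CSV via a thread/run.
# "sales.csv" and the prompt are placeholders; run polling is simplified.
import time
from openai import OpenAI

client = OpenAI()

data_file = client.files.create(file=open("sales.csv", "rb"), purpose="assistants")

assistant = client.beta.assistants.create(
    name="Analyst",
    instructions="Analyze the data files you are given.",
    model="gpt-4-1106-preview",
    tools=[{"type": "code_interpreter"}],
    file_ids=[data_file.id],
)

thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Plot monthly revenue and summarize the trend.",
)

run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
while run.status not in ("completed", "failed", "expired", "cancelled"):
    time.sleep(2)  # simple polling; production code should add a timeout
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

for message in client.beta.threads.messages.list(thread_id=thread.id).data:
    print(message.role, message.content)
```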

Vision API

OpenAI recently released image uploading in ChatGPT, and now there is an API for that capability. At the moment it can only answer general questions about images, but that still lets it overcome the limitations of text-only input, such as by reading diagrams and charts. As its capabilities advance and videos can be broken down into images, one can imagine a number of retail, transportation, security and other scenarios where GPT can conduct analysis.
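As a sketch, asking the vision model a question about an image can be as simple as the following; the model name reflects the preview release, and the image URL is a placeholder:

```python
# Sketch: asking the vision-enabled model a question about an image by URL.
# The image URL is a placeholder.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    max_tokens=300,  # the preview defaults to a short reply, so set this explicitly
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What does this chart show?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```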

Knowledge Update to April 2023

Until last week, GPT-4's knowledge base only went up to September 2021. This update is particularly welcome in fast-moving fields such as software development.

Price Cut

OpenAI also announced price cuts of roughly two to three times. A cut of that size can have a significant impact on business cases.

Larger Context Window

OpenAI increased its largest context window (the amount of text you can include with your question) from 32K to 128K tokens. In practice it can struggle beyond about 60K tokens, but that's still roughly a 110-page book that you could have GPT-4 reference as it answers your questions.
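If you plan to send large documents, it's worth counting tokens first. The sketch below uses the tiktoken library to check a file against the practical ~60K-token mark mentioned above; the file name is a placeholder:

```python
# Sketch: counting tokens with tiktoken before sending a large document,
# to stay under the practical limit discussed above (~60K tokens).
import tiktoken

encoding = tiktoken.encoding_for_model("gpt-4")

with open("long_report.txt") as f:  # placeholder file name
    text = f.read()

n_tokens = len(encoding.encode(text))
print(f"{n_tokens} tokens")
if n_tokens > 60_000:
    print("Consider splitting the document before sending it to the model.")
```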

Wrap-Up - Function Calling

I'd like to wrap up with the Function Calling component of the Assistants API. Function Calling allows a company's developers to create code libraries for all types of activities. Instead of relying on OpenAI to generate code, a company's developers can ensure that best practices and standards are put into place. Then, depending on the problem, GPT-4 can choose which code libraries to use. While Function Calling has been around for five months, its inclusion in the Assistants API makes it far easier to use in enterprise applications. With proper business unit input, developer focus, and knowledge management practices, the level of automation within a company can be considerably enhanced.
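To make that concrete, here is a minimal sketch of exposing a single in-house function to GPT-4; the function name, schema, and prompt are hypothetical examples, and a real application would execute the returned call and pass the result back to the model:

```python
# Sketch: exposing a hypothetical in-house function to GPT-4 via function calling.
# "get_inventory_level" and its schema are illustrative examples.
import json
from openai import OpenAI

client = OpenAI()

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_inventory_level",
            "description": "Look up current stock for a SKU in the warehouse system.",
            "parameters": {
                "type": "object",
                "properties": {"sku": {"type": "string", "description": "Product SKU"}},
                "required": ["sku"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "How many units of SKU A-123 do we have?"}],
    tools=tools,
    tool_choice="auto",  # the model decides whether to call the function
)

# If the model chose to call the function, inspect its name and arguments.
call = response.choices[0].message.tool_calls[0]
print(call.function.name, json.loads(call.function.arguments))
```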
