OpenAI Announcements
Many in the startup community, and specifically the AI #founder community, have been watching and passionately discussing the #openaidevday announcements. Some have even gone as far as saying the only API one needs to call is #openai's.
In this post, we take a closer look at the background prior to OpenAI's announcements, a summary of the announcements and their implications, what will likely change, and what may not necessarily change.
I also pose a curious question: as OpenAI moves up the stack, and hopefully closer to enterprise offerings, could it evolve to look like another enterprise data and AI platform?
Background
Let us look at the lay of the land prior to these announcements:
1. Over the last 3-6 months, following #openai's announcements of #GPT35 and subsequently #GPT4, we saw three themes emerge among pure-play #llm startups: #enabling and #orchestrating smaller, cheaper, value-driven #oss models; libraries and frameworks for building and deploying #RAG-based workflows and conversational applications; and #finetuning infrastructure for #customizing #oss language models.
The main business drivers for the above themes were as follows:
i) The need for a #cheaper, #value-driven alternative, for companies wanting to host their own models for a variety of reasons or wanting a hosted solution.
ii) Fine-tuning models for #domain-specific requirements continued to be hard and therefore required significant additional infrastructure.
iii) Libraries and frameworks for building optimal RAG-based workflows and agents for #usecase-specific needs. In most cases, these were layers of software built on top of an #openai model or other #oss models.
Both the startup community and other vendors saw an opportunity in #oss foundation models, #frameworks, and #finetuning infrastructure for the above business drivers, and there was visible #momentum in the direction of #oss. #googletrends showed #openai was losing traction in the next leg of #foundationmodel adoption, at least among the discerning #enterprise customers.
OpenAI DevDay Announcements
With the #openaidevday announcements, #openai has made a few changes which are #credible and test the #startup and #oss communities and other #llmvendors on the above objectives. Here are a few takeaways from the announcements as we see them:
1. Commercial Changes
#openai's pricing changes (2x to 3x cheaper) for most of the #models bring it closer to the price points that #oss model startups, and possibly many enterprises, are aiming for when they look at #oss-based approaches.
2. OpenAI moves up the stack
With the addition of a #seed parameter in the #model API, additional #constraints for JSON output, a #larger context window, the #ability to #upload documents, videos, voice, etc., and the AI Assistants SDK/application with a no-code workflow, #openai moves up the stack. Whether this will excite #enterprises or stay confined to consumer-facing applications, only time will tell. But these changes significantly lower the bar for common #RAG-based applications, specifically #conversational assistants.
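As a minimal sketch, the determinism and JSON-output controls map to two request parameters in the Chat Completions API. The model name and prompts below are illustrative, and since actually sending the request needs the `openai` client and an API key, we only build the request payload here to show where the parameters go:

```python
# Sketch of a Chat Completions request using the new DevDay controls.
# Model name and messages are illustrative assumptions.
request = {
    "model": "gpt-4-1106-preview",               # 128k-context GPT-4 Turbo
    "seed": 42,                                  # best-effort reproducible sampling
    "response_format": {"type": "json_object"},  # constrain output to valid JSON
    "messages": [
        {"role": "system", "content": "Reply in JSON with keys 'answer' and 'sources'."},
        {"role": "user", "content": "Summarize the uploaded document."},
    ],
}

# With the official Python client, this payload would be sent as:
#   client.chat.completions.create(**request)
```

Together, `seed` and `response_format` replace a fair amount of retry-and-parse glue code that conversational applications previously carried themselves.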
3. Lowers the bar on fine-tuning and creating custom #GPTs on top of #openai models
The announcements #further significantly lower the bar for #finetuning an #openai model. Fine-tuning is now as simple as uploading a dataset and triggering a training job.
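The "upload a dataset, trigger a job" flow above can be sketched as follows. The training data is JSONL, one chat transcript per line; the example content and the AcmeCo name are illustrative assumptions, and the final two API calls (shown as comments) require the official `openai` client and an API key:

```python
import json

# One fine-tuning example per line, in chat-transcript format.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a support bot for AcmeCo."},  # AcmeCo is hypothetical
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Go to Settings > Security > Reset password."},
    ]},
]

# Write the dataset as JSONL.
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# With the official client, the upload and training job are then:
#   file = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
#   client.fine_tuning.jobs.create(training_file=file.id, model="gpt-3.5-turbo")
```

That is the entire workflow; no training infrastructure, GPUs, or checkpoint management on the customer's side.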
Implications
The implications of the above are manifold, but first let's look at the implications for enterprise use-cases.
First, let's look at what will likely not change:
1. #oss models will continue to provide a credible alternative for enterprise use-cases where #dataprivacy and #datalocality are of paramount importance.
2. Complex #RAG-based workflows and transformation pipelines don't necessarily change with the above.
Now let's look at what is likely to change:
1. Independent, low-hanging #RAG-based workflows which are not part of a larger workflow, such as #chat-with-your-document, will need strong reasoning to justify an alternate approach. The larger context window, the seed for reproducible outputs, etc., help reduce the amount of glue logic written in current RAG workflows to work around context-window limits and inconsistent model responses.
2. As the overall TCO becomes favorable, many #startups and #enterprises will need to justify an #oss approach beyond #pricing concerns.
3. Fine-tuning becomes a lot easier within the #openai ecosystem, raising the bar for other vendors to provide a similar or better credible alternative.
4. The bar to build multimodal data products, which were earlier much harder, lowers significantly, opening up a variety of newer use-cases.
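To make the glue-logic point in item 1 concrete, here is a minimal sketch of the kind of chunking helper most current RAG pipelines carry to fit documents into a small context window. The chunk sizes are illustrative assumptions; with a 128k-token window, much of this stitching simply disappears for single-document use-cases:

```python
# Illustrative glue logic that a much larger context window makes
# unnecessary for many single-document workflows: splitting text into
# overlapping chunks that fit a small window, to be queried piece by piece.
def chunk(text: str, size: int = 3000, overlap: int = 200) -> list[str]:
    """Split `text` into pieces of at most `size` chars, with `overlap` chars carried over."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap  # step forward, keeping an overlap for continuity
    return chunks
```

Every such helper comes with edge cases (overlap tuning, answer deduplication across chunks) that a single large-context call avoids entirely.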
What could the future look like?
Today, we look at foundation-model companies differently than enterprise data platform vendors.
With OpenAI moving up the stack and enabling people to upload various forms of data to its SaaS offering, a curious question for the future is whether these two different universes of solutions will start overlapping.
Data/ML platforms continue to add LLM/AI capabilities, while AI/LLM platforms like OpenAI, attempting to move up the stack, may continue to add more data and even traditional ML capabilities. This isn't a far-fetched scenario and is well within the realm of possible future evolution.
Of course, for now, OpenAI doesn't look like a purely enterprise vendor. But evolution can bring these often differently seen solution universes far closer than one believes they are today.