What's next for ChatGPT plugin developers, and why they are at a real advantage compared to non-plugin developers and foundational model providers
Ozgur (Oscar) Ozkan
Software Engineer Contractor (bootstrapped an AI company to $1M+ ARR in 3 months as the sole developer / open to new opportunities) | SRE / Platform Engineering / DevOps / DevSecOps / Backend / Full Stack
A ChatGPT-generated TLDR of what I think is below.
[TLDR]
Startups working with AI (particularly interaction LLMs, summarization LLMs, and search databases) are limited by request speed and can only effectively integrate one of each type. Their main competitive advantage lies in collecting and correctly implementing metadata for AI applications, so most startups will converge on the same stack.
ChatGPT plugin developers have a competitive edge over non-plugin developers, as they have access to more user and market data, and can iterate on their products more effectively. However, they face challenges with certain types of metadata, such as PDFs, and risk OpenAI implementing similar features or services.
Startups focusing on AI assistants for knowledge workers are particularly at risk, as OpenAI is likely to enter this space. To mitigate this, startups can either specialize in meta work products or develop fully autonomous AI agents.
Key industries and interfaces for startups to focus on include internet search, PDFs, dev tools, project management tools, AI agents, software and infrastructure deployment. However, non-tech industries may struggle to adopt these technologies in 2023-2024 due to regulatory challenges and the complexity of prompt engineering.
In conclusion, plugin developers have significant advantages in terms of market insight, user data, and existing customer relationships, which can be leveraged to create successful products, even in the face of competition from OpenAI.
[TLDR END]
My own ideas: there's only room for one interaction LLM (GPT-4), one summarization LLM (Claude 2), and one search DB (a vector DB); beyond that, the requests are not fast enough.
The only moat startups can build is collecting correct metadata for the AI they are implementing, or helping people pass this metadata correctly.
So basically, all startups will copy-paste and do the same thing, something like the sketch below.
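A minimal sketch of that stack, assuming the official openai and anthropic Python SDKs; `vector_db` stands in for whatever vector store you use, and its `search` method here is hypothetical, not a specific product's API:

```python
# Minimal sketch of the "one of each" stack: one search DB, one summarization
# LLM (Claude 2), one interaction LLM (GPT-4). Illustrative only.
from openai import OpenAI
import anthropic

openai_client = OpenAI()               # reads OPENAI_API_KEY from the environment
claude_client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def answer(question: str, vector_db) -> str:
    # 1. Search DB: pull the documents (and their metadata) relevant to the question.
    #    `vector_db.search` is a placeholder for your vector store's query call.
    hits = vector_db.search(question, top_k=5)
    raw_context = "\n\n".join(hit["text"] for hit in hits)

    # 2. Summarization LLM: compress the retrieved context so the interaction
    #    model's prompt stays small (and fast).
    summary = claude_client.messages.create(
        model="claude-2.1",  # assumed model name
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": f"Summarize the facts relevant to this question: {question}\n\n{raw_context}",
        }],
    ).content[0].text

    # 3. Interaction LLM: produce the user-facing answer from the summary.
    reply = openai_client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{summary}\n\nQuestion: {question}"},
        ],
    )
    return reply.choices[0].message.content
```

The summarization step exists mainly to keep the interaction model's prompt small enough to stay inside its latency budget, which is the "requests are not fast enough" constraint above.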
ChatGPT plugin developers are at an advantage compared to non-plugin developers. Some ideas are simply not good because they require more metadata, explanation, or prompt engineering for the interaction AI. GPT-4 knows Google search page metadata by default: when a user references links, headers, or snippets, the AI understands. On the other hand, GPT-4 knows far less about PDF metadata, such as the table of contents and which page a given title or subtitle points to.

Any startup that focuses on the same knowledge-worker AI assistant will see OpenAI implement something similar, and that even includes developer assistants, since many developers already use ChatGPT for coding. I think there are two ways to mitigate this, each at one end of a spectrum: become a meta-work product, or build fully autonomous AI agents with no humans involved (not make.com-style automation; I mean zero human involvement, where the agent's input is prepared by a program and its output is consumed by other AI agents or programs).
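As a rough illustration of what collecting and passing PDF metadata means in practice, here is a sketch that extracts a PDF's table of contents with the pypdf library and places it in the prompt so the model can resolve title-to-page references; the prompt wording is just an example, not a fixed format:

```python
# Sketch: extract a PDF's table of contents with pypdf and hand it to the model
# as explicit metadata, so "which page does section X start on?" becomes answerable.
from pypdf import PdfReader

def toc_as_text(pdf_path: str) -> str:
    reader = PdfReader(pdf_path)
    lines = []

    def walk(entries):
        for entry in entries:
            if isinstance(entry, list):   # nested outline level
                walk(entry)
            else:
                page = reader.get_destination_page_number(entry) + 1
                lines.append(f"{entry.title} -> page {page}")

    walk(reader.outline)                  # the PDF's bookmark/outline tree
    return "\n".join(lines)

def build_prompt(pdf_path: str, question: str) -> str:
    # The table of contents is exactly the metadata GPT-4 can't infer on its own.
    return (
        "Document table of contents (title -> page):\n"
        f"{toc_as_text(pdf_path)}\n\n"
        f"Question about this document: {question}"
    )
```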
Obvious red-ocean interfaces that have product-market fit with AI are:
1. Internet search (already known by the LLM)
2. PDFs (a lot more metadata room for startups)
3. Dev tools (effectively unlimited metadata room for startups, but there's still OpenAI risk)
4. Knowledge workers' PM tools (business-flow / ERP metadata); this is the meta-work end of the spectrum
5. AI agents (there's still OpenAI risk; fine-tuning agents are really long-running AI tasks)
6. Software deployment (open-source tool risk from OpenAI / Spotify's ones)
7. Infrastructure deployment (the same open-source tool risk from OpenAI / Spotify's ones)
8. Non-tech industries will have a hard time adopting at this point in time, 2023-2024 (law in particular will push back, since hallucinations are a thing)
9. No-code / low-code business automation is not 100% important (it doesn't necessarily make it easier to publish SEO-generating automation, and you need specific prompt engineering from non-technical people)
This is non-consensus, but in my opinion it's true:
Plugin devs are on the bleeding edge of market and user research. For example, Keymate has touched a quarter of a million people's lives as one plugin.
That one single plugin:
- Discovered the search requirement in the plugin store and proved that there's product-market fit for search + LLM.
- Implemented long-term memory, PDF reading, PDF training, and RAG for a quarter of a million people.
- Built hybrid search (knowledge base + internet); a sketch of the idea follows below.
- Added manual plugin execution with "/" commands.
All within one plugin. We literally ran every experiment in the function-calling AI zone before OpenAI did.
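For readers who haven't built one, hybrid search here just means merging internal knowledge-base hits with live web results into one tagged context block before the model answers. A minimal sketch of the idea, where `knowledge_base` and `google_search` are placeholders for your own retrieval layer and search client, not Keymate's actual internals:

```python
# Sketch of hybrid search: merge internal knowledge-base hits with live web
# results into one tagged context block before handing it to the model.
# `knowledge_base` and `google_search` are placeholders, not Keymate internals.

def hybrid_context(question: str, knowledge_base, google_search) -> str:
    kb_hits = knowledge_base.search(question, top_k=3)    # your own documents
    web_hits = google_search(question, num_results=3)     # up-to-date internet results

    lines = [f"[kb] {hit['text']}" for hit in kb_hits]
    lines += [f"[web] {hit['title']}: {hit['snippet']} ({hit['url']})" for hit in web_hits]
    # Tagging each line with its source lets the model (and the user) tell
    # private knowledge apart from live search results when it answers.
    return "\n".join(lines)
```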
We talked to 4,000 people one by one; I don't think a single person at OpenAI has interviewed 4,000 active users.
Maybe, over time, no new users will use ChatGPT plugins for PDF and search, but plugin developers know what to implement next, and what OpenAI will implement next, too.
That was a positive-sum game for a long while. I still believe there's no other ecosystem that gives you the opportunity to experiment with GPT-4 for free. Compared to a regular startup founder who will build anything with the GPT-4 API, a plugin developer has market and user research:
Usage data of the GPT-4 API: where it failed and where it stuck.
Brand awareness and registered users.
Yearly contracts.
An existing user base that is very helpful and innovative.
So, as a plugin developer, I can build a venture studio and produce 10 products with the data I have. If one of them hits big and isn't eaten by OpenAI competition, it's still a win.
An individual developer can use multiple AI tools without restrictions. OpenAI has to use Bing because of the Microsoft investment; when you become part of an investment fund, you start to pick sides, and that is not necessarily good for end users. OpenAI can't implement Claude 2 inside ChatGPT Plus, but an individual developer can. Because LLMs are text in and text out, they can talk to each other, and developers can build the best products in ways these foundational model providers can't.

Check this sample: the Keymate API uses a knowledge base + Google Search + Claude 2 + GPT-4 8K to help you build things on top of Keymate functionality (up-to-date information, use of your personal or company knowledge base, a 100K reading context, and no need to respond within 45 seconds the way ChatGPT plugins must). We actually implemented Claude 2 PDF reading inside ChatGPT Plus, and it has drawn interest from the community: https://www.reddit.com/r/OpenAI/comments/17j9aoj/comment/k715tf2/?utm_source=share&utm_medium=web2x&context=3
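To make the "LLMs are text in, text out, so they can talk to each other" point concrete, here is a minimal sketch that puts GPT-4 and Claude 2 behind one text-only interface. This is my own illustration of the pattern, not Keymate's code, and the model names are assumptions:

```python
# Sketch: because every LLM is text in / text out, an individual developer can
# put GPT-4 and Claude 2 behind one interface and route between them freely,
# which a single foundation model provider won't do inside its own product.
from openai import OpenAI
import anthropic

openai_client = OpenAI()
claude_client = anthropic.Anthropic()

def complete(prompt: str, provider: str = "openai") -> str:
    """Send plain text, get plain text back, regardless of vendor."""
    if provider == "openai":
        reply = openai_client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        return reply.choices[0].message.content
    if provider == "anthropic":
        reply = claude_client.messages.create(
            model="claude-2.1",  # assumed model name
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return reply.content[0].text
    raise ValueError(f"unknown provider: {provider}")

# Because the interface is just text, one model's output can feed another, e.g.
# let Claude digest a very long document, then let GPT-4 answer from the digest:
# digest = complete(f"Summarize:\n{long_document}", provider="anthropic")
# answer = complete(f"Answer from this summary:\n{digest}", provider="openai")
```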