Fireside Chat with OpenAI's Marc Manara
Xoogler.co
Network of Google alumni and current Googlers helping each other advance our ambitions in the startup ecosystem
The Xoogler community hosted OpenAI's Marc Manara for a fireside chat on February 14, 2024, with Christopher Fong, Xoogler's founder, as the moderator. You can find a summary of the discussion below. If you're an SF Bay Area Xoogler founder running a funded startup, please RSVP here to join our upcoming event with Marc on how you can optimize building with OpenAI!
Background
Marc Manara and his team at OpenAI work with startups as part of OpenAI's go-to-market team. He joined the team in October and was previously the Head of Business Development for the Early Stage Startup Ecosystem at Amazon Web Services (AWS) from 2018-2023. He joined Chris and the Xoogler community for a fireside chat to talk about startups, using ChatGPT, and OpenAI.
Tell us a bit about yourself and your current role at OpenAI
Most recently, I worked at AWS for about five years, running part of the startups team that worked with companies particularly in the early stages. We did a lot of work with venture funds, accelerators, and other places where lots of startups were forming, receiving their first checks, and getting started building their technology. Before that, I ran a venture-backed startup as founder/CEO and learned a lot on that journey. I started my career as an engineer and spent several years on a detour at an impact investment fund called Acumen, where we made venture-style investments in emerging markets with a social impact focus.
I've had a pretty wide-ranging career, but now I'm laser-focused on how we can help startups building with AI accomplish the most: get the best performance and capability, and build great products that wouldn't have been possible before what's happened in the last several years. That spans early-stage companies just getting started building the APIs into their products, all the way through to heavily venture-backed companies and beyond. We're a very small team right now, but I'm hiring and we're building out the team's strategy - trying to make OpenAI a great partner to startups at each stage of their journey. We have a strong belief that some of the greatest impact from this set of technologies is really going to come from startups. There are a lot of enterprise use cases, and we're seeing more and more investment from enterprises, especially in the last 18 months, to find ways to bring AI into their operations and products and to improve productivity and their customer experiences. But we think some of the breakthrough, category-defining use cases are really going to originate with startups that may not exist yet, or are just starting out. That's the fundamental belief we're building on.
Can you tell us how you’re personally using ChatGPT?
I use it in a few different ways. The most frequent use for me is unblocking my writing process, which shows up in various forms. When I wrote our initial strategy doc, I used it to figure out a structure that made sense and to generate ideas on the topic. It can help pressure-test my thinking and also helps me get over the initial creative-writing hump.
Another way I use it is the Advanced Data Analysis tool inside ChatGPT - if you're not familiar with it, you can upload some spreadsheet data and ask it to run different analyses; it writes code in the background that performs the analysis and returns results. So rather than sitting at a spreadsheet and doing it myself, I find it's a big productivity boost to use this tool to get questions answered or produce an analysis I want to use for something.
The writing use cases are the ones that I use the most. I used it just this week to create a few more interview questions on a topic that I was trying to make sure I was thinking about from a few different angles. Kind of a simple generative task but again, saved me a few minutes and got my thinking pushed beyond a few boundaries.
A lot of folks here are developers, who want to build on the technology. Some of these folks might want to work for OpenAI. We saw you posted on LinkedIn recently that you’re hiring. What is OpenAI hiring for these days and what is the interview process like at OpenAI? Do you have any advice on how to prepare for the application and interview process?
I am hiring specifically for folks on my team, which has two main components. We have a small sales team that's growing, though I don't really think of selling to startups the same way you might think about selling to an enterprise; it's more about how we can support startups to get the most out of this technology. We're also hiring for venture capital partnerships - building relationships with venture funds and trying to create a set of resources their portfolio companies can access from OpenAI, with more scale than some of our 1:1 engagements.
OpenAI is hiring across all sorts of roles and functions - mostly in SF, but elsewhere too. The interview process can look a little different depending on the role. Most involve a "project" in the early stages of the process to see folks' work and thinking and how they apply their background to the topics critical to the role they're applying for. We do onsite, panel-type interviews as well. Nothing too unusual in the hiring process except the project component; it's something you can usually do in a few hours, and it comes fairly early in the process.
As for preparing, the most critical thing is to do your homework on AI, particularly how AI is being used in its various forms. If you're applying for roles on the API side of the business, you should really understand our API inside and out: design patterns, customer use cases, common external tools that folks use, etc. The most common "failure" in interviews is when that fundamental understanding isn't there. In a place that's moving so quickly - trying to hire quickly and execute on a number of fronts at a relentless pace - being able to hit the ground running with solid core fundamentals is really important. I suspect the folks on this call are deeply interested and invested in AI in a lot of ways, but this is what I'd really focus on. Beyond that, there are certain core elements we look for in everyone we hire. This is very much a fast-paced, high-growth "startup" environment. We're looking for people who are scrappy builders, who can get things done amid significant ambiguity, who can keep up with the pace of releases and process changes, and who have an eye toward scale - show that you've helped take something to scale. We're a really small team given the attention on the company and the number of customers/users, so we think a lot about leverage and scale: how we can do the most and have the greatest impact with a pretty small but growing organization.
Given your role, can you share how OpenAI works with startups and what are the most suitable startups for OpenAI to partner with?
We are trying to work with startups along a few different dimensions. One of the core initiatives we have now is basic support and technical advisory for startups. Two things. Support, I think of as: something's wrong - I'm using OpenAI, I've hit a blocker, and I need help now. We have a pretty small support team with pretty long queues and tickets, doing its best to serve a pretty large population; how can we improve that for startups specifically and create different channels for support to be elevated and for responses to be more real-time? On the more proactive side, some of the more sophisticated teams that are farther along in their journey want real, actionable advice on how to get the most performance out of the models and how to architect around their use cases. We get lots of questions around fine-tuning - how to do it most effectively, how to prepare datasets most effectively; retrieval-augmented generation - which tools are right on the data-pipeline side so that retrieval serves up the best context and has the best chance of producing great results; how to optimize for cost; and how to distill from one model to a cheaper model.
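On the dataset-preparation question Marc mentions, a common pattern for chat-model fine-tuning is to serialize training examples as JSONL, one example per line, each with a `messages` array of system/user/assistant turns. A minimal sketch - the example content and `train.jsonl` filename are hypothetical:

```python
import json

# Each training example is one JSON object with a "messages" list.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a concise support assistant."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Open Settings > Account > Reset password."},
    ]},
    {"messages": [
        {"role": "system", "content": "You are a concise support assistant."},
        {"role": "user", "content": "Where can I download my invoice?"},
        {"role": "assistant", "content": "Invoices are under Billing > History."},
    ]},
]

def write_jsonl(examples, path):
    """Write one JSON-encoded training example per line."""
    with open(path, "w") as f:
        for ex in examples:
            f.write(json.dumps(ex) + "\n")

write_jsonl(examples, "train.jsonl")
```

The resulting file would then be uploaded to the fine-tuning API; quality and consistency of these examples tend to matter more than sheer volume.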
We are also starting to do more events, and some of these are more educational in nature. We're trying to share what's recently come out, deep-dive into new product features and releases and how people are using them, and have some Q&A as part of that. Some of these events also exist just so there can be some human contact with OpenAI. Being such a tiny organization with a pretty small customer-facing team, the most frequent complaint I hear from startups is that they don't know anybody here and it feels like a bit of a black box: when they have a support question, or a proposal to partner with us on something, it's hard to get it routed to a human who has the time and will pick up the phone. So we're trying to improve on that. Obviously we'll be expanding the team, and our capability there will go up. I think events are a way to have a little more regular contact with startups in our ecosystem and keep a channel open for those kinds of contacts.
We’ve talked a bit about startups and you’ve been a founder yourself so you can relate to how expensive products can be. Can you share some more about pricing, custom work, tips for startups to keep costs lower in the beginning, especially if they’re bootstrapped?
One of the things I see a lot of startups do to manage costs is experiment with the different models we offer. GPT-4 Turbo is our most capable model and also the more expensive one; GPT-3.5 Turbo is a smaller, faster, cheaper model with a little less capability. A lot of startups try to route specific use cases to specific models based on what they're trying to do: they'll use GPT-4 as a starting point for text-based use cases to see if what they're trying to do is even possible. GPT-4, being very capable, can do a lot and has more advanced reasoning capabilities. However, the cost difference between the two models is an order of magnitude, so many founders realize they really don't need GPT-4 for a lot of their use cases and can save money by using GPT-3.5 Turbo. Maybe not in the earliest stages, but a lot of companies using GPT-4 are finding ways to fine-tune GPT-3.5 and get fantastic results. You can fine-tune through our API and serve the model through our API, and those costs are still lower than using GPT-4.
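The routing idea described above - sending each use case to the cheapest model that can handle it - can be sketched as a simple dispatch function. This is an illustrative heuristic only (a real product would route based on measured output quality, not keywords), and the hint list is hypothetical:

```python
CHEAP_MODEL = "gpt-3.5-turbo"   # faster, roughly an order of magnitude cheaper
CAPABLE_MODEL = "gpt-4-turbo"   # stronger reasoning, higher per-token cost

# Hypothetical markers of tasks that need heavier reasoning.
COMPLEX_HINTS = ("analyze", "plan", "multi-step", "legal", "compare")

def pick_model(task: str) -> str:
    """Route a request to the cheapest model expected to handle it."""
    lowered = task.lower()
    if any(hint in lowered for hint in COMPLEX_HINTS):
        return CAPABLE_MODEL
    return CHEAP_MODEL
```

In practice, teams often start every use case on the capable model, confirm it works, then test whether the cheaper (or fine-tuned) model matches quality before switching the route.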
On the upper end of the spectrum, as companies have more and more steady usage, more options open up; we have a concept today called "reserved capacity." For companies that are seeing a lot of throughput, have steady usage, have some semblance of product-market fit, and are trying to optimize costs, reserved capacity is a dedicated compute option we offer. It's essentially compute dedicated to a specific company; you can scale it up as you need. It obviously requires a financial commitment, but particularly for GPT-3.5 Turbo there can be cost savings with reserved capacity if you have heavy, steady utilization. Soon we'll be launching another version that will be more self-serve in nature, and I think that'll be an option for companies to manage costs when they have heavier, steadier usage.
You mentioned that you’ll be holding more events and more educational programming to share information. How do we find these programs, training or workshops to help us get started using the OpenAI API?
Up to now, we've done most of our programming through our relationships with venture funds. We've hosted a few events and we're planning to do more for their portfolio or prospective portfolio companies.
We've also sponsored or co-hosted a few hackathons, mostly in the SF area. We have one coming up with South Park Commons, and we're in the planning stages with a venture fund for one that will have an open application. I expect we'll be doing more in the coming months as we set up the infrastructure - a combination of open-to-the-public events and ones specific to venture fund portfolio companies, YC, and other key partners in the ecosystem.
Let’s talk about the developer landscape. Can you talk about segmentation and rising verticals that you’re seeing?
I think the highest level of segmentation is this: there are model providers/developers - I'd put OpenAI in this category. There are tooling providers of various sorts that support model developers or the folks building applications on top of them - fine-tuning tools, etc. And then there's the application layer, where a lot of companies are building using these technologies. Within the application layer, we've seen a lot of activity in vertical-specific applications. There seem to be many companies trying to apply AI in the healthcare context - to improve the delivery of care, documentation, and the productivity of doctors and medical providers, as an example. Similarly in the legal space, and we're seeing examples in tax, compliance, and financial services. These are interesting areas because there are very specific needs and usually some unique, often proprietary, datasets that may be hard to access but can be incorporated into better domain-specific applications. I'd say another category is helping enterprises make use of the data they already have in various forms. Some enterprises are trying to do this themselves, using our technology directly. We're also seeing a lot of startups trying to help enterprises search across their proprietary data, or process really large volumes of unstructured and structured information in a way their other products and internal tools can make sense of. So those are a couple of categories we've seen a lot of. I'm really excited about some of the multimodal capabilities we've launched - I think that's going to unlock a lot of new use cases - but those are some we're seeing today.
Do you see any themes in terms of what new startups are building with the API? Which of them do you think will sustain, and which might vanish as the foundation models get better and better?
That's a hard question to answer. If you rewind over the past year, I think some of the companies building with AI that have had a tougher time differentiating were those that created a chatbot experience that wasn't very differentiated. We saw this more in companies that had an existing, non-AI-native product and were trying to find ways to incorporate AI, so there was a flurry of chatbots attached to existing products. Some of those work, make sense, and actually add value; for others it didn't feel like a significant improvement to the product experience. It's hard to predict the future given the pace of change is so significant in each of these modalities, and there have been so many announcements and so much innovation. For startups building in this space, in some regard you have to build a great product, whether it's AI-powered or not, and those fundamentals really don't change. In other cases, startups are building a peripheral or additional feature that may become obsolete when the next round of innovation comes out from the folks building models or building on top of them. So in some regard there's nothing new here; it's just a different set of innovations built around AI, but you still have to build a great product or a great experience, solve customer and user needs, and find some sort of product-market fit. On the other hand, there are some point solutions that don't stand alone as businesses, and those are a little harder to predict.
Putting your startup hat back on, is there a startup that you wish was out there?
It's interesting - I see a lot of companies trying to use this technology, and some of them struggle with the data pipeline. Think of retrieval-augmented generation: I've got some proprietary data or a set of information, and I want to serve up the right pieces of that information to answer user queries, or inject them into a prompt so the LLM has the right context to produce a useful generation. There's a lot of experimentation there, and a lot of knobs to tweak in how you do the data ingestion and pipeline to make that retrieval process go well. So I think there are opportunities in helping other users of LLMs get the most out of that and improve their own data pipelines. It's remarkable how different the outcomes can be when the data pipeline is great versus just OK, and it makes a real difference to the product experience. That might be one, but there are so many; with the new modalities - audio, image generation, vision models - the capability set has expanded so significantly that there are probably many more use cases that are difficult to imagine until someone experiments and creates them.
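The retrieval step described above can be sketched in a few lines. This is a toy example that ranks documents by word overlap with the query and injects the top matches into the prompt as context; a production pipeline would use embeddings and a vector store, and all the names and sample documents here are illustrative:

```python
def score(query: str, doc: str) -> int:
    """Toy relevance score: count of words shared between query and document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def build_prompt(query: str, docs: list[str], top_k: int = 2) -> str:
    """Rank documents by relevance and prepend the best ones as context."""
    ranked = sorted(docs, key=lambda d: score(query, d), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Refunds are processed within 5 business days.",
    "Our office is closed on public holidays.",
    "Refund requests must include the order number.",
]
prompt = build_prompt("how do refunds work", docs)
```

The quality knobs Marc alludes to live mostly outside this function: how documents are chunked, cleaned, and embedded before retrieval ever runs, which is where small pipeline differences produce very different product experiences.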
On a recent YC/Lightcone podcast, they posited that the startups riding this AI wave best will be those that build LLM functionality into their platforms for specific problems, as opposed to something general (which may get displaced by GPT-5). What do you think are the most tactical opportunities worth tackling in the long term?
That’s a hard question to answer. I’m of the belief that even if OpenAI creates something in a space like that in the future, I believe that there’s usually room for multiple big companies/winners in almost every category. There are some counterexamples out there, but I think that’s generally the case. I don’t feel comfortable saying “don’t do this category and go do this other category,” just given that belief and the pace of change is so fast that something that’s built today may be totally irrelevant to both you and OpenAI in six months. It’s a really hard question to answer in a general case. In a specific case sometimes we might have a stronger opinion knowing what is on the roadmap or things that we’re working on building. This is probably not a satisfactory answer but it's really hard to answer in the general case.
What are the best courses to learn about AI?
My top three are: 1. Andrej Karpathy’s YouTube Channel - Andrej is a really great instructor and has a lot of great content on his channel; 2. deeplearning.ai has a lot of great resources as well; 3. OpenAI’s documentation is also pretty good. There’s also a cookbook: cookbook.openai.com that you may find useful.
What didn’t we cover that you want everyone to know?
If you're building a startup and we can be helpful, please reach out - we're here to support you. The best thing you can do for me is share your feedback: if you have issues or hit friction, I want to know about it so we can keep moving the organization forward.
Thank you, Marc! You can find other inspiring conversations on Xoogler's YouTube Channel. Xooglers and Googlers, sign up for other events via our Xoogler newsletter, and sign up on Xoogler.co to be part of the 32,000+ Xoogler community! Thank you Key.ai for running the session with Marc Manara.