Experimenting with AI

I wrote my first AI app yesterday to solve a real problem I deal with on a daily basis––coming up with meal ideas for my kids. It took me 4-5 hours in total (the bare-bones version came together in minutes, but there's a lot of nuance here) with a total cost of $2 between prompts, embeddings and other testing. The code is still a bit in flux, but I will consider sharing it on GitHub in the coming days.

Background

Like many others in my network, I've been intrigued by the developments in AI and have been rapidly trying to follow along as the space changes. I've allocated time every day to reading papers, blogs and tutorials, listening to podcasts and experimenting with product ideas through prompts and code. For me, learning is exciting, but putting that knowledge to use as quickly as possible is where I find real joy.

The most distinctive aspect of the current AI space is how quickly it's evolving and how nuanced the interaction between you and the model is. The beauty of playground environments like ChatGPT is that the outputs are often good enough to satisfy my needs, leaving me without the motivation to code. For weeks, I struggled to find the demarcation point between my contributions and those of the model. Each time I did manage to write code, I'd almost immediately come up against an issue that required further learning, delaying my ability to create.

Yesterday, I committed to writing something, anything. The goal for the project was simple––take all the information I had been learning over the past few weeks and solve a real problem with it.

The App

I have pages of ideas related to cybersecurity and product management, but I ultimately landed on a meal idea application because it was a problem I needed to solve by the evening and it has clear room to grow incrementally. Within minutes, I was able to create a bare-bones application, and after seeing how easy that was, I kept expanding the functionality to take advantage of more advanced concepts.

Here's the breakdown of the functionality:

  • Customization. Options allow for refinement of the meal ideas. Meal type is auto-selected based on time of day. Specific ingredients can be included or excluded from the meal plan.
  • Prompt Templates. The primary prompt leverages an input template and a response template. The response template massages the meal plan into JSON (a sketch follows this list).
  • Memory & Grounding. When specific ingredients are requested, they are enriched through a local database of kid recipe books. This enrichment pulls in context on preparation and other ingredients commonly found in those recipes. The primary goal was to expand the memory of the model while also grounding the meal ideas to be more kid-friendly.
  • Image Generation. Model responses are fed to DALL-E 2 to generate a realistic image of the meal in context (a toddler setting).
  • Simple Output. The response includes a brief summary followed by a list of the ingredients needed.
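
To make that flow concrete, here's a minimal sketch of how the pieces could fit together. This is not the app's actual code: the template wording, the default_meal_type helper and the generate_meal_idea function are illustrative assumptions, and it presumes the pre-1.0 OpenAI Python library and an early LangChain release.

    # Illustrative sketch only -- template text, names and parameters are assumptions,
    # not the real app's code. Assumes openai<1.0 and an early LangChain release.
    import json
    from datetime import datetime

    import openai
    from langchain.chains import LLMChain
    from langchain.llms import OpenAI
    from langchain.prompts import PromptTemplate


    def default_meal_type() -> str:
        """Auto-select the meal type based on the time of day."""
        hour = datetime.now().hour
        if hour < 11:
            return "breakfast"
        if hour < 16:
            return "lunch"
        return "dinner"


    # Input template with slots for the customization options; the trailing instruction
    # acts as the response template that massages the meal plan into JSON.
    MEAL_PROMPT = PromptTemplate(
        input_variables=["meal_type", "include", "exclude"],
        template=(
            "Suggest one kid-friendly {meal_type}. "
            "Try to use these ingredients: {include}. Avoid these: {exclude}. "
            'Respond only with JSON shaped like '
            '{{"name": "...", "summary": "...", "ingredients": ["..."]}}.'
        ),
    )

    chain = LLMChain(llm=OpenAI(temperature=0.7), prompt=MEAL_PROMPT)


    def generate_meal_idea(include: str, exclude: str) -> dict:
        raw = chain.run(meal_type=default_meal_type(), include=include, exclude=exclude)
        meal = json.loads(raw)  # assumes the model honored the response template

        # Feed the result to DALL-E 2 for a realistic image of the meal in a toddler setting.
        image = openai.Image.create(
            prompt=f"Realistic photo of {meal['name']} served on a toddler's plate",
            n=1,
            size="512x512",
        )
        meal["image_url"] = image["data"][0]["url"]
        return meal

In practice you'll want some validation and a retry around the json.loads call, since the model occasionally wanders outside the response template.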

The outputs of the application have exceeded my expectations, though the app could benefit from fine-tuning and a proper RLHF loop. Issues aside, getting this done quickly was only possible because of the tooling released by OpenAI and the broader community.

Here's the technology I used:

  • OpenAI. Leveraged for text completion, embeddings, and image generation.
  • LangChain. Powerful abstraction toolkit to organize prompts, engage with LLMs, reference local vector databases, ingest data and chain together outcomes. Use this library, but don't ignore the work it's doing on your behalf.
  • Streamlit. Quick way to prototype and release applications. For production settings, I tend to prefer my full Flask-based app stack, but Streamlit is amazing for rapid development.
  • Chroma. Used as a specialized "memory" for kid-specific recipes and information. LangChain pulled in my information, split it into documents and then generated embeddings using OpenAI. The data was stored in Chroma and persisted for reference when generating meal ideas with specific ingredients (sketched below).
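
The enrichment flow behind that Memory & Grounding piece looks roughly like the sketch below. Again, this is illustrative rather than the app's actual code: the file name, chunk sizes and example query are made up, and it assumes the same pre-1.0 OpenAI library and early LangChain module layout.

    # Illustrative sketch of the Chroma "memory" -- file names, chunk sizes and the
    # example query are assumptions, not the real app's values.
    from langchain.document_loaders import TextLoader
    from langchain.embeddings import OpenAIEmbeddings
    from langchain.text_splitter import RecursiveCharacterTextSplitter
    from langchain.vectorstores import Chroma

    # Ingest the kid recipe books, split them into documents and embed them with OpenAI.
    docs = TextLoader("kid_recipe_books.txt").load()
    splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
    chunks = splitter.split_documents(docs)

    # Store the embeddings in Chroma and persist them to disk between runs.
    store = Chroma.from_documents(chunks, OpenAIEmbeddings(), persist_directory="./recipe_db")
    store.persist()

    # When the user asks to include an ingredient, pull related recipe context and splice
    # it into the prompt to ground the meal ideas in kid-friendly preparations.
    matches = store.similarity_search("broccoli", k=3)
    grounding = "\n".join(doc.page_content for doc in matches)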

Parting Thoughts and Observations

Thoughts, so many thoughts. My biggest takeaway in making this small project was just how fast I was able to go from idea to implementation. As I reflect on the meals generated and their paired images, I consider the work it would take for me to make a quality solution like this without using AI. It's obviously possible, but the effort required would be considerable and even then, it would probably feel stale rather quickly. There's immense power in these tools and while it can be scary, I am more excited than anything else.

Random trailing thoughts

  • LangChain reminds me of learning a new programming language. They have done a great job of making order from the chaos of AI solutions being introduced. I initially began rolling a lot of my own interactions with the models and quickly replaced those with 1-2 lines of LangChain (a quick before-and-after follows this list). The big factor when using this library is to ensure you still take the time to understand what it's doing and how it works.
  • I didn't hit any token limits on this particular project, but that's what plagued me on other ideas I wanted to execute. Hitting those limits put me on the path of understanding embeddings, vector databases and how the LLM could be leveraged to create concise context in prompts. I originally planned to use Pinecone, but their free tier is waitlisted. Picking Chroma was nice since it runs locally, though it quickly highlighted some of the benefits of Pinecone (a web interface to manage data).
  • Jupyter Notebooks really shine when interacting with all of these tools. It's extremely helpful to have a dynamic way to execute code, inspect variables and immediately adjust, especially when it comes to testing prompt outputs.
  • I really love the Microsoft Copilot moniker. My experience with these tools has shown me that they aren't a replacement for my actions or ideas. When used properly, AI capabilities supercharge your outcomes. A friend recently compared using AI to being a manager––you need to find ways to break down your work into small chunks you can delegate to others who may have less experience. Teach the model and avoid doing the work yourself.
  • Getting to "production grade" outputs still requires today's development approaches. Plumbing AI directly into your application as a passthrough is fine for experimentation, but you will need caching, sanitization, local data references and fine-tuning to get quality and consistency.
  • Current AI is truly bleeding edge. New tools, papers, blogs, podcasts, ideas, etc. are all being released on a daily basis. It's insane how quickly everything is moving. It's tempting to sit back and wait for a solid foundation, but if you can afford the time, I say start now.
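
To give a feel for that replacement, here's a rough before-and-after. The prompt and parameters are made up for illustration, and it again assumes the pre-1.0 OpenAI library and an early LangChain release.

    # Illustrative before/after -- the prompt and parameters are assumptions.
    import openai
    from langchain.llms import OpenAI

    # Hand-rolled: build the request, call the completion endpoint, dig the text out.
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt="Suggest one kid-friendly dinner that uses broccoli.",
        max_tokens=256,
        temperature=0.7,
    )
    idea = response["choices"][0]["text"].strip()

    # LangChain: effectively the same call, with the plumbing handled for you.
    idea = OpenAI(temperature=0.7)("Suggest one kid-friendly dinner that uses broccoli.")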
