Implementing AI: A Product Perspective

Implementing AI: A Product Perspective

Recently, I had the chance to work with my team on strategically implementing AI into our product to solve key customer problems. The journey was challenging; we found ourselves questioning our assumptions, constantly evaluating the value of our solution, and battling imposter syndrome. With three impending AI releases around the corner for our product, I want to share some key insights my team and I uncovered about implementing AI, which might inspire your own product team's AI initiatives.

Its all about Solving Customer Problems?

At the heart of AI implementation is the focus on solving customer problems and driving value. As with any initiative that your product team considers working on, AI implementation in your product starts with identifying the customer problems that are valuable to solve. The popularity of LLMs and their built-in tools to help analyze data has made a lot of problems that were previously expensive to solve, much cheaper to solve. It is therefore a good time to revisit your backlog iceboxes and see which they have ideas or customer requests that you deemed too expensive to pursue previously.

What’s the best way to determine if an AI integration can potentially help you easily solve a complex customer problem? If the problem involves text/image generation or data analysis on a large & complex chunk of data, there’s a good chance of AI being helpful. You could also just ask an LLM, preferably one that’s up-to-date on emerging trends, about ways in which AI can help. Lastly, if one or more of your competitors have figured out a great way to integrate AI into their product, chances are that you can too. Maybe you could learn from their implementation and make your own solution even better than theirs. It all depends on your team’s the quality of problem discovery and understanding of the customer problem.

As with any new feature or enhancement, you will need to identify the inherent user need and reduce the 4 key risks (value, usability, viability, feasibility) in order to make a great solution that solves the customer problem and works for your business.

Understanding the types of AI implementations

Assuming that you don’t have the budget to hire a few AI engineers to build your own language/imaging model, you will need to utilize an existing LLM like GPT or Gemini and decide between 3 ways of AI implementations:

1.??????? Prompt Design

2.??????? Model Fine-tuning

3.??????? Retrieval Augmented Generation (RAG)

Prompt Design

Prompt design is the most recognizable form of AI implementation. It is what you’re thinking about when you imagine implementing an LLM integration to solve a problem in your product. By definition, Prompt Design is the process of crafting inputs (prompts) that guide the LLM to generate desired outputs. It's akin to asking the right questions to get the best possible answers. As you can imagine, effective prompt design is crucial because the quality and relevance of the model's responses heavily depend on how questions or commands are structured.

Prompt Design, in my opinion is best when you need an LLM to do some standard tasks on a variety of user inputs. A great use case for prompt design is summarizing text input in a specific tone or format suited to the environment.

What’s the best way to design a prompt? The best way to think about it is cycling through 3 steps:

?

Iterating through a few cycles of the above three steps as well as identifying any edge cases will bring you closer to the right prompt. There’s also a couple of ways you can pass the prompt to an LLM when using APIs:

1.? Having pre-defined context that the LLM refers to for each API calls (this reduces the amount of tokens you pass in every API call)

2.? Passing in the whole prompt with every API call (this works well when your prompt isn’t too long)

There are also a variety of tools available online that can help you design a scalable prompt for your use case. Below are some I found helpful

1.??????? Google Cloud guide to Prompt Design

2.??????? Prompt Engineering Guide

3.??????? A comprehensive guide by Daniel Ramos

Model Fine Tuning

Model fine-tuning is a more advanced AI implementation that involves adjusting a pre-trained LLM to better suit specific tasks or understand particular domains. Model tuning narrows the scope of an LLM to a specialized task and makes the LLM ‘tuned’ to become an expert at doing the task based on given example data. Model tuning is a great option when there’s a need for the LLM to generate responses that are not just accurate in general terms but also aligned with specialized practices or standards.

As the definition and example above illustrate, there are two key considerations when going down the road of model tuning:

1. Do you have enough example data to train the LLM and narrow its scope to perform specific tasks?

2. Are you willing to spend an upfront amount for computational resources to do the model tuning?

If the answer to both of the above questions is yes, then model tuning could be a good option for your use case.

All of the popular LLMs including GPT and Google’s Vertex AI have documentation on how to tune their available LLMs. The process usually involves preparing a dataset of input and output example data in JSON format and creating a ‘tuning job’ to help tune the LLM to the specific task. It’s important to provide examples of edge cases and user misbehavior (deliberate improper inputs from users) in the tuning data to set the boundaries and handle different kinds of user inputs. If you have the data available as plain text and want to convert it into a JSON file fit for running a tuning job, you can actually just ask an LLM to do that for you. Make sure to verify what the LLM creates before starting? the tuning job though as they can get expensive to run.

Retrieval Augmented Generation (RAG)

If your AI use cases requires outputs that are both relevant and always up-to-date, RAG might be the right implementation for you. Simply put, RAG helps pull in the most recent and relevant information from a massive pool of external data sources to answer user queries. This is super helpful when you want your product to not just be smart, but also up-to-date with the latest happenings.

RAG is also more cost effective to implement than Model tuning as it does not require running jobs that may cost hundreds if not thousands of dollars. You will likely need some dev help in implementing RAG as it requires your chosen LLM to have access to up-to-date sources. AWS recently launched Amazon Bedrock, a cloud service using which you can implement RAG. If your organization uses AWS already, you can probably setup an account and try it for yourself.?

Becoming an expert at the subject matter

If you have a few AI engineers on your payroll, or if you’re trying to figure out your AI integration with your current team, you will likely need to become experts at the subject matter of your use case. When you’ve determined that AI can potentially offer a solution to a complex customer problem, I recommend that you look at a few things before solutioning:

1.?Have your customers considered utilizing AI models like GPT or Gemini outside your platform to solve the problem? If they have and if they’ve been successful, it’s great to understand what worked for them and what didn’t.

2.??Is there any research or studies around what humans recommend as best practices on solving the problem? There’s a high likelihood that having your AI model draw from these best practices will make your solution better.

3.??What are some things you can learn from your competitors? Did they recently release an AI implementation of their own? What did customers have to say about it?

4.??What is the quantity and quality of data you have access to in order to help train or tune your AI model? Most organizations have years of data they are sitting on that can help make their AI implementation great.

Considering some of the above will make you and your team better informed and prepared to go ahead with an AI implementation.

?

As we wrap up this conversation on integrating AI into our products, it's clear that the journey is as challenging as it is rewarding. Remember, diving into AI isn't just about tapping into the latest tech trend; it's about genuinely enriching your user's experience and solving real problems that make a difference in their daily lives.

Through this journey, my team and I have learned that the beauty of AI doesn't just lie in its power to analyze and automate, but in its potential to bring our products closer to the needs of our users, making every interaction more meaningful and every solution more impactful.?

Whether it’s crafting the perfect prompt, fine-tuning a model to exceptional precision, or leveraging the latest in RAG to keep our outputs sharp and relevant, the path to effective AI implementation is paved with continuous learning and adaptation.

So, if you’re considering AI for your product, start small but think big. Experiment, iterate, and learn from each step. Engage with your peers, share your challenges, and celebrate your victories. The road might be complex, but the future it promises is exciting and full of opportunities.

?

Piyush Singh

Driving AI, GenAI, Automation, and SAS/R in Healthcare & Life Sciences | Ready to Offer Insights and Expertise

11 个月

Great perspective Swapnil! Thanks for sharing!

Gib Olander

Chief Product Officer | Strategic Direction | Software Development

11 个月

Good stuff, thanks for sharing. I really like your problem, prompt, test cycle.

要查看或添加评论,请登录

Swapnil Tandon的更多文章

  • ╰┈?Defining the Solution

    ╰┈?Defining the Solution

    Building on my previous blog post about defining the problem and market need, I’d like to delve into the next crucial…

  • Defining the Problem and Market Need for Your Product

    Defining the Problem and Market Need for Your Product

    This past year I've had the opportunity to again work on the conception of a brand new product. It's always fascinating…

    1 条评论
  • My thoughts on Product Changes at Airbnb

    My thoughts on Product Changes at Airbnb

    Now that the dust has settled on Airbnb founder Brian Chesky’s controversial opinions on product management and the…

    1 条评论
  • Why Scope*Quality = Time

    Why Scope*Quality = Time

    Towards the end of last year, I was in a meeting between the executive team and product managers where we were…

  • The 'Zero Interest Rate' Product Manager

    The 'Zero Interest Rate' Product Manager

    As I was relaxing in some good weather last weekend, I had the urge to write about a phenomenon that we are starting to…

    1 条评论
  • "So, what does a Product Manager do anyway?"

    "So, what does a Product Manager do anyway?"

    In this article I want to explore the roles and responsibilities of a product manager in today's technological…

    3 条评论

社区洞察

其他会员也浏览了