Build an AI-Assistant in under 10 minutes with Hugging Face

AI agents are the next big thing in the AI scene. Imagine fully self-driving cars, bots doing your taxes, and AI tutors teaching your kids. I've recently dedicated seven (1, 2, 3, 4, 5, 6, jeez 7) full-fledged articles to the subject, and it never ceases to be interesting.

“Hey Jarvis! Walk the dog please, book a trip for next weekend, and while you’re at it, go double my income”.

But let’s be real. Agents — and I mean fully autonomous agents — are far from being solved. That’s because agency involves planning, world models, and resource management — skills Machine Learning sucks at.

It’s so hard to build autonomous agents that companies are either changing the definition of “agent” or trying to hack their way around it.

Some stitched together two clunky LLMs and pretended they figured out agency (Hi AutoGPT). Others shared fake demos boasting about half-baked capabilities (hello Devin, helloooo 43 other assistant platforms).

It’s not all hype and empty words though. A few actors are trying to get close, giving us a glimpse of what agents could look like — and this brings us to AI assistants (howdy Crew.AI, EMA, Autodev).

Picture AI assistants as the primitive ancestors of AI agents. They’re often custom versions of your favorite LLM equipped with extra tools like image generation and web browsing.


Subscribe to the TechTonic Shifts newsletter

The idea is to combine prompting and function calling to build specialized tools. Some of the applications include:

  • Talking to data: Interact with the content of websites, documents, and datasets using natural language.
  • Personal productivity: Organize your work based on your personal preferences and objectives.
  • Code generation: Ground your model into a specific domain like data engineering and a specific language like Python.
  • Creative work: Set specific parameters to generate assets based on a particular style.

And yes, you can apply a classic LLM to each of the previous use cases, but you'll have to write lengthy prompts and a bunch of complementary follow-ups for every task.

But with AI assistants you can skip a significant chunk of prompting. Context and repetitive instructions can go inside your assistant’s prompt, which means you only have to write them once.

In other words, AI assistants are an efficient way to store your favorite prompts. And every time you create one, it's as if you've recruited a technically gifted intern ready to tackle any task.

Okay cool, how do you build an AI assistant?

You need two things: an idea and a good dose of prompt engineering.

There are also parameters to tweak and functions to enable. But worry not, we’ll go through each of the four steps required to build an assistant.

We’ll use HuggingChat because it allows you to pick from several open-source models. Their elegant interface covers state-of-the-art capabilities like multi-step RAG, dynamic prompts, and controlled web browsing. The best part? You get all of this for the attractive price of $0.

Alright, let’s dive right in.

0. Open the assistant tab

If you don’t have a HuggingFace account, you can create one in a few clicks. From there, log into the HuggingChat interface. On the bottom left side, you’ll find an “Assistants” button. Click it. Then “Create new assistant.”

As soon as the new chat window opens, the real fun starts.

1. Set up the assistant prompt:

There are two main sections on the “Create new assistant” page.

  • The Instructions or “System Prompt” (right side).
  • The parameters (left side).

We’ll start on the right side with the instructions.

System Prompt is where you break down your assistant’s concept into a detailed list of instructions. There are many prompting techniques you can pick from to write your system prompt — like role prompting, placeholders, meta-prompting, and more.

Below you’ll find a template I put together to help you get started with your assistant’s prompt. Make sure to update [the content inside brackets] with instructions of your choosing. Also, feel free to delete sections and add new ones.

The prompt is yours now.

# [AI Assistant Name] The [Specialization Descriptor] Wizard
{**Comment:** This template uses third-person instructions because they tend to perform better than first-person instructions with many models.
In other words, "They/She/He will do XYZ" is usually more effective than "You will do XYZ."}

## Overview
- **Name:** [AI Assistant Name]
- **Specialization:** [AI Assistant Name] is a [Role: Data Scientists / Software Engineer / Marketing Expert / Product Owner] who specializes in [NLP/ Python / B2B Marketing / Agile User Stories]
- **Style:** Communicates with [Communication Style Adjectives] language, characterized by [Specific Communication Features]. Uses terms like "[Specific Expressions]" to engage with the user.
- **Reasoning:** Employs a step-by-step reasoning approach to provide comprehensive and accurate answers.

## Capabilities

### Core Functions
- **Task Analysis:** Breaks down the user's goals and tasks by asking [Specific Number: like 2] probing questions.
- **Solution Crafting:** Provides solutions that are simple, concrete, and tailored to user needs.
- **Feedback Integration:** Incorporates user feedback to refine outputs continuously.

### Specific Functions
- Depending on the area of expertise, this section can include capabilities like:
  - **Email Management:** Composes clear and well-articulated emails and uses bullet points and line breaks for more clarity.
  - **Code Generation:** Writes efficient, commented, and bug-free code based on user specifications.
  - **Data Analysis:** Performs data interpretation and visualization using Python [Desired Version such as 3.11.5] code.
  - [Add or remove capabilities based on the specific assistant]

### Communication and Input Handling
- **Clear Communication:** Ensures all communications with the user are clear, practical, and easy to understand.
- **Educational Interaction:** Provides explanations [Add More Specifics Here: external references and web links] to enhance user understanding of the process.
- **Technical Discussion:** When necessary, discusses technical terms such as "EDA," "data privacy standards," and "User Stories."

### Adaptation and Learning
- Uses information from previous interactions to improve response accuracy.
- Adapts to new inputs by the user such as pasted text, code snippets, and web links.
- Asks the user detailed questions to improve its understanding of the task at hand.

## Output Format
- Always starts with step-by-step reasoning, then provides the answer.
- Uses line breaks, bold text, and bullet points to ensure the output is clear and easy to read.

## [Additional Instructions]
- **Example #1:** [Content of Example #1]
- **Example #2:** [Content of Example #2]
  - [Sub-example #1]
  - [Sub-example #2]        

Quick additional tips:

  • Start with short instructions, then expand them to include specifics.
  • Keep your instructions organized in separate sections to make them easy to edit.
  • For more flexibility, use placeholders <like_this>. That way you can change the value of your placeholders instead of manually editing the prompt.

Here’s an illustration of how useful <placeholders> can be:

[Example of placeholder usage]

**Static instruction:**
Add comments to explain each line in the following SQL script.

-----------------------------------------------------------------------------------------------------------

**Dynamic instruction:**
Add comments to explain each line in the following <programming_language> script.
<programming_language> = Python

-----------------------------------------------------------------------------------------------------------
{**Comment:** With the dynamic instruction, you can pick any programming language you want without changing the body of the instruction.}        

2. Set up the parameters:

Once you finish crafting your system prompt, you can move to the left side of the HuggingChat interface. You’ll find two batches of parameters.

  • Generic parameters like name and description.
  • Information access parameters like web browsing.

In this step, we’ll fill in the first batch, the generic parameters. They’re straightforward, and you can extract most of their values from your system prompt.

Here are the generic parameters and how to fill them:

  • Avatar: Generate an avatar using your favorite image generation model.
  • Name: Reintroduce the name of your AI assistant.
  • Description: Summarize what your AI assistant does in one sentence.
  • Model: Select an LLM from a list of available models on HuggingChat — including “Llama-3-70b,” “Mixtral-8x7B,” “c4ai-command-r-plus,” “zephyr-orpo-141b-A35b-v0.1,” and more.
  • User start messages: Add simple prompts to quickstart your typical use case or encourage the user to interact with your AI assistant. User start messages will appear next to your AI assistant as clickable buttons.

3. Internet access and dynamic prompting:

This step is about the second batch of parameters. This is where you tell your HuggingChat assistant how to access external information.

There are two options at your disposal: you can pick one or both of them. Either way, you have some setup to do.

[A] Internet access:

When it comes to web browsing, you have four options covering different use cases. Here’s a quick rundown of each one:

1. Default: The assistant will not use the internet for information retrieval and will respond faster. Recommended for most assistants.
2. Web search: The assistant performs a web search on each user request.
3. Domains search: The assistant limits its searches to a list of web domains you specify.
4. Specific links: The assistant retrieves information only from the URLs you provide.

In short, you can decide to open your assistant to the web in general or point it towards specific spots — be it web domains or URL links.

For example, Bernard, my Prompt Engineering AI assistant, uses “Specific links” under the “Internet access” option to retrieve documents I stored on my HuggingFace account. These documents serve as references to answer user questions.

If you use the “Specific links” feature, make sure to submit your links in the required format. Here’s what it looks like:

**Format:** url#1,url#2,url#3
**Example:** https://huggingface.co/datasets/nabilalouani/prompt_engineering_techniques_ai_assistant/blob/main/basics_of_prompting_and_placeholders.docx,https://huggingface.co/datasets/nabilalouani/prompt_engineering_techniques_ai_assistant/blob/main/prompting_methodology.docx,https://huggingface.co/datasets/nabilalouani/prompt_engineering_techniques_ai_assistant/blob/main/zero-shot_and_few-shot_prompting.docx        

Good to know: When pointing to documents, your HuggingChat assistant will use multi-step RAG to select relevant documents based on sentence similarity. It’ll extract information from them and use it to answer user questions.
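If you’re curious what that looks like under the hood, here’s a minimal Python sketch of similarity-based document selection using the sentence-transformers library. It’s a general illustration of the idea, not HuggingChat’s actual pipeline, and the model name and documents are placeholders I picked for the example.

```python
from sentence_transformers import SentenceTransformer, util

# Any sentence-embedding model works; this one is a common lightweight choice.
model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "Zero-shot prompting asks the model to perform a task without any examples.",
    "Few-shot prompting includes a handful of worked examples in the prompt.",
    "Placeholders let you swap values without rewriting the whole instruction.",
]
query = "How do I use examples in my prompt?"

# Embed the documents and the query, then rank documents by cosine similarity.
doc_embeddings = model.encode(documents, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_embedding, doc_embeddings)[0]

# The top-scoring document gets passed to the LLM as context for the answer.
best_doc = documents[int(scores.argmax())]
print(best_doc)
```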

Now let’s explore the last feature.

[B] Dynamic Prompt:

Allow the use of template variables {{url=https://example.com/path}} to insert dynamic content into your prompt by making GET requests to specified URLs on each inference.

In plain English, you can update the content of your system prompt without having to edit your assistant. The key is to store “dynamic variables” online in the form of a document.

Every time you run your assistant, it will retrieve the latest version of the document and include it in the system prompt.

For example, I built an AI assistant that suggests coworking places around Paris based on which part of the city I happen to be in. I want my assistant to pick from a list I’d already put together. But I keep updating the list.

That’s why I used the “Dynamic Prompt” feature. It ensures my assistant uses the latest version of my tiny dataset. Plus, there’s no need for any edits on the assistant side.

Here’s the format for a “Dynamic Prompt” input:

**Format:** {{url=url_example}}
**Example:** {{url=https://huggingface.co/datasets/nabilalouani/ai_assistants_dynamic_inputs/blob/main/coworking_coffeeshops_Paris_sample.csv}}        
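To make the mechanics concrete, here’s a rough Python sketch of how a {{url=...}} template variable could be resolved: find each variable, fetch the URL with a GET request, and substitute the response into the prompt. This is my own approximation of the behavior, not HuggingChat’s source code, and the URL below is a made-up placeholder.

```python
import re
import requests

def resolve_dynamic_prompt(prompt: str) -> str:
    """Replace each {{url=...}} template variable with the content
    fetched from that URL via a GET request."""
    def fetch(match: re.Match) -> str:
        response = requests.get(match.group(1), timeout=10)
        response.raise_for_status()
        return response.text

    return re.sub(r"\{\{url=(.+?)\}\}", fetch, prompt)

# Resolved on every run, so the assistant always sees the latest
# version of the online document.
system_prompt = (
    "Pick a coworking spot from this list:\n"
    "{{url=https://example.com/coworking_sample.csv}}"  # hypothetical URL
)
print(resolve_dynamic_prompt(system_prompt))
```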

Examples of AI assistants

Now that you have the gist of AI assistants, let’s explore three examples. Like any other HuggingChat assistant, the prompts and parameters are open and you can check them directly through the interface.

Bernard:

A Prompt Engineering expert in Large Language Models always ready to help.


Bernard uses nine different files to retrieve prompting techniques and examples. He’s already helped 1,000+ users refine their prompts.

Mr. Coze:

The Frenchie AI companion, guiding you to the coziest workspaces in Paris.


Mr. Coze uses a “Dynamic Prompt” file stored online to help you select a cozy coworking place in Paris. Be careful though: Mr. Coze is pretty clumsy with travel time, so make sure to double-check his suggestions.

eMay:

An elegant AI assistant who writes your emails for you.


eMay relies on a detailed prompt to write emails that sound like you. She drafts responses that align with your expertise, role, and unique writing style. The more you tell her about yourself, the better her output.

Why should you care now, of all times?

Because the gap between closed-source models and open-source models is closing — and it’s closing fast. Most of the fancy stuff offered by ChatGPT and Claude is now available for free on HuggingChat.

Meta, Cohere, Mistral, and other companies started rolling out open-weight models that can compete with top-tier proprietary models. You can use these models locally or via API — and you can use them for both personal and commercial purposes.
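As a quick taste, here’s a minimal sketch of calling an open-weight model through the huggingface_hub Python client. I’m assuming the model is reachable through the hosted Inference API; gated models may require an access token, and availability varies by provider.

```python
from huggingface_hub import InferenceClient

# Assumption: the model is served by the hosted Inference API.
# Gated models may need an access token passed via the `token` argument.
client = InferenceClient("meta-llama/Meta-Llama-3-70B-Instruct")

response = client.chat_completion(
    messages=[{"role": "user", "content": "In one sentence, what is an AI assistant?"}],
    max_tokens=100,
)
print(response.choices[0].message.content)
```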

Open source is a win-win deal.

You get fancy toys to play with, and the companies behind these toys get ideas on what they can build with them. Everybody wins, except for those who opted for a closed-source-for-maximum-profit approach (Hi Sam!).

As adoption and demand grow, open-source leaders like HuggingFace will provide more features, like tool usage, connectivity to apps, longer context windows, and improved RAG.

Every new function will allow you to build better AI tools. Over time, your AI assistants will move from a repository of your most-used prompts to an agent-like system. The keyword here is “like.”

I’m still pessimistic about agents because I’m not seeing any significant improvements on the world-model side. But I look forward to being surprised. Perhaps a new architecture? Maybe a stack of clever hacks? Anything that actually works is fine by me.

In the meantime, there’s a ton of productivity to unleash with AI assistants, and I suggest you try them today. Actually, try them now. Right now.


Subscribe to the TechTonic Shifts newsletter

Well, that's it for now. If you like my article, subscribe to my newsletter or connect with me. LinkedIn rewards your likes by showing my articles to more readers.

Signing off - Marco

