Building Yuki, A Level 3 Conversational AI Using Rasa 1.0 And Python For Beginners
Aditya Vivek Thota
Technical Content Creator for Developer Blogs | React, Nextjs, Python, Applied AI
Fascinated by AI-powered chatbots and natural language understanding, but not sure where to start or how to build something in this domain?
You are at the right place. I have got you covered. Read on, and together let us unravel the simplicity of complex topics on our journey to build something marvelous. I recommend a quick read-through of the article first to get the overall idea; bookmark the link, then come back later and follow along with me through all the tasks I mention here. I have simplified the details to the extent that even a middle school student can follow along and learn all the aspects of bot design. All you need is the zeal and passion to build something, a basic understanding of how computers work, and a little bit of programming!
But yes, this tutorial is not only for beginners. Those with considerable experience might also find it helpful to kick-start their development process.
Before getting started, I want you to know my main motivations for writing this one:
- To give a different perspective on what building something means
- To help self-learners who are new to this subject feel how easily something like this can be learned
- And finally, to highlight what the future holds: it's gonna be bots everywhere.
Please note that I am writing this tutorial for a Windows 10 PC. However, a similar approach should work in Linux with very few modifications. If at any point in the article, any keyword, step, or technical term appears unclear, feel free to ask in the responses/comments and I will clarify it. With that being said, let us get started, keeping our end goal in mind.
“Can you create an AI as complex as Jarvis, if not better?”
Jarvis from Iron Man
Checkpoint 1: Understand What You Are Building
A level 3 conversational AI is basically a computer program that can understand human language along with the intent of the speaker and the context of the conversation, and can talk like a human. We humans assimilate natural language through years of schooling and experience. Interestingly, that's exactly what we are going to do here: build a bot, paralleling how a human baby learns to converse. By the end of this project, here are the objectives we want to achieve, to create a bot that-
- can chitchat with you
- can understand basic human phrases
- can perform special actions
- can autonomously complete a general conversation with a human
- can store and retrieve information to and from a database (equivalent to its memory)
- is deployed on platforms like Telegram, Facebook Messenger, Google Assistant, Alexa, etc.
Checkpoint 2: Setting Up Your Development Environment
To build something incredible, and quickly, you will need great tools and easy-to-use frameworks. Here is my setup-
- Anaconda Distribution For Python
- Visual Studio Code (The IDE we will use)
- Microsoft Visual C++ Redistributables (Prerequisite for Rasa Stack) (Note: You have to install Microsoft Visual Studio, and during installation, checkmark on the C++ Redistributables option)
- Alternatively, use this link- https://visualstudio.microsoft.com/visual-cpp-build-tools/
- DB Browser For SQLite (We will be using this tool for creating and managing a simple SQLite database to store and retrieve information)
Download and install the latest versions of these in your system. It is preferred to install them with default paths and settings.
After installation, go to the search bar on your desktop and type 'Anaconda Prompt' (terminal). Open it and create a new virtual environment for our project by typing the following command:
conda create -n bots python=3.6
This will create a virtual environment called ‘bots’ (you can create an env with whatever name you want) where you can install all the dependencies of your project without conflicting with the base. It is a good practice to use virtual environments for projects with a lot of dependencies.
Next, activate this virtual environment by typing the command below (always ensure it is activated while making any changes or running the program; on newer conda versions, use conda activate bots instead)-
activate bots
Now, install the rasa stack by typing the following command-
pip install rasa
It may take a few minutes to download all the packages and dependencies. There is a chance that you may encounter some error or warning. When you get stuck somewhere, remember that Google has all the answers you want. Search your error message and you'll find a solution on sites like StackOverflow.
In a similar way, you can install any Python package in your activated environment. We will be using a library called Spacy in the future. Install it now. (The -U flag upgrades the package to its latest version if it is already installed.)
pip install -U spacy
Then run the following command in terminal to download a spacy model.
python -m spacy download en_core_web_sm
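If you want to quickly confirm that the model downloaded and linked correctly, a one-line check like the following should print a list of tokens without errors (the sample sentence is just an illustration):

python -c "import spacy; nlp = spacy.load('en_core_web_sm'); print([t.text for t in nlp('Get me the latest news in science')])"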
Checkpoint 3: Creating The Skeleton Of Our Bot
We are using the open-source Rasa Stack, which is a very powerful, easy to understand, and highly customizable framework for creating conversational AIs (hence the choice). Before we finalize the functionalities of the bot, let us do some magic with a single command.
- Create a new folder on your desktop and name it for your reference. I named mine 'Yuki', the same name I gave to my bot.
- Now, go back to the anaconda prompt terminal and reach your project directory by typing the following command:
cd desktop\yuki
- Note that the syntax is cd followed by the path to YOUR folder.
- Next type the following command:
rasa init
You will see a prompt asking you to enter the path to the project. Since we are already in the project folder (Yuki), you can just press 'Enter'. Just follow along with the terminal. You will see a core and an NLU model getting trained, after which you should get a prompt to talk to the bot on the command line. Boom! You have just built a bot! Talk to it, say 'hi' maybe!
So what the hell happened just now? You basically created a rasa project with the above command which initialized a default bot in your project folder. We will be working on top of this to build our bot.
Checkpoint 4: Understanding The Structure Of Your Bot
Now, open the new folder you created before. You will see many new files in it. This is your project structure. Here is a brief on each of them (a sketch of the resulting folder layout follows this list):
- Data Folder: Here lies your training data. You will find two files, nlu.md (training data to make the bot understand the human language) and stories.md (data to help the bot understand how to reply and what actions to take in a conversation).
- Models Folder: Your trained models are saved in this folder.
- actions.py: This is a python file where we will define all the custom functions that will help the bot achieve its tasks.
- config.yml: We mention the details of a few configuration parameters in this file. I’ll get back to this later.
- credentials.yml: To deploy the bot on platforms like Telegram or Facebook Messenger, you will need some access tokens and keys. All of them are stored in this file.
- domain.yml: One of the most important files, this is like an index containing details of everything the bot is capable of. We will see more about it later.
- endpoints.yml: Here, we have the URL to the place where the actions file is running.
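Putting these together, a freshly initialized project folder looks roughly like this (based only on the files described above; your folder name will differ):

yuki/
├── data/
│   ├── nlu.md
│   └── stories.md
├── models/
├── actions.py
├── config.yml
├── credentials.yml
├── domain.yml
└── endpoints.yml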
Once you are clear with the file structure, read ahead.
So, how exactly does it work?
Rasa has many inbuilt features that take care of a lot of miscellaneous things allowing you to focus more on the practical design of the bot from an application point of view. Don’t think too much about the inherent technical aspect at this moment. Follow along carefully, and you will find yourself building a complex AI at the end of this tutorial.
Checkpoint 5: Designing The User Experience (UX)
Before you go any further, you need to decide the purpose of your bot. What will your bot be used for? Here are some generic ideas:
- Weather bot that can tell the user about various weather parameters in the required area.
- A Wikipedia-based bot that can answer any kind of general knowledge question.
- A Twitter-based bot, that can update the user with what’s trending right now in the location of interest.
- A generic search bot that can search about something (Eg: Jobs) based on user’s queries.
- From a commercial standpoint, an ordering/booking bot with which users can purchase goods like clothes, order food, book movie or travel tickets, or schedule appointments with professionals like doctors and lawyers.
In short, the possibilities are limited by your own imagination.
Think of JARVIS or EDITH from the MCU. It is even possible to build something like that. Too many choices? I will design my bot here, which you can try to replicate, but I hope you can be a little more creative to build upon your own ideas. Whatever idea you have, the steps will be similar.
As for Yuki, I will be demonstrating almost everything that is possible, with the end goal of creating an all-purpose conversational AI in a series of tutorials.
In my first design, I want to demonstrate these two things:
- How to use APIs (this stuff is like magic, super useful and easy to use)
- Playing with custom functions
Here’s what I want my bot to be capable of for now:
- Fetch the latest news or articles on the internet based on the user's topic of interest or search query
- Handle simple general conversations
Now I will list all the functions/actions that the bot needs to achieve these capabilities:
- utter_hello
- handle_out_of_scope
- utter_end
- get_news (and so on…)
Make a brief list like this for your bot. Next, I made a list of human intents the bot will have to detect from the messages users send it.
Intents
- greet
- bye
- getNews
- affirm
- deny
- thank_you
- out_of_scope (Everything the bot is not yet programmed for must be detected as out of scope)
All this data goes into the domain.yml file. So, here’s how I expect my ideal user to interact with Yuki:
- User: Hi!
- Yuki: Hola! I am Yuki. What’s up?
- User: Get me the latest news updates
- Yuki: Give me a topic/keyword on which you would like to know the latest updates.
- User: <enters the topic name>
- Yuki: Here’s something I found.
<Links to news articles fetched>
- Yuki: Hope you found what you were looking for.
- User: Thanks!
- Yuki: You're welcome.
This is generally referred to as a ‘Happy Path’, the ideal expected scenario that happens when a user interacts with the bot. Whatever bot you want to design, have an idea of the basic conversational flow it is expected to handle, like this. Write down multiple flows with different possibilities for your reference. The above flow (along with any minor variations to it) is the basic expectation we want to achieve.
Apart from this, we are going to teach Yuki how to respond to chitchat questions like ‘how are you?’, ‘what’s up’, ‘who made you?’, etc.
Once you are ready, move to the next checkpoint.
Checkpoint 6: Building Your NLU Model
First, let us teach our bot some human language, to identify the intents. Start Visual Studio Code, click ‘File->Open Folder’ and choose the project folder (in my case, Yuki).
Open the data/nlu.md file in your code editor. You will already see some default intents in it. This is the place where you add data about every intent the bot is expected to understand and the text messages that correspond to that intent. Update this file with all the required intents in the same format. My finished nlu.md file will look like this:
## intent:affirm
- yes
- indeed
- of course
- that sounds good
- correct
- alright
- true

## intent:bye
- bye
- goodbye
- see you around
- see you later
- ttyl
- bye then
- let us end this

## intent:chitchat_general
- whats up?
- how you doing?
- what you doing now?
- you bored?

## intent:chitchat_identity
- what are you?
- who are you?
- tell me more about yourself
- you human?
- you are an AI?
- why do you exist?
- why are you yuki?

## intent:deny
- no
- never
- I don't think so
- don't like that
- no way
- not really
- nope
- not this
- False

## intent:getNews
- Send me latest news updates
- I want to read some news
- give me current affairs
- some current affairs pls
- Find some interesting news
- News please
- Get me latest updates in [science](topic_news)
- latest updates in [sports](topic_news)
- whats the latest news in [business](topic_news)
- send news updates
- Fetch some news
- get news
- whats happening in this world
- tell me something about whats happening around
- interesting news pls
- latest updates in [blockchain](topic_news)
- get me latest updates in [astronomy](topic_news)
- any interesting updates in [physics](topic_news)
- I want to read something interesting
- I want to read news
- latest news about [machine learning](topic_news)
- latest updates about [Taylor Swift](topic_news)

## intent:greet
- hey
- hello
- hi
- good morning
- good evening
- hey there
- hola
- hi there
- hi hi
- good afternoon
- hey
- hi

## intent:thank_you
- thanks!
- thank you
As is directly evident here, you are basically giving your bot the data to make it understand which words imply which user intent. These are some fundamental human intents. You can add more if you want.
Notice how I divided the chitchat intent into chitchat_general and chitchat_identity for more specificity. To create a robust bot, be as specific as you can with your intents.
Also, notice how I have placed some words in [] followed by (). These are entities, the keywords with some significance in the user's text; the [word](entity_name) syntax tells the bot which words correspond to which entity.
And it’s as simple as that. You are done with NLU! Rasa will take care of the training part for you. Let’s now move on to the next checkpoint.
Checkpoint 7: Updating The Domain File
The domain.yml file must contain the list of all the following stuff:
- Actions: These include the names of all the custom functions we implement in actions.py as well as the ‘utter actions’
- Entities: These are specific keywords present in the user input which the bot can extract and save to its memory for future use. For now, my bot requires only one entity, which I named 'topic_news'. This refers to whatever topic the user wants news about.
- Decide upon what entities you need to extract based on your use case. For example, the Name of a person if you want to create a user profile internally, a geographical location, if you want to give the weather update, food items if your bot can order food online, email ids to subscribe the user to something, etc.
- Intents: We have already discussed intents. In the domain file, you’ll have to include the list of the same.
- Forms: The form action is a special feature provided by the Rasa Stack to help the bot handle situations where it needs to ask the user multiple questions to acquire some information. For example, if your bot has the ability to book movie tickets, it must first know details like the name of the movie, the showtime, etc. While you can program the bot to react according to the user context, using forms is highly efficient in such situations. In my case, I have a getNews form action, which we will explore further when implementing it. The names of all the form actions you implement must be listed in the domain file.
- Slots: In short, the entities and other internal variables that the Bot needs are stored in its memory as slots.
- Templates: These are like the blueprints of all utter actions. Utter actions need not be implemented anywhere else separately. But remember, they must be named in the utter_<sample> format. You are essentially teaching the bot the language, the text that it must use when it wants to send some message. You can have multiple text templates for each utter action.
- Check out my domain.yml file for reference, to understand how you should update your domain file with respect to your bot design.
%YAML 1.1
---
actions:
- handle_out_of_scope
- utter_confirm_if_service_is_correct
- utter_end
- utter_hello
- utter_reply_chitchat_general
- utter_reply_chitchat_identity
- utter_welcome
- utter_default
entities:
- topic_news
forms:
- get_news
intents:
- inform
- getNews
- thank_you
- greet
- deny
- affirm
- bye
- chitchat_identity
- chitchat_general
slots:
  requested_slot:
    type: unfeaturized
  topic_news:
    type: text
templates:
  utter_ask_topic_news:
  - text: Give me a topic/keyword on which you would like to know the latest updates in.
  utter_confirm_if_service_is_correct:
  - text: I hope you found what you were looking for!
  utter_end:
  - text: Bye bye
  - text: until next time...
  utter_hello:
  - text: Konnichiwa! Yuki here. Whatcha wanna do today?
  - text: Hola! I am Yuki. What's up?
  - text: Hi there! Watashi wa Yuki, as in snow in japanese.
  utter_reply_chitchat_general:
  - text: I have nothing much to tell.
  - text: Let's focus on you.
  - text: Learning new things, one line of code at a time!
  utter_reply_chitchat_identity:
  - text: I know about me as much as you do!
  - text: You should ask my creator, XQ for more details
  - text: I am not really sure what to tell
  utter_welcome:
  - text: You're welcome!
  utter_default:
  - text: Sorry, I did not understand you, please try input something else.
domain.yml
There are so many customizations and variations possible. The format you see here is given by Rasa, and includes keywords that make use of its features like ‘buttons’. As you can see, your domain file contains a summary of everything the bot can do.
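For instance, a response template can also carry quick-reply buttons. Here is a rough sketch of what that looks like under the templates section; the utter_ask_confirm name and the payloads are purely illustrative and not part of Yuki's current domain:

  utter_ask_confirm:
  - text: Did that answer your question?
    buttons:
    - title: "Yes"
      payload: "/affirm"
    - title: "No"
      payload: "/deny"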
Up to this point, we have hardly written a line of proper code, and we are about halfway done with the project! Notice how much ideation and brainstorming would go into building a high-level application even before you start coding. That’s the important part, the use case, and design. Anyone can write lines of code, but only a few can create brilliant products.
Remember that a good programmer isn’t always a great product developer and a great product developer isn’t always required to be a good programmer.
Highly functional AIs can have hundreds of actions, intents, and entities, and be connected to huge databases. We are now taking the first steps in this direction.
Checkpoint 8: Write Code For Your Custom Actions
- Open your actions.py file in Visual Studio Code. This is the place where you will define all your custom actions, which you have listed in your domain.yml file.
- You'll just need to define your action classes here and the Rasa framework will internally take care of everything else. If you have no clue about Python or its basic syntax, I would suggest you quickly learn some stuff online before you proceed to the next step. One recommended read-through is the Python 3 tutorial by Sololearn, which can be completed in a few hours even if you are an absolute beginner. That should cover all the basics you will need. You need not be an expert who can write code on your own, but learn to understand the code I am writing or the code you see online. The ability to read code is important.
- I have 1 custom action to implement at the moment, ‘get_news’.
- But how? you may wonder. We are going to use something called an API (Application Programming Interface) to fetch the required data from the internet. In short, an API makes a developer's life easy. For now, know that when you send some data to an API (basically a URL), it will return the information you seek, execute a process, or do whatever it is designed for.
- In my case, I am using the News API. I'll send some payload (topic_news) to the API (the URL specified in its documentation), and it will return all the news data to me in JSON format. In Python, this can be done using the requests library, as you can see in the code below.
- In all my implementations I have used a very direct approach, to help you clearly understand and familiarize yourself with the code. Be aware that there can be many more efficient ways to implement the same features you see below. (I'll be upgrading the code accordingly in future articles.)
- Here is my actions.py file. Read the code carefully. Take your time. I repeat, take your time.
from rasa_sdk import ActionExecutionRejection
from rasa_sdk import Tracker
from rasa_sdk.events import SlotSet, FollowupAction
from rasa_sdk.executor import CollectingDispatcher
from rasa_sdk.forms import FormAction, REQUESTED_SLOT, Action
from rasa.core.slots import Slot
from typing import Dict, Text, Any, List, Union
import sqlite3
import requests

#%%
##############################################################################
# Global Constants
NEWS_API_KEY = 'Paste your API key here'
##############################################################################
#%%

# A form action to fetch news from the internet
class getNews(FormAction):

    def name(self):
        return "get_news"

    @staticmethod
    def required_slots(tracker: Tracker) -> List[Text]:
        """A list of required slots that the form has to fill"""
        return ["topic_news"]

    def slot_mappings(self):
        return {"topic_news": [self.from_text(intent=[None, "getNews", "inform"]),
                               self.from_entity(entity="topic_news", intent=["getNews"])]}

    def validate(self,
                 dispatcher: CollectingDispatcher,
                 tracker: Tracker,
                 domain: Dict[Text, Any]) -> List[Dict]:
        slot_values = self.extract_other_slots(dispatcher, tracker, domain)

        # extract requested slot
        slot_to_fill = tracker.get_slot(REQUESTED_SLOT)
        if slot_to_fill:
            slot_values.update(self.extract_requested_slot(dispatcher, tracker, domain))
            if not slot_values:
                # reject form action execution
                # if some slot was requested but nothing was extracted
                # it will allow other policies to predict another action
                raise ActionExecutionRejection(self.name(),
                                               "Failed to validate slot {0} "
                                               "with action {1}"
                                               "".format(slot_to_fill, self.name()))

        # validation succeeded, set the slot values to the extracted values
        return [SlotSet(slot, value) for slot, value in slot_values.items()]

    def submit(self,
               dispatcher: CollectingDispatcher,
               tracker: Tracker,
               domain: Dict[Text, Any]) -> List[Dict]:
        """Define what the form has to do after all required slots are filled"""
        topic_news = tracker.get_slot("topic_news")
        pageSize = '2'  # Set the number to how many news articles you want to fetch
        url = "https://newsapi.org/v2/everything?q=" + topic_news + "&apiKey=" + NEWS_API_KEY + "&pageSize=" + pageSize

        r = requests.get(url=url)
        data = r.json()  # extracting data in json format
        data = data['articles']

        dispatcher.utter_message("Here is some news I found!")
        for i in range(len(data)):
            output = data[i]['title'] + "\n" + data[i]['url'] + "\n"
            dispatcher.utter_message(output)

        dispatcher.utter_template("utter_confirm_if_service_is_correct", tracker)  # utter submit template
        return []
A few highlights from the code:
- Observe how the ‘form action’ is being implemented. The format is standard syntax. To understand it, you can refer to the official documentation here. Notice how the class has different functions. Also, note that for any rasa action class, the first function you would define is its name.
- In short, we first get the data stored in the 'topic_news' slot from the bot's memory. If it's null, we make the bot ask the user for it and then extract it; this is in fact managed internally by the form action, so you need not worry about it. Once we have this, we make an API call using the requests library in the prescribed format (refer to the API documentation), fetch the required data in JSON format, extract exactly what we need from it (the title and the URL of each article), and dispatch it for the bot to send as a message to the user.
- Most of the APIs are used in a similar fashion as shown in the code. Try it out with a different API from the one I used. You can search for interesting APIs here.
- You must have a clear understanding of the requests library, the JSON format, and how to extract what you want from a JSON payload in Python (a standalone sketch of just the API call follows below).
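To see the API call in isolation, here is a minimal standalone sketch (plain Python, outside Rasa) of the same News API request the submit method makes; the NEWS_API_KEY value is a placeholder for your own key and the 'science' query is just an example:

import requests

NEWS_API_KEY = "Paste your API key here"  # placeholder, grab a free key from newsapi.org

# Same endpoint used in actions.py; query parameters are passed as a dict
url = "https://newsapi.org/v2/everything"
params = {"q": "science", "apiKey": NEWS_API_KEY, "pageSize": 2}

response = requests.get(url, params=params)
data = response.json()  # parse the JSON payload into a Python dict

# Each article is a dict; we only need the title and the URL
for article in data["articles"]:
    print(article["title"])
    print(article["url"])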
And boom! We are done with the coding part for now!
Checkpoint 9: Updating The Config File
We are nearing completion. Now, go to your config.yml file and update it with the following-
# Configuration for Rasa NLU.
# https://rasa.com/docs/rasa/nlu/components/
language: en
pipeline:
- name: SpacyNLP
  model: en_core_web_sm
- name: SpacyTokenizer
- name: SpacyEntityExtractor
  dimensions: ["GPE"]
- name: CRFEntityExtractor
- name: EntitySynonymMapper
- name: CountVectorsFeaturizer
  token_pattern: (?u)\b\w+\b
- name: EmbeddingIntentClassifier

# Configuration for Rasa Core.
# https://rasa.com/docs/rasa/core/policies/
policies:
- name: MemoizationPolicy
  max_history: 10
- name: KerasPolicy
  epochs: 100
  max_history: 10
- name: FallbackPolicy
  nlu_threshold: 0.7
  core_threshold: 0.3
  fallback_action_name: action_default_fallback
- name: FormPolicy
You can read more about this in the URLs given. At the moment, just copy-paste it and don't worry much about the inner details. This file will remain unchanged. To give some context, we are configuring a Natural Language Understanding (NLU) pipeline consisting of various components and extractors through which the user input is passed.
The output of this NLU pipeline is then passed on to a neural network that forms the Rasa Core. Using these inputs, the neural network predicts the intent of the user, as well as what action the bot needs to take next, based on the context of the situation.
By default, rasa uses a neural network called an LSTM (Long Short Term Memory). The beautiful part here is that all the components we have included are customizable and at the same time, Rasa provides loads of inbuilt and default configurations that work pretty well. You can also make your own custom components and add them here. For now, you need not worry about this or your neural network architecture. We will be using default configurations.
Checkpoint 10: Training Your Bot
- Open data/stories.md and delete the default stories present there.
- This file needs to be filled with the training data that will be used by the bot to understand how it must behave in a given situation.
- The format of this training data is simple, just the expected conversational flow in terms of the intents and actions you defined. You have the intents detected followed by what actions need to be executed.
- You can add a few simple stories first, and then train the bot. After that, generate more stories automatically using the ‘interactive learning’ feature.
- Take a look at my stories.md file for reference (linked in the original article); a short example story also follows below.
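For a concrete picture, here is a minimal sketch of what the happy path from Checkpoint 5 could look like as a story in the Rasa 1.x Markdown format, using only the intents and actions we defined earlier (the story name is arbitrary):

## happy path - news
* greet
  - utter_hello
* getNews
  - get_news
  - form{"name": "get_news"}
  - form{"name": null}
* thank_you
  - utter_welcome
* bye
  - utter_end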
Here are a few useful commands you can use in the terminal/anaconda prompt while you are in your project folder.
# To train both the NLU and Core models combined
rasa train

# To run your bot in interactive mode
rasa interactive -m models --endpoints endpoints.yml

# To run the actions server
python -m rasa_sdk --actions actions

# To run the bot locally (normal mode)
rasa shell

# To run the bot on external channels (fb, telegram, etc.) on port 80
rasa run -p 80

# To use Rasa X features for testing and interactive learning
rasa x
Train your bot now by running rasa train. Once the training has successfully completed, follow these steps:
- Update your endpoints.yml file as follows:
action_endpoint:
  url: "http://localhost:5055/webhook"
- Start an actions server in one terminal by typing the command-
python -m rasa_sdk --actions actions
- Keep it running (actions server must continue to run while the bot is running) and open another new terminal. In this, type the following command to run the bot locally-
rasa shell
- You can now talk to the bot, ask it some stuff or chitchat with it, and check whether it responds as you have trained it. Check for errors, if any.
- Next, use interactive learning to generate more accurate stories and conversational flows, and then retrain the bot. Play with all these misc features to get a sense of familiarity. Here, you basically talk to the bot and validate every step it takes. If the bot behaves incorrectly at any point, you have the option to tell it what the correct step is. The corrected training data is generated automatically, and the bot's performance improves upon retraining. It's pretty easy, just try it out!
- After this, experiment with the Rasa X feature, which just makes the whole training process a lot easier.
Once done with this exploration, you can move ahead with deploying the bot on platforms like Telegram, Messenger, Slack, etc. and test it there. I will not detail how it can be done right now, but here's a clue: a tool called ngrok will help you do it (a small sketch follows below). If you run into errors or find it cumbersome, stay tuned for a future article where I will elaborate on it!
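As a small hint of what that step involves, here is a sketch of the Telegram section of credentials.yml, assuming you have created a bot via BotFather and exposed port 80 with ngrok http 80; the token, username, and ngrok subdomain are placeholders:

telegram:
  access_token: "<token from BotFather>"
  verify: "<your bot's username>"
  webhook_url: "https://<your-ngrok-subdomain>.ngrok.io/webhooks/telegram/webhook"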
Checkpoint 11: The Results
Carefully observe how the bot responds to your messages. You will notice a few interesting traits-
- The bot is able to identify the user's intent correctly even when the user's message deviates from the training data we used.
- The bot is contextual in nature. For example, in my case, the bot is able to extract the topic_news entity directly from the user's text and perform the search. Only when the user doesn't give a topic does the bot explicitly ask for one.
- The bot maintains continuity in the conversation.
As we build more features, we will see more, especially in cases of out of scope situations and error handling.
Here is a screenshot while using my bot in the terminal:
While it's far from perfect, we have now established a base to scale forward from! Performance can be drastically improved by feeding in more training data. Considering the amount of data we have used, this is pretty robust.
Checkpoint 12: What Next For Yuki!?
“Ok, you promised us Jarvis. Yuki is pretty ordinary, and most of it very basic! We want Jarvis!”
Before you start thinking this, let me summarise what you learned:
- To set up the basic architecture of a Rasa bot
- To make API calls, retrieve information and use custom actions to achieve your logic
- To generate training data for Core and NLU models
The real game starts now. Once you are clear with these foundations, you will truly understand the power of Rasa and bots in general.
Stay tuned for the next article in this series where I plan to try out some more stuff:
- Yuki is successfully able to fetch the news. What can you do with it? How about automatically tweeting about it on twitter? Pretty much possible using the Twitter API and there is more to it!
- Not satisfied with how Yuki delivers the news directly? It can be customized to give a behavioral touch! We can even create a special custom news delivery utility function and do all sorts of stuff, like finding the sentiment of the news, summarising, etc.
- Using different APIs is one way you can implement new functionalities into your bot, but I want Yuki to be capable of something more creative. How about generating art or music? For that, we must first give Yuki vision and speech! Let’s see soon, how we can do that.
Endnotes:
I am excited to see what you will build with the knowledge you have acquired. The possibilities are endless. Feel free to share in the comments below.
If you get stuck anywhere in development, let me know the issue in the comments, and I'll see if I can help you out. If you have read this far, a big thank you! I hope you found it useful and learned a thing or two along the way.
This tutorial was first published on Research Nest's publication on Medium here. Do follow The Research Nest for more such tutorials and support our initiatives!
#ai #rasa #projects #indiastudents