Using Fulfillment to manage your Dialogflow Chatbot Conversation
The Dialogflow platform enables the creation of AI-based chatbots and supports integration with multiple platforms. There are various reasons to use Dialogflow for your conversational agents; the following are likely to be high on the list:
- supports speech-to-text and text-to-speech in multiple languages
- AI-based engine that is easy for non-technical users to manage and "train" as required
- inexpensive pricing model
- multi-platform support e.g. web-page integration, Google/Android apps, Facebook
- supports text, images, links and general rich responses. It's worth noting that the provided web-integration client only supports text. For that reason, I've created a rough-and-ready text-based client that supports hyperlinks and images, and can be customized as required
- supports external fulfillment for user intent handling i.e. call a custom API for intent request/response handling
- provides an API to manage all your agent data e.g. intents
- complete logs that capture user conversation flows, which can be used to update intents as required
Fulfillment can be described as a custom code API that accepts calls from your Dialogflow agent. Naturally, it needs to support the required request/response data structure. Intents that have fulfillment enabled, from the Dialogflow console, are handled through the custom API: Dialogflow automatically forwards requests when the matched intent has fulfillment enabled. One further point worth mentioning is the requirement to handle responses per platform e.g. ACTIONS_ON_GOOGLE. The supplied fulfillment code supports a custom widget and the Google Assistant simulator (ACTIONS_ON_GOOGLE); the link can be found on the Dialogflow console. Further documentation can be found here.
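To make that request/response structure concrete, here is a minimal sketch of the v2 webhook contract that fulfillment implements. The helper names and intent handling below are illustrative, not the post's actual code; `queryResult` and `fulfillmentText` are Dialogflow's own field names:

```javascript
// Minimal sketch of the Dialogflow v2 webhook contract.
// Dialogflow POSTs a body containing queryResult and expects a JSON
// response with fulfillmentText (or fulfillmentMessages for rich content).

function buildWebhookResponse(text) {
  // Simplest valid v2 webhook response body.
  return { fulfillmentText: text };
}

// Hypothetical dispatcher; the inline editor's firebase-functions wrapper
// and the dialogflow-fulfillment library do the equivalent of this for you.
function handleRequest(body) {
  const intentName = body.queryResult.intent.displayName;
  if (intentName === 'treatment') {
    return buildWebhookResponse('Do you want treatment approach or treatment algorithm?');
  }
  return buildWebhookResponse('provide a valid topic name');
}
```

The dialogflow-fulfillment library used later in this post hides this plumbing behind the `agent` object, but the wire format is the same.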
It is advisable to read through the Dialogflow documentation first, to gain an understanding of Dialogflow, familiarity with the console, and a sense of how to create the basic building blocks and what they are for. This post provides basic instruction on how to use fulfillment to manage chatbot conversations, using the provided fulfillment deployed via Cloud Functions. There is a working example: a Dialogflow zip file that can be imported into your console, and a simple widget simulator that makes use of the authenticated Dialogflow API to enable conversation.
Agent Import, Setup and Overview
Create an account from the Dialogflow site and navigate to the console. From the console, click on the settings gear, then "Export and Import", and import the zip file.
Once the import is successful, navigate to "Intents" and you will see several intents. With your new-found understanding of how Dialogflow works, have a look through them. Various types of functionality are implemented in each intent e.g. followups, training phrases, action parameters, slot filling. For fulfillment to work, the webhook setting, at the bottom of each intent's settings, should be enabled. Fulfillment, in general, also has to be enabled as below
Further down the Fulfillment page you will see the Inline Editor. Enable it, copy the content of index.js into the code area, and click Save. It's also advisable to update the package.json with the provided version, and click Save again.
On the General tab, in settings, you will find the Project ID, unique to your Dialogflow agent; this will be required for the Node.js web widget. Click on your service account to generate a Google service account key and download the JSON keyfile.
From here it is worth taking a single intent and its fulfillment handling, with a detailed explanation.
Treatment Intent
To ensure all is working correctly, try out the following scenario using the "Try it now" box at the top-right of the console. All text (denoted by IN in the example below) is entered in this box, except where a response button needs to be clicked. Alternatively, use the Google Assistant; this is possible because the fulfillment supports ACTIONS_ON_GOOGLE. The console logs can be found at https://console.firebase.google.com/u/0/project/<your-project-id>/functions/logs?severity=DEBUG. You can add further logging to index.js if required.
IN: how do you treat
OUT: provide a valid topic name
IN: tendinopathy
OUT: Do you want treatment approach or treatment algorithm?
IN: click the treatment-algorithm button
OUT: Enter patient group, options below
IN: shoulder with biceps tendinopathy
OUT: final description with a link to a page
It's worth noting that if the topic name, tendinopathy, is passed in the initial text as "how do you treat tendinopathy?", there would be no prompt for a topic name. The following explains how this conversation is managed. The treatment intent makes use of an output context, "treatment-choice". Setting an output context has a few implications. Typically, input text is used to activate an intent; however, setting an output context can be used to match a required intent via its input context, almost force-matching it, and is also used to carry parameters to that intent (or intents). The screenshots below should make it clear
The "Action and parameters" section of the intent is used to manage required parameters and how they are matched, with more details in the Entities section on the console.
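Under the hood, parameters and output contexts travel in the webhook request body; `agent.parameters` and `agent.context.get()` in the fulfillment library wrap roughly the following. This is a sketch against the v2 webhook JSON format; the helper names are mine:

```javascript
// Illustrative helpers showing where parameters and contexts live in the
// raw v2 webhook request body (the fulfillment library does this for you).

function getParameter(body, key) {
  // Matched "Action and parameters" values arrive under queryResult.parameters.
  return body.queryResult.parameters[key];
}

function getContext(body, contextName) {
  // Output contexts arrive fully qualified, e.g.
  // projects/<project-id>/agent/sessions/<session-id>/contexts/treatment-choice
  return (body.queryResult.outputContexts || [])
    .find(ctx => ctx.name.endsWith('/contexts/' + contextName));
}
```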
index.js contains intent mappings, and the treatment handler is mapped with intentMap.set('treatment', treatmentHandler); as below
function treatmentHandler(agent) {
  console.log('treatmentHandler');
  const topics = agent.parameters.topics;
  const topicsPartial = agent.parameters.topicsPartial;
  console.log(`topics:- ${topics}`);
  console.log(`topicsPartial:- ${topicsPartial}`);
  if (topics) {
    // Full topic matched: offer the two follow-up choices, per platform.
    if (agent.requestSource === agent.ACTIONS_ON_GOOGLE) {
      let conv = agent.conv();
      conv.ask('Do you want treatment approach or treatment algorithm?');
      conv.ask(new Suggestions(['treatment approach', 'treatment algorithm']));
      agent.add(conv);
    } else {
      agent.add('Do you want treatment approach or treatment algorithm?');
      agent.add(new Suggestion('treatment approach'));
      agent.add(new Suggestion('treatment algorithm'));
    }
    // Carry the matched topic to the follow-up intent via an output context.
    agent.context.set({
      name: 'treatment-choice',
      lifespan: 5,
      parameters: { topics: topics }
    });
  } else if (topicsPartial) {
    console.log(`topicsPartial ${topicsPartial}`);
    agent.add(`You entered ${topicsPartial}`);
    agent.add(new Suggestion('unstable angina'));
  } else {
    agent.add('provide a valid topic name');
  }
}
Hopefully, the code is self-explanatory. If the topics parameter is present, manage the response based on device type. If a partial topic name is matched, see the Entities section for the defined values, then display some relevant options to the user. The example uses a static value, unstable angina, though in practice you would use your custom API to present options. When a complete topic name is matched, present the user with two options, 'treatment algorithm' and 'treatment approach', and pass the topics parameter to the next intent using the context 'treatment-choice'. For this demo, the user chooses treatment algorithm, and the corresponding intent is activated by the set input context, treatment-choice.
The matched intent is mapped with intentMap.set('treatment.algorithm', treatmentAlgorithmHandler); and coded as below
function treatmentAlgorithmHandler(agent) {
  console.log('treatmentAlgorithmHandler');
  const treatmentAlgorithmContext = agent.context.get('treatment-choice');
  const topics = treatmentAlgorithmContext.parameters.topics;
  // agent.query holds the raw query text (queryResult.queryText).
  const section = agent.query;
  console.log('treatmentAlgorithmHandler queryText: ' + section);
  console.log(`topics: ${topics}`);
  if (section !== 'treatment algorithm') {
    // A patient group was entered: return the (stubbed) treatment detail.
    if (agent.requestSource === agent.ACTIONS_ON_GOOGLE) {
      let conv = agent.conv();
      conv.ask(`Main treatment is rest, plus evaluating and correcting over-training errors. Physiotherapy starts with stretching to improve range of motion followed by strengthening of the rotator cuff muscles and scapular stabilisers 2 or 3 times per week for 6 weeks. May need multiple courses...`);
      conv.ask(new LinkOutSuggestion({
        name: `shoulder with biceps`,
        url: `https://bestpractice.bmj.com/topics/en-gb/582/treatment-algorithmh#shoulder_with_biceps_tendinopathy`
      }));
      agent.add(conv);
    } else {
      agent.add(`Main treatment is rest, plus evaluating and correcting over-training errors. Physiotherapy starts with stretching to improve range of motion followed by strengthening of the rotator cuff muscles and scapular stabilisers 2 or 3 times per week for 6 weeks. May need multiple courses...`);
      agent.add(new Card({
        title: `${topics}`,
        buttonText: 'shoulder with biceps',
        buttonUrl: 'https://bestpractice.bmj.com/topics/en-gb/582/treatment-algorithmh#shoulder_with_biceps_tendinopathy'
      }));
    }
  } else {
    // First pass: the user just clicked "treatment algorithm".
    agent.add(`${topics} asked for ${section}. Enter patient group, options below.`);
    agent.add(`shoulder with rotator cuff tendinopathy, shoulder with biceps tendinopathy, elbow with lateral epicondylitis, elbow with medial epicondylitis, knee with patella tendinopathy, knee with quadriceps iliotibial band or popliteus tendinopathy, ankle with Achilles' tendinopathy`);
  }
}
Initially, the queryText value is "treatment algorithm", resulting in the static text values, "shoulder with rotator cuff tendinopathy, ....", being displayed. At this point it is worth mentioning that in a real implementation there would be an API call to some custom code, returning dynamic values. In fact, all of the stubbed functions (functions not mapped to intents) in index.js that display static content would be replaced with API calls. The final response demonstrates the use of buttons backed by links; again, these would be dynamic values from an API call.
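As a sketch of what such a replacement might look like (the endpoint URL and response shape are assumptions, and global fetch requires Node 18+):

```javascript
// Hypothetical replacement for a stubbed handler: fetch patient groups
// from a custom API instead of hard-coding them.

function formatGroups(groups) {
  // Join API results into the comma-separated list the static version shows.
  return groups.join(', ');
}

async function fetchPatientGroups(topicId) {
  // Assumed endpoint and response shape: { groups: ["...", "..."] }.
  const res = await fetch(`https://example.com/api/topics/${topicId}/patient-groups`);
  const data = await res.json();
  return formatGroups(data.groups);
}
```

A handler would then `agent.add(await fetchPatientGroups(...))` rather than emitting static text.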
Other Examples
Various other conversation scenarios have been configured to use stubbed data. The one below uses an explicit conversation starting point, via the topicHandler, and demonstrates the use of partial topic names, output contexts and followup handling in index.js
IN: topic
OUT: suggest a topic
(matches topic intent and mapped to topicHandler in fulfillment)
IN: asthma
OUT: You entered asthma
(matches topic.by.name intent and mapped to topicByNameHandler in fulfillment)
SELECT: asthma in adults
OUT: Any particular section
(topicByNameHandler matched and set output-context topicbyname-followup)
SELECT: guidelines
OUT: asked for asthma in adults section guidelines
(matches 'topic.by.name - custom' intent and mapped to topicByNameCustomHandler in fulfillment)
The further examples below can be traced through index.js in a similar way
IN: specialties
IN: cardiology
IN: search key diagnostic factors
IN: Filovirus (makes use of entity synonyms)
IN: what is blast crisis
IN: how to prevent testicular torsion
Dialogflow provides an API to manage all data in the console, including training phrases, parameters etc. This is useful because, as your content is updated, your agent data/configuration e.g. training phrases can be updated to match.
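For example, with the official @google-cloud/dialogflow Node.js client, adding a training phrase to an existing intent might look roughly like this; projectId and the intent name are placeholders, and this is a sketch rather than the post's code:

```javascript
// Sketch: append a plain-text training phrase to an intent via the
// Dialogflow Intents API (assumes default application credentials).

function buildTrainingPhrase(text) {
  // Shape the Intents API expects for a simple text training phrase.
  return { type: 'EXAMPLE', parts: [{ text }] };
}

async function addTrainingPhrase(projectId, intentDisplayName, text) {
  const { IntentsClient } = require('@google-cloud/dialogflow');
  const client = new IntentsClient();
  const parent = client.projectAgentPath(projectId);
  // INTENT_VIEW_FULL is needed so existing training phrases are populated.
  const [intents] = await client.listIntents({ parent, intentView: 'INTENT_VIEW_FULL' });
  const intent = intents.find(i => i.displayName === intentDisplayName);
  intent.trainingPhrases.push(buildTrainingPhrase(text));
  await client.updateIntent({ intent, languageCode: 'en' });
}
```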
As mentioned, with fulfillment, response content should be dynamic, based on the request data sent to your custom API. Responses should be short and, where necessary, give access to further information e.g. website links that provide detail.
It's important to understand that content added via the console is used to manage intent matching and data, e.g. parameter names/types and synonyms. One of the most important aspects of Dialogflow is the built-in AI engine, used to process your training phrases. The primary focus of this training is to match intents as accurately as possible, so add as many variations as you can to give the correct intent a higher chance of being matched.
It's good practice to get to the answer through the shortest route possible, keeping conversations simple, to the point, and engaging. It's also important to quickly recognize where a query cannot be answered i.e. don't waste the user's time. With that in mind, the chatbot should be explicitly clear about the types of queries it is prepared to handle, and handle them well. Dialogflow provides excellent log information on conversations and their outcomes. This may well provide information to help improve performance e.g. adding new training phrases to better match intents.
Web-based Widget
The example widget can be used to connect to your Dialogflow agent to manage the same conversations mentioned above. Ensure you follow the instructions in the agent README.
When the widget loads, the Welcome intent is matched and the widget displays example usage of images, links and text. Try the example that starts with 'topic' being entered.
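For reference, each widget turn boils down to a detectIntent call with the official Node.js client. This is a sketch, assuming credentials come from the downloaded service account keyfile via the GOOGLE_APPLICATION_CREDENTIALS environment variable; projectId and sessionId are whatever your widget uses:

```javascript
// Sketch: send one user utterance to the agent and read the reply text.

function buildDetectIntentRequest(sessionPath, text) {
  // Shape of a v2 detectIntent request for plain text input.
  return {
    session: sessionPath,
    queryInput: { text: { text, languageCode: 'en-US' } },
  };
}

async function sendMessage(projectId, sessionId, text) {
  const { SessionsClient } = require('@google-cloud/dialogflow');
  // Picks up the service account keyfile from GOOGLE_APPLICATION_CREDENTIALS.
  const client = new SessionsClient();
  const sessionPath = client.projectAgentSessionPath(projectId, sessionId);
  const [response] = await client.detectIntent(
    buildDetectIntentRequest(sessionPath, text));
  return response.queryResult.fulfillmentText;
}
```

Reusing the same sessionId across turns is what keeps contexts, such as treatment-choice, alive between messages.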