Enterprise Grade Chat Bot: Natural Language Processing - Design, Plan and Model #DataScience #NLP

The following is a professional project plan I created. I am open to any feedback and/or criticism.

Due to the extensive amount of work involved, turning the sub-steps into Data Science products used by your departments will increase ROI in the early stages and bring in much-needed input from the relevant subject matter experts in Sales and Marketing.

Step 1 - Design the Corpus


Understanding bag-of-words: a common misunderstanding is that text analytics works on text itself. What does this mean? Well, it's not text; all the "words" are transformed into unique numbers, like a lookup dictionary, for example 1 = aardvark, 2 = aaron, and so on. This is because algorithms are naturally better at processing numbers and seeing the patterns in them.
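A minimal sketch of that word-to-number lookup, using scikit-learn's CountVectorizer (the sentences are illustrative):

```python
# Bag-of-words: each unique word is assigned an integer index,
# like a lookup dictionary.
from sklearn.feature_extraction.text import CountVectorizer

corpus = [
    "the batsman hit the ball to the boundary",
    "the bat flew out of the cave at dusk",
]

vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(corpus)  # sparse matrix of word counts

print(vectorizer.vocabulary_)  # e.g. {'the': 10, 'batsman': 3, ...}
print(counts.toarray())        # one count vector per sentence
```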

A specific example to consider is a word that can have multiple meanings. Synsets and Hypernyms, Lemmas and Synonyms - fun facts:

“A synset is a set of one or more synonyms, whereas a hypernym is a word or phrase whose semantic field includes that of another word. A lemma is the canonical form that stands as a word's dictionary entry, whereas a lexeme is the group of word forms that share the same meaning.”
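These concepts are easy to explore with NLTK's WordNet interface (assumes nltk is installed and the wordnet corpus has been downloaded):

```python
# Explore synsets, lemmas and hypernyms for the ambiguous word "bat".
import nltk
nltk.download("wordnet", quiet=True)
from nltk.corpus import wordnet as wn

for synset in wn.synsets("bat")[:3]:
    print(synset.name(), "-", synset.definition())
    print("  lemmas:   ", [lemma.name() for lemma in synset.lemmas()])
    print("  hypernyms:", [h.name() for h in synset.hypernyms()])
```

The synsets returned include both the nocturnal mammal and the sporting senses of the word.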



There is a clear distinction: the same word is given to two entirely different things.

Sentences that relate to the sport of cricket and contain "bat" will have words whose synsets are relevant to cricket, such as bowling, stadium, and sport, whereas sentences about the mammal "bat" will typically contain words relevant to the activities, behaviour, and habitat of an animal.

It is the density of the language used within a given window that provides context for the topic at hand. Where there is a need to calculate the relationship between words based on their location, words become more useful as vectors; here is a quick example.


The sketch below shows the calculated similarity between two sentences after converting them to vectors. Pretty cool right!
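A minimal sketch, assuming bag-of-words count vectors and cosine similarity (the sentences are illustrative):

```python
# Convert two sentences to count vectors and compare them.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "the batsman hit the ball with the bat",
    "the bat flew out of the dark cave",
]

vectors = CountVectorizer().fit_transform(sentences)
similarity = cosine_similarity(vectors[0], vectors[1])
print(similarity[0][0])  # closer to 1.0 means more similar word usage
```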


Step 2 - Design the NLP Data Model for the Data Science work.


This model supplies the relevant data required by the algorithms, derived from the transcribed voice files. It is based on recorded call-centre calls and pulls together the relevant metadata to attach to each record. Example:

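The original slide is not reproduced here, but a hypothetical sketch of such a call record (field names are illustrative, not the original schema) might look like this:

```python
# A hypothetical call-record data model; fields are illustrative.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class CallRecord:
    call_id: str
    agent_id: str
    customer_id: str
    started_at: datetime
    duration_seconds: int
    transcript: str                    # text transcribed from the voice file
    topics: list = field(default_factory=list)
    sentiment_compound: float = 0.0    # filled in later by the sentiment step
```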

Step 3a - Fastest Path to Answer - Metrics and Business Value


This step is about understanding succinct delivery: separating the conversations held by efficient representatives from those held by wafflers. There is no quality control in simply throwing an algorithm at conversation data, and this step is a luxury for those that have "lots" of it. The actual methodologies are not covered here, but the essence is that we extract metrics from the conversations along the lines of "problems a, b, c and d are all solved by these 6000 sentences, and they all have this in common". Out of those examples we can curate the training data down to the best 5000, since 1000 candidate observations are considered low quality and are omitted.
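A hypothetical sketch of that curation step, assuming pandas and an illustrative quality score (the scoring rule and column names are my own, not from the plan):

```python
# Score candidate training sentences and keep the best N.
import pandas as pd

candidates = pd.DataFrame({
    "sentence": ["your card is now unblocked", "umm so like basically maybe"],
    "resolved_issue": [True, False],     # did the exchange solve the problem?
    "words_to_resolution": [40, 310],    # how succinct was the path?
})

# Succinct, successful examples rank highest; drop the low-quality tail.
candidates["quality"] = candidates["resolved_issue"] / (1 + candidates["words_to_resolution"])
training_data = candidates.sort_values("quality", ascending=False).head(5000)
print(training_data)
```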

Step 3b - Communication and Rebuttal


This step works well in unison with Step 2. Additionally, this part can be used for extracting metrics on call-centre quality and target objectives. An example: the encoder (the representative) offers a product or service; we know when this happens, and we can capture the decoder's (the customer's) response as token IDs, e.g. 12354, 13245, 654 ("that sounds fantastic"). Across thousands of calls, we now know more about our customers than ever before.
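A minimal sketch of capturing a response as token IDs, reusing the lookup-dictionary idea from Step 1 (the IDs below just mirror the example in the text):

```python
# Map a customer response to its token IDs via a vocabulary lookup.
vocab = {"that": 12354, "sounds": 13245, "fantastic": 654}

response = "that sounds fantastic"
token_ids = [vocab[word] for word in response.split()]
print(token_ids)  # [12354, 13245, 654]
```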

Step 4 - Sentiment Analysis (everyone's favourite)


We have the ability to track the sentiment of the conversation, which allows us to know whether the sentiment is good or bad and whether it changes. Simple metrics that give a wealth of understanding can be extracted, such as how many customers exit a call with a positive sentiment, or how many customers start with negative sentiment and leave positive. Powerful insights can now be delivered from the sentiment data.
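A minimal sketch of the start-versus-end idea using NLTK's VADER analyser (assumes nltk is installed and the vader_lexicon has been downloaded; the utterances are illustrative):

```python
# Compare sentiment at the start and end of a call.
import nltk
nltk.download("vader_lexicon", quiet=True)
from nltk.sentiment import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()
call = [
    "I have been waiting forty minutes and nothing works",  # start of call
    "thanks so much, that fixed it, have a great day",      # end of call
]

start, end = (sia.polarity_scores(s)["compound"] for s in call)
print(f"start={start:+.2f} end={end:+.2f} improved={end > start}")
```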

Step 5 - Tracking Commitments


Have you ever been promised something that wasn't delivered? Let's face it, we all have. This Data Science product is directed at reducing churn by monitoring promises made by both humans and machines. VERY IMPORTANT.
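A hypothetical sketch of how commitments might be flagged in a transcript with simple pattern matching (the phrases and transcript are illustrative; a production system would need far more robust detection):

```python
# Flag promise-like phrases in a transcript for follow-up tracking.
import re

COMMITMENT_PATTERNS = [
    r"\bI will (call|email|send|refund)\b",
    r"\byou will receive\b",
    r"\bwe(?:'ll| will) get back to you\b",
]

transcript = "No problem, I will send the replacement today."
hits = [p for p in COMMITMENT_PATTERNS if re.search(p, transcript, re.IGNORECASE)]
if hits:
    print("Commitment detected - add to follow-up tracking:", hits)
```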

Step 6 - Chat Bot, we finally made it!

Now that all of the statistical groundwork is done (the probability density functions are more or less not covered here), we can start to apply neural networks and build the chat bot.
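As a stand-in for a full encoder-decoder chat bot, here is a minimal sketch of the neural-network side, a simple intent classifier in Keras (shapes and dummy data are illustrative):

```python
# A tiny intent classifier over tokenised utterances.
import numpy as np
from tensorflow import keras

NUM_WORDS, NUM_INTENTS, SEQ_LEN = 5000, 10, 50

model = keras.Sequential([
    keras.layers.Embedding(NUM_WORDS, 64),                  # token IDs -> vectors
    keras.layers.LSTM(64),                                  # read the sequence
    keras.layers.Dense(NUM_INTENTS, activation="softmax"),  # pick an intent
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Dummy data just to show the shapes; real inputs are tokenised call turns.
x = np.random.randint(0, NUM_WORDS, size=(32, SEQ_LEN))
y = np.random.randint(0, NUM_INTENTS, size=(32,))
model.fit(x, y, epochs=1, verbose=0)
```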


The chat bot is reliant on all previous steps.

Data Life Cycle - Final and most important step


The data life cycle builds on all the previous steps and on the business logic embedded in the recordings and the metrics. It represents a continuous flow of data that is either purged or retained at the "sub_products" level. This is actually the most ingenious part of the whole process: an automated data pipeline in which the data model feeds the data science model, increasing accuracy over time.
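A hypothetical sketch of that feedback loop (the quality threshold and the model's update method are illustrative assumptions, not part of the plan):

```python
# One iteration of the life cycle: purge low-quality records,
# retrain the model on the rest.
def life_cycle_iteration(new_records, model, quality_threshold=0.5):
    retained = [r for r in new_records if r["quality"] >= quality_threshold]
    purged = len(new_records) - len(retained)
    print(f"retained {len(retained)} records, purged {purged}")
    model.update(retained)  # hypothetical incremental-training API
    return model
```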


Ranjith Pullagurla

Analyst @ The APP Group | PowerBI | SQL | Python | Helping businesses make insightful decisions

5y

Great work Butler, but don't you think adding deep learning capabilities to this project would make it more efficient and increase the project's capabilities for a larger database?
