Machine Learning – Tips and Tricks: Part I

Intro

Identifying the goal behind a user's statement is fundamental for a bot to hold an intelligent dialog with a human user. Approaching this statistically leads us to a task called Intent Classification, in which a Machine Learning algorithm assigns the most likely intent label to the user's input. This identified label, also called the top intent, can then be used to trigger the corresponding conversation step. Many companies are aware that this is a key element in Conversational AI and choose the technology they want to use carefully. In Teneo, you get out-of-the-box access to the Teneo Classifier, and we also provide an easy-to-use connector to Microsoft's Conversational Language Understanding (CLU), so you have state-of-the-art technology at hand in your project. As with many tools, though, the technology alone will not deliver results – you have to use it in the right way. In Machine Learning, and in Intent Classification in particular, there are common mistakes that will lower the performance of your project.

In this article, we will take a look at some guidelines that will help you make the most of your project.

Dataset Design

The following subtopics all relate to working with your dataset.

Project World Representation

A project is normally divided into sprints. When you start a project, you scope its contents and plan accordingly for the coming weeks, months or even years. The flows (conversation structures) are then created bit by bit (or sprint by sprint), together with backend connections (if required for automation processes). A common mistake is to take the same approach for the Machine Learning dataset: intent classes are created only once the corresponding flows exist, which leads to overtriggering of the few existing classes, especially at the beginning of a project.

If your project is going to cover three use cases around contracts, but at the beginning you train only one related class (since you are only creating that flow at that point in time), then you run the risk that your classifier will also route requests for the untrained contract intents to that class.

[Image: contract_example]

Design your dataset from the start with an overall vision of what you want to cover in your project, and try to gather data early on for all required intents. Data mining tools can help you here.

The 1,000 Classes Trap

A CLU project may contain up to 500 intents. The Teneo Classifier does not have such a limit, but the real question is how many intent classes your project should have in the first place.

If your project contains 500+ classes, there is probably something going wrong in the design of the dataset. A common mistake here is to create several intent classes with exactly the same intent but with different entities.

[Image: pay_example]

The training examples for such classes will be very similar, differing only in their entities, and will confuse your classifier, e.g. “Can I pay by Visa?” vs. “Can I pay by Mastercard?”.

[Image: paypal_password]

The classifier has learned to focus heavily on the entity reference as the distinctive feature of this class, and is easily confused by unrelated user inputs that mention the same entity.

Thus, this setup might lead not only to lower confidence scores for the affected classes but also to an overall lower performance of your intent classifier.

In the following example, you can see a performance hit for the PAY_VIA_GOOGLEPAY intent class once a separate Google Pay related intent class is added.

[Table: Precision, Recall and F1 for the PAY_VIA_GOOGLEPAY intent class before and after adding the separate Google Pay class]

It is better to combine all PAY intent classes into one single PAY intent class using all training examples, and to detect the entity separately. You can then trigger the required flow or transition in Teneo by adding the corresponding Entity Match Requirement next to the Intent Match Requirement.

Example - Flow Design

[Image: visa_match]

Example - Transition Match

In Teneo, you can make use of the Hybrid Intent Recognition approach (Advanced NLU): create only one single trigger for all these cases and then handle the different answers, or even different conversation paths, inside the flow via Entity (or Language Object) Match Requirements on the transitions, according to the recognized entity. This approach can also boost your intent recognition accuracy in cases where the classifier has trouble distinguishing intents at the most fine-grained level. You can then group your low-level intent classes together into one single class and use Teneo's Lexical Resources to further refine the recognized intent.
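To illustrate the dataset side of merging such entity-specific classes into a single intent class, here is a minimal Python sketch. The class names, keywords and examples are made up for illustration; in Teneo, the entity itself would of course live in your Lexical Resources rather than in a script.

```python
# Minimal sketch: merge entity-specific PAY_VIA_* classes into one PAY class
# and capture the payment method as an entity label instead.
# Class names, keywords and examples are hypothetical.

PAYMENT_KEYWORDS = {
    "visa": "VISA",
    "mastercard": "MASTERCARD",
    "paypal": "PAYPAL",
    "google pay": "GOOGLEPAY",
}

training_data = [
    ("PAY_VIA_VISA", "Can I pay by Visa?"),
    ("PAY_VIA_MASTERCARD", "Can I pay by Mastercard?"),
    ("PAY_VIA_PAYPAL", "Is PayPal accepted?"),
]

merged = []
for label, text in training_data:
    # All entity-specific payment classes collapse into a single intent class.
    new_label = "PAY" if label.startswith("PAY_VIA_") else label
    # Derive the entity value from the text via a simple keyword lookup.
    entity = next(
        (value for keyword, value in PAYMENT_KEYWORDS.items() if keyword in text.lower()),
        None,
    )
    merged.append((new_label, text, entity))

for row in merged:
    print(row)
# ('PAY', 'Can I pay by Visa?', 'VISA') ...
```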

Annotate Your Data at the Lowest Level

At the beginning of your project, you might have raw data available (from previous projects, different channels, etc.) on how end users will talk to your bot. In order to train your intent classifier, you will then want to label the data with the intents you select as relevant for your project. This process is also called data annotation. If you are still unsure about the level of detail your intent classes, and with that the corresponding data annotations, should have, I can only recommend annotating them at the lowest (most fine-grained) level that you might want to use. It is very easy to group fine-grained intents together afterwards via a small script (see the sketch after the tables below), but this cannot easily be done the other way around. If you annotate your data at a broad level, the creation of fine-grained intents becomes a complex task and might involve human review, which costs time.

Broad level example:

Intent    User Input
seats     i would like to make seat assignments
seats     i would like to select seats
seats     can i select the seats now online?
seats     can i upgrade my seat using my points
seats     first class upgrade

Fine-grained example:

Intent             User Input
seats_selection    i would like to make seat assignments
seats_selection    i would like to select seats
seats_selection    can i select the seats now online?
seats_upgrades     can i upgrade my seat using my points
seats_upgrades     first class upgrade

As you can see, it is much easier to generalize automatically from the fine-grained table to the broad level than the other way around.
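The grouping script mentioned above can be as simple as a label lookup. The mapping below mirrors the seats example and is, of course, only a sketch:

```python
# Sketch: collapse fine-grained intent labels into broader ones.
FINE_TO_BROAD = {
    "seats_selection": "seats",
    "seats_upgrades": "seats",
}

annotated = [
    ("seats_selection", "i would like to select seats"),
    ("seats_upgrades", "can i upgrade my seat using my points"),
]

broad = [(FINE_TO_BROAD.get(label, label), text) for label, text in annotated]
print(broad)
# [('seats', 'i would like to select seats'), ('seats', 'can i upgrade my seat using my points')]
```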

You can easily import data into your Teneo Class Manager using the TSV format, and into CLU using the required JSON structure.
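As a rough sketch of that export step, the snippet below writes a Class Manager-style TSV (assuming one "class<TAB>example" row per line) and a CLU-style import JSON. Both the exact TSV column layout and the JSON schema should be verified against the current Teneo and Microsoft documentation, as they may differ from this example.

```python
import csv
import json

examples = [
    ("seats_selection", "i would like to select seats"),
    ("seats_upgrades", "can i upgrade my seat using my points"),
]

# TSV for the Teneo Class Manager (assumed layout: class name, tab, training example).
with open("classes.tsv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f, delimiter="\t")
    for intent, text in examples:
        writer.writerow([intent, text])

# CLU-style import structure (field names based on the 2022-05-01 import format;
# check Microsoft's documentation for the API version you are targeting).
clu_project = {
    "projectFileVersion": "2022-05-01",
    "metadata": {"projectKind": "Conversation", "projectName": "my_project", "language": "en-us"},
    "assets": {
        "projectKind": "Conversation",
        "intents": [{"category": i} for i in sorted({i for i, _ in examples})],
        "utterances": [
            {"text": text, "intent": intent, "entities": []} for intent, text in examples
        ],
    },
}

with open("clu_import.json", "w", encoding="utf-8") as f:
    json.dump(clu_project, f, indent=2)
```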

Minimum Amount of Training Examples per Intent

The general recommendation, both by Microsoft (for CLU) and by Artificial Solutions (for the Teneo Classifier), is a minimum of 15-20 training examples per intent. Having said that, you might need more data if an intent is particularly complex or closely related to other intents in your dataset.

Balanced Dataset

The term balanced dataset refers here to having approximately the same amount of training data for each of your intents. This setup is recommended to avoid bias towards overrepresented classes in your model. It is worth noting, though, that if you follow the recommendation regarding the minimum amount of training data, you can normally achieve very good results even when other (more complex) classes have 3 or 4 times as many examples. There is no need to think that all classes must have exactly the same number of training examples or you are ‘doing it wrong’.

[Image: ML_2]

This example is taken from our Teneo Dialogue Resources. You can see that the topics GDPR and Live Agent (“can I speak with a person”) show much more variety in how users refer to these intents than the other intent classes, and therefore also have a higher number of learning examples in the training dataset for the Teneo Classifier. In case you need more data for some of your classes, Teneo's built-in Optimization tools can help you find the right examples to improve your model.
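A quick way to sanity-check both the minimum-count and the balance recommendations above is to count examples per class, for instance with a few lines of Python (the dataset and threshold here are placeholders):

```python
from collections import Counter

# Hypothetical annotated dataset: (intent, user input) pairs.
annotated = [
    ("gdpr", "delete my personal data"),
    ("gdpr", "what do you store about me"),
    ("live_agent", "can i speak with a person"),
    ("seats", "i would like to select seats"),
]

MIN_EXAMPLES = 15  # recommended minimum per intent (see above)

counts = Counter(intent for intent, _ in annotated)
for intent, n in counts.most_common():
    flag = "  <-- below recommended minimum" if n < MIN_EXAMPLES else ""
    print(f"{intent}: {n}{flag}")

# A rough imbalance indicator: ratio between the largest and smallest class.
print("imbalance ratio:", max(counts.values()) / min(counts.values()))
```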

One Word User Inputs

Sometimes all your project needs to cover around a specific keyword is a single standard answer, such as “All info around our presence at the Mobile World Congress can be found here: www.mypage.com/mwc”.

The question now is: do you need a Machine Learning class to handle the recognition of related user inputs? Probably not. You can easily create a Keyword trigger (with the right position in the Trigger Ordering) to handle these inputs. If you want to let your classifier know that this topic is part of your project world, you can add one or two examples of it to a generic class that is shared with other types of similar expressions.

In other scenarios, you might know that a one-word user input like “contract” needs to open a disambiguation flow, since your solution handles different use cases around this keyword and you will want to ask the user for more information about the desired intent.

You also have the possibility to handle one-word user inputs in Teneo directly via Lexical Resources. This ensures correct triggering while saving API calls to an external classifier service, since you already know for sure that you have found the desired next step: the user mentions only one word, and you have matched exactly on this word. You might want to do this for the most common one-word (or short) expressions that your bot receives.

The Entity could then look similar to this:

[Image: ML_4]

And you can then easily fill a global variable for the intent target via a Global Pre-Listener and use the value of your global variable for the triggering process:

In this example, the value of sTarget gets stored in the global variable sIntentTarget, and the language object %WD_1.SCRIPT makes sure that this is done only for single-word user inputs. The Propagation Script at the end sets a classifier-related variable to false to turn off the classifier for this single interaction with the bot.

[Image: ML_6]

The Match Requirement on the corresponding Disambiguation Flow trigger could then look like the one in the screenshot above, and be part of a very precise Trigger Ordering group.

This logic is obviously just an example, and you can adjust it to the needs of your project.
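Outside of Teneo, the same routing idea can be sketched in a few lines of Python. The keyword table and the classifier call below are purely hypothetical stand-ins for your Lexical Resources and your classifier service:

```python
# Hypothetical routing: answer frequent one-word inputs from a lookup table
# and only call the (external) classifier when the lookup does not match.

ONE_WORD_TARGETS = {
    "contract": "contract_disambiguation_flow",
    "mwc": "mobile_world_congress_info",
}

def call_external_classifier(text: str) -> str:
    # Placeholder for the real classifier call (e.g. CLU or the Teneo Classifier).
    return "top_intent_from_classifier"

def route(user_input: str) -> str:
    words = user_input.lower().strip("?!. ").split()
    if len(words) == 1 and words[0] in ONE_WORD_TARGETS:
        # Exact one-word match: no classifier call needed.
        return ONE_WORD_TARGETS[words[0]]
    return call_external_classifier(user_input)

print(route("Contract?"))                      # -> contract_disambiguation_flow
print(route("I want to change my contract"))   # -> top_intent_from_classifier
```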

Conclusions

Intent Classification is one of the key tasks in every Conversational AI project. Today's article has provided you with some advice on how to design your dataset and how to improve the performance of your classifier. As the title suggests, more tips and tricks are coming soon in part 2!
