Virtual assistants are everywhere nowadays. They sit in a cloud somewhere, not bothering anyone, only making our lives easier. The question remains: are we making the most of our digital helpers?
Conversational AI has come a long way, giving us more intelligent virtual assistants than ever before, yet we are still learning the best way to communicate with them.
In this article I will explain how bots make sense of our input, and I will share some ideas on how you can get the most out of your interactions with the bots you encounter in your day-to-day life. Finally, we will see how our current way of communicating with bots will shape the future of good conversational experiences.
I am using the Teneo platform to show you what dialogues look like behind the scenes of a bot.
How Do Bots Understand Humans?
What are bots made of?
Bots are created by conversational AI developers. During creation, a bot is equipped with an engine and provided with a knowledge base, along with the logic needed to make use of the data in that knowledge base. The knowledge and the logic are used to make sense of human requests, also called user input.
What goes on under the hood of a bot?
Let us have a look at how developers teach a bot a potential dialogue. The image below shows a typical piece of dialogue created in the Teneo platform. It is referred to as a Flow, and it consists of different Nodes; here we see a Trigger and Output Nodes. When a user sends in a request to order a cup of coffee, the bot asks a follow-up question to get the coffee type, then asks for the user's name to finalize the order.
The bot developers’ job is to feed the bot with phrases that a user might send to the bot and compose suitable responses to each input. They design Flows that ensure a chain of conversational interactions.
The Trigger makes use of the class ORDER_A_COFFEE in a Match Requirement, and the class has been trained on the examples shown in the image below. When working in Teneo, classes are maintained in a Class Manager where each class is trained on full sentences that represent natural user input.
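To make the mechanics concrete, here is a minimal Python sketch of the same idea. This is not Teneo code: Flows are authored visually, and a real platform trains a classifier on the class examples rather than counting shared words. The names, training sentences, and overlap threshold below are invented for illustration.

```python
# Minimal illustration (not Teneo code): a Trigger fires when the input
# matches the ORDER_A_COFFEE class, then Output nodes collect the
# remaining details with follow-up questions.

TRAINING_EXAMPLES = {
    "ORDER_A_COFFEE": [
        "i would like to order a coffee",
        "can i get a cup of coffee please",
        "one coffee to go",
    ],
}

def match_class(user_input):
    """Toy matcher: real platforms train a classifier on the examples."""
    words = set(user_input.lower().split())
    for class_name, examples in TRAINING_EXAMPLES.items():
        for example in examples:
            if len(words & set(example.split())) >= 3:  # arbitrary threshold
                return class_name
    return None

def coffee_flow(turns):
    """Walk the Flow: Trigger -> ask coffee type -> ask name -> confirm."""
    it = iter(turns)
    if match_class(next(it)) != "ORDER_A_COFFEE":
        return "Sorry, I did not understand that."
    coffee_type = next(it)  # answer to "What type of coffee?"
    name = next(it)         # answer to "What name is the order under?"
    return "One {} for {}, coming up!".format(coffee_type, name)

print(coffee_flow(["can i get a cup of coffee please", "latte", "Ada"]))
# One latte for Ada, coming up!
```

The point of the sketch is the shape of the Flow: one matching step at the Trigger, then a fixed chain of follow-up questions, exactly like the Trigger and Output Nodes in the image above.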
Where Is Your Conversation with a Bot Stored?
Bot conversation backstage
Developers keep an eye on the bots by examining user input and bot responses. The conversations that are generated are stored in the bot’s Log Data Source. The image below is an example of what a conversation between a user and a bot looks like in Teneo, also known as Session View.
Who can see the conversations you have with bots?
Your conversations with bots are visible to any member of the bot development team who has access to that bot's data logs. Companies may apply User Data Anonymization strategies to hide personal information, such as names and phone numbers, in the sessions. Ultimately, developers are interested in the conversation itself and in how the bot's responses can be improved; they do not need to see any personal data.
Common Mistakes When Talking to A Bot
Typical User Input
When search engines became popular, it was common for people to send in full sentences and questions that were sometimes too long, and these would retrieve irrelevant information or lead to dead ends. Over time, we adjusted our queries to include only keywords and the desired constraints in order to receive relevant results.
Nowadays, many people will treat bots as search engines, sending one-word commands to bots and expecting them to respond with relevant information.
Why are keywords not so useful with bots?
As we saw in the Class Manager view in a previous section, bots are trained on full, meaningful sentences. This is what gives a bot its conversational superpower: it can make sense of sentences similar to those you would use with a human agent. Sending in a request like “I want to book a flight from Stockholm to Sydney” provides the bot with the user’s intention to book a flight as well as the specified departure and destination cities.
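As a toy illustration of this point, even a crude rule-based parser can recover both the intent and its details from a full sentence, while a bare keyword yields neither. The patterns and labels below are invented for the sketch and are not how Teneo actually extracts entities.

```python
# Why full sentences help: they carry the intent AND its entities.
import re

def parse_booking(sentence):
    """Extract a toy intent plus departure/destination cities."""
    result = {"intent": None, "from": None, "to": None}
    if re.search(r"\bbook\b.*\bflight\b", sentence, re.IGNORECASE):
        result["intent"] = "BOOK_FLIGHT"
    m = re.search(r"from (\w+) to (\w+)", sentence, re.IGNORECASE)
    if m:
        result["from"], result["to"] = m.group(1), m.group(2)
    return result

print(parse_booking("I want to book a flight from Stockholm to Sydney"))
# {'intent': 'BOOK_FLIGHT', 'from': 'Stockholm', 'to': 'Sydney'}
print(parse_booking("flight"))  # keyword alone: intent and cities missing
```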
How are keyword inputs processed?
Users may send in random keywords that they assume are enough for the bot to process their request. These types of inputs are processed in a SafetyNet.
A SafetyNet is a Flow that is triggered when the bot does not recognize what the user is talking about. Some bots are equipped with an area-specific SafetyNet, which enables the bot to recognize certain keywords related to the bot’s domain, such as “flight”, “trip”, or “bus” for a booking agent bot, or “coffee” and “juice” for a barista bot.
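The idea can be sketched as a hypothetical fallback handler. This is not Teneo's actual implementation; the keyword list and replies are invented for the example.

```python
# Sketch of a SafetyNet: called only when no Flow has triggered.
# An area-specific keyword list lets the bot nudge the user toward
# a full request instead of giving a generic apology.

BARISTA_KEYWORDS = {"coffee", "juice", "tea", "latte"}

def safety_net(user_input):
    hits = BARISTA_KEYWORDS & set(user_input.lower().split())
    if hits:
        # A keyword was recognized, but the bot still needs a full order.
        return "I can help with {}. What would you like to order?".format(
            ", ".join(sorted(hits)))
    return "Sorry, I did not catch that. I can take drink orders."

print(safety_net("coffee"))          # keyword recognized, asks for details
print(safety_net("weather today"))   # outside the bot's area
```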
The image below displays a typical unsuccessful conversation with a barista bot. You can see how certain keywords lead nowhere, while other keywords are recognized but the user is asked for more information to continue with their order.
Good Manners Are a Good Idea When Talking to A Bot
Prepare a Bot for the Real World
Now more than ever, we are playing a major role in conversational AI development; users as well as developers are taking part in shaping what future conversational AI will be capable of. As end users we demand more markets and more languages, and developers work to meet those demands.
But our impact exceeds that!
The image below illustrates the cycle of conversational AI projects built with the Teneo platform. A bot is built and delivered to users, who in turn interact with it, and their conversations are then analyzed to enhance the bot’s performance.
When a bot is exposed to offensive language and an aggressive style, the developers must train it to answer and react to this type of language. Who knows what a machine will do with all the offensive language it learns over the years!
Good AI development is thought to be the equivalent of “good parenting”. At this point, as users of conversational bots, we are raising the bots of years to come.
Setting a Good Example
Another strong argument for using proper language with virtual assistants is to set a good example for children’s conversational skills. Having a smart assistant device at home means that you are making voice requests, and children pick up on the type of language you use. The more “please” and “thank you” you use, the more children will learn to use them as well. Amazon’s Alexa team launched an initiative to raise awareness of this issue; you can read about it in this BBC News article.
In this article, we have seen what conversations look like behind the scenes of a bot and how bots make sense of human language. We have shown bad and good examples of user input, and we have presented arguments for using proper conversational skills with bots. Please share your thoughts on the subject with us on .