Collecting Feedback with Conversational Interfaces

How can you obtain feedback from users through a Conversational Interface (CI)?

The implementation of a CI opens a new space for interaction, and it is important to evaluate the service provided there. It is equally important to understand what other useful insights can be extracted from this new connection and how they can help you manage your customers’ expectations: identifying your strengths and points for improvement, and devising new opportunities.

Whether the CI answers a few frequently asked questions or completes complex tasks, customer satisfaction (CSAT) needs to be measured carefully. Otherwise, it will be difficult to evaluate the conversational experience from the user’s perspective.

Listening to, understanding, and answering customers’ feedback

Collecting feedback is a process that can be done through different channels: web/apps, forms, email, calls and, of course, a CI. But a single survey will not work for all of these channels: interaction with a live agent, for example, creates a different empathetic relationship, so it is not appropriate to use the same type of metrics. To effectively collect and review user opinions through a CI, it is necessary to consider a series of characteristics that are inherent and unique to this channel.

For example, when asked for feedback during a call, we normally wait for the survey to start and provide an evaluation based on the experience of the call and the agent. Within a CI there are different opportunities to ask for feedback. Depending on the capabilities that have been implemented in the CI, you may be able to measure not only the interaction itself but also any of the actions that occurred within it. Here is an example of a particular situation where the user can order a coffee, so this is precisely the service we are measuring:

[Image: feedback example using a yes/no question]

These yes/no questions are easy to answer and directly related to the event that just happened. Their answers also address the core question: does this use case work properly?
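As an illustration, here is a minimal sketch of how such a one-tap prompt could be represented and stored by a web front end. The message shape and the /api/feedback endpoint are assumptions made for this example, not part of the Teneo Web Chat API.

```typescript
// Illustrative only: the message shape and endpoint below are assumptions,
// not the Teneo Web Chat API. They show the idea of a one-tap yes/no prompt
// tied to the action that just happened (ordering a coffee).

interface YesNoFeedbackPrompt {
  useCase: string;          // the action being measured, e.g. "order_coffee"
  question: string;         // short question related to the event
  options: ["Yes", "No"];   // one-tap answers keep the effort minimal
}

const coffeeFeedbackPrompt: YesNoFeedbackPrompt = {
  useCase: "order_coffee",
  question: "Did your coffee order go through correctly?",
  options: ["Yes", "No"],
};

// Hypothetical handler: store the answer together with the use case and
// session id so it can later be filtered in the conversation logs.
async function submitYesNoFeedback(
  sessionId: string,
  prompt: YesNoFeedbackPrompt,
  answer: "Yes" | "No"
): Promise<void> {
  await fetch("/api/feedback", {            // assumed backend endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      sessionId,
      useCase: prompt.useCase,
      question: prompt.question,
      answer,
      timestamp: new Date().toISOString(),
    }),
  });
}
```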

Besides that, the strategy must differ depending on the type of interaction, as there are many differences between voice and text. In case of doubt, a multimodal approach would benefit both types of interaction and complement the conversation without affecting the modality.

Having said that, it is crucial to understand why you need to collect feedback, how you can implement it, and how to act on the results. The rest of this article discusses these essential aspects: previous considerations, effective strategies to listen to your users, and what to do afterwards.

Why do I need to collect users’ feedback?

Before implementing any specific conversational strategy to collect users’ feedback, it is necessary to investigate and define the reasons behind this need. We recommend going through these simple questions, grouped into three aspects: goals, design, and actions. This will help you decide on and define the correct procedure:

Defining objectives:

How is this feedback useful for you? What do you want to prove with it? What do you want to measure with it (CSAT with the service, the conversation, the product, etc.)?

Design:

Do you need quantitative or qualitative feedback, that is, measurable scores or analyzable free text? Where will the request for feedback be placed: in the conversational interface itself (in the dialogue, at the end of the session, etc.) or on alternative channels (email, a web page section, etc.)?

Analyzing and acting:

How will you measure this new data? Will the Conversational AI platform allow you to extract and analyze this information? What actions will be taken once it is analyzed?
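One way to make the answers to these questions explicit is to record them in a small configuration object that the team can review and version. The sketch below is only an illustration; all field names are assumptions, not part of any platform API.

```typescript
// Hypothetical configuration object recording the answers to the questions
// above; all names are illustrative, not a product API.

type FeedbackGoal = "csat_service" | "csat_conversation" | "csat_product";
type FeedbackKind = "quantitative" | "qualitative";
type FeedbackPlacement =
  | "in_dialogue"        // right after a specific use case
  | "end_of_session"     // when the conversation closes
  | "email_follow_up"    // alternative channel
  | "web_page_section";

interface FeedbackPlan {
  goal: FeedbackGoal;              // what you want to measure
  kind: FeedbackKind;              // measurable score vs. free-text opinion
  placement: FeedbackPlacement;    // where the question is asked
  analysisOwner: string;           // who extracts and reviews the data
  plannedActions: string[];        // what happens once it is analyzed
}

const coffeeBotFeedbackPlan: FeedbackPlan = {
  goal: "csat_service",
  kind: "quantitative",
  placement: "in_dialogue",
  analysisOwner: "conversational-ai-team",
  plannedActions: [
    "review negative sessions in the log data weekly",
    "include results in the monthly report",
  ],
};
```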

Users will expect you to design and follow up with strategic actions, as this is part of the communicative process:

53% of customers expect businesses to respond to negative reviews within a week, and 1 in 3 have a shorter time frame of 3 days or less (ReviewTrackers).

How can I collect users’ feedback?

Among all the front-end components and Voice User Interface (VUI) elements that can be used to collect feedback, here is a selection of the most frequently used ones, depending on the interaction we want to focus on, together with three possible strategies:

As a sample of these strategies, here is a section of our own Customer Survey. In this particular use case we included some conversational strategies using the Clickable List component, among other message types available in Teneo Web Chat:

[Image: feedback survey example in Teneo Web Chat]
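For reference, a clickable-list feedback question could look roughly like the sketch below. The field names only approximate the shape of a Teneo Web Chat clickable list attachment, so treat them as assumptions and check the Teneo Web Chat documentation for the exact message format.

```typescript
// Sketch of a clickable-list feedback question for a web chat widget.
// The field names below approximate the shape of a Teneo Web Chat
// "clickablelist" message attachment; treat them as assumptions.

const feedbackClickableList = {
  type: "clickablelist",
  title: "How satisfied are you with the service you just received?",
  list_items: [
    { title: "Very satisfied", postback: "feedback_very_satisfied" },
    { title: "Satisfied", postback: "feedback_satisfied" },
    { title: "Not satisfied", postback: "feedback_not_satisfied" },
  ],
};

// In a Teneo flow, such an attachment is typically passed to the front end
// as a JSON output parameter; the parameter name used here is an assumption.
const outputParameters = {
  teneowebclient: JSON.stringify(feedbackClickableList),
};
```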

What can I do with the users’ feedback?

Having decided how to collect user feedback, it is necessary to clarify what will be done next. For example, for each negative result there should be detailed research to understand what went wrong. In Teneo Studio, dialogues can be filtered and analyzed in the Log Data Source section, where you can read the entire conversation in which positive and/or negative feedback was received. In this way, the analysis covers the whole dialogue.

The results obtained can also be considered alongside other aspects, such as the correlation between users and feedback, or a comparison with other surveys already open to users.
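Assuming the feedback events can be exported from the logs as simple rows (session id, use case, feedback value), a small post-analysis step like the following sketch can compute a CSAT rate per use case and list the sessions that deserve a manual review. The row shape is an assumption for illustration, not the Teneo log schema.

```typescript
// Sketch of a post-analysis step over feedback rows exported from the
// conversation logs. The row shape is an assumption for illustration.

interface FeedbackRow {
  sessionId: string;
  useCase: string;                    // e.g. "order_coffee"
  feedback: "positive" | "negative" | "no_reply";
}

// CSAT here = share of answered surveys that were positive, per use case.
function csatByUseCase(rows: FeedbackRow[]): Map<string, number> {
  const totals = new Map<string, { positive: number; answered: number }>();
  for (const row of rows) {
    const entry = totals.get(row.useCase) ?? { positive: 0, answered: 0 };
    if (row.feedback !== "no_reply") {
      entry.answered += 1;
      if (row.feedback === "positive") entry.positive += 1;
    }
    totals.set(row.useCase, entry);
  }
  const result = new Map<string, number>();
  for (const [useCase, { positive, answered }] of totals) {
    result.set(useCase, answered === 0 ? 0 : positive / answered);
  }
  return result;
}

// Sessions with negative feedback can then be listed for manual review
// of the full dialogue, as described above.
function sessionsToReview(rows: FeedbackRow[]): string[] {
  return rows.filter((r) => r.feedback === "negative").map((r) => r.sessionId);
}
```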

Once this information is collected and analyzed, you will need to act accordingly, in other words, plan actions that will improve the user experience. In the following table we have compiled different actions based on two possible results: what to do when receiving positive and negative feedback. It is, of course, inspired by the use case we have been focusing on: ordering coffee.

| Example of comment | Feedback type | Reaction of the knowledge responsible |
| --- | --- | --- |
| "This is what I was looking for" | Correct answer received | Collect the information and include it in the monthly/quarterly report |
| No opinion / no reply | Everything right but not excellent; can be read as a neutral feeling or as an opportunity to engage with customers | Collect this specific data and create a list of the suggestions received. In the future, evaluate implementing a specific survey for this type of interaction |
| "This is not the information I asked for" | Wrong answer received | Improve the knowledge, correct the condition, or include new knowledge |
| "Why do you not offer tea?" | Right answer, but the user does not like it or it can be considered a customer request | Collect this direct input from your customer and add it to the list of suggestions received. In the future, evaluate implementing the new use case |
| "This info is not clear" | Customer does not understand the content | Review and improve the information available on the website; reformulate the answer text to include more detailed information |
| "I have not received my coffee correctly" | Right answer, but the product/service is not working as expected | Detect the complaint from the customer and react following company policy |
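As an illustration of how these reactions could be operationalized, the sketch below maps feedback categories similar to those in the table to a follow-up action. Category names and actions are illustrative only, not part of any platform.

```typescript
// Sketch mapping feedback categories from the table above to follow-up
// actions; category and action names are illustrative.

type FeedbackCategory =
  | "correct_answer"
  | "no_reply"
  | "wrong_answer"
  | "feature_request"
  | "unclear_content"
  | "service_complaint";

const followUpAction: Record<FeedbackCategory, string> = {
  correct_answer: "Add to the monthly/quarterly report",
  no_reply: "Log as neutral; consider a dedicated survey for this interaction",
  wrong_answer: "Improve the knowledge or correct the triggering condition",
  feature_request: "Add to the list of customer suggestions for new use cases",
  unclear_content: "Reformulate the answer and review the website content",
  service_complaint: "Raise a complaint ticket following company policy",
};

// Example: a negative answer about the delivered coffee maps to a complaint.
console.log(followUpAction["service_complaint"]);
```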

Apart from the actions mentioned above, the analysis of real data can help you understand and learn more about your services. Some additional work that could be implemented in the future:

  • User segmentation: adapt the feedback collection to the user’s characteristics, for example a customer who is logged into the system and has previous interactions versus a new, unrecognized customer.
  • Personalized activation of surveys: once some metrics are stable, place the survey in specific situations, such as after the most frequently requested interactions or after multi-step processes you have found to be more complex (both ideas are sketched below).
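Both ideas can be combined in a simple activation rule. The sketch below assumes the dialogue engine exposes a few session facts; all names are hypothetical.

```typescript
// Sketch of personalized survey activation based on session facts;
// every name below is hypothetical and used only for illustration.

interface SessionContext {
  isRecognizedCustomer: boolean;   // logged in, with previous interactions
  completedUseCase: string;        // the use case that just finished
  dialogueTurns: number;           // rough proxy for flow complexity
}

// Only ask for feedback after complex flows or monitored use cases;
// recognized customers get a shorter survey than new, unknown ones.
const MONITORED_USE_CASES = new Set(["order_coffee", "change_delivery_address"]);

function shouldTriggerSurvey(ctx: SessionContext): "short" | "full" | "none" {
  const isComplexFlow = ctx.dialogueTurns >= 6;
  const isMonitored = MONITORED_USE_CASES.has(ctx.completedUseCase);
  if (!isComplexFlow && !isMonitored) return "none";
  return ctx.isRecognizedCustomer ? "short" : "full";
}
```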

Conclusion

Collecting, analyzing, and acting on customer feedback should be considered an essential part of a Conversational Interface. It can help us understand how the CI is perceived by users and identify points for improvement, not only from a performance perspective but also from a user experience point of view. Like any other use case, feedback collection can and should be adapted and improved as customers interact with it.

We can conclude that a Conversational Interface is a significant touch point for implementing a feedback collection strategy, as valid as any other option. Our main recommendations are: use simple surveys, which increases the probability that they will be completed; personalize interactions, since a simple adaptation of the content depending on the feedback response improves the conversational experience; and design the feedback around the data you want to obtain and the interaction mode, so that you gather both qualitative and quantitative data.
