Teneo x GPT - Better Together


You have probably heard about ChatGPT – who hasn’t? – and you might also have heard about our recent GPT Connector announcement, which lets you access the APIs of the models behind the famous ChatGPT bot.
As a Microsoft partner, we can share access to Azure OpenAI services with our customers and partners via Teneo’s SaaS offering.
This means you have access to one of the most powerful tools for generative AI. So, how can it be used to support your own conversational AI & Teneo project?

What is (Chat-)GPT?

We’re creating an article to explain what (Chat-)GPT is, so why not use GPT to help? Here’s a brief explanation of what the technology is and how it works, generated by the latest model, GPT-4.

The GPT model is a type of neural network architecture that is based on the Transformer model. It consists of several layers of self-attention mechanisms, which allow the model to understand the relationships between different words in a sentence and generate more coherent text.
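To make the self-attention idea more concrete, here is a minimal, illustrative sketch of scaled dot-product attention in plain Python. The toy vectors are invented for this example – real models use learned, high-dimensional representations and many attention heads:

```python
import math

def softmax(xs):
    # Exponentiate and normalize so the attention weights sum to 1
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for a single query vector.

    Each key/value pair stands for another word in the sentence; the
    output is a weighted mix of the values, where words whose keys are
    similar to the query receive higher weights."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# Toy example: one query attending over two "words"
out = attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[1.0, 2.0], [3.0, 4.0]])
```

The first value vector dominates the output because its key points in the same direction as the query – that is the “relationship between words” the paragraph above describes.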

One of the most notable features of GPT is its ability to perform unsupervised learning, which means it can learn from data without explicit supervision. This is achieved through a pre-training phase, during which the model is trained on a large corpus of text to learn the underlying structure of language.

Once the model is pre-trained, it can be fine-tuned on a specific task, such as language translation, summarization, or question-answering. Fine-tuning involves training the model on a smaller dataset that is specific to the task at hand, allowing it to learn to generate text that is tailored to the task.

What are the differences between the different GPT models?

The GPT (Generative Pre-trained Transformer) models are a series of natural language processing AI models developed by OpenAI. They are designed for various tasks such as text generation, translation, summarization, and more. The main GPT models include:

GPT: The first GPT model was introduced in 2018 and featured 117 million parameters. It demonstrated strong language understanding capabilities but had limitations in terms of scalability and performance.

GPT-2: Released in 2019, GPT-2 built upon the success of the original GPT model. It was significantly larger with 1.5 billion parameters and demonstrated improved performance in various language tasks. However, OpenAI initially withheld the full model due to concerns about potential misuse.

GPT-3: Launched in 2020, GPT-3 is the third iteration in the series and represents a substantial leap forward in terms of size and capabilities. It boasts 175 billion parameters and has demonstrated impressive performance in a wide range of applications, including translation, summarization, and even code generation.

GPT-4: Launched in March 2023, GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks. For example, it passes a simulated bar exam with a score around the top 10% of test takers; in contrast, GPT-3.5’s score was around the bottom 10%.

Remember that each GPT model also has smaller versions with fewer parameters, providing a trade-off between computational requirements and performance.

What do Teneo users have access to?

As of April 2023, Teneo users have access to several variants of OpenAI’s GPT-3 model via their Azure OpenAI account (US region) – Ada, Babbage, Curie, Davinci, ChatGPT (GPT-3.5 Turbo) – and to GPT-4.
GPT-4 is the most powerful of these, having the most parameters. Note that the models differ in cost, latency and API limits (Azure Pricing).
Furthermore, only the GPT-3 models are currently available for fine-tuning with additional data for a specific project; neither ChatGPT nor GPT-4 offers this functionality at the time of writing.

Potential Use Cases

So, how can you make use of the technology and start building solutions using GPT?

GPT for General World Knowledge

You can now create a conversational flow that returns GPT answers directly to your customers, allowing you to take advantage of the world knowledge that the large language model contains.

This gives your bot broader knowledge coverage of topics that are not in the core scope of your project, while saving you the time of building out and maintaining that content.

By positioning the Trigger of the GPT flow rather low within the Trigger Ordering, you also ensure that your business- and project-specific content is handled with preference.

The example below demonstrates how this can be achieved.
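Conceptually, the routing behavior of that trigger ordering can be sketched in plain Python. The flow names and trigger conditions below are hypothetical, and Teneo’s actual trigger engine works quite differently – this only illustrates the ordering idea:

```python
def route(user_input, business_triggers, gpt_fallback):
    """Mimics the trigger-ordering idea: business/project flows are
    evaluated first, and the GPT flow only fires when none matches."""
    for trigger, flow in business_triggers:
        if trigger(user_input):
            return flow
    return gpt_fallback

# Hypothetical business triggers, for illustration only
triggers = [
    (lambda t: "opening hours" in t.lower(), "OpeningHoursFlow"),
    (lambda t: "order status" in t.lower(), "OrderStatusFlow"),
]

flow = route("What are your opening hours?", triggers, "GptWorldKnowledgeFlow")
fallback = route("Who painted the Mona Lisa?", triggers, "GptWorldKnowledgeFlow")
```

The business question is handled by its dedicated flow, while the general-knowledge question falls through to the GPT flow.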

Answer variation

You could create new answer variations ‘on the fly’ by calling the GPT API before responding to your user. By filling your output node with a variable, the answer can be created dynamically on each interaction.

If you do not want to perform API calls to GPT on every interaction, Teneo already offers the possibility of adding several answer variations to an output node and serving them either randomly or in a sorted order.
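A minimal sketch of that idea outside Teneo (a hypothetical helper class, not Teneo’s output node implementation): store the pre-generated variations once and serve them either in a fixed rotation or at random, avoiding a GPT call on every turn.

```python
import random

class AnswerVariations:
    """Serves pre-generated answer variations either in a fixed
    (sorted) rotation or at random."""

    def __init__(self, variations):
        self.variations = list(variations)
        self._index = 0

    def next_sorted(self):
        # Rotate through the variations in order, wrapping around
        answer = self.variations[self._index % len(self.variations)]
        self._index += 1
        return answer

    def next_random(self):
        return random.choice(self.variations)

answers = AnswerVariations([
    "Sure, I can help with that!",
    "Of course, happy to help!",
    "Absolutely, let's get started!",
])
first = answers.next_sorted()
second = answers.next_sorted()
```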

Generative AI could help you brainstorm ways of phrasing the responses. Just as you would go through a conversation with ChatGPT, you can ask the API to provide different ways of phrasing the response and then store the phrasings inside Teneo.

This reduces latency and costs when you go live with your solution, and it lets you control the tone and the content of the responses.


Tip: If you want to use this ‘ChatGPT style’ brainstorming inside the Tryout, simply implement our GPT Connector solution; it works ‘out of the box’. You might then want to restrict the trigger of this flow to a specific command and make it available only in Tryout via a Global Scripted Context.

Dialog Summarization

Prompt Engineering is key when you want to use GPT models for specific conversational AI tasks that are not directly related to answering a user input.

We can easily create an additional instance of our GPT Helper that can be tasked with writing summaries of the current conversation state.

You can call this additional GPT instance whenever needed, have it create a summary of the available conversation, and finally store the summary in a global variable inside your Teneo solution.

This can be done easily via a global Postprocessing or End Dialog script which updates the conversational context for the summary and calls the GPT Helper’s completion method.
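The prompt-engineering part of such a script can be sketched as follows. Note that this is an illustrative sketch in Python, not the actual code from the GPT Example solution, the prompt wording and function names are assumptions, and the API call itself is omitted:

```python
def build_summary_prompt(turns):
    """Turns the conversation so far into a summarization prompt.

    `turns` is a list of (speaker, text) pairs, e.g. collected in a
    global variable by a Postprocessing or End Dialog script."""
    transcript = "\n".join(f"{speaker}: {text}" for speaker, text in turns)
    return (
        "Summarize the following conversation between a user and a bot "
        "in one or two sentences:\n\n" + transcript + "\n\nSummary:"
    )

prompt = build_summary_prompt([
    ("User", "I'd like to change my flight to Friday."),
    ("Bot", "Sure, I can rebook you on a flight on Friday."),
    ("User", "Perfect, thanks!"),
])
```

The resulting prompt would then be sent to the GPT Helper’s completion method, and the returned summary stored in a global variable.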

You can find the complete code to run this example in our GPT Example solution which you can download at the end of this page.

Example Dialog:

GPT Summary:

A dialog summary can be very useful, for example in handovers to a human agent, as in your favorite contact center solution – OpenQuestion.

Sentiment Analysis

Similar to the previous example, you can also create a GPT Helper instance to support you with the task of performing Sentiment Analysis on the conversation.
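In the same spirit, the sentiment task boils down to a classification prompt plus a little post-processing of the model’s reply. Again, this is a hedged sketch (the label set, prompt wording and function names are choices made for this illustration, and the actual model call is omitted):

```python
def build_sentiment_prompt(turns):
    """Builds a prompt asking the model to classify the user's sentiment."""
    transcript = "\n".join(f"{speaker}: {text}" for speaker, text in turns)
    return (
        "Classify the overall sentiment of the user in the conversation "
        "below as positive, neutral or negative. Answer with a single "
        "word.\n\n" + transcript + "\n\nSentiment:"
    )

def parse_sentiment(model_reply):
    """Normalizes a raw model reply into one of the expected labels."""
    label = model_reply.strip().lower().rstrip(".")
    return label if label in ("positive", "neutral", "negative") else "unknown"

prompt = build_sentiment_prompt([("User", "This is taking forever, I'm really annoyed.")])
label = parse_sentiment("Negative.")
```

Normalizing the reply matters because models do not always answer with a bare lowercase label.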

You can find the complete code to run this example in our GPT Example solution which you can download at the end of this page.

Dialog Example:


Identified Sentiment:


Tip: You can perform Sentiment Analysis for six languages directly within Teneo; find all the details here: Sentiment & Intensity Analysis | Reference documentation | Teneo Developers

ML data via Tryout

We discussed a Data Augmentation use case in one of our previous articles – GPT models can help here too. The approach is the same as for Answer Variation above: you can use Tryout to get sample utterances for your dataset.
Using GPT to generate data should only be considered a starting point for your data gathering – real user inputs remain the source of truth, and those can easily be added to your model with Teneo’s Optimization functionalities.
To illustrate how GPT can help generate data for your ML model, we asked GPT for sample utterances around a Book a Flight intent.

The rest is then only a copy-and-paste task to add the data to your Class Manager.
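If you prefer to script that step, the surrounding plumbing could look roughly like this (the prompt wording and parsing are assumptions for this sketch, and the model call itself is omitted): ask for one utterance per line, then split the reply into clean training examples.

```python
def build_utterance_prompt(intent, count=5):
    """Asks the model for training utterances for one intent, one per line."""
    return (
        f"Generate {count} different ways a user could express the intent "
        f"'{intent}'. Return one utterance per line, with no numbering."
    )

def parse_utterances(model_reply):
    """Splits the raw reply into clean, non-empty example utterances,
    tolerating stray bullet characters and blank lines."""
    lines = (line.strip().lstrip("-• ").strip() for line in model_reply.splitlines())
    return [line for line in lines if line]

# A made-up model reply, to show the parsing step
reply = ("I need to book a flight to Madrid\n"
         "- Can you get me a plane ticket?\n"
         "\n"
         "Book me a flight for tomorrow")
examples = parse_utterances(reply)
```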


GPT & Generative AI - Opportunities

By adding a GPT model to your Teneo project, you can expand your bot with general world knowledge at a global scale and focus on creating the specific business-related flows of your project. You can also automate several functionalities used in a typical conversational AI project, as we have seen with Dialog Summaries and Sentiment Analysis.

GPT & Generative AI – Things to think about

Involving Large Language Models (LLMs) and Generative AI in your project can make it hard to control what your bot or service actually answers to the final users (your clients). Many projects even involve UX designers to make sure the quality of the answers and the tone used to address your clients are spot on – in the end, a bot is also a representative of your company. These requirements are currently difficult to fulfill with Generative AI.
Then there is also the issue of made-up answers from LLMs, known as hallucinations: an LLM can produce answers that are not based on facts, not even within its own training data. Truth checking and the avoidance of hallucinations have since become an interesting research activity, as described here by Bret Kinsella and here by Cobus Greyling.

Responsible AI

The idea behind the Teneo SaaS offering is to provide you with all you need to build a state-of-the-art Conversational AI project and to fulfill our vision of responsible AI usage.
Here you can read an article with thoughts around the ethical considerations in conversational AI applications.
The security of your infrastructure and of your users’ data is always a priority for us, and the usage of Generative AI is no exception.
A powerful tool needs to be used in the correct way for it to be useful. Before using OpenAI services, make sure to check what kind of data you will be sending over to the service, and whether sharing this data with another service is compliant with your company’s regulations.
You can find an overview of data privacy in Azure OpenAI here, and watch Microsoft’s AI Show episode Being Responsible with Generative AI, which goes through the topic via a nice conversation between Sarah Bird, the Responsible AI expert on the Microsoft side, and the always brilliant Seth Juarez:

For example, it could be that you need to use Teneo to anonymize personal information in the data before sending it to the service.
Learn more about how to anonymize data in Teneo here.
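As a rough illustration of that pre-processing step, here is a simplistic regex-based sketch in Python – not Teneo’s built-in anonymization, and certainly not production-grade PII detection, which is a much harder problem:

```python
import re

# Naive patterns for illustration only; real PII detection needs far more care
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def anonymize(text):
    """Masks e-mail addresses and phone numbers before the text is
    sent on to an external service such as Azure OpenAI."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

masked = anonymize("Contact me at jane.doe@example.com or +46 70 123 45 67.")
```

A dedicated anonymization step like this keeps the personal details inside your own infrastructure while still letting the external model work on the rest of the text.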


The conversational AI world now has powerful models at hand – in today’s article we have seen some ideas on how to make use of them. Make sure to find the right applications for your project and to use them under the umbrella of responsible AI. Teneo gives you all the tools you need to orchestrate cognitive services and create state-of-the-art conversational AI projects.
If you would like support with connecting GPT to Teneo, comment below and our team will get in touch.