Conversational AI & Agile Methodologies

Conversational AI (CAI) sits at the intersection of IT development and content creation. On one side, it involves system architecture, hosting considerations, the creation of backend connectors, QA and testing processes, etc. On the other, it involves subject-specific content: defining the expected user questions, elaborating a natural language understanding (NLU) model, authoring answers, and overall UX considerations. So how do you go about implementing such a project while making sure that the tasks of all involved roles, at all project phases, are clear?

Agile Implementation Methodologies

For many years, an agile methodology has been the preferred choice in the projects we have been involved in, and it is our recommended approach both for the initial build and for the ongoing development and optimization.

Our experience has shown us that the benefits of a quicker go-to-market with a smaller scope that grows continuously based on real user input usually outweigh those of delaying go-live until everything you planned for has been built out (only to find out that your users are asking for something you did not foresee). Being agile with your CAI implementation brings the same benefits to the final product as in any other IT implementation, among others:

  • release smaller portions of content frequently
  • start collecting data about real usage quickly
  • quickly react to new or changed needs
  • prioritize your development based on actual user needs

Considerations for an Agile CAI Implementation

Working with sprints, user stories, backlogs, etc. is common to all projects that follow agile methodologies, and this article does not intend to explain the basics of Agile. Below you will find considerations specific to an agile CAI implementation where particular observations are worth mentioning.


Who will be the users of your bot? How do they behave and what do they expect of a bot?

Before you release your digital colleague to the world, you want to make sure it is well prepared to meet your end-users. A good way of preparing for that is to define different types of users or personas and imagine how they would interact with your bot and for what reason.

For a retail company, user types could be:

  • The returning customer
  • The new customer
  • The impatient customer

Write sample dialogues to illustrate how your personas would interact with your bot in different situations. These sample dialogues will be a great tool to check whether you are on the right track when you start developing the content and the user experience in your bot, and they will allow you to spot gaps, such as unhappy paths you forgot to build into the dialogue.
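One way to make sample dialogues easy to check is to capture them as plain data. The sketch below is a minimal illustration of that idea; the personas, turns, and the use of a "handover" turn to mark the unhappy path are invented examples, not a prescribed format:

```python
# Illustrative sketch: sample dialogues captured as data so that gaps,
# such as a missing unhappy path, can be spotted programmatically.
# Personas and dialogue turns are hypothetical examples.

sample_dialogues = {
    "returning_customer": [
        ("user", "Where is my latest order?"),
        ("bot", "Your order #1234 is out for delivery."),
    ],
    "impatient_customer": [
        ("user", "agent now"),
        ("bot", "I'll connect you to a colleague right away."),
        ("handover", ""),  # marks the unhappy path: escalation to a human
    ],
}

def has_unhappy_path(dialogue):
    """A dialogue covers the unhappy path if it includes a handover turn."""
    return any(role == "handover" for role, _ in dialogue)

for persona, dialogue in sample_dialogues.items():
    if not has_unhappy_path(dialogue):
        print(f"Gap: no unhappy path written for persona '{persona}'")
```

A check like this can run as part of content review, flagging every persona whose sample dialogue never escalates or fails gracefully.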

Conversational AI User Stories

User stories will reflect the different kinds of tasks needed to implement a Conversational AI project. In the initial phases of a CAI project, there will be many stories related to data gathering, data analysis, and ML model creation, but also more technical stories related to backend integrations, IT security, and frontend work.

  • As a Data Analyst, I want to retrieve the live chat statistics so that I can determine the most frequently asked questions
  • As a CAI Developer, I want to retrieve the live chat logs so that I can prepare training data for my NLU layer
  • As a Technical Architect, I want to document the technical design so that I can identify technical dependencies
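The first two stories above boil down to a simple log-analysis step: count what users actually ask, then reuse the raw utterances as NLU training-data candidates. The sketch below assumes a hypothetical chat-log format (the `user_input` field name is an assumption for illustration):

```python
from collections import Counter

# Hypothetical live chat log: one record per user utterance.
# The field name "user_input" is an assumption for illustration.
chat_logs = [
    {"user_input": "how do I reset my password"},
    {"user_input": "where is my order"},
    {"user_input": "how do I reset my password"},
    {"user_input": "cancel my subscription"},
    {"user_input": "where is my order"},
    {"user_input": "how do I reset my password"},
]

# Count the most frequently asked questions to prioritize the bot's
# initial scope; the underlying utterances double as NLU training data.
frequencies = Counter(record["user_input"] for record in chat_logs)
for question, count in frequencies.most_common(3):
    print(f"{count}x  {question}")
```

In a real project the input would be thousands of transcripts and the counting would typically happen after clustering similar phrasings, but the principle is the same: let frequency data drive scope.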

When the initial scoping and analysis is in place, it is time to start the actual implementation, and at that point user stories will increasingly be about the content of your new digital assistant.

  • As a new customer, I want the bot to be able to enroll me in the loyalty program so that I can benefit from loyalty offers
  • As a returning customer, I want the bot to be able to remember my preferred coffee shop so that it can suggest it for future pick-ups

Finally, before Go-Live, you also want to make sure to include stories that focus on the analysis of the value and the performance of your digital assistant:

  • As a Product Owner, I want to be able to see the number of business-relevant dialogues per month so I can track the call deflection rate of my bot
  • As a CAI Developer, I want to be able to review the inputs in the SafetyNet, so I can improve the performance of my NLU layer
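The Product Owner's KPI story can be made concrete with a small calculation. Below, call deflection rate is defined as the share of business-relevant dialogues the bot resolved without escalating to a human; both this definition and the dialogue labels are assumptions for illustration, as teams define deflection differently:

```python
# Sketch of a call deflection rate calculation. The dialogue labels
# and the definition itself (deflected = business-relevant and not
# escalated) are assumptions for illustration.

dialogues = [
    {"business_relevant": True,  "escalated": False},
    {"business_relevant": True,  "escalated": True},
    {"business_relevant": True,  "escalated": False},
    {"business_relevant": False, "escalated": False},  # small talk, excluded
]

def call_deflection_rate(dialogues):
    relevant = [d for d in dialogues if d["business_relevant"]]
    if not relevant:
        return 0.0
    deflected = sum(1 for d in relevant if not d["escalated"])
    return deflected / len(relevant)

rate = call_deflection_rate(dialogues)
print(f"Call deflection rate: {rate:.0%}")  # -> Call deflection rate: 67%
```

Whatever definition your team settles on, writing it down as an explicit formula like this keeps the monthly KPI comparable across sprints.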

Definition of Done

In agile project implementations, it is recommended to have one or more Definitions of Done to provide transparency and reduce the risk of different members of the development team having different understandings of when a piece of development is done.

An example of a Definition of Done for the development or update of a virtual assistant conversation flow built in Teneo could include criteria such as: the flow covers both happy and unhappy paths, answers have been reviewed for tone of voice, the NLU triggers have been tested against the training data, and the content has passed QA validation.


Sprint Duration

The duration of a sprint in an agile project may vary, but it can be as short as two weeks. In the initial phases of analysis and scoping, a two-week sprint can be very appropriate for keeping the team aligned on the project scope and purpose. Some teams call this "Sprint 0", or do not even talk about sprints yet, as no development is going on.

When the work shifts to development and, eventually, ongoing optimization, our experience across different projects is that the ideal sprint duration for a CAI project is three to four weeks. This gives the team time to take a user story through all the steps required to reach the "Done" state, as per the Definition of Done mentioned above.

Depending on the team size and members, it can be useful to work with both a Development and a QA sprint to make the most of your teams' time. In this setup, Development and QA run as parallel tracks: the QA team continuously reviews what the Development team has done and performs its pre-release validations, while the developers continue to work on new user stories.

In this scenario, version flags are an ally for development with two parallel tracks, allowing the teams to maintain both a tested, stable version of the project content and a latest, under-development version of the same. Teneo natively supports publishing to different publish environments for the different stages of the sprint, so developers can seamlessly publish the Latest version of the content to their Development environment while the QA team validates the Stable content on the QA environment.
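The two-track setup can be summarized as a simple mapping from publish environment to content version. The sketch below is purely illustrative; the environment names and mapping are assumptions, and in practice this is configured in the platform rather than in code:

```python
# Minimal sketch of the two-track publish plan: which content version
# goes to which environment. Names and mapping are illustrative.

publish_plan = {
    "Development": "Latest",  # developers keep iterating here
    "QA": "Stable",           # QA validates the tested version
    "Production": "Stable",   # only validated content goes live
}

def version_for(environment):
    """Return which version flag should be published to an environment."""
    return publish_plan[environment]

print(version_for("QA"))  # -> Stable
```

Writing the plan down like this, even informally, helps the two tracks agree on what "the current version" means in each environment.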


Conversational AI project implementations fit perfectly with agile methodologies, as they let you tap into the needs of your end-users and react to those needs rapidly. You may need to think about personas and user stories more consciously than in other IT implementation projects, as UX must sit at the forefront of all Conversational AI development for the project to be well received by end users. Your sprints also need to leave room for a thorough QA process: in Conversational AI projects, the parts of the content are closely interrelated, and you need time to check whether work on backend integrations changed the response of an API, whether a new class in your ML model altered the behavior of other classes, and so on.

Tell us in the comments: Do you use an agile implementation methodology in your Conversational AI project? Do you have any recommendations for other teams? What do you find most useful about the Agile methodology in relation to Conversational AI?