The use of Conversational AI (and AI as a whole) is rapidly spreading across different sectors of society. As these technologies expand, a series of questions arise about the role they play in society, the impact they can have on people’s lives, and the ethical and moral responsibilities of the people and companies who choose to adopt them.
The concepts and debates around Ethics in AI can be endless, and each topic is worthy of an article of its own. In this piece, however, we attempt to provide an overview of the most relevant ethical guidelines and principles that companies developing Conversational AI projects should embed in their development and deployment processes.
Why are Ethics important in a Conversational AI project?
First and foremost, there is an inherent moral responsibility for everyone working with Conversational AI not to use these technologies in ways that could be harmful to other humans.
Apart from the moral obligation, overlooking ethical factors can put the development and continuity of the whole project at risk and even expose companies to financial losses and significant reputation blows. Examples of this are Ticketmaster’s fine over a data leak or the recent case of Google’s Bard, which knocked $100bn off the tech giant’s market value.
As an additional remark, governments and international organizations are actively working on regulations that will gradually force companies to make ethical use of these technologies. Some apply to AI in general, like the upcoming EU AI Act, while others are specific to technologies used by chatbots, like HIPAA, which protects medical records in the US.
Which ethical principles revolve around Conversational AI?
The next section goes a little deeper into the ethical guidelines that impact Conversational AI projects and how some companies put them into practice.
Privacy
Both chatbots and voicebots usually handle Personally Identifiable Information (PII). This brings the responsibility of ensuring that data is not only obtained with the explicit consent of users, but also securely processed, stored, and made available to its owners. In the EU, this topic has made great progress since the General Data Protection Regulation (GDPR) came into force in 2018.
This is one of the topics most regularly addressed by companies implementing Conversational AI, since it is always tied to laws that force companies to disclose their privacy and data management policies to users. This can be seen (mostly by enforcement rather than by design) in chatbots that explicitly ask for the user’s consent to collect their data, or in links to the company’s privacy policies displayed in the welcome message.
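To make this concrete, here is a minimal, framework-agnostic sketch of what a consent step could look like at the start of a conversation. All function names, messages, and the URL below are hypothetical and for illustration only:

```python
# A minimal, framework-agnostic sketch of a consent gate at the start of a
# conversation. All names, messages, and the URL are hypothetical.

PRIVACY_POLICY_URL = "https://example.com/privacy"  # placeholder URL

def welcome_message() -> dict:
    """First message: disclose data collection and ask for explicit consent."""
    return {
        "text": (
            "Hi! I'm a virtual assistant. To help you, I may store parts of "
            "this conversation. You can read our privacy policy here: "
            f"{PRIVACY_POLICY_URL}. Do you agree to continue?"
        ),
        "quick_replies": ["I agree", "I do not agree"],
    }

def handle_consent(user_reply: str) -> str:
    """Only start collecting data once the user has explicitly opted in."""
    if user_reply.strip().lower() == "i agree":
        return "Great, thanks! How can I help you today?"
    # No consent: the bot continues without logging or processing personal data.
    return "No problem, I won't store any of your data. How can I help you?"

print(welcome_message()["text"])
print(handle_consent("I agree"))
```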
Transparency
It’s important for developers to make sure that end users understand what the conversational bot can do, how its decision-making process operates, and how to get the best performance out of it. The main reason behind this lies in avoiding the risk of deceiving and misleading the user, but it also helps build the user’s trust in both the bot and the company that developed it.
Transparency does not end there; there may be use cases in which the fact that the user is not talking to a human should be disclosed, to avoid misguiding the user or excessively anthropomorphizing the conversational bot. This becomes crucial in use cases that deal with sensitive topics like mental or physical health.
How can a company apply this principle in a Conversational AI project? The best and simplest way forward is disclosure. A simple greeting message stating the bot’s nature and capabilities should be more than enough to set the user’s expectations correctly. Another example is Microsoft’s Transparency Notes, which describe the characteristics and limitations of each of their AI-related offerings.
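As an illustration, here is a short sketch of an expectation-setting greeting; the bot name and capability list are invented for the example:

```python
# Hypothetical sketch of an expectation-setting greeting: the bot discloses
# that it is not a human and lists what it can do. All names are invented.

BOT_CAPABILITIES = ["track your order", "answer billing questions", "book a repair"]

def disclosure_greeting(bot_name: str = "HelpBot") -> str:
    """Build a first message that discloses the bot's nature and scope."""
    skills = ", ".join(BOT_CAPABILITIES)
    return (
        f"Hi, I'm {bot_name}, an automated assistant (not a human). "
        f"Here is what I can do: {skills}. "
        "If I can't solve your issue, I'll hand you over to a human agent."
    )

print(disclosure_greeting())
```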
Bias
A bias is a tendency, inclination, or prejudice toward or against something or someone. Every single one of us has some type of bias, and that is naturally going to be expressed via our language.
Since Conversational AI relies mostly on Machine Learning models that use real examples of how people speak to recognize user intents, companies should ensure that their training data is well rounded and representative of the different sectors of society. The introduction of Large Language Models brings promising advances for many of the challenges Conversational AI faces, but currently poses a huge risk of providing answers biased by gender, religion, sexuality, ethnicity, or political affiliation. A clear example of this is the reported bias found in ChatGPT.
Thorough management and analysis of the training data should be a regular task for every Conversational AI team, to avoid situations in which the bot replies with unjust, harmful, or oppressive language. Special attention should also be paid to how user feedback on certain outputs is handled, since some of those answers could also be biased and may later be used to train the model.
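By way of example, here is a toy first-pass audit over training data, assuming it comes as (utterance, intent) pairs; a real bias analysis would require proper fairness tooling and domain review, so take this only as a starting point:

```python
from collections import Counter

# Toy first-pass audit of intent training data. A real bias analysis needs
# proper fairness tooling and domain review; this only surfaces the most
# obvious imbalances. The (utterance, intent) format is an assumption.

training_data = [
    ("I need to reset my password", "account_help"),
    ("my card was charged twice", "billing"),
    ("she said her account is blocked", "account_help"),
    # ... thousands more examples in a real project
]

# 1. Check that no intent is drastically under-represented.
intent_counts = Counter(intent for _, intent in training_data)
total = sum(intent_counts.values())
for intent, count in intent_counts.items():
    if count / total < 0.05:
        print(f"Warning: intent '{intent}' has only {count / total:.0%} of the examples")

# 2. Flag utterances containing terms worth a manual review for biased phrasing.
REVIEW_TERMS = {"he", "she", "wife", "husband", "foreign"}  # illustrative list only
for utterance, intent in training_data:
    if REVIEW_TERMS & set(utterance.lower().split()):
        print(f"Manual review suggested: '{utterance}' (intent: {intent})")
```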
Fairness
This principle is closely related to Bias, since it is about making the conversational bot treat all its users equally. It is also strictly tied to user adoption, since a person is very unlikely to interact with a bot that makes them feel misrepresented.
Companies should pay special attention to avoiding traces of discrimination, exclusion, and toxicity in the answers provided by the bot, which inevitably leads again to closely analyzing the training data to ensure it is not biased.
But it is not only about the content. Making sure that a bot is accessible enough is also key to catering to every type of user likely to interact with it. There are several actions companies can take to make their bots more accessible, ranging from displaying content in different formats to providing alternative ways of interacting.
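One possible shape for this, sketched below with invented field names, is a response object that carries the same answer in several formats so different users can consume it:

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch of an accessible response object: the same answer is
# offered in several formats so different users can consume it. Field names
# are invented for the example.

@dataclass
class BotResponse:
    text: str                        # plain text works with screen readers
    audio_url: Optional[str] = None  # spoken version for voice channels
    image_alt: Optional[str] = None  # alt text if the answer includes an image
    quick_replies: list[str] = field(default_factory=list)  # tap instead of type

reply = BotResponse(
    text="Your order will arrive on Friday.",
    audio_url="https://example.com/tts/order-eta.mp3",  # placeholder URL
    quick_replies=["Track order", "Talk to a human"],
)
print(reply.text)
```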
Accountability
The best way to explain this principle is to think about who is responsible for the bot’s actions. Every company adopting Conversational AI should be able to explain the processes, actions, and reasons behind a specific answer the bot gave. Additionally, as the degree of autonomy of conversational bots increases, so does the need to define processes and controls that guarantee their quality, reliability, and change traceability.
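A common building block for this kind of traceability is an audit log of bot decisions. The sketch below, with illustrative field names, records each answer together with the model version and the predicted intent so it can later be explained:

```python
import json
from datetime import datetime, timezone

# Minimal sketch of an audit log for bot decisions: every answer is recorded
# with enough context (model version, predicted intent, confidence) to later
# explain why the bot said what it said. Field names are illustrative.

def log_bot_decision(user_id: str, utterance: str, intent: str,
                     confidence: float, answer: str, model_version: str) -> str:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,              # pseudonymized in a real system
        "utterance": utterance,
        "predicted_intent": intent,
        "confidence": confidence,
        "answer": answer,
        "model_version": model_version,  # which model produced this answer
    }
    # In production this would go to an append-only store; here we just return it.
    return json.dumps(record)

print(log_bot_decision("u-123", "cancel my subscription", "cancel_subscription",
                       0.92, "Sure, I can help you cancel.", "nlu-v2.3"))
```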
The rise of Generative AI adds a new layer to this principle, as debates around ownership and intellectual property for AI-generated content begin to spark.
Final remarks
Keeping ethical principles as a compass that guides strategy definition, development, and deployment is a key factor both in guaranteeing the success of a Conversational AI project and in minimizing its risks.
While governments work on new regulations and slowly provide frameworks for the ethical use of Conversational AI, it falls to those working with these technologies to uphold these values and embed them into the bots they develop. In the end, a conversational bot is only as good as the humans who work on it.
Tell us about yourself: how do you usually approach this in your projects? Are there any other principles you are considering that should be listed here? Let us know in the comments!