Friday, May 24, 2024

Conversational artificial intelligence (AI) is the most widely used form of AI. It powers some of the most downloaded apps and is often a user's gateway to broader AI systems. As the field matures, businesses are increasingly shifting toward ethical design to protect stakeholders and reduce bias.

What is Conversational AI?

Conversational AI refers to AI systems that users can talk to, such as chatbots and virtual agents. These systems use massive volumes of data, machine learning (ML), and natural language processing (NLP) to interpret what users say and respond to their requests. The ideal conversational AI tool is indistinguishable from a human being. In fact, one conversational AI tool designed to mimic the responses of a deceased person has been so successful that users have turned to it to cope with the passing of a loved one!
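To make the pipeline concrete, here is a minimal sketch of the intent-matching step at the heart of a chatbot. Production systems use trained ML/NLP models over large datasets; simple keyword scoring stands in for that step here, and all intent names and responses are hypothetical.

```python
# Hypothetical intents and canned responses for illustration only.
INTENTS = {
    "greeting": {"hello", "hi", "hey"},
    "order_status": {"order", "status", "shipped", "tracking"},
    "refund": {"refund", "return", "money"},
}

RESPONSES = {
    "greeting": "Hello! How can I help you today?",
    "order_status": "Let me look up your order.",
    "refund": "I can help you start a return.",
    "fallback": "Sorry, I didn't understand. Could you rephrase?",
}

def classify(utterance: str) -> str:
    """Pick the intent whose keyword set best overlaps the utterance."""
    tokens = set(utterance.lower().split())
    best_intent, best_score = "fallback", 0
    for intent, keywords in INTENTS.items():
        score = len(tokens & keywords)
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent

def respond(utterance: str) -> str:
    """Map the user's utterance to a response via its classified intent."""
    return RESPONSES[classify(utterance)]
```

A real system would replace `classify` with a statistical model, but the overall shape, understand the request, then generate a response, is the same.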

What are the Ethical Implications of Conversational AI?

Over the last few years, industry insiders and researchers have discussed AI bias. We tend to think of AI as objective, but ultimately it is a tool made by human beings, which means the prejudices of its designers, both overt and unconscious, become embedded in it. AI reflects the assumptions its creators make about the world.

This leads to the possibility that AI could make discrimination more efficient, scaling bias against women, people of color, the LGBTQ+ community and others. For instance, a conversational AI tool might infer that specific phrasing used by a person suggests that they are likely to commit fraud, even though that phrasing is common among a non-criminal minority group.
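The fraud example above can be shown with a toy model. The data below is entirely hypothetical: a naive model that learns a per-phrase fraud rate from skewed historical labels will penalize anyone who uses that phrase, fraudulent or not, which is exactly how bias gets scaled.

```python
# Hypothetical historical sample: (phrase_used, committed_fraud).
# The dialect phrase happens to appear only in a few fraud cases,
# even though the phrase itself says nothing about intent.
history = [
    ("dialect_phrase", True),
    ("dialect_phrase", True),
    ("standard_phrase", False),
    ("standard_phrase", False),
    ("standard_phrase", True),
]

def phrase_fraud_rate(phrase: str) -> float:
    """Naively learned fraud likelihood for a given phrase."""
    labels = [fraud for p, fraud in history if p == phrase]
    return sum(labels) / len(labels)
```

Trained on this sample, the model scores every speaker of the dialect phrase as maximally risky, so a group's manner of speech becomes a proxy for criminality.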

Why We Should Have Ethically Designed Conversational AI

The AI of the future is not just about increasing efficiency by building tools that can do tasks no human being plausibly could. AI must also be used not to scale biases and inequalities, but to bring more transparency and fairness into the world.

We have to think of AI in a more holistic way. In the past, AI was the province of data scientists and AI engineers, but for AI to be better, its design has to encompass ethics.

AI is too big to be left to AI experts alone. It impacts almost every facet of our lives: dating apps, security systems, loan applications, hotel reservations, and more. AI is eating the world, and it is incumbent on the industry to ensure that the world to come is better for having AI, not less fair and less transparent.

There also has to be a human backstop to ensure that decisions made by AI are indeed fair. The risk of abdicating all responsibility to AI is that AI is unaccountable and, for many people, opaque. This issue has played out on social media platforms, where content moderation is essential: what happens when harmless content is taken down? Many people find it hard to get a response from these platforms because their teams are simply too small to handle such cases. Similar questions apply to conversational AI. How do we ensure that its errors can be corrected in real time?
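One common pattern for a human backstop is confidence-based routing: automated replies the system is unsure about are held for human review instead of being sent. The threshold, queue, and function names below are illustrative assumptions, not a prescribed design.

```python
from queue import Queue

# Hypothetical cutoff: replies scored below this go to a human.
REVIEW_THRESHOLD = 0.75

# Queue of held replies awaiting human review.
review_queue: Queue = Queue()

def dispatch(reply: str, confidence: float) -> str:
    """Send a confident reply directly; route uncertain ones to a human."""
    if confidence >= REVIEW_THRESHOLD:
        return reply
    review_queue.put(reply)
    return "A human agent will follow up shortly."
```

The design choice here is that the system fails toward human judgment: an uncertain answer costs a delay rather than an unaccountable mistake.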
