A Trustworthy, Responsible and Interpretable System to Handle Chit-Chat in Conversational Bots

Abstract

Most often, chat-bots are built to serve the purpose of a search engine or a human assistant: their primary goal is to provide information to the user or help them complete a task. However, these chat-bots are incapable of responding to unscripted queries like "Hi, what's up?" or "What's your favorite food?". Human evaluation judgments show that four human annotators reach consensus on the intent of a chat-domain query only 77% of the time, making it evident how non-trivial this task is. In our work, we show why it is difficult to break the chit-chat space into clearly defined intents. We propose a system to handle this task in chat-bots, keeping in mind scalability, interpretability, appropriateness, trustworthiness, relevance and coverage. Our work introduces a pipeline for query understanding in chit-chat using hierarchical intents, as well as a way to use seq2seq auto-generation models in professional bots. We explore an interpretable model for chat-domain detection, and also show how various components such as adult/offensive classification, grammars/regex patterns, curated personality-based responses, generic guided evasive responses and response-generation models can be combined in a scalable way to solve this problem.
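The layered design described above (offensive filtering first, then scripted grammar/regex patterns with curated responses, with a generation model as fallback) can be sketched roughly as follows. All names, patterns, and responses here are illustrative stand-ins, not the paper's actual components:

```python
# Minimal sketch of a cascaded chit-chat pipeline, assuming three layers:
# (1) adult/offensive screening -> guided evasive response,
# (2) regex/grammar patterns -> curated personality-based response,
# (3) fallback to a seq2seq-style generation model (stubbed here).
import re

OFFENSIVE_TERMS = {"badword"}  # stand-in for an adult/offensive classifier

CURATED = {  # curated personality-based responses keyed by regex pattern
    r"what'?s up": "Not much, ready to help!",
    r"favorite food": "I run on electricity, but pizza sounds great.",
}

EVASIVE = "Let's keep things friendly."  # generic guided evasive response


def generate_response(query: str) -> str:
    """Stub standing in for a seq2seq auto-generation model."""
    return "Tell me more!"


def respond(query: str) -> str:
    q = query.lower()
    # Layer 1: screen for adult/offensive content before anything else.
    if any(term in q for term in OFFENSIVE_TERMS):
        return EVASIVE
    # Layer 2: scripted, trustworthy responses via grammar/regex patterns.
    for pattern, reply in CURATED.items():
        if re.search(pattern, q):
            return reply
    # Layer 3: fall back to auto-generation for uncovered queries.
    return generate_response(q)
```

Ordering the scripted layers before the generation model is what keeps responses appropriate and trustworthy: the unconstrained generator is only reached after the query has passed the safety and coverage checks.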

Publication
In AAAI 2018 Workshop on Reasoning and Learning for Human-Machine Dialogues (DEEP-DIAL18)
Anshuman Suri
