This doesn't help me! Enable a chatbot to handle errors and to disambiguate

  • Status: Open!

Background

Chatbots are a great thing: they can answer and process user queries automatically, which reduces the workload of customer service, especially for simple queries. At the same time, however, poorly processed queries and unsuccessful conversations lead to user dissatisfaction. A chatbot design must therefore also aim to resolve, or at least moderate, problematic conversations. This can be achieved through error handling strategies and disambiguation. For example, if a user formulates an ambiguous statement that the chatbot cannot clearly recognize, the bot can offer a selection of options ("Did you perhaps mean...?"). Other typical error handling strategies include asking the user to paraphrase or, as a last resort, handing the entire conversation over to a human agent. Which of the many strategies are useful and applicable depends heavily on the structure and design of the particular chatbot. In any case, however, incorporating error handling and disambiguation is worthwhile: it helps as many conversations as possible succeed, allows them to be handled end to end, and improves the customer experience.
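To make these strategies concrete, here is a minimal sketch of such a fallback step in TypeScript. It is purely illustrative: the types, function names, and confidence thresholds are assumptions for this sketch, not part of any particular chatbot framework or of the ISSD bot.

```typescript
// Illustrative sketch of a disambiguation/fallback step. All names
// (ScoredIntent, handleUserMessage, the thresholds) are hypothetical.

interface ScoredIntent {
  name: string;        // internal intent id, e.g. "reset_password"
  prompt: string;      // human-readable label shown to the user
  confidence: number;  // classifier confidence in [0, 1]
}

interface BotReply {
  text: string;
  options?: string[];  // quick-reply buttons for disambiguation
  escalate?: boolean;  // hand the conversation over to a human agent
}

const CONFIDENT = 0.8;   // answer directly above this score
const PLAUSIBLE = 0.4;   // offer candidates above this score
const MAX_RETRIES = 2;   // escalate after this many failed turns

function handleUserMessage(candidates: ScoredIntent[], failedTurns: number): BotReply {
  const sorted = [...candidates].sort((a, b) => b.confidence - a.confidence);
  const best = sorted[0];

  // Clear match: answer directly.
  if (best && best.confidence >= CONFIDENT) {
    return { text: `Answering intent "${best.name}".` };
  }

  // Ambiguous match: let the user choose ("Did you perhaps mean...?").
  const plausible = sorted.filter(c => c.confidence >= PLAUSIBLE).slice(0, 3);
  if (plausible.length > 0) {
    return {
      text: "Did you perhaps mean one of the following?",
      options: plausible.map(c => c.prompt),
    };
  }

  // Nothing recognized: ask for a paraphrase, or escalate after repeated failures.
  if (failedTurns >= MAX_RETRIES) {
    return { text: "I'm handing you over to a colleague.", escalate: true };
  }
  return { text: "I'm sorry, I didn't understand that. Could you rephrase your request?" };
}
```

The sketch combines three of the strategies mentioned above (option selection, re-prompting for a paraphrase, human handoff); which combination is appropriate is exactly the kind of design decision the thesis is meant to investigate.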

Goal

Against this background, the aim of the thesis is to investigate error types and error handling strategies theoretically and to apply them practically. The thesis thus consists of two parts that build on each other. First, relevant research literature and practical data are used to investigate which error types and problems can occur in conversations between humans and retrieval chatbots, and which strategies are known to avoid or moderate these errors. In the second part, an active chatbot in practical use (the ISSD bot) is extended with suitable features and error handling strategies. Guided by the theoretical findings, the bot's usage data can be examined to determine how users behave and where conversations fail. These failures should then be addressed by incorporating appropriate error handling strategies. In sum, the user experience should be improved in practice on the basis of the theoretical investigation.
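As a hedged illustration of the second part, the following sketch shows how logged conversations could be scanned for failure signals. The log format and the heuristics (repeated fallback turns, a conversation ending on a fallback) are assumptions made for this example and do not reflect the ISSD bot's actual data model.

```typescript
// Hypothetical log analysis: flag conversations that likely failed,
// assuming one logged turn per bot reply with an intent label.

interface LoggedTurn {
  conversationId: string;
  botIntent: string;   // e.g. "fallback", "greeting", "reset_password"
  timestamp: number;
}

function findFailedConversations(log: LoggedTurn[]): string[] {
  const byConversation = new Map<string, LoggedTurn[]>();
  for (const turn of log) {
    const turns = byConversation.get(turn.conversationId) ?? [];
    turns.push(turn);
    byConversation.set(turn.conversationId, turns);
  }

  const failed: string[] = [];
  byConversation.forEach((turns, id) => {
    turns.sort((a, b) => a.timestamp - b.timestamp);
    const fallbacks = turns.filter(t => t.botIntent === "fallback").length;
    const endedOnFallback = turns[turns.length - 1]?.botIntent === "fallback";
    // Heuristic: several fallbacks, or the user gave up right after one.
    if (fallbacks >= 2 || endedOnFallback) {
      failed.push(id);
    }
  });
  return failed;
}
```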

Skills

  • Interest in chatbots and user-centered software design
  • Very good programming skills (e.g., JavaScript)
  • Knowledge of scientific work and writing, English proficiency, and time management skills

Related Literature

For a brief scientific glimpse into the topic, you can have a look at Ashktorab, Zahra, et al. "Resilient chatbots: repair strategy preferences for conversational breakdowns." Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 2019.