Computationally clarifying user intent for improved question answering
A common drawback of chatbots is their poor handling of unrecognized user inputs. In these situations the system returns a response along the lines of “sorry, could you rephrase that?”, which does not move the conversation forward in any meaningful way. In conversations between humans we rarely end up in such dead ends, because we can use our domain knowledge to infer the meaning even without a perfect understanding of the context.
In my master’s thesis, I set out to explore whether domain knowledge could be added to modern chatbots, enabling more humanlike interactions and more relevant answers.
This thesis is part of an ongoing project at Atostek, titled Oxilate. As part of the project, we are developing a digital assistant for industrial product and maintenance needs. The goal is to produce an assistant for various industrial settings, capable of pointing the user to the right parts of the source material and answering questions about the data.
This bot is built using Rasa, an open-source framework for developing dialogue systems. Rasa integrates easily with other software and is highly configurable, which made it the technology of choice for this project.
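As a taste of that configurability: Rasa’s fallback behaviour (used later in this work) is driven by a confidence threshold declared in the NLU pipeline. A minimal config sketch might look like the following; the threshold value here is illustrative, not the one used in the project.

```yaml
# config.yml (sketch): FallbackClassifier predicts a fallback intent
# when the top intent's confidence falls below the threshold.
pipeline:
  - name: FallbackClassifier
    threshold: 0.4   # illustrative value
```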
The current version of the project is trained to answer users’ questions about the Ubuntu operating system, and the source materials for the chatbot include Ubuntu’s official documentation, Ubuntu Launchpad forum conversations, and Ubuntu community help wiki pages. The task is now to add domain knowledge to this system, allowing it to query that knowledge when it can’t link the user’s input to anything it recognizes.
Ontologies as a source of domain knowledge
Before adding domain knowledge, we need a way to represent a domain appropriately. This is where ontologies come into play. They are representations of domains or languages, where entities and their relationships are modeled as interconnected nodes.
The figure below shows a simplified example of what an ontology can look like. The circles represent different entities in the context, and the lines represent different relationships. In a more complex domain, there can and should be multiple different types of relationships between entities to fully capture the domain.

In the context of this work, an ontology detailing the Ubuntu domain was needed. A perfect match was not readily available, and thus the Computer Science Ontology (CSO) was chosen. It is an automatically generated ontology covering the whole field of computer science, produced by an algorithm that combines machine learning and semantic technologies; some of its relationships are also reviewed by experts. CSO contains multiple kinds of relationships, but for this use case we are mostly interested in those that define synonyms or other closely related entities.
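The node-and-relationship idea can be illustrated with a toy ontology, here sketched as a plain Python dictionary. All entity names and relationship types below are made up for illustration; they are not taken from CSO.

```python
# A toy ontology: each entity maps to its outgoing relationships.
# Entities and relationship types are illustrative only.
ontology = {
    "bluetooth": {
        "synonym_of": ["bluetooth technology"],
        "related_to": ["wireless networks", "device pairing"],
    },
    "wireless networks": {
        "synonym_of": [],
        "related_to": ["bluetooth", "wifi"],
    },
}

def related_terms(entity, relation="related_to"):
    """Return the entities connected to `entity` by the given relationship."""
    return ontology.get(entity, {}).get(relation, [])
```

For example, `related_terms("bluetooth")` walks the “related_to” edges of the “bluetooth” node, while `related_terms("bluetooth", "synonym_of")` follows the synonym edges instead — the kind of lookup this work relies on.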
Fitting the ontology to the chatbot
Adding the domain knowledge offered by CSO to our chatbot was relatively straightforward thanks to Rasa’s extensibility. A custom module for handling low-confidence inputs was built, which is activated when the input falls below a fallback threshold set in Rasa. This module takes the user’s input, extracts the relevant entities from it, and queries the ontology with these entities. The resulting terms are fed back into the answer-generating model running in the background, and if it retrieves meaningful results, those are shown to the user along with a fitting message (e.g. did you mean to ask about “Bluetooth”?).
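A minimal sketch of that fallback flow is shown below. This is not the thesis code: the helper functions `extract_entities`, `query_ontology`, and `answer` are hypothetical stand-ins for the real components (Rasa’s NLU entity extraction, the CSO lookup, and the answer-generating model), and the threshold value is illustrative.

```python
FALLBACK_THRESHOLD = 0.4  # illustrative; in Rasa this comes from the pipeline config

def handle_input(user_input, confidence, extract_entities, query_ontology, answer):
    """Route low-confidence inputs through the ontology before answering.

    `extract_entities`, `query_ontology`, and `answer` are hypothetical
    stand-ins for the real components of the system.
    """
    if confidence >= FALLBACK_THRESHOLD:
        return answer(user_input)  # normal path: the model is confident enough

    # Fallback path: expand the query with synonyms / related terms.
    suggestions = []
    for entity in extract_entities(user_input):
        for term in query_ontology(entity):
            result = answer(term)
            if result is not None:  # the model found something meaningful
                suggestions.append((term, result))

    if suggestions:
        term, result = suggestions[0]
        return f'Did you mean to ask about "{term}"? {result}'
    return "Sorry, could you rephrase that?"
```

In the real system this logic lives in a custom Rasa action, so Rasa itself decides when the fallback fires; the sketch folds that decision into one function only to keep the example self-contained.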
Evaluating the model & results
Because of the nature of the implementation – it only activates when the baseline model fails – and the general weakness of universal metrics for chatbot evaluation, the implementation was evaluated through a user study with 20 participants. These testers first evaluated the baseline model by interacting with it and answering a set of questions. The same group then asked the model the same questions using the custom implementation and filled in the same questionnaire.
The results of these tests can be summarized as follows: the custom module is able to produce recommendations, but they are not relevant enough to be useful, although they are slightly more useful than the baseline’s responses. The cause lies in the ontology: CSO, although a large ontology covering the field of computer science, does not contain enough information about Ubuntu or other operating systems to produce relevant recommendations in this domain.
Although the results show that the chatbot’s domain is not captured by the ontology used, the implementation can still provide users with recommendations even when the ontology does not match the domain – though their quality is questionable. Further research is required to determine the true potential of the approach: finding a domain-specific ontology for the use case, making more sophisticated use of the different relationship types in the ontology, and developing a heuristic for choosing the best-matching answer from the ontology results.
For now, the problem of poor domain understanding in chatbots persists, but combining more sophisticated machine learning models with better-fitting ontologies may overcome it in the future.