The Kelley School of Business at Indiana University has a large, award-winning online knowledge base, started in 1988. For years, its self-help search tool has been a well-developed feature of IU’s website, providing on-demand IT support. The search tool works much like a Google search: a user enters key search terms and retrieves a ranked list of knowledge base articles.
But what if that process could be improved? A research team led by Antino Kim, Agrim Sachdeva, and Alan Dennis implemented ivy.ai after the university’s IT support organization looked for ways to better serve its users while reducing operational costs.
One objective was to reduce the number of entry-level staff needed in the IT department, since naturally high turnover in those roles creates high retraining costs. Entry-level staff typically find answers by searching the knowledge base, so giving end users a better way to support themselves was key.
Indiana conducted a controlled study in which 261 undergraduate students interacted with both the chatbot and the search tool. Afterward, the students answered survey questions measuring satisfaction, likelihood of use, and perceived difficulty for each tool on a seven-point Likert scale.
In the study, students were provided with one of two questions to present to each tool. Indiana verified that both tools were equipped to provide answers for each question. The research team randomized whether each student would first interact with the chatbot or the search tool.
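To make the crossover design concrete, here is a minimal sketch of how the randomized tool order might be assigned. The function name, participant IDs, and seeding are hypothetical illustrations, not details from the study.

```python
import random

def assign_tool_order(participants, seed=None):
    """Randomly assign each participant the order in which they use
    the two self-help tools: chatbot first or search tool first."""
    rng = random.Random(seed)  # seeded for reproducibility in this sketch
    orders = {}
    for pid in participants:
        first = rng.choice(["chatbot", "search"])
        second = "search" if first == "chatbot" else "chatbot"
        orders[pid] = (first, second)
    return orders

# Example with five hypothetical participant IDs.
orders = assign_tool_order(["p1", "p2", "p3", "p4", "p5"], seed=42)
```

Each participant still uses both tools; only the order varies, which is what later allows the order effect to be examined.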
For each survey question, Indiana found a statistically significant outcome in favor of the chatbot. Students rated the chatbot higher on overall satisfaction and said they would be more likely to use a chatbot than a traditional search method to answer their questions. They also rated the chatbot as substantially less difficult to use than traditional search.
In the main study, researchers controlled for variables such as gender, familiarity with information technology, and which question participants were instructed to ask. Within each group, students still rated the chatbot more favorably than traditional search.
Researchers also discovered an order effect: students who used the chatbot second gave it a higher average satisfaction rating than students whose session ended with the search tool. They found similar results for likelihood of use and difficulty of use.
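An order effect of this kind can be illustrated by comparing mean ratings between the two order groups. The numbers below are invented for illustration only; they are not the study's data, and the actual analysis would involve formal significance testing.

```python
from statistics import mean

# Hypothetical 7-point satisfaction ratings, grouped by which tool
# each participant used second (illustrative values only).
ratings_chatbot_second = [6, 7, 6, 5, 7, 6]
ratings_search_second = [5, 4, 6, 5, 4, 5]

def order_effect(group_a, group_b):
    """Difference in mean rating between the two order groups."""
    return mean(group_a) - mean(group_b)

effect = order_effect(ratings_chatbot_second, ratings_search_second)
```

A positive `effect` here would mirror the pattern the researchers observed: sessions that ended with the chatbot produced higher average satisfaction.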
In a supplementary study, the researchers sought to eliminate the order effect and introduce a wider variety of verified questions. A total of 91 students each interacted with either the chatbot or the search tool, asked one of 93 available questions, and then provided ratings on a scale from one to seven.
This time, students rated only their overall satisfaction and perceived difficulty of use. Again, students rated the chatbot much higher than the search tool on both measures.
“It seems like an efficient way to get my questions answered without spending hours scrolling through IU’s website and randomly guessing what things are,” one student said.
“I like that it seems like you are texting someone when you are getting help,” said another.
Based on the results, IU concluded that the chatbot had a significant positive effect on satisfaction, and that students are more likely to use a chatbot for self-help compared to traditional search methods. Importantly, students reported significantly lower perceived difficulty when using a chatbot to find answers to their questions.
These responses suggest that students appreciated the chatbot’s more conversational nature, as well as its shorter, more direct answers. Traditional search methods require students to review multiple results and select the best answer themselves. Chatbots also have the advantage of interpreting meaning from natural language, whereas some search methods may rely on the presence of exact keywords.