Researchers at the University of Göttingen are studying the effects of non-human interlocutors in customer service
More and more companies are using chatbots in customer service. Thanks to advances in artificial intelligence and natural language processing, chatbots are often indistinguishable from humans in conversation. But should companies tell their customers that they are communicating with a machine rather than a person? Researchers at the University of Göttingen investigated this question. They found that consumers tend to react negatively when they learn that their conversation partner is actually a chatbot. However, if the chatbot makes mistakes and cannot solve the customer's problem, disclosure triggers a positive response. The results of the study were published in the Journal of Service Management.
Previous studies have shown that consumers react negatively when they learn they are communicating with a chatbot – it seems that consumers are inherently averse to the technology. The team at the University of Göttingen investigated whether this is always the case in two experimental studies. Each study had 200 participants, who were each placed in the same scenario: after moving house, they had to contact their energy supplier via online chat to update the address on their electricity contract. In the chat they encountered a chatbot – but only half of them were informed that they were chatting with a non-human contact. The first study examined the effects of this disclosure depending on how important the customer considered the resolution of their service request. The second study examined the effects of this disclosure depending on whether or not the chatbot was actually able to resolve the customer's request. To analyse the data, the team used statistical methods such as analysis of covariance and mediation analysis.
The result: especially when service issues are perceived as particularly important or critical, customers react negatively on discovering that their conversation partner is a chatbot. This scenario weakens customer trust. Interestingly, the results also show that disclosing that the contact is a chatbot leads to positive customer reactions when the chatbot cannot solve the customer's problem. "If your problem is not resolved, the information that you have spoken to a chatbot makes it easier for the consumer to understand the cause of the error," says first author Nika Mozafari from the University of Göttingen. "A chatbot is more likely to be forgiven for making a mistake than a human." In this scenario, customer loyalty can even improve.
Original publication: Mozafari, Nika, Weiger, Welf H. and Hammerschmidt, Maik (2021), "Trust me, I am a bot – Effects of chatbot disclosure in various service frontline settings", Journal of Service Management.
Nika Mozafari, MSc
University of Göttingen
Marketing and Innovation Management
Platz der Göttinger Sieben 3, 37073 Göttingen, Germany
Tel: +49 (0) 551 39-39-26546
E-mail: [email protected]