When AI bots can be good for your brand – and when they aren’t

By our News Team | 2022

The growing use of artificial-intelligence bots in customer-service roles has prompted some interesting research on when they are best deployed.

Marketers and customer-service executives are increasingly using artificial-intelligence agents – call them AI bots or smart bots, if you will – to automate certain interactions with customers. But what do clients think and what are the satisfaction implications of using AI agents versus human agents?

These are questions an academic study, recently published in the peer-reviewed Journal of Marketing, seeks to answer. The work was carried out by a team of three researchers based in Australia and the US and is titled: “Bad News? Send an AI. Good News? Send a Human”.

Are we more forgiving of an artificial intelligence (AI) agent than a human when we are let down? Less appreciative of an AI bot than a human when we are helped? The research examines these questions and discovers that consumers respond differently to favourable and unfavourable treatment at the hands of an AI agent versus another human.


According to the summary of the study, published by the American Marketing Association, consumers and marketing managers are currently in a period of technological transition in which AI agents are increasingly replacing human representatives.

AI agents have been adopted across a broad range of consumer domains to handle customer transactions – including traditional retail, travel, ride sharing and even legal and medical services. Given AI agents’ advanced information-processing capabilities and labour cost advantages, the transition is expected to continue. However, what are the implications?

The researchers found that when a product or service offer is worse than expected, consumers respond better when dealing with an AI agent. In contrast, for an offer that is better than expected, consumers respond more favourably to a human agent. 

Researcher Aaron Garvey explains: “This happens because AI agents, compared to human agents, are perceived to have weaker personal intentions when making decisions. That is, since an AI agent is a non-human machine, consumers typically do not believe that an AI agent’s behaviour is driven by underlying selfishness or kindness.” 

As a result, consumers believe that AI agents lack selfish intentions (which would typically be punished) in the case of an unfavourable offer and lack benevolent intentions (which would typically be rewarded) in the case of a favourable offer.

Responses change when the bot is more humanlike

But designing an AI agent to appear more humanlike can change consumer response. For example, a service robot that appears more humanlike (e.g., with human body structure and facial features) elicits more favourable responses to a better-than-expected offer than a more machine-like AI agent without human features. This occurs because AI agents that are more humanlike are perceived to have stronger intentions when making the offer.

What does this mean for marketing managers? Researcher TaeWoo Kim explains: “For a marketer who is about to deliver bad news to a customer, an AI representative will improve that customer’s response. This would be the best approach for negative situations such as unexpectedly high price offers, cancellations, delays, negative evaluations, status changes, product defects, rejections, service failures, and stockouts. 

“However, good news is best delivered by a human. Unexpectedly positive outcomes could include expedited deliveries, rebates, upgrades, service bundles, exclusive offers, loyalty rewards and customer promotions.”
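The decision rule the researchers describe can be summarised as a simple routing heuristic. The sketch below is purely illustrative and not from the study itself; the event categories are taken from the examples quoted above, and all function and variable names are hypothetical:

```python
# Illustrative sketch (not from the study): route a customer message to an
# AI bot or a human agent depending on whether the news is worse or better
# than expected, following the heuristic the researchers describe.

NEGATIVE_EVENTS = {"price_increase", "cancellation", "delay", "defect",
                   "rejection", "service_failure", "stockout"}
POSITIVE_EVENTS = {"expedited_delivery", "rebate", "upgrade",
                   "exclusive_offer", "loyalty_reward"}

def choose_agent(event_type: str) -> str:
    """Return which agent type should deliver the news for a given event."""
    if event_type in NEGATIVE_EVENTS:
        return "ai_bot"   # bad news lands better coming from an AI agent
    if event_type in POSITIVE_EVENTS:
        return "human"    # good news is rewarded more when a human delivers it
    return "human"        # default to a human when the valence is unclear

print(choose_agent("stockout"))  # → ai_bot
print(choose_agent("upgrade"))   # → human
```

In practice the hard part is classifying an interaction's valence up front; the study's point is only that, once you know the news is bad, an AI messenger blunts the customer's negative reaction.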

The researchers say marketers can apply these findings to prioritise (vs. postpone) human-to-AI role transitions in situations where negative (vs. positive) interactions are more frequent. Moreover, even when a role transition is not entirely passed to an AI agent, the selective recruitment of an AI agent to disclose certain negative information could still be advantageous. 

Firms that have already transitioned to consumer-facing AI agents also stand to benefit from the findings. 

The research reveals that AI agents should be selectively made to appear more, or less, humanlike depending upon the situation. For consumers, these findings reveal a ‘blind spot’ when dealing with AI agents, particularly when considering offers that fall short of expectations. 

Indeed, the research reveals an ethical dilemma around the use of AI agents; is it appropriate to use AI to bypass consumer resistance to poor offers? 

“We hope that making consumers aware of this phenomenon will improve their decision quality when dealing with AI agents, while also providing marketing managers [with] techniques – such as making AI more humanlike in certain contexts – for managing this dilemma,” says researcher Adam Duhachek.
