Chat Bots

Here’s a scenario I’m hearing about more and more regularly: In an effort to cut costs, Company X decides to replace the humans in their call centers with bots. In theory (and in Facebook CEO Mark Zuckerberg’s mind), it makes perfect sense. In the United States, one in 25 employees is a call center worker, and staffing costs are often one of a company’s biggest expenses, so why not invest in a cheap technology that will lower that line item?

Simply put, because you may be measuring the wrong ROI. Using a customer service bot to sell or service a high-ticket item or complex matter—a ride on a private jet, say, or an insurance claim—will save staffing costs if the bots can pull it off. But their lack of skill or subtlety might also lose you the sale or customer. When you stand to gross $1,000 on a sale or the lifetime value of a customer, is it wiser to spend nothing on a bot or $50 on a human?
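To make that trade-off concrete, here is a minimal sketch of the expected-value math behind the question above. The $1,000 gross and $50 human cost come from the article; the close rates are invented for illustration, not real data.

```python
# Hypothetical ROI comparison: bot-only vs. human-assisted service on a
# high-ticket sale. Close rates below are assumptions for illustration.

def expected_profit(gross_per_sale, close_rate, service_cost):
    """Expected profit per customer contact: revenue at the given
    close rate, minus the cost of servicing the contact."""
    return gross_per_sale * close_rate - service_cost

# Assume a human closes 30% of contacts, a bot only 20%.
bot = expected_profit(gross_per_sale=1000, close_rate=0.20, service_cost=0)
human = expected_profit(gross_per_sale=1000, close_rate=0.30, service_cost=50)

print(f"bot-only: ${bot:.0f} per contact")  # $200
print(f"human:    ${human:.0f} per contact")  # $250
```

Under these (assumed) numbers, the "free" bot is the more expensive option: the staffing line item shrinks, but expected revenue per contact shrinks faster.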

As both a customer and someone who works closely with conversational AI, I am wary of the impact of bots alone on customer service teams and companies, for the following reasons.

Human language is easily misconstrued by bots.

Bots are not yet programmed to handle nuanced phrases or questions, yet people speak with great variation. Amazon’s bot network is arguably among the best, but it can understand only 100 intents (in bot talk, “intent” means the intention—or desired outcome—of the person communicating with the bot), likely because bots break down when you add more intents. I like to use Google to illustrate this challenge, because the conversational AI used by most bots is very similar to the one used by the search engine. So: How often does the first listed result adequately answer your Google query, and what are the chances your bot programmers will do better than Google’s? Further complications arise if people suspect they are conversing with a bot, because they begin to use “computer-speak.” Since bots are programmed for natural human phrasing, that only increases the miscommunication.
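The scaling problem above can be sketched with a toy example. This is not how production NLU works—real systems use trained classifiers, not keyword overlap—but the failure mode is the same: as the intent catalog grows, more and more utterances plausibly match several intents at once. The intents and phrases here are invented.

```python
# Toy intent matcher: an intent "fires" when it shares at least two
# keywords with the utterance. Intents and keywords are made up.

INTENTS = {
    "track_order":  {"where", "order", "package", "shipping"},
    "return_item":  {"return", "order", "refund", "item"},
    "cancel_order": {"cancel", "order", "item"},
}

def match_intents(utterance):
    """Return every intent sharing at least two keywords with the utterance."""
    words = set(utterance.lower().split())
    return sorted(name for name, keys in INTENTS.items()
                  if len(words & keys) >= 2)

# A simple query maps cleanly to one intent...
print(match_intents("where is my order"))  # ['track_order']
# ...but a slightly nuanced one collides with two.
print(match_intents("cancel the return on my order"))
# ['cancel_order', 'return_item'] -- which did the customer mean?
```

With three intents the collision is easy to spot and resolve; with a hundred, ambiguous matches become the norm rather than the exception.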

Quick questions usually go long.

Bots shine in what are called short-tail interactions—those frequently asked questions that are easily resolved. For many companies that seems good enough, because they believe the majority of their interactions are short tail. The problem is, they’re wrong: Our data finds that 90% of conversations are actually long tail. Think about it this way: If the issue were a simple one, the customer wouldn’t need to chat about it. A question about something as seemingly straightforward as a return policy can quickly become more complex once the customer shares details about when the item was bought, when they plan to ship it back, and whether there is any wiggle room. As soon as the conversation gets more involved, it’s a lot harder for bots to deliver a sufficient customer service experience.

Sometimes the human cost is not an expense, but rather your product’s value.

The classic example is life insurance: bots have no empathy, so they should never answer questions about claims. You can try to program empathy in, but do you really want to see it go even slightly wrong? Similarly, if every time you call a bank you talk to a bot, what is that bank really offering? Lower fees? In a race to the bottom, customer service is one of the last elements financial services institutions can wield as a competitive edge.

This is not an argument against forward-looking institutions investing in conversational AI or bots, which can be exceptional tools when implemented in a smart way. But it is a word of caution against completely replacing your human customer service team with them. Instead, look to people-assisted bots, so that when there is confusion, or when a complex, high-value use case arises, the customer is connected with a human and not more AI. Given the high expectations of consumers when it comes to customer service, this built-in softer landing will benefit not only the customer but your company as well.
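The people-assisted-bot pattern can be sketched as a simple routing decision. The three triggers below correspond to the article’s three warnings—bot confusion, high-value interactions, and quick questions that go long—but the thresholds and field names are illustrative assumptions, not a prescribed implementation.

```python
# Hypothetical handoff logic for a people-assisted bot. All thresholds
# are invented for illustration; tune them to your own economics.

CONFIDENCE_THRESHOLD = 0.8   # assumed minimum bot confidence in its answer
HIGH_VALUE_THRESHOLD = 500   # assumed dollar cutoff for "high-value"
MAX_BOT_TURNS = 5            # assumed point where short tail becomes long tail

def route(bot_confidence, order_value, turns_so_far):
    """Decide whether the bot keeps the conversation or a human takes over."""
    if bot_confidence < CONFIDENCE_THRESHOLD:
        return "human"   # the bot isn't sure it understood the customer
    if order_value >= HIGH_VALUE_THRESHOLD:
        return "human"   # too much revenue at stake to risk a clumsy bot
    if turns_so_far > MAX_BOT_TURNS:
        return "human"   # the "quick question" has gone long tail
    return "bot"

print(route(0.95, 40, 2))    # 'bot'   -- simple, low-value, confident
print(route(0.95, 2000, 1))  # 'human' -- high-value sale
print(route(0.60, 40, 1))    # 'human' -- bot is confused
```

The point of the sketch is the shape of the decision, not the numbers: every path that risks the sale or the relationship ends at a person.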