At my last job, I helped create and implement a virtual support agent that assists business customers. I thought I understood how AI and machine learning work together to enable virtual agents. I assumed that, through autonomous learning, the virtual agent would generate its own solutions. As it turns out, our virtual agent was more like a toddler, needing human guardrails to help it learn and act.
We trained our virtual agent to answer the most common questions for the most common intents a business customer would have. Then we trained it to punt to a live chat agent when a customer asked about something outside that narrow scope. Or when the customer was getting frustrated (like yelling at the virtual agent in ALL CAPS or using certain #!@ terms). Or if it was during Mercury retrograde. OK, I’m kidding about that last one, although it probably wouldn’t have been a bad idea….
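That escalation logic was plain rules-based code sitting on top of the intent classifier, not something the model learned. A minimal sketch of the idea, where the intent names, the profanity list, and the ALL-CAPS heuristic are all my illustrative assumptions rather than the actual production logic:

```python
# Hypothetical sketch of the rules-based escalation layer: punt to a human
# when the intent is out of scope or the customer seems frustrated.
# Intent names and word lists below are illustrative, not real.

SUPPORTED_INTENTS = {"billing_question", "service_status", "password_reset"}
FLAGGED_TERMS = {"damn", "heck"}  # stand-in for the real #!@ list

def is_frustrated(message: str) -> bool:
    """Flag frustration: shouting in ALL CAPS or using flagged terms."""
    letters = [c for c in message if c.isalpha()]
    shouting = len(letters) >= 10 and all(c.isupper() for c in letters)
    cursing = any(
        word.strip("#!@.,?").lower() in FLAGGED_TERMS
        for word in message.split()
    )
    return shouting or cursing

def route(intent: str, message: str) -> str:
    """In-scope, calm requests stay with the bot; everything else escalates."""
    if intent not in SUPPORTED_INTENTS or is_frustrated(message):
        return "live_agent"
    return "virtual_agent"

print(route("billing_question", "How do I view my invoice?"))  # virtual_agent
print(route("billing_question", "WHY IS MY BILL SO HIGH?!"))   # live_agent
print(route("flux_capacitor", "Mine is broken"))               # live_agent
```

The point of the sketch is simply that the "AI" decision sits behind hand-written guardrails: the classifier picks an intent, but deterministic rules decide whether the bot is allowed to handle it.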
Eventually we were able to have the virtual agent complete some tasks for the customer, saving the customer time and effort. We watched in real time like proud parents as the virtual agent chatted with customers, cheering its successes and cringing at its mistakes. Here’s a secret—even completing tasks for the customer was something that we programmed the virtual agent to do, not something it learned on its own as it answered more and more customer questions.
So, was our virtual agent really AI? Yes. But a hybrid form of AI, requiring some rules-based programming and a human being to step in to handle trickier situations. I believe this hybrid approach reflects the reality of much of AI “in the wild.” And I’m not passing judgment. The hybrid form isn’t inherently good or bad.
But I think it’s something we should all be aware of when people talk about “just using AI” to implement some solution. We’re nowhere near a reality where AI is fully autonomous. The technology still requires plenty of human intervention to be successful. I bring this up so that when management’s eyes gleam with dollar signs and dreams of big cost savings, you’ll know better. If management doesn’t account for the cost of the cross-discipline team needed to implement an effective AI solution, they may be very surprised when the bill comes due.
Our virtual agent stood up well to the ultimate test this spring: COVID-19. When customer service agents couldn’t go to work in the call centers, we made sure that our virtual agent not only informed customers about the delays in live chats, but also handled many customer questions directly. We even added actions, such as bill payment, that the virtual agent could take care of for customers. And overall, the company did see cost savings, even after paying a team of humans to implement, monitor, and grow the AI solution.