I’ve been thinking a lot about our relationship with AI lately. As a customer service expert, I’ve observed an interesting double standard: when AI makes a mistake, we’re ready to dismiss the entire technology, but when humans err, we shrug it off as “just being human.”
This realization hit me while analyzing the results of my annual customer service research. When I asked over 1,000 US consumers whether they had ever received incorrect information from an AI self-service technology, 51% said yes. That’s a significant number, but it led me to a more important question.
Has a live customer support agent ever given you bad information?
When I pose this question to AI skeptics, I typically get a surprised look, followed by a smile, and finally an acknowledgment that humans make mistakes too. It’s a perspective many haven’t considered.
The Competence Comparison
I’ve started using specific terms to describe these parallel situations. When AI gives bad information, I call it “artificial incompetence.” When humans do the same, it’s “human incompetence.” Both are equally frustrating for customers.
Neither the AI nor the human is typically trying to mislead you. They’re both attempting to provide service, but mistakes happen regardless of whether silicon or brain cells are processing the information.
Let me share a personal experience that highlights this reality. I once called customer support with what seemed like a straightforward question. The answer I received didn’t make sense, so rather than argue, I thanked the agent, hung up, and immediately called back.
A different agent answered, and I asked the exact same question. This time, I received a completely different answer—one that made sense. Two humans from the same company provided contradictory information, yet we worry about AI being inconsistent!
Adjusting Our Expectations
The key difference lies in our expectations. We don’t expect humans to be perfect. When they make mistakes, we might be disappointed or even angry, but we usually forgive them because, well, they’re human.
With AI, our expectations are different. We demand reliability and consistency. When AI makes a mistake, many people assume the entire system is fundamentally flawed and untrustworthy.
This double standard doesn’t serve us well as technology continues to advance. Perhaps we should apply the same reasonable expectations and healthy skepticism to both human and artificial intelligence.
Think about weather forecasters as an example. They use sophisticated technology and have years of training, yet they still get predictions wrong with surprising frequency. Even so, we continue to check the forecast daily, understanding its limitations while still finding value in the information it provides.
Finding a Balanced Approach
Moving forward, I believe we should consider these points when evaluating both AI and human customer service:
- Both AI and humans will make mistakes—it’s inevitable
- The frequency of errors will decrease as AI technology improves
- Human training and knowledge management can reduce human errors
- A combination of AI and human support often provides the best experience
The goal isn’t to excuse poor service from either source but to recognize that perfection is an unrealistic standard. What matters most is how quickly and effectively errors are corrected when they occur.
As AI becomes more integrated into customer service operations, we need to develop a more nuanced understanding of its capabilities and limitations—just as we have with human service providers.
The next time you receive incorrect information from an AI system, before dismissing the technology entirely, ask yourself: Would I be this critical if a human had made the same mistake? The answer might surprise you.