Analysis: AI that ‘neutralises’ call centre accents – boosting customer service or empowering fraudsters?
2025-04-28 | Jo Whalley | Director of Fraud and FinCrime
AI voice-masking boosts service by removing bias, but opens doors for fraudsters to sound more convincing than ever
Teleperformance SE, the world’s largest call centre operator – serving major global brands from Samsung to TikTok – is now using AI to hide the accents of its Indian employees. Thanks to a $13 million investment in AI start-up Sanas, the company is modifying the voices of its agents in real time, effectively ‘neutralising’ their accents. This means that customers receiving calls from Teleperformance’s centres – or dialling in themselves – no longer have any audible indication of where the agent is physically located.
The accent-neutralising technology currently works on Indian and Filipino accents, with others, such as those found in Latin America, in development. Background and ambient noise are also removed from the line.
According to Teleperformance, the technology helps to reduce “accent-based discrimination”, making interactions smoother and improving customer confidence. However, the implications go far beyond customer service. Could this technology also be exploited by fraudsters, enabling them to disguise their identity, bypass voice authentication systems, and deceive unsuspecting victims with greater ease?
A gift to fraudsters?
According to Jo Whalley, Director at bigspark, this technology presents fraudsters with opportunities. “AI-powered voice modification is a double-edged sword,” she says. “On the one hand, it can create a seamless experience and prevent biases. On the other, it offers cybercriminals a powerful tool to impersonate legitimate individuals.”
This concern is far from theoretical. Voice synthesis and deepfake technology are already being used in scams, and AI-driven accent neutralisation could be the next step in making fraudulent calls more convincing. Fraudsters could use similar technology to mimic customer service agents, tricking individuals into divulging sensitive financial information. Moreover, criminals could bypass traditional voice authentication systems, which rely on unique vocal patterns to verify identity.
Jo warns that while AI has significant fraud-prevention potential, it must be deployed carefully. “Voice authentication has been seen as a secure alternative to passwords and PINs, but AI-modified voices undermine that trust,” she says. “If a system can change an agent’s voice, a fraudster can use the same technology to sound like a bank employee – or even a customer.”
Ethics and regulation
Beyond direct fraud concerns, AI-driven voice technology raises ethical and regulatory issues. Transparency is a key challenge: will customers be informed that the voice they hear has been altered? How will regulators ensure AI voice modification isn’t used maliciously? Governments and financial institutions are only beginning to address the implications of deepfake technology in financial crime. “There’s a fine line between innovation and exploitation,” Jo adds. “We need clearer regulations on AI-assisted voice modification to prevent it from becoming a tool for fraud.”
There’s also the issue of accountability. If a fraudulent transaction occurs due to AI-assisted voice deception, who bears responsibility? Banks and financial institutions rely heavily on voice-based verification for customer support, and AI voice manipulation could erode trust in these systems. The rise of deepfake scams highlights the urgent need for stronger verification methods, such as multi-factor authentication that incorporates biometric and behavioural analytics.
Security vs convenience
Teleperformance insists that its AI technology is designed solely to enhance customer experience, but history has shown that any innovation in AI quickly finds its way into the hands of bad actors. Criminal networks constantly adapt, and accent-neutralisation AI could become another tool in their arsenal.
Despite the risks, AI-driven voice technology has benefits in fraud prevention. If properly integrated with biometric security measures and real-time fraud detection, it could help detect social engineering scams by analysing speech patterns for signs of deception. AI can also monitor conversations at scale, identifying suspicious interactions faster than human agents.
“Oversight is key,” says Jo. “AI should complement human fraud detection rather than replace it. Teleperformance and other companies deploying this technology must build safeguards to prevent misuse, ensuring that it strengthens security rather than weakening it.”
Handling with care
As AI continues to redefine financial crime, businesses must remain one step ahead. Accent-neutralisation technology could be a customer service breakthrough – but if not carefully managed, it might also be the next great enabler of fraud. The financial sector must take proactive steps to ensure that AI’s potential for deception does not outweigh its benefits.
“AI is evolving faster than regulation,” Jo concludes. “We need a collaborative effort between tech companies, regulators, and financial institutions to stay ahead of fraudsters. Otherwise, we risk creating a system where trust in voice authentication – and customer service – erodes entirely.”
If you’d like to discuss further, get in touch here.
© 2024 bigspark.ai. All Rights Reserved.