Comment & Analysis

Could artificial intelligence create a fraud-free future?

By Nigel Cannings, CTO at Intelligent Voice

Fraud is known to be one of the biggest cost drivers in the insurance industry. Despite various measures being put in place to tackle the problem – medical assessments, CCTV, dedicated fraud prevention analysts – fraud still adds around £50 to the annual insurance bill of every UK policyholder. Artificial Intelligence (AI) and automatic speech recognition (ASR) have the potential to instigate change.

The problem of insurance fraud

In 2018, the average cost of insurance scams came in at £12,000 per claim, and the typical annual cost to the UK is valued at around £1.3 billion. During the pandemic, attempted fraud is estimated to have risen sharply – if the number of fraudulent insurance claims processed during the last global economic downturn is anything to go by, the upsurge could be as much as 21%. And most of the problem comes down to call centres.

Think of your average call centre. Dozens of operatives taking hundreds of calls daily. Their role is to get through as many calls as possible. Quickly. Efficiently. And on to the next. With a plausible demeanour, it doesn't take much to phish your way past established defences – especially for fraudsters not already known to the insurer. Call centres are the soft touch of the industry, open to social engineering in a way that websites simply aren't.
That’s why the insurance industry suffers £4.6 billion to £10.4 billion of hard losses to undetected fraud every year.

UK Finance's IVR report stated that a fraudster will make 26 calls to the contact centre during the execution of a given fraud. Every contact is an opportunity to identify a potentially fraudulent attempt. But because claimants move through multiple call handlers over the course of a claim, no individual call centre worker can hope to pick up inconsistencies or behavioural patterns. AI and ASR can.
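To make the point concrete, here is a minimal sketch of the kind of cross-call check a human handler cannot perform but an automated system can: grouping every call on a claim, regardless of which handler took it, and flagging claims whose stated facts drift between calls. The `CallRecord` structure, the `stated_loss` field and the 10% tolerance are all hypothetical, purely for illustration – a production system would compare many more signals than one number.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class CallRecord:
    claim_id: str      # the claim this call relates to
    handler: str       # which call centre operative took the call
    stated_loss: float # the loss figure the claimant quoted on this call

def flag_inconsistent_claims(calls, tolerance=0.10):
    """Group calls by claim and flag any claim whose stated loss
    drifts by more than `tolerance` across calls, even when each
    individual call went to a different handler."""
    by_claim = defaultdict(list)
    for call in calls:
        by_claim[call.claim_id].append(call)

    flagged = set()
    for claim_id, records in by_claim.items():
        amounts = [r.stated_loss for r in records]
        lo, hi = min(amounts), max(amounts)
        if lo > 0 and (hi - lo) / lo > tolerance:
            flagged.add(claim_id)
    return flagged
```

Each handler sees only their own call, where the figure sounds perfectly plausible; only the aggregated view reveals the inconsistency.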

The power of intelligent voice recognition software

AI and speech recognition are increasingly being employed across sectors to gain actionable insights from voice content. Together, they deliver real-time analysis and can be used for a variety of purposes, from customer satisfaction tracking to employee efficiency.

In the case of insurance fraud, cloud-based contact centres are the ideal place to centralise the detection of would-be fraudsters in the early stages of their claims. The analysis can provide insights into the speaker’s credibility, emotional state or behavioural changes that are significant in context.

This goes well beyond “sentiment”, although sentiment can be detected with a high degree of accuracy down to utterance level. Negation, latency, and emotion are common features of a fraudster’s interaction.

Conversational analytics can flag these features, enabling contact centre employees to pass would-be charlatans further up the food chain for focused handling.
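As a rough illustration of how the three indicators above might be combined, here is a hypothetical sketch. Real systems derive these signals from acoustic and language models; for simplicity, negation is a keyword check, latency is a response-gap threshold, and the emotion score is assumed to come pre-computed from an upstream model. The word list, thresholds and weighting are all placeholder assumptions.

```python
# Hypothetical indicator words; a real system would use a trained model.
NEGATION_WORDS = {"no", "never", "didn't", "wasn't", "don't"}

def risk_score(utterance: str, response_gap_s: float, emotion: float) -> float:
    """Combine negation, latency and emotion into a simple 0-3 score."""
    score = 0.0
    words = {w.strip(".,!?").lower() for w in utterance.split()}
    if words & NEGATION_WORDS:
        score += 1.0                       # negation present in the utterance
    if response_gap_s > 2.0:
        score += 1.0                       # unusually long pause before answering
    score += min(max(emotion, 0.0), 1.0)   # upstream emotion score, clamped to [0, 1]
    return score

def flag_for_review(transcript, threshold=2.0):
    """Return utterances worth an investigator's attention - an
    indicator for a human, never a verdict in itself."""
    return [u for u in transcript
            if risk_score(u["text"], u["gap"], u["emotion"]) >= threshold]
```

Note that the output is a list of utterances to escalate, not a fraud determination – exactly the indicator-not-decision-maker role described later in this piece.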

What’s more, the potential is there to use these features to protect the theoretically vulnerable in their own homes.

Preventing fraud at its source

With an increasingly elderly population, the opportunity for scammers is growing. There is a mounting need for vulnerable people to be protected against coercion into decisions they don’t fully understand, be that the unintended revealing of passwords and sensitive information, the open disclosure of bank details under the misapprehension that they are dealing with someone in an official capacity, or pure and simple cold calling at its very worst. With AI and ASR, telcos have the opportunity to provide that protection. It’s the first genuine opportunity to end the existing, long-standing telephone fraud conundrum.

Is this the silver bullet?

No. OK, so I could be accused of positioning ASR and AI as the holy grail of fraud prevention in the insurance industry. And you know, I do think they have the potential to instigate real change. But the thing to remember is that these systems are intended to be used as indicators, not decision-makers.

Every system needs calibrating for different use cases. It can flag unusual behaviour, highlight potential risks, and provide valuable real-time insights. But at this stage, human decision-making must also play its part.
When trained investigators use conversational analytics, they don’t rely on the tools to assess guilt. They use the evidence the tools provide as a guide to probe, ask questions, and investigate further.

But widespread adoption of these tools is poor, and their implementation not always well understood: it’s not just a case of install and go. However, the potential is there. It just needs to be grasped.

Nigel Cannings, CTO at Intelligent Voice, has over 25 years’ experience in both law and technology.

Founder of Intelligent Voice Ltd and a pioneer in all things voice, Nigel is a regular speaker at industry events, including those run by NVIDIA, IBM, HPE and AI Financial Summits.
